🔥 Advanced Start

To get started, please first set up the environment:

# Install to use bigcodebench.evaluate
pip install bigcodebench --upgrade
# If you want to run the evaluation locally, you also need to install the evaluation requirements
pip install -I -r https://raw.githubusercontent.com/bigcode-project/bigcodebench/main/Requirements/requirements-eval.txt

# Install to use bigcodebench.generate
# We strongly recommend installing the [generate] dependencies in a separate environment
pip install bigcodebench[generate] --upgrade
⏬ Install nightly version :: click to expand ::
# Install to use bigcodebench.evaluate
pip install "git+https://github.com/bigcode-project/bigcodebench.git" --upgrade

# Install to use bigcodebench.generate
pip install "git+https://github.com/bigcode-project/bigcodebench.git#egg=bigcodebench[generate]" --upgrade
⏬ Using BigCodeBench as a local repo? :: click to expand ::
git clone https://github.com/bigcode-project/bigcodebench.git
cd bigcodebench
export PYTHONPATH=$PYTHONPATH:$(pwd)
# Install to use bigcodebench.evaluate
pip install -e .
# Install to use bigcodebench.generate
pip install -e .[generate]
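
If you install from a local clone, a quick optional sanity check is to confirm that the package is visible in your current environment:

```bash
# Optional sanity check: the package should show up in the active environment
pip show bigcodebench
```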

🚀 Remote Evaluation

Below are all the arguments for bigcodebench.evaluate when running remote evaluation; an example invocation follows the list:

Required Arguments:

  • --model: The model to evaluate
  • --split: The split of the dataset to evaluate
  • --subset: The subset of the dataset to evaluate

Optional Arguments:

  • --root: The root directory to store the results, defaults to bcb_results
  • --bs: The batch size, defaults to 1
  • --n_samples: The number of samples, defaults to 1
  • --temperature: The temperature, defaults to 0.0
  • --max_new_tokens: The maximum number of new tokens to generate, defaults to 1280
  • --greedy: Whether to use greedy decoding, defaults to False
  • --strip_newlines: Whether to strip newlines, defaults to False; set to True for some model series such as StarCoder2
  • --direct_completion: Whether to use direct completion, defaults to False
  • --resume: Whether to resume the evaluation, defaults to True; set to False to re-run the evaluation
  • --id_range: The range of tasks to evaluate, defaults to None, e.g. --id_range 10,20 will evaluate tasks 10 to 20
  • --backend: The backend to use, defaults to vllm
  • --base_url: The base URL of the backend for OpenAI-compatible APIs, defaults to None
  • --tp: The tensor parallel size for the vLLM backend, defaults to 1
  • --trust_remote_code: Whether to trust the remote code, defaults to False
  • --tokenizer_name: The name of a customized tokenizer, defaults to None
  • --tokenizer_legacy: Whether to use the legacy tokenizer, defaults to False
  • --samples: The path to the generated samples file, defaults to None
  • --local_execute: Whether to execute the samples locally, defaults to False
  • --remote_execute_api: The API endpoint for remote execution, defaults to https://bigcode-bigcodebench-evaluator.hf.space/; you can also use your own Gradio API endpoint by cloning the bigcodebench-evaluator repo and checking "Use via API" at the bottom of the HF space page
  • --pass_k: The k values in Pass@k, defaults to [1, 5, 10], e.g. --pass_k 1,5,10 will evaluate Pass@1, Pass@5 and Pass@10
  • --save_pass_rate: Whether to save the pass rate to a file, defaults to True
  • --parallel: The number of parallel processes, defaults to None, e.g. --parallel 10 will evaluate 10 samples in parallel
  • --min_time_limit: The minimum time limit (in seconds) for execution, defaults to 1, e.g. --min_time_limit 10 gives each sample at least 10 seconds
  • --max_as_limit: The maximum address space limit (in MB) for execution, defaults to 30*1024 (30 GB), e.g. --max_as_limit 20*1024 caps it at 20 GB
  • --max_data_limit: The maximum data segment limit (in MB) for execution, defaults to 30*1024 (30 GB), e.g. --max_data_limit 20*1024 caps it at 20 GB
  • --max_stack_limit: The maximum stack limit (in MB) for execution, defaults to 10, e.g. --max_stack_limit 20 caps it at 20 MB
  • --check_gt_only: Whether to only check the ground truths, defaults to False
  • --no_gt: Whether to skip checking the ground truths, defaults to False
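
For example, a greedy remote evaluation of an instruction-tuned model on the Hard subset might look like the following (the model name here is only an illustrative placeholder; every flag is documented above):

```bash
bigcodebench.evaluate \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --split instruct \
    --subset hard \
    --backend vllm \
    --greedy
```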

🚀 Full Script

We provide an example script to run the full pipeline for the remote evaluation:

bash run.sh

🚀 Local Generation

# When using --greedy, there is no need to set temperature and n_samples
bigcodebench.generate \
    --model [model_name] \
    --split [complete|instruct] \
    --subset [full|hard] \
    [--greedy] \
    --bs [bs] \
    --temperature [temp] \
    --n_samples [n_samples] \
    --resume \
    --backend [vllm|openai|mistral|anthropic|google|hf] \
    --tp [TENSOR_PARALLEL_SIZE] \
    [--trust_remote_code] \
    [--base_url [base_url]] \
    [--tokenizer_name [tokenizer_name]]
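
For example, a greedy vLLM run on the Hard subset of the Instruct split could look like this (again, the model name is an illustrative placeholder):

```bash
bigcodebench.generate \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --split instruct \
    --subset hard \
    --greedy \
    --backend vllm \
    --tp 1
```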

The generated code samples will be stored in a file named [model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated.jsonl. Alternatively, you can use the following command to utilize our pre-built docker images for generating code samples:

# If you are using GPUs
docker run --gpus '"device=$CUDA_VISIBLE_DEVICES"' -v $(pwd):/app -t bigcodebench/bigcodebench-generate:latest \
    --model [model_name] \
    --split [complete|instruct] \
    --subset [full|hard] \
    [--greedy] \
    --bs [bs] \
    --temperature [temp] \
    --n_samples [n_samples] \
    --resume \
    --backend [vllm|openai|mistral|anthropic|google|hf] \
    --tp [TENSOR_PARALLEL_SIZE]

# ...Or if you are using CPUs
docker run -v $(pwd):/app -t bigcodebench/bigcodebench-generate:latest \
    --model [model_name] \
    --split [complete|instruct] \
    --subset [full|hard] \
    [--greedy] \
    --bs [bs] \
    --temperature [temp] \
    --n_samples [n_samples] \
    --resume \
    --backend [vllm|hf|openai|mistral|anthropic|google]
# If you wish to use gated or private HuggingFace models and datasets
docker run -e HUGGING_FACE_HUB_TOKEN=$token -v $(pwd):/app -t bigcodebench/bigcodebench-generate:latest # omit other arguments

# Similarly, to use other backends that require authentication
docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -v $(pwd):/app -t bigcodebench/bigcodebench-generate:latest # omit other arguments
docker run -e ANTHROPIC_KEY=$ANTHROPIC_KEY -v $(pwd):/app -t bigcodebench/bigcodebench-generate:latest # omit other arguments
docker run -e MISTRAL_KEY=$MISTRAL_KEY -v $(pwd):/app -t bigcodebench/bigcodebench-generate:latest # omit other arguments
docker run -e GOOGLE_API_KEY=$GOOGLE_API_KEY -v $(pwd):/app -t bigcodebench/bigcodebench-generate:latest # omit other arguments

You can then run the built container as shown above.
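
Putting the pieces together, a GPU generation run for a gated model might look like the following sketch ($token holds your Hugging Face access token as above, and the model name is an illustrative placeholder; all other arguments are as documented above):

```bash
docker run --gpus '"device=$CUDA_VISIBLE_DEVICES"' -e HUGGING_FACE_HUB_TOKEN=$token \
    -v $(pwd):/app -t bigcodebench/bigcodebench-generate:latest \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --split instruct \
    --subset hard \
    --greedy \
    --backend vllm \
    --tp 1
```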

🤔 Structure of `problem`? :: click to expand ::
  • task_id is the identifier string for the task
  • entry_point is the name of the function
  • complete_prompt is the prompt for BigCodeBench-Complete
  • instruct_prompt is the prompt for BigCodeBench-Instruct
  • canonical_solution is the ground-truth implementation
  • test is the unittest.TestCase class
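
To get a feel for these fields, you can peek at one problem from the command line. This is only a quick sketch: the import path bigcodebench.data is assumed from the package layout, and get_bigcodebench() is the loader referenced in the schema note below.

```bash
# Hedged sketch: print a few fields of the first problem
python -c "
from bigcodebench.data import get_bigcodebench  # assumed import path
task_id, problem = next(iter(get_bigcodebench().items()))
print(task_id, problem['entry_point'])
print(problem['complete_prompt'][:300])
"
```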

Note

Expected Schema of [model_name]--bigcodebench-[task]--[backend]-[temp]-[n_samples].jsonl

  1. task_id: Task ID; the task IDs are the keys of get_bigcodebench()
  2. solution (optional): Self-contained solution (usually including the prompt)
  3. raw_solution (optional): The raw solution generated by the LLM
    • Example: {"task_id": "BigCodeBench/?", "solution": "def f():\n return 1", "raw_solution": "def f():\n return 1\nprint(f())"}
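
Before evaluating, you can quickly sanity-check that your file is valid JSONL with the expected keys (a simple hedged check; bigcodebench.syncheck below is the more thorough tool):

```bash
# Pretty-print the first record to confirm it parses as JSON
head -n 1 samples.jsonl | python -m json.tool
```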
🔎 Checking the compatibility of post-processed code :: click to expand ::

To double-check the post-processing results, you can use bigcodebench.syncheck to check the code validity before and after sanitization, which will print erroneous code snippets and why they are wrong:

# 💡 If you are storing codes in jsonl:
bigcodebench.syncheck --samples samples.jsonl

# 💡 If you are storing codes in directories:
bigcodebench.syncheck --samples /path/to/vicuna-[??]b_temp_[??]

# 💡 Or change the entrypoint to bigcodebench.syncheck in any pre-built docker image, like 
docker run -it --entrypoint bigcodebench.syncheck -v $(pwd):/app bigcodebench/bigcodebench-evaluate:latest --samples samples.jsonl

🚀 Local Evaluation

We strongly recommend using a sandbox such as Docker:

# Mount the current directory to the container
# If you want to change the RAM address space limit (in MB, 30 GB by default): `--max-as-limit XXX`
# If you want to change the RAM data segment limit (in MB, 30 GB by default): `--max-data-limit`
# If you want to change the RAM stack limit (in MB, 10 MB by default): `--max-stack-limit`
# If you want to increase the execution time limit (in seconds, 240 seconds by default): `--min-time-limit`
docker run -v $(pwd):/app bigcodebench/bigcodebench-evaluate:latest --local_execute --split [complete|instruct] --subset [full|hard] --samples samples-sanitized-calibrated.jsonl

# If you only want to check the ground truths
docker run -v $(pwd):/app bigcodebench/bigcodebench-evaluate:latest --local_execute --split [complete|instruct] --subset [full|hard] --samples samples-sanitized-calibrated.jsonl --check-gt-only
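
For instance, to tighten the limits described in the comments above, you can pass the corresponding flags (the values here are purely illustrative):

```bash
# Illustrative: 20 GB address space and data segment, 20 MB stack
docker run -v $(pwd):/app bigcodebench/bigcodebench-evaluate:latest \
    --local_execute --split complete --subset hard \
    --samples samples-sanitized-calibrated.jsonl \
    --max-as-limit 20480 --max-data-limit 20480 --max-stack-limit 20
```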

...Or if you want to try it locally regardless of the risks ⚠️:

First, install the dependencies for BigCodeBench:

pip install -r https://raw.githubusercontent.com/bigcode-project/bigcodebench/main/Requirements/requirements-eval.txt

Then, run the evaluation:

# ...Or locally ⚠️
bigcodebench.evaluate --local_execute --split [complete|instruct] --subset [full|hard] --samples samples-sanitized-calibrated.jsonl
# ...If you really don't want to check the ground truths
bigcodebench.evaluate --local_execute --split [complete|instruct] --subset [full|hard] --samples samples-sanitized-calibrated.jsonl --no-gt
# If you want to save the pass rate to a file
bigcodebench.evaluate --local_execute --split [complete|instruct] --subset [full|hard] --samples samples-sanitized-calibrated.jsonl --save_pass_rate

# We strongly recommend running the following commands to clean up the environment after evaluation:
pids=$(ps -u $(id -u) -o pid,comm | grep 'bigcodebench' | awk '{print $1}'); if [ -n "$pids" ]; then echo $pids | xargs -r kill; fi;
rm -rf /tmp/*

Tip

If you want to customize the k in Pass@k, please pass --pass_k with a comma-separated string. For example, if you want to use Pass@1 and Pass@100, you can pass --pass_k 1,100.

Tip

Do you use a very slow machine?

LLM solutions are regarded as failed on timeout (and OOM, etc.). Specifically, we set a dynamic timeout based on the ground-truth solution's runtime.

Additionally, you are NOT encouraged to overstress your test bed while running the evaluation. For example, using --parallel 64 on a 4-core machine, or running other heavy workloads during evaluation, are bad ideas...

⌨️ More command-line flags :: click to expand ::
  • --parallel: by default half of the cores

The output should look like the following (below is a GPT-4 greedy-decoding example):

Asserting the groundtruth...
Expected outputs computed in 1200.0 seconds
Reading samples...
1140it [00:00, 1901.64it/s]
Evaluating samples...
100%|██████████████████████████████████████████| 1140/1140 [19:53<00:00, 6.75it/s]
BigCodeBench-Instruct-calibrated
Groundtruth pass rate: 1.000
pass@1: 0.568
  • A cache file named like samples_eval_results.json will be created. Remove it to re-run the evaluation.
🤔 How long would it take? :: click to expand ::

If you do greedy decoding, where there is only one sample per task, the evaluation should take just a few minutes on an Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz (2 sockets, 18 cores per socket). However, if you have multiple samples per task, the evaluation will take longer; increasing --parallel (see above) is the main way to speed it up.

🔍 Failure Inspection

You can inspect the failed samples by using the following command:

# Inspect the failed samples and save the results to `inspect/`
bigcodebench.inspect --eval_results sample-sanitized-calibrated_eval_results.json --split complete --subset hard

# Re-run the inspection in place
bigcodebench.inspect --eval_results sample-sanitized-calibrated_eval_results.json --split complete --subset hard --in_place

📊 Result Analysis

We provide a script to replicate analyses such as Elo Rating and Task Solve Rate, which help you further understand model performance.

To run the analysis, you need to put all the `samples_eval_results.json` files in a `results` folder, which is in the same directory as the script.
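
For example (the source path below is an illustrative placeholder):

```bash
# Collect the evaluation result files next to the analysis script
mkdir -p analysis/results
cp /path/to/your/*_eval_results.json analysis/results/
```

Then run the analysis: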

```bash
cd analysis
python get_results.py
```

🐞 Resolved Issues

  • Due to the Hugging Face tokenizer update, some tokenizers may be broken and will degrade evaluation performance. Therefore, we initialize the tokenizer with legacy=False. If you notice unexpected behavior, please try --tokenizer_legacy during generation.

  • Due to the flakiness in the evaluation, the execution results may vary slightly (~0.2% for Full set, and ~0.6% for Hard set) between runs. We are working on improving the evaluation stability.

  • You may get errors like ImportError: /usr/local/lib/python3.10/site-packages/matplotlib/_c_internal_utils.cpython-310-x86_64-linux-gnu.so: failed to map segment from shared object when running the evaluation. This is due to the memory limit of the Docker container; you can increase the container's memory limit to solve this issue. If the issue persists, please use the real-time code-execution session on the leaderboard to evaluate the code.

  • We are aware that some users need to use a proxy to access the internet. Please use the Remote Evaluation to get accurate results.

📜 Citation

@article{zhuo2024bigcodebench,
  title={BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions},
  author={Zhuo, Terry Yue and Vu, Minh Chien and Chim, Jenny and Hu, Han and Yu, Wenhao and Widyasari, Ratnadira and Yusuf, Imam Nur Bani and Zhan, Haolan and He, Junda and Paul, Indraneil and others},
  journal={arXiv preprint arXiv:2406.15877},
  year={2024}
}

🙏 Acknowledgement