---
license: mit
base_model: microsoft/phi-2
datasets:
- TokenBender/code_instructions_122k_alpaca_style
language:
- en
tags:
- code
- nlp
---
## Model Summary

CodePhi2 is a fine-tune of Microsoft's Phi-2 LLM, a model with **2.7 billion** parameters. It was finetuned on TokenBender's [code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style) dataset. The goal was to improve Phi-2's coding ability while teaching it the Alpaca instruction format.

## Instruction Format (Alpaca)

CodePhi2 was finetuned on the Alpaca instruction format, and as such should be prompted as follows:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
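
For illustration, the template above can be filled in programmatically. The `build_prompt` helper below is a hypothetical convenience function, not part of the model or dataset:

```python
# Hypothetical helper that wraps a task in the Alpaca prompt format above.
ALPACA_PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Return the full Alpaca-format prompt for a single instruction."""
    return ALPACA_PROMPT.format(instruction=instruction)
```

The model's completion is whatever text it generates after the trailing `### Response:` marker.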
#### Notes

If you are using `transformers>=4.36.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
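
Putting the two together, a minimal sketch of loading and prompting the model with the standard `transformers` API might look like the following. The repo id `your-namespace/CodePhi2` is a placeholder, not the model's real Hub path:

```python
# Sketch: load the model with trust_remote_code=True (as noted above)
# and generate a completion for an Alpaca-format prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate(prompt: str,
             model_id: str = "your-namespace/CodePhi2",  # placeholder repo id
             max_new_tokens: int = 256) -> str:
    """Tokenize a ready-made Alpaca-format prompt and decode the completion."""
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```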