Instructions for using keras/qwen2.5_coder_instruct_7b with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- KerasHub
How to use keras/qwen2.5_coder_instruct_7b with KerasHub:
```python
import keras_hub

# Load CausalLM model (optional: use half precision for inference)
causal_lm = keras_hub.models.CausalLM.from_preset(
    "hf://keras/qwen2.5_coder_instruct_7b", dtype="bfloat16"
)
causal_lm.compile(sampler="greedy")  # (optional) specify a sampler

# Generate text
causal_lm.generate("Keras: deep learning for", max_length=64)
```

```python
import keras_hub

# Create a Backbone model unspecialized for any task
backbone = keras_hub.models.Backbone.from_preset(
    "hf://keras/qwen2.5_coder_instruct_7b"
)
```

- Keras
How to use keras/qwen2.5_coder_instruct_7b with Keras:
```python
# Available backend options are: "jax", "torch", "tensorflow".
# The backend must be set before keras is imported.
import os
os.environ["KERAS_BACKEND"] = "jax"

import keras

model = keras.saving.load_model("hf://keras/qwen2.5_coder_instruct_7b")
```

- Notebooks
- Google Colab
- Kaggle
```json
{
    "module": "keras_hub.src.models.qwen.qwen_backbone",
    "class_name": "QwenBackbone",
    "config": {
        "name": "qwen_backbone",
        "trainable": true,
        "dtype": {
            "module": "keras",
            "class_name": "DTypePolicy",
            "config": {
                "name": "float32"
            },
            "registered_name": null
        },
        "vocabulary_size": 152064,
        "num_layers": 28,
        "num_query_heads": 28,
        "hidden_dim": 3584,
        "intermediate_dim": 18944,
        "rope_max_wavelength": 1000000.0,
        "rope_scaling_factor": 1.0,
        "num_key_value_heads": 4,
        "layer_norm_epsilon": 1e-06,
        "dropout": 0,
        "tie_word_embeddings": false,
        "use_sliding_window_attention": false,
        "sliding_window_size": 131072
    },
    "registered_name": "keras_hub>QwenBackbone"
}
```
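The config above implies a grouped-query attention (GQA) layout: 28 query heads share 4 key/value heads. As an illustrative sketch (not part of the checkpoint), the snippet below derives the per-head dimension and an estimated per-token KV-cache footprint from those values, assuming `head_dim = hidden_dim / num_query_heads` and bfloat16 (2-byte) cache entries:

```python
# Values copied from the backbone config above.
config = {
    "num_layers": 28,
    "num_query_heads": 28,
    "num_key_value_heads": 4,
    "hidden_dim": 3584,
}

# Assumed convention: each head has hidden_dim / num_query_heads channels.
head_dim = config["hidden_dim"] // config["num_query_heads"]

# In GQA, this many query heads attend through each shared KV head.
queries_per_kv_group = config["num_query_heads"] // config["num_key_value_heads"]

# Estimated KV-cache bytes per token across all layers, in bfloat16:
# 2 tensors (K and V) * layers * KV heads * head_dim * 2 bytes per element.
kv_bytes_per_token = (
    2
    * config["num_layers"]
    * config["num_key_value_heads"]
    * head_dim
    * 2
)

print(head_dim)             # 128
print(queries_per_kv_group) # 7
print(kv_bytes_per_token)   # 57344 (~56 KiB per token)
```

Because only 4 of the 28 heads keep K/V state, the cache is 7x smaller than it would be with full multi-head attention at the same hidden size.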