Tags: Text Generation · Transformers · PyTorch · English · MAELM · feature-extraction · audio2text · music2text · musicllm · music foundation model · custom_code
Instructions for using UniMus/OpenJMLA with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use UniMus/OpenJMLA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="UniMus/OpenJMLA", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("UniMus/OpenJMLA", trust_remote_code=True, dtype="auto")
```

A hedged end-to-end inference sketch for this audio-to-text model follows the Local Apps section below.

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use UniMus/OpenJMLA with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "UniMus/OpenJMLA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UniMus/OpenJMLA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker:
```shell
docker model run hf.co/UniMus/OpenJMLA
```
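Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch using the official `openai` client; the base URL and placeholder API key follow vLLM's defaults, so adjust them to your deployment:

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
# Assumes `pip install openai` and a server running on localhost:8000 as above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="UniMus/OpenJMLA",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```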
- SGLang
How to use UniMus/OpenJMLA with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "UniMus/OpenJMLA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UniMus/OpenJMLA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images:
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "UniMus/OpenJMLA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UniMus/OpenJMLA",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
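The same server can also be driven from Python with SGLang's frontend language. A minimal sketch, assuming the server started above is listening on port 30000:

```python
# Minimal sketch: drive the local SGLang server with its Python frontend.
# Assumes `pip install sglang` and the server started above on port 30000.
import sglang as sgl

@sgl.function
def continue_text(s, prompt):
    s += prompt
    s += sgl.gen("continuation", max_tokens=512, temperature=0.5)

sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))
state = continue_text.run(prompt="Once upon a time,")
print(state["continuation"])
```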
- Docker Model Runner
How to use UniMus/OpenJMLA with Docker Model Runner:
```shell
docker model run hf.co/UniMus/OpenJMLA
```
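As promised above, here is a hedged end-to-end sketch of music captioning with Transformers. OpenJMLA consumes log-mel spectrograms rather than text (the configuration below expects 80 mel bins by 2992 frames), so the input shape and the forward call here are assumptions to be checked against the repository's custom `modeling_maelm.py`, not a confirmed API:

```python
# A hedged sketch, NOT the confirmed OpenJMLA API: load the custom model and
# feed it a log-mel spectrogram. The input shape and forward signature are
# assumptions based on the config (80 mel bins x 2992 frames).
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("UniMus/OpenJMLA", trust_remote_code=True, dtype="auto")
model.eval()

# Hypothetical input: one batch of log-mel features shaped to the MAEViT
# backbone's configured img_size of [80, 2992].
spectrogram = torch.randn(1, 80, 2992)

with torch.no_grad():
    # The actual captioning entry point is defined by the repository's
    # modeling_maelm.py; consult that file for the real call signature.
    outputs = model(spectrogram)
```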
The custom configuration code shipped with the repository, `configuration_maelm.py`:

```python
# coding=utf-8
# Copyright 2023 the Falcon authors and HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MAELM (OpenJMLA) configuration, adapted from the Falcon configuration."""
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)


class MAELMConfig(PretrainedConfig):
    """
    Configuration class for the MAELM (OpenJMLA) model, which couples a MAE-style
    ViT audio encoder with a causal language-model decoder for music-to-text
    generation. Configuration objects inherit from [`PretrainedConfig`] and can be
    used to control the model outputs. Read the documentation from
    [`PretrainedConfig`] for more information.

    Args:
        backbone (`dict`, *optional*):
            Settings for the MAEViT audio encoder: architecture (`arch`), patch
            size, mask ratio, log-mel input size (`img_size`, mel bins x frames),
            and encoder checkpoint path (`ckpt`).
        neck (`dict`, *optional*):
            Settings for the LMDecoder that projects encoder features into the
            language model: patch and embedding sizes, decoder embedding dimension,
            decoder type, and whether the decoder is frozen.
        tokenizer_name (`str`, *optional*, defaults to `"meta-llama/Llama-2-7b-hf"`):
            Name or path of the tokenizer paired with the decoder.
        resume_from_checkpoint (`str`, *optional*):
            Trainer checkpoint to resume from.
        resume_from_pth (`str`, *optional*):
            `.pth` state-dict file to load model weights from.

    The remaining keyword arguments (`seed`, `do_train`, dataset paths, batch
    sizes, learning rates, `wandb` settings, ...) are training-pipeline defaults
    carried over from the original experiment configuration; they are accepted
    here but not stored on the config object.
    """

    model_type = "MAELM"

    def __init__(
        self,
        seed=42,
        cache_dir=None,
        do_train=True,
        do_eval=False,
        do_test=False,
        dataset_name=None,
        spect_len=2992,
        train_dataset_list=[{
            'train_file': '/mnt/bn/music-nas-dxj1/datasets/MCC_AIGC/mccaigc_train_1w.csv',
            'train_tokenized_data': None,
            'train_data_root': '/mnt/bn/music-nas-dxj1/datasets/MCC_AIGC/logmel',
        }],
        per_device_eval_batch_size=32,
        preprocessing_num_workers=64,
        overwrite_cache=True,
        output_dir='/mnt/bn/music-nas-dxj1/VWork/ckpts_vault/cap_lynx-apm_umg_PT-mccaigc1w_FT',
        save_interval_steps=1000,
        overwrite_output_dir=True,
        gradient_accumulation_steps=1,
        num_train_epochs=50,
        per_device_train_batch_size=12,
        learning_rate=0.00005,
        lm_lr_ratio=0.1,
        tokenizer_name='meta-llama/Llama-2-7b-hf',
        resume_from_checkpoint=None,
        resume_from_pth='epoch_4-step_8639-allstep_60000.pth',
        backbone={
            'name': 'MAEViT', 'arch': 'b', 'patch_size': 16, 'mask_ratio': 0.0,
            'img_size': [80, 2992], 'ckpt': 'epoch_20.pth',
        },
        neck={
            'name': 'LMDecoder', 'patch_size': 16, 'img_size': [80, 2992],
            'in_chans': 3, 'embed_dim': 768, 'decoder_embed_dim': 4544,
            'freeze_decoder': True, 'decoder_type': 'meta-llama/Llama-2-7b-hf',
        },
        wandb={'proj': 'ATRena_cap', 'expname': 'cap_lynx_apmPT_mccaigc1wFT'},
        **kwargs,
    ):
        # Initialize the standard PretrainedConfig attributes (return_dict,
        # output_hidden_states, ...) before setting MAELM-specific ones.
        super().__init__(**kwargs)
        # Encoder/decoder wiring and the tokenizer choice are the attributes
        # the model code actually reads from the config.
        self.backbone = backbone
        self.neck = neck
        self.tokenizer_name = tokenizer_name
        self._name_or_path = None
        self.resume_from_checkpoint = resume_from_checkpoint
        self.resume_from_pth = resume_from_pth
        # Map the Auto* classes to the custom code shipped with the repository,
        # so `trust_remote_code=True` resolves to these implementations.
        self.auto_map = {
            "AutoConfig": "configuration_maelm.MAELMConfig",
            "AutoModel": "modeling_maelm.MAEForCausalLM",
        }
```