---
license: apache-2.0
language: code
datasets:
- codeparrot/codecomplex
---
This is a fine-tuned version of [UniXcoder](https://huggingface.co/microsoft/unixcoder-base-nine), a unified cross-modal pre-trained model for programming languages, on [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex), a dataset for complexity prediction of Java code. The fine-tuning code is available in this [repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/codeparrot/examples).
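You can query the model through the Transformers `pipeline` API or load the tokenizer and model directly. The sketch below passes a toy Java snippet (a hypothetical example input, not from the model card) and prints the predicted complexity class with its score.

```python
from transformers import pipeline

# Use a pipeline as a high-level helper
pipe = pipeline(
    "text-classification",
    model="codeparrot/unixcoder-java-complexity-prediction",
)

# A toy Java snippet (hypothetical example input)
java_code = "for (int i = 0; i < n; i++) { sum += i; }"

# Returns a list like [{'label': ..., 'score': ...}] with the predicted complexity class
print(pipe(java_code))
```

Alternatively, load the model and tokenizer directly for more control over tokenization and inference:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("codeparrot/unixcoder-java-complexity-prediction")
model = AutoModelForSequenceClassification.from_pretrained("codeparrot/unixcoder-java-complexity-prediction")
```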