---
library_name: keras
language:
- en
pipeline_tag: text-classification
tags:
- toxic
- comment
- toxic comment
---
## Model description

This model performs binary text classification, labeling comments as toxic or non-toxic.
## Intended uses & limitations

To reuse the model, reload it from the Hub and run predictions on your prepared inputs (`x_testing` holds the preprocessed comment texts and `test_df` the corresponding dataframe):

```
from huggingface_hub import from_pretrained_keras

# Reload the trained model from the Hugging Face Hub
reloaded_model = from_pretrained_keras('Johnesss/Toxic-Comment-Classification')

# Predict toxicity scores (sigmoid outputs in [0, 1])
y_testing = reloaded_model.predict(x_testing, verbose=1, batch_size=32)
# Threshold the scores at 0.5 to assign labels
test_df['Toxic'] = ['Not Toxic' if x < 0.5 else 'Toxic' for x in y_testing.ravel()]
test_df[['comment_text', 'Toxic']].head(20)
```
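Alternatively, Keras 3 can load the model directly from the Hub (this is the loading snippet shown on the model page):

```
# Available backend options are: "jax", "torch", "tensorflow".
import os
os.environ["KERAS_BACKEND"] = "jax"

import keras

model = keras.saving.load_model("hf://Johnesss/Toxic-Comment-Classification")
```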
## Training and evaluation data

Full details are in the accompanying .ipynb file.
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

| Hyperparameter | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
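As a rough reference, these settings correspond to compiling the model with a Keras `Adam` optimizer along the lines of the sketch below; the `loss` and `metrics` are assumptions not stated by the card, and the learning rate is simply the float32 representation of 1e-3:

```
from keras.optimizers import Adam

# Minimal sketch reconstructing the listed optimizer settings.
# The loss and metrics are assumptions, not taken from the card.
model.compile(
    optimizer=Adam(
        learning_rate=1e-3,  # stored as 0.0010000000474974513 (float32 of 0.001)
        beta_1=0.9,
        beta_2=0.999,
        epsilon=1e-07,
        amsgrad=False,
        use_ema=False,
        ema_momentum=0.99,
    ),
    loss='binary_crossentropy',  # assumption: single sigmoid output per comment
    metrics=['accuracy'],        # assumption
    jit_compile=False,
)
```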
## Model Plot

<details>
<summary>View Model Plot</summary>

![Model Image](./model.png)

</details>