Model Card: BERT DAPT Civil Comments

A domain-adapted BERT base (uncased) model, further pre-trained on text from the Civil Comments dataset.

Model Details

Description

This model is based on the BERT base (uncased) architecture and was further pre-trained (domain-adapted) on the text of the Civil Comments dataset, excluding its test split. Only the masked language modeling (MLM) objective was used during domain adaptation.
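
The exact training script is not part of this card, but the snippet below is a minimal sketch of how MLM-only domain adaptation is typically set up with the transformers library; the tokenizer choice and the 15% masking probability are assumptions matching standard BERT pre-training, not confirmed details of this run:

from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

# Start from the original BERT base (uncased) weights.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Standard BERT masking: 15% of tokens are selected for prediction (assumed default).
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)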

Checkpoints

Intermediate checkpoints from the pre-training process are available and can be accessed using specific tags, which correspond to training epochs and steps:

Epoch   Step      Tags
1       17833     epoch-1, step-17833
2       35666     epoch-2, step-35666
3       53499     epoch-3, step-53499
5       89166     epoch-5, step-89166
10      178333    epoch-10, step-178333
15      267495    epoch-15, step-267495
20      356660    epoch-20, step-356660
25      445825    epoch-25, step-445825

To load a model from a specific intermediate checkpoint, use the revision parameter with the corresponding tag:

from transformers import AutoModelForMaskedLM

# Select an intermediate checkpoint by passing its tag as the revision;
# omit revision to load the final weights.
model = AutoModelForMaskedLM.from_pretrained("<model-name>", revision="<checkpoint-tag>")

Sources

  • Paper: [Information pending]

Training Details

For more details on the training procedure, please refer to the base model's documentation: Training procedure.

Training Data

All texts from the Civil Comments dataset, excluding the test split.
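
As an illustration, the corresponding splits can be loaded with the datasets library; the dataset identifier and split names below follow the Hub's civil_comments dataset and are assumptions about how the corpus was assembled:

from datasets import load_dataset, concatenate_datasets

# Keep the train and validation splits; the test split was held out.
dataset = load_dataset("civil_comments")
corpus = concatenate_datasets([dataset["train"], dataset["validation"]])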

Preprocessing

All markup and symbols, including punctuation, were removed from the texts.
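
The cleaning script itself is not included in this card; the snippet below is only a sketch of one way to strip markup and punctuation, assuming an HTML-unescape plus regex approach (the clean_text helper is hypothetical):

import html
import re

def clean_text(text: str) -> str:
    # Decode HTML entities, then drop any remaining markup tags.
    text = re.sub(r"<[^>]+>", " ", html.unescape(text))
    # Remove punctuation and other non-alphanumeric symbols.
    text = re.sub(r"[^\w\s]", " ", text)
    # Collapse runs of whitespace left behind by the removals.
    return re.sub(r"\s+", " ", text).strip()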

Training Hyperparameters

  • Precision: fp16
  • Batch size: 32
  • Gradient accumulation steps: 3
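
Expressed as Trainer arguments, these settings would look roughly like the sketch below; the output directory and the epoch count (inferred from the epoch-25 checkpoint) are assumptions:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-dapt-civil_comments-uncased",  # assumed name
    fp16=True,                       # precision listed above
    per_device_train_batch_size=32,  # batch size listed above
    gradient_accumulation_steps=3,   # effective batch size of 96
    num_train_epochs=25,             # assumed from the last released checkpoint
)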

Uses

For typical use cases and limitations, please refer to the base model's guidance: Intended uses & limitations.

Bias, Risks, and Limitations

This model inherits potential risks and limitations from the base model. Refer to: Limitations and bias.

Environmental Impact

  • Hardware Type: NVIDIA Tesla V100 PCIE 32GB
  • Runtime: 44.5 h
  • Cluster Provider: Artemisa
  • Compute Region: EU
  • Carbon Emitted: 8.28 kg CO2 eq.

Citation

BibTeX:

[More Information Needed]
