Update README.md
README.md
CHANGED

library_name: transformers
language: en
license: apache-2.0
datasets:
- community-datasets/ohsumed
base_model:
- google-bert/bert-base-uncased
---

# Model Card: BERT-Ohsumed

An in-domain BERT-base model, pre-trained from scratch on the text of the Ohsumed dataset.

## Model Details

### Description

This model is based on the [BERT base (uncased)](https://huggingface.co/google-bert/bert-base-uncased) architecture and was pre-trained from scratch (in-domain) on the text of the Ohsumed dataset, excluding its test split. Only the masked language modeling (MLM) objective was used during pre-training.

- **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
- **Funded by:** [ERC](https://erc.europa.eu)
- **Architecture:** BERT-base
- **Language:** English
- **License:** Apache 2.0
- **Base model:** [BERT base model (uncased)](https://huggingface.co/google-bert/bert-base-uncased)

### Checkpoints

Intermediate checkpoints from the pre-training process are available and can be accessed using specific tags, which correspond to training epochs and steps:

| Epoch | Step | Epoch tag | Step tag |
|---|---|---|---|
| 1 | 98 | epoch-1 | step-98 |
| 5 | 490 | epoch-5 | step-490 |
| 10 | 980 | epoch-10 | step-980 |
| 20 | 1960 | epoch-20 | step-1960 |
| 30 | 2940 | epoch-30 | step-2940 |
| 40 | 3920 | epoch-40 | step-3920 |
| 50 | 4900 | epoch-50 | step-4900 |
| 60 | 5880 | epoch-60 | step-5880 |
| 70 | 6860 | epoch-70 | step-6860 |
| 80 | 7840 | epoch-80 | step-7840 |
| 90 | 8820 | epoch-90 | step-8820 |
| 100 | 9800 | epoch-100 | step-9800 |

To load the model from a specific intermediate checkpoint, use the `revision` parameter with the corresponding tag:

```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("<model-name>", revision="<checkpoint-tag>")
```
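
For example, to inspect the model as it was halfway through pre-training, the `epoch-50` tag from the table above can be combined with the matching tokenizer (the repository id is left as a placeholder here):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo_id = "<model-name>"  # placeholder: replace with this repository's id on the Hub

# Weights and tokenizer as they were after epoch 50 (training step 4900).
model = AutoModelForMaskedLM.from_pretrained(repo_id, revision="epoch-50")
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision="epoch-50")
```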

### Sources

- **Paper:** [Information pending]

## Training Details

For more details on the training procedure, please refer to the base model's documentation:
[Training procedure](https://huggingface.co/google-bert/bert-base-uncased#training-procedure).

### Training Data

All texts from the Ohsumed dataset, excluding the test partition.
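
As a minimal sketch of how this corpus could be assembled with the `datasets` library (assuming the `community-datasets/ohsumed` id listed in the metadata above loads without an extra configuration name):

```python
from datasets import load_dataset

# Load the Ohsumed dataset from the Hugging Face Hub.
ohsumed = load_dataset("community-datasets/ohsumed")

# Keep every split except the test partition, as described above.
pretraining_splits = {name: split for name, split in ohsumed.items() if name != "test"}
print({name: split.num_rows for name, split in pretraining_splits.items()})
```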

#### Training Hyperparameters

- **Precision:** fp16
- **Batch size:** 32
- **Gradient accumulation steps:** 3
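
A minimal sketch of how these hyperparameters map onto an MLM pre-training setup with `transformers` is shown below; the output directory and the choice of tokenizer are illustrative assumptions, not values taken from this card:

```python
from transformers import (
    AutoTokenizer,
    BertConfig,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
    TrainingArguments,
)

# A BERT-base model initialized from scratch (default BertConfig matches bert-base dimensions).
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")  # assumed tokenizer
model = BertForMaskedLM(BertConfig())

# Data collator implementing the masked language modeling (MLM) objective.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True)

# The three hyperparameters listed above; output_dir is an illustrative name.
training_args = TrainingArguments(
    output_dir="bert-ohsumed-pretraining",  # assumed name
    per_device_train_batch_size=32,         # Batch size: 32 (card does not say per-device vs. total)
    gradient_accumulation_steps=3,          # Gradient accumulation steps: 3
    fp16=True,                              # Precision: fp16
)
```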

## Uses

For typical use cases and limitations, please refer to the base model's guidance:
[Intended uses & limitations](https://huggingface.co/google-bert/bert-base-uncased#intended-uses--limitations).
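
Because pre-training used only the MLM objective, the model can be queried directly with a fill-mask pipeline (the repository id is again a placeholder; for downstream tasks it would typically be fine-tuned first):

```python
from transformers import pipeline

# "<model-name>" stands in for this repository's id on the Hub.
fill_mask = pipeline("fill-mask", model="<model-name>")

# BERT-style models predict the token hidden behind [MASK].
print(fill_mask("the patient was treated with [MASK] for hypertension."))
```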

## Bias, Risks, and Limitations

This model inherits potential risks and limitations from the base model. Refer to:
[Limitations and bias](https://huggingface.co/google-bert/bert-base-uncased#limitations-and-bias).

## Environmental Impact

- **Hardware Type:** NVIDIA Tesla V100 PCIE 32GB
- **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/)
- **Compute Region:** EU

## Citation