cglez committed on
Commit 884d33a · verified · 1 Parent(s): ad6a7d2

Update README.md

Files changed (1)
  1. README.md +59 -39
README.md CHANGED
@@ -2,72 +2,92 @@
  library_name: transformers
  language: en
  license: apache-2.0
- datasets: []
- tags: []
  ---

- # Model Card for <Model>

- A pretrained BERT using <Dataset>.

  ## Model Details

- ### Model Description

- A MLM-only pretrained BERT-base using <Dataset>.

  - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
  - **Funded by:** [ERC](https://erc.europa.eu)
- - **Model type:** MLM pretrained BERT
- - **Language(s) (NLP):** English
- - **License:** Apache license 2.0
- - **Pretrained from model:** [BERT base model (uncased)](https://huggingface.co/google-bert/bert-base-uncased)
-
- ### Model Checkpoints
-
- [More Information Needed]
-
- ### Model Sources
-
- - **Paper:** [More Information Needed]
-
- ## Uses
-
- See <https://huggingface.co/google-bert/bert-base-uncased#intended-uses--limitations>.
-
- ### Checkpoint Use
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- See <https://huggingface.co/google-bert/bert-base-uncased#limitations-and-bias>.

  ## Training Details

- See <https://huggingface.co/google-bert/bert-base-uncased#training-procedure>.

  ### Training Data

- [More Information Needed]
-
- #### Preprocessing [optional]
-
- [More Information Needed]

  #### Training Hyperparameters

- - **Training regime:** fp16
  - **Batch size:** 32
  - **Gradient accumulation steps:** 3

  ## Environmental Impact

  - **Hardware Type:** NVIDIA Tesla V100 PCIE 32GB
- - **Hours used:** [More Information Needed]
  - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/)
  - **Compute Region:** EU
- - **Carbon Emitted:** [More Information Needed] <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->

  ## Citation
 
  library_name: transformers
  language: en
  license: apache-2.0
+ datasets:
+ - community-datasets/ohsumed
+ base_model:
+ - google-bert/bert-base-uncased
  ---

+ # Model Card: BERT-Ohsumed

+ An in-domain BERT-base model, pre-trained from scratch on the Ohsumed dataset text.

  ## Model Details

+ ### Description

+ This model is based on the [BERT base (uncased)](https://huggingface.co/google-bert/bert-base-uncased)
+ architecture and was pre-trained from scratch (in-domain) using the text in the Ohsumed dataset, excluding its test split.
+ Only the masked language modeling (MLM) objective was used during pre-training.

  - **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
  - **Funded by:** [ERC](https://erc.europa.eu)
+ - **Architecture:** BERT-base
+ - **Language:** English
+ - **License:** Apache 2.0
+ - **Base model:** [BERT base model (uncased)](https://huggingface.co/google-bert/bert-base-uncased)
+
+ ### Checkpoints
+
+ Intermediate checkpoints from the pre-training process are available and can be accessed using specific tags,
+ which correspond to training epochs and steps:
+
+ | Epoch | Step | Epoch tag | Step tag |
+ |---|---|---|---|
+ | 1 | 98 | epoch-1 | step-98 |
+ | 5 | 490 | epoch-5 | step-490 |
+ | 10 | 980 | epoch-10 | step-980 |
+ | 20 | 1960 | epoch-20 | step-1960 |
+ | 30 | 2940 | epoch-30 | step-2940 |
+ | 40 | 3920 | epoch-40 | step-3920 |
+ | 50 | 4900 | epoch-50 | step-4900 |
+ | 60 | 5880 | epoch-60 | step-5880 |
+ | 70 | 6860 | epoch-70 | step-6860 |
+ | 80 | 7840 | epoch-80 | step-7840 |
+ | 90 | 8820 | epoch-90 | step-8820 |
+ | 100 | 9800 | epoch-100 | step-9800 |
+
+ To load a model from a specific intermediate checkpoint, use the `revision` parameter with the corresponding tag:
+ ```python
+ from transformers import AutoModelForMaskedLM
+
+ model = AutoModelForMaskedLM.from_pretrained("<model-name>", revision="<checkpoint-tag>")
+ ```
+
+ ### Sources
+
+ - **Paper:** [Information pending]

  ## Training Details

+ For more details on the training procedure, please refer to the base model's documentation:
+ [Training procedure](https://huggingface.co/google-bert/bert-base-uncased#training-procedure).

  ### Training Data

+ All texts from the Ohsumed dataset, excluding the test partition.
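+
+ As a minimal sketch (not an exact preprocessing recipe), the pre-training corpus could be assembled with the `datasets` library; the split and field names below (`test`, `title`, `abstract`) are assumptions about the Hub dataset, not details taken from this card:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the Ohsumed dataset from the Hub.
+ ohsumed = load_dataset("community-datasets/ohsumed")
+
+ # Keep every split except the held-out test partition.
+ pretraining_splits = [split for name, split in ohsumed.items() if name != "test"]
+
+ # Gather the raw text used for MLM pre-training (assumed fields: title + abstract).
+ corpus = [
+     f"{example['title']} {example['abstract']}"
+     for split in pretraining_splits
+     for example in split
+ ]
+ ```
+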
  #### Training Hyperparameters

+ - **Precision:** fp16
  - **Batch size:** 32
  - **Gradient accumulation steps:** 3
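+
+ As a rough sketch, one way these settings could be expressed with the `transformers` Trainer API (the output directory, tokenizer reuse, and the `tokenized_dataset` placeholder are assumptions, not details from this card):
+
+ ```python
+ from transformers import (
+     BertConfig,
+     BertForMaskedLM,
+     BertTokenizerFast,
+     DataCollatorForLanguageModeling,
+     Trainer,
+     TrainingArguments,
+ )
+
+ # From-scratch BERT-base model; the original uncased tokenizer is assumed to be reused.
+ tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-uncased")
+ model = BertForMaskedLM(BertConfig())  # default config is BERT-base sized
+
+ # Hyperparameters listed above; fp16 requires a GPU runtime.
+ args = TrainingArguments(
+     output_dir="bert-ohsumed",
+     fp16=True,
+     per_device_train_batch_size=32,
+     gradient_accumulation_steps=3,
+ )
+
+ # Dynamic masking for the MLM objective.
+ collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True)
+
+ # `tokenized_dataset` stands in for the tokenized Ohsumed text (not shown here).
+ # trainer = Trainer(model=model, args=args, train_dataset=tokenized_dataset, data_collator=collator)
+ # trainer.train()
+ ```
+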
+ ## Uses
+
+ For typical use cases and limitations, please refer to the base model's guidance:
+ [Intended uses & limitations](https://huggingface.co/google-bert/bert-base-uncased#intended-uses--limitations).
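+
+ As a quick illustration of the MLM head in use (with `<model-name>` as a placeholder, as in the checkpoint example above):
+
+ ```python
+ from transformers import pipeline
+
+ # The fill-mask pipeline queries the pretrained MLM head directly.
+ fill_mask = pipeline("fill-mask", model="<model-name>")
+ print(fill_mask("The patient was treated with [MASK] for hypertension."))
+ ```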
+
+ ## Bias, Risks, and Limitations
+
+ This model inherits potential risks and limitations from the base model. Refer to:
+ [Limitations and bias](https://huggingface.co/google-bert/bert-base-uncased#limitations-and-bias).
+
  ## Environmental Impact

  - **Hardware Type:** NVIDIA Tesla V100 PCIE 32GB
  - **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/)
  - **Compute Region:** EU

  ## Citation