Proyag committed
Commit 4bf8289 · verified · 1 Parent(s): 8d35c54

Update bibtex

Files changed (1): README.md (+3 −4)
README.md CHANGED

@@ -437,9 +437,6 @@ Please be aware that this contains unfiltered data from the internet, and may co
 <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
 
 Please cite the paper if you use this dataset.
-Until the ACL Anthology is updated with ACL 2024 papers, you can use the following BibTeX:
-
-<!-- Update with ACL Anthology bibtex-->
 ```
 @inproceedings{pal-etal-2024-document,
     title = "Document-Level Machine Translation with Large-Scale Public Parallel Corpora",
@@ -454,8 +451,10 @@ Until the ACL Anthology is updated with ACL 2024 papers, you can use the followi
     year = "2024",
     address = "Bangkok, Thailand",
     publisher = "Association for Computational Linguistics",
-    url = "https://aclanthology.org/2024.acl-long.712",
+    url = "https://aclanthology.org/2024.acl-long.712/",
+    doi = "10.18653/v1/2024.acl-long.712",
     pages = "13185--13197",
+    abstract = "Despite the fact that document-level machine translation has inherent advantages over sentence-level machine translation due to additional information available to a model from document context, most translation systems continue to operate at a sentence level. This is primarily due to the severe lack of publicly available large-scale parallel corpora at the document level. We release a large-scale open parallel corpus with document context extracted from ParaCrawl in five language pairs, along with code to compile document-level datasets for any language pair supported by ParaCrawl. We train context-aware models on these datasets and find improvements in terms of overall translation quality and targeted document-level phenomena. We also analyse how much long-range information is useful to model some of these discourse phenomena and find models are able to utilise context from several preceding sentences."
 }
 ```