
Yodas2-Mimi Pretraining Dataset

Dataset Summary

Based on Yodas2, this dataset contains text-audio documents interleaved at the utterance level, derived from the Mimi-tokenized Yodas2 corpus (Yodas2-Mimi). It is designed for pretraining audio language models that process both text and audio in a unified format.

The audio is represented as Unicode strings (converted from Mimi audio codec tokens), making it compatible with standard language model training pipelines.

Tokenizer

Since the text column contains special tokens, you should use this tokenizer: marin-mimi-bpe-8cb-16k-tokenizer.
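
Loading it might look like the following (a minimal sketch; the exact Hugging Face repo path for the tokenizer is an assumption and may differ):

from transformers import AutoTokenizer

# Hypothetical repo id: point this at wherever marin-mimi-bpe-8cb-16k-tokenizer is hosted.
tokenizer = AutoTokenizer.from_pretrained("potsawee/marin-mimi-bpe-8cb-16k-tokenizer")

# The special tokens (e.g. <|audio_start|>) should each map to a single id.
ids = tokenizer("<|text_start|>hello<|text_end|>")["input_ids"]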

Data Format

Each document in the dataset contains:

  • id: Unique identifier for the document (suffixed with _type1 or _type2)
  • split: Source shard/subshard identifier (format: {shard_id}/{subshard_id})
  • text: The interleaved text-audio content as a single string, with the special tokens already included

Two Document Types

The dataset provides two types of interleaved formats for each source document:

Type 1: Text → Audio (text comes first)

<|begin_of_text|><|text_start|>text1<|text_end|><|audio_start|>audio1<|audio_end|>...<|end_of_text|>

Type 2: Audio → Text (audio comes first)

<|begin_of_text|><|audio_start|>audio1<|audio_end|><|text_start|>text1<|text_end|>...<|end_of_text|>

This dual format enables training models for both text-to-speech and speech-to-text tasks simultaneously.
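
For illustration, a Type 1 document could be assembled from aligned utterance pairs as in the sketch below (a hypothetical builder, not the script actually used to produce the dataset):

def build_type1_document(utterances):
    """Assemble a text-first (Type 1) document from (text, audio_unicode) pairs."""
    parts = ["<|begin_of_text|>"]
    for text, audio in utterances:
        parts.append(f"<|text_start|>{text}<|text_end|>")
        parts.append(f"<|audio_start|>{audio}<|audio_end|>")
    parts.append("<|end_of_text|>")
    return "".join(parts)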

Special Tokens

The following special tokens are used to structure the documents:

  • <|begin_of_text|>: Marks the start of a document
  • <|end_of_text|>: Marks the end of a document
  • <|text_start|>: Marks the start of a text segment
  • <|text_end|>: Marks the end of a text segment
  • <|audio_start|>: Marks the start of an audio segment
  • <|audio_end|>: Marks the end of an audio segment
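
Given these delimiters, the (modality, content) segments of a document can be recovered with a simple regular expression. A minimal parsing sketch (not an official utility shipped with the dataset):

import re

SEGMENT = re.compile(r"<\|(text|audio)_start\|>(.*?)<\|\1_end\|>", re.DOTALL)

def parse_document(text: str):
    """Yield ('text' | 'audio', content) pairs in document order."""
    for m in SEGMENT.finditer(text):
        yield m.group(1), m.group(2)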

Audio Encoding

Audio is encoded using the Mimi audio codec with the following configuration:

  • Number of codebooks: 8
  • Codebook size: 2048
  • Unicode offset: 0xe000 (private use area)

Audio tokens are converted to Unicode characters, allowing them to be processed as text by language models. Each audio code is mapped to a unique Unicode character starting from the private use area.
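
For concreteness, here is a minimal sketch of the code-to-character mapping implied by the numbers above. Whether each of the 8 codebooks occupies its own 2048-character block (8 × 2048 = 16,384 characters in total, matching the "16k" in the tokenizer name) is an assumption, not something this card states explicitly.

UNICODE_OFFSET = 0xE000  # start of the Basic Multilingual Plane private use area
CODEBOOK_SIZE = 2048
NUM_CODEBOOKS = 8

def frame_to_unicode(frame):
    """Map one Mimi frame (one code per codebook) to NUM_CODEBOOKS characters.

    Assumes codebook k is offset by k * CODEBOOK_SIZE, so every
    (codebook, code) pair gets its own unique character.
    """
    assert len(frame) == NUM_CODEBOOKS
    return "".join(chr(UNICODE_OFFSET + k * CODEBOOK_SIZE + code)
                   for k, code in enumerate(frame))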

Dataset Statistics

  • Documents (Rows): 4,258,874
  • Total tokens: 295B (tokenized by marin-mimi-bpe-8cb-16k-tokenizer)

Dataset Statistics by Language

Language Rows Shards Subshards Num Tokens
aa 86 1 1 219,532
ab 264 1 1 626,448
af 174 1 1 938,570
ak 32 1 1 222,516
am 664 1 1 7,609,996
ar 20,838 1 2 313,273,752
as 240 1 1 475,060
ay 48 1 1 64,148
az 416 1 1 3,597,864
ba 40 1 1 392,942
be 2,126 1 1 35,385,082
bg 1,668 1 1 43,204,440
bh 4 1 1 9,696
bi 10 1 1 197,866
bm 8 1 1 1,654
bn 11,060 1 1 54,150,790
bo 14 1 1 690,586
br 36 1 1 564,178
bs 252 1 1 7,175,348
ca 2,662 1 1 57,126,328
co 12 1 1 32,626
cr 6 1 1 4,288
cs 4,746 1 1 128,435,506
cy 260 1 1 3,996,964
da 1,338 1 1 21,472,962
de 132,832 4 15 10,989,993,042
dz 18 1 1 48,842
ee 10 1 1 31,186
el 3,220 1 1 100,290,700
en 1,791,316 38 183 131,123,750,680
eo 396 1 1 12,922,032
es 476,532 10 49 35,383,236,180
et 282 1 1 11,377,258
eu 954 1 1 15,428,750
fa 914 1 1 43,572,308
ff 12 1 1 29,884
fi 3,220 1 1 164,835,624
fj 14 1 1 13,054
fo 28 1 1 141,920
fr 202,596 5 19 16,624,847,906
fy 16 1 1 64,850
ga 102 1 1 1,195,830
gd 16 1 1 15,858
gl 432 1 1 8,044,654
gn 40 1 1 308,564
gu 1,178 1 1 2,757,342
ha 346 1 1 120,984
hi 36,462 2 3 413,234,530
ho 10 1 1 41,884
hr 738 1 1 20,202,718
ht 228 1 1 969,466
hu 2,558 1 1 156,280,022
hy 284 1 1 2,621,494
ia 32 1 1 63,498
id 153,938 3 15 8,083,093,336
ie 28 1 1 309,702
ig 16 1 1 126,742
ik 6 1 1 3,374
is 306 1 1 3,981,186
it 115,366 3 11 8,292,491,254
iu 12 1 1 8,024
iw 3,478 1 1 131,223,906
ja 65,482 2 5 3,108,465,094
jv 106 1 1 906,828
ka 694 1 1 21,052,458
ki 2 1 1 420
kk 540 1 1 5,890,808
kl 8 1 1 3,812
km 816 1 1 4,602,562
kn 574 1 1 3,944,608
ko 239,366 5 23 15,873,307,780
ks 18 1 1 26,118
ku 102 1 1 1,621,536
ky 1,824 1 1 11,731,944
la 162 1 1 5,851,778
lb 16 1 1 85,592
lg 4 1 1 2,866
ln 64 1 1 155,462
lo 28 1 1 202,632
lt 428 1 1 13,809,912
lv 108 1 1 1,965,880
mg 38 1 1 151,252
mi 132 1 1 846,874
mk 196 1 1 4,300,234
ml 1,496 1 1 19,551,968
mn 120 1 1 1,776,112
mr 2,570 1 1 6,299,296
ms 1,160 1 1 21,647,004
my 174 1 1 2,025,598
na 10 1 1 30,850
nd 2 1 1 896
ne 554 1 1 3,252,512
nl 51,700 2 5 2,283,828,818
no 2,500 1 1 99,862,490
nv 6 1 1 2,418
oc 12 1 1 131,114
om 206 1 1 189,614
or 540 1 1 805,718
pa 636 1 1 1,529,608
pl 19,056 1 1 612,614,280
ps 172 1 1 1,326,312
pt 220,746 5 21 15,769,904,990
qu 6 1 1 572,198
rm 20 1 1 153,982
rn 6 1 1 3,306
ro 3,770 1 1 84,549,356
ru 405,758 9 41 31,069,562,816
rw 94 1 1 557,810
sa 42 1 1 942,280
sc 6 1 1 13,676
sd 40 1 1 58,524
sg 2 1 1 406
sh 6 1 1 189,934
si 616 1 1 5,219,200
sk 1,426 1 1 24,986,906
sl 748 1 1 17,687,176
sm 4 1 1 78,288
sn 6 1 1 10,480
so 426 1 1 4,872,624
sq 302 1 1 2,800,904
sr 540 1 1 12,476,862
st 6 1 1 2,656
su 36 1 1 30,028
sv 2,638 1 1 97,736,862
sw 350 1 1 1,520,946
ta 4,954 1 1 68,388,738
te 2,510 1 1 8,288,766
tg 90 1 1 283,358
th 16,372 2 2 515,106,000
ti 24 1 1 379,410
tk 20 1 1 922,818
tn 10 1 1 96,390
to 2 1 1 424
tr 64,938 2 7 3,847,050,740
ts 4 1 1 97,266
tt 10 1 1 33,672
ug 36 1 1 137,706
uk 20,554 2 2 687,471,758
ur 5,328 1 1 25,751,098
uz 1,200 1 1 7,591,098
ve 8 1 1 2,676
vi 133,954 3 14 8,394,394,682
vo 4 1 1 161,142
wo 54 1 1 122,826
xh 14 1 1 119,632
yi 14 1 1 73,686
yo 28 1 1 243,384
zh 4,330 1 1 239,288,480
zu 260 1 1 2,904,654
TOTAL 4,258,874 230 549 295,274,191,398

Usage

Loading the Dataset

from datasets import load_dataset

# Load all data
dataset = load_dataset("potsawee/yodas2-mm-pretrain")
# Load specific language
dataset = load_dataset("potsawee/yodas2-mm-pretrain", "en")
# Stream the dataset (recommended for large datasets)
dataset = load_dataset("potsawee/yodas2-mm-pretrain", "en", streaming=True)
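
Since each source document appears in both orders (see Two Document Types), you can keep a single direction by filtering on the id suffix. A small sketch, assuming _type1 marks the text-first format as described above:

# Keep only text-first (Type 1) documents, e.g. for TTS-style pretraining.
tts_only = dataset.filter(lambda ex: ex["id"].endswith("_type1"))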

Example Document

from datasets import load_dataset

dataset = load_dataset("potsawee/yodas2-mm-pretrain", "en", split="train", streaming=True)
sample = next(iter(dataset))

print(f"ID: {sample['id']}")
print(f"Split: {sample['split']}")
print(f"Text length: {len(sample['text'])} characters")
print(f"Preview: {sample['text'][:200]}...")