Datasets with competing talkers:

| Original reference | Number of participants | Participant population | Amount of data per participant | Neurorecording system | Stimuli | Sex of competing talkers | Location of competing talkers | Acoustic room conditions | Comments | Dataset link | Paper link |
|---|---|---|---|---|---|---|---|---|---|---|---|
| W. Biesmans, N. Das, T. Francart, and A. Bertrand, “Auditory-Inspired Speech Envelope Extraction Methods for Improved EEG-Based Auditory Attention Detection in a Cocktail Party Scenario,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 5, pp. 402–412, 2017 | 16 | young, normal-hearing | 72 min (8 trials x 6 min + 12 trials x 2 min) | EEG, 64-channel BioSemi | Dutch short stories | male-male | 90/-90 degrees | dichotic and HRTF-filtered in anechoic room | | https://zenodo.org/record/3997352 | https://ieeexplore.ieee.org/abstract/document/7478117 |
| S. A. Fuglsang, T. Dau, and J. Hjortkjær, “Noise-robust cortical tracking of attended speech in real-world acoustic scenes,” NeuroImage, vol. 156, pp. 435–444, 2017 | 18 | young, normal-hearing | 50 min (60 trials x 50 s) | EEG, 64-channel BioSemi | Danish fictional stories | male-female | 60/-60 degrees | HRTF-filtered in anechoic, mildly, and highly reverberant rooms | EOG available | https://zenodo.org/record/1199011 | https://www.sciencedirect.com/science/article/abs/pii/S105381191730318X |
| S. A. Fuglsang, J. Märcher-Rørsted, T. Dau, and J. Hjortkjær, “Effects of Sensorineural Hearing Loss on Cortical Synchronization to Competing Speech during Selective Attention,” Journal of Neuroscience, vol. 40, no. 12, pp. 2562–2572, 2020 | 44 | 22 hearing-impaired + 22 normal-hearing | 26.7 min (32 trials x 50 s) | EEG, 64-channel BioSemi | Danish audiobooks | male-female | 90/-90 degrees | HRTF-filtered | single-talker, ERP, EFR, and resting-state data also available; in-ear EEG for 19 of 44 participants; EOG available | https://zenodo.org/record/3618205 | https://www.jneurosci.org/content/40/12/2562.abstract |
| A. J. Power, J. J. Foxe, E.-J. Forde, R. B. Reilly, and E. C. Lalor, “At what time is the cocktail party? A late locus of selective attention to natural speech,” European Journal of Neuroscience, vol. 35, pp. 1497–1503, 2012 | 33 | young, normal-hearing | 30 min (30 trials x 1 min) | EEG, 128-channel BioSemi | English fictional stories | male-male | 90/-90 degrees | dichotic | used in the seminal O'Sullivan paper | https://datadryad.org/stash/dataset/doi:10.5061/dryad.070jc | https://onlinelibrary.wiley.com/doi/full/10.1111/j.1460-9568.2012.08060.x |
| A. Mundanad Narayanan, R. Zink, and A. Bertrand, “EEG miniaturization limits for stimulus decoding with EEG sensor networks,” Journal of Neural Engineering, vol. 18, no. 5, p. 056042, 2021 | 30 | young, normal-hearing | 24 min (4 trials x 6 min) | EEG, 255-channel SynAmps RT | Dutch fictional stories | male-male | 90/-90 degrees | HRTF-filtered | | https://zenodo.org/record/4518754 | https://iopscience.iop.org/article/10.1088/1741-2552/ac2629/meta |
| L. Straetmans, B. Holtze, S. Debener, M. Jaeger, and B. Mirkovic, “Neural tracking to go: auditory attention decoding and saliency detection with mobile EEG,” Journal of Neural Engineering, vol. 18, no. 6, p. 066054, 2021 | 20 | young, normal-hearing | 30 min (6 trials x 5 min) | EEG, 24-channel EasyCap/SMARTING | German audiobooks + natural salient events | male-male | 45/-45 degrees | HRTF-filtered; recorded in a public cafeteria without other people present | 3 trials while walking, 3 trials while sitting; salient environmental sounds added | https://openneuro.org/datasets/ds003801/versions/1.0.0 | https://iopscience.iop.org/article/10.1088/1741-2552/ac42b5/meta |
| S. Akram, A. Presacco, J. Z. Simon, S. A. Shamma, and B. Babadi, “Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling,” NeuroImage, vol. 124, pp. 906–917, 2016 | 7 | young, normal-hearing | 6 min (2 conditions x 3 repetitions x 1 min) | MEG, 157-channel | English fictional stories | male-female | 90/-90 degrees | dichotic | instructed attention switch once per trial | https://drum.lib.umd.edu/items/d935265d-895c-4fc7-aec2-36dc7682d87d | https://www.sciencedirect.com/science/article/abs/pii/S1053811915008708 |
| A. Presacco, S. Miran, B. Babadi, and J. Z. Simon, “Real-Time Tracking of Magnetoencephalographic Neuromarkers during a Dynamic Attention-Switching Task,” Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 4148–4151, 2019 | 5 | young, normal-hearing | 4.5 min (3 trials x 90 s) | MEG, 157-channel | English fictional stories | male-female | 90/-90 degrees | dichotic | at-will attention switches 1-3 times per trial | https://drum.lib.umd.edu/items/d935265d-895c-4fc7-aec2-36dc7682d87d | https://ieeexplore.ieee.org/abstract/document/8857953 |
| C. Brodbeck, L. E. Hong, and J. Z. Simon, “Rapid Transformation from Auditory to Linguistic Representations of Continuous Speech,” Current Biology, vol. 28, no. 24, pp. 3976–3983.e5, 2018 | 26 | normal-hearing | 16 min (4 trials x 4 repetitions x 1 min) | MEG, 157-channel | English audiobooks | male-female | N/A | N/A | | https://drum.lib.umd.edu/items/6ce1b090-3446-46d8-9582-f689afcd23de | https://www.sciencedirect.com/science/article/pii/S096098221831409X |
| G. Cantisani, G. Trégoat, S. Essid, and G. Richard, “MAD-EEG: an EEG dataset for decoding auditory attention to a target instrument in polyphonic music,” Speech, Music and Mind (SMM), Satellite Workshop of Interspeech 2019, Vienna, Austria, 2019 | 8 | young, normal-hearing, non-professional musicians | 30-32 min (78 stimuli x 4 repetitions x 6 s) | EEG, 20-channel B-Alert X24 headset | polyphonic music mixtures (14 solos, 40 duets, 24 trios) | various instruments | N/A | loudspeakers at 45/-45 degrees; convex weighting of the instruments in the mixture | EOG, EMG, ECG, and head-motion acceleration available; single-instrument recordings available | https://zenodo.org/record/4537751 | https://hal.science/hal-02291882/ |
| O. Etard, R. B. Messaoud, G. Gaugain, and T. Reichenbach, “No Evidence of Attentional Modulation of the Neural Response to the Temporal Fine Structure of Continuous Musical Pieces,” Journal of Cognitive Neuroscience, vol. 34, no. 3, pp. 411–424, 2022 | 17 | young, normal-hearing | 22.4 min (7 stimuli, 11.2 min in total, x 2 repetitions) | EEG, 4-channel Ag/AgCl electrodes (Multitrode, BrainProducts) | music (Bach's Two-Part Inventions) | piano-guitar | N/A | dichotic | single-instrument recordings available | https://zenodo.org/record/4470135 | https://direct.mit.edu/jocn/article-abstract/34/3/411/109069/No-Evidence-of-Attentional-Modulation-of-the |
| Y. Zhang, H. Ruan, Z. Yuan, H. Du, X. Gao, and J. Lu, “A Learnable Spatial Mapping for Decoding the Directional Focus of Auditory Attention Using EEG,” 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, pp. 1–5, 2023 | 21 | normal-hearing | 64 min (32 trials x 2 min) | EEG, 32-channel EMOTIV Epoc Flex Saline | Chinese news programs | male-female | random pairs from ±135/±120/±90/±60/±45/±30/±15 degrees | loudspeaker array | per trial, a random pair of competing-speaker directions is used | https://ieee-dataport.org/documents/nju-auditory-attention-decoding-dataset | https://ieeexplore.ieee.org/abstract/document/10096819 |
| O. Etard, M. Kegler, C. Braiman, A. E. Forte, and T. Reichenbach, “Decoding of selective attention to continuous speech from the human auditory brainstem response,” NeuroImage, vol. 200, pp. 1–11, 2019 | 18 | young, normal-hearing | 20 min (2 trials x 4 parts x 2.5 min) | EEG, 64-channel actiCAP | English audiobooks | male-female | 90/-90 degrees | dichotic | | https://zenodo.org/records/7778289 | https://www.sciencedirect.com/science/article/pii/S1053811919305208 |
| I. Rotaru, S. Geirnaert, N. Heintz, I. Van de Ryck, A. Bertrand, and T. Francart, “What are we really decoding? Unveiling biases in EEG-based decoding of the spatial focus of auditory attention,” Journal of Neural Engineering, vol. 21, no. 1, 016017, 2024 | 13 | young, normal-hearing | 80 min (2 blocks x 4 conditions x 10 min) | EEG, 64-channel BioSemi | Dutch science-outreach podcasts | male-male | 90/-90 degrees | HRTF-filtered in anechoic room | each of the four conditions uses a different audio-visual setting (moving video, moving target noise, no visuals, static video); one attention switch per trial (= condition), after 5 minutes; EOG also available | https://zenodo.org/records/11058711 | https://iopscience.iop.org/article/10.1088/1741-2552/ad2214/meta |
| Z. Lin, T. He, S. Cai, and H. Li, “ASA: An Auditory Spatial Attention Dataset with Multiple Speaking Locations,” Interspeech 2024, Kos, Greece, 2024 | 20 | normal-hearing | 24 min (20 trials x 1-1.5 min) | EEG, 64-channel EasyCap | Mandarin stories | male-female | ±90, ±60, ±45, ±30, ±5 degrees | HRTF-filtered, presented through headphones | | https://zenodo.org/records/11541114 | https://www.isca-archive.org/interspeech_2024/lin24f_interspeech.pdf |
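
Most of the competing-talker datasets above are used with some variant of the correlation-based auditory attention decoding pipeline popularized by the O'Sullivan paper referenced in the Power et al. row. The sketch below shows the mechanics of a linear backward decoder on synthetic data; the sampling rate, lag window, ridge strength, and all names (`lag_matrix`, `env_attended`, ...) are illustrative assumptions, not taken from any specific dataset or paper.

```python
import numpy as np

fs = 64                   # assumed common sampling rate of EEG and envelopes (Hz)
n_channels = 64           # e.g. a 64-channel BioSemi montage, as in several rows
n_samples = fs * 60       # one minute of synthetic data
lags = np.arange(int(0.25 * fs))   # 0-250 ms post-stimulus window (assumption)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_samples, n_channels))     # placeholder EEG
env_attended = rng.standard_normal(n_samples)          # placeholder envelopes
env_unattended = rng.standard_normal(n_samples)

def lag_matrix(x, lags):
    """Stack time-lagged copies of every channel: row t holds x[t + lag],
    i.e. the EEG samples following the stimulus sample at time t."""
    n, c = x.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        X[:n - lag, i * c:(i + 1) * c] = x[lag:]
    return X

X = lag_matrix(eeg, lags)

# Ridge-regularized least squares: w maps lagged EEG to the attended envelope.
lam = np.trace(X.T @ X) / X.shape[1]   # heuristic ridge strength (assumption)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ env_attended)

# Correlation-based attention decision: reconstruct an envelope from the EEG
# and pick the talker whose true envelope matches the reconstruction best.
recon = X @ w
scores = [np.corrcoef(recon, env)[0, 1] for env in (env_attended, env_unattended)]
print("decoded attended talker:", int(np.argmax(scores)) + 1)
```

In practice the decoder is trained and evaluated on separate trials per subject; this in-sample demo only illustrates the lag-matrix construction and the correlation-based decision.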
Single-talker datasets:

| Original reference | Number of participants | Participant population | Amount of data per participant | Neurorecording system | Stimuli | Sex of the talker | Comments | Dataset link | Paper link |
|---|---|---|---|---|---|---|---|---|---|
| G. M. Di Liberto, J. A. O’Sullivan, and E. C. Lalor, “Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing,” Current Biology, vol. 25, no. 19, pp. 2457–2465, 2015, and M. P. Broderick, A. J. Anderson, G. M. Di Liberto, M. J. Crosse, and E. C. Lalor, “Electrophysiological Correlates of Semantic Dissimilarity Reflect the Comprehension of Natural, Narrative Speech,” Current Biology, vol. 28, no. 5, pp. 803–809.e3, 2018 | 19 | young, normal-hearing | 60 min (20 trials x 180 s) | EEG, 128-channel BioSemi | English fictional stories | male | | https://datadryad.org/stash/dataset/doi:10.5061/dryad.070jc | https://www.cell.com/current-biology/pdf/S0960-9822(15)01001-5.pdf |
| G. M. Di Liberto, J. A. O’Sullivan, and E. C. Lalor, “Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing,” Current Biology, vol. 25, no. 19, pp. 2457–2465, 2015 | 10 | young, normal-hearing | 72.3 min (28 trials x 155 s) | EEG, 128-channel BioSemi | English fictional stories, reversed | male | same stimuli as the dataset above, but reversed | https://datadryad.org/stash/dataset/doi:10.5061/dryad.070jc | https://www.cell.com/current-biology/pdf/S0960-9822(15)01001-5.pdf |
| H. Weissbart, K. D. Kandylaki, and T. Reichenbach, “Cortical Tracking of Surprisal during Continuous Speech Comprehension,” Journal of Cognitive Neuroscience, vol. 32, no. 1, pp. 155–166, 2020 | 13 | young, normal-hearing | 40 min (15 trials x approx. 2.6 min) | EEG, 64-channel actiCAP | English short stories | male | | https://figshare.com/articles/dataset/EEG_recordings_and_stimuli/9033983/1 | https://direct.mit.edu/jocn/article-abstract/32/1/155/95401/Cortical-Tracking-of-Surprisal-during-Continuous |
| F. J. Vanheusden, M. Kegler, K. Ireland, C. Georga, D. M. Simpson, T. Reichenbach, and S. L. Bell, “Hearing Aids Do Not Alter Cortical Entrainment to Speech at Audible Levels in Mild-to-Moderately Hearing-Impaired Subjects,” Frontiers in Human Neuroscience, vol. 14, no. 109, 2020 | 17 | older, hearing-impaired hearing-aid users | 25 min (8 trials x approx. 3 min) | EEG, 32-channel BioSemi | English audiobook | female | trials both aided and unaided by a hearing aid | https://eprints.soton.ac.uk/438737/ | https://www.frontiersin.org/articles/10.3389/fnhum.2020.00109/full |
| L. Bollens, B. Accou, H. Van hamme, and T. Francart, “A Large Auditory EEG Decoding Dataset,” KU Leuven RDR, 2023 | 85 | young, normal-hearing | 130-150 min (8-10 trials x 15 min) | EEG, 64-channel BioSemi | Flemish audiobooks and podcasts | male and female | | https://rdr.kuleuven.be/dataset.xhtml?persistentId=doi:10.48804/K3VSND | |
| J. R. Brennan and J. T. Hale, “Hierarchical structure guides rapid linguistic predictions during naturalistic listening,” PLoS ONE, vol. 14, no. 1, e0207741, 2019 | 49 | young | 12.4 min | EEG, 61-channel actiCAP | English audiobook | female | no mention of participants’ medical conditions | https://deepblue.lib.umich.edu/data/concern/data_sets/bg257f92t | https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0207741 |
| S. A. Fuglsang, J. Märcher-Rørsted, T. Dau, and J. Hjortkjær, “Effects of Sensorineural Hearing Loss on Cortical Synchronization to Competing Speech during Selective Attention,” Journal of Neuroscience, vol. 40, no. 12, pp. 2562–2572, 2020 | 44 | 22 hearing-impaired + 22 normal-hearing | 13.3 min (16 trials x 50 s) | EEG, 64-channel BioSemi | Danish audiobooks | male and female | dual-talker, ERP, EFR, and resting-state data also available; in-ear EEG for 19 of 44 participants | https://zenodo.org/record/3618205 | https://www.jneurosci.org/content/40/12/2562.abstract |
| L. Gwilliams, J.-R. King, A. Marantz, and D. Poeppel, “Neural dynamics of phoneme sequences reveal position-invariant code for content and order,” Nature Communications, vol. 13, 6606, 2022 | 27 | young, normal-hearing | 120 min (2 sessions x 1 hour) | MEG, 208-channel | English fictional stories | | | https://osf.io/ag3kj/ | https://www.nature.com/articles/s41467-022-34326-1 |
| N. H. L. Lam, A. Hultén, P. Hagoort, and J.-M. Schoffelen, “Robust neuronal oscillatory entrainment to speech displays individual variation in lateralisation,” Language, Cognition and Neuroscience, vol. 33, no. 8, pp. 943–954, 2018, and various other papers | 102 | young, healthy | approx. 8.4 min (120 sentences x 2.8-6 s) | MEG, 275-channel | Dutch sentences | female | fMRI also available; resting-state and reading data also available | https://data.donders.ru.nl/collections/di/dccn/DSC_3011020.09_236?0 | https://www.tandfonline.com/doi/full/10.1080/23273798.2018.1437456 |
| C. Brodbeck, L. E. Hong, and J. Z. Simon, “Rapid Transformation from Auditory to Linguistic Representations of Continuous Speech,” Current Biology, vol. 28, no. 24, pp. 3976–3983.e5, 2018 | 26 | normal-hearing | 8 min (8 trials x 1 min) | MEG, 157-channel | English audiobooks | male and female | | https://drum.lib.umd.edu/items/6ce1b090-3446-46d8-9582-f689afcd23de | https://www.sciencedirect.com/science/article/pii/S096098221831409X |
| O. Etard and T. Reichenbach, “Neural speech tracking in the theta and in the delta frequency band differentially encode clarity and comprehension of speech in noise,” Journal of Neuroscience, vol. 39, no. 29, pp. 5750–5759, 2019 | 12 | young, normal-hearing | 40 min (4 noise levels x 4 parts x 2.5 min) | EEG, 64-channel actiCAP | English audiobooks | male and female | four levels of babble noise; EEG of participants listening to Dutch (0% speech comprehension) also available | https://zenodo.org/records/7778289 | https://www.jneurosci.org/content/39/29/5750.abstract |
| Q. Wang, Q. Zhou, Z. Ma, N. Wang, T. Zhang, Y. Fu, and J. Li, “Le Petit Prince (LPP) multi-talker: Naturalistic 7 T fMRI and EEG dataset,” Scientific Data, vol. 12, no. 829, 2025 | 25 | young | 20 min (2 trials x 10 min) | EEG, 64-channel actiCAP | Mandarin audiobook (“The Little Prince”) | male and female (synthesized) | dual-speaker data also available | https://openneuro.org/datasets/ds005345/versions/1.0.1 | https://www.nature.com/articles/s41597-025-05158-7 |
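
The Zenodo records in both tables can be inspected programmatically before downloading, which is convenient given that file formats differ per record (raw recordings, preprocessed MATLAB files, etc.). Below is a minimal sketch against Zenodo's public REST API; the record id is taken from the Biesmans et al. row, and the JSON field names (`files`, `key`, `links`/`self`) reflect the current Zenodo API and may change.

```python
import requests

record_id = "3997352"  # Biesmans et al. row in the competing-talker table
resp = requests.get(f"https://zenodo.org/api/records/{record_id}", timeout=30)
resp.raise_for_status()
for entry in resp.json().get("files", []):
    # "key" holds the file name; "links"/"self" a direct download URL
    # (assumed field names; inspect resp.json() if the schema has changed).
    print(entry["key"], entry["links"]["self"])
```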