We will summarize each discussion on a poster (a white paper board) for an informal exchange with all attendees from 4:00 pm to 4:30 pm. We will run 4 sessions in parallel:

- Language learning in grounded environments: In this session, we will discuss how infants and other agents might, and do, benefit from grounding for language acquisition. A second part of this break-out session will be dedicated to reverse-engineering the language acquisition capacity in order to construct language learning systems in a naturalistic way (noisy multimodal input, few-shot learning…). We will also discuss virtual environments and datasets for training grounded language learning agents.

- Modeling language acquisition using Recurrent Neural Networks: Recurrent neural networks (RNNs) have achieved impressive results on a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. In this break-out session, we will discuss how and when such models induce natural language rules merely by being exposed to large amounts of textual data, and when they fail to do so.

- Statistical learning for language acquisition: Infants learn their first language using multiple linguistic and non-linguistic cues. A process called statistical learning, which extracts the structure of these cues by finding patterns, has been hypothesized to be at work in infants. For instance, in speech perception and processing, infants are thought to compute transitional probabilities between adjacent sound units during their first year of life in order to detect word boundaries. To what extent are statistical learning mechanisms key to language acquisition? How do these skills evolve during development? What insights does the modeling literature provide?

- Bootstrapping language: In psycholinguistics, a theoretical framework has developed the idea of synergies (or bootstrapping) in language acquisition.
Impoverished knowledge in one area of language (e.g., semantics) might help infants and toddlers adjust their representations in another area (e.g., syntax). For instance, the syntactic bootstrapping hypothesis claims that infants learn the meanings of words by paying attention to the syntactic structures in which these words occur. In a parallel fashion, other researchers have proposed a semantic bootstrapping hypothesis, and other multi-level bootstrapping mechanisms could also be posited. In this session, we will discuss the psycholinguistic evidence supporting this general multi-level bootstrapping framework, as well as extant counter-evidence. We will also discuss the computational models implementing such hypotheses that have been proposed in the literature so far, and the insights that can be drawn from them.

Please choose the session that you will attend below. Please note that each break-out session will be facilitated by two moderators. One moderator is in charge of engaging the audience and helping the group address the topic of the break-out; the second moderator is in charge of taking notes for future use. Both moderators will be acknowledged in the final version of the book of abstracts. If you are interested in being a moderator, please indicate so in the form.
Thank you, and we look forward to seeing you.