Language Learning Model Earns Prestigious Statistics Award

The International Society for Bayesian Analysis has awarded the Mitchell Prize, a highly prestigious award in the field of statistics and Bayesian analysis, to a team of researchers who developed a new model of the second-language learning process. The four-person team, University of Texas at Austin statisticians Abhra Sarkar and Giorgio Paulon alongside two communications researchers, Bharath Chandrasekaran and Fernando Llanos, received the prize for research modeling English speakers’ brain activity during the process of learning a tonal language (specifically, Mandarin Chinese).

English is, of course, a non-tonal language, while Mandarin makes a four-way tonal distinction. The research by Sarkar, who also received the Mitchell Prize in 2018, and his team explores the ways in which the brain “rewires” itself while learning a new language’s tonal system. Tone was selected as a useful case study for the model because English and Mandarin’s respective phonologies treat tone (or its absence) quite differently.
“This is an ambitious goal, but this could help eventually develop precision learning strategies for different people depending on how their individual brains work,” Sarkar said.

The researchers observed a group of 20 English speakers, teaching them to perceive the differences between the four phonemic tones of Mandarin Chinese. Over time, the participants learned to distinguish the tones more accurately, and the results “shed novel insights into the mechanisms underlying experience-dependent brain plasticity.” The team found that both “good” and “poor” learners needed the same amount of input to develop categorical perception; the difference, then, was that “good” learners simply learned to process this input at a quicker rate than the “poor” learners.

“The outstanding contribution of this work was to eliminate these limitations by overcoming daunting methodological and computational challenges, thereby advancing the statistical capabilities many significant steps forward through the development of a novel Bayesian model for multi-alternative decision-making in dynamic longitudinal settings,” Sarkar wrote in a summary for the funders of the Mitchell Prize.

The post Language Learning Model Earns Prestigious Statistics Award appeared first on MultiLingual.