Speakers

Meet our distinguished panel of experts leading the Child Speech AI Workshop 2025

Abeer Alwan
Keynote Speaker

UCLA

Abeer Alwan received her Ph.D. in Electrical Engineering and Computer Science from MIT in 1992. Since then, she has been with the UCLA ECE department, where she is now a Distinguished Professor and directs the Speech Processing and Auditory Perception Laboratory (http://www.seas.ucla.edu/spapl/). Dr. Alwan’s research interests are in the areas of speech production and perception modeling and their applications to speech technology, such as automatic speech recognition, speaker identification, and text-to-speech synthesis. Her focus is on limited-data or low-resource systems, where knowledge of speech production, speech perception, and linguistics can be critical to system performance. Current projects include detecting depression from speech signals, children’s speech recognition, speaker recognition with limited data, and the recognition of low-resource dialects. She is the recipient of several awards, including the NSF Research Initiation and CAREER Awards, the NIH FIRST Award, the UCLA-TRW Excellence in Teaching Award, the Okawa Foundation Award in Telecommunication, and the Engineer’s Council Educator Award. She is a Fellow of the Acoustical Society of America, the IEEE, and the International Speech Communication Association (ISCA). She was a Fellow at the Radcliffe Institute, Harvard University, co-Editor-in-Chief of Speech Communication, and Associate Editor of both JASA and IEEE TASLP.

Advances and Challenges of Child ASR

14:05–14:35 · CC308

Mark Hasegawa-Johnson
Invited Speaker

UIUC

Mark A. Hasegawa-Johnson received his B.S., M.S., and Ph.D. in Electrical Engineering from MIT in 1996. He was a postdoctoral researcher at the University of California, Los Angeles, and is now the M.E. Van Valkenburg Professor of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. Dr. Hasegawa-Johnson is currently Editor-in-Chief of the IEEE Transactions on Audio, Speech, and Language Processing. His research interests include automatic speech understanding as an empowerment tool for parents, children, learners, and people with disabilities, as well as unsupervised learning methods for speech technology. He is a Fellow of the IEEE, the ASA, and ISCA.

Speech as a modality for the characterization and adaptation of neurodiversity

14:35–15:05 · CC308

Gopala Anumanchipalli
Invited Speaker

UC Berkeley

Building AI for Children’s Universal Language Function: From Real-World Need to Scalable Technology and Deployment

15:05–15:35 · CC308

Emma O'Neill
Invited Speaker

Curriculum Associates

Dr. Emma O’Neill is a Senior Data Scientist at Curriculum Associates’ AI Labs, working on voice-based technologies and tools for education. She has a background in Computational Linguistics, with a multidisciplinary degree in Linguistics and Informatics from the University of Edinburgh and a PhD in Computer Science from University College Dublin. Her doctoral research focused on Human Language Technologies and spanned many areas, including Automatic Speech Recognition, Spoken Variation, and Child Literacy Acquisition. She now works to develop, test, (and break) AI tools designed for students and educators, with the goal of ensuring that the technology works equitably for all users.

Equitable Voice Technologies for Real World Classrooms

15:50–16:20 · CC308

Tiantian Feng
Invited Speaker

USC

Tiantian Feng is a Postdoctoral Researcher at the University of Southern California (USC), Los Angeles, CA. He received both his M.S. and Ph.D. degrees from USC and has industrial research experience at Amazon and Meta. His research interests include human-centered computing, affective computing, wearable computing, machine learning for speech, multimodal modeling, and trustworthy technology. He has published over 50 papers in peer-reviewed journals and conferences, including KDD, ACM MM, IEEE JBHI, ICASSP, INTERSPEECH, JMIR, and Nature Scientific Data, and has contributed several notable datasets and benchmarks in wearable sensing, speech modeling, and multimedia. His work has been recognized with multiple honors, including a Best Student Paper Finalist award at ICASSP 2019 and first place in the Speech Emotion Recognition in Naturalistic Conditions Challenge at INTERSPEECH 2025. He also serves as an active reviewer for top-tier journals, including IEEE Transactions on Affective Computing, Computer Speech & Language, IEEE EMBC, and IEEE JBHI.

Developing Robust Speaker Diarization for Child-Adult Dyadic Interaction

16:20–16:50 · CC308