Cortical basis for continuous speech comprehension

  • Date: May 16, 2025
  • Time: 11:00 AM - 12:30 PM (Local Time Germany)
  • Speaker: Dr. Yulia Oganian
  • Location: MPI for Intelligent Systems, Max-Planck-Ring 4
  • Room: Room 203 + Zoom
  • Host: Dr. Assaf Breska
  • Contact: chiara.zanonato@tuebingen.mpg.de
Abstract: As speech unfolds in time, auditory cortices track and respond to salient acoustic and linguistic features in the speech signal, a phenomenon known as speech tracking. Speech tracking is a prerequisite for successful mapping between the acoustic signal and linguistic categories. In my talk I will discuss recent work in which we characterise local cortical encoding of vowel formants and categories in naturalistic speech in human speech cortex. I will then discuss how encoding models can be used to dissociate tracking of syllabic content from oscillatory entrainment to it, finding no evidence for the latter.

Bio: Yulia is a cognitive neuroscientist with interests in everything related to speech and language. Prior to starting her lab in Tübingen, she worked as a postdoc at the University of California, San Francisco, and received her PhD from the Freie Universität and the Bernstein Center for Computational Neuroscience (BCCN) in Berlin. Outside of science, Yulia studies trilingual language acquisition with her two sons and waits for them to walk well enough to explore the hiking trails around Tübingen. Yulia speaks Russian, German, Hebrew, and English in her day-to-day life. She sometimes tries to speak Spanish and French, but mostly with little success.

Lab's webpage: https://hvclab.github.io/
