About me

“To know how much there is to know is the beginning of learning to live.” —Dorothy West

I am a Ph.D. student at the Erik Jonsson School of Engineering and Computer Science at The University of Texas at Dallas, working in the Multimodal Signal Processing (MSP) Laboratory on developing machine learning algorithms for studying expressive behavior. My research topics include emotion recognition, self-supervised learning, multimodal modeling, handling missing modalities, and audio and video signal processing. I am part of the team collecting the largest spontaneous speech emotion dataset, built from real-world podcast audio.

News

  • Our work “Perceptual Evaluation of Audio-Visual Synchrony Grounded in Viewers’ Opinion Scores” was accepted to ECCV 2024.
  • Our work “Bridging Emotions Across Languages: Low Rank Adaptation for Multilingual Speech Emotion Recognition” was accepted to Interspeech 2024.
  • Our work “Odyssey 2024 - Speech Emotion Recognition Challenge: Dataset, Baseline Framework, and Results” was accepted to The Speaker and Language Recognition Workshop (Odyssey 2024).
  • Our work “Versatile Audio-Visual Learning for Emotion Recognition,” was accepted to IEEE Transactions on Affective Computing.
  • Applied Scientist intern at Amazon Web Services (AWS) AI Labs, Summer 2024.