The Vocal-Vagal Communication Channel

Humans and other mammals spontaneously produce and respond to the emotional tone in vocal communication, independent of linguistic content. This capacity would presumably serve adaptive functions in both affiliative and defensive social behaviors. We posit that vagal innervation of the vocal apparatus is a key component of this communication mechanism. Premotor parasympathetic neurons in the Nucleus Ambiguus (NAmb) project through the ventral branch of the vagus to modulate the tone of muscles involved in vocal production. This likely alters acoustic features of the voice, such as average pitch, pitch-modulation statistics, acoustic resonance, and vibrato. These acoustic features could encode information about the speaker's global autonomic state and thus, indirectly, their emotional state and social intentions. Vagal sensory processes in the auditory periphery could in turn underlie autonomic mirroring: detecting acoustic cues correlated with the vocalizer's vagal tone and directly modulating the vagal tone of the listener.

We are now testing this broad hypothesis in a multi-lab collaboration funded by the Kavli Institute for Brain and Mind at UCSD. Ongoing experiments include:

  1. In collaboration with the Exercise and Physical Activity Resource Center (EPARC) at the UCSD School of Medicine, we are recording autonomic physiology and vocal signals during spontaneous speech production and passive speech reception, in order to measure the mutual information between vocal acoustic features and the autonomic states of both speakers and listeners (a minimal sketch of such a mutual-information estimate follows this list).
  2. In collaboration with the Autism Center of Excellence at the UCSD School of Medicine, we are collecting vocal samples from Autistic and typically developing children to test whether deficits in the vocal-vagal communication channel are present, and whether any such deficits predict or underlie clinical deficits in emotional communication.
  3. In collaboration with the Gentner Lab (Psychology) and Bakovic Lab (Linguistics) at UCSD, we are applying unsupervised machine learning to our vocal data sets to identify dimensions of acoustic feature space that correlate with autonomic states, linguistic criteria for emotional prosody, and/or clinical diagnosis in Autism (a sketch of one such unsupervised analysis also follows this list).
  4. Because of parasympathetic proprioceptive neurons, long, slow exhalation coupled with prosodic vocalization is predicted to stimulate vagal activity. We are therefore recording the autonomic physiology of singers to test whether singing efficiently increases vagal tone. This would provide a convenient non-invasive tool for manipulating vagal tone in the laboratory, allowing us to test whether we can causally induce the acoustic features that putatively encode vagal tone. If we can show that singing, or listening to it, elevates vagal activity, this would also have therapeutic potential and interesting cultural implications.
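To make the analysis in item 1 concrete, here is a minimal sketch of estimating the mutual information between a per-utterance acoustic feature and a simultaneous autonomic index. The simple histogram (plug-in) estimator, the variable names, and the synthetic data are illustrative assumptions; a real analysis would use bias-corrected estimators on the measured recordings.

```python
import numpy as np

def mutual_information_bits(x, y, bins=16):
    """Plug-in histogram estimate of the mutual information (bits)
    between two paired 1-D measurements."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal over x
    py = pxy.sum(axis=0, keepdims=True)  # marginal over y
    nz = pxy > 0                         # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

# Placeholder data: mean pitch (Hz) per utterance and concurrent heart rate (bpm).
rng = np.random.default_rng(0)
pitch = rng.normal(200, 20, size=500)
heart_rate = 70 + 0.1 * (pitch - 200) + rng.normal(0, 2, size=500)
print(f"MI(pitch; heart rate) = {mutual_information_bits(pitch, heart_rate):.2f} bits")
```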
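And for item 3, a minimal sketch of one generic unsupervised approach, assuming a matrix of acoustic features per utterance. PCA here is a stand-in, not necessarily the method the collaboration uses, and all data are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder data: one row per utterance, one column per acoustic feature
# (e.g., mean pitch, pitch variance, vibrato rate, formant ratios, ...).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 12))
vagal_index = rng.normal(size=200)  # e.g., RSA amplitude during the utterance

# Standardize, then look for low-dimensional structure.
z = (features - features.mean(axis=0)) / features.std(axis=0)
pca = PCA(n_components=5)
components = pca.fit_transform(z)

# Ask which unsupervised dimensions track the autonomic index.
for i in range(components.shape[1]):
    r = np.corrcoef(components[:, i], vagal_index)[0, 1]
    print(f"PC{i + 1}: r = {r:+.2f}, "
          f"explained variance = {pca.explained_variance_ratio_[i]:.2f}")
```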

    Spectral Features of Vocal Signals. Features in the spectrogram of vocal signals, such as harmonic stacks, the duration of phonations, small oscillations in pitch (vibrato), and the distribution of pitches over time, may encode information about the vagal tone of the vocalizer. These features are apparent in the Modulation Power Spectrum.
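The Modulation Power Spectrum is the two-dimensional power spectrum of the log spectrogram, with axes of temporal modulation (Hz) and spectral modulation (cycles/Hz). The sketch below shows one way to compute it with SciPy; the window length, overlap, and normalization are illustrative choices, not the project's actual analysis parameters.

```python
import numpy as np
from scipy.signal import spectrogram

def modulation_power_spectrum(wave, fs, nperseg=512, noverlap=448):
    """2-D power spectrum of the log spectrogram of a sound waveform."""
    f, t, sxx = spectrogram(wave, fs=fs, nperseg=nperseg, noverlap=noverlap)
    log_s = np.log(sxx + 1e-12)      # log power; small floor avoids log(0)
    log_s -= log_s.mean()            # remove the mean so DC doesn't dominate
    mps = np.abs(np.fft.fftshift(np.fft.fft2(log_s))) ** 2
    # Modulation axes: spectral (cycles/Hz) and temporal (Hz).
    spectral_mod = np.fft.fftshift(np.fft.fftfreq(len(f), d=f[1] - f[0]))
    temporal_mod = np.fft.fftshift(np.fft.fftfreq(len(t), d=t[1] - t[0]))
    return spectral_mod, temporal_mod, mps

# Example: a 1-s synthetic tone at 200 Hz with 5 Hz vibrato.
fs = 16000
t = np.arange(fs) / fs
wave = np.sin(2 * np.pi * (200 * t + 4 * np.sin(2 * np.pi * 5 * t)))
spectral_mod, temporal_mod, mps = modulation_power_spectrum(wave, fs)
```

In such a plot, vibrato shows up as energy near the vibrato rate along the temporal-modulation axis, and a harmonic stack with fundamental f0 shows up at multiples of 1/f0 along the spectral-modulation axis.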

    Using Respiratory Sinus Arrhythmia to Index Vagal Tone During Vocalization. A. Breathing at rest, as measured by chest expansion. B. Electrocardiogram (ECG) recorded simultaneously with panel A. C. Sound waveform during vocalizing. D. Breathing measured simultaneously with C (vertical scale same as in A). Note that the inhalations are faster (steeper slope) and of greater amplitude than at rest, while the exhalations (corresponding to the vocal phrases in C) are much longer in duration. E. ECG recorded simultaneously with panels C and D. F. Using data like those shown in A-E, we compute the heart rate as a function of time in the breathing cycle, averaged over many breath cycles. Here the subject was recorded before, during, and after 60 minutes of singing, a vocal exercise predicted to increase vagal tone. At rest, the heart rate showed no trend over the breath cycle (black curve is flat), indicating low vagal tone. During vocalization, the heart rate slowed during each exhalation (red, green, and blue curves slope downward), indicating increased vagal activity. On a slower timescale, the average heart rate declined gradually over the 60 minutes of singing (compare the red curve to the blue curve). At rest after singing, the overall heart rate was much lower than before (magenta curve is below black curve) and Respiratory Sinus Arrhythmia persisted (magenta curve slopes downward).
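Here is a minimal sketch of the computation behind panel F, under simplifying assumptions: R-peaks are found by crude thresholding (a real pipeline would use a robust detector such as Pan-Tompkins), inhalation onsets are taken as troughs of the chest-expansion signal, and instantaneous heart rate is then averaged by phase within the breath cycle. All function and variable names are ours, not the project's actual pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_by_breath_phase(ecg, breath, fs, n_bins=12):
    """Mean instantaneous heart rate as a function of phase in the breath cycle."""
    # R-peak detection; the percentile threshold is a placeholder.
    r_peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 99),
                            distance=int(0.4 * fs))
    r_times = r_peaks / fs
    inst_hr = 60.0 / np.diff(r_times)   # beats per minute for each R-R interval
    hr_times = r_times[1:]

    # Inhalation onsets = troughs of the chest-expansion signal; phase runs
    # from 0 to 1 across each breath cycle.
    troughs, _ = find_peaks(-breath, distance=int(1.0 * fs))
    onsets = troughs / fs
    phase = np.full(hr_times.shape, np.nan)
    for t0, t1 in zip(onsets[:-1], onsets[1:]):
        in_cycle = (hr_times >= t0) & (hr_times < t1)
        phase[in_cycle] = (hr_times[in_cycle] - t0) / (t1 - t0)

    # Average heart rate within each phase bin, pooled over breath cycles.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, edges) - 1
    mean_hr = np.array([inst_hr[idx == b].mean() if np.any(idx == b) else np.nan
                        for b in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), mean_hr
```

A flat returned curve corresponds to the black trace in panel F (low vagal tone); a downward slope across the exhalation portion of the cycle corresponds to the red, green, and blue traces (increased vagal activity).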


    Credits: A huge number of people are or have been involved in this project, including: Maxwell Chen, Eliza Zhang, Huixin Yan, and Michaela Juels (Reinagel lab); Anna Mai (Gentner/Bakovic labs); Karen Pierce, Cindy Carter, Eric Courchesne, Srinivasa Nalabolu, Elizabeth Bacon, and Adrienne Moore (Autism Center of Excellence); Linda Hill, Jeanne Nichols, David Wing, Michael Higgins and Heather Furr (EPARC). We are indebted to Stephen Porges for consultation in early phases of this project.