SEEING SPEECH

About the project

This online resource is the product of a collaboration between researchers at six Scottish universities: the University of Glasgow, Queen Margaret University Edinburgh, Edinburgh Napier University, the University of Strathclyde, the University of Edinburgh and the University of Aberdeen. The resource provides teachers and students of practical phonetics with ultrasound tongue imaging (UTI) and lip video of speech, magnetic resonance imaging (MRI) video of speech, and 2D midsagittal head animations based on MRI and UTI data.

To date, the resource has been created in three phases:

  1. a Carnegie Trust for the Universities of Scotland grant (2011-2013) funded the initial development and design of a single resource;
  2. an Arts and Humanities Research Council grant (2014-2015) funded the extension into two sister resources, Seeing Speech and Dynamic Dialects;
  3. an Economic and Social Research Council grant (2016-2019) funded the /r/ and /l/ in English resource.

Acknowledgements

We thank the Carnegie Trust for the Universities of Scotland for funding the development of the initial resource, and the Arts and Humanities Research Council and the Economic and Social Research Council for providing funding to extend it further. We also thank: the University of Glasgow and University College London for providing web design expertise; the Clinical Audiology, Speech and Language research centre at Queen Margaret University Edinburgh for allowing use of their ultrasound tongue imaging recording studios; Edinburgh University's Edinburgh Imaging Facility QMRI for their expertise and help with MRI recordings; Steve Cowen of the CASL lab for his technical assistance during MRI recordings; Edinburgh Napier University's School of Computing for producing the articulatory animations; and Alan Wrench of Articulate Instruments for his help and advice. We would also like to thank Prof. Janet Beck of Queen Margaret University and Prof. John Esling of the University of Victoria for contributing audio and articulatory recordings of modelled speech sounds.

The project team

Core team

Advisory panel