On 28/9/2017, our article was accepted to the special issue of IEEE/ACM Transactions on Audio, Speech and Language Processing. The article is a result of our collaboration with Daryush D. Mehta and Jarrad Van Stan from the Massachusetts General Hospital. The title of the article is “Modal and nonmodal voice quality classification using acoustic and electroglottographic features.” It analyses a set of glottal source, vocal tract, and harmonic model features for the task of voice quality classification. In a nutshell, we try to predict whether a speaker is speaking in a breathy, normal, strained, or rough voice. The figure below summarizes the achieved results.
The initial manuscript was sent for peer review in December 2016. The special issue is expected to come out by the end of this year, and links to the paper will hopefully be available in a few weeks :). For this paper, we used recordings of healthy speakers mimicking the requested voice qualities, so our next goal is to extend the results to a much larger database containing recordings of patients with various voice disorders. So stay tuned — if things go well, we might make our way to Seoul in 2018 for the ICASSP conference with new results.
Happy pictures of all the authors below (Michal, Daryush, Jarrad, Jón).
Link: IEEE/ACM Transactions on Audio, Speech and Language Processing