Article accepted to IEEE/ACM Transactions on Audio, Speech and Language Processing.

On 28 September 2017, an article of ours was accepted to a special issue of IEEE/ACM Transactions on Audio, Speech, and Language Processing. The article is the result of our collaboration with Daryush D. Mehta and Jarrad Van Stan from Massachusetts Eye and Ear and Massachusetts General Hospital. The title of the article is “Modal and non-modal voice quality classification using acoustic and electroglottographic features”. It analyses a set of glottal source, vocal tract, and harmonic model features for the task of voice quality classification. In a nutshell, we try to predict whether a speaker is using a breathy, normal, strained, or rough voice. The figure below summarizes the results.


The initial manuscript was sent out for peer review in December 2016. The special issue is expected to come out by the end of this year, and links to the paper will hopefully be available in a few weeks :). For this paper we used recordings of vocally healthy speakers mimicking the requested voice qualities, so our next target is to extend the results to a much bigger database containing recordings of patients with various voice disorders. So stay tuned: if things go well, we might make our way to Seoul in 2018 for the ICASSP conference with new results.
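For readers curious what a four-way voice quality classifier looks like in code, here is a minimal, self-contained sketch. Everything in it is an illustrative placeholder: the features are synthetic random vectors and the nearest-centroid rule is a stand-in, not the feature set or classifier used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
LABELS = ["breathy", "normal", "strained", "rough"]

# Stand-in features: one row per utterance. In the paper, the columns
# would hold glottal source, vocal tract, and harmonic model features.
n_per_class, n_feats = 50, 12
X = np.vstack([rng.normal(loc=k, size=(n_per_class, n_feats))
               for k in range(len(LABELS))])
y = np.repeat(np.arange(len(LABELS)), n_per_class)

# Nearest-centroid classifier: assign each utterance to the closest
# class mean in feature space.
centroids = np.array([X[y == k].mean(axis=0) for k in range(len(LABELS))])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
print("training accuracy:", (pred == y).mean())
```

Any off-the-shelf classifier (SVM, random forest, etc.) could replace the centroid rule; the point is only the overall shape of the task: feature vectors in, one of four voice quality labels out.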

Happy pictures of all the authors below (Michal, Daryush, Jarrad, Jón).


Interspeech 2017 Conference Summary

Yu-Ren gave a really good talk about measuring voice severity.


Then, on Wednesday Anna, Inga, Matthías, and Jón answered questions about their posters to conference attendees.


During the conference, the welcome reception, and the banquet, we reconnected with many old colleagues and met many new ones in the speech processing and speech recognition world. For anyone who didn’t get a chance to attend Yu-Ren’s talk or to see the posters by Jón and Inga, we have links at the bottom of the post.


Anna’s data is also available on Malfong now.

Inga’s Icelandic Parliament ASR Corpus

Yu-Ren’s slides

Jón’s Eyra Speech Corpora Poster

We hope to see you next year!

LVL goes to Hungary


In September, Eydís will represent our group at the CogInfoCom 2017 conference. This will be LVL’s second time at CogInfoCom in Hungary, and we hope it will be an exciting conference with valuable insights. She will present her paper “Cognitive workload classification using cardiovascular measures and dynamic features.” This will be Eydís’ last conference and paper before her study-abroad semester starts later this year, so we hope she stays healthy and enjoys her time in Hungary.


Largest Icelandic LVL Group at Interspeech in 2017

This August, our LVL group members will be attending Interspeech 2017 to present their three papers, meet other folks in the speech recognition field, and have lots of fun. Two posters, “Building an ASR corpus using Althingi’s Parliamentary Speeches” and “Building ASR corpora using Eyra,” will be presented during the Wednesday special session “Digital Revolution for Under-resourced Languages 2” poster session at 13:30–15:30, so go say “Hi!” to our members if you can. The third paper, “Objective Severity Assessment From Disordered Voice Using Estimated Glottal Airflow,” will be presented as a talk by Yu-Ren on Monday afternoon.

For the Alþingi speech paper by Inga Rún, the language corpus and the Kaldi recipe are both available online, but the best resource will be Inga Rún herself, so grab a drink and find her at the Welcome Reception or the Standing Banquet.

We hope to meet you all there!

Article Publication in the Periodica Polytechnica Electrical Engineering and Computer Science journal

At CogInfoCom 2016, Eydís Huld Magnusdottir gave a great presentation on monitoring cognitive workload using vocal tract and voice source features. It was so informative, in fact, that Periodica Polytechnica Electrical Engineering and Computer Science, a great open-access journal, has published it. The article has the same name as the conference paper, “Monitoring Cognitive Workload Using Vocal Tract and Voice Source Features”. Eydís’ article is about the study she conducted in Iceland, involving cognitive workload, Stroop tasks, and nearly 100 participants, to find out whether vocal tract features perform better or worse than voice source features.

Figure: Results of speech formant tracking using the KARMA algorithm


To read the exciting conclusion, go directly to the journal’s website and read about it from home!
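To give a feel for the comparison at the heart of the study (which feature group carries more workload information), here is a toy sketch. The data, the effect sizes, and the centroid classifier below are all invented for illustration; only the overall setup, two feature blocks scored on the same binary workload labels, mirrors the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # roughly the scale of the study's participant pool

# Synthetic stand-ins: binary workload labels, plus two feature blocks.
# We deliberately make the "vocal tract" block more informative so the
# comparison has something to show; this is NOT a claim about the paper.
y = rng.integers(0, 2, size=n)
vt = rng.normal(size=(n, 6)) + 1.2 * y[:, None]   # "vocal tract" features
vs = rng.normal(size=(n, 4)) + 0.6 * y[:, None]   # "voice source" features

def centroid_accuracy(X, labels):
    """Nearest-centroid accuracy of one feature block on the labels."""
    c = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
    pred = np.argmin(np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2),
                     axis=1)
    return (pred == labels).mean()

for name, X in [("vocal tract", vt), ("voice source", vs)]:
    print(name, "accuracy:", centroid_accuracy(X, y))
```

For the study's actual answer to the vocal tract vs. voice source question, read the article itself.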



Paper accepted to an IEEE-Transactions journal

Congratulations to Yu-Ren on getting his paper accepted to the highly recognized scholarly journal in the field of speech processing, IEEE/ACM Transactions on Audio, Speech, and Language Processing! The paper would not have come about without the collaborative efforts of MIT Lincoln Laboratory, Universidad Tecnica Federico Santa Maria, Massachusetts General Hospital, and Harvard Medical School. His paper, “Evaluation of glottal inverse filtering algorithms using a physiologically based articulatory speech synthesizer,” measures the performance of different glottal inverse filtering algorithms on glottal flow generated by the VocalTractLab speech synthesizer. Since the paper has just been accepted, we don’t have a link to it yet, but we will add one as soon as it is available.
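The core evaluation idea is that, because the synthesizer provides the true glottal flow, an inverse filtering estimate can be scored directly against it. Here is a toy illustration with a hypothetical error measure; the "true" waveform and the "estimate" are synthetic placeholders, not VocalTractLab output, and the metric is just one plausible choice.

```python
import numpy as np

fs = 8000                      # sample rate in Hz
t = np.arange(fs) / fs         # one second of time samples
f0 = 120.0                     # toy fundamental frequency in Hz

# Toy "true" glottal flow: half-rectified sine pulses at f0.
true_flow = np.clip(np.sin(2 * np.pi * f0 * t), 0.0, None)

# Toy "estimate": the truth plus distortion, standing in for the
# output of some glottal inverse filtering algorithm.
estimate = true_flow + 0.05 * np.random.default_rng(2).normal(size=fs)

def snr_db(reference, est):
    """Score the estimate against the reference as an SNR in dB:
    higher means the estimate is closer to the true flow."""
    err = est - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

print("waveform SNR: %.1f dB" % snr_db(true_flow, estimate))
```

Running several algorithms through the same scoring function on the same reference flow is what makes the comparison in the paper possible.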


Edit: Volume 25, Issue 8, with Yu-Ren’s article, is now out. For those with a subscription, here is the link:

Figure: Main results showing the performance of the inverse filtering algorithms

Welcome to a new member

A new member just joined our team, all the way from California, USA. Welcome, Judy!

Judy will be working with Inga on the ASR for parliamentary speeches project, using her mad computer science skills to, among other things, design an interface that gets Inga’s research into the hands of the parliament staff.

It’s always necessary to have not only researchers but also programmers and software engineers. All theory and no fun makes us all dull boys (and unemployed).