Radio Host Interviews Jón Guðnason about Robotics and AI

DARPA Robotics Challenge concept

Recently, Jón was interviewed on the Icelandic afternoon radio show Síðdegisútvarpið. On the program, he discussed recent progress in robotics and artificial intelligence. The conversation focused on a new video from Boston Dynamics, in which their robots get about by opening doors, and on how the latest developments in AI are allowing robots to sense their surroundings and behave in more natural ways.

The interview was conducted at the Icelandic National Broadcasting Service (RÚV).


ASR: A Smashing Hit at UTMessan!

On Saturday, Róbert and Anna also presented a prototype of an automatic speech recognition web service for general Icelandic (Vefgátt fyrir íslenskan talgreini). The speech recognizer will be made available to the public later this spring.

LVL members explain how to use the ASR.

Róbert offers the microphone to attendees so they can try out the Icelandic ASR for themselves.


Funding Approved for Icelandic Youth Language Study

The Eyra app

We are delighted to be participating in the project “Icelandic Youth Language: An Empirical Study of Communicative Resources”, headed by Dr. Helga Hilmisdóttir at The Árni Magnússon Institute for Icelandic Studies. Using Eyra, teenage group conversations will be recorded and annotated during the project and used to study linguistic features such as communication patterns, vocabulary, imagery, inflection, sentence order and voice. The data will also be used to improve language technology for teenage voices.


Does the future understand Icelandic? – UTMessan Talk

On Friday, LVL will be at the UTMessan conference being held here in Reykjavík. UTMessan is an IT conference presented in both Icelandic and English. The talk, “Skilur framtíðin íslensku? (Does the future understand Icelandic?)”, will be presented by Anna Björk. This will be our first time presenting at UTMessan, so we are very excited.

If you are interested in attending, below are the talk details:

Abstract: In recent years, language technology and artificial intelligence have made a giant leap forward. This revolution is clearest in the development of intelligent assistants like Siri, Alexa, Google Assistant and Cortana, which are becoming part of the family in ever more households. In the discussion about language technology for Icelandic, the most prominent question has been how and when we are going to speak Icelandic with those assistants.

But what about language technology for the purpose of increasing efficiency and service quality of companies and institutions? This talk discusses language technology in this context and what is needed so that Icelandic companies can join the rapid development in the field.

Watch the talk: https://www.youtube.com/watch?v=IaOYG23R7_k
Date: 2 February, 2018
Location: Tækjatal – Chatbot Talks, Silfurberg B, Harpa, Reykjavík, Iceland
Language: Icelandic (perfect opportunity to brush up on your Icelandic)
Time: 11:55am – 12:25pm
Speaker: Anna Björk

More details at UTMessan

We hope to see you there!

Delicious Summer Grill Party

Now that fall is in full swing, it’s a great time to reminisce about our summer. We arranged a grill party to celebrate our publication successes: three papers were accepted to Interspeech 2017 and two were published in journals. On a nice, cloudy Thursday afternoon, together with our friends and family, we enjoyed grilled halloumi, meats, and vegetables prepared by our grill masters, Róbert and Michal.



Article accepted to IEEE/ACM Transactions on Audio, Speech and Language Processing

On 28 September 2017, our article was accepted to a special issue of IEEE/ACM Transactions on Audio, Speech and Language Processing. The article is the result of our collaboration with Daryush D. Mehta and Jarrad Van Stan from Massachusetts Eye and Ear and Massachusetts General Hospital. The title of the article is “Modal and nonmodal voice quality classification using acoustic and electroglottographic features.” It analyses a set of glottal source, vocal tract and harmonic model features for the task of voice quality classification. In a nutshell, we are trying to predict whether a speaker is speaking in a breathy, normal, strained or rough voice. The figure below summarizes the results.

Classification results obtained with the COVAREP feature set.

The initial manuscript was sent for peer review in December 2016. The special issue is expected to come out by the end of this year, and links to the paper will be available in a few weeks, hopefully :). For this paper, we used recordings of healthy speakers mimicking the requested voice qualities, so our next target is to extend the results to a much bigger database containing recordings of patients with various voice disorders. So stay tuned: if things go well, we might make our way to Seoul in 2018 for the ICASSP conference with new results.
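For readers curious what this kind of classification setup can look like in practice, here is a minimal sketch. It assumes per-utterance feature vectors have already been extracted (for example with a toolkit such as COVAREP) and saved to a hypothetical features.csv with a label column; it illustrates a generic feature-based classifier, not the exact method used in the paper.

```python
# Illustrative sketch only: a generic classifier over precomputed per-utterance
# acoustic features, predicting one of four voice qualities
# (breathy, normal, strained, rough). The input file "features.csv" is a
# hypothetical example with one row per utterance and a "label" column.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load precomputed features and labels.
df = pd.read_csv("features.csv")
X = df.drop(columns=["label"]).values
y = df["label"].values

# Standardise the features and fit an RBF support vector classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# 5-fold cross-validated accuracy over the four voice quality classes.
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```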

Happy pictures of all authors below (Michal, Daryush, Jarrad, Jón).


Link: IEEE/ACM Transactions on Audio, Speech and Language Processing

Preprint version