Get to Know the Alexa Speech Team at Interspeech 2016
The Alexa Speech Science team looks forward to meeting you at Interspeech 2016 on September 8-12, 2016! Come and visit us at Booth #2 to learn more about our research and career opportunities. For an overview of the conference, click here. Read on for more information about our technology and team.
Alexa Speech Science Technology
The Amazon Alexa Speech Science and Machine Learning team employs cutting-edge research and technology to create magical customer experiences with Alexa, the voice service that powers Amazon’s family of Echo products, Amazon Fire TV, and more.
Alexa is a spoken language understanding system composed of several data-driven audio and text processing components that process and respond to customer voice requests. These components include signal processing, wake word detection, automatic speech recognition (ASR), natural language understanding (NLU), question answering, dialog management, and text-to-speech synthesis.
For example, on a hands-free WiFi-connected audio device such as Echo, a customer may ask, “Alexa, what’s the weather in Boston?” The wake word engine detects the word “Alexa” and begins streaming audio to the speech platform in the Amazon cloud. The platform passes the audio stream to the ASR module, which returns a list of the most likely transcriptions of the user’s speech. This result is then transferred to NLU, which analyzes the top ASR results to extract the most likely interpretation of the user’s request, consisting of an intent (e.g., “GetWeatherForecast”) and any associated slots (e.g., “Boston”). Based on the type of intent, the speech platform routes the request to the appropriate skill, which then specifies what Alexa should say to the user. Finally, text-to-speech technology converts Alexa’s text response into audio, which is streamed to the device for playback through its speaker.
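The flow above can be sketched in a few lines of code. This is a minimal illustration of the pipeline described in the text, not Amazon's actual implementation; all function and skill names here are hypothetical stand-ins for the real ASR, NLU, and skill-routing components.

```python
# Illustrative sketch of the Alexa request flow: ASR -> NLU -> skill -> response.
# All names here are hypothetical; the real components are proprietary services.

def asr(audio):
    """Return an n-best list of transcriptions (most likely first)."""
    # A real ASR module decodes the streamed audio; here we stub the result.
    return ["what's the weather in boston", "what's the weather in austin"]

def nlu(nbest):
    """Map the top ASR hypotheses to an intent plus slot values."""
    top = nbest[0]
    if "weather" in top:
        city = top.rsplit(" in ", 1)[-1].title()
        return {"intent": "GetWeatherForecast", "slots": {"City": city}}
    return {"intent": "Unknown", "slots": {}}

SKILLS = {
    # Each skill turns an interpretation into Alexa's text response.
    "GetWeatherForecast": lambda slots: f"Here is the weather in {slots['City']}.",
}

def handle_request(audio):
    interpretation = nlu(asr(audio))
    skill = SKILLS.get(interpretation["intent"],
                       lambda slots: "Sorry, I don't know that.")
    # A text-to-speech module would then synthesize this text into audio.
    return skill(interpretation["slots"])

print(handle_request(b"..."))  # → Here is the weather in Boston.
```

Each stage only sees the previous stage's output, which is why the text emphasizes routing: the platform dispatches on the intent name alone, and the skill never touches raw audio.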
This seamless interaction requires relentless focus on the customer experience and customer feedback. The speech science team incorporates learning from every customer interaction to continuously improve Alexa, using highly scalable deep learning techniques. The Amazon speech science team trains deep neural networks on large datasets using distributed processing, working at massive Amazon scale to optimize training for the AWS network. Learning at scale requires the right balance of invention and simplification to find the set of algorithms that maximizes Alexa’s accuracy given the data. The challenge of interacting with the Amazon-scale catalog of shopping, music, and media requires world-class solutions for almost every known NLP task, from anaphora resolution to semantic parsing.
Conference Papers
Past Interspeech publications and submissions from the Amazon team are listed below.
2015
- Robust i-vector based Adaptation of DNN Acoustic Model for Speech Recognition
- Scalable Distributed DNN Training Using Commodity GPU Cloud Computing
- fMLLR based feature-space speaker adaptation of DNN acoustic models
- Accurate Endpointing with Expected Pause Duration
2016
- Anchored Speech Detection
- Multi-task learning and Weighted Cross-entropy for DNN-based Keyword Spotting
- LatticeRNN: Recurrent Neural Networks over Lattices
- Optimizing Speech Recognition Evaluation Using Stratified Sampling
- Active and Semi-Supervised Learning in ASR: Benefits on the Acoustic and Language Models
- Model Compression Applied to Small-Footprint Keyword Spotting
Meet the Team
Are you ready for your next opportunity? Check out our open positions on this page, and meet some of our speech scientists here. We have global opportunities available, and speech and machine learning scientists from the following locations will be available to meet:
- Cambridge, Massachusetts
- Rohit Prasad, Vice President and Head Scientist
- Shankar Ananthakrishnan, Machine Learning Manager; PhD, University of Southern California
- Shiv Vitaladevuni, Senior Machine Learning Manager; PhD, University of Maryland
- George Zavaliagkos, Senior Machine Learning Manager; PhD, Northeastern University
- Imre Kiss, Machine Learning Manager, NLP; PhD, Tampere University of Technology
- Alborz Geramifard, Machine Learning Manager; PhD, Massachusetts Institute of Technology
- Francois Mairesse, Sr. Machine Learning Scientist; PhD, University of Sheffield
- Janet Slifka, Senior Manager, Data Scientist; PhD, Massachusetts Institute of Technology
- Sunnyvale, California
- Arindam Mandal, Senior Machine Learning Manager; PhD, University of Washington
- Ashwin Ram, Senior Machine Learning Manager, Artificial Intelligence; PhD, Yale University
- Zornitsa Kozareva, Machine Learning Manager, NLP; PhD, University of Alicante
- Seattle, Washington
- Bjorn Hoffmeister, Senior Machine Learning Manager; PhD, Rheinisch-Westfälische Technische Hochschule Aachen
- Pittsburgh, Pennsylvania
- Thomas Schaaf, Machine Learning Manager; PhD, University of Karlsruhe
- Aachen, Germany
- Max Bisani, Senior Machine Learning Manager; PhD, Rheinisch-Westfälische Technische Hochschule
If you would like to meet with a speech scientist in person at the conference, please contact interspeech2016@amazon.com.