Compact System


Friday, 15 July 2011

Google Americas Faculty Summit Day 1: Mobile Search

Posted by Johan Schalkwyk, Software Engineer

On July 14 and 15, we held our seventh annual Faculty Summit for the Americas with our New York City offices hosting for the first time. Over the next few days, we will be bringing you a series of blog posts dedicated to sharing the Summit's events, topics and speakers. --Ed

Google’s mobile speech team has a lofty goal: recognize any search query spoken in English and return the relevant results. Regardless of whether your accent skews toward a Southern drawl, a Boston twang, or anything in between, spoken searches like “navigate to the Metropolitan Museum,” “call California Pizza Kitchen” or “weather, Scarsdale, New York” should provide immediate responses with a map, the voice of the hostess at your favorite pizza place or an online weather report. The responses must be fast and accurate or people will stop using the tool, and—given that the number of speech queries has more than doubled over the past year—the team is clearly succeeding.

As a software engineer on the mobile speech team, I took the opportunity of this week's Faculty Summit to present some of the interesting challenges in developing and implementing mobile search. One of the first puzzles we have to solve is how to train a computer system to recognize speech queries. There are two aspects to consider: the acoustic model, which captures the sounds of letters and words in a language; and the language model, which in English is essentially grammar, or what allows us to predict the words that follow one another. We can build the language model from the huge amount of data gathered in our query logs. The acoustic model, however, is more challenging.
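The post doesn't show how a language model is built from query logs, but the idea can be sketched with a toy bigram model. The sample `query_log` below is a made-up stand-in for real (vastly larger) search logs:

```python
from collections import defaultdict

def train_bigram_model(queries):
    """Count bigram frequencies over tokenized queries (toy language model)."""
    counts = defaultdict(lambda: defaultdict(int))
    for query in queries:
        tokens = ["<s>"] + query.lower().split() + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            counts[prev][word] += 1
    # Convert raw counts to conditional probabilities P(word | prev).
    model = {}
    for prev, next_counts in counts.items():
        total = sum(next_counts.values())
        model[prev] = {w: c / total for w, c in next_counts.items()}
    return model

# Toy query log standing in for real search logs.
query_log = [
    "weather scarsdale new york",
    "weather new york",
    "navigate to the metropolitan museum",
]
model = train_bigram_model(query_log)
print(model["weather"])  # P(next word | "weather")
```

A production language model would use far higher-order n-grams, smoothing, and distributed training, but the core "predict words that follow one another from logged queries" idea is the same.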

To build our acoustic model, we could conduct "supervised learning," collecting 100+ hours of audio data from search queries and then transcribing and labeling it. We use this data to translate a speech query into a written query. This approach works fairly well, but it doesn't improve as we collect more audio data. Thus, we use an "unsupervised model," in which we continuously add more audio data to our training set as users perform speech queries.
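The post doesn't describe how utterances are selected for unsupervised training. One common approach (an assumption here, not necessarily Google's exact mechanism) is to keep only utterances the recognizer itself transcribed with high confidence; the threshold and record format below are illustrative:

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; real systems tune this carefully

def select_training_data(recognized_utterances):
    """Keep only utterances transcribed with high confidence, so the
    training set grows from live traffic without manual labeling."""
    return [
        (u["audio_id"], u["transcript"])
        for u in recognized_utterances
        if u["confidence"] >= CONFIDENCE_THRESHOLD
    ]

batch = [
    {"audio_id": "a1", "transcript": "weather in boston", "confidence": 0.97},
    {"audio_id": "a2", "transcript": "call pizza place", "confidence": 0.55},
]
print(select_training_data(batch))  # only the high-confidence utterance survives
```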

Given the scale of this system, another interesting challenge is testing accuracy. The traditional approach is to have human testers run assessments. Over the past year, however, we have determined that our automated system matches or exceeds the accuracy of our human testers, so we've decided to create a new method for automated testing at scale, a project we are working on now.

The current voice search system is trained on over 230 billion words and has a one million word vocabulary, meaning it understands all the different contexts in which those one million words can be used. It requires multiple CPU decades for training and data processing, plus a significant amount of storage, so this is an area where Google’s large infrastructure is essential. It’s exciting to be a part of such cutting edge research, and the Faculty Summit was an excellent opportunity to share our latest innovations with people who are equally inspired by this area of computer science.
Posted in Education, Voice Search

Friday, 1 April 2011

Ig-pay Atin-lay Oice-vay Earch-say

Posted by Martin Jansche and Alex Salcianu, Google Speech Team

As you might know, Google Voice Search is available in more than two dozen languages and dialects, making it easy to perform Google searches just by speaking into your phone.

Today it is our pleasure to announce the launch of Pig Latin Voice Search!

What is Pig Latin you may ask? Wikipedia describes it as a language game where, for each English word, the first consonant (or consonant cluster) is moved to the end of the word and an “ay” is affixed (for example, “pig” yields “ig-pay” and “search” yields “earch-say”).
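The transformation described above is simple enough to sketch in a few lines of Python. This is a toy converter handling only the basic consonant-cluster rule given here; conventions for vowel-initial words vary:

```python
import re

def to_pig_latin(word):
    """Move a leading consonant cluster to the end of the word and affix 'ay'."""
    word = word.lower()
    match = re.match(r"[^aeiou]+", word)  # leading consonant cluster, if any
    if match:
        cluster = match.group()
        return word[len(cluster):] + "-" + cluster + "ay"
    # Vowel-initial words: one common convention simply affixes "-ay".
    return word + "-ay"

print(to_pig_latin("pig"))     # ig-pay
print(to_pig_latin("search"))  # earch-say
print(to_pig_latin("latin"))   # atin-lay
```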

Our Pig Latin Voice Search is even more fun than our other languages, because when you speak in Pig Latin, our speech recognizer not only recognizes your piggy speech but also translates it automagically to normal English and does a Google search.



To configure Pig Latin Voice Search on your Android phone, just go to Settings, select "Voice input & output settings", and then "Voice recognizer settings". In the list of languages you'll see Pig Latin. Just select it and you are ready to roll in the mud!

It also works on iPhone with the Google Search app. In the app, tap the Settings icon, then "Voice Search" and select Pig Latin.

Ave-hay un-fay ith-way Ig-pay Atin-lay.


Pig Latin Voice Search works on Android 2.2 (Froyo) and later Android versions. If you don't already have Google Voice Search on your Android phone, scan or tap this QR code to download it.

The list of languages and dialects now supported by Google Voice Search includes:
  • US English, UK English, Australian English, Indian English, South African English
  • Spanish from Spain, Mexico, Argentina, and Latin America
  • French (France), Italian, and Portuguese (Brazil)
  • German (Germany) and Dutch
  • Russian, Polish, and Czech
  • Turkish
  • Japanese, Korean, Mandarin (Mainland China and Taiwan), and Cantonese
  • Bahasa Indonesia and Bahasa Malaysia
  • Afrikaans and isiZulu
  • Latin
  • Pig Latin
Posted in Voice Search

Wednesday, 30 March 2011

Word of Mouth: Introducing Voice Search for Indonesian, Malaysian and Latin American Spanish

Posted by Linne Ha, International Program Manager

Read more about the launch of Voice Search in Latin American Spanish on the Google América Latina blog.

Today we are excited to announce the launch of Voice Search in Indonesian, Malaysian, and Latin American Spanish, making Voice Search available in over two dozen languages and accents since our first launch in November 2008. This accomplishment would not have been possible without the help of local users in the region - really, we couldn't have done it without them. Let me explain:

In 2010 we launched Voice Search in Dutch, the first language where we used the "word of mouth" project, a crowd-sourcing effort to collect the most accurate voice data possible. The traditional method of acquiring voice samples is to license the data from companies that specialize in the distribution of speech and text databases. However, from day one we knew that to build the most accurate Voice Search acoustic models possible, the best data would come from the people who would use Voice Search once it launched - our users.

Since then, in each country we found small groups of people who were avid fans of Google products and were part of a large social network, either in local communities or online. We gave them phones and asked them to collect voice samples from their friends and family. Everyone was required to sign a consent form, and all voice samples were anonymized. When possible, they also helped test early versions of Voice Search as the product got closer to launch.

Building a speech recognizer is not just limited to localizing the user interface. We require thousands of hours of raw data to capture regional accents and idiomatic speech in all sorts of recording environments to mimic daily life use cases. For instance, when developing Voice Search for Latin American Spanish, we paid particular attention to Mexican and Argentinean Spanish. These two accents are more different from one another than any other pair of widely-used accents in all of South and Central America. Samples collected in these countries were very important bookends for building a version of Voice Search that would work across the whole of Latin America. We also chose key countries such as Peru, Chile, Costa Rica, Panama and Colombia to bridge the divergent accent varieties.

As an International Program Manager at Google, I have been fortunate enough to travel around the world and meet many of our local Google users. They often have great suggestions for the products that they love, and word of mouth was created with the vision that our users could participate in developing the product. These Voice Search launches would not have been possible without the help of our users, and we’re excited to be able to work together on the product development with the people who will ultimately use our products.
Posted in localization, Voice Search

Thursday, 17 February 2011

Query Language Modeling for Voice Search

Posted by Ciprian Chelba, Research Scientist

About three years ago we set a goal to enable speaking to the Google Search engine on smartphones. On the language modeling side, the motivation was that we had access to large amounts of typed text data from our users. At the same time, that meant that the users also had a clear expectation for how they would interact with a speech-enabled version of the Google Search application.

The challenge lay in the scale of the problem and the perceived sparsity of the query data. Our paper, Query Language Modeling for Voice Search, describes the approach we took, and the empirical findings along the way.

Besides data availability, the project succeeded due to our excellent computational platform, the culture built around teams that wholeheartedly tackle such challenges with the conviction that they will set a new bar, and a collaborative mindset that leverages resources across the company. In this case we used training data made available by colleagues working in query spelling correction, query stream sampling procedures devised for search quality evaluation, the open finite state tools, and distributed language modeling infrastructure built for machine translation.

Perhaps the most satisfying part of this research project was its impact on the end-user: when presenting the poster at SLT 2010 in Berkeley I offered to demo Google Voice Search, and often got the answer “Thanks, I already use it!”.
Posted in Publications, Voice Search

Thursday, 2 December 2010

Google Launches Cantonese Voice Search in Hong Kong

Posted by Yun-hsuan Sung (宋雲軒) and Martin Jansche, Google Research

On November 30th 2010, Google launched Cantonese Voice Search in Hong Kong. Google Search by Voice has been available in a growing number of languages since we launched our first US English system in 2008. In addition to US English, we already support Mandarin for Mainland China, Mandarin for Taiwan, Japanese, Korean, French, Italian, German, Spanish, Turkish, Russian, Czech, Polish, Brazilian Portuguese, Dutch, Afrikaans, and Zulu, along with special recognizers for English spoken with British, Indian, Australian, and South African accents.

Cantonese is widely spoken in Hong Kong, where it is written using traditional Chinese characters, similar to those used in Taiwan. Chinese script is much harder to type than the Latin alphabet, especially on mobile devices with small or virtual keyboards. People in Hong Kong typically use either “Cangjie” (倉頡) or “Handwriting” (手寫輸入) input methods. Cangjie (倉頡) has a steep learning curve and requires users to break the Chinese characters down into sequences of graphical components. The Handwriting (手寫輸入) method is easier to learn, but slow to use. Neither is an ideal input method for people in Hong Kong trying to use Google Search on their mobile phones.

Speaking is generally much faster and more natural than typing. Moreover, some Chinese characters – like “滘” in “滘西州” (Kau Sai Chau) and “砵” in “砵典乍街” (Pottinger Street) – are so rarely used that people often know only the pronunciation, and not how to write them. Our Cantonese Voice Search begins to address these situations by allowing Hong Kong users to speak queries instead of entering Chinese characters on mobile devices. We believe our development of Cantonese Voice Search is a step towards solving the text input challenge for devices with small or virtual keyboards for users in Hong Kong.

There were several challenges in developing Cantonese Voice Search, some unique to Cantonese, some typical of Asian languages and some universal to all languages. Here are some examples of problems that stood out:
  • Data Collection: In contrast to English, there are few existing Cantonese datasets that can be used to train a recognition system. Building a recognition system requires both audio and text data so it can recognize both the sounds and the words. For audio data, our efficient DataHound collection technique uses smartphones to record and upload large numbers of audio samples from local Cantonese-speaking volunteers. For text data, we sample from anonymized search query logs from http://www.google.com.hk to obtain the large amounts of data needed to train language models.
  • Chinese Word Boundaries: Chinese writing doesn’t use spaces to indicate word boundaries. To limit the size of the vocabulary for our speech recognizer and to simplify lexicon development, we use characters, rather than words, as the basic units in our system and allow multiple pronunciations for each character.
  • Mixing of Chinese Characters and English Words: We found that Hong Kong users mix more English into their queries than users in Mainland China and Taiwan. To build a lexicon for both Chinese characters and English words, we map English words to a sequence of Cantonese pronunciation units.
  • Tone Issues: Linguists disagree on the number of tones in Cantonese – some say 6, some say 7, or 9, or 10. In any case, it's a lot. We decided to model tone-plus-vowel combinations as single units. To limit the complexity of the resulting model, some rarely used tone-vowel combinations are merged into single models.
  • Transliteration: We found that some users use English words while others use the Cantonese transliteration (e.g., "Jordan" vs. "佐敦"). This makes it challenging to develop and evaluate the system, since it's often impossible for the recognizer to distinguish between an English word and its Cantonese transliteration. During development we use a metric that simply checks whether the correct search results are returned.
  • Different Accents and Noisy Environment: People speak in different styles with different accents. They use our systems in a variety of environments, including offices, subways, and shopping malls. To make our system work in all these different conditions, we train it using data collected from many different volunteers in many different environments.
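The character-as-unit idea above can be illustrated with a toy lexicon. The characters, romanized (Jyutping-style) pronunciations, and data structure here are illustrative assumptions, not Google's actual lexicon:

```python
# Toy character lexicon: each Chinese character maps to one or more
# Cantonese pronunciations (romanized here for readability).
char_lexicon = {
    "香": ["hoeng1"],
    "港": ["gong2"],
    "行": ["hang4", "hong4"],  # characters can have multiple readings
}

def expand_pronunciations(query_chars):
    """Enumerate all pronunciation sequences for a string of characters."""
    sequences = [[]]
    for ch in query_chars:
        readings = char_lexicon.get(ch, ["<unk>"])
        sequences = [seq + [r] for seq in sequences for r in readings]
    return [" ".join(seq) for seq in sequences]

print(expand_pronunciations("香港"))  # ['hoeng1 gong2']
print(expand_pronunciations("行"))    # ['hang4', 'hong4']
```

Using characters rather than words as units keeps the vocabulary small (tens of thousands of characters instead of millions of words) while the multiple-readings lists absorb pronunciation ambiguity.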
Cantonese is Google’s third spoken language for Voice Search in the Chinese linguistic family, after Mandarin for Mainland China and Mandarin for Taiwan. We plan to continue to use our data collection and language modeling technologies to help speakers of Chinese languages easily input text and look up information.
Posted in Cantonese, Voice Search

Tuesday, 9 November 2010

Voice Search in Underrepresented Languages

Posted by Pedro J. Moreno, Staff Research Scientist and Johan Schalkwyk, Senior Staff Engineer

Welkom*!

Today we’re introducing Voice Search support for Zulu and Afrikaans, as well as South African-accented English. The addition of Zulu in particular represents our first effort in building Voice Search for underrepresented languages.

We define underrepresented languages as those which, while spoken by millions, have little presence in electronic and physical media, e.g., webpages, newspapers and magazines. Underrepresented languages have also often received little attention from the speech research community. Their phonetics, grammar, acoustics, etc., haven’t been extensively studied, making the development of ASR (automatic speech recognition) voice search systems challenging.

We believe that the speech research community needs to start working on many of these underrepresented languages to advance progress and build speech recognition, translation and other Natural Language Processing (NLP) technologies. The development of NLP technologies in these languages is critical for enabling information access for everybody. Indeed, these technologies have the potential to break language barriers.

We also think it’s important that researchers in these countries take a leading role in advancing the state of the art in their own languages. To this end, we’ve collaborated with the Multilingual Speech Technology group at South Africa’s North-West University led by Prof. Etienne Barnard (also of the Meraka Research Institute), an authority in speech technology for South African languages. Our development effort was spearheaded by Charl van Heerden, a South African intern and a student of Prof. Barnard. With the help of Prof. Barnard’s team, we collected acoustic data in the three languages, developed lexicons and grammars, and Charl and others used those to develop the three Voice Search systems. A team of language specialists traveled to several cities collecting audio samples from hundreds of speakers in multiple acoustic conditions such as street noise, background speech, etc. Speakers were asked to read typical search queries into an Android app specifically designed for audio data collection.

For Zulu, we faced the additional challenge of few text sources on the web. We often analyze the search queries from local versions of Google to build our lexicons and language models. However, for Zulu there weren’t enough queries to build a useful language model. Furthermore, since it has few online data sources, native speakers have learned to use a mix of Zulu and English when searching for information on the web. So for our Zulu Voice Search product, we had to build a truly hybrid recognizer, allowing free mixture of both languages. Our phonetic inventory covers both English and Zulu and our grammars allow natural switching from Zulu to English, emulating speaker behavior.

This is our first release of Voice Search in a native African language, and we hope that it won’t be the last. We’ll continue to work on technology for languages that have until now received little attention from the speech recognition community.

Salani kahle!**

* “Welcome” in Afrikaans
** “Stay well” in Zulu
Posted in Africa, Speech, Voice Search

Thursday, 14 October 2010

Korean Voice Input -- Have you Dictated your E-Mails in Korean lately?

Posted by Mike Schuster & Kaisuke Nakajima, Google Research

Google Voice Search has been available in various flavors of English since 2008, in Mandarin and Japanese since 2009, in French, Italian, German and Spanish since June 2010 (see also in this blog post), and shortly after that in Taiwanese. On June 16th 2010, we took the next step by launching our Korean Voice Search system.

Korean Voice Search, by focusing on finding the correct web page for a spoken query, has been quite successful since launch. We have improved the acoustic models several times which resulted in significantly higher accuracy and reduced latency, and we are committed to improving it even more over time.

While voice search significantly simplifies input for search, especially for longer queries, there are numerous applications on any smartphone that could also benefit from general voice input, such as dictating an email or an SMS. Our experience with US English has taught us that voice input is as important as voice search, as the time savings from speaking rather than typing a message are substantial. Korean is the first non-English language where we are launching general voice input. This launch extends voice input to emails, SMS messages, and more on Korean Android phones. Now every text field on the phone will accept Korean speech input.

Creating a general voice input service had different requirements and technical challenges compared to voice search. While voice search was optimized to give the user the correct web page, voice input was optimized to minimize (Hangul) character error rate. Voice inputs are usually longer than searches (short full sentences or parts of sentences), and the system had to be trained differently for this type of data. The current system’s language model was trained on millions of Korean sentences that are similar to those we expect to be spoken. In addition to the queries we used for training voice search, we also used parts of web pages, selected blogs, news articles and more. Because the system expects spoken data similar to what it was trained on, it will generally work well on normal spoken sentences, but may yet have difficulty on random or rare word sequences -- we will work to keep improving on those.
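Character error rate, the metric the paragraph above says voice input was optimized for, is conventionally computed as the edit distance between the reference and hypothesis character sequences, divided by the reference length. A standard Levenshtein-based sketch:

```python
def character_error_rate(reference, hypothesis):
    """Edit distance over characters, divided by the reference length."""
    ref, hyp = list(reference), list(hypothesis)
    # Standard dynamic-programming Levenshtein distance, one row at a time.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (r != h))    # substitution (or match, cost 0)
            prev = cur
    return dp[-1] / len(ref)

# One substituted Hangul syllable out of five: CER = 1/5 = 0.2
print(character_error_rate("안녕하세요", "안녕하세여"))
```

For Korean, error rate can be measured over Hangul syllables (as here) or over the underlying jamo; the post does not specify which granularity Google used.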

Korean voice input is part of Google’s long-term goal to make speech input an acceptable and useful form of input on any mobile device. As with voice search, our cloud computing infrastructure will help us to improve quality quickly, as we work to better support all noise conditions, all Korean dialects, and all Korean users.
Posted in Android, Korean, Voice Search