Compact System

Monday, 16 December 2013

Groundbreaking simulations by Google Exacycle Visiting Faculty

Posted by David Konerding, Staff Software Engineer

In April 2011, we announced the Google Exacycle for Visiting Faculty, a new academic research awards program donating one billion core-hours of computational capacity to researchers. The Exacycle project enables massive parallelism for doing science in the cloud, and inspired multiple proposals aiming to take advantage of cloud scale. Today, we would like to share some exciting results from a project built on Google’s infrastructure.

Google Research Scientist Kai Kohlhoff, in collaboration with Stanford University and Google engineers, investigated how an important signalling protein in the membrane of human cells can switch off and on by changing its three-dimensional structure following a sequence of local conformational changes. This research can help to better understand the effects of certain chemical compounds on the human body and assist future development of more potent drug molecules with fewer side effects.

The protein, known as the beta-2 adrenergic receptor, is a G protein-coupled receptor (GPCR), a primary drug target that plays a role in several debilitating health conditions. These include asthma, type-2 diabetes, obesity, and hypertension. The receptor and its close GPCR relatives bind to many familiar molecules, such as epinephrine, beta-blockers, and caffeine. Understanding their structure, function, and the underlying dynamics during binding and activation increases our chances to decode the causes and mechanisms of diseases.

To gain insights into the receptor’s dynamics, Kai performed detailed molecular simulations using hundreds of millions of core hours on Google’s infrastructure, generating hundreds of terabytes of valuable molecular dynamics data. The Exacycle program enabled the realization of simulations with longer sampling and higher accuracy than previous experiments, exposing the complex processes taking place on the nanoscale during activation of this biological switch.

The paper summarizing the work of Kai and his collaborators is featured on the January cover of Nature Chemistry, to be published on December 17, 2013, with cover artwork by Google R&D UX Creative Lead Thor Lewis. The online version of the paper was published on the journal’s website today.

We are extremely pleased with the results of this program. We look forward to seeing this research continue to develop.

Wednesday, 11 December 2013

Googler Moti Yung elected as 2013 ACM Fellow

Posted by Alfred Spector, VP of Engineering

Yesterday, the Association for Computing Machinery (ACM) released the list of those who have been elected ACM Fellows in 2013. I am excited to announce that Google Research Scientist Moti Yung is among the distinguished individuals receiving this honor.

Moti was chosen for his contributions to computer science and cryptography that have provided fundamental knowledge to the field of computing security. We are proud of the breadth and depth of his contributions, and believe they serve as motivation for computer scientists worldwide.

On behalf of Google, I congratulate our colleague, who joins the 17 ACM Fellows and other professional society awardees at Google in exemplifying our extraordinarily talented people. You can read a more detailed summary of Moti’s accomplishments below, including the official citation from ACM.

Dr. Moti Yung: Research Scientist
For contributions to cryptography and its use in security and privacy of systems

Moti has made key contributions to several areas of cryptography including (but not limited to!) secure group communication, digital signatures, traitor tracing, threshold cryptosystems, and zero-knowledge proofs. Moti's work often seeds a new area in theoretical cryptography while also finding broad application. For example, in 1992, Moti co-developed a protocol by which users can jointly compute a group key using their own private information that is secure against coalitions of rogue users. This work led to the growth of the broadcast encryption research area and has applications to pay-TV, network communication, and sensor networks.
Moti is also a long-time leader of the security and privacy research communities, having mentored many of the leading researchers in the field, and serving on numerous program committees. A prolific author, Moti routinely publishes 10+ papers a year, and has been a key contributor to principled and consistent anonymization practices and data protection at Google.

Tuesday, 3 December 2013

Free Language Lessons for Computers

Posted by Dave Orr, Google Research Product Manager

Not everything that can be counted counts.
Not everything that counts can be counted.
- William Bruce Cameron

50,000 relations from Wikipedia. 100,000 feature vectors from YouTube videos. 1.8 million historical infoboxes. 40 million entities derived from webpages. 11 billion Freebase entities in 800 million web documents. 350 billion words’ worth from books analyzed for syntax.

These are all datasets that Google Research has shared with researchers around the world over the last year.

But data by itself doesn’t mean much. Data is only valuable in the right context, and only if it leads to increased knowledge. Labeled data is critical to train and evaluate machine-learned systems in many arenas, improving systems that can increase our ability to understand the world. Advances in natural language understanding, information retrieval, information extraction, computer vision, etc. can help us tell stories, mine for valuable insights, or visualize information in beautiful and compelling ways.

That’s why we are pleased to be able to release sets of labeled data from various domains and with various annotations, some automatic and some manual. Our hope is that the research community will use these datasets in ways both straightforward and surprising, to improve systems for annotation or understanding, and perhaps launch new efforts we haven’t thought of.

Here’s a listing of the major datasets we’ve released in the last year, or you can subscribe to our mailing list. Please tell us what you’ve managed to accomplish, or send us pointers to papers that use this data. We want to see what the research world can do with what we’ve created.

50,000 Lessons on How to Read: a Relation Extraction Corpus

What is it: A human-judged dataset of two relations involving public figures on Wikipedia: about 10,000 examples of “place of birth” and 40,000 examples of “attended or graduated from an institution.”
Where can I find it: https://code.google.com/p/relation-extraction-corpus/
I want to know more: Here’s a handy blog post with a broader explanation, descriptions and examples of the data, and plenty of links to learn more.
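One natural use of a human-judged corpus like this is to score a relation extractor's output against the judgments. Here is a minimal Python sketch of precision/recall scoring over (subject, relation, object) triples; the example triples are invented for illustration and do not reflect the corpus's actual file format.

    # Hypothetical scoring of extracted relation triples against human judgments.
    gold = {
        ("Barack_Obama", "place_of_birth", "Honolulu"),
        ("Marie_Curie", "attended_institution", "University_of_Paris"),
    }
    predicted = {
        ("Barack_Obama", "place_of_birth", "Honolulu"),
        ("Barack_Obama", "place_of_birth", "Chicago"),
    }

    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    print("precision=%.2f recall=%.2f" % (precision, recall))  # 0.50 and 0.50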

11 Billion Clues in 800 Million Documents

What is it: We took the ClueWeb corpora and automatically labeled concepts and entities with Freebase concept IDs, an example of entity resolution. This dataset is huge: nearly 800 million web pages.
Where can I find it: We released two corpora: ClueWeb09 FACC and ClueWeb12 FACC.
I want to know more: We described the process and results in a recent blog post.

Features Extracted From YouTube Videos for Multiview Learning

What is it: Multiple feature families from a set of public YouTube videos of games. The videos are labeled with one of 30 categories, and each has an associated set of visual, auditory, and textual features.
Where can I find it: The data and more information can be obtained from the UCI machine learning repository (multiview video dataset), or from Google’s repository.
I want to know more: Read more about the data and uses for it here.

40 Million Entities in Context

What is it: A disambiguation set consisting of pointers to 10 million web pages with 40 million entities that have links to Wikipedia. This is another entity resolution corpus, since the links can be used to disambiguate the mentions, but unlike the ClueWeb example above, the links are inserted by the web page authors and can therefore be considered human annotation.
Where can I find it: Here’s the WikiLinks corpus, and tools to help use this data can be found on our partner’s page: UMass Wiki-links.
I want to know more: Other disambiguation sets, data formats, ideas for uses of this data, and more can be found at our blog post announcing the release.

Distributing the Edit History of Wikipedia Infoboxes

What is it: The edit history of 1.8 million infoboxes in Wikipedia pages in one handy resource. Attributes on Wikipedia change over time, and some of them change more than others. Understanding attribute change is important for extracting accurate and useful information from Wikipedia.
Where can I find it: Download from Google or from Wikimedia Deutschland.
I want to know more: We posted a detailed look at the data, the process for gathering it, and where to find it. You can also read a paper we published on the release.
[Figure: example infobox edit history. Note the change in the capital of Palau.]


Syntactic Ngrams over Time

What is it: We automatically syntactically analyzed 350 billion words from the 3.5 million English-language books in Google Books, and collated and released the resulting tree fragments: billions of unique fragments with counts, sorted into types. The underlying corpus is the same one that underlies the recently updated Google Ngram Viewer.
Where can I find it: http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html
I want to know more: We discussed the nature of dependency parses and describe the data and release in a blog post. We also published a paper about the release.

Dictionaries for linking Text, Entities, and Ideas

What is it: A large database of 175 million strings paired with 7.5 million concepts, annotated with counts and mined from Wikipedia. The concepts in this case are Wikipedia articles, and the strings are anchor-text spans that link to the concepts in question.
Where can I find it: http://nlp.stanford.edu/pubs/crosswikis-data.tar.bz2
I want to know more: A description of the data, several examples, and ideas for uses for it can be found in a blog post or in the associated paper.
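As a rough illustration of how a string-to-concept dictionary with counts can be used, the Python sketch below does simple most-frequent-sense entity linking: look up an anchor string and return the concept it most often links to. The tiny inline dictionary is invented for the example and is not the format of the released file.

    # Toy most-frequent-sense linker over an anchor-text dictionary with counts.
    # anchor string -> {Wikipedia article: number of links observed}
    anchor_counts = {
        "jaguar": {"Jaguar_Cars": 520, "Jaguar": 310, "Jacksonville_Jaguars": 90},
        "apple": {"Apple_Inc.": 900, "Apple": 410},
    }

    def link(mention):
        """Return the concept this anchor text most often links to, if any."""
        candidates = anchor_counts.get(mention.lower())
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

    print(link("Jaguar"))  # -> Jaguar_Cars, the highest-count sense in the toy data

Real entity-linking systems combine such prior counts with the surrounding context, but counts alone already give a reasonable most-frequent-sense baseline.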

Other datasets

Not every release had its own blog post describing it. Here are some other releases:
  • Automatic Freebase annotations of Trec’s Million Query and Web track queries.
  • A set of Freebase triples that have been deleted from Freebase over time -- 63 million of them.

Tuesday, 26 November 2013

Released Data Set: Features Extracted From YouTube Videos for Multiview Learning

Posted by Omid Madani, Senior Software Engineer

“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.”
The “duck test”.

Performance of machine learning algorithms, supervised or unsupervised, is often significantly enhanced when a variety of feature families, or multiple views of the data, are available. For example, in the case of web pages, one feature family can be based on the words appearing on the page, and another can be based on the URLs and related connectivity properties. Similarly, videos contain both audio and visual signals where in turn each modality is analyzed in a variety of ways. For instance, the visual stream can be analyzed based on the color and edge distribution, texture, motion, object types, and so on. YouTube videos are also associated with textual information (title, tags, comments, etc.). Each feature family complements others in providing predictive signals to accomplish a prediction or classification task, for example, in automatically classifying videos into subject areas such as sports, music, comedy, games, and so on.

We have released a dataset of over 100k feature vectors extracted from public YouTube videos. These videos are labeled with one of 30 classes, each class corresponding to a video game (with some amount of class noise): each video shows gameplay of a video game, for example for teaching purposes. Each instance (video) is described by three feature families (textual, visual, and auditory), and each family is broken into subfamilies, yielding up to 13 feature types per instance. Neither video identities nor class identities are released.

We hope that this dataset will be valuable for research on a variety of multiview related machine learning topics, including multiview clustering, co-training, active learning, classifier fusion and ensembles.
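As a small, hypothetical illustration of one of these topics, classifier fusion, the sketch below trains one classifier per feature view and averages their predicted class probabilities (late fusion). The random arrays merely stand in for the released textual, visual, and auditory features; scikit-learn is used for convenience and is not part of the release.

    # Hypothetical late fusion over three feature views; random arrays stand in
    # for the released textual, visual, and auditory feature families.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    views = [rng.normal(size=(n, d)) for d in (200, 100, 50)]  # text, visual, audio
    y = rng.integers(0, 30, size=n)                            # 30 game classes

    idx_train, idx_test = train_test_split(np.arange(n), test_size=0.2, random_state=0)
    classes = np.unique(y[idx_train])

    # Train one classifier per view, then average their predicted probabilities.
    probas = []
    for X in views:
        clf = LogisticRegression(max_iter=1000).fit(X[idx_train], y[idx_train])
        probas.append(clf.predict_proba(X[idx_test]))
    fused = np.mean(probas, axis=0)
    pred = classes[fused.argmax(axis=1)]
    print("fused accuracy:", (pred == y[idx_test]).mean())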

The data and more information can be obtained from the UCI machine learning repository (multiview video dataset), or from here.

Monday, 25 November 2013

The MiniZinc Challenge

Posted by Jon Orwant, Engineering Manager

Constraint Programming is a style of problem solving where the properties of a solution are first identified, and a large space of candidate solutions is then searched to find the best one. Good constraint programming depends on modeling the problem well and on searching effectively. Poor representations or slow search techniques can make the difference between finding a good solution and finding no solution at all.

One example of constraint programming is scheduling: for instance, determining a schedule for a conference where there are 30 talks (that’s one constraint), only eight rooms to hold them in (that’s another constraint), and some talks can’t overlap (more constraints).
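To make the scheduling example concrete, here is a minimal sketch using the Python CP-SAT API of Google's open source or-tools solver (discussed below). Note that CP-SAT postdates this post's competition entry, and the slot count and conflict pairs here are invented placeholders.

    # Assign 30 talks to time slots and 8 rooms; conflicting talks must not overlap.
    from ortools.sat.python import cp_model

    num_talks, num_slots, num_rooms = 30, 6, 8
    conflicts = [(0, 1), (2, 5), (10, 11)]  # illustrative pairs that cannot overlap

    model = cp_model.CpModel()
    slot = [model.NewIntVar(0, num_slots - 1, "slot_%d" % t) for t in range(num_talks)]
    room = [model.NewIntVar(0, num_rooms - 1, "room_%d" % t) for t in range(num_talks)]

    # Conflicting talks cannot share a time slot.
    for a, b in conflicts:
        model.Add(slot[a] != slot[b])

    # No two talks may share both a slot and a room: encode each talk's
    # (slot, room) pair as one value and require all such values to differ.
    placement = []
    for t in range(num_talks):
        p = model.NewIntVar(0, num_slots * num_rooms - 1, "place_%d" % t)
        model.Add(p == slot[t] * num_rooms + room[t])
        placement.append(p)
    model.AddAllDifferent(placement)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for t in range(num_talks):
            print("talk %d: slot %d, room %d"
                  % (t, solver.Value(slot[t]), solver.Value(room[t])))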

Every year, some of the world’s top constraint programming researchers compete for medals in the MiniZinc challenge. Problems range from scheduling to vehicle routing to program verification and frequency allocation.

Google’s open source solver, or-tools, took two gold medals and two silver medals. The gold medals were in parallel and portfolio search, and the silver medals were in fixed and free search. Google’s success was due in part to integrating a SAT solver to handle Boolean constraints, and to a new presolve phase inherited from integer programming.

Laurent Perron, a member of Google’s Optimization team and a lead contributor to or-tools, noted that every year brings fresh techniques to the competition: “One of the big surprises this year was the success of lazy-clause generation, which combines techniques from the SAT and constraint programming communities.”

If you’re interested in learning more about constraint programming, you can start at the Wikipedia page, or have a look at or-tools.

The full list of winners is available here.

Friday, 22 November 2013

New Research Challenges in Language Understanding

Posted by Maggie Johnson, Director of Education and University Relations

We held the first global Language Understanding and Knowledge Discovery Focused Faculty Workshop in Nanjing, China, on November 14-15, 2013. Thirty-four faculty members joined the workshop, arriving from 10 countries and regions across APAC, EMEA, and the US. Googlers from Research, Engineering, and University Relations/University Programs also attended the event.

The 2-day workshop included keynote talks, panel discussions and break-out sessions [agenda]. It was an engaging and productive workshop, and we saw lots of positive interactions among the attendees. The workshop encouraged communication between Google and faculty around the world working in these areas.

Research in text mining continues to explore open questions relating to entity annotation, relation extraction, and more. The workshop’s goal was to brainstorm and discuss relevant topics to further investigate these areas. Ultimately, this research should help provide users search results that are much more relevant to them.

At the end of the workshop, participants identified four topics representing challenges and opportunities for further exploration in Language Understanding and Knowledge Discovery:

  • Knowledge representation, integration, and maintenance
  • Efficient and scalable infrastructure and algorithms for inferencing
  • Presentation and explanation of knowledge
  • Multilingual computation

Going forward, Google will be collaborating with academic researchers on a position paper related to these topics. We also welcome faculty interested in contributing to further research in this area to submit a proposal to the Faculty Research Awards program. Faculty Research Awards are one-year grants to researchers working in areas of mutual interest.

The faculty attendees responded positively to the focused workshop format, as it allowed time to go in depth into important and timely research questions. Encouraged by their feedback, we are considering similar workshops on other topics in the future.

Tuesday, 19 November 2013

Unique Strategies for Scaling Teacher Professional Development

Posted by Candice Reimers, Senior Program Manager

Research shows that professional development for educators has a direct, positive impact on students, so it’s no wonder that institutions are eager to explore creative ways to enhance professional development for K-12 teachers. Open source MOOC platforms, such as Course Builder, offer the flexibility to extend the reach of standard curriculum; recently, several courses have launched that demonstrate new and creative applications of MOOCs. With their wide reach, participant engagement, and rich content, MOOCs that offer professional development opportunities for teachers bring flexibility and accessibility to an important area.

This summer, the ScratchEd team at Harvard University launched the Creative Computing MOOC, a six-week, self-paced workshop focused on building computational thinking skills in the classroom. As a MOOC, the course had 2,600 participants, who created more than 4,700 Scratch projects and engaged in 3,500 forum discussions, compared to the “in-person” class held last year, which reached only 50 educators.

Other creative uses of Course Builder for educator professional development come from National Geographic and Annenberg Learner who joined forces to develop Water: The Essential Resource, a course developed around California’s Education and Environment Initiative. The Friday Institute’s MOOC, Digital Learning Transitions, focused on the benefits of utilizing educational technology and reached educators across 50 states and 68 countries worldwide. The course design included embedded peer support, project-based learning, and case studies; a post-course survey showed an overwhelming majority of responders “were able to personalize their own learning experiences” in an “engaging, easy to navigate” curriculum and greatly appreciated the 24/7 access to materials.

In addition to participant surveys, course authors using the Course Builder platform are able to conduct deeper analysis via web analytics and course data to assess course effectiveness and make improvements for future courses.

New opportunities to experience professional development MOOCs are rapidly emerging; the University of Adelaide recently announced their Digital Technology course to provide professional development for primary school teachers on the new Australian curriculum, the Google in Education team just launched a suite of courses for teachers using Google technologies, and the Friday Institute course that aligns with the U.S. based Common Core State Standards is now available.

We’re excited about the innovative approaches underway and the positive impact they can have for students and teachers around the world. We also look forward to seeing creative applications of MOOC platforms in new, uncharted territory.