Google introduces two new AI experiences called ‘Talk to Books’ and ‘Semantris’
Users can now pose queries directly to books with Talk to Books and test their word-association skills with ‘Semantris’.
Google is at the forefront of AI development and natural language processing. It has now launched a pair of “Semantic Experiences” called ‘Talk to Books’ and ‘Semantris’. As the name suggests, ‘Talk to Books’ lets users type in a question or a statement and converse with books: the search engine finds and displays passages from books that read like plausible responses. ‘Semantris’ is a word-based game that tests word-association skills. Somewhat like Tetris, it blasts word blocks when you type in a word that matches the highlighted one.
The company says that neither of the new experiences relies on keyword matching; instead, its AI was trained on a "billion conversation-like pairs of sentences" to learn what a good response looks like. Talk to Books is described as a new way of exploring books that starts at the sentence level, instead of a search for an author or topic, whereas Semantris is a word-association game powered by machine learning, in which one types words related to a given prompt.
Ray Kurzweil, Director of Engineering, and Rachel Bernstein, Product Manager at Google Research, said in the blog post, “The examples we’re sharing today are just a few of the possible ways to think about experience and application design using these new tools. Other potential applications include classification, semantic similarity, semantic clustering, whitelist applications (selecting the right response from many alternatives), and semantic search (of which Talk to Books is an example). We hope you’ll come up with many more, inspired by these example applications. We look forward to seeing original and innovative uses of our TensorFlow models by the developer community.”
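The “semantic search” Kurzweil and Bernstein describe comes down to ranking text by vector similarity rather than keyword overlap: every sentence is mapped to a vector, and candidates are ranked by how close their vectors sit to the query’s. A minimal sketch of that idea, assuming nothing about Google’s actual models (a toy word-count vector stands in here for a learned sentence encoder such as the TensorFlow models the post mentions):

```python
# Sketch of semantic search by sentence-vector similarity.
# The embed() below is a toy bag-of-words stand-in so the example is
# self-contained; a real system would use a trained sentence encoder.
import math
from collections import Counter

def embed(sentence):
    """Toy stand-in for a learned sentence encoder: a word-count vector."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, passages):
    """Rank passages by vector similarity to the query, best match first."""
    q = embed(query)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)

passages = [
    "the dog chased the ball across the yard",
    "interest rates rose sharply last quarter",
    "a puppy ran after the ball in the garden",
]
print(semantic_search("the dog ran after the ball", passages))
```

With a learned encoder in place of `embed()`, the “puppy ran after the ball” passage would score close to the query even with few words in common, which is exactly what keyword matching cannot do.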
Google had previously announced the development of a solution that uses deep learning to pick out individual human voices in a crowd by watching people’s faces while they talk. The developers first trained a neural network to identify the voices of individual people speaking on their own, then introduced background noise in virtual ‘parties’ to teach the network to separate multiple voices into individual audio tracks.