Google I/O 2018 announcements: Android P, Machine Learning, AI, Google Assistant, Digital Wellbeing, Maps, Google Lens and more

Updated on 09-May-2018
HIGHLIGHTS

Lots of AI, lots of new features, and a whole lot of brilliance. Here's everything that was announced at Google's annual developer conference, where Machine Learning and AI ruled the roost.

Google CEO Sundar Pichai just concluded his keynote address here at I/O 2018, the company’s decade-old annual developer conference held at the Shoreline Amphitheatre in Mountain View, California.

The event opened with electronic synth sounds created by Google's Neural Synthesizer (NSynth) machine learning (ML) algorithms. The sounds were produced on a device called the NSynth Super, which uses deep neural networks to learn the characteristics of sounds and then create completely new sounds based on those characteristics. For the experiment, 16 original source sounds across a range of 15 pitches were recorded in a studio and fed into the NSynth algorithm to precompute the new sounds.
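
For the curious, here's a rough idea of how that blending works. NSynth encodes each source sound into a numeric embedding and decodes new audio from mixtures of those embeddings. The sketch below is a deliberately simplified illustration – the names and numbers are made up, and the real encoder and decoder are deep WaveNet-style neural networks.

```kotlin
// Toy sketch of NSynth's core idea: each source sound is encoded into a
// latent vector, and new sounds are decoded from blends of those vectors.
// The embeddings below are made up; real NSynth learns them with deep nets.

typealias Embedding = DoubleArray

// Linearly interpolate between two learned sound embeddings.
// t = 0.0 yields soundA's character, t = 1.0 yields soundB's.
fun blend(a: Embedding, b: Embedding, t: Double): Embedding =
    DoubleArray(a.size) { i -> (1 - t) * a[i] + t * b[i] }

fun main() {
    val flute = doubleArrayOf(0.9, 0.1, 0.3)  // hypothetical learned embedding
    val snare = doubleArrayOf(0.2, 0.8, 0.5)  // hypothetical learned embedding
    // NSynth Super precomputes decoded audio for a grid of such blends
    // (e.g. 16 sources across 15 pitches), so playback is instant.
    val halfway = blend(flute, snare, 0.5)
    println(halfway.joinToString())  // 0.55, 0.45, 0.4
}
```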

This, in a way, set the theme for I/O 2018 and Pichai’s keynote – AI and Machine Learning. And yes, Google went big on both, setting the pace for competitors ranging from Silicon Valley giants like Facebook and Microsoft to startups.

The two-hour-long keynote delivered by Pichai and other top Google executives was packed with new Machine Learning and AI announcements, including John Legend’s voice coming to the Google Assistant. Google has managed to stuff machine learning and AI into almost all of its products – from Android P to Maps, Google News to Gmail, Google Assistant to Google Lens and even its new Digital Wellbeing programme. From what we saw today, Google’s AI and ML applications are some of the most consumer-focussed in the industry.

So without further ado, here’s everything Google announced at this year’s developer conference.

Gmail

Google’s email service Gmail recently received a complete design overhaul, and now the company is adding some AI smarts to it. Starting this month, Google will be rolling out a new ‘Smart Compose’ feature to Gmail, which uses AI and ML to suggest phrases while you type emails. The feature works much like typing suggestions in messaging apps and, for example, takes care of mundane things like filling in addresses within emails.
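
Google hasn't detailed Smart Compose's internals, but conceptually it is next-phrase prediction conditioned on what you've typed so far. Here's a deliberately crude sketch of that idea – the phrase table stands in for what is really a neural language model:

```kotlin
// Illustrative only: Smart Compose uses a neural language model; this toy
// version just completes a typed prefix from a table of common email phrases.

val phraseBank = mapOf(
    "looking forward" to "looking forward to hearing from you",
    "please find" to "please find the attached document",
    "let me know" to "let me know if you have any questions"
)

fun suggest(typed: String): String? {
    val lower = typed.lowercase()
    // Complete the longest phrase whose beginning the user has already typed.
    return phraseBank.entries
        .filter { lower.endsWith(it.key) }
        .maxByOrNull { it.key.length }
        ?.let { it.value.removePrefix(it.key) }
}

fun main() {
    println("I am looking forward" + (suggest("I am looking forward") ?: ""))
    // -> I am looking forward to hearing from you
}
```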

Google Photos

Another product where users can see Google flex its AI muscles is Google Photos. The Photos app was built from the ground up using artificial intelligence and is now adding a new feature called ‘Suggested Actions’, which offers users a set of suggested actions for an image. For instance, if you click a friend’s picture at a wedding, Google Photos will automatically recognise your friend in that image and suggest sharing it with them. Google Photos will also suggest brightness adjustments for low-lit scenes and fix the brightness automatically.

One of the coolest features that comes with Suggested Actions is the ability to convert images of documents into a PDF straight from the Photos app. Soon, when users store or take a picture of a document, they will be able to share it as a PDF file with any of their contacts, making document storage and sharing much easier. What’s more, Google Photos will soon be able to reproduce black-and-white images in colour. Google uses its AI algorithms to colourise old photos by recognising the scene and adding the necessary hues. All these new features are rolling out to Google Photos over the next couple of months.
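
Google hasn't said how Suggested Actions are generated under the hood, but conceptually the feature maps signals extracted from an image (recognised faces, document detection, brightness) to one-tap actions. A hypothetical sketch:

```kotlin
// Hypothetical sketch: Suggested Actions as a mapping from image signals
// (produced by Google's vision models) to candidate one-tap actions.

data class ImageSignals(
    val recognisedContacts: List<String>,
    val isDocument: Boolean,
    val meanBrightness: Double  // 0.0 (dark) .. 1.0 (bright)
)

fun suggestedActions(s: ImageSignals): List<String> = buildList {
    s.recognisedContacts.forEach { add("Share with $it") }
    if (s.isDocument) add("Convert to PDF")
    if (s.meanBrightness < 0.3) add("Fix brightness")
}

fun main() {
    val weddingShot = ImageSignals(listOf("Priya"), isDocument = false, meanBrightness = 0.2)
    println(suggestedActions(weddingShot))
    // [Share with Priya, Fix brightness]
}
```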

TPUs

At I/O 2017, Google announced the second iteration of its Tensor Processing Unit (TPU), and this year the company has announced TPU 3.0. TPUs are special-purpose machine learning chips that run Google’s TensorFlow AI platform and drive all of the company’s efforts in the ML space today. The third iteration of the TPU is so powerful that Google had to introduce liquid cooling in its data centres to keep the chips from overheating. The new TPUs are eight times more powerful than their predecessors and help build better, larger and more accurate AI models.

Google Assistant

The Google Assistant is one of the biggest showcases of Google’s prowess in AI and ML, so naturally, there were some key announcements for the digital helper. Google announced four key features for the Assistant – New Voices, Continued Conversations, Multiple Actions and Pretty Please. Let’s break them down one at a time.

Google is introducing six new voices to the Google Assistant based on WaveNet, its machine learning speech synthesis technology. WaveNet was built using a convolutional neural network and trained on a large dataset of speech samples to make the Assistant sound more natural and lifelike. WaveNet makes it easier to add new languages and dialects to the Assistant, and it also allowed Google to, wait for it, add John Legend’s voice to the Google Assistant. Yes, starting later this year, users will be able to get answers from the Google Assistant in the voice of the singer-songwriter. It usually takes hours of recordings to create a Google Assistant voice, but WaveNet allowed Google to shorten studio time while still capturing the richness of Legend’s voice.
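
A neat property of WaveNet's convolutional design is worth a quick illustration. The network stacks dilated causal convolutions whose dilation doubles at every layer, so each generated audio sample can 'see' thousands of past samples. The little calculation below uses a configuration cited in the WaveNet paper, not anything Google confirmed on stage:

```kotlin
// WaveNet stacks dilated causal convolutions with dilations that double
// each layer (1, 2, 4, ..., 512), repeated across several blocks.
// For a filter width of 2, the receptive field is 1 + the sum of all dilations.
fun receptiveField(blocks: Int, layersPerBlock: Int): Int {
    var field = 1
    repeat(blocks) {
        for (layer in 0 until layersPerBlock) {
            field += 1 shl layer  // dilation = 2^layer adds `dilation` samples
        }
    }
    return field
}

fun main() {
    // e.g. 3 blocks of 10 layers: dilations 1..512 repeated three times
    println(receptiveField(blocks = 3, layersPerBlock = 10))  // 3070 samples
}
```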

At the keynote, Google Assistant VP Scott Huffman announced that the Assistant is now available across 500 million devices, including 40 auto brands and 5,000 connected home devices. Highlighting the Assistant’s new Continued Conversations feature, Huffman demonstrated how the AI helper can now converse naturally with users without them having to repeat the wake words ‘Ok Google’ or 'Hey Google'. The Google Assistant is now more context-aware and can hold a back-and-forth conversation with ease. Continued Conversations is rolling out to the Google Assistant in the next few weeks.

The next big Assistant feature announcement was Multiple Actions, which allows users to ask the Assistant to perform more than one action at a time. The Google Assistant uses a linguistic technique called Coordination Reduction to break apart two different requests in a single sentence and complete them individually.
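
To get a feel for what coordination reduction does, here's a toy illustration – the real Assistant uses proper linguistic parsing, not string splitting, and the tiny verb lexicon here is made up:

```kotlin
// Toy illustration of coordination reduction: split an "and"-conjoined
// utterance into separate requests, restoring an elided verb phrase in
// later clauses.

val actionVerbs = setOf("turn", "play", "set", "open", "show")  // tiny made-up lexicon

fun splitRequests(utterance: String): List<String> {
    val clauses = utterance.split(" and ").map { it.trim() }
    if (clauses.size < 2) return clauses
    // Crudely treat everything before the first "the" as the shared verb
    // phrase, e.g. "turn on" in "turn on the lights".
    val verbPhrase = clauses.first().substringBefore(" the ")
    return clauses.map { clause ->
        val startsWithVerb = clause.substringBefore(" ") in actionVerbs
        if (startsWithVerb) clause else "$verbPhrase $clause"
    }
}

fun main() {
    println(splitRequests("turn on the lights and play some jazz"))
    // [turn on the lights, play some jazz]  -- both clauses have their own verb
    println(splitRequests("turn on the lights and the fan"))
    // [turn on the lights, turn on the fan] -- elided verb restored
}
```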

Lastly, Google wants to reinforce positive behaviour in children with the Assistant’s new Pretty Please function. The feature is pretty basic: it encourages children to say please when asking the Assistant to perform a task, and the Assistant responds with polite reinforcement, complimenting children for saying please, in an effort to improve conversations for families.

More power to the Google Assistant

At I/O 2018, we finally got a glimpse of how the Google Assistant will function on a visual interface. Multiple OEMs, including Lenovo, LG, Sony and JBL, had announced Assistant-powered smart displays at CES this year. We now know that these Google Assistant-powered smart displays will go on sale starting July 2018. The speaker-display combos powered by the digital assistant will let users watch videos, bring up recipes, make video calls, view security camera feeds, pull up maps and more.

Beyond smart displays, the Google Assistant is also getting a new interface on Android and iOS. The Assistant will now provide more visually rich and immersive responses, and users will be able to complete smart home requests straight from the Assistant’s interface. Google has also added some food delivery Actions to the Assistant, including ones from Starbucks, Domino’s, Dunkin’ Donuts and more. However, the company did not announce global availability for these Actions, and it looks like they will be limited to the US for now. The new and improved visual experience for the Google Assistant will launch on Android this summer and on iOS later this year.

Assistant does some real-life assisting, and it was awesome!

Most human assistants have to take up the mundane task of making actual appointments for their bosses, and soon, the Google Assistant will do the same. Google wants to make it easier for users to interact with local businesses that don’t have an online footprint, and using a technology called Google Duplex, the company demonstrated one of the coolest AI use cases we have EVER seen. At I/O 2018, Google demoed an actual call made by the Assistant to a hair salon to book a haircut appointment on behalf of a user. The conversation was so natural that it was difficult to tell the AI from the actual human on the other end of the line. We are impressed, to say the least. However, Google did say that the tech is still very much in its development stages, and the company will be rolling out a few experiments around it in the next few weeks.

Assistant on Google Maps

There were a lot of cool Google Maps announcements at I/O this year, but more on that later. Wrapping up the Assistant announcements, Google said that the digital helper will soon be available on Maps, allowing users to share ETAs with friends and family using their voice. More such voice-based features are coming to Maps later this summer.

Android P

While we didn’t expect or get an official name for Android P at I/O, Google released the first public beta of the OS at the event. We also heard about some of the new features Android P will bring, including the famed iPhone X-like gesture navigation. Android P, like all of Google’s other products, will lean heavily on AI and on-device machine learning. This will be visible in multiple forms, including a new 'Adaptive Battery' feature created in collaboration with DeepMind, Alphabet’s AI subsidiary. Adaptive Battery on Android P predicts a user’s app usage patterns and spends battery only on the apps and services the user is most likely to use. Google has seen around a 30 percent reduction in CPU wakeups by apps thanks to this new feature.
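
Android P also exposes a related signal to developers as 'app standby buckets'. Here's a toy sketch of the underlying idea – rank apps by likely usage and restrict background work for the stragglers. The real predictor is a DeepMind-built model, not a launch-count sort:

```kotlin
// Toy sketch of the idea behind Adaptive Battery: rank apps by how likely
// the user is to open them soon, then restrict background work (wakeups,
// jobs) for the unlikely ones. The bucket names mirror Android P's real
// standby buckets; everything else here is made up for illustration.

enum class Bucket { ACTIVE, WORKING_SET, FREQUENT, RARE }

fun assignBuckets(launchCounts: Map<String, Int>): Map<String, Bucket> {
    val ranked = launchCounts.entries.sortedByDescending { it.value }
    return ranked.mapIndexed { rank, (pkg, _) ->
        pkg to when {
            rank == 0 -> Bucket.ACTIVE       // in constant use
            rank < 3  -> Bucket.WORKING_SET  // used most days
            rank < 6  -> Bucket.FREQUENT     // used weekly
            else      -> Bucket.RARE         // background work heavily deferred
        }
    }.toMap()
}

fun main() {
    val launches = mapOf("chat" to 42, "maps" to 17, "camera" to 9, "game" to 1)
    println(assignBuckets(launches))
    // {chat=ACTIVE, maps=WORKING_SET, camera=WORKING_SET, game=FREQUENT}
}
```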

AI implementation in the core Android operating system can also be seen in the form of the 'Adaptive Brightness' feature on Android P. Adaptive Brightness reduces the number of times a user has to manually adjust display brightness by using AI to learn the user’s preferences.

In the spirit of predicting usage patterns, Google has also introduced a feature called ‘App Actions’ to Android P. This anticipates what a user wants to do with an app before they actually do it. For instance, if you connect headphones to the phone, App Actions will automatically suggest resuming the last song you were listening to on your phone.
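
App Actions are surfaced by the system itself, but an app could approximate the headphones example today with standard Android APIs. In this sketch, Intent.ACTION_HEADSET_PLUG and its "state" extra are real; resumeLastTrack() is a hypothetical hook into your own player:

```kotlin
// Rough app-level approximation of the headphone example: listen for the
// system headset broadcast and resume the last-played track.

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

class HeadsetReceiver(private val resumeLastTrack: () -> Unit) : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        // "state" extra: 0 = unplugged, 1 = plugged in
        if (intent.getIntExtra("state", 0) == 1) resumeLastTrack()
    }
}

// Register at runtime (e.g. from a Service); this broadcast is delivered
// only to receivers registered in code, not declared in the manifest.
fun register(context: Context, receiver: HeadsetReceiver) {
    context.registerReceiver(receiver, IntentFilter(Intent.ACTION_HEADSET_PLUG))
}
```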

Android P will also bring a new feature called ‘Slices’, which displays slices of an app within Google Search. For instance, if you search for Uber on Google, you will get a Slice of the application from your phone within the search results and will be able to book a cab directly from the Slice. Another example of how Slices would work on Android P: when you search for a particular city, say Goa, you will see a Slice from Google Photos if you previously visited the city and clicked some pictures there.
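
For developers, Slices are built with the AndroidX Slice builders that ship alongside Android P. Here's a minimal, shape-of-the-API sketch of a provider for the cab-booking example – the URI path and the pending-intent helper are placeholders:

```kotlin
// Minimal sketch of a Slice using the AndroidX Slice builders.
// PendingIntent creation is elided; treat this as a shape-of-the-API
// sketch rather than a complete app.

import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction

class RideSliceProvider : SliceProvider() {
    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        if (sliceUri.path != "/ride") return null  // hypothetical path
        return ListBuilder(context!!, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Home in 5 min")
                    .setSubtitle("Tap to book a cab")
                    .setPrimaryAction(buildBookAction())
            )
            .build()
    }

    private fun buildBookAction(): SliceAction {
        TODO("Wrap a PendingIntent that starts the booking flow")
    }
}
```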

All these Android P features will be available in the new public beta release, which will be available not only to Pixel users but also to those using certain OnePlus, Oppo, Vivo, Sony, Xiaomi, Essential and Nokia phones. To be specific, these are the phones that will get Android P public beta access – Essential Phone, Oppo R15 Pro, Nokia 7 Plus, Sony Xperia XZ2, Xiaomi Mi Mix 2S, Vivo X21, OnePlus 6, Pixel and Pixel 2.

Gesture Navigation

With Android P, Google is making way for the notch and providing new ways to navigate phones that make the most of screen real estate. There will now be a single, clean home button for navigating the new OS. Some things remain the same: users can tap the home button to go back home, and hold it down to bring up the Google Assistant.

What’s changed is that swiping up on the home button will now bring up the multitasking menu, with five predicted apps displayed at the bottom of the screen. Swiping up twice brings up the entire app list, and this works from any screen. A quick scrub at the bottom of the display lets users scroll through apps. Volume controls have also been reimagined to help users separate ringer volume from media volume: a new vertical volume control will appear on the display next to the hardware buttons and will distinguish the phone’s ringer volume from that of the media being played on it. In addition, a new rotation button will appear on the navbar to make changing screen orientation much easier.

Digital Wellbeing or No More FOMO

Google, like many other tech companies, is realising the need to disconnect. At I/O, Pichai said that the company knows people feel tethered to their devices because of the fear of missing out. To help people switch off, Google has taken a four-step approach – understanding habits, focussing on what matters, winding down and finding a balance. To that end, Google is introducing a number of wellbeing features to Android P. The OS will come with a new Dashboard that shows users how much time they spend on apps through visual graphs and pie charts. Android P will also let users set time limits on apps and will nudge them to do something else when that time is up.
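
Interestingly, the data behind a Dashboard-style breakdown is already queryable by apps via Android's existing UsageStatsManager API (it needs the user-granted PACKAGE_USAGE_STATS permission). A quick sketch of a per-app screen-time query:

```kotlin
// A Dashboard-style per-app screen-time breakdown using Android's existing
// UsageStatsManager API. Android P's own Dashboard is built into the OS;
// this just shows that the underlying data is queryable.

import android.app.usage.UsageStatsManager
import android.content.Context
import java.util.concurrent.TimeUnit

fun screenTimeToday(context: Context): Map<String, Long> {
    val usm = context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
    val end = System.currentTimeMillis()
    val start = end - TimeUnit.DAYS.toMillis(1)
    return usm.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, start, end)
        .filter { it.totalTimeInForeground > 0 }
        // minutes of foreground time per package over the last day
        .associate { it.packageName to TimeUnit.MILLISECONDS.toMinutes(it.totalTimeInForeground) }
}
```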

There’s also a new Do Not Disturb mode on Android P, which silences not just phone calls and texts but also the visual disturbances that pop up on the screen. Turning the phone face down will automatically activate this DND mode. In emergency situations, pre-determined contacts will still be able to reach users who have activated DND.
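
Google didn't say how the flip gesture is implemented, but an app-level approximation would watch the accelerometer for a face-down reading and then switch on DND. The APIs below are real; the behaviour is our guess at the feature, and the app would need user-granted notification-policy access:

```kotlin
// App-level approximation of "flip to shush": watch the accelerometer for a
// face-down reading and switch on Do Not Disturb.

import android.app.NotificationManager
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

class FlipToShush(private val context: Context) : SensorEventListener {
    fun start() {
        val sm = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
        sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
            SensorManager.SENSOR_DELAY_NORMAL)
    }

    override fun onSensorChanged(event: SensorEvent) {
        // z-axis gravity near -9.8 m/s^2 means the screen is facing down
        if (event.values[2] < -9.0f) {
            val nm = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
            if (nm.isNotificationPolicyAccessGranted) {
                nm.setInterruptionFilter(NotificationManager.INTERRUPTION_FILTER_PRIORITY)
            }
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```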

In addition to the DND mode, Android P will also get a new Wind Down mode, which lets users tell the Google Assistant their bedtime. The Assistant will then automatically activate DND and turn the screen grey through the night. Worry not, all colours will return to normal in the morning.

YouTube will also jump aboard the Digital Wellbeing programme and will remind users to take a break while watching videos. It will also combine all notifications into a daily digest, delivered to users once a day.

Google Maps

Google has finally fixed walking navigation by combining the camera, Maps and Street View. Google Maps will soon use the camera to create a new walking navigation UI that overlays guiding arrows on an actual street view of the area. Google is also working on a Visual Positioning System to help users figure out their exact position using visual landmarks. Other new Maps features include – a new ‘For You’ section that brings up locally relevant suggestions, a ‘Your Match’ function that uses ML to predict your preferred places based on your previous ratings, and lastly, a feature that lets you create lists of places with friends to determine a common place of interest by voting and picking a group favourite.
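
Google hasn't published how the 'Your Match' score is computed, but conceptually it compares a taste profile learned from your ratings against a place's attributes. A purely illustrative sketch with made-up attribute vectors:

```kotlin
// Toy guess at a "Your Match" style score: compare a taste profile learned
// from the user's past ratings against a place's attribute vector.

import kotlin.math.sqrt

fun matchScore(userTaste: DoubleArray, place: DoubleArray): Int {
    var dot = 0.0; var nu = 0.0; var np = 0.0
    for (i in userTaste.indices) {
        dot += userTaste[i] * place[i]
        nu += userTaste[i] * userTaste[i]
        np += place[i] * place[i]
    }
    // cosine similarity, rescaled to a 0-100 "match" percentage
    return (dot / (sqrt(nu) * sqrt(np)) * 100).toInt()
}

fun main() {
    // hypothetical attributes: [vegetarian-friendly, spicy, budget, ambience]
    val taste = doubleArrayOf(0.9, 0.7, 0.8, 0.3)
    val cafe  = doubleArrayOf(0.8, 0.6, 0.9, 0.4)
    println("${matchScore(taste, cafe)}% match")  // 99% match (very similar vectors)
}
```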

Google Lens

Google Lens is also receiving an update and will soon be available in the main camera app of Pixel phones and select LG (including the G7), OnePlus, Sony, Xiaomi and Asus devices. Lens also brings three new features – Smart Text Selection, Style Match and real-time results. The most useful of these is Smart Text Selection, which lets users copy text straight from the camera’s view and paste it on their phones. Lens can also turn a page of content into answers. For instance, when pointed at a menu, Lens will bring up images and ingredients of dishes right within the app to help users make a more informed choice.
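
Lens itself is closed, but ML Kit – which Google also announced at I/O 2018 – gives developers similar on-device text recognition for building a Smart-Text-Selection-like flow of their own. A minimal sketch using the Firebase ML Kit API of the time:

```kotlin
// Camera frame in, recognised text out, using Firebase ML Kit's on-device
// text recogniser. This approximates the idea behind Smart Text Selection;
// it is not how Lens itself is built.

import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun extractText(frame: Bitmap, onText: (String) -> Unit) {
    val image = FirebaseVisionImage.fromBitmap(frame)
    FirebaseVision.getInstance().onDeviceTextRecognizer
        .processImage(image)
        .addOnSuccessListener { result -> onText(result.text) }  // full recognised text
        .addOnFailureListener { e -> e.printStackTrace() }
}
```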

That pretty much wraps up all the key announcements from Google I/O 2018. We still need to tell you about the transformation of Google News to support quality journalism and fight fake news. Stay tuned to Digit.in for more on that.

Adamya Sharma

Managing editor, Digit.in - News Junkie, Movie Buff, Tech Whizz!
