In the past week, both Facebook and Microsoft used their respective developer conferences to give a glimpse into some of the stuff they’re cooking up. Yesterday, it was Google’s turn to lift the lid on what it’s building — for developers and consumers alike.

In the build-up to Google I/O 2019, the internet giant announced a few things it could have saved for the main event — last week, for example, it revealed that it was opening Android Automotive to app developers, and earlier this week it unveiled an Android Auto redesign. In truth, with the usual swathe of leaks, we already had a good idea of what to expect from I/O 2019.

But with all of its main announcements now out of the way, here’s a quick recap of everything Google revealed at I/O 2019.

Artificial intelligence (AI)

AI is a central facet of most tech conferences today, and I/O 2019 was no different.

Some six months after launching its $25 million AI Impact Challenge, Google revealed the winners from 12 nations who will use a Google grant of up to $2 million each to help them apply machine learning to fight some of the world’s biggest challenges.

Kinda sorta related to that — insofar as Google is super keen to demonstrate the benefits of AI for the greater good of society — the company unveiled three accessibility projects designed to help people with disabilities: Project Euphonia, to assist people with speech impairments; Live Relay, to help the hard of hearing; and Project Diva, which is specifically about helping people give Google Assistant commands without using their voice.

AI, of course, is infiltrating just about every nook of the technology industry — and Google was keen to showcase a bunch of new smarts at I/O 2019.

At last year’s event, Google unveiled a new software development kit (SDK) called ML Kit, which helps developers add AI to their mobile apps via Firebase. At this year’s event, Google gave ML Kit a bunch of new features: translation, object detection and tracking, and AutoML Vision Edge — the latter of these new features will enable developers to create custom-tailored image classification models for Edge TPU, ARM, and Nvidia architectures.


Elsewhere on the AI front, the company revealed that Google Assistant will soon be 10 times faster with on-device machine learning, with plans to introduce the turbo-charged Assistant to Google’s own Pixel phones later this year.

For voice app creators, Google announced a number of upgrades to its Actions on Google platform. Developers, for example, will now be able to tether an action to “how to” questions using the newly introduced “how-to” markup language. Google Assistant-powered apps will be better equipped to respond to commonly asked questions such as “How do I tie a tie?” with relevant text, images, and instructional videos.

Above: “How to” template
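Google’s how-to support builds on schema.org’s HowTo structured data, which publishers embed in their pages so Assistant can surface step-by-step answers. A minimal sketch of what such JSON-LD markup might look like (the example steps are illustrative, not taken from Google’s documentation):

```json
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to tie a tie",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Drape",
      "text": "Drape the tie around your collar with the wide end on the right."
    },
    {
      "@type": "HowToStep",
      "name": "Cross",
      "text": "Cross the wide end over the narrow end."
    },
    {
      "@type": "HowToStep",
      "name": "Tighten",
      "text": "Loop the wide end up through the neck opening and pull it down through the knot."
    }
  ]
}
```

Pages marked up this way can be matched to spoken “how to” queries, with each step read out or displayed in turn.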

Lens, Google’s visual search and computer vision tool — capable of recognizing all manner of real-world things, such as plants, animals, text, and celebrities — will soon be able to surface a restaurant’s top dishes when you point a smartphone camera at its menu, highlighting tidbits such as ratings and reviews from across the internet.

Additionally, Google Lens will soon be able to read translated text to you if you point your camera at a printed language you don’t understand, while it will also be able to help with splitting a bill or calculating a tip after a meal.
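The bill-splitting arithmetic here is straightforward; a rough sketch of the kind of calculation Lens performs (the function is purely illustrative, not Google’s API):

```python
def split_bill(total: float, people: int, tip_percent: float = 18.0):
    """Compute the tip, grand total, and per-person share for a bill."""
    tip = round(total * tip_percent / 100, 2)
    grand_total = round(total + tip, 2)
    per_person = round(grand_total / people, 2)
    return tip, grand_total, per_person
```

For an $80 bill split four ways with a 20% tip, this yields a $16 tip and $24 per person — the same answer Lens aims to overlay on the receipt itself.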

If there was any lingering doubt as to how advanced AI is getting, then Google Duplex should help settle it once and for all. Duplex, which started rolling out to mobile phones last year, is a verbal chat agent that can make appointments for you over the phone. At I/O 2019, Google announced that it is expanding onto the web, where it will be able to handle things like car rental bookings.

Finally, Google’s cloud unit announced that it’s making pods with 1,000 TPU chips available in public beta. Google has been developing its own tensor processing units (TPUs) for some time; the programmable, custom chips are designed to power extreme machine learning tasks, and researchers and developers can use them to train AI models.


Android

Nearly two months after introducing the first beta of Android’s tenth major OS release, Google yesterday announced the third beta of Android Q at I/O.

Alongside this launch, Google debuted a new feature for Android Q called Live Caption, which provides real-time continuous speech transcription on your phone — this means that songs, podcasts, phone calls, video calls, and recordings can all be instantly captioned.

In the broader Android sphere, Google announced that Android has now passed 2.5 billion monthly active devices, and it finally refreshed its Android distribution dashboard after six months without updates — it now shows that Android Pie (the most recent stable version of Android) has passed the 10% adoption mark.

Hardware

Google may have started out as a software company, but it is very much embedded in the hardware realm now. At I/O 2019, the Mountain View-based company sought to boost sales of its Pixel-branded phone lineup with the addition of two more affordable mid-range devices — the Pixel 3a and Pixel 3a XL.

Elsewhere, Google also introduced a new Google Assistant smart device for the home. Costing $229, the Nest Hub Max is a 10-inch smart display and video camera, and it will go on sale later this summer.

Above: Nest Hub Max

Image Credit: Khari Johnson / VentureBeat

Augmented reality (AR)

After first demoing the feature last year, Google finally revealed that a really neat new AR feature is arriving in preview for Google Maps on some Pixel phones this week. The “heads-up” mode serves up walking directions overlaid on a phone’s camera view in real time.

Above: Google Maps AR

Google’s AR announcements didn’t stop there. ARCore, which is Google’s SDK for AR app development, already offers an Augmented Images API that allows users to point their cameras at static 2D images and bring them to life. The API will now enable apps to track both moving images and multiple images simultaneously. Similarly, a new Environment HDR mode will harness AI to mimic real-world lighting in digital objects.

Google Search also got a look-in at I/O 2019 yesterday. Navigable 3D AR models will be arriving in Google’s omnipresent mobile search engine, so if you search for something specific — a great white shark, say — you’ll be able to learn about it not only by reading or watching videos, but also by viewing it in 3D AR.

Above: 3D AR of a shark in Google Search

Elsewhere on Search, Google also announced a news recommendation tool called Full Coverage, in addition to a fresh podcast tool that allows users to search for podcasts and save episodes to listen to on other devices.


Privacy & security

Privacy is never far from public debate these days, and as such Google used I/O 2019 to debut a handful of new privacy-focused features. Incognito mode, which you may already be familiar with from Chrome, is coming to Google Maps shortly, and will be followed by YouTube and Google Search later in the year.

Above: Google Maps: Incognito mode

And as for Chrome, the web’s most-used cross-platform browser, Google announced plans to protect users from cross-site cookies and “fingerprinting,” though it didn’t divulge exactly when these changes would roll out beyond “later this year.” The company also mentioned an open-source browser extension for ads, which will highlight the names of all the companies “that we know were involved in the process that resulted in an ad,” it said.

Last month, Google announced a new service that allows any Android phone running Android 7.0 Nougat or higher to double as a Fast Identity Online (FIDO) security key to prevent phishing attacks. The service had apparently been available only in preview until now: at I/O 2019, Google said it is generally available to everyone.

Tooling up

Google debuted a bunch of other tools and services for developers at I/O 2019. This included bringing Firebase performance monitoring to web apps; expanding its Flutter mobile app SDK to the web, desktop, and embedded devices; and adding 10 new libraries to Android Jetpack and introducing a new Kotlin toolkit for UI development.
