Google and MIT have designed an AI camera algorithm for smartphones

Updated on 06-Jun-2020
HIGHLIGHTS

Google made a big leap with the always-on HDR support on the Pixel cameras last year. Could this new algorithm be what powers this year's models?

Companies have tried various ways to improve smartphone cameras. Samsung does it using dual-pixel sensors, while Apple, Huawei, OnePlus and many others put two sensors on their devices. Google, though, came out ahead of all of these companies last year, producing stellar cameras on the Pixel smartphones.

But Google’s focus in 2017 has been on machine learning. The company, in collaboration with scientists from MIT, has developed new software for smartphone cameras. These are machine learning algorithms that can retouch your photos in real time, while you’re taking a shot. While the use of computational algorithms isn’t particularly new, Google’s big achievement here is that the algorithm can seemingly run in real time, at low latency and without consuming too much power.

The researchers trained the algorithm using 5000 photos, all of which had been retouched by professional photographers. This allowed the program to learn what kind of enhancements are needed to make photos look better. The video below shows the system in practice.
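To get a feel for what "learning enhancements from retouched pairs" means, here is a deliberately simplified sketch in Python. It is not the researchers' actual model (which is a neural network producing rich, spatially varying adjustments); instead it fits a single global affine colour transform to a synthetic set of (original, retouched) pixel pairs by least squares, then applies the learned transform to new pixels. All names and the fake dataset are illustrative assumptions.

```python
import numpy as np

# Toy sketch, NOT the Google/MIT system: learn one global affine colour
# transform from (original, retouched) pixel pairs via least squares.
# The idea mirrors the article: show the program "before" photos and
# professionally retouched "after" photos, and let it infer the edit.

rng = np.random.default_rng(0)

# Fake training data: original RGB pixels in [0, 1], plus a pro-style
# retouch (slight contrast and warmth shift) with a little noise.
originals = rng.uniform(0.0, 1.0, size=(5000, 3))
true_matrix = np.array([[1.2, 0.0, 0.0],
                        [0.0, 1.1, 0.0],
                        [0.0, 0.0, 0.9]])
true_bias = np.array([0.05, 0.02, -0.03])
retouched = originals @ true_matrix.T + true_bias
retouched += rng.normal(scale=0.01, size=retouched.shape)

# Fit the affine map [M | b] with ordinary least squares.
X = np.hstack([originals, np.ones((len(originals), 1))])  # bias column
params, *_ = np.linalg.lstsq(X, retouched, rcond=None)
learned_matrix, learned_bias = params[:3].T, params[3]

def enhance(image_pixels):
    """Apply the learned retouch to new pixels, clipped to [0, 1]."""
    out = image_pixels @ learned_matrix.T + learned_bias
    return np.clip(out, 0.0, 1.0)

new_photo = rng.uniform(0.0, 1.0, size=(4, 3))
print(enhance(new_photo))
```

Even this toy version captures the workflow the article describes: the "style" lives entirely in the learned parameters, so once training is done, applying the enhancement to a new shot is cheap — which is what makes a real-time, low-power version plausible on a phone.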

While the technology isn’t necessarily revolutionary, if its impact is as advertised, it could indeed bring about meaningful changes to smartphone cameras. Given the space constraints, smartphone cameras lack the hardware advantages of DSLR cameras. Although OEMs often advertise “DSLR quality” cameras, there’s still a lot of work to be done. In such a situation, software is indeed the way to go for smartphone makers.

Prasid Banerjee

Trying to explain technology to my parents. Failing miserably.
