When the iPad 2 was launched, Jobs famously spoke of it as a post-PC product, heralding what many believed to be the much-hyped "post-PC era". Since then, industry analysts and commentators have, at various points, rung the PC's death knell (often prematurely). Every now and then you'd see reports, based on region-specific usage studies, highlighting how PC usage is declining and how smartphones and tablets are fast becoming the preferred devices for getting online. In developing countries such as India this isn't all that surprising, because we really do have a whole generation of users whose first point of contact with the internet has been the ubiquitous smartphone. However, it's not just the developing world that is experiencing this shift. I came across an Ofcom research study from August 2015 that revealed some surprising findings for the UK. For the first time ever, combined tablet and smartphone usage went past combined PC and laptop usage, making mobile devices the preferred way for people to get online in the developed world as well.
As Digit readers, you probably know all of this already. What is new, however, is the cyclical nature of these developments, as observed by Bob O'Donnell in his piece on the same topic. He notes that the PC market peaked in the last quarter of 2011 and has steadily declined ever since. Similarly, despite the upbeat sentiment surrounding tablets, the category peaked and began to decline in the fourth quarter of 2013, almost exactly two years after the PC slowdown. The story gets even spookier: almost like clockwork, since the start of this year reports have been trickling in showing a decline (or at least a plateauing) in global smartphone shipments.
Obviously, people aren't renouncing smartphones, just as no one really renounced PCs after Apple's 2011 declaration of the post-PC era. In fact, if you look at the fine print within such polling-based reports, you'll find caveats such as the data relating only to usage outside of work. Well, it's kinda obvious that outside of the productivity sphere, most people are indeed glued to their small screens rather than full-scale PCs.
Let’s do a thought experiment.
Let's play along with the hype and imagine what this post-smartphone world would look like. Devices have been offloading the heavy lifting onto the cloud for quite some time now. Maybe the smartphone and tablet will morph into something thinner and leaner (in terms of processing power, if not footprint). Where does this progression end? When the device disappears entirely and the UI vanishes? This is exactly what Bob O'Donnell wonders about in the piece referenced above: an age of device-less computing driven entirely by voice-based interactions, which in turn offload the heavy lifting to cloud-based deep learning algorithms. A post-device era, so to speak. There will still be devices in this future, of course, but they'll be dumb devices backed by a very smart and ever-evolving cloud brain.
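To make the "dumb device, smart cloud" split concrete, here's a minimal Python sketch of what the device-side logic could shrink down to. Everything in it is hypothetical for illustration: the endpoint URL, the handle_utterance function and the response format are placeholders, not any real assistant API. The point is simply that the local hardware becomes little more than a microphone and a network connection.

```python
import json
from urllib import request

# Hypothetical endpoint for an illustrative "cloud brain" service; not a real API.
CLOUD_BRAIN_URL = "https://example.com/assistant/interpret"

def handle_utterance(transcript: str) -> dict:
    """Send a raw voice transcript to the cloud and return whatever action it decides on.

    The local "device" does no interpretation of its own: all natural language
    understanding, context tracking and decision making live server-side.
    """
    payload = json.dumps({"transcript": transcript}).encode("utf-8")
    req = request.Request(
        CLOUD_BRAIN_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # e.g. {"action": "play_playlist", "target": "morning"}
        return json.load(resp)

if __name__ == "__main__":
    # On a real dumb device this transcript would come from an always-on microphone
    # plus cloud speech-to-text; here it is just a hard-coded string.
    print(handle_utterance("play the morning playlist"))
```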
We all saw the Google Home demo reel shown at the recent Google I/O, and while it seems promising, I firmly believe the post-device era is still a little way off. Its time just hasn't come yet. For that device-less future to be viable, other ancillary technologies need to progress at the same rate and become part of a combined ecosystem that enables device-less computing.
I'm talking about technologies such as seamless eye tracking, locational awareness for devices, computer vision and even brain-machine interfaces. It's one thing to tell your "always listening" voice assistant to play the morning playlist, but quite another to say something like "display my Facebook feed on the living room screen" and have it actually mean something. Sure, the tech today is at a stage where your feed will be displayed. But what next? How will you scroll without a device? (Eye tracking.) How will you reply? (A combination of eye tracking and a brain-machine interface.)
Thankfully, there are real-world developments inching us closer to this sci-fi horizon. For example, at a recent IoT conference I got to listen to a fascinating keynote by Colin Angle, the CEO of iRobot, in which he said that natural language processing has reached a stage where machines can understand complex sentences like "Bring me a beer from the fridge". They just can't do anything about it, because they don't yet know where the fridge is. His little Roomba bots are busy mapping people's houses, taking care of at least that piece of the puzzle for when the rest of the ecosystem catches up. Similarly, the kind of technology being developed by Mobileye could be brought into the home to add more context to computer interaction, which will eventually enable device-less computing. The ultimate vision of this future would be the kind we saw in the UK TV show Black Mirror.
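Colin Angle's beer-from-the-fridge example boils down to a gap between parsing an intent and grounding it in physical space. Here's a small, hypothetical Python sketch of that gap: the parsed_intent structure, the house_map dictionary and the plan_fetch function are all made up for illustration, not drawn from any real robotics stack, but they show why a machine can "understand" the sentence yet still fail until something like a Roomba-built map supplies the missing coordinates.

```python
# Hypothetical illustration: intent parsing succeeds, spatial grounding may not.

# Roughly what a modern NLP stack can already extract from
# "Bring me a beer from the fridge".
parsed_intent = {
    "action": "fetch",
    "object": "beer",
    "source": "fridge",
    "destination": "user",
}

def plan_fetch(intent: dict, house_map: dict) -> str:
    """Turn a parsed intent into a plan, if the house map knows the relevant locations."""
    source = intent["source"]
    if source not in house_map:
        # Understanding the sentence isn't enough: without a map built by
        # something roaming the house, the robot has nowhere to go.
        return f"I understood the request, but I don't know where the {source} is."
    x, y = house_map[source]
    return f"Navigating to the {source} at ({x}, {y}) to fetch the {intent['object']}."

# Before the house has been mapped: language works, grounding fails.
print(plan_fetch(parsed_intent, house_map={}))

# After a mapping run has located the kitchen appliances.
print(plan_fetch(parsed_intent, house_map={"fridge": (4.2, 1.7)}))
```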
When do you think we’ll see a post-device era? Let me know in the comments below.