Digit: Talking about Skylake, we tested the 6700K recently and noticed that over Devil’s Canyon (4790K) the performance upgrade was about five percent. Do you think we are hitting the proverbial brick wall when it comes to performance?
Anuj: No, not really. Actually for the ‘K’ series, on the desktop we basically skipped a generation of upgrade. We had a big jump with Haswell. Skylake, from a micro-architecture standpoint, is re-architected. It’s a “tock”, as you know, and it was built to be scalable and give you a big performance boost, with a focus on the media engine – 4K playback and so on. If you look at the notebook space, there is a double-digit jump in performance from generation to generation. When you compare the last product of an old architecture with the first product of a new architecture, typically the delta isn’t that great. If you compare end of life to end of life, there is a huge gain. So think of it as starting off the chute with an improvement, and over time you get the process technology figured out. At the end of the day, these gaming SKUs are all about clocking, so higher IPC – instructions per clock – is one piece of it, but as the GHz starts going up, as the process technology matures, as you implement a lot of circuit fixes, the GHz gives you a big benefit in terms of gaming performance. So a new architecture showing a 5-10 percent gaming gain over a very mature architecture that is at end of life doesn’t surprise me.
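As a rough back-of-envelope illustration of the IPC-times-clock point above – hypothetical numbers, not Intel figures – per-core throughput scales roughly as IPC multiplied by clock frequency, so a small IPC gain and a small clock gain compound into the overall generational uplift:

```python
# Back-of-envelope sketch (hypothetical numbers, not Intel data):
# per-core throughput ~ IPC x clock frequency, so gains in each compound.

def combined_uplift(ipc_gain: float, clock_gain: float) -> float:
    """Combined speedup from an IPC gain and a clock gain, both given as fractions."""
    return (1 + ipc_gain) * (1 + clock_gain) - 1

# Example: ~5% IPC gain plus ~3% higher sustained clocks.
print(f"{combined_uplift(0.05, 0.03):.1%}")  # ~8.2% combined uplift
```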
Digit: Right, like you said it’s a “tock”, so the process is still the same 14nm process as Broadwell. Ivy Bridge brought in tri-gate 3D transistors to enable the shrink. What will be the next technological breakthrough that changes things the way tri-gate transistors did?
Anuj: One thing I can tell you is that there is a lot of physics and material science research at Intel aimed at continuing Moore’s Law as projected. We’ve always said that we never know the roadmap beyond 10 years; we have visibility for about 10 years at any given point in time, and that’s still true today. Through it all, it requires reinvention of the transistor structure, the dielectric, the materials that are used and a host of different things. Who would have thought that a 3D transistor would come to life, right? So I can’t tell you exactly what is planned. Part of it is that I don’t know all the details, and the other part is that even if I knew, I wouldn’t be in a position to tell you. But that’s essentially the IP that we have. People involved in R&D at Intel invest a lot of time, effort and money to find those breakthroughs. So we will see that evolution play out, and that’s certainly part of the 14nm maturing process as well.
Digit: You said that a lot of R&D is going on and that you are trying to approach the problem from the efficiency point of view. The rationale is that in order to make our devices last longer, the processors are becoming more and more efficient. In effect you are hacking away at the silicon, but does Intel by any chance do any R&D into battery tech?
Anuj: Oh sure, we have a microprocessor technology team, and we also have an ecosystem and infrastructure technology team that looks at everything from cables to battery technologies to camera technology and so on. Think about something like the Type-C connector and the Thunderbolt connector – those all came out of research at Intel. Because at the end of the day, if you think about it, your microprocessor is becoming very fast, and now the bottleneck has moved to the memory subsystem. So if you don’t have a well-balanced memory-plus-processor capability, you can crunch all the numbers but you end up waiting for the data to arrive. In this regard you might have heard our CEO talk about 3D XPoint technology, which is revolutionary in nature – its innovation in cost per bit of a storage cell, and in speed and density, is great. So we take a very systems-level approach to everything we do from a compute standpoint, because at the end of the day process technology is one trajectory, but you need the whole system balance to come along to give you the best computing experience.
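To illustrate the balanced-system point in a rough, roofline-style way – illustrative numbers, not Intel’s model – achievable throughput is capped by whichever is lower: peak compute, or what the memory subsystem can feed it:

```python
# Roofline-style sketch (illustrative numbers only): sustained throughput is
# limited either by peak compute or by memory bandwidth x arithmetic intensity.

def sustained_gflops(peak_gflops: float, mem_bw_gbs: float, flops_per_byte: float) -> float:
    """Throughput capped by peak compute or by what memory bandwidth can supply."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# A workload doing 2 FLOPs per byte moved: doubling compute alone changes nothing.
print(sustained_gflops(500, 50, 2.0))    # 100.0 GFLOP/s, memory-bound
print(sustained_gflops(1000, 50, 2.0))   # still 100.0 GFLOP/s
print(sustained_gflops(1000, 100, 2.0))  # 200.0 GFLOP/s once bandwidth scales too
```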
Digit: Since you mentioned Thunderbolt, I’m curious about the adoption of cutting-edge technologies. When can we see mainstream adoption of things like RealSense, WiDi, or even Thunderbolt? Is there no pull from the market, or is Intel not pushing these technologies aggressively enough?
Anuj: No, I think it’s an evolution. Every new technology has an adoption curve, and the rate of adoption depends on many different factors: the computing standard, how much marketing went behind it, what the ecosystem looks like. Look at Wi-Fi circa 2003 – when we packaged it as part of the Centrino kit, it took off, and right after that you saw a big spike in notebooks. That was a consequence of engineering at the platform level, obviously, but more importantly, I would say, of working with the ecosystem – hotspot vendors, for instance – to ensure that for every Centrino there was the Wi-Fi infrastructure that we now take for granted. The same is true for things like Thunderbolt. By virtue of being a bleeding-edge technology, it tends to be expensive when it first comes off the chute. This makes for a bit of a chicken-and-egg problem: you find few partners that are willing to invest in a technology at the bleeding edge of innovation, and over time, as the waterfall happens, it becomes more scalable. At the same time there are certainly competing standards – one way of connecting and moving data versus another. Our approach is primarily that if you standardise, you get the scale and you can get a lot more partners to participate. But sometimes you do have people who have different views of different standards. As far as RealSense is concerned, as far as Thunderbolt is concerned, those will all have different adoption curves. It will be a function of how easy the technology is to use and how cost-effective it is. So I can’t predict exactly how fast or how slow it will go. What I can tell you is that we are working with all our partners on all those technologies and everyone is on board. Sometimes you need an operating system to come along that helps speed things up – that was Windows 10 with RealSense integrated, enabling breakthrough user interface experiences such as actually unlocking your PC with your face. That is of course a step. Now it has to get more consumer awareness and utility, and if that catches on it’ll scale faster.
Digit: Regarding the Microsoft event where we saw the Surface Book and other products unveiled – what kind of involvement did Intel have?
Anuj: I can tell you that Intel has collaborated very closely with Microsoft. That’s been true in the past and it’s particularly true with the launch of Windows 10 as well – everything from the speech algorithms for Cortana, to the DSP and processing required to make that a really fluid interface. Some of the data transfer and the recognition of those third-party devices – those don’t happen magically. They require deep engineering collaboration between the operating system and the silicon manufacturer. So yeah, whether it’s the voice assistant, or the WiGig technology, or graphics optimisation, or 4K video algorithms, there’s a lot of collaboration all the way through.
Digit: What about the HoloLens? They say some custom silicon, an HPU (Holographic Processing Unit), was used. Was that Intel or is that ARM?
Anuj: I’m not that familiar with the HoloLens specifics. We collaborated on RealSense, but I don’t know specifically whether we did anything for HoloLens.
Digit: So we are witnessing tablets, convertibles and phones converging. The lines are blurring between devices and their intended purposes. We’ve got PCs on a stick and NUC boxes now. As a chipmaker, what is Intel’s game plan in the portable space, specifically for Core M and the Atom SoCs?
Anuj: So we have a roadmap for all of these. Skylake is a scalable architecture. The same microprocessor architecture can be taken all the way down to a compute stick, into a thin-and-light notebook, into a mobile workstation, into a gaming desktop – so that’s the one architecture that spans the full gamut. There are some devices that are purpose-built and low-powered, and the Atom processor line caters to those kinds of devices, typically in the, let’s say, 8-inch-and-below screen size category. Because the higher you go in screen size, the more power it requires, and you need more performance as well. So think of it as: we’ve got the Atom line and we’ve got the Core line. Core M is a line focused more on mobility and mobile computing – think of the thinnest and lightest notebooks, where mobility is an everyday concern. But if you want performance and you don’t want that hourglass to appear when you’re video editing, then you go with the Core i-series.
Think of it as one being more mobility focused and one more performance focused, and within those you have good, better and best segments. So it’s Core M3, Core M5, Core M7 and Core i3, i5 and i7. That segmentation has worked really well for us. In the end user’s mind, too, they first have to make a simple decision: do you want an exclusively performance-oriented device that pushes the envelope, or do you want a device that’s a lot more mobile? Once you make that decision, you choose from the good, better and best options.