VMware and NVIDIA talk cloud, HPC, GRID 2.0 and more
We sit down with Ramesh Vantipalli of VMware and Sundara Ramalingam of NVIDIA to learn more about their collaboration in the enterprise industry, and about VMware Horizon and NVIDIA GRID.
It's unlikely that you haven't heard of either VMware or NVIDIA; both are well-established market leaders in virtualization software and graphics technologies, respectively. So why have the two giants come together in this day and age? Read on to find out.
Digit: Could you elaborate on NVIDIA and VMware's collaboration in the cloud space?
VMware: So, first let's establish that mobile and cloud are the two things on everybody's radar. We call these SMAC (Social, Mobile, Analytics and Cloud). Customers want to ensure that they mobilize their applications, so they want to make sure that they can build a digital workspace, right? People want to deliver any app, on any device. That's the key mantra seen in the market. The question we need to ask ourselves when building a digital workspace is, "What is a digital workspace?" A digital workspace is nothing but a place where a user goes and works. But traditionally, for the last 20 years, we have lived in a PC world: you buy a PC, you work on the PC, and if you require graphics, you buy a graphics card and start working on it.
That's the typical way of working, right? Eventually, though, we're moving all those applications to the cloud. When you're running everything on the cloud, the second question that comes up is, "Why don't I run my PC from the cloud?" The idea is to free the user from the PC-centric approach and move to a user-centric approach: give users the ability to pick up any device they want, comfortably connect to the technology they require, be it an app or a desktop, and comfortably work on that. That is the motive behind VMware.
The technology of desktop virtualization is pretty old. It's been around since 2007, and we're in the 7th generation now; the product release that we've done is Horizon 7. One of the biggest hurdles we've crossed, thanks to NVIDIA, is that not all things are easy to virtualize, and one of those components is graphics. In the past, we couldn't virtualize graphics, because that's one of the hardest things. That was a drawback for companies in the architecture, construction and engineering verticals, and in industries such as design and manufacturing, media and entertainment, education, oil and gas, healthcare, government, etc. These are just some of the industries that pushed back, even though they like this concept, because fundamentally it helps them secure the entire endpoint: your entire information is in the cloud, and the user is only viewing it.
Everybody likes that, but the problem is, say I'm running CAD (Computer Aided Design) software; there were all these applications that I could not run, since this technology of VDI (Virtual Desktop Infrastructure) did not support graphics virtualization. That's where the evolution of workspace virtualization came up. Why NVIDIA and VMware like this is because the market opportunity that we're seeing is 1 billion per year, based on quarterly observations, and NVIDIA has an 82% market share of that. Imagine that we provide the capability of virtualizing the cards. This is called GPU virtualization, and it allows the most complex applications to be run remotely, which helps people because they can work and feel no difference between a physical PC and a virtual one. That is where the technology started coming up to the mark.
Digit: Ramesh, I have two questions. First, you briefly spoke about how GPU virtualization was a tough cookie to crack; could you shed some light on that? And is VMware's decision to partner with NVIDIA based on the fact that they occupy a dominant market share, or is there something in their technology that is better than their competitors'?
VMware: That's a very good question, but from a leadership standpoint, NVIDIA is the market leader when it comes to graphics virtualization. There are other vendors in the market as well, and they also have support and compatibility with VMware and our partners. But more or less, the moment you think about graphics, the first name you think of is NVIDIA, and then there is the innovation. Look at the amount of work we do between NVIDIA and VMware; it is phenomenal. It is not a product-to-product partnership, we do co-engineering between the companies. We ensure that the technology seamlessly integrates into the virtualization technology, so the customer gets a wonderful experience when they're actually working on this. In fact, NVIDIA will talk more about this, but we went to the extent of letting people try GRID: if you want to evaluate it in the cloud, you can request a desktop, it's at your fingertips, it can run the most complex applications, whether 2D or 3D apps, and then you can look into the performance and take a decision.
Digit: I was going through your documentation regarding VMware Horizon 7, and it seems you've implemented a new display protocol called Blast Extreme. How exactly is it advantageous compared to the existing protocols in this space?
VMware: You know the protocols from the past, like PCoIP and RDP; those are all proprietary protocols that vendors built. To explain what happens in these technologies, think of Netflix as a company. In the US, I see this trend with Netflix: you buy a $20 device, which is Android based, and you can comfortably watch a movie on those small devices. Do you know how it is possible to do that?
Digit: We just assume it happens magically.
VMware: What they have done is, instead of creating proprietary technology, they've used what is already built into the underlying platforms, be it Windows, Mac, Android, iOS, or any other device in the market today. One of the common things in the market today is H.264. The advantage here is, if you look into the PCoIP or RDP protocols, they do encoding and decoding, and encrypting and decrypting, because at the end of the day you're rendering pixel data to an endpoint device. The encoding is done in the data center, you're just seeing the pixels and interacting with them, and the execution actually happens in the cloud, in the case of desktop virtualization. If you really look at it, those protocols use a software mechanism: you install client software on an endpoint, and both the encoding and the decoding are done in the software layer. Today, the reality is that we all carry mobile phones, and when you want to download any application on your phone, the first thing you ask is how much battery it consumes. Users want an app which consumes less battery and can comfortably run for many hours. Looking at this, we realized: why don't we use H.264 coding, because it is natively built into every $20 device in the market, rather than use a proprietary protocol and do the encoding and decoding in software? This was inspired by the Blast protocol, because HTML5 has become an industry standard, and we all know how the industry is moving towards HTML5. Using H.264 along with HTML5 standards, over TCP, we came out with the Blast Extreme protocol, and we could create wonders: it gives high performance, especially over high-latency, low-bandwidth links, and ensures that the device's battery consumption is very minimal compared to other protocols in the market today.
Digit: Speaking of codecs, we know that H.265 is now the upcoming standard in all low-wattage devices. Even small 4 watt or 6 watt embedded processors have support for H.265. That's one advantage of using H.265 over H.264.
VMware: Any new protocol needs time to mature. As of now, from a maturity standpoint, H.264 is the more widely adopted, because you may have old devices. From a supportability standpoint, we will work with both; devices which are available in the market are compatible with both and have no problems. But later, we do want to work with the upcoming standards as well.
NVIDIA: To be honest, Ramesh summed it up perfectly, and that's why I was keeping quiet. There is nothing more for me to add. On the way graphics cards evolved and the question about H.264 and H.265, it's true that H.265 is not yet widely adopted and we're still on H.264. Whatever Ramesh said is 100% true, and there is nothing more for me to add.
VMware: And also, moving on to the part where graphics-intensive apps are required by the user. In the market today, if you categorize users into 4 kinds – which is typically what we do – we start with task workers. Task workers are basically the users who use apps like SAP, Salesforce, Windows, PowerPoint. These are the things categorized as task workers. So, in VMware, three years back, before NVIDIA, we were pressured by customers who said that they can't use a computer without graphics. VMware is a company known for virtualization: anything in the world you want to virtualize, we do it. So we did something called software-based 3D around 3 years back. That's the one that basically helps you work with lightweight, entry-level 3D apps.
When you move on to knowledge workers, these are people who use OpenGL and DirectX applications. The next level is power users, people who use Adobe applications. Then at the designer level, there are applications like Siemens and AutoCAD, which are really high-end.
If you really look into the categories, the number of users is not that large. You could say 100 million task workers, 400 million knowledge workers, 200 million power users, and 25 million designers. That is the potential we're seeing when you look into the market. We also started asking questions; manufacturing is one of the key verticals for 3D, and Sundara is very experienced in that space, he can elaborate on it. We always hear from customers that workspaces are changing now; it's no longer PC-only, people are buying more devices, people want to comfortably sit and work anywhere. In fact, if you go to the VMware website and go to the Horizon digital workspace page, you see a nice page where a woman owns a shop, they are designing things on the shop floor and sending the work to another team, and that team is sending things back to her. People want interactivity. This is only possible if you make the consumer world more simplified and help them use the most complex applications easily. This is one of the reasons why we see a huge amount of demand for 3D graphics.
Digit: How easy is it to deploy a VDI instance?
NVIDIA: It's absolutely easy. In the data center, you have a server with the NVIDIA GRID 2.0 card installed. Because it's all driven on the virtual layer, provisioning and deprovisioning a desktop takes seconds: the applications which are required are already pre-provisioned and packaged, or you use a technology like App Volumes, which can rapidly provision these applications into desktops. Between the traditional approach and the new approach, you see a fundamentally big difference. The new approach is faster, you can provision apps more efficiently, and you don't need to go through the mundane tasks of installing the OS, security patches, antivirus and everything; these applications are really complex, and if you don't install them, you might run into problems. Whereas, in virtualization, you just need a desktop, and whether you need a thousand or ten thousand, you can deploy them.
Digit: So, this is GRID 3.0?
NVIDIA: We don't call it GRID 3.0, but we have gone beyond 2.0. GRID 3.0, perhaps, but we don't call it that. 2.0 is what we announced last October, along with VMware; in fact, we were at vForum, along with VMware, to announce 2.0. But after 2.0, there have been a lot of improvements to the software.
Digit: You're using the Maxwell architecture; how long till you move on to Pascal? Because I believe Pascal is going to be announced around May 5th – 8th.
NVIDIA: Pascal has already been announced, at a platform called GTC, which stands for GPU Technology Conference. Our CEO, Jen-Hsun Huang, has announced it already. The specific plans on when GRID will move to Pascal haven't been announced yet, and I won't be able to comment on roadmaps, because at this point in time it is too hazy. But we will certainly transition all our GPU offerings to Pascal when the time is right.
Digit: Do your data centers exist in India, at the moment?
NVIDIA: We do have a huge development team based in India, and yes, a lot of development happens out of India.
Digit: I’m talking about data centers where the actual GRID is implemented, where the GPUs are actually present. Are these present in India, or are we looking at an overseas connection?
NVIDIA: We do have customers in India, but the infrastructure belongs to the customers. See, if we sell our solution to a customer, it sits in that customer's own data center. Is GRID offered as a hosted service to customers? The answer is no.
Digit: So, it’s focused more around the engineering industry, in India, and not the gaming industry? Because GeForce NOW is an implementation of it.
NVIDIA: We have two different lines of business, as you already know. One is the gaming business, which is the consumer business, and the next is the professional business, which is not gaming but is used by manufacturing, media, finance, oil and gas, etc. Everything about GRID we've been discussing till now is on the professional line. We address multiple segments in that. The top one, as Ramesh mentioned, is the manufacturing segment. We also have great solutions, from both VMware and NVIDIA, for the AEC business. AEC stands for Architecture, Engineering and Construction, which is a hugely booming industry all over the world, and even in India; the GRID solutions are ideally suited for the AEC business. Then, on a need basis, we address media and entertainment, and to some extent we also cater to the oil and gas segment. All these segments are catered to by GRID. We offer virtualization in gaming as well: NVIDIA also offers gaming as a service, wherein we host our own data centers with GeForce GTX cards. That solution is called GeForce NOW. That is a completely different topic, and it is not related to what we've been talking about till now. Just one more point to add: GeForce NOW is offered only in the US, and not in India, as of now.
Digit: I was going to ask when it’s going to come to India.
NVIDIA: I won't be able to comment on that now. Although, I promise I'll tell you when it's confirmed.
Digit: All of us here at Digit think that the professional side of NVIDIA is a well-kept secret; we interact with and have a lot of exposure to the consumer side of NVIDIA's business. So, is there anything you'd like to discuss about how NVIDIA is leading the charge on the professional side: innovative business models, the new services that are coming out into the market? Anything that would help consumers understand what's going on behind the scenes.
NVIDIA: In India, we enjoy a market share in the high 80s as far as professional graphics cards are concerned, and I'm talking about across the spectrum: from conventional desktop workstation GPUs, to GRID, to high performance computing. That is something we take very seriously, and we strive very hard to keep maintaining and exceeding that market share.
We have three types of solutions as far as the professional offerings are concerned. One is the desktop workstation, the conventional way of working on a graphics card, where a workstation sits at each user's desk with one or two graphics cards plugged in, and the user works on it. That is the traditional way of working.
The second is GRID, where we remove the workstation from the engineer's or the animator's desk and put it in the data center, and with the help of VMware, we deliver the graphics capability from the data center to the user's desk. The user can be in the same building, a different building, a different state, or a different country; it doesn't matter. Irrespective of location, wherever he's physically present, thanks to the GRID solution, he gets exactly the same user experience as if he were working on a desktop workstation sitting at his desk.
The third offering that we have is high performance computing. If you look at a GPU, it is very unlike a CPU. It has a very interesting architecture, with a large number of cores, thousands of cores; a typical standard CPU has only around 18 cores, whereas we're talking about more than 4,000 cores in the highest-end GPU we have for sale today. Because of its inherent design, a GPU is able to handle parallelized applications. By that, I mean applications that can be broken down into smaller tasks, where each of the tasks can run in parallel. Because of this, GPUs are better suited to running high performance computing parallelized codes. This is a very important segment for us. Anybody who is into CAE, CFD, molecular modelling, or video analysis; these are all very niche software domains which can be parallelized. So, we address that segment.
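To make the idea of a parallelizable workload concrete, here is a minimal Python sketch (our own illustration, not NVIDIA code): a vector addition is split into independent chunks and each chunk is computed by a separate worker. On a GPU, the same decomposition is spread across thousands of cores instead of a handful of CPU processes.

from concurrent.futures import ProcessPoolExecutor

def add_chunk(args):
    a_chunk, b_chunk = args
    # Each chunk depends on no other chunk, so all of them can run in parallel.
    return [x + y for x, y in zip(a_chunk, b_chunk)]

def parallel_vector_add(a, b, workers=4):
    # Break the big task into independent pieces, one per worker.
    step = (len(a) + workers - 1) // workers
    chunks = [(a[i:i + step], b[i:i + step]) for i in range(0, len(a), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [v for chunk in pool.map(add_chunk, chunks) for v in chunk]

if __name__ == "__main__":
    a = list(range(1_000_000))
    b = list(range(1_000_000))
    c = parallel_vector_add(a, b)
    print(c[:3], c[-1])  # [0, 2, 4] 1999998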
In the high performance computing segment, we're seeing very, very interesting times, with something that has been coming about over the last 3 to 4 years. It's called deep learning. So what is deep learning? We have all heard the term artificial intelligence, and a subset of it, called machine learning, was formed; deep learning is in turn a subset of machine learning. Let me explain deep learning very quickly. How does a computer traditionally work? You tell a computer that something needs to be done. How do you tell the computer that? You create a program, using a computer language, and get the computer to run it. The computer understands the program and then does what you want it to do. That is the traditional computing model.
Now, deep learning is a new computing model where, instead of telling the computer what to do, you teach the computer what to do. You give it multiple scenarios, and the computer is able to analyze those scenarios and draw patterns from them and from the data you give it. So you give a massive amount of data, and the system becomes intelligent enough to find patterns in it, and once a pattern is identified, the computer is trained to recognize it. The next day, when the computer encounters a scenario it has never seen before, it is able to refer to whatever it has learned and take the right decision. It's a highly parallelized workload, and multiple tasks can happen simultaneously; sifting through the training data is a parallel task, which loops back to the high performance computing I explained earlier. As a company, deep learning is a major focus area for us, and we develop a lot of solutions for it. I'm sure you can imagine that it involves a lot of software in addition to hardware, and NVIDIA is committed to developing solutions in both. We expect technology to keep moving towards deep learning; we're sure it's going to revolutionize the IT world, whether that happens two, three, or five years from now. Everything that touches human life, whether it controls or is controlled, will have been touched by deep learning. There are related fields too, called big data analytics and the Internet of Things; these are all intertwined, and all these technologies scale up extremely well on GPUs. Certainly, NVIDIA is interested in this, and deep learning is a major area for our professional offerings.
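To show the "teach instead of program" idea in miniature, here is a small Python sketch (our own toy example, nothing to do with NVIDIA's deep learning stack): a single perceptron is never told the rule behind the labels, it infers the rule from examples and then handles inputs it has never seen.

import random

random.seed(0)

def true_rule(x, y):
    # The hidden rule that generated the labels; the model is never shown this.
    return 1 if 2 * x + y > 10 else 0

# Labeled training data: the "massive amount of data", in miniature.
data = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(2000)]
labels = [true_rule(x, y) for x, y in data]

# A single perceptron: weights start at zero and are nudged after every mistake.
w1, w2, bias, lr = 0.0, 0.0, 0.0, 0.01
for _ in range(20):                          # a few passes over the data
    for (x, y), target in zip(data, labels):
        pred = 1 if w1 * x + w2 * y + bias > 0 else 0
        error = target - pred                # -1, 0 or +1
        w1, w2, bias = w1 + lr * error * x, w2 + lr * error * y, bias + lr * error

# Scenarios the model has never seen before: it generalizes from what it learned.
print(1 if w1 * 7 + w2 * 1 + bias > 0 else 0)   # expected 1 (2*7 + 1 = 15 > 10)
print(1 if w1 * 1 + w2 * 2 + bias > 0 else 0)   # expected 0 (2*1 + 2 = 4 < 10)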
Digit: You just gave me another question to ask. You spoke about the future and these new markets that are coming up, and new applications of GPUs, which, at the hardware level, are able to handle certain tasks better than a CPU. Talk to us a little about how security will be managed, right down to the hardware level, and even on the software level as far as VMware is concerned. I think that problem is not addressed enough; even at the x86 or ARM architecture level, a lot of the dependency is on the software being robust when it comes to talking to the cloud. If you could shed some light on that, it would be good information to have.
VMware: So, the first thing is that the data doesn't leave the data center. It is only the video information that leaves the data center; that is sent over the network and received by the user, so there is no chance of the data being stolen. But what about the pixel information itself? Even if people don't have access to the data, what if someone intercepts the stream en route to the user? Well, we use a very powerful compression: the data is compressed and encrypted in the data center, and at the endpoint it is decrypted and decompressed. What travels is compressed and encrypted, and it is 100% safe. If you look at it, we all do net banking, and this is something similar to what we are doing; net banking arguably carries a bigger risk, so we believe this is a very safe way of operating.
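As a rough illustration of the compress-then-encrypt pipeline described above, here is a minimal Python sketch: zlib stands in for the video codec and Fernet, from the third-party cryptography package, stands in for the protocol's cipher. Neither is the actual Horizon or Blast Extreme stack; this is purely our own example.

import zlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared between data center and endpoint
cipher = Fernet(key)

# "Data center" side: compress the rendered frame, then encrypt it.
frame = b"pixel data for one rendered frame " * 100    # stand-in payload
compressed = zlib.compress(frame, level=6)
on_the_wire = cipher.encrypt(compressed)                # only this travels

# "Endpoint" side: decrypt, then decompress, then display the pixels.
received = cipher.decrypt(on_the_wire)
restored = zlib.decompress(received)
assert restored == frame
print(len(frame), len(compressed), len(on_the_wire))    # raw vs compressed vs encrypted sizes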
Digit: Are you at liberty to discuss any of the clients you work with?
VMware: From the VMware standpoint, global references are available: if you go to the VMware vGPU page, you'll see the public customers that have been posted. In India, there is a referenceable customer called MSA Technologies. One very quick addition: we concentrate on the physical security of the data, which is top priority. Think of a scenario where an engineer just plugs in his thumb drive to copy a song, or goes to some site to download a song, and inadvertently infects his computer with a virus, compromising his data. Once the data is gone, it's gone forever. So, in addition to preventing physical theft of data, the solution also ensures that the data doesn't get infected by a virus, even when it is exposed to thousands of desktops.
Jayesh Shinde
Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.