Yesterday Nvidia dropped some juicy information on its Volta graphics cards. Nvidia unveiled the Volta GV100, taking the same route AMD did when it announced the Radeon Instinct MI25 for data centers.
So, this Volta GPU is not consumer-ready just yet. The Volta GV100 will be used in data centers on server racks; the typical use is highly compute-intensive workloads such as accelerated graphics, number crunching for simulations, AI, deep learning, and machine learning.
If the MI25 Instinct was an indicator of what AMD's consumer-grade GPU, the RX Vega, will be like, hopefully the Volta GV100 will do the same for Nvidia's consumer-grade Volta products later this year or in 2018.
Nvidia will use a 12-nanometer FinFET process; AMD will use a 14-nanometer FinFET process. The transistor count is 21 billion for Nvidia versus 15-18 billion for AMD. Nvidia has 5120 CUDA cores, while AMD has 4096 stream processors. Nvidia's performance is 15 teraflops of single precision (FP32) versus 12.5 teraflops FP32 for AMD. Both companies will use 16 gigabytes of HBM2, but Nvidia has much higher bandwidth: 900 gigabytes per second versus AMD's 512 gigabytes per second. I think that's because Nvidia's memory clock is higher.
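Those peak FP32 numbers aren't magic; they fall out of a simple formula: cores × boost clock × 2 (one fused multiply-add per cycle counts as two floating-point operations). Here's a quick back-of-the-envelope sketch; the boost clocks used are assumptions based on the published figures for these cards, not something stated above.

```python
def peak_tflops(cores: int, boost_clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Peak single-precision throughput in teraflops.

    flops_per_cycle = 2 because a fused multiply-add (FMA)
    counts as two floating-point operations.
    """
    return cores * boost_clock_ghz * flops_per_cycle / 1000.0

# GV100: 5120 CUDA cores at an assumed ~1.455 GHz boost
print(round(peak_tflops(5120, 1.455), 1))  # ~14.9, matching the ~15 TFLOPS quoted

# MI25: 4096 stream processors at an assumed ~1.5 GHz boost
print(round(peak_tflops(4096, 1.5), 1))    # ~12.3, close to the 12.5 TFLOPS quoted
```

So the gap between the two cards comes from both the extra cores and a slightly higher clock on Nvidia's side.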
The specs here for Nvidia are a little higher than Vega's. This is to be expected because this is a new generation of GPU architecture for Nvidia, and it's coming out after AMD revealed all the information on the MI25 Instinct. Knowing that, expecting less from Nvidia was out of the question.
The new thing in tech these days is deep learning and AI; every tech company is focusing on it right now. It's all about artificial intelligence, machine learning, better computing, and smarter computing. So, to capitalize on that, Nvidia's Volta will have 640 Tensor cores built onto the GPU, giving it deep-learning computing power that Nvidia claims is equivalent to 100 CPUs.
If you don't know what a tensor processor is: Google introduced the idea to most people when it announced its Tensor Processing Unit (TPU). A TPU is just a machine learning microprocessor, a specialized processor that does only one type of computation, machine learning, and it does it at 8-bit precision. It's designed for efficiency because it only calculates one thing, whereas a normal CPU can be 32-bit or 64-bit but does all kinds of computation. The CPU is a generalist, and as a result it's a lot slower than a tensor processor when dealing with machine learning.
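The tensor cores on Volta work along the same lines: each one performs a fused 4x4 matrix multiply-accumulate, D = A × B + C, as a single operation (multiplying in FP16 and accumulating in FP32). This pure-Python sketch just illustrates the arithmetic that the hardware fixes in silicon; the function name is mine, not an Nvidia API.

```python
def matmul_accumulate(A, B, C):
    """D = A @ B + C for 4x4 matrices given as lists of lists.

    This is the operation a single tensor core performs in one step;
    doing it in hardware is what makes deep-learning math so fast.
    """
    n = 4
    return [[sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
             for j in range(n)] for i in range(n)]

# Identity matrix times identity, accumulated with a matrix of ones:
I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
C = [[1.0] * 4 for _ in range(4)]
D = matmul_accumulate(I, I, C)
print(D[0])  # [2.0, 1.0, 1.0, 1.0]
```

Deep-learning workloads are dominated by exactly this kind of matrix math, which is why a chip full of these units can stand in for racks of general-purpose CPUs.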
Also, with Volta, Nvidia claims a 5x performance improvement over Pascal in peak teraflops, and 15x over Maxwell. With the Volta GPU for data centers, customers will pay a good premium for these products because they need them, and that's why Nvidia is able to stack a bunch of technology into it, especially HBM2.
For the consumer version of Volta, they are going to use GDDR6, which right now holds its own against HBM2, though in the long run HBM2 and HBM3 will beat out whatever GDDR generation exists at that time. So, for the consumer version, Nvidia will use GDDR6 because it's cheap and mass-producible. Because Nvidia cards do sell, a lot more than AMD's, Nvidia wants to have a good supply ready for when consumer Volta comes out.
Volta for data centers will ship in the second half of this year, while the consumer Volta is rumored to release in Q4 2017 or early 2018.