Intel have officially revealed yet more details about their Xeon Scalable platform family, known as Cascade Lake. As we all know, AMD have been very aggressive with their server plans: we've seen EPYC, known as "Naples", and we've got details such as the 7nm Zen 2 cores, which are supposedly arriving in 2018.
Assuming there are no delays, "Milan" is going to hit in 2019. It will also use 7nm, but on the plus fabrication node, and it will be based on the Zen 3 core, which, as you can imagine, we can presume will bring some IPC gains and possibly more aggressive clock speeds. Maybe there will be some additional platform features too, but for now we'll just have to wait.
What is confirmed is that Intel's Cascade Lake is going to be based on a Skylake-SP refresh, also using 14nm, but likewise on the plus node. Yet there's a kicker, the thing that is probably going to separate it from AMD: support for up to 6 TB of Optane DIMMs, which is absolutely insane. According to Intel:
“Intel persistent memory will allow users to improve system performance dramatically by putting more data closer to the processor on nonvolatile media, and do it in an affordable manner. This will truly be a game-changer when it comes to the way applications and systems are designed.”
So, for those who do not understand what that means: basically, if you were to shut down the machine, the data would still be resident in memory. You can think of it almost like an SSD, but operating much faster. Optane isn't quite as fast as, say, high-end DDR4 memory, but its capacity is absolutely ginormous in comparison.
And the second thing is that this goes into the DIMM socket, so essentially you can think of it as putting in 6 TB directly. Imagine your own PC: say you own a Skylake CPU, and rather than having RAM in there, you take out the DDR4 memory and put sticks of this stuff in, and you've got 6 TB of resident memory, which is absolutely amazing. Eventually we'll probably see this technology trickle down to us. One of the things Intel are doing here is offering "SAP HANA" support. It is aimed at organizations that deal with absolutely massive data-sets, and it allows you to do things such as predictive analytics, spatial data processing, text analytics, search, streaming analytics, and so on.
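To make the persistence idea concrete, here is a minimal illustrative sketch. Note the assumptions: real Optane DC persistent memory is programmed through dedicated libraries such as Intel's PMDK rather than plain file mapping, but a memory-mapped file gives the same flavor of byte-addressable stores that survive a "power cycle".

```python
import mmap
import os
import tempfile

# Illustrative analogy only: a memory-mapped file stands in for a tiny
# persistent-memory region. Stores survive closing and reopening the
# mapping, much as data in an Optane DIMM survives a shutdown.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")

# "Power on": create a 4 KiB region and store data into it.
with open(path, "wb") as f:
    f.truncate(4096)
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        region[0:5] = b"hello"  # byte-addressable store, like writing to RAM
        region.flush()          # ensure the store reaches durable media

# "Power cycle": reopen the region -- the data is still resident.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        recovered = bytes(region[0:5])

print(recovered)  # b'hello'
os.remove(path)
```

The point of the sketch is the programming model: instead of serializing data out to an SSD through a filesystem and reading it back, the application simply keeps working with data that is already durable in memory.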
Basically, we are dealing with absolutely ginormous sets of data here, and according to Intel, with the upcoming Xeon processors you will be able to run up to 3 TB of memory per system in a 4-socket configuration, whereas in an 8-socket configuration up to 6 TB of memory is usable.
And from what Intel are telling us, compared to a Broadwell-E CPU you are looking at about 1.6 times the performance. Bear in mind that when you are dealing with amounts of data this big, a 60% increase in performance is very important: it means you can run absolutely huge queries without worrying that you are going to run out of memory. 3D artists will certainly like this Optane technology too.