During a roughly two-hour keynote, NVIDIA laid out its bets for AI and especially for data centers. The weight of the presentation fell not only on AI products, but on the integration of Mellanox DPUs into the company's portfolio and on a large amount of AI-based software for applications as varied as transportation, medicine, and manufacturing. Overall, it was a presentation that places NVIDIA's future not in PC gaming but in the wider world of AI and Big Data, where the PC is only a small part of the entire ecosystem.
NVIDIA defines Omniverse at GTC 2021
Years ago, reality simulators like Second Life appeared, extremely rudimentary compared with what can be created today. We can also point to games like Minecraft which, despite their simplicity, show a great capacity for simulating virtual worlds.
Imagine for a moment such a simulation running on dozens or even hundreds of GPUs, able to act as a twin of the real world and simulate the interaction between objects, from the simplest components up to complex systems. Imagine, for example, simulating a car not as a single part but as a system made up of different parts that interact with each other to form the complex system that is the car itself.
The usefulness of Omniverse? The idea is to use simulation to create virtual environments in which computer-vision-based AIs can be trained, which is ideal, for example, for building artificial intelligence models for autonomous driving.
The enormous combined power of tens or even hundreds of GPUs allows Omniverse to create ultra-realistic virtual environments, which can be used to produce audiovisual content such as advertisements without having to be on site, and even to make variations of that content through advanced simulation.
The Mellanox purchase has brought DPUs, Data Processing Units, into the NVIDIA portfolio. These are essentially SmartNICs, and therefore advanced network controllers. Their job? To free the CPU and GPUs from the complex tasks of data transport and memory access by carrying them out on this type of specialized processor.
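A rough back-of-envelope calculation illustrates why offloading data transport to a DPU frees up so much CPU time. Every figure below is an illustrative assumption (link speed, per-packet cost, clock rate), not an NVIDIA number:

```python
# Back-of-envelope: CPU cores consumed by software packet processing
# at high line rates. All figures are illustrative assumptions.
line_rate_gbps = 200        # assumed: a 200 Gb/s class SmartNIC link
packet_size_bytes = 1500    # typical MTU-sized packet
cycles_per_packet = 5000    # assumed cost of full network-stack handling
cpu_freq_ghz = 3.0          # assumed server CPU clock

packets_per_second = line_rate_gbps * 1e9 / 8 / packet_size_bytes
cores_needed = packets_per_second * cycles_per_packet / (cpu_freq_ghz * 1e9)
print(f"{packets_per_second / 1e6:.1f} Mpps -> "
      f"~{cores_needed:.1f} cores busy just moving data")
```

Under these assumptions the host would burn well over two dozen cores purely on packet handling, which is exactly the work a DPU takes over so the CPU and GPU can run application code.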
NVIDIA first deployed its BlueField processors in its cloud service, GeForce Now, where offloading certain tasks from the CPU and GPU to the BlueField DPU reduces the latency of the Cloud Gaming service. But NVIDIA has quite ambitious plans for its range of DPUs: it intends to integrate them with an ARM CPU based on A78 cores (curiously, the same ones used by the future Tegra Orin) for 2023, and later to integrate the entire package with a GPU.
At this GTC 2021 NVIDIA showed nothing about BlueField 4, but at the end it spoke of a new NVIDIA Tegra called Atlan. Looking at the description NVIDIA gave of Tegra Atlan, you do not have to be very clever to see how the BlueField and Tegra ranges will end up merged into one from 2024 onward.
NVIDIA renews its DGX and its A100 at GTC 2021
NVIDIA did not present any new graphics architecture, but it did renew its DGX computers, which ship with several NVIDIA A100 GPUs. These are based on the Ampere architecture for high-performance computing, which should not be confused with that of the RTX 3000 series, since the two differ on quite a few points.
The novelty? Instead of 40 GB of HBM2 memory, the A100 now carries 80 GB, so its VRAM capacity has doubled.
On the other hand, NVIDIA has created a new product range called Aerial A100, which combines a BlueField processor with an A100 to create a kind of advanced base station for 5G communications. It is a new product range born from the Mellanox purchase, and we have yet to see how it develops, but it shows NVIDIA's interest in entering new markets, especially Big Data and artificial intelligence, where enormous amounts of information are handled, much of which will travel over 5G networks.
NVIDIA Grace, NVIDIA’s ARM CPU for HPC
It is not the first time that NVIDIA has developed a CPU based on the ARM ISA; we have already seen several of its attempts in various NVIDIA Tegra chips. The first attempt was Denver in the Tegra K1, the Tegra X2 received Denver 2, and Tegra Xavier, the most recent of the Tegra SoCs, used the Carmel architecture as its CPU. The big difference is that this time Grace is not going to be part of an SoC, but rather a high-caliber standalone CPU that NVIDIA will use to build its servers for 2023, the date Grace will appear on the market.
But what specs does Grace promise? NVIDIA has not yet said how many cores it will have, but we can expect dozens, possibly even more than a hundred. How do we know? Because its interfaces to memory, the GPU, and other CPUs are truly massive.
What draws the most attention is an LPDDR5X interface of more than 500 GB/s, which points to a large CPU: reaching that level of bandwidth requires a very wide memory interface and therefore a huge amount of chip perimeter. To that must be added 900 GB/s of NVLink 4 bandwidth to the GPU and 600 GB/s to another Grace CPU, which also indicate a large chip.
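As a rough sanity check, the 500 GB/s figure can be turned into an estimate of the memory bus width Grace would need. The per-pin data rate below is an assumption (the LPDDR5X specification tops out around 8533 MT/s); NVIDIA has not disclosed the actual bus width:

```python
# Estimate the LPDDR5X bus width needed for ~500 GB/s.
# Per-pin rate is an assumption; NVIDIA has not published Grace's bus width.
target_bandwidth_gbs = 500    # GB/s, figure quoted by NVIDIA
pin_rate_gts = 8.533          # GT/s per pin (assumed top LPDDR5X speed)

# Each pin carries 1 bit per transfer:
# bandwidth (GB/s) = pin_rate (GT/s) * bus_width (bits) / 8
required_bus_width_bits = target_bandwidth_gbs * 8 / pin_rate_gts
print(f"Required bus width: ~{required_bus_width_bits:.0f} bits")
```

A result of roughly 470 bits implies a 512-bit-class memory interface, which consumes a lot of die perimeter and supports the article's point that Grace must be a physically large chip.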
The power of the cores? All NVIDIA has said is that the CPU exceeds 300 points in the SPECrate2017_int_base benchmark. It is still early days, and NVIDIA has not yet given all the details of its server CPU.