According to a Microsoft executive, the company spent “several hundred million dollars” to design the supercomputer that powers OpenAI projects like ChatGPT. At the heart of the machine, the most expensive components are Nvidia’s most powerful professional GPUs.
Unless you’ve been living in a cave, you couldn’t have escaped the tsunami of news surrounding OpenAI and its crown jewel, ChatGPT: an AI capable of answering questions, writing homework (or lessons), imitating literary genres… and producing its share of horrors too. The fact remains that ChatGPT and OpenAI’s tools are the earthquake revealing the powerful future of AI to the general public. But AI doesn’t spring fully formed from the minds of engineers and researchers. The models may seem to think for themselves, but above all they have to be trained, an operation that consumes enormous resources: time, energy and state-of-the-art equipment. And OpenAI and its backer, Microsoft, have spared no expense in training and operating ChatGPT. According to Scott Guthrie, Microsoft’s vice president of cloud and AI, the company has spent several hundred million dollars on the project, a large part of it on the chips that do the computing.
A Microsoft data center ©Microsoft
At the core of OpenAI’s technology is Microsoft’s Azure cloud and thousands of machines called “ND H100 v5”. Behind this technical name lies a monster of a server: a machine equipped with 4th-generation Intel Xeon Scalable processors (codenamed “Sapphire Rapids”), CPUs whose job is to rein in the “horses” that do most of the computing, namely Nvidia’s GPUs. Not your good old GeForce, or even a mighty RTX 4090: these are professional chips called H100, specially optimized for AI workloads. While Microsoft doesn’t disclose the number or exact model of the Intel CPUs (the plural suggests at least two per server), we know that each server integrates no fewer than eight Nvidia H100s (at up to $30,000 per card, the bill must be steep!). That’s 640 billion transistors (80 billion per GPU) linked together by NVSwitch and NVLink, cutting compute times by up to a factor of nine compared with 2020’s A100 GPUs. Enough to turn months of computation into a few weeks. And, above all, to funnel as much money as possible to Nvidia, which captures the lion’s share of the value of these machines.
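To give an idea of what one of these servers looks like from the software side, here is a minimal sketch, assuming a machine with PyTorch and CUDA drivers installed (the script is purely illustrative, not Microsoft’s actual tooling), that lists the GPUs a training job would see on an eight-H100 node:

```python
# Minimal sketch (assumes PyTorch with CUDA available); illustrative only.
# On an eight-GPU node such as the ND H100 v5 described above, this would
# list eight H100 devices, each with roughly 80 GB of memory.
import torch

def list_gpus() -> None:
    count = torch.cuda.device_count()
    print(f"{count} CUDA device(s) visible")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3  # total_memory is in bytes
        print(f"GPU {i}: {props.name}, {mem_gb:.0f} GB")

if __name__ == "__main__":
    list_gpus()
```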
Based on the Hopper architecture, the H100 is the most powerful compute GPU on the market, thanks to its 80 billion transistors, but also to the software ecosystem and networking hardware that Nvidia has tailored to it. © NVIDIA
The power of these servers is a double win for Nvidia. Besides the raw power of its GPUs – which it sells in packs of eight per server! – the company with the green logo was also chosen to link the servers together with its Quantum-2 InfiniBand equipment. Because at this scale of intensive computing, brute force is not enough: you also have to know how to distribute the tasks properly.
Nvidia dominates computing… and the network
While AMD prides itself on having a more powerful GPU than Nvidia – which, of course, claims the opposite – raw power is only one factor among many. Unless you have a supercomputer at home to run professional benchmarks – which we don’t! – you have to look at the whole picture around these chips to see the secret weapon of Nvidia’s computing solution: the network. A network that, along with memory, is the real bottleneck limiting the computational performance of supercomputers.
Nvidia played this one well. While the company has never stopped developing better and better chips – in 2023 it still has a clear lead in both gaming and professional GPUs – in 2020 it acquired a company unknown to the general public: Mellanox. A networking specialist that Nvidia swallowed up and whose products it grafted onto its professional GPU line. By selling not only high-speed network chips but also the switches (which route the data) and the accompanying software, Nvidia has tuned its GPUs and its networking equipment to perform at their best when they work together.
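In practice, that co-optimization shows up through NCCL, Nvidia’s collective-communications library, which picks NVLink inside a server and InfiniBand between servers without the training code having to care. Here is a minimal sketch, assuming PyTorch built with the NCCL backend and a launch via torchrun (the tensor size and script are illustrative, not Microsoft’s or OpenAI’s code), of how a job synchronizes data across all of its GPUs:

```python
# Minimal sketch (assumptions: PyTorch with the NCCL backend, one process per
# GPU, launched with torchrun). Illustrative only, not OpenAI's actual code.
import os
import torch
import torch.distributed as dist

def main() -> None:
    dist.init_process_group(backend="nccl")  # NCCL rides NVLink / InfiniBand
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank holds its own values; all_reduce sums them across every GPU
    # in the job, whether they sit in one server or in thousands.
    data = torch.full((1024,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(data, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("sum over all ranks:", data[0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched, for example, with `torchrun --nproc_per_node=8 allreduce_sketch.py` on a single eight-GPU server, and across many servers by adding the usual rendezvous options.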
For example, Nvidia’s Quantum-2 InfiniBand solution can move 400 Gbit/s of data per server – a far cry from the 1 Gbit/s router in your home internet box! Most importantly, the network equipment and software can intelligently distribute computations across thousands of GPUs spread over thousands of servers. An ‘intelligence’ needed to handle the millions of requests that now come not only from ChatGPT but also from Bing. So the next time you use ChatGPT, picture the deluge of computing power and transfer speed unleashed to calculate and deliver your answer in a split second. All that to write a song in praise of termites in the style of NTM!
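To make that bandwidth gap concrete, here is a quick back-of-the-envelope sketch; the 10 GB payload is a purely hypothetical amount of data to move per server, chosen only for illustration:

```python
# Back-of-the-envelope sketch: time to move a hypothetical 10 GB payload per
# server over the two link speeds mentioned above. Numbers ignore protocol
# overhead and are for illustration only.
GIGABIT = 1e9  # bits per second

def transfer_seconds(payload_gigabytes: float, link_bits_per_second: float) -> float:
    return payload_gigabytes * 8e9 / link_bits_per_second

payload_gb = 10.0  # hypothetical payload
print(f"Quantum-2 InfiniBand (400 Gbit/s): {transfer_seconds(payload_gb, 400 * GIGABIT):.2f} s")
print(f"Home router (1 Gbit/s):            {transfer_seconds(payload_gb, 1 * GIGABIT):.0f} s")
```

Roughly 0.2 seconds versus 80 seconds for the same data, which is why the interconnect matters almost as much as the GPUs themselves.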
Source: Bloomberg