- Nvidia started September in an inauspicious way, with its biggest daily selloff since April. The stock is now off about 20% from its all-time closing high of $135.58, as its recent earnings report made investors jittery about future margin performance and whether it can keep up its big beats. (MarketWatch, 09/04/2024)
- Nvidia CEO and co-founder Jensen Huang's fortune shrank by nearly $10 billion after the chipmaker's share price fell amid a wider selloff of major tech stocks on September 3, 2024. (Forbes, 09/04/2024)
- The Justice Department has taken a major step toward an antitrust lawsuit against Nvidia. Officials are investigating whether Nvidia's dominance has made it difficult for buyers to switch to other suppliers; some buyers complained that Nvidia threatens clients that use chips from both Nvidia and its competitors. (Business Insider, 09/04/2024)
- [Factcheck] Nvidia dictates how chips are allocated to stop companies from stockpiling them amid limited supply. Installing the chips the way Nvidia wanted would reportedly have hindered Microsoft's ability to switch to different AI chips. Microsoft eventually won out when Nvidia backed down and agreed to let the Big Tech company design its own custom racks. (Business Insider, 06/20/2024)
- [Factcheck] Intel’s Xeon processors are capable of more complex data crunching, but they have fewer cores and are much slower at processing the information typically used to train AI software. AMD announced that an updated version of its MI300 AI processor would go on sale in Q4. But Nvidia’s advantage isn’t just in the performance of its hardware: CUDA, its programming language for its graphics chips, allows them to be programmed for the type of work that underpins AI programs. (QuickTake, 06/07/2024)
The AMD MI300X and Nvidia Blackwell are both cutting-edge AI processors, each with unique strengths and features. Here’s a comparison based on the latest information (prepared by an AI):
AMD MI300X
- Architecture: Built on AMD’s CDNA 3 architecture.
- Compute Units: 304 compute units.
- Memory: 192 GB of HBM3 memory with a peak bandwidth of 5.3 TB/s.
- Performance: Known for its strong performance in FP8 floating point calculations and HPC-centric double precision workloads. It boasts impressive cache bandwidth and latency, outperforming Nvidia’s H100 in several benchmarks.
- Power Consumption: 750W.
Nvidia Blackwell
- Architecture: Successor to the H100, designed to leapfrog AMD in performance.
- Compute Units: Details on compute units are less specific, but each chip integrates compute and IO.
- Memory: 192 GB of high bandwidth memory, with a memory bandwidth of 2.8 TB/s.
- Performance: Excels in sparse performance, delivering 45 teraFLOPS of FP64 tensor core performance. It is optimized for AI inferencing and lower precision tasks.
- Power Consumption: 700W.
Key Differences
- Memory Bandwidth: MI300X has a higher memory bandwidth (5.3 TB/s) compared to Blackwell (2.8 TB/s), which can be crucial for AI performance.
- Compute Performance: MI300X shows a significant advantage in double precision performance, while Blackwell excels in sparse performance and lower precision tasks.
- Power Efficiency: Blackwell is slightly more power-efficient with a 700W consumption compared to MI300X’s 750W.
- Both processors are designed to handle demanding AI workloads, but their strengths lie in different areas. The choice between them would depend on the specific requirements of the tasks at hand.
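The trade-offs above can be made concrete with a little arithmetic on the quoted figures. This is an illustrative sketch only: it takes the spec numbers exactly as quoted in the comparison (which are approximate and may not reflect final shipping parts), and computes memory bandwidth delivered per watt of board power, one rough proxy for efficiency on bandwidth-bound AI workloads.

```python
# Spec figures as quoted in the comparison above (illustrative; actual
# shipping configurations may differ).
specs = {
    "AMD MI300X":       {"hbm_gb": 192, "bandwidth_tbps": 5.3, "power_w": 750},
    "Nvidia Blackwell": {"hbm_gb": 192, "bandwidth_tbps": 2.8, "power_w": 700},
}

def bandwidth_per_watt(name: str) -> float:
    """GB/s of peak memory bandwidth per watt of quoted board power."""
    s = specs[name]
    return s["bandwidth_tbps"] * 1000 / s["power_w"]

for name in specs:
    print(f"{name}: {bandwidth_per_watt(name):.2f} GB/s per W")
```

On these quoted numbers the MI300X delivers roughly 7.1 GB/s per watt versus about 4.0 for Blackwell, which mirrors the list's point that MI300X leads on raw bandwidth while Blackwell's strengths lie in sparse and lower-precision compute rather than in this single metric.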
- AMD's customers begin receiving the first Instinct MI300X AI GPUs — the company's toughest competitor to Nvidia's AI dominance is now shipping
- Nvidia Blackwell vs. MI300X : r/AMD_Stock (reddit.com)
- AMD MI300X performance compared with Nvidia H100 — low-level benchmarks testing cache, latency, inference, and more show strong results for a single GPU | Tom's Hardware (tomshardware.com)
- Nvidia Launches Next-Generation Blackwell GPUs Amid AI ‘Arms Race’ (datacenterknowledge.com)
- Nvidia turns up the AI heat with 1,200W Blackwell GPUs | The Register (theregister.com)