

10 top AI hardware and chip-making companies in 2024

As AI hardware advances rapidly, companies are releasing new products every year to keep up with the competition. The latest competitive product on the market is the AI chip.

With a surge in popularity and rapid advancement, AI hardware has become a competitive market. AI hardware companies must turn around product improvements quickly to keep the newest and most effective offerings on the market.

Although these 10 AI hardware companies have traditionally focused on CPUs and data center technologies, their specializations have broadened as the market expands. Now, these companies are competing to create the most powerful and efficient AI chips on the market.

10 top companies in the AI hardware market

The following AI hardware and chip-making companies are listed in alphabetical order.

Alphabet

Alphabet, Google's parent company, offers various products for mobile devices, data storage and cloud infrastructure. Its Cloud TPU v5p is purpose-built to train large language models and generative AI. Each TPU v5p pod contains 8,960 chips, and each chip has an interconnect bandwidth of 4,800 Gbps.

Alphabet has focused on producing powerful AI chips to meet the demand for large-scale projects. It has also unveiled Multislice, a performance-scaling technology that lets training jobs scale beyond a single pod. While hardware limitations typically restrict how far software can scale, Multislice runs have shown nearly 60% model FLOPS utilization on multibillion-parameter models on TPU v4.
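For context, model FLOPS utilization (MFU) is simply the throughput a training run sustains divided by the hardware's theoretical peak. A minimal sketch of the calculation in Python, with illustrative numbers; the 275 TFLOPS peak below is an assumption roughly matching TPU v4's published bf16 rating, not a figure from this article:

def model_flops_utilization(achieved_flops: float, peak_flops: float) -> float:
    """Fraction of the hardware's peak FLOPS that a training run sustains."""
    return achieved_flops / peak_flops

# Illustrative numbers only: a chip with an assumed 275 TFLOPS peak
# sustaining 165 TFLOPS during training.
print(f"MFU: {model_flops_utilization(165e12, 275e12):.0%}")  # MFU: 60%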

AMD

AMD continues to iterate on its Epyc and Ryzen processor lines. In 2024, the company released its latest design, the Zen 5 CPU microarchitecture.

AMD released the MI300A and MI300X AI chips in December 2023. The MI300A combines a GPU with 228 compute units and 24 CPU cores, while the MI300X is a pure GPU model with 304 compute units. The MI300X rivals Nvidia's H100 in memory capacity and bandwidth.

Apple

Apple Neural Engine, a set of specialized cores built into Apple silicon, has furthered the company's AI hardware design and performance. The Neural Engine paved the way for the M1 chip for MacBooks. Compared to the previous generation, MacBooks with an M1 chip are 3.5 times faster in general performance and five times faster in graphics performance.

After the success of the M1 chip, Apple announced further generations. As of 2024, Apple has released the M4 chip, though it is only available in the iPad Pro. The M4 has a neural engine that is three times faster than the M1's and a CPU 1.5 times faster than the M2's.

AWS

AWS has expanded its focus from cloud infrastructure to chips. Its Elastic Compute Cloud (EC2) Trn1 instances are purpose-built for deep learning and large-scale generative models. They run on AWS Trainium chips, the company's custom AI accelerators.

The trn1.2xlarge instance was the first iteration, with a single Trainium accelerator, 32 GB of instance memory and 12.5 Gbps of network bandwidth. Amazon now offers the trn1.32xlarge instance, which has 16 accelerators, 512 GB of instance memory and 1,600 Gbps of network bandwidth.
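As a minimal sketch, requesting one of these instances through the boto3 Python SDK looks like the following; the AMI ID is a placeholder and the region is an assumption:

import boto3

# Request a Trainium-backed EC2 instance (16 Trainium accelerators,
# 512 GB of instance memory, 1,600 Gbps of network bandwidth).
ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: use a Deep Learning AMI ID
    InstanceType="trn1.32xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])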

AWS Inferentia is a machine learning chip that delivers high-performance inference predictions at low cost. The two product lines complement each other: Trainium accelerators train models, while Inferentia accelerators deploy them.

Cerebras Systems

Cerebras is making a name for itself with the release of its third-generation wafer-scale engine, the WSE-3. The company bills the WSE-3 as the fastest processor on Earth, with 900,000 AI cores on a single unit and 21 petabytes per second of memory bandwidth available to those cores.

Compared to Nvidia's H100 chip, the WSE-3 has 7,000 times the memory bandwidth, 880 times the on-chip memory and 52 times the cores. The WSE-3 is also 57 times larger in area, so housing the chip in a server requires more space.
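As a rough sanity check, dividing the WSE-3's published specs by the article's ratios should recover the H100's specs. Note that the 44 GB on-chip SRAM figure below comes from Cerebras's spec sheet, not this article:

# WSE-3 specs: 900,000 cores, 21 PB/s memory bandwidth, 44 GB on-chip SRAM
# (the SRAM figure is from Cerebras's spec sheet, not this article).
print(21 * 1000 / 7000)   # implied H100 memory bandwidth: 3.0 TB/s
print(44 * 1000 / 880)    # implied H100 on-chip memory: 50.0 MB
print(900_000 / 52)       # implied H100 core count: ~17,308

Those implied figures line up well with the H100's roughly 3.35 TB/s of HBM3 bandwidth, 50 MB of L2 cache and 16,896 CUDA cores.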

IBM

After the success of its first specialized AI chip, Telum, IBM set out to design a powerful successor to rival its competitors.

In 2022, IBM unveiled that successor, the Artificial Intelligence Unit (AIU). The purpose-built AI chip runs better than the average general-purpose CPU, with more than 23 billion transistors and 32 processing cores. It's far less memory-intensive and more efficient than previous generations.

IBM is also working on the NorthPole AI chip, which does not yet have a release date. NorthPole differs from IBM's earlier TrueNorth chip: its architecture is structured to improve energy use, shrink the chip's footprint and lower latency. The NorthPole chip is set to mark a new era of energy-efficient chips.

Intel

Intel made its name in the CPU market and is now extending that reputation with its AI products.

Xeon 6 processors launched in 2024 and are shipping to data centers. These processors offer up to 288 cores per socket, enabling faster processing and better performance on many simultaneous tasks.

Intel has also released the Gaudi 3 GPU chip to compete with Nvidia's H100. Intel claims the Gaudi 3 trains models 1.5 times faster, delivers inference results 1.5 times faster and uses less power than the H100.

Nvidia

Nvidia cemented its position as a strong competitor in the AI hardware market when its valuation surpassed $1 trillion in mid-2023. The company's current lineup includes its A100 chip and Volta GPU for data centers, both critical technologies for resource-intensive models. Nvidia also offers AI-powered hardware for the gaming sector.

In August 2023, Nvidia announced the world's first processor with HBM3e memory: the Grace Hopper platform, a superchip with three times the bandwidth and more than three times the memory capacity of the previous generation's technology. Its design focuses on scalability and performance in the age of accelerated AI.

The company's NVLink technology can connect the Grace Hopper superchip to other superchips, letting multiple GPUs communicate over a high-speed interconnect.

The Blackwell B200 AI chip, a GPU built on the new Blackwell microarchitecture, has been delayed and is now expected in late 2024, with the B200A variant planned for 2025. Nvidia also plans to launch a successor accelerator, Rubin, in 2026.

Qualcomm

Although Qualcomm is a relative newcomer to the AI hardware market, its experience in the telecom and mobile sectors makes it a promising competitor.

Qualcomm's Cloud AI 100 chip beat Nvidia's H100 in a series of efficiency tests. One test measured how many data center server queries each chip could carry out per watt: the Cloud AI 100 totaled 227 server queries per watt, while the H100 hit 108. The Cloud AI 100 also netted 3.8 queries per watt during object detection, compared to the H100's 2.4.
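Working the ratios from those reported figures makes the efficiency gap concrete:

# Queries-per-watt figures reported in the tests above.
results = {
    "server queries": {"Cloud AI 100": 227, "H100": 108},
    "object detection": {"Cloud AI 100": 3.8, "H100": 2.4},
}

for test, scores in results.items():
    ratio = scores["Cloud AI 100"] / scores["H100"]
    print(f"{test}: Cloud AI 100 delivers {ratio:.1f}x the queries per watt")
# server queries: 2.1x; object detection: 1.6x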

In 2024, the company released the Snapdragon 8s Gen 3, a mobile chip. It supports 30 AI models and offers generative AI features, such as image generation and voice assistants.

Qualcomm also launched the Snapdragon X Plus, a laptop processor with AI capabilities. The processor rivals competing units with lower power use and faster CPU performance.

Tenstorrent

Tenstorrent builds computers for AI and is led by Jim Keller, the architect behind AMD's Zen chip architecture. Tenstorrent has multiple hardware products, including its Wormhole processors and Galaxy servers, which combine to form the Galaxy Wormhole Server.

The Wormhole n150 and n300 are Tenstorrent's scalable AI processors, with the n300 nearly doubling every spec of the n150. The chips are built for networked AI and are assembled into Galaxy modules and servers. Each server holds up to 32 Wormhole processors, 2,560 cores and 384 GB of GDDR6 memory.

Editor's note: This article was updated in August 2024 to list companies in alphabetical order, update AI chips each has to offer and add two more competing AI chip companies.

Devin Partida is editor in chief of ReHack.com and a freelance writer. She covers niches such as biztech, medtech, fintech, IoT and cybersecurity.

Kelly Richardson is site editor for TechTarget's Data Center site.

Next Steps

Compare GPUs vs. CPUs for AI workloads

A primer on AI chip design

How to choose the best GPUs for AI projects
