Qualcomm Inc. and Nvidia Corp. are currently two of the leading makers of artificial intelligence chips, and in a new set of test data released Wednesday, Qualcomm’s AI chips beat Nvidia’s in two of three metrics that measure power efficiency.
AI models need to be trained with large amounts of data to improve their accuracy and performance. Once the training is complete, the AI model can be used for inference, i.e., to perform some specific tasks, such as generating text responses based on input or determining whether a picture contains a cat. Inference is a widely used aspect of AI technology in products, but it can also add costs to companies, one of the main costs being power.
Qualcomm has leveraged its experience designing chips for low-power devices such as cell phones to introduce the Cloud AI 100, a chip built to deliver high-performance, low-power AI processing in the cloud and at the edge. In test data released Wednesday by MLCommons, an engineering consortium that maintains testing standards for the AI chip industry, the Cloud AI 100 beat Nvidia’s flagship chip, the H100, in two power efficiency metrics.
The power efficiency metric measures how many server queries a chip can execute per watt of power. In image classification, which can be used to identify objects or scenes in images, Qualcomm’s Cloud AI 100 achieved 227.4 queries per watt, compared to 108.4 queries per watt for Nvidia’s H100. Qualcomm also led Nvidia in object detection, at 3.8 queries per watt versus 2.4. Object detection can be used to analyze surveillance video from retail stores to understand which areas customers visit most often.
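The efficiency figure itself is a simple ratio: sustained query throughput divided by average power draw. A minimal sketch of the calculation, using hypothetical throughput and wattage numbers chosen only for illustration (they are not from the MLCommons results):

```python
def queries_per_watt(throughput_qps: float, avg_power_watts: float) -> float:
    """Power efficiency: sustained server queries per second per watt of draw."""
    return throughput_qps / avg_power_watts

# Hypothetical accelerator serving 5,000 image-classification queries/sec
# at an average draw of 22 W:
print(round(queries_per_watt(5000, 22), 1))  # 227.3
```

Under this framing, a chip can win on efficiency either by raising throughput or by cutting power draw, which is why a lower-power design can beat a faster one on this metric.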
In natural language processing, however, Nvidia held a clear advantage, ranking first in both performance and power efficiency. Natural language processing is the AI technology most widely used in systems such as chatbots; Nvidia reached 10.8 queries per watt, while Qualcomm ranked second at 8.9 queries per watt.
Qualcomm and Nvidia are both looking to capture share of the data center market by offering efficient AI chips, a market expected to grow rapidly as more companies build AI technology into their products.