Deep-learning technology gains traction in nearly every industry.
MOUNTAIN VIEW, CA — March 5, 2019 — Artificial intelligence (AI) processors have quickly become an essential technology for accelerating deep-learning applications in data centers, automotive, client devices, embedded (IoT) systems, and other areas. A new report from The Linley Group, “A Guide to Processors for Deep Learning,” analyzes deep-learning accelerators and IP cores for artificial intelligence, neural networks, and vision processing for inference and training. Many new companies and products target this fast-growing market, which topped $4 billion in chip revenue in 2018.
The rapid adoption of deep-learning applications has spurred the development of AI chips and IP to meet the performance needs of complex neural networks. Because no single processor can satisfy such a broad range of applications, some vendors are developing diverse product lines, while others specialize in custom architectures that deliver superior performance and efficiency for specific workloads. These custom architectures are now starting to challenge the common use of CPUs and GPUs for AI. The report analyzes these architectures and products to determine which will win over time.
“With the advent of deep neural networks, we’ve seen AI processing sweep through the tech sector and spill over into many industries around the world,” said Linley Gwennap, principal analyst with The Linley Group. “Although it may appear that the landscape of AI chip and IP vendors is getting crowded, there are still opportunities for new suppliers to gain traction provided they have a superior product.”
The report provides detailed technical coverage of announced deep-learning accelerator chips from AMD, Google, Graphcore, Gyrfalcon, Intel (including Xeon, Stratix, Mobileye, and Movidius), Microsoft, Mythic, Nvidia (including Tesla and AGX), NXP, Qualcomm, Wave Computing, and Xilinx. It also covers deep-learning IP cores from AImotive, Arm, Cadence, Cambricon, Ceva, Imagination, Synopsys, and Videantis as well as the open-source NVDLA. Briefly covered are deep-learning accelerators from Amazon, Bitmain, BrainChip, Cerebras, Cornami, eSilicon, Eta Compute, General Processor, GreenWaves, Groq, Habana, Huami, Huawei, NovuMind, and SambaNova.
The report includes extensive technical and market overviews to help readers who are coming up to speed on this complex technology. Those seeking a quantitative look at the deep-learning-accelerator market will find market-size estimates and forecasts for three segments: data center and HPC, ADAS and autonomous vehicles, and client and IoT.
Availability
“A Guide to Processors for Deep Learning” is available now directly from The Linley Group. For further details, including pricing, visit the website at https://www.linleygroup.com/deep-learning.
About The Linley Group
The Linley Group is the industry's leading source for independent technology analysis of semiconductors for networking, communications, mobile, and data-center applications. The company provides strategic consulting services, in-depth analytical reports, and conferences focused on advanced technologies for chip and system design. The Linley Group also publishes the weekly Microprocessor Report. For insights on recent industry news, subscribe to the company's free email newsletter, the Linley Newsletter.