AMD Extends AI and High-Performance Leadership in Data Center and PCs with New AMD Instinct, Ryzen and EPYC Processors at Computex 2024

Ryzen™ AI is defined as the combination of a dedicated AI engine, AMD Radeon™ graphics engine, and Ryzen processor cores that enable AI capabilities. OEM and ISV enablement is required, and certain AI features may not yet be optimized for Ryzen AI processors. Ryzen AI is compatible with: (a) AMD Ryzen 7040 and 8040 Series processors except Ryzen 5 7540U, Ryzen 5 8540U, Ryzen 3 7440U, and Ryzen 3 8440U processors; (b) AMD Ryzen AI 300 Series processors; and (c) all AMD Ryzen 8000G Series desktop processors except the Ryzen 5 8500G/GE and Ryzen 3 8300G/GE. Please check with your system manufacturer for feature availability prior to purchase. GD-220c.

Radeon™ PRO Series and Radeon RX Series graphics cards mentioned herein are not designed, marketed, or recommended for datacenter usage. Use in a datacenter setting may adversely affect manageability, efficiency, reliability, and/or performance. GD-239s.

© 2024 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD Instinct, AMD EPYC, AMD RDNA, AMD XDNA, Radeon, Ryzen, Versal, and combinations thereof are trademarks of Advanced Micro Devices, Inc. HP® and the HP logo are registered trademarks of Hewlett-Packard Development Company, L.P. Lenovo® is a trademark of Lenovo in the United States, other countries, or both. Other product names used in this publication are for identification purposes only and may be trademarks of their respective owners. Certain AMD technologies may require third-party enablement or activation. Supported features may vary by operating system. Please confirm with the system manufacturer for specific features. No technology or product can be completely secure.

_________________________

1 MI300-48: Calculations conducted by AMD Performance Labs as of May 22, 2024, based on current specifications and/or estimation. The AMD Instinct™ MI325X OAM accelerator is projected to have 288GB HBM3e memory capacity and 6 TB/s peak theoretical memory bandwidth performance. Actual results based on production silicon may vary.
The highest published results on the Nvidia Hopper H200 (141GB) SXM GPU accelerator show 141GB HBM3e memory capacity and 4.8 TB/s GPU memory bandwidth performance.
https://nvdam.widen.net/s/nb5zzzsjdf/hpc-datasheet-sc23-h200-datasheet-3002446
The highest published results on the Nvidia Blackwell HGX B100 (192GB) 700W GPU accelerator show 192GB HBM3e memory capacity and 8 TB/s GPU memory bandwidth performance.
https://resources.nvidia.com/en-us-blackwell-architecture?_gl=1*1r4pme7*_gcl_aw*R0NMLjE3MTM5NjQ3NTAuQ2p3S0NBancyNkt4QmhCREVpd0F1NktYdDlweXY1dlUtaHNKNmhPdHM4UVdPSlM3dFdQaE40WkI4THZBaWFVajFyTGhYd3hLQmlZQ3pCb0NsVElRQXZEX0J3RQ..*_gcl_au*MTIwNjg4NjU0Ny4xNzExMDM1NTQ3
The highest published results on the Nvidia Blackwell HGX B200 (192GB) GPU accelerator show 192GB HBM3e memory capacity and 8 TB/s GPU memory bandwidth performance.
https://resources.nvidia.com/en-us-blackwell-architecture?_gl=1*1r4pme7*_gcl_aw*R0NMLjE3MTM5NjQ3NTAuQ2p3S0NBancyNkt4QmhCREVpd0F1NktYdDlweXY1dlUtaHNKNmhPdHM4UVdPSlM3dFdQaE40WkI4THZBaWFVajFyTGhYd3hLQmlZQ3pCb0NsVElRQXZEX0J3RQ..*_gcl_au*MTIwNjg4NjU0Ny4xNzExMDM1NTQ3
2 MI300-49: Calculations conducted by AMD Performance Labs as of May 28, 2024 for the AMD Instinct™ MI325X GPU at 2,100 MHz peak boost engine clock resulted in 1,307.4 TFLOPS peak theoretical half precision (FP16), 1,307.4 TFLOPS peak theoretical Bfloat16 format precision (BF16), 2,614.9 TFLOPS peak theoretical 8-bit precision (FP8), and 2,614.9 TOPS peak theoretical INT8 performance. Actual performance will vary based on final specifications and system configuration.
Published results on the Nvidia H200 SXM (141GB) GPU: 989.4 TFLOPS peak theoretical half precision tensor (FP16 Tensor), 989.4 TFLOPS peak theoretical Bfloat16 tensor format precision (BF16 Tensor), 1,978.9 TFLOPS peak theoretical 8-bit precision (FP8), and 1,978.9 TOPS peak theoretical INT8 performance. BFLOAT16 Tensor Core, FP16 Tensor Core, FP8 Tensor Core, and INT8 Tensor Core performance figures were published by Nvidia using sparsity; for the purposes of comparison, AMD converted these numbers to non-sparsity/dense by dividing by 2, and these converted numbers appear above.
3 MI300-55: Inference performance projections as of May 31, 2024, using engineering estimates based on the design of a future AMD CDNA™ 4-based Instinct MI350 Series accelerator as a proxy for projected AMD CDNA 4 performance. A 1.8T GPT MoE model was evaluated assuming a token-to-token latency of 70ms real time, a first token latency of 5s, an input sequence length of 8k, an output sequence length of 256, and a 4x 8-mode MI350 Series proxy (CDNA4) vs. 8x MI300X per-GPU performance comparison. Actual performance will vary based on factors including but not limited to final specifications of production silicon, system configuration, and inference model and size used.
4 Trillions of Operations per Second (TOPS) for an AMD Ryzen processor is the maximum number of operations per second that can be executed in an optimal scenario and may not be typical. TOPS may vary based on several factors, including the specific system configuration, AI model, and software version. GD-243.
5 Based on performance and power estimates correlated to measurements on hardware platforms as of May 2024, comparing projected Stable Diffusion iterations per second per watt for a Ryzen AI 300 Series processor to a Ryzen 9 8945HS processor. Configuration for the Ryzen AI 300 Series processor: Reference platform, 32GB RAM, Radeon 890M graphics, Windows 11 Pro. Configuration for the Ryzen 9 8945HS processor: Razer Blade 14, 32GB RAM, Radeon 780M graphics, Windows 11 Home. Specific projections are subject to change when final products are released to market. STX-14.
6 As of May 2024, AMD has the first available dedicated AI engine (NPU) on a laptop PC processor (the AMD Ryzen™ AI 300 Series processor) that supports Block FP16 functionality, where 'dedicated AI engine' is defined as an AI engine that has no function other than to process AI inference models and is part of the x86 processor die. STX-16.
7 Testing as of May 2024 by AMD Performance Labs on test systems configured as follows: AMD Ryzen 9 9950X system: GIGABYTE X670E AORUS MASTER, Balanced, DDR5-6000, Radeon RX 7900 XTX, VBS=On, SAM=On, Kraken X63 (May 10, 2024) vs. a similarly configured Intel Core i9-14900KS system: MSI MEG Z790 ACE MAX (MS-7D86), Balanced, DDR5-6000, Radeon RX 7900 XTX, VBS=On, SAM=On, Kraken X63 (May 13, 2024), Profile=Intel Default, on the following applications: 3DMarkDandia, Blender, Cinebench, Geekbench, PCMark 10, PassMark, Procyon Office, Procyon Photo Editing, Procyon Video Editing. System manufacturers may vary configurations, yielding different results. GNR-01.

Contact:
Brandi Martina
AMD Communications
(512) 705-1720
Email Contact  

Suresh Bhaskaran
AMD Investor Relations
(408) 749-2845
Email Contact

