Flex Logix Unveils New Architectural Details on its NMAX Neural Inferencing Engine at the Edge AI Summit Conference

NMAX provides high inferencing throughput at batch = 1 at low cost and power

MOUNTAIN VIEW, Calif., Dec. 11, 2018 — (PRNewswire) — Flex Logix® Technologies, Inc. today announced new information on its NMAX™ neural inferencing engine optimized for edge applications. NMAX provides inferencing throughput from 1 to >100 TOPS with high MAC utilization at a batch size of 1, a critical requirement for edge applications. Unlike competitive solutions, NMAX achieves this at much lower cost and with much less power consumption.


The new information unveiled today provides more insight into how the NMAX architecture works. For example, the NMAX compiler takes a neural model in TensorFlow or Caffe and generates binaries for programming the NMAX array, layer by layer. At the start of each layer, the NMAX array's embedded FPGA (eFPGA) and interconnect are configured to run the matrix multiply needed for that stage. Data then streams from SRAM located near the NMAX tile through a variable number of NMAX clusters, where the weights are stored, accumulating the result. The result is then activated in the eFPGA and stored back in SRAM near the NMAX tile. The NMAX compiler also configures the eFPGA to implement the state machines that address the SRAMs and other functions. At the end of a stage, the NMAX array is reconfigured in less than 1,000 nanoseconds to process the next layer. In larger arrays, multiple layers can be configured in the array at once, with data flowing directly from one layer to the next.
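The layer-by-layer dataflow described above can be sketched in software. The following is a minimal, hypothetical Python model of that flow; the function names and toy weights are illustrative assumptions, not Flex Logix APIs, and real NMAX configuration is done in hardware by the compiler-generated binaries.

```python
# Hypothetical sketch of the NMAX layer-by-layer dataflow.
# All names and values here are illustrative, not Flex Logix APIs.

def relu(x):
    # Activation applied in the eFPGA after accumulation.
    return [max(0.0, v) for v in x]

def matvec(weights, x):
    # Each cluster holds a slice of the weights; streaming the input
    # through the clusters accumulates one dot product per output.
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def run_model(layers, activations):
    # 'activations' start in SRAM located near the NMAX tile.
    sram = activations
    for weights in layers:
        # 1. Reconfigure the eFPGA/interconnect for this layer (<1 us).
        # 2. Stream data from SRAM through the clusters, accumulating.
        partial = matvec(weights, sram)
        # 3. Activate the result in the eFPGA, store it back in SRAM.
        sram = relu(partial)
    return sram

# Two toy layers processed back to back, as in a larger array where
# data flows directly from one layer to the next.
layers = [
    [[1.0, -1.0], [0.5, 0.5]],
    [[2.0, 0.0]],
]
out = run_model(layers, [3.0, 1.0])
```

The sketch captures only the control flow (configure, stream-accumulate, activate, store), not the physical weight placement or timing that give NMAX its utilization advantage.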

"The two major challenges in neural inferencing are maximizing MAC utilization to achieve high throughput with the least silicon area, and delivering data to the MACs when needed so they remain active while consuming the least power," said Cheng Wang, Co-Founder and Senior VP of Engineering and Software at Flex Logix. "NMAX achieves high MAC utilization, even at batch = 1, by loading the weights very quickly and keeping them very close to the MACs. It also delivers data at low power by keeping most data in SRAM close to the MACs and eliminating unnecessary data movement between layers."

As a result of these architectural innovations, NMAX achieves data-center-class performance with just one or two LPDDR4 DRAMs, compared to eight or more for other solutions. Flex Logix's interconnect technologies, originally developed for eFPGA, are what enable this new architecture.

Availability
NMAX is in development now and will be available in mid-2019 for integration into SoCs in TSMC 16FFC/12FFC. The NMAX compiler will be available at the same time. For more information, prospective customers can visit www.flex-logix.com to review the slides presented today at the Edge AI Summit, or contact info@flex-logix.com for further details on NMAX under NDA.

About Flex Logix               
Flex Logix, founded in March 2014, provides solutions for making flexible chips and accelerating neural network inferencing. Its eFPGA platform enables chips to be flexible to handle changing protocols, standards, algorithms and customer needs and to implement reconfigurable accelerators that speed key workloads 30-100x faster than Microsoft Azure processing in the Cloud. eFPGA is available for any array size on the most popular process nodes now with increasing customer adoption. Flex Logix's second product line, NMAX, utilizes its eFPGA and interconnect technology to provide modular, scalable neural inferencing from 1 to >100 TOPS using 1/10th the typical DRAM bandwidth, resulting in much lower system power and cost. Having raised more than $13 million of venture capital, Flex Logix is headquartered in Mountain View, California, and has sales rep offices in China, Europe, Israel, Japan, Taiwan and throughout the USA. More information can be obtained at http://www.flex-logix.com or follow on Twitter at @efpga.

PRESS CONTACT:
Kelly Karr
Tanis Communications, Inc.
kelly.karr@taniscomm.com
+408-718-9350

Copyright 2018. All rights reserved. Flex Logix is a registered trademark and NMAX is a trademark of Flex Logix, Inc.


View original content to download multimedia: http://www.prnewswire.com/news-releases/flex-logix-unveils-new-architectural-details-on-its-nmax-neural-inferencing-engine-at-the-edge-ai-summit-conference-300763133.html

SOURCE Flex Logix Technologies, Inc.




