
Dynamic Neural Accelerator™ IP

Run-time reconfigurable neural network IP for AI

Supported Frameworks & Applications

Modular, scalable, fully configurable neural network edge AI inference IP

EdgeCortix Dynamic Neural Accelerator (DNA) is a flexible neural accelerator IP core with run-time reconfigurable interconnects between compute units, achieving exceptional parallelism and efficiency through dynamic grouping. A single core can scale from 1024 MACs to 32768 MACs, running at clock speeds up to 1 GHz. Configuration is done with the EdgeCortix MERA compiler and software framework.
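The relationship between MAC count, clock speed, and headline throughput can be sketched with a quick back-of-the-envelope calculation (assuming the common convention of 2 operations per MAC per cycle; the MAC counts and the 1 GHz clock are the figures quoted above):

```python
def peak_tops(num_macs: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Theoretical peak throughput in TOPS (tera-operations per second).

    Assumes each MAC performs a multiply and an add (2 ops) per cycle.
    """
    return num_macs * ops_per_mac * clock_hz / 1e12

# Smallest and largest single-core configurations at a 1 GHz clock:
print(peak_tops(1024, 1e9))    # 2.048 TOPS
print(peak_tops(32768, 1e9))   # 65.536 TOPS
```

These are peak figures only; sustained throughput depends on utilization, discussed later on this page.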

Run-Time Reconfigurable
Efficient AI inference core in FPGAs or SoCs
Low Latency
< 4 ms under demanding workloads
High Utilization
> 80% utilization with concurrent neural network models

DNA IP provides more flexibility over hardware-optimized solutions

Single-model benchmarks provide an easy comparison. In the real world, more complex multi-model scenarios are emerging, such as in automotive sensing where different models handle different tasks in the processing pipeline.

Rather than being hardware-optimized for just a few popular neural network models, DNA IP combined with MERA software runs a wide range of models more efficiently.

Inside the DNA IP are three types of execution units, each supporting INT8 multiply-accumulate operations with INT32 accumulators.

  • A standard computation engine uses systolic arrays scalable from 3x3 to 64x64 elements, with an optimized data path.
  • A depth-wise computation engine takes a 2D approach, decomposing a 3D kernel into 2D slices, again with optimized interconnects.
  • A separate vector unit powers tasks such as neural network activation and scaling, operations common in camera-based vision systems.
This enables the DNA IP to handle both streaming (batch=1) and high-resolution data (such as video or point clouds) efficiently. DNA IP can also scale up or down to fit edge devices, where performance, power consumption, size, and flexibility all matter.
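To make the depth-wise engine's "2D slice" idea concrete, here is a minimal pure-Python sketch of a depthwise operation: one 2D kernel applied per input channel, with integer inputs summed into a wide accumulator as in the INT8/INT32 scheme described above. The shapes and values are illustrative only, not a description of the actual hardware data path.

```python
def depthwise_pixel(patch, kernels):
    """Depthwise convolution at a single output position.

    patch:   [C][kh][kw] input window, one 2D slice per channel
    kernels: [C][kh][kw] per-channel 2D kernels
    Returns one accumulated (INT32-style) value per channel.
    """
    out = []
    for win, k2d in zip(patch, kernels):
        acc = 0  # wide accumulator for this channel
        for row_w, row_k in zip(win, k2d):
            for x, w in zip(row_w, row_k):
                acc += x * w
        out.append(acc)
    return out

# Two channels, 2x2 window: each channel is convolved independently.
patch = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
kern  = [[[1, 0], [0, 1]], [[1, 1], [1, 1]]]
print(depthwise_pixel(patch, kern))  # [1+4, 5+6+7+8] = [5, 26]
```

Because each channel's 2D slice is independent, the slices can be processed in parallel, which is what makes the decomposition attractive for a reconfigurable engine.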

How DNA IP delivers performance in several dimensions

Many AI inference accelerators quote a TOPS rating measured under optimal conditions, with compute units fully parallelized. When realistic AI models are mapped to hardware, parallelism between neural network layers drops, and utilization can fall to 40% or less. This inefficiency costs designers flexibility, size, and power consumption.
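The gap between a quoted TOPS rating and delivered throughput is simply peak times utilization; a one-line calculation makes the 40% figure concrete (the 10 TOPS peak here is a hypothetical rating for illustration, not a DNA IP number):

```python
def effective_tops(peak_tops: float, utilization: float) -> float:
    """Sustained throughput once real-model utilization is accounted for."""
    return peak_tops * utilization

peak = 10.0  # hypothetical accelerator rated at 10 TOPS peak
print(effective_tops(peak, 0.40))  # 4.0 TOPS at 40% utilization
print(effective_tops(peak, 0.80))  # 8.0 TOPS at 80% utilization
```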

Using a patented approach, EdgeCortix reconfigures data paths between DNA IP execution units to achieve better parallelism and reduce on-chip memory bandwidth. MERA software then optimizes computation order and resource allocation in scheduling tasks for neural networks, even with several models running in the same DNA IP core.

The result is better performance: DNA IP maintains better than 80% utilization under load. This translates into high inferences/sec/W, as much as 16x higher than conventional GPU-based hardware. Latency, essential to determinism in real-time applications, stays below 4 ms thanks to reconfigured data paths and optimized scheduling.


Microprocessor Report: EdgeCortix DNA IP Lowers AI Latency

How developers can explore DNA IP

Inference Pack on BittWare FPGA cards

The BittWare IA-420F, with an Intel Agilex FPGA, hosts a reference port of the EdgeCortix DNA IP for edge AI developers.

The DNA IP core is flexible for either FPGA or SoC designs. Working with BittWare, EdgeCortix has created an inference pack of the DNA IP core with ready-to-use bitstreams for BittWare IA-840F and IA-420F cards featuring high-performance Intel® Agilex™ FPGAs.

See more on the BittWare approach to pairing these technologies in their DNA ML framework product brief.

Block diagram of the EdgeCortix DNA IP in a bitstream for the Intel Agilex FPGA on the BittWare IA-420F or IA-840F
EdgeCortix MERA is a compiler and AI inference software framework translating models into code for an edge AI co-processor

Developing with the DNA IP and MERA

EdgeCortix MERA is the compiler and software framework that enables deep neural network graph compilation and AI inference with the DNA IP. With built-in support for the open-source Apache TVM compiler framework, it provides the tools, APIs, code generator, and runtime needed to deploy a pre-trained deep neural network to the DNA IP. MERA supports model development workflows using tools including PyTorch, TensorFlow, TensorFlow Lite, and ONNX.
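The end-to-end flow this paragraph describes (pre-trained model in, accelerator-ready artifact out) can be sketched schematically. Every function and class name below (trace, quantize, codegen, Runtime) is invented for illustration only and is not the real MERA API; consult the MERA documentation for actual usage.

```python
# Schematic compile-then-deploy pipeline; all names are hypothetical.

def trace(model_name):
    """Capture a pre-trained model (e.g. from PyTorch/ONNX) as a graph."""
    return {"graph": model_name, "dtype": "fp32"}

def quantize(graph):
    """Calibrate and lower weights/activations to INT8."""
    return {**graph, "dtype": "int8"}

def codegen(graph, macs=32768):
    """Emit a schedule targeting a given DNA core configuration."""
    return {"schedule": graph, "target_macs": macs}

class Runtime:
    """Stand-in for an accelerator runtime that executes the artifact."""
    def __init__(self, artifact):
        self.artifact = artifact
    def infer(self, batch):
        return f"ran batch of {len(batch)} on {self.artifact['target_macs']} MACs"

artifact = codegen(quantize(trace("resnet50")))
print(Runtime(artifact).infer([0] * 4))  # ran batch of 4 on 32768 MACs
```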

MegaChips

"Given the tectonic shift in information processing at the edge, companies are now seeking near cloud level performance where data curation and AI driven decision making can happen together. Due to this shift, the market opportunity for the EdgeCortix solution set is massive, driven by the practical business need across multiple sectors which require both low power and cost-efficient intelligent solutions. Given the exponential global growth in both data and devices, I am eager to support EdgeCortix in their endeavor to transform the edge AI market with an industry-leading IP portfolio that can deliver performance with orders of magnitude better energy efficiency and a lower total cost of ownership than existing solutions."

Akira Takata
Former CEO of MegaChips Corporation
SoftBank

"Improving the performance and the energy efficiency of our network infrastructure is a major challenge for the future. Our expectation of EdgeCortix is to be a partner who can provide both the IP and expertise that is needed to tackle these challenges simultaneously."

Ryuji Wakikawa
Head, Research Institute of Advanced Technology at SoftBank Corp
BittWare

"With the unprecedented growth of AI/machine learning workloads across industries, the solution we're delivering with leading IP provider EdgeCortix complements BittWare's Intel Agilex FPGA-based product portfolio. Our customers have been searching for this level of AI inferencing solution to increase performance while lowering risk and cost across a multitude of business needs both today and in the future."

Craig Petrie
VP, Sales and Marketing at BittWare
Trust Capital Co., Ltd.

"EdgeCortix is in a truly unique market position. Beyond simply taking advantage of the massive need and growth opportunity in leveraging AI across many key business sectors, it's the business strategy with respect to how they develop their solutions for their go-to-market that will be the great differentiator. In my experience, most technology companies focus very myopically on delivering great code or perhaps semiconductor design. EdgeCortix's secret sauce is in how they've co-developed their IP, applying equal importance to both the software IP and the chip design, creating a symbiotic software-centric hardware ecosystem. This sets EdgeCortix apart in the marketplace."

Daniel Fujii
President & CEO of Trust Capital Co., Ltd., member of the Executive Committee of Silicon Valley Japan Platform
Renesas

"We recognized immediately the value of adding the MERA compiler and associated tool set to the RZ/V MPU series, as we expect many of our customers to implement application software including AI technology. As we drive innovation to meet our customers' needs, we are collaborating with EdgeCortix to rapidly provide our customers with robust, high-performance, and flexible AI-inference solutions. The EdgeCortix team has been terrific, and we are excited by the future opportunities and possibilities for this ongoing relationship."

Shigeki Kato
Vice President, Enterprise Infrastructure Business Division

Learn more about Dynamic Neural Accelerator now