
Dynamic Neural Accelerator™

Run-time Reconfigurable Architecture for Edge AI

Dynamic Neural Accelerator Architecture

Dynamic Neural Accelerator (DNA) is a flexible, modular neural accelerator architecture with run-time reconfigurable interconnects between its compute engines. Using a patented approach, EdgeCortix reconfigures the data paths between DNA engines in real time, dynamically grouping compute resources to achieve exceptional parallelism and reduce on-chip memory bandwidth for faster, more efficient hardware execution.

The MERA software stack works in conjunction with DNA, optimizing computation order and resource allocation when scheduling neural network tasks.
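To illustrate the idea behind this hardware/software co-optimization, here is a minimal Python sketch of a greedy multi-model scheduler over reconfigurable engine groups. All names, numbers, and the cost model are illustrative assumptions; this is not the MERA API or the actual DNA scheduling algorithm.

```python
# Hypothetical sketch of DNA-style co-optimization: a greedy scheduler
# interleaving layers from two models across reconfigurable engine groups.
# Names, cost model, and numbers are illustrative assumptions only --
# this is not the MERA API or the actual DNA scheduling algorithm.
from dataclasses import dataclass

@dataclass
class Task:
    model: str    # which network this layer belongs to
    layer: str    # layer name
    width: int    # execution units this layer can keep busy

@dataclass
class EngineGroup:
    size: int              # execution units grouped behind one data path
    busy_until: float = 0.0

def schedule(tasks, groups):
    """Map each layer to the engine group that frees up first, so layers
    from different models run side by side (task and model parallelism)."""
    plan = []
    for t in tasks:
        g = min(groups, key=lambda grp: grp.busy_until)
        # Crude cost model: runtime shrinks with the parallelism the
        # group can actually exploit for this layer.
        runtime = 1.0 / min(t.width, g.size)
        start = g.busy_until
        g.busy_until = start + runtime
        plan.append((t.model, t.layer, start, g.busy_until))
    return plan

tasks = [Task("detector", "conv1", 4), Task("segmenter", "conv1", 2),
         Task("detector", "conv2", 4), Task("segmenter", "conv2", 2)]
groups = [EngineGroup(size=4), EngineGroup(size=2)]  # one possible grouping
for model, layer, start, end in schedule(tasks, groups):
    print(f"{model}/{layer}: runs {start:.2f} -> {end:.2f}")
```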

DNA is the driving force behind the SAKURA-II AI Accelerator, providing best-in-class processing at the edge in a small form-factor package and supporting the latest models for generative AI applications.

Dynamic Neural Accelerator IP: Efficient, modular, scalable, and fully configurable hardware IP for FPGAs or SoCs.

DNA architecture provides more flexibility than hardware-optimized solutions

Single-model benchmarks make for easy comparisons. In the real world, more complex multi-model scenarios are emerging, such as multi-sensor applications where different models handle different tasks in the same processing pipeline at the same time.

Rather than being hardware-optimized for just a few popular neural network models, DNA combined with MERA runs a wide range of models efficiently. Inside DNA are three types of execution units, each supporting multiply-accumulate operations with accumulators (a toy sketch follows the list below).

DNA Architecture

  • Standard Computation Engine: systolic arrays scalable from 3x3 to 64x64 elements, with an optimized data path
  • Depth-Wise Computation Engine: decomposes 3D kernels into 2D slices with optimized interconnects
  • Vector Unit: powers neural network activation and scaling for fast and effective learning
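As a rough illustration of how the two convolution engines divide the work, the NumPy sketch below contrasts a dense multiply-accumulate (the access pattern a systolic array pipelines in hardware) with a depth-wise pass that applies one 2D kernel slice per channel. Shapes, sizes, and loop structure are illustrative assumptions, not the actual DNA datapath.

```python
import numpy as np

# Toy model of the two convolution engines. Shapes and loops are
# illustrative assumptions; real DNA datapaths are hardware pipelines.

def standard_engine(x, w):
    """Dense MAC over all input channels, as a systolic array would
    accumulate partial products: one output channel per 3D kernel."""
    # x: (C, H, W) input, w: (K, C, 3, 3) kernels -> (K, H-2, W-2)
    C, H, W = x.shape
    K = w.shape[0]
    out = np.zeros((K, H - 2, W - 2))
    for k in range(K):
        for i in range(H - 2):
            for j in range(W - 2):
                out[k, i, j] = np.sum(x[:, i:i+3, j:j+3] * w[k])
    return out

def depthwise_engine(x, w):
    """Depth-wise conv: the 3D kernel decomposes into per-channel 2D
    slices, each applied independently to its own input channel."""
    # x: (C, H, W) input, w: (C, 3, 3) -- one 2D slice per channel
    C, H, W = x.shape
    out = np.zeros((C, H - 2, W - 2))
    for c in range(C):          # each 2D slice runs independently
        for i in range(H - 2):
            for j in range(W - 2):
                out[c, i, j] = np.sum(x[c, i:i+3, j:j+3] * w[c])
    return out

x = np.random.rand(8, 16, 16)
print(standard_engine(x, np.random.rand(4, 8, 3, 3)).shape)  # (4, 14, 14)
print(depthwise_engine(x, np.random.rand(8, 3, 3)).shape)    # (8, 14, 14)
```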

DNA handles streaming (Batch=1) and high-resolution data proficiently and can scale up or down, providing an efficient AI inference solution for edge devices, where performance, power consumption, size, and flexibility matter.

How DNA IP delivers performance in several dimensions

Many AI inference accelerators quote a TOPS rating measured under optimum conditions, with compute units fully parallelized. When realistic AI models are mapped to hardware, parallelism between neural network layers drops, and utilization falls to 40% or less. This inefficiency costs designers flexibility, size, and power consumption.
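The gap between quoted and delivered performance is simple arithmetic, shown below with illustrative numbers (not vendor benchmarks):

```python
# Effective throughput = peak TOPS x achieved utilization.
# All numbers below are illustrative, not measured benchmarks.
peak_tops = 40.0

for label, utilization in [("fully parallelized datasheet figure", 1.00),
                           ("typical mapped model (~40% utilization)", 0.40)]:
    print(f"{label}: {peak_tops * utilization:.0f} effective TOPS")
```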

Using a patented run-time reconfigurable architecture, DNA optimizes the data paths between its execution units to achieve better parallelism and reduce on-chip memory bandwidth. MERA software then optimizes computation order and resource allocation when scheduling neural network tasks, even with several models running on the same DNA engine. The resulting task and model parallelism yields significantly faster processing, and with it better performance and efficiency.

DNA IP maintains up to 90% compute utilization under load, which translates to far more inferences per second per Watt than conventional GPU-based hardware. Latency, essential for determinism in real-time applications, remains low thanks to reconfigured data paths and optimized scheduling. Finally, the DNA architecture's efficiency results in very low power consumption, which is critical at the edge.
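Perf-per-Watt follows directly from utilization at a fixed power budget; again with illustrative figures:

```python
# At a fixed power budget, inferences per second per Watt scale directly
# with utilization. Figures are illustrative, not vendor measurements.
def inf_per_sec_per_watt(peak_inf_per_sec, utilization, watts):
    return peak_inf_per_sec * utilization / watts

low  = inf_per_sec_per_watt(1000, 0.40, watts=10)  # 40 inf/s/W
high = inf_per_sec_per_watt(1000, 0.90, watts=10)  # 90 inf/s/W
print(f"90% vs 40% utilization: {high / low:.2f}x more inferences/s/W")
```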

Typical AI Inference Flow in Industry IP Cores

IP Core Inefficiencies

  • Slower processing due to batching
  • Higher power consumption due to higher re-use of resources
  • Low compute utilization resulting in lower efficiency

DNA Datapath Advantages

  • Much higher utilization and efficiency
  • Significantly faster processing due to task and model parallelism
  • Very low power consumption for edge AI use cases

EdgeCortix reconfigures data paths between DNA engines to achieve better parallelism and reduce on-chip memory bandwidth, using a patented runtime reconfigurable datapath architecture.

SAKURA-II M.2 Modules and PCIe Cards

EdgeCortix SAKURA-II can be easily integrated into a host system for software development and AI model inference tasks.

Pre-Order an M.2 Module or a PCIe Card and get started today!

EdgeCortix Edge AI Platform

MERA Compiler and Framework
Industry-first software platform enabling AI inference across heterogeneous systems

Unique Software
SAKURA®-II AI Accelerator
High performance, low power, and generative-AI ready; effectively handles multi-billion-parameter models

Efficient Hardware
AI Accelerator Modules and Cards
Up to 240 TOPS in systems powered by the latest SAKURA-II AI Accelerators

Deployable Systems

"Given the tectonic shift in information processing at the edge, companies are now seeking near cloud-level performance, where data curation and AI-driven decision-making can happen together. Due to this shift, the market opportunity for the EdgeCortix solution set is massive, driven by the practical business need across multiple sectors for intelligent solutions that are both low-power and cost-efficient. Given the exponential global growth in both data and devices, I am eager to support EdgeCortix in their endeavor to transform the edge AI market with an industry-leading IP portfolio that can deliver performance with orders-of-magnitude better energy efficiency and a lower total cost of ownership than existing solutions."

Akira Takata
Former CEO of MegaChips Corporation

"Improving the performance and energy efficiency of our network infrastructure is a major challenge for the future. We expect EdgeCortix to be a partner who can provide both the IP and the expertise needed to tackle these challenges simultaneously."

Ryuji Wakikawa
Head, Research Institute of Advanced Technology at SoftBank Corp

"With the unprecedented growth of AI and machine-learning workloads across industries, the solution we're delivering with leading IP provider EdgeCortix complements BittWare's Intel Agilex FPGA-based product portfolio. Our customers have been searching for this level of AI inferencing solution to increase performance while lowering risk and cost across a multitude of business needs, both today and in the future."

Craig Petrie
VP, Sales and Marketing at BittWare

"EdgeCortix is in a truly unique market position. Beyond simply taking advantage of the massive need and growth opportunity in leveraging AI across many key business sectors, it's the business strategy behind how they develop their solutions for their go-to-market that will be the great differentiator. In my experience, most technology companies focus very myopically on delivering great code or perhaps semiconductor design. EdgeCortix's secret sauce is in how they've co-developed their IP, applying equal importance to both the software IP and the chip design to create a symbiotic, software-centric hardware ecosystem. This sets EdgeCortix apart in the marketplace."

Daniel Fujii
President & CEO of Trust Capital Co., Ltd., member of the Executive Committee of Silicon Valley Japan Platform

"We recognized immediately the value of adding the MERA compiler and its associated tool set to the RZ/V MPU series, as we expect many of our customers to implement application software that includes AI technology. As we drive innovation to meet our customers' needs, we are collaborating with EdgeCortix to rapidly provide our customers with robust, high-performance, and flexible AI-inference solutions. The EdgeCortix team has been terrific, and we are excited by the future opportunities and possibilities for this ongoing relationship."

Shigeki Kato
Vice President, Enterprise Infrastructure Business Division, Renesas Electronics

Business Overview

Delivering Energy Efficient Edge Based AI Acceleration