Modular, scalable, fully configurable neural network edge AI inference IP
EdgeCortix Dynamic Neural Accelerator (DNA) is a flexible neural accelerator IP core with run-time reconfigurable interconnects between its compute units; dynamically grouping those units delivers exceptional parallelism and efficiency. A single core scales from 1024 MACs to 32768 MACs, running at clock speeds up to 1 GHz. Configuration is handled by the EdgeCortix MERA compiler and software framework.
How DNA IP delivers performance in several dimensions
Many AI inference accelerators quote a TOPS rating measured under optimal conditions, with compute units fully parallelized. When realistic AI models are mapped to hardware, parallelism between neural network layers drops, and utilization falls to 40% or less. That inefficiency costs designers flexibility, area, and power.
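To see why utilization matters more than the headline rating, here is a small illustrative calculation in Python. The MAC count and clock speed are the DNA configuration limits quoted above; the utilization figures are the ones cited in this article, not measured benchmarks:

```python
# Illustrative only: how sustained utilization erodes a headline TOPS rating.
# Peak TOPS = MACs x 2 ops per MAC (multiply + add) x clock in GHz / 1000.

def peak_tops(macs: int, clock_ghz: float) -> float:
    return macs * 2 * clock_ghz / 1e3

peak = peak_tops(32768, 1.0)  # largest single-core DNA configuration
print(f"peak rating:        {peak:.1f} TOPS")
print(f"at 40% utilization: {peak * 0.40:.1f} TOPS effective")
print(f"at 80% utilization: {peak * 0.80:.1f} TOPS effective")
```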
Using a patented approach, EdgeCortix reconfigures the data paths between DNA IP execution units to achieve better parallelism and reduce on-chip memory bandwidth requirements. MERA software then optimizes computation order and resource allocation when scheduling neural network tasks, even with several models running concurrently on the same DNA IP core.
The result is sustained performance. DNA IP maintains better than 80% efficiency under load, which translates into high inferences/sec/W, as much as 16x better than conventional GPU-based hardware. Latency, essential to determinism in real-time applications, stays below 4 msec thanks to the reconfigured data paths and optimized scheduling.
DNA IP provides more flexibility than hardware-optimized solutions
Single-model benchmarks make for easy comparisons. In the real world, more complex multi-model scenarios are emerging, such as automotive sensing, where different models handle different tasks in the processing pipeline.
Rather than being hardware-optimized for just a few popular neural network models, DNA IP combined with MERA software runs a wide range of models more efficiently.
Inside the DNA IP are three types of execution units, each supporting INT8 multiply-accumulate operations with INT32 accumulators (a short sketch of this arithmetic follows the list).
- A standard computation engine uses systolic arrays scalable from 3x3 to 64x64 elements, with the reconfigurable data-path capability described above.
- A depth-wise computation engine takes a 2D approach, decomposing a 3D kernel into 2D slices, again with optimized interconnects.
- A separate vector unit handles tasks such as neural network activation and scaling, operations common in camera-based vision systems.
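To make those number formats concrete, the sketch below shows an INT8 multiply-accumulate feeding an INT32 accumulator in NumPy. It illustrates only the arithmetic the three engine types share, not the systolic-array hardware itself:

```python
import numpy as np

def int8_mac(weights: np.ndarray, activations: np.ndarray) -> np.int32:
    """INT8 multiply-accumulate with an INT32 accumulator."""
    assert weights.dtype == np.int8 and activations.dtype == np.int8
    # Widen each INT8 product to INT32 before summing, so the accumulator
    # holds the exact result (worst case per product is 127 * 127).
    products = weights.astype(np.int32) * activations.astype(np.int32)
    return np.int32(products.sum())

w = np.random.randint(-128, 128, size=64, dtype=np.int8)
x = np.random.randint(-128, 128, size=64, dtype=np.int8)
print(int8_mac(w, x))
```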
Get the details in the Dynamic Neural Accelerator IP datasheet.
See what Microprocessor Report said about DNA IP
How developers can explore DNA IP
Inference Pack on BittWare FPGA cards

The DNA IP core is flexible enough for either FPGA or SoC designs. Working with BittWare, EdgeCortix has created an inference pack for the DNA IP core, with ready-to-use bitstreams for BittWare IA-840F and IA-420F cards featuring high-performance Intel® Agilex™ FPGAs.
See more on the BittWare approach to pairing these technologies in their DNA ML framework datasheet.
Developing with the DNA IP and MERA
EdgeCortix MERA is the compiler and software framework that enables deep neural network graph compilation and AI inference on the DNA IP. With built-in support for the open-source Apache TVM compiler framework, it provides the tools, APIs, code generator, and runtime needed to deploy a pre-trained deep neural network to the DNA IP. MERA supports model development workflows built on tools including PyTorch, TensorFlow, TensorFlow Lite, and ONNX.
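In practice, a deployment flow might look roughly like the sketch below. This is a hypothetical outline: the mera module, mera.compile, and get_runner names are placeholders invented for illustration, not the actual MERA API (consult the MERA documentation for the real calls); only the PyTorch and torchvision calls are standard:

```python
import torch
import torchvision

# 1. Start from a pre-trained model and trace it with PyTorch.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# 2. Compile the traced graph for a chosen DNA configuration.
#    NOTE: placeholder API below; MERA builds on Apache TVM, but these
#    module and function names are invented for this sketch.
import mera                                    # hypothetical module name
deployment = mera.compile(traced, target="dna", macs=32768)

# 3. Run inference through the generated runtime (placeholder calls).
runner = deployment.get_runner()
output = runner.run(example.numpy())
```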
"With the unprecedented growth of AI/machine learning workloads across industries, the solution we're delivering with leading IP provider EdgeCortix complements BittWare's Intel Agilex FPGA-based product portfolio. Our customers have been searching for this level of AI inferencing solution to increase performance while lowering risk and cost across a multitude of business needs, both today and in the future."