Breaking the limits in AI inference acceleration at the edge
AI inference acceleration needs it all: more TOPS, lower latency, better area and power efficiency, and scalability. EdgeCortix makes it happen.

A software-first approach to edge AI inference
General-purpose processing cores, CPUs and GPUs, give developers flexibility for most applications. However, these general-purpose cores are a poor match for the workloads found in deep neural networks. EdgeCortix was founded with a single mission: to redefine edge AI processing from the ground up.
With EdgeCortix technology, which spans a full-stack AI inference software development environment, run-time reconfigurable edge AI inference IP, and edge AI chips for boards and systems, designers can deploy near cloud-level AI performance at the edge. Think about what that level of performance can do for edge applications.
It’s time for better edge AI hardware, IP, and software technology
EdgeCortix MERA: Software framework for modeling and co-processor core management
For full-stack AI inference applications, the MERA compiler and software framework translates AI models into code for an edge AI co-processor and a host CPU (a deployment sketch follows the list below).
- Native support for PyTorch, TensorFlow, TensorFlow Lite, and ONNX
- INT8 quantization of user-defined and community AI inference models
- Pre-trained models for segmentation, detection, point cloud, and other applications
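To make that flow concrete, here is a minimal sketch of what an INT8 deployment through a MERA-style toolchain could look like. The `mera` module, its `compile` function, and the `quantize`/`target` parameters are illustrative assumptions for this example, not the documented MERA API; only the PyTorch and torchvision calls are standard.

```python
# Minimal sketch of a MERA-style INT8 deployment flow.
# NOTE: the "mera" import and everything called on it below are
# hypothetical placeholders, not the real MERA interface.
import torch
import torchvision

# Start from a standard pre-trained PyTorch model.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()

# Trace it so the compiler sees a static graph.
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

import mera  # hypothetical package name

# Ask the compiler for INT8 code targeting the edge AI co-processor,
# with the remaining operators compiled for the host CPU.
deployment = mera.compile(
    traced,
    input_shape=(1, 3, 224, 224),
    quantize="int8",
    target="sakura",
)

# Run inference through the compiled artifact.
scores = deployment.run(example.numpy())
```

The point of the split is that the framework, not the developer, decides which operators run on the co-processor and which stay on the host CPU.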
Dynamic Neural Accelerator IP: Run-time reconfigurable neural network IP for AI
The Dynamic Neural Accelerator (DNA) IP is a modular, fully run-time reconfigurable AI inference processing core.
- Up to 16x more inferences per second per watt than conventional GPU-based hardware
- Scales from 1,024 to 32,768 MACs across three types of math units
- Dynamic grouping of compute resources sustains over 80% utilization across workloads (see the sketch below)
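The dynamic-grouping claim is easiest to see with a toy model: when MAC units are statically partitioned, a heavy layer cannot borrow units left idle by a light one. The demand numbers and both functions below are invented purely for illustration; this is not a model of the actual DNA scheduler.

```python
# Toy comparison of static partitioning vs. run-time grouping of MACs.
# All figures are made up for the example.

LAYER_MAC_DEMAND = [512, 3072, 1024, 200]  # per-cycle MAC demand of 4 layers
TOTAL_MACS = 4096

def fixed_partition(demands: list[int], total: int) -> float:
    """Statically split MACs evenly across layers, like a fixed pipeline."""
    share = total // len(demands)
    used = sum(min(d, share) for d in demands)
    return used / total

def dynamic_grouping(demands: list[int], total: int) -> float:
    """Regroup MACs at run time so heavy layers absorb idle units."""
    return min(sum(demands), total) / total

print(f"fixed utilization:   {fixed_partition(LAYER_MAC_DEMAND, TOTAL_MACS):.0%}")
print(f"dynamic utilization: {dynamic_grouping(LAYER_MAC_DEMAND, TOTAL_MACS):.0%}")
```

In this toy case the fixed split strands capacity at about 67% utilization, while regrouping the same 4,096 MACs reaches 100%, which is the intuition behind the 80%+ figure quoted above.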
EdgeCortix SAKURA-I: ASIC for fast, efficient AI inference acceleration in boards and systems
The SAKURA-I edge AI co-processor is a high-performance AI inference engine designed to connect easily into a host system.
- Built in TSMC 12nm FinFET; 40 TOPS @ 800 MHz with a 10W TDP
- Extended life-cycle availability for defense and industrial markets
- PCIe Gen 3 interface; up to five devices can be combined for 200 TOPS (see the sketch below)
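Those headline figures combine straightforwardly; the short sketch below simply works the arithmetic from the numbers quoted above (peak values, before PCIe and system overheads).

```python
# Back-of-the-envelope math from the SAKURA-I figures above.
TOPS_PER_DEVICE = 40   # at 800 MHz
TDP_WATTS = 10         # per device
MAX_DEVICES = 5        # connected over PCIe Gen 3

aggregate_tops = TOPS_PER_DEVICE * MAX_DEVICES  # 200 TOPS, as quoted
tops_per_watt = TOPS_PER_DEVICE / TDP_WATTS     # 4 TOPS/W peak per device

print(f"{MAX_DEVICES} devices -> {aggregate_tops} TOPS "
      f"at ~{tops_per_watt:.0f} TOPS/W each")
```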
In the words of partner BittWare: "With the unprecedented growth of AI/machine-learning workloads across industries, the solution we're delivering with leading IP provider EdgeCortix complements BittWare's Intel Agilex FPGA-based product portfolio. Our customers have been searching for this level of AI inference solution to increase performance while lowering risk and cost across a multitude of business needs, both today and in the future."