Breaking the limits in AI processors and edge AI inference acceleration
Where AI inference acceleration needs it all – more TOPS, lower latency, better area and power efficiency, and scalability – EdgeCortix AI processor cores make it happen.
Breaking News!
EdgeCortix Closes $20 Million in Additional Funding Round
Backed by leading Japanese and US VC firms and a world-leading semiconductor company, EdgeCortix is set to disrupt the multi-billion-dollar edge market.
A software-first approach to edge AI processing
General-purpose processing cores - CPUs and GPUs - give developers flexibility for most applications. However, these general-purpose cores don't match up well with the workloads found in deep neural networks. EdgeCortix began with a mission in mind: redefining edge AI processing from the ground up.
With EdgeCortix technology - a full-stack AI inference software development environment, run-time reconfigurable edge AI inference IP, and edge AI chips for boards and systems - designers can deploy near cloud-level AI performance at the edge, across a wide range of applications.
It’s time for better edge AI hardware, IP, and software technology
EdgeCortix MERA: Software framework for modeling and hardware compilation
For full-stack AI inference applications, the MERA compiler and software framework translates AI models into code for an edge AI co-processor and a host CPU (a minimal front-end sketch follows the list below).
- Native support for PyTorch, TensorFlow, TensorFlow Lite, and ONNX
- INT8 quantization of user-defined and community AI inference models
- Pre-trained models for segmentation, detection, point cloud, and other applications
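As a rough illustration of the front end of this flow, the sketch below exports a standard PyTorch model to ONNX, one of MERA's supported input formats. The model choice, file name, and input shape are placeholders, and the MERA-specific quantization and compile calls are intentionally omitted because their exact API is not documented on this page.

```python
# Illustrative front-end step only: produce an ONNX artifact that a
# compiler such as MERA could ingest. Model, shapes, and file names
# are arbitrary placeholders.
import torch
import torchvision

# Any pre-trained PyTorch model works for this sketch.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()

# Trace with a representative input shape and export to ONNX.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)

# The resulting resnet50.onnx (or the original PyTorch module) would then
# be handed to the MERA compiler for INT8 quantization and code generation
# targeting the DNA co-processor and the host CPU.
```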
Dynamic Neural Accelerator: Run-time reconfigurable neural network IP for AI processors
Modular and fully run-time reconfigurable, the Dynamic Neural Accelerator (DNA) is an AI processor core for edge inference acceleration.
- Delivers 16x the inferences/sec/watt of conventional GPU-based hardware
- Scales from 1024 to 32768 MACs across three types of math units (see the sketch after this list)
- Dynamic grouping handles workloads at 80%+ utilization
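To make the MAC-count scaling concrete, here is a minimal back-of-the-envelope sketch of peak throughput, counting each MAC as two operations (multiply + accumulate) per clock cycle. The 800 MHz clock is borrowed from the SAKURA-I figures below purely as an example; achievable clocks and usable throughput depend on the chosen DNA configuration and target process.

```python
# Minimal sketch: theoretical peak throughput of a MAC array.
def peak_tops(num_macs: int, clock_hz: float) -> float:
    """Peak TOPS, counting one MAC as two operations per cycle."""
    return 2 * num_macs * clock_hz / 1e12

# Example points across the 1024-32768 MAC scaling range, at an
# assumed 800 MHz clock (taken from the SAKURA-I specs below).
for macs in (1024, 8192, 32768):
    print(f"{macs:>6} MACs @ 800 MHz -> {peak_tops(macs, 800e6):.1f} peak TOPS")
```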
Meet SAKURA-I, our industry-leading AI co-processor

EdgeCortix SAKURA-I: ASIC for fast, efficient AI inference acceleration in boards and systems
The SAKURA-I Edge AI Co-Processor is a high-performance AI inference engine that connects easily to a host system.
- Built in TSMC 12nm FinFET; 40 TOPS @ 800 MHz with a 10 W TDP
- Extended life-cycle availability for defense and industrial applications
- PCIe Gen 3 interface; up to five devices can be combined for 200 TOPS
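The short sketch below simply restates the headline SAKURA-I figures as arithmetic: efficiency at TDP for a single device and the idealized aggregate throughput of five devices combined over PCIe. Linear scaling across devices is an assumption used for illustration, not a measured result.

```python
# Minimal sketch based on the SAKURA-I figures quoted above.
CARD_TOPS = 40    # peak TOPS per SAKURA-I device at 800 MHz
CARD_TDP_W = 10   # thermal design power per device, in watts

def aggregate_tops(num_cards: int) -> int:
    """Idealized aggregate throughput assuming linear scaling across devices."""
    return num_cards * CARD_TOPS

print(f"Efficiency at TDP: {CARD_TOPS / CARD_TDP_W:.0f} TOPS/W per device")
print(f"Five devices:      {aggregate_tops(5)} TOPS aggregate")
```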

"Given the tectonic shift in information processing at the edge, companies are now seeking near cloud-level performance, where data curation and AI-driven decision making can happen together. Due to this shift, the market opportunity for the EdgeCortix solution set is massive, driven by the practical business need across multiple sectors that require both low-power and cost-efficient intelligent solutions. Given the exponential global growth in both data and devices, I am eager to support EdgeCortix in their endeavor to transform the edge AI market with an industry-leading IP portfolio that can deliver performance with orders-of-magnitude better energy efficiency and a lower total cost of ownership than existing solutions."
"Improving the performance and the energy efficiency of our network infrastructure is a major challenge for the future. Our expectation of EdgeCortix is to be a partner who can provide both the IP and expertise that is needed to tackle these challenges simultaneously."
"With the unprecedented growth of AI/machine learning workloads across industries, the solution we're delivering with leading IP provider EdgeCortix complements BittWare's Intel Agilex FPGA-based product portfolio. Our customers have been searching for this level of AI inferencing solution to increase performance while lowering risk and cost across a multitude of business needs both today and in the future."

"EdgeCortix is in a truly unique market position. Beyond simply taking advantage of the massive need and growth opportunity in leveraging AI across many key business sectors, it's the business strategy behind how they develop their solutions for their go-to-market that will be the great differentiator. In my experience, most technology companies focus very myopically on delivering great code or perhaps semiconductor design. EdgeCortix's secret sauce is in how they've co-developed their IP, applying equal importance to both the software IP and the chip design and creating a symbiotic, software-centric hardware ecosystem. This sets EdgeCortix apart in the marketplace."