Breaking the limits in AI inference acceleration at the edge

Where AI inference acceleration needs it all – more TOPS, lower latency, better area and power efficiency and scalability – EdgeCortix makes it happen.


Strategic Partners


SoftBank Corp. and EdgeCortix Partner to Jointly Realize Low-Latency and Highly Energy-Efficient 5G Wireless Accelerators

A software-first approach to edge AI inference

General-purpose processing cores (CPUs and GPUs) provide developers with flexibility for most applications. However, these general-purpose cores are not well matched to the workloads found in deep neural networks. EdgeCortix began with a mission in mind: redefining edge AI processing from the ground up.

With EdgeCortix technology, spanning a full-stack AI inference software development environment, run-time reconfigurable edge AI inference IP, and edge AI chips for boards and systems, designers can deploy near cloud-level AI performance at the edge. Think about what that can do for these and other applications.

  • Defense: Finding threats, raising situational awareness, and making vehicles smarter
  • Robotics & Drones: Improving flexibility, reducing time-to-productivity, and simplifying programming
  • Smart Manufacturing: Creating breakthroughs in configurability, efficiency, accuracy, and quality
  • Smart Cities: Keeping people and vehicles flowing, saving energy, and enhancing safety and security
  • Automotive Sensing: Helping drivers see their surroundings, avoid hazards, and ease into self-driving vehicles

It’s time for better edge AI hardware, IP, and software technology


EdgeCortix MERA: Software framework for model compilation and co-processor core management

For full-stack AI inference applications, the MERA compiler and software framework translates AI models into code for an edge AI co-processor and a host CPU.

  • Native support for PyTorch, TensorFlow, TensorFlow Lite, and ONNX
  • INT8 quantization of user-defined and community AI inference models
  • Pre-trained models for segmentation, detection, point cloud processing, and more
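
To make that flow concrete, here is a minimal sketch of compiling a framework model for an edge co-processor. The PyTorch tracing steps run as written; the quantize, compile, and deploy calls are hypothetical placeholders, not MERA's documented API, shown only to illustrate the shape of the pipeline.

    # Sketch of a compile-and-deploy flow. The mera_sdk names below are
    # hypothetical placeholders; consult the MERA documentation for the
    # actual API. The PyTorch portion runs as written.
    import torch
    import torchvision.models as models

    # 1. Start from a standard framework model (PyTorch shown here;
    #    TensorFlow, TensorFlow Lite, and ONNX are also supported).
    model = models.resnet50(weights=None).eval()
    example = torch.randn(1, 3, 224, 224)
    traced = torch.jit.trace(model, example)  # freeze the graph for export

    # 2. Quantize to INT8 and compile for the co-processor plus host CPU.
    # quantized = mera_sdk.quantize_int8(traced, calib_data)    # placeholder
    # artifact  = mera_sdk.compile(quantized, target="sakura")  # placeholder

    # 3. At run time, the host dispatches the compiled graph to the chip.
    # output = mera_sdk.deploy(artifact).run(example)           # placeholder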

Dynamic Neural Accelerator IP: Run-time reconfigurable neural network IP for AI inference

Modular and fully run-time reconfigurable, the Dynamic Neural Accelerator (DNA) IP is an AI inference processing core.

  • Up to 16x more inferences/sec/watt than conventional GPU-based hardware
  • Scales from 1,024 to 32,768 MACs across three types of math units
  • Dynamic grouping keeps utilization above 80% across workloads
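
A quick back-of-envelope calculation shows what that MAC range implies. The sketch below uses the standard convention of two operations per MAC per cycle and borrows the 800 MHz clock from the SAKURA-I specification further down as an assumption; achievable figures depend on the chosen configuration and workload.

    # Illustrative peak-throughput math only; real numbers depend on the
    # DNA configuration, clock, and workload.
    def peak_tops(num_macs: int, clock_hz: float) -> float:
        """Peak TOPS, counting each multiply-accumulate as 2 operations."""
        return 2 * num_macs * clock_hz / 1e12

    # The IP scales from 1,024 to 32,768 MACs; 800 MHz is an assumption
    # borrowed from the SAKURA-I spec below.
    for macs in (1024, 32768):
        peak = peak_tops(macs, 800e6)
        print(f"{macs:>6} MACs: {peak:5.1f} peak TOPS, "
              f"~{0.8 * peak:4.1f} TOPS at 80% utilization")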

Meet SAKURA-I, Our Industry-Leading AI Co-Processor


EdgeCortix SAKURA-I: ASIC for fast, efficient AI inference acceleration in boards and systems

The SAKURA-I Edge AI Co-Processor is a high-performance AI inference engine that connects easily into a host system; the SAKURA PCIe Dev Card packages it as a drop-in accelerator for standard hosts.

  • Built in TSMC 12nm FinFET; 40 TOPS @ 800 MHz with a 10W TDP
  • Extended life-cycle availability, as needed for defense and industrial applications
  • PCIe Gen 3 interface; up to five devices can be connected for 200 TOPS
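
Those figures pin down the efficiency story with simple arithmetic: 40 TOPS at a 10W TDP is 4 TOPS per watt, and five connected devices yield the quoted 200 TOPS. A tiny sketch using only the numbers above (the summed TDP is a naive upper bound, not a measured system figure):

    # Efficiency and scaling derived from the stated SAKURA-I figures.
    TOPS_PER_DEVICE = 40.0  # peak throughput per the spec above
    TDP_WATTS = 10.0        # stated thermal design power
    MAX_DEVICES = 5         # devices connectable over PCIe Gen 3

    print(f"Efficiency: {TOPS_PER_DEVICE / TDP_WATTS:.1f} TOPS/W per device")
    print(f"Scaled out: {MAX_DEVICES * TOPS_PER_DEVICE:.0f} TOPS across "
          f"{MAX_DEVICES} devices (~{MAX_DEVICES * TDP_WATTS:.0f} W naive total TDP)")
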
"With the unprecedented growth of AI/machine-learning workloads across industries, the solution we're delivering with leading IP provider EdgeCortix complements BittWare's Intel Agilex FPGA-based product portfolio. Our customers have been searching for this level of AI inferencing solution to increase performance while lowering risk and cost across a multitude of business needs, both today and in the future."
Craig Petrie
VP, Sales and Marketing at BittWare