Breaking the limits of AI processors and edge AI inference acceleration

Where AI inference acceleration needs it all – more TOPS, lower latency, better area and power efficiency, and scalability – EdgeCortix AI processor cores make it happen.



EdgeCortix Exhibition at the AI Expo Tokyo 2023 @ Tokyo Big Sight
Here are some glimpses from Day 1 at the EdgeCortix booth.

A software-first approach to edge AI processing

General-purpose processing cores - CPUs and GPUs - give developers flexibility for most applications. However, these general-purpose cores are a poor match for the workloads found in deep neural networks. EdgeCortix began with a mission in mind: redefining edge AI processing from the ground up.

With EdgeCortix technology - a full-stack AI inference software development environment, run-time reconfigurable edge AI inference IP, and edge AI chips for boards and systems - designers can deploy near cloud-level AI performance at the edge. Think about what that can do for these and other applications.

  • Defense - Finding threats, raising situational awareness, and making vehicles smarter
  • Robotics & Drones - Improving flexibility, reducing time-to-productivity, and simplifying programming
  • Smart Manufacturing - Creating breakthroughs in configurability, efficiency, accuracy, and quality
  • Smart Cities - Keeping people and vehicles flowing, saving energy, enhancing safety and security
  • Automotive Sensing - Helping drivers see their surroundings, avoid hazards, and ease into self-driving vehicles

It’s time for better edge AI hardware, IP, and software technology


EdgeCortix MERA: Software framework for modeling and co-processor core management

For full-stack AI inference applications, the MERA compiler and software framework translates AI models into code for an edge AI co-processor and a host CPU (a model-export sketch follows the list below).

  • Native support for PyTorch, TensorFlow, TensorFlow Lite, and ONNX
  • INT8 quantization of user-defined and community AI inference models
  • Pre-trained models for segmentation, detection, point clouds, and other applications
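
As a concrete illustration of the front of that flow, the sketch below exports a pre-trained PyTorch model to ONNX, one of the input formats MERA supports natively. The model choice, file names, and export settings are illustrative assumptions, and the MERA compilation step itself is omitted rather than guessed at.

    # Sketch: prepare a PyTorch model for a MERA-style compiler by exporting
    # it to ONNX (one of MERA's natively supported input formats). Model
    # choice and file names are illustrative; requires torch and torchvision.
    import torch
    import torchvision

    model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
    dummy = torch.randn(1, 3, 224, 224)  # NCHW input the exporter traces with

    torch.onnx.export(
        model, dummy, "resnet50.onnx",
        input_names=["input"], output_names=["logits"],
        opset_version=13,
    )
    # INT8 quantization of the exported model would then be handled inside
    # the MERA toolchain, per the feature list above.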

Dynamic Neural Accelerator: Run-time reconfigurable neural network IP for AI processors

Modular and fully run-time configurable, the Dynamic Neural Accelerator is an AI processor core for edge inference acceleration.

  • 16x more inferences/sec/watt than conventional GPU-based hardware
  • Scales from 1,024 to 32,768 MACs across three types of math units
  • Dynamic grouping keeps workload utilization above 80% (see the throughput sketch below)
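
For a rough sense of what that MAC range means, the sketch below computes back-of-envelope throughput. It assumes the common convention of two operations (a multiply and an accumulate) per MAC per cycle; the 800 MHz clock and the 80% utilization figure are illustrative, not datasheet values.

    # Back-of-envelope peak throughput for a MAC array. Assumes 2 ops
    # (multiply + accumulate) per MAC per cycle; the 800 MHz clock and
    # 80% utilization figures are illustrative.
    def peak_tops(num_macs: int, clock_hz: float, ops_per_mac: int = 2) -> float:
        return num_macs * ops_per_mac * clock_hz / 1e12

    CLOCK_HZ = 800e6
    UTILIZATION = 0.80  # sustained fraction of peak, per the list above

    for macs in (1024, 32768):  # the DNA's configurable range
        peak = peak_tops(macs, CLOCK_HZ)
        print(f"{macs:>6} MACs: {peak:5.1f} peak TOPS, "
              f"~{peak * UTILIZATION:5.1f} sustained at 80% utilization")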

Meet SAKURA-I, our industry-leading AI co-processor

The EdgeCortix SAKURA PCIe Dev Card expands a host system with a powerful yet efficient edge AI chip delivering 40 TOPS.

EdgeCortix SAKURA-I: ASIC for fast, efficient AI inference acceleration in boards and systems

The SAKURA-I Edge AI Co-Processor is an advanced design for a high-performance AI inference engine that connects easily to a host system.

  • Built in TSMC 12nm FinFET; 40 TOPS @ 800 MHz with a 10 W TDP
  • Extended life-cycle availability needed for defense and industrial applications
  • PCIe Gen 3 interface; up to five devices can be linked for 200 TOPS (see the dispatch sketch below)
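
To picture how a host might use several cards at once, the sketch below round-robins inference batches across five device handles. The SakuraDevice class and its infer() method are hypothetical stand-ins invented for illustration, not a real EdgeCortix driver API.

    # Illustrative only: SakuraDevice and infer() are hypothetical stand-ins,
    # not a real EdgeCortix driver API. Shows round-robin dispatch across
    # five PCIe-attached co-processors (5 x 40 TOPS = 200 TOPS aggregate).
    from itertools import cycle

    class SakuraDevice:
        def __init__(self, index: int):
            self.index = index

        def infer(self, batch):
            # A real implementation would DMA the batch over PCIe Gen 3
            # and run the compiled model on the co-processor.
            return f"device {self.index} processed {batch!r}"

    devices = [SakuraDevice(i) for i in range(5)]
    dispatcher = cycle(devices)

    for batch in ("frame_0", "frame_1", "frame_2"):
        print(next(dispatcher).infer(batch))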
"Improving the performance and the energy efficiency of our network infrastructure is a major challenge for the future. Our expectation of EdgeCortix is to be a partner who can provide both the IP and expertise that is needed to tackle these challenges simultaneously."
Ryuji Wakikawa
Head, Research Institute of Advanced Technology at SoftBank Corp