Breaking the limits in AI processors and edge AI inference acceleration
Where AI inference acceleration needs it all – more TOPS, lower latency, better area and power efficiency, and scalability – EdgeCortix AI processor cores make it happen.
A software-first approach to edge AI processing
General-purpose processing cores - CPUs and GPUs - give developers flexibility for most applications, but they are a poor match for the workloads found in deep neural networks. EdgeCortix began with a mission in mind: redefining edge AI processing from the ground up.
EdgeCortix technology spans a full-stack AI inference software development environment, run-time reconfigurable edge AI inference IP, and edge AI chips for boards and systems, letting designers deploy near cloud-level AI performance at the edge. Consider what that performance can do across a wide range of edge applications.
It’s time for better edge AI hardware, IP, and software technology
EdgeCortix MERA: Software framework for modeling and co-processor core management
For full-stack AI inference applications, the MERA compiler and software framework translates AI models into code for an edge AI co-processor and a host CPU; a minimal sketch of this hand-off follows the list below.
- Native support for PyTorch, TensorFlow, TensorFlow Lite, and ONNX
- INT8 quantization of user-defined and community AI inference models
- Pre-trained models for segmentation, detection, point cloud processing, and more
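To make the flow concrete, here is a minimal framework-side sketch of preparing a model for an INT8 edge-inference compiler. The export steps use standard PyTorch and ONNX APIs; the final hand-off call is a hypothetical placeholder, since the exact MERA entry points are not documented in this overview.

```python
# Prepare a PyTorch model for an INT8 edge-inference compiler.
# The export below uses standard PyTorch/ONNX APIs; the commented
# hand-off at the end is a hypothetical placeholder, not the
# documented MERA API.
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()
example_input = torch.randn(1, 3, 224, 224)  # NCHW image batch

# Export a static graph in one of the supported input formats (ONNX here;
# PyTorch, TensorFlow, and TensorFlow Lite are also listed above).
torch.onnx.export(model, example_input, "resnet50.onnx", opset_version=13)

# Hypothetical hand-off: the compiler would quantize the graph to INT8
# and partition operators between the host CPU and the co-processor, e.g.:
#   artifact = mera.compile("resnet50.onnx", quantize="int8", target="sakura-i")
```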
Dynamic Neural Accelerator: Run-time reconfigurable neural network IP for AI processors
Modular and fully run-time configurable, the Dynamic Neural Accelerator is an AI processor core for edge inference acceleration.
- 16x the inferences per second per watt of conventional GPU-based hardware
- Scales from 1,024 to 32,768 MACs across three types of math units
- Dynamic grouping keeps workload utilization above 80% (illustrated in the sketch after this list)
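The utilization claim is easiest to see with a toy model. The sketch below contrasts a fixed MAC array, where each layer runs alone and a small layer leaves most units idle, with run-time grouping, where the scheduler carves the MAC pool into right-sized groups so concurrent layers keep the hardware busy. The layer sizes and the scheduling policy are illustrative assumptions, not a model of the actual DNA IP.

```python
# Toy comparison: fixed MAC array vs. run-time dynamic grouping.
# All layer sizes and the policy are illustrative assumptions.
TOTAL_MACS = 1024  # pool size; the IP scales from 1,024 to 32,768 MACs

# (layer name, MACs the layer can usefully occupy per cycle)
layers = [("conv_small", 96), ("depthwise", 160), ("conv_large", 640)]

# Fixed array: layers run one at a time on the whole pool.
fixed_util = sum(need for _, need in layers) / (TOTAL_MACS * len(layers))

# Dynamic grouping: each layer gets a right-sized group; all run together.
groups = {name: need for name, need in layers}
assert sum(groups.values()) <= TOTAL_MACS
dynamic_util = sum(groups.values()) / TOTAL_MACS

print(f"fixed array utilization:   {fixed_util:.0%}")   # ~29%
print(f"dynamic group utilization: {dynamic_util:.0%}")  # ~88%
```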
EdgeCortix SAKURA-I: ASIC for fast, efficient AI inference acceleration in boards and systems
The SAKURA-I Edge AI Co-Processor is a high-performance AI inference engine designed to connect easily to a host system.
- Built in TSMC 12nm FinFET, 40 TOPS @ 800 MHz, TDP of 10W
- Extended life-cycle availability for defense and industrial applications
- PCIe Gen 3 interface; up to five devices can be combined for 200 TOPS (see the back-of-envelope sketch below)
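A quick back-of-envelope check ties these headline numbers together. It assumes the usual convention of two operations (a multiply plus an accumulate) per MAC per cycle, which this page does not state explicitly.

```python
# Back-of-envelope figures derived from the SAKURA-I specs listed above.
peak_tops = 40.0   # INT8 TOPS at 800 MHz
clock_hz = 800e6
tdp_watts = 10.0

ops_per_cycle = peak_tops * 1e12 / clock_hz  # 50,000 ops per cycle
macs_implied = ops_per_cycle / 2             # ~25,000 MACs, assuming 2 ops/MAC
tops_per_watt = peak_tops / tdp_watts        # 4 TOPS/W at TDP
scaled_tops = 5 * peak_tops                  # five devices over PCIe -> 200 TOPS

print(macs_implied, tops_per_watt, scaled_tops)
```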
"Improving the performance and energy efficiency of our network infrastructure is a major challenge for the future. We expect EdgeCortix to be a partner who can provide both the IP and the expertise needed to tackle these challenges simultaneously."