
EdgeCortix SAKURA®-II

Energy-efficient Edge AI Accelerator

SAKURA-II AI Accelerator

EdgeCortix SAKURA-II is an advanced AI accelerator providing best-in-class efficiency, driven by our low-latency Dynamic Neural Accelerator (DNA). SAKURA-II is designed for applications requiring fast, real-time Batch=1 AI inferencing, delivering excellent performance in a small-footprint, low-power silicon device.

SAKURA-II is designed to handle the most challenging Generative AI applications at the edge, enabling designers to create new content based on disparate inputs like images, text, and sound. Supporting multi-billion parameter models like Llama 2, Stable Diffusion, DETR, and ViT within a typical power envelope of 8W, SAKURA-II meets the requirements of a vast array of edge Generative AI use cases in vision, language, audio, and many other applications.

SAKURA-II Key Benefits

Optimized for Generative AI
Supports multi-billion parameter Generative AI models like Llama 2, Stable Diffusion, DETR, and ViT within a typical power envelope of 8W
Efficient AI Compute
Achieves more than 2x the AI compute utilization of other solutions, resulting in exceptional energy efficiency
Robust Memory Bandwidth
Up to 4x more DRAM bandwidth than competing AI accelerators, ensuring superior performance for LLMs and LVMs
Large DRAM Capacity
Support for up to 32GB of DRAM, enabling efficient processing of complex vision and Generative AI workloads
Sparse Computing
Reduces memory footprint and optimizes DRAM bandwidth
Real-Time Data Streaming
Optimized for low-latency operations with Batch=1 processing
Arbitrary Activation Function Support
Hardware-accelerated approximation provides enhanced adaptability
Advanced Precision
Software-enabled mixed-precision provides near-FP32 accuracy
Power Management
Advanced power management enables ultra-high efficiency modes
SAKURA-II Technical Specs

Performance
60 TOPS (INT8)
30 TFLOPS (BF16)
DRAM Support
Dual 64-bit LPDDR4x
(8/16/32GB total)
DRAM Bandwidth
68 GB/sec
On-chip SRAM
20MB
Compute Efficiency
Up to 90% utilization
Temp Range
-40C to 85C
Power Consumption
8W (typical)
Package
19mm x 19mm BGA
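To give a feel for what these specifications imply, here is a rough, roofline-style back-of-envelope sketch using only the numbers above (60 TOPS INT8, 8W typical, 68 GB/sec DRAM bandwidth). The 7B-parameter model size is a hypothetical illustration matching the 7B models listed below; the token-rate figure is an analytical upper bound, not a measured result.

```python
# Back-of-envelope estimates derived from the SAKURA-II spec table.
# Illustrative roofline-style arithmetic only -- not measured results.

PEAK_INT8_OPS = 60e12   # 60 TOPS (INT8)
TYPICAL_POWER_W = 8     # 8W (typical)
DRAM_BW_BPS = 68e9      # 68 GB/sec

# Energy efficiency at peak INT8 throughput.
tops_per_watt = PEAK_INT8_OPS / 1e12 / TYPICAL_POWER_W
print(f"{tops_per_watt:.1f} TOPS/W")        # 7.5 TOPS/W

# Arithmetic intensity (ops per DRAM byte) needed to be compute-bound.
crossover = PEAK_INT8_OPS / DRAM_BW_BPS
print(f"{crossover:.0f} ops/byte")          # 882 ops/byte

# Batch=1 LLM decoding streams every weight roughly once per token, so it
# is memory-bound: bandwidth / model size bounds the token rate.
model_bytes = 7e9  # hypothetical 7B-parameter model quantized to INT8
tokens_per_sec_bound = DRAM_BW_BPS / model_bytes
print(f"<= {tokens_per_sec_bound:.1f} tokens/sec")  # <= 9.7 tokens/sec
```

This is why the DRAM bandwidth and capacity figures matter as much as the TOPS rating for LLM workloads: at Batch=1, memory traffic, not compute, sets the ceiling.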

Get the details in the SAKURA-II product brief

MERA Software Supports Diverse Neural Networks from Convolutions to the Latest Generative AI Models

Example Models Include:

Transformer Models
DETR
DINO
Whisper Encoder / Decoder
DistilBERT
DistilBERT - SST2
Nano-GPT
GPT-2 - 150M
Distil-GPT-2 (HF)
GPT-2 (HF) - 117M
GPT-2 (HF) - medium / large
GPT-2 - XL (HF) - 1.5B
TinyLlama (HF) - 1.1B
Phi-2 (HF) - 3B
Open-Llama2 (HF) - 7B
CodeLlama (HF) - 7B
Mistral-v0.2 (HF) - 7B
Llama3 - 8B
ViT (HF) / CLIP / Mobile-ViT
ConvNextV1/V2 (HF)
SegFormer
Roberta-Emotion
StableDiffusion V1.5

Convolutional Models
ResNet 18
ResNet 50/101
Big YoloV3
TinyYolo V3
Yolo V5/V6/V8
YoloX
EfficientNet-Lite
EfficientNet-V2
SFA3D
MonoDepth - MiDaS
U-Net
MoveNet
DeepLab
MobileNet V1-V2
MobileNetV2-SSD
GladNet
ABPN
SCI

SAKURA-II Modules and Cards

SAKURA-II modules and cards are architected to run the latest vision and Generative AI models with market-leading energy efficiency and low latency.

SAKURA-II M.2 modules are high-performance (60 TOPS) edge AI accelerators in the compact M.2 2280 form factor, making them the best choice for space-constrained designs.

SAKURA-II PCIe cards are high-performance edge AI accelerators delivering up to 120 TOPS in a low-profile, single-slot PCIe form factor. With single- and dual-device options, the best choice depends on the overall performance needed.

Explore our Complete Edge AI Platform

MERA Compiler and Framework
Industry-first software platform enabling AI inference across heterogeneous systems

Unique Software
Learn More
Dynamic Neural Accelerator Technology
Flexible, run-time reconfigurable, highly parallelized, and efficient architecture

Proprietary Architecture
Learn More
AI Accelerator Modules and Cards
Up to 240 TOPS in systems powered by the latest SAKURA-II AI Accelerators

Deployable Systems
Learn More

EdgeCortix Platform Solves Critical Edge AI Market Challenges

Defense
Learn More
Robotics & Drones
Learn More
Smart Manufacturing
Learn More
Smart Cities
Learn More
Automotive Sensing
Learn More
Given the tectonic shift in information processing at the edge, companies are now seeking near cloud level performance where data curation and AI driven decision making can happen together. Due to this shift, the market opportunity for the EdgeCortix solutions set is massive, driven by the practical business need across multiple sectors which require both low power and cost-efficient intelligent solutions. Given the exponential global growth in both data and devices, I am eager to support EdgeCortix in their endeavor to transform the edge AI market with an industry-leading IP portfolio that can deliver performance with orders of magnitude better energy efficiency and a lower total cost of ownership than existing solutions."

Akira Takata
Former CEO of MegaChips Corporation

Improving the performance and the energy efficiency of our network infrastructure is a major challenge for the future. Our expectation of EdgeCortix is to be a partner who can provide both the IP and expertise that is needed to tackle these challenges simultaneously."

Ryuji Wakikawa
Head, Research Institute of Advanced Technology at SoftBank Corp
With the unprecedented growth of AI/Machine learning workloads across industries, the solution we're delivering with leading IP provider EdgeCortix complements BittWare's Intel Agilex FPGA-based product portfolio. Our customers have been searching for this level of AI inferencing solution to increase performance while lowering risk and cost across a multitude of business needs both today and in the future."

Craig Petrie
VP, Sales and Marketing at BittWare
EdgeCortix is in a truly unique market position. Beyond simply taking advantage of the massive need and growth opportunity in leveraging AI across many key business sectors, it’s the business strategy with respect to how they develop their solutions for their go-to-market that will be the great differentiator. In my experience, most technology companies focus very myopically on delivering great code or perhaps semiconductor design. EdgeCortix’s secret sauce is in how they’ve co-developed their IP, applying equal importance to both the software IP and the chip design, creating a symbiotic software-centric hardware ecosystem. This sets EdgeCortix apart in the marketplace.”

Daniel Fujii
President & CEO of Trust Capital Co., Ltd., member of the Executive Committee of Silicon Valley Japan Platform
We recognized immediately the value of adding the MERA compiler and associated tool set to the RZ/V MPU series, as we expect many of our customers to implement application software including AI technology. As we drive innovation to meet our customers' needs, we are collaborating with EdgeCortix to rapidly provide our customers with robust, high-performance and flexible AI-inference solutions. The EdgeCortix team has been terrific, and we are excited by the future opportunities and possibilities for this ongoing relationship."

Shigeki Kato
Vice President, Enterprise Infrastructure Business Division, Renesas Electronics
Business Overview

Delivering Energy-Efficient, Edge-Based AI Acceleration