EdgeCortix was founded in July 2019 with the radical idea of applying a software-first approach to designing an artificial intelligence (AI)-specific processor architecture from the ground up. Our core mission at EdgeCortix is to overcome the fundamental mismatch between the current generation of AI software for deep neural networks and off-the-shelf processors such as CPUs and GPUs:
“To bring near cloud-level performance to all forms of devices at the edge, delivering orders of magnitude better energy efficiency and processing speed, while drastically reducing operating costs.”
We invented the Dynamic Neural Accelerator IP (DNA IP) with this mission in mind. It tightly couples today's neural networks with underlying domain-specific hardware that processes those networks seamlessly. Backed by our technology, our strong team, and our customer-first culture, we aim to become the world's leading fabless semiconductor design company specializing in low-power AI system software and processor IP, focused on efficiency at the edge.
Rethinking Edge AI Inference Technology
At EdgeCortix, we think about software flexibility and robustness first while designing our processor (hardware) architectures. We call this design approach "hardware & software co-exploration." It stands in contrast to the past several decades of processor design, which concentrated on bringing a new hardware chip to market first, with software, including compilers, treated as an afterthought.
We are targeting advanced computer vision applications first, delivering our proprietary hardware and software IP on existing processors such as FPGAs as well as in custom-designed ASICs, to positively disrupt the rapidly growing market for AI inference acceleration at the edge across several application segments.
Realizing the Vision in Silicon
The SAKURA Energy-Efficient Edge AI Co-Processor is the first EdgeCortix product implementing the DNA IP in an ASIC. Compared with conventional GPU-based hardware, SAKURA provides designers with significant advantages:
- 10x Faster
- 20x More Energy-efficient
- High Accuracy
- Runtime Reconfigurable
- Highly Scalable
Delivering AI Inference in Vital Application Segments
EdgeCortix's DNA IP architecture and industry-first AI hardware and software co-exploration engine enable AI accelerator designs to be adapted for edge devices operating in real time, at significantly reduced cost, power consumption, and size. These core capabilities serve a range of applications in segments including automotive, smart cities, defense and aerospace, robotics, Industry 4.0, and more.
Our software stack seamlessly integrates with standard deep learning development environments. Developers can optimize models once and deploy flexibly across existing platforms, including our AI inference accelerator, Intel x86 architecture, Nvidia GPUs, and Arm CPUs. This enables smooth workflows and easier development of complete solutions for efficient deep learning in many applications.
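To illustrate the "optimize once, deploy flexibly" workflow described above, here is a minimal Python sketch of a backend-neutral compiled artifact dispatched to interchangeable execution targets. All names in it (`optimize_once`, `CompiledModel`, `deploy`, and the backend functions) are hypothetical illustrations of the pattern, not EdgeCortix's actual MERA API.

```python
# Sketch of an "optimize once, deploy anywhere" inference workflow.
# All class and function names are hypothetical; NOT the real MERA API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class CompiledModel:
    """Backend-neutral artifact produced by a one-time optimization pass."""
    name: str
    weights: List[float]  # stand-in for an optimized parameter blob


def optimize_once(name: str, raw_weights: List[float]) -> CompiledModel:
    """Pretend optimization pass: quantize/fold weights a single time."""
    return CompiledModel(name=name, weights=[round(w, 2) for w in raw_weights])


# Each deployment target supplies its own runner for the same artifact.
Backend = Callable[[CompiledModel, List[float]], List[float]]


def cpu_backend(model: CompiledModel, inputs: List[float]) -> List[float]:
    # Naive elementwise multiply as a stand-in for real inference.
    return [w * x for w, x in zip(model.weights, inputs)]


def accelerator_backend(model: CompiledModel, inputs: List[float]) -> List[float]:
    # Same math; a real accelerator backend would differ only in how it executes.
    return [w * x for w, x in zip(model.weights, inputs)]


BACKENDS: Dict[str, Backend] = {
    "cpu": cpu_backend,
    "accelerator": accelerator_backend,
}


def deploy(model: CompiledModel, target: str, inputs: List[float]) -> List[float]:
    """Run the same compiled artifact on any registered target."""
    return BACKENDS[target](model, inputs)


model = optimize_once("demo", [0.123, 0.456])
print(deploy(model, "cpu", [1.0, 2.0]))          # same artifact...
print(deploy(model, "accelerator", [1.0, 2.0]))  # ...different targets
```

The key design point is that the optimization pass runs once and produces a target-independent artifact; adding a new deployment target means registering one more backend, not re-optimizing the model.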
It’s time for better edge AI hardware, IP, and software technology
"We recognized immediately the value of adding the MERA compiler and associated tool set to the RZ/V MPU series, as we expect many of our customers to implement application software including AI technology. As we drive innovation to meet our customers' needs, we are collaborating with EdgeCortix to rapidly provide our customers with robust, high-performance, and flexible AI-inference solutions. The EdgeCortix team has been terrific, and we are excited by the future opportunities and possibilities for this ongoing relationship."
Fast, Efficient & Affordable AI
AI Hardware Is All About Software
Sakyasingha Dasgupta, Founder and CEO, EdgeCortix
Sept 13-15, 2022 at the Marriott, Santa Clara, CA