Inference Engine

FWDNXT Inference Engine delivers the highest utilization of any machine-learning and deep neural network processor.

Direct deployment
from your framework
to your application

Our software takes trained neural network files from PyTorch, Caffe, or TensorFlow and compiles them directly for our accelerator, with no programming required.


From IoT to mobile, automotive, servers and all the way to data centers


FWDNXT: complete deep learning solutions

FWDNXT Inference Engine product lineup:

Inference Engine

is scalable from IoT and edge devices all the way to high-performance workstations and servers.

Optimized Compiler

The FWDNXT Inference Engine compiler can run any neural network model. See our SDK brief and our recent paper on the compiler.

Contact us!

The FWDNXT Inference Engine and its software are available on FPGA devices, as an IP core, or as an SoC. Contact us for pricing!

Core team

These are the faces behind FWDNXT magic:

Lead Machine Intelligence

Marko Vitez
Software Engineer

Lead Compiler & Founder

Lead Architect & Founder

General counsel & Financial advisor

Team leader & Founder

Milind Kulkarni
Advisor: compilers


Our mission is to propel machine intelligence to the next level.

If you want your devices to be smarter, talk to us!