FPGAs are a powerful technology for deploying neural networks in embedded applications, as they can execute the mathematical operations needed to classify an image in parallel and at high speed. Embedded AI applications include sensor systems, cameras, frame grabbers, sorting and industrial machines, robots and autonomous driving.
Deep learning framework
Easics' deep learning framework supports different neural networks based on existing frameworks such as TensorFlow, Caffe, Keras, Python, C++, … The architecture of the neural net is converted (if necessary) to C++ and used to build the sequencer. Via an API the sequencer can be stored in the SDRAM. The weights of the trained deep learning net are converted (floating-point quantization) into an image that is uploaded by the API to the SDRAM.
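As a rough illustration of the weight-conversion step, the sketch below quantizes floating-point weights to 8-bit fixed point and packs them into a flat byte image, as one might prepare for upload to SDRAM. The scale factors, bit width and layout here are illustrative assumptions, not the actual Easics format.

```python
import numpy as np

def quantize_weights(weights: np.ndarray, bits: int = 8):
    """Symmetric linear quantization of a float weight tensor (assumed scheme)."""
    qmax = 2 ** (bits - 1) - 1                          # e.g. 127 for 8 bits
    scale = max(np.abs(weights).max(), 1e-8) / qmax     # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def pack_image(layers):
    """Concatenate quantized layer weights into one flat byte buffer."""
    parts = [quantize_weights(w)[0].tobytes() for w in layers]
    return b"".join(parts)

# Example: two small random layers
layers = [np.random.randn(4, 4).astype(np.float32),
          np.random.randn(4, 2).astype(np.float32)]
image = pack_image(layers)
print(len(image))  # 16 + 8 = 24 bytes of int8 weights
```

In a real deployment the per-tensor scale factors would also have to be stored alongside the weight image so the FPGA datapath can interpret the fixed-point values.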
The classification result (what it is and where it is) of the deep learning algorithm is sent to the application that acts on the detection. We can supply a complete system design around the deep learning core, including camera interfaces or external interfaces. A standard solution can combine our TCP Offload Engine with the deep learning core.
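To make the "what and where" result concrete, the sketch below parses a compact binary detection message of the kind an application might receive over such a link. The message layout (class id, confidence, bounding box) is an illustrative assumption, not the actual Easics wire format.

```python
import struct

# Hypothetical result layout: class id (u16), confidence (f32),
# bounding box x, y, width, height (u16 each), little-endian, unpadded.
RESULT_FMT = "<HfHHHH"
RESULT_SIZE = struct.calcsize(RESULT_FMT)

def parse_result(payload: bytes):
    """Decode one detection message into a dictionary."""
    cls, conf, x, y, w, h = struct.unpack(RESULT_FMT, payload[:RESULT_SIZE])
    return {"class": cls, "confidence": conf, "bbox": (x, y, w, h)}

# Example message: class 3 at (120, 80), 64x64 pixels, confidence ~0.9
msg = struct.pack(RESULT_FMT, 3, 0.9, 120, 80, 64, 64)
print(parse_result(msg))
```

A fixed, unpadded binary layout like this keeps parsing trivial on both the embedded side and the host application.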
Using FPGA technologies has the following advantages:
High performance per Watt and low latency make it suitable for real-time embedded applications.
The FPGA logic can be shaped to match any network architecture.
Performance, cost and power will define the FPGA of choice.
Future-proof and scalable: the FPGA can be reconfigured for future neural networks, or only the weights need to be updated.
Easics' framework offers a flexible approach and a fast time-to-market.
The deep learning core can be easily integrated with other CPUs, vision functionality and connectivity.