Our products

NNM100 Inference Operation Acceleration Module

Designed specifically for AI cloud-computing scenarios, the module integrates a high-performance heterogeneous processor and large-capacity memory. It supports a wide range of AI algorithms and virtualization containers under mainstream deep learning frameworks, and can be elastically stacked to build AI computing clusters.

  • 2.4 GHz quad-core 64-bit ARMv8 CPU
  • 4 heterogeneous operation acceleration clusters based on the ManyCore computing architecture, delivering [email protected]
  • 18 GB LPDDR4X-4266 + 64/128 GB eMMC flash
  • 64-channel [email protected] video decoding / 16-channel [email protected] video decoding
  • PCIe 3.0 and GbE high-speed onboard connectors
  • 25 W peak power consumption; operating temperature 0–55 °C