UXC2200 Inference Operation Acceleration Module
Designed specifically for AI cloud-computing scenarios, the module integrates a high-performance heterogeneous processor and large-capacity memory, supports a wide range of AI algorithms and virtualized containers under mainstream deep learning frameworks, and can be elastically stacked to build AI computing clusters.
- 2.4GHz 4-core ARMv8 64bit CPU
- 4 heterogeneous operation acceleration clusters, 12.8TOPS@INT8 based on the ManyCore computing architecture
- LPDDR4X-4266 18GB + 64/128GB eMMC Flash
- 64-channel 1080P@25FPS video decoding / 16-channel 4K@25FPS video decoding
- PCIe 3.0 and GbE high-speed onboard connectors
- 25 W peak power consumption; operating temperature range 0 °C to 55 °C
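The two decode modes above describe the same aggregate throughput budget: 4K UHD carries exactly four times the pixels of 1080P, so 16 channels of 4K equal 64 channels of 1080P at the same frame rate. A quick sketch verifying this (assuming "4K" means 3840x2160 UHD and "1080P" means 1920x1080; the function name is illustrative, not part of any SDK):

```python
def pixel_rate(width, height, fps, channels):
    """Total decoded pixels per second across all channels."""
    return width * height * fps * channels

# 64-channel 1080P@25FPS decode mode
rate_1080p = pixel_rate(1920, 1080, 25, 64)
# 16-channel 4K@25FPS decode mode
rate_4k = pixel_rate(3840, 2160, 25, 16)

# Both modes land on the same aggregate pixel rate,
# consistent with one shared decoder budget.
print(rate_1080p, rate_4k, rate_1080p == rate_4k)
```

This suggests the decoder can likely be partitioned flexibly between the two channel configurations, since the underlying pixel throughput is identical.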