Artificial Intelligence & High Performance Computing

Artificial Intelligence (AI) Inference
WOLF Advanced Technology
NVIDIA has implemented Tensor Cores in its latest generations of GPUs. Tensor Cores are similar to standard graphics processing cores, but they support mixed-precision computation. Machine learning models do not always require the higher precision of dedicated graphics cores, making Tensor Cores a more efficient use of the available processing power.
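The benefit of mixed precision can be illustrated in plain Python. The sketch below is not Tensor Core code; it simply simulates FP16 rounding with the `struct` module's half-precision format, comparing a dot product accumulated entirely in FP16 against one that multiplies in FP16 but accumulates at full precision (the pattern Tensor Cores use):

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack("e", struct.pack("e", x))[0]

# Two long vectors of FP16-rounded values.
a = [to_fp16(0.1)] * 2048
b = [to_fp16(0.1)] * 2048

# Pure FP16: each product AND the running sum are rounded to half precision,
# so rounding error compounds as the accumulator grows.
acc16 = 0.0
for x, y in zip(a, b):
    acc16 = to_fp16(acc16 + to_fp16(x * y))

# Mixed precision: FP16 multiplies, full-precision accumulate.
acc_mixed = 0.0
for x, y in zip(a, b):
    acc_mixed += to_fp16(x * y)

# Reference result computed entirely at full precision.
exact = sum(x * y for x, y in zip(a, b))

print("pure FP16 error: ", abs(acc16 - exact))
print("mixed-precision error:", abs(acc_mixed - exact))
```

Running this shows the mixed-precision accumulator staying close to the reference while the pure-FP16 accumulator drifts by a large margin, which is why low-precision multiplies with higher-precision accumulation lose little model accuracy.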

AI inference generally requires far less processing power than the processor-intensive training phase. AI inference models can be found running on smaller, lower-power devices operating in the field, analysing real-world data. Both the training and inference phases benefit from parallel processing and matrix operations. Since GPUs are built on a parallel processing architecture, using GPUs for Artificial Intelligence (AI) is a natural extension of the technology.
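The reason matrix work parallelizes so well is that each output element depends only on its own row and column of the inputs. The following minimal sketch (plain Python threads standing in for the thousands of GPU cores) computes each output row of a matrix product as an independent task:

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_row(row, B):
    """Compute one output row of A @ B. It reads only its own input row,
    so every output row can be computed concurrently."""
    cols = len(B[0])
    return [sum(row[k] * B[k][j] for k in range(len(B))) for j in range(cols)]

def parallel_matmul(A, B):
    # Each row is an independent task -- the same independence a GPU
    # exploits across its cores, just at a vastly larger scale.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda r: matmul_row(r, B), A))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(parallel_matmul(A, B))  # [[19, 22], [43, 50]]
```

On a GPU the same decomposition goes much further, down to individual output elements and, with Tensor Cores, to small matrix tiles processed in a single operation.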

WOLF modules which include Tensor Cores

WOLF offers a number of NVIDIA GPU-based modules that include Tensor Cores as well as specialized accelerator circuits for deep learning inference, machine vision, audio processing, and video encoding. These modules also benefit from NVIDIA’s rich set of Artificial Intelligence (AI) tools and workflows.
Product Name             GPU                   CUDA Cores   Tensor Cores   Memory
VPX6U-RTX5000E-DUAL-CV   Dual NVIDIA RTX5000   6144         768            32 GB GDDR6
VPX3U-RTX5000E-CV        NVIDIA RTX5000        3072         384            16 GB GDDR6
VPX3U-RTX3000E-CV        NVIDIA RTX3000        2304         288            6 GB GDDR6

Contact us with your specific requirements, or see the full list of WOLF's solutions.