Artificial Intelligence & High Performance Computing

Artificial Intelligence (AI) Inference
WOLF Advanced Technology
NVIDIA has implemented Tensor Cores in its latest generations of GPUs. Tensor Cores are similar to standard graphics processing cores, but they support mixed-precision computation. Machine learning models do not always require the higher precision of dedicated graphics cores, so Tensor Cores make more effective use of the available processing power.
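The mixed-precision idea can be sketched in a few lines of NumPy: store inputs in a low-precision format (float16) to save memory and bandwidth, but accumulate the matrix product at higher precision (float32) to limit rounding error. This is an illustrative software analogy, not the actual Tensor Core hardware path; the matrix sizes and random data here are arbitrary.

```python
import numpy as np

# Mixed precision in miniature: keep weights and activations in float16
# (half the storage and bandwidth of float32), but accumulate the matrix
# product in float32 -- the same pattern Tensor Cores implement in hardware.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float16)
b = rng.standard_normal((256, 256)).astype(np.float16)

# float16 inputs, float32 accumulation, float16 result
mixed = (a.astype(np.float32) @ b.astype(np.float32)).astype(np.float16)

# full float64 reference to measure the precision cost
reference = a.astype(np.float64) @ b.astype(np.float64)
max_err = np.max(np.abs(mixed.astype(np.float64) - reference))
print(f"max absolute error vs float64: {max_err:.4f}")
```

For many inference workloads this small loss of precision has negligible effect on model accuracy, which is why lower-precision math is such an effective trade-off.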

AI inference generally requires far less processing power than the training phase, which is processor intensive. AI inference models can be found running on smaller, lower-power devices operating in the field, analysing real-world data. Both the training and inference phases benefit from parallel processing and matrix operations. Since GPUs are already built on parallel processing technology, using GPUs for Artificial Intelligence (AI) is a natural extension of the technology.
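Why matrix work parallelises so well can be shown with a minimal sketch: each row of a matrix product depends only on the corresponding row of the left operand, so rows are independent units of work. The thread pool below is just a CPU stand-in for the thousands of cores a GPU applies to the same pattern; the sizes and helper function are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Each row of C = A @ B depends only on one row of A, so rows can be
# computed independently -- the data parallelism GPUs exploit at scale.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))

def row_product(i):
    return A[i] @ B  # one independent unit of work

with ThreadPoolExecutor() as pool:
    C = np.stack(list(pool.map(row_product, range(A.shape[0]))))

# the parallel result matches the ordinary matrix product
assert np.allclose(C, A @ B)
```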



WOLF modules which include Tensor Cores

WOLF offers a number of NVIDIA GPU-based modules which include Tensor Cores as well as specialized accelerator circuits for deep learning inference, machine vision, audio processing, and video encoding. These modules can also benefit from NVIDIA's rich set of Artificial Intelligence (AI) tools and workflows.
| Product Name | WOLF No | GPU | CUDA Cores | Tensor Cores | Memory |
| --- | --- | --- | --- | --- | --- |
| VPX6U-RTX5000E-DUAL-VO | 2348 | Dual NVIDIA Quadro Turing | 6144 | 768 | 16 GB GDDR6 |
| VPX3U-BW5000E-CX7 | 163L | NVIDIA Blackwell RTX 5000 | 10496 | 320 | 24 GB GDDR7 |
| VPX3U-BW5000E-VO-HPC | 1636 | NVIDIA Blackwell RTX 5000 with HPC and video outputs | 10496 | 320 | 24 GB GDDR7 |
| VPX3U-ORIN-CX7-FGX2-SBC | 14T0 | NVIDIA Jetson AGX Orin, ConnectX-7, up to 100GbE, PCIe Gen4, Video I/O, SBC | 2048 | 64 | 64 GB LPDDR5 |

Contact us with your specific requirements, or see the full list of WOLF's solutions here.