Artificial Intelligence & High-Performance Computing
NVIDIA has implemented Tensor Cores in its latest generations of GPUs. Tensor Cores are similar to conventional graphics-processing
cores, but they support mixed-precision computation. Machine learning models do not always require the higher precision of
dedicated graphics cores, so Tensor Cores make more effective use of the available processing power.
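The mixed-precision idea can be sketched in a few lines of Python. This is a simplified NumPy illustration of the scheme, not actual Tensor Core code: inputs are stored in half precision (FP16) to save memory and bandwidth, while products are accumulated in single precision (FP32) to preserve accuracy.

```python
import numpy as np

# Illustrative sketch only: Tensor Cores perform this FP16-multiply /
# FP32-accumulate scheme in hardware; here NumPy emulates the idea.

def mixed_precision_matmul(a_fp16: np.ndarray, b_fp16: np.ndarray) -> np.ndarray:
    # Promote the half-precision inputs to FP32 so the
    # accumulation happens at the higher precision.
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32)

# Half-precision storage for the operands.
a = np.random.rand(64, 64).astype(np.float16)
b = np.random.rand(64, 64).astype(np.float16)

c = mixed_precision_matmul(a, b)
print(c.dtype)  # float32 result despite FP16 inputs
```

The payoff is that the bulk of the data moves and multiplies at half precision, which is typically accurate enough for machine learning workloads, while the sums are kept in FP32 to avoid accumulating rounding error.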
AI inference generally requires far less processing power than the processor-intensive training phase. Inference models are often
found running on smaller, lower-power devices operating in the field, analysing real-world data.
Both the training and inference phases benefit from parallel processing and matrix operations. Since GPUs are already built
on a parallel-processing architecture, using GPUs for Artificial Intelligence (AI) is a natural extension of the technology.
WOLF modules which include Tensor Cores
WOLF offers a number of NVIDIA GPU-based modules that include Tensor Cores as well as specialized accelerator circuits for
deep learning inference, machine vision, audio processing, and video encoding. These modules can also take advantage of NVIDIA’s
rich set of Artificial Intelligence (AI) tools and workflows.
Contact us with your specific requirements or see the full list of WOLF's solutions here.