Google on Wednesday announced that Nvidia’s Tesla T4 GPUs are now available on the Google Cloud Platform in beta. In November, GCP became the first cloud provider to offer the T4 GPUs via private alpha.
The T4 is now available in Brazil, India, the Netherlands, Singapore, Tokyo and the US. With T4s across eight regions globally, Google is promising customers low latency. Other GPUs in Google’s lineup include the Nvidia K80, P4, P100 and V100.
The T4 is the best GPU in Google’s portfolio for running inference workloads, Google notes, but it’s also well-suited for machine learning training workloads. It’s also the first data center GPU to include dedicated ray-tracing processors.
The T4’s 16GB of memory lets it serve large ML models or run inference on multiple smaller models simultaneously, Google notes. The V100 GPU remains the go-to choice for ML training workloads, but the T4 comes in at a lower price point.
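For readers who want to try the beta, a T4 can be attached to a Compute Engine VM with the `--accelerator` flag. The sketch below is illustrative only: the instance name, zone, and machine type are assumptions, and actual availability depends on your project's quota and the regions where T4s have rolled out.

```shell
# Hypothetical example: create a Compute Engine VM with one Tesla T4 attached.
# The instance name, zone, and machine type are assumptions; check your
# project's GPU quota and the zones where T4s are offered.
gcloud compute instances create t4-demo \
    --zone=us-central1-b \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE
```

GPU instances cannot live-migrate during host maintenance, which is why the maintenance policy is set to TERMINATE.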
Meanwhile, the T4 is based on Nvidia’s Turing architecture, which combines ray-tracing and AI inference for a hybrid approach to computer graphics rendering. Ray-tracing is a rendering technique that simulates the behavior of light to create realistic lighting effects. Google is also supporting virtual workstations on T4 instances, so designers can run rendering applications from anywhere.