
Google Cloud Platform users can now add GPUs to speed up machine learning tasks


Machine learning tasks, like image processing, are better suited to running on GPUs than on traditional CPUs. Google is now taking advantage of that by letting Cloud Platform users attach GPUs to existing virtual machines and workloads.

The Nvidia Tesla K80 is the first GPU that customers can add to Compute Engine virtual machines to accelerate computing tasks. Each attached K80 GPU provides 2,496 CUDA cores and 12 GB of GDDR5 memory. Down the road, users will also be able to add AMD FirePro and Nvidia Tesla P100 GPUs.

GPUs are particularly suited for accelerating tasks like video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high performance data analysis, computational chemistry, fluid dynamics, and visualization.

The new Cloud GPUs are also tightly integrated with Google’s Cloud Machine Learning service. TensorFlow and other popular machine learning and deep learning frameworks, like Theano, Torch, MXNet, and Caffe, are supported in VM instances, along with NVIDIA’s CUDA toolkit.

At the moment, up to eight K80 GPUs can be added to any custom virtual machine in the us-east1, asia-east1, and europe-west1 GCP regions.
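In practice, attaching GPUs happens at instance-creation time via the gcloud command line. A minimal sketch of what that might look like (the instance name, machine type, zone, and image are illustrative placeholders, not from Google's announcement):

```shell
# Hypothetical example: create a Compute Engine VM in us-east1 with
# eight K80 GPUs attached. All names below are placeholders.
gcloud beta compute instances create gpu-instance-1 \
    --machine-type n1-standard-8 \
    --zone us-east1-d \
    --accelerator type=nvidia-tesla-k80,count=8 \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --maintenance-policy TERMINATE \
    --restart-on-failure
```

Note that GPU instances cannot live-migrate during host maintenance, which is why the maintenance policy is set to terminate (and restart) the VM instead.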

Google touts competitive pricing: each K80 GPU attached to a VM costs $0.700 per hour in US regions, and $0.770 per hour per GPU in Asia and Europe.
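Since GPUs are billed per GPU per hour on top of the VM's own pricing, estimating the GPU portion of a bill is simple multiplication. A quick sketch (the helper function is ours, not a Google API):

```python
# Rough GPU-only cost estimate for Cloud GPUs (excludes the VM's
# own vCPU/memory charges). Rates from Google's announced pricing.
US_RATE = 0.700       # $/hour per K80 GPU in US regions
EU_ASIA_RATE = 0.770  # $/hour per K80 GPU in Europe and Asia

def gpu_cost(gpus: int, hours: float, rate: float) -> float:
    """Total GPU charge for `gpus` K80s attached for `hours` hours."""
    return gpus * hours * rate

# Example: a 24-hour training run on the maximum of 8 K80s in a US region
print(round(gpu_cost(8, 24, US_RATE), 2))  # → 134.4
```

So a full day on a maxed-out eight-GPU instance runs about $134 in GPU charges alone.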



You’re reading 9to5Google — experts who break news about Google and its surrounding ecosystem, day after day.



Abner Li

Editor-in-chief. Interested in the minutiae of Google and Alphabet. Tips/talk: