Google’s AI-Focused Tensor Processing Units Now Available in Beta


Google has been working on its Tensor Processing Units, or TPUs, for several years now, and has released several papers comparing the performance of its custom architecture in inference workloads with more traditional designs built around CPUs or GPUs. Now the company is opening these parts up for public beta testing, aimed at researchers who want to train machine learning models and run them more quickly.

Google has talked about making this capability public since it demonstrated its first-generation TPUs back in 2016. Those chips, however, were only good for inference workloads. The simple way to understand the difference between training a machine learning system and running an inference workload is that the former is when you create your model and train it on the tasks you want it to perform, while the latter is the actual process of applying what the machine has “learned.” Google never made its first-generation TPU available to corporations for general workloads, but these new chips are capable of handling both model training and inference, and offer a higher level of performance besides.
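
To make that distinction concrete, here is a minimal sketch (our illustration, not code from Google's announcement) using Keras: the fit() call is the training step that first-generation TPUs could not accelerate, while predict() is the inference step they could.

import numpy as np
import tensorflow as tf

# Toy data standing in for a real dataset.
x = np.random.rand(1000, 32).astype("float32")
y = (x.sum(axis=1) > 16.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Training: the model's weights are adjusted to fit the task.
model.fit(x, y, epochs=3, batch_size=64)

# Inference: the trained model is applied to new inputs.
predictions = model.predict(x[:5])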

We don’t know how these new Cloud TPUs perform, but a slideshow comparing Google’s earlier TPU against equivalent parts from Intel and Nvidia in inference workloads is shown below:

[Slides: Intel Haswell CPU vs. TPU, Nvidia K80 GPU vs. TPU, TPU results, and combined results]

Each Cloud TPU consists of four separate ASICs, with a total of 180 TFLOPS of performance per board. Google even has plans to scale up these offerings further, with dedicated networking and scale-out systems it’s calling “TPU Pods.” [Please don’t eat these either. -Ed] Google claims that even at this early stage, a researcher following one of its tutorials can use the public TPU service to “train ResNet-50 to the expected accuracy on the ImageNet benchmark challenge in less than a day, all for well under $200.”
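
For a sense of what following such a tutorial looks like in practice, here is a hedged sketch of how a TensorFlow 2.x job can be pointed at a Cloud TPU. The TPU name "my-tpu" is a placeholder, the API shown postdates the original 2018 beta, and Google's actual ResNet-50 tutorial differs in its details.

import tensorflow as tf

# Resolve and initialize the Cloud TPU ("my-tpu" is a placeholder name).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Replicate model building and training across the TPU's cores.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.applications.ResNet50(weights=None, classes=1000)
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(imagenet_dataset, epochs=90)  # input pipeline omitted for brevity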

Expect to see a lot of mud being slung at the wall over the next few years, as practically everyone piles into this market. AMD has Radeon Instinct, and Intel still has its own Xeon Phi accelerators (even if it canceled the upcoming Knights Hill): Knights Mill, launched in December, offers additional execution resources and better AVX-512 utilization. Whether that will close the gap with Nvidia’s Tesla product family remains to be seen, but Google isn’t the only company deploying custom silicon to address this space. Fujitsu has its own line of accelerators in the works, and Amazon and Microsoft have previously deployed FPGAs in their own data centers and clouds.

Google’s new cloud offerings are billed by the second, with an average cost of $6.50 per Cloud TPU per hour. If you’re curious about signing up for the program, you can do so here. Cloud computing may have begun life as little more than a rebranding effort to capture previously available products under a catchy new term, but the entire semiconductor industry is now galloping towards these new computing paradigms as quickly as it can. From self-driving cars to digital assistants, “cloud computing” is being reinvented as something more significant than “everything I normally do, but with additional latency.” Ten years from now, it may be hard to remember why enterprises relied on anything else.
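
As a rough sanity check on those figures (our back-of-the-envelope math, not Google's), a full 24-hour run at the quoted rate comes out to roughly $156, which squares with the “well under $200” ResNet-50 claim above.

# Back-of-the-envelope cost estimate using the figures quoted in this article.
hourly_rate = 6.50                      # USD per Cloud TPU per hour
per_second_rate = hourly_rate / 3600.0  # billing is per-second
training_seconds = 24 * 3600            # "less than a day"
print(f"Estimated cost: ${per_second_rate * training_seconds:.2f}")  # ~$156.00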
