
Google’s AI chips are now open for public use


Google’s Cloud Tensor Processing Units are now available in public beta for anyone to try, providing customers of the tech titan’s cloud platform with specialized hardware that massively accelerates the training and execution of AI models.

The Cloud TPUs, which Google first announced last year, provide customers with circuits built solely to accelerate AI computation. In Google’s own tests, 64 of them trained ResNet-50 (a neural network for identifying images that also serves as a benchmarking tool for AI training speed) in only 30 minutes.
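For a sense of what the programming model looks like, here is a minimal sketch of training on a Cloud TPU with the TPUEstimator API from a contemporary TensorFlow 1.x release. The TPU name, zone, project, storage bucket, and toy model below are hypothetical placeholders for illustration, not Google's reference code.

import tensorflow as tf

def model_fn(features, labels, mode, params):
    # Toy linear classifier purely for illustration; Google's reference
    # models (e.g. ResNet-50) follow this same Estimator structure.
    logits = tf.layers.dense(features, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    # CrossShardOptimizer aggregates gradients across the TPU's cores.
    optimizer = tf.contrib.tpu.CrossShardOptimizer(
        tf.train.GradientDescentOptimizer(learning_rate=0.01))
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.contrib.tpu.TPUEstimatorSpec(mode=mode, loss=loss, train_op=train_op)

def input_fn(params):
    # Random data standing in for a real image pipeline; TPUEstimator
    # injects the per-shard batch size via params.
    features = tf.random_uniform([params['batch_size'], 784])
    labels = tf.random_uniform([params['batch_size']], maxval=10, dtype=tf.int32)
    return tf.data.Dataset.from_tensors((features, labels)).repeat()

resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
    tpu='my-tpu', zone='us-central1-b', project='my-project')  # hypothetical

config = tf.contrib.tpu.RunConfig(
    cluster=resolver,
    model_dir='gs://my-bucket/model',  # hypothetical GCS path
    tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=100))

estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=model_fn, config=config, use_tpu=True, train_batch_size=1024)
estimator.train(input_fn=input_fn, max_steps=1000)

The TPU-specific pieces are the cluster resolver, the CrossShardOptimizer, and a large fixed batch size; the rest is the standard Estimator workflow.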

This new hardware could help attract customers to Google’s cloud platform with the promise of faster machine learning training and inference. Faster training is a significant help in its own right, since it shortens the experiment loop: data scientists get results sooner and can fold what they learn into the next model iteration.

Google is using its advanced AI capabilities to attract new blood to its cloud platform and away from market leaders Amazon Web Services and Microsoft Azure. Businesses are increasingly looking to diversify their use of public cloud platforms, and Google’s new AI hardware could help the company capitalize on that trend.

Companies had already lined up to test the Cloud TPUs while they were in private alpha, including Lyft, which is using the hardware to train the AI models powering its self-driving cars.

It’s been a long road for the company to get here. Google announced the original Tensor Processing Units, which handled inference only, in 2016, promising that customers would eventually be able to run custom models on them in addition to the speed boost the chips already gave the company’s cloud machine learning APIs. But enterprises were never able to run their own custom workloads on a first-generation TPU.

Google isn’t the only one pushing AI acceleration through specialized hardware. Microsoft uses a fleet of field-programmable gate arrays (FPGAs) to speed up its in-house machine learning operations and to provide customers of its Azure cloud platform with accelerated networking. Microsoft is also working on letting customers run their own machine learning models on those FPGAs, just as it does with its proprietary code.

Amazon, meanwhile, is providing its customers with compute instances that have their own dedicated FPGA. The company is also working on developing a specialized AI chip that will accelerate its Alexa devices’ machine learning computation, according to a report released by The Information today.

Actually getting AI acceleration from TPUs won’t be cheap. Google is currently charging $6.50 per TPU per hour, though that pricing may shift once the hardware is generally available. Right now, Google is still throttling the Cloud TPU quotas that are available to its customers, but anyone can request access to the new chips.
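For scale, a little back-of-the-envelope arithmetic using only the figures already quoted in this article puts the headline ResNet-50 run at around $208:

# 64 Cloud TPUs for half an hour at $6.50 per TPU per hour,
# using only the numbers quoted in this article.
tpus, hours, rate = 64, 0.5, 6.50
print("Estimated cost: $%.2f" % (tpus * hours * rate))  # -> $208.00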

Once customers get access to the Cloud TPUs, they can start kicking the tires with several optimized reference models Google has made available, using the hardware to accelerate their AI computation.



Check Also

Existing EV batteries could be recharged five times faster

Lithium-ion batteries have massively improved in the last half-decade, but there are still issues. The biggest, especially for EVs, is that charging takes too long to make them as useful as regular cars for highway driving. Researchers from the University of Warwick (WMG) have discovered that we may not need to be so patient, though. They developed a new type of sensor that measures internal battery temperatures and discovered that we can probably recharge them up to five times quicker without overheating problems.

Overcharging a lithium-ion battery anode can lead to lithium buildup, which can break through a battery's separator, create a short-circuit and cause catastrophic failure. That can cause the electrolyte to emit gases and literally blow up the battery, so manufacturers impose strict charging power limits to prevent it.

Those limits are based on hard-to-measure internal temperatures, however, which is where the WMG probe comes in. It's a fiber-optic sensor protected by a chemical layer, and it can be inserted directly into a lithium-ion cell to take highly precise thermal measurements without affecting the cell's performance.

The team tested the sensor on standard 18650 li-ion cells, the type used in Tesla's Model S and X, among other EVs, and found that the cells can be charged five times faster than previously thought without damage. Charging at those speeds would still reduce battery life, but if used judiciously, the impact would be minimized, said lead researcher Dr. Tazdin Amietszajew:

"Faster charging as always comes at the expense of overall battery life but many consumers would welcome the ability to charge a vehicle battery quickly when short journey times are required and then to switch to standard charge periods at other times."

There's still some work to do. While the research showed the li-ion cells can support higher temperatures, EVs and charging systems would have to have "precisely tuned profiles/limits" to prevent problems. It's also not clear how battery makers would install the sensors in the cells.
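To make the control idea concrete, here is a minimal sketch of a thermally gated charge controller in the spirit the researchers describe. Every threshold and current value below is a hypothetical placeholder, not a figure from WMG's work.

# Illustrative sketch only: all thresholds and currents below are
# hypothetical, not WMG's published figures.
FAST_RATE_A = 15.0      # hypothetical "fast" charge current (amps)
STANDARD_RATE_A = 3.0   # hypothetical conservative charge current
FAST_BELOW_C = 40.0     # hypothetical temp below which fast charge is safe
TEMP_LIMIT_C = 50.0     # hypothetical internal temperature ceiling

def choose_charge_current(internal_temp_c: float) -> float:
    """Pick a charge current from the probe's internal-temperature
    reading: full speed while cool, linear back-off near the limit,
    and a hard stop at the ceiling."""
    if internal_temp_c >= TEMP_LIMIT_C:
        return 0.0  # pause charging at the ceiling
    if internal_temp_c <= FAST_BELOW_C:
        return FAST_RATE_A
    frac = (TEMP_LIMIT_C - internal_temp_c) / (TEMP_LIMIT_C - FAST_BELOW_C)
    return STANDARD_RATE_A + (FAST_RATE_A - STANDARD_RATE_A) * frac

# A cool cell charges fast; a warming cell is throttled back.
for temp in (25.0, 45.0, 49.0, 51.0):
    print("%4.1f C -> %4.1f A" % (temp, choose_charge_current(temp)))

The design point is simply that a trustworthy internal-temperature reading lets the charger run near the cell's real limit rather than a conservative guess, which is where the claimed five-fold speedup would come from.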

Nevertheless, it shows a lot of promise for much faster charging speeds in the near future. Even if battery capacities stayed the same, charging in 5 minutes instead of 25 could flip a lot of drivers over to the green side.

Via: CleanTechnica

Source: University of Warwick

