Description
With PyTorch/XLA, a Python library that is now generally available, Google's Cloud TPUs can better support Facebook's PyTorch machine learning framework.
Summary
- In 2018, Google introduced accelerated linear algebra (XLA), an optimizing compiler that speeds up machine learning models’ operations by combining what used to be multiple kernels into one.
- Google announced the third generation of its Cloud TPUs at its annual I/O developer conference in 2018 and in July took the wraps off its successor, which is in the research stage.
- With the help of the reports PyTorch/XLA generates, PyTorch developers can find bottlenecks and adapt programs to run on Cloud TPUs (see the sketch after this summary).
- Alongside PyTorch/XLA, Google and Facebook today debuted tools to facilitate continuous AI model testing, which they say they have used to help the PyTorch Lightning and Hugging Face teams work with Cloud TPUs.
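
For illustration, here is a minimal sketch of the workflow the summary describes: moving a PyTorch model onto an XLA device, running one training step, and printing the metrics report used to hunt for bottlenecks. It assumes the `torch_xla` package is installed on a Cloud TPU host; the model, tensor shapes, and learning rate are placeholder choices, not taken from the article.

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

# Acquire the XLA device (a Cloud TPU core when running on a TPU host).
device = xm.xla_device()

# A toy model and one training step; PyTorch/XLA records these operations
# lazily so XLA can compile them into fused kernels.
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

data = torch.randn(64, 128, device=device)
target = torch.randint(0, 10, (64,), device=device)

loss = torch.nn.functional.cross_entropy(model(data), target)
loss.backward()
optimizer.step()
xm.mark_step()  # Cut the lazy graph here so XLA compiles and executes it.

# The metrics report lists compilations, executions, and aten:: counters:
# the kind of information used to spot operations that fall back to the CPU.
print(met.metrics_report())
```

Frequent recompilations or many `aten::` counters in such a report are typical signs of CPU fallbacks or dynamic shapes forcing graph churn, which is what the bottleneck hunting mentioned above looks for.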