This is the first experimental back end for our Glow compiler and runtime project, designed to target Habana's existing hardware accelerator. It is also the first in a series of back ends customized for various vendors' accelerators.
Learn more about how hardware makers are working with our compiler and about future plans for Glow.
WHY IT MATTERS:
Glow’s open source framework allows partners to design and optimize new silicon products for machine learning (ML) more rapidly. This back end strengthens a growing ecosystem of products for accelerating neural network ML workloads at production scale. As ML applications become more common, hardware accelerators targeted by Glow can help companies reduce power consumption and latency.
USE IT FOR:
Targeting Habana’s inference accelerator cards today, with readiness for future accelerator products to come. Use Glow as a common layer for improved ML performance on top of any supported hardware accelerator. Applications include computer vision, recommendation or personalization, and natural language processing. Because this back end is still experimental, we encourage feedback and contributions from the open source community.
GET IT ON GITHUB:
Habana back end for Glow
We’d like to thank the Habana engineering team for working closely with us on this release.