As enterprises across sectors race to bring their AI vision to life, vendors are moving to give them all the resources they need in one place. Case in point: a new strategic collaboration between Google and Hugging Face that gives developers a streamlined way to tap Google Cloud services and accelerate the development of open generative AI apps.
Under the partnership, teams using open-source models from Hugging Face will be able to train and serve them on Google Cloud. That means access to the full breadth of Google Cloud's AI stack, from the purpose-built Vertex AI platform to tensor processing units (TPUs) and graphics processing units (GPUs).
“From the original Transformers paper to T5 and the Vision Transformer, Google has been at the forefront of AI progress and the open science movement. With this new partnership, we will make it easy for Hugging Face users and Google Cloud customers to leverage the latest open models together with leading optimized AI infrastructure and tools…to meaningfully advance developers’ ability to build their own AI models,” Clement Delangue, CEO at Hugging Face, said in a statement.
What can Hugging Face users expect?
In recent years, Hugging Face has become the GitHub for AI, serving as the go-to repository for more than 500,000 AI models and 250,000 datasets. More than 50,000 organizations rely on the platform for their AI efforts. Meanwhile, Google Cloud has been racing to serve enterprises with its AI-centric infrastructure and tools while also contributing to open AI research.
With this partnership between the two companies, the hundreds of thousands of Hugging Face users active on Google Cloud every month will be able to train, tune and serve their models with Vertex AI, Google's end-to-end MLOps platform for building generative AI applications.
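For a sense of what that workflow could look like, here is a minimal sketch using the google-cloud-aiplatform Python SDK. The project ID, serving-container URI, machine types and the HF_MODEL_ID/HF_TASK environment variables are illustrative placeholders, not details confirmed by either company:

```python
from google.cloud import aiplatform

# Placeholder project and region; swap in real values.
aiplatform.init(project="my-gcp-project", location="us-central1")

# Upload a Hub model behind a serving container. The container URI and the
# HF_MODEL_ID / HF_TASK variables follow the Hugging Face inference-toolkit
# convention but are assumptions here, not confirmed product details.
model = aiplatform.Model.upload(
    display_name="distilbert-sst2",
    serving_container_image_uri="<huggingface-serving-container-uri>",
    serving_container_environment_variables={
        "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
        "HF_TASK": "text-classification",
    },
)

# Deploy to a managed endpoint; machine and accelerator types are illustrative.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
)

print(endpoint.predict(instances=[{"inputs": "This partnership looks promising."}]))
```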
The experience will be available with a few clicks from the main Hugging Face platform and will also include the option to train and deploy models within Google Kubernetes Engine (GKE). This gives developers a "do it yourself" path: serving workloads on infrastructure they manage themselves and scaling models using Hugging Face-specific deep learning containers on GKE.
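As a hedged illustration of that GKE path, the sketch below uses the kubernetes Python client to deploy Hugging Face's text-generation-inference container onto an existing GPU-enabled cluster. The image tag, model ID and GPU request are example values, not part of the announcement:

```python
from kubernetes import client, config

# Assumes `gcloud container clusters get-credentials ...` has already pointed
# kubectl at a GPU-enabled GKE cluster.
config.load_kube_config()

container = client.V1Container(
    name="tgi",
    image="ghcr.io/huggingface/text-generation-inference:1.3",  # example tag
    args=["--model-id", "mistralai/Mistral-7B-Instruct-v0.2"],  # example model
    ports=[client.V1ContainerPort(container_port=80)],
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="tgi-server"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "tgi"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "tgi"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Create the deployment in the default namespace.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```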
As part of this, developers training models will also be able to tap hardware capabilities offered on Google Cloud, including TPU v5e, A3 VMs powered by Nvidia H100 Tensor Core GPUs, and C3 VMs powered by Intel Sapphire Rapids CPUs.
“Models will be easily deployed for production on Google Cloud with inference endpoints. AI builders will be able to accelerate their applications with TPU on Hugging Face spaces. Organizations will be able to leverage their Google Cloud account to easily manage the usage and billing of their Enterprise Hub subscription,” Jeff Boudier, who leads product and growth at Hugging Face, and Philipp Schmid, the technical lead at the company, wrote in a joint blog post.
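Hugging Face's existing Inference Endpoints API hints at what such a deployment call might look like once Google Cloud is a supported vendor. In this sketch, the vendor="gcp" value along with the region, instance and model names are assumptions; the huggingface_hub library does expose create_inference_endpoint today for the currently supported clouds:

```python
from huggingface_hub import create_inference_endpoint

# Hypothetical: "gcp" is not yet a live vendor; all values are placeholders.
endpoint = create_inference_endpoint(
    "my-gcp-endpoint",
    repository="gpt2",              # any Hub model repo
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="gcp",                   # assumption, pending the integration described above
    region="us-central1",
    instance_size="medium",         # placeholder size/type names
    instance_type="g2-standard-4",
)

endpoint.wait()  # block until the endpoint is running
print(endpoint.url)
```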
Not available just yet
While the collaboration has been announced, the new experiences, including the Vertex AI and GKE deployment options, are not yet available.
The companies aim to make the capabilities available to Hugging Face Hub users in the first half of 2024.