
Google Cloud Announces Beta Launch Of Cloud AI Platform Pipelines


Google Cloud has announced the beta launch of Cloud AI Platform Pipelines, a new enterprise-grade service that gives developers a single place to deploy their machine learning pipelines, along with tools to monitor and audit them.

In its announcement, Google said that prototyping a machine learning (ML) model in a notebook can seem fairly straightforward, but things become more complex once you need to pay attention to the other pieces required to make an ML workflow sustainable and scalable. Worse, building a repeatable and auditable process becomes harder as that complexity grows.

This is exactly where Pipelines comes into the picture: it gives developers the ability to build these repeatable processes. Google said the service has two parts:

  1. Infrastructure for deploying and running those workflows.
  2. Tools for building and debugging the pipelines.

The service automates otherwise manual steps such as setting up Google Kubernetes Engine clusters and storage and configuring Kubeflow Pipelines. It uses TensorFlow Extended (TFX) to build TensorFlow-based workflows and the Argo workflow engine to run the pipelines.

Alongside the infrastructure, users get visual tools for building pipelines, artifact tracking, versioning, and more. Google promised that getting started takes just a few clicks. Google Cloud also adds some flexibility, since users can author pipelines with either the Kubeflow Pipelines SDK or the TensorFlow Extended SDK.
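
To give a sense of what authoring with the Kubeflow Pipelines SDK looks like, here is a minimal sketch of a two-step pipeline using the KFP v1-style Python SDK. The container images, step names, and arguments are hypothetical placeholders, not part of Google's announcement.

```python
# Minimal sketch of a Kubeflow Pipelines (kfp) pipeline definition.
# Images and arguments below are placeholders for illustration only.
import kfp
from kfp import dsl


def preprocess_op():
    # Hypothetical preprocessing step packaged as a container.
    return dsl.ContainerOp(
        name='preprocess',
        image='gcr.io/my-project/preprocess:latest',  # placeholder image
        arguments=['--output', '/tmp/dataset'],
    )


def train_op():
    # Hypothetical training step that consumes the preprocessed data.
    return dsl.ContainerOp(
        name='train',
        image='gcr.io/my-project/train:latest',  # placeholder image
        arguments=['--data', '/tmp/dataset'],
    )


@dsl.pipeline(
    name='example-training-pipeline',
    description='Toy two-step pipeline: preprocess, then train.'
)
def training_pipeline():
    preprocess = preprocess_op()
    train = train_op()
    train.after(preprocess)  # declare the dependency between the two steps


if __name__ == '__main__':
    # Compile to a package that can be uploaded through the
    # AI Platform Pipelines UI or the kfp client.
    kfp.compiler.Compiler().compile(training_pipeline, 'training_pipeline.tar.gz')
```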

Google product manager Anusha Ramesh and staff developer advocate Amy Unruh wrote in a blog post that a machine learning workflow can involve many steps with dependencies on each other, from data preparation and analysis to training, evaluation, deployment, and more. It is hard to compose and track these processes in an ad-hoc manner, for example in a set of notebooks or scripts, and things like auditing and reproducibility become increasingly problematic.

In addition, AI Platform Pipelines supports pipeline versioning, which lets developers upload multiple versions of the same pipeline and group them in the UI, as well as automatic artifact and lineage tracking. Other forthcoming features include workload identity for transparent access to Google Cloud services; simpler cluster upgrades; more templates for authoring workflows; and UI-based setup of off-cluster storage of backend data, including metadata, server data, job history, and metrics.
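
As a rough illustration of how versioning can be driven from code, the sketch below uses the kfp client to upload a pipeline and then a second package as a new version of it. The endpoint URL, pipeline name, and file paths are placeholders; the UI-based grouping described above happens on the Pipelines dashboard.

```python
# Sketch of uploading a pipeline and a follow-up version with the kfp client.
# The host, names, and file paths are placeholders for illustration only.
import kfp

# Connect to the Pipelines endpoint exposed by the AI Platform Pipelines cluster.
client = kfp.Client(host='https://<your-pipelines-endpoint>')  # placeholder host

# The first upload creates the pipeline entry.
pipeline = client.upload_pipeline(
    pipeline_package_path='training_pipeline.tar.gz',
    pipeline_name='example-training-pipeline',
)

# Later uploads can be registered as versions grouped under that same pipeline.
client.upload_pipeline_version(
    pipeline_package_path='training_pipeline_v2.tar.gz',
    pipeline_version_name='v2',
    pipeline_id=pipeline.id,
)
```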
