Uber AI

Michelangelo PyML: Introducing Uber’s Platform for Rapid Python ML Model Development

23 October 2018 / Global
Figure 1. Flexibility vs. resource efficiency tradeoffs in Michelangelo. PyML fills the gap of highly flexible model development at the cost of some resource efficiency.
Figure 2. Michelangelo’s UI lets users browse their managed PyML models, configure access, and deploy with a single click.
Figure 3. Overview of the PyML architecture. First, a model's artifacts and dependencies are uploaded to and versioned by Michelangelo's (MA) backend. Afterwards, a Docker image is built that can be deployed as a Docker container for online predictions or used to run large-scale offline predictions.
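The upload-and-version step can be illustrated with a minimal sketch: deriving a deterministic version identifier from a model's artifact files, so identical artifacts always resolve to the same version. This is purely illustrative content-addressed versioning; the function name and scheme are assumptions, not Michelangelo's actual backend API.

```python
import hashlib
from pathlib import Path

def version_artifacts(artifact_dir: str) -> str:
    """Derive a deterministic version id from a model's artifact files.

    Illustrative only: hashes each file's relative path and contents so
    that identical artifact sets always map to the same version string.
    """
    digest = hashlib.sha256()
    for path in sorted(Path(artifact_dir).rglob("*")):
        if path.is_file():
            digest.update(str(path.relative_to(artifact_dir)).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()[:12]
```

Re-uploading unchanged artifacts would then be a no-op, while any change to a file's contents or layout yields a new version.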
Figure 4. Michelangelo’s Online Prediction Service serves prediction requests against deployed Apache Spark-based models directly from memory to keep the request latency to a minimum.
Figure 5. The built PyML model Docker images are launched as nested Docker containers by the Online Prediction Service application. Prediction requests are then forwarded to the gRPC server running inside the Docker container via a Unix domain socket.
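The actual transport is gRPC, but the forwarding pattern can be sketched with Python's standard library alone: a server listening on a Unix domain socket (standing in for the gRPC server inside the nested container) and a client that forwards a request to it (standing in for the Online Prediction Service). The socket path, function names, and echo-style payload below are illustrative assumptions, not PyML's real protocol, and `AF_UNIX` requires a POSIX system.

```python
import socket
import threading

def create_server(sock_path: str) -> socket.socket:
    """Bind and listen on a Unix domain socket, standing in for the
    gRPC server inside the nested PyML Docker container."""
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    return server

def serve_once(server: socket.socket) -> None:
    """Handle a single request: echo it back with a prediction prefix."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(b"predicted:" + conn.recv(1024))

def forward_request(sock_path: str, payload: bytes) -> bytes:
    """Forward a request over the socket, as the Online Prediction
    Service forwards prediction calls into the container."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(sock_path)
        client.sendall(payload)
        return client.recv(1024)
```

Because both endpoints live on the same host, a Unix domain socket avoids the TCP stack entirely, which is one reason this style of local forwarding keeps per-request overhead low.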
Stepan Bedratiuk

Stepan Bedratiuk is a senior software engineer on Uber's Machine Learning Platform team.

Olcay Cirit

Olcay Cirit is a Staff Research Scientist at Uber AI focused on ML systems and large-scale deep learning problems. Prior to Uber AI, he worked on ad targeting at Google.

Posted by Kevin Stumpf, Stepan Bedratiuk, Olcay Cirit
