Workshop: Bring Your Models to Production with Ray Serve

Posted Jun 01, 2022 | Views 935
# Data engineering
# Model serving
# Systems and Architecture
SPEAKERS
Shreyas Krishnaswamy
Software Engineer @ Anyscale

Shreyas Krishnaswamy is a software engineer focusing on Ray Serve and Ray infrastructure at Anyscale.
Phi Nguyen
GTM Technical Lead @ Anyscale

Phi has worked with Fortune 500 customers in retail, CPG, HCLS, and financial services, as well as with startups, to accelerate their machine learning practices. His engagements range from helping teams organize and build a center of excellence for ML, establish MLOps processes and automation, and develop and assess the feasibility of ML use cases, to providing cloud best practices that combine Ray with public clouds such as AWS and GCP or with open source projects running on Kubernetes.
SUMMARY

In this workshop, we will walk through a step-by-step guide to deploying an ML application with Ray Serve. Compared to building your own model server with Flask or FastAPI, Ray Serve makes it seamless to build multiple models and scale them across the nodes of a Ray cluster.
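To make that workflow concrete, here is a minimal sketch of a single-model deployment using the Ray Serve 2.x Python API; the sentiment model is a hypothetical placeholder, not code from the workshop.

from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)  # Serve schedules replicas across the Ray cluster
class SentimentModel:
    def __init__(self):
        # Hypothetical stand-in for loading a real model at startup.
        self.predict = lambda text: {"label": "positive", "score": 0.9}

    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        return self.predict(payload["text"])

app = SentimentModel.bind()
# serve.run(app) starts it locally; requests then go to http://localhost:8000/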

Ray Serve supports inference on CPUs, GPUs (even fractional GPUs!), and other accelerators, using just Python code. In addition to single-node serving, Serve enables seamless multi-model inference pipelines (also known as model composition); autoscaling on Kubernetes, both locally and in the cloud; and integration between business logic and machine learning model code.
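As a hedged illustration of those two features, the sketch below requests half a GPU per replica via ray_actor_options and composes two deployments through deployment handles. The models, names, and outputs are placeholders, and the handle-await style assumes Ray Serve 2.7 or newer.

from ray import serve
from ray.serve.handle import DeploymentHandle
from starlette.requests import Request

@serve.deployment(ray_actor_options={"num_gpus": 0.5})  # fractional GPU per replica
class Embedder:
    def embed(self, text: str) -> list:
        return [0.0] * 128  # placeholder embedding

@serve.deployment(ray_actor_options={"num_gpus": 0.5})
class Classifier:
    def classify(self, embedding: list) -> str:
        return "positive"  # placeholder prediction

@serve.deployment
class Pipeline:
    # Business logic and model calls live together in plain Python.
    def __init__(self, embedder: DeploymentHandle, classifier: DeploymentHandle):
        self.embedder = embedder
        self.classifier = classifier

    async def __call__(self, request: Request) -> str:
        text = (await request.json())["text"]
        embedding = await self.embedder.embed.remote(text)
        return await self.classifier.classify.remote(embedding)

app = Pipeline.bind(Embedder.bind(), Classifier.bind())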

We will also share how to integrate your model serving system with feature stores and operationalize your end-to-end ML application on Ray.
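One hedged sketch of that integration, looking up features at request time inside a deployment: FeatureStoreClient, its get_features method, and the feature names are hypothetical placeholders for whichever feature store SDK (e.g. Feast or Tecton) you actually use.

from ray import serve
from starlette.requests import Request

class FeatureStoreClient:
    # Hypothetical stand-in for a real feature store SDK.
    def get_features(self, entity_id: str) -> dict:
        return {"avg_spend_30d": 42.0, "num_orders_7d": 3}

@serve.deployment
class Scorer:
    def __init__(self):
        self.store = FeatureStoreClient()
        self.model = lambda feats: sum(feats.values())  # placeholder model

    async def __call__(self, request: Request) -> dict:
        entity_id = (await request.json())["user_id"]
        features = self.store.get_features(entity_id)
        return {"score": self.model(features)}

app = Scorer.bind()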

