In this workshop, we will walk through a step-by-step guide on how to deploy an ML application with Ray Serve. Compared to building your own model servers with Flask or FastAPI, Ray Serve makes it straightforward to build, scale, and serve multiple models across the nodes of a Ray cluster.
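To give a flavor of what this looks like, here is a minimal sketch of a single-model Ray Serve deployment (assuming Ray 2.x; the `SentimentModel` class and its toy scoring logic are illustrative placeholders, not the workshop's actual model):

```python
from starlette.requests import Request
from ray import serve


@serve.deployment
class SentimentModel:
    def __init__(self):
        # Load the model once per replica, e.g. from disk or a model registry.
        self.positive_words = {"good", "great", "excellent"}

    async def __call__(self, request: Request) -> dict:
        text = (await request.json())["text"]
        score = sum(word in self.positive_words for word in text.lower().split())
        return {"text": text, "score": score}


# Bind the deployment into an application and run it on a local Ray cluster.
app = SentimentModel.bind()
serve.run(app)  # serves HTTP at http://127.0.0.1:8000/ by default
```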
Ray Serve supports inference on CPUs, GPUs (even fractional GPUs!), and other accelerators, using just Python code. In addition to single-node serving, Serve enables seamless multi-model inference pipelines (also known as model composition), autoscaling both locally and in the cloud (including on Kubernetes), and integration of business logic with machine learning model code.
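The sketch below illustrates these ideas together: two composed deployments, a fractional-GPU resource request, and an autoscaling policy. It assumes Ray 2.7+ `DeploymentHandle` semantics and a GPU-equipped cluster; the `Ingress` and `Classifier` names and their logic are hypothetical placeholders:

```python
from starlette.requests import Request
from ray import serve


# Request half a GPU per replica, so two replicas can share one device.
@serve.deployment(ray_actor_options={"num_gpus": 0.5})
class Classifier:
    def predict(self, features: list) -> int:
        return int(sum(features) > 0)


# Scale the HTTP-facing deployment between 1 and 4 replicas based on load.
@serve.deployment(autoscaling_config={"min_replicas": 1, "max_replicas": 4})
class Ingress:
    def __init__(self, classifier):
        # `classifier` is a deployment handle injected by .bind() below.
        self.classifier = classifier

    async def __call__(self, request: Request) -> dict:
        features = (await request.json())["features"]
        label = await self.classifier.predict.remote(features)
        return {"label": label}


# Compose the two deployments into a single application.
app = Ingress.bind(Classifier.bind())
serve.run(app)
```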
We will also share how to integrate your model serving system with feature stores and operationalize your end-to-end ML application on Ray.