Tecton
Popular topics
# Systems and Architecture
# Data engineering
# Production Use Case
# Feature Stores
# Open Source
# Organization and Processes
# Explainability and Observability
# Model serving
# Research
# Panel Discussion
# Model training
# Data labeling
Most Recent
Drew Banin · Jun 1st, 2022

The dbt Semantic Layer

In this talk, Drew will discuss the dbt Semantic Layer and explore some of the ways that Semantic Layers and Feature Stores can be leveraged together to power consistent and precise analytics and machine learning applications.
# Data engineering
# Open Source
Shreyas Krishnaswamy & Phi Nguyen · Jun 1st, 2022

Workshop: Bring Your Models to Production with Ray Serve

In this workshop, we will walk through a step-by-step guide on how to deploy an ML application with Ray Serve. Compared to building your own model servers with Flask or FastAPI, Ray Serve makes it seamless to build and scale to multiple models and model-serving nodes in a Ray cluster. Ray Serve supports inference on CPUs, GPUs (even fractional GPUs!), and other accelerators, using just Python code. In addition to single-node serving, Serve enables seamless multi-model inference pipelines (also known as model composition); autoscaling on Kubernetes, both locally and in the cloud; and integration of business logic with machine learning model code. We will also share how to integrate your model serving system with feature stores and operationalize your end-to-end ML application on Ray.
# Data engineering
# Model serving
# Systems and Architecture
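The workshop above centers on Ray Serve's deployment API. As a rough illustration of what that looks like, here is a minimal sketch of a single Ray Serve deployment; the model, logic, and resource settings are hypothetical and not taken from the workshop.

```python
# Minimal Ray Serve sketch (hypothetical model and settings): one deployment
# with two replicas, each requesting half a GPU.
from ray import serve
from starlette.requests import Request


@serve.deployment(num_replicas=2, ray_actor_options={"num_gpus": 0.5})
class SentimentModel:
    def __init__(self):
        # Loaded once per replica; a real application would pull weights
        # from object storage or a model registry instead.
        self.positive_words = {"good", "great", "excellent"}

    async def __call__(self, request: Request) -> dict:
        text = (await request.json())["text"]
        score = sum(word in self.positive_words for word in text.lower().split())
        return {"positive": score > 0}


app = SentimentModel.bind()
# serve.run(app) starts the deployment and exposes it over HTTP; composing
# several deployments this way yields the multi-model pipelines mentioned above.
```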
Enabling rapid model deployment in the healthcare setting

Discover how Vital powers its predictive, customer-facing emergency department wait-time product with request-time input signals and how it solves its "cold-start" problem by building machine-learning feedback loops using Tecton.
# Feature Stores
# Production Use Case
# Systems and Architecture
Joao Santiago · May 12th, 2022

DIY Feature Store: A Minimalist's Guide

A feature store can solve many problems, with varying degrees of complexity. In this talk, I'll go over our process for keeping it simple and the solutions we came up with.
# Data engineering
# Feature Stores
# Production Use Case
# Systems and Architecture
Yuchen Wu · May 3rd, 2022

Engineering for Applied ML

Applied ML consists of ML algorithms at its core and engineering systems around it. For over a decade as an applied ML practitioner, I have built a number of such engineering systems to help unlock the full potential of ML in a variety of problem domains. This talk is about my learnings from building those systems and the patterns I've found to work well across applications.
# Organization and Processes
# Systems and Architecture
Lakehouse: A New Class of Platforms for Data and AI Workloads

In this talk, Matei will present the role of the Lakehouse as an open data platform for operational ML use cases. He'll discuss the ecosystem of data tooling that is commonly used to support ML use cases on the Lakehouse, including Delta Lake, Apache Hudi, and feature stores like Feast and Tecton.
# Data engineering
# Open Source
# Systems and Architecture
Workshop: The Key Pillars of ML Observability and How to Apply Them to Your ML Systems

Taking a model from research to production is hard, and keeping it there is even harder! As more machine learning models are deployed into production, it is imperative to have tools to monitor, troubleshoot, and explain model decisions. Join Amber Roberts, Machine Learning Engineer at Arize AI, for an overview of Arize AI's ML Observability platform, which enables ML teams to surface, resolve, and improve model performance issues automatically. Experience ML observability firsthand with a deep dive into the Arize platform using a practical use case example. Attendees will learn how to identify segments where a model is underperforming, troubleshoot and perform root cause analysis, and proactively monitor their models for future degradations.
# Explainability and Observability
Mike Del Balso & Martin Casado · Apr 20th, 2022

Fireside Chat: Is ML a Subset or a Superset of Programming?

Join Mike and Martin in this fireside chat, where they'll discuss whether ML should be considered a subset or a superset of programming. ML can be considered a specialized subset of programming, which introduces unique requirements on the process of building and deploying applications. But ML can also be considered a superset of programming, where the majority of applications being built today can be improved by infusing them with online ML predictions. Mike and Martin will share their thoughts and the implications for ML and Software Engineering teams.
# Panel Discussion
# Systems and Architecture
Training Large-Scale Recommendation Models with TPUs

At Snap, we train a large number of deep learning models every day to continuously improve ad recommendation quality for Snapchatters and provide more value to advertisers. These ad ranking models have hundreds of millions of parameters and are trained on billions of examples. Training an ad ranking model is a computation-intensive and memory-lookup-heavy task; it requires a state-of-the-art distributed system and performant hardware to complete the training reliably and in a timely manner. This session will describe how we leveraged Google's Tensor Processing Units (TPUs) for fast and efficient training.
# Model training
# Production Use Case
# Systems and Architecture
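As background for readers who have not trained on TPUs, a generic TensorFlow TPUStrategy setup looks roughly like the sketch below. This is not Snap's code; the tiny model and random data are placeholders, and a real ad-ranking workload would use large embedding tables and a streaming input pipeline.

```python
# Generic TPU training setup with tf.distribute.TPUStrategy; the placeholder
# model and data stand in for a real ad-ranking workload.
import tensorflow as tf

# Connect to the TPU cluster (tpu="" resolves a locally attached TPU,
# e.g. on a Cloud TPU VM) and initialize the TPU system.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created inside the strategy scope are replicated across TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder data; a production pipeline would stream billions of examples.
features = tf.random.uniform((1024, 32))
labels = tf.cast(tf.random.uniform((1024, 1)) > 0.5, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(
    256, drop_remainder=True  # static batch shapes are required on TPU
)

model.fit(dataset, epochs=1)
```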
Travis Addair & Piero Molino · Apr 12th, 2022

Declarative Machine Learning Systems: Ludwig & Predibase

Declarative Machine Learning Systems are a new trend that marries the flexibility of DIY machine learning infrastructure with the simplicity of AutoML solutions. In this talk we will discuss Ludwig, the open-source declarative deep learning framework, and Predibase, an enterprise-grade solution based on it.
# Data engineering
# Open Source
# Systems and Architecture
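To give a flavor of the declarative approach the talk describes, here is a minimal, illustrative Ludwig sketch; the column names and config values are hypothetical and not taken from the talk.

```python
# Illustrative Ludwig usage: features are declared in a config rather than
# wired together in model code; Ludwig assembles the model and training loop.
from ludwig.api import LudwigModel

config = {
    "input_features": [
        {"name": "review_text", "type": "text"},           # hypothetical columns
        {"name": "product_category", "type": "category"},
    ],
    "output_features": [
        {"name": "is_positive", "type": "binary"},
    ],
    "trainer": {"epochs": 3},
}

model = LudwigModel(config)
# train() accepts a pandas DataFrame or a path to a CSV/Parquet file containing
# the declared columns, e.g.:
# train_stats, _, output_dir = model.train(dataset="reviews.csv")
```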
Popular
Workshop: Bring Your Models to Production with Ray Serve
Shreyas Krishnaswamy & Phi Nguyen
Redis as an Online Feature Store
Taimur Rashid
Enabling rapid model deployment in the healthcare setting
Felix Brann
DIY Feature Store: A Minimalist's Guide
Joao Santiago