Tecton
Jake Noble & Danny Chiao · Dec 12th, 2022

apply(recsys) Conference 2022 | Workshop: Choosing Feast or Tecton for Your RecSys Architecture

Feature stores can be essential components of your recommender system infrastructure: they simplify the process of deploying and managing RecSys models. But users often wonder which is better suited to their use case: Feast or Tecton? The two products are quite different and address different requirements. In this workshop, we’ll give a hands-on, step-by-step example of building RecSys models using both Feast and Tecton, and we’ll offer recommendations on when to use each product.
# apply(recsys) 2022
# Feature Stores
# Tecton
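The workshop's premise is that Feast and Tecton both expose the same basic pattern: batch pipelines materialize precomputed feature values into a low-latency online store, and the serving path does point lookups by entity at request time. The sketch below illustrates that pattern with a toy in-memory store; the class and method names are illustrative only, not the actual Feast or Tecton APIs.

```python
# Toy illustration of the online-lookup pattern a feature store provides.
# All names here are hypothetical; real products add registries, offline
# stores, point-in-time joins, and more.

class InMemoryFeatureStore:
    """Feature values keyed by (feature_name, entity_id)."""

    def __init__(self):
        self._store = {}

    def materialize(self, feature_name, rows):
        """Load precomputed values, e.g. output of a batch pipeline."""
        for entity_id, value in rows.items():
            self._store[(feature_name, entity_id)] = value

    def get_online_features(self, features, entity_rows):
        """Low-latency point lookups at request time."""
        results = []
        for row in entity_rows:
            entity_id = row["user_id"]
            results.append(
                {f: self._store.get((f, entity_id)) for f in features}
            )
        return results


store = InMemoryFeatureStore()
store.materialize("clicks_7d", {"u1": 14, "u2": 3})
store.materialize("avg_watch_time", {"u1": 52.5})

features = store.get_online_features(
    features=["clicks_7d", "avg_watch_time"],
    entity_rows=[{"user_id": "u1"}],
)
print(features)  # [{'clicks_7d': 14, 'avg_watch_time': 52.5}]
```

Both products wrap this lookup in a managed service; the workshop's comparison is largely about how much of the surrounding pipeline (transformation, orchestration, monitoring) each one owns for you.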
Mike Del Balso, Jacopo Tagliabue, Marc Lindner, Agnes van Belle & Krystal Zeng · Dec 12th, 2022

apply(recsys) Conference 2022 | Lessons Learned: The Journey to Operationalizing Recommender Systems

Join us in this panel discussion to hear from ML practitioners on their journey to implementing Recommender Systems. We’ll discuss the most common challenges encountered when getting started, and best practices to address them. We’ll explore organizational dynamics, recommended tools, and how to align business requirements with technical capabilities. You’ll hear about approaches to phasing in Recommender Systems, starting small and progressively iterating on more sophisticated solutions.
# apply(recsys) 2022
# Panel Discussion
# Production Use Case
We’ll provide an introduction to Monolith, a system tailored for online training. Our design has been driven by observations of our application workloads and production environment, which reflect a marked departure from other recommendation systems. Our contributions are manifold: first, we crafted a collisionless embedding table with optimizations such as expirable embeddings and frequency filtering to reduce its memory footprint; second, we provide a production-ready online training architecture with high fault tolerance; finally, we proved that system reliability could be traded off for real-time learning. Monolith has successfully landed in the BytePlus Recommend product.
# apply(recsys) 2022
# Production Use Case
# Model training
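The two memory optimizations named in the abstract can be sketched in a few lines: a collisionless table gives every feature id its own slot (a dict, rather than hashing ids into a fixed number of shared buckets), frequency filtering withholds slots from rarely seen ids, and expiry evicts embeddings that have not been touched recently. This is a hedged illustration of those ideas, not Monolith's actual implementation; the thresholds, zero-initialization, and eviction policy are all simplifications.

```python
import time

class CollisionlessEmbeddingTable:
    """Toy sketch: dict-backed (collisionless) embedding table with
    frequency filtering and TTL-based expiry. Illustrative only."""

    def __init__(self, dim=4, min_count=2, ttl_seconds=3600.0):
        self.dim = dim
        self.min_count = min_count   # frequency-filter threshold
        self.ttl = ttl_seconds       # expiry window
        self.counts = {}             # feature id -> occurrence count
        self.table = {}              # feature id -> (vector, last_seen)

    def lookup(self, feature_id, now=None):
        now = time.time() if now is None else now
        self.counts[feature_id] = self.counts.get(feature_id, 0) + 1
        # Frequency filtering: ids seen too rarely never get a slot,
        # so long-tail ids don't bloat the table.
        if self.counts[feature_id] < self.min_count:
            return [0.0] * self.dim
        # Zero init for brevity; real systems use random init + training.
        vec, _ = self.table.get(feature_id, ([0.0] * self.dim, now))
        self.table[feature_id] = (vec, now)  # refresh last-seen time
        return vec

    def expire(self, now=None):
        """Drop embeddings not seen within the TTL to bound memory."""
        now = time.time() if now is None else now
        stale = [k for k, (_, seen) in self.table.items()
                 if now - seen > self.ttl]
        for k in stale:
            del self.table[k]
        return len(stale)


table = CollisionlessEmbeddingTable(dim=2, min_count=2, ttl_seconds=10)
table.lookup("item:42", now=0.0)  # first sighting: filtered, zero vector
table.lookup("item:42", now=1.0)  # second sighting: gets a real slot
evicted = table.expire(now=100.0) # well past the TTL: slot is evicted
```

Because every id has its own slot, embeddings for distinct ids never collide; the filtering and expiry are what keep that design from growing without bound.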
Slack, as a product, presents many opportunities for recommendation, where we can make suggestions to simplify the user experience and make it more delightful. Each one seems like a terrific use case for machine learning, but it isn’t realistic for us to create a bespoke solution for each. In the talk, we’ll dive into the Recommend API, a unified framework the team built over the years that allows us to quickly bootstrap new recommendation use cases. Behind the scenes, these recommenders reuse a common set of infrastructure for every part of the recommendation engine, such as data processing, model training, candidate generation, and monitoring. This has allowed us to deliver a number of different recommendation models across the product, driving improved customer experience in a variety of contexts.
# apply(recsys) 2022
# Production Use Case
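The core idea of a unified framework like the one described above is that each new use case registers a recommender behind one shared interface, so the surrounding infrastructure (serving, logging, monitoring) is written once. A minimal sketch of that registry pattern, with entirely hypothetical names rather than Slack's actual Recommend API:

```python
# Hypothetical registry sketch of the "one framework, many recommenders"
# pattern; not Slack's actual Recommend API.

RECOMMENDERS = {}

def register(use_case):
    """Decorator that adds a recommender under a use-case name."""
    def wrap(fn):
        RECOMMENDERS[use_case] = fn
        return fn
    return wrap

@register("channel-suggestions")
def suggest_channels(user_id, limit=3):
    # Stand-in for candidate generation + model scoring.
    candidates = ["#help", "#random", "#announcements", "#dev"]
    return candidates[:limit]

@register("emoji-suggestions")
def suggest_emoji(user_id, limit=2):
    return [":tada:", ":+1:"][:limit]

def recommend(use_case, user_id, **kwargs):
    """Single entry point shared by every recommendation surface; this
    is where shared logging, A/B hooks, and monitoring would live."""
    return RECOMMENDERS[use_case](user_id, **kwargs)

print(recommend("channel-suggestions", "u1", limit=2))
```

Bootstrapping a new use case then means writing one function and registering it, while reusing the shared data processing, training, and monitoring plumbing.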
Shreyas Krishnaswamy & Phi Nguyen · Jun 1st, 2022

Workshop: Bring Your Models to Production with Ray Serve

In this workshop, we will walk through a step-by-step guide on how to deploy an ML application with Ray Serve. Compared to building your own model servers with Flask or FastAPI, Ray Serve facilitates seamless building and scaling to multiple models and serving model nodes in a Ray Cluster. Ray Serve supports inference on CPUs, GPUs (even fractional GPUs!), and other accelerators, using just Python code. In addition to single-node serving, Serve enables seamless multi-model inference pipelines (also known as model composition); autoscaling in Kubernetes, both locally and in the cloud; and integrations between business logic and machine learning model code. We will also share how to integrate your model serving system with feature stores and operationalize your end-to-end ML application on Ray.
# Data engineering
# Model serving
# Systems and Architecture
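The "model composition" pattern mentioned in the abstract chains independently scalable models into one inference pipeline. Sketched here with plain callables rather than actual Ray Serve deployments; in Ray Serve each stage would be a separate `@serve.deployment` scaled on its own, and the driver would call deployment handles instead of plain functions. The models themselves are trivial stand-ins.

```python
# Plain-Python sketch of a two-stage retrieve-then-rank pipeline, the
# shape of "model composition" discussed in the workshop. Illustrative
# stand-in models, not a real serving stack.

def preprocess(text):
    """Lightweight CPU-bound stage."""
    return text.lower().split()

def candidate_model(tokens):
    """Stand-in for a retrieval model returning (candidate_id, score)."""
    return [(hash(t) % 100, len(t)) for t in tokens]

def ranking_model(candidates, top_k=2):
    """Stand-in for a heavier ranking model keeping the top-k."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:top_k]

def pipeline(text):
    """Driver that chains the stages. In Ray Serve, each stage is its own
    deployment with its own replica count and hardware (CPU vs GPU)."""
    tokens = preprocess(text)
    candidates = candidate_model(tokens)
    return ranking_model(candidates)


result = pipeline("Deploy Models With Confidence")
```

The value of doing this inside a serving framework rather than one process is that each stage can be scaled and placed independently, e.g. a fractional GPU for the ranker and many cheap CPU replicas for preprocessing.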
Drew Banin · Jun 1st, 2022

The dbt Semantic Layer

In this talk, Drew will discuss the dbt Semantic Layer and explore some of the ways that Semantic Layers and Feature Stores can be leveraged together to power consistent and precise analytics and machine learning applications.
# Data engineering
# Open Source
Discover how Vital powers its predictive, customer-facing, emergency department wait-time product with request-time input signals and how it solves its "cold-start" problem by building machine-learning feedback loops using Tecton.
# Feature Stores
# Production Use Case
# Systems and Architecture
Joao Santiago · May 12th, 2022

DIY Feature Store: A Minimalist's Guide

A feature store can solve many problems, at varying degrees of complexity. In this talk, I'll go over our process for keeping it simple and the solutions we came up with.
# Data engineering
# Feature Stores
# Production Use Case
# Systems and Architecture
Yuchen Wu · May 3rd, 2022

Engineering for Applied ML

Applied ML consists of ML algorithms at its core and engineering systems around it. For over a decade as an applied ML practitioner, I have built a number of such engineering systems to help unlock the full potential of ML in a variety of problem domains. This talk is about my learnings in building those systems and patterns that I've found to work well across applications.
# Organization and Processes
# Systems and Architecture
Popular

Enabling rapid model deployment in the healthcare setting · Felix Brann

DIY Feature Store: A Minimalist's Guide · Joao Santiago

Engineering for Applied ML · Yuchen Wu

Apache Kafka, Tiered Storage and TensorFlow for Streaming Machine Learning Without a Data Lake · Kai Waehner