Netflix at MIT CODE 2022

Netflix Technology Blog
Nov 10, 2022 · 3 min read


Netflix was proud to be the primary sponsor of the 2022 Conference on Digital Experimentation (CODE), hosted by the MIT Initiative on the Digital Economy. Our cast of Netflixers was excited to see new and old faces in person after two long years! In addition to stepping up our sponsorship game, we also improved our content release rate with four talks and one poster, as well as a trailer or two for next year’s CODE.

We started off strong with a new episode in our experimentation platform innovation series starring Matthew Wardrop (with Simon Ejdemyr and Martin Tingley). Matthew dove into the details of our experimentation platform’s design, highlighting its ability to support custom inference engines. This functionality lets our data scientists develop new methodologies and push them to production while abstracting away engineering concerns.
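As a hypothetical sketch of what such a plugin contract might look like (the actual platform API is internal, so every name below is illustrative):

```python
# Illustrative plugin-style inference engine; all names are hypothetical,
# not the real Netflix platform API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
import numpy as np

@dataclass
class InferenceResult:
    estimate: float
    ci_lower: float
    ci_upper: float

class InferenceEngine(ABC):
    """The contract a data scientist implements; the platform handles the rest."""
    @abstractmethod
    def analyze(self, treatment: np.ndarray, outcome: np.ndarray) -> InferenceResult:
        ...

class DifferenceInMeans(InferenceEngine):
    def analyze(self, treatment, outcome):
        t, c = outcome[treatment == 1], outcome[treatment == 0]
        delta = t.mean() - c.mean()
        se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
        return InferenceResult(delta, delta - 1.96 * se, delta + 1.96 * se)

# The platform would discover and run registered engines against experiment
# data, so shipping a new methodology means shipping a new subclass.
```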

This functionality was exemplified in our poster presentation on detecting treatment effect disparities, presented by Danielle Rich (with Winston Chou and Nathan Kallus). Our experimentation platform made it easy to graduate their novel conditional average treatment effect (CATE) estimators into production, where they are now routinely used in several reports. These estimators provide a more nuanced understanding of the differential impacts of new product innovations, thereby ensuring that we continue to deliver joy to all of our members around the world.
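To give a flavor of CATE estimation in general (the poster’s estimators are more sophisticated than this), a simple T-learner on simulated data shows how segment-level effect disparities surface:

```python
# Minimal T-learner sketch of CATE estimation on simulated data; purely
# illustrative, not the estimators from the poster.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=(n, 3))                  # member covariates
a = rng.binomial(1, 0.5, size=n)             # randomized treatment
tau = 0.5 + x[:, 0]                          # true effect varies with x0
y = x.sum(axis=1) + a * tau + rng.normal(size=n)

# Fit separate outcome models for treated and control members.
m1 = GradientBoostingRegressor().fit(x[a == 1], y[a == 1])
m0 = GradientBoostingRegressor().fit(x[a == 0], y[a == 0])
cate = m1.predict(x) - m0.predict(x)

# Surface disparities: compare estimated effects across member segments.
segment = x[:, 0] > 0
print(f"CATE (x0 <= 0): {cate[~segment].mean():.2f}")
print(f"CATE (x0 >  0): {cate[segment].mean():.2f}")
```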

Next up was a two-part episode introducing a brand new series on sequential experimentation. In Part 1, Michael Lindon (with Dae Woong Ham, Martin Tingley, and Iavor Bojinov) presented how to perform anytime-valid inference in linear models. In particular, they demonstrated how easy it is to perform sequential F-tests, which enable sequential inference with covariate adjustment. Just as covariate adjustment yields tighter confidence intervals in the fixed-n setting, it also leads to tighter confidence sequences and faster sequential tests in the anytime-valid setting.
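The sequential F-test itself involves more machinery than fits here, but a simpler cousin gives a feel for anytime-valid monitoring: a mixture sequential probability ratio test (mSPRT) for a normal mean. This sketch is our own illustration, not the method from the talk; note that covariate adjustment would shrink the residual variance sigma2 below, which is exactly why adjusted sequential tests reject sooner.

```python
# mSPRT for H0: mean = 0 with known variance sigma2, mixing over N(0, tau2).
# The mixture likelihood ratio is a nonnegative martingale under H0, so by
# Ville's inequality, rejecting once it exceeds 1/alpha controls the type-I
# error rate at every sample size simultaneously.
import numpy as np

def msprt_rejection_time(x, sigma2=1.0, tau2=1.0, alpha=0.05):
    """Return the first n at which H0 is rejected, else None."""
    s = 0.0
    for n, xi in enumerate(x, start=1):
        s += xi
        log_lr = (0.5 * np.log(sigma2 / (sigma2 + n * tau2))
                  + tau2 * s**2 / (2 * sigma2 * (sigma2 + n * tau2)))
        if log_lr >= np.log(1 / alpha):
            return n
    return None

rng = np.random.default_rng(1)
print(msprt_rejection_time(rng.normal(0.2, 1.0, size=10_000)))  # true effect: rejects early
print(msprt_rejection_time(rng.normal(0.0, 1.0, size=10_000)))  # null: usually None
```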

In Part 2, Dae Woong Ham (with Michael Lindon, Martin Tingley, and Iavor Bojinov) stepped in to provide a design-based approach to anytime-valid inference. In the design-based perspective, the full set of potential outcomes is conditioned upon, with randomness entering only through treatment assignment. This permits sequential inference to be performed on the obtained sample with very few assumptions, broadening the kinds of experiments that can be analyzed through anytime-valid approaches.
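To make the design-based perspective concrete, here is a minimal simulation of our own (not from the talk): the potential outcomes are held fixed, and the estimator’s sampling distribution arises purely from re-drawing the treatment assignment.

```python
# Design-based view: the potential outcomes below are fixed numbers once
# generated, and the only randomness is in which arm each unit draws.
import numpy as np

rng = np.random.default_rng(2)
n = 100
y0 = rng.normal(0, 1, size=n)      # fixed control potential outcomes
y1 = y0 + 0.3                      # fixed treated potential outcomes

# Re-randomize treatment many times; the difference-in-means estimator
# varies only because the assignment vector varies.
estimates = []
for _ in range(2_000):
    a = rng.binomial(1, 0.5, size=n)
    y = np.where(a == 1, y1, y0)
    estimates.append(y[a == 1].mean() - y[a == 0].mean())

print(f"true effect: 0.30, mean estimate: {np.mean(estimates):.3f}, "
      f"sd over assignments: {np.std(estimates):.3f}")
```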

Last and certainly not least, this season’s finale quickly arrived with a showcase from our amazing summer intern Apoorva Lal (with Wenjing Zheng and Simon Ejdemyr), tackling the external validity of experiments. During his three-month internship, Apoorva developed doubly robust estimators that allow experimental results to be either generalized to the superpopulation or transported to a different target population of interest. These estimators are now available in the open-source package causalTransportR.
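As a rough sketch of the idea (ours, not the causalTransportR implementation), here is one standard form of a doubly robust transport estimator on simulated data: an outcome model fit within the experiment is combined with inverse-odds-of-participation weights, so the target-population estimate remains consistent if either model is correctly specified.

```python
# Doubly robust estimator of the ATE in a target population (S = 0) using an
# experiment (S = 1) with randomized treatment; illustrative sketch only.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)
n = 4_000
x = rng.normal(size=(n, 2))
s = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))   # 1 = in the experiment
a = np.where(s == 1, rng.binomial(1, 0.5, size=n), 0)
y = x[:, 0] + a * (1 + x[:, 1]) + rng.normal(size=n)

# Outcome models fit on the experiment, predicted for everyone.
m1 = LinearRegression().fit(x[(s == 1) & (a == 1)], y[(s == 1) & (a == 1)])
m0 = LinearRegression().fit(x[(s == 1) & (a == 0)], y[(s == 1) & (a == 0)])
mu1, mu0 = m1.predict(x), m0.predict(x)

# Participation model gives inverse-odds weights P(S=0|X) / P(S=1|X).
pi = LogisticRegression().fit(x, s).predict_proba(x)[:, 1]
w = (1 - pi) / pi

# Augmented estimator: outcome-model contrast on target units, plus
# reweighted residual corrections from the experiment (e = 0.5 by design).
resid = np.where(a == 1, (y - mu1) / 0.5, -(y - mu0) / 0.5)
psi = (s == 0) * (mu1 - mu0) + (s == 1) * w * resid
print(f"transported ATE estimate: {psi.sum() / (s == 0).sum():.2f}")  # truth is 1.0
```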

Overall, it was a packed and rewarding two days. We met tons of interesting and great people, learned a lot from our colleagues in academia and industry, and left with far fewer stickers than we arrived with. We’re excited to see what the 2023 season of CODE brings!

P.S. We are still looking for data scientists with expertise in experimentation and causal inference to join our dream team. We also just opened up recruitment for our summer 2023 class of interns (apply here)!
