Developing large-scale video systems poses complex, often unprecedented engineering challenges. At Video @Scale 2019, engineers gathered in San Francisco for a day of technical talks focused on delivering video at scale. Speakers from various companies, including Amazon, Facebook, Netflix, Twitch, and YouTube, discussed video streaming, encoding, contextual ads, and more. This year’s event also included a women in engineering breakfast panel and an AV1 panel for interested attendees.
If you missed the event, you can view recordings of the presentations below. To learn more about future events, you can visit the @Scale website or join the @Scale community.
Welcome & keynote
Mike Coward, Director of Engineering, Facebook
Mike reminds attendees that events like @Scale are about professional connection and community.
Video quality keynote
Ioannis Katsavounidis, Research Scientist, Facebook
This talk gives a quick primer on video compression and introduces the intra-codec convex hull. Compute efficiency of video encoding is a key parameter that Facebook uses to select encoders and encoder settings for videos. Ioannis also covers the standard ways of reducing video coding latency through parallelization, and why time-slicing is the most efficient and effective way to process video for VOD applications. He presents results obtained on an extensive data set, highlighting the critical trade-off between compute complexity and compression efficiency among AVC, VP9, and AV1 encoding.
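To make the convex-hull idea concrete, here is a minimal sketch (not Facebook's production pipeline): given candidate encodes of the same title at different resolutions and quality settings, keep only the (bitrate, quality) points that form the upper convex hull, i.e., the encodes where extra bits actually buy the most quality. The `Encode` fields, the quality metric, and the sample ladder are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Encode:
    resolution: str        # e.g. "1080p" (illustrative field)
    bitrate_kbps: float
    quality: float         # any quality metric, e.g. VMAF or PSNR (assumed)

def rate_quality_hull(encodes):
    """Return the encodes on the upper convex hull of the (bitrate, quality) plane."""
    # Sort by bitrate; for equal bitrates, consider the best-quality encode first.
    pts = sorted(encodes, key=lambda e: (e.bitrate_kbps, -e.quality))
    hull = []
    for p in pts:
        # Skip encodes that spend more bits without improving quality.
        if hull and p.quality <= hull[-1].quality:
            continue
        # Keep the curve concave: drop the last point if it falls below the chord to p.
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            slope_ab = (b.quality - a.quality) / (b.bitrate_kbps - a.bitrate_kbps)
            slope_bp = (p.quality - b.quality) / (p.bitrate_kbps - b.bitrate_kbps)
            if slope_bp >= slope_ab:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

candidates = [
    Encode("360p", 400, 62.0),
    Encode("720p", 1500, 83.5),
    Encode("720p", 2500, 85.0),   # dominated: sits below the 1500 -> 3000 chord
    Encode("1080p", 3000, 91.0),
]
for e in rate_quality_hull(candidates):
    print(e.resolution, e.bitrate_kbps, e.quality)
```

The same selection can then be repeated per encoder and per compute budget to compare how much quality each codec delivers for the CPU it consumes.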
Adopting video at scale
Steven Robertson, Engineer, YouTube
Steven works on streaming video performance at YouTube. In this talk, he discusses delivering AV1 to YouTube customers and introduces codec switching.
Turn up the video volume, HDR video @scale
Rich Gerber, Software Engineer, Netflix
While the rest of his team was working on projects such as VMAF and shot-based encoding (Dynamic Optimizer), Rich was creating the HDR encoding pipeline. Rich provides a primer on color and dynamic range and explains how Netflix built a highly scaled Dolby Vision ingest workflow. He also challenges us to consider the unsolved problem of HDR for user-generated content.
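As a small illustration of what "dynamic range" means numerically, the sketch below implements the SMPTE ST 2084 (PQ) inverse EOTF used by HDR formats such as Dolby Vision and HDR10, mapping absolute luminance up to 10,000 nits into a 0 to 1 signal. This is the published transfer function, not Netflix's pipeline code.

```python
def pq_inverse_eotf(luminance_nits: float) -> float:
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> signal in [0, 1]."""
    m1 = 2610 / 16384          # 0.1593017578125
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32      # 18.8515625
    c3 = 2392 / 4096 * 32      # 18.6875
    y = max(0.0, min(luminance_nits, 10000.0)) / 10000.0  # normalize to [0, 1]
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1 + c3 * y_m1)) ** m2

# 100 nits (typical SDR reference white) lands near the middle of the PQ range,
# leaving the upper half of the signal for highlights up to 10,000 nits.
print(round(pq_inverse_eotf(100.0), 3))    # ~0.508
print(round(pq_inverse_eotf(10000.0), 3))  # 1.0
```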
AV1 Panel
Video load testing at scale
Olga Hall, Senior Manager, Resilience Engineering, Amazon
In her role as global leader of Amazon Video availability, Olga works with engineering groups across Amazon Retail and AWS to bring an uninterrupted streaming experience to all customers at all times. In her talk, Olga encourages us to meet chaos where we are and take it along on our journey. We should be prepared for failures, she says, and use them to harden our systems. She also reminds us that technology is not about people who write code; it’s about people who use code.
Live streaming at Twitch
Yueshi Shen, Principal Software Engineer, Twitch
Yueshi initiated and built a number of Twitch’s core video capabilities, such as a live ABR playback algorithm for highly interactive content and low-latency live streaming over HLS. He gives us a glimpse into an internet minute at Twitch and an impressive 3.5-second glass-to-glass latency. He also talks about digging into a curious problem in which high-bandwidth users experience a high rate of rebuffers.
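The talk does not publish Twitch's ABR logic, but a minimal throughput-based rendition picker illustrates the basic trade-off any ABR algorithm for low-latency live video has to make: choose the highest bitrate the measured throughput can sustain, with enough headroom that the player never rebuffers. The ladder, safety factor, and buffer threshold below are illustrative assumptions, not Twitch's production values.

```python
# A deliberately simple throughput-based ABR heuristic (illustrative only;
# not Twitch's production algorithm). Bitrates in kbps.
LADDER = [400, 1200, 2500, 4500, 6000]  # assumed rendition ladder

def pick_rendition(measured_throughput_kbps: float,
                   buffer_seconds: float,
                   safety_factor: float = 0.8,
                   panic_buffer_s: float = 1.0) -> int:
    """Return the highest rung the connection can sustain with headroom.

    If the buffer is nearly empty, drop to the lowest rung: for low-latency
    live video, avoiding a rebuffer matters more than picture quality.
    """
    if buffer_seconds < panic_buffer_s:
        return LADDER[0]
    budget = measured_throughput_kbps * safety_factor
    candidates = [b for b in LADDER if b <= budget]
    return candidates[-1] if candidates else LADDER[0]

print(pick_rendition(measured_throughput_kbps=5000, buffer_seconds=3.0))  # 2500
print(pick_rendition(measured_throughput_kbps=5000, buffer_seconds=0.5))  # 400
```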
Video client bugs that are also funny
Denise Noyes, Software Engineer, Facebook
Denise introduces us to rage shakes, and shares some odd and interesting Facebook video bugs, such as disappearing video and the perils of downmixing phase-inverted stereo mixes.
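One of those bugs, downmixing a phase-inverted stereo mix, is easy to reproduce: if the right channel is the negation of the left, the common mono downmix (L + R) / 2 cancels to silence. A tiny illustration with made-up sample values:

```python
# Mono downmix of a phase-inverted stereo signal cancels to silence.
left = [0.5, -0.25, 0.8, -0.1]        # sample values in [-1, 1]
right = [-s for s in left]            # right channel is the left, phase-inverted
mono = [(a + b) / 2 for a, b in zip(left, right)]
print(mono)  # [0.0, 0.0, 0.0, 0.0] -> the video plays back silent after downmix
```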
Contextual video ad safety
Vijaya Chandra, Software Engineering Manager, Facebook
Rose Kanjirathinkal, Research Scientist, Facebook
Vijaya leads video understanding efforts at Facebook. Rose works on multimodal video understanding. Rose and Vijaya discuss implementing an ML platform for serving personalized and appropriate video content to Facebook and Instagram users. Vijaya describes the domain and the scale at which Facebook is operating, and Rose explains how Facebook is handling contextual ads for the platform. Understanding one video is hard; understanding two is even harder but critical.
Machine learning with a multiyear video corpus
Kayvon Fatahalian, Assistant Professor of Computer Science, Stanford University
Kayvon shares preliminary results from his study of 10 years of cable TV news video. His research focuses on the design of high-performance systems for real-time graphics (3D rendering and image processing), and the analysis and mining of images and videos at scale. He discusses work on an efficient video analysis platform to answer questions such as “How much screen time do women get in the news?” and to compute how frequently people say “thank you.”
Video integrity at scale
Sonal Gandhi, Software Engineer, Facebook
Sonal talks about reducing harmful content and harmful actors in the video ecosystem. She describes how Facebook used ML to reduce static video content on the platform, providing a better experience for publishers and for the people who use it.