Now Streaming In Your Language: The Technology Behind Netflix’s Global Interface

Netflix Technology Blog
4 min read · Apr 29, 2024


by Kai Li and Timothy Ellison, with special thanks to our stunning colleagues Ilya Stametau, Poorna Chandra, Tatiana Trubitcyna, Osama Al Jada, Tiffany Taylor Lee, Qing Zan, and Shishir Kakaraddi

Every time you open Netflix on your TV, computer, phone, or tablet, you’ll see dozens of user interface strings: “Home”, “Browse by Languages”, and “Kids” in the navigation bar, the label “Only on Netflix” over a row of series and movies, and the extremely enticing word “Play” on the central button.

Animated image showing the Netflix website with all text assets localized first in English and then in Japanese.

Being available in over 190 countries means that many of our members engage with our UI in non-English languages. As members of the Product Localization & Writing team at Netflix, we build software to create, edit, translate, and serve millions of text assets. Everything from episode synopses and show titles to customer service manuals and marketing emails moves in whole or in part through our systems. We support hundreds of applications, from Netflix on thousands of devices to our TUDUM and Top10 sites to internal Studio tools used to bring shows to life.

How does a UI string flow from a developer’s desk to your TV at home? We’ll dive into three steps: Ingestion, Localization, and Serving.

Ingestion

If a string is newly added in a code repository, we need to know about it. Since there are hundreds of applications internally and externally at Netflix, this process needs to be painless and simple. Developers define an app-localization.json file in their repository which tells us which source string files to read and where to write localized strings back.

Image of a block of code used to configure app localization pipelines.
Example of a localization configuration file. “read” paths are read into the localization system, and localized files are written back to the “write” paths.
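
To make this concrete, here is a rough sketch of what such a configuration might look like. The schema, field names, and paths below are illustrative assumptions, not the actual format:

    {
      "stringFiles": [
        {
          "read": "src/main/resources/strings_en.properties",
          "write": "src/main/resources/strings_{locale}.properties",
          "transformer": "properties-transformer"
        },
        {
          "read": "web/locales/en.json",
          "write": "web/locales/{locale}.json"
        }
      ]
    }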

The presence of this file in a repository catches the attention of our commit ingestion system. Strings in subsequent commits are sent to our backend service.

Developers can define transformers (AWS Lambda functions) that reshape strings as they are read from or written to their repositories. This allows us to support various formats like .properties, .xml, .json, iOS, Android, etc., and allows the community to expand support for new formats as the need arises. Tight integration with source control allows us to capture history, create context, and even roll back values (see Serving). It also eliminates the need for developers to maintain any custom scripts or Jenkins jobs around string management.
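
To make the transformer idea concrete, below is a minimal sketch of a read-side transformer written as an AWS Lambda handler in Java. The event shape (raw .properties content passed in the request) and the output shape (a flat key-to-value map) are assumptions for illustration, not the actual contract our system uses.

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.io.IOException;
    import java.io.StringReader;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Properties;

    // Hypothetical read-side transformer: converts raw .properties file content
    // into a flat key -> value map that a localization backend could ingest.
    public class PropertiesReadTransformer
            implements RequestHandler<Map<String, String>, Map<String, String>> {

        @Override
        public Map<String, String> handleRequest(Map<String, String> event, Context context) {
            Properties props = new Properties();
            try {
                // "fileContent" is an assumed field carrying the raw file body.
                props.load(new StringReader(event.get("fileContent")));
            } catch (IOException e) {
                throw new RuntimeException("Could not parse .properties content", e);
            }
            Map<String, String> strings = new LinkedHashMap<>();
            for (String key : props.stringPropertyNames()) {
                strings.put(key, props.getProperty(key));
            }
            return strings;
        }
    }

A corresponding write-side transformer would do the reverse, turning localized key-to-value maps back into the repository’s native file format.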

Localization

Once strings are ingested, developers, product managers, and localization managers add context. Does the word “Home” in this string mean a website’s homepage or a user’s physical home? Translators need to know! Users also group strings by business context, such as “ads” or “games”, so that related strings from dozens of different codebases can be managed in one place. Strings are then localized into appropriate languages and sent back to source control in a pull request.

Image of a code repository with an open pull request. The pull request contains newly localized strings in Arabic and other languages.
Localized strings are sent to applications via pull request.

This maintains the code repository as the source of truth for strings and allows developers to instrument tests, builds, and QA with the newly minted strings. The localization process itself is complex and rapidly changing, so having a single and consistent source of truth is invaluable.
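
As a concrete illustration (file names, keys, and translations below are hypothetical), a pull request like the one above might add a localized file next to the English source:

    # strings_en.properties: source strings written by developers
    home.label=Home
    play.button=Play

    # strings_ar.properties: localized strings written back via pull request
    home.label=الرئيسية
    play.button=تشغيل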

Serving

Strings can be fetched at runtime via our federated graph. To support ultra-low latency, high-throughput calls, strings are efficiently packaged into a Hollow feed. Currently 2.5 million strings are compacted into only 100 MB and loaded into memory on hundreds of edge servers across several cloud regions, serving billions of requests per second. Runtime string consumption enables dynamic string changes and A/B testing, paving the way for UI innovation and bug fixing. It also improves the experience on memory-constrained devices like TVs, where only a small subset of strings can be pulled at a time.
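
Hollow is Netflix’s open-source library for keeping read-only datasets entirely in memory on consumers. Below is a minimal sketch of how a service might consume such a feed, assuming, purely for illustration, a feed published to a local filesystem path rather than our actual blob store:

    import com.netflix.hollow.api.consumer.HollowConsumer;
    import com.netflix.hollow.api.consumer.fs.HollowFilesystemAnnouncementWatcher;
    import com.netflix.hollow.api.consumer.fs.HollowFilesystemBlobRetriever;

    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class StringFeedConsumer {
        public static void main(String[] args) {
            // Hypothetical location where the string feed's blobs are published.
            Path publishDir = Paths.get("/data/localization-feed");

            // Build a consumer that holds the whole dataset in memory and
            // refreshes whenever a new data version is announced.
            HollowConsumer consumer = HollowConsumer
                    .withBlobRetriever(new HollowFilesystemBlobRetriever(publishDir))
                    .withAnnouncementWatcher(new HollowFilesystemAnnouncementWatcher(publishDir))
                    .build();
            consumer.triggerRefresh();

            // All lookups are now local, in-memory reads, which is what makes
            // low-latency, high-throughput string serving possible.
            System.out.println("Loaded string feed version " + consumer.getCurrentVersionId());
        }
    }

In practice a generated, type-safe API over the feed’s data model would be used for lookups; the snippet above only shows the consumer plumbing.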

Since UI strings also live inside the repository, engineers can package them with their deployable at build time. This build-time consumption is useful for testing or for providing a fallback in case runtime consumption fails. Our recommendation is that apps do both: build-time for safety and runtime for innovation.
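
A minimal sketch of that recommendation, assuming a hypothetical RuntimeStringClient interface for runtime lookups and a ResourceBundle packaged at build time as the fallback:

    import java.util.Locale;
    import java.util.Optional;
    import java.util.ResourceBundle;

    public class UiStrings {

        /** Hypothetical client for the runtime string service. */
        public interface RuntimeStringClient {
            Optional<String> lookup(String key, Locale locale);
        }

        private final RuntimeStringClient runtimeClient; // runtime: dynamic changes, A/B tests
        private final ResourceBundle bundled;            // build time: strings shipped with the app

        public UiStrings(RuntimeStringClient runtimeClient, Locale locale) {
            this.runtimeClient = runtimeClient;
            this.bundled = ResourceBundle.getBundle("ui_strings", locale);
        }

        public String get(String key, Locale locale) {
            // Prefer the runtime value; fall back to the bundled value if the
            // runtime lookup fails or has no entry for this key.
            return runtimeClient.lookup(key, locale)
                    .orElseGet(() -> bundled.getString(key));
        }
    }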

We’ve spent the past couple of years developing this architecture in an application we refer to internally as App Localization Hub. The end-to-end loop from a developer’s desk to your TV at home can be modeled as:

Diagram of the complete localization flow, from code repositories where the text assets are created all the way to the Netflix user interface where they are displayed to Netflix members.

Future Work

In the future, we want to build tighter integrations with string sources to enhance our localization workflow. For example, a Figma integration will streamline the localization process by automating the extraction of text strings from designs, reducing manual copying errors, and providing translators with direct visual context, which improves translation accuracy. For serving, we’re looking to personalize UI strings and apply continuous explore-exploit models. Finally, we want to dig into improving observability and testing for developers, such as offering automated screenshot capture and QC features.

We hope you enjoyed the brief look into the Product Localization & Writing team and the lifecycle of UI strings at Netflix. Want to work with us? https://jobs.netflix.com/
