Why You Should Use Stacked Diffs / Stacked PRs

The internal code-review tools at both Meta and Google support a workflow known as "stacked diffs / stacked PRs". However, the mainstream git-based platforms (GitHub, GitLab) do not support this workflow. Many friends who had to switch to GitHub after leaving Meta describe stacked diffs as the "ultimate productivity tool" for engineers, and I feel the same way. This article explains what the stacked diffs workflow is, and why it can greatly improve a team's development efficiency.

Read more

Registration Does Not Scale Well

People have many different opinions about config systems. Having worked with various styles of configs, I also want to write about what a great config subsystem should look like in a large-scale system (large in terms of system complexity, number of users, etc.).

The design space is complex, so in this article I'll start with a smaller topic: registration in config systems. I'll show why this common pattern, though it works fine for small-scale projects, does not scale well in the long term. I'll also discuss an alternative.
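
For readers unfamiliar with the pattern, here is a minimal sketch of what "registration" typically looks like (the registry, class, and config names are hypothetical, not from any particular library):

```python
# A minimal sketch of the registration pattern.
MODEL_REGISTRY = {}

def register(name):
    """Decorator that records a class in the global registry under `name`."""
    def deco(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return deco

@register("resnet50")
class ResNet50:
    pass

def build_model(cfg):
    # The config only carries a string; the actual class is found by lookup.
    return MODEL_REGISTRY[cfg["model"]["name"]]()

model = build_model({"model": {"name": "resnet50"}})
```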

Read more

Safe Static Initialization, No Destruction

After joining Google Brain, I brought PyTorch into Google's internal infra and have owned its maintenance since. Google is well known for being a "tech island": almost everything there works differently from the outside world, which creates many challenges when bringing in a massive library like PyTorch.

Among those challenges are a few tricky bugs related to the static initialization order fiasco (SIOF) and static destruction. I was forced to learn far more details about these topics than I'd like to know, so it's worth writing them down before I forget.

Read more

Some Useful Terminal Escape Sequences

I recently learned about a few terminal escape sequences, and OSC52 in particular is one I wish I had discovered much earlier. This post briefly documents the various sequences.

Terminal escape sequences are special strings that a terminal application writes to stdout. Instead of displaying them, the terminal interprets these strings and performs the advanced features they correspond to.
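
As a concrete example, here is a minimal Python sketch of OSC52, the sequence that asks the terminal to put text on the system clipboard. Whether it works depends on the terminal emulator's OSC52 support, and the helper name below is made up:

```python
import base64
import sys

def osc52_copy(text: str) -> None:
    """Copy `text` to the system clipboard via the OSC52 escape sequence.

    A terminal that supports OSC52 intercepts this sequence instead of
    displaying it, and places the base64-decoded payload on the clipboard.
    """
    payload = base64.b64encode(text.encode()).decode()
    sys.stdout.write(f"\033]52;c;{payload}\a")
    sys.stdout.flush()

osc52_copy("hello from a remote shell")
```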

Read more

Demystify RAM Usage in Multi-Process Data Loaders

A typical PyTorch training program on 8 GPUs with 4 dataloader workers per GPU would create at least 8 × (1 + 4) = 40 processes. A naive use of PyTorch dataset and dataloader can easily replicate your dataset's RAM usage 40 times. This issue has probably affected everyone who has done anything nontrivial with PyTorch. In this post, we will explain why it happens and how to avoid the 40x RAM usage.
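
Below is a sketch of the kind of naive usage this refers to, assuming the dataset keeps its metadata as a large list of Python objects (the class and field names are hypothetical):

```python
from torch.utils.data import Dataset, DataLoader

class NaiveDataset(Dataset):
    """Stores all metadata as one big list of Python objects.

    After fork, every dataloader worker touches these objects, their
    refcounts change, and copy-on-write pages get duplicated -- so the
    list's RAM usage is multiplied by the number of processes.
    """
    def __init__(self):
        self.items = [{"file_name": f"{i}.jpg", "label": i % 10}
                      for i in range(10**6)]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]

# 4 workers per GPU process; with 8 such processes, ~40 copies in total.
loader = DataLoader(NaiveDataset(), num_workers=4)
```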

Read more

Not Every Model Has a Separate "Loss Function"

"Loss function" is one of the most basic concepts today in deep learning. Despite that, it is actually not necessarily a good programming abstraction when designing general-purpose systems. A system should not assume that a model always comes together with a "loss function".

Read more

How to Maintain Clean Core APIs for Research

Building a library for research and experiments is quite different from building other types of software. A key challenge is that, in research, abstractions and APIs are rarely set in stone: users may want to make a slight variant or modification literally ANYWHERE in the whole program, just because they have a new idea.

Read more

Automatically Flatten & Unflatten Nested Containers

This post is about a small piece of functionality that has proven useful in TensorFlow / JAX / PyTorch.

Low-level components of these systems often use a plain list of values/tensors as inputs & outputs. However, end users who develop models often want to work with more complicated data structures: Dict[str, Any], List[Any], custom classes, and their nested combinations. Therefore, we need bidirectional conversion between nested structures and a plain list of tensors. I found that different libraries have invented similar approaches to solve this problem, and it's interesting to list them here.
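
As one concrete instance of this functionality, JAX exposes it as tree_flatten / tree_unflatten in jax.tree_util (TensorFlow's tf.nest plays a similar role). A minimal usage sketch:

```python
import jax.tree_util as tree_util

nested = {"image": [1.0, 2.0], "meta": {"id": 3}}

# Flatten: nested structure -> (flat list of leaves, a "treedef" that
# remembers the structure).
leaves, treedef = tree_util.tree_flatten(nested)
print(leaves)    # [1.0, 2.0, 3]

# Unflatten: put (possibly new) leaves back into the same structure.
rebuilt = tree_util.tree_unflatten(treedef, [x * 10 for x in leaves])
print(rebuilt)   # {'image': [10.0, 20.0], 'meta': {'id': 30}}
```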

Read more

TorchScript: Tracing vs. Scripting

PyTorch provides two methods to turn an nn.Module into a graph represented in TorchScript format: tracing and scripting. This article will:

  1. Compare their pros and cons, with a focus on useful tips for tracing.
  2. Try to convince you that torch.jit.trace should be preferred over torch.jit.script for deployment of non-trivial models.
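
For context, the two entry points look roughly like this (a minimal sketch with a toy module, not taken from the article itself):

```python
import torch
from torch import nn

class Model(nn.Module):
    def forward(self, x):
        return x.relu() + 1

model = Model()

# Tracing: run the module once on example inputs and record the executed ops.
traced = torch.jit.trace(model, torch.randn(3))

# Scripting: compile the module's Python source with the TorchScript compiler.
scripted = torch.jit.script(model)

print(traced.graph)    # both produce a TorchScript graph
print(scripted.graph)
```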

Read more