Demystify RAM Usage in Multi-Process Data Loaders
A typical PyTorch training program on 8 GPUs with 4 dataloader workers per GPU would create at least 8 × (4 + 1) = 40 processes: one main process per GPU, plus 4 workers each.
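Many worker processes need not mean many copies of the data they read. The sketch below (our own illustration, not code from the article) shows one widely known pattern: storing a dataset as a single contiguous numpy array, rather than as many small Python objects, lets forked workers read it without each holding a private copy, because CPython's reference counting writes into every Python object it touches and thereby defeats copy-on-write page sharing.

```python
import multiprocessing as mp
import numpy as np

# A naive dataset held as a list of Python objects: each worker that iterates
# over it will gradually "unshare" its memory, because refcount updates dirty
# the pages holding those objects after fork.
naive_dataset = [np.ones(10) for _ in range(1000)]

# The same data packed into one contiguous array: the pages stay read-only,
# so forked workers can share them.
packed_dataset = np.ones((1000, 10))

def read_item(i):
    # Each row sums to 10.0 (ten ones).
    return packed_dataset[i].sum()

if __name__ == "__main__":
    with mp.Pool(4) as pool:  # 4 workers, mimicking num_workers=4
        results = pool.map(read_item, range(8))
    print(results)
```

This is only a minimal demonstration of the sharing pattern; measuring the actual per-process RAM difference requires OS-level tools and is beyond a short sketch.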
"Loss function" is one of the most basic concepts today in deep learning. Despite that, it is actually not necessarily a good programming abstraction when designing general-purpose systems. A system should not assume that a model always comes together with a "loss function".
Building a library for research and experiments is quite different from building other types of software. A key challenge is that, in research, abstractions and APIs are rarely set in stone: users may want to propose a slight variant or modification to literally ANYWHERE in the whole program, just because they have a new idea.
This post is about a small piece of functionality found useful in TensorFlow / JAX / PyTorch. Low-level components of these systems often use a plain list of values/tensors as inputs & outputs. However, end-users who develop models often want to work with more complicated data structures: Dict[str, Any], List[Any], custom classes, and their nested combinations. Therefore, we need bidirectional conversion between nested structures and a plain list of tensors. I found that different libraries invent similar approaches to solve this problem, and it's interesting to list them here.
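To make the conversion concrete, here is a minimal sketch of such a "flatten / unflatten" pair, in the spirit of jax.tree_util and torch.utils._pytree but written from scratch and simplified: it supports only dicts and lists as containers, and sorts dict keys for determinism (real libraries typically preserve insertion order and support registered custom classes).

```python
def flatten(obj):
    """Convert a nested structure into (flat list of leaves, structure spec)."""
    if isinstance(obj, dict):
        keys = sorted(obj)  # simplification: sort keys for a deterministic order
        leaves, specs = [], []
        for k in keys:
            sub_leaves, sub_spec = flatten(obj[k])
            leaves += sub_leaves
            specs.append(sub_spec)
        return leaves, ("dict", keys, specs)
    if isinstance(obj, list):
        leaves, specs = [], []
        for item in obj:
            sub_leaves, sub_spec = flatten(item)
            leaves += sub_leaves
            specs.append(sub_spec)
        return leaves, ("list", specs)
    return [obj], "leaf"  # anything else is a leaf value/tensor

def unflatten(leaves, spec):
    """Rebuild the nested structure from a flat list of leaves and its spec."""
    leaves = iter(leaves)
    def build(spec):
        if spec == "leaf":
            return next(leaves)
        if spec[0] == "dict":
            _, keys, specs = spec
            return {k: build(s) for k, s in zip(keys, specs)}
        _, specs = spec
        return [build(s) for s in specs]
    return build(spec)
```

For example, flatten({"a": [1, 2], "b": {"c": 3}}) yields the leaves [1, 2, 3] plus a spec that unflatten uses to reconstruct the original structure exactly.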
PyTorch provides two methods to turn an nn.Module into a graph represented in TorchScript format: tracing and scripting. This article will argue that torch.jit.trace should be preferred over torch.jit.script for deployment of non-trivial models.

In large systems, logs can be terrifying: they are huge in volume, and hard to understand.
This note lists some suggestions and common misuses of Python's logging module, with the aim of:
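As one illustration of the kind of misuse such notes commonly warn about (the specific items this note covers are not shown in the excerpt), here is a hedged sketch contrasting the root-logger-with-f-string pattern against a module-level named logger with lazy %-formatting:

```python
import logging

# Recommended pattern: one named logger per module, so output can be
# filtered and configured per library rather than globally.
logger = logging.getLogger(__name__)

def process(item_id):
    # Common misuse:
    #   logging.info(f"processing {item_id}")
    # This formats the string eagerly even when INFO is disabled, and goes
    # through the root logger, whose configuration the library shouldn't own.
    # Preferred: let logging defer formatting until the record is emitted.
    logger.debug("processing %s", item_id)
    return item_id * 2
```

With lazy %-formatting, the cost of building the message is paid only when the log level is actually enabled, which matters on hot paths.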
Technically, an image is a function that maps a continuous domain, e.g. a box, to intensity values. To store it digitally, it is discretized into an array[H][W], where each element array[i][j] is a pixel.
How does discretization work? How does a discrete pixel relate to the abstract notion of the underlying continuous image? These basic questions play an important role in computer graphics & computer vision algorithms.
This article discusses these low-level details, and how they affect our CNN models and deep learning libraries. If you ever wonder which resize function to use or whether you should add/subtract 0.5 or 1 to some pixel coordinates, you may find answers here. Interestingly, these details have contributed to many accuracy improvements in Detectron and Detectron2.
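The "add/subtract 0.5" question comes down to which coordinate convention a resize function assumes. The sketch below is our own worked illustration (the naming is ours, not the article's) of the two common conventions: "half-pixel centers", where pixel i's center sits at continuous coordinate i + 0.5 (used by OpenCV's resize and PyTorch's align_corners=False), versus "align corners", where pixel i sits exactly at coordinate i and the corner pixels of both grids coincide (align_corners=True).

```python
def src_coord_half_pixel(dst_i, scale):
    """Map a destination pixel index back to a source coordinate.

    Under the half-pixel-center convention, pixel i covers [i, i+1) with its
    center at i + 0.5, so we shift by 0.5, scale, then shift back.
    scale = src_size / dst_size.
    """
    return (dst_i + 0.5) * scale - 0.5

def src_coord_align_corners(dst_i, src_size, dst_size):
    """Map a destination pixel index back to a source coordinate.

    Under the align-corners convention, the first and last pixels of both
    grids are pinned to each other, and the interior is stretched linearly.
    """
    return dst_i * (src_size - 1) / (dst_size - 1)
```

For a 2x upsample from 4 to 8 pixels, the half-pixel convention maps destination pixel 0 to source coordinate -0.25 (outside the first pixel's center, hence the need for border handling), while align-corners maps destination pixels 0 and 7 exactly onto source pixels 0 and 3. The two conventions disagree everywhere except trivial cases, which is why mixing them silently shifts features by a fraction of a pixel.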
TL;DR: How to find out if your favorite deep learning library is occasionally giving you wrong results? Such bugs happen from time to time, and are extremely difficult to notice, report, and debug.
Python's package management is a mess. I'm involved in a few open source projects and I often help users address their environment & installation issues. A large number of these environment issues essentially come down to incorrectly / accidentally mixing multiple different Python environments together. This post lists a few common pitfalls and misconceptions around them.
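A first diagnostic step for suspected environment mixing (our own hedged sketch, not a recipe from the post) is to ask the running interpreter where it and its packages actually live, then compare against where your package manager is installing:

```python
import sys
import sysconfig

def describe_env():
    """Report where 'this' Python lives and where its imports come from."""
    return {
        "executable": sys.executable,  # which interpreter binary is running
        "prefix": sys.prefix,          # the environment root it belongs to
        "site_packages": sysconfig.get_paths()["purelib"],  # import location
    }

info = describe_env()
for key, value in info.items():
    print(f"{key}: {value}")
```

If the paths printed here don't match what `pip --version` reports, pip is installing into a different interpreter than the one running your code; invoking pip as `python -m pip install ...` ties it to a specific interpreter and sidesteps that class of mismatch.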
TL;DR: People are rarely aware of the deep learning mistakes they make, because things always appear to work, and there are no expectations of how well they should work. The solution is to try to accurately reproduce settings & performance of high-quality papers & code.