---
layout: distill
title: "All the Transformer Math You Need to Know"
description: "Here we'll do a quick review of the Transformer architecture, specifically how to calculate FLOPs, bytes, and other quantities of interest."
date: 2025-02-04
future: true
htmlwidgets: true
hidden: false

section_number: 4

previous_section_url: "../sharding"
previous_section_name: "Part 3: Sharding"

next_section_url: "../training"
next_section_name: "Part 5: Training"

giscus_comments: true

bibliography: main.bib

_styles: >
  .fake-img {
    background: #bbb;
    border: 1px solid rgba(0, 0, 0, 0.1);
    box-shadow: 0 0px 4px rgba(0, 0, 0, 0.1);
    margin-bottom: 12px;
  }
  .fake-img p {
    font-family: monospace;
    color: white;
    text-align: left;
    margin: 12px 0;
    text-align: center;
    font-size: 16px;
  }
---
Let's start with vectors and matrices of the following shapes:
$$\def \red#1{\textcolor{red}{#1}} \def \green#1{\textcolor{green}{#1}} \def \blue#1{\textcolor{blue}{#1}} \def \purple#1{\textcolor{purple}{#1}} \def \orange#1{\textcolor{orange}{#1}} \def \gray#1{\textcolor{gray}{#1}}
\begin{array}{cc} \textrm{array} & \textrm{shape} \\ \hline x & \textrm{[P]} \\ y & \textrm{[P]} \\ A & \textrm{[N P]} \\ B & \textrm{[P M]} \\ \hline \end{array}$$
- A dot product $$x \cdot y$$ requires $$P$$ adds and multiplies, or $$2P$$ floating-point operations total.
- A matrix-vector product $$Ax$$ does $$N$$ dot-products along the rows of $$A$$, for $$2NP$$ FLOPs.
- A matrix-matrix product $$AB$$ does $$M$$ matrix-vector products, one for each column of $$B$$, for $$2NPM$$ FLOPs total.
- In general, if we have two higher-dimensional arrays $$C$$ and $$D$$, where some dimensions are CONTRACTING and some are BATCHING (e.g. $$C[\blue{GH}IJ\red{KL}], D[\blue{GH}MN\red{KL}]$$), then the FLOPs cost of this contraction is two times the product of all of the $$C$$ and $$D$$ dimensions, where the batch and contraction dimensions are only counted once (e.g. $$2\blue{GH}IJMN\red{KL}$$), as in the sketch below. Note that a dimension is only batching if it occurs in both multiplicands. (Note also that the factor of 2 won't apply if there are no contracting dimensions and this is just an elementwise product.)
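To make the general rule concrete, here's a small Python sketch (the helper name `contraction_flops` is ours, not from any library) that implements the counting rule above:

```python
import math

def contraction_flops(c_dims: dict, d_dims: dict) -> int:
    """FLOPs to contract arrays C and D, given {dimension name: size} maps.

    Implements the rule above: two times the product of all C and D
    dimensions, with dimensions shared by both arrays (batching or
    contracting) counted only once. The factor of 2 covers the multiply
    and the add of each multiply-accumulate, and doesn't apply to a
    purely elementwise product.
    """
    merged = {**c_dims, **d_dims}  # shared (batch/contracting) dims appear once
    return 2 * math.prod(merged.values())

# Dot product x[P] . y[P]: 2P FLOPs
assert contraction_flops({"P": 128}, {"P": 128}) == 2 * 128
# Matmul A[N, P] @ B[P, M]: 2NPM FLOPs
assert contraction_flops({"N": 4, "P": 8}, {"P": 8, "M": 16}) == 2 * 4 * 8 * 16
# C[GHIJKL] . D[GHMNKL] with batching GH and contracting KL: 2*GHIJMNKL FLOPs
assert contraction_flops(
    {"G": 2, "H": 2, "I": 3, "J": 3, "K": 4, "L": 4},
    {"G": 2, "H": 2, "M": 5, "N": 5, "K": 4, "L": 4},
) == 2 * (2 * 2 * 3 * 3 * 5 * 5 * 4 * 4)
```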
Make note of the fact that for a matrix-matrix multiply, the compute scales cubically with the matrix dimensions while the bytes to be transferred only scale quadratically, so as we scale up our matmuls it becomes easier to saturate the hardware's compute.
{% include figure.liquid path="assets/img/matmul-flops.gif" class="img-fluid" %}
During training, we don't particularly care about the result of a given matrix multiply; we really care about its derivative. That means we do significantly more FLOPs during backpropagation.
If we imagine B is just one matrix in a larger network and A are our input activations with C = A B, the derivative of the loss L with respect to B is given by the chain rule:

$$\frac{\partial L}{\partial B} = A^T \left(\frac{\partial L}{\partial C}\right)$$

which is an outer product and requires $$2NPM$$ FLOPs (contracting over the $$N$$ dimension). Likewise, the derivative of the loss with respect to the activations,

$$\frac{\partial L}{\partial A} = \left(\frac{\partial L}{\partial C}\right) B^T$$

is again $$2NPM$$ FLOPs (this time contracting over the $$M$$ dimension), and is what lets us propagate gradients back to earlier layers.
Adding these up, we see that during training we have a total of 6NPM FLOPs, compared to 2NPM during inference: 2NPM in the forward pass and 4NPM in the backward pass. Since PM is the number of parameters in the matrix, this is the simplest form of the famous $$6 \cdot \textrm{num parameters} \cdot \textrm{num tokens}$$ approximation of Transformer training FLOPs, which we'll derive more carefully below.
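As a quick sanity check of the chain-rule formulas above (this snippet is illustrative and not from the original text; the loss function is an arbitrary stand-in), we can compare against `jax.grad`:

```python
import jax
import jax.numpy as jnp

N, P, M = 4, 8, 16
A = jax.random.normal(jax.random.PRNGKey(0), (N, P))   # input activations
B = jax.random.normal(jax.random.PRNGKey(1), (P, M))   # weight matrix

def loss(A, B):
    C = A @ B                   # forward matmul: 2NPM FLOPs
    return jnp.sum(jnp.sin(C))  # arbitrary scalar loss downstream of C

dA, dB = jax.grad(loss, argnums=(0, 1))(A, B)
dC = jnp.cos(A @ B)             # dL/dC for this particular loss

# The two backward matmuls, each costing another 2NPM FLOPs:
assert jnp.allclose(dB, A.T @ dC, atol=1e-5)   # dL/dB = A^T (dL/dC)
assert jnp.allclose(dA, dC @ B.T, atol=1e-5)   # dL/dA = (dL/dC) B^T
```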
Transformers are the future. Well, they're the present at least. Maybe a few years ago, they were one of many architectures. But today, it's worth knowing pretty much every detail of the architecture. We won't reintroduce the architecture here, but this blog and the original Transformer paper may be helpful references.
Here's a basic diagram of the Transformer decoder architecture:
{% include figure.liquid path="assets/img/transformer-diagram.png" class="img-fluid" caption="Figure: this diagram shows one layer of a standard Transformer and flows from top-to-bottom. We use a single-letter convention to describe the shapes and layouts of arrays in a Transformer, again showing contracting dimensions in red, and batched dimensions in blue. In a given operation, the input shape is given on top-left and the parameter shape is given on the top-right, with the resulting shape below, e.g. BTD is the input shape for the gating einsum and DF is the weight shape." %}
Note [gating einsum]: The diagram above uses a "gating einsum" where we split the up-projection into two matrices whose outputs are combined element-wise as a gating function before the down-projection (hence the 3DF MLP parameter count below).
Note 2 [MHA attention]: With self-attention, T and S are the same but for cross-attention they may be different. With vanilla Multi-Head Attention (MHA), N and K are the same while for Multi-Query Attention (MQA) K=1 and for Grouped MQA (GMQA) K merely has to divide N.
For what follows, we're going to compute per-layer FLOPs to avoid having to stick factors of L everywhere.
The MLPs of a Transformer typically consist of 2 input matmuls whose outputs are combined element-wise, and a single output matmul:

$$\begin{array}{ccc} \textsf{operation} & \textsf{train FLOPs} & \textsf{params} \\ \hline \\ \textrm{up-projection 1: } [B,T,\red{D}] \cdot [\red{D}, F] & 6BTDF & DF \\[10pt] \textrm{up-projection 2: } [B,T,\red{D}] \cdot [\red{D}, F] & 6BTDF & DF \\[10pt] \textrm{element-wise gate} & \gray{O(BTF)} & \gray{0} \\[10pt] \textrm{down-projection: } [B,T,\red{F}] \cdot [\red{F}, D] & 6BTDF & DF \\[10pt] \hline \\ \textsf{total} & 18BTDF & 3DF \end{array}$$
For the generic grouped-query attention case with different numbers of Q and KV heads, let us assume an equal head dimension H for the Q, K, and V projections and estimate the cost of the QKVO matmuls:

$$\begin{array}{ccc} \textsf{operation} & \textsf{train FLOPs} & \textsf{params} \\ \hline \\ Q: [B,T,\red{D}] \cdot [\red{D}, N, H] & 6BTDNH & DNH \\[10pt] K: [B,T,\red{D}] \cdot [\red{D}, K, H] & 6BTDKH & DKH \\[10pt] V: [B,T,\red{D}] \cdot [\red{D}, K, H] & 6BTDKH & DKH \\[10pt] O: [B,T,\red{N},\red{H}] \cdot [\red{N}, \red{H}, D] & 6BTDNH & DNH \\[10pt] \hline \\ \textsf{total} & 12BTD(N+K)H & 2D(N+K)H \end{array}$$
The dot-product attention operation is more subtle: effectively it is a $$TH \cdot HS$$ matmul (the $$Q \cdot K$$ product) followed by a softmax and a $$TS \cdot SH$$ matmul (multiplying the attention weights by $$V$$), both batched over the $$B$$ and head dimensions. Each of these two matmuls costs $$6BTSNH$$ training FLOPs, so dot-product attention costs roughly $$12BTSNH = 12BT^2NH$$ for self-attention where $$S = T$$ (the softmax itself is only $$\gray{O(BTSN)}$$).
There are several other operations happening in a Transformer. Layernorms are comparatively cheap and can be ignored for first-order cost estimates. There is also the final enormous (though not per-layer) unembedding matrix multiply.
$$\begin{array}{ccc} \textsf{operation} & \textsf{train FLOPs} & \textsf{params} \\ \hline \\ \textrm{layernorm}_D \;\; A[B,T,\red{D}] & \gray{O\left(BTD\right)} & \gray{D} \\[10pt] A[B,T,\red{D}] \cdot W_{unembed}[\red{D}, V] & 6BTDV & DV \\ \end{array}$$
If we neglect the cost of dot-product attention for shorter-context training, then the total FLOPs across all layers is
$$\begin{align*} (18BTDF + 12BTD(N+K)H)L &= 6BT * (3DF + 2D(N+K)H)L \\ &= 6 * \textrm{num tokens} * \textrm{parameter count} \end{align*}$$
Leading to a famous rule of thumb for estimating dense Transformer FLOP count, ignoring the attention FLOPs. (Unembedding is another simple matmul with $$6BTDV$$ FLOPs and $$DV$$ parameters, so it obeys the same rule of thumb.)
If we do account for dot-product attention above and assume $$F = 4D$$, $$D = NH$$ (as is typical), and $$N = K$$ (vanilla MHA), then the ratio of attention FLOPs to matmul FLOPs per layer is

$$\frac{12BT^2NH}{18BTDF + 12BTD(N+K)H} = \frac{12BT^2D}{72BTD^2 + 24BTD^2} = \frac{12BT^2D}{96BTD^2} = \frac{T}{8D}$$
So the takeaway is that dot-product attention FLOPs only become dominant during training once T > 8D. For D ~ 8k, this would be ~64K tokens. This makes some sense: as the MLP size increases, the attention FLOPs become less critical. For large models, the quadratic cost of attention is not actually a huge obstacle to longer-context training. However, for smaller models, e.g. Gemma-27B with D=4608, attention becomes dominant around 32k sequence lengths. Flash Attention also helps alleviate the cost of long context, which we discuss briefly in Appendix A.
We'd be remiss not to briefly discuss Mixture of Experts (MoE) models, which replace the single dense MLP block in a standard Transformer with a set of independent MLPs that can be dynamically routed between. To a first approximation, an MoE is just a normal dense model with E MLP blocks per layer, instead of just one. Each token activates only k of these experts, so the total parameter count grows with E while the per-token FLOPs grow only with k (see the sketch below).
{% include figure.liquid path="assets/img/moe.png" class="img-fluid img-small" caption="Figure: an example MoE layer with
Compared to a dense model, an MoE introduces new comms, primarily two AllToAlls (one before and one after the MoE block) that route tokens to the correct expert and bring them back to their home device. Technically, this only happens if we are data or sequence sharded along the same axis as our experts. However, as we saw in the previous section, the cost of each AllToAll is only 1/4 that of a comparable AllGather along a single axis (for a bidirectional ring).
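As a rough illustration of the scaling claim above (the helper below and its name are ours, not from the text), total MoE MLP parameters scale with E while per-token training FLOPs scale with the number of activated experts k:

```python
def moe_mlp_stats(L, D, F, E, k):
    """Approximate MLP-only parameter and FLOP scaling for an MoE model.

    Uses the same 3DF-per-expert MLP shape as the dense tables above.
    """
    total_params = L * E * 3 * D * F      # every expert's weights exist
    active_params = L * k * 3 * D * F     # weights a single token actually uses
    train_flops_per_token = 6 * active_params
    return total_params, active_params, train_flops_per_token

# Generic example sizes (not any particular model):
total, active, flops = moe_mlp_stats(L=64, D=4096, F=16384, E=8, k=2)
```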
Backpropagation as an algorithm trades memory for compute. Instead of a backward pass requiring $$O(n_\text{layers}^2)$$ FLOPs (recomputing every earlier activation whenever it's needed), it uses $$O(n_\text{layers})$$ memory, saving the intermediate activations generated during the forward pass. The backward pass reuses many of these intermediates,
so to avoid recomputing we need to save roughly every large intermediate array from the forward pass — around 20 per Transformer layer if we save everything. A few standard checkpointing (rematerialization) policies trade this off differently:
- Block remat: only save the input to each layer. This is the most aggressive method we use and only saves 1 checkpoint per layer, meaning we'd only save 4.2TB in the example above. This forces us to repeat essentially all forward pass FLOPs in the backward pass, meaning we increase our FLOPs from roughly $$6ND$$ to roughly $$8ND$$.
- Big matmuls only: another simple policy is to only save the outputs of large matmuls. This lets us avoid recomputing any large matmuls during the backward pass, but still makes us recompute other activation functions and parts of attention. This reduces the roughly 20 saved activations per layer to closer to 7 per layer.
This is by no means comprehensive. When using JAX, these policies are typically controlled by `jax.remat`/`jax.checkpoint` (you can read more here).
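For instance, here's a rough sketch (our own example, not from the text) of applying a "save only big matmuls"-style policy; `jax.checkpoint_policies.dots_saveable` is a real JAX policy that saves matmul (dot) outputs and recomputes everything else during the backward pass:

```python
import jax
import jax.numpy as jnp

def mlp_block(x, w_in, w_out):
    # A toy stand-in for one Transformer MLP block.
    return jax.nn.relu(x @ w_in) @ w_out

# Save only matmul outputs in the forward pass; recompute the rest (activation
# functions, etc.) in the backward pass. Plain jax.checkpoint(mlp_block) with
# no policy would instead correspond to block remat (save only the inputs).
mlp_remat = jax.checkpoint(mlp_block, policy=jax.checkpoint_policies.dots_saveable)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 512))         # [tokens, D]
w_in = jax.random.normal(key, (512, 2048))   # [D, F]
w_out = jax.random.normal(key, (2048, 512))  # [F, D]

loss = lambda w_in, w_out: mlp_remat(x, w_in, w_out).sum()
grads = jax.grad(loss, argnums=(0, 1))(w_in, w_out)
```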
As we'll see in Section 7, LLM inference has two key parts, prefill and generation.
- Prefill processes a long prompt and saves its attention activations in a Key-Value Cache (KV Cache) for use in generation, specifically the key-value projections in the attention block.
- Generation batches several of these KV caches together and samples tokens from each of them.
Each KV cache is then effectively an array of size $$[2, S, L, K, H]$$ (keys and values, for every layer), i.e. roughly $$2 \cdot S \cdot L \cdot K \cdot H$$ entries per sequence (with $$K = N$$ for vanilla MHA), times the number of bytes per entry.
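To make that concrete, here's a tiny helper (the function name is ours) for the cache size implied by the shape above:

```python
def kv_cache_bytes(S, L, K, H, bytes_per_entry=2):
    """Bytes for one sequence's KV cache: keys and values at every layer."""
    return 2 * S * L * K * H * bytes_per_entry

# e.g. an 8192-token cache with L=64, K=32, H=128 in bf16:
print(kv_cache_bytes(8192, 64, 32, 128) / 1e9)  # ~8.6 GB
```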
- The overall parameters and FLOPs of a Transformer are fairly easy to calculate, and are summarized below, assuming MHA (with batch size B, vocab size V, a sequence of length T, $$D = d_\text{model}$$, and $$F = d_\text{ff}$$):
| Component | Params per layer | Training FLOPs per layer |
| --- | --- | --- |
| MLP | $$3DF$$ | $$18BTDF$$ |
| Attention | $$4DNH$$ | $$24BTDNH + 12BT^2NH$$ |
| Other | $$D$$ | $$BTD$$ |
| Vocab | $$DV$$ (total, not per-layer) | $$12BTDV$$ |
- The parameter count of the MLP block dominates the total parameter count, and the MLP block also dominates the FLOPs budget as long as the sequence length $$T < 8D$$.
- The total FLOPs budget during training is well approximated by $$6 \cdot \text{num params} \cdot \text{num tokens}$$ for reasonable context lengths (see the sketch below).
- During inference, our KV caches are roughly $$2 \cdot S \cdot L \cdot N \cdot H$$ per cache, although architectural modifications can often reduce this.
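Putting the table above into code (a rough sketch; the function names and example sizes are ours), we can check the $$6 \cdot \text{num params} \cdot \text{num tokens}$$ approximation numerically:

```python
def transformer_params(L, D, F, N, H, V, K=None):
    """Approximate parameter count from the table above (MHA if K is None)."""
    K = N if K is None else K
    return L * (3 * D * F + 2 * D * (N + K) * H + D) + 2 * D * V

def transformer_train_flops(L, B, T, D, F, N, H, V, K=None):
    """Approximate training FLOPs from the table above (MHA if K is None)."""
    K = N if K is None else K
    per_layer = (
        18 * B * T * D * F              # MLP
        + 12 * B * T * D * (N + K) * H  # Q/K/V/O projections
        + 12 * B * T * T * N * H        # dot-product attention
        + B * T * D                     # layernorms etc.
    )
    return L * per_layer + 12 * B * T * D * V  # plus the vocab matmuls

L, B, T, D, F, N, H, V = 64, 1, 2048, 4096, 4 * 4096, 32, 128, 32_000
exact = transformer_train_flops(L, B, T, D, F, N, H, V)
approx = 6 * transformer_params(L, D, F, N, H, V) * B * T
print(exact / approx)  # ~1.06: close to 1 while T << 8D
```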
Question 1: How many parameters does a model with $$D = 4096$$, $$F = 4D = 16{,}384$$, $$V = 32{,}000$$, and $$L = 64$$ have, assuming MHA with $$N \cdot H = D$$? What fraction of these are attention parameters? How large are our KV caches per token, assuming they're stored in int8?
{% details Click here for the answer. %}
- The total parameter count is roughly $$L \cdot (3DF + 4DNH + D) + 2DV$$. For the given numbers, this is $$64 \cdot (3 \cdot 4e3 \cdot 16e3 + 4 \cdot 4e3 \cdot 4e3 + 4e3) + 2 \cdot 4e3 \cdot 32e3 = 16e9$$, or 16B parameters.
- The ratio of attention parameters to total parameters in general is $$4DNH / (4DNH + 3DF) = 4D^2 / (4D^2 + 12D^2) = 1/4$$, so roughly 1/4 of parameters are used in attention.
- Per token, our KV caches are $$2 \cdot L \cdot N \cdot H = 2 \cdot 64 \cdot 4096$$ bytes in int8, which is `512kB / token`.
{% enddetails %}
Question 2: How many total FLOPs are required to perform $$A[B_X, D_Y] \cdot_D W[D_Y, F]$$ on the mesh `{'X': 4, 'Y': 8, 'Z': 4}`? How many FLOPs are performed by each TPU?
{% details Click here for the answer. %}
The total "theoretical” FLOPs of the operation is
{% enddetails %}
Question 3: How many FLOPs are involved in performing $$A[I, J, K, L] \cdot_{I,J} B[I, J, M, N, O]$$, contracting over I and J?
{% details Click here for the answer. %}
Following the rule above, we have I and J as contracting dimensions and K, L, M, N, and O as non-contracting dimensions. We have no "batching dimensions", so this is just $$2 \cdot I \cdot J \cdot K \cdot L \cdot M \cdot N \cdot O$$: twice the product of all the dimensions, with each contracting dimension counted once.
{% enddetails %}
Question 4: What is the arithmetic intensity of self-attention (ignoring the Q/K/V/O projections)? Give the answer as a function of the Q and KV lengths T and S. At what context length is attention FLOPs-bound? Given the HBM bandwidth of our TPUs, plot the effective relative cost of attention to the FFW block as the context length grows.
{% details Click here for the answer. %}
Self-attention requires loading the $$Q$$, $$K$$, and $$V$$ activations, computing $$\text{softmax}(Q \cdot K) \cdot V$$, and writing the result back to HBM. In practice this is done with Flash Attention, so we never materialize the full attention matrix in HBM; in bf16 the two big matmuls cost roughly $$4BTSNH$$ FLOPs in the forward pass.

So our total bytes is roughly $$2BTNH$$ for $$Q$$, $$2 \cdot 2BSNH$$ for $$K$$ and $$V$$, and $$2BTNH$$ for the output, i.e. about $$4BNH(T + S)$$, giving an arithmetic intensity of

$$\frac{4BTSNH}{4BNH(T+S)} = \frac{TS}{T+S}$$

So basically, during prefill we have $$S = T$$, so attention has an arithmetic intensity of roughly $$T / 2$$ and becomes FLOPs-bound once this exceeds the chip's critical intensity (roughly 240 in bf16 for our TPUs), i.e. around $$T \approx 480$$. During generation $$T = 1$$, so the intensity is roughly 1 and attention is always memory-bandwidth-bound. By contrast, the FFW block's intensity depends on the token batch size rather than the context length, so the relative cost of attention grows with context (see the sketch below for the FLOPs-bound crossover).
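As a rough numerical sketch (the hardware numbers below are assumptions for a TPU v5e-class chip, not values stated in this section), the crossover where prefill attention becomes FLOPs-bound:

```python
import numpy as np

hbm_bytes_per_s = 8.1e11     # assumed HBM bandwidth
bf16_flops_per_s = 1.97e14   # assumed peak bf16 FLOPs/s
critical_intensity = bf16_flops_per_s / hbm_bytes_per_s  # ~240

T = np.arange(128, 8192, 128)
attention_intensity = T / 2  # TS / (T + S) with S = T during prefill
print(T[attention_intensity > critical_intensity][0])  # 512: first FLOPs-bound length
```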
{% enddetails %}
Question 5: At what sequence length are self-attention FLOPs equal to the QKVO projection FLOPs?
{% details Click here for the answer. %}
This is purely a question of when $$12BT^2NH = 24BTDNH$$ (per layer, during training, assuming $$N = K$$), i.e. when $$T = 2D$$. For $$D = 4096$$, that's a sequence length of about 8k tokens.
{% enddetails %}
Question 6: Say we only save the output of each of the 7 main matmuls in a Transformer layer during our forward pass (Q, K, V, O + the three FFW matrices). How many extra FLOPs do we need to "rematerialize” during the backwards pass?
Question 7: DeepSeek v3 says it was trained for 2.79M H800 hours on 14.8T tokens (source). Given that it has 37B activated parameters, roughly what hardware utilization did they achieve? Hint: note that they used FP8 FLOPs without structured sparsity.
{% details Click here for the answer. %}
From the spec sheet here, we find 3,026 TFLOPs/s of FP8 performance with sparsity, or typically half this (1.513e15
FLOPs/s) without sparsity. 2.79M H800 hours means 2.79e6 * 1.513e15 * 60 * 60 = 1.52e25
total FLOPs. Given the activated parameter count of 37B, this training run should have used about 6 * 37e9 * 14.8e12 = 3.3e24
FLOPs. That means the FLOPs utilization is about 3.3e24 / 1.52e25 = 21.7%
.
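The same arithmetic in a couple of lines (numbers copied from the answer above):

```python
h800_fp8_flops = 1.513e15                   # per second, without sparsity
available = 2.79e6 * 3600 * h800_fp8_flops  # total FLOPs over 2.79M GPU-hours
needed = 6 * 37e9 * 14.8e12                 # 6 * activated params * tokens
print(needed / available)                   # ~0.216, i.e. the ~22% utilization above
```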
{% enddetails %}
Question 8: Mixture of Experts (MoE) models have E copies of the dense MLP block per layer, with each token routed to k of them. How much larger does our token batch size need to be for the MoE block to remain compute-bound, compared to the equivalent dense model?
{% details Click here for the answer. %}
Because we have E experts, we have to load E times as many MLP weight bytes from HBM, while each token only performs k experts' worth of MLP FLOPs, so the arithmetic intensity drops from roughly $$B$$ (dense) to roughly $$B \cdot k / E$$.

Therefore, we need a token batch size roughly $$E / k$$ times larger than in the dense case to reach the same arithmetic intensity and remain compute-bound.
{% enddetails %}
The traditional objection to scaling Transformers to very long context is that the attention FLOPs and memory usage scale quadratically with context length. While it's true that the attention QK product has shape $$[B, T, S, N]$$ and therefore grows quadratically in the context, two observations soften this considerably:
- As we noted in Section 4, even though this is quadratic, the attention FLOPs only dominate when $$S > 8 \cdot D$$, and especially during training the memory of a single attention matrix is small compared to all of the weights and activation checkpoints living in memory, especially when sharded.
- We don't need to materialize the full attention matrix in order to compute attention! We can compute local sums and maxes and avoid ever materializing more than a small chunk of the array. While the total FLOPs is still quadratic, we drastically reduce memory pressure.
This second observation was first made by Rabe et al. 2021 and later in the Flash Attention paper (Dao et al. 2022). The basic idea is to compute the attention in chunks of K/V, where we compute the local softmax and some auxiliary statistics, then pass them onto the next chunk which combines them with its local chunk. Specifically, we compute
- M: the running max of $$q \cdot k$$ over the sequence dimension
- O: the running full attention softmax over the sequence dimension
- L: the running denominator $$\sum_i \exp(q \cdot k_i - \text{running max})$$
With these, we can compute the new max, the new running sum, and the new output with only a constant amount of memory. To give a sketchy description of how this works, attention is roughly this operation:

$$\text{Attn}(Q, K, V) = \frac{\sum_i \exp(Q \cdot K_i - \max_j Q \cdot K_j) \cdot V_i}{\sum_l \exp(Q \cdot K_l - \max_j Q \cdot K_j)}$$
The max is subtracted for numerical stability; shifting by a constant doesn't affect the outcome, since $$\text{softmax}(z + c) = \text{softmax}(z)$$ for any constant $$c$$.
If we split the keys into two contiguous chunks $$K_1$$ and $$K_2$$ with local maxes $$M_1$$, $$M_2$$ and local denominators $$L_1$$, $$L_2$$, then we can combine these into the full softmax sum for the two chunks together by using

$$L = \exp(M_1 - \max(M_1, M_2)) \cdot L_1 + \exp(M_2 - \max(M_1, M_2)) \cdot L_2$$

where each local denominator $$L_i = \sum_{k \in K_i} \exp(q \cdot k - M_i)$$ is computed with respect to its own local max $$M_i$$.
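Here's a tiny numpy check of that combination rule (purely illustrative, not code from the Flash Attention paper):

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=(16,))
k = rng.normal(size=(128, 16))
logits = k @ q                        # q . k_i for every key

chunk1, chunk2 = np.split(logits, 2)  # two contiguous key chunks
M1, M2 = chunk1.max(), chunk2.max()   # local maxes
L1 = np.exp(chunk1 - M1).sum()        # local denominators
L2 = np.exp(chunk2 - M2).sum()

M = max(M1, M2)                       # combine the local statistics
L = np.exp(M1 - M) * L1 + np.exp(M2 - M) * L2

assert np.allclose(L, np.exp(logits - logits.max()).sum())
```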
This can be done for the full softmax as well, giving us a way of accumulating arbitrarily large softmax sums. Here's the full algorithm from the Flash Attention paper.
{% include figure.liquid path="assets/img/flash-algo.png" class="img-fluid" %}
From a hardware standpoint, this lets us fit our chunk of Q into VMEM (what the algorithm above calls on-chip SRAM), so we only have to stream the KV chunks from HBM on each iteration, improving the effective arithmetic intensity. We can also keep the running statistics in VMEM.
One last subtle point worth emphasizing is an attention softmax property that's used to make the Flash VJP (reverse-mode derivative) calculation practical for training. If we define an intermediate softmax array as:

$$S_{ij} = \frac{e^{q_i \cdot k_j}}{\sum_k e^{q_i \cdot k_k}}$$
In attention, we obtain dS from the reverse-mode dO and V arrays:

$$dS_{ij} = \sum_d dO_{id}\, V_{jd}$$
During the backpropagation of this gradient to Q and K, we pass it through the softmax, whose Jacobian gives the gradient of the logits:

$$d(q_i \cdot k_j) = S_{ij}\left(dS_{ij} - \sum_k S_{ik}\, dS_{ik}\right)$$

We exploit the identity

$$\sum_k S_{ik}\, dS_{ik} = \sum_k S_{ik} \sum_d dO_{id} V_{kd} = \sum_d dO_{id}\, O_{id}$$

which allows us to exchange a contraction along the large key length dimension with a local contraction along the feature depth dimension.
This replacement is crucial for being able to implement a sequence-block local calculation for the VJP, and enables further clever sharding schemes like ring attention.