SDPA is Entropy-Regularized Optimal Transport
Scaled dot product attention (SDPA) is usually introduced as something like this: "We compute dot products between queries and keys to measure similarity, then apply softmax to get a probability distribution." If you then prod further as to why we use softmax specifically, you would probably get a reply that "it's a differentiable approximation to argmax" or maybe that it "gives us nice gradients."
Both of these things are true, and almost certainly were the reasons that softmax was originally picked. But there's more going on. SDPA is the unique optimal solution to a well-defined mathematical problem, and the story of how we get there connects 18th-century French mathematics, modern reinforcement learning, and the geometry of probability distributions.
This post is based on a very interesting essay by Zartbot on WeChat covering the original paper by Litman (2025). The majority of what follows is a translation of his blog post, with additional explanation and connections added by me. You can find more of Zartbot's translated essays on his GitHub.
The Optimization Problem Hidden in Attention
What attention actually computes
Let's start with the standard SDPA for a single query. We have:
- A query vector $q \in \mathbb{R}^d$
- A set of key vectors $k_1, \dots, k_n \in \mathbb{R}^d$
- A temperature parameter $\tau$ (usually $\tau = \sqrt{d}$)
- The $\sqrt{d}$ comes from the $q$ and $k$ components having an approximate variance of 1; the dot product $q \cdot k$ is a sum of $d$ such terms, so it has variance $d$ and a standard deviation of $\sqrt{d}$. Without this scaling factor, attention logits would grow as $\sqrt{d}$.
The attention weights are:

$$
p_i = \frac{\exp(q \cdot k_i / \tau)}{\sum_{j=1}^{n} \exp(q \cdot k_j / \tau)}
$$
This gives you a probability distribution over the keys, and we use these weights to take a weighted average of the value vectors. So far the standard story is holding true, but the interesting part is that these attention weights turn out to be the unique solution to the following optimization problem (we'll get into what this problem actually is next):

$$
p^* = \arg\min_{p \in \Delta^{n-1}} \left[ \sum_{i=1}^{n} p_i c_i - \tau H(p) \right]
$$
where:
- $\Delta^{n-1}$ is the probability simplex (all $p$ with $p_i \geq 0$ and $\sum_i p_i = 1$)
- $c_i = -q \cdot k_i$ is the "cost" of attending to key $i$ (negative similarity)
- $H(p) = -\sum_i p_i \log p_i$ is Shannon entropy
Put simply, attention finds the distribution that minimizes expected cost while maximizing entropy. This perspective of attention tells us that it's actually balancing two competing objectives:
- Minimize cost: We want to concentrate attention on keys with a high similarity to the query
- Maximize entropy: Spread the attention out, don't put all your eggs in one basket
Figure: The cost-entropy tradeoff in attention. Left: attention weights over six keys. Right: the underlying query-key similarities (scores). As temperature τ increases, attention spreads from the highest-similarity keys toward uniform. The dashed line marks 1/n; keys above this line receive more than their "fair share" of attention.
Here, the temperature parameter $\tau$ controls the tradeoff. A large $\tau$ means more entropy (smoother attention), while a small $\tau$ means lower cost (peakier attention). You can derive the softmax formula directly from this optimization problem using basic calculus. If we set up the Lagrangian with the constraint $\sum_i p_i = 1$:

$$
\mathcal{L}(p, \lambda) = \sum_i p_i c_i + \tau \sum_i p_i \log p_i + \lambda \left( \sum_i p_i - 1 \right)
$$

Then take the derivative with respect to $p_i$ and set it to zero:

$$
\frac{\partial \mathcal{L}}{\partial p_i} = c_i + \tau \left( \log p_i + 1 \right) + \lambda = 0
$$

Solve for $p_i$:

$$
p_i = \exp\!\left( -\frac{c_i + \lambda + \tau}{\tau} \right) \propto \exp\!\left( -\frac{c_i}{\tau} \right)
$$

After normalizing to satisfy the constraint, we get exactly the softmax formula:

$$
p_i^* = \frac{\exp(-c_i / \tau)}{\sum_j \exp(-c_j / \tau)} = \frac{\exp(q \cdot k_i / \tau)}{\sum_j \exp(q \cdot k_j / \tau)}
$$
If you've seen statistical mechanics, this formula might look familiar. The Boltzmann distribution describes the probability of a system being in state $i$ with energy $E_i$ at temperature $T$:

$$
p_i = \frac{\exp(-E_i / k_B T)}{\sum_j \exp(-E_j / k_B T)}
$$

Our attention formula has the same structure: $p_i \propto \exp(-c_i / \tau)$, where the cost $c_i$ plays the role of energy. The temperature controls how sharply the distribution concentrates on low-energy (high-similarity) states. Both formulas arise from maximizing entropy subject to a constraint on expected value. In physics, that's expected energy, while in SDPA, it's expected cost.
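Before moving on, here's a small numpy sanity check (the sizes, sample count, and variable names are my own): it computes the softmax solution and confirms that random points on the simplex never achieve a lower value of the cost-minus-entropy objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 8
tau = np.sqrt(d)

q = rng.normal(size=d)
K = rng.normal(size=(n, d))

scores = K @ q                      # q . k_i
costs = -scores                     # c_i = -q . k_i
p_star = np.exp(scores / tau)
p_star /= p_star.sum()              # softmax(q . k / tau)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

def objective(p):
    return p @ costs - tau * entropy(p)   # expected cost minus tau * entropy

# No random point on the simplex should beat the softmax solution.
samples = rng.dirichlet(np.ones(n), size=10_000)
print(objective(p_star) <= min(objective(p) for p in samples))   # True
```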
Optimal Transport and Moving Dirt Efficiently
The optimization problem we talked about above turns out to have a name: one-sided entropic optimal transport. To understand where this comes from, let's take a brief detour to 18th century France.
The dirt-moving problem
In 1781, the French mathematician Gaspard Monge posed a presumably hypothetical question: suppose you have a pile of dirt and you want to fill in a hole of the same volume, what's the most efficient way to move the dirt?

More formally: you have a source distribution $\mu$ (the dirt pile) and a target distribution $\nu$ (the hole). You want to find a "transport plan" that moves mass from the source to the target while minimizing total transportation cost. Monge originally required each grain of dirt to go to exactly one location, a deterministic mapping. In 1942, Soviet mathematician Leonid Kantorovich relaxed this constraint and allowed dirt to be split, with fractions going to different destinations. This makes the problem convex and much easier to solve.

The output of Kantorovich's problem is a coupling $P$, a joint distribution whose marginals are $\mu$ and $\nu$. The entry $P_{ij}$ tells us how much mass flows from source $i$ to target $j$.
Two-sided vs one-sided constraints
Standard optimal transport requires bilateral constraints, i.e., the transport plan must satisfy:
- Row sums equal the source distribution
- Column sums equal the target distribution
This is like a matching problem: every source must send its mass somewhere, and every target must receive its allocated amount. Finding such a plan requires iterative algorithms like Sinkhorn-Knopp, which alternates between normalizing rows and columns until convergence. Sinkhorn Attention applies this bilateral framework to transformers, requiring the attention matrix to be doubly stochastic (both rows and columns sum to 1). This enforces a form of "fairness" where every key receives equal attention across all queries.
However, standard softmax attention is different: it only constrains rows (each query's attention sums to 1). There's no requirement on columns, so some keys might receive massive attention while others receive almost none. This one-sided constraint is why softmax has a closed-form solution. Without the column constraint, the optimization problem decouples across queries, and each row can be solved independently via a simple normalization. SDPA's efficiency comes from solving a relaxed transport problem. We remove the bilateral "fairness" constraint in exchange for a closed-form solution that doesn't require iteration.
Entropic regularization: making transport smooth
Classical optimal transport has a problem for us though: the solutions are often sparse and non-differentiable. In 2013, Cuturi discovered that adding an entropy term fixes both issues (Cuturi, 2013):

$$
\min_{P \geq 0} \; \sum_{i,j} P_{ij} C_{ij} \;-\; \varepsilon H(P), \qquad H(P) = -\sum_{i,j} P_{ij} \log P_{ij}
$$

subject to the marginal constraints (rows sum to $\mu$, columns sum to $\nu$).
This is called entropic optimal transport (EOT). The entropy term:
- Makes the problem strictly convex (meaning there is a unique solution)
- Makes the solution smooth and differentiable
- Enables fast algorithms (Sinkhorn iteration)
The optimal coupling takes the form $P = \operatorname{diag}(u)\, K \operatorname{diag}(v)$, where $K_{ij} = \exp(-C_{ij} / \varepsilon)$ is known as the Gibbs kernel. This kernel is strictly positive: every source-target pair gets some mass, though irrelevant pairs get exponentially little. This strict positivity is what makes the solution smooth and differentiable.
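For concreteness, here's a minimal numpy sketch of the bilateral case: Sinkhorn iteration alternately rescales the rows and columns of the Gibbs kernel until both marginals match. The toy problem and variable names are my own.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iters=500):
    """Entropic OT with both marginal constraints, via Sinkhorn iteration."""
    K = np.exp(-C / eps)                   # Gibbs kernel: strictly positive
    u, v = np.ones_like(mu), np.ones_like(nu)
    for _ in range(n_iters):
        u = mu / (K @ v)                   # rescale to match row marginals
        v = nu / (K.T @ u)                 # rescale to match column marginals
    return u[:, None] * K * v[None, :]     # coupling P = diag(u) K diag(v)

rng = np.random.default_rng(0)
m, n = 5, 7
C = rng.random((m, n))                     # cost of moving mass i -> j
mu = np.full(m, 1 / m)                     # source histogram
nu = np.full(n, 1 / n)                     # target histogram

P = sinkhorn(mu, nu, C, eps=0.1)
print(np.allclose(P.sum(axis=1), mu), np.allclose(P.sum(axis=0), nu))  # True True
```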
Attention as one-sided transport
Now, classical optimal transport also has two marginal constraints (source $\mu$ and target $\nu$). But in attention, the source is a single query with mass 1, and there's no constraint on how much attention each key receives. This is called one-sided optimal transport. The query distributes its attention budget to the keys, but we don't require each key to receive any particular amount. When you have only a source constraint and use entropy regularization, the problem becomes:

$$
\min_{p \in \Delta^{n-1}} \; \sum_i p_i c_i - \tau H(p)
$$

That looks familiar: it's exactly the optimization problem from the first section, and its solution is our attention formula! The solution is closed-form (softmax) rather than requiring iterative algorithms like Sinkhorn.
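Dropping the column constraint makes the loop in the earlier Sinkhorn sketch unnecessary: a single row normalization of the Gibbs kernel (i.e., softmax) already satisfies every remaining constraint. A tiny sketch with made-up scores:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0
scores = rng.normal(size=7)        # q . k_i for one query
costs = -scores                    # c_i = -(q . k_i)

gibbs_row = np.exp(-costs / tau)   # one row of the Gibbs kernel
p = gibbs_row / gibbs_row.sum()    # the one-sided EOT plan = softmax(scores / tau)
print(p.sum())                     # 1.0: the only constraint, already satisfied
```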
While standard SDPA uses a one-sided constraint (only row sums = 1), giving a closed-form softmax solution, the bilateral doubly stochastic constraint appears in other transformer contexts. For example, Sinkhorn attention applies it to attention matrices for "fairness" across keys. More recently, DeepSeek's mHC architecture uses the Sinkhorn-Knopp algorithm to constrain residual stream mixing matrices to be doubly stochastic, ensuring stable signal propagation across layers. In both cases, the Birkhoff polytope (doubly stochastic matrices) provides geometric structure, but for different purposes: attention allocation vs. residual stability.
Therefore, every time you compute attention weights, you're solving an optimal transport problem. The query is "moving mass" to the keys, and the softmax gives the unique transport plan that balances cost-efficiency with smoothness.
Backprop is Secretly Doing Reinforcement Learning
The forward pass solves an optimal transport problem. It turns out that backpropagation through attention is mathematically equivalent to a policy gradient algorithm from reinforcement learning.
Setting up the RL interpretation
If we think of attention as a "policy" that chooses which keys to attend to:
- The attention weights are action probabilities
- Each key is an action
- The "utility" of attending to key is This utility measures how much the loss would decrease if we attended a bit more to key .
The gradient formula
When we backpropagate through softmax attention, the gradient with respect to the pre-softmax score $z_i$ is:

$$
\frac{\partial \mathcal{L}}{\partial z_i} = p_i \left( \bar{u} - u_i \right)
$$

where $\bar{u} = \sum_j p_j u_j$ is the expected utility under the current attention distribution.
Let's look at the structure of this gradient:
- $p_i$: probability of action $i$
- $(\bar{u} - u_i)$: the negative advantage of action $i$
- The whole thing: probability times negative advantage
Note the sign: the gradient points away from high-utility keys (since we're computing $\partial \mathcal{L} / \partial z_i$ and want to minimize the loss). When we subtract this gradient during optimization, the sign flips and we get the familiar RL form. More on that below.
This is exactly the REINFORCE algorithm with a baseline! REINFORCE is the foundational policy gradient method from RL (Williams, 1992). The basic idea: if you want to learn a policy $\pi_\theta$ (a probability distribution over actions), you update it by:

$$
\Delta \theta = \eta \, R \, \nabla_\theta \log \pi_\theta(a)
$$

Basically: increase the log-probability of actions proportionally to their reward. The problem is high variance since rewards can be noisy. The fix is to subtract a baseline $b$ (typically the expected reward under the current policy):

$$
\Delta \theta = \eta \, (R - b) \, \nabla_\theta \log \pi_\theta(a)
$$

The term $(R - b)$ is called the advantage, or, how much better this action is than average. This is exactly what we derived for attention:
- $u_i$ plays the role of the reward $R$
- $p_i$ plays the role of the policy probability $\pi_\theta(a_i)$
- $\bar{u} = \sum_j p_j u_j$ is the baseline $b$
When we do gradient descent with learning rate $\eta$, we update $z_i \leftarrow z_i - \eta \, \partial \mathcal{L} / \partial z_i$. Substituting our gradient formula:

$$
z_i \leftarrow z_i + \eta \, p_i \left( u_i - \bar{u} \right)
$$

So if key $i$ has above-average utility ($u_i > \bar{u}$), its score increases. If it has below-average utility, its score decreases.
Thus, every time we train a transformer with standard backpropagation, the attention layers are learning via policy gradients. Keys that contribute above-average utility get "rewarded" (higher attention in the future), while keys that contribute below-average utility get "penalized." To be clear, this is not just a metaphor: it is literally the same mathematics. The gradient descent update to attention scores follows the exact form of a variance-reduced policy gradient.
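Here's a small numerical check of the identity above. It's a sketch under the simplifying assumption that the loss is $\mathcal{L} = -\sum_i p_i u_i$ with fixed utilities $u_i$, so that $u_i = -\partial\mathcal{L}/\partial p_i$ holds exactly; all names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
z = rng.normal(size=n)                  # pre-softmax scores
u = rng.normal(size=n)                  # per-key utilities (assumed fixed)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def loss(z):
    # toy loss whose per-key utility is exactly u_i = -dL/dp_i
    return -softmax(z) @ u

p = softmax(z)
analytic = p * (p @ u - u)              # p_i * (u_bar - u_i): the REINFORCE-with-baseline form

eps = 1e-6
numeric = np.array([
    (loss(z + eps * np.eye(n)[i]) - loss(z - eps * np.eye(n)[i])) / (2 * eps)
    for i in range(n)
])
print(np.allclose(analytic, numeric, atol=1e-6))   # True
```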
The Geometry That Connects Everything
We've now been through two elegant results:
- The forward pass is equivalent to solving an optimal transport problem
- The backward pass is policy gradient reinforcement learning
The underlying connection between these two results is information geometry.
Fisher Information and the curvature of probability space
The space of probability distributions isn't flat, but curved. The "natural" way to measure distances on this space uses the Fisher Information Matrix (FIM). For the attention distribution $p$, the FIM is:

$$
F = \operatorname{diag}(p) - p p^\top, \qquad F_{ij} = p_i \left( \delta_{ij} - p_j \right)
$$

This matrix tells us how "sensitive" the distribution is to changes in the underlying scores, and its entries are the covariances of the score functions. (This is the FIM in the logit, i.e. natural-parameter, coordinates. In score coordinates, the softmax Jacobian introduces a factor of $1/\tau$, giving $\frac{1}{\tau}\left(\operatorname{diag}(p) - p p^\top\right)$, which is why the Hessian appears below.)
The connection: Hessian = Fisher Information
This unification emerges from Lagrangian duality. Lagrangian duality is a technique from optimization where instead of solving a constrained problem directly (the "primal"), you create an equivalent "dual" problem by folding the constraints into the objective with multipliers. The dual is often easier to work with, and under certain conditions (which hold here), both problems have the same optimal value; this is called "strong duality." The primal EOT problem minimizes over distributions:

$$
V(s) = \min_{p \in \Delta^{n-1}} \left[ -\sum_i p_i s_i + \tau \sum_i p_i \log p_i \right]
$$
This says: find the distribution on the simplex that minimizes the negative expected score (i.e., maximizes expected similarity) plus an entropy penalty. The first term rewards high scores; the second term is negative entropy scaled by temperature, which penalizes overly concentrated distributions.
This primal value function has a useful property from the envelope theorem: its gradient equals the negative optimal solution, $\nabla_s V(s) = -p^*$. That negative sign is a bit awkward though, so Lagrangian duality gives us a cleaner object. When we form the dual problem (relaxing the simplex constraint with a Lagrange multiplier), the dual optimal value turns out to be:

$$
V^*(s) = \tau \log \sum_{i=1}^{n} \exp(s_i / \tau)
$$

This is the log-sum-exp (LSE) function, known in numerical computing as the (log of the) softmax denominator. By strong duality, $V^*(s) = -V(s)$, so that awkward negative sign flips and disappears. So the LSE function is the optimal value of an entropy-regularized transport problem. From here, its derivatives tell the whole story.
First derivative (gradient)
The gradient of LSE is the attention distribution, simply as a consequence of duality:

$$
\frac{\partial}{\partial s_i} \left[ \tau \log \sum_j \exp(s_j / \tau) \right] = \frac{\exp(s_i / \tau)}{\sum_j \exp(s_j / \tau)} = p_i
$$

The optimal solution to the primal problem equals the gradient of the dual optimal value.
Second derivative (Hessian)
Using the softmax Jacobian:

$$
\frac{\partial p_i}{\partial s_j} = \frac{1}{\tau} \, p_i \left( \delta_{ij} - p_j \right)
$$

So:

$$
\nabla_s^2 \, V^*(s) = \frac{1}{\tau} \left( \operatorname{diag}(p) - p p^\top \right) = \frac{1}{\tau} F
$$
The Hessian of LSE is (proportional to) the Fisher Information Matrix. This is why everything connects. LSE is the dual potential that generates both the forward solution (via its gradient) and the backward geometry (via its Hessian). The softmax Jacobian you compute in backprop is the Hessian of an optimal transport problem. Furthermore, the curvature of the optimal transport objective is the Fisher Information of the attention distribution. This identity connects everything.
The identity is a general property of exponential families, not something specific to attention. The softmax distribution is an exponential family with natural parameters $s / \tau$ and log-partition function $A(s) = \log \sum_i \exp(s_i / \tau)$. For any exponential family, the Hessian of the log-partition function equals the covariance of the sufficient statistics, which for categorical distributions is exactly the FIM. So the Hessian-FIM connection is guaranteed by the exponential family structure of softmax, and the OT framework gives us a new lens on why softmax takes this form.
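This identity is easy to verify numerically. A short sketch (toy values of my own) comparing the analytic expression $\frac{1}{\tau}\left(\operatorname{diag}(p) - p p^\top\right)$ against a finite-difference Hessian of the LSE potential:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 5, 2.0
s = rng.normal(size=n)

def lse(s):
    return tau * np.log(np.sum(np.exp(s / tau)))   # the dual potential

p = np.exp(s / tau)
p /= p.sum()

# Analytic Hessian: the softmax Jacobian, i.e. (1/tau) * (diag(p) - p p^T)
H_analytic = (np.diag(p) - np.outer(p, p)) / tau

# Finite-difference Hessian of the LSE potential
h = 1e-4
E = np.eye(n) * h
H_numeric = np.empty((n, n))
for i in range(n):
    for j in range(n):
        H_numeric[i, j] = (lse(s + E[i] + E[j]) - lse(s + E[i] - E[j])
                           - lse(s - E[i] + E[j]) + lse(s - E[i] - E[j])) / (4 * h**2)

print(np.allclose(H_analytic, H_numeric, atol=1e-5))   # True
```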
One matrix, three identities
So our deepest result is that a single matrix, $F = \operatorname{diag}(p) - p p^\top$, appears in all three frameworks:
| Framework | Role of $F$ | Interpretation |
|---|---|---|
| Optimal Transport | Hessian of the dual potential | Curvature of the value landscape |
| Information Geometry | Fisher Information Matrix | Natural metric on probability space |
| Reinforcement Learning | Control matrix | Transforms utility into the gradient |
As the saying goes: "Once is an accident, twice is a coincidence, three times is a pattern." The matrix $F$ is:
- The second derivative of the optimization objective (how curved is the landscape?)
- The metric tensor on the statistical manifold (how do you measure distance between distributions?)
- The structure of the softmax Jacobian, up to scaling (how do small score changes affect attention weights?)
These are three descriptions of the same geometric object. The optimization landscape is the statistical manifold is the control geometry. And backpropagation through softmax computes $F u$ (up to sign and the $1/\tau$ scaling), which is simultaneously:
- A Newton-like step using the optimization Hessian
- The natural gradient on probability space
- An advantage-weighted policy gradient
Why backprop implements natural gradients
This is a bit of a technical note, so feel free to skip it if you're not interested; it's a subtle but interesting point. The natural gradient is defined as:

$$
\tilde{\nabla} \mathcal{L} = F^{-1} \nabla \mathcal{L}
$$
It's the direction of steepest descent in the information geometry of the probability space, not Euclidean space. Natural gradient methods like K-FAC explicitly compute and invert the FIM. For attention, the standard gradient with respect to scores is:

$$
\nabla_z \mathcal{L} = -F u, \qquad \left( \nabla_z \mathcal{L} \right)_i = p_i \left( \bar{u} - u_i \right)
$$

Thus, the natural gradient would be:

$$
\tilde{\nabla}_z \mathcal{L} = F^{-1} \nabla_z \mathcal{L} \propto -u
$$

(taking the inverse on the simplex tangent space, where $F$ is nonsingular).
The natural gradient is proportional to the marginal utility vector. This means:
- Standard gradient descent on scores implicitly applies the FIM
- If you did natural gradient descent, you'd update scores proportionally to utility
- The FIM "converts" raw utility into the geometrically correct update direction
This is why the advantage-with-baseline form emerges automatically. The softmax Jacobian is the FIM (with scaling), so backprop through softmax automatically applies the information-geometric correction that RL algorithms (like natural policy gradient) have to compute explicitly.
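A quick numerical illustration of this point (a sketch with $\tau = 1$; since $F$ is singular along the all-ones direction, I use the pseudo-inverse, which acts as the inverse on the simplex tangent space):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
z = rng.normal(size=n)
u = rng.normal(size=n)                    # utilities, as in the RL section

p = np.exp(z - z.max())
p /= p.sum()
F = np.diag(p) - np.outer(p, p)           # Fisher information in logit coordinates

grad = p * (p @ u - u)                    # standard gradient, equal to -F @ u
nat_grad = np.linalg.pinv(F) @ grad       # "invert" the FIM on the tangent space

# The natural gradient is just the (mean-centered) negative utility vector.
print(np.allclose(nat_grad, -(u - u.mean()), atol=1e-8))   # True
```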
A Design Framework for New Attention Mechanisms
The most practical consequence of this theory is that it gives us a principled way to design new attention mechanisms. Instead of guessing and checking, we could:
- Choose an objective function with a different regularizer
- Derive the optimal solution
- Get a new attention mechanism with known properties
The variational framework
The general form is:

$$
p^* = \arg\min_{p \in \Delta^{n-1}} \left[ \sum_i p_i c_i + \Omega(p) \right]
$$

where $\Omega$ is a convex regularizer. Different choices of $\Omega$ give different mechanisms:
| Regularizer | Attention Mechanism | Properties |
|---|---|---|
| $-\tau H(p)$ (negative entropy) | Softmax | Dense, smooth, closed-form |
| $\frac{1}{2} \lVert p \rVert_2^2$ (squared L2 norm) | Sparsemax | Sparse (exactly zero weights) |
| Negative Tsallis entropy | $\alpha$-entmax | Tunable sparsity |
| $-\tau H(p) + \lambda \sum_i p_i d_i$ (entropy + position penalty) | ALiBi-like | Locality bias |
| $\tau \, \mathrm{KL}(p \parallel \pi)$ (KL to a prior $\pi$) | PriorSoftmax | Incorporates prior beliefs |
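To make the second row of the table concrete, here's a sketch of sparsemax (the Euclidean projection onto the simplex, following Martins & Astudillo, 2016) next to softmax; the example scores are my own.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex
    (the squared-L2-regularized case; Martins & Astudillo, 2016)."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    in_support = 1 + k * z_sorted > cumsum     # which sorted entries stay nonzero
    k_star = k[in_support][-1]
    threshold = (cumsum[k_star - 1] - 1) / k_star
    return np.maximum(z - threshold, 0.0)

z = np.array([1.2, 1.0, 0.2, -1.0])
print(softmax(z))     # dense: every key gets some weight
print(sparsemax(z))   # [0.6, 0.4, 0.0, 0.0]: low-scoring keys get exactly zero
```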
Example: Deriving ALiBi from first principles
ALiBi (Attention with Linear Biases) is a position encoding method that adds a distance-based penalty to attention logits. In the original paper, it was introduced as a heuristic. However, in this variational framework, it emerges naturally. If we add a linear position penalty to the entropy regularizer, where $d_i$ is the distance from the query position to key $i$:

$$
\Omega(p) = \tau \sum_i p_i \log p_i + \lambda \sum_i p_i \, d_i
$$

The solution is:

$$
p_i \propto \exp\!\left( \frac{q \cdot k_i - \lambda \, d_i}{\tau} \right)
$$
which is exactly ALiBi! The position penalty in the regularizer becomes an additive bias in the logits. The framework explains why ALiBi works: it's the optimal transport plan when you care not just about similarity, but also about distance.
PriorSoftmax: attention as Bayesian inference
The most conceptually rich variant uses KL divergence to a prior as the regularizer:

$$
p^* = \arg\min_{p \in \Delta^{n-1}} \left[ \sum_i p_i c_i + \tau \, \mathrm{KL}(p \parallel \pi) \right]
$$

where $\pi$ is a prior distribution over keys. The solution is:

$$
p_i^* = \frac{\pi_i \exp(q \cdot k_i / \tau)}{\sum_j \pi_j \exp(q \cdot k_j / \tau)}
$$
which has a nice Bayesian interpretation:
- $\pi_i$ is your prior belief about key $i$'s relevance (before seeing the query)
- $\exp(q \cdot k_i / \tau)$ is the likelihood (evidence from the query-key similarity)
- $p_i^*$ is the posterior, your updated belief after combining prior and evidence

Standard softmax is the special case where the prior is uniform ($\pi_i = 1/n$). This reveals something about ALiBi as well: its additive position bias in the logits is equivalent to a log-prior:

$$
-\lambda \, d_i = \tau \log \pi_i \quad \Longleftrightarrow \quad \pi_i \propto \exp(-\lambda \, d_i / \tau)
$$
In other words, ALiBi encodes a prior belief that nearby keys are more relevant. The "heuristic" position penalty is principled Bayesian reasoning.
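A short numpy sketch of this equivalence (the distances and penalty strength $\lambda$ are made up): multiplying by a locality prior and renormalizing gives exactly the same weights as adding the log-prior bias $-\lambda d_i$ to the logits.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, lam = 8, 1.0, 0.5
scores = rng.normal(size=n)               # q . k_i
dist = np.arange(n, 0, -1).astype(float)  # toy distance from each key to the query

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Bayesian view: posterior ∝ prior × likelihood
prior = softmax(-lam * dist / tau)        # ALiBi-like locality prior
posterior = prior * np.exp(scores / tau)
posterior /= posterior.sum()

# ALiBi view: the log-prior is just an additive bias on the logits
biased = softmax((scores - lam * dist) / tau)

print(np.allclose(posterior, biased))     # True
```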
Implications for Transformer Design
Based on this framework and some other research such as the Physics of Language Models (PoLM) series by Zeyuan Allen-Zhu and collaborators, we can put together some implications for understanding and designing transformers. Note that this section is getting more into hypotheses than concrete math.
Why linear attention loses something fundamental
Linear attention methods (like those in Mamba or linear transformers) approximate the softmax kernel:

$$
\exp\!\left( \frac{q \cdot k}{\tau} \right) \approx \phi(q)^\top \phi(k)
$$

for some feature map $\phi$, which enables $O(n)$ complexity instead of $O(n^2)$. This approximation destroys the Gibbs kernel structure that defines the EOT solution. Recall that the Gibbs kernel is strictly positive, guaranteeing the optimal solution lives in the simplex interior where attention weights are always nonzero (just exponentially small for irrelevant keys). Feature map kernels can be zero, so the solution no longer satisfies the transport constraints at all. The entropy-cost tradeoff that makes softmax attention principled simply does not exist in the linear approximation.
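To see what the approximation looks like in practice, here's a sketch of non-causal linear attention with the common $\mathrm{elu}(x)+1$ feature map; the sizes and the comparison are my own, and the point is simply that the two weightings disagree.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 128, 16
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

def phi(x):
    return np.where(x > 0, x + 1.0, np.exp(x))   # elu(x) + 1: a positive feature map

# Softmax attention: O(n^2), row-normalized Gibbs kernel
S = np.exp(Q @ K.T / np.sqrt(d))
out_softmax = (S / S.sum(axis=1, keepdims=True)) @ V

# Linear attention: O(n), replaces exp(q.k) with phi(q).phi(k) and uses associativity
KV = phi(K).T @ V                       # precompute sum_j phi(k_j) v_j^T : (d, d)
Z = phi(K).sum(axis=0)                  # normalizer terms: (d,)
out_linear = (phi(Q) @ KV) / (phi(Q) @ Z)[:, None]

# The outputs differ: the Gibbs-kernel (EOT) structure is not preserved
print(np.abs(out_softmax - out_linear).max())
```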
The EOT framework predicts that linear attention should fail when you need precise "transport", when it matters exactly which keys attend to which queries. This matches empirical findings from PoLM Part 4.1 (Allen-Zhu, 2025) that linear attention struggles with multi-hop reasoning tasks where information needs to be retrieved precisely across multiple steps.
Why depth matters more than width for reasoning
Each attention layer solves one optimal transport problem, one routing decision about where to send information. Multi-hop reasoning requires composing these decisions: first retrieve fact A, then use A to look up fact B, then use B to answer the question. Width (more heads, larger dimensions) increases the capacity of each decision, but depth increases the number of sequential decisions. You cannot parallelize "use the output of step 1 as input to step 2" by making step 1 wider.
This aligns with empirical findings from PoLM Part 2.1, which shows that deeper models outperform wider models on reasoning tasks even with similar parameter counts. Their probing experiments make this concrete: shallower layers correctly identify dependencies close to the query, while deeper layers are required for dependencies further away. The model performs layer-by-layer reasoning, recursively building up the dependency graph across depth.
The dispersion problem: why attention struggles with length
The EOT framework predicts a fundamental limitation. DeepMind's paper 'Softmax is Not Enough' (Veličković et al., 2024) proves:

If inputs have bounded range and the temperature is constant, then as the number of keys $n \to \infty$, every attention weight must approach $0$.

In other words, attention necessarily disperses with sequence length. No matter how relevant one key is, with enough keys in the context, its attention weight gets diluted. Why does this happen? The softmax denominator grows with $n$. If scores are bounded (say, in $[-B, B]$), then each weight is at most $\frac{e^{B/\tau}}{n \, e^{-B/\tau}} = \frac{e^{2B/\tau}}{n}$, which vanishes as $n \to \infty$.
For transformers, this is concerning. The inputs to attention (after layer norm) typically have a bounded range, so theoretically, attention should degrade on very long sequences. Indeed, empirically we can see that it does too.
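A quick sketch of the dispersion effect (the bound $B$ and the sequence lengths are arbitrary choices of mine): even a maximally relevant key's weight is capped at $e^{2B/\tau}/n$ and shrinks as the context grows.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, B = 1.0, 3.0                           # fixed temperature, scores bounded in [-B, B]

for n in [64, 1024, 16384, 262144]:
    s = rng.uniform(-B, B, size=n)
    s[0] = B                                # key 0 is as relevant as possible
    p = np.exp(s / tau)
    p /= p.sum()
    print(f"n={n:>7}  max weight={p.max():.5f}  bound={np.exp(2 * B / tau) / n:.5f}")
```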
Temperature selection
One possible solution to the problem described above is adaptive temperature: compute the entropy of the attention distribution and adjust $\tau$ to maintain a target entropy level. In the EOT framework, this means dynamically adjusting the cost-entropy tradeoff based on how many keys you're choosing among. This connects to some recent work on long-context attention (like Ring Attention, landmark tokens, etc.): they're all fighting the same underlying mathematical constraint.

The standard $1/\sqrt{d}$ scaling ensures that the scaled dot product $q \cdot k / \sqrt{d}$ has variance roughly independent of dimension. In the EOT framework, this means the cost scale stays constant, so the entropy-cost tradeoff remains stable. The theory suggests that temperature should perhaps scale with sequence length too: longer sequences mean more keys, so you might want stronger entropy regularization to prevent attention from becoming too sparse. This is an open research direction as well.
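One way to make that concrete (this is a hypothetical sketch of the idea, not a mechanism from any of the cited papers): pick $\tau$ per query by bisection so that the attention entropy hits a target, e.g. the log of a desired "effective number of keys."

```python
import numpy as np

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

def adaptive_softmax(scores, target_entropy, lo=1e-3, hi=1e3, iters=60):
    """Hypothetical sketch: choose tau so the attention entropy hits a target.
    Works because softmax entropy increases monotonically with temperature."""
    for _ in range(iters):
        tau = np.sqrt(lo * hi)                      # geometric bisection
        z = scores / tau
        p = np.exp(z - z.max())
        p /= p.sum()
        if entropy(p) < target_entropy:
            lo = tau                                # too peaky -> raise temperature
        else:
            hi = tau                                # too flat  -> lower temperature
    return p, tau

rng = np.random.default_rng(0)
scores = rng.normal(size=4096)
p, tau = adaptive_softmax(scores, target_entropy=np.log(64))   # ~64 "effective" keys
print(round(tau, 3), round(entropy(p), 3))                     # entropy ≈ log(64) ≈ 4.159
```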
Connections to Mechanistic Interpretability
This is some of my own thinking, and it may be entirely wrong, but maybe it provokes some thought about the bridge between this theoretical framework and empirical interpretability research.
What the EOT framework says about attention head circuits
In A Mathematical Framework for Transformer Circuits (Elhage et al., 2021), Anthropic decomposes attention heads into two separable operations:
- QK circuit: $W_Q^\top W_K$ (where $W_Q$ is the query projection and $W_K$ is the key projection); this circuit determines which tokens attend to which, i.e., the attention pattern
- OV circuit: $W_O W_V$ (where $W_O$ is the output projection and $W_V$ is the value projection); this circuit determines what happens when a token is attended to (the effect on logits)
The EOT framework gives a new perspective on this decomposition. The QK circuit defines the cost function: the attention score is the negative cost in the transport problem, so the QK circuit is really specifying "how expensive is it to route information from key $i$ to this query?" The optimal transport plan (attention weights) then minimizes this cost subject to entropy regularization. The OV circuit defines the "cargo": once the transport plan is determined, what actually gets moved is governed by the OV circuit. In OT terms, the QK circuit determines the transport plan, the OV circuit determines what gets transported. This separation is exactly the structure of optimal transport: first solve for the optimal plan, then execute the transport.
Induction heads as learned transport policies
The In-context Learning and Induction Heads paper (Olsson et al., 2022) identified induction heads: attention heads that implement the `[A][B] ... [A] -> [B]` pattern (finding previous occurrences of the current token and predicting what came next). These require:
- A "copying" OV circuit (positive eigenvalues, attended tokens boost their own logits)
- A "prefix matching" QK circuit via K-composition with a previous-token head
From the EOT perspective, the model has learned a content-dependent cost function. The QK circuit doesn't just measure raw similarity, through K-composition it measures "is this position preceded by a token matching my current token?" This is a structured, algorithmic cost function that emerges from training. The induction head's attention pattern is still the optimal transport solution, but for a learned, compositional cost function that encodes temporal structure.
Why circuits are "separable": an OT explanation
Another key insight from the paper is that you can "freeze" attention patterns and study the OV circuit independently. The logits then become a linear function of tokens. The EOT framework explains why this works: the softmax attention weights are a deterministic function of the scores (the unique EOT solution). The QK circuit produces scores, then softmax produces the plan, then OV acts. Once you fix the scores (freeze the attention pattern), the OV circuit acts linearly.
Attention sinks: a necessary consequence of the simplex constraint
Attention sinks are an emergent phenomenon where models allocate disproportionate attention to the first tokens (especially BOS), even when semantically irrelevant. The EOT framework explains not just why it happens, but why it must happen due to the one-sided constraint.
In EOT, the source must send all its mass somewhere: the row sums must equal 1. There's no option for "I don't want to transport anything this round." The query must attend to something, always. This creates a problem when the optimal action is indeed "don't attend to anything meaningful." The softmax/EOT solution space doesn't include "no transport."
Attention sinks as learned low-cost dump destinations. The model learns to make certain tokens (BOS, early tokens) have consistently low "cost", not because they're semantically relevant, but because:
- They're always available (visible to all subsequent tokens in causal attention)
- Their OV circuit contribution can be made nearly null (attending to them doesn't corrupt the output)
- They absorb the "excess" probability mass that must go somewhere
Why evicting breaks streaming. In sliding window attention, removing sink tokens removes the low-cost dump destination. The excess mass that must go somewhere now gets forced onto semantically meaningful recent tokens, corrupting the attention pattern. This explains, for example, why StreamingLLM (Xiao et al., 2023) must permanently retain the first few "sink" tokens even as the window slides.
The variational framework predicts this. When all costs are similar with no clear semantic match, entropy maximization would spread attention uniformly. But if one token has learned to be consistently "cheap" (low $c_i$), it absorbs disproportionate mass. The model has learned to exploit the EOT objective by creating designated low-cost destinations for mandatory (but unwanted) attention mass.
Composition and multi-step transport
The three types of composition (Q-, K-, V-composition) correspond to different ways of building complex transport:
- K-composition: Modify the cost function using information from previous layers (like induction heads)
- Q-composition: Modify the "source" of the transport query
- V-composition: Chain transports together (the output of one transport becomes input to another)
Multi-layer transformers are solving sequential transport problems where each layer's cost function can depend on previous layers' transport decisions, which is much richer than single-step OT.
Multi-head attention as multi-marginal transport
The question of whether MHA can be represented as multi-marginal transport was posed in Zartbot's essay too, so I took a stab at a hypothesis for it. Standard MHA runs independent one-sided EOT problems in parallel. Recall our core result from earlier: each query's attention weights solve $\min_{p \in \Delta^{n-1}} \sum_i p_i c_i - \tau H(p)$. In MHA, each head $h$ has its own score vector $s^{(h)}$, so each head independently solves its own version of this problem with no coupling between heads. But in OT theory, multi-marginal transport considers something richer: coupling $N$ distributions simultaneously.
The formulation looks like this. Multi-marginal OT couples $N$ histograms $\mu_1, \dots, \mu_N$ by solving:

$$
\min_{P \in \Pi(\mu_1, \dots, \mu_N)} \; \sum_{i_1, \dots, i_N} C_{i_1 \dots i_N} \, P_{i_1 \dots i_N} \; + \; \varepsilon \sum_{i_1, \dots, i_N} P_{i_1 \dots i_N} \log P_{i_1 \dots i_N}
$$

where $P$ is now a tensor (not a matrix) with $N$ indices, and the constraint set $\Pi(\mu_1, \dots, \mu_N)$ requires each marginal to match. The key difference from standard OT: multi-marginal OT finds a joint coupling across all $N$ distributions simultaneously, not independent pairwise couplings.

If we treated MHA as multi-marginal EOT, we'd want a joint tensor $P \in \mathbb{R}^{n \times n \times \cdots \times n}$ ($H$ times), where $P_{i_1 i_2 \dots i_H}$ represents the probability that head 1 attends to key $i_1$, head 2 attends to key $i_2$, etc., jointly. The cost tensor could include interaction terms beyond the per-head costs:

$$
C_{i_1 \dots i_H} = \sum_{h=1}^{H} c^{(h)}_{i_h} \; + \; \sum_{h < h'} \gamma \, \psi(i_h, i_{h'})
$$
Those interaction terms are where it gets interesting. You could penalize redundancy (heads attending to the same key) or reward coordination (heads attending to semantically related keys).
The mathematical obstacle
The problem is dimensionality. For $H$ heads and $n$ keys, the joint tensor has $n^H$ entries. Even with entropic regularization and Sinkhorn, this is intractable for realistic values of $H$ and $n$. However, Peyré & Cuturi do note that special cost structures can reduce this. When the cost decomposes nicely, the computation drops from exponential in $H$ to a sequence of matrix-vector multiplications. Whether attention costs have such structure is an open question.
Head specialization as a constraint
We know from mechanistic interpretability that heads specialize: some do induction, some do previous-token attention, some do positional patterns, some become attention sinks, etc. This specialization suggests heads are solving different problems that would each have different cost functions, not coordinating on the same one. Multi-marginal OT would force coordination, which might actually hurt by preventing the diversity that makes MHA effective. Any viable multi-marginal formulation may need to preserve this ability to specialize while adding coordination only where it helps.
A more tractable alternative: Wasserstein barycenters
Instead of full multi-marginal OT, we could consider Wasserstein barycenters. The barycenter of distributions $\mu_1, \dots, \mu_H$ with weights $w_h$ is:

$$
\bar{\mu} = \arg\min_{\mu} \; \sum_{h=1}^{H} w_h \, W_\varepsilon(\mu, \mu_h)
$$

where $W_\varepsilon$ is the (entropy-regularized) Wasserstein distance.
This finds a single "consensus" distribution that's close to all heads, computable via Sinkhorn iterations. For MHA, this would mean: instead of concatenating head outputs, compute their barycenter in attention space and get a single attention pattern representing a "geometric average" of what all heads want to attend to.
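For what it's worth, here's a sketch of what that aggregation could look like, using iterative Bregman projections for the entropic barycenter (Benamou et al., 2015). The cost over key positions, the number of heads, and all names are my own illustrative choices, not anything proposed in the cited papers.

```python
import numpy as np

def entropic_barycenter(mus, C, eps=0.05, weights=None, iters=200):
    """Entropic Wasserstein barycenter of histograms on a shared support,
    via iterative Bregman projections (Benamou et al., 2015). A sketch."""
    H, n = mus.shape
    w = np.full(H, 1.0 / H) if weights is None else weights
    K = np.exp(-C / eps)                     # shared Gibbs kernel
    v = np.ones((H, n))
    for _ in range(iters):
        u = mus / (K @ v.T).T                # scale to match each head's marginal
        KTu = (K.T @ u.T).T                  # shape (H, n)
        b = np.exp((w[:, None] * np.log(KTu)).sum(axis=0))   # weighted geometric mean
        v = b[None, :] / KTu
    return b / b.sum()

# Toy example: "attention distributions" from 4 heads over 16 keys,
# with a cost that penalizes moving mass between distant key positions.
rng = np.random.default_rng(0)
H, n = 4, 16
mus = rng.dirichlet(np.ones(n), size=H)
pos = np.arange(n)
C = (pos[:, None] - pos[None, :]) ** 2 / n**2
bary = entropic_barycenter(mus, C)
print(bary.round(3))
```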
A hybrid conjecture
This connects back to our earlier discussion of depth. Perhaps the right level of analysis isn't within a layer but across layers:
- Within a layer: Heads remain independent (different cost functions, encouraging specialization)
- Across layers: The residual stream aggregates head outputs, and this accumulation, rather than any single layer's attention, is where "coordination" emerges
Standard MHA's concatenation and linear projection is a linear combination of independently transported values. Wasserstein barycenters would be nonlinear, computed via optimization rather than matrix multiplication. The question of whether MHA's simplicity is a feature (efficiency, permitting specialization) or a limitation (no principled coordination) remains open. A concrete test: replace the linear projection with entropic barycenter aggregation and measure whether this helps or hurts on tasks requiring cross-head coordination. My suspicion is that independent heads are the right choice within a layer, and that coordination emerges across layers through the residual stream rather than within layers through explicit coupling.
The Bigger Picture
Let's take a step back and reflect on what we've covered.
Softmax attention is overdetermined
We've seen that softmax attention is the answer to multiple independent questions:
- Optimization: What distribution minimizes cost while maximizing entropy?
- Statistics: What's the maximum entropy distribution given expected similarity?
- Even physics: What's the Boltzmann distribution with energy = negative similarity?
- Learning: What produces advantage-based policy gradients under backprop?
These questions come from different fields, but all give the same answer, which suggests something more fundamental.
The unreasonable effectiveness of softmax
This helps explain why transformers work so well. Softmax attention is the unique solution to a natural optimization problem that balances exploration (entropy) and exploitation (similarity), and its learning dynamics follow the geometry of probability distributions.
Other attention mechanisms can work too, but they're solving different problems. The framework tells you exactly what tradeoffs you're making, e.g., Sparsemax prioritizes sparsity over smoothness, linear attention prioritizes speed over optimality, and so on.
Quick reference table
| Optimal Transport | Information Geometry | RL/Control |
|---|---|---|
| Cost | Negative log-likelihood | Negative reward |
| Entropy regularization | Temperature / FIM scale | Exploration parameter |
| Dual potential | Log-partition function | Value function |
| Transport plan | Exponential family mean params | Policy |
What remains open
Again taken from the original essay, several questions remain:
- Multi-head geometry: See the discussion above. Full multi-marginal OT coordination between heads appears both intractable ($n^H$ tensor entries) and potentially counterproductive (it may prevent beneficial specialization), but barycenter-based aggregation of head outputs remains unexplored.
- Position encoding: RoPE rotates queries and keys, implicitly defining a position-dependent similarity metric. What cost function does this correspond to in the EOT framework?
- Training dynamics: The FIM governs learning geometry. Can we use this to predict which layers will learn fastest, or design better optimizers for attention?
- Scaling laws: Does the optimal transport perspective shed light on scaling laws? What happens to the "cost landscape" as model size grows?
References
Primary Source
Litman, E. (2025). Scaled-Dot-Product Attention as One-Sided Entropic Optimal Transport. arXiv preprint arXiv:2508.08369. https://arxiv.org/abs/2508.08369
This Post Builds On
渣B (Zartbot). (2025, August 21). 大模型时代的数学基础(9): SDPA、最优传输、强化学习与信息几何的联系 [Mathematical Foundations in the Era of Large Models (9): The Connection Between SDPA, Optimal Transport, Reinforcement Learning, and Information Geometry]. WeChat/微信公众号. https://mp.weixin.qq.com/s?__biz=MzUxNzQ5MTExNw==&mid=2247494688&idx=1&sn=3d589f6d4be56ee372d5db4f8631b0cc
Optimal Transport
Cuturi, M. (2013). Sinkhorn Distances: Lightspeed Computation of Optimal Transport. Advances in Neural Information Processing Systems 26 (NeurIPS 2013). https://arxiv.org/abs/1306.0895
Peyré, G. & Cuturi, M. (2019). Computational Optimal Transport. Foundations and Trends in Machine Learning, 11(5-6), 355-607. https://arxiv.org/abs/1803.00567
Information Geometry
Amari, S. (1998). Natural Gradient Works Efficiently in Learning. Neural Computation, 10(2), 251-276.
Reinforcement Learning
Williams, R.J. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8(3-4), 229-256.
Physics of Language Models
Allen-Zhu, Z. (2025). Physics of Language Models: Part 4.1, Architecture Design and the Magic of Canon Layers. NeurIPS 2025. arXiv:2512.17351. https://arxiv.org/abs/2512.17351
Ye, T., Xu, Z., Li, Y., & Allen-Zhu, Z. (2024). Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process. ICLR 2025. https://arxiv.org/abs/2407.20311
Mechanistic Interpretability
Elhage, N., Nanda, N., Olsson, C., et al. (2021). A Mathematical Framework for Transformer Circuits. Transformer Circuits Thread, Anthropic. https://transformer-circuits.pub/2021/framework/index.html
Olsson, C., Elhage, N., Nanda, N., et al. (2022). In-context Learning and Induction Heads. Transformer Circuits Thread, Anthropic. https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html
Attention and Long Context
Xiao, G., Tian, Y., Chen, B., Han, S., & Lewis, M. (2023). Efficient Streaming Language Models with Attention Sinks. ICLR 2024. arXiv:2309.17453. https://arxiv.org/abs/2309.17453
Veličković, P., Perivolaropoulos, C., Barbero, F., & Pascanu, R. (2024). Softmax is Not Enough (for Sharp Out-of-Distribution Generalization). arXiv preprint arXiv:2410.01104. https://arxiv.org/abs/2410.01104
Transformer Architecture
Xie, Z., Wei, Y., Cao, H., et al. (2025). mHC: Manifold-Constrained Hyper-Connections. arXiv preprint arXiv:2512.24880. https://arxiv.org/abs/2512.24880