large language models, often implemented as autoregressive transformer models.
GPTs and friends
Most variants of LLMs are decoder-only (Radford et al., 2019)
Have “capabilities” to understand natural language.
Exhibit emergent behaviour resembling intelligence, but probably not AGI, given the observer-expectancy effect.
One way or another, this is a form of behaviourism via reinforcement learning: the model is “told” what is good or bad and acts accordingly towards users. However, this induces confirmation bias, where one embeds one's own prejudices about the problem into the alignment.
Scalability
Incredibly hard to scale, mainly due to their large memory footprint and the memory allocated for tokens (the KV cache).
Optimization
See also: this talk
- Quantization: reduce the computational and memory costs of inference by representing weights and activations with low-precision data types (see the sketch after this list)
- Continuous batching: implement PagedAttention with a custom scheduler to manage swapping the KV cache for better resource utilisation
- Byte-Latent Transformer: uses entropy-based patching over raw bytes instead of token-level decoding. 1
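The following is a minimal sketch of per-tensor symmetric int8 weight quantization in PyTorch, to make the memory/precision trade-off concrete; real inference engines fuse this with custom kernels and calibration, so this is illustrative only.

```python
# Minimal sketch: per-tensor symmetric int8 quantization of a weight matrix.
import torch

def quantize_int8(w: torch.Tensor):
    """Map float32 weights to int8 plus a single scale factor."""
    scale = w.abs().max() / 127.0                       # largest magnitude maps to 127
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4096, 4096)
q, scale = quantize_int8(w)                             # 4x smaller than float32
w_hat = dequantize(q, scale)
print((w - w_hat).abs().max())                          # error stays within ~scale/2
```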
on how we are being taught.
How would we assess thinking?
Similar to the calculator, it simplifies things and increases accessibility for the masses, but in doing so we lose the value in the act of doing the math.
We do math to internalize concepts and to practice thinking coherently. Similarly, we write to help crystallise our ideas, and in the process we improve through the act of putting them down.
The process of rephrasing and arranging sentences poses a challenge for the writer, and in doing so teaches you how to think coherently. Writing essays is an exercise for students to articulate their thoughts, rather than a test of their understanding of the material.
on ethics
See also Alignment.
There are ethical concerns with the act of “hallucinating” content; alignment research is therefore crucial to ensure that models do not produce harmful content.
For medical care, the ethical implications require us to develop more interpretable models.
as philosophical tool.
By creating better representations of the world for both humans and machines to understand, we can truly have assistive tools that enhance our understanding of the world around us.
AI generated content
Don’t shit where you eat; garbage in, garbage out. The quality of the generated content is highly dependent on the quality of the data the model was trained on, and models are incredibly sensitive to data variance and biases.
Bland doublespeak
See also: All the better to see you with
Here's a real problem though. Most people find writing hard and will get AIs to do it for them whenever they can get away with it. Which means bland doublespeak will become the default style of writing. Ugh.
— Paul Graham (@paulg) February 25, 2024
machine-assisted writing
source: gwern.net
Idea: use sparse autoencoders to guide idea generation
Good-enough
"How did we get AI art before self-driving cars?" IMHO this is the single best heuristic for predicting the speed at which certain AI advances will happen. pic.twitter.com/yAo6pwEsxD
— Joshua Achiam (@jachiam0) December 1, 2022
This only holds when you need a “good-enough” artifact, where the value of the result outweighs the process.
However, one should always consider putting in the work rather than settling for “good enough”. In working through a problem, one learns about bottlenecks and problems to be solved, gaining invaluable experience that would otherwise not be achieved by relying on interaction with the models alone.
as search
These models are incredibly useful for summarization and information gathering. With RAG or other CoT tooling, you can augment them and improve search efficiency by quite a lot.
notable mentions:
- perplexity.ai: RAG-first search engine
- explorer.globe.engineer: tree-based information retrieval
- Exa labs
- You.com
Programming
Overall should be a net positive, but it’s a double-edged sword.
as end-users
I think it’s likely that soon all computer users will have the ability to develop small software tools from scratch, and to describe modifications they’d like made to software they’re already using
as developers
Tools that lower the barrier to entry are always a good thing, but they will often lead to even higher discrepancies in software quality.
Increased productivity, but also increased technical debt: generated code is mostly “bad” code, and we often have to nudge it along with a lot of prompt engineering.
mechanistic interpretability
whirlwind tour, initial exploration, glossary
The subfield of alignment that delves into reverse engineering neural networks, especially LLMs.
To attack the curse of dimensionality, the question remains: how do we hope to understand a function over such a large space, without an exponential amount of time? 2
inference
application in the wild: Goodfire and Transluce
How would we do inference with SAEs?
Quick 🧵 and some of quick introspection into how they might run inference https://t.co/1N3JrxFHSp
— aaron (@aarnphm_) September 25, 2024
idea: treat SAEs as a `logit_processor`, similar to guided decoding.
Current known bottlenecks in vLLM:
- `logit_processor`s are row-wise, i.e. logits are processed synchronously and blocking 3 (see the sketch below)
- no SPMD currently implemented
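Below is a minimal sketch of that row-wise interface, assuming a vLLM-style logits-processor signature (a callable over one sequence's previous token ids and its logits). The fixed token bias is a stand-in for whatever an SAE-derived steering signal would contribute once mapped into vocabulary space; in practice SAEs act on hidden activations rather than logits.

```python
# Hedged sketch: a row-wise logits processor (called once per sequence per
# decoding step), which is why this interface is synchronous and blocking.
from typing import List
import torch

def make_biasing_processor(boost_ids: List[int], bias: float = 2.0):
    def processor(prev_token_ids: List[int], logits: torch.Tensor) -> torch.Tensor:
        # placeholder for an SAE-derived adjustment: here, a fixed token bias
        logits[boost_ids] += bias
        return logits
    return processor

proc = make_biasing_processor(boost_ids=[42, 1337])
row = torch.randn(32_000)               # one row of vocabulary logits
steered = proc([1, 2, 3], row.clone())  # in vLLM, passed via SamplingParams(logits_processors=[proc])
```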
steering
refers to the process of manually modifying certain activations and hidden states of the neural net to influence its outputs
For example, the following is a toy example of how a decoder-only transformer (e.g. GPT-2) generates text given the prompt “The weather in California is”

```mermaid
flowchart LR
  A[The weather in California is] --> B[H0] --> D[H1] --> E[H2] --> C[... hot]
```
To steer the model, we modify certain layers, amplifying a chosen feature with scale 20 4

```mermaid
flowchart LR
  A[The weather in California is] --> B[H0] --> D[H1] --> E[H3] --> C[... cold]
```
One usually uses techniques such as sparse autoencoders to decompose model activations into a set of interpretable features.
For feature ablation, we observe that feature activations can be strengthened or weakened to directly influence the model’s outputs.
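A minimal sketch of this kind of intervention with Hugging Face GPT-2, adding a scaled direction to one block's output via a forward hook; the direction here is a random placeholder, whereas in practice it would come from an SAE feature or a contrastive steering vector.

```python
# Hedged sketch: steer GPT-2 by adding a scaled feature direction to the
# residual stream at one layer using a forward hook.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

layer_idx, scale = 6, 20.0
direction = torch.randn(model.config.n_embd)   # placeholder feature direction
direction = direction / direction.norm()

def steer(module, inputs, output):
    hidden = output[0] + scale * direction     # block output is a tuple; index 0 is hidden states
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steer)
ids = tok("The weather in California is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=8, do_sample=False)
handle.remove()
print(tok.decode(out[0]))
```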
A few examples: (Panickssery et al., 2024) use contrastive activation additions to steer Llama 2.
contrastive activation additions
intuition: use a contrast pair for steering vector additions at certain activation layers
Uses the mean difference, which produces a difference vector similar to PCA:
Given a dataset $\mathcal{D}$ of prompts $p$ with positive completions $c_p$ and negative completions $c_n$, we calculate the mean difference $v_{\text{MD}}$ at layer $L$ as follows:

$$v_{\text{MD}} = \frac{1}{|\mathcal{D}|} \sum_{(p,\, c_p,\, c_n) \in \mathcal{D}} \big[ a_L(p, c_p) - a_L(p, c_n) \big]$$

where $a_L(\cdot)$ denotes the activations at layer $L$.
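A hedged sketch of computing such a mean-difference vector; `get_activation` is a hypothetical helper (not from the paper's code) that returns the layer-$L$ activation at the final token of a prompt plus completion.

```python
# Hedged sketch: CAA-style steering vector as the mean difference of layer-L
# activations between positive and negative completions.
import torch

def mean_difference(dataset, layer: int, get_activation) -> torch.Tensor:
    diffs = []
    for prompt, pos, neg in dataset:                 # (p, c_p, c_n) triples
        a_pos = get_activation(prompt + pos, layer)  # shape: (d_model,)
        a_neg = get_activation(prompt + neg, layer)
        diffs.append(a_pos - a_neg)
    return torch.stack(diffs).mean(dim=0)            # v_MD at layer L

# At inference time, v_MD (times a chosen multiplier) is added to the residual
# stream at layer L, e.g. with a forward hook like the steering sketch above.
```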
implication
by steering existing learned representations of behaviors, CAA results in better out-of-distribution generalization than basic supervised finetuning of the entire model.
sparse autoencoders
abbrev: SAE
see also: landspace
Often consists of a single hidden layer with a ReLU non-linearity (a linear encoder and decoder), trained on activations collected over a subset of the data the main LLM was trained on.
empirical example: if we wish to interpret all features related to the author Camus, we might want to train an SAE on all available texts of Camus to interpret “similar” features from Llama-3.1
definition
We wish to decompose a model’s activations $x$ into a sparse, linear combination of feature directions:

$$x \approx b_{\text{dec}} + \sum_{i=1}^{M} f_i(x)\, d_i$$

where $d_i$ are unit-norm feature directions and the feature activations $f_i(x) \ge 0$ are sparse.

Thus, the baseline architecture of SAEs is a linear autoencoder with an L1 penalty on the activations:

$$f(x) := \mathrm{ReLU}\big(W_{\text{enc}}(x - b_{\text{dec}}) + b_{\text{enc}}\big), \qquad \hat{x}(f) := W_{\text{dec}} f + b_{\text{dec}}$$

We train it to reconstruct a large dataset of model activations $x \sim \mathcal{D}$, constraining the hidden representation $f(x)$ to be sparse via an L1 norm with coefficient $\lambda$ in the training loss:

$$\mathcal{L}(x) := \underbrace{\| x - \hat{x}(f(x)) \|_2^2}_{\text{reconstruction}} + \underbrace{\lambda \| f(x) \|_1}_{\text{sparsity}}$$
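A minimal sketch of this baseline architecture and loss in PyTorch, assuming activations are collected elsewhere; dimensions and the sparsity coefficient are illustrative.

```python
# Minimal sketch: ReLU encoder, linear decoder, tied pre-encoder bias, and an
# L2 reconstruction + λ·L1 sparsity training loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        f = F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # sparse feature activations
        x_hat = f @ self.W_dec + self.b_dec                      # reconstruction
        return x_hat, f

def sae_loss(x, x_hat, f, lam: float = 5e-3):
    return F.mse_loss(x_hat, x) + lam * f.abs().sum(-1).mean()

sae = SparseAutoencoder(d_model=768, d_sae=16 * 768)
x = torch.randn(64, 768)            # a batch of residual-stream activations
x_hat, f = sae(x)
loss = sae_loss(x, x_hat, f)
```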
intuition
We want high reconstruction fidelity at a given sparsity level (as measured by L0), and we get there via a mixture of reconstruction fidelity and L1 regularization.
We could reduce the sparsity loss term without affecting reconstruction by scaling up the norm of the decoder weights, so we constrain the norms of the columns of $W_{\text{dec}}$ during training.
Idea: the output of the encoder has two roles
- detects which features are active ⇐ L1 is crucial to ensure sparsity in the decomposition
- estimates the magnitudes of active features ⇐ L1 is an unwanted bias
Gated SAE
uses a gated mechanism to remove the L1 bias, yielding a Pareto improvement over baseline SAEs during training (Rajamanoharan et al., 2024)
A clear consequence of the bias during training is shrinkage (Sharkey, 2024) 5
The idea is to use a gated ReLU encoder (Dauphin et al., 2017; Shazeer, 2020):

$$\tilde{f}(x) := \underbrace{\mathbb{1}\big[ \pi_{\text{gate}}(x) > 0 \big]}_{f_{\text{gate}}(x)} \odot \underbrace{\mathrm{ReLU}\big( W_{\text{mag}}(x - b_{\text{dec}}) + b_{\text{mag}} \big)}_{f_{\text{mag}}(x)}, \qquad \pi_{\text{gate}}(x) := W_{\text{gate}}(x - b_{\text{dec}}) + b_{\text{gate}}$$

where $\mathbb{1}[\cdot > 0]$ is the (point-wise) Heaviside step function and $\odot$ denotes element-wise multiplication.

| term | annotation |
| --- | --- |
| $f_{\text{gate}}$ | which features are deemed to be active |
| $f_{\text{mag}}$ | feature activation magnitudes (for features that have been deemed to be active) |
| $\pi_{\text{gate}}(x)$ | the gating sub-layer’s pre-activations |

To negate the increase in parameters, use weight sharing: scale $W_{\text{mag}}$ in terms of $W_{\text{gate}}$ with a vector-valued rescaling parameter $r_{\text{mag}} \in \mathbb{R}^{M}$:

$$(W_{\text{mag}})_{ij} := \big(\exp(r_{\text{mag}})\big)_i \cdot (W_{\text{gate}})_{ij}$$
Figure 3: Gated SAE with weight sharing between gating and magnitude paths
Figure 4: A gated encoder becomes a single-layer linear encoder with a JumpReLU (Erichson et al., 2019) activation function
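A hedged sketch of the gated encoder above with the weight-sharing scheme; shapes and initialization are illustrative.

```python
# Hedged sketch: gated SAE encoder, with W_mag tied to W_gate via a
# per-feature rescaling parameter r_mag (weight sharing).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedEncoder(nn.Module):
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_gate = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.r_mag = nn.Parameter(torch.zeros(d_sae))   # per-feature rescaling
        self.b_gate = nn.Parameter(torch.zeros(d_sae))
        self.b_mag = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        x_c = x - self.b_dec
        pi_gate = x_c @ self.W_gate + self.b_gate        # gating pre-activations
        f_gate = (pi_gate > 0).float()                   # Heaviside: which features are active
        W_mag = self.W_gate * torch.exp(self.r_mag)      # weight sharing: exp(r_mag) scales each feature column
        f_mag = F.relu(x_c @ W_mag + self.b_mag)         # magnitudes of active features
        return f_gate * f_mag                            # element-wise product
```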
feature suppression
See also: link
The loss function of SAEs combines an MSE reconstruction loss with a sparsity term:

$$\mathcal{L}(x) = \| x - \hat{x}(f(x)) \|_2^2 + \lambda \| f(x) \|_1$$

The reconstruction is not perfect, given that only one of the two terms rewards reconstruction; smaller values of $f(x)$ reduce the sparsity penalty, so feature activations will be suppressed.
illustrated example
consider a single binary feature in one dimension, equal to 1 with probability $p$ and 0 otherwise. Ideally, an optimal SAE would extract a feature activation of $f(x) = x$ and have decoder $W_{\text{dec}} = 1$.
However, if we train the SAE by optimizing the loss function $\mathcal{L}$ above, say the encoder outputs feature activation $a$ when $x = 1$ and 0 otherwise, and we ignore bias terms, the optimization problem becomes:

$$a^{\ast} = \arg\min_{a}\; p \big[ (1 - a)^2 + \lambda a \big] = 1 - \frac{\lambda}{2}$$

so the learned activation is shrunk by $\lambda / 2$ relative to the ideal value of 1.
How do we fix feature suppression in training SAEs?
introduce an element-wise scaling factor per feature in between the encoder and decoder, represented by a vector $s$:

$$\hat{x}(f) = W_{\text{dec}} \big( s \odot f(x) \big) + b_{\text{dec}}$$
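One way this could look in code, as a hedged sketch: freeze a trained SAE (such as the `SparseAutoencoder` sketch above) and fine-tune only a per-feature scale between encoder and decoder with a reconstruction-only objective, so the scales can undo the shrinkage.

```python
# Hedged sketch: per-feature rescaling between a frozen encoder and decoder.
import torch
import torch.nn as nn

class RescaledSAE(nn.Module):
    def __init__(self, sae):                        # `sae`: a trained SparseAutoencoder (see sketch above)
        super().__init__()
        self.sae = sae.requires_grad_(False)        # freeze encoder/decoder
        self.s = nn.Parameter(torch.ones(sae.W_dec.shape[0]))    # one scale per feature

    def forward(self, x):
        _, f = self.sae(x)
        x_hat = (self.s * f) @ self.sae.W_dec + self.sae.b_dec   # rescaled reconstruction
        return x_hat, f

# Train only `s` with an MSE reconstruction loss (no L1 term), e.g.:
# loss = torch.nn.functional.mse_loss(rescaled(x)[0], x)
```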
sparse crosscoders
maturity
a research preview from Anthropic, and this is pretty much still a work in progress
see also reproduction on Gemma 2B and github
A variant of sparse autoencoders that reads from and writes to multiple layers (Lindsey et al., 2024)
Crosscoders produce shared features across layers and even across models
motivations
Resolve:
cross-layer features: resolve cross-layer superposition
circuit simplification: remove redundant features from analysis and skip over many uninteresting identity circuit connections
model diffing: produce shared sets of features across models, both across checkpoints of one model during training and across completely independent models with different architectures.
cross-layer superposition
given the additive property of the transformer residual stream, adjacent layers in larger transformers can be thought of as “almost parallel”
intuition
Under the superposition hypothesis, a feature is a linear combination of neurons at any given layer.
if we think of adjacent layers as being “almost parallel branches that potentially have superposition between them”, then we can apply dictionary learning jointly 6
persistent features and complexity
A current drawback of sparse autoencoders is that we have to train them against a specific activation layer to extract features. Training one per layer on the residual stream, we end up with lots of duplicate features across layers.
Crosscoders can simplify the circuit given that we use an appropriate architecture 7
setup.
Autoencoders and transcoders as special cases of crosscoders.
- autoencoders: read from and predict the same layer
- transcoders: read from layer $\ell$ and predict layer $\ell + 1$
Crosscoders read from and write to many layers, subject to causality constraints.
crosscoders
We compute the vector of feature activations $f(x_j)$ on datapoint $x_j$ by summing over contributions of the activations $a^{l}(x_j)$ at the different layers $l \in L$:

$$f(x_j) = \mathrm{ReLU}\left( \sum_{l \in L} W_{\text{enc}}^{l}\, a^{l}(x_j) + b_{\text{enc}} \right)$$

We have the loss

$$\mathcal{L} = \sum_{l \in L} \left\| a^{l}(x_j) - a^{l\prime}(x_j) \right\|_2^{2} + \sum_{l \in L} \sum_{i} f_i(x_j) \left\| W_{\text{dec}}^{l,i} \right\|_2$$

where $a^{l\prime}(x_j) = W_{\text{dec}}^{l} f(x_j) + b_{\text{dec}}^{l}$ is the reconstruction of layer $l$, and the regularization can be rewritten as:

$$\sum_{l \in L} \sum_{i} f_i(x_j) \left\| W_{\text{dec}}^{l,i} \right\|_2 = \sum_{i} f_i(x_j) \left( \sum_{l \in L} \left\| W_{\text{dec}}^{l,i} \right\|_2 \right)$$

i.e. weighting the L1 regularization penalty on each feature activation by the L1 norm of its per-layer decoder weight norms 8
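A hedged sketch of these equations, with one shared feature vector encoded from and decoded back to several layers; shapes, initialization, and the sparsity coefficient are illustrative.

```python
# Hedged sketch: crosscoder feature activations and loss over several layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Crosscoder(nn.Module):
    def __init__(self, d_model: int, d_feat: int, n_layers: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(n_layers, d_model, d_feat) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(n_layers, d_feat, d_model) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_feat))
        self.b_dec = nn.Parameter(torch.zeros(n_layers, d_model))

    def forward(self, a):                       # a: (n_layers, batch, d_model) activations
        # f(x_j) = ReLU( Σ_l W_enc^l a^l(x_j) + b_enc )
        f = F.relu(torch.einsum("lbd,ldf->bf", a, self.W_enc) + self.b_enc)
        a_hat = torch.einsum("bf,lfd->lbd", f, self.W_dec) + self.b_dec[:, None, :]
        return f, a_hat

def crosscoder_loss(a, a_hat, f, W_dec, lam: float = 5e-3):
    recon = ((a - a_hat) ** 2).sum(-1).sum(0).mean()    # Σ_l ‖a^l − â^l‖²
    dec_norms = W_dec.norm(dim=-1).sum(0)               # Σ_l ‖W_dec^{l,i}‖₂ per feature i
    sparsity = (f * dec_norms).sum(-1).mean()
    return recon + lam * sparsity                       # lam: illustrative sparsity coefficient
```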
We use L1 (rather than L2 over layers) due to:
baseline loss comparison: with L2, crosscoders would exhibit lower loss than the sum of per-layer SAE losses, as they would effectively obtain a loss “bonus” by spreading features across layers
layer-wise sparsity surfaces layer-specific features: based on empirical results from model diffing, L1 uncovers a mix of shared and model-specific features, whereas L2 tends to uncover only shared features.
variants
good to explore:
- strictly causal crosscoders to capture MLP computation and treat computation performed by attention layers as linear
- combine strictly causal crosscoders for MLP outputs with weakly causal crosscoders for attention outputs
- interpretable attention replacement layers that could be used in combination with strictly causal crosscoders for a “replacement model”
model diffing
see also: model stitching and SVCCA
(Laakso & Cottrell, 2000) propose comparing representations by transforming them into representations of the distances between data points. 9
questions
How do features change over model training? When do they form?
As we make a model wider, do we get more features? Or are they largely the same, just packed less densely?
superposition hypothesis
tl/dr
phenomenon where a neural network represents more than $n$ features in an $n$-dimensional space
A linear representation over neurons can encode more features than it has dimensions; as sparsity increases, models use superposition to represent more features than dimensions.
neural networks “want to represent more features than they have neurons”.
When features are sparse, superposition allows compression beyond what a linear model can do, at the cost of interference that requires non-linear filtering.
reasoning: a “noisy simulation”, where small neural networks exploit feature sparsity and properties of high-dimensional spaces to approximately simulate much larger, much sparser neural networks
In a sense, superposition is a form of lossy compression
importance
sparsity: how frequently does the feature appear in the input?
importance: how useful is it for lowering the loss?
over-complete basis
the reasoning for having a set of feature directions larger than the number of dimensions 10
features
A property of an input to the model
When we talk about features (Elhage et al., 2022, see “Empirical Phenomena”), the theory is built around several observed empirical phenomena:
- Word embeddings: have directions corresponding to semantic properties (Mikolov et al., 2013). For example: $V(\text{king}) - V(\text{man}) + V(\text{woman}) \approx V(\text{queen})$
- Latent spaces: similar vector arithmetic and interpretable directions have also been found in generative adversarial networks.
We can define features as properties of inputs which a sufficiently large neural network will reliably dedicate a neuron to represent (Elhage et al., 2022, see “Features as Directions”)
ablation
refers to the process of removing a subset of a model’s parameters or activations to evaluate the effect on its predictions.
idea: delete one activation of the network to see how performance on a task changes (a minimal hook-based sketch follows the list below).
- zero ablation or pruning: Deletion by setting activations to zero
- mean ablation: Deletion by setting activations to the mean of the dataset
- random ablation or resampling
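A minimal sketch of zero and mean ablation on one GPT-2 MLP output using forward hooks; the dataset mean is a random placeholder, whereas in practice it would be precomputed over a reference dataset.

```python
# Hedged sketch: ablate one MLP's output by zeroing it or replacing it with a
# (precomputed) mean activation, via a forward hook.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
mlp = model.transformer.h[4].mlp

def zero_ablate(module, inputs, output):
    return torch.zeros_like(output)                  # zero ablation / pruning

dataset_mean = torch.randn(model.config.n_embd)      # placeholder for the dataset mean
def mean_ablate(module, inputs, output):
    return dataset_mean.expand_as(output)            # mean ablation

handle = mlp.register_forward_hook(zero_ablate)      # swap in mean_ablate for mean ablation
ids = torch.tensor([[464, 6193, 287, 3442, 318]])    # arbitrary GPT-2 token ids
with torch.no_grad():
    ablated_logits = model(ids).logits
handle.remove()
```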
residual stream
```mermaid
flowchart LR
  A[Token] --> B[Embeddings] --> C[x0]
  C[x0] --> E[H] --> D[x1]
  C[x0] --> D
  D --> F[MLP] --> G[x2]
  D --> G[x2]
  G --> I[...] --> J[unembed] --> X[logits]
```
The residual stream has dimensions $(C, E)$ where
- $C$: the number of tokens in the context window, and
- $E$: the embedding dimension.
The attention block $H$ processes the residual stream $x_0$, and the result is added back to the stream:

$$x_1 = x_0 + H(x_0)$$
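A toy sketch of this additive structure, matching the flowchart above; the attention and MLP blocks are stand-ins (plain linear layers) just to show how each block's output is added back to the stream.

```python
# Toy sketch: the residual stream accumulates each block's contribution.
import torch
import torch.nn as nn

d_model = 16
H = nn.Linear(d_model, d_model)      # stand-in for an attention block
mlp = nn.Linear(d_model, d_model)    # stand-in for an MLP block

x0 = torch.randn(1, 4, d_model)      # (batch, tokens in context, embedding dim)
x1 = x0 + H(x0)                      # x1 = x0 + H(x0)
x2 = x1 + mlp(x1)                    # x2 = x1 + MLP(x1)
```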
grokking
See also: writeup, code, circuit threads
A phenomenon discovered by (Power et al., 2022) where, on small algorithmic tasks like modular addition, a model will initially memorise the training data, but after a long time it will suddenly learn to generalise to unseen data
empirical claims
related to phase change
Bibliography
- Dauphin, Y. N., Fan, A., Auli, M., & Grangier, D. (2017). Language Modeling with Gated Convolutional Networks. arXiv preprint arXiv:1612.08083 [arxiv]
- Erichson, N. B., Yao, Z., & Mahoney, M. W. (2019). JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks. arXiv preprint arXiv:1904.03750 [arxiv]
- Rajamanoharan, S., Conmy, A., Smith, L., Lieberum, T., Varma, V., Kramár, J., Shah, R., & Nanda, N. (2024). Improving Dictionary Learning with Gated Sparse Autoencoders. arXiv preprint arXiv:2404.16014 [arxiv]
- Sharkey, L. (2024). Addressing Feature Suppression in SAEs. AI Alignment Forum. [post]
- Shazeer, N. (2020). GLU Variants Improve Transformer. arXiv preprint arXiv:2002.05202 [arxiv]
- Gorton, L. (2024). The Missing Curve Detectors of InceptionV1: Applying Sparse Autoencoders to InceptionV1 Early Vision. arXiv preprint arXiv:2406.03662 [arxiv]
- Laakso, A., & Cottrell, G. (2000). Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology, 13(1), 47–76. https://doi.org/10.1080/09515080050002726
- Lindsey, J., Templeton, A., Marcus, J., Conerly, T., Batson, J., & Olah, C. (2024). Sparse Crosscoders for Cross-Layer Features and Model Diffing. Transformer Circuits Thread. [link]
- Elhage, N., Hume, T., Olsson, C., Schiefer, N., Henighan, T., Kravec, S., Hatfield-Dodds, Z., Lasenby, R., Drain, D., Chen, C., Grosse, R., McCandlish, S., Kaplan, J., Amodei, D., Wattenberg, M., & Olah, C. (2022). Toy Models of Superposition. Transformer Circuits Thread. [link]
- Mikolov, T., Yih, W., & Zweig, G. (2013). Linguistic Regularities in Continuous Space Word Representations. In L. Vanderwende, H. Daumé III, & K. Kirchhoff (Eds.), Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 746–751). Association for Computational Linguistics. https://aclanthology.org/N13-1090
- Panickssery, N., Gabrieli, N., Schulz, J., Tong, M., Hubinger, E., & Turner, A. M. (2024). Steering Llama 2 via Contrastive Activation Addition. arXiv preprint arXiv:2312.06681 [arxiv]
- Power, A., Burda, Y., Edwards, H., Babuschkin, I., & Misra, V. (2022). Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets. arXiv preprint arXiv:2201.02177 [arxiv]
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners.
Notes
-
Think of decoding each text into dynamic patches, and thus actually improving inference efficiency. See also link ↩
-
good read from Lawrence C for ambitious mech interp. ↩
-
the benchmark was run against `vllm#0.6.3.dev236+g48138a84`, with all configurations specified in the pull request. ↩
-
An example steering function could be $h_{\ell} \leftarrow h_{\ell} + \alpha\, d$, where $d$ is the feature direction and $\alpha$ the amplification scale. ↩
-
If we hold $\hat{x}(f(x))$ fixed, the L1 penalty pushes the feature activations $f(x)$ towards zero, while the reconstruction loss pushes $f(x)$ high enough to produce an accurate reconstruction.
An optimal value is somewhere in between.
However, rescaling the shrunk feature activations (Sharkey, 2024) is not necessarily enough to overcome the bias induced by L1: an SAE might learn sub-optimal encoder and decoder directions that are not improved by rescaling alone. ↩
-
(Gorton, 2024) notes that cross-branch superposition is significant in interpreting models with parallel branches (InceptionV1) ↩
-
The causal description it provides likely differs from that of the underlying model. ↩
-
$\| W_{\text{dec}}^{l,i} \|_2$ is the L2 norm of a single feature’s decoder vector at a given layer.
In principle, one might have expected to use the L2 norm of the per-layer norms instead, i.e. $\sqrt{\sum_{l \in L} \| W_{\text{dec}}^{l,i} \|_2^2}$ ↩
-
Chris Olah’s blog post explains how t-SNE can be used to visualize collections of networks in a function space. ↩
-
Even though features still correspond to directions, the set of interpretable directions is larger than the number of dimensions ↩