Latent Space: The AI Engineer Podcast

The Utility of Interpretability — Emmanuel Ameisen

Deep Dive

Introduction and Research Overview

Emmanuel Ameisen from Anthropic introduces recent work on AI model interpretability, specifically circuit tracing research. The team has recently released papers and code on understanding internal model computations, developing tools that explain how models predict tokens by examining their internal states. This work was released in partnership with Anthropic's Fellows program and focuses on open-weights models like Gemma 2 (2B) and Llama 1B.

The research centers on explaining computational processes within AI models, demonstrating how models perform multi-hop reasoning, and revealing similarities in reasoning circuits across different model sizes. Key findings show that models of different sizes can have remarkably similar reasoning circuits, and even small models like Gemma can perform complex reasoning tasks.

Tool Development and Methodology

The discussion focuses on a new tool/method for understanding AI model behaviors through circuit tracing. The open-source code enables researchers to extend the method, explore different models, and create and analyze computational graphs. The tool provides a UI (Neuronpedia) for generating and exploring circuits and works with base language models trained to predict next tokens.

A demonstration using the Gemma model shows how it predicts sentence completions, with a specific example exploring how the model completes "thanks for having me on the..." The tool reveals patterns in model reasoning and allows users to trace how a model generates specific outputs through interactive exploration of intermediate representations (features). Users can track how certain words or concepts influence model predictions, click on outputs to see underlying features, group and prune nodes for clearer visualization, and experiment with different prompts to understand model behavior.

Technical Implementation and Limitations

The computational notebook demonstrates model interpretability techniques, specifically exploring how different model components (features/nodes) contribute to computation. Key interpretability methods include turning specific nodes on/off, injecting prompts between different model components, and generating and analyzing computational graphs.
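The intervention idea above, turning a specific node on or off and watching the downstream effect, can be sketched in a few lines. This is a minimal NumPy sketch, not the released tool's API: the activation vector and feature direction are toy stand-ins, and in practice features live in a model's residual stream rather than a 4-dimensional array.

```python
import numpy as np

def ablate_feature(activations, direction):
    """Turn a feature off: project its direction out of the activations."""
    direction = direction / np.linalg.norm(direction)
    return activations - (activations @ direction) * direction

def steer_feature(activations, direction, strength):
    """Pin a feature's activation to a chosen strength (up or down)."""
    direction = direction / np.linalg.norm(direction)
    current = activations @ direction
    return activations + (strength - current) * direction

# Toy 4-dim "residual stream" vector and a made-up feature direction.
acts = np.array([1.0, 2.0, 0.0, -1.0])
feat = np.array([0.0, 1.0, 0.0, 0.0])

off = ablate_feature(acts, feat)            # feature now reads ~0
boosted = steer_feature(acts, feat, 10.0)   # feature now reads 10
```

Running the modified activations through the rest of the model is what reveals whether the feature actually drives the behavior the hypothesis claims.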

However, the current approach has significant limitations and caveats, which is why explicit evaluation criteria matter.

The evaluation criteria for interpretability include successfully tracing through computational graphs without errors, developing reasonable hypotheses about model computation, verifying hypotheses through intervention tests, and determining the proportion of model computation that can be meaningfully explained.

Practical Applications and Discoveries

The speaker demonstrates quick experiments using the interpretability tool, including analyzing a language model's understanding of dog breeds (specifically Pomskies). The tool allowed rapid exploration of model features in just minutes, revealing the model's nuanced understanding beyond simple token completion. Features discovered included animal-related concepts, dog breed characteristics, and breed-specific traits (like a "stubborn" feature for huskies).

A notable discovery involved the Golden Gate Bridge feature in Claude, which became a quirky characteristic where Claude would frequently reference the Golden Gate Bridge in various contexts. This wasn't an intentional programmed feature but an organic discovery during internal testing that the team found amusing and decided to keep, even incorporating it into their marketing efforts.

Field Evolution and Accessibility

The discussion highlights how AI research has become more accessible, with both speakers having transitioned into AI research without traditional PhD backgrounds. The AI research landscape has changed significantly in the past 5-6 years, becoming more empirical and less competitive to enter. Current research often relies more on scaling compute and data than complex theoretical approaches, and engineering and systems skills are critical for research execution.

Interpretability research is particularly accessible because many open-source models can be studied without massive computational resources, the field is relatively new with fewer complex abstractions to learn, and basic concepts like features and dictionary learning can quickly enable contribution.

Mechanistic Interpretability Foundations

Mechanistic interpretability (mech interp) aims to understand how neural networks work internally, with origins traced to Chris Olah's blog and distill.pub. Initially focused on vision models, the field has since expanded to NLP and other domains and is currently experiencing a "Cambrian explosion" of research.

The core challenge is that, unlike decision trees, neural networks (CNNs, transformers) don't expose interpretable intermediate states; they are large webs of weights and activations whose meaning is unclear. The superposition hypothesis holds that networks represent many more concepts than they have dimensions by packing features into overlapping directions. Language models appear to rely on this more heavily than vision models: their neurons look densely packed and polysemantic, whereas early vision models often showed relatively clear, discrete neuron functions.

Sparse Autoencoders and Feature Extraction

Key concepts in mechanistic interpretability include superposition (representing multiple concepts in limited dimensions), features as directional representations in neural network space, and sparse autoencoders as a method to unpack and understand these representations.

Sparse autoencoder mechanics involve extracting independent concepts automatically from neural networks through a process of expanding few neurons to represent multiple concepts, contracting back to original representation, training to incentivize sparsity (few features active at a time), and creating a "dictionary" of feature directions. This enables identifying specific feature directions (like "red" or "Golden Gate Bridge") and manipulating model behavior by setting identified feature directions to zero or artificially amplifying specific feature directions.
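The expand-contract mechanics described above can be sketched as a minimal forward pass. This is a toy NumPy version with random (untrained) weights and assumed dimensions; real sparse autoencoders are trained on millions of activations and use far larger dictionaries.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_dict = 16, 128   # expand 16 activations into a 128-entry dictionary

# Encoder and decoder weights (learned in practice; random here).
W_enc = rng.standard_normal((d_model, d_dict)) * 0.1
b_enc = np.zeros(d_dict)
W_dec = rng.standard_normal((d_dict, d_model)) * 0.1

def sae_forward(x):
    """Encode activations into non-negative features, then reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU keeps features >= 0
    x_hat = f @ W_dec                        # contract back to d_model dims
    return f, x_hat

def sae_loss(x, l1_coeff=1e-3):
    """Reconstruction error plus an L1 penalty that incentivizes sparsity."""
    f, x_hat = sae_forward(x)
    return np.mean((x - x_hat) ** 2) + l1_coeff * np.abs(f).sum()

x = rng.standard_normal(d_model)
features, recon = sae_forward(x)
```

After training, only a handful of the 128 features fire for any given input; the rows of `W_dec` are the "dictionary" of feature directions that interventions like ablation and amplification operate on.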

Complex Reasoning Capabilities

The research demonstrates sophisticated reasoning capabilities in large language models through specific examples:

Two-Step Reasoning Example: Asked for the capital of the state containing Dallas, models first identify that Dallas is in Texas, then determine that Austin is the capital, completing both hops in a single forward pass. This shows the reasoning is not just memorization: intervening on the intermediate representation changes the answer.

Medical Diagnosis Example: Models can process multiple symptoms, generate potential diagnoses, suggest follow-up diagnostic tests, and do this in a single forward pass, showing complex, multi-step reasoning beyond simple pattern matching.

These examples challenge the "stochastic parrot" criticism of LLMs, demonstrating rich intermediate representational states during reasoning and involving distributed representations and complex combinations of information.

Induction Heads and Planning Mechanisms

Induction heads represent a key mechanism in transformer models - attention heads that can look at previous context and identify repeated text/patterns, copy and repeat text or concepts efficiently, and operate at different levels of abstraction (word-level, sentiment-level). They're critical for tasks like text prediction and editing and foundational for AI editing modes in models like Claude and OpenAI's systems.
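The basic induction pattern — see [A][B] earlier in the context, see [A] again, predict [B] — is simple enough to state as code. This is a toy rule capturing the word-level behavior, not an attention-head implementation.

```python
def induction_predict(tokens):
    """Toy induction head: find the most recent earlier occurrence of the
    final token, and predict whatever followed it last time."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):  # scan the context backwards
        if tokens[i] == current:
            return tokens[i + 1]
    return None  # no repeat in context; other circuits would have to predict

# Having seen "Mr D urs ley" once, the rule continues "Mr D" with "urs".
print(induction_predict(["Mr", "D", "urs", "ley", "was", "Mr", "D"]))
```

Real induction heads implement this match-and-copy behavior with attention, and the more abstract variants mentioned above match on meaning or sentiment rather than exact tokens.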

The research reveals sophisticated planning mechanisms in AI models, particularly in poetry generation. Models develop complex features for poetry generation including rhyming capabilities, sound and musicality detection, and word feature tracking. Experimental interventions show models can plan poem structure in advance (before starting next line), perform "backwards planning" to ensure coherence, and dynamically adjust word choices to maintain thematic and structural consistency.

Cross-Linguistic and Multimodal Representations

Research findings show that models can represent similar concepts (like "hot" and "cold") consistently across multiple languages, with larger, more advanced models tending to share more representations across languages. This suggests models learn universal concepts rather than language-specific ones, potentially helping with learning low-resource languages and supporting in-context learning capabilities.

The research bears on the Sapir-Whorf hypothesis about language influencing thought: an infinitely sized model with perfect concept mapping would arguably challenge the hypothesis. In practice, however, representations aren't 100% identical across languages, and some cultural and linguistic nuances remain distinct.

Multimodal capabilities show how concepts can map across languages and modalities, with examples like the Golden Gate Bridge demonstrating how models can recognize the same concept in different languages and image/text formats, suggesting shared representations across different modalities.

Feature Analysis and Attribution

Current feature interpretation involves manual examination of feature activations, with researchers verifying feature interpretations through examining text activations, checking logit promotions, and intervention techniques. Attribution graphs provide visualization techniques mapping feature influences, connecting input features to output features, and showing how different features contribute to and influence each other.
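For a linear map, the edge weights of an attribution graph have a simple form: the influence of input feature i on output feature j is activation_i times weight_ij, and pruning small edges leaves a readable graph. The sketch below uses made-up feature names and weights; real attribution graphs span many layers and use more careful attribution rules.

```python
import numpy as np

# Toy one-layer linear map from 3 input features to 2 output features.
features_in  = ["Dallas", "Texas", "capital"]
features_out = ["Austin", "Houston"]

W = np.array([[0.2, 0.4],    # "Dallas" weakly promotes Texan cities
              [0.5, 0.5],    # "Texas" promotes both Texan cities
              [0.9, 0.0]])   # "capital" promotes Austin specifically
acts = np.array([1.0, 1.0, 1.0])   # current feature activations

# Attribution of input i to output j in a linear map: activation_i * W[i, j].
attributions = acts[:, None] * W

# Keep only edges above a threshold, mirroring the pruning step in the UI.
edges = [(features_in[i], features_out[j], round(float(attributions[i, j]), 2))
         for i in range(len(features_in))
         for j in range(len(features_out))
         if attributions[i, j] > 0.3]
```

Thresholding is what makes the graphs explorable: the full graph over millions of features is dense, so only the strongest influences are drawn.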

Challenges include that not all features activate (roughly 60% of features in a 34-million-feature dictionary never fired), some feature interpretations are clearer than others, and the process requires careful manual examination and verification. Future directions involve developing programmatic, autonomous methods for feature interpretation and scaling up feature analysis techniques.

Research Challenges and Future Directions

The research reveals complex challenges in understanding AI model behavior. Models can generate outputs that appear reasonable but may not reflect genuine computational processes, and there's potential for deceptive reasoning patterns where models "work backwards" from hints to generate mathematically incorrect but seemingly plausible answers.

Current reasoning models don't always provide faithful explanations of their internal processes, with chain of thought explanations being unreliable and not always matching internal computational methods. This highlights the continued need for mechanistic interpretability research and the importance of understanding how models actually process information versus how they claim to process it.

Publication Philosophy and Communication

The team debated potential risks of publishing their research, recognizing it might become part of future training data and potentially teach models to "hide" their planning behaviors. Despite risks, they believe publishing is important to promote interpretability research, encourage more resources toward understanding model mechanisms, and potentially create significant positive impact in AI development.

Anthropic has generally leaned towards publishing research, contributing to broader industry conversations about open research, data sets, and model weights. They've shifted towards technical blog posts and visual presentations rather than traditional academic papers, emphasizing making complex technical information more accessible through simple concept distillation, clear visualizations, and making technical content approachable for non-experts.

Current Opportunities and Future Outlook

The speaker sees the current moment as an excellent time to join the field of AI interpretability, with current research priorities including understanding attention mechanisms, exploring longer prompts, finding alternative model architectures, and scaling interpretability methods to larger models.

Recent progress has demonstrated that interpretability methods can now work on meaningful, complex models, the field is young with many unexplored research directions, and many potential research ideas are still viable and waiting to be explored. The speaker encourages researchers to join the field and "chase the fun," suggesting that even seemingly "dumb" research ideas might be valuable in this rapidly evolving area of AI research.
