Understanding the behavior of complex machine learning systems, particularly Large Language Models (LLMs), is a critical challenge in modern artificial intelligence. Interpretability research aims to make the decision-making process more transparent to model builders and impacted humans, a step toward safer and more trustworthy AI. To gain a comprehensive understanding, we can analyze these systems through different lenses: feature attribution, which isolates the specific input features driving a prediction (Lundberg & Lee, 2017; Ribeiro et al., 2022); data attribution, which links model behaviors to influential training examples (Koh & Liang, 2017; Ilyas et al., 2022); and mechanistic interpretability, which dissects the functions of internal components (Conmy et al., 2023; Sharkey et al., 2025).
Across these perspectives, the same fundamental hurdle persists: complexity at scale. Model behavior is rarely the result of isolated components; rather, it emerges from complex dependencies and patterns. To achieve state-of-the-art performance, models synthesize complex feature relationships, find shared patterns from diverse training examples, and process information through highly interconnected internal components.
Therefore, interpretability methods that aim to be grounded in actual model behavior must also be able to capture these influential interactions. As the number of features, training data points, and model components grows, the number of potential interactions grows exponentially, making exhaustive analysis computationally infeasible. In this blog post, we describe the fundamental ideas behind SPEX and ProxySPEX, algorithms capable of identifying these critical interactions at scale.
Attribution through Ablation
Central to our approach is the concept of ablation, measuring influence by observing what changes when a component is removed.
Feature Attribution: We mask or remove specific segments of the input prompt and measure the resulting shift in the predictions.
Data Attribution: We train models on different subsets of the training set, assessing how the model’s output on a test point shifts in the absence of specific training data.
Model Component Attribution (Mechanistic Interpretability): We intervene on the model’s forward pass by removing the influence of specific internal components, determining which internal structures are responsible for the model’s prediction.
In each case, the goal is the same: to isolate the drivers of a decision by systematically perturbing the system, in hopes of discovering influential interactions. Since each ablation incurs a significant cost, whether through expensive inference calls or retrainings, we aim to compute attributions with the fewest possible ablations.
Masking different parts of the input, we measure the difference between the original and ablated outputs.
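To make the ablation recipe concrete, here is a minimal feature-attribution sketch. The `model` function below is a hypothetical stand-in for an expensive inference call (it plants a "not bad" double-negative interaction by hand); masking a word means deleting it from the input and measuring the output shift. This illustrates the procedure only, not the SPEX implementation.

```python
# Toy "model": scores a sentence, with a planted interaction between
# "not" and "bad" (a double negative flips the sentiment).
def model(words):
    score = 0.0
    if "great" in words:
        score += 1.0
    if "bad" in words:
        score -= 1.0
    if "not" in words and "bad" in words:
        score += 2.0  # "not bad" reads as mildly positive overall
    return score

sentence = ["the", "movie", "was", "not", "bad"]
full = model(sentence)

# Single-feature ablations: mask one word at a time, record the output shift.
shifts = {}
for i, w in enumerate(sentence):
    ablated = sentence[:i] + sentence[i + 1:]
    shifts[w] = full - model(ablated)

print(shifts)  # only "not" and "bad" move the output
```

Note that marginal shifts like these cannot tell us that "not" and "bad" matter *jointly*; that is exactly the interaction information that SPEX is designed to recover.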
SPEX and ProxySPEX Framework
To discover influential interactions with a tractable number of ablations, we developed SPEX (Spectral Explainer), a framework that scales interaction discovery orders of magnitude beyond prior methods. SPEX circumvents the combinatorial explosion by exploiting a key structural observation: while the total number of potential interactions is prohibitively large, the number of influential interactions is actually quite small.
We formalize this through two observations: sparsity (relatively few interactions truly drive the output) and low-degreeness (influential interactions typically involve only a small subset of features). These properties allow us to reframe the difficult search problem into a solvable sparse recovery problem. Drawing on powerful tools from signal processing and coding theory, SPEX uses strategically selected ablations to combine many candidate interactions together. Then, using efficient decoding algorithms, we disentangle these combined signals to isolate the specific interactions responsible for the model’s behavior.
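Sparsity and low-degreeness can be seen on a small example via the Möbius (pure-interaction) transform: with n features there are 2^n possible interaction coefficients, yet for structured functions almost all of them vanish. The brute-force enumeration below is shown only to illustrate that structure (the value function is a made-up toy); it is precisely this exhaustive pass that SPEX replaces with sparse recovery at scale.

```python
from itertools import chain, combinations

n = 4  # brute force needs 2^n = 16 evaluations

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

# Toy value function v(S): model output when only the features in S are kept.
def v(S):
    S = set(S)
    out = 0.0
    if 0 in S:
        out += 1.0          # a main effect of feature 0
    if 1 in S and 2 in S:
        out += 3.0          # a pairwise (degree-2) interaction
    return out

# Mobius transform: a(S) = sum over T subset of S of (-1)^{|S|-|T|} v(T).
# a(S) is the "pure" effect attributed to exactly the set S.
coeffs = {}
for S in powerset(range(n)):
    a = sum((-1) ** (len(S) - len(T)) * v(T) for T in powerset(S))
    if abs(a) > 1e-9:
        coeffs[S] = a

print(coeffs)  # sparse (2 of 16 nonzero) and low-degree (max degree 2)
```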
In a subsequent algorithm, ProxySPEX, we identified another structural property common in complex machine learning models: hierarchy. When a higher-order interaction is important, its lower-order subsets are likely to be important as well. Exploiting this additional structure yields a dramatic improvement in computational cost: ProxySPEX matches the performance of SPEX with around 10x fewer ablations. Collectively, these frameworks enable efficient interaction discovery, unlocking new applications in feature, data, and model component attribution.
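The hierarchy property suggests a simple search strategy: only consider a higher-order candidate when its lower-order subsets already look important. The sketch below shows that pruning idea on a toy scale; the value function and threshold are illustrative assumptions and this is not the ProxySPEX procedure itself, which builds on proxy models fit to ablation data.

```python
from itertools import combinations

features = range(6)

# Toy ablation effect: change in model output when the set S is masked.
def effect(S):
    S = set(S)
    e = 0.0
    if 0 in S:
        e += 1.0
    if 3 in S:
        e += 0.5
    if {0, 3} <= S:
        e += 2.0  # an important pair built on top of important singletons
    return e

THRESH = 0.25

# Level 1: test every singleton.
important = [(f,) for f in features if effect({f}) > THRESH]

# Level 2: by hierarchy, only test pairs whose singletons both passed.
singles = {s[0] for s in important}
for pair in combinations(sorted(singles), 2):
    extra = effect(pair) - effect({pair[0]}) - effect({pair[1]})
    if extra > THRESH:
        important.append(pair)

print(important)  # 1 pair tested at level 2 instead of all 15
```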
Feature Attribution
Feature attribution techniques assign importance scores to input features based on their influence on the model’s output. For example, if an LLM were used to make a medical diagnosis, this approach could identify exactly which symptoms led the model to its conclusion. While attributing importance to individual features can be valuable, the true power of sophisticated models lies in their ability to capture complex relationships between features. The figure below illustrates examples of these influential interactions: from a double negative changing sentiment (left) to the necessary synthesis of multiple documents in a RAG task (right).
The figure below illustrates the feature attribution performance of SPEX on a sentiment analysis task. We evaluate performance using faithfulness: a measure of how accurately the recovered attributions can predict the model’s output on unseen test ablations. We find that SPEX matches the high faithfulness of existing interaction techniques (Faith-Shap, Faith-Banzhaf) on short inputs, but uniquely retains this performance as the context scales to thousands of features. In contrast, while marginal approaches (LIME, Banzhaf) can also operate at this scale, they exhibit significantly lower faithfulness because they fail to capture the complex interactions driving the model’s output.
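Faithfulness can be made precise as follows: treat the recovered interactions as a surrogate model, predict the black-box output on fresh random ablation masks, and measure agreement, for example with R². The sketch below assumes a surrogate in Möbius form with hand-picked coefficients; it illustrates the definition, not the exact evaluation code from the paper.

```python
import random

random.seed(0)
n = 5

# Black-box model on binary keep/mask patterns, including an interaction term.
def model(mask):  # mask[i] == 1 means feature i is kept
    return 2.0 * mask[0] + 1.5 * mask[1] * mask[3] + 0.1 * mask[4]

# A recovered surrogate: interaction sets and their coefficients.
surrogate = {(0,): 2.0, (1, 3): 1.5}  # the small (4,) effect was missed

def predict(mask):
    return sum(c for S, c in surrogate.items() if all(mask[i] for i in S))

# Faithfulness: R^2 between model and surrogate on unseen random masks.
masks = [[random.randint(0, 1) for _ in range(n)] for _ in range(200)]
y = [model(m) for m in masks]
yhat = [predict(m) for m in masks]
mean_y = sum(y) / len(y)
ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
ss_tot = sum((a - mean_y) ** 2 for a in y)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))  # close to 1: the surrogate explains most of the behavior
```

A marginal method that drops the (1, 3) term from the surrogate would score markedly lower on the same held-out masks, which is the gap the faithfulness curves capture.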
SPEX was also applied to a modified version of the trolley problem, where the moral ambiguity of the problem is removed, making “True” the clear correct answer. Given the modification below, GPT-4o mini answered correctly only 8% of the time. When we applied standard feature attribution (SHAP), it identified individual instances of the word trolley as the primary factors driving the incorrect response. However, replacing trolley with synonyms such as tram or streetcar had little impact on the model’s prediction. SPEX revealed a much richer story, identifying a dominant high-order synergy between the two instances of trolley, as well as the words pulling and lever, a finding that aligns with human intuition about the core components of the dilemma. When these four words were replaced with synonyms, the model’s failure rate dropped to near zero.
Data Attribution
Data attribution identifies which training data points are most responsible for a model’s prediction on a new test point. Identifying influential interactions between these data points is key to explaining unexpected model behaviors. Redundant interactions, such as semantic duplicates, often reinforce specific (and possibly incorrect) concepts, while synergistic interactions are essential for defining decision boundaries that no single sample could form alone. To demonstrate this, we applied ProxySPEX to a ResNet model trained on CIFAR-10, identifying the most significant examples of both interaction types for a variety of difficult test points, as shown in the figure below.
As illustrated, synergistic interactions (left) often involve semantically distinct classes working together to define a decision boundary. For example, grounding the synergy in human perception, the automobile (bottom left) shares visual traits with the provided training images, including the low-profile chassis of the sports car, the boxy shape of the yellow truck, and the horizontal stripe of the red delivery vehicle. On the other hand, redundant interactions (right) tend to capture visual duplicates that reinforce a specific concept. For instance, the horse prediction (middle right) is heavily influenced by a cluster of dog images with similar silhouettes. This fine-grained analysis allows for the development of new data selection techniques that preserve necessary synergies while safely removing redundancies.
Attention Head Attribution (Mechanistic Interpretability)
The goal of model component attribution is to identify which internal parts of the model, such as specific layers or attention heads, are most responsible for a particular behavior. Here too, ProxySPEX uncovers the responsible interactions between different parts of the architecture. Understanding these structural dependencies is vital for architectural interventions, such as task-specific attention head pruning. On an MMLU dataset (high_school_us_history), we demonstrate that a ProxySPEX-informed pruning strategy not only outperforms competing methods, but can actually improve model performance on the target task.
On this task, we also analyzed the interaction structure across the model’s depth. We observe that early layers function in a predominantly linear regime, where heads contribute largely independently to the target task. In later layers, the role of interactions between attention heads becomes more pronounced, with most of the contribution coming from interactions among heads in the same layer.
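One way an interaction-aware pruning decision can differ from a marginal one is sketched below: each head is scored by its individual contribution plus a share of the pairwise interactions it participates in, and the lowest-scoring heads are pruned. All scores here are made-up numbers for illustration; the actual ProxySPEX-informed strategy is described in the paper.

```python
# Hypothetical attribution scores: (layer, head) -> individual contribution.
solo = {("L0", 0): 0.40, ("L0", 1): 0.05, ("L5", 0): 0.10, ("L5", 1): 0.12}

# Hypothetical pairwise interaction scores, here within a single later layer.
pairs = {(("L5", 0), ("L5", 1)): 0.30}

def total_score(head):
    s = solo[head]
    for (a, b), v in pairs.items():
        if head in (a, b):
            s += v / 2  # split interaction credit between the two heads
    return s

# Keep the top-k heads by total score; prune the rest.
k = 3
ranked = sorted(solo, key=total_score, reverse=True)
keep, prune = ranked[:k], ranked[k:]
print(prune)  # the head that is weak alone AND joins no interaction is pruned
```

A purely marginal ranking would treat the two L5 heads as similarly weak; crediting their joint interaction keeps both and drops the genuinely inert head instead.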
What’s Next?
The SPEX framework represents a significant step forward for interpretability, extending interaction discovery from dozens to thousands of components. We have demonstrated the versatility of the framework across the entire model lifecycle: exploring feature attribution on long-context inputs, identifying synergies and redundancies among training data points, and discovering interactions between internal model components. Moving forward, many interesting research questions remain around unifying these different perspectives to provide a more holistic understanding of a machine learning system. It is also of great interest to systematically evaluate interaction discovery methods against existing scientific knowledge in fields such as genomics and materials science, serving to both ground model findings and generate new, testable hypotheses.
We invite the research community to join us in this effort: the code for both SPEX and ProxySPEX is fully integrated and available within the popular SHAP-IQ repository (link).
https://github.com/mmschlk/shapiq (SHAP-IQ Github)
https://openreview.net/forum?id=KI8qan2EA7 (ProxySPEX NeurIPS 2025)
https://openreview.net/forum?id=pRlKbAwczl (SPEX ICML 2025)
https://openreview.net/forum?id=glGeXu1zG4 (Learning to Understand NeurIPS 2024)