Blog / Customer Stories

How Optiver Eliminated 90% of Manual Context Burden for Agentic Coding With Driver

Discover how this tech-driven trading firm leveraged Driver to automate codebase context for its global data platform repository and accelerate development efficiency without overloading its engineering team.

90%

Reduction in manual context management for agentic coding

5x

Increase in AI coding agent effectiveness

<2

Weeks from pilot to deployment

In an environment like Optiver's, we have complex codebases, high development velocity, and no margin for error. Driver can provide the crucial context needed for agentic development to actually work. They've been an invaluable partner since day one.

Matt Nassr

Head of Global Data and AI Transformation at Optiver

Company

Optiver

Industry

Financial Services

Company Size

2,000 employees

Use Case

Agentic Development

Challenge

Context Collapse Became the Bottleneck Between Engineers, Agents, and Optiver's Market Advantage

As a leading tech and research-driven trading firm, Optiver oversees millions of lines of code where development velocity is a direct competitive advantage. To keep pace with growing system complexity, increasingly ambitious research, and tighter time-to-alpha windows, its engineering team leans heavily on AI coding agents as a core force multiplier.

Reliable agent output, however, depends on accurate, up-to-date codebase context. With 120+ commits shipping every day, context collapse is a very real concern. "At our development velocity, all manual context efforts are pretty much out of date the moment they shipped," says Matt Nassr, Head of Global Data and AI Transformation.

When AI dramatically accelerated researchers' capacity to consume data, demand on Matt's team grew faster than it could be absorbed. Engineers who should have been building and validating data pipelines were instead fielding context questions such as "does a feed for this instrument exist," "what fields does this dataset expose," and "can this pipeline support intraday resolution," all while maintaining context that went stale as soon as it was written. Optiver's AI agents should have been able to answer these ad hoc questions themselves, but Matt's team repeatedly found itself redirecting agents that lacked the codebase context to operate reliably.

Matt spent several months building a naïve RAG-based workaround: exporting context to markdown, checking it into repositories, and pointing agents at those directories. The approach hit a hard limit because RAG was never the right architecture for code. Chunking source code for vector retrieval destroyed the dependency relationships and call graphs that define how Optiver's systems actually work. Incomplete context produced hallucinations, and every hallucination forced Matt's team to stop, redirect, and re-prompt. At Optiver's scale, that correction overhead was disqualifying.
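The failure mode is easy to reproduce. Fixed-size chunking for vector retrieval has no notion of call graphs, so a call site and the definition it depends on routinely land in different chunks, and the retriever sees each in isolation. A toy illustration (the function names are invented, not Optiver's code):

```python
# Hypothetical source file: a caller and its callee, separated in the file.
SOURCE = """\
def settle_trade(order):
    price = mark_to_market(order)
    return price * order.qty

# ... hundreds of unrelated lines ...

def mark_to_market(order):
    return order.last_px
"""

def chunk(text: str, size: int = 80) -> list[str]:
    """Naive fixed-size chunking, as a vector store would ingest it."""
    return [text[i:i + size] for i in range(0, len(text), size)]

chunks = chunk(SOURCE)
caller = next(c for c in chunks if "settle_trade" in c)
# The chunk holding the call site no longer contains the callee's definition,
# so a retriever can surface one without the other.
```

Any retrieval step that starts from these chunks has already lost the edge between `settle_trade` and `mark_to_market`, which is exactly the relationship an agent needs to modify either one safely.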

When a colleague suggested Driver, Matt's first reaction was skepticism: the product seemed too good to be true for a problem he knew firsthand was extraordinarily hard to solve. A technical deep dive with the Driver team resolved most of that skepticism, and he wanted to see it in action. Driver had built precisely the architecture Matt had concluded was necessary: a compiler that ingests source code and traces every dependency and relationship ahead of time, rather than attempting to retrieve that understanding at query time.

Solution

Pre-Computed Context That Keeps Pace With Optiver's Code

Driver became the context infrastructure layer Optiver needed, giving its AI agents a reliable, up-to-date understanding of an evolving codebase without the manual lift or technical overhead.

Getting there was easier than Matt anticipated. Multi-tenant deployment was live in under two weeks, starting from a single codebase. Optiver connected Driver through GitHub for source code ingestion and a single Model Context Protocol (MCP) server—the one interface that surfaces Driver's context inside both Claude and Cursor. With five to ten engineers using it regularly, Matt's team validated the approach across several codebases before committing to a full-scale rollout, and the results were consistent across all of them.

Driver's MCP server is what made the difference operationally. Rather than requiring Matt to manually prepare and inject context before every task, Driver's MCP server connects Claude and Cursor directly to a live, compiler-derived model of Optiver's codebase. When an agent starts a task, it queries Driver's MCP server and receives accurate, dependency-aware context automatically—no prompt construction, no repo maintenance, no redirection when context is incomplete. "The MCPs have been huge," Matt says. "The barrier to entry is so much lower."
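In protocol terms, that handoff is a standard MCP `tools/call` request over JSON-RPC 2.0: the agent names a tool on the server and passes arguments, and the server returns the context as the tool result. The tool name and argument schema below are hypothetical (Driver's actual tool surface isn't documented here); only the envelope shape comes from the MCP specification:

```python
import json

def build_context_request(request_id: int, symbol: str) -> dict:
    """Build an MCP `tools/call` JSON-RPC 2.0 request asking a context
    server for dependency-aware context on a code symbol.

    `get_code_context` and the `symbol` argument are illustrative
    placeholders, not Driver's real tool schema.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_code_context",        # hypothetical tool name
            "arguments": {"symbol": symbol},   # hypothetical argument
        },
    }

request = build_context_request(1, "pricing.feeds.IntradayFeed")
print(json.dumps(request, indent=2))
```

Because both Claude and Cursor speak this protocol natively, one server definition covers both clients, which is what keeps the barrier to entry low.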

The reason that integration stays lightweight comes down to how Driver generates context in the first place. Rather than processing code on demand, Driver's transpiler ingests source code and compiles it ahead of time—producing Deep Context Documents for every file, symbol, and dependency: architecture overviews, code maps, onboarding guides, and file-level documentation with full call graph context. Those documents refresh automatically on every commit, so the context burden that had consumed months of Matt's team's time before Driver simply ceased to exist.
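What makes per-commit refresh tractable is that a compiled dependency graph tells you exactly which documents a change invalidates: the touched files plus their transitive reverse dependents, and nothing else. The sketch below illustrates that reverse-dependency walk under an assumed graph shape; none of the file names or function names reflect Driver's implementation:

```python
from collections import deque

# Hypothetical import graph: file -> files it imports.
DEPS = {
    "feeds/intraday.py": ["core/schema.py"],
    "pipelines/alpha.py": ["feeds/intraday.py", "core/schema.py"],
    "core/schema.py": [],
}

def stale_documents(changed: set[str], deps: dict[str, list[str]]) -> set[str]:
    """Return every file whose context document must be recompiled:
    the changed files plus all transitive reverse dependents."""
    # Invert the graph: file -> files that import it.
    rdeps: dict[str, set[str]] = {f: set() for f in deps}
    for f, imports in deps.items():
        for imp in imports:
            rdeps.setdefault(imp, set()).add(f)
    # Breadth-first walk outward from the changed files.
    stale, queue = set(changed), deque(changed)
    while queue:
        for dependent in rdeps.get(queue.popleft(), ()):
            if dependent not in stale:
                stale.add(dependent)
                queue.append(dependent)
    return stale

# Editing core/schema.py invalidates both files that transitively import it;
# editing a leaf like pipelines/alpha.py invalidates only itself.
print(stale_documents({"core/schema.py"}, DEPS))
```

A commit that touches a leaf file triggers a tiny amount of recompilation, while a change to a widely imported module fans out as far as it needs to, which is why the refreshed context can keep pace with 120+ commits a day.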

Matt uses Driver across two distinct workflows, the larger being agentic code changes. Agents now orient themselves to the codebase before touching anything, understanding what changes are needed, what tests apply, and how the system is structured. The second is onboarding and codebase Q&A. When a new engineer, trader, or researcher needs to understand a codebase, they can now query it directly rather than pulling in the engineer who built it.

The value extended well beyond Matt's engineering team. With Driver, data consumers can query the codebase directly through the same agents the engineering team uses, getting accurate answers without pulling in additional people. The questions that had once interrupted Matt's most senior engineers stopped routing through people entirely, freeing engineers to focus on development.

This partnership empowered Optiver to turn data infrastructure from a bottleneck into a throughput engine—giving data consumers direct access to codebase context, freeing engineers to build without interruption, and keeping the pipeline that trading decisions depend on moving at the pace the business requires.


Results

5x Fewer Agent Interventions Across Multi-Million Line Trading Systems

With Driver, Optiver augmented a critical, fast-moving codebase with a deterministic context layer, enabling AI agents to operate reliably at scale while Matt's team focuses on building the infrastructure that keeps the research pipeline moving.

The next phase includes a dedicated enterprise deployment and a scale target beyond 10 million lines of code. What began as one team's context blocker has become the foundation for how Optiver's entire organization plans to build with AI.

We didn't think our agents could do what they're doing now. What changed wasn't the model. It was the context. That's what Driver gave us—and that's why we're scaling it across the entire organization.

Matt Nassr

Head of Global Data and AI Transformation at Optiver