The Second Risk in Legal AI

The risk isn't just what AI generates. It's where your documents go when AI uses them.

By Chris Fernelius · April 2026 · 6 min read

Every serious conversation about legal AI risk starts in the same place. Is the model accurate? What happens when it hallucinates? Who reviews the output before it reaches a client? Who is responsible when something goes wrong?

These are the right questions. The legal profession has been deliberate about asking them, and that discipline is appropriate given the stakes.

But there is a second risk that receives far less attention: not what the AI generates, but what happens to your documents when AI tools use them. In legal, where documents live inside carefully governed environments, this distinction matters enormously. And most legal AI data governance programs aren't accounting for it yet.

What Document Governance Actually Means

Documents in legal don't sit in neutral folders. They live inside governed systems, and the governance that surrounds them is not administrative overhead. It is the foundation of how legal work is done responsibly.

Matter permissions determine which attorneys can access which files. Ethical walls create hard access boundaries between teams working on opposite sides of a transaction or dispute. Retention schedules define how long documents must be preserved and when they must be destroyed. Audit trails record who accessed what, and when. Client-specific guidelines restrict how certain documents can be used, shared, or stored.

In large firms, two teams may be working on opposite sides of the same deal. The controls that separate their work are not preferences; they are professional obligations. They exist because clients expect them, regulators require them, and the structure of the legal profession depends on them.

"Most governance frameworks I review address what lawyers are permitted to do with AI. Few address what happens to the documents when they do it. That is the gap. And you cannot manage what you cannot see, which means most firms are currently managing only half the risk."

— Kathleen Randall, Legal Value Engineer, NetDocuments

When we talk about legal AI data governance, we're not talking about IT policy documents. We're talking about the live, enforced controls that keep privileged content safe and professional obligations intact.

The Second Risk: When Governance Doesn't Travel

Here is the problem. When a lawyer copies a document into an external AI tool (a general-purpose assistant, a third-party drafting platform, an AI summarization service), they may be creating a second copy of that document outside the governed system it came from.

This is not primarily a cybersecurity question, though that matters too. It is a governance question. The controls tied to the original document may not travel with it. Matter restrictions become harder to enforce. Ethical walls become harder to monitor. Retention and deletion become harder to apply consistently, because the organization may not know a second copy exists. Audit trails that once told a complete story now tell a fragmented one.

Over time, as more attorneys use more AI tools with more documents, organizations can lose a clear picture of where sensitive content has gone, who has accessed it, and which rules still govern it. We call this governance drift — the gradual erosion of control that happens not through any single breach, but through the steady accumulation of ungoverned copies.

"Every time a document leaves the DMS to power an external AI tool, a governance gap opens. One gap is manageable. A hundred gaps, across hundreds of matters, multiplied by every AI tool the organization uses introduces significant risk. The approach of sending documents to AI leads to organizations not being able to answer the most basic compliance and governance questions in legal: where is this document, who controls it, and who can access it?"

— Kyle Kissell, Director of Legal Value Engineering, NetDocuments

In a litigation context, documents sitting in ungoverned external systems create serious exposure. A firm may not even know a relevant document exists outside its DMS until a discovery request forces the question.

Governance drift is not a dramatic failure. It's a slow one. And in legal, slow failures are often the most expensive.

The Architecture That Created This Problem

Governance drift isn't random. It has a specific technical cause, one baked into how most AI integrations are built.

Some AI integration architectures (what we'll call the sync model) are built on document copying. An AI vendor pulls documents from the firm's document management system into its own storage, then keeps those copies continuously updated. On the surface, this looks like integration. In practice, it creates three compounding governance risks.

First, there is lag. The moment a permission changes in the source system (a matter closes, an attorney is walled off, a client restricts access), there is a window before that change propagates to the vendor's copy. During that window, the document is accessible in a way the firm's governance controls say it shouldn't be.

Second, there are two copies of the truth. When the same document exists in both the DMS and an external AI system, there is no clear governance hierarchy. Which version controls? Which audit trail is authoritative? Which retention policy applies?

Third, there is the N-system problem. If every AI tool a firm adopts syncs documents into its own storage, the firm now has N copies of its content, each with its own access controls, its own retention behavior, and its own incomplete audit trail. Every matter update, access revocation, or deletion event that happens in the source system must somehow propagate to every synced copy. At scale, this is not a solvable problem. It is a permanent compliance liability.
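
To make the fan-out concrete, here is a minimal sketch in Python. Every name in it is hypothetical, invented for illustration rather than drawn from any vendor's actual integration; the structural point is simply that one permission change in the source system becomes N separate pushes, and any push that fails or lags leaves a copy the firm no longer controls.

```python
from dataclasses import dataclass

@dataclass
class SyncedCopyStore:
    """An external AI tool that keeps its own synced copy of firm documents.
    Hypothetical class for illustration only; not any vendor's real API."""
    name: str
    permissions: dict              # doc_id -> set of user_ids allowed to see it
    reachable: bool = True         # simulates outages and sync lag

    def revoke_access(self, doc_id: str, user_id: str) -> bool:
        """Apply a permission change pushed from the source system."""
        if not self.reachable:
            return False           # the change never lands; this copy goes stale
        self.permissions.get(doc_id, set()).discard(user_id)
        return True


def propagate_revocation(stores, doc_id: str, user_id: str):
    """Push one ethical-wall change to every synced copy; return the stores left stale."""
    return [s.name for s in stores if not s.revoke_access(doc_id, user_id)]


# One permission change in the DMS becomes N separate updates to external systems.
stores = [
    SyncedCopyStore("drafting-assistant", {"doc-123": {"alice", "bob"}}),
    SyncedCopyStore("summarizer", {"doc-123": {"alice", "bob"}}, reachable=False),
    SyncedCopyStore("research-copilot", {"doc-123": {"alice", "bob"}}),
]

still_granting_access = propagate_revocation(stores, "doc-123", "bob")
print(still_granting_access)   # ['summarizer']: a stale copy the source system no longer controls
```

Real sync pipelines are more sophisticated than this, but the shape of the problem is the same: correctness now depends on every external system applying every change, every time.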

"Just-in-time, permissioned retrieval is the emerging standard for agentic integrations with a system of record. When an AI tool retrieves documents in real time from your system of record, it works with your controls. When it syncs a copy, those controls stop at the door. Every permission change, every matter closure, every retention event that happens afterward has to somehow reach that copy."

— Scott Kelly, VP of Product, NetDocuments

The practical consequence becomes most acute when a discovery request arrives. A firm served with litigation hold obligations may be required to collect and produce documents from every system that holds a relevant copy, including AI systems it adopted months ago and has since forgotten about. Most firms could not name every such system today.

The sync model didn't arise from negligence. It arose because many document management systems lacked the capabilities AI vendors needed: specifically, the ability to retrieve documents in real time, with full context, at the moment of query. When the DMS couldn't support real-time retrieval, copying was the only option. But it was always a workaround, not a solution.

The Right Model: AI Comes to the Document

The alternative to sync is retrieval. Instead of pulling documents into external storage, the AI queries documents at the moment they're needed, in real time, from the system of record, without creating a copy.

In a retrieval-based integration, governance is enforced at the point of query. Ethical walls are checked against the live system, not a cached snapshot. Retention policies apply to the original document, not a disconnected copy that may never receive a deletion instruction. The audit trail is complete and centralized, because there is only one place the document ever lived.

This is what sound legal AI data governance actually looks like in practice: the AI comes to the content. The content doesn't go to the AI. The DMS remains the single source of truth, and the retrieval model enforces that at the architectural level, not just at the policy level.
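
As a contrast to the sync sketch above, here is the point-of-query pattern in the same illustrative style. The dms interface and its method names (matter_permission_allows, ethical_wall_blocks, audit_log, read_content) are placeholders invented for this sketch, not a real API; what matters is where the checks happen, not what they are called.

```python
class AccessDenied(Exception):
    """Raised when a live governance check fails at query time."""


def retrieve_for_ai(dms, doc_id: str, user_id: str) -> str:
    """Point-of-query retrieval: checks run against the live system of record, and the
    content is handed over transiently, never persisted outside the DMS."""
    # A matter closure or a new ethical wall applies immediately; there is no cached
    # snapshot or synced copy for the change to catch up with.
    if not dms.matter_permission_allows(user_id, doc_id):
        raise AccessDenied(f"{user_id} lacks matter access to {doc_id}")
    if dms.ethical_wall_blocks(user_id, doc_id):
        raise AccessDenied(f"{user_id} is walled off from {doc_id}")

    # One centralized audit trail, because the document only ever lives in the DMS.
    dms.audit_log(actor=user_id, action="ai_retrieval", doc_id=doc_id)

    # Content is returned for this query only; no second copy is written anywhere.
    return dms.read_content(doc_id)
```

Because the check and the read happen in one place, a permission change takes effect the instant it lands in the system of record, and there is only one audit trail to reconcile.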

"The law firms and organizations that are getting this right aren't asking AI to change how they govern documents. They are asking AI to work within the governance they have already built and invested in. When AI comes to the document rather than the document going to AI, everything an organization already has in place — permissions, ethical walls, audit trails — continues to do its job safeguarding valuable data in the DMS."

— Brandall Nelson, Legal Solutions Director, NetDocuments

NetDocuments' ndConnect program is built on this model. Rather than enabling document export or synchronization, the NetDocuments MCP (Model Context Protocol) server exposes intelligent, real-time retrieval to workflow partners, allowing AI tools to work with documents where they live, under the governance controls already in place, without creating a copy that your governance never sees.
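
For readers who want to see the mechanics, here is a deliberately generic sketch of how a retrieval-only tool can be exposed over MCP using the open-source MCP Python SDK. It is not the ndConnect implementation, and every function in it is a hypothetical stand-in; it exists only to show that an MCP server can offer retrieval at query time without offering export or sync.

```python
# Generic illustration using the open-source MCP Python SDK (the "mcp" package).
# This is NOT the ndConnect implementation; the tool and helpers below are invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dms-retrieval-demo")


def live_permission_check(user_id: str, doc_id: str) -> bool:
    """Stand-in for a real call to the system of record's live access controls."""
    return (user_id, doc_id) == ("alice", "doc-123")


def read_from_system_of_record(doc_id: str) -> str:
    """Stand-in for reading the governed original; nothing is copied or cached here."""
    return f"<contents of {doc_id}, served for this query only>"


@mcp.tool()
def fetch_document(doc_id: str, user_id: str) -> str:
    """Return document text for a single query, after a live governance check.
    A denial surfaces as an error to the AI client; no sync, no second copy."""
    if not live_permission_check(user_id, doc_id):
        raise PermissionError(f"{user_id} is not permitted to read {doc_id}")
    return read_from_system_of_record(doc_id)


if __name__ == "__main__":
    mcp.run()   # serves the tool to a connected AI client (stdio transport by default)
```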

Why This Distinction Matters

A sync integration and a retrieval integration look similar from the outside: both connect your DMS to an AI tool. But they have fundamentally different governance implications. One preserves your controls. The other scatters them across every system that holds a copy. In legal, that difference is not cosmetic. It is the difference between a governed AI workflow and a compliance liability waiting to happen.

The Questions Legal AI Programs Should Be Asking

Legal AI policy has matured considerably around the right questions for models: which tools are approved, how outputs are reviewed, what oversight is required before AI-generated work reaches a client. That maturity is real, and it's important.

The next step in maturity is asking a parallel set of questions about documents and data movement: questions that are just as important, and far less commonly asked.

  • When AI is used with a document, where does that document go?
  • Is a copy created outside the governed system?
  • Do matter permissions and ethical walls remain intact in the AI environment?
  • Can the organization produce a complete, unbroken audit trail of how documents were accessed and used?
  • Are retention and deletion policies applied to every copy of every document, or only the original?
  • If a permission changes in the source system, how quickly does that change propagate everywhere the document exists?

Organizations that can answer these questions with confidence have a more mature legal AI data governance program than most. They've moved past the model risk conversation into the data movement conversation, and that is where the real, durable governance work happens.

"These are the questions legal organizations should be asking before adopting an AI tool. Evaluation of the model and its output is not enough — firms need to be asking what the tool does with their documents. AI tools are amazing and can dramatically improve workflows and work product for legal organizations, but if the cost is data and document security and ethical incidents, then the juice is most definitely not worth the squeeze."

— Eric Duncan, Legal Solutions Director, NetDocuments

In the next phase of legal AI, the competitive advantage won't come from having the newest model. It will come from designing workflows that preserve confidentiality, control document movement, and keep governance intact. The organizations that get this right won't just be the fastest AI adopters. They'll be the ones that can show they adopted AI without losing control of the document.

Want to understand how ndConnect enables AI tools to work with your documents in place, without copying, without sync lag, without governance drift? Learn how NetDocuments' retrieval-based integrations keep your controls where they belong.

Explore ndConnect →