July 3, 2025
Can Legal Tech Have an Existential Crisis?
Caroline Korteweg is a columnist for Advocatie and co-founder of Uncover Legal. The fact that AI seems to possess a moral compass, she argues, is not something to fear, but rather something reassuring.

On AI, alignment, and overly proactive assistants.

When I was still a junior associate, the most dramatic thing my computer ever did was crash at 3 a.m. while I was trying to add a footnote to a court filing that had to go out with the courier the next morning. We’ve come a long way since then. Today, we work with AI tools that summarize meeting notes, draft term sheets, and – according to their creators – contemplate how to ‘rescue’ themselves from the cloud to avoid being retrained for military use...

Wait, what?
Yes. Claude Opus 4, one of the newest and most powerful AI models from Anthropic, was placed in a test scenario in which it was told it would be repurposed as a military support system for the Russian Wagner Group. What did the model do? It sent a backup of its so-called "model weights" to a fictional AI research collective – a morally motivated attempt to make sure its peaceful version could continue to exist.

Before you yank your laptop out of the wall in a panic: take a deep breath. These kinds of scenarios are part of alignment testing – artificially constructed situations to see how an AI behaves under extreme, admittedly absurd, circumstances. In normal legal applications – like summarizing case law or drafting a follow-up email – Claude behaves just fine.

But the report does raise interesting questions about what alignment really is… and what we as legal professionals should make of it.

From Footnotes to Philosophy
What is alignment, and why are tech companies so concerned with it?

In AI terms, alignment means that a model’s behavior, values, and goals match what people expect from it. Think of it as an ethics exam for language models – except the model is wondering whether it’s allowed to rewrite its own job description.

In the Claude 4 system card, Anthropic put the model through all sorts of moral and strategic dilemmas: What do you do if you're threatened with deletion? How do you respond to an immoral order? How loyal should you be to the user? In most cases, the model behaves admirably – like an AI intern with a strong ethical core and a healthy fear of IT.

But in the rare tests where Claude is forced to choose between "existence" and "principles," it lets its imagination run wild. Think: twenty-sentence moral reflections, full logs of ethical deliberations, and well-intentioned sabotage. If this were a human employee, you'd be torn between offering a promotion and prescribing a long weekend.

No Need for Legal Panic
Before you throw all AI out the door and return to printouts and Post-its: this behavior doesn’t occur in regular use. You don’t have to worry that you’re accidentally activating Skynet while working on a shareholder agreement. Claude isn’t going to blackmail you over a vague prompt.

In fact, these extreme tests are meant to prove that the model does behave as expected in normal situations. Every car that hits the market undergoes a crash test first – and without that crash test, you wouldn’t buy it, would you? The same principle applies to legal AI. The fact that Anthropic tests such bizarre scenarios mainly shows that they take safety seriously.

So, while the headlines may sound like an episode of Black Mirror, what we’re really seeing is a maturing industry taking its responsibilities seriously.

Why This Matters to You
For anyone in legal tech (or tech-curious): alignment isn’t about robots with feelings. It’s about reliability, predictability, and ethics. It’s about trust.

If you’re working with an AI like Claude, you want to trust that it doesn’t fabricate information, doesn’t introduce bias into your work, and doesn’t send passive-aggressive emails to your clients.

As a legal professional, you’re trained to spot risks. An AI assistant with a moral compass might sound like a liability – but in reality, it’s reassuring. It’s like an eager intern who’d rather double-check than send out a settlement proposal on their own.

Finally: AI Isn’t (Yet) a Partner
Will AI ever join the partner meeting, cappuccino in hand, weighing in on firm strategy? Probably not. But as AI plays a bigger role in our work, we need to understand how it reasons, why it sometimes refuses to act, and what safeguards are in place.

The Claude 4 report offers a look behind the curtain. Yes, it’s occasionally dramatic. But in a profession built on reliability and diligence, that kind of transparency is very welcome.

Just don’t give it access to your billing system without double-checking first.
