When Law Meets the Machine: Anne Marie Engel on AI, Accountability, and the Future of Legal Integrity

In a profession built on judgment, ethics, and trust, the rapid adoption of artificial intelligence has opened a new frontier of uncertainty. For Anne Marie Engel, a third-generation attorney and founder of Crowned Legal, this moment represents both promise and peril.

Engel, who has spent three decades navigating employment law, contracts, and regulatory compliance, is less concerned with what AI can do and more with what it might undo. “Professional judgment,” she says, “is the crux of what a good lawyer does. There’s no algorithm for intuition, experience, or integrity.”

Her perspective is a rare blend of pragmatism and principle. She uses AI herself, but with clear boundaries. “I’ll use it to help organize my thoughts or connect legal concepts,” she explains. “But I don’t use it to do the analysis. That part still requires a human mind, a human conscience.”

It’s a line she draws carefully, and one she believes the profession must defend.

The Integrity Gap

For Engel, the danger isn’t in the tool; it’s in the temptation. “If lawyers are using AI to prepare complete motions or briefs and just saying, ‘please draft this for me,’ that’s hijacking your own thought process,” she warns. “You should create the legal analysis yourself, and then, if you want, use AI to refine it. But not the other way around.”

The issue, she says, is that convenience is colliding with ethics. When Deloitte reportedly charged the U.S. government $400,000 for a report written entirely by AI – riddled with errors – it became, to Engel, a cautionary tale. “That’s what happens when human judgment is replaced with automation. It’s not just sloppy work. It’s dishonorable.”

To Engel, this erosion of honor is symptomatic of something larger: an “unholy alliance” between big law and big business. “You have firms charging over a thousand dollars an hour, clients willing to pay it, and nobody asking if the work reflects the values the profession was built on. AI just gives that carelessness a faster, more expensive vehicle.”

The Case for Guardrails

Engel believes organizations need policies that define exactly how AI should, and shouldn’t, be used. “Talk about it. Get it on the table,” she advises. “Decide which tools are acceptable, what they can be used for, and where the human judgment line must remain.”

She’s especially skeptical of AI tools marketed specifically to lawyers. “Legal AI is built for the courtroom,” she says. “But business law isn’t courtroom-driven. It’s about assessing risk, understanding systems, and advising real people. Those nuances can’t be coded.”

She advocates for a framework where AI serves the lawyer, not replaces them. “It’s a great tool for articulating or formatting ideas,” she says. “But the concepts, the heart of the argument, must be yours.”

The Privacy Paradox

The other concern is ownership. “Every time we use ChatGPT or Grok or one of these platforms, we’re bound by their terms of service,” Engel explains. “Right now, they’re generous. The output belongs to you. But that can change overnight. These are one-sided agreements.”

She warns that the more professionals feed these systems, the more leverage the companies gain. “Every query adds to their database. Once they’ve gathered enough, the terms will shift. They’ll own the work product, maybe even the intellectual patterns behind it. That’s the quiet tradeoff most users don’t see.”

To Engel, this isn’t paranoia; it’s pattern recognition. “Lawyers are trained to read fine print,” she says, smiling. “And the fine print is where the danger always hides.”

A Human Compass in a Digital Age

What separates Engel from most voices in the AI debate is her insistence that technology is not neutral – it reflects the ethics of its user. “Integrity is not a system setting,” she says. “It’s a choice you make, over and over, in how you practice law.”

Her Italian heritage informs her approach. “I lawyer like an Italian mom,” she says with a laugh. “Protective, intuitive, direct. I see the risk before my clients do, and I won’t let them walk into it.” That instinct, she believes, is irreplaceable. “AI can process data, but it can’t read people. It doesn’t sense hesitation in a voice, or the politics behind a boardroom decision. It doesn’t have empathy.”

She points out that those subtleties, the human signals that underlie judgment, are precisely what make a good lawyer indispensable. “You can’t teach a model to care about right and wrong. You can only program it to predict outcomes.”

The Future of Accountability

As AI becomes more embedded in legal workflows, Engel sees an urgent need for accountability frameworks. “Firms need to ask: who is responsible when AI gets it wrong?” she says. “Bias, inaccuracies, and confidentiality breaches aren’t software bugs. They’re ethical failures.”

Her advice to both firms and clients is clear: establish transparent oversight. “You can use AI to support your work,” she says, “but you can’t delegate your moral responsibility to it.”

That, she argues, is where the next great test of the legal profession lies. “For centuries, law has been about trust. If integrity is the foundation of law, then we have to decide what it means to maintain that foundation when the builder is no longer human.”

In Engel’s world, and for a profession that prides itself on precedent, the answer isn’t to reject innovation but to regulate it with wisdom. For integrity, unlike technology, doesn’t evolve. It must be preserved.
