
Trust, but Verify: What California’s Report on AI Policy Means for Legal AI Tools

EvenUp Law

January 8, 2026


Artificial intelligence is transforming the legal profession, especially in personal injury. But attorneys can’t be expected to master every nuance of AI. 

California’s recent Report on Frontier AI Policy recognized that most consumers can’t fully grasp the differences, best uses, and risks of AI systems. Instead, AI companies bear the responsibility of explaining these complexities to their customers and providing meaningful transparency in the marketplace.

For personal injury lawyers, this means applying a “trust, but verify” mindset to every AI tool they consider and every action taken with it. Competence today is about ensuring the tools you rely on are transparent, verifiable, and aligned with the unique demands of PI practice. A well-defined firm AI policy helps here, spelling out approved platforms and holding staff accountable to AI usage best practices.

Read on to explore where firms can go wrong, what to ask when evaluating legal AI, and how to identify a personal injury partner who delivers more than technology.

Proven Action Plan for Your AI Adoption

Your step-by-step roadmap to launch, adopt, and scale AI across your PI firm.

Download Action Plan

An attorney’s duty when using AI is to abide by the rules of competence. 

In Formal Opinion 512, the American Bar Association (ABA) cites Model Rule 1.1 as the basic guideline for the use of AI—specifically generative AI (“GAI”)—in the legal industry:

“To competently use a GAI tool in a client representation, lawyers need not become GAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use. This means that lawyers should either acquire a reasonable understanding of the benefits and risks of the GAI tools that they employ in their practices or draw on the expertise of others who can provide guidance about the relevant GAI tool’s capabilities and limitations.”

— American Bar Association Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512 (July 29, 2024)

Attorneys have a duty to acquire a reasonable understanding of the benefits and risks associated with GAI in their practice. But when vendors don’t spell out industry-specific strengths and weaknesses, how can an attorney gather this information? This is where the California Report on Frontier AI Policy provides critical guidance.

The report frames “trust, but verify” as a balanced approach to governing frontier AI. It suggests that policymakers should support innovation and leverage industry expertise while also requiring independent oversight to ensure safety and accountability. 

“Trust, but verify” means encouraging AI development without relying solely on company claims. Transparency, third-party evaluations, whistleblower protections, and adverse event reporting are all essential to validate risks and build public trust.

By embedding these checks early, the report’s authors hope California can unlock AI’s transformative benefits while mitigating potentially severe harms.

The need for transparency in practice is underscored by real-world examples. In Roberto Mata v. Avianca, Inc., a law firm was sanctioned after submitting a sworn affidavit attaching what it represented as full-text judicial opinions, which were later revealed to be fabrications generated by ChatGPT.

The key takeaway from the case is not just the error itself but the firm’s failure to acknowledge it. By doubling down rather than admitting the mistake, the attorneys demonstrated a lack of good-faith dealing.

The absence of transparency in their workflow and their overreliance on AI output without verification were central to the sanctions imposed.

3 Tips for Attorneys to Self-Advocate in AI Evaluations

AI tools are rapidly emerging across nearly every corner of legal practice. While no attorney is required to adopt them, the reality is that using these tools may be the best way to protect your practice and future-proof your firm. 

That said, today’s AI marketplace is challenging to navigate. To protect yourself when evaluating tools, ask three critical questions:

  1. Security Standards: What safeguards does the company have in place? Are they SOC 2 compliant? Have they undergone HIPAA audits when handling sensitive health information?
  2. Transparency About Limitations: Every tool has shortcomings. How candid is the company about them, and how willing are they to walk you through both limitations and strengths?
  3. Technical Expertise: Who is actually building the product? Many companies rely on generic LLMs like ChatGPT for highly specialized tasks, which can lead to significant errors. Look for engineering teams with deep expertise in the legal domain.

You Deserve More Than Just Another Tech Provider

It’s also critical to realize that choosing the right legal AI solution isn’t about finding another software vendor. It’s about securing a trusted partner who helps you practice with confidence—someone who embeds verification, transparency, and legal expertise into every workflow.

For example:

  • A trusted partner goes beyond the minimum by embedding compliance into every stage of product design — not retrofitting security after the fact.
  • A trusted partner doesn’t hide tradeoffs; they openly discuss where human judgment is needed and provide mechanisms (like citations or human-in-the-loop review) to safeguard against risk.
  • A trusted partner invests in vertical AI — designed specifically for PI — and backs it with a team that understands the stakes of your practice, not just the mechanics of AI.

By considering all this, you can cut through the noise, avoid risky implementations, and identify AI solutions that genuinely support—rather than undermine—your practice. 

Cheat Sheets and More for AI Prompting Excellence

Learn how the right prompts can streamline repetitive tasks, speed up document drafting, and surface key details from complex case files in seconds.

Read More

EvenUp Embodies “Trust, but Verify”

At EvenUp, the “Trust, but Verify” framework is built directly into our technology and workflows. 

Every output generated by our AI is accompanied by a line-level citation to the original source documents, so attorneys can independently verify the accuracy of the information. This is foundational to our approach to Document AI.

Unlike generic AI tools that attempt to answer every query confidently, EvenUp’s goal is different: we focus on delivering the correct answer, backed by evidence, rather than speculative or incomplete responses. And with our SOC 2 Type 2 recertification and HIPAA attestation, EvenUp ensures robust protection and privacy measures for your personal injury cases.

This approach recognizes a core truth: without verifiable sourcing, AI cannot be fully trusted. 

By embedding verification into every step, we offer attorneys the efficiency of AI and the reliability of human-level diligence.

Scale Your Firm, Not Your Payroll

Schedule a call today to see how EvenUp’s AI tools automate repetitive tasks, streamline custom drafting, and empower staff to focus on case strategy and client engagement.

Schedule a Call
