
The CPD Fluency Trap: Why AI-Assisted Continuing Professional Development (CPD) Feels Better Than It Is

  • Writer: Imelda Wei Ding Lo
  • 6 days ago
A lawyer frowns at a laptop screen in a dimly lit office, caught between the warm glow of a desk lamp and the cool light of the screen.

Many platforms producing accredited continuing professional development (CPD) courses for lawyers are now actively encouraging AI-assisted content production.

The mechanics are simple: just upload source PDFs, and the platform’s AI generates professional-looking slides, times them to an AI-voiced script, produces quizzes, and assembles learning materials that sync across the entire module. 

At first glance, the results can read as authoritative. However, that surface credibility is actually the crux of the problem.

Read on to learn why, how AI-assisted legal CPD needs to change to reduce risk, and what a responsible workflow looks like.

Why AI-Generated CPD Looks More Reliable Than It Actually Is

AI-generated CPD looks good these days, at least on the surface. Topics appear in a logical order, the design is polished, and the overall structure signals credibility. The experience can feel almost frictionless, which is precisely the problem.

Here’s why. Suppose a lawyer is developing a CPD module on professional negligence and uploads a PDF overview of the relevant law. The platform uses the PDF to generate a clean 10-slide deck with a confident voiceover, a case citation in the speaker notes, and 10 multiple-choice questions at the end to test learners’ recall. 

At first glance, the design appears polished, the structure seems logical, and the citation looks right.

However, on closer inspection, the cited case is a hallucination, a plausible-sounding reference that the AI constructed through pattern recognition rather than from verified legal sources. 

As for the limitation period stated in the quiz question, the law changed two years ago, so the figure is no longer accurate. And the scenario used to illustrate professional negligence features a fact pattern so generic it doesn't map onto how such claims actually arise in practice.

None of these flaws is immediately obvious. 

Because these AI products appear reliable, legal CPD creators may review them less carefully than they would content built from scratch or content developed with AI as a thinking partner rather than a wholesale production engine.

The result is a truncated approval process that functions more like a visual scan than a substantive review. What gets through may include hallucinated case citations, outdated statutory references, and scenarios so disconnected from real practice that a lawyer taking the course can't understand the content or apply it to their own work.

A Responsible Approach to AI-Assisted Legal CPD

To develop evidence-based legal content for CPD courses, course creators must treat AI output as a starting point rather than a finished or near-finished product. In other words, they must apply human judgment at every stage of the process.

Here’s a three-stage framework to help you get started.

Stage 1: Define the Educational Decision

Before developing any content or opening any AI tool, ask yourself the following questions: 

  • What educational problem is this module designed to solve, and for what jurisdiction(s)?

  • What should a lawyer be able to do differently after completing it? 

  • Who is the intended learner, and what do they already know?

To answer these questions, you need to identify a specific knowledge or practice gap in a specific jurisdiction (or set of jurisdictions) — the difference between what legal professionals currently know or do and what current law, regulation, or professional standards require.

You can find these practice gaps by: 

  • Looking at what’s popular on legal CPD sites and with regional bar associations.

  • Monitoring legal magazines and practitioner publications for emerging issues.

  • Paying attention to guidance or notices issued by law societies and regulators in response to observed practice problems.

  • Consulting complaints data and disciplinary decisions published by law societies and regulatory bodies. These show where lawyers are actually getting things wrong.

Without a clearly defined practice gap, AI has no meaningful constraint on what it can generate. It can produce content that resembles legal education without addressing what lawyers actually need to know or do differently.

The answers to these questions should also account for your intended learner. A module addressing a gap for junior associates looks different from one designed for senior practitioners, even when the underlying legal issue is the same.

Stage 2: Source Calibration

Once you’ve defined the educational decision, the next step is gathering the right sources.

The sources you feed an AI tool set the ceiling on what it can produce. If you feed it PDFs that are vague, outdated, jurisdiction-ambiguous, or themselves barely edited AI output, you’ll get content that inherits those problems.

So, before uploading any source material, ask yourself the following:

  • Does this source reflect current law in the relevant jurisdiction(s)?

  • Does it come from a primary or authoritative secondary source, for example, a primary legal database or recognized reference publication such as CanLII, LexisNexis Quicklaw, or the Canadian Encyclopedic Digest?

  • Is it appropriate in scope for the practice gap you identified? For example, if you’re designing a module addressing a specific procedural gap in Ontario limitation periods, an academic source covering the concept of professional negligence in UK common law may not be appropriate.

Stage 3: Review Before Approval

After AI has generated output based on your input, you need to perform a series of checks before submitting the course for approval:

  1. Check that every factual claim has a verified citation you can trace to its source. Pay special attention to limitation periods, case names, jurisdictions, procedural requirements, and regulatory thresholds; AI is particularly prone to mixing them up. For example, the case name could be correct while the year and full citation are wrong.

  2. Adjust the content's certainty level. AI often sounds overconfident even when it shouldn’t be. If AI has smoothed over areas where the law is unsettled, jurisdiction-specific, or actively evolving, inject nuance into those sections manually.

  3. Assess whether the scenarios reflect real practice. AI tends to generate case studies and hypotheticals that are topically correct but factually generic, which makes the course feel vague and the concepts harder for learners to grasp. For example, the scenarios could cover the right area of law but feature fact patterns that don't reflect how matters actually arise, develop, or resolve in practice.

  4. Confirm all learning objectives have been covered. Ensure all content helps learners achieve the goals you set when designing the module. Ask yourself: would a lawyer who completed this module be able to apply what they learned to an actual file? If the answer is uncertain, the module isn’t ready for publication.

  5. Review the tone of the content. Ideally, legal CPD content should strike a balance between relatable and professional. If it’s too peppy or, at the other extreme, dry and wordy, refine it for tone.
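A small script can speed up the first of these checks by surfacing the strings a reviewer must verify by hand. The sketch below is a hypothetical helper, not part of any CPD platform: it uses deliberately loose, illustrative regex patterns to flag citation-like phrases and limitation-period figures in generated text, so a human can check each one against a primary source such as CanLII. Nothing it flags is auto-approved, and its patterns are far from a complete model of Canadian citation formats.

```python
import re

# Rough pattern for "Party v Party, 2019"-style references (illustrative only).
CASE_CITATION = re.compile(
    r"\b[A-Z][\w.'-]*(?:\s+[\w.'-]+)*\s+v\.?\s+[A-Z][\w.'-]*(?:\s+[\w.'-]+)*"
    r",?\s+\[?\d{4}\]?"
)

# Rough pattern for phrases like "2-year limitation period".
LIMITATION_PERIOD = re.compile(r"\b(\d+)[-\s]year\s+limitation\s+period\b", re.I)

def flag_for_review(text: str) -> dict:
    """Collect strings a human reviewer must verify against primary sources."""
    return {
        "possible_case_citations": CASE_CITATION.findall(text),
        "limitation_periods": LIMITATION_PERIOD.findall(text),
    }

sample = "Under Smith v Jones, 2019, a 2-year limitation period applies."
print(flag_for_review(sample))
```

A tool like this only narrows where the reviewer looks; it cannot confirm that a flagged citation is real, current, or correctly attributed, which remains a human task.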

AI has streamlined the production of legal CPD. However, that efficiency comes with real risks if you don’t keep human judgment in the process.

That’s where this framework comes in. While it won’t guarantee against error, it can help keep human judgment where it belongs: at every decision point in the process, from identifying the practice gap to approving the final module. 

That's what responsible legal CPD production has always required. AI hasn't changed that requirement. It's just made human judgment easier to skip.

