01 — The Brief
I made Harland & Finch up
A fictional mid-size UK law firm, complete enough to run a realistic Training Needs Analysis against. Working with a fictionalised client meant I could hold the process to the same standards I’d apply to a real commission, without the constraints of actual stakeholder sign-off or client sensitivity.
The TNA produced a finding that shaped everything that followed: staff at the firm understood phishing well enough to explain it. They couldn’t reliably act on that knowledge under pressure.
Those are different problems. The existing training addressed the first and ignored the second — click-through slides, no consequence mechanics, described by the firm itself as “a tick-box exercise.” Staff who could define phishing in a survey were still making the wrong call when tired, rushed, or covering for a colleague.
02 — Analysis
Is this actually a training problem?
The most useful thing a TNA produces isn’t a list of topics. It’s a clear answer to the question of whether training is the right intervention at all.
Here, it was — but for a specific reason. The gap wasn’t knowledge. It was application under realistic conditions: the speed of a busy day, the pressure of covering for an absent colleague, the social difficulty of questioning a senior partner’s request. You can’t close that gap with an explainer. You need a situation.
I designed three learner personas to map the full risk landscape of the firm: a legal secretary, a senior fee-earner, and administrative staff. Each carries different access levels, different authority, and a different relationship to the firm’s specific threat vectors. Diane — the legal secretary — is the first complete vignette. The other two are designed but not yet built.
Nine learning outcomes, drawn from the TNA. Not all are covered in the current demo — a completed course would address the full set.
| LO | Outcome |
|---|---|
| LO1 | Analyse messages for indicators of phishing and determine the appropriate verification or reporting action before responding. |
| LO2 | Recognise the need to report suspicious messages and near-misses promptly, and follow the firm’s reporting procedure before attempting to resolve the issue. |
| LO3 | Apply a deliberate recipient-checking process to verify the email is addressed to the correct person before sending. |
| LO4 | Check attachments to confirm they are correct, necessary, and relevant before sending. |
| LO5 | Evaluate the risk of alternatives to secure sending methods and choose the approved channel for sensitive information. |
| LO6 | Assess the sensitivity of outgoing information and apply the appropriate level of protection before sending. |
| LO7 | Verify the identity of callers and the legitimacy of requests before acting on verbal instructions. |
| LO8 | Follow the firm’s clean desk and device policy when leaving a workstation unattended. |
| LO9 | Follow the correct escalation procedure when unable to complete a task safely or without appropriate authority. |
03 — Where the Thinking Happened
Four decisions worth understanding
These aren’t the only decisions I made, but they’re the ones that most directly shaped what the experience became.
04 — Working with AI
What collaboration actually looked like
Claude was my thinking partner throughout this project — not a content generator. I used it to pressure-test ideas and work through scenario logic. The exchanges below are drawn from real conversations. In each one, the direction came from me; Claude’s job was to examine whether that direction held up.
The incident report frame
Claude was offering standard narrative frames for the anthology: a message from the IT Director setting up the training, a case file wrapper presenting each vignette as a legal matter, a simple splash screen. All functional. None of them gave the learner any stake in the outcome. I wanted the frame to connect the three vignettes in a way that made the learner feel like an architect of what happened, not a witness to it.
Pressure mechanics
Replicating the pressure of a working day without creating accessibility barriers was one of the harder design problems. Claude started with the obvious answer.
Challenging the modal layout
When the feedback panel for Task 1 was first built, it appeared as a full-screen overlay, covering the email completely the moment the learner submitted a decision. I wanted it repositioned: the feedback explains what the learner should have noticed in the email, and covering the email removes that reference point at precisely the moment the learner most needs it.
05 — The Build
One file, deployed from GitHub to Cloudflare
Task 1 is complete and playable. Diane’s inbox contains the Calloway email — a fraudulent payment instruction impersonating senior partner Richard Calloway. The learner reads it, decides whether to act on it, report it, or investigate further, and navigates the consequences of each path. Forty test cases were defined before the interaction was coded.
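The three paths above lend themselves to a simple decision map. This is a hedged sketch only: the branch names follow the act / report / investigate paths described here, but the data shapes, outcome labels, and feedback strings are illustrative assumptions, not the real build's code.

```javascript
// Hypothetical sketch of Task 1's branch map: three learner choices on the
// Calloway email, each resolving to a consequence state. All names and
// feedback text here are illustrative, not taken from the actual project.
const callowayBranches = {
  act: {
    outcome: "breach",
    feedback: "The instruction was fraudulent; acting without verification releases client funds.",
  },
  report: {
    outcome: "contained",
    feedback: "Reporting first follows the firm's procedure and lets IT trace the impersonation.",
  },
  investigate: {
    outcome: "partial",
    feedback: "Checking the sender is a good instinct, but the procedure is to report first.",
  },
};

// Resolve a learner decision to its consequence, falling back safely on
// unexpected input rather than throwing mid-scenario.
function resolveDecision(choice) {
  return callowayBranches[choice] ?? { outcome: "invalid", feedback: "No such action." };
}
```

Keeping the branch data separate from the resolution logic is what makes an interaction like this testable before it is wired to the interface.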
Task 2 — the Patel bundle — is in development. Diane is asked to prepare a physical completion document pack while a courier waits at reception, covering for a colleague who’s gone home sick. It tests LO3, LO4, and LO9.
The tech stack is deliberate: one HTML file, vanilla CSS and JavaScript, hosted on Cloudflare Pages via GitHub. No authoring tool, no build step. Anyone who wants to look under the hood can open View Source and see exactly how it works — which felt appropriate for a portfolio piece.
06 — Quality Assurance
40 test cases before a line of production code
I wrote the QA matrix before writing any production JavaScript. When the learner can interact with multiple elements in multiple orders, you need to map all of those paths before you build. The alternative is discovering edge cases after the code is written, when they are significantly harder to fix.
Each test case specifies: the trigger action, the expected system response, the feedback content and tone, the resulting state change, and any branching conditions. Design decisions that surfaced during QA — open questions on branching logic, ambiguous feedback wording — were resolved before the code was written.
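A matrix row with those five fields can be represented directly as data. The sketch below is an assumed shape for illustration; the example case, field names, and completeness check are mine, not the project's actual matrix format.

```javascript
// Illustrative shape for one QA matrix row, mirroring the five fields listed
// above. The content of this example case is hypothetical.
const testCase = {
  id: "TC-07",
  trigger: "Learner submits a decision without opening sender details",
  expectedResponse: "Consequence panel appears; the email remains visible",
  feedbackTone: "Corrective, non-punitive",
  stateChange: "Scenario state moves from 'reading' to 'consequence'",
  branching: "Only reachable if the verification step was skipped",
};

// A minimal completeness check: every case must fill all five fields
// before build work starts.
const REQUIRED_FIELDS = ["trigger", "expectedResponse", "feedbackTone", "stateChange", "branching"];

function isComplete(tc) {
  return REQUIRED_FIELDS.every((f) => typeof tc[f] === "string" && tc[f].length > 0);
}
```

Writing the cases as structured data rather than prose is what lets open questions surface early: an empty field is a design decision not yet made.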
07 — Reflection
What I’d want a reader to take away
This is a portfolio piece, and I’ve tried to be honest about what that means. The single-file format constrains the interaction types I can use. Task 2 isn’t finished. The anthology has three personas but only one is built. I’ve named those gaps rather than papering over them — a portfolio that hides its limitations is a less useful demonstration of professional judgement than one that owns them.
The QA-before-build discipline is something I’ll carry into every project from here. Writing 40 test cases before touching the JavaScript meant the full interaction space was mapped before it was coded. It found design problems that would have been expensive to fix later. It’s not standard practice in most elearning development, but it should be.
On AI: every design decision here was mine. Claude was useful for pressure-testing ideas, flagging when scenario logic didn’t hold, and keeping documentation current. The ideas themselves came from reading the TNA findings carefully and thinking through what they implied for the design. That’s not a small distinction — and it’s the one I most want this case study to demonstrate.