
Portfolio Case Study

Harland & Finch:
The Diane Vignette

I created a fictional law firm and ran a full design process against it — Training Needs Analysis (TNA) through to deployed web application — using AI as a thinking partner throughout.

▶  Launch the interactive demo
3 Learner Personas · 9 Learning Outcomes · 40 QA Test Cases · 1 HTML File

I made Harland & Finch up

A fictional mid-size UK law firm, complete enough to run a realistic Training Needs Analysis against. Working with a fictionalised client meant I could hold the process to the same standards I’d apply to a real commission, without the constraints of actual stakeholder sign-off or client sensitivity.

The TNA produced a finding that shaped everything that followed: staff at the firm understood phishing well enough to explain it. They couldn’t reliably act on that knowledge under pressure.

Those are different problems. The existing training addressed the first and ignored the second — click-through slides, no consequence mechanics, described by the firm itself as “a tick-box exercise.” Staff who could define phishing in a survey were still making the wrong call when tired, rushed, or covering for a colleague.


Is this actually a training problem?

The most useful thing a TNA produces isn’t a list of topics. It’s a clear answer to the question of whether training is the right intervention at all.

Here, it was — but for a specific reason. The gap wasn’t knowledge. It was application under realistic conditions: the speed of a busy day, the pressure of covering for an absent colleague, the social difficulty of questioning a senior partner’s request. You can’t close that gap with an explainer. You need a situation.

I designed three learner personas to map the full risk landscape of the firm: a legal secretary, a senior fee-earner, and administrative staff. Each carries different access levels, different authority, and a different relationship to the firm’s specific threat vectors. Diane — the legal secretary — is the first complete vignette. The other two are designed but not yet built.


Four decisions worth understanding

These aren’t the only decisions I made, but they’re the ones that most directly shaped what the experience became.

01
Scenario-based wasn’t a stylistic preference — it was the only right answer
The finding
The TNA identified an application gap, not a knowledge gap. Staff could explain phishing. They weren’t catching it under pressure.
The decision
Scenario-based learning. If the gap had been knowledge, a well-structured explainer would have been the right call. It wasn’t, so the format had to follow from the problem.
The principle
Format is a design decision, not a default. Click-through slides weren’t wrong because they’re boring — they were wrong because they addressed the wrong gap.
02
Replicating pressure without using a timer
The tension
The TNA finding was clear: staff make wrong calls under pressure, not in calm conditions. The challenge was to address this naturally — not by using timers or penalising learners for ‘slow’ responses, but by creating ambient pressure that mirrors the real workplace environment.
The decision
Time-based mechanics out. Pressure through information overload, interruption, and social stakes instead — built into the scenario, not the interface. The “leave by 5pm” mechanic is narrative consequence only, not a metric.
The principle
Under the Equality Act, equivalent experience is required — not just equivalent content. A module that structurally penalises screen reader users for their assistive technology’s reading speed doesn’t meet that bar.
03
Making the learner do the hard part
The problem
In a training context, experienced learners know how to spot the ‘right’ answer from a pre-defined list without genuinely engaging with the problem — that’s pattern recognition, not meaningful thinking. Standard branching scenarios make this worse by labelling the decision point and asking the learner to choose from curated options, which removes the most important cognitive step: noticing that a decision needs to be made at all.
The decision
No prompted option lists. The learner can call a colleague, forward an email, open attachments, use the reporting tool — whatever they judge appropriate. The interface offers a realistic but bounded set of tools, the ones a legal secretary would actually reach for: an inbox, a phone directory, a reporting channel. The boundaries are deliberate; what isn’t bounded is which tool the learner reaches for, and when.
The principle
The learning value is in the identification, not just the choice. Spotting that something’s wrong is the skill we’re developing. Presenting the learner with a list of options after the fact doesn’t test it.
04
Remediation for learners who genuinely don’t know
The gap
The scenario assumes enough baseline knowledge to make decisions. Some learners won’t have it — they may not know what business email compromise (BEC) is, or why a spoofed domain matters, before they’ve encountered either.
The decision
A ‘Learn more’ option built into the Task 1 feedback modal, covering BEC attacks and email spoofing. Not front-loaded — available after the learner has engaged with the scenario and has a reason to want it. Learners who already know don’t see it unless they look.
The principle
Pre-reading and explainers delivered up front create cognitive overhead before the learner has context. Post-scenario remediation meets learners where the gap actually surfaced. It’s not yet built, but it’s in the design spec.

What collaboration actually looked like

Claude was my thinking partner throughout this project — not a content generator. I used it to pressure-test ideas and work through scenario logic. The exchanges below are drawn from real conversations. In each one, the direction came from me; Claude’s job was to examine whether that direction held up.

The incident report frame

Claude was offering standard narrative frames for the anthology: a message from the IT Director setting up the training, a case file wrapper presenting each vignette as a legal matter, a simple splash screen. All functional. None of them gave the learner any stake in the outcome. I wanted the frame to connect the three vignettes in a way that made the learner feel like an architect of what happened, not a witness to it.

Design conversation — April 2026
Claude
Three options for the unifying frame. A — a message from the IT Director, framing the training as a firm-wide response to the near-miss. B — a case file wrapper, each vignette presented as a matter the learner reviews in sequence. C — a simple introduction screen with the firm name and scenario context before each vignette.
Amy
What about a frame along the lines of ‘the incident report’, but where the learner’s decisions during interaction ultimately decide whether there ends up being a problem or not?
Claude’s options were containers — they held the vignettes without connecting them. The incident report is a document the learner writes through their choices. At the end, it reads either as a near-miss log or a breach notification. That’s a different relationship between the learner and the material entirely.

Pressure mechanics

Replicating the pressure of a working day without creating accessibility barriers was one of the harder design problems. Claude started with the obvious answer.

Design conversation — April 2026
Claude
Countdown timers and response-time scoring are the most direct mechanisms for replicating real-world urgency in a digital environment…
Amy
I don’t want any countdowns or metrics based on time spent responding to a challenge. Information overload, interruption and context switching, and social pressure I think can all be built into a learning scenario, however.
The redirect came before Claude had finished making the case. The principle was clear: pressure has to come from the scenario design, not the clock.

Challenging the modal layout

When the feedback panel for Task 1 was first built, it appeared as a full-screen overlay — it covered the email completely when the learner submitted their decision. I wanted it repositioned: the feedback explains what the learner should have noticed in the email, so the email needs to stay in view while they read it.

Build conversation — April 2026
Claude
The feedback panel appears as a full-screen overlay when the learner submits their decision. It presents the outcome, explains the rationale, and offers a way to continue to the next task.
Amy
I was actually thinking about having the panel up here so that the rest of the screen is still visible below it. The learner should still be able to see the email while they’re reading the feedback — that’s the whole point.
Claude
Repositioning the panel to the upper portion of the screen keeps the email visible below it. The feedback then sits in direct relation to the source material, which is especially useful where the rationale explains what the learner should have spotted.
The email is the evidence. Covering it with the feedback removes the reference point at the moment the learner most needs it — particularly for learners who got it wrong and need to understand why.
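
In implementation terms the fix is small. A sketch of the kind of repositioning involved, using an assumed element ID and sizes rather than the real markup:

```js
// Sketch only: dock the feedback panel to the top of the viewport so the
// email stays visible below it. The "feedback-panel" ID and the 40vh cap
// are assumptions for illustration, not the production values.
const panel = document.getElementById('feedback-panel');
if (panel) {
  Object.assign(panel.style, {
    position: 'fixed',
    top: '0',
    left: '0',
    right: '0',
    maxHeight: '40vh',   // leave the lower part of the screen for the email
    overflowY: 'auto',
  });
}
```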

One file, deployed from GitHub to Cloudflare

Task 1 is complete and playable. Diane’s inbox contains the Calloway email — a fraudulent payment instruction impersonating senior partner Richard Calloway. The learner reads it, decides whether to act on it, report it, or investigate further, and navigates the consequences of each path. Forty test cases were defined before the interaction was coded.

Task 2 — the Patel bundle — is in development. Diane is asked to prepare a physical completion document pack while a courier waits at reception, covering for a colleague who’s gone home sick. It tests LO3, LO4, and LO9.

The tech stack is deliberate: one HTML file, vanilla CSS and JavaScript, hosted on Cloudflare Pages via GitHub. No authoring tool, no build step. Anyone who wants to look under the hood can open View Source and see exactly how it works — which felt appropriate for a portfolio piece.
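
To make that concrete, here is a minimal sketch of how a decision point can be wired in plain JavaScript inside a single file. The state shape, element IDs, and wording are illustrative assumptions, not the production code:

```js
// Illustrative sketch, not the production code: one state object and one
// handler per decision point, all living inside the single HTML file.
const state = { task: 1, outcome: null, log: [] };

// Each action the learner can take maps to an outcome and a log entry that
// could later feed the incident-report frame. Names and wording are assumed.
const outcomes = {
  report:      { result: 'near-miss', note: 'Reported the Calloway email.' },
  act:         { result: 'breach',    note: 'Actioned the payment instruction.' },
  investigate: { result: 'open',      note: 'Checked the sender domain first.' },
};

function handleDecision(action) {
  const chosen = outcomes[action];
  state.outcome = chosen.result;
  state.log.push(chosen.note);
  const panel = document.getElementById('feedback-panel'); // assumed element
  if (panel) panel.textContent = chosen.note;
}

// Each tool in the interface (inbox, phone directory, reporting channel)
// would call handleDecision with the relevant action, e.g.:
// document.getElementById('report-button')
//   .addEventListener('click', () => handleDecision('report'));
```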


40 test cases before a line of production code

I wrote the QA matrix before writing any production JavaScript. When the learner can interact with multiple elements in multiple orders, you need to map all of those paths before you build. The alternative is discovering edge cases after the code is written, when they’re significantly harder to fix.

40 Test cases · 5 Interaction branches · 3 Outcome types · Matrix written pre-build

Each test case specifies: the trigger action, the expected system response, the feedback content and tone, the resulting state change, and any branching conditions. Design decisions that surfaced during QA — open questions on branching logic, ambiguous feedback wording — were resolved before the code was written.
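
As an illustration of that structure (the field names and wording below are assumed, not copied from the real matrix), one row could look like this:

```js
// Hypothetical shape for one row of the 40-case QA matrix. Field names,
// IDs, and wording are illustrative; the real matrix is a design document,
// not code.
const testCases = [
  {
    id: 'TC-07',
    trigger: 'Learner forwards the Calloway email to the reporting channel',
    expectedResponse: 'Confirmation shown; email marked as reported',
    feedback: 'Affirming tone; explains why the spoofed domain was the key signal',
    stateChange: 'Outcome set to near-miss; incident log updated',
    branching: 'Only reachable before the payment instruction is actioned',
  },
  // ...further cases covering the other branches and orderings
];
```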


What I’d want a reader to take away

This is a portfolio piece, and I’ve tried to be honest about what that means. The single-file format constrains the interaction types I can use. Task 2 isn’t finished. The anthology has three personas but only one is built. I’ve named those gaps rather than papering over them — a portfolio that hides its limitations is a less useful demonstration of professional judgement than one that owns them.

The QA-before-build discipline is something I’ll carry into every project from here. Writing 40 test cases before touching the JavaScript meant the full interaction space was mapped before it was coded. It found design problems that would have been expensive to fix later. It’s not standard practice in most elearning development, but it should be.

On AI: every design decision here was mine. Claude was useful for pressure-testing ideas, flagging when scenario logic didn’t hold, and keeping documentation current. The ideas themselves came from reading the TNA findings carefully and thinking through what they implied for the design. That’s not a small distinction — and it’s the one I most want this case study to demonstrate.