Simply Discover | Reasoning Canvas

Reasoning Canvas makes how you ran the case the artefact, not the by-product

Chain of custody tracks the object. Chain of reasoning tracks the thinking. Reasoning Canvas captures why each step exists while the work happens, so the case can later explain itself.

When a regulator, court, client or internal reviewer asks why a decision was made, teams should not have to reconstruct the answer from logs and memory. The canvas turns case work into a graph of triggers, transformations, references and actions, each with rationale attached.

Use cases: DSAR, FOI, investigations, legal review and compliance workflows
Output: Evidence-grade reasoning records and plain-English narrative reports
Operating model: Templates for repeat work, blank canvases for novel matters
We already do the first. We are building the second.

The problem

Three quiet costs of not having a reasoning record

These are familiar to anyone running regulated case work. They are expensive because the reasoning sits outside the system that did the work.

01

Regulators ask why. We reconstruct the answer.

When the ICO, a court or a client asks how a particular email was classified, redacted or excluded, teams assemble the answer from audit logs, classifier outputs and people’s recollections. That reconstruction is expensive, late and never quite complete.

02

Senior expertise stays in heads, not systems.

Experienced analysts know the playbook for DSARs, privilege reviews and investigations. Today that knowledge transfers through training and shadowing, not through tooling. When they leave, the playbook leaves with them.

03

Every case starts from zero.

Two cases that follow nearly the same shape still begin with the same set of decisions from scratch. The 200th DSAR of the year is configured by hand with the same effort as the first, because there is no place to capture the pattern.

How it works

A case is a graph. The graph grows as the work progresses.

In this DSAR investigation, every step carries what the officer recorded about why that step exists. At the end of the case, we walk the graph and produce a narrative the regulator can read.


The model

Every step in every case is one of four kinds

Keeping the model small is what makes it usable: four building blocks, three connection types and an open-ended set of behaviours inside each block.

Trigger

starts a flow · out only

When does work begin? An officer opens a case, an email arrives, a schedule fires, or a deadline is reached.

On-demand, on-event, on-schedule

Transform

in → out

Changes the shape or count of what flows through: filters, classifiers, entity extraction and synthesis.

Filter, classify, extract, synthesise

Action

in only · reaches outside

Where the system commits to a real-world effect: sending, exporting, creating a case, or flagging for review.

Send, export, create case, notify

Reference

no flow · context only

Context that other steps read but does not move data: intake memory, knowledge, notes or prior decisions.

Memory, document, knowledge, sticky note
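The four-kind model above can be sketched in a few lines. This is a minimal illustration, not the product's actual schema; every name here is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of the four building blocks and their flow rules.
class Kind(Enum):
    TRIGGER = "trigger"      # starts a flow · out only
    TRANSFORM = "transform"  # in -> out
    ACTION = "action"        # in only · reaches outside
    REFERENCE = "reference"  # no flow · context only

@dataclass
class Step:
    id: int
    kind: Kind
    behaviour: str  # open-ended per block: "filter", "classify", "export", ...
    rationale: str  # recorded at the moment of decision

def may_flow(src: Step, dst: Step) -> bool:
    """Data-flow edges: triggers and transforms emit; transforms and actions
    receive. References never carry flow; they attach as context edges."""
    emits = src.kind in (Kind.TRIGGER, Kind.TRANSFORM)
    receives = dst.kind in (Kind.TRANSFORM, Kind.ACTION)
    return emits and receives
```

Encoding the flow rules as a predicate is what keeps the model small: any behaviour can live inside a block, but the connection types stay fixed at three.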

What the regulator sees

At the end of a case, the graph becomes a narrative

The graph is not just the report’s source. It is the report. We walk it in dependency order and write each step as a paragraph, using the rationale recorded at the time.

Case 2026-DSR-00412 - Reasoning Record
Subject: Jane Doe (jane.doe@customer.com)
Case opened: 2026-03-14 09:12 UTC · A. Halford (officer)
Case closed: 2026-04-02 16:48 UTC · A. Halford (officer)
Graph hash: sha256:4a7b1c... · Template: exploratory

The investigation began on 2026-03-14 when A. Halford opened the case in response to a data subject access request from Jane Doe.

Trigger - On-demand · node #1 · 09:12 UTC

Correspondence was gathered from the subject’s mailbox covering the period 2025-03-01 to 2025-09-30. The rationale recorded at the time: “Subject’s complaint refers to HR events in summer 2025; extended to cover surrounding context.”

Filter by custodian · Filter by date range · nodes #2-3 · 09:14 UTC

Aboutness classification identified 147 emails as being about the subject. Forty-three were labelled irrelevant; eighteen were deferred for further context. The prompt and model fingerprint are available in the appendix.

Classify · node #4 · 09:16-09:42 UTC
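The "walk the graph in dependency order and write each step as a paragraph" idea, and the graph hash in the record header, can both be sketched briefly. This is an illustrative sketch only; the step data, function names and hashing scheme are assumptions, not the product's implementation.

```python
import hashlib
import json
from graphlib import TopologicalSorter

# Hypothetical case graph: step details plus predecessor sets.
steps = {
    1: {"label": "Trigger - On-demand", "rationale": "Case opened on DSAR receipt."},
    2: {"label": "Filter by custodian", "rationale": "Scope to the subject's mailbox."},
    3: {"label": "Classify for aboutness", "rationale": "Separate in-scope mail."},
}
deps = {1: set(), 2: {1}, 3: {2}}

def narrative(steps, deps):
    """Walk the graph in dependency order and write one paragraph per step,
    using the rationale recorded at the time."""
    order = TopologicalSorter(deps).static_order()
    return [f"{steps[i]['label']}: {steps[i]['rationale']}" for i in order]

def graph_hash(steps, deps):
    """Hash a canonical serialisation of the graph, so the exported record
    can be tied to the exact reasoning that produced it."""
    canonical = json.dumps(
        {"steps": steps, "deps": {k: sorted(v) for k, v in deps.items()}},
        sort_keys=True,
    )
    return "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()
```

Because the narrative is derived from the graph rather than written after the fact, report and reasoning cannot drift apart: the graph is the report.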

Who uses it

Two audiences. Two ways of working. One tool.

Encodes the firm’s playbook so it survives any one person.

Senior analyst · process owner

“Every DSAR response should start by pulling mail from the subject’s mailbox, then filtering to the date range, then classifying for aboutness. I want to record the playbook once and have everyone follow it consistently.”
  • Authors templates - reusable graphs with the standard procedure encoded.
  • Defines spot policies - rules that watch for inbound items and react automatically.
  • Versions playbooks - updates roll forward into new cases without disrupting in-flight ones.
  • Promotes investigations - turns a one-off case that became a pattern into the next standard template.
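A spot policy, as described above, is essentially a named rule that watches inbound items and reacts when it matches. A minimal sketch, assuming a Python representation; the class, field names and example rule are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical spot policy: a predicate over inbound items plus a reaction.
@dataclass
class SpotPolicy:
    name: str
    matches: Callable[[dict], bool]
    reaction: str  # e.g. "create case", "flag for review"

def evaluate(policies, item):
    """Return the reaction of every policy the inbound item triggers."""
    return [(p.name, p.reaction) for p in policies if p.matches(item)]

# Example rule: open a case when an inbound message looks like a DSAR.
dsar_intake = SpotPolicy(
    name="dsar-intake",
    matches=lambda item: "subject access request" in item.get("body", "").lower(),
    reaction="create case",
)
```

Because a fired policy is itself a trigger node in the resulting case graph, the reaction arrives with its rationale already attached.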

Builds the reasoning as the case unfolds.

Case-working officer · investigator

“The subject mentioned a complaint about their manager in March. Let me search that. This classifier flagged privileged documents. I need to see why. My draft response does not mention the HR thread; let me add a step to pull that in.”
  • Starts from a template when one applies, or a blank graph when the case is novel.
  • Grows the graph as discovery surfaces new threads.
  • Records the rationale at the moment of decision, not in a separate journal.
  • Exports the case as a narrative report when the work is done.

Positioning

It is not automation plumbing. It is a reasoning artefact.

Reasoning Canvas borrows familiar patterns from workflow builders, notebooks and DAG tools, but the product position is different: evidence-grade reasoning for regulated investigations.

Tool · What it does well · Why Reasoning Canvas is different
  • n8n / Zapier · Automation pipelines, trigger-first · The graph is not just plumbing. It records why the work happened and what evidence supported the decision.
  • Miro / FigJam · Free-form visual thinking · The graph executes and records case reasoning. It is not a sketch detached from the workflow.
  • Jupyter notebooks · Reproducible reasoning · The canvas is branching, case-aware and framed for legal and regulatory audit rather than code-first analysis.
  • Airflow / Nextflow · Execution DAGs · The audience is officers, analysts and reviewers, with human decision points and narrative export built in.

The path

Eight phases. Each ships value on its own.

The canvas can land incrementally: first as a prototype, then as a persistent reasoning layer, then as templates, triggers and narrative export.

Phase A · Front-end prototype · Concept proof
Phase B · Persistence · Read-only schema lands
Phase C · Append-only edits · Real authoring
Phase D · Templates & slots · Reuse compounds
Phase E · Triggers wired · Spot policies live
Phase F · Narrative export · Regulator-ready
Phase G · Migrate cases · Backfill where useful
Phase H · Rich content · Deferred until needed

Why now

Three forces line up.

UK GDPR · EU AI Act · ICO

Regulators are sharpening their questions about AI-driven decisions.

A team that can produce a reasoning record on demand is in a different posture from one that produces a log dump.

200x

Template reuse compounds across cases.

Repeated DSARs, FOI requests and reviews stop starting from a blank page when the proven graph can be reused.

5 min x n

Triggers close the loop on long-tail events.

Spot-policy matches, inbound secure messages and draft rejections can be handled on-canvas, auditably.

Ready to make reasoning a first-class case artefact?

We can walk through how Reasoning Canvas would sit alongside Discovery, DSAR, FOI, investigations and compliance workflows.