Introduction¶
A guided map of Jane’s architecture.
Jane is built around a small set of well‑defined subsystems. Each one has a single responsibility, a clear mental model, and a predictable role in the pipeline. This page gives you a high‑level overview of each subsystem so you can understand how the pieces fit together before diving into the details.
Each section includes a short description and a link to the full concept page.
Pipeline concepts¶
Every pipeline follows the same structured pattern: scan → normalize → parse → validate, with an optional policy layer that turns the resulting events into decisions.
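The staged pattern can be sketched as a single function with one block per stage. This is a minimal illustration with hypothetical checks, not Jane's API — the point is the fixed order and the single responsibility of each stage:

```typescript
// A sketch of the staged pattern (hypothetical checks, not Jane's API).
function runPipeline(raw: string): { value: number; issues: string[] } {
  const issues: string[] = [];
  // scan: structural safety only, no interpretation
  if (raw.length > 1_000) issues.push("scan:too-long");
  // normalize: structural hygiene, always a new value
  const normalized = raw.trim();
  // parse: explicit conversion from string to number
  const value = Number(normalized);
  if (Number.isNaN(value)) issues.push("parse:not-a-number");
  // validate: business rules on the final, interpreted value
  if (value < 0) issues.push("validate:negative");
  return { value, issues };
}
```

Each stage only sees the output of the previous one, which is what makes the pipeline predictable.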
Scan¶
Structural safety before anything else.
Scan is the first stage of every pipeline. It detects structural hazards — circular references, unsafe Unicode, malformed values, deep objects — before any interpretation or validation happens. Scan never mutates the input and never enforces business rules. It’s the “is this safe to even look at?” stage.
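One hazard from the list above, circular references, can be detected without mutating the input. This is an illustrative sketch, not Jane's scanner:

```typescript
// Detect circular references by tracking ancestors on the way down.
// Read-only: the input is never mutated (illustrative, not Jane's scanner).
function hasCircularReference(value: unknown, seen = new Set<object>()): boolean {
  if (typeof value !== "object" || value === null) return false;
  if (seen.has(value)) return true; // we met one of our own ancestors
  seen.add(value);
  for (const child of Object.values(value)) {
    if (hasCircularReference(child, seen)) return true;
  }
  seen.delete(value); // shared (non-circular) references are fine
  return false;
}
```

Note the `seen.delete` on the way back up: two keys pointing at the same object is normal structure, only an ancestor cycle is a hazard.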
Normalize¶
Structural hygiene that makes everything else predictable.
Normalization cleans up structural noise: trimming strings, compacting arrays, removing undefined keys, and more. It’s type‑selected, mode‑aware, and always produces a new value. Normalization never interprets meaning — it simply makes the structure clean and predictable.
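A minimal sketch of that idea — trimming strings, dropping undefined keys, and always returning a new value — might look like this (hypothetical helper, not Jane's normalizer):

```typescript
// Structural hygiene only: trim strings, drop undefined keys, recurse.
// Always builds new values; the input is never mutated.
function normalize(value: unknown): unknown {
  if (typeof value === "string") return value.trim();
  if (Array.isArray(value)) return value.map(normalize);
  if (typeof value === "object" && value !== null) {
    const out: Record<string, unknown> = {};
    for (const [key, child] of Object.entries(value)) {
      if (child !== undefined) out[key] = normalize(child);
    }
    return out; // a new object; the original is untouched
  }
  return value;
}
```

Nothing here interprets meaning — `" 42 "` becomes `"42"`, still a string. Interpretation is parse's job.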
Parse¶
Turning strings into meaningful values — explicitly and predictably.
Parsing converts strings into real types: numbers, booleans, dates, enums, JSON, and more. Parsing is always explicit; Jane never guesses. This is where raw input becomes meaningful data.
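"Explicit, never guessing" means the caller picks the parser and failure is a value, not a silent fallback. A hedged sketch with hypothetical names:

```typescript
// Parsing is explicit: the caller chooses the parser, and failure is
// an explicit result, never a guess (hypothetical names, not Jane's API).
type ParseResult<T> = { ok: true; value: T } | { ok: false; reason: string };

function parseBoolean(input: string): ParseResult<boolean> {
  const s = input.trim().toLowerCase();
  if (s === "true") return { ok: true, value: true };
  if (s === "false") return { ok: true, value: false };
  return { ok: false, reason: `not a boolean: "${input}"` };
}

function parseInteger(input: string): ParseResult<number> {
  if (!/^-?\d+$/.test(input.trim())) {
    return { ok: false, reason: `not an integer: "${input}"` };
  }
  return { ok: true, value: Number(input.trim()) };
}
```

Because the result is a discriminated union, downstream code is forced to handle the failure branch before it can touch the value.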
Validate¶
Enforcing rules on the final, interpreted value.
Validation checks whether the value meets your requirements. Validators emit events; policy decides what they mean. Validation never mutates or interprets — it simply enforces rules on the final value.
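The "validators emit events; policy decides" split can be sketched like this (illustrative shapes, not Jane's validator API):

```typescript
// Validators only report; they never throw, mutate, or decide
// (illustrative shapes, not Jane's validator API).
type ValidationEvent = { code: string; severity: "info" | "warn" | "error" };
type Validator<T> = (value: T) => ValidationEvent[];

const minLength = (n: number): Validator<string> => (value) =>
  value.length < n ? [{ code: "min-length", severity: "error" }] : [];

const noDigits: Validator<string> = (value) =>
  /\d/.test(value) ? [{ code: "no-digits", severity: "warn" }] : [];

function validate<T>(value: T, validators: Validator<T>[]): ValidationEvent[] {
  return validators.flatMap((v) => v(value)); // collect everything, decide nothing
}
```

All events are collected rather than short-circuiting, so policy later sees the full picture.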
Policy¶
How Jane turns events into decisions.
Policy interprets events and determines whether the value is accepted, rejected, or requires review. It controls severity transforms, reject/review patterns, mode behavior, and analysis features. Policy is the decision‑making layer of the pipeline.
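A toy version of that decision-making layer — mapping event severities to an outcome, with mode-dependent behavior — might look like this (not Jane's actual policy engine):

```typescript
// Policy turns events into a decision; validators never do this themselves
// (a toy sketch, not Jane's policy engine).
type PolicyEvent = { code: string; severity: "info" | "warn" | "error" };
type Decision = "accepted" | "review" | "rejected";

function decide(
  events: PolicyEvent[],
  mode: "strict" | "lenient" = "strict",
): Decision {
  if (events.some((e) => e.severity === "error")) return "rejected";
  const hasWarn = events.some((e) => e.severity === "warn");
  if (hasWarn) return mode === "strict" ? "review" : "accepted";
  return "accepted";
}
```

The same events can yield different decisions under different modes — which is exactly why interpretation lives in policy, not in the validators.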
Boundary¶
Bringing multiple fields together into one coherent decision.
A boundary aggregates multiple pipelines, shapes the final object, applies boundary‑level policy, and produces a single decision for the entire structure. It’s the unit of meaning for real‑world objects.
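One way to picture the aggregation step: per-field decisions roll up into a single decision for the object, with the most severe outcome winning. A hedged sketch (hypothetical names, not Jane's boundary API):

```typescript
// Combine per-field decisions into one decision for the whole object,
// taking the most severe outcome (hypothetical, not Jane's boundary API).
type Decision = "accepted" | "review" | "rejected";
const rank: Record<Decision, number> = { accepted: 0, review: 1, rejected: 2 };

function aggregate(fields: Record<string, Decision>): Decision {
  return Object.values(fields).reduce<Decision>(
    (worst, d) => (rank[d] > rank[worst] ? d : worst),
    "accepted",
  );
}
```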
Jane result¶
The complete, structured output of every pipeline and boundary run.
Every run returns a JaneResult: the final value, the decision, the issues, the events, the diff, the explanation, the replay steps, and full metadata. It’s the contract of the entire framework.
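An illustrative shape for the fields listed above — the field names and types here are assumptions for orientation, not Jane's exact `JaneResult` definition:

```typescript
// An illustrative result shape (hypothetical types, not the exact JaneResult).
interface JaneResultSketch<T> {
  value: T;                                      // the final value
  decision: "accepted" | "review" | "rejected";  // the pipeline's decision
  issues: { code: string; path: string }[];      // actionable problems
  events: { stage: string; code: string }[];     // everything that was observed
  diff: unknown[];                               // structural change entries
  explanation: string[];                         // human-readable narrative
  replay: unknown[];                             // intermediate states
  metadata: Record<string, unknown>;             // run metadata
}
```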
Analysis layer¶
Jane’s analysis layer is optional, lazy, and designed for transparency.
These subsystems don’t affect decisions — they help you understand what happened.
Diff¶
Seeing exactly how your data changed during normalization.
Diff compares the safe value with the normalized value and records structural changes: added, removed, changed. It’s perfect for debugging, audits, and compliance.
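A minimal flat-object sketch of the added / removed / changed idea (a real diff would also walk nested structures; these shapes are illustrative, not Jane's):

```typescript
// Flat structural diff: added / removed / changed entries
// (illustrative shapes; a real diff also handles nesting).
type DiffEntry =
  | { kind: "added"; path: string; value: unknown }
  | { kind: "removed"; path: string; value: unknown }
  | { kind: "changed"; path: string; from: unknown; to: unknown };

function diff(
  before: Record<string, unknown>,
  after: Record<string, unknown>,
): DiffEntry[] {
  const entries: DiffEntry[] = [];
  for (const key of Object.keys(before)) {
    if (!(key in after)) {
      entries.push({ kind: "removed", path: key, value: before[key] });
    } else if (before[key] !== after[key]) {
      entries.push({ kind: "changed", path: key, from: before[key], to: after[key] });
    }
  }
  for (const key of Object.keys(after)) {
    if (!(key in before)) {
      entries.push({ kind: "added", path: key, value: after[key] });
    }
  }
  return entries;
}
```

Keeping both `from` and `to` on changed entries is what makes the diff useful for audits: the record explains itself without the original inputs.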
Explain¶
A human‑readable narrative of the entire pipeline.
Explain turns scan, normalization, diff, parse, and validation events into a chronological story. It’s ideal for debugging, UI messages, and onboarding.
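The core of that idea is simple: take stage events in order and render them as a numbered narrative. A tiny sketch with hypothetical shapes, not Jane's explain output:

```typescript
// Render stage events as a chronological, human-readable story
// (hypothetical shapes, not Jane's explain format).
type StageEvent = { stage: string; message: string };

function explain(events: StageEvent[]): string[] {
  return events.map((e, i) => `${i + 1}. [${e.stage}] ${e.message}`);
}
```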
Replay¶
Reconstructing the normalized value step‑by‑step.
Replay applies diff entries in order and records each intermediate state. It’s a deterministic timeline of how the value evolved — perfect for audits and debugging.
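Applying diff entries in order and snapshotting each intermediate state can be sketched like this (flat entries with hypothetical shapes, not Jane's replay engine):

```typescript
// Apply diff entries in order, recording every intermediate state
// (hypothetical shapes, not Jane's replay engine).
type DiffEntry =
  | { kind: "added"; path: string; value: unknown }
  | { kind: "removed"; path: string; value: unknown }
  | { kind: "changed"; path: string; from: unknown; to: unknown };

function replay(
  initial: Record<string, unknown>,
  entries: DiffEntry[],
): Record<string, unknown>[] {
  const states: Record<string, unknown>[] = [{ ...initial }];
  let current: Record<string, unknown> = { ...initial };
  for (const entry of entries) {
    current = { ...current }; // each state is an independent snapshot
    if (entry.kind === "removed") delete current[entry.path];
    else current[entry.path] = entry.kind === "added" ? entry.value : entry.to;
    states.push(current);
  }
  return states;
}
```

Because each state is a fresh copy, the returned timeline is immutable history: step N never changes when step N+1 is applied.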
Telemetry¶
Structured, stage‑aware observability for real systems.
Telemetry emits structured records for each stage of the pipeline. It’s designed for logs, dashboards, monitoring, and compliance — without affecting performance or decisions.
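One way to picture "observability without affecting decisions": a wrapper that emits one structured record per stage and passes the stage's result through unchanged. The record shape and names here are assumptions, not Jane's telemetry schema:

```typescript
// Wrap a stage, emit one structured record per run, and return the
// result unchanged — telemetry never affects the decision
// (hypothetical record shape, not Jane's telemetry schema).
type TelemetryRecord = { stage: string; ok: boolean };

function withTelemetry<T>(
  stage: string,
  sink: TelemetryRecord[],
  run: () => T,
): T {
  try {
    const result = run();
    sink.push({ stage, ok: true });
    return result;
  } catch (err) {
    sink.push({ stage, ok: false });
    throw err; // observe, never swallow
  }
}
```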