Beyond the process and methodology, this is what was built. Six production-grade UI screens showing the complete analyst journey — from information architecture (IA) through final compliance export. Every component, data state, and human-AI handoff moment is documented here.
The IA is structured around three primary user roles: Analyst (extraction & review), Lead (oversight & pipeline), Compliance (audit & export). All paths converge at the Human Approval Gate before any data touches the legacy database.
This maps the complete end-to-end flow, from the moment an analyst receives an audit assignment to the moment the structured ESG data is written to the legacy insurance database. Every AI decision is surfaced, every human intervention is explicit.
The dashboard gives every stakeholder their primary signal at a glance. Sarah sees her queue. Thomas sees team throughput. The AI performance metrics are always visible — not buried in settings.
The core working screen. Sarah sees the actual client PDF on the left, highlighted by AI extraction annotations. The Copilot panel on the right shows every extracted value with its source citation, confidence, and inline approve/flag controls. This is where trust is built or broken.
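The record behind each row in the Copilot panel can be sketched as a small data structure. This is an illustrative model only — names like `ExtractedField` and `Citation`, and the 0.85 attention threshold, are assumptions, not taken from the actual Aura codebase:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    document: str  # source PDF filename
    page: int      # page the value was extracted from
    snippet: str   # exact text span the AI highlighted

@dataclass
class ExtractedField:
    name: str           # e.g. "scope_1_emissions" (hypothetical field name)
    value: str          # AI-extracted value shown in the Copilot panel
    confidence: float   # model confidence, 0.0-1.0
    citation: Citation  # every value is pinned to its source

    def needs_attention(self, threshold: float = 0.85) -> bool:
        """Low-confidence fields are visually surfaced for closer review."""
        return self.confidence < threshold
```

Pairing the value, confidence, and citation in one record is what makes the inline approve/flag controls possible: the panel never shows a number without its provenance.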
Once Sarah approves all fields, Aura auto-compiles the structured ESG report. Every finding is pinned to its source document. CSRD compliance status is mapped field by field. Julian can export the entire report as a regulator-ready PDF with full decision trail embedded.
This is where trust is operationalized. Sarah reviews each AI-extracted field individually. She can Approve (accept AI value), Override (edit with re-citation), or Flag (reject and escalate). No field reaches the legacy database without passing through this screen with an explicit human decision attached.
Every AI query, every extraction, every human approval, every override, every push to the legacy database — logged, timestamped, attributed. Julian can filter, search, and export a complete decision trail for any audit within seconds. This is the legal backbone of the entire system.