Radiologist Workflow

The review UX should make AI suggestions useful without replacing the hospital's existing reading and reporting workflow. It should capture disagreement, corrections, and sidecar review state as separate artifacts.

Review State Machine

Uploaded → Processed → AI complete → Assigned → In review → Sidecar draft ready → Legacy report completed → Status recorded

State transitions are explicit records: each transition and its timestamp should be recorded for auditability and debugging.
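As a minimal sketch, assuming a Python backend and an append-only log, a transition record could look like the following. The field names (study_id, actor) are illustrative assumptions, not a fixed schema; the state names mirror the chain above.

```python
# Minimal sketch of an append-only state-transition record. Field names
# (study_id, actor) are illustrative assumptions, not a fixed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

STATES = {
    "uploaded", "processed", "ai_complete", "assigned", "in_review",
    "sidecar_draft_ready", "legacy_report_completed", "status_recorded",
}

@dataclass(frozen=True)
class StateTransition:
    study_id: str
    from_state: str
    to_state: str
    actor: str  # user id or "system"
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def record_transition(log: list, transition: StateTransition) -> None:
    """Append only; transitions are never updated or deleted."""
    if transition.from_state not in STATES or transition.to_state not in STATES:
        raise ValueError(f"unknown state in transition: {transition}")
    log.append(transition)
```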

Figure: radiologist workflow diagram. The radiologist opens a case from the worklist (patient, accession, priors, modality context), loads the image viewer (series, hanging protocol, metadata, priors), and inspects AI findings (overlays, measurements, suspicious regions, confidence). Applying clinical judgment, the radiologist accepts a clinically useful finding, edits its label, wording, or ROI, or rejects a false positive; review decisions are logged. The radiologist then creates the diagnosis/impression as a radiologist-authored clinical assessment, optionally generates a sidecar draft (structured findings, measurements, comparisons), and the system records completion status with an optional handoff. AI proposes; the radiologist reviews; final reporting remains in the legacy EHR/RIS workflow, whose view, edit, and sign events feed audit/state tracking.

Tool Activation

The AI review tool should appear as an add-on only when the current study is eligible. For this proposal, activation is limited to the three supported examples: chest CT, mammography, and brain MRI. If a study falls outside supported modality/model coverage, the radiologist should stay in the normal viewer/reporting workflow without AI review prompts.

| Study condition | UX behavior |
| --- | --- |
| Supported modality and model available | Show an AI review entry point and indicate which findings/model outputs are available. |
| Supported modality, but no model result yet | Show a neutral pending/unavailable state; do not block normal review. |
| Unsupported modality or unsupported diagnosis type | Do not activate the AI sidecar; keep the radiologist in the existing workflow. |
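A minimal activation check following the table might look like the sketch below. The SUPPORTED mapping and Activation states are assumptions for illustration; the modality codes (CT, MG, MR) follow DICOM conventions.

```python
# Illustrative activation check for the three supported examples.
# The SUPPORTED mapping and Activation states are assumptions; modality
# codes (CT, MG, MR) follow DICOM conventions.
from enum import Enum

SUPPORTED = {
    ("CT", "chest"): "chest-ct-model",
    ("MG", "breast"): "mammography-model",
    ("MR", "brain"): "brain-mri-model",
}

class Activation(Enum):
    SHOW_AI_ENTRY = "show_ai_entry"      # supported, model result available
    SHOW_PENDING = "show_pending"        # supported, no model result yet
    STAY_IN_LEGACY = "stay_in_legacy"    # unsupported: no AI review prompts

def activation_for(modality: str, body_part: str, has_result: bool) -> Activation:
    if (modality, body_part) not in SUPPORTED:
        return Activation.STAY_IN_LEGACY
    return Activation.SHOW_AI_ENTRY if has_result else Activation.SHOW_PENDING
```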

AI Finding Review Storyboard

Product/UX concept. Basic DICOM viewer functionality is assumed; the AI finding review interface sits on top as a sidecar. The AI suggests a finding, and the radiologist can accept, reject, edit, relabel, or attach it to a sidecar report draft. This is not a production DICOM viewer implementation.

Figure: six-panel mock UX storyboard for AI finding feedback, showing the AI-suggested finding and the accept, reject, edit box/mask, change label, and attach-to-report-draft states. The important design point is that per-finding feedback is captured separately from the final diagnosis and legacy report submission flow.

Action-To-Data Mapping

| Radiologist action | Stored system artifact | Why it matters later |
| --- | --- | --- |
| Accept finding | review_event: accepted, linked to finding_id and ai_run_id | Measures AI usefulness by model version, modality, facility, and finding type. |
| Reject finding | review_event: rejected, with optional reason or freeform note | Creates a first-class false-positive signal without changing raw imaging records. |
| Edit box / mask | New annotation.version plus review_event: geometry_modified | Preserves the original AI geometry and the radiologist-corrected geometry for later analysis. |
| Change label | review_event: label_changed, with previous and new labels | Separates localization quality from classification quality. |
| Attach to sidecar draft | report_finding_link plus report draft edit event | Shows which reviewed findings may have influenced the radiologist's final reporting workflow. |
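To make the mapping concrete, here is a sketch of what a few of the stored records could look like. The payload shapes and example IDs are assumptions; the point is only that each action becomes a first-class record linked to finding_id and ai_run_id.

```python
# Sketch of stored artifacts for three rows of the table above.
# Payload shapes and IDs are illustrative assumptions.
accept_event = {
    "type": "review_event", "action": "accepted",
    "finding_id": "f-123", "ai_run_id": "run-42",
}

reject_event = {
    "type": "review_event", "action": "rejected",
    "finding_id": "f-124", "ai_run_id": "run-42",
    "reason": "false_positive", "note": None,  # optional freeform note
}

# Editing geometry preserves the AI original: the correction is stored as
# a new annotation version alongside a geometry_modified review event.
geometry_edit = [
    {"type": "annotation_version", "finding_id": "f-125", "version": 2,
     "bbox": [120, 88, 40, 36]},  # radiologist-corrected box (illustrative)
    {"type": "review_event", "action": "geometry_modified",
     "finding_id": "f-125", "ai_run_id": "run-42"},
]
```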

Review UX

Structured review path

The radiologist selects one response per AI finding: accept, reject, modify location, modify label, mark uncertain, defer to report, or ignore as irrelevant.

This path is easier to analyze than freeform review and provides clean data for monitoring AI usefulness.
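As a sketch, the structured response set could be modeled as a closed vocabulary, with exactly one response stored per finding; the enum name and values below are illustrative.

```python
# The structured responses above as a closed vocabulary; exactly one
# response is recorded per AI finding. Names are illustrative.
from enum import Enum

class FindingResponse(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    MODIFY_LOCATION = "modify_location"
    MODIFY_LABEL = "modify_label"
    MARK_UNCERTAIN = "mark_uncertain"
    DEFER_TO_REPORT = "defer_to_report"
    IGNORE_IRRELEVANT = "ignore_irrelevant"
```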

Freeform review path

The radiologist works naturally in the viewer and report; a later AI/LLM process infers whether each AI prediction was accepted, contradicted, ignored, or superseded.

This may reduce UI burden but makes the interpretation layer more complex and less deterministic.
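If this path is pursued, the inference layer's output would still need a definite shape. A sketch follows, with assumed field names; the confidence and evidence fields acknowledge that an inferred outcome is an estimate rather than a captured decision.

```python
# Sketch of an inferred review outcome for the freeform path. The outcome
# labels mirror the prose; field names are assumptions.
INFERRED_OUTCOMES = ("accepted", "contradicted", "ignored", "superseded")

inferred_outcome = {
    "finding_id": "f-123",
    "ai_run_id": "run-42",
    "outcome": "contradicted",   # one of INFERRED_OUTCOMES
    "confidence": 0.72,          # model-estimated, not ground truth
    "evidence": "report impression describes the region as benign",
}
```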

Recommendation: Start with light structured controls around each AI finding, plus optional sidecar draft support. Keep the final diagnosis and legacy report submission path separate from per-finding feedback.

Final Diagnosis And Reporting Path

The final diagnosis is not the sum of AI finding feedback. A radiologist may reject every AI finding and still diagnose something else, or accept a finding but describe it differently in the official report.

Official report submission is expected to happen through the hospital's existing reporting workflow.
Figure: final diagnosis and reporting sequence. The radiologist reviews images in the normal reading workflow and, for eligible studies only, opens the AI sidecar, which presents AI findings for review. The radiologist accepts, rejects, edits, relabels, or ignores each finding, and per-finding review events are appended to an append-only audit/state log. After optional sidecar draft support, the official report is submitted through the hospital's existing legacy reporting tools, with an optional completion/handoff status recorded.
Radiologist Burden

Feedback should stay lightweight: enough structure for analytics without forcing excessive form work during review.