The review UX should make AI suggestions useful without replacing the hospital's existing reading and reporting workflow. It should capture disagreement, corrections, and sidecar review state as separate artifacts.
Review State Machine
Uploaded → Processed → AI complete → Assigned → In review → Sidecar draft ready → Legacy report completed → Status recorded
State transitions are explicit records: each transition and its timestamp should be stored for auditability and debugging.
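A minimal sketch, in Python, of how explicit, timestamped transition records might be enforced and stored. The state names and field names here are hypothetical, chosen to mirror the flow above; they are not a spec.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical state names mirroring the review state machine above.
VALID_TRANSITIONS = {
    "uploaded": {"processed"},
    "processed": {"ai_complete"},
    "ai_complete": {"assigned"},
    "assigned": {"in_review"},
    "in_review": {"sidecar_draft_ready"},
    "sidecar_draft_ready": {"legacy_report_completed"},
    "legacy_report_completed": {"status_recorded"},
}

@dataclass
class StudyReview:
    study_id: str
    state: str = "uploaded"
    transitions: list = field(default_factory=list)  # audit trail

    def transition(self, new_state: str) -> None:
        if new_state not in VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        # The transition itself is stored as an explicit, timestamped artifact.
        self.transitions.append({
            "study_id": self.study_id,
            "from": self.state,
            "to": new_state,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.state = new_state
```

Rejecting illegal jumps at write time keeps the audit trail trustworthy: every recorded transition is one the state machine actually allows.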
Tool Activation
The AI review tool should appear as an add-on only when the current study is eligible. For this proposal, activation is limited to the three supported examples: chest CT, mammography, and brain MRI. If a study falls outside supported modality/model coverage, the radiologist should stay in the normal viewer/reporting workflow without AI review prompts.
| Study condition | UX behavior |
| --- | --- |
| Supported modality and model available | Show an AI review entry point and indicate which findings/model outputs are available. |
| Supported modality, but no model result yet | Show a neutral pending/unavailable state; do not block normal review. |
| Unsupported modality or unsupported diagnosis type | Do not activate the AI sidecar; keep the radiologist in the existing workflow. |
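The eligibility logic above can be sketched as a single pure function. This is an illustrative sketch; the modality identifiers and return values are hypothetical names, not an agreed interface.

```python
# Hypothetical identifiers for the three supported examples.
SUPPORTED_MODALITIES = {"chest_ct", "mammography", "brain_mri"}

def sidecar_ux_state(modality: str, model_result_available: bool) -> str:
    """Map a study's condition to the sidecar UX behavior."""
    if modality not in SUPPORTED_MODALITIES:
        return "inactive"          # stay in the existing workflow, no AI prompts
    if not model_result_available:
        return "pending"           # neutral state; do not block normal review
    return "entry_point_shown"     # show the AI review entry point
```

Keeping this check in one place means new modalities can be enabled by extending the supported set rather than touching viewer code.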
Product/UX concept. Basic DICOM viewer functionality is assumed; the AI finding review interface sits on top as a sidecar. The AI suggests a finding, and the radiologist can accept, reject, edit, relabel, or attach it to a sidecar report draft. This is not a production DICOM viewer implementation.
Mock UX storyboard for AI finding feedback.
The important design point is that per-finding feedback is captured separately from the final diagnosis and legacy report submission flow.
Action-To-Data Mapping
| Radiologist action | Stored system artifact | Why it matters later |
| --- | --- | --- |
| Accept finding | `review_event: accepted`, linked to `finding_id` and `ai_run_id` | Measures AI usefulness by model version, modality, facility, and finding type. |
| Reject finding | `review_event: rejected` with optional reason or freeform note | Creates a first-class false-positive signal without changing raw imaging records. |
| Edit box / mask | New `annotation.version` plus `review_event: geometry_modified` | Preserves the original AI geometry and the radiologist-corrected geometry for later analysis. |
| Change label | `review_event: label_changed` with previous and new labels | Separates localization quality from classification quality. |
| Attach to sidecar draft | `report_finding_link` and report draft edit event | Shows which reviewed findings may have influenced the radiologist's final reporting workflow. |
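A minimal sketch of what a stored `review_event` artifact might look like, assuming a flat record per action. The schema and field names (`finding_id`, `ai_run_id`, `event_type`) are illustrative assumptions, not a finalized data model.

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical per-finding review event record.
@dataclass
class ReviewEvent:
    finding_id: str
    ai_run_id: str
    event_type: str                       # accepted | rejected | geometry_modified | label_changed
    previous_label: Optional[str] = None  # populated only for label_changed
    new_label: Optional[str] = None
    note: Optional[str] = None            # optional freeform reason, e.g. on reject

def record_label_change(finding_id: str, ai_run_id: str,
                        old: str, new: str) -> dict:
    """Keep both labels so classification quality can be analyzed
    separately from localization quality."""
    return asdict(ReviewEvent(finding_id, ai_run_id, "label_changed",
                              previous_label=old, new_label=new))
```

Because every event links back to `ai_run_id`, usefulness metrics can later be sliced by model version, modality, facility, or finding type without rejoining raw imaging data.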
Review UX
Structured review path
Radiologist selects a response per AI finding: accept, reject, modify location, modify label, mark uncertain, defer to report, or ignore as irrelevant.
This path is easier to analyze and yields clean, structured data for monitoring AI usefulness.
Freeform review path
Radiologist works naturally in the viewer and report. A later AI/LLM process infers whether each AI prediction was accepted, contradicted, ignored, or superseded.
This may reduce UI burden but makes the interpretation layer more complex and less deterministic.
Recommendation
Start with light structured controls around each AI finding, plus optional sidecar draft support. Keep the final diagnosis and legacy report submission path separate from per-finding feedback.
Final Diagnosis And Reporting Path
The final diagnosis is not the sum of AI finding feedback. A radiologist may reject every AI finding and still diagnose something else, or accept a finding but describe it differently in the official report.
Official report submission is expected to happen through the hospital's existing reporting workflow.
Radiologist Burden
Feedback should stay lightweight: enough structure for analytics without forcing excessive form work during review.