System Architecture

A queue-backed sidecar architecture that separates external records, derived assets, AI outputs, human review state, and optional report handoff/status tracking.

System and data flow diagram: studies enter from imaging sources (chest CT, mammography, brain MRI) through hospital systems (PACS, EHR) with no direct scanner connection, are ingested and processed inside the AI product boundary (ingestion service, preprocessing, storage, ML pipelines, AI results, viz/marking UI, optional sidecar draft), reviewed by radiologists in a sidecar workflow, and logged into audit and state tracking, which records user actions, data access, system events, AI runs, diagnoses, and LLM traces as the single source of truth for traceability and future workflow analytics. Official report submission remains in existing hospital systems.

Core Flow

DICOM ingest → study metadata indexing → derived display assets → AI inference request → AI findings stored → radiologist review → sidecar draft support → legacy report completion → status/handoff recorded

Study Lifecycle And Failure Handling

Each study moves through a small set of asynchronous jobs. Every state transition, failure, retry, and partial result should emit an audit event with enough context to debug the workflow later.
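As a sketch of what "enough context to debug the workflow later" means, the hypothetical helper below builds an audit event carrying the study identifier, a dotted event type matching the names used in the table, and a free-form context payload. The function name and exact fields are assumptions, not the product's API.

```python
import json
import time
import uuid

def emit_audit_event(event_type: str, study_id: str, **context) -> dict:
    """Build one audit event with enough context to debug the workflow later.

    `event_type` uses dotted names such as "ai_run.failed" or
    "asset.preprocess_failed"; `context` carries job id, attempt,
    error class, and similar debugging fields.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "event_type": event_type,
        "study_id": study_id,
        "timestamp": time.time(),
        "context": context,
    }
    # A production system would append this to a durable audit log;
    # here we round-trip through JSON only to prove the event is serializable.
    return json.loads(json.dumps(event))

evt = emit_audit_event("asset.preprocess_failed", "study-123",
                       job_id="job-9", attempt=2, error_class="ConversionError")
```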

| Stage | What happens | Failure examples | System behavior / logging |
|---|---|---|---|
| Receive study | Register study reference, facility, modality, accession/study identifiers, and initial workflow state. | Duplicate notification, missing required metadata, unsupported facility. | Use idempotency keys; log study.receive_failed or study.duplicate_seen with facility and external identifiers. |
| Validate eligibility | Check modality and model coverage before activating the sidecar. | Unsupported modality, unsupported diagnosis type, study does not match model assumptions. | Do not activate AI review; log study.ineligible with reason and keep the normal hospital workflow unaffected. |
| Preprocess assets | Create display assets, thumbnails, derived slices, and model-ready inputs. | Image conversion error, corrupt object, software pipeline break. | Retry if safe; otherwise mark preprocessing failed and log asset.preprocess_failed with job id, attempt, and error class. |
| Run ML inference | Invoke the available model pipeline and normalize outputs into findings, masks, boxes, scores, and summaries. | Model timeout, model service unavailable, output fails validation, result shape unexpected. | Retry transient errors; log ai_run.failed or ai_result.invalid. If no valid findings exist, show the normal workflow without sidecar findings. |
| Build LLM assistance | Construct optional report summary/chat context from structured findings and available patient context. | LLM service timeout, prompt/context build error, missing prior report context. | Treat as partial failure: still show AI findings and review controls, but hide or disable LLM summary/chat. Log llm_assist.failed. |
| Review and status handoff | Radiologist reviews sidecar findings, optionally uses draft support, then completes official reporting in legacy tools. | Cannot correlate legacy report completion, handoff endpoint unavailable. | Keep review events stored; log handoff.sync_failed or legacy_report.unmatched for operational follow-up. |
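The "Receive study" row relies on idempotency keys to catch duplicate notifications. One simple way to build such a key, assuming the identifier components named in the Services section (hospital, accession, study UID, series UID), is to hash them together; the function below is an illustrative sketch, not the product's actual scheme.

```python
import hashlib

def idempotency_key(hospital: str, accession: str,
                    study_uid: str, series_uid: str) -> str:
    """Derive a stable key from identifiers the ingestion step already records.

    The same notification always yields the same key, so a repeat delivery
    can be detected and logged as study.duplicate_seen instead of re-ingested.
    """
    raw = "|".join([hospital, accession, study_uid, series_uid])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
```

Using a delimiter between components avoids accidental collisions between, say, ("ab", "c") and ("a", "bc").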

Services

| Service | Responsibility | Notes |
|---|---|---|
| Ingestion API | Receives or fetches DICOM studies, validates expected metadata, creates study and series records. | Use idempotency keys based on hospital, accession, study UID, and series UID. |
| Object Store | Stores original received objects, thumbnails, viewport images, heatmaps, masks, and generated GIF/video previews. | Raw DICOM objects are append-only from our application's perspective. |
| Workflow Store | Tracks study state, assigned users, AI run status, review status, and optional report completion/handoff status. | A relational database is appropriate at this early stage because queryability matters. |
| Queue Workers | Run preprocessing, AI inference dispatch, result normalization, thumbnail generation, and optional handoff/status jobs. | Each job should be retryable and tied to an audit event. |
| AI Output Normalizer | Converts model-specific outputs into generic findings: class, confidence, location, geometry, slice, mask, and model version. | This is the boundary between model teams and the product workflow. |
| Review Web App | Presents eligible studies, AI findings, review controls, and optional sidecar draft support. | The initial proposal is static; a production app would need a DICOM-capable viewer strategy and activation rules by modality/model coverage. |
| LLM Context Builder | Builds structured prompts from findings, prior reports, patient context, radiologist edits, and optional sidecar draft state. | LLM output remains assistive until accepted by the radiologist. |