DEMO MODE — Simulated workflow on a fictional 100-case chest CT dataset. No real PHI.
Clique Studio
Dr. Chen · Radiology
Step 1 · Data

What are you working on today, Dr. Chen?

Drop a labeled dataset — we'll handle the rest. No code. No config files. No DevOps.

Drop DICOM / NIfTI / CSV folder here
or browse your files
HIPAA / BAA covered · Never leaves your environment
or start from a sample:
Step 2 · Task

What kind of model?

Pick the task that best matches your clinical question. You can change this later.

Step 3 · Configure

Smart configuration

Clique analyzed your dataset and hardware. Here's what it chose.

"Clique auto-configures every pipeline to the hardware you have — not the hardware you wish you had."

Why this configuration? Auditable decision tree · nnU-Net-inspired
Every choice below is driven by two things the platform already measured for you — the dataset fingerprint and the available hardware. No manual guessing. Every decision is logged and reproducible.
  1. Is the dataset 3D or 2D?
    3D — 100 CT volumes, median shape 512×512×220.
    → Picks a volumetric architecture family (UNet3D) over 2D slice-wise baselines.
  2. Is the dataset large enough for the full nnU-Net?
    No — 100 cases is below the 200-case threshold (heuristic H-3).
    → Downgrades to the UNet3D-small variant to avoid over-parameterization.
  3. What batch size fits in available VRAM?
    Batch = 2 — solved from 24 GB A100 VRAM minus ~3.1 GB headroom at median shape.
    → Keeps gradient accumulation off the critical path; no OOM at epoch 0.
  4. Is mixed-precision safe for this task?
    Yes — segmentation on CT + A100 = ~2.1× speedup, <0.1% Dice impact in published benchmarks.
    → Enables Mixed FP16 by default; falls back to FP32 automatically if NaN losses are detected.
  5. Which augmentation policy?
    Elastic + intensity jitter — small 3D CT dataset, single modality.
    → Matches nnU-Net's default 3D CT policy. Skips color / mixup (not meaningful on CT).
  6. How to validate?
    5-fold CV — because N < 200 and we want a variance estimate for the handoff report.
    → Single-split would be misleading at this sample size.
Every branch of this tree is recorded in the audit log with the dataset hash, hardware ID, and resolved value — so IT and compliance can reproduce the run on demand.
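The heuristics above can be sketched in a few lines. This is a hypothetical illustration, not the platform's actual code: the 200-case cutoff and ~3.1 GB headroom come from the text, while the per-volume memory figure is an assumption chosen so the batch-size arithmetic matches the demo.

```python
# Hypothetical sketch of the auto-configuration heuristics described above.
# gb_per_volume is an illustrative assumption, not a measured value.

def configure(num_cases: int, vram_gb: float,
              gb_per_volume: float = 10.0,
              headroom_gb: float = 3.1) -> dict:
    """Resolve architecture, batch size, and validation strategy."""
    # Heuristic H-3: below 200 cases, prefer the small variant.
    arch = "UNet3D" if num_cases >= 200 else "UNet3D-small"
    # Batch size: how many volumes fit after reserving headroom.
    batch = max(1, int((vram_gb - headroom_gb) // gb_per_volume))
    # Small datasets get k-fold CV so the handoff report carries a variance estimate.
    validation = "5-fold CV" if num_cases < 200 else "single split"
    return {"architecture": arch, "batch_size": batch, "validation": validation}

print(configure(num_cases=100, vram_gb=24.0))
# With these assumed numbers: UNet3D-small, batch 2, 5-fold CV
```

Because every input is explicit, re-running the function with the logged dataset fingerprint and hardware ID reproduces the same decisions.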
Architecture: UNet3D-small
  Lightweight 3D variant optimized for a 100-case training set.
Batch size: 2
  Fits comfortably in 24 GB of VRAM with 3.1 GB headroom.
Precision: Mixed FP16
  2.1× speedup vs FP32, <0.1% accuracy impact.
Augmentation: Elastic + intensity jitter
  Standard for 3D CT with small training sets.
Validation split: 5-fold cross-validation
  Recommended for datasets < 200 cases.
Estimated runtime: 4 h 12 min
  On 1× NVIDIA A100 (detected in your cluster).
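To see why 5-fold cross-validation covers every case exactly once as validation data, here is a minimal sketch; the function name and the contiguous-fold split are illustrative, not the platform's implementation.

```python
def kfold_indices(n_cases: int, k: int = 5):
    """Yield (train, val) index lists; each case lands in exactly one val fold."""
    # Distribute cases as evenly as possible across k contiguous folds.
    fold_sizes = [n_cases // k + (1 if i < n_cases % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n_cases) if i not in val_set]
        yield train, val
        start += size

folds = list(kfold_indices(100))
# 5 folds; with 100 cases, each validation fold holds 20 cases
```

Training five models instead of one is what makes the variance estimate in the handoff report possible at this sample size.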
Advanced options · Researchers only — clinicians can safely ignore.
We'll provision a container and stream progress below.
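The automatic FP32 fallback chosen in the precision decision can be illustrated with a toy loop. This is a pure-Python stand-in: `run_epoch` and its retry policy are assumptions for illustration, not Clique's actual training code.

```python
import math

def run_epoch(step_fn, use_fp16: bool = True) -> str:
    """Toy illustration: retry the epoch in FP32 if any FP16 loss goes NaN."""
    for loss in step_fn(fp16=use_fp16):
        if math.isnan(loss):
            if use_fp16:
                # Mirror the described behaviour: fall back to full precision.
                return run_epoch(step_fn, use_fp16=False)
            raise RuntimeError("NaN loss even in FP32, aborting")
    return "fp16" if use_fp16 else "fp32"

def flaky_steps(fp16: bool):
    # Pretend FP16 overflows on the second step while FP32 is stable.
    return [1.0, float("nan")] if fp16 else [1.0, 0.9]

assert run_epoch(flaky_steps) == "fp32"
```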
Step 3.5 · Train

Training in progress

The container spun up automatically. You can close this tab — we'll email you when it's done.

"Container spun up in 4.8s. Will shut down automatically when training completes — you only pay for what you use."

  1. Configuring
    Container provisioned · image: clique-runner:v4.2
  2. Training
    Epoch 0 / 250 · loss: — · val dice: —
  3. Validating
    5-fold cross-validation
  4. Exporting
    ONNX + versioned artifact
  5. Released
    Available in Model Registry
Loss & validation dice
[Live chart: train loss and val Dice over epochs 0–250, y-axis 0–1.0]
GPU util: 0% · Power: 0 W · Elapsed: 00:00 · ETA: —
Live log · clique-runner:v4.2
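The dashboard's ETA can be derived from elapsed time and epoch progress. A minimal sketch follows; the linear-extrapolation formula is an assumption about how such an ETA is typically computed, not a documented detail of the platform.

```python
def eta_seconds(elapsed_s: float, epochs_done: int, epochs_total: int):
    """Linear extrapolation; undefined until at least one epoch has finished."""
    if epochs_done == 0:
        return None  # matches the blank ETA shown at epoch 0
    per_epoch = elapsed_s / epochs_done
    return per_epoch * (epochs_total - epochs_done)

# e.g. 10 epochs in 600 s with 250 total: 60 s/epoch * 240 remaining = 14400 s
```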
Step 4 · Deploy

Your model is ready

Review the artifact, hand it to IT, or deploy it yourself — with every step audited.

pneumonia-seg-v1
Trained from chest-ct-100cases.zip · moments ago
Ready to deploy
Sensitivity: 0.932 · Specificity: 0.908 · AUC: 0.961 · Val Dice: 0.891
Artifact: 148 MB · ONNX
SHA-256: a7f4c2…e9b1
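Verifying a downloaded artifact against the registry digest takes only a few lines. The helper name and file name below are illustrative, and the digest shown above is truncated in the UI, so compare against the full value recorded in the registry.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large ONNX artifacts need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare sha256_of("pneumonia-seg-v1.onnx") (hypothetical file name)
# with the full digest recorded in the Model Registry.
```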
Lab book

Recent experiments

Every experiment is a container in some stage of its lifecycle — configured, trained, validated, exported, released, or archived.

Name · Task · State · Dataset · GPU-hours · Accuracy · Updated