ACM TOG (SIGGRAPH 2026)

Lucky High Dynamic Range Smartphone Imaging

Handheld HDR photography on smartphones by aligning and merging “lucky” pixels from bracketed exposures — no hallucination, no tripod.

Baiang Li, Ruyu Yan, Ethan Tseng, Zhoutong Zhang, Adam Finkelstein, Jiawen Chen, Felix Heide

1 Princeton University 2 Adobe

* Equal contribution

Abstract

While the human eye can perceive an impressive twenty stops of dynamic range, smartphone camera sensors remain limited to about twelve stops despite decades of research. A variety of high dynamic range (HDR) image capture and processing techniques have been proposed, and in practice they can extend the dynamic range by 3–5 stops for handheld photography. This paper proposes an approach that robustly captures dynamic range competitive with state-of-the-art methods, using a handheld smartphone camera and lightweight networks suitable for running on mobile devices.

Our method operates directly on linear raw pixels in bracketed exposures. Every pixel in the final HDR image is a convex combination of exposure-adjusted input pixels from its neighborhood, and thus avoids the hallucination artifacts typical of deep neural networks. We validate the efficacy of our system on both synthetic imagery and real bracketed images shot with smartphone cameras. Our iterative inference architecture can process an arbitrary number of bracketed input photos, and we show examples from bursts containing 3–9 images. Our training process relies only on synthetic captures yet generalizes to real photos. Moreover, we show that this training scheme improves other state-of-the-art methods over their pretrained counterparts.

From Brackets to HDR

Bracketed exposures are aligned and merged into a single HDR result.


Method Overview

LuckyHDR iteratively aligns and merges bracketed frames from shortest to longest exposure, gradually building up dynamic range. No network directly predicts pixel values — instead, lightweight neural networks predict shift maps for alignment and weight maps for merging.

Figure: LuckyHDR method overview.
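To make the iteration concrete, here is a minimal sketch of the outer loop as the overview describes it. The names predict_shift, predict_weight, and warp are our own stand-ins for the paper's alignment network, merge network, and warping step, not its actual API; warp can be any dense resampler, such as the bilinear warp sketched in the Shift Map section below.

import numpy as np

def lucky_hdr_merge(frames, exposures, predict_shift, predict_weight, warp):
    """Sketch of iterative align-and-merge over a bracketed burst.

    frames:    linear raw images, ordered from shortest to longest exposure
    exposures: per-frame exposure factors used to normalize radiometry
    predict_shift, predict_weight: the two lightweight networks (stand-ins)
    warp:      a dense resampling function, e.g. a bilinear warp
    """
    # Seed the running estimate with the shortest (least clipped) exposure.
    hdr = frames[0] / exposures[0]
    for frame, exposure in zip(frames[1:], exposures[1:]):
        alt = frame / exposure                 # exposure-normalize the new frame
        shift = predict_shift(hdr, alt)        # dense per-pixel (dx, dy) field
        aligned = warp(alt, shift)             # warp into the base coordinates
        w = predict_weight(hdr, aligned)       # per-pixel blend weights in [0, 1]
        hdr = w * hdr + (1.0 - w) * aligned    # convex combination: no synthesis
    return hdr

Because each update is a convex combination of the running estimate and one new frame, any burst length (the paper shows 3 to 9 frames) can be folded in without changing the architecture.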

No Hallucination

Output pixels are always a weighted combination of actual captured pixels — never synthesized from scratch.

Mobile-Ready

Only 66K parameters — 50x smaller than HDRFlow. Runs at interactive rates on smartphone hardware.

Sim-to-Real

Trained entirely on synthetic data; generalizes robustly to real handheld photographs.

Flexible Input

Handles 3–5+ bracketed frames of varying exposure, iteratively improving quality with each new frame.

66K Parameters
50x Smaller than HDRFlow
62ms Inference (A6000)
43.7 HDR-VDP2 (2 EV)

What the Network Sees

LuckyHDR decouples HDR reconstruction into two lightweight predictions. At every iteration, one network predicts a shift map that aligns the incoming exposure, and another network predicts a weight map that decides how to blend it into the running estimate. The final pixel is always a convex combination of real captured pixels — never synthesized — which is why the method cannot hallucinate.
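In symbols (our notation, not the paper's), one iteration of the update reads:

% H_t: running HDR estimate, I_t: new frame with exposure e_t,
% S_t: predicted shift map, W_t: predicted weight map with 0 <= W_t <= 1
H_t(p) \;=\; W_t(p)\, H_{t-1}(p) \;+\; \bigl(1 - W_t(p)\bigr)\,\frac{I_t\bigl(p + S_t(p)\bigr)}{e_t}

Since W_t(p) lies in [0, 1] and H_0 is itself a captured frame, every output pixel unrolls to a convex combination of exposure-normalized captured pixels, which is the formal content of the no-hallucination claim.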

Shift Map

A dense, per-pixel 2D displacement field (dx, dy) that warps the alternate exposure into the base frame's coordinates, compensating for hand shake and small scene motion (swaying leaves, moving pedestrians). We predict it coarse-to-fine: a coarse stage handles up to ~52 pixels of motion, and a residual stage then refines within ~6 pixels, so the alignment network stays tiny yet robust to both global and local motion.
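The warp itself is standard resampling; only the shift prediction is learned. A minimal NumPy sketch under that assumption, where coarse_net and fine_net are hypothetical stand-ins for the two alignment stages:

import numpy as np

def bilinear_warp(img, shift):
    """Warp a (H, W) image by a dense (H, W, 2) shift map of (dx, dy)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    x = np.clip(xs + shift[..., 0], 0, w - 1)
    y = np.clip(ys + shift[..., 1], 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def coarse_to_fine_align(base, alt, coarse_net, fine_net):
    """Two-stage alignment: large motion first, small residual second."""
    coarse = coarse_net(base, alt)          # handles up to ~52 px of motion
    warped = bilinear_warp(alt, coarse)
    residual = fine_net(base, warped)       # refines within ~6 px
    total = coarse + residual               # additive composition (approx.)
    # Resample once from the original frame to avoid double interpolation blur.
    return bilinear_warp(alt, total), total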

In the visualizations below, hue encodes direction and saturation encodes magnitude.
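This color coding is the usual optical-flow wheel. A sketch of how such a visualization is typically produced (our rendering choice, not necessarily the paper's exact one):

import numpy as np
from matplotlib.colors import hsv_to_rgb

def shift_to_color(shift, max_mag=None):
    """Render a (H, W, 2) shift map: hue = direction, saturation = magnitude."""
    dx, dy = shift[..., 0], shift[..., 1]
    mag = np.hypot(dx, dy)
    if max_mag is None:
        max_mag = mag.max() + 1e-8          # normalize to the largest shift
    hue = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)
    sat = np.clip(mag / max_mag, 0.0, 1.0)
    val = np.ones_like(hue)                 # full brightness everywhere
    return hsv_to_rgb(np.stack([hue, sat, val], axis=-1))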

Weight Map

A per-pixel blending coefficient in [0,1], produced by a softmax across the base and warped-alternate frames. The network learns to upweight “lucky” pixels — well-exposed, unsaturated, sharp — and downweight noisy shadows, clipped highlights, or residual misalignment. Over iterations, this is equivalent to iterative alpha compositing with learned, content-aware alphas.
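A minimal sketch of that blending step, assuming the weight network emits one score per candidate frame (the score names are ours):

import numpy as np

def blend(base, aligned_alt, score_base, score_alt):
    """Two-way softmax over per-pixel scores, then a convex blend."""
    m = np.maximum(score_base, score_alt)   # subtract max for stability
    w = np.exp(score_base - m)
    w = w / (w + np.exp(score_alt - m))     # w in (0, 1) per pixel
    return w * base + (1.0 - w) * aligned_alt

Repeated over the burst, this is the iterative alpha compositing described above, with the alphas predicted per pixel rather than fixed.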

Red regions indicate pixels pulled from the newly-aligned frame; blue regions keep the current estimate.

Figure: the predicted HDR result (LuckyHDR output), shift maps for iterations 2 and 3, and merge weight maps for iterations 1–3.

Comparisons

LuckyHDR compared against baseline methods on real handheld captures.

HDRFlow (baseline) vs. LuckyHDR (Ours)

Quantitative Results

LuckyHDR achieves state-of-the-art quality with 50x fewer parameters than the nearest competitor.

Method            ITPI (ms)   Params (K)   PSNR-l   PSNR-μ   HDR-VDP2 ↑   LPIPS ↓
HDR+                     80            –     43.7     27.3         31.2     0.427
SAFNet                  208         1120     44.3     30.6         31.0     0.249
SAFNet/p                208         1120     36.8     25.5         27.3     0.460
AHDRNet                 504         1520     40.4     30.0         29.8     0.305
HDRFlow                  61         3270     48.7     33.2         38.1     0.226
HDRFlow/p                61         3270     50.2     26.7         32.7     0.507
HDR-Trans.              937         1220     37.9     32.0         36.7     0.267
AFUNet                21318         1162     38.4     27.7         35.7     0.332
LuckyHDR (Ours)          62           66     50.0     33.6         40.5     0.241

Method            ITPI (ms)   Params (K)   PSNR-l   PSNR-μ   HDR-VDP2 ↑   LPIPS ↓
HDR+                     80            –     44.1     28.4         30.6     0.338
SAFNet                  208         1120     46.4     32.4         33.8     0.164
SAFNet/p                208         1120     34.2     25.4         26.6     0.465
AHDRNet                 504         1520     40.4     32.2         30.0     0.305
HDRFlow                  61         3270     47.9     35.0         38.6     0.120
HDRFlow/p                61         3270     49.9     26.1         31.4     0.535
HDR-Trans.              937         1220     37.9     31.8         36.4     0.253
AFUNet                21318         1162     38.5     30.6         35.4     0.335
LuckyHDR (Ours)          62           66     50.2     36.5         43.7     0.107

Method            ITPI (ms)   Params (K)   PSNR-l   PSNR-μ   HDR-VDP2 ↑   LPIPS ↓
HDR+                     80            –     45.5     28.0         46.8     0.308
SAFNet                  208         1120     42.4     29.4         45.3     0.220
SAFNet/p                208         1120     38.0     24.7         38.8     0.441
AHDRNet                 504         1520     43.7     31.4         37.4     0.311
HDRFlow                  61         3270     49.8     31.6         48.5     0.154
HDRFlow/p                61         3270     51.8     26.2         44.8     0.503
HDR-Trans.              937         1220     42.7     31.9         51.0     0.270
AFUNet                21318         1162     42.4     30.5         51.4     0.333
LuckyHDR (Ours)          62           66     52.0     33.0         53.2     0.143

Evaluation on our synthetic SI-HDR-fast test set with 3-frame input. All learning-based baselines are re-trained on our data for a fair comparison; /p rows use the official pretrained checkpoints (dataset mismatch). ITPI (inference time per image) is benchmarked on an RTX A6000 at 1888×1280.
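For reference, the two PSNR variants in the tables follow the standard convention in the HDR deghosting literature: PSNR-l is computed on linear HDR values and PSNR-μ after μ-law tonemapping, with μ = 5000 the common choice (the paper's exact normalization may differ). A sketch:

import numpy as np

MU = 5000.0  # standard μ-law constant in HDR deghosting benchmarks

def mu_tonemap(x):
    """μ-law tonemap for HDR values normalized to [0, 1]."""
    return np.log1p(MU * x) / np.log1p(MU)

def psnr(pred, target, peak=1.0):
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_l(pred, target):
    return psnr(pred, target)                           # PSNR-l: linear domain

def psnr_mu(pred, target):
    return psnr(mu_tonemap(pred), mu_tonemap(target))   # PSNR-μ: tonemapped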

BibTeX

@article{li2026luckyhdr,
  title={Lucky High Dynamic Range Imaging},
  author={Li, Baiang and Yan, Ruyu and Tseng, Ethan and Zhang, Zhoutong and Finkelstein, Adam and Chen, Jiawen and Heide, Felix},
  journal={ACM Transactions on Graphics (TOG)},
  year={2026},
  publisher={ACM New York, NY, USA}
}