Spectral Imaging

A conventional camera captures three numbers per pixel — red, green, and blue intensities — which is a coarse approximation of the full spectral content of the light arriving at that pixel. The visible spectrum spans roughly 400 to 700 nanometers, and many materials, tissues, and chemical compounds have spectral signatures — characteristic patterns of absorption and reflectance across this range — that are invisible to an RGB camera but are precisely what distinguishes healthy tissue from diseased, ripe fruit from unripe, or a genuine pharmaceutical from a counterfeit. A hyperspectral camera captures tens to hundreds of narrow spectral bands per pixel, recovering the full spectral signature at every spatial location. The result is a three-dimensional data cube — two spatial dimensions and one spectral dimension — that contains far more information than any RGB image.
The challenge is that acquiring this data cube conventionally is slow and expensive. A traditional scanning hyperspectral imager sweeps a narrow slit across the scene, capturing one spatial line per exposure and building the cube sequentially. This requires the scene to remain stationary for the entire acquisition time — a fundamental limitation for moving objects, dynamic biological processes, or real-time industrial inspection. My research in compressive spectral imaging addresses this challenge by asking a different question: rather than acquiring every voxel of the data cube sequentially, can we design a sensor that acquires a small number of carefully coded measurements and then recovers the full cube computationally?
This page describes the theory behind compressive spectral imaging, my group's specific contributions to its design and reconstruction, and our extensions to joint spectral and depth sensing.
Background: Compressive Sensing and Coded Apertures
Compressive sensing theory. Compressive sensing, developed in the mid-2000s by Candès, Romberg, Tao, and Donoho, established that a signal with S nonzero coefficients in some basis can be recovered exactly from as few as O(S log N) linear measurements — far fewer than the N measurements that Nyquist sampling would require — provided that the measurement matrix satisfies a condition called the Restricted Isometry Property (RIP) and that the signal is recovered by solving a convex ℓ1-minimization problem. For signals that are sparse in a transform domain (wavelets, DCT, graph Fourier basis), this enables dramatic reduction in acquisition time or sensor complexity. Spectral images are highly compressible: adjacent spectral bands are strongly correlated, and the spatial structure of natural scenes is also sparse in appropriate transform domains. This makes spectral imaging a natural application for compressive sensing.
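The recovery guarantee is easy to exercise numerically. The sketch below uses orthogonal matching pursuit, a standard greedy stand-in for ℓ1 minimization, to recover a 3-sparse signal of length 64 from 32 random Gaussian measurements. All dimensions and the measurement matrix are illustrative, not a model of any particular instrument.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily build the support of a
    sparse x with y ~= A x, refitting by least squares at each step."""
    n = A.shape[1]
    support, x_hat = [], np.zeros(n)
    residual = y.copy()
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        x_hat[:] = 0.0
        x_hat[support] = coeffs
        residual = y - A @ x_hat
    return x_hat

rng = np.random.default_rng(0)
n, m, s = 64, 32, 3                            # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian matrices satisfy RIP w.h.p.
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x                                      # m << n compressive measurements
x_rec = omp(A, y, s)
print(f"recovery error: {np.linalg.norm(x_rec - x):.2e}")
```

With a well-conditioned random matrix and sparsity this low, the greedy solver recovers the signal essentially exactly, matching the qualitative prediction of the theory.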
From US Patent 10,151,629 (Figure 2): optical layout of a coded aperture snapshot spectral+ToF imager, showing the scene, objective lens, coded aperture array, imaging lens, dispersive prism, and detector.
Coded aperture spectral imagers. The physical implementation of compressive spectral sensing that my group works with is the Coded Aperture Snapshot Spectral Imager (CASSI), originally developed by Brady and Gehm at Duke University. In a CASSI system, a spatial light modulator (SLM) — typically a digital micromirror device (DMD) — is placed at an intermediate focal plane in the optical path. The SLM applies a binary spatial mask to the scene, blocking or passing light at each spatial location according to a coded pattern. A dispersive element (prism or diffraction grating) then spectrally shears the masked image onto a focal plane array, so that different spectral bands land at different spatial positions on the detector. A single detector image thus contains a superposition of spatially shifted, spectrally coded versions of the scene — a single-shot compressive measurement of the full spectral cube.
Recovering the spectral cube from this single measurement is an ill-posed inverse problem. The key insight from compressive sensing is that the problem becomes well-posed when the coded aperture pattern is designed so that the measurement matrix satisfies RIP, and when the spectral cube is sparse or smooth in an appropriate basis. The design of the coded aperture — the spatial pattern on the SLM — is therefore a critical degree of freedom that directly determines reconstruction quality.
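A toy version of the single-disperser forward model makes the measurement concrete: mask every band with the same binary aperture, shift each band by one detector column (a simplified stand-in for the prism's dispersion), and sum onto the detector. The cube dimensions and one-pixel-per-band shear below are illustrative.

```python
import numpy as np

def cassi_forward(cube, mask):
    """Single-shot CASSI measurement: mask each spectral band, shear it
    by one pixel per band along the dispersion axis, and sum all bands
    onto a single 2-D detector image."""
    h, w, bands = cube.shape
    detector = np.zeros((h, w + bands - 1))
    for k in range(bands):
        detector[:, k:k + w] += cube[:, :, k] * mask
    return detector

rng = np.random.default_rng(1)
cube = rng.random((8, 8, 4))                      # toy cube: 8x8 pixels, 4 bands
mask = (rng.random((8, 8)) > 0.5).astype(float)   # binary coded aperture
g = cassi_forward(cube, mask)
print(g.shape)   # (8, 11): one 2-D snapshot encodes the 3-D cube
```

The measurement is linear in the cube, so it can be written as g = Hf for a sparse matrix H determined entirely by the aperture pattern; that is the matrix whose RIP behavior the aperture design controls.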
Phase 1: Coded Aperture Design
Blue-noise coded apertures. My group's first contribution to compressive spectral imaging was the application of blue-noise theory to coded aperture design. The connection is direct: a coded aperture is a binary spatial mask, and the design problem — place a fixed fraction of “open” pixels on the aperture so that the resulting measurement matrix has good RIP properties — is mathematically equivalent to the halftoning problem of placing a fixed number of printed dots so that their spatial distribution is spectrally optimal. Blue-noise distributions, which suppress low-frequency energy and distribute power uniformly in the mid-frequency band, are known to produce near-optimal RIP matrices for compressive sensing. We demonstrated that blue-noise coded apertures achieve reconstruction quality superior to random binary apertures and to regular grid apertures across a wide range of spectral scenes:
  • H. Zhang, X. Ma, D. L. Lau, J. Zhu, and G. R. Arce, “Compressive Spectral Imaging Based on Hexagonal Blue-Noise Coded Apertures,” IEEE Transactions on Computational Imaging, vol. 6, pp. 749–763, 2020.
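The spectral property at stake can be illustrated numerically. As a rough stand-in for a blue-noise pattern (this is not the hexagonal blue-noise construction from the paper), error-diffusion halftoning of a flat field yields a binary mask whose power is pushed out of the low frequencies; the snippet compares its low-frequency energy against a white-noise random mask of the same density.

```python
import numpy as np

def error_diffusion_mask(h, w, density=0.5):
    """Floyd-Steinberg error diffusion of a constant gray level.
    Halftoning a flat field pushes quantization error into high
    frequencies, giving a blue-noise-like binary pattern."""
    img = np.full((h + 1, w + 2), density)   # padded working buffer
    mask = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x + 1]
            new = 1.0 if old >= 0.5 else 0.0
            mask[y, x] = new
            err = old - new
            img[y, x + 2] += err * 7 / 16       # right
            img[y + 1, x] += err * 3 / 16       # below-left
            img[y + 1, x + 1] += err * 5 / 16   # below
            img[y + 1, x + 2] += err * 1 / 16   # below-right
    return mask

def low_freq_energy(mask, cutoff=4):
    """Spectral energy in the lowest spatial frequencies (DC removed)."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(mask - mask.mean())) ** 2)
    cy, cx = mask.shape[0] // 2, mask.shape[1] // 2
    return f[cy - cutoff:cy + cutoff + 1, cx - cutoff:cx + cutoff + 1].sum()

rng = np.random.default_rng(2)
blue = error_diffusion_mask(64, 64)
white = (rng.random((64, 64)) > 0.5).astype(float)
print(f"low-frequency energy, blue-ish vs white: "
      f"{low_freq_energy(blue):.1f} vs {low_freq_energy(white):.1f}")
```

The suppressed low-frequency band is exactly what makes such masks behave well as compressive measurement patterns.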
Side information and adaptive design. In many practical scenarios, a low-resolution RGB image of the scene is available before the hyperspectral acquisition — for example, from a co-registered RGB camera on the same platform. This side information can be used to adapt the coded aperture pattern to the specific scene, concentrating measurements where the spectral content is most uncertain and reducing measurements where it is well predicted by the RGB image. We developed a coded aperture design framework that incorporates side information as a prior in the optimization:
  • L. Galvis, D. L. Lau, X. Ma, H. Arguello, and G. R. Arce, “Coded Aperture Design in Compressive Spectral Imaging Based on Side Information,” Applied Optics, vol. 56, no. 22, pp. 6332–6340, 2017.
Smoothness on rank-order path graphs. A key insight from our graph signal processing work (Thrust 2) is that spectral images are smooth not only in the spatial domain but also along the spectral axis: adjacent spectral bands are highly correlated. We formalized this observation by modeling the spectral axis as a path graph whose edge weights reflect inter-band correlations, and incorporated graph-based smoothness as a regularizer in the reconstruction optimization. Rank-order path graphs — in which bands are sorted by their mutual similarity rather than their physical wavelength ordering — proved particularly effective:
  • J. F. Florez-Ospina, D. L. Lau, D. Guillot, K. Barner, and G. R. Arce, “Smoothness on Rank-Order Path Graphs and its Use in Compressive Spectral Imaging with Side Information,” Signal Processing, vol. 196, art. no. 108707, 2022.
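The benefit of rank ordering can be seen on a toy path graph, using sorting by band value as a crude proxy for sorting by mutual similarity (the paper's ordering criterion is more sophisticated):

```python
import numpy as np

def path_laplacian(n):
    """Unweighted path-graph Laplacian over n spectral bands."""
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1
    return L

def smoothness(x, L):
    """Graph-Laplacian quadratic form x^T L x: the sum of squared
    differences across consecutive path edges."""
    return float(x @ L @ x)

rng = np.random.default_rng(3)
signature = np.sort(rng.random(16))           # a smooth, monotone toy signature
scrambled = signature[rng.permutation(16)]    # same bands in arbitrary order

L = path_laplacian(16)
# rank-order the bands: sorting by value restores a smooth traversal
reordered = np.sort(scrambled)
print(smoothness(reordered, L) <= smoothness(scrambled, L))  # True
```

Sorting minimizes the sum of squared consecutive differences along a path, so the reordered signal is always at least as smooth under the path-graph Laplacian, which is why the reordering strengthens the smoothness prior used in reconstruction.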
Sudoku multispectral filter arrays. A parallel approach to snapshot multispectral imaging uses filter arrays — spatial arrangements of narrowband optical filters placed directly on the detector, analogous to the Bayer RGB mosaic in standard cameras — rather than a coded aperture and dispersive element. The design challenge is to arrange filters of multiple spectral types across the array so that every local neighborhood contains a representative sample of all spectral bands, enabling accurate demosaicking. We introduced a Sudoku-inspired filter array design in which the constraint that every row, column, and block of the array contains each filter type exactly once guarantees locally complete spectral sampling while maintaining a regular, hardware-friendly spatial pattern:
  • A. Aguirre, A. Alrushud, G. R. Arce, and D. L. Lau, “Sudoku Multispectral Filter Arrays for Spectral Snapshot Cameras,” Optics Continuum, vol. 4, no. 9, pp. 2035–2052, 2025.
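The combinatorial constraint can be illustrated with a hand-built 4x4 Sudoku tile (an illustrative pattern, not the published design): every row, column, and 2x2 block contains each of four filter types exactly once, so any such neighborhood samples all bands.

```python
import numpy as np

# a 4x4 Sudoku grid: each row, column, and 2x2 block holds each of the
# four filter types exactly once (illustrative tile only)
base = np.array([[0, 1, 2, 3],
                 [2, 3, 0, 1],
                 [1, 0, 3, 2],
                 [3, 2, 1, 0]])

def sudoku_mosaic(tiles):
    """Tile the Sudoku pattern across a (4*tiles) x (4*tiles) sensor."""
    return np.tile(base, (tiles, tiles))

mosaic = sudoku_mosaic(2)   # 8x8 filter array
tile = mosaic[:4, :4]
rows_ok = all(set(r) == set(range(4)) for r in tile)
cols_ok = all(set(c) == set(range(4)) for c in tile.T)
blocks_ok = all(set(tile[i:i + 2, j:j + 2].ravel()) == set(range(4))
                for i in (0, 2) for j in (0, 2))
print(rows_ok and cols_ok and blocks_ok)  # True
```

Locally complete sampling is what makes demosaicking well posed: every small window of the detector sees every filter type, so no band must be interpolated over a long distance.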
Phase 2: Joint Spectral and Depth Imaging
Dr. Lau with the first academically owned Zmini time-of-flight camera from 3DVSystems in North America.
Standard CASSI systems capture spectral information but discard depth — every pixel in the detector image is a superposition of contributions from surfaces at different distances, and there is no mechanism to separate them. Time-of-flight (ToF) cameras, conversely, capture accurate per-pixel depth but have no spectral resolution beyond a single amplitude measurement. My group developed the first compressive imaging architecture that captures both spectral and depth information simultaneously from a single sensor, by combining a coded aperture spectral imager with a time-of-flight modulation scheme.
The physical principle. In a ToF camera, the scene is illuminated with an amplitude-modulated light source at a known frequency, and the detector measures the phase delay of the reflected modulation — which is directly proportional to the round-trip distance to the surface. We recognized that the ToF modulation scheme is compatible with CASSI: by modulating the illumination at a ToF frequency while simultaneously applying a spatial coded aperture, a single detector exposure captures a measurement that is jointly coded in the spectral, spatial, and depth dimensions. Recovering the full spectral-depth data cube from this single exposure requires solving a higher-dimensional inverse problem, but the same compressive sensing and sparsity-based reconstruction framework applies.
  • H. Rueda, C. Fu, D. L. Lau, and G. R. Arce, “Single Aperture Spectral+ToF Compressive Camera: Toward Hyperspectral+Depth Imagery,” IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 7, pp. 992–1003, 2017.
  • H. Rueda-Chacon, J. F. Florez, D. L. Lau, and G. R. Arce, “Snapshot Compressive ToF+Spectral Imaging via Optimized Color-Coded Apertures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 10, pp. 2346–2360, October 2020.
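The phase-to-depth relation underlying the ToF half of the system is simple to sketch. The snippet below assumes a 20 MHz modulation frequency (a typical but illustrative value) and the standard four-bucket phase estimate from correlation samples spaced 90 degrees apart.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # modulation frequency, Hz (illustrative value)

def four_bucket_phase(a0, a90, a180, a270):
    """Phase of the reflected modulation from four correlation samples."""
    return np.arctan2(a270 - a90, a0 - a180) % (2 * np.pi)

def tof_depth(phase):
    """Round-trip phase delay to depth: phase = 4*pi*f*d / c."""
    return C * phase / (4 * np.pi * F_MOD)

# simulate a surface 2.5 m away (unambiguous range is c / (2 f) = 7.5 m)
d_true = 2.5
phi = 4 * np.pi * F_MOD * d_true / C
a = [np.cos(phi + k * np.pi / 2) for k in range(4)]   # correlation samples
print(round(tof_depth(four_bucket_phase(*a)), 3))     # 2.5
```

Because depth enters the measurement through this phase term while the coded aperture and prism encode space and wavelength, a single exposure carries information along all three axes at once.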
RGB sensors for multispectral imaging. A practical limitation of CASSI systems is that they require a monochrome detector — standard RGB sensors are not directly compatible because the Bayer filter mosaic confounds the spectral coding. We developed a modified CASSI architecture and reconstruction algorithm that works directly with commodity RGB sensors, enabling multispectral imaging with standard camera hardware:
  • H. Rueda, D. L. Lau, and G. R. Arce, “Multi-Spectral Compressive Snapshot Imaging Using RGB Image Sensors,” Optics Express, vol. 23, no. 9, pp. 12207–12221, 2015.
Light field modeling. A further extension addresses the relationship between coded aperture systems and light field cameras — sensors that capture not just the total intensity at each pixel but the full directional distribution of incoming light. We developed a light field model for coded aperture systems that unifies the CASSI framework with plenoptic imaging, enabling joint spectral, depth, and angular reconstruction from a single sensor:
  • D. L. Lau, Y. Zhang, T. Hastings, H. Rueda, and G. R. Arce, “Light Field Modeling for Coded Aperture Systems,” OSA Imaging and Applied Optics Congress, 2017.
Phase 3: Graph-Based Spectral Image Reconstruction
A central challenge in compressive spectral imaging is reconstruction quality: given the compressed measurements, how accurately and efficiently can the full spectral cube be recovered? Early CASSI reconstruction algorithms used standard ℓ1 minimization with wavelet sparsity priors, which treat each spectral band independently and ignore inter-band correlations. My group developed reconstruction algorithms that exploit the joint spatial-spectral smoothness of natural spectral scenes using graph-based regularization.
Block-based graph reconstruction. We partitioned the spectral image into spatial blocks and modeled each block as a graph signal, where nodes represent pixels and edge weights reflect spatial and spectral similarity. Graph-Laplacian regularization within each block enforces smoothness while allowing sharp edges to be preserved across block boundaries:
  • J. F. Florez-Ospina, A. K. M. Alrushud, D. L. Lau, and G. R. Arce, “Block-Based Spectral Image Reconstruction for Compressive Spectral Imaging Using Smoothness on Graphs,” Optics Express, vol. 30, pp. 7187–7209, 2022.
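In its simplest form, graph-Laplacian regularization admits a closed-form solution. The sketch below is a generic Tikhonov-style formulation on a path graph, far simpler than the block-based method in the paper: it solves argmin_x ||y - Ax||^2 + lam * x^T L x and compares the result against the prior-free minimum-norm solution.

```python
import numpy as np

def graph_regularized_recon(A, y, L, lam):
    """Closed-form minimizer of ||y - A x||^2 + lam * x^T L x."""
    return np.linalg.solve(A.T @ A + lam * L, A.T @ y)

rng = np.random.default_rng(4)
n, m = 32, 16
# path-graph Laplacian encoding smoothness between neighboring samples
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
x = np.cumsum(rng.standard_normal(n)) / n       # slowly varying ground truth
A = rng.standard_normal((m, n)) / np.sqrt(m)    # m < n compressive measurements
y = A @ x

lam = 0.5
x_smooth = graph_regularized_recon(A, y, L, lam)
x_plain = np.linalg.lstsq(A, y, rcond=None)[0]  # minimum-norm solution, no prior
print(f"error with smoothness prior: {np.linalg.norm(x_smooth - x):.3f}")
print(f"error of min-norm solution:  {np.linalg.norm(x_plain - x):.3f}")
```

The closed form exists because the objective is quadratic; the block-based method in the paper replaces the single path graph with per-block graphs whose edge weights are learned from spatial and spectral similarity.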
Phase 4: Precision Agriculture and Remote Sensing Applications
A natural and timely application of compressive spectral imaging is precision agriculture, where UAS (unmanned aircraft systems) equipped with multispectral cameras are used to monitor crop health, detect disease, assess water stress, and guide variable-rate application of fertilizers and pesticides. Spectral reflectance indices — combinations of reflectance values at specific wavelengths — are established proxies for plant health parameters including chlorophyll content, leaf area index, and canopy nitrogen. However, the spectral and spatial calibration of UAS-mounted cameras is technically challenging: sensor characteristics vary with temperature and illumination angle, and the geometric distortions introduced by the UAS platform require careful correction.
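As one standard example of such an index, NDVI combines near-infrared and red reflectance into a single canopy-health proxy; the reflectance values below are illustrative.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy canopy reflects strongly in the near-infrared and absorbs red."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)   # guard against zero division

# typical reflectances: healthy leaf vs bare soil (illustrative numbers)
print(round(float(ndvi(0.60, 0.10)), 3))   # 0.714
print(round(float(ndvi(0.30, 0.25)), 3))   # 0.091
```

Because indices like this depend on reflectance at specific wavelengths, their accuracy is only as good as the spectral and radiometric calibration of the camera, which is precisely what the UAS calibration work targets.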
A current USDA/NIFA grant ($613K, 2023–2027) supports my group's work on improving the spectral and spatial calibration of remote sensing imagery from UAS platforms, in collaboration with agricultural scientists in the University of Kentucky's College of Agriculture. This project connects the coded aperture design and calibration methods developed in the laboratory setting to the practical constraints of field deployment on a UAS — smaller sensors, wider illumination variation, faster acquisition, and the need for robust real-time processing.
A separate collaboration supported by the U.S. Department of Energy ($1M, 2025–2027) applies spectral sensing and signal processing methods to utility asset monitoring — using sensor data from electrical grid infrastructure to enable more accurate load modeling, capacity utilization assessment, and fault detection.
Funding Summary
Sponsor | Program | Amount | Period
NSF | VEC Small Collaborative Research: Joint Compressive Spectral Imaging and 3D Range Sensing | $860K | 2015–2019
NSF | CIF: Small, Blue-Noise Graph Sampling (partial support) | $500K | 2018–2021
NSF | CIF: Small, Hypergraph Signal Processing (partial support) | $600K | 2023–2026
USDA/NIFA | UAS Remote Sensing Spectral and Spatial Calibration | $613K | 2023–2027
U.S. Department of Energy | Utility Asset Load Modeling and Event Detection | $1.0M | 2025–2027
Graduate Alumni from This Thrust
Student | Degree | Institution | Year | Current Position
Hoover Rueda | Ph.D. | University of Delaware | 2018 | Universidad Industrial de Santander
Juan Felipe Florez-Ospina | Ph.D. | University of Delaware | 2022 | Paul Scherrer Institute
Connection to Other Research Thrusts
This thrust sits at the intersection of the other three areas of my research program:
  • Thrust 1 (Structured Light): Time-of-flight depth sensing, which features prominently in Thrust 3b, shares hardware and calibration infrastructure with structured light systems. The joint spectral-depth camera architecture is a natural extension of the structured light 3D scanner toward richer scene understanding.
  • Thrust 2 (Graph/Hypergraph SP): The graph-based smoothness priors used in spectral image reconstruction (Thrust 3c) are direct applications of the graph signal processing theory developed in Thrust 2. The Sudoku filter array design (Thrust 3a) draws on the same blue-noise sampling theory that underlies blue-noise graph sampling.
  • Thrust 4 (Halftoning): The coded aperture design problem — place a binary mask on a spatial array to optimize measurement quality — is mathematically equivalent to the halftoning problem. Blue-noise coded apertures are direct applications of blue-noise halftone mask theory to the optics domain.
This cross-thrust connectivity is not coincidental. It reflects the fact that all four areas of my research are, at a deep level, about the same problem: how to represent, acquire, and reconstruct structured information as efficiently as possible given the constraints of the physical sensor and the mathematical structure of the signal.
 