Research

Histogram of 73 journal publications by 2024 impact factor
My research program spans computational imaging, structured light 3D sensing, spatial augmented reality, hypergraph signal processing, and machine learning for imaging systems. Over 25 years at the University of Kentucky, this work has produced 75 peer-reviewed journal publications—distributed by 2024 Journal Citation Reports impact factor in the accompanying chart—30 U.S. patents, two spin-off companies, and over $12.5 million in sponsored funding from NSF, AFOSR, NIH, the Department of Homeland Security, the Department of Energy, and industry partners including Intel, Toyota, and Lockheed Martin. I am an IEEE Fellow, elevated through the Signal Processing Society for contributions to digital printing and 3D imaging, and hold a Ph.D. from the University of Delaware, where I maintain an active, ongoing collaboration with my doctoral advisor, Professor Gonzalo R. Arce.
A full list of publications is available on Google Scholar and ORCID.
Prospective Students
I welcome inquiries from prospective Ph.D. and M.S. students with strong backgrounds in signal processing, machine learning, computer vision, or optics. My group works on problems that combine mathematical rigor with physical systems and real-world applications — students who enjoy both analysis and building things tend to thrive here. Please send a CV and a brief description of your research interests to dllau@uky.edu.
Thrust 1: Structured Light 3D Imaging and Spatial Augmented Reality
Structured light 3D imaging — the science of recovering precise surface geometry by projecting coded light patterns onto a scene and analyzing the deformed patterns with a synchronized camera — has been the central thread of my research since 2002. My group has contributed to every layer of this pipeline: pattern design, projector and camera calibration, phase unwrapping, real-time point cloud reconstruction, multi-path artifact correction, and spatial augmented reality display.
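To make the phase-recovery step concrete, the following is a minimal N-step phase-shifting sketch in Python/NumPy, assuming idealized sinusoidal fringes and a linear camera response; it shows only the textbook wrapped-phase computation, not my group's full real-time pipeline, and the pattern frequency and step count are hypothetical choices.

    import numpy as np

    def make_patterns(width, height, periods=16, steps=4):
        """Generate phase-shifted sinusoidal fringe patterns (illustrative parameters)."""
        x = np.arange(width)
        phase = 2 * np.pi * periods * x / width
        shifts = 2 * np.pi * np.arange(steps) / steps
        # Each pattern is 0.5 + 0.5*cos(phase + shift), replicated down the rows.
        return np.stack([np.tile(0.5 + 0.5 * np.cos(phase + s), (height, 1)) for s in shifts])

    def wrapped_phase(images):
        """Recover per-pixel wrapped phase from N phase-shifted camera images (N, H, W)."""
        n = images.shape[0]
        shifts = 2 * np.pi * np.arange(n) / n
        num = np.sum(images * np.sin(shifts)[:, None, None], axis=0)
        den = np.sum(images * np.cos(shifts)[:, None, None], axis=0)
        return np.arctan2(-num, den)  # wrapped to (-pi, pi]; unwrapping is a separate step

    # Toy usage: "capture" the projected patterns directly, as if scanning a flat plane.
    phi = wrapped_phase(make_patterns(640, 480))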
My first major structured light program was launched with NASA STTR funding for real-time single-pattern structured light. It was followed by a project with Laurence Hassebrook under the NIJ Fast Fingerprint Capture Program — ours was one of only four teams selected nationally, and only the second academic team alongside Carnegie Mellon University — to develop a high-speed, non-contact fingerprint scanner capable of capturing all five fingers in under 30 seconds. Funded through the National Institute for Hometown Security ($988K), this work produced the spin-off company FlashScan3D, which subsequently received a Phase III grant from the Department of Homeland Security and a separate grant from the U.S. Army Criminal Investigation Laboratory to develop a scanner for 3D ballistic imaging. The peer-reviewed output of this program includes:
  • Y. Wang, L. G. Hassebrook, and D. L. Lau, "Data Acquisition and Processing of 3-D Fingerprints," IEEE Transactions on Information Forensics and Security, vol. 5, no. 4, 2010. (122 citations)
The touchless fingerprint system attracted wide media coverage, including features in the MIT Technology Review, Popular Science, EE Times, Photonics Spectra, and the SPIE Newsroom.
SLI Karaoke: real-time structured light 3D imaging demonstration.
A second major milestone was our development of structured light acquisition, processing, and display at over 150 frames per second — at the time, far beyond the state of the art. The dual-frequency pattern paper that described this system was the most downloaded open-access paper in the OSA library for four months following its publication, with over 700 downloads in that period alone. This work was featured in the SPIE Newsroom and led to the spin-off company Seikowave, Inc., which commercialized ruggedized 3D scanners for oil and gas pipeline inspection, achieving over $1M in sales in 2013. Additional high-impact publications from this period include:
  • Y. Wang, K. Liu, Q. Hao, X. Wang, D. L. Lau, and L. G. Hassebrook, "Robust Active Stereo Vision Using Kullback-Leibler Divergence," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.
  • K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, "Gamma Model and its Analysis for Phase Measuring Profilometry," JOSA A, vol. 27, no. 3, 2010. (250+ citations)
  • D. L. Lau, K. Liu, and L. G. Hassebrook, "Real-Time Three-Dimensional Shape Measurement of Moving Objects without Edge Errors by Time-Synchronized Structured Illumination," Optics Letters, vol. 35, no. 14, 2010.
In parallel with these optical sensing efforts, a $10M+ program with M2 Technologies, CABEM Technologies, and Lockheed Martin — funded by the U.S. Marine Corps and completed in three phases from 2004 to 2010 — applied mid-infrared imaging to bullet-in-flight tracking for sniper detection. The University of Kentucky's share exceeded $2M (PI). The resulting portable, ruggedized anti-sniper system was delivered to the U.S. Air Force.
The AR Sandbox unveiled at UK's E-Day festivities.
More recently, this thrust has expanded into new directions. Current work addresses multi-path artifacts in phase-shifting scanners, projector lens distortion correction, one-shot structured light using shearlet transforms, and spatial augmented reality (SAR) — systems in which structured light projectors serve simultaneously as 3D sensors and as displays that register digital content directly onto physical surfaces in real time. M.S. student Matthew Ruffner completed his thesis on the machine vision camera design in 2018, and Ph.D. student Ying Yu completed the first full dissertation on this topic in 2019, titled Spatial Augmented Reality Using Structured Light Illumination. An NSF Igniting Research Collaborations award ($36K, 2020–2021) funded an AR intubation training system for first responders. A notable interdisciplinary application of this SAR work is the Digital Skin project — a collaboration with artist Siavash Tohidi from UK's School of Art and Visual Studies and Dr. Michael Winkler from the Department of Radiology — in which real-time face tracking and projection mapping are used to overlay digital masks and textures onto a subject's face, exploring the boundary between physical and digital identity. Recent publications include:
  • Y. Zhang and D. L. Lau, "BimodalPS: Causes and Corrections for Bimodal Multi-Path in Phase-Shifting Structured Light Scanners," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
  • Y. Zhang, D. L. Lau, and Y. Yu, "Causes and Corrections for Bimodal Multipath in Structured Light Scanning," CVPR, 2019.
  • R. Gao, X. Zhao, D. L. Lau, et al., "One-shot structured light illumination based on shearlet transform," Optics Express, 2024.
This thrust is supported by 11 U.S. patents on structured light systems, projector calibration, real-time 3D imaging, and SAR display.
Thrust 2: Hypergraph and Graph Signal Processing
Since 2018, a major and rapidly growing part of my research has addressed signal processing and machine learning on graphs and hypergraphs — mathematical structures that model complex, non-pairwise relationships in sensor networks, social systems, biological data, and physical simulations. This work generalizes classical signal processing concepts — sampling, filtering, reconstruction — and modern deep learning architectures to these irregular domains.
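As a minimal illustration of what filtering means on an irregular domain, the sketch below (generic graph signal processing, not a method from the publications that follow) builds a ring graph, uses the Laplacian eigendecomposition as the graph Fourier basis, and low-pass filters a noisy vertex signal.

    import numpy as np

    # Ring graph on n vertices and its combinatorial Laplacian.
    n = 64
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    L = np.diag(A.sum(axis=1)) - A

    evals, U = np.linalg.eigh(L)       # graph frequencies and graph Fourier basis

    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * rng.standard_normal(n)

    x_hat = U.T @ x                    # graph Fourier transform of the vertex signal
    h = 1.0 / (1.0 + 5.0 * evals)      # a simple low-pass spectral response
    x_smooth = U @ (h * x_hat)         # filter in the spectral domain and invert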
The conceptual bridge from my halftoning work to graph signal processing is direct: the blue-noise sampling theory that produces visually optimal dot patterns in printing can be generalized to produce optimal sampling strategies on arbitrary graph topologies. My group introduced this connection formally in:
  • A. Parada-Mayorga, D. L. Lau, J. H. Giraldo, and G. R. Arce, "Blue-Noise Sampling on Graphs," IEEE Transactions on Signal and Information Processing over Networks, vol. 5, no. 3, 2019.
  • D. L. Lau, G. R. Arce, A. Parada-Mayorga, D. Dapena, K. Pena-Pena, "Blue-Noise Sampling of Graph and Multigraph Signals: Dithering on Non-Euclidean Domains," IEEE Signal Processing Magazine, vol. 37, no. 6, 2020. (Invited tutorial)
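To give a flavor of what blue-noise-like spacing means on a graph, here is an illustrative greedy farthest-point vertex selection under hop distance; it is a didactic stand-in with arbitrary parameters, not the sampling algorithm from the papers above.

    import numpy as np
    from scipy.sparse.csgraph import shortest_path

    def farthest_point_samples(A, k, start=0):
        """Greedily pick k vertices, each as far (in hops) as possible from those already chosen."""
        D = shortest_path(A, unweighted=True)     # all-pairs hop distances
        samples = [start]
        for _ in range(k - 1):
            d_to_set = D[:, samples].min(axis=1)  # distance from every vertex to the sample set
            samples.append(int(np.argmax(d_to_set)))
        return samples

    # Toy usage on a 10x10 grid graph.
    n = 10
    A = np.zeros((n * n, n * n))
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n:
                A[i, i + 1] = A[i + 1, i] = 1
            if r + 1 < n:
                A[i, i + n] = A[i + n, i] = 1
    idx = farthest_point_samples(A, k=12)         # well-spread vertex subset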
Dr. Lau, along with other UK faculty, honored at a UK football halftime ceremony for research funding achievements.
We subsequently extended these foundations into hypergraph signal processing, introducing the t-HGSP framework based on t-product tensor decompositions, learning hypergraph structure from data, and full hypergraph neural network architectures:
  • K. Pena-Pena, D. L. Lau, and G. R. Arce, "t-HGSP: Hypergraph Signal Processing Using t-Product Tensor Decompositions," IEEE Transactions on Signal and Information Processing over Networks, vol. 9, 2023.
  • K. Pena-Pena, L. Taipe, F. Wang, D. L. Lau, and G. R. Arce, "Learning Hypergraphs Tensor Representations From Data via t-HGSP," IEEE Transactions on Signal and Information Processing over Networks, vol. 10, 2024.
  • B. T. Brown, H. Zhang, D. L. Lau, and G. R. Arce, "Scalable Hypergraph Structure Learning with Diverse Smoothness Priors," IEEE Transactions on Signal and Information Processing over Networks, vol. 11, 2025.
  • F. Wang, K. Pena-Pena, D. L. Lau, and G. R. Arce, "T-HyperGNNs: Hypergraph Neural Networks via Tensor Representations," IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 3, 2025.
  • D. Dapena, D. L. Lau, and G. R. Arce, "Parallel Graph Signal Processing: Sampling and Reconstruction," IEEE Transactions on Signal and Information Processing over Networks, vol. 9, 2023.
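For readers unfamiliar with the t-product underlying the t-HGSP framework above, the sketch below implements its standard definition (FFT along the third mode, frontal-slice matrix products, inverse FFT); it shows only the generic tensor operation, not the hypergraph framework built on top of it.

    import numpy as np

    def t_product(A, B):
        """t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x n4 x n3)."""
        assert A.shape[1] == B.shape[0] and A.shape[2] == B.shape[2]
        Af = np.fft.fft(A, axis=2)                # transform along the third mode
        Bf = np.fft.fft(B, axis=2)
        Cf = np.einsum('ikn,kjn->ijn', Af, Bf)    # frontal-slice matrix multiplies
        return np.real(np.fft.ifft(Cf, axis=2))   # result is real for real inputs

    # Toy usage.
    rng = np.random.default_rng(1)
    C = t_product(rng.standard_normal((3, 4, 5)), rng.standard_normal((4, 2, 5)))  # shape (3, 2, 5)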
This thrust is supported by two active federal grants:
  • NSF CIF: Small — Collaborative Research: Hypergraph Signal Processing and Networks via t-Product Decompositions ($600K, 2023–2026, with Prof. Arce, University of Delaware)
  • AFOSR DEPSCOR — Learning Multilayer and Hypergraph Networks from Data ($599K, 2022–2025, with Prof. Arce, University of Delaware)
Ph.D. alumni from this program include Karelia Pena-Pena (now at Intuit), Fuli Wang (now at Apple Research), Alejandro Parada-Mayorga (now at the University of Colorado), and Daniela Dapena (now at LightingAI). Active Ph.D. candidates Mundo Levano, Nicolas Bello, and Ziyuan Dong at the University of Delaware continue this work.
Thrust 3: Compressive Spectral Imaging and Computational Sensing
Since 2015, my group has developed compressive sensing architectures for joint spectral and depth imaging — cameras that acquire hyperspectral and 3D range information simultaneously from a single sensor using coded apertures. The core insight is that by designing the aperture coding strategically — using blue-noise and graph-based priors developed in Thrust 2 — one can reconstruct full spectral cubes from far fewer measurements than conventional cameras require.
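A minimal sketch of the kind of coded-aperture forward model involved, assuming a generic single-disperser snapshot architecture rather than the specific cameras described below: each spectral band is coded by the aperture mask, sheared by the disperser, and integrated onto a single 2-D detector.

    import numpy as np

    def coded_aperture_measurement(cube, mask):
        """Generic single-disperser snapshot measurement (illustrative forward model only).

        cube: hyperspectral cube (rows x cols x bands); mask: binary coded aperture (rows x cols).
        """
        rows, cols, bands = cube.shape
        y = np.zeros((rows, cols + bands - 1))
        for b in range(bands):
            y[:, b:b + cols] += mask * cube[:, :, b]   # code, shear by one column per band, integrate
        return y

    # Toy usage with a random cube and a random binary mask standing in for a designed pattern.
    rng = np.random.default_rng(2)
    cube = rng.random((64, 64, 8))
    mask = (rng.random((64, 64)) > 0.5).astype(float)
    y = coded_aperture_measurement(cube, mask)         # detector image of shape (64, 71)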
Key contributions include the first snapshot compressive camera combining time-of-flight depth sensing with hyperspectral imaging, blue-noise coded aperture design for multispectral imaging, graph-based smoothness priors for spectral image reconstruction, and a new Sudoku multispectral filter array design:
  • H. Rueda, C. Fu, D. L. Lau, and G. R. Arce, "Single Aperture Spectral+ToF Compressive Camera: Toward Hyperspectral+Depth Imagery," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 7, 2017.
  • H. Rueda-Chacon, J. F. Florez, D. L. Lau, and G. R. Arce, "Snapshot Compressive ToF+Spectral Imaging via Optimized Color-Coded Apertures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 10, 2020.
  • H. Zhang, X. Ma, D. L. Lau, J. Zhu, and G. R. Arce, "Compressive Spectral Imaging Based on Hexagonal Blue-Noise Coded Apertures," IEEE Transactions on Computational Imaging, vol. 6, 2020.
  • J. F. Florez-Ospina, A. K. M. Alrushud, D. L. Lau, and G. R. Arce, "Block-based Spectral Image Reconstruction for Compressive Spectral Imaging Using Smoothness on Graphs," Optics Express, vol. 30, 2022.
  • A. Aguirre, A. Alrushud, G. R. Arce, and D. L. Lau, "Sudoku Multispectral Filter Arrays for Spectral Snapshot Cameras," Optics Continuum, vol. 4, no. 9, 2025.
This thrust has been supported by:
  • NSF VEC Small Collaborative Research — Joint Compressive Spectral Imaging and 3D Range Sensing Using a Commodity Time-of-Flight Sensor ($860K, 2015–2019, with Prof. Arce and Intel Corporation)
  • USDA/NIFA — Improving the Spatial and Spectral Calibration of Remote Sensing Imagery from Unmanned Aircraft Systems ($613K, 2023–2027, with Prof. Sama and Prof. Bailey, University of Kentucky)
  • U.S. Department of Energy — Sensor Data Enabled Utility Asset Capacity Utilization Maximization, Load Modeling, and Event Detection ($1M, 2025–2027, with Prof. Liao, University of Kentucky)
Applied collaborations in this thrust include machine vision for high-throughput drug discovery screening (NIH R01, $879K, with Prof. Royce Mohan) and 3D scanning for rail-highway grade crossing assessment (NURail Center, $296K, with Prof. Reginald Souleyrette).
Thrust 4: Digital Halftoning, Error Diffusion, and Embedded Imaging Systems
Chris Brown (left), a former R&D engineer now living in the US, with executives from Mutoh, a Japanese manufacturer of large-format inkjet printers, during a visit to Tokyo, Japan.
My earliest and most foundational research introduced the green-noise model for digital halftoning — the characterization of the ideal spatial and spectral properties of clustered-dot halftone patterns produced by frequency-modulated screens. This work established the theoretical framework that distinguishes green-noise — the ideal model for clustered-dot halftones — from the blue-noise model appropriate for dispersed-dot patterns, and it led to two U.S. patents awarded to the University of Delaware, a suite of follow-on publications, and the field's primary reference text. After joining the University of Kentucky, I collaborated with Robert Ulichney of HP Labs — the originator of the blue-noise halftoning model — on two papers that extended our comparative framework and corrected a long-standing error in his original blue-noise analysis.
The halftoning program has continued to evolve in two important directions. First, in collaboration with Prof. Gonzalo Arce at the University of Delaware, we extended green-noise and error diffusion theory to QR code image embedding — the problem of hiding a visually attractive color image inside a scannable QR code while preserving its decodability. This work attracted significant industry interest and resulted in four U.S. patents:
  • G. J. Garateguy, G. R. Arce, D. L. Lau, and O. P. Villareal, "QR Images: Optimized Image Embedding in QR Codes," IEEE Transactions on Image Processing, vol. 23, no. 7, July 2014. (151 citations)
  • K. Pena-Pena, D. L. Lau, A. J. Arce, and G. R. Arce, "QRnet: Fast Learning-Based QR Code Image Embedding," Multimedia Tools and Applications, 2022.
Second, in collaboration with Prof. Robert Heath at the University of Kentucky, we developed FPGA-based hardware implementations of error diffusion — translating the computational demands of blue-noise stacked error diffusion multitoning from software into real-time parallel hardware architectures. This work bridges signal processing theory and embedded systems design, and has direct applications in industrial printing, display systems, and real-time imaging pipelines:
  • Q. Hu, D. L. Lau, R. K. Venugopal, and J. R. Heath, "FPGA-Based Hardware Implementation of Blue-Noise Stacked Error Diffusion Multitoning," IEEE Transactions on Circuits and Systems I: Regular Papers, 2025.
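For context, the serial computation that such hardware architectures parallelize is classical error diffusion; the sketch below is textbook Floyd-Steinberg binary error diffusion, not the blue-noise stacked multitoning algorithm itself.

    import numpy as np

    def floyd_steinberg(gray):
        """Textbook Floyd-Steinberg binary error diffusion on a grayscale image in [0, 1]."""
        img = gray.astype(float).copy()
        h, w = img.shape
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                out[y, x] = new
                err = old - new
                # Push the quantization error onto unprocessed neighbors (7/16, 3/16, 5/16, 1/16).
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h and x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:
                    img[y + 1, x] += err * 5 / 16
                if y + 1 < h and x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
        return out

    # Toy usage: halftone a horizontal gray ramp.
    halftone = floyd_steinberg(np.tile(np.linspace(0, 1, 256), (64, 1)))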
From left: M.S. student Matthew Ruffner, Ph.D. student Ying Yu, a fellow graduate student, and Dr. Lau.
This thrust has been supported by industry grants from Mutoh America ($47K, 2013) for an embedded printer RIP on a PC platform, by Agere Systems for co-advised Ph.D. work on halftoning ASICs with Prof. Arce, and by a series of patents licensed through the University of Delaware and the University of Kentucky. The QR code patent portfolio (four U.S. patents) was developed in collaboration with Prof. Arce and industry partners.
Applied Vision: Cross-Cutting Collaborations
Beyond the four primary thrusts, my group has pursued applied machine vision collaborations that draw on techniques from structured light, depth sensing, and computational imaging across a range of domains:
  • In precision dairy farming, we developed 3D body condition scoring of dairy cows using depth cameras and automated feed intake measurement using volumetric 3D scanning (funded by KY Science and Technology Co., $50K; Journal of Dairy Science, 2016; International Journal of Agricultural and Biological Engineering, 2020).
  • In plant phenotyping, we developed 3D root measurement systems for large plants using structured light and depth imaging (The Plant Phenome Journal, 2022; funded as part of USDA/NIFA work).
  • In dental and facial imaging, we applied precision assessment of facial asymmetry using 3D imaging and AI (Journal of Clinical Medicine, 2025).
  • In rail infrastructure, we performed quantitative 3D assessment of rail-highway grade crossing roughness (NURail Center, $296K; Journal of Transportation Safety and Security, 2016).
  • In biomedical imaging, we developed pediatric vocal fold motion measurement using structured light laser projection (Journal of Voice, 2013) and machine vision for high-throughput drug discovery screening (NIH R01, $879K).
Sponsored Research Summary
Sponsor | Program | Amount | Period
NSF | CIF: Small — Hypergraph Signal Processing | $600K | 2023–2026
AFOSR | DEPSCOR — Multilayer and Hypergraph Networks | $599K | 2022–2025
USDA/NIFA | UAS Remote Sensing Calibration | $613K | 2023–2027
U.S. Dept. of Energy | Utility Asset Load Modeling | $1.0M | 2025–2027
NSF | VEC — Compressive Spectral + 3D Imaging | $860K | 2015–2019
NSF | CIF: Small — Blue-Noise Graph Sampling | $500K | 2018–2021
DHS / NIHS | 3D Fingerprint and Palm Print Scanner | $988K | 2009–2010
NIH NCI | High-Throughput Drug Discovery Screening | $879K | 2008–2011
DoD (USMC/USAF) | Anti-Sniper Infrared Targeting System | $2M+ (UK share) | 2004–2010
NURail / Univ. of Illinois | Rail Crossing 3D Assessment | $296K | 2012–2016
NSF | REU Site in ECE | $399K | 2006–2009

 