When you print a photograph on a laser printer or an inkjet, the printer does not reproduce the continuous range of gray tones or colors in the original image directly. Most printers are fundamentally binary devices: at any given location on the page, ink is either deposited or it is not. The process of converting a continuous-tone image into a binary pattern of printed and unprinted dots — in a way that preserves the visual appearance of the original to a human observer — is called digital halftoning. It is one of the oldest problems in digital imaging, with roots in the photomechanical halftone screens of the nineteenth century, and it remains an active area of research because every printer manufactured today uses some form of halftoning algorithm, and the quality of that algorithm directly determines the quality of every printed image.
My research in digital halftoning began with my Ph.D. dissertation at the University of Delaware under Professor Gonzalo R. Arce and Professor Neal C. Gallagher, and it has remained a productive and evolving part of my research program for over 25 years. The core theoretical contribution of my dissertation — the green-noise model — established a new framework for understanding and designing halftone patterns that has influenced both academic research and industrial practice. This page describes the development of that framework, its extensions and refinements, and the more recent directions in embedded hardware implementation and QR code imaging that have grown from it.
[Photo: With friends from Tracer Inc in New York City, a company specializing in photo retail.]
Background: The Halftoning Problem
The human visual system as a low-pass filter. The reason halftoning works at all is that the human visual system has limited spatial resolution. When viewed from a normal reading distance, individual printed dots are too small to be resolved, and the eye effectively averages the ink density over a small neighborhood. A region where half the dots are printed appears as a medium gray, a region where most dots are printed appears dark, and a region where few dots are printed appears light. The halftoning algorithm controls dot placement to produce the desired local average density at every point in the image.

Error diffusion. The most widely used class of halftoning algorithms is error diffusion, introduced by Floyd and Steinberg in 1976. In error diffusion, pixels are processed sequentially, and at each step a threshold is applied to decide whether to print a dot. The difference between the desired continuous-tone value and the actual binary output — the quantization error — is then distributed to neighboring unprocessed pixels, so that the total ink density over any region converges to the desired average. Error diffusion produces high-quality halftones with good tonal rendition and sharp edges, but it can introduce structured artifacts — diagonal lines, worm-like textures, and directional bias — that are visually objectionable.

The spectral perspective. A more principled way to evaluate halftone quality is in the frequency domain. The human visual system is most sensitive to spatial frequencies in a mid-range band — it is relatively insensitive to very low frequencies (large-scale variations) and very high frequencies (fine texture below the resolution limit). An ideal halftone pattern, from this perspective, should have its power concentrated at frequencies above the visual system's sensitivity band — that is, it should be a high-frequency or blue-noise pattern — so that the residual quantization error is perceptually invisible. Robert Ulichney of Digital Equipment Corporation formalized this spectral perspective in 1987 with his blue-noise halftoning model, which characterized the ideal halftone pattern as one whose power spectral density has a characteristic annular shape in the frequency domain: suppressed at low frequencies, peaked at a mid-to-high frequency annulus, and relatively flat above that. Blue-noise halftones are visually smooth and grain-free, and Ulichney's work on the void-and-cluster algorithm for generating blue-noise masks became one of the most cited works in the halftoning literature.
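To make the sequential structure of error diffusion concrete, here is a minimal NumPy sketch of the Floyd-Steinberg algorithm; the four weights 7/16, 3/16, 5/16, and 1/16 are the original Floyd-Steinberg taps, while the function name and interface are just for illustration.

```python
import numpy as np

def floyd_steinberg(image):
    """Halftone a grayscale image with values in [0, 1] by Floyd-Steinberg
    error diffusion: scan in raster order, threshold each pixel at
    mid-gray, and push the quantization error to the four unprocessed
    neighbors with the classic 7/16, 3/16, 5/16, 1/16 weights."""
    img = np.asarray(image, dtype=np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - new          # quantization error at this pixel
            out[y, x] = int(new)
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Note that the output at each pixel depends on error accumulated from every previously scanned pixel, which is exactly the sequential dependency that motivates the parallel hardware work described later on this page.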
Phase 1: The Green-Noise Model

The limitation of blue-noise. Ulichney's blue-noise model was developed for dispersed-dot halftones — patterns in which individual printed dots are isolated from one another and distributed as uniformly as possible across the page. Dispersed-dot patterns are appropriate for high-resolution printers where individual dots are small, but many printing technologies — particularly electrophotographic (laser) printers and low-resolution inkjet printers — produce better output with clustered-dot patterns, in which dots are grouped into small clusters or rosettes. Clustered-dot patterns are more robust to dot gain (the tendency of printed dots to spread) and to the mechanical instabilities of the print engine. Ulichney's blue-noise model, however, does not correctly describe clustered-dot patterns. The spectral characteristics of a clustered-dot halftone are fundamentally different: because the dots form clusters of varying size and shape, the power spectral density has its peak at a lower frequency than the equivalent dispersed-dot pattern, and the spectral profile is broader and less sharply annular. Existing models either incorrectly applied the blue-noise framework to clustered dots or had no principled spectral model at all.

The green-noise contribution. My Ph.D. dissertation introduced the green-noise model to fill this gap. The name reflects the spectral position of the model: green-noise patterns have their spectral peak at a frequency lower than blue-noise (which is spectrally “higher”) but higher than the low-frequency content of a regular screen (which would be spectrally “lower”). Specifically, a green-noise halftone is characterized by three properties (a numerical check follows the list):
- A power spectral density that is suppressed at very low frequencies (no large-scale clumping)
- A broad spectral peak at mid-frequencies corresponding to the average cluster spacing
- A spectral profile that scales correctly with the dot percentage — as the coverage increases, the peak shifts to lower frequencies in a predictable way
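These spectral signatures are easy to check numerically. The sketch below (plain NumPy; the function name is mine) computes the radially averaged power spectral density of a binary pattern: a green-noise pattern shows suppressed energy near DC and a broad mid-frequency peak, while a blue-noise pattern peaks higher and more sharply.

```python
import numpy as np

def radial_psd(pattern, nbins=64):
    """Radially averaged power spectral density of a binary halftone
    pattern, the standard diagnostic for blue- vs. green-noise behavior.
    Returns (outer bin edges, mean power per radial-frequency bin)."""
    p = pattern.astype(np.float64)
    p -= p.mean()                              # remove the DC spike
    psd = np.abs(np.fft.fftshift(np.fft.fft2(p))) ** 2
    h, w = psd.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2)     # radial frequency per FFT bin
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges), 1, nbins)
    flat = psd.ravel()
    radial = np.array([flat[idx == k].mean() if np.any(idx == k) else 0.0
                       for k in range(1, nbins + 1)])
    return edges[1:], radial
```

Running this on the output of the `floyd_steinberg` sketch above for a constant mid-gray input should approximate the classic blue-noise curve; a clustered-dot pattern shifts the peak toward lower frequencies, as the green-noise model predicts.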
- D. L. Lau, G. R. Arce, and N. C. Gallagher, “Green-Noise Digital Halftoning,” Proceedings of the IEEE, vol. 86, no. 12, December 1998, pp. 2424–2444. (184 citations) (Lead article)
- D. L. Lau, G. R. Arce, and N. C. Gallagher, “Digital Halftoning by Means of Green-Noise Masks,” Journal of the Optical Society of America A, vol. 16, no. 7, July 1999, pp. 1575–1586.
- D. L. Lau, G. R. Arce, and N. C. Gallagher, “Digital Color Halftoning with Generalized Error Diffusion and Multichannel Green-Noise Masks,” IEEE Transactions on Image Processing, vol. 9, no. 5, May 2000, pp. 923–935.
- G. R. Arce and D. L. Lau, Method and Apparatus for Producing Halftone Images Using Green-Noise Masks Having Adjustable Coarseness, U.S. Patent 6,493,112, December 2002. (112 citations)
- D. L. Lau, G. R. Arce, and N. C. Gallagher, Digital Color Halftoning with Generalized Error Diffusion Vector Green-Noise Masks, U.S. Patent 6,798,537, September 2004.
- D. L. Lau and G. R. Arce, Modern Digital Halftoning, Marcel Dekker, New York, 2001.
- D. L. Lau and G. R. Arce, Modern Digital Halftoning, 2nd ed., CRC Press, Taylor and Francis Group, Boca Raton, FL, 2008. (514 citations)
[Photo: Former R&D engineer Chris Brown with Mutoh executives in Tokyo, Japan.]

Phase 2: Collaboration with Robert Ulichney

After joining the University of Kentucky, I initiated a collaboration with Robert Ulichney of HP Labs — the originator of the blue-noise halftoning model and one of the most influential figures in the field. Our collaboration produced two papers: a comprehensive review of blue-noise and green-noise models, invited for a special issue of IEEE Signal Processing Magazine, and an extension of blue-noise dithering to hexagonal sampling grids:
- D. L. Lau, R. Ulichney, and G. R. Arce, “Blue and Green-Noise Digital Halftoning,” IEEE Signal Processing Magazine, vol. 20, no. 4, July/August 2003, pp. 28–38.
- D. L. Lau and R. Ulichney, “Blue-Noise Halftoning for Hexagonal Grids,” IEEE Transactions on Image Processing, vol. 15, no. 5, May 2006, pp. 1270–1284.
Phase 3: Multitoning

Most halftoning algorithms produce binary output — each pixel is either printed or not. Multitoning generalizes this to printers with more than two output levels: for example, a printer that can produce light ink, dark ink, or no ink at each pixel. Multitoning is important for inkjet printers with multiple ink densities (light cyan, dark cyan, etc.) and for display systems with multiple gray levels. We developed a multitoning framework based on gray level separation — partitioning the tonal range into intervals and applying independent blue-noise or green-noise masks to each interval (a toy sketch follows the publication list):
- F. Faheem, D. L. Lau, and G. R. Arce, “Digital Multitoning Using Gray Level Separation,” Journal of Imaging Science and Technology, vol. 46, no. 5, 2002, pp. 385–397.
- J. B. Rodriguez, D. L. Lau, and G. R. Arce, “Blue-Noise Multitone Dithering,” IEEE Transactions on Image Processing, vol. 17, no. 8, August 2008, pp. 1368–1382.
- A. J. Gonzalez, J. B. Rodriguez, G. R. Arce, and D. L. Lau, “Alpha Stable Modeling of Human Visual Systems for Digital Halftoning in Rectangular and Hexagonal Grids,” Journal of Electronic Imaging, vol. 17, no. 1, 2008.
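As a concrete illustration of the gray level separation idea, here is a minimal sketch, with function names and details of my own invention rather than the published algorithm: each pixel is dithered between the two output levels that bracket its value, using a separate dither mask per interval.

```python
import numpy as np

def tile_mask(mask, h, w):
    """Tile a dither mask (values in [0, 1)) to cover an h-by-w image."""
    ry = -(-h // mask.shape[0])   # ceiling division
    rx = -(-w // mask.shape[1])
    return np.tile(mask, (ry, rx))[:h, :w]

def multitone(image, masks, levels=(0.0, 0.5, 1.0)):
    """Gray-level-separation multitoning, minimal version: split the
    tonal range at the printer's output `levels` (strictly increasing),
    then dither each pixel between the two levels that bracket it,
    using that interval's own mask from `masks` (one per interval)."""
    levels = np.asarray(levels, dtype=np.float64)
    h, w = image.shape
    out = np.empty((h, w), dtype=np.float64)
    # Interval index i of each pixel: levels[i] <= value <= levels[i+1].
    idx = np.clip(np.searchsorted(levels, image, side='right') - 1,
                  0, len(levels) - 2)
    for k in range(len(levels) - 1):
        sel = idx == k
        t = tile_mask(masks[k], h, w)
        frac = (image - levels[k]) / (levels[k + 1] - levels[k])
        out[sel] = np.where(frac > t, levels[k + 1], levels[k])[sel]
    return out
```

With `levels=(0.0, 0.5, 1.0)` and two independent green-noise masks, a pixel at 0.25 dithers between no ink and light ink, while a pixel at 0.75 dithers between light and dark ink, which is the separation the papers above analyze.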
[Photo: Dr. Lau with Chris Brown and Mutoh executives.]

Phase 4: Lenticular Imaging and Stochastic Moiré

A natural extension of halftoning theory is lenticular imaging — the production of 3D or multiview images using a lens array placed over a printed image. Each lens in the array directs light from a narrow strip of the printed image to a specific viewing angle, so that a viewer sees a different image depending on their position relative to the print. The design challenge is that the lens array introduces a regular spatial frequency into the system, which can produce moiré interference patterns with the halftone screen or the image content. We developed model-based error diffusion algorithms specifically designed for lenticular screening, in which the error diffusion filter is adapted to the geometry of the lens array to suppress moiré while maintaining good tonal rendition (a sketch of the basic interlacing geometry follows the list):
- D. L. Lau and T. Smith, “Model-Based Error Diffusion for High Fidelity Lenticular Screening,” Optics Express, vol. 14, no. 8, April 2006, pp. 3214–3224.
- D. L. Lau and T. Smith, “High Fidelity Lenticular Screening by Means of Iterative Tone Correction,” Journal of the Optical Society of America A, vol. 23, no. 11, November 2006, pp. 2714–2723.
- D. L. Lau, A. Khan, and G. R. Arce, “Minimizing Stochastic Moiré by Means of Green-Noise Masks,” Journal of the Optical Society of America A, vol. 19, no. 11, November 2002, pp. 2203–2217.
- D. L. Lau, “Minimizing Stochastic Moiré Using Green-Noise Halftoning,” Journal of Imaging Science and Technology, vol. 47, no. 4, 2003, pp. 327–338.
- S. Spiro, S. S. Daniell, and D. L. Lau, Lenticular Product Having a Radial Lenticular Blending Effect, U.S. Patent 9,171,392, October 2015.
- S. S. Daniell, S. Spiro, and D. L. Lau, Product Alignment Using a Printed Relief, U.S. Patents 9,919,515 (2018), 10,245,825 (2019), 10,889,107 (2021).
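For readers new to lenticular printing, the sketch below shows only the basic interlacing geometry, not the model-based error diffusion of the papers above; it assumes the print resolution gives exactly N columns per lenticule, a pitch that real systems must calibrate.

```python
import numpy as np

def interlace(views):
    """Column-interlace N rendered views for printing under a lenticular
    lens array. Each lenticule covers N adjacent printed columns, one
    per view; the strip order is commonly reversed under each lens
    because the lens optically flips it. `views` has shape
    (N, height, width)."""
    views = np.asarray(views)
    n, h, w = views.shape
    out = np.empty((h, w), dtype=views.dtype)
    for x in range(w):
        # Column x of the print comes from view (x mod N), reversed.
        out[:, x] = views[(n - 1) - (x % n), :, x]
    return out
```

The regular column-period N introduced by this interlacing is precisely the spatial frequency that can beat against the halftone screen and produce the moiré the papers above are designed to suppress.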
[Photo: Chris Brown and Dr. Lau at a Japanese restaurant in Suwa, Japan, where Mutoh printers are assembled.]

Phase 5: QR Code Image Embedding

A more recent and commercially significant extension of halftoning theory addresses the problem of QR code image embedding: hiding a visually attractive color image inside a scannable QR code while preserving the code's machine-readability. Standard QR codes are visually stark black-and-white patterns with no aesthetic appeal. The challenge is that the black and white modules of a QR code carry error-corrected digital information — altering them to embed an image degrades the code's reliability. The goal is to find the binary pattern that simultaneously satisfies the QR specification (with sufficient error correction margin), reproduces the desired image as faithfully as possible to the human eye, and remains scannable by standard QR readers. This problem is a generalization of halftoning: instead of thresholding a continuous-tone image to binary with a free choice of output at every pixel, we must threshold with a partially constrained output — some pixels are fixed by the QR structure, while others are free to be optimized for image quality. We developed a framework based on modified error diffusion that propagates quantization error only through the free pixels while respecting the QR constraints (an illustrative sketch follows the list):
- G. J. Garateguy, G. R. Arce, D. L. Lau, and O. P. Villareal, “QR Images: Optimized Image Embedding in QR Codes,” IEEE Transactions on Image Processing, vol. 23, no. 7, July 2014, pp. 2842–2853. (151 citations)
- K. Pena-Pena, D. L. Lau, A. J. Arce, and G. R. Arce, “QRnet: Fast Learning-Based QR Code Image Embedding,” (in preparation).
- G. R. Arce, G. Garateguy, and D. L. Lau, System and Method for Embedding of a Two Dimensional Code with an Image, U.S. Patent 9,865,027, January 2018.
- G. R. Arce, G. Garateguy, S. X. Wang, and D. L. Lau, Method to Store a Secret QR Code into a Colored Secure QR Code, U.S. Patent 10,152,663, December 2018.
- G. R. Arce, G. Garateguy, and D. L. Lau, System and Method for Embedding of a Two Dimensional Code with an Image, U.S. Patent 10,817,971, October 2020.
- G. R. Arce, G. Garateguy, S. X. Wang, and D. L. Lau, Method to Store a Secret QR Code into a Colored Secure QR Code, U.S. Patent 10,832,110, November 2020.
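To convey the flavor of the constrained problem, here is an illustrative variation on the `floyd_steinberg` sketch from the background section. This is my own toy version, not the algorithm of the papers above: the output at QR-fixed pixels is forced, and the error weights are renormalized over whichever neighbors remain free.

```python
import numpy as np

def constrained_diffusion(image, fixed, values):
    """Error diffusion with a partially constrained output (illustrative
    only). Pixels where boolean array `fixed` is True are forced to
    `values` (the QR modules); free pixels are thresholded as usual.
    Quantization error is pushed only to in-bounds, unconstrained
    neighbors, with the Floyd-Steinberg weights renormalized."""
    img = image.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    taps = ((0, 1, 7.0), (1, -1, 3.0), (1, 0, 5.0), (1, 1, 1.0))
    for y in range(h):
        for x in range(w):
            new = values[y, x] if fixed[y, x] else (1.0 if img[y, x] >= 0.5 else 0.0)
            err = img[y, x] - new
            out[y, x] = int(new)
            # Collect the in-bounds, unconstrained neighbors.
            free = [(dy, dx, wt) for dy, dx, wt in taps
                    if y + dy < h and 0 <= x + dx < w and not fixed[y + dy, x + dx]]
            total = sum(wt for _, _, wt in free)
            for dy, dx, wt in free:
                img[y + dy, x + dx] += err * wt / total
    return out
```

The free pixels absorb the tonal error that the fixed QR modules cannot, which is how the embedded image stays faithful to the eye while the code stays scannable.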
Phase 6: Embedded Hardware Implementation

A long-standing challenge in halftoning is computational speed. High-quality error diffusion algorithms are inherently sequential — each pixel’s output depends on the accumulated quantization error from all previously processed pixels — which makes them difficult to parallelize and expensive to implement in real time for high-throughput printing applications. Commercial printer RIPs (raster image processors) typically use lookup-table-based approximations that sacrifice quality for speed. In collaboration with Prof. Robert Heath at the University of Kentucky, my group developed parallel FPGA architectures for error diffusion that achieve real-time throughput without sacrificing the quality of the full sequential algorithm. The key insight is that while strict sequential error diffusion cannot be parallelized, a carefully designed stacked error diffusion variant — in which the image is processed in multiple interleaved passes — can be decomposed into independent subproblems that map efficiently onto the parallel processing resources of an FPGA (a toy software analogue follows the list):
- R. K. Venugopal, J. R. Heath, and D. L. Lau, “FPGA Based Parallel Architecture Implementation of Stacked Error Diffusion Algorithm,” IEEE 9th Symposium on Application Specific Processors (SASP), San Diego, CA, June 2011.
- Q. Hu, D. L. Lau, R. K. Venugopal, and J. R. Heath, “FPGA-Based Hardware Implementation of Blue-Noise Stacked Error Diffusion Multitoning,” IEEE Transactions on Circuits and Systems I: Regular Papers, 2025.
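The hardware details are in the papers above; purely as a software caricature of why independent subproblems matter, the toy sketch below (reusing `floyd_steinberg` from the background section) splits the image into bands that are diffused in parallel. This naive split lets no error cross band boundaries, which is exactly the quality loss the stacked formulation is constructed to avoid.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def banded_halftone(image, n_bands=4):
    """Naive parallel halftoning (NOT the published stacked algorithm):
    cut the image into horizontal bands and error-diffuse each band
    independently. Independent subproblems map directly onto parallel
    hardware units, but this crude split blocks error propagation at
    band boundaries, where visible seams can appear."""
    bands = np.array_split(np.asarray(image, dtype=np.float64), n_bands, axis=0)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(floyd_steinberg, bands))
    return np.vstack(results)
```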
A further practical problem in high-speed printing is printhead stitching — the challenge of printing seamlessly with multiple adjacent printhead segments whose outputs must be precisely aligned and whose individual dot densities must be matched to avoid visible banding artifacts at segment boundaries. I developed a family of algorithms and methods for minimizing stitching artifacts, which resulted in three U.S. patents (a toy illustration follows the list):
- D. L. Lau, Method of Minimizing Stitching Artifacts for Overlapping Printhead Segments, U.S. Patent 10,293,622, May 2019.
- D. L. Lau, Method of Printing Foreground and Background Images with Overlapping Printhead Segments, U.S. Patent 10,414,171, September 2019.
- D. L. Lau, Method of Stitching Overlapping Printhead Segments, U.S. Patent 10,625,518, April 2020.
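The patented methods are more involved, but the core idea of a gradual hand-off can be illustrated with a toy example (all names hypothetical): each pixel in the overlap region is assigned to one of the two printheads with a probability that ramps linearly across the overlap, so each head's dot density tapers smoothly instead of stopping at a hard seam.

```python
import numpy as np

def stitch_masks(height, width, overlap, seed=0):
    """Toy stitching masks for two side-by-side printhead segments whose
    printable spans overlap by `overlap` columns in the middle of the
    page. Each pixel in the overlap is printed by exactly one head,
    chosen randomly with a left-head probability that ramps from 1 to 0
    across the overlap."""
    rng = np.random.default_rng(seed)
    start = (width - overlap) // 2                # first overlap column
    p_left = np.zeros(width)
    p_left[:start] = 1.0                          # left head prints alone here
    p_left[start:start + overlap] = np.linspace(1.0, 0.0, overlap)
    left = rng.random((height, width)) < p_left   # ramp broadcasts over rows
    return left, ~left
```

A halftone H would then be printed as H AND left by one head and H AND right by the other, so every dot is printed exactly once and the per-head density crossfades through the overlap.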
Digital halftoning is the oldest and most foundational part of my research program, and its connections to the other three thrusts are deep and non-obvious:
- Thrust 2 (Graph/Hypergraph SP): The blue-noise sampling theory developed for halftoning is the direct conceptual predecessor of blue-noise graph sampling. The void-and-cluster algorithm for constructing blue-noise masks generalizes to the graph domain by replacing spatial frequency analysis with graph spectral analysis. This connection motivated the entire graph signal processing thrust.
- Thrust 3 (Compressive Spectral Imaging): The coded aperture design problem in CASSI — placing open pixels on a binary spatial mask to optimize compressive measurement quality — is mathematically equivalent to the halftoning problem. Blue-noise coded apertures are a direct application of blue-noise mask theory to optical system design. The QR code embedding problem is a constrained halftoning problem; the graph-based reconstruction algorithms in Thrust 3 use smoothness priors that generalize the spectral smoothness criteria in halftoning theory.
- Thrust 1 (Structured Light): The pattern design problems in structured light — designing projected fringe patterns whose spectral properties maximize phase measurement accuracy — are related to the pattern design problems in halftoning. Both require controlling the spatial frequency content of a projected binary or multi-level pattern to optimize a perceptual or physical measurement criterion.
Funding Summary

Primary funding for this thrust has come through industrial patent licensing and sponsored research agreements. The halftoning patent portfolio (15 U.S. patents) represents the commercial translation of this research.

| Patent Number | Title | Year |
|---|---|---|
| 6,493,112 | Green-Noise Masks with Adjustable Coarseness | 2002 |
| 6,798,537 | Digital Color Halftoning with Green-Noise Masks | 2004 |
| 9,171,392 | Lenticular Product with Radial Blending Effect | 2015 |
| 9,568,649 | Radial Lenticular Blending Effect | 2017 |
| 9,865,027 | Embedding a 2D Code with an Image | 2018 |
| 9,919,515 | Product Alignment Using a Printed Relief | 2018 |
| 10,152,663 | Secret QR Code into a Colored Secure QR Code | 2018 |
| 10,245,825 | Product Alignment Using a Printed Relief | 2019 |
| 10,293,622 | Minimizing Stitching Artifacts | 2019 |
| 10,414,171 | Printing Foreground and Background with Overlapping Segments | 2019 |
| 10,614,540 | Embedding a 2D Code in Video Images | 2020 |
| 10,625,518 | Stitching Overlapping Printhead Segments | 2020 |
| 10,817,971 | Embedding a 2D Code with an Image | 2020 |
| 10,832,110 | Secret QR Code into a Colored Secure QR Code | 2020 |
| 10,889,107 | Product Alignment Using a Printed Relief | 2021 |
| Sponsor | Program | Amount | Period |
|---|---|---|---|
| University of Delaware / Agere Systems | Halftoning ASIC Design | $22K | 2005 |
| Mutoh America | Web-Enabled Embedded Printer RIP | $47K | 2013 |
| NSF / Intel (partial) | VEC Compressive Imaging (halftoning connections) | $860K | 2015–2019 |
Graduate Alumni from This Thrust

| Student | Degree | Institution | Year | Current Position |
|---|---|---|---|---|
| Faraz Faheem | M.S. | University of Delaware | 2001 | Apple |
| Arif M. Khan | M.S. | University of Delaware | 2001 | — |
| Jan Bacca Rodriguez | Ph.D. | University of Delaware | 2007 | Universidad Nacional de Colombia |
| Rishvanth Kora Venugopal | M.S. | University of Kentucky | 2010 | Cummins Inc. |
| Gonzalo Garateguy | Ph.D. | University of Delaware | 2014 | MathWorks |
| Qishi Hu | M.S. | University of Kentucky | 2024 | Sichuan University (Ph.D. student) |