Classical signal processing was built on the assumption that signals live on regular domains — a uniform grid of time samples, a rectangular array of image pixels, a periodic lattice of spatial measurements. The mathematical tools developed for these domains — the Fourier transform, convolution, sampling theory, filter design — are elegant and powerful precisely because the underlying domain has translation invariance and a well-defined notion of frequency. But many of the most important signals encountered in modern engineering and science do not live on regular grids. They live on networks: the nodes of a sensor array with irregular placement, the vertices of a social graph, the atoms of a molecular structure, the patches of a triangulated 3D mesh, the regions of a brain connectivity map. Processing these signals correctly requires generalizing the foundational tools of signal processing to these irregular, graph-structured domains.
This is the problem that my research in graph and hypergraph signal processing (GSP and HGSP) addresses. Beginning in 2018 and growing into one of the most active parts of my current research program, this thrust develops the theoretical foundations and practical algorithms for sampling, filtering, reconstruction, and learning of signals defined on graphs and hypergraphs. The work is funded by two active federal grants totaling nearly $1.2M and is carried out in close collaboration with my long-time research partner, Professor Gonzalo R. Arce at the University of Delaware.
MS student Ben Brown presenting his hypergraph research at the ECE Spring Research Symposium.
Background: From Images to Graphs
To understand why graph signal processing matters, it helps to consider what makes classical signal processing work so well for images. An image is a signal defined on a 2D rectangular grid. The grid has a natural notion of neighborhood — every interior pixel has exactly eight neighbors at known positions — and a natural notion of smoothness — nearby pixels tend to have similar intensities. The discrete Fourier transform exploits the grid's translation invariance to decompose the image into spatial frequency components, and operations like low-pass filtering, downsampling, and wavelet decomposition all inherit their elegance from the grid's regularity.

Now consider a sensor network deployed across a city to measure air quality. The sensors are placed irregularly, their spacing varies widely, and two sensors that are geographically close may be connected through very different atmospheric pathways than two sensors that are far apart. The signal of interest — pollutant concentration at each sensor — is smooth in the sense that nearby sensors tend to record similar values, but "nearby" is defined by the network topology, not by Euclidean distance. Applying classical Fourier analysis to this signal by pretending the sensors live on a grid would discard the network structure entirely. Graph signal processing provides the correct framework: it defines the graph Laplacian as the analog of the second derivative operator, uses the eigenvectors of the Laplacian as the graph Fourier basis, and builds filtering, sampling, and reconstruction theory on this foundation.

Hypergraphs extend this further. In a standard graph, every edge connects exactly two vertices — a pairwise relationship. But many real-world relationships are not pairwise: a research paper has multiple co-authors simultaneously, a metabolic reaction involves multiple reactants and products, a sensor reading depends on the joint state of a cluster of nodes.
A hypergraph replaces binary edges with hyperedges that can connect any number of vertices simultaneously. Processing signals on hypergraphs — capturing these higher-order dependencies — requires yet another layer of generalization beyond standard GSP. My group has contributed to both layers: graph signal processing, with a focus on optimal sampling theory, and hypergraph signal processing, with a focus on tensor-algebraic frameworks for representing and learning from higher-order data.

Phase 1: The Bridge from Halftoning — Blue-Noise Sampling
The conceptual origin of my work in graph signal processing is my earlier research in digital halftoning. In halftoning, the goal is to place a fixed number of printed dots on a page so that their spatial distribution appears as uniform as possible to the human visual system — neither clustered nor overly regular, but exhibiting a characteristic mid-frequency randomness called blue noise. The mathematical criterion for blue noise is a spectral one: the power spectral density of the dot pattern should be suppressed at low frequencies (no large-scale clumping) and at high frequencies (no regular aliasing artifacts), with energy concentrated in a mid-frequency annular band.

In 2018, I recognized that this same criterion — place a fixed number of samples so that their distribution is as uniform as possible with respect to the underlying domain's geometry — is exactly the right criterion for sampling signals on graphs. On a graph, "uniform" means that sampled nodes should be spread across the graph in a way that respects the graph's topology and the smoothness of the signals it carries, not just their Euclidean positions. This led to the concept of blue-noise sampling on graphs: a graph sampling strategy that minimizes low-frequency concentration in the graph spectral domain, analogous to the way blue-noise halftone masks minimize low-frequency concentration in the spatial frequency domain. This connection — between a 25-year-old body of work in printing and a cutting-edge problem in network signal processing — is a good example of how foundational theoretical work creates unexpected bridges across decades and domains.

Phase 2: Graph Signal Processing — Blue-Noise Sampling and Reconstruction
The first paper in this thrust introduced the formal connection between halftoning and graph sampling and developed an efficient algorithm for constructing blue-noise sampling sets on arbitrary graphs. The algorithm generalizes the void-and-cluster method — a classical halftoning technique — to the graph domain by operating on the graph spectral representation rather than the spatial frequency representation:
- A. Parada-Mayorga, D. L. Lau, J. H. Giraldo, and G. R. Arce, "Blue-Noise Sampling on Graphs," IEEE Transactions on Signal and Information Processing over Networks, vol. 5, no. 3, pp. 554–569, September 2019.
- D. L. Lau, G. R. Arce, A. Parada-Mayorga, D. Dapena, K. Pena-Pena, "Blue-Noise Sampling of Graph and Multigraph Signals: Dithering on Non-Euclidean Domains," IEEE Signal Processing Magazine, vol. 37, no. 6, pp. 31–42, October 2020. (Invited tutorial)
- D. Dapena, D. L. Lau, and G. R. Arce, "Parallel Graph Signal Processing: Sampling and Reconstruction," IEEE Transactions on Signal and Information Processing over Networks, vol. 9, pp. 190–206, 2023.
- D. Dapena, D. L. Lau, and G. R. Arce, "Density Aware Blue-Noise Sampling on Graphs," 30th European Signal Processing Conference (EUSIPCO), Belgrade, Serbia, 2022.
- J. F. Florez-Ospina, A. K. M. Alrushud, D. L. Lau, and G. R. Arce, "Block-Based Spectral Image Reconstruction for Compressive Spectral Imaging Using Smoothness on Graphs," Optics Express, vol. 30, pp. 7187–7209, 2022.
- J. F. Florez-Ospina, D. L. Lau, D. Guillot, K. Barner, and G. R. Arce, "Smoothness on Rank-Order Path Graphs and its Use in Compressive Spectral Imaging with Side Information," Signal Processing, 2022.
- NSF CIF: Small — Collaborative Research: Blue-Noise Graph Sampling ($500K, 2018–2021, with Prof. Arce, University of Delaware)
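The graph-spectral machinery these papers build on can be sketched in a few lines. The following is a minimal illustration, not the papers' algorithms: it uses a toy 16-node cycle graph, takes the Laplacian eigenvectors as the graph Fourier basis, spreads samples with a simple greedy farthest-node heuristic (a stand-in for the void-and-cluster generalization), and recovers a bandlimited signal from its samples by least squares.

```python
import numpy as np
from collections import deque

def cycle_graph(n):
    """Adjacency matrix of an n-node cycle (a deterministic toy graph)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

def hop_distances(A, src):
    """BFS hop distances from src to every node."""
    n = len(A)
    dist = np.full(n, np.inf)
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in np.nonzero(A[u])[0]:
            if dist[v] == np.inf:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

n, band, m = 16, 4, 8
A = cycle_graph(n)
L = np.diag(A.sum(1)) - A          # combinatorial graph Laplacian

# Graph Fourier basis: Laplacian eigenvectors, ordered by eigenvalue
# (the eigenvalue plays the role of frequency).
lam, U = np.linalg.eigh(L)

# A bandlimited ("smooth") graph signal in the span of the first `band` modes.
rng = np.random.default_rng(1)
x = U[:, :band] @ rng.standard_normal(band)

# Greedy farthest-node sampling: repeatedly add the node with the largest
# hop distance to the current sample set. This spreads samples across the
# topology in a blue-noise-like way, though it only stands in for the
# void-and-cluster generalization developed in the papers.
S = [0]
while len(S) < m:
    d = np.min([hop_distances(A, s) for s in S], axis=0)
    d[S] = -1
    S.append(int(np.argmax(d)))

# Bandlimited least-squares reconstruction from the m sampled values.
coef, *_ = np.linalg.lstsq(U[S, :band], x[S], rcond=None)
x_rec = U[:, :band] @ coef
print(np.max(np.abs(x_rec - x)))   # near zero: exact recovery of a bandlimited signal
```

Because the signal lives in the span of the first four Laplacian eigenvectors and the eight spread-out samples make the sampled eigenvector submatrix full rank, the least-squares step recovers the signal exactly up to floating-point error.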
Phase 3: Hypergraph Signal Processing via the t-Product

Standard graphs model pairwise relationships. But in many signal processing problems the relevant dependencies are higher-order: a group of sensors that jointly measure a correlated physical phenomenon, a set of co-authors who collectively contribute to a paper, a cluster of genes that jointly regulate a biological pathway. Hypergraphs — in which a single hyperedge can connect any number of nodes simultaneously — are the natural mathematical structure for these problems. However, generalizing signal processing to hypergraphs is non-trivial: the standard graph Laplacian construction does not extend cleanly to hyperedges of variable size, and the tensor algebra required to represent hypergraph structure introduces significant computational complexity.

Our key insight was that the algebraic structure of a hypergraph can be naturally represented using the t-product — a tensor multiplication operation defined for third-order tensors that generalizes matrix multiplication using the discrete Fourier transform along the third dimension. The t-product endows the space of third-order tensors with a ring structure that supports eigendecomposition, singular value decomposition, and spectral analysis in direct analogy with matrix algebra. By representing the hypergraph adjacency as a third-order tensor and applying t-product algebra, we obtained a clean, computationally tractable framework for hypergraph signal processing:
- K. Pena-Pena, D. L. Lau, and G. R. Arce, "t-HGSP: Hypergraph Signal Processing Using t-Product Tensor Decompositions," IEEE Transactions on Signal and Information Processing over Networks, vol. 9, pp. 329–345, 2023.
- K. Pena-Pena, L. Taipe, F. Wang, D. L. Lau, and G. R. Arce, "Learning Hypergraphs Tensor Representations From Data via t-HGSP," IEEE Transactions on Signal and Information Processing over Networks, vol. 10, pp. 17–31, 2024.
- B. T. Brown, H. Zhang, D. L. Lau, and G. R. Arce, "Scalable Hypergraph Structure Learning with Diverse Smoothness Priors," IEEE Transactions on Signal and Information Processing over Networks, vol. 11, pp. 1–15, 2025.
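The t-product at the heart of this framework is easy to state concretely. Below is a minimal sketch (the function names and random test tensors are mine, and the papers' hypergraph adjacency-tensor construction is more involved): transform along the third axis with the DFT, multiply the frontal slices facewise, transform back, and check the result against the equivalent block-circulant definition.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (l x p x n) and B (p x m x n):
    DFT along the third axis, facewise matrix products, inverse DFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ipk,pjk->ijk', Ah, Bh)   # one matrix product per frontal face
    return np.real(np.fft.ifft(Ch, axis=2))

def bcirc(A):
    """Block-circulant matrix of A: the t-product equals ordinary matrix
    multiplication by this matrix acting on the unfolded second tensor."""
    l, p, n = A.shape
    M = np.zeros((l * n, p * n))
    for i in range(n):
        for j in range(n):
            M[i*l:(i+1)*l, j*p:(j+1)*p] = A[:, :, (i - j) % n]
    return M

# unfold stacks the frontal slices vertically: [T(1); T(2); ...; T(n)].
unfold = lambda T: T.transpose(2, 0, 1).reshape(T.shape[2] * T.shape[0], T.shape[1])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
C = t_product(A, B)

# The FFT route agrees with the block-circulant definition of the t-product.
print(np.allclose(unfold(C), bcirc(A) @ unfold(B)))   # True
```

The FFT diagonalizes the block-circulant structure, which is why the facewise products in the transform domain cost far less than forming `bcirc(A)` explicitly.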
Phase 4: Hypergraph Neural Networks

Graph neural networks (GNNs) — which generalize convolutional neural networks to graph-structured data — have become one of the most active areas of machine learning research. However, standard GNNs operate on pairwise graphs and cannot directly exploit higher-order hypergraph structure. We developed T-HyperGNNs, a family of hypergraph neural network architectures grounded in the t-product algebraic framework that generalizes graph convolution, graph pooling, and graph U-Net architectures to hypergraphs:
- F. Wang, K. Pena-Pena, D. L. Lau, and G. R. Arce, "T-HyperGNNs: Hypergraph Neural Networks via Tensor Representations," IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 3, pp. 5044–5058, 2025.
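For readers new to hypergraph neural networks, the sketch below shows the classical incidence-matrix (clique-expansion-style) convolution that architectures like T-HyperGNNs improve upon; it is not the paper's t-product construction, and the tiny hypergraph, feature sizes, and weights are invented for illustration.

```python
import numpy as np

# A tiny hypergraph: 5 nodes, 3 hyperedges. H[v, e] = 1 if node v is in hyperedge e.
H = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)

def hypergraph_conv(X, H, Theta, w=None):
    """One incidence-matrix hypergraph convolution layer:
        X' = ReLU(Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta).
    Features propagate node -> hyperedge -> node, so all members of a
    hyperedge exchange information in a single layer."""
    w = np.ones(H.shape[1]) if w is None else w   # hyperedge weights
    Dv = H @ w                                    # node degrees
    De = H.sum(axis=0)                            # hyperedge degrees
    S = (H * w / De) @ H.T                        # node-edge-node propagation
    S = S / np.sqrt(Dv)[:, None] / np.sqrt(Dv)[None, :]
    return np.maximum(S @ X @ Theta, 0.0)         # ReLU nonlinearity

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))      # node features
Theta = rng.standard_normal((4, 2))  # learnable layer weights
out = hypergraph_conv(X, H, Theta)
print(out.shape)                     # (5, 2)
```

This clique-expansion operator collapses each hyperedge into pairwise interactions; the t-product formulation is designed precisely to avoid that collapse and retain the higher-order structure.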
Funding

All three grants are collaborative with Professor Gonzalo R. Arce, Department of Electrical and Computer Engineering, University of Delaware.

| Sponsor | Program | Amount | Period |
|---|---|---|---|
| NSF CIF: Small | Collaborative Research: Blue-Noise Graph Sampling | $500K | 2018–2021 |
| NSF CIF: Small | Collaborative Research: Hypergraph Signal Processing and Networks via t-Product Decompositions | $600K | 2023–2026 |
| AFOSR DEPSCOR | Learning Multilayer and Hypergraph Networks from Data | $599K | 2022–2025 |
Graduate Alumni from This Thrust

| Student | Degree | Institution | Year | Current Position |
|---|---|---|---|---|
| Alejandro Parada-Mayorga | Ph.D. | University of Delaware | 2019 | University of Colorado |
| Daniela Dapena | Ph.D. | University of Delaware | 2022 | LightingAI |
| Karelia Pena-Pena | Ph.D. | University of Delaware | 2023 | Intuit |
| Fuli Wang | Ph.D. | University of Delaware | 2024 | Apple Research |
| Benjamin T. Brown | M.S. | University of Kentucky | 2025 | — |
| Mihir Malladi | Ph.D. | University of Kentucky | anticipated December 2026 | — |
| Haoxiang Zhang | Ph.D. | University of Kentucky | anticipated 2027 | — |
Active Ph.D. candidates Mundo Levano, Nicolas Bello, and Ziyuan Dong at the University of Delaware are currently pursuing extensions of this work.

Open Problems and Future Directions

This research thrust is rich with open problems that make excellent Ph.D. dissertation topics:
- Directed hypergraph signal processing. Current t-HGSP theory assumes undirected hyperedges. Directed hyperedges — in which the relationship between nodes has an asymmetric, causal structure — arise naturally in biological signaling networks, information flow graphs, and citation networks. Extending t-HGSP to directed hypergraphs requires new tensor algebraic tools and new notions of graph frequency.
- Physics-informed hypergraph learning. Many physical systems — fluid flows, structural mechanics, electromagnetic fields — can be modeled as signals on irregular meshes whose topology encodes the physical geometry. Incorporating known physical laws (conservation equations, boundary conditions) as constraints in the hypergraph learning problem is a natural extension of this work with applications in scientific machine learning.
- Scalable T-HyperGNN architectures. Current T-HyperGNN architectures scale to graphs of moderate size. Extending them to graphs with millions of nodes — social networks, large-scale sensor arrays, genomic interaction networks — requires new approximation strategies and hardware-aware implementations.
- Applications in computational imaging. The graph smoothness priors developed in this thrust have already been applied to compressive spectral imaging (see Thrust 3). Open problems include applying hypergraph priors to 3D point cloud denoising and reconstruction — a natural bridge back to Thrust 1.