Haichuan Zhang
Ongoing Project
Deep, Convergent, Unrolled Non-Blind Image Deconvolution
Information Processing and Algorithms Lab - Pennsylvania State University, University Park
We propose a deep, interpretable neural network obtained by unrolling the widely used Half-Quadratic Splitting (HQS) algorithm. A structured parametrization scheme is introduced to ensure convergence with minimal impact on network performance.
The convergence of this neural network, under our parametrization, is both theoretically established and empirically validated through simulations.
Our approach outperforms both traditional iterative algorithms and state-of-the-art deep neural networks by approximately 1 dB in PSNR and 0.1 in SSIM, while ensuring convergence and maintaining interpretability.
The paper "A Convergent Neural Network for Non-Blind Image Deblurring" has been accepted at the 2023 IEEE International Conference on Image Processing (ICIP). Additionally, "Deep, Convergent, Unrolled Non-Blind Image Deconvolution" has been published in the IEEE Transactions on Computational Imaging (TCI).
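For illustration, below is a minimal PyTorch sketch of one way such an unrolled HQS network can be structured: each stage alternates a closed-form Fourier-domain data-fidelity step with a small learned prior step. The stage count, layer widths, residual denoiser, and per-stage penalty weights are assumptions for illustration only and do not reproduce the published architecture or its convergence-enforcing parametrization.

```python
import torch
import torch.fft


def hqs_data_step(y_fft, k_fft, z, mu):
    """Closed-form x-update of HQS in the Fourier domain:
    x = argmin_x ||y - k*x||^2 + mu * ||x - z||^2."""
    num = torch.conj(k_fft) * y_fft + mu * torch.fft.fft2(z)
    den = torch.abs(k_fft) ** 2 + mu
    return torch.real(torch.fft.ifft2(num / den))


class UnrolledHQS(torch.nn.Module):
    """Illustrative unrolled HQS network: each stage alternates the
    Fourier-domain data step with a small learned prior/denoiser.
    (The convergence-enforcing parametrization from the project is
    not reproduced here.)"""

    def __init__(self, num_stages=6):
        super().__init__()
        self.denoisers = torch.nn.ModuleList(
            [torch.nn.Sequential(
                torch.nn.Conv2d(1, 32, 3, padding=1), torch.nn.ReLU(),
                torch.nn.Conv2d(32, 1, 3, padding=1))
             for _ in range(num_stages)]
        )
        # One penalty weight mu per stage, kept positive via softplus.
        self.mu_raw = torch.nn.Parameter(torch.zeros(num_stages))

    def forward(self, y, k_fft):
        # y: blurred images (B, H, W); k_fft: blur kernel spectrum (H, W).
        y_fft = torch.fft.fft2(y)
        z = y.clone()                                   # initialize with the blurred image
        for t, denoiser in enumerate(self.denoisers):
            mu = torch.nn.functional.softplus(self.mu_raw[t]) + 1e-3
            x = hqs_data_step(y_fft, k_fft, z, mu)      # data-fidelity step
            z = denoiser(x.unsqueeze(1)).squeeze(1) + x  # learned prior step (residual)
        return z
```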
High-Resolution Transcranial Ultrasound Neuromodulation at Large Scale
Information Processing and Algorithms Lab - Pennsylvania State University, University Park
We use CT imaging for skull-induced phase aberration correction in ultrasound neuromodulation.
We developed the Intelligent Time Delay Search (ITDS) algorithm to iteratively optimize time delay profiles for phased ultrasound arrays.
To overcome ITDS's speed limitations, we use it to generate training data and propose the domain-enriched Dual-Branch Skull-Induced Phase Aberration Correction Network (DB-SIPAC), a machine learning framework that efficiently predicts time delay profiles.
The paper "Domain Enriched Learning for Skull Induced Phase Aberration Correction in Ultrasound Neuromodulation" is under review.
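As a rough illustration of iterative delay-profile optimization, the sketch below runs a simple greedy coordinate-descent search over per-element time delays, assuming a user-supplied forward model `simulate_focal_pressure` (a hypothetical stand-in for an acoustic simulation). It conveys the general idea of refining a delay profile against a simulated focal objective, not the actual ITDS procedure.

```python
import numpy as np


def refine_delay_profile(simulate_focal_pressure, num_elements,
                         candidate_offsets, num_sweeps=3):
    """Greedy coordinate-descent sketch: adjust one element's time delay
    at a time, keeping any change that raises the simulated pressure at
    the target focus. `simulate_focal_pressure(delays)` is an assumed,
    user-supplied forward model returning a scalar focal metric."""
    delays = np.zeros(num_elements)              # start from no correction
    best = simulate_focal_pressure(delays)
    for _ in range(num_sweeps):                  # repeat full passes over the array
        for i in range(num_elements):            # one element at a time
            for offset in candidate_offsets:     # try discrete delay adjustments
                trial = delays.copy()
                trial[i] += offset
                score = simulate_focal_pressure(trial)
                if score > best:                 # keep only improvements
                    best, delays = score, trial
    return delays, best
```

In the project, delay profiles optimized by ITDS serve as training targets for DB-SIPAC, which then predicts profiles directly and avoids the per-case iterative search.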
Scene Segmentation-Guided Lens Mapping for Bokeh Effect Transformation
Information Processing and Algorithms Lab - Pennsylvania State University, University Park
We developed the Segmentation-Guided Lens Mapping (SGLM) methodology for Bokeh Effect Transformation, which integrates a Foreground Segmentation Module (FSM) and a Lens Mapping Module (LMM) to capture the distinct optical properties of different lenses.
The FSM is designed to accurately predict the foreground alpha matte through its Semantic Prediction Branch and Detail Prediction Branch, ensuring sharpness is preserved in the foreground while the bokeh effect is transformed in the out-of-focus regions.
The LMM uses multiple encoders and decoders, each tailored to a particular lens, enabling the conversion of bokeh effects across different lenses.
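A schematic PyTorch sketch of how these modules could fit together is shown below: a two-branch alpha-matte predictor, a bank of lens-specific encoder/decoder pairs, and alpha compositing of the sharp foreground over the lens-mapped background. The channel counts, layer depths, and compositing step are assumptions for illustration, not the implemented SGLM.

```python
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())


class ForegroundSegmentationModule(nn.Module):
    """Two-branch alpha-matte predictor: a semantic branch for the coarse
    foreground mask and a detail branch for fine boundaries (both reduced
    to toy depth here)."""

    def __init__(self):
        super().__init__()
        self.semantic = nn.Sequential(conv_block(3, 32), conv_block(32, 32))
        self.detail = nn.Sequential(conv_block(3, 16), conv_block(16, 16))
        self.fuse = nn.Conv2d(48, 1, 1)

    def forward(self, image):
        feats = torch.cat([self.semantic(image), self.detail(image)], dim=1)
        return torch.sigmoid(self.fuse(feats))   # alpha matte in [0, 1]


class LensMappingModule(nn.Module):
    """One encoder/decoder per lens; choosing (source, target) indices routes
    an image from the source lens's bokeh rendering to the target lens's."""

    def __init__(self, num_lenses=3):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(conv_block(3, 32), conv_block(32, 64))
             for _ in range(num_lenses)])
        self.decoders = nn.ModuleList(
            [nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 3, 3, padding=1))
             for _ in range(num_lenses)])

    def forward(self, image, src_lens, tgt_lens):
        return self.decoders[tgt_lens](self.encoders[src_lens](image))


class SGLMSketch(nn.Module):
    """Composite the sharp foreground back over the lens-mapped background
    using the predicted alpha matte."""

    def __init__(self, num_lenses=3):
        super().__init__()
        self.fsm = ForegroundSegmentationModule()
        self.lmm = LensMappingModule(num_lenses)

    def forward(self, image, src_lens, tgt_lens):
        alpha = self.fsm(image)                          # (B, 1, H, W)
        mapped = self.lmm(image, src_lens, tgt_lens)     # (B, 3, H, W)
        return alpha * image + (1.0 - alpha) * mapped
```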