“Fast error detection method for additive manufacturing process monitoring using structured light three dimensional imaging technique,” (2025)

Abstract

This paper presents a novel method to speed up error detection in an additive manufacturing (AM) process by minimizing the necessary three-dimensional (3D) reconstruction and comparison. We develop a structured light 3D imaging technique that has native pixel-by-pixel mapping between the captured two-dimensional (2D) image and the reconstructed 3D point cloud. This 3D imaging technique allows error detection to be performed in the 2D image domain prior to 3D point cloud generation, which drastically reduces complexity and computational time. Compared to an existing AM error detection method based on 3D reconstruction and point cloud processing, experimental results from a material extrusion (MEX) AM process demonstrate that our proposed method significantly increases the error detection speed.
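The core idea of flagging defects in the 2D image domain before any 3D reconstruction can be illustrated with a minimal sketch that thresholds the difference between a captured image and a reference image; thanks to the native pixel-to-point mapping, only flagged pixels would then need 3D reconstruction. The function name and threshold below are hypothetical and the paper's actual detection pipeline is more involved.

```python
import numpy as np

def detect_error_regions(captured, reference, thresh=25):
    """Flag candidate defect pixels by differencing a captured 2D image
    against a reference image (hypothetical helper, not the paper's code).
    Returns a boolean mask; True marks pixels worth reconstructing in 3D."""
    diff = np.abs(captured.astype(np.int32) - reference.astype(np.int32))
    return diff > thresh
```

The saving comes from running the (cheap) 2D comparison on every frame and reserving the (expensive) phase-to-coordinate conversion for the masked pixels only.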

“Calibration method based on virtual phase-to-coordinate mapping with linear correction function for structured light system,” (2024)

R. Vargas, L. A. Romero, S. Zhang, and A. G. Marrugo, “Calibration method based on virtual phase-to-coordinate mapping with linear correction function for structured light system,” Optics and Lasers in Engineering, 183, 108496 (2024)

Abstract

Structured light systems are crucial in fields requiring precise measurements, such as industrial manufacturing, due to their capability for real-time reconstructions. Existing calibration models, primarily based on stereo vision (SV) and pixel-wise approaches, face limitations in accuracy, complexity, and flexibility. These challenges stem from the inability to fully compensate for lens distortions and the errors introduced by physical calibration targets. Our work introduces a novel calibration approach using a virtual phase-to-coordinate mapping with a linear correction function, aiming to enhance accuracy and reduce complexity. This method involves traditional stereo calibration, phase processing, correction with ideal planes, and fitting a pixel-wise linear correction function. By employing virtual samples for phase-coordinate pairs and computing a pixel-wise correction, our methodology overcomes physical and numerical limitations associated with existing models. The results demonstrate superior measurement precision, robustness, and consistency, surpassing conventional stereo and polynomial regression models, both within and beyond the calibrated volume. This approach offers a significant advancement in structured light system calibration, providing a practical solution to existing challenges.

“Calibration of dual resolution dual camera structured light systems,” (2024)

Y. Yang, I. Bortins, D. Baldwin, and S. Zhang, “Calibration of dual resolution dual camera structured light systems,” Optics and Lasers in Engineering, 182, 108472 (2024)

Abstract

This paper introduces a calibration framework designed for dual-resolution dual-camera structured light systems. The calibration process for each camera-projector pair adheres to a conventional pixel-wise calibration method for structured light systems. To align the two cameras' coordinate system, the method employs a white planar object and virtual feature points. These points are generated based on the dual-directional absolute phase maps obtained from each camera. By leveraging the pixel-wise calibration results from both single camera-projector pairs, this approach determines the geometric relationship between the camera coordinates via rigid transformation. Experimental results demonstrate the proposed method's high calibration precision for each camera-projector pair and the accurate alignment of coordinates between these two cameras.

“Image-based non-isotropic point light source calibration using digital fringe projection,” (2024)

Y.-H. Liao and S. Zhang, “Image-based non-isotropic point light source calibration using digital fringe projection,” Optics Express, 32(14), 25046-25061 (2024)

Abstract

Accurate light source calibration is critical in numerous applications including physics-based computer vision and graphics. We propose an image-based method for calibrating non-isotropic point light sources, addressing challenges posed by their non-radially symmetric radiant intensity distribution and the non-Lambertian properties of the calibration target. We deduce an image formation model, and capture the intensity of the calibration target at multiple poses, coupled with accurate 3D geometry acquisition using the digital fringe projection technique. Finally, we design an iterative computational framework to optimize all the light source parameters simultaneously. The experiment demonstrated the high accuracy of the calibrated light source model and showcased its application by estimating the relative reflectance of diverse surfaces.

“Adaptive focus stacking for large depth-of-field microscopic structured-light 3D imaging,” (2024)

L. Chen, R. Ding, and S. Zhang, “Adaptive focus stacking for large depth-of-field microscopic structured-light 3D imaging,” Applied Optics, (2024)

Abstract

This paper presents an adaptive focus stacking method for large depth-of-field (DOF) 3D microscopic structured-light imaging systems. Conventional focus stacking methods typically capture images under a series of pre-defined focus settings without considering the attributes of the measured object, which is inefficient since some of the focus settings may be redundant. To address this problem, we first employ the focal sweep technique to reconstruct an initial rough 3D shape of the measured objects. Then, we leverage the initial 3D data to determine effective focus settings that focus the camera on the valid areas of the measured objects. Finally, we reconstruct a high-quality 3D point cloud by focus stacking using fringe images obtained from these effective focus settings. Experimental results demonstrate the success of the proposed method.

“Pixelwise calibration method for telecentric structured light system,” (2024)

Y. Yang and S. Zhang, “Pixelwise calibration method for telecentric structured light system,” Applied Optics, 63(10) (2024)

Abstract

This paper introduces a pixel-wise calibration method designed for a structured light system that uses a camera with a telecentric lens. In the calibration process, a white flat surface and another flat surface with circle dots serve as the calibration targets. After deriving the properties of the pinhole projector through a conventional camera calibration method using the circle dots and determining the camera's attributes via 3D feature point estimation through iterative optimization, the white calibration target is positioned at various poses and reconstructed with the initial camera and projector calibration data. Each 3D reconstruction is fitted with an ideal virtual plane that is further used to create the pixel-wise phase-to-coordinate mapping. To optimize the calibration accuracy, various angled poses of the calibration target are employed to refine the initial results. Experimental findings validate that the proposed approach offers high calibration accuracy for a structured light system using a telecentric lens.

“Unidirectional structured light system calibration with auxiliary camera and projector,” (2024)

Y. Yang, Y.-H. Liao, I. Bortins, D. P. Baldwin, and S. Zhang, “Unidirectional structured light system calibration with auxiliary camera and projector,” Optics and Lasers in Engineering, 175, 107984, (2024)

Abstract

This paper presents a novel calibration method for a unidirectional structured light system, which utilizes a white planar surface as the calibration target instead of conventional targets with physical features such as circle dots or checker squares. The proposed method reconstructs the white planar surface by employing stereo vision with projected random patterns and plane fitting. To facilitate the calibration process, an auxiliary camera and an auxiliary projector are employed. Experimental results demonstrate that the proposed method achieves high calibration accuracy for a unidirectional structured light system.

“Electrically tunable lens assisted absolute phase unwrapping for large depth-of-field 3D microscopic structured-light imaging,” (2024)

L. Chen and S. Zhang, “Electrically tunable lens assisted absolute phase unwrapping for large depth-of-field 3D microscopic structured-light imaging,” Optics and Lasers in Engineering, 174, 107967 (2024)

Abstract

This paper presents an absolute phase unwrapping method for large depth-of-field (DOF) microscopic structured-light 3D imaging. We calculate the wrapped phase maps and the fringe contrast maps from fringe images captured under various focus settings realized by an electrically tunable lens. From the fringe contrast maps, we extract each in-focus pixel and determine its approximate depth value from the focus settings. The approximate depth map is then used to create an artificial phase map. The geometric-constraint-based phase unwrapping method is adopted to unwrap the phase of all in-focus pixels using the artificial phase map. The unwrapped in-focus phase map is then used to reconstruct 3D coordinates for the entire DOF with the multi-focus pin-hole model. Experimental results demonstrate that our proposed method only requires three phase-shifted fringe patterns to achieve large DOF 3D imaging.


“Human respiration rate measurement with high-speed digital fringe projection technique,” (2023)

A. L. Lorenz and S. Zhang, “Human respiration rate measurement with high-speed digital fringe projection technique,” Sensors 23(21), 9000 (2023)

Abstract

This paper proposes a non-contact continuous respiration monitoring method based on Fringe Projection Profilometry (FPP). This method aims to overcome the limitations of traditional intrusive techniques by providing continuous monitoring without interfering with normal breathing. The FPP sensor captures three-dimensional (3D) respiratory motion from the chest wall and abdomen, and the analysis algorithms extract respiratory parameters. The system achieved a high Signal-to-Noise Ratio (SNR) of 37 dB with an ideal sinusoidal respiration signal. Experimental results demonstrated that a mean correlation of 0.95 and a mean Root-Mean-Square Error (RMSE) of 0.11 breaths per minute (bpm) were achieved when compared to a reference signal obtained from a spirometer.
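A respiration rate can, in principle, be recovered from the mean chest-depth signal via its dominant spectral peak. The sketch below is a generic FFT-based estimate under that assumption, not the paper's analysis algorithm; the function name and the breathing frequency band are illustrative choices.

```python
import numpy as np

def respiration_rate_bpm(depth_signal, fs):
    """Estimate respiration rate (breaths per minute) from a mean chest-depth
    signal sampled at fs Hz, via the dominant FFT peak in a plausible
    breathing band (hypothetical helper, not the paper's algorithm)."""
    sig = depth_signal - np.mean(depth_signal)        # remove DC offset
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)     # one-sided frequency axis
    spec = np.abs(np.fft.rfft(sig))                   # magnitude spectrum
    band = (freqs >= 0.1) & (freqs <= 1.0)            # 6-60 bpm search band
    f_peak = freqs[band][np.argmax(spec[band])]       # dominant breathing peak
    return f_peak * 60.0
```

For example, a 0.25 Hz sinusoidal chest motion sampled at 10 Hz yields an estimate of 15 bpm.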

“Calibration method for a multi-focus microscopic 3D imaging system,” (2023)

L. Chen, X. Wang, and S. Zhang, “Calibration method for a multi-focus microscopic 3D imaging system,” Optics Letters 48(16), 4348-4351 (2023)

Abstract

This Letter presents a novel, to the best of our knowledge, method to calibrate multi-focus microscopic structured-light three-dimensional (3D) imaging systems with an electrically adjustable camera focal length. We first leverage the conventional method to calibrate the system with a reference focal length f0. Then we calibrate the system with other discrete focal lengths fi by determining virtual features on a reconstructed white plane using f0. Finally, we fit the polynomial function model using the discrete calibration results for fi. Experimental results demonstrate that our proposed method can calibrate the system consistently and accurately.

“Pixel-wise rational model for a structured light system,” (2023)

R. Vargas, L. A. Romero, S. Zhang, and A. G. Marrugo, “Pixel-wise rational model for a structured light system,” Optics Letters, 48(10), 2712-2715 (2023)

Abstract

This Letter presents a novel structured light system model that effectively considers local lens distortion by pixel-wise rational functions. We leverage the stereo method for initial calibration and then estimate the rational model for each pixel. Our proposed model can achieve high measurement accuracy within and outside the calibration volume, demonstrating its robustness and accuracy.
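A pixel-wise rational mapping from absolute phase to depth can be illustrated with a common low-order form; the specific functional form, coefficient names, and function below are assumptions for illustration and are not taken from the Letter.

```python
import numpy as np

def rational_depth(phi, c0, c1, c2):
    """Illustrative per-pixel rational mapping from absolute phase to depth:
    z = (c0 + c1*phi) / (1 + c2*phi). c0, c1, c2 are per-pixel coefficient
    maps estimated during calibration; arrays broadcast elementwise."""
    return (c0 + c1 * phi) / (1.0 + c2 * phi)
```

Because each pixel carries its own coefficients, local lens distortion is absorbed into the per-pixel fit rather than into a global distortion model.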

“Flexible structured light system calibration method with all digital features,” (2023)

S. Zhang, “Flexible structured light system calibration method with all digital features,” Optics Express 31(10), 17076-17086 (2023)

Abstract

We propose an innovative method for single-camera and single-projector structured light system calibration that eliminates the need for calibration targets with physical features. Instead, a digital display such as a liquid crystal display (LCD) screen is used to present a digital feature pattern for camera intrinsic calibration, while a flat surface such as a mirror is used for projector intrinsic and extrinsic calibration. To carry out this calibration, a secondary camera is required to facilitate the entire process. Because no specially made calibration targets with real physical features are required for the entire calibration process, our method offers greater flexibility and simplicity in achieving accurate calibration for structured light systems. Experimental results have demonstrated the success of this proposed method.

“Large depth-of-field microscopic structured-light 3D imaging with focus stacking,” (2023)

L. Chen and S. Zhang, “Large depth-of-field microscopic structured-light 3D imaging with focus stacking,” Optics and Lasers in Engineering, 167, 107623 (2023)

Abstract

State-of-the-art microscopic structured-light (SL) three-dimensional (3D) imaging systems typically use conventional lenses with a fixed focal length achieving a limited depth of field (DOF). This paper proposes to drastically increase the DOF of the microscopic 3D imaging by leveraging the focus stacking technique and developing a novel computational framework. We first capture fringe images with various camera focal lengths using an electrically tunable lens (ETL) and align the recovered phase maps using phase constraints. Then, we extract the focused pixel phase using fringe contrast and stitch them into an all-in-focus phase map via energy minimization. Finally, we reconstruct the 3D shape using the all-in-focus phase map. Experimental results demonstrate that our proposed method can achieve a large DOF of approximately 2 mm and field of view (FOV) of approximately 4 mm × 3 mm with a pixel size at the object space of approximately 2.6 μm. The achieved DOF is approximately 10× the DOF of the system without the proposed method.
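The focused-pixel extraction step can be approximated by selecting, per pixel, the phase from the focus setting with the highest fringe contrast. This is a simplified per-pixel sketch under that assumption; the paper stitches the all-in-focus phase map via energy minimization rather than an independent per-pixel maximum, and the function name here is hypothetical.

```python
import numpy as np

def all_in_focus_phase(phase_stack, contrast_stack):
    """Simplified all-in-focus phase selection (not the paper's energy
    minimization). phase_stack and contrast_stack are (F, H, W) arrays over
    F focus settings; per pixel, keep the phase from the setting whose
    fringe contrast is largest."""
    idx = np.argmax(contrast_stack, axis=0)     # (H, W) best focus index map
    rows, cols = np.indices(idx.shape)          # pixel coordinate grids
    return phase_stack[idx, rows, cols]         # gather per-pixel phase
```

An energy-minimization stitch (as in the paper) additionally penalizes label changes between neighboring pixels, which suppresses the speckled index map a raw per-pixel argmax can produce.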

“Semi-Global Matching Assisted Absolute Phase Unwrapping,” (2023)

Y.-H. Liao and S. Zhang, “Semi-Global Matching Assisted Absolute Phase Unwrapping,” Sensors 23(1), 411 (2023); doi: 10.3390/s23010411

Abstract

Measurement speed is a critical factor in reducing motion artifacts for dynamic scene capture. Phase-shifting methods have the advantage of providing high-accuracy and dense 3D point clouds, but the phase unwrapping process affects the measurement speed. This paper presents an absolute phase unwrapping method capable of using only three speckle-embedded phase-shifted patterns for high-speed three-dimensional (3D) shape measurement on a single-camera, single-projector structured light system. The proposed method obtains the wrapped phase of the object from the speckle-embedded three-step phase-shifted patterns. Next, it utilizes the Semi-Global Matching (SGM) algorithm to establish the coarse correspondence between the image of the object with the embedded speckle pattern and the pre-obtained image of a flat surface with the same embedded speckle pattern. Then, a computational framework uses the coarse correspondence information to determine the fringe order pixel by pixel. The experimental results demonstrated that the proposed method can achieve high-speed and high-quality 3D measurements of complex scenes.
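The wrapped-phase step follows the standard three-step phase-shifting formula for fringes shifted by 2π/3. The sketch below shows that computation only, not the SGM matching or fringe-order steps; the function name is illustrative.

```python
import numpy as np

def wrapped_phase_three_step(i1, i2, i3):
    """Standard three-step phase-shifting formula for fringe images with
    phase shifts of -2*pi/3, 0, +2*pi/3. Returns the wrapped phase in
    (-pi, pi]; arrays or scalars are handled elementwise."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Only three patterns are required, which is why phase-shifting is attractive for high-speed capture; the cost is the 2π ambiguity that the rest of the paper's pipeline resolves.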

“Pixel-wise structured light calibration method with a color calibration target,” (2022)

Abstract

We propose to use a calibration target with a narrow spectral color range for the background (e.g., blue) and a broader spectral color range for the feature points (e.g., blue + red circles), along with fringe patterns matching the background color for accurate phase extraction. Since the captured fringe patterns are not affected by the high contrast of the calibration target, phase information can be accurately extracted without edge artifacts. The feature points can be clearly “seen” by the camera when the ambient light matches the feature color or when the background color is absent. We reconstruct each calibration pose to determine the three-dimensional coordinates for each pixel, and then establish a pixel-wise relationship between coordinate and phase. Compared with our previously published method, this method fundamentally simplifies and improves the algorithm by eliminating the computational framework that estimates smooth phase near high-contrast feature edges. Experimental results demonstrated the success of our proposed calibration method.

“Digital image correlation assisted absolute phase unwrapping,” (2022)

Y.-H. Liao, M. Xu, and S. Zhang, “Digital image correlation assisted absolute phase unwrapping,” Optics Express 30(18), 33022-33034 (2022); doi: 10.1364/OE.470704

Abstract

This paper presents an absolute phase unwrapping method for high-speed three-dimensional (3D) shape measurement. This method uses three phase-shifted patterns and one binary random pattern on a single-camera, single-projector structured light system. We calculate the wrapped phase from phase-shifted images and determine the coarse correspondence through the digital image correlation (DIC) between the captured binary random pattern of the object and the pre-captured binary random pattern of a flat surface. We then developed a computational framework to determine fringe order number pixel by pixel using the coarse correspondence information. Since only one additional pattern is used, the proposed method can be used for high-speed 3D shape measurement. Experimental results successfully demonstrated that the proposed method can achieve high-speed and high-quality measurement of complex scenes.
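Once a coarse absolute-phase estimate is available from the correspondence step, the fringe order can be rounded out pixel by pixel and the wrapped phase lifted to an absolute phase. The sketch below shows this standard rounding scheme under the assumption that the coarse estimate is within π of the true absolute phase; the function name is illustrative and this is not the paper's full computational framework.

```python
import numpy as np

def unwrap_with_coarse_phase(phi_wrapped, phi_coarse):
    """Lift a wrapped phase map to absolute phase given a coarse absolute
    phase estimate (e.g., from DIC or SGM correspondence). The fringe order
    k is the nearest integer number of 2*pi jumps between the two."""
    k = np.round((phi_coarse - phi_wrapped) / (2.0 * np.pi))
    return phi_wrapped + 2.0 * np.pi * k
```

Because k is an integer, the coarse estimate only needs to be accurate to within half a fringe period, which is why a single extra random pattern suffices.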

“High-speed 3D optical sensing and information processing for automotive industry,” (2021)

S. Zhang, “High-speed 3D optical sensing and information processing for automotive industry,” SAE International Journal of Advances and Current Practices in Mobility, 2021-01-0303 (2021); doi: 10.4271/2021-01-0303. (Selected as one of the best papers for 2021 WCX)

Abstract

This paper explains the basic principles behind two platform technologies that my research team has developed in the field of optical metrology and optical information processing: 1) high-speed 3D optical sensing; and 2) real-time 3D video compression and streaming. This paper discusses how such platform technologies could benefit the automotive industry, including in-situ quality control for additive manufacturing and autonomous vehicle systems. We also discuss some of the other applications that we have been working on, such as crime scene capture in forensics.

“Calibration method for an extended depth-of-field microscopic structured light system,” (2022)

L. Chen, X. Hu, and S. Zhang, “Calibration method for an extended depth-of-field microscopic structured light system,” Optics Express, 30(1), 166-178 (2022); doi: 10.1364/OE.448019

Abstract

This paper presents a calibration method for a microscopic structured light system with an extended depth of field (DOF). We first employed the focal sweep technique to achieve a sufficiently large depth measurement range, and then developed a computational framework to alleviate the impact of phase errors caused by the standard off-the-shelf calibration target (black circles on a white background). Specifically, we developed a polynomial interpolation algorithm to correct phase errors near the black circles and obtain more accurate phase maps for projector feature point determination. Experimental results indicate that the proposed method can achieve a measurement accuracy of approximately 1.0 μm for a measurement volume of approximately 2,500 μm (W) × 2,000 μm (H) × 500 μm (D).

“Comparative study on 3D optical sensors for short range applications,” (2022)

R. Chen, J. Xu, and S. Zhang, “Comparative study on 3D optical sensors for short range applications,” Optics and Lasers in Engineering, 149, 106763 (2022); doi: 10.1016/j.optlaseng.2021.106763

Abstract

The increasing availability of commercial 3D optical sensors drastically benefits application fields, including mechatronics, where affordable sensing for perception and control is vital. Yet, to our knowledge, there is no thorough comparative study of state-of-the-art 3D optical sensors, making it difficult for users to select a sensor for their specific applications. This paper evaluates the performance of each sensor for short range applications (i.e., ≤ 1 m). Specifically, we present our findings on the measurement accuracy of each sensor under “ideal” conditions, and compare the influence of various lighting conditions, object surface properties (e.g., transparency, shininess, contrast), and object locations. In addition, we developed software APIs and user instructions that are available to the community to easily use each of the evaluated commercially available 3D optical sensors.