This page describes the methodology behind the algorithmic component of our project.
Simulation Pipeline
In our simulation framework, we use online images with varying degrees of sparsity as the ground truth objects. To keep the simulated measurements high-contrast, we use the PSF from the DiffuserCam paper. The pipeline feeds the PSF and each ground truth object into the forward model to generate a simulated sensor image, then passes that image and the PSF to the inverse algorithm to produce an estimated object. We evaluate the system by comparing each estimated object with its ground truth.
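As a minimal sketch of this pipeline: the forward model can be treated as a 2D convolution of the object with the PSF, and the inverse step as a regularized deconvolution. The function names here are hypothetical, and the Wiener-style inverse below only stands in for the iterative solver actually used in DiffuserCam-style reconstruction.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def forward_model(obj, psf):
    """Simulate the sensor image as a circular 2D convolution of the
    object with the PSF (computed via FFT)."""
    return np.real(ifft2(fft2(obj) * fft2(psf)))

def wiener_inverse(img, psf, eps=1e-2):
    """Estimate the object by regularized (Wiener-style) deconvolution.
    This is a simple stand-in for the iterative inverse algorithm."""
    H = fft2(psf)
    return np.real(ifft2(fft2(img) * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Tiny example: a single-point object and a small, corner-anchored PSF.
obj = np.zeros((32, 32)); obj[16, 16] = 1.0
psf = np.zeros((32, 32)); psf[0, 0] = 0.6; psf[0, 1] = 0.2; psf[1, 0] = 0.2
img = forward_model(obj, psf)   # simulated measurement
est = wiener_inverse(img, psf)  # estimated object
```

Running the inverse on the simulated image recovers the point source at its original location, which is the round-trip check the evaluation relies on.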
Results
For a quantitative comparison, we normalized every ground truth and estimated object by its total brightness, then summed the per-pixel differences between each ground truth and its estimate. Both the “W” and the “WashU” logo showed a much larger difference from their ground truths than the square did.
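The error metric described above can be sketched as follows (the function name is ours, not from the project code):

```python
import numpy as np

def normalized_error(gt, est):
    """Normalize each image by its total brightness, then sum the
    absolute per-pixel differences between ground truth and estimate."""
    gt_n = gt / gt.sum()
    est_n = est / est.sum()
    return np.abs(gt_n - est_n).sum()

gt = np.array([[1.0, 2.0], [3.0, 4.0]])
perfect = normalized_error(gt, gt)        # identical estimate -> 0
scaled = normalized_error(gt, 2.0 * gt)   # brightness-invariant -> 0
off = normalized_error(gt, gt[::-1])      # mismatched estimate -> > 0
```

Because both images are normalized first, the metric ignores overall brightness differences and compares only the spatial distribution of intensity.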
We plotted the error distributions of the “W” and the “WashU” logo and observed that the algorithm failed to reconstruct the object outside a certain region. We attribute this to most of the PSF being cropped in the image domain for sources near the edges, so the algorithm cannot detect them. We also found that the algorithm reconstructs smooth, continuous objects better than sharp edges.
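The cropping explanation can be illustrated with a toy calculation (the sizes and positions here are made up for illustration): a broad PSF shifted by an off-center source spills past the sensor boundary, so only a small fraction of its energy is actually captured.

```python
import numpy as np

psf = np.ones((8, 8)) / 64.0             # broad, flat 8x8 PSF (unit energy)
sensor = np.zeros((16, 16))

# Place the PSF for a source near the sensor corner, clipping to bounds.
r, c = 14, 14
rows = slice(r, min(r + 8, 16))
cols = slice(c, min(c + 8, 16))
sensor[rows, cols] = psf[: rows.stop - rows.start, : cols.stop - cols.start]

captured = sensor.sum()  # fraction of the PSF energy that lands on-sensor
```

Here only 4 of the 64 PSF pixels land on the sensor, so roughly 94% of that source's signal is lost to cropping, consistent with the reconstruction failing outside a central region.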