Simulated Structured Light
I had trouble getting good scans initially, so I thought it would be good to create a perfectly controlled setup that didn't require a projector or camera: instead, I generate scenes in software, projecting textures onto 3D objects.
One of the generated input phase images:
Output from the Google Code ThreePhase Processing app:
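ThreePhase decodes three sine patterns, each shifted by 120°, into a wrapped phase map. Here is a minimal sketch of the standard three-step phase-shift formula in Python/NumPy; this is not the app's actual Processing code, and all names are mine:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase-shift decode.

    i1, i2, i3 are intensity images captured under sine patterns at
    phase offsets -120, 0, +120 degrees. Returns phase wrapped to (-pi, pi].
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check with a known phase ramp x:
x = np.linspace(0.0, 4.0 * np.pi, 200)
i1 = 0.5 + 0.5 * np.cos(x - 2.0 * np.pi / 3.0)
i2 = 0.5 + 0.5 * np.cos(x)
i3 = 0.5 + 0.5 * np.cos(x + 2.0 * np.pi / 3.0)
phi = wrapped_phase(i1, i2, i3)  # recovers x modulo 2*pi
```

The `arctan2` cancels both the ambient term and the sine amplitude, which is why the decode tolerates uniform lighting and contrast changes.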
The obvious improvement is to add an OBJ loader to this program, instead of just generating a semi-random blobby shape.
Real World Scans
I was hoping for a lot more but managed to get a few scans of people during a New Year's Eve party:
Those were all made using modified versions of Kyle McDonald's slDecode and slCapture programs. I'll post those somewhere eventually.
This script is currently very slow, but it's good to be able to debug in MATLAB and have easy access to fft and other functions.
Note the vertical lines where phase propagates vertically in glitchy ways; some filtering (at the cost of even slower processing) ought to be able to clean that up during the flood fill.
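The flood fill in question unwraps each pixel relative to an already-unwrapped neighbor, so a single bad pixel can propagate a 2π error down a whole column. A rough BFS sketch of the idea, assuming a precomputed validity mask (hypothetical names, not the original slDecode code):

```python
from collections import deque
import numpy as np

def unwrap_flood(phase, mask, seed):
    """Flood-fill phase unwrapping (BFS from a seed pixel).

    Each newly visited pixel is shifted by the multiple of 2*pi that
    brings it within pi of its already-unwrapped neighbor.
    """
    h, w = phase.shape
    out = phase.astype(float).copy()
    done = np.zeros((h, w), dtype=bool)
    done[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not done[ny, nx]:
                d = out[ny, nx] - out[y, x]
                out[ny, nx] -= 2.0 * np.pi * np.round(d / (2.0 * np.pi))
                done[ny, nx] = True
                queue.append((ny, nx))
    return out
```

The glitchy vertical lines correspond to one wrong 2π jump early in the fill being inherited by every pixel visited after it, which is why seeding and fill order matter so much.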
I hope to go mainly in the direction of high-resolution, high-fidelity scans, as opposed to high-frame-rate, low-sample-time scans, though I think I can get access to several high-frame-rate cameras for the latter. There is also a lot of room for improvement (especially at the phase unwrapping stage) that would benefit either one.
It's awesome to see the MATLAB result... I'm surprised it's so glitchy; I feel like I have some algorithms on paper that would be more accurate.
Regarding higher resolution as a target, the direction I'd like to take for this is a combination of phase shift and gray code. Imagine a gray code pattern, but using a sine wave instead of a square wave. The gray code part solves the depth discontinuity issue. The phase shift part solves resolution.
Geometric Informatics has a demo video where it looks like they're doing something similar to this: http://www.youtube.com/watch?v=EPTt2HgGYYQ at 2:47
I already have a generator for these patterns if you're interested in working with me on the decoding.
Aha, I just read through it some more and I realize now that you're using the same algorithm. I thought maybe you were using someone else's unwrapping algorithm.
I've been porting Kyle's code to C#, and it looks like I've run into some similar tearing issues when unwrapping the phases.
Any idea what's causing this? At first I just assumed it was poor masking and the inability of the flood fill to handle it. However, I get different tears on the same pixels with different thresholding.
I've created a pretty close port, so it's difficult for me to see exactly what the difference is.
One method to reduce the tearing that works decently (though it is costly in capture time) is to do at least three passes at different frequencies and take the median of the unwrapped phases.
One thing I haven't tried (for a single pass) would be to 'seed' the phase unwrapping at several different locations, not just the center (picking seeds that are still on the target and not on masked-off areas), and take the median of those.
Another thing to try is to run three variations of the standard flood fill, for example ones that preferentially fill in different directions, and take the median of the results.
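All three suggestions share the same combining step: a per-pixel median over several unwrapped-phase maps, which rejects a tear as long as it corrupts only a minority of the passes at any given pixel. A toy sketch of that vote (names are mine):

```python
import numpy as np

def median_of_passes(phase_maps):
    """Per-pixel median over several unwrapped-phase maps.

    phase_maps: list of 2D arrays already scaled to the same units
    (different frequencies, seeds, or fill orders). A tear typically
    shifts one map by a multiple of 2*pi over a region; the median
    keeps the majority value at each pixel.
    """
    return np.median(np.stack(phase_maps, axis=0), axis=0)

# Toy example: one of three passes has a 2*pi tear on its right half.
good = np.zeros((4, 4))
torn = good.copy()
torn[:, 2:] += 2.0 * np.pi
result = median_of_passes([good, good, torn])  # the tear is voted out
```

With only two passes a median degenerates to a mean, so three is the practical minimum for this kind of voting.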
You might want to move this discussion over to http://groups.google.com/group/structured-light, and share a link to your source. If you want, you could probably host it within the Structured Light Google Code project.
Thanks for the quick reply, I reposted on the google groups forum you linked.