2008-10-07

ARToolKit rangefinding continued

I've discovered the arParamObserv2Ideal() function to correct for image distortion, and I think it has improved things. But my main problem is figuring out how to properly project the line from the origin through the laser dot in marker/fiducial space. I have a message out on the mailing list but it is not that active.
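For reference, here is a minimal sketch of how arParamObserv2Ideal() gets used - it maps an observed (distorted) pixel to ideal coordinates using the dist_factor from the loaded camera calibration. The cparam variable and the undistort_dot() wrapper are just illustrative names, not from the actual app:

#include <AR/ar.h>
#include <AR/param.h>

extern ARParam cparam;  /* camera parameters, already loaded with arParamLoad()/arParamChangeSize() */

/* Map the observed laser dot pixel (dot_x, dot_y) to undistorted image coordinates. */
void undistort_dot(double dot_x, double dot_y, double *ideal_x, double *ideal_y)
{
    arParamObserv2Ideal(cparam.dist_factor, dot_x, dot_y, ideal_x, ideal_y);
}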



The results of my crude fillgaps processing app are shown below, using the somewhat sparse points from above.



The above results look pretty good- the points along the edge of the wall and floor are the furthest from the camera so appear black, and the floor and wall toward the top and bottom of the image are closer and get brighter.
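As a sketch of the general idea (not the actual fillgaps code), one crude way to spread sparse depth samples is to give each empty pixel the value of the nearest sample in its row:

#include <stdlib.h>
#include <string.h>

/* depth: a width*height 8-bit image where 0 means "no sample here".
   Fill each empty pixel with the nearest non-zero sample in the same row;
   a copy of the row is used so freshly filled pixels aren't treated as samples. */
void fillgaps_rows(unsigned char *depth, int width, int height)
{
    unsigned char *orig = malloc(width);
    for (int y = 0; y < height; y++) {
        unsigned char *row = depth + y * width;
        memcpy(orig, row, width);
        for (int x = 0; x < width; x++) {
            if (orig[x] != 0)
                continue;
            for (int d = 1; d < width; d++) {   /* search outward from the gap */
                if (x - d >= 0 && orig[x - d] != 0) { row[x] = orig[x - d]; break; }
                if (x + d < width && orig[x + d] != 0) { row[x] = orig[x + d]; break; }
            }
        }
    }
    free(orig);
}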

My main problem is getting live feedback on where I've already collected points. With a live view that showed all of the found depth points it would be easier to achieve uniform coverage, rather than going off memory. The difficulty there is that to use ARToolKit I have to detect the markers, then shrink the image down and draw dots over it - that doesn't sound too hard, but the first time I tried it everything got messed up.
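The shrink-and-overlay step itself is only a few lines; a sketch of one way to do it (nearest-neighbor downscale of an 8-bit grayscale frame, then bright dots at the found points - the buffer layout and the Point type here are assumptions, not ARToolKit types):

typedef struct { int x, y; } Point;   /* a found depth point, in full-resolution pixel coordinates */

/* Nearest-neighbor shrink of an 8-bit grayscale frame by an integer factor,
   then mark each found point with a white pixel in the small image. */
void preview(const unsigned char *full, int fw, int fh, int factor,
             unsigned char *out, const Point *pts, int npts)
{
    int sw = fw / factor, sh = fh / factor;
    for (int y = 0; y < sh; y++)
        for (int x = 0; x < sw; x++)
            out[y * sw + x] = full[(y * factor) * fw + (x * factor)];
    for (int i = 0; i < npts; i++) {
        int sx = pts[i].x / factor, sy = pts[i].y / factor;
        if (sx >= 0 && sx < sw && sy >= 0 && sy < sh)
            out[sy * sw + sx] = 255;   /* draw the dot */
    }
}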

2008-10-05

Depth Maps with ARToolkit and a Laser Pointer

Flying home from a recent trip to the east coast, I tried to figure out what the most inexpensive method for approximating scanning lidar would be. This is my answer:


ARToolKit assisted laser rangefinding from binarymillenium on Vimeo.


It's not that inexpensive, since I'm using a high resolution network camera similar to an Elphel - but it's possible I could replace it with a good consumer still camera with a Linux-supported USB interface for getting live images.




In the above screenshot, the line projected from the found fiducial is shown, along with a target marking where the found red laser dot is - they ought to cross each other, but I need to learn more about transforming coordinates in and out of camera space in ARToolKit to improve on it.
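For what it's worth, my current understanding is that arGetTransMat() fills a 3x4 [R|t] matrix taking marker coordinates into camera coordinates, and arUtilMatInv() inverts it; the helpers below are just a sketch of applying those, not code from the app:

#include <AR/ar.h>

/* patt_trans[3][4] is the matrix filled in by arGetTransMat() for the detected marker. */

/* marker (fiducial) space -> camera space: pc = R*pm + t */
void marker_to_camera(double patt_trans[3][4], const double pm[3], double pc[3])
{
    for (int i = 0; i < 3; i++)
        pc[i] = patt_trans[i][0] * pm[0] + patt_trans[i][1] * pm[1]
              + patt_trans[i][2] * pm[2] + patt_trans[i][3];
}

/* camera space -> marker space, using ARToolKit's matrix inverse */
void camera_to_marker(double patt_trans[3][4], const double pc[3], double pm[3])
{
    double inv[3][4];
    arUtilMatInv(patt_trans, inv);
    for (int i = 0; i < 3; i++)
        pm[i] = inv[i][0] * pc[0] + inv[i][1] * pc[1]
              + inv[i][2] * pc[2] + inv[i][3];
}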







This picture shows that the left side of the screen is 'further' away than the wall on the right, but that is not quite right- it is definitely further away from the fiducial, so I may be making an error.

source code

Intel Research Seattle - Open House

Photos below are from the 2008.10.01 Intel Research Seattle album.


Robotic Arm

This robot had a camera and EM field sensors in its hand, and could detect the presence of an object within grasping distance. Some objects it had been trained to recognize, and others it had not but would attempt to grab anyway. Voice synthesis provided additional feedback - most humorously, when it accidentally (?) dropped something it said 'oops'. Motor feedback also sensed when the arm was pushed on, and the arm would give way - making it much safer than an arm that would blindly power through any obstacle.

It's insane, this application level taint

I don't really know what this is about...


Directional phased array wireless networking

The phased array part is pretty cool, but the application wasn't that compelling: using two directional antennas, a select zone where the beams overlap can be provided with wireless access, while zones not covered by both are excluded. Maybe it's more power efficient that way?

This antenna also had a motorized base, so that comparisons between physically rotating the antenna and rotating the field pattern could be made.

Haptic squeeze

This squeeze device has a motor in it to resist pressure, but it was broken at the time I saw it. The presenter said it wasn't really intended to simulate handling real objects in virtual space like other haptic interfaces might, but rather to be used more abstractly as an interface to anything.

RFID Accelerometer


This was one of two RFID accelerometers. Powered entirely from the RFID antenna, the device sends back accelerometer data to rotate a planet on a computer screen. The range was very limited, and the update rate was about 10 Hz. The second device could be charged within range of the field, then taken out of range and moved around, then brought back to the antenna to download a time history (at only 2 Hz now) of the measurements taken. The canonical application is putting the device in a shipped package and then reading what it experienced upon receipt.

Wireless Resonant Energy

This has had plenty of coverage elsewhere but is very cool to see in person. Currently, moving the receiver end more than an inch forward or back, or rotating it, causes the light bulb to dim and go out.

Scratch Interface

A simple interface where placing a microphone on a surface and then tapping on the surface in distinct ways can be used to control a device. There was also a very simple demo of using OpenCV face tracking to reorient a displayed video, correcting for the distortion seen when viewing a flat screen from an angle.

Look around the building

I found myself walking in a circle and not quite intuitively feeling that I had completed a circuit when I actually had.


Also see another article about this, Intel's Flickr photos, and a video.

2008-09-26

initialization discards qualifiers from pointer target type

A const may be needed; the following produces the error:

PixelPacket *p = AcquireImagePixels(image,0,y,image->columns,1,&image->exception);


while this fixes it:

const PixelPacket *p = AcquireImagePixels(image,0,y,image->columns,1,&image->exception);
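For context, that call typically sits inside a row loop something like the following - a sketch assuming the ImageMagick 6 C API (header names vary between versions) and an image already loaded with ReadImage(); the const makes it clear the returned pixels are read-only:

#include <magick/MagickCore.h>

/* Sum the red channel as a stand-in for real per-pixel work; `image` is assumed
   to have been loaded elsewhere with ReadImage(). */
unsigned long scan_pixels(Image *image)
{
    unsigned long x, y, sum = 0;
    for (y = 0; y < image->rows; y++) {
        const PixelPacket *p = AcquireImagePixels(image, 0, y, image->columns, 1,
                                                  &image->exception);
        if (p == (const PixelPacket *) NULL)
            break;
        for (x = 0; x < image->columns; x++)
            sum += p[x].red;   /* read-only access, hence the const pointer */
    }
    return sum;
}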


In other news, hopefully soon I'll have an ARToolKit app for reading in JPEGs using ImageMagick, and that app will also have some other more exciting attributes.

2008-09-08

Increased Dynamic Range For Depth Maps, and Collages in Picasa 3


360 Vision + 3rd Person Composite from binarymillenium on Vimeo.

After I compressed the above video into a WMV I was dissatisfied with how little depth detail there is in the 360 degree vision part - it's the top strip. I initially liked the cleaner single-shade look, but now I realize the utility of using a range of colors for depth fields (or for IR/thermal imaging) - it increases the number of colors available to represent different depths well beyond 256. Earlier I tried using one color channel for the higher-order bits and another for the lower-order bits (so the depth could be computed like red*256+green) for a total of 256*256 depth levels (or even 256^3 or 256^4 using alpha), but visually it's a mess.
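For reference, that earlier two-channel encoding amounts to something like this (a sketch assuming 16-bit depth values):

/* Pack a 16-bit depth into two 8-bit channels and back: depth = red*256 + green. */
void pack_depth(unsigned int depth, unsigned char *red, unsigned char *green)
{
    *red   = (depth >> 8) & 0xff;   /* higher-order bits */
    *green = depth & 0xff;          /* lower-order bits  */
}

unsigned int unpack_depth(unsigned char red, unsigned char green)
{
    return red * 256u + green;
}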

But visual integrity can be maintained while multiplying those 256 levels by five, or a bit more with additional work.

Take six colors: three of them are pure red, green, and blue, and in between them there are (255,255,0) for yellow and the other two pure combinations of two channels. Between each subsequent pair there can be 256 interpolated values, and in the end a color bar like the following is generated, with 1280 different values:





The bottom color bar shows the differences between adjacent values - if any difference were zero it would be black in spots, so my interpolation is verified.
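As a sketch of the interpolation (the anchor ordering here and the exact handling of segment boundaries are my guesses, not necessarily what the actual project does):

#include <stdio.h>

typedef struct { unsigned char r, g, b; } Color;

int main(void)
{
    /* six anchor colors: the three pure channels plus the three two-channel mixes */
    const Color anchors[6] = {
        {255, 0, 0}, {255, 255, 0}, {0, 255, 0},
        {0, 255, 255}, {0, 0, 255}, {255, 0, 255}
    };
    Color ramp[5 * 256];   /* 5 segments x 256 steps = 1280 entries */

    for (int i = 0; i < 5 * 256; i++) {
        int seg = i / 256;                      /* which pair of anchors */
        int t   = i % 256;                      /* position between them */
        Color a = anchors[seg], b = anchors[seg + 1];
        ramp[i].r = a.r + (b.r - a.r) * t / 255;
        ramp[i].g = a.g + (b.g - a.g) * t / 255;
        ramp[i].b = a.b + (b.b - a.b) * t / 255;
    }

    /* map a normalized depth in [0, 1) to a color */
    double depth = 0.42;
    Color c = ramp[(int)(depth * 5 * 256)];
    printf("depth %.2f -> (%d, %d, %d)\n", depth, c.r, c.g, c.b);
    return 0;
}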

Applying this to the lidar data, I produced a series of images with a Processing project:



After making all the images I tried out Picasa 3 to produce a collage- the straightforward grid makes the most sense here. Picasa 3 crashed a few times in the collage editor before I was able to get this exported.