I used to just keep my graphs to myself and only occasionally show videos of the results, but from now on I'll be backing up all the good ones online, suitable for distribution under the GPL.
Here are a couple:
http://code.google.com/p/binarymillenium/wiki/GephexGraphs
This one is cool:
galaxy from binarymillenium on Vimeo.
This is okay:
BW Plasm demo from binarymillenium on Vimeo.
2007-12-29
2007-12-16
Dorkbot show
This went pretty well, and it was low-key- the tent I was in was something of a place to relax, although some of the DJs played more danceable music, and half a dozen or so people might be dancing at any time. I showed off Gephex to anyone who was curious, and would leave effects running unattended for long stretches. Not nearly the same level of attention or effort from me as at my last show, but good enough for the venue.
I feel like I've done everything I want to do in Gephex; I have to start using it differently or try different software, but I haven't found anything free that is as powerful and suitable for live editing of effects.
Things that would help:
Make the joystick input support all the buttons on the Logitech controller- currently only two buttons and one analog joystick are supported, and I'm sure adding the rest is trivial if the module is recompiled. More inputs to make effects more dynamic would allow more fundamentally different effects.
But what really needs to be done, and it's outside the scope of what I can do, is allowing parts of a patch to be selected and reduced to subsystems that can be better organized and combined with other subsystems. Otherwise there's a limit to how complex a graph can become, which contributes to the 'sameyness' of everything I make with it. I think they are working on this and there may be a beta version.
Better random number generation- Perlin noise specifically, plus a more generic IIR number-filter block. I think if I connect a square wave and a random number generator to a flip-flop I can improve the randomness by only infrequently sampling the random number; the feedback option works as a very poor filter and is inadequate. See the sketch below.
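For the filter part, a one-pole IIR plus a gated sample-and-hold is about all such a block would need to compute. A minimal sketch in C++ of what I mean (hypothetical- this is not an existing Gephex module):

// Hypothetical number-filter block: a square wave gates a sample-and-hold
// on the raw random input, then a one-pole IIR smooths the result.
struct NumberFilter
{
    float held;      // last sampled random value
    float smoothed;  // IIR output
    float alpha;     // smoothing factor, 0..1 (closer to 1 = slower)
    bool last_gate;

    NumberFilter(float a) : held(0), smoothed(0), alpha(a), last_gate(false) {}

    // 'gate' is the square wave input; sample only on its rising edge
    float update(bool gate, float random_in)
    {
        if (gate && !last_gate)
            held = random_in;  // infrequent sampling, like the flip-flop trick
        last_gate = gate;
        smoothed = alpha*smoothed + (1.0f - alpha)*held;  // one-pole IIR
        return smoothed;
    }
};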
Radio effect in Audacity
So recently I needed to figure out how to take some dialogue recorded in person and make it sound like it was being heard over the radio. There's no built-in effect for that, but on a bulletin board someone suggested filtering the audio down to roughly the 1000-3500 Hz band, since radios do much the same thing- that gets you in the ballpark, but it misses the distortion, noise, and crackle of a radio. For that I used the 'satan maximizer' effect that comes in the large set of LADSPA plugins that work in Audacity. With a lot more tuning and research on the type of radio I wanted to imitate I could probably do better, but short of that it at least communicates the intention of the effect to the viewer without being perfect.
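The same recipe sketched in C++, just to pin down the idea (this is not what the LADSPA plugins do internally- the band edges, gain, and noise level are guesses to tune by ear):

#include <cmath>
#include <cstdlib>
#include <vector>

std::vector<float> radioize(const std::vector<float>& in, float sample_rate)
{
    const float PI = 3.14159265f;
    // one-pole coefficients: keep roughly the 1000-3500 Hz voice band
    float hp = std::exp(-2.0f*PI*1000.0f/sample_rate);
    float lp = std::exp(-2.0f*PI*3500.0f/sample_rate);
    float lows = 0.0f, band = 0.0f;
    std::vector<float> out(in.size());
    for (size_t i = 0; i < in.size(); i++) {
        lows = hp*lows + (1.0f - hp)*in[i];  // track the low end...
        float x = in[i] - lows;              // ...and subtract it out
        band = lp*band + (1.0f - lp)*x;      // then roll off the highs
        x = band*4.0f;                       // overdrive...
        if (x >  0.3f) x =  0.3f;            // ...and clip hard, a crude
        if (x < -0.3f) x = -0.3f;            // stand-in for the maximizer
        x += 0.02f*(std::rand()/(float)RAND_MAX - 0.5f);  // a little static
        out[i] = x;
    }
    return out;
}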
2007-11-21
Dorkbot - Opening Night Party
This will be my second non-Open Lab show, this year or ever. I need to get to work making a bunch of effects, hopefully mostly new ones that don't all look the same...
Also, a new video from the October Open Lab:
Open Lab - October 2007 from binarymillenium on Vimeo.
2007-10-30
Converting VOBs to AVIs
So I have a few VOBs from a DVD recorder that was recording a live set of video from an Open Lab. The VOBs must be a little screwy- any editing program I try to load them into thinks they're only 18 seconds long. VLC knows how to play them in their entirety, but I really only want to get a few good moments out of it all.
I think transcode is up to the task, with a command like:
transcode -i vts_01_1.vob -y dv -o openlab1.avi
But the video comes out screwy (while the audio is fine).
I googled for the answer and got this gem of a thread as the first hit:
http://linux.derkeiler.com/Newsgroups/comp.os.linux.misc/2004-02/2245.html
where one poster repeatedly asks for details beyond 'go look on google' or 'read the manpage'. I think anyone who responds with 'go look on google', when that thread then for some stupid reason uselessly becomes the number one search result on google, deserves to be summarily shot. I'm only half kidding. Anyway, the original poster responds with queries for details (like a command line) but repeatedly gets the same useless answers, only more vehement.
I haven't figured it out yet, but I'll post an actual command line when I do.
2007-10-26
Connecting to a Server in GWT
A while back I started playing with GWT and created a small game, GWTTiles. It's a fine example, but there's no client-server interaction, so I tried that out a few days ago. The DynaTables example seemed way too complicated, so I looked on the web and got fairly far, but couldn't get the server to work- then finally a comment in a tutorial pointed out that I needed to compile the Java server myself; GWT wasn't going to compile it for me, though once the .class file was placed in the proper directory it would happily run it automatically.
I didn't even have the Java JDK installed- that was necessary to get javac:
javac -cp "$APPDIR/bin:$APPDIR/src:/home/bm/other/gwt-linux-1.4.60/gwt-user.jar:/home/bm/other/gwt-linux-1.4.60/gwt-dev-linux.jar" src/com/binarymillenium/gwt/server/serviceImpl.java
Then the class file had to be placed in bin/com/binarymillenium/gwt/server, and running the -shell script will properly execute it, so my client app can get data from the server (just a text string for now; I need to figure out serializing next).
This is the error I used to get:
"com.google.gwt.user.client.rpc.InvocationException: Unable to find/load mapped servlet class 'com.binarymillenium.gwt.server.serviceImpl'"
And in the development shell window:
[ERROR] Unable to instantiate 'com.binarymillenium.gwt.server.serviceImpl'
java.lang.ClassNotFoundException: com.binarymillenium.gwt.server.serviceImpl
...
2007-09-01
Generating Video
There have been a lot of Arduino video generation projects covered by the Make blog recently; I did my own a couple of weeks ago:
Arduino Video - NTSC from binarymillenium on Vimeo.
Since then I've experimented with upping the resolution by a lot, though memory quickly becomes an issue. Ideally I could have a whole set of different visuals to generate with an Arduino, and a few knobs and buttons to control them, for a cheap VJ device.
Color is out of the question with an Arduino unless hardware components are added to generate much higher frequencies than can be made in software with the standard 20 MHz clock.
So next I'm thinking of trying color video generation with an FPGA. B&W generation on an FPGA seems trivial compared to doing it in software, and many grades of gray could be generated with PWM rather than having to add more resistors and use more digital outputs.
But can a 3.58 MHz color signal be put on top of that successfully?
I have a Digilent Spartan 3 dev-kit that's a few years old; it has a 50 MHz oscillator. I could generate a 50/14 = 3.571 MHz clock from that, with each period 14 cycles long. That means only 13 different phase shifts would be possible, and I'm not sure if a full 360° of phase shift is achievable- it might only be 1/2 or 1/4 of that. And then there's the problem of PWMing those to produce different amplitudes- one half period is 7 cycles. A normal one would have a PWM pattern like 1010101, then 1100110 (or maybe 1100011), and then 1101101 or 1110111 or something else, but it's not clear that any filtering I would do, or that the TV input does, would actually turn that into a readable 3.571 MHz signal. It's worth a shot anyhow.
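A quick back-of-the-envelope check on the phase question, in plain C++ (nothing FPGA-specific): delay the 7-high/7-low square wave by each of the 14 possible clock offsets and pull the fundamental out with a one-bin DFT:

#include <cmath>
#include <cstdio>

int main()
{
    const int N = 14;  // 50 MHz clock cycles per 3.571 MHz period
    const double PI = 3.14159265358979;
    for (int shift = 0; shift < N; shift++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double s = ((n + shift) % N) < 7 ? 1.0 : -1.0;  // square wave
            re += s*cos(2.0*PI*n/N);
            im += s*sin(2.0*PI*n/N);
        }
        printf("shift %2d: amplitude %.3f, phase %7.2f deg\n", shift,
               sqrt(re*re + im*im)/N, atan2(im, re)*180.0/PI);
    }
    return 0;
}

It shows the fundamental's phase stepping through the full 360° in roughly 25.7° increments, so pure delay at least covers the whole circle- the amplitude control through those PWM patterns is the shakier part.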
There are a few color video FPGA projects out there, one on opencores, I'll look into those if my efforts aren't that successful.
Once I can generate a test pattern, I'll want to synthesize a CPU and program it with a game or VJ stuff to generate interesting moving visuals. I don't think I can make a MicroBlaze with the free Xilinx WebPack tools, but maybe something like OpenFire or PicoBlaze?
2007-06-29
WinTV USB with Ubuntu 6.04
I bought a USB TV tuner/S-Video/RCA input device for my laptop, so that I can get video from DV video cameras and any other DV source straight into Gephex. Installation in Linux wasn't too hard:
First get usbvision:
cvs -z3 -d:pserver:anonymous@usbvision.cvs.sourceforge.net:/cvsroot/usbvision co -P usbvision
make; sudo make install
Tvtime doesn't work, but Zapping does:
tvtime:
"Your capture card driver, USBVision USB Video, is not providing
enough buffers for tvtime to process the video."
zapping:
works!
Gephex doesn't know how to change the channels or settings, so the trick is to change things in Zapping and then quit- the settings will persist in Gephex.
2007-06-14
Opticlash 2
This was an interesting event. I didn't go to the first one in 2005, and I had my doubts about the format going in- and I still have my doubts- but overall it was a success in terms of promoting VJing. Also, VJ Scobot won, and I think he did do the best VJing there.
There were three sets of screens at the front of the room; the center set had camera views of the two competing VJs and the outer ones had the video they were outputting. This seemed less than ideal because of the difficulty of viewing both simultaneously- you could only watch one or the other, except from the most distant points or oblique angles.
It's possible the VJs at some points were glimpsing what the other guy was doing and responding in some way, but I think mainly they were just concentrating on their own stuff- which is unfortunate, because the most crowd-pleasing aspect of a competition is any kind of interaction and drama that can be generated between the contestants. It would be great to have one contestant go for a minute or two and then the other go, trying to outdo the first, maybe playing a similar sort of clip, or mocking them somehow, or anything like that- and they would go back and forth a few times.
Opticlash 2 from binarymillenium on Vimeo
I didn't actually see the last set, since the show was going on a bit longer than advertised- the 15-minute sets for the later rounds should have been at most 10 minutes, on schedule or not.
The judges didn't add a lot besides the rote rating judgements they offered; the MC initially wanted some kind of vocal rationale out of the judges, but they were microphone-shy.
Here's a writeup from one of the judges, and he posted some video (Pixelflip vs. ?) on dailymotion.
2007-06-11
VJing Toorcon
Post-mortem thingy:
My setup was to use Gephex on a Linux laptop, with a video camera and DVD player as video sources. Switching between them would have been nicer with a switch box; instead I was reconnecting cables while keeping a graph loaded in Gephex that wasn't dependent on external video input.
I had a couple of newly burned DVDs; one was lots of video game imagery. Biohazard Battle on the Sega Genesis worked decently, but old Atari games, with flatter coloring and graphics, could have been more interesting. Unfortunately my Atari 2600 didn't work after I pulled it out of storage. One of those $20 battery-powered video game joysticks would work well for sourcing the imagery live, but I'd need someone else to play (I recall somebody brought one to Open Lab a few months ago but didn't try it out).
The other DVD had videos from a computer animation contest, one where the executable creating the animation had to be less than 32K. I had to run the DVD player output through an anything-to-anything box from Canopus with FireWire in/out on it.
Four projectors were going with cloned video, and I had an LCD monitor to see what I was doing. None of the projection surfaces in the main room were that ideal but they worked. The better surfaces were out in the hall but I couldn't see them.
Gephex (or the underlying Linux graphics software) is finicky about driving full-screen; sometimes there would be a sliver of the desktop underneath it, and playing around with it usually got rid of that. I spent a long time going through all my graphs, deleting the ones that didn't work and adjusting output settings for the keepers. It would be nice if each individual graph didn't have its own output settings.
Kino plus my screengrab frei0r module was my way of getting 1394 video into Gephex. I could probably write my own 1394 gephex/frei0r input module using source from Kino, but I haven't gotten around to it yet.
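For reference, the frei0r shell itself is tiny. From memory of the frei0r 1.x header (a sketch- double-check frei0r.h before trusting the signatures), a do-nothing filter looks about like this, and an input module would be the same skeleton with F0R_PLUGIN_TYPE_SOURCE and the 1394 grab happening in f0r_update:

#include <frei0r.h>
#include <stdint.h>
#include <cstring>

struct Inst { unsigned int w, h; };

int  f0r_init()   { return 1; }
void f0r_deinit() {}

void f0r_get_plugin_info(f0r_plugin_info_t* info)
{
    info->name = "passthrough";
    info->author = "binarymillenium";
    info->plugin_type = F0R_PLUGIN_TYPE_FILTER;  // or F0R_PLUGIN_TYPE_SOURCE
    info->color_model = F0R_COLOR_MODEL_RGBA8888;
    info->frei0r_version = FREI0R_MAJOR_VERSION;
    info->major_version = 0;
    info->minor_version = 1;
    info->num_params = 0;
    info->explanation = "copies input to output";
}

void f0r_get_param_info(f0r_param_info_t*, int) {}

f0r_instance_t f0r_construct(unsigned int width, unsigned int height)
{
    Inst* inst = new Inst;
    inst->w = width;
    inst->h = height;
    return inst;
}

void f0r_destruct(f0r_instance_t instance) { delete (Inst*)instance; }

void f0r_set_param_value(f0r_instance_t, f0r_param_t, int) {}
void f0r_get_param_value(f0r_instance_t, f0r_param_t, int) {}

void f0r_update(f0r_instance_t instance, double /*time*/,
                const uint32_t* inframe, uint32_t* outframe)
{
    Inst* inst = (Inst*)instance;
    memcpy(outframe, inframe, inst->w*inst->h*4);  // the real work goes here
}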
Ideally there would be a way to get video into Linux so that it is seen as a webcam- a video-to-USB device sounds ideal- then I could get video from a video camera or a DVD player without using FireWire at all. Supposedly there is one from X10 with Linux drivers, and another from Hauppauge, but neither is sold on Amazon or other sites that feel respectable enough to purchase from.
Toorcon 2007 from binarymillenium on Vimeo
Another option would be to run Gephex on the Windows side of my laptop, but unfortunately it came with Vista preinstalled, and Gephex doesn't run so well there.
I would have liked to be able to record the whole show or portions of it. Another RGB-to-video box plus another camera with video input recording (like my old Canon DV camera) would have worked, but that adds a lot of wires. Also, I didn't have a spare VGA splitter output to drive it.
2007-05-28
Europe Has A Mission
There was an ad for this movie in The Stranger. My first thought, naturally, was WTF? Then I wondered if it was 'Command & Conquer: Generals: The Movie', and then I looked it up on IMDB (it wasn't there) and then on google, and found out why: it doesn't exist. It's a joke/commentary/guerrilla-art/etc.
2007-05-01
Amazon S3 for media backup
I've been concerned about backing up my videos and pictures for as long as I've been taking them, but until a week ago I never bothered to think much about a commercial service. I would periodically burn multiple DVDs of the same data, worried that the DVDs would degrade after a few years. I'd also make hard drive backups and then store the hard drives and some of the DVDs offsite.
DVDs are poor for backup as soon as a spindle is full of them- it takes a lot of time to find specific files later on. Hard drives are better, but expensive. Recently I had two hard drives fail in a row- they still sort of work, and I can recover data off them, but they don't work as boot drives and were at times making clunking noises.
Another feature I've been in need of is a way of transferring videos to people, or hosting high-quality videos. Youtube/vimeo/google-video/etc. are low quality, and I don't necessarily want to give the videos to the entire world.
I'd heard about Amazon's S3 service a while back, but it doesn't have a friendly user interface by default- and I wasn't sure about paying for third-party software like jungledrive, and I didn't want to pay another service like that something-else-drive. I've bought things from Amazon before, they have my credit card information already, and I like their pricing scheme. Other services want you to pay an excess- $10 a month for 100 GB is a great deal if you fill up that 100 GB immediately, but poor if you only use 10 GB at first.
Then I found the S3 Firefox plugin. I can't vouch for the security of this plugin, but it seems to work, and the media I'm backing up isn't that security-critical. The plugin allows simple uploading and downloading of files, and setting their access controls.
Over a cable modem, it took me about three days of uploading to get 10 GB of pictures onto S3. This is slow, but once they're uploaded, the rate at which I take new pictures is much lower than when backing up several years of pictures all at once.
At their current pricing this 10 GB will cost $1.50 a month, and I anticipate they'll lower the price per GB as storage prices fall. In the next week I'll probably have 20 GB up, so $3 a month or $36 a year. If I were just storing pictures this would start to compare poorly with flickr or other photo sites, which charge a flat yearly rate for 'unlimited' storage- but I like the raw interface for uploading and the truly unlimited feeling of S3, and I also don't plan on having too many GB of just pictures.
Video is the real hard-drive killer, and unfortunately S3 isn't going to be a good solution for raw off-the-camera DV or HDV or other kinds of video. The files are too big (S3's file size limit is 5 GB, though I can chop up some of the bigger video files), it takes too long to transfer them, and it will cost too much. I could easily have a terabyte of video pretty soon. I suppose it's not all worth backing up; maybe if I keep my S3 total under 50-100 GB I won't mind the cost too much.
For now, I'm just backing up edited, compressed video- WMV files under 100 MB. I might progress to backing up less heavily compressed files later. It's easy to share the files with others: set them to global read-only and email people the link- or if the third party has an Amazon account and gets Firefox + the S3 plugin going, you can allow them and only them to have read or even write access.
2007-04-21
911 Media Open Lab - April 15
911 Media Arts Center Open Lab - 2007.04.15 on Vimeo
I bought a new laptop somewhat recently, and am currently dual-booting between Vista and Ubuntu. Most of my custom software was only set up to run in Ubuntu, but I forgot to figure out getting the S-Video output to work before I brought it to Open Lab- despite all the other ease-of-use advances of Ubuntu, something as simple as configuring an external output is still a pain. So instead I went back into Vista and just messed around with Wings3D, while others manipulated that source video with some hardware video mixers.
The slower bits are edited out, and overall I like most of the scenes in there- even if they don't match up to the (also live-generated) music, I think there are a few underlying ideas that could be developed into more interesting clips:
-Flying over alien landscapes, manipulating them, and a kind of 80s CG flat shading look
-Simple shapes that generate fractals
Wings isn't really meant for live performance. With a little more work I could set up a lot more keyboard shortcuts so the context menus don't show up as much. A more intensive effort would be to make models in Wings, export them to OBJs or something, and have a custom app running on the external monitor that can be triggered to load the model.
2007-04-09
Bones Animation - Re-Acting
Bones - Re-Acting from binarymillenium on Vimeo.
I think I could have done a better job with this video, edited it a little more heavily, but I don't like to get too bogged down with it. I sort of think of these as visual notes to myself; I can refer back to them and recreate the effect I captured in a bigger and more meaningful work.
The main thing making editing more difficult was that I was using image sequences in Premiere- my computer isn't fast enough to actually play back unrendered image sequences (and I was too lazy to render them), so it was hard to get the edits and feel right.
The source imagery is from a code.google project called 'bones': http://code.google.com/p/binarymillenium/wiki/Bones. It's a very simplistic bones-animation implementation, using osg::Nodes and with randomly generated hierarchy and animation. Every vertex in the object loaded for a bone has a weight that mixes (using quaternion slerp) the positions of the parent osg::Node and the child. The weights are automatically generated based on the distance of the vertex from the root of the object, where it joins with the parent.
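The per-vertex blend itself is only a few lines. A minimal sketch of the idea (not the actual bones code- it assumes each bone pose is an osg::Quat rotation plus an osg::Vec3 translation, and that 'weight' was precomputed from the vertex's distance to the joint):

#include <osg/Quat>
#include <osg/Vec3>

osg::Vec3 blendVertex(const osg::Vec3& v, float weight,
                      const osg::Quat& parentRot, const osg::Vec3& parentTrans,
                      const osg::Quat& childRot,  const osg::Vec3& childTrans)
{
    osg::Quat q;
    q.slerp(weight, parentRot, childRot);  // interpolate the rotations
    osg::Vec3 t = parentTrans*(1.0f - weight) + childTrans*weight;
    return q*v + t;  // pose the vertex between the two bones
}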
2007-03-18
VJ/Visualization Blogs
I was randomly searching for something the other day and came across a lot of interesting blogs:
Create Digital Motion
This one has a commercial look, covers a lot of gear too expensive for the casual reader, and in general focuses on gear and other people's work. But it's what showed me Processing. Processing has a horrible name for googleability, which used to be addressed by calling it Proce55ing. Idiotically, they've assumed that because the google search for just 'processing' by itself points to the proper website, they can stop using the distinctive 55 in the name. But what happens when you try combined searches for processing and some other search term? You get tons of sites that use processing as a common English word.
flight404
A personal blog covering projects by the author of the site. He's done some really cool things with Processing and also Griffin PowerMate knobs.
Processing Blogs
Haven't really read this that much, but it looks interesting.
VLOBLIVE
It's a clunky word for 'Very Low Budget Live video for Events'. It's not that applicable to much that I do, and their definition of 'very low budget' is still a lot of money to me. They use other terms like IMAG, for image magnification, which very mundanely just refers to showing a big closeup of an on-stage performer on a big screen. The blog is pretty practical- not about cool art, but it has a lot of helpful stuff.
There was another site linked to from Create Digital Motion but I don't have a link to it in front of me.
I've been playing with Processing, and it is very fast and easy to use. The support for microphone and video input doesn't seem as robust as in Gephex, but the flexibility for making effects and taking data from other sources seems high. It's weird, because a lot of the stuff to do is the sort of thing I'd already do in C++ and OSG rather than Java- but I can understand the appeal to people who don't want to spend hours compiling the latest OSG sources, and a few button and mouse clicks are saved by just typing in code and hitting run. I feel like I should learn it anyway, even though its 3D capabilities are a step back from OSG, if nothing else because there's a higher likelihood of being able to collaborate with other people who are into Processing.
2007-03-17
Vimeo
It's easy to spread yourself too thin by trying to maintain an online presence on a lot of similar community sites for hosting content. They all have different designs, and by virtue of their architecture, or just the vibe the creators and the initial group of users create, they seem to be good for different things. But I'm trying out vimeo because of their focus on user-created content- there are a lot of artists on there, and it sort of has a flickr feel to it- the video quality is pretty good, and the UI is nice in that it disappears when not in use (though it obscures video when in use). I'm just hosting files identical to ones I have on myspace, google video, and youtube, but whatever site feels the most productive I'll probably eventually use to the exclusion of the rest. When someone starts a free 640x480 or HD hosting site I'll probably re-evaluate...
Youtube is the most popular, but video quality is garbage and most of the content is garbage. I imagine the user base is on the more youthful side.
Google video is better quality, though has no real community feel- it's just a faceless generic place to host video. Some day I might be interested in trying to make money off of uploaded video that I own entirely but for now that idea is mostly incompatible with my approach to this sort of thing (a tip jar feature with proceeds donated to a charity of my choosing would be nice though).
Myspace is not primarily a video sharing site, but they have a decent video uploading feature. If you have a myspace page and make video you might as well use their hosting instead of embedding someone else's. It can feel a lot more personal even if your page is not your personal page because of the emphasis on friending.
There's a ton of other hosting sites I won't bother with, mostly since their user base is marginal or they are just a dumping ground for copyrighted material- racier stuff if their content policy is more liberal than youtube's or google's. Others are completely based around making money from uploaded content, pay-per-view basically: get thousands of views and make $10 or so. That sort of goal-based emphasis probably marginalizes a lot of otherwise interesting content and encourages copy-cats, and therefore homogeneity. So does having a view counter or ratings (read slashdot for a few months to see how their 'karma' promotes pointless regurgitation...), but money is a force multiplier.
2007-02-23
Google project hosting
I was getting tired of waiting for shell service to come back to sourceforge, so I searched for blogs talking about it- maybe they knew something I didn't. Instead I found this entry, http://icolor2.blogspot.com/2007/02/sourceforge-downtime.html, which pointed me to google project hosting.
Almost immediately I created my own project, http://code.google.com/p/binarymillenium/, and started uploading code. Initially I thought there was no support for screenshots, but then I realized screenshots could be uploaded to svn and linked to from the wiki- although it automatically creates images for 'jpg' extensions but not 'JPG'.
I'm ready to quit sourceforge entirely and leave all its frustrating user interface problems behind. It's one of those facets of the internet where big sites start up and then become sort of fossilized in whatever web technology was state of the art or popular at the time, and then a new generation rises up and replaces them with superior web stuff- google web toolkit is the current new thing.
2007-02-21
Render-to-texture feedback with OSG
This actually doesn't use any of the screen capture code from the last post (I'm still working out how to best use that)- it just does a good old-fashioned glReadPixels on the current OpenGL screen.
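The core of it is just a few lines. Roughly (a sketch, assuming 'texture' is the osg::Texture2D on the feedback geometry, and the right graphics context is current when this is called after each draw):

#include <osg/Image>
#include <osg/Texture2D>

// grab the frame that was just drawn so the next frame
// can render with its own output
void grabFrame(osg::Texture2D* texture, int w, int h)
{
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->readPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE);  // glReadPixels inside
    texture->setImage(image.get());
}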
OSG Feedback from binarymillenium on Vimeo.
2007-02-19
Screen Capture with wxWindows
For reasons that will become clear when I post some more software and video, I need to have a general method of capturing any part of the screen and using it as a texture in custom applications.
Google searches for 'screen capture', 'capture screen', or 'screen grab' turn up a lot of other people's shareware screen grab programs, and lots of mailing lists where screen captures of one thing or another are provided, but it's hardly ever about the code that did the work. How to find some usable code to do what I needed? The answer turned out to be sourceforge, where I found sseditor:
wxWin32ScreenShot.cpp
I was hoping there would be a platform-independent method where you just ask the video card to give you all the pixels, but it turns out to be windowing-platform specific. This instance used Windows and wxWindows. I compiled wxWindows 2.8.0 for Cygwin and saw that the examples worked. I then copied the screengrab code out of sseditor and put it in a wxWindows sample called 'drawing'. That worked after a little tweaking.
I then took the code and put it into a custom OSG application. There were lots of problems, and I was worried the wxWindows libraries, or just using wxWindows in OSG at all, were causing instability. I then ran into a problem where calling 'CreateCompatibleBitmap' failed after exactly 153 calls, but it turned out I needed to call DeleteObject on the HBITMAP before calling CreateCompatibleBitmap repeatedly- I was using up graphics memory and not freeing it.
Also, I had to call wxInitialize() and wxUninitialize() at the beginning and end of my program.
// Grab the desktop into a wxBitmap via Win32 GDI.
// Globals from the surrounding program: bmp (wxBitmap*), bitmap (HBITMAP),
// tex_width, tex_height, compat_counter, savewximage.
void wxScreenCapture(wxDC& dc)
{
    int sizeX = tex_width;  //GetSystemMetrics(SM_CXSCREEN);
    int sizeY = tex_height; //GetSystemMetrics(SM_CYSCREEN);
    compat_counter++;

    bmp->SetHeight(sizeY);
    bmp->SetWidth(sizeX);

    HDC mainWinDC = GetDC(GetDesktopWindow());
    HDC memDC = CreateCompatibleDC(mainWinDC);

    // free the previous HBITMAP first- without this,
    // CreateCompatibleBitmap fails after exactly 153 calls
    if (bitmap != NULL) DeleteObject(bitmap);
    bitmap = CreateCompatibleBitmap(mainWinDC, tex_width, tex_height);
    if (bitmap == NULL) {
        std::cerr << "CreateCompatibleBitmap failed at "
                  << compat_counter << ", " << mainWinDC << " "
                  << tex_width << " " << tex_height << std::endl;
        exit(1);
    }

    // blit the desktop into the compatible bitmap
    HGDIOBJ hOld = SelectObject(memDC, bitmap);
    BitBlt(memDC, 0, 0, sizeX, sizeY, mainWinDC, 20, 20, SRCCOPY);
    SelectObject(memDC, hOld);
    DeleteDC(memDC);
    ReleaseDC(GetDesktopWindow(), mainWinDC);

    // hand the HBITMAP over to wxWindows
    bmp->SetHBITMAP((WXHBITMAP)bitmap);
    if (!bmp->Ok()) {
        std::cerr << "bmp not ok" << std::endl;
        return;
    }

    if (savewximage) {
        bmp->SaveFile(wxT("/cygdrive/b/text.bmp"), wxBITMAP_TYPE_BMP);
    }
}
My other problem was that I didn't know how to translate between the wxBitmap format of wxWindows and the texture format of the osg::Image. The first way I made it work was by writing the bitmap to disk using a wxWindows function, then loading that bmp as a texture in OSG- this worked, but was hard on the disk drive for high-rate screen capturing, though writing to a ramdisk can fix that.
But the seemingly right way to do it:
/// get the image from the desktop in wxBitmap format,
/// convert it to osg::Image format
wxAlphaPixelData rawbmp(*bmp, wxPoint(0,0),
                        wxSize(tex_width, tex_height));
wxAlphaPixelData::Iterator p(rawbmp);

/// image is an osg::Image
image->allocateImage(tex_width, tex_height,
                     1, GL_RGBA, GL_FLOAT);
float* img_data = (float*)image->data();

for (unsigned i = 0; i < tex_height; i++) {
    for (unsigned j = 0; j < tex_width; j++) {
        int ind = i*tex_width + j;
        // invert the colors in the center quarter of the screen
        // (an effect, not an orientation fix)
        bool flip = ((i > tex_height/4-1) && (i < 3*tex_height/4)
                  && (j > tex_width/4-1) && (j < 3*tex_width/4));
        img_data[ind*4]   = flip ? 1.0 - p.Red()/255.0   : p.Red()/255.0;
        img_data[ind*4+1] = flip ? 1.0 - p.Green()/255.0 : p.Green()/255.0;
        img_data[ind*4+2] = flip ? 1.0 - p.Blue()/255.0  : p.Blue()/255.0;
        img_data[ind*4+3] = 1.0;
        // move bottom-up through the bitmap: wxBitmap rows are
        // top-down while osg::Image rows are bottom-up
        p.MoveTo(rawbmp, j, tex_height-1-i);
    }
}
The flip code seems really weird, but it was necessary.
Anyway, after all that I felt pretty proud of myself for hacking a working screencap together over a couple of days, not knowing anything about windowing toolkits before that (I've always known enough about them to try to avoid them like the plague- there's no uglier code than window GUI widget code).
The only libraries needed for wxWindows are -lwx_base-2.8 and -lwx_msw_core-2.8, though there will be warnings about other wx libs getting auto-imported.
2007-01-24
VJ Night
There might have been about 20 people there to watch the show. Around 8, VJ Scobot started out with his opening set, which was mostly fast edits between a few dozen clips. Some of the clips were recognizable, like from Bad Boys or Team America; others were from anime sources, and some footage was from marginal productions of decades ago. Something from early incarnations of 'The Rocketeer'? Scobot later explained that his software allows mapping clips to keyboard keys (and presumably effects and transitions also?).
After 20-30 minutes of that, the music wound down and Scobot introduced DJ Deeb, the guest VJ Spyscience, and what VJ Night is all about. An interview with Spyscience followed, videotaped by one of the 911 crew (though it's not clear whether the interview will ever be put on the internet or anywhere else; it's just for the archive). Spyscience used some software whose name I forget, which is also clip-oriented but driven by drag and drop, along with an external USB device with sliders and knobs. Most of the clips were provided with the software; one he had made himself. His computer locked up a few times, and Scobot had to take control of the video for a while during rebooting.
After that a Q&A session followed, and the event was over by about 10.
Overall the event was interesting, though as even Scobot admitted, the traditional role of VJ work is to be a side-show, not to be shown in a theater to an audience focused directly on it. I thought the format works, but the VJ sets should be a little shorter, since they can get repetitious. Another nice thing might be a continuous split-screen, one view showing the VJ output and the other focusing on the VJ and what they're doing- although the layout of the space does let the audience effectively look over the shoulder of the working VJ.
I found my personal preference is for visuals that are the opposite of recognizable clips from TV or movies- purely abstract instead. I'd like to see someone do a set with more live-generated visuals (rather than live editing plus effects on a lot of clips).
2007-01-10
Gephex
I created a couple of custom gephex modules (for 0.4.3):
Average
Find the average color or brightness of a framebuffer (maybe add HSV next). The core of it is sketched after this list.
Slow motion
Play back snippets of framebuffer input in slow motion. The music is from ccMixter, William Berry's 'Time To Take Out The Trash'.
The source code (GPLed, of course) and Windows DLLs are provided in those zip files.
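The heart of the Average module is nothing more than a sum over the framebuffer. A rough sketch of the idea (assuming packed 32-bit RGBA pixels- the actual channel order depends on the framebuffer format):

#include <stdint.h>

void average(const uint32_t* frame, int num_pixels,
             float& r, float& g, float& b, float& brightness)
{
    double sum_r = 0, sum_g = 0, sum_b = 0;
    for (int i = 0; i < num_pixels; i++) {
        sum_r += (frame[i]      ) & 0xff;
        sum_g += (frame[i] >>  8) & 0xff;
        sum_b += (frame[i] >> 16) & 0xff;
    }
    r = sum_r/num_pixels/255.0;
    g = sum_g/num_pixels/255.0;
    b = sum_b/num_pixels/255.0;
    brightness = (r + g + b)/3.0f;  // crude luma; HSV would come later
}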
More custom effects are on the way.
New video showing off basic gephex effects:
...but none of those I just created. Maybe next time.