Down and Out in the Magic Kingdom by Cory Doctorow has a very permissive license for reuse, so I've gone through the steps of making an audio book with images of the text and putting it on youtube:
There are some issues with text encoding that I mostly plowed through, though I suspect another process for conversion to UTF-8 could have worked better.
First thing is to get rid of some &#45; entities (which I think were dashes) in vim:
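A substitution along these lines should do it (a sketch, assuming the entities appear literally as &#45; in the file):
:%s/&#45;/-/g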
Also replacing tabs with spaces turned out to be necessary.
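The tabs can be handled with another substitution in vim (the number of replacement spaces here is an assumption):
:%s/\t/    /g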
ImageMagick wouldn't do automatic line breaks for me later in this process (though pango might have worked), so adding line breaks to keep lines under 80 characters was necessary:
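Something like fold can do the wrapping (the filenames here are placeholders):
fold -s -w 79 book.txt > book_wrapped.txt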
There were still some odd question marks generated by convert in the text; I hand-edited to get the worst one out, the one that would have appeared on the title of the book.
Next thing was to split the book at every blank line into roughly 1500 text files, each of which should be short enough to show in a single image:
csplit -f down -b '%05d.txt' ../*.txt '/^$/' '{*}'
Next is the conversion of each of the split text files into HD png files:
for i in *.txt;
do convert -background black -fill white -size 1920x1080 -pointsize 45 -gravity center label:"$(<$i)" PNG8:"$i.png";
done
And then generate wave files from each of the 1500 text files:
for i in *txt; do pico2wave -w $i.wav "$(<$i)"; done
Videos are then created by putting the png images together with the wave files; this part is very similar to the process in http://binarymillenium.com/2013/07/turn-set-of-mp3s-into-static-image.html
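Roughly, a loop like the following (a sketch using the same sort of avconv invocation as in that post; the exact options may differ):
for i in *.txt;
do avconv -loop 1 -r 1 -i "$i.png" -i "$i.wav" -c:v libx264 -c:a aac -strict experimental -shortest "$i.mp4";
done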
Some conversions result in 0 length mp4s with this error:
[buffer @ 0x8959e0] Invalid pixel format string '-1'
This turned out to be caused by some of the convert png images being 16-bit instead of 8-bit (why wasn't it consistent, when most were 8-bit?), but putting PNG8: into the convert command line fixed this.
Create a text file listing of all the mp4 files:
rm all_videos.txt
for i in *mp4;
do
echo $i
echo "file '$i'" >> all_videos.txt
done
And concatenate all the mp4 files together into one giant 6 hour video with no recompression (only 500MB though):
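The concat demuxer with stream copy does this sort of thing (the output filename is a placeholder):
ffmpeg -f concat -i all_videos.txt -c copy down_and_out_audiobook.mp4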
As I understand it, the proper use of catkin is to create a catkin workspace for all the standard ROS stuff, build and install it ( ./src/catkin/bin/catkin_make_isolated --install ), source the install setup.sh from that install ( source ~/ros_catkin_ws/install_isolated/setup.bash ), and then go on and create a new catkin workspace to actually do development in. Otherwise the build times will be ridiculous if catkin has to traverse 250 packages.
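In command form that workflow looks roughly like this (the development workspace path is an assumption):
cd ~/ros_catkin_ws
./src/catkin/bin/catkin_make_isolated --install
source ~/ros_catkin_ws/install_isolated/setup.bash
# separate workspace for actual development
mkdir -p ~/catkin_ws/src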
Gazebo
Since the core gazebo isn't a ros package (yet?) it ought to be built separately following the instructions on http://gazebosim.org/wiki/2.0/install .
I ran into this error near the end of the build:
[ 99%] Building CXX object interfaces/player/CMakeFiles/gazebo_player.dir/GazeboDriver.cc.o
In file included from /home/lwalter/other/gazebo_source/gazebo/interfaces/player/GazeboInterface.hh:26:0,
                 from /home/lwalter/other/gazebo_source/gazebo/interfaces/player/GazeboDriver.cc:25:
/home/lwalter/other/gazebo_source/gazebo/interfaces/player/player.h:22:38: fatal error: libplayercore/playercore.h: No such file or directory
 #include <libplayercore/playercore.h>
So install libplayer-dev? No, that is a different player. I had libplayerc3.0-dev and libplayerc++3.0-dev installed already, and the file in question was located in /usr/include/player-3.0/libplayercore/playercore.h but gazebo wasn't seeing it.
I'm sure I could have done this cleaner, but I just hand-edited interfaces/player/CMakeLists.txt:
I got a lot of these warnings, but the build made it to 100% (I haven't fully tested yet, so they may yet cause problems):
/usr/bin/ld: warning: libboost_system.so.1.49.0, needed by /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libsdformat.so, may conflict with libboost_system.so.1.53.0
The post install bashrc instructions are not quite what is on the gazebo install page, I had to do this:
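Roughly this sort of thing (a sketch assuming the /home/lwalter/other/install prefix used elsewhere in this post; the exact lines may differ):
# paths assume the local gazebo install prefix, not /usr/local
export PATH=/home/lwalter/other/install/bin:$PATH
export LD_LIBRARY_PATH=/home/lwalter/other/install/lib:$LD_LIBRARY_PATH
# the setup.sh location may vary by gazebo version
source /home/lwalter/other/install/share/gazebo/setup.sh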
Something went wrong in the ros libstage package; it never generated a config.h from ros_catkin_ws/src/stage/config.h.in ( https://github.com/rtv/Stage/blob/master/config.h.in ) - possibly this was due to not having the environment variables pointing at gazebo correctly.
[ 10%] Building CXX object libstage/CMakeFiles/stage.dir/gl.o
[ 12%] Building CXX object libstage/CMakeFiles/stage.dir/logentry.o
/home/lwalter/other/ros_catkin_ws/src/stage/libstage/file_manager.cc:5:45: fatal error: config.h: No such file or directory
 #include "config.h" // to get INSTALL_PREFIX
compilation terminated.
make[2]: *** [libstage/CMakeFiles/stage.dir/file_manager.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 14%] Building CXX object libstage/CMakeFiles/stage.dir/model.o
/home/lwalter/other/ros_catkin_ws/src/stage/libstage/model.cc:141:45: fatal error: config.h: No such file or directory
 #include "config.h" // for build-time config
compilation terminated.
make[2]: *** [libstage/CMakeFiles/stage.dir/model.o] Error 1
make[1]: *** [libstage/CMakeFiles/stage.dir/all] Error 2
make: *** [all] Error 2
<== Failed to process package 'stage':
 Command '/home/lwalter/other/ros_catkin_ws/install_isolated/env.sh make -j4 -l4' returned non-zero exit status 2
Reproduce this error by running:
==> cd /home/lwalter/other/ros_catkin_ws/build_isolated/stage && /home/lwalter/other/ros_catkin_ws/install_isolated/env.sh make -j4 -l4
The really ugly hack solution is to create config.h by hand:
vi /home/lwalter/other/ros_catkin_ws/src/stage/libstage/config.h
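Filling it in with guesses at what config.h.in expects (INSTALL_PREFIX is what the failing includes were after; the version string here is just an assumption):
cat > /home/lwalter/other/ros_catkin_ws/src/stage/libstage/config.h << 'EOF'
/* hand-written stand-in for the cmake-generated config.h */
#define VERSION "4.1.1"  /* assumed Stage version */
#define INSTALL_PREFIX "/home/lwalter/other/ros_catkin_ws/install_isolated"
EOF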
That much worked, though those values may cause problems later if not correct.
Telling ROS about Gazebo
(I didn't discover the gazebo bashrc instructions were wrong until after going through these steps, so these steps probably aren't necessary.)
==> cmake /home/lwalter/other/ros_catkin_ws/src/gazebo_plugins -...
CMake Error at CMakeLists.txt:40 (find_package):
  By not providing "Findgazebo.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "gazebo", but
  CMake did not find one.

  Could not find a package configuration file provided by "gazebo" with any
  of the following names:

    gazeboConfig.cmake
    gazebo-config.cmake

  Add the installation prefix of "gazebo" to CMAKE_PREFIX_PATH or set
  "gazebo_DIR" to a directory containing one of the above files. If "gazebo"
  provides a separate development package or SDK, be sure it has been
  installed.

-- Configuring incomplete, errors occurred!
<== Failed to process package 'gazebo_plugins':
 Command '/home/lwalter/other/ros_catkin_ws/install_isolated/env.sh cmake /home/lwalter/other/ros_catkin_ws/src/gazebo_plugins -DCATKIN_DEVEL_PREFIX=/home/lwalter/other/ros_catkin_ws/devel_isolated/gazebo_plugins -DCMAKE_INSTALL_PREFIX=/home/lwalter/other/ros_catkin_ws/install_isolated' returned non-zero exit status 1
Reproduce this error by running:
==> cd /home/lwalter/other/ros_catkin_ws/build_isolated/gazebo_plugins && /home/lwalter/other/ros_catkin_ws/install_isolated/env.sh cmake /home/lwalter/other/ros_catkin_ws/src/gazebo_plugins -DCATKIN_DEVEL_PREFIX=/home/lwalter/other/ros_catkin_ws/devel_isolated/gazebo_plugins -DCMAKE_INSTALL_PREFIX=/home/lwalter/other/ros_catkin_ws/install_isolated
Command failed, exiting.
It can't find gazebo, so run cmake-gui . in ros_catkin_ws/build_isolated/gazebo_plugins and set gazebo_DIR to
/home/lwalter/other/install/share/gazebo/cmake
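The same setting can be made without the GUI by passing the cache variable on the cmake command line, e.g.:
cd /home/lwalter/other/ros_catkin_ws/build_isolated/gazebo_plugins
cmake . -Dgazebo_DIR=/home/lwalter/other/install/share/gazebo/cmake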
SDFormat
Now it looks like the Debian-supplied sdformat is conflicting with the one gazebo built, so uninstall it and rebuild the ros_catkin_ws:
cd /home/lwalter/other/ros_catkin_ws/build_isolated/gazebo_plugins
cmake-gui .
SDFormat_DIR needs to be set to /home/lwalter/other/install//lib/x86_64-linux-gnu/cmake/sdformat
Have to set the above for several packages.
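As with gazebo_DIR, this can be done from the command line instead of cmake-gui, e.g.:
cd /home/lwalter/other/ros_catkin_ws/build_isolated/gazebo_plugins
cmake . -DSDFormat_DIR=/home/lwalter/other/install/lib/x86_64-linux-gnu/cmake/sdformat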
RVIZ build problems with libshiboken
Linking CXX shared library /home/lwalter/other/ros_catkin_ws/devel_isolated/rviz/lib/libdefault_plugin.so
[ 95%] Built target default_plugin
make: *** [all] Error 2
<== Failed to process package 'rviz':
 Command '/home/lwalter/other/ros_catkin_ws/install_isolated/env.sh make -j4 -l4' returned non-zero exit status 2
Reproduce this error by running:
==> cd /home/lwalter/other/ros_catkin_ws/build_isolated/rviz && /home/lwalter/other/ros_catkin_ws/install_isolated/env.sh make -j4 -l4
Investigate this with make VERBOSE=1
...
type 'QX11EmbedWidget' is specified in typesystem, but not defined. This could potentially lead to compilation errors.
Segmentation fault (core dumped)
make[2]: *** [src/python_bindings/shiboken/librviz_shiboken/librviz_shiboken_module_wrapper.cpp] Error 139
make[2]: Leaving directory `/home/lwalter/other/ros_catkin_ws/build_isolated/rviz'
make[1]: *** [src/python_bindings/shiboken/CMakeFiles/rviz_shiboken.dir/all] Error 2
make[1]: Leaving directory `/home/lwalter/other/ros_catkin_ws/build_isolated/rviz'
make: *** [all] Error 2
  Add the installation prefix of "GeneratorRunner" to CMAKE_PREFIX_PATH or set
  "GeneratorRunner_DIR" to a directory containing one of the above files. If
  "GeneratorRunner" provides a separate development package or SDK, be sure it
  has been installed.
Call Stack (most recent call first):
  src/python_bindings/shiboken/CMakeLists.txt:9 (include)

CMake Warning at /home/lwalter/other/ros_catkin_ws/install_isolated/share/python_qt_binding/cmake/shiboken_helper.cmake:41 (message):
  Shiboken binding generator NOT available.
Call Stack (most recent call first):
  src/python_bindings/shiboken/CMakeLists.txt:9 (include)

SIP binding generator available.
Python binding generators: sip
Configuring done
But the packages all build and install now.
Misc
Next, try out building the catkin workspace with the projects I'm working on; the first thing missing appears to be the joy package, so clone it and rerun the catkin make install in the main ros catkin ws:
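Something like this (joy lives in the joystick_drivers repository; the clone URL is an assumption):
cd /home/lwalter/other/ros_catkin_ws/src
git clone https://github.com/ros-drivers/joystick_drivers.git
cd ..
./src/catkin/bin/catkin_make_isolated --install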
What I don't understand about re-running ./src/catkin/bin/catkin_make_isolated --install is how much stuff has to be re-done even when nothing or very little has changed. Object files are correctly recognized as already compiled, but something high level gets dirtied and many shared libraries and scripts have to be rerun to presumably generate the exact same output files that were already generated.
Around 10 years ago I was working on a number of personal software projects with a mostly common C++ code-base that had a lot of boilerplate OpenGL and vector classes I'd built up from reading the NeHe tutorials. Some of that work was properly documented, put into source control, and made public; the rest were periodically made into version-numbered tarballs. When I finished or lost interest in developing some graphics technique or physics simulation or anything else, I would rename the directory to reflect the new project and start on new functionality: some of the old code was still useful, some of it had to get ifdeffed out, and some just sat unused. Some of those projects were documented but not open-sourced, and a few of those tarballs were archived in my online home directory. Eventually a lot of the code was superseded by vastly superior open source libraries so it didn't make sense to continue using it, but I would sometimes make backups of the old stuff on DVD and copy them to multiple hard drives as I bought them, with less and less care as time went by.
Fast forward to the present: reading a section of Planet Google about StreetView got me thinking about a particular project where I was driving around Seattle with a DV camera mounted on the passenger side and a GPS on my roof being logged on a laptop. I'm pretty sure I was inspired by reading about the Aspen Movie Map in the Howard Rheingold book Virtual Reality.
Some OpenGL software loaded the images extracted from the video and then displayed them on top of a 3D GPS trajectory. It worked fine, but I only ran it once, took no screenshots or videos, and told no more than one or two people about it. Maybe I thought it was such a good idea it had to be kept secret until the opportunity to capitalize arose; obviously that opportunity is now long past. But it still was fun to have done, and having it run again would be cool... except I couldn't find it on any of my still-running desktop computers or laptops. Eventually I found a 250GB Maxtor drive in a shoebox and plugged it in with a usb-to-sata adapter, and there it was: 700 megabytes of video and images all nicely organized along with scripts and source code. And it compiled: after resolving the SDL dependencies the only thing I had to do was move the -lGL etc. linker options to come after the listing of object files: $(CXX) -o $(PROGRAM) $(OBJECTS) $(LIBS) instead of $(CXX) -o $(PROGRAM) $(LIBS) $(OBJECTS). It ran fine with ./gpsimage --gps ../capture_10_22_2004.txt --bmp biglist.txt, and with some minor modification to the keyboard controls and the resolution I was able to take screenshots and a video:
Ballard surface streets
Ballard surface streets
Exiting the tunnel to get on the viaduct
Driving south on the 99 viaduct looking west
Implementation
It might be nice to actually check in some of the code to github or something, but for now I'll document the important parts here.
I used dvgrab to extract video from the camera, and converted that to decimated timestamped bmp images. The text GPS log looks like this:
A few other old projects could be revived, though some have more obscure dependencies (paragui and maybe another opengl gui). It's not a high priority but it would be nice to create better records now than wait even longer for more bitrot to set in, and I have a restored interest in low-ish level OpenGL so it would be nice to get refreshed on the stuff I've already done.
I wanted to take a directory full of mp3s, in this case a bunch of Creative Commons Attribution licensed tracks from Kevin MacLeod (http://incompetech.com/music/), and make videos that simply have the artist name and track name, and moreover string many of those videos together into a longer compilation; the Linux bash script to do this follows.
It seems like ffmpeg fails to concatenate after the videos reach an hour in length; I would get a segfault at that point. The music and video were also getting unsynchronized, which causes the titles to run longer than the music does; I'll have to look into that more.
Make title image videos from a directory of mp3s:
mkdir output
rm output/*
for i in *mp3;
do
convert -background black -fill white \
-size 1920x1080 -pointsize 80 -gravity center \
label:"Kevin Macleod\n\n`echo $i | sed s/.mp3//`" output/"$i.png"
# TBD replace with ffmpeg
avconv -loop 1 -r 1 -i output/"$i.png" -c:v libx264 -i "$i" -c:a aac -strict experimental -shortest output/"$i.mp4"
done
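Then the per-track videos can be strung together with the concat demuxer (a sketch; the output filename is a placeholder, and this is where the over-an-hour segfault shows up):
rm -f all_videos.txt
for i in output/*.mp4;
do
echo "file '$i'" >> all_videos.txt
done
ffmpeg -f concat -i all_videos.txt -c copy compilation.mp4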
Draw sound waveforms with a mouse, then play the sounds with keys that vary in pitch. The frequency and phase spectrum can also be manipulated in the same way.
Mostly I want to create crude chiptunes sound effects, which it can do pretty well; I think it needs more layering/modulation capability to be a bit more useful. Also, most of the interesting frequencies are very near the left-hand fifth of the frequency plot, so an ability to zoom there and on the time waveform would be very useful - maybe doubling or tripling the amount of horizontal resolution devoted to the plots would be nice as well.
The mouse drawing code is pretty crude, it can't even interpolate between two different sampled mouse y positions yet.
I used Processing and the minim sound library, which didn't directly support manipulation or viewing of phase information. The trick was to subclass FFT like this: