Maybe not yet…

Even though in my previous post I considered the option of putting the old Andiamo code to rest, the next few days showed me that it still has quite a lot of value, at least for my particular interest in live drawing. In this sense, Andiamo doesn’t attempt to be a general framework for this kind of application, but rather an ongoing experiment in the use of certain techniques for generating animations in combination with video and text. Here I tend to understand animation in terms of gestural movements of the line, as well as in the classical sense of “cel animation”. I’m also interested in exploring some ideas about how the graphical interface for such an application should look, particularly under the constraints of live performance. Anyway, the code is available under the GNU General Public License, so if it can be of any use to other people, well… better still.

So, one week of coding later and Andiamo looks quite nice, with a completely redesigned interface for the drawing and video layers. The UI was built with the excellent controlP5 library by Andreas Schlegel. A MIDI controller can also be used to input certain parameters such as video mixing, volume, line speed, etc. The MIDI support was possible thanks to The MidiBus library. I also evaluated the proMIDI library, but it crashed when the Scene selector button on the nanoKONTROL was pressed, which didn’t happen with The MidiBus.
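Just to give an idea of the wiring, here is a minimal sketch, not Andiamo’s actual code: the parameter names and the CC numbers for the nanoKONTROL faders are made up for the example, and it assumes the classic positional addSlider signature from controlP5 and the device indices printed by MidiBus.list().

```java
import controlP5.*;
import themidibus.*;

ControlP5 cp5;
MidiBus midi;

// controlP5 automatically plugs a slider into a field with the same name.
float lineSpeed = 5;
float videoMix  = 0.5;

void setup() {
  size(640, 480);

  cp5 = new ControlP5(this);
  // name, min, max, default, x, y, width, height
  cp5.addSlider("lineSpeed", 0, 10, 5,   20, 20, 200, 14);
  cp5.addSlider("videoMix",  0, 1,  0.5, 20, 40, 200, 14);

  MidiBus.list();                  // prints available MIDI devices to the console
  midi = new MidiBus(this, 0, 1);  // input/output device indices from the list above
}

void draw() {
  background(0);
  // lineSpeed and videoMix would drive the drawing and video layers here.
}

// The MidiBus calls this when a control-change message arrives, e.g. from a fader.
void controllerChange(int channel, int number, int value) {
  if (number == 2) lineSpeed = map(value, 0, 127, 0, 10);
  if (number == 3) videoMix  = map(value, 0, 127, 0, 1);
}
```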

[Screenshot: andiamo.Main]

There are still no install packages, but the source code is available in the project’s SVN repository on SourceForge. The only missing feature with respect to the versions originally used in Latent State is video tracking. So far I haven’t found a satisfactory solution for the detection and tracking of feature points, and unfortunately the proGPUKLT library doesn’t work on my machine (it crashes with some mysterious graphics driver error, and I have no time to look for a fix right now). The most promising direction for adding video tracking seems to be the JavaCV wrappers for OpenCV 2 being developed by Samuel Audet.
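As a reference for later, detecting feature points through those wrappers would look roughly like this. This is only a sketch I haven’t run: the wrappers mirror the OpenCV C API, the package names have moved between JavaCV releases (I’m assuming the com.googlecode.javacv.cpp namespace here), and the file name is just a placeholder.

```java
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;

public class FeatureSketch {
  public static void main(String[] args) {
    // Load one frame in grayscale and allocate the scratch images the detector needs.
    IplImage gray = cvLoadImage("frame.png", CV_LOAD_IMAGE_GRAYSCALE);
    IplImage eig  = cvCreateImage(cvGetSize(gray), IPL_DEPTH_32F, 1);
    IplImage tmp  = cvCreateImage(cvGetSize(gray), IPL_DEPTH_32F, 1);

    int maxCorners = 200;
    CvPoint2D32f corners = new CvPoint2D32f(maxCorners);
    int[] cornerCount = { maxCorners };

    // quality level 0.01, minimum distance 10 px, 3x3 block, no Harris
    cvGoodFeaturesToTrack(gray, eig, tmp, corners, cornerCount,
                          0.01, 10, null, 3, 0, 0.04);

    for (int i = 0; i < cornerCount[0]; i++) {
      corners.position(i);
      System.out.println(corners.x() + ", " + corners.y());
    }
  }
}
```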

Andiamo offers a number of gesture types, most of them adapted from the original Yellowtail code by Golan Levin. Now, I just discovered the GML4U library by Jerome Saint-Clair, which adds support for something called the Graffiti Markup Language. I don’t know much about this yet, but it seems this language allows custom gesture types to be defined in XML-based files, with full support for animation and various transformations. So it could be a good idea to incorporate GML support in the future.
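From what I’ve seen so far, a GML file is just a plain XML document describing strokes as lists of timed, normalized points; something roughly like this (a hand-written example for illustration, not taken from GML4U):

```xml
<gml spec="1.0">
  <tag>
    <drawing>
      <stroke>
        <!-- points use normalized coordinates plus a timestamp in seconds -->
        <pt><x>0.10</x><y>0.20</y><time>0.00</time></pt>
        <pt><x>0.35</x><y>0.42</y><time>0.15</time></pt>
        <pt><x>0.62</x><y>0.58</y><time>0.31</time></pt>
      </stroke>
    </drawing>
  </tag>
</gml>
```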
