All posts tagged visuals

Photographic Monument: Kentile Floors

On May 3, a Photographic Monument entitled τέλειον ὅλον ("Teleion Holon"), or "Perfect Universe," lit up the Kentile Floors sign in Brooklyn. It was a collaboration between George Del Barrio at The Vanderbilt Republic, Karl Mehrer at K2imaging, and myself.

teleion-holon-v3

George and K2imaging have created a series of these large-scale Photographic Monument projections in the past, and I’ve been involved since the last one.

We created video content that ran 30 minutes in total, using George’s photographs, my video loops and neon recreation, text by Ralph Ellison, Milan Kundera, Gabriel García Márquez, and Henry David Thoreau, and patterns designed by Ed Roth of Stencil1.

Karl is the man with the original dream to make this happen, and the technical expertise to pull it off. He aligned two of K2imaging’s brand-new DPI Titan Super Quad 20,000 lumen projectors on the sign, and the result was beautiful.

K2_Kentile_setup-960

I figured out the geometric distortion needed to map the image onto the sign during our projection test a month earlier. Read about that in this post.

We were interviewed by the New York Daily News beforehand, and got further pre-show coverage in Brooklyn Magazine, Curbed, Gothamist, and Fucked in Park Slope.

Kentile_DanNguyen_1-700

People showed up with their cameras and got some amazing shots. Dan Nguyen shot this image of people viewing the sign from the Smith-9th subway platform, as well as the banner at the top of this post. (Thanks, Dan). People also watched from the streets and passing trains.

Gothamist, Curbed, Gowanus Your Face Off, and Brownstoner have their own photo and video roundups.

Here is photographer Barry Yanowitz's excellent video, with highlights from the night:
(I recommend going full-frame on all of these videos.)

Dave Bunting at King Killer Studios shot this gorgeous time-lapse from their roof:

The official documentation video from The Vanderbilt Republic:

Projection Content

We tailored the program’s themes to the sign, layering words and photos with the idea of text (our canvas was giant letters) in mind, and using video loops to tell an impressionistic story of the sign’s history by the Gowanus Canal.

I also traced the neon tubes in their current broken state, and then reconstructed the complete neon based on the existing pieces, plus visible electrical contact points on the sign:

The sequence with Stencil1’s designs, adapted for the screen:

And the official, remixed, final vision of the full program:

Thanks to everyone who helped out and showed up. Keep an eye out for future Photographic Monuments.

Azimuth Video Installation

Azimuth is a video capture and playback installation with simple wireless controllers. It’s based on the Déjà View Master, modified for the art show “Being and Time.”

The basic layout is a webcam feeding into a Mac Mini running a Max/Jitter patch, controlled by three wireless Arduino-based devices, and displayed on two outputs: a Zenith television on a table, and a projector mounted under the table, projecting onto the floor.

azimuthshow2

The screen normally displays a frozen image of the empty room. As you turn the clock-like controller, you scrub back and forward through a constantly-recording half-hour video buffer. The frames flash on screen as long as you’re moving the hand, but it fades back to the frozen frame when movement stops.

To add an element of unpredictability, it sometimes scrubs through a pre-recorded video of walking to the venue, or operates normally but with the image sliding off the TV and onto the floor via the projector.

Jitter works by using a matrix of pixel information to manipulate video. In Azimuth, the video buffer is accomplished by recording sequentially-numbered Jitter binary files (each matrix file is one video frame) to a hard drive, and separately reading the matrix files back and displaying them. Once the maximum number of frames has been written, the counter resets and writes over the first frames, letting you record in a loop.

Here are the basic components of this operation, set to record one minute at 15 fps (900 frames), and read back and display other frames, chosen by the number box on the right.

maxbufferrecord

Grab this Max patch here

----------begin_max5_patcher----------
976.3oc0XFsbiaBEF9Z6mBFM8RWM.BPhdU5ywNcxfjHNxUVxKBuwa2Ye26Aj
sih2HY4Ta4zKhmHDF94iCmyu4GymEjVuS2Df9CzWPyl8i4yl4ax0vr8OOKXs
ZWVopw2sfpsqS0lfEsu5o5JaSw+ncuhPCw6aF5TQUo15+Jj8MVu0BMY+9Fc6
7ETTYCVfBRUUKCP+09dsQYydtnZ4iFclssiBXj4LRR7BDg4lDTRh6SJNDe76
Uj6kWc5pemE7pLZmTuNnczbkZsWEA+ooPUFbbtMP6Vs4QckJsz2Cr6U+b9b2
GKFKizu.53vnZ067KjfUE1vWJpxqeAsoT88TU1eid.ff1rAQtJD0iR.oChSF
U3wICHXRjCjDgbHdRuXddcY1Zk0TrCwPYOqLnDdDhkfuJ7BF8GaG8KhZrnPJ
iwEI.5hFDcB4GMVLqtr1zN0X2LfCShjThv8eXIi5la2b9AP8ZcSiZo9WXsQq
xQ+FYApMLZ77k1GeGMQI7Pojw4QvYatOjL48AZL98AJYJBFypWmVToQvh4YD
D2fvnvU6dB8vFUdNr3fmwHN72CPH0xkZSChdIbj0KGufnSB.PIDpv4.WkCln
LlLEmrS2Zs0UAWdhrgVvwhPIIFNEBKXh.NNFwYh1U5.mF6I145EgTuQWkCbp
dI5o5x7qRRpQUirKOXvltDS4.OjzA2+Y2wL6Y0ack9PR7EkKmeF2DG9y83P.
CvSHki4IuEXwrAcVP64DC6pRr9RPmUV2nu8IlEQghHHZBRfvfzJQ3DobApsP
We4kID9EmXtyxa01FawSEYJaAjp3CdlrOr4NTd6oFjA1QM1ITKdXpIt6T6+V
stIuRW2DccqzM7OIHdR7v1W.3KlBqFbVc6CB6PGJARkISjNOA7nA8Twuidp9
zXvuK5X7PlLJNlcVC9DJ4SnC+AX8WsgKMpz6Ek6jkLgExSDIdOq33ggr3+OP
9qq0VSMRHtJm1OqwuN9X5T1QvGlmxo3Dusd4xRcvEemPi01Vxwk6v194mI+1
4t1G+XFTBUCO4lx75y09agPS8VS1gEz9KjB8pxx0PU6pi0r+xq+VLD8XmdtH
OW6e+g3k7hFm3x6+RoFqbbWJwY0C8M84VJm3QimIRO3QnmSX3MUOjwvG7joG
uU+yJHeN6ISQhOaJhNNEMcQ07wsoMgg0zwDVymN8v+jsic5lw6qH1DpnSmqd
1ynSVkrwgno7f+XN1eRAu6es0OZs9ViQpMa9l1zreL8RA7Atp01sXg+whp1G
8VtBL5uUbn+scPY.idVvk2VSqoycIBvxILO+b9+Bn2hk4C
-----------end_max5_patcher-----------

There is more math involved in keeping track of the current frame and rolling back through zero.
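If you want the gist of that math without digging into the patch, here's a rough sketch in plain C++ (the buffer size and function names are just illustrative, not anything from the actual patch):

#include <cstdio>

const int bufferSize = 900;        // one minute at 15 fps, as in the example above
int writeIndex = 0;                // the frame currently being recorded

// Called once per recorded frame: advance and wrap, overwriting the oldest file.
void advanceWriteIndex() {
    writeIndex = (writeIndex + 1) % bufferSize;
}

// Scrubbing: step back from the newest frame, rolling through zero when needed.
int playbackIndex(int stepsBack) {
    int idx = (writeIndex - stepsBack) % bufferSize;
    if (idx < 0) idx += bufferSize;  // % can go negative in C++, so wrap by hand
    return idx;
}

int main() {
    advanceWriteIndex();
    std::printf("play frame %d\n", playbackIndex(10));  // ten frames back
}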

The projection on the floor is black, but an image appears where the camera picks up movement. The patch subtracts the 8-bit value of each pixel of the previous frame from the current frame: where the pixels are the same, the value will be zero. Where the value is different, it will light up. It's like velociraptor vision.
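For the curious, the core of that operation is tiny. Here's a minimal C++ sketch of the per-pixel difference, assuming plain 8-bit grayscale buffers (the patch works on Jitter matrices, and mostly on the GPU):

#include <cstdint>
#include <cstdlib>
#include <vector>

// Absolute difference of two 8-bit frames: pixels that haven't changed go to 0
// (black), and anything that moved lights up.
std::vector<uint8_t> frameDifference(const std::vector<uint8_t>& current,
                                     const std::vector<uint8_t>& previous) {
    std::vector<uint8_t> out(current.size());
    for (std::size_t i = 0; i < current.size(); ++i)
        out[i] = static_cast<uint8_t>(std::abs(int(current[i]) - int(previous[i])));
    return out;
}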

azimuthscreen

The button controller swaps the output monitors, putting the frame differencing on the TV screen and the freeze frame on the floor via the projector. The frame differencing is quite popular: the kids love dancing in front of it. But it has a practical function too. Part of the Max patch adds up the value of all the cells in the frame to determine whether anyone is in the room. The more movement, the higher the total value. If the number stays below a set threshold for more than a few minutes, it will assume that the room is empty and update the freeze-frame.
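The empty-room check is essentially a running total plus a timer. A sketch of that logic (the threshold and timeout here are made up; the real values were tuned to the room):

#include <cstdint>
#include <vector>

const long   activityThreshold = 50000;  // hypothetical: tune to the space
const double emptyAfterSeconds = 180.0;  // "more than a few minutes"

double quietSince = 0.0;                 // when activity last dropped below the threshold

// Call once per frame with the difference image and the current time in seconds.
// Returns true when the room has been quiet long enough to retake the freeze-frame.
bool roomLooksEmpty(const std::vector<uint8_t>& diffFrame, double now) {
    long total = 0;
    for (uint8_t v : diffFrame) total += v;  // more movement = bigger total

    if (total >= activityThreshold) {
        quietSince = now;                    // someone's moving; reset the timer
        return false;
    }
    return (now - quietSince) > emptyAfterSeconds;
}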

The other control box has a knob, which switches between five channels. Some channels read back from other matrix “banks,” where the five-digit matrix file begins with a different number. The main video loop is 10000-17200 (add 10000 to the counter value, max of 7200 for 30 minutes at 15 fps), a busy time saved from the show’s opening is 20000-27200 (add 20000), a pre-recorded movie of riding the subway and walking to the venue is 50000-53400, and so on. Another channel adds vertical roll to the live video feed, like an old TV. All adjust the brightness and color in some way.
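In other words, the knob just picks a base number that gets added to the frame counter before the filename is built. Something like this (the channel numbers and default here are my own shorthand, not the patch's):

// Map a knob channel to a matrix-file number, per the banks described above.
int fileNumber(int channel, int frameCounter) {
    int base;
    switch (channel) {
        case 1:  base = 10000; break;   // main half-hour buffer (10000-17200)
        case 2:  base = 20000; break;   // saved bank from the show's opening
        case 3:  base = 50000; break;   // pre-recorded walk to the venue
        default: base = 10000; break;
    }
    return base + frameCounter;
}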

Any controller will take over whatever's happening on screen, and the result of pressing the button or turning the knob will time out and revert to the empty frame if left alone.

azimuthboxes1

The boxes are all handmade oak frames with laser-cut semi-transparent acrylic front and back, echoing the Zenith television.

The big box has a rotary encoder with a clock hand attached, so it can spin continuously in either direction. The encoder is connected to a Teensy 3.0 microcontroller which runs Arduino code. It sends one command repeatedly if turned clockwise, and another command if turned counterclockwise, via a serial connection to an XBee wireless radio and adapter.
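The controller logic boils down to a few lines. Here's a stripped-down sketch (assuming PJRC's Encoder library, the XBee on the Teensy's Serial1 pins, and made-up pin numbers and command characters; the real code is linked further down):

#include <Encoder.h>             // PJRC Encoder library

Encoder knob(2, 3);              // hypothetical encoder pins
long lastPosition = 0;

void setup() {
    Serial1.begin(9600);         // XBee radio on the hardware serial pins
}

void loop() {
    long position = knob.read();
    if (position > lastPosition) {
        Serial1.write('R');      // clockwise: scrub one way
    } else if (position < lastPosition) {
        Serial1.write('L');      // counterclockwise: scrub the other way
    }
    lastPosition = position;
    delay(10);                   // keeps the command repeating while the hand moves
}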

It's powered by a 2,500 mAh LiPo battery (the Teensy and XBee operate at 3.3 V), and uses SparkFun's Wake-on-Shake as a power switch. This device is brilliant. It has an accelerometer, and turns on if there's any movement. It then stays on as long as there is power going to the wake pin, which comes from one of the Teensy pins, programmed to stay on for 15 minutes after the last time the controller's been used.
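The keep-alive part is just a pin and a timer. A sketch of that piece on its own (the pin number is hypothetical, and controllerWasUsed() is a stand-in for "the encoder moved"):

const int wakePin = 10;                                  // hypothetical: wired to the Wake-on-Shake's wake input
const unsigned long stayAwakeMs = 15UL * 60UL * 1000UL;  // 15 minutes
unsigned long lastActivity = 0;

bool controllerWasUsed() {
    return false;                // placeholder: in the real controller, "the encoder moved"
}

void setup() {
    pinMode(wakePin, OUTPUT);
    digitalWrite(wakePin, HIGH); // hold our own power on while awake
}

void loop() {
    if (controllerWasUsed()) {
        lastActivity = millis();
    }
    if (millis() - lastActivity > stayAwakeMs) {
        digitalWrite(wakePin, LOW);  // let the Wake-on-Shake cut power until the next bump
    }
}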

I used a breadboard with power rails removed to hold the Teensy and XBee, since it provides a flush, solid connection. Double-sided tape, industrial strength Velcro, and hot glue keep everything in place. The back panel is held on with machine screws.

boxAinside2

The smaller boxes are similar, but use an Adafruit Trinket for the logic. One has a 10k linear potentiometer, and the other uses an arcade button. Each has a panel-mount power switch on the bottom of the box.

boxBinside

The receiver uses a Teensy 3.1, which relays incoming serial messages from the XBee to the Mac Mini over USB. I’d normally send a serial connection directly into Max, but since this installation needs to run reliably without supervision, I set the Teensy to appear as a standard keyboard. Messages from the controllers are sent as keystrokes, and the Max patch responds accordingly. This also made programming easier, since I could emulate controller action with keystrokes.
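The relay itself is only a few lines. A sketch of the idea (this assumes USB Type is set to Keyboard in the Teensyduino Tools menu and the XBee is on Serial1; the actual receiver code is linked below):

// Pass single-character commands from the XBee through to the Mac as keystrokes.
void setup() {
    Serial1.begin(9600);         // XBee radio
}

void loop() {
    if (Serial1.available()) {
        char c = Serial1.read();
        Keyboard.print(c);       // arrives at the Mac as an ordinary keypress
    }
}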

The receiver is housed in a spare Raspberry Pi case with a hole cut in the top for the XBee. I also added a kill button to stop the patch from running and quit Max by sending another keystroke. The Mac Mini is set to launch the Azimuth patch on startup, so between that and the kill button, no keyboard or mouse is needed from day to day.

Arduino code for the controllers and receiver is here.

azimuthRcv

The Mac Mini connects to the Zenith TV via a series of adapters: Mini DisplayPort to VGA, VGA to RCA, and RCA to RF (on channel 3). The projector is configured as the second monitor, with a direct HDMI connection. I don't recommend working on a complex Max patch on a 640 x 480 screen.

All in all, the installation runs well. Video is stored on a solid state laptop hard drive in a USB 3 enclosure, and most of the video processing happens on the GPU using the Gen object (jit.gl.pix) in Jitter. Some people were tentative when using the controllers, but others dove in and had a good time.

Gloves Video Controller

Six of us at NYU’s ITP Camp decided to follow The Gloves Project’s patterns to build our own gloves in June. These are sensor-laden gloves that can be used to control software through hand gestures. Our group included musicians, a theatrical sound designer, a gamer, and visualists, each with different uses for the glove in mind.

To get an idea of how it can be used with video in a live setting, take a look at this test clip, where I use hand movement to wirelessly control video playback and effects.

Here, sensor values on the glove are sent via Bluetooth to a decoder patch written in Max, and then out as MIDI controller data to VDMX, VJ software. It works!

Gloves have been used as controllers in live performance for some time — see Laetitia Sonami's Lady's Glove for example. Our particular design is based on one created for Imogen Heap to use as an Ableton Live controller, so she can get out from behind a computer or keyboard and closer to the audience. She gives a great explanation and demonstration at this Wired talk (musical performance starts at 13:30).

Heap and The Gloves Project team are into sharing the artistic possibilities of this device with others, as well as increasing the transparency of the musical process, which can be obscured inside a computer. This is an attitude I've believed in since attending Maker Faire and Blip Festival in 2009, where I saw a range of homemade controllers and instruments. I was much more engaged with the artists who made the causal process visible. It doesn't have to be all spelled out, but in certain cases it helps to see the components: the performer is making the things happen. This is obvious with a guitar player, but not so much with electronic music. Also, you get a different creative result by moving your arms than by pressing a button — a violin is different from a piano.

The Gloves Project has a residency program where they’ll loan a pair of gloves to artists, plus DIY plans for an Open Source Hardware version. The six of us at ITP Camp built one right-hand glove each. We had to do a bit of deciphering to figure everything out, but we had a range of skills between us and got there in the end.

Each glove has six flex sensors in the fingers (thumb and ring finger have one each, and index and middle have two each, on the upper and lower knuckle), which are essentially resistors: the more they bend, the less electricity passes through. This can be measured and turned into a number. The sensors run to a tiny programmable ArduIMU+ v3 board by DIYDrones, which uses Arduino code and includes a built-in gyroscope, accelerometer, and magnetometer (a compass), and can take a GPS unit for navigation. The board is mostly used for flying things like small self-guided airplanes, but it also works for motion capture. We make a serial connection to the computer with a wireless Bluetooth device.

Here’s a wiring guide that we drew up.

We had more trouble with the software side of things. The Gloves Project's design is meant to communicate with their Glover software, written in C++ by Tom Mitchell. There are instructions on the website, but we couldn't reach anyone to actually get a copy of the program. In the end, we copied the flex sensor sections of Seb Madgwick's ArduIMU code and used them to modify the ArduIMU v3 code. It delivered a stream of numbers, but we still had to figure out how to turn that into something we could use.

We formatted the output sensor data like this:

Serial.println("THUMB:");
Serial.println(analogRead(A0));
Serial.println("INDEXLOW:");
Serial.println(analogRead(A1));
Serial.println("INDEXUP:");
Serial.println(analogRead(A2));

…and so on. I then programmed a patch in Max to sort it out.

Details:

When one of the sensor names comes through, Max routes it to a specific switch, opens the switch, lets the next line through (the data for that sensor), and then closes the switch. Data goes where we want, and garbage is ignored.
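If it helps to see the routing outside of Max, here's the same idea as a small C++ sketch that reads the labeled stream shown earlier and keeps the latest value per sensor (the structure is mine; the decoder itself is a Max patch):

#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> sensors;   // latest value for each sensor name
    std::string line, pendingLabel;

    while (std::getline(std::cin, line)) {
        if (!line.empty() && line.back() == ':') {
            pendingLabel = line.substr(0, line.size() - 1);   // e.g. "THUMB"
        } else if (!pendingLabel.empty()) {
            try { sensors[pendingLabel] = std::stoi(line); }  // the reading that follows the label
            catch (...) { }                                   // garbage line: ignore it
            pendingLabel.clear();
        }
        // anything else between messages is ignored
    }
    return 0;
}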

Every glove and person is slightly different, so next the glove is calibrated. Max looks for the highest and lowest number coming in, and then scales that to the range of a MIDI slider: 0 to 127. When you first start the decoder, you move your hand around as much as you can and voilà! It’s set.

I made the default starting point for the flex sensor calibration 400 rather than 0, since the readings never drop all the way to 0 while the peak is always above 400; starting in the middle means both the low and high trackers get updated. The starting point for movement data is 0. There's also a "slide" object that smooths the movement so it doesn't jump all over the place while still being fairly responsive.
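Put together, the calibrate-and-scale step looks roughly like this in C++ (the 400 starting point is from above; the slide-style smoothing factor is a guess):

#include <algorithm>

int lowest  = 400;       // default starting point for flex sensor data
int highest = 400;
double smoothed = 0.0;

// Feed in each raw reading; get back a smoothed 0-127 value, MIDI-slider style.
int calibrateAndScale(int raw) {
    lowest  = std::min(lowest, raw);     // track the range seen so far
    highest = std::max(highest, raw);    // (waving your hand around sets this)

    double scaled = 0.0;
    if (highest > lowest)
        scaled = 127.0 * (raw - lowest) / double(highest - lowest);

    // "slide"-style smoothing: move only part of the way toward the new value
    const double slide = 4.0;            // hypothetical smoothing factor
    smoothed += (scaled - smoothed) / slide;
    return int(smoothed);
}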

The number is now sent through a Max “send” object with a different name than the raw sensor data. If you’re keeping everything inside Max, you can just set up a corresponding “receive” object.

Otherwise, it gets turned into a MIDI control or note value, and sent out through a local MIDI device or over a network.

Finally, I tidied everything up so it's usable in presentation mode. Anyone can download the patch and run it in Max Runtime (free).

There are probably more efficient ways of doing this, but it’s our first pass to get things working.

To download all our code, visit https://github.com/timpear/ITP-Gloves/

Since finishing that, I discovered that The Gloves Project has released a whole range of decoders / bridges in various languages. Their ArduIMU code has lots of clever deciphering on the gloves end of things, and the bridges primarily output OSC instead of MIDI, which is handy. Beyond that, The Gloves Project continues to develop new versions of gloves, and are worth checking up on.

Our decoder simply translates the raw sensor data. The next step is to get it to recognize hand gestures, and trigger specific events or adjust values based on that (which is what the Glover software does). We also need to program the glove’s RGB LED and vibration motor for feedback from the computer.

I showed this project to Karl Ward (rock star, Ghost Ghost collaborator, master's student at ITP), and it turns out that he's currently working on an Arduino library to do a lot of this work, only more elegantly, within the controller. The first library is Filter, which he augmented over the summer to require another new library he wrote, called DataStream. He says: "They are both in usable, tested shape, but the API is still in flux. Right now I'm looking for folks who have Arduino code that does its own filtering, or needs filtering, so I can design the API to fit the most common cases out there." We're going to jam.

The glove has all sorts of possible artistic applications, but what else? When I showed it to my dad, he wondered if it could be used as a translator for sign language. Brilliant. It sounds like Microsoft is currently developing software for the Xbox One and new Kinect that will do this, although one advantage of a wearable controller in any case is the ability to get away from a computer (within wireless range). One of the people on our team is going to use it to adjust audio signals while installing sound in theaters. Easier than holding a tablet at the top of a ladder.

Another friend suggested that the glove as demonstrated here could be used for art therapy by people with limited movement. I imagine that something similar is in use out there, but the open-source aspect adds another level of customization and possibility, and again, transparency.

I’m looking to experiment with adjusting specific elements of a video clip with something more organic than a slider or knob, and also be able to interact more directly with a projection. I’ve worked with painter Charlie Kemmerer, creating hybrid painting-projections during Ghost Ghost shows. Charlie works on the canvas with a brush, but even standing beside him, I have to work on an iPad at best. Now I can point directly at the surface while selecting, adjusting, and repositioning clips. Or Charlie could wear it while painting to capture his movement, without it getting in the way of holding a brush.

Creative work reflects the nature of your instrument, so it's exciting to expand the toolset and learn more about the media. Video A-B fades are pretty straightforward, but the way the IMU unit works isn't nearly as predictable as a fader on a board, and I've gotten some unexpected results. That's a good thing.

Even better, I can’t wait to see what other people with these gloves come up with. Tinker, modify, share.

Ross and Cesar Get Married

For their wedding at Littlefield in Brooklyn last month, Cesar and Ross wanted a DIY affair that would involve their friends and families. We came up with the idea of making a documentary-style video snapshot of their life in New York, to be projected on walls throughout the venue that night.

CR-outside

We shot footage of the grooms walking their dog, hanging out at home, riding the subway and bus, getting haircuts, eating at Mission Chinese, hanging out with friends in bars, etc. They’re Instagram fiends, so I had a blast coloring the footage to give it that social media snapshot look.

At the venue, I went for the smallest footprint possible, so packing up at the end of the night would be quick. The main movie was looped on an outside wall in the venue's entrance courtyard. There is a window facing the wall, so I mounted the projector inside, pointing out. For this, I used one of my home-made mounts, plus a mafer clamp, 20″ arm, and knuckle, clamped onto the metal window frame above head level.

CR-rig

I taped a Roku onto the mount, and created a .m4v file with HandBrake to play back off a USB drive. Roku has a limited number of file types that it will play, and that's one of them. Also, the Roku USB player doesn't have a loop function, so I made a version of the 40-minute movie that plays twice, followed by 15 minutes of black with "CESAR & ROSS" text at the end. I just hit "back" and "play" on the remote every hour and a half. I'm going to pick up a Raspberry Pi to do this better. [UPDATE 5/16/14: I got this running.]

CR-wall-words

Inside, I had another projector running from my MacBook Pro, up in the sound booth. From there I could fill the entire side wall of the dining/dancing part of the venue. The wall is a black rubberized cork material, not too shiny and not too light-absorbent. With a bright enough projector (mine’s 3,000 lumens), projecting on a dark surface like that gives you high contrast and looks great.

I made both 1280×720 and 854×480 versions of the movies and loops, to pick from depending on the final projection size. If a particular clip gets shrunk down on the canvas during playback, there's no need to waste pixels with a large file. But if it's filling the screen, I can go for quality. I also compressed everything with VidVox's new Hap codec, which is open-source and decodes everything on the graphics card. Looked great, played great.

CR-touchOSC

In VDMX (my VJ software of choice), I made a 1708 x 960 canvas with quads of layers scaled down to 50% (854 x 480, the smaller media's native size), including some that were doubled up for live blending. This went out through Syphon to MadMapper, and I mapped the quads to different parts of the wall: Quads 1 and 2 were the full 40-minute movie with staggered start times, shrunk to two 4-foot-wide rectangles on the wall. Behind that was Quad 3, a giant wall-sized projection. During dinner this was blank or dim, abstract patterns (subway cars passing in the tunnels, blurry street scenes at night…). Quad 4 was just for text ("CESAR & ROSS") that was projected in a few places on the wall.

Once the dance party started, I cleared the smaller images and went full-wall with movie, loops and text, and some color mixing and effects.

I ran the whole show with TouchOSC on my iPhone (some frame-grabs are pictured here: layer-clip assignment, layer blending, and an RGB effect control). I've learned not to do this when working with bands: people think I'm some asshole texting during the show. But in this case I didn't have to leave the dance floor.

Lower East Side Rig

Just wrapped up the premiere of Lotus Lives in Vermont, which had been in the works for years. Now it’s back from the opera to the Lower East Side with Ghost Ghost.

Club shows in New York require a quick setup and a small footprint. Last night I put the projector and camera together on a tripod, to drop down on the floor by Charlie Kemmerer's canvas when we were on. He paints during most shows, and we've been working on a live video/painting collaboration.

camera-projector-rig

The photo doesn’t do it justice, but Charlie’s beast was on fire (actually a waterfall in reverse, red, plus some shimmering from a mylar balloon). I ran both projectors from an iPad with TouchOSC, through VDMX. It’s good to walk around and not spend the show leaning over a laptop. I’m working on getting rid of screens altogether.

I’ve also learned that it’s a bad idea to run OSC from my phone — more portable, but I look like some asshole who’s texting during the show.

CharlieProjection

Here are a few older pictures of my LES club projector rig in action — it's a 3,000-lumen Optoma projector mounted on a piece of wood, with a baby wall plate, grip head, C-stand arm, and mafer clamp (plus a safety cable when hanging overhead). I've run everything through MadMapper since it was released in May, so I just have to point the projector in the general direction and can square the image up with software.

projecor-shelf

projector-ceiling

New Gadgets for the Show

Ghost Ghost’s main show at SXSW this year was the Ignite Austin/Dorkbot SXSW Interactive opening night event at the Austin Music Hall. It marked the live debut of Ghost Ghost’s new lineup, with weirder, more electronic sound — more synth from Kevin Peckham, Karl Ward creating vocal and percussion loops with Ableton, and the incomparable Mark Christensen on guitar. The fans seem to approve.

SXSW-karl-tunes

Mark couldn’t make it to Texas this year, so he appeared via Skype, on a center-stage laptop. I threw that into the video mix. It worked with the interactive theme, although Mark missed the tacos and beer.

In addition to my usual VDMX rig with Akai MPK49 keyboard to trigger video clips, I built a few new toys in honor of the techie nature of the show. First was a simple foot pedal switch, since I was on stage with the band.

pedal-open

pedal-cu

I used an enclosure and basic stomp switch to create an on-off toggle that closes a circuit coming in through a 1/4″ jack. That way I can connect it to the computer with any instrument cable. The blue LED is independent, and a bit too bright.

On the other end of the cable was a Hagstrom 36-input USB keyboard encoder, with a 1/4″ jack wired to a few of the inputs. Stomping on the pedal sent a simple keyboard signal to the computer, which I assigned in VDMX.

pedal-on-stage

I built a few pedals, but only ended up using one. It started and stopped video recording from my handheld camera, so I could grab snippets throughout the evening and build them up into a collage of loops. I’m still working on getting this to run smoothly on my MBP, but it’s coming along.

The other new toy was Spikenzie Labs's drum kit kit, which is an Arduino-based MIDI encoder for piezos. I just taped the sensors onto the bottom of Karl's drums with blue painter's tape, and used the incoming hit levels to adjust video effect levels.

SXSW-band

After the show we went back to a dot com startup office to finish off some kegs with the organizers. The last time I was at SXSW was in 2000, just before the end of the original dot com boom. Now everyone’s talking about revenue models. So different. But they still have giant beanbag chairs.

Ghost Ghost at SXSW 2011

SXSW Interactive has ended and music is underway. Ghost Ghost had a good show on opening night, playing the Ignite Austin/Dorkbot event at the Austin Music Hall, and I dorked out my visuals to suit. More on that later.

Here is one final Ghost Ghost video dispatch from the road, providing a representative summary of SXSW 2011.

Ghost Ghost SXSW Austin Flier

Finished a video flier for Friday’s SXSWi Ghost Ghost show. Some people throw around buzz words like “robots” and “lasers.” Folks, we’ve got ’em.

— posted from Ghost Ghost Mobile Operations, a black minivan somewhere in the hills of Virginia