All posts in Visuals

Lotus Lives New Video Player

Earlier this year, I upgraded the programming and staging options for my projections in Su Lian Tan’s opera, Lotus Lives. It had been running on the live performance video software VDMX (which I love), but I wanted to create a more customized setup with an interactive cue sheet and single-button operation. I made a patch in Max, and ran it during a performance in Boston without a hitch.

lotusstage

The cue sheet is a table (Max jit.cellblock object) with the columns:

cue number
description
measure number (in the score)
cue notes (when to trigger the next cue)
duration
active screens
whether the media is a still image or movie

Here is my documentation for the video playback:

Overview

Lotus Lives is a chamber opera for two singers, a brass quintet, and percussion.

Video plays throughout the performance, sometimes acting as the set, and other times taking center stage.

It is designed to be flexible. A basic concert performance uses only one screen plus audio playback, while the full staging uses multiple projectors with video mapped onto 12 surfaces; versions of intermediate complexity are also possible. It should adapt to fit the performance space.

The video is broken into sections ranging from 30 seconds to 5 minutes long. The end of each section has a tail of extra video, which plays until the video operator launches the next clip. This way, the video remains in sync with the live performers, who don’t have to adjust their timing to the technology.

The playback software has two parts: the Controller and the Player.

The Controller is like a smart remote control, operated from a single computer to trigger the cues. The Player is the program that actually plays the media clips for projection.

Both can be on the same computer, or it is possible to have Players on multiple computers, one for each projector, controlled from a single Controller over a network.

The software is written in a program called Max (Max/MSP/Jitter). If projecting onto multiple surfaces from one projector, additional video mapping software is needed. Technical details about the software and mapping are below.

It’s also possible to run this media on other performance playback software (Isadora, Resolume, VDMX, modul8, etc.), in which case the fade timing would need to be set according to the cue list.

The Set

stagemodel

lotusmirrorc

The video surfaces are:

(A) a large central screen above or behind the performers.
(B) four banners on either side of this screen (for a total of eight).
(C) a dressing room “mirror” set piece (best as rear-projection).
(D) projection across the width of the stage, onto a handheld scrim during the ballet sequence, and onto the performers as a lighting effect at other times.
(E) projection onto the walls and ceiling of the performance space, to fill the venue with rippling light during the climax of the Folktale.

The video is meant to be immersive, and the size and placement of the surfaces can be tailored to each production. The only things that need to be maintained are the aspect ratio of each surface and the relative spacing of the banners.

The aspect ratios are:

(A) 1.78:1 (16:9)
(B) 1:4 for each banner, to be spaced 1/2 the banners’ width from each other, four on each side of surface A.
(C) 1.14:1, which is a 1:1 square with an additional border on the left, right, and top.
(D) 4:1. The handheld scrim should be a white or gray mesh suitable for projections, about 7′ high and the width of the stage, or at least 30′.
(E) This is an abstract rippling texture meant to fill as much of the performance space’s ceiling and walls as possible at one point during the Folktale. While the source movie is 16:9, the projected aspect ratio does not matter.

The Media

The “media” folder contains QuickTime movies and audio for playback. These use the ProRes 422-LT codec, which has a lower data rate than the master clips (saved as ProRes 422-HQ) but maintains quality.

There is also an audio folder containing .aif files, to be updated with recordings by the performers. See “Setting Up Audio Clips” below for details.

There are four versions of the video, which are configurations for different projection setups.

V1: This is for running projections from multiple networked computers. There is one screen per video, with the exception of surface B.

For surface B, all eight banners are composited onto this video, so it will need to be sliced up with mapping software.

V2: This is the version for one screen only. Critical elements that would be lost by eliminating surfaces B-E are included on this single, main-screen video.

V3: This has all the surfaces composited onto one large movie, to be mapped onto multiple screens from one projector, or multiple projectors from one computer.

V4: This is surfaces A and E composited into one movie, since it’s likely that a single projector can be used for both surfaces. Mapping is required.

I have prepared and included MadMapper files for V1-B, V3, and V4.

Setting Up the Computer

While Max runs on Mac or Windows, I have only tested this patch on a Mac. Additionally, the output for mapping with MadMapper uses a Mac-only framework called Syphon.

You will need to install:

– Max (version 7 or later)
Max is free if you’re only using it to run patches like the Lotus Controller and Player; a paid license is only needed for saving changes after the free trial period.

– Apple ProRes codec
Probably installed on any Mac with QuickTime; also available for download from Apple.

If mapping the video output:

– Syphon for Jitter
Syphon is a Mac OS framework that allows multiple applications to share the same space in video memory, without a hit to performance. This is how the video gets to the mapping software.

To install Syphon, unzip the package, then move the Syphon folder into Users/[user]/Documents/Max 7/Packages

– Mapping software of choice
I use MadMapper.
It does require a paid license, but it’s easy to use and runs beautifully. There are other options (search for “projection mapping software”). Max can also handle mapping, although this Player isn’t set up for it.

Setting Up Audio Clips

In addition to movies 301 and 401, which have stereo audio tracks, there are four more separate audio clips that will play back in sync with the video. These are recordings of the performers, and need to be prepared for each production.

The reference folder of the Lotus hard drive contains QuickTime movies of the subtitled narration, which can be read karaoke-style for exact timing.

Once the new audio files are placed in the media/audio folder of the playback drive, with the specified file name, the Player will play them back at the correct point during the performance.

The Lotus Player

This runs the video and audio for Lotus Lives, controlled by the Lotus Lives Controller. It should be on the computer that’s hooked up to the projector.

Double-clicking Lotus Player.maxpat will launch Max, and open the Player.

lotusplayer2

SETUP:

1. Select which surface video you want to run.

2. Click CHECK FILE PATHS to make sure the Player can find the media. If the media is on a drive other than “lotus,” click Set Movie / Audio File Path and find the folder with the media.

3. If the Controller is on the same computer, leave “controller” set to “local.” If it’s on a different computer on the same network, select “network.” Be sure “network” is selected on the Controller too.

4. Set the video output:

4a. If projecting directly from the Player, move the “projector” window to the projector display. If the projector is attached when launching the Player, the “projector” window will already be on the second display.

4b. If mapping the video output with a program that uses the Syphon framework (like MadMapper), select “Syphon,” then launch the program and use that for display.

5. Test the audio, and set levels for the individual clips. From the Controller, select cue 301 or 401 for movies with audio. Press “play” below the levels sliders on the Player for the additional clips.

5a. The audio clip levels will not save when the Player is closed, but you can make note of the numerical setting, and adjust it the next time you launch the Player.

5b. The beat in 601 should be played live, so by default it will not play; but it can be cued for playback too by selecting the toggle next to the levels slider.

OTHER CONTROLS:

window visible – toggles whether the “projector” window is visible. Turns off if Syphon is selected.

video render – refreshes the video screen. Video will not appear to play if this is off.

audio – turns audio playback on and off.

video fullscreen – toggles whether the “projector” window is fullscreen. Also activated by the escape key.

hide menubar & cursor on fullscreen – use this option if presenting the window on the same screen as the Player, i.e. if the projector is the only display.

Load Calibration Grid – this will load a calibration grid for the selected surface.

play, pause, restart, eject – controls playback of the video in either bank.

slider – A/B fade. Operates automatically when the GO button is triggered on the Controller.

“X” toggle next to audio sliders – enables or disables individual audio clips.

play for audio sliders – manual playback of audio clips, for testing purposes.

The Lotus Controller

This controls the video Player(s), which can be on this computer, or networked over several different computers.

Double-clicking Lotus Controller.maxpat will launch Max, and open the Controller.

lotuscontroller2

TO RUN THE SHOW:

1. Launch the Controller and the Player(s)

2. Set the settings on the Player(s)

3. START THINGS RUNNING by pressing the “Run” button

4. Go to the first cue by pressing “go to beginning of show,” or the GO button several times, until the CURRENT Cue # is “1 – BLACK”

5. Press GO or the space bar to trigger the next cue

Duration is an estimated countdown to the next cue. Actual time will vary depending on the performance, but it will let you know when to be ready.

Also keep an eye on Cue Notes, which is a description of when the next cue occurs.

OTHER CONTROLS:

Black – toggles a fast fade to / from black, and pauses the active movie.

Grid – activates a calibration grid on all Players.

CURRENT Cue # and Description – what’s playing now.

NEXT Cue # and Description – what’s cued up to play when GO is pressed. NEXT Cue # is a dropdown menu, so you can jump directly to any cue.

Fade – the duration of the crossfade from the current clip to the next. It can be adjusted manually, but is set automatically according to the cue list.

Measure – The measure of the next cue in the score.

play – plays the active movie

pause – pauses the active movie

restart – goes to the beginning of the active movie

eject – clears the active movie from the Player

previous and next – step backward and forward through the cue that will be loaded next.

go to beginning of show – loads the first cue up next

open cue list – this is the cue sheet in table form, which is where all the playback data is stored. Editing this will affect the show’s playback.

Local / Network – If the Controller and the Player are on the same computer, keep the lower-right setting on local. If networking several computers, select network on the Controller and all Players. It is recommended to have a dedicated network, wired if possible.

Lotus Lives Shadow Puppets

At our first meeting about visuals for Lotus Lives, composer Su Lian Tan said she’d pictured a Malaysian-style shadow puppet show during the Folktale section of the opera. This would become the centerpiece of the video projections, a 14-minute film designed for eleven screens, plus the stage, walls, and ceiling of the concert hall.

LotusFinal-pond

Wayang Kulit is the name for traditional Malaysian shadow puppetry. A temporary shadow stage is constructed outdoors in a village, and a single puppeteer operates and voices all the characters. Each puppet is supported by a single stick, propped up while the puppeteer operates a hinged arm with a second stick. The performance usually tells the classic tale of the Ramayana, and runs late into the night over several days.

In contrast, Chinese shadow puppets are more articulated, supported by rods connected to the torso, legs, and each arm. Their movement is acrobatic, as they spin and fly through the air.

Lotus Lives is about the transition between the worlds inhabited by three generations of women: the lead soprano Lily’s life in 1980s London and the present, her grandmother’s in 1920s China, and her mother’s in Malaysia in between. The Folktale is where the struggle between traditions comes to a head.

LotusFinal-momngirl

I traveled to Malaysia and Singapore to research Wayang Kulit and the Peranakan culture, and collect images for the opera.

The Peranakan are descendants of Chinese immigrants to Peninsular Malaysia, dating back to the 15th century. Lily’s grandmother (and Su Lian Tan’s own family) moved into this community after leaving China in the 1930s and 1940s.

The Peranakan Chinese adopted cultural elements from Malaysia and, later, from European colonialism. We wanted to represent that blend of influences in the look of the opera, so I used a mix of both Chinese and Malaysian-style puppets.

Traditional shadow puppets are made of rawhide leather, but that isn’t a skill you can master in a week. So I decided to stick with what I know: pen and ink and lasers.

puppet-drawing

After coming up with a sketch of each puppet, I outlined the different puppet parts with blue pencil, and filled in the negative space with solid black ink.

puppet-moonink

I photographed the drawing and brought it into Photoshop for cleanup. I also copied repeated patterns like lace, and added holes at the joints.

puppet-photo

Next, I converted the artwork to a vector image with Illustrator. Then more cleanup and scaling, and over to local hackerspace NYC Resistor to use their laser cutter.

puppet-laser

This process allowed for multiple copies of each puppet, with different sizes of each. There is a large version of the Little Girl when she is on her own, and a smaller version when she appears next to her mom.

puppet-girl2

The puppet bodies are heavy card stock, with joints connected by thick, knotted embroidery thread.

It was tricky finding support rods that were thin and strong, that could be bent without cracking but wouldn’t flex in use. The winner was the pole from the little neon flags used for marking buried lines, like sprinkler systems or invisible dog fences.

I made handles from oak dowels, and attached each rod to the puppet with a single link made from art store armature wire.

The Malaysian-style puppets were easier to rig, with wooden rods tied to the stock.

puppets-malaysian

Authentic leather shadow puppets are painted, and the color shows up through the translucent skin. After some experimentation, I decided to keep the puppets black and white with tinted backgrounds, more in the cinematic tradition of Lotte Reiniger’s stop-motion silhouettes. (The Fine Lady’s carriage in Lotus is a nod to the carriage in The Adventures of Prince Achmed (1926).)

LotusFinal-finelady

I considered compositing the images optically, using projectors and layered fabrics in a toy theater. In the end, I decided to gather clean optical elements and then composite everything digitally.

shadow-screen

The puppeteers performed against a white screen, with marks for set pieces to be added later. The background and props were mostly pen, brush and ink.

I did build a simple toy theater frame for photographing backlit paper textures.

modeltheater

The Folktale climaxes with the Moon Goddess’s transformation from puppet into dancer, performed by Arika Yamada. Her choreography contains another blend of elements: classical ballet and modern dance.

Arika-shoot

I wanted Arika to be against a black background. The best way to do this without losing the shadow details was to shoot her against an evenly lit white wash and invert the image to negative. Also, the blue hue works for the ghostly Moon Goddess.

Arika-neg

Even though the puppet show was pre-recorded, the music and narration were all performed live. I split the video into shorter sequences with loops at the end, to be triggered with the music. This way the musicians could concentrate on their performances without worrying about keeping in time with the video. (For a lot more detail about that, read here.)

There were a lot of parts, and in the end they all worked together as planned.

New Yorker Sidewalk Projection

Andrew Baker and I created the visuals for the poetry segment of The New Yorker Presents pilot, by projecting archival video onto a sidewalk, stoop, and fence.

stoopshoot

The segment is an excerpt from Matthew Dickman’s poem “King”, which begins:

… So I put on my black-white
checkered Vans, the exact pair of shoes
my older brother wore when he was still a citizen in the world,
and I go out, I go out into the street
with my map of the dead and look for him…

The poem is recited by Andrew Garfield in a studio setting, intercut with home movie footage of a different young man and older brother who had passed away. Director Dave Snyder wanted to give the video a stylized treatment, so I suggested going out into the street, literally, as described in the text.

Here’s my projector rig booting up on the sidewalk:

And here’s a quick clip of the final product from the show’s trailer:

We shot the video with a 5DMkIII on a slider, with Zeiss and Canon lenses.

I also edited the final segment. The entire episode is streaming in the 2015 batch of Amazon Original Series pilots, which you can watch here. Watch the entire trailer and read more about The New Yorker Presents on the Jigsaw Productions website.

Self-Contained Projector Rig

piprojector

I was recently asked to provide a video projection for the Proteus Gowanus ball, and assembled my most compact, self-contained projector rig to date. It involves Velcroing a Raspberry Pi computer to my homemade projector mount, which can be clamped anywhere with standard film grip gear. When plugged in, 1080p video plays in a loop. The projector is small but bright, a 3,000 lumen Optoma TW-1692.

Getting the video to start and loop automatically was fairly simple, but required several stops on the internet:

I used this script to loop video files in a folder. I put mine in one called /media.

I added -r four lines from the end, as suggested in one of the comments. The video was getting squished toward the bottom of my projector, and this fixed that.

omxplayer -r $entry > /dev/null

Then I made the script (named videoplayer.sh) executable with the command:
sudo chmod +x /home/pi/videoplayer.sh

To run the video loop, if you’re in the same directory, type:
./videoplayer.sh

That worked, but I had to reboot the Raspberry Pi to get it to stop. !IMPORTANT! Before making it start automatically, make sure that you can edit rc.local from another computer via SSH while the script is running. Adafruit has a good overview of this here. That way, you can remove the following autostart line from rc.local when you want your Pi back.

To run the script on boot:
sudo nano /etc/rc.local
Before the final “exit 0” line, insert this line:

/home/pi/videoplayer.sh &

Change the path accordingly. I left the script in the home directory, although I may move it at some point.

I loaded the video onto the device, strapped everything together — projector, mount, Raspberry Pi (in a Pibow Timber case), multiplug, USB power adapter, HDMI cable, safety cable, extension cord — plugged it in, and it ran. Just like that.

Photographic Monument: Kentile Floors

On May 3, a Photographic Monument entitled τέλειο σύμπαντος, or “Perfect Universe,” lit up the Kentile Floors sign in Brooklyn. It was a collaboration between George Del Barrio at The Vanderbilt Republic, Karl Mehrer at K2imaging, and myself.

teleion-holon-v3

George and K2imaging have created a series of these large-scale Photographic Monument projections in the past, and I’ve been involved since the last one.

We created video content that ran 30 minutes in total, using George’s photographs, my video loops and neon recreation, text by Ralph Ellison, Milan Kundera, Gabriel García Márquez, and Henry David Thoreau, and patterns designed by Ed Roth of Stencil1.

Karl is the man with the original dream to make this happen, and the technical expertise to pull it off. He aligned two of K2imaging’s brand-new DPI Titan Super Quad 20,000 lumen projectors on the sign, and the result was beautiful.

K2_Kentile_setup-960

I figured out the geometric distortion needed to map the image onto the sign during our projection test a month earlier. Read about that in this post.

We were interviewed by the New York Daily News beforehand, and got further pre-show coverage in Brooklyn Magazine, Curbed, Gothamist, and Fucked in Park Slope.

Kentile_DanNguyen_1-700

People showed up with their cameras and got some amazing shots. Dan Nguyen shot this image of people viewing the sign from the Smith-9th subway platform, as well as the banner at the top of this post. (Thanks, Dan). People also watched from the streets and passing trains.

Gothamist, Curbed, Gowanus Your Face Off, and Brownstoner have their own photo and video roundups.

Here is Photographer Barry Yanowitz’s excellent video, with highlights from the night:
(I recommend going full-frame on all of these videos.)

Dave Bunting at King Killer Studios shot this gorgeous time-lapse from their roof:

The official documentation video from The Vanderbilt Republic:

Projection Content

We tailored the program’s themes to the sign, layering words and photos with the idea of text (our canvas was giant letters) in mind, and using video loops to tell an impressionistic story of the sign’s history by the Gowanus Canal.

I also traced the neon tubes in their current broken state, and then reconstructed the complete neon based on the existing pieces, plus visible electrical contact points on the sign:

The sequence with Stencil1’s designs, adapted for the screen:

And the official, remixed, final vision of the full program:

Thanks to everyone who helped out and showed up. Keep an eye out for future Photographic Monuments.

Kentile Floors Sign Illuminated

I’m working with George Del Barrio at The Vanderbilt Republic and Karl Mehrer at K2imaging to bring the long-dormant Kentile Floors sign in Brooklyn back to life on the night of Saturday, May 3.

The giant rooftop sign is visible to passengers of the F & G trains as they pass over the Gowanus Canal between Carroll Gardens and Park Slope, and has been dark at least since 1992, when Kentile Floors went out of business.

We did a test with one of Karl’s 20K projectors last night, and it looked amazing. I created a head-on vector mask of the sign’s letters, and mapped masked video onto the sign with VDMX and MadMapper. I’ll use that information to create the geometric distortion on the final composition, so everything will be rendered out before playback on the night. Also, Karl may add a second projector.

The picture at the top of this post is a projection illuminating the neon tubes in their current condition. Based on that, we’ll restore the sign to what it looked like with full neon, and then go from there.

kentile-grid

Kentile-lit-full

This projection is the latest Photographic Monument from George and Karl, titled τέλειο σύμπαντος (“Perfect Universe”).

Come to the Smith-9th G/F subway stop in Brooklyn for a great view of the sign, after dark on May 3. The Gowanus Loft is at 61 9th Street.

Here are a few video clips from test night.

For another massive video projection in the same area, check out my post on the Photographic Monument at last year’s Art From The Heart.

[UPDATE] — This project in the news:

New York Daily News
Brooklyn Magazine
Curbed
Gothamist
Fucked in Park Slope

READ HERE for photos, video, and a post-event recap.

Azimuth Video Installation

Azimuth is a video capture and playback installation with simple wireless controllers. It’s based on the Déjà View Master, modified for the art show “Being and Time.”

The basic layout is a webcam feeding into a Mac Mini running a Max/Jitter patch, controlled by three wireless Arduino-based devices, and displayed on two monitors: a Zenith television on a table, and a projector mounted under the table, projecting onto the floor.

azimuthshow2

The screen normally displays a frozen image of the empty room. As you turn the clock-like controller, you scrub back and forward through a constantly-recording half-hour video buffer. The frames flash on screen as long as you’re moving the hand, but it fades back to the frozen frame when movement stops.

To add an element of unpredictability, it sometimes scrubs through a pre-recorded video of walking to the venue, or operates normally but with the image sliding off the TV and onto the floor via the projector.

Jitter works by using a matrix of pixel information to manipulate video. In Azimuth, the video buffer is accomplished by recording sequentially-numbered Jitter binary files (each matrix file is one video frame) to a hard drive, and separately reading the matrix files back and displaying them. Once the maximum number of frames has been written, the counter resets and writes over the first frames, letting you record in a loop.

Here are the basic components of this operation, set to record one minute at 15 fps (900 frames), and read back and display other frames, chosen by the number box on the right.

maxbufferrecord

Grab this Max patch here

———-begin_max5_patcher———-
976.3oc0XFsbiaBEF9Z6mBFM8RWM.BPhdU5ywNcxfjHNxUVxKBuwa2Ye26Aj
sih2HY4Ta4zKhmHDF94iCmyu4GymEjVuS2Df9CzWPyl8i4yl4ax0vr8OOKXs
ZWVopw2sfpsqS0lfEsu5o5JaSw+ncuhPCw6aF5TQUo15+Jj8MVu0BMY+9Fc6
7ETTYCVfBRUUKCP+09dsQYydtnZ4iFclssiBXj4LRR7BDg4lDTRh6SJNDe76
Uj6kWc5pemE7pLZmTuNnczbkZsWEA+ooPUFbbtMP6Vs4QckJsz2Cr6U+b9b2
GKFKizu.53vnZ067KjfUE1vWJpxqeAsoT88TU1eid.ff1rAQtJD0iR.oChSF
U3wICHXRjCjDgbHdRuXddcY1Zk0TrCwPYOqLnDdDhkfuJ7BF8GaG8KhZrnPJ
iwEI.5hFDcB4GMVLqtr1zN0X2LfCShjThv8eXIi5la2b9AP8ZcSiZo9WXsQq
xQ+FYApMLZ77k1GeGMQI7Pojw4QvYatOjL48AZL98AJYJBFypWmVToQvh4YD
D2fvnvU6dB8vFUdNr3fmwHN72CPH0xkZSChdIbj0KGufnSB.PIDpv4.WkCln
LlLEmrS2Zs0UAWdhrgVvwhPIIFNEBKXh.NNFwYh1U5.mF6I145EgTuQWkCbp
dI5o5x7qRRpQUirKOXvltDS4.OjzA2+Y2wL6Y0ack9PR7EkKmeF2DG9y83P.
CvSHki4IuEXwrAcVP64DC6pRr9RPmUV2nu8IlEQghHHZBRfvfzJQ3DobApsP
We4kID9EmXtyxa01FawSEYJaAjp3CdlrOr4NTd6oFjA1QM1ITKdXpIt6T6+V
stIuRW2DccqzM7OIHdR7v1W.3KlBqFbVc6CB6PGJARkISjNOA7nA8Twuidp9
zXvuK5X7PlLJNlcVC9DJ4SnC+AX8WsgKMpz6Ek6jkLgExSDIdOq33ggr3+OP
9qq0VSMRHtJm1OqwuN9X5T1QvGlmxo3Dusd4xRcvEemPi01Vxwk6v194mI+1
4t1G+XFTBUCO4lx75y09agPS8VS1gEz9KjB8pxx0PU6pi0r+xq+VLD8XmdtH
OW6e+g3k7hFm3x6+RoFqbbWJwY0C8M84VJm3QimIRO3QnmSX3MUOjwvG7joG
uU+yJHeN6ISQhOaJhNNEMcQ07wsoMgg0zwDVymN8v+jsic5lw6qH1DpnSmqd
1ynSVkrwgno7f+XN1eRAu6es0OZs9ViQpMa9l1zreL8RA7Atp01sXg+whp1G
8VtBL5uUbn+scPY.idVvk2VSqoycIBvxILO+b9+Bn2hk4C
———–end_max5_patcher———–

There is more math involved in keeping track of the current frame and rolling back through zero.

The projection on the floor is black, but an image appears where the camera picks up movement. The patch subtracts the 8-bit value of each pixel of the previous frame from the current frame: where the pixels are the same, the value will be zero. Where the value is different, it will light up. It’s like velociraptor vision.

azimuthscreen

The button controller swaps the output monitors, putting the frame differencing on the TV screen and the freeze frame on the floor via the projector. The frame differencing is quite popular: the kids love dancing in front of it. But it has a practical function too. Part of the Max patch adds up the value of all the cells in the frame to determine whether anyone is in the room. The more movement, the higher the total value. If the number stays below a set threshold for more than a few minutes, it will assume that the room is empty and update the freeze-frame.
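If it helps to see the idea outside of Jitter, here is the difference-and-total step sketched in plain C++ over a single 8-bit grayscale frame. The real version runs per pixel on the GPU in jit.gl.pix; the function name and the way the threshold is handled here are mine, not the patch’s.

#include <cstdint>
#include <cstdlib>
#include <vector>

// Per-pixel difference between the current and previous frames.
// Unchanged pixels come out as 0 (black); moving areas light up.
// The running total is the "is anyone in the room?" number.
long frameDifference(const std::vector<uint8_t>& current,
                     const std::vector<uint8_t>& previous,
                     std::vector<uint8_t>& out) {
    out.assign(current.size(), 0);
    long total = 0;
    for (std::size_t i = 0; i < current.size(); ++i) {
        int d = std::abs(int(current[i]) - int(previous[i]));
        out[i] = uint8_t(d);   // 0 where nothing moved, bright where something did
        total += d;
    }
    return total;              // a low total for a few minutes means an empty room
}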

The other control box has a knob, which switches between five channels. Some channels read back from other matrix “banks,” where the five-digit matrix file begins with a different number. The main video loop is 10000-17200 (add 10000 to the counter value, max of 7200 for 30 minutes at 15 fps), a busy time saved from the show’s opening is 20000-27200 (add 20000), a pre-recorded movie of riding the subway and walking to the venue is 50000-53400, and so on. Another channel adds vertical roll to the live video feed, like an old TV. All adjust the brightness and color in some way.
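In other words, the channel knob mostly changes which numbered files get read back. A minimal sketch of that lookup, assuming the frames are stored as five-digit .jxf files as in the Déjà View Master; the function name is made up:

#include <cstdio>
#include <string>

// counter is the rolling frame number; bank selects the recording:
// 10000 = main loop, 20000 = opening night, 50000 = the walk to the venue.
std::string frameFileName(int bank, int counter) {
    char name[16];
    std::snprintf(name, sizeof(name), "%05d.jxf", bank + counter);
    return name;
}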

Any controller will take over whatever’s happening on screen, and the result of pressing a button or turning the knob will time out and revert to the empty frame if left alone.

azimuthboxes1

The boxes are all handmade oak frames with laser-cut semi-transparent acrylic front and back, echoing the Zenith television.

The big box has a rotary encoder with a clock hand attached, so it can spin continuously in either direction. The encoder is connected to a Teensy 3.0 microcontroller which runs Arduino code. It sends one command repeatedly if turned clockwise, and another command if turned counterclockwise, via a serial connection over an XBee radio and adapter.

It’s powered by a 2,500 mAh lipo battery (the Teensy and XBee operate at 3.3v), and uses SparkFun’s Wake on Shake as a power switch. This device is brilliant. It has an accelerometer, and turns on if there’s any movement. It then stays on as long as there is power going to the wake pin — this comes from one of the Teensy pins, which is programmed to stay on for 15 minutes after the last time the controller’s been used.
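Roughly, the controller’s loop looks like this. This is a sketch, not the actual code (which is linked below); it assumes PJRC’s Encoder library, the XBee on the Teensy’s hardware serial port, and placeholder pin numbers and command characters.

#include <Encoder.h>

const int WAKE_PIN = 10;           // placeholder pin feeding the Wake on Shake
Encoder dial(2, 3);                // placeholder encoder pins
long lastPosition = 0;
unsigned long lastUsed = 0;

void setup() {
  pinMode(WAKE_PIN, OUTPUT);
  digitalWrite(WAKE_PIN, HIGH);
  Serial1.begin(9600);             // XBee radio on the hardware serial pins
}

void loop() {
  long position = dial.read();
  if (position > lastPosition) {
    Serial1.write('f');            // turned clockwise: scrub one way
    lastUsed = millis();
  } else if (position < lastPosition) {
    Serial1.write('b');            // turned counterclockwise: scrub the other way
    lastUsed = millis();
  }
  lastPosition = position;

  // hold the wake pin high for 15 minutes after the last movement,
  // then let the Wake on Shake cut the power
  bool keepAwake = (millis() - lastUsed) < 15UL * 60UL * 1000UL;
  digitalWrite(WAKE_PIN, keepAwake ? HIGH : LOW);
  delay(10);
}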

I used a breadboard with power rails removed to hold the Teensy and XBee, since it provides a flush, solid connection. Double-sided tape, industrial strength Velcro, and hot glue keep everything in place. The back panel is held on with machine screws.

boxAinside2

The smaller boxes are similar, but use an Adafruit Trinket for the logic. One has a 10k linear potentiometer, and the other uses an arcade button. Each has a panel-mount power switch on the bottom of the box.

boxBinside

The receiver uses a Teensy 3.1, which relays incoming serial messages from the XBee to the Mac Mini over USB. I’d normally send a serial connection directly into Max, but since this installation needs to run reliably without supervision, I set the Teensy to appear as a standard keyboard. Messages from the controllers are sent as keystrokes, and the Max patch responds accordingly. This also made programming easier, since I could emulate controller action with keystrokes.
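The relay logic is tiny. Here’s a sketch of the idea, assuming the Teensy’s USB type is set to Keyboard and the XBee sits on the hardware serial port — again, not the actual code, which is linked below.

// Relay single-character commands from the XBee straight to the computer
// as keystrokes, so the Max patch only has to listen for key presses.
void setup() {
  Serial1.begin(9600);          // XBee on the hardware serial pins
}

void loop() {
  if (Serial1.available()) {
    char c = Serial1.read();
    Keyboard.write(c);          // arrives in Max as an ordinary keystroke
  }
}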

The receiver is housed in a spare Raspberry Pi case with a hole cut in the top for the XBee. I also added a kill button to stop the patch from running and quit Max by sending another keystroke. The Mac Mini is set to launch the Azimuth patch on startup, so between that and the kill button, no keyboard or mouse is needed from day to day.

Arduino code for the controllers and receiver is here.

azimuthRcv

The Mac Mini connects to the Zenith TV via a series of adapters: Mini DisplayPort to VGA, VGA to RCA, and RCA to RF (on channel 3). The projector is configured as the second monitor, with a direct HDMI connection. I don’t recommend working on a complex Max patch on a 640 x 480 screen.

All in all, the installation runs well. Video is stored on a solid state laptop hard drive in a USB 3 enclosure, and most of the video processing happens on the GPU using the Gen object (jit.gl.pix) in Jitter. Some people were tentative when using the controllers, but others dove in and had a good time.

Photographic Monument

Vanderbilt Republic partnered with K2imaging again for this year’s Art From the Heart all-media art show, to create a massive outdoor video projection — the Photographic Monument — this time on the construction mesh below the Gowanus F/G train viaduct in Brooklyn.

PM_screen1

Mark Kleback and I wrote a Max/Jitter patch which would randomly cycle through all photo submissions to the show, and allow the viewer to tweak how the images interact with the environment. We used a controller that Mark built for video mixing, with four switches and four knobs.

The photos showed up on top of a background which you could select with the controller. The background options were video loops I’d shot from the F and G trains, abstract shapes, a cascade of all the photos, and a blown-up version of the photo being displayed. Other controls affected size, blending mode, and brightness/contrast/saturation.

PM_window

You could stand at the controls and see the monument through the window. The projector was massive. The deer was impressed.

PM_setup

Thanks to George Del Barrio for inviting us to participate. Also check out my Déjà View Master which premiered at the show.

Deja View Master

The Déjà View Master is an interactive video installation for situations where people and their attention wander over the course of an evening. The audience uses a clock-like wooden controller to rewind surveillance video back from a live feed to some-time-before.

DVM_screen3

There are two parts: a surveillance camera and TV monitor, recording video on a Mac running a Max/Jitter patch, and a wireless wooden controller sending position data to the computer, to move the playhead.

DVM_held

Recording and Playing

The Max/Jitter patch takes video from a USB webcam and writes the camera feed to a hard drive as matrix files (.jxf), one per frame, while simultaneously reading them back for display on an external monitor (via a cheap VGA to RCA adapter). The files are numbered serially, overwriting the earlier files once a limit (time or drive space) is reached.

This is basically just doing what a DVR does, although it took a bit of figuring to get it running on an updating loop in Max.

The current frame (“now”) is set by a counter which resets at the maximum frame count. When the controller is set to display “now,” the live feed is actually reading the “now”-1 frame. As you dial the controller backward, it subtracts an appropriate number of frames from the current record frame, rolling backward through zero, until the earliest point of the recording is reached at “now”+1.
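The arithmetic is just modular counting. A rough sketch in plain C++ (the real thing lives in the Max patch; the function and variable names here are made up):

int playbackFrame(int writeFrame, int offset, int bufferSize) {
    // writeFrame is the frame being recorded right now, offset is how far
    // back the dial has been turned (0 = "now"), bufferSize is the loop length.
    int frame = (writeFrame - 1 - offset) % bufferSize;   // "now" is the frame just written
    if (frame < 0) frame += bufferSize;                   // roll backward through zero
    return frame;                                         // the oldest frame sits at "now" + 1
}

If it recorded at 15 fps like the later Azimuth version, the 70-minute loop mentioned below would make bufferSize 63,000 frames.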

I added a video effect which becomes more visible the farther back in time you go — initially it was a feedback loop which caused motion blur, but I finally went with more visible and less processor-intensive black-and-white, with increased brightness and contrast. I’d still like to add a blur or luma increase during scrubbing, independent of distance from present, but that’s for a future version.

The matrix files are recorded to a solid-state USB 3 hard drive. I also added a rotate 180 degrees function (jit.rota) for when the incoming webcam is clamped upside down to the ceiling.

DVM_front

Controlling

The controller is the fun part. It’s a wooden box resembling a clock, with a wooden hand which controls the playback.

The design was inspired by the Pimoroni Timber Raspberry Pi enclosure, using stacks of laser cut wood. I used 1/8″ ply, which ended up taking close to an hour of laser time. It turned out well, but I might go for a different approach next time.

The face and back are 1/32″ veneer finished with Briwax, which allow two hidden LEDs to shine through (“now” and “before” indicators).

The clock hand is attached to a linear 10k potentiometer, which is wired to a Teensy 3.0 development board running standard Arduino code (take a look at the code here), powered by an 850 mAh lipo battery (it’s a 3.3V system). The Teensy sends serial data to the Max patch via a pair of XBee radios; the receiver XBee is attached to the recording computer with an FTDI cable and XBee adapter kit. The whole thing is mounted on an Adafruit perma-proto board.

DVM_open

George Del Barrio invited me to premiere the Déjà View Master at Vanderbilt Republic’s 2013 Art From The Heart photography show and event, curated by Renwick Heronimo. I had it set on a 70-minute loop, with the camera clamped to the ceiling, and monitor behind a transparent corrugated wall.

I may experiment with other controllers in future versions. The original idea was to use the position of a beer on a bar as the playhead, but I could see that ending with spillage. The wooden controller is pretty intuitive, as long as it’s obviously associated with the monitor. Another alternative to the current design is to use a rotary encoder, so the hand can keep spinning as you wind through time.

Also check out the Photographic Monument that I programmed with Mark Kleback for the show.

[UPDATE 3/23/14: The Déjà View Master has evolved into Azimuth for the “Being and Time” art show.]

Gloves Video Controller

Six of us at NYU’s ITP Camp decided to follow The Gloves Project’s patterns to build our own gloves in June. These are sensor-laden gloves that can be used to control software through hand gestures. Our group included musicians, a theatrical sound designer, a gamer, and visualists, each with different uses for the glove in mind.

To get an idea of how it can be used with video in a live setting, take a look at this test clip, where I use hand movement to wirelessly control video playback and effects.

Here, sensor values on the glove are sent via Bluetooth to a decoder patch written in Max, and then out as MIDI controller data to VDMX, VJ software. It works!

Gloves have been used as controllers in live performance for some time — see Laetitia Sonami’s Lady’s Glove for example. Our particular design is based on one created for Imogen Heap to use as an Ableton Live controller, so she can get out from behind a computer or keyboard and closer to the audience. She gives a great explanation and demonstration at this Wired Talk (musical performance starts at 13:30).

Heap and The Gloves Project team are into sharing the artistic possibilities of this device with others, as well as increasing the transparency of the musical process which can be obscured inside a computer. This is an attitude I’ve believed in since attending MakerFaire and Blip Festival in 2009, where I saw a range of homemade controllers and instruments. I was much more engaged with the artists who made the causal process visible. It doesn’t have to be all spelled-out, but in certain cases it helps to see the components: the performer is making the things happen. This is obvious with a guitar player, but not so much with electronic music. Also, you get a different creative result by moving your arms than pressing a button — a violin is different from a piano.

The Gloves Project has a residency program where they’ll loan a pair of gloves to artists, plus DIY plans for an Open Source Hardware version. The six of us at ITP Camp built one right-hand glove each. We had to do a bit of deciphering to figure everything out, but we had a range of skills between us and got there in the end.

Each glove has six flex sensors in the fingers (thumb and ring finger have one each, and index and middle have two each, on the upper and lower knuckle), which are essentially resistors: the more they bend, the less electricity passes through. This can be measured and turned into a number. The sensors run to a tiny programmable ArduIMU+ v3 board by DIYDrones, which uses Arduino code and includes a built-in gyroscope, accelerometer, and magnetometer (a compass if you attach a GPS unit for navigation). This is mostly used for flying things like small self-guided airplanes, but also works for motion capture. We make a serial connection to the computer with a wireless bluetooth device.

Here’s a wiring guide that we drew up.

We had more trouble with the software side of things. The Gloves Project’s design is meant to communicate with their Glover software, written in C++ by Tom Mitchell. There are instructions on the website, but we couldn’t reach anyone to actually get a copy of the program. In the end, we copied the flex sensor sections of Seb Madgwick’s ArduIMU code and used them to modify the ArduIMU v3 code. It delivered a stream of numbers, but we still had to figure out how to turn them into something we could use.

We formatted the output sensor data like this:

Serial.println("THUMB:");
Serial.println(analogRead(A0));
Serial.println("INDEXLOW:");
Serial.println(analogRead(A1));
Serial.println("INDEXUP:");
Serial.println(analogRead(A2));

…and so on. I then programmed a patch in Max to sort it out.

Details:

When one of the sensor names comes through, Max routes it to a specific switch, opens the switch, lets the next line through (the data for that sensor), and then closes the switch. Data goes where we want, and garbage is ignored.

Every glove and person is slightly different, so next the glove is calibrated. Max looks for the highest and lowest number coming in, and then scales that to the range of a MIDI slider: 0 to 127. When you first start the decoder, you move your hand around as much as you can and voilà! It’s set.

I made the default starting point for the flex sensor data 400, since the lowest readings sometimes never got all the way down to 0, while the peaks were always above 400. The starting point for movement data is 0. There’s also a “slide” object that smooths the values so they don’t jump all over the place while still being fairly responsive.
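Here’s that auto-calibration step sketched as Arduino-style C++ instead of the Max objects we actually used. flexToMidi is a made-up name, map() is the standard Arduino helper, and the 400 default comes from the paragraph above.

int lowSeen  = 400;   // both trackers start at 400, per the note above
int highSeen = 400;

int flexToMidi(int raw) {
  if (raw < lowSeen)  lowSeen  = raw;           // track the lowest value seen so far
  if (raw > highSeen) highSeen = raw;           // ...and the highest
  if (highSeen == lowSeen) return 0;            // nothing to scale until the hand moves
  return map(raw, lowSeen, highSeen, 0, 127);   // scale to a MIDI slider range
}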

The number is now sent through a Max “send” object with a different name than the raw sensor data. If you’re keeping everything inside Max, you can just set up a corresponding “receive” object.

Otherwise, it gets turned into a MIDI control or note value, and sent out through a local MIDI device or over a network.

Finally, I tidied everything up so it’s useable in presentation mode. Anyone can download the patch and run it in Max Runtime (free).

There are probably more efficient ways of doing this, but it’s our first pass to get things working.

To download all our code, visit https://github.com/timpear/ITP-Gloves/

Since finishing that, I discovered that The Gloves Project has released a whole range of decoders / bridges in various languages. Their ArduIMU code has lots of clever deciphering on the gloves end of things, and the bridges primarily output OSC instead of MIDI, which is handy. Beyond that, The Gloves Project continues to develop new versions of gloves, and is worth checking up on.

Our decoder simply translates the raw sensor data. The next step is to get it to recognize hand gestures, and trigger specific events or adjust values based on that (which is what the Glover software does). We also need to program the glove’s RGB LED and vibration motor for feedback from the computer.

I showed this project to Karl Ward (rock star, Ghost Ghost collaborator, master’s student at ITP), and it turns out that he’s currently working on an Arduino library to do a lot of this work, only more elegantly, within the controller. The first library is Filter, which he augmented over the summer to require another new library he wrote, called DataStream. He says: “They are both in usable, tested shape, but the API is still in flux. Right now I’m looking for folks who have Arduino code that does its own filtering, or needs filtering, so I can design the API to fit the most common cases out there.” We’re going to jam.

The glove has all sorts of possible artistic applications, but what else? When I showed it to my dad, he wondered if it could be used as a translator for sign language. Brilliant. It sounds like Microsoft is currently developing software for the Xbox One and new Kinect that will do this, although one advantage of a wearable controller in any case is the ability to get away from a computer (within wireless range). One of the people on our team is going to use it to adjust audio signals while installing sound in theaters. Easier than holding a tablet at the top of a ladder.

Another friend suggested that the glove as demonstrated here could be used for art therapy by people with limited movement. I imagine that something similar is in use out there, but the open-source aspect adds another level of customization and possibility, and again, transparency.

I’m looking to experiment with adjusting specific elements of a video clip with something more organic than a slider or knob, and also be able to interact more directly with a projection. I’ve worked with painter Charlie Kemmerer, creating hybrid painting-projections during Ghost Ghost shows. Charlie works on the canvas with a brush, but even standing beside him, I have to work on an iPad at best. Now I can point directly at the surface while selecting, adjusting, and repositioning clips. Or Charlie could wear it while painting to capture his movement, without it getting in the way of holding a brush.

Creative work reflects the nature of your instrument, so it’s exciting to expand the toolset and learn more about the media. Video A-B fades are pretty straightforward, but the way that the IMU works isn’t nearly as predictable as a fader on a board, and I’ve gotten some unexpected results. That’s a good thing.

Even better, I can’t wait to see what other people with these gloves come up with. Tinker, modify, share.