All posts tagged Max/MSP

Azimuth Video Installation

Azimuth is a video capture and playback installation with simple wireless controllers. It’s based on the Déjà View Master, modified for the art show “Being and Time.”

The basic layout is a webcam feeding into a Mac Mini running a Max/Jitter patch, controlled by three wireless Arduino-based devices, and displayed on two monitors: a Zenith television on a table, and a projector mounted under the table, projecting onto the floor.

azimuthshow2

The screen normally displays a frozen image of the empty room. As you turn the clock-like controller, you scrub back and forth through a constantly recording half-hour video buffer. The frames flash on screen as long as you're moving the hand, and the display fades back to the frozen frame when movement stops.

To add an element of unpredictability, it sometimes scrubs through a pre-recorded video of walking to the venue, or operates normally but with the image sliding off the TV and onto the floor via the projector.

Jitter manipulates video as matrices of pixel information. In Azimuth, the video buffer works by writing sequentially numbered Jitter binary files (each matrix file is one video frame) to a hard drive, while separately reading matrix files back and displaying them. Once the maximum number of frames has been written, the counter resets and writes over the first frames, letting you record in a loop.

Here are the basic components of this operation, set to record one minute at 15 fps (900 frames), and read back and display other frames, chosen by the number box on the right.

maxbufferrecord

Grab this Max patch here

———-begin_max5_patcher———-
976.3oc0XFsbiaBEF9Z6mBFM8RWM.BPhdU5ywNcxfjHNxUVxKBuwa2Ye26Aj
sih2HY4Ta4zKhmHDF94iCmyu4GymEjVuS2Df9CzWPyl8i4yl4ax0vr8OOKXs
ZWVopw2sfpsqS0lfEsu5o5JaSw+ncuhPCw6aF5TQUo15+Jj8MVu0BMY+9Fc6
7ETTYCVfBRUUKCP+09dsQYydtnZ4iFclssiBXj4LRR7BDg4lDTRh6SJNDe76
Uj6kWc5pemE7pLZmTuNnczbkZsWEA+ooPUFbbtMP6Vs4QckJsz2Cr6U+b9b2
GKFKizu.53vnZ067KjfUE1vWJpxqeAsoT88TU1eid.ff1rAQtJD0iR.oChSF
U3wICHXRjCjDgbHdRuXddcY1Zk0TrCwPYOqLnDdDhkfuJ7BF8GaG8KhZrnPJ
iwEI.5hFDcB4GMVLqtr1zN0X2LfCShjThv8eXIi5la2b9AP8ZcSiZo9WXsQq
xQ+FYApMLZ77k1GeGMQI7Pojw4QvYatOjL48AZL98AJYJBFypWmVToQvh4YD
D2fvnvU6dB8vFUdNr3fmwHN72CPH0xkZSChdIbj0KGufnSB.PIDpv4.WkCln
LlLEmrS2Zs0UAWdhrgVvwhPIIFNEBKXh.NNFwYh1U5.mF6I145EgTuQWkCbp
dI5o5x7qRRpQUirKOXvltDS4.OjzA2+Y2wL6Y0ack9PR7EkKmeF2DG9y83P.
CvSHki4IuEXwrAcVP64DC6pRr9RPmUV2nu8IlEQghHHZBRfvfzJQ3DobApsP
We4kID9EmXtyxa01FawSEYJaAjp3CdlrOr4NTd6oFjA1QM1ITKdXpIt6T6+V
stIuRW2DccqzM7OIHdR7v1W.3KlBqFbVc6CB6PGJARkISjNOA7nA8Twuidp9
zXvuK5X7PlLJNlcVC9DJ4SnC+AX8WsgKMpz6Ek6jkLgExSDIdOq33ggr3+OP
9qq0VSMRHtJm1OqwuN9X5T1QvGlmxo3Dusd4xRcvEemPi01Vxwk6v194mI+1
4t1G+XFTBUCO4lx75y09agPS8VS1gEz9KjB8pxx0PU6pi0r+xq+VLD8XmdtH
OW6e+g3k7hFm3x6+RoFqbbWJwY0C8M84VJm3QimIRO3QnmSX3MUOjwvG7joG
uU+yJHeN6ISQhOaJhNNEMcQ07wsoMgg0zwDVymN8v+jsic5lw6qH1DpnSmqd
1ynSVkrwgno7f+XN1eRAu6es0OZs9ViQpMa9l1zreL8RA7Atp01sXg+whp1G
8VtBL5uUbn+scPY.idVvk2VSqoycIBvxILO+b9+Bn2hk4C
———–end_max5_patcher———–

There is more math involved in keeping track of the current frame and rolling back through zero.
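The gist of that math is a single wrap-around expression. A minimal sketch (the names are mine, not from the patch), using the 900-frame example above:

    // Hypothetical playhead math: "recordFrame" is the frame currently being
    // written, "offset" is how far back you've scrubbed, and "bufferSize" is the
    // loop length (900 in the example patch above). Adding bufferSize before the
    // modulo keeps the result from going negative.
    int playbackFrame(int recordFrame, int offset, int bufferSize) {
        return ((recordFrame - offset) % bufferSize + bufferSize) % bufferSize;
    }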

The projection on the floor is black, but an image appears wherever the camera picks up movement. The patch subtracts the 8-bit value of each pixel in the previous frame from the corresponding pixel in the current frame: where the pixels are the same, the result is zero. Where they differ, the pixel lights up. It's like velociraptor vision.
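Here's a rough CPU-side sketch of the same idea (the patch does it per pixel in Jitter, mostly on the GPU); I've used the absolute difference so movement lights up in either direction:

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Frame differencing on one grayscale plane: unchanged pixels cancel to 0
    // (black), moving pixels show up bright.
    std::vector<uint8_t> frameDifference(const std::vector<uint8_t>& current,
                                         const std::vector<uint8_t>& previous) {
        std::vector<uint8_t> diff(current.size());
        for (size_t i = 0; i < current.size(); ++i) {
            diff[i] = static_cast<uint8_t>(std::abs(current[i] - previous[i]));
        }
        return diff;
    }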

azimuthscreen

The button controller swaps the output monitors, putting the frame differencing on the TV screen and the freeze frame on the floor via the projector. The frame differencing is quite popular: the kids love dancing in front of it. But it has a practical function too. Part of the Max patch adds up the value of all the cells in the frame to determine whether anyone is in the room. The more movement, the higher the total value. If the number stays below a set threshold for more than a few minutes, it will assume that the room is empty and update the freeze-frame.
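The empty-room check amounts to a running total and a timer. A sketch of the logic (the threshold and timing values here are placeholders, not the ones in the patch):

    #include <cstdint>
    #include <vector>

    // Sum the difference frame; if the total stays under the threshold for enough
    // consecutive frames (a few minutes' worth), treat the room as empty so the
    // patch can refresh its freeze frame.
    bool roomIsEmpty(const std::vector<uint8_t>& diffFrame, long threshold,
                     int& quietFrames, int quietFramesNeeded) {
        long total = 0;
        for (uint8_t v : diffFrame) total += v;   // more movement, higher total
        quietFrames = (total < threshold) ? quietFrames + 1 : 0;
        return quietFrames >= quietFramesNeeded;
    }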

The other control box has a knob, which switches between five channels. Some channels read back from other matrix “banks,” where the five-digit matrix file begins with a different number. The main video loop is 10000-17200 (add 10000 to the counter value, max of 7200 for 30 minutes at 15 fps), a busy time saved from the show’s opening is 20000-27200 (add 20000), a pre-recorded movie of riding the subway and walking to the venue is 50000-53400, and so on. Another channel adds vertical roll to the live video feed, like an old TV. All adjust the brightness and color in some way.
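Switching banks just means adding a different base number to the frame counter before building the file name. A hypothetical mapping (the channel numbers are invented; the ranges are the ones above, and .jxf is the Jitter matrix extension):

    #include <string>

    // Map the knob's channel to a matrix-file bank and build the frame's file name.
    std::string matrixFileName(int channel, int counter) {
        int base = 10000;                     // main live loop (10000-17200)
        if (channel == 2) base = 20000;       // saved opening-night bank (20000-27200)
        else if (channel == 3) base = 50000;  // pre-recorded subway/walk movie (50000-53400)
        return std::to_string(base + counter) + ".jxf";
    }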

Any controller will take over whatever's happening on screen, and the effect of pressing a button or turning a knob times out and reverts to the empty frame if left alone.

azimuthboxes1

The boxes are all handmade oak frames with laser-cut semi-transparent acrylic front and back, echoing the Zenith television.

The big box has a rotary encoder with a clock hand attached, so it can spin continuously in either direction. The encoder is connected to a Teensy 3.0 microcontroller running Arduino code. It sends one command repeatedly while turned clockwise, and another command while turned counterclockwise, over a serial connection through an XBee radio and adapter.
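The controller code boils down to a few lines. A sketch of the idea (pin numbers and command bytes are placeholders; the actual sketches are linked further down), using the Encoder library that ships with Teensyduino:

    #include <Encoder.h>      // PJRC Encoder library

    Encoder dial(2, 3);       // rotary encoder on two digital pins (placeholder pins)
    long lastPosition = 0;

    void setup() {
      Serial1.begin(9600);    // XBee on the Teensy's hardware serial port
    }

    void loop() {
      long position = dial.read();
      if (position > lastPosition) {
        Serial1.write('F');   // turned clockwise: scrub forward
      } else if (position < lastPosition) {
        Serial1.write('B');   // turned counterclockwise: scrub back
      }
      lastPosition = position;
      delay(10);
    }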

It's powered by a 2,500 mAh LiPo battery (the Teensy and XBee operate at 3.3V), and uses SparkFun's Wake on Shake as a power switch. This device is brilliant. It has an accelerometer, and turns on if there's any movement. It then stays on as long as there is power going to the wake pin — this comes from one of the Teensy pins, which is programmed to stay on for 15 minutes after the last time the controller's been used.
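The keep-alive logic is about as simple as it sounds. A sketch (the pin number and the activity check are placeholders):

    const int WAKE_PIN = 10;                                  // feeds the Wake on Shake's wake input
    const unsigned long KEEP_ALIVE_MS = 15UL * 60UL * 1000UL; // 15 minutes
    unsigned long lastUsed = 0;

    bool controllerWasUsed() {
      return false;            // placeholder: the real sketch checks the encoder here
    }

    void setup() {
      pinMode(WAKE_PIN, OUTPUT);
      digitalWrite(WAKE_PIN, HIGH);   // hold power on as soon as we boot
      lastUsed = millis();
    }

    void loop() {
      if (controllerWasUsed()) lastUsed = millis();
      // Drop the wake pin once the controller has been idle long enough;
      // the Wake on Shake then cuts power until the next bump.
      digitalWrite(WAKE_PIN, (millis() - lastUsed < KEEP_ALIVE_MS) ? HIGH : LOW);
    }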

I used a breadboard with power rails removed to hold the Teensy and XBee, since it provides a flush, solid connection. Double-sided tape, industrial strength Velcro, and hot glue keep everything in place. The back panel is held on with machine screws.

boxAinside2

The smaller boxes are similar, but use an Adafruit Trinket for the logic. One has a 10k linear potentiometer, and the other uses an arcade button. Each has a panel-mount power switch on the bottom of the box.

boxBinside

The receiver uses a Teensy 3.1, which relays incoming serial messages from the XBee to the Mac Mini over USB. I’d normally send a serial connection directly into Max, but since this installation needs to run reliably without supervision, I set the Teensy to appear as a standard keyboard. Messages from the controllers are sent as keystrokes, and the Max patch responds accordingly. This also made programming easier, since I could emulate controller action with keystrokes.
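The relay itself is only a few lines. A sketch, assuming the Teensy's USB type is set to Keyboard and the XBee sits on Serial1 (the real key mapping is in the linked code):

    void setup() {
      Serial1.begin(9600);       // XBee adapter on the hardware serial port
    }

    void loop() {
      if (Serial1.available()) {
        char command = Serial1.read();
        Keyboard.print(command); // forward the controller command to Max as a keystroke
      }
    }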

The receiver is housed in a spare Raspberry Pi case with a hole cut in the top for the XBee. I also added a kill button to stop the patch from running and quit Max by sending another keystroke. The Mac Mini is set to launch the Azimuth patch on startup, so between that and the kill button, no keyboard or mouse is needed from day to day.

Arduino code for the controllers and receiver is here.

azimuthRcv

The Mac Mini connects to the Zenith TV via a series of adapters: Mini DisplayPort to VGA, VGA to RCA, and RCA to RF (on channel 3). The projector is configured as the second monitor, with a direct HDMI connection. I don't recommend working on a complex Max patch on a 640×480 screen.

All in all, the installation runs well. Video is stored on a solid state laptop hard drive in a USB 3 enclosure, and most of the video processing happens on the GPU using the Gen object (jit.gl.pix) in Jitter. Some people were tentative when using the controllers, but others dove in and had a good time.

Photographic Monument

Vanderbilt Republic partnered with K2imaging again for this year's Art From the Heart all-media art show, to create a massive outdoor video projection — the Photographic Monument — this time on the construction mesh below the Gowanus F/G train viaduct in Brooklyn.

PM_screen1

Mark Kleback and I wrote a Max/Jitter patch which would randomly cycle through all photo submissions to the show, and allow the viewer to tweak how the images interact with the environment. We used a controller that Mark built for video mixing, with four switches and four knobs.

The photos showed up on top of a background image, which you could select with the controller. The backgrounds were video loops I'd shot from the F and G trains, abstract shapes, a cascade of all the photos, and a blown-up version of the photo currently being displayed. Other controls affected size, blending mode, and brightness/contrast/saturation.

PM_window

You could stand at the controls and see the monument through the window. The projector was massive. The deer was impressed.

PM_setup

Thanks to George Del Barrio for inviting us to participate. Also check out my Déjà View Master which premiered at the show.

Déjà View Master

The Déjà View Master is an interactive video installation for situations where people and their attention wander over the course of an evening. The audience uses a clock-like wooden controller to rewind surveillance video back from a live feed to some-time-before.

DVM_screen3

There are two parts: a surveillance camera and TV monitor, recording video on a Mac running a Max/Jitter patch, and a wireless wooden controller sending position data to the computer, to move the playhead.

DVM_held

Recording and Playing

The Max/Jitter patch takes video from a USB webcam and writes the camera feed to a hard drive as matrix files (.jxf), one per frame, while simultaneously reading them back for display on an external monitor (via a cheap VGA to RCA adapter). The files are numbered serially, overwriting the earlier files once a limit (time or drive space) is reached.

This is basically just doing what a DVR does, although it took a bit of figuring to get it running on an updating loop in Max.

The current frame (“now”) is set by a counter which resets at the maximum frame count. When the controller is set to display “now,” the live feed is actually reading the “now”-1 frame. As you dial the controller backward, it subtracts an appropriate number of frames from the current record frame, rolling backward through zero, until the earliest point of the recording is reached at “now”+1.
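In other words, the controller position maps to an offset somewhere between one frame (live) and the full buffer minus one (the oldest surviving frame), which is then subtracted from the record counter with wrap-around. A sketch with invented names:

    // dialPosition runs from 0.0 (live, "now"-1) to 1.0 (oldest frame, "now"+1);
    // bufferSize is the total number of frames in the rolling recording.
    int playheadFrame(int nowFrame, float dialPosition, int bufferSize) {
        int offset = 1 + static_cast<int>(dialPosition * (bufferSize - 2));
        return ((nowFrame - offset) % bufferSize + bufferSize) % bufferSize;
    }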

I added a video effect which becomes more visible the farther back in time you go — initially it was a feedback loop which caused motion blur, but I finally went with more visible and less processor-intensive black-and-white, with increased brightness and contrast. I’d still like to add a blur or luma increase during scrubbing, independent of distance from present, but that’s for a future version.

The matrix files are recorded to a solid-state USB 3 drive. I also added a rotate-180-degrees function (jit.rota) for when the incoming webcam is clamped upside down to the ceiling.

DVM_front

Controlling

The controller is the fun part. It’s a wooden box resembling a clock, with a wooden hand which controls the playback.

The design was inspired by the Pimoroni Timber Raspberry Pi enclosure, using stacks of laser cut wood. I used 1/8″ ply, which ended up taking close to an hour of laser time. It turned out well, but I might go for a different approach next time.

The face and back are 1/32″ veneer finished with Briwax, which allow two hidden LEDs to shine through (“now” and “before” indicators).

The clock hand is attached to a linear 10k potentiometer, which is wired to a Teensy 3.0 development board running standard Arduino code (take a look at the code here), powered by an 850 mAh LiPo battery (it's a 3.3V system). The Teensy sends serial data to the Max patch via a pair of XBee radios; the receiver XBee is attached to the recording computer with an FTDI cable and XBee adapter kit. The whole thing is mounted on an Adafruit perma-proto board.
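On the controller side, the sketch just reads the pot and streams its value over the radio. A minimal version (the pin and update rate are placeholders; the actual code is linked above):

    void setup() {
      Serial1.begin(9600);             // XBee on the Teensy's hardware serial port
    }

    void loop() {
      int position = analogRead(A0);   // clock-hand pot, 0-1023 across its travel
      Serial1.println(position);       // stream the playhead position to the Max patch
      delay(50);                       // roughly 20 updates per second
    }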

DVM_open

George Del Barrio invited me to premiere the Déjà View Master at Vanderbilt Republic’s 2013 Art From The Heart photography show and event, curated by Renwick Heronimo. I had it set on a 70-minute loop, with the camera clamped to the ceiling, and monitor behind a transparent corrugated wall.

I may experiment with other controllers in future versions. The original idea was to use the position of a beer on a bar as the playhead, but I could see that ending with spillage. The wooden controller is pretty intuitive, as long as it's obviously associated with the monitor. Another alternative to the current design is to use a rotary encoder, so the hand can keep spinning as you wind through time.

Also check out the Photographic Monument that I programmed with Mark Kleback for the show.

[UPDATE 3/23/14: The Déjà View Master has evolved into Azimuth for the “Being and Time” art show.]

Talking Opera at ITP

I've been getting my hands dirty at ITP Camp. NYU Tisch School of the Arts' Interactive Telecommunications Program is a two-year grad program focused on technology in the arts, and Camp is where they let working professionals crash the party for the month of June.

There was a focus on many of the tools I used in Lotus Lives — Max, VDMX, MadMapper, After Effects, laser cutters, etc. Technical workshops are useful, but I always appreciate hearing stories of real-world application. So I gave a presentation about bringing everything together in an actual performance.

The fun part was breaking out the 1:24 scale model of the concert hall where the premiere performance was staged. I used it during development to help visualize how the projections would fill the space — back then I'd projected rough versions of the video, but this time I projected the final elements, including a recording of the musicians on stage.


I also covered:

– Designing a concept that would be appropriate to the story and feasible with our resources.
– Creating a playback system that could adapt to the performers in a changing, live situation.
– Designing the set for the video, and vice versa.
– Shooting the content: gathering images on location in Malaysia, designing and building shadow puppets (with lasers), and collaborating with dancers.
– Editing and compositing the content.
– Prepping the video for mapping.
– Designing the playback for six projectors, and making it as fail-safe as possible for a live performance.

It’s the first time I’ve covered the breadth of the project at once. I’ve already written up a post on the playback system here, and will cover other elements when I get a chance.

Lotus Lives Projection Documentation

The premiere performance of Lotus Lives took place in the Middlebury College Concert Hall, which is a beautiful space, but not built for rigging lights or set. It does have a curved balcony running around the entire room, and architectural beams that could support high-tension cable for hanging screens. I built a scale model and started from there.

CFA-model

My original plan was to have 3 to 5 projectors spread out around the balcony, with one video playback computer per projector. Each computer would run a Max/Jitter patch, with video cues triggered from a networked central control computer. This would allow each system to play back a smaller video file, reducing the chances of slow playback or crashing, and also mean shorter runs of expensive cable.

In the end, I went with the more-eggs-in-fewer-baskets approach, slicing out video from two computers to five projectors, which were almost all within arm’s reach of the control booth. I figured this would keep the different screens in perfect sync, and require fewer separate movies, making them faster to render and easier to manage.

Key to this plan was garageCUBE's MadMapper projection mapping software. It uses the Syphon framework to let different Mac applications share video frames directly on the graphics card. Mapping is certainly doable within Max/Jitter, but I know that garageCUBE's Modul8 VJ software has rock-solid under-the-hood performance, and MadMapper's interface is friendlier than anything I could come up with in the time frame. I downloaded the beta version of MadMapper a minute after it was released, started using it with VidVox's VDMX software for live VJ gigs, and loved the results.

control-setup

My final setup, in order from user input to image output, was this:

1. AKAI APC-20 MIDI controller
2. into a Max patch on a MacBook Pro, which sent the custom MIDI data out over the network, and back to the controller for visual feedback (the APC-20 only officially plays with Ableton, but the buttons are controlled by a range of MIDI signals — more on that in a separate post).
3. Another Max patch, running on every playback computer, received the MIDI data from the network — in this case, just the same MacBook Pro and a Mac Pro tower, connected with a crossover cable. This patch sent the MIDI signal to VDMX.
4. VidVox’s VDMX for video playback. The programs on each computer were identical, but loaded with different video files. One controller, two (or more) computers.
4a. The media files were on external G-RAID drives. I swear by those. eSATA connection to the MBP, FireWire 800 to the tower (it was an older, borrowed machine).
4b. I used the Apple ProRes 422 (not HQ) codec for the movies. They were odd resolutions, larger than SD but smaller than 1920×1080, at 23.976 fps. I usually use Motion JPEG for VJ work to keep the processor happy, but found that ProRes 422 was something the Macs could handle, with a nice, sharp image.
4c. Several sections included audio playback as well. I went out through an M-Audio firewire box to the sound mixer’s board.
5. Out from VDMX to Syphon
6. From Syphon into MadMapper
7. From MadMapper out to a Matrox TripleHead2Go (digital version) hardware box. The computer sees it as a really wide monitor, but it splits the image out to two or three monitors/projectors.
8. TripleHead2Go to the projectors. The A-computer projectors were a trio of 4000 lumen XGA Panasonics with long lenses, and B-computer projectors were a 5500 lumen WXGA projector and 3000 lumen Optoma on stage, doing rear-projection on a set piece that looked like a dressing room mirror (with border). That was at the end of a 150′ VGA cable run, with VGA amp. Worked well.
9. There was also a 6th projector hooked up to a 3rd computer, which played a stand-alone loop at a key moment in the action. This filled the ceiling with rippling light, turning the visuals up to 11.

The video was broken down into sections ranging from 30 seconds to 5 minutes long. The end of each section had a long tail of looping video, which would play until receiving the MIDI trigger, and then dissolve to the next clip. It would work like this:

Let's say the fourth song has just begun. Clip 401 is playing in the "A" layer. The video crossfader is on A, so that's being projected. This is happening on both computers. I press a button on the controller to load clip 402 into the B layer. It's ready. The performers reach the cue in the music, and I press the GO button. Clip 402 starts playing, and VDMX crossfades from A to B. The crossfade speed is determined by one of the sliders on the controller, ranging from 0 to 5 seconds. Once layer A is clear, I press the eject button and the computer stops playing clip 401. Then I press the button to load clip 403 into A, and stand by for the next cue.

In addition to this, I had a few extra layers that could be controlled manually. This way I was able to add new elements during rehearsal, and control some things by hand depending on the feel of the performance.

I found that even with VDMX set to pre-load all video into RAM, the visible layer would skip for a split second when the second movie was triggered, but just on some clips. It turned out that the first 20 or so clips to load when the program was launched would play smoothly, but later ones wouldn’t. This is less of a problem now with SSD playback drives, and maybe with a newer update of VDMX, but I got around it by putting clips with more movement at the top of the media bin.

One other hitch is that the first Mac Pro tower that I borrowed had two 128 MB graphics cards, but the software could only use one of them. I traded it for a 256 MB card and all was well. Again, not a concern with newer computers, but something to look into if building a system with multiple graphics cards.

All in all, everything worked out well. For future productions, I plan to finish writing my Max/Jitter patch to include playback, and make the eject/load and clip selection process more automatic and fool-proof. Single-button operation, or tied into the lighting board. The MadMapper license is limited to two machines, but like Max (Runtime), VDMX can run a project on any number of machines — the license is only required to make changes and save. All of these programs are fantastic.