All posts in Technology

Lotus Lives New Video Player

Earlier this year, I upgraded the programming and staging options for my projections in Su Lian Tan’s opera, Lotus Lives. It had been running on the live performance video software VDMX (which I love), but I wanted to create a more customized setup with an interactive cue sheet and single-button operation. I made a patch in Max, and ran it during a performance in Boston without a hitch.


The cue sheet is a table (Max jit.cellblock object) with the columns:

cue number
measure number (in the score)
cue notes (when to trigger the next cue)
active screens
whether the media is a still image or movie
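For illustration, the same table could be sketched as rows of data. This is a hypothetical Python stand-in for the jit.cellblock table; all cue numbers, measures, and notes here are invented:

```python
# Hypothetical stand-in for the Max jit.cellblock cue sheet.
# Each row mirrors the five columns described above; the values are made up.
cue_sheet = [
    {"cue": 1, "measure": 1,  "notes": "GO on downbeat",      "screens": "A",   "media": "still"},
    {"cue": 2, "measure": 14, "notes": "GO as curtain opens", "screens": "A B", "media": "movie"},
]

def next_cue(sheet, current):
    """Return the row following the current cue number, or None at the end."""
    rows = [r for r in sheet if r["cue"] > current]
    return min(rows, key=lambda r: r["cue"]) if rows else None
```

Single-button operation then amounts to calling something like `next_cue()` each time GO is pressed.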

Here is my documentation for the video playback:


Lotus Lives is a chamber opera for two singers, a brass quintet, and percussion.

Video plays throughout the performance, sometimes acting as the set, and other times taking center stage.

It is designed to be flexible. A basic concert performance uses only one screen, plus audio playback, while the full staging uses multiple projectors with video mapped onto 12 surfaces. And it is possible to stage versions with complexity in between. It should adapt to fit the performance space.

The video is broken into sections ranging from 30 seconds to 5 minutes long. The end of each section has a tail of extra video, which will play until the video operator launches the next clip. This way, the video remains in sync with the live performers, who don’t have to tailor their actions to the technology.

The playback software has two parts: the Controller and the Player.

The Controller is like a smart remote control, operated from a single computer to trigger the cues. The Player is the program that actually plays the media clips for projection.

Both can be on the same computer, or it is possible to have Players on multiple computers, one for each projector, controlled from a single Controller over a network.
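Conceptually, the Controller just sends small cue messages that each Player acts on. Max handles this networking with its own objects; the sketch below is a hypothetical Python stand-in for the idea, and the message format and function name are mine:

```python
import json
import socket

def send_cue(sock, cue_number, fade_seconds, players):
    # Broadcast one cue message to every Player on the network.
    # `players` is a list of (host, port) pairs, one per Player computer.
    msg = json.dumps({"cue": cue_number, "fade": fade_seconds}).encode()
    for host, port in players:
        sock.sendto(msg, (host, port))
```

Each Player listens on its port and triggers the matching clip and fade when a message arrives.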

The software is written in a program called Max (Max/MSP/Jitter). If projecting onto multiple surfaces from one projector, additional video mapping software is needed. Technical details about the software and mapping are below.

It’s also possible to run this media on other performance playback software (Isadora, Resolume, VDMX, modul8, etc.), in which case the fade timing would need to be set according to the cue list.

The Set


The video surfaces are:

(A) a large central screen above or behind the performers.
(B) four banners on either side of this screen (for a total of eight).
(C) a dressing room “mirror” set piece (best as rear-projection).
(D) projection across the width of the stage, onto a handheld scrim during the ballet sequence, and onto the performers as a lighting effect at other times.
(E) projection onto the walls and ceiling of the performance space, to fill the venue with rippling light during the climax of the Folktale.

The video is meant to be immersive, and the size and placement of the surfaces can be tailored to each production. The only things that need to be maintained are the aspect ratio of each surface and the relative distance between the banners.

The aspect ratios are:

(A) 1.78:1 (16:9)
(B) 1:4 for each banner, to be spaced 1/2 the banners’ width from each other, four on each side of surface A.
(C) 1.14:1, which is a 1:1 square with an additional border on the left, right, and top.
(D) 4:1. The handheld scrim should be a white or gray mesh suitable for projections, about 7′ high and the width of the stage, or at least 30′.
(E) This is an abstract rippling texture meant to fill as much of the performance space’s ceiling and walls as possible at one point during the Folktale. While the source movie is 16:9, the projected aspect ratio does not matter.
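As a worked example of the banner geometry above (a minimal sketch; the units are arbitrary):

```python
def banner_side_span(banner_width):
    # Four banners per side, each spaced half a banner width from the next:
    # 4 banners + 3 gaps of width/2.
    n, gap = 4, banner_width / 2
    return n * banner_width + (n - 1) * gap

def banner_height(banner_width):
    # Each banner is 1:4 (width:height).
    return 4 * banner_width
```

So 2-foot-wide banners would be 8 feet tall, and each group of four would span 11 feet beside surface A.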

The Media

The “media” folder contains QuickTime movies and audio for playback. These use the Apple ProRes 422 LT codec, which has a lower data rate than the master clips (saved as ProRes 422 HQ) but maintains quality.

There is also an audio folder which contains .aif audio files, which are to be updated with recordings by the performers. See “Setting Up Audio Clips” below for details.

There are four versions of the video, which are configurations for different projection setups.

V1: This is for running projections from multiple networked computers. There is one video per screen, with the exception of surface B.

For surface B, all eight banners are composited onto this video, so it will need to be sliced up with mapping software.
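To illustrate the slicing, here is a hypothetical sketch. The actual layout of the composite isn’t specified here (the included MadMapper file defines it), so this assumes eight equal vertical strips packed side by side:

```python
def banner_slices(frame_width, frame_height, n=8):
    # Assumed layout: the composite packs the eight banners side by side
    # as equal vertical strips. Returns (x, y, width, height) per banner.
    w = frame_width // n
    return [(i * w, 0, w, frame_height) for i in range(n)]
```

Mapping software does exactly this kind of crop, routing each rectangle to its own surface.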

V2: This is the version for one screen only. Critical elements that would be lost by eliminating surfaces B-E are included on this single, main-screen video.

V3: This has all the surfaces composited onto one large movie, to be mapped onto multiple screens from one projector, or multiple projectors from one computer.

V4: This is surfaces A and E composited into one movie, since it’s likely that a single projector can be used for both surfaces. Mapping is required.

I have prepared and included MadMapper files for V1-B, V3, and V4.

Setting Up the Computer

While Max runs on Mac or Windows, I have only tested this patch for Mac. Additionally, the output for mapping with MadMapper uses a Mac-only framework, called Syphon.

You will need to install:

– Max (version 7 or later)
Max is free when used only to run files like the Lotus Controller and Player; a paid license is needed only to save changes after the free trial period.

– Apple ProRes codec
Probably installed on any Mac with QuickTime; also available for download from Apple.

If mapping the video output:

– Syphon for Jitter
Syphon is a Mac OS framework that allows multiple applications to share the same space in video memory, without a hit to performance. This is how the video gets to the mapping software.

To install Syphon, unzip the package, then move the Syphon folder into Users/[user]/Documents/Max 7/Packages

– Mapping software of choice
I use MadMapper.
It does require a paid license, but it’s easy to use and runs beautifully. There are other options (search for “projection mapping software”). Max can also handle mapping, although this Player isn’t set up for it.

Setting Up Audio Clips:

In addition to movies 301 and 401, which have stereo audio tracks, there are four more separate audio clips that will play back in sync with the video. These are recordings of the performers, and need to be prepared for each production.

The reference folder of the Lotus hard drive contains QuickTime movies of the subtitled narration, which can be read karaoke-style for exact timing.

Once the new audio files are placed in the media/audio folder of the playback drive, with the specified file name, the Player will play them back at the correct point during the performance.

The Lotus Player

This runs the video and audio for Lotus Lives, controlled by the Lotus Lives Controller. It should be on the computer that’s hooked up to the projector.

Double-clicking Lotus Player.maxpat will launch Max, and open the Player.



1. Select which surface video you want to run.

2. Click CHECK FILE PATHS to make sure the Player can find the media. If the media is on a drive other than “lotus,” click Set Movie / Audio File Path and find the folder with the media.

3. If the Controller is on the same computer, leave “controller” set to “local.” If it’s on a different computer on the same network, select “network.” Be sure “network” is selected on the Controller too.

4. Set the video output:

4a. If projecting directly from the Player, move the “projector” window to the projector display. If the projector is attached when launching the Player, the “projector” window will already be on the second display.

4b. If mapping the video output with a program that uses the Syphon framework (like MadMapper), select “Syphon,” then launch the program and use that for display.

5. Test the audio, and set levels for the individual clips. From the Controller, select cue 301 or 401 for movies with audio. Press “play” below the levels sliders on the Player for the additional clips.

5a. The audio clip levels will not save when the Player is closed, but you can make note of the numerical setting, and adjust it the next time you launch the Player.

5b. The beat in 601 should be played live, so by default it will not play; but it can be cued for playback too by selecting the toggle next to the levels slider.


window visible – toggles whether the “projector” window is visible. Turns off if Syphon is selected.

video render – refreshes the video screen. Video will not appear to play if this is off.

audio – turns audio playback on and off.

video fullscreen – toggles whether the “projector” window is fullscreen. Also activated by the escape key.

hide menubar & cursor on fullscreen – use this option if presenting the window on the same screen as the Player, i.e., if the projector is the only display.

Load Calibration Grid – this will load a calibration grid for the selected surface.

play, pause, restart, eject – controls playback of the video in either bank.

slider – A/B fade. Operates automatically when the GO button is triggered on the Controller.

“X” toggle next to audio sliders – enables or disables individual audio clips.

play for audio sliders – manual playback of audio clips, for testing purposes.

The Lotus Controller

This controls the video Player(s), which can be on this computer, or networked over several different computers.

Double-clicking Lotus Controller.maxpat will launch Max, and open the Controller.



1. Launch the Controller and the Player(s)

2. Set the settings on the Player(s)

3. START THINGS RUNNING by pressing the “Run” button

4. Go to the first cue by pressing “go to beginning of show,” or the GO button several times, until the CURRENT Cue # is “1 – BLACK”

5. Press GO or the space bar to trigger the next cue

Duration is an estimated countdown to the next cue. Actual time will vary depending on the performance, but it will let you know when to be ready.

Also keep an eye on Cue Notes, which is a description of when the next cue occurs.


Black – toggles a fast fade to / from black, and pauses the active movie.

Grid – activates a calibration grid on all Players.

CURRENT Cue # and Description – what’s playing now.

NEXT Cue # and Description – what’s cued up to play when GO is pressed. NEXT Cue # is a dropdown menu, so you can jump directly to any cue.

Fade is the duration of the crossfade from the current to the next clip. This can be adjusted manually, but is set automatically according to the cue list.

Measure – The measure of the next cue in the score.

play – plays the active movie

pause – pauses the active movie

restart – goes to the beginning of the active movie

eject – clears the active movie from the Player

previous and next – move backward and forward through the cue list, changing the next cue to be loaded.

go to beginning of show – loads the first cue up next

open cue list – this is the cue sheet in table form, which is where all the playback data is stored. Editing this will affect the show’s playback.

Local / Network – If the Controller and the Player are on the same computer, keep the lower-right setting on local. If networking several computers, select network on the Controller and all Players. It is recommended to have a dedicated network, wired if possible.

Standup Reminder

I rarely sit down from call time to wrap during film production. But when editing, the reverse is true, and I find that’s far worse for the body.

Here’s a reminder program I built to run in the corner of my desktop.


When the blue dial lights up (every 12 minutes), I drink water; the green dial (30 minutes), I stand up and stretch for a moment; yellow (55 minutes), I take a walk and look outside.


Times are adjustable. If you ignore a reminder and it goes around twice, it turns red. Click on the dial to reset.
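The dial logic boils down to counting laps since the last reset. Here’s a sketch of the idea in Python (the actual program is a Max patch; the function, state names, and interval table are mine):

```python
# Default reminder intervals, in minutes, from the description above.
REMINDERS = {"water": 12, "stand": 30, "walk": 55}

def dial_state(minutes_since_reset, interval):
    """'off' before the first lap, 'on' after one lap,
    'red' once the dial has gone around twice unacknowledged."""
    laps = minutes_since_reset // interval
    if laps >= 2:
        return "red"
    return "on" if laps == 1 else "off"
```

Clicking a dial just resets its `minutes_since_reset` to zero.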

It works!

The Standup Reminder uses a few simple UI elements in the graphical programming language Max. I was able to export it as a standalone Mac app for anyone who wants to give it a go, although it’s 73 MB for some reason (the price of using Max for something so simple). Download that here.

If you have Max, download the patch here.

Or copy and paste this into a patcher window:


Highchair Blinky

Here’s my third baby/toddler blinky toy, and first official entry to the canon of IKEA hacks… the blinking high chair, aka the Antiloputer.

This is four arcade buttons and four smart full-color LEDs, drilled into an IKEA Antilop high chair (a spare tray is $5). The brain is a 5v Adafruit Pro Trinket, powered by four AA batteries.

The default mode is the same as the blinky my dad made for me long ago: press the red button and the red light lights up, press the green button and the green light lights up, and so on.

Holding down all the buttons at once lets you switch modes: mode two is like mode one, except each light stays on until the next one is pressed. Mode three is like a typewriter, and mode four is a step sequencer.
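The first two modes and the mode switch can be sketched as a small state machine. This is a hypothetical Python stand-in, not the actual Trinket sketch:

```python
class Blinky:
    # Modes: 1 = momentary (light while pressed), 2 = latching
    # (light stays until the next press); 3 and 4 omitted here.
    def __init__(self):
        self.mode = 1
        self.lit = None  # index of the light currently on

    def press(self, button, held_buttons=None):
        held = held_buttons or {button}
        if len(held) == 4:            # all four buttons held: next mode
            self.mode = self.mode % 4 + 1
            return
        self.lit = button             # pressed color lights up

    def release(self, button):
        if self.mode == 1 and self.lit == button:
            self.lit = None           # momentary: light goes out
        # mode 2: latching, so do nothing on release
```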

This was my first time programming with the FastLED library to control the LEDs. Download the code here.


I suspect the high chair wouldn’t make it through airport security — below is what it looks like inside… Normally, baby legs are protected from the electronics (and vice versa) by a piece of heavy cardboard.


The buttons are attached between the regulated 5v power rail on one side, and pins 10-13 on the other, with a 10k ohm resistor running to ground from each. The LED data is on pin 9, and power for the LEDs also comes from the 5v power rail, through the Trinket. I did this since the 6v coming directly from the batteries is too much for the NeoPixels.

The Trinket can only provide 150 mA, which is pushing what four LEDs will draw at max brightness. Fortunately, the FastLED library has a line where you can specify the max current, and it will dim the LEDs as needed. Genius.
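The arithmetic behind that dimming can be sketched like this. It's an illustration only, not FastLED’s actual implementation, using the numbers from the paragraph above:

```python
def power_limited_brightness(requested, n_leds, ma_per_led_full=60, budget_ma=150):
    # Full white on all LEDs draws roughly n * 60 mA. If the requested
    # brightness (0-255) would exceed the current budget, scale it down.
    draw = n_leds * ma_per_led_full * requested / 255
    if draw <= budget_ma:
        return requested
    return int(requested * budget_ma / draw)
```

Four LEDs at full blast would want about 240 mA, so the limiter caps brightness at roughly 150/240 of the request.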


The LEDs are below the high chair tray, aimed up through small holes. I hot glued translucent plastic containers from an art store over the holes, which glow nicely. Be warned, babies are stronger than hot glue, although there’s no danger if the containers get ripped off. You could probably skip the holes and containers, since the LEDs are bright enough to show through the tray.

Also check out the BABYTRON for more blinking.


My grandfather built a panel of light switches from WWII surplus for my dad as a toddler, and my dad built a panel of buttons connected to indicator lights for me. So now, I’ve built the Model C, a set of light switches for my daughter. It’s easier than holding her up to play with the house lights.


These switches are connected to white LEDs, powered by two C batteries. The switches are hot glued to holes drilled in a board, covered with plastic containers from an art store.

I used the LED wizard to figure out what resistor to use. Burnt out LEDs stink, literally.

Next up, the Model D, using programmable LEDs in a high chair.

[UPDATE] The lights are in holes beneath translucent plastic containers. I put some cheap plastic Easter eggs inside, and voila! Now in color!


Toddler’s Cardboard Computer

My one-year-old daughter looooooves buttons, and always wants in on the action when someone’s using a computer… so I transformed her new car seat box into the BABYTRON, inspired by the Burroughs B205.

It features a keyboard, arcade buttons, and a strip of LEDs from Blinkinlabs.

The LEDs have a built-in controller, so no wiring was required beyond USB power. In fact, the whole computer was assembled only using glue, tape, scissors, and string. The buttons don’t control anything — but they do make a satisfying “click” when pressed.

There are several ways to talk to the Blinkinlabs LEDs. I’m familiar with Adafruit’s NeoPixel library, so I told the Arduino IDE that it was a Leonardo, assigned data to pin 13, and uploaded my sketch. Code here.

I kept the brightness of each LED below 20 out of 255, because they’ll hurt your eyes if fully on. It’s also running off a portable USB battery pack, so I wanted to keep the power draw fairly low.

I did consider adding a tape drive, using a paper plate behind a round plastic cover from a food delivery container, powered by a LittleBits motor. Next time.

It turns out that this is ideal for someone who’s able to stand up while holding onto something, but can’t walk on their own. My daughter’s younger friends love it. But she’s now savvy to the buttons not doing anything, and is most interested in the power button — the one that mom and dad push. Time to build something else and recycle the BABYTRON.

New Yorker Sidewalk Projection

Andrew Baker and I created the visuals for the poetry segment of The New Yorker Presents pilot, by projecting archival video onto a sidewalk, stoop, and fence.


The segment is an excerpt from Matthew Dickman’s poem “King”, which begins:

… So I put on my black-white
checkered Vans, the exact pair of shoes
my older brother wore when he was still a citizen in the world,
and I go out, I go out into the street
with my map of the dead and look for him…

The poem is recited by Andrew Garfield in a studio setting, intercut with home movie footage of a different young man and older brother who had passed away. Director Dave Snyder wanted to give the video a stylized treatment, so I suggested going out into the street, literally, as described in the text.

Here’s my projector rig booting up on the sidewalk:

And here’s a quick clip of the final product from the show’s trailer:

We shot the video with a 5DMkIII on a slider, with Zeiss and Canon lenses.

I also edited the final segment. The entire episode is streaming in the 2015 batch of Amazon Original Series pilots, which you can watch here. Watch the entire trailer and read more about The New Yorker Presents on the Jigsaw Productions website.

(Wire) Stripper Cake

This is a case of knowing your audience: my friend Matt is deeply involved in the world of creative DIY electronics, and appreciates a good (i.e., terrible) pun. So for Matt’s bachelor party, fellow groomsman Ethan and I arranged for a stripper to pop out of a cake — but in this case, it was a wire stripper (an electrician’s tool) animated by a jumble of electronics, in a cake which we built.

After dinner, we lit the candles on the cake (it was also Matt’s birthday), and in the dim light of the BBQ joint, the illusion was convincing.


But once Matt made his wish and blew the candles out, the top opened up revealing a stripper dancing on a stage, complete with flashing lights and music!


The cake’s frosting is a sheet of white foam material from an art store, held on with Velcro stickers. The decorative chocolate frosting is brown caulk.

Beneath the frosting is a laser-cut plywood frame, with threaded dowels, nuts and washers. The top has a hinged lid with the stage slotted and glued inside.


Cake plans

A hidden power switch in the back provides 6V from four AA batteries to an Arduino Duemilanove microcontroller, which we programmed to run the show.


First, all the lights flash red, as a motor with built-in gear reduction box pulls a string attached to the lid’s counter-lever, opening the lid and raising the stage. (We used this motor driver).



Once the stage is up, the lights change to a cycling rainbow pattern. We used four of Adafruit’s 8-NeoPixel strips, all chained together in series. The first NeoPixel strip is on stage, lighting up the wire stripper.


Other NeoPixel strips are on the middle ring, lighting the inside of the cake in all its mad scientist glory. The white foam frosting is translucent, so the whole cake glows brightly on the outside.


Beneath the stage, a motor with an elliptical wheel makes the wire stripper kick its leg and dance the can-can.

The original plastic geared motor for this section melted when we tried to solder on the power cables. The Radio Shack replacement didn’t have enough torque. So we used a spare littleBits motor. It later burned out because we’d run out of time and didn’t regulate the voltage, but by then it had done its job.


Music is from a chip I ripped out of a greeting card years ago, wired up to a small speaker. It plays the theme from “The Good, the Bad and the Ugly.”


When you’ve had enough of the show, you can reach in through the top and press a microswitch (center of the picture, on the middle ring), which stops the dancing, turns all the lights to blue, and closes the cake up again.


We built the whole thing in very little time, and it has a cobbled-together, hot glue aesthetic. Not the finest engineering, but Matt loved it, which is all that matters.


Building a Ball Jar Lantern: Tutorial

Here’s how to build an LED lantern in a Ball jar, programmed to flicker like a candle. It involves a little bit of soldering and a little bit of programming, and will introduce you to Adafruit’s NeoPixel and Flora products.



From Adafruit:
• Flora NeoPixel 4-pack (you just need one LED)
• Gemma Microcontroller
• 2x Coin Cell Battery Holder with Switch
• CR2032 battery (buy two)

From Elsewhere:
• Ball Jar and Lid
• #216 Diffusion or Tracing Paper
• Solid Core Hookup cable
• Solder
• Double-sided foam tape

Tools Used:
• Soldering Iron
• Soldering “Helping Hands”
• Wire Stripper
• Flush Cutters
• Tin Snip
• Ruler
• Scissors


The NeoPixel is Adafruit’s brand of the WS2812 Integrated Light Source, which has a tiny driver built into an RGB LED. They’re addressable, so you can run a whole chain of them from one data pin on an Arduino or other microcontroller board. Plus, Adafruit has written a library, so the code is extremely simple.

For example, if you have three pixels chained together and want to make them green, yellow, and red like a stoplight, you would write:

strip.setPixelColor(0, 0, 255, 0);
strip.setPixelColor(1, 255, 255, 0);
strip.setPixelColor(2, 255, 0, 0);

The four arguments are: pixel number (where the first pixel in the chain is 0), red value (from 0 to 255), green value (0 to 255), and blue value (0 to 255).

Then strip.show(); executes those settings.

The NeoPixels are sold in different layouts, from flexible strips to matrices to rings. In this project we’ll use the Flora Series 2 NeoPixel, which is a single LED with four pads: power (+), ground (-), data in, and data out.


The Flora series is mainly designed for wearable projects, so the pads are well suited for sewing onto clothing with conductive thread. They’re also very easy to solder.


The controller board we’re using is from the same series. It’s a Gemma, which is the tiny, inexpensive version of the Flora, also designed for wearables.


This board runs most Arduino code, and is programmed from the Arduino IDE (programming environment). The thing is, it’s so small that it needs to use a special bootloader to upload code.

The easiest way to do this is to download Adafruit’s Arduino IDE.

As a bonus, this comes with the NeoPixel library installed. If you’re using NeoPixels with a full-sized Arduino board, you’ll need to install the library separately.

If this is your first time using the Arduino IDE and you’re on a Mac, it may not let you launch the software. Go to System Preferences / Security & Privacy / General, and change “Allow applications downloaded from:” to “anywhere.” You may need to click on the padlock to select this button.

We’ll come back to this, but now it’s time to connect the NeoPixel.


First we need to attach wire leads to the Flora NeoPixel, and then attach that to the board.

Cut three pieces of solid core wire, around 3 inches long, and strip 1/4″ or so off one end of each.


From the back of the pixel, solder wires to: ground (-), power in (+), and data in. Data in is the one with an arrow pointing from the wire to the pixel. Data out has an arrow pointing out.

It’s easier if you solder one wire on at a time, using helping hands to hold the parts in place. Be sure to clip onto a solder pad on the NeoPixel so you don’t crush any tiny components.


In the pictures, I use black for (-), red for (+), and yellow for data.

With a flush wire cutter, snip off the short wire ends.


Now, clip the wires to the same length, about an inch and a half: long enough for the pixel to be centered above the Gemma with the wires reaching the solder pads around the edge. Strip 1/4″ off the ends.


Solder the black (-) wire to GND, the red (+) wire to 3Vo, and the yellow (data) wire to D0. Snip off the wire ends.




The Flora NeoPixel is rated at 5V, but a small number of pixels can be powered by a lower voltage like the 3.3V regulated output from the Gemma’s 3Vo pin. This pin can provide up to 150mA of current. The Flora NeoPixel will draw 60mA at its brightest, so if you build a project with more than two pixels, they should be powered separately.
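The current budget works out like this (a trivial sketch of the arithmetic above):

```python
def max_full_brightness_pixels(budget_ma=150, per_pixel_ma=60):
    # The Gemma's 3Vo pin supplies up to 150 mA; each Flora NeoPixel
    # can draw up to 60 mA, so only two fit at full brightness.
    return budget_ma // per_pixel_ma
```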

Also for larger projects, Adafruit recommends adding a resistor between the data pin and the data in, and a capacitor bridging the power and ground on the strip. And, you need to make sure that the data voltage is close to the power voltage: running a 5V strip from a 3.3V Gemma board might be unreliable.

Check out Adafruit’s NeoPixel Überguide for details about all of this.


Download the Arduino code here:

“NeoPixel_Basic_Setup” is the basic setup you need for any NeoPixel project. It calls up the library and initializes the NeoPixel strip. The only things you would need to change for most projects are the data pin number on the board (here it’s soldered to D0, so we have it set to 0) and the number of pixels in the strip. The content in the void loop() section is just a sample.

“CandleWorkshop1” steps the pixel through a sequence of different colors.

Upload this file to the Gemma board to test it out.

The process of uploading is slightly different than with a normal Arduino. In the Adafruit Arduino IDE menus, select Tools / Board / “Gemma 8MHz,” and Tools / Programmer / “USBtinyISP.”

Now, connect the Gemma with a USB mini cable. There is a tiny button on the Gemma, and pressing it enters bootloader mode for 10 seconds. The red light lets you know that it’s listening. While it’s in this mode, press the upload button on the Arduino IDE.

For more detailed instructions, see Adafruit’s documentation. They talk about the Trinket, which is the Gemma’s less-wearable sibling, available in 3.3V or 5V versions.


If all goes well, your pixel should be changing colors. Try uploading different color combinations and timing. The RGB values can be anywhere from 0 to 255, not just the extremes used in the sample.

At this point, you can upload “NeoCandle_1” to the Gemma and skip ahead to ASSEMBLE THE JAR. But if you want more of an explanation of what’s going on in the code, read on.

“CandleWorkshop2” introduces a few more concepts that are used in animating flicker.

First, the color sequence from the previous sketch has been removed from the main loop and turned into a function with a new name: colorStep(). Now, calling colorStep() in the main void loop() runs through the sequence once. Additionally, it accepts a variable, an integer named “pause,” so typing colorStep(2000) holds each color for 2 seconds.

Second, there is a function called fader(), which is a standard for-loop, fading the red and green pixels from 0 to 255 and back down again.

Third, the function fadeRepeater() nests the fader() function, repeating it a variable number of times. All of these can be called from the main loop, keeping it tidy.

So with that in mind, load “NeoCandle_1” onto the Gemma. It’s much more complicated, but uses the same ideas.

It starts with an RGB mix of 255, 135, 15, which looks like a good candle-flame orange to my eye. This is set as variables at the top (redPx, grnHigh, bluePx), so it can be adjusted without having to go through the entire code. Then, the green value dips down and back up again, simulating the flicker of a candle; as it loses oxygen, it gets darker and redder.

burnDepth sets how many steps below grnHigh the green value dips during the normal burn effect.
flutterDepth sets the deeper dip for the more dramatic flutter effect.
cycleTime is how long each fade cycle lasts. The default is 120 ms, so the candle flickers about eight times per second.

The next set of values is calculated in the setup. There is a flickerDepth, which sits a little more than halfway between burnDepth and flutterDepth, and the delays are calculated from cycleTime and the number of steps green needs to dip. For example, if burnDepth is 14 and flutterDepth is 30, the fade effect would take over twice as long for flutter as for burn. To prevent it from slowing down, the delay time in the flutter for-loop is cut in half.
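The timing trade-off can be checked with a little arithmetic. This is a sketch of the idea, not the actual NeoCandle code, and the per-step delays are illustrative:

```python
def cycle_duration_ms(depth, per_step_delay_ms):
    # One flicker cycle dips `depth` steps down and `depth` steps back up.
    return 2 * depth * per_step_delay_ms

burn = cycle_duration_ms(14, 4)          # 112 ms, close to the 120 ms target
flutter_slow = cycle_duration_ms(30, 4)  # 240 ms: over twice as long
flutter = cycle_duration_ms(30, 2)       # 120 ms: halving the delay restores the pace
```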

Once all this math is taken care of, the animation becomes simple. In the main loop, you just call up the different flicker modes, with the duration you want each one to last, in seconds.


Once the code is uploaded to the Gemma, you can remove the USB cable and power it from a battery. The two CR2032 coin batteries provide 6V of power, which is more than you need, but very convenient. Put the batteries in the holder and plug it into the white power jack on the Gemma. The battery holder has a tiny power switch.

Next, cut a small wedge in the jar lid, big enough for the cable to pass through. Be careful of sharp edges!


Use double-sided foam tape to attach the Gemma to the inside of the lid, and the battery holder to the outside. You may want to loop the power cord around the Gemma, tucking it beneath or taping it down.




Finally, cut an 8″ by 4-1/2″ piece of paper to line the inside of the jar. I used tracing paper for my first jar, but prefer theatrical lighting diffusion (#216 is full diffusion, and is sold in sheets at theatrical or film lighting sources like B&H). Also cut a circle of white paper to rest on the bottom of the jar, to reflect light back up.



I use my lantern when wandering around at night. I also made a light painting using the same parts, minus the jar.

Self-Contained Projector Rig


I was recently asked to provide a video projection for the Proteus Gowanus ball, and assembled my most compact, self-contained projector rig to date. It involves Velcroing a Raspberry Pi computer to my homemade projector mount, which can be clamped anywhere with standard film grip gear. When plugged in, 1080p video plays in a loop. The projector is small but bright, a 3,000 lumen Optoma TW-1692.

Getting the video to start and loop automatically was fairly simple, but required several stops on the internet:

I used this script to loop video files in a folder. I put mine in one called /media.

I added -r four lines from the end, as suggested in one of the comments. The video was getting squished toward the bottom of my projector, and this fixed that.

omxplayer -r $entry > /dev/null

Then I made the script executable with the command:
sudo chmod +x /home/pi/

To run the video loop from the same directory, type
./videoloop

That worked, but I had to reboot the Raspberry Pi to get it to stop. !IMPORTANT! Before making it start automatically, make sure that you can edit rc.local from another computer via SSH while the script is running. Adafruit has a good overview of this here. That way, you can remove the following autostart line from rc.local when you want your Pi back.

To run the script on boot:
sudo nano /etc/rc.local
Before the final “exit 0” line, insert this line:

/home/pi/videoplayer &

Change the path accordingly. I left the script in the home directory, although I may move it at some point.

I loaded the video onto the device, strapped everything together — projector, mount, Raspberry Pi (in a Pibow Timber case), multiplug, USB power adapter, HDMI cable, safety cable, extension cord — plugged it in, and it ran. Just like that.

Azimuth Video Installation

Azimuth is a video capture and playback installation with simple wireless controllers. It’s based on the Déjà View Master, modified for the art show “Being and Time.”

The basic layout is a webcam feeding into a Mac Mini running a Max/Jitter patch, controlled by three wireless Arduino-based devices, and displayed on two monitors: a Zenith television on a table, and a projector mounted under the table, projecting onto the floor.


The screen normally displays a frozen image of the empty room. As you turn the clock-like controller, you scrub backward and forward through a constantly recording half-hour video buffer. Frames flash on screen as long as you're moving the hand, and the display fades back to the frozen frame when movement stops.

To add an element of unpredictability, it sometimes scrubs through a pre-recorded video of walking to the venue, or operates normally but with the image sliding off the TV and onto the floor via the projector.

Jitter works by using a matrix of pixel information to manipulate video. In Azimuth, the video buffer is accomplished by recording sequentially-numbered Jitter binary files (each matrix file is one video frame) to a hard drive, and separately reading the matrix files back and displaying them. Once the maximum number of frames has been written, the counter resets and writes over the first frames, letting you record in a loop.

Here are the basic components of this operation, set to record one minute at 15 fps (900 frames), and read back and display other frames, chosen by the number box on the right.


Grab this Max patch here


There is more math involved in keeping track of the current frame and rolling back through zero.
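That bookkeeping mostly reduces to modular arithmetic. A sketch, using the one-minute example above (900 frames; the function names are mine):

```python
MAX_FRAMES = 900  # one minute at 15 fps, as in the example patch

def next_write_index(i, max_frames=MAX_FRAMES):
    """Advance the recording counter, wrapping back to frame 0
    so the oldest frame gets overwritten."""
    return (i + 1) % max_frames

def read_index(write_index, steps_back, max_frames=MAX_FRAMES):
    """Index of the frame `steps_back` behind the newest one,
    rolling back through zero when the scrub passes the start."""
    return (write_index - steps_back) % max_frames
```
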

The projection on the floor is black, but an image appears wherever the camera picks up movement. The patch subtracts the 8-bit value of each pixel in the previous frame from the corresponding pixel in the current frame: where the pixels are the same, the result is zero and stays dark. Where they differ, the pixel lights up. It's like velociraptor vision.
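Jitter performs that subtraction on whole matrices at once; over plain nested lists of 8-bit values, the core operation looks like this (a hypothetical helper, not the actual patch, and I've used the absolute difference to keep results in 0-255):

```python
def frame_difference(current, previous):
    """Per-pixel difference of two frames given as rows of 8-bit values.
    Identical pixels come out 0 (black); changed pixels light up."""
    return [[abs(c - p) for c, p in zip(crow, prow)]
            for crow, prow in zip(current, previous)]
```
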


The button controller swaps the output monitors, putting the frame differencing on the TV screen and the freeze frame on the floor via the projector. The frame differencing is quite popular: the kids love dancing in front of it. But it has a practical function too. Part of the Max patch adds up the value of all the cells in the frame to determine whether anyone is in the room. The more movement, the higher the total value. If the number stays below a set threshold for more than a few minutes, it will assume that the room is empty and update the freeze-frame.
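That presence check can be sketched in a few lines (the function names and threshold are made up for illustration; the real patch sums the difference-frame cells in Jitter):

```python
def motion_level(diff_frame):
    """Total of every cell in a difference frame; the more movement
    in front of the camera, the higher the number."""
    return sum(sum(row) for row in diff_frame)

def room_seems_empty(recent_levels, threshold):
    """True when all recent motion readings stayed under the threshold,
    i.e. it is safe to update the frozen empty-room frame."""
    return all(level < threshold for level in recent_levels)
```
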

The other control box has a knob, which switches between five channels. Some channels read back from other matrix “banks,” where the five-digit matrix file begins with a different number. The main video loop is 10000-17200 (add 10000 to the counter value, max of 7200 for 30 minutes at 15 fps), a busy time saved from the show’s opening is 20000-27200 (add 20000), a pre-recorded movie of riding the subway and walking to the venue is 50000-53400, and so on. Another channel adds vertical roll to the live video feed, like an old TV. All adjust the brightness and color in some way.
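The bank arithmetic above can be sketched like so (the bank names are mine; the offsets and lengths are the ones listed in that paragraph):

```python
BANKS = {
    # name: (file-number offset, frames in the bank)
    "live":    (10000, 7200),  # main video loop, files 10000-17200
    "opening": (20000, 7200),  # busy stretch saved from the show's opening
    "walk":    (50000, 3400),  # pre-recorded subway/walk footage
}

def matrix_file_number(bank, counter):
    """Five-digit file number for a channel's current frame,
    wrapping within that bank's range."""
    offset, length = BANKS[bank]
    return offset + (counter % length)
```
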

Any controller takes over whatever is happening on screen, and the effect of pressing the button or turning the knob times out, reverting to the empty frame if left alone.


The boxes are all handmade oak frames with laser-cut semi-transparent acrylic front and back, echoing the Zenith television.

The big box has a rotary encoder with a clock hand attached, so it can spin continuously in either direction. The encoder is connected to a Teensy 3.0 microcontroller running Arduino code. It sends one command repeatedly while turned clockwise, and another while turned counterclockwise, over a serial connection via an XBee radio and adapter.

It’s powered by a 2,500 mAh LiPo battery (the Teensy and XBee operate at 3.3 V), and uses SparkFun’s Wake-on-Shake as a power switch. This device is brilliant: it has an accelerometer, and turns on when it senses any movement. It then stays on as long as power is applied to its wake pin. That power comes from one of the Teensy’s pins, which is programmed to stay high for 15 minutes after the controller was last used.

I used a breadboard with power rails removed to hold the Teensy and XBee, since it provides a flush, solid connection. Double-sided tape, industrial strength Velcro, and hot glue keep everything in place. The back panel is held on with machine screws.


The smaller boxes are similar, but use an Adafruit Trinket for the logic. One has a 10k linear potentiometer, and the other uses an arcade button. Each has a panel-mount power switch on the bottom of the box.


The receiver uses a Teensy 3.1, which relays incoming serial messages from the XBee to the Mac Mini over USB. I’d normally send a serial connection directly into Max, but since this installation needs to run reliably without supervision, I set the Teensy to appear as a standard keyboard. Messages from the controllers are sent as keystrokes, and the Max patch responds accordingly. This also made programming easier, since I could emulate controller action with keystrokes.

The receiver is housed in a spare Raspberry Pi case with a hole cut in the top for the XBee. I also added a kill button to stop the patch from running and quit Max by sending another keystroke. The Mac Mini is set to launch the Azimuth patch on startup, so between that and the kill button, no keyboard or mouse is needed from day to day.

Arduino code for the controllers and receiver is here.


The Mac Mini connects to the Zenith TV through a chain of adapters: Mini DisplayPort to VGA, VGA to RCA, and RCA to RF (on channel 3). The projector is configured as the second monitor, with a direct HDMI connection. I don’t recommend working on a complex Max patch on a 640 × 480 screen.

All in all, the installation runs well. Video is stored on a solid-state laptop hard drive in a USB 3 enclosure, and most of the video processing happens on the GPU using the Gen object in Jitter. Some people were tentative when using the controllers, but others dove in and had a good time.