All posts tagged ITP Camp

Building a Ball Jar Lantern: Tutorial

Here’s how to build an LED lantern in a Ball jar, programmed to flicker like a candle. It involves a little bit of soldering and a little bit of programming, and will introduce you to Adafruit’s NeoPixel and Flora products.



From Adafruit:
• Flora NeoPixel 4-pack (you just need one LED)
• Gemma Microcontroller
• 2x Coin Cell Battery Holder with Switch
• CR2032 battery (buy two)

From Elsewhere:
• Ball Jar and Lid
• #216 Diffusion or Tracing Paper
• Solid Core Hookup cable
• Solder
• Double-sided foam tape

Tools Used:
• Soldering Iron
• Soldering “Helping Hands”
• Wire Stripper
• Flush Cutters
• Tin Snips
• Ruler
• Scissors


The NeoPixel is Adafruit’s brand of the WS2812 Integrated Light Source, which has a tiny driver built into an RGB LED. They’re addressable, so you can run a whole chain of them from one data pin on an Arduino or other microcontroller board. Plus, Adafruit has written a library, so the code is extremely simple.

For example, if you have three pixels chained together and want to make them green, yellow, and red like a stoplight, you would write:

strip.setPixelColor(0, 0, 255, 0);
strip.setPixelColor(1, 255, 255, 0);
strip.setPixelColor(2, 255, 0, 0);

The four arguments are: the pixel number (where the first pixel in the chain is 0), then the red, green, and blue values (each from 0 to 255).

Then strip.show(); executes those settings.

The NeoPixels are sold in different layouts, from flexible strips to matrices to rings. In this project we’ll use the Flora Series 2 NeoPixel, which is a single LED with four pads: power (+), ground (-), data in, and data out.


The Flora series is mainly designed for wearable projects, so the pads are well suited for sewing onto clothing with conductive thread. They’re also very easy to solder.


The controller board we’re using is from the same series. It’s a Gemma, which is the tiny, inexpensive version of the Flora, also designed for wearables.


This board runs most Arduino code, and is programmed from the Arduino IDE (programming environment). The thing is, it’s so small that it needs to use a special bootloader to upload code.

The easiest way to do this is to download Adafruit’s Arduino IDE.

As a bonus, this comes with the NeoPixel library installed. If you’re using NeoPixels with a full-sized Arduino board, you’ll need to install the library separately.

If this is your first time using the Arduino IDE and you’re on a Mac, it may not let you launch the software. Go to System Preferences / Security & Privacy / General, and change “Allow applications downloaded from:” to “anywhere.” You may need to click on the padlock to select this button.

We’ll come back to this, but now it’s time to connect the NeoPixel.


First we need to attach wire leads to the Flora NeoPixel, and then attach that to the board.

Cut three pieces of solid core wire, around 3 inches long, and strip 1/4″ or so off one end of each.


From the back of the pixel, solder wires to: ground (-), power in (+), and data in. Data in is the one with an arrow pointing from the wire to the pixel. Data out has an arrow pointing out.

It’s easier if you solder one wire on at a time, using helping hands to hold the parts in place. Be sure to clip onto a solder pad on the NeoPixel so you don’t crush any tiny components.


In the pictures, I use black for (-), red for (+), and yellow for data.

With a flush wire cutter, snip off the short wire ends.


Now, clip the wires to the same length, about an inch and a half: long enough for the pixel to sit centered above the Gemma with the wires reaching the solder pads around the edge. Strip 1/4″ off the ends.


Solder the black (-) wire to GND, the red (+) wire to 3Vo, and the yellow (data) wire to D0. Snip off the wire ends.




The Flora NeoPixel is rated at 5V, but a small number of pixels can be powered by a lower voltage like the 3.3V regulated output from the Gemma’s 3Vo pin. This pin can provide up to 150mA of current. The Flora NeoPixel will draw 60mA at its brightest, so if you build a project with more than two pixels, they should be powered separately.

Also, for larger projects, Adafruit recommends adding a small resistor between the microcontroller’s data pin and the strip’s data-in, and a large capacitor across the strip’s power and ground. And you need to make sure that the data voltage is close to the power voltage: running a 5V strip from a 3.3V Gemma board might be unreliable.

Check out Adafruit’s NeoPixel Überguide for details about all of this.


Download the Arduino code here:

“NeoPixel_Basic_Setup” is the basic setup you need for any NeoPixel project. It calls up the library and initializes the NeoPixel strip. The only things you would need to change for most projects are the data pin number on the board (here it’s soldered to D0, so we have it set to 0) and the number of pixels in the strip. The content in the void loop() section is just a sample.

“CandleWorkshop1” steps the pixel through a sequence of different colors.

Upload this file to the Gemma board to test it out.

The process of uploading is slightly different than with a normal Arduino. In the Adafruit Arduino IDE menus, select Tools / Board / “Gemma 8MHz,” and Tools / Programmer / “USBtinyISP.”

Now, connect the Gemma with a USB mini cable. There is a tiny button on the Gemma, and pressing it enters bootloader mode for 10 seconds. The red light lets you know that it’s listening. While it’s in this mode, press the upload button on the Arduino IDE.

For more detailed instructions, visit Adafruit’s site; they talk about the Trinket, which is the Gemma’s less-wearable sibling, available in 3.3V or 5V versions.


If all goes well, your pixel should be changing colors. Try uploading different color combinations and timing. The RGB values can be anywhere between 0 and 255, not just one or the other like in the sample.

At this point, you can upload “NeoCandle_1” to the Gemma and skip ahead to ASSEMBLE THE JAR. But if you want more of an explanation of what’s going on in the code, read on.

“CandleWorkshop2” introduces a few more concepts that are used in animating flicker.

First, the color sequence from the previous sketch has been moved out of the main loop and turned into a function named colorStep(). Now, calling colorStep() from the main void loop() runs through the sequence once. It also accepts a variable, an integer named “pause”: typing colorStep(2000) holds each color for 2 seconds.

Second, there is a function called fader(), which is a standard for-loop, fading the red and green values from 0 to 255 and back down again.

Third, the function fadeRepeater() nests the fader() function, repeating it a variable number of times. All of these can be called from the main loop, keeping it tidy.

So with that in mind, load “NeoCandle_1” onto the Gemma. It’s much more complicated, but uses the same ideas.

It starts with an RGB mix of 255, 135, 15, which looks like a good candle-flame orange to my eye. This is set as variables at the top (redPx, grnHigh, bluePx), so it can be adjusted without having to go through the entire code. Then, the green value dips down and back up again, simulating the flicker of a candle; as it loses oxygen, it gets darker and redder.

The variable burnDepth sets how many steps below grnHigh the green value dips during the normal burn effect.
flutterDepth sets a deeper dip for a more dramatic flicker effect.
cycleTime is how long each fade cycle lasts. The default is 120 ms, so it will flicker about eight times per second.

The next calculations happen in the setup. A flickerDepth is set a little more than halfway between burnDepth and flutterDepth, and the delay times are calculated from cycleTime and the number of steps the green value needs to dip. For example, if burnDepth is 14 and flutterDepth is 30, the flutter fade would take more than twice as long as the burn fade at the same speed. To keep the timing even, the delay in the flutter for-loop is cut in half.

Once all this math is taken care of, the animation becomes simple. In the main loop, you just call up the different flicker modes, with the duration you want each one to last, in seconds.


Once the code is uploaded to the Gemma, you can remove the USB cable and power it from a battery. The two CR2032 coin batteries provide 6V of power, which is more than you need, but very convenient. Put the batteries in the holder and plug it into the white power jack on the Gemma. The battery holder has a tiny power switch.

Next, cut a small wedge in the jar lid, big enough for the cable to pass through. Be careful of sharp edges!


Use double-sided foam tape to attach the Gemma to the inside of the lid, and the battery holder to the outside. You may want to loop the power cord around the Gemma, tucking it beneath or taping it down.




Finally, cut an 8″ by 4-1/2″ piece of paper to line the inside of the jar. I used tracing paper for my first jar, but prefer theatrical lighting diffusion (#216 is full diffusion, and is sold in sheets at theatrical or film lighting sources like B&H). Also cut a circle of white paper to rest on the bottom of the jar, to reflect light back up.



I use my lantern when wandering around at night. I also made a light painting using the same parts, minus the jar.

Gloves Video Controller

Six of us at NYU’s ITP Camp decided to follow The Gloves Project’s patterns to build our own gloves in June. These are sensor-laden gloves that can be used to control software through hand gestures. Our group included musicians, a theatrical sound designer, a gamer, and visualists, each with different uses for the glove in mind.

To get an idea of how it can be used with video in a live setting, take a look at this test clip, where I use hand movement to wirelessly control video playback and effects.

Here, sensor values on the glove are sent via Bluetooth to a decoder patch written in Max, and then out as MIDI controller data to VDMX, VJ software. It works!

Gloves have been used as controllers in live performance for some time — see Laetitia Sonami’s Lady’s Glove for example. Our particular design is based on one created for Imogen Heap to use as an Ableton Live controller, so she can get out from behind a computer or keyboard and closer to the audience. She gives a great explanation and demonstration at this Wired Talk (musical performance starts at 13:30).

Heap and The Gloves Project team are into sharing the artistic possibilities of this device with others, as well as increasing the transparency of a musical process that can be obscured inside a computer. This is an attitude I’ve believed in since attending Maker Faire and Blip Festival in 2009, where I saw a range of homemade controllers and instruments. I was much more engaged with the artists who made the causal process visible. It doesn’t have to be all spelled out, but in certain cases it helps to see the components: the performer is making the things happen. This is obvious with a guitar player, but not so much with electronic music. Also, you get a different creative result by moving your arms than by pressing a button — a violin is different from a piano.

The Gloves Project has a residency program where they’ll loan a pair of gloves to artists, plus DIY plans for an Open Source Hardware version. The six of us at ITP Camp built one right-hand glove each. We had to do a bit of deciphering to figure everything out, but we had a range of skills between us and got there in the end.

Each glove has six flex sensors in the fingers (the thumb and ring finger have one each; the index and middle fingers have two each, on the upper and lower knuckles). These are essentially resistors: the more they bend, the less electricity passes through, and that can be measured and turned into a number. The sensors run to a tiny programmable ArduIMU+ v3 board by DIYDrones, which runs Arduino code and includes a built-in gyroscope, accelerometer, and magnetometer (a compass), plus a port for a GPS unit for navigation. The board is mostly used for flying things like small self-guided airplanes, but it also works for motion capture. We make a serial connection to the computer through a wireless Bluetooth module.

Here’s a wiring guide that we drew up.

We had more trouble with the software side of things. The Gloves Project’s design is meant to communicate with their Glover software, written in C++ by Tom Mitchell. There are instructions on the website, but we couldn’t reach anyone to actually get a copy of the program. In the end, we copied the flex sensor sections of Seb Madgwick’s ArduIMU code and used them to modify the ArduIMU v3 code. It delivered a stream of numbers, but we still had to figure out how to turn that into something we could use.

We formatted the output sensor data like this:


…and so on. I then programmed a patch in Max to sort it out.


When one of the sensor names comes through, Max routes it to a specific switch, opens the switch, lets the next line through (the data for that sensor), and then closes the switch. Data goes where we want, and garbage is ignored.

Every glove and person is slightly different, so next the glove is calibrated. Max looks for the highest and lowest number coming in, and then scales that to the range of a MIDI slider: 0 to 127. When you first start the decoder, you move your hand around as much as you can and voilà! It’s set.

I made the default starting point for flex sensor data 400, since the lowest point sometimes didn’t fall below 0, while the peak was always above 400. The starting point for movement data is 0. There’s also a “slide” object that smooths movement so it doesn’t jump all over the place while still being fairly responsive.

The number is now sent through a Max “send” object with a different name than the raw sensor data. If you’re keeping everything inside Max, you can just set up a corresponding “receive” object.

Otherwise, it gets turned into a MIDI control or note value, and sent out through a local MIDI device or over a network.

Finally, I tidied everything up so it’s usable in presentation mode. Anyone can download the patch and run it in Max Runtime (free).

There are probably more efficient ways of doing this, but it’s our first pass to get things working.

To download all our code, visit

Since finishing that, I discovered that The Gloves Project has released a whole range of decoders / bridges in various languages. Their ArduIMU code has lots of clever deciphering on the gloves end of things, and the bridges primarily output OSC instead of MIDI, which is handy. Beyond that, The Gloves Project continues to develop new versions of gloves, and are worth checking up on.

Our decoder simply translates the raw sensor data. The next step is to get it to recognize hand gestures, and trigger specific events or adjust values based on that (which is what the Glover software does). We also need to program the glove’s RGB LED and vibration motor for feedback from the computer.

I showed this project to Karl Ward (rock star, Ghost Ghost collaborator, masters student at ITP), and it turns out that he’s currently working on an Arduino library to do a lot of this work, only more elegantly, within the controller. The first library is Filter, which he augmented over the summer to require another new library he wrote, called DataStream. He says: “They are both in usable, tested shape, but the API is still in flux. Right now I’m looking for folks who have Arduino code that does its own filtering, or needs filtering, so I can design the API to fit the most common cases out there.” We’re going to jam.

The glove has all sorts of possible artistic applications, but what else? When I showed it to my dad, he wondered if it could be used as a translator for sign language. Brilliant. It sounds like Microsoft is currently developing software for the Xbox One and new Kinect that will do this, although one advantage of a wearable controller in any case is the ability to get away from a computer (within wireless range). One of the people on our team is going to use it to adjust audio signals while installing sound in theaters. Easier than holding a tablet at the top of a ladder.

Another friend suggested that the glove as demonstrated here could be used for art therapy by people with limited movement. I imagine that something similar is in use out there, but the open-source aspect adds another level of customization and possibility, and again, transparency.

I’m looking to experiment with adjusting specific elements of a video clip with something more organic than a slider or knob, and also be able to interact more directly with a projection. I’ve worked with painter Charlie Kemmerer, creating hybrid painting-projections during Ghost Ghost shows. Charlie works on the canvas with a brush, but even standing beside him, I have to work on an iPad at best. Now I can point directly at the surface while selecting, adjusting, and repositioning clips. Or Charlie could wear it while painting to capture his movement, without it getting in the way of holding a brush.

Creative work reflects the nature of your instrument, so it’s exciting to expand the toolset and learn more about the medium. Video A-B fades are pretty straightforward, but the way the IMU unit works isn’t nearly as predictable as a fader on a board, and I’ve gotten some unexpected results. That’s a good thing.

Even better, I can’t wait to see what other people with these gloves come up with. Tinker, modify, share.