05 December 2010

Prepared Piano: Subtle Effects by Hacking DIY Nocks with Sugru® Silicone Rubber

Nocking points and tube applied to piano strings
Sometimes ‘less’ is ‘more’. For ‘prepared piano’, you don't necessarily need to throw all sorts of heavy hardware (car keys, bolts, screws, weatherstripping, and other violent junk) into your piano and treat it like a percussionist’s rubbish bin.

For one thing, the piano suffers. For another, the sonic consequences are just too extreme: they draw the audience’s attention to the performer and to the outrageous thing the performer has done to the piano, and detract from actually listening to the music the prepared instrument makes. Sometimes, instead of a visual ‘spectacle’ or a sonically ‘provocative stunt’, the comparatively subtle acoustic effect you want can be achieved just by inserting some tiny objects that are gentle to the piano and that are as easy to apply and remove as, say, a mute on a violin or other stringed instrument.

The do-it-yourself widgets below don’t 'mute' anything, though. They dramatically change the pitch of each string they are applied to—from a few tens of ‘cents’, up to as much as a major sixth.

And you can’t create this sort of note-by-note harmonic effect with Pianoteq® or other synths; those establish harmonic attributes globally, applying them across the entire keyboard, not note by note.

First, get yourself some Sugru®. It comes in a wide variety of colors, not just the orange that appears in the pictures here.

Next, get yourself some 5 mm OD brass tubing and 5 mm OD brass archery nocking points (see links below). Also get yourself some 22 AWG copper wire, or whatever convenient gauge is slightly smaller than the diameters of the various piano wires where you intend to place the nocks.

Then, with your Dremel® handtool fitted with an abrasive cut-off disc, cut your brass tubing to the lengths you want.

I have used tubing lengths from 30 mm to 70 mm to create weights ranging from 3 g to 7 g. For strings in the bass register, you can obviously use larger-diameter brass tubing.

Each Allen Co. (Broomfield, CO) nocking point (Part No. 540) weighs about 400 mg when its lumen is filled with Sugru® silicone rubber.
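If you want a rough idea, before you start cutting, of the mass a given length of filled tube will have, a back-of-the-envelope estimate like the little Python sketch below works. The wall thickness and densities are my own assumed values (about 0.8 mm wall, brass at roughly 8.5 g/cm³, cured Sugru® at roughly 1.3 g/cm³), not measurements of the actual stock, so weigh your own pieces to be sure.

```python
import math

def filled_tube_mass_g(length_mm, od_mm=5.0, wall_mm=0.8,
                       brass_density=8.5, sugru_density=1.3):
    """Approximate mass (g) of a brass tube of the given length with its
    lumen filled with cured Sugru.  Densities are in g/cm^3.  Wall
    thickness and densities are assumed values -- check your own stock."""
    id_mm = od_mm - 2.0 * wall_mm
    brass_area_mm2 = math.pi / 4.0 * (od_mm**2 - id_mm**2)
    lumen_area_mm2 = math.pi / 4.0 * id_mm**2
    mm3_to_cm3 = 1.0e-3
    brass_g = brass_area_mm2 * length_mm * mm3_to_cm3 * brass_density
    sugru_g = lumen_area_mm2 * length_mm * mm3_to_cm3 * sugru_density
    return brass_g + sugru_g

for length in (30, 50, 70):
    print(f"{length} mm tube ≈ {filled_tube_mass_g(length):.1f} g")
```

With those assumptions, 30 mm comes out near 3 g and 70 mm near 7 g, consistent with the weights quoted above.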

Next, cut one lengthwise slit in the side of each segment of brass tube with the Dremel® handtool.

Cut pieces of copper wire with a wire cutter or snips, about 2 cm longer than the length of each brass tube or nock. Use needlenose pliers to put a crook or loop in one end of each wire, so you will be able to easily grasp the wire between your thumb and index finger, and so you will also have a loop by which to hang each piece in a safe place while the Sugru® cures. Alternatively, you can get a piece of florists’ foam or styrofoam at a hobby shop and stick the straight end of the copper wire into it after the wire has been pushed through the nocks.

Lay all of the brass bits and archery nocking points and copper wires out on your worktable on a piece of plastic or a plastic kitchen cutting board. This will facilitate clean-up and prevent the Sugru® from smudging any surface you care about.

Open a 5 g sachet of Sugru®. Roll it around in your fingers for a few seconds. It has the consistency of soft modeling clay at this point.

You can press the Sugru® into the lumen of a brass bit with your fingers and trowel it nice and even with an artists’ spatula.

Then take a copper wire of the length appropriate to that brass bit and gently push the straight end of the wire lengthwise through the Sugru® in the center of the lumen. The copper wire adheres to the fresh Sugru® a little, but you can easily poke the wire through, so that a centimeter or so of bare wire sticks out beyond the end of the nock/tube. Hang the wire with the Sugru® nock/tube (or insert the straight end into florists’ foam) in a safe place where it can remain undisturbed for at least 24 hours.

Continue filling each of the brass bits and penetrating their centers with the copper wires. Set your collection of Sugru® piano nocks aside in a cupboard—someplace at about 20°C (68°F) and away from sunlight.

For your convenience, make a 5 mm ball of Sugru® and set it aside in the same location as the rest. Wait at least 24 hours for the Sugru® to cure. Pick up the 5 mm Sugru® ball and squeeze it. If it is firm, then you know that the curing process is complete and it is okay to proceed with the next step.

Mount each nock/tube in turn in a PanaVise® or other suitable holder. Take a razor blade or X-Acto® knife and cut radially, directly down to the copper wire, slicing the silicone in the lumen lengthwise so that the nock/tube will be able to slide onto a piano string. Then gently rotate the copper wire loop clockwise and anticlockwise 45 degrees or so, to release the wire’s surface from its attachment to the Sugru® silicone. Pull lengthwise to extract the copper wire from the silicone rubber hole it has been embedded in. Use your X-Acto® knife to trim any excess Sugru® from the slit or the ends of each nock/tube.

Slicing silicone in nock core with X-Acto knife
Here is how the Sugru®-filled brass tubes and bow-hunting archery nocking points look:

5 mm OD brass tube filled with silicone
Allen 540 nocking point filled with silicone
And the picture at the top of this post shows how they look when you push the nocks and tubes on, their slits passing over the piano wires so that each piano wire is seated in the center, in the cylindrical ‘cast’ made by the copper wire.

On notes that have two or three strings, if you apply a tube or nock to one of the strings, you will create a sound in which the weighted string is significantly lower in pitch than its companion string(s). In the upper register, my 3 g to 7 g tubes make the notes sound like Chinese bronze bells with nipple-like bosses.

Zhou Dynasty bronze bell with bosses
You can slide the weight close to the middle of the string and it will mainly lower the fundamental and second harmonic of the string. If you move it nearer to the end of the string, it will lower the third and higher harmonics much more strongly.

I’ve created a little Excel spreadsheet (link below) that helps you calculate how much detuning to expect as a function of the change in the average mass of a string when weight is added to it. Obviously, this simple spreadsheet does not tell you anything about how the higher-order harmonics change, given that the weight is localized at a particular position on the string rather than spread over its entire length.
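For anyone who would rather script it than open Excel, here is roughly the same average-mass calculation as a small Python sketch. It uses the crude model the spreadsheet uses: frequency scales as 1/√(linear density), with the added mass treated as if smeared evenly over the speaking length. The string masses and added masses in the example calls are made-up illustrative numbers, not measurements from any particular piano.

```python
import math

def detuning_cents(string_mass_g, added_mass_g):
    """Predicted detuning (cents, negative = flat) under the crude
    'average mass' model: frequency is proportional to
    1/sqrt(linear density), and the added mass is treated as if
    spread evenly over the whole speaking length of the string."""
    ratio = math.sqrt(string_mass_g / (string_mass_g + added_mass_g))
    return 1200.0 * math.log2(ratio)

# Illustrative (made-up) string masses, not measured piano data:
print(f"{detuning_cents(5.0, 0.4):.0f} cents")  # 400 mg nock on a 5 g string: about -67 cents
print(f"{detuning_cents(5.0, 3.0):.0f} cents")  # 3 g tube on a 5 g string: about -407 cents
print(f"{detuning_cents(4.0, 7.0):.0f} cents")  # 7 g tube on a 4 g string: about -876 cents, near a major sixth
```

As with the spreadsheet, this says nothing about the position-dependent effects on the higher harmonics described above.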

Please add comments below or email me about your experiences with these Sugru® prepared-piano kluges.





04 December 2010

|| Eye Candy + Ear Candy || ≤ || Eye Candy || + || Ear Candy ||

Burtner’s ‘A’aa’, Grace Lai, flute, and Michael Abrams, clarinet

Grace Lai (flute) & Michael Abrams (clarinet) perform Matthew Burtner’s ‘A’aa’ at KcEMA event at La Esquina in Kansas City (03-DEC-2010)

“How much information is carried in [neural] spike trains? Should we use the discrete theory or the continuous one? Spike times are not discrete, and discrete calculations don’t seem to give satisfactory answers. But, on the other hand, the continuous theory assumes a continuous space, and, from the available evidence, that is clearly too strong an assumption to be valid. What, then, is the [mathematical, topological, metric] space to which spike train phenomena belong? While it might be possible to define the sum of two spike trains by simple linear superposition, it isn’t at all obvious how to define the difference. There is no reason to expect spike trains to be Euclidean. For example, with regard to color perception, MacAdam ellipses in color space are wildly non-Euclidean.”
  —  Conor Houghton, Trinity College, Dublin.
The acoustic and optic stimuli we received at the KcEMA event held last night at La Esquina in Kansas City were intense, sensorially immersive, emotionally evocative, and thoroughly satisfying in terms of their well-crafted dramatic arcs, accessibility, and comprehensibility. The seven works performed were selected from among 120 compositions submitted by composers from around the world in response to KcEMA’s call-for-scores last spring.
  1. Elainie Lillios (composer), Bonnie Mitchell (visual artist) – “2BTextures”: 2-movement work of abstract animation that “takes the audience on an integrated sonic and visual journey into a surrealistic environment influenced by Nature.” This is an edgy, edgy piece with extensive, dramatic use of surround-sound spatial movement of sound around the Mackie speaker array deployed at the perimeter of the audience seating area. The computer-generated visual displays were very high-dimensional animations, with many tens of thousands of dynamic objects projected in motion at each moment.
  2. Bret Battey (composer, videographer) – “Sinus Aestum”: inspired by the eponymous plain on the Moon’s surface, smooth and dark, “articulated by threads of white dust, like the tips of flowing waves.” Music and video sequences are propelled by algorithmic processes that sample the digital imagery of surface topographic XYZ structures of this lunar plain. It reminds me of ‘Swarm’ software, with autopoiesis and phase transitions generated by adaptive networks of communicating processes, much like the nonlinear reaction-diffusion equations of statistical physics and chemistry, such as the Belousov-Zhabotinsky reaction and other processes in non-equilibrium thermodynamics. Composer as Dawkinsian ‘blind watchmaker’, an impersonal God initializing the universe and letting Creation auto-unfold as it may. This is utterly phenomenal and without doubt one of the most elegant unifications of light and music I have so far had the pleasure to experience. Bravo!
  3. Matthew Burtner (composer) – “A’aa”: includes field recordings of lava flows and associated sounds emitted by an active Guatemalan volcano, and expresses deep ecology at a primeval, violent, chaotic, low-frequency, butt-thumping level via the Mackie SRS1500 subwoofer—convincingly conveying the notion that human affairs are puny compared to stochastic geophysical mechanical processes: not only beyond our control but largely beyond the reach of human technologies even to monitor or measure or predict. Burtner’s Nature is uncaring; however, it is not malevolent. It does not wreak shock and awe on all sentient life-forms, human and otherwise. Instead, the shrieks and rumblings of the Earth pushing viscous molten rock are surprisingly benign; nothing is imminently about to explode; this Guatemalan volcano is somewhat like standing very close to a powerful waterfall that is not going anywhere and is not going to kill us. The lava cools, creating solid crusts, and these smash together under the pressure of still-molten material behind them, generating breaking, shattering sounds. [I very much liked the percussionist mashing fist-sized lava rocks together. The miked grating, smacking sounds added high-frequency effects on top of the thundering volcano sounds. Grace Lai’s flute and Michael Abrams’s clarinet, both playing long, sustained notes with pitch-bending and harmonics, rounded out this ecoexperience, complementing the timbres of the realtime volcano sounds.]
  4. Jeffrey Hass (composer), Elizabeth Shea (choreography) – “Magnetic Resonance Music”: composed after the composer had an MRI exam, during which he “focused on bizarre, strident, extremely loud noises and complex rhythms the machine was making.” The world according to Hass has humans as guinea pigs, to whom things are done or simply happen while doctor-priests tell them to hold perfectly still. Shea’s choreographed movements are wonderful counterparts to Hass’s impulsive, quasi-chaotic, syncopated music. Very cool!
  5. Christopher Burns (composer) – “Sawtooth”: mash-up of elements of visual performance and sound-art. Instead of a personal sensor-area network with transducers capturing biomechanics signals from a live performer’s limbs and trunk (3-axis accelerometers, gyros, joint goniometer angles, etc.), Burns uses the video camera on the LCD monitor of his laptop computer to capture the motions of his arms, torso, and hands, which he moves Theremin-style. Scene-recognition, edge-detection, and vector-velocity computations on the realtime video datastream of the performer’s motions are translated in software into both music and visual animation (see the toy sketch after this list for the flavor of such a camera-to-sound mapping). The gestures are transformed into a rich palette of pitch-set, crescendo/decrescendo, articulation, and other motifs. Simultaneously, the gestures yield an interesting range of optical and animation effects, projected on the walls of the performance space. The spatial positions, colors, intensities, and sizes (scale) at which newly ‘launched’ animation objects appear depend on the position, speed, and direction, as well as the ‘open-hand/closed-hand’ morphology, of the gestures Burns makes with his hands. The result is a truly beautiful, cohesive integration of sound and light. [Just as with the recent ‘Faster Than Sound’ event at Aldeburgh, there are issues regarding our less-than-optimal notational idioms for scoring complex computer/electronic music and video/light-show compositions: idioms that sufficiently specify what comes precisely when, and that specify interfaces for multiple disparate acoustic and optical effects in technology-interoperable, manufacturer-independent ways, so as to facilitate future rehearsals and performances by persons other than the composer/video artist/dancer(s), and on the equipment of future decades when the computers and other devices and patches we now have are long since obsolete.]
  6. Robert Ratcliffe (composer) – “Phoenix”: repeated resurrection from sonic ashes “explores the possibility of combining characteristic features of synthetic‐driven electronic dance music genres such as acid house and techno” with elements of instrumental and electroacoustic composition. I am reminded of the sounds that titanium robot insects make during the raves that precede their exuberant mating season. This thing is absolutely danceable—very, very nice.
  7. Mike McFerron (composer) – “Prelude to You Brought This On Yourself”: sonically explores the physical and emotional abuse experienced by an openly homosexual youth, and “attempts to comment on a human collective intolerant of an individual voice, who is not asking to be understood or even heard, but simply allowed to exist.” The tintinnabulations of small bells are synchronized with images of people that flash in front of us, accompanied by short voice-over recordings, mostly of anti-gay hate-speech by conservative religious figures. This is a deeply moving, powerful testament to the courage and resilience of those who are harmed—be prepared to be moved to tears as you listen and watch: no mere electronic music or multimedia installation work, it is a galvanizing, effective example of art as political intervention to bring about a more just society.
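To give a flavor of the sort of camera-to-music mapping described in item 5 (and only a flavor: this is my own toy sketch using OpenCV’s dense optical flow, not Burns’s actual software or mappings), the snippet below reduces each webcam frame to a ‘speed’ and a ‘direction’ number and maps those to a notional pitch and loudness.

```python
import cv2
import numpy as np

# Toy gesture-to-parameter mapping: dense optical flow from the laptop
# camera is reduced to two control numbers per frame.  The algorithm
# choice and the mappings here are mine, purely for illustration.
cap = cv2.VideoCapture(0)                      # assumes a working webcam
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    speed = float(np.mean(mag))                # how fast the performer is moving
    direction = float(np.mean(ang))            # mean direction of motion, radians
    midi_pitch = 48 + 24 * direction / (2 * np.pi)   # direction -> two-octave pitch range
    loudness = min(1.0, speed / 10.0)                 # speed -> 0..1 loudness
    print(f"pitch {midi_pitch:5.1f}   loudness {loudness:4.2f}")
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A real patch would of course feed those control values to synthesis and animation engines rather than printing them.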

Battey – ‘Sinus Aestum’

Battey—‘Sinus Aestum’

Why do light shows combined with music affect us in ways that are substantially different from either stimulus taken separately? Are there features of dynamic light displays that, when designed and synchronized with music, might optimally stimulate our senses and enhance the depth of our reception of the music?

Is there a point of “sensory overload” or “diminishing returns” as regards multi-sensory stimulation accompanying music? Conversely, why do the periods of silence or darkness that sometimes arise in electronic music / lightshow events cause hallucinatory acoustic and optical experiences? How might composers understand and leverage these neurophysiologic phenomena to better expressive advantage?

Might there in the future be implantable prosthetic devices—or legal, safe, and effective “performance-enhancing medications” (esp. glutamatergic or GABA-ergic ones, to modulate corticofugal transmission)—that could further enhance or extend the scope of these tandem multi-sensory experiences?

When I fly on long trans-oceanic flights and listen to my iPod, my perceptions of music are distorted, sometimes in interesting, revelatory ways. How, if at all, would combinations of music and light in zero-gravity space environments differ? Could space-tourism include specially composed music and accompanying light displays that would capitalize on the sensory biophysics anomalies up there, to produce an extraordinary kind of transcendence for the travelers?

Might electroacoustic music-light compositions in the future be composed so as to be more effective for music-therapy purposes than current, conventional “music-only” music therapy?

How do the experiences that people with autism-spectrum disorders have of light-sound compositions differ from others’ experiences?

These questions occurred to me last night as I listened to the performances of electroacoustic and computer-based compositions at La Esquina in Kansas City. (The same questions had dogged me when I attended last month’s ‘Faster Than Sound’ event in Snape.)

Christopher Burns performs ‘Sawtooth’

Burns – ‘Sawtooth’

Neural coding has traditionally been assumed to be rate-coding: the stronger the stimulus, the more action-potentials per second a sensory neuron transmits, and the stronger the perceived sensation.

However, it is now known that various sensory systems also use ‘sparse temporal neural coding’. The timing and univariate temporal pattern of action-potentials, and of sequences of clusters of them, in a single neuron itself carries information. The bivariate or multivariate temporal pattern of concurrently incident action-potentials in multiple neurons, which may be in different sensory pathways (such as acoustic-nerve and optic-nerve pathways, thalamic pathways, cortical pathways, etc.), may likewise carry [different] information.

The processing of sensory information by the brain is difficult to understand, in part because of the complex interconnections between sub-cortical and cortical areas. Connections between the thalamus and the cortex are reciprocal, with information carried in both the “feed-forward” [peripheral nerves to thalamus to cortex] and “feed-back” [cortex to thalamus] directions. Sensory signals reach the cerebral cortex after having traversed many synapses along multiple pathways. Acoustic and optical signals interact in the thalamus, and possibly elsewhere, in both mutually excitatory and mutually inhibitory ways.

Brain
Conor Houghton at Trinity College, Dublin, has recently examined spectro-temporal receptive fields and filter functions in studies of the neurophysiology of bird song, using the analytic formalisms of van Rossum metrics and non-Euclidean spaces. Useful, I think, for the analysis of human music, and not just of multi-sensory music-visual works...
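For readers curious what a spike-train metric actually looks like, here is a bare-bones numerical rendering of the van Rossum distance (my own simplified sketch, not Houghton's code): each spike train is convolved with a causal exponential kernel, and the L2 distance between the two filtered waveforms is computed.

```python
import numpy as np

def van_rossum_distance(spikes_a, spikes_b, tau=0.01, dt=0.0005, t_max=1.0):
    """Van Rossum distance between two spike trains (spike times in seconds):
    convolve each train with a causal exponential kernel of time constant
    tau, then take the L2 distance between the filtered waveforms on a
    time grid of step dt."""
    t = np.arange(0.0, t_max, dt)

    def filtered(spikes):
        f = np.zeros_like(t)
        for s in spikes:
            idx = t >= s
            f[idx] += np.exp(-(t[idx] - s) / tau)
        return f

    diff = filtered(spikes_a) - filtered(spikes_b)
    return float(np.sqrt(np.sum(diff**2) * dt / tau))

# Two made-up spike trains, identical except for one jittered spike:
a = [0.10, 0.25, 0.40, 0.70]
b = [0.10, 0.26, 0.40, 0.70]
print(van_rossum_distance(a, b))   # small but nonzero distance
```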

And Stephen Coombes’s and Paul Bressloff’s edited volume (link below) covers how patterns of spiking activity provide a substrate for the encoding and transmission of information, that is, how neurons compute with spikes. It is likely that an important element of both the dynamical and computational properties of neurons is that they can exhibit bursting, a relatively slow rhythmic alternation between an active phase of rapid spiking and a quiescent phase without spiking, which is in turn altered by “cross-talk” (statistical cross-correlations) between the spiking patterns evoked by concurrent stimulation by music and light.
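Bursting of this kind is easy to see even in the simplest phenomenological neuron models. The sketch below uses Izhikevich's two-variable model with its standard 'chattering' parameter set; the model is my choice for illustration and is not taken from the Coombes/Bressloff volume.

```python
# Izhikevich's simple spiking model with 'chattering' parameters,
# which alternates bursts of closely spaced spikes with quiet intervals.
a, b, c, d = 0.02, 0.2, -50.0, 2.0     # chattering / bursting regime
dt, T, I = 0.25, 1000.0, 10.0          # time step (ms), duration (ms), input current
v, u = -65.0, b * -65.0                # membrane potential and recovery variable
spike_times = []

for step in range(int(T / dt)):
    t = step * dt
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * (a * (b * v - u))
    if v >= 30.0:                      # spike detected: reset per the model
        spike_times.append(t)
        v, u = c, u + d

print(f"{len(spike_times)} spikes; first few (ms): {spike_times[:8]}")
```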

Belousov-Zhabotinsky reaction
With regard to the ‘sensory overload’ question, there is a fair amount of recent research that is germane. In mathematics, the triangle inequality is a defining property of norms and of measures of distance; it must be established as a theorem for any function proposed to serve as a norm or metric on a particular space. The triangle inequality holds, for example, for the real numbers, Euclidean spaces, Lp spaces (p ≥ 1), and inner-product spaces. Maybe it also holds for multidimensional perceptual spaces, such as ones involving simultaneous stimulation by sound and light.
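For reference, the relation in this post's title is just the triangle inequality that any norm must satisfy; the open question is whether perceived intensity on a combined audio-visual space behaves like a norm at all.

```latex
% Triangle inequality (subadditivity) for a norm on a space V:
\lVert x + y \rVert \;\le\; \lVert x \rVert + \lVert y \rVert
\qquad \text{for all } x, y \in V .
```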

Triangle
Triangle Inequality
Even if these are one-of-a-kind experiences never to be quite the same twice, and even if the communicative impact of multisensory stimulation is subadditive in terms of quantitative neurophysiology and biophysics, the qualitative expressive and aesthetic result of any particular performance can nonetheless be really, really wonderful. Congratulations to each of these composers/artists/performers on these fine compositions and ‘sensorially whelming’ performances at KcEMA!

“Musical Ecoacoustics embeds environmental systems into musical and performance structures using new technologies. Ecoacoustics derives its musical procedures from abstracted environmental processes, remapping data from the ecological into the musical domain. It draws on techniques of sonification, acoustic ecology, and soundscape composition (e.g., Truax, Westerkamp, Keller and others). The data from nature may be audio information (from wind or ocean waves, for example), or it may be a signal produced from some measurable parameter such as temperature timeseries, geological change or seismic data, astrophysical data, and so on. Going beyond mere direct sonification, the composer develops a syntax on the basis of the recorded natural processes, and creates new patterns conforming to this syntax.”
  —  Matthew Burtner on ecoacoustics.