The Place ReSounds of Water

Sunday, May 19th, was a gorgeously breezy afternoon. Susanne Romey fluttered and howled on the Native American flute while Eleanor Mills pulled rich sounds from The Bells, Lee Moore Crawford blessed and balanced the water, and Jody Cassell moved and was moved by internal and external energies. dejacusse added a little shimmer to the bell harmonics and witnessed this most beautiful rendering of love.

Big thanks to Jim Kellough for sound smudging the space, to Linda Carmichael for the video capture, and to Central Park School for allowing us to celebrate at The Bells.

Stay tuned next year for National Water Dance 2020 Durham NC!

Synthesizing in Ableton: They are On It!

Well, my short-lived journey into configuring Ableton Live as a synthesizer has come to a halt with the purchase of a Behringer Neutron at Moogfest AND with the Ableton announcement that they are beta-testing CV plug-ins for Ableton 10. I am soooo excited about this direction.

My experiments with creating modulation FX using “dummy clips” as Envelope Generators have yielded some new directions for iBoD and dejacusse. We are experimenting with running live sound through the FX tracks and EG clips. This coming Sunday, we will perform The Place ReSounds of Water in front of the Central Park School for Children. Eleanor Mills will play The Bells, and dejacusse will morph the bell harmonics into a watery palette, over which Susanne Romey will play Native American flute. There will be meditative movement and the pouring of water. Come join us!

Sunday May 19 @4 pm

724 Foster St @The Bells

Synthesizing in Ableton Live: External Effects Pedal (fail)

Dear friend and compadre Karim Merlin loaned me a guitar pedal. He recently purchased an Earthquaker Levitation pedal, which uses delay, tone and atmosphere controls to mix a versatile reverb with lots of space to explore. Since I am moved to play all the harmonics through synthesized sound, a guitar pedal gives me a chance to experiment with routing hardware effects through Ableton. I was very excited to try it out.

The wind left my sails when I YouTubed for some supportive info and learned that, in order to get the signal from my ukulele or vocal mic through the pedal into Ableton and out to auditory cortexes, I need a reamp box between the pedal and the sound card, and a preamp box between the sound card and mixing board. This has to do with matching the signal out and the signal in to the same impedance. Signal routing is the great labyrinth of synthesized sound in my mind. Signals can be sound energy or electrical energy, and can be boosted, attenuated, colored, and fed back onto and through each other. And, when it comes to hardware, signals must match somehow. Something to do with the energy of the signal. This part eludes my understanding so far, and I am eager to grok it! And what better way than to simply play.
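For the curious, the level mismatch can be sketched with a few nominal numbers. This is a rough Python sketch using typical textbook reference levels (+4 dBu line level, roughly −20 dBu instrument level), not measurements of my actual rig:

```python
# Why a reamp box? A rough sketch with typical nominal levels
# (these dB figures are textbook values, not measurements of my gear).

def dbu_to_volts(dbu):
    # 0 dBu is defined as 0.775 V RMS (the classic 600-ohm reference)
    return 0.775 * 10 ** (dbu / 20)

line_level = 4     # dBu: nominal pro line-level output from a sound card
inst_level = -20   # dBu: rough nominal level of a passive guitar/uke pickup

print(f"line out:      {dbu_to_volts(line_level):.3f} V RMS")
print(f"instrument in: {dbu_to_volts(inst_level):.3f} V RMS")
print(f"gap a reamp box pads down: about {line_level - inst_level} dB")
```

That rough 24 dB gap (plus the impedance difference) is what the reamp box bridges on the way into the pedal, and what the preamp restores on the way back out.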

The NI Komplete 6 soundcard I use has phantom power, which amplifies the signal in certain microphones. The Behringer mixing board has several ways to elevate the signal. Perhaps these will suffice? When I ran the electric uke signal through The Levitation there was a little bit of signal and a whole lot of noise. I tried playing with it within Ableton to see if I could make the noise blend, but no. A vocal microphone sounded the best, but wasn’t a sound I wanted to cultivate. The YouTube guy may be right. I need to build an empire to use pedals through Ableton.

So I end up back in Ableton, playing with all their reverb configurations and making a few of my own.

And I am still wanting a few more 3D knobs and sliders. I am anticipating that my next big sound love may come my way this week via Moogfest!

Art of a Scientist @ Golden Belt April – June 2019

The Art of a Scientist is an annual exhibit curated by Duke University graduate students who are interested in promoting dialogue between art and science. STEM graduate students submit images from their scientific research to the AoS Committee. I answered a call for artists to work with the project, and was paired with a graduate medical student who submitted a video.

The video is of the vascular system of a mouse hind leg. It begins with a view of the murine vascular tree branching out in red. We see the side of the leg rotating; then it tilts, rotates around, and disappears. An angled plane (resembling a microscope slide) moves from the bottom left of the screen to the top right, revealing the leg with the soft tissue enclosing the vascular system. The image rotates, then the layers melt away as the leg disappears. Then the image reverses and the layers swirl back together. The image stops before the layers finish, rotates a half turn and is complete.

The video is a collaboration between Hasan Abbas, an MD/PhD student, and the Shared Materials Instrumentation Facilities at Duke University. The mouse leg visualization can be used to model healthy and diseased cardiovascular systems. When I asked Hasan what he heard while creating the video, he said, “scholarship”, “elegance” and “discovery”. I love this as a jumping off point for a soundscape – “elegant discovery”.

And then there is the mouse! My first thought while curating samples for this project was to “give voice to the mouse”. Modern medical research is built on the backs of mice, so it seems right to honor and acknowledge their participation. I found hours of recordings of mice squeaks and scratches. Another element I wanted to capture was the branching of the vascular structure at the beginning of the video, so the creak and cracking of a large body of ice was layered into the sound bed. These sounds were synthesized into a liquidy, flowing underbed (suggestive of blood flow) over which orchestral voices swell in wonder. This piece is called O Men and Mice.

Hasan Abbas and I had several email correspondences. I sent him my first soundscape and he gave wonderfully useful feedback. In all of his correspondence, Hasan spoke with keen interest about the technology used to create the video. The method is called diceCT and the technology is a micro-CT scanner. Hasan referred to the technology as “a thousand tiny X-ray” images stitched together to create the detailed 3D image of the mouse hind leg. This made me wonder how 1000 Tiny X-rays might sound. So a second soundscape was born.

For this piece, while the primary idea is the sound of 1000 tiny x-rays, I also wanted to convey a sense of excitement and pride in an amazing technological accomplishment. diceCT is a new way of seeing living matter that could reveal hidden organic structures or systems. Drum rolls, claps and cymbal crashes are iconic sounds of triumph, so these were used as the sound source. In the video, when the tilted slide-like plane moves from bottom left to top right, the full leg emerges, and there is a feeling of a “great reveal”. This feeling is emphasized by a drum roll and cymbal splash into a moment of silence in the soundscape. For the sound of thousands of X-ray images being taken, granular synthesis was applied to the drum sounds as they built up in dense layers. Interestingly, granular processing does a similar thing to audio as the diceCT method does to matter. Hasan provided me with a video that was slower in pace for this piece. The layers of the whole leg system as they swirl away and return are so beautiful and perfectly fitted together, I wanted it to take more time.
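For readers curious what granular processing does under the hood: it copies tiny windowed slices (grains) of a sound into a new stream, much as diceCT stitches tiny images into one volume. Here is a minimal Python/NumPy sketch of the idea; the noise-burst “drum” and the grain settings are illustrative stand-ins, not the actual samples or tools used for the piece:

```python
import numpy as np

def granulate(source, grain_len=1024, n_grains=200, out_len=48000, seed=0):
    """Naive granular resynthesis: copy short Hann-windowed grains from
    random spots in `source` onto random spots in the output buffer."""
    rng = np.random.default_rng(seed)
    window = np.hanning(grain_len)
    out = np.zeros(out_len)
    for _ in range(n_grains):
        src = int(rng.integers(0, len(source) - grain_len))
        dst = int(rng.integers(0, out_len - grain_len))
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    return out

# A stand-in "drum hit": one second of decaying noise at 48 kHz.
drum = (np.random.default_rng(1).standard_normal(48000)
        * np.exp(-np.linspace(0, 8, 48000)))
cloud = granulate(drum)  # a dense cloud of tiny slices of the drum
```

Raise `n_grains` and the overlapping grains pile into the kind of dense layers described above.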

The Art of a Scientist will open Saturday April 6 at the Golden Belt Grand Gallery (800 Taylor St. Durham) and will run through June 23, 2019.

Riding and Playing the en/Harmonic Waves

My quest to synthesonize Ableton Live has taken an exciting new turn. Last Sunday, we discovered that by miking The Bells at the Central Park School Soundgarden, I can run that sound through Ableton and into the various synth modules and FX racks I am building. What happens is that the Abeju Synth Modules and FX Racks capture most of the harmonics that arise from Eleanor’s bell playing. The harmonics can be shaped by envelopes and attenuation and, of course, granular synthesis. My goal is to gradually shape the bell harmonics into a watery stream sound. This will be part of the soundscape for The Place ReSounds of Water (TPRSW) on April 14th at 4pm for SITES Season 2018-19.

When iBoD first started playing with The Bells, I recorded and analyzed their harmonic content. These bells are former compressed air tanks with the bottoms cut off, so the metal is not pure, it is some kind of alloy. This translates to lots of harmonic AND enharmonic content! A pure metal would render more pure harmonics. These pure harmonics are pretty, often beautiful, but my ear grows tired of the stasis of it all. The idea of purity in all of its forms is an illusion that leads to much misunderstanding and anguish in the world. Think about what striving for purity has given us: genocides, fascism, chronic autoimmune diseases, disconnection from and attempts to conquer nature, diminished empathy, and on and on. It is my prayer that riding and faithfully playing All the en/harmonic waveforms will encourage evolutionary growth. That is what I am going for!
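For those who want to see this with their eyes: a Fourier analysis shows the difference. Partials of a harmonically pure tone sit at whole-number multiples of the fundamental, while an alloy bell's partials drift off those ratios. A small NumPy sketch with a synthetic stand-in bell (the partial ratios below are made up for illustration, not measured from The Bells):

```python
import numpy as np

SR = 48000
t = np.arange(SR) / SR

# A stand-in "bell": a fundamental plus partials that sit slightly off
# the whole-number harmonic series, as alloy bells tend to do.
# These ratios are illustrative, not measurements of The Bells.
f0 = 220.0
partials = [1.0, 2.04, 2.97, 4.13]
bell = sum(np.sin(2 * np.pi * f0 * r * t) / (i + 1)
           for i, r in enumerate(partials))

spectrum = np.abs(np.fft.rfft(bell * np.hanning(SR)))
freqs = np.fft.rfftfreq(SR, 1 / SR)

# Find the spectral peaks and report each one's ratio to the fundamental.
# Whole-number ratios are harmonic; anything in between is the
# en/harmonic content my ear is drawn to.
peak_bins = [b for b in range(1, len(spectrum) - 1)
             if spectrum[b] > spectrum[b - 1]
             and spectrum[b] > spectrum[b + 1]
             and spectrum[b] > 0.05 * spectrum.max()]
ratios = [freqs[b] / f0 for b in peak_bins]
print("partial ratios:", [round(r, 2) for r in ratios])
```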

TPRSW is my first attempt to sync up with the National Water Dance. My timing is off as this is not the year for National Water Dance, however I am hoping this will kickoff some interest for 2020. The idea for TPRSW is to give prolonged loving attention to water in the form of sound, light and the liquid itself. The soundscape will consist of Eleanor Mills playing The Bells, dejacusse aka Jude Casseday capturing and playing the en/harmonic waves from The Bells and morphing them into a watery feeling soundbed. Then Susanne Romey will play Native American flute over that for a while, then we start the wave again. The movers will pour water from vessel to vessel. An altar of flowers may be built. The whole thing is a mystery.

Our location at the Soundgarden at Central Park School gets full afternoon sun, so the visuals might include sparkles and shimmers of water. We could be lit up! If it is overcast, the air will be moist and the sounds of water will carry more clearly. If it threatens rain on Sunday, we will do it on Saturday instead! Or, perhaps, we will figure something else out and perform as it rains.

Whatever we do will be in praise of water!

Synthesizing in Ableton Live: Envelope Generators

My first pass at working with Ableton as a modular synth was to explore “dummy” clips. These are tracks that contain an audio clip with the actual content muted. The clip is then transformed into a container for audio effects automation that can be applied to any audio routed through whatever audio track the clip is in. The clip becomes an Envelope template that can be pulled into other projects.

Ableton Live allows the sound designer to see and manipulate many parameters within each individual clip. This is most amazing, and something I have not paid enough attention to. Here is a picture of the clip console from the Ableton Manual:

For Audio Animation Clips, the focus is on the Sample box and the Envelopes box. The slider in the Sample box is used to mute the audio by simply turning it completely down. Now the clip is ready to be shaped into an envelope. (That is how Audio Animation Clips will function in my Abeju Synth Station – as Envelope Generators, or AAC/EGs.) The two drop-down windows in the Envelopes box give the sound designer access to all the effects and mix processing available on the track for this clip only. So whatever I draw onto the audio clip itself only happens when that particular clip is playing. In the case of an AAC/EG, the clip will play the animation of the envelope, and the envelope will be applied to any audio routed through the track.

In order to create an AAC/EG, we need an audio track and an audio clip. The length of the clip doesn’t seem to matter because it loops. (Currently, I am sticking with 8-, 12- or 24-bar clips, and I hear potential for longer clips with more complex movement in the automation in the future.) Use the volume slider to mute the audio within the clip. (Addendum: Experience has taught me that the original audio in the clip should be low volume. I am going to start experimenting with creating audio specifically for these clips.) Then assemble a rack of audio effects you want to use to shape sound and insert them on the track. In the original audio clip, use the drop-down box in the clip to access each effect, and turn them all to Device Off by moving the red line in the clip box to the off position. Now you have an open audio sleeve with no effects enabled. When source audio is routed through this clip, the audio will sound as is, with no effects present. Next, duplicate the clip. This will be the first AAC/EG. Go to the drop-down box and turn on the devices you want to use. The top box takes you to the main effect, and the second will bring up every adjustable parameter within the effect. When you identify a parameter that you want to change, go to the red line in the audio clip window (on the right above) and draw in the animation you want.
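Conceptually, a dummy clip boils down to an automation curve animating an effect parameter over whatever audio passes through the track. Here is a hypothetical Python/NumPy sketch of that idea (the simple gain “device” and the breakpoint values are my own illustration, not Ableton's internals):

```python
import numpy as np

SR = 48000

def envelope(breakpoints, n_samples):
    """The drawn red line in a clip: linear segments between
    (position, value) breakpoints, with positions running 0..1."""
    positions = [bp[0] for bp in breakpoints]
    values = [bp[1] for bp in breakpoints]
    return np.interp(np.linspace(0, 1, n_samples), positions, values)

# A stand-in effect "device": a simple gain whose amount the clip animates.
def gain_device(audio, amount):
    return audio * amount

# Stand-in source audio routed through the track holding the clip.
source = np.sin(2 * np.pi * 110 * np.arange(SR) / SR)

# The "clip": swell up over the first half, fade out over the second.
env = envelope([(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)], SR)
out = gain_device(source, env)
```

Swap the gain for a delay mix, an EQ band or a reverb amount and you have the same move the duplicated clips make with each device parameter.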

Once I finally understood all this, I began to design AAC/EG Modules. Each module is a track with 2 to 4 different envelope template clips. Each instance of the clip can be shaped by adjusting the automation (which I am calling animation) of the effects that are on the track. One technique I like is layering three effects over three clips: the track contains three effects (a delay, an EQ and a reverb), so the first clip has one effect on, the second has two, and the third has three. Initially, I tried linking several modules (tracks containing the clips) together, but found this too cumbersome. The option to layer modules is in the routing in each particular set.

To use the modules, insert another audio track that will hold the samples you are using. I label this track Source. Now there are several routing options, but the main idea is that the AAC/EG modules need sound coursing through them in order to perform. On the In/Out tab on the track, audio can be routed from someplace and routed to some place. The best mixing option so far is to have the AAC/EG modules receive audio from source and then everybody sends audio to the Master Volume. This allows the faders to act as dry/wet attenuators. The Source can be heard as is or through the AAC/EG Modules and Clip Templates.
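The routing can be sketched as simple signal math: Source feeds every module in parallel, everything sums at the Master, and each fader scales its own chain. A toy Python/NumPy sketch of that mix (the comb-style “module” is an illustrative stand-in, not one of my actual racks):

```python
import numpy as np

def mix_to_master(source, modules, faders):
    """Source feeds every module in parallel; Source and all module
    outputs sum at the Master, so each fader is a dry/wet attenuator."""
    master = faders["source"] * source
    for name, fx in modules.items():
        master = master + faders[name] * fx(source)
    return master

# Stand-in source and one toy "module" (a comb-style delay).
source = np.sin(2 * np.pi * 220 * np.arange(48000) / 48000)
modules = {"combed": lambda x: x + 0.5 * np.roll(x, 240)}
faders = {"source": 0.8, "combed": 0.4}

master = mix_to_master(source, modules, faders)
```

Pull the module fader to zero and you hear Source as is; pull Source down and the mix goes fully wet.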

So far, I have created four modules: Combed, Mangled, Stutter Delay and Harmonics Flinger. [Post Publication Addendum: More modules have been added to the above, and all are being fine-tuned. Our April 14th SITES event was rained out (rescheduled for May 19th), so the debut of the Abeju Synth Station will be tonight as part of the Electronic Appetizers program at Arcana. I am playing a piece called Perimeter Centre, which will feature the Ripplemaker semi-modular iOS synth as my sound source for the Abeju Synth Station. Come listen!]

Audiorigami Phase 2

Since releasing Audiorigami (Meditations on the Fold), my sonsense of how to explore the Fold has shifted. This shift is in sync with Glenna Batson’s return to Durham and the start of a monthly Human Origami Jam. Glenna is interested in exploring folds through a variety of deep somatic frameworks. She narrates the biomolecular potentials that the body travails from utero through the many modulating intersections of growth. My own sonsense of the Fold is opening to the quantum aspects of sound and further harmonic interplay. I sense that these sonic realms might allow access to some basic templates of life. Perhaps sound, in the form of patterned frequencies, guides life into being. Perhaps harmonic frequencies are part of the templates for the growth and movement of life forms through space and time. That is what I am playing with here.

The focus of Audiorigami will now be to explore the changing shapes of sounds themselves. Audiorigami will propagate, excavate, and modulate the folds that emerge from and disappear into the waveforms that are the vehicle of sound. Modular/granular synthesis and frequency modulation are the methods for engaging with sound media. I plan to more carefully curate the sound sources I use and to do more sampling from my own recorded sounds.

Here are some excerpts from the Human Origami Jam which happened last month at ADF Studios. Glenna leads an exploration of lines and trajectories, corners and angles. The soundscape is my first rendering with some of the Abeju Synth Station modules I created from “dummy clips” in Ableton, coupled with TAL- Noisemaker VST synth plugin and Ripplemaker on the iPad.

Next Human Origami Jam will happen THIS FRIDAY March 15th at Joy of Movement Studios in Pittsboro NC from 4:30 to 6:30 pm. https://www.thejoyofmovementcm.com

Come join us!