Riding and Playing the en/Harmonic Waves

My quest to synthesonize Ableton Live has taken an exciting new turn. Last Sunday, we discovered that by miking The Bells at the Central Park School Soundgarden, I can run that sound through Ableton and into the various synth modules and FX racks I am building. The Abeju Synth Modules and FX Racks capture most of the harmonics that arise from Eleanor’s bell playing. Those harmonics can then be shaped by envelopes, attenuation and, of course, granular synthesis. My goal is to gradually shape the bell harmonics into a watery stream sound. This will be part of the soundscape for The Place ReSounds of Water (TPRSW) on April 14th at 4pm for SITES Season 2018-19.

When iBoD first started playing with The Bells, I recorded and analyzed their harmonic content. These bells are former compressed-air tanks with the bottoms cut off, so the metal is not pure; it is some kind of alloy. This translates to lots of harmonic AND enharmonic content! A pure metal would render purer harmonics. Those pure harmonics are pretty, often beautiful, but my ear grows tired of the stasis of it all. The idea of purity in all of its forms is an illusion that leads to much misunderstanding and anguish in the world. Think about what striving for purity has given us: genocides, fascism, chronic autoimmune diseases, disconnection from and attempts to conquer nature, diminished empathy, and on and on. It is my prayer that riding and faithfully playing all the en/harmonic waveforms will encourage evolutionary growth. That is what I am going for!
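For anyone curious what that difference sounds like in the raw, here is a quick Python/numpy sketch (my own illustration, not the analysis I did on The Bells) comparing a tone built from pure integer-multiple harmonics with one built from detuned, bell-like partials. The partial ratios are made-up stand-ins, purely for demonstration:

```python
import numpy as np

SR = 44100          # sample rate in Hz
DUR = 3.0           # seconds
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

def additive_tone(fundamental, partial_ratios, decay=1.5):
    """Sum sine partials at fundamental * ratio, each fading out over the tone."""
    tone = np.zeros_like(t)
    for i, ratio in enumerate(partial_ratios, start=1):
        amp = 1.0 / i                        # higher partials start quieter
        env = np.exp(-decay * i * t / DUR)   # and decay faster
        tone += amp * env * np.sin(2 * np.pi * fundamental * ratio * t)
    return tone / np.max(np.abs(tone))

# "Pure" partials: exact integer multiples of the fundamental
harmonic = additive_tone(220.0, [1, 2, 3, 4, 5, 6])

# Bell-like partials that drift off the integers (illustrative values only)
enharmonic = additive_tone(220.0, [1, 2.07, 2.94, 4.16, 5.43, 6.79])
```

Even at this toy scale, the detuned partials beat and shimmer against each other instead of settling into stasis.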

TPRSW is my first attempt to sync up with the National Water Dance. My timing is off, as this is not the year for National Water Dance; however, I am hoping this will kick off some interest for 2020. The idea for TPRSW is to give prolonged loving attention to water in the form of sound, light and the liquid itself. The soundscape will consist of Eleanor Mills playing The Bells, and dejacusse (aka Jude Casseday) capturing the en/harmonic waves from The Bells and morphing them into a watery-feeling soundbed. Then Susanne Romey will play Native American flute over that for a while, and then we start the wave again. The movers will pour water from vessel to vessel. An altar of flowers may be built. The whole thing is a mystery.

Our location at the Soundgarden at Central Park School gets full afternoon sun, so the visuals might include sparkles and shimmers of water. We could be lit up! If it is overcast, the air will be moist and the sounds of water will carry more clearly. If it threatens rain on Sunday, we will do it on Saturday instead! Or, perhaps, we will figure something else out and perform as it rains.

Whatever we do will be in praise of water!

Synthesizing in Ableton Live: Envelope Generators

My first pass at working with Ableton as a modular synth was to explore “dummy” clips. These are audio clips whose actual audio content is muted. The clip then becomes a container for audio-effects automation that can be applied to any audio routed through whatever track the clip sits on. In effect, the clip becomes an Envelope template that can be pulled into other projects.

Ableton Live allows the sound designer to see and manipulate many parameters within each individual clip. This is most amazing, and something I have not paid enough attention to. Here is a picture of the clip console from the Ableton Manual:

For Audio Animation Clips, the focus is on the Sample box and the Envelopes box. The slider in the Sample box is used to mute the audio by simply turning it all the way down. Now the clip is ready to be shaped into an envelope. (That is how Audio Animation Clips will function in my Abeju Synth Station – as Envelope Generators, or AAC/EG.) The two drop-down menus in the Envelopes box give the sound designer access to all the effects and mix processing available on the track, for this clip only. So whatever I draw onto the audio clip itself only happens when that particular clip is playing. In the case of an AAC/EG, the clip will play the animation of the envelope, and that envelope will be applied to any audio routed through the track.

In order to create an AAC/EG, we need an audio track and an audio clip. The length of the clip doesn’t seem to matter because it loops. (Currently, I am sticking with 8, 12 or 24 bars, and I hear potential for longer clips with more complex movement in the automation in the future.) Use the volume slider to mute the audio within the clip. (Addendum: Experience has taught me that the original audio in the clip should be low volume. I am going to start experimenting with creating audio specifically for these clips.) Then assemble a rack of the audio effects you want to use to shape sound and insert them on the track. In the original audio clip, use the drop-down box in the clip to access each effect, and turn them all to Device Off by moving the red line in the clip box to the off position. Now you have an open audio sleeve with no effects enabled; when source audio is routed through this clip, it will sound as is, with no effects present. Next, duplicate the clip. This will be the first AAC/EG. Go to the drop-down box and turn on the devices you want to use. The top box takes you to the main effect, and the second brings up every adjustable parameter within that effect. When you identify a parameter you want to change, go to the red line in the audio clip window (on the right above) and draw in the animation you want.
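If it helps to see the idea outside of Live’s interface, here is a rough Python sketch of what an AAC/EG boils down to: a drawn curve, stretched to the clip’s length, driving one effect parameter while the source audio passes through. The breakpoint list and the one-pole low-pass are my own stand-ins, not anything from Ableton:

```python
import numpy as np

SR = 44100

def envelope(breakpoints, n_samples):
    """Interpolate drawn breakpoints (time 0..1, value 0..1) into a
    per-sample control curve, like automation drawn across a clip."""
    times, values = zip(*breakpoints)
    return np.interp(np.linspace(0, 1, n_samples), times, values)

def one_pole_lowpass(signal, cutoff_curve):
    """Very rough time-varying low-pass: the envelope opens and closes the filter."""
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        coeff = np.clip(cutoff_curve[i], 0.001, 0.999)  # 0 = closed, 1 = open
        y = y + coeff * (x - y)
        out[i] = y
    return out

# Source audio routed "through" the module (noise as a stand-in here)
source = np.random.uniform(-1, 1, SR * 4)

# The "drawn" envelope: nearly closed, sweep open over the clip, snap shut
curve = envelope([(0.0, 0.05), (0.75, 0.9), (1.0, 0.05)], len(source))

shaped = one_pole_lowpass(source, curve)
```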

Once I finally understood all this, I began to design AAC/EG Modules. Each module is a track with 2 to 4 different envelope template clips. Each instance of the clip can be shaped by adjusting the automation (which I am calling animation) of the effects that are on the track. One technique I like is layering three effects over three clips: the track contains three effects (a delay, an EQ and a reverb), so the first clip has one effect on, the second has two, and the third has all three. Initially, I tried linking several modules (tracks containing the clips) together but found this too cumbersome. The option to layer modules lies in the routing of each particular set.
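In plain terms, the three clips just switch on deeper and deeper slices of the same chain. A little Python sketch of that layering, with throwaway placeholder effects standing in for Live’s delay, EQ and reverb devices:

```python
import numpy as np

def delay(sig, time_samples=4410, feedback=0.4):
    """Simple feedback delay."""
    out = np.copy(sig).astype(float)
    for i in range(time_samples, len(out)):
        out[i] += feedback * out[i - time_samples]
    return out

def eq_highcut(sig, strength=0.5):
    """Crude high-cut: blend each sample with its neighbor."""
    out = np.copy(sig).astype(float)
    out[1:] = (1 - strength) * sig[1:] + strength * sig[:-1]
    return out

def reverb(sig, mix=0.3):
    """Placeholder 'reverb': a few quieter delays summed together."""
    wet = sum(delay(sig, time_samples=n, feedback=0.2) for n in (1500, 2300, 3100)) / 3
    return (1 - mix) * sig + mix * wet

chain = [delay, eq_highcut, reverb]

def clip_variant(sig, n_effects):
    """Clip 1 enables one device, clip 2 enables two, clip 3 enables all three."""
    for effect in chain[:n_effects]:
        sig = effect(sig)
    return sig
```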

To use the modules, insert another audio track to hold the samples you are using. I label this track Source. Now there are several routing options, but the main idea is that the AAC/EG modules need sound coursing through them in order to perform. On the track’s In/Out tab, audio can be routed in from one place and out to another. The best mixing option so far is to have the AAC/EG modules receive audio from Source, and then have every track send its audio to the Master. This allows the faders to act as dry/wet attenuators: the Source can be heard as is, or through the AAC/EG Modules and Clip Templates.
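Conceptually, that routing is nothing more than a weighted sum at the Master: the Source fader sets the dry level and each module fader sets its wet level. A small sketch of the idea (again my own illustration, not Live’s actual mixer):

```python
import numpy as np

def master_mix(source, module_outputs, source_fader, module_faders):
    """Sum the dry Source track and each AAC/EG module output onto the master bus.

    source         : dry audio from the Source track
    module_outputs : list of arrays, one per AAC/EG module, each fed from Source
    source_fader   : 0..1 dry level
    module_faders  : list of 0..1 wet levels, one per module
    """
    mix = source_fader * source
    for out, fader in zip(module_outputs, module_faders):
        mix += fader * out
    # Keep the master out of clipping range
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# e.g. mostly dry Source with two module outputs blended in:
# final = master_mix(src, [combed_out, mangled_out], 0.8, [0.3, 0.5])
```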

So far, I have created four modules: Combed, Mangled, Stutter Delay and Harmonics Flinger. [Post-Publication Addendum: More modules have been added to the above, and all are being fine-tuned. Our April 14th SITES event was rained out (rescheduled for May 19th), so the debut of the Abeju Synth Station will be tonight as part of the Electronic Appetizers program at Arcana. I am playing a piece called Perimeter Centre, which will feature the Ripplemaker semi-modular iOS synth as my sound source for the Abeju Synth Station. Come listen!]

Audiorigami Phase 2

Since releasing Audiorigami (Meditations on the Fold), my sonsense of how to explore the Fold has shifted. This shift is in sync with Glenna Batson’s return to Durham and the start of a monthly Human Origami Jam. Glenna is interested in exploring folds through a variety of deep somatic frameworks. She narrates the biomolecular potentials that the body travels from utero through the many modulating intersections of growth. My own sonsense of the Fold is opening to the quantum aspects of sound and further harmonic interplay. I sense that these sonic realms might allow access to some basic templates of life. Perhaps sound, in the form of patterned frequencies, guides life into being. Perhaps harmonic frequencies are part of a template for the growth and movement of life forms through space and time. That is what I am playing with here.

The focus of Audiorigami will now be to explore the changing shapes of sounds themselves. Audiorigami will propagate, excavate, and modulate the folds that emerge from and disappear into the waveforms that are the vehicle of sound. Modular/Granular Synthesis and Frequency Modulation are the methods for engaging with sound media. I plan to curate my sound sources more carefully and to do more sampling from my own recorded sounds.
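For anyone who has not played with frequency modulation, the whole trick is one oscillator bending the phase of another; the modulation index sets how far the waveform folds away from a plain sine. A bare-bones Python sketch (the frequencies and index values are arbitrary, just for illustration):

```python
import numpy as np

SR = 44100
DUR = 2.0
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

def fm_tone(carrier_hz, mod_hz, index):
    """Classic two-operator FM: y(t) = sin(2*pi*fc*t + index * sin(2*pi*fm*t))."""
    modulator = np.sin(2 * np.pi * mod_hz * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

plain = fm_tone(220.0, 110.0, 0.0)    # index 0: an unfolded, plain sine
folded = fm_tone(220.0, 110.0, 6.0)   # higher index: richer, more folded spectrum
```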

Here are some excerpts from the Human Origami Jam that happened last month at ADF Studios. Glenna leads an exploration of lines and trajectories, corners and angles. The soundscape is my first rendering with some of the Abeju Synth Station modules I created from “dummy clips” in Ableton, coupled with the TAL-Noisemaker VST synth plugin and Ripplemaker on the iPad.

The next Human Origami Jam will happen THIS FRIDAY, March 15th, at Joy of Movement Studios in Pittsboro, NC, from 4:30 to 6:30 pm. https://www.thejoyofmovementcm.com

Come join us!