Riding and Playing the en/Harmonic Waves

My quest to synthesonize Ableton Live has taken an exciting new turn. Last Sunday, we discovered that by miking The Bells at the Central Park School Soundgarden, I can run that sound through Ableton and into the various synth modules and FX racks I am building. The Abeju Synth Modules and FX Racks capture most of the harmonics that arise from Eleanor's bell playing. The harmonics can then be shaped by envelopes, attenuation and, of course, granular synthesis. My goal is to gradually shape the bell harmonics into a watery stream sound. This will be part of the soundscape for The Place ReSounds of Water (TPRSW) on April 14th at 4pm for SITES Season 2018-19.

When iBoD first started playing with The Bells, I recorded and analyzed their harmonic content. These bells are former compressed-air tanks with the bottoms cut off, so the metal is not pure; it is some kind of alloy. This translates to lots of harmonic AND enharmonic content! A pure metal would render purer harmonics. These pure harmonics are pretty, often beautiful, but my ear grows tired of the stasis of it all. The idea of purity in all of its forms is an illusion that leads to much misunderstanding and anguish in the world. Think about what striving for purity has given us: genocides, fascism, chronic autoimmune diseases, disconnection from and attempts to conquer nature, diminished empathy, and on and on. It is my prayer that riding and faithfully playing all the en/harmonic waveforms will encourage evolutionary growth. That is what I am going for!

TPRSW is my first attempt to sync up with the National Water Dance. My timing is off, as this is not the year for the National Water Dance; however, I am hoping this will kick off some interest for 2020. The idea for TPRSW is to give prolonged loving attention to water in the form of sound, light and the liquid itself. The soundscape will consist of Eleanor Mills playing The Bells and dejacusse (aka Jude Casseday) capturing and playing the en/harmonic waves from The Bells, morphing them into a watery-feeling soundbed. Then Susanne Romey will play Native American flute over that for a while, and then we start the wave again. The movers will pour water from vessel to vessel. An altar of flowers may be built. The whole thing is a mystery.

Our location at the Soundgarden at Central Park School gets full afternoon sun, so the visuals might include sparkles and shimmers of water. We could be lit up! If it is overcast, the air will be moist and the sounds of water will carry more clearly. If it threatens rain on Sunday, we will do it on Saturday instead! Or, perhaps, we will figure something else out and perform as it rains.

Whatever we do will be in praise of water!

Synthesizing in Ableton Live: Envelope Generators

My first pass at working with Ableton as a modular synth was to explore "dummy" clips. These are tracks containing an audio clip whose actual content is muted. The clip is then transformed into a container for audio-effects automation that can be applied to any audio routed through the track the clip sits on. The clip becomes an Envelope template that can be pulled into other projects.

Ableton Live allows the sound designer to see and manipulate many parameters within each individual clip. This is most amazing, and something I have not paid enough attention to. Here is a picture of the clip console from the Ableton Manual:

For Audio Animation Clips, the focus is on the Sample box and the Envelopes box. The slider in the Sample box is used to mute the audio by simply turning it all the way down. Now the clip is ready to be shaped into an envelope. That is how Audio Animation Clips will function in my Abeju Synth Station: as Envelope Generators (AAC/EG). The two drop-down windows in the Envelopes box give the sound designer access to all the effects and mix processing available on the track, for this clip only. So whatever I draw onto the audio clip itself only happens when that particular clip is playing. In the case of an AAC/EG, the clip will play the animation of the envelope, and the envelope will be applied to any audio routed through the track.
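The concept travels outside Ableton, too. Here is a minimal, hypothetical Python/numpy sketch of what an AAC/EG does (made-up names, not my actual workflow): the clip holds no audio of its own, only a drawn breakpoint curve, and that curve is applied to whatever audio passes through the track. A gain curve here, though the same idea could drive any device parameter.

```python
import numpy as np

def apply_envelope(audio, breakpoints):
    """Apply a drawn envelope (a dummy-clip-style automation curve) to audio.

    breakpoints: (position, value) pairs. Position runs 0..1 across the
    clip; value is the parameter setting at that point (here, a gain).
    """
    positions, values = zip(*breakpoints)
    # Sample the curve once per audio sample, interpolating linearly
    # between breakpoints -- the "red line" drawn in the clip view.
    t = np.linspace(0.0, 1.0, len(audio))
    curve = np.interp(t, positions, values)
    return audio * curve

# A swell: silence, up to full volume at the clip's midpoint, back to silence.
sr = 44100
source = 0.1 * np.random.randn(2 * sr)   # stand-in for audio routed through
shaped = apply_envelope(source, [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)])
```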

In order to create an AAC/EG, we need an audio track and an audio clip. The length of the clip doesn't seem to matter because it loops. (Currently I am sticking with 8, 12 or 24 bars, though I hear potential for longer clips with more complex movement in the automation.) Use the volume slider to mute the audio within the clip. Then assemble a rack of the audio effects you want to use to shape sound and insert them on the track. In the original audio clip, use the drop-down box in the clip to access each effect, and turn them all to Device Off by moving the red line in the clip box to the off position. Now you have an open audio sleeve with no effects enabled: when source audio is routed through this clip, it will sound as is. Next, duplicate the clip. This will be the first AAC/EG. Go to the drop-down box and turn on the devices you want to use. The top box takes you to the main effect, and the second brings up every adjustable parameter within that effect. When you identify a parameter you want to change, go to the red line in the audio clip window (on the right above) and draw in the animation you want.

Once I finally understood all this, I began to design AAC/EG Modules. Each module is a track with 2 to 4 different envelope template clips. Each instance of the clip can be shaped by adjusting the automation (which I am calling animation) of the effects that are on the track. One technique I like is layering three effects over three clips: the track contains three effects (a delay, an EQ and a reverb), so the first clip has one effect on, the second has two, and the third has three. Initially I tried linking several modules (tracks containing the clips) together, but found this too cumbersome. The option to layer modules lives in the routing of each particular Live Set.

To use the modules, insert another audio track to hold the samples you are using; I label this track Source. There are several routing options, but the main idea is that the AAC/EG modules need sound coursing through them in order to perform. On the track's In/Out tab, audio can be routed from someplace and to someplace. The best mixing option so far is to have the AAC/EG modules receive audio from Source and then have everybody send audio to the Master. This allows the faders to act as dry/wet attenuators: the Source can be heard as is, or through the AAC/EG Modules and Clip Templates.
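As a toy sketch of that routing (hypothetical names, same numpy spirit as the envelope sketch above): the Source and the module outputs are summed at the Master, so each fader behaves as a dry/wet control.

```python
import numpy as np

def mix_to_master(dry, wets, dry_fader, wet_faders):
    """Sum the Source (dry) and the AAC/EG module outputs (wets) at the
    Master track; each fader then acts as a dry/wet attenuator."""
    out = dry * dry_fader
    for wet, fader in zip(wets, wet_faders):
        out = out + wet * fader
    return out

source = 0.1 * np.random.randn(44100)   # the Source track
module_out = 0.5 * source               # stand-in for one module's output
master = mix_to_master(source, [module_out], dry_fader=0.7, wet_faders=[1.0])
```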

So far, I have created four modules: Combed, Mangled, Stutter Delay and Harmonics Flinger. They will be useful when iBoD plays The Bells on April 14th for SITES. More on that event very soon.

Audiorigami Phase 2

Since releasing Audiorigami (Meditations on the Fold), my sonsense of how to explore the Fold has shifted. This shift is in sync with Glenna Batson's return to Durham and the start of a monthly Human Origami Jam. Glenna is interested in exploring folds through a variety of deep somatic frameworks. She narrates the biomolecular potentials that the body travails, from utero through the many modulating intersections of growth. My own sonsense of the Fold is opening to the quantum aspects of sound and further harmonic interplay. I sense that these sonic realms might allow access to some basic templates of life. Perhaps sound, in the form of patterned frequencies, guides life into being. Perhaps harmonic frequencies are part of a template for the growth and movement of life forms through space and time. That is what I am playing with here.

The focus of Audiorigami will now be to explore the changing shapes of sounds themselves. Audiorigami will propagate, excavate, and modulate the folds that emerge from and disappear into the waveforms that are the vehicle of sound. Modular/Granular Synthesis and Frequency Modulation are the methods for engaging with sound media. I plan to curate my sound sources more carefully and to do more sampling from my own recorded sounds.

Here are some excerpts from the Human Origami Jam which happened last month at ADF Studios. Glenna leads an exploration of lines and trajectories, corners and angles. The soundscape is my first rendering with some of the Abeju Synth Station modules I created from "dummy clips" in Ableton, coupled with the TAL-NoiseMaker VST synth plugin and Ripplemaker on the iPad.

The next Human Origami Jam will happen THIS FRIDAY, March 15th, at Joy of Movement Studios in Pittsboro, NC, from 4:30 to 6:30 pm. https://www.thejoyofmovementcm.com

Come join us!

Monthly Human Origami Jam

Glenna Batson is back in town and we have started a monthly Human Origami Jam at ADF Studios on Broad Street in Durham. Join us this Friday, February 15th, for an exploration guided by the foldings of cells – the building blocks of nature. While Glenna guides you from a macroscopic to a microscopic sense of the cell, dejacusse will sculpt sonic forms in the atmosphere of the room. The soundscape will swathe you in harmonics, whispers, bounce and back to the ambient sound of the room. Sound as water is my theme, and since cells are mostly made of water…

Human Origami Jam

TODAY! Friday February 15th

4:30 pm to 6:30 pm

ADF Studios Broad St. Durham NC

Donations accepted

I would love to see you there!!

Voices from Eris released 1/3/19

Cover art by Bronwyn Mackenzie

Voices from Eris was released on the Shifting Waves label on January 3, 2019. Here is a link to the Bandcamp page where you can buy the album!

http://shiftingwaves.bandcamp.com/album/voices-from-eris

Big thanks to all the artists and especially to Margaret Harmer, who brought mad skillz and expanded sonscience to the project.

Big thanks to all of my friends who take the time and interest in my being in the world. I love you and would love to talk with you over a cup of tea very soon!

Sonification and Life Forms II

Since excitedly sharing the results of a sonic analysis of Lemur Gut Microbiomes, I have been working up a soundscape based on the quick sketch included in the first Sonification and Life Forms post. (You can hear both below.) In order to get feedback on the work, I sent the first blog post to Mark Ballora, with whom I had taken a data sonification workshop in May. His response helped me realize the need to clarify my sonification process. So here is a description of the project:

The purpose of the sonification is to illustrate the changes in the baby lemurs' microbiomes from birth to weaning. Microbiome data was captured through fecal samples taken at birth, during nursing, at the introduction to solid foods, on regular solid foods, and twice while the babies were weaning. The sonification illustrates changes in the type and amount of bacterial phyla present at each of the six sampling stages for all three lemur babies. In addition, the mother's microbiome was sampled at the time she gave birth, so her profile, which was assumed not to change, provides a baseline adult profile against which to compare the babies' changes.

There were 255 strains of bacteria collected over the course of the study. These fell into 95 classes and 35 phyla. I focused on the phyla, as my plan was to assign a note value to each bacterial data point, so I needed a smaller data set. The data set was narrowed further (and made more interesting) by focusing on a family: a mother, Pryxis, and her triplets, Carne, Puck and Titan. This group allows us not only to hear the variety of changes in the babies' microbiomes, but to compare the changes as well.

The original data set included 9 lemur babies and 7 mothers, so the first step was to go through the phylum data sheets and pull out the profiles for Pryxis, Carne, Puck and Titan. A phylum profile is the type and amount of each phylum present at each data collection point; the profile changes over time from collection point to collection point. The microbiomes of these four lemurs housed 15 phyla (at a density of >0.001) out of the 35 found in the entire study group.

The next step was to assign a note value to each phylum. Since there are only 12 notes in the chromatic scale, some phyla would need to share a note, albeit in different octaves. Same note, different octave lends a tonal consonance to the profiles. So what might this consonance represent? Five phyla had the greatest density and presence across all the samples, so I assigned those to the note G, from octave 1 to octave 5. The remaining 10 phyla were assigned note values based on their presence throughout the profiles, and on their consonance/dissonance with the tonal center G.

In order to capture the density of each phylum, a MIDI velocity range was aligned with the decimal percentage of that phylum in each profile. MIDI velocity determines the force with which a note is played, so the velocity ranges render a clear sense of presence, or loudness, for each note. The decimal percentages run from 0.001 to 1.0, and the MIDI velocity range runs from 1 to 127. Here is a chart of how these ranges overlap:

So, for example, Proteobacteria present at 0.25473 would be represented by the note G at octave 3, set at velocity 40. The largest sample among all the data points captured for this project was around 0.9 and the smallest was 0.001 (a cutoff point, as there were bacterial phyla present down into the 0.0001 range). Here is the chart for Titan showing note assignments and density values through each sample stage:
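Since the chart does the real work here, any code version is only a sketch of the idea. Here is one hypothetical density-to-velocity mapping, a log-scaled interpolation pinned to the 0.001-to-1 and 1.0-to-127 endpoints; its in-between values will differ from the actual chart bands (which, as above, put 0.25473 at velocity 40):

```python
import math

def density_to_velocity(density, floor=0.001):
    """Map a phylum's decimal share (0.001..1.0) onto MIDI velocity 1..127.

    Hypothetical log-scale mapping; the actual chart of overlapping
    ranges used for the piece has its own band edges.
    """
    density = max(floor, min(density, 1.0))
    # Interpolate on a log axis so barely-present phyla (0.001-0.01)
    # don't all collapse onto velocity 1.
    frac = (math.log10(density) - math.log10(floor)) / -math.log10(floor)
    return round(1 + frac * 126)

print(density_to_velocity(0.001))   # -> 1, the faintest presence
print(density_to_velocity(1.0))     # -> 127, a fully dominant phylum
```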

My sounding board for this data comparison is Ableton Live, a digital audio workstation (DAW). Each lemur is represented by a "voice", a MIDI instrument in Ableton. Puck, Titan and Carne are bell-like voices that blend together, while Pryxis, the mother, is a warm, pervasive woodwind. She envelops and contains the changes in the babies' phylum profiles.

All the lemurs had a Phylum Profile Chart like the one above. In the DAW, the instrument tracks for Titan, Puck and Carne each contain a MIDI clip of the notes for the phylum colonies present at each stage of dietary change; the clips for each stage are laid out as a "scene" in Ableton. As an example, Titan's Phylum Profile at birth was:

Proteobacteria (Note Value=G3) set at 101/127 in intensity

Euryarchaeota (Note Value=A2) set at 34/127

Firmicutes (Note Value=G2) set at 11/127

Cyanobacteria (Note Value=A#4) set at 1/127

Other Bacteria (Note Value=B3) set at 1/127

Spirochaetae (Note Value=G5) set at 1/127

Titan's Birth Phylum Profile is the multi-octave chord G-A-A#-B. Three of the phyla were barely present, so those tones are almost inaudible in the chord; however, the two Gs and the A ring out. The total number of phyla present at each dietary stage varied from 3 to 14, so the multi-octave chord becomes more dense and dissonant where the phyla are more varied. Here is a look at the tracks (individual lemur voices) and the "scenes" (the phylum profiles from all 3 babies at each stage):
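As a hypothetical code rendering of one of those clips (my actual workflow lives entirely in Ableton), Titan's birth-profile chord could be written out as a MIDI file with Python's mido library. Note numbers below assume the common C4 = 60 convention; Ableton labels middle C as C3, so its on-screen octave names sit one lower.

```python
import mido

# Titan's birth profile: note name -> MIDI velocity (from the list above).
BIRTH_PROFILE = {
    "G3": 101,   # Proteobacteria
    "A2": 34,    # Euryarchaeota
    "G2": 11,    # Firmicutes
    "A#4": 1,    # Cyanobacteria
    "B3": 1,     # Other bacteria
    "G5": 1,     # Spirochaetae
}

OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
           "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_number(name):
    """Convert a name like 'A#4' to a MIDI note number (C4 = 60)."""
    pitch, octave = name[:-1], int(name[-1])
    return 12 * (octave + 1) + OFFSETS[pitch]

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# All six notes start together and ring for one 4/4 bar: one "scene" chord.
for name, velocity in BIRTH_PROFILE.items():
    track.append(mido.Message("note_on", note=note_number(name),
                              velocity=velocity, time=0))
for i, name in enumerate(BIRTH_PROFILE):
    track.append(mido.Message("note_off", note=note_number(name),
                              time=4 * mid.ticks_per_beat if i == 0 else 0))

mid.save("titan_birth_profile.mid")
```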

The first sketch was just the mother's phylum profile droning under the three babies' profiles, expressed as a stacked megachord. All 3 baby profiles rang out together four times at each stage, starting with birth and ending with the second wean. What could be heard was a homogeneity and consonance between the Mother and babies at birth that gradually became more diverse and dissonant as solid food was introduced. However, by the second wean, the babies' and mother's profiles became more consonant again. The researcher said this illustrated the conclusions of her study.

As a soundscape artist, I felt there was more here than just that basic chordal movement. The babies' phylum profiles were quite different from each other as well, which is lost in the chord presentation. For example, Carne's birth profile has only 3 phyla, while Titan's has twice that number. One way to hear this level of contrast in the baby profiles is to articulate the chords into riffs. Now we can hear the interplay of the changes in their microbiomes. In addition, we can hear how consonant/dissonant and dense the phyla become as outside food is introduced into their systems. Titan's phylum profiles arpeggiate down, Puck's go up, and Carne's go down then up. A practiced deep listener could key in on a particular profile and follow it through to the end. I played around with rhythmic shifts to create more movement in the stages where the phylum profiles were incredibly dense and diverse. The last two arpeggiating riffs you will hear are all of the phylum notes sounding through twice. And listen for the elevated levels of Proteobacteria in all 3 profiles at birth – that G3 rings out at that point.
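A minimal sketch of that articulation step, reusing the note numbers from the MIDI sketch above (the direction assignments follow the description here; the actual riffs in the piece have their own rhythms):

```python
def riff(notes, direction):
    """Articulate one stage's chord into a directional run:
    Titan arpeggiates down, Puck up, Carne down then up."""
    ordered = sorted(notes)
    if direction == "up":
        return ordered
    if direction == "down":
        return ordered[::-1]
    if direction == "down-up":
        return ordered[::-1] + ordered[1:]   # bounce off the bottom note
    raise ValueError(direction)

titan_birth = [55, 45, 43, 70, 59, 79]   # G3, A2, G2, A#4, B3, G5
print(riff(titan_birth, "down"))         # from G5 down to G2
```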

As I put this full family profile together, another, more nuanced movement in the data appeared. In the chord rendering, I heard the data get more dissonant and dense from nursing through the first wean, and then the phyla thinned out and became more consonant at the last wean. In the riff rendering, I can hear a contraction and more consonance at the Intro to Solid Foods stage as well as at the second wean; that was not clear in the chord presentation. When I checked my data records, there was indeed a drop in the number of phyla present between the Nurse and Intro stages. I love that a nuance appeared in the listening that made me go back and check the data. That is exactly how I hope this process will work.

Some other things for future consideration:

Aligning each phylum tone to a particular beat might help the listener hear the differences from stage to stage more clearly.

When assigning notes to data points, closer attention to the harmonic overtone series might help clarify the role consonance and dissonance play in hearing the data.

The voices of the baby profiles have similar timbres as a unifying element. The profiles could instead be given very distinct voices, which might make the variances between them more audible.

Up next – Sourdough Songs.

Code Music

During a trip to Washington D.C. in 2017, my granddaughter Jahniya purchased a silver necklace from the International Spy Museum gift shop. The necklace spells the word "strength" in Morse Code, with round beads for the dots and longer tube beads for the dashes. When she showed me the necklace, I saw the dots and dashes as stabs and long tones and began wondering about soundscapes with words and phrases spelled out in Morse Code.

This idea is a perfect companion to TRIC Questions and my interest in using overlapping "predetermined" sound articulation patterns in music. Morse Code is also a way to embed words into soundscapes. People seem to prefer music with words; we like a story, an image and the sound of the human voice. For me, the singer and the words hijack a large share of the ear-brain in any listening situation, which is why I prefer my music sans words: let the instruments convey the narrative. And since it is often claimed that around 70% of meaning is carried by the non-verbal components of speech, including cadence and tonality, music should be capable of conveying information and meaning quite effectively. Morse Code brings a verbal aspect into the soundscape without words taking over the show.

Here is the simple instruction sheet for creating Morse Code:

This will be my idiosyncratic metric template. The time signature is open and based on units: 1 unit = 1 beat = the length of a dot. The measures can be laid out in however many units it takes to complete the word or phrase in Morse Code, with the 7-unit break between iterations of the word.
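Here is a minimal Python sketch of such a template, following the standard Morse timing rules (dot = 1 unit, dash = 3 units, 1-unit gaps inside a letter, 3 units between letters, and the 7-unit break before the word repeats). The letter table only covers the words used in these pieces:

```python
MORSE = {"p": ".--.", "e": ".", "a": ".-", "c": "-.-.", "s": "...",
         "h": "....", "l": ".-..", "o": "---", "m": "--"}

def morse_template(word):
    """Render a word as a unit grid: 1 = sounding, 0 = silent.

    Timing: dot = 1 unit, dash = 3, gap inside a letter = 1,
    gap between letters = 3, then a 7-unit break before the loop repeats.
    """
    units = []
    for i, letter in enumerate(word.lower()):
        if i > 0:
            units += [0, 0, 0]              # 3-unit gap between letters
        for j, mark in enumerate(MORSE[letter]):
            if j > 0:
                units += [0]                # 1-unit gap inside a letter
            units += [1] if mark == "." else [1, 1, 1]
    return units + [0] * 7                  # the 7-unit break

for word in ("peace", "shalom", "salam"):
    print(word, len(morse_template(word)), "units per loop")
```

Because each word yields a different loop length, running several templates as simultaneous clips is what produces the rhythmic offsets described below.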

Feeling sure that modern composers had explored this idea already, I googled "Morse Code music". What I found were instances of Morse Code used as drum patterns, where the code was always tied to a time signature, so the actual prescribed Morse Code timing is not what is heard. Plugging my unit templates into Ableton voices, on the other hand, means the exact Morse Code template is played and preserved in time. It also rhythmically offsets the clips, since they are different lengths. I have played with this idea for over a year using different words, like love and joy, or om shanti. I did an iteration of "peace" over and over for International Peace Day 2017.

Now it is September 2018, and I am playing off the Peace Day soundscape in preparation for the Wake Forest Dance Festival this coming weekend (9/29/18). Justin Tornow was invited as a guest choreographer; she is creating a solo for dancer Maggie Page Bradley, which I will accompany. The sound piece is called PSS – short for Peace Shalom Salam. Each clip is a Morse Code template of peace, shalom or salam, and each template follows the unit pattern described above. Some units are quarter notes and some are eighths, another layer of rhythmic offset. After laying out the templates, I squeezed several of them into smaller time frames (oh, the things you can do in Ableton), speeding them up while maintaining the integrity of the Morse Code template. In addition, I am throwing one of my favorite Ableton audio effects, called Fade to Grey, into this randomness. It is a high-pass and a low-pass filter that move towards each other and basically squish the sound to nothing, while a ping-pong delay holds a bit of the sound in an echoic pattern. This allows the sound to be fractured and reflected in interesting ways. The effect is on every track, so there is the potential to break the sound into multiple glimmering pieces. Here is a short sample of this effect at work:

If you are in Wake Forest near Joyner Park tomorrow, please come to the Wake Forest Dance Festival. There is an open tech rehearsal in the morning, movement around the park in the afternoon, and the dance performances run from 5 to 6:30. As always, I appreciate your support!