Sounding Board

My curiosity about sound is completely engaged by exploring modular synthesis. So far my understanding is often inarticulate and mystified! But thanks to Suzanne Ciani, True Cuckoo, Andrew Huang, Ultrabillions, Hark Madley, Lisa Belladonna, Caterina Barbieri, Moogfest, Bram Bos, and Kim Bjorn’s book Patch and Tweak, I am evolving a different way of creating soundscapes and perceiving the world. This is the stuff of life! Waveforms modulating waveforms, waveforms shaping waveforms, waveforms reflecting, refracting and bouncing around and through us. Energetic matter begins and ends on a wave.

I am focusing my Artist's Residency here at home on improving my mixing skills and building a sounding board. The mixing skills are being put to the test on the recording of Carnatic Water Music that iBoD will release in the next week. As I mixed this recording, I received helpful suggestions from tutorials by Jason Moss, Hark Madley, and Mathew Weiss. These skills are a forever work in progress. As for the sounding board, there are currently three main ingredients: the Elektron Model:Samples as the main sequencer, providing beats/patterns and MIDI triggers to the Behringer Neutron; audio out from both of these units into audio tracks in Ableton Live; and Ableton itself providing drones, loops, and AAC/EG clips that can process audio from either unit. I can do master recordings in Ableton as well.

Even though I want a modular system, I will work with what I have now, and learn, and be ready when my modular system appears. (Make Noise modules are the ones that I want – doo doo do do)

The Model:Samples and I are getting on fairly well. I am learning the architecture of the menus, watching people perform with it to see what key combos they use, and setting up some patterns. The samples available “in the box” are very cool and I am curating my own samples as well. Every sound is potential material, so it is daunting.

The past few days, I experimented with some patch ideas on the Behringer Neutron. I have gotten a lot of growling out of the synth, but no sound that I liked. There is one simple patch I use: Sample and Hold into Delay Time. When the Delay Mix knob is raised and the S&H knob is turned up, there are lots of odd, random pitch artifacts that I enjoy hearing. Today I patched the Osc Mix into a Mult, then ran Mult 1 to the OD (overdrive) In and Mult 2 to Pulse Width 2, and tuned the oscillators to consonant pitches. Slowly turning the Osc Mix knob opens a whole realm of timbres. When the Osc Mix knob was all the way to one side, the tone could be made clear and bell-like. With the oscillators in the square or tone mod shape, the Pulse Width knob seems to act as a filter. The Mod Depth and Envelope Depth can be brought in. This is where I am not sure what is happening – there are changes in the timbre of the tone from the synth. And what exactly is depth? There is a lot to play with depending on where the Osc Mix dial is tuned in.

The third part of this is creating Audio Animation Clips/Envelope Generators (AAC/EG) within Ableton. Envelopes shape the amplitude and modulate the pitch of the sound, and audio animation allows the envelope parameters to move over time. Here is the post on how audio animation can be created in Ableton: https://wp.me/p5yJTY-vL. I use filters to sculpt out harmonics and add texture to the sound of the Model:Samples or the Neutron. So far, I am experimenting with banks of filters to sculpt out or boost particular harmonics, then performing a finer tuning with some EQ. I am listening for a diverse sonic spread, then tuning it in, then spreading, and finally fine-tuning.
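To make the envelope idea concrete outside of Ableton, here is a minimal Python sketch, assuming nothing about Ableton's own clips or API: a simple ADSR-style gain curve whose parameters can themselves be moved over time, which is the gist of what the audio animation does. The attack/decay/sustain/release numbers are arbitrary illustration values, not settings from my sets.

```python
# A rough sketch (not Ableton's API) of an amplitude envelope:
# a gain curve that rises, falls to a sustain level, then releases.
# All values here are arbitrary examples.

def adsr_gain(t, attack=0.1, decay=0.2, sustain=0.6, release=0.5, note_len=2.0):
    """Return an amplitude multiplier (0..1) at time t seconds."""
    if t < attack:                              # rise from 0 to 1
        return t / attack
    if t < attack + decay:                      # fall from 1 to the sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_len:                            # hold at the sustain level
        return sustain
    if t < note_len + release:                  # fade from sustain to 0
        return sustain * (1.0 - (t - note_len) / release)
    return 0.0

# "Audio animation" amounts to moving an envelope parameter over time,
# e.g. lengthening the attack a little on every pass through a loop.
for bar in range(4):
    attack = 0.1 + 0.05 * bar                   # the moving parameter
    print(bar, [round(adsr_gain(t, attack=attack), 2) for t in (0.0, 0.1, 0.5, 2.2)])
```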

The adventure continues!!

National Water Dance – NOT Cancelled

With public events being rescheduled locally, nationally, and globally, there is one event that will go on next month. National Water Dance 2020 will happen as scheduled on April 18th, 2020 at 4 pm EST. This biennial movement choir in honor and healing of water will take place across the country at the same time and will stream across the web. This announcement came from NWD last week:

WE ARE STILL DANCING! Wherever you are on April 18 at 4:00PM EST, alone or self-quarantining or with a small group in an open space, we will begin with the shared gesture and end with the shared gesture and your personal movement will fill in the middle.

We are fortunate to be living in the digital age – as we are asked to observe “physical distancing,” we are able to close that distance by linking together through social media.

This challenge is forcing us to re-evaluate what we are doing and how we are doing it. Let’s find that deeper meaning in our dance, whether in a group or alone. We can dance wherever we are and livestream it on Instagram and Facebook. 

More than ever the world needs our hope and energy. Let’s move forward together and flood the social media networks with our dances on April 18th.

My crew at the idiosyncratic Beats of Dejacusse (iBoD) had big plans to create a water-like container at PS 137, with live plants and flowers by Lee Moore Crawford and space for movers and viewers. Now we are constrained, as we must be, so we will feature Jody Cassell as Durham’s National Water Dancer, streaming live from her home. Jody will move to a recording of Carnatic Water Music, which iBoD will release on Bandcamp a week before the event in April. We will keep you posted on how to link to the performance and pre-order the digital EP.

Mark your calendars for Durham’s National Water Dance April 18th at 4 pm.

Covid-19 DNA Remix

In the midst of everything going viral all around us, my friend @abstracta.audio pointed me toward Eric Drass’ sonification of the DNA sequence of the coronavirus. The National Institutes of Health has released the transcript of the sequence, which can be found on the NCBI website (https://www.ncbi.nlm.nih.gov/nuccore/MN908947.3). Eric, who makes all kinds of wild art at Shardcore.com, assigned note combinations to each letter of the genome sequence (A, T, C, and G in various iterations), and you can listen to it (and download the MIDI file) here: http://www.shardcore.org/shardpress2019/2020/02/28/the-sounds-of-covid-19/

I am fascinated by his process and hope he will give me an idea of how he did it. I am very interested in using notes/pitches/frequencies to sound out data. Eric created a 16-note scale: the top four notes and the bottom four notes are the same notes an octave apart, and the eight notes in the middle do not repeat. Each measure of the MIDI file has four beats; the first beat has two or three notes stacked, and then these notes repeat singly over beats two, three, and four. How this relates to the DNA sequence I have not figured out.
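Since I don't know how Eric actually mapped letters to notes, the sketch below is purely a guess to make the idea concrete: a 16-note scale whose last four notes repeat the first four an octave higher, addressed by reading the sequence two letters at a time (4 x 4 = 16 possible pairs). The scale degrees, the pairing scheme, and the example fragment are all my assumptions, not his method.

```python
# Hypothetical sketch only -- not Eric Drass's actual mapping.
# Build a 16-note scale in which the last four notes repeat the
# first four an octave higher, with eight non-repeating notes between.
base = 60                                              # MIDI C4, an arbitrary start
low_four = [base + i for i in (0, 2, 4, 5)]            # e.g. C D E F
middle_eight = [base + i for i in (6, 7, 8, 9, 10, 11, 13, 15)]
high_four = [n + 12 for n in low_four]                 # the same four notes, +1 octave
scale = low_four + middle_eight + high_four            # 16 notes total

# One guess at how 4 letters could address 16 notes: read the sequence
# two letters at a time, so each pair (AA, AC, ... GG) picks one scale degree.
letters = "ATCG"
pair_to_note = {a + b: scale[i * 4 + j]
                for i, a in enumerate(letters)
                for j, b in enumerate(letters)}

fragment = "ATTAAAGGTTTATACC"                          # short example fragment of genome letters
pairs = [fragment[k:k + 2] for k in range(0, len(fragment), 2)]
print([pair_to_note[p] for p in pairs])                # one MIDI note per letter pair
```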

Anyway, my remix begins in the middle of the MIDI file. There are five voices assigned to the MIDI notes: percussion, pizzicato strings, and some other odds and ends of sonic dross. I slowed the tempo way down to 100 BPM. The piece sounds mincing, impish, serious, and ominous in places. AND, you want it to end before it actually does!

Have a listen – it runs a bit over 6 minutes.

Covid 19 Remix Eric Drass

Listen to Your Gut: Engaging the Public with Science and Sound

On Friday, March 20th, Dr. Erin McKenney and I will present our work on sounding the data from her doctoral dissertation, which focuses on changes in baby lemurs’ gut microbiomes as their diet changes from birth to weaning. (See this post for further information: https://wp.me/p5yJTY-tD.) We will also preview some of the findings and sounds from the Sourdough Project through the Rob Dunn Lab at NC State.

Our presentation is sponsored by Duke University Science and Society and is one of a number of talks and events presented by this department. The program is open to the public and a pizza lunch is served. You can register at this link: https://scienceandsociety.duke.edu/engage/events/upcoming-events/ – scroll down the March calendar to our event, click on it, then scroll down and register.

I hope to see you there!

Mercury Retrograde (or don’t fight it, surrender)

Right in the midst of the most recent Mercury Retrograde, I decided to dive into Max/MSP, a visual programming environment for controlling sound and light in performance. After downloading the software, I started a class online and was working with some patches when my computer audio stopped functioning. No sound out of the computer. Then the computer and the sound card stopped talking. All of this happened right before an iBoD rehearsal at which we were recording Carnatic Water Music.

Using the Windows Troubleshooter, I discovered the problem “audio services not responding” and that this problem was “not fixed”. Online, there are multiple fixes for this message. After cancelling our recording session, I tried all the suggested fixes several times – from inspecting Services to make sure Windows Audio, Windows Audio Endpoint Builder, and all their dependencies were running automatically, to entering very specific commands into the Command Prompt as Administrator. The first thing I did was update the ASIO4ALL audio driver, so no problems there!

After several days of trying different fixes, I was able to get the computer and sound card talking again! Ableton Sets and Projects were now audible! Yayyyyy! But the computer would not play WAV files; outside of Ableton, audio services were still not responding. Finally, I uninstalled the ASIO driver and installed the driver for the sound card. I have a Native Instruments Komplete 6 sound card, which has been a great device. (I had audio dropout problems with the NI driver about a year after I purchased it, which was when I switched to the ASIO driver and all was well.) Well, changing back to the NI driver solved the audio problems completely, and I am back to sounding again!

A friend mentioned Mercury Retrograde as I was working through this process. Dang, I forgot about that current astronomical phenomenon. If I had remembered, would I have done anything differently? As things turned out, it is very good that I did not! While I got thrown off of Max (for the moment), I redirected my energies toward creating synth sequences in Ableton. Since purchasing the Behringer Neutron, I had been unsuccessful in getting Ableton set up as a sequencer for it. The Neutron had processed audio signal, but never MIDI. Lo and behold, the NI Komplete 6 driver allowed Ableton to see the MIDI ports for the Neutron. Suddenly, I was hearing the synth voice and all the modulators. When I made a patch or tweaked a knob, the sound changed as I expected it to! WoW! I feel like this is the first time I have heard the instrument’s true voice!

Today I am working on a soundscape for the next Human Origami Jam at ADF Studios in Durham on December 6. Very excited to finally get going with the Neutron.

This is what I will make in the soundscape!

String of Yeasts

After reading and studying the data (so far) from The Sourdough Project, a bit of it jumped out as a possible sound palette: the growth profiles of the five most prevalent yeasts and AABs (acetic acid bacteria), measured as increasing optical density (OD) over a 48-hour period. Measurements were taken in 12-hour increments and recorded at density levels from 0.1 to 1.2.

I was drawn to this data because the graphs reminded me of waveforms.

I am not at liberty to reveal the details of the data, so suffice it to say that these are five strains of yeast. We will call them pink, blue, orange, green, and neon. The pinpoints mark the 12-hour samplings of the prevalence of each strain. So at 12 hours, pink grew to around .25 OD, while neon grew to .6 OD. How to represent this in sound is the next question!

My old friend, the piano keyboard, provides a familiar sonic framework. A two-octave chromatic scale will represent the sound of OD growth by stretching the OD measurement scale over the two octaves. Like this:

Each OD amount covers two notes: D and D# represent the .1 amount, E and F are .2, and so on. This allows some wriggle room when the 12-hour sample falls between two numbers, as is seen with pink. The growth range for pink will run from D to F and encompass 4 notes. In the case of neon, the growth range runs from D to C and encompasses 11 notes. The differences in the growth rates will be heard in the number and duration of the steps taken within each twelve-hour time frame. So far, so good!
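Written out as a small sketch, the mapping looks like this; the MIDI numbering convention and the exact rounding rule are my own choices, picked so that the pink and neon examples above come out right:

```python
# Sketch of the OD-to-pitch mapping: two chromatic notes per 0.1 of optical
# density, with every run starting from D3 and an OD reading picking the top.
D3 = 50                                    # MIDI D3 in the "middle C = C4 = 60" convention
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def name(midi):
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

def top_note(od):
    """Top note of the run: 2 semitones per 0.1 OD, anchored so that the
    shared starting density of 0.1 maps to D3 itself."""
    return D3 + round(od * 20) - 2

def run_for(od):
    return [name(n) for n in range(D3, top_note(od) + 1)]

print(run_for(0.25))   # pink at 12 h: ['D3', 'D#3', 'E3', 'F3']  (4 notes)
print(run_for(0.6))    # neon at 12 h: 'D3' up to 'C4'            (11 notes)
```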

The time frame will run in beats and measures. Since it is 48 hours of growth, one hour can equal one measure. The step patterns will run up to the highest note indicated by the OD data at that particular 12-hour marker. That makes each sampling unit 12 measures in length – seems perfect. Even better, at 4/4 time, each 12-measure sampling unit is 48 beats long! Synchronous!

Let’s lay out the first 12 hours of pink and neon. Since all the yeast densities begin at .1, all the patterns will begin with D in the 3rd octave (D3). Pink grows from D through D# and E, and lands on F. For this growth pattern there are 4 notes and 48 beats, so each note will be 12 beats long. The long notes and fewer steps up communicate that pink did not grow much in the first 12 hours. Neon grows from D, D#, E, F, and on up to C. For this growth pattern there are eleven notes and 48 beats, so an even split would make each note about 4.36 beats long. Instead, the first ten notes are four beats long and the eleventh is eight beats. The longer note at the end places emphasis on the final growth number for that 12-hour period. Faster steps further up the scale sonify neon‘s more abundant growth over the first 12 hours.
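The beat arithmetic for one 12-hour sampling unit can be sketched the same way; giving any leftover beats to the last note is just a compact way of writing the ten-fours-and-an-eight split described above:

```python
# Spread 48 beats (12 measures of 4/4 = one 12-hour sampling unit) across the
# notes of a growth run, giving any leftover beats to the last note so that
# the final growth value gets the longest tone.
BEATS_PER_UNIT = 48

def durations(note_count):
    base = BEATS_PER_UNIT // note_count          # even split, rounded down
    beats = [base] * note_count
    beats[-1] += BEATS_PER_UNIT - sum(beats)     # remainder lands on the last note
    return beats

print(durations(4))    # pink, hours 0-12:  [12, 12, 12, 12]
print(durations(11))   # neon, hours 0-12:  [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 8]
```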

Looking at the graph, it is easy to hear that the growth patterns of pink and neon invert at the 12-24 hour sampling unit. Pink leaps from .25 to .7, while neon stretches only a short way, from .6 to .75. Again, note duration and number of steps will sonify these contrasts in the data.

While a sense of growth is captured by the movement up the scale, there is not yet a sense of increasing density. To get at this, I decided to sustain the top note of each 12-hour sampling unit. For example, pink’s F and neon’s C would continue softly to the end of the 48 measures. The same will follow for the last note of each subsequent 12-hour cycle, and this will create the sense of sonic density.
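Here is a rough shorthand for that density layer (my own notation, not any tool's format); only the pink and neon values already mentioned in this post appear, since the rest of the data is not published here:

```python
# Sketch of the "density" layer: the top note of each 12-hour unit keeps
# sounding softly from the end of its unit to the end of measure 48.
MEASURES_TOTAL = 48

def drone(top_note_name, unit_index):
    """A sustained event: (note, start measure, length in measures, dynamic)."""
    start = (unit_index + 1) * 12                 # unit 0 ends at measure 12, etc.
    return (top_note_name, start, MEASURES_TOTAL - start, "soft")

print(drone("F3", 0))    # pink's 12-hour top note, held for the remaining 36 measures
print(drone("C4", 0))    # neon's 12-hour top note, held alongside it
print(drone("D4", 1))    # pink's 24-hour top note (OD .7), held for 24 measures
```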

Enough talk, let’s have a listen!

neon 48 hour growth pattern

pink 48 hour growth pattern

These are the 48-measure versions of the patterns. So 48 4/4 measures at 120 BPM really stretches out these relationships, making it harder to hear the movement of the data. Ableton Live has a function that allows me to collapse the sequence from 48 measures to 24 measures and still maintain the rhythmic integrity of the phrase. WoW! Then the phrase can collapse to 12 measures. All of these phrases will likely be a part of the Sourdough Song, but I am still deciding which version (24-measure or 12-measure) conveys the data more clearly. One of the researchers on the project said the longer growth articulations conveyed the anticipation the bakers feel as they wait for their starters to grow.
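Whatever Ableton calls that function under the hood, the arithmetic of collapsing a phrase while keeping its rhythmic integrity is just scaling every note's start and length by the same factor. A sketch, using pink's first 12 hours as the example:

```python
# Collapsing a phrase without breaking its rhythm = multiplying every start
# time and duration by the same factor (0.5 turns 48 measures into 24, etc.).
def collapse(events, factor):
    """events: list of (note, start_beat, length_in_beats)."""
    return [(note, start * factor, length * factor) for note, start, length in events]

# Pink's first 12-hour unit from the sketch above: four 12-beat notes.
pink_first_unit = [("D3", 0, 12), ("D#3", 12, 12), ("E3", 24, 12), ("F3", 36, 12)]
print(collapse(pink_first_unit, 0.5))    # this unit's rhythm in the 24-measure version
print(collapse(pink_first_unit, 0.25))   # and in the 12-measure version
```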

Here is the 12-measure version of both strains together. See if you can hear the changes described above. Listen closely for each voice – you will hear pink holding longer tones, while neon changes tone more quickly. It helps to look at the graph while you listen.

This will likely be a motif within The Song of Sour Dough. (What do you think of separating sourdough in the title?)

 

Playing by Ear – a sonic exploration of hearing

The All Data Lost Noise and Music Festival kicked off iBoD’s exploration of playing musical sound and morphing it into multi-dimensional soundscapes as an exercise in deep listening and what it means to “hear”. I was inspired in this endeavor by two experiences: hearing Incidental Exercise at a 919 Noise Showcase, and improvising with Lisa Means when we were on a group beach trip together. Incidental Exercise is a duo made up of a guitar player and a modular synth player. When I heard them play, I was completely mesmerized by the intricate patterning they were able to create from a guitar as the sound source for a modular synth. The improvisation they performed was as delicate as lace and as boundless and tumultuous as the ocean. At the beach, Lisa, a guitar player who is deaf (but has some hearing thanks to hearing aids), brought several of her beautiful custom made guitars for us to play along with the sound of the crashing waves. Once we were home, we got together to improvise a few more times and then life took us in different directions.

When iBoD was invited to play All Data Lost and the usual bandmates were not available to prepare for that date, I contacted Lisa. My recent purchase of a Behringer Neutron semi-modular synth had me thinking about creating a sonic palette along the lines of Incidental Exercise. While Lisa mostly plays acoustic guitars, she does own a Hollow TKD Hybrid electric guitar, which she was willing to play for this event.

We improvised together on Saturday afternoons in July and early August. At first, I struggled with handling the sound coming through the Neutron. I wanted to start with a clean guitar sound that could then be expanded and shifted through patching and tweaking of knobs. I learned that a tiny little knob tweak could bring forth a sudden blast of sound. A couple of times things got out of control to the point of turning the synth off and starting over.

Lisa experimented with lots of extended techniques using glass and brass slides, aluminum foil, and various capos. For the ADL performance she settled on a spider capo, which can be set or released for each string, and then found little gloves that went over each “finger” of the capo. This allows harmonics to be added to the mix, and Lisa can play both in front of and behind the capo. It worked out beautifully as we built the soundscape for our set.

Here is the nested soundscape Playing by Ear as captured at The Wicked Witch for All Data Lost Fest. Special thanks to sound engineer Oona!

PBEiBoDWickedWitchAug2019