Sounding Board

My curiosity about sound is completely engaged by exploring modular synthesis. So far my understanding is often inarticulate and mystified! But thanks to Suzanne Ciani, True Cuckoo, Andrew Huang, Ultrabillions, Hark Madley, Lisa Belladonna, Caterina Barbieri, Moogfest, Bram Bos, and Kim Bjorn’s book Patch and Tweak, I am evolving a different way of creating soundscapes and perceiving the world. This is the stuff of life! Waveforms modulating waveforms, waveforms shaping waveforms, waveforms reflecting, refracting and bouncing around and through us. Energetic matter begins and ends on a wave.

I am focusing my Artist's Residency here at home on improving my mixing skills and building a sounding board. The mixing skills are being put to the test on the recording of Carnatic Water Music that iBoD will release in the next week. As I mixed this recording, I received helpful suggestions from tutorials by Jason Moss, Hark Madley, and Matthew Weiss. These skills are a forever work in progress. As for the sounding board, there are currently three main ingredients: the Elektron Model:Samples as main sequencer, providing beats/patterns and MIDI triggers to the Behringer Neutron; audio out from both of these units into audio tracks in Ableton Live; and Ableton providing drones, loops, and Audio Animation Clip/Envelope Generator (AAC/EG) clips which can process audio from either unit. I can do master recordings in Ableton as well.

Even though I want a modular system, I will work with what I have now, and learn, and be ready when my modular system appears. (Make Noise modules are the ones that I want – doo doo do do)

The Model:Samples and I are getting on fairly well. I am learning the architecture of the menus, watching people perform with it to see what key combos they use, and setting up some patterns. The samples available “in the box” are very cool, and I am curating my own samples as well. Every sound is potential material, so it is daunting.

The past few days, I experimented with some patch ideas on the Behringer Neutron. I have gotten a lot of growling out of the synth, but no sound that I liked. There is one simple patch I use: Sample and Hold into Delay Time. When the Delay Mix knob is raised and the S&H knob is turned up, there are lots of odd, random pitch artifacts that I enjoy hearing. Today I patched the Osc Mix into a Mult, then ran Mult 1 to the OD (overdrive) IN and Mult 2 to Pulse Width 2, and tuned the oscillators to consonant pitches. Slowly turning the Osc Mix knob opens a whole realm of timbres. When the OM knob was all the way to one side, the tone could be made clear and bell-like. With the oscillator shapes set to square or tone mod, the Pulse Width knob seems to act as a filter. The Mod Depth and Envelope Depth can be brought in. This is where I am not sure what is happening – there are changes in the timbre of the tone from the synth. And what exactly is depth? There is a lot to play with depending on where the Osc Mix dial is tuned in.
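For anyone who likes to see the idea spelled out: here is a minimal sketch in Python (my own made-up names and numbers, nothing to do with the Neutron's actual circuitry) of why a sample-and-hold signal patched into a delay time produces those odd, random pitch jumps.

```python
import math
import random

def sample_and_hold(clock_period, n_samples, seed=0):
    """A stepped random control signal: grab a new random value every
    clock_period samples and hold it until the next clock tick."""
    rng = random.Random(seed)
    out, held = [], 0.0
    for i in range(n_samples):
        if i % clock_period == 0:
            held = rng.random()
        out.append(held)
    return out

def modulated_delay(signal, control, base_delay=100, depth=80):
    """Read from a delay line whose delay time jumps around with the
    control signal. Each sudden delay-time change re-pitches the audio
    being read out -- the source of the random pitch artifacts."""
    out = []
    for i, c in enumerate(control):
        delay = int(base_delay + depth * c)  # S&H jumps the delay time
        j = i - delay
        out.append(signal[j] if j >= 0 else 0.0)
    return out

# A steady sine input; the stepped delay chops it into shifted segments.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
ctrl = sample_and_hold(clock_period=500, n_samples=sr)
wet = modulated_delay(tone, ctrl)
```

The hardware version is doing this continuously in voltage, of course, but the shape of the thing is the same: a staircase of random values yanking the delay time around.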

The third part of this is creating Audio Animation Clips/Envelope Generators within Ableton. Envelopes shape the amplitude and modulate the pitch of the sound. Audio Animation allows the Envelope parameters to move over time. Here is the post on how audio animation can be created in Ableton: https://wp.me/p5yJTY-vL I use filters to sculpt out harmonics and add texture to the sound of the Model Samples or the Neutron. So far, I am experimenting with banks of filters to sculpt out or boost particular harmonics then perform a finer tuning with some EQ. I am listening for a diverse sonic spread, then tuning it in, then spreading, and finally fine tuning.
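Since envelopes are doing so much of the work here, a quick sketch may help. This is a bare-bones piecewise-linear ADSR in Python – a stand-in for what any envelope generator does, not Ableton's API – showing how an envelope shapes the amplitude of a tone over time.

```python
import math

def adsr(attack, decay, sustain, release, hold, sr=1000):
    """Piecewise-linear ADSR amplitude envelope.
    attack/decay/release/hold are times in seconds; sustain is a
    level from 0 to 1; sr is the control rate in samples per second."""
    env = []
    n = int(attack * sr)                               # ramp 0 -> 1
    env += [(i + 1) / n for i in range(n)]
    n = int(decay * sr)                                # ramp 1 -> sustain
    env += [1 - (1 - sustain) * i / n for i in range(n)]
    env += [sustain] * int(hold * sr)                  # hold sustain level
    n = int(release * sr)                              # ramp sustain -> 0
    env += [sustain * (1 - i / n) for i in range(n)]
    return env

env = adsr(attack=0.01, decay=0.1, sustain=0.6, release=0.2, hold=0.5)

# Apply the envelope to a tone: multiply sample by sample.
tone = [math.sin(2 * math.pi * 110 * n / 1000) for n in range(len(env))]
shaped = [e * s for e, s in zip(env, tone)]
```

Audio Animation then amounts to redrawing those attack/decay/sustain/release numbers over time instead of leaving them fixed.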

The adventure continues!!

No Moogfest 2020

Just found out that Moogfest 2020 has been cancelled, so I am feeling a little disappointed. However, that disappointment is tiny compared to the gifts that four years of attending Moogfest gave to me and countless others. Even if the festival never happens again, it left its vibratory mark on two North Carolina towns, and neither of them will ever be the same.

The fest moved to Asheville, NC in 2010 after four years in New York City. Moog has been a presence in Asheville since Bob Moog settled there in 1978 and opened the Moog factory. Moogfest Asheville ran from 2010–12, culminating in appearances by Brian Eno, Kraftwerk, DEVO and many other big-name electronic acts. Eventually, they got a better deal in Durham, so the fest came east. Asheville remains a hub of synthesizer activity with the Moog factory and the recently opened Moog Museum. Plus, modular-module makers Make Noise have called Asheville home since 2008. And there is an active and prolific community of synth players there to this day. Synthesis leaves a vibrational mark!

The first year Moogfest hit Durham, it was big-time razzle-dazzle!! Every theatre and bar venue was booked with performances. Laurie Anderson, Jaron Lanier, Silver Apples, a three-day residency with Gary Numan, a huge outdoor stage, a sleep concert, signage all over downtown – it was amazing! Working the ticketing table that very first year was incredible fun. Everyone was so pumped and joyful! All my co-volunteers were music producers as well, which was exciting! The subsequent years were just as much fun, although seemingly scaled down each year. In 2019, I was impressed by the presence of a number of local acts.

And, to be honest, that is what I would love to hear! More local players, low-key venues, meetups, parties, jams! This is something the synth community in the Triangle/Triad area can do! I have very little nostalgia for “the great ones” of the past! I appreciate the ground they broke, but, damn, let's walk on that ground. To truly honor them, let's dance on that ground. So many people out there making so much beautiful noise!! I do not hear the need to import acts.

I would love the local synth community to take the weekend that was to be Moogfest 2020 and do something with it. One idea is a 24–48-hour synth concert – players could sign up for time slots ranging from 20 minutes to an hour. It would be fun to have each player or group leave a loop or drone running so the next player can take off from there. All woven together! Another idea would be to book several venues for 2-3 nights and offer evenings of sounds all around. The coolest thing would be to have whatever we do benefit Jill Christenson’s Day One Disaster Relief Organization!

Anybody want to do this???

iBoD presents FreeQuencies @Durham Makes Music Day

My cohorts and I are flipping the script from our usual way of play for Durham Makes Music Day this coming Friday. We have played together as iBoD for about 5 years now. I make soundscapes in Ableton Live, while Susanne, Eleanor and Jim add their own riffs and melodies over top. These soundscapes follow a more formal, songish structure. While we mostly improvise, the more we play a piece, the more we lock into parts, which layers in a more rigid form and stifles the improv. Too much structure calcifies creative growth, so time for a shift!

Under the influence of Moogfest and the work of Pauline Oliveros, iBoD is exploring “all of the waveforms” and the means to transmit them. Susanne, Eleanor, Jim and dejacusse will provide the soundscape LIVE using voice, harmonicas, melodica, digital horn, recorder, flute and electronic modulations. In this way we will transmit a diverse range of audible waveforms as patterns of frequencies. These “freequencies” will permeate the larger soundscape that will surround us, altering the sonic environment in unusual ways.

Our location at M Alley/Holland Street (behind the Durham Hotel) means we will be in the thick of all the sounds of downtown Durham and all the outdoor music being made on Durham Makes Music Day. We will not be the loudest, but if you come down to where we are located, close your eyes, quiet your mind and open your ears, I guarantee you will hear something beautiful and amazing!

Friday June 21st

8:30-9:30 pm

iBod at M Alley/Holland St.

Moogfest 2019

Being an introverted elder, I am no longer the festigoer I once was. One festival a year is enough for me and this is it! Moogfest is an incredible sonic universe that opens up in and around Durham, and turns my world upside down. This year was no exception!

There were numerous durational performances, sound installations and interactive opportunities. I was particularly excited to meet Madame Gandhi, who gave a fabulous performance at Motorco last year. This year she led two sessions of interactive play at The Fruit. The setup included a Push, a Bass Station, a drum set, microphones for vocals and small percussion, and two synths! WoW! She wasted no time with a lot of talk. We jumped right in and started playing. I really appreciated that! I played Push and did some vocals, but mostly listened. The group came up with some nice grooves.

This experience reminded me that I prefer solo or small group playing these days, and the energy of the experience was fantastic! Glad I showed up!

Durational performances involve a group or person performing for 2-3 hours solid with no silence. The 21C Museum Hotel Ballroom and Global Breath Yoga Studio were the venues for these works. For durational performances, I love to sit with the beginning and the end OR go in for the middle. I heard Richard Devine and Greg Fox perform the beginnings and endings of their sets. It is always interesting to hear the different approaches to the start and finish in the broad context of a durational performance. I would love to create a durational performance there someday…soon.

21C was my favorite venue this year. I heard a wonderful variety of soundscapes there in quad sound with excellent sound engineers, a beautiful light show and interactive screens on either side. The bookends of the weekend for me were Ultrabillions and TRIPLE X SNAXXX (local favorites). Both of these sets were incredibly satisfying to listen to. Big synths and bouncy modulars all around. What I come to Moogfest for!! In between, there was Aaron Dilloway, who gave an amazing embodied noise performance that was as much exorcism as anything else. Drum and Lace started her set with some wispy songs that all seemed to be the same short length, three minutes or so. But then she launched into some beefier pieces and really took the space. She had some gorgeous videos behind her as she performed.

Cuckoo was so much fun, and I envied him his tiny setup, which he carried into the venue in a knapsack. At one point, he was playing a vampy section and said, “well, this is the point where I introduce the band!” and proceeded to show us the three small controllers he had routed together. He has YouTube videos, so I want to check him out. Here he is playing at The Pinhook.

Finally, I had a few mind-opening, inspiring encounters. Steve Nalepa pointed the way to route signal out of Ableton for quad speakers. He performed at 21C through quad speakers using Ableton. I always wondered why you would route tracks to sends only in the I/O menu. I haven't yet tried this, but plan to soon. Then there was the Modular Marketplace! I delayed going till Friday and spent 3-4 hours there playing. As the WoW would have it, the semi-modular Behringer Neutron was on sale for $100 off. I struck while I had a little cash flow. Less than a week later, Abe, Nuet and I played a beautiful primal soundscape for Audio Origami on Friday, May 3 at ADF Studios @ 4:30pm.

Thank you, Moogfest! See you next year!

Synthesizing in Ableton Live: External Effects Pedal (fail)

Dear friend and compadre, Karim Merlin, loaned me a guitar pedal. He recently purchased an Earthquaker Levitation pedal, which uses delay, tone and atmosphere controls to mix a versatile reverb with lots of space to explore. Since I am moved to play all the harmonics through synthesized sound, a guitar pedal gives me a chance to experiment with routing hardware effects through Ableton. I was very excited to try it out.

The wind left my sails when I YouTubed for some supportive info and learned that, in order to get the signal from my ukulele or vocal mic through the pedal into Ableton and out to auditory cortexes, I need a reamp box between the pedal and the sound card, and a preamp box between the sound card and the mixing board. This has to do with matching the level and impedance of the signal coming out of one device to what the next device expects. Signal routing is the great labyrinth of synthesized sound in my mind. Signals can be sound energy or electrical energy; they can be boosted, attenuated, colored, and fed back onto and through each other. And, when it comes to hardware, signals must match somehow. Something to do with the energy of the signal. This part eludes my understanding so far, and I am eager to grok it! And what better way than to simply play.
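The arithmetic behind the mismatch is actually small enough to fit in a few lines. Here is a rough Python sketch of the level side of the problem (impedance is a separate story); the 0.1 V "instrument level" figure is a ballpark assumption on my part, not a spec.

```python
import math

def dbu_to_volts(dbu):
    """Convert dBu to volts RMS. 0 dBu is defined as 0.775 V RMS."""
    return 0.775 * 10 ** (dbu / 20)

def db_gap(v_expected, v_actual):
    """How many decibels of gain separate two signal levels."""
    return 20 * math.log10(v_expected / v_actual)

line_level = dbu_to_volts(4)   # pro "line level" is +4 dBu, about 1.23 V
instrument_level = 0.1         # rough ballpark for a passive pickup, volts

# The pedal expects instrument level; the interface speaks line level.
gap = db_gap(line_level, instrument_level)
```

Roughly 20-odd dB of mismatch in each direction – which is the gap the reamp box and preamp are there to bridge, and why skipping them buys you a little signal and a whole lot of noise.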

The NI Komplete 6 sound card I use has phantom power, which supplies voltage to certain microphones. The Behringer mixing board has several ways to elevate the signal. Perhaps these will suffice? When I ran the electric uke signal through the Levitation, there was a little bit of signal and a whole lot of noise. I tried playing with it within Ableton to see if I could make the noise blend, but no. A vocal microphone sounded the best, but wasn’t a sound I wanted to cultivate. The YouTube guy may be right. I need to build an empire to use pedals through Ableton.

So I end up back in Ableton, playing with all their reverb configurations and making a few of my own.

And I am still wanting a few more 3D knobs and sliders. I am anticipating that my next big sound love may come my way this week via Moogfest!

Modular synthesizing in Ableton Live

Even though I have worked intimately with Ableton Live DAW (Digital Audio Workstation) for over eight years, I KNOW I have only scratched the surface of its capabilities. In the last few years, I have come to think of Ableton Live as my “instrument”, my medium, what I create with. What an incredibly rich and mysterious instrument it is! Now I am enamored of modular synthesis, striving to become engaged with it, particularly Eurorack modules. I love the knobs and sliders, and the patch cables just put me over the top. Modular synthesis is like sonic legos, a form of prayer, and a particular patterning of vibrations that matter. I want to plaaayyyy!!

This is the 2019 plan: spend time researching, listening to and playing modular units. I am learning about how modulars work by playing with the Ripplemaker iOS app on iPad. Ripplemaker is a semi-modular synthesizer, which means that some of the signal routing is already patched together. Ripplemaker contains five modules, an LFO and an amplifier/mixer, so it is a good basic learning tool. In addition, I am studying the book Patch and Tweak, which just came out a few months ago. It is a comprehensive survey of modular synthesis, from the basics, to the gear, to the practitioners. I bought the book because it contains an interview with a performer whose work I admire tremendously, Caterina Barbieri. The book is a treasure trove of information that I am studying every day. The write-ups about all the major brands and models of Eurorack synth modules are amazing. While Patch and Tweak could pass as a coffee table book, it is a bible to me. While doing all of this, I will save the money to buy my first modular synth unit. This could happen at Moogfest, or sooner depending on the progress of research and saving.

As for now, I have invested a lot of time and money in Ableton Live and my computer setup, which is essentially a soft synth – I just need to configure it as such. Up to this point, I have treated Ableton as a composition tool/recording studio for the most part. I have been performing with Ableton and learning how to use the control surfaces (Novation Launchpad Pro and AKAI Key 25) to create soundscapes in real time. Now I want to configure and play Ableton more like a synth. This new approach will mean I have to deeply learn all the audio effects in Ableton, along with other external plugins. I can configure the control surfaces to function as synth controls by mapping the parameters I want to sculpt with to the knobs and sliders of the control surfaces. The rest is signal routing.

Ableton Live is so robust and complex that the signal routing possibilities are numerous. Audio effects can be placed on tracks, within tracks routed together through a group submix, within a clip, within send/return tracks, and on the Master track. Then, within the larger set, tracks can be routed to and through each other using the in/out submenu embedded in each track. So, with this in mind, how can I create control voltages, oscillators, slopes, envelopes, LFOs, noise, and other modulators, and configure and play with them within Ableton Live?
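To make the modular framing concrete for myself, here is a tiny Python sketch of the core move every one of those modulators shares: generate a slow control signal, then "patch" it onto a parameter's range. The names and numbers are mine, purely illustrative – not Ableton's internals.

```python
import math

def lfo(rate_hz, depth, offset, n, sr=100):
    """A control-rate LFO: a slow sine wave scaled by depth around an
    offset, sampled at a modest control rate (sr samples per second)."""
    return [offset + depth * math.sin(2 * math.pi * rate_hz * i / sr)
            for i in range(n)]

def route(control, target_min, target_max):
    """'Patch' a control signal onto a parameter's range -- the software
    analogue of running a cable from a modulator to a CV input."""
    lo, hi = min(control), max(control)
    return [target_min + (c - lo) / (hi - lo) * (target_max - target_min)
            for c in control]

# One 0.5 Hz LFO sweep, routed to a filter cutoff between 200 and 2000 Hz.
cutoff = route(lfo(rate_hz=0.5, depth=1.0, offset=0.0, n=200), 200.0, 2000.0)
```

Everything else – envelopes, slopes, sample-and-hold, noise – is the same pattern with a different shape in place of the sine.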

The journey begins…

*photo of basic Auto Filter from Ableton Live Manual

Sonic Illustrations and Life Forms

Data sonification is a burgeoning area of sound design that is quite amazing in its depth and flexibility. I have a keen interest in sonifying data in a way that furthers our understanding of the data. I would love to create a sonic pie chart, for example. While a visual pie chart is a snapshot, a sonic pie chart would be more like an animation. A chemical reaction could be sonified by assigning particular voices to different parameters of the reaction: as the reaction proceeds, the voices would change from “reagent” voices to “product” voices. Consonance and dissonance could illustrate the changing relationships amongst the components of the chemical reaction. One possible way to sonify, in my mind.

Then at Moogfest 2018, a workshop introduced me to the world of SuperCollider and MaxMSP as instruments for creating sonic pie charts. Mark Ballora of Penn State University (please check out his work at http://www.markballora.com) has been working with sonifying data for decades. He was doing it when no one was paying attention. Mark uses SuperCollider to create sonifications of tidal changes and the movement of hurricanes. This type of sonic representation of data illustrates how a group of parameters changes over time, and when you listen, you hear all of the changes happening over time. Voila! A sonic pie chart! Attending Mark’s workshop shifted my soundsense, as I realized I do not want to learn computer programming (at this time). This blog post by Mark Ballora and George Smoot (https://www.huffingtonpost.com/mark-ballora/sound-the-music-universe_b_2745188.html) helped me understand that my interest is in exploring how modal/timbral shifts set in a familiar, equal-tempered scale spectrum might illustrate data-driven relationships. What I am interested in is more a sonic illustration than a map or a pie chart.

Just before Moogfest, The Dance DL, a Durham dance listserve sent this announcement:

Auditions & Open Calls

Arts & Sciences Collaboration: Sourdough Collective – Rob Dunn Lab

Where: AS IF Center in Penland, NC

Rob Dunn’s lab at NC State University explores microbiomes of some of our most familiar places. The sourdough project studies sourdough starters from around the world, including some really ancient ones that have been passed down for generations. Seeking an artist working in any media with an interest in microbiology, bread baking, making the invisible visible, and/or communicating complex science through art. Help us bring the awe and wonder of science–and the microbial world– to the world.

As I read this notice, it felt like a dream! I have a two-and-a-half-year-old sourdough starter which is used to create 75% of the bread Trudie and I eat. I have recently studied cell biology and neurobiology, and have a deep interest in molecular chemistry, about which I am just learning. And I am looking for a data sonification project. I sent them an inquiry, they checked out my sound work, and I was invited to participate.

First step, meet with the sourdough folks at Rob Dunn’s lab. On Friday, June 15th, Erin McKenney, postdoctoral fellow in Microbiome Research and Education and a research lead on the sourdough project, and Lauren Nichols, Dunn Lab manager, met me in the lobby of the David Clark Labs (home of the Dunn Lab). I learned that the sourdough project is looking at the ecology of sourdough starter communities as it relates to yeast and bacteria growth in flour when exposed to water and the local microbial environment. I attended a lab staff meeting and learned about the amazing research being done here. All the projects are basically looking at how the smallest phenomena impact much larger phenomena and vice versa, the micro to macro to micro feedback loop. And they keep finding that diversity is the key to sustainable growth and a healthy environment. I left the meeting excited and inspired! Next stop will be the As If Center in Penland, NC in October.

The only other preparation I would like to do is to try sonifying some data. I reached out to the Rob Dunn Lab folks, and Erin McKenney sent me a data set to try my hand at. The data is about nine lemur babies from three lemur species, and how the microbial makeup in each baby’s stomach evolves as changes are introduced to their diets. (This is Erin’s dissertation study!) We have identifiable parameters that can be orchestrated to show changes over time. Perfect!

The data is on a massive (to me) spreadsheet with lots of terminology I don’t know…yet. This will be an interesting process as we work out exactly what the sonic map will depict. I sense that certain data will lend itself to sonification, and that is the part I do not yet know. After spending some time studying the spreadsheet, I asked Erin how we could cluster some of the microbial data together, and she sent me the class and phylum data sheets. Phylum became my focus, as there were only 35 phyla as opposed to 95 classes and 255 strains of bacteria. One of the lemur mothers had triplets, so I decided to put together phylum profiles on this small group. Culling the data for these specific individuals narrowed the phyla down to 24; then I made an arbitrary cutoff point of >.00 density for each phylum (Erin said this was fine and is actually a tool scientists use to declutter data). Now I was down to 15 phyla – a manageable number for a timbral illustration.
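The decluttering step is simple enough to sketch in code. Here is a minimal Python version of the cutoff idea – keep only phyla that exceed a density threshold somewhere in the data. The phylum names and densities below are toy values of my own, not the real lemur numbers.

```python
def filter_phyla(samples, threshold=0.0):
    """Keep only phyla whose density exceeds the threshold in at least
    one sample. `samples` maps a sample id to a dict of
    {phylum name: relative density}."""
    keep = {p for s in samples.values() for p, d in s.items() if d > threshold}
    return {sid: {p: d for p, d in s.items() if p in keep}
            for sid, s in samples.items()}

# Toy data with made-up densities, standing in for the spreadsheet.
data = {
    "birth":   {"Proteobacteria": 0.62, "Firmicutes": 0.30, "Tenericutes": 0.0},
    "nursing": {"Proteobacteria": 0.41, "Firmicutes": 0.55, "Tenericutes": 0.0},
}
trimmed = filter_phyla(data)  # Tenericutes never exceeds 0, so it drops out
```

Same move as the spreadsheet cull, just automated: anything that never rises above the cutoff disappears from every profile at once.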

The microbes were collected from the three babies six times from birth to nine months. The timeline for the samples was birth, nursing, introductory solid foods, regular solid foods, and two times as they were weaning. Microbes were collected from the mother when she gave birth. Erin had the brilliant idea to have the mother’s phylum profile (which does not change over time) be a drone under the babies’ phylum profiles in the sound map. This allows you to hear when the profiles diverge and when they converge.

The sonic substance for all this is a phyla megachord that stretches from G1 to G5. Each phylum is voiced by a single pitch, so, for example, Proteobacteria is G1. Since there are only twelve pitch classes in a chromatic scale, some of the phyla would land on the same pitch class in different octaves. Five phyla tended to have the highest presence in each sample, so I made them the Gs, and all the rest got separate, distinct pitches. I used amplitude to render how much of each phylum was present in each sample.
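That mapping scheme can be sketched as a little Python function: the five dominant phyla get the Gs (as MIDI note numbers), the rest get distinct non-G chromatic pitches, and each sample becomes a chord whose note amplitudes are the densities. The phylum list here is illustrative, not the actual fifteen from the study.

```python
# G1..G5 as MIDI note numbers: G1 is MIDI 31, and each octave adds 12.
G_NOTES = [31, 43, 55, 67, 79]

def assign_pitches(phyla, dominant):
    """Give the five most-present phyla the Gs, and spread the rest over
    distinct chromatic pitches upward from G#1, skipping every G."""
    mapping = dict(zip(dominant, G_NOTES))
    pitch = 32  # G#1, the first non-G pitch above G1
    for p in (p for p in phyla if p not in dominant):
        while pitch in mapping.values() or (pitch - 31) % 12 == 0:
            pitch += 1  # skip used pitches and all octaves of G
        mapping[p] = pitch
        pitch += 1
    return mapping

def voice(sample, mapping):
    """One sample as a chord: (midi_note, amplitude) pairs, where the
    amplitude of each note is that phylum's density in the sample."""
    return [(mapping[p], d) for p, d in sample.items() if d > 0]

phyla = ["Proteobacteria", "Firmicutes", "Bacteroidetes", "Actinobacteria",
         "Verrucomicrobia", "Tenericutes", "Spirochaetes"]
pitch_map = assign_pitches(phyla, dominant=phyla[:5])
```

Playing the six samples per lemur as six of these chords in a row, over the mother's unchanging drone chord, is the whole sonic map in miniature.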

Then there was the question of how to voice the individual profiles in order to hear the data as clearly as possible. After much experimentation, the mother’s voice is a woodwind with a steady tone throughout. I chose bell-like voices for the three lemur baby profiles, letting each phase ring out four times over the mother’s profile. The idea is to listen and compare the mother’s profile with the babies’ profiles. Listen for the change (or lack of change) as each stage rings in four times. You will probably need to listen closely several times. What you hear is a uniformity of tone at birth that becomes more dense and dissonant as the phyla diversify with the babies’ diversifying diets. Then the final wean profiles settle into more consonance with the mother’s profile. So very interesting!

When I sent this to Erin, she said, “The patterns you’ve detected and sonified are exactly what I published.” Yes! This is the sketch I will use to create a soundscape of the Lemur Data. From this exercise, some tentative questions have emerged that will help when we start working on the sourdough project:

How is the data organized/categorized?

What is being measured?

What are the significant changes and time frames within the data collection process?

What are the researchers interested in hearing from the data?

And this is just the beginning!