My curiosity about sound is completely engaged by exploring modular synthesis. So far my understanding is inarticulate and I am often mystified! But thanks to Suzanne Ciani, True Cuckoo, Andrew Huang, Ultrabillions, Hark Madley, Lisa Belladonna, Caterina Barbieri, Moogfest, Bram Bos, and Kim Bjørn's book Patch & Tweak, I am evolving a different way of creating soundscapes and perceiving the world. This is the stuff of life! Waveforms modulating waveforms, waveforms shaping waveforms, waveforms reflecting, refracting and bouncing around and through us. Energetic matter begins and ends on a wave.
I am focusing my Artist's Residency here at home on improving my mixing skills and building a sounding board. The mixing skills are being put to the test on the recording of Carnatic Water Music that iBoD will release in the next week. As I mixed this recording, I drew on helpful suggestions from tutorials by Jason Moss, Hark Madley, and Matthew Weiss. These skills are a forever work in progress. As for the sounding board, there are currently three main ingredients: the Elektron Model:Samples as main sequencer, providing beats/patterns and MIDI triggers to the Behringer Neutron; audio out from both of these units into audio tracks in Ableton Live; and Ableton providing drones, loops, and Audio Animation Clip/Envelope Generator (AAC/EG) clips which can process audio from either unit. I can do master recordings in Ableton as well.
Even though I want a modular system, I will work with what I have now, and learn, and be ready when my modular system appears. (Make Noise modules are the ones that I want – doo doo do do)
The Model:Samples and I are getting on fairly well. I am learning the architecture of the menus, watching people perform with it to see what key combos they use, and setting up some patterns. The samples available "in the box" are very cool, and I am curating my own samples as well. Every sound is potential material, so it is daunting.
The past few days, I experimented with some patch ideas on the Behringer Neutron. I have gotten a lot of growling out of the synth, but no sound that I liked. There is one simple patch I use: Sample and Hold into Delay Time. When the Delay Mix knob is raised and the S&H knob is turned up, there are lots of odd, random pitch artifacts that I enjoy hearing. Today I patched the Osc Mix into a Mult, then ran Mult 1 to the OD (overdrive) In, and Mult 2 to Pulse Width 2, and tuned the oscillators to consonant pitches. Slowly turning the Osc Mix knob opens a whole realm of timbres. When the knob was all the way to one side, the tone could be made clear and bell-like. With the oscillator shapes set to square or tone mod, the Pulse Width knob seems to act as a filter. The Mod Depth and Envelope Depth can be brought in. This is where I am not sure what is happening: there are changes in the timbre of the tone from the synth. And what exactly is depth? There is a lot to play with depending on where the Osc Mix dial is tuned in.
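I wanted to convince myself why a stepped voltage on delay time makes the pitch wobble like that. Here is a rough numpy sketch of the principle – not the Neutron's actual circuit, just the idea that a sample-and-hold signal sweeping a delay line's read point re-pitches whatever audio is sitting in the buffer:

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 3.0)) / sr
dry = 0.3 * np.sin(2 * np.pi * 220 * t)        # stand-in source tone

# Sample and Hold: grab a random delay time and hold it for 100 ms
hold = int(0.1 * sr)
steps = np.random.uniform(0.005, 0.030, len(t) // hold + 1)   # 5-30 ms
target = np.repeat(steps, hold)[:len(t)]

# Slew the steps slightly; a moving read point re-pitches the audio
# already in the delay buffer, which is where the artifacts come from.
delay = np.empty_like(target)
acc = target[0]
for i, d in enumerate(target):
    acc += 0.001 * (d - acc)                   # simple one-pole smoother
    delay[i] = acc

wet = np.zeros_like(dry)
for i in range(len(dry)):
    read = i - delay[i] * sr                   # fractional read position
    j = int(np.floor(read))
    if 0 <= j < i:
        frac = read - j
        wet[i] = (1 - frac) * dry[j] + frac * dry[j + 1]   # linear interpolation

mix = 0.5 * (dry + wet)                        # "Delay Mix" knob halfway up
```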
The third part of this is creating Audio Animation Clips/Envelope Generators within Ableton. Envelopes shape the amplitude and modulate the pitch of the sound, and Audio Animation allows the envelope parameters to move over time. Here is the post on how audio animation can be created in Ableton: https://wp.me/p5yJTY-vL. I use filters to sculpt out harmonics and add texture to the sound of the Model:Samples or the Neutron. So far, I am experimenting with banks of filters to sculpt out or boost particular harmonics, then performing a finer tuning with some EQ. I am listening for a diverse sonic spread, then tuning it in, then spreading, and finally fine-tuning.
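For the curious, here is a minimal scipy sketch of the filter-bank idea: pull individual harmonics out of a source with narrow bandpasses, boost the ones you want, and sum them back in. Ableton's devices do this far more musically; the source, harmonics and gains below are just assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
t = np.arange(sr * 2) / sr
src = 0.3 * (2 * ((110 * t) % 1) - 1)          # buzzy 110 Hz saw, rich in harmonics

def band(sig, center_hz, q=20.0):
    """Narrow bandpass centered on one harmonic."""
    bw = center_hz / q
    sos = butter(2, [center_hz - bw / 2, center_hz + bw / 2],
                 btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, sig)

# boost harmonics 2, 3 and 5 by different amounts, then mix back with the source
boosted = sum(g * band(src, 110 * h) for h, g in [(2, 2.0), (3, 1.5), (5, 1.0)])
out = 0.4 * src + boosted                      # crude "finer tuning with EQ" stage
```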
Just found out that Moogfest 2020 has been cancelled, so I am feeling a little disappointed. However, that disappointment is tiny compared to the gifts that four years of attending Moogfest gave to me and countless others. Even if the festival never happens again, it left its vibratory mark on two North Carolina towns, and neither of them will ever be the same.
The fest moved to Asheville, NC in 2010 after four years in New York City. Moog has been a presence in Asheville since Bob Moog settled there in 1978 and opened the Moog factory. Moogfest Asheville ran from 2010 to 2012, culminating in appearances by Brian Eno, Kraftwerk, DEVO and many other big-name electronic acts. Eventually, the fest got a better deal in Durham and came east. Asheville remains a hub of synthesizer activity with the Moog factory and the recently opened Moog museum. Plus, the module crafters at Make Noise have called Asheville home since 2008, and there is an active and prolific community of synth players there to this day. Synthesis leaves a vibrational mark!
The first year Moogfest hit Durham, it was big-time razzle-dazzle!! Every theatre and bar venue was booked with performances. Laurie Anderson, Jaron Lanier, Silver Apples, a three-day residency with Gary Numan, a huge outdoor stage, a sleep concert, signage all over downtown – it was amazing! Working the ticketing table that very first year was incredible fun. Everyone was so pumped and joyful! All my co-volunteers were music producers as well, which was exciting! The subsequent years were just as much fun, although seemingly scaled down each year. In 2019, I was impressed by the presence of a number of local acts.
And, to be honest, that is what I would love to hear! More local players, low-key venues, meetups, parties, jams! This is something the synth community in the Triangle/Triad area can do! I have very little nostalgia for "the great ones" of the past! I appreciate the ground they broke, but, damn, let's walk on that ground. To truly honor them, let's dance on that ground. So many people out there making so much beautiful noise!! I do not hear the need to import acts.
I would love the local synth community to take the weekend that was to be Moogfest 2020 and do something with it. One idea is a 24-to-48-hour synth concert – players could sign up for slots ranging from 20 minutes to an hour. It would be fun to have each player or group leave a loop or drone running so the next player can take off from there. All woven together! Another idea would be to book several venues for two or three nights and offer evenings of sounds all around. The coolest thing would be to have whatever we do benefit Jill Christenson's Day One Disaster Relief Organization!
My cohorts and I are flipping the script from our usual way of play for Durham Makes Music Day this coming Friday. We have played together as iBoD for about five years now. I make soundscapes in Ableton Live, while Susanne, Eleanor and Jim add their own riffs and melodies on top. These soundscapes follow a more formal, songish structure. While we mostly improvise, the more we play a piece, the more we lock into parts, which layers in a more rigid form and stifles the improv. Too much structure calcifies creative growth, so it is time for a shift!
Under the influence of Moogfest and the work of Pauline Oliveros, iBoD is exploring "all of the waveforms" and the means to transmit them. Susanne, Eleanor, Jim and dejacusse will provide the soundscape LIVE using voice, harmonicas, melodica, digital horn, recorder, flute and electronic modulations. In this way we will transmit a diverse range of audible waveforms as patterns of frequencies. These "freequencies" will permeate the larger soundscape that will surround us, altering the sonic environment in unusual ways.
Our location at M Alley/Holland Street (behind the Durham Hotel) means we will be in the thick of all the sounds of downtown Durham and all the outdoor music being made on Durham Makes Music Day. We will not be the loudest, but if you come down to where we are located, close your eyes, quiet your mind and open your ears, I guarantee you will hear something beautiful and amazing!
Being an introverted elder, I am no longer the festigoer I once was. One festival a year is enough for me and this is it! Moogfest is an incredible sonic universe that opens up in and around Durham, and turns my world upside down. This year was no exception!
There were numerous durational performances, sound installations and interactive opportunities. I was particularly excited to meet Madame Gandhi, who gave a fabulous performance at Motorco last year. This year she led two sessions of interactive play at The Fruit. The setup included a Push, a Bass Station, a drum set, microphones for vocals and small percussion, and two synths! WoW! She wasted no time with a lot of talk; we jumped right in and started playing. I really appreciated that! I played Push and did some vocals, but mostly listened. The group came up with some nice grooves.
This experience reminded me that I prefer solo or small group playing these days, and the energy of the experience was fantastic! Glad I showed up!
Durational performances involve a group or person performing for two or three hours solid with no silence. The 21C Museum Hotel ballroom and Global Breath Yoga Studio were the venues for these works. For durational performances, I love to sit with the beginning and the end OR go in for the middle. I heard Richard Devine and Greg Fox perform the beginnings and endings of their sets. It is always interesting to hear the different approaches to the start and finish in the broad context of a durational performance. I would love to create a durational performance there someday…soon.
21C was my favorite venue this year. I heard a wonderful variety of soundscapes there in quad sound with excellent sound engineers, a beautiful light show and interactive screens on either side. The bookends of the weekend for me were Ultrabillions and TRIPLE X SNAXXX (local favorites). Both of these sets were incredibly satisfying to listen to. Big synths and bouncy modulars all around. What I come to Moogfest for!! In between there was Aaron Dilloway, who gave an amazing embodied noise performance that was as much exorcism as anything else. Drum and Lace started her set with some wispy songs that all seemed to be the same short length, three minutes or so, but then she launched into some beefier pieces and really took the space. She had some gorgeous videos behind her as she performed.
Cuckoo was so much fun, and I envied him his tiny setup, which he carried into the venue in a knapsack. At one point, he was playing a vampy section and said, "well, this is the point where I introduce the band!" and proceeded to show us the three small controllers he had routed together. He has YouTube videos, so I want to check those out. Here he is playing at The Pinhook.
Finally, I had a few mind-opening, inspiring encounters. Steve Nalepa pointed the way to routing signal out of Ableton to quad speakers; he performed at 21C through quad speakers using Ableton. I always wondered why you would route tracks to sends only in the I/O menu. I haven't yet tried this, but plan to soon. Then there was the Modular Marketplace! I delayed going till Friday and spent three or four hours there playing. As the WoW would have it, the semi-modular Behringer Neutron was on sale for $100 off. I struck while I had a little cash flow. Less than a week later, Abe, Nuet and I played a beautiful primal soundscape for Audio Origami on Friday, May 3 at ADF Studios at 4:30pm.
Dear friend and compadre Karim Merlin loaned me a guitar pedal. He recently purchased an EarthQuaker Levitation pedal, which uses decay, tone and atmosphere controls to mix a versatile reverb with lots of space to explore. Since I am moved to play all the harmonics through synthesized sound, a guitar pedal gives me a chance to experiment with routing hardware effects through Ableton. I was very excited to try it out.
The wind left my sails when I searched YouTube for some supportive info and learned that, in order to get the signal from my ukulele or vocal mic through the pedal, into Ableton, and out to auditory cortexes, I need a reamp box between the pedal and the sound card, and a preamp box between the sound card and mixing board. This has to do with matching the signal out and the signal in – their level and impedance. Signal routing is the great labyrinth of synthesized sound in my mind. Signals can be sound energy or electrical energy; they can be boosted, attenuated, colored, and fed back onto and through each other. And, when it comes to hardware, signals must match somehow. Something to do with the energy of the signal. This part eludes my understanding so far, and I am eager to grok it! And what better way than to simply play.
The NI Komplete Audio 6 sound card I use has phantom power, which powers certain (condenser) microphones. The Behringer mixing board has several ways to elevate the signal. Perhaps these will suffice? When I ran the electric uke signal through the Levitation, there was a little bit of signal and a whole lot of noise. I tried playing with it within Ableton to see if I could make the noise blend, but no. A vocal microphone sounded the best, but wasn't a sound I wanted to cultivate. The YouTube guy may be right. I need to build an empire to use pedals through Ableton.
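The part of the labyrinth I can grok is the level arithmetic. A back-of-the-napkin sketch with typical ballpark figures (not the specs of my exact gear):

```python
# Pedals expect instrument level (around -20 dBu, high impedance), while an
# interface's line output sits around +4 dBu. A reamp box knocks line level
# down for the pedal; a preamp/DI brings the pedal's output back up.

def dbu_to_volts(dbu: float) -> float:
    return 0.775 * 10 ** (dbu / 20)    # 0 dBu is defined as 0.775 V RMS

instrument = dbu_to_volts(-20)         # ~0.08 V
line = dbu_to_volts(4)                 # ~1.23 V
print(f"line level carries ~{line / instrument:.0f}x the voltage")  # ~16x
```

That rough 16x mismatch (plus the impedance difference) is apparently what I was hearing as a little bit of signal and a whole lot of noise.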
So I end up back in Ableton, playing with all their reverb configurations and making a few of my own.
And I am still wanting a few more 3D knobs and sliders. I am anticipating that my next big sound love may come my way this week via Moogfest!
Even though I have worked intimately with the Ableton Live DAW (Digital Audio Workstation) for over eight years, I KNOW I have only scratched the surface of its capabilities. In the last few years, I have come to think of Ableton Live as my "instrument", my medium, what I create with. What an incredibly rich and mysterious instrument it is! Now I am enamored of modular synthesis, striving to become engaged with it, particularly Eurorack modules. I love the knobs and sliders, and the patch cables just put me over the top. Modular synthesis is like sonic Legos, a form of prayer, and a particular patterning of vibrations that matter. I want to plaaayyyy!!
This is the 2019 plan: spend time researching, listening to and playing modular units. I am learning how modulars work by playing with the Ripplemaker iOS app on iPad. Ripplemaker is a semi-modular synthesizer, which means that some of the signal routing is already patched together. Ripplemaker contains five modules plus an LFO and an amplifier/mixer, so it is a good basic learning tool. In addition, I am studying the book Patch & Tweak, which just came out a few months ago. It is a comprehensive survey of modular synthesis, from the basics, to the gear, to the practitioners. I bought the book because it contains an interview with a performer whose work I admire tremendously, Caterina Barbieri. The book is a treasure trove of information that I am studying every day. The write-ups about all the major brands and models of Eurorack synth modules are amazing. While Patch & Tweak could pass as a coffee table book, it is a bible to me. While doing all of this, I will save the money to buy my first modular synth unit. This could happen at Moogfest, or sooner, depending on the progress of research and saving.
As for now, I have invested a lot of time and money in Ableton Live and my computer setup, which is essentially a soft synth – I just need to configure it as such. Up to this point, I have treated Ableton as a composition tool/recording studio for the most part. I have been performing with Ableton and learning how to use the control surfaces (Novation Launchpad Pro and Akai Key 25) to create soundscapes in real time. Now I want to configure and play Ableton more like a synth. This new approach will mean I have to deeply learn all the audio effects in Ableton, plus other external plugins. I can configure the control surfaces to function as synth controls by mapping the parameters I want to sculpt with to the knobs and sliders of the control surfaces. The rest is signal routing.
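Under the hood, a mapping is just a scaling function. Here is a hypothetical sketch of the kind of scaling Ableton's MIDI Map mode does for each knob – the function name, ranges, and curve are mine, purely for illustration:

```python
def cc_to_param(cc: int, lo: float, hi: float, curve: float = 1.0) -> float:
    """Map a 0-127 MIDI CC value onto a synth parameter range.
    curve > 1 gives finer resolution at the low end (nice for filter cutoff)."""
    norm = (cc / 127.0) ** curve
    return lo + norm * (hi - lo)

# e.g. a knob sweeping a filter cutoff from 80 Hz to 8 kHz
print(cc_to_param(64, 80.0, 8000.0, curve=2.0))   # ~2091 Hz at the halfway point
```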
Ableton Live is so robust and complex that the signal routing possibilities are numerous. Audio effects can be placed on tracks, within tracks routed together through a group submix, within a clip, within send/return tracks, or on the Master track. Then, within the larger set, tracks can be routed to and through each other using the In/Out submenu embedded in each track. So, with this in mind, how can I create control voltages, oscillators, slopes, envelopes, LFOs, noise, and other modulators, and configure and play with them within Ableton Live?
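Conceptually, every one of those modulators is just a slow signal multiplied or added somewhere downstream. A toy numpy sketch of the vocabulary, nothing Ableton-specific – assumed rates and envelope times are mine:

```python
import numpy as np

sr = 44100

def sine(freq_hz, dur_s):
    t = np.arange(int(sr * dur_s)) / sr
    return np.sin(2 * np.pi * freq_hz * t)

def adsr(dur_s, a=0.05, d=0.1, s=0.6, r=0.3):
    """Linear ADSR envelope: a control signal in [0, 1]."""
    n = int(sr * dur_s)
    na, nd, nr = int(sr * a), int(sr * d), int(sr * r)
    ns = max(n - na - nd - nr, 0)
    env = np.concatenate([np.linspace(0, 1, na),   # attack
                          np.linspace(1, s, nd),   # decay
                          np.full(ns, s),          # sustain
                          np.linspace(s, 0, nr)])  # release
    return env[:n]

dur = 2.0
carrier = sine(220, dur)                          # the "oscillator"
lfo = 0.5 + 0.5 * sine(5, dur)                    # 5 Hz LFO, scaled to 0..1
noise = 0.05 * np.random.randn(int(sr * dur))     # a pinch of noise
voice = (carrier + noise) * lfo * adsr(dur)       # modulators shaping one voice
```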
The journey begins…
*photo of basic Auto Filter from Ableton Live Manual
Data sonification is a burgeoning area of sound design that is quite amazing in its depth and flexibility. I have a keen interest in sonifying data in a way that furthers our understanding of the data. I would love to create a sonic pie chart, for example. While a visual pie chart is a snapshot, a sonic pie chart would be more like an animation. A chemical reaction could be sonified by assigning particular voices to different parameters of the reaction: as the reaction proceeds, the voices would change from "reagent" voices to "product" voices. Consonance and dissonance could illustrate the changing relationships among the components of the chemical reaction. One possible way to sonify, in my mind.
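As a back-of-the-napkin sketch of that idea – toy pitches and a simple linear "reaction coordinate", all assumptions of mine:

```python
import numpy as np

sr = 44100
dur = 6.0
n = int(sr * dur)
progress = np.linspace(0, 1, n)              # reaction coordinate: 0 = start, 1 = done

t = np.arange(n) / sr
reagent = np.sin(2 * np.pi * 261.63 * t)     # C4 stands in for the "reagent" voice
product = np.sin(2 * np.pi * 329.63 * t)     # E4 stands in for the "product" voice

# as the reaction proceeds, the reagent voice fades out and the product fades in
mix = (1 - progress) * reagent + progress * product
```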
Then at Moogfest 2018, a workshop introduced me to the world of SuperCollider and Max/MSP as instruments for creating sonic pie charts. Mark Ballora of Penn State University (please check out his work at http://www.markballora.com) has been working with sonifying data for decades. He was doing it when no one was paying attention. Mark uses SuperCollider to create sonifications of tidal changes and the movement of hurricanes. This type of sonic representation of data illustrates how a group of parameters changes over time; when you listen, you hear all of the changes happening over time. Voila! A sonic pie chart! Attending Mark's workshop shifted my soundsense, as I realized I do not want to learn computer programming (at this time). This blog post by Mark Ballora and George Smoot (https://www.huffingtonpost.com/mark-ballora/sound-the-music-universe_b_2745188.html) helped me understand that my interest is in exploring how modal/timbral shifts set in a familiar, equal-tempered scale spectrum might illustrate data-driven relationships. What I am interested in is more a sonic illustration than a map or a pie chart.
Just before Moogfest, The Dance DL, a Durham dance listserv, sent this announcement:
Rob Dunn's lab at NC State University explores microbiomes of some of our most familiar places. The sourdough project studies sourdough starters from around the world, including some really ancient ones that have been passed down for generations. Seeking an artist working in any media with an interest in microbiology, bread baking, making the invisible visible, and/or communicating complex science through art. Help us bring the awe and wonder of science – and the microbial world – to the world.
As I read this notice, it felt like a dream! I have a two-and-a-half-year-old sourdough starter which is used to create 75% of the bread Trudie and I eat. I have recently studied cell biology and neurobiology, and have a deep interest in molecular chemistry, about which I am just learning. And I am looking for a data sonification project. I sent them an inquiry, they checked out my sound work, and I was invited to participate.
First step: meet with the sourdough folks at Rob Dunn's lab. On Friday, June 15th, Erin McKenney, post-doctoral fellow in microbiome research and education and a research lead on the sourdough project, and Lauren Nichols, Dunn Lab manager, met me in the lobby of the David Clark Labs (home of the Dunn Lab). I learned that the sourdough project is looking at the ecology of sourdough starter communities as it relates to yeast and bacteria growth in flour when exposed to water and the local microbial environment. I attended a lab staff meeting and learned about the amazing research being done here. All the projects are basically looking at how the smallest phenomena impact much larger phenomena and vice versa: the micro-to-macro-to-micro feedback loop. And they keep finding that diversity is the key to sustainable growth and a healthy environment. I left the meeting excited and inspired! Next stop will be the As If Center in Penland, NC in October.
The only other preparation I would like to do is to try sonifying some data. I reached out to the Rob Dunn Lab folks, and Erin McKenney sent me a data set to try my hand at. The data is about nine lemur babies from three lemur species, and how the microbial makeup in each baby’s stomach evolves as changes are introduced to their diets. (This is Erin’s dissertation study!) We have identifiable parameters that can be orchestrated to show changes over time. Perfect!
The data is on a massive (to me) spreadsheet with lots of terminology I don't know…yet. This will be an interesting process as we work out exactly what the sonic map will depict. I sense that certain data will lend itself to sonification; which data, I do not yet know. After spending some time studying the spreadsheet, I asked Erin how we could cluster some of the microbial data together, and she sent me the class and phylum data sheets. Phylum became my focus, as there were only 35 phyla as opposed to 95 classes and 255 strains of bacteria. One of the lemur mothers had triplets, so I decided to put together phylum profiles for this small group. Culling through the data for these specific individuals narrowed the phyla down to 24; then I made an arbitrary cutoff of >0.00 density for each phylum (Erin said this was fine and is actually a tool scientists use to declutter data). Now I was down to 15 phyla – a manageable number for a timbral illustration.
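Once the data is in a table, that culling is a couple of lines of code. A sketch of the kind of filtering I did – the file and column names here are hypothetical, the real sheet uses the lab's own labels:

```python
import pandas as pd

df = pd.read_csv("lemur_phylum_abundance.csv")     # rows: samples, columns: phyla

triplets = df[df["mother_id"] == "M1"]             # keep just the triplet family
abundance = triplets.drop(columns=["mother_id", "individual", "stage"])

# drop any phylum that never rises above the density cutoff
kept = abundance.loc[:, (abundance > 0.0).any(axis=0)]
print(f"{abundance.shape[1]} phyla -> {kept.shape[1]} after the cutoff")
```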
The microbes were collected from the three babies six times from birth to nine months. The timeline for the samples was birth, nursing, introduction of solid foods, regular solid foods, and two points during weaning. Microbes were collected from the mother when she gave birth. Erin had the brilliant idea to have the mother's phylum profile (which does not change over time) be a drone under the babies' phylum profiles in the sound map. This lets you hear when the profiles diverge and when they converge.
The sonic substance for all this is a phyla megachord that stretches from G1 to G5. Each phylum is voiced by a single pitch; for example, Proteobacteria is G1. Since there are only twelve pitch classes in a chromatic scale, some of the phyla land on the same pitch in different octaves. Five phyla tended to have the highest presence in each sample, so I made them the Gs, and all the rest got separate, distinct pitches. I used amplitude to render how much of each phylum was present in each sample.
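In code terms the mapping is simply phylum → pitch, abundance → amplitude. Here is a sketch of the scheme; only Proteobacteria = G1 comes from the text, the other phylum-to-G assignments and abundance values are hypothetical placeholders:

```python
import numpy as np

sr = 44100

def hz(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# G1 = MIDI 31 up through G5 = MIDI 79; the five dominant phyla land on the Gs
phylum_pitch = {"Proteobacteria": 31, "Firmicutes": 43, "Bacteroidetes": 55,
                "Actinobacteria": 67, "Verrucomicrobia": 79}

def voice_sample(abundances, dur=4.0):
    """One sample = one chord; each phylum's loudness = its abundance."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for phylum, amount in abundances.items():
        out += amount * np.sin(2 * np.pi * hz(phylum_pitch[phylum]) * t)
    return out / max(np.abs(out).max(), 1e-9)     # normalize so it doesn't clip

chord = voice_sample({"Proteobacteria": 0.6, "Firmicutes": 0.3, "Bacteroidetes": 0.1})
```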
Then there was the question of how to voice the individual profiles in order to hear the data as clearly as possible. After much experimentation, the mother's voice is a woodwind with a steady tone throughout. I chose bell-like voices for the three lemur baby profiles, letting each phase ring out four times over the mother's profile. The idea is to listen and compare the mother's profile with the babies' profiles. Listen for the change (or lack of change) as each stage rings in four times. You will probably need to listen closely several times. What you hear is a uniformity of tone at birth that becomes more dense and dissonant as the phyla diversify with the babies' diversifying diet. Then the final weaning profiles settle into more consonance with the mother's profile. So very interesting!
When I sent this to Erin, she said, “The patterns you’ve detected and sonified are exactly what I published.” Yes! This is the sketch I will use to create a soundscape of the Lemur Data. From this exercise, some tentative questions have emerged that will help when we start working on the sourdough project:
How is the data organized/categorized?
What is being measured?
What are the significant changes and time frames within the data collection process?
What are the researchers interested in hearing from the data?
iBoD is back playing in the Sun(Ra) Room with a focus on improved recordings. In addition, we plan to play at the Central Park School Soundgarden on the Sunday evening after Moogfest, May 20. The last time we played there, the request came through for "more bells". So, this year, the bells will be central to the evening's soundscapes. More bells, y'all! Per yer request.
In 2016, in preparation for playing soundscapes in the Soundgarden, I did a detailed analysis of the harmonics of the metal tanks and tank tops that we call "The Bells". From this came the piece called Adrift in a Sea of Bells, which we played at the first post-Moogfest soncert. The dissonance and consonance that The Bells throw out can be sculpted by the soundscape's sonic character and the additional frequency forms created by the cohorts. Here is an excerpt from that performance:
We will perform this piece again on May 20th, but I wanted to design a different piece for The Bells. Instead of a sea, we will sound out a large field. This idea was fun to develop, starting with a reexamination of the sonic data from my previous research (for more on this, see https://wp.me/p5yJTY-ci ). Two ideas emerged: the field should be low, rumbly and percussive, and the tonality should be shaped by the tones of the middle-pole tanks and tops. These are the ones Eleanor focuses on when she "wakes up The Bells". I have a recording of Eleanor performing this sonic ritual, so I loaded that clip into an audio channel in Ableton and looped it. Then I started listening to voices in the Ableton stable, layered in some tones, and liked the sound of it!
The fundamental tones of the six tank tops and two short tanks available from the middle post are D, E, F#, G#, A. The intervals in this pentatonic scale are a 5th, a 4th, a tritone and a minor third. A scale beginning on C and containing those same intervals above the root is C, Eb, F, Gb, G. In Hewitt's Musical Scales of the World, this scale is close to the minor blues scale (if we throw in the Bb). Next step: play around with that. The two scale patterns being offset by a step creates a tension that is held together by the one common note – the F#/Gb.
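Reduced to pitch classes (C = 0 … B = 11), the offset between the two scales and the single shared note are easy to see. A minimal sketch of that arithmetic:

```python
PC = {"C": 0, "Db": 1, "D": 2, "Eb": 3, "E": 4, "F": 5, "Gb": 6,
      "F#": 6, "G": 7, "G#": 8, "A": 9, "Bb": 10, "B": 11}

bells = ["D", "E", "F#", "G#", "A"]        # middle-pole fundamentals
on_c = ["C", "Eb", "F", "Gb", "G"]         # the same intervals stacked on C

print(sorted(PC[n] for n in bells))        # [2, 4, 6, 8, 9]
print(sorted(PC[n] for n in on_c))         # [0, 3, 5, 6, 7]
print({PC[n] for n in bells} & {PC[n] for n in on_c})   # {6} -> the shared F#/Gb
```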
The voices and tonalities I choose to play under The Bells tend to be quite dark and heavy. The Bells have a cheery brightness of tone that calls for this buzzy, darker undertone as counterpoint. The dissonant character of The Bells is a dominant feature of the soundscape. They go together in this sweet and lovely way. Both Adrift and A Field tug at my stomach and heart! The process is to analyze the sonic spectrum of The Bells and then listen for what goes with that – and this heart-heaving stuff comes out.
Listening to the interplay of bells and electronic voices, I hear the bells encouraging continuous movement. These two balance and catalyze each other! Unfortunately, I do not yet have the live sound equipment or knowledge to convey all of this sonic richness to the world when we perform live. To be heard, The Bells must resound when being played; subtle gestures do not carry. Eleanor Mills, who is the master player of these bells, must pull a lot of sound out of them to be heard when accompanied by iBoD. Ideally, I would mic the bells and all the players into a mixing board and out to three speakers. Perhaps, one year, a person of sound heartitude will step forth. Till then you are stuck with my meager amplification.
In spite of our less-than-ideal sound setup, we have made some lovely recordings at The Bells. Here is one, Gone Won: Life is a Dream, from iBoD's last soncert at The Bells in August 2017:
Then there is the question of how to audience iBoD?
“Well, we just pull up a chair and watch you, right? You’re going to put on a SHOW, right?”
Well, not exactly. Our ideal audience would probably stroll by, slowly, listening, sit on the steps, look at the sky. Or lie on the ground close by with eyes closed.
Actually, Catherine DeNueve of Beaver Pageant fame, embodied our ideal audience as she strung up a hammock or did walking meditation around the schoolyard. Reclining and strolling are the appropriate audience postures for our soncerts. We are not entertainers, and yet we bring a gift of great vibrancy in the form of these long form soundscapes which we will play for you on
Sunday May 20th
7 pm
Central Park School Soundgarden (on the hill behind Cocoa Cinnamon.)
Eban Crawford, aka Senator Jaiz, is the audio engineer and sound designer for the North Carolina Museum of Natural Sciences in Raleigh, NC. I met Eban at my first Moogfest, where he was facilitating a community music-making workshop and an interactive exhibit from the museum. Both of these were highlights of Moogfest for me, as I got to play around with an Ableton Push, make music with strangers (which Eban later uploaded to SoundCloud for us to hear) and meet Senator Jaiz.
A year and a half later, iBoD (idiosyncratic Beats of Dejacusse, my improv group) played a show with Senator Jaiz thanks to Ted Johnson and Triangle Electro Jam at Nightlight Bar in Chapel Hill. So I was thrilled when Senator Jaiz contacted dejacusse to collaborate on a soundscape for a Museum of Natural Sciences exhibit. The company of collaborative conspirers for this project is rich and includes Raleigh's own SkidMatik, Boston's Petridisch, and New York composers Michael Harren and AfroDJMac. My assignment was to create a 10-minute drone in G minor within a particular tempo range. What fun to have clear constraints and freedom within those constraints! I sent him the finished drone piece in early November.
Now the exhibit Mazes and Brain Games is happening at the NC Museum of Natural Sciences till September 2018 and includes our 50 minute soundscape. Here is one of the first things you see when you walk into the exhibit:
The soundscape is now available on Spotify, iTunes and most online music retailers. Thank you for purchasing the album! Your support means the world to me!