All Data Lost Fest – August 17th – The Wicked Witch – Raleigh, NC

iBoD will perform THIS SATURDAY from 3:20 to 3:40 pm, so do not be late! This version of the idiosyncratic Beats of Dejacusse (iBoD) features Lisa Means and me in the first installment of a long-term sonic exchange called Playing by Ear. Our setup is simple: Lisa plays a Hollow TKD-Hybrid II electric guitar (built by Terry Dineen of Raleigh), which runs through a Behringer Neutron semi-modular synthesizer that I will be navigating on its way to the amplifier. So Lisa provides the melodic tones and rhythmic energy, while I modulate timbre and propagate rhythmic structure.

When I first proposed this set up to Lisa, she said, “I should just give the guitar to you!” I am controlling the sound that comes out of the amplifier, so Lisa wondered what she was contributing. Thus began a dialogue about collaboration, listening, playing, structuring a sonic improvisation, narrative, adaptation, exchange, and standing in our own sonic authority. This dialogue is 25% verbal and 75% listening/playing in the moment.

Another factor in all of this is our different ways of hearing. Lisa experienced a gradual hearing loss from adolescence into adulthood. Hearing aids help bring her sense of hearing out into the world, while much of what Lisa hears is in her mind’s ear. When Lisa listens deeply to the sounds that emanate from her collection of crafted guitars, she hears (and conveys) vast worlds. My own hearing often feels supersonic to me. I hear the clocks ticking, the brown noise of the air purifier upstairs, an airplane passing over the house and the morning trilling of wrens as a musical interplay with rich harmonic textures. Sound surrounds and beckons me. Lisa and I talk in order to understand how each of us is hearing what we are playing together. Sometimes we agree on a theme and go from there. After we play, we talk about what we heard.

Playing by Ear is Lisa channeling the vibration of the present moment via her fingers on electric guitar through the Behringer Neutron synth, which I will patch and tweak in response to what Lisa says. I am constantly listening and tuning in search of “all of the waveforms” with the intention of catching and amplifying patterns of life in the moment. This is easy to do in the SunRa Room, with its lively acoustic enclosure. Here are two snippets from one of our sessions a few weeks ago.

We are both aware that this will be quite different at The Wicked Witch this week. What happens when we take this dialogue into a public space? How will we play with all that we are hearing in that space? We are ready to find out!!

1000 Tiny X-rays and O Men and Mice from The Art of a Scientist exhibit 2019

The Art of a Scientist exhibit closed last weekend. If you did not get a chance to hear/see my collaboration with Hasan Abbas, MD-PhD candidate at Duke University, and Duke’s Shared Materials Instrumentation Facility (SMIF) – here it is! The video plays twice, and the whole thing lasts only a little over two minutes. For more on the concepts underlying the soundscapes, go here after listening: https://wp.me/p5yJTY-zm

iBoD presents FreeQuencies @Durham Makes Music Day

My cohorts and I are flipping the script from our usual way of play for Durham Makes Music Day this coming Friday. We have played together as iBoD for about 5 years now. I make soundscapes in Ableton Live, while Susanne, Eleanor and Jim add their own riffs and melodies over top. These soundscapes follow a more formal, songish structure. While we mostly improvise, the more we play a piece, the more we lock into parts, which layers in a more rigid form and stifles the improv. Too much structure calcifies creative growth, so time for a shift!

Under the influence of Moogfest and the work of Pauline Oliveros, iBoD is exploring “all of the waveforms” and the means to transmit them. Susanne, Eleanor, Jim and dejacusse will provide the soundscape LIVE using voice, harmonicas, melodica, digital horn, recorder, flute and electronic modulations. In this way we will transmit a diverse range of audible waveforms as patterns of frequencies. These “freequencies” will permeate the larger soundscape that will surround us, altering the sonic environment in unusual ways.

Our location at M Alley/Holland Street (behind the Durham Hotel) means we will be in the thick of all the sounds of downtown Durham and all the outdoor music being made on Durham Makes Music Day. We will not be the loudest, but if you come down to where we are located, close your eyes, quiet your mind and open your ears, I guarantee you will hear something beautiful and amazing!

Friday June 21st

8:30-9:30 pm

iBoD at M Alley/Holland St.

Synthesizing in Ableton: They are On It!

Well, my short-lived journey into configuring Ableton Live as a synthesizer has come to a halt with the purchase of a Behringer Neutron at Moogfest AND with the Ableton announcement that they are beta-testing CV plug-ins for Ableton 10. I am soooo excited about this direction.

My experiments with creating modulation FX using “dummy clips” or Envelope Generators yielded some new directions for iBoD and dejacusse. We are experimenting with running live sound through the FX tracks and EG clips. This coming Sunday, we will perform The Place ReSounds of Water in front of the Central Park School for Children. Eleanor Mills will play The Bells, dejacusse will morph the bell harmonics into a watery palette, and Susanne Romey will play NA flute over top. There will be meditative movement and the pouring of water. Come join us!

Sunday May 19 @4 pm

724 Foster St @The Bells

Moogfest 2019

Being an introverted elder, I am no longer the festival-goer I once was. One festival a year is enough for me, and this is it! Moogfest is an incredible sonic universe that opens up in and around Durham and turns my world upside down. This year was no exception!

There were numerous durational performances, sound installations and interactive opportunities. I was particularly excited to meet Madame Gandhi, who gave a fabulous performance at Motorco last year. This year she led two sessions of interactive play at The Fruit. The setup included a Push, a Bass Station, a drum set, microphones for vocals and small percussion, and two synths! WoW! She wasted no time with a lot of talk. We jumped right in and started playing. I really appreciated that! I played Push and did some vocals, but mostly listened. The group came up with some nice grooves.

This experience reminded me that I prefer solo or small-group playing these days, and yet the energy of it was fantastic! Glad I showed up!

Durational performances involve a group or person performing for 2-3 hours solid with no silence. The 21C Museum Hotel Ballroom and Global Breath Yoga Studio were the venues for these works. For durational performances, I love to sit with the beginning and the end OR go in for the middle. I heard Richard Devine and Greg Fox perform the beginnings and endings of their sets. It is always interesting to hear the different approaches to the start and finish in the broad context of a durational performance. I would love to create a durational performance there someday…soon.

21C was my favorite venue this year. I heard a wonderful variety of soundscapes there in quad sound with excellent sound engineers, a beautiful light show and interactive screens on either side. The bookends of the weekend for me were Ultrabillons and TRIPLE X SNAXXX (local favorites). Both of these sets were incredibly satisfying to listen to. Big synths and bouncy modulars all around. What I come to Moogfest for!! In between there was Aaron Dilloway, who gave an amazing embodied noise performance that was as much exorcism as anything else. Drum and Lace started her set with some wispy songs that all seemed to be the same short length, like 3 minutes or so. But then she launched into some beefier pieces and really took the space. She had some gorgeous videos behind her as she performed.

Cuckoo was so much fun, and I envied him his tiny setup, which he carried into the venue in a knapsack. At one point, he was playing a vampy section and said, “Well, this is the point where I introduce the band!” and proceeded to show us the three small controllers he had routed together. He has YouTube videos, so I want to check those out. Here he is playing at The Pinhook.

Finally, I had a few mind-opening, inspiring encounters. Steve Nalepa pointed the way to route signal out of Ableton for quad speakers. He performed at 21C through quad speakers using Ableton. I always wondered why you would route tracks to sends only in the I/O menu. I haven’t yet tried this, but plan to soon (a rough sketch of how I picture it is below). Then there was the Modular Marketplace! I delayed going till Friday and spent 3-4 hours there playing. As the WoW would have it, the Behringer Neutron semi-modular unit was on sale for $100 off. I struck while I had a little cash flow. Less than a week later, Abe, Nuet and I played a beautiful primal soundscape for Audio Origami on Friday, May 3 at ADF Studios @ 4:30 pm.
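
My still-untested understanding of the sends-only trick is that each send simply becomes one speaker feed, with four return tracks each pointed at one of the four hardware outputs. A rough, hypothetical sketch of the gain math, nothing more:

```python
import numpy as np

def quad_sends(track_audio, send_levels):
    """Sends-only routing sketch: the track outputs nowhere on its own;
    each of four sends feeds a return that owns one speaker output."""
    assert len(send_levels) == 4        # front-L, front-R, rear-L, rear-R
    return [level * track_audio for level in send_levels]

# e.g. mostly front-left, a touch of rear-right: the sound sits forward and left.
speakers = quad_sends(np.random.randn(44100), [0.9, 0.2, 0.0, 0.3])
```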

Thank you, Moogfest! See you next year!

Synthesizing in Ableton Live: Envelope Generators

My first pass at working with Ableton as a modular synth was to explore “dummy” clips. These are ordinary audio tracks holding a clip whose actual content is muted. The clip is then transformed into a container for audio-effects automation that can be applied to any audio routed through whatever audio track the clip is in. The clip becomes an Envelope template that can be pulled into other projects.
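
None of this lives in code inside Ableton, of course, but the core idea – a silent clip that exists only to loop an automation curve over whatever audio passes through its track – can be sketched in a few lines of Python. Everything here (the fade shape, the clip length) is illustrative, not anything from Live itself.

```python
import numpy as np

SR = 44100
CLIP_SECONDS = 8                 # length is arbitrary; the clip just loops

# The dummy clip itself is silent; all it carries is an automation curve.
# Here: a slow fade, standing in for whatever I draw into the clip.
envelope = np.linspace(1.0, 0.0, CLIP_SECONDS * SR)

def through_dummy_clip(routed_audio):
    """Whatever audio is routed through the track gets shaped by the looping envelope."""
    looped = np.resize(envelope, len(routed_audio))   # loop/trim the clip to fit
    return routed_audio * looped
```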

Ableton Live allows the sound designer to see and manipulate many parameters within each individual clip. This is most amazing, and something I have not paid enough attention to. Here is a picture of the clip console from the Ableton Manual:

For Audio Animation Clips, the focus is on the Sample box and the Envelopes box. The slider in the Sample box is used to mute the audio by simply turning it completely down. Now the clip is ready to be shaped into an envelope. (That is how Audio Animation Clips will function in my Abeju Synth Station: as Envelope Generators (AAC/EG).) The two drop-down menus in the Envelopes box give the sound designer access to all the effects and mix processing available on the track for this clip only. So whatever I draw onto the audio clip itself only happens when that particular clip is playing. In the case of an AAC/EG, the clip will play the animation of the envelope, and the envelope will be applied to any audio routed through the track.

In order to create an AAC/EG, we need an audio track and an audio clip. The length of the clip doesn’t seem to matter because it loops. (Currently, I am sticking with 8, 12, or 24 bars, and I hear potential for longer clips with more complex movement in the automation in the future.) Use the volume slider to mute the audio within the clip. (Addendum: Experience has taught me that the original audio in the clip should be low volume. I am going to start experimenting with creating audio specifically for these clips.) Then assemble a rack of audio effects you want to use to shape sound and insert them on the track. In the original audio clip, use the drop-down box in the clip to access each effect, and turn them all to Device Off by moving the red line in the clip box to the off position. Now you have an open audio sleeve with no effects enabled. When source audio is routed through this clip, the audio will sound as is, with no effects present. Next, duplicate the clip. This will be the first AAC/EG. Go to the drop-down box and turn on the devices you want to use. The top box takes you to the main effect, and the second brings up every adjustable parameter within the effect. When you identify a parameter that you want to change, go to the red line in the audio clip window (on the right above) and draw in the animation you want.
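
A hedged sketch of that recipe, with a toy tremolo standing in for whatever devices actually sit on the track: the original clip is the open sleeve (every device off), and the duplicate becomes the AAC/EG, whose drawn envelope only matters while that clip is playing.

```python
import numpy as np

SR = 44100

def tremolo(audio, depth, rate=4.0):
    """Toy stand-in for one effect on the track (not a real Ableton device)."""
    t = np.arange(len(audio)) / SR
    return audio * (1.0 - depth * 0.5 * (1.0 + np.sin(2 * np.pi * rate * t)))

# The original clip: Device Off everywhere – an open sleeve that leaves audio as is.
open_sleeve = {"tremolo_on": False, "tremolo_depth": None}

# The duplicated clip, now an AAC/EG: device on, with a drawn depth envelope
# (the 16-second length and the ramp shape are placeholders).
aac_eg = {
    "tremolo_on": True,
    "tremolo_depth": np.linspace(0.0, 1.0, 16 * SR),   # the "red line" drawn in the clip
}

def route_source_through(clip, source):
    """Only the clip that is currently playing decides what the track's device does."""
    if not clip["tremolo_on"]:
        return source                                       # no effects present
    depth = np.resize(clip["tremolo_depth"], len(source))   # loop/trim the envelope to fit
    return tremolo(source, depth)
```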

Once I finally understood all this, I began to design AAC/EG Modules. Each module is a track with 2 to 4 different envelope template clips. Each instance of the clip can be shaped by adjusting the automation (which I call animation) of the effects that are on the track. One technique I like is layering three effects over three clips: the track contains three effects (a delay, an EQ and a reverb), so the first clip has one effect on, the second has two, and the third has three. Initially, I tried linking several modules (tracks containing the clips) together but found this too cumbersome. The option to layer modules lives in the routing of each particular set.
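
That layering technique reads compactly as code. The three functions below are crude placeholders, not the actual delay, EQ and reverb I use; the point is just that each clip template switches on a progressively deeper slice of the same chain.

```python
import numpy as np

SR = 44100

# Crude placeholders for the three effects that live on the track.
def delay(x):   return x + 0.4 * np.pad(x, (int(0.25 * SR), 0))[:len(x)]
def eq(x):      return np.convolve(x, np.ones(32) / 32, mode="same")    # rough low-pass
def reverb(x):  return x + 0.3 * np.pad(x, (int(0.09 * SR), 0))[:len(x)]

# Three clip templates on one track: one effect on, then two, then all three.
CLIP_TEMPLATES = [[delay], [delay, eq], [delay, eq, reverb]]

def render(source, clip):
    """Run the source through only the effects the currently playing clip enables."""
    out = source
    for effect in clip:
        out = effect(out)
    return out
```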

To use the modules, insert another audio track that will hold the samples you are using. I label this track Source. Now there are several routing options, but the main idea is that the AAC/EG modules need sound coursing through them in order to perform. On the In/Out tab of the track, audio can be routed in from one place and out to another. The best mixing option so far is to have the AAC/EG modules receive audio from Source and then have every track send audio to the Master. This allows the faders to act as dry/wet attenuators: the Source can be heard as is or through the AAC/EG Modules and Clip Templates.
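
The routing described above is really just a parallel mix, and the dry/wet behavior of the faders falls out of the arithmetic. A rough sketch, with made-up fader values, where each "module" is one of the AAC/EG tracks treated as a function of the source audio:

```python
import numpy as np

def mix_to_master(source, modules, source_fader=0.7, module_faders=None):
    """Parallel routing: the Source track and every AAC/EG module feed the Master.

    Each fader scales its own track, so pulling Source down and a module up
    behaves like a dry/wet control between the raw sample and the processed one.
    """
    module_faders = module_faders or [0.5] * len(modules)
    master = source_fader * source                       # dry path
    for fader, module in zip(module_faders, modules):
        master = master + fader * module(source)         # wet paths, in parallel
    return np.clip(master, -1.0, 1.0)                    # keep the summed signal in range
```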

So far, I have created four modules: Combed, Mangled, Stutter Delay and Harmonics Flinger. [Post-Publication Addendum: More modules have been added to the above, and all are being fine-tuned. Our April 14th SITES event was rained out (rescheduled for May 19th), so the debut of the Abeju Synth Station will be tonight as part of the Electronic Appetizers program at Arcana. I am playing a piece called Perimeter Centre, which will feature the Ripplemaker semi-modular iOS synth as my sound source for the Abeju Synth Station. Come listen!]

Audiorigami Phase 2

Since releasing Audiorigami (Meditations on the Fold), my sonsense of how to explore the Fold has shifted. This shift is in sync with Glenna Batson’s return to Durham and the start of a monthly Human Origami Jam. Glenna is interested in exploring folds through a variety of deep somatic frameworks. She narrates the biomolecular potentials that the body traverses from utero through the many modulating intersections of growth. My own sonsense of the Fold is opening to the quantum aspects of sound and further harmonic interplay. I sense that these sonic realms might possibly allow access to some basic templates of life. Perhaps sound, in the form of patterned frequencies, guides life into being. Perhaps harmonic frequencies are part of a template for the growth and movement of life forms through space and time. That is what I am playing with here.

The focus of Audiorigami will now be to explore the changing shapes of sounds themselves. Audiorigami will propagate, excavate, and modulate the folds that emerge from and disappear into the waveforms that are the vehicle of sound. Modular/Granular Synthesis and Frequency Modulation are the methods for engaging with sound media. I plan to more carefully curate the sound sources I use and to do more sampling from my own recorded sounds.
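
Since frequency modulation is one of the named methods, here is a minimal two-operator FM sketch, just to show how one waveform folds the shape of another. The frequencies and the modulation index are arbitrary choices, not taken from any piece.

```python
import numpy as np

SR = 44100
t = np.arange(4 * SR) / SR

carrier_hz = 220.0                         # the audible tone
mod_hz = 110.0                             # the waveform doing the folding
index = np.linspace(0.0, 6.0, len(t))      # modulation depth grows over four seconds

# Classic two-operator FM: the modulator bends the carrier's phase, so the
# spectrum (the shape of the sound) keeps unfolding as the index rises.
audio = np.sin(2 * np.pi * carrier_hz * t + index * np.sin(2 * np.pi * mod_hz * t))
```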

Here are some excerpts from the Human Origami Jam that happened last month at ADF Studios. Glenna leads an exploration of lines and trajectories, corners and angles. The soundscape is my first rendering with some of the Abeju Synth Station modules I created from “dummy clips” in Ableton, coupled with the TAL-NoiseMaker VST synth plugin and Ripplemaker on the iPad.

The next Human Origami Jam will happen THIS FRIDAY, March 15th, at Joy of Movement Studios in Pittsboro, NC from 4:30 to 6:30 pm. https://www.thejoyofmovementcm.com

Come join us!