Early Explorations into Elektron Model:Samples

Sometime in November/December 2019 my earbrain decided we needed a sequencer for the Neutron. While I was successful in getting Ableton to communicate with the Neutron, I had to use the NI Komplete 6 audio driver, which seems to cause occasional sound dropouts and feels like an unreliable setup to me. Anyway, after extensive research, I settled on the Elektron Model:Samples. While it is geared toward creating drum patterns, with 6 track pads/sample containers that can play patterns 1 to 64 beats in length, I am hoping to explore its sound design/soundscaping capacity in addition to beat-making.

The Model:Samples uses buried menus, which I was not sure I would enjoy. I understand that the Elektron Digitakt has a deeper and more extensive menu system. The M:S has just the right amount of menu diving for me. Most of the effects knobs are dedicated and can be modulated per track AND per trigger, as well as over the whole pattern. No single button carries more than two or three functions. The deepest menu is the samples menu. I want to spend some time getting to know the cool samples that came with the M:S. However, I have spent most of my time with the M:S creating and loading my own samples.

For example, one sound clip of a plaintive horn riff became the one and only sound used in a pattern called Plaints. By changing the sample’s start and end points and varying the delay amount, frequency cutoff and reverb time within each track, I made each track sound different from the others. I played with this at The Shadowbox Sessions in January and now want to do more with this pattern.

Elektron Transfer is the software for loading samples into the M:S. As I collect and curate samples, it seems best to organize them into folders of six. This way I can load a whole folder into a saved pattern slot. I have not yet figured out how to see the samples that are already in the box, though samples can be deleted through the M:S menu. Samples can also be changed out while playing, which is a very cool feature: a pattern template can be completely transformed while it is playing by placing a different sample on the track.

One thing I am interested in exploring more deeply is setting effects modulation on specific triggers in the pattern. For example, the first trigger could have a low pitch with an LPF and heightened resonance AND only play 25% of the time. The 14th trigger might be a higher pitch with delay and feedback. These two sounds will express so differently, yet they are coming from the same track sample. Wild! This is the arena of creating sound PAINTINGS! How to orchestrate sounds within a grid pattern and NOT have them create a groove? How to use these parameter locks to create a moving and changing “pattern” within a fixed grid of 1 to 64 triggers/beats/notes?
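Here is a rough sketch of that idea in Python, just to make the logic concrete. This is not the Model:Samples engine, and the parameter names, values and probability handling are all illustrative, but it captures how a per-trigger parameter lock plus a chance setting can make two steps on the same sample behave like different instruments.

```python
import random

# Illustrative only: a toy step sequencer where each trigger carries its own
# parameter "locks" and a probability, loosely in the spirit of the M:S.
DEFAULTS = {"pitch": 0, "cutoff": 1.0, "resonance": 0.2, "delay": 0.0, "feedback": 0.0}

# Per-trigger overrides; anything not locked falls back to the track defaults.
PATTERN = {
    1:  {"locks": {"pitch": -12, "cutoff": 0.3, "resonance": 0.8}, "chance": 0.25},
    14: {"locks": {"pitch": 7, "delay": 0.5, "feedback": 0.6},     "chance": 1.0},
}

def play_step(step):
    """Return the parameters to fire for this step, or None for silence."""
    trig = PATTERN.get(step)
    if trig is None:
        return None                       # no trigger placed on this step
    if random.random() > trig["chance"]:
        return None                       # the 25% condition did not land this pass
    return {**DEFAULTS, **trig["locks"]}  # same sample, very different voice

for step in range(1, 17):
    fired = play_step(step)
    if fired:
        print(f"trigger {step:2d}: {fired}")
```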

My challenge this week is to work with this idea in preparing the soundscape for the Human Origami Jam this Friday, January 31, 2020. First, what sounds do I want to explore? Then, how can these sounds be triggered and mixed into a morphing pattern that does not sound like a groove? I will report back next week on how this develops.

If you have any interest in Elektron Model:Samples, I highly recommend True Cuckoo’s tutorial. I watched this multiple times before the M:S arrived, and was able to jump in and make stuff immediately.

No Moogfest 2020

Just found out that Moogfest 2020 has been cancelled, so I am feeling a little disappointed. However, that disappointment is tiny compared to the gifts that four years of attending Moogfest gave to me and countless others. Even if the festival never happens again, it left its vibratory mark on two North Carolina towns, and neither of them will ever be the same.

The fest moved to Asheville, NC in 2010 after four years in New York City. Moog has been a presence in Asheville since Bob Moog settled there in 1978 and opened the Moog factory. Moogfest Asheville ran from 2010 to 2012, culminating in appearances by Brian Eno, Kraftwerk, DEVO and many other big-name electronic acts. Eventually, the organizers got a better deal in Durham, so the fest came east. Asheville remains a hub of synthesizer activity with the Moog factory and the recently opened Moogseum. Plus, module makers Make Noise have called Asheville home since 2008, and there is an active and prolific community of synth players there to this day. Synthesis leaves a vibrational mark!

The first year Moogfest hit Durham, it was big-time razzle-dazzle!! Every theatre and bar venue was booked with performances. Laurie Anderson, Jaron Lanier, Silver Apples, a three-day residency with Gary Numan, a huge outdoor stage, a sleep concert, signage all over downtown – it was amazing! Working the ticketing table that very first year was incredible fun. Everyone was so pumped and joyful! All my co-volunteers were music producers as well, which was exciting! The subsequent years were just as much fun, although seemingly scaled down each year. In 2019, I was impressed by the presence of a number of local acts.

And, to be honest, that is what I would love to hear! More local players, low-key venues, meetups, parties, jams! This is something the synth community in the Triangle/Triad area can do! I have very little nostalgia for “the great ones” of the past! I appreciate the ground they broke, but, damn, let’s walk on that ground. To truly honor them, let’s dance on that ground. So many people out there making so much beautiful noise!! I do not hear the need to import acts.

I would love for the local synth community to take the weekend that was to be Moogfest 2020 and do something with it. One idea is a 24 to 48 hour synth concert – players could sign up for time slots ranging from 20 minutes to an hour. It would be fun to have each player or group leave a loop or drone running so the next player can take off from there. All woven together! Another idea would be to book several venues for two or three nights and offer evenings of sounds all around. The coolest thing would be to have whatever we do benefit Jill Christenson’s Day One Disaster Relief Organization!

Anybody want to do this???

Carnatic Water Music

Susanne, Eleanor, Jim and I have been soundscaping together for 5 years now. During that time, we have all grown as deep listeners and sound painters. I am grateful to play with people who can tune into the sonic environment and their own voices, and play the waveforms. We are sound painters, not musicians. Sometimes even we get confused.

The first time we played together publicly was at the Won Buddhist Temple Bazaar in October of 2014, and the first piece we played was Carnatic Water Music. (This soundscape is based on a Carnatic Indian scale that is included in Michael Hewitt’s book, Musical Scales of the World.) We played over and around what is now the first section of Carnatic Water Music while the rains poured down! We were actually in a tent, but the Zoom recorder was out in the rain with a raincoat over it. The recording has rain patter on it, which sounds like scratches on a vinyl record. I really like this recording! (We appreciated the company of Linda Carmichael singing and playing ukulele at this event.) Here is an excerpt:

Carnatic Water Music Nested

After this performance, we played CWM frequently at public events. It is a long-form soundscape that we play for 20 to 40 minutes. As time went on, I added some new sections to the piece to give players and listeners greater variety across the sonic spectrum and to vary the pace a bit. Now when I listen to Carnatic Water Music, I hear different energetic aspects of bodies of water, from lolling rivers to waving oceans.

The next stop for CWM will be as the main theme for The Space ReSounds of Water, a pop-up dance installation to be performed on April 18th. The performance is Durham’s offering for National Water Dance 2020. Here is a write-up about the event:

Since 2016, National Water Dance has brought attention to water issues through synchronized dance performances in multiple locations. iBoD and dejacusse want 2020 to be the year the Triangle joins the dance.

The Place ReSounds of Water is a sound, dance and visual art performance piece conceived and performed by iBoD in 2019. We would like to expand on the piece by creating The Space ReSounds of Water, a space/container with video projections, healing flowerscapes by Hana Lee, a soundscape by dejacusse and iBoD, and dance movement by Jody Cassell. The performance will take place as part of the National Water Dance event on April 18, 2020 at 4 pm.

The performance will run from 4 to 6 in a space where the audience can come and go. This is a meditative performance that can be engaged with on many levels. While some of the movement may be choreographed, most will be free-flowing improvisation the audience can participate in. Outside the venue, we will invite local water and environmental organizations to offer education and actions we can take to protect our waters.

To prepare for this event, we are making a really good recording of Carnatic Water Music. We have played this piece so many times that as soon as it begins we fall into a lovely sync with the soundscape. We are recording in the SunRa Room, which is a lively, if not acoustically perfect, space. As of now, we are playing through the soundscape and recording onto a separate track each time. Afterward, I mix the recording and put it out for feedback from the group.

Last week, I recorded one run-through into a track in Ableton and the second run-through into a track on the H6n. The Ableton track had the most presence and was easy to work with in the mixing process. The trick is to get the right balance between our live playing and the looping soundscape. Today I discovered several recordings we made through the H6n; I might be able to tuck some of those into the mix somewhere. We did our final two takes last week, so now I go to work in earnest!

This is one of several works-in-progress happening as winter sets in. Come spring, iBoD will release Carnatic Water Music as our first extended-play download!

Mercury Retrograde (or don’t fight it, surrender)

Right in the midst of the most recent Mercury Retrograde, I decided to dive into Max/MSP, a visual programming environment for controlling sound and light in performance. After downloading the software, I started an online class and was working with some patches when my computer audio stopped functioning. No sound out of the computer. Then the computer and sound card stopped talking. All of this right before an iBoD rehearsal at which we were to record Carnatic Water Music.

Using the Windows Troubleshooter, I discovered the problem was “audio services not responding” and that this problem was “not fixed”. Online, there are multiple fixes for this message. After cancelling our recording session, I tried all the suggested fixes several times – from inspecting Services to make sure Windows Audio, Windows Audio Endpoint Builder and all their dependencies were set to run automatically, to entering very specific commands into Command Prompt as Administrator. The first thing I did was update the ASIO4ALL audio driver, so no problems there!

After several days of trying different fixes, I was able to get the computer and sound card talking again! Ableton Sets and Projects were now audible! Yayyyyy! But the computer would not play WAV files. Outside of Ableton, audio services were still not responding. Finally, I uninstalled the ASIO driver and installed the driver for the sound card. I have a Native Instruments Komplete 6 sound card, which has been a great device. (I had audio dropout problems with the NI driver about a year after I purchased it, which was when I switched to the ASIO driver and all was well.) Changing back to the NI driver solved the audio problems completely, and I am back to sounding again!

A friend mentioned Mercury Retrograde as I was working through this process. Dang, I had forgotten about that current astronomical phenomenon. If I had remembered, would I have done anything differently? As things turned out, it is very good that I did not! While I got thrown off of Max (for the moment), I redirected my energies toward creating synth sequences in Ableton. Since purchasing the Behringer Neutron, I had been unsuccessful in getting Ableton set up as a sequencer for it. The Neutron had processed audio signals, but never a MIDI signal. Lo and behold, the NI Komplete 6 driver allowed Ableton to see the MIDI ports for the Neutron. Suddenly, I was hearing the synth voice and all the modulators. When I made a patch or tweaked a knob, the sound changed as I expected it to! WoW! I feel like this is the first time I have heard the instrument’s true voice!

Today I am working on a soundscape for the next Human Origami Jam at ADF Studios in Durham on December 6. Very excited to finally get going with the Neutron.

This is what I will make in the soundscape!

iBoD – Playing by Ear with Lisa Means: Hearing the Ethers

Lisa bought a new guitar! It is a limited-edition John Suhr electric guitar, commissioned and signed by the maker, in a faux alligator hardshell case. The top of the guitar is quilted maple and looks like rippled water. Lisa bought the guitar because its voice eclipsed the sound she was carrying around in her mind. She said she had a jazzy sound in mind, with rhythm (swingy, danceable) and a clean, clear tone when plucked (like George Benson). The Suhr guitar has a lovely tone with crisp, clean edges and a bell-like shape. The sound the Suhr guitar planted in Lisa’s earbrain is more “New Agey”.

A few weeks back, I sent Lisa a thumb drive with recordings of our sessions since June. She reports that the recordings were not helpful to her, as she couldn’t pick out her voice from the whole soundscape. This is good to know: the recordings give me a lot of information, but not so for Lisa. I know she listens to music by turning it up very loud in her home, so I asked if she did the same with the session recordings. She explained that she has sound reference files in her brain that pick up on familiar patterns associated with the song she is listening to. Without these references, Lisa is less able to make sonic sense of what she is hearing.

Our September 28, 2019 session focused on the new guitar and what it brings to our palette. And we played in a different relationship today. Instead of running Lisa’s guitar through the Neutron, we played on separate channels. Lisa wanted to hear her new guitar clearly since she is just learning it, so I played the Ripplemaker through the Neutron. In this configuration, Lisa leads the way, while I bring interesting underpinnings into the mix.

Listening back to the recording, I think this is another way for us to play together. Our collaboration becomes more like intermingled solos, so the impact of our playing together is indirect rather than direct. Our voices are tandem rather than merged, and we can respond to each other. One question is how to create useful audio reference patterns for Lisa. She said that she couldn’t hear the recordings on the thumb drive because they were too removed from what we are doing currently. So it seems possible that if she listens to a recording from the most recent session, she could create new reference files. We will try this out.

The October 5 session is when things came together. Lisa brought another guitar – a 17″ wide arch-top Kay guitar, which she describes as the kind of guitar you would find in the Sears catalogue in the 1950s. She played that and the Suhr while I created morphing streams of sound sequenced by the Ripplemaker and modulated by the Neutron through Abeju Synth Station modules. The quality of the sequenced sounds can be altered within the Ripplemaker, then in the Neutron. From there, the audio signal from the Neutron goes through an Ableton audio track, which can then be sent through and altered by the Abeju Synth Station AAC/EG modules. (For more info, go here: https://wp.me/p5yJTY-vL). Any of these Ableton tracks can go through a delay send and a reverb send. So there is a whole lotta modulating going on!!

Kay guitar

October 5 session

After our October 12 session, I am very excited about our playing as intermingled soloists at the 919 Noise Showcase on October 30. We ran ourselves through my Roland Eurorack mixer (Thanks, Jim!) so I could balance the sound. Then I recorded onto two H6n tracks and in the room. We decided to start with a wave of sound and then whittle it down. I was not sure this was working, but listening to the recording, I decided we need to just listen closely and have faith that it IS working.

Here is a mix of the 2 H6n tracks AND the room recording. This seems like an interesting way to capture sound recordings in the SunRa Room. That said, this mix has too much synth and not enough guitar, and we will fix that so the blend is better in the future.

Playing by Ear

Come and hear us play the ethers at 919 Noise this Wednesday 10/30 at 8:30!

Frankensynth

Ever since I saw Caterina Barbieri at the Pinhook during Moogfest 2018, my deepest desire has been to dive into the sonic sketches/sculptures/landscapes of modular synthesis. Caterina’s album title, Patterns of Consciousness, says it all. This sounding out of the electrical impulse that is at the heart of sonic events has become my spiritual practice, my way of hearing and understanding the world, my container of wonder!

The world of modular synthesis is dense with creative pathways and quite expensive, so I decided to start with what I have – Ableton Live, my soundscape companion for 8 years. For a while, I worked on creating Audio Animation Clip/Envelope Generator modules. This can be done by animating effects within muted audio clips so only the effects are heard, and then routing audio through the clips from a source track. The source audio is then modulated by the effects in the AAC/EG track. I used this for The Space ReSounds of Water to capture and modulate the live sound of the bells. Here is an example:

Then I bought my first hardware synth – a Behringer Neutron. This synth had great reviews, has knobs and a patch bay, and can be sequenced by Ableton. Ableton is beta-testing a pack that allows the DAW to play control voltages. I am not sure how this works, but it involves having an interface that is DC-coupled. And it will be for Ableton 10 Suite users, which I am not yet. All of this to say, I have not been successful at getting the Neutron conversing with Ableton via MIDI. I have had success with the Neutron by running audio signals through its input with the VCA bias knob all the way open. This worked out well, as you know if you heard our All Data Lost performance!

Before the Behringer, there was Ripplemaker, an iOS semi-modular synth I have played with for a few years now. We are old friends, and I can sit down to a fresh template on Ripplemaker and get going immediately with cool sonic relationships. This app will teach you about synths in a deep way. In the beginning, I referred to the manual constantly, but now it is easy to just jump in and play for long periods of time. Here is a recent soundscape performed on the Ripplemaker to accompany Jody Cassell for the last PROMPTS at The Carrack.

Now the fun begins! After some experimentation, I have cobbled together my Frankensynth. I begin with sequencing in the Ripplemaker, which provides the audio source for the Neutron, so we have an iOS synth and a hardware synth playing together. The audio from the Neutron then goes through a track in Ableton. Seven additional tracks in Ableton are each running AAC/EG effects and receiving audio from the track carrying the Neutron, so the Ripplemaker/Neutron-generated audio is heard through whichever AAC/EG track’s volume fader is up. These three synths (Ripplemaker, Neutron and Abeju Synth Station) are sitting inside each other like nested dolls. Here is a sample of how this can sound (recorded in the SunRa Room on a rainy day!):
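For the code-minded, here is a tiny sketch of that nesting written as function composition. None of this is real audio code; each function is just a stand-in for one stage of the chain, with made-up parameter names, but it shows how the Ripplemaker’s sequence gets wrapped by the Neutron, which in turn gets wrapped by whichever AAC/EG track is faded up.

```python
# Toy model of the Frankensynth signal flow (illustrative stand-ins only):
# Ripplemaker sequence -> Neutron shaping -> one Abeju Synth Station AAC/EG track.

def ripplemaker(step):
    """Sequenced source: a pitch per step (values are made up)."""
    return {"pitch_hz": 110 * (1 + step % 4), "stages": ["Ripplemaker"]}

def neutron(signal):
    """Hardware shaping stage: the Neutron colors whatever audio it receives."""
    return {**signal, "stages": signal["stages"] + ["Neutron"]}

def aaceg_track(signal, effect):
    """An Ableton AAC/EG track: animated effects applied to the routed audio."""
    return {**signal, "effect": effect, "stages": signal["stages"] + ["AAC/EG"]}

# Whichever AAC/EG track's fader is up decides the final color of the sound.
active_effect = "long shimmering reverb"
for step in range(4):
    out = aaceg_track(neutron(ripplemaker(step)), active_effect)
    print(out["stages"], out["pitch_hz"], out["effect"])
```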

I am very excited to play this setup with Lisa Means on guitar at the 919 Noise Showcase on October 30th at The Nightlight Bar in Chapel Hill!!

String of Yeasts

After reading and studying the data (so far) from The Sourdough Project, a bit of it jumped out as a possible sound palette: the growth profiles of the five most prevalent yeasts and AABs (acetic acid bacteria), measured as increasing optical density (OD) over a 48-hour period. Measurements were taken at 12-hour intervals, with density recorded on a scale from 0.1 to 1.2.

I was drawn to this data because the graphs reminded me of waveforms.

I am not at liberty to reveal the details of the data, so suffice it to say that these are 5 strains of yeast. We will call them pink, blue, orange, green and neon. The pinpoints mark the 12-hour samplings of the prevalence of each strain. So at 12 hours pink grew to around .25 OD, while neon grew to .6 OD. How to represent this in sound is the next question!

My old friend, the piano keyboard, provides a familiar sonic framework. A two octave chromatic scale will represent the sound of OD growth by stretching the OD measurement scale over the two octaves. Like this:

Each OD amount covers 2 notes. D and D# represent the .1 amount, E and F are .2, and so on. This allows some wriggle room when a 12-hour sample seems to fall between two numbers, as is seen with pink. The growth range for pink will run from D to F and encompass 4 notes. In the case of neon, the growth range runs from D to C and encompasses 11 notes. The differences in growth rates will be heard in the number and duration of the steps taken within each twelve-hour time frame. So far, so good!
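Here is a small sketch of how I read that mapping, for anyone who wants to follow along in code. The note indexing and the rounding are my own illustration, not the actual Ableton clip, but they reproduce the examples above: pink at roughly .25 OD gives a four-note run ending on F, and neon at .6 OD gives an eleven-note run ending on C.

```python
# A two-octave chromatic run starting on D3, with every 0.1 of OD covering
# two notes, so readings from 0.1 through 1.2 span all 24 notes.
NOTE_NAMES = ["D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B", "C", "C#"]

def note_name(index):
    """Name of the chromatic note `index` half steps above D3."""
    octave = 3 + (index + 2) // 12      # the octave number rolls over at C
    return f"{NOTE_NAMES[index % 12]}{octave}"

def top_note_index(od):
    """Highest note reached for an OD reading (0.1 maps to D3, two notes per 0.1)."""
    return int(round((od - 0.1) * 20))

def growth_run(od):
    """Every chromatic step from D3 up to the note for this OD reading."""
    return [note_name(i) for i in range(top_note_index(od) + 1)]

print(growth_run(0.25))   # pink at 12 hours: ['D3', 'D#3', 'E3', 'F3']
print(growth_run(0.6))    # neon at 12 hours: eleven notes, D3 up to C4
```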

The time frame will run in beats and measures. Since it is 48 hours of growth, one hour can equal one measure. The step patterns will run up to the highest note indicated by the OD data at that particular 12-hour marker. That makes each sampling unit 12 measures in length, which seems perfect. Even better, in 4/4 time, each 12-measure sampling unit is 48 beats long! Synchronous!

Let’s lay out the first 12 hours of pink and neon. Since all the yeast densities begin at .1, all the patterns will begin with D in the 3rd octave (D3). Pink grows from D through D# and E, and lands on F. For this growth pattern there are 4 notes and 48 beats, so each note will be 12 beats long. The long notes and fewer steps up communicate that pink did not grow much in the first 12 hours. Neon grows from D, D#, E, F and on up to C. For this growth pattern there are eleven notes and 48 beats, about 4.36 beats per note; so the first ten notes are four beats long, and the eleventh is eight beats. The longer note at the end places emphasis on the final growth number for that 12-hour period. Faster steps further up the scale sonify neon’s more abundant 12-hour growth.
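Put as arithmetic, each 12-hour sampling unit hands out its 48 beats across however many chromatic steps the strain climbed, with any leftover beats going to the final, highest note. The little sketch below is just one way of rounding that off, my reading rather than the actual clip, but it matches the numbers above.

```python
BEATS_PER_UNIT = 48   # 12 measures of 4/4 per 12-hour sampling unit

def step_durations(note_count):
    """Beat lengths for each note in one 12-hour sampling unit.
    Every note gets the whole-number share of 48 beats, and whatever is
    left over lands on the final (highest) note for emphasis."""
    base = BEATS_PER_UNIT // note_count
    last = BEATS_PER_UNIT - base * (note_count - 1)
    return [base] * (note_count - 1) + [last]

print(step_durations(4))    # pink:  [12, 12, 12, 12]
print(step_durations(11))   # neon:  [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 8]
```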

Looking at the graph, it is easy to hear that the growth patterns of pink and neon invert in the 12 to 24 hour sampling unit. Pink leaps from .25 to .7, while neon only stretches from .6 to .75. Again, note duration and the number of steps will sonify these contrasts in the data.

While a sense of growth is captured by the movement up the scale, there is not yet a sense of increasing density. To get at this, I decided to sustain the top note of each 12-hour sampling unit. For example, pink’s F and neon’s C would continue softly to the end of the 48 measures. The same follows for the last note of each 12-hour cycle, which creates the sense of sonic density.

Enough talk, let’s have a listen!

neon 48 hour growth pattern

pink 48 hour growth pattern

These are the 48-measure versions of the patterns. At 120 BPM, 48 measures of 4/4 really stretches out these relationships, making it harder to hear the movement of the data. Ableton Live has a function that allows me to collapse the sequence from 48 measures to 24 measures and still maintain the rhythmic integrity of the phrase. WoW! Then the phrase can collapse to 12 measures. All of these phrases will likely be a part of the Sourdough Song, but I am still deciding which version (24-measure or 12-measure) conveys the data more clearly. One of the researchers on the project said the longer growth articulations conveyed the anticipation the bakers feel as they wait for their starters to grow.

Here is the 12-measure version of both strains together. See if you can hear the changes described above. Listen closely for each voice – you will hear pink holding longer tones, while neon changes tone more quickly. It helps to look at the graph while you listen.

This will likely be one motif within The Song of Sour Dough. (What do you think of separating sourdough in the title?)