Code Music

During a trip to Washington D.C. in 2017, granddaughter Jahniya purchased a silver necklace from the International Spy Museum gift shop. The necklace is the word “strength” in Morse Code with round beads for the dots and longer tube beads for the dashes. When she showed me the necklace, I saw the dots and dashes as stabs and long tones and began wondering about soundscapes with words and phrases spelled out in Morse Code.

This idea is a perfect companion to TRIC Questions and my interest in using overlapping, predetermined sound articulation patterns in music. Morse Code is also a way to embed words into soundscapes. People seem to prefer music with words. We like a story, an image and the sound of the human voice. For me, the singer and the words hijack a large share of the ear-brain in any listening situation. That is why I prefer my music sans words – let the instruments convey the narrative. If, as is often claimed, some 70% of meaning is derived from the non-verbal components of speech, including cadence and tonality, then music should be capable of conveying information and meaning quite effectively. Morse Code brings a verbal aspect into the soundscape without words taking over the show.

Here is the simple instruction sheet for creating Morse Code:

This will be my idiosyncratic metric template. The time signature is open and based on units: 1 unit = 1 beat = the length of a dot. The measures can be laid out in the number of units it takes to complete the word or phrase in Morse Code, with the 7-unit break between iterations of the word.
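The unit arithmetic above can be sketched in a few lines of Python. This assumes the standard International Morse timing that the instruction sheet lays out: a dot is 1 unit, a dash is 3 units, a 1-unit gap separates symbols within a letter, a 3-unit gap separates letters, and the 7-unit break separates iterations of the word.

```python
# Subset of the International Morse alphabet (enough for "peace").
MORSE = {
    "p": ".--.", "e": ".", "a": ".-", "c": "-.-.",
    "s": "...", "h": "....", "l": ".-..", "o": "---", "m": "--",
}

def word_units(word):
    """Total units one iteration of `word` occupies, including the 7-unit break."""
    total = 0
    for letter in word.lower():
        symbols = MORSE[letter]
        # dots are 1 unit, dashes are 3
        total += sum(1 if s == "." else 3 for s in symbols)
        total += len(symbols) - 1  # 1-unit gaps between symbols in a letter
        total += 3                 # 3-unit gap after the letter
    # replace the final letter gap with the 7-unit break between iterations
    return total - 3 + 7

print(word_units("peace"))  # → 48
```

A measure of "peace" would therefore be 48 units long, which is the clip length that keeps the template intact regardless of time signature.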

Feeling sure that modern composers have explored this idea already, I googled "Morse Code music". What I found were instances of Morse Code used as drum patterns. The code was always tied to a time signature, so the actual prescribed Morse Code template is not what is heard. Plugging these patterns into Ableton voices means that the exact Morse Code template is played and preserved in time. Since the clips are different lengths, they offset each other rhythmically. I have played with this idea for over a year using different words like "love" and "joy," or "om shanti." I did an iteration of "peace" over and over for International Peace Day 2017.

Now it is September 2018, and I am playing off the Peace Day soundscape in preparation for the Wake Forest Dance Festival this coming weekend (9/29/18). Justin Tornow was invited as a guest choreographer. She is creating a solo for dancer Maggie Page Bradley which I will accompany. The sound piece is called PSS – short for Peace Shalom Salam. Each clip is a Morse Code template of "peace," "shalom" or "salam." Each template follows the unit pattern described in the Morse Code description above. Some units are quarter notes and some are eighths, another layer of rhythmic offset. After laying out the templates, I squeezed several of them into smaller time frames (oh, the things you can do in Ableton), thus speeding them up while maintaining the integrity of the Morse Code template. In addition, I am throwing into this randomness one of my favorite Ableton audio effects, called Fade to Grey. It pairs a high-pass and a low-pass filter that move toward each other and basically squish the sound to nothing, while a ping pong delay holds a bit of the sound in an echoic pattern. This effect allows the sound to be fractured and reflected in interesting ways. The effect is on every track, so there is the potential to break the sound into multiple glimmering pieces. Here is a short sample of this effect at work:

If you are in Wake Forest close to Joyner Park tomorrow, please come to the Wake Forest Dance Festival. There is an open tech rehearsal in the morning, movement around the park in the afternoon, and the dance performances will be from 5:00 to 6:30. As always, I appreciate your support!

Sonic Illustrations and Life Forms

Data sonification is a burgeoning area of sound design that is quite amazing in its depth and flexibility. I have a keen interest in sonifying data in a way that furthers our understanding of the data. I would love to create a sonic pie chart, for example. While a visual pie chart is a snapshot, a sonic pie chart would be more like an animation. A chemical reaction could be sonified by assigning particular voices to different parameters of the reaction: as the reaction proceeds, the voices would change from "reagent" voices to "product" voices. Consonance and dissonance could illustrate the changing relationships amongst the components of the chemical reaction. One possible way to sonify, in my mind.

Then at Moogfest 2018, a workshop introduced me to the world of SuperCollider and MaxMSP as instruments for creating sonic pie charts. Mark Ballora of Penn State University (please check out his work at http://www.markballora.com) has been working with sonifying data for decades. He was doing it when no one was paying attention. Mark uses SuperCollider to create sonifications of tidal changes and the movement of hurricanes. This type of sonic representation of data illustrates how a group of parameters changes over time, and when you listen, you hear all of the changes happening over time. Voila! A sonic pie chart! Attending Mark's workshop shifted my soundsense, as I realized I do not want to learn computer programming (at this time). This blog post by Mark Ballora and George Smoot (https://www.huffingtonpost.com/mark-ballora/sound-the-music-universe_b_2745188.html) helped me understand that my interest is in exploring how modal/timbral shifts that are set in a familiar, equal-tempered scale spectrum might illustrate data-driven relationships. What I am interested in is more a sonic illustration than a map or a pie chart.

Just before Moogfest, The Dance DL, a Durham dance listserv, sent this announcement:

Auditions & Open Calls

Arts & Sciences Collaboration: Sourdough Collective – Rob Dunn Lab

Where: AS IF Center in Penland, NC

Rob Dunn’s lab at NC State University explores microbiomes of some of our most familiar places. The sourdough project studies sourdough starters from around the world, including some really ancient ones that have been passed down for generations. Seeking an artist working in any media with an interest in microbiology, bread baking, making the invisible visible, and/or communicating complex science through art. Help us bring the awe and wonder of science–and the microbial world– to the world.

As I read this notice, it felt like a dream! I have a two-and-a-half-year-old sourdough starter which is used to create 75% of the bread Trudie and I eat. I have recently studied cell biology and neurobiology, and I have a deep interest in molecular chemistry, about which I am just learning. And I am looking for a data sonification project. I sent them an inquiry, they checked out my sound work, and I was invited to participate.

First step: meet with the sourdough folks at Rob Dunn's lab. On Friday, June 15th, Erin McKenney, postdoctoral fellow in Microbiome Research and Education and a research lead on the sourdough project, and Lauren Nichols, Dunn Lab manager, met me in the lobby of the David Clark Labs (home of the Dunn Lab). I learned that the sourdough project is looking at the ecology of sourdough starter communities as it relates to yeast and bacteria growth in flour when exposed to water and the local microbial environment. I attended a lab staff meeting and learned about the amazing research being done here. All the projects are basically looking at how the smallest phenomena impact much larger phenomena and vice versa, the micro-to-macro-to-micro feedback loop. And they keep finding that diversity is the key to sustainable growth and a healthy environment. I left the meeting excited and inspired! Next stop will be the AS IF Center in Penland, NC in October.

The only other preparation I would like to do is to try sonifying some data. I reached out to the Rob Dunn Lab folks, and Erin McKenney sent me a data set to try my hand at. The data is about nine lemur babies from three lemur species, and how the microbial makeup in each baby’s stomach evolves as changes are introduced to their diets. (This is Erin’s dissertation study!) We have identifiable parameters that can be orchestrated to show changes over time. Perfect!

The data is on a massive (to me) spreadsheet with lots of terminology I don't know…yet. This will be an interesting process as we work out exactly what the sonic map will depict. I sense that certain data will lend itself to sonification, and that is the part I do not yet know. After spending some time studying the spreadsheet, I asked Erin how we could cluster some of the microbial data together, and she sent me the class and phylum data sheets. Phylum became my focus, as there were only 35 phyla as opposed to 95 classes and 255 strains of bacteria. One of the lemur mothers had triplets, so I decided to put together phylum profiles on this small group. Culling through the data for these specific individuals narrowed the phyla down to 24; then I made an arbitrary cutoff point of >.00 density for each phylum (Erin said this was fine and is actually a tool scientists use to declutter data). Now I was down to 15 phyla – a manageable number for a timbral illustration.
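The decluttering step can be sketched as a simple filter: keep a phylum only if its density rises above the cutoff in at least one sample. The phylum names, densities and threshold below are invented for illustration; they are not from the actual lemur data set.

```python
# Hypothetical samples: each maps phylum name -> relative density.
samples = {
    "birth":   {"Proteobacteria": 0.62, "Firmicutes": 0.30, "Tenericutes": 0.0008},
    "nursing": {"Proteobacteria": 0.40, "Firmicutes": 0.55, "Tenericutes": 0.0},
}

cutoff = 0.005  # illustrative threshold, not the one used in the study

def keep_phyla(samples, cutoff):
    """Keep a phylum if it exceeds the cutoff in at least one sample."""
    phyla = {p for sample in samples.values() for p in sample}
    return sorted(p for p in phyla
                  if any(sample.get(p, 0) > cutoff for sample in samples.values()))

print(keep_phyla(samples, cutoff))  # → ['Firmicutes', 'Proteobacteria']
```

Here the trace-level Tenericutes never clears the cutoff, so it drops out – the same kind of pruning that took the profiles from 24 phyla down to 15.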

The microbes were collected from the three babies six times from birth to nine months. The timeline for the samples was birth, nursing, introductory solid foods, regular solid foods, and two times as they were weaning. Microbes were collected from the mother when she gave birth. Erin had the brilliant idea to have the mother’s phylum profile (which does not change over time) be a drone under the babies’ phylum profiles in the sound map. This allows you to hear when the profiles diverge and when they converge.

The sonic substance for all this is a phyla megachord that stretches from G1 to G5. Each phylum is voiced by a single pitch; for example, Proteobacteria is G1. Since there are only twelve pitches in a chromatic scale, some of the phyla would land on the same pitch in different octaves. Five phyla tended to have the highest presence in each sample, so I made them the Gs, and all the rest got separate, distinct pitches. I used amplitude to render how much of each phylum was present in each sample.
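The pitch and amplitude mapping can be sketched like this, assuming MIDI note numbers with C4 = 60 (so G1 = 31 and G5 = 79). Proteobacteria on G1 comes from the text; the other anchor phylum names and the sample densities are illustrative guesses, not the actual assignments.

```python
G1, G5 = 31, 79  # MIDI note numbers, assuming the C4 = 60 convention

# The five most abundant phyla anchored to the Gs, one per octave.
# (Only Proteobacteria -> G1 is from the piece; the rest are placeholders.
# The remaining ten phyla would get distinct chromatic pitches in between.)
anchor_phyla = ["Proteobacteria", "Firmicutes", "Bacteroidetes",
                "Actinobacteria", "Verrucomicrobia"]
pitch_of = {p: G1 + 12 * i for i, p in enumerate(anchor_phyla)}

def chord(sample):
    """Turn one sample (phylum -> density) into (midi_pitch, amplitude) pairs."""
    return [(pitch_of[p], density) for p, density in sample.items() if p in pitch_of]

print(chord({"Proteobacteria": 0.62, "Firmicutes": 0.30}))
# → [(31, 0.62), (43, 0.3)]
```

Each sample then becomes one voicing of the megachord, with louder notes for the more abundant phyla.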

Then there was the question of how to voice the individual profiles in order to hear the data as clearly as possible. After much experimentation, the mother's voice is a woodwind with a steady tone throughout. I chose bell-like voices for the three lemur baby profiles, letting each stage ring out four times over the mother's profile. The idea is to listen and compare the mother's profile with the babies' profiles. Listen for the change (or lack of change) as each stage rings in four times. You will probably need to listen closely several times. What you hear is a uniformity of tone at birth that becomes more dense and dissonant as the phyla diversify along with the babies' diversifying diets. Then the final wean profiles settle into more consonance with the mother's profile. So very interesting!

When I sent this to Erin, she said, “The patterns you’ve detected and sonified are exactly what I published.” Yes! This is the sketch I will use to create a soundscape of the Lemur Data. From this exercise, some tentative questions have emerged that will help when we start working on the sourdough project:

How is the data organized/categorized?

What is being measured?

What are the significant changes and time frames within the data collection process?

What are the researchers interested in hearing from the data?

And this is just the beginning!