Right in the midst of the most recent Mercury Retrograde, I decided to dive into Max/MSP, a visual programming environment for controlling sound and light in performance. After downloading the software, I started an online class and was working with some patches when my computer audio stopped functioning. No sound out of the computer. Then the computer and sound card stopped talking. All of this right before an iBoD rehearsal when we were recording Carnatic Water Music.
Using the Windows Troubleshooter, I discovered the problem was “audio services not responding” and that this problem was “not fixed”. Online, there are multiple fixes for this message. After cancelling our recording session, I tried all the suggested fixes several times – from inspecting the Services to make sure Windows Audio, Windows Audio Endpoint Builder, and all their dependencies were running automatically, to entering very specific commands into the Command Prompt as Administrator. The first thing I did was update the ASIO4ALL audio driver, so no problems there!
After several days of trying different fixes, I was able to get the computer and sound card talking again! Ableton Sets and Projects were audible once more! Yayyyyy! But the computer would not play WAV audio files. Outside of Ableton, audio services were still not responding. Finally, I uninstalled the ASIO driver and installed the driver for the sound card. I have a Native Instruments Komplete Audio 6 sound card, which has been a great device. (I had audio dropout problems with the NI driver about a year after I purchased it, which was when I switched to the ASIO driver and all was well.) Well, changing back to the NI driver solved the audio problems completely and I am back to sounding again!
A friend mentioned Mercury Retrograde as I was working through this process. Dang, I forgot about that current celestial phenomenon. If I had remembered, would I have done anything differently? As things turned out, it is very good that I did not! While I got thrown off Max (for the moment), I redirected my energies toward creating synth sequences in Ableton. Since purchasing the Behringer Neutron, I had been unsuccessful in getting Ableton set up as a sequencer for the Neutron. The Neutron had processed audio signal, but never MIDI signal. Lo and behold, the NI Komplete Audio 6 driver allowed Ableton to see the MIDI ports for the Neutron. Suddenly, I was hearing the synth voice and all the modulators. When I made a patch or tweaked a knob, the sound changed as I expected it to! WoW! I feel like this is the first time I have heard the instrument’s true voice!
Today I am working on a soundscape for the next Human Origami Jam at ADF Studios in Durham on December 6. Very excited to finally get going with the Neutron.
Moogfest 2016, which took place May 19 – 22 in Durham, was a mind-blowing and inspirational experience for me. Last Fall, while selling my old instructional drumming CDs to the now-defunct Nice Price Books, I was talking to the owner about my new love: electronic music. He said, “You must be super excited about Moogfest coming here!” “Oh, yeah”, I responded, knowing I should be excited but just not feeling it yet. A few years earlier I wanted to go to the festival in Asheville, NC when Brian Eno was featured. But then I read how you spend all this money on a ticket and might not be able to get in to see what you came to see. So I knew about how the tickets worked, and that it was a celebration of Bob Moog, a synthesizer pioneer. The Moog Factory is still a fixture in Asheville, but Moogfest was coming right to my front door.
I was still feeling ambivalent in April and Moogfest was 6 weeks away. One thing I had decided – I wanted to be involved musically – so I started planning a Post-Moogfest event for the final day after everything “official” was over. (See post: http://wp.me/p5yJTY-ci) Then a volunteer application came my way, I filled it out and attended my first volunteer meeting. I met Wilson, Hugh, Robin, Ilsa and several other sweet, friendly folks who were psyched for the event. Bianca Banks, the volunteer coordinator, gave us postcards and Moogfest stickers (everybody LOVES stickers) and welcomed us to the Moogfest family. Sweet!
The only acts I knew in the line-up were Laurie Anderson and Sun Ra Arkestra. By this time, Sun Ra Arkestra had cancelled, so I started YouTubing the artists to get a taste of what they had to offer. I started with the women artists: Julianna Barwick, Grimes, Suzanne Ciani, Grouper, Julia Holter, Laurel Halo, Olivia Block, Paula Temple. I did not get very far in this exploration before Moogfest was upon me and I just had to wing it.
The first day, I worked guest check-in with Michael Jones (or Jones Michael, his producer moniker: check out his Soundcloud – https://soundcloud.com/jonesmichael), Nico and several other young musicians who told me about groups they were excited to hear. Volunteering took 18.5 hours of the weekend and got me free admission into the festival – way worth it. I learned that hospitality is not my skill set. (My partner, Trudie, said, “I could have told you that.”) I learned that there are lots of folks, young and old, poor and rich, out there creating vibrations in the form of music and sound. I learned that people who come to Moogfest are – for the most part – friendly, open and excited about the prospects of technology and music making.
Luckily, Jim Kellough recommended several performances to me on the first night that were fantastic. His first recommendation was Silver Apples, a staple of the NYC scene since the sixties. Silver Apples was an early electronic duo who played the soundtrack for the moon landing as it was broadcast on a big screen in Central Park in 1969. Now Silver Apples is just Simeon (his drummer died in 2005), and he really rocks the synthesizers. Here is a picture of Simeon with The Soundman AKA Christopher Thurston at Motorco the night of his performance:
Christopher and Silver Apples, Motorco, May 19, 2016
After this show, I headed over to see the best music of the whole weekend. Arthur Russell’s Instrumentals was inspired by the nature photography of Yuko Nonomura, and was performed only five times in Russell’s short life. The group, playing under the direction of Peter Gordon, was made up of Russell’s collaborators and cohorts, including Peter Zummo, Rhys Chatham and Ernie Brooks. The piece was jazzy, funky and took the listeners on a fabulous journey. My favorite part was Peter Zummo dancing around the stage and gently clapping his hands whenever the trombone had a musical hiatus. Their performance left me curious to check out more of Russell’s work.
Moogfest is all about synthesized sound. So on Saturday, I headed down to The Carrack to hear Antenes, who crafts old phone operator switchboards into sequencers and synthesizers. She performed on her DIY synths for half an hour and then did a presentation on how she came to create these particular instruments. I loved the deep sweeps and blips and bloops she carved out of various oscillating waveforms. Next stop was the Pop-Up Moog Factory, where employees were building actual Moog synthesizers right before our eyes. The employees worked at four stations, performing assemblies and passing them on to the next table. By midday Saturday, they had assembled 14 Minimoog Model Ds. The factory was full of a variety of synths hooked up to headphones so people could play and experiment to the ear and brain’s delight. I had a fantastic several hours there, and left feeling like I really need a synth to add to my setup.
Then I checked out Critter & Guitari, who were in a geodesic dome tent outside the DPAC. These Brooklyn-based musician-entrepreneurs have created adorable little synthesizers that are just my style. I enjoyed playing with the Moogs, but they are expensive and heavy. (Dang, I do not need any more weight in my setup with a 12″ QSC K speaker to haul around.) I enjoyed jamming with the guys, the other peeps, and the train that passed by. Their Organelle lets you dial up a variety of sounds, play them polyphonically on a little wooden button keyboard, and tweak the sounds as you go. Neat! In my fantasy, they offer to give me one to play as a sponsor of iBoD when we go on our sound sculpture tour. Wouldn’t it be nice…
I was anxious to get a good seat for Laurie Anderson’s Saturday afternoon performance, so I got there waaaay early only to discover a long line snaking around The Carolina Theatre. I got in it only to discover the line was for a talk by Jaron Lanier, whose name I did not know. The guy in front of me did not know him either, but he figured, “He is the keynote speaker, he must be good!” As it turned out, he was right! Jaron is a musician, virtual reality geek, author and incredible human being. He started his talk by playing the khene, a Laotian mouth organ, which he said is a “digital” instrument thousands of years old that could have inspired the invention of computers. Here is a YouTube video, where he plays this instrument in his own amazing way:
His message was wonderful and optimistic. He said we need to “will away” our obsession with war, combat and all things military. He advocates a movement toward kindness and beauty as guiding values in technological development. He asked VR game makers to use the technology to engender empathy. What I heard was – let us play games that engage our emerging polyvagal brain rather than continuing to stir up our shriveling reptilian brain. Jaron Lanier is one gorgeous genius, and I was uplifted and inspired listening to him.
Next up was Laurie Anderson, who grabbed her electric violin, slung it over her shoulder and filled Fletcher Hall with deep sweeping harmonics that made my heart pound. She moved toward the audience as she continued playing, looking right at us. This connecting more openly with the audience is a shift in her performance aesthetic from the times I have seen her over the past twenty years. The next day, she talked about “seeing the audience” during her presentation/interview. While I enjoyed her performance, I was mesmerized by the retrospective talk about her work on Sunday. I love hearing and reading about artistic process. It is extremely intimate discourse, which is why many creatives are reluctant to share it. Laurie gave us a glimpse into her process over the years, and for that I will be forever grateful.
She spent a good bit of time talking about a recent work Habeas Corpus and how the piece evolved into an illumination of and a step toward healing the horrors and injustices of Guantanamo Bay. The work was presented in 2015 in NYC and is based on the experience of Mohammed el Gharani, the youngest detainee at Guantanamo Bay. He was sold to the US at the age of fourteen, kept in solitary, subjected to torture, and finally released by a US District Court judge for lack of evidence. He was held for seven years. The performance installation included a plaster cast chair the size of the Lincoln Memorial. Mohammed’s full body image was projected via a live video feed from Chad, where he now resides. He sat in the chair and told his story. The audio was one-way only, to protect Mohammed from hearing any personal attacks from the American public – there was concern that those Americans still blinded by their own fear and ignorance might attend the installation to berate him. He had suffered enough at American hands already. The video feed was two-way, so Mohammed could see the audience. The most moving thing Laurie shared with us was that many of the attendees came forward and mouthed “I am sorry” to Mohammed’s projected image. For more on Mohammed el Gharani and Habeas Corpus see this link:
Laurie Anderson echoed Jaron Lanier’s thought on the necessity for kindness, empathy and beauty as hallmarks of our creative relationship with technology. Both pointed toward the potential for technology to help us connect, see, listen to and understand each other even if we do not agree.
Laurie and Lou Reed, her husband who died of cancer in 2013, came up with three rules to live by which she shared with us: 1. Do not be afraid of anyone. 2. Have a good bullshit detector, and learn how to use it. 3. Be tender with life. Afterwards, I could only remember 1 and 2. That is because I have issues with tenderness. Tender feelings make me feel vulnerable. Gotta work on that.
There is lots more to write about, so many encounters and experiences packed into 4 days, 40 venues and nearly 300 speakers/performers/presenters. Moogfest was so much more than I ever expected – my world expanded several times over. And the best way to top it all off was to play with my cohorts before an exclusive and appreciative audience. Here is an excerpt from Adrift in a Sea of Bells, one of the pieces we performed in the soundgarden following Moogfest:
This experiment began with a rather dubious YouTube video about the “11th harmonic” and its power in breaking up cancer cells. The video is about the Rife Machine, an invention from the 1930s purporting to cure many diseases. Royal Rife was the scientist and inventor who “discovered” frequencies that could interfere with the frequencies of diseased cells. The narrator of the YouTube video stated that the 11th harmonic was the frequency that disrupted cancer cells. About a week after I started this post, I found a TED Talk along this same line:
What we are learning from quantum physics about how the Universe is put together lends quite a bit of credence to the idea that frequencies can disrupt disease. Oscillating frequencies make up the entire spectrum of “all that is.” When these frequencies interact with consciousness – “being” happens. Our singular awarenesses collapse the waveforms into the many points of existence – the mix of all our singularities creates what we call “reality”. The famous physicist Erwin Schrödinger put this idea another way when he said, “The total number of minds in the Universe is one: In fact, consciousness is a singularity phasing within all beings.” Oscillating frequencies engage with each other through constructive (in phase) and destructive (out of phase) interference (or, as I like to call them – engagement) patterns. Thus the fabric of reality is an oscillating organism of frequencies engaging, changing and disengaging with each other. Our brains stabilize the whole thing so that we can navigate and participate in our lived experience.
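The in-phase/out-of-phase behavior is easy to check numerically. Here is a minimal sketch (the names are my own, illustrative ones) that sums two equal sine waves first in phase, then 180 degrees out of phase:

```python
import math

def sine(freq, phase, n=1000, rate=8000):
    """Sample a unit-amplitude sine wave with the given phase offset (radians)."""
    return [math.sin(2 * math.pi * freq * t / rate + phase) for t in range(n)]

a = sine(440, 0.0)
in_phase = [x + y for x, y in zip(a, sine(440, 0.0))]       # constructive
out_phase = [x + y for x, y in zip(a, sine(440, math.pi))]  # destructive

def peak(wave):
    return max(abs(s) for s in wave)

print(round(peak(in_phase), 3))   # ~2.0: the amplitudes add
print(round(peak(out_phase), 3))  # ~0.0: the amplitudes cancel
```

Two matched waves in phase double the amplitude; shifted by half a cycle, they erase each other.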
Both of these videos assert that a harmonic relationship created by a low tone and a higher tone is necessary to disrupt diseased cells. In both cases, the necessary frequencies equate to an extreme number of oscillations. Dr. Holland said that frequencies needed to be around 300,000 to 400,000 hertz in order to destroy cancer cells. While these frequencies are waaaay outside of the audio spectrum, there is an organizing principle that allows for the possibility that lower audio frequencies might influence healing. And that organizing principle is – the octave. Whatever frequency you start with will always return “home” when it doubles. It is itself again. For example, middle C on a piano is about 262 Hz; double that to 524 Hz and you are at C again. This creates a resonating fractal that repeats on and on into infinity.
The harmonic overtone series, which is the basis for most everything we hear musically, is built around this doubling principle. As we add more iterations of the fundamental frequency, we create more overtone relationships. Using the middle C example again, adding 262 Hz to 524 Hz gives us 786 Hz, which is G, a fifth above C. Add 262 Hz to 786 Hz and we get 1048 Hz, which returns us to C again. Now we are two octaves above our fundamental frequency Middle C, AND we are at the 3rd harmonic. By adding 262 Hz eight more times we reach the 11th harmonic, which is 3144 Hz – G in the fourth octave above middle C. (For more on harmonic overtones and their impact on our cosmic existence check out Hans Cousto’s book The Cosmic Octave.) Now I can create an audible 11th harmonic by combining a fundamental frequency and the fifth degree of that frequency in the fourth octave above it. So I decided to make a leap of faith into the realm of the cosmic octave, and create a soundscape that hinges on an 11th harmonic and the healing secrets it may hold.
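To make the arithmetic above concrete, here is a tiny sketch (variable names are my own) that stacks iterations of a 262 Hz fundamental and picks out which of them are octaves, i.e. the multiples that return "home" to C:

```python
import math

fundamental = 262  # Middle C, rounded, in Hz

# Stacking iterations of the fundamental gives the harmonic series.
harmonics = [fundamental * n for n in range(1, 13)]

# A multiple is "home" (C again) exactly when its ratio to the
# fundamental is a power of two -- the doubling principle of the octave.
octaves = [f for f in harmonics if math.log2(f / fundamental).is_integer()]

print(harmonics[2])   # 786 Hz: G, a fifth above the octave
print(harmonics[11])  # 3144 Hz: the "11th harmonic" of the post, G again
print(octaves)        # [262, 524, 1048, 2096]: every doubling returns to C
```

The octave entries double without end, which is the "resonating fractal" described above; everything in between supplies the fifths and other overtone colors.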
Folding/Unfolding: The 11th Harmonic is built on a tetrachord of fundamental tones – C E G B – accompanied by their 11th harmonic companions – G B D F#. The paired tones are four octaves apart, so this is not an interval you are accustomed to hearing. I chose six instruments and created patterns with these unusual intervals. As I thought about how to voice this harmonic, I identified three choices: 1. alternate between the fundamental and harmonic in a variety of rhythmic patterns, all on one voice; 2. have one voice sounding just the fundamental and a different voice sounding the harmonic; 3. since the 11th harmonic is a fifth in the fourth octave, and the two octaves below the fourth octave also contain fifths (according to the overtone series), I could vary the patterns with some fifth reinforcements in those lower octaves. The second choice was very monotonous and weakened the presence of the 11th harmonic, so I went with the other two as my basic structure.
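For the curious, the fundamental/companion pairs can be tabulated in a few lines. This is my own illustrative sketch, using rounded equal-tempered frequencies (A4 = 440 Hz) and treating the companion as 12x the fundamental (three octaves up, then a fifth), which lands on the G B D F# companions named above:

```python
# Rounded equal-tempered frequencies (A4 = 440 Hz); illustrative only.
fundamentals = {"C4": 261.6, "E4": 329.6, "G4": 392.0, "B4": 493.9}

# Companion tone: 12x the fundamental, i.e. three octaves up (x8)
# and then a fifth (x3/2) -- the fifth within the fourth octave.
companions = {note: round(freq * 12, 1) for note, freq in fundamentals.items()}

for note, freq in companions.items():
    print(note, "->", freq)  # lands near G7, B7, D8 and F#8 respectively
```

Running this puts the C4 companion near 3139 Hz, a whisker from equal-tempered G7 (about 3136 Hz), and likewise for the other three pairs.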
This soundscape will be performed tomorrow, May 15th, from 2 to 4 pm as accompaniment for Glenna Batson’s latest Human Origami workshop. This workshop is subtitled Partnering with Paper, Exploring the Muse. Joy of Movement Studio in Chatham Mills is hosting the event. In addition to the featured 11th harmonic, I will use the audio folding techniques I discovered during the previous Human Origami workshop. (See blog post – http://wp.me/p5yJTY-c9)
One of the gifts of the new year is that I am realizing a long-held goal of learning to play the bass. I have always thought of myself as a born bass player – a laid-back maker of the low-end harmonies. I sang the lowest alto part in choirs and choruses for decades. Having spent the past few years playing percussion alongside Christopher Thurston, master bass man, my ear is primed for doing this now. Back in 2009, I bought a Kala U-Bass (a bass ukulele) and it has been sitting in its case ever since. So I pulled it out, made myself a diagram of the neck, and started figuring out familiar bass lines (Mission Impossible, Fever, various Motown, etc.). I am spending several hours a day playing and learning my way around the instrument.
Last Sunday, Lisa Means and Martha Dyer came over to play and record guitar improvisations. Lisa brought four of her guitars, which we looked at and listened to, eventually focusing in on two: Goddess and Yellow Moon. Both of these guitars were hand built by Joe Young, a Canadian Luthier. Here are his descriptions of them:
This is the first in a series of ‘Goddess’ builds. This visually stunning OM guitar is crafted from Pomelle Sapele, a beautiful, iridescent, lustrous wood that delivers a rounded, gorgeous and balanced tone. The colours in her back, sides, headstock and rosette range from pink to light brown, to red and then to gold. Her Honduran mahogany neck and her striped ebony fretboard and bridge hold perfect tension through the strings; her Sitka spruce top, as sound as a bell. The image of The Goddess is etched into the centre of her back and is found deep within her body as you peer into the sound hole. This Goddess image symbolizes, at least to me, the connection we have with Earth. Roots deep into the earth, their form portrayed as a vessel for life, hair sensing the winds of change, and their arms reaching into the ether, the heavens and beyond. Of course, this image has the characteristics of a tree, the true beginning of wood’s song. The ancient Sanskrit word ‘OM’, which suggests the phrase ‘that which is sounded out loudly’ – the sound often vibrated at the end of mindful, spiritual practice – seemed the only appropriate choice for the guitar’s size. Sound and Spirit connects music and soul, creating an opportunity to hear and feel your music just as you want it to be.
This organic guitar is formed with a musical accordance of West Coast woods. Its back, sides, neck and fretboard are yew; the wood named by the Druids for its representation of rebirth and transformation. This instrument is earthy, woody and sacred. It has the clear, fundamental sounds of a bell and vibrates with a bright, sharp tone. At the 13th fret, the yew and yellow cedar neck meets the body; thirteen representing integrity and the female magic of the moon. The bird’s eye yellow cedar burl rosette, tail wedge and big leaf maple bindings unite this delicious instrument. The bridge is carved from the soft roundness of a yellow cedar burl, a wood known to promote peaceful thoughts. Ultimately, the sound is perfectly attuned to its origins: the forest, the ocean, and the sky.
Lisa gives loving care and attention to these instruments. She delights in them and is sensitive to their changing needs and moods.
Yellow Moon from Joe Young’s Website
Martha brought some percussion instruments: small cymbals, a toy xylophone, and tingshas she had picked up while traveling in Thailand. Martha is an expert percussionist who plays spoons with local bluegrass favorites The Blue-Tailed Skinks. She is equally skilled at bringing out harmonics on the guitar. The three of us played for several hours in the free improv/deep listening style that I encourage and enjoy.
Two Zoom H2n microphones were placed in the Sun(Ra) Room. One was located above us, at the edge of the corner cutout in the room. Experimentation has revealed that this spot picks up a really good mix of the instruments. The other Zoom was placed low, directly in front of the three of us as we sat in a semi-circle. The mics were preset with a low-cut filter, auto gain and a compressor/limiter. Both were set to surround sound with the five interior mics wide open. This gives me four pairs of stereo tracks to work with when compiling and mixing.
I plan to build a soundscape of all string instruments using clips of Lisa playing her guitars. The cohorts and I will play all strings as well. Perhaps some day we will create a Nested Soundscape in Baldwin Auditorium. That would be so cool!!
In the meantime, here is a moment that happened while Martha, Lisa and I were playing last week. I did not make any extra cuts and pastes in this excerpt. It evolved just as you hear it. Martha on percussion, Lisa playing Yellow Moon and me on the bass. I call this Time Out of the Blue.
My first experience of seeing a movie with live musicians playing the soundtrack was a monumental occasion. In 1981, Carmine Coppola wrote an original score for the 1927 Abel Gance silent film classic Napoleon, at the behest of his son, Francis Ford Coppola, who was releasing a reconstructed version of the film. The film played Radio City Music Hall for three weeks and then went on tour. I saw the performance at a very large venue in Detroit (Cobo Hall?). It was a breathtaking extravaganza. The film itself was groundbreaking cinema, introducing camera techniques that are still in use today, and a presentational technique that is NOT still in use – the three-screen triptych of Polyvision. The three screens, set in triptych formation on the stage, displayed a panoramic image or 2-3 different images that commented on each other. I guess it was the IMAX of its day. I remember being thrilled and transported by the mammoth interplay of sounds and visuals. Overwhelming!
Many years later, I am at The Pinhook in Durham NC appreciating Wendy Spitzer’s soundtrack for “Dreams of a Rarebit Fiend: THE PET”, a surreal B&W animated film based on a comic strip series entitled Dreams of the Rarebit Fiend by Winsor McCay. There is a live action version that came up when I googled the title, but it is this animated version that Wendy composed the soundtrack for (I had to mute the soundtrack that accompanies this clip. It did not work for me.)
Wendy and Billy Sugarfix performed the soundtrack and I remember the frisky fun of mallet percussion, slide whistles, and feeling delighted with the entire experience. It was just really fucking charming!!
I heard a live soundtrack for Nosferatu, the silent film that comes around every Halloween. And I enjoyed the D-Town Brass’ soundtrack for Georges Méliès’ silent film The Impossible Voyage at Motorco. Oh, oh, oh, and a soundtrack comparison/contrast event we went to at the Carrboro ArtsCenter several years ago. Jim McQuaid, a local filmmaker, had six different composers score the same short film by another filmmaker. That was incredibly enlightening. Some of the scores made no comment on the action and were just musical accompaniment, almost like Muzak or montage music. Others followed themes for each character, or a different feeling for each scene. There was one standout score that told a story through music, sound and silence that wove into the filmic narrative. It was the only one that did that, and I felt the film was elevated by the score. The audience favorite was a more romantic underscore that was suggested by the film narrative, but which felt more reductive than elevating. Although these soundtracks were not played live, this experience shared the “extra attentiveness” to the film score that live performance brings as well.
I plan to score a film someday and really enjoy exploring how music and film intersect and interact. The live soundtracks I have encountered so far have been for silent films with no spoken dialogue or environmental sound effects. Today I was reading an interesting article on mixing sound for “CineConcerts”. These are special screenings of contemporary films with a live orchestra playing the original music score from the soundtrack. This article was on a recent CineConcert screening of Francis Ford Coppola’s The Godfather with Nino Rota’s beautiful score. Unlike Napoleon, which Carmine Coppola aptly described as “wall-to-wall music”, The Godfather is a film with dialogue AND a memorable environmental soundtrack. In this situation the live orchestra must be able to swell into a scene and duck under the dialogue, so the conductor works closely with the sound engineer who is mixing the dialogue and sound effects. Here are some examples of the challenges facing the sound engineer and orchestra conductor in this unique situation:
As in any soundtrack mix, the dialog track had received appropriate reverb and other spatial effects to properly place it within the environment of each scene in the film. “But if you add that reverb to the acoustics of a concert hall, your dialog’s going to be floating up in the rafters someplace,” Hoffis explains. “That’s the big challenge for our CineConcert projects, to get the dialog under control—taking something that was meant for an acoustically designed movie theater and putting it in a super-live concert hall.” To bring the tracks back to a more raw form, Hoffis collaborated with friend and dialog editor Robert Langley, using iZotope RX4 Pro to remove just enough reverb from the tracks to make them distinct in the acoustically live environment. He then applied bandwidth compression to spots where unwanted sounds appeared on the production track, in order to reduce their visibility within the dialog. “There are scenes in the dialog tracks when you can hear actors adjusting their clothing or moving props louder than you can hear them talk,” Hoffis says. “So I try to go in with bandwidth compression and pull down those specific frequencies a little bit.”
During the actual performance, computer software helps the conductor and sound engineer stay in sync:
For synching, Hoffis and conductor/CineConcerts producer Justin Freer work with video director Ed Kalnins, using Figure 53’s Streamers and QLab live show control software, placing streamers and other indicators as directed by Freer for key points in the score—much as one might have used Auricle in the past during a score recording. “Every conductor is different and uses different color markers in different ways,” Hoffis explains. For example, when an orchestra is playing at a wedding party outside Don Corleone’s mansion, the camera follows characters inside to continue a conversation that is taking place in his study. “If you’re outside, in the perspective of being near the orchestra, Justin will play the orchestra louder. And then he’ll use a streamer to tell him he’s about to go inside, and he will mute the orchestra to match what would be heard if you were inside the study. So, rather than me mixing that, he controls the dynamics of the orchestra. That way, the audience is hearing the scored music the very same way that they would hear that cue if they were watching the film in a theater.” Vocal soloists, such as a singer at the party, come on the tracks provided by the studio, and are sent (as is all of Hoffis’s mix) to a powered monitor situated to Freer’s right onstage. “Justin has that cue in his monitor, as reference. He then conducts the orchestra in time with the singer. There’s no click track,” the engineer explains. With only a two-and-a-half hour rehearsal period, there’s not a lot of time for Hoffis to nail down the mix to the liking of everyone in the audience—both the filmgoers and the symphony-goers. “It’s a challenge. It’s mixing on the fly,” he says. “It’s a live performance, so you can never have total control of the orchestra. Occasionally the orchestra will overpower the movie, and occasionally the movie will overpower the orchestra. But hopefully we provide something both segments of our audience can really enjoy. It’s quite unique.”
While I do not know all of the programs they reference, such as Auricle and Figure 53’s Streamers, it is easy to get the idea of how these programs are used thanks to this well-written article by Matt Hurwitz in Mix Magazine. You can read the entire article here: http://www.mixonline.com/news/tours/godfatherlive/424155#sthash.umMqe0HD.dpuf
This sounds like such an interesting process, I would love to have been there. The Godfather is one of my all-time favorite films. To hear it with a live orchestra playing the soundtrack would be amazing.
Yan Jun is a pretty cool dude. He has a simple sound setup where he plays feedback frequencies, or, as he said in the Q&A following the performance, he “dances” the frequencies. Because the Carrack is a small, enclosed venue, Yan Jun chose to use silence/ambient noise as a part of his performance. As he began, he looked over his sound rig, which consisted of several small, bare speaker drivers, a shotgun mic with a parabolic shield, contact mics, a mixing board, and speakers with their own mixer. He looked at his rig for a long time, as if he had never seen it before. (He had been sitting and looking at it for the hour or so before he started performing. He said he had done an hour-long sound check as well.) He was focused, relaxed and unhurried.
I listened to some YouTube videos of Yan Jun performing, so I knew what to expect. This is the realm of noise, static, and all-inclusive harmonics with very few tones standing out to the ear. This is a different kind of music with a deeply interactive function. Yan Jun interacts with the feedback loop frequencies, the space, the vibe of the people in attendance, even the vibration that is posturing the space we were inhabiting. I asked him about his process and he said he goes by the “feeling” of the frequencies. He makes decisions about whether or not to “follow” the sound that happens in the moment. He seems to be having quite an intimate experience with the vibration. So then how do I, as an audience (in the truest sense of the word), find a way into what he is creating? Without the familiar tonal forms and cadences, or clear-cut harmonic relations, how do I engage with this music?
There seem to be a vast number of ways to engage and disengage with Yan Jun’s creations. His very deep focus on his personal interaction with vibrations in the room really demands the same from us as listeners. We have to bring something to the table. One woman said she was directed by his movements as to what to hear. Mirroring the creator’s experience is one way our brains and minds can interact with this creation. As it is a kind of “abstract” music, the invitation is to “read story” into it. We are highly trained experts at reading story into all aspects of existence. Since Yan Jun was so deeply engaged, many people could access by reading story into his movements and the resulting sounds.
People with hearing sensitivities might be invited to disengage. The frequencies and the distortion, while not painfully loud to me, may have been to others. This type of performance pushes the boundaries of our perceptions and our expectations, which often limit our perceptions. This can be a painful experience, but not an intolerable one.
I decided to use a spectrum analyzer to engage with the performance. Yan Jun said he pays no attention to what frequencies he is generating; this is not scientific, he goes by feel. My interest in the spectrum analyzer was to see/hear if there were any patterns to his performance. First, a disclaimer: I am just learning how to use spectrum analyzers, so I don’t understand everything about them. They plot amplitude against frequency, which I read as a means of locating dominant (louder) frequencies. I weighted the analyzer with a lower sensitivity to low sounds, sat as close to the sweet spot between the four speakers as I could get, and used an iPad app called Analyzer. I went back and forth between watching Yan Jun and the analyzer. While I could not see patterns, a sonic progression did emerge.
He began with silence, then brought in low rumbling frequencies below -40 dBFS. (I used this amplitude scale, which is used in digital audio measurement, where 0.0 dBFS is the loudest sound before clipping. I don’t know if this was an accurate way to measure acoustic sound, but I went with the familiar.) My window into the app tops out at about 14 kHz. Early on, I was not seeing any frequencies except the low ones (which at one point were joined by a passing motorcycle). So I watched Yan Jun, who at times made gestures while no change occurred in my ear, so he appeared to be having some difficulty engaging the frequencies. He got up and moved his chair back and to the left. From that point on he stood, and seemed to get entrained to what he was looking for. Frequencies around 13 kHz gave way to more around 8 kHz. At one point bunches of frequencies popped up in the 13-14 kHz range. As the performance progressed, he engaged more frequencies in the 8 kHz range, then he spent some time in 1-2 kHz (this range sounding a bit more familiar to my ear). At one point there were patches of frequencies slightly above and below the lower ranges of the human voice (100 Hz-1 kHz), and I thought he was avoiding those frequencies. By the end, he was bringing up more frequencies in that range, with harmonics at 8 kHz popping up many times.
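For anyone curious what an analyzer app is doing under the hood, its amplitude-versus-frequency readout can be sketched in a few lines: take a block of samples, compute a (naive, unoptimized) discrete Fourier transform, and convert each bin's magnitude to dBFS, where 0.0 dBFS is a full-scale signal. Everything here is my own illustration, with a synthetic 8 kHz test tone standing in for a microphone feed:

```python
import cmath
import math

def dbfs_spectrum(samples, rate):
    """Naive DFT; returns a (frequency_Hz, magnitude_dBFS) pair per bin."""
    n = len(samples)
    out = []
    for k in range(n // 2):
        z = sum(s * cmath.exp(-2j * math.pi * k * t / n)
                for t, s in enumerate(samples))
        mag = 2 * abs(z) / n  # a full-scale sine gives mag = 1.0 at its bin
        db = 20 * math.log10(mag) if mag > 0 else float("-inf")
        out.append((k * rate / n, db))
    return out

rate, n = 32000, 256
tone = [math.sin(2 * math.pi * 8000 * t / rate) for t in range(n)]  # 8 kHz test tone
peak_freq, peak_db = max(dbfs_spectrum(tone, rate), key=lambda p: p[1])
print(peak_freq)  # 8000.0 -- the dominant frequency, at ~0 dBFS
```

A real analyzer uses a fast Fourier transform plus windowing and averaging, but the readout is the same idea: the loudest bins are the dominant frequencies, and quiet rumble sits far below 0 dBFS, e.g. under the -40 dBFS mentioned above.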
So I was engaged in watching and listening to (I don’t feel that I can say I was hearing them) frequencies and how they unfolded during the evening. Seeing a progression of movement was very engaging for me. I was also thinking of this experience as a sonic cleansing or a brain massage. Brain research has revealed that when a specific frequency is generated and picked up by the amazing human hearing mechanism, part of the brain physically vibrates at that same frequency. This has been measured and there is a direct vibrational correlation between frequencies and your brain. So just WoW, and congratulations to all who came and experienced some edgy performance art. Your brains are probably better for it!