
Producing octaphonic surround concerts

I’d like to share my recent experiences of performing live with surround sound. The system in question is a diamond-shaped surround field of eight full-range speakers. I will not go into 5.1 or other Dolby-based formats targeting DVD home theatre systems; these are supported by several DAWs but are not suitable tools for preparing a partly playable, full-range octaphonic live setup. Therefore I decided to roll my own, patching away from scratch. Here’s the story.


I had been dreaming about live surround sound for decades but never had a chance to try it out in the real world, as few venues see a point in multiplying their PA system rental cost to put on just one “experimental concert”. The opportunity finally came up when my duo with Erdem Helvacıoğlu was booked for Présences Electronique 2011 in Paris by the French radio station and software developer ina-GRM. GRM wanted us to play our album Sub City 2064, and since we are only two musicians the concert would have to be performed as an interaction between Erdem, me and pre-prepared sounds remixed out of the album. I immediately contacted the GRM sound engineers and learned that a diamond-shaped octaphonic system would be provided on location. The speakers were to be addressed as four stereo pairs fed by eight mono channels with circular numbering: speakers 1/2 on the stage, 3/8 acting as side fills for the part of the audience sitting close to the stage, 4/7 a little further back in the venue, and finally 5/6 behind the audience and a little closer to each other.

Picking a strategy: Taste in music and public presentation

Then began the process of deciding which instruments to play live and which parts of the album to prepare for playback or as interactive electronic elements. I had access to all my files from mixing the stereo album, so I didn’t have to worry about anything being technically impossible to implement. Instead I focused on imagining the surround concert just as you plan your playing or composing: by taste in music and public presentation.

Generally I tried to put myself in the audience’s position and come up with ideas for what I would find really exciting to experience at a surround concert. In the academic world of electronic music it is common to present a piece of octaphonic surround music as plain playback of eight recorded channels, but I wanted to stay away from that and put the focus on the two live musicians playing on stage. So the decision of which instruments to play live was fundamental for the rest of the project; we both played many different instruments on the album and that’s not an option on stage, and definitely not when flying in to Paris from Stockholm and Istanbul. The combination of Erdem playing the Guitarviol and me playing the Stick seemed optimal. The Stick can also play electronics over MIDI, and by choosing the smaller Stick Guitar I could make room for also bringing an alto flute for live playing.

Selecting the most exciting parts to be played live on stage

The next task was to identify parts in the music, in the album mix or in specific album effect treatments that would make an interesting experience for the audience if performed with the live instruments. So I made a list of all that and filled it up with some extra things that can only be added in surround; exciting things that won’t work in stereo, like having two or four reverbs surrounding the audience and simulating a larger room by sending more or less of certain parts to these reverbs. Another example is making sounds appear to fly out from the stage over the heads of the audience by using a stereo reverb in speakers 1/2 and a time-delayed stereo reverb in the rear pair 5/6 (plus lots of delicate tweaks in the diffusion and frequency response areas).
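As a rough sketch of that last “fly-over” idea – with made-up delay and level values, and leaving out all the diffusion and EQ tweaks mentioned above – the routing boils down to feeding the front reverb directly and the rear reverb from a delayed copy of the same signal:

```python
# Minimal sketch (assumed numbers, not the actual concert patch) of the
# "fly-over" trick: the dry sound goes to a reverb in the front pair 1/2,
# and a delayed copy feeds a reverb in the rear pair 5/6, so the tail seems
# to travel from the stage over the heads of the audience.
import numpy as np

SR = 44100                 # sample rate (assumed)
REAR_DELAY_S = 0.12        # assumed delay before the rear reverb is fed

def reverb_sends(dry: np.ndarray) -> tuple:
    """Return (front_send, rear_send) feeds for the two stereo reverbs."""
    front_send = dry.copy()                      # straight into the front reverb
    d = int(REAR_DELAY_S * SR)
    rear_send = np.zeros(len(dry) + d)
    rear_send[d:] = dry * 0.8                    # delayed, slightly attenuated copy
    return front_send, rear_send
```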

Utilizing specific surround expression

In the past I have done some surround mixing for recordings finalized on DVD video, and from that I learned that, compared to normal stereo, you can have a lot more frequency-intensive material in a surround mix, since you are not forced to create separation by timbre alone as in the crowded stereo format. Surround opens up a much wider canvas of 360-degree circular directional resolution, and you can combine fat sound layers that would simply not fit into the physical restrictions of stereo sound transmission. Because of this, my work from mixing the album could not be directly applied to the preparatory phase of this surround concert project.

[Image: Circular Tap Delay routing in Logic]

In general I strove to keep the room ambiences as experienced on the album, but for the surround implementation I spread them out into the three physical dimensions rather than trying to fool the listener’s mind into “hearing 3-D sound from two speakers”. I also created some new live effects specifically for the surround field, to play with as a performance. One example is a three-dimensional tap delay using eight delay units, one “in each speaker”, all set to 100% wet and each sending one delay tap on to the next delay unit in line. This way, when sending signal into the delay effect, every played note would bounce one full circle around the audience. On my station I kept an expression pedal assigned to the “freeze loop” function in all eight delays. In Paris we used Logic on my laptop for all this, and the delay was Logic’s Tape Delay plugin. I set the eight Tape Delay instances to quite heavy tape flutter to cause a slight pitch discrepancy in each delay bounce and a degradation of the signal as it travelled around the full circle.
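Here is a minimal sketch of that ring of delays, assuming invented delay time and feed values and skipping the tape flutter and freeze-loop functions: each of the eight 100% wet delay lines feeds one tap into the next one, so a note works its way around the circle of speakers.

```python
# Sketch only, not the Logic patch: eight 100% wet delay lines arranged in a
# ring, one "per speaker". Each delay sends one tap to the next delay in line,
# so a note fed into the effect bounces one full circle around the audience.
import numpy as np

SR = 44100                      # sample rate (assumed)
DELAY_S = 0.375                 # delay time per hop (assumed)
FEED = 0.8                      # gain from one delay into the next (assumed)

def circular_tap_delay(dry: np.ndarray, n_speakers: int = 8) -> np.ndarray:
    """Return an (n_speakers, n_samples) array of wet speaker feeds."""
    d = int(DELAY_S * SR)
    n = len(dry) + d * n_speakers             # leave room for the full round trip
    out = np.zeros((n_speakers, n))
    signal = np.zeros(n)
    signal[:len(dry)] = dry                   # the note played into the effect
    for k in range(n_speakers):
        delayed = np.zeros(n)
        delayed[d:] = signal[:-d]             # this speaker's delay line (100% wet)
        out[k] = delayed
        signal = delayed * FEED               # send one tap on to the next delay
    return out

# Example: a short click travels speaker 1 -> 2 -> ... -> 8, fading as it goes.
click = np.zeros(SR)
click[0] = 1.0
speaker_feeds = circular_tap_delay(click)
```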

Finally hitting the stage in Paris

When we arrived in Paris to soundcheck we found that there was also an inner circle of smaller speakers surrounding the center core of the big surround field, placed like a fence around the live sound engineer’s booth. These small speakers were aimed outwards at the audience, so the audience was actually sitting between two circles of speakers. As the artists had not been informed about this in advance, and because it isn’t traditional “surround comme il faut”, we were asked if we wanted the inner circle turned off, but we decided to keep it on. People in the audience later told us the inner circle of speakers added an exciting dimension to the show, and we also trusted the ina-GRM engineers to make interesting on-the-fly use of anything at hand.

Choosing a software platform – the need for a visual conductor on screen

Another important decision was which platform to use for surround file playback. Since I also play live electronics hosted on a laptop, it would be convenient to use an application that can handle both tasks. After having created the general surround concept and the actual eight mono speaker sound files, I tried it all out in Apple Logic, Ableton Live, Apple MainStage and Plogue Bidule. There was also a second aspect to this: the need for visual cues on stage, “an on-screen graphical conductor”. Some pieces contain key- and scale-breaking chord changes where there is no rhythm, and we wanted to improvise rather freely over these melodic structures with the Guitarviol and Stick/Flute. MainStage would have been the platform best equipped to provide a good “visual screen conductor” function, but unfortunately it could not handle the setup in a stable way (back in 2011). Bidule would tax the CPU too much, since I would have to cable up a lot of “hungry” third-party plugins to realize the setup. Live was not stable enough in general back in 2011, so that left me with Logic. Being the most CPU-efficient DAW, Logic let me implement both my own playable live electronics and the eight surround channels, prepared as four stereo files. But I had to think a little extra about avoiding latency, because Logic is designed for producing recordings and not, like Ableton Live, as a good compromise between sound design accuracy and live performance playability. The solution was to use direct input monitoring in the RME Fireface 400 audio interface for the Stick and flute inputs, and to stay away from any live instrument treatments that produce sharp attack transients that would interfere with the natural attack of the instruments. The same goes for software synth sounds: only slow-attack sounds, leaving room for the RME direct monitoring of “flute air spit” or string tap attacks.

The eight outputs from my RME Fireface 400 were patched into the PA stage box, targeting the eight surround speakers. Erdem, on his side, had brought a suitcase with an Eventide Eclipse, an AxeFx Ultra, a Kaoss Pad and similar gear, cabled up with a borrowed sixteen-channel mixer on a table. From his on-stage mixer, bus groups went into the stage box for the surround speaker channels.

Building an Octaphonic Surround Channel Mixer in Logic

For the duo’s second surround concert, at Borusan Music House in Istanbul, I had done a little more preparation. One piece, which uses elements of guitar-based metal, has a hysterical synth line throbbing around, and I mixed that part to sway rapidly around a full circle. I did this by signal routing in Logic’s mixer using an Environment object called “X/Y Vector”. The X/Y Vector pad routing I created for this was a simple set of crossfades between four stereo channels. On one axis I set up arithmetic rules (in a Transformer object) for morphing between the four stereo channels, and on the other axis I already had Left and Right stereo as the two crossfade poles. The Vector Pad object’s data is cabled through a number of Transformer objects, where the data stream is transformed to control the four send knobs of Aux channel 11. Each of the four send knobs represents a stereo channel matching one pair of surround speakers in the diamond-shaped setup. As you can see, I set Aux 11 to “No Output”, so the send knobs are the only active audio outputs. I used a joystick on my Faderfox LV3 hand mixer to play the surround field movements of the audio passing through this Aux 11 channel strip, recording automation and tweaking it to perfection while preparing the general playback files. As a result the source audio was dynamically distributed over the eight speaker channels, implying a sound source circling around the listener.
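To give an idea of the kind of crossfade arithmetic involved, here is a minimal sketch – not the actual Logic Environment patch. One axis morphs between the four stereo pairs from front to back, the other sets the left/right balance within each pair; the equal-power fades and the exact pair ordering are my assumptions.

```python
# Assumed math only: map an X/Y joystick position to gains for the four
# stereo pairs of the diamond (1/2 stage, 3/8 front sides, 4/7 rear sides,
# 5/6 rear), so moving the joystick sweeps the sound around the audience.
import math

PAIRS = [(1, 2), (3, 8), (4, 7), (5, 6)]    # (left, right) speaker numbers, assumed order

def vector_pad_to_gains(x: float, y: float) -> dict:
    """x in -1..1 = left/right balance, y in 0..1 = front-to-back position.
    Returns a gain per speaker number (a crude stand-in for the four send knobs)."""
    # Equal-power crossfade between adjacent pairs along the front-to-back axis.
    pos = y * (len(PAIRS) - 1)               # 0..3
    i = min(int(pos), len(PAIRS) - 2)
    frac = pos - i
    pair_gain = [0.0] * len(PAIRS)
    pair_gain[i] = math.cos(frac * math.pi / 2)
    pair_gain[i + 1] = math.sin(frac * math.pi / 2)
    # Left/right balance inside each pair.
    lg = math.cos((x + 1) / 2 * math.pi / 2)
    rg = math.sin((x + 1) / 2 * math.pi / 2)
    gains = {}
    for (left, right), g in zip(PAIRS, pair_gain):
        gains[left] = g * lg
        gains[right] = g * rg
    return gains

# Example: y = 0 favours the stage pair 1/2, y = 1 favours the rear pair 5/6.
print(vector_pad_to_gains(0.0, 0.0))
print(vector_pad_to_gains(0.0, 1.0))
```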

An important piece of Logic-specific information here is which MIDI CC numbers are hardwired in Logic to specific channel send knobs. As shown on the image (click the image for a bigger size), the incoming CC#2 is transformed into an outgoing CC#28, which matches the channel strip’s first send knob. The second send knob listens to CC#29, and so on.
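As a small illustration of that remapping – the CC#28 and CC#29 targets come from the text above, while the idea that sends 3 and 4 continue the same pattern is my assumption – a Transformer-style rule could be sketched like this:

```python
# Sketch of the kind of remapping the Transformer objects perform: take an
# incoming controller (here CC#2 from the joystick, as in the article) and
# re-emit it as the CC number hardwired to a given send knob.
# CC#28 = first send knob, CC#29 = second send knob (per the article);
# offsets for further sends are assumed to continue the same pattern.

FIRST_SEND_CC = 28   # channel strip's first send knob

def remap_to_send(cc_in: int, value: int, send_index: int) -> tuple:
    """Return a (cc, value) pair addressing send knob `send_index` (0-based)."""
    if cc_in != 2:                       # only remap the joystick's CC#2
        return (cc_in, value)
    return (FIRST_SEND_CC + send_index, value)

# Example: joystick CC#2 value 90 routed to the second send knob -> CC#29.
print(remap_to_send(2, 90, 1))           # (29, 90)
```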

When we arrived at the venue in Istanbul it turned out the stage was in the center, surrounded by the audience, and I must say it was really great to play and hear the complete surround field just as the audience was hearing it. Paris had only offered flat stage monitors in mono, because the stage was outside the actual surround field. One issue turned up in Istanbul though: the eight surround channels were not all surrounding us directly; only four speakers were, while the other four were placed in a similar circle four meters up in the air, on a round balcony surrounding the stage on the ground floor. Luckily I had kept the reverb channels rather free from dry, untreated material (following the approach of using reverb as an “answer” to indicate space), so at the soundcheck we could redirect the reverb channels to come “from above”. This was not planned, but it turned out to fit very well with the scenario of an instrumental under-water opera suggesting a soundtrack for life in a submarine city, as the room ambience was now experienced “above”, just as you experience the surface of the sea when diving.


[Image: Ableton Live stage screen]

In Istanbul we used Ableton Live on my laptop, but that did not work as well as Logic, due to the lack of a stable “visual graphical conductor” function in Live. Erdem got an external monitor on his side of the table so he could follow the arrangements, but as you might know, Live only shows the audio waveform of the selected track, and as I was goofing around processing things live in Live, this display kept disappearing and reappearing on both my MBP screen and Erdem’s externally added 17″ screen.

Mainstage at North Sea Jazz – the superior Visual Conductor Screen

[Image: Mainstage on-stage graphic conductor screen]

The third concert we did on the Sub City 2064 album material was booked by the North Sea Jazz festival in Rotterdam. This is a very big annual festival with no room for surround performance, but I want to mention it briefly here because by that time, July 2012, Mainstage had been updated and we could benefit from the excellent visual conducting cues it can provide. Doing surround in Mainstage is simply a matter of directing the live processing and the eight surround speaker files, handled by the Playback plugin, to separate outputs – but for this stereo gig I routed them all to one stereo output.

As for the visual conductor aspect, Mainstage is totally configurable, so I could pick the waveform that best shows where the crescendi are coming up, and I was also able to label text objects with chord names and short reminders of how to play. On the Mainstage screen I put two counters and text objects: one that displays the name of the next cue and counts down the beats (eighth notes) to it, and another that displays the name of the current cue. This worked much better than in Ableton Live or Logic. Before that gig I captured screen videos of the Mainstage display and uploaded them to YouTube with viewing permission only for Erdem, so that he would be able to rehearse at his Istanbul studio and prepare his live effects setup. We were not given any rehearsal or soundcheck time in Rotterdam.

I think that was about everything I learned in the process, and the typical stuff I was wondering about myself three years ago, wishing there had been someone to spell it out for me :-)



Addendum – Octaphonic surround preparation tools for your DAW

This article was about creating your own tools as you go, by basic traditional signal addressing. But there are indeed appropriate specialized software tools available. The good guys at ina-GRM in Paris offer a nice option as part of their GRM Tools plugin suite. Delays, Doppler, Reson and Shuffling are the specific GRM Tools plugins supporting this non-standard use of 7.1. For an AU DAW the channels correspond as shown on this image. You need to switch your DAW to 7.1 surround support, and the plugins will then output audio for octaphonics through the DAW’s 7.1 channels. This means that the sub-bass channel [LFE] will become one of the eight full-range speaker channels, so you need to make sure your DAW doesn’t apply any low-pass filtering to that channel by default. Another fairly recent option for Ableton Live users is to seek out Max for Live patches for octaphonic surround processing.
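As a purely hypothetical sketch of the idea – the real channel correspondence is the one shown in the GRM image, not the ordering assumed below – the point is simply that every slot of the 7.1 bus, including LFE, ends up carrying a full-range speaker feed:

```python
# Hypothetical sketch only: the channel order below is an assumption, not the
# correspondence from the GRM image. It just illustrates that all eight slots
# of a 7.1 bus, including the LFE slot, carry full-range speaker feeds here.
ASSUMED_71_ORDER = ["L", "R", "C", "LFE", "Ls", "Rs", "Lrs", "Rrs"]

def pack_octa_into_71(speaker_feeds: list) -> dict:
    """Map eight full-range speaker feeds (speakers 1..8) onto 7.1 bus slots."""
    assert len(speaker_feeds) == 8, "octaphonics needs exactly eight feeds"
    return dict(zip(ASSUMED_71_ORDER, speaker_feeds))

# Example: label the feeds by speaker number to see where each one lands.
print(pack_octa_into_71([f"speaker {n}" for n in range(1, 9)]))
# Remember: no default low-pass filtering on whatever lands in the LFE slot.
```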

Here’s a link to read or download a printable PDF of this article!

Lovely Harp Guitar!

Just a quick video upload testing out my new Tim Donahue signature Electric Fretless Harp Guitar. I think it plays like a dream… in fact I have been dreaming for decades about certain aspects of what this instrument has to offer. Tim designed it, has been playing this and the fretted version since the eighties, and only recently started manufacturing his harp guitars. You’ll find more on that at www.timdonahue.com

I love my Stick!

After having my new instrument, the Chapman Stick, for five months I finally decided to shoot a video of it. What makes the Stick so fun to play is that you can use both hands more or less as “two musicians that jam together”. The playing experience is very open and creative, quite different from most ordinary instruments, which force you to train multiple body parts until they become one unified performance machine. Stick playing rather puts your brain into multitasking mode and calls for a split-vision attitude.

Powerful live sound design options

Another thing I like about the Stick is the powerful live sound design options you get from having two fretboards going out through separate outputs – meaning you can treat them with two different effect chains. I plug those two outputs into a laptop running Mainstage.

The Chapman Stick totally rocks!!!

[Image: playing the Chapman Stick]

I’m learning a new musical instrument here, the Chapman Stick. It’s so much fun because on the Stick you can play bass, comping chords and melody lines all at the same time. The instrument has twelve strings divided into two groups of six, and each group has its own set of electromagnetic pickups and its own output.

The Stick was invented by musician Emmett Chapman in the late sixties as his own “custom instrument”. However, many folks who heard him play also wanted Sticks, so Emmett started manufacturing them in 1974. I feel honored to have an instrument actually built by the inventor. Thank you, Emmett!

Here’s where you can read more about The Chapman Stick.

Epilogue: Below is a quick video I recorded as a freshman on the Stick. I will soon upload something more exciting, as I’m slowly rewiring my brain to improve its skills as the conductor of the “two independent hands” orchestra.

“First Meeting” trio concert downloads

[Image: trio concert, March 2009]
One night in March 2009 I met up with these two guys to play a completely improvised concert together. They had been playing together before, but not with me, so I was excited not knowing what to expect on stage. It turned out to be great fun, and someone was even recording it.

We’d like to share these seven pieces that were born on stage that night:
[Audio player: Improvisation 1–7 – Kristofer Johansson / Niclas Höglind / Per Boysen]

Kristofer Johansson: cajon, snare drum and other percussive objects.
Niclas Höglind: 8-stringed guitar, Apple Mainstage laptop.
Per Boysen: fretless guitar, alto flute, EWI, Apple Mainstage laptop.

An interesting aspect of this group improvisation is that we were instantly recording ourselves and arranging the live loops to go with the band’s playing. This was done with MIDI foot pedals and the live looping software Mobius.

Niclas and Kristofer are also active with Unit.


How to use chord progression in live looping

When you loop live it can be quite a challenge to make use of such a basic musical component as a simple chord progression. This may have to do with the sad fact that some looping devices can only play one loop, and that loop cannot be re-pitched either. Not much to do about that, I’m afraid. The two techniques I’m about to describe rely on using many loops and on modulating the pitch of one loop. As an example I have uploaded this video, where I play a song with a melody theme that stretches over a progression of five chords. I create these five chords at the beginning of the piece, as separate loops, and then I simply swap loops as the melody passes through the chord progression. In the middle section, the breakdown, I change key from minor to major by pitching the dominant chord of the minor key up five half steps. This makes that major chord land on the tonic pitch – and so we’ve moved from minor to major in the same key! Notice how the rhythm of the loop changes as its pitch is modulated. This happens because I’m using Rate/Speed Shift rather than Pitch Shift combined with Time Stretch. Since I’m overdubbing two layers of eighth-note arpeggio playing to build “a chord”, this Speed Shift breakdown section also falls into some odd grooves. I think those kinds of “polyrhythm accidents” are great fun, and they are one reason I love varispeed and don’t miss the time-calculated pitch shifting function I had on that old Repeater (looper) back in the day.
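For reference, here is a small sketch of the varispeed arithmetic – my own illustration with an assumed loop length, not anything from Mobius. A Rate/Speed Shift of n semitones changes pitch and playback time by the same factor, which is exactly why the groove shifts in the breakdown.

```python
# Assumed illustration of Rate/Speed Shift: pitch and tempo change together.

def rate_for_semitones(n: int) -> float:
    """Playback rate multiplier for a shift of n semitones (varispeed)."""
    return 2 ** (n / 12)

# Shifting the F# (dominant of B minor) loop up five semitones lands on B, the tonic.
rate = rate_for_semitones(5)            # ~1.335
loop_seconds = 4.0                      # assumed length of the original loop
print(rate, loop_seconds / rate)        # the shifted loop now lasts ~3.0 seconds
```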


Gammal fäbodspsalm (Old Cottage Psalm) from Per Boysen on Vimeo.

The Bare Bones Course
For those of you who want to know exactly what is going on in this live looping performance, here’s a step-by-step walk-through (using the Mobius software looper):

  1. Kicking “Record” EXACTLY on the first downbeat as I play the arpeggio of the first chord, B minor.
  2. Kicking “Overdub” EXACTLY as I play the fifth note in the arpeggio. This causes four things to happen: (1) the arpeggio loop starts playing back the first four notes I just played, (2) my continued playing overdubs a second layer onto the loop, (3) the technical tempo is set by my looper (the standalone Mobius software looper) and a MIDI Clock signal is sent out through the OS X IAC Bus (the internal MIDI pipe system on the Mac), and (4) my preamp and effect rig software, Apple MainStage, receives the MIDI Clock tempo signal and corrects its tempo setting to follow what I’m playing and looping (see the small tempo sketch after this list). If you listen carefully you may hear a filtered delay slap-back, gated to short 16th-note slices, behind the 8th notes I’m playing. This is a typical, useful application of musically synced effects in MainStage. I hope this explains why I don’t like to play live looping with a click track; it’s more fun to start playing as you feel the music coming out through you rather than adapting to a machine. I don’t mind a lot of machines adapting to my own playing though. That’s sort of the point of using instruments – you express yourself through them and not the other way around :-)
  3. Kicking “NextLoop” somewhere before the loop reaches its turnaround point. My looper is set to “SwitchQuantize=Cycle”, which means the first loop I record sets the resolution for when all kinds of “switching” commands will be applied. I like it that way because you can relax and focus on the music; just kick the pedal at any point during the last cycle before you want the switch to happen.
  4. The looper switches from Loop 1 to Loop 2. Now the loop I just recorded stops playing back, and nothing plays back instead, since this is a new and still empty loop. I have set up my looper to behave like this when selecting an empty loop slot: creating a new loop of one cycle’s length and putting it into Overdub Mode. So you see the point: I can seamlessly start to overdub my live playing into a new loop (Loop 2) that has the same length and tempo as the first one I created. In this piece of music one loop cycle equals one musical bar, and that makes it easy to play a different arpeggio for the second chord (F# major) without losing the tempo. This time I don’t have to worry about kicking pedals with precise timing. I play the F# major arpeggio for two bars and make sure I kick the “NextLoop” pedal again during the second bar/cycle.
  5. The looper switches from Loop 2 to Loop 3. I perform the same routine, but now I play other notes: an eighth-note arpeggio matching the chord A major.
  6. The looper switches from Loop 3 to Loop 4. I perform the same routine, but now I play other notes: an eighth-note arpeggio matching the chord G major.
  7. The looper switches from Loop 4 to Loop 5. I perform the same routine, but now I play other notes: an eighth-note arpeggio matching the chord D major.
  8. Kicking “Direct Call Loop 2”. Loop 2 is the F# major chord arpeggio and I want to start the melody line with an upbeat from that chord.
  9. Stepping through the loops while playing the melody. Now, the song doesn’t use the chords in the same order as I created the chord arpeggio loops, so on the MIDI pedal I kick this sequence while playing: “Loop 1, Loop 2, Loop 1, Loop 3, Loop 4, Loop 5, Loop 2”. The melody stays for two bars in each loop, except for Loop 4 which goes on for four bars.
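Here is the small tempo sketch referred to in step 2 – an illustration of the arithmetic only, since Mobius and the IAC Bus handle all of this internally; the beats-per-cycle value is my assumption for this particular song.

```python
# Illustrative sketch only: deriving a tempo from the length of the first
# recorded cycle is what lets MainStage's synced delays follow the playing.
BEATS_PER_CYCLE = 2          # assumption: four 8th notes = two beats per cycle

def tempo_from_cycle(cycle_seconds: float, beats_per_cycle: int = BEATS_PER_CYCLE) -> float:
    """Return the tempo in BPM implied by a loop cycle of `cycle_seconds`."""
    return beats_per_cycle * 60.0 / cycle_seconds

def midi_clock_interval(bpm: float) -> float:
    """MIDI Clock runs at 24 ticks per quarter note; return seconds between ticks."""
    return 60.0 / bpm / 24.0

# Example: a 1.5 second cycle of two beats implies 80 BPM,
# i.e. one MIDI Clock tick roughly every 31 ms.
bpm = tempo_from_cycle(1.5)
print(bpm, midi_clock_interval(bpm))
```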

The mid section, where I change the loop Speed/Rate, uses only Loop 2, the F# major arpeggio. This is a different technique for inducing chord changes in live looping, and I like it better because it is completely open for improvisation. I have a pedal bank set up to speed shift a loop into any of nine optional intervals. If you know the intervals and the key of the source loop, you have all the information needed to improvise melodies while you also improvise chord progressions. I usually compare this to two-handed improvisation on the piano: not very complicated at all, you just have to get used to dividing your consciousness into following and coordinating two simultaneous processes. This is a powerful technique for doing what I call Instant Composition, improvisation that also includes musical structures. I’ll post a video on that later, because I’d love to see more live looping musicians follow me into this exciting new field!

You can learn more about and download Mobius at circularlabs.com


