Category Archives: Creativity

As creators we need to be aware of what makes us creative, and how. The music, the painting or the novel are all different tips of the same iceberg. This category deals with the iceberg.

Behold the “Stick Street” CD album!

The physical “Stick Street” CDs just arrived from the pressing plant. I’m mostly selling them hand to hand at concerts, but you can also order a CD from:
http://perboysen.bandcamp.com/album/stick-street (packages posted from Stockholm, Sweden)
and
http://www.cdbaby.com/cd/perboysen5 (packages posted from Portland, OR, USA).

Digital downloads are also available from those two web shops, as well as from most other digital music download shops on the web.

[Images: Stick Street CD, front and rear covers]

Producing octaphonic surround concerts

I’d like to share my recent experiences of performing live with surround sound. The system dealt with here is a diamond-shaped surround field of eight full-range speakers. I will not go into 5.1 or other Dolby-based formats targeting DVD home theatre systems; those are supported by several DAWs but are not suitable tools for preparing a partly playable, full-range octaphonic live setup. Therefore I decided to roll my own, patching away from scratch. Here’s the story.

Click here for a printer-friendly PDF!

I had been dreaming about live surround sound for decades but never had a chance to try it out in the real world, as few venues see a point in multiplying their PA rental cost to put on just one “experimental concert”. The option finally came up when my duo with Erdem Helvacıoğlu was booked for Présences Electronique 2011 in Paris by the French radio station and software developer ina-GRM. GRM wanted us to play our album Sub City 2064, and since we are only two musicians the concert would have to be performed as an interaction between Erdem, me and pre-prepared sounds remixed out of the album. I immediately contacted the GRM sound engineers and learned that a diamond-shaped octaphonic system would be provided on location. The speakers were to be addressed as four stereo pairs fed by eight mono channels with circular numbering: speakers 1/2 on the stage, 3/8 acting as side fills for the part of the audience sitting close to the stage, 4/7 a little further back in the venue, and finally 5/6 behind the audience and a little closer to each other.

Picking a strategy: Taste in music and public presentation

Then began the process of deciding what instruments to play live and what parts of the album to prepare for playback or as interactive electronic elements. I had access to all my files from mixing the stereo album, so I didn’t have to worry about anything being technically impossible to implement. Instead I focused on imagining the surround concert just as you plan your playing or composing: by taste in music and public presentation.

Generally I tried to put myself in the audience’s position and come up with ideas for what would sound really cool, the kind of thing I would like to experience myself at a surround concert. In the academic world of electronic music it is common to present a piece of octaphonic surround music as plain playback of eight recorded channels, but I wanted to stay away from that and put the focus on the two live musicians playing on stage. So the decision of which instruments to play live was fundamental to the rest of the project; we both played many different instruments on the album, and that is not an option on stage, definitely not when flying in to Paris from Stockholm and Istanbul. The combination of Erdem playing the Guitarviol and me playing the Stick seemed optimal. The Stick can also play electronics over MIDI, and by choosing the smaller Stick Guitar I could make room for bringing an alto flute for live playing as well.

Selecting the most exciting parts to be played live on stage

The next task was to identify parts in the music, in the album mix or in specific album effect treatments that would make an interesting experience for the audience if performed on the live instruments. So I made a list of all that and filled it up with some extra things that can only be added in surround, exciting things that won’t work in stereo. One example is having two or four reverbs surrounding the audience and simulating a larger room by sending more or less of certain parts to these reverbs. Another is making sounds appear to fly out from the stage over the heads of the audience by using a stereo reverb in speakers 1/2 and a time-delayed stereo reverb in the rear pair 5/6 (plus lots of delicate tweaks to diffusion and frequency response).
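To make that fly-over trick concrete, here is a rough sketch in Python/numpy of the routing principle: one reverb send feeds the front pair immediately and the rear pair after a short delay, so the precedence effect makes the tail seem to travel from the stage to behind the listener. The delay time, gain and all names are illustrative guesses, not the settings we used, and the reverbs themselves are left out:

    import numpy as np

    SR = 48000  # sample rate used throughout this sketch

    def delayed(signal, seconds, sr=SR):
        # Return the signal delayed by the given time (zero-padded at the front).
        pad = np.zeros(int(seconds * sr))
        return np.concatenate([pad, signal])

    def fly_over_sends(dry, rear_delay=0.08, rear_gain=0.9):
        # Split one mono reverb send into a front send (speakers 1/2, immediate)
        # and a rear send (speakers 5/6, delayed). Each send would feed its own
        # stereo reverb; the reverbs themselves are left out of this sketch.
        front = dry
        rear = rear_gain * delayed(dry, rear_delay)
        n = max(len(front), len(rear))
        front = np.pad(front, (0, n - len(front)))
        rear = np.pad(rear, (0, n - len(rear)))
        return front, rear

    # Example: a short percussive burst is heard on stage first, then behind,
    # so the reverb tail seems to fly over the audience.
    burst = np.random.randn(SR // 10) * np.linspace(1, 0, SR // 10)
    front_send, rear_send = fly_over_sends(burst)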

Utilizing specific surround expression

In the past I have done some surround mixing for recordings finalized on DVD video, and from that I learned that compared to normal stereo you can use a lot more frequency-intensive material in a surround mix, since you are not forced to define resolution by timbre as you are in the crowded stereo format. Surround opens up a much wider canvas of 360-degree circular directional resolution, and you can combine fat sound layers that would simply not fit within the physical restrictions of stereo transmission. Because of this, my work from mixing the album could not be directly applied to the preparatory phase of this surround concert project.

[Image: Circular Tap Delay routing in Logic]

In general I strove to keep the perceived room ambiences from the album, but for the surround implementation I spread them out into three physical dimensions rather than trying to fool the listener’s mind into “hearing 3-D sound from two speakers”. I also created some new live effects specifically for the surround field, to play with as a performance. One example is a three-dimensional tap delay using eight delay units, one “in each speaker”, all set to 100% wet and each sending one delay tap to the next delay unit in line. This way, when signal is sent into the effect, every played note bounces one full circle around the audience. On my station I kept an expression pedal assigned to the “freeze loop” function in all eight delays. In Paris we used Logic on my laptop for all this, and the delay was Logic’s Tape Delay plugin. I set the eight Tape Delay instances to quite heavy tape flutter to cause a minimal pitch discrepancy in each delay bounce and a gradual degradation of the signal as it jumped around the circle.
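For the technically curious, the circular routing can be sketched offline in a few lines of Python/numpy. This is my own reconstruction of the principle, not the Logic patch itself; delay time, feedback gain and the number of circles are placeholder values:

    import numpy as np

    SR = 48000

    def circular_tap_delay(x, delay_s=0.25, gain=0.8, circles=4, speakers=8, sr=SR):
        # Eight 100% wet delays chained in a ring: each unit's output feeds the
        # next one, so every note hops from speaker to speaker around the
        # audience. Rendered offline here; the tape flutter and freeze-loop
        # pedal of the real setup are omitted.
        d = int(delay_s * sr)
        total = len(x) + d * speakers * circles
        outs = np.zeros((speakers, total))
        for m in range(circles):             # full trips around the circle
            for k in range(speakers):        # which speaker in the ring
                hops = m * speakers + k + 1  # delay units passed so far
                start = hops * d
                outs[k, start:start + len(x)] += (gain ** hops) * x
        return outs  # one row per speaker channel, in circular order 1..8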

Finally hitting the stage in Paris

When we arrived in Paris to soundcheck we found that there was also an inner circle of smaller speakers surrounding the centre core of the big surround field, placed like a fence around the live sound engineer’s booth. These small speakers aimed outwards at the audience, so the audience was actually sitting inside two circles of speakers. As the artists had not been informed about this in advance, and because it isn’t traditional “surround comme il faut”, we were asked if we wanted the inner circle turned off, but we decided to keep it on. People in the audience later told us the inner circle of speakers added an exciting dimension to the show, and we also trusted the engineers at ina-GRM to collaborate with an interesting on-the-fly use of anything at hand.

Choosing a software platform – the need for a Graphical Visual Conductor

Another important decision was which platform to use for surround file playback. Since I also play live electronics hosted on a laptop, it would be convenient to use an application that can handle both tasks. After creating the general surround concept and the actual eight mono speaker sound files, I tried it all out in Apple Logic, Ableton Live, Apple Mainstage and Plogue Bidule. There was also a second aspect to this: the need for visual cues on stage, “an on-screen graphical conductor”. Some pieces contain key- and scale-breaking chord changes with no rhythm, and we wanted to improvise rather freely over these melodic structures with the Guitarviol and Stick/Flute. Mainstage would have been the platform best equipped to provide a good “visual screen conductor”, but unfortunately it could not handle the setup in a stable way (back in 2011). Bidule would have taxed the CPU too much, since I would have had to cable up a lot of “hungry” third-party plugins to realize the setup. Live was not stable enough in general back in 2011, so that left me with Logic. Being the most CPU-efficient DAW, Logic let me implement both my own playable live electronics and the eight surround channels, prepared as four stereo files. But I had to think a little extra about avoiding latency, because Logic is designed for producing recordings, not, like Ableton Live, as a compromise between sound design accuracy and live performance playability. The solution was to use direct input monitoring in the RME Fireface400 audio interface for the Stick and flute inputs, and to stay away from any live instrument treatments that produce sharp attack transients that would interfere with the natural instrument attack. The same goes for software synth sounds: all slow-attack sounds, leaving room for the RME direct monitoring of “flute air spit” or string tap attacks.

The eight outputs from my RME Fireface400 were patched into the PA stage box, targeting the eight surround speakers. Erdem, on his side, had brought a suitcase with an Eventide Eclipse, an AxeFx Ultra, a Kaoss Pad and similar gear, cabled up with a borrowed sixteen-channel mixer on a table. From his on-stage mixer, bus groups went into the stage box for the surround speaker channels.

Building an Octaphonic Surround Channel Mixer in Logic

For the duo’s second surround concert, at Borusan Music House in Istanbul, I had done a little more preparation. One piece uses elements of guitar-based metal, with a hysterical synth line throbbing around, and I had taken that part and mixed it to sway rapidly around a full circle. I did this by signal routing in Logic’s mixer using an environment object called “X/Y Vector”. The X/Y Vector pad routing I created for this was a simple crossfade of four stereo channels. On one axis I set up arithmetic rules (in a Transformer object) for morphing between the four stereo channels, and on the other axis I already had left and right stereo as the two crossfade poles. The Vector Pad object’s data is cabled through a number of Transformer objects where the data stream is transformed to control the four send knobs of Aux channel 11. Each of the four send knobs represents a stereo channel matching one pair of surround speakers in the diamond-shaped setup. As you can see, I set Aux 11 to “no output”, so the send knobs are the only active audio outputs. I used a joystick on my Faderfox LV3 hand mixer to play the surround field movements of the audio passing through this Aux 11 channel strip, recording automation and tweaking it to perfection during the preparation of the playback files. As a result, the source audio was dynamically distributed over the eight speaker channels to imply a sound source circling around the listener.

An important piece of Logic-specific information here is which MIDI CC numbers are hardwired in Logic to specific channel send knobs. As the image shows, the incoming CC#2 is transformed into an outgoing CC#28, which matches the channel strip’s first send knob. The second send knob listens to CC#29, and so on.
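As a sketch of what those Transformer objects effectively compute, here is the same idea in Python: a joystick position is crossfaded bilinearly into four send levels and scaled to CC values. The corner-to-pair mapping, and the assumption that sends 3 and 4 continue at CC#30/31, are mine:

    def xy_to_sends(x, y):
        # Bilinear crossfade of one joystick position (x and y in 0..1) into
        # four send levels, one per stereo speaker pair. The corner-to-pair
        # assignment below is an illustrative choice.
        front = (1 - x) * (1 - y)   # speakers 1/2
        rear = x * (1 - y)          # speakers 5/6
        left = (1 - x) * y          # speakers 3/8
        right = x * y               # speakers 4/7
        return [front, rear, left, right]

    def sends_to_cc(levels, first_cc=28):
        # Scale 0..1 levels to the CCs Logic hardwires to a channel strip's
        # send knobs. CC#28 = send 1 and CC#29 = send 2 per the text; sends 3
        # and 4 continuing at CC#30/31 is an assumption.
        return [(first_cc + i, int(round(v * 127))) for i, v in enumerate(levels)]

    # Example: joystick pushed fully toward the front pair
    for cc, val in sends_to_cc(xy_to_sends(0.0, 0.0)):
        print(f"CC#{cc} -> {val}")

Recording that CC stream as automation gives the circling movement described above.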

When we arrived at the venue in Istanbul it turned out the stage was in the centre, surrounded by the audience, and I must say it was really great to play and hear the complete surround field as the audience was hearing it. Paris had only offered flat stage monitors in mono, because the stage was outside the actual surround field. One issue turned up in Istanbul though: the eight surround channels were not all surrounding us directly; only four speakers were, while the other four were placed in a similar circle four metres up in the air, where a round balcony surrounded the stage on the ground floor. Luckily I had kept the reverbs on their own channels, separate from the untreated parts (following the approach of using reverb as an “answer” to indicate space), so at the soundcheck we could redirect the reverb channels to come “from above”. This was not planned, but it turned out to fit very well with the scenario of an instrumental underwater opera suggesting a soundtrack for life in a submarine city, as the room ambience was now experienced “above”, just as you experience the surface of the sea when diving.


[Image: Ableton Live stage screen]

In Istanbul we used Ableton Live on my laptop, but that was not as good as Logic due to the lack of a stable “visual conductor” function in Live. Erdem got an external monitor on his side of the table to be able to follow the arrangements, but as you may know Live only shows the audio waveform of the selected track, and as I was goofing around processing things live in Live, this display kept disappearing and reappearing on both my MBP screen and Erdem’s externally added 17″ screen.

Mainstage at North Sea Jazz – a superior Visual Conductor Screen

[Image: Mainstage on-stage graphic conductor screen]

The third concert we did on the Sub City 2064 album material was booked by the North Sea Jazz festival in Rotterdam. This is a very big annual festival with no room for surround performance, but I want to mention it briefly here because by that time, July 2012, Mainstage had been updated and we could benefit from the awesome visual conducting leads it can provide. Doing surround in Mainstage is simply a matter of directing the live processing and the eight surround speaker files, handled by the Playback plugin, to separate outputs; for this stereo gig I routed them all to one stereo output.

As for the visual conductor aspect, Mainstage is totally configurable, so I could pick the waveform that best shows where the crescendi are coming up, and I was also able to fill text objects with the chord names and short reminders on how to play. On the Mainstage screen I put two counters and text objects: one that displays the name of the next cue and counts down the beats (eighth notes) to it, and another that displays the name of the current cue. This worked much better than in Ableton Live or Logic. Before that gig I captured screen videos of the Mainstage display and uploaded them to YouTube with permission only for Erdem to watch, so that he would be able to rehearse in his Istanbul studio and prepare his live effects setup. We were not given any rehearsal or soundcheck time in Rotterdam.

I think that covers about everything I learned in the process, the typical stuff I was wondering about myself three years ago and wished someone had spelled out for me :-)



Addendum – Octaphonic surround preparation tools for your DAW

This article was about creating your own tools as you go, by basic traditional signal addressing. But there are indeed appropriate specialized software tools available. The good guys at ina-GRM in Paris offer a nice option as part of their GRM-Tools plugin suite. Delays, Doppler, Reson and Shuffling are the specific GRM-Tools plugins supporting this non-standard use of 7.1. For an AU DAW the channels correspond as in the image. You need to switch your DAW to 7.1 surround support, and the plugins will then output audio for octaphonics through the DAW’s 7.1 channels. This means that the sub bass channel [LFE] becomes one of the eight full-range speaker channels, so you need to make sure your DAW doesn’t apply any default low-pass filtering to that channel. Another fairly recent option for Ableton Live users is to seek out Max for Live patches for octaphonic surround processing.

Here’s a link to read or download a printable PDF of this article!

Lovely Harp Guitar!

Just a quick video upload testing out my new Tim Donahue signature Electric Fretless Harp Guitar. I think it plays like a dream… in fact I have been dreaming for decades about certain aspects of what this instrument has to offer. Tim designed it, has been playing this and the fretted version since the eighties, and has just recently started manufacturing his harp guitars. You’ll find more on that at www.timdonahue.com

I love my Stick!

After having my new instrument, the Chapman Stick, for five months I finally decided to shoot a video of it. What makes the Stick so fun to play is that you can use both hands more or less as “two musicians jamming together”. The playing experience is very open and creative, quite different from most ordinary instruments, which force you to train multiple body parts until they become one unified performance machine. Stick playing rather puts your brain into multitasking mode and calls for a split-vision attitude.

Powerful live sound design options

Another thing I like about the Stick is the powerful live sound design options you get from having two fretboards going out through separate outputs, meaning you can treat them with two different effect chains. I plug those two outputs into a laptop running Mainstage.

CDM covers one of my electronic instrument designs!

Wow, what an honor just to be mentioned by such a great webzine as CDM, Create Digital Music! “Dreams of a Musical Future: Digitópia Winners’ Wondrous Creations”.

As a matter of fact I used my Steppophonic Looperformer on one track of the recently released duo album Sub City 2064 with Erdem Helvacıoğlu. Here’s a link to a track where I use that electronic design to play both the flute pads and the synth sequence simultaneously.

Musical instrument of three dimensional performance

Note how three musical lines are created at the same time: (1) flute melodies, (2) chords layered by livelooping overdubbed long flute notes, and (3) matching arpeggios (an instantly snagged flute sample, live-sequenced and sent through beat-synced filters). All three parts follow my harmonic on-the-fly improvisation.

Sharing my vision under Creative Commons

This version of my Steppophonic Looperformer is like a pilot test. I mocked it up with Plogue Bidule and Expert Sleepers’ Crossfade Loop Synth Effect. Not technically optimal, but musically it worked well enough to be used on this record. I published the functionality design idea under a CC license, so if you are a programmer you are allowed to steal the idea to create a plugin or whatever. Here is the link to my presentation.

Improvisation is not free!

The better you become at “improvising”, the more you realize there is no such thing as “free improvisation”. Since music is a form of communication, the best improvisations are those where the player succeeds in applying gestures that draw on rules known to the listener. Such gestures and rules can concern timbre, direction of movement or plain harmonic theory.

I am especially excited by multilateral improvisation, as I call it when a player improvises many musical parts at the same time, as opposed to simply improvising a melody over a given background. In this performance I use live looping, which means I record phrases I play and then keep changing those recordings while playing an additional part. So there is no “lead” and no “background” part in this improvisation. I do not play melodies and improvise chords to back the melody up, nor do I play chords and improvise melodies that fit in. I invent all parts of the music at once. This is not “free improvisation”, because in order to sound like some sort of music, however weird, everything has to relate to some common ground. The common ground in this particular performance is parallel transposition of minor chords. Using only the tonic plus the first, second, fourth and sixth position transpositions narrows the palette further and creates a musical universe where almost anything can be played and still turn out harmonic.

The looping technique used here is to start out by playing an instrument and recording it as a very long loop. I am careful to initially play only notes that will work harmonically even when transposed (thinking not only about the actual sound but also about what scales any given future transposition of the recorded loop may imply). As my lungs run out of air I close the loop and it starts repeating. Now I use foot pedals to shift the speed/pitch of this long loop into different intervals while I play along. Manipulating the transposition of the recorded loop is one orchestral element, and my live instrument is a second; both are parts of the same improvisation. This is a simple technical praxis of what I call multilateral improvisation. If you transpose a musical part in minor, you get totally different harmonic scale options for your playing compared to transposing a part in major. It can easily become too complex to sound interesting, so the challenge, in my opinion, is to find themes and refine them.
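The speed/pitch relation at work is the old resampling rule: replaying a loop at rate r = 2^(s/12) transposes it by s semitones and shortens it by the same factor. A tiny Python illustration, with generic intervals rather than the exact positions used in the performance:

    def playback_rate(semitones):
        # Speed/pitch ratio for a resampling-style transposition.
        return 2 ** (semitones / 12)

    # A loop replayed at rate 2 ** (5 / 12), about 1.335, sounds a perfect
    # fourth higher and plays about 33% faster: speed and pitch are locked.
    for name, semis in [("unison", 0), ("minor third", 3), ("fourth", 5),
                        ("fifth", 7), ("octave", 12)]:
        print(f"{name:>12}: rate = {playback_rate(semis):.3f}")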

Composers use similar theoretical rules to create scores, but to me in this moment of time it is more fun to work out techniques that allow you to do it all at once in sound!

(edit)
Since publishing this I have received some questions about what software was used in this performance, so here we go: Mainstage by Apple is the “effect rack”, “mixer” and “patchbay”. Inside Mainstage I am running the AU plugin version of the looper Mobius. As soon as the first loop is recorded, Mobius calculates the musical tempo I am playing in and sends out MIDI Clock, which Mainstage adapts its tempo to. This makes tempo-dependent effects follow my playing/live looping. Maybe I should also mention that the video doesn’t show the extensive footwork done to simultaneously play Mobius from a Behringer FCB1010 MIDI pedal board. There are almost as many looping commands happening as there are notes played in this performance.

The audio-sensitive live graphics are simply the iTunes Visualizer.

The Chapman Stick totally rocks!!!

[Image: playing the Chapman Stick]

I’m learning a new musical instrument here, the Chapman Stick. It’s so much fun because on the Stick you can play bass, comping chords and melody lines all at the same time. The instrument has twelve strings divided into two groups of six, and each group has its own set of electromagnetic pickups and its own output.

The Stick was invented by musician Emmett Chapman in the late sixties, for use as his own “custom instrument”. However, many folks who heard him play also wanted Sticks, so Emmett started manufacturing them in ’74. I feel honored to have an instrument actually built by the inventor. Thank you, Emmett!

Here’s where you can read more about The Chapman Stick.

Epilogue: Below is a quick video I recorded as a freshman on the Stick. I will soon upload something more exciting, as I’m slowly rewiring my brain to improve its skills as the conductor of this “two independent hands” orchestra.

Cutting video in Ableton Live

I had no idea it was this easy to cut and edit video in the music production application Ableton Live! The lucky coincidence that made me find out was being hired to teach a group of film people about Live. To prepare the workshop I started playing around with my own crappy cell phone video clips, simply throwing them into Live to see what would happen. What happened was just amazing! I discovered I was able to abuse and mutilate video almost as badly as I usually mangle audio in this application! You may think “so what”, but listen: what we have here is an exciting new hands-on approach for churning out video and music simultaneously! Not “making video for music” or “composing music for film”; that’s just boring, lame and obsolete by now. The new vision is about being a video musician! Play the shit out of it loudly with the right attitude and apply your musician’s first-take approach to video! OK, so for those of you who haven’t gotten around to trying this out for yourself, here’s what you can do.

Drop a QuickTime video clip into Live (Arrange View) to have it create an audio track and open a video display window where you can see the moving picture as the sequencer plays. If you double-click the video clip you get access to clip properties at the bottom of the screen (just as for audio clips).

If you already have a music mock-up cooking and want to adjust the musical tempo to line up musical downbeats with cue points in the movie, set the QT clip to “Warp as Master”. Then create warp markers just as you do with audio clips. Drag a warp marker to align a video cue point with a musical timing point.

Also, if you throw in other QT clips they will create their own new audio tracks. Mix the volume of these tracks according to how much of the sync sound (the original video camera sound) you want to keep (or not). You may also process the video sound with Live’s effects, if you’re so inclined. Anyway, a very cool thing is this: if these extra thrown-in clips occur while the master video clip is already playing, they simply take over the video channel for the clip’s duration. See the point? Instant video cuts with exact musical timing! And you can do this as you are also creating the music, in the same arrangement window with the same visual timing grid. Awesome! You may even use a copy of one video clip (alt-drag and drop it to copy it) and shrink it into a short slice, to blip a 32nd-note video scene into the overall video flow. And of course this cut does not even have to show the same moving picture as the main video at that particular moment in time; you are free to fetch the cut’s content from earlier or later in the video take.

Here’s another scenario: you have a video with a rhythmic passage that you want to use as the tempo base for new music. Then do not set the main video clip to “warp”, but fool around with Live’s global tempo until you find a tempo that matches that particular video sequence (you may create a playback cycle in Live around the rhythmic video part while working out the fitting tempo). You may also want to adjust the video’s starting point in order to set the groove right. Below is a quick mock-up I made in a couple of minutes with this method. I also copied the video and shrank it to loop only the rhythmic part a couple of times, to make room for adding some drums to the video loop + video sound.

This Man Is So Rude! from Per Boysen on Vimeo.

You cannot play video backwards in Live, though. But you can do all this:

  • Continuous speed change of video + original sound.
  • Continuous speed change of original sound while video stays normal.
  • Distribute short video slices as “cuts” to take over the video channel from the main video track.
  • Tune the original audio of a video clip into any melodic interval.
  • Change pitch of original video sound without affecting the sync sound timing.
  • Draw rhythmic “pitch change melodies” for the video clip’s original sound.
  • Render both a video + new sound, and a 24-bit sound file to give the proper audio mastering treatment. Then you can put that sound channel back into the film.

To wrap this up I have to mention that this was all written regarding Ableton Live 7.0.14. By late spring 2009 we will get access to Live 8.0 and the add-on tool Max For Live. Max For Live is developed especially for Live 8 by Cycling74, based on both their legendary MIDI and audio manipulation software Max/MSP and the video manipulation application Jitter. Needless to say, Live 8 will bring video musicians some sharp new axes.


Steppophonic Looperformer – please steal this!

I’m sharing this awesome idea for an awesome electronic instrument! I’ve been longing for a Steppophonic Looperformer for almost a decade, and now I’m giving away the idea for free in the hope that someone will develop it as a software plug-in, as a Max For Live mockup for Ableton Live 8, or maybe integrated into Numerology 2.0 Pro. Or whatever… it’s free – grab it! The Creative Commons license applies as stated below.

What is it?
The Steppophonic Looperformer is a real-time sampler driven by a step pattern sequencer (to be implemented in software). While the sample button is pressed down it samples audio from the system audio input and instantly plays it back according to the looping pattern (patterns can form chords, bass lines, whatever, and you may modify or swap patterns as you go). The idea is that a vocalist, or a trumpet/clarinet/sax etc. player, shall play the Steppo to orchestrate multi-timbrally on the fly while also playing the lead.

What makes this new and unique is not “looping” or “sampling”, but that it’s a real-time system, optimally playable in a musical sense. You can run through chord progressions that are composed or improvised on the spot, as in Keith Jarrett’s legendary piano example, and do it in a techno-style sequencing context that still fetches its sounding source material from the acoustic instrument you are playing. So you are totally in control, expressing yourself.

The length of the sample depends on how long the sample button is pressed down. The duration of note playback is controlled by its own parameter, though. When sampled, the audio snippet is kept in RAM, looped and split into as many monophonic voices/instances as there are DOTs set up in the grid (or an absolute number, spanning four octaves, if that is easier to program without having to deal with voice allocation). The vertical grid axis represents pitch (12 semitone pitches), and the placement of each dot on this vertical axis controls the playback rate/speed of the sample (its “pitch”). The horizontal axis represents the loop of the pattern. In the example pattern above we are running a pattern twelve beats long, but the number of steps should be controllable by MIDI (so you can “sweep” the looped pattern’s length continuously while playing, just as you can “sweep” the Steppo’s relation to the global tempo). Normally the grid beats correspond to musical beats, i.e. an eighth note, but that can be changed by a “tempo divisor/multiplier” parameter. A dot may be set to play back at another octave than the displayed grid octave and is then shown in a special color (in order to keep the graphics minimal). Each note can also be given a release value to make it fade out slowly (the release value “playable by MIDI”). Every voice/pitch is monophonic, meaning that with really long samples each new note will take over from a note already sounding at the same pitch.
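Since this is a spec for software that does not exist yet, here is one way the core could be sketched in Python/numpy: DOTs as (step, semitone) pairs, old-school resampling for pitch, and a crude offline render of one pattern pass. Every name and structural choice below is my own assumption, not part of the design above:

    import numpy as np

    SR = 48000

    class Steppo:
        # Minimal sketch of the core: a grid of DOTs drives a resampling
        # player of the most recently snagged sample.

        def __init__(self, steps=12, step_seconds=0.25):
            self.steps = steps                      # horizontal grid length
            self.step_len = int(step_seconds * SR)  # samples per grid beat
            self.dots = []                          # (step, semitones) pairs
            self.sample = None                      # latest snagged snippet

        def snag(self, audio):
            # "Sample button": keep the latest input snippet in RAM.
            self.sample = np.asarray(audio, dtype=float)

        def _voice(self, semitones):
            # Old-school pitching by resampling: higher notes also play faster.
            rate = 2 ** (semitones / 12)
            idx = np.arange(0, len(self.sample), rate)
            return np.interp(idx, np.arange(len(self.sample)), self.sample)

        def render_pattern(self):
            # Render one pass of the looped pattern offline. A real version
            # would run in real time and steal voices per pitch.
            out = np.zeros(self.steps * self.step_len + len(self.sample))
            for step, semis in self.dots:
                v = self._voice(semis)[:self.step_len]  # crude duration control
                start = step * self.step_len
                out[start:start + len(v)] += v
            return out

    # Example: snag a snippet, then place three DOTs forming a minor triad.
    s = Steppo()
    s.snag(np.sin(2 * np.pi * 220 * np.arange(SR // 4) / SR))
    s.dots = [(0, 0), (4, 3), (8, 7)]  # root, minor third, fifth
    loop = s.render_pattern()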

How to use it?
Run it on a laptop while singing, or playing, into the audio input. Kick a foot button to snag a note now and then to feed the pattern. Snagging a different note results in a parallel pitch transformation of the pattern. Another way to move the music along is to change the pattern while keeping the same sample. It is a creative way for singers and monophonic instrumentalists to improvise chord patterns and melodies simultaneously.

An interesting aspect is that note pitches are generated as in old-school samplers, by rate/speed shifting. This means that if you record a longer snippet in which you play a rhythm, that rhythm will sound faster at higher-pitched notes. But you can also record a short snippet and keep the duration set to longer notes, which results in the sound of a sequenced old-school sampler (looped short samples playing long notes).

One Bank holds twelve Patterns. These twelve patterns correspond to the twelve notes of the octave. Users should be able to set up the patterns to support any major, minor or personally weird key/scale. That way “chords”, “keys” or “song parts” can be assigned to separate foot pedals. What I think is cool about this is that it opens up very free multi-harmony improvisation. However, the actual sounding tonal center depends on what sounding pitch is fed into the Steppophonic Looperformer.

Bottom line: while playing a lead instrument and/or singing, you can use a simple MIDI foot pedal board to direct the Steppo through any sort of chord progression, even one improvised on the spot. And it really is an instrument, because you can change the sound of the whole shebang in a blink by simply exchanging the sample for a note with a different tonal character. And of course you snag the sample seamlessly from your lead singing/playing, making it an instant performance process.

Here are a couple of loose ideas that would be cool to have:

  • Duration value (long/short note with abrupt ending)
  • Release value (fade out ending)
  • Release Pitch Fall (with parameters “amount” and “speed” of fall, assignable to DOTs or Grid Positions)
  • Release Pitch Rise (with parameters “amount” and “speed” of rise, assignable to DOTs or Grid Positions)
  • Scrolling “Tempo Divide/Multiply” in musical values (i.e. half note, dotted, triplet)
  • “Tempo Divide/Multiply” value optional to follow Hard Sync (see below)
  • Hard Sync parameter (how many steps until the downbeat will be forced to happen on the global tempo bar cusp. Great for landing on your feet when returning from “granular rhythm chaos bursts”)
  • The DOTs can be put in with the mouse but should also be programmable by live MIDI (as when you work with an MPC). If you play a MIDI note into the Steppo while in “Adapt Mode”, a DOT will be placed at the grid cusp closest to where you played that note. If you keep the Steppo in “Adapt Mode” and play the same note at the same point in the pattern, the DOT will be deleted from the grid pattern. This way a performer can catch new audio of a different pitch into the same pattern, swap to another pattern, or PLAY a new pattern as when recording drums with MIDI pads.

I’m not a programmer, just a musician who would love to have this instrument (I’d rather spend a day playing music than programming code). So I’m giving away this idea for free to whoever wants to give it a shot. To tell the truth, I have been longing for the Steppophonic Looperformer for almost a decade and have even suggested to some software companies that they pick it up, without any luck so far. But I hope the odds are optimal now, because great programming tools are in the hands of talented individuals and product developers, while the commercial players are starting to support grassroots community collaborations (much thanks to the computer gaming industry, I would guess).

Existing products, that I know of, that sort of live in the same building as this idea are GURU by FXpansion, Numerology and the Granulaterre plug-in of Logelloop. But they all miss out on some points (…so far) that I think are important.

Here’s a listening example. The Steppo is what makes it possible for me to play both the chords and the bubbling bass line within the same flute performance.


9 steps to become a better musician

With these simple suggestions I want to share an attitude that can make you progress faster. Please note that this method also works well for non-musicians. Even business leaders and pizza dudes/dudettes will improve!

  1. Play together with musicians that are more experienced and better than yourself.
  2. Accept difficult tasks and fulfill them! Concerts, recordings, compositions…
  3. Never believe that you have no inspiration! Inspiration is always there inside you. Just shut up and go find it!
  4. Alternate between playing different instruments.
  5. When you play with others, do not play what they play. Find the “holes” where you can fit something else in.
  6. Apply a “questions and answers” attitude to your playing and composing. For example, when playing funky, do not play every note but still THINK them all. Just leave some out sonically, on your physical instrument, while keeping the groove going within.
  7. When playing, do not focus your listening on your own instrument but on everything around it. Ice hockey coaches call this “split vision”.
  8. When playing, do not concentrate only on the present moment. Try listening to the music that happens ten seconds into the future!
  9. Do not play your instrument – play your music!

These nine tricks have worked well for me, and that’s why I’m sharing them here. What are your experiences?
