Friday 29 July 2011

Updates

I'm going to start rebuilding the gloves in the next week or so, but at the moment I'm working on finishing up an EP and some tracks for compilation albums, so please excuse the lack of updates. There is lots more to come when I'm finished with these projects!

Thursday 14 July 2011

Gestural Controller Software Design and Conclusion

First, another proof-of-concept video for my supporting work for my Masters:


Construction 4 Edit from TheAudientVoid on Vimeo.
Another bit of proof of concept video from my supporting documentation for my Masters.
The drums at the start were fed into the Markov chain drum recorder earlier. Basically this patch takes what you put into it, builds a Markov grid and spits out permutations of its input according to whatever method you use to retrieve the data. In this case the recorded MIDI sequence is played back to create note on/offs, each of which sends a bang to the pitch generation patch; the results are quantised to 16th notes and output.

You can see pretty clearly at the start how the gestural control works with the gloves, as the hand position is used to control a filter over a drum section.

Around the 3 minute mark I start playing some percussion live instead of using the Markov chain recorded section.




And now the final sections of my dissertation. Please look at the annotated patch pictures that accompany the text, as they are meant to be seen in conjunction with this section. There are in fact many more annotated patches in the actual Max for Live device, but I will post about those another day in a more detailed breakdown of the software design.

 

Software Design

Once the hardware has been designed and created, the software must be made to produce useful values for the control of Ableton Live. As Max for Live offers an almost infinite range of possible functions, it is important to decide what you wish to do with the software before you start building it. “By itself, a computer is a tabula rasa, full of potential, but without specific inherent orientation. It is with such a machine that we seek to create instruments with which we can establish a profound musical rapport.” (Tanaka 2)


It is important that we create a system whereby the software plays to the strengths of the performer and the hardware design; these elements must work in tandem to create an innovative and usable ‘playing’ experience. Firstly, the Arduino data has to be understood by Max/MSP. I chose Firmata because it uses the robust OSC protocol to transmit data to Max/MSP and comes with pre-made Max/MSP objects for receiving that data; this code proved to be very stable and fast at passing messages. Once it was uploaded to the board and the XBees were configured correctly, it became simple to receive values that can be made usable in your software. As we are using a range of analogue sensors, it is important to include a calibration stage in the software so that minimum and maximum values can be set, inputs can be smoothed, and each input can then be assigned to a function. To this end I used the “Sensor-Tamer” Max patch as the basis for a calibration system for all the inputs. These are then scaled and sent to a Max patch which allows us to choose an effect from the current Ableton Live set.
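For readers who prefer code to patch diagrams, here is a minimal sketch of the calibration idea in plain C++ rather than Max. The running min/max approach, the smoothing factor and the 0-127 output range are assumptions for illustration; in the actual device this job is done by the Sensor-Tamer-based patch described above.

#include <algorithm>
#include <cstdio>

// Illustrative only: auto-calibrate a raw analogue reading, smooth it,
// and scale it to a 0-127 controller range (as a MIDI CC would expect).
struct CalibratedInput {
    float minSeen = 1023.0f;   // assume a 10-bit ADC range
    float maxSeen = 0.0f;
    float smoothed = 0.0f;
    float smoothing = 0.2f;    // assumed smoothing factor

    int process(int raw) {
        minSeen = std::min(minSeen, (float)raw);
        maxSeen = std::max(maxSeen, (float)raw);
        // simple exponential smoothing to tame sensor jitter
        smoothed += smoothing * (raw - smoothed);
        float range = std::max(1.0f, maxSeen - minSeen);
        float norm = std::clamp((smoothed - minSeen) / range, 0.0f, 1.0f);
        return (int)(norm * 127.0f);
    }
};

int main() {
    CalibratedInput bendSensor;
    int fakeReadings[] = {300, 320, 500, 700, 650, 400};  // made-up sensor values
    for (int r : fakeReadings)
        std::printf("raw %d -> scaled %d\n", r, bendSensor.process(r));
}

The point is simply that each sensor learns its own usable range as it is moved, so the same patch works for different hands and different sensor builds without manual retuning.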

Left Hand Maxuino Input and other modules, annotated

Right Hand Maxuino Input and other modules, annotated

The analogue inputs can be made to produce MIDI messages as well as directly controlling effect parameters from Live's menus. The advantage of this is that you can then operate in two distinct modes: one for controlling FX parameters and one for passing MIDI messages to synths. Because Ableton Live merges MIDI from all input channels onto one output channel, you have to use the internal routing (send and receive objects) of Max/MSP to send MIDI to a number of tracks. Obviously, as this is a control system for live performance, you need a way to control more than one synth/plugin, and you want to be able to control various parameters of each synth. Creating small plugin objects for the channels you wish to control makes this easy: they simply pipe the MIDI from the input channel to the selected Max receive object, and because of this it is possible to assign the same physical controller to a different MIDI assignment on every channel. This again comes back to the watchword of customisability and allows the user to create dynamic performances where many elements can be changed without touching the computer. It also works neatly around the problem of only being able to send information to the active channel in a sequencer, as your MIDI is routed ‘behind the scenes’ and effectively selects the channel you wish to use without any physical selection of the channel (i.e. no mouse click is required).
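A rough sketch of that routing idea, written as ordinary C++ rather than Max send/receive objects. The track names and per-channel assignments here are invented for illustration; the point is only that the same physical controller can resolve to a different destination on every track, selected without touching the computer.

#include <cstdio>
#include <map>
#include <string>
#include <utility>

// Illustrative routing table: (track, physical controller) -> destination parameter.
// In the real device this is done by small Max for Live plugins using
// send/receive objects, one per track.
int main() {
    std::map<std::pair<std::string, std::string>, std::string> routing = {
        {{"Drums", "rightHandTilt"}, "filter cutoff"},   // hypothetical assignments
        {{"Bass",  "rightHandTilt"}, "sample length"},
        {{"Lead",  "rightHandTilt"}, "delay feedback"},
    };

    std::string activeTrack = "Bass";   // chosen with the footpedal, not the mouse
    std::string controller  = "rightHandTilt";
    int value = 97;                     // scaled 0-127 value from the glove

    auto it = routing.find({activeTrack, controller});
    if (it != routing.end())
        std::printf("send %d to '%s' on track '%s'\n",
                    value, it->second.c_str(), activeTrack.c_str());
}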
The footpedal, which currently implements the record and step-through-patch features
As the system is to be used to perform live, there are a number of utility functions which also need to be created, such as freezing and recording loops and stepping through channels, octaves and patches. These are best implemented away from the gloves themselves, as the gloves are most intuitive to play when using both hands (8 notes over two hands). Since you can only have a fixed number of switches that are easily playable, it makes sense to assign these to notes (with sharps achieved through a foot pedal). Having playing switches on both hands also means that you can create polyphony by pressing down finger switches on both hands simultaneously. There is also the practical consideration that you do not want to have to stop playing a pattern to push a button to record a loop or freeze something; by moving these functions to your feet you can continue playing whilst accessing control functions. For ease of use, the recording and freezing functions are assigned to all looping plugins from a single switch. As you are only sending MIDI data to one channel at a time, there is no chance of creating a ‘false positive’ and recording unwanted sounds on the wrong channel, and having one switch to operate freeze or record greatly simplifies control for the end user.

I also decided to use a phone mounted on my arm running touchOSC to control some functions of Ableton Live, as it is useful in some cases to have visual feedback, and again this frees the gloves up for musical functions. Some of these functions echo the footswitch controls, allowing the performer to move away from the laptop and into the audience, and as touchOSC has two-way MIDI control it updates the status of a switch or setting to correspond with the footswitch being pressed, so there are no crossed signals. With touchOSC it is easy to design your own interface and to assign buttons to Ableton Live functions. As it essentially operates as a MIDI controller, it is only necessary to put the software into MIDI learn mode, click the function you wish to assign and touch the button on the phone. This again allows a high level of customisability for the end user, and interfaces can be made and set up according to the type of performance you wish to create. It is, for example, particularly suited to triggering sounds or prerecorded loops, as many buttons are required for this (one button per clip) and this would not be sensibly achievable using the gloves. Although I am currently using a predesigned interface due to hardware constraints, it is my aim to implement a touchOSC system that, as well as providing controls for loops and other parameters, provides a full set of feedback from the gloves and foot pedal, so that it is possible to see which instrument, bank and so forth you have chosen in the software. This will become vital to the project's aim of being able to move completely away from the computer when performing.













At the time of writing I did not have an Apple device to create a custom layout, so this HUD was used to show data from Max on the laptop.



Algorithmic Variation

“Each artwork becomes a sort of behavioral Tarot pack, presenting coordinates which can be endlessly reshuffled by the spectator, always to produce meaning”(Ascott 1966 3)

The Markov Chain Recorder/Player, Annotated


I decided that I wanted to be able to manipulate MIDI data within my performance to produce a number of variations on the input. These variations had to sound human and make intelligent choices from the data presented. To this end I have used Markov chains to analyse MIDI data and create a system in which a circular causal relationship between the user and the patch develops. The patch takes MIDI input and then creates a probability table of which note will be played next; after each note is generated it is fed back into the system and used to look up the next note from the probability grid. This means that whatever MIDI data is fed to the patch will be transformed in a way that preserves the most important intervals and melodic structures of your original data but allows for permutation. This in turn means that the performer must react to what the patch outputs, and there is the possibility of inputting more data to change the Markov chain being used and thus alter the performance further. In essence I wished to create a system of patches that functions very much like an improvising live band: a certain set of melodic parameters is agreed upon, by MIDI input, and then used as a basis for improvisation. The data from these Markov chains can be output in two ways: either the computer can be set to automate the output itself, or you can use the gloves to push data from the Markov chain into a synth. Both methods yield different but equally valid musical results and allow the performer to create very different kinds of output. The idea of using Markov chains to create predictable but mutating data has much in common with cybernetics and conversation theory, where the interaction of two agents, and the interpretation of that interaction, leads to the creation of a third which in turn influences the original agents. If we consider the original MIDI data in the patch to be the first agent and the person using the controller to be the second, the interpretation of data from the computer influences the playing of the person using the controller, and in turn this can be fed back into the computer to create another set of data which is again interpreted, permuted and responded to by the performer. This application of disturbing influences to the state of a variable in the environment can be related to Perceptual Control Theory.
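Before turning to the control-theory framing, here is a minimal first-order Markov note generator sketched in C++ to make the mechanism concrete. It is not the Max for Live patch itself (which also handles quantisation to 16th notes and MIDI timing), and the note numbers and seeding phrase are invented for illustration.

#include <cstdio>
#include <map>
#include <random>
#include <vector>

// Illustrative first-order Markov chain over MIDI note numbers:
// count observed transitions, then sample the next note in proportion
// to how often it followed the current note in the recorded input.
struct MarkovNotes {
    std::map<int, std::map<int, int>> counts;  // counts[current][next] = occurrences
    std::mt19937 rng{std::random_device{}()};

    void record(const std::vector<int>& notes) {
        for (size_t i = 0; i + 1 < notes.size(); ++i)
            counts[notes[i]][notes[i + 1]]++;
    }

    int next(int current) {
        auto it = counts.find(current);
        if (it == counts.end() || it->second.empty()) return current;  // nothing learned yet
        int total = 0;
        for (auto& kv : it->second) total += kv.second;
        std::uniform_int_distribution<int> pick(1, total);
        int r = pick(rng);
        for (auto& kv : it->second) {
            r -= kv.second;
            if (r <= 0) return kv.first;
        }
        return it->second.begin()->first;
    }
};

int main() {
    MarkovNotes m;
    m.record({60, 62, 64, 62, 60, 67, 64, 62, 60});  // a made-up input phrase
    int note = 60;
    for (int i = 0; i < 8; ++i) {                    // each generated note is fed back in
        note = m.next(note);
        std::printf("%d ", note);
    }
    std::printf("\n");
}

Because every generated note is fed back in as the lookup key, the output wanders through the same intervals as the recorded phrase without ever simply replaying it.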
“Perceptual control theory currently proposes a hierarchy of 11 levels of perceptions controlled by systems in the human mind and neural architecture. These are: intensity, sensation, configuration, transition, event, relationship, category, sequence, program, principle, and system concept. Diverse perceptual signals at a lower level (e.g. visual perceptions of intensities) are combined in an input function to construct a single perception at the higher level (e.g. visual perception of a color sensation). The perceptions that are constructed and controlled at the lower levels are passed along as the perceptual inputs at the higher levels. The higher levels in turn control by telling the lower levels what to perceive: that is, they adjust the reference levels (goals) of the lower levels.” (Powers 1995)
Despite this being a description of control systems in the human mind, it is easy to see how it also applies to computer control systems: the higher-level systems that are accessible to the user tell the software what to perceive. This is done in two ways. The first is the input of MIDI data: this input allows the software to create a lower-level abstraction, the table of probabilities, which is then called upon to trigger notes.
The changes must be subtle and controlled enough that the performer is reacting and responding to them rather than fighting the computer to maintain control of the system. The process used to determine the probability of notes is a closed system to the performer (all one needs to do is feed in a MIDI file); after this the performer has access to an open system which can be used to alter key characteristics of the process, and they can also play along with this process through a separate control system linked to an instrument, hence the feel of improvising along with a band is created. In Behaviourist Art and the Cybernetic Vision Roy Ascott states: “We can say that in the past the artist played to win, and so set the conditions that he always dominated the play” (Ascott 1966 2), but that the introduction of cybernetic theory has allowed us to move towards a model whereby “we are moving towards a situation in which the game is never won but remains perpetually in a state of play” (Ascott 1966 2). Although Ascott is concerned with the artist and audience interaction, we can easily apply this to the artist/computer/audience interaction, whereby the artist has a chance to respond to the computer and the audience and to use this response to shape future outcomes from the computer, thus creating an ever-changing cyclical system that, rather than being dependent on the “total involvement of the spectator”, is dependent on the total involvement of the performer.

Improvements

Having worked on developing this system for two years, there are still improvements to be made. Although the idea of using conductive thread would have been very good from a design and comfort point of view, as it allowed components to be mounted on the glove without bulky additional wiring, the technology proved too unstable to withstand normal usage, and something built for live performance needs to be robust. With this design something could be working in one session and not the next, and obviously if a mission-critical thread came unravelled it had the potential to take power from the whole system rather than just causing a single element to stop working. Also, the thread being essentially an uninsulated wire, if it was not stitched carefully there was the possibility of short circuits when the gloves were bent in a particular way. In addition, the switches, even when used with resistors (also made of thread), produced a voltage drop in the circuit that changed the values of the analogue sensors. Obviously this change in values will change what happens to any parameter that a sensor controls, and can therefore produce very undesirable effects in the music you are making.
Although the accelerometers produce usable results for creating gestural presets and manipulating parameters, the method used to work out the position of the hands could be further improved by adding gyroscopes. An accelerometer only gives a reliable sense of orientation when the hand is relatively still, because once the hand is moving the acceleration of the movement is mixed in with gravity; a gyroscope, by contrast, measures rotation directly and keeps working while the hand is in motion. With a gyroscope we would be able to introduce an additional value into our gestural system, the amount of rotation from the starting position, and this would allow us to use much more complicated gestures to control parameters within Ableton.
The current ‘on the glove’ mounting of the components works, but in my opinion it is not robust enough to withstand repeated usage, so it will be important to build the gloves again using a more modular design. Currently the weak point is the stress placed on soldered connections when the gloves twist or bend, and even though using longer-than-necessary wiring helps to alleviate this, it does not totally solve the problem; it is therefore necessary to create a more modular design which keeps all soldered components contained and does not subject them to any stress. The best way to achieve this would be to mount the XBee, Arduino and power within a wearable box housing and keep all soldered connections inside it as well. To make sure there is no cable stress it is possible to mount screw-down cable connectors in the box for two-wire sensors and three-pin ribbon cable connectors for analogue sensors; this way no stress is put on the internal circuitry and the cabling is easily replaceable, as none of it is hard soldered. These cables would run between the box and a small circuit board mounted on the glove near the sensor, where the other end would plug in. This also increases the durability of the project, as it can be disassembled before transport and so does not risk any cables getting caught or pulled, and it makes every component easily replaceable, without soldering, in the event of a failure.
I would like to introduce a live ‘gesture recording’ system to the software so that it is possible to record a gesture during a live performance and assign it to a specific control; this would allow the user to define controls on the fly in response to whatever movements are appropriate at the time. However, this will take considerable work to design and implement effectively, as value changes must be recorded and assigned in a way that does not break the flow of the performance. Although it is relatively simple to record a gesture from the gloves by measuring a change in the values of certain sensors, assigning these to a parameter introduces the need to use dropdown boxes within the software to choose a channel, effect and parameter, and how to achieve this away from the computer is not immediately apparent. It may be possible to do this using touchOSC when an editor becomes available for the Android version of the software, but as yet this is not possible.
Further to this, the touchOSC element of the controller must be improved with a custom interface which collects suitable controls on the same interface page and receives additional feedback from Ableton, such as lists of the parameters controlled by each sensor, each sensor's current value, and the names of clips which can be triggered. Using the Livecontrol API it should be possible to pass this information to a touch-screen device, but again, without an editor being available for the Android version of touchOSC this is not yet possible. I have investigated other Android-based OSC software such as OSCdroid and Kontrolleur, but as yet these also do not allow for custom interfaces. OSCdroid, however, looks promising, and having been in touch with the developer, the next software revision will include a complex interface design tool that should allow these features to be implemented. I will be working with the developer to see if suitable Ableton control and feedback can be achieved once this has been released.

Conclusion

In essence, the ideas and implementations I have discussed mean that we can create an entire musical world for ourselves, informed by both practical considerations and theoretical analysis of the environment in which we wish to perform. We can use technology to collect complex sets of data and map them to any software function we feel is appropriate; we can use generative computer processes to add a controlled level of deviation and permutation to our input data; and we can use algorithms to create a situation whereby we must improvise and react to decisions made by the computer during the performance of a piece. We can have total control of a musical structure and at the same time allow a situation whereby we must respond to changes made without our explicit instruction. It is my hope that through this it is possible to create a huge number of different musical outcomes even when using similar musical data as input. The toolset that I have created hopefully allows performers to shape their work to the demands of the immediate situation and the audience they are playing to, and opens up live computer composition in a way that allows for ‘happy mistakes’ and moments of inspiration.
As previously stated, it is my hope that these new technologies can be used to start breaking down the performer/audience divide. It is possible to realise performances where the performer and audience enter into a true feedback loop and both influence the outcome of the work. In the future there is the potential to use camera sensing and other technologies (when they are more fully matured and suitable for use in ‘less than ideal’ situations) to capture data from the crowd as well as from the performer. The performer can remain in control of the overall structure but could conduct the audience in a truly interactive performance. This technology potentially allows us to reach much further from the stage than traditional instruments do, and to create immersive experiences for both performer and audience. It is this idea, and this level of connection and interactivity, that should move electronic musicians away from traditional instrument or hardware-modelling controllers and towards more exciting ways of using technology.

“All in all, it feels like being directly interfaced with sound. An appendage that is simply a voice that speaks a language you didn't know that you knew” Onyx Ashanti

Updates and more dissertation

Apologies for not updating this for a while; I've been moving house, to Berlin! Also I've been finishing an EP to be released on Planet Terror.

So without further ado, the next section of my dissertation...........


Hardware Design

“Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C Clarke

“One way one can attempt their adepthood in magic is to try weaving a spell without using any of the prescribed tools. Just quiet the mind and slip off into a space within your mind that belongs only to you. Cast your will forward into the universe and see if you get the desired results.” (Nicht 2001 47)

When designing my controller I looked at the idea of open handed magic as a source of inspiration. Rather than being related to card tricks and illusion, open handed magic is a form of magic in modern occult systems whereby the practitioner does not use traditional ritual props but uses the focus of their will in the moment to achieve the intended results. The performer must achieve some sense of gnosis and ‘at-one-ness’ for this to succeed, and as we have previously explored, dancing is one route to this state. As explained by Joshua Wetzel:

“Dancing This method could also be termed “exhaustion gnosis.” The magician engages in continuous movement until a trance-like state of gnosis occurs. Dance gnosis is particularly good for visions and divinatory sorts of workings, or at least that is the history of its use. However, it is apparent how it could be used in any type of magical activity. The effort to maintain continuous motion eventually forces the mind to a single point of concentration, the motions themselves become automatic and there is a feeling of disassociation from the mind. It is at this point that the magician performs rituals, fire sigils and various other magical acts. This is also a great form of “open handed magic.” You can do it in a club full of people, with dozens watching, and no one has a clue.” (Wetzel 2006 21)

As discussed earlier, I feel that the dance floor has a strong ritual and tribal element associated with it, and I believe that these ideas can be incorporated into the design and usage of an adaptive controller system. If the ultimate aim of the design is to interact with the audience and the “blurring of once clear demarcations between himself and the crowd, between herself and the rave”, then it is possible to incorporate the ideas of ritual and ritual magick to inform the creation of your controller. Although the idea of creating something ‘magic’ is certainly in one sense that it should ‘wow’ the audience and create something novel and exciting to draw them into the performance, I believe that for the performer/programmer the idea must become more abstracted. If we refer back to the earlier idea of having the space within a performance for moments of inspiration and the room to experiment, take risks and possibly fail, and couple this with the intended purpose of the music we are focusing on, to make people dance, then surely the optimal state for creation is to be in the trance-like state of the dancer. In the previous section I asked the question, “Would they (the performer) not be more fully immersed in their own sonic landscapes if unshackled from the computer screen and became free to roam the space their sound occupies, interacting with the audience and using their whole body to feel their performance in the way the audience does?”, and I believe the answer is to give the performer a system of control that lets them largely forget the mechanism of creation and ‘feel’ what they are making by being in the same state as the dancers themselves. When looking at how to design my controller I have tried to keep this question in mind throughout and use it as a reference when trying to ascertain the best way to incorporate a feature into the hardware and software design. The controller must be simple to use, requiring natural hand gestures, and notes must be easy to trigger and record so that the flow of the performer is not interrupted by the technology. It has taken a great amount of trial and error to reach a stage where this is possible, and indeed the use and design of a controller that allows such interaction with audience and music is, by necessity, in a constant state of flux, where new ideas can always be incorporated and refined to move towards the optimal playing experience. As I have previously stated, this idea of a continually evolving and demand-responsive controller system is the optimum state for these projects, and although temporary goals can be established, the performer/designer should always be looking for ways to improve and advance their work; as such it can never be described as truly ‘finished’.

It is relatively easy to build your own controller system and use it to interact with a computer, and there are a number of advantages in creating your own system over co-opting existing computer interface devices. With a basic knowledge of electronics it is possible to create anything from a simple input device to a whole new instrument. Using an interface such as the Arduino you can simply, and with minimal processor load, send analogue and digital signals to your software, and there are a huge number of sensors on the market that you will not find in a pre-made solution; making your own controller allows a novel approach to the capture of data. The traditional computer controller model of interface relies on pushing buttons to input data, and thus even when using a modern controller such as the Wiimote we are still tied to this idea of physical buttons as the main input device. Other devices such as the Kinect, although allowing gestural input, only work under specific lighting and placement conditions, which makes them largely unsuitable for use in a live performance environment. If we build our own system it is possible to use a vast number of different devices, such as bend and pressure sensors or accelerometers, to receive input. This approach allows us to fully incorporate the idea of gestures to manipulate music, as it does not rely on you tapping a key but rather invites you to use your whole body. As previously stated, with the controller I wished to design I did not want to copy or model traditional instruments but rather to create a unique interface with a distinct playing experience, taking advantage of the many controls available to us to manipulate. To get the most from the custom controller experience we must develop our own language for interacting with computers and the music being made.

In designing a physical controller it is important to think about what you intend to use it for and what controls you need. Do you just need switches, or do you need analogue control values that you can use to, for example, turn a knob or move a fader? Do you want your controller to play like a traditional instrument or to have a totally non-traditional input method? With my project it was important to have a number of analogue controllers as well as digital switches, and some kind of control for moving through the Live interface was also required; this meant that I added a touchOSC component to my project for feedback and control of Ableton's MIDI-mapped features, which allows you to trigger clips and manipulate controls without having to look at the computer. In my project only the hands contain sensors, and the feet perform basic software control functions, which are also replicated on the touch screen device, allowing the performer total freedom of movement. Being free from the computer allows the performer to enter more fully into the flow of the music and to, for example, dance whilst creating. In this respect my controller attempts to remove itself from a more traditional model of playing music, where you would have to think about the placement of an instrument, your hands on the keys and so on. As my project is particularly focused on creating electronic ‘dance’ music, which has little link to traditional instruments, it seems counterproductive to produce something which models itself upon a traditional instrument, as in the setting of a live performance this would look misplaced.

Rather than create a system where the user has to hold a controller, my system is built entirely into a set of gloves, and as such one simply has to move one's hand to effect change in the music. The hardware has gone through a number of revisions to find the setup that best complements my workflow. Initially I used available ready-made sensors to create my gloves, and whilst these made for a relatively simple construction they presented a serious set of problems regarding connections to the gloves, keeping the sensors in place, and not putting stress on the weak points of their construction. Many commercially available sensors are designed to be used in a static setup where, once mounted, they are not moved; however, when making something such as a pair of gloves it must be recognised that there will be a large amount of movement, and that actions as simple as putting on or removing the gloves may produce unwanted stress on connections, which may break or impair the functionality of the system.
Over the development time of my project, technology has become available that allows you to make bend sensors, pressure sensors and switches out of conductive material. This creates a distinct advantage over traditional sensors, as they are more durable, easier to wear and very simple to fix and replace. Conductive thread has, in theory, made it possible to create a controller with less physical wiring: the ‘wires’ can be sewn into the controller, are flexible, do not restrict movement and are more comfortable for the user. I initially remade my project using this technology; however, it also has drawbacks that only become apparent after a period of usage, which meant that it was unsuitable for this project. A prototype version of the gloves was made using conductive thread rather than wiring, and although this initially worked, it was found that stretching and compressing the thread in a vertical direction led to it unravelling. As the thread functions in the same way as a multicore wire, when it is not tightly wound together you get a loss of signal. I initially sought to counter this problem by covering the conductive thread in latex, but as this seeped between the strands of the thread it also led to a loss of signal. Conductive thread technology is certainly useful in some situations, but when used on a pair of gloves the amount of stretching required to get them on and off means that the thread breaks very quickly. However, it is still used in the project to connect the circuit boards to the conductive fabric fingertips of the gloves, and between the circuit boards and the analogue sensors, in places where not a great amount of stress is placed on it.

The analogue sensors are also made from conductive material, and this has the advantage of making the sensors easily replaceable if broken and easy to fine-tune for output values and sensitivity. The bend sensors on the fingers are made using conductive thread, conductive fabric, Velostat and neoprene. By sewing conductive thread into two pieces of material and sandwiching layers of Velostat between them you can easily create a sensor whose sensitivity is simple to adjust, as it is determined by the number of layers of Velostat between the conductive thread; a sensor made this way also has the advantage that it can easily be mounted on the gloves via stitching. These sensors can also be made to look almost any way you desire, in the case of my project simple black circles, and as such they are in keeping with the idea of open handed magic, where the actual method is partially obscured from the audience but easy to use and understand for the performer. The switches in the gloves are also made in a way that removes the need for any wiring, electronics or unwieldy physical switches. Using conductive thread it is possible to create a switch that is closed by applying a voltage across it, and this greatly simplifies the construction of the gloves as only one positive terminal is needed, in this case placed on the thumb; the switches are constructed by wiring a ground and input wire to each finger and are closed by touching finger and thumb together. This natural gesture requires no learning on the part of the user, and we can, for example, use each switch to trigger a drum hit or play a key on a synthesizer, as well as performing more command-based functions if required. I have taken the approach of making the switches on both hands produce MIDI notes (one for each natural note of the octave, with an extra C of the octave above on the last finger of the right hand, and a foot pedal to sharpen/flatten the notes), as this yields the most natural playing experience, but it is possible to program these switches to provide other controls if required.
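A minimal Arduino-style sketch of that note layout follows. The pin numbers are made up, and the notes are sent over plain serial here; in the real gloves the switch states travel to the computer via Firmata and the XBee link, and the note logic lives in the Max for Live device, so treat this purely as an illustration of mapping eight finger switches plus a sharpen pedal to MIDI note numbers.

// Illustrative only: pin assignments are hypothetical, and the real system
// sends raw switch states to Max/MSP rather than building notes on the board.
const int fingerPins[8] = {2, 3, 4, 5, 6, 7, 8, 9};   // four fingers per hand
const int pedalPin = 10;                              // sharpen/flatten pedal
// C D E F G A B plus the C an octave above, as described in the text
const int notes[8] = {60, 62, 64, 65, 67, 69, 71, 72};
bool wasPressed[8] = {false};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 8; ++i) pinMode(fingerPins[i], INPUT_PULLUP);
  pinMode(pedalPin, INPUT_PULLUP);
}

void loop() {
  bool sharpen = (digitalRead(pedalPin) == LOW);          // pedal held raises the note a semitone
  for (int i = 0; i < 8; ++i) {
    bool pressed = (digitalRead(fingerPins[i]) == LOW);   // finger touching thumb closes the switch
    if (pressed && !wasPressed[i]) {
      Serial.print("note on ");
      Serial.println(notes[i] + (sharpen ? 1 : 0));
    }
    wasPressed[i] = pressed;
  }
}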
My controllers use accelerometers in each hand to work out the position of the hands, which allows us to change seamlessly between the parameters being controlled. For example, if your right hand is held at a 45 degree angle the accelerometer can act as a control for a cut-off filter within your music software, but if you tilt the right hand further, to 90 degrees, the functionality of the left hand can change and could instead be used to control the volume of a part or the length of a sample. As we can produce accurate results with these sensors, we are able to build a huge amount of multi-functionality into a very simple control system. The positioning of the hands is very easy for the performer to feel without the need for constant visual reassurance, and this contributes to the ease of use of the system.
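As a sketch of how tilt can be derived from an accelerometer and used to switch what the other hand controls: the axis mapping, thresholds and mode names below are assumptions for illustration, since in the device itself the raw accelerometer values are calibrated and mapped inside the Max for Live patch.

#include <cmath>
#include <cstdio>

// Illustrative: derive a tilt angle from two accelerometer axes and use it to
// decide which parameter the left hand is currently controlling.
const char* leftHandModeForRightTilt(float ax, float az) {
    // angle of the right hand from horizontal, in degrees (axis mapping assumed)
    float tiltDeg = std::atan2(ax, az) * 180.0f / 3.14159265f;
    if (tiltDeg < 45.0f) return "filter cutoff";
    if (tiltDeg < 90.0f) return "part volume";
    return "sample length";
}

int main() {
    // fabricated readings in g: roughly flat, roughly 45 degrees, past vertical
    std::printf("%s\n", leftHandModeForRightTilt(0.1f, 1.0f));
    std::printf("%s\n", leftHandModeForRightTilt(0.8f, 0.8f));
    std::printf("%s\n", leftHandModeForRightTilt(1.0f, -0.05f));
}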
I have also incorporated multi-coloured LEDs into the gloves for visual feedback. By using three-colour (RGB) LEDs we have a huge variety of potential colours with which to indicate function, and we also cut down on the amount of wiring needed and the space used on the glove. There are three of these LEDs mounted on the gloves: two give feedback on the notes played and change colour according to the instrument chosen, and the third is used as a metronome, making it easy to record sections in time with the computer's tempo setting and giving the performer visual feedback on their timing.
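A small sketch of that feedback idea: pick an RGB colour per instrument and flash the metronome LED once per beat at the host tempo. The pin numbers, colour table and the assumption that the tempo is known on this side are all illustrative; in practice the LEDs are driven from the computer over the wireless link.

// Illustrative only: pins, colours and tempo handling are invented for the example.
const int metronomePin = 11;
const int rgbPins[3] = {3, 5, 6};                      // PWM pins for one RGB LED

// hypothetical instrument colour table: {red, green, blue}
const int instrumentColours[3][3] = {
    {255, 0, 0},    // drums
    {0, 255, 0},    // bass
    {0, 0, 255},    // lead
};

void showInstrument(int instrument) {
  for (int i = 0; i < 3; ++i) analogWrite(rgbPins[i], instrumentColours[instrument][i]);
}

void setup() {
  pinMode(metronomePin, OUTPUT);
  for (int i = 0; i < 3; ++i) pinMode(rgbPins[i], OUTPUT);
  showInstrument(1);                                   // e.g. bass currently selected
}

void loop() {
  float bpm = 128.0f;                                  // would really come from the computer
  unsigned long beatMs = (unsigned long)(60000.0f / bpm);
  digitalWrite(metronomePin, HIGH);                    // short flash on the beat
  delay(60);
  digitalWrite(metronomePin, LOW);
  delay(beatMs - 60);
}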
By using XBee radios in conjunction with the Arduino and sensors we are able to unwire ourselves from the computer completely. This of course simplifies the use of the controllers, as it does not matter where the performer is in relation to the computer, and for my project this is vitally important to the core idea of ‘open handed magic’ and audience interaction. The most obvious disadvantage of using wireless communication is the increased complexity of setup. Getting the XBees to talk to one another over a meshed wireless network is not a simple task, and Arduino code that works when the unit is plugged in via USB does not necessarily work when passed over a serial radio connection. For example, the Arduino2Max code, available online, is a simple piece of programming that allows the Arduino to pass results from each of its inputs to Max/MSP. However, this does not work when XBees are introduced, because the data reported by the serial print calls floods the buffers of the XBees, meaning that data arrives only once every ten seconds or so. Obviously, as we are aiming for a system with as low a latency as possible, this is unacceptable and another means of passing the data must be sought. In the case of my project this meant the Firmata system, which can be uploaded to the Arduino and which communicates data to the computer using the OSC protocol. Although the code for this system is much more complex than Arduino2Max, the results it produces are far more accurate and do not introduce any appreciable latency. However, getting this to work in the way I required demands a greater level of coding knowledge for both the Arduino and Max/MSP: messages are passed to and from the serial port as more complicated OSC messages and, for some functions, must be translated into a format that Max understands before they become usable data. Using Series 2 XBees also creates an additional problem in that they are designed for more complex tasks than serial cable replacement; part of their standard behaviour is to continually seek nearby nodes that they can connect and pass information to. Through extensive testing and research I found that if this mode was used, the stream of data from the gloves to the computer (and vice versa) was often delayed by seconds at a time, as the XBees seem to prioritise data integrity over timing. However, it is possible to bypass this by setting the XBees to look only for a specifically addressed endpoint, and this seemed to solve the inconsistent timing issues. There is a distinct advantage to using the Firmata/OSC-based communication: if there is a dropout from the controller, the flow of data resumes when the connection is restored. That is, if the battery runs out and wireless communication is lost, then when a new battery is fitted the wireless communication resumes and the data is again seen in Max/MSP. This does not happen with simpler code, and therefore using this more complex system provides a level of redundancy that allows us to continue performing without needing to reboot the computer or software.
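For illustration, here is one generic way of reducing the serial traffic that swamps the radio buffers: send an analogue reading only when it changes by more than a small amount, and no more often than a fixed rate. This is not the Arduino2Max or Firmata code, and the pin, threshold and rate are assumptions; the project itself solved the problem by moving to Firmata/OSC as described above.

// Illustrative only: send-on-change with a rate limit, to avoid flooding the
// XBee serial link the way an unthrottled print-everything loop does.
const int sensorPin = A0;                // hypothetical analogue input
const int changeThreshold = 4;           // ignore jitter smaller than this
const unsigned long minIntervalMs = 20;  // at most ~50 messages per second

int lastSent = -1;
unsigned long lastSendTime = 0;

void setup() {
  Serial.begin(57600);
}

void loop() {
  int value = analogRead(sensorPin);
  unsigned long now = millis();
  bool changedEnough = (lastSent < 0) || (abs(value - lastSent) >= changeThreshold);
  if (changedEnough && (now - lastSendTime >= minIntervalMs)) {
    Serial.print("a0 ");
    Serial.println(value);
    lastSent = value;
    lastSendTime = now;
  }
}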

When powering an Arduino over USB you do not need an additional power source, as the USB bus can provide what is needed to run your sensors; however, when using wireless you must include an external power source. This must be able to provide the correct voltage for the Arduino, wireless module and sensors, and must have a long enough battery life not to run out mid-performance. This obviously increases the size and weight of the controller, and if you are using conductive thread it is important that the power source is placed in close proximity to the mission-critical elements that draw the most power. This is because conductive thread has a resistance of roughly 10 ohms per foot (i.e. one foot of conductive thread is equivalent to a 10 ohm resistor), so you lose voltage from your source the more thread is placed between it and your components; three feet of thread, for example, adds around 30 ohms in series, enough to drop a noticeable fraction of a volt at the tens of milliamps these components draw. If traditional wiring is used this becomes much less of an issue. Li-Po batteries were chosen for this project due to their high power output and quick recharge time. One must be aware, though, that they must not be discharged below 3 volts, and that if the packaging is damaged the batteries are liable to expand and potentially become unstable, so care must be taken to ensure that they are looked after properly. These batteries clearly offer the most potential for a system like this, as they provide capacities somewhere in the range of 1000-3000 mAh, which is more than enough to power the LilyPad, XBee, sensors and lights for a long duration. Originally I had looked at using AAA batteries, and although these powered the system on, they ran down very quickly and with some sensors produced a voltage drop that would reset the Arduino and cause unreliable operation.

Tuesday 17 May 2011

A video at last

Here's a video of me playing around with the final 'prototype' version of the gloves. They really need making again to be perfect, but I'm pretty pleased with progress so far. Basically they are controlling a load of sounds and effects which are looped via a MIDI footpedal.
It also uses a little bit of Max code which takes an input, records it, creates a Markov table of probabilities for which note you will hit next, and then outputs permutations of this data, so it keeps the flavour of the original input whilst mutating it. It's pretty damn cool if I do say so myself, and probably the best bit of Max programming that I have done.
Also, there are no pre-existing loops used in this: some arps yes, but loops no.
Anyway, enjoy a video of me messing about with the gloves.


Construction6 from TheAudientVoid on Vimeo.

Thursday 12 May 2011

Adaptive Physical Controllers - Part 3 - The Dance Music Ritual

The Dance Music Ritual

As the name suggests, dance music has the specific aim of producing movement in the audience and creating a sense of community amongst them: “Your kinaesthetic sense is externalised by being transferred from your own body into the body of the crowd… The room ceases to be occupied by strangers, instead it is filled with the party folk all satisfying their need to be” (Jackson 2004 19). Slogans such as “Peace, Love, Unity and Respect” (the PLUR ethos) exemplify the community ideals of dance music audiences, and Turner's idea of spontaneous communitas, “the transient personal experience of togetherness”, has been taken up by many dance music scholars to explain the feeling of community and connection that the audience may experience in a rave setting.

“If the ecstatic raver is indeed an anonymous body of textless flesh, one that has shed its identity, ideology and language, one that has either divested or radically altered its culturally inscribed body image, then the thematic boundaries that normally delineate our edges are destabilized and perhaps dissolved. Dancing amidst a crowd of ecstatic bodies, the raver is consumed not only by an immediate ‘experience’ of the phenomenal world, but also by his or her body’s subconscious knowledges of unity and alterity (not to mention genderless sexual specificity)—knowledges that are quite different from those of self- reflective thought. Lost in the reflexivity and natural transgressivity of the flesh, in its indeterminacy and interwovenness, the raver is a mute witness to the blurring of once clear demarcations between himself and the crowd, between herself and the rave.” - Landau in (St. John 2004 p.121)

In my past work I have looked at the use of rituals within music in both modern and tribal cultures. In modern society this is seen most clearly on the dancefloors of clubs: there is a tribal and ritual element to dancing together, and to the musical style that accompanies it, “repetitive, minimalistic, seamless cyclings of sonic patterns accompanied by a relentless driving or metronomic rhythm” (Fatone 2001), which creates not only community between the dancers but often ecstatic experience. James Landau states that “in the psychoanalytical view, ecstacy’s transgressive relationship to binary thought stems from the rave-assemblage propelling its participants into the Real, a cognitive space ‘beyond’ the ego and its organizational structures” (St. John 2004). This idea is supported by St John, who states, “The party makes possible a kind of collective ego-loss, a sense of communal singularity - a sensation of at-one-ness - is potentiated” (St. John 2004). However, the performer often cannot take part in this ecstatic experience due to their physical disconnection from the dancefloor and from the movement of the dancers. Is it not strange that a musician can produce music that makes their audience dance but must remain rooted in place behind their computer? Would it not be more beneficial for the performer to be able to join the dance and become part of the community they are creating music for? Would they not be more fully immersed in their own sonic landscapes if unshackled from the computer screen and free to roam the space their sound occupies, interacting with the audience and using their whole body to feel their performance in the way the audience does?

One of the huge benefits of electronic instruments is that they do not have an element which needs a microphone, and as such they are not subject to feedback in the way a traditional musician would be. This simple fact has seemingly been overlooked in the majority of live performances, where the traditional room setup of placing the performer in a position of separation from their audience is adopted; by creating wireless wearable controllers it is possible to move beyond this traditional staging. I would assert that the ritual and community aspect that dance music embraces would be furthered immeasurably by breaking down the audience/performer divide and creating a situation where no one is placed ‘on a pedestal’ but instead all are intertwined with each other. Within the electronic music scene there are far fewer ‘superstar’ performers than in other musical genres, and although some performers break into the mainstream and achieve widespread acclaim, many are much less willing to fulfil the traditional hero archetype. Indeed, the community often quickly derides those who are seen to have ‘risen above their station’ or developed an overbearing ego. When talking about this, the musician Shackleton says, “in rock music you have a projection of the individual, and it’s almost like the extension of a performance art where you have an individual being / doing a very egotistical thing and in that context it’s wonderful… because of course that person is venting something and the crowd can enjoy that, in that context. But I think I’ve never really seen it like that, the artist isn’t so important.” (Brignoli 2011) “I don't need lots and lots of money, I don't need a lot of fame or this sort of thing, I just like doing what I'm doing. That's good for me.” (Keeling 2010) Even very well known DJs such as Paul van Dyk are known for their grounded attitude towards their work: “He is so sincere and is one of the nicest people I've ever met. You don't expect someone that's so well known to be so humble.” Jessica Sutta (2001). Some artists and groups take this idea even further: Scot Gresham-Lancaster, in his article about ‘the Hub’ (an ‘interactive computer network music group’), states, “The Intent to detach ego from the process of music making we inherited directly from Cage. To refine that impulse and make a living machine that both incorporates our participation and lets the breath of these new processes out into the moment”. (Gresham-Lancaster 1998)

If, as St John states, “electronic dance music would be a conduit for experimentation, transgression and liberation, with rave becoming the manifestation of counter-culture continuity” (St. John 2008 156), then this freedom should logically be extended to break down the traditional audience/performer divide. Onyx Ashanti describes the sensation of using a wireless controller whilst being amongst the audience: “I "thought" I would do what I usually do, which was to stand in front of the DJ booth and "perform". Not the case, at all! Before I realized it, I had eased into the crowd and was dancing with a couple of very attractive women, BUT WAIT...I was still creating and playing as well!” (Ashanti 2011). We can see clearly from this quote the excitement of the performer in this situation: he can interact with the audience whilst creating, and feed the energy of the audience back into his system. Within this context it is clear that the gestures one must ascribe to such a controller are those of dancing; the performer must be able to dance with the audience and use their gestures both to manipulate the music and to interact with others. I believe it is this situation, facilitated by the movement of the performer away from the computer, that will truly revolutionise the performance of dance music.

Wednesday 11 May 2011

Adaptive Physical Controllers - Part 2


If we look at the way traditional instruments are played, we can see that there is a great deal of bodily involvement, and it is often easy to see the haptic and sonic link between the gesture of the performer and the sound that is produced; for example, as we see a guitarist bend a string we can hear the corresponding rise in pitch from the amplifier. This produces a clear semiotic link understandable to the audience and performer: a specific, defined action produces a consistent result.[1] This is less true when we look at computer controllers, which largely rely on the language of synthesizers and studios, such as patch cables, knobs and faders. Axel Mulder states:

“The analog synthesizer, which is usually controlled with about a dozen knobs and even more patch cables, is an example of a musical instrument with relatively high control intimacy in terms of information processing, but virtually no control intimacy in terms of semiotics, as access to the structuring elements of the sounds is hidden behind a frustratingly indirect process of wiring, rewiring, tuning sound parameters by adjusting knobs, etc.”(Mulder 1996 4)

This lack of a clear semiotic language for the uninitiated (i.e. those without direct experience of using a synthesizer or being in a studio) means that much of the data that informs the audience of the changes being made is lost. Indeed, even those who understand patching an analogue synthesizer would not be able to tell, from the position of an audience member, which patch cable corresponds to which function. Fernando Iazzetta states that “gesture is an expressive movement which becomes actual through temporal and spatial changes. Actions such as turning knobs or pushing levers, are current in today's technology, but they cannot be considered as gestures” (Iazzetta 2000). This interaction and language of expression becomes even less clear when the performer is simply behind a laptop, moving a mouse or pushing buttons on a controller. Therefore we need to move towards a system that is responsive to the user's demands and which has a clear semiotic language, whilst taking into account playability and ease of use. One instrument that attempts to reinforce this semiotic link is the Theremin; however, the degree of physical discipline required to become a virtuoso on this instrument is beyond the capabilities of most players:

“You’re trying to stay very, very, very still, because little movements with other parts of your body will affect the pitch, or sometimes if you're holding a low note, and breathing, you know, will make it ... (Tone rising out of key)…. I think of it almost like a yoga instrument, because it makes you so aware of every little crazy thing your body is doing, or just aware of what you don't want it to be doing while you're playing” (Kurstin 2002)

Axel Mulder's bodysuit also had a similar problem:

“The low level of control intimacy resulted from the fact that the movements were represented as physical joint angles that were placed in a linear relation to the psycho-acoustical parameters representing the sounds. However, a performer rarely conceives of gestures in terms of single joint motions only: multiple joints are almost always involved in performance gestures. Therefore, considerable learning appeared to be necessary to gain control over the instrument and eliminate many unwanted sound effects.” (Mulder 1996 4)

In her thesis “A Gestural Media Framework”, Jessop states that “I have found that strong, semantically-meaningful mappings between gesture and sound or visuals help create compelling performance interactions, especially when there is no tangible instrument for a performer to manipulate” (Jessop 2010 15). Jessop points out that with these systems the performer must also be a programmer to gain the most reward, and whilst this is true it is possible to create a coherent GUI (graphical user interface) that hides much of the programming from the user whilst allowing them to effectively calibrate and work with the system. With any controller that uses gestural input it is necessary to have some kind of calibration stage to produce accurate results; this cannot be avoided when so much relies on, for instance, the amount a person can bend their finger or move their wrist. This also creates the opportunity to build a system so customisable that it is a useful tool for those with impaired mobility: if the sensors are accurate enough that a large change can be registered in high resolution over a small range of movement, you move towards a system that, with minimal training, anyone can use and benefit from. It is possible to create a control system whereby the gestures used can be changed over time or varied to suit the specific performer and their needs. Axel Mulder proposes that the existing problems with instruments and controllers are Inflexibility, “Due to age and/or bodily traumas, the physical and/or motor control ability of a performer may change. Alternatively, his or her gestural "vocabulary" may change due to personal interests, social influences and cultural trends…”, and Standardization, “Most musical instruments are built for persons with demographically normal limb proportions and functionality” (Mulder 1996 2). Mulder's work is of particular interest as he focuses on using the hands and associated gestures to create a new type of musical interaction. The SensOrg project also looks at the idea of creating an adaptable gestural control system based on the movement of the hands: “the SensOrg hardware is so freely configurable, that it is almost totally adaptable to circumstances. It can accommodate individuals with special needs, including physical impairments” (Ungvary and Vertegaal 176). The creators of this project state that “we consider sensory motor tools essential in the process of musical expression”.
Jessop (in reference to dance) states, “We are now in an era in the intersection of technology and performance where the technology no longer needs to be the primary focus of a piece. The performance is not about the use of a particular technology; instead, the performance has its own content that is supported and explored through the use of that technology.” (Jessop 2010) Whilst this may be true in relation to dance, I feel that this stage has not yet been reached in electronic music. There is still a fundamental disconnection between the performer and the music they are playing, and between the audience and the performer. As electronic music is often explicitly about the use of technology and its application to create and manipulate sound, it seems strange that electronic live performers expose their audience to almost none of the technology they use, other than showing them that they have a laptop computer. We have not yet reached a stage where the audience can be assumed to know what the performer is doing with their laptop and controllers. Performers such as Tim Exile are attempting to change this with highly interactive and customisable live shows and controllers that allow room for surprise elements, mistakes and moments of inspiration. Most people use computers to play live in the most limited way, simply playing back tracks with basic alteration, and this does not allow room for one of the elements that makes traditional live music so special: the fact that the performance will change every time and that you can change the structure of a song or rearrange it in a different style. It is my goal to move towards developing a system that allows a deep, user-defined interaction with the software you are using, whilst being unique to each user and adaptive to their performance demands. The application of this idea means that we must attempt to introduce systems into the controller that allow the audience to form a link between the command being performed and the sonic outcome. As it is possible for each controller to be radically different in design and implementation, it is important that some kind of visual feedback system is introduced, in addition to the performer's gestures, to aid the audience's understanding of what is happening.

As such, systems must be designed that allow a high level of control, give the performer greater room to improvise within their defined parameters and encourage them to take risks. With so many assignable controls available in computer software, and the ease of connecting multiple sensor inputs to the computer, it is possible, for example, to use the whole body to control a synthesizer or indeed the whole arrangement of a piece. In this way the performer can embody many of the separate parts of the music whilst maintaining control of the whole. In many ways this is a utopian concept, whereby the performer has deep control of every aspect of the piece and can easily manipulate it in whatever way they desire, but can also introduce indeterminacy into it. Software such as Max/Msp and Puredata allows an almost infinite variety of control combinations to be remapped and recalibrated on the fly, and can even be used to provide a constantly mutating backing over which, for example, you can use your controller to play a solo.[2] This software also has the advantage of being open to the end user: Max and Puredata patches can be opened and reprogrammed to suit the user's needs if the original design is not flexible enough, and it is precisely this open source attitude towards software that will see alternative controller solutions start to appear in many different contexts throughout the musical world. When something has the capability to be anything you desire it to be (with the proviso that you need some skill at programming to realize this), the possibilities for any artist are immediately apparent. However, these systems should also be designed with ease of use in mind, and the beauty of this approach is that whilst it is possible to provide a unified and coherent GUI on the surface for those that wish to use it, anyone who wishes to delve deeper into the inner workings of the controller is free to do so. It is this that allows one to design continually evolving controller concepts that can change based on the artist's intent or interests at the time.
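As a rough sketch of what this kind of on-the-fly remapping might look like at the lowest level, the following Arduino-style example routes each sensor to a midi CC number through a small table and swaps the whole table when a mode button is pressed. The pins, CC numbers and the idea of a single 'mode button' are all assumptions made for the illustration; in my own system the routing and reassignment happen inside Max/Msp rather than on the microcontroller.

```cpp
// Sketch of the 'remap on the fly' idea: each sensor is assigned a midi CC
// number through a small table, and pressing a (hypothetical) mode button
// swaps the whole table, so the same physical gesture drives a different
// parameter. Pin numbers and CC assignments are placeholders.
const int MODE_BUTTON = 2;                 // digital pin for the mode switch
const int SENSOR_PINS[3] = {0, 1, 2};      // analogue inputs
const int CC_BANK_A[3]   = {74, 71, 91};   // e.g. cutoff, resonance, reverb
const int CC_BANK_B[3]   = {10, 7, 93};    // e.g. pan, volume, chorus
bool useBankB = false;
bool lastButton = false;

void sendCC(int cc, int value) {
  // placeholder: in a real system this would go out over midi/serial
  Serial.print("CC "); Serial.print(cc);
  Serial.print(" = "); Serial.println(value);
}

void setup() {
  pinMode(MODE_BUTTON, INPUT_PULLUP);
  Serial.begin(57600);
}

void loop() {
  bool pressed = digitalRead(MODE_BUTTON) == LOW;
  if (pressed && !lastButton) useBankB = !useBankB;  // toggle on press edge
  lastButton = pressed;

  const int* bank = useBankB ? CC_BANK_B : CC_BANK_A;
  for (int i = 0; i < 3; i++) {
    int value = analogRead(SENSOR_PINS[i]) / 8;      // 0-1023 -> 0-127
    sendCC(bank[i], value);
  }
  delay(20);
}
```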

With my project I have chosen to focus on the hand and wrist as the main method of control: “the hand is the most dexterous of human appendages and is therefore, for many performers, the best tool to achieve virtuosity”(Mulder 1996 5). By focusing on the hand I am attempting to provide a method of input that is understood by both the performer and the audience and that can provide a rich array of data with which to control aspects of the performance. The lack of tactile feedback from the controller and the use of empty-handed gesturing make my system unlike traditional instrument models, where every action is anchored to a physical device, but it retains some similarity in its most simple operation (playing notes), as a physical press of a key is still involved.
A system such as Onyx Ashanti’s “BeatJazz”[3] involves a controller that provides tactile feedback to the user, with pressure applied to force sensing resistors used to trigger notes and functions. This allows for a much greater degree of flexibility in the performance and remains true to the instruments that Onyx has traditionally played. Onyx is a skilled wind instrument player with a background in the saxophone and, more recently, the Yamaha WX5 wind controller. However, in designing his own controller system, rather than simply recreating a traditional wind controller, Onyx has attempted to create a new controller that takes the best aspects of that instrument and combines them with the expanded possibilities of home-built controllers. This is most simply seen in the layout of the controller, which takes the form of two hand-held units, a mouthpiece, and a helmet with visual feedback via TouchOSC. Whereas the traditional wind controller looks something like a clarinet, the BeatJazz controller has a separate wedge-shaped unit for each hand. Each unit features switches, accelerometers and lights to control his Puredata and Native Instruments Kore based computer setup. As this design uses force sensing resistors as switches, the performer can assign multiple functions to each button depending on how hard it is pressed, which means that a huge array of controls can be manipulated with a minimal number of buttons.
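The following is a small sketch of how a single force sensing resistor could be split into pressure zones so that one pad carries several functions, in the spirit of the design described above. The thresholds and the actions attached to each zone are invented for the example and are not taken from the BeatJazz controller itself.

```cpp
// Rough sketch: one force sensing resistor, three pressure zones, three
// different functions. Thresholds below are invented for illustration.
const int FSR_PIN = 0;

enum Press { NONE, LIGHT, MEDIUM, HARD };

Press readPress(int raw) {
  if (raw < 100)  return NONE;    // below this, treat as noise / no touch
  if (raw < 400)  return LIGHT;
  if (raw < 800)  return MEDIUM;
  return HARD;
}

void setup() {
  Serial.begin(57600);
}

void loop() {
  Press p = readPress(analogRead(FSR_PIN));
  switch (p) {
    case LIGHT:  Serial.println("trigger note");          break;
    case MEDIUM: Serial.println("trigger note + accent"); break;
    case HARD:   Serial.println("switch function/mode");  break;
    default:     break;
  }
  delay(20);
}
```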
When talking to Onyx about his controller system he stressed the importance of visual feedback for the audience, and stated that he had modified his wind controller with brighter LEDs so that the audience could see when a note was played or a breath was being blown. He has carried this idea through to his BeatJazz controller, using super-bright multi-color LEDs that change patterns depending on what is being done with the controller. This reinforces the link between the audience and the performer's actions, and also draws the audience into the performance by creating a unique and performer-centric visual display.
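A minimal sketch of this kind of performer-centric feedback might look like the following, where an LED is pushed to full brightness whenever a note trigger is detected and then allowed to fade. The pins and fade rate are placeholders; Onyx's actual system drives patterns of multi-color LEDs rather than a single lamp.

```cpp
// Sketch: flash an LED on each note trigger and let it fade, so the
// audience can see each action. Pin numbers and fade rate are placeholders.
const int TRIGGER_PIN = 2;    // digital input that fires when a note is played
const int LED_PIN = 9;        // PWM pin driving a bright LED
int brightness = 0;
bool lastTrigger = false;

void setup() {
  pinMode(TRIGGER_PIN, INPUT_PULLUP);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  bool triggered = digitalRead(TRIGGER_PIN) == LOW;
  if (triggered && !lastTrigger) brightness = 255;   // flash on each new note
  lastTrigger = triggered;

  analogWrite(LED_PIN, brightness);
  if (brightness > 0) brightness -= 5;               // fade back down
  if (brightness < 0) brightness = 0;
  delay(10);
}
```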
These individual and highly specific performance systems are aimed at encouraging the use of the computer to produce a new kind of instrument, not one rooted in classical tradition but an instrument that recognizes the power of the computer as a tool of complete agency over the music produced. Jessop states, “For these interactions between a performer's body and digital movement to be compelling, the relationships and connections between movement and media should be expressive and the performer's agency should be clear.”(Jessop and Massachusetts Institute of Technology. Dept. of Architecture. Program in Media Arts and Sciences. 2010 15), and this is a key principle of the system that I have designed. Although it is possible for subtle movements to produce great change within a performance, the controller and software should be calibrated so that there is a clear visual link between the movements being made and the sound being output. It is clear when using an instrument such as the Eigenharp that when a key is pressed a sound is output; however, when designing a more esoteric control system it is up to the designer and user to ascribe meaning to certain gestures. Without the presence of a physical input such as a fretboard or breath controller we must ensure that the audience understand which action corresponds to which gesture. This is given further importance by the fact that our control system is adaptive: we may use one button or gesture to perform a number of functions depending on its assignment at that time, and therefore we must ensure that these are clearly demarcated through the performance and the gestures used. We must create a set of semantically meaningful gestures to support our performance. Using sensors such as accelerometers, gyroscopes or bend sensors, these gestures can be as simple or as complicated as the performer desires, from turning the hand from one direction to the other to control, for example, the cutoff of a filter, to a complex gesture involving the placement of both hands. The user of the system should be free to define these interactions from within their code and to choose gestures that feel natural to their playing technique without producing ‘false positive’ triggers during normal use. It is also important to consider the setting in which the performance is to take place when defining gestures within a control system, as the gestures associated with a conductor in a classical music setting are very different from those of a dance music event. The system I have made will be used mainly to create dance music within a club setting, and therefore appropriate gestures for this context must be considered, as well as the role of the performer within it and the breaking down of the performer–audience divide.
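One simple way to guard against 'false positive' triggers is to require a gesture reading to cross a high threshold (and stay there briefly) before it switches on, and to fall below a lower threshold before it switches off. The sketch below illustrates that hysteresis-plus-hold-time approach for a single wrist-tilt reading; the pin, thresholds and hold time are assumptions made for the example rather than values from my own patch.

```cpp
// Sketch: a wrist-tilt gesture with hysteresis and a short hold time so
// that brief wobbles during normal playing do not trigger it.
// All thresholds and timings below are invented for illustration.
const int TILT_PIN = 3;             // accelerometer axis on an analogue input
const int ON_THRESHOLD  = 700;      // must rise above this to switch on
const int OFF_THRESHOLD = 500;      // must fall below this to switch off
const unsigned long HOLD_MS = 80;   // reading must stay high this long

bool gestureActive = false;
unsigned long aboveSince = 0;

void setup() {
  Serial.begin(57600);
}

void loop() {
  int tilt = analogRead(TILT_PIN);
  unsigned long now = millis();

  if (!gestureActive) {
    if (tilt > ON_THRESHOLD) {
      if (aboveSince == 0) aboveSince = now;
      if (now - aboveSince >= HOLD_MS) {
        gestureActive = true;
        Serial.println("gesture on: open filter");
      }
    } else {
      aboveSince = 0;                // wobble ended before the hold time
    }
  } else if (tilt < OFF_THRESHOLD) {
    gestureActive = false;
    aboveSince = 0;
    Serial.println("gesture off");
  }
  delay(10);
}
```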


[1] i.e. bending a string always produces a rise in pitch, blowing harder into a wind instrument produces an overblown note, and so on
[2] See the later section on Algorithmic variation
[3] See Appendix A 3.1-3.3 for images

Tuesday 10 May 2011

Adaptive Physical Controllers

Well, I haven't updated this for a while, but it's because I have been busy writing my dissertation for my MA in Sonic Arts. There are some videos of my project coming soon when I edit things together, but for now I will serialize my dissertation for you all to read if you're interested in these topics. It covers both the theory behind alternative controllers and what I have done with my own work. I'll be posting a couple of sections of it up every day; although it's very text heavy I hope it will be interesting to some of you who are also interested in these types of projects and will explain a bit about it all. When I've released the whole thing on here I will also put up a download link to a printable pdf version which will include all the images, bibliography etc.

So without further ado, here is the first section, which covers the introduction and some discussion of existing digital input devices for music.


Adaptive Physical Controllers - 

Introduction

I will be looking in particular at the linking of Max/Msp and Ableton Live, as it allows us to create complicated controller interfaces and devices whilst giving us access to the API of a powerful live music system. Harnessing the flexibility of Max/Msp with the more traditional, time-domain orientated approach of Ableton Live allows the performer to create an adaptive system whereby a controller can be used to perform multiple tasks and control any parameter: for example the pitch, timbre and velocity of a synth, the tempo of a song, a parameter within an effect, which sounds are playing, and so on. It is this adaptive nature of home-built personalized controllers that allows us to explore new ways of interacting with computers and music. Projects such as David Jeffrey Merrill's Flexigesture (Merrill and Massachusetts Institute of Technology. Dept. of Architecture. Program In Media Arts and Sciences. 2004), “a personalizable and expressive physical input device”, and Onyx Ashanti's “BeatJazz” project (Ashanti 2010) move towards this goal and attempt to combine the best aspects of traditional hardware controllers with the possibilities that audio programming languages and custom controllers present to us.
In my project I have attempted to make a pair of gloves that can be used to create and manipulate music within Ableton Live. The aim is to be able to play live, improvised electronic dance music without interacting with the computer directly. In many aspects of live electronic music the excitement of performance has been lost: there is often little interaction between the performer and the computer, and even if the music is composed and created in real time there is little for the audience to visually identify with. Unlike a performance with traditional instruments, it is almost impossible for the audience member to visualize what the performer is doing. When using a computer, the ‘wall of the screen’ separates the performer from the audience and obscures their actions: “Conventional sonic display technologies create a “plane-of-separation” between the source/method of sound production and the intended consumer. This creates a musical/social context that is inherently and intentionally presentation (rather than process) orientated”(Bahn, Hahn et al. 2001 1). It is one of the central paradoxes for the electronic performer: although they have a box that is capable of creating almost any sound imaginable, the central mechanisms for creating these sounds are obscured from all but the user themselves. I wish to overcome this problem by creating a system that allows access to the many features computer music software offers whilst removing the user from a fixed position in front of the computer screen and creating direct visual feedback for the audience.




There are clearly benefits to modeling controllers on traditional instruments: by doing so you provide a safe reference point for the user and, in theory, reduce the learning curve required to play it (providing the user has previous instrument training). By working in a familiar framework you play to the existing strengths of the performer; however, there are limitations to traditional instruments that I believe make them unsuitable for use as a modern-day controller. Traditional use of a computer requires many keys and key combinations to perform specific functions. This is relatively easy using a keyboard and mouse, as every key is individually marked and key combinations are easily pressed. However, if you translate this idea of a grid of keys to the fretboard of a guitar you can begin to see the problems that may occur. It is very difficult to translate a vast number of controls to a small number of keys; direct midi instrument mapping often runs into the problem that software will only allow you to control one parameter at a time, and many midi sequencers and live performance programs do not allow you to easily switch between channels and instruments.
There is obviously a great difference between an instrument that attempts to simply recreate the analogue in a digital domain and a controller that seeks to redefine performer and computer interaction. For example, the Yamaha WX5[1] seeks to recreate the experience of playing a woodwind instrument but with extra keys for computer control, shifting octaves and so on; if you are seeking to simply replace a traditional instrument with a digital one, instruments like this are an effective choice. However, as we are looking to create a new type of computer control interface, the mechanics and implementation of these instruments are less relevant to us than something such as the Eigenharp[2], which bills itself as ‘The most expressive electronic instrument ever made’ and attempts to go further than simply recreating existing instrument designs. Indeed, it looks to incorporate aspects of many existing instruments and to allow the user to play VST plugins, creating a hybrid design that straddles both traditional and digital instrument designs. Undoubtedly the quality of the keys, their velocity sensing and the ability to move them in both a horizontal and vertical direction go a long way towards allowing the player to perform all the traditional expressions associated with musical instruments[3] in a way that has not been available in the past, and design features such as the inclusion of a breath controller and excellent instrument models allow the player to easily replicate, for example, wind instrument sounds. However, it is the more strictly digital interactions with this controller that may leave the end user wanting.
The Eigenharp, in a desire to remain as traditional as possible, takes the approach of using a complex set of key presses and lights to navigate through unnamed menus on the instrument. Whilst usable, this requires that the user become familiar with a menu tree that has little visual guidance; without the names of the menus appearing, and with only colored lights to mark where you are or which option is active, it is all too easy to choose the wrong option. In addition, built-in requirements such as having to reselect the instrument you are playing to exit the menu tree add unnecessary complexity. In this case the desire to pretend that the computer the instrument is plugged into does not exist feels like a denial of the capability of the device, and negates much of the goal of presenting an instrument that can be quickly mastered by the user. Although there is a limited computer interface provided with the Eigenharp, this is mainly for choosing sounds, scale modes and VSTs, and as such is more of a pre-performance configuration tool than something that can be used ‘on the fly’.
I believe that the main fault with the Eigenharp model is that it binds the user to a specific predefined interface. The benefit of creating an alternative controller is that you can create an interface that combines well with your intuitive workflow and techniques. When using a powerful ‘virtual’ instrument that is linked to the computer, you have the opportunity to allow the user to reprogram settings to work in a way that suits their needs. This is one of the central tenets of adaptive controller design: the end user can specify how to work with the tool that is used for interaction. For playability the Eigenharp undoubtedly succeeds in creating an instrument that can replicate the experience and sound of playing a ‘real’ instrument, with all the associated traits, but it is the user interface that stops it from being truly revolutionary and that does not allow the user to access the full capabilities of the instrument in a way that is complementary to their workflow.

“1st law of alternative controllers; adapt to the new reality. 2nd law of alternative controllers; adapt reality.” - Onyx Ashanti

“We feel that the physical gesture and sonic feedback are key to maintaining and extending social and instrumental traditions within a technological context” - (Bahn, Hahn et al. 2001 1)

I believe that a radical approach to instrument control systems is required to get the most from modern computers and audio software. Audio programming languages such as Max/Msp or Puredata and hardware interfaces such as the Arduino make it easy for a musician to design their own instrument and define their interaction with the computer in a way that is most appropriate to their performance. It is possible to create a “dynamic interactive system” (Cornock and Edmonds 1973) where the performer and computer feed back to each other to create ever-changing interactive situations. It has become simple to create a system whereby the action of different sensors is easily assignable and changeable from within the software, and it is this flexibility and almost unlimited expandability that makes these tools suitable for creating a truly futuristic control system.


[1] see Appendix A 1.1 for picture
[2] see Appendix A 2.1 for picture
[3] Vibrato, Pitch Bend, Slides, etc....

Tuesday 12 April 2011

Some thoughts on conductive thread and a sneak peak at some coding

With a heavy heart I had to solder some wires into the gloves this morning. Not the finest hour when you are trying to make everything with thread, but a long string of errors led to this point: at first a few sensors stopped working or gave zero values, and the switches tended to randomly decide if they worked or not. I went through a long troubleshooting process with this, including checking the power hookups and Arduino code. It was checking the Arduino that really provided the answer, as the software would no longer talk to the microprocessor and allow you to update it over USB. Now normally, even with everything hooked up to the xbee breakout board, it updates fine (as long as the xbee isn't in the board), but this time I was getting an "arduino not in sync resp=0xf9" error when trying to upload. My first thought was that it was something to do with the xbee board, as this also gets involved when you're uploading programs to the Arduino due to the tx and rx wires being connected. So I disconnected them and the program uploaded fine. Somehow the conductive thread was stopping me updating the firmware, which could of course also account for some of the funky values I was getting into my maxuino patch.

After a lot of thought I've decided that mission-critical wires need to be actual wires rather than thread, as things like the tx and rx getting shorted out or returning weird values isn't an option when you're playing live; it all has to be bomb proof. Now, there are still a few sensors acting funny and switches being intermittent in their response, and this is clearly due to the thread. So firstly that's going to get replaced, and ideally sewn between layers of neoprene instead of being latexed, to see if that makes it more reliable; then, dependent on the outcome, I might need to break out the soldering iron again. I really don't want to, but the most important thing is that it works, not that something in particular is used if it's not suitable for the task.

Original Maxuino
Here's a brief snapshot of the code for one of the hands; I'm essentially running two copies of maxuino that have been heavily edited to do what I want. Firstly, here's the original maxuino patch. As you can see, apart from the subpatchers that deal with the OSC commands and do the javascript, there's not a huge amount to it: midi comes out of the end and there you go, job done. Now let's have a look at the left hand code for the gloves...




My Maxuino Left Hand Patch
This is the patch I'm running right now. Basically everything bar the core maxuino patcher has been edited in some way to suit my purposes: there's an additional lighting system; routing options so you can send the midi via the internal max network of s and r objects to other channels (this is really important so you can control more than one instrument with your inputs, and requires a little receive object to be placed on the channel you're looking to send to); a calibration system; options to turn sensors on and off (as you don't want things happening every time you bend your fingers to hit a note or drum. Or maybe you do, you can do that too); and also inputs from the other glove to control certain parameters, i.e. switches to change octave, the channel the midi is routed to, etc. Quite a bit of data is passed back and forth to make things work; luckily max is fast at doing this and it doesn't cause any kind of unusable latency. However, there is a little quirk that you have to work around. This patch has to be frozen to work properly, and you can't use an r object in a frozen patch. It is possible to add one, but it causes a constant climb in cpu usage second after second, so you end up with Ableton telling you that you have 300% cpu usage or something ridiculous. However, if you pop the r object in an external patch and bpatcher it (or whatever) into your frozen patch, everything is fine. So there's a little workaround if you need it.

More on the coding soon... for now, back to conductive thread experiments.