Wednesday, 11 May 2011

Adaptive Physical Controllers - Part 2


If we look at the way traditional instruments are played we can see a great deal of body involvement, and it is often easy to see the haptic and sonic link between the gesture of the performer and the sound that is produced: as we see a guitarist bend a string, for example, we can hear the corresponding rise in pitch from the amplifier. This produces a clear semiotic link understandable to the audience and performer; a specific, defined action produces a consistent result.[1] This is less true of computer controllers, which largely rely on the language of synthesizers and studios: patch cables, knobs and faders. Axel Mulder states:

“The analog synthesizer, which is usually controlled with about a dozen knobs and even more patch cables, is an example of a musical instrument with relatively high control intimacy in terms of information processing, but virtually no control intimacy in terms of semiotics, as access to the structuring elements of the sounds is hidden behind a frustratingly indirect process of wiring, rewiring, tuning sound parameters by adjusting knobs, etc.”(Mulder 1996 4)

This lack of a clear semiotic language for the uninitiated (i.e. those without direct experience of using a synthesizer or being in a studio) means that much of the data that informs the audience of changes being made is lost. Indeed, even those who understand patching an analogue synthesizer would not be able to tell, from the position of an audience member, which patch cable corresponds to which function. Fernando Iazzetta states, “gesture is an expressive movement which becomes actual through temporal and spatial changes. Actions such as turning knobs or pushing levers, are current in today's technology, but they cannot be considered as gestures” (Iazzetta 2000). This interaction and language of expression becomes even less clear when the performer is simply behind a laptop, moving a mouse or pushing buttons on a controller. We therefore need to move towards a system that is responsive to the user's demands and has a clear semiotic language, whilst taking into account playability and ease of use. One instrument that attempts to reinforce this semiotic link is the Theremin; however, the degree of physical discipline required to become a virtuoso on this instrument is beyond the capabilities of most players:

“You’re trying to stay very, very, very still, because little movements with other parts of your body will affect the pitch, or sometimes if you're holding a low note, and breathing, you know, will make it ... (Tone rising out of key)…. I think of it almost like a yoga instrument, because it makes you so aware of every little crazy thing your body is doing, or just aware of what you don't want it to be doing while you're playing” (Kurstin 2002)

Axel Mulder's bodysuit had a similar problem:

“The low level of control intimacy resulted from the fact that the movements were represented as physical joint angles that were placed in a linear relation to the psycho-acoustical parameters representing the sounds. However, a performer rarely conceives of gestures in terms of single joint motions only: multiple joints are almost always involved in performance gestures. Therefore, considerable learning appeared to be necessary to gain control over the instrument and eliminate many unwanted sound effects.” (Mulder 1996 4)

In her thesis “A Gestural Media Framework”, Jessop states that “I have found that strong, semantically-meaningful mappings between gesture and sound or visuals help create compelling performance interactions, especially when there is no tangible instrument for a performer to manipulate” (Jessop 2010, 15). Jessop points out that with these systems the performer must also be a programmer to gain the most reward. Whilst this is true, it is possible to create a coherent GUI (Graphical User Interface) that obscures much of the programming from the user whilst allowing them to effectively calibrate and work with the system. Any controller that uses gestural input needs some kind of calibration stage to produce accurate results; this cannot be avoided when so much is reliant on, for instance, the amount a person can bend their finger or move their wrist. Calibration also creates the opportunity to build a system so customizable that it is a useful tool for those with impaired mobility: if sensors are accurate enough that a large change can be made, in high resolution, over a small range of movement, you move towards a system that anyone can use and benefit from with minimal training. It is possible to create a control system in which the gestures used can change over time or vary with the specific performer and their needs. Axel Mulder proposes that the existing problems with instruments and controllers are inflexibility, “Due to age and/or bodily traumas, the physical and/or motor control ability of a performer may change. Alternatively, his or her gestural "vocabulary" may change due to personal interests, social influences and cultural trends…”, and standardization, “Most musical instruments are built for persons with demographically normal limb proportions and functionality” (Mulder 1996, 2).
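The calibration stage described above can be sketched in code. This is a minimal, hypothetical illustration (the class and value names are my own, not from any system discussed here) of how recording a user's comfortable movement range lets a small physical gesture span the full control range:

```python
class SensorCalibration:
    """Map a user's comfortable movement range onto a full 0.0-1.0 control range.

    During a calibration stage the user moves the sensor (e.g. a finger bend
    sensor) through their comfortable range; we record the min and max raw
    readings, then rescale live readings against them.
    """

    def __init__(self):
        self.raw_min = None
        self.raw_max = None

    def observe(self, raw):
        # Called repeatedly while the user explores their range.
        if self.raw_min is None or raw < self.raw_min:
            self.raw_min = raw
        if self.raw_max is None or raw > self.raw_max:
            self.raw_max = raw

    def scale(self, raw):
        # Map a live reading to 0.0-1.0, clamped so out-of-range
        # readings never produce out-of-range control values.
        span = self.raw_max - self.raw_min
        if span == 0:
            return 0.0
        value = (raw - self.raw_min) / span
        return max(0.0, min(1.0, value))
```

Even if a performer with limited mobility can only move a sensor through a small fraction of its raw range (say readings 400 to 480 out of a possible 0 to 1023), after calibration that small range drives the full resolution of the mapped control.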
Mulder's work is of particular interest as he focuses on using the hands and associated gestures to create a new type of musical interaction. The SensOrg project also explores the idea of an adaptable gestural control system based on the movement of the hands: “the SensOrg hardware is so freely configurable, that it is almost totally adaptable to circumstances. It can accommodate individuals with special needs, including physical impairments” (Ungvary and Vertegaal 176). The creators of this project state, “we consider sensory motor tools essential in the process of musical expression”.
Jessop (in reference to dance) states, “We are now in an era in the intersection of technology and performance where the technology no longer needs to be the primary focus of a piece. The performance is not about the use of a particular technology; instead, the performance has its own content that is supported and explored through the use of that technology.” (Jessop 2010). Whilst this may be true of dance, I feel that this stage has not yet been reached in electronic music. There is still a fundamental disconnection between the performer and the music they are playing, and between the audience and the performer. As electronic music is often explicitly about the use of technology to create and manipulate sound, it seems strange that live electronic performers expose their audience to almost none of the technology they use, other than to show them that they have a laptop computer. We have not yet reached a stage where the audience can be assumed to know what the performer is doing with their laptop and controllers. Performers such as Tim Exile are attempting to change this with highly interactive and customizable live shows and controllers that leave room for surprise elements, mistakes and moments of inspiration. Most people use computers to play live in the most limited way, simply playing back tracks with basic alteration. This leaves no room for one of the elements that makes traditional live music so special: the fact that the performance changes every time, and that you can restructure a song or rearrange it in a different style. My goal is to develop a system that allows a deep, user-defined interaction with the software, whilst being unique to each user and adaptive to their performance demands.
The application of this idea means that we must attempt to introduce systems into the controller that allow the audience to form a link between the command being performed and the sonic outcome. As each controller may be radically different in design and implementation, it is important that some kind of visual feedback system is introduced, in addition to the performer's gestures, to aid the audience's understanding of what is happening.

            As such, systems must be designed that allow a high level of control, give the performer greater room to improvise within their defined parameters and encourage them to take risks. With so many assignable controls available in computer software, and the ease of connecting multiple sensor inputs to the computer, it is possible, for example, to use the whole body to control a synthesizer, or indeed the whole arrangement of the piece. In this way the performer can embody many of the separate parts of the music whilst maintaining control of the whole. In many ways this is a utopian concept, whereby the performer has deep control of every aspect of the piece, can easily manipulate it in whatever way they desire, but can also introduce indeterminacy into the piece. Software such as Max/MSP and Puredata allows an almost infinite variety of control combinations to be remapped and recalibrated on the fly, and can even be used to provide a constantly mutating backing over which, for example, you can use your controller to play a solo.[2] This software also has the advantage of being open to the end user: Max and Puredata patches can be opened and reprogrammed to suit the user's needs if the original design is not flexible enough, and it is precisely this open-source attitude towards software that will see alternative controller solutions start to appear in many different contexts throughout the musical world. When something has the capability to be anything you desire it to be (with the proviso that you need some skill at programming to realize this), the possibilities for any artist are immediately apparent. However, these systems should also be designed with ease of use in mind, and the beauty of this approach is that it is possible to provide a unified and coherent GUI on the surface for those who wish to use it, whilst anyone who wishes to delve deeper into the inner workings of the controller is free to do so.
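The on-the-fly remapping described above can be sketched in a few lines. This is a hypothetical Python illustration, in the spirit of a Max/MSP or Puredata patch rather than their actual APIs; the parameter and gesture names are invented. Incoming control values are routed through a mapping table that the performer can rewrite mid-performance, without restarting anything:

```python
# Hypothetical synth parameters to be driven by the controller.
synth = {"cutoff": 0.0, "resonance": 0.0, "tempo": 120.0}

def make_router(mapping):
    """Build a routing function over a live-editable gesture-to-parameter table."""
    def route(control_name, value):
        target = mapping.get(control_name)
        if target is not None:
            synth[target] = value
    return route

# Initial assignment: wrist tilt drives the filter cutoff.
mapping = {"wrist_tilt": "cutoff", "finger_bend": "resonance"}
route = make_router(mapping)

route("wrist_tilt", 0.7)         # wrist tilt sets the cutoff
mapping["wrist_tilt"] = "tempo"  # remapped live, mid-performance
route("wrist_tilt", 140.0)       # the same gesture now sets the tempo
```

Because the router reads the table on every event, reassigning a gesture is just a dictionary write, which is the essence of what patcher environments make possible during a performance.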
It is this that allows one to design continually evolving controller concepts that can change based on the artist's intent or interests at the time.

            With my project I have chosen to focus on the hand and wrist as the main method of control: “the hand is the most dexterous of human appendages and is therefore, for many performers, the best tool to achieve virtuosity” (Mulder 1996, 5). By focusing on the hand I am attempting to provide a method of input that is understood by performer and audience alike, and that can provide a rich array of data with which to control aspects of the performance. The lack of tactile feedback from the controller, and the use of empty-handed gesturing, makes my system unlike traditional instrument models, where every action is anchored to a physical device; it nevertheless retains some similarity in its most simple operation (playing notes), as a physical press of a key is still involved.
Systems such as Onyx Ashanti's “BeatJazz”[3] involve a controller that provides tactile feedback to the user, with pressure applied to force-sensing resistors triggering notes and functions. This allows for a much greater degree of flexibility in the performance and remains true to the instruments that Onyx has traditionally played. Onyx is a skilled wind instrument player, with a background in the saxophone and, more recently, the Yamaha WX5 wind controller. However, in designing his own controller system, rather than simply recreating a traditional wind controller, Onyx has attempted to create a new controller that takes the best aspects of that instrument and combines them with the expanded possibilities of home-built controllers. This is most simply seen in the layout of the controller, which takes the form of two hand-held units, a mouthpiece, and a helmet with visual feedback via TouchOSC. Whereas the traditional wind controller looks something like a clarinet, the BeatJazz controller has a separate wedge-shaped unit for each hand. Each unit features switches, accelerometers and lights to control his Puredata and Native Instruments Kore based computer setup. As this design uses force-sensing resistors as switches, the performer can assign multiple functions to each button depending on how hard it is pressed, which means that with a minimum number of buttons a huge array of controls can be manipulated.
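The "multiple functions per button" idea can be sketched as a simple thresholding scheme. This is an illustrative Python example only; the thresholds and function names are my own assumptions, not Onyx Ashanti's actual mapping. A single force-sensing resistor's pressure reading selects between functions, so one physical pad carries several commands:

```python
def fsr_action(pressure, thresholds=(0.2, 0.6, 0.9)):
    """Return the function tier selected by a normalised pressure reading (0.0-1.0).

    Readings below the lowest threshold are treated as noise and ignored,
    so resting a finger on the pad does not trigger anything.
    """
    soft, medium, hard = thresholds
    if pressure < soft:
        return None             # below the noise floor: no trigger
    elif pressure < medium:
        return "trigger_note"   # light press
    elif pressure < hard:
        return "accent_note"    # firm press
    else:
        return "switch_mode"    # very hard press reassigns the pad
```

Three pressure tiers on one pad effectively triple the number of available controls, which is why a two-handed unit with only a handful of pads can drive a large software setup.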
When talking to Onyx about his controller system, he stressed the importance of visual feedback for the audience, and stated that he had modified his wind controller with brighter LEDs so that the audience could see when a note was played or a breath was being blown. He has carried this idea through to his BeatJazz controller, using super-bright multi-colour LEDs that change patterns depending on what is being done with the controller. This serves to reinforce the link between the audience and the performer's actions, and also draws the audience into the performance by creating a unique, performer-centric visual display.
These individual and highly specific performance systems are aimed at encouraging the use of the computer to produce a new kind of instrument: not one rooted in classical tradition, but an instrument that recognizes the power of the computer as a tool of complete agency over the music produced. Jessop states, “For these interactions between a performer's body and digital movement to be compelling, the relationships and connections between movement and media should be expressive and the performer's agency should be clear” (Jessop 2010, 15), and this is a key principle of the system that I have designed. Although it is possible for subtle movements to produce great change within a performance, the controller and software should be calibrated so that there is a clear visual link between the movements being made and the sound being output. With an instrument such as the Eigenharp it is clear that when a key is pressed a sound is output; when designing a more esoteric control system, however, it is up to the designer and user to ascribe meaning to certain gestures. Without the presence of a physical input such as a fretboard or breath controller, we must ensure that the audience understand which action corresponds to which gesture. This is given further importance by the fact that our control system is adaptive: one button or gesture may perform a number of functions depending on its assignment at that time, and these must therefore be clearly demarcated through the performance and gestures used. We must create a set of semantically meaningful gestures to support our performance.
Using sensors such as accelerometers, gyroscopes or bend sensors, these gestures can be as simple or as complicated as the performer desires: from turning the hand from one direction to the other to control, for example, the cutoff of a filter, to a complex gesture involving the placement of both hands. The user of the system should be free to define these interactions from within their code, and to choose gestures that feel natural to their playing technique without producing ‘false positive’ triggers during normal use. It is also important to consider the setting in which the performance is to take place when defining gestures within a control system, as the gestures associated with a conductor in a classical music setting are very different from those of a dance music event. The system I have made will be used mainly to create dance music within a club setting, and so ideas of appropriate gestures for this context must be considered, as well as the role of the performer within it and the breaking down of the performer-audience divide.
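One common way to avoid the ‘false positive’ triggers mentioned above is hysteresis: a gesture fires only when it crosses an ‘on’ threshold, and must fall back below a lower ‘off’ threshold before it can fire again. The following is a minimal Python sketch under assumed values (a hand-roll angle from an accelerometer, with invented 45°/30° thresholds), not the actual mapping of my system:

```python
class RollGesture:
    """Detect a hand-roll gesture with hysteresis to suppress jitter.

    Fires once when the roll angle exceeds on_deg; will not fire again
    until the angle has dropped back below off_deg, so small wobbles
    around the threshold during normal playing do not retrigger.
    """

    def __init__(self, on_deg=45.0, off_deg=30.0):
        self.on_deg = on_deg
        self.off_deg = off_deg
        self.armed = True

    def update(self, roll_deg):
        """Feed one roll angle reading; return True on the frame the gesture fires."""
        if self.armed and roll_deg >= self.on_deg:
            self.armed = False
            return True
        if not self.armed and roll_deg <= self.off_deg:
            self.armed = True
        return False
```

The gap between the two thresholds is what makes the gesture feel deliberate: a performer must clearly return the hand towards neutral before the same gesture can be read again, which suits the emphatic, visible movements appropriate to a club setting.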


[1] I.e. bending a string always produces a rise in pitch, blowing harder into a wind instrument produces an overblown note, and so on.
[2] See the later section on Algorithmic variation
[3] See Appendix A 3.1-3.3 for images
