How can electronics change the voice as an instrument? Of course, electronics cannot change the voice as such, but they can change its sound when it is amplified through loudspeakers. The use of electronics can also change the organisation of sounds and, as a consequence, their instrumental appearance and musical possibilities. Bergsland constructs what he calls a maximal–minimal model for analysing voice sounds in electroacoustic music.
The second central idea in my framework is the model of maximal and minimal voice. This model sets up two poles or extremes as reference points against which the experience of different types of transformed or manipulated voices might be judged and compared, namely the maximal and minimal voice. The maximal voice can briefly be described as a typical informative and neutral speaking voice, resembling in many ways public broadcast voices. At the other end, the minimal voice is usually highly manipulated and often quite abstract, and thus defines the zone between what is voice and what is not voice. The imagined space between these two extremes is thought of as a continuum extending from a central zone, defined by the maximal voice, towards a peripheral zone, defined by the minimal voice (ibid.: 3).
Bergsland’s Centre-periphery model of maximal and minimal voice (ibid.: 149)
In his model, Bergsland also breaks this continuum down into what he refers to as a set of seven premises, which he sees as partly interrelated dimensions with which different vocal expressions can be evaluated:
- Naturalness
- Presence
- Focus of attention
- Information density
- Clarity of meaning
- Feature salience
- Stream integration (ibid.: 142)
Bergsland sees this model as being connected to other theories that are relevant here:
– We experience a manipulated sound in relation to one that is not manipulated (Smalley and Schaeffer)
– We can describe a continuum between the concrete and reference-oriented on one side, and the abstract and sound quality-oriented on the other side. (Hoopen, Young, Chion, Emmerson) (ibid.: 3–4)
For me, as a vocal improvising performer, Bergsland’s model is useful when I am trying to understand my intuitive actions. In my work I experience a play with “distance – nearness” relating to this continuum, where the maximal, natural voice defines the central zone and the highly processed voice the peripheral zone. Further, his premises concern not only a sound’s quality but also, in the last two premises, how it appears in the whole musical picture. Feature salience is about how vocal sounds “stand out” perceptually, both in themselves and in relation to other sound features. Stream integration indicates how far the voice is integrated into one coherent and continuous sound stream. (ibid.: 142)
Looking at my music in the light of this model, I see that it describes important aspects of what I am playing with. It is also clear to me that the model shows how the voice differs greatly from other musical instruments through the premises of Naturalness and Clarity of meaning. The model could thus be used as a tool for analysing my music in a more theoretical way. Rather than providing a detailed analysis (which would be a big theoretical task, since the play with these premises exists in an interwoven whole), I will use the model more freely, as a reference and tool for understanding some important aspects of my work.
I will try to show how Bergsland’s seven premises are able to describe musical parameters in my work:
Example III, 1: “Raised, rave” from the CD Voxpheria (2012) with Thomas Strønen:
[soundcloud url="https://api.soundcloud.com/tracks/26816789?secret_token=s-ppu7k" params="color=ff6600&auto_play=false&show_artwork=true&show_playcount=true&show_comments=true" width="100%" height="auto" iframe="true" /]
For the first three minutes I work within a continuum between natural voice sounds on one side and different processed and sampled sounds (and reverbs) on the other. I play, among other things, with degrees of Naturalness and Presence. The most processed sound comes from the plug-in synth Hadron (see Chapter 2), while some of the “sliced up” sounds are produced using effects in the Roland SP555 (see Chapter 2). Due to the character of the interplay, which in this part is very transparent through the use of silence/”stops” between the impulses, it is still easy to recognise much of the voice sound as voice, except for the Hadron pulse (Information density, Focus of attention and Feature salience, as described above).
Towards the end of the session (8.38) there is a sequence where the only voice present is a sampled loop processed with granular synthesis, pitched down and filtered through a MaxMSP patch (the G/F patch, see Chapter 2). I use the same combination of techniques a bit earlier (6.41). Here I am in the peripheral zone: the voice is less recognisable and also more mixed with Thomas’s sounds (Stream integration, as described above).
At 3.05, a “text” is introduced: a kind of Dadaistic improvisation with word-like sounds. The only clear word in this section is “raised”, and maybe “rave”. This was not intended as meaning, but popped up as part of the improvisation. (The title of the piece was created afterwards.) As I see it, this is a play with Bergsland’s premise of Clarity of meaning. The use of something “language-like” has a special energy because it is associated with meaning ‒ at least for me, being a performer capable of delivering a text in a more traditional setting. I experience a distinct difference between textually related vocalising and more abstract sound-sculpting or “instrumental” singing.