A lecture given at the Audiovisuality conference, University of Aarhus, 27 May 2011. It is published in Sound Effects, 3.1 (2013): 132-48.
Friedrich Kittler has described the discourse network of the later nineteenth century as effecting a conspicuous separation of the different sensory and mediatic channels, splitting apart the spontaneous cross-sensory concourse of eye, hand and ear at the beginning of the century. But he also shows that the later years of the nineteenth century were characterised by a kind of conversion mania, as inventors and engineers sought more and more ways in which different kinds of energy and sensory form could be translated into each other. That one of the most important imaginary diseases of the fin-de-siècle was the condition known as ‘conversion hysteria’ is perhaps a sign of how far-reaching this enthusiasm was for the idea of translated energies and outputs. It is a happy coincidence that the Oxford English Dictionary was in preparation during the very decades in which some of the most important developments were occurring, since one of the most notable effects of those developments was the abundance of new names for the hybridising apparatuses that came into being (indeed the compilers of the dictionary were particularly exercised by the abundance of new technical terms, wondering how many of them were likely to survive long enough to merit inclusion in the dictionary).
Four years after the invention of the telephone, Alexander Graham Bell caused flurries of excitement with another invention, which he described in a series of essays and lectures in the US and Britain during the autumn of 1880. The device was what he called the ‘photophone’. It depended upon the discovery made by Willoughby Smith in 1873, during the course of work on the Atlantic undersea telegraph cable, that the resistance of the material selenium, which was ordinarily extremely high, in fact varied with the action of light, exposure to light lowering the resistance of the material. Reading of selenium’s sensitivity to fluctuations of light, it occurred to Bell that, if rapid fluctuations in resistance could be induced in it by equivalent fluctuations in a beam of light, the output from a selenium cell might function in the same way as the fluctuating electrical current that produced sound in the telephone. In a lecture to the Royal Institution of May 1878, Bell had speculated that connecting a selenium cell to a telephone would mean ‘that you can hear a shadow’ (quoted Bruce 1973, 254). If the rapid fluctuations of light could be controlled by the modulations of a human voice, it should then be possible to transmit the sound of the voice, on the principles of the telephone, only wirelessly. In the ordinary telephone, the palindromic series of inductions ran sound → magnetism → electricity → magnetism → sound. In Bell’s photophone, the series of inductions ran sound → light → electricity → magnetism → sound.
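Bell’s chain of conversions can be followed in a small numerical sketch. The Python fragment below (with invented component values throughout; none of the figures are Bell’s) models the sequence sound → light → electricity → sound: a voice waveform modulates the intensity of a beam, the illumination lowers the resistance of a notional selenium cell, and the resulting fluctuations of current are what a telephone earpiece would turn back into sound.

```python
import math

# A schematic model of the photophone's conversion chain
# (sound -> light -> electricity -> sound). All values are invented
# for illustration; they are not Bell's figures.

SAMPLE_RATE = 8000          # samples per second
VOICE_FREQ = 440.0          # a pure tone standing in for the voice
R_DARK = 300.0              # selenium resistance in darkness (kilohms, assumed)
R_BRIGHT = 100.0            # resistance under full illumination (kilohms, assumed)
BATTERY_VOLTS = 10.0        # voltage driving the receiving circuit (assumed)

def voice(t):
    """The voice signal, normalised to the range -1..1."""
    return math.sin(2 * math.pi * VOICE_FREQ * t)

def light_intensity(v):
    """The vibrating mirror at the transmitter: map voice -1..1 to beam 0..1."""
    return 0.5 * (1.0 + v)

def selenium_resistance(intensity):
    """Resistance falls as illumination rises (a crude linear model)."""
    return R_DARK - (R_DARK - R_BRIGHT) * intensity

def receiver_current(t):
    """Current through selenium cell and earpiece, by Ohm's law."""
    return BATTERY_VOLTS / selenium_resistance(light_intensity(voice(t)))

# The current rises and falls in step with the voice waveform, which is
# what allows the earpiece at the receiving end to turn it back into sound.
for n in range(8):
    t = n / SAMPLE_RATE
    # current is in milliamps, since resistance is expressed in kilohms
    print(f"t={t:.5f}s  voice={voice(t):+.3f}  current={receiver_current(t):.4f} mA")
```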
The immediate advantage seemed clear. As Bell explained in a lecture of September 1880, ‘I saw that the effect could be produced at the extreme distance at which selenium would respond to the action of a luminous body, but that this distance could be indefinitely increased by a parallel beam of light, so that we could telephone from one place to another without the necessity of a conducting wire between the transmitter and receiver’ (Bell 1880b, 132). ‘Indefinitely’ turned out to be the most literal of long shots and, as time went on, the limitations of the photophone became stubbornly apparent. Indeed, it must have seemed to some that Bell had done little more than laboriously to reinvent, in electrified form, the original form of the telegraph, which used line-of-sight signals to transmit messages over long distances, and which had its origin in the lines of beacon bonfires used by the Ancient Gauls, and many others. It is true that the photophone offered mechanical reproduction of a voice, rather than signals that had to be decoded and recoded at each transfer-point; but the original wireless telegraph would work as long as there was visibility – and even, in the case of beacon bonfires, at night – whereas the photophone would stop working whenever the sun went behind a cloud – though Bell did find that he was able to transmit by oxyhydrogen light and even by the light from a kerosene lamp (Bell 1880a, 320). The New York Times was elaborately sardonic in its commentary on a lecture in which Bell described his device:
The ordinary man…may find a little difficulty in comprehending how sunbeams are to be used. Does Prof. BELL intend to connect Boston and Cambridge, for example, with a line of sunbeams hung on telegraph posts, and, if so, of what diameter are the sunbeams to be, and how is he to obtain them of the required size? What will become of his sunbeams after the sun goes down? Will they retain their power to communicate sound, or will it be necessary to insulate them, and protect them against the weather by a thick coating of gutta-percha? The public has a great deal of confidence in Scientific Persons, but until it actually sees a man going through the streets with a coil of No. 12 sunbeams on his shoulder, and suspending it from pole to pole, there will be a general feeling that there is something about Prof. BELL’s photophone which places a tremendous strain on human credulity. (Anon 1880a, 4)
Nevertheless, Bell would spend much of the rest of his life, and no little portion of his fortune, trying to perfect and develop the photophone; he continued to regard it as his greatest invention and was still tinkering with it as late as 1922, the year of his death (Bruce 1973, 343; Mackay 1997, 205, 307).
Although the photophone was understood primarily as a possible improvement on the work done by the telephone, in converting the sound of a voice into light, then into electricity, and then back into sound, Bell also experimented with the direct production of sound, without an original. Indeed, he was prompted to see whether the action of light might produce sound in selenium and in other materials without the need for electrical conversion. He concluded that indeed ‘sounds can be produced by the action of a variable light from substances of all kinds when in the form of thin diaphragms’ (Bell 1880a, 322-3). He had in fact written to his father in these terms on 26 February 1880: ‘I have heard articulate speech by sunlight! I have heard a ray of the sun laugh and cough and sing!...I have been able to hear a shadow and I have even perceived by ear the passage of a cloud across the sun's disk’ (Bruce 1973, 337). An enthusiastic journalist in Appletons’ Journal speculated that
We hear of conversation being carried on by means of a trembling beam of light, and incredulity reaches its climax when it is whispered that the photophone may enable us to hear the rise and fall of those gigantic storms that are constantly sweeping over the sun's surface. Is it possible that the revelations of modern science – condemned as materialistic and prosaic – can thus outstrip the wildest flights of the imagination? (Anon 1881, 181)
Bell’s experiments and speculations led him, so to speak, away from telephony and towards phonography, away, that is, from the idea of the transmission of sound and towards the investigation of the idea of inducing sound in material, or making manifest its sonorous potential, by making good the suggestion that ‘sonorousness, under the influence of intermittent light, is a property common to all matter’ (Bell 1881, 242). He experimented with many different objects and substances, including cigar butts and lampblack (pretty good) and water (disappointing), and succeeded in inducing sound from many of them. He was followed in this by W.H. Preece in London (Preece 1880-1), who argued that the sound was in fact caused by the agitation of the molecules in the sonorous material by heat rather than light.
It was the use of light to produce rather than to transmit sound that seems to have gripped Bell most, and that was also to prove most suggestive to others. We ought to take the word ‘produce’ quite seriously. The idea of production suggests that the action of making a sound is at once an extrapolation – a drawing out or, literally, drawing forwards – and an unfolding, an outering, or uttering. In his 1938 essay ‘A New Laocoön: Artistic Composites and the Talking Film’, Rudolf Arnheim suggested that the coming of sound to film created a division between speaking and silent objects that had not previously been apparent; in silent film, which created a ‘union of silent man and silent things’ (Arnheim 1957, 227), there was no such distribution, and objects were as expressive as human agents: ‘In the universal silence of the image, the fragments of a broken vase could “talk” exactly the way a character talked to his neighbor, and a person approaching on a road and visible on the horizon as a mere dot “talked” as someone acting in close-up’ (Arnheim 1957, 227). The coming of speech, which draws attention away from all the interactions of man and the extrahuman world, and focuses it exclusively on ‘the monotonous motions of the mouth’ (Arnheim 1957, 228), stifles this conversation: ‘it endows the actor with speech, and since only he can have it, all other things are pushed into the background’ (Arnheim 1957, 227).
The idea of the sonification of the visible suggests a redemptive reversal of this silencing, for it proposes that everything may be able to speak its name by techniques of sonification or the donation to objects of voices. Sonification suggests that there are no silent objects, only inaudible ones. Objects are brought to life by being sonorised, since sound has the power, in the words of sound artist Ros Bandt, to move objects from a spatial order into ‘the ephemeral temporal zone’, meaning that ‘[t]he physical point of demarcation of the object from the immaterial becomes blurred with the use of sound, and its presence can change through time because of the sound’ (Bandt 2001, 53). The sounds produced by various kinds of recoding are often treated as though they had been implicit in their sources all along; recoding brings to light, or to hearing, the recording that, remembering the cardiac etymology of the word, every object will thereby and thereafter seem to have by heart.
This notion receives its earliest and still most influential formulation in ‘Primal Sound’, a 1919 essay by Rainer Maria Rilke, a text that has been repeatedly replayed by historians of sound and media, especially following its reproduction in full in Friedrich Kittler’s Gramophone, Film, Typewriter (Kittler 1999, 38-42). Rilke splices two memories of his youth. The first is of seeing a home-made phonograph demonstrated as a schoolchild.
The sound which had been ours came back to us tremblingly, haltingly from the paper funnel, uncertain, infinitely soft and hesitating and fading out altogether in places…We were confronting, as it were, a new and infinitely delicate point in the texture of reality, from which something far greater than ourselves, yet indescribably immature [unsäglich anfängerhaft], seemed to be appealing to us as if seeking help. (Rilke 1986, 127-8)
The striking feature of the gramophonic sound here is the fact that it seems feebly and tenderly incipient – ‘anfängerhaft’ – and yet also seems more powerful than ordinary speech. The sounds captured by the phonograph seem both fragile and persisting. This memory is joined with that of catching sight of a skull as an anatomy student, and perceiving the similarity of the line of the coronal suture to the groove of a gramophone record, which suddenly opens up the prospect of a kind of universal gramophony:
What if one changed the needle and directed it on its return journey along a tracing which was not derived from the graphic translation of a sound, but existed of itself naturally [an sich und natürlich Bestehendes] – well: to put it plainly, along the coronal suture, for example. What would happen? A sound would necessarily result, a series of sounds, music…
Feelings – which? Incredulity, timidity, fear, awe – which of all the feelings here possible prevents me from suggesting a name for the primal sound [Ur-Geräusch] which would then make its appearance in the world…
Leaving that aside for the moment: what variety of lines, then, occurring anywhere, could one not put under the needle and try out? Is there any contour that one could not, in a sense, complete in this way [auf diese Weise zu Ende ziehen] and then experience it, as it makes itself felt, thus transformed, in another field of sense? (Rilke 1986, 129-30)
Rilke’s fantasy is both ancient and modern. The idea that the material world is in fact not merely a set of meaningless forms, but rather a network of signs or signatures, that may be read out by the attentive or the enlightened, flourished in the form of the ‘doctrine of signatures’ from the ancient world well into the seventeenth century. It finds a late expression in a poem by Gerard Manley Hopkins, which sees being as a kind of exultant enunciation, existence straining or blazing into utterance:
As kingfishers catch fire, dragonflies draw flame;
As tumbled over rim in roundy wells
Stones ring; like each tucked string tells, each hung bell’s
Bow swung finds tongue to fling out broad its name;
Each mortal thing does one thing and the same:
Deals out that being indoors each one dwells;
Selves—goes itself; myself it speaks and spells,
Crying What I do is me: for that I came. (Hopkins 1970, 90)
But Rilke’s fantasy is also modern in that it suggests that reading the signs, or giving utterance to the latent voices of things, may be dependent not upon revelation or understanding, but upon technology – the technology of the gramophone.
It did not take long for other artists to recognise and extrapolate from this possibility. Only a few years later, in 1922, László Moholy-Nagy suggested that ‘Since it is primarily production (productive creation) that serves human construction, we must strive to turn the apparatuses (instruments) used so far only for reproductive purposes into ones that can be used for productive purposes as well’ (Moholy-Nagy 2004, 331). Where Rilke imagined the systematic playing out of the existing sound-inscriptions in the world, Moholy-Nagy proposed a more direct inscription of sound:
[T]he grooves are incised by human agency into the wax plate, without any external mechanical means, which then produce sound effects which would signify without new instruments and without an orchestra – a fundamental innovation in sound production (of new, hitherto unknown sounds and tonal relations) both in composition and in musical performance. (Moholy-Nagy 2004, 332)
Although Rilke and Moholy-Nagy are often associated in histories of recorded sound, they seem to be drawing out opposed possibilities from it. Rilke’s aesthetic was gramophonic, in that it emphasised the automatistic playing out of already recorded sounds, albeit recorded without human agency. Moholy-Nagy was attempting to snatch creative control back from the process of recording, reducing in the process all of the mediations that the apparatus of recording introduced. His aesthetic is therefore more aptly called phonographic. In his essay ‘New Form in Music: Potentialities of the Phonograph’ of 1923, Moholy-Nagy enumerated some of these potentials. Firstly, ‘[b]y establishing a groove-script alphabet an overall instrument is created which supersedes all instruments used so far’ (Moholy-Nagy 2004, 332). Secondly:
The composer would be able to create his composition for immediate reproduction on the disc itself, thus he will not be dependent on the absolute knowledge of the interpretative artist. So far, the latter was in most cases able to smuggle in his own spiritual experience into the composition written in note form…Instead of the numerous “reproductive talents,” who have actually nothing to do with real sound creation (in either an active or a passive sense), the people will be educated to the real reception of creation of music. (Moholy-Nagy 2004, 332-3)
Moholy-Nagy’s vision seems to be of the kind of direct and immediate production of sound, without the need for elaborate mediations and encoding, that would not in fact materialise until the advent of digital synthesis and mixing at the end of the twentieth century. Strikingly, however, the direct control over the reproduction process, by means of an inscription process that will leave no room for interpretation, is bought at the cost of a kind of systematic ‘deaf spot’ in the system, in that the inscription of the sound must be brought about by the hand, and as the result of the internalising of a complex system of encoding, the visual script of the gramophone groove. The scrivener of sound is in the position of the deaf person taught to form words that he cannot himself hear, or, perhaps more accurately, like the reader of the phonautograph in which Bell first traced the flickerings of sound for the benefit of the deaf.
In 1932, Oskar Fischinger, who had been making animated abstract films with some success for several years, became interested in the resemblances between his abstract designs and the patterns on optical film soundtracks (one of the most successful of which was actually called the ‘photophone’ system). He spent some years working on a film called Ornament Ton (Ornament Sound) involving sound drawn directly on to film; Fischinger’s idea was that the shapes would be projected in the visual frame so that viewers would see precisely the shapes that were generating the very sound they were hearing (Moritz 2004, 219). Meanwhile, Moholy-Nagy was also experimenting with drawing directly on to optical soundtrack, once remarking to a friend as he sketched his face ‘I can play your profile…I wonder how your nose will sound’ (Moholy-Nagy 1969, 68). In 1933, he inscribed the alphabet into the optical soundtrack, which produced, when played back, ‘a strange tone sequence, a third dimension, so to speak, to the written and spoken alphabet’ (Moholy-Nagy 1969, 97).
Fischinger was an adherent of various mystical ideas and systems and had the leaning towards universal analogy that is a feature of such systems. This may well have encouraged his search for a grammar of visual forms that would correspond to an auditory grammar (Moritz 2004, 29, 43-4). His work in optical sound therefore seems to be the exact counterpart to his work on abstract animated film – he spoke in 1934 of his dream of making ‘an absolute colour work, born wholly out of music, comprehensible to all the people on earth’ (Moritz 2004, 55). His creation of optical poetry and the visualisation of musical forms, most especially in the work he did as part of the production of Disney’s Fantasia (none of his work made it into the final film, but the Bach Toccata and Fugue section of the film was adapted from his designs), was matched by his belief in the auditory potential of visual forms and objects, a view that he passed on to the young John Cage when he met him in 1937 (Moritz 2004, 77-8). Cage recorded that ‘Fischinger’s whimsical notions about sight and sound opened a new door for me, something that stays with me always’ (Moritz 2004, 166), and commemorated the impact that Fischinger had had upon him in the form of a mesostic.
Surprisingly, though, Fischinger seems to have moved beyond sound, into the higher form of optical music. He seems to have come to see sound as inessential, rather than part of the essence of the object. Of Radio Dynamics, his 1944 silent abstract film, he wrote:
If there is sound necessary, then the music has to go with the movement of the image, the motion of the forms. Light is the same as Sound, and Sound is the same as Light. Sound and Light are merely waves of different length. Sound and Light waves tell us something about the inner and outer structure of things. Non-objective expressions need no perspective. Sound is mostly an expression of the inner plastic structure of things, and should also not be needed for non-objective expressions. The more unessential material we can take away, the more the essential, the non-objective absolute truth, can come forth. (Moritz 2004, 184)
The authoritarian violence which lurks in every absolute attachment emerges in Fischinger’s condemnation, late in his life, in November 1956, of the cinema, for its addiction to realism and storytelling: ‘Another Mohammed must come to set in motion a new Bildersturm and to destroy all the films of “reality,” and, I hope, at the same time all the reproductions of paintings – the substitutes which poison the creative channels of art’ (Moritz 2004, 189).
Moholy-Nagy and Fischinger were not the only people interested in the techniques of direct sound-writing. Rudolph Pfenninger had developed in the late 1920s a system of ‘tönende Handschrift’ (sonorous handwriting) that enabled him to write directly on to the optical soundtrack of films (Levin 2003, 52-3). During the early 1930s, groups of researchers in the USSR were also making significant advances in the production of synthetic sound. Thomas Levin argues that the advent of synthetic sound fundamentally changed ‘the ontological stability of all recorded sound’ (Levin 2003, 61). Previously, ‘all recorded sound was always a recording of something – a voice, an instrument, a chance sound’. Subsequently, as an anonymous review in the Völkischer Beobachter in 1932 put it, the sound-scriptor ‘produces tones from out of nowhere [schafft Töne aus dem Nichts]’ (quoted Levin 2003, 58).
The new compact of sound and materiality established by this universal phonography is suggested by the phrase ‘sound sculpture’, the uses of which express an interesting reversibility with regard to the relations between the visible and the auditory. At its simplest, a sound sculpture is a sculpture that produces sound. In a sense, sound sculptures may all be thought of as instruments. If Rilke’s ‘Primal Sound’ opens up the prospect that every visible physical form in the world might constitute a sort of score, in another sense it might also be said to instrumentalise those forms, to turn them into instruments or the kind of playable objects that sound sculptures are. But the phrase sound sculpture is also commonly used to refer to a sort of immaterial sonorous quasi-object, thought to be sculpted or shaped by, and out of, sound. In both cases, though in different ways, there is an attempt to embody the embodiment of sound, and thereby to reduce the irreducible ambivalence of sound, namely that it has material force without having material form.
There has seemed to many to be a striking asymmetry in the human relations to sound and vision. Sound always seems, as Rick Altman has suggested, to pose or ask questions. In cinema, the sound of something invisible asks to be completed by identification of its source (Altman 1980, 74). Since all sound seems to be naturally in the genitive case, we seem naturally to wonder of a sound, what is that the sound of? This does not appear to be easily reversible. Only certain kinds of visual objects – ones that we have good reason to suspect exist in order to make sounds – will naturally provoke the corresponding question, what sound does that make? Where visible objects seem self-sufficient, sonorous events seem to point or convey us elsewhere, or backwards in time to their point of emission. Visible objects produce sounds; sounds do not produce objects in any equivalently straightforward way.
It is this lopsidedness to which practices known as sonification seem to reply. Sonification has been defined in various ways (Worrall 2009, 313-14). As David Worrall usefully observes, there is a difference between musical and artistic forms of sonification and more technical forms in which the purpose is ‘to represent data in such ways that structural characteristics of the data become apparent to the listener’ (Worrall 2009, 313-14).
One of the striking things about sonification is its general non-utility. This is not to say that there are no circumstances in which sonic information is useful, or even vital. In general, auditory information systems perform particularly well in circumstances in which it is necessary for users to be able to monitor continuously the state of a variable system, and to respond quickly to changes in that state, even without knowing precisely what those changes are. Sonification, that is, is particularly good at alerting us to change, hence its use for the processes that have become known as monitoring. A heart monitor allows those in an operating theatre to detect, and quickly respond to, changes in speed, rhythm or amplitude. Sonification seems to interact well with the sampling structure of perception, the fact that we do not simply expose ourselves to sensory stimulus of any kind, but rather compress that data into patterns, which we repeatedly sample to check for variations. The reason that sonification seems particularly effective in the areas where it is effective is that the ear seems particularly apt to make out these patterns, and also to detect variations in them. Volcanologists and seismologists have used sonification of seismic readings in order to help the recognition of patterns that might seem too complex to make out if presented in the form of visual data.
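Most such monitoring relies on what the sonification literature calls parameter mapping: a stream of readings is mapped onto an audible parameter such as pitch, so that the ear can track its fluctuations over time. The following minimal sketch, using only Python’s standard library, maps an invented series of readings onto sine-tone pitches and writes the result to a WAV file; the data, the mapping range and the tone length are illustrative assumptions rather than the conventions of any particular monitoring system.

```python
import math, struct, wave

# A minimal parameter-mapping sonification: each reading in a data series
# becomes a short sine tone whose pitch rises with the value. The readings
# and the mapping ranges are invented purely for illustration.

readings = [0.20, 0.25, 0.22, 0.80, 0.85, 0.30, 0.28, 0.27]  # e.g. sensor values in 0..1
SAMPLE_RATE = 22050
TONE_SECONDS = 0.25
LOW_HZ, HIGH_HZ = 220.0, 880.0      # a two-octave range, so change is easy to hear

def value_to_pitch(x):
    """Map a value in 0..1 onto a frequency between LOW_HZ and HIGH_HZ."""
    x = max(0.0, min(1.0, x))
    return LOW_HZ + (HIGH_HZ - LOW_HZ) * x

frames = bytearray()
for x in readings:
    freq = value_to_pitch(x)
    for n in range(int(SAMPLE_RATE * TONE_SECONDS)):
        sample = 0.5 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
        frames += struct.pack('<h', int(sample * 32767))    # 16-bit little-endian PCM

with wave.open('sonification.wav', 'wb') as wav:
    wav.setnchannels(1)         # mono
    wav.setsampwidth(2)         # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```

Played back, the jump in the fourth and fifth readings announces itself at once; what the listener cannot do, as the following paragraphs suggest, is read the absolute values back off the pitches.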
We often assume that hearing is more concrete and immediate than seeing, perhaps because we respond much more quickly and involuntarily to things like crying babies and changes in engine noise than we do to fluctuations in forms of visible display. This makes it seem and feel as though the ear were more open to the immediacy of events, because less involved in analysing, interpreting or generally making potentially fallible decisions about those events. But in fact the sensation we have of the ear’s capacity to detect and respond quickly to things themselves, as they occur, the sensation we have that we are sensing, rather than interpreting, is the outcome of the energetically interpretative action of hearing. We might even say of hearing that it has a certain intolerance of particularity, of the Ding-an-sich or Klang-an-sich. Indeed, hearing may be said to be primarily statistical, in the primary and original meaning of the term, namely that it is good at detecting (or projecting) states of things and departures from those states, these states being abstract syntheses or higher-level generalisations of spreads of particularity.
But the strength of auditory perception, namely that it is primarily qualitative rather than quantitative, is also its limitation in many circumstances. That is, it is good for the registering of change, but not good for the measurement of the degree of change. Most of us can detect when a tone is followed by another an octave higher, but those without musical training are very unlikely to be able to distinguish with any precision the actual degrees of separation of musical intervals within the octave. A Geiger counter may provide good indications of rising and falling levels of radiation, but it would be unwise to rely on its auditory evidence alone to identify absolute or threshold levels – of safe exposure, for example. An auditory altimeter is good at providing rapid feedback about changes in height, but if you want to know what your actual height is, as you plummet to earth, you are going to have to have perfect pitch in order to be able to read that off from sound alone.
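The point can be made concrete with a toy version of such an auditory altimeter, sketched below in Python with invented mapping constants: the interval between two successive tones (relative change) is something any ear can register, but recovering an altitude from a single tone means inverting the mapping, which in effect demands perfect pitch of the listener.

```python
import math

# A toy auditory altimeter: altitude is mapped onto pitch so that equal
# steps of altitude give equal musical intervals. The constants below are
# invented for illustration only.

BASE_HZ = 220.0        # pitch at ground level (assumed)
OCTAVES_PER_KM = 1.0   # one octave of pitch per kilometre of altitude (assumed)

def altitude_to_pitch(metres):
    return BASE_HZ * 2 ** (OCTAVES_PER_KM * metres / 1000.0)

def pitch_to_altitude(hz):
    """Inverting the mapping requires an absolute judgement of frequency,
    in effect perfect pitch."""
    return 1000.0 * math.log2(hz / BASE_HZ) / OCTAVES_PER_KM

a1, a2 = 3000.0, 2900.0                      # a drop of 100 metres
f1, f2 = altitude_to_pitch(a1), altitude_to_pitch(a2)
interval = 12 * math.log2(f1 / f2)           # size of the change in semitones

# The falling interval of roughly a semitone is easy to hear as 'descending';
# naming the altitude from the second tone alone is quite another matter.
print(f"{f1:.1f} Hz -> {f2:.1f} Hz  (a fall of {interval:.2f} semitones)")
print(f"{f2:.1f} Hz corresponds to {pitch_to_altitude(f2):.0f} m, "
      f"but only for a listener who can name that frequency absolutely")
```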
Enthusiasts for sonification processes often considerably overestimate the capacities of the ear. David Worrall, for example, writes that the ear, being continuously awake and vigilant, ‘constantly monitors the world around us and, in doing so, directs our visual and kinesthetic attention’ (Worrall 2009, 325). He then goes on to offer the following proof:
The observation that our hearing leads our vision in making sense of the world is amply demonstrated by the importance of Foley (sound effects) in film; the imitation of the sound of a horse’s hooves, over stones, through mud, and so on by knocking coconut halves together is much more convincing than an unenhanced recording of the sound of the hooves themselves. (Worrall 2009, 325)
But Worrall’s example expertly slits the gizzard of his argument, for, far from our hearing leading our vision in making sense of the world, it is vision (the fact that we know what it is we are meant to be hearing because we can see it) that leads us to hear a sound as a particular sound, to hear it as a sound of. If our vision really were led by our hearing, the only film in which there would not be a drastic sense of being misled on hearing Foley coconuts would be Monty Python and the Holy Grail, in which medieval knights in fact caper over the landscape on pretend steeds, with peasants trailing after them clacking coconut halves together, meaning that here, for once, what we see closely matches what we hear. This asymmetry, the fact that our hearing is nearly always matched to our vision rather than vice versa, is the foundation both of ventriloquism and of cinema sound. The very fact that sonification is also regularly known as ‘auditory display’ should be indication enough of the fact that the purpose of sonification is usually to provide information in a form that approximates to something visual.
The difficulty of reading sonified data is suggested by the difficulty of reliably reading out even the rare examples of referential music, such as Beethoven’s Pastoral Symphony. Sitting cross-legged on the floor in primary school, and prompted by the gloss on the musical narrative provided by the Headmistress, I strove dutifully but in vain to make out all the sound pictures I had been instructed to hear, wind, thunder, birdsong, carousing peasants, and so forth.
And yet the passion for sonification, the rendering in sound of visible or nonsonorous forms, seems in recent years to have approached a condition that we might see as a contemporary form of conversion hysteria. One might see the beginnings of this in the response to Bell’s photophone. Comparing Bell’s discovery with developments in astronomical spectroscopy, the journal Engineering speculated feverishly:
Who, after Prof. Bell’s experiments, will have the hardihood to affirm that sounds taking place in the far off regions of the universe may not one day be heard upon the earth, and new fields of acoustical astronomy may not be opened to the intelligence of man. When such a time arrives, the thought of the poet will be clothed with the truth of the fact, that “Light is the voice of the stars.” (quoted Anon 1880c, 177)
An editorial in Science reproduced these swollen sentiments in order to issue a sober reproof to Engineering for its unscientific exaggeration. Yet Bell himself was drawn into the enthusiasm for celestial sonification. A couple of months later, Science reported that he had visited the Meudon Observatory in Paris in order to see if he could use the photophone ‘for the reproduction of those sounds which these movements must necessarily produce on the surface of the sun’ (Anon 1880d, 304). Had he been able to assemble the images into an animated series of sufficient duration (since sound could be produced only from fluctuations of light intensity and not from static images), Bell might very well have been able to produce some kind of sonification of the visual data, but there seems no good reason to believe that the sounds played from the images would in any sense resemble the original solar ‘sounds’ – even supposing that conditions on the surface of the sun might be said to allow for the existence of anything approximating to what on earth might be called ‘sound’. Nevertheless, the article concluded that ‘the idea of reproducing on earth the sounds caused by great phenomena on the surface of the sun was so important that the author’s priority should be at once secured’ (Anon 1880d, 304). Later, Bell would develop a device he called the ‘spectrophone’, which enabled the analysis of a spectrum beyond the visible range through sound (Bruce 1973, 341-2).
There is, even at this early stage, a kind of magical thinking that is bred by the idea of sonification, which is evident precisely in the passion for sonification even in the face of its conspicuously limited utility. The point of sonification lies in a mysticism of the primal, a set of beliefs that sees translation into sound as a kind of making manifest of the latent truths, of a set of absolute but hidden primal conditions. The act of sonification is understood as a kind of re-enchantment of the world, the giving, which always imagines itself to be a giving back, to a voiceless world, of the voice it always lacked, but was still somehow always already, even if it was also always still not yet, unfalsifiably its own. Sonification also connects with that strange and pervasive fantasy, expressed in such works as Florence McLandburgh’s ‘The Automaton Ear’ (1876), that no sound once emitted ever quite dies away, even though it may ceaselessly diminish, and that technological advances might allow us, by selectively amplifying depleted sounds, to restore them to their full, sonorous presence. Sonification helps this fantasy of the survival of primal sound to survive.
‘Light is thus made to produce sound, and the ancient fable of Memnon’s statue is realised by modern science’, wrote the Journal of the Royal Society of Arts in response to Bell’s photophone (Anon 1880b, 848). Memnon was a mythical Ethiopian king whose mother was the goddess of the dawn, Eos or Aurora. After he was killed by Achilles in the Trojan War, a huge statue of him was erected in Thebes. Following an earthquake in 27 BC, the statue began to give out a sound like a voice every time it was hit by the morning rays, interpreted by many as a song of greeting to his mother. The tradition seems to have given rise to a number of parallel stories about pillars in Muslim mosques that similarly sang when struck by the sun (Goldzher 1886, 311). M.R. Duffey has proposed the Memnon myth as a model for a series of thermal automata and other sound-generating heat engines, which he proposes accordingly to call ‘memnonia’. Among them are ‘floral memnonia’, exploiting the principle that ‘[f]lower petals might function both as solar reflectors and as resonant cavities, thermokinetically unfolding and orienting in preparation for the music’ (Duffey 2007, 53).
The contemporary equivalent to Rilke’s skull-score, in our era of neuromania, is to be found in the many efforts to sonorise, or otherwise score music from, the data provided by brain activity. One of the earliest such projects was Alvin Lucier’s Music for Solo Performer, for enormously amplified brain waves and percussion (1965). With electrodes attached to his scalp, Lucier maintained himself in a state of relaxation in order to produce alpha brainwaves, at about 8-12 Hz, below the range of human hearing. This output was used to drive speaker diaphragms, which in turn set instruments such as gongs and snare drums vibrating (Lucier 1995, 294). We might note that Lucier’s work avoided much of the sonorous mysticism of subsequent efforts to ‘hear’ thought, precisely because he shied away from any simple kind of musical modelling, explaining that ‘all around me were compositional people who wanted me to use technique, all of the things you learn – contrast, pacing, texture, things of that kind. I had to eliminate those to get at the poetry of the piece, which demanded that a solo performer sit in front of an audience and try to get in that alpha state and make his or her brain waves come out, to emerge with enough energy to drive an amplifier and do the piece’ (Lucier 1995, 50). The point for Lucier was precisely to avoid the sentimental effort to produce a sound-portrait of patterns of brain activity, responding to and representing shifts in the subject’s mood and attention. If the subject lost concentration, the result was not an interesting spike or arabesque in the contour of the sound, it was simple silence, as the alpha waves stopped.
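What the piece relies on, in signal terms, is not the pitch of the alpha rhythm (at around 10 Hz it lies below hearing) but its presence or absence, its energy. The sketch below is not a model of Lucier’s analogue apparatus, only a rough illustration of that general signal path in Python with NumPy: a synthetic ‘EEG’ trace is filtered to the 8-12 Hz band, and its second-by-second energy decides whether the loudspeaker is being driven at all.

```python
import numpy as np

# A rough sketch of the signal path in a piece like Lucier's: extract the
# 8-12 Hz (alpha) band from an EEG-like trace and use its energy to drive a
# loudspeaker. This illustrates the principle only; the 'EEG' is synthetic
# and the threshold and sampling rate are assumptions.

FS = 250                                   # sampling rate in Hz (typical for EEG)
t = np.arange(0, 10, 1 / FS)               # ten seconds of signal

# Fake EEG: alpha activity present only in the middle four seconds, plus noise.
alpha = np.where((t > 3) & (t < 7), np.sin(2 * np.pi * 10 * t), 0.0)
eeg = alpha + 0.3 * np.random.randn(t.size)

# Band-pass 8-12 Hz by zeroing all other frequency bins.
spectrum = np.fft.rfft(eeg)
freqs = np.fft.rfftfreq(eeg.size, 1 / FS)
spectrum[(freqs < 8) | (freqs > 12)] = 0
alpha_band = np.fft.irfft(spectrum, n=eeg.size)

# The 10 Hz oscillation itself is inaudible; what matters is its energy,
# which here gates the amplifier second by second.
for second in range(10):
    window = alpha_band[second * FS:(second + 1) * FS]
    rms = np.sqrt(np.mean(window ** 2))
    print(f"{second:2d}s  alpha RMS = {rms:.3f}  speaker {'ON' if rms > 0.3 else 'off'}")
```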
More recent enterprises of this kind show all the dubiousness of contemporary sonifications. In August 2003, James Fung of the Regenerative Brainwave Music group hooked up 48 meditating people and averaged their brainwave activity into a piece of music (an impeccably average one). In July 2004, a concert entitled Listening to the Mind Listening was presented at the Sydney Opera House. A number of groups had been given a dataset generated from the recordings of brain activity of somebody listening to a piece of music called ‘Dry Mud’ by David Page, from his 1997 CD Fish. This dataset was then used as the basis for ten separate sonifications. The ‘hypothesis’ of the project was as follows:
1. Music has effects on the electrical activity of the brain recorded with EEG;
2. Information in EEG can be heard in a sonification of the data;
3. Therefore, events in music produce corresponding events in a sonification of EEG data recorded while listening to the music (Barrass et al. 2006, 14).
The pieces produced, and the detailed analysis undertaken of them, demonstrate emphatically that there is in fact no such correspondence.
Sonification prolongs a mystical sound-obscurantism that gives sound studies much of its impetus while also enfeebling it intellectually. ‘Ever since the invention of the phonograph’, writes Kittler, ‘there has been writing without a subject. It is no longer necessary to assign an author to every trace, not even God’ (Kittler 1999, 44). And thus, he adds, ‘the impossible real transpires’ (Kittler 1999, 46). I can dramatise two responses to this in the work of Brandon LaBelle and Seth Kim-Cohen. At the end of his chapter on acoustic ecology, LaBelle writes that the recordings of Hildegard Westerkamp and others associated with the movement known as acoustic ecology produce ‘original meanings [which] hark back to Schafer’s claim for the Ursound, to the collective unconscious of our aural memory, that primary location of unity and instinct’ (LaBelle 2006, 215). LaBelle is in fact critical of the moralism of acoustic ecology, and insists that there will never be direct transcription of the Ursound, that there will always be noise in the circuit. And yet his concluding remarks seem to return us to this mystical primality:
What acoustic ecology reveals, and must contend with, is the full body of sound in all its beautiful and terrible dimensions, from the deafening to the hauntingly attractive. Noise comes into play because it is unavoidable: tracking sound into such global and ancient territories necessarily delivers up the strange, the grotesque, the horrific, along with the magnificent…the Ursound is necessarily in all things, and in all places, as a total interpretative mixing of boundaries, where we live inside dreams and hallucinations, where place is fixed and dislocated in one move, where the voices of animals generate reverie inside the listener’s journey. (LaBelle 2006, 215)
Seth Kim-Cohen, by contrast, criticises the idea of the elementality or originality of sound. For both Rilke and Kittler, he suggests, the brain-groove fable stands for the possibility of a kind of access to the indexical real, considered alternately as either ‘pure’ or ‘raw’. For Rilke, this kind of phonography would allow the unsilencing of the fundamental or primal state of things, would allow things simply to register themselves, immediately. Kim-Cohen finds that Kittler shares these conceptions, though his idea of the immediacy of sound, or of sound as the sign of the immediacy of the real, is ironically achieved through a kind of excess or apotheosis of media – in the interchangeability of data streams which can traverse and converse between all phenomena. Kim-Cohen urges, surely correctly, that sound can never in fact merely signify and sustain itself, since that iconic haecceitas is always itself necessarily mediated. Sound in itself, and the ipseity or in-itselfness of a particular sound, have formed the tight weave they have as a result of being heard that way, for particular kinds of historical reasons. In all such cases, the iconic, or the earconic, is always in fact ironic, since it can only ever be purely itself, its own sound-signature, as a result of mediations that return it to itself, the long way round.
The translation of the coronal suture into phonographic sound erases the contextual markers that make the initial signal readable. The suture may be authorless, but it is not readerless, not contextless. Perhaps to a physiologist, the coronal text might convey information from the palimpsest of the skull: about the brain it once housed, the body of which it was part, the family from whom it descended. But to drop a phonographic needle into the suture’s groove is meaningless. As sound, it no longer maintains any connection to the conditions that produced it. As sound, it is contextless data, pure noise. And let’s be clear that, contrary to apparent understanding, only noise is capable of purity. Signal, a product of traces and difference, is always impure, always shot through with the impurity of the other. (Kim-Cohen 2009, 100)
The idea of ‘pure sound’ is meaningless, since there is no pure sound, even if the idea of pure sound (which is not pure precisely because it is an idea) offers a performative contradiction of this. Sonification does not sound very far away from personification, and that seems apt, since there is much of imposture and impersonation in it. Sonification persuades us to pretend to believe that there is some hidden or implicit sound that has been brought to light or sounded out by the translation process, and that, by seeming to survive in some meaningful way through that translation, points back to its primal sonority and forwards to the prospect of its indefinite persistence through many further iterations. The sound that was never there in the first place is the product of a back-formation that makes it into the sound that will henceforth always have been there, waiting to be disinterred, disinaudiated.
Sonification induces a temporal perturbation, effecting what might be called a sleight of time. The act of auditory recoding that is performed upon a certain body of information turns it into a new thing, one that is connected with its original only by the thinnest of filaments. But the stubbornly genitive case of sound, its inseparability from the idea of an originating circumstance, helps us deceive ourselves into seeing this new thing as the actualisation of some primal sound-potential that was latent all along in the non-auditory source-material. But this primality is an after-effect of what has come later or last in time. The origin of the projected sonification therefore has its origin in it. Sonification gives rise to what seems to have given rise to it. Sound can do this, or cannot help but do it, because of sound’s failure of self-sufficiency, as the manifestation of a presence that it is not.
Clenched tight in this habit of thought is an unwillingness to accept the possibility of emergence, the same unwillingness as that found among proponents of intelligent design, who cannot accept that any kind of complex form or system can be produced except as the actualising of a pre-existing blueprint. One need not go from one form of magical thinking to another in accounting for this emergence. We cannot reliably predict the weather a year, or even a week, from now, but this is not because the weather is the result of supernatural causes. We may need to accept that there are many forms of indeterminable determination. The mystical or mythical reading of sonification as the sounding out of a universe of full and present primal sounds is a defence against this acceptance.
Altman, Rick (1980). ‘Moving Lips: Cinema as Ventriloquism.’ Yale French Studies, 60, 67-79.
Anon (1880a). ‘The Photophone.’ New York Times (30 August), 4.
Anon (1880b). ‘Professor Bell’s Photophone.’ Journal of the Royal Society of Arts, 28 (24 September), 847-8.
Anon (1880c). ‘Editorial.’ Science, 1 (9 October), 177-8.
Anon (1880d). ‘Application of the Photophone to the Study of the Noises Taking Place on the Surface of the Sun.’ Science, 1 (18 December), 304.
Anon (1881). ‘The Photophone.’ Appletons’ Journal, 10, 181-2.
Arnheim, Rudolf (1957). Film As Art. Berkeley, Los Angeles and London: University of California Press.
Bandt, Ros (2001). Sound Sculpture: Intersections in Sound and Sculpture in Australian Art. Sydney: Fine Art Publishing.
Barrass, Stephen (2006). ‘Listening to the Mind Listening: An Analysis of Sonification Reviews, Designs and Correspondences.’ Leonardo Music Journal, 16, 13-19.
Bell, Alexander Graham (1880a). 'On the Production and Reproduction of Sound By Light.’ American Journal of Science, 3rd Series, 20, 305-24.
------------------------------ (1880b). ‘The Photophone.’ Science, 1 (11 September), 130-1.
------------------------------ (1881). ‘The Production of Sound By Radiant Energy.’ Science, 2 (28 May), 242-53.
Bruce, Robert V. (1973). Alexander Graham Bell and the Conquest of Solitude. London: Victor Gollancz.
Duffey, M.R. (2007). ‘The Vocal Memnon and Solar Thermal Automata.’ Leonardo Music Journal, 17, 51-4.
Goldzher, I. (1886). ‘The Voice of Memnon.’ The Academy, 757 (6 November), 311.
Hopkins, Gerard Manley (1970). The Poems of Gerard Manley Hopkins. 4th edn. Ed. W.H. Gardner and N.H. Mackenzie. Oxford: Oxford University Press.
Kim-Cohen, Seth (2009). In the Blink of an Ear: Toward a Non-Cochlear Sonic Art. New York and London: Continuum.
Kittler, Friedrich (1999). Gramophone, Film, Typewriter. Trans. Geoffrey Winthrop-Young and Michael Wutz. Stanford: Stanford University Press.
LaBelle, Brandon (2006). Background Noise: Perspectives on Sound Art. London and New York: Continuum.
Levin, Thomas Y. (2003). ‘ “Tones from out of Nowhere”: Rudolph Pfenninger and the Archaeology of Synthetic Sound.’ Grey Room, 12, 32-79. Online at http://www.centerforvisualmusic.org/LevinPfen.pdf.
Lucier, Alvin (1995). Reflections: Interviews, Scores, Writings, 1965-1994. Cologne: MusikTexte.
Mackay, James (1997). Sounds Out of Silence: A Life of Alexander Graham Bell. Edinburgh and London: Mainstream Publishing.
McLandburgh, Florence (1876). ‘The Automaton Ear.’ In The Automaton Ear and Other Sketches (Chicago: Jansen, McClurg and Co.), 7-43.
Moholy-Nagy, László (2004). ‘Production-Reproduction: Potentialities of the Phonograph.’ In Christoph Cox and Daniel Warner, eds., Audio Culture: Readings in Modern Music (New York and London: Continuum), 331-3.
Moholy-Nagy, Sibyl (1969). Moholy-Nagy: Experiment in Totality. Cambridge, MA and London: MIT Press.
Moritz, William (2004). Optical Poetry: The Life and Work of Oskar Fischinger. Eastleigh: John Libbey Publishing.
Preece, William Henry (1880-1). ‘On the Conversion of Radiant Energy into Sonorous Vibrations.’ Proceedings of the Royal Society of London, 31, 506-20.
‘Regenerative Music.’ Online at http://eyetap.org/about_us/people/fungja/regen.html.
Rilke, Rainer Maria (1986). ‘Primal Sound.’ In Rodin and Other Prose Pieces, trans. G. Craig Houston (London: Quartet Books), 127-32.
Worrall, David (2009). ‘An Introduction to Data Sonification.’ In Roger T. Dean, ed., The Oxford Handbook of Computer Music (Oxford: Oxford University Press), 312-33.