Computation is ubiquitous and invasive, seeping into, and radically transforming, all aspects of life. It is also increasingly present in physical and public environments, with code/spaces emerging from its metainterface. Computation forces us to change our ways of seeing the world we share with it, adopting a computational gaze that is still strange to us but that becomes fundamental to the experience of computational media and art. Predicated on new and unique forms of aesthetic engagement, our relationship with the computational starts with aesthetics as a gateway to the perception of the computational substrate and its strangeness. This relationship is deeply ontological, and it develops through interfaces that emerge from the relationships between computational systems; between these systems and humans; and among humans haunted by computation. In this talk we explore how human perception is reconfigured by the computational, becoming multi- and crossmodal, and making sound and algorithmic listening fundamental epistemological resources for audiences and creators alike.
INTRODUCTION by José Alberto Gomes
As a conscious activity, listening is the reception of information through hearing. It involves identifying sounds and processing them into content: when we listen, we receive sounds and use our brains to convert them into messages that mean something. The act of listening involves complex affective, cognitive, and behavioral processes. It is a skill that demands varying levels of effort, concentration, and focus. Listening plays a key role in our lives, whether for survival or for cognitive development. It is, for example, the first of the four language skills, preceding speaking, reading, and writing, and it is the primal way of connecting with and perceiving our surroundings. Listening moves between different levels of consciousness. Since audio signals are interpreted by the human ear-brain system, implementing a listening ability in computers means simulating that complex perceptual mechanism, somehow, in software. This notion of teaching a machine to listen first became widespread in the artistic applications of computer music.
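As a purely illustrative aside, even the crudest machine-listening primitive hints at what simulating such perception in software can begin with. The zero-crossing rate below is one of the simplest features extracted from an audio signal, crudely distinguishing tonal (low rate) from noisy (high rate) sounds; the sine-tone input and sample rate are arbitrary choices for this sketch, not anything specific to the talk.

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ: a very
    crude cue separating tonal (low rate) from noisy (high rate)
    sounds."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

# One second of a 100 Hz sine tone sampled at 8 kHz: a 100 Hz sine
# crosses zero about 200 times per second, so the rate is ~200/8000.
sr = 8000
tone = [math.sin(2 * math.pi * 100 * t / sr) for t in range(sr)]
print(zero_crossing_rate(tone))  # ~0.025
```

Real machine-listening systems combine many such low-level descriptors, but the principle is the same: reduce a stream of samples to quantities a program can reason about.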
We live in a post-digital world in which invasive computation is fundamental to how communication functions and has become essential to artistic practice and the aesthetic experience.
Today, our ever-present digital prostheses are increasingly able to analyse and respond to sonic information as well. Machine listening is much more than a new scientific discipline or vein of technical innovation: it is also an emergent field of knowledge-power, of data extraction and colonialism, of capital accumulation, automation, and control, and it demands critical and artistic attention. Initially inspired by models of human audition, machine listening deals with questions of representation, transduction, grouping, the use of knowledge, and general sound semantics, for the purpose of performing intelligent operations on audio and musical signals.
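To make "intelligent operations on audio and musical signals" concrete, here is a minimal, self-contained sketch, an illustration rather than anything presented in the talk, of one classic descriptor: the spectral centroid, the amplitude-weighted mean frequency of a spectrum, often read as perceived brightness. The naive DFT and the 440 Hz test tone are assumptions of the sketch.

```python
import math

def dft_magnitudes(frame):
    """Naive O(n^2) DFT magnitude spectrum for bins 0..n//2 of a
    real-valued frame; fine for short illustrative frames."""
    n = len(frame)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        im = -sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    return mags

def spectral_centroid(frame, sample_rate):
    """Amplitude-weighted mean frequency (Hz) of the frame's spectrum,
    a rough correlate of perceived brightness."""
    mags = dft_magnitudes(frame)
    total = sum(mags)
    if total == 0:
        return 0.0
    bin_hz = sample_rate / len(frame)  # frequency resolution per bin
    return sum(k * bin_hz * m for k, m in enumerate(mags)) / total

# A 50 ms frame of a 440 Hz sine at 8 kHz: for a pure tone the
# centroid sits at the tone's own frequency.
sr = 8000
frame = [math.sin(2 * math.pi * 440 * t / sr) for t in range(400)]
print(spectral_centroid(frame, sr))  # ≈ 440.0
```

Features like this, computed frame by frame, are the raw material on which higher-level tasks such as classification, transcription, or surveillance are built.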
Artists contribute to the ongoing process of thinking about and questioning ourselves and the world through the engagement with artworks as tools, and art directly engages with those technological practices, primarily as a way to understand how they affect us and, finally, as a means to reorganise ourselves. Computation is capable of endlessly generating new environments, not because its technologies are new in themselves, but because it allows the permanent re-articulation of environments and the constant development of new and sometimes unprecedented relations with their inhabitants. In a wider view, computation is no longer only a means to create an artwork but can be the artwork itself.
But we are at a new stage of this ongoing process. The focus is no longer on machines emulating our auditory processes. Nowadays, the ways of listening are going through a new reconfiguration, pushed by the ubiquitous presence of computers, computer networks, and computational media in our lives. Immersed in computational technologies, our listening needs to adapt to online and offline environments marked by an “all-out internet condition” that turns culture into a code/space.
This talk will explore how human perception is reconfigured by the computational, becoming multi- and crossmodal, making sound and algorithmic listening fundamental epistemological resources for audiences and creators alike.
Miguel Carvalhais is a designer, artist, and musician. He is an assistant professor (with Habilitation) at the Faculty of Fine Arts of the University of Porto and a researcher at INESC TEC and i2ADS. He studies creative practices with computational systems and wrote the book “Artificial Aesthetics” on this topic. His research and practice explore how computational and procedural systems are read by humans, and how procedural discovery and interpretation are paramount for the creation of meaning and the aesthetic experience. His artistic practice spans computer music, sound art, live performance, audiovisuals, and sound installations. He runs the Crónica label for experimental music and sound art, the xCoAx conference (on computation, communication, aesthetics and x), and the Invisible Places symposium (on soundscape art and ecology).