In concept, the user experience involves a three-stage process. First, as they enter the foyer, visitors to the Musée encounter a console that translates live sound into visualizations in real time, to demonstrate the connection between sound and image. They snap fingers, speak, or laugh, causing fluctuations in the real-time visualizer on screen, which serves as a key or legend to the overall work.
Behind them, a large-scale projection shows an ongoing feed of sound visualization, collected in various rooms throughout the Musée. This, the second stage of the experience, suggests a chorus of voices and sounds that have passed through the Musée and been translated into a visual language: ghosts or spirits who have left an intangible signature behind. The idea is to create a sense of mystery and curiosity, to make people wonder what sorts of noises produced the visual information they are now seeing. They may also imagine what sounds they might leave behind as they tour the Musée.
The third stage of the project consists of interactions with the recording stations set up in the exhibition rooms, near the artworks. As they examine certain works, visitors encounter these stations, each presenting a different prompt or invitation to audible interaction. One station might ask visitors to tell it a secret, while another may simply let them know, « I am listening. » These are intended to incite literal conversations with the artworks: to ask people not just to look, but to express their reaction to the works in audible sound, which is then translated into real-time visualizations shown on companion Android devices integrated into the stations. As they leave the Musée, visitors again encounter the projection in the foyer and ponder whether any of what they are seeing was created by sound they made during their visit.
In practice, we did not have the technical capability to include microphones in each station, or to feed audio information to the foyer projection in real time. For the prototype, therefore, the stations were designed as markers indicating where sound had been collected the previous day to be processed into visuals for projection; they were outfitted with small screens visualizing visitors' real-time interactions, but did not actively record audio.
Far from being silent, as one might believe, the museum is filled with the voices and sound traces of its visitors, from a simple murmur to expressions of admiration. With our prototype, visitors are invited to interact with the works and so see sounds digitally transformed into images. Recording stations are placed throughout the museum's rooms, reminding visitors that the images projected in the foyer are the echo of their reactions to the works. The museum then murmurs in turn, creating a conversation between the work and the public, between sound and image.
The foyer was the ideal location for the projection, supporting the idea of breaking down the barriers of the MAC.
It is an invitation to take part in the breath of the Musée.
Microphone stations will also be dispersed throughout the rooms of the Musée, reminding visitors that they are active participants in the life of the Musée. They will stand as traces of the sound captures. Each will bear the title and description of the project and, to mark the visitor's interaction with the work, phrases such as:
I can hear you
The artworks have ears
Tell me a secret
What do you think?
Your presence leaves traces
Your passage leaves its mark
Tools and techniques
The project requires the following technology: recording devices to collect ambient sound in the museum, software to process the recorded audio information into custom visualizations, and a projector to project the pre-recorded visuals onto a suitable surface.
Different options were considered for sound recording. Ultimately, recording was carried out using Zoom H2 portable Handy Recorders. Material was captured as two-channel surround WAV files, which were then edited in Digital Performer 8. Our sound recordist combed through the files, searching for dynamic passages (sustained periods of active sound) and normalizing peaks. Once identified, these sections were cut into 15-minute chunks with fades on either end and exported as mono WAV files onto a memory stick to be passed on to our programmer.
For projecting the sound visualizations, we selected a Christie DHD555-GS 1DLP Laser Phosphor projector. Our plan to project in the foyer of the Musée required a projector with enough lumens to produce a visible image in an environment where sunlight would be a factor. Tests on the DHD555-GS showed that the projected image remained visible even in full sunlight, especially with light-coloured visualizations. The projector was connected to an old desktop computer running the visualization software created by our team's programmer.
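The visualization software itself was custom work by our programmer and is not reproduced here. As a rough illustration of the general approach (the names and the loudness-to-brightness mapping below are our own assumptions, not the actual implementation), a visualizer of this kind typically reduces each video frame's worth of audio to a single level that can drive a visual parameter:

```python
import math

def frame_levels(samples, rate=44100, fps=30):
    """Split a mono 16-bit sample stream into one slice per video frame
    and map each slice's RMS loudness to a 0-255 brightness value."""
    hop = max(1, rate // fps)  # samples per video frame
    levels = []
    for start in range(0, len(samples), hop):
        frame = samples[start:start + hop]
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        # Map RMS (0..32767 for 16-bit audio) to display brightness (0..255).
        levels.append(min(255, int(255 * rms / 32767)))
    return levels
```

Silence maps to darkness and sustained loud passages to full brightness, which is one plausible way the pre-recorded WAV chunks could have driven the projected imagery.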
Contenu : Menon Dwarka firstname.lastname@example.org
Communication : Joel McConvey email@example.com
Code : Marc Lavallée firstname.lastname@example.org
Fabrication : Geneviève Hébert email@example.com
Aménagement : Audrey Routhier firstname.lastname@example.org
Facilitation : Sarah Laurence email@example.com
Extra help : Garrick Ng