Vague Interactions is an experimental interface that listens without trying to understand. It questions the traditional model of interactivity, which is based on clear, controlled, and efficient feedback, by introducing ambiguity, resonance, and sensitivity into the system.
Combining speech recognition, audio processing, and facial landmark detection, the system responds subtly to human interaction.
Instead of trying to decode or mirror the user's input, the system creates a space of non-understanding: it listens without projecting meaning, allowing uncertainty and ambiguity to exist.
The project takes inspiration from feminist critiques of empathy, particularly the danger of projecting our own emotions onto others, and proposes a different mode of interaction: an interface that acknowledges difference, ambiguity, and the impossibility of full understanding.
This project was created using p5.js and MediaPipe. The system combines multiple input sources into a smooth visual response. Audio input is captured from the browser's microphone using the Web Audio API, and the volume level dynamically drives the pulsation intensity of the torus. When the noise level exceeds a certain threshold, ASCII symbols are generated from the FaceMesh landmarks. Extended periods of silence cause the system to relax, reducing both structural noise and deformation. In parallel, facial landmark detection runs on MediaPipe's FaceMesh model, tracking key facial points such as the nose and eyes in real time and feeding them into the system's behavior.
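A minimal sketch of the audio path, assuming p5.sound's p5.AudioIn as the entry point to the Web Audio API. The names pulse, NOISE_THRESHOLD, SILENCE_LIMIT, and spawnAsciiFromLandmarks() are illustrative, not the project's actual identifiers:

```javascript
// Audio path sketch: microphone level drives pulsation; sustained
// silence relaxes the system back toward rest.

let mic;
let pulse = 0;          // pulsation intensity later fed into the torus
let quietFrames = 0;    // consecutive frames of near-silence

const NOISE_THRESHOLD = 0.15; // above this level, ASCII glyphs appear
const SILENCE_LIMIT = 180;    // ~3 s at 60 fps before the system relaxes

function setup() {
  createCanvas(600, 600);
  mic = new p5.AudioIn();     // browser microphone
  mic.start();
}

function draw() {
  background(10);
  const level = mic.getLevel();          // smoothed amplitude, 0..1

  // Volume drives pulsation; lerp turns spikes into an organic swell.
  pulse = lerp(pulse, level * 8, 0.1);

  if (level > NOISE_THRESHOLD) {
    quietFrames = 0;
    spawnAsciiFromLandmarks();           // glyphs traced from the face
  } else if (++quietFrames > SILENCE_LIMIT) {
    // Extended silence: ease the deformation back down.
    pulse = lerp(pulse, 0, 0.02);
  }

  // Stand-in visual: a circle pulsing with the mic (the piece uses a torus).
  noFill();
  stroke(220);
  circle(width / 2, height / 2, 150 + pulse * 100);
}

function spawnAsciiFromLandmarks() {
  // Placeholder: in the piece, ASCII symbols are generated at FaceMesh points.
}

function mousePressed() {
  userStartAudio();   // browsers require a gesture before audio can start
}
```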
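The landmark side can be sketched with MediaPipe's FaceMesh JavaScript API. The indices below follow the standard 468-point mesh topology (1 for the nose tip, 33 and 263 for the outer eye corners); how the project actually maps these points into the visual is an assumption:

```javascript
// Landmark path sketch: webcam frames go to FaceMesh; results arrive as
// normalized {x, y, z} coordinates per tracked point.

import { FaceMesh } from '@mediapipe/face_mesh';
import { Camera } from '@mediapipe/camera_utils';

const video = document.createElement('video');
let nose = null, leftEye = null, rightEye = null;

const faceMesh = new FaceMesh({
  locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${f}`,
});
faceMesh.setOptions({
  maxNumFaces: 1,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5,
});

faceMesh.onResults((results) => {
  const face = results.multiFaceLandmarks && results.multiFaceLandmarks[0];
  if (!face) return;        // no face detected: the system stays at rest
  nose = face[1];           // nose tip
  leftEye = face[33];       // outer eye corners
  rightEye = face[263];
});

// Feed webcam frames to the model; each frame triggers onResults above.
new Camera(video, {
  onFrame: async () => { await faceMesh.send({ image: video }); },
  width: 640,
  height: 480,
}).start();
```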
The central visual, an organic, continuously deforming torus, is generated by simplex noise, with parameters such as radius, shape size, deformation factor and color transitions influenced by the presence and activity of the user.
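A compact sketch of that torus, with illustrative parameter names (baseR, tubeR, deform, phase); in the piece these respond to the user's presence and activity. The project cites simplex noise, while p5.js only ships Perlin noise via noise(), so the built-in function stands in here (a library such as simplex-noise could be swapped in):

```javascript
// Noise-deformed torus sketch: each frame, the tube radius is modulated
// by a noise field sampled on the surface, so the form breathes over time.

let phase = 0;              // advances each frame so the surface keeps flowing
let deform = 0.5;           // deformation factor (raised by user activity)
const baseR = 120;          // major radius of the torus
const tubeR = 45;           // tube radius

function setup() {
  createCanvas(600, 600, WEBGL);
  noFill();
  stroke(200, 160, 255);
}

function draw() {
  background(10);
  rotateY(phase * 0.4);

  const steps = 48;
  for (let i = 0; i < steps; i++) {
    const u = (i / steps) * TWO_PI;     // angle around the major circle
    beginShape();
    for (let j = 0; j <= steps; j++) {
      const v = (j / steps) * TWO_PI;   // angle around the tube
      // Sample noise via cos/sin so the field stays continuous as the
      // angles wrap; phase animates the deformation over time.
      const n = noise(1 + cos(u), 1 + sin(u) + 0.5 * cos(v), phase);
      const r = tubeR * (1 + deform * (n - 0.5));
      vertex((baseR + r * cos(v)) * cos(u),
             (baseR + r * cos(v)) * sin(u),
             r * sin(v));
    }
    endShape(CLOSE);
  }
  phase += 0.01;
}
```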
The interaction involves no explicit commands and no recognition of specific meanings. Instead, the system focuses on resonance and ambiguity, reflecting a different approach to human-machine interaction inspired by critical theory.