Audiovisual
Map sound behavior into visuals and performance-oriented output systems.
- OSC and MIDI visual control
- audio-reactive graphics
- Blender / Three.js / TouchDesigner
Learn the basic principles of tying sound to graphics. Start mapping the acoustic behavior of a virtual modular into a visual system.
By the end of this lesson, you should understand the following principles.
An audiovisual system becomes impressive only when the visual layer responds to actual musical structure, not just to the overall volume level.
Instead of forcing graphics to jump along to a finished track (as classic equalizer visualizations do), we can pull data straight from the heart of the modular patch, before the voices are mixed together in the master bus.
Reactive visuals should not be random decoration. They should reveal hidden musical behavior to the viewer.
If the chaos on screen only loosely corresponds to what the audience hears, the result feels dissonant. The connection between the visual and the auditory must feel like a single “living organism”.
To create high-quality audio-reactive visuals, you first need to analyze the sound. Useful inputs for mapping include:
- amplitude envelope (the overall energy of a voice)
- sub/bass energy
- transient activity (percussive hits and onsets)
- drone intensity (a slow, averaged level of sustained layers)
- control signals themselves (LFOs and envelopes taken directly as CV)
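To make this concrete, here is a minimal sketch of extracting these inputs in the browser with the Web Audio API’s `AnalyserNode` (a natural companion to a Three.js renderer). The `Features` shape, the bass bin count, and the onset threshold are illustrative assumptions, not a fixed API:

```typescript
// Minimal feature extraction for audio-reactive visuals (Web Audio API).
// Assumes a source is already connected, e.g.:
//   audioCtx.createMediaElementSource(audioEl).connect(analyser);
interface Features {
  rms: number;        // overall energy: the amplitude envelope
  bassEnergy: number; // average magnitude of the lowest FFT bins (sub/bass)
  transient: boolean; // crude onset flag: energy jumped since the last frame
}

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 1024;

const timeData = new Float32Array(analyser.fftSize);
const freqData = new Uint8Array(analyser.frequencyBinCount);
let lastRms = 0;

function extractFeatures(): Features {
  analyser.getFloatTimeDomainData(timeData);
  analyser.getByteFrequencyData(freqData);

  // RMS of the time-domain buffer approximates the amplitude envelope.
  let sum = 0;
  for (let i = 0; i < timeData.length; i++) sum += timeData[i] * timeData[i];
  const rms = Math.sqrt(sum / timeData.length);

  // Average the lowest bins for sub/bass energy
  // (fftSize 1024 at 48 kHz: 8 bins cover roughly 0-375 Hz).
  const bassBins = 8;
  let bass = 0;
  for (let i = 0; i < bassBins; i++) bass += freqData[i] / 255;

  const transient = rms - lastRms > 0.1; // naive onset detection
  lastRms = rms;
  return { rms, bassEnergy: bass / bassBins, transient };
}
```

Call `extractFeatures()` once per render frame; drone intensity can be derived by heavily averaging `rms` over time, which is exactly the smoothing discussed below.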
Mapping is the art of choosing what affects what. A good mapping has an internal logic:
```mermaid
graph LR
    subgraph MODULAR[Audio / CV Sources]
        SUB[Bass Energy / Sub]
        TRANS[Transient Activity]
        DRONE[Drone Intensity]
        LFO[Control LFO]
    end
    subgraph VISUALS[Visual Engine]
        SCALE[3D Scale / Mass]
        PART[Particle Burst / Bloom]
        CAM[Camera Motion / Turbulence]
        ROT[Rotation / Hue Shift]
    end
    SUB -.->|Amplitude envelope| SCALE
    TRANS -.->|Pulse / Trigger| PART
    DRONE -.->|Slew / Averaged level| CAM
    LFO -.->|Direct CV| ROT
    classDef signal fill:#1A202C,stroke:#2D3748,stroke-width:2px,color:#E2E8F0;
    classDef visual fill:#2C7A7B,stroke:#319795,stroke-width:2px,color:#E6FFFA,stroke-dasharray: 4 4;
    classDef env fill:none,stroke:#4A5568,stroke-width:1px,stroke-dasharray: 2 2;
    class SUB,TRANS,DRONE,LFO signal;
    class SCALE,PART,CAM,ROT visual;
    class MODULAR,VISUALS env;
```
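As a hedged sketch of what one frame of this mapping could look like in Three.js: the `features` object is assumed to come from an analysis step like the one above, and `applyMapping` with all its parameter ranges is illustrative rather than a fixed recipe.

```typescript
import * as THREE from "three";

// One frame of parameter-to-parameter mapping, mirroring the diagram above.
function applyMapping(
  features: { bassEnergy: number; transient: boolean; droneLevel: number; lfo: number },
  mesh: THREE.Mesh,
  material: THREE.MeshStandardMaterial, // assumes material.emissive is non-black
  camera: THREE.PerspectiveCamera,
  time: number // elapsed seconds
): void {
  // Bass energy -> 3D scale / mass (smooth it upstream, see below).
  mesh.scale.setScalar(1 + features.bassEnergy * 0.5);

  // Transients -> a short emissive burst that decays, not continuous flicker.
  if (features.transient) material.emissiveIntensity = 2.0;
  material.emissiveIntensity *= 0.92;

  // Averaged drone level -> slow camera turbulence.
  camera.position.x = Math.sin(time * 0.3) * features.droneLevel;
  camera.lookAt(mesh.position);

  // Control LFO (assumed in [-1, 1]) -> hue shift, mapped directly.
  material.color.setHSL((features.lfo + 1) / 2, 0.6, 0.5);
}
```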
One of the most common mistakes is routing a raw envelope or audio signal directly to the scale of a visual layer. Sound changes incredibly fast; graphics that react at the speed of audio look jittery and cause visual fatigue.
Use a smoothing stage (a Smoothing or Lag node, or a modular slew limiter) before sending the signal to the renderer. This way the graphics still react quickly (the attack remains instant) but return to their resting shape more organically and smoothly.
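In case your visual engine lacks a built-in Lag node, a minimal sketch of such a smoothing stage is a one-pole smoother with asymmetric attack and release, approximating a modular slew limiter. The class name and coefficients are illustrative, and the coefficients are frame-rate dependent:

```typescript
// Fast attack keeps hits punchy; slow release removes jitter on the way down.
class SlewLimiter {
  private value = 0;
  // Coefficients in (0, 1]: higher = faster response (per-frame, so FPS-dependent).
  constructor(private attack = 0.8, private release = 0.05) {}

  process(target: number): number {
    const coeff = target > this.value ? this.attack : this.release;
    this.value += (target - this.value) * coeff;
    return this.value;
  }
}

// Usage: smooth the raw envelope once per frame before mapping it to scale.
const scaleSlew = new SlewLimiter();
// mesh.scale.setScalar(1 + scaleSlew.process(features.rms) * 2);
```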
Reacting to the overall mix often turns into “mush”, where it’s impossible to isolate a specific instrument. It’s much more effective to route individual stems (Kick only, Bass only) to the visualizer.
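A sketch of this per-stem routing with the Web Audio API, assuming each stem lives in its own `<audio>` element (the `#kick` / `#bass` IDs are hypothetical):

```typescript
// Give each stem its own AnalyserNode instead of analysing the full mix.
const ctx = new AudioContext();

function analyserFor(el: HTMLMediaElement): AnalyserNode {
  const src = ctx.createMediaElementSource(el); // one source per element
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 512;
  src.connect(analyser);
  analyser.connect(ctx.destination); // keep the stem audible in the mix
  return analyser;
}

const kickAnalyser = analyserFor(document.querySelector<HTMLAudioElement>("#kick")!);
const bassAnalyser = analyserFor(document.querySelector<HTMLAudioElement>("#bass")!);
// Drive particle bursts from kickAnalyser and mass/scale from bassAnalyser.
```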
If the entire screen overflows with effects jumping on every beat, it quickly becomes tiring. Leave “negative space” and let the graphics breathe.
If you map sound without threshold values (Threshold/Noise Gate), quiet background noise will cause micro-tremors in the object, making it look “nervous”. Set up a gate so the effect triggers only when the element is actually playing.
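A minimal gate sketch with hysteresis (two thresholds, so the gate does not chatter around a single value); the threshold values are illustrative and depend on your signal levels:

```typescript
// Opens above `openAt`, closes only below `closeAt`: background noise
// below the thresholds can no longer cause micro-tremors.
class Gate {
  private open = false;
  constructor(private openAt = 0.15, private closeAt = 0.08) {}

  process(level: number): number {
    if (!this.open && level > this.openAt) this.open = true;
    else if (this.open && level < this.closeAt) this.open = false;
    return this.open ? level : 0;
  }
}

// Usage: gate the envelope before it reaches the visual parameter.
const gate = new Gate();
// const gated = gate.process(features.rms);
```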
Simulate the smoothing (Slew/Lag) effect using internal modules: run a short rhythmic trigger through a Slew Limiter module before using it to control brightness or a video scrambler. Observe the difference between a hard trigger and a “fluid” smoothed signal.
Once the principles of “parameter-to-parameter” mapping are clear, an engineering question arises: how do you transmit these values from a software modular without latency? We will break this down in the lesson on OSC and MIDI integration.
Use the linked patch entries below as concrete repository anchors for this lesson track.
Adjacent lessons in the same track continue this topic progression. The system diagram above connects the modular audio/CV sources to the visual output layer.