You will return to your W6 scene and create a new spatial sound composition that explores stereo presence and listener position.
This activity focuses on how sound location and audience perspective reshape perception without relying on camera movement.
Complete the following in order. Ask your professor or TA for help as needed.
Use W8 vocabulary
Return to your Week 6 lighting scene and define a new sonic approach that explores stereo movement, depth, and listener perspective.
Write 4–5 sentences that respond to the following:
How will sound extend your Week 6 lighting transformation?
Will lighting shifts correspond to changes in panning, intensity, texture, or spatial emphasis?
What is the overall feeling you want to create?
How will you use the stereo field to shape space?
Will sounds move gradually from left to right?
Will certain elements remain centered while others shift?
Will volume and reverb create a sense of proximity or distance?
You may reuse sound samples from Week 7 that you did not use, or gather new royalty-free sounds from Freesound.
If you download new sounds, you must record the credit information (title, creator, source link, license).
Curate a focused selection of sounds that support stereo spatialization, such as:
Follow the REAPER tutorials below and build a 30-second stereo sound composition that explores spatial presence.
Your composition must:
In addition, you must:
When finished:
Save your files as Lastname-Firstname-W8.wav and Lastname-Firstname-W8.rpp.
⭐ Review this week's slides for practical tips on working with multiple tracks, zooming in/out in the arrange view, track overview options, adding sound effects, and automating/animating volume, panning, and sound effects.
Check the W7 REAPER tutorials.
Render Settings:
Dry Run = Analyzes the project locally to check peak levels without creating a file.
Render = Exports the full audio file and saves it to your selected folder.
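The peak-level target in the rubric (final peak at -0.3 dB) is what Dry Run is checking for. The relationship between the dB value REAPER reports and linear amplitude can be sanity-checked with the standard dBFS formulas; the sketch below is an illustration of that math, not REAPER's internal code:

```python
# Standard dBFS <-> linear amplitude conversion (illustration only;
# REAPER performs this analysis internally during a Dry Run).
import math

def amplitude_to_dbfs(peak: float) -> float:
    """Convert a linear peak amplitude (0.0-1.0) to dBFS."""
    return 20 * math.log10(peak)

def dbfs_to_amplitude(dbfs: float) -> float:
    """Convert a dBFS value back to a linear amplitude."""
    return 10 ** (dbfs / 20)

print(round(dbfs_to_amplitude(-0.3), 3))      # 0.966 -> the linear peak for -0.3 dBFS
print(round(amplitude_to_dbfs(1.0), 1))       # 0.0  -> full scale is 0 dBFS
```

In other words, a -0.3 dBFS peak leaves a small safety margin (about 3.4%) below full scale, which is why the rubric asks for it instead of 0 dB.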

First:
Then:
Note: The Speaker object converts your stereo file into a single spatial source. You will no longer hear left/right panning as designed in REAPER.
For this week, keep it this way. Instead of stereo separation, you will explore spatial difference through camera movement in Blender, allowing listener position to shape volume, proximity, and perceived depth.
The camera movement should be smooth and intentional (no random motion). Your goal is to demonstrate how camera distance and movement affect the perception of stereo panning, depth, and spatial presence.
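The way camera distance shapes perceived volume follows the inverse-distance attenuation used by most 3D audio engines. The sketch below illustrates that falloff in general terms; it is not Blender's actual code, and you should verify the distance model your Speaker object uses in its Distance settings:

```python
# Illustration of inverse-distance attenuation: gain stays at full volume
# inside a reference distance, then falls off as 1/distance beyond it.
# The function name and reference value are assumptions for demonstration,
# not Blender API values.
def inverse_distance_gain(distance: float, reference: float = 1.0) -> float:
    """Full volume at or inside the reference distance, then 1/d falloff."""
    return reference / max(distance, reference)

# Each doubling of the camera-to-speaker distance halves the gain (-6 dB):
for d in (1.0, 2.0, 4.0, 8.0):
    print(f"distance {d}: gain {inverse_distance_gain(d)}")
# distance 1.0: gain 1.0
# distance 2.0: gain 0.5
# distance 4.0: gain 0.25
# distance 8.0: gain 0.125
```

This is why a slow, steady dolly toward or away from the Speaker reads as a smooth fade, while jerky motion produces audible volume jumps.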
Save your updated Blender file as: Lastname-Firstname-W8.blend
➡️ Export video as MP4, codec H.264
📁 Filename: Lastname-Firstname-W8.mp4
⚠️ Videos must be final renders, not viewport screen recordings.
⭐ Review this week's slides for practical tips.
For this week, focus only on adding a Speaker object and adjusting its mute/volume, distance, and cone settings.

Create a single PDF that includes:
4–5 sentence sonic intention description
Briefly describe the emotional arc (beginning → middle → end) and the types of sounds you selected.
List of sound samples used
Include for each sample: title, creator, source link, and license.
➡️ Export as PDF
📁 Filename: Lastname-Firstname-W8.pdf
| Component | File Name |
|---|---|
| Project document (PDF) | Lastname-Firstname-W8.pdf |
| Reaper file | Lastname-Firstname-W8.rpp |
| Sound file | Lastname-Firstname-W8.wav |
| Video file | Lastname-Firstname-W8.mp4 |
⚠️ Follow submission protocols carefully. Incorrect submissions may result in lost points.
Your work will be assessed based on:
Clarity of Spatial Sonic Intentions (PDF)
The written description clearly explains how stereo movement (panning), depth (volume/reverb), and camera distance shape listener experience and extend your Week 6 lighting transformation.
Stereo Spatialization & Sound Design (WAV + RPP)
The 30-second stereo composition demonstrates intentional use of panning (left → right), controlled volume to create depth (foreground/background), at least six layered sound sources, effective fades, and clean audio levels (final peak at -0.3 dB, no distortion). Spatial movement feels deliberate rather than random.
Integration in Blender & Camera Embodiment (MP4)
The sound file is correctly attached to a centered Speaker object. Camera movement (closer → farther) intentionally shapes perceived volume, proximity, and spatial presence, while lighting cues remain aligned with the 30-second structure. Framing is deliberate and consistent.
Technical Execution & File Organization
All required files (.pdf, .rpp, .wav, .mp4) follow correct naming conventions, maintain a 30-second duration, and are properly rendered (not viewport recordings).
Placing sound within the stereo field (left, centre, right)
Perceived location of the sound
How near or far a sound feels
Reflected sound that creates a sense of room size and distance
Sound surrounds the listener
Perceived spatial layering of sound
Changes in loudness over time
How quickly a sound begins (e.g., sharp, slow)
How long a sound holds
How dense or sparse the sonic field feels (e.g., minimal, layered, overlapping, isolated)
Credits: Jessica A. RodrΓguez
AI Disclosure: ChatGPT was used for editing and clarity only. No original course content was generated using AI.