
Topic: Convolution reverb? (Read 4977 times)

  • HTS
Convolution reverb?
What is an Impulse response?

http://www.youtube.com/watch?v=XzAmGtoswAE...ACD24FF2A3240E1

There is a simple explanation saying that an IR preset makes your music sound like it was played in a real place. So the IR of the Sydney Opera House lets you hear what your music would sound like in that building. But if you look at the FL Studio video, FreeverbToo (or any convolution software) has a lot of settings. In that case, which setting (or combination of settings) is the true response of the advertised space? In other words, IRs are stored in regular sound files like WAV or AIFF, and you're told you can record your own reverb, but what exactly did you record if every convolution engine has so many settings?

Thanks.

  • Brand
Convolution reverb?
Reply #1
What is an Impulse response?

Googling will surely give you a more accurate answer, but in short, it's a response (usually in the form of a WAV file), generated by recording a signal (an impulse) through specific circumstances. These circumstances can be real spaces, hardware equipment or, as in your video, a software plugin.
You can then combine this response (the WAV file) with other signals, to produce the sound of those circumstances. To do this you need specific software, like an IR (or convolution) reverb.
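That "combine" step can be sketched in a few lines of Python with NumPy; the file handling is omitted and the peak normalization is an assumption of mine (real plugins process audio block-wise in real time), but the core operation really is just a convolution:

```python
import numpy as np

def apply_ir(dry, ir):
    """Convolve a dry signal with an impulse response (both 1-D arrays)."""
    wet = np.convolve(dry, ir)
    # Normalize so the result does not clip when written back to a WAV file.
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: a single click played through a two-echo "room".
dry = np.array([1.0, 0.0, 0.0, 0.0])
ir = np.array([1.0, 0.0, 0.5, 0.25])   # direct sound plus two decaying echoes
wet = apply_ir(dry, ir)
```

Because the dry signal here is a single click, the wet output is simply the IR itself, padded with zeros.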


But if you look at the FLstudio video, the freeverbtoo (or any convolution software) has a lot of settings.

FreeverbToo is not convolution software but an algorithmic reverb, and as such it is supposed to have a fair number of settings so you can shape the sound you want.
A convolution reverb will usually also have some settings that let you modify the sound to your liking (though of course it can't reshape the sound as much as an algo reverb can). If you leave it at its defaults, it should sound like the original 'circumstance' in which the IR was recorded.

  • drewfx
Convolution reverb?
Reply #2
The idea is you play a very short noise burst (the impulse) in a room and record it.

If you then remove the noise burst itself from the recording (by deconvolution), the resulting IR is the response of the room (or device): it tells the convolution reverb how the reflected sounds arrive at the microphone, in terms of frequency response and volume over time.
  • Last Edit: 03 June, 2011, 12:53:33 PM by drewfx
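As a toy illustration of the point above, assuming an idealized unit-impulse test signal and a made-up four-sample "room":

```python
import numpy as np

# Idealized setup: the test signal is a perfect unit impulse, and the
# "room" is a hypothetical response (direct sound plus decaying echoes).
impulse = np.zeros(8)
impulse[0] = 1.0
room = np.array([1.0, 0.6, 0.3, 0.1])

# What the microphone records is the test signal convolved with the room.
recording = np.convolve(impulse, room)

# Because the test signal was a perfect impulse, the recording *is* the
# room's impulse response (with trailing zeros); nothing needs removing.
# Real measurements use long sweeps plus deconvolution for better SNR.
assert np.allclose(recording[:4], room)
```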

  • Axon
  • Members (Donating)
Convolution reverb?
Reply #3
You can view a digital audio signal as a sum of pulses, where each pulse represents one sample value: scaled to that sample's magnitude and delayed to its time position in the signal.

Now, if you instead summed something that is not a perfect pulse, i.e., if each individual sample of the original signal were replaced with a scaled, time-delayed copy of another audio signal, you would be performing a convolution.

This is a fairly general way of implementing reverb, especially because modelling a live space (like the Sydney Opera House) becomes remarkably simple: record a high-quality pulse inside the venue, subjecting it to all the acoustic effects of the room, then convolve that recording with your digital audio.

More subtly, convolution in the time domain is equivalent to multiplication in the frequency domain, so convolution engines are also used for EQ.
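That equivalence is easy to verify numerically. A sketch in Python with NumPy (the signal lengths here are arbitrary choices):

```python
import numpy as np

# Two arbitrary signals: x stands in for the dry audio, h for an IR.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(16)

# 1) Time domain: sum of scaled, time-delayed copies of h, one per sample of x.
direct = np.convolve(x, h)

# 2) Frequency domain: multiply the two spectra, then transform back.
n = len(x) + len(h) - 1          # length of the full convolution
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

# Both routes give the same result (up to floating-point error).
assert np.allclose(direct, via_fft)
```

This FFT route is also why long IRs are practical: multiplying spectra is far cheaper than direct convolution for large signals.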

  • HTS
Convolution reverb?
Reply #4
If you leave it at default it should sound like the original 'circumstance' of the recorded IR.

Can someone second this? A lot of convolution reverb products have many EQ and stage-position settings (under different names), and some of them are enabled by default. If you force all of them off, the reverb disappears.

  • lvqcl
  • Developer
Convolution reverb?
Reply #5
Reading the manual usually helps.

  • HTS
Convolution reverb?
Reply #6
You can view a digital audio signal as a sum of pulses, where each pulse represents one sample value: scaled to that sample's magnitude and delayed to its time position in the signal.

Now, if you instead summed something that is not a perfect pulse, i.e., if each individual sample of the original signal were replaced with a scaled, time-delayed copy of another audio signal, you would be performing a convolution.

This is a fairly general way of implementing reverb, especially because modelling a live space (like the Sydney Opera House) becomes remarkably simple: record a high-quality pulse inside the venue, subjecting it to all the acoustic effects of the room, then convolve that recording with your digital audio.

More subtly, convolution in the time domain is equivalent to multiplication in the frequency domain, so convolution engines are also used for EQ.

Is it necessary to have so many IRs available? Companies like Audio Ease (makers of Altiverb) are constantly expanding their libraries, and that is supposed to justify the exceedingly high price of software that hasn't seen an update in years.

Scientifically, shouldn't there be some criteria for which kinds of IRs are good and which are less good? For example, the Vienna Konzerthaus is included in their collection; doesn't that kind of "trump" other locations in the same category (large concert halls)? It's like headphones: if you can have the Sennheiser HD600, why get an Audio-Technica AD700 as well, even if you have the money?