
Topic: Pleasurize Music Foundation (Read 39771 times)

  • Axon
  • [*][*][*][*][*]
  • Members (Donating)
Pleasurize Music Foundation
http://www.dynamicrange.de/

This looks pretty good. They have lots of industry contacts, Algorithmix is involved with a professional-looking dynamic range meter (it appears to be a variant of Chromatix's technique, actually), which is already released. They aren't focusing on visual waveform plots (yay) and are instead focusing on industry interaction.

  • vitos
  • [*][*][*]
Pleasurize Music Foundation
Reply #1
It's a relief to find another group fighting the loudness war. The first one I knew of was Turn Me Up!, but the Pleasurize Music Foundation has declared concrete solutions, actions, and an interesting timetable... I wonder if the two have something in common, or if they are completely independent movements. I wish them well...

And they have linked to one conversation on this forum already (in links section).
Not really a Signature.

  • Ron Jones
  • [*][*][*][*]
Pleasurize Music Foundation
Reply #2
Their dynamic range meter seems interesting enough. My German's a little rusty (ahem), but it seems it's going to be released as a plug-in some time in February.

  • ExUser
  • [*][*][*][*][*]
  • Read-only
Pleasurize Music Foundation
Reply #3
The neologism "Pleasurize" sounds distinctly not credible to me. Sounds hokey. The "Turn Me Up" initiative seems far more consumer-friendly, and with the support of high-profile artists like Tom Petty's Mudcrutch, I would tend to think that, of the two, the latter will be the more successful.

That being said, any movement towards more dynamic range is something I appreciate.

  • smiler
  • [*]
Pleasurize Music Foundation
Reply #4
The neologism "Pleasurize" sounds distinctly not credible to me. Sounds hokey.

Just supposition, but perhaps it's a poor translation, given that the group obviously originates in Germany.

They seem very organised from looking at the site. I only hope they can make some headway!

  • kjoonlee
  • [*][*][*][*][*]
Pleasurize Music Foundation
Reply #5
The neologism "Pleasurize" sounds distinctly not credible to me. Sounds hokey.

Just supposition, but perhaps it's a poor translation, given that the group obviously originates in Germany.

They seem very organised from looking at the site. I only hope they can make some headway!

Headway for headroom?

  • ExUser
  • [*][*][*][*][*]
  • Read-only
Pleasurize Music Foundation
Reply #6
Headway for headroom?
Sounds like a wonderful battle cry for the Loudness War. HEADWAY FOR HEADROOM!

  • Raiden
  • [*][*][*]
  • Developer
Pleasurize Music Foundation
Reply #7
From the web site:
Quote
20 February 2009: Release version 1.1 of the TT Dynamic Range Meter and the TT DR Offline Meter is ready for release. German manual is already finished. Who can make a professional English translation (3300 words) very quickly? This should be done before making the software available. Please contact us via contact form. German manual will be uploaded now.

  • gnypp45
  • [*]
Pleasurize Music Foundation
Reply #8
From the web site:
Quote
20 February 2009: Release version 1.1 of the TT Dynamic Range Meter and the TT DR Offline Meter is ready for release. German manual is already finished. Who can make a professional English translation (3300 words) very quickly? This should be done before making the software available. Please contact us via contact form. German manual will be uploaded now.


I agree that the dynamic range meter looks very interesting. The idea of putting labels like "DRX" on the music, where "X" stands for effective dynamic range in decibels, is also very nice and I think it has a chance of catching on. I would love to see these labels on commercial CDs.

From the download page:
"Der Download wird online gestellt, sobald die Gebrauchsanweisung ins englische übersetzt ist." i.e. "The dynamic range meter will be available for download as soon as the manual is translated into English."
German-speaking readers, what are you waiting for!?

  • Trippynet
  • [*]
Pleasurize Music Foundation
Reply #9
Good to see some progress. I take it people here have added their names to the list on the site to show their support for it? I have anyway! Anything to try and stop the abomination which is modern CD mastering.

  • Raiden
  • [*][*][*]
  • Developer
Pleasurize Music Foundation
Reply #10
Download of the TT Dynamic Range Meter is now available. The offline meter works fine under Wine.

A few values:
DR15 - Biosphere - Substrata
DR10 - Autechre - Chiastic Slide
DR10 - Boards of Canada - The Campfire Headphase
DR7 - Daft Punk - Homework
DR7 - Gas - Pop
DR0 - Venetian Snares + Speedranch - Making Orange Things


I have some questions: I'm trying to implement their algorithm (in Java, nothing fancy). I've got something working, but the values differ a bit... How do I calculate the RMS of a wave file? I'm using the standard RMS formula from Wikipedia on a window of 3 seconds (132300 samples). Now, by how many samples do I shift the window? And is there a special way to average the RMS values I get?
  • Last Edit: 22 February, 2009, 04:55:08 PM by Raiden

  • Axon
  • [*][*][*][*][*]
  • Members (Donating)
Pleasurize Music Foundation
Reply #11
RMS measurements are typically done with a 50ms window, no overlap. The mathematically correct way to average RMS measurements is to square each sample, sum the squares in 50ms chunks, sum the chunks, take the mean, and take the square root of the whole deal.
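That averaging can be sketched in a few lines of Python (a hedged illustration of the procedure described above, not the TT meter's actual code; the function name and defaults are my own, and Raiden's real implementation is in Java):

```python
import math

def overall_rms_db(samples, rate=44100, chunk_ms=50):
    """Overall RMS level in dBFS: square every sample, accumulate the
    sums of squares in 50 ms chunks, average over all samples, then
    take the square root of the whole deal."""
    n = max(1, int(rate * chunk_ms / 1000))        # samples per chunk
    chunk_sums = [sum(x * x for x in samples[i:i + n])
                  for i in range(0, len(samples), n)]
    mean_square = sum(chunk_sums) / len(samples)
    return 20 * math.log10(math.sqrt(mean_square))
```

Note that summing the per-chunk sums and dividing by the total sample count gives exactly the same number as one global mean square; the 50 ms chunking only matters if you also want per-chunk readings along the way.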

Their German documentation leaves much to be desired as to exactly how the dynamic range measurement works. It's clearly a quasi-peak-to-RMS measurement with dropout removal, but there are no specifics at all on what's going on under the hood. DR numbers seem consistently 66-76% of the magnitude of the listed peak-vs-RMS numbers. No mention of equal-loudness weighting or percentiles. Their peak measurements are pretty clearly BS.1770 standard (4x oversampling, etc.).

Some quick jots:
  • DR20 aeo3 peak:-0.10 rms:-26.0
  • DR15 beethoven 2.I harnoncourt 1991 peak:-0.17/-1.44 rms:-20.4/-21.4
  • DR-0 merzbow - i lead you towards glorious times dr:-0.0/-0.0 peak:over rms:2.2/2.4
  • DR13 shellac - genuine lullabelle peak:-0.17 rms:-19.7/-18.9
  • DR20 ligeti - cello concerto - norrington peak:-4.00/-4.82 rms:-33.7/-35.6
  • DR13 ligeti - atmospheres - norrington dr:12.7/12.9 peak:-7.13/-7.39 rms:-27.5/-26.6
  • DR18 ligeti - san francisco polyphony - norrington dr:17.8/18.2 peak:-0.17/over rms:-23.4/-23.7
That I can't get any DR reading higher than 20, even for music for which even a studio might have insufficient dynamic range, is a little surprising. I smell a rat with the merzbow reading - if their raw DR measurements are actually reading negative with that track, humorous though that is, it's probably a bug. IIRC, pfpf reads around 0.2 dB of dynamic range at all timescales with that track.

Also, the exe appears to be a front end for a VST plugin.
  • Last Edit: 22 February, 2009, 05:52:46 PM by Axon

  • Raiden
  • [*][*][*]
  • Developer
Pleasurize Music Foundation
Reply #12
Thanks for your response.
I guess the "RMS" value in the frontend is just every sample squared, averaged, and then the square root of the whole thing. The 3-second window has to do with how they measure the top-20% RMS value.
Negative values probably show up because they regard a full-scale sine wave as 0 dB, not -3.01 dB.
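For reference, the arithmetic behind that -3.01 dB figure (plain Python, only illustrating the reference-level offset suggested above):

```python
import math

# The RMS of a full-scale sine is 1/sqrt(2), i.e. about -3.01 dBFS when
# 0 dB means full-scale samples. A meter that instead defines a
# full-scale sine as its 0 dB reference shifts every reading up by
# about 3.01 dB, which could push a quasi-peak-minus-RMS figure negative.
sine_rms = 1 / math.sqrt(2)
sample_referenced_db = 20 * math.log10(sine_rms)   # about -3.01 dB
sine_referenced_db = sample_referenced_db + 3.01   # about 0 dB
```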
  • Last Edit: 22 February, 2009, 06:08:32 PM by Raiden

  • Axon
  • [*][*][*][*][*]
  • Members (Donating)
Pleasurize Music Foundation
Reply #13
Where did you see the info referencing "3 seconds"? I couldn't find that sort of thing on their site.

Depending on how they're computing their RMS numbers, and especially depending on equal-loudness weighting and/or BS.1770 filtering, a full-scale sine could very easily go way above 0 dB. pfpf does the same thing. It doesn't mean the meter should go negative; that doesn't make any sense.

  • carpman
  • [*][*][*][*][*]
  • Developer
Pleasurize Music Foundation
Reply #14
This may be a dumb question, but is the dynamic range of a quieter but otherwise identical piece different from the original?

I just ran a few pieces through their TT Dynamic Range Meter:

Mahler - Das Lied von der Erde - Der Abschied [original]  -- DR = 11
Mahler - Das Lied von der Erde - Der Abschied [-3.5 dB from original]  -- DR = 11

The Fall - The Classical [original]  -- DR = 8
The Fall - The Classical [-9.75 dB from original]  -- DR = 9

Is that what one would expect?

C.
PC = TAK + LossyWAV  ::  Portable = Lame MP3

  • Raiden
  • [*][*][*]
  • Developer
Pleasurize Music Foundation
Reply #15
Where did you see the info referencing "3 seconds"? I couldn't find that sort of thing on their site.

Depending on how they're computing their RMS numbers, and especially depending on equal-loudness weighting and/or BS.1770 filtering, a full-scale sine could very easily go way above 0 dB. pfpf does the same thing. It doesn't mean the meter should go negative; that doesn't make any sense.

It's in the manual of the release version:
http://www.dynamicrange.de/sites/default/f...1_1-Deutsch.pdf
I'm referring to this paragraph:
Quote (translated from the German)
To determine the official DR value, the track or the image of the disc (WAV, 16-bit, 44.1 kHz) is scanned, and in the background a histogram (a loudness-distribution diagram) is generated with a resolution of 0.01 dB steps. The loudness values (dB RMS), determined according to established standards for RMS calculation over a time window of 3 seconds, are effectively divided into 10,000 different bins (corresponding to the 0.01 dB resolution). From this result, the loudest 20% are used as the basis for calculating the average loudness of the loud passages. At the same time, the highest peak value is determined. The DR value is the difference between the peak and the top-20% average RMS.

So they calculate lots of RMS values over 3 s windows, build a histogram with a resolution of 0.01 dB, and finally take the loudest 20% of those.
However, I don't understand what exactly they mean by 20%, as I get slightly different values when I implement it.
  • Last Edit: 22 February, 2009, 06:43:57 PM by Raiden

  • Axon
  • [*][*][*][*][*]
  • Members (Donating)
Pleasurize Music Foundation
Reply #16
What's the peak info on The Classical? And the left/right DR breakdown?

If the dropout threshold is set at a fixed dB value, the DR numbers should change with amplitude changes - this happens in pfpf - but the numbers should usually go down as a result. If they're using an adaptive equal-loudness filter (unlikely) then something like this could happen. If they're clamping peaks for measurement at 0 dBFS, this could happen.

What happens to the numbers at -20db/-30db/etc?

  • carpman
  • [*][*][*][*][*]
  • Developer
Pleasurize Music Foundation
Reply #17
Funny you just write that Axon, that's just what I was doing:

The Fall - The Classical [original] -- DR = 8
The Fall - The Classical [-9.75 dB from original] -- DR = 9
The Fall - The Classical [-30.00 dB from original] -- DR = 8

EDIT:

Original Piece:

[EDIT2: According to fb2k Track Peak = 1.00]

Peak, according to the TT DR Meter: just says "over" [Left and Right]
DR Left = 8.2
DR Right = 8.7
RMS Left = -9.6
RMS Right = -10.1


-30 dB Version:

Peak, according to the TT DR Meter: Left & Right = -29.94
DR Left = 8.2
DR Right = 8.7
RMS Left = -39.6
RMS Right = -40.1

C.

[EDIT1: Added TT Meter breakdown]
[EDIT2: Added fb2k track peak info]
  • Last Edit: 22 February, 2009, 07:00:32 PM by carpman
PC = TAK + LossyWAV  ::  Portable = Lame MP3

  • Axon
  • [*][*][*][*][*]
  • Members (Donating)
Pleasurize Music Foundation
Reply #18
It's in the manual of the release version: http://www.dynamicrange.de/sites/default/f...1_1-Deutsch.pdf
Ah, great, awesome, didn't see that. Thanks. Looking at it through Google Translate now.

Quote
So they calculate lots of RMS values over 3s windows, doing a histogram with a resolution of 0.01 dB, and finally they take the loudest 20% of those. However I don't understand what exactly they mean by 20% as I get slightly different values when I program it.
Yeah, that's kind of weird. 3s non-overlapped windowing would be absolutely broken. There's nothing "standard" about a 3 second RMS window - usually it's 50ms, as I said. So either they are doing point-by-point windowing - as Chromatix suggested a while ago for the Sparklemeter, where a full 3-second sum of squares window is calculated, and then samples are individually added/subtracted from the beginning/end and the 3-second window RMS figure calculated for each sample - or the windows are in multiples of 50ms. I just looked at the program through ProcessMeter to monitor its file I/O and it's reading the wavs in 529920 byte chunks, which comes out to be 3.00408 seconds - if it were reading in 50ms chunks I would strongly suspect reads of 529200 bytes instead. So they're either doing single-sample window overlaps or they aren't overlapping at all. Try both.

As far as the 20% thing goes, I can interpret that either as the ratio of the 80th percentile to the 50th percentile of the histogram, or the 100th percentile (or 99th) to the 80th, or the 80th to the separately computed RMS number... it could go a lot of ways without further clarification. Try all of those too.

  • Axon
  • [*][*][*][*][*]
  • Members (Donating)
Pleasurize Music Foundation
Reply #19
Funny you just write that Axon, that's just what I was doing:

The Fall - The Classical [original] -- DR = 8
The Fall - The Classical [-9.75 dB from original] -- DR = 9
The Fall - The Classical [-30.00 dB from original] -- DR = 8


bugz? That's really odd.

Pleasurize Music Foundation
Reply #20
From the web site:
Quote
20 February 2009: Release version 1.1 of the TT Dynamic Range Meter and the TT DR Offline Meter is ready for release. German manual is already finished. Who can make a professional English translation (3300 words) very quickly? This should be done before making the software available. Please contact us via contact form. German manual will be uploaded now.



I took a different approach - testing the dynamic range of test tones with various segments at various levels.

I can't make any sense out of how they interpret my test files - they seem to get the peak level right, but after that ?????

BTW, the file I downloaded from http://www.dynamicrange.de/en/download

was a .html file that did run as a setup program when the extension was changed to .exe

Strange

  • carpman
  • [*][*][*][*][*]
  • Developer
Pleasurize Music Foundation
Reply #21
I think I've figured out what the issue was:

The Fall - The Classical is a lossy file.

I initially converted the same source file to 2 WAVs:

(1) without RG processing (DR = 8)
(2) with RG processing (DR = 9)

Having got that odd result, I took (1) and reduced its amplitude in Cool Edit to -30 dB.

I just did the same thing again but this time reducing file (1) by the Replay Gain amount in Cool Edit (i.e. reducing a lossless file) rather than converting lossy to WAV with RG processing, and clearly it produced a different file. So the different DR result was caused by lossy conversion (clipping).

If I'd been sensible and had output one WAV and then reduced it by 10 dB each time (as I've just done), the DR result would have been consistent at DR = 8.

So no bug.

Sorry for the false alarm.

C.
  • Last Edit: 22 February, 2009, 07:52:13 PM by carpman
PC = TAK + LossyWAV  ::  Portable = Lame MP3

  • Raiden
  • [*][*][*]
  • Developer
Pleasurize Music Foundation
Reply #22
OK, I think I've understood how the algorithm works:
Every 3 seconds an RMS value is generated (so a 1-hour CD would have 1200 RMS values). Then the loudest 20% of those are averaged. The difference between this value and the peak is the DR value.

I wonder why in the manual they talk about a histogram... because their algorithm doesn't need one?!
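Under that reading, the whole measurement fits in a short function. This is only a sketch of that interpretation (non-overlapping 3-second windows, dB values averaged directly - both details unconfirmed by the manual), in Python rather than Java for brevity:

```python
import math

def dr_value(samples, rate=44100, window_s=3):
    """Approximate DR: RMS over non-overlapping 3 s windows, average
    the loudest 20% of those (averaged in dB here, which is one of
    several possible interpretations), DR = peak minus that average."""
    n = rate * window_s
    rms_db = []
    for i in range(0, len(samples) - n + 1, n):
        window = samples[i:i + n]
        mean_square = sum(x * x for x in window) / n
        rms_db.append(10 * math.log10(mean_square))
    loudest = sorted(rms_db, reverse=True)[:max(1, len(rms_db) // 5)]
    avg_loud_db = sum(loudest) / len(loudest)
    peak_db = 20 * math.log10(max(abs(x) for x in samples))
    return peak_db - avg_loud_db
```

A one-hour CD gives 1200 windows, so the loudest 240 go into the average. Whether the real meter overlaps its windows, averages in dB or in the power domain, or measures oversampled peaks (per BS.1770, as Axon suspects) would all nudge this number.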

  • Chromatix
  • [*][*]
Pleasurize Music Foundation
Reply #23
OK, I think I've understood how the algorithm works:
Every 3 seconds an RMS value is generated (so a 1-hour CD would have 1200 RMS values). Then the loudest 20% of those are averaged. The difference between this value and the peak is the DR value.

I wonder why in the manual they talk about a histogram... because their algorithm doesn't need one?!


A histogram is a valid optimisation when you need to pick out the top N values of something, but the precision is not critical.  It changes the complexity of this selection to O(N) instead of O(N log N), which is the best case for maintaining a sorted list of values - and when N is 132300 (for 3 seconds of 44.1kHz audio samples), that is significant.  I used this in my own Sparklemeter.

But they only have 1200 values to sort for an hour-long CD, if they did not use overlapping windows.  (I hope they did overlap windows, actually.)  1200 doesn't take very long to deal with for a sorting algorithm.

I suspect that they are using overlapping windows, if only because the animation of their VST plugin shows an averaging meter moving smoothly with about the right time constant.  If they have a 3-second window every 10ms, for example, that would be 360000 windows for an hour-long CD, which is worth using the histogram method for.

If that is true, then that's a very interesting way of measuring this.  I would personally avoid doing it because it would be sensitive to clicks and pops (which would make it incompatible with vinyl), but it's a fairly good way of detecting over-loud CDs.  I will read the English manual on Tuesday, when it's supposed to become available, to see whether you're right.
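The selection trick described above can be sketched directly (a hypothetical helper, assuming 0.01 dB bins and a -100 dB floor; not the Sparklemeter's or the TT meter's actual code):

```python
def loudest_20pct_mean(values_db, resolution=0.01, floor_db=-100.0):
    """Histogram selection of the loudest 20% of a set of dB values:
    bin at 0.01 dB resolution, then walk down from the loudest bin
    until 20% of the values are covered. O(N) rather than the
    O(N log N) of a full sort, at the cost of quantising each value
    to its bin."""
    nbins = int(round(-floor_db / resolution)) + 1
    hist = [0] * nbins
    for v in values_db:
        idx = min(nbins - 1, max(0, int(round((v - floor_db) / resolution))))
        hist[idx] += 1
    target = max(1, len(values_db) // 5)       # the loudest 20%
    taken, total = 0, 0.0
    for idx in range(nbins - 1, -1, -1):       # walk down from the top bin
        take = min(hist[idx], target - taken)
        total += take * (floor_db + idx * resolution)
        taken += take
        if taken == target:
            break
    return total / taken
```

As the post says, the histogram only pays off once the number of windows is large; for 1200 non-overlapping windows a plain sort is effectively free.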

  • Axon
  • [*][*][*][*][*]
  • Members (Donating)
Pleasurize Music Foundation
Reply #24
If that is true, then that's a very interesting way of measuring this.  I would personally avoid doing it because it would be sensitive to clicks and pops (which would make it incompatible with vinyl), but it's a fairly good way of detecting over-loud CDs.  I will read the English manual on Tuesday, when it's supposed to become available, to see whether you're right.
The pop/tick thing becomes less of an issue when measuring quasi-peaks instead of peaks - instead of the 100th percentile, the 99th or 95th percentile, or the 2nd standard deviation or whatever. Major pops and ticks are a vanishingly small percentage of the total samples in recorded vinyl. The TT meter technical docs allude to something like this (resilience against pops and ticks) without saying exactly what it's doing.

It would be far more dangerous to vinyl measurements if the RMS windows were not overlapped, because then the results are offset- and phase-sensitive. Based on my pfpf investigations, even when playing back the same track twice on vinyl, a figure-of-merit difference of 0.1 dB is not unheard of, even with overlapping windows. It's quite possible that DR numbers could shift depending on the recording start/end points and the speed stability of the turntable, thus giving different results for different people.