I would say the 1950s. It got bad enough that jukeboxes added automatic volume controls - compressors - to even things out. Radio had it for about as long.
Was it 1982 when the CD was introduced? But people were pushing for more loudness in pop records before then, going back to the '40s and '50s. I'm curious about when the push for loudness first started reducing audio quality.
I'd say with the introduction of mass-market radios, or shortly thereafter. Early radios struggled with bandwidth and with reproducing anything other than voice. Compressing music ensured that most of it actually reached the listener. These radios were also often built down to a price, with sub-par amps and speakers and as few components as the makers could get away with. Early AM radios had a rather limited dynamic range (and those that still exist still do), so it was actually quite beneficial to compress music like that.
Early CDs were mastered so the *album* peak -- usually a single peak across the entire album -- approached or reached 0 dBFS. No compression was added (I don't think digital compression had been invented yet). As a result, the full dynamic range of the analog tape source -- there were few all-digital recordings then -- was preserved. According to Wikipedia (with citations), "the average level of the average rock song during most of the decade [the 1980s] was around −16.8 dBFS."
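For anyone unfamiliar with the units: dBFS is decibels relative to digital full scale, so 0 dBFS is the loudest representable sample and everything else is negative. A minimal sketch of how peak and average (RMS) levels in dBFS are computed from float samples normalized to ±1.0 (the function names are my own, not from any particular library):

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS: 0 dBFS = a sample at full scale (|s| == 1.0)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

def rms_dbfs(samples):
    """Average (RMS) level in dBFS -- the kind of figure the
    Wikipedia -16.8 dBFS number refers to."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)
```

So a song whose loudest sample is at half of full scale peaks around -6 dBFS; the loudness war was essentially about dragging the *RMS* figure up toward the fixed 0 dBFS peak ceiling.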
Also: "In 1994, the digital brickwall limiter with look-ahead (to pull down peak levels before they happened) was first mass-produced." And that accords with my recollection of when it all really started going to hell. It was a terrible shame, because it overlapped a positive trend underway since the late '80s: companies were finally doing due diligence and sourcing remasters from original master tapes (not vinyl-EQ'd production masters). Adding hard digital compression to a flat transfer of a classic original master tape is akin to adding googly eyes to the Mona Lisa.
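The "look-ahead" part is what made these limiters so effective (and so abusable): by examining samples slightly in the future, the limiter can reduce gain *before* a peak arrives rather than clipping it. A toy sketch of the idea, leaving out the attack/release gain smoothing a real limiter would apply (window size and function name are my own assumptions):

```python
def lookahead_limit(samples, ceiling=1.0, lookahead=64):
    """Naive look-ahead brickwall limiter sketch.

    For each sample, scan the next `lookahead` samples for the loudest
    upcoming peak and scale down accordingly, so no output sample can
    ever exceed `ceiling`. Real limiters also smooth the gain envelope
    to avoid audible distortion; this sketch omits that entirely."""
    out = []
    for i in range(len(samples)):
        # Loudest peak in the window starting at this sample.
        peak = max(abs(s) for s in samples[i:i + lookahead])
        gain = min(1.0, ceiling / peak) if peak > 0 else 1.0
        out.append(samples[i] * gain)
    return out
```

Since every sample is attenuated by at least ceiling/peak of its own window, the output is guaranteed to stay at or below the ceiling - a true "brick wall" - which is exactly why mastering engineers could then push the average level up without visible clipping.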