Last post by magicgoose -
Right now one can only set a bitrate, and the WavPack encoder will introduce extremely varying distortion. This can even be demonstrated by encoding a sine sweep with -x3 and the lowest possible bitrate. For the most part it sounds okay, but there will be several distinct bursts of noise when the frequency goes higher up. A reasonable lossy encoder would try to distribute distortion more or less evenly across the recording, so as not to spend too many bits on the parts that already have less distortion than the worst part of the recording.
Given the nature of WavPack, I think something simple could do the job: setting a fixed signal-to-noise ratio as the quality goal, in dB. For example, a quality setting of 30 dB would mean that at any short stretch of the recording the encoder is allowed to add distortion up to 30 dB below the momentary loudness of the recording at that point in time, but no louder (unless some other setting, such as "pre-quantize", makes this goal unreachable at that moment).
When the goal is a predictable quality level, this would both prevent spending unnecessary bits on the very easy parts and neutralize the most notorious killer samples, by spending as many bits as needed to always meet the quality goal.
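The quality goal described above can be sketched in a few lines. This is a hypothetical verification pass, not WavPack's actual code: it compares the original and decoded samples block by block and flags any block where the coding noise rises above the chosen SNR floor.

```python
import math

def snr_db(signal_block, noise_block):
    """Momentary signal-to-noise ratio of one short block, in dB."""
    sig_power = sum(s * s for s in signal_block) / len(signal_block)
    noise_power = sum(n * n for n in noise_block) / len(noise_block)
    if noise_power == 0:
        return float("inf")
    return 10.0 * math.log10(sig_power / noise_power)

def meets_quality_goal(original, decoded, goal_db=30.0, block=512):
    """Check every short block: the coding noise (original minus decoded)
    must stay at least goal_db below the momentary signal level."""
    for i in range(0, len(original) - block + 1, block):
        sig = original[i:i + block]
        noise = [o - d for o, d in zip(sig, decoded[i:i + block])]
        if snr_db(sig, noise) < goal_db:
            return False  # an encoder would need to spend more bits here
    return True
```

An encoder built around this idea would run the check inside its rate loop and add bits to a block until the goal is met, rather than holding the bitrate constant and letting the momentary SNR float.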
Last post by magicgoose -
There's no universal answer. The authors do whatever they decide to, and their motivation could really be anything. In practice, most of the remasters in rock/metal genres that I have encountered were made to make the sound worse, and this is mostly about the Loudness War, indeed. But a few were also the opposite, usually when the original recording was extremely bad (that is, like almost any other rock/metal recording made after 1994). Notable examples of albums that were later re-released with actually better mastering:
- Ulver - Nattens Madrigal …
- Agalloch - Ashes Against the Grain
And some albums labeled as "FDR" (that's "full dynamic range"), which were previously released only with a lot of compression:
- Carcass - Swansong
- At the Gates - Slaughter of the Soul
(For all of the above I ignored releases on non-digital media, of course.)
One of the things STILL happening is that many recordings (certainly not all) are still left with DolbyA encoding. That helps explain the excessively compressed sound, a sense of distortion, and HF emphasis. I have decoded more than just a few CDs and digital distributions from DolbyA, and gotten audiophile results.
Any specific examples? What software did you use to detect that it's a kind of steganographically encoded data and not a normal recording?
The endless confusion arises, I think, from ignorance of what 'mastering' meant historically. By the time the workflow multitrack tape recording --> mixdown to two-track 'original master tape(s)' (OMTs) --> release on stereo vinyl (LP or 45) became routine, 'mastering' was the step after the mixdown, and was *necessary* because tape is a higher-fidelity medium than vinyl. I.e., in order to 'fit' the signal on tape onto vinyl that could be played on typical home systems, the signal on tape had to be compromised (e.g. bass filtered and summed to mono; treble rolled off near the end of a 'side'). That (and more mundane things like adjusting overall levels, track-to-track levels, fades, spacing, and sequencing the tracks) was what 'mastering' was. It was really 'mastering for a lower-fidelity distribution medium'.
99% of the time* the two-track mixdown master tape *was* the artist's intent. Everything on vinyl was an attempt to get close to *that*. The EQ moves etc that were used during vinyl mastering and cutting were typically captured in real time on another tape, the 'production master'. This was used for repressings, so that the mastering and cutting engineers didn't have to re-create their art every time.
Since the advent of CD, or *at least* since the advent of hardware better equipped to play back a full-range CD signal, 'mastering' the EQ and dynamics of an analog OMT in the vinyl sense really has not been *necessary*. CD (Redbook PCM) is a *higher*-fidelity medium than tape, and can handle anything in the audible band that magnetic tape can throw at it. (Sequencing, spacing, fades, etc. still are mastering functions.) But mastering, in the sense of 'sculpting' what's on the OMT, was/is still done, as we all know. Mastering for CD (or another digital medium) is done to make the tracks 'hang together'. To make them 'pop' on the radio. To make them sound 'professional'. To 'fix' something that was deemed wrong with the OMTs. To give them that 'final touch of magic'. Whatever. Ask a mastering engineer today to justify his job, or go to their websites and read their descriptions of the services they offer; the answers can be interesting.
Industry lore has it that much of the first wave of early-to-mid-1980s CD issues came from vinyl production master tapes, or tapes of even higher generation, with all the generational tape noise, EQ moves, etc., that that implies. Hence we had a wave of 'remasters' of the same albums in the late 80s/early 90s, trumpeted as being sourced from 'original master tapes', i.e. mixdown masters. Though that didn't necessarily mean 'flat transfer with no changes in EQ etc.'. Furthermore, and very unfortunately, the 'remaster' era was soon overlapped by the 'loudness wars' era, which arguably negated or rendered moot the sonic benefit of OMT sourcing. And that in turn led to yet another wave of remastering, this time in 'hi-rez', to get 'better sound', though once again there was NO GUARANTEE that the dynamics and EQ weren't altered compared to the OMT source. This bullsh*t went on for years, and now we have possibly a final iteration where places like HDtracks are having their feet held to the fire by audiophiles to deliver 'original master tape' sound as unadorned as possible. And (for no good technical reason) in 'high res'. However, HDtracks et al. only get what the (notoriously lax) record companies deign to give them, so the wheel keeps turning.....
*A figure I am pulling out of my ass, but seriously, the exceptions I'm aware of are when some element or production move (e.g. an instrumental part fly-in, a fade-in or fade-out) was added at the cutting/production-master stage. In that case the closest source to the artist's 'approved' version would be the production master tape, or the original master with those later moves re-created.
One of the things STILL happening is that many recordings (certainly not all) are still left with DolbyA encoding. That helps explain the excessively compressed sound, a sense of distortion, and HF emphasis. I have decoded more than just a few CDs and digital distributions from DolbyA, and gotten audiophile results. The worst thing is when they take the DolbyA masters, don't decode them, and then compress further -- YUCK!!!
Last post by DVDdoug - Your speakers are active... they already have a built-in amplifier. You could put an "extra" amplifier in line, but you'd want to attenuate the amp's output, and you'd want an "artificial load", especially if you want to accentuate the "tube sound".
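For illustration, a rough L-pad calculation shows what "attenuate the output and present an artificial load" involves numerically. This is only a sketch under the simplifying assumption of a purely resistive network (real speaker loads are reactive), not a construction guide:

```python
def l_pad(load_ohms, atten_db):
    """Series/parallel resistor pair for an L-pad that attenuates the
    voltage by atten_db while still presenting load_ohms to the amp.
    Purely resistive approximation of the 'artificial load' idea."""
    k = 10 ** (-atten_db / 20.0)          # voltage ratio after attenuation
    r_series = load_ohms * (1 - k)        # resistor in series with the amp
    r_parallel = load_ohms * k / (1 - k)  # resistor across the output
    return r_series, r_parallel
```

For an 8-ohm "load" and 20 dB of attenuation this gives roughly 7.2 ohms in series and 0.89 ohms in parallel; note that these resistors have to dissipate most of the amp's power, so power ratings matter far more than in line-level circuits.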
So a tube amp can just add warm sound and distortion?
A good amplifier (tube or solid state) won't have any sound of its own... it simply amplifies the signal with no audible distortion (assuming it's not over-driven), no audible frequency-response deviation, and (hopefully) no audible noise.
Guitar amplifiers are an exception. They are designed to have "pleasing distortion", especially when driven hard, and every guitar amp sounds different. Of course, only the guitar is distorted... the other instruments and vocals are usually "clean".
It is more difficult and more expensive to make a good tube amp; it's cheap and easy to build a good solid-state amp. (Of course, higher power costs more.)
Is there an audible difference compared to a VST?
Sure... Every VST will sound different and every tube amp that is not a "perfect" amplifier will sound different from any other "imperfect" amp.
BTW - Different people may have different definitions of "warm sound".
Last post by Klimis -
I'm having a hard time considering streaming services a viable choice for this thread. They just don't match the whole idea of this poll, which is about locally stored media whose format you chose yourself. With iTunes or Amazon, at least you have files stored locally, and you can re-encode them to something else if you prefer another format. To make it even more complicated: if I said I was using Deezer, it offers two codec options for streaming, MP3 and FLAC. And if I said I was listening to my music on YouTube, should I vote for Opus, AAC, or Ogg? What if a streaming service offered some kind of proprietary codec? The whole idea behind streaming services is that you are not given any choice of codec, just quality levels. Also, if Spotify hadn't been pressured into disclosing the fact that they use Ogg, you wouldn't know what it's using; they would much prefer that users have no idea what they are listening to. So the problem here is mostly that you are given no choice and you don't possess any file to begin with. The final and bigger problem is that this also clashes with what most of the people in the poll had in mind when they voted. We need some way to separate streaming services from this topic.
Last post by kode54 -
What I meant was that a single application could fix the issue for just itself, by keeping regular track of where its windows are, listening through system messages for desktop layout changes, and restoring its window positions once the desktop settles back into a configuration that matches a previously seen layout.
Otherwise, when the virtual desktop (the union of all screens) changes size, windows just keep their old coordinates in the new configuration, sometimes ending up crossing onto other screens. And frequently they'll be forced back fully onscreen instead of being left partially offscreen.
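The scheme described above can be sketched in a platform-neutral way. The `WindowPositionKeeper` class and its method names are hypothetical stand-ins for whatever the real toolkit delivers (e.g. WM_DISPLAYCHANGE on Windows); the point is just the bookkeeping: key saved window positions by a signature of the monitor layout, and reapply them when a known layout returns.

```python
class WindowPositionKeeper:
    """Remember window positions per monitor layout, so a window pushed
    around by a layout change can be put back when the layout returns."""

    def __init__(self):
        self.saved = {}  # layout signature -> {window_id: (x, y, w, h)}

    @staticmethod
    def signature(monitors):
        """Stable key for a monitor configuration: sorted geometry tuples."""
        return tuple(sorted(monitors))

    def remember(self, monitors, windows):
        """Call periodically (or on move/resize events) while the
        current layout is stable."""
        self.saved[self.signature(monitors)] = dict(windows)

    def on_layout_changed(self, monitors, windows):
        """Call after the desktop settles following a layout change.
        Returns the positions to reapply, or None if this layout
        was never seen before."""
        return self.saved.get(self.signature(monitors))
```

So when a laptop is undocked and redocked, the application recognizes the two-monitor signature and moves its windows back to where they were, instead of leaving them wherever the OS dumped them.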