The SACD 1-bit DSD signal is not 16-bit PCM, so it cannot fit into a 16-bit .wav without loss. Anything below the 16th bit is cut off if you do not dither:
You will fill the 16 bits with "bits 1 to 16", and bit 1 is always zero and easily compressed away.
If you bump up the volume by 1 bit (approx 6 dB), then what was bit 1 is "clipped away" (which makes no difference, since it was always zero) and what fits into the .wav file is "bits 2 to 17".
So you take out the all-zero easily-compressible bit 1 and throw in the mostly-noise hardly-compressible-at-all bit 17. No wonder it gets larger.
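The effect is easy to reproduce with a toy model (a sketch, not FLAC: zlib stands in for a generic lossless compressor, and the noise below the 16-bit cut-off is modelled as a random bit):

```python
import random
import struct
import zlib

random.seed(42)
N = 20000

# Toy model: highly compressible 16-bit samples whose top bit
# ("bit 1" in the numbering above) is always zero.
quiet = [(i % 1000) * 16 for i in range(N)]

# "+6 dB" = x2 = shift every sample left by one bit: the all-zero top bit
# is clipped away, and the newly exposed bottom bit ("bit 17") is filled
# with what used to be noise below the cut-off, modelled as a random bit.
loud = [((s << 1) & 0xFFFF) | random.randint(0, 1) for s in quiet]

q = len(zlib.compress(struct.pack(f"<{N}H", *quiet), 9))
l = len(zlib.compress(struct.pack(f"<{N}H", *loud), 9))
print(q, l)  # the version with a noisy LSB compresses noticeably worse
```

The deterministic part of the loud signal is still fully predictable; the extra file size comes entirely from the 20000 random bits that no lossless compressor can squeeze out.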
Why can't a CD ripped to WAV by WMP match anything? I ripped some CDs to WAV files with WMP, but these albums won't match the same CDs in CUETools or AccurateRip. Why? Why can EAC match but not WMP? Is some information lost in a WMP-ripped WAV?
Short story: Do not use WMP.
Longer story, you will find the details by googling:
CDs were intended to have a 2 second gap between the tracks. But it need not be silent.
That gap is "Index 0" of the track. Music starts at index 1.
Upon ripping, the "gap" is typically appended to the previous track, because then the music starts right when you jump to the track (like on a CD player).
If the gap is not the default two seconds (exactly!), then it is too hard to guess what the table of contents was, and CUETools won't find it.
But if CUETools finds an EAC log or a cuesheet, it can.
WMP does not produce the EAC log or the cuesheet.
Try dumping the EAC log and the EAC-generated cuesheet into the folder with the WMP rip. Sometimes that is enough for CUETools to figure it out.
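For illustration, here is a minimal cuesheet of the kind EAC writes (file name and timestamps are made up); note how the two-second pregap is stored as INDEX 00 of the following track:

```
FILE "album.wav" WAVE
  TRACK 01 AUDIO
    INDEX 01 00:00:00
  TRACK 02 AUDIO
    INDEX 00 03:45:00
    INDEX 01 03:47:00
```

Timestamps are MM:SS:FF (75 frames per second). The audio between INDEX 00 and INDEX 01 of track 2 is the gap, which most rippers append to the end of track 1.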
...That said, for all practical purposes, converting DSD to FLAC in foobar2000 while resampling with the SoX resampler to 44100 Hz should produce completely fine results, as long as there's no clipping. All other settings can be left at their defaults.

I have a quick question about this. Using foobar2000 and the SACD input component, which is the more accurate conversion (or better math) in your opinion:
Make sure to check "don't reset DSP between tracks…" if converting from a tracks view, otherwise resampling could be not gapless.
1) SACD input set to PCM at 44.1, then the foobar2000 convert function with WAV (auto) and no dither.
2) SACD input set to PCM at 88.2 (or 176.4, for that matter), then foobar2000 convert-->WAV (auto), no dither, and SoX resample to 44.1.
I ask because I've done option #1 with both no SoX and with SoX and they both yield 44.1 files so it seems like SoX is redundant in this case...or probably not even being used?
Basically is SoX better than the SACD input conversion...I've read that as long as the sample rate for DSD64 is a multiple of 44.1 then it should be good to go so I was planning on using option number one.
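On the "multiple of 44.1" point, a quick sanity check of the arithmetic: DSD64 runs at exactly 64 × 44100 Hz, so the ratio down to 44.1 kHz (and to 88.2 or 176.4) is an exact integer, while 48 kHz is not:

```python
# DSD64's sample rate is 64x the CD rate of 44100 Hz.
DSD64 = 2_822_400  # Hz
CD = 44_100        # Hz

print(DSD64 // CD, DSD64 % CD)  # 64 0 -> exact integer decimation ratio
print(DSD64 % 48_000)           # 38400 -> 48 kHz would need a fractional ratio
```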
Not those that are well designed (in terms of how they alter tonal quality). To suggest this is idiotic. Furthermore, I will also contend that the average listener doesn't know how to use them. This includes systems with just bass and treble, since the average listener doesn't understand that the volume control is also a tone control.
I read this thread but still don't fully understand what is happening:
Basically, I acquired the MFSL SACD of the 1973 Lynyrd Skynyrd album. For portability reasons, I converted the dsf files to wav files via foobar. One of the options for this conversion in foobar is a volume increase of up to +6dB. The reason this exists is because (I've read) that the conversion from dsf (in this case dsd64) to wav lowers the volume quite a bit and this is there to adjust the volume back up with the caveat that you should check the outputted wav files to make sure they are not clipped (which I did in audacity).
I did this process twice, once with no volume modification (+0dB) and once with (+6dB). Then I encoded via flac -8 and noticed that file sizes of the +6dB file are bigger compared to the 0dB and that indeed more bitrate is being used to encode those files.
I don't understand what is happening unless actual encodeable signal is being created with a volume change...which is well...that can't happen...you can't create something from nothing right?
To me, it makes the most sense to use the 0dB version and just use a ReplayGain tag to adjust the "volume up" (even though, when played in foobar2000 at the same volume settings as the corresponding .dsf file, the 0dB file is most definitely quieter).
Sorry, let me summarize for the average guy (like me).
All well-designed modern amplifiers are perfectly transparent; there is no point in choosing one over another.
Tone controls are badly implemented, they are useless.
People don't care about HIFI when they buy their HIFI stuff, they just want to feel good and happy.
1) Character in sound raved about in reviews
a. Magazines make money selling ad space. Negative reviews will garner fewer sales for both the manufacturer and the magazine. This should be easy to understand.
b. Most reviewers are both stupid and batshit crazy.
2) The existence of tone controls in the signal chain degrade purity of sound (unless the designers are daft this simply isn't the case; at least not audibly so)
a. Manufacturers include bypass switches or simply omit tone controls because that is what the market wants.
b. Consumers of these products are both stupid and batshit crazy. The same goes for boutique designers who actually believe this nonsense. Part of the stupidity lies in their cluelessness about Fletcher-Munson. It also suggests that people who want tone controls omitted, because they believe (based on what they hear) that these controls degrade sound quality, are deaf.
TL;DR: Supply and demand.
256 kbps - because at that rate I am 100% on the secure side of sound quality - and it works great for converting Amiga/C64 music too.
If you're fine with that bitrate you may as well just use a time domain subband codec like MusePack or hell, even MP2, and enjoy the perfect temporal resolution. Throwing more bits at a transform codec doesn't do much to fix their fundamental shortcomings beyond 192 kbps.