I feel like an ass hijacking the OP's thread, but since this is related to Audacity....
As an experiment, I imported a FLAC file into Audacity and, without making any edits whatsoever, exported it back to FLAC. I then used foobar2000 to compare the two files. This was the result:
Differences found in 1 out of 1 track pairs.
Comparing:
"C:\Users\Anthony Bertorelli\Desktop\FLAC #01.flac"
"C:\Users\Anthony Bertorelli\Desktop\FLAC #02.flac"
Differences found: 42082194 sample(s), starting at 0.0000680 second(s), peak: 0.0003052 at 120.1485714 second(s), 1ch
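To convince myself this was just dither, I sketched the round trip in Python with numpy. To be clear, the names and the plain triangular (TPDF) dither here are my own stand-ins, not Audacity's actual code; I don't know exactly what Audacity's default "Shaped" dither does internally, so the peak won't match my foobar2000 numbers, but the idea should be the same:

import numpy as np

rng = np.random.default_rng(0)
# Stand-in for decoded 16-bit CD audio (one million random samples).
pcm16 = rng.integers(-32768, 32767, size=1_000_000, dtype=np.int16)

# Import: int16 -> 32-bit float in [-1.0, 1.0). This step is exact.
audio = pcm16.astype(np.float32) / 32768.0

# Export WITH dither: add +/-1 LSB triangular noise, then round to 16 bits.
tpdf = rng.uniform(-0.5, 0.5, audio.size) + rng.uniform(-0.5, 0.5, audio.size)
dithered = np.clip(np.round(audio.astype(np.float64) * 32768.0 + tpdf),
                   -32768, 32767).astype(np.int16)

# Export WITHOUT dither: plain rounding back to 16 bits.
plain = np.clip(np.round(audio.astype(np.float64) * 32768.0),
                -32768, 32767).astype(np.int16)

print("with dither, samples changed:   ", np.count_nonzero(dithered != pcm16))
print("without dither, samples changed:", np.count_nonzero(plain != pcm16))

In this sketch, the dithered export changes a fraction of the samples, each by exactly one 16-bit step (1/32768, about 0.00003), which is the same kind of inaudibly small difference the comparator flagged, while the no-dither path comes back bit-identical.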
This would have given me cause for alarm a couple of days ago. Knowing what I know about dithering (which still isn't much), I set Audacity's dither option to "None" and repeated the experiment.
All tracks decoded fine, no differences found.
Comparing:
"C:\Users\Anthony Bertorelli\Desktop\FLAC #01.flac"
"C:\Users\Anthony Bertorelli\Desktop\FLAC #03.flac"
No differences in decoded data found.
For this experiment, I didn't change the Default Sample Format; I left it at 32-bit float. I believe CD audio is 16-bit. Would I be correct in assuming that Audacity converts the imported FLAC up to 32-bit float on import ("upsampling" isn't quite the word, since it's the bit depth changing, not the sample rate) and then back down to 16-bit (or 24-bit) on export? Hence the added dither?
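If that assumption is right, the bit-identical no-dither result would make sense: every 16-bit value survives the trip through 32-bit float exactly, because float32's 24-bit mantissa can hold any 16-bit integer, and scaling by a power of two only changes the exponent. A quick check over all 65,536 possible sample values (again just a numpy sketch, not Audacity's actual code):

import numpy as np

# Every possible 16-bit sample value.
all16 = np.arange(-32768, 32768, dtype=np.int32)

# int16 -> 32-bit float: exact for every value.
as_float = (all16 / 32768.0).astype(np.float32)

# float32 -> int16 with no dither: plain rounding recovers the original.
back = np.round(as_float.astype(np.float64) * 32768.0).astype(np.int32)

print("all 65536 values round-trip exactly:", np.array_equal(all16, back))

That prints True, so with dither off, the float detour shouldn't change a single sample.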