Topic: lossyWAV 1.4.2 Development (was 1.5.0)

Re: lossyWAV 1.4.2 Development (was 1.5.0)

Reply #150
@Nick.C What's our next step here? What should be improved here, LossyWAV or Vorbis?
• Join our efforts to make Helix MP3 encoder great again
• Opus complexity & qAAC dependence on Apple is an aberration from Vorbis & Musepack breakthroughs
• Let's pray that D. Bryant improve WavPack hybrid, C. Helmrich update FSLAC, M. van Beurden teach FLAC to handle non-audio data


Reply #151
Quote from Reply #150:
@Nick.C What's our next step here? What should be improved here, LossyWAV or Vorbis?
In my opinion, Vorbis should be fixed to properly handle chunks of odd length, in line with the Microsoft/IBM RIFF specification, which is based on the EA/Commodore IFF specification that preceded it.

A quick and nasty solution would be for lossyWAV to increase the length and stated size of the FACT chunk, if required, to ensure that it is always even; however, this would perpetuate support for applications that don't handle WAV files correctly.
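For reference, the RIFF rule at issue can be sketched in a few lines of Python. This is an illustration, not lossyWAV's actual code, and the chunk layout below is invented for the example: a chunk whose stated size is odd is followed by one pad byte that is not counted in that size, and a reader that forgets to skip the pad byte misparses every chunk that follows.

```python
import struct

def read_chunks(data: bytes):
    """Iterate over (chunk_id, payload) pairs in a RIFF body.
    Per the RIFF/IFF rule, a chunk with an odd payload size is followed
    by one pad byte that is NOT counted in the stated size."""
    chunks, pos = [], 0
    while pos + 8 <= len(data):
        cid, size = struct.unpack_from("<4sI", data, pos)
        chunks.append((cid, data[pos + 8 : pos + 8 + size]))
        pos += 8 + size + (size & 1)  # skip the pad byte after an odd-sized chunk
    return chunks

# An odd-sized chunk (3-byte payload) plus its pad byte, then a second chunk.
# A reader that ignores the pad byte would look for "data" one byte too early.
body = (b"odd " + struct.pack("<I", 3) + b"abc" + b"\x00"
        + b"data" + struct.pack("<I", 4) + b"\x01\x02\x03\x04")
assert read_chunks(body) == [(b"odd ", b"abc"), (b"data", b"\x01\x02\x03\x04")]
```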



Reply #152
Please find attached a new beta release of lossyWAV.

lossyWAV beta 1.4.3c, 02/05/2024 (expires 31/12/2024)
  • Major bug identified after @guruboolez discovered that lossyWAV would not successfully convert large files in foobar2000 - i.e. those exceeding 4GiB of uncompressed data - when using the --ignore-chunk-sizes option. Many thanks to @Case for answering my questions on foobar2000, which made identifying the bug much easier. The calculation of the padding bytes to write after each chunk incorrectly assumed that the number of bytes of processed WAV data would not exceed 4GiB, which caused the program to fail when attempting to write exabytes of padding...
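A hypothetical reconstruction of that class of bug, sketched in Python (the real lossyWAV code is not shown here, and the function name is invented): if the running byte count is computed in 32-bit arithmetic, it wraps once the file passes 4GiB, and subtracting the wrapped value from the true 64-bit position is mistaken for an absurdly large amount of padding.

```python
MASK32 = 0xFFFFFFFF  # unsigned 32-bit wrap-around

def buggy_padding(position: int, chunk_bytes: int) -> int:
    """Hypothetical reconstruction of the bug class: the chunk end is
    computed in 32-bit arithmetic, so past 4 GiB it wraps, and the
    difference from the true 64-bit position is mistaken for padding."""
    end32 = (position + chunk_bytes) & MASK32
    return (position + chunk_bytes) - end32   # correct answer would be 0

assert buggy_padding(2**30, 4096) == 0        # fine below 4 GiB
assert buggy_padding(2**32, 4096) == 2**32    # absurd "padding" once the count wraps
```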


Reply #153
Complete n00b question that I didn't find much of an answer to - I see no -m in the signature immediately above, at least.

Reducing bit depth not by zeroing out bits in both stereo channels, but by averaging them so that they are zeroed in a side/difference channel: is that what -m does?
Or is that something that can't really be exploited for any bang for the buck?


Reply #154
Quote from Reply #153:
Complete n00b question that I didn't find much of an answer to - I see no -m in the signature immediately above, at least.

Reducing bit depth not by zeroing out bits in both stereo channels, but by averaging them so that they are zeroed in a side/difference channel: is that what -m does?
Or is that something that can't really be exploited for any bang for the buck?
The -m, --midside parameter only works with stereo content. It determines the bits to remove by analysing the mid and side channel data; the calculated bits to remove are then removed from each of the stereo channels in the WAV data. This means that the bits-to-remove value is the same for each channel (which is not normally the case), so the overall bits to remove for the processed data will likely be lower.
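A minimal sketch of the bit removal itself, for illustration only (lossyWAV actually rounds with dither and adaptive noise shaping, which this omits, and the function name is invented): removing n bits means rounding each sample to a multiple of 2**n, zeroing the low bits.

```python
def remove_bits(sample: int, bits: int) -> int:
    """Zero the lowest `bits` bits by rounding the sample to the nearest
    multiple of 2**bits (rounding, not truncating; dither and noise
    shaping are omitted from this sketch)."""
    step = 1 << bits
    return ((sample + step // 2) // step) * step

# With -m, --midside the SAME bits-to-remove value is applied to both
# channels, so the zeroed low bits line up between left and right.
bits = 4
left, right = remove_bits(12345, bits), remove_bits(-6789, bits)
assert left % (1 << bits) == 0 and right % (1 << bits) == 0
```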


Reply #155
Quote from Reply #154:
the calculated bits to remove are then removed from each of the stereo channels in the WAV data.

From each ...
So there is an in-principle-possible way to save "up to half a bit" extra, at a likely-small fidelity penalty: if we decimate each channel down to N bits, then design the dither so that the Nth bit is common to both channels - and then reduce the "side" channel by one more bit, except for frames where FLAC uses dual-mono encoding?
Or does it already do that?


Reply #156
The bit removal process is carried out on each channel separately, using the calculated bits-to-remove value for that channel. It may remove fewer bits than desired - shown in the output as "bits lost" - depending on whether too many new clips are encountered, or whether any of the feedback (if selected) breaches its limits. Note that if --midside or --linkchannels has been selected, the bit removal process is repeated for any channel where the actual bits removed is higher than the minimum for that codec block.
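The back-off from desired to actual bits removed might be sketched like this. This is a hypothetical simplification - the function name and the clipping test are invented, feedback limits are ignored, and only the positive full-scale limit of 16-bit PCM is checked:

```python
def actual_bits_to_remove(samples, desired, limit=32767, max_new_clips=0):
    """Hypothetical sketch: start from the desired bits-to-remove and
    back off one bit at a time while rounding would turn more than
    `max_new_clips` previously unclipped samples into clipped ones.
    The difference desired - result corresponds to "bits lost"."""
    for bits in range(desired, -1, -1):
        step = 1 << bits
        new_clips = sum(
            1 for s in samples
            if abs(s) <= limit                                 # not already clipped
            and abs(((s + step // 2) // step) * step) > limit  # but would clip
        )
        if new_clips <= max_new_clips:
            return bits
    return 0

samples = [32760, 100, -32000]
assert actual_bits_to_remove(samples, 6) == 3  # rounding at 2**6 would push 32760 past full scale
```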

This would be further complicated if adaptive noise shaping were in use (which it is by default), as the filters are channel-specific.