
Topic: Status of lossless codecs' error handling?

Status of lossless codecs' error handling?

Doing some bit-flipping, I cannot reproduce all the claims in the wiki's lossless comparison. TTA seems worse than the wiki indicates; Monkey's-by-way-of-ffmpeg seems slightly better than the wiki indicates on signals ffmpeg can handle and otherwise worse.

I have boldfaced the discrepancies. Please fill in any corrections.
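For anyone who wants to reproduce the bit flipping: it amounts to no more than this (a minimal Python sketch - file name, offset and bit position are placeholders, pick your own):
Code:
# Minimal sketch: flip one bit in a copy of a file.
import shutil

src, dst = "original.flac", "corrupted.flac"   # placeholders
offset, bit = 1_000_000, 3                     # pick a byte well inside the audio data

shutil.copyfile(src, dst)
with open(dst, "r+b") as f:
    f.seek(offset)
    b = f.read(1)[0]
    f.seek(offset)
    f.write(bytes([b ^ (1 << bit)]))           # flip one bit of that byte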

Also, WavPack / OptimFROG have implemented ultra-fast integrity checking without decoding. So, when you have copied a big failing hard drive and want to check for any corruption, the slowest-decoding codec would finish the check faster than the fastest one - unless FLAC has some neat option I don't know about.
And since there has been such a discussion over using 3rd-party tools to rectify reference decoders' limitations: if you are seriously considering ALAC, then https://foobar.hyv.fi/?view=foo_audiomd5 is your friend for files where AccurateRip verification is not an option.


TTA. Like Monkey's below, this becomes a discussion of format vs. reference decoder.
(-) Reference decoder will not detect errors? The wiki's criterion is detect & warn, and play on. At least I get no warning when decoding with the exe (using tta-2.3-64bit-sse4 for Windows). But ffmpeg decoding will scream "Invalid data found when processing input".
(-) The reference decoder thus produces loud static. It can be from a few hundred samples to a second. (ffmpeg decoding will drop the entire frame, which is slightly above a second.)

Monkey's Audio. This has a history about whether the format is resilient.
(+) Can detect it - but only by running a full decode? (-v consistently takes the same time as decoding, which could be a lot on a big drive.)
(-) The official Monkey's decoder will abort - though the official SDK (fb2k tested) can seek to a position right after the error and play from there. To salvage anything past the error, one must resort to 3rd-party decoders.
(!) 3rd-party recovery with ffmpeg: yes, even for "Insane". Don't expect miracles - dropouts could be > 20 seconds on Insane (likely more if you feed it less-than-CDDA resolution) - but salvaging anything at all out of Insane is better than the wiki indicates (with the reservations that entails).
(-) 3rd-party decoders support a limited number of formats (although recent ffmpeg has fixed 24-bit support). So with non-CDDA material it is not certain you can salvage even a "Fast" .ape. That is worse than the wiki indicates.

WavPack / OptimFROG: These have gained a new feature that is IMHO quite valuable, especially for OFR, which decodes slowly. The damage control is valuable too.
(++) Can detect - and can even detect without decoding, much faster than any format that has to be decoded to check. Can also optionally use MD5.
(+) Damage control: they appear to mute a broken block.
(-) Dropout might be a bit long (seems to be half a second-ish for WavPack, more for OFR)

FLAC
(+) Can detect it - but only by running a full decode? Even though FLAC has blockwise checksums, there is no other way to check than decoding? (See the sketch below.)
(+) Damage control here too, it seems? Only tried a few, appears muted.
(+) Can play and decode through errors. Dropout = one block. (4096 samples/ch by default.)
(!) It seems like ffmpeg salvages more, as it doesn't drop the block. But, ffmpeg decoding does not notice the corruption.
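As far as I can tell, the only built-in check is the decode-based test mode, flac -t. A sketch for sweeping a folder with it, assuming flac.exe is on the PATH (folder path is a placeholder):
Code:
# Sweep a folder with FLAC's decode-based test mode; nonzero exit = failure.
import pathlib, subprocess

for f in pathlib.Path(r"C:\music").rglob("*.flac"):
    r = subprocess.run(["flac", "-t", "-s", str(f)], capture_output=True, text=True)
    if r.returncode != 0:
        print("FAILED:", f)
        print(r.stderr.strip())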

TAK
(+) Can detect it - but only by running a full decode? Like FLAC here?
(+) Damage control here too, it seems? Only tried a few, appears muted.
(+) Can play and decode through errors. Dropout seems to be around 1/4 second?! More than FLAC, less than WavPack/OFR, but:
(!) Like for FLAC, ffmpeg salvages more audio than Takc.exe does.

ALAC in MP4
(-) As is well known, it cannot detect errors.
(+) Happily decodes through errors of course, and apparently doesn't lose much: foo_bitcompare would report only a few samples wrong - and among a handful of attempts, I had one with six samples wrong and one with just one sample wrong (both at -90.31 dBTP!)
Dropout could be audible, but nothing like TTA. Actually, only FLAC's short mute could qualify as less annoying - and that presumes that FLAC always mutes.


Re: Status of lossless codecs' error handling?

Reply #1
Quote
ffmpeg salvages more

for WavPack as well!

So - using foo_bitcompare - decoding with ffmpeg gives fewer differing samples for the following, due to not zeroing out the entire block/frame:
FLAC
TAK
WavPack
All keep their original length, hence "dropping" was a bad word.

For TTA it is the opposite: the reference decoder outputs the correct length with most samples correct (though it can give full static for a second), while ffmpeg drops the block/frame (making the file shorter).
For Monkey's, ffmpeg drops the rest of the corrupted block, while the reference decoder drops the rest of the file.

ALAC? Zero difference between ffmpeg-decoded and refalac-decoded corrupted file. OFR? No alternative.
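If you don't have foobar2000 at hand, the differing-sample counts can be approximated with a few lines of Python on the two decoded .wav files. A rough sketch - it assumes both decodes are 16-bit PCM with identical parameters, and unlike foo_bitcompare it does nothing clever about differing lengths or offsets (file names are placeholders):
Code:
# Rough stand-in for foo_bitcompare: count differing 16-bit samples between
# two decoded .wav files. zip() simply stops at the shorter one.
import wave, array

def count_diffs(path_a, path_b):
    with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
        sa = array.array("h", a.readframes(a.getnframes()))
        sb = array.array("h", b.readframes(b.getnframes()))
    return sum(x != y for x, y in zip(sa, sb))

print(count_diffs("ref_decode.wav", "ffmpeg_decode.wav"))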




Re: Status of lossless codecs' error handling?

Reply #2
Some remarks regarding TAK:

- If a frame is damaged, you will lose the whole frame.

- Frame size depends on the preset and the sampling rate. -p4 uses the biggest frames: 250 ms duration, but limited to 16384 samples per channel. Therefore the maximum frame duration for a sampling rate of 96 kHz is 16384 / 96000 = 0.171 sec. (See the small sketch after this list.)

- The command line version Takc replaces damaged frames with silence. The GUI version Tak can alternatively remove the frames and also provide a log file with info about the position and size of the damage(s). See "Decompress->Options->Error handling".

- A fast integrity check without full decoding can be implemented. I didn't know that this could be useful. But now it's on my to do list.
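Putting the -p4 frame-duration rule above into code, just to make the arithmetic explicit (a small sketch):
Code:
# -p4 frame: 250 ms, capped at 16384 samples per channel.
def p4_frame_seconds(sample_rate: int) -> float:
    return min(0.250, 16384 / sample_rate)

print(p4_frame_seconds(44100))   # 0.25 s (11025 samples, under the cap)
print(p4_frame_seconds(96000))   # ~0.171 s (the 16384-sample cap kicks in)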

BTW: Nice work!

Re: Status of lossless codecs' error handling?

Reply #3
Ah! So the GUI has even more. This was a job for the CLI ... but looking at what I got in the folder: Takc also wrote a "Tak_Deco_Error.txt" with the position as well - that is more information than -t gives.

Integrity check without decoding is probably less essential at TAK/FLAC decoding speeds than for OptimFROG - but it makes a lot of sense for encoders which leave the MD5 sum optional.

Re: Status of lossless codecs' error handling?

Reply #4
Quote
(+) Can detect it - but only by running a full decode? Even though FLAC has blockwise checksums, there is no other way to check than decoding?

The two most CPU-intensive tasks during FLAC decoding are MD5 summing and reading the residual. According to profiling, the decoding CPU time of a FLAC file encoded with -8 breaks down to roughly 30% reading the residual, 30% MD5 summing, 20% restoring the signal from the residual and the LPC model, and 10% checking the frame CRC. FLAC frames do not store their length, so the only way to reliably find the start of the next frame is to read the current one completely; there is no efficient way to skip that. So I'd say the most FLAC could improve on this front (assuming the disk drive is not the limiting factor) is getting about twice as fast - not as dramatic an improvement as OptimFROG's.
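Reading those percentages back as a rough upper bound - assuming a decode-free check could skip the MD5 summing and the signal reconstruction, but not the residual reading, the CRC check or the remaining overhead:
Code:
# Profile from above; a check that skips MD5 and signal reconstruction still
# has to read the residual, check the CRC and carry the remaining ~10%.
profile = {"residual": 0.30, "md5": 0.30, "restore": 0.20, "crc": 0.10, "other": 0.10}
must_keep = profile["residual"] + profile["crc"] + profile["other"]   # 0.50
print(f"best-case verification speed-up: {1 / must_keep:.1f}x")       # ~2.0x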
Music: sounds arranged such that they construct feelings.

Re: Status of lossless codecs' error handling?

Reply #5
Very interesting results, Porcus; thanks for testing this!   8)

With respect to WavPack, I decided that the safest thing was to mute anything that might possibly be incorrect. So if an error is detected, either from a corrupt value being generated mid-frame or the checksum failing at the end, I zero all the samples from that frame. My thinking was that it’s better to lose a little valid data than to play corrupt audio, especially when listening in headphones, as corrupt data tends to decode into loud static.

Of course, if the caller into the decoder is requesting very short blocks of data, or even single samples, then some audio would already be returned before the error was detected. In fact, by doing this intentionally, it would be possible to “cheat” the library and behave more like FFmpeg. Maybe that should be an option?

Re: Status of lossless codecs' error handling?

Reply #6
It is easier to tell what is bad handling than what is optimal handling ...

The best is of course to get a warning in your face and go to your backups and retrieve it. Salvaging audio is for those who don't have a backup, and we aren't that stupid more than half the time, are we?
Or maybe in the following situation it is good to get out as much as possible:
CDDA rip, small corruption, you want to try a CUETools repair. Much better chances if as few samples as possible are wrong. A CUETools repair is more handy than taking a USB pen to my off-site backup.


But, but: do such small errors as a bit flip really happen? Nice benchmark, sure, but what is "the smallest file corruption per scenario"?

* Defective RAM, will that ever corrupt a single bit? Sure there is some mention of how a single bit wrong is bad in this paper by Google, but the mindset there is that "one or more bits wrong" is bad.

* Botched file write? Most file systems default to 4k block size. And if a file write fails to assign the blocks correctly in the file allocation table - has happened to me on a failing mobo losing its USB handling - files may break in two, and one would start reading into unclaimed space with whatever content is there - effectively pulling a Monkey's on it. (And, ahem, certain file-acquiring techniques that may or may not be orthogonal to TOS #9 deliver files in larger chunks than 4k. Surely many have listened to music with a few 64k chunks missing ...)

* Botched writing of beginning-of-file tags past padding, where there is very little audio in that block? That could be less than 4k of audio, but if the part of the file indicating the beginning of the audio is gone, that is a whole new issue. For all I know the file could be totally dead then.

Other scenarios? How do you folks destroy your files?  :-X


Re: Status of lossless codecs' error handling?

Reply #7
Interesting comments on ALAC. Though Apple's decoder seems quite buggy when there are horrible losses on an HLS stream: it will often decode what ends up being full-volume, full-spectrum noise. Wish they'd add extensions to the container for ALAC to record integrity info for the ALAC packets, and deal with that.

Re: Status of lossless codecs' error handling?

Reply #8
Quote
Defective RAM, will that ever corrupt a single bit?
Yes. Consumer hardware typically uses RAM with no error detection, so intermittent single-bit faults can survive a while undetected.

Re: Status of lossless codecs' error handling?

Reply #9
I got error detection in ALAC, wtf? Not from a single bit flip - I needed more.


So kode54 got me curious, and I tried to corrupt more of the file. Sure, it was static, but a very short bzzt - consistent with only the affected samples.

So I tried to overwrite even more. Turns out, when I destroyed a frame header, I got what I thought didn't exist (after all, I have read my HA, haven't I?), namely ... ALAC error detection!

foo_verifier output:
Code:
MD5: 7B22472FE53CA32E8F934FCA86C84F43
CRC32: 82953A40
Warning: Reported length is inaccurate : 1:30.000000 vs 1:29.907120 decoded
Error: Decoding error: Unsupported format or corrupted file, frame: 246 of 969

From the output of ffmpeg decoding to .wav:
Code:
[alac @ 00000000003d4d80] Syntax element 4 is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that
 your file has a feature which has not been implemented.
Error while decoding stream #0:0: Not yet implemented in FFmpeg, patches welcome
[alac @ 00000000003d42c0] Syntax element 2 is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that
 your file has a feature which has not been implemented.
Error while decoding stream #0:0: Not yet implemented in FFmpeg, patches welcome
size=   15472kB time=00:01:30.00 bitrate=1408.3kbits/s speed= 374x

And then refalac -D ... hangs. Must be killed by task manager.

Re: Status of lossless codecs' error handling?

Reply #10
iTunes has a worse problem, because it's using archived HLS-fetched bundles, so apparently there's even less error resilience/detection. I'd share one, but they're encrypted.

Re: Status of lossless codecs' error handling?

Reply #11
Deleting a byte! (does that even happen except by sabotage?)

So digging back, which I should have done in the first place:
@rjamorim tested both corrupted byte and deleted byte, see https://hydrogenaud.io/index.php/topic,33226.msg316496.html#msg316496

So I did the same. The same 90-second wav, encoded, opened each in a hex editor, scrolled down to some arbitrary point, hit delete and yes-I-am-sure-I-want-a-shorter-file, and saved. Opened all in fb2k and verified. Played. ffmpegged to .wav.
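The hex-editor step, scripted (a sketch - file names and offset are placeholders):
Code:
# Delete a single byte at an arbitrary offset, writing to a new file.
src, dst, offset = "90.-5.flac", "90.-5.byte-deleted.flac", 2_000_000

data = open(src, "rb").read()
with open(dst, "wb") as f:
    f.write(data[:offset] + data[offset + 1:])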

The good: TAK. Great reporting there! foo_verifier gives the position of the corruption. Plays & decodes with one block muted, to the right length. (I tried p0, p2, p4, since I messed that up last time.)
To salvage more audio: ffmpeg gives more data at the risk of a static burst - but ffmpeg drops a block on -p4 (shortening the audio by 0.25 seconds).

Good #2: WavPack. Tried -hh. I can't find the position of the corruption with WavPack alone (easy if you decode with ffmpeg ... this time you don't even have to run a bitcompare). Peculiarity: wvunpack -vv says 2 errors, -v says one block is erroneous.
Plays & decodes with one block muted to the right length.
ffmpeg: does not like this! Truncates from the error onward.
 
Good but with a minus or two: FLAC. Like WavPack, it isn't straightforward for a novice user to extract the position of the corruption. (But VUPlayer's audiotester.exe finds it, unlike for WavPack.)
But flac.exe -d -F drops a block here, shortening the audio. So does ffmpeg. (FLACCL and Flake refuse to decode.)
Also, flac.exe -d -F outputs a .wav for which foo_verifier gives "Warning: Indicated RIFF size exceeds actual file size, file appears to be truncated
Warning: Malformed or truncated chunk found at 36 bytes, claimed length 15876000 bytes, truncated to 15871392 bytes".
I am not sure if that is bad. It does not matter for playback (apart from the dropped block) - and it warns the user even after decoding.

The bad: Monkey's, and now also OptimFROG and ALAC. Deleting a byte is now detected, and it kills the rest of the file. Monkey's and ALAC need ffmpeg: Monkey's own decoder refuses the file, and refalac hangs and must be killed in Task Manager. ALAC gives a static burst, but a forgivably short one, before truncating the rest.

The ugly: TTA in the reference decoder. Happily decodes the rest of the file to static with a short silence every block (every second). At least ffmpeg kills the rest of the file.


Not saying that the bad rap on ALAC is ill-deserved, but we might have a new worst-in-show. That is, unless TTA should be considered obsolete like WMAL. Worst-in-show or not-in-show?


foo_verifier output which indicates what I tried. The "[repeats until]" is manually inserted.
Code:
Item: "C:\tmp\byte-deleted\90.ALAC.m4a"
Error: Object "mdat" at 6334 bytes is truncated.
Error: Media data is truncated, the file may not be playable completely.
Error: Decoding error: Unsupported format or corrupted file, frame: 181 of 969
[repeats until]
Error: Decoding error: Unsupported format or corrupted file, frame: 968 of 969
Error: MP4 frame position outside valid range
Error: Reading from MP4 file failed: frame 969 of 969.
Error: Unsupported format or corrupted file

Item: "C:\tmp\byte-deleted\90.-0.flac"
MD5: 615E0E818E7D43867FB508A2AA3C5865
CRC32: A20834D8
Warning: Reported length is inaccurate : 1:30.000000 vs 1:29.973878 decoded
Error: Corrupted FLAC stream
Error: MD5 mismatch

Item: "C:\tmp\byte-deleted\90.-5.flac"
MD5: 8755034FA9ECD3046D91CF6AB27413ED
CRC32: C3E3E3A4
Warning: Reported length is inaccurate : 1:30.000000 vs 1:29.907120 decoded
Error: Corrupted FLAC stream
Error: MD5 mismatch

Item: "C:\tmp\byte-deleted\90.fast.ape"
Error: Unsupported format or corrupted file

Item: "C:\tmp\byte-deleted\90.ape.insane.ape"
Error: Unsupported format or corrupted file

Item: "C:\tmp\byte-deleted\90.-0.ofr"
MD5: 236AA703839997C274E01F2B771FA809
CRC32: 12CC478D
Error: Recoveable errors encountered while decoding starting around sample 1090560
Error: Reported length is inaccurate : 1:30.000000 vs 0:29.976961 decoded

Item: "C:\tmp\byte-deleted\90.-8.ofr"
MD5: BEC4E7A69725AE11793678882195DB70
CRC32: DEED5B08
Error: Recoveable errors encountered while decoding starting around sample 1058816
Error: Reported length is inaccurate : 1:30.000000 vs 0:29.976961 decoded

Item: "C:\tmp\byte-deleted\90.-p0.tak"
Error: Audio data damaged. Damaged frame located at 0:35.29
Error: Audio data damaged. Damaged frame located at 0:35.29

Item: "C:\tmp\byte-deleted\90.-p2.tak"
Error: Audio data damaged. Damaged frame located at 0:18.74
Error: Audio data damaged. Damaged frame located at 0:18.74

Item: "C:\tmp\byte-deleted\90.-p4.tak"
Error: Audio data damaged. Damaged frame located at 0:14.75
Error: Audio data damaged. Damaged frame located at 0:14.75

Item: "C:\tmp\byte-deleted\90.tta"
Error: Can't read from file

Item: "C:\tmp\byte-deleted\90.hh.wv"
MD5: F7A997DC920E44874D25141D28783F1A
CRC32: 2FA14A09
Error: WavPack CRC mismatch


12 items could not be correctly decoded.

List of undecodable items:
"C:\tmp\byte-deleted\90.ALAC.m4a"
"C:\tmp\byte-deleted\90.-0.flac"
"C:\tmp\byte-deleted\90.-5.flac"
"C:\tmp\byte-deleted\90.fast.ape"
"C:\tmp\byte-deleted\90.ape.insane.ape"
"C:\tmp\byte-deleted\90.-0.ofr"
"C:\tmp\byte-deleted\90.-8.ofr"
"C:\tmp\byte-deleted\90.-p0.tak"
"C:\tmp\byte-deleted\90.-p2.tak"
"C:\tmp\byte-deleted\90.-p4.tak"
"C:\tmp\byte-deleted\90.tta"
"C:\tmp\byte-deleted\90.hh.wv"

Re: Status of lossless codecs' error handling?

Reply #12
Quote
Played. ffmpegged to .wav.
Decoded to .wav by reference decoder and also by ffmpeg.

"Plays & decodes": plays in fb2k, decodes with reference decoder.  ffmpeg-decoding mentioned separately.


Quote
we might have a new worst-in-show. That is, unless TTA should be considered obsolete like WMAL. Worst-in-show or not-in-show?

That is a question ... more for the wiki discussion section, but anyway, for what it is worth:
I may have contributed to TTA not being removed from the lossless comparison table, so I re-ran such a "test" against a certain Russian site. (If you know any of them, you know this one.) The number of hits was quite deceiving; I went to the end of the search list and saw there were far fewer. Bing (!) seems to do this in a way where you can easily reproduce it, via the &first= parameter at the end.
So search syntax is https://www.bing.com/search?q=%22.flac%22+%22lossless%22+site%3ruyoucanguesstheresthere.org&first=666 and with ".flac" replaced. Number of hits per file extension:

843: .flac
816: .wv [apparently many of their users are fond of embedded cuesheets!]
788: .wav [might include bootlegs with information like "Lineage: microphone -> wav -> ..."]
764: .ape
759: .tak
699: [Here I searched for %28%22.mp4%22+OR+%22.m4a%22%29+ALAC]
232: .tta
152: .shn [like .wav, this is likely to include bootlegs which have been retrieved from once-.shn]
54: .ofr - and that includes no actual audio shares, only mentions of software with OFR support. Hunch: subtract at least fifty from each of the above.

So ... TTA isn't non-existent, but even on self-proclaimed home ground it seems by now to be much smaller than TAK (cf. the statement in the link). More than the frog at this site, obviously - but OFR has IMHO long earned its noteworthiness from its compression ratio, so it takes more to remove that from the table.
But then there is the collection mentioned here of fan-made remixes (I think?!), which might be the biggest TTA userbase by far.

Re: Status of lossless codecs' error handling?

Reply #13
More interesting results!

I haven't run any tests, but I'm thinking that the poor results in some cases from removing a byte may be because the decoders assume word alignment when parsing, which of course is broken for the whole rest of the file after a byte is removed. And I suspect that it is probably a really unlikely form of corruption. A more realistic corruption might be deleting 256 consecutive bytes, or something along those lines.

I'm not excusing any failure here, and they should handle that gracefully, but I think I might know why.

Re: Status of lossless codecs' error handling?

Reply #14
Makes sense. I agree that it doesn't sound realistic for file reads - but I saw the test had been done, it was simple to do, and it tests something about re-syncs (... recall back in the days when people would actually splice MP3s by concatenating files?)

Now for an ignorant layman's question: shouldn't this also spawn a question about streamability? If the decoder doesn't find a block header unless it is precisely where expected, does that suggest ... ?
But then, as in this thread: is streamability something end-users should really care about?


Quote
A more realistic corruption might be deleting 256 consecutive bytes, or something along those lines.
Does anyone have low-level utilities around that can directly access a block (4k, typically) in the file system: identify a file, pick a block in the middle - with an editor indicating "block starts here in the file, ends here" - and zero out those bits?
(And is that "anyone" willing to do the test, so we kids don't have to play with such knives?)

Re: Status of lossless codecs' error handling?

Reply #15
Quote
If the decoder doesn't find a block header unless it is precisely where expected, does that suggest ... ?
As far as I know, streaming audio typically splits the data at block boundaries when encapsulating it for transport, so the decoder wouldn't need to search for headers. Maybe I just don't know very much about streaming audio.

In some cases, the problem is actually the inability to back up and read the file headers. For example, some very unusual non-subset FLAC files might not decode correctly without the STREAMINFO block in the file header.

Quote
(And is that "anyone" willing to do the test, so we kids don't have to play with such knives?)
No knives needed. Just grab your hex editor, seek to an address within the file that's evenly divisible by 4096 (or 0x1000 - hint, that means it'll end in three zeroes if the address is written in hex) and overwrite 4096 bytes with zeroes or data from another file or whatever you like.
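Or scripted, for the truly knife-averse (a sketch - the file name is a placeholder; do this on a copy):
Code:
# Zero one 4096-byte block at a 4096-aligned offset near the middle of the file.
import os

path = "somefile.wv"                              # placeholder - work on a copy
offset = (os.path.getsize(path) // 2) // 4096 * 4096
with open(path, "r+b") as f:
    f.seek(offset)
    f.write(bytes(4096))                          # 4096 zero bytes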

Re: Status of lossless codecs' error handling?

Reply #16
Embarrassed now, I didn't even think over the fact that files start at the beginning of a block  :-[

Re: Status of lossless codecs' error handling?

Reply #17
OK, so I did that test. Went to address 0x3FF000, marked up to 0x400000 and had it randomized.

foo_verifier screams at everything but ALAC and TTA.  And refalac and TTA happily decode, producing bursts consistent with what ffmpeg drops: 4096 samples for ALAC and 46080 samples for TTA.

Actually, ffmpeg drops samples from everything:
* Monkey's "High": 46080 here too, and the reference decoder drops the rest of the file as usual.
* FLAC -5: 16384 samples, twice what the reference decoder does, this time!
* TAK -p2: 0.125 seconds. Reference decoder returns original length, mutes apparently 0.125.
* WavPack -h: 0.500 seconds. Reference decoder returns original length, mutes apparently 0.500.

And then there was OptimFROG. It decodes to the original length ... and mutes around six seconds. That was a lot, but nearly unaffected by compression setting. I tried the same at a different address in the file to see if I had been "unlucky". No - the other way around: nine seconds gone. Not quite an insane monkey, but ...

Re: Status of lossless codecs' error handling?

Reply #18
Quote
WavPack / OptimFROG have implemented ultra-fast integrity checking without decoding.

Monkey's Audio too - in the GUI.

Or well, probably undocumented in the CLI then? The GUI can choose between a checksum on the encoded data, or doing a full decode.

Tested: verification time in seconds of a 230 MB file (uncompressed .wav size - 3:30 of 192kHz/24 bit) on internal SSD. Time for Monkey's Audio taken from the GUI, the others from timer64.exe by the 7-zip author; I used total time, presuming that this is what the Monkey's GUI gives me:

0.46 s: wv -hh verified with -vv (process time as low as 0.14). Other .wv about the same.
0.50 s: ape extra high or insane verified with GUI
0.7 s: ofr preset 5 verified (process time 0.5)
1.1 s: flac with no md5
1.6 s: flac
1.6 s: tak -p0
1.85 s: tak -p4
4 s: wv -h verified with -v
5 s: wv -hh verified with -v
13 s: ape extra high verified by decoding (also through GUI)
28 s: ape insane verified by decoding (also through GUI)
28 s: ofr preset 5 verified with --check (decodes)


Also tested: a spinning hard drive attached by USB3. Disk read time completely dominates: everything took about the same time as copying to the internal SSD - provided you use the fast verify options for the three codecs that support them.
Example: file size 1.8 GB (compressed file size, that is what matters!) - all spent 53 to 59 seconds verifying, about the same as copying to the internal SSD, and the fastest were FLAC and TAK, which had to decode.
Monkey's extra high took 55 seconds - but 200 seconds when selecting full decode in the GUI.
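For reference, the kind of timing loop this boils down to - a sketch using wall-clock time from Python rather than timer64.exe, with verify commands that came up in this thread (file names are placeholders, tools assumed on the PATH):
Code:
# Wall-clock timing of quick vs. full verification.
# (timer64.exe reports process/total time; this only measures elapsed time.)
import subprocess, time

commands = [
    ["wvunpack", "-vv", "file.wv"],     # WavPack quick verify (no decode)
    ["wvunpack", "-v", "file.wv"],      # WavPack full verify (decodes)
    ["flac", "-t", "-s", "file.flac"],  # FLAC test mode (decodes)
]
for cmd in commands:
    t0 = time.perf_counter()
    subprocess.run(cmd, capture_output=True)
    print(f"{time.perf_counter() - t0:6.2f} s  {' '.join(cmd)}")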

Re: Status of lossless codecs' error handling?

Reply #19
A very interesting topic - it would be nice if you tested lossy codecs as well.

I'm extremely interested in how Ogg Vorbis, Opus and M4A/AAC handle bit errors. Not so interested in MP3, which, though still widely used, has fallen from grace.

Re: Status of lossless codecs' error handling?

Reply #20
As far as I understand, that is very much up to players, and since there were a lot of "bad" MP3 files around, they often try to resync.

Encoded a 20 second file into five formats:
raw .aac / aac in .m4a / .mp3 / .ogg / .opus
and did the laziest two corruptions with HxD:

* Corruption 1: Go to address 0x10000 and overwrite the next half byte with a zero (a scripted version follows at the end of this post).
All files play and decode except aac, where foobar2000 truncates the rest of the file from the error.
mp3: no errors reported, no sign there was ever an error, went back and overwrote more bytes, same result.
For the rest: they play and decode, but both ffmpeg and foo_verifier report errors. In most cases an offending packet was dropped, shortening files by a quarter of a second (m4a) or ~ a second (ogg/opus).

* Corruption 2, expected to be more severe: Go to address 0x10000 and delete the previous half byte.
.m4a is not playable nor decodable.
The rest play/decode but throw errors or warnings (fb2k for ogg/opus only reporting the "minor" problem of a shorter length). Only one gave a very short burst of static: mp3.
Maybe a bit surprisingly, foobar2000 plays the .aac to the end despite having aborted on the presumably less severe corruption 1. Could be a coincidence.
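For corruption 1, here is the HxD step scripted as I read it: typing "0" at address 0x10000 zeroes the high nibble (one hex digit) of that byte (a sketch, placeholder file name):
Code:
# My reading of corruption 1: zero the high nibble of the byte at 0x10000.
# Placeholder file name; work on a copy.
path = "20s.opus"
with open(path, "r+b") as f:
    f.seek(0x10000)
    b = f.read(1)[0]
    f.seek(0x10000)
    f.write(bytes([b & 0x0F]))            # keep the low nibble, zero the high one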


Re: Status of lossless codecs' error handling?

Reply #21
Nowhere is the ffmpeg version used mentioned. FFmpeg can check much more or less with the custom -err_detect option. The default for -err_detect is crccheck now.

Re: Status of lossless codecs' error handling?

Reply #22
5.0. For Windows.

For the latter, I ran a decode to file, to simulate playback. Because I would run foo_bitcompare, it is easier/lazier to have file output than to try ffplay.

Re: Status of lossless codecs' error handling?

Reply #23
First, the updated qaac/refalac 2.73/1.73 no longer hangs on the worst-corrupted ALAC.

Quote
FFmpeg can check much more or less with the custom -err_detect option.
Makes no difference. Tested with corrupted ALAC / TTA: -err_detect ignore_err / explode / compliant / aggressive all produce a bitwise-identical output file.
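For the record, the comparison amounts to this: decode the same corrupted file twice with different -err_detect values and compare the outputs byte for byte (a sketch - file names are placeholders):
Code:
# Decode one corrupted file with two -err_detect settings and check whether
# the outputs are bitwise identical.
import filecmp, subprocess

src = "corrupted.tta"                       # placeholder
for mode, out in [("ignore_err", "a.wav"), ("explode", "b.wav")]:
    subprocess.run(["ffmpeg", "-y", "-err_detect", mode, "-i", src, out],
                   capture_output=True)
print("bitwise identical:", filecmp.cmp("a.wav", "b.wav", shallow=False))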

For TTA, this is getting bad indeed. I "found" (just by trying another file, this time a 192 kHz/24 bit one) a byte deletion that causes static / signal pick-up / static, etc.

* tta.exe decode: runs the decoding to at least "90%" in the status, likely all the way; only then does it abort with a "can't read from input file", with no other explanation. So it has some check at the end. And it throws away the entire file.
* foo_verifier (foo_input_tta uses reference library): consistent with tta.exe
* foobar2000 using the reference library: spits out static and, every now and then, a second of signal.
* ffmpeg decoding = ffmpeg playback in smplayer: drops most frames from the error on. Still some static bursts.


And part of this is touted in TTA's README under "Features":
Code:
  - CRC checking;
  - Corrupted data decoding;
Yeah, it plays corrupted data as static in your face. But - if it detects at all, and it cannot detect bit flips - then tta.exe -d refuses to salvage your data; it will run until the end and only then decide that nah, you cannot have a second output file.

Doing everything wrong, methinks?

 

Re: Status of lossless codecs' error handling?

Reply #24
Try -err_detect 0; I think it should not do CRC checks then.