
par2 (file/archive recovery) released

Par 2.0 was released.  Anyone who uses binaries newsgroups, and anyone who archives files and wants protection against errors, may be interested.  It has some important new features, like:
- Doesn't require equal-size files to be efficient. No need to rar and unrar shn/flac files when posting to binaries newsgroups, or for archiving files!
- Repairs only the missing or damaged parts of files, with no need to recreate entire files, so it uses recovery data much more efficiently.

There's a good comparison of par and par2 at this link. Other links below.  Most Windows users will want to start at the QuickPar site.  Don't miss the tutorial on the Operating Instructions page.

Mac users, take a look at MacPar Deluxe

- - - - - - -

A command line client for PAR 2.0 (called par2cmdline) is available
for download (in both source form and as a Windows executable) from:

http://sourceforge.net/project/showfiles.p...lease_id=157899

There is also a graphical client called QuickPar available (as a
Windows executable only), and you can download that from:

http://www.pbclements.co.uk/QuickPar/

Peter B Clements
Author of par2cmdline and QuickPar.
The Parchive Project
http://www.sourceforge.net/projects/parchive

- - - - - - -

POST EDITED to remove the section below; most of it no longer applies (or never did). The QuickPar client has been developing rapidly, and many binaries newsreaders can now download and decode incomplete files.


I'm not sure it's time to start using this in the newsgroups, as there seems to be no Mac version yet, and it's always best to use cross-platform tools on newsgroups.

Also, I get the impression that one of the best features - being able to repair only missing or damaged parts - won't help with binaries newsgroups as much as I would have thought.  Can't say I understand yEnc encoding, but someone gave me the impression that you need all the consecutive parts of a multipart file post in order to decode the next one.  That is, I'm told that if you're missing parts 9 and 42 of 50 yEnc coded parts, you can only decode 1-8.  So you can't use par2 to recover the missing parts efficiently, because you would have to decode them first.  Not sure I've got that quite right, but it seems (unless there's another way to code) a big benefit of this for newsgroups users might not be there at the moment.

par2 (file/archive recovery) released

Reply #1
This is wonderful news! I'll start trying it this weekend on my CD backups! I guess there are quite high expectations for this new version.

par2 (file/archive recovery) released

Reply #2
I like the new PAR2 programs, except for their poor support for symbols and special characters.

The console and GUI versions have problems with files like:
06 - ¿Y tú qué has hecho .mpc
07 - Veinte años.mpc

The console version even has problems with the & symbol.

I hope this can be fixed in the next version.

par2 (file/archive recovery) released

Reply #3
Quote
No need to rar and unrar shn/flac files when posting to binaries newsgroups, or for archiving files!

As far as I know, the original version also works with any kind of file(s), no need to compress.
You can do a parity set from a bunch of mp3s, for example.
"Jazz washes away the dust of everyday life" (Art Blakey)

par2 (file/archive recovery) released

Reply #4
Quote
Also, I get the impression that one of the best features - being able to repair only missing or damaged parts - won't help with binaries newsgroups as much as I would have thought.  Can't say I understand yEnc encoding, but someone gave me the impression that you need all the consecutive parts of a multipart file post in order to decode the next one.  That is, I'm told that if you're missing parts 9 and 42 of 50 yEnc coded parts, you can only decode 1-8.  So you can't use par2 to recover the missing parts efficiently, because you would have to decode them first.  Not sure I've got that quite right, but it seems (unless there's another way to code) a big benefit of this for newsgroups users might not be there at the moment.

Actually, with yEnc, all parts can be decoded and saved to disk independently, in exactly the same way as with UUEncode. You don't need to have decoded the earlier parts in order to decode later parts.

If a file is posted as 50 articles using yEnc and you are missing articles 9 and 42, then you can save parts 1-8, 10-41 and 43-50 either as separate files or combined into a single file.

Whether you combine all parts into a single file or leave them as three separate files, both QuickPar and par2cmdline will scan them and find all the valid blocks of data that belong to the original file. The repair process will then only require as much recovery data as there were data blocks that could not be found.

par2 (file/archive recovery) released

Reply #5
Quote
Quote
No need to rar and unrar shn/flac files when posting to binaries newsgroups, or for archiving files!

As far as I know, the original version also works with any kind of file(s), no need to compress.
You can do a parity set from a bunch of mp3s, for example.

You are quite correct.

RAR (and other splitters/archivers) are mainly used when posting single large files (because all PAR 1 effectively does with a single file is make a copy of it).

If you are posting a number of files, then it is perfectly possible to use PAR 1 directly on those files. The problem, however, is that MP3 files tend to vary quite a bit in size, so if you create a set of PAR 1 files, they will each be as large as the largest MP3 file.

Suppose there were a batch of 40 MP3 files with sizes varying from 3MB to 5MB and that 4 x 5MB PAR 1 files were posted along with them. If you were unlucky enough to lose 1 article from each of 4 of the files, then you are forced to download all 4 PAR 1 files. This amounts to an extra 20 MB download to cater for the loss of 4 articles (which might only amount to 1 MB worth of data).

The obvious answer to that is that you would not bother downloading the 4 incomplete MP3 files, you would just download the 36 good ones plus the 4 PAR files.

But what happens when you are missing 1 article from each of 5 of the files? In this case PAR 1 can do nothing, and you are forced to ask for another PAR 1 file (or a fill) to be posted.

With PAR 2 on the other hand, you would download the 5 incomplete files plus maybe 1.5MB of PAR 2 files and the repair would fix the incomplete files.
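To put rough numbers on the comparison above, here is a short sketch in Python (the file sizes and loss counts are the hypothetical ones from the example, not real measurements):

```python
# PAR1 vs PAR2 extra-download cost for the hypothetical 40-MP3 example above.
# PAR1 recovery files must each match the largest source file; PAR2 only
# needs enough ~article-sized blocks to cover the missing articles.

largest_file_mb = 5.0          # biggest MP3 in the batch
incomplete_files = 4           # files that each lost one article
article_mb = 0.5               # typical usenet article size (~500 KB)

# PAR1: one whole-file-sized recovery file per damaged file.
par1_extra_mb = incomplete_files * largest_file_mb

# PAR2 (block size == article size): one recovery block per lost article
# (ignoring the small packet-header overhead).
par2_extra_mb = incomplete_files * article_mb

print(par1_extra_mb)  # 20.0 MB of PAR1 data to repair 4 lost articles
print(par2_extra_mb)  # 2.0 MB of PAR2 blocks for the same repair
```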

par2 (file/archive recovery) released

Reply #6
Peter, thanks very much for the enlightenment on encoding issues.  That's very encouraging.  (Though see the "however" at the bottom.)

The problem of uneven file sizes seems far worse with lossless audio. Shn files for live shows may range from an introduction track ("Hello Springfield!") of 10 MB up to a continuous (unsplittable) track of 60-100 MB or more. To post a par1 set for such a show, you'd have to make each par file in the set 100 MB. And if you were missing a 10, 20 or 30 MB file, you'd need to download one of the 100 MB par files to recover it, wasting significant bandwidth. So rar is used to make files of equal size, so that you download one par file to replace each incomplete rar file, with no wasted downloads. There's a big incentive under par1 to use equal file sizes.

There's also a benefit, when using par1 with binaries newsgroup posts, to having not only equal file sizes but small file sizes. For example, assume that a post is made in 500 KB parts. (There are technical reasons why the parts posted to newsgroups generally should not be larger than about 400-500 KB.) If an average file is 40 MB, it will be posted in about 80 encoded parts. If a particular user's news server fails to get 1 of every 250 parts (just 0.4% missing data), then about 1 in 3 files will be incomplete and require a par for recovery (requiring 33% recovery data). But if the average file is 15 MB and thus 30 parts, only about 1 in 8 files will be incomplete and require a par. (As a rule of thumb, I've found that posting 1 par for every 6 rars - about 15% recovery data - works in practice when posting rar archives with roughly 15 MB file size, so these numbers are in the ballpark.) That is, the fact that par1 recovery occurs, at a minimum, at the file level has a big impact on the amount of par data needed, and creates a big incentive under par1 to use small file sizes.

(It also means that, practically speaking, the efficiency is much worse when posting large files than when posting small files like mp3, given that the 400-500 KB part size is constant. Using the above example, and assuming the mp3 files are about 5 MB, or 10 parts each, one missing 500 KB part would affect only 1 of every 25 mp3 files.)
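The 1-in-3 and 1-in-8 figures above follow from treating each article loss as independent. A small sketch to check them (the loss rate and part counts are the example's assumptions):

```python
def p_incomplete(parts_per_file, article_loss_rate):
    # Chance a file is missing at least one of its posted articles,
    # assuming each article is lost independently.
    return 1.0 - (1.0 - article_loss_rate) ** parts_per_file

# 40 MB file = ~80 articles, server drops 1 of every 250 articles:
print(round(p_incomplete(80, 1 / 250), 2))   # 0.27 -> roughly 1 file in 3-4
# 15 MB file = ~30 articles, same loss rate:
print(round(p_incomplete(30, 1 / 250), 2))   # 0.11 -> roughly 1 file in 8-9
```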

So today, if your goal is to get a set of shns to as many people as possible, some of whom have unreliable news servers, the process is: put the shns into a rar archive (one rar archive per disc, about 400-500 MB), with 10-15 MB rar parts of equal size, and create a par set. Each roughly 500 KB encoded part that is missing creates the need for another 10-15 MB par to replace the affected rar - not very efficient!

Also, each user - whether their news server is perfect or not - has to download and unpack the rar files (using more time and twice the disc space).  But among reasonably sophisticated users, this cumbersome process proves to be less trouble than trying to manage a repost that gets every part complete on everyone's servers before parts start disappearing from the servers!

Once newsreaders support par2, the approach will simply be:  post the shn or flac files; and post a set of par2 files.  With at most a few reposts of parts, the shn files will be complete for those with good news servers and they can probably just download the shn files.  No need to unrar the files.  Only those who need them have to use the par2 files.  Also, the amount of par2 data should be less. (Though how much less is hard for me to say, not having used par2 yet and not knowing what the smallest practical "slice" or block will be.)

HOWEVER, note that many newsreaders today, especially those that are easiest and most powerful to use with binaries (such as GrabIt), do not allow you to recover partial files when parts are missing on a news server.  Only some newsreaders allow you to recover partial files, and it's not always a simple process.  As of today, for most users the efficiencies of par2 are not fully available.  So to realize the full power of par2, it seems newsreaders will need to change.

The implication in the meantime is that relatively large amounts of par2 data would still have to be posted - enough to recover the entire amount of the files that are affected by missing parts, not just the slices that are affected.

par2 (file/archive recovery) released

Reply #7
I've done plenty of calculations on the recovery rate that both par1 and par2 are capable of.

For example, with 450 MB of data split into 30 x 15 MB RAR files and then posted as 30 x 500 KB articles per RAR:

If you wanted 15% recovery data, then for PAR1 that would be 5 PAR1 files (totalling 75 MB of data). With PAR2 (assuming you set the block size equal to the article size - i.e. 500KB) the same 75 MB of recovery data equates to 150 recovery blocks.

The following table shows the rate at which articles can be lost that results in a particular probability that repair will not be possible.

Code: [Select]
        Recovery         Article loss rate
    failure rate    5 PAR1 files   150 PAR2 blocks

          1 in 2        1 in 172        1 in 7.0
          1 in 5        1 in 250        1 in 7.4
         1 in 10        1 in 309        1 in 7.7
        1 in 100        1 in 546        1 in 8.4
      1 in 1,000        1 in 880        1 in 8.9
     1 in 10,000       1 in 1364        1 in 9.4
    1 in 100,000       1 in 2073        1 in 9.9
  1 in 1,000,000       1 in 3111       1 in 10.3
 1 in 10,000,000       1 in 4633       1 in 10.7
1 in 100,000,000       1 in 6867       1 in 11.1


This shows that if as many as 1 in 11 articles are lost, with PAR2 you would have only 1 chance in 100 million that repair would not be possible, but with PAR1 an article loss rate as bad as 1 in 172 would result in a 50:50 probability of failure.
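A simplified model reproduces the flavour of this table. The assumption here is mine: repair fails exactly when more articles are lost than there are recovery blocks, counting losses across the 900 data articles and 150 recovery articles; the posted figures were presumably computed more carefully, so the numbers only roughly agree.

```python
import math

def fail_prob(total_articles, recovery_blocks, loss_rate):
    # P(repair fails) = P(more articles lost than recovery blocks),
    # each article lost independently; binomial tail summed in log space
    # to avoid overflow with large article counts.
    log_ok_terms = []
    for k in range(recovery_blocks + 1):
        log_ok_terms.append(
            math.lgamma(total_articles + 1)
            - math.lgamma(k + 1)
            - math.lgamma(total_articles - k + 1)
            + k * math.log(loss_rate)
            + (total_articles - k) * math.log(1.0 - loss_rate)
        )
    return 1.0 - sum(math.exp(t) for t in log_ok_terms)

# 900 data articles + 150 PAR2 recovery blocks, losing 1 article in 7:
print(fail_prob(1050, 150, 1 / 7))   # about 0.5 -- the "1 in 2" row
```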

The fact that some commonly used newsreaders do not permit you to download incomplete files is very inconvenient. Presumably the designers of those newsreaders thought no one would ever want to download an incomplete version of a file. That is all well and good, but some file types are still useable even if the file is truncated.

The sooner the developers of those newsreaders enable the download of incomplete files the better.

par2 (file/archive recovery) released

Reply #8
Quote
The sooner the developers of those newsreaders enable the download of incomplete files the better.

Absolutely.  The fact that the newsreaders prevent downloading incomplete files was in some ways desirable - it prevented less experienced users from wasting bandwidth and time.  But with par2, it's more of an obstacle.

I don't want to leave a wrong impression - par2's features are terrific. And (as long as you don't need to get the files to Mac users) binaries newsgroup users can start using it today and stop rar-ing and un-rar-ing the files, provided we also post about as much par2 recovery data as we do now. (Or perhaps a bit more, to allow for the larger file sizes.) That alone is very helpful.

However, once newsreaders that support the download of incomplete files are in use, we'll also be able to post much less recovery data.  That may be another way the benefit of par2 will show up.  (The ability to do selective reposts, check other servers, etc., raises the odds of completion to sufficiently good numbers already.)  So perhaps another version of that table is, how much less par2 data is needed to achieve the same probability of success.

Peter, can you comment on whether a GUI version for Mac will become available?

par2 (file/archive recovery) released

Reply #9
Quote
So perhaps another version of that table is, how much less par2 data is needed to achieve the same probability of success.


Actually, I've already done the computation, and you may be surprised at the answer. I certainly was.

Using the same 450 MB of data posted using a 500 KB article size (with 500 KB block size for PAR2, and 15 MB RAR/PAR files with PAR1), you get:

Code: [Select]
        Recovery        Article loss rate
    failure rate    5 PAR1 files   5 PAR2 blocks

          1 in 2        1 in 172       1 in 160
          1 in 5        1 in 250       1 in 232
         1 in 10        1 in 309       1 in 287
        1 in 100        1 in 546       1 in 506
      1 in 1,000        1 in 880       1 in 816
     1 in 10,000       1 in 1364      1 in 1265
    1 in 100,000       1 in 2073      1 in 1922
  1 in 1,000,000       1 in 3111      1 in 2884
 1 in 10,000,000       1 in 4633      1 in 4295
1 in 100,000,000       1 in 6867      1 in 6365


This means that to achieve a corresponding probability of successful repair you would need to post roughly 2.5 MB of PAR2 recovery data instead of 75 MB of PAR1 recovery data!

If this sounds incredible, then I'm not surprised.

Consider the same 450 MB of data and 500 KB article size, but this time there is only 1 x 15 MB PAR1 file and 1 x 0.5 MB PAR2 file:

Code: [Select]
        Recovery             Article loss rate
    failure rate      1 PAR1 file     1 PAR2 block

          1 in 2         1 in 546         1 in 537
          1 in 5       1 in 1,110       1 in 1,093
         1 in 10       1 in 1,721       1 in 1,694
        1 in 100       1 in 6,159       1 in 6,062
      1 in 1,000      1 in 20,152      1 in 19,835
     1 in 10,000      1 in 64,384      1 in 63,378
    1 in 100,000     1 in 204,272     1 in 201,059
  1 in 1,000,000     1 in 646,639     1 in 636,469
 1 in 10,000,000   1 in 2,045,353   1 in 2,013,183
1 in 100,000,000   1 in 6,468,907   1 in 6,367,162


In both cases, either no lost articles or one lost article is recoverable but two or more lost articles results in failure.

It should be obvious that in both cases the chances of losing two or more articles will be roughly the same (although with PAR 1 that is 2 articles spread across 930 and with PAR 2 it is 2 articles spread across 901), so the failure rate for a given article loss rate should also be roughly the same. The big difference is that the PAR1 file is 15MB and the PAR2 file is 0.5MB.
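The two-or-more-losses failure condition has a simple closed form, which roughly reproduces both "1 in 2" rows (a sketch; 930 and 901 are the article counts given above):

```python
def fail_two_or_more(n_articles, loss_rate):
    # P(at least 2 articles lost) -- the failure condition when only a
    # single recovery file (or block) is available.
    p, q = loss_rate, 1.0 - loss_rate
    return 1.0 - q ** n_articles - n_articles * p * q ** (n_articles - 1)

# 1 x 15 MB PAR1 file: 900 data articles + 30 recovery articles = 930
print(round(fail_two_or_more(930, 1 / 546), 2))   # ~0.5 -> the "1 in 2" row
# 1 x 0.5 MB PAR2 block: 900 data articles + 1 recovery article = 901
print(round(fail_two_or_more(901, 1 / 537), 2))   # ~0.5 as well
```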

Quote
Peter, can you comment on whether a GUI version for Mac will become available?


This I do not know. However, since Mac OS X is based on Unix, it ought to be possible to compile and run the par2cmdline source code.

par2 (file/archive recovery) released

Reply #10
Do you think PAR2 will be a nice solution to the nightmare of cdrs?

Anyone know if I should make one par2 file-set for each cdr, or if I should cut it into smaller parts, like a file-set for each album...

par2 (file/archive recovery) released

Reply #11
par2 seems like one of the better solutions to the cdr problem, depending I suppose on what the error rate is and how the errors are distributed.  The one caveat being, if your table of contents on the CD goes, you're presumably out of luck.

The great thing about par2 (vs. par) is that it can repair each damaged part of a file using a relatively small amount of data, so it can fix many errors in many files with a relatively small amount of total data. par would need an entire file's worth of data to fix one error.

If you're using a format where one error makes the file no longer decodable, that could be very useful.  (Though if you're getting a lot of errors, at some point it will overwhelm any error correction system.)

There is a tradeoff, though. When creating the par2 files, the smaller the block size you use, the more errors you'll be able to fix. (par2 uses one recovery block to fix the errors within one block-sized chunk of a file. If you leave the block size at the default of 256 KB, repairing a chunk consumes a full 256 KB of recovery data, whether one byte or 100 KB of that chunk is damaged. Using a block size of 128 KB would stretch the recovery data twice as far, if the errors are random and infrequent.) But a smaller block size increases the computation time needed to create the par2 files.
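In other words, the recovery data consumed is one full block per damaged chunk, regardless of how small the damage is. A toy sketch with made-up error offsets:

```python
def recovery_data_needed(error_offsets, block_size):
    # One full recovery block is consumed per distinct source block
    # that contains at least one damaged byte.
    damaged_blocks = {offset // block_size for offset in error_offsets}
    return len(damaged_blocks) * block_size

# Three scattered single-byte errors (hypothetical offsets within a file):
errors = [10_000, 5_000_000, 90_000_000]
print(recovery_data_needed(errors, 256 * 1024) // 1024)  # 768 KB used
print(recovery_data_needed(errors, 128 * 1024) // 1024)  # 384 KB used
```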

As far as whether to do it by album (a part of the data on a disc) or by disc, that's a good question, and might depend on:
- again, how the errors are distributed and how frequent they are
- how much time and disc space you're willing to allocate to the par2 data
- convenience - which approach do you find easier?
Then again, it might not make that much difference which approach you use.  Either will give good protection, neither will protect against severe decay or damage to the CD.

Presumably Peter or his colleagues have considered this issue, and may have recommendations for
- block size
- doing the par2 files by album (that is, a cohesive part of the data on a disc) vs. by disc

If anyone has an idea of how many errors you're trying to correct, and how they are distributed (randomly or is one error typically associated with another) it might help.

par2 (file/archive recovery) released

Reply #12
Quote
Presumably Peter or his colleagues have considered this issue, and may have recommendations for
- block size
- doing the par2 files by album (that is, a cohesive part of the data on a disc) vs. by disc

We have definitely considered the idea of using par2 files to protect data on a cdr.

The idea would be that you would fill all available free space on the cdr with par2 data.

One of us plans to do a test by burning a cdr and then deliberately scratching it and then seeing if recovery is possible.

We have no recommendations yet on block size or how to arrange the par2 files in relation to the data on the cdr.

par2 (file/archive recovery) released

Reply #13
Great!
Let us know when you have reached some conclusion.
I can't wait to burn my cdrs in a secure way.
It seems to me that the cdr will have to be badly damaged for par2 to fail. The only problem is if the header of the par2 file is damaged, but that seems very unlikely.

par2 (file/archive recovery) released

Reply #14
Quote
par2 seems like one of the better solutions to the cdr problem, depending I suppose on what the error rate is and how the errors are distributed.  The one caveat being, if your table of contents on the CD goes, you're presumably out of luck.

I agree entirely. Maybe a new CD-R data storage file format should be created. Then, PAR2 could be integrated right into the subsystems.

par2 (file/archive recovery) released

Reply #15
Quote
One of us plans to do a test by burning a cdr and then deliberately scratching it and then seeing if recovery is possible.


Hi

Just a note - whenever I've had a bad CD-R, I usually lose access to the whole file - Windows doesn't allow me to copy just the good data - just ends with a CRC error or something.

If you want to do incremental recovery of individual files, then you're out of luck, as you won't be able to get the partial file off the bad CD-R

I'd be interested in your results though, and how you get round the problem!
--Tosh

par2 (file/archive recovery) released

Reply #16
Quote
The fact that some commonly used newsreaders do not permit you to download incomplete files is very inconvenient. Presumably the designers of those newsreaders thought no one would ever want to download an incomplete version of a file. That is all well and good, but some file types are still useable even if the file is truncated.

I'd recommend Xnews for this - it manages to download incomplete binaries all the time!

Just need to find a newsgroup with PAR2 files now.

Xnews Homepage
--Tosh

par2 (file/archive recovery) released

Reply #17
Quote
It seems to me that the cdr will have to be badly damaged for par2 to fail. The only problem is if the header of the par2 file is damaged, but that seems very unlikely.

Actually par2 files don't have a header.

par2 files consist of lots of little packets of information. Each packet contains its own checksum so that par2cmdline/quickpar can check each packet to see if it has been damaged or not. Damaged packets are simply ignored.

Some packets contain critical information such as the filenames, sizes, and the hashes used to verify individual blocks of data. All critical packets are duplicated multiple times in each par2 file. These packets are also duplicated in the small par2 file (the one containing no recovery blocks), so it should be almost impossible for every copy to be lost.

The packets that contain actual recovery information are not duplicated. If they are damaged, you just have to obtain more par2 data.
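From my reading of the PAR2 specification, each packet is a 64-byte header (magic, length, an MD5 of everything after the hash field, recovery set ID, packet type) followed by the body, so a client can verify each packet in isolation and silently drop damaged ones. A minimal sketch of that per-packet check (the set ID and body here are made up for illustration):

```python
import hashlib
import struct

MAGIC = b"PAR2\0PKT"

def make_packet(set_id: bytes, ptype: bytes, body: bytes) -> bytes:
    # Build one PAR2 packet: magic(8) + length(8, LE) + md5(16)
    # + recovery set id(16) + type(16) + body.
    length = 64 + len(body)
    md5 = hashlib.md5(set_id + ptype + body).digest()
    return MAGIC + struct.pack("<Q", length) + md5 + set_id + ptype + body

def packet_ok(packet: bytes) -> bool:
    # Verify a packet's own checksum; the hash covers everything from
    # the recovery set ID to the end of the body (byte 32 onward).
    if packet[:8] != MAGIC:
        return False
    return hashlib.md5(packet[32:]).digest() == packet[16:32]

pkt = make_packet(b"\x01" * 16, b"PAR 2.0\0Main\0\0\0\0", b"example body")
print(packet_ok(pkt))                       # True
damaged = pkt[:70] + b"X" + pkt[71:]        # corrupt one byte of the body
print(packet_ok(damaged))                   # False -> packet is ignored
```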

par2 (file/archive recovery) released

Reply #18
I am planning to use par2 (through quickpar) for my archived music. A few questions:
  • Does the number of blocks affect only the speed (otherwise one can always select the higher block count) when you fix the redundancy?
  • Why can't I mark a single directory and recurse all files and subdirectories within?
  • Does par2 know which recovery files are based on which files? Or is there a way to use par2 incrementally (instead of from scratch) when new files are added?
  • How bad a state of corruption can par2 recover from with x per cent redundancy? Could it recover if there are errors in the recovery files themselves?
The object of mankind lies in its highest individuals.
One must have chaos in oneself to be able to give birth to a dancing star.

par2 (file/archive recovery) released

Reply #19
Quote
Quote

One of us plans to do a test by burning a cdr and then deliberately scratching it and then seeing if recovery is possible.


Hi

Just a note - whenever I've had a bad CD-R, I usually lose access to the whole file - Windows doesn't allow me to copy just the good data - just ends with a CRC error or something.

If you want to do incremental recovery of individual files, then you're out of luck, as you won't be able to get the partial file off the bad CD-R

I'd be interested in your results though, and how you get round the problem!

Even if you lose whole files, par2 will be able to re-create them.
I did a test with 30 files and deleted 4 of them; they were re-created.


Quote
Actually par2 files don't have a header.

OK... I did a test with par1 where I changed some of the first parts of the file, and the program gave me an error (bad header or something). Is that different with par2? I didn't test it with par2, only par1, and assumed it was the same.

par2 (file/archive recovery) released

Reply #20
Quote
Does the number of blocks affect only the speed (otherwise one can always select the higher block count) when you fix the redundancy?

The time taken to create a par2 set is proportional to the product of the amount of source data, the block count chosen, and the level of redundancy chosen. Creating 20% redundancy for 400MB with a block count of 1600 will take eight times as long as creating 10% redundancy for 200MB with a block count of 800.
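As a quick check of that rule (a sketch; the units are arbitrary since only the ratio matters):

```python
def relative_creation_time(data_mb, block_count, redundancy_pct):
    # Creation time is proportional to data x block count x redundancy,
    # per the rule stated above.
    return data_mb * block_count * redundancy_pct

ratio = (relative_creation_time(400, 1600, 20)
         // relative_creation_time(200, 800, 10))
print(ratio)  # 8 -- each factor doubled, so 2 x 2 x 2
```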

Quote
Why can't I mark a single directory and recurse all files and subdirectories within ?

Currently neither QuickPar nor par2cmdline store path information, so you cannot use it effectively on a whole subdirectory tree.

Quote
Does par2 know which recovery files are based on which files? or is there a way to use par2 incrementally (instead of from scratch) when new files are added?

Every par2 file contains details of which file(s) it contains recovery data for. Once par2 data has been created for a particular set of files, if you need to add new files you have to recreate the par2 data. This is the same with PAR 1.0 as well.

Quote
How bad a state of corruption can par2 recover from with x per cent redundancy? Could it recover if there are errors in the recovery files themselves?

par2 can handle extreme forms of corruption.

e.g. consider two distinct decks of playing cards and let them represent two files (with each card being one block of data). Now throw away some of the cards and introduce some extra cards from a third deck. Then shuffle the lot together into one pile and then roughly divide them into three piles. Let those three piles be three files. If you point par2 at these three files and provide it with enough recovery data to replace the cards that were thrown away, then it will reconstruct the original two files.

par2 (file/archive recovery) released

Reply #21
Quote
Just a note - whenever I've had a bad CD-R, I usually lose access to the whole file - Windows doesn't allow me to copy just the good data - just ends with a CRC error or something.

If you want to do incremental recovery of individual files, then you're out of luck, as you won't be able to get the partial file off the bad CD-R

I'd be interested in your results though, and how you get round the problem!

Next time you might want to try something like CD Data Rescue or CDCheck.

These programs are able to extract files with bad sectors from CD-R(W). Extremely useful in combination with PAR2 IMO.
Let's suppose that rain washes out a picnic. Who is feeling negative? The rain? Or YOU? What's causing the negative feeling? The rain or your reaction? - Anthony De Mello

par2 (file/archive recovery) released

Reply #22
Quote
The time taken to create a par2 set is proportional to the product of the amount of source data, the block count chosen, and the level of redundancy chosen. Creating 20% redundancy for 400MB with a block count of 1600 will take eight times as long as creating 10% redundancy for 200MB with a block count of 800.


I actually intended to ask what the relationship is between block count (for a fixed level of redundancy) and recovery potential. How much better is it? (The efficiency number in QuickPar.)

Quote
Currently neither QuickPar nor par2cmdline store path information, so you cannot use it effectively on a whole subdirectory tree.


Hmm, that means I have to par by album. Maybe I should write a batch file that goes into each directory and invokes par2. Oh, "sweep" does that!

Quote
par2 can handle extreme forms of corruption.


Sweet! But how is the corruption recovery potential related to the block count and redundancy? For instance, let's take this example: "10% redundancy for 400MB with a block count of 800 compared to 5% redundancy with a block count of 1600." Which one has higher recovery potential?

Thanks...
The object of mankind lies in its highest individuals.
One must have chaos in oneself to be able to give birth to a dancing star.

par2 (file/archive recovery) released

Reply #23
Quote
For instance, let's take this example: "10% redundancy for 400MB with a block count of 800 compared to 5% redundancy with a block count of 1600."
Presumably you picked these because they both take the same amount of time to calculate, given Peter's formula? That means total par2 data of 40 MB and 20 MB respectively, with block sizes of 51.2 KB and 12.8 KB.

It would seem to me that again it comes back to how the errors are distributed, and how frequent they are, after taking into account the ECC that's already on the CDR.

- If the errors are small (<< block size), random, and infrequent (as might be the case for cd rot?), more blocks give better coverage, since one recovery block is needed for each source block containing an error. Even the smaller block size covers several CD sectors of 2 KB each.

- If the errors are small and correlated in a way that they tend to fall within a block (a sequence of a relatively small number of bad sectors), the advantage of more blocks is even greater: one block fixes multiple sectors.

- It seems the only time you're better off with large blocks is if you know for sure the errors will be large - either each error, or the total amount of data affected. For example, if you assume you need or want to recover at the file level, there's no point in having blocks smaller than, say, 1/20th the size of a large or typical file (and that only to use the data efficiently for uneven files - if all files are uniform in size, you could use block size = file size, as par1 did).

If the errors might be large or they might be small - then more, smaller blocks would seem to have the advantage.
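A toy comparison of the two block sizes from the example against made-up damage patterns (not a real model of CD failure, just an illustration of the block-count effect):

```python
# Two hypothetical damage patterns in a 400 MB file: 50 scattered
# single-byte errors, and one 40 KB scratch-like run of bad bytes.
scattered = [i * 8_000_000 + 123 for i in range(50)]
clustered = list(range(100_000_000, 100_000_000 + 40 * 1024, 512))

def recovery_blocks(errors, block_size):
    # One recovery block per distinct source block with damage in it.
    return len({e // block_size for e in errors})

for bs in (51 * 1024, 13 * 1024):   # roughly the 51.2 KB / 12.8 KB cases
    scat_kb = recovery_blocks(scattered, bs) * bs // 1024
    clus_kb = recovery_blocks(clustered, bs) * bs // 1024
    print(bs // 1024, scat_kb, clus_kb)
# Prints "51 2550 102" then "13 650 52" -- the smaller blocks need
# less recovery data in both scenarios here.
```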

So how do errors typically occur on a CD?  Either broad knowledge or anecdotal evidence about how CDs fail might help.

After ECC, how many CD sectors get wiped out by a scratch from the center to the edge? I don't know enough about the net effects of (data density of a CDR, data stored per groove, grooves per inch/CD, interleaving, ECC) to know how correlated and severe the errors from a scratch would be, for example.

Also, is there any way to deal with errors in the TOC?

par2 (file/archive recovery) released

Reply #24
Ahem, I am going to put all my archived music on a single external hard drive. Par2 is an additional measure for recovery. I assume that unless the drive goes dead, bad sectors should not pose a problem when I par2 each album individually (with additional NTFS recovery). The only problem is that I correct tags once in a while, so I have to re-par.

I also think that the small block size is better. I was thinking about a compromise; for instance, 1 minute per CD seems good for par at 10% redundancy.

And back to one of my original questions: what if the recovery files have a problem? Does the par2 algorithm, in a naive sense, par the recovery files themselves as well? Because there's this option in QuickPar which I am trying to figure out: "Don't repeat critical data in recovery files".
The object of mankind lies in its highest individuals.
One must have chaos in oneself to be able to give birth to a dancing star.

 