
foo_unpack_7z_ex

While doing massive testing of the foo_input_zxtune plugin I spotted a problem: foo_unpack_7z is unable to process huge solid archives.
I got them from modland.torrent. Some of them are ~14 GB in size.
But it fails even for archives of ~200 MB.
After investigating the source code and debugging, I noticed that the plugin calls the C LZMA SDK API, which tries to allocate more than 2 GB of RAM to extract the files.
The 32-bit foobar2000 process cannot satisfy such an allocation.
After some googling I found that this is a known limitation of the C LZMA SDK.
The 7-Zip author recommends using the C++ LZMA SDK API instead.
So the foo_unpack_7z_ex plugin was born.
It successfully parses all modland.torrent 7z archives.
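
To show where the C API falls over, here is a rough sketch (not the plugin's actual code) of the extraction loop built on SzArEx_Extract. It assumes the CSzArEx database and the ILookInStream have already been opened the way the SDK's 7zMain.c sample does; extract_all is just a name for this sketch.
Code: [Select]
#include "7z.h"   // C LZMA SDK: CSzArEx, SzArEx_Extract, ISzAlloc

// Sketch: walk every file of an already-opened 7z archive.
static void extract_all(const CSzArEx & db, ILookInStream * lookStream,
                        ISzAlloc & allocImp, ISzAlloc & allocTempImp)
{
    UInt32 blockIndex = 0xFFFFFFFF; // index of the cached solid block ("folder")
    Byte * outBuffer = NULL;        // holds the WHOLE decoded solid block
    size_t outBufferSize = 0;

    for (UInt32 i = 0; i < db.NumFiles; i++)
    {
        size_t offset = 0, outSizeProcessed = 0;
        // SzArEx_Extract (re)allocates outBuffer to the full unpacked size of
        // the solid block containing file i. For a multi-gigabyte solid block
        // that allocation cannot succeed in a 32-bit process - this is the
        // failure foo_unpack_7z runs into.
        SRes res = SzArEx_Extract(&db, lookStream, i,
                                  &blockIndex, &outBuffer, &outBufferSize,
                                  &offset, &outSizeProcessed,
                                  &allocImp, &allocTempImp);
        if (res != SZ_OK)
            break;
        // file i's data is at outBuffer + offset, outSizeProcessed bytes long
    }
    allocImp.Free(&allocImp, outBuffer);
}
The C++ SDK avoids this because IInArchive::Extract streams each item into a callback-supplied output stream instead of materializing the whole solid block in memory.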


Re: foo_unpack_7z_ex

Reply #2
Thanks :)

Re: foo_unpack_7z_ex

Reply #3
Does it support unpacking multiple files in a series? At least using temp or tempmem files, it should be able to unpack an entire solid archive in one successive run, stashing and emitting the files one at a time. This doesn't necessitate unpacking them all to memory at once, since the callback can free them when it's done processing them.

Otherwise, it looks like a large solid archive will slow down exponentially as more files are repeatedly unpacked to get to the next file it forwards to the caller of the index process.

Edit: Never mind, your code looks sound. I don't know what's up with this archive.

Re: foo_unpack_7z_ex

Reply #4
Quote
Does it support unpacking multiple files in a series? At least using temp or tempmem files, it should be able to unpack an entire solid archive in one successive run, stashing and emitting the files one at a time. This doesn't necessitate unpacking them all to memory at once, since the callback can free them when it's done processing them.
No, it doesn't. But I'm not sure we need this, because a huge amount of HDD space would be wasted.
For example, "Fasttracker 2.7z" unpacks from 14 GB to 31 GB.

Quote
Otherwise, it looks like a large solid archive will slow down exponentially as more files are repeatedly unpacked to get to the next file it forwards to the caller of the index process.
When the indexing process is running everything is fine, because the LZMA SDK C++ interface calls my callbacks sequentially, in the right order, for all files in the archive.
BTW, I noticed a strange thing when fb2k calls this function:
Code: [Select]
virtual void archive_list( const char * path, const service_ptr_t< file > & p_reader, archive_callback & p_out, bool p_want_readers )
p_want_readers is always true, so files are extracted during the indexing process.
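
For reference, my archive_list is shaped roughly like this. It's a simplified sketch written as if inside the archive implementation class, not the actual plugin source; list_items, extract_to_tempmem, make_unpack_path and the item fields are placeholder names standing in for the helpers that wrap the C++ LZMA SDK and decode the solid block once, in stored order.
Code: [Select]
// Rough sketch only - helper names are placeholders, not real plugin code.
void archive_list(const char * path, const service_ptr_t<file> & p_reader,
                  archive_callback & p_out, bool p_want_readers)
{
    // One sequential pass over the solid archive; every item is decoded once.
    for (const auto & item : list_items(path, p_reader, p_out))
    {
        pfc::string8 url;
        make_unpack_path(url, path, item.name); // "unpack://..." style URL for this item

        service_ptr_t<file> reader;
        if (p_want_readers) // fb2k seems to always pass true here
            reader = extract_to_tempmem(item, p_out); // decoded bytes -> in-memory temp file

        if (!p_out.on_entry(this, url, item.stats, reader)) // 'this' is the archive instance
            break; // caller asked us to stop enumerating
    }
}
Since p_want_readers is always true in practice, every entry gets its reader during this single pass and nothing has to be re-decoded while indexing.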

But when extracting files randomly, or even in sequential order, extraction time grows the later a file is placed in the solid archive.

Re: foo_unpack_7z_ex

Reply #5
I will be following this with anticipation!! lol ;) :D