
Topic: foo_unpack_7z_ex (Read 1373 times)

  • djdron
  • [*]
  • Developer
foo_unpack_7z_ex
While doing massive testing of the foo_input_zxtune plugin I spotted a problem: foo_unpack_7z is unable to process huge solid archives.
I got them from modland.torrent; some of them are ~14 GB in size.
But it fails even on archives of ~200 MB.
After investigating and debugging the source code, I noticed that the plugin calls the C LZMA SDK API, which tries to allocate more than 2 GB of RAM to extract files.
A 32-bit foobar2000 process cannot do this.
After some googling I found that this is a known limitation of the C LZMA SDK, and that the 7-Zip author recommends using the C++ LZMA SDK API instead.
So the foo_unpack_7z_ex plugin was born.
It successfully parses all 7z archives from modland.torrent.

  • djdron
  • [*]
  • Developer
Re: foo_unpack_7z_ex
Reply #1
Small update to version 0.0.2
Fixed "Last modified" info.

  • Brazil2
  • [*][*][*]
Re: foo_unpack_7z_ex
Reply #2
Thanks :)

  • kode54
  • [*][*][*][*][*]
  • Administrator
Re: foo_unpack_7z_ex
Reply #3
Does it support unpacking multiple files in a series? At least using temp or tempmem files, it should be able to unpack an entire solid archive in one successive run, stashing and emitting the files one at a time. This doesn't necessitate unpacking them all to memory at once, since the callback can free them when it's done processing them.

Otherwise, it looks like a large solid archive will slow down exponentially as more files are repeatedly unpacked to get to the next file it forwards to the caller of the index process.

E: Never mind, your code looks sound. I don't know what's up with this archive.
  • Last Edit: 10 July, 2017, 09:53:30 PM by kode54

  • djdron
  • [*]
  • Developer
Re: foo_unpack_7z_ex
Reply #4
Quote
Does it support unpacking multiple files in a series? At least using temp or tempmem files, it should be able to unpack an entire solid archive in one successive run, stashing and emitting the files one at a time. This doesn't necessitate unpacking them all to memory at once, since the callback can free them when it's done processing them.
No, it doesn't. But I'm not sure we need this, because a huge amount of HDD space would be wasted.
For example, "Fasttracker 2.7z" unpacks from 14 GB to 31 GB.

Quote
Otherwise, it looks like a large solid archive will slow down exponentially as more files are repeatedly unpacked to get to the next file it forwards to the caller of the index process.
While the indexing process is running everything is fine, because the LZMA SDK C++ interface calls my callbacks sequentially, in the right order, for all files in the archive.
BTW, I noticed a strange thing when fb2k calls this function:
Code: [Select]
virtual void archive_list( const char * path, const service_ptr_t< file > & p_reader, archive_callback & p_out, bool p_want_readers )
p_want_readers is always true, so files are extracted during the indexing process.

But when extracting files randomly, or even in sequential order, extraction time grows the later a file is placed in the solid archive.

  • Melchior
  • [*][*]
Re: foo_unpack_7z_ex
Reply #5
I will be following this with anticipation!! lol ;) :D