1
Notice
Please note that most of the software linked on this forum is likely to be safe to use. If you are unsure, feel free to ask in the relevant topics, or send a private message to an administrator or moderator. To help curb the problems of false positives, or in the event that you do find actual malware, you can contribute through the article linked here.
2
Opus / Re: Opus decoding complexity
Last post by Klymins -
3
Other Lossy Codecs / Re: NellyMoser Asao patent
Last post by Klymins -
4
Support - (fb2k) / Re: Foobar 2.1.4 Crash
Last post by Case -foobar2000 versions prior to 2.0 did not use SQLite databases, and they only wrote configuration to disk once, at shutdown. Since 2.0, configuration, the media library and related metadata are all SQLite-backed, and the files are kept up to date at all times.
Move your foobar2000 installation off the network drive and try again.
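If you want to see which files in your profile folder are affected, here is a quick sketch (my own code, not foobar2000's) that tests for the SQLite magic header, which is the fixed 16-byte string "SQLite format 3" plus a terminating NUL:
Code:
#include <cstring>
#include <fstream>
#include <iostream>

// Returns true if the file begins with the SQLite database magic header.
bool is_sqlite_db(const char *path) {
    static const char magic[16] = "SQLite format 3"; // 15 chars + NUL = 16 bytes
    char header[16] = {};
    std::ifstream f(path, std::ios::binary);
    return f.read(header, sizeof header) && std::memcmp(header, magic, sizeof magic) == 0;
}

int main(int argc, char **argv) {
    for (int i = 1; i < argc; ++i)
        std::cout << argv[i] << ": " << (is_sqlite_db(argv[i]) ? "SQLite" : "not SQLite") << '\n';
}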
5
Support - (fb2k) / Re: Foobar 2.1.4 Crash
Last post by jakedean22 -
6
3rd Party Plugins - (fb2k) / Re: foo_vis_shpeck
Last post by sheik124 -
7
MP3 - General / Re: Resurrecting/Preserving the Helix MP3 encoder
Last post by JoshuaChang -I think I have the same problem. My CPU is a Zen 2 4650G (6c/12t), and I get around 280~300x on the command line at a single thread with autodidact's clang build or RareWares' GCC build. I also compiled Helix myself with clang plus link-time optimization, and my version gets around 480x, so maybe that's the key.
To be honest, since I read this I thought you had the same problem as @KevinB52379.
Here's the binary; you can try it.
How does multithreaded foobar2000 react on your system with the other builds?
As a side note: fast-math + AVX2 gives another nice boost to the fast clang compile.
Well, after testing, I don't think I have the multithreading problem KevinB52379 has. My foobar2000 hmp3 batch conversion behaves normally, like all the other encoders (it doesn't slow down the system at all; my thread count is set to 0); different compilers just seem to affect performance.
As I mentioned above, it seems all Microsoft linker signatures work for KevinB52379, and all MinGW linker signatures don't.
You can find the signature using Exeinfo PE, which can be found on GitHub.
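For reference, here is a rough sketch (my own heuristic, not Exeinfo PE's actual detection logic) that reads the linker version stamp from a PE file's optional header; MSVC's link.exe and MinGW's ld/lld stamp different values there. It assumes a little-endian host:
Code:
#include <cstdint>
#include <cstdio>
#include <fstream>

int main(int argc, char **argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
    std::ifstream f(argv[1], std::ios::binary);
    if (!f) { std::fprintf(stderr, "cannot open %s\n", argv[1]); return 1; }
    std::uint32_t e_lfanew = 0;
    f.seekg(0x3C);                                  // IMAGE_DOS_HEADER::e_lfanew
    f.read(reinterpret_cast<char *>(&e_lfanew), 4);
    std::uint32_t sig = 0;
    f.seekg(e_lfanew);
    f.read(reinterpret_cast<char *>(&sig), 4);
    if (sig != 0x00004550) { std::fprintf(stderr, "not a PE file\n"); return 1; } // "PE\0\0"
    unsigned char ver[2] = {};
    f.seekg(e_lfanew + 4 + 20 + 2);                 // skip signature + COFF header + Magic
    f.read(reinterpret_cast<char *>(ver), 2);       // Major/MinorLinkerVersion
    std::printf("linker version stamp: %u.%u\n", ver[0], ver[1]);
    return 0;
}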
8
Opus / Re: Opus decoding complexity
Last post by Heliologue -If decoding speed is a sufficient proxy, then it's easy to test this with foobar2000 (this assumes software decoding; specialized hardware will affect the speed), though there are lots of variables that could affect decoding speed. My extremely rough benchmarking shows that decoding Opus files is about half as fast as decoding a LAME-encoded MP3 of approximately the same average bitrate.
Single-threaded decoding test (3 passes):
MP3 (LAME 3.100 V2): 1058x realtime
Opus (1.4): 539x realtime
MP3:
Code:
System:
    CPU: AMD Ryzen 9 3900X 12-Core Processor, features: MMX SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX LZCNT
    Architecture: x64
App: foobar2000 v2.1.4
Settings:
    High priority: no
    Buffer entire file into memory: yes
    Warm-up: yes
    Passes: 3
    Threads: 1
    Postprocessing: none
Stats by codec:
    MP3: 1058.144x realtime
File: Rule 5 - James Picard.mp3
    Run 1:
        Decoded length: 3:14.640
        Opening time: 0:00.001
        Decoding time: 0:00.184
        Speed (x realtime): 1054.937
    Run 2:
        Decoded length: 3:14.640
        Opening time: 0:00.001
        Decoding time: 0:00.184
        Speed (x realtime): 1055.695
    Run 3:
        Decoded length: 3:14.640
        Opening time: 0:00.001
        Decoding time: 0:00.182
        Speed (x realtime): 1063.845
    Total:
        Opening time: 0:00.001 min, 0:00.001 max, 0:00.001 average
        Decoding time: 0:00.182 min, 0:00.184 max, 0:00.183 average
        Speed (x realtime): 1054.937 min, 1063.844 max, 1058.143 average
Total:
    Decoded length: 9:43.920
    Opening time: 0:00.002
    Decoding time: 0:00.550
    Speed (x realtime): 1058.144
Opus:
Code:
System:
    CPU: AMD Ryzen 9 3900X 12-Core Processor, features: MMX SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX LZCNT
    Architecture: x64
App: foobar2000 v2.1.4
Settings:
    High priority: no
    Buffer entire file into memory: yes
    Warm-up: yes
    Passes: 3
    Threads: 1
    Postprocessing: none
Stats by codec:
    Opus: 539.492x realtime
File: Rule 5 - James Picard.opus
    Run 1:
        Decoded length: 3:14.640
        Opening time: 0:00.000
        Decoding time: 0:00.362
        Speed (x realtime): 537.041
    Run 2:
        Decoded length: 3:14.640
        Opening time: 0:00.000
        Decoding time: 0:00.360
        Speed (x realtime): 539.971
    Run 3:
        Decoded length: 3:14.640
        Opening time: 0:00.000
        Decoding time: 0:00.359
        Speed (x realtime): 541.482
    Total:
        Opening time: 0:00.000 min, 0:00.000 max, 0:00.000 average
        Decoding time: 0:00.359 min, 0:00.362 max, 0:00.361 average
        Speed (x realtime): 537.041 min, 541.482 max, 539.491 average
Total:
    Decoded length: 9:43.920
    Opening time: 0:00.000
    Decoding time: 0:01.082
    Speed (x realtime): 539.492
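If anyone wants to sanity-check the Opus numbers outside foobar2000, here is a minimal single-threaded decode-timing sketch of my own against libopusfile (an assumption; it is unrelated to whatever the benchmark component uses internally):
Code:
#include <chrono>
#include <cstdio>
#include <vector>
#include <opusfile.h>

int main(int argc, char **argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s file.opus\n", argv[0]); return 1; }
    int err = 0;
    OggOpusFile *of = op_open_file(argv[1], &err);
    if (!of) { std::fprintf(stderr, "open failed: %d\n", err); return 1; }
    const double audio_s = op_pcm_total(of, -1) / 48000.0; // opusfile always outputs 48 kHz
    std::vector<float> pcm(5760 * 2);                      // room for 120 ms of stereo
    const auto t0 = std::chrono::steady_clock::now();
    while (op_read_float(of, pcm.data(), (int)pcm.size(), nullptr) > 0) {
        // decode-only loop; samples are discarded
    }
    const auto t1 = std::chrono::steady_clock::now();
    const double wall_s = std::chrono::duration<double>(t1 - t0).count();
    std::printf("%.1f s decoded in %.3f s -> %.0fx realtime\n", audio_s, wall_s, audio_s / wall_s);
    op_free(of);
    return 0;
}
Link against opusfile (e.g. via pkg-config) and compare the output with foobar2000's numbers for the same file.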
9
Development - (fb2k) / pfc container types do not satisfy the C++20 range concepts
Last post by pnck -
Code:
void my_menu::context_command(unsigned int p_index, metadb_handle_list_cref p_data, const GUID &p_caller) {
    switch (p_index) {
    case CMD1: {
        // Desired usage: pipe the handle list straight into a standard view.
        auto to_process = p_data | std::views::filter([](auto const &item) { /* filter by some fields of interest */ return true; });
        // ...
        break;
    }
    }
}
Sadly this is not possible because metadb_handle_list_cref AKA pfc::list_base_t doesn't satisfy the std::ranges::range concept.
std::ranges::range<T> requires begin() to return something that satisfies std::input_or_output_iterator,
and that iterator is required to have an operator++ returning a reference to itself:
https://en.cppreference.com/w/cpp/iterator/input_or_output_iterator
https://en.cppreference.com/w/cpp/iterator/weakly_incrementable
But in pfc the operator++ is implemented as a void function:
Code:
namespace pfc {
    template<typename arr_t>
    class list_const_iterator {
        typedef list_const_iterator<arr_t> self_t;
    public:
        typedef ptrdiff_t difference_type;
        typedef typename arr_t::t_item value_type;
        typedef const value_type* pointer;
        typedef const value_type& reference;
        typedef std::random_access_iterator_tag iterator_category;
        list_const_iterator(arr_t* arr, size_t index) : m_arr(arr), m_index(index) {}
        void operator++() { ++m_index; } // <-- returns void instead of self_t&
        void operator--() { --m_index; }
        // ...
So the containers implemented in pfc can't be treated as ranges, nor adapted to views.
Since we've been gradually moving to C++20, could this problem be fixed soon?
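To illustrate how small the gap is, here is a minimal stand-in (hypothetical code, not the actual pfc SDK) where the only substantive change is that operator++ returns *this and gains a postfix overload; with that shape the range concept is satisfied:
Code:
#include <cstddef>
#include <ranges>
#include <vector>

// demo_list is a hypothetical pfc-style container, reduced to the
// parts that matter for the concept check.
template<typename T>
class demo_list {
public:
    struct const_iterator {
        using difference_type = std::ptrdiff_t;
        using value_type = T;
        const demo_list *m_arr = nullptr;
        std::size_t m_index = 0;
        const T &operator*() const { return m_arr->m_items[m_index]; }
        // Returning *this (plus a postfix form) is what
        // std::weakly_incrementable demands; the void return in pfc
        // is the only blocker.
        const_iterator &operator++() { ++m_index; return *this; }
        const_iterator operator++(int) { auto tmp = *this; ++m_index; return tmp; }
        bool operator==(const const_iterator &) const = default;
    };
    const_iterator begin() const { return {this, 0}; }
    const_iterator end() const { return {this, m_items.size()}; }
    std::vector<T> m_items;
};

static_assert(std::ranges::range<demo_list<int>>);
With an iterator shaped like this, the std::views::filter line from the first snippet should compile unchanged.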
10