oggcodecs 0.70.0827 released!
Reply #26 – 2006-03-11 06:09:00
Quote:
If you want to use the bitrate setting in the current version, make sure you set quality to 0. If quality is any other value, it seems to ignore bitrate, which kind of makes sense. That answers that question... set quality to 0 to use ABR. Seektable: what is its purpose? Ogg and OGM are almost the same, yet with OGM (using OggDS) it seeks instantly, and quite accurately IMO. I guess I don't understand the point of building this 'table'. Is it for perfectly accurate seeking (instead of a guess based on bitrate, like seeking is done in most other programs)?

As with everything, there are tradeoffs. There are two main ways to seek: one is a seektable, the other is binary search (with some sub-variations), and which is better depends on user preference and application. For example, my personal preference is that I'd rather spend a small amount of time up front (obviously at the moment, on big files, this is too excessive) than have to wait even a very short time while seeking, though I can understand that some people would prefer the reverse. Personally, latency when seeking is something that irritates me. It also depends on the application: in certain applications it is highly desirable to seek very fast and accurately, and an upfront load time is not such a big deal.

As to accuracy, that is also subjective. AFAIK other implementations seek at best to page boundaries, whereas these filters can seek to pretty much the highest resolution possible, i.e. within packets, down to a single audio sample (generally 1/44100th of a second). For general playback, some people might be happy to seek within 2 seconds, but there are applications where seeking exactly is essential.
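The seektable idea is simple enough to sketch. Below is a rough illustration in Python (my own sketch, not the actual oggcodecs code; all names are made up): scan the file once at load time, record a (timestamp, byte offset) pair per page, and from then on every seek is just an in-memory lookup.

```python
import bisect

def build_seektable(pages):
    """pages: iterable of (time_seconds, byte_offset) pairs, one per Ogg
    page, gathered by a single O(n) scan of the file at load time."""
    return sorted(pages)

def seek(table, target_time):
    """Return the byte offset of the last page starting at or before
    target_time -- the exact page to hand to the decoder. O(log n) on
    the in-memory table, i.e. effectively instant."""
    times = [t for t, _ in table]
    i = bisect.bisect_right(times, target_time) - 1
    return table[max(i, 0)][1]

# Toy file: four pages, roughly two seconds each.
table = build_seektable([(0.0, 0), (1.9, 4096), (4.1, 8192), (6.0, 12288)])
print(seek(table, 5.0))  # lands on the page starting at 4.1s -> 8192
```

The one scan up front is the whole cost; after that there is no file I/O at all until the decoder starts reading from the returned offset.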
Even for general playback, you'll notice with some other applications that there are some places you just can't seek to directly. Say someone says something and you want to seek back to just where they start speaking, to listen again and try to understand/hear it; instead they force you to seek back to a second or two earlier. Again, that's something I don't like.

AFAIK, mine is the only implementation on any platform to use a seektable; everything else uses some variation of binary search. With a seektable, you can go to exactly the right page instantly, and going to exactly the correct page is important if you want to seek accurately. Depending on the implementation of the binary search, some applications will also seek to exactly the correct page (though after many more operations), but others will only iterate the seek a fixed number of times. This means seeking can sometimes be wildly inaccurate, but limiting the number of iterations stops the binary search from taking too long.

There are two main approaches to binary seeking. One is bisection, which is what people commonly think of as binary search. Basically it goes: I'm looking for this time, so I'll look in the middle of the file; if the time I want is after that point, I seek to halfway through the second half of the file, check again, and so on, splitting the remaining space in half with every iteration. On a 200 MB file with 4 KB pages there will be about 50,000 pages, so it will take about 16 internal seeks on average to accurately find the page. Each seek will only land exactly on a page boundary about 1 in 4000 times, and will on average have to read 2000 bytes to find a page, then a further 4000 bytes to get the next page.

The other approach is to use some kind of metric to do better than splitting the space in half each time. For example, if you know the file is 4 minutes long and you want 3 minutes, you can make a reasonable guess that where you want to be is about 75% through the file.
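The bisection approach can be sketched like this (illustrative only: I assume fixed 4 KB pages and a `page_time_at` callback, whereas a real Ogg demuxer has variable-sized pages and must scan forward from each probe for the next "OggS" capture pattern):

```python
def bisect_seek(page_time_at, file_size, target_time, page_size=4096):
    """page_time_at(offset) -> timestamp of the page at that offset.
    Returns (offset of the last page whose time <= target_time, probes)."""
    lo, hi, probes = 0, file_size, 0
    while hi - lo > page_size:
        mid = (lo + hi) // 2
        # Round down to a page boundary; a real demuxer would instead
        # scan forward from `mid` for the next page capture pattern.
        page = mid - (mid % page_size)
        if page_time_at(page) <= target_time:
            lo = page
        else:
            hi = page
        probes += 1
    return lo, probes

# Constant-bitrate toy file: ~200 MB, 4 KB pages, 25 pages per second.
offset, probes = bisect_seek(lambda off: off / 102400, 50000 * 4096, 1000.0)
print(offset, probes)  # on the order of log2(50000), i.e. ~16 probes
```

Each of those probes is a disk seek plus a read, which is exactly the per-seek latency the seektable avoids.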
This seems pretty good, but it also has some problems. If you happen to land just after the place you want, instead of splitting your search space in half (to 50%), you've only cut it down to 75%. So a certain amount of extra fudging has to be done to keep your guesses conservative and avoid this case. This can certainly be better than bisection, but it makes for lots of corner cases and lots more complexity. To avoid much of that complexity, most applications use a fixed number of guesses: they say, if I don't find exactly where I want after 3 or 5 attempts, I'm just going to start wherever I end up. This is completely unworkable if you want seeking to be accurate, but it's "good enough" most of the time for some things. On files where the bitrate is fairly even this is not so bad, but in regions of the file where the bitrate is locally very high or low, it will perform fairly badly.

Well, that was kind of a long-winded explanation. But basically the reasoning for the seektable is: on files small enough (about 30-50 MB at the moment) that the seektable builds without any noticeable load-time penalty, why not do it? On files in the middle range, say 50-500 MB, it's basically down to your preference as to whether load latency or seek latency is more annoying to you, and I definitely agree this should be a user option eventually. On big files, both approaches start to have big problems if you still want to seek accurately: with a seektable the load time becomes unbearable, and with binary search the seek latency becomes unbearable. So in these cases the user either has to accept one or other of these annoyances, or you have to seek less accurately, which may or may not be desirable.
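That fudged, capped guessing strategy looks roughly like this (again an illustrative sketch, not any particular player's code; `FUDGE` and `MAX_TRIES` are invented names):

```python
FUDGE = 0.9      # deliberately undershoot so we rarely land past the target
MAX_TRIES = 5    # the fixed iteration cap many players use

def guess_seek(page_time_at, lo, hi, t_lo, t_hi, target_time, page_size=4096):
    """Interpolation-style seek: guess proportionally from the timestamps
    (wanting 3:00 of a 4:00 file suggests ~75% of the way through),
    scaled back by FUDGE, and give up after MAX_TRIES attempts,
    returning "wherever we end up"."""
    for _ in range(MAX_TRIES):
        if t_hi <= t_lo or hi - lo <= page_size:
            break
        frac = (target_time - t_lo) / (t_hi - t_lo)
        guess = lo + int(frac * FUDGE * (hi - lo))
        page = guess - (guess % page_size)
        t = page_time_at(page)
        if t <= target_time:
            lo, t_lo = page, t   # landed before the target: keep it
        else:
            hi, t_hi = page, t   # overshot: range only shrinks to ~frac
    return lo
```

On an even-bitrate file this homes in very quickly; in a region where the local bitrate differs wildly from the average, five guesses can leave you well short of the target, which is exactly the inaccuracy described above.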
Pretty much it comes down to:

            Seektable   Binary Search
Load time   O(n)        O(1)
Seek time   O(1)        O(log n)

There is also another alternative, which is what I will end up implementing later: a combination of both approaches that will give near-optimal performance in all reasonable cases without sacrificing accuracy.
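The post doesn't say how that combination works, but one plausible reading (my assumption, not the author's stated design) is a sparse seektable: keep only every Nth page in the table, so it stays small and cheap to build, then run a short bisection confined to the narrow byte range the table brackets.

```python
import bisect

def build_sparse_table(pages, stride=64):
    """Keep only every `stride`-th (time, offset) pair -- a much smaller
    table that still brackets any target time to within `stride` pages."""
    return sorted(pages)[::stride]

def hybrid_seek(table, page_time_at, target_time, stride=64, page_size=4096):
    """Sparse-table lookup, then local bisection inside the bracket:
    O(log(table)) in memory plus only ~log2(stride) file probes."""
    times = [t for t, _ in table]
    i = bisect.bisect_right(times, target_time) - 1
    lo = table[max(i, 0)][1]
    hi = table[i + 1][1] if i + 1 < len(table) else lo + stride * page_size
    while hi - lo > page_size:
        mid = (lo + hi) // 2
        page = mid - (mid % page_size)
        if page_time_at(page) <= target_time:
            lo = page
        else:
            hi = page
    return lo
```

With a stride of 64, each seek needs only about six probes regardless of file size, while the table is 64 times smaller than a full one, which is one way to land between the two columns of the table above.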