I don't know about Lame mp3 though. I remember reading, not too long ago when 3.97 went final, that some aspects of 3.97 could not be improved due to the limits of the mpeg-1 standard (or something along those lines).
Then again, my perspective is mainly from the portable audio side of things. I am not a lossy encoder developer, so I don't know if the limits of lossy encoding have been reached; I am just looking at Lame here and comparing it to the AAC encoders. Lame 1.0 came out in mid-1998, with version 3.0 following in May 1999. So Lame is about 10 years old, yet they are still adding improvements. Apple's AAC encoder was first made public in 2003, and I think Nero's AAC encoder first came out in 2005 (someone please correct me on this, as I know a free public version came out in May 2006, but I don't know when Nero started shipping their AAC encoder with their software). So theoretically, these AAC encoders still have a lot of room for improvement.
Every codec developer is always constrained by what the standard allows. This doesn't mean the encoder can't be improved!
Nero AAC has its roots in Psytel AAC, which is much older. Apple is based on Dolby AAC, I think.
I don't know if we will ever reach full efficiency with lossy encoders.
@Woodinville: but I ask the question: "is what we currently have possibly enough?"
In 1997, the idea that someone would be able to send/sell music over the internet was ridiculed (look up "a2b music" on Google for that).
The effort (R&D and marketing $$) being invested in increasing storage and bandwidth seems to vastly outstrip the effort to reduce bit rates. So I guess the market is speaking with its dollars and answering this question: "yes, what we have is good enough, and we'll crank up the storage and bandwidth to make sure you get enough music/video, etc."
Lossy makes some sense from a battery perspective, although I'm not sure how that plays out in practice (I'm sure people on this site have tried measuring it!).
Doesn't the whole lossless vs lossy argument hinge around what you personally define as 'lossless' though?
Once the threshold of perceptual transparency has been reached with any recording/encoding method, be it a compressed format or not, it's 'lossless'.
You can also see it in video: the .avi container is slowly being replaced by .mkv, and H.264 encoding is taking over.
A lossless transform guarantees that we don't lose any more information than we've lost already. But compare the enormous amount of real-world information already lost throughout the entire recording process with the minuscule amount, in perceptual terms, that you lose in a well-configured lossy codec: fretting over the latter doesn't seem to make much sense to me on an analogue level.
Have you ever heard a CD-quality recording that comes even vaguely close to replicating the sound of actually being at a live rock concert, for example? I certainly haven't.
Lossless encoding obviously makes sense for archival purposes, but I don't understand why people complain about not being able to download music in a lossless format when the chances are incredibly high that they won't hear any difference whatsoever between, say, a LAME MP3 encoding at VBR -V2 and the original CD recording anyway.
Ignore the fact that we live in an analogue world with analogue sound sources and listen to them with analogue ears if you like. It doesn't alter the undeniable fact that we do.
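As an aside: claims like "they won't hear any difference" are usually settled around here with ABX tests. A minimal sketch of the guessing-probability math behind an ABX result (the function name and the trial numbers are just illustrative, not from any particular test):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` of `trials` ABX
    trials right by pure guessing (each trial is a fair coin flip)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct out of 16 trials gives p ~ 0.038,
# i.e. unlikely to be guessing, so a real audible difference is plausible.
p = abx_p_value(12, 16)
```

If someone can't beat chance over enough trials, the encoding is transparent *to them*, whatever the format's nominal "lossiness".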
EDIT: @Synthetic Soul: Apologies. I didn't see your last post until I'd already posted this. I thought that the OP was asking if we'd reached the limit with lossy. I was just expressing the opinion that the 'lossiness' of lossy is a very small part of the overall picture with current lossy codecs. Please delete this post if you think it's of no relevance here.
Show me an analogue recording format that doesn't restrict bandwidth and add noise and I'll send you a case of beer.
He cannot do that, because it's logically impossible without violating causality.