Seems to me most of the testing results I read are of the form "encoder x is better than encoder y at bitrate n". This does not tell me what I (and probably many others) really want to know.
Surely it would be better to test each encoder separately against the original source, to discover the lowest bitrate at which everyone agrees it is transparent, and then compare the file sizes of the source audio encoded by each encoder at its lowest transparent bitrate. The results would then tell us which really is the best compression codec, which is second best, and so on.
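To make the proposal concrete, here is a minimal sketch of the ranking step, using entirely hypothetical transparency verdicts (the encoder names, bitrates, and verdicts are made up for illustration; in a real test they would come from double-blind listening trials):

```python
# Hypothetical results: encoder -> {bitrate_kbps: judged transparent?}
# In a real test these verdicts would come from double-blind ABX trials.
TRIALS = {
    "encA": {96: False, 128: True, 160: True, 192: True},
    "encB": {96: True, 128: True, 160: True, 192: True},
    "encC": {96: False, 128: False, 160: True, 192: True},
}

def lowest_transparent_bitrate(verdicts):
    """Lowest tested bitrate that is transparent, with every higher tested
    bitrate also transparent (guards against non-monotonic verdicts)."""
    rates = sorted(verdicts)
    for i, r in enumerate(rates):
        if all(verdicts[x] for x in rates[i:]):
            return r
    return None  # never transparent within the tested range

# Rank encoders by the bitrate they need to reach transparency (lowest first)
ranking = sorted(TRIALS, key=lambda e: lowest_transparent_bitrate(TRIALS[e]))
print(ranking)  # -> ['encB', 'encA', 'encC']
```

For constant-bitrate encoders the transparent bitrate stands in directly for file size; for VBR encoders you would compare the actual encoded sizes at the winning settings instead.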
Sounds like a monumental undertaking. Please feel free to organize and administer such a test! I'm sure many of the other people here who have run listening tests in the past would be willing to guide you in ensuring the tests are performed with proper double blind techniques and the data collated usefully at the conclusion.
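For anyone unfamiliar with how such double-blind trials are scored: a common approach (assumed here, not a prescription for this particular test) is an ABX trial, where under the null hypothesis that the listener is guessing, the number of correct answers follows a Binomial(n, 0.5) distribution:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: P(X >= correct) for X ~ Binomial(trials, 0.5),
    i.e. the chance of doing at least this well by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# e.g. 14 correct out of 16 trials is very unlikely to be luck
print(round(abx_p_value(14, 16), 4))  # -> 0.0021
```

A low p-value means the listener could reliably tell the encoded file from the original, i.e. that setting is not transparent for that listener and sample.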
Conceptually I agree with you OP, but in practice...
Large scale tests at or near transparency are difficult to run.
The test you are proposing is multidimensional, due to different encoders "failing" with different signals.
The other problem is that I do not know of any lossy encoder that is universally transparent at any efficient bitrate. Hence the answer to your question is really quite simple: use lossless.
I believe lossyWAV comes close to "absolutely always transparent", but that isn't what most people would call efficient. Several other lossy codecs still break (subtly, on a couple of known samples) even at their max bitrate. Others have maximum bitrates that are uncapped or higher than lossless, so of course don't break at their maximum bitrate, but what use is that?