
qtaacenc: a command-line QuickTime AAC encoder for Windows

Reply #75
Because, I assume, they use ReplayGain, which, unlike QuickTime's limiter, will prevent clipping without affecting dynamics (though whether it makes an audible difference...).


ReplayGain cannot prevent clipping in this case. It can only work with what it's fed. If that is clipped, the data is gone.

We're talking about decoder clipping, right? ReplayGain will prevent it (unless it's applied after the decoder's output has been converted to integer, of course).
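
A minimal sketch of that order-of-operations point, with made-up numbers (not tied to any particular decoder):

```python
import numpy as np

# Hypothetical decoded AAC block in float; lossy coding can overshoot full scale.
decoded = np.array([0.20, 0.85, 1.12, 0.97, -1.05])

replaygain_db = -3.2                    # example track gain, made up
gain = 10.0 ** (replaygain_db / 20.0)   # dB -> linear factor

# Gain applied while still in float: peaks end up below 1.0, nothing clips.
in_float = decoded * gain               # max ~0.77

# Quantize to the integer range first (clips at +/-1.0), then apply gain:
# the overshoot is already flattened and the gain cannot bring it back.
after_int = np.clip(decoded, -1.0, 1.0) * gain
```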


Reply #76
I didn't think about that. If the energy added by an encoder's own filtering doesn't cause overflow, clipping should indeed not happen when decoding to float, and then QuickTime's scaling would be unneeded. Does it also scale down if you convert the input to float before feeding it to QuickTime?


Reply #77
Or is AAC unclippable if you output to float?

Yes.

In addition, the clipped peaks usually have an inaudibly short duration. I have not yet seen a successful ABX report or been able to personally ABX such clipping. For a valid test you would need to decode one sample to float and let the other clip, and after that reduce the volume level of both files by an equal amount so that the "float" file will not clip when it is converted to an integer bit depth.

Edit

I was incorrectly speaking about lossless vs lossy instead of two differently decoded versions of a lossy file. Fixed that.
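
A rough sketch of preparing the test pair described above, assuming both versions start from the same float decode and that a float-capable WAV reader/writer such as the soundfile module is available (file names are placeholders):

```python
import numpy as np
import soundfile as sf  # assumed available; any float WAV reader/writer works

decoded, rate = sf.read("decoded_float.wav")   # decoder output kept in float

# Version A: clip first, the way an integer-output decode would
clipped = np.clip(decoded, -1.0, 1.0)

# Attenuate both versions by the same amount, just enough that the
# un-clipped version stays below full scale after quantization
gain = 0.99 / max(np.abs(decoded).max(), 1.0)

sf.write("a_clipped.wav", clipped * gain, rate, subtype="PCM_16")
sf.write("b_unclipped.wav", decoded * gain, rate, subtype="PCM_16")
# Only version A carries the clipping distortion; levels are otherwise matched.
```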


Reply #78
It is possible to multiply the input by 0.95 to prevent the QT limiter from altering the signal.

Another method: decrease the input by 1.505 dB, encode to AAC, then use aacgain to restore the volume.
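
As far as I understand, the 1.505 dB figure is chosen to match aacgain's step size: like mp3gain, it changes the stream's global gain in steps of 2^(1/4), so exactly one step can be added back losslessly after encoding. The arithmetic (nothing tool-specific):

```python
import math

step_db = 20 * math.log10(2 ** 0.25)   # one global-gain step, a factor of 2^(1/4)
print(round(step_db, 3))               # 1.505

factor = 10 ** (-step_db / 20)         # the matching pre-encode attenuation
print(round(factor, 4))                # 0.8409, i.e. multiply the input by ~0.841
```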

 


Reply #79
... Besides, it's not always strong enough to prevent clipping anyway...

I too have noticed that.

For instance, here are three samples from my archive.

The encoding settings:
FLAC: -8
M4A: QT --tvbr 62 --highest
MP4: Nero -q 0.41
MP3: Lame -V5



Reply #80
It is possible to multiply the input by 0.95 to prevent the QT limiter from altering the signal.


Yes, though I think QT does not apply limiting, at worst attenuation. And I cannot even reproduce that. afconvert doesn't alter the -0.000001 dB sine wave I feed it. The AAC is just 0.05 dB louder than the input WAV at the end.
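
If anyone wants to repeat that check, here is a rough sketch of generating the test tone and measuring the round trip; the frequency, length and use of the soundfile module are my own choices, not anything afconvert or qtaacenc requires:

```python
import numpy as np
import soundfile as sf  # assumed available

rate, seconds, freq = 44100, 5, 997              # 997 Hz does not divide the sample rate
t = np.arange(rate * seconds) / rate
amp = 10 ** (-0.000001 / 20)                     # sine peaking at -0.000001 dBFS
sf.write("sine_in.wav", amp * np.sin(2 * np.pi * freq * t), rate, subtype="FLOAT")

# Encode sine_in.wav to AAC, decode it back to sine_out.wav, then:
out, _ = sf.read("sine_out.wav")
print(20 * np.log10(np.abs(out).max()))          # peak of the round trip, in dBFS
```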

I also doubt that, even if there were as much as 1 dB of attenuation involved, any of us could ABX it after ReplayGain had been applied.


Reply #81
BTW, all resulting bitrates in my screenshot are dubiously low for the encoded material, especially for the Merzbow sample, which is from possibly the loudest track ever recorded. It is almost like white noise at 0 dBFS, as you may guess from the FLAC bitrate.

I would have expected the bitrates to be very high, but perhaps the encoders have some kind of built-in sanity checks and fall back to a nominal bitrate when the signal doesn't look like sound with normal variation.


Reply #82
Does it also scale down if you convert the input to float before feeding it to QuickTime?

Yes.
I converted the FLACs to 32-bit float WAV files, then fed those directly to qtaacenc.
foo_bitcompare couldn't find any difference between the resulting files and the ones converted with foobar2000.
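
For reference, roughly what that bit-compare does, for anyone outside foobar2000; this assumes both AAC files have already been decoded to WAV, and the file names are placeholders:

```python
import numpy as np
import soundfile as sf  # assumed available

a, _ = sf.read("decode_of_float_fed_encode.wav", dtype="int16")
b, _ = sf.read("decode_of_foobar_encode.wav", dtype="int16")

if a.shape != b.shape:
    print("length/channel mismatch")
else:
    diff = np.abs(a.astype(np.int32) - b.astype(np.int32))
    print("bit-identical" if diff.max() == 0 else f"max difference: {diff.max()} LSB")
```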


Reply #83
Could you please upload 30 seconds of a file that gets attenuated to the upload section? I'm unable to reproduce it.


Reply #84
I also doubt that, even if there were as much as 1 dB of attenuation involved, any of us could ABX it after ReplayGain had been applied.

I am afraid that, in typical Apple style, the "feature" is undocumented. For instance, for the public listening test we would need to know exactly in which circumstances the volume level reduction kicks in and also whether its amount is constant. If it varies, we would need to know how it varies.

A 0.6 dB volume level reduction is too big for a listening test and must be corrected in one way or another. ABC/HR Java has a built-in normalizer, but because it only levels the files according to the highest peak level, it is useless for this.




Reply #85
Test file for the QT limiter: all samples but one are 0.25; one sample is equal to -1.0.
[attachment=5711:test.wv]
Encode it to AAC, decode to WAV, and multiply by 4 (so that 0.25 becomes 1.0).
Result: [waveform screenshot attached in the original post]

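In case the attachment disappears, a rough sketch of generating an equivalent test signal (one second at 44.1 kHz is my assumption; the original test.wv may differ):

```python
import numpy as np
import soundfile as sf  # assumed available; the original attachment is WavPack

rate = 44100
signal = np.full(rate, 0.25, dtype=np.float32)   # constant 0.25 everywhere...
signal[rate // 2] = -1.0                         # ...except one full-scale sample

sf.write("qt_limiter_test.wav", signal, rate, subtype="FLOAT")
# Encode to AAC, decode back, multiply by 4: if the 0.25 plateau no longer reaches
# 1.0 around the spike, the encoder's limiter has altered the signal.
```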

Reply #86
Could you please upload 30 seconds of a file that gets attenuated to the upload section? I'm unable to reproduce it.

Any loud sample that peaks at 0 dBFS should do, but I uploaded the three samples I pictured in the above screenshot:
http://www.hydrogenaudio.org/forums/index....showtopic=78476


Reply #87
Alex, I get a track peak of 0.999969 for the original Merzbow, but 1.946267 for the QuickTime encode. Isn't that supposed to be lower if attenuation was happening?

But lvcql's file really does show lower peaks against both Nero and the original. And it seems to be applied as a limiter with a very long release time. I would want to switch that off, at least for comparisons, but there is just no way to do that in afconvert. And since the effect is applied dynamically, it also cannot be removed by post-processing. The only workaround would be lvcql's proposal.

Update: Corrected premature conclusions.


Reply #88
Nao, could you also give us instructions for a CLI dBpoweramp version of this? Thank you so much for all of this and keep up the good work!


Reply #89
Nao, could you also give us instructions for a CLI dBpoweramp version of this?

Are there any difficulties? For example:


Reply #90
Nao, could you also give us instructions for a CLI dBpoweramp version of this?

Are there any difficulties? For example:


Just a couple of related questions:

1) Converting mono tracks - is it better to use the "channels" drop-down -> 1 or the channel count DSP? Or something in the command line itself? Or does it not matter?

BTW, using either the drop-down or the DSP, foobar2000 properties still show "channels - 2" with mono files, whereas with mono Nero files they show "channels - 1". I don't know if this is caused by qtaacenc or the QuickTime encoder itself. [dBpoweramp's properties info is worse: it shows "channels - 2" with both types of mono AAC files!]

2) For changing sample rates is it better to use the drop-down in the dB CLI encoder or "--samplerate xxx" in the command line? Or doesn't it matter?

Thanks nao.


Reply #91
Quote
1) Converting mono tracks - is it better to use the "channels" drop-down -> 1 or the channel count DSP? Or something in the command line itself? Or does it not matter?

Nope.

Quote
BTW, using either the drop-down or the DSP, foobar2000 properties still show "channels - 2" with mono files, whereas with mono Nero files they show "channels - 1". I don't know if this is caused by qtaacenc or the QuickTime encoder itself. [dBpoweramp's properties info is worse: it shows "channels - 2" with both types of mono AAC files!]

The same thing happens to the files created by iTunes, so it should be caused by QuickTime.

Quote
2) For changing sample rates is it better to use the drop-down in the dB CLI encoder or "--samplerate xxx" in the command line? Or doesn't it matter?

If you don't want to use the samplerate converter in QT, use the drop-down. Otherwise use the --samplerate option. I can't say which is better.


Reply #92
Thank you nao! It works amazingly! Should "highest quality decoded source" be checked? I have tested a bit anyway and I noticed that some songs converted with XLD Q127 MAX have 1 kbps less than Q127 --highest (which is Max), isn't it weird?


Reply #93
Thank you nao! It works amazingly! Should "highest quality decoded source" be checked? I have tested a bit anyway and I noticed that some songs converted with XLD Q127 MAX have 1 kbps less than Q127 --highest (which is Max), isn't it weird?


Maybe because qtaacenc uses QuickTime 7.6.5 on Windows, whereas XLD uses QuickTime 7.6.3 (QuickTime X) on Mac OS X Snow Leopard and QuickTime 7.6.4 on Mac OS X Leopard.

Regards


Reply #94
Sometimes it's 2-3 kbps of difference; anyway, more or less it's there. You're probably right, wkmax. It's weird that Apple hasn't updated QuickTime on Snow Leopard and Leopard too; I mean, if there are any improvements in encoding they should have updated it on the Mac as well.


Reply #95
I can confirm that the differences are not due to different container encapsulation. The raw AAC tracks also have different sizes.


Reply #96
Quote
Should "highest quality decoded source" be checked?

I don't know what the option is.

Quote
isn't it weird?

Well, bitrate is not a good way to check equivalence. You should compare the decoded PCM samples. As far as I've tested, the decoded samples are not identical, but almost the same. The difference is much smaller than the difference between --highest and --normal.
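
A rough way to put a number on "almost the same", assuming both encodes have been decoded to float WAVs (file names are placeholders):

```python
import numpy as np
import soundfile as sf  # assumed available

x, _ = sf.read("decoded_qtaacenc.wav")
y, _ = sf.read("decoded_xld.wav")

n = min(len(x), len(y))                          # ignore any trailing padding mismatch
diff = (x[:n] - y[:n]).ravel()
rms_db = 20 * np.log10(np.sqrt(np.mean(diff ** 2)) + 1e-12)
peak_db = 20 * np.log10(np.abs(diff).max() + 1e-12)
print(f"difference vs full scale: RMS {rms_db:.1f} dB, peak {peak_db:.1f} dB")
```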

Maybe because qtaacenc uses QuickTime 7.6.5 on Windows, whereas XLD uses QuickTime 7.6.3 (QuickTime X) on Mac OS X Snow Leopard and QuickTime 7.6.4 on Mac OS X Leopard.

I don't think the difference in QT version (7.6.3 vs .4 vs .5) matters. I think the difference mainly comes from environment-dependent stuff (the compiler, the underlying floating-point arithmetic routines, etc.). You know, different LAME binaries produce subtly different results, even when they are built from the same source.


Reply #97
I don't think the difference in QT version (7.6.3 vs .4 vs .5) matters. I think the difference mainly comes from environment-dependent stuff (the compiler, the underlying floating-point arithmetic routines, etc.). You know, different LAME binaries produce subtly different results, even when they are built from the same source.


Thanks again nao, makes sense 


Reply #98
Hello.

I have one question about VBR encoding in QuickTime. I searched with Google but have not found a detailed explanation of the Constrained VBR mode.

Is it better or worse than True VBR? Maybe it gives higher quality than TVBR only at maximum quality settings (--cvbr 320 vs --tvbr 127), but at lower bitrates I can get better quality with true VBR?


Reply #99
There is no technical reason to expect CVBR to be better than TVBR (especially not at high bitrates*). Conceptually TVBR is superior and CVBR is just TVBR with braces. If the TVBR implementation were somehow flawed, it would rather show at lower bitrates (as in the 2010 listening test). The constraint could keep the encoder from dropping too low (while at the same time limiting how far it can scale up where needed).

You'll find explanations of CVBR from Apple's developer documentation if you use the forum's search function.

In my opinion the sole purpose of the CVBR mode in iTunes is the prevention of consumer confusion. They don't want people calling in to ask why their remastered 1940 album comes out at ~90 kbit/s when they had chosen 256 kbit/s average. When there's just not that much information to preserve, the TVBR encoder won't hesitate to go that low. The CVBR encoder would instead output a higher bitrate and please consumer prejudice (without audible benefit). Consumers are happy, Apple is happy (fewer support calls): win/win.

* TVBR 127 doesn't drop that low anymore and results in rates above 330 kbit/s.