Recent Posts

1
Just my two cents, @Arrivest, but there is a truckload of posts spread throughout the years in this community that could tell you why transcoding from lossy to lossy isn't that great an idea - from both the qualitative and common-sense POVs.

This is for use in his car, and he is keeping the original files, so it's not so much of an issue. If anything, he should be pointed to a VBR setting.
3
General Audio / Re: Clipping audible when lossy encoding?
Last post by bennetng -
1.35 = +2.6 dB, so -1.5 dB is not enough anyway.
Audibility may vary from track to track; you need to ABX.
The safest way to avoid clipping across all formats is to use a playback app with proper ReplayGain support.
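The arithmetic above is just the standard linear-to-dB conversion; a quick sketch:

```python
import math

# Convert a decoded track peak (linear scale, 1.0 = full scale)
# into dB relative to full scale. A peak of 1.35 overshoots by
# about 2.6 dB, so a fixed -1.5 dB pre-gain cannot fully prevent
# clipping for such a track.
def peak_to_db(peak):
    return 20 * math.log10(peak)

print(round(peak_to_db(1.35), 1))  # 2.6
```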
5
Validated News / Re: Rockbox 3.14 released
Last post by includemeout -
As I cannot recall doing that back then, I'll give it a try.

As ever, thanks for the help, @saratoga !
6
General Audio / Clipping audible when lossy encoding?
Last post by halb27 -
I ran into this question when testing Opus last night, but it's not a specifically Opus question.

I encoded my standard test set of various pop tracks in order to find out the average bitrate for a certain Opus bitrate setting.
But I also looked at the peak track values. They were often something like 1.35.
Thanks to the loudness war most of my original flac tracks have a peak of 1.0 or not much below.
Given that, it is clear that such overshoots happen going from the time domain to the frequency domain and back.
But what does this mean for the final quality? With peak values ~1.35 can I be sure clipping is inaudible or avoided somehow?
With MP3 I always look at the track peak value given by foobar2000 and, if necessary, edit in a track gain value of -1.5 dB (or more if needed) and apply track ReplayGain to the file content. After that I expect that listening with foobar2000 can't have clipping issues any more. With other decoders things can be different, but I expect to have significantly lowered the chance of audible clipping in those cases too.

I do the same with AAC files. But what about codecs like Opus? What can I do to prevent clipping? Or are these worries irrelevant? I'm pretty much out of certainties at the moment.
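The peak-based workflow described above can be sketched roughly as follows (illustrative names only; this is not foobar2000's actual mechanism): pick a track gain from the decoded peak, with a little extra headroom, and scale the samples so the result can no longer clip.

```python
import math

# Scale samples by a gain in dB, as a ReplayGain-style track gain would.
def apply_track_gain(samples, gain_db):
    scale = 10 ** (gain_db / 20)
    return [s * scale for s in samples]

decoded = [0.2, -1.35, 0.9, 1.1]           # decoded samples, peak above full scale
peak = max(abs(s) for s in decoded)         # 1.35
gain_db = -20 * math.log10(peak) - 0.1      # just enough attenuation + 0.1 dB headroom
safe = apply_track_gain(decoded, gain_db)
print(max(abs(s) for s in safe) < 1.0)      # True
```

The point of the calculation: a 1.35 peak needs about 2.6 dB of attenuation, so a fixed -1.5 dB edit has to be increased for tracks like these.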
7
Not to mention that there is no way that response at 20 Hz and below can be accurately reproduced by a LP.
But maybe by an EP played back at 16 2/3 rpm. A switch every kid who had access to one just had to try, right?

IIRC there were some turntables where you had to remove the platter and gear the belt manually between 45 and 33 1/3. Certainly there must have been a demand among parents.

It's not the turntable speed that is the key factor here. It is the tonearm resonance.

Right. So it does not (only) matter what can be committed to the LP format, but what a turntable setup can get out of the groove.
To balance out their desire for supersonic hiss they cannot hear (and which is only noise anyway), audiophiles must certainly equip themselves with a laser turntable in order to listen to those subsonic signals that were never intended to make it onto the medium.
8
Actually ... I did not even know about this: https://en.wikipedia.org/wiki/Hi-MD had a capacity of 94 minutes of PCM. Brought to you by the creators of Betamax ...

But I cannot imagine industry support - rather, one would have to expect major costs fighting the RIAA, which back then tried to restrict audio-on-CD-R to very special hardware and media.

I imagine if you'd pitched that to Sony in 1996, they'd have pointed out that they could easily expand the capacity of a CD by more than 30% just by increasing the NA of the readout optics, without having to spend a small fortune on semiconductors. Remember, by the late 1990s things like GD-ROM were already commercially available with larger capacity and almost no cost premium:

https://en.wikipedia.org/wiki/GD-ROM

"considered a mistake that contributed to the Dreamcast's early demise."

Yeah ... so even in the game console world, where the drives are bundled with the machine (unlike for music) and the vendor ensures that this is how the content is delivered (unlike for music), this choice of format could ruin your business.

9
This morning I also tested harp40_1 using --bitrate 160 (172 kbps on average for my test set of various pop music) and --bitrate 192 (205 kbps for my test set), using build A and build C. I can ABX each of the encoding results.

Using --bitrate 160 I'd call the results acceptable, using --bitrate 192 the issue is negligible to me.

Considering the fact that --bitrate 128 is transparent for nearly everything, --bitrate 160 is my sweet spot giving some (but not exaggerated) headroom for very evil samples.

Ignoring very evil samples, --bitrate 96 is great and --bitrate 128 is perfect (judging from what we know so far).
10
Opus / Re: Opusenc's built-in resampler
Last post by OrthographicCube -
Yep. It sounded exactly like when you apply the nearest-neighbor algorithm to samples. Harmonics everywhere! No filtering, nothing!

Not trying to be a smart aleck or something (please--the last thing I want is an enemy), but I am also a real fan of chiptune music. (The fact that Opus does best when it comes to chiptune music as per my ABX tests is another topic: https://hydrogenaud.io/index.php/topic,105808.new.html )

Some computers do apply nearest-neighbor resampling to samples; most notably, the Amiga's sound chip, nicknamed "Paula", does this at the hardware level. The Game Boy has a single sound channel that can play back samples resampled using--yep, nearest neighbor. The C64 and Atari XL/XE can also play samples through software mixing, and with their slow CPUs burdened with real-time audio mixing, there is no choice but to go with nearest-neighbor resampling; even linear resampling is unaffordable on these computers. A notable exception, however, is the Super NES / SNES / Super Famicom, which uses Gaussian resampling at the hardware level (not quite sure if it is indeed called Gaussian, but I'm certain it doesn't use nearest-neighbor or linear resampling--it uses something more advanced than linear but less advanced than cubic).
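For what it's worth, nearest-neighbor (zero-order-hold) resampling as described above can be sketched in a few lines; this is a simplified illustration, not any particular chip's exact behavior:

```python
# Nearest-neighbor resampling: each output sample just reuses the
# nearest input sample. No filtering at all, so the resulting step
# discontinuities spray harmonics across the whole spectrum.
def resample_nearest(samples, src_rate, dst_rate):
    n_out = len(samples) * dst_rate // src_rate
    return [samples[i * src_rate // dst_rate] for i in range(n_out)]

# Upsampling 2x simply repeats every input sample:
print(resample_nearest([1, 2, 3, 4], 8000, 16000))
# [1, 1, 2, 2, 3, 3, 4, 4]
```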

Since you mentioned PSG, that points us to (mostly) non-sample-based chips, like the Nintendo 2A03 or the 3-channel AY chips, or even FM synth ones like the OPL series, maybe even the Game Boy's sound chip. These chips make variable-pulse-width square waves, and it won't even matter whether we resample them using nearest neighbor or cubic--those are square waves :) The NES (not Super NES) also creates triangle waveforms by playing back an internally stored sample of a (weirdly distorted) triangle wave at various frequencies, resampled--yep, nearest neighbor :) The VRC6 sound chip made by Konami uses an internal counter that increments at fixed time intervals before wrapping back to zero and incrementing again, creating a sawtooth waveform. Analyzed closely, the waveform exhibits a "staircase"-like shape, produced by the integer counter which jumps incrementally, the jump size depending on the requested frequency. Since it isn't a perfect sawtooth wave, nearest-neighbor resampling wouldn't hurt it too much :) Yet another NES sound-expansion chip, the Namco N106 (not sure about the name), is a sample-based audio chip that resamples using nearest neighbor.
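The VRC6-style counter described above can be simulated roughly like this (the step and modulus values are made up for illustration; the real chip's registers and bit widths differ):

```python
# An integer accumulator bumped by a fixed step each tick, wrapping
# back toward zero at the modulus. The fixed integer step is exactly
# why the output is a "staircase" rather than a smooth ramp, and the
# step size (per requested frequency) sets the staircase jump size.
def sawtooth_staircase(step, modulus, n_ticks):
    acc, out = 0, []
    for _ in range(n_ticks):
        out.append(acc)
        acc = (acc + step) % modulus  # wrap -> new sawtooth period
    return out

print(sawtooth_staircase(step=40, modulus=256, n_ticks=8))
# [0, 40, 80, 120, 160, 200, 240, 24]
```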

My point is that chiptune already uses lots and lots of nearest-neighbor interpolation / resampling. In my opinion, THAT is what gives it that unique chiptune feel. Because of that, the tracks already contain large amounts of harmonics (can't blame square waves--that's normal and expected from them), and if a lossy audio codec can't retain those harmonics, then it's not a good codec.

Just my two cents. Please feel free to ignore this or what. :)