"Conventional wisdom": even among those with a type IV switch, many couldn't really record to metal tape.No, type IV was actually superior. There's a demo of noise/hiss levels in the video I posted (starting at 15:25.) The sound is extremely clean. Back then it sounded very near to the CDs I recorded from. And that's without dolby noise reduction. With dolby, there's virtually zero noise. But a poor deck will always produce crappy results though. A decent deck is needed to get good recording and playback quality.
Question: was that just audiophoolery? (I stuck to chrome myself.)
I use foobar2000 to play tracks for theatre plays. Usually I select the track in the playlist with the keyboard arrows and, once the cursor is over the track, press <Enter> (the 'Play' shortcut) to start it.
A month ago I changed my laptop and downloaded the latest version of FB2K. The problem is that the 'Play' command has a subtle but important change:
- When status is 'playing':
IF cursor is over played track: Play command plays the track from beginning.
ELSE Play command plays the track where the cursor is
- When status is 'stopped': Play command plays the track where the cursor is
- When status is 'paused': Play command resumes playback of the paused track, no matter where the cursor is. This behavior should belong to the Play/Pause command instead, just like in older versions of FB2K.
I've tried different configurations of "Playback follows cursor" and "Cursor follows playback" with no success.
Does anyone know a way to revert this?
Is this a bug or just a change?
Where can I get an older version?
I'm using foobar2000 1.3.17 on Windows 10.
I believe @ziemek.z's misunderstanding of the FFT (i.e. the fact that the FFT returns a complex function of a real variable) comes from how audio editors often display FFTs. Many, including Audacity, display it as a sequence of buckets, where the frequencies of each bucket are painted in a voiceprint. The tools often quietly ignore the imaginary part of the resulting function: they simply paint the magnitude of the complex value and ignore the argument, which in this case is the phase.

Okay -- that makes sense. From what I know (and from my previous practical experience), the phase is all-important. When I wrote my prototype compressor/expander, the math operations were done on complex numbers. Using magnitudes only was a waste of time and just produced garbage.
In my recent work on compressors, expanders, Aphex Exciter removers, DolbyA decoders, etc. -- when trying to clean up the sound of some old recordings -- I found that they were sometimes screwing with the phase in bad ways, summing/subtracting phase-shifted versions of the signal to/from itself. The results might have given an effect of making the middle frequencies more intense (in the case of the parameters used in certain devices), but with the super-high-quality equipment of today, the ugliness becomes more apparent. At least they weren't zeroing the phase, but even playing with the quadrature (the stuff 90deg out of phase) should not be done lightly. Even though I didn't like the results -- they weren't going too crazy with the phase -- it is just that phase can mess things up similarly to a messed-up frequency response. Doing some phase tricks (fancy for the time that 4ch matrix was common) is part of how matrix quad could work reasonably well. IMO, if possible, unless done absolutely carefully, it is not a good idea to play with the phase of any aspect of the signal. There are right ways/good reasons for doing it -- but if questions need to be asked on forums like this, then it is a good idea to avoid doing it for now :-).
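To make the "magnitudes only produce garbage" point concrete, here's a minimal pure-Python sketch (a naive O(N^2) DFT, just to keep the complex values explicit -- not something you'd use for real audio work): round-tripping the full complex spectrum reconstructs the waveform, while keeping only the magnitudes (zeroing the phase) destroys it.

```python
import cmath
import math

def dft(x):
    """Naive DFT: returns the complex spectrum of a sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A short test signal: two sines with different (nonzero) phases.
N = 64
x = [math.sin(2 * math.pi * 3 * n / N + 0.7)
     + 0.5 * math.sin(2 * math.pi * 9 * n / N)
     for n in range(N)]

X = dft(x)

# Round trip with the full complex spectrum: near-perfect reconstruction.
x_full = [v.real for v in idft(X)]
err_full = max(abs(a - b) for a, b in zip(x, x_full))

# Round trip keeping only the magnitudes (phase discarded): the waveform
# comes back wrong, even though the magnitude spectrum is identical.
X_mag = [abs(v) for v in X]
x_mag = [v.real for v in idft(X_mag)]
err_mag = max(abs(a - b) for a, b in zip(x, x_mag))

print(err_full)  # tiny: float rounding only
print(err_mag)   # large: the signal shape is gone without phase
```

The magnitude-only version still "contains" the same frequencies at the same levels; it's only the phase that was thrown away, and that alone is enough to wreck the waveform.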
By MathWorks: ( https://www.mathworks.com/help/audio/ref/weightingfilter-class.html?s_tid=gn_loc_drop )
"These coefficients are recomputed for nonstandard sample rates using the algorithm
described in Mansbridge, Stuart, Saoirse Finn, and Joshua D. Reiss. "Implementation
and Evaluation of Autonomous Multi-track Fader Control." Paper presented at the
132nd Audio Engineering Society Convention, Budapest, Hungary, 2012."
(AES Convention Paper 8588)
Looks like the original coefficients are calculated using:
% ITU-R BS.1770-4 --------------------------------------
fs = 48000;
db = 3.999843853973347;
f0 = 1681.974450955533;
Q  = 0.7071752369554196;
K  = tan(pi * f0 / fs);
Vh = power(10.0, db / 20.0);
Vb = power(Vh, 0.4996667741545416);
pa0 = 1.0;
a0  = 1.0 + K / Q + K * K;
pb0 = (Vh + Vb * K / Q + K * K) / a0;
pb1 = 2.0 * (K * K - Vh) / a0;
pb2 = (Vh - Vb * K / Q + K * K) / a0;
pa1 = 2.0 * (K * K - 1.0) / a0;
pa2 = (1.0 - K / Q + K * K) / a0;

f0 = 38.13547087602444;
Q  = 0.5003270373238773;
K  = tan(pi * f0 / fs);
rb0 = 1.0;
rb1 = -2.0;
rb2 = 1.0;
ra0 = 1.0;
ra1 = 2.0 * (K * K - 1.0) / (1.0 + K / Q + K * K);
ra2 = (1.0 - K / Q + K * K) / (1.0 + K / Q + K * K);
Could this same code be used to calculate coefficients for other sample rates by just changing the value of the fs parameter?
I use a similar subroutine for a similar purpose in my software -- that is, I just change the fs value; the filter specs are defined by the f0 and Q values. I don't use that precise function -- I start with something like H(s) = (b0*s^2 + b1*s + b2) / (a0*s^2 + a1*s + a2), where I have precalculated the b0-b2 and a0-a2 values for the kind of second-order filter that I want. Then I have a function that accepts the b values, the a values, fs, and the frequency for the warping parameter. So, no matter the fs frequency, I have a 2nd-order IIR filter that matches the specified characteristics. Of course, the filter must be reasonable for the fs values that it is used for. I also have a set of functions that build FIR filters at runtime (given cutoff, filter type, # of taps, etc.).
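The approach described above -- bilinear-transforming a normalized analog biquad with a prewarped frequency -- can be sketched in Python like this. This is a hypothetical helper, not the poster's actual code, and note one difference from the listing: it normalizes the numerator by a0 as well, while the listing's second (r-) stage leaves rb0..rb2 unnormalized.

```python
import math

def bilinear_biquad(B, A, f0, fs):
    """Map an analog biquad
        H(s) = (B[0]*s^2 + B[1]*s + B[2]) / (A[0]*s^2 + A[1]*s + A[2]),
    with s normalized to the warp frequency f0, to digital coefficients
    at sample rate fs.  Returns (b0, b1, b2) and (a1, a2); a0 is
    normalized to 1."""
    K = math.tan(math.pi * f0 / fs)  # prewarped frequency variable
    B2, B1, B0 = B                   # s^2, s^1, s^0 numerator terms
    A2, A1, A0 = A                   # s^2, s^1, s^0 denominator terms
    a0 = A2 + A1 * K + A0 * K * K
    b = ((B2 + B1 * K + B0 * K * K) / a0,
         2.0 * (B0 * K * K - B2) / a0,
         (B2 - B1 * K + B0 * K * K) / a0)
    a = (2.0 * (A0 * K * K - A2) / a0,
         (A2 - A1 * K + A0 * K * K) / a0)
    return b, a

# The second (RLB high-pass) stage above: H(s) = s^2 / (s^2 + s/Q + 1).
# Changing fs here re-derives the coefficients for any sample rate.
f0, Q = 38.13547087602444, 0.5003270373238773
b, a = bilinear_biquad((1.0, 0.0, 0.0), (1.0, 1.0 / Q, 1.0), f0, fs=48000)
```

Plugging in the shelf prototype H(s) = (Vh*s^2 + (Vb/Q)*s + 1) / (s^2 + s/Q + 1) with the constants from the listing reproduces the familiar BS.1770 48 kHz pre-filter coefficients, so changing fs alone should indeed be enough -- within the limit already noted, that the filter must remain reasonable at the chosen rate.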
I didn't carefully review your specific function, but it does look somewhat similar to the one that I wrote. You are going in the right direction.
And almost no one I knew was buying metal grade cassettes anyway, only normal and chrome.

Many tape recorders didn't have chrome/metal settings. According to the Wikipedia article, commercial releases on BASF chrome tape were treated as Type I (for compatibility, I guess?).
"Conventional wisdom": even among those with a type IV switch, many couldn't really record to metal tape.
Question: was that just audiophoolery? (I stuck to chrome myself.)
Listening / Getting the Streamdump
Thanks to mpv, youtube-dl, and ffmpeg, listening to the /rh.16 stream is actually possible on Linux, and I'm pretty sure on Windows, too. However, it seems the toolchain is struggling a bit as it is.
Simply playing the stream directly with mpv is quite low on errors:
$ mpv http://stream.radioh.no:443/rh.16
yields the following errors right at the start:
[ffmpeg/audio] aac: channel element 2.15 is not allocated
Error decoding audio.
However, right after that, the stream starts playing quite nicely.
With the help of youtube-dl, I managed to download a sample length of the stream.
Inspecting it with ffmpeg yields this:
[aac @ 0x59159c0] Estimating duration from bitrate, this may be inaccurate
Input #0, aac, from 'rh-rh.16.part':
  Duration: 00:09:45.52, bitrate: 16 kb/s
    Stream #0:0: Audio: aac (HE-AACv2), 32000 Hz, stereo, fltp, 16 kb/s
Note the "16 kb/s". The duration is out by around 20 seconds in that ~10 min sample. The file is 1189420 bytes (1.2M) in length and is a pure AAC dump.
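Sanity-checking ffmpeg's bitrate-based duration estimate with a bit of arithmetic, using the size and bitrate reported above (and assuming 16 kb/s really is the average bitrate, which for this kind of stream it may well not be):

```python
size_bytes = 1189420   # file length reported above
bitrate_bps = 16000    # "16 kb/s" as printed by ffmpeg
est_seconds = size_bytes * 8 / bitrate_bps
print(f"{int(est_seconds // 60)}:{est_seconds % 60:05.2f}")  # 9:54.71
```

So even a pure size/bitrate estimate lands at about 9:55 rather than ffmpeg's 00:09:45.52, consistent with the duration being off over the ~10 min sample.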
I can play the file back using mpv, but ffmpeg complains about "Reserved SBR extensions is not implemented", which seems to be kind of a minor thing (?), as the file plays alright.
Whether I play this file with "--msg-level=all=v", play the stream directly with that switch in mpv, or inspect the file with ffmpeg, "xHE-AAC" is never mentioned. Instead, the AAC stream is identified as HE-AACv2.
My youtube-dl version is: "2018.01.21" (probably not the latest one).
I tried two versions of ffmpeg, "3.3.7" and "N-45774-g223f3dff8-static https://johnvansickle.com/ffmpeg/" compiled just a couple days ago.
mpv version is: "0.27.2".
I'm using Fedora 27.
Since no log output of any of the tools I've used reports anything about "xHE-AAC" or "USAC", I'm not sure whether the USAC component is simply ignored. FFmpeg reports "HE-AACv2" with 16 kb/s, and that's pretty much it. Since all tools here use ffmpeg as back-end, I guess this isn't a surprise.
Having said that, the stream sounds "OK", given the low bitrate. However, any sounds resembling noise -- someone making an 's', 'f', 'sh', or 'z' sound, etc. -- all sound alike, and incredibly harsh. Similar to a cassette tape recorded with DC bias. Another analogy is perhaps very small speakers, like the ones you'd find on a cheap cellphone or an old 80s pocket radio. I imagine playing that stream through the speakers of a cellphone or cheap bluetooth speakers would be adequate; I wouldn't want to listen to it in my car, though. FM radio sounds much cleaner than this.
If anyone cares, I can upload a sample, together with one or two samples of one of their higher-quality streams for comparison.
Given that I don't know whether my tools are decoding the stream correctly, I'm unsure whether the (subjectively) bad sound quality is down to xHE-AAC simply being used with such a low bitrate, or if it's down to my tools not being able to decode the stream correctly.
I still haven't figured out a way to create my own sample of xHE-AAC encoded audio. AFAIK, there are no encoders freely available.
There's also a /rh.x16 stream with a content type of "audio/usac". So perhaps that is the /actual/ xHE-AAC stream?
I can't play that stream with anything I've tried. I get errors like:
[aac_latm @ 0x4e82f80] Audio object type 42 is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
(In the case of mpv, I get the exact same message, except prefixed with [ffmpeg/audio].)
So perhaps 42 is the AOT for xHE-AAC, and it's simply not implemented anywhere (specifically ffmpeg)?
Agreed, if more than one listening seat is involved.

It has nothing to do with seats and everything to do with mono (one sub) having zero chance of reproducing inter-aural spatial effects/lateralisation etc., as has been covered many times here, including many AES links by yours truly. Einstein's rule of insanity applies.
By what metric(s)?

All. Nearfield = zero modal problems at the seat. Nothing to "correct" (and/or "incorrect" elsewhere).
The only issue might be phase due to propagation delay vs. the mains, but the lower you cross, the less that matters, as the low-pass filter of the sub will automatically introduce a delay... and it will all appear in the frequency domain at the given crossover, so it's easily measured (though not necessarily heard, depending on the Q of any notch).
Please read; there are links to over 40 studies compiled: https://secure.aes.org/forum/pubs/conferences/?elib=17270