Last post by dc2bluelight -
There were actually quite a few early all-digital recordings, many pre-dating the CD by several years (using the Soundstream system, for one). In fact, transferring an analog master to CD involved the same gear you'd use to record digitally in the first place: the Sony PCM-1600/1610/1630 (those are different models) working with a slightly modified U-Matic video deck. There were strict guidelines for CD mastering published by the big CD houses - Matsushita in particular - that dictated you put the highest peak of the entire CD at or below 0 dBFS (it wasn't called that then), and that any audible transients be clearly logged and identified by the corresponding time code so they wouldn't be confused with errors in the CD master. Somebody at the plant would actually QC these things by listening!
Mastering for CD - and digital editing in general - was handled via the Sony DAE-1100 editor, a crude device that controlled up to three time-code-locked U-Matic machines and could accomplish digital cross-fade edits and apply simple digital gain control, fades, etc. The final tape was sent to the CD plant, where it was transferred to the glass master with no data changes, but with PQ subcode added and formatted for CD.
Digital processing really began with reverb - the Lexicon 224 and the Ursa Major Space Station, both in 1978 - but dynamics processing wouldn't become practical until the late 1980s.
Also: "In 1994, the digital brickwall limiter with look-ahead (to pull down peak levels before they happened) was first mass-produced." And that accords with my recollection of when it all really started going to hell.

Yup. All true. However, loudness processing in the analog domain was already brutal in some areas decades before that. Radio in particular, where multi-band limiting and deliberate clipping were already going on in the late 1960s, but also 45 rpm singles, which were cut hot almost universally for several decades. The pop hit "Go All the Way" by the Raspberries (1972) has blatantly audible peak-limiting artifacts (rapid attack and release) all over it, and was one of the loudest singles of that year.
Movie companies were brick-wall processing trailer soundtracks then too, even back in the Academy mono optical days. The "loud trailer" problem became epic for a couple of decades, and it got really bad when digital film tracks became common in the 1990s. With so many patron complaints, a program was instituted to pre-qualify trailers to the Leq(m) 85 standard. Trailers would be run and metered on a system made by Dolby that would return an Leq(m) figure. Trailers that didn't pass were "rejected", but it's not clear to me how much impact this actually had, since we still have loud trailers today. Theater projectionists would respond to complaints by turning down the fader for trailers, then forget and leave it low for the feature, which generated the inverse complaints. It was/is a mess.
We think of brick-walled CDs as the beginning of the problem, but in reality it had already been going on for 30-40 years. Yes, digital processing made it easier to make it really bad.
Last post by dc2bluelight -
No, that's not right. Mass-market radios were introduced in the late 1920s, and by the 1930s they'd already improved a lot. They were not narrow-band devices yet, as stations were few and selectivity wasn't a big issue, and the RF limiter circuit hadn't been invented yet either. Dynamic range was not limited by the radio; it was limited by the medium being highly affected by electromagnetic noise, and by the fact that broadcast audio processing had not been invented, so average modulation levels were quite low. When peak limiters (actually invented by the film industry for optical sound) appeared, stations could raise modulation levels without peaks taking the transmitter off the air from an overload, and the S/N ratio improved. But the real loudness war on radio didn't start until rock and roll really took off and stations began some serious competition. I'd put the start at the late 1950s. But loud mastering on singles was already a huge problem, hence the jukebox compressors.
Last post by cliveb -
OK - I have it recorded. Does this site host sound files?

Please remember - do NOT encode them with a lossy codec (e.g. MP3). Lossy codecs can smear vinyl ticks. Use something like FLAC.
Last post by kode54 -
As I PMed to you, this sounds more like something that would be useful for mastering audio.
I haven't tried Sonar in recent years, but I have used both Logic Pro X and Reaper. You may have better luck with real-time auditioning in Reaper.
What are your machine specs, if I may ask? That may go a long way to identifying whether it's even worth switching to a different DAW.
Rendering with tracked tools in a text- or JSON-based format may be possible - using either MIDI rendered with foo_midi or another plugin, or a MIDI file with no notes as a CC system to apply controls to the audio streams you're rendering from - but I haven't tried anything that bold as far as projects go.
Again, no amount of upsampling will improve the audio quality, unless the hardware is incredibly terrible at playing lower sample rates, which is highly unlikely.
There is no grand extrapolator that can intelligently reproduce information that never existed in the original signal, or that was lost by recording at a lower sample rate or by downsampling. A neural network could be trained to try to replicate lost information, using original and downsampled audio, but this sort of replication is not totally reliable, and has only really been trained and demonstrated on visual data, not on audio. And considering how slow it is on mere images, it would be insane to expect it to be usable on the massive quantities of samples you have with audio. Well, not entirely insane: figure 12 million samples along one dimension, instead of the product of two dimensions for an image. Still, don't expect miracles like this to actually produce anything you can hear if your original signal is already at least Red Book quality.
Sorry for my late reply.
I was coding against WASAPI in exclusive mode, and I tried to play 24-bit audio with it.
My code failed to work on my own computer (with a Realtek audio chip), but worked fine on my friend's computer with an XMOS audio device.
I just sent him a binary exe for testing, not source code. You may say Realtek chips need word padding - yes, I understand.
But the tricky part is, I can hear the music through the noise, and it has been sped up, maybe by 10% or so.
To figure out the byte-padding pattern, I even tried filling the buffer with only 8-bit data, and with only left-channel data.
However, no matter how hard I tried, the best result I got was the sped-up audio.
If it were just a byte-padding or byte-order problem, the output SHOULDN'T be sped up, since each time you call GetBuffer() you know the buffer size in SAMPLES.
That is why I say Realtek seems to have a buggy driver; however, maybe I'm still wrong in some parts.
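For what it's worth, the container layout Realtek HDA codecs usually want in exclusive mode is 24 valid bits inside a 32-bit container (described to WASAPI via WAVEFORMATEXTENSIBLE with wBitsPerSample = 32 and Samples.wValidBitsPerSample = 24), with the sample left-justified in the high bytes. Here's a minimal sketch of that expansion - the exact layout your particular chip expects is an assumption you'd need to verify:

```c
#include <stdint.h>
#include <stddef.h>

/* Expand packed 24-bit little-endian PCM into 32-bit containers,
   left-justified (sample data in the top three bytes, low byte zero).
   n is the number of samples, not bytes. */
static void pack24_to_32(const uint8_t *src, int32_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        const uint8_t *s = src + 3 * i;
        uint32_t u = (uint32_t)s[0]
                   | ((uint32_t)s[1] << 8)
                   | ((uint32_t)s[2] << 16);
        /* shift into the high bytes; the sign bit lands in bit 31 */
        dst[i] = (int32_t)(u << 8);
    }
}
```

Note that this also changes the frame size: stereo 24-bit packed is 6 bytes per frame, but in 32-bit containers it's 8. If the device consumes 4-byte-per-channel frames while the buffer was filled with 3-byte samples, it runs through the buffer faster than real time - which would sound exactly like sped-up playback.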
Actually, I was coding an emulator of a CPU (or you could say a computer) that I designed myself.
I use WASAPI to emulate the audio device, and it's quite important to use EVENT mode, since many real-world audio chips work this way (interrupt-driven).
On the other hand, exclusive mode gives a constant audio buffer length, which is also similar to real-world audio chips.
(When using shared mode, you have to check the buffer length each time after calling GetBuffer().)
When I first came to the audio part, I tried waveOut, and it had very large latency - about half a second.
Half a second may not sound bad, but I want the latency smaller than 30 ms, meaning an event rate of roughly 30 Hz or even higher, close to real-world audio chips.
You can guess that I want to use the audio device IRQ as a timer, so that it would be quite straightforward to implement A/V synchronization when doing video playback.
If the audio latency is large, I'll have to prepare another timer and the video may become jerky.
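As a sanity check on those numbers (plain arithmetic, not tied to any particular API beyond WASAPI's convention of expressing durations as REFERENCE_TIME in 100-nanosecond units):

```c
#include <stdint.h>

/* Frames needed to cover a target latency at a given sample rate. */
static uint32_t frames_for_latency(uint32_t rate_hz, uint32_t latency_ms)
{
    return (uint32_t)(((uint64_t)rate_hz * latency_ms) / 1000u);
}

/* Milliseconds -> REFERENCE_TIME (100 ns units), the unit
   IAudioClient::Initialize takes for its buffer duration. */
static int64_t ms_to_reference_time(uint32_t latency_ms)
{
    return (int64_t)latency_ms * 10000;
}
```

So a 30 ms event-driven buffer at 48 kHz means an interrupt roughly every 1440 frames, i.e. about 33 events per second.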
Now I've almost finished coding my emulator, and I have implemented simple audio/video playback on it.
For the audio part, I just expose a WAVEFORMATEX struct to the guest system in the form of PCI bus configuration space registers,
and just copy data from the guest system (DMA) to fill the audio buffer without padding/reordering.
As you may expect, it can't play 24-bit audio properly on my computer; I just round the audio data to 16 bits when I have to work with it.
Did you use something other than foobar2000 to ReplayGain-scan the Opus files? The Opus specs forbid the use of old ReplayGain tags, as the format has its own R128 gain tags. Header gain adjustment is also part of the specification. Decoders are supposed to always apply the header gain, and optionally the R128 gain from tags as an additional adjustment. Since not all players support tag-based ReplayGain, foobar2000 allows writing the desired RG info to the header. That feature is supposed to give you ReplayGain with Opus everywhere the format can be decoded.
That said, I get the same ReplayGained loudness with Opus with all header-writing options when I use foobar2000 to do the tagging and playback.
Last post by eric.w -
Yes - click "Reply", then below the text box for composing your post there's "Add files by dragging & dropping"
Rolled back further, to v1.3.14.
Same situation.
Experimentally found that the Preferences > Advanced > Tagging > Opus > Header gain option was set to "Use Track Gain"; switched it to "Leave null". Playback still behaves as if there were no RG info. Good news: the scan now runs with stable and correct results.
UPD: switched back to v1.3.17. ReplayGain seems to work OK with Opus now. Kept the "Leave null" setting for the mentioned option.
A head-spinning issue caused by one small preferences option.
Last post by Zip -
Is there any way to embed a couple of MIDI channels into a WAV file using foobar2000 and your MIDI plugin?
Or to record MIDI info into the audio? Possibly a new format :-)
I do this in Sonar but hate their playlist view for live performance (unstable, hard to read, and few options).
All I want to do is run a few MIDI CCs over two channels in sync with the audio.
Any ideas or suggestions would be greatly appreciated.