
Topic: Have a working 'expander' based on DolbyA (not same design) -- works well.

Commentary about doing an encoder and/or future work

Reply #125
I have gotten questions about the DolbyA decoder, asking why I didn't do a DolbyA encoder as well.   Well -- there are at least two reasons, but the first and most important is that most of the problem is copying music from old archives so that it can be used (processed/mixed/finalized/produced) with current technology.  The second reason is that DolbyA is so much weaker as NR than more recent technologies that I don't suggest using it for encoding.  Perhaps I could do an encoder that produces much less distortion than anything else (just as the decoder has less distortion than anything else), but why encode into DolbyA?   If/when I do a DolbySR decoder, then an encoder using that technique might be more useful.  The big problem with SR is that it is incredibly more complex than DolbyA.

Please note that this DolbyA compatible decoder truly sounds similar to a real DolbyA (sans fuzz, distortion, and lack of clarity), with a very similar frequency response balance.  (Even if someone uses the cat22/360/361 as a design basis in digital form, the result will not likely sound similar, because the filters don't emulate well.  I found that my decoder would sound similar to -- but cleaner than -- another known DolbyA decoder if I used the DolbyA design as a reference.)  I rejected those filters -- and I was worried that I could not come up with something compatible that sounded more accurate, but I was lucky to find a better solution.

Doing a REALLY compatible/similar sounding DolbyA encoder would be a project similar in scope to the decoder project, but with even fewer users and less usefulness.   Frankly, even if I had a reel-to-reel deck, I would NOT bother encoding anything into DolbyA form.  DolbyA MIGHT be more useful than SR for very long term archival purposes (because of the simpler DolbyA decoder design), but still -- I'd try to find something BETTER QUALITY than DolbyA.   Knowing what I know now -- I am somewhat suspicious of the quality of any dynamic gain scheme which cannot mathematically be reversed, and DolbyA (while close to being reversible) is not reversible enough.

My criteria for a long-term analog-compatible NR (and possibly dynamic range extension) system would be a constant compression ratio, multi-band system with mathematically designed, analog&digital compatible characteristics for the filters.  At least, if properly executed and working on a deck with a fairly flat response, it will be totally reversible (with distortion products cancelling even better.)
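As a rough illustration of what I mean by mathematically reversible (a sketch only, NOT my actual design -- the 2:1 ratio and the per-sample mapping are just assumptions for the example): with a constant, dB-linear compression ratio, the expander is an exact algebraic inverse of the compressor.

```python
import math

RATIO = 2.0  # constant 2:1 compression ratio (hypothetical choice)

def compress(sample: float) -> float:
    """dB-linear constant-ratio compression: halve the level in dB."""
    if sample == 0.0:
        return 0.0
    sign = 1.0 if sample > 0 else -1.0
    level_db = 20.0 * math.log10(abs(sample))
    return sign * 10.0 ** ((level_db / RATIO) / 20.0)

def expand(sample: float) -> float:
    """Exact inverse of compress(): restore the original level in dB."""
    if sample == 0.0:
        return 0.0
    sign = 1.0 if sample > 0 else -1.0
    level_db = 20.0 * math.log10(abs(sample))
    return sign * 10.0 ** ((level_db * RATIO) / 20.0)

# -40 dB in -> -20 dB on tape -> -40 dB back out, exactly.
```

A real NR system would apply the gain law to a smoothed, per-band envelope rather than per-sample, but the point stands: a constant dB-linear ratio inverts exactly, which is precisely what DolbyA's level-dependent law does not do.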

So, if someone wants to use a nearly analog-compatible NR system, I would suggest something closer to the HI-COM (AFAIR -- not sure) type design.   The multi-band approach is good, but the DolbyA filters are kind of finicky, and I'd rather see the system designed from a specification rather than from a HW design.   If there is an interest, and it would really be used if it works -- I could do a rough specification, and an implementation of both a HW and SW compatible design that has the best features of DolbyA and DBX.   One good thing about a compatible SW design is that it can be prototyped using software modules that act very similarly to real hardware.  For example, I'd base the design on dB-linear technology (like the THATCORP stuff), and use standard filter design techniques that can be implemented in HW & SW, e.g. well-constrained IIR filters that emulate well in HW.  FIR filters can be more ideal, but are also not easy to emulate in HW.   After doing a rough design and a SW implementation (probably 2X easier than my DolbyA effort), a real HW design could be started.  Before that, I'd do as much of a spice simulation as possible.

The end result of such an effort would be at least 25dB NR, almost no level sensitivity, almost no modulation-type noise, very good transient response, and much lower distortion than almost any other system.  Also, encoding/decoding could be done on computer or in hardware, and the result of the encoding could be designed to be listenable.   So, it would have all of the advantages of DBX, DolbyA and DolbySR, and almost none of the disadvantages of any.

But doing new DolbyA encoding operations is only useful in museums where there are demos of ancient technologies :-).

John

Usage hint for the DolbyA compatible decoder

Reply #126
I have a usage hint -- and I'll let you know how I normally use the decoder.  Except for producing my own listening archives or demos, I use the decoder in realtime most of the time.  When I use it -- I don't use the default quality settings, but usually use the highest quality setting available.  There is also a 'heroic effort' setting for when there might be a lot of high frequency intermod (e.g. lots of kids' voices will do it.)

The normal setting runs a bit more quickly than the higher quality settings, and an upcoming version runs about 20% faster in general (at least that much better -- I just started on optimizations.)  However -- we are talking about listening quality here...

There is a very aggressive intermod removal mode, which doesn't have much of a change in frequency response or anything like that, but helps to keep voices and instruments clearly separate (intermod makes things mush together.)  The magical setting is: "--ai=high", which means set the anti-intermod on high.   There is another super-aggressive setting, which uses the aggressive improvements in the --ai=high mode, but also narrows the frequency bands that lend themselves to distortion production.  It really doesn't decrease the ultimate frequency response, but can make the music have a little less clarity.  That aggressive setting is "--ai=max".

The default mode is "--ai=med", but you should never need to specify it because it is currently the default.  The biggest disadvantage of the --ai=high mode is that it runs about 20% slower than the --ai=med mode.  Once I speed up the code to run perhaps 2-3X faster (yes, I have some ideas), I might move the default to be --ai=high at that point.

In casual listening, I always use "--ai=high", because my computer is fast enough.  The perhaps overly aggressive "--ai=max" doesn't run any slower than "--ai=high", it is just that some of the parameters are slightly different.

John


Answer to possibly encountering 1/2 encoded DolbyA

Reply #128
Thanks for that pointer!!!  I have been hoping to find more concrete evidence that the 1/2 encoder technique was used, and below is my comment on my possible encounters with the use of that technique.   DolbyA just about maxes out the fastest compression that can be done in HW without splatting intermod all over the place -- so I can understand the use for compression only.

I do agree that some examples of my DolbyA demos just might be playback of 'half encoded' DolbyA HF-compressed music (but unlikely in the present cases.)  My earlier decoder releases had a terrible bug that made me think that there was more use of the two HF channels alone than there really was.  I had even advertised that some of my Carpenters examples failed because the specific Carpenters music was enhanced by a partially disabled DolbyA.  Practically every major step of development of my DolbyA decoder has been done under public scrutiny, and versions up until about 3-4 weeks ago had a bug where some kinds of music wouldn't work right -- almost sounding as if the decoder was trying to decode half-encoded music -- but that was wrong.  The bug was in the decoder.  Some of my decoding examples DO show that there might still be some latent HF DolbyA encoding, but in those cases I do believe that there just might have been HF-only use of the DolbyA encoder -- esp. on vocal channels only.

However, my guess that I was trying to decode half-encoded DolbyA was WRONG IN THE SPECIFIC CASES.   I had made a rather frustrating error in the feedback-to-feedforward conversion.  (My expander is a compatible feedforward design -- it works MUCH better than feedback, with much better control of intermodulation.)  So -- after some embarrassment, I had to admit that the first versions of my DolbyA decoder were broken for some kinds of long-duration changes in music patterns.  Frankly, it was just broken, and I had to rework the decay times, just as the attack times had to be done differently in the conversion between feedback and feedforward.  A simple use of 1msec/30msec and 2msec/60msec attack/decay times for the HF and LF/MF channels would have been terribly broken for a feedforward design -- even though tests ALMOST made it sound as if it would work.

When doing ad-hoc comparisons of decoding results, a simple use of the straightforward 1/30 and 2/60 attack/decay pairs in feedforward ALMOST sounds correct -- and is correct enough in perhaps half of the decoding attempts to sound okay.  But that is simply wrong.  I have an almost 100% capable design where even the most evil and syncopated timing will cause the decoder to respond correctly.  A few weeks ago, I got a rather frustrating complaint that the decoder didn't work correctly on 'Band On The Run'.   After finally believing that complaint to be true (getting past my ego), I re-visited the basics of my design, realized that the bug was certainly my own, and corrected the decay code so that EVERYTHING so far that seems to be completely DolbyA encoded is properly decoded.
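For reference, this is what the naive 1 msec / 30 msec feedforward pairing looks like as a toy envelope follower (a sketch of the obvious-but-wrong approach described above, NOT my corrected shaping -- the simple one-pole structure is an assumption for the example):

```python
import math

def envelope(signal, sample_rate, attack_ms=1.0, decay_ms=30.0):
    """Feedforward peak envelope: one-pole smoothing with separate
    attack and decay coefficients (the naive 1 ms / 30 ms pairing)."""
    a_coef = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    d_coef = math.exp(-1.0 / (sample_rate * decay_ms / 1000.0))
    env = 0.0
    out = []
    for x in signal:
        mag = abs(x)
        # rising input uses the fast attack pole, falling uses the slow decay pole
        coef = a_coef if mag > env else d_coef
        env = coef * env + (1.0 - coef) * mag
        out.append(env)
    return out
```

This ALMOST works for decoding, which is exactly the trap: in feedforward, simply copying the feedback unit's time constants does not reproduce the feedback loop's effective attack/decay behavior, so long-duration pattern changes drift off the correct gain.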

I am not going to disclose the technique -- because I have gotten some feedback that this decoder (my design is decoder-only) is much more compatible than ANY OTHER software technique available, and also has much less distortion in difficult cases.  I have a proprietary copy of music, provably DolbyA encoded, where a true DolbyA decode produces a chorus of children's voices that sounds like a blob of voices, the Satin sounds like it has a messed-up HF balance, and mine sounds nearly like a real DolbyA except the children's voices are clearly distinct -- but closer to the real DolbyA HF/MF balance.  This is partially due to the vastly superior intermod handling, where the amount of intermod is much closer to that which is theoretically necessary (especially when in --ai=high mode.)  Also, a direct conversion between the commonly available DolbyA schematics and SW will definitely result in sound that is more similar to the Satin.  Use of that technique will result in something that kind of works, but won't sound like a real DolbyA.

I am still thinking about possible improvements, and there are even some stealth capabilities in the decoder which I am not disclosing -- some errors in the DolbyA encoding process are compensated for.  I think that I have found the reason for that quality-loss syndrome, and some capability of the correction can be disabled by a switch.  There is almost NO reason to disable this feature, as I have heard no music which is damaged by the default setting.  There is a difference in audible character, but the disabled version only discloses more distortion from the ENCODING process.  The default setting uses a rather tricky technique in the decoder to hide that distortion.  I think that I even know the theoretical basis for the distortion.

Truly, if you want to hear music that is closer to the pre-DolbyA encoded version than has been previously possible (including that which was decoded by a real DolbyA or Satin SW) -- my decoder will enable that ability.  Actually, '--ai=max' MIGHT be more accurate yet, but I believe that '--ai=high' is the best/safest bet.

I admit that my decoder is a decoder-only, and it will be staying that way.  I don't think that DolbyA encoding is of high enough quality in this perfect digital world to be of any long-term future use.   There are too many flaws in the assumptions made, but it was a genius design for the middle 1960s.   Ray Dolby was a REAL genius.

I do have a better (easier to replicate in HW & SW) design concept that is closer to a Hi-Com (AFAIR) technique -- almost a mix between the DBX & DolbyA techniques, but with almost none of the disadvantages.   The amazing thing is that in the deep, deep future, it would even be easier to replicate in HW-only form if needed.  (There are some aspects of semiconductor physics which are more consistent than the ones the newer DolbyA design depends upon -- hence so many tweaks.)  For long-term (Library of Congress type) applications, I think that I have an idea for a better compander system which is very stable and dependent mostly just on physics.  It only requires matching -- which is easy to do on chips -- but tweaking FETs is NOT so easy; FETs do not replicate very well on chips.  The original DolbyA design, which does have some advantages over the 360, also has some disadvantages, but it comes closer to what is needed for long-term archive recovery.

But, again, thanks for that pointer!!!

John

Possible (and likely crazy) attempt to remove the extra add-on DolbyA sheen

Reply #129
Apparently, in the olden days, some record companies used a partially disabled DolbyA unit to add an additional high-frequency sheen (or exciter-type sound) to their recordings.  (Apparently, CBS records had that habit.)   My decoder can remove that sheen, but only if the entire recording was processed to create it, and it was not applied to just the voices.  Any settings using the decoder for that purpose would be very experimental, but I can give you some hints.

Firstly, you would use the 'sheen removal' only after the full-on DolbyA decode (either from my compatible decoder or using an unmodified true DolbyA unit.)   Then, the way to invoke my decoder would be something like this:

sox infiles.type --type=wav --encoding=floating-point --bits=32 - rate -v 96k | da-avx --lfoff --mfoff --ai=med --thresh=-15.75 | sox - outfiles.type <also add needed EQ because of sheen removal>

Note that the full details of using sox need to be figured out from the sox documentation; I am just showing the general gist of doing the 'sheen removal'.  Note the specific switches for the decoder program...  --lfoff and --mfoff turn off the dynamic gain on the 20-80Hz and 80-3kHz channels.  By turning off the MF channel, some special decoding features are also disabled -- so the decoder acts more focused on sheen removal.  Also, the threshold (--thresh=-15.75) will likely have to be very different from the normal -15.00 to -16.00 values.

The additional EQ will probably look like this:  "treble 3 3k 0.707q treble 3 9k 0.707q", but this suggestion just might be a little too strong or too weak -- not sure.

Another note -- you can see that I convert the input to 32-bit floating point; that is the optimum format for the decoder.  Also, the decoder has maximum performance between 88.2kHz and 124kHz.  The performance falls off a little at 192kHz, and I suggest that after the rework there is no real benefit to running at 192kHz unless there is a specific need to support 192kHz on the output.

Nowadays, processing at 44.1kHz is pretty good, but it is almost always beneficial to bump the rate up to 48k before the decoder -- this is because the internal DSP operations work at exactly the sample rate, and there is little room for distortion removal and filtering when running at 44.1kHz.  44.1kHz is more of a distribution rate than an ideal production rate anyway.   The biggest benefit of running at 48k instead of 96k is that the decoder runs much faster, but the quality should be just barely noticeably better in the 88.2kHz through 124kHz range.

There will be an upcoming new release (probably end of day today -- 14May2018, or early tomorrow); it runs about 10-20% faster and has a few very minor quality improvements.

John

Slightly faster (10-20%), slightly better decoder.

Reply #130
The new version is available here and on the repository site below.  I haven't updated all of the demos yet, but there are no major changes in sound quality.  This new version does some calculations in single precision float rather than double precision.  (The reason for previously using double precision was an arcane behavior of the compiler that I am using.)  I reworked the calculation, which both allowed using single precision -- which is more than enough -- and also allowed twice as many operations to be done, because the attack/decay can be calculated for both (L+R) channels at once rather than one at a time.

The slight improvement in quality results from the quicker calculation: better time precision can be provided by doing the longer-time decay calculations more time-accurately.  Also, the longer-term decay can be calculated further into diminishing returns, providing a better decay match in possible situations that I have not yet encountered.   Also, --ai=high vs. --ai=max are more different than before.   --ai=max removes more of the encoder 'hash' at the slight expense of certain dynamics at HF.  --ai=high continues to be the best, safest aggressive quality, while --ai=med allows slightly faster decoding, and has a very slightly simpler decoding calculation (marginally closer to normal decoder behavior.)

Using both --ai=none and --raw=8192 gives a barebones decode which will seem brighter (the extra brightness is, believe it or not, distortion), but is essentially the result of doing a traditional decoding in software.  I only use those modes for development comparisons, but thought that it might be interesting to show what all of the fast gain control can sound like; it shows one reason why SW based compressor/expanders sometimes sound different than a HW equivalent.   Naturally, a bare-bones software design will produce much more distortion than HW because of the effects of sampling.  This is one case where higher-than-44.1k sample rates really DO make a difference, but it is purely because of the nonlinearity (producing beyond-20kHz signal components) and sampling.  The decoder is designed to fully mitigate those uglies.

More detail:  because of how the compiler generated code for the newer and older CPUs, I did the math for attack/decay with a math vector of size 4 and a data type of double.  This perfectly fit the capability of the CPU and the number of audio bands on each of the L and R channels.  This also matched well when compiling for less capable CPUs, which supported a vector size of 4, but only in single precision.  So, the code was designed to use the different data type for the older CPUs -- but with absolutely no difference in quality.
I decided to do both the L+R calculations at the same time because newer CPUs can do 8 single precision operations at once, so now both L and R attack/decay are calculated at the same time.   The effect on the older CPUs should be either a slight improvement or break-even, because the locality of reference is a bit better.
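The 8-lane data layout can be pictured like this (an illustration of the idea using numpy -- the coefficients and the simple one-pole update are placeholders, not my actual attack/decay shaping):

```python
import numpy as np

# 8 single-precision lanes: bands [LF, MF, HF1, HF2] for L, then for R --
# the same shape as one 8-wide float SIMD vector on newer CPUs.
ATTACK = np.float32(0.9)    # placeholder smoothing coefficients
DECAY = np.float32(0.999)

def step(env, target):
    """One attack/decay update for all 8 band/channel lanes at once."""
    coef = np.where(target > env, ATTACK, DECAY)
    return coef * env + (np.float32(1.0) - coef) * target

env = np.zeros(8, dtype=np.float32)
target = np.array([1, .5, .25, .1, 1, .5, .25, .1], dtype=np.float32)
env = step(env, target)   # L and R lanes move together, one vector op
```

On a 4-wide double machine the same update would be two vector operations (L then R); packing both channels into one 8-wide single-precision vector halves that, which is where the speedup comes from.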

Anyway -- probably more details than you really want to know.

repo: https://spaces.hightail.com/space/tjUm4ywtDR

John

Found an attack time error -- fixed

Reply #131
Someone on another forum turned me on to 'Howard Jones'.  He was curious whether the specific recording was DolbyA encoded.  Well, I found that it was, but there was something wrong with the decoding.  After listening to a few more pieces, I found the problem, and it was related to the anti-intermod parts of the code (but not the anti-intermod code per se.)  So, I pulled back and reverted the code to an earlier version, and the attack time problem (however small) is now gone.   The newer code was just a little too aggressive in trying to avoid unnecessary intermod; the code now enabled is actually better all around.

There are so many variables in the code -- especially since this decoder goes FAR BEYOND a basic decoder in trying to extract every bit of quality out of the music.  Anyway -- here is a new one -- and this release IS justified.

New version -- not worth downloading if you got the previous version

Reply #132
This version has some clean-ups -- per some commentary from another forum's leader.  Made some documentation a little more accurate/clear.  Also, the code rejects mono files now (before, it just went nuts.)  Two new switches were added (for Windows convenience): you can specify --input=filename or --output=filename instead of using '<' and/or '>'.   This is meant for Windows sensibilities, not really any underlying change.

For the first time we ran a harmonic distortion analysis -- and it is unmeasurable on the normal settings of the Audacity spectrum analyzer.  A real DolbyA does show some distortion. 

The major new thing learned is that the program drops about 3k samples during the conversion.   This is true for every version, and hasn't been fixed yet.  Just letting you know, and it will be fixed soon!!!

Also, MAKE SURE YOU USE THE --outgain=-3 switch on EVERY version.  Because of precise emulation of a real DolbyA in certain regards, it will clip if you supply 0dB input, and the --outgain=-3 keeps that from happening.  Actually, it has 0.33dB excess gain, so you can fix it in the way that you want.

Again, don't bother downloading if you have the 16may version (unless you like to always keep up with the latest-latest :-).)

John

Really 'cool' demo song.

Reply #133
A minor off-topic note, not actually technical -- but related to the current subject...

It sometimes isn't easy to find a really good demo of how the decoder maintains the stereo (and unflattens the depth) and actually sounds REALLY natural.  This demo has no 'love' put on it -- that is, no EQ.  This is a raw decode from my original DolbyA encoded copy.  REALLY NEAT!!!  It doesn't have much artificiality, so you can really hear if something bogus is happening.   The only thing that I did was normalize it to -0.25dB.   I don't have space for the original, but I will provide it if really asked for.

The file:  Nat-LOVE-DAdecode.mp3

Repository:  https://spaces.hightail.com/space/tjUm4ywtDR

John

Decoder update -- moderately helpful

Reply #134
This update gives one audio quality improvement, one audio quality fix, and a usage feature.

The audio quality improvement is for --ai=max mode only, where the intermod is better controlled yet (smoother sound without loss of detail.)  There is an additional slowdown, but the extra calculation is worth it, if needed or desired.

The audio quality fix is for a calculation error in the 3k-9k and 9k-20k ranges.  You might notice slightly crisper, more detailed HF sound without an increase in intermod.  In fact, the intermod might be slightly decreased, with better clarity.  I made a math error in one of the 'careful' attack time calculations (the attack/decay time calculations run a total of about seven hundred lines in two separate sections -- not simple -- VERY careful shaping.)  The error was the truncation of a variable name and the mistaken use of a zero-initialized variable instead of a specially filtered signal level.  Because it was used in a 'max' calculation and was part of shaping the attack time, it wasn't fatal, just not good.  It was an obvious transcription error from a previous version of the calculation.

The usage feature is the ability to display the gains for the individual channels instead of the average gains.  So, there is one extra line per 1-second display if you specify '--info=10'.  It isn't fully documented yet, but this is a heads-up that the feature exists and is intended to be permanent.
(I don't intend on working on this today -- so this will be the last version released today -- unless I get a complaint about a major failing.)

Helpful usage note...

Reply #135
I have been chasing down some of the last of the edginess-like distortion, and I found some filter issues which I am addressing.  However, a quick RIGHT NOW workaround for those who have copies of the decoder in use is fairly easy.

Through studying the filters and their skirts, I found that the ideal sample rate for quality is between 56k and 64k samples per second.  Those rates have the advantage of being high enough above the minimum necessary sample rate to help avoid any noticeable aliasing given practical filters (if something leaks beyond 20kHz, there is enough room to filter it out.)  Additionally, as the sample rate goes up, fixed-size filters become less effective at lower frequencies.  So -- running at 96k does work, but produces lower quality than running at, say, 64k.

The filters in the processor currently only partially support a 'wide' mode where the 21.5kHz limitation is not enforced, so running at a very high sample rate has no benefit at all given the structure of the decoder (the filters being used, etc.)  Given that out-of-band filtering is really necessary for best audio quality (internal out-of-band filtering -- there is lots of nonlinear stuff going on), there is actually a real disadvantage to allowing more than a flat 21.5kHz bandwidth.

From now on -- to infinity -- I am going to be running/testing predominantly at 56k-64k (still haven't decided), but that does NOT limit the sample rates being used for the data source and data sink (I ALWAYS use sox for the conversions.)  The decoder is planned to always work at 44.1k through 192k; it just might not be as good as when running at, for example, 64k.  Actually, there is a mathematical reason for choosing around 64k as the ideal rate -- it happens to be approximately ideal given a lot of arcane DSP factors.
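A quick back-of-envelope view of why the mid rates help (my own arithmetic illustration -- the 21.5kHz passband figure comes from the description above): the room between the passband edge and Nyquist is all the space the internal out-of-band filters have to work with.

```python
PASSBAND_HZ = 21_500.0  # flat bandwidth the decoder enforces (per the text above)

def transition_room_hz(sample_rate_hz: float) -> float:
    """Room between the 21.5 kHz passband edge and Nyquist, in Hz."""
    return sample_rate_hz / 2.0 - PASSBAND_HZ

for sr in (44_100, 48_000, 64_000, 96_000):
    print(f"{sr:6d} Hz: {transition_room_hz(sr):7.0f} Hz of filter transition room")
```

At 44.1k there is only 550Hz of room above 21.5kHz; at 64k there is 10.5kHz -- plenty for practical filters -- while going to 96k and beyond mostly just makes fixed-length filters shorter relative to the audio band.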

John

New version -- just quality improvements

Reply #136
New version of the DolbyA compatible decoder.  The best yet.

da-win-23may2018B.zip
location: https://spaces.hightail.com/space/tjUm4ywtDR

You can check the location above -- sometimes I forget to post about updates.  More and more people are realizing that they have DolbyA encoded material and didn't know it!!
John

Here is an example justification of the usefulness of the DolbyA compatible decoder

Reply #137
I put up a demo of two (maybe eventually 3) songs that I must take down very soon.  The demo is a result of the decoder processing and a few other steps.  I plan to document everything some time soon.

If you listen and are a fan of the group, it WILL be a religious experience.  If you are not a fan, then you JUST MIGHT become one.
This sounds different than I have EVER heard from the group, and the material is a commonly available street copy that was processed by the software that I have been talking about.

Just listen -- it is worth 10 minutes:
https://spaces.hightail.com/space/UntM4LCdcm

Give me feedback -- I'd really like to see the face of each person who listens -- and yes -- it is very safe :-).

Major freq response fix (it lost too much HF before -- antialias too aggressive)

Reply #138
This is probably the last release for a while unless there will need to be a major bugfix.  Also, it is probably okay now to use --ai=max all of the time.  The anti-alias fix also improved the freq response for --ai=max.   There was a comedy of  freq response errors resulting from some FIR filters being too short -- silly mistake.

No change to basic design, just re-jiggering some filters.

The basic design is now solid, but if I find a new anti-alias or quality improvement I will certainly pursue it.

Refer to previous posting (probably have 24 more hours) for incredibly good examples about the quality of this decoder.  Esp this version.

This is MANDATORY if you want/need the best quality.

The unexciter is also now included -- it is very slow, but it is part of what made the ABBA voices sound so human instead of that horrid Aphex Exciter sound.   There are no real docs, but it can be used once or multiple times.  The switch is --dr=0.156, and that is the best default value.  If you need the very best quality and almost total removal of that cringe-inducing sound -- use the unexciter sequentially with --dr=0.120, --dr=0.078 and then --dr=0.0135.  (Values determined experimentally.)  Just pipe multiple instances of the unexciter together.  The sound (esp. voices) is incredibly improved.  The unexciter name prefix is 'unex'; just choose the best version for your computer.

Is there any interest in a DBX I decoder (decode only)?

Reply #139
I just had to write a DBX I decoder (some material was DolbyA encoded, then DBX I encoded after that -- fun time!!!)  By the way, the material is the ABBA studio collection TCHS or something like that.  Anyway -- the DBX I decoder works pretty well and has almost all aliasing suppressed, so it should sound as good as a real DBX I.  To me, it sounds perfect, and after the intermod fixes, everything is smooth sounding.

If there is interest, I'll put something together -- maybe even add it to the DolbyA decoder.  If there is little/no interest, then so be it!!!  I have placed it in my 'uncompressor' package, but that is a behemoth -- so putting it into the DolbyA decoder would be very simple.  But it is something that I don't want to do unless someone might want it.

John

Re: Have a working 'expander' based on DolbyA (not same design) -- works well.

Reply #140
Have you posted about this decoder over at for example Gearslutz and TapeOp where a lot of studio engineers hang out?

Re: Have a working 'expander' based on DolbyA (not same design) -- works well.

Reply #141
Have you posted about this decoder over at for example Gearslutz and TapeOp where a lot of studio engineers hang out?
Oddly enough I just started trying to set up an account with Gearslutz -- just this last second when I noticed the notification about your message.  I have been on the Hoffman forums a lot also.

Thanks for the heads up -- I'll do it.

Mostly, my interest is in helping the listening people who just happen to be technical enough to use the software.  My recording engineer friend is focusing on the pro side.  I don't know if he knows about the DBX I decoder yet.

John

Have the DBX Type I decoder working almost perfectly.

Reply #142
This is the first time that I have a 'perfect' copy of ABBA (really.)   This is the complete studio edition that happens to be both DolbyA and DBX I encoded.  After some analysis -- it is actually a good idea to use both.  This is really a demo of BOTH decoders...  The results before were amazing; this is even more amazing, because I know the internals of the DBX I compatible decoder are much more solid.

Listen -- if you hear any flaws attributable to the decoders -- let me know.

The best way to describe is silky smooth.  I'll be distributing the DBX I decoder sometime soon -- might merge with the DolbyA compatible decoder!!!

Attached is typical of the examples on the repo.  This is NOT necessarily the best example, but it is a reasonably good one.   This is a case where the mp3 is noticeably different from the original -- the original is THAT GOOD.  I made the attachment at a very high mp3 quality level.

REMEMBER THIS IS ABBA -- and 90% of the songs sound this good!!!  I'll be disclosing the method and source materials in as much detail as possible (I bought mine years ago, but I'll try to point you in the right direction for the music source material):

PLEASE ENJOY -- repository: https://spaces.hightail.com/space/UntM4LCdcm

My decoding results are 'ALL that', but I was partially in error about a comment

Reply #143
My representation about DolbyA was/is 100% correct -- it is nearly perfect, if not better than a normal unit, for decoding purposes.  However, the DBX I side is not up to snuff.  The decoding results are great, but not due to the material being DBX I compliant.

They were doing some kind of DBX style compression on the material.  In general, from the ABBA studio copy (very compressed) I am getting in the range of 12-14.5dB peak-to-RMS; with the expanding technology (which will be released soon), I am getting between 14-18dB peak-to-RMS; and with the other ABBA discography release I am getting approximately the same peak-to-RMS as my decoding.

The resulting peak-to-RMS that I am getting is similar to, but not quite as good as, the real analog ABBA disks, but the general sound quality appears to be much better than anything else -- that is, a good peak-to-RMS balance, good sounding natural dynamics and none of the extremely strident sound that I have gotten from my ABBA studio copy without it being cleaned up.
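For anyone wanting to check these numbers on their own material, peak-to-RMS (crest factor) can be measured very simply.  This is a minimal Python sketch of the standard measurement, not code from the decoder, assuming floating-point samples:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio ('crest factor') of a block of samples, in dB."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(peak / rms)

# A pure sine wave has a crest factor of sqrt(2), i.e. about 3.01 dB;
# heavily compressed pop masters measure far lower than natural material.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(crest_factor_db(sine), 2))  # about 3.01
```

In practice you would measure it over windows of a second or so rather than a whole track, since the long-term peak dominates otherwise.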

I hope to have demos available soon, and software should be available quite soon.  The order of complexity is less for the DBX expander than for the DolbyA, and I'll provide some cookbook usage information.  It actually takes me a little time to learn how to use the software too -- just because I write the software doesn't mean that I know or fully understand how to use it (kind of like a musical instrument builder -- except mine MUST NOT add too much of its own character.)

My goal is to revert the sound back to the original!!!

John

Since the DolbyA is nearly perfect -- working on another expander

Reply #144
The DolbyA project is pretty much finished.  The final version has not been released publicly, but individuals do have the final pre-release working copy.  Still doing the final cleanup, but the project, with respect to innovation, is complete.

I went on this DBX Type I distraction -- but a medical problem distracting me helped cause a mistake -- the expander was really only a 1:1.4 instead of 1:2.  This caused some confusion, but something new was learned (by me) -- the RMS detector structure implied by the THATcorp/DBX design is fantastic -- better than any of my other schemes so far.   So -- instead of producing just an analog to a supercharged DBX style 1:N expander, I decided to consider bolting the DBX detector onto my multiband-GP expander (the uncompressor or the restoration processor.)  The GP expander has attributes which make it NOT create typical expander artifacts, and most of those advantages do not result from my older detector design.  So, adding the DBX style detector onto the GP expander infrastructure is potentially useful.
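To make the 1:1.4 vs. 1:2 mix-up concrete: in a downward expander, each dB the input falls below threshold becomes `ratio` dB at the output.  A toy static gain-law sketch (a simplified illustrative curve, not the DBX or THATcorp circuit):

```python
def expanded_level_db(level_db, ratio, threshold_db=0.0):
    """Static curve of a downward expander: levels below threshold_db are
    pushed down so each input dB maps to `ratio` output dB."""
    if level_db >= threshold_db:
        return level_db
    return threshold_db + ratio * (level_db - threshold_db)

# A signal 10 dB under threshold:
print(expanded_level_db(-10.0, 2.0))  # a true 1:2 expander takes it to -20 dB
print(expanded_level_db(-10.0, 1.4))  # the accidental 1:1.4 only reaches about -14 dB
```

So a 1:1.4 ratio undoes only part of a 2:1 compression, which would explain the confusing intermediate results described above.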

The results of bolting the two kinds of expander together have been breathtaking -- a very good increase in crest factor, but still a sane peak-to-RMS level.  So -- the short term dynamics are very good without causing excessive dynamic range.  One bugaboo with making an expander that helps the dynamics is that the longer term dynamics might become excessive.  This new scheme seems to give the best of both worlds.

I am considering reorganizing the GP expander so that it is not part of the compressor/expander complex, but will be a separate program.  This will make maintenance much easier.

The biggest change in the RMS detector scheme has been a move away from the fixed rate nature associated with typical DBX schemes -- they tend not to be very adaptive, where most of the 'adaptive' nature might be a dynamic/nonlinear capacitor scheme -- not really doing very much in comparison to the very adaptive infrastructure of the GP expander.  So -- I have adjusted the RMS detector design to be highly dynamic, so that the RMS averaging interval is based upon the material.  Both the attack characteristics and the driver for the decay characteristics are dynamic, based upon the THATcorp style measurement scheme.  The older method was a dynamic extension of the more obvious long term average of the signal squared.   The THATcorp scheme is similar to that, but is based upon a different measurement domain of the signal -- and seems to be significantly better.
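The core idea of an RMS averager whose interval adapts to the material can be sketched in a few lines.  This is only a toy illustration of the concept (the coefficient values and the simple two-rate rule are invented for the example, and are not the actual detector):

```python
import math

def adaptive_rms_envelope(samples, fast=0.01, slow=0.0005):
    """One-pole mean-square averager with a program-dependent time constant:
    average quickly while the level is rising (attack), slowly while it
    falls (decay).  'fast' and 'slow' are invented per-sample coefficients."""
    mean_sq = 0.0
    env = []
    for x in samples:
        sq = x * x
        coeff = fast if sq > mean_sq else slow  # adapt the averaging interval
        mean_sq += coeff * (sq - mean_sq)
        env.append(math.sqrt(mean_sq))
    return env

# A 1000-sample full-scale burst followed by silence: the envelope tracks
# the burst quickly but releases much more slowly.
env = adaptive_rms_envelope([1.0] * 1000 + [0.0] * 1000)
```

A real detector would drive the coefficients continuously from the measurement itself rather than switching between two fixed values, but the structure is the same.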

As I wrote before, the crest factor on the output seems to be better than the DBX expander scheme alone, with a much smaller expansion ratio -- thereby giving the natural sound with much less likelihood of expansion artifacts (which are already suppressed by other methods.)

I do intend on releasing the first production-worthy version of the DolbyA compatible decoder in the next few days.  Right now, a big part of my expander test suite is the day-to-day utilization of the DA decoder.

John

New DolbyA compatible decoder release (much improved)

Reply #145
This new release has a significant improvement -- the front end has been redesigned for very compliant behavior.  Some of the feedback architecture has been implemented so that the input filtering can now dynamically cancel some of the 'misbehaviors' of the original HW design.  This has allowed the removal of the anti-hash code, whose removal provides a nicer spectral output -- less dynamic filtering of the HF region.  (Actually, there was no explicit dynamic filtering, but the effect was that of dynamic filtering.)  Now, the spectrum remains stable and complete (in the way that decoding requires) at all times.  Some of the feedback behavior is impossible to replicate (really -- it would drive a developer crazy to do properly), but the parts that can significantly impact the sound quality are fully implemented.

You'll notice a more intense natural sound output and a more complete low end also (the low end was slightly attenuated by the previously needed filter structure.)  This new version mimics more (probably close to all) of the behaviors associated with the feedback design.  The general structure has not changed, but enough feedback has been implemented to emulate some of the 2nd order behaviors of the original design (previous workarounds had been very frustrating to me.)

The caveat is that I do NOT suggest running the decoder outside the range of 48k to 124k samples per second, with the ideal still approximately 64k samples/second (perhaps slightly higher now.)   Longer filters do kick in to better support 176-192k samples per second, but the decoder does become quite slow at those high sample rates.

The general audio quality is vastly improved.   It was already pretty amazing for a computer implementation.

John

Warning -- decoder only works at 64k or less

Reply #146
Somehow I messed up the filters -- they work great at lower rates.  They should have been perfect at 96k, but are not right now.  A fix is forthcoming.  The attached version is better at higher rates -- I screwed up on a last modification.  It will be fixed properly soon. (Darned timing/delay issues -- not trivial sometimes.)   I have to get all of the filters synchronized, and I violated one of my protocols.

The best is still at 64k, but faster rates should at least work without losing all of the highs.  The gain numbers are correct -- it is just that the rebuilt signal is all messed up because of my timing foobar.

John

Complex timing issues got me, and I was a bit sloppy.

Reply #147
Made a last minute change, and I didn't think about how sensitive the timing is.  It is kind of complicated, but the standard DolbyA style filters must have EXACTLY the characteristics of analog filters, yet are digital emulations.  Digital emulations typically add several samples of delay, and I didn't compensate for them correctly (I broke the algorithm.)  So, now the filters used to produce the bands are the standard analog to IIR conversion, but with a sample skew built in, and a necessary all-pass analog of the 80Hz filter was added to keep the phases all matched up -- so with both the sample skew and the all-pass filters, the timing of the filters matches just as well as a normal analog filter (for rates between 44.1k and 192k.)  I didn't bother compensating beyond that.
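The all-pass trick works because an all-pass filter changes only phase, never magnitude, so it can re-align a band's timing without disturbing its level.  A minimal first-order example (illustrative only, not the decoder's actual filter):

```python
import cmath

def allpass1(c, samples):
    """First-order all-pass, H(z) = (c + z^-1) / (1 + c*z^-1):
    flat magnitude response, frequency-dependent phase shift."""
    x1 = y1 = 0.0
    out = []
    for x in samples:
        y = c * x + x1 - c * y1
        out.append(y)
        x1, y1 = x, y
    return out

def allpass1_gain(c, w):
    """Magnitude of H(e^jw) -- unity at every frequency for |c| < 1."""
    z = cmath.exp(1j * w)
    return abs((c + 1 / z) / (1 + c / z))
```

Because the gain is exactly 1.0 everywhere, cascading such a section onto one band only adjusts its phase/timing, which is what allows the 80Hz band to cancel cleanly against the other bands.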

Before, I didn't notice a lack of proper removal of the 80Hz band from the other bands -- it wasn't 100% cancelled, but now it is!!!  Much less apparent distortion.  The older front end filters did have full cancellation, but also didn't track a real DolbyA as perfectly.

I truly suspect that I might have the correct structure to implement a feedback DolbyA compatible decoder.  I am only interested in that because I am going to try to do an SR decoder soon.  These last strange changes to the DolbyA compatible decoder were only meant to eventually improve the decoder.  My first attempts were in error, and I apologize for any wasted time!!!

John

Finally -- got something that sounds very similar to a DolbyA 360.

Reply #148
Before -- the previous versions could decode pretty well, but needed some EQ from time to time.  That was the real freq balance problem that I had to work on -- and I have now been successful in fixing the 'sound'.  There were a few mistakes in my filter conversions, more tuning was needed due to the sampled IIR filters vs. real analog filters, and a lot of improvements have been made -- including true intermod distortion cancellation along with simplified/better distortion removing/avoiding filters.  Also, --wide mode does support out to 80kHz at a 192k sample rate for people who'd prefer wider bandwidth rather than maximum removal of distortion products.
It sounds consistently close to the same between 48k and 96k, and pretty good at 192k.  44.1k is a little metallic sounding because the filters cannot remove the distortion products at HF before aliasing happens.  If you are willing to use an odd sample rate, between 64k and 72k is theoretically the best (wide enough available bandwidth to capture all of the relevant distortion products and remove them -- and the decoding speed is faster at lower sample rates.)  96k is close to ideal, but at that speed the distortion filter skirts widen a bit.  They widen enough that above 97k, a different set of distortion filters is enabled so that the best quality is kept (at the expense of a bigger slowdown) at 176.4k and 192k.
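The 44.1k limitation follows directly from folding: any distortion product that lands above fs/2 reflects back into the audible band.  A small sketch of the standard aliasing arithmetic (sampling math, not decoder code):

```python
def folded_freq(f_hz, fs_hz):
    """Frequency where a component at f_hz appears after sampling at fs_hz
    (folding about 0 and fs/2)."""
    f = f_hz % fs_hz
    return fs_hz - f if f > fs_hz / 2 else f

# An intermod product at 30 kHz cannot be filtered off at a 44.1k rate --
# it folds down to 14.1 kHz, squarely in the audible band.
print(folded_freq(30000, 44100))  # 14100
```

At 64-96k rates, those same products sit comfortably below fs/2, so the distortion filters can remove them before they fold back -- which is why the odd intermediate rates decode most cleanly.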

I also found that the distortion products are more pronounced at low frequencies than outside of each band on the high side -- so I changed the filters to only remove the low side of the distortion products, and the distortion and noise cancellation actually works better when the filters are wide open up to 21.5kHz.

The sound seems 90% similar to a 360/cat22, and the threshold setting is pretty easy (the threshold value is approximately 1dB less than the Dolby tone.)
The best way to describe the sound is 'sweet', while earlier versions were a little 'harder'.   Amazingly, the spectrum of the output is incredibly similar to a DolbyA given the same input.  I was really surprised at how similar the spectrum measurements were on test material.  The spectrograms are also similar, except the distortion products are more pronounced on the real DolbyA (constant tones produce a more intense and larger number of distortion spectral components.)
I am also going to post this on my repo site, but won't do so until after tomorrow, so this is the only place to get the latest version of the decoder for now.

Here is the difference in spectrum of my DA decoder vs a real DolbyA

Reply #149
Here I am providing the spectrum output of my DolbyA compatible decoder vs. a real DolbyA 360.

Mine is the top, and the real DolbyA is the bottom.  Refer to the attached PNG screenshot for the pictures being discussed.

When comparing the spectrums, please note that the signal normalization on the real DolbyA had some troubles because of the errant peaks in its output.  If you look at the spectrum of the peaks, those glitches (probably intermod in the real DolbyA) are at the same 741Hz as the massive peak on the spectrum of the real DolbyA.  The compatible decoder does have some peaks, but not nearly as often nor as strong as the real DolbyA.  So, the measured energy of the real DolbyA 741Hz peaks is measurably greater -- there might even be peaks at different freqs that we cannot see, adding even more to the 741Hz energy level and the energy level of some of the other slightly larger peaks.
Arguing against the idea that the compatible DolbyA is just slower at dealing with peaks -- some of the energy levels at HF are the same or just slightly higher in one or two other places at HF -- again this is splitting hairs, but I am trying to be as critical as possible.

So, if you ignore the amplitude of the LF peak, note also that my decoder only filters the LF starting at 10Hz (0.00022 IIR feedback at 44100Hz -- all attack/decay stuff is normalized to 44.1kHz in my code.)  It might be more compatible for the gain filters or the input filters to remove more LF -- but I need to study the matter.  So there is the likelihood that there is more LF material in the signal than the real DolbyA can process.  Other material with less LF doesn't show the LF peak.  (This was pipe organ stuff.)  I was asked by my collaborator to provide better LF performance than the real DolbyA, and that is why I am just using several LF 1-pole rolloffs at 4Hz and perhaps one pole at 10Hz also.

So, ignoring the 44Hz LF peak and the slightly larger 741Hz glitch peak (and a few other extreme peak differences), the real DolbyA and the compatible decoder are very, very close.  You might also think that there is HF rolloff in the compatible decoder -- but all listening measurements have shown that not to be true.  The actual listening sounds like the compatible has slightly more HF, which might be because of numerous factors (might be because the attack/decay are slightly faster, and there is less intermod, in the compatible.)   However, I will be researching this.  There is in NO WAY a lack of HF in the compatible decoder.