Topic: Synthesized audio and coherent phase issues (Read 2272 times)

Synthesized audio and coherent phase issues

If two signals at the same level have uncorrelated phase, as most sound sources in the real world do, adding them results in a 3dB increase. But adding two identical signals with identical phase makes the level 6dB louder.
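To make the two summation rules concrete, here is a small stdlib-only check: it sums one full cycle of a sine with an identical copy, and then averages the power gain over evenly spaced phase offsets (the expected value for a uniformly random phase).

```python
import math

# One full cycle of a sine, sampled at N points.
N = 1000
t = [2 * math.pi * k / N for k in range(N)]
a = [math.sin(x) for x in t]

def power(sig):
    return sum(x * x for x in sig) / len(sig)

# Identical phase: amplitudes add, so the gain is 20*log10(2) ~= 6.02 dB.
coherent = [2 * x for x in a]
gain_coherent = 10 * math.log10(power(coherent) / power(a))

# Uncorrelated phase: average the power gain over evenly spaced phase
# offsets, which is the expected gain for a uniformly random phase.
M = 360
ratios = []
for j in range(M):
    phi = 2 * math.pi * j / M
    s = [x + math.sin(v + phi) for x, v in zip(a, t)]
    ratios.append(power(s) / power(a))
gain_uncorrelated = 10 * math.log10(sum(ratios) / M)

print(round(gain_coherent, 2), round(gain_uncorrelated, 2))  # 6.02 3.01
```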

When you're synthesizing music that involves multiple voices on the same instrument, it would seem natural to end up with identical phase. So how do well-designed MIDI/synthesis programs avoid having multiple voices sound too loud on unison notes?

Doing something to randomize the phase of each voice?

MuseScore, the open-source scorewriter, doesn't handle this well, and I wondered what other programs do to get it right.

Re: Synthesized audio and coherent phase issues

Reply #1
How can two notes have the same phase? You mean identical notes from identical instruments?

This makes little sense.

Re: Synthesized audio and coherent phase issues

Reply #2
There are various approaches, but it's not usually a problem for anything but pure digital synth tones. If you're trying to mimic multiple musicians playing together, you're going to want to introduce subtle pitch and timing variations, which eliminates the problem. And if the instruments are panned or EQ'd differently, that can introduce phase differences as well.

Otherwise: sample-based instruments in the modern world have multiple samples for each note and can commonly be programmed to vary the sample start time, and analog synth emulations often emulate the "free running" of analog oscillators or randomize the oscillator phase with each note.
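The randomized-start-phase idea can be sketched in a few lines (this is an illustrative toy, not any particular synth's actual code): each unison voice starts its oscillator at a random phase, the way emulations of free-running analog oscillators do, instead of every voice starting at phase zero.

```python
import math, random

SR = 44100  # sample rate in Hz

def render_unison(freq_hz, phases, n_samples):
    """Sum one sine oscillator per voice, each with its own start phase."""
    return [
        sum(math.sin(2 * math.pi * freq_hz * k / SR + p) for p in phases)
        for k in range(n_samples)
    ]

def level_db(sig, ref):
    """Level of sig relative to ref, in dB."""
    power = lambda s: sum(x * x for x in s) / len(s)
    return 10 * math.log10(power(sig) / power(ref))

one = render_unison(440.0, [0.0], SR)

# Naive: eight voices all starting at phase zero -> +18.06 dB over one voice.
naive = render_unison(440.0, [0.0] * 8, SR)

# Randomized start phases -> less buildup, typically somewhere near +9 dB.
rng = random.Random(1)
randomized = render_unison(
    440.0, [rng.uniform(0, 2 * math.pi) for _ in range(8)], SR
)

print(round(level_db(naive, one), 2))       # 18.06
print(round(level_db(randomized, one), 2))  # strictly less than 18.06
```

Since all eight voices share one frequency here, the randomized case still sums to a single sinusoid, but with amplitude strictly below 8; adding slight detune per voice, as later replies suggest, spreads the energy out over time as well.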

Re: Synthesized audio and coherent phase issues

Reply #3
This makes little sense.
Did you actually read what I wrote, or the bug description I linked to?

Suppose you're using MuseScore to write a score for 8 trumpets. In the real world, 8 trumpets at equal volume are 9dB louder than one trumpet, regardless of whether they're playing the same note or 8 different notes. If you play back your score with MuseScore, the trumpets will be the correct volume as long as they're all playing different notes, but if they ever all join on a unison note then they'll be 18dB louder than one trumpet, because MuseScore naively generates and adds up 8 perfectly identical trumpet sounds.
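The arithmetic behind the 9dB and 18dB figures is just the two summation rules applied to n sources: uncorrelated sources add in power, coherent copies add in amplitude.

```python
import math

# n equal-level uncorrelated sources add in power:  gain = 10*log10(n)
# n phase-coherent identical copies add in amplitude: gain = 20*log10(n)
def uncorrelated_gain_db(n):
    return 10 * math.log10(n)

def coherent_gain_db(n):
    return 20 * math.log10(n)

print(round(uncorrelated_gain_db(8), 2))  # 9.03  -> 8 real-world trumpets
print(round(coherent_gain_db(8), 2))      # 18.06 -> 8 identical renders
```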

Re: Synthesized audio and coherent phase issues

Reply #4
This makes little sense.
Did you actually read what I wrote, or the bug description I linked to?

Suppose you're using MuseScore to write a score for 8 trumpets. In the real world, 8 trumpets at equal volume are 9dB louder than one trumpet, regardless of whether they're playing the same note or 8 different notes. If you play back your score with MuseScore, the trumpets will be the correct volume as long as they're all playing different notes, but if they ever all join on a unison note then they'll be 18dB louder than one trumpet, because MuseScore naively generates and adds up 8 perfectly identical trumpet sounds.
Compare the differences between an acoustic recording of 9 trumpets playing in unison and an all-electronic version with 9 trumpets playing in unison.

The acoustic trumpets are not identical instruments. They each have a slightly different timbre, they are slightly detuned, each note is not started at precisely the same instant, and they are not perfectly co-located in space. Even a single microphone isn't going to pick up 9 physical trumpets in perfect sync, because they aren't all the same distance from the mic, and each will have slightly different room acoustics around it. Remember that for a perfect 6dB buildup between two identical signals, they also have to be in perfect phase. At 1kHz the acoustic wavelength is approximately 1 foot, so moving a source 3" is a change of roughly 90 degrees in phase, and down to about a 3dB sum with its identical signal pair.
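The path-difference arithmetic can be checked directly. A small sketch, assuming a speed of sound of about 1130 ft/s: note that 3" at 1kHz comes out slightly under a quarter wavelength, so the sum is a bit above 3dB; at exactly a quarter wavelength (about 3.4") it would be +3.01dB.

```python
import math

# Assumed speed of sound: about 1130 ft/s at room temperature.
C_IN_PER_S = 1130.0 * 12  # in inches per second

def summed_level_db(freq_hz, path_diff_in):
    """Level of two equal sources summed, vs. one source, given a
    path-length difference in inches."""
    wavelength_in = C_IN_PER_S / freq_hz
    phi = 2 * math.pi * path_diff_in / wavelength_in
    # power(sin(x) + sin(x + phi)) / power(sin(x)) = 2 + 2*cos(phi)
    return 10 * math.log10(2 + 2 * math.cos(phi))

print(round(summed_level_db(1000.0, 0.0), 2))  # 6.02: perfectly in phase
print(round(summed_level_db(1000.0, 3.0), 2))  # ~3.7: 3 inches off at 1 kHz
```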

If you wanted to match that electronically, you could pick different trumpet samples, detune each one very slightly, pan them slightly differently, introduce very small differences in time delay on each, and use a reverb program on each one but adjust it a tiny bit differently for each in some way (decay, wet/dry mix, EQ, etc.). I'd say the slight detune and panning would take care of most of the problem.

Re: Synthesized audio and coherent phase issues

Reply #5
Yes, I'm quite aware this kind of coherence doesn't happen in the real world and I know some reasons why. And were I using a wave editor I'd have some ideas of things I could try putzing with on different tracks until I got an acceptable result.

But that's not the use case I'm interested in - I'm interested in what the best practices are for a program which is synthesizing music. MuseScore can't require people to fiddle around with separate tracks; it has to be "press play and hear what your score sounds like." Also, though I don't know that this matters for MuseScore, in some instances people may expect determinism - same musical score or same MIDI should produce identical .wav output - precluding some approaches to forcing the issue via e.g. temporal jitter.
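Determinism and phase decorrelation aren't actually in conflict. One hypothetical approach (a sketch, not anything MuseScore or another synth is known to do): derive each voice's start phase from a hash of its voice index and pitch, so the same score always renders to bit-identical audio while the voices never sum fully coherently.

```python
import math, hashlib

SR = 44100  # sample rate in Hz

def voice_phase(voice_index, midi_note):
    """Deterministic pseudo-random phase in [0, 2*pi) per (voice, note)."""
    digest = hashlib.sha256(f"{voice_index}:{midi_note}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2 ** 64 * 2 * math.pi

def render(midi_note, n_voices, n_samples):
    """Render n_voices unison sine voices with hashed start phases."""
    freq = 440.0 * 2 ** ((midi_note - 69) / 12)
    phases = [voice_phase(v, midi_note) for v in range(n_voices)]
    return [
        sum(math.sin(2 * math.pi * freq * k / SR + p) for p in phases)
        for k in range(n_samples)
    ]

a = render(69, 8, 512)
b = render(69, 8, 512)
print(a == b)                      # True: same input, identical output
print(max(abs(x) for x in a) < 8)  # True: no fully coherent 8x buildup
```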

I'm pretty sure better synths have solved this problem and MuseScore's is just naive. In dealing with a bug report it'd be nice to be able to point people to what others are doing correctly.

Re: Synthesized audio and coherent phase issues

Reply #6
Yes, I'm quite aware this kind of coherence doesn't happen in the real world and I know some reasons why. And were I using a wave editor I'd have some ideas of things I could try putzing with on different tracks until I got an acceptable result.

But that's not the use case I'm interested in - I'm interested in what the best practices are for a program which is synthesizing music. MuseScore can't require people to fiddle around with separate tracks; it has to be "press play and hear what your score sounds like." Also, though I don't know that this matters for MuseScore, in some instances people may expect determinism - same musical score or same MIDI should produce identical .wav output - precluding some approaches to forcing the issue via e.g. temporal jitter.

I'm pretty sure better synths have solved this problem and MuseScore's is just naive. In dealing with a bug report it'd be nice to be able to point people to what others are doing correctly.

Yes, MuseScore has limitations. But you can put the different parts on separate tracks, and volume, panning, reverb, and chorus are available on a per-track basis.

You did ask about how to achieve a solution with other software. If you were using Logic, for example, you can accomplish everything I outlined. In fact, you can plug in a "modifier" or "modulator" and perform micro-detuning at will, per track. Combine that with slightly slipping the note triggers, and you've got it.

Re: Synthesized audio and coherent phase issues

Reply #7
No, I'm not asking how to achieve a solution with other software. My problem is not "I have one song I need to produce, would someone tell me how to achieve this effect in a DAW or wave editor." If I have a single score I need to work around the problem on, I can putz around with dynamics on every single unison note in MuseScore and get a satisfactory result that way more easily than by exporting to multiple tracks and putzing around in a DAW.

My problem is "I'm discussing a problem with the software's developers, who haven't been aware there was an issue, and it'd be helpful if I had an idea what the proper solution is, the best practice that other synths have adopted."


Re: Synthesized audio and coherent phase issues

Reply #8
No, I'm not asking how to achieve a solution with other software. My problem is not "I have one song I need to produce, would someone tell me how to achieve this effect in a DAW or wave editor." If I have a single score I need to work around the problem on, I can putz around with dynamics on every single unison note in MuseScore and get a satisfactory result that way more easily than by exporting to multiple tracks and putzing around in a DAW.
I didn't suggest exporting tracks to a DAW.  But ok, I get where you're going.
My problem is "I'm discussing a problem with the software's developers, who haven't been aware there was an issue, and it'd be helpful if I had an idea what the proper solution is, the best practice that other synths have adopted."
It's not a solution you'll necessarily find in other "synths".  It's a production and mixing problem.  I've outlined the solutions, detuning, different sound timbre, panning, reverb, blah blah blah.  Some of that does already exist in MuseScore.  Actually, most of it. No exporting tracks, no modification of code by the developers required.