Topic: MS WaveIn/Out API Synchronization Algorithm (Read 2078 times)

  • cox
MS WaveIn/Out API Synchronization Algorithm

I am implementing software to play audio data on different computers. The data is captured on one computer, which sends it over the network to another computer for playback.

The code is generic with respect to audio codecs: you initialize the class with parameters that select a codec, or none at all. I have already implemented Speex, mLaw, and uLaw codecs.

It is working fine, but I have one problem. In some situations, while a computer is playing the audio data, a delay builds up, and I have to buffer more and more so as not to lose data.

Reading some articles, I realized there is a known problem of clock mismatch between sound cards.

I guess that to maintain the playback rate I will need to drop or duplicate samples in my lpData buffer.

If that is correct, how can I compute this number? I know that my buffer represents one time period. Do I need to measure the real time the board takes to play the audio data once and then use that as a reference for the other packets/buffers, or do I have to compute this correction for every packet/buffer? Or am I wrong, and this is not the solution at all?

Any help is very welcome!

Sorry for my English mistakes; if you do not understand any part, tell me and I will try to rephrase it.



-- Guilherme Cox

  • Jasper
Reply #1
You could indeed try compensating by checking how long it really takes to play the buffers. But since you are getting your sound from a live feed, you might be better off letting the recording guide you: just make sure your buffer doesn't grow too large or too small. That way all computers would play more or less in sync with the recording computer.

  • wkwai
  • Developer
Reply #2
I am not sure whether playback rates really differ between sound cards. You will have to measure that!

Anyway, why don't you use the DirectSound APIs? The buffering is taken care of by the library. It is a lot simpler than having to construct two waveOut buffers and swap them.

There are a lot of ways of sending audio data to the sound card, but the DirectSound APIs are the easiest.

  • Jasper
Reply #3
Of course it is a matter of taste, but I wouldn't call DirectSound the easiest way to send audio data to a sound card. It's actually quite easy to work with waveOut by simply using a queue of buffers. With DirectSound you have to set up events or poll the driver for its current read position. And don't forget PortAudio either; it's not too difficult to work with and even offers a wrapper for blocking reads/writes.