I just watched the Day 19 video. I'll try to explain my thoughts (my English is not great).

I've been working with audio for several years, and the way you output sound looks strange to me.

In most audio APIs (such as ASIO) you use a callback function to fill each new chunk of sound. The size of that chunk is exactly the latency you get. You don't need to do any polling or rely on a play cursor or write cursor. If you set the latency too low, you get scratchy, choppy sound.
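To make the callback model concrete, here is a minimal sketch. The names are made up for illustration (this is not a real API such as ASIO); the point is that the system hands you a buffer that is exactly one latency period long, at regular intervals:

```cpp
#include <cstdint>

// Illustrative format description (not a real API struct).
struct AudioFormat
{
    int sampleRate;   // e.g. 48000
    int channels;     // e.g. 2, interleaved
};

int16_t MixNextSample(int channel);   // hypothetical mixer/generator

// The audio system calls this whenever it needs the next chunk.
// frameCount is one latency's worth of frames, every single call.
void AudioCallback(int16_t *output, int frameCount, const AudioFormat &format)
{
    for (int frame = 0; frame < frameCount; ++frame)
    {
        for (int ch = 0; ch < format.channels; ++ch)
        {
            *output++ = MixNextSample(ch);
        }
    }
}
```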

When I wrote a DirectSound output layer for my application, I used the same principle. The DirectSound buffer was split into two equal parts, each one the size of the latency you want (so the total buffer size equals latency x2).

When DirectSound starts playing the first part of the buffer, you fill the second part, and vice versa. There are several ways to know which part of the buffer you should fill. The simplest is polling the play cursor position; another is DirectSound notifications (see MSDN: http://msdn.microsoft.com/en-us/l...ws/desktop/ee418746(v=vs.85).aspx).

This way you always output sound at equal time intervals.
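Roughly, the polling version looks like this. It is just a sketch with my own placeholder names (buffer, halfSize, lastFilledOffset, FillSamples), not the code from the stream:

```cpp
#include <dsound.h>

void FillSamples(void *dest, DWORD byteCount);   // hypothetical sample generator

// 'buffer' is the secondary IDirectSoundBuffer whose total size is 2*halfSize bytes.
// 'lastFilledOffset' remembers which half we refilled last, so each half is
// filled only once per period.
void RefillIfNeeded(IDirectSoundBuffer *buffer, DWORD halfSize, DWORD *lastFilledOffset)
{
    DWORD playCursor, writeCursor;
    if (FAILED(buffer->GetCurrentPosition(&playCursor, &writeCursor)))
        return;

    // While DirectSound plays one half, we fill the other half.
    DWORD fillOffset = (playCursor < halfSize) ? halfSize : 0;
    if (fillOffset == *lastFilledOffset)
        return;                                  // this half is already filled

    void *region1, *region2;
    DWORD size1, size2;
    if (SUCCEEDED(buffer->Lock(fillOffset, halfSize,
                               &region1, &size1, &region2, &size2, 0)))
    {
        FillSamples(region1, size1);
        if (region2) FillSamples(region2, size2);
        buffer->Unlock(region1, size1, region2, size2);
        *lastFilledOffset = fillOffset;
    }
}
```

You call this once per frame (or more often); since the play cursor only crosses a half boundary once per latency period, the fill happens at nearly constant intervals.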

Finally, I want to mention the high latency you get on Windows 7 with DirectSound.

If I remember correctly, starting with Windows Vista DirectSound is essentially software-emulated. No matter what your hardware is capable of, the latency is always high and impractical for an interactive application. It is better to switch to XAudio2 on modern systems and keep DirectSound only for Windows XP compatibility.
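For reference, a minimal XAudio2 setup looks roughly like this (error handling trimmed, and the buffer-submission loop with IXAudio2VoiceCallback / SubmitSourceBuffer is left out for brevity; the 48 kHz 16-bit stereo format is just an example):

```cpp
#include <windows.h>
#include <xaudio2.h>
#pragma comment(lib, "xaudio2.lib")

// Minimal XAudio2 initialization sketch.
bool InitXAudio2(IXAudio2 **outXAudio, IXAudio2SourceVoice **outVoice)
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IXAudio2 *xaudio = nullptr;
    if (FAILED(XAudio2Create(&xaudio, 0, XAUDIO2_DEFAULT_PROCESSOR)))
        return false;

    IXAudio2MasteringVoice *master = nullptr;
    if (FAILED(xaudio->CreateMasteringVoice(&master)))
        return false;

    // Example format: 16-bit stereo PCM at 48 kHz.
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 48000;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    IXAudio2SourceVoice *voice = nullptr;
    if (FAILED(xaudio->CreateSourceVoice(&voice, &fmt)))
        return false;

    voice->Start(0);

    // From here you queue latency-sized chunks with SubmitSourceBuffer, and
    // IXAudio2VoiceCallback::OnBufferEnd tells you when to queue the next one,
    // so you are back to the callback model I described above.
    *outXAudio = xaudio;
    *outVoice  = voice;
    return true;
}
```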

Sorry for my bad English =)