sol_hsa
Curious. I hadn't even considered using DirectShow for game audio =)
I never made games before. I know nothing of them. But now it's been like 3-4-5 days or so with this "tileshooter", and it's been extremely fun. I'm like: "why didn't I do this before?"
sol_hsa
Anyway, the need for delaying the sounds comes from looking at the alternative: only triggering sounds at the start of a new buffer, in which case sounds get "clumped" together. Here's a short vid showing the difference between these methods:
[video]https://www.youtube.com/watch?v=Qt79F8NRLcE[/video]
Can you explain the video in a little more detail? When you say "buffer", do you mean the DSSoundbuffer?
Is the only difference between (3) and (4) the offset? That in (3) the sounds are mixed into the same buffer at different sample offsets? But what happens in (4)? Overwriting the buffer? I am not sure I understand. If I mixed two identical sounds into one DSbuffer at the same sample offset, I would expect the volume to go up, or to just hear one sound. Not noise, unless I missed the play cursor.
It sounds like (4) has more sounds. If the difference is just the offset, then I understand why you would not consider DirectSound :) as it has, I guess, this overlapping noise, because they fire so often?
But isn't DirectShow supposed to take care of the mixing, and the noise I hear with my tests is just amplitude noise, coming from the fact that there are just too many (can be over 100, at least 50) of the same sounds?