[quote=mmozeiko]No, it is not done like that. Casey is simply showing simplest possible way how to get something rendering.[/quote]
As Casey pointed out, yes it is. Or, at least, it kind of is.
Think of the screen as divided into pixel groups (i.e. tiles), perhaps 4x4 or 16x16 pixels. If you draw a triangle on the screen, then the pixels that it covers can be calculated using a traditional scanline-type method. However, this is also true of the pixel groups, which can be thought of as big pixels.
The way that modern GPUs do fragment generation is to scan-convert triangles into pixel groups, and then process all of the pixels in the group together using a SIMD processor. The size of the pixel groups gives you a tradeoff between exploiting parallel processing and internal bandwidth.
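To make that concrete, here is a minimal sketch in C of the tile idea. It is my own illustration, not any particular GPU's implementation: it uses edge functions for the inside test (one common choice), rejects a tile outright when all four of its corners fall outside some edge, and otherwise hands the tile's pixels to an inner loop that stands in for the SIMD unit processing the whole group in lockstep.

[code]
#include <stdint.h>

#define TILE 4  // 4x4 pixel groups; the size is the bandwidth/parallelism knob

// E > 0 on one side of the edge a->b, E < 0 on the other, E == 0 on it.
static float EdgeFunction(float ax, float ay, float bx, float by,
                          float px, float py)
{
    return (px - ax)*(by - ay) - (py - ay)*(bx - ax);
}

void RasterizeTiled(float x0, float y0, float x1, float y1,
                    float x2, float y2,
                    uint32_t *pixels, int width, int height, uint32_t color)
{
    // Canonicalize the winding so that interior points make all three
    // edge functions non-negative.
    if (EdgeFunction(x0, y0, x1, y1, x2, y2) < 0) {
        float t;
        t = x1; x1 = x2; x2 = t;
        t = y1; y1 = y2; y2 = t;
    }

    float ex0[3] = {x0, x1, x2}, ey0[3] = {y0, y1, y2};
    float ex1[3] = {x1, x2, x0}, ey1[3] = {y1, y2, y0};

    for (int ty = 0; ty < height; ty += TILE) {
        for (int tx = 0; tx < width; tx += TILE) {
            // Scan-convert at tile granularity: treat the tile as one big
            // pixel and test its four corners against each edge. E is
            // linear, so four negative corners mean the whole tile is
            // outside that edge's half-plane.
            int rejected = 0;
            for (int e = 0; e < 3 && !rejected; ++e) {
                int outside = 0;
                for (int c = 0; c < 4; ++c) {
                    float cx = (float)(tx + (c & 1)*TILE);
                    float cy = (float)(ty + (c >> 1)*TILE);
                    if (EdgeFunction(ex0[e], ey0[e], ex1[e], ey1[e],
                                     cx, cy) < 0) {
                        ++outside;
                    }
                }
                rejected = (outside == 4);  // whole tile outside this edge
            }
            if (rejected) continue;

            // The tile may overlap the triangle: test its pixels as a
            // group. On a GPU this inner loop is what the SIMD unit runs
            // in lockstep for all TILE*TILE pixels at once.
            for (int y = ty; y < ty + TILE && y < height; ++y) {
                for (int x = tx; x < tx + TILE && x < width; ++x) {
                    float px = x + 0.5f, py = y + 0.5f;  // pixel center
                    if (EdgeFunction(x0, y0, x1, y1, px, py) >= 0 &&
                        EdgeFunction(x1, y1, x2, y2, px, py) >= 0 &&
                        EdgeFunction(x2, y2, x0, y0, px, py) >= 0) {
                        pixels[y*width + x] = color;
                    }
                }
            }
        }
    }
}
[/code]

Shrinking TILE wastes less lockstep work on partially covered tiles but costs more tile tests; growing it is the reverse. That is the parallelism-versus-bandwidth tradeoff in miniature.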
Incidentally, this trick was also used for a while by offline scanline renderers to compute sub-pixel coverage. The PDI-internal renderer used to render Shrek, written by Dan Wexler, used 8x8 tiles of sub-pixel samples per pixel. 8x8 is, conveniently, 64, so the "coverage" of an object over a pixel fits in two 1999-era machine words and can be manipulated with bitwise operations.
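For illustration, here is roughly what that looks like in C. This is just my sketch of the idea, not Wexler's code, and the half-plane inside test is a hypothetical stand-in for whatever the renderer actually evaluates:

[code]
#include <stdint.h>

// Hypothetical stand-in shape: a half-plane. The real renderer would test
// against whatever geometry it is sampling.
static int Inside(float px, float py)
{
    return (0.7f*px + 0.3f*py) < 0.6f;
}

// Point-sample the inside test at an 8x8 grid of sub-pixel positions and
// pack the results one bit per sample: 64 bits of coverage per pixel, i.e.
// two 32-bit words on a 1999-era machine, or one uint64_t today.
uint64_t CoverageMask(float pixel_x, float pixel_y)
{
    uint64_t mask = 0;
    for (int sy = 0; sy < 8; ++sy) {
        for (int sx = 0; sx < 8; ++sx) {
            if (Inside(pixel_x + (sx + 0.5f)/8.0f,
                       pixel_y + (sy + 0.5f)/8.0f)) {
                mask |= 1ull << (sy*8 + sx);
            }
        }
    }
    return mask;
}

static int PopCount64(uint64_t m)
{
    int n = 0;
    while (m) { m &= m - 1; ++n; }  // clear the lowest set bit each pass
    return n;
}

// Fraction of the pixel covered by the object.
static float Coverage(uint64_t mask) { return PopCount64(mask)/64.0f; }
[/code]

Once coverage is a bitmask, compositing queries become bitwise operations: a & b is the set of samples where two objects overlap, a & ~b the samples where only the first is visible, and a popcount converts any mask back into a coverage fraction.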
I don't know what algorithm modern GPUs typically use for scan conversion. Nobody seems to say, possibly because it's not interesting these days, but older publications do.
The paper on the Reality Engine (ACM subscription required), for example, notes:

[quote]Fragments are generated using Pineda arithmetic, with the algorithm modified to traverse only pixels that are in the domain of the Fragment Generator.[/quote]
I didn't know about Pineda's method before today. Skimming the paper, it seems like it would hold its own on modern hardware, suitably modified.
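For anyone else who hadn't seen it: the core is an edge function E(x,y) = (x - X)*dY - (y - Y)*dX that is linear in x and y, so traversal can step it with one add per pixel, and it can equally be evaluated independently at any (x, y), which is what makes it friendly to parallel hardware. A sketch in C, from my reading of the paper:

[code]
// Pineda-style edge setup and incremental traversal; a sketch from my
// reading of the 1988 paper, not production code. E(x,y) is positive on
// one side of the edge and negative on the other.
typedef struct {
    float value;   // E at the current sample position
    float dx, dy;  // edge vector (X1 - X0, Y1 - Y0)
} edge;

static edge EdgeSetup(float X0, float Y0, float X1, float Y1,
                      float start_x, float start_y)
{
    edge e;
    e.dx = X1 - X0;
    e.dy = Y1 - Y0;
    // E(x,y) = (x - X0)*dY - (y - Y0)*dX
    e.value = (start_x - X0)*e.dy - (start_y - Y0)*e.dx;
    return e;
}

// E(x+1, y) = E(x, y) + dY -- one add to move right one pixel.
static void EdgeStepX(edge *e) { e->value += e->dy; }

// E(x, y+1) = E(x, y) - dX -- one add to move down one row.
static void EdgeStepY(edge *e) { e->value -= e->dx; }
[/code]

A pixel is inside the triangle when all three edge functions agree in sign (non-negative, given consistent winding). Because E can also be computed directly at any point rather than only by stepping, the same arithmetic fans out across a whole pixel group, which is presumably the property the Fragment Generator exploits.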
Is it suitable for HMH? Maybe...
[quote=mmozeiko]But it was not the classical way of rasterization.[/quote]
I don't think there's ever been one single "classical way of rasterization", because everyone's requirements have always been different.