Rendering

Hello everyone,
Is the process described in the stream, going through every pixel in a suspected region and checking it, really the way people do rendering right now? I guessed that for some primitives you can use some shortcut, like applying a function to a few points to transform them from the rotated frame to the screen frame, but I don't know.
I was always into rendering, but I have only seen the beginning of the rabbit hole, with vertices, edges and 4x4 matrix operations. I would like to return to it again sometime, maybe over spring break.
No, it is not done like that. Casey is simply showing the simplest possible way to get something rendering. Optimizations will come later.

On the CPU you typically find the edges of the triangle and rasterize scanlines (there's a rough sketch of the idea below these links).
Here's a nice description of how to do that:
http://chrishecker.com/Miscellane...icles#Perspective_Texture_Mapping
(it includes some optimizations that are not relevant anymore, but the general idea is the same)

Here's just the code without much explanation: http://www.xbdev.net/maths_of_3d/...zation/texturedtriangle/index.php
http://www.stanleyhayes.com/triangle-rasterizerc.html
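
To make the scanline idea concrete, here is a minimal flat-shaded sketch. The framebuffer interface (Pixels, Width, Height) and the ceil-based rounding are my assumptions for illustration, not code from those articles:

[code]
// Minimal scanline triangle fill: flat color, no perspective correction,
// clipping only against the screen bounds.
#include <math.h>
#include <stdint.h>

typedef struct { float x, y; } v2;

static void
FillTriangle(uint32_t *Pixels, int Width, int Height,
             v2 A, v2 B, v2 C, uint32_t Color)
{
    v2 T;
    // Sort the vertices so that A.y <= B.y <= C.y.
    if (A.y > B.y) { T = A; A = B; B = T; }
    if (B.y > C.y) { T = B; B = C; C = T; }
    if (A.y > B.y) { T = A; A = B; B = T; }

    for (int Y = (int)ceilf(A.y); Y < (int)ceilf(C.y); ++Y)
    {
        if (Y < 0 || Y >= Height) continue;

        // The long edge A->C spans the triangle's full height.
        float XLong = A.x + (Y - A.y) / (C.y - A.y) * (C.x - A.x);

        // The short edge is A->B in the top half, B->C in the bottom half.
        float XShort = (Y < B.y)
            ? A.x + (Y - A.y) / (B.y - A.y) * (B.x - A.x)
            : B.x + (Y - B.y) / (C.y - B.y) * (C.x - B.x);

        float Left  = (XLong < XShort) ? XLong : XShort;
        float Right = (XLong < XShort) ? XShort : XLong;

        // ceil() on both ends gives a consistent fill convention, so
        // adjacent triangles don't double-fill shared edges.
        for (int X = (int)ceilf(Left); X < (int)ceilf(Right); ++X)
        {
            if (X >= 0 && X < Width)
            {
                Pixels[Y*Width + X] = Color;
            }
        }
    }
}
[/code]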

Here is another approach using barycentric coordinates, which can be easier to optimize with SIMD instructions (this is close to the solution I think Casey will implement; there's a sketch after these links):
https://fgiesen.wordpress.com/2013/02/06/the-barycentric-conspirac/
https://fgiesen.wordpress.com/201...iangle-rasterization-in-practice/
https://fgiesen.wordpress.com/201.../optimizing-the-basic-rasterizer/
https://fgiesen.wordpress.com/201...11/depth-buffers-done-quick-part/
Including SSE2 code: https://github.com/rygorous/intel...pthBufferRasterizerSSEMT.cpp#L219
(this doesn't include texture mapping, just the raw triangle rasterization).
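
A scalar version of that approach fits in a few lines. This is roughly the structure from the "rasterization in practice" post above, reconstructed from memory, so treat the names and details as mine:

[code]
// Bounding-box rasterizer built on edge functions. Integer pixel
// coordinates; the framebuffer names are placeholder assumptions.
#include <stdint.h>

typedef struct { int x, y; } p2;

static int Min3(int a, int b, int c) { int m = (a < b) ? a : b; return (m < c) ? m : c; }
static int Max3(int a, int b, int c) { int m = (a > b) ? a : b; return (m > c) ? m : c; }

// Twice the signed area of triangle abc; the sign says which side of
// line ab the point c is on. This is the "edge function".
static int
Orient2d(p2 a, p2 b, p2 c)
{
    return (b.x - a.x)*(c.y - a.y) - (b.y - a.y)*(c.x - a.x);
}

static void
DrawTriangle(uint32_t *Pixels, int Width, int Height,
             p2 V0, p2 V1, p2 V2, uint32_t Color)
{
    // Triangle bounding box, clipped against the screen.
    int MinX = Min3(V0.x, V1.x, V2.x); if (MinX < 0) MinX = 0;
    int MinY = Min3(V0.y, V1.y, V2.y); if (MinY < 0) MinY = 0;
    int MaxX = Max3(V0.x, V1.x, V2.x); if (MaxX > Width - 1)  MaxX = Width - 1;
    int MaxY = Max3(V0.y, V1.y, V2.y); if (MaxY > Height - 1) MaxY = Height - 1;

    p2 P;
    for (P.y = MinY; P.y <= MaxY; ++P.y)
    {
        for (P.x = MinX; P.x <= MaxX; ++P.x)
        {
            // If all three edge functions are non-negative, P is inside
            // (assuming one consistent winding order).
            int W0 = Orient2d(V1, V2, P);
            int W1 = Orient2d(V2, V0, P);
            int W2 = Orient2d(V0, V1, P);
            if ((W0 | W1 | W2) >= 0)
            {
                Pixels[P.y*Width + P.x] = Color;
            }
        }
    }
}
[/code]

The nice part is that W0/W1/W2 are, up to the constant factor of twice the triangle area, the barycentric coordinates of P, so interpolating depth, UVs or colors across the triangle falls out of the same values.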

On the GPU it gets a bit more complicated. Here's a decent explanation of what happens on the GPU: https://fgiesen.wordpress.com/201...he-graphics-pipeline-2011-part-6/

But actually, mmozeiko, if you look at those links they _are_ doing exactly what I was doing. You bound the region as tightly as you can, certainly, but then you do actually do blocks of pixels and test edge inclusion. The barycentric coordinates are not much different from what we did.

So it's really very similar to what we're already doing - it's not a totally different thing.

- Casey
Yes, it is. The only big difference is dealing with a triangle vs. a quad.

But it was not the classical way of rasterization. That is explained in Chris Hecker's articles.
[quote=mmozeiko]No, it is not done like that. Casey is simply showing the simplest possible way to get something rendering.[/quote]

As Casey pointed out, yes it is. Or, at least, it kind of is.

Think of the screen as divided into pixel groups (i.e. tiles), perhaps 4x4 or 16x16 pixels. If you draw a triangle on the screen, then the pixels that it covers can be calculated using a traditional scanline-type method. However, this is also true of the pixel groups, which can be thought of as big pixels.

The way that modern GPUs do fragment generation is to scan-convert triangles into pixel groups, and then process all of the pixels in the group together using a SIMD processor. The size of the pixel groups gives you a tradeoff between exploiting parallel processing and internal bandwidth.
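
As a concrete (if simplified) illustration of the pixel-group idea, here's what testing one four-pixel group against a triangle's three edge functions might look like with SSE2. The edge setup E(x, y) = A*x + B*y + C is assumed to have been done elsewhere, and all the names are mine:

[code]
// One four-pixel group vs. one edge function E(x, y) = A*x + B*y + C,
// evaluated with SSE2. Lanes where the pixel passes the edge test come
// back as 0xFFFFFFFF.
#include <emmintrin.h>
#include <stdint.h>

static __m128i
EdgeMask4(int A, int B, int C, int X, int Y)
{
    // E for pixels (X..X+3, Y); lane 0 is pixel X.
    __m128i E = _mm_add_epi32(
        _mm_set_epi32((X + 3)*A, (X + 2)*A, (X + 1)*A, (X + 0)*A),
        _mm_set1_epi32(B*Y + C));
    return _mm_cmpgt_epi32(E, _mm_set1_epi32(-1));   // E >= 0
}

// AND the three edge masks together and do a masked store: the whole
// group gets one set of computations, and the mask handles pixels that
// fall outside the triangle.
static void
ShadeGroup4(uint32_t *Row, int X, int Y,
            int A0, int B0, int C0,
            int A1, int B1, int C1,
            int A2, int B2, int C2, __m128i Color)
{
    __m128i Mask = _mm_and_si128(EdgeMask4(A0, B0, C0, X, Y),
                   _mm_and_si128(EdgeMask4(A1, B1, C1, X, Y),
                                 EdgeMask4(A2, B2, C2, X, Y)));
    __m128i Old = _mm_loadu_si128((__m128i *)(Row + X));
    __m128i New = _mm_or_si128(_mm_and_si128(Mask, Color),
                               _mm_andnot_si128(Mask, Old));
    _mm_storeu_si128((__m128i *)(Row + X), New);
}
[/code]

Real GPUs do the equivalent over 2x2 quads (so they can compute derivatives for mip selection) and over much wider groups, but the structure is the same: one set of computations plus a per-pixel mask.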

Incidentally, this trick was also used for a while by offline scanline renderers to compute sub-pixel coverage. The PDI-internal renderer used to render Shrek, written by Dan Wexler, used 8x8 tiles to compute sub-pixel coverage. 8x8 is, conveniently, 64, so the "coverage" of an object over a pixel will fit in two 1999-era machine words and can be manipulated with bitwise operations.
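
In today's terms the trick is even simpler, since a whole 8x8 mask fits in one 64-bit integer. A rough sketch of the idea only (not Wexler's actual code, obviously):

[code]
// Sub-pixel coverage as a bitmask: with an 8x8 grid, one pixel's mask is
// exactly 64 bits, so set operations on coverage are plain bitwise ops.
#include <stdint.h>

typedef uint64_t coverage8x8;  // bit (sy*8 + sx) = sub-sample (sx, sy) covered

static float
CoverageFraction(coverage8x8 Mask)
{
    // Portable popcount (GCC/Clang's __builtin_popcountll also works).
    int Count = 0;
    for (; Mask; Mask &= Mask - 1) ++Count;
    return (float)Count / 64.0f;
}

// Why the representation is convenient: the part of object B still
// visible behind object A over this pixel is just (BMask & ~AMask).
[/code]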

I don't know what algorithm modern GPUs typically use for scan conversion. Nobody seems to say, possibly because it's not interesting these days, but older publications do. The paper on the Reality Engine (ACM subscription required), for example, notes:

Fragments are generated using Pineda arithmetic, with the algorithm modified to traverse only pixels that are in the domain of the Fragment Generator.

I didn't know about Pineda's method before today. Skimming the paper, it seems like it would hold its own on modern hardware, suitably modified.
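
For anyone who doesn't want to chase the paper: the core of Pineda's method is an edge function E(x, y) = (x - X0)*dY - (y - Y0)*dX, which is positive on one side of the edge and negative on the other, plus the observation that it steps incrementally: moving one pixel right adds dY, moving one pixel down subtracts dX. A rough sketch with my own names and sign conventions:

[code]
// Incremental traversal of one edge function, per Pineda's
// "A Parallel Algorithm for Polygon Rasterization" (1988).
typedef struct { int dX, dY, Row; } edge_fn;

static edge_fn
EdgeSetup(int X0, int Y0, int X1, int Y1, int MinX, int MinY)
{
    edge_fn E;
    E.dX = X1 - X0;
    E.dY = Y1 - Y0;
    // Value of E at the top-left corner of the region being traversed.
    E.Row = (MinX - X0)*E.dY - (MinY - Y0)*E.dX;
    return E;
}

// Traversal: V = E.Row at the start of each scanline, V += E.dY per pixel
// to the right, E.Row -= E.dX per scanline down. A pixel is inside the
// triangle when all three edges' values agree in sign.
[/code]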

Is it suitable for HMH? Maybe...

[quote=mmozeiko]But it was not the classical way of rasterization.[/quote]
I don't think there's ever been one single "classical way of rasterization", because everyone's requirements have always been different.
I recently came across this vid: [url=http://www.youtube.com/watch?v=IyUgHPs86XM]Principles of Lighting and Rendering with John Carmack at QuakeCon 2013[/url]
Yeah, so "classical" rasterization in the game sense definitely worked more like Chris Hecker's articles described. The reason was because there were no "wide" operations on the original 386/486/586 processors, so there was no benefit to treating pixels as combined blocks. Once SIMD came around, both CPU and GPU wise, things changed because now you needed a way to do a single set of computations that would operate on a whole set of pixels, so you needed to compute the inclusion of those pixels as part of the operation (since not all of them will be in the triangle). So the notion of "just looping over the pixels that were included" kind of went out the window, and it had to become "just looping over the _groups_ of pixels that were included", and hence at the lowest level you always needed to have edge tests.

That's why I started with edge tests on HMH: if you were doing a software rasterizer today, even on a CPU you would probably want to start with edge tests, because you're going to want to be at least 2x2 everywhere, and probably 4x4 so you can take advantage of the newer AVX-512 stuff (which, coincidentally, was _explicitly designed_ for software rasterization).

- Casey
I'm reading the aforementioned articles by Chris Hecker and have a couple of simple questions:

Part 1

What do these symbols mean in math?

Page 5: Ê, ˆ, ¯
Page 6: Î, Í, ˘, ˚


Google is particularly bad at searching for non-alphabetic characters.


Part 2, page 2

Why isn't the pixel @ (8,1) lit when using a top-left fill convention? (The pixel center is on the polygon boundary and it's a top edge)

On page 5 those symbols don't mean anything. If you look at the first equation, (x1 - x2)/(y1 - y2) = (x4 - x2)/(y4 - y2), and solve it for x4, you'll get the equation a bit below it, the one with those weird symbols. So they are simply broken rendering in the PDF.

On page 6 the equation for Xint should use the ceiling function (it says so in the text): basically, round up the whole equation for x (the one above it). Those strange symbols are broken rendering of the ceiling brackets - https://en.wikipedia.org/wiki/Floor_and_ceiling_functions

For the second question: while the pixel is on a top edge, yes, it is also on a right edge. And if a pixel is on a right or bottom edge, you exclude it; that takes precedence over being on a top or left edge. To see why, imagine drawing the triangle (8,1)-(9,4)-(10,1). For that triangle, (8,1) would be lit, so you don't want (8,1) to also be lit for the triangle in the picture.
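
In code, the fill rule usually shows up as a per-edge bias on the edge functions discussed earlier in the thread. Here's a sketch; the exact sign tests depend on your winding order and on whether y grows up or down, so treat the comparisons below as assumptions to check against your own setup:

[code]
// Top-left fill rule on top of integer edge functions. dX/dY are the
// deltas of one triangle edge. A pixel whose center lies exactly on an
// edge (E == 0) is kept only for top and left edges; biasing the other
// edges by -1 turns their ">= 0" test into "> 0", which excludes it.
static int
EdgeBias(int dX, int dY)
{
    // Conventions assumed here: y grows downward, counter-clockwise
    // winding. "Top" = horizontal edge with the interior below it,
    // "left" = edge heading down the screen. Verify against your setup.
    int IsTop  = (dY == 0) && (dX < 0);
    int IsLeft = (dY > 0);
    return (IsTop || IsLeft) ? 0 : -1;
}

// Per-edge test in the inner loop: (E + EdgeBias(dX, dY)) >= 0.
// With this rule, the pixel at (8,1) fails the right edge's "> 0" test
// even though it sits on a top edge, which is exactly the precedence
// described above.
[/code]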
