Question about bilinear sampling

Hi,

When we gather the 4 pixels used in bilinear sampling, shouldn't we pick different pixels based on where the sampling position falls within the pixel?

If I want the bilinear value of a pixel at ( 10, 20 ), whose UV coordinates multiplied by the width and height of the texture give ( 10.1, 20.1 ), shouldn't the 4 samples be at ( 10, 20 ), ( 9, 20 ), ( 10, 19 ) and ( 9, 19 ) instead of ( 10, 20 ), ( 11, 20 ), ( 10, 21 ) and ( 11, 21 )?

What I mean is: if the sampling position within a pixel (real position - integer position) is:
- < 0.5 on x => sample left
- >= 0.5 on x => sample right
- < 0.5 on y => sample down
- >= 0.5 on y => sample up

If it helps to understand my question, here's some code to illustrate my idea:
inline u32 bilinearSample( Bitmap* bitmap, Vector2 uv ) {

	f32 pixelRealX = uv.u * bitmap->width;
	f32 pixelRealY = uv.v * bitmap->height;
	/* signed, so the -1 neighbor offset and the bounds checks below work */
	s32 pixelX = ( s32 ) pixelRealX;
	s32 pixelY = ( s32 ) pixelRealY;
	
	f32 offsetX = pixelRealX - pixelX;
	f32 offsetY = pixelRealY - pixelY;
	
	s32 pixelOffsetX = 1;
	s32 pixelOffsetY = 1;
	
	if ( offsetX < 0.5f ) {
		pixelOffsetX = -1;
		offsetX = 0.5f - offsetX;
	} else {
		offsetX = offsetX - 0.5f;
	}
	
	if ( offsetY < 0.5f ) {
		pixelOffsetY = -1;
		offsetY = 0.5f - offsetY;
	} else {
		offsetY = offsetY - 0.5f;
	}

	if ( pixelX + pixelOffsetX < 0 || pixelX + pixelOffsetX >= ( s32 ) bitmap->width ) {
		pixelOffsetX = 0;
		offsetX = 0.5f;
	}
	
	if ( pixelY + pixelOffsetY < 0 || pixelY + pixelOffsetY >= ( s32 ) bitmap->height ) {
		pixelOffsetY = 0;
		offsetY = 0.5f;
	}

	Vector4 pixelA = integerColorToV4( *( u32* ) ( bitmap->pixels + bitmap->pitch * pixelY + bitmap->bytesPerPixel * pixelX ) );
	Vector4 pixelB = integerColorToV4( *( u32* ) ( bitmap->pixels + bitmap->pitch * pixelY + bitmap->bytesPerPixel * ( pixelX + pixelOffsetX ) ) );
	Vector4 pixelC = integerColorToV4( *( u32* ) ( bitmap->pixels + bitmap->pitch * ( pixelY + pixelOffsetY ) + bitmap->bytesPerPixel * pixelX ) );
	Vector4 pixelD = integerColorToV4( *( u32* ) ( bitmap->pixels + bitmap->pitch * ( pixelY + pixelOffsetY ) + bitmap->bytesPerPixel * ( pixelX + pixelOffsetX ) ) );
	
	Vector4 pixel = lerp(
		lerp( pixelA, pixelB, offsetX ),
		lerp( pixelC, pixelD, offsetX ),
		offsetY
	);
	
	u32 result = v4ColorToInteger( pixel );

	return result;
}

Edited by Simon Anciaux. Reason: Typo
No, I don't think so. The difference between the real pixel coordinate and the integer pixel coordinate only determines the percentage of each pixel you sample. If you truncate the real value, you should always sample between the integer pixel and the integer pixel + 1.

Example:

Real pixel coordinate: x = 10.2
Integer pixel coordinate: x = 10
-> you sample between 10 and 11
t = 10.2 - 10 = 0.2
and you sample (1-t)*10 + t*11

Real pixel coordinate: x = 10.8
Integer pixel coordinate: x = 10
-> you sample between 10 and 11
t = 10.8 - 10 = 0.8
and you sample (1-t)*10 + t*11
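The scheme described above can be sketched in C (1D for brevity, since the 2D case just repeats it on both axes; the function and names here are mine, not from the stream code):

```c
/* Sketch of truncate-then-blend bilinear filtering in 1D: truncate the
   real coordinate, then blend between that texel and the next one to the
   right, using the fractional part as the weight. */
static float bilinear1D( const float* texels, int count, float realX ) {
	int x0 = ( int ) realX;                 /* truncated integer coordinate */
	int x1 = x0 + 1;                        /* always the next texel */
	if ( x1 >= count ) { x1 = count - 1; }  /* clamp at the right edge */
	float t = realX - ( float ) x0;         /* fractional part = blend weight */
	return ( 1.0f - t ) * texels[ x0 ] + t * texels[ x1 ];
}
```

With realX = 10.2 this blends texels 10 and 11 with t = 0.2, exactly as in the example above.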

There's a comment in the code, though, that mentions the coordinates used to look up the texture pixels still need to be formalized. For example, if the pixel coordinate equals the width or height, you can't use the same procedure as for the other pixels or you could access memory outside the texture bounds. Right now we mitigate this by subtracting 3 from the width and height, I think.

Edited by elle
You should imagine the pixels being shifted by a (0.5, 0.5) offset. Then the bilinear sampling formulas Casey uses will make more sense. Here's some reading about this: https://msdn.microsoft.com/en-us/...ary/windows/desktop/bb219690.aspx


Edited by Mārtiņš Možeiko
Yeah, there is still something we have to address, which is how we want to treat pixel centers, and that is why we don't have a 100% specified texture sampling situation at the moment. We will get to this a little bit later on, perhaps during the optimization pass, even, when we start constructing some test cases.

- Casey
mmozeiko, the article you pointed to actually talks about Direct3D 9 rasterization and a "problem" with how the UV coordinates don't match the rasterized primitive. It was interesting, and it contains a link to this article, Bilinear Texture Filtering (Direct3D 9), which answers my question:
A slightly more accurate and more common filtering scheme is to calculate the weighted average of the 4 texels closest to the sampling point; this is called Bilinear filtering...
So it seems that the "right way" is to sample different pixels depending on the "real" UV positions.

Thanks for the answers.