How to render pixel art correctly?

Just finished watching this handmade chat about pixel art rendering. I still don't understand it. Can I summarize the video as this: if the pixel is inside the texel then don't do the bilinear filter, if not then do it? Is there anything more to it? How do I implement it in a fragment shader, given that I have a UV position (0 to 1 range) and the texture's size?

if the pixel is inside the texel then don't do the bilinear filter, if not then do it?

That is exactly what it is. You understand it correctly.

The trick in the shader is to use the fwidth() function, which gives you the uv change per screen pixel in texture coordinate space. From that you can clamp your texcoords so that bilinear filtering only happens near texel edges.

Here's the GLSL code:

  // uv - your texcoord
  // tex - your texture sampler with bilinear filtering
  vec2 size = vec2(textureSize(tex, 0));
  uv *= size;            // to texel space
  vec2 duv = fwidth(uv); // how much uv changes per screen pixel, in texels
  // snap to the texel center, leaving only a duv-wide blend band at the edge
  uv = floor(uv) + vec2(0.5) + clamp((fract(uv) - vec2(0.5) + duv) / duv, 0.0, 1.0);
  uv /= size;            // back to 0..1 texture space

  // now sample as usual
  color = texture(tex, uv);

Here's a good article on this technique and some variations on it: https://jorenjoestar.github.io/post/pixel_art_filtering/


Edited by Mārtiņš Možeiko on

There are still a couple of things that I can't wrap my head around:

  1. Where is the UV when the GPU samples the texture? Is it at the center or the bottom left of the texel/pixel? This would also explain the +/- .5 parts in the code.
  2. fwidth gives you the current pixel size in texel space, right? So if a texel fits perfectly into a pixel, it will return 1? How much will it return if a texel fits inside a 2x2 pixel range? .5 or .25?
  3. The differences between the algorithms in the blog are just where they get their texels per pixel (i.e. where the bilinear-filtered area is) and how much bilinear filtering is applied, right? Are there any other constraints or requirements that I need to take into account?
  4. Another small thing that I often see is that people use the terms bilinear and linear filtering interchangeably. Are they the same?

Edited by longtran2904 on
Replying to mmozeiko (#25584)
  1. The uv fraction will be 0.5 when you want "nearest sampling" mode - just the exact color of one texel. Or it will be somewhere between two texel centers if the pixel covers two texels.

  2. fwidth gives the partial derivative of its argument - basically the difference between the next pixel and the current one for the varying argument. So in your 2x2 example, uv in texel space changes by 0.5 per screen pixel, and fwidth returns 0.5 per axis.

  3. There are different ways to handle pixels that cover a texel partially. Some of those techniques do the interpolation over extra texels, not just one - also for the neighboring ones. It's up to you to decide which one looks better; it can depend on exactly what kind of look you want to achieve.

  4. Bilinear is linear interpolation done twice - once for u, once for v. In the context we're talking about, "linear sampling" is mostly used to mean the same thing as "bilinear". See the sketch below.
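
To make 1 and 4 concrete, here's a minimal sketch - i, j, size, the four texel colors t00..t11 and the fractional position f are placeholders, not variables from the code above:

  // texel (i, j) of a texture has its center at (i + 0.5) / size in uv space
  vec2 texelCenterUV = (vec2(i, j) + vec2(0.5)) / size;

  // bilinear = linear interpolation done twice, once per axis
  vec4 top    = mix(t00, t10, f.x);    // linear along u
  vec4 bottom = mix(t01, t11, f.x);    // linear along u
  vec4 result = mix(top, bottom, f.y); // linear along v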


Edited by Mārtiņš Možeiko on
Replying to longtran2904 (#25604)

For some reason, your code doesn't work properly. Here's a reimplementation I made for Unity. The first part is your code, the second part is CSantos's filter from the blog.

Couldn't see a way to post the code correctly with markdown (the code just got squeezed into a single line). So here's a link.

Here's the result. The top row is yours; the left is the editor view, the right is the game view that I zoomed in on. If you look closely at the top and bottom pixels, you can see your code made the sprite shift down and to the left. (attached: image.png)


Edited by longtran2904 on
Replying to mmozeiko (#25584)

tbh I don't remember where that code comes from. It may be from some buggy implementation I had at some point. Here's another one I found in my code - but no guarantees it is correct:

  // uv - your texcoord
  // tex - your texture sampler with bilinear filtering
  vec2 size = vec2(textureSize(tex, 0));
  uv *= size;                 // to texel space
  vec2 duv = fwidth(uv);      // uv change per screen pixel, in texel units
  vec2 fuv = floor(uv + 0.5); // nearest texel boundary
  // blend across the boundary only within a duv-wide band around it
  uv = fuv + clamp((uv - fuv) / duv, -0.5, 0.5);
  uv /= size;                 // back to 0..1 texture space

  // now sample as usual
  color = texture(tex, uv);

You will need to check that it works or figure out the math yourself to get it working.
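
If it helps, here's the same thing wrapped as a function (fatPixelUV is just a name I'm making up here):

  vec2 fatPixelUV(vec2 uv, sampler2D tex)
  {
      vec2 size = vec2(textureSize(tex, 0));
      uv *= size;
      vec2 duv = fwidth(uv);
      vec2 fuv = floor(uv + 0.5);
      uv = fuv + clamp((uv - fuv) / duv, -0.5, 0.5);
      return uv / size;
  }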

To put multiline code in markdown, you put three backtick marks ``` on a new line, then your code, and then ``` again on its own line to end the code block.
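
For example:

  ```
  vec2 duv = fwidth(uv);
  ```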


Edited by Mārtiņš Možeiko on

Thanks for the markdown tip. I thought I just needed to indent each line and only use fenced code blocks when I didn't want to do that for every line.

Haven't checked the code, but I've fixed the "heavy" pixel problem from the previous post. My game doesn't have any zooming or rotation, but the problem can happen if you move the camera at a specific speed.


Replying to mmozeiko (#25610)

Another thing that I want to do is to render text correctly. I heard Jon briefly mention it here. I don't really understand what he meant. Is it related to what we have been discussing here?

I think what they meant in the video is that if you need to display text with a height of 30 pixels, don't take 20 pixel character sprites and display them at a height of 30 pixels, because it will look bad and make the text harder to read. Instead, rasterize a 30 pixel character and display that.

If your characters are pixel art and move on the screen, then yes, you probably want to do the same thing as for regular sprites.


Edited by Simon Anciaux on

I still don't understand. We are still using the same character sprites in both cases, right? So it's still 20 texels stretched into 30 pixels, right?


Replying to mrmixer (#25627)

A font file (e.g. a ttf file) contains vector graphics (Bézier curves), meaning it's a "mathematical" representation of the characters (more precisely, of the glyphs) that you need to rasterize (convert to a bitmap representation).

When you do that, you can choose the size you want for the bitmap. So if you need to display a character that is 30 pixels in height, you ask the library (stb_truetype, FreeType...) to rasterize at that size, so you don't need to scale the sprite to the desired size.

If you want to use a handmade bitmap pixel art font (a font where you draw each character by hand), you'll need to make different sized versions of the glyphs.


Edited by Simon Anciaux on

Oh, so you don't rasterize the bitmap to one resolution upfront, but store the actual vector data from the font file. That makes sense. So what Jon said is to store the font's vector data and then rasterize it at the screen's resolution later, in a post-processing pass?


Replying to mrmixer (#25629)

What I (and I assume most people) do is rasterize the font into an atlas at startup at different heights, because most of the time you know that you will only have a few sizes of text and the glyphs you need. You can change the sizes depending on the screen resolution or DPI scaling parameters. E.g. at 1080p you may want text at 20, 24 and 30 pixel height, but if the user has a 4K monitor, you multiply those numbers by 2 (or the DPI scaling factor), so you render at 40, 48 and 60 pixel height.

Note that you could rasterize each character separately (no atlas) using the actual position of the character on the screen (with subpixel precision) to get the best looking text, but if you have a lot of text that would be (I suspect) expensive. And if you use a pixel art or monospaced font, it's unnecessary.

You can also look at ClearType to get "3 times the horizontal resolution". I never used it, but I think there are examples on these forums.

For storing, I just keep the font file (.ttf) in memory so I can recreate the atlas with new glyphs (mostly in applications that need to handle text input or external text sources).

The post-processing part is just that, independently of the render size of the "world" (sometimes you render at a smaller resolution and then upscale to the screen resolution for performance or artistic reasons), the UI is often rendered in a separate pass at the screen resolution so that it always looks sharp. It could also be that they actually suggest rendering the font in a pixel shader, but I don't know much about that (except that it's possible).


Edited by Simon Anciaux on
Replying to longtran2904 (#25630)

In my game, I use SmoothDamp to move the camera around. So when the camera moves close to the player, its speed drops rapidly, which makes the sprites look laggy. Applying the shader fixes the problem.

Here's an example: the camera uses SmoothDamp to move to the left, waits a couple of seconds, moves to the right, and repeats. The left image is laggy (look closely at the edges of the pixels) and the right image is OK. The top text (which is just a normal sprite) is laggy and the bottom one is good.

But there's another problem: the bloom effect. Here's an example. If you apply the bloom effect to a sprite, the problem isn't noticeable. But in my case, I randomly generate a bunch of stars on a texture that consists of a few bright dots on a dark background. How can I fix it?

Edit: To be more specific, the shader for generating the stars is here:

    uv.x *= ratio; // The texture isn't square and I want the noise to stretch rather than tile.
    
    // Pixelate the texture
    uv *= pixelateAmount;
    uv = floor(uv);
    uv /= pixelateAmount;
    
    // Star generation
    float star = GetNoise(uv, noiseScale);
    star = smoothstep(minValue, maxValue, star);
    star *= GetMovingNoise(uv); // I have another noise that changes its scale over time so that the star value isn't static
    
    vec3 col = star * starColor * brightnessScale;
    col += backgroundColor; // This will make black pixels have the background color (it also makes normal stars a little brighter, but I don't care)

What I want to know is when I should apply the fat pixel shader in the star shader (for example, calculating the correct UV first and then passing it into the star shader). The bloom effect I'm using is from Unity's URP, which is implemented based on this article (13-tap downsampling and 9-tap upsampling with a tent filter).


Edited by longtran2904 on

Replying to longtran2904 (#25970)

Regardless of what you do in your shader, the "fat pixel shader" code applies directly before sampling the texture. Whatever uv you have calculated, you pass it through the "fat pixel shader" calculations to adjust it right before the actual sampling. It will work with any effect you applied to the uv before that.
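
As a minimal sketch of that ordering - computeStarUV() is a hypothetical stand-in for your star uv math, and fatPixelUV() is the helper from the earlier post:

  vec2 uv = computeStarUV(inUV); // any uv effects happen first
  uv = fatPixelUV(uv, tex);      // the fat pixel adjustment goes last...
  color = texture(tex, uv);      // ...right before the actual sample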

