Visual Acuity, Vernier Acuity, Anti-Aliasing, and You

Well, it’s been a long time since I’ve posted here. My time has been spent focusing on the blog and making HDR video a reality. But somehow I got into a Twitter fight last evening during the Super Bowl. And as it turns out, it’s not easy to explain the subtleties of Visual Acuity and Vernier Acuity, and how they relate to anti-aliasing on hypothetical next-gen consoles, in 140 characters. I don’t actually have any inside information about next-gen consoles, but let’s assume that next-gen games will output at 1080p and have about as much processing power as a top-end GPU.

The argument essentially boils down to: is FXAA, which costs <1ms, enough, or do you want (or NEED) better AA? The first thing you have to understand is the difference between Visual Acuity and Vernier Acuity, which unfortunately is one of those things that no one teaches you. So let’s try.

You have all heard about visual acuity before. If you need a refresher, you can talk to your good friend Wikipedia. Essentially, as things get really small, the human eye has trouble distinguishing them (duh). So if text is too small and too far away, you can’t read it.

What you probably don’t know about is Vernier Acuity. Wikipedia is of course a great resource here too. Vernier acuity should make sense to anyone who has worked in games and seen aliasing. I’ve talked about this issue before when I argued that Apple is lying to you when they call the iPhone 4 a “Retina Display”, as well as in an early post about the difference between 720p and 1080p. But it should make intuitive sense to you: the human eye has an incredible ability to tell if two lines aren’t exactly aligned with each other. That’s how a vernier caliper works. You can think of vernier acuity as the official term for our eye’s ability to see aliasing.

But when it comes to choosing what AA technique you need for a given resolution, Visual Acuity vs Vernier Acuity is incredibly important. There are three cases:

1: Visual Acuity < Vernier Acuity < Resolution
If you are rendering at a resolution that is finer than Vernier Acuity, then AA is worthless (except for crazy extreme cases which I’ll get to in a moment). If you rendered to a screen with 10,000 DPI there would be no need for AA because the resolution is so fine that your eye can’t make out the aliased edges.

2: Resolution < Visual Acuity < Vernier Acuity
On the other hand, if you are rendering at a resolution coarser than Visual Acuity, then your eye can clearly see blurriness in your original image. In this case, any AA technique that increases the blurriness of your image will be easy to see. Also, in this situation techniques like MSAA have a definite quality advantage over post-AA techniques (like FXAA) but are usually more expensive.

3: Visual Acuity < Resolution < Vernier Acuity
If your resolution is in between Visual Acuity and Vernier Acuity then you are in a strange land. Your eye can’t pick out individual details, but it can tell if there are artifacts like aliasing. Here, FXAA should be good enough 99% of the time. Sure, a more expensive option like MSAA might look a teeny bit better. But if your AA technique reduces sharpness a little bit around the edges, your eye can’t tell, because the resolution is beyond your Visual Acuity.

My belief is that on next-gen consoles, most users will be in category 3. Most people who actually work in games (myself included when I was at Naughty Dog) think that aliasing is much more of an issue than it actually is. Partially, it’s because we are trained to look for it. But it’s also because the median viewer sees the screen at much lower effective resolution than we do. When I was at Naughty Dog I sat about 4 feet away from a 32-inch screen, but the average user sits about 10 feet away. And if we move from 720p to 1080p, aliasing becomes even less of an issue for the average user.
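To put rough numbers on the developer-versus-living-room gap, here’s a sketch. The 32-inch screen, 4-foot, and 10-foot figures come from the paragraph above; the 16:9 aspect ratio and the ~60 pixels-per-degree threshold for 1-arcminute visual acuity are my own assumptions.

```python
import math

def pixels_per_degree(diag_in, h_pixels, distance_in, aspect=(16, 9)):
    """Horizontal pixels per degree of visual angle for a flat screen."""
    w, h = aspect
    width_in = diag_in * w / math.hypot(w, h)
    fov_deg = 2 * math.degrees(math.atan(width_in / (2 * distance_in)))
    return h_pixels / fov_deg

dev = pixels_per_degree(32, 1920, 4 * 12)    # developer: 4 feet away
avg = pixels_per_degree(32, 1920, 10 * 12)   # average viewer: 10 feet away
print(f"developer: {dev:.0f} ppd, living room: {avg:.0f} ppd")
```

At 4 feet the developer sits right around the ~60 ppd visual-acuity limit, while the 10-foot viewer is near 145 ppd: well past visual acuity, but still far short of vernier acuity (which is measured in arcseconds), i.e. squarely in category 3.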

So that’s about where I stand. Once the next-gen consoles come around and we move to 1080p, there is a negligible difference between what a cheap solution like FXAA gives you and a theoretical “perfect” solution. The original comment that started this discussion was about using tiling to get MSAA. And I find it pretty hard to believe that the quality difference of MSAA (with TILING!) would justify the cost on next gen.

Side Note #1
There are always exceptions. If you are making Flower 2 and you are willing to dedicate 60% of your budget to rendering grass, then MSAA might be worth it. But for “standard” games like Skyrim, Halo, Modern Warfare, Uncharted, etc., I see FXAA as the best solution.

Side Note #2
For me, the #1 problem in video games is Shading and the #2 problem is Lighting. Everything has that “weird video game look” to it. If you only compare games to other games, you would think games look pretty good. But when you compare games to either VFX or real life on the same monitor, the illusion breaks down. Every time a commercial showing in-game footage comes up on TV, I cringe a little bit (even for games that I worked on). We can’t even make the easy surfaces like concrete/wood/tile look right, and we are REALLY far away from the harder surfaces like skin/cloth/hair/marble. That’s where I would rather put cycles if I were planning a next-gen game budget.

Side Note #3
So what about thin objects that cause flickering because they are so small? Simple: don’t have them. Instead, use alpha for things like a million blades of grass rather than individual triangles. The hardware isn’t very efficient with lots of 1-pixel triangles, so you’re better off using alpha cards anyway. For thin objects that are harder to replace, I think there are more interesting options for blending them with geometry shaders if you really need to. And if your game is a true outlier (like you absolutely must render a field), see Side Note #1.

Side Note #4
Still think I’m completely wrong? This is easy enough to test. Just load one of the post-AA comparison samples onto a PC, hook it up to a 42-inch display at 1080p, and invite random people off the street to sit 12 feet away. Then switch between no AA, 4x MSAA, and FXAA randomly and ask them to rate the quality. That’s the real way to settle this issue, although the results would be a bit biased: people are more likely to notice aliasing when they are asked to look for it than when they are just playing at home.

Side Note #5
Users and video game reviewers know virtually nothing about why an image looks good or bad. That’s why you always hear them say “texture resolution”. If a game looks good because it has good lighting, nice animation, and high-quality lighting models, most forum posters will say “Wow, this game has such detailed textures!”. But when a game has bad animation and gamma-space lighting they will say “Those textures look low-res!”. I don’t remember hearing a game reviewer say “Game A has a great anisotropic reflectance model on its brushed metal shaders!”. Instead, they’ll say “Game A has really crisp textures”, as if the developer magically found an extra 100 megs of texture RAM inside the console.

The same thing tends to happen with anti-aliasing. Oftentimes you’ll see two games, let’s call them Game A and Game B, where A looks much better than B. The reviews will say “Game A has detailed textures and clean edges” whereas “Game B has low-res textures and lots of jaggies”, even when they both use the exact same technique. Game reviewers don’t know what to say other than “texture detail” and “jaggies”, so they use those terms to justify what they already believe.

19lights blog is up!

For those who are interested in the HDR side of things, I’ve got a new blog for you. As most of you know, is my personal site. And I’ve finally put together a blog for 19lights, the company I’ve started. If you’re into all things HDR, you should check out the blog at

The latest post is Top 5 Optical Illusions for HDR Photographers. I’ve found that when trying to explain the concept of HDR photography to people, I keep re-explaining the Adelson Checkerboard illusion over and over and over. I can’t tell you how many times I’ve sat at someone’s laptop, opened Windows Paint, and started drawing rectangles to explain it (like in my GDC talk). Actually, the entire point of the filmicgames website was to stop explaining things to people: I wanted one place to say my piece about arguments that I would have over and over (like linear-space lighting). So I’ve finally put my explanation of the Adelson Checkerboard online (as well as a few other important illusions) and, if I’m really lucky, I can stop using Windows Paint. Although who are we kidding, I’ll always use Windows Paint.

As far as this blog, I’m going to keep posting here on occasion, but obviously the 19lights blog is a higher priority. Happy HDR!

Future of AA Techniques

Hopefully you all had a good time at Siggraph. For game programmers, the one course that you absolutely must check out is Filtering Approaches for Real-Time Anti-Aliasing. Make sure to check out the course webpage, which will include slides and sample images. Here are my general thoughts on it.

1. We don’t need perfection. And FXAA is pretty cool.
When looking at AA approaches, 720p with no AA has very bad aliasing artifacts. It’s really bad, especially if you have long lines and lots of contrast. I remember talking with Alex Fry (from EA) who made the point that you get a huge gain going from 1x to 2x MSAA. When you go from 2x to 4x, it looks better, but by a smaller margin. When you are an average distance away from a real screen, 2x gets you about 80% of the way there and 4x gets you that last 20%. 4x rotated-grid MSAA on the 360 looks very good. You’ll still see some aliasing in the worst cases but artifacts will be very rare. Obviously, there are many considerations here that I’m glossing over (like deferred/light prepass rendering).

I have the same feeling about FXAA. Eric Haines (from the RealTimeRendering blog) wrote the article FXAA Rules, OK? Guess what his opinion is? Generally, I’d agree with Eric’s points. FXAA is not perfect. You don’t get ideal edge fixing. But when I look at it in real games, it seems “good enough”. There are so many reasons why games don’t look photoreal, such as lighting models, faceting, texture filtering/compression, shadows, you name it. And FXAA is good enough that I’d rather focus my development time and GPU time on something else.

NaturalHDR: HDR Tonemapping for Video

Hey guys. I’ve gotten a lot of questions about what I’ve been up to since leaving Naughty Dog last January. And today I’m finally able to announce something!

NaturalHDR Trailer from 19lights on Vimeo.

After leaving, I started 19lights, LLC, where I’m the Founder/CEO. I’ve got big plans for world domination, and it starts with HDR for Video. The product is called NaturalHDR and you can check out all the info at If you’re interested, please sign up for the mailing list, or at least follow on twitter:

See you all at Siggraph!

The greatest failure of our patent system was…

Arithmetic encoding patents in JPEG images.

To be clear, I strongly oppose software, semiconductor design, and business method patents. That’s a longer discussion. So I was looking at how much time/money/innovation we actually lose from patents. IMO, the most damaging set of patents to the world are the ones covering Arithmetic Coding in the JPEG format.

As an example, check out this image.

Here are two links for larger versions. Can you see both in your browser?
Huffman Compressed
Arithmetic Compressed

No, it’s not a “Retina Display”

Ah yes, the “Retina Display” post. I’ve been meaning to do this one for a while and had decided to let it go, but then it came up in a private conversation, so I decided it was time to go through with it. Guess what my take on it is??? I talked about similar issues before in my 720p vs 1080p post, before it got overrun by trolls from

As you may have heard, the iPhone 4 with its “Retina Display” has 326 DPI (dots per inch). And Apple implies that the display is so good that you wouldn’t be able to tell the difference with a better display, because at a typical viewing distance the resolution of the display is higher than that of the human retina. Of course, technically “Retina Display” is just an Apple marketing term, but clearly that’s the narrative Apple is pushing. Here’s the exact quote from Steve Jobs:

“It turns out there’s a magic number right around 300 pixels per inch, that when you hold something around to 10 to 12 inches away from your eyes, is the limit of the human retina to differentiate the pixels.”
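Jobs’ numbers are easy to check with a little trigonometry. A sketch; the ~1-arcminute visual acuity figure (and the arcsecond-scale vernier acuity figure below) are standard textbook values, not from the quote:

```python
import math

dpi = 326.0         # iPhone 4 pixel density
distance_in = 12.0  # viewing distance from the quote

# Visual angle subtended by a single pixel, in arcminutes
arcmin_per_pixel = math.degrees(math.atan((1.0 / dpi) / distance_in)) * 60
print(f"{arcmin_per_pixel:.2f} arcmin per pixel")
```

One pixel subtends about 0.88 arcminutes at 12 inches, so against the 1-arcminute visual acuity figure the claim roughly holds. Against vernier acuity, which is an order of magnitude finer, it does not.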

How Hard is Your Face?

Got a face you have to render? CG faces are hard. Of course, some are harder than others so I’m introducing the “How Hard is Your Face” test.

How Hard is Your Face:

  1. Does it have real human proportions (vs. creature-like)?
  2. Is it light-skinned (vs. dark-skinned)?
  3. Is it younger (vs. older)?
  4. Is it attractive (vs. ugly)?
  5. Is it a woman (vs. a man)?

Of course, ALL faces are hard. Any time someone does a good CG face that moves, it’s a real accomplishment. That being said, some are harder than others.

OpenCL Drivers Drive Me Crazy

Let me ask you a question. Suppose Intel came out with a CPU that was 10x faster. But there was a catch: every once in a while it would just give you the wrong number. So suppose that you calculate x=x+7, and once in a while it just gives you x-4 instead. Or sometimes when you write memory it might just put it somewhere else. Would you rather use a CPU that actually works, or the CPU that is 10x faster but unreliable? Obviously, you would want a CPU that gives the right results because even one calculation error could easily crash your program.

For the last 10 years, GPUs have gotten away with cutting corners here. We have all dealt with various driver problems on PCs. And we’ve all seen the “white dots” issue, where sporadic white dots appear as a card starts to die.

This is why OpenCL is such a pain. Here is a test case that I ran into today. I was trying to figure out what was wrong with my code on this image. I’m using an AMD 5750. My program processes average-sized raw files (3900×2616).


The Problem With Graphics Research: Error Metrics

Graphics researchers and games programmers don’t talk to each other. That’s just how it is. I can’t count how many Siggraph papers claim to be applicable to games. Naty has a post over at where he summarizes some of the discussion at I3D about this and adds some insights as well. I give everything in that post a solid +1. But my main complaint has to do with error metrics.

Here is a hypothetical for you. There is some existing technique that everyone uses, and I have an approximation of it. It could be my lighting model vs. some established lighting model, or my HDR tonemapping algorithm vs. your HDR tonemapping program. Anything, really. My technique has 2% average error, and at any point the error is no more than 4% of the maximum absolute value. So is my lighting model good enough that it “solves” the problem? Of course not. If you are claiming that your technique is “visually indistinguishable” from another technique, absolute error metrics are basically worthless. And unfortunately, absolute error is the value that almost everyone in the graphics research community uses.
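To make that concrete, here’s a sketch of my own (the curve pairing is illustrative, not from any particular paper): compare pure gamma 2.2 against the sRGB curve it is often used to approximate. The worst absolute error is a fraction of a percent of full scale, but in the darkest codes the relative error approaches 100%, and dark regions are exactly where the eye is most sensitive.

```python
def srgb_to_linear(c):
    # Standard sRGB decode (IEC 61966-2-1 breakpoints)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(c):
    # Common approximation: a pure power curve
    return c ** 2.2

samples = [i / 255.0 for i in range(1, 256)]
abs_err = max(abs(srgb_to_linear(c) - gamma22_to_linear(c)) for c in samples)
rel_err = max(abs(srgb_to_linear(c) - gamma22_to_linear(c)) / srgb_to_linear(c)
              for c in samples)
print(f"max absolute error: {abs_err:.4f}")  # a fraction of a percent
print(f"max relative error: {rel_err:.2f}")  # nearly 1.0 (100%) in the shadows
```

By an absolute metric the approximation looks nearly perfect; by a relative metric it is wildly wrong exactly where it matters.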

The classic example that I use for how error metrics can be deceiving is the Xbox 360 PWL Gamma Curve. Here is a graph of the gamma 2.2 curve, the sRGB curve, and the Xbox 360 curve. I’ve mentioned this elsewhere on this site. You can click on it for the full-res version.


“Gamma Correction” and “Gamma Correction”

Hi everybody. It’s been a while since my last post, mainly because I’ve left Naughty Dog and started my own company. Suffice it to say, I’ve been very busy and feverishly coding. Don’t have anything to say about it right now other than it’s going to be cool and it’s not going to be a game. Btw, U3 is going to be crazy. And yes, I need to fix that image on the right.

Back on topic: one issue that always confuses people is the terminology around gamma correction. Sometimes when I’m talking about the issue, someone will say, “It doesn’t matter because people have poorly calibrated TVs.” The answer is: “Wrong Gamma Correction.” This kind of confusion happens all the time because we have two completely separate problems in CG/video games, and they are both called “Gamma Correction”.

Problem 1: Gamma Correction (Linear Space Lighting)
This problem is about making sure that all your lighting calculations are done in the correct space internally. You fix it by “Gamma Correcting” your textures in your shader, i.e. converting from sRGB to linear. Then, at the end of your lighting calculations, you convert from linear back to sRGB, which is what we usually assume your output color profile is.
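A minimal sketch of Problem 1 in Python, mirroring what the shader does. The constants are the standard sRGB breakpoints; names like `albedo_srgb` and `n_dot_l` are illustrative, not from this post.

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

albedo_srgb = 0.5   # texel as authored and stored (sRGB-encoded)
n_dot_l = 0.5       # diffuse lighting term

# Correct: decode to linear, light in linear space, encode once at the end
out_correct = linear_to_srgb(srgb_to_linear(albedo_srgb) * n_dot_l)

# Wrong: multiply sRGB values directly ("gamma-space lighting")
out_wrong = albedo_srgb * n_dot_l

print(f"linear-space: {out_correct:.3f}, gamma-space: {out_wrong:.3f}")
```

The gamma-space result comes out noticeably darker than the linear-space one, which is why games that skip this step end up compensating with over-bright lights and washed-out midtones.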