Reducing stretch in high-FOV games using barrel distortion

January 20, 2015 Graphics, Math

Abstract

Lens distortion in computer games is nothing new, but it’s typically only used for effects like looking through a weapon’s scope or binoculars. This article discusses in detail why and how to apply a certain (barrel) distortion effect to any virtual wide-angle camera (e.g. a first or third person camera) using a post process effect. The proposed effect is meant to reduce the stretch artifacts in perspective-projected virtual scenes as perceived when the display is too small or is viewed from too far away. The implementation requires only two additional instruction slots in a post effect fragment shader, making it extremely efficient. To see the effect in real time, open this WebGL sample in your browser or download the more extensive Ogre3D sample from the Downloads section at the bottom of this page.

Pushing the field of view

Figure 1. Physical versus virtual FOV.

Rendering a game using a wide FOV may be desirable because it allows a gamer to be more aware of the virtual surroundings or may aid in achieving a particular cinematic look. However, it can cause noticeable visible distortion, depending on where the gamer is relative to the display. For example, a viewer sitting at A as depicted in Figure 1 wouldn’t notice any distortion, while a viewer sitting at B would. The observed distortion causes on-screen objects to look too small near the screen’s center and too large and stretched near the screen’s edges. This distortion becomes even more objectionable when the virtual camera is rotating, making objects not only rotate around the camera’s position, but also scale and stretch unnaturally along their 2D path over the screen, as if they were somehow projected onto a hollow screen. As a result, distances between on-screen objects become harder to judge, and the unnatural movement may even trigger feelings of nausea in some people.

Reducing stretch

This article suggests applying a small amount of barrel distortion in order to reduce the negative effects of rendering with a higher-than-ideal FOV. Adding barrel distortion reduces stretch, but it will also bend previously straight lines. Luckily, humans don’t seem to find this side effect too objectionable as long as it’s kept subtle.

Figure 2. Pincushion and barrel distortion.

Figure 3. A 140° FOV perspective camera.

Maybe that’s because we’re used to looking at pictures created with physical lenses that almost always add some barrel distortion themselves. Or maybe it’s because experiments suggest that the human visual system already produces some barrel distortion on its own, even though most people aren’t consciously aware of this effect. In any case, more extreme FOV cameras will require more stretch compensation, and will therefore also bend straight lines further. Hence, adding barrel distortion to compensate for perspective stretching can only be pushed so far before it becomes objectionable in its own way.

Figure 4. Adding barrel distortion to Figure 3.

Additionally, some resolution will effectively be lost when the barrel distortion effect is used to warp an already rendered input image as part of some post effect pipeline. That’s because the center of the image is slightly zoomed in, which causes the pixels near the center to spread over a larger area while pushing thin off-diagonal outer regions near the image’s edges out of the screen rectangle. Consequently, the diagonal FOV will be exactly the same before and after the distortion, while the horizontal and vertical FOV will be slightly smaller afterwards. Compare the perspective-projected Figure 5 to the barrel-distorted Figure 6 for an example. The loss of horizontal and vertical FOV can be compensated for by rendering the original image with a slightly larger FOV to begin with. Compare Figures 6 and 7 to see the difference. How much FOV compensation is needed will be discussed in a later section.

So in short, adding enough barrel distortion to remove the surplus of stretching in wider-than-ideal FOV cameras is by no means a silver bullet, especially when applied as a post effect. But when the effect is kept subtle enough, the benefits can definitely outweigh the downsides, and the FOV can easily be pushed a bit further with little objection.

Please note that I am actually not the first to suggest using a reprojection to remove certain types of visual distortion, as some of the articles and papers mentioned in the Further Reading section show. This article does, however, introduce a novel, more parameterized and generalized stretch-removing reprojection method and implementation.

Introducing the math

Now that the concepts have been explained, it’s time to cover the math involved. For more detailed explanations of FOV and different projection types, see the Further Reading section. Starting with the physical display and the viewer in front of it, the half height i is half the display height divided by the distance between the display and the viewer. So, when the display has the aspect ratio a, a diagonal of d inches and a distance to the viewer of x meters, then i is equal to 0.0254\ d\ /\ (2 \sqrt{1 + a^2} x) . The physical vertical FOV, which is the angle subtended vertically by the display as seen from the viewer’s perspective, is therefore equal to 2\ atan(i).

The virtual camera will have a vertical FOV as well, from which the half height h can be calculated (i.e. h = tan(FOV_Y/2) ). As the physical and virtual FOV need to match to get a perspective-projected image that is perceived by the viewer to be free of stretch, and a half height value is directly related to a FOV value, the half heights h and i will need to match as well. However, when the virtual FOV becomes larger than the physical FOV, or equivalently, when h is pushed beyond i, the resulting imagery will be perceived as being overly stretched unless sufficient barrel distortion is added to compensate for this effect.
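To make these relations concrete, the following C++ snippet computes i, the physical vertical FOV and h. It is not part of the article’s samples, and the display and FOV numbers are just example values:

#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);

    double a = 16.0 / 9.0;  // display aspect ratio (width / height), example value
    double d = 27.0;        // display diagonal in inches, example value
    double x = 0.8;         // viewer-to-display distance in meters, example value

    // Physical half height: i = 0.0254 d / (2 sqrt(1 + a^2) x).
    double i = 0.0254 * d / (2.0 * std::sqrt(1.0 + a * a) * x);
    double physicalFovY = 2.0 * std::atan(i);   // physical vertical FOV (radians)

    // Virtual half height: h = tan(FOV_Y / 2).
    double virtualFovY = 100.0 * pi / 180.0;    // virtual vertical FOV, example value
    double h = std::tan(virtualFovY / 2.0);

    std::printf("i = %.3f (physical FOV_Y = %.1f deg), h = %.3f\n",
                i, 180.0 * physicalFovY / pi, h);
    // Here h > i, so the image will be perceived as stretched unless
    // barrel distortion is added to compensate.
}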

Radial distortion types, including barrel distortion, are often implemented using Brown’s model, scaling each individual 2D pixel position vector by a weighted sum of powers of this vector’s length. Any type of radial distortion, including any variant of radial barrel distortion, can be realized this way by choosing the right set of weights. This, however, is not necessarily the most efficient method when a precise distortion profile is required. The barrel distortion function presented here is based on the direct transform from perspective projection to stereographic projection instead. Stereographic projection has the nice property of being conformal, which causes objects to be projected without any stretch at the cost of introducing radial bending. Furthermore, the math involved in this remapping is relatively efficient and easy to invert.

For the barrel distortion function it is assumed that the perspective projection has already been applied and that the resulting 2D screen coordinates lie in or on the rectangle [-1, -1]\ -\ [1,\ 1]. These perspective-projected screen coordinates will be represented by the vector \mathbf{p}, while the vector \mathbf{b} will be used to represent screen coordinates that are barrel distorted. Distorted coordinates can be converted to perspective screen coordinates using the function \mathbf{p}(\mathbf{b}), which is defined as follows:

(1)\quad \mathbf{p}(\mathbf{b}) = \mathbf{b}\ /\ (z\ -\ n_x\ b_x^2\ -\ n_y\ b_y^2) ,

where z = \frac{1}{2} + \frac{1}{2}\sqrt{1 + h^2 s^2 (1 + a^2)}, n_x = a^2 c^2\ n_y  and n_y = (z-1)\ /\ (1 + a^2 c^2). The function \mathbf{p}(\mathbf{b}) can be used to implement a distorting post effect by sampling the scene rasterized into an intermediate buffer (i.e. the render target) at \mathbf{p}(\mathbf{b}) for each final screen pixel at some position \mathbf{b}. The actual derivation of \mathbf{p}(\mathbf{b}) is omitted here for brevity, but can be found in the notebook available in the Downloads section.

The helper constant z defines the zoom factor that is needed to map each corner coordinate to itself so that the corners will lie on the [-1, -1]\ -\ [1,\ 1] rectangle both before and after the barrel distortion. The constants n_x and n_y are related to the rate at which b_x and b_y change the output \mathbf{p} non-linearly, respectively. As before, the values a and h represent the aspect ratio and virtual half height, respectively. c is the cylindrical ratio, which can be assumed to be equal to 1 for now and will be covered in more detail in a later section. Lastly, s is the strength of the distortion effect, where 0 means no added distortion, and 1 means full stereographic-to-perspective re-projection.
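For reference, Eq. 1 and its helper constants translate almost directly into code. The following C++ sketch is illustrative only; the struct and function names are mine and not those of the C++ library in the Ogre3D sample:

#include <cmath>

struct Vec2 { double x, y; };

// Eq. 1: maps a barrel-distorted screen position b (normalized to [-1, 1])
// to the perspective-projected position p to sample the rendered image at.
Vec2 undistort(Vec2 b, double a, double h, double s, double c) {
    double z  = 0.5 + 0.5 * std::sqrt(1.0 + h * h * s * s * (1.0 + a * a));
    double ny = (z - 1.0) / (1.0 + a * a * c * c);
    double nx = a * a * c * c * ny;
    double k  = z - nx * b.x * b.x - ny * b.y * b.y;  // denominator of Eq. 1
    return { b.x / k, b.y / k };
}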

The above formula can easily be implemented as an OpenGL post effect shader pair, as the following GLSL snippet shows. This snippet is used as-is in the WebGL sample. See the Ogre sample for an equivalent implementation using the Cg shader language.

//////////////////////////////// vertex shader //////////////////////////////////

uniform float strength;           // s: 0 = perspective, 1 = stereographic
uniform float height;             // h: tan(verticalFOVInRadians / 2)
uniform float aspectRatio;        // a: screenWidth / screenHeight
uniform float cylindricalRatio;   // c: cylindrical distortion ratio. 1 = spherical

varying vec3 vUV;                 // output to interpolate over screen
varying vec2 vUVDot;              // output to interpolate over screen

void main() {
    gl_Position = projectionMatrix * (modelViewMatrix * vec4(position, 1.0));

    float scaledHeight = strength * height;
    float cylAspectRatio = aspectRatio * cylindricalRatio;
    float aspectDiagSq = aspectRatio * aspectRatio + 1.0;
    float diagSq = scaledHeight * scaledHeight * aspectDiagSq;
    vec2 signedUV = (2.0 * uv + vec2(-1.0, -1.0));

    float z = 0.5 * sqrt(diagSq + 1.0) + 0.5;
    float ny = (z - 1.0) / (cylAspectRatio * cylAspectRatio + 1.0);

    vUVDot = sqrt(ny) * vec2(cylAspectRatio, 1.0) * signedUV;
    vUV = vec3(0.5, 0.5, 1.0) * z + vec3(-0.5, -0.5, 0.0);
    vUV.xy += uv;
}

/////////////////////////////// fragment shader ////////////////////////////////

uniform sampler2D tDiffuse;     // sampler of rendered scene’s render target
varying vec3 vUV;               // interpolated vertex output data
varying vec2 vUVDot;            // interpolated vertex output data

void main() {
    vec3 uv = dot(vUVDot, vUVDot) * vec3(-0.5, -0.5, -1.0) + vUV;
    gl_FragColor = texture2DProj(tDiffuse, uv);
}

The vertex shader is used here to prepare the inputs to the fragment shader, in which the actual result of \mathbf{p}(\mathbf{b}) is calculated and used to sample the render target tDiffuse. Because of these precalculations, the fragment shader itself is extremely efficient. In fact, it requires only one additional interpolation register (that is, one extra varying) and two additional instruction slots (that is, one dot product, and one multiply-and-add) compared to the simplest possible pass-through post effect. The texture2DProj() function should be available on most hardware and perform as fast as a regular texture2D() operation. However, it may be replaced by the equivalent texture2D(tDiffuse, uv.xy / uv.z) if unavailable.

Reversing the distortion

The function \mathbf{p}(\mathbf{b}) undistorts a barrel-distorted perspective projection. In other words, when the input \mathbf{b} represents the final barrel-distorted screen position, this function’s output represents the corresponding position \mathbf{p} on the perspective-projected rendered image without barrel distortion. This matches what is needed for a post effect and for mouse picking, among other things. It is also possible to calculate the inverse mapping using the following formula, thus converting a perspective-projected position \mathbf{p} to the corresponding distorted position \mathbf{b}.

(2)\quad \mathbf{b}(\mathbf{p}) = z\ \mathbf{p}\ /\ (\frac{1}{2} + \sqrt{\frac{1}{4} + z\ (n_x p_x^2\ +\ n_y p_y^2) } ).

This particular formula may be used to calculate where some 3D object will be visible on the final barrel-distorted screen, for example. To do so, the object’s 3D position first has to be projected using the standard model-view-projection matrix to a 2D coordinate between [-1, -1]\ -\ [1,\ 1], which can then be mapped to \mathbf{b} using the formula above.
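Written out in the same illustrative C++ style as before, Eq. 2 becomes:

#include <cmath>

struct Vec2 { double x, y; };

// Eq. 2: maps a perspective-projected position p (normalized to [-1, 1])
// to the corresponding position b on the barrel-distorted screen.
// z, nx and ny are the same constants as defined for Eq. 1.
Vec2 distort(Vec2 p, double a, double h, double s, double c) {
    double z  = 0.5 + 0.5 * std::sqrt(1.0 + h * h * s * s * (1.0 + a * a));
    double ny = (z - 1.0) / (1.0 + a * a * c * c);
    double nx = a * a * c * c * ny;
    double r  = 0.5 + std::sqrt(0.25 + z * (nx * p.x * p.x + ny * p.y * p.y));
    return { z * p.x / r, z * p.y / r };
}

// Usage: project the object's 3D position by the model-view-projection
// matrix, divide by w to get p, then call distort(p, a, h, s, c).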

Controlling the strength

The formulas \mathbf{b}(\mathbf{p}) and \mathbf{p}(\mathbf{b}) are based on the transform from perspective to stereographic projection and back, respectively, but they have been generalized to (among other things) allow control over the distortion strength through the parameter s. When s is 1, perspective stretch is completely removed. This setting has been used to create the figures so far. However, this is probably too extreme for most actual use cases, as some perspective stretch is still desirable when the image is to be displayed on a flat screen from a finite distance. It is possible to calculate the ideal amount of stretch for a particular viewer and reduce the stretch caused by an overly wide-angled virtual camera back to that same ideal amount. The s that exactly leads to that ideal stretch factor can be calculated as follows:

(3)\quad s = \sqrt{ (h^2 - i^2)\ /\ (h^2\ (1 + i^2 (1 + a^2))\ ) }

For the derivation of this formula, see the notebook under Downloads. The formula doesn’t take into account any of the downsides of adding barrel distortion, like the introduced radial bending and the loss of some effective resolution, which perhaps makes this calculation more of an upper bound for s than an ideal value. Consequently, for some applications it might be preferable to scale down the output of this formula by some tweaked value, use a value for i closer to h, or perhaps even not use this formula at all and set s directly to some tweaked constant like s = 0.5. Nevertheless, directly or indirectly controlling s based on the individual viewer might be an interesting idea that could be achieved automatically in real time using the distance estimation capabilities of a Kinect device, for example.
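As a sketch, Eq. 3 could be wrapped up as follows. The early-out for h <= i is my own addition (when the virtual FOV isn’t wider than the physical one, there is no surplus stretch to remove); the formula itself is Eq. 3:

#include <cmath>

// Eq. 3: the strength s that exactly removes the surplus of stretch for a
// viewer with physical half height i and a camera with virtual half height h.
double idealStrength(double h, double i, double a) {
    if (h <= i) return 0.0;  // no surplus stretch, so no distortion needed
    return std::sqrt((h * h - i * i) / (h * h * (1.0 + i * i * (1.0 + a * a))));
}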

Controlling cylindricity

Another parameter in the distortion function is the cylindrical ratio c. This parameter doesn’t change the overall amount of the distortion, but affects its main direction. To be more exact, it defines the amount of applied horizontal versus vertical distortion without altering the amount of distortion in the image’s corners. Setting c to 1 results in a perfectly spherical distortion, values higher than one result in vertical lines being bent less, and values closer to zero result in straighter horizontal lines. Compare Figures 11, 12 and 13, which show the results of applying a full-strength distortion to a 140° FOV perspective-projected image using three different values of c.

Setting c to a value larger than one may improve the visual quality in many cases. That might be because humans seem to be ‘better’ at interpreting bent horizontal lines as straight than at interpreting bent vertical lines as straight, especially when the virtual camera is rotating. For this or other aesthetic reasons, some movie makers have been known to select certain anamorphic lenses for their films, which possess a type of semi-cylindrical distortion that can be approximated by c = 2. In the Ogre3D sample, it’s also possible to make this value depend smoothly on the virtual camera’s pitch, reverting to a spherical projection when looking up or down, while approaching a tweakable value when gazing closer to the horizon.
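The exact pitch-dependent curve used by the Ogre3D sample isn’t reproduced here, but one possible scheme could look like the following sketch, which blends towards c = 1 as the camera pitches away from the horizon:

#include <cmath>

// Illustrative only: pitch is in radians, 0 at the horizon and +/- pi/2
// when looking straight up or down.
double cylindricalRatioForPitch(double pitch, double cAtHorizon) {
    double t = std::cos(pitch);           // 1 at the horizon, 0 when up/down
    return 1.0 + (cAtHorizon - 1.0) * t;  // reverts to spherical (c = 1)
}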

Pinning the horizontal FOV

The loss in horizontal and vertical FOV after distortion (as a result of the original outer areas being pushed off the screen) can be compensated for by rendering the scene with a slightly higher FOV in the first place. It’s possible to calculate the half height h to render with in order to end up with a specified horizontal FOV at some height y on the screen, which will be denoted here as FOV_{X|y}.

The chosen value for y may lie anywhere between 0 and 1. For example, FOV_{X|y=0} represents the horizontal FOV through the center of the screen, which effectively specifies the minimum horizontal FOV for the whole screen. FOV_{X|y=1} can be used to specify the horizontal FOV between the left and right corners of the screen, effectively specifying the maximum visible horizontal FOV on the screen. FOV_{X|y=1/2} is interesting as well, as it can be shown that y=1/2 is the largest y for which the average horizontal FOV over the whole screen will be at least as large as the horizontal FOV specified. Compare Figures 14, 15 and 16 to see the difference between using h based on c = s = 1 together with FOV_{X|y=0} = 140 ^{\circ}, FOV_{X|y=1/2} = 140 ^{\circ} and FOV_{X|y=1} = 140 ^{\circ}, respectively. Also compare these to the original 140° FOV perspective-projected image in Figure 3, which would be equal to the output for all three FOV calculations if s had been 0 instead of 1.

The half height h to be rendered with in order to end up with the desired FOV_{X|y} after distortion can be computed as follows:

(4)\quad h = \mathbf{p_u}\bigg(\ \mathbf{b_u}\Big(\begin{bmatrix}w \\ w\ y / a\end{bmatrix} \Big)_x\ \ \begin{bmatrix}1\\1 / a\end{bmatrix}\ \bigg)_y

Here, the value w is the desired horizontal half width, which is equal to tan(FOV_{X|y}\ /\ 2). The functions \mathbf{b_u}(\mathbf{p_u}) and \mathbf{p_u}(\mathbf{b_u}) are distort and undistort functions that are equivalent to \mathbf{b}(\mathbf{p}) and \mathbf{p}(\mathbf{b}), respectively, but work on non-normalized coordinates instead. This means that any position \mathbf{p_u} will lie in or on the rectangle [-a\ h, -h]\ -\ [a\ h,\ h], and that these functions distort and undistort, respectively, without doing any uniform scaling to guarantee matching corner coordinates. Consequently, the functions \mathbf{b_u}(\mathbf{p_u}) and \mathbf{p_u}(\mathbf{b_u}) may be defined as being identical to Eq. 1 and Eq. 2, respectively, but require the values z = 1, n_x = c^2 n_y and n_y = (s^2 (1 + a^2))\ /\ (4(1 + a^2 c^2)) instead.
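Combined, Eq. 4 and these unnormalized variants might be implemented as follows. This is again an illustrative C++ sketch with names of my own choosing:

#include <cmath>

struct Vec2 { double x, y; };

// The unnormalized constants: z = 1, ny = s^2 (1 + a^2) / (4 (1 + a^2 c^2)),
// nx = c^2 ny.
static void unnormalizedConstants(double a, double s, double c,
                                  double& nx, double& ny) {
    ny = (s * s * (1.0 + a * a)) / (4.0 * (1.0 + a * a * c * c));
    nx = c * c * ny;
}

// Unnormalized Eq. 1 (undistort), with z = 1.
static Vec2 pu(Vec2 b, double nx, double ny) {
    double k = 1.0 - nx * b.x * b.x - ny * b.y * b.y;
    return { b.x / k, b.y / k };
}

// Unnormalized Eq. 2 (distort), with z = 1.
static Vec2 bu(Vec2 p, double nx, double ny) {
    double r = 0.5 + std::sqrt(0.25 + nx * p.x * p.x + ny * p.y * p.y);
    return { p.x / r, p.y / r };
}

// Eq. 4: the half height h to render with so that the horizontal FOV at
// screen height y (0..1) equals fovX (in radians) after distortion.
double halfHeightForPinnedFovX(double fovX, double y,
                               double a, double s, double c) {
    double nx, ny;
    unnormalizedConstants(a, s, c, nx, ny);
    double w  = std::tan(fovX / 2.0);             // desired half width
    double bx = bu({ w, w * y / a }, nx, ny).x;   // distorted x at height y
    return pu({ bx, bx / a }, nx, ny).y;
}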

It’s worth noting that smaller cylindrical ratios (i.e. smaller values for c) will need a wider camera FOV to fully compensate for the loss of horizontal FOV after barrel distortion, as there is less horizontal loss for higher values of c. Compare Figures 11 and 13. Consequently, when pinning the horizontal FOV, higher values for c cause less image resolution degradation during the resampling of an original perspective-projected image, as more ‘pixels per degree’ will be available to sample from when distorting it into the final visible number of degrees.

Properties in more detail

The previous sections contain all the information needed to implement and use the distortion effect. This section is about better understanding how different input parameters affect different aspects of the output. The math itself has been omitted here for brevity, but is available in the downloadable notebook file. All graphs are plotted assuming that c = 1 and are presented as functions of the diagonal FOV angle, which is equal to 2\ arctan(h\ \sqrt{1 + a^2}). By using the diagonal FOV, the data presented here is made independent of the aspect ratio.

Figure 17. Maximum object stretch.

The proposed barrel distortion effect is meant to (partially) remove stretch visible in perspective-projected images. Stretch is defined here as the ratio of the amount of scaling in the direction towards the image’s center and the amount of scaling perpendicular to that direction. This stretch ratio is the most extreme near the screen’s corners, which is where the stretch has been measured for the following graph. The curve for s = 0 clearly shows that stretch as a result of perspective projection quickly grows as the FOV increases. In fact, it will have grown to infinity at 180^{\circ} . Note that using the barrel distortion with s = 1 removes all stretching artifacts, as can be expected from the resulting stereographic projection.

Figure 18. Maximum object scale.

The distortion also affects the maximum ratio in apparent scale between an object seen in the center of the screen and the same object seen at the same 3D distance near the screen’s corner. This ratio is plotted in the next graph, again for a number of values of s. Note that scaling differences get smaller for larger values of s, but they are never completely removed, as the presented barrel distortion formula (or any stereographic-based reprojection, for that matter) isn’t capable of perfectly mapping a perspective projection into a so-called equi-solid projection.

Figure 19. Maximum object bending.

The amount of bending of straight lines caused by the distortion effect can also be quantified. Bending is the most severe near the screen’s corners, which is why the data presented in the graph has been computed there. The output is given as the ratio of the screen diagonal versus the ‘line diameter’. This line diameter is the diameter of a circle that has the same curvature as the most bent on-screen straight line possible. So, for example, a value of 1/3 means that the most bent on-screen line is as curved as a circle three times the diameter of the circle circumscribing the screen.

Figure 20. Minimum resolution.

Adding barrel distortion will cause the image to become somewhat zoomed in near the center of the image. Consequently, when the distortion is used to resample an already rasterized image, the center of the image will suffer the most from loss of resolution. The following graph shows how pixel resolution at the center is affected as a function of the diagonal FOV.

Conclusions

The presented distortion method allows for the exact calculation and compensation of the surplus of stretch that results from pushing a virtual perspective camera beyond its ideal FOV for a particular viewer. As it uses a straightforward generalization of the conversion from perspective projection to stereographic projection and back, it’s both exact and efficient. Furthermore, the introduced stepless strength and cylindricity parameters allow for fine-grained tweaking and adaptive control in real time. Although the transform can somewhat degrade resolution and introduce radial bending of features and lines, these downsides only become noticeable when the FOV and strength are both set to particularly high values. Consequently, compared to standard perspective projection, the presented technique allows the FOV to be pushed further before any form of distortion becomes objectionable or uncomfortable. The benefits and drawbacks of the technique have been described in detail through the use of example images and graphs, and two downloadable sample applications have been made available below to experiment with the effect in real time.

Further Reading

Downloads

  • WebGL sample. This sample in three.js should work in any WebGL-enabled browser and demonstrates the basic barrel distortion post effect. It allows the camera angle, FOV, strength and cylindrical ratio to be controlled in real time. All code is licensed under BSD, which means that it can be used both for commercial and non-commercial purposes as long as credit is given appropriately. Open the HTML page’s source in your browser to view the JavaScript source. [HTML] [Source ZIP]
  • Ogre3D sample project. All formulas discussed in this article have been implemented in this Ogre3D demo, and all screenshots in this article have been made using it. It includes a Cg version of the distortion post effect, and a compact and reusable C++ distortion math library. Besides being able to control the FOV, strength and cylindrical ratio, it shows the current values of the properties discussed in the ‘Properties in more detail’ section in real time. All code is licensed under BSD, which means that it can be used both for commercial and non-commercial purposes as long as credit is given appropriately. [Windows binary ZIP] [Source code ZIP]
  • Mathematica notebook. The derivations of all presented formulas and properties are available in this Mathematica 9 notebook. [NB] [PDF]

Comments (29)

rouncer
January 25, 2015

That’s great, but ->
Is there anything you can do to help the lack of resolution in the centre of the screen?

Giliam
January 25, 2015

Hi rouncer. There’s a couple of things you could try, all with different pros and cons.

(1) When the backbuffer is antialiased and the increase in FOV is fairly limited, you may be able to get away with applying a simple sharpening filter (with a strength that fades to zero towards the edges) as part of the same post effect pass.

(2) You could also slightly increase the rendertarget’s resolution beyond the screen’s resolution to compensate. (The rendertarget width and height ratio to use to get perfect 1:1 pixels in the center of the screen is one over the appropriate value from Figure 20.)

(3) If you want to push the FOV a lot, rendering to (half) a cube map might also be an option. (You could still use the same math functions up to a diagonal FOV of 180 degrees, if you want, by using a 1 for the W dimension.) However, using cubemaps might require more rigorous changes to a renderer’s setup. Options 2 and 3 both require more pixels to be rendered to the rendertarget. Option 2 will be more performant than option 3 up to some (relatively high) FOV. At which FOV option 3 becomes better than option 2 is hard to say, as that will depend on the aspect ratio, but also on the more complex setup of cubemaps (as it would affect the no. of reprocessed vertices, no. of batches, no. of occlusion query runs, ..).

(4) You could also combine these ideas. For example, render to a left-half and a right-half 2D rendertarget, where the two halves effectively form a V shape (as if rendering to a cube rotated about the Y axis by 45 degrees, although the two faces don’t have to be square or at 90 degrees). You could seamlessly reproject these back as part of a post effect pass applied to the left half of the screen and another applied to the right half of the screen. With option 4, you can get some of the quality benefits of cubemaps (especially for wide screens), but using a probably more performant setup. You’d probably want to invest some time in figuring out what rendertarget aspect ratio together with what angle between the two cameras works best, though.

Hope this helps!

Pascal
April 22, 2015

Hello Giliam,
I have a question: is it possible to use the three.js raycaster hit test in combination with the lens distortion?
The hit test is working, but on the wrong items; there’s an offset caused by the lens distortion.
Do you have an idea how to fix this?

Regards,
Pascal

Giliam
April 22, 2015

Hi Pascal, I didn’t have a look at the hit-test but my first guess is that you might not have applied formula 1 to ‘undistort’ the final (distorted) screen space input coordinates (e.g. the mouse position) to the undistorted screen space equivalent. In other words, be sure to apply formula 1 to the mouse coordinates normalized between [-1,1], and use those new coordinates to base the raycast/hit-test on instead. Good luck!

Doe
July 25, 2015

Hello, I have been unable to implement your algo (even as a simplified version) with HLSL\Direct3D\ReShade. Here is a very simple example (though I slightly edited out framework stuff) of a lens distortion algo from ReShade: http://pastebin.com/TJyDiWk3 Is it possible that you could port your algo to a simple HLSL example at least, like my example? (You could use constants to simplify.) Regards.

Doe
July 26, 2015

Nevermind I found the bug with the HLSL port. tex2Dproj seems to be broken for some reason. But tex2D with divide by z works. But I still have a question, is aspect ratio the aspect ratio in the projection matrix? Because normally that’s Aspect Ratio * 0.75, with a 1.0 aspect ratio being used for 4:3.

Giliam
July 26, 2015

Hi Doe, HLSL’s tex2Dproj divides by the w component, not by the z component, which is probably where it went wrong. The expected aspect ratio value for the shader is the width divided by height, so 1.333 for 4:3 and 1.777 for 16:9. Good luck!

Doe
July 27, 2015

Hi, Giliam, I noticed I can get quite good effect at s=1, c=2; however there still seems to be curve left in. The curve decreases, if I decrease the strength; but then I lose the needed effect. This might be because my game just doesn’t have much curve in its perspective distortion? Is there a way to change the algo to decrease the curve?

Doe
July 27, 2015

Forgot to mention c=2 gets rid of most vertical curve; it’s the horizontal curve that I want to get rid of.

Lawrence
December 18, 2015

Fantastic work Giliam, thank you very much for the ready to use solution. What do you think of the idea of applying the reprojection in a tessellation evaluation shader?

Brendan
December 21, 2015

Hi Giliam,

Could this technique be used to transform just the vertex positions? How would that work for a simple unlit shader? I know I would need high tessellation on my geometry. Any thoughts? Thanks so much.

Giliam
December 21, 2015

Hi Brendan. Given a high enough geometry density, it might indeed be possible to do so. But I wonder how noticeable the artefacts would be.. In any case, a possible solution is hinted at in ‘Reversing the distortion’. You’d normally pass the vertex through a model-view-projection matrix in the vertex shader and directly output the 4D result (let’s call that ‘pos’). And so, the final normalized screen coordinate position would normally be at float2(pos.x/pos.w, pos.y/pos.w). To add the distortion effect, use this float2 as the 2D ‘p’ vector, and apply equation 2 to get the 2D ‘b’ vector. Then, update pos by using ‘pos.xy = b * pos.w’, and output the updated pos instead. Good luck!

Brendan
February 3, 2016

Giliam,

Thank you for your help. I understand what you are suggesting, but I am having difficulty with the math when trying to solve the formula for P.

How do I solve for each Px and Py respectively?

I apologize, and understand if this question is a bit too much.

Thanks again so much.

Giliam
February 4, 2016

Hi Brendan. No problem. I just noticed I flipped the b and p symbols in my original reply. This is now corrected.
This is what I meant:
// define some constant/uniform called distort where:
// distort.x = nx * z, distort.y = ny * z, distort.z = z.
{
// add your vertex shader code calculating the model-view-projection-transformed
// output float4 ‘pos’ here.
float2 p = pos.xy / pos.w; // get p from pos
float2 b = distort.z * p / (0.5 + sqrt(0.25 + dot(distort.xy, p * p))); // this is equation 2
pos.xy = b * pos.w; // replace the perspective position with the distorted position
return pos;
}

Alternatively, you could also use the following more optimized but less readable equivalent implementation:
// define some constant/uniform called distort2 where:
// distort2.x = 4 * nx * z, distort2.y = 4 * ny * z, distort2.z = z * 2, distort2.w = 1.
{
// add your vertex shader code calculating the model-view-projection-transformed
// output float4 ‘pos’ here.
pos.xy *= distort2.z * pos.w / (pos.w + sqrt(dot(distort2.xyw, pos.xyw * pos.xyw)));
return pos;
}
(Again, no guarantees on the quality of your results when using this to transform vertices instead of pixels.) Good luck!

Brendan
February 25, 2016

Brilliant! Works perfectly! :)

Using the distort2 option, I left z as is without multiplying it by 2, and it gave me the look I’m after.

I can’t thank you enough!

Joe
March 8, 2016

Great article and explanations!
Thank you very much!

maxest
August 26, 2016

Hey,

Great article. I implemented the formulas and they work great.

I’m only having problems with equation (4). I defined Pu and Bu functions the way you described, that is, doing this code:
[code]
float z = 1.0f;
float ny = (s*s*(1.0f + a*a)) / (4.0f*(1.0f + a*a*c*c));
float nx = c*c*ny;
[/code]

Then, in my new function Py I calculate the new half height like this:
[code]
float y = 0.5f;
h = Pu(Bu(float2(h, h*y/a)).x * float2(1.0f, 1.0f/a)).y;
[/code]
where h = Tan(140 degrees / 2)

The result is that I get barrel distortion somewhere in the middle of using P and not using it at all. Also, changing the y plane has very little effect.

Do you know what might be wrong?

Giliam
August 30, 2016

Hi Maxest. Thanks for your interest. Is it possible that w and h (i.e. the horizontal and vertical fov) are getting mixed up in the implementation somehow? That is, are you using the output of the calculation indeed as a vertical FOV (and not a horizontal FOV) input to your camera?

Filbs
September 25, 2016

This is very useful. Thanks.
By rendering multiple views, it’s possible to exceed 180 degrees FOV, and do less “wasted” rendering/avoid pixel density mismatch approaching 180. I’ve played around with this in webgl a bit. Hopefully this is useful! See https://filbs111.github.io/webgl-wideanglecamera/testscene.html

Giliam
October 1, 2016

Hi Filbs. That’s pretty neat!

Genesis
May 30, 2017

Very cool stuff … Has anyone finished such a shader for ReShade?

Joshua
December 14, 2017

Barrel distortion is actually a great idea. Naturally, the projection onto the 2D image will be more stretched the further we are from the center of the screen, so by scaling or distributing it back into place we can maintain consistency with the 2D plane. This would help heaps with muscle memory in gaming as players won’t misjudge how much distance they need to move their mouse to a point on the monitor.

Khao
May 3, 2022

Hello Giliam,

I am a little late to the party :]
In Street Fighter V they use a perspective correction vertex shader for the characters (https://youtu.be/EDlbJdmo7KE?t=1883).
Would your “Reversing the distortion” approach work here?

Best regards and thank you again for this kind of content

Giliam
May 3, 2022

Hi Khao, thanks for the link. It seems like they’re effectively projecting the character center using regular 3D perspective projection (if even that), and then use an orthographic projection of the actual vertices (relative to that projected center) to get rid of any parallax-like effect on the character’s shape, all inside the same vertex shader. And their slider sort of interpolates between the two modes. That’s why it seems the feet actually rotate around their centers when that slider goes to 1.0, for example. In the context of being able to see on a 2D screen if something hits in 3D, this makes a lot of sense, so if that’s the most important thing to achieve, then that’s a great option. It does, however, make it hard to integrate that with the rest of the perspective-projected 3D scene. In contrast, the technique described in my post doesn’t have that problem because it’s based on just regular perspective projection of the whole scene, which is then distorted as a whole as a post effect. But it comes with the downside of stretching and compressing pixels, of course, which is a problem that only using vertex distortion wouldn’t have. So, different tradeoffs…

It would’ve been nice if it was possible to just put the distortion I described in a vertex shader as well, but that doesn’t work because this type of distortion turns straight lines into curves. So if the distortion was only used on vertices (using a custom vertex shader), the rasterizer would still render straight-edged triangles between these vertices, which is incorrect and may lead to weird holes at T-junctions, for example. So, that means that either the effect needs to be applied after rasterization (as I was doing), or it should sidestep the whole problem of straight-line rasterization and be implemented into a ray tracer directly. Which is less weird to consider nowadays than it was back when I wrote the article… Anyway, hope this is helpful.

James
July 6, 2024

Re: last
I believe that modern tech has the ability to handle this, maybe? In particular I’m thinking of auto-tessellation via a hull/geometry shader, to give you consistent vertex density across the render target. The Unreal Engine term/implementation of this is Nanite, I believe.

Giliam
July 6, 2024

Hi James. Other more modern approaches include using variable rate shading (when available), and even making the distortion be part of the (re)projection in a temporal upscaler so that it can keep on converging to subpixel detail quality.

Rudolf
October 23, 2024

Hi Giliam,

Can you please talk a some more about those modern approaches? I’d really like to minimize the FOV distortion in my Godot 4 project, but I lack the knowledge. Thanks!

Giliam
October 23, 2024

Hi Rudolf. Briefly, variable rate shading is an optional hardware feature that allows different zones of the render target to be filled with pixels that are either full-res or half-res, or something in between depending on the exact hardware API. This means that the frames can be full-res in the middle and half-res near the edges, for example. This obviously makes a frame cheaper to run as it requires fewer pixel/fragment shaders to be invoked in total. Or, put differently, the overall resolution of the frame can be increased without costing more than rendering without variable rate shading, while ending up with more detail in the center and less near the edges. And with the barrel distortion applied on top of that, you end up with a more even distribution of details through the frame. That’s the theory, anyway. This is (sometimes) used in VR glasses/platforms, for example, for this exact reason. But my guess is that this is not something that you can easily just ‘activate’ in Godot.. Alternatively, it’s theoretically possible to write your own ‘DLSS’-like temporal AA&upscaler technique, which would do the distortion as part of the internal reproject-and-upscale process instead of running a barrel distortion pass after the AA&upscaler passes, as you’d ‘normally’ do. Then you’d effectively enlarge sub-pixel information, instead of enlarging final resolved pixels. Put differently, it would allow you to trade lack of detail/too much blur in the center for more detail with a bit more aliasing. And again, this is sadly not trivial to implement. Not impossible, but not trivial either…

Rudolf
October 24, 2024

I initially thought to compensate the loss of resolution by rendering at 125% resolution or something, but that means I need a different shader that takes this into account and I just couldn’t figure it all out…
