SIRDS: Rendering 3D models as SIRDS in real-time

May 10, 2011 Graphics

A Single Image Random Dot Stereogram, or simply SIRDS, is a noisy autostereogram that, when stared through in the right way, reveals the illusion of a 3D shape embedded somewhere in the noise. Popularized by the Magic Eye books, they have already been stared at by millions of people. To be able to see SIRDS images of moveable 3D objects, I developed an interactive and customizable SIRDS viewer in 2006, capable of generating SIRDS of 3D models at high frame rates.

What are SIRDS?

One of the reasons we ‘see’ everyday objects in 3D is that both our eyes see objects from a slightly different perspective. Our brain interprets the differences between these images as depth cues, in a process called stereopsis. SIRDS exploit this feature of our brain. They encode a 3D scene into an image in such a way that both eyes look at slightly distorted copies of the same (noisy) pattern. More precisely, the distortion of these copies is specifically crafted to encode the depth of each pixel in a rendered virtual 3D scene.

SIRDS use random dots instead of regular patterns to hide artefacts that could distract the viewer from the illusion. However, it’s easier to explain why SIRDS work when a regular pattern is used. Technically, the result is then no longer called a SIRDS but simply an autostereogram, but I’ll call it a SIRDS anyway.

A SIRDS of a car, explaining the principle

The SIRDS in the image on the right is based on the rendered depth image of the 3D car (in white and gray, bottom). Instead of random dots, the SIRDS’ pattern contains exactly four differently coloured vertical lines. This pattern is repeatedly copied and distorted to the right. The resulting SIRDS is overlaid here on top of the depth image. Note the correlation between the depth image of the car and the SIRDS.

The top of the image shows how you would look at a line from the SIRDS somewhere halfway down, when staring through the image. Instead of focusing (or, more correctly, converging your eyes) on the image plane, you look past it just deep enough to overlap the coloured vertical lines in neighbouring copies. Red with red (see the top half of the image), green with green, etc. Now both eyes will observe slightly different copies of the same pattern and our brains will interpret the differences between these copies as depth cues, just like when seeing real 3D objects.

The exact distance between these copies determines the depth at which you must converge your eyes to get the illusion of a coherent point in 3D. As you can see in the car example, these points of convergence neatly match the depth contours of the selected horizontal line in the greyscale depth image of the car.

Every red line in this SIRDS (bottom half of the image) marks the boundary between copies of the original pattern of four coloured lines. To begin the construction of a SIRDS, the original pattern is first copied once to the left side of the result. The rest of the SIRDS is then filled in pixel by pixel from left to right. First, the depth for a specific pixel is looked up from the original depth image. From this depth value, the ideal horizontal pixel offset x between dot copies on the image plane is calculated using basic geometry. Lastly, the colour value for the SIRDS pixel is set to the colour in the SIRDS x pixels to its left.
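The left-to-right fill described above can be sketched in a few lines of C++. This is a simplified single-scanline illustration with made-up names, not the viewer's actual code; it uses single-byte "colours" and a linear depth-to-offset mapping:

```cpp
#include <vector>

// Map a normalized depth (0 = nearest, 1 = farthest) to a copy
// separation in pixels: deeper pixels get copies spaced further apart.
// minSep/maxSep bound the pattern repeat distance. Illustrative only.
int depthToSeparation(float depth, int minSep, int maxSep) {
    return minSep + static_cast<int>(depth * (maxSep - minSep) + 0.5f);
}

// Build one scanline: seed the left edge with the base pattern, then
// fill every remaining pixel by copying the colour 'sep' pixels to its
// left, exactly as in the construction described above.
std::vector<unsigned char> sirdsRow(const std::vector<float>& depth,
                                    const std::vector<unsigned char>& pattern,
                                    int minSep, int maxSep) {
    const int w = static_cast<int>(depth.size());
    std::vector<unsigned char> row(w);
    for (int x = 0; x < w; ++x) {
        const int sep = depthToSeparation(depth[x], minSep, maxSep);
        row[x] = (x < sep) ? pattern[x % pattern.size()]  // initial copy
                           : row[x - sep];                // copy from the left
    }
    return row;
}
```

With a constant depth the row simply repeats the base pattern at a fixed period; varying depth stretches or compresses the repeats, which is exactly the distortion the eyes decode as shape.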

SIRDS Viewer

So, by carefully crafting the distances between copies of dots in a SIRDS, the depth information of any 3D scene can be encoded into an image. This, of course, is something best left to a computer. Out of interest, I developed a SIRDS renderer capable of efficiently rendering full-screen SIRDS of 3D scenes. Consequently, it can not only generate a SIRDS image of a static 3D scene, but also render real-time SIRDS of interactive 3D scenes.

The viewer can be used to freely zoom and rotate around various 3D models, while viewing these as either depth images or SIRDS. A number of parameters and options are available, including a staring and cross-eyed SIRDS mode, different noise patterns and tweakable pattern repeat distances.

It also includes a depth dithering mode that effectively hides the fact that pixelized SIRDS have only a (very) limited number of depth planes to choose from. It does this by breaking up the regularity of these depth planes: noise is added to the quantization step from depth to discrete pattern pixel offset, effectively jittering the quantization threshold. Like the other available options, the depth dithering mode in the viewer can easily be toggled on and off for comparison.
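The jittering step can be sketched as follows. This is a hedged illustration under my own assumptions (I assume the ideal separation arrives as a fractional pixel count), not the viewer's actual implementation:

```cpp
#include <random>

// Without dithering, the fractional ideal separation is rounded to the
// nearest whole pixel, so a range of depths snaps to the same discrete
// depth plane. With dithering, a uniform jitter in [0, 1) replaces the
// fixed 0.5 rounding bias: a separation of 10.3 then quantizes to 11
// roughly 30% of the time and to 10 otherwise, so averaged over many
// dots the in-between depth is still conveyed.
int quantizeSeparation(double idealSep, bool dither, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    const double bias = dither ? uniform(rng) : 0.5;
    return static_cast<int>(idealSep + bias);  // truncation == floor here,
                                               // since idealSep is positive
}
```

This is the classic ordered-noise trade: per-dot depth error goes up slightly, but the visible banding between discrete depth planes dissolves into noise that the random-dot pattern already hides.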

Internally, a DirectX9 shader is used to rasterize the 3D object to a depth buffer. The CPU then builds the SIRDS image, using the individual values in the depth buffer to calculate the horizontal offsets needed to look up and copy the colour values from left to right.

The (Windows-only) viewer can be downloaded freely below. The C++ source code is also available under the GPL license.

