# Introduction to Ray-tracing

## Introduction

Ray-tracing is a graphics technique that can produce realistic images by simulating the path of light and its interactions with the environment.

The idea is inspired by real life: we see the world thanks to light that originates from light sources, interacts with the environment, and ends up in our retinas.

If you'd like to skip the reading, here is a playable demo based on this blog on spiderengine.io.

In practice, it's not feasible to trace light from the light sources, because that would mean wasting time simulating paths that may never reach the viewer:

A much better approach is to simulate light paths from the viewer to the light sources. This is called backward tracing. Performance-wise, it's a win because only objects in the field of view are processed. Visually, the result can be the same, since light propagation is a symmetric process and the equations work the same in the reverse direction.

Here is a simple implementation of this idea (in TypeScript):

```typescript
function rayCast(ray: Ray) {
    let toIntersection = -1;
    let closestIntersection = null;
    for (let obj of objects) {
        let intersection = obj.intersectsWithRay(ray);
        if (intersection) {
            let distance = Vector3.distance(ray.origin, intersection);
            if (toIntersection < 0 || distance < toIntersection) {
                closestIntersection = intersection;
                toIntersection = distance;
            }
        }
    }
    return closestIntersection;
}
```

```typescript
let frameBuffer = new FrameBuffer(width, height, rgba);
for (let i = 0; i < height; ++i) {
    for (let j = 0; j < width; ++j) {
        let ray = new Ray().setFromPerspectiveView(
            inverseView,
            j,
            i,
            width,
            height
        );

        if (rayCast(ray)) {
            frameBuffer.setPixel(j, i, Color.red);
        } else {
            // environment/background color
            frameBuffer.setPixel(j, i, Color.black);
        }
    }
}
```

Shading is the process of determining the color of each pixel in the resulting image. In this article, we will use a diffuse shading model to simulate how light is absorbed and reflected.

For each pixel, we collect the information needed for shading: the intersection point (P) between a ray projected from that pixel and the environment, the surface normal at that location, the direction towards the light, and the properties of the light itself. The color is calculated as:

$$Color = d \cdot Li \cdot Lc \cdot \cos(\theta)$$

- **d**: the diffuse color at the intersection point
- **Li**: the light intensity
- **Lc**: the light color
- **θ**: the angle between the normal and the direction towards the light (Ld)

Here is the resulting image, along with sample code:

```typescript
function rayTrace(ray: Ray, colorOut: Color) {
    let intersection = rayCast(ray);
    if (!intersection) {
        return;
    }

    for (let light of lights) {
        let toLight = new Vector3().copy(light.transform.position)
            .substract(intersection.position)
            .normalize();

        let cosTheta = toLight.dot(intersection.normal);
        cosTheta = Math.max(cosTheta, 0); // Fully dark if facing away from the light
        let currentColor = new Color().copy(intersection.diffuseColor)
            .multiplyColor(light.color)
            .multiply(light.intensity)
            .multiply(cosTheta);

        // Accumulate the contribution of each light
        colorOut.add(currentColor);
    }
}
```

```typescript
// .. initialize frame buffer
for (let i = 0; i < height; ++i) {
    for (let j = 0; j < width; ++j) {
        // .. initialize ray
        finalColor.set(0, 0, 0); // environment/background color
        rayTrace(ray, finalColor);
        frameBuffer.setPixel(j, i, finalColor);
    }
}
```

## Reflections

Reflections are a natural byproduct of ray-tracing. When light hits a reflective object, it changes direction and continues travelling until it either doesn't hit anything or the maximum number of bounces is reached.

We refer to the reflected rays as secondary rays, in contrast with the original rays which are called primary rays. Each time a secondary ray is generated, we accumulate the color of the intersection point that created it, using the same shading equation we saw earlier. The final color at the original intersection point is simply the sum of all colors encountered while bouncing secondary rays.

An object reflects light depending on its material properties. In this article, we define a reflectance factor on materials. Implementation-wise, the best practice is to make the ray-tracer a recursive process, so that reflected rays are processed in the exact same way as primary rays. Here is the recursive ray-tracer and the corresponding result:

```typescript
function rayTrace(ray: Ray, colorOut: Color, currentBounce: number) {
    let intersection = rayCast(ray);
    if (!intersection) {
        return;
    }

    // .. shading, as seen previously

    // Handle reflections
    if (currentBounce < maxBounces) {
        let reflectance = intersection.object.getComponent("Visual").material.reflectance;
        if (reflectance > 0) {
            let secondaryRay = new Ray(
                // Nudge the reflection ray origin a bit along the normal to avoid self reflection artifacts
                new Vector3().copy(intersection.normal).multiply(.001)
                    .add(intersection.position), // Origin
                new Vector3().copy(ray.direction).reflect(intersection.normal) // Direction
            );
            let reflectedColor = new Color();
            rayTrace(secondaryRay, reflectedColor, currentBounce + 1);
            // Accumulate the reflected contribution
            colorOut.add(reflectedColor.multiply(reflectance));
        }
    }
}
```

To support shadows, we must determine whether the intersection point at each pixel is reachable by light. If no light reaches it, it must be darkened. We introduce the concept of shadow rays: for each intersection point, we cast a ray towards each light source. If a light is not reachable, we remove its influence from the shading equation by zeroing its intensity.

Here is the new implementation, taking into account shadows:

```typescript
function rayTrace(ray: Ray, colorOut: Color, currentBounce: number) {
    let intersection = rayCast(ray);
    if (!intersection) {
        return;
    }

    for (let light of lights) {
        let toLight = new Vector3().copy(light.transform.position)
            .substract(intersection.position)
            .normalize();

        let shadowRay = new Ray(
            // Nudge the shadow ray origin a bit along the normal to avoid moire patterns
            new Vector3().copy(intersection.normal).multiply(.001)
                .add(intersection.position), // Origin
            toLight // Direction
        );

        let lightIntensity = 1;
        let occluder = rayCast(shadowRay);
        if (occluder) {
            // Hit an object, check if it's obstructing the light
            let toOccluder = Vector3.distance(occluder.position, intersection.position);
            let distanceToLight = Vector3.distance(light.transform.position, intersection.position);
            if (toOccluder < distanceToLight) {
                // Current light is not visible from the intersection point
                lightIntensity = 0;
            }
        }

        // .. shading, scaled by lightIntensity
    }

    // .. Handle reflections
}
```

Sharp shadows happen because we considered light sources as single points in space. In reality, light sources are just like any other object in space, with a shape and a volume; they just happen to emit light. When shading, we must test how much of the light source is visible from each pixel of interest, and shade accordingly.

We give light sources a non-zero volume, and define a number of sample points on their surface that will be used for casting additional shadow rays. The implementation is exactly the same as for sharp shadows, but since we are now casting multiple shadow rays, we must keep track of the number of occluded rays. Then we set the lightIntensity variable to one minus the ratio of occluded rays to total rays:

$$lightIntensity = 1 - \frac{occludedShadowRays}{totalShadowRays}$$
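This sampling scheme can be sketched in a self-contained way. Here the square grid layout, the `Vec3` type, and the `isOccluded` callback are illustrative stand-ins for the engine's light shapes and shadow-ray cast, not its actual API:

```typescript
type Vec3 = { x: number; y: number; z: number };

// Generate sample points on a square area light of the given half-size,
// laid out on a regular grid in the light's local XY plane.
function sampleAreaLight(center: Vec3, halfSize: number, gridSize: number): Vec3[] {
    const samples: Vec3[] = [];
    for (let i = 0; i < gridSize; ++i) {
        for (let j = 0; j < gridSize; ++j) {
            const u = ((i + 0.5) / gridSize) * 2 - 1; // in [-1, 1]
            const v = ((j + 0.5) / gridSize) * 2 - 1;
            samples.push({
                x: center.x + u * halfSize,
                y: center.y + v * halfSize,
                z: center.z
            });
        }
    }
    return samples;
}

// Soft shadow factor: the fraction of sample points visible from `point`.
// `isOccluded` stands in for the shadow-ray cast seen earlier.
function softShadowIntensity(
    point: Vec3,
    samples: Vec3[],
    isOccluded: (from: Vec3, to: Vec3) => boolean
): number {
    let occluded = 0;
    for (const s of samples) {
        if (isOccluded(point, s)) {
            ++occluded;
        }
    }
    return 1 - occluded / samples.length;
}
```

With a 4×4 grid this casts 16 shadow rays per light; more samples give smoother penumbras at a linear cost in ray casts. Jittering the sample positions randomly within each grid cell trades the remaining banding for noise.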

Here is the result with smooth shadows:

## Optimizations

Most of the time spent by a ray-tracer is in computing ray-object intersections. They need to be performed for each pixel on the screen, potentially several times per pixel, which can be very expensive. A form of spatial partitioning is needed to ray-trace most worlds in a decent amount of time. This deserves its own article and I will cover it in a future post!
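As a taste of what such an optimization builds on, here is a sketch of a classic cheap pre-test: the ray vs axis-aligned bounding box slab test, which can reject a whole group of objects before any exact per-object intersection runs. The tuple-based `Vec3` type is illustrative, not the engine's:

```typescript
type Vec3 = [number, number, number];

// Slab test: intersect a ray against an axis-aligned bounding box.
// For each axis, compute the parametric interval where the ray is inside
// the box's two bounding planes; the ray hits the box only if the three
// intervals overlap.
function rayIntersectsAABB(origin: Vec3, dir: Vec3, boxMin: Vec3, boxMax: Vec3): boolean {
    let tmin = -Infinity;
    let tmax = Infinity;
    for (let axis = 0; axis < 3; ++axis) {
        const inv = 1 / dir[axis]; // ±Infinity when the ray is parallel to this slab
        let t0 = (boxMin[axis] - origin[axis]) * inv;
        let t1 = (boxMax[axis] - origin[axis]) * inv;
        if (t0 > t1) {
            [t0, t1] = [t1, t0];
        }
        tmin = Math.max(tmin, t0);
        tmax = Math.min(tmax, t1);
        if (tmin > tmax) {
            return false; // the intervals don't overlap: no hit
        }
    }
    return tmax >= 0; // reject boxes entirely behind the ray
}
```

Wrapping clusters of objects (or a whole hierarchy of clusters, as in a BVH) in such boxes lets rayCast skip most of the scene per ray instead of testing every object.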

Check out the playable demo based on this blog on spiderengine.io.