## eScience Lectures Notes : Global Illumination Models

Slide 1 : 1 / 49 : Global Illumination Models

# Physically Based Illumination, Ray Tracing and Radiosity

#### Radiosity Overview Part 5 : Remarks

Slide 2 : 2 / 49 : Global Illumination Models

# Physically Based Illumination, Ray Tracing and Radiosity

### Raytracing

Slide 3 : 3 / 49 : Usual Graphical Pipeline

# Usual Graphical Pipeline

## Properties of the Graphics Pipeline

• ### Processing is forward-mapping

Slide 4 : 4 / 49 : Z-Buffer Algorithm

# Z-Buffer Algorithm

### N.B : Case of the Painter's Algorithm : objects are painted from back-to-front

The painter's algorithm, sometimes called depth-sorting, gets its name from the process by which an artist renders a scene in oil paints. First, the artist paints the background colors of the sky and ground. Next, the most distant objects are painted, then the nearer objects, and so forth. Note that oil paints are essentially opaque, so each successive layer completely obscures the layer that it covers. A very similar technique can be used for rendering objects in a three-dimensional scene. First, the list of surfaces is sorted according to distance from the viewpoint. The objects are then painted from back to front.
While this algorithm seems simple, there are many subtleties. The first issue is which depth value to sort by. In general a primitive does not lie entirely at a single depth, so we must choose some representative point on the primitive to sort by:
1. Sort by the minimum depth extent of the polygon
2. Sort by the maximum depth extent of the polygon
3. Sort by the polygon's centroid (Sum(vi, i = 1..N)/N)
But the main issue is that cyclic overlaps can easily arise: Triangle 1 covers part of Triangle 2, which covers part of Triangle 3, which in turn covers part of Triangle 1. No single back-to-front ordering resolves such a cycle without splitting the polygons.
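As a concrete sketch, depth-sorting by centroid (option 3 above) might look like this in Python; the polygon representation and the larger-z-is-farther convention are illustrative assumptions:

```python
# Painter's algorithm ordering: a minimal sketch.
# Each polygon is a list of (x, y, z) vertices; we sort by centroid
# depth, farthest first, then paint back to front.

def centroid_depth(polygon):
    """Average z of the vertices: Sum(v_i, i = 1..N)/N applied to depth."""
    return sum(v[2] for v in polygon) / len(polygon)

def back_to_front(polygons):
    # Larger z = farther from the viewpoint in this convention.
    return sorted(polygons, key=centroid_depth, reverse=True)

far_tri  = [(0, 0, 10.0), (1, 0, 10.0), (0, 1, 10.0)]
near_tri = [(0, 0, 2.0),  (1, 0, 2.0),  (0, 1, 2.0)]
order = back_to_front([near_tri, far_tri])
# far_tri comes first, so it is painted first and near_tri overwrites it
```

Note that no centroid-based sort fixes the cyclic-overlap case; that requires splitting polygons or a different visibility algorithm.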

### z-buffer algorithm

The basic idea is to test the z-depth of each surface to determine the closest (visible) surface. Declare an array z_buffer(x, y) with one entry for each pixel position, and initialize every entry to the maximum depth. Then the algorithm is as follows:

```
for each polygon P
    for each pixel (x, y) in P
        compute z_depth at (x, y)
        if z_depth < z_buffer(x, y) then
            set_pixel(x, y, color)
            z_buffer(x, y) = z_depth
```
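The pseudocode above can be sketched in Python; the per-pixel polygon samples stand in for a real rasterizer and are purely illustrative:

```python
# A minimal z-buffer sketch. "Polygons" are pre-rasterized lists of
# (x, y, z_depth, color) pixel samples, standing in for scan conversion.

WIDTH, HEIGHT = 4, 4
MAX_DEPTH = float("inf")

z_buffer = [[MAX_DEPTH] * WIDTH for _ in range(HEIGHT)]
frame    = [[None] * WIDTH for _ in range(HEIGHT)]

def draw(polygon_pixels):
    for x, y, z_depth, color in polygon_pixels:
        if z_depth < z_buffer[y][x]:   # closer than what is stored?
            z_buffer[y][x] = z_depth   # keep the nearer surface
            frame[y][x] = color        # set_pixel(x, y, color)

draw([(1, 1, 5.0, "red")])    # far surface drawn first
draw([(1, 1, 2.0, "blue")])   # nearer surface wins
draw([(1, 1, 9.0, "green")])  # farther surface is rejected
# frame[1][1] is now "blue"
```

Unlike the painter's algorithm, drawing order does not matter here, and cyclic overlaps are handled per pixel for free.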

• #### And we may need additional z-buffers for special effects, e.g. shadows.

• If the z-buffer were linear: with 16 bits, and far clipping plane − near clipping plane = 100 m, the depth resolution is about 1.5 mm
• With 24 bits, and far − near = 10 km, the resolution is about 0.6 mm
• And in practice it is not linear!
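The resolution figures above follow from dividing the depth range by the number of representable values, assuming a (hypothetical) linear z-buffer:

```python
# Depth resolution of a hypothetical *linear* z-buffer:
# resolution = (far - near) / 2**bits

def linear_resolution(depth_range_m, bits):
    return depth_range_m / (2 ** bits)

r16 = linear_resolution(100.0, 16)     # 100 m over 16 bits  -> ~1.5 mm
r24 = linear_resolution(10_000.0, 24)  # 10 km over 24 bits  -> ~0.6 mm
```

A real perspective z-buffer stores depth non-linearly, concentrating precision near the camera, so the effective resolution far from the viewer is much worse than these linear estimates.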

Slide 5 : 5 / 49 : Path Notation

# Path Notation

## Which are handled by ray tracing? Which by radiosity?

Slide 6 : 6 / 49 : Path Notation (2)

# Path Notation (2)

### Local Illumination Model : L (D|S|G) E

First, let’s introduce some notation for paths. Each path is terminated by the eye
and a light.
E - the eye.
L - the light.
Each bounce involves an interaction with a surface. We characterize the interaction
as either reflection or transmission. There are different types of reflection and
transmission functions. At a high-level, we characterize them as
D - diffuse reflection or transmission
G - glossy reflection or transmission
S - specular reflection or refraction
Diffuse implies that light is equally likely to be scattered in any direction. Specular
implies that there is a single direction; that is, given an incoming direction there is
a unique outgoing direction. Finally, glossy is somewhere in between.
Particular ray-tracing techniques may be characterized by the paths that they
consider.
Appel Ray casting: E(D | G)L
Whitted Recursive ray tracing: E[S*](D | G)L
Kajiya Path Tracing: E[(D | G | S) + (D | G)]L
The set of traced paths are specified using regular expressions, as was first proposed
by Shirley. Since all paths must involve a light L, the eye E, and at least one
surface, all paths have length at least equal to 3.
A technique can also be characterized by the paths that are not traced, and hence by the types of light transport that are not considered
by the algorithm. For example, Appel’s algorithm only traces paths of length 3,
ignoring longer paths; thus, only direct lighting is considered. Whitted’s algorithm
traces paths of any length, but all paths begin with a sequence of 0 or more mirror
reflection and refraction steps. Thus, Whitted’s technique ignores paths such as
the following EDSDSL or E(D | G)* L. Distributed ray tracing and path tracing
includes multiple bounces involving non-specular scattering such as E(D | G)* L.
However, even these methods ignore paths of the form E(D | G)S* L; that is, multiple
specular bounces from the light source, as in a caustic. Obviously, any technique
that ignores whole classes of paths will not correctly compute the solution to the
rendering equation.
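Since the path notation is literally a regular expression, which paths a technique handles can be checked mechanically. A small sketch (paths written eye-first, matching the formulas above; only the Appel and Whitted grammars from the text are encoded):

```python
import re

# Path grammars from the text, as regular expressions over the
# alphabet {E, L, D, G, S}. Paths read from the eye to the light.
TECHNIQUES = {
    "Appel ray casting":   re.compile(r"E[DG]L"),    # E (D|G) L
    "Whitted ray tracing": re.compile(r"ES*[DG]L"),  # E S* (D|G) L
}

def handles(technique, path):
    return TECHNIQUES[technique].fullmatch(path) is not None

# Whitted handles a chain of mirror bounces before one diffuse bounce...
assert handles("Whitted ray tracing", "ESSDL")
# ...but not scattering that continues after a diffuse bounce:
assert not handles("Whitted ray tracing", "EDSDL")
# Appel handles only direct lighting (paths of length 3):
assert handles("Appel ray casting", "EGL")
assert not handles("Appel ray casting", "ESDL")
```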

### Whitted Recursive Ray tracing: E S* (D|G) L

• #### a view-independent diffuse computation followed by a view-dependent specular computation is doable

First Pass - formalized by Rushmeier and Torrance

• accounts for ideal specular and ideal diffuse reflection and transmission
• number of specular surfaces should be small
• derive the radiosity equation in terms of intensities

for an ideal diffuse surface

• This equation is extended to account for diffuse to specular transfer and transmission

Diffuse Transmission

• (through a light shade, for example)
• assuming no specular interaction, we get

where Id,i is the diffuse transmittance for patch i and Tij is the transmission form factor
• Tij
• accounts for interaction between ideal diffuse surfaces due to transmission
• called backwards diffuse form factor
• Tij computed by placing a hemisphere over the back side of the surface, then performing the same computation as for Fij.

Specular Transmission

• place a constraint that no two specular surfaces can see each other
• then

• Tf,ijp is the window form factor and accounts for the energy leaving patch p that is specularly transmitted through j and reaches i
• [image: shows three patches p floating above a patch j which is floating above patch i. Caption reads: Patches p visible to i through j ]
• Tb,ijp is a similar quantity that accounts for transmission from p through j to the back side of i
• These T form factors can be computed with the hemicube, just as the F form factors are.
• [image: shows a patch i with a hemicube sitting atop it. Floating above the hemicube is patch j. Above patch j is patch p. Tf,ijp appears to be an area on the hemicube, representing what is visible of patch p through patch j onto patch i. ]

Specular Reflection

• use the same constraint that no two specular surfaces can see each other; mirrors still contribute to the interaction between patches.
• mirror form factors
• [image: Patch p is a mirror. To the right of patch p, we have the real environment, which consists of patch j and patch i. To the left of patch p, we have a virtual environment representing what is seen in p's reflection. Patch j', the reflection of patch j, sits in the virtual environment. ]
• Ff,ijp is the forward mirror form factor
• Fb,ijp is the backward mirror form factor
• using the mirror approach with virtual space, the specular patch is treated as a specular transmitter that receives light from the virtual patch.
• mirror form factors are computed just as window form factors, except virtual surfaces are used.

With these extensions, we can now account for:

• diffuse - diffuse (regular radiosity)
• specular - diffuse (extended radiosity à la Rushmeier & Torrance)

Once this pass is complete, we then perform the 2nd pass to compute specular - specular and diffuse - specular

Specular - specular is given by ray tracing

For diffuse - specular, we would need to send out many rays from the point through the hemisphere around the point, weight the rays by the bidirectional specular reflectivity, then sum them together.

• [image: too complicated to describe, but you can look at the photocopied notes if you have them ]
• the reflection frustum is a square pyramid
• the end of the pyramid is divided into n x n pixels (typically very low resolution - e.g. 10 x 10)
• a Z-buffer is used on that face to determine visible surfaces
• the specular contribution is the intensities of the patches seen through the frustum (as computed in pass 1)
• if the patch seen through the frustum is specular, the process is recursively repeated
• the incoming intensities can be weighted to approximate the specular spread

Slide 7 : 7 / 49 : The Rendering Equation

# The Rendering Equation (à la Kajiya - Siggraph 86)

## An attempt to unify rendering so that all rendering had a basic model as a basis.

• ### solutions can be view-independent (radiosity) or not (ray tracing)

we can rewrite this equation as
where R is the linear integral operator

rearranging terms gives:


Local Reflection Models


only first 2 terms are used

X is the eyepoint

the g(epsilon) term is non-zero only for light sources

R1 operates on (epsilon) rather than g, so shadows are not computed

Basic Ray Tracing

• by performing transformations outlined on page 293 of the text, we get
• dB(x') is the radiosity of surface element dx'
• p0 is p( x , x' , x'' ) and is constant
• H(x') is the energy incident on the surface element dx'

The Extended Two-Pass Algorithm (Sillion 1989)


• uses the rendering equation as the basis
• does not place the restriction Wallace does of making specular surfaces perfect planar mirrors

The general equation used is:

• the visibility function g is incorporated into the reflection operator R.
```
p(x, x', x'') = pd(x') + ps(x, x', x'')
```

where p is the bidirectional reflectivity function, pd is its diffuse component, and ps is its specular component.

In the first pass, extended form factors are used to compute diffuse-to-diffuse interaction that has any number of specular transfers in between

extended form factors: Diffuse - specular* - diffuse

The 2nd pass uses standard ray tracing to compute specular transfer

Slide 8 : 8 / 49 : The Rendering Equation

# The Rendering Equation (2)

• ### solutions can be view-independent (radiosity) or not (ray tracing)

Slide 9 : 9 / 49 : Revisiting Phong's Illumination Model

# Revisiting Phong's Illumination Model

• ### Is my picture accurate?

Slide 10 : 10 / 49 : Desiderata

# Desiderata

• ### If it was easy... everyone would do it.

Slide 11 : 11 / 49 : Better Illuminance Models

# Better Illuminance Models

• ### adds polarization, statistical microstructure, self-reflectance

Slide 12 : 12 / 49 : Untitled Document

Cook-Torrance Illumination

• ### v - vector to viewer

Slide 13 : 13 / 49 : Microfacet Distribution Function

# Microfacet Distribution Function

• ### m - the root-mean-square slope of the microfacets large m indicates steep slopes and the reflections spread out over the surface

Slide 14 : 14 / 49 : Geometric Attenuation Factor

# Geometric Attenuation Factor

## The geometric factor chooses the smallest amount of light that is lost as the local self-shadowing model.

Slide 15 : 15 / 49 : Fresnel Reflection

# Fresnel Reflection

### This variation in reflectance is called the Fresnel effect.

Slide 16 : 16 / 49 : Fresnel Reflection

# Fresnel Reflection

## No mirage without Fresnel

Slide 17 : 17 / 49 : A Plot of the Fresnel Factor

# A Plot of the Fresnel Factor

Slide 18 : 18 / 49 : Energy Conserving Approaches

# Energy Conserving Approaches

### Lightout = Lightemitted + Lightin

Slide 19 : 19 / 49 : Definitions

# Definitions

• ### the rate of incident or incoming energy at a surface point per unit surface area.

Slide 20 : 20 / 49 : Irradiance

### Irradiance is a two-dimensional function describing the incoming light energy impinging on a given point.

Slide 21 : 21 / 49 : BRDF

# Bidirectional Reflectance Distribution Function (BRDF)

### A BRDF relates light incident in a given direction to light reflected along a second direction for a given material.

Slide 22 : 22 / 49 : BRDF Approaches

# BRDF Approaches

### Measured BRDFs

Slide 23 : 23 / 49 : Remaining Hard Problems

# Remaining Hard Problems

• ### Satin and velvet cloths

Slide 24 : 24 / 49 : Ray Tracing

# Ray Tracing

## Effects needed for Realism

• ### Realistic Materials

The light of Mies van der Rohe / Modeling: Stephen Duck / Rendering: Henrik Wann Jensen

• ### Light rays travel from the light sources to the eye, but the physics is invariant under path reversal (reciprocity).

Ray tracing is a global illumination based rendering method. It traces rays of light from the eye back through the image plane into the scene. The rays are then tested against all objects in the scene to determine if they intersect any of them. If a ray misses all objects, its pixel is shaded the background color. Ray tracing handles shadows, multiple specular reflections, and texture mapping in a very straightforward manner.

Note that ray tracing, like scan-line graphics, is a point sampling algorithm. We sample a continuous image in world coordinates by shooting one or more rays through each pixel. Like all point sampling algorithms, this leads to the potential problem of aliasing, which is manifested in computer graphics by jagged edges or other nasty visual artifacts.

In ray tracing, a ray of light is traced in a backwards direction. That is, we start from the eye or camera and trace the ray through a pixel in the image plane into the scene and determine what it hits. The pixel is then set to the color values returned by the ray.

Slide 25 : 25 / 49 : Ray paths

# Ray paths

• ### Reflected/Transmitted Rays

#### Appel 68

Slide 26 : 26 / 49 : Ray Casting

# Ray Casting

• ### Evaluate illumination model to color pixel

Compared to Forward Mapping, there are other ways to compute views of scenes defined by geometric primitives. One of the most common is ray-casting.

## Ray Casting

### E (D | G) L

In a ray-casting renderer the following process takes place.
1. For each "Screen-space" pixel compute the equation of the "Viewing-space" ray.
2. For each object in the display-list compute the intersection of the given ray
3. Find the closest intersection if there is one
4. Illuminate the point of intersection

Slide 27 : 27 / 49 : Recursive Ray Tracing

# Recursive Ray Tracing : Whitted Ray Tracing

## E[S*](D | G)L

#### Turner Whitted (1980)

Figure from Andrew S. Glassner, "An Overview of Ray Tracing" in An Introduction to Ray Tracing, Andrew Glassner, ed., Academic Press Limited, 1989.

A primary ray is shot through each pixel and tested for intersection against all objects in the scene. If there is an intersection with an object then several other rays are generated. Shadow rays are sent towards all light sources to determine if any objects occlude the intersection spot. In the figure below, the shadow rays are labeled Si and are sent towards the two light sources LA and LB. If the surface is reflective then a reflected ray, Ri, is generated. If the surface is not opaque, then a transmitted ray, Ti, is generated. Each of the secondary rays is tested against all the objects in the scene.

The reflective and/or transmitted rays are continually generated until the ray leaves the scene without hitting any object or a preset recursion level has been reached. This then generates a ray tree, as shown below.

The appropriate local illumination model is applied at each level and the resultant intensity is passed up through the tree, until the primary ray is reached. Thus we can modify the local illumination model by (at each tree node)

I = Ilocal + Kr * R + Kt * T where R is the intensity of light from the reflected ray and T is the intensity of light from the transmitted ray. Kr and Kt are the reflection and transmission coefficients. For a very specular surface, such as plastic, we sometimes do not compute a local intensity, Ilocal, but only use the reflected/transmitted intensity values.
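The intensity combination at each node of the ray tree can be sketched as follows; the scene helpers here are hypothetical stubs returning constant values, purely so the recursion is runnable:

```python
# Sketch of I = Ilocal + Kr * R + Kt * T, evaluated at each node of the
# ray tree. local_illumination, reflected_ray and trace are hypothetical
# stand-ins for a real scene; in this toy setup every bounce hits a
# surface with the same local intensity.

MAX_DEPTH = 3          # preset recursion limit

def local_illumination(hit):
    return 0.5         # stub: constant local (Phong-style) term

def reflected_ray(hit):
    return hit         # stub: reuse the hit record

def trace(ray, depth):
    return shade(ray, depth)

def shade(hit, depth=0, Kr=0.25, Kt=0.0):
    I = local_illumination(hit)
    if depth < MAX_DEPTH and Kr > 0:          # surface is reflective
        I += Kr * trace(reflected_ray(hit), depth + 1)
    # (a transmitted-ray branch, Kt * T, would be added here when Kt > 0)
    return I

# With Kr = 0.25 the contributions form a geometric series:
# 0.5 * (1 + 0.25 + 0.25**2 + 0.25**3) = 0.6640625
```

The geometric decay of the series is why a modest recursion limit loses little energy for moderately reflective surfaces.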

Slide 28 : 28 / 49 : From Pixel to Ray

# From Pixel to Ray

## Rayi,j = Eye + orig + scale * ( i * u + j * v )
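A sketch of this mapping, with hypothetical example values for Eye, orig, u, v and scale (a 4x4 image spanning [-1, 1] on a plane at z = -1):

```python
# Pixel (i, j) to a point on the image plane:
# Ray(i, j) = Eye + orig + scale * (i * u + j * v)
# orig points from the eye to the plane's corner; u, v span the plane.
# All concrete vectors below are illustrative assumptions.

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def mul(s, a): return tuple(s * x for x in a)

def pixel_to_point(eye, orig, u, v, scale, i, j):
    return add(eye, add(orig, mul(scale, add(mul(i, u), mul(j, v)))))

eye   = (0.0, 0.0, 0.0)
orig  = (-1.0, -1.0, -1.0)                     # eye to plane corner
u, v  = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)       # plane basis vectors
scale = 2.0 / 4                                # 4x4 pixels over [-1, 1]

p = pixel_to_point(eye, orig, u, v, scale, 2, 2)
# p == (0.0, 0.0, -1.0): the centre pixel looks straight down -z
```

The primary ray's direction is then the normalized vector from Eye to this point.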

Slide 29 : 29 / 49 : Maximum recursion depth

# Maximum recursion depth

### The reflected rays can generate other reflected rays that can generate other reflected rays, etc. The next sequence of three images shows a simple scene with no reflection, a single reflection, and then a double reflection.

Slide 30 : 30 / 49 : Ray Tracing Architecture

# Ray Tracing Architecture

## Practical Considerations in Writing a Ray Tracer

### Create Model

The first step is to create the model of the image. One should not hardcode objects into the program, but instead use an input file.

### Generate primary rays and test for object-ray intersections

For each pixel we must generate a primary ray and test for intersection with all of the objects in the scene. If there is more than one ray-object intersection, then we must choose the closest intersection (the smallest positive value of t). To ensure that no objects are intersected in front of the image plane (this is called near-plane clipping), we keep the distance from the primary ray's origin to the screen and test all intersections against this distance. If the t value is less than this distance, then we ignore the object.

### Generate the reflection and transmission rays

If there is an intersection then we must compute the shadow rays and the reflection rays.

The shadow ray is a ray from the point of intersection to the light source. Its purpose is to determine if the intersection point is in the shadow of a particular light. There should be one shadow ray for each light source. The origin of the shadow ray is the intersection point and the direction vector is the normalized vector between the intersection point and the position of the light source. Note that this is the same as the light vector (L) that is used to compute the local illumination.

### Local Illumination

Compute the Local Illumination at each point, carry it back to the next level of the ray tree so that the intensity I = Ilocal + Kr * R + Kt * T . Note that Kr can be taken as the same as Ks.
For each color (R, G, B) I is in the range 0.0 <= I <= 1.0. This must be converted to an integer value of 0 <= I <= 255. The result is then written to the output file.
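The conversion from a floating-point channel to a byte can be sketched as a clamp-and-scale; the clamping guards against over-bright values produced by summed specular terms:

```python
# Convert a color channel from [0.0, 1.0] to an integer in [0, 255],
# clamping out-of-range values before writing to the output file.

def to_byte(intensity):
    return max(0, min(255, int(round(intensity * 255))))

pixel = (to_byte(0.0), to_byte(1.0), to_byte(1.7))
# (0, 255, 255): the over-bright blue channel is clamped
```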

### Output File

The output file will consist of three intensity values (Red, Green, and Blue) for each pixel. For a system with a 24-bit framebuffer this file could be directly displayed. However, for a system with an 8-bit framebuffer, the 24-bit image must be converted to an 8 bit image, which can then be displayed.

Slide 31 : 31 / 49 : Computing a Reflected Ray

# Computing a Reflected Ray

### NB : ||Rin|| = ||Rout|| = 1

Rin and Rout are unit vectors

### Rout = Rin - 2N(N.Rin)

You can form a rhombus with Rin and Rout
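A minimal sketch of the reflection formula, assuming Rin points toward the surface and N is the unit normal:

```python
# Rout = Rin - 2 N (N . Rin), for unit Rin and unit normal N.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(r_in, n):
    d = dot(n, r_in)
    return tuple(r - 2.0 * d * nc for r, nc in zip(r_in, n))

# A straight-down ray on a floor with normal (0, 0, 1) bounces straight up:
assert reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)) == (0.0, 0.0, 1.0)

# A unit ray at 45 degrees mirrors about the normal:
s = 2 ** -0.5
assert reflect((s, 0.0, -s), (0.0, 0.0, 1.0)) == (s, 0.0, s)
```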

Slide 32 : 32 / 49 : Ray Plane Intersection

Slide 33 : 33 / 49 : Ray Sphere Intersection
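The standard ray-sphere test substitutes p(t) = o + t·d into |p − c|² = r² and solves the resulting quadratic for the smallest positive t. A sketch, assuming a normalized direction:

```python
import math

# Ray-sphere intersection: with a unit direction d the quadratic is
# t^2 + b*t + c = 0 where b = 2 (o - c).d and c = |o - c|^2 - r^2.
# Returns the smallest positive t, or None on a miss.

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))

def ray_sphere(o, d, center, r):
    oc = sub(o, center)
    b = 2.0 * dot(oc, d)
    c = dot(oc, oc) - r * r
    disc = b * b - 4.0 * c        # a == 1 for a unit direction
    if disc < 0:
        return None               # ray misses the sphere
    root = math.sqrt(disc)
    t = (-b - root) / 2.0         # nearer of the two hits
    if t > 0:
        return t
    t = (-b + root) / 2.0         # origin may be inside the sphere
    return t if t > 0 else None

# Unit sphere at the origin, ray from z = 5 looking down -z: hits at t = 4
assert ray_sphere((0, 0, 5), (0, 0, -1), (0, 0, 0), 1.0) == 4.0
```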

Slide 34 : 34 / 49 : Ray Triangle Intersection

Slide 35 : 35 / 49 : Ray Trace Java Demo Program

# Ray Trace Java Demo Program

Slide 36 : 36 / 49 : Raytracer Example

# Example

The source code for the applet can be found here.

• ### Hard to accelerate with special-purpose H/W

Slide 37 : 37 / 49 : Ray Tracing: History

# Ray Tracing: History

• ### Ray tracing architecture

Slide 38 : 38 / 49 : Acceleration Methods

# Acceleration Methods

## Among the important results in this area are:

• ### Spatial Subdivision

Slide 39 : 39 / 49 : Bounding Volumes

# Bounding Volumes

### However, spheres do not usually give a very tight fitting bounding volume. More frequently, axis-aligned bounding boxes are used. Clearly, hierarchical or nested bounding volumes can be used for even greater advantage.
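A bounding-volume query for an axis-aligned box is commonly done with the slab method: the ray is clipped against each pair of parallel planes, and the box is hit only if the resulting parameter intervals overlap. A sketch:

```python
# Ray vs. axis-aligned bounding box (slab method). Returns True when
# the ray may hit the box's contents, i.e. the expensive per-object
# intersection tests inside the box are worth running.

def ray_aabb(origin, direction, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not (lo <= o <= hi):
                return False          # parallel to, and outside, this slab
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False              # slab intervals do not overlap
    return t_far >= 0                 # box is not entirely behind the ray

assert ray_aabb((0, 0, 5), (0, 0, -1), (-1, -1, -1), (1, 1, 1))
assert not ray_aabb((0, 5, 5), (0, 0, -1), (-1, -1, -1), (1, 1, 1))
```

The same routine, applied to nested boxes, gives the hierarchical bounding-volume traversal mentioned above.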

Slide 40 : 40 / 49 : Spatial Subdivision

# Spatial Subdivision

## Idea: Divide space into subregions

• ### Octree or BSP-Trees can be applied here

Slide 41 : 41 / 49 : Shadow Buffers

## Idea:

• ### Keep a list of objects at each subdivision cell

Slide 42 : 42 / 49 : Radiosity

### RMIT CG lectures

References:
Cohen and Wallace, Radiosity and Realistic Image Synthesis
Sillion and Puech, Radiosity and Global Illumination
Thanks to Leonard McMillan for the slides
Thanks to François Sillion for images

## Why ?

### ... a sculpture by John Ferren.

A powerful demonstration introduced by Goral et al. of the differences between radiosity and traditional ray tracing is provided by a sculpture by John Ferren. The sculpture consists of a series of vertical boards painted white on the faces visible to the viewer. The back faces of the boards are painted bright colors. The sculpture is illuminated by light entering a window behind the sculpture, so light reaching the viewer first reflects off the colored surfaces, then off the white surfaces before entering the eye. As a result, the colors from the back boards “bleed” onto the white surfaces.

Slide 43 : 43 / 49 : Radiosity (2)

Original sculpture lit by daylight from the rear.

Ray traced image. A standard ray tracer cannot simulate the interreflection of light between diffuse surfaces.

note color bleeding effects.

Slide 44 : 44 / 49 : Ray Tracing Vs Radiosity

## Ray tracing is an image-space algorithm, while radiosity is computed in object-space.

### Ray Tracing : Specular reflection

Because the solution is limited by the view, ray tracing is often said to provide a view-dependent solution, although this is somewhat misleading in that it implies that the radiance itself is dependent on the view, which is not the case. The term view-dependent refers only to the use of the view to limit the set of locations and directions for which the radiance is computed.

Slide 45 : 45 / 49 : Radiosity Introduction

### The radiosity, B, of a patch is the total rate of energy leaving a surface and is equal to the sum of the emitted and reflected energies.

Radiosity was used for Quake II.

Slide 46 : 46 / 49 : Radiosity Introduction

# Solving the rendering equation

### Ray tracing computes L [D] S* E
### Photon tracing computes L [D | S]* E
### Radiosity only computes L [D]* E

Slide 47 : 47 / 49 : Continuous Radiosity equation

### No analytical solution, even for simple configurations

Slide 48 : 48 / 49 : Discrete Radiosity equation