eScience Lecture Notes : Global Illumination Models


Slide 1 : Global Illumination Models

Global Illumination Models

Physically Based Illumination, Ray Tracing and Radiosity

Introduction

Usual Graphical Pipeline

Z-Buffer Algorithm

Path Notation

Path Notation (2)

The Rendering Equation (à la Kajiya - Siggraph 86)

The Rendering Equation (à la Kajiya - Siggraph 86 - 2)

Revisiting Phong's Illumination Model

Desiderata

Better Illuminance Models

Cook-Torrance Illumination

Microfacet Distribution Function

Geometric Attenuation Factor

Fresnel Reflection

Fresnel Reflection Equation

A Plot of the Fresnel Factor

Energy Conserving Approaches

Definitions

Irradiance

Bidirectional Reflectance Distribution Function (BRDF)

BRDF Approaches

Remaining Hard Problems

Ray Tracing

Ray Tracing: History

Ray Casting

Ray Path

Recursive Ray Tracing

Maximum recursion depth

Ray Tracing Architecture

Computing a Reflected Ray

Ray Plane Intersection

Ray Sphere Intersection

Ray Triangle Intersection

Ray Trace Java Demo Program

Example and Advantages of Ray Tracing

Acceleration Methods

Bounding Volumes

Spatial Subdivision

Shadow Buffers

Radiosity

Radiosity (2) : Because...

Ray Tracing Vs Radiosity

Radiosity Introduction

Solving the rendering equation

Continuous Radiosity equation

Discrete Radiosity equation

Radiosity OverView Part 1

Radiosity OverView Part 2

Radiosity OverView Part 3

Radiosity OverView Part 4

Radiosity OverView Part 5 : Remarks


Slide 2 : Global Illumination Models

Global Illumination Models

Physically Based Illumination, Ray Tracing and Radiosity

Usual Graphics Pipeline and its Z-buffer algorithm

Different classifications of global illumination models

Raytracing

Radiosity


Slide 3 : Usual Graphical Pipeline

Usual Graphical Pipeline

Graphics Pipeline Review

Properties of the Graphics Pipeline


Slide 4 : Z-Buffer Algorithm

Z-Buffer Algorithm

N.B : Case of the Painter's Algorithm : objects are painted from back-to-front

The painter's algorithm, sometimes called depth-sorting, gets its name from the process by which an artist renders a scene using oil paints. First, the artist paints the background colors of the sky and ground. Next, the most distant objects are painted, then the nearer objects, and so forth. Note that oil paints are basically opaque, so each sequential layer completely obscures the layer that it covers. A very similar technique can be used for rendering objects in a three-dimensional scene. First, the list of surfaces is sorted according to their distance from the viewpoint. The objects are then painted from back to front.
While this algorithm seems simple, there are many subtleties. The first issue is which depth value to sort by: in general a primitive is not entirely at a single depth, so we must choose some point on the primitive to sort by.
1. Sort by the minimum depth extent of the polygon
2. Sort by the maximum depth extent of the polygon
3. Sort by the polygon's centroid ( Sum(v_i, i = 1..N) / N )
But the main issue is that cyclic overlaps can occur: Triangle1 covers part of Triangle2, which covers part of Triangle3, which in turn covers part of Triangle1, so no valid back-to-front ordering exists.

The basic idea is to test the z-depth of each surface to determine the closest (visible) surface. Declare an array z_buffer(x, y) with one entry for each pixel position and initialize the array to the maximum depth. Note: if you have performed a perspective depth transformation, then all z values satisfy 0.0 <= z(x, y) <= 1.0, so initialize all values to 1.0. Then the algorithm is as follows:

z-buffer algorithm

for each polygon P
  for each pixel (x, y) covered by P
    compute z_depth of P at (x, y)
    if z_depth < z_buffer(x, y) then
       set_pixel(x, y, color of P at (x, y))
       z_buffer(x, y) <- z_depth
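
A minimal Java sketch of the same per-pixel test (the rasterization step that produces the depth and color of each covered pixel is assumed to exist elsewhere; names are illustrative):

void zBufferWrite(float[][] zBuffer, int[][] frameBuffer,
                  int x, int y, float zDepth, int color) {
    // keep this fragment only if it is closer than what is already stored
    if (zDepth < zBuffer[y][x]) {
        frameBuffer[y][x] = color;    // set_pixel(x, y, color)
        zBuffer[y][x] = zDepth;       // remember the new closest depth
    }
}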

 

Advantages of z-buffer algorithm:

Disadvantages:

 


Slide 5 : Path Notation

Path Notation

How do we accurately simulate all light interactions between objects?

  • Diffuse to Diffuse

  • Specular to Diffuse

  • Diffuse to Specular

  • Specular to Specular

  • D - diffuse reflection or transmission

  • G - glossy reflection or transmission

  • S - specular reflection or refraction

Which are handled by ray tracing? Which by radiosity?

 


Slide 6 : Path Notation (2)

Path Notation (2)

An accurate method must handle all four types : L (D|S|G)* E

Local Illumination Model : L (D|S) E

First, let’s introduce some notation for paths. Each path is terminated by the eye
and a light.
E - the eye.
L - the light.
Each bounce involves an interaction with a surface. We characterize the interaction
as either reflection or transmission. There are different types of reflection and
transmission functions. At a high-level, we characterize them as
D - diffuse reflection or transmission
G - glossy reflection or transmission
S - specular reflection or refraction
Diffuse implies that light is equally likely to be scattered in any direction. Specular
implies that there is a single direction; that is, given an incoming direction there is
a unique outgoing direction. Finally, glossy is somewhere in between.
Particular ray-tracing techniques may be characterized by the paths that they
consider.
Appel Ray casting: E(D | G)L
Whitted Recursive ray tracing: E[S*](D | G)L
Kajiya Path Tracing: E[(D | G | S) + (D | G)]L
Goral Radiosity: ED*L
The set of traced paths is specified using regular expressions, as was first proposed
by Shirley. Since all paths must involve a light L, the eye E, and at least one
surface, all paths have length at least equal to 3.
A nice thing about this notation is that it is clear when certain types of paths
are not traced, and hence when certain types of light transport are not considered
by the algorithm. For example, Appel’s algorithm only traces paths of length 3,
ignoring longer paths; thus, only direct lighting is considered. Whitted’s algorithm
traces paths of any length, but all paths begin with a sequence of 0 or more mirror
reflection and refraction steps. Thus, Whitted’s technique ignores paths such as
the following EDSDSL or E(D | G)* L. Distributed ray tracing and path tracing
includes multiple bounces involving non-specular scattering such as E(D | G)* L.
However, even these methods ignore paths of the form E(D | G)S*L; that is, multiple
specular bounces from the light source, as in a caustic. Obviously, any technique
that ignores whole classes of paths will not correctly compute the solution to the
rendering equation.


Radiosity : L D* E

Ray tracing: L (D)? S* E

Combining Radiosity and Ray Tracing

 


First Pass - formalized by Rushmeier and Torrance

Diffuse Transmission

Specular Transmission

Specular Reflection

With these extensions, we can now account for:

Once this pass is complete, we then perform the 2nd pass to compute specular - specular and diffuse - specular

Specular - specular is given by ray tracing

For diffuse - specular, we would need to send out many rays from the point through the hemisphere around the point, weight the rays by the bidirectional specular reflectivity, then sum them together.


Slide 7 : The Rendering Equation

The Rendering Equation (à la Kajiya - Siggraph 86)

An attempt to unify rendering, so that all rendering algorithms share a common basic model.
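
In Kajiya's formulation, the intensity I(x, x') of light passing from point x' to point x satisfies

    I(x, x') = g(x, x') \left[ \epsilon(x, x') + \int_S \rho(x, x', x'')\, I(x', x'')\, dx'' \right]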

where: g(x, x') is a geometry/visibility term (zero when x and x' cannot see each other, otherwise falling off as 1/r^2), ε(x, x') is the light emitted from x' towards x, and ρ(x, x', x'') is the reflectance term: the light scattered towards x by the surface at x' from light arriving from x''.


This is of course a recursive definition !

Complexity => practical solutions are approximations

View-independent statement of the problem


we can rewrite this equation as

    I = g\epsilon + R I

where R is the linear integral operator defined by (R I)(x, x') = g(x, x') \int_S \rho(x, x', x'')\, I(x', x'')\, dx''.

rearranging terms gives:

    (1 - R) I = g\epsilon, \qquad I = (1 - R)^{-1} g\epsilon = g\epsilon + R\, g\epsilon + R^2 g\epsilon + \cdots

Each successive term of this series accounts for one more bounce of light.


Local Reflection Models


only first 2 terms are used

X is the eyepoint

the g(epsilon) term is non-zero only for light sources

R1 operates on (epsilon) rather than g, so shadows are not computed

Basic Ray Tracing

Radiosity

  • by performing transformations outlined on page 293 of the text, we get

    The Extended Two-Pass Algorithm (Sillion 1989)


  • uses the rendering equation as the basis
  • does not place the restriction Wallace does of making specular surfaces perfect planar mirrors

    The general equation used is:

  • the visibility function g is incorporated into the reflection operator R.
    
    ρ(x, x', x'') = ρ_d(x') + ρ_s(x, x', x'')

    where ρ is the bidirectional reflectivity function, ρ_d its diffuse component and ρ_s its specular component.
    

    In the first pass, extended form factors are used to compute the diffuse-to-diffuse interactions that have any number of specular transfers in between

    extended form factors: Diffuse - specular* - diffuse

    The 2nd pass uses standard ray tracing to compute specular transfer


    Slide 8 : The Rendering Equation

    The Rendering Equation (2)

     

    This is of course a recursive definition !

    Complexity => practical solutions are approximations

    View-independent statement of the problem


    Slide 9 : Revisiting Phong's Illumination Model

    Revisiting Phong's Illumination Model


    Slide 10 : Desiderata

    Desiderata


    Slide 11 : Better Illuminance Models

    Better Illuminance Models


    Slide 12 : Cook-Torrance Illumination

    Cook-Torrance Illumination

    Definitions:
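
    The model works with the unit vectors N (the surface normal), L (towards the light), V (towards the viewer) and H (the half-vector between L and V), and combines three factors defined on the next slides: the microfacet distribution D, the geometric attenuation G and the Fresnel term F. In Cook and Torrance's paper the specular reflectance is

        R_s = \frac{F}{\pi} \, \frac{D \, G}{(N \cdot L)(N \cdot V)}

    and the full reflectance mixes this with a diffuse (Lambertian) term, R = d R_d + s R_s with d + s = 1.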


    Slide 13 : Microfacet Distribution Function

    Microfacet Distribution Function
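
    The microfacet distribution D gives the fraction of facets oriented in the direction of the half-vector H. A common choice, used by Cook and Torrance, is the Beckmann distribution, where α is the angle between N and H and m is the RMS slope (roughness) of the facets:

        D = \frac{1}{m^2 \cos^4\alpha} \, e^{-(\tan\alpha / m)^2}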


    Slide 14 : Geometric Attenuation Factor

    Geometric Attenuation Factor

    There are many different ways that an incoming beam of light can interact with the surface locally....

    The entire beam can simply reflect.

    A portion of the out-going beam can be blocked.

    A portion of the incoming beam can be blocked.
    Cook called this self-shadowing.

       
       
       


    In each case, the geometric configurations can be analyzed to compute the percentage of light that actually escapes from the surface. Blinn first did this analysis. The results are:
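
    In terms of the normal N, the light and view directions L and V, and the half-vector H:

        G = \min\left\{ 1,\; \frac{2 (N \cdot H)(N \cdot V)}{V \cdot H},\; \frac{2 (N \cdot H)(N \cdot L)}{V \cdot H} \right\}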

    The geometric attenuation factor G takes the smallest of these terms, i.e. the case in which the least light escapes, as the local self-shadowing model.

     

     


    Slide 15 : Fresnel Reflection

    Fresnel Reflection

    The Fresnel term results from a complete analysis of the reflection process while considering light as an electromagnetic wave.

    The electric field of light has a magnetic field associated with it (hence the name electromagnetic).

    The magnetic field is always orthogonal to the electric field and the direction of propagation. Over time the orientation of the electric field may rotate. If the electric field is oriented in a particular constant direction it is called polarized.

    The behavior of reflection depends on how the incoming electric field is oriented relative to the surface at the point where the field makes contact.

    This variation in reflectance is called the Fresnel effect.


    Slide 16 : Fresnel Reflection

    Fresnel Reflection

    The Fresnel effect is wavelength dependent. Its behavior is determined by the index of refraction of the material (taken as a complex value to allow for attenuation). This effect explains the variation in colors seen in specular regions, particularly on metals (conductors).

    It also explains why most surfaces approximate mirror reflectors when the light strikes them at a grazing angle. This version of the equation ignores the polarization of the incoming and reflected rays.
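
    The unpolarized form used by Cook and Torrance, with c = V · H and g^2 = n^2 + c^2 - 1 (n being the index of refraction), is

        F = \frac{1}{2} \, \frac{(g - c)^2}{(g + c)^2} \left[ 1 + \left( \frac{c(g + c) - 1}{c(g - c) + 1} \right)^2 \right]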

    No mirage without Fresnel


    Slide 17 : A Plot of the Fresnel Factor

    A Plot of the Fresnel Factor


    Slide 18 : Energy Conserving Approaches

    Energy Conserving Approaches

    There are still noticeable flaws in physically based models.

    Light_out = Light_emitted + Light_in


    Slide 19 : Definitions

    Definitions

    Hyperphysics : http://hyperphysics.phy-astr.gsu.edu/hbase/vision/photomcon.html


    Slide 20 : Irradiance

    Irradiance

    Irradiance is power per unit area incident from all directions in a hemisphere onto a surface that coincides with the base of that hemisphere.

    The irradiance function

    is a two dimensional function describing the incoming light energy impinging on a given point.
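
    Written out (standard notation, assumed here), with the incoming radiance L_i parameterized by the two hemisphere angles (θ_i, φ_i), the irradiance at a point x is

        E(x) = \int_{\Omega} L_i(x, \theta_i, \phi_i) \cos\theta_i \, d\omega_i, \qquad d\omega_i = \sin\theta_i \, d\theta_i \, d\phi_i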

     

     


    Slide 21 : BRDF

    Bidirectional Reflectance Distribution Function (BRDF)

    A BRDF relates light incident in a given direction to light reflected along a second direction for a given material.
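
    Formally (standard notation, assumed here), the BRDF is the ratio of the radiance reflected into the outgoing direction (θ_r, φ_r) to the irradiance arriving from the incoming direction (θ_i, φ_i):

        f_r(\theta_i, \phi_i; \theta_r, \phi_r) = \frac{dL_r(\theta_r, \phi_r)}{L_i(\theta_i, \phi_i) \cos\theta_i \, d\omega_i}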


    Slide 22 : BRDF Approaches

    BRDF Approaches

     

     

     

    Physically-based models

     

     

    Measured BRDFs


    Slide 23 : Remaining Hard Problems

    Remaining Hard Problems

     

    • Reflective Diffraction Effects

      • thin films

      • feathers of a blue jay, butterflies

      • oil on water

      • CDs

    • Anisotropy

      • brushed metals

      • strands pulled materials

      • Satin and velvet cloths

     


    Slide 24 : Ray Tracing

    Ray Tracing

    Effects needed for Realism

    • Shadows

    • Reflections (Mirrors)

    • Transparency

    • Interreflections

    • Detail (Textures etc.)

    • Complex Illumination

    • Realistic Materials

    The light of Mies van der Rohe / Modeling: Stephen Duck / Rendering: Henrik Wann Jensen

    Three ideas about light

    1. Light rays travel in straight lines

    2. Light rays do not interfere with each other if they cross

    3. Light rays travel from the light sources to the eye, but the physics is invariant under path reversal (reciprocity).

    Ray Tracing is a global illumination based rendering method. It traces rays of light from the eye back through the image plane into the scene. Then the rays are tested against all objects in the scene to determine if they intersect any objects. If the ray misses all objects, then that pixel is shaded the background color. Ray tracing handles shadows, multiple specular reflections, and texture mapping in a very easy straight-forward manner.

    Note that ray tracing, like scan-line graphics, is a point sampling algorithm. We sample a continuous image in world coordinates by shooting one or more rays through each pixel. Like all point sampling algorithms, this leads to the potential problem of aliasing, which is manifested in computer graphics by jagged edges or other nasty visual artifacts.

    In ray tracing, a ray of light is traced in a backwards direction. That is, we start from the eye or camera and trace the ray through a pixel in the image plane into the scene and determine what it hits. The pixel is then set to the color values returned by the ray.


    Slide 25 : Ray Tracing: History

    Ray Tracing: History

    • Appel 68

    • Whitted 80
      [recursive ray tracing]

      • Landmark in graphics

    • Lots of work on various geometric primitives

    • Lots of work on accelerations

    • Current Research

      • Real-Time raytracing (historically, slow technique)

      • Ray tracing architecture


    Slide 26 : Ray Casting

      Ray Casting

    • Shoot rays through pixels into the world

    • For each object in the display-list compute the intersection of the given ray

    • For each pixel,
      Find closest intersection in scene

    • Evaluate illumination model to color pixel

    Besides forward mapping, there are other ways to compute views of scenes defined by geometric primitives. One of the most common is ray casting.

    Ray-casting searches along lines of sight, or rays, to determine the primitive that is visible along it.

    Properties of ray-casting:

    • Go through all primitives at each pixel

    • Sample first

    • Analytic processing afterwards

    • Requires a display list

    • Per-pixel evaluation, per-pixel rays (not scan-convert each object)

    Usual Graphics Pipeline

     

    Ray Casting

    E (D | G) L

    In a ray-casting renderer the following process takes place.
    1. For each "Screen-space" pixel compute the equation of the "Viewing-space" ray.
    2. For each object in the display-list compute the intersection of the given ray
    3. Find the closest intersection if there is one
    4. Illuminate the point of intersection
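
    The four steps above, sketched in Java (the Camera, Ray, Hit, Primitive, Color and image types and the shade() helper are illustrative names assumed here, not taken from the course's demo program):

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            Ray ray = camera.rayThroughPixel(x, y);          // 1. viewing-space ray for this pixel
            Hit closest = null;
            for (Primitive obj : displayList) {              // 2. intersect against every object
                Hit h = obj.intersect(ray);
                if (h != null && (closest == null || h.t < closest.t))
                    closest = h;                             // 3. keep the closest intersection
            }
            Color c = (closest != null) ? shade(closest, ray) : background;
            image.setPixel(x, y, c);                         // 4. illuminate the hit point (or background)
        }
    }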


    Slide 27 : Ray paths

    Ray paths

    • LR*E

    • Arbitrary paths: realism

    • Trace from light or eye?

      • Most light rays don’t hit eye

      • Importance sampling

    • Eye Ray tracing

      • Primary Rays

      • Shadow Rays

      • Reflected/Transmitted Rays

    Appel 68

     

     


    Slide 28 : Recursive Ray Tracing

    Recursive Ray Tracing : Whitted Ray Tracing

    For each pixel

    • Trace Primary Ray, find intersection

    • Trace Shadow Ray(s) to light(s)

      • Color = Visible ? Illumination Model : 0

    • Trace Reflected Ray

      • Color += reflectivity * Color of reflected ray

    • Trace Transmitted Ray

      • Color += refractivity * Color of transmitted ray

    Recursive

    • Reflection rays may be traced forever

    • Maximum recursion depth

    • Stop at purely Diffuse surface

    • Stop when light is lost in the background

    • Stop when light intensity is below a given value

    E[S*](D | G)L

    Turner Whitted (1980)



    Figure from Andrew S. Glassner, "An Overview of Ray Tracing" in An Introduction to Ray Tracing, Andrew Glassner, ed., Academic Press Limited, 1989.

    A primary ray is shot through each pixel and tested for intersection against all objects in the scene. If there is an intersection with an object then several other rays are generated. Shadow rays are sent towards all light sources to determine if any objects occlude the intersection spot. In the figure below, the shadow rays are labeled Si and are sent towards the two light sources LA and LB. If the surface is reflective then a reflected ray, Ri, is generated. If the surface is not opaque, then a transmitted ray, Ti, is generated. Each of the secondary rays is tested against all the objects in the scene.

    The reflective and/or transmitted rays are continually generated until the ray leaves the scene without hitting any object or a preset recursion level has been reached. This then generates a ray tree, as shown below.

    The appropriate local illumination model is applied at each level and the resultant intensity is passed up through the tree, until the primary ray is reached. Thus we can modify the local illumination model by (at each tree node)

    I = I_local + K_r * R + K_t * T, where R is the intensity of light from the reflected ray and T is the intensity of light from the transmitted ray. K_r and K_t are the reflection and transmission coefficients. For a very specular surface, such as plastic, we sometimes do not compute a local intensity, I_local, but only use the reflected/transmitted intensity values.
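
    A sketch of this recursion in Java (the Scene, Ray, Hit, Light and Color types, the shadeLocal / reflectRay / refractRay helpers and the MAX_DEPTH, BLACK and BACKGROUND constants are assumed names, not the course's code):

    Color trace(Ray ray, int depth) {
        if (depth > MAX_DEPTH) return BLACK;                   // bound the recursion
        Hit hit = scene.closestIntersection(ray);
        if (hit == null) return BACKGROUND;                    // ray left the scene
        Color color = BLACK;
        for (Light light : scene.lights())                     // shadow rays
            if (scene.unoccluded(hit.point, light))
                color = color.add(shadeLocal(hit, light));     // local illumination if the light is visible
        if (hit.material.kr > 0)                               // reflected ray
            color = color.add(trace(reflectRay(ray, hit), depth + 1).scale(hit.material.kr));
        if (hit.material.kt > 0)                               // transmitted ray
            color = color.add(trace(refractRay(ray, hit), depth + 1).scale(hit.material.kt));
        return color;
    }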


    Slide 29 : Maximum recursion depth

    Maximum recursion depth

     The reflected rays can generate other reflected rays that can generate other reflected rays, etc. The next sequence of three images shows a simple scene with no reflection, a single reflection, and then a double reflection.

    Scene with no reflection rays
    Scene with one layer of reflection
    Scene with two layers of reflection
       

     


    Slide 30 : Ray Tracing Architecture

    Ray Tracing Architecture

    Pat Hanrahan

    Practical Considerations in Writing a Ray Tracer

    Process: for each pixel a primary ray will be generated and then tested against all objects in the scene.

    Create Model

    The first step is to create the model of the image. One should not hardcode objects into the program, but instead use an input file.

    Generate primary rays and test for object-ray intersections

    For each pixel we must generate a primary ray and test it for intersection with all of the objects in the scene. If there is more than one ray-object intersection, then we must choose the closest intersection (the smallest positive value of t). To ensure that no objects are intersected in front of the image plane (this is called near-plane clipping), we keep the distance along the primary ray to the screen and test all intersections against this distance. If the t value is less than this distance, then we ignore the object.

    Generate the reflection and transmission rays

    If there is an intersection then we must compute the shadow rays and the reflection rays.

    Shadow ray

    The shadow ray is a ray from the point of intersection to the light source. Its purpose is to determine if the intersection point is in the shadow of a particular light. There should be one shadow ray for each light source. The origin of the shadow ray is the intersection point and the direction vector is the normalized vector between the intersection point and the position of the light source. Note that this is the same as the light vector (L) that is used to compute the local illumination.

    Local Illumination

    Compute the local illumination at each point and carry it back to the next level of the ray tree, so that the intensity is I = I_local + K_r * R + K_t * T. Note that K_r can be taken to be the same as K_s.
    For each color channel (R, G, B), I is in the range 0.0 <= I <= 1.0. This must be converted to an integer value in the range 0 <= I <= 255. The result is then written to the output file.

    Output File

    The output file will consist of three intensity values (Red, Green, and Blue) for each pixel. For a system with a 24-bit framebuffer this file could be directly displayed. However, for a system with an 8-bit framebuffer, the 24-bit image must be converted to an 8 bit image, which can then be displayed. A suggested format for the output file is the Microsoft Windows 24-bit BMP image file format.

     


    Slide 31 : Computing a Reflected Ray

    Computing a Reflected Ray

    NB : ||R_in|| = ||R_out|| = 1

    R_in and R_out are unit vectors.

    The projection of R_in onto N is (N · R_in) N = -N cos(θ), so

    R_out - R_in = 2 N cos(θ) = -2 (N · R_in) N

    R_out = R_in - 2 (N · R_in) N

    Because R_in and R_out are unit vectors, -R_in and R_out span a rhombus (a "losange") whose diagonal lies along N.
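
    A direct transcription of this formula in Java (vectors as plain double[3] arrays; N is assumed to be unit length):

    static double[] reflect(double[] rIn, double[] n) {
        double k = 2.0 * dot(n, rIn);                 // 2 (N . Rin)
        return new double[] { rIn[0] - k * n[0],      // Rout = Rin - 2 (N . Rin) N
                              rIn[1] - k * n[1],
                              rIn[2] - k * n[2] };
    }

    static double dot(double[] u, double[] v) {
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    }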


    Slide 32 : Ray Plane Intersection
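
    For reference (standard notation, assumed here): for a plane with normal N satisfying N · P + d = 0 and a ray P(t) = O + t D, substituting the ray into the plane equation gives

        t = \frac{-(N \cdot O + d)}{N \cdot D}

    The intersection exists when N · D ≠ 0 (the ray is not parallel to the plane) and lies in front of the ray origin when t > 0.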


    Slide 33 : Ray Sphere Intersection
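
    For reference: substituting the ray P(t) = O + t D into the sphere equation |P - C|^2 = r^2 gives the quadratic a t^2 + b t + k = 0 with a = D · D, b = 2 D · (O - C) and k = |O - C|^2 - r^2. A Java sketch returning the nearest positive root (illustrative, not the course's demo code):

    static double hitSphere(double[] o, double[] d, double[] c, double r) {
        double[] oc = { o[0] - c[0], o[1] - c[1], o[2] - c[2] };
        double a = dot(d, d);
        double b = 2.0 * dot(oc, d);
        double k = dot(oc, oc) - r * r;
        double disc = b * b - 4.0 * a * k;
        if (disc < 0) return -1.0;                    // the ray misses the sphere
        double s = Math.sqrt(disc);
        double t1 = (-b - s) / (2.0 * a);             // nearer root first
        double t2 = (-b + s) / (2.0 * a);
        if (t1 > 1e-9) return t1;                     // hit in front of the ray origin
        if (t2 > 1e-9) return t2;                     // the ray origin is inside the sphere
        return -1.0;                                  // the sphere is behind the ray
    }

    static double dot(double[] u, double[] v) {
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    }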


    Slide 34 : Ray Triangle Intersection
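
    For reference (a standard approach, e.g. Möller-Trumbore): with triangle vertices V_0, V_1, V_2, solve

        O + t D = (1 - u - v) V_0 + u V_1 + v V_2

    for t and the barycentric coordinates (u, v); the ray hits the triangle when t > 0, u >= 0, v >= 0 and u + v <= 1. Equivalently, intersect the ray with the triangle's supporting plane (as on the previous slide) and then test whether the hit point lies inside the three edges.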


    Slide 35 : Ray Trace Java Demo Program

    Ray Trace Java Demo Program

     


    Slide 36 : Raytracer Example

    Example

    The source code for the applet can be found here.

    Advantages of Ray Tracing:

    Disadvantages:


    Slide 37 : Acceleration Methods

    Acceleration Methods

    The rendering time for a ray tracer depends on the number of ray intersection tests that are required at each pixel. This is roughly dependent on the number of primitives in the scene times the number of pixels. Early on, significant research effort was spent developing methods for accelerating the ray-object intersection tests.

    Among the important results in this area are:


    Slide 38 : Bounding Volumes

    Bounding Volumes

    Enclose complex objects within a simple-to-intersect object. If the ray does not intersect the simple object then its contents can be ignored. If the ray does intersect the bounding volume it may or may not intersect the enclosed object. The likelihood that it will strike the object depends on how tightly the volume surrounds the object.

    Spheres were one of the first bounding volumes used in raytracing, because of their simple ray-intersection and the fact that only one is required to enclose a volume.

    However, spheres do not usually give a very tight fitting bounding volume. More frequently, axis-aligned bounding boxes are used. Clearly, hierarchical or nested bounding volumes can be used for even greater advantage.
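
    A ray versus axis-aligned box test is usually done with the "slab" method; a Java sketch (illustrative names, double[3] vectors; division by a zero direction component yields infinities, which the min/max comparisons handle in the common cases):

    static boolean hitsBox(double[] o, double[] d, double[] boxMin, double[] boxMax) {
        double tNear = Double.NEGATIVE_INFINITY, tFar = Double.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {                       // the x, y and z slabs
            double t1 = (boxMin[i] - o[i]) / d[i];
            double t2 = (boxMax[i] - o[i]) / d[i];
            tNear = Math.max(tNear, Math.min(t1, t2));      // latest entry into a slab
            tFar  = Math.min(tFar,  Math.max(t1, t2));      // earliest exit from a slab
        }
        return tNear <= tFar && tFar >= 0;                  // entry before exit, in front of the ray
    }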

     

     


    Slide 39 : Spatial Subdivision

    Spatial Subdivision

    Idea: Divide space into subregions


    Slide 40 : Shadow Buffers

    Shadow Buffers

    A significant portion of the object-ray intersections are used to compute shadow rays.

    Idea:


    Slide 41 : Radiosity

    Radiosity

    Reference:

    Radiosity OverView

    SIGGRAPH 1993 Education Slide Set, by Stephen Spencer

    RMIT CG lectures

    References:
    Cohen and Wallace, Radiosity and Realistic Image Synthesis
    Sillion and Puech, Radiosity and Global Illumination
    Thanks to Leonard McMillan for the slides
    Thanks to François Sillion for images

    Why ?

    ... a sculpture by John Ferren.

    A powerful demonstration introduced by Goral et al. of the differences between radiosity and traditional ray tracing is provided by a sculpture by John Ferren. The sculpture consists of a series of vertical boards painted white on the faces visible to the viewer. The back faces of the boards are painted bright colors. The sculpture is illuminated by light entering a window behind the sculpture, so light reaching the viewer first reflects off the colored surfaces, then off the white surfaces before entering the eye. As a result, the colors from the back boards “bleed” onto the white surfaces.


    Slide 42 : Radiosity (2)

    Radiosity (2) : Because...

    Original sculpture lit by daylight from the rear.

    Ray traced image. A standard ray tracer cannot simulate the interreflection of light between diffuse surfaces.

    Image rendered with radiosity.
    note color bleeding effects.


    Slide 43 : Ray Tracing Vs Radiosity

    Ray Tracing Vs Radiosity

    Ray tracing is an image-space algorithm, while radiosity is computed in object-space.

    Ray Tracing : From Eye to Light

    Radiosity : From Light to Surface : Complete solution

    Ray Tracing : Pseudo View Dependent Solution

    Ray Tracing : Specular reflection

    Radiosity : Diffuse reflection

    Because the solution is limited by the view, ray tracing is often said to provide a view-dependent solution, although this is somewhat misleading in that it implies that the radiance itself is dependent on the view, which is not the case. The term view-dependent refers only to the use of the view to limit the set of locations and directions for which the radiance is computed.


    Slide 44 : Radiosity Introduction

    Radiosity Introduction

    The radiosity approach to rendering has its basis in the theory of heat transfer. This theory was applied to computer graphics in 1984 by Goral et al.

    Surfaces in the environment are assumed to be perfect (or Lambertian) diffusers, reflectors, or emitters. Such surfaces are assumed to reflect incident light in all directions with equal intensity.

    A formulation for the system of equations is facilitated by dividing the environment into a set of small areas, or patches. The radiosity is assumed to be constant over each patch.

    The radiosity, B, of a patch is the total rate of energy leaving a surface and is equal to the sum of the emitted and reflected energies:
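
    In the usual per-patch notation this is written

        B_i = E_i + \rho_i \sum_j F_{ij} B_j

    where B_i is the radiosity of patch i, E_i its emission, ρ_i its diffuse reflectivity and F_ij the form factor: the fraction of energy leaving patch i that arrives at patch j.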
    Radiosity was used for Quake II


    Slide 45 : Radiosity Introduction

    Solving the rendering equation

    L is the radiance leaving a point on a surface in a given direction
    E is the emitted radiance from the point: E is non-zero only if x' is emissive
    V is the visibility term: 1 when the two surfaces are mutually unobstructed along the direction between them, 0 otherwise
    G is the geometry term, which depends on the geometric relationship between the two surfaces x and x'
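
    With these terms (and with ρ_x' the BRDF at x' and ω the direction from x towards x', symbols assumed here) the equation reads

        L(x', \omega') = E(x', \omega') + \int_S \rho_{x'}(\omega, \omega') \, L(x, \omega) \, G(x, x') \, V(x, x') \, dA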

    Photon tracing uses sampling and Monte Carlo integration. Radiosity uses finite elements:
    project onto a finite set of basis functions (piecewise constant)

    Ray tracing computes L [D] S* E
    Photon tracing computes L [D | S]* E
    Radiosity only computes L [D]* E


    Slide 46 : Continuous Radiosity equation

    Continuous Radiosity equation
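
    For Lambertian surfaces the continuous equation can be written as follows, where θ and θ' are the angles between the line joining x and x' and the respective surface normals, and V(x, x') is the visibility term:

        B(x) = E(x) + \rho(x) \int_S B(x') \, \frac{\cos\theta \, \cos\theta'}{\pi \, \|x - x'\|^2} \, V(x, x') \, dA'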

     


    Slide 47 : Discrete Radiosity equation

    Discrete Radiosity equation
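
    Projecting the continuous equation onto constant (per-patch) basis functions over N patches gives

        B_i = E_i + \rho_i \sum_{j=1}^{N} F_{ij} B_j, \qquad F_{ij} = \frac{1}{A_i} \int_{A_i} \int_{A_j} \frac{\cos\theta_i \, \cos\theta_j}{\pi \, r^2} \, V \, dA_j \, dA_i

    i.e. an N x N linear system (I - diag(ρ) F) B = E in the unknown patch radiosities B_i.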

     


    CAREFUL : There is a subdirectory to explore/print : radiosity