Assignment 5: Pathtrace

Out: Dec 8. Due: Jan 13.


In your fifth assignment, you will implement a pathtracer. You will see that with a small amount of code, we can produce realistic images.

You are to perform this assignment using C++. To ease your development, we provide a simple C++ framework to represent the scene, perform basic mathematical calculations, and save your image results. The framework also contains simple test scenes for judging the correctness of your algorithm. These test scenes are built straight into the app: just running the application will render them, and you can compare the results to the correct output images we supply. To build the framework, you can use either Visual Studio 2013 RC on Windows or Xcode 5 on OS X. We use external libraries to interact with the system, including GLFW for window management and GLEW to simplify access to OpenGL on Windows.

Framework Overview

We suggest you use our framework to create your renderer. We have removed from the code all the function implementations you will need to provide, but we have left the function declarations, which can aid you in planning your solution. All code in the framework is documented, so please read the documentation for further information. What follows is a brief description of the contents of the framework.

In this homework, scenes become more complex. A Scene comprises a Camera, a list of Meshes, a list of Surfaces, and a list of Lights. The Camera is defined by its frame, the size and distance of the image plane, and the focus distance (used for interaction). Each Mesh is a collection of points, lines, triangles, or quads, centered with respect to its frame and colored according to a Blinn-Phong Material with diffuse and specular coefficients as well as an emission term for area lights. Each Mesh is represented as an indexed polygonal mesh, with vertex positions, normals, and texture coordinates. Each Surface is either a quad or a sphere of a given radius. Each Light is a point light centered with respect to its frame and with a given intensity. The scene also includes the background color, the ambient illumination, the image resolution, and the number of samples per pixel.

In this homework, model geometry is read from RAW files, a simple file format we created for the course so that parsing in C++ is trivial. This geometry is stored in the models directory.

Since we perform a lot of computation, we strongly suggest you always compile in Release mode and modify the scenes while debugging, at least by reducing the number of samples. Also, we provide a solution that runs the code in parallel based on the available hardware resources. While this might make debugging confusing, we felt it was important to provide the fastest execution possible. To disable it, just change the call in pathtrace::main.


You are to implement the code left blank in pathtrace.cpp. In this homework, we provide code for a standard raytracer that you can modify to reach the pathtracer. You will implement the following features.

  1. Basic random tracer. Modify the standard raytracer to use a random number generator to place the samples within each pixel.
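As a rough sketch of this step (the names Sample2 and sample_pixel are illustrative, not framework functions, and the actual camera interface may differ), jittered sub-pixel sampling could look like:

```cpp
#include <cassert>
#include <random>
#include <vector>

// Hypothetical sketch: replacing the fixed pixel-center sample with
// uniformly random sample positions inside pixel (i, j).
struct Sample2 { double u, v; };

// Return n random sample positions inside pixel (i, j), in pixel
// coordinates; dividing by the image resolution yields image-plane uv.
std::vector<Sample2> sample_pixel(int i, int j, int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<Sample2> samples;
    samples.reserve(n);
    for (int s = 0; s < n; ++s) {
        // each sample is the pixel corner plus a uniform random offset in [0,1)^2
        samples.push_back({ i + uni(rng), j + uni(rng) });
    }
    return samples;
}
```

Averaging the radiance over these samples, instead of shooting one ray through the pixel center, antialiases the image and is the basis for all the Monte Carlo estimates below.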

  2. Textures. Implement bilinear texture lookup in the renderer. For each material property, scale the value by the texture if present.
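One possible shape for the lookup (the Color3 type and row-major texel storage are assumptions; the framework's image type will differ) is:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of bilinear texture filtering: uv in [0,1]^2
// addresses a row-major texel array, clamping at the borders.
struct Color3 { double r, g, b; };

Color3 lookup_bilinear(const std::vector<Color3>& texels, int w, int h,
                       double u, double v) {
    // continuous texel coordinates, with texel centers at half-integers
    double x = u * w - 0.5;
    double y = v * h - 0.5;
    int x0 = std::clamp((int)std::floor(x), 0, w - 1);
    int y0 = std::clamp((int)std::floor(y), 0, h - 1);
    int x1 = std::min(x0 + 1, w - 1);
    int y1 = std::min(y0 + 1, h - 1);
    double fx = std::clamp(x - x0, 0.0, 1.0);
    double fy = std::clamp(y - y0, 0.0, 1.0);
    auto at = [&](int xi, int yi) { return texels[yi * w + xi]; };
    auto mix = [](const Color3& a, const Color3& b, double t) {
        return Color3{ a.r + (b.r - a.r) * t,
                       a.g + (b.g - a.g) * t,
                       a.b + (b.b - a.b) * t };
    };
    // blend the four neighboring texels along x, then along y
    return mix(mix(at(x0, y0), at(x1, y0), fx),
               mix(at(x0, y1), at(x1, y1), fx), fy);
}
```

The filtered color then multiplies the corresponding material coefficient (diffuse, specular, or emission) before shading.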

  3. Area lights. Implement area light sampling for quad surfaces, using uniform sampling over the quad surface. Check path_shadows to see whether shadows are enabled.
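A minimal sketch of uniform quad sampling, assuming (as in the scene description above) the quad is centered in its frame with half-size radius along the frame's in-plane axes; Vec3, QuadLight, and sample_quad are illustrative stand-ins, not framework types:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 add(Vec3 a, Vec3 b)      { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 a, double s)  { return { a.x * s, a.y * s, a.z * s }; }

struct QuadLight {
    Vec3 center, axis_x, axis_y;  // frame origin and (unit) in-plane axes
    double radius;                // half-size of the quad
};

// Pick a point uniformly on the quad from (u, v) in [0,1)^2
// and report the pdf with respect to area (1 / area).
Vec3 sample_quad(const QuadLight& q, double u, double v, double* pdf) {
    double side = 2.0 * q.radius;
    *pdf = 1.0 / (side * side);   // uniform density over the quad's area
    // map (u, v) to [-radius, radius)^2 in the quad's plane
    Vec3 p = q.center;
    p = add(p, scale(q.axis_x, (u - 0.5) * side));
    p = add(p, scale(q.axis_y, (v - 0.5) * side));
    return p;
}
```

The light contribution evaluated toward the sampled point is divided by this pdf; when path_shadows is enabled, a shadow ray toward the sampled point decides whether the contribution is counted.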

  4. Environment illumination. Implement environment mapping by first looking up the environment map when a camera ray misses the scene. Then implement environment lighting by sampling the BRDF with the supplied function sample_brdf. Using this direction, sample the environment light.
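For the lookup itself, one common convention (assumed here; the framework may parameterize its environment maps differently) is a latitude-longitude mapping from a normalized direction to texture coordinates:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: map a normalized ray direction to (u, v) in [0,1]^2
// for a latitude-longitude environment image, with v = 0 at the +y pole.
struct Dir3 { double x, y, z; };

void dir_to_latlong_uv(const Dir3& d, double* u, double* v) {
    const double pi = 3.14159265358979323846;
    double phi   = std::atan2(d.x, d.z);   // azimuth in (-pi, pi]
    double theta = std::acos(d.y);         // polar angle in [0, pi]
    *u = (phi + pi) / (2.0 * pi);
    *v = theta / pi;
}
```

For the lighting part, sample a direction with sample_brdf, look up the environment radiance in that direction with this mapping, and divide by the pdf that sample_brdf reports.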

  5. Microfacet materials. Implement a microfacet modification to Blinn-Phong illumination.
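As a sketch of one ingredient of this step, the Blinn-Phong microfacet normal distribution can be written as below; note this is only the D term, and a full microfacet BRDF also needs Fresnel and shadowing-masking factors, which we omit here:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the Blinn-Phong microfacet distribution
// D(h) = (n + 2) / (2 pi) * (N.h)^n, normalized so that the projected
// distribution integrates to one over the hemisphere.
double microfacet_d_blinnphong(double ndoth, double exponent) {
    const double pi = 3.14159265358979323846;
    if (ndoth <= 0.0) return 0.0;   // half vector below the surface
    return (exponent + 2.0) / (2.0 * pi) * std::pow(ndoth, exponent);
}
```

Larger exponents concentrate the distribution around the surface normal, giving tighter highlights, just as in plain Blinn-Phong shading.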

  6. Indirect illumination. Implement recursive path tracing by shooting rays in the direction given by sample_brdf; stop recursion based on path_max_depth.
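The recursion structure of this step can be seen in a deliberately toy, self-contained sketch: in the real renderer the emitted term is the direct lighting at the hit point and the weight comes from sample_brdf, but here both are constants so only the depth-limited recursion remains.

```cpp
#include <cassert>
#include <cmath>

// Toy sketch of depth-limited recursive gathering. The constant 1.0
// stands in for direct illumination at each bounce, and `albedo` for
// the BRDF weight returned by sampling; neither is framework code.
double trace_toy(double albedo, int depth, int path_max_depth) {
    if (depth >= path_max_depth) return 0.0;   // stop the recursion
    double emitted = 1.0;                      // stand-in for direct lighting
    // one bounce: scale the recursively gathered radiance by the albedo
    return emitted + albedo * trace_toy(albedo, depth + 1, path_max_depth);
}
```

With albedo 0.5 and path_max_depth 3 this gathers 1 + 0.5 + 0.25 = 1.75, illustrating how each extra bounce contributes geometrically less.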

  7. Create a complex and interesting scene. Create an interesting scene using the models supplied earlier in class or new ones. We include in the distribution a Python script that converts OBJ files to RAW. The script is in no way robust; we have tested it with Blender by exporting the entire scene with normals checked.

  8. To document the previous point and to support extra credit, please attach to your submission a PDF document that includes the images you generated and the rendering features they show.


We suggest implementing the renderer following the steps presented above. To debug your code, you can advance the renderer step by step.


Please upload your code as well as the generated images as a .zip file on Moodle. Also submit a PDF report that includes the images you generated, their rendering times, and the features each image shows.

Extra Credit

  1. Implement depth of field.

  2. Implement motion blur.

  3. Implement spherical light sampling.

  4. Implement mesh area light sampling. Contact the professor for this one.

  5. Implement Russian roulette for faster ray termination.
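For the last extra-credit item, the key idea is that terminating a path with probability 1 - p stays unbiased as long as surviving paths are reweighted by 1/p. A minimal sketch (rr_estimate is an illustrative name, not a framework function):

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Hypothetical sketch of Russian roulette: continue with probability p
// and divide the surviving contribution by p, so the expected value of
// the estimator equals `value` while most long paths terminate early.
double rr_estimate(double value, double p, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    if (uni(rng) >= p) return 0.0;   // terminate the path
    return value / p;                // compensate for the survival probability
}
```

In the path tracer this test would run before each recursive bounce, typically with p derived from the surface albedo.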