# Assignment 4: Raytracing 2

Out: May 20. Due: May 30.

### Introduction

In this assignment, you will add various features to a basic raytracer implementation to support more efficient and higher quality image synthesis.

Differently from previous assignments, you can choose which features you would like to implement; for this reason we will provide only the basic framework code, and you are tasked to extend it with new codepaths and/or datatypes.

The features you can implement are:

• Ray-scene intersection
• Texture mapping
• Mip-map texture filtering
• Normal interpolated Meshes
• Distribution Raytracing
• Blurred Reflections
• Depth of Field
• Ambient Occlusion
• Motion Blur
• Combined estimators
• Path Tracing
• Indirect Diffuse Illumination
• Next Event Estimation (area lights)
• Importance Sampling (faster diffuse and glossy materials)

If you have another feature you would like to implement that is not listed (e.g., toon shading, displacement mapping), please talk to us to define the problem and determine its point value.

### Requirements

For this assignment you can choose which of the following features to implement. To receive a full grade, the points of your chosen features should add up to 30. Any additional features you implement are considered extra credit and will be counted as such.

For each feature you are to supply scene files and rendered images. Since most of these features take a long time to render, we might not rerun your raytracer to full convergence; this means we will deduct points if you do not hand in images. You must also use scenes that clearly demonstrate the effect you are trying to display. We have supplied a few scenes that you may use or modify. If your renders do not clearly show the effect, we will deduct points; if the images are particularly nice, we will grant extra credit.

You are to put together a short document and submit it in PDF format along with your code solution. For each feature you implement, this document should contain a clear indication of the feature and a few images to illustrate it. You should also include a "before-and-after" comparison, i.e., a set of images showing the scene with and without the feature. For each image, please indicate the running time and how many samples were used to compute it (the before-and-after images will give us an idea of how efficient your implementation is). If the report is missing, we will deduct a considerable number of points.

1. Texture Mapping [5 points]. Sphere and Triangle shapes should support texture mapping. Triangles should support mapping by interpolating UV sets stored at vertices using barycentric coordinates. Spheres should support mapping using the spherical angles as the parameters.
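
The two mappings can be sketched as follows (in Python for brevity; the function names are illustrative, not the framework's API):

```python
import math

def triangle_uv(uv0, uv1, uv2, b1, b2):
    # Interpolate per-vertex UVs with barycentric weights; b0 = 1 - b1 - b2.
    b0 = 1.0 - b1 - b2
    return (b0 * uv0[0] + b1 * uv1[0] + b2 * uv2[0],
            b0 * uv0[1] + b1 * uv1[1] + b2 * uv2[1])

def sphere_uv(p):
    # Map a unit-sphere point to UVs via the spherical angles:
    # u = phi / (2 pi), v = theta / pi.
    theta = math.acos(max(-1.0, min(1.0, p[2])))
    phi = math.atan2(p[1], p[0])
    if phi < 0.0:
        phi += 2.0 * math.pi
    return (phi / (2.0 * math.pi), theta / math.pi)
```

The barycentric weights here are the same ones the triangle intersection routine already computes, so the UV lookup is essentially free.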

2. Texture Filtering [15 points]. Implement trilinear mip-map based texture filtering. For the purposes of this assignment you can use a distance-based heuristic to determine which mip-map level to look up. This requires that you have implemented the previous feature. Hint: Hard.
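
A minimal sketch of the level heuristic and the blend between two levels (the constant in `mip_level` is a hypothetical choice and is scene-dependent; each `levels[i]` below is a single scalar texel standing in for a full bilinear lookup at level i):

```python
import math

def mip_level(hit_distance, base_distance=1.0):
    # Distance heuristic: one level up per doubling of the hit distance
    # past base_distance.
    return max(0.0, math.log2(max(hit_distance, base_distance) / base_distance))

def trilinear_lookup(levels, level):
    # Blend the lookups from the two mip levels surrounding the fractional
    # level, clamping at the coarsest level.
    lo = min(int(level), len(levels) - 1)
    hi = min(lo + 1, len(levels) - 1)
    t = level - lo
    return (1.0 - t) * levels[lo] + t * levels[hi]
```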

3. Normal Interpolated Meshes [5 points]. Add support for normal interpolation for Meshes based on barycentric coordinates.
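
A sketch of the interpolation (illustrative names; note the renormalization, since a convex combination of unit vectors is generally not unit length):

```python
import math

def interpolate_normal(n0, n1, n2, b1, b2):
    # Barycentric blend of the three vertex normals, then renormalize.
    b0 = 1.0 - b1 - b2
    n = [b0 * n0[i] + b1 * n1[i] + b2 * n2[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```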

4. Distribution Raytracing - Soft Shadows [5 points]. Add soft shadows to the raytracer using Monte Carlo uniform sampling. You can do so by implementing uniform sampling of the shape in the AreaLight. You can assume that the shape will always be a Quad.
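
A sketch of the quad sampling and the visibility estimate (illustrative names; `visible` stands in for the framework's shadow-ray test against one light sample):

```python
import random

def sample_quad(corner, edge_u, edge_v, rng):
    # Uniform point corner + ru*edge_u + rv*edge_v with ru, rv in [0, 1).
    ru, rv = rng.random(), rng.random()
    return tuple(corner[i] + ru * edge_u[i] + rv * edge_v[i] for i in range(3))

def soft_shadow(visible, quad, samples, rng):
    # Fraction of light samples visible from the shaded point; this scales
    # the light contribution to produce the penumbra.
    corner, edge_u, edge_v = quad
    hits = sum(1 for _ in range(samples)
               if visible(sample_quad(corner, edge_u, edge_v, rng)))
    return hits / samples
```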

5. Distribution Raytracing - Blurred Reflections [5 points]. Add blurred reflection support to the raytracer using Monte Carlo uniform sampling of the perturbed reflected direction. You can do so either using a square uniform sampling or a disk uniform sampling.
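
A square-sampling sketch of the perturbation (illustrative; the offsets are applied in a frame where the reflected direction is roughly +z — a full implementation would build an orthonormal basis around the reflected direction):

```python
import math, random

def blurred_reflection(refl, blur, rng):
    # Jitter the mirror direction inside a square of side `blur`,
    # then renormalize.
    dx = (rng.random() - 0.5) * blur
    dy = (rng.random() - 0.5) * blur
    d = (refl[0] + dx, refl[1] + dy, refl[2])
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)
```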

6. Distribution Raytracing - Depth of Field [5 points]. Add depth of field to the raytracer using Monte Carlo uniform sampling when generating view rays (this will subsume antialiasing). You can do so either using a square uniform sampling or a disk uniform sampling.
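
A camera-space sketch with a square aperture (illustrative names): jitter the ray origin over the lens, then aim at the in-focus point, so geometry at the focal distance stays sharp while everything else blurs.

```python
import math, random

def dof_ray(focal_point, aperture, rng):
    # Jitter the origin over a square aperture in the xy-plane at z = 0,
    # then point the ray at the in-focus point.
    ox = (rng.random() - 0.5) * aperture
    oy = (rng.random() - 0.5) * aperture
    origin = (ox, oy, 0.0)
    d = tuple(focal_point[i] - origin[i] for i in range(3))
    length = math.sqrt(sum(c * c for c in d))
    return origin, tuple(c / length for c in d)
```

Since the pixel position is also jittered per sample, this same loop subsumes antialiasing.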

7. Distribution Raytracing - Ambient Occlusion [10 points]. Add ambient occlusion support to the raytracer using Monte Carlo uniform sampling of the hemisphere about the point of intersection. You can do this by multiplying the ambient term ($k_a$) by the ratio of rays that can escape to infinity (does not intersect geometry).
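
A sketch of the estimator (illustrative names; `occluded` stands in for the framework's ray query, and the hemisphere is sampled in a frame where the surface normal is +z):

```python
import math, random

def uniform_hemisphere(rng):
    # Uniform direction on the hemisphere about +z.
    z = rng.random()
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def ambient_occlusion(occluded, samples, rng):
    # Scale for the ambient term k_a: the fraction of hemisphere rays
    # that escape to infinity.
    free = sum(1 for _ in range(samples)
               if not occluded(uniform_hemisphere(rng)))
    return free / samples
```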

8. Distribution Raytracing - Motion Blur [10 points]. Add motion blur to support animation rendering by adding time to ray intersection routines and performing Monte Carlo integration using uniform sampling in the time domain. To do this you have to change the ray intersection code to carry time as well as position and direction of the ray.
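
A sketch of the two pieces (illustrative names): each ray carries a uniformly sampled time, and moving geometry is evaluated at that time before intersection, here with simple linear motion between two keyframes.

```python
import random

def sample_time(t0, t1, rng):
    # Uniform time sample carried by the ray alongside origin and direction.
    return t0 + rng.random() * (t1 - t0)

def center_at(c0, c1, t0, t1, t):
    # Linearly interpolated object position between two keyframes; the
    # intersection routine evaluates this at the ray's time stamp.
    a = (t - t0) / (t1 - t0)
    return tuple((1.0 - a) * c0[i] + a * c1[i] for i in range(3))
```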

9. Distribution Raytracing - Disk Sampling [5 points]. When possible use a disk (circular) domain instead of the quad one. This will be counted only once.
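
A sketch of uniform disk sampling; the key point is taking the radius as the square root of the random number, which keeps the area density uniform:

```python
import math, random

def sample_disk(rng):
    # Uniform point on the unit disk: radius sqrt(r1), angle 2*pi*r2.
    r = math.sqrt(rng.random())
    phi = 2.0 * math.pi * rng.random()
    return (r * math.cos(phi), r * math.sin(phi))
```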

10. Distribution Raytracing - Combined Estimators [5 points]. Combine multiple estimators in single ray to speed up rendering time. For example show depth-of-field with soft shadows or motion-blurred soft shadows with a small number of rays.

11. Path Tracing - Indirect Illumination [15 points]. Implement indirect illumination for a simple scene made of diffuse materials using the path tracing algorithm. Use uniform sampling of the hemisphere and russian roulette as stopping criterion.
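
A skeleton of the recursion with scalar radiance and a hypothetical hit record (the `Hit` type, its fields, and `bounce` are stand-ins for the framework's intersection data, not its actual API):

```python
import math, random
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

def uniform_hemisphere(rng):
    # Uniform direction on the hemisphere about the +z shading normal.
    z = rng.random()
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

@dataclass
class Hit:
    # Hypothetical hit record: scalar emission/albedo, plus a bounce()
    # that traces a ray in local direction d and returns the next Hit
    # (or None if the ray escapes the scene).
    emission: float
    albedo: float
    bounce: Callable[[Tuple[float, float, float]], Optional["Hit"]]

def trace_path(hit, rng, rr_prob=0.8):
    # Diffuse path tracing with Russian roulette as the stopping criterion.
    radiance = hit.emission
    if rng.random() >= rr_prob:
        return radiance  # path terminated by roulette
    d = uniform_hemisphere(rng)
    nxt = hit.bounce(d)
    if nxt is not None:
        # brdf * cos / pdf = (albedo/pi) * d.z * 2*pi, then divide by the
        # survival probability to keep the estimator unbiased.
        radiance += (hit.albedo * d[2] * 2.0 / rr_prob) * trace_path(nxt, rng, rr_prob)
    return radiance
```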

12. Path Tracing - Next Event Estimation [5 points]. Add area lights to the path tracer using next event estimation (i.e., directly sampling the light area and not the hemisphere). Use Lambertian surfaces for this. Note that this is different from the previous soft shadows since these are physically correct and thus have a cosine falloff.
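
A sketch of the unshadowed contribution of one uniform light sample (illustrative names), showing the cosine terms at both ends and the solid-angle conversion, i.e. L * (albedo/pi) * cos_surface * cos_light * area / dist^2:

```python
import math

def direct_light(p, n, light_p, light_n, area, emission, albedo):
    # Direction and squared distance from the shaded point to the sample.
    d = tuple(light_p[i] - p[i] for i in range(3))
    dist2 = sum(c * c for c in d)
    dist = math.sqrt(dist2)
    w = tuple(c / dist for c in d)
    # Cosine falloff at the surface and at the light (clamped to zero).
    cos_s = max(0.0, sum(w[i] * n[i] for i in range(3)))
    cos_l = max(0.0, -sum(w[i] * light_n[i] for i in range(3)))
    return emission * (albedo / math.pi) * cos_s * cos_l * area / dist2
```

A shadow ray toward the sampled point then gates this contribution.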

13. Path Tracing - Importance Sampling [10 points]. Add importance sampling to the indirect illumination evaluation of your path tracer. You should use cosine-distributed samples for Lambertian surfaces and weighted cosine- and Phong-distributed samples for a Phong-like material.
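
A sketch of the cosine-distributed sample about the +z shading normal; with pdf = cos(theta)/pi, the cos/pdf factor for a Lambertian brdf of albedo/pi reduces to just the albedo:

```python
import math, random

def cosine_sample(rng):
    # Cosine-distributed direction about +z: radius sqrt(r1) in the disk,
    # lifted to the hemisphere, gives pdf = cos(theta)/pi.
    r1, r2 = rng.random(), rng.random()
    r = math.sqrt(r1)
    phi = 2.0 * math.pi * r2
    return (r * math.cos(phi), r * math.sin(phi),
            math.sqrt(max(0.0, 1.0 - r1)))
```

For the Phong lobe, the analogous power-cosine sample about the mirror direction can be mixed with this one according to the material weights.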

### Framework and Reference

We have many of the features implemented in the reference executable. Your scene (JSON file) may be tested using the reference executable if it is formatted according to the supplied framework code (in other words, no new data structures or feature settings).

The framework will build two executables: trace and view. The view program provides an interactive interface to help you edit your scene files and test your code; the trace program only renders the scene to a PNG file. The codepath the two programs use can be specified with one of these options: -t for raytracing (default for trace), -d for distribution raytracing, and -p for pathtracing. You can have trace progressively write out PNG files by specifying the -P command-line option. Use the --help option for more command-line options, and press h when running view for keyboard shortcuts. Note: the view program renders a lower-resolution, lower-sample image than what is expected to be turned in. It is provided for your convenience only. Use trace to render full-resolution images, making sure to adjust the samples to keep the noise low.

In each of the tracing option structures (RaytracingOptions, DistributionRaytraceOptions, PathtracingOptions) is an rng member. Use this to generate random numbers for sampling.

### Hints

Most of these examples will take a fair amount of time to render. This is particularly true for the distribution raytracing and path tracing cases. You should do your testing using a small number of samples and then increase that number only for the final submission.

### Submission

Please send your code, images, and PDF to pellacini@di.uniroma1.it. If you feel you must explain some of your code, submit a README text file. We will run your code by calling the main method provided and refer to your document. If you want to complete more extra credit, we will grant an additional week of time, but you must still submit the completed homework on the due date.