Ray Tracing: The Rest Of Your Life (Ray Tracing Minibooks Book 3)

**Chapter 0 Overview**

This page is for further reading and comments. This book is for people who have already written a ray tracer, as an entry portal into the world of graphics research, so it has a narrower audience than the previous two mini-books. It covers a path tracer's probabilistic sampling in enough detail to get people up to speed to follow the literature and rendering trends.

**Chapter 1: A Simple Monte Carlo Program**

Don Mitchell has a nice paper on how jittering changes convergence rates.
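As a quick illustration of the effect (a hypothetical sketch I'm adding, not Mitchell's experiment and not the book's code), here is the classic estimate of pi from quarter-disk hits, once with plain uniform samples and once with one jittered sample per cell of an n x n grid. The jittered version converges noticeably faster at equal sample counts:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Plain uniform sampling: n_samples independent points in the unit square.
double estimate_pi_uniform(int n_samples, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int inside = 0;
    for (int i = 0; i < n_samples; ++i) {
        double x = u(rng), y = u(rng);
        if (x * x + y * y < 1.0) ++inside;  // inside the quarter disk
    }
    return 4.0 * inside / n_samples;
}

// Jittered (stratified) sampling: one random point per cell of an n x n grid,
// so n*n samples total.
double estimate_pi_jittered(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int inside = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double x = (i + u(rng)) / n;  // random offset within cell (i,j)
            double y = (j + u(rng)) / n;
            if (x * x + y * y < 1.0) ++inside;
        }
    return 4.0 * inside / double(n * n);
}
```

At 10,000 samples each, the jittered estimate is typically an order of magnitude closer to pi than the uniform one.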

**Chapter 2: Monte Carlo Integration**

**Chapter 3: Light Scattering**

**Chapter 4: Importance Sampling Methods**

**Chapter 5: Generating Random Directions**

**Chapter 6: Ortho-normal Bases**

A very nice little trick on quickly generating ONBs.

**Chapter 7: Sampling Lights Directly**

**Chapter 8: Mixture Densities**

**Chapter 9: Some Architectural Decisions**

**Chapter 10: Cleaning Up Pdf Management**

**Chapter 11: The Rest of Your Life**

Rendering is a mature enough field that there are several good books on it now. First, I would get yourself a copy of this book:

Physically Based Rendering, Third Edition: From Theory To Implementation

I used it as a text in my course instead of my own book. It is that good!

This is another extremely good book on physically based rendering:

Advanced Global Illumination, Second Edition

This book is older, but it mostly covers topics that don't age, and I would definitely get a copy. Its treatment of the math topics useful in graphics is my favorite (for example, the basis function discussion is amazing). (Update: it is free online!)

Principles of Digital Image Synthesis (The Morgan Kaufmann Series in Computer Graphics) 2 Volume Set

You will want to start putting out high dynamic range images. You are already computing them! This book is a very good discussion of high dynamic range issues:

High Dynamic Range Imaging, Second Edition: Acquisition, Display, and Image-Based Lighting

This is another older book. It's still got some gold in it, and if you're going to stay in the area for the long haul, I would buy it:

An Introduction to Ray Tracing (The Morgan Kaufmann Series in Computer Graphics)

Henrik Wann Jensen has a very nice book on photon mapping that is also just a good book on physically based rendering: Realistic Image Synthesis Using Photon Mapping, 2nd revised edition (2001).

Hi Peter,

I enjoyed reading the 3rd part of this series. Great work, and thanks for diving into the details of Monte Carlo ray tracing. Here are some remarks:

- There are some leftover declarations of inside_circle and inside_circle_stratified in the pieces of code in Chapter 2.

- There are two chapters numbered "2".

- The book has quite a lot of math in it, and although I am familiar with most of the concepts introduced, writing operations/concepts like sqrt and integral as text (instead of using the math symbols) made it a little hard to follow.

- The Cornell Box rendering just above the start of Chapter 11 shows some black pixels. I wouldn't have expected those when using 1000 rays/pixel and doing importance sampling of the light source.

- More a remark on the whole series, but was it intentional that the code segments appear as images in the book, thereby making them impossible to copy-paste and build upon?

- Funny that you mention Intel before NVIDIA in the closing chapter, knowing your employer :)

Thanks Paul. Those are all very helpful. Those black pixels are the most worrisome-- I will track that down-- sounds like a bug!

On the black spots I have made some progress: http://psgraphics.blogspot.com/2016/04/debugging-by-sweeping-under-rug.html

Around section 160 the code for surrounding_box() is repeated. I think the first occurrence is erroneous, as it says it is for calculating the bounding box of a moving sphere; then, a couple of paragraphs later, the text introduces surrounding_box() again.

ReplyDeleteHello Peter.

According to the Mitchell paper http://mentallandscape.com/Papers_siggraph96.pdf, "In the worst case, stratification is no better (but no worse) than uniform random sampling," and as I understand from the paper, this observation is true for higher dimensions too. But in the book you advise using stratification only for one dimension and avoiding it for higher dimensions. Why is there such a contradiction? Or maybe I missed something while reading the Mitchell paper.

Dmitry

Oh, Mitchell is quite right that stratification never hurts convergence. However, there is some software engineering cost, so I avoid it unless it helps a lot. Note that in 2D it definitely helps a lot. In 3D-4D it usually does too. In distribution ray tracing it can help even in 7D, if the noise mainly comes from a couple of the dimensions and the projection of the samples is well stratified in those dimensions.

Hi Peter,

Thanks for your great work; it is a pleasure to read your books and to learn with you.

I am currently learning a lot of things related to path and ray tracing, and everything useful about the mathematical and physical meaning of the ways to compute physically based (or any other model of) rendering.

I will ask a lot of questions. I spent a lot of time trying to understand in many ways, but I'm still stuck. I have learnt about probability distributions and probability density functions thanks to your book and "Physically Based Rendering: From Theory to Implementation".

I am stuck on chapter 2 of this book, and I'm not able to solve it by myself, neither by demonstration nor thanks to comments (I was not surprised to find too few comments for this book; it's pleasant to learn theory, but it needs more time than the two previous books).

Here is the point I don't understand from chapter 2 (MC integration on the sphere of directions):

- At the beginning, why do you choose to integrate cos^2(theta)?

We are supposed to find a way to sample points in a unit sphere in a uniform way, aren't we ?

Moreover, the code you provide at the end of this chapter seems to compute the volume of a unit sphere; on my side it gave me 4.18, which seems to be that volume (4/3 * PI * r^3) (whereas it is written that it's supposed to give (8/3)*PI, for a reason I don't understand).

Here are the points I don't understand in chapter 3:

You define the color of a given surface in terms of these quantities:

color = INTEGRAL( A * s(direction) * color(direction))

where s() is defined as the "scattering pdf"

I understand it as "the relative probability that an incident ray with a given direction scatters." But scatters toward what direction? Why doesn't the s() function take into account the outgoing direction?

then

color = (A * s(direction) * color(direction)) / p(direction)

I fully understand why you divide by p(direction), but I can't figure out why you multiply by s(direction). To me, pdfs are there to down-weight.

To finish, why do you state that "For cos(theta) < 0, we have s(direction) = 0, and the integral of cos over the hemisphere is PI. So for a Lambertian surface the scattering pdf is: s(direction) = cos(theta)/PI"?

Do you state that s(direction) != 0 for cos(theta) = 1 ? And Why ?

Why is the integral of cos over a hemisphere PI? And why does it give this s(direction)?

I hope all those questions don't scare you, but I don't know what I don't understand, and it makes me feel a little bit upset.

Thanks for your patience and your answers :).

Hi, sorry for the delay in replying. Here is my answer for the chapter 2 questions. I will consider the chapter 3 questions as soon as I can get to them.

ReplyDelete"At the beginning, why do you chose to integrate cos^2(theta) ?"

I just picked an arbitrary function that I could analytically integrate, just so I could check my answers. In practice, the integrals we do with Monte Carlo will not be analytically integrable.

"We are supposed to find a way to sample points in a unit sphere in a uniform way, aren't we ?"

Only for convenience. A great thing about Monte Carlo is we can use any distribution as long as it samples all of the domain.

"Moreover, the code you provide in the end of of this chapter seems to compute the volume of a unit sphere, on my side it gave me 4.18 which seems to be this volume (4/3 * PI * r^3) (whereas it is written it's supposed to give (8/3)*PI for a reason I don't understand)."

We should get the integral of cos^2 over the whole sphere. This is 2*PI * INT_0^pi cos^2(theta) sin(theta) dtheta = 2*PI * [-(1/3)cos^3(theta)]_0^pi = 4*PI/3... ok, we need to revisit this tomorrow! You may be right there and I might be wrong!

OK, I have had my coffee. Yes, I think that integral is indeed 4*PI/3, so the book is OK there. Note we are not computing the volume of the sphere; we are doing the integral of the function cos^2(theta), which will be some number we compute.
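Here is a quick numeric check of that integral (a sketch I'm adding, not the book's listing). Directions are sampled uniformly on the sphere, where cos(theta) is uniform in [-1,1], so the pdf is 1/(4*pi) and the estimator averages f/pdf:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Monte Carlo estimate of the integral of cos^2(theta) over the whole
// sphere of directions. The analytic answer is 4*pi/3, about 4.19; that
// it equals the volume of a unit sphere is a coincidence of this integrand.
double integrate_cos2_sphere(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double pi = 3.14159265358979323846;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double cos_theta = 2.0 * u(rng) - 1.0;  // uniform on [-1,1] for a uniform sphere direction
        double f = cos_theta * cos_theta;       // integrand; phi is irrelevant here
        sum += f / (1.0 / (4.0 * pi));          // f / pdf
    }
    return sum / n;
}
```

With a million samples this lands within a percent or so of 4*pi/3, matching the 4.18 the commenter observed.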

Hi Peter, thanks for your answers. After reading them I figured out that I had missed the global purpose of your approach, but it's clear now!

On the second part (chapter 3) there is a question about the s and the p. "color = (A * s(direction) * color(direction)) / p(direction)" is just my variant on the more usual MC integration:

color = BRDF(direction) * cosTheta * color(direction) / p(direction).

The reason there is an s() AND a p() is it isn't always convenient/possible/desirable to have the p be exactly the same as the reflection function. So the p() might send the ray in the "wrong" direction, and you weight it (by dividing by p) to account for over or under sampling certain directions.

The exact math turns out to be nice and simple. Choose and implement a p. Sample according to p. Weight according to p.
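A tiny sketch of that recipe (my own illustration, not the book's code): integrate cos(theta) over the hemisphere, which is analytically pi, two ways. Both estimators divide by the p they sampled from; the cosine-weighted p matches the integrand's shape, so its weight f/p is a constant and the variance vanishes:

```cpp
#include <cassert>
#include <cmath>
#include <random>

const double kPi = 3.14159265358979323846;

// Uniform hemisphere pdf p = 1/(2*pi); cos(theta) is uniform in [0,1].
double integrate_cos_uniform_p(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double cos_theta = u(rng);
        sum += cos_theta / (1.0 / (2.0 * kPi));  // f / p
    }
    return sum / n;
}

// Cosine-weighted pdf p = cos(theta)/pi; sample with cos(theta) = sqrt(r).
double integrate_cos_cosine_p(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double cos_theta = std::sqrt(u(rng));
        double p = cos_theta / kPi;                // pdf matches the integrand's shape
        sum += (p > 0.0) ? cos_theta / p : 0.0;    // f / p == pi whenever p > 0
    }
    return sum / n;
}
```

The uniform-p estimator wobbles around pi; the cosine-p estimator returns pi (essentially) exactly, which is the payoff of sampling according to p and weighting by p.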

Okay, I didn't realise why there was this s(direction), and that you were trying to guide your audience toward the famous rendering equation computation with MC integration.

It makes sense now, and your approach is relevant.

Thanks for your time and your clear answers, I finally finished your book some weeks ago and that was a great journey to start learning ray tracing.

However, I have a last question before leaving you in peace (look at the final question :) ).

Thanks Yoann-- I am being thick. Which final question?

Hi, Peter.

My questions are about 2D sampling with multidimensional transformations.

As you know, when you want to generate a uniformly distributed direction on the hemisphere, you can use the inversion method.

Because your probability density function is uniform, and you are integrating over the hemisphere, you can state that:

- p(w) is constant.

- INTEGRAL_{\Omega} p(w) d(w) = 1, and INTEGRAL_{\Omega} d(w) = 2*PI.

So p(w) = 1/(2*PI).

However we want to find a way to generate a couple {theta,phi} to compute a direction in spherical coordinates.

So we have to calculate p(theta,phi).

In the book "Phisically Based Rendering", they state that:

- p(theta,phi)d(theta)d(phi) = p(w)d(w)

- d(w) = sin(theta)d(theta)d(phi)

So : p(theta, phi) = sin(theta)p(w).

My questions are:

- Why can we state that p(theta,phi)d(theta)d(phi) = p(w)d(w)?

- What is the transformation between p(w) and p(theta, phi)? I know that to find the relation between pdfs we can calculate the Jacobian matrix with the partial derivatives of each component of the transformation, but what is this transformation?

- What does p(theta,phi) really mean? If p(w) represents the relative probability of a given direction (a solid angle, because the "relative" term implies a direction and a surface area around it), what is the difference with p(theta,phi)?

- In your book (and in Physically Based Rendering too), you generate a direction from p(theta,phi) and the inversion method, but when you compute the value of one element of the Monte Carlo estimator, you divide by p(w). Why can we generate a direction with p(theta, phi) and still integrate over solid angles?

I know that's a lot of questions for one person, but even Stack Overflow didn't answer them, and I'm still trying to find proofs of what I'm looking for.

Thanks in advance for your time :).

This (theta,phi) to theta and phi bookkeeping is critical to get right, and is always mildly painful because of the details. The good news is the code doesn't suffer from increased complexity. I will do a full blog post on this soon-- lots of people have this question and I certainly did as well.
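In the meantime, here is a sketch of the bookkeeping for the uniform-hemisphere case (my own illustration with assumed names, not the book's code). The marginal pdf of theta is sin(theta), its CDF is 1 - cos(theta), and inverting that CDF gives cos(theta) = 1 - r1:

```cpp
#include <cassert>
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

// Inversion method for a uniformly distributed direction on the hemisphere:
//   p(w) = 1/(2*pi)  =>  p(theta, phi) = sin(theta)/(2*pi)
//   marginal p(theta) = sin(theta), CDF P(theta) = 1 - cos(theta)
//   invert: cos(theta) = 1 - r1;  phi = 2*pi*r2 (uniform)
Vec3 random_on_hemisphere(std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double pi = 3.14159265358979323846;
    double cos_theta = 1.0 - u(rng);
    double sin_theta = std::sqrt(1.0 - cos_theta * cos_theta);
    double phi = 2.0 * pi * u(rng);
    return { sin_theta * std::cos(phi), sin_theta * std::sin(phi), cos_theta };
}
```

The estimator still divides by p(w) = 1/(2*pi) because the (theta, phi) pair generated this way, pushed through the spherical-coordinates map, has exactly that density per unit solid angle; the sin(theta) factor in p(theta, phi) is the Jacobian of that map, which is why p(theta,phi) d(theta) d(phi) = p(w) d(w).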

Hi,

Thank you for the books, they were very helpful!

I implemented a ray tracer with their help. I think I found an issue with the importance sampling. I'll try to describe it here in words.

The idea is that there is a list of objects that are 'important' (like the light in your book). Now, the same object also exists in the scene. It might happen that both the origin point and the generated target point are on the same object, which for a rectangle, for example, means along the surface. But the pdf value for such a vector is zero, which means a division by zero. In my code I just checked for such a case and tried sampling again, in a loop until success.
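A self-contained sketch of that workaround (the generate/value split mirrors the book's pdf interface, but this helper and its names are my own invention): re-sample until the pdf is nonzero, with a retry cap so a degenerate configuration cannot spin forever.

```cpp
#include <cassert>
#include <functional>

// Retry sampling until pdf(sample) > 0, up to max_tries attempts.
// Returns false if no usable sample was found; the caller can then drop
// this path's contribution instead of dividing by zero.
template <typename Sample>
bool sample_with_nonzero_pdf(const std::function<Sample()>& generate,
                             const std::function<double(const Sample&)>& pdf,
                             Sample& out, double& pdf_value,
                             int max_tries = 16) {
    for (int i = 0; i < max_tries; ++i) {
        out = generate();
        pdf_value = pdf(out);
        if (pdf_value > 0.0) return true;  // usable sample found
    }
    return false;  // every attempt landed on a zero-density direction
}
```

Capping the retries matters: if the shading point lies exactly in the light's plane, every sample toward the light has zero pdf and an uncapped loop never terminates.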

In the original entry, Chap 6 has a link to a pdf file on orbit.dtu.dk on ONBs, but the link goes to a "page not found."

Note on the ebook:

It looks like the PDFs explore the Lambert material, but don't really get into the rest. I think the defaults could be set up to still leave the other materials working-ish while the Lambert is optimized.

In the Chapter 4 PDFs, the material base class has a float method scattering_pdf whose virtual base returns false. Presumably that will cast to a float 0.0f, but since color will be multiplying this, returning a float 1.0f would be more "do no harm."

The color function declares a float pdf that is uninitialized because it is expected to be set in scatter, but we're only implementing this (for now) in lambertian. It is later divided by in the return calculation, so I think that if it is initialized to 1.0f at declaration time, it is more "do no harm." That is, by setting both of these to 1.0f, the other materials will just multiply and divide by 1, and should return something close to what they did before.
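A minimal sketch of those "do no harm" defaults (the class and method names echo the book's Chapter 4 code, but the ray/hit_record plumbing is stripped out, so treat it as an illustration, not the book's listing):

```cpp
#include <cassert>
#include <cmath>

// Base material: a default scattering_pdf of 1.0 means materials that don't
// implement importance sampling multiply and divide by 1, so they keep
// (roughly) their pre-importance-sampling behavior instead of going black.
class material {
public:
    virtual double scattering_pdf() const { return 1.0; }
    virtual ~material() = default;
};

// Lambertian overrides it with the real density cos(theta)/pi
// (zero for directions below the surface).
class lambertian : public material {
public:
    explicit lambertian(double cos_theta) : cos_theta_(cos_theta) {}
    double scattering_pdf() const override {
        const double pi = 3.14159265358979323846;
        return cos_theta_ > 0.0 ? cos_theta_ / pi : 0.0;
    }
private:
    double cos_theta_;
};
```

With these defaults, only the material that actually opts into importance sampling changes the estimator; everything else is a no-op factor of 1/1.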

In Chapter 7: Sampling Lights Directly, you reference the emitted member function of hitable with a signature that includes the hit_record.

I believe emitted is actually a member function of material and does not include a hit_record, so at this point, it actually needs to be implemented in the color function.

Note that later in Chapter 8 (Mixture Density), this is confirmed in the color function, where there is a reference to rec.mat_ptr->emitted, so I think in Chapter 7, emitted is not in hitable. Maybe that came from a later branch of the code?

In Chapter 10: Cleaning up pdf management

I think there is a serious memory leak in lambertian. The line

srec.pdf_ptr = new cosine_pdf(hrec.normal);

allocates a new cosine_pdf, but nothing ever destroys it.

In the next code block for color, I see a scatter_record (srec) is created for each hit, and its lifetime is inside the "if world->hit" block. Something like a unique_ptr from C++11 would take care of that automatically; if not that, the scatter_record should have a destructor that checks pdf_ptr and destroys it if it's non-null, or at least do that at the end of the world->hit block.

I mention this because I watched the memory completely explode when I was running this with 1000 samples, and it actually segfaulted. I saw the memory grow to consume all available memory on my computer - I think the process was well over 5 Gig. But after changing it to a unique_ptr, the memory stayed down at around a calmer 2 Meg.
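A sketch of that fix (the scatter_record/cosine_pdf names mirror the Chapter 10 code, but the bodies here are simplified stand-ins): owning pdf_ptr through a std::unique_ptr ties the allocation's lifetime to the scatter_record, so the pdf is freed automatically when srec goes out of scope at the end of each color() call.

```cpp
#include <cassert>
#include <cmath>
#include <memory>

struct pdf {
    virtual double value(double cos_theta) const = 0;
    virtual ~pdf() = default;  // virtual destructor so unique_ptr<pdf> deletes correctly
};

struct cosine_pdf : public pdf {
    double value(double cos_theta) const override {
        const double pi = 3.14159265358979323846;
        return cos_theta > 0.0 ? cos_theta / pi : 0.0;
    }
};

struct scatter_record {
    // was: pdf* pdf_ptr;  (assigned with `new`, never deleted: one leak per bounce)
    std::unique_ptr<pdf> pdf_ptr;
};

// In lambertian::scatter, instead of `srec.pdf_ptr = new cosine_pdf(...)`,
// write (C++11-compatible):
//   srec.pdf_ptr.reset(new cosine_pdf(...));
```

Because the book predates shared ownership concerns, a unique_ptr is enough here; nothing else holds the cosine_pdf past the current bounce.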

Thanks so much Steve!
