Mitsuba Renderer  0.5.0
mitsuba::Scene Class Reference

Principal scene data structure. More...

#include <mitsuba/render/scene.h>

Inheritance diagram for mitsuba::Scene (diagram omitted)

Public Member Functions

virtual const Class * getClass () const
 Retrieve this object's class. More...
 
Initialization and rendering
 Scene ()
 Construct a new, empty scene (with the default properties) More...
 
 Scene (const Properties &props)
 Construct a new, empty scene. More...
 
 Scene (Scene *scene)
 Create a shallow clone of a scene. More...
 
 Scene (Stream *stream, InstanceManager *manager)
 Unserialize a scene from a binary data stream. More...
 
void initialize ()
 Initialize the scene. More...
 
void invalidate ()
 Invalidate the kd-tree. More...
 
void initializeBidirectional ()
 Initialize the scene for bidirectional rendering algorithms. More...
 
bool preprocess (RenderQueue *queue, const RenderJob *job, int sceneResID, int sensorResID, int samplerResID)
 Perform any pre-processing steps before rendering. More...
 
bool render (RenderQueue *queue, const RenderJob *job, int sceneResID, int sensorResID, int samplerResID)
 Render the scene as seen by the scene's main sensor. More...
 
void postprocess (RenderQueue *queue, const RenderJob *job, int sceneResID, int sensorResID, int samplerResID)
 Perform any post-processing steps after rendering. More...
 
void flush (RenderQueue *queue, const RenderJob *job)
 Write out the current (partially rendered) image. More...
 
void cancel ()
 Cancel a running rendering job. More...
 
void addChild (const std::string &name, ConfigurableObject *child)
 Add a child node to the scene. More...
 
void addChild (ConfigurableObject *child)
 Add an unnamed child. More...
 
void configure ()
 Configure this object (called once after construction and addition of all child ConfigurableObject instances). More...
 
Ray tracing
bool rayIntersect (const Ray &ray, Intersection &its) const
 Intersect a ray against all primitives stored in the scene and return detailed intersection information. More...
 
bool rayIntersect (const Ray &ray, Float &t, ConstShapePtr &shape, Normal &n, Point2 &uv) const
 Intersect a ray against all primitives stored in the scene and return the traveled distance and intersected shape. More...
 
bool rayIntersect (const Ray &ray) const
 Intersect a ray against all primitives stored in the scene and only determine whether or not there is an intersection. More...
 
Spectrum evalTransmittance (const Point &p1, bool p1OnSurface, const Point &p2, bool p2OnSurface, Float time, const Medium *medium, int &interactions, Sampler *sampler=NULL) const
 Return the transmittance between p1 and p2 at the specified time. More...
 
Ray tracing support for bidirectional algorithms
bool rayIntersectAll (const Ray &ray, Intersection &its) const
 Intersect a ray against all scene primitives and "special" primitives, such as the aperture of a sensor. More...
 
bool rayIntersectAll (const Ray &ray, Float &t, ConstShapePtr &shape, Normal &n, Point2 &uv) const
 Intersect a ray against all normal and "special" primitives and only return the traveled distance and intersected shape. More...
 
bool rayIntersectAll (const Ray &ray) const
 Intersect a ray against all normal and "special" primitives and only determine whether or not there is an intersection. More...
 
Spectrum evalTransmittanceAll (const Point &p1, bool p1OnSurface, const Point &p2, bool p2OnSurface, Float time, const Medium *medium, int &interactions, Sampler *sampler=NULL) const
 Return the transmittance between p1 and p2 at the specified time (and account for "special" primitives). More...
 
Direct sampling techniques
Spectrum sampleEmitterDirect (DirectSamplingRecord &dRec, const Point2 &sample, bool testVisibility=true) const
 Direct illumination sampling routine. More...
 
Spectrum sampleSensorDirect (DirectSamplingRecord &dRec, const Point2 &sample, bool testVisibility=true) const
 "Direct illumination" sampling routine for the main scene sensor More...
 
Spectrum sampleAttenuatedEmitterDirect (DirectSamplingRecord &dRec, const Medium *medium, int &interactions, const Point2 &sample, Sampler *sampler=NULL) const
 Direct illumination sampling with support for participating media (medium variant) More...
 
Spectrum sampleAttenuatedSensorDirect (DirectSamplingRecord &dRec, const Medium *medium, int &interactions, const Point2 &sample, Sampler *sampler=NULL) const
 "Direct illumination" sampling routine for the main scene sensor with support for participating media (medium variant) More...
 
Spectrum sampleAttenuatedEmitterDirect (DirectSamplingRecord &dRec, const Intersection &its, const Medium *medium, int &interactions, const Point2 &sample, Sampler *sampler=NULL) const
 Direct illumination sampling with support for participating media (surface variant) More...
 
Spectrum sampleAttenuatedSensorDirect (DirectSamplingRecord &dRec, const Intersection &its, const Medium *medium, int &interactions, const Point2 &sample, Sampler *sampler=NULL) const
 "Direct illumination" sampling routine for the main scene sensor with support for participating media (surface variant) More...
 
Float pdfEmitterDirect (const DirectSamplingRecord &dRec) const
 Evaluate the probability density of the direct sampling method implemented by the sampleEmitterDirect() method. More...
 
Float pdfSensorDirect (const DirectSamplingRecord &dRec) const
 Evaluate the probability density of the direct sampling method implemented by the sampleSensorDirect() method. More...
 
Emission sampling techniques
Spectrum sampleEmitterPosition (PositionSamplingRecord &pRec, const Point2 &sample) const
 Sample a position according to the emission profile defined by the emitters in the scene. More...
 
Spectrum sampleSensorPosition (PositionSamplingRecord &pRec, const Point2 &sample, const Point2 *extra=NULL) const
 Sample a position on the main sensor of the scene. More...
 
Float pdfEmitterPosition (const PositionSamplingRecord &pRec) const
 Evaluate the spatial component of the sampling density implemented by the sampleEmitterPosition() method. More...
 
Float pdfSensorPosition (const PositionSamplingRecord &pRec) const
 Evaluate the spatial component of the sampling density implemented by the sampleSensorPosition() method. More...
 
Float pdfEmitterDiscrete (const Emitter *emitter) const
 Return the discrete probability of choosing a certain emitter in sampleEmitter* More...
 
Spectrum sampleEmitterRay (Ray &ray, const Emitter *&emitter, const Point2 &spatialSample, const Point2 &directionalSample, Float time) const
 Importance sample a ray according to the emission profile defined by the emitters in the scene. More...
 
Environment emitters
const Emitter * getEnvironmentEmitter () const
 Return the scene's environment emitter (if there is one) More...
 
bool hasEnvironmentEmitter () const
 Does the scene have an environment emitter? More...
 
Spectrum evalEnvironment (const RayDifferential &ray) const
 Return the environment radiance for a ray that did not intersect any of the scene objects. More...
 
Spectrum evalAttenuatedEnvironment (const RayDifferential &ray, const Medium *medium, Sampler *sampler) const
 Return the environment radiance for a ray that did not intersect any of the scene objects. This method additionally considers transmittance by participating media. More...
 
Miscellaneous
const AABB & getAABB () const
 Return an axis-aligned bounding box containing the whole scene. More...
 
bool hasDegenerateSensor () const
 Is the main scene sensor degenerate? (i.e. has it collapsed to a point or line) More...
 
bool hasDegenerateEmitters () const
 Are all emitters in this scene degenerate? (i.e. have they collapsed to a point or line) More...
 
BSphere getBSphere () const
 Return a bounding sphere containing the whole scene. More...
 
bool hasMedia () const
 Does the scene contain participating media? More...
 
void setSensor (Sensor *sensor)
 Set the main scene sensor. More...
 
void removeSensor (Sensor *sensor)
 Remove a sensor from the scene's sensor list. More...
 
void addSensor (Sensor *sensor)
 Add a sensor to the scene's sensor list. More...
 
Sensor * getSensor ()
 Return the scene's sensor. More...
 
const Sensor * getSensor () const
 Return the scene's sensor (const version) More...
 
ref_vector< Sensor > & getSensors ()
 Return the list of sensors that are specified by the scene. More...
 
const ref_vector< Sensor > & getSensors () const
 Return the list of sensors that are specified by the scene (const version) More...
 
void setIntegrator (Integrator *integrator)
 Set the scene's integrator. More...
 
Integrator * getIntegrator ()
 Return the scene's integrator. More...
 
const Integrator * getIntegrator () const
 Return the scene's integrator (const version) More...
 
void setSampler (Sampler *sampler)
 Set the scene's sampler. More...
 
Sampler * getSampler ()
 Return the scene's sampler. More...
 
const Sampler * getSampler () const
 Return the scene's sampler. More...
 
Film * getFilm ()
 Return the scene's film. More...
 
const Film * getFilm () const
 Return the scene's film. More...
 
ShapeKDTree * getKDTree ()
 Return the scene's kd-tree accelerator. More...
 
const ShapeKDTree * getKDTree () const
 Return the scene's kd-tree accelerator. More...
 
ref_vector< Subsurface > & getSubsurfaceIntegrators ()
 Return a list of all subsurface integrators. More...
 
const ref_vector< Subsurface > & getSubsurfaceIntegrators () const
 Return a list of all subsurface integrators. More...
 
std::vector< TriMesh * > & getMeshes ()
 Return the scene's triangular meshes (a subset of getShapes()) More...
 
const std::vector< TriMesh * > & getMeshes () const
 Return the scene's triangular meshes (a subset of getShapes()) More...
 
ref_vector< Shape > & getShapes ()
 Return the scene's normal shapes (including triangular meshes) More...
 
const ref_vector< Shape > & getShapes () const
 Return the scene's normal shapes (including triangular meshes) More...
 
ref_vector< Shape > & getSpecialShapes ()
 Return a set of special shapes related to emitter/sensor geometry in bidirectional renderings. More...
 
const ref_vector< Shape > & getSpecialShapes () const
 Return a set of special shapes related to emitter/sensor geometry in bidirectional renderings. More...
 
ref_vector< Emitter > & getEmitters ()
 Return the scene's emitters. More...
 
const ref_vector< Emitter > & getEmitters () const
 Return the scene's emitters. More...
 
ref_vector< Medium > & getMedia ()
 Return the scene's participating media. More...
 
const ref_vector< Medium > & getMedia () const
 Return the scene's participating media. More...
 
ref_vector< ConfigurableObject > & getReferencedObjects ()
 Return referenced objects (such as textures, BSDFs) More...
 
const ref_vector< ConfigurableObject > & getReferencedObjects () const
 Return referenced objects (such as textures, BSDFs) More...
 
const fs::path & getSourceFile () const
 Return the name of the file containing the original description of this scene. More...
 
void setSourceFile (const fs::path &name)
 Set the name of the file containing the original description of this scene. More...
 
const fs::path & getDestinationFile () const
 Return the render output filename. More...
 
void setDestinationFile (const fs::path &name)
 Set the render output filename. More...
 
bool destinationExists () const
 Does the destination file already exist? More...
 
void setBlockSize (uint32_t size)
 Set the block resolution used to split images into parallel workloads. More...
 
uint32_t getBlockSize () const
 Return the block resolution used to split images into parallel workloads. More...
 
void serialize (Stream *stream, InstanceManager *manager) const
 Serialize the whole scene to a network/file stream. More...
 
void bindUsedResources (ParallelProcess *proc) const
 Bind any used resources to the specified parallel process. More...
 
void wakeup (ConfigurableObject *parent, std::map< std::string, SerializableObject * > &params)
 Retrieve any required resources after unserialization. More...
 
std::string toString () const
 Return a string representation. More...
 
- Public Member Functions inherited from mitsuba::ConfigurableObject
virtual void setParent (ConfigurableObject *parent)
 Notify the ConfigurableObject instance about its parent object. More...
 
void addChild (ConfigurableObject *child)
 Add an unnamed child. More...
 
const std::string & getID () const
 Return the identifier associated with this instance (or "unnamed") More...
 
void setID (const std::string &name)
 Set the identifier associated with this instance. More...
 
const Properties & getProperties () const
 Return the properties object that was originally used to create this instance. More...
 
- Public Member Functions inherited from mitsuba::SerializableObject
 SerializableObject (Stream *stream, InstanceManager *manager)
 Unserialize a serializable object. More...
 
- Public Member Functions inherited from Object
 Object ()
 Construct a new object. More...
 
int getRefCount () const
 Return the current reference count. More...
 
void incRef () const
 Increase the reference count of the object by one. More...
 
void decRef (bool autoDeallocate=true) const
 Decrease the reference count of the object and possibly deallocate it. More...
 

Static Public Attributes

static Class * m_theClass
 
- Static Public Attributes inherited from mitsuba::NetworkedObject
static Class * m_theClass
 
- Static Public Attributes inherited from mitsuba::ConfigurableObject
static Class * m_theClass
 
- Static Public Attributes inherited from mitsuba::SerializableObject
static Class * m_theClass
 
- Static Public Attributes inherited from Object
static Class * m_theClass
 Pointer to the object's class descriptor. More...
 

Protected Member Functions

virtual ~Scene ()
 Virtual destructor. More...
 
- Protected Member Functions inherited from mitsuba::NetworkedObject
virtual ~NetworkedObject ()
 Virtual destructor. More...
 
 NetworkedObject (const Properties &props)
 Constructor. More...
 
 NetworkedObject (Stream *stream, InstanceManager *manager)
 Unserialize a configurable object. More...
 
- Protected Member Functions inherited from mitsuba::ConfigurableObject
virtual ~ConfigurableObject ()
 Virtual destructor. More...
 
 ConfigurableObject (const Properties &props)
 Construct a configurable object. More...
 
 ConfigurableObject (Stream *stream, InstanceManager *manager)
 Unserialize a configurable object. More...
 
- Protected Member Functions inherited from mitsuba::SerializableObject
 SerializableObject ()
 Construct a serializable object. More...
 
virtual ~SerializableObject ()
 Virtual deconstructor. More...
 
- Protected Member Functions inherited from Object
virtual ~Object ()
 Virtual private deconstructor. (Will only be called by ref) More...
 

Additional Inherited Members

- Static Public Member Functions inherited from Object
static void staticInitialization ()
 Initializes the built-in reference count debugger (if enabled) More...
 
static void staticShutdown ()
 Free the memory taken by staticInitialization() More...
 
- Protected Attributes inherited from mitsuba::ConfigurableObject
Properties m_properties
 

Detailed Description

Principal scene data structure.

This class holds information on surfaces, emitters and participating media and coordinates rendering jobs. It also provides useful query routines that are mostly used by the Integrator implementations.
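For orientation, here is a minimal sketch (not taken from the official documentation) of how an integrator-style caller typically queries an initialized scene; the Scene pointer scene and the camera ray ray (a RayDifferential) are assumed to be given.

    // Hedged sketch: basic queries against an initialized Scene.
    Intersection its;
    Spectrum result(0.0f);
    if (scene->rayIntersect(ray, its)) {
        // 'its' now describes the surface hit (position, normal, UVs, shape)
        const BSDF *bsdf = its.getBSDF(ray);
        // ... shading computations would use 'bsdf' and 'its' here ...
    } else if (scene->hasEnvironmentEmitter()) {
        // No surface was hit: look up the environment emitter instead
        result = scene->evalEnvironment(ray);
    }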

Constructor & Destructor Documentation

mitsuba::Scene::Scene ( )

Construct a new, empty scene (with the default properties)

mitsuba::Scene::Scene ( const Properties &props)

Construct a new, empty scene.

mitsuba::Scene::Scene ( Scene *scene)

Create a shallow clone of a scene.

mitsuba::Scene::Scene ( Stream *stream,
InstanceManager *manager 
)

Unserialize a scene from a binary data stream.

virtual mitsuba::Scene::~Scene ( )
protected virtual

Virtual destructor.

Member Function Documentation

void mitsuba::Scene::addChild ( const std::string &  name,
ConfigurableObject *child 
)
virtual

Add a child node to the scene.

Reimplemented from mitsuba::ConfigurableObject.

void mitsuba::Scene::addChild ( ConfigurableObject *child)
inline

Add an unnamed child.

void mitsuba::Scene::addSensor ( Sensor *sensor)

Add a sensor to the scene's sensor list.

void mitsuba::Scene::bindUsedResources ( ParallelProcess *proc) const
virtual

Bind any used resources to the specified parallel process.

Reimplemented from mitsuba::NetworkedObject.

void mitsuba::Scene::cancel ( )

Cancel a running rendering job.

This function can be called asynchronously, e.g. from a GUI. In this case, render() will quit with a return value of false.

void mitsuba::Scene::configure ( )
virtual

Configure this object (called once after construction and addition of all child ConfigurableObject instances).

Reimplemented from mitsuba::ConfigurableObject.

bool mitsuba::Scene::destinationExists ( ) const
inline

Does the destination file already exist?

Spectrum mitsuba::Scene::evalAttenuatedEnvironment ( const RayDifferential &ray,
const Medium *medium,
Sampler *sampler 
) const
inline

Return the environment radiance for a ray that did not intersect any of the scene objects. This method additionally considers transmittance by participating media.

This is primarily meant for path tracing-style integrators.

Spectrum mitsuba::Scene::evalEnvironment ( const RayDifferential &ray) const
inline

Return the environment radiance for a ray that did not intersect any of the scene objects.

This is primarily meant for path tracing-style integrators.

Spectrum mitsuba::Scene::evalTransmittance ( const Point &p1,
bool  p1OnSurface,
const Point &p2,
bool  p2OnSurface,
Float  time,
const Medium *medium,
int &  interactions,
Sampler *sampler = NULL 
) const

Return the transmittance between p1 and p2 at the specified time.

This function is essentially a continuous version of isOccluded(), which additionally accounts for the presence of participating media and surface interactions that attenuate a ray without changing its direction (i.e. geometry with an alpha mask)

The implementation correctly handles arbitrary amounts of index-matched medium transitions. The interactions parameter can be used to specify a maximum number of possible surface interactions and medium transitions between p1 and p2. When this number is exceeded, the function returns zero.

Note that index-mismatched boundaries (i.e. a transition from air to water) are not supported by this function. The integrator needs to take care of these in some other way.

Parameters
p1: Source position
p2: Target position
p1OnSurface: Is the source position located on a surface? This information is necessary to set up the right ray epsilons for the kd-tree traversal
p2OnSurface: Is the target position located on a surface?
medium: The medium at p1
interactions: Specifies the maximum permissible number of index-matched medium transitions or BSDF::ENull scattering events on the way to the light source. (interactions<0 means arbitrarily many). When the function returns a nonzero result, this parameter will additionally be used to return the actual number of intermediate interactions.
time: Associated scene time value for the transmittance computation
sampler: Optional: A sample generator. This may be used to compute a random unbiased estimate of the transmission.
Returns
A spectral-valued transmittance with components between zero and one.
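
As an illustration, here is a hedged sketch of a transmittance query along a shadow connection between two surface points; p1, p2, time, medium and sampler are assumed to be supplied by the caller.

    // Sketch: transmittance along a connection between two surface points.
    int interactions = 100;   // cap on index-matched/null interactions
    Spectrum Tr = scene->evalTransmittance(
        p1, true,             // source point, located on a surface
        p2, true,             // target point, located on a surface
        time, medium, interactions, sampler);
    if (!Tr.isZero()) {
        // the connection carries energy; weight the contribution by Tr
    }
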
Spectrum mitsuba::Scene::evalTransmittanceAll ( const Point &p1,
bool  p1OnSurface,
const Point &p2,
bool  p2OnSurface,
Float  time,
const Medium *medium,
int &  interactions,
Sampler *sampler = NULL 
) const

Return the transmittance between p1 and p2 at the specified time (and account for "special" primitives).

This function is essentially a continuous version of isOccluded(), which additionally accounts for the presence of participating media and surface interactions that attenuate a ray without changing its direction (i.e. geometry with an alpha mask)

The implementation correctly handles arbitrary amounts of index-matched medium transitions. The interactions parameter can be used to specify a maximum number of possible surface interactions and medium transitions between p1 and p2. When this number is exceeded, the function returns zero.

Note that index-mismatched boundaries (i.e. a transition from air to water) are not supported by this function. The integrator needs to take care of these in some other way.

This function does exactly the same thing as evalTransmittance, except that it additionally performs intersections against a list of "special" shapes that are intentionally kept outside of the main scene kd-tree (e.g. because they are not static and might change from rendering to rendering). This is needed by some bidirectional techniques that care about intersections with the sensor aperture, etc.

Parameters
p1: Source position
p2: Target position
p1OnSurface: Is the source position located on a surface? This information is necessary to set up the right ray epsilons for the kd-tree traversal
p2OnSurface: Is the target position located on a surface?
medium: The medium at p1
interactions: Specifies the maximum permissible number of index-matched medium transitions or BSDF::ENull scattering events on the way to the light source. (interactions<0 means arbitrarily many). When the function returns a nonzero result, this parameter will additionally be used to return the actual number of intermediate interactions.
time: Associated scene time value for the transmittance computation
sampler: Optional: A sample generator. This may be used to compute a random unbiased estimate of the transmission.
Returns
A spectral-valued transmittance with components between zero and one.
void mitsuba::Scene::flush ( RenderQueue *queue,
const RenderJob *job 
)

Write out the current (partially rendered) image.

const AABB& mitsuba::Scene::getAABB ( ) const
inline

Return an axis-aligned bounding box containing the whole scene.

uint32_t mitsuba::Scene::getBlockSize ( ) const
inline

Return the block resolution used to split images into parallel workloads.

BSphere mitsuba::Scene::getBSphere ( ) const
inline

Return a bounding sphere containing the whole scene.

virtual const Class* mitsuba::Scene::getClass ( ) const
virtual

Retrieve this object's class.

Reimplemented from mitsuba::NetworkedObject.

const fs::path& mitsuba::Scene::getDestinationFile ( ) const
inline

Return the render output filename.

ref_vector<Emitter>& mitsuba::Scene::getEmitters ( )
inline

Return the scene's emitters.

const ref_vector<Emitter>& mitsuba::Scene::getEmitters ( ) const
inline

Return the scene's emitters.

const Emitter* mitsuba::Scene::getEnvironmentEmitter ( ) const
inline

Return the scene's environment emitter (if there is one)

Film* mitsuba::Scene::getFilm ( )
inline

Return the scene's film.

const Film* mitsuba::Scene::getFilm ( ) const
inline

Return the scene's film.

Integrator* mitsuba::Scene::getIntegrator ( )
inline

Return the scene's integrator.

const Integrator* mitsuba::Scene::getIntegrator ( ) const
inline

Return the scene's integrator (const version)

ShapeKDTree* mitsuba::Scene::getKDTree ( )
inline

Return the scene's kd-tree accelerator.

const ShapeKDTree* mitsuba::Scene::getKDTree ( ) const
inline

Return the scene's kd-tree accelerator.

ref_vector<Medium>& mitsuba::Scene::getMedia ( )
inline

Return the scene's participating media.

const ref_vector<Medium>& mitsuba::Scene::getMedia ( ) const
inline

Return the scene's participating media.

std::vector<TriMesh *>& mitsuba::Scene::getMeshes ( )
inline

Return the scene's triangular meshes (a subset of getShapes())

const std::vector<TriMesh *>& mitsuba::Scene::getMeshes ( ) const
inline

Return the scene's triangular meshes (a subset of getShapes())

ref_vector<ConfigurableObject>& mitsuba::Scene::getReferencedObjects ( )
inline

Return referenced objects (such as textures, BSDFs)

const ref_vector<ConfigurableObject>& mitsuba::Scene::getReferencedObjects ( ) const
inline

Return referenced objects (such as textures, BSDFs)

Sampler* mitsuba::Scene::getSampler ( )
inline

Return the scene's sampler.

Note that when rendering using multiple different threads, each thread will be passed a shallow copy of the scene, which has a different sampler instance. This helps to avoid locking/contention issues and ensures that different threads render with different random number sequences. The sampler instance provided here is a clone of the original sampler specified in the sensor.
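A small sketch of the implication, assuming scene refers to the per-thread clone handed to a worker:

    // Each worker thread receives its own Scene clone, so this Sampler is
    // private to the thread and produces an independent sample stream.
    Sampler *sampler = scene->getSampler();
    Point2 s = sampler->next2D();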

const Sampler* mitsuba::Scene::getSampler ( ) const
inline

Return the scene's sampler.

Sensor* mitsuba::Scene::getSensor ( )
inline

Return the scene's sensor.

const Sensor* mitsuba::Scene::getSensor ( ) const
inline

Return the scene's sensor (const version)

ref_vector<Sensor>& mitsuba::Scene::getSensors ( )
inline

Return the list of sensors that are specified by the scene.

A scene can have multiple sensors; however, during a rendering, there will always be one "main" sensor that is currently active.

See Also
getSensor
const ref_vector<Sensor>& mitsuba::Scene::getSensors ( ) const
inline

Return the list of sensors that are specified by the scene (const version)

A scene can have multiple sensors; however, during a rendering, there will always be one "main" sensor that is currently active.

See Also
getSensor
ref_vector<Shape>& mitsuba::Scene::getShapes ( )
inline

Return the scene's normal shapes (including triangular meshes)

const ref_vector<Shape>& mitsuba::Scene::getShapes ( ) const
inline

Return the scene's normal shapes (including triangular meshes)

const fs::path& mitsuba::Scene::getSourceFile ( ) const
inline

Return the name of the file containing the original description of this scene.

ref_vector<Shape>& mitsuba::Scene::getSpecialShapes ( )
inline

Return a set of special shapes related to emitter/sensor geometry in bidirectional renderings.

const ref_vector<Shape>& mitsuba::Scene::getSpecialShapes ( ) const
inline

Return a set of special shapes related to emitter/sensor geometry in bidirectional renderings.

ref_vector<Subsurface>& mitsuba::Scene::getSubsurfaceIntegrators ( )
inline

Return a list of all subsurface integrators.

const ref_vector<Subsurface>& mitsuba::Scene::getSubsurfaceIntegrators ( ) const
inline

Return a list of all subsurface integrators.

bool mitsuba::Scene::hasDegenerateEmitters ( ) const
inline

Are all emitters in this scene degenerate? (i.e. have they collapsed to a point or line)

Note that this function only cares about the spatial component of the emitters – its value does not depend on whether the directional emission profile is degenerate.

bool mitsuba::Scene::hasDegenerateSensor ( ) const
inline

Is the main scene sensor degenerate? (i.e. has it collapsed to a point or line)

Note that this function only cares about the spatial component of the sensor – its value does not depend on whether the directional response function is degenerate.

bool mitsuba::Scene::hasEnvironmentEmitter ( ) const
inline

Does the scene have an environment emitter?

bool mitsuba::Scene::hasMedia ( ) const
inline

Does the scene contain participating media?

void mitsuba::Scene::initialize ( )

Initialize the scene.

This function must be called before using any of the methods in this class.

void mitsuba::Scene::initializeBidirectional ( )

Initialize the scene for bidirectional rendering algorithms.

This ensures that certain "special" shapes (such as the aperture of the sensor) are added to the scene. This function should be called before using any of the methods in this class.

void mitsuba::Scene::invalidate ( )

Invalidate the kd-tree.

This function must be called if, after running initialize(), additional geometry is added to the scene.
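A hedged sketch of that situation; extraShape stands in for some hypothetical, fully configured Shape that is added after the first initialization.

    // Adding geometry after initialize() invalidates the kd-tree.
    scene->addChild(extraShape);   // register the new (hypothetical) shape
    scene->invalidate();           // mark acceleration structures as stale
    scene->initialize();           // rebuild the kd-tree before tracing rays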

Float mitsuba::Scene::pdfEmitterDirect ( const DirectSamplingRecord &dRec) const

Evaluate the probability density of the direct sampling method implemented by the sampleEmitterDirect() method.

Parameters
dRec: A direct sampling record, which specifies the query location. Note that this record need not be completely filled out. The important fields are p, n, ref, dist, d, measure, and uv.
p: The world-space position that would have been passed to sampleEmitterDirect()
Returns
The density expressed with respect to the requested measure (usually ESolidAngle)
Float mitsuba::Scene::pdfEmitterDiscrete ( const Emitter *emitter) const
inline

Return the discrete probability of choosing a certain emitter in sampleEmitter*

Float mitsuba::Scene::pdfEmitterPosition ( const PositionSamplingRecord &pRec) const

Evaluate the spatial component of the sampling density implemented by the sampleEmitterPosition() method.

Parameters
pRec: A position sampling record, which specifies the query location
Returns
The area density at the supplied position
Float mitsuba::Scene::pdfSensorDirect ( const DirectSamplingRecord &dRec) const

Evaluate the probability density of the direct sampling method implemented by the sampleSensorDirect() method.

Parameters
dRec: A direct sampling record, which specifies the query location. Note that this record need not be completely filled out. The important fields are p, n, ref, dist, d, measure, and uv.
p: The world-space position that would have been passed to sampleSensorDirect()
Returns
The density expressed with respect to the requested measure (usually ESolidAngle)
Float mitsuba::Scene::pdfSensorPosition ( const PositionSamplingRecord &pRec) const
inline

Evaluate the spatial component of the sampling density implemented by the sampleSensorPosition() method.

Parameters
pRec: A position sampling record, which specifies the query location
Returns
The area density at the supplied position
void mitsuba::Scene::postprocess ( RenderQueue *queue,
const RenderJob *job,
int  sceneResID,
int  sensorResID,
int  samplerResID 
)

Perform any post-processing steps after rendering.

Progress is tracked by sending status messages to a provided render queue (the parameter job is required to discern multiple render jobs occurring in parallel).

The last three parameters are resource IDs of the associated scene, sensor and sample generator, which have been made available to all local and remote workers.

bool mitsuba::Scene::preprocess ( RenderQueue *queue,
const RenderJob *job,
int  sceneResID,
int  sensorResID,
int  samplerResID 
)

Perform any pre-processing steps before rendering.

This function should be called after initialize() and before rendering the scene. It might do a variety of things, such as constructing photon maps or executing distributed overture passes.

Progress is tracked by sending status messages to a provided render queue (the parameter job is required to discern multiple render jobs occurring in parallel).

The last three parameters are resource IDs of the associated scene, sensor and sample generator, which have been made available to all local and remote workers.

Returns
true upon successful completion.
bool mitsuba::Scene::rayIntersect ( const Ray &ray,
Intersection &its 
) const
inline

Intersect a ray against all primitives stored in the scene and return detailed intersection information.

Parameters
ray: A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which applies when the shapes are in motion)
its: A detailed intersection record, which will be filled by the intersection query
Returns
true if an intersection was found
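For illustration, a hedged sketch combining the detailed query with the boolean shadow-ray variant; target is a hypothetical point to be tested for visibility from the hit point.

    Intersection its;
    if (scene->rayIntersect(ray, its)) {
        // Build a shadow ray from the hit point towards 'target'
        Vector d = target - its.p;
        Float dist = d.length();
        Ray shadowRay(its.p, d / dist, Epsilon,
                      dist * (1 - ShadowEpsilon), ray.time);
        bool occluded = scene->rayIntersect(shadowRay);  // cheap yes/no test
    }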
bool mitsuba::Scene::rayIntersect ( const Ray &ray,
Float &t,
ConstShapePtr &shape,
Normal &n,
Point2 &uv 
) const
inline

Intersect a ray against all primitives stored in the scene and return the traveled distance and intersected shape.

This function represents a performance improvement when the intersected shape must be known, but there is no need for a detailed intersection record.

Parameters
ray: A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which applies when the shapes are in motion)
t: The traveled ray distance will be stored in this parameter
shape: A pointer to the intersected shape will be stored in this parameter
n: The geometric surface normal will be stored in this parameter
uv: The UV coordinates associated with the intersection will be stored here.
Returns
true if an intersection was found
bool mitsuba::Scene::rayIntersect ( const Ray &ray) const
inline

Intersect a ray against all primitives stored in the scene and only determine whether or not there is an intersection.

This is by far the fastest ray tracing method. This performance improvement comes with a major limitation though: this function cannot provide any additional information about the detected intersection (not even its position).

Parameters
ray: A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which applies when the shapes are in motion)
Returns
true if an intersection was found
bool mitsuba::Scene::rayIntersectAll ( const Ray &ray,
Intersection &its 
) const

Intersect a ray against all scene primitives and "special" primitives, such as the aperture of a sensor.

This function does exactly the same thing as rayIntersect, except that it additionally performs intersections against a list of "special" shapes that are intentionally kept outside of the main scene kd-tree (e.g. because they are not static and might change from rendering to rendering). This is needed by some bidirectional techniques that e.g. care about intersections with the sensor aperture.

Parameters
ray: A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which applies when the shapes are in motion)
its: A detailed intersection record, which will be filled by the intersection query
Returns
true if an intersection was found
bool mitsuba::Scene::rayIntersectAll ( const Ray &ray,
Float &t,
ConstShapePtr &shape,
Normal &n,
Point2 &uv 
) const

Intersect a ray against all normal and "special" primitives and only return the traveled distance and intersected shape.

This function represents a performance improvement when the intersected shape must be known, but there is no need for a detailed intersection record.

This function does exactly the same thing as rayIntersect, except that it additionally performs intersections against a list of "special" shapes that are intentionally kept outside of the main scene kd-tree (e.g. because they are not static and might change from rendering to rendering). This is needed by some bidirectional techniques that e.g. care about intersections with the sensor aperture.

Parameters
ray: A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which applies when the shapes are in motion)
t: The traveled ray distance will be stored in this parameter
shape: A pointer to the intersected shape will be stored in this parameter
n: The geometric surface normal will be stored in this parameter
uv: The UV coordinates associated with the intersection will be stored here.
Returns
true if an intersection was found
bool mitsuba::Scene::rayIntersectAll ( const Ray &ray) const

Intersect a ray against all normal and "special" primitives and only determine whether or not there is an intersection.

This is by far the fastest ray tracing method. This performance improvement comes with a major limitation though: this function cannot provide any additional information about the detected intersection (not even its position).

This function does exactly the same thing as rayIntersect, except that it additionally performs intersections against a list of "special" shapes that are intentionally kept outside of the main scene kd-tree (e.g. because they are not static and might change from rendering to rendering). This is needed by some bidirectional techniques that e.g. care about intersections with the sensor aperture.

Parameters
ray: A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which applies when the shapes are in motion)
Returns
true if an intersection was found
void mitsuba::Scene::removeSensor ( Sensor *sensor)

Remove a sensor from the scene's sensor list.

bool mitsuba::Scene::render ( RenderQueue *queue,
const RenderJob *job,
int  sceneResID,
int  sensorResID,
int  samplerResID 
)

Render the scene as seen by the scene's main sensor.

Progress is tracked by sending status messages to a provided render queue (the parameter job is required to discern multiple render jobs occurring in parallel).

The last three parameters are resource IDs of the associated scene, sensor and sample generator, which have been made available to all local and remote workers.

Returns
true upon successful completion.
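The preprocessing, rendering and postprocessing functions are normally invoked in a fixed order by the render job; the following is a hedged sketch of that call sequence, assuming queue, job and the three resource IDs have been set up elsewhere.

    scene->initialize();   // build acceleration structures first
    if (scene->preprocess(queue, job, sceneResID, sensorResID, samplerResID)) {
        scene->render(queue, job, sceneResID, sensorResID, samplerResID);
        scene->postprocess(queue, job, sceneResID, sensorResID, samplerResID);
        scene->flush(queue, job);   // write out the final image
    }
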
Spectrum mitsuba::Scene::sampleAttenuatedEmitterDirect ( DirectSamplingRecord &dRec,
const Medium *medium,
int &  interactions,
const Point2 &sample,
Sampler *sampler = NULL 
) const

Direct illumination sampling with support for participating media (medium variant)

Given an arbitrary reference point in the scene, this method samples a position on an emitter that has a nonzero contribution towards that point. In comparison to sampleEmitterDirect, this version also accounts for attenuation by participating media and should be used when dRec.p lies inside a medium, i.e. not on a surface!

Ideally, the implementation should importance sample the product of the emission profile and the geometry term between the reference point and the position on the emitter.

Parameters
dRec: A direct illumination sampling record that specifies the reference point and a time value. After the function terminates, it will be populated with the position sample and related information
medium: The medium located at the reference point (or NULL for vacuum).
interactions: Specifies the maximum permissible number of index-matched medium transitions or BSDF::ENull scattering events on the way to the light source. (interactions<0 means arbitrarily many). When the function returns a nonzero result, this parameter will additionally be used to return the actual number of intermediate interactions.
sample: A uniformly distributed 2D vector
sampler: Optional: a pointer to a sample generator. Some particular implementations can do a better job at sampling when they have access to additional random numbers.
Returns
An importance weight given by the radiance received along the sampled ray divided by the sample probability.
Spectrum mitsuba::Scene::sampleAttenuatedEmitterDirect ( DirectSamplingRecord &dRec,
const Intersection &its,
const Medium *medium,
int &  interactions,
const Point2 &sample,
Sampler *sampler = NULL 
) const

Direct illumination sampling with support for participating media (surface variant)

Given an arbitrary reference point in the scene, this method samples a position on an emitter that has a nonzero contribution towards that point. In comparison to sampleEmitterDirect, this version also accounts for attenuation by participating media and should be used when the target position lies on a surface.

Ideally, the implementation should importance sample the product of the emission profile and the geometry term between the reference point and the position on the emitter.

Parameters
dRec: A direct illumination sampling record that specifies the reference point and a time value. After the function terminates, it will be populated with the position sample and related information
its: An intersection record associated with the reference point in dRec. This record is needed to determine the participating medium between the emitter sample and the reference point when its marks a medium transition.
medium: The medium located at its (or NULL for vacuum). When the shape associated with its marks a medium transition, it does not matter which of the two media is specified.
interactions: Specifies the maximum permissible number of index-matched medium transitions or BSDF::ENull scattering events on the way to the light source. (interactions<0 means arbitrarily many). When the function returns a nonzero result, this parameter will additionally be used to return the actual number of intermediate interactions.
sample: A uniformly distributed 2D vector
sampler: Optional: a pointer to a sample generator. Some particular implementations can do a better job at sampling when they have access to additional random numbers.
Returns
An importance weight given by the radiance received along the sampled ray divided by the sample probability.
Spectrum mitsuba::Scene::sampleAttenuatedSensorDirect ( DirectSamplingRecord &dRec,
const Medium *medium,
int &  interactions,
const Point2 &sample,
Sampler *sampler = NULL 
) const

"Direct illumination" sampling routine for the main scene sensor with support for participating media (medium variant)

Given an arbitrary reference point in the scene, this method samples a position on a sensor that has a nonzero response towards that point. In comparison to sampleSensorDirect, this version also accounts for attenuation by participating media and should be used when dRec.p lies inside a medium, i.e. not on a surface! This function can be interpreted as a generalization of a direct illumination sampling strategy to sensors.

Ideally, the implementation should importance sample the product of the response profile and the geometry term between the reference point and the position on the sensor.

Parameters
dRec: A direct illumination sampling record that specifies the reference point and a time value. After the function terminates, it will be populated with the position sample and related information
medium: The medium located at the reference point (or NULL for vacuum).
interactions: Specifies the maximum permissible number of index-matched medium transitions or BSDF::ENull scattering events on the way to the light source. (interactions<0 means arbitrarily many). When the function returns a nonzero result, this parameter will additionally be used to return the actual number of intermediate interactions.
sample: A uniformly distributed 2D vector
sampler: Optional: a pointer to a sample generator. Some particular implementations can do a better job at sampling when they have access to additional random numbers.
Returns
An importance weight given by the radiance received along the sampled ray divided by the sample probability.
Spectrum mitsuba::Scene::sampleAttenuatedSensorDirect ( DirectSamplingRecord &dRec,
const Intersection &its,
const Medium *medium,
int &  interactions,
const Point2 &sample,
Sampler *sampler = NULL 
) const

"Direct illumination" sampling routine for the main scene sensor with support for participating media (surface variant)

Given an arbitrary reference point in the scene, this method samples a position on a sensor that has a nonzero response towards that point. In comparison to sampleSensorDirect, this version also accounts for attenuation by participating media and should be used when the target position lies on a surface.

Ideally, the implementation should importance sample the product of the response profile and the geometry term between the reference point and the position on the sensor.

Parameters
dRec: A direct illumination sampling record that specifies the reference point and a time value. After the function terminates, it will be populated with the position sample and related information
its: An intersection record associated with the reference point in dRec. This record is needed to determine the participating medium between the sensor sample and the reference point when its marks a medium transition.
medium: The medium located at its (or NULL for vacuum). When the shape associated with its marks a medium transition, it does not matter which of the two media is specified.
interactions: Specifies the maximum permissible number of index-matched medium transitions or BSDF::ENull scattering events on the way to the light source. (interactions<0 means arbitrarily many). When the function returns a nonzero result, this parameter will additionally be used to return the actual number of intermediate interactions.
sample: A uniformly distributed 2D vector
sampler: Optional: a pointer to a sample generator. Some particular implementations can do a better job at sampling when they have access to additional random numbers.
Returns
An importance weight given by the radiance received along the sampled ray divided by the sample probability.
Spectrum mitsuba::Scene::sampleEmitterDirect ( DirectSamplingRecord &dRec,
const Point2 &sample,
bool  testVisibility = true 
) const

Direct illumination sampling routine.

Given an arbitrary reference point in the scene, this method samples a position on an emitter that has a nonzero contribution towards that point.

Ideally, the implementation should importance sample the product of the emission profile and the geometry term between the reference point and the position on the emitter.

Parameters
dRec: A direct illumination sampling record that specifies the reference point and a time value. After the function terminates, it will be populated with the position sample and related information
sample: A uniformly distributed 2D vector
testVisibility: When set to true, a shadow ray will be cast to ensure that the sampled emitter position and the reference point are mutually visible.
Returns
An importance weight given by the radiance received along the sampled ray divided by the sample probability.
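A hedged sketch of a next-event-estimation step at a surface intersection its with BSDF bsdf and a per-thread sampler (all assumed to be given); Li accumulates the estimated radiance.

    DirectSamplingRecord dRec(its);
    Spectrum value = scene->sampleEmitterDirect(dRec, sampler->next2D());
    if (!value.isZero()) {
        // Evaluate the BSDF towards the sampled emitter direction
        BSDFSamplingRecord bRec(its, its.toLocal(dRec.d), ERadiance);
        Spectrum bsdfVal = bsdf->eval(bRec);
        // (an MIS weight based on dRec.pdf and bsdf->pdf(bRec) could be applied)
        Li += value * bsdfVal;
    }
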
Spectrum mitsuba::Scene::sampleEmitterPosition ( PositionSamplingRecord &pRec,
const Point2 &sample 
) const

Sample a position according to the emission profile defined by the emitters in the scene.

To sample the directional component, please use the Emitter::sampleDirection() method.

Parameters
pRec: A position record to be populated with the sampled position and related information
sample: A uniformly distributed 2D vector
Returns
An importance weight associated with the sampled position. This accounts for the difference in the spatial part of the emission profile and the density function.
Spectrum mitsuba::Scene::sampleEmitterRay ( Ray &ray,
const Emitter *&  emitter,
const Point2 &spatialSample,
const Point2 &directionalSample,
Float  time 
) const

Importance sample a ray according to the emission profile defined by the emitters in the scene.

This function combines both steps of choosing a ray origin and direction value. It does not return any auxiliary sampling information and is mainly meant to be used by unidirectional rendering techniques.

Note that this function potentially uses a different sampling strategy compared to the sequence of running sampleEmitterPosition() and Emitter::sampleDirection(). The reason for this is that it may be possible to switch to a better technique when sampling both position and direction at the same time.

Parameters
ray: A ray data structure to be populated with a position and direction value
spatialSample: Denotes the sample that is used to choose the spatial component
directionalSample: Denotes the sample that is used to choose the directional component
time: Scene time value to be associated with the sample
Returns
An importance weight associated with the sampled ray. This accounts for the difference between the emission profile and the sampling density function.
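A hedged sketch of seeding a light path, e.g. in a particle tracer; sampler and time are assumed to be available.

    Ray ray;
    const Emitter *emitter = NULL;
    Spectrum power = scene->sampleEmitterRay(ray, emitter,
        sampler->next2D(), sampler->next2D(), time);
    if (!power.isZero()) {
        // trace 'ray' through the scene, carrying 'power' as the path weight
    }
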
Spectrum mitsuba::Scene::sampleSensorDirect ( DirectSamplingRecord &dRec,
const Point2 &sample,
bool  testVisibility = true 
) const

"Direct illumination" sampling routine for the main scene sensor

Given an arbitrary reference point in the scene, this method samples a position on a sensor that has a nonzero contribution towards that point. This function can be interpreted as a generalization of a direct illumination sampling strategy to sensors.

Ideally, the implementation should importance sample the product of the response profile and the geometry term between the reference point and the position on the sensor.

Parameters
dRec: A direct illumination sampling record that specifies the reference point and a time value. After the function terminates, it will be populated with the position sample and related information
sample: A uniformly distributed 2D vector
testVisibility: When set to true, a shadow ray will be cast to ensure that the sampled sensor position and the reference point are mutually visible.
Returns
An importance weight given by the importance emitted along the sampled ray divided by the sample probability.
Spectrum mitsuba::Scene::sampleSensorPosition ( PositionSamplingRecord &pRec,
const Point2 &sample,
const Point2 *extra = NULL 
) const
inline

Sample a position on the main sensor of the scene.

This function is provided here mainly for symmetry with respect to sampleEmitterPosition().

To sample the directional component, please use the Sensor::sampleDirection() method.

Parameters
pRec: A position record to be populated with the sampled position and related information
sample: A uniformly distributed 2D vector
extra: An additional 2D vector provided to the sampling routine – its use is implementation-dependent.
Returns
An importance weight associated with the sampled position. This accounts for the difference in the spatial part of the response profile and the density function.
void mitsuba::Scene::serialize ( Stream *stream,
InstanceManager *manager 
) const
virtual

Serialize the whole scene to a network/file stream.

Reimplemented from mitsuba::NetworkedObject.

void mitsuba::Scene::setBlockSize ( uint32_t  size)
inline

Set the block resolution used to split images into parallel workloads.

void mitsuba::Scene::setDestinationFile ( const fs::path &  name)

Set the render output filename.

void mitsuba::Scene::setIntegrator ( Integrator *integrator)
inline

Set the scene's integrator.

Note that the integrator is not included when this Scene instance is serialized – the integrator field will be NULL after unserialization. This is intentional so that the integrator can be changed without having to re-transmit the whole scene. Hence, the integrator needs to be submitted separately and re-attached on the remote side using setIntegrator().

void mitsuba::Scene::setSampler ( Sampler *sampler)
inline

Set the scene's sampler.

Note that the sampler is not included when this Scene instance is serialized – the sampler field will be NULL after unserialization. This is intentional so that the sampler can be changed without having to re-transmit the whole scene. Hence, the sampler needs to be submitted separately and re-attached on the remote side using setSampler().

void mitsuba::Scene::setSensor ( Sensor *sensor)

Set the main scene sensor.

Note that the main sensor is not included when this Scene instance is serialized – the sensor field will be NULL after unserialization. This is intentional so that the sensor can be changed without having to re-transmit the whole scene. Hence, it needs to be submitted separately and re-attached on the remote side using setSensor().

void mitsuba::Scene::setSourceFile ( const fs::path &  name)

Set the name of the file containing the original description of this scene.

std::string mitsuba::Scene::toString ( ) const
virtual

Return a string representation.

Reimplemented from Object.

void mitsuba::Scene::wakeup ( ConfigurableObject *parent,
std::map< std::string, SerializableObject * > &  params 
)
virtual

Retrieve any required resources after unserialization.

Reimplemented from mitsuba::NetworkedObject.

Member Data Documentation

Class* mitsuba::Scene::m_theClass
static

The documentation for this class was generated from the following file: mitsuba/render/scene.h