Announcements

26 Feb 2014

Mitsuba 0.5.0 released

Hello all,

much has happened since the last version of Mitsuba, so I figured it’s time for a new official release! I’m happy that there were quite a few external contributions this time.

The new features of version 0.5.0 are:

  • Multichannel renderings:

    [Image: a multichannel rendering]

    Mitsuba can now perform renderings of images with multiple channels—these can contain the result of traditional rendering algorithms or extracted information of visible surfaces (e.g. surface normals or depth). All computation happens in one pass, and the output is written to a dense or tiled multi-channel EXR file. This feature should be quite useful to computer vision researchers who often need synthetic ground truth data to test their algorithms. Refer to the multichannel plugin in the documentation for an example.
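
    A minimal sketch of such a setup (treat the plugin nesting and field names below as assumptions to be checked against the multichannel plugin documentation):

        <integrator type="multichannel">
            <!-- Channel 1: a conventional path-traced rendering -->
            <integrator type="path"/>
            <!-- Channel 2: shading normals of the visible surfaces -->
            <integrator type="field">
                <string name="field" value="shNormal"/>
            </integrator>
            <!-- Channel 3: distance from the camera, i.e. depth -->
            <integrator type="field">
                <string name="field" value="distance"/>
            </integrator>
        </integrator>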

  • Python integration: Following in the footsteps of previous versions, this release contains many improvements to the Python language bindings. They are now suitable for building quite complex Python-based applications on top of Mitsuba, ranging from advanced scripted rendering workflows to full-blown visual material editors. The Python chapter of the documentation has been updated with many new recipes that show how to harness this functionality. The new features include:

    • PyQt/PySide integration: It is now possible to fully control a rendering process and display partial results using a user interface written in Python. I’m really excited about this feature myself because it will free me from having to write project-specific user interfaces using C++ in the future. With the help of Python, it’s simple and fast to whip up custom GUIs to control certain aspects of a rendering (e.g. material parameters).

      The documentation includes a short example that instantiates and renders a scene while visually showing the progress and partial blocks being rendered. Due to a very helpful feature of Python called buffer objects, it was possible to implement communication between Mitsuba and Qt in such a way that the user interface directly accesses the image data in Mitsuba’s internal memory without the overhead of costly copy operations.

      [Image: Python/Qt rendering demo]
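
      A condensed sketch of this zero-copy handoff (the buffer() accessor is an assumption standing in for the actual buffer-object interface; the complete working example is in the documentation):

          from PyQt4.QtGui import QImage

          # Wrap a rendered Mitsuba bitmap in a QImage: thanks to the
          # buffer object mechanism, QImage reads Mitsuba's memory
          # directly, and no pixel data is copied. 'bitmap' is assumed
          # to be an 8-bit RGB mitsuba.core.Bitmap.
          size = bitmap.getSize()
          image = QImage(bitmap.buffer(), size.x, size.y,
                         QImage.Format_RGB888)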
    • Scripted triangle mesh construction: The internal representation of triangle shapes can now be accessed and modified via the Python API—the documentation contains a recipe that shows how to instantiate a simple mesh. Note that this is potentially quite slow when creating big meshes due to the interpreted nature of Python.
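
      For orientation, a sketch along the lines of that recipe (the TriMesh constructor arguments and accessor names here are assumptions; consult the Python chapter for the authoritative version):

          from mitsuba.core import Point
          from mitsuba.render import TriMesh

          # Hypothetical sketch: build a mesh with a single triangle.
          mesh = TriMesh('example', 1, 3)        # 1 triangle, 3 vertices
          positions = mesh.getVertexPositions()  # writable vertex buffer
          positions[0] = Point(0, 0, 0)
          positions[1] = Point(1, 0, 0)
          positions[2] = Point(0, 1, 0)
          indices = mesh.getTriangles()          # flat index buffer
          indices[0], indices[1], indices[2] = 0, 1, 2
          mesh.configure()                       # finalize the shape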

    • Blender Python integration: An unfortunate aspect of Python-based rendering plugins for Blender is the poor performance when exporting geometry (this is related to the last point). It’s simply not a good idea to run interpreted code that might have to iterate over millions of vertices and triangles. Mitsuba 0.5.0 also lays the groundwork for future rendering plugin improvements using two new features: first, after loading the Mitsuba Python bindings into the Blender process, Mitsuba can directly access Blender’s internal geometry data structures without having to go through the interpreter; second, a rendered image can be passed straight to Blender without having to write an intermediate image file to disk. I look forward to seeing these features integrated into the MtsBlend plugin, where I expect they will improve performance noticeably.

    • NumPy integration: Nowadays, many people use NumPy and SciPy to process their scientific data. I’ve added initial support to facilitate NumPy computations involving image data. In particular, Mitsuba bitmaps can now be accessed as if they were NumPy arrays when the Mitsuba Python bindings are loaded into the interpreter.

      For those who prefer to run Mitsuba as an external process but still want to use NumPy for data processing, Joe Kider contributed a new feature for the mfilm plugin to write out binary NumPy files. These are much more compact and faster to load compared to the standard ASCII output of mfilm.
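
      For instance, a small sketch ('frame.npy' is a placeholder filename):

          import numpy as np

          # With the Mitsuba bindings loaded, a Bitmap can back a NumPy
          # array without copying, as described above:
          #   arr = np.array(bitmap, copy=False)

          # A binary NumPy file written by mfilm loads with one call:
          data = np.load('frame.npy')
          print(data.shape, data.dtype, data.mean())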

    • Python 3.3 is now consistently supported on all platforms.

    • On OSX, the Mitsuba Python bindings can now also be used with non-Apple Python binaries (previously, doing so would result in segmentation faults).

  • GUI Improvements:

    • Termination of rendering jobs: This has probably happened to every seasoned user of Mitsuba at some point: an accidental click/drag into the window stops a long-running rendering job, destroying all progress made so far. The renderer now asks for confirmation if the job has been running for more than a few seconds.


    • Switching between tabs: The Alt-Left and Alt-Right shortcuts now cycle through the open tabs, for convenient visual comparisons between images.

    • Fewer clicks in the render settings: Anton Kaplanyan contributed a patch that makes all render settings fields directly editable without having to double click on them, saving a lot of unnecessary clicks.

  • New default tag: One feature that has been available in Mitsuba since the early days is the ability to leave some scene parameters unspecified in the XML description and supply them via the command line (e.g. the albedo parameter in the following snippet). This is convenient but always had the critical drawback that loading failed with an error message when the parameter was not explicitly specified. Mitsuba 0.5.0 adds a new XML tag named default that denotes a fallback value to use when the parameter is not given on the command line. The following example illustrates how to use it:

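    (A reconstruction based on the documented syntax; on the command line, the parameter would be supplied as mitsuba -Dalbedo=0.9 scene.xml.)

        <default name="albedo" value="0.5"/>

        <bsdf type="diffuse">
            <!-- $albedo is replaced by the -D command line value,
                 or by the <default> above when none is given -->
            <spectrum name="reflectance" value="$albedo"/>
        </bsdf>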

  • Windows 8.x compatibility: In a prior blog post, I complained about running into serious OpenGL problems on Windows 8. I have to apologize since it looks like I was the one to blame: Microsoft’s implementation is fine, and it was in fact a bug in my own code that was causing the issues. With that addressed in version 0.5.0, Mitsuba now also works on Windows 8.

  • CMake build system: The last release shipped with an all-new set of dependency libraries, and since then the CMake build system was broken to the point of being completely unusable. Edgar Velázquez-Armendáriz tracked down all issues and submitted a big set of patches that make CMake builds functional again.

  • Other bugfixes and improvements:

    • Edgar Velázquez-Armendáriz fixed an issue in the initialization code that could lead to crashes when starting Mitsuba on Windows.

    • The command line server executable mtssrv was inconvenient to use on Mac OS X because it terminated after any kind of error instead of handling it gracefully. The behavior was changed to match the other platforms.

    • A previous release contained a fix for an issue in the thin dielectric material model. Unfortunately, I did not apply the correction to all affected parts of the plugin back then. This is now fixed, and the model has also been compared against explicitly path-traced layers to ensure a correct implementation.

    • Anton Kaplanyan contributed several MLT-related robustness improvements.

    • Anton also contributed a patch that resets all statistics counters before starting a new rendering, which is useful when batch processing several scenes or when using the user interface.

    • Jens Olsson reported some pitfalls in the XML scene description language that could lead to inconsistencies between renderings done in RGB and spectral mode. To address this, the behavior of the intent attribute and the spectrum tag (for constant spectra) was slightly adapted. This only affects users doing spectral renderings; if that includes you, take a look at Section 6.1.3 of the new documentation and the associated entry on the bug tracker.

I’d also like to announce two new efforts to develop plugins that integrate Mitsuba into modeling applications:

  • Rhino plugin: TDM Solutions SL published the first version of an open source Mitsuba plugin for Rhino 3D and Rhino Gold. The repository of this new plugin can be found on GitHub, and there is also a Rhino 3D group page with binaries and documentation. It is based on an exporter I wrote a long time ago but adds a complete user interface with preliminary material support. I’m excited to see where this will go!

  • Maya plugin: Jens Olsson from the Volvo Car Corporation contributed the beginnings of a new Mitsuba integration plugin for Maya. Currently, the plugin exports geometry to Mitsuba’s .serialized format but still requires manual XML input to specify materials. Nonetheless, this should be quite helpful for Mitsuba users who model using Maya. The source code is located in the mitsuba-maya repository and prebuilt binaries are here.

Getting the new version

Precompiled binaries for many different operating systems are available on the download page. The updated documentation (249 pages), which covers all of the new features, is here (~36 MB); a lower-resolution version is also available here (~6 MB).

By the way: a little birdie told me that Mitsuba has been used in a bunch of SIGGRAPH submissions this year. If all goes well, you can look forward to some truly exciting new features!


16 Dec 2013

‘Tis the Season

My fiancée baked some Mitsuba leaf-shaped Christmas cookies.

[Photo: Mitsuba leaf-shaped Christmas cookies]
To cut them out of the dough, she 3D-printed the logo in wax and cast it in silver.

[Photo: the cast silver logo]

Awesome! :)


12 Nov 2013

Mitsuba 0.4.5 released

Hello all,

I’m happy to release a new version of Mitsuba containing many bugfixes and a couple of new features. They are as follows:

  • Height field intersection shape:


    [Image: heightfield example rendering]

    The heightfield primitive represents a quad that is vertically displaced by an arbitrary texture. All storage and ray intersection computations are done in image space (i.e. without creating explicit dense triangle geometry), which leads to significantly better performance. This new plugin was contributed by Miloš Hašan and adapted by Wenzel Jakob. Internally, it relies on the Min-Max MIP Map acceleration data structure.

  • Perspective camera model with radial distortion:

    When rendering images that are supposed to look just like an image taken by a real-world camera, it is important to take lens distortions into account. Mitsuba now has a new perspective_rdist plugin that accepts fitted radial distortion parameters from standard camera calibration packages like the MATLAB Camera Calibration toolbox by Jean-Yves Bouguet.
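
    A hypothetical snippet (the kc parameter name is an assumption based on the plugin documentation, and the coefficient values are placeholders):

        <sensor type="perspective_rdist">
            <!-- second- and fourth-order radial distortion
                 coefficients from a calibration package -->
            <string name="kc" value="0.282, 0.0322"/>
        </sensor>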

  • Fast image convolution & bloom filter:

    Mitsuba now supports fast convolutions of large 2D images in frequency space using a convolution theorem approach (Fourier Transform implemented via the FFTW library). This is a pretty standard technique, though it sometimes seems like magic.

    This fast convolution method is used to implement Spencer et al.’s physically-based bloom filter in the mtsutil tonemap utility. This can be useful when rendering images in which pixels are clipped because they are so bright. Take for instance the rendering below: there are many reflections of the sun, but they are quite hard to perceive due to the limited dynamic range. After convolving the image with an empirical point spread function of the human eye, their brightness is much more apparent.


    [Images: rendering without and with the bloom filter]
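
    To illustrate the underlying idea in plain NumPy (a sketch of the convolution theorem, not Mitsuba’s FFTW-based implementation):

        import numpy as np

        def fft_convolve(image, kernel):
            # Convolution in image space equals pointwise multiplication
            # in frequency space; for the large kernels used by a bloom
            # filter, this is far cheaper than direct convolution.
            sy = image.shape[0] + kernel.shape[0] - 1
            sx = image.shape[1] + kernel.shape[1] - 1
            # Zero-padding both inputs to a common size yields a linear
            # (rather than circular) convolution.
            F = np.fft.rfft2(image, (sy, sx))
            G = np.fft.rfft2(kernel, (sy, sx))
            return np.fft.irfft2(F * G, (sy, sx))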

  • Initial Retina display support, support for OSX 10.8:

    The latest iteration of OSX and Apple’s new devices with HiDPI monitors brought countless changes that caused problems with the last version of Mitsuba. This release addresses many of them, though full HiDPI support will still have to wait until Digia fixes several critical OSX-related bugs in Qt5.

  • Texture channel filter:

    The bitmap texture has a new feature to create a grayscale texture from a specified bitmap channel (e.g. alpha). This is useful in conjunction with the mask plugin.

  • Python bindings:

    Considerably expanded Python binding coverage and better support for recent Python releases. The Windows package now comes with bindings for versions 2.7, 3.2, and 3.3; all Linux packages come with bindings for the Python 2.x and 3.x releases of the respective distributions.

    This release also adds convenience functions for quickly converting between Mitsuba bitmaps and Python bytearray data structures. This can be useful when shuffling data between Python binding libraries, such as Mitsuba and PyQt/PySide.

  • Multithreading:

    By default, Mitsuba collects lots of statistics while rendering a scene. A key requirement for this process is that it should not impact rendering time in any significant way, and the implementation tries to ensure that this is indeed the case. For instance, to prevent resource contention involving performance counters, Mitsuba replicates them many times so that each thread has access to a unique uncontended memory region.

    As it turns out, this replication was not quite working as expected, causing slowdowns on machines with many cores (e.g. >4). This problem has been fixed.

    Another threading-related change that has brought small but measurable performance improvements is that worker threads now request CPU affinity on platforms where this is supported (Windows and Linux).

  • Visual Studio 2013 support:

    As of about a month ago, the new version of Visual Studio is available. Those who develop on Windows will be pleased to hear that it’s now possible to work on Mitsuba using this compiler—also, all pesky compilation of dependencies has been done for you. The older Visual Studio 2010 will continue to be supported in the future.

    To get the most recent version of SCons to work with Visual Studio 2013, a small modification to the SCons code is necessary: change lines 132+ in the file scons-2.3.0\SCons\Tool\MSCommon\vc.py so that they read

    _VCVER = ["12.0", "11.0", "11.0Exp", "10.0", ...
    _VCVER_TO_PRODUCT_DIR = {
        '12.0': [
            r'Microsoft\VisualStudio\12.0\Setup\VC\ProductDir'],
        '11.0': [
            r'Microsoft\VisualStudio\11.0\Setup\VC\ProductDir'],

    and so on.

  • New dependency binaries:

    For consistency, all dependency binaries have been brought up to date on Windows and Mac OS X.

  • KD-tree improvements when rendering in double precision:

    Due to roundoff errors when converting between single and double precision, faces that were aligned with the top-level bounding box of a kd-tree could get clipped. This caused problems when rendering in double precision, particularly when using the instance plugin; the clipping logic has now been fixed.

  • Many smaller robustness fixes and bugfixes in various parts of the renderer—see the repository for details.

I’ll close with a link to an interesting paper which tackles an extremely difficult measurement problem, to be presented at SIGGRAPH Asia next week. The paper’s teaser image was rendered using the volumetric path tracer in Mitsuba.

  • Inverse Volume Rendering with Material Dictionaries (by Ioannis Gkioulekas, Shuang Zhao, Kavita Bala, Todd Zickler, and Anat Levin)


    [Image: paper teaser]

    Abstract: Translucent materials are ubiquitous, and simulating their appearance requires accurate physical parameters. However, physically-accurate parameters for scattering materials are difficult to acquire. We introduce an optimization framework for measuring bulk scattering properties of homogeneous materials (phase function, scattering coefficient, and absorption coefficient) that is more accurate, and more applicable to a broad range of materials. The optimization combines stochastic gradient descent with Monte Carlo rendering and a material dictionary to invert the radiative transfer equation. It offers several advantages: (1) it does not require isolating single-scattering events; (2) it allows measuring solids and liquids that are hard to dilute; (3) it returns parameters in physically-meaningful units; and (4) it does not restrict the shape of the phase function using Henyey-Greenstein or any other low-parameter model. We evaluate our approach by creating an acquisition setup that collects images of a material slab under narrow-beam RGB illumination. We validate results by measuring prescribed nano-dispersions and showing that recovered parameters match those predicted by Lorenz-Mie theory. We also provide a table of RGB scattering parameters for some common liquids and solids, which are validated by simulating color images in novel geometric configurations that match the corresponding photographs with less than 5% error.


31 Jul 2013

It’s done!

Things have been a little quiet around here as of late. Writing my thesis, finishing course work, and moving to another country all took a toll on the amount of time left for Mitsuba coding. So I am happy to say that I’ve finally moved, graduated from Cornell, and just turned in my Ph.D. thesis!

It’s called Light Transport on Path-Space Manifolds and is essentially a significantly expanded version of the Manifold Exploration paper (from 13 to 153 pages).

If you’re interested, click below for a link to the full thesis (20MB PDF).


[Image: thesis title page]


1 Mar 2013

Mitsuba 0.4.4 released

Hello all,

I’ve uploaded binaries for the Mitsuba 0.4.4 release. This is mainly a bugfix release to address issues concerning the previous version. There is, however, one new feature:

Improved Python bindings for rendering animations

It’s a fairly common operation to render a turntable animation of an object to understand its shape a little better. So far, doing this in Mitsuba involved many separate invocations of the renderer (one for each frame). Not only is this a bit tedious, but it also wastes a considerable amount of CPU time by loading and preprocessing the same scene over and over again. Python to the rescue!

In Mitsuba 0.4.4, the Python bindings make this kind of thing straightforward: simply load the scene and render out frames in a for loop. The following piece of code does this, together with motion blur. The work can be spread over the local cores or those on networked machines. Some setup code is omitted for brevity (see the Python chapter in the documentation for all details).

# Render a turntable with 360 / 2 = 180 frames
stepSize = 2
for i in range(0, 360 // stepSize):
    # Compute the rotation at the beginning and the end of the frame
    rotationCur  = Transform.rotate(Vector(0, 0, 1), i*stepSize)
    rotationNext = Transform.rotate(Vector(0, 0, 1), (i+1)*stepSize)

    # Compute matching camera-to-world transformations
    trafoCur  = Transform.lookAt(rotationCur  * Point(0,-6,10),
        Point(0), rotationCur  * Vector(0, 1, 0))
    trafoNext = Transform.lookAt(rotationNext * Point(0,-6,10),
        Point(0), rotationNext * Vector(0, 1, 0))

    # Create an interpolating animated transformation
    atrafo = AnimatedTransform()
    atrafo.appendTransform(0, trafoCur)
    atrafo.appendTransform(1, trafoNext)
    atrafo.sortAndSimplify()
    sensor.setWorldTransform(atrafo) # Assign to the sensor

    # Submit the frame to the scheduler and wait for it to finish
    scene.setDestinationFile('frame_%03i.png' % i)
    job = RenderJob('job_%i' % i, scene, queue)
    job.start()
    queue.waitLeft(0)
    queue.join()

This is basically a 1:1 mapping of the C++ API. At this point, a good amount of the interfaces have been exposed, making it fun to prototype stuff while subjected to the amazing weightlessness of Python. Here, you can see an example of a video created this way (a turntable of the material test ball with a bumpy metal BSDF):



Other changes

  • Photon mapper: In previous releases, the standard photon mapper could miss certain specular paths compared to the path tracer. They are now correctly accounted for.

  • thindielectric: The thindielectric plugin computed incorrect transmittance values in certain situations; this is now fixed.

  • Robustness: Improved numerical robustness when dealing with specular+diffuse materials, such as “plastic”.

  • twosided: Fixed cases where the twosided plugin did not make a material two-sided as expected.

  • Instancing: The computed shading frame was incorrect for non-rigid transformations; this has been corrected.

  • Cube shape: This recently added shape is now centered at the origin by default, to be consistent with the way that other shapes in Mitsuba work. This will require an extra translation in scenes which are already using the cube shape.

  • TLS cleanup logic: on some platforms, the mtssrv binary crashed with an exception after finishing a rendering job, due to some issues with cleaning up thread-local storage.

  • Other minor fixes and improvements, which are listed in the HG history


29 Jan 2013

Mitsuba 0.4.3 released

Hello all,

I’ve just uploaded binaries for a new version of Mitsuba. This release and the next few will focus on catching up with a couple of more production-centric features. This time it’s motion blur — the main additions are:

Moving light sources

The first image below demonstrates a moving point light source traveling along a rose curve inside a partially metallic Cornell box. Because the scene is rendered with a bidirectional path tracer, the light source itself is visible. Due to the defocus blur of the camera, the point light shows up as a ribbon instead of just a curve.

Painting with light using a moving point source

This can be fairly useful even when rendering static scenes, since it enables building things like linear light sources that Mitsuba doesn’t natively implement.
Animating a light source is as simple as replacing its toWorld transformation

<transform name="toWorld">
 ... emitter-to-world transformation ...
</transform>

with the following new syntax

<animation name="toWorld">
   <transform time="0">
     ... transformation at time 0 ...
   </transform>

   <transform time="1">
     ... transformation at time 1 ...
   </transform>
</animation>

Mitsuba uses linear interpolation for scaling and translation and spherical linear interpolation for rotation. Higher-order interpolation or more detailed animations can be approximated by simply providing multiple linear segments per frame.

Moving sensors

Moving sensors work exactly the same way; an example is shown below. All new animation features also provide interactive visualizations in the graphical user interface (medieval scene courtesy of Johnathan Good).



Objects undergoing linear motion

Objects can be animated with the same syntax, but this is currently restricted to linear motion. I wanted to include support for nonlinear deformations in this release, but since it took longer than expected, it will have to wait until the next version.


Beware the dragon (model courtesy of XYZRGB)

Render-time annotations

The ldrfilm and hdrfilm now support render-time annotations to facilitate record keeping. Annotations are used to embed useful information inside a rendered image so that this information is later available to anyone viewing the image. They can either be placed into the image metadata (i.e. without disturbing the rendered image) or “baked” into the image as a visible label. Various keywords can be used to collect all relevant information, e.g.:

 <string name="label[10, 10]" value="Integrator: $integrator['type'],
   $film['width']x$film['height'], $sampler['sampleCount'] spp,
   render time: $scene['renderTime'], memory: $scene['memUsage']"/>

Providing the above parameter to hdrfilm has the following result:

[Image: rendering with the annotation label baked in]

Hiding directly visible emitters

Several rendering algorithms in Mitsuba now have a feature to hide directly visible light sources (e.g. environment maps or area lights). While not particularly realistic, this feature is convenient for removing a background from a rendering so that it can be pasted into a differently-colored document. Together with an improved alpha channel computation for participating media, things like the following are now possible:
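
In scene XML, this is exposed as an integrator parameter; a minimal sketch (the hideEmitters parameter name follows the documentation of the affected integrators):

<integrator type="path">
   <boolean name="hideEmitters" value="true"/>
</integrator>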



Improved instancing

While Mitsuba has supported instancing for a while, there were still a few lurking corner-cases that could potentially cause problems. In a recent paper on developing fractal surfaces in three dimensions, Geoffrey Irving and Henry Segerman used Mitsuba to render heavy scenes (100s of millions of triangles) by instancing repeated substructures. This was a good incentive to fix the problems once and for all. The revamped instancing plugin now also supports non-rigid transformations. The two renderings from the paper shown below illustrate a 2D Gosper curve developed up to level 5 along the third dimension:



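For reference, instancing in a Mitsuba scene wraps the shared geometry in a shapegroup, which any number of instance shapes can then reference; a minimal sketch (the filename is a placeholder):

<shape type="shapegroup" id="myGeometry">
   <shape type="serialized">
       <string name="filename" value="substructure.serialized"/>
   </shape>
</shape>

<shape type="instance">
   <ref id="myGeometry"/>
   <transform name="toWorld">
       <!-- per-instance placement; non-rigid transformations
            are now supported as well -->
       <translate x="1" y="0" z="0"/>
   </transform>
</shape>
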
Miscellaneous

  • Threading on Windows: Edgar Velázquez-Armendáriz fixed the thread local storage (TLS) implementation and a related race condition that previously caused occasional deadlocks and crashes on Windows.

  • Caching: Edgar added a caching mechanism to the serialized plugin, which significantly accelerates the very common case where many shapes are loaded from the same file in sequence.

  • File dialogs: Edgar upgraded most of the “File/Open”-style dialogs in Mitsuba so that they use the native file browser on Windows (the generic dialog provided by Qt is rather ugly)

  • Python: The Python bindings are now much easier to load on OSX. Also, in the new release, further Mitsuba core functions are exposed.

  • Blender interaction: Fixed an issue where GUI tabs containing scenes created in Blender could not be cloned.

  • Non-uniform scales: All triangle mesh-based shapes now permit non-uniform scales

  • NaNs and friends: Increased resilience against various numerical corner cases

  • Index-matched participating media: Fixed an unfortunate regression in volpath regarding index-matched media that was accidentally introduced in 0.4.2

  • roughdiffuse: Fixed texturing support in the roughdiffuse plugin

  • Photon mapping: Fixed some inaccuracies involving participating media when rendered by the photon mapper and the Beam Radiance Estimate

  • Conductors: Switched Fresnel reflectance computations for conductors to the exact expressions predicted by geometric optics (an approximation was previously used)

  • New cube shape: Added a cube shape plugin for convenience. This does exactly what one would expect.

  • The rest: As usual, a large number of smaller bugfixes and improvements were below the threshold and are thus not listed individually—the repository log has more details.

Where to get it

The documentation was updated as well and has now grown to over 230 pages. Get it here.
The new release is available on the download page.


31 Oct 2012

Mitsuba 0.4.2 released

The next bug-fix release of Mitsuba is available, which has the following improvements:

  • Volumetric path tracers: improved sampling when dealing with index-matched medium transitions. This is essentially a re-implementation of an optimization that Mitsuba 0.3.1 already had, but which got lost in the bidirectional rewrite.

  • Batch tonemapper: due to an unfortunate bug, the batch tonemapper in the last release produced invalid results for images containing an alpha channel. This is now fixed.

  • Shapes: corrected some differential geometry issues in the “cylinder” and “rectangle” shapes.

  • MLT: fixed 2-stage MLT, which was producing incorrect results.

  • MEPT: fixed the handling of directional light sources.

  • Robustness: got rid of various corner-cases that could produce NaNs.

  • Filenames: to facilitate loading scenes created on Windows/OSX, the Linux version now resolves files case-insensitively if they could not be found after a case-sensitive search.

  • Python: added Python bindings for shapes and triangle meshes. The Python plugin should now be easier to load (previously, this was unfortunately rather difficult on several platforms). The documentation was also given an overhaul.

  • Particle tracing: I’ve decided to disable the adjoint BSDF for shading normals in the particle tracer, since it causes an unacceptable amount of variance in scenes containing poorly tessellated geometry. This affects the plugins ptracer, ppm, sppm and photonmapper. See the commit for further details.

  • Subsurface scattering: fixed parallel network renderings involving the dipole model.

  • Homogeneous medium & dipole: added many more material presets by Narasimhan et al.

  • OBJ loader: further robustness improvements to the OBJ loader and the associated MTL material translator.


10 Oct 2012

Experimental Rhino 5 exporter

I’ve been hacking on a Mitsuba plugin for Rhino 5 and have some preliminary results to share.

Disclaimer: This is an early experimental integration plugin—if you don’t like to edit Mitsuba XML files by hand, it probably won’t be useful to you. The plugin currently only exports geometry and does not support editing materials. I am still waiting for McNeel to provide the necessary API hooks to be able to implement full material support.

On the other hand, the geometry export works quite smoothly even for large geometry with instances and all sorts of crazy shape types. It exports directly to Mitsuba’s native .serialized format, hence the export+open in Mitsuba operation is reasonably fast.

To install it, download Mitsuba.rhp and move it into your Rhinoceros 5.0 Beta/Plug-ins folder. This file then needs to be registered in the Settings using the Install button on the Plug-ins panel (see the first screenshot below). Now, restart Rhino. After the restart, there will be a new Mitsuba panel in the settings (second screenshot below). Select this panel and configure the path to where Mitsuba is installed.

Once this is done, exporting the geometry and opening it in the renderer is as simple as entering Mitsuba on the command prompt. Below, you can see a simple clay realtime preview of an object with sun & sky illumination.

The source repository is here.

Many thanks go to Nathan Coatney, who provided some guidance in the form of an early version of his Python-based LuxRender exporter.


10 Oct 2012

Mitsuba 0.4.1 released

I have just released a new version of Mitsuba. This is mostly a bugfix release, though there is one new feature: full user interface support for Unicode across Linux, Windows, and OSX.




Excuse the silly Katakana :). Edgar Velázquez-Armendáriz helped a lot with the Unicode transition — thanks!

Note: This version does not yet work with the Blender exporter maintained by Bartosz Styperek.

Development will likely remain in bug-fixing mode for a while, since the last release introduced so many changes. Here is a list of the issues that were addressed in this particular version:

  • negative pixel values in textures and environment maps are handled more gracefully.

  • minor robustness improvements to the OBJ and COLLADA importers. They now handle multiple cameras in a single scene.

  • fixed the Ubuntu packages so that they don’t depend on a specific version of the libjpeg development headers.

  • fixed a stack overflow issue in the bidirectional path tracer, as well as some other crash-causing bugs that were found via the Breakpad reports.

  • fixed an issue where sun and sky interpreted the combination of a ‘toWorld’ transform and explicit ‘sunDirection’ differently, causing misalignment.

  • fixed a photon mapper regression involving environment maps.

  • Edgar rewrote a piece of initialization code that prevented Mitsuba from running on Windows XP.

  • Sean Bell contributed an improved setpath.sh script, which adds ZSH autocompletion on Linux.

  • fixed some issues in the bidirectional abstraction layer when dealing with alpha masks.

  • fixed a bug where the rectangle shape produced incorrect results when used as an area light.

  • on OSX, the Python bindings could not be loaded due to invalid library import paths—this now works.

Downloads are available here.

Enjoy!


2 Oct 2012

Update: new 0.4.0 binaries for Windows

Some folks encountered a DLL-loading related issue, which curiously isn’t reproducible on my Windows 7 test machine. The most likely cause of the problem has been identified and fixed.

If you download Mitsuba again, it should hopefully work.

Update 2: turns out that the problem was a missing Intel runtime library. The current binary distribution now includes this.


1 Oct 2012

Mitsuba 0.4.0 released

I am excited to announce version 0.4.0 of Mitsuba! This release represents about two years of development that have been going on in various side-branches of the codebase and have finally been merged.

The change list is extensive: almost all parts of the renderer were modified in some way; the source code diff alone totals over 5MB of text (even after excluding tabular data etc). This will likely cause some porting headaches for those who have a codebase that builds on Mitsuba, and I apologize for that.

Now for the good part: it’s a major step in the development of this project. Most parts of the renderer were redesigned and feature cleaner interfaces, improved robustness and usability (and in many cases better performance). Feature-wise, the most significant change is that the renderer now ships with several state-of-the-art bidirectional rendering techniques. The documentation has achieved coverage of many previously undocumented parts of the renderer and can be considered close to complete. In particular, the plugins now have 100% coverage. Hooray! :)

Please read on for a detailed list of changes:

User interface

Here is a short video summary of the GUI-specific changes:




Please excuse the jarring transitions :). In practice, the preview is quite a bit snappier — in the video, it runs at about half speed due to my recording application fighting against Mitsuba over who gets to have the GPU. To recap, the main user interface changes were:

  • Realtime preview: the VPL-based realtime preview was given a thorough overhaul. As a result, the new preview is faster and produces more accurate output. Together with the redesigned sensors, it is also able to simulate out-of-focus blur directly in the preview.

  • Improved compatibility: The preview now even works on graphics cards with genuinely terrible OpenGL support.

  • Crop/Zoom support: The graphical user interface contains an interactive crop tool that can be used to render only a part of an image and optionally magnify it.

  • Multiple sensors: A scene may now also contain multiple sensors, and it is possible to dynamically switch between them from within the user interface.

Bidirectional rendering algorithms

Mitsuba 0.4.0 ships with a whole batch of bidirectional rendering methods:

  • Bidirectional Path Tracing (BDPT) by Veach and Guibas is an algorithm that works particularly well on interior scenes and often produces noticeable improvements over plain (i.e. unidirectional) path tracing. BDPT renders images by simultaneously tracing partial light paths from the sensor and the emitter and attempting to establish connections between the two.

    The new Mitsuba implementation is a complete reproduction of the original method, which handles all sampling strategies described by Veach. The individual strategies are combined using Multiple Importance Sampling (MIS). A demonstration on a classic scene by Veach is shown below; in the images, s and t denote the number of sampling events from the light and eye direction, respectively. The number of pixel samples is set to 32 so that the difference in convergence is clearly visible.




  • Path Space Metropolis Light Transport (MLT) is a seminal rendering technique proposed by Veach and Guibas, which applies the Metropolis-Hastings algorithm to the path-space formulation of light transport.

    In contrast to simple methods like path tracing that render images by performing a naïve and memoryless random search for light paths, MLT actively searches for relevant light paths. Once such a path is found, the algorithm tries to explore neighboring paths to amortize the cost of the search. This is done with a clever set of path perturbations which can efficiently explore certain classes of paths. This method can often significantly improve the convergence rate of renderings that involve what one might call “difficult” input.

    To my knowledge, this is the first publicly available implementation of this algorithm that works correctly.

  • Primary Sample Space Metropolis Light Transport (PSSMLT) by Kelemen et al. is a simplified version of the above algorithm.
    Like MLT, this method relies on Markov Chain Monte Carlo integration, and it systematically explores the space of light paths, searching with preference for those that carry a significant amount of energy from an emitter to the sensor. The main difference is that PSSMLT does this exploration by piggybacking on another rendering technique and “manipulating” the random number stream that drives it. The Mitsuba version can operate either on top of a unidirectional path tracer or a fully-fledged bidirectional path tracer with multiple importance sampling. This is a nice method to use when a scene is a little bit too difficult for a bidirectional path tracer to handle, in which case the extra adaptiveness due to PSSMLT can bring it back into the realm of things that can be rendered within a reasonable amount of time.

    This is the algorithm that’s widely implemented in commercial rendering packages that mention “Metropolis Light Transport” somewhere in their product description.

  • Energy redistribution path tracing by Cline et al. combines aspects of Path Tracing with the exploration strategies of Veach and Guibas. This method generates a large number of paths using a standard path tracing method, which are then used to seed an MLT-style renderer. It works hand in hand with the next method:

  • Manifold Exploration by Jakob and Marschner is based on the idea that sets of paths contributing to the image naturally form manifolds in path space, which can be explored locally by a simple equation-solving iteration. This leads to a method that can render scenes involving complex specular and near-specular paths, which have traditionally been a source of difficulty in unbiased methods. The following renderings (scene courtesy of Olesya Isaenko) were created with this method:


Developing these kinds of algorithms can be quite tricky because of the sheer number of corner cases that tend to occur in any actual implementation. To limit these complexities and enable compact code, Mitsuba relies on a bidirectional abstraction library (libmitsuba-bidir.so) that exposes the entire renderer in terms of generalized vertex and edge objects. As a consequence, these new algorithms “just work” with every part of Mitsuba, including the shapes, sensors and emitters, surface scattering models, and participating media. As a small caveat, a few remaining non-reciprocal BRDFs and dipole-style subsurface integrators don’t yet interoperate, but this will be addressed in a future release.

Bitmaps and Textures

The part of the renderer that deals with bitmaps and textures was redesigned from scratch, resulting in many improvements:

  • Out-of-core textures: Mitsuba can now comfortably work with textures that exceed the available system memory.

  • Blocked OpenEXR files: Mitsuba can write blocked images, which is useful when the image to be rendered is too large to fit into system memory.

  • Filtering: The quality of filtered texture lookups has improved considerably and is now up to par with mature systems designed for this purpose (e.g. OpenImageIO).

  • MIP map construction: now handles non-power-of-two images efficiently and performs a high-quality Lanczos resampling step to generate lower-resolution MIP levels, where a box filter was previously used. Due to optimizations of the resampling code, this is surprisingly faster than the old scheme! :)

  • Conversion between internal image formats: costly operations like “convert this spectral double precision image to an sRGB 8 bit image” occur frequently during the input and output phases of rendering. These are now much faster due to some template magic that generates optimized code for any conceivable kind of conversion.

  • Flexible bitmap I/O: the new bitmap I/O layer can read and write luminance, RGB, XYZ, and spectral images (each with or without an alpha channel), as well as images with an arbitrary number of channels. In the future, it will be possible to add custom rendering plugins that generate multiple types of output data (i.e. things other than radiance) in a single pass.

Sample generation




This summer, I had the fortune of working for Weta Digital. Leo Grünschloß from the rendering R&D group quickly convinced me of the benefits of Quasi-Monte Carlo point sets. Since he makes his sample generation code available, there was really no excuse not to include it as plugins in the new release. Thanks, Leo!

  • sobol: A fast random-access Sobol sequence generator using the direction numbers by Joe and Kuo.

  • halton & hammersley: These implement the classic Halton and Hammersley sequences with various types of scrambling (including Faure permutations)

Apart from producing renderings with less noise, these can also be used to make a rendering process completely deterministic. When used together with tiling-based rendering techniques (such as the path tracer), these plugins use an enumeration technique (Enumerating Quasi-Monte Carlo Point Sequences in Elementary Intervals by Grünschloß et al.) to find the points within each tile.
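
Selecting one of these samplers in a scene is a one-line change; for example (scramble-related parameters vary per plugin and are described in the documentation):

<sampler type="sobol">
   <integer name="sampleCount" value="64"/>
</sampler>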

Sensors and emitters (a.k.a. cameras and light sources)

The part of Mitsuba that deals with cameras and light sources was rewritten from scratch, which was necessary for clean interoperability with the new integrators. As a sign of the magnitude of these modifications, cameras are now referred to as sensors, and luminaires have become emitters. This terminology change also reflects the considerably wider range of plugins that perform general measurements, with rendering an image being just a special case. For example, the following sensors are available:

  • Perspective pinhole and orthographic sensor: these are the same as always and create tack sharp images (demonstrated on the Cornell box and the material test object).



  • Perspective thin lens and telecentric lens sensor: these can be thought of as “fuzzy” versions of the above. They focus on a planar surface and blur everything else.

    Lens nerd alert: the telecentric lens sensor is particularly fun/wacky! Although it provides an orthographic view, it can “see” the walls of the Cornell box due to defocus blur :)




  • Spherical sensor: a point sensor, which creates a spherical image in latitude-longitude format.


  • Irradiance sensor: this is the dual of an area light. It can be attached to any surface in the scene to record the arriving irradiance and “renders” a single number rather than an image.

  • Fluence sensor: this is the dual of a point light source. It can be placed anywhere in the scene and measures the average radiance passing through that point.

  • Radiance sensor: this is the dual of a collimated beam. It records the radiance passing through a certain point from a certain direction.

The emitters are mostly the same (though, built using the new interface). The main changes are:

  • Environment emitter: the new version of this plugin implements slightly better importance sampling, and it supports filtered texture lookups.

  • Skylight emitter: The old version of this plugin used to implement the Preetham model, which suffered from a range of numerical and accuracy-related problems. The new version is based on the recent TOG paper An Analytic Model for Full Spectral Sky-Dome Radiance by Lukáš Hošek and Alexander Wilkie. The sun model has also been updated for compatibility. Together, these two plugins can be used to render scenes under spectral daylight illumination, using proper physical units (i.e. radiance values have units of W/(m^2 ⋅ sr ⋅ nm)). The sky configuration is found from the viewing position on the earth and the desired date and time, and this computation is now considerably more accurate. This may be useful for architectural or computer vision applications that need to reproduce approximate lighting conditions at a certain time and date (the main caveat being that these plugins do not know anything about the weather).
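
    As a rough sketch of a spectral daylight setup (the parameter names follow the plugin documentation; the values are placeholders):

      <emitter type="sunsky">
          <float name="turbidity" value="3"/>
          <!-- viewing position on the earth -->
          <float name="latitude" value="35.68"/>
          <float name="longitude" value="139.69"/>
          <!-- desired date and time -->
          <integer name="year" value="2012"/>
          <integer name="month" value="10"/>
          <integer name="day" value="1"/>
          <float name="hour" value="15.0"/>
      </emitter>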



When rendering with the bidirectional algorithms, sensors and emitters are now interpreted quite strictly to ensure correct output. For instance, cameras that have a non-infinitesimal aperture are represented as actual objects in the scene that collect illumination—in other words: a thin lens sensor facing a mirror will see itself as a small 100% absorbing disc. Point lights are what they really are (i.e. bright points floating in space). I may work on making some of this behavior optional in future releases, as it can be counter-intuitive when used for artistic purposes.

Other notable extensions and bugfixes:



  • obj: The Wavefront OBJ loader now supports complex meshes that reference many different materials. These are automatically imported from a mtl file if present and can individually be overwritten with more specialized Mitsuba-specific materials.

  • thindielectric: a new BSDF that models refraction and reflection from a thin dielectric material (e.g. a glass window). It should be used when two refraction events are modeled using a single sheet of geometry.

  • blendbsdf: a new BSDF that interpolates between two nested BSDFs based on a texture.

  • mfilm: now supports generating both MATLAB and Mathematica-compatible output.

  • hdrfilm: replaces the exrfilm plugin. This new film plugin can write OpenEXR, Ward-style RGBE, and PFM images.

  • ldrfilm: replaces the pngfilm plugin. The new film writes PNG and JPEG images with adjustable output channels and compression. It applies gamma correction and, optionally, the photographic tonemapping algorithm by Reinhard et al.

  • dipole: the dipole subsurface scattering plugin was completely redesigned. It now features a much faster tree construction code, complete SSE acceleration, and it uses blue noise irradiance sample points.

  • Handedness, part 2: this is a somewhat embarrassing addendum to an almost-forgotten bug. Curiously, old versions of Mitsuba had two handedness issues that canceled each other out—after fixing one of them in 0.3.0, all cameras became left-handed! This is now fixed for good, and nobody (myself, in particular) is allowed to touch this code from now on!

  • Batch tonemapper: the command-line batch tonemapper (accessible via mtsutil) has been extended with additional operations (cropping, resampling, color balancing), and it can process multiple images in parallel.

  • Animation readiness: one important aspect of the redesign was to make every part of the renderer animation-capable. While there is no public file format to load/describe the actual animations yet, it will be a straightforward addition in a future 0.4.x release.

  • Build dependencies: Windows and Mac OS builds now ship with all dependencies except for SCons and Mercurial (in particular, Qt is included). The binaries were recompiled so that they rely on a consistent set of runtime support libraries. This will hopefully end build difficulties on these platforms once and forever.
    Note: The entire process for downloading the dependencies and compiling Mitsuba has changed a little. Please be sure to review the documentation.

  • CMake build system: Edgar Velázquez-Armendáriz has kindly contributed a CMake-based build system for Mitsuba. It essentially does the same thing as the SCons build system except that it is generally quite a bit faster. For now, it is still considered experimental and provided as a convenience for experienced users who prefer to use CMake. Both build systems will be maintained side-by-side in the future.

  • SSE CPU tonemapper: When running Mitsuba through a Virtual Desktop connection on Windows, the OpenGL support is simply too poor to support any kind of GPU preview. In the past, an extremely slow CPU-based fallback was used so that at least some kind of tonemapped image can be shown. Edgar replaced that with optimized SSE2 code from his HDRITools, hence this long-standing resource hog is gone.

  • SSE-accelerated Mersenne Twister: Edgar has also contributed a patch that integrates the SSE-accelerated version of Mersenne Twister by Mutsuo Saito and Makoto Matsumoto, which is about twice as fast as the original code.

  • Multi-python support: some platforms provide multiple incompatible versions of Python (e.g. 2.7 and 3.2). Mitsuba can now build a separate Python integration library for each one.

  • Breakpad integration: Mitsuba will happily crash when given some sorts of invalid input (and occasionally, when given valid input). In the past, it has been frustratingly difficult to track down these crashes, since many users don’t have the right tools to extract backtraces. Starting with this release, official Mitsuba builds on Mac OS and Windows include Google Breakpad, which provides the option to electronically submit a small crash dump file after such a fault. A decent backtrace can then be obtained from this dump file, which will be a tremendous help to debug various crashes.

  • boost::thread: Past versions of Mitsuba have relied on pthreads for multithreading. On Windows, a rather complicated emulation layer was needed to translate between this interface and the very limited native API. Over the years, this situation has improved considerably, so that a simpler and cleaner abstraction, boost::thread, has now become a satisfactory replacement on all platforms. Edgar ported all of the old threading code over to boost.

Compatibility:

There were some changes to plugin names and parameters, hence old scenes will not directly work with 0.4.0. Do not panic: as always, Mitsuba can automatically upgrade your old scenes so that they work with the current release. Occasionally, it just becomes necessary to break compatibility to improve the architecture or internal consistency. Rather than being tied down by old decisions, it is the policy of this project to make such changes while providing a migration path for existing scenes.

When upgrading scenes, please don’t try to do it by hand (e.g. by editing the “version” XML tag). The easiest way to do this automatically is by simply opening an old file using the GUI. It will then ask you if you want to upgrade the scene and do the hard work for you (a backup copy is created just in case). An alternative scriptable approach for those who have a big stash of old scenes is to run the XSLT transformations (in the data/schema directory) using a program like xsltproc.

Documentation

A lot of work has gone into completing the documentation and making it into something that’s fun to read. The images below show a couple of sample pages:



The high resolution reference manual is available for download here: documentation.pdf (a 36MB PDF file), and a low-resolution version is here: documentation_lowres.pdf (6MB). Please let me know if you have any suggestions, or you find a typo somewhere.

Downloads:

To download this release along with set of sample scenes that you can play with, visit the download page. Enjoy!


12 Sep 2012

Manifold exploration talk now online

I’ve just uploaded a video capture of my talk on Manifold Exploration, for those who didn’t see it at SIGGRAPH:


This will all be available in the next version of Mitsuba, which is close to its release. The last remaining item that I’m now working on is an improved version of the real-time preview, which hopefully won’t take more than 1-2 weeks.


13 May 2012

Mitsuba used in SIGGRAPH 2012 technical papers

Mitsuba was used to render the results and illustrations in several technical papers that have just received their final acceptance to SIGGRAPH 2012. The ones that I know of are listed below (if I missed yours, please let me know!)

  • Manifold Exploration: A Markov Chain Monte Carlo technique for rendering scenes with difficult specular transport (by Wenzel Jakob and Steve Marschner)

    Abstract: It is a long-standing problem in unbiased Monte Carlo methods for rendering that certain difficult types of light transport paths, particularly those involving viewing and illumination along paths containing specular or glossy surfaces, cause unusably slow convergence. In this paper we introduce Manifold Exploration, a new way of handling specular paths in rendering. It is based on the idea that sets of paths contributing to the image naturally form manifolds in path space, which can be explored locally by a simple equation-solving iteration. This paper shows how to formulate and solve the required equations using only geometric information that is already generally available in ray tracing systems, and how to use this method in two different Markov Chain Monte Carlo frameworks to accurately compute illumination from general families of paths. The resulting rendering algorithms handle specular, near-specular, glossy, and diffuse surface interactions as well as isotropic or highly anisotropic volume scattering interactions, all using the same fundamental algorithm. An implementation is demonstrated on a range of challenging scenes and evaluated against previous methods.

    Gallery: the following images are courtesy of Wenzel Jakob and Steve Marschner. The interior scene was designed by Olesya Isaenko.

  • Structure-aware Synthesis for Predictive Woven Fabric Appearance (by Shuang Zhao, Wenzel Jakob, Steve Marschner, and Kavita Bala)

    Abstract: Woven fabrics have a wide range of appearance determined by their small-scale 3D structure. Accurately modeling this structural detail can produce highly realistic renderings of fabrics and is critical for predictive rendering of fabric appearance. But building these yarn-level volumetric models is challenging. Procedural techniques are manually intensive, and fail to capture the naturally arising irregularities which contribute significantly to the overall appearance of cloth. Techniques that acquire the detailed 3D structure of real fabric samples are constrained only to model the scanned samples and cannot represent different fabric designs.

    This paper presents a new approach to creating volumetric models of woven cloth, which starts with user-specified fabric designs and produces models that correctly capture the yarn-level structural details of cloth. We create a small database of volumetric exemplars by scanning fabric samples with simple weave structures. To build an output model, our method synthesizes a new volume by copying data from the exemplars at each yarn crossing to match a weave pattern that specifies the desired output structure. Our results demonstrate that our approach generalizes well to complex designs and can produce highly realistic results at both large and small scales.

    Gallery: the following images are courtesy of Shuang Zhao, Wenzel Jakob, Steve Marschner, and Kavita Bala.

    Video:

  • Stitch Meshes for Modeling Knitted Clothing with Yarn-level Detail (by Cem Yuksel, Jonathan Kaldor, Doug L. James, and Steve Marschner)

    Abstract: Recent yarn-based simulation techniques permit realistic and efficient dynamic simulation of knitted clothing, but producing the required yarn-level models remains a challenge. The lack of practical modeling techniques significantly limits the diversity and complexity of knitted garments that can be simulated. We propose a new modeling technique that builds yarn-level models of complex knitted garments for virtual characters. We start with a polygonal model that represents the large-scale surface of the knitted cloth. Using this mesh as an input, our interactive modeling tool produces a finer mesh representing the layout of stitches in the garment, which we call the stitch mesh. By manipulating this mesh and assigning stitch types to its faces, the user can replicate a variety of complicated knitting patterns. The curve model representing the yarn is generated from the stitch mesh, then the final shape is computed by a yarn-level physical simulation that locally relaxes the yarn into realistic shape while preserving global shape of the garment and avoiding “yarn pull-through,” thereby producing valid yarn geometry suitable for dynamic simulation. Using our system, we can efficiently create yarn-level models of knitted clothing with a rich variety of patterns that would be completely impractical to model using traditional techniques. We show a variety of example knitting patterns and full-scale garments produced using our system.

    Gallery: the following images are courtesy of Cem Yuksel. Some of the models are by Christer Sveen, Rune Spaans, and Alexander Tomchuk.

    Video:



  • Energy-based Self-Collision Culling for Arbitrary Mesh Deformations (by Changxi Zheng and Doug L. James)

    Abstract:
    In this paper, we accelerate self-collision detection (SCD) for a deforming triangle mesh by exploiting the idea that a mesh cannot self-collide unless it deforms enough. Unlike prior work on subspace self-collision culling, which is restricted to low-rank deformation subspaces, our energy-based approach supports arbitrary mesh deformations while still being fast. Given a bounding volume hierarchy (BVH) for a triangle mesh, we precompute Energy-based Self-Collision Culling (ESCC) certificates on bounding-volume-related sub-meshes, which indicate the amount of deformation energy required for them to self-collide. After updating energy values at runtime, many bounding-volume self-collision queries can be culled using the ESCC certificates. We propose an affine-frame Laplacian-based energy definition which sports a highly optimized certificate preprocess and fast runtime energy evaluation. The latter is performed hierarchically to amortize Laplacian energy and affine-frame estimation computations. ESCC supports both discrete and continuous SCD, as well as detailed and nonsmooth geometry. We demonstrate significant culling on various examples, with SCD speed-ups of up to 26x.

    Gallery: the following images are courtesy of Changxi Zheng and Doug L. James.




  • Precomputed Acceleration Noise for Improved Rigid-body Sound (by Jeff Chadwick, Changxi Zheng and Doug L. James)

    Abstract:
    We introduce an efficient method for synthesizing acceleration noise—sound produced when an object experiences abrupt rigid-body acceleration due to collisions or other contact events. We approach this in two main steps. First, we estimate continuous contact force profiles from rigid-body impulses using a simple model based on Hertz contact theory. Next, we compute solutions to the acoustic wave equation due to short acceleration pulses in each rigid-body degree of freedom. We introduce an efficient representation for these solutions—Precomputed Acceleration Noise—which allows us to accurately estimate sound due to arbitrary rigid-body accelerations. We find that the addition of acceleration noise significantly complements the standard modal sound algorithm, especially for small objects.

    Gallery: the following images are courtesy of Jeff Chadwick, Changxi Zheng and Doug L. James.


13
Apr 12

Mitsuba 0.3.1 released

A new release of Mitsuba is available for download. This is the last stable release before version 0.4.0, which will introduce bidirectional features and a major overhaul of the system’s internals. It is mainly a bugfix release, which was necessary since a few rather serious problems had crept into version 0.3.0. Apart from bugfixes, there were also a couple of new features, and some announcements are to be made:
  • Reflectance models: The plastic, roughplastic, and roughcoating models have been completely overhauled. They are now reciprocal, and the rough variants support a texturable roughness.
  • Intersection shapes: For convenience, I’ve added two new intersection primitives: rectangle and disk. These do exactly what their names imply :).
  • Documentation: The documentation has seen further improvements and clarifications. While of course still far from complete, it’s getting a bit larger every day.
  • Wireframe texture: there is now a special texture that reveals the underlying mesh structure when attached to a triangle mesh.
  • Linear blend material: it’s now possible to interpolate between two arbitrary BSDFs based on a texture.
Two announcements:
  • Fluid solver: I’ve updated my homebrew fluid solver to generate volume data files that current versions of the renderer can handle. The download page also contains a new example scene with heterogeneous smoke generated by that program.
  • Forums: I’ve added forums to the bug tracker. You will need a valid tracker account to create forum posts.
The bugs resolved in version 0.3.1 are as follows:
  • Photon mapper: The photon mapper had some serious issues in the last release, which apparently didn’t keep people from using it :). These are now fixed, and it should run faster too.
  • Performance on Linux/x86_64: On Linux, the performance of the single precision exp and log math library functions is extremely poor when compiling for the x86_64 architecture. Why that happens (and is still the case in 2011) is a long and sad story. To circumvent this problem for now, Mitsuba falls back to the double precision functions on affected systems, which yields a noticeable performance improvement when rendering homogeneous or heterogeneous participating media.
  • Primitive clipping: When constructing the scene kd-tree with primitive clipping (a.k.a. “perfect splits”), some numerical issues caused holes to appear when the renderer was compiled in double precision. This likely never affected more than a handful of people, since the default builds all use single precision.
  • Adaptive integrator: The adaptive integrator interacted badly with certain sub-integrators—this is now fixed.
  • Geometry instancing: Instanced analytic shapes (e.g. spheres, cylinders) are now supported, and an error involving network rendering with instanced geometry has been fixed.
  • Scene corruption: Fixed a serious issue that could destroy a scene file when saving from a cloned tab!
  • Multi-screen setups: On Windows, Mitsuba had occasional problems with multi-screen setups, where windows could disappear to a position that was not covered by any of the screens. The GUI is now aware of the screen configuration and tries to place windows more intelligently.
  • OBJ texture coordinates: textures applied to an OBJ mesh used to be vertically flipped. This is now fixed.
  • Hair intersection shape: the construction of the hair kd-tree used to take a very long time; it is now considerably faster.
  • Texture lookups: Sean Bell found an off-by-one error that caused texture lookups with x=0 to wrap. This has been corrected in the new release.

8
Mar 12

Nori: an educational ray tracer

Apart from working on SIGGRAPH paper submissions, I’ve been helping my advisor Steve Marschner teach Cornell’s graduate-level graphics course CS6630 this semester. My job was to create a compact rendering framework that students can use (and understand) in a sequence of increasingly difficult assignments.

PBRT and Mitsuba were not an option: they are both much too large for a one-semester course, and all fun components are already “done” (i.e. not available for assignments). So we decided to make a new framework called Nori.

It is always tempting to stop working on a piece of code as soon as it produces good-looking results (which might still be subtly incorrect). An important design goal of Nori was to really force students to end up with the right answer. Thus, each assignment comes with a battery of statistical hypothesis tests to ensure correctness.



We use χ² goodness-of-fit tests to validate sample generation and Student’s t-tests to check whether the submitted Monte Carlo integrators converge to the right value.
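
To make this concrete, here is a minimal Python sketch of such a χ² test. It is not Nori’s actual test harness (Nori is written in C++); sample_fn, pdf_fn, the bin resolution, and the significance level are hypothetical stand-ins for a student’s sampling routine, its claimed solid-angle density, and the test parameters:

    import numpy as np
    from scipy.stats import chi2

    def chi2_sample_test(sample_fn, pdf_fn, n=100000,
                         theta_bins=10, phi_bins=20, alpha=0.01):
        """Chi-square goodness-of-fit test for a spherical sampling routine.

        sample_fn() returns (theta, phi) of one sampled direction;
        pdf_fn(theta, phi) is the claimed density per unit solid angle.
        """
        # Count observed samples over a regular (theta, phi) grid
        observed = np.zeros((theta_bins, phi_bins))
        for _ in range(n):
            theta, phi = sample_fn()
            i = min(int(theta / np.pi * theta_bins), theta_bins - 1)
            j = min(int(phi / (2.0 * np.pi) * phi_bins), phi_bins - 1)
            observed[i, j] += 1

        # Expected counts: integrate the claimed pdf over each bin (midpoint
        # rule here; a careful test would use proper quadrature and pool
        # bins with very low expected counts)
        dt, dp = np.pi / theta_bins, 2.0 * np.pi / phi_bins
        expected = np.zeros_like(observed)
        for i in range(theta_bins):
            for j in range(phi_bins):
                t, p = (i + 0.5) * dt, (j + 0.5) * dp
                # sin(t) is the spherical area element
                expected[i, j] = n * pdf_fn(t, p) * np.sin(t) * dt * dp

        # Chi-square statistic; the sampler fails if the p-value drops below alpha
        stat = np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-9))
        p_value = chi2.sf(stat, theta_bins * phi_bins - 1)
        return p_value >= alpha, p_value

A sampler whose histogram deviates from the integrated density by more than chance will fail such a test even when the rendered images look plausible, which is exactly the failure mode these assignments are designed to catch.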

Everything is licensed under the GPL and thus also available to instructors at other institutions. If you are interested, please get in touch.


30
Nov 11

Winter break

In anticipation of the SIGGRAPH deadline on January 17, the public Mitsuba releases are now entering their annual “winter break”. This means that there probably won’t be any new releases during that time, and I may be slow to respond to email or bug tracker entries.

That is not to say that I’m taking a break :). Some exciting changes will be coming to this project — stay tuned!


22
Sep 11

Heterogeneous volume datasets

Due to popular demand, I’m releasing two scenes that demonstrate how to render heterogeneous participating media with Mitsuba. The first one is a (relatively low-res) plume of smoke generated by my fluid solver.

Plume of smoke

The second example is the blue scarf dataset from our SIGGRAPH 2010 project. It demonstrates how to render with the anisotropic volume scattering framework presented in that paper, and it uses the micro-flake scattering model implemented in Mitsuba.

SIGGRAPH 2010 Scarf scene

Enjoy! :) You can find the scene files on the download page.


2
Sep 11

Mitsuba 0.3.0 released

I’ve just released Mitsuba 0.3.0! In hindsight, my previous announcement of a release within 1-2 weeks clearly turned out to be a bit too optimistic… But I hope that it was worth the wait. In addition to all the previously mentioned features, I’ve worked on the following changes:

Python integration: Mitsuba 0.3.0 comes with Python bindings. While the bindings don’t expose the full C++ API, they are already good enough for controlling the renderer, dynamically constructing scenes, and many other useful things. Chapter 11 of the documentation contains a basic overview and several “recipe”-type examples. My main motivation for adding bindings is to reduce the amount of work that is necessary to integrate Mitsuba into commercial modeling tools. I found that since I don’t do any modeling myself, I’m not particularly good at developing such plugins :). I hope that a solid Python API will lower the bar enough so that someone else can give it a shot and succeed.
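
To give a flavor of the bindings, the following condensed sketch follows the recipes in Chapter 11: it starts one worker per core, loads a scene, and renders it to a file. The class names match the documentation of this release, but treat the details as illustrative rather than authoritative, since the API is still evolving:

    import multiprocessing
    import mitsuba
    from mitsuba.core import Scheduler, LocalWorker, Thread
    from mitsuba.render import SceneHandler, RenderQueue, RenderJob

    # Start one worker per local core (required before any rendering)
    scheduler = Scheduler.getInstance()
    for i in range(multiprocessing.cpu_count()):
        scheduler.registerWorker(LocalWorker('worker-%i' % i))
    scheduler.start()

    # Load the scene via the current thread's file resolver
    fileResolver = Thread.getThread().getFileResolver()
    scene = SceneHandler.loadScene(fileResolver.resolve('scene.xml'))

    # Kick off an asynchronous render job and block until it finishes
    queue = RenderQueue()
    scene.setDestinationFile('output')
    job = RenderJob('render-job', scene, queue)
    job.start()
    queue.waitLeft(0)
    queue.join()

Scenes can also be assembled programmatically instead of being loaded from XML; the same chapter contains recipes along those lines.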

Integrator overhaul: Some of the integrators in Mitsuba (photon mapping variants, irradiance caching, subsurface integrators) had become somewhat broken due to the many changes that happened over the last months. All of them are fixed in the current release. In addition, the photon map-based integrators are now about twice as fast, a consequence of switching to a more optimized generic point kd-tree implementation.

Preetham sun/sky & Hanrahan-Krueger models: As part of his thesis on realistic rendering of snow, Tom Kazimiers has developed a great series of extensions to Mitsuba. I’m planning to eventually merge most of them. For now, I’ve added his implementation of the Preetham sun/sky model, as well as an implementation of the Hanrahan-Krueger BSDF (which is an analytic solution to single scattering in a layer of a homogeneous medium). Marios Papas greatly extended the H-K model implementation and verified its correctness against reference simulations.

API reference: A nightly process now creates API documentation for the most recent Mitsuba version. Take a look at http://www.mitsuba-renderer.org/api. The API documentation is also linked from the main Mitsuba web page.

Better builds: I’ve grown a bit tired of making release builds and packages for various platforms, since it tends to be a tedious manual process.

To reduce this burden, I’ve set up seven virtual machines that automatically download the latest version of Mitsuba, compile it, and upload packages for:

  • Windows (x86 and x86_64)
  • Mac OS 10.6+ (universal binary for x86 and x86_64)
  • Ubuntu 10.10 — maverick (x86_64)
  • Ubuntu 11.04 — natty (x86_64)
  • Debian 6.0 — squeeze (x86_64)
  • Fedora Core 15 (x86_64)
  • Arch Linux (x86_64)

This set should hopefully cover almost everyone (I’m assuming that any sane Linux user has made the switch to 64-bit at this point). Most of the packages also include development header files, which makes it possible to create custom Mitsuba plugins without ever having to compile the main codebase.

The Windows and Mac OS X builds are compiled using the Intel C++ compiler 12. This means that OpenMP now finally works on OS X! (Apple has been shipping a seriously broken version with their compiler for years.)

Since the entire build process is now automated, you can expect to see more frequent releases in the future — I might even switch to a nightly build system at some point.

Future license change: I intend to switch Mitsuba’s license to the more liberal LGPL license (a.k.a. the “lesser GPL”, or “library GPL”) at some point in the near future. This will essentially allow users to write proprietary plugins or link Mitsuba to external applications without having to release their source code. Only changes to the renderer itself would need to be made available. I believe that this is a good compromise between making the system available to a larger group of users, while benefiting from any improvements that are made. The transition will involve getting permission from a few more people, rewriting code, and replacing certain dependencies. For that reason, it might still take a few months. Stay tuned…

Better documentation coverage: A huge amount of work went into the documentation, which now covers many previously undocumented plugins. While still far from complete, it’s at a point where it can serve as a good starting point and reference for every-day use. Be sure to check it out if you are using or evaluating the renderer in any way.


6
Aug 11

Random picture of the day

Rough copper coated with a transparent varnish layer. Lit by the Preetham et al. sun & sky model.


17
Jul 11

Upcoming changes

For the last week, I have been hacking on a new release of Mitsuba (0.3.0), which will be publicly released in another week or two. Its main new feature is a complete redesign of the material system, specifically the surface scattering models (a.k.a. BSDFs). I’ve become increasingly unhappy with the state of this very important part of the renderer and finally decided to redesign it from scratch.

I have just merged most of these developments into the main branch. Since other researchers using Mitsuba might be concerned about the many changes, this post is meant to quickly summarize what is going on.

  • Spectral rendering: most of the code pertaining to spectral rendering has seen a significant overhaul. It is now faster and in certain cases more accurate.
  • Flexible material classes: the changes introduce a robust and very general suite of eight physically-based smooth and rough (microfacet-based) material classes. The smooth plugins are called diffuse, dielectric, conductor, and plastic. The equivalent rough microfacet models are called roughdiffuse, roughdielectric, roughconductor, and roughplastic. Please see the documentation link below for an overview of what these do.
  • Material modifiers: I have added two new “modifier” BSDFs, which change the behavior of a nested scattering model:
    • bump: applies a bump map specified as a grayscale displacement texture.
    • coating: coats an arbitrary material with a smooth and optionally absorbing dielectric layer in the style of [Weidlich and Wilkie 07].
  • Material verification: the sampling methods of all material models in Mitsuba are now automatically verified with the help of statistical hypothesis tests (using χ²-tests).
  • Generated documentation: there is now a javadoc-like system, which extracts documentation directly from the plugin source code and stitches it into a LaTeX reference document.
  • lookAt: Mitsuba inherited a bug from PBRT, where the <lookAt> tag changed the handedness of the coordinate system. This is now fixed—also, the syntax of this tag has changed to make it easier to read.
  • Scene portability: the above changes clearly introduce severe incompatibilities in the file format. Even the old lambertian plugin now has a different name (it was renamed to diffuse to better fit into the new scheme), so one would ordinarily expect that no old scene will directly work with this release. To address this problem, Mitsuba 0.3.0 introduces a version attribute to the scene. In other words, a scene XML file now looks something like the following:

    <scene version="0.3.0">...</scene>

    When it is detected that there are incompatibilities between the file version and the current release, Mitsuba will apply a set of included XSLT transformations, which upgrade the file to the most current file format (a sketch of this idea follows below).

    That way, scenes will always work no matter how old they are, while at the same time allowing large-scale changes to be made without the need for an (ugly) deprecation mechanism.
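
    To illustrate the mechanism, here is a minimal sketch in Python using lxml. This is not Mitsuba’s actual upgrade code (the renderer applies its bundled transformations internally), and the stylesheet file name is hypothetical; the point is simply that each upgrade step amounts to running the scene through an XSLT processor:

        from lxml import etree

        def upgrade_scene(scene_path, stylesheet_path):
            """Apply one version-upgrade step to a scene file (illustration only)."""
            scene = etree.parse(scene_path)
            upgrade = etree.XSLT(etree.parse(stylesheet_path))
            return upgrade(scene)

        # Hypothetical stylesheet name; in practice, one transform exists per
        # version step, and they are chained until the current format is reached
        result = upgrade_scene('scene.xml', 'upgrade_to_0.3.0.xsl')
        print(str(result))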

To upgrade to this release, simply pull from the main repository as usual (hg pull -u).

Note: you will need to update your config.py file with an appropriate file from the build directory, since there were some changes to the compilation flags.

For a peek at the upcoming documentation, take a look at the following PDF file:

Mitsuba 0.3.0 Beta Documentation