17
Jul 11

Upcoming changes

For the last week, I have been hacking on a new release of Mitsuba (0.3.0), which will be publicly released in another week or two. Its main new feature is a complete redesign of the material system, specifically the surface scattering models (a.k.a. BSDFs). I’ve become increasingly unhappy with the state of this very important part of the renderer and finally decided to redesign it from scratch.

I have just merged most of these developments into the main branch. Since other researchers using Mitsuba might be concerned about the many changes, this post is meant to quickly summarize what is going on.

  • Spectral rendering: most of the code pertaining to spectral rendering has seen a significant overhaul. It is now faster and in certain cases more accurate.
  • Flexible material classes: the changes introduce a robust and very general suite of eight physically-based smooth and rough (microfacet-based) material classes. The smooth plugins are called diffuse, dielectric, conductor, and plastic. The equivalent rough microfacet models are called roughdiffuse, roughdielectric, roughconductor, and roughplastic. Please see the documentation link below for an overview of what these do.
  • Material modifiers: I have added two new “modifier” BSDFs, which change the behavior of a nested scattering model:
    • bump: applies a bump map specified as a grayscale displacement texture.
    • coating: coats an arbitrary material with a smooth and optionally absorbing dielectric layer in the style of [Weidlich and Wilkie 07].
  • Material verification: the sampling methods of all material models in Mitsuba are now automatically verified with the help of statistical hypothesis tests (using χ²-tests).
  • Generated documentation: there is now a javadoc-like system, which extracts documentation directly from the plugin source code and stitches it into a LaTeX reference document.
  • lookAt: Mitsuba inherited a bug from PBRT, where the <lookAt> tag changed the handedness of the coordinate system. This is now fixed—also, the syntax of this tag has changed to make it easier to read.
  • Scene portability: the above changes clearly introduce severe incompatibilities in the file format. Even the old lambertian plugin now has a different name (it was renamed to diffuse to better fit into the new scheme), so one would ordinarily expect that no old scene will work directly with this release. To address this problem, Mitsuba 0.3.0 introduces a version attribute to the scene. In other words, a scene XML file now looks something like the following:

    <scene version="0.3.0">...</scene>

    When Mitsuba detects incompatibilities between the file version and the current release, it applies a set of included XSLT transformations, which upgrade the file to the most current file format.

    That way, scenes will always work no matter how old they are, while at the same time allowing large-scale changes to be made without the need for an (ugly) deprecation mechanism.
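The upgrade mechanism can be pictured as follows. Mitsuba itself applies bundled XSLT transformations; this Python sketch only illustrates the idea (the lambertian → diffuse rename is the one mentioned above, everything else here is hypothetical):

```python
# Illustrative sketch of version-based scene upgrading: read the scene's
# "version" attribute and apply plugin renames when the file predates the
# current release. Not Mitsuba's actual implementation.
import xml.etree.ElementTree as ET

CURRENT_VERSION = (0, 3, 0)
RENAMED_PLUGINS = {"lambertian": "diffuse"}  # the rename mentioned above

def parse_version(s):
    return tuple(int(x) for x in s.split("."))

def upgrade_scene(xml_text):
    root = ET.fromstring(xml_text)
    version = parse_version(root.get("version", "0.2.1"))
    if version < CURRENT_VERSION:
        for bsdf in root.iter("bsdf"):
            t = bsdf.get("type")
            if t in RENAMED_PLUGINS:
                bsdf.set("type", RENAMED_PLUGINS[t])
        root.set("version", ".".join(map(str, CURRENT_VERSION)))
    return ET.tostring(root, encoding="unicode")

old = '<scene version="0.2.1"><bsdf type="lambertian"/></scene>'
print(upgrade_scene(old))
```

Because the transformation is keyed on the version attribute, an already-current file passes through untouched.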

To upgrade to this release, simply pull from the main repository as usual (hg pull -u).

Note: you will need to update your config.py file with an appropriate file from the build directory, since there were some changes to the compilation flags.

For a peek at the upcoming documentation, take a look at the following PDF file:

Mitsuba 0.3.0 Beta Documentation


08
Jun 11

Server crash

Due to a server crash, this webpage was unavailable for the past two days.

For some reason, both the mainboard and hard disk broke down. There was a recent backup, hence nothing of serious importance was lost.


06
Jun 11

Rendering with style

Every once in a while, I need to create an image that truly takes a very long time to render in Mitsuba, to the point that I am simply not willing to wait that long. In some cases, this just means that an inefficient algorithm somewhere in the code had better be replaced. But in other cases, the algorithms are all fine, and it’s really the scene’s fault for being excessively complicated. Such circumstances leave me with only two choices: 1. I can adjust my expectations and simplify the scene, or 2. throw a huge amount of processing power behind Mitsuba, and make it go fast. In this blog entry, I will explain how to do the latter with the help of Amazon’s Elastic Compute Cloud (EC2).

Amazon Elastic Compute Cloud

The business model of EC2 is to rent on-demand processing power to individuals and companies. If you have a job that requires several Linux machines for a few hours, you’re in business. Probably the most common use of EC2 is to host web applications that scale dynamically. In that scenario, the web server is able to respond to heavy load situations by automatically buying additional EC2 nodes until the load reverts back to normal conditions.

EC2 can also be used to render images, and Amazon offers a particular kind of machine (in EC2 lingo: instance type) that is well-suited for this purpose. In particular, their c1.xlarge, or High-CPU Extra Large instances each have 8 cores and 7GB of RAM and currently cost about $0.68 per hour on the East coast. This price compares favorably to maintaining a compute cluster year-round, which only sees 100% use during a small portion of that time.

Going cheap

Although the normal EC2 prices are reasonable, it would be nice if there was a way to spend even less. One such approach involves buying idle capacity from Amazon based on a current “stock price” (EC2 lingo: spot price), which they assign to each kind of machine. The idea is as follows: one makes a bid for a certain amount of capacity, e.g. “I’m willing to run this job, if I can do it for less than $0.30 per hour and machine”. As soon as the spot price drops below the bid amount, the requested machines are booted up automatically. As long as they run, only the spot price is incurred (as opposed to the higher bid amount).

This spot price usually lies noticeably below the regular EC2 prices — for instance, as of this moment, a c1.xlarge machine on the East coast only costs ~$0.23 per hour. But here is the caveat: if at any time, the spot price exceeds the bid amount, your machines are turned off without so much as a warning (which obviously doesn’t work well for many kinds of workloads). It is worth noting that one only has to pay for every fully completed node hour in this case.

Since no irreparable damage occurs when a node disappears (other than having to redo the last still or animation frame), I usually prefer the cheaper spot price approach to having guaranteed availability.
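The billing rules described above (the spot price, not the bid, is charged per node-hour; an hour cut short by Amazon terminating the node is not billed, while self-terminated partial hours are rounded up) can be summarized in a short sketch — the function name is hypothetical and this is not Amazon's official billing logic:

```python
# Rough cost model for spot nodes, assuming a constant spot price:
# outbid nodes only pay fully completed hours, self-terminated nodes
# pay every started hour. Illustration only.
import math

def estimated_charge(hours_run, hourly_spot_price, terminated_by_amazon):
    if terminated_by_amazon:
        billable = math.floor(hours_run)  # partial last hour is free
    else:
        billable = math.ceil(hours_run)   # partial hours billed in full
    return billable * hourly_spot_price

# 16 nodes at ~$0.23/hr, outbid after 3.7 hours: only 3 full hours billed
print(16 * estimated_charge(3.7, 0.23, True))
```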

Running Mitsuba instances on EC2

Assuming that you’re signed up with EC2, you should be able to step through the following description to get Mitsuba up and running on a few machines and run a parallel render job. It is Linux/OSX-centric, hence the actual commands may differ a bit when doing this on Windows.

Legal disclaimer: Some of the following will cost actual money — while I have done thorough tests, I can make no guarantees on the correctness of the launcher script and the information provided here.

Before starting, make sure that you have a recent version of boto installed on your machine. (This is a Python Library for scripting EC2 services.) On Ubuntu, this can be done by entering

$ sudo apt-get install python-boto

After logging into the AWS Management Console, click on “Your Account” and “Security credentials”. Towards the bottom of this page, you should be able to see your Access Key ID, as well as the Secret Access Key. Make note of these two values.


Now, click on the EC2 tab in the AWS Management Console and add an inbound TCP rule to the default security group. The rule should open port 7554 without source restrictions (i.e. 0.0.0.0/0). Add another such rule for port 22 (for secure shell access).

Next, create a new key in the control panel labeled “Key Pairs”. The browser will prompt to save a .pem file, which you can save into a new directory (e.g. mitsuba-ec2) with a filename matching the key pair name (for instance: mitsuba.pem when the key pair was named mitsuba).

Now, create a clean copy of data/ec2/cluster.py from the Mitsuba distribution and place it into the same directory.

You will need to modify a few values at the top. In particular, the access key, key pair, and region fields all need to be filled out. When building a custom version of Mitsuba, you will also need to modify the PKG_REPOSITORY attribute to point to your own repository.

Now, we are almost ready to go. Open a terminal and navigate to the directory containing the modified EC2 launcher script. To set the correct permissions for the private key, execute the following command (replace mitsuba with the name of the key pair created earlier):

$ chmod og-rwx mitsuba.pem

For an overview of all supported commands, type

$ ./cluster.py

The following command allocates a specified number of spot nodes from EC2 and boots them with a stateless version of Ubuntu Maverick (64 bit).

$ ./cluster.py addSpotNodes [instance-type] [count] [bid] <group>

To get an idea of what to specify as a bid, it may be useful to look at the list of previous and current spot prices on the Cloud Exchange. To get (more expensive) regular machines with guaranteed availability, use the following command instead:

$ ./cluster.py addNodes [instance-type] [count] <group>

For instance, to purchase 16 c1.xlarge spot nodes (that’s 128 cores) with a max. bid of $0.30/hr each, enter

$ ./cluster.py addSpotNodes c1.xlarge 16 0.30 myGroup
Requesting 16 spot nodes of type c1.xlarge (group name = "myGroup",
  max. price=0.300)..
Done.

The last parameter designates the name “myGroup” to these 16 machines. To see whether your nodes have started successfully, enter

$ ./cluster.py status
Querying spot instance requests...
  sir-724e8411: status=open, price must be <= 0.300$
  .... (15 more)

When the bid is above the spot price and the requests have been fulfilled (this usually happens within a minute), this status will change to

$ ./cluster.py status
Querying spot instance requests...
  sir-724e8411: status=active, price must be <= 0.300$
  .... (15 more)

Querying instances ...
Nodes in group "myGroup"
===================
  ec2-50-17-103-151.compute-1.amazonaws.com is running (type:
    c1.xlarge, running for: 0d 0h 1m, internal IP: 10.86.31.233,
    spot request: sir-724e8411)
    .... (15 more)

These are perfectly standard Ubuntu machines — to obtain shell access to any one of them, pass the host name seen in the previous command to the login command, e.g.

$ ./cluster.py login ec2-50-17-103-151.compute-1.amazonaws.com

To quickly install Mitsuba on all nodes in parallel (time is money at this point!), run

$ ./cluster.py install myGroup
Sending command to node ec2-50-17-103-151.compute-1.amazonaws.com
...
0/16 nodes are ready.
...
16/16 nodes are ready.

This process usually takes about 30-60 seconds; a few harmless warnings may appear, which can be ignored. With Mitsuba installed on all machines, the last step is to create a rendering cluster. For this, execute

$ ./cluster.py start myGroup
Creating a Mitsuba cluster using the nodes of group "myGroup"
Sending command to node ec2-50-17-102-163.compute-1.amazonaws.com
....
15/15 nodes are ready.
All nodes are ready.
Creating head node ..
Done -- you can specify the head node
   "ec2-184-73-78-201.compute-1.amazonaws.com" in the Mitsuba
   network rendering dialog

The host name in this last command is the head node of the cluster. To save network bandwidth, the head node transparently provides access to all cores in the cluster without you having to create internet connections to 16 separate machines.

Default network topology

To use the cluster, simply add this machine in the rendering preferences of the Mitsuba GUI or provide it using the -c parameter when rendering from the command line interface.


(In case you are wondering why 2 cores appear to be missing in the above images — the launcher script leaves them idle on the head node to ensure that it has enough CPU and network I/O capacity to coordinate the rendering)

Once you are done, don’t forget to run

$ ./cluster.py terminateAll myGroup

to shut down the machines and stop any charges to your account (note: Amazon will bill you for any partially used hours).

This guide only covered the most basic use case; more advanced features are also supported. For instance, to do some serious rendering with volumetric datasets, I usually upload the (multi-gigabyte) volume data files to Amazon S3 ahead of time. After booting up a cluster, I use the syncData command to have all cluster nodes simultaneously download those files over the EC2-internal network (this does not incur any network charges). Another useful feature is that multiple users can simultaneously create Mitsuba clusters on a single account without interfering, as machines are always referred to using group names.


03
Jun 11

Mitsuba 0.2.1 released

After a long development cycle, I have just released a new version of Mitsuba. Please read on for a list of changes (these are in addition to the ones mentioned in this blog entry).

  • Participating Media: the most significant feature of this release is a complete redesign of the participating medium layer in Mitsuba. This change was necessary to remove limitations inherent in the previous architecture, which was overly complicated and could only support a single medium per scene. The new Mitsuba version handles an arbitrary number of media, which can be “attached” to various surfaces in the scene. For instance, rendering a bottle made of absorbing colored glass now involves instantiating an absorbing medium and specifying that it lies on the interior of the bottle’s glass surface.

    Apart from these changes, the new implementations are also significantly more robust, particularly when heterogeneous media are involved. In a future blog post, I will provide more detail on the rewritten participating media layer.

  • Micro-flake model: Mitsuba was used to create the high-resolution volumetric cloth renderings in the paper “Building Volumetric Appearance Models of Fabric using Micro CT Imaging” by Shuang Zhao, Wenzel Jakob, Steve Marschner, and Kavita Bala.

    This project was a *big* challenge for the micro-flake rendering code and led to many useful changes. For instance, the code previously made heavy use of spherical harmonics expansions to compute transmittance values, and to importance sample the model. For very shiny materials (such as the cloth models we rendered), this can become a severe problem due to ringing in the spherical harmonics representation. The rewritten model has fast and exact importance sampling code that works without spherical harmonics, and it uses a high-quality numerical approximation for the transmittance function.

  • Irawan & Marschner woven cloth BRDF: This release adds a new material model for woven cloth, which was developed by Piti Irawan and Steve Marschner. The code in Mitsuba is a modified port of a previous Java implementation. A few measured patterns shown below are already included as example scenes (many thanks go to Piti for allowing the use of his code and data!)

    This model relies on a detailed description of the material’s weave pattern, which is described with the help of a simple description language. For instance, the description of polyester lining cloth looks something like the following:

    weave {
      name = "Polyester lining cloth",
    
      /* Weave pattern description */
      pattern {
        3, 2,
        1, 4
      },
    
      /* Listing of all yarns used in the pattern (numbered 1 to 4) */
      yarn {
        type = warp,
        /* Fiber twist angle */
        psi = 0,
        /* Maximum inclination angle */
        umax = 22,
        /* Spine curvature */
        kappa = -0.7,
        /* Width and length of the segment rectangle */
        width = 1,
        length = 1,
        /* Yarn segment center in tile space */
        centerU = 0.25,
        centerV = 0.25
      },
      ....
    }

    For more details on this model, please refer to Piti Irawan’s PhD thesis.

    Due to its performance and expressiveness, I believe that this model is of genuine utility to a larger audience and hope that including it in Mitsuba will increase its adoption.

    A cool feature that I might add in the future is an interactive editor to design new pattern descriptions with a live preview.

  • Amazon EC2: This release adds a launcher script to create virtual render farms on the Amazon Elastic Compute Cloud (EC2). This is very useful when rendering time is critical, since EC2 can give you essentially infinite parallelism. I will write more on how this works in a separate post.

  • Blender Plugin: Due to its experimental nature, Blender 2.5x has been a bit of a moving target, making it difficult to develop stable plugins. Recently, a large batch of changes broke many plugins, particularly custom rendering backends. Since then, I have been working on restoring compatibility with Blender 2.56, which is mostly complete at this point. Some work remains to be done, hence I will release the final Blender plugin in a few days.

  • Build system: The build system has undergone several cleanups:

    1. Binaries are now placed in a separate directory instead of being co-located with the source code. (build/release or build/debug depending on the type of build)
    2. The distribution now supplies project files for Visual Studio 2008 and Visual Studio 2010 with support for code-completion, debugging, etc.
    3. Builds with Visual Studio 2010 now work correctly (they used to be prone to crashes), and the release adds support for the Intel C++ compiler 12 on Windows.
    4. The compilation flags for the Intel C++ compiler have been adjusted so that the binaries also run on some older AMD hardware that doesn’t support SSE3.
    5. The config directory was removed.

    To upgrade to this version without making a mess of your repository, I recommend to clean before updating, i.e.

    $ scons -c
    $ hg pull -u
    

    If you forget that step, old .obj/.os files and other build products will litter your source tree. In that case, it might be easiest to check out a clean copy.

    If you are on Windows or OSX, note that you must also update the dependencies repository.

  • Beam Radiance Estimate: Mitsuba now contains an implementation of the Beam Radiance Estimate to accelerate Volumetric Photon Mapping within homogeneous participating media (scene courtesy of Wojciech Jarosz).

  • COLLADA: Previously, the import of very large scenes using COLLADA failed when the associated XML document contained text nodes that were larger than 10 megabytes. I submitted a patch to fix this in the COLLADA-DOM library, which was recently accepted. Mitsuba now ships with this version of the library.

  • Rotation Controller: several people commented that the interactive preview navigation was rather unintuitive. I have now added a rotation controller that will be more familiar to people using Maya or Blender. Dragging the mouse while pressing the left button rotates around a fixed point. The right mouse button & mouse wheel move along the viewing direction, and the middle mouse button pans. Press ‘F’ to zoom to the currently selected object and ‘A’ to focus on the whole scene. Note that the previous behavior can still be re-activated through the program preferences.

  • Miscellaneous: this release adds code to perform adaptive n-dimensional integration (based on the cubature project), as well as a chi-square test for verifying sampling methods. In the future, these will be used to implement an automatic self-test of all scattering models within Mitsuba.

    As always, the release also contains a plethora of bugfixes, which won’t be listed in detail.
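The chi-square verification mentioned above can be illustrated with a toy example. Mitsuba's actual test operates on scattering models over the sphere; this sketch (hypothetical names throughout) merely shows the idea of binning samples and comparing observed to expected counts:

```python
# Toy chi-square check of a sampler: draw samples, histogram them, and
# accumulate (observed - expected)^2 / expected over the bins. A correct
# sampler yields a small statistic; a broken one yields a large value.
import random

def chi_square_statistic(sampler, pdf_bin, n_bins=10, n_samples=10000):
    observed = [0] * n_bins
    for _ in range(n_samples):
        x = sampler()  # expected to return values in [0, 1)
        observed[min(int(x * n_bins), n_bins - 1)] += 1
    stat = 0.0
    for i in range(n_bins):
        expected = pdf_bin(i, n_bins) * n_samples
        stat += (observed[i] - expected) ** 2 / expected
    return stat

random.seed(1)
stat = chi_square_statistic(random.random, lambda i, n: 1.0 / n)
# With 9 degrees of freedom, a statistic far above ~21.7 (the 99%
# quantile) would indicate a broken sampler.
print(stat)
```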


02
Jun 11

New Blog

In case you are wondering why this blog suddenly looks so different: I have moved it to WordPress after continuously running into problems with the previous system.

On another note: today or tomorrow, a long-overdue release of Mitsuba will be released. Stay tuned!


03
Feb 11

A few changes

I’ve just committed a batch of changes that have piled up on my machine. Most are architectural improvements plus a few bugfixes. Here is a list of changes:

  • Switched to a cleaner build system organization (1 SConscript file per directory instead of a single huge file)
  • Robustness improvements to the KD-tree construction code: it now does a better job at handling degenerate triangles
  • Switched to an epsilon-free KD-tree traversal loop based on Havran’s T_{AB}^{rec} algorithm
  • Generalization of the KD-tree construction code (now supports plugging in arbitrary tree construction heuristics)
  • Addition of some utility code (a LRU cache, adaptive Gauss-Lobatto quadrature, etc.)
  • Switched to a generic dense matrix class that supports arbitrary dimensions
  • Pixel traversal within image blocks now uses a space-filling curve ordering
  • Added support for several noise functions from PBRT
  • Cleanups of various top-level interfaces (Luminaires, Phase functions, BSDFs, Participating media, etc.)
  • Robustness improvements to the participating media code (faster + now does a better job at dielectric boundaries)
  • Added a basic tonemapping utility (can be invoked via mtsutil)
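The space-filling curve ordering mentioned above keeps successively rendered pixels close together in the image block. The post does not say which curve Mitsuba uses; a Z-order (Morton) curve is one common choice and easy to sketch:

```python
# Z-order (Morton) pixel traversal sketch: interleave the bits of x and
# y into a single key, then visit pixels sorted by that key. Nearby keys
# correspond to spatially nearby pixels, which improves cache coherence.
def morton_key(x, y):
    key = 0
    for bit in range(16):  # enough for blocks up to 65536 pixels wide
        key |= ((x >> bit) & 1) << (2 * bit)
        key |= ((y >> bit) & 1) << (2 * bit + 1)
    return key

def traversal_order(width, height):
    pixels = [(x, y) for y in range(height) for x in range(width)]
    return sorted(pixels, key=lambda p: morton_key(p[0], p[1]))

print(traversal_order(4, 4)[:4])  # → [(0, 0), (1, 0), (0, 1), (1, 1)]
```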

28
Nov 10

Entering SIGGRAPH submersion mode

As the SIGGRAPH submission deadline is drawing closer, I might not respond to support requests / bug reports over the next 6 weeks. Please don’t be put off — Mitsuba development will pick up at the usual pace once the deadline is over, but for now I must concentrate my efforts on the submission.


23
Nov 10

Mitsuba 0.2.0 released

After a long stretch of time, I’m releasing a new version of Mitsuba. The delay was due to numerous architectural changes within the renderer. Most of these will not be visible when using the application, but they should result in faster rendering performance. Here is a breakdown of the high-level changes:

  • The COLLADA importer is more robust and should handle most scenes (hm, this sounds familiar). Rather than generating hundreds of translated mesh files, the new version instead produces one single compressed file.
  • I’ve added an experimental plugin for Blender 2.5 integration, including a custom material designer. Since it depends on features which won’t be in Blender until the upcoming 2.56 release, it is currently necessary to compile Blender from SVN to use the plugin. Many thanks go to Doug Hammond for providing his excellent EF package, which the plugin uses extensively.
  • Jonas Pilo has contributed a very nice test scene, which is currently used as an interactive preview in the material designer of the Mitsuba Blender plugin (see this video for an example of what this looks like)
  • The KD-tree acceleration and construction code has been completely rewritten. The new code produces noticeably better trees and does so within a fraction of the time of the old version. It also scales to very large polygonal meshes (>30M triangles), whereas the previous implementation would quickly exhaust all available memory in such cases. (see this blog entry for details)
  • Instancing support was added, and there is limited (rigid) animation support for shapes.
  • Edgar has kindly contributed patches to compile Mitsuba using the Intel C++ compiler. Official Windows 32-/64-bit builds now use this compiler, since it produces faster-running executables (in comparison to Visual Studio).
  • The XML schema of the scene description language is now less picky. Specifically, it is possible to specify properties and objects in an arbitrary order.
  • Standard UV texture mapping controls (offset, scale) are provided
  • Luminaire importance sampling is more flexible. The previous implementation sampled a light source proportional to its power, which was often exactly the wrong thing to do. In this release, the sampling weights can be manually specified.
  • There is partial support for rendering vast amounts of hair (partial because only the intersection shape is implemented at this point — no hair-specific scattering models have been added yet)
  • A PLY file loader based on libply (courtesy of Ares Lagae) was added
  • Vertex colors are now accessible within the renderer. This is implemented using a special “texture”, which forwards the color information to scattering models
  • Severe lock contention issues in the irradiance cache were fixed (these resulted in slow performance when rendering on many cores).
  • The loading dialog now contains a console, which shows what is happening while waiting for a large scene to load
  • The built-in environment map luminaire now supports importance sampling (it used uniform sampling before – yikes!)
  • A bunch of materials and textures now have GLSL implementations so that they can be used in the interactive preview
  • The preview itself should be quite a bit faster due to optimizations in how geometry is passed to the GPU
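The manually specified luminaire sampling weights mentioned above boil down to inverting a discrete CDF over the light sources. A sketch with a hypothetical helper (this is not Mitsuba's API):

```python
# Discrete sampling of a light source from manual weights: build a
# normalized CDF, then invert it with a uniform number in [0, 1).
import bisect

def make_luminaire_sampler(weights):
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    def sample(u):
        i = bisect.bisect_left(cdf, u)  # first bin whose CDF covers u
        return i, weights[i] / total    # (light index, discrete pdf)
    return sample

sample = make_luminaire_sampler([1.0, 3.0])  # favor the second light 3:1
print(sample(0.5))  # → (1, 0.75)
```

The returned pdf is what a renderer would divide by to keep the estimator unbiased, which is why arbitrary (even "wrong") weights still converge to the correct image.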

As usual, a large number of bugs were also fixed. The documentation is still rather incomplete, but I’m working on it.


20
Nov 10

Build system changes

I’ve just committed a few build system-related changes, which should help to prevent some major headaches in the future. These changes require four (simple) steps from anyone who is currently building from source on either Windows or OSX:

  1. Update to the newest repository version (hg pull -u)
  2. Check out a copy of the ‘dependencies’ repository at https://www.mitsuba-renderer.org/hg/dependencies and place it into the Mitsuba directory (e.g. C:\Mitsuba\dependencies), i.e.

    cd C:\mitsuba
    hg clone https://www.mitsuba-renderer.org/hg/dependencies

  3. Replace your ‘config.py’ file with a fresh copy from the ‘config’ directory. Note that a few more options exist now — specifically on Windows, the Intel C++ compiler is now supported (thanks, Edgar!).
  4. Remove the ‘mitsuba/tools’ directory (if it still exists after the last three steps).

14
Oct 10

New acceleration data structures

The upcoming release of Mitsuba will incorporate a few rather big changes under the hood. The most significant one is that a fairly large piece of code, the kd-tree construction and traversal logic, has been rewritten from scratch.

There were a few reasons for doing so: the old code produced reasonable kd-trees, but it was quite slow, used excessive amounts of memory, and only ran on a single processor. Of those three, the memory requirement was perhaps the most problematic: if each triangle causes a temporary overhead of about 20× its storage cost, big scenes become a problem. For example, my laptop would just crash when I tried to load the 28 million triangle Lucy mesh from Stanford.

Part of the problem lies with the commonly used algorithm for constructing kd-trees: it represents shapes using abstract “bounding box events”. Even with various optimizations, these events alone take up so much memory that one might run out of it without having generated a single kd-tree node.

So what has changed?

  • The new implementation uses an approximate kd-tree building strategy (“Min-Max binning”) to generate the top of the kd-tree, which avoids creating massive amounts of bounding box events. Once geometry has been partitioned into groups of fewer than 64K elements, the more accurate traditional O(N log N) builder takes over. An extra benefit of this approach is that it leads to much more coherent memory access patterns.
  • The build process now uses a custom memory allocator, which is specifically optimized for the allocation/deallocation sequences seen during kd-tree construction.
  • The construction runs in parallel: independent subtrees are assigned to different cores as they become available. This scales quite well, since most of the cost in constructing trees is at the leaves.
  • A rather wasteful data structure for triangle classification has been packed so that it only needs two bits per entry.
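The min-max binning strategy from the first point can be sketched in one dimension. This is an illustration only, not Mitsuba's implementation: each primitive's bounding interval contributes to one histogram of minima and one of maxima, and running sums over those histograms give left/right primitive counts at every candidate split plane without materializing per-primitive events:

```python
# 1D min-max binning sketch: for each candidate split plane (the right
# edge of each bin), derive (left, right) primitive counts from two
# histograms instead of sorted per-primitive "events".
def min_max_bin_counts(intervals, lo, hi, n_bins=8):
    width = (hi - lo) / n_bins
    def bin_of(x):
        return max(0, min(n_bins - 1, int((x - lo) / width)))
    min_bins = [0] * n_bins
    max_bins = [0] * n_bins
    for a, b in intervals:
        min_bins[bin_of(a)] += 1
        max_bins[bin_of(b)] += 1
    counts, left, right = [], 0, len(intervals)
    for i in range(n_bins):
        left += min_bins[i]   # primitives starting at or before this plane
        right -= max_bins[i]  # primitives ending inside bin i leave the right side
        counts.append((left, right))
    return counts

# Two primitives spanning [0, 0.9] and [3, 3.5] inside the range [0, 8]:
print(min_max_bin_counts([(0.0, 0.9), (3.0, 3.5)], 0.0, 8.0))
```

A surface area heuristic (SAH) cost evaluated at each of these planes then selects the split, so the approximate top-level build never needs per-primitive event lists.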

The best part: the resulting trees are also of much higher quality than those made by the previous implementation, which gives a 20%-30% speedup essentially for free.

When running the builder on standard scenes, the final SAH costs (a kd-tree quality metric) now come within 1% of those listed in the seminal paper “On building fast kd-Trees for Ray Tracing, and on doing that in O(N log N)” by ray-tracing gods Ingo Wald and Vlastimil Havran. So as far as optimization goes, I think I’ve reached the limit of what is possible using this method.

These changes make it possible to use Mitsuba for interactive visualizations of very large scenes, such as the UNC Powerplant (a popular benchmark scene with 13 million triangles).

Power plant scene

 


21
Sep 10

Material test scenes

The next release of Mitsuba will include documentation on many of the available plugins, especially materials. For this purpose, I’m currently modeling a material test scene to demonstrate the effects of various parameters in an intuitive way. Caution: programmer art follows!

 

As you can see, I’m rather bad at modeling and therefore open to any suggestions :)

 


08
Sep 10

Mitsuba 0.1.3 released

Builds for a new version of Mitsuba are now available for download. This is mainly a bugfix release to address a serious regression in the material system. Other notable changes are:

  • Imported scenes now store relative paths
  • OBJ importing works on Windows
  • Realtime preview (OpenGL + RTRT) fixed for point sources
  • The anisotropic Ward BRDF is now supported in the preview
  • Faster texture loading
  • The renderer now has a testcase framework similar to JUnit

07
Sep 10

Bug tracker

I’ve set up a public bug tracking system at the following address:

https://www.mitsuba-renderer.org/bugtracker

Please create a report on this website whenever you find a bug in Mitsuba. It will be easiest to fix if you can also attach a scene file that reproduces the error.


03
Sep 10

Mitsuba 0.1.2 released

Builds for a new version of Mitsuba are now available for download. Highlights of this release are:

  • Numerous bugfixes
  • Significantly improved COLLADA importer: it should now be able to handle most scenes from Maya, 3ds max, Blender and SketchUp
  • Basic graphical user interface for running the importer
  • Initial documentation, including some on the import process (available on the documentation page)
  • Support for environment sources in the realtime preview
  • When pressing the stop button while rendering, the partially rendered scene now remains on the screen. Pressing the stop button a second time switches back to the realtime preview
  • The user interface now has a fallback mode when the graphics card is lacking some required OpenGL features.
  • Create default cameras/lightsources if none are specified in a scene
  • Support for file drag & drop onto the user interface
  • The Mitsuba user interface now also doubles as an EXR viewer / tonemapper. Drag an EXR file onto the UI or open it using the File menu, and the image opens in a new tab. Afterwards, it is possible to export the image as a tonemapped 8-bit PNG image.
  • The realtime preview now has a ‘force diffuse’ feature to improve convergence in scenes with lots of glossy materials.
  • Two different navigation modes can now be chosen in the program settings
  • New material types: composite, difftrans, transparent, mask.
  • ldrtexture: support for loading uncompressed BMP and TGA images.
  • Switch to Xerces-C++ 3
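One of the points above mentions exporting an EXR image as a tonemapped 8-bit PNG. The exact operator Mitsuba applies isn't stated here, so the following sketch (function name hypothetical) uses a standard exposure-and-gamma mapping:

```python
# Sketch of the export step: map a linear HDR pixel value to an 8-bit
# output value using exposure scaling and gamma correction, clamping to
# the displayable range. One standard choice, not Mitsuba's exact operator.
def tonemap_pixel(value, exposure=1.0, gamma=2.2):
    v = max(0.0, value * exposure) ** (1.0 / gamma)
    return min(255, int(v * 255 + 0.5))  # round and clamp to 8 bits

print([tonemap_pixel(v) for v in (0.0, 0.18, 1.0, 4.0)])
```

Values above 1.0 in the linear input simply saturate at 255, which is why an exposure control is needed to bring interesting detail into range before quantization.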

16
Aug 10

Utility launcher, COLLADA import & documentation

When working on a larger project, one often needs to implement various utility programs that perform simple tasks, such as applying a filter to an image or processing a matrix stored in a file. In a framework like Mitsuba, this unfortunately involves a significant coding overhead in initializing the necessary APIs on all supported platforms. To spare the programmer this tedious work, the Mitsuba repository now contains a utility launcher called mtsutil.

This binary loads simple “utility plugins”, which can now be as small as a few lines of code. Because mtsutil takes care of initializing the whole framework (including network connections to remote compute nodes), they provide a convenient solution to parallelize non-rendering workloads over many machines. The documentation contains a code example on how to do this.

Other changes in the repository are a much-improved COLLADA importer and import capabilities from within the GUI. At this point, it should be able to handle most scenes exported by Maya or Blender (I haven’t tested other programs yet).


11
Aug 10

First documentation draft

Amongst other things I’m currently writing some documentation for the renderer. An incomplete first draft can be found here for those who are interested: http://www.mitsuba-renderer.org/documentation.pdf.


09
Aug 10

Mitsuba blog created

To better coordinate changes and developments in the Mitsuba renderer, I’ve decided to set up a developer blog. My goal is to write about the current state of development every few weeks, or more often when necessary. If you’re interested in participating, please let me know and I’ll add you to the blog.

There is now also a Mercurial repository: https://www.mitsuba-renderer.org/hg

To check it out, install Mercurial and then simply run

$ hg clone https://www.mitsuba-renderer.org/hg/mitsuba

To get acquainted with this system, I can recommend this very useful (and free) book: Mercurial: The Definitive Guide