Manifold exploration talk now online

I’ve just uploaded a video capture of my talk on Manifold Exploration, for those who didn’t see it at SIGGRAPH:

This will all be available in the next version of Mitsuba, which is close to release. The last remaining item that I’m now working on is an improved version of the real-time preview, which hopefully won’t take more than 1-2 weeks.


  1. It’s heartwarming to know that 3D is still evolving with such leaps. It’s also amazing that this initiative was not born inside commercial, secretive offices. I’m only afraid that it is too complicated to make its way into open mass use, like Blender.

  2. “I’m only afraid that it is too complicated to make its way into open mass use, like Blender” – but Blender can communicate with Mitsuba, so that is not a problem; all that needs to happen is for inter-program communication to improve.

  3. Very nice work WJ.
    Extremely clear and understandable. Mitsuba is such a cool project and, as Konstantins says, it is very nice that you share your findings and explorations so that we too can learn from them!

  4. Very impressive work, Wenzel! I’m really looking forward to using Mitsuba as my production tool. Hoping to see more. Thanks for sharing.


  5. Wenzel, amazing stuff. You manage to explain it
    in a way that is understandable even for someone like me, lacking the mathematical knowledge.
    Can I ask what you used to make the 2D demo explorations?

  6. Can’t wait for the next release of Mitsuba. What other improvements are going to be made besides the previewer and adding manifold exploration? And is there a plan for using the GPU for rendering? Or is GPU support just something a third party would have to do?

    • There are no plans to incorporate GPU rendering (other than the preview) in the near future. This is something that’s massively over-hyped in my opinion.

      • Okay, thank you, just wondering. I have a laptop with a fairly good GPU, so I was just curious about it. May I ask why it is over-hyped?

        • The performance advantages are exaggerated and comparisons against CPU implementations are often outrageously unfair (e.g. showing a benchmark between a naive single-threaded exact CPU implementation and heavily optimized approximate GPU code).

          The programming model and architecture also limit the complexity of algorithms that can be implemented on them. Certain very “random” methods like MLT are just not all that well-suited for GPUs, which rely on coherence between independent pieces of work. This is in addition to the incredible amount of work that it would take to even get the algorithm working on a GPU (and maintaining that over the next years).

          In a few years, the architecture and programming models may have evolved to be more useful. But at this point, a project like Mitsuba has little to gain from them.
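          The coherence point above can be illustrated with a toy model (this is purely illustrative and has nothing to do with Mitsuba’s actual code): on SIMT hardware, lanes within a warp that take different branches are executed serially, so the per-step cost grows with the number of distinct branches taken. Coherent workloads (e.g. similar camera rays) take one branch; independent random walks, as in MLT-style methods, diverge almost everywhere.

```python
import random

def warp_cost(branch_choices):
    """Toy cost of running one SIMD warp: at each step, lanes taking
    different branches are serialized, so the step costs as many units
    as there are distinct branches among the lanes."""
    steps = zip(*branch_choices)  # transpose: per-step choices across lanes
    return sum(len(set(step)) for step in steps)

lanes, steps, n_branches = 32, 100, 4

# Coherent workload: every lane takes the same branch at every step.
coherent = [[i % n_branches for i in range(steps)]] * lanes

# Incoherent workload: each lane branches independently at random,
# loosely mimicking independent Markov chain mutations.
random.seed(0)
incoherent = [[random.randrange(n_branches) for _ in range(steps)]
              for _ in range(lanes)]

print(warp_cost(coherent))    # 100 units: one branch per step
print(warp_cost(incoherent))  # roughly 4x higher: near-full serialization
```

          With 4 possible branches and 32 lanes, the incoherent warp almost always has to execute all 4 paths per step, quadrupling the cost; real divergence penalties depend on the hardware, but the mechanism is the same.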

          • Okay that makes sense to me. Thanks for the answer :)

          • That’s interesting…

            General Purpose GPU sounds like a bit of an oxymoron to me anyway, because the more general purpose they become, the more like a CPU they will be.

Leave a comment