Tool Development

A small excerpt of my tool, plugin and script development work.

Nuke Plugin Prototype for Physically Correct Depth-of-Field Simulation

For my Master’s Thesis I investigated existing depth-of-field simulation methods for VFX applications, considering both object- and image-space approaches. As part of this work, a prototypical Nuke plugin was developed in C++.

Most available image-based depth-of-field simulation approaches do not allow for spatially varying bokeh shapes, so effects like “cat’s eye bokeh” cannot be simulated with these methods. The prototypical Nuke plugin adds this functionality by interpolating PSFs.

Principle of Operation

The principle of operation is based on research by Thomas Hach et al. as presented in “Cinematic Bokeh rendering for real scenes”.

For the prototype, point-spread functions (PSFs) were manually acquired with a photo camera for a given lens, sensor, and a small LED light source at a fixed focus distance. At fixed interval distances from the camera, a center and an outer PSF were captured.


The plugin prototype reads these PSFs from disk and interpolates them in order to synthesize a PSF for each pixel in an input image at a given distance from the camera. The image below shows such an interpolation result: the PSFs in the green circles are the acquired PSFs, while all others were synthesized through interpolation.
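The interpolation can be sketched as a bilinear blend across the two measured axes, depth and radial position. This is a minimal illustration assuming the PSFs are loaded as equally sized float arrays; the actual plugin is implemented in C++ inside Nuke.

```python
import numpy as np

def interpolate_psf(psf_near_center, psf_near_outer,
                    psf_far_center, psf_far_outer,
                    depth_t, radial_t):
    """Bilinearly blend four measured PSFs into one synthesized PSF.

    depth_t  -- 0..1 position between the near and far measurement distances
    radial_t -- 0..1 position between the image center and the outer PSF
    All PSFs are float arrays of identical shape.
    """
    near = (1.0 - radial_t) * psf_near_center + radial_t * psf_near_outer
    far = (1.0 - radial_t) * psf_far_center + radial_t * psf_far_outer
    psf = (1.0 - depth_t) * near + depth_t * far
    return psf / psf.sum()  # renormalize so the kernel preserves energy
```

Normalizing the result keeps the blended kernel energy-preserving, so the subsequent convolution does not brighten or darken the image.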


The interpolated PSFs are then used as filter kernels for convolution. Two different approaches were tested for this thesis: the gathering method and the spreading method. The image below shows the results of both methods as straightforward implementations. Spatially varying bokeh shapes can be seen in the starry background; these are the result of the PSF interpolation. Both images show severe depth-discontinuity and intensity-leakage artefacts. The 3D models and textures in the test images, showing the ISS and the Kepler observatory, were obtained from NASA’s extensive 3D resource library. Images and depth pass were rendered with Mental Ray for Autodesk Maya.
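The gathering method above can be sketched as follows: each output pixel collects contributions from its neighborhood, weighted by the kernel synthesized for that pixel. This naive per-pixel form, assuming a hypothetical `psf_for` callback that returns the interpolated kernel, also reproduces the artefacts mentioned above, since each pixel blurs with a single kernel regardless of depth edges.

```python
import numpy as np

def gather_blur(image, depth, psf_for):
    """Naive gathering convolution with spatially varying kernels.

    psf_for(y, x, d) -- callable returning a normalized (k, k) kernel
    for pixel (y, x) at depth d. A stand-in for the PSF interpolation.
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            k = psf_for(y, x, depth[y, x])
            r = k.shape[0] // 2
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        out[y, x] += k[dy + r, dx + r] * image[yy, xx]
    return out
```

The spreading method inverts the loop: each input pixel distributes its intensity through its own PSF, which changes how energy leaks across depth discontinuities.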


Java Tool for Rendering Polychromatic Point-Spread-Functions with PBRT

For my Master’s Thesis in Computer Science in Media I developed a rendering tool to generate polychromatic point-spread functions (PSFs) with PBRT v3. The aim was to see whether PSFs ray-traced through a physical lens-system representation deliver results realistic enough to be used as filter kernels for image-based depth-of-field simulation.

The tool lets the user select various parameters such as the camera sensor, the lens, the aperture, the spectral power distribution of the light source, the spectral sensitivity of the sensor, and many others. It then renders monochromatic PSFs with PBRT and combines all renders into a single polychromatic PSF.
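The combination step can be sketched as a weighted sum: each monochromatic render is scaled by the light source’s power and the sensor’s RGB sensitivity at that wavelength. The `spd` and `sensitivity_rgb` callables are illustrative assumptions; the thesis tool reads these curves from measured data.

```python
import numpy as np

def combine_polychromatic(mono_psfs, wavelengths, spd, sensitivity_rgb):
    """Combine monochromatic PSF renders into one RGB (polychromatic) PSF.

    mono_psfs        -- list of (h, w) float renders, one per wavelength
    wavelengths      -- matching list of wavelengths in nm
    spd(nm)          -- relative power of the light source at nm
    sensitivity_rgb(nm) -- length-3 spectral sensitivity of the sensor
    """
    h, w = mono_psfs[0].shape
    rgb = np.zeros((h, w, 3))
    for psf, nm in zip(mono_psfs, wavelengths):
        weight = spd(nm) * np.asarray(sensitivity_rgb(nm))  # per-channel weight
        rgb += psf[:, :, None] * weight[None, None, :]
    return rgb / rgb.max()  # normalize for display
```

In practice the wavelength samples should cover the visible range densely enough that chromatic aberration in the lens shows up as color fringing in the combined PSF.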


Houdini: Adaptive Subdivision HDA

The Adaptive Subdivision HDA is a Houdini digital asset that allows the artist to dynamically subdivide specified portions or areas of a polygon geometry with different levels of subdivision. These subdivisions can also be animated over time. The artist has full control over the amount and distribution of subdivisions, which can be controlled by a float ramp in real time. Subdivisions can easily be manipulated later.

The HDA was created using visual programming with Houdini nodes, supported by short snippets of VEX code. In addition to Houdini’s built-in nodes, the qLib digital asset library was used.
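The ramp-driven control can be sketched as a small function that maps a point’s normalized distance from the area of interest through the ramp to a discrete subdivision level. This is a hypothetical Python stand-in for the HDA’s VEX/ramp logic, with the ramp modeled as sorted (position, value) knots.

```python
def subdivision_level(dist, ramp, max_level):
    """Map a normalized distance (0..1) from the area of interest
    through a user ramp to a discrete subdivision level.

    ramp -- sorted list of (position, value) knots, values in 0..1.
    A simplified stand-in for Houdini's float ramp parameter.
    """
    if dist <= ramp[0][0]:
        v = ramp[0][1]
    elif dist >= ramp[-1][0]:
        v = ramp[-1][1]
    else:
        # linear interpolation between the two surrounding knots
        for (p0, v0), (p1, v1) in zip(ramp, ramp[1:]):
            if p0 <= dist <= p1:
                t = (dist - p0) / (p1 - p0)
                v = v0 + t * (v1 - v0)
                break
    return round(v * max_level)
```

Points near the area of interest (distance 0) receive the maximum level, while distant points fall off to zero according to the ramp shape.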

The initial problem

The tool was designed for a show that required highly detailed simulations of water surfaces and water interaction. Due to the large scale of the scenes, a high level of detail in terms of polygon subdivision is needed in the areas of interest for the ocean simulations and other surface deforming operations such as a ripple effect to be able to show good and clean results. Subdividing the entire grid to a usable level would however result in a very high poly count, blowing up scene files, simulation times, export geometry file sizes as well as alembic export speed and render times and would also have a significant impact on viewport and overall performance. It would also result in a high level of detail in areas of lesser importance that might be far in the distance, blurred by depth of field or not even visible to the camera at all. This problem asked for a solution to dynamically subdivide the grid only in the necessary areas, leaving the rest of the grid as low poly as possible.


Houdini: Decrypter Font Node


The Decrypter Font node is a Python-driven Houdini Digital Asset that extends Houdini’s existing Font node with pseudo encrypt and decrypt functionality. It was originally intended as an asset for HUD and GUI animation inside Houdini, for example to visualize the hacking of a password within an existing Houdini animation, without having to resort to other software such as After Effects. Python was chosen for this node because it brings many convenient string functions to the table, which enabled a large feature set.
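The core of such a decrypt effect can be sketched in a few lines of Python: given an animation progress value, characters left of the reveal point show the real text while the rest cycle through random glyphs. This is a minimal illustration of the effect, not the HDA’s actual code; in Houdini, `progress` would typically be driven by a channel or frame expression.

```python
import random

def decrypt_frame(target, progress,
                  charset="ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789", seed=None):
    """Return one frame of a pseudo-decrypt text animation.

    progress -- 0..1; characters before the reveal point show the real
    text, the rest show random glyphs. Spaces are kept as-is.
    """
    rng = random.Random(seed)
    reveal = int(progress * len(target))
    out = []
    for i, ch in enumerate(target):
        if i < reveal or ch == " ":
            out.append(ch)
        else:
            out.append(rng.choice(charset))
    return "".join(out)
```

Evaluating this once per frame with increasing `progress` produces the left-to-right “password cracking” reveal.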

Example Houdini Animation

Feature Overview

Custom Save and Export Tools

These scripts are part of a series of pipeline scripts that enforce folder structures and naming conventions across multiple departments and projects when saving workfiles from different applications, without the artists having to worry about them.

The scripts were developed in MAXScript for 3ds Max and in Python for Houdini and Nuke. All save and export scripts follow the same basic structure to ensure consistency across all production stages.

The scripts extract metadata from the folder hierarchy and the workfiles, and handle workfile versioning as well as file naming. Artist abbreviations are inserted automatically based on the name of the user logged in to the workstation. Preview, export, and render paths are also updated automatically each time the workfile is saved.
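The metadata extraction and version bump can be sketched as below. The path convention shown (`<show>_<shot>_<dept>_v###_<artist>`) is a hypothetical example for illustration; the real scripts follow the studio’s own convention.

```python
import re
from pathlib import Path

# illustrative convention: SHOW_shot_dept_v###_artist.ext
NAME_RE = re.compile(
    r"(?P<show>[A-Za-z0-9]+)_(?P<shot>[A-Za-z0-9]+)_"
    r"(?P<dept>[A-Za-z0-9]+)_v(?P<version>\d+)_(?P<artist>[A-Za-z0-9]+)$"
)

def parse_workfile(path):
    """Extract show/shot/department/version/artist from a workfile name."""
    m = NAME_RE.match(Path(path).stem)
    if not m:
        raise ValueError("filename does not match naming convention")
    meta = m.groupdict()
    meta["version"] = int(meta["version"])
    return meta

def next_version_name(path):
    """Build the filename for the next save, bumping the version number."""
    meta = parse_workfile(path)
    meta["version"] += 1
    stem = "{show}_{shot}_{dept}_v{version:03d}_{artist}".format(**meta)
    return stem + Path(path).suffix
```

Keeping the pattern in one place means every application-specific save tool validates and increments names identically.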


3ds Max Quick Preview Script

The Quick Preview for 3ds Max is a simplified preview tool that automatically outputs preview renders from the selected viewport to the current shot’s preview folder and names all image files according to the current show’s naming convention. It also allows burning metadata into the preview images using a post-render Python script.

The tool was developed to streamline preview rendering so that artists can quickly create previews that comply with file-naming and folder-structure conventions in a few clicks.


Server-based metadata burn-ins for use in multiple post-production software packages

To ensure a facility-wide standard for displaying metadata such as project, sequence, shot, date, frame, department, artist, and version within any rendered or captured preview footage across multiple software packages such as Houdini, 3ds Max, and Nuke, a simple server-based Python script for automatic metadata burn-ins was developed. The script adapts font sizes to the input image’s resolution to ensure consistent size and placement of metadata across footage of varying resolutions and aspect ratios.

When previews are rendered from 3ds Max, Houdini, or Nuke, the script is executed as a post-render script on a network node. Metadata is extracted from the image file’s name or from explicitly provided information and is then burned into the final preview image.
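The resolution-adaptive placement can be sketched as a pure layout function: font size and margins are derived from the frame height, so burn-ins occupy the same relative area regardless of resolution or aspect ratio. The ratios and anchor roles below are illustrative, not the production values.

```python
def burnin_layout(width, height, rel_size=0.025, rel_margin=0.01):
    """Compute font size and text anchor points for metadata burn-ins.

    Sizes scale with frame height so text keeps a consistent relative
    size across resolutions and aspect ratios.
    """
    font_size = max(8, round(height * rel_size))
    margin = max(2, round(height * rel_margin))
    return {
        "font_size": font_size,
        "top_left": (margin, margin),                                 # e.g. show / shot
        "bottom_left": (margin, height - font_size - margin),         # e.g. artist / version
        "bottom_right": (width - margin, height - font_size - margin),# e.g. frame number
    }
```

The draw step itself can then use any imaging library available on the network node to render the text at the computed positions.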

The script is hosted on a network node so that only a single script has to be maintained for multiple applications, and to reduce the load on workstations.