Last week I was working on pipeline tools for a V-Ray based studio. It was a good opportunity for me and my rusty MAXScript skills to dive into 3ds Max as a support programmer.
1) Pipeline, automation and user interaction:
The first tool to write was an extension of an existing one. Its core functionality was to:
- Import render presets
- Apply materials
- Save batch render files for distributed rendering purposes
My main contribution to this tool was to wrap my head around the existing code base in order to include render elements in the scene configuration.
To achieve this goal, my first move was to use XML to save and import this information: much like Python's ElementTree, I used the power of .NET for this task (in 3ds Max, by calling dotNetObject "System.Xml.XmlDocument").
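Roughly, the XML round trip looks like this in MAXScript (the element names and file path below are placeholders, not the actual tool code):

-- minimal sketch of the XML round trip (hypothetical element names and path)
xmlDoc = dotNetObject "System.Xml.XmlDocument"
root = xmlDoc.CreateElement "sceneConfig"
xmlDoc.AppendChild root
elem = xmlDoc.CreateElement "renderElement"
elem.InnerText = "VRayZDepth"                 -- example render element name
root.AppendChild elem
xmlDoc.Save @"c:\temp\sceneConfig.xml"

-- reading the information back
xmlDoc.Load @"c:\temp\sceneConfig.xml"
print xmlDoc.DocumentElement.FirstChild.InnerText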
This turned out to be a valid solution, but in the end, after studying the Max render preset file mechanism more deeply, I was able to refactor this tool and invoke only the default scripting interface for render elements.
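That interface is the render element manager; used in isolation, it looks roughly like this (the V-Ray element class is only an example and assumes V-Ray is installed):

-- minimal sketch of the render element scripting interface
reMgr = maxOps.GetCurRenderElementMgr()        -- current render element manager
reMgr.AddRenderElement (VRayZDepth())          -- example: add a V-Ray Z-depth element
-- list the elements currently part of the scene setup
for i = 0 to (reMgr.NumRenderElements() - 1) do
(
    el = reMgr.GetRenderElement i
    format "element %: %\n" i (classof el)
)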
The second tool was interesting as well; some elements I encountered were:
- material library merging and saving
- material parameter tweaking and assembling
Among all the tools it was the easiest one to implement, and I also had some time to learn more about object-oriented programming on this occasion.
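For the material library part, a minimal MAXScript sketch might look like this (the paths are placeholders, and the material example assumes V-Ray is installed):

-- merge an external material library into the current one, then save it out
extLib = loadTempMaterialLibrary @"c:\libs\shared_materials.mat"   -- placeholder path
for mat in extLib do append currentMaterialLibrary mat
saveMaterialLibrary @"c:\libs\merged_materials.mat"

-- tweak a material parameter before assembling / assigning it
vmat = VRayMtl name:"concrete"        -- assumes the V-Ray renderer is available
vmat.diffuse = color 128 128 128
$Box001.material = vmat               -- hypothetical scene node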
2) Object turnaround post-mortem:
The last tool was, in the end, the most challenging. Its core functionality was to automate scene or object turnaround animation (a la ZBrush turntable), with an interesting requirement: framing the object and computing a margin based on its largest dimension.
The second requirement was to center the object in the camera view without modifying the camera focal length, zoom, or any other attribute.
Traditionally, an object's bounding rectangle in screen space is computed from its world bounding box. The box corners are projected onto the camera film plane, then from these point coordinates we simply extract the minimum and maximum values along the X and Y axes.
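In MAXScript this classic approach can be sketched as follows (the object in the usage line is a placeholder):

-- loose bounding rectangle of a node in the active viewport, from its world bounding box
fn looseBoundingRect node =
(
    bb = nodeGetBoundingBox node (Matrix3 1)      -- world-space min / max corners
    pMin = bb[1]; pMax = bb[2]
    corners = #(                                  -- the 8 corners of the world-aligned box
        [pMin.x, pMin.y, pMin.z], [pMax.x, pMin.y, pMin.z],
        [pMin.x, pMax.y, pMin.z], [pMax.x, pMax.y, pMin.z],
        [pMin.x, pMin.y, pMax.z], [pMax.x, pMin.y, pMax.z],
        [pMin.x, pMax.y, pMax.z], [pMax.x, pMax.y, pMax.z] )
    gw.setTransform (Matrix3 1)                   -- project from world space
    xMin = yMin = 1e9; xMax = yMax = -1e9
    for c in corners do
    (
        sp = gw.transPoint c                      -- screen-space position of the corner
        xMin = amin xMin sp.x; xMax = amax xMax sp.x
        yMin = amin yMin sp.y; yMax = amax yMax sp.y
    )
    #([xMin, yMin], [xMax, yMax])                 -- min / max screen corners
)
-- usage: looseBoundingRect $Teapot001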
This is a really fast method, but its drawback is that it only loosely encloses the object in the camera view.
Nonetheless, it was useful for writing a poor man's zoom-extents function that works with target cameras and helps ensure objects are centered in the view and not outside the camera frustum.
This can be done by comparing the width and height of the camera film plane with those of the object's bounding volume in camera space: until the bounding volume is fully enclosed in the camera view, we move the camera back by a large increment.
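A poor man's zoom-extents built on that idea might look like the sketch below; the step value and iteration cap are arbitrary, and it assumes the camera fov is the horizontal field of view and the viewport aspect matches the render aspect:

-- dolly the camera back until the object's bounding volume fits the view (sketch)
fn dollyUntilEnclosed cam node step:50.0 maxSteps:200 =
(
    for i = 1 to maxSteps do
    (
        bb = nodeGetBoundingBox node cam.transform      -- bounding box in camera space
        w  = bb[2].x - bb[1].x
        h  = bb[2].y - bb[1].y
        d  = -(amax bb[1].z bb[2].z)                    -- distance of the nearest corner
        planeW = 2.0 * d * tan (cam.fov / 2.0)          -- film plane width at that depth
        planeH = planeW * renderHeight / renderWidth    -- height from the render aspect
        if d > 0 and w <= planeW and h <= planeH then exit
        in coordsys local move cam [0, 0, step]         -- dolly back along the view axis
    )
)
-- usage: dollyUntilEnclosed $Camera001 $Teapot001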
Two scripts work really well for extracting this image plane in 3ds Max:
- viewFrustrum.mcr ( http://scripts.breidt.net ) by Martin Breidt.
- ImagePlane145.ms ( same author, can also be found on scriptspot )
They just don't work with custom camera types created by external plugins like V-Ray.
(image from http://www.illusioncatalyst.com/mxs.php#null –> Practical Space Mapping for interaction)
It was time to deal with the 3ds Max viewport commands and the associated graphics window functionality. The MAXScript help file and its how-to section proved as useful for this as Enrico Gullotti's website, illusioncatalyst.
I learned a lot of new things by evaluating the differences with the equivalent Maya API functions like M3dView or MFnCamera, and in the end I was most comfortable using a grid helper in 3ds Max to do the view projection operations needed by my script.
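To give the flavor of the grid helper idea, here is a rough, simplified sketch (somePoint and the choice of the target distance for the projection plane are placeholders, not the actual tool code):

-- sketch: camera-aligned grid helper used as a stand-in film plane
cam = viewport.getCamera()
theGrid = Grid()
theGrid.transform = cam.transform                -- align the helper with the camera

-- express a world-space point in that camera-aligned space...
pCam = somePoint * (inverse theGrid.transform)   -- somePoint is a placeholder Point3
-- ...then project it onto a plane at the camera's target distance
d = cam.targetDistance
pPlane = [pCam.x, pCam.y] * (d / -pCam.z)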
3) Bounding rectangle with brute-force ray intersection:
In order to get a tighter bounding rectangle, I knew full well that the object's vertices needed to be projected onto the camera film plane.
( image from wikipedia on perspective/camera projection principle )
My first implementation used really simple geometry collision:
- First, a vector is built from each vertex position to the camera eye point.
- With this position and vector, a ray is emitted to catch any intersection with the image plane.
In 3ds Max this can be done with intersectRayEx and snapshotAsMesh (to extract the world-space state of a node, much like Maya's worldMesh attribute for polygonal objects), whereas in Maya we can use an equivalent method of the MFnMesh class:
bool MFnMesh::closestIntersection(
        const MFloatPoint &raySource, const MFloatVector &rayDirection,
        const MIntArray *faceIds, const MIntArray *triIds, bool idsSorted,
        MSpace::Space space, float maxParam, bool testBothDirections,
        MMeshIsectAccelParams *accelParams, MFloatPoint &hitPoint,
        float *hitRayParam, int *hitFace, int *hitTriangle,
        float *hitBary1, float *hitBary2,
        float tolerance = 1e-6, MStatus *ReturnStatus = NULL )
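Back in MAXScript, the brute-force version can be sketched like this ($FilmPlane is a hypothetical planar mesh standing in for the camera film plane):

-- brute-force sketch: fire a ray from each vertex toward the camera eye point
-- and collect the hits on a planar "film plane" node
fn projectVertsOnPlane node cam planeNode =
(
    worldMesh = snapshotAsMesh node                 -- world-space copy of the node's mesh
    hits = #()
    for v = 1 to worldMesh.numverts do
    (
        p   = getVert worldMesh v                   -- world-space vertex position
        dir = normalize (cam.transform.pos - p)     -- direction toward the camera eye point
        res = intersectRayEx planeNode (ray p dir)  -- undefined when the ray misses the plane
        if res != undefined then append hits res[1].pos
    )
    delete worldMesh
    hits                                            -- hit points on the image plane
)
-- usage: projectVertsOnPlane $Teapot001 $Camera001 $FilmPlane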
This was simple to write and manage, but soon enough it started to show some serious speed limitations when the object vertex count went over 10,000.
4) Bounding rectangle with viewport point projection:
One partial solution was to use the view projection matrix in order to do the heavy math. I encountered a similar operation in a post on CGTalk: "Python API script to node: project UV on camera" (shameless plug: I wrote several posts in that thread).
In Maya this can be done in two ways:
- MFnCamera exposes this information: an MFnCamera function set attached to a camera shape provides projectionMatrix()
- M3dView::projectionMatrix retrieves it from the target viewport
The Max viewport and graphics window (gw) interfaces are what I learned to use in this project. They expose roughly the same type of functions as the M3dView API class, which I was already familiar with.
-- 3ds Max method
-- Space mapping: WorldCS -> ScreenCS (from http://www.illusioncatalyst.com)
gw.setTransform (Matrix3 1)
p3Temp = gw.transPoint p3PointWorldPos
p2PointScreenPos = [(ceil p3Temp.x), (ceil p3Temp.y)]

// Maya method (we assume that we have already set up and have access to an M3dView)
MPoint worldPt;       // input vertex position parameter
short x_pos, y_pos;   // output storage for the equivalent pixel coordinates
currentView.worldToView(worldPt, x_pos, y_pos);
This new method was faster but was not able to cope with real-world scenarios (with over 4 billion vertices in a scene, each iteration to build a bounding rectangle was taking over 16 seconds); it was a good time to dive into the world of image buffers and pixel processing.
5) Image-based bounding rectangle:
Instead of processing geometry, I thought it would be more efficient to process an image: the computation would depend on the image size and no longer on the scene complexity.
Using an image buffer is a pretty common technique in real-time rendering. It can be found in any per-pixel lighting workflow:
- like deferred lighting:
(image from the Leadwerks Engine tutorial: an informative video on image buffers and deferred lighting. Not as solid as the Unreal Development Kit or Unity, but interesting to play with)
- or screen-space ambient occlusion:
(image from the Crytek CryEngine 3 free SDK)
My first thought was to render the current scene with a low-quality preset and no lights in order to read the alpha channel. It was just a bit disappointing to see that on each occurrence the scene setup was a prohibitive operation, and the overall method was slower than I had previously imagined.
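In sketch form the idea was roughly the following, assuming the rendered bitmap keeps its alpha channel (the resolution is arbitrary):

-- render small and scan the alpha channel for covered pixels (sketch)
bmp = render outputSize:[160, 120] vfb:false      -- low resolution, no virtual frame buffer
xMin = yMin = 1e9; xMax = yMax = -1e9
for y = 0 to (bmp.height - 1) do
(
    row = getPixels bmp [0, y] bmp.width          -- one scanline of color values
    for x = 1 to row.count do
        if row[x].a > 0 then                      -- pixel covered by the object
        (
            xMin = amin xMin (x - 1); xMax = amax xMax (x - 1)
            yMin = amin yMin y;       yMax = amax yMax y
        )
)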
6) Bounding rectangle by grabbing a viewport image:
The next logical step was to retrieve the current viewport image. Several tools were useful for this purpose:
- miauugrabviewport.ms from miauu (an interesting use of a lot of MAXScript viewport commands).
- VFB+ from Rotem Shiffman (really sweet source code for .NET UI and assembly compiling).
In 3ds Max, capturing a snapshot of a view is done with gw.getViewportDib() (the graphics window command to get a viewport device-independent bitmap; the name involved will start to be interesting later), whereas in Maya this operation needs to be done with the API (code below from Nathan Horne's blog):
# Import API modules
import maya.OpenMaya as api
import maya.OpenMayaUI as apiUI

# Grab the last active 3d viewport
view = apiUI.M3dView.active3dView()

# Read the color buffer from the view, and save the MImage to disk
image = api.MImage()
view.readColorBuffer(image, True)
Before capturing the current view, a bit of work needs to be done:
- First, to detect the object, we change the background color to white
- we then choose flat shading as the display mode (with no lighting)
- change the object's color to pure black
- and finally filter the elements by type in order to draw only geometry objects
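A sketch of this setup, plus the scan over the grabbed bitmap for dark pixels, might look like this (the object name and the threshold value are placeholders):

-- viewport setup: white background, flat shading, black object, geometry only (sketch)
setVPortBGColor white
viewport.SetRenderLevel #flat
obj = $Teapot001                          -- placeholder object
obj.wirecolor = black
hide (for o in objects where not (isKindOf o GeometryClass) collect o)
completeRedraw()

-- grab the viewport and find the object's pixel bounding rectangle
fn viewportBoundingRect threshold:200 =
(
    dib = gw.getViewportDib()
    xMin = yMin = 1e9; xMax = yMax = -1e9
    for y = 0 to (dib.height - 1) do
    (
        row = getPixels dib [0, y] dib.width
        for x = 1 to row.count do
            if row[x].r < threshold then          -- dark pixel: part of the object
            (
                xMin = amin xMin (x - 1); xMax = amax xMax (x - 1)
                yMin = amin yMin y;       yMax = amax yMax y
            )
    )
    #([xMin, yMin], [xMax, yMax])                 -- pixel-space min / max corners
)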
This setup is then used in an iterative process (much like what an IK solver does).
To cope with the perspective projection, the camera is moved along the depth axis by a predetermined value; if the current frame dimension becomes greater than the desired value, we just cancel the last translation, halve our step variable, and repeat the loop until completion.
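Reusing the viewportBoundingRect sketch above, the iteration itself could be written roughly like this (the target width and the initial step are placeholders):

-- iterative framing sketch: dolly the camera until the pixel rectangle reaches the target width
fn frameToWidth cam targetWidth step:100.0 maxIters:100 =
(
    for i = 1 to maxIters while step > 0.5 do
    (
        in coordsys local move cam [0, 0, -step]       -- dolly toward the object
        completeRedraw()
        rect = viewportBoundingRect()
        currentWidth = rect[2].x - rect[1].x
        if currentWidth > targetWidth then
        (
            in coordsys local move cam [0, 0, step]    -- overshoot: cancel the last translation
            step /= 2.0                                -- halve the step and keep going
        )
    )
)
-- usage: frameToWidth $Camera001 400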
I encountered two minor glitches when using the image-based bounding rectangle with MAXScript:
- Large images take a severe hit on performance
- At the closest range it was no longer possible to get a reliable object frame
These hurdles were easily overcome by splitting the viewport into 4 panels (reducing the viewport size without having to grab any pointer or mess with any window API) and by adding a second camera with a different zoom factor (this camera acts as a neutral observer, much like what is used for occlusion or visibility culling).
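Both workarounds are nearly one-liners in MAXScript (the observer camera settings below are just an example):

-- shrink the grabbed area by switching to a four-panel viewport layout
viewport.setLayout #layout_4

-- neutral observer camera with a wider field of view, used only for the grab
observerCam = Freecamera name:"observerCam" fov:90
viewport.setCamera observerCam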
7) Parting words and alternate implementation:
When this project was completed, I wanted to go further and decided to use .NET to read and manipulate image data.
After playing with some .NET functions in MAXScript, I stumbled upon some limitations on how to pass data between Max and C#.
Once again I found a solution after reading a thread on CGTalk ("bitmap.lockbits method using maxscript and dotnet"): either create a dynamically compiled assembly or write a static C# class compiled into a DLL file.
csharpProvider = dotnetobject "Microsoft.CSharp.CSharpCodeProvider"
compilerParams = dotnetobject "System.CodeDom.Compiler.CompilerParameters"
compilerParams.ReferencedAssemblies.AddRange #("System.dll")
compilerParams.ReferencedAssemblies.AddRange #("System.Drawing.dll")
compilerParams.ReferencedAssemblies.AddRange #("System.Windows.Forms.dll")
compilerParams.GenerateInMemory = on
compilerResults = csharpProvider.CompileAssemblyFromSource compilerParams #(source)
In the code above we use the powerful CSharpCodeProvider class to dynamically compile an assembly from a string named "source".
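Once the compilation succeeds, the resulting type can be instantiated right away; in the sketch below the class name PixelReader and its method are placeholders for whatever the source string actually defines:

-- instantiate a class defined in the "source" string and call one of its methods (sketch)
assembly  = compilerResults.CompiledAssembly
pixelTool = assembly.CreateInstance "PixelReader"        -- placeholder class name
-- e.g. pixelTool.ReadPixels @"c:\temp\grab.bmp"         -- placeholder method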
Compiling the same code into a DLL was then straightforward; the project solution file only needs the required DLL references to work.
(You can also notice that the equivalent operation is done for the dynamically compiled assembly by the lines compilerParams.ReferencedAssemblies.AddRange #("System.dll"), and for Maya C++ plugins we likewise add the required libraries through the IDE.)
Using a .NET assembly proved useful, as the same operation was 10 times faster than using MAXScript to read an image file; it was also a good opportunity to start playing with C# tools and the .NET platform.