In this article I try to address a specific need: animatable deformer vertex weights. It was inspired by a CGTalk thread about wiping a deformation across a surface.
1) A Python prototype for an adjacent idea:
Back in the Maya 7 days, one of the major improvements was the generalization of component weight reading/writing to all deformers: it became possible, with the blendShape deformer, to isolate a region on a target in order to mix shapes together.
(above: the target mesh in the middle was decomposed into two shapes by painting a blendShape weight)
This workflow becomes useful in facial rigging when we want to quickly set up a character's expressions.
The next logical step was to animate these weights, something that can be found in other software:
- XSI’s animatable weight maps
- Houdini’s procedural architecture and its low-level access to component data
A similar tool was developed for the 64-bit version of Maya: the SOuP nodes by Peter Shipkov.
This node can update the per-vertex component weights of a deformer and is by itself an interesting piece of software. It was a good study opportunity, and I decided to write something similar for the 32-bit version of Maya.
In Maya’s node documentation, we can see that every deformer is a child of the weightGeometryFilter node and as such inherits its weightList attribute.
This compound array attribute lets users attenuate or amplify the deformation effect for each component. Each vertex weight value can be animated individually, connected, or driven by an expression.
The only drawback is the severe slowdown that occurs with this method: each weight modification triggers an evaluation of the deformer. To cope with this limitation, my first draft was a node that simply output the same type of compound array:
- It was disappointing to see that the evaluation of this data was not correct.
- One explanation of this behavior can be found in the developer guide on complex attributes: connections between array plugs are not advisable.
- The second explanation may be a bug in Maya 2011.
My workaround was to compact all the weights into a doubleArray and pass this single piece of data to another node that uses it to mix shapes together.
The implementation in Python was quite easy, as I settled on three requirements for a wipe-across-a-surface effect:
- First, I wanted to use a curve ramp to compute the component weight values.
- Then, I made my life easy by using a curve to translate a mesh point position into a normalized ramp position.
- Lastly, from a neutral input mesh and a target mesh, a list of vectors is extracted and multiplied by the appropriate ramp value.
This was enough to test my concept.
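The three requirements above can be sketched in plain Python, without the Maya API. This is only an illustration of the concept: `linear_ramp`, `wipe_blend`, and the ramp-key format are hypothetical names, not the prototype's real attributes.

```python
def linear_ramp(position, keys):
    """Evaluate a simple linear ramp; keys is a sorted list of (pos, value)."""
    if position <= keys[0][0]:
        return keys[0][1]
    if position >= keys[-1][0]:
        return keys[-1][1]
    for (p0, v0), (p1, v1) in zip(keys, keys[1:]):
        if p0 <= position <= p1:
            t = (position - p0) / (p1 - p0)
            return v0 + t * (v1 - v0)

def wipe_blend(neutral_points, target_points, normalized_u, ramp_keys):
    """Blend each neutral point toward its target, weighted by the ramp
    value sampled at the vertex's normalized curve parameter."""
    blended = []
    for (nx, ny, nz), (tx, ty, tz), u in zip(neutral_points,
                                             target_points,
                                             normalized_u):
        w = linear_ramp(u, ramp_keys)
        blended.append((nx + (tx - nx) * w,
                        ny + (ty - ny) * w,
                        nz + (tz - nz) * w))
    return blended
```

Animating the ramp keys (or the normalized positions) over time produces the wipe: the deformation sweeps across the surface as the ramp front moves.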
2) The 3 caballeros:
Upon closer inspection, I started to see that each time this node was evaluated, the same costly “ramp_uv” computation was being done.
I ended up rewriting and extending my concept by separating each step into its own C++ node.
a) Binding vertices to access weight values: a poor man’s UV mapping
At this time I plan to support 4 types of mapping:
- Curve proximity
- Curve space
- NURBS surface
- Polygon UV map
- The curve proximity mode’s sole function is to find, from a mesh vertex position, the closest point on an arbitrary curve (more precisely, to store the resulting u parameter). This is what the MFnNurbsCurve class provides to API developers (caution: once bitten, it might be impossible to live without it, be warned) at frightening speed.
Any type of curve can be used, but:
- It helps to choose a curve with uniform spacing.
- The closer the curve hugs the object, the better the mapping will be.
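Outside the API, the idea behind this lookup can be sketched as a brute-force parameter search. This is only a slow, illustrative stand-in for what MFnNurbsCurve does analytically; `curve_fn` is any hypothetical parametric function, and the sample count and range are assumptions.

```python
def closest_param_on_curve(curve_fn, point, samples=200, u_min=0.0, u_max=1.0):
    """Sample the curve and keep the parameter whose point has the
    smallest squared distance to the query point."""
    best_u, best_d2 = u_min, float("inf")
    for i in range(samples + 1):
        u = u_min + (u_max - u_min) * i / samples
        cx, cy, cz = curve_fn(u)
        d2 = (cx - point[0]) ** 2 + (cy - point[1]) ** 2 + (cz - point[2]) ** 2
        if d2 < best_d2:
            best_u, best_d2 = u, d2
    return best_u
```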
Another important step, once we have retrieved a u parameter, is to normalize this value. A naive approach (I have made this mistake myself more than a few times) would be to assume that every curve’s parameter range (displayed in the above image as min/max values) runs from 0.0 to 1.0, or, more often overlooked, from 0.0 to the number of spans…
Most of the time this may be true, but nothing stops the user from adding knots to a curve or constructing it with a specific knot vector. You may thus end up with a curve parameter range of (0.0, 2.0) with 212 spans…
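The safe normalization is simply a remap against the curve's actual min/max parameter range, never an assumed (0, 1) or (0, span count). A one-line sketch:

```python
def normalize_param(u, u_min, u_max):
    """Map a raw curve parameter into [0, 1] using the curve's real
    knot domain, so a (0.0, 2.0) range with 212 spans still works."""
    return (u - u_min) / (u_max - u_min)
```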
For this mode I will use the same algorithm that was developed for bending limbs in a previous project (an algorithm that will be thoroughly explained at a later time).
At any point along a curve we can robustly and reliably compute a coordinate frame. (Sorry, the default normal that Maya computes for a curve point parameter doesn’t cut it: when the curve direction changes abruptly, you can clearly see unwanted flipping along your motion path or spline IK.)
Once we have built a matrix, simple vector math lets us extract the angle between the projected mesh vertex vector (yellow) and the matrix up vector (green). As a personal preference I like to map this angle from 0 to 359.9999, which makes it easy to later translate it into a normalized V space.
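Assuming a frame (tangent plus up vector) has already been built at the closest curve point, the angle extraction can be sketched with plain vector math. This is an illustrative function, not the plug-in's code: both vectors are projected onto the plane perpendicular to the tangent, and the signed angle between them is wrapped into [0, 360).

```python
import math

def angle_around_tangent(vertex_vec, up_vec, tangent_vec):
    """Angle in degrees, 0..360, from the frame's up vector to the
    mesh-vertex vector, measured around the curve tangent."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    t2 = dot(tangent_vec, tangent_vec)
    # project both vectors onto the plane perpendicular to the tangent
    v = sub(vertex_vec, scale(tangent_vec, dot(vertex_vec, tangent_vec) / t2))
    u = sub(up_vec, scale(tangent_vec, dot(up_vec, tangent_vec) / t2))
    # atan2(sin-part, cos-part) gives a signed angle; wrap into [0, 360)
    angle = math.degrees(math.atan2(dot(cross(u, v), tangent_vec),
                                    dot(u, v) * math.sqrt(t2)))
    return angle % 360.0
```

Dividing the result by 360 then gives the normalized V coordinate directly.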
- NURBS surface: when the user chooses this mode, we find, from a mesh vertex position, its closest UV point. One efficient way to perform this operation is to use an MNurbsIntersector.
- UV map: in this mode the user provides a mesh with UVs as input. In Maya (per the MFnMesh class documentation), UVs are referenced on a per-polygon, per-vertex basis: either all vertices of a face have UVs or none do.
First we store in two arrays the UVs found in the current UV set of our input mesh with the MFnMesh::getUVs method. Then a face-vertex iterator (MItMeshFaceVertex) inspects each face. As a vertex can be shared by several faces, the first time its ID is detected we set a status flag to prevent the same vertex ID from being read again in another face. The values found are simply clamped into a normalized UV space and written into the U_list and V_list double output arrays.
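A plain-Python sketch of that pass, with hypothetical flat input lists standing in for the Maya mesh and iterator:

```python
def gather_vertex_uvs(face_vertex_ids, face_vertex_uvs, vertex_count):
    """Mimic the MItMeshFaceVertex pass: keep the first UV seen per
    vertex, clamp into [0, 1], and fill flat U/V output lists.
    face_vertex_ids / face_vertex_uvs are parallel per-face lists."""
    seen = [False] * vertex_count
    u_list = [0.0] * vertex_count
    v_list = [0.0] * vertex_count
    clamp = lambda x: min(max(x, 0.0), 1.0)
    for ids, uvs in zip(face_vertex_ids, face_vertex_uvs):
        for vid, (u, v) in zip(ids, uvs):
            if not seen[vid]:          # first face touching this vertex wins
                seen[vid] = True
                u_list[vid] = clamp(u)
                v_list[vid] = clamp(v)
    return u_list, v_list
```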
( graph network for this vertexMap node )
b) Reading weights: from ramp to texture node
In a vertexWeights node, weight values can be pulled from 3 types of sources:
- a curve ramp
- a color ramp
- and a texture node.
The first two drivers were easy to implement: an excellent article on ramp attributes can be found on Chad Vernon’s website, and it was an invaluable source of inspiration for this project.
Reading from a texture node is a little more involved. This task is logically done with the MImage class of the Maya API. It lets us access image data:
- from a file path, using the readFromFile method;
- or by stepping outside our node to retrieve a fileTextureNode MObject and using it with the readFromTextureNode method.
What I learned is that an image is not a multi-dimensional type of data: it is a tightly packed array whose size depends on its width, height, and depth properties. We can’t, right off the bat, request the value of the blue channel at pixel coordinates (50, 120), but it is easy to write a method with this kind of functionality.
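Indexing into that flat array is a one-liner once the width and depth are known. A sketch, assuming MImage-style row-major storage with `depth` values per pixel (typically 4 for RGBA):

```python
def pixel_channel(pixels, width, depth, x, y, channel):
    """Read one channel of pixel (x, y) from a flat, tightly packed
    image buffer: row-major layout, `depth` values per pixel."""
    return pixels[(y * width + x) * depth + channel]
```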
Maya also provides a utility to sample the color value of a shading node, making the developer’s life easy:
- MRenderUtil::sampleShadingNetwork()
With this we can take advantage of Maya’s regular workflow by supporting the place2dTexture/place3dTexture node attributes, enabling us to displace vertex positions from color data.
c) Taming vectors with colors:
( image from Hajime Nakamura’s plugin )
Vertex displacement is a common technique in image rendering. For animation purposes, lots of deformers/modifiers have been written and work well:
- displaceD from Hajime Nakamura.
- the peak deformer from the SOuP nodes.
Most of the time, the implementation of a vertex displacement node uses the vertex normal as a direction and the color value as a percentage of a known distance. But nothing restricts developers from using this color value as a vector, as is done with normal and displacement maps.
My idea was much simpler: creating a simple blendshape node (not a deformer) with animatable weight-map support.
Instead of using the usual data structure for storing weights (a compound array, which can be sparse and thus saves memory), I chose a doubleArray attribute. This single attribute can pass this chunk of data efficiently, as its values are constantly being updated.
Morphing / blending a shape is done with a simple algorithm :
current_offset_between_vertices = current_targetMesh_vertex_Position - current_inputMesh_vertex_Position
current_Vertex_blendedPosition = current_inputMesh_vertex_Position + current_offset_between_vertices * currentVertexWeight * globalDeformerWeight
current_offset_between_vertices: the difference between two points returns a vector. In Maya the correct order is targetPoint - sourcePoint, giving a vector that starts at sourcePoint and points to targetPoint.
current_Vertex_blendedPosition: in order to support Maya’s deformer envelope attribute (globalDeformerWeight), the API developer must add an offset vector to sourcePoint.
This vector is scaled by the current vertex weight with a simple multiplication.
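The formula above amounts to a per-vertex linear interpolation scaled by both weights. A minimal sketch, using shorter hypothetical names than the node's attributes:

```python
def blend_vertex(input_pos, target_pos, vertex_weight, envelope):
    """One vertex of the blend: offset = target - input, then push the
    input point along that offset, scaled by the per-vertex weight
    (from the doubleArray map) and the global envelope."""
    return tuple(i + (t - i) * vertex_weight * envelope
                 for i, t in zip(input_pos, target_pos))
```

With vertex_weight = 1.0 and envelope = 1.0 the vertex lands exactly on the target; with either at 0.0 it stays on the input mesh, which is what makes the per-vertex map behave like an animatable wipe mask.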
3) Parting words and file download:
This project was a good opportunity to learn more about shading networks from the API standpoint. I also discovered that my vertexMap node has the same type of attributes as Maya’s arrayMapper node. From this point it was easy to hack that node to replace my vertexWeights node in my current network.
A zip archive of this project is available on the ‘HighEnd3D’ website:
(New: Maya 2012 x64 version compiled by Mike Graessle from Rigging Dojo, thanks Mike!)
( first tutorial to show the UI and script functionalities )
( second demonstration on curve and uv driver, and lastly on animated texture )
(soon fun with hacking maya arrayMapper node… )