This post is about two dependency nodes created to cope with the math limitations behind a regular orientation constraint.
(Above: notice the flipping glitch when the torsion goes above 180.0 degrees.)
After writing a quaternion-based aim constraint, I was often confronted with twist popping out of control when the amount of torsion in a limb was too severe.
After much trial and error I was able to find an intermediate solution for deformation purposes.
Part of the answer lay in the Maya API MDataBlock structure and the ability for a developer to read the content of an outputValue handle previously filled in the same compute method.
1) From pie to pie (with a thick French accent):
The default flipping behavior can be explained with some basic trigonometry that is easy to learn and look up.
What was important for me to grasp was the valid range, or domain, of the trigonometric functions. These notions appear throughout vector math and rotation matrices, and can really be considered fundamental concepts.
The following code replicates the functionality of the Protractor helper object in 3ds Max (with the addition of an exposed angle parameter):
import maya.cmds as cmds

refLoc = cmds.spaceLocator(n='referenceLoc1')[0]
startLoc = cmds.spaceLocator(n='startLoc1')[0]
endLoc = cmds.spaceLocator(n='endLoc1')[0]
cmds.parent([startLoc, endLoc], refLoc)
cmds.setAttr(startLoc + '.t', 10.0, 0.0, 0.0, type='double3')
cmds.setAttr(endLoc + '.t', 10.0, 2.0, 0.0, type='double3')
cmds.addAttr(refLoc, ln='angle', at='double')
cmds.setAttr(refLoc + '.angle', e=True, keyable=True)

# Use the locator positions as vectors
angleReader = cmds.createNode('angleBetween')
cmds.connectAttr(startLoc + '.translate', angleReader + '.vector1')
cmds.connectAttr(endLoc + '.translate', angleReader + '.vector2')

nullPos = [(0, 0, 0), (0, 0, 0)]
startCurve = cmds.curve(d=1, p=nullPos, k=(0, 1), n='startCurve1')
startShape = cmds.rename(cmds.listRelatives(startCurve, s=True, ni=True)[0],
                         'startCurveShape1')
endCurve = cmds.curve(d=1, p=nullPos, k=(0, 1), n='endCurve1')
endShape = cmds.rename(cmds.listRelatives(endCurve, s=True, ni=True)[0],
                       'endCurveShape1')
cmds.parent([startCurve, endCurve], refLoc)

# Who needs a cluster on a simple display curve? Plug the position values directly.
cmds.setAttr(startShape + '.controlPoints[0]', 0, 0, 0, type='double3')
cmds.connectAttr(startLoc + '.translate', startShape + '.controlPoints[1]', f=True)
cmds.setAttr(endShape + '.controlPoints[0]', 0, 0, 0, type='double3')
cmds.connectAttr(endLoc + '.translate', endShape + '.controlPoints[1]', f=True)
cmds.connectAttr(angleReader + '.angle', refLoc + '.angle', f=True)

# Don't let dependency nodes live in the wild and pollute the Outliner:
# group them in a container.
asset = cmds.container(type='dagContainer', ind=['inputs', 'outputs'],
                       includeHierarchyBelow=True, includeTransform=True,
                       force=True, addNode=[refLoc], n='angleReader_Asset1')
cmds.setAttr(asset + '.viewMode', 0)
cmds.addAttr(asset, ln='angle', at='double')
cmds.setAttr(asset + '.angle', e=True, keyable=True)
cmds.connectAttr(refLoc + '.angle', asset + '.angle', f=True)

# Don't forget to rename the hyperLayout node associated with this container.
layout = cmds.listConnections(asset, t='hyperLayout')[0]
cmds.rename(layout, asset + '_Layout')

# Kindly prefix the elements in the container.
nodeList = cmds.container(asset, q=True, nl=True)
for node in nodeList:
    cmds.rename(node, asset + '_' + node, ignoreShape=True)
What is interesting to see in the image above is that the angle between the two lines never goes beyond 180.0 degrees, nor below 0.0.
To determine a direction for this rotation we use the sign of the Y value of the purple locator (the same logic can be found in Python's atan2 function). Hence a positive rotation revolves counter-clockwise, and a negative rotation clockwise.
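This sign test can be sketched in plain Python (the function name is mine, not part of any node):

```python
import math

def signed_angle_2d(vx, vy):
    """Angle of the vector (vx, vy) against the +X axis, in degrees.

    atan2 uses the sign of the Y component to pick a direction:
    positive angles turn counter-clockwise, negative angles clockwise.
    """
    return math.degrees(math.atan2(vy, vx))

print(signed_angle_2d(10.0, 2.0))   # small positive angle: counter-clockwise
print(signed_angle_2d(10.0, -2.0))  # same magnitude, negative: clockwise
```

Unlike the angleBetween node, which always reports a value in [0.0, 180.0], atan2 returns a signed value in (-180.0, 180.0].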
2) History and time as a 4th dimension:
After refreshing my knowledge of trigonometry and vector math I was kind of stuck and didn't know where to start. It was a good opportunity to browse two websites which host a wealth of research papers:
- Chris Evans's page dealing with muscles and skin deformation: http://www.chrisevans3d.com/reference.htm
- Ke-Sen Huang's home page, which hosts lots of SIGGRAPH papers: http://kesen.realtimerendering.com/
Most of the papers were impossible to understand right away when it came to equations and mathematical notation. What was inspiring was the pseudo code and the overall ideas presented.
One of them is pretty common in the world of simulation: querying a variable at a different time to solve a problem.
Applied to a character, simulation can be used for soft body dynamics, cloth, hair, or complex skin depiction.
The only drawback of a simulation is that it is history dependent (to see the result at frame 100 you must compute every frame since the beginning) and not interactive.
Luckily I found a thread on CGTalk that was a good starting point: Alex V. U. Strarup was talking about a lag node inspired by Maya's frameCache node.
Maya's frameCache node can read the value of a numerical input stream at a different time frame, and was a good candidate, but it was useless in my case because this numerical value must already be processed (like in a regular animation curve or an SDK).
We can replicate this feature in MEL with the -time flag of the getAttr command, or in the Maya API with the combination of an MPlug and an MDGContext.
This is what makes it possible to 'ghost' an animated object in Maya, among many other things.
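A minimal sketch of both routes, to be run inside Maya (the node and attribute names are placeholders):

```python
# Maya-only sketch: sampling a plug at another time.
import maya.cmds as cmds
import maya.OpenMaya as om

# cmds route: read an attribute as it evaluates at frame 10.
past_ty = cmds.getAttr('pCube1.translateY', time=10)

# API route: the same query through an MPlug and an MDGContext.
sel = om.MSelectionList()
sel.add('pCube1.translateY')
plug = om.MPlug()
sel.getPlug(0, plug)
ctx = om.MDGContext(om.MTime(10, om.MTime.uiUnit()))
past_ty_api = plug.asDouble(ctx)
```

Both queries force an evaluation of the plug at the requested time without moving the current time slider.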
3) “The answer was not in the box, but in the band”:
(a little excerpt from the movie Antitrust)
As I was trying to sample a value at a different time in a custom Python dependency node with MPxNode.getInternalValueInContext(), I was intrigued by the information provided for the setDependentsDirty() method in the Maya API reference (a method whose sole purpose is to define relationships between attributes in a node):
“If you want to know the value of a plug, use MDataBlock.outputValue() because it will not result in computation (and thus recursion)”.
I took advantage of this property to determine a twist direction:
- Inside the compute method, the value of a separate output buffer is first read.
- Then this output buffer and a twist input value are processed in a dedicated procedure.
- Finally, if the input and buffer values are not identical, we store the twist input value in the output attribute.
Each time the input value changes and triggers a new evaluation, we rinse and repeat the same operations.
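The steps above can be sketched Maya-free in plain Python (class and attribute names are mine; in the real node the buffer is an outputValue handle read through MDataBlock):

```python
# The node compares the incoming twist value with the value it wrote
# to its own output buffer on the previous evaluation.
class TwistBuffer(object):
    def __init__(self):
        self.buffer = 0.0  # stands in for the outputValue handle

    def compute(self, twist_input):
        previous = self.buffer           # read our own last output
        direction = 0
        if twist_input != previous:      # only react to a real change
            direction = 1 if twist_input > previous else -1
            self.buffer = twist_input    # store for the next evaluation
        return direction

node = TwistBuffer()
print(node.compute(10.0))  # 1  (first change is positive)
print(node.compute(10.0))  # 0  (no change, no direction)
print(node.compute(4.0))   # -1 (value decreased)
```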
As we have seen earlier, the angle between two vectors is always positive. To determine the direction of a rotation that will transform a reference vector into a target vector (traveling along the shortest path), we can use their cross product to create a perpendicular vector that will act as a twist axis.
The same operation is done between the reference vector and this twist axis, creating a vector equal to the reference vector rotated 90.0 degrees about the twist axis.
In the last step, if the angle between this “directional vector” and the target vector is greater than 90.0 degrees, our direction is negative.
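A minimal sketch of the test in plain Python (vector and function names are mine; the twist axis is passed in explicitly here so both outcomes can be exercised):

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def twist_sign(reference, target, axis):
    # Rotating 'reference' 90 degrees about 'axis' gives the directional vector.
    directional = cross(axis, reference)
    # angle(directional, target) > 90 degrees <=> their dot product is negative.
    return -1.0 if dot(directional, target) < 0.0 else 1.0

print(twist_sign((1, 0, 0), (0.1, 1, 0), (0, 0, 1)))   # 1.0  (counter-clockwise)
print(twist_sign((1, 0, 0), (0.1, -1, 0), (0, 0, 1)))  # -1.0 (clockwise)
```

Comparing the angle against 90.0 degrees amounts to checking the sign of a dot product, which avoids any inverse trigonometric call.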
This method still has a major flaw: if a limb is twisted more than 180 degrees between two evaluations (a situation that can arise when an attribute value is modified manually, or when a joint assumes its preferred angle), the system can no longer track a rotation direction reliably.
In his article (http://www.3dfiggins.com/writeups/forearmTwist/) Kiel Figgins talks about a manual, animatable twist solution, with many relevant arguments. At first this solution seems to be less of a hassle when dealing with twisting limbs; it just, in my opinion, lacks scalability:
- Everywhere on the body, each region can twist and interact with its neighborhood.
- As a manual process, the result can be less accurate or lack consistency.
- Apart from some specific cases, where the twist of a limb is part of the character's performance, twist visualization is not essential to animation.
- In my eyes, the role of an animator is to concentrate his skill and talent on the timing, silhouette, and acting of a character. His time may be better spent animating secondary objects to control jiggle, shaping and bending regions, breaking joints, etc., than rotating twist bones to solve deformation issues.
It was disappointing to realize that this method was not robust enough to be used on an animation rig; as a deformation tool, however, it is pretty solid.
Part of the remaining work was therefore to stabilize this tool and define a set of procedures to automate twist extraction and propagation on a deformation rig.
4) Angle accumulation versus number of revolutions:
One method to compute the twist value in my node was to add or subtract the angle found between the past and current rotation vectors in a buffer attribute. Concerned that, due to numerical accuracy, this value might drift from the current rotation value, I started investigating a more stable solution.
First it was important to translate the past and current rotation vectors into a more consistent value: a normalized value U that represents a percentage of a circle's perimeter.
One way to picture this circle's purpose is to think of it as an audio tape that comes out of the upper point of the separation line and then rolls up around a cylinder as its extremity is pulled by the current rotation vector.
As a user continues to twist a limb in the same direction, the length of tape exiting the separation line increases, and the band can start to wrap several times around the cylinder.
The amount of torsion in a limb is thus constructed from this twist history, or number of revolutions, and the parametric value U.
# Procedure to compute the number of revolutions along a unit circle.
# All values are rounded to the 5th decimal.
# The U parameter ranges from 0.0 to 0.99999.
currentRevolution = currentRevolution_outputHandle.asFloat()

if currentRevolution >= 0.0:
    if twistDirection > 0:
        # Case A: positive rotation direction.
        if current_rotation < past_rotation:
            # The current rotation is in most cases greater than the past rotation;
            # when this is false we can increment the number of revolutions.
            if currentRevolution == -0.0:
                currentRevolution = 0.0
            else:
                currentRevolution += 1.0
    else:
        # Case B: negative rotation direction.
        if current_rotation > past_rotation:
            # The current rotation is in most cases lower than the past rotation;
            # when this is false we can decrement the number of revolutions.
            if currentRevolution == 0.0:
                currentRevolution = -0.0
            else:
                currentRevolution -= 1.0

if currentRevolution < 0.0:
    # When the number of revolutions is negative the U parameter is reversed.
    past_rotation = 1.0 - past_rotation
    current_rotation = 1.0 - current_rotation
    if twistDirection > 0:
        # Case A: positive rotation direction.
        if current_rotation < past_rotation:
            if currentRevolution == -0.0:
                currentRevolution = 0.0
            else:
                currentRevolution += 1.0
    else:
        # Case B: negative rotation direction.
        if current_rotation > past_rotation:
            if currentRevolution == 0.0:
                currentRevolution = -0.0
            else:
                currentRevolution -= 1.0

currentRevolution_outputHandle.setFloat(currentRevolution)
currentRevolution_outputHandle.setClean()
5) Countermeasures for a previous-value-dependent node:
When it was time to use this node on a character limb, I was fully aware that only half the work was done.
To prevent incoherent evaluations of this node, I started by investigating the worst case scenarios:
- between two frames, a value changes too much for the node to compute a correct torsion value.
- jumping in time, or requesting a value at a different time, doesn't match well with this node's algorithm.
The solution, and the workflow I developed around it, was to control how a torsion value is stored on an external animation curve, a curve which ultimately overrides the node computation.
One command met the above requirements: the bakeResults command.
Due to my node architecture, I built a simpler function around two flags of this runtime command:
- the -time argument, which provides a valid time range for the sampling function.
- the -sampleBy argument, which controls the granularity of the recording.
Within the valid time range this function executes these instructions:
- copy the torsion value from the torsion node.
- paste this double value into a numerical attribute on an external empty group node.
- set a key frame.
- step in time by a defined amount.
- once the end time is reached, set all tangents to linear interpolation.
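The loop above can be sketched with maya.cmds; this is a hedged reconstruction to be run inside Maya, not the actual script, and the torsion node and cache group names are hypothetical:

```python
# Maya-only sketch of the manual bake loop.
import maya.cmds as cmds

def bake_torsion(torsion_attr, cache_attr, start, end, step=1):
    t = start
    while t <= end:
        cmds.currentTime(t, edit=True)
        value = cmds.getAttr(torsion_attr)   # copy from the torsion node
        cmds.setAttr(cache_attr, value)      # paste on the external group
        cmds.setKeyframe(cache_attr)         # set a key frame at this time
        t += step                            # step by the -sampleBy amount
    # Once the end time is reached, flatten the interpolation.
    cmds.keyTangent(cache_attr, time=(start, end),
                    inTangentType='linear', outTangentType='linear')

# bake_torsion('twistNode1.torsion', 'twistCache_grp.torsion', 1, 100)
```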
Once this script was finished, the next step was to allow a user to override, correct, or enhance the number-of-revolutions value computed by the torsion node: storing an incorrect twist value has little to no value, so it was important to create tools to complete this task as effortlessly as possible.
This was enough to manually cache one twist attribute, but I wanted to go further and automate this process for a whole character.
The first time I was exposed to the concept of modular procedural rigging was in Maya Techniques: Custom Character Toolkit.
In this SIGGRAPH masterclass, Erick Miller and Paul Thuriot talk about custom character pipelines and tool creation.
The core concept useful to me in this presentation was the use of metadata to link, maintain, or store meaningful information and relationships.
In action this means:
- linking elements together with message attributes (to retrieve a mirror limb, for example).
- storing information in string attributes (name, prefix, etc.).
- flagging or registering elements with an ID number.
In itself this represents a powerful idea that can easily be modified to fit one's needs.
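A minimal Maya sketch of those three flavors of metadata (all node and attribute names are invented for the example):

```python
# Maya-only sketch of metadata on rig nodes.
import maya.cmds as cmds

left_arm = cmds.group(em=True, name='L_arm_grp')
right_arm = cmds.group(em=True, name='R_arm_grp')

# 1) Link elements with message attributes (retrieve a mirror limb later).
cmds.addAttr(left_arm, longName='mirrorLimb', attributeType='message')
cmds.connectAttr(right_arm + '.message', left_arm + '.mirrorLimb')
mirror = cmds.listConnections(left_arm + '.mirrorLimb')  # ['R_arm_grp']

# 2) Store naming information in a string attribute.
cmds.addAttr(left_arm, longName='prefix', dataType='string')
cmds.setAttr(left_arm + '.prefix', 'L_', type='string')

# 3) Flag the element with an ID number.
cmds.addAttr(left_arm, longName='rigId', attributeType='long')
cmds.setAttr(left_arm + '.rigId', 101)
```

Message connections carry no data; they only record a relationship between two nodes, which makes them cheap and safe for this kind of bookkeeping.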
Two similar recent approaches can be found and dissected:
This metadata concept can also be found in Maya's blind data for polygonal meshes. Several types of information can be assigned at the object or component level and are stored in a template node.
Traditionally, to promote a parallel workflow, a puppet is split by function:
- a first hierarchical structure is used by animators to move and “breathe life” into a creature or character.
- a second structure is responsible for the surface deformation.
The two structures can then be glued together with a series of point/orient/scale constraints, the main purpose being to drive the deformation skeleton with the animation skeleton.
In order to automate my twist extraction mechanism, it was important to blend the weight of each animation skeleton part sequentially over the corresponding elements in the deformation hierarchy.
The usual method is to blend the influence of an entire skeleton or limb hierarchy all at the same time.
Instead of blending, we can also toggle this value directly with a script, an SDK, or condition nodes, and store the offset transform value to prevent visual artifacts.
My requirement was modest: retrieve an ordered list of “transform” constraints. Thus I used a simple approach and avoided going too deep into recursive searches.
In a master puppet node, several hub nodes are connected via message attributes. Each hub describes a hierarchy layout for a body region, and hosts two message list attributes:
- one for a transform object,
- and the other for a constraint node.
It was easier for me to lay out attributes and connections than to write a script responsible for scanning a hierarchy and applying a series of actions.
I also decided to use only two levels in my metaNode external hierarchy: for example, a finger bone chain can be a child of a hand bone in an animation rig, but in my process it will be connected after the arm_hub in the hub_List attribute.
Once this step was finished I was able to write a script:
- that gathers, one hub node at a time, an ordered list of constraint nodes and flattens it into a global list;
- that, for each element in this list, creates two keys going from 0.0 to 1.0, spaced by N frames;
- and finally offsets the time range of these keys by the element's position in the global list (with the sole purpose that the last element's key is placed just before the beginning of the current shot).
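The key-timing arithmetic can be sketched Maya-free (function and parameter names are mine): each constraint gets two keys, blending 0.0 to 1.0, staggered so that the last pair lands just before the shot starts.

```python
def stagger_keys(constraint_count, shot_start, spacing):
    """Return (off_key_time, on_key_time) pairs, one per constraint,
    ordered so the last constraint finishes one frame before shot_start."""
    keys = []
    for i in range(constraint_count):
        # Element i turns fully on 'spacing' frames after element i - 1;
        # the last element's 1.0 key sits one frame before the shot starts.
        end = shot_start - 1 - spacing * (constraint_count - 1 - i)
        keys.append((end - spacing, end))
    return keys

print(stagger_keys(3, shot_start=101, spacing=5))
# -> [(85, 90), (90, 95), (95, 100)]
```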
This preprocessing is common in cloth simulation: a user assigns a range of time before the beginning of an animation sequence to allow the simulation to start from a known state. The character goes from a dressing pose to the action pose.
In my workflow this preprocessing is used only to store all the torsion values in a character. Once finished, we can bake the surface deformation into a point cache file as usual (the time range of this file is the same as the sequence time range: the preprocessing goal was to start reading torsion values from a known state, relieving the user from manually entering a correct value).