During the last month I attended a facial rigging session at Rigging Dojo ( www.riggingdojo.com ).
“Fear does not exist in this dojo…
Pain does not exist in this dojo…
-No sensei!!!! ” ( and this goes on; don’t be afraid, Daniel-san, just call Miyagi-san ).
More seriously, Rigging Dojo is an online school where each student can receive personal, customized character rigging education.
A more in-depth presentation can be found at http://area.autodesk.com/blogs/stevenr/to_rig_or_not_to_rig
In my spare time I love to write tools and test new ideas for character deformation, but I always felt that I lacked experience in facial rigging and facial deformation.
More precisely, at work, time and budget constraints mean providing just enough controls for the animation performance. To achieve this goal I tend to favor basic techniques like joint deformation for the neck and jaw region and blendshapes for visemes and emotions.
With our limited resources and my lack of knowledge on this subject, the resulting quality of our facial character performances is usually quite low. The main problem I encounter with my current method is the lack of connectivity between the face regions.
Attending this facial rigging session was a good opportunity to improve my skills and broaden my knowledge. I was lucky to receive support and teaching from Brad Clark.
As learning material I recycled an old character: Speedman
( designed by Jean-Luc Patriat from Hanuman.fr for a Speedy 2007 TV commercial )
( Click on the picture above to see the full-size image. )
The ear and nose regions were first modified to better fit my taste, then the mouth and eye regions were tweaked to a more neutral position.
The new face topology is now denser to allow detail in the skin deformation and was designed from the Hippydrome concept: http://www.hippydrome.com/ArticFace.html
The Hippydrome video about mesh layout and curve tangents was also inspirational for this task.
A more thorough discussion about curves can be found at:
- Maya API MFnAnimCurve Class Reference
The preceding links are not designed to strain any math muscle but were important for me in this project.
1) Showdown between a face robot, a machine and an animation toolset…:
Facial animation is a rich subject that is actively researched.
The main research areas can be seen as:
- Geometry and texture acquisition
- Expression and facial action unit acquisition
- Animation from motion capture, markerless video footage, or synthesis from an audio or text signal
- Pipeline automation with re-targeting of animation between characters
- Automatic weighting / deformation for characters of different proportions and shapes.
Building from this research, the average user can try to leverage several software packages specialized in facial rigging:
a) Face Robot
One of these tools is now part of Softimage XSI: Face Robot.
Like any tool, the developers have tried their best to streamline the workflow and user experience.
First the user must register the different objects that will be part of the face, like the eyes, teeth and tongue.
At this stage the face mesh must also comply with Face Robot's requirements:
- the interior of the eyes must be closed.
- the interior of the mouth also needs to be created.
The second step that is left to the user is the registration of a fixed number of key points or landmarks that will define the face proportions.
( image from XSI documentation at http://download.autodesk.com/global/docs/softimage2012/en_us/userguide/ )
From the image above we can see, on each side of the face:
- 4 points that define the eye region
- 2 for the forehead
- 4 points for the whole mouth
- 4 points also for the nose
- 5 points for the outline of the jaw
- 3 points for the ear
- 3 points for a region that is often overlooked: the neck
From this information Face Robot is now able to solve the face deformation and the animation controls.
The user can continue fine-tuning and improving the face behavior as much as he likes by painting weight / wrinkle maps or sculpting corrective shapes.
One interesting feature of Face Robot is its lip synchronization tool.
Instead of dealing with keys, this module treats visemes as animation clips in a video-mixer interface. ( In a very smart way; it's a pity that only English and Japanese are supported for audio speech recognition. )
b) Facial animation toolset:
This interesting piece of software is developed by the R&D department of the Filmakademie, and is tightly integrated into Maya as a set of plugins.
( all images are from the facial animation toolset wiki )
Like other facial rigging systems, one of the first steps is the registration process.
Based heavily on Paul Ekman's Facial Action Coding System, this process is then used to generate an animation interface.
These animation controls can import motion capture data or be manually key-framed.
The last step is to import the default weight template as a basic deformation layer and tweak the face behavior by painting weights or adding corrective shapes.
What I find interesting in this project are the tools that were developed for the corrective shape session: the Conditional Blend Weighted node.
This tool's help file contains really useful information on the 3 methods used to average the values of the driver input connections:
- Arithmetic mean
- and harmonic mean
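The difference between these averaging strategies is easy to see in plain Python ( a minimal sketch with hypothetical driver values, not the toolset's actual node code ):

```python
def arithmetic_mean(values):
    """Plain average: every driver contributes linearly."""
    return sum(values) / len(values)

def harmonic_mean(values):
    """Harmonic mean: dominated by the smallest value, so one nearly
    inactive driver damps the whole result."""
    return len(values) / sum(1.0 / v for v in values)

drivers = [0.2, 0.5, 0.8]        # hypothetical driver input connections
print(arithmetic_mean(drivers))  # about 0.5
print(harmonic_mean(drivers))    # about 0.36, pulled down by the 0.2 driver
```

This is why the choice of mean matters for correctives: the harmonic mean only fires strongly when all its drivers are active.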
It shares some similarities with Daniel Pook-Kolb's Attribute Combination System.
More Information about his Advanced DataPoint Theory can also be found at http://www.stargrav.com/bcs/docs/data/ap1-dps.html#dps
One of the strong points of this facial animation toolset is its Facial Expression Repertoire and the Filmakademie Public Facial Data-set, where registered users can access and study facial action images and 3D data.
c) The Face Machine:
This last program is also a facial rigging plugin for Maya and was created by the authors of The Setup Machine: Anzovin Studio.
An interview with its founder can be found in the Rigging Dojo broadcast archive: Rigging Dojo Live: Inside The Setup Machine 2 with Raf Anzovin
One interesting point is that the registration process is done with a set of curves that need to be fitted to a humanoid character's face.
These curves are then used to provide localized smooth deformation for the face in a custom skinCluster node, much like the wire deformer without its global orientation restriction.
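The general idea of a curve acting as a smooth localized influence can be sketched with a distance-based falloff ( a plain-Python illustration of the principle, not The Face Machine's actual code ):

```python
import math

def closest_distance(point, polyline):
    """Distance from a point to a polyline (the influence curve)."""
    px, py = point
    best = float("inf")
    for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
        abx, aby = bx - ax, by - ay
        t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
        t = max(0.0, min(1.0, t))         # clamp onto the segment
        cx, cy = ax + t * abx, ay + t * aby
        best = min(best, math.hypot(px - cx, py - cy))
    return best

def falloff_weight(point, polyline, radius):
    """Smoothstep falloff: 1 on the curve, fading to 0 at the radius."""
    d = closest_distance(point, polyline)
    if d >= radius:
        return 0.0
    t = 1.0 - d / radius
    return t * t * (3.0 - 2.0 * t)        # smoothstep

curve = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(falloff_weight((0.5, 0.0), curve, 1.0))  # 1.0, right on the curve
```

A skinCluster influence built this way moves vertices proportionally to their falloff weight, which is what gives the smooth, wire-like behavior.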
The rest of the features share the same amount of polish and are similar to what can be found in the Maya developer community ( this is a quality in my opinion ):
- Automatic rigging: with on-face controls and joystick-like controls ( à la Osipa )
- synoptic view / face management UI
- Pose library
In this tutorial Warren Grubb, Animation Director at Fathom Studios, explains a method to take advantage of the smooth nature of NURBS curves as influence objects.
( By the way, Amy, thanks for the link to my blog in your pose space deformation section. )
2) Face synthesis and MPEG-4 facial animation as additional learning material:
a) FacialStudio, life between procedural content creation and modeling templates:
( All images from Di-o-matic website )
Di-o-matic's FacialStudio is a Windows application and 3ds Max plugin that enables users to create heads through a set of parametric controls.
Like many procedural content generation tools, the quality of the default results is usually barely average:
- The number of parameters to control ( which ultimately gives flexibility ) is often overwhelming.
- Despite the developers' efforts, the user can be confused by new workflow procedures, and the learning curve ends up being too high given the lack of examples, meaningful tutorials or documentation.
In his masterclass ( Next-Gen 3D head creation: modeling, rigging and animation in Autodesk Entertainment Creation Suite ), Laurent M. Abecassis tries to educate his audience on these problems in order to change their expectations:
Use these tools as a starting point and not as a magical make-art button.
One successful example, covering crowd duplication, hybridization and avatar customization, can be found in Chris Fortier's presentation at the 2009 Game Developers Conference.
( images from the PowerPoint presentation on the GDC Vault website )
This slide deck shares concepts about a universal body mesh, character morphing, normal map blending, layered clothing etc… a very interesting process developed at Volition for Saints Row 2.
b) MPEG-4, computer vision and talking heads:
One day, during a research session on Google, I found a great paper from the Visual Media Lab: Computer Facial Animation: A Survey.
( “MPEG-4 specifies and animates 3D face models by defining Face Definition Parameters (FDP) and Facial Animation Parameters (FAP)”, as we can read on page 16. )
In this paper the authors list various computer facial animation techniques and try to classify them into the following categories:
- shape interpolation,
- Facial Action Coding Systems based approaches,
- performance driven facial animation,
- MPEG-4 facial animation,
- visual speech animation,
- facial animation editing,
- facial animation transferring,
- and facial gesture generation.
After this reading, anyone can start to grasp how much effort was put into solutions like:
- Image Metrics' FACEWARE ( special mention for their character rigs and free webinars )
- Mova's CONTOUR Reality Capture system.
- Xbox Kinect and Autodesk Project Photofly ( image-based modeling and photogrammetry )
3) Current results and motivation:
Broad surface based deformation of the neck and jaw region
Fleshy eyes and eye rig connected to the master face rig
Eyebrow and forehead test
a) A sleeve rig as a basic, versatile deformation template:
After crafting the head topology, I wanted to try out more complex methods for facial rigging with a heavy emphasis on deformation.
My goal was not to despise simple and effective techniques, but to achieve better deformation in less time. As a second requirement I needed rig elements that can be scripted and transferred.
I started my investigation from the core concepts shared by Charles Looker and Brad Noble.
(images from Brad Noble’s facial_bone_rig )
( images of Charles Looker's work from the facial animation setup thread on CGTalk )
From the images above we can see:
- that the basic layer of deformation is done with bones.
- the bone layout roughly mimics real face anatomy.
- A curve network is used to drive these bones' motion and orientation.
- This curve network is also used to control other driving curves ( think naso-labial fold ).
My primary goal was thus to blend some of the qualities this technique provides with my interest in the Maya API:
- Compact the bone elements into region-based NURBS surface patches.
- Add them to a skinCluster as influence objects.
To achieve this goal, I took some time to write custom curve nodes that comply with the following requirements:
- maintain volume through rough collision detection,
- equalize point distribution for surface relaxation,
- offset the midpoint of an input curve range along its path.
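The point equalization requirement, for instance, boils down to resampling a curve at equal arc-length steps. Here is a minimal sketch of that math on a polyline ( plain Python, not the actual plugin code ):

```python
import math

def resample_equal(points, n):
    """Redistribute n points at equal arc-length intervals along a polyline,
    the same service Maya's rebuildCurve provides for NURBS curves."""
    # cumulative arc length at each input point
    lengths = [0.0]
    for (ax, ay), (bx, by) in zip(points, points[1:]):
        lengths.append(lengths[-1] + math.hypot(bx - ax, by - ay))
    total = lengths[-1]
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        # find the segment containing this arc length and interpolate
        j = 1
        while j < len(lengths) - 1 and lengths[j] < target:
            j += 1
        seg = lengths[j] - lengths[j - 1]
        t = (target - lengths[j - 1]) / seg if seg else 0.0
        (ax, ay), (bx, by) = points[j - 1], points[j]
        out.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return out

# points bunched at the start get spread out evenly
pts = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (4.0, 0.0)]
print(resample_equal(pts, 5))
```

In the rig, running this kind of redistribution on the driving curve is what relaxes the surface it carries.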
These face widgets / modules also work in conjunction with a series of “sleeve” surfaces that are:
- a first layer of deformation for the neck region,
- and a versatile, reusable deformation template that can be tweaked to deal with most limbs or body regions.
A sleeve module is an extruded curve arc with additional parameters:
- to control the basic twist behavior and the global twist distribution,
- implant and mix bind pose topology,
- compute corrective shape subsurface patches,
- isolate mesh regions for faster user interaction.
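The twist-distribution parameter, for example, can be reduced to interpolating a twist angle per cross-section with a bias exponent ( a simplified plain-Python sketch of the idea, not the sleeve node itself ):

```python
def sleeve_twist(params, start_twist, end_twist, bias=1.0):
    """Distribute a global twist along the sleeve: each cross-section at
    parameter t in [0, 1] receives an interpolated angle; the bias exponent
    controls where along the sleeve the twist accumulates."""
    return [start_twist + (end_twist - start_twist) * t ** bias for t in params]

sections = [0.0, 0.25, 0.5, 0.75, 1.0]
print(sleeve_twist(sections, 0.0, 90.0))            # linear distribution
print(sleeve_twist(sections, 0.0, 90.0, bias=2.0))  # twist pushed toward the end
```

With bias above 1 most of the twist lands near the end of the sleeve, which is roughly how forearm twist behaves.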
3 interesting papers talk about spline deformations and will be used to implement these sleeve nodes with the Maya API:
- Offset Curve Deformation from Skeletal Animation
( image by Arthur Gregory and Dan Weston from Sony Pictures Imageworks )
- Deformation Styles for Spline-based Skeletal Animation
( image of the spline coordinate system)
In this paper we can also learn about the Frenet frame and the bind process,
useful concepts for stretchy limbs and rubber-hose-like deformation.
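The Frenet frame itself is simple to approximate numerically. Here is a small plain-Python sketch using finite differences on a parametric curve ( with the usual caveat that the frame is undefined on straight segments ):

```python
import math

def frenet_frame(curve_fn, t, eps=1e-4):
    """Approximate the Frenet frame (tangent, normal, binormal) of a 3D
    parametric curve at parameter t using finite differences.
    Caveat: undefined where curvature is zero (straight runs)."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def normalize(a):
        m = math.sqrt(sum(x * x for x in a))
        return tuple(x / m for x in a)
    tangent = normalize(sub(curve_fn(t + eps), curve_fn(t - eps)))
    # second difference approximates the second derivative's direction
    d2 = sub(sub(curve_fn(t + eps), curve_fn(t)), sub(curve_fn(t), curve_fn(t - eps)))
    binormal = normalize(cross(tangent, d2))
    normal = cross(binormal, tangent)   # completes the orthonormal frame
    return tangent, normal, binormal

# On a unit circle at t = 0 the normal points back toward the center.
circle = lambda t: (math.cos(t), math.sin(t), 0.0)
T, N, B = frenet_frame(circle, 0.0)
print(T, N, B)
```

Production spline rigs usually prefer a parallel-transport or offset frame over the raw Frenet frame precisely because of that zero-curvature flip problem.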
A similar concept is demonstrated in Anzovin Studio's rig tools pre-alpha video.
b) Neck module experiment:
For this project I wanted to include an underlying series of NURBS objects as collision objects. It was not a complete success, but I was able to learn other things in the process.
All the deformations are based on the movement of the main neck tube, which can be bent and stretched but doesn't twist. ( With Maya's regular tools this can be achieved with an extrude NURBS history node. )
Attached to this tube, the upper part of the trapezius element inherits some part of this translation and maintains this region's volume.
The blue surface in the overlay represents the neck corset, which more closely follows the neck's initial volume at bind pose. This surface's twist distribution is controlled by the final orientation of the head.
The green surface is responsible for the transition between the neck corset and the trapezius.
Its twist distribution is different from the neck corset's, and as the neck twists an interesting phenomenon arises:
- some parts of the surface lose volume, but the combination of several other surfaces maintains the neck shape.
c) Face implant and basic jaw deformation:
One of the key concepts that Brad taught me was a layered approach to the face deformation:
In the image above, two NURBS influence objects are responsible for the most basic jaw deformation: opening the mouth and having a smooth transition up to the eyelid, with the side and bottom of the nose reacting as well.
As the masseter muscle contracts, the jaw opens and the temple region also reacts, along with a little portion of the ear.
The nose reaction is quite small in this layer, as the primary target is to have a global deformation propagate across the whole face implant.
To complete this level of deformation the mouth width needs to shrink accordingly, but this was not implemented yet.
d) Forehead and brow region: Fun with MFnNurbsCurve …
( image references from Google )
The biggest feature of the forehead deformation is generally the large wrinkles that appear as the underlying muscles are triggered.
This effect also has a quite subtle action on the scalp region.
We can be more atomic and decompose this head module into 2 elements:
- In red, the frontalis regions, which can be mirrored and are angled in a specific way
- In blue, the skin region, which is stretched between these muscles and hosts the deeper part of the outer skin folding
One important requirement for this region is to preserve the underlying skull shape while pulling a good amount of flesh at the same time.
Part of the solution I developed was to build, with the Maya Python API, a custom curve node that can shift its points in a specific manner.
( This tread was built from a curve with an extrude history. )
( Above : image of the shiftRange node in the attribute editor )
What my shiftRange node ( shiftPoint node would be more appropriate ) does is quite simple:
- Extract the driving point that lies at half the length of a curve
- Shift this point's parameter value up or down along the curve
- Sample N segment points on each side of this point, at a regular interval
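Outside Maya, the behavior can be sketched on a simple parametric curve ( a hypothetical plain-Python reimplementation of the idea, not the actual node code ):

```python
def shift_and_sample(curve_fn, shift, n, spacing):
    """Sketch of the shiftRange idea on a curve parameterized over [0, 1]:
    take the driving point at half the curve length, shift its parameter
    up or down, then sample n points at a regular interval on each side."""
    center = 0.5 + shift                              # shifted driving point
    params = [center + i * spacing for i in range(-n, n + 1)]
    params = [min(1.0, max(0.0, p)) for p in params]  # clamp to the curve
    return [curve_fn(p) for p in params]

line = lambda t: (t * 10.0, 0.0)   # a straight 10-unit curve for illustration
print(shift_and_sample(line, 0.2, 2, 0.05))
```

Sliding the `shift` value up and down is what moves the whole sampled segment, and so the deformation it drives, along the curve.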
( image from maya muscle’s reference documentation )
To finalize this region, one method to create the forehead wrinkles is to paint a weight map for Maya Muscle's relax surface deformer: as the surface contracts, the painted area starts to wrinkle.
The other alternative can be done with a tension-based deformer:
- to create the wrinkles
- or to drive the vertex weights of a sculpted blendshape
Several plugins expose this type of functionality:
- fStretch2 by cgaddict
- The tensionBlendShape node from the SOuP plugin toolbox
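The tension idea behind these deformers boils down to comparing edge lengths before and after deformation. A minimal plain-Python sketch ( not the fStretch or SOuP implementation ):

```python
import math

def vertex_tension(rest, deformed, edges):
    """Per-vertex tension: average ratio of deformed to rest edge length.
    Values below 1 mean compression (wrinkle region), above 1 stretching."""
    sums = [0.0] * len(rest)
    counts = [0] * len(rest)
    for a, b in edges:
        ratio = math.dist(deformed[a], deformed[b]) / math.dist(rest[a], rest[b])
        for v in (a, b):
            sums[v] += ratio
            counts[v] += 1
    return [s / c if c else 1.0 for s, c in zip(sums, counts)]

rest = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
compressed = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]   # both edges halved
print(vertex_tension(rest, compressed, [(0, 1), (1, 2)]))
```

Remapping the compression values into blendshape vertex weights is what makes the sculpted wrinkle appear exactly where the skin bunches up.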
After some weighting tests it was clear that this node was not the right one to drive the eyebrow: it was simply losing volume as it shifted the deformation up or down.
It is still very good for compressing skin quickly and easily.
A future implementation will add:
- the starting parameter of the driving point
- a point distribution mechanism: a ramp attribute will control the point spacing if the user wishes something less linear
- curve pruning: the user will be able to choose which part of the curve will be created ( for this example, the endpoint segment )
e) Brow region: more API tweaking, splineIK and skin molding …
The next logical step for me to shift the brow was to refactor my curve node in order to offset a predefined curve range.
( Notice the polygon strip that deforms the brow mesh. )
After some tests and several weighting configurations, I started to see an improvement in the brow / forehead deformation.
Two elements were a bit disappointing:
- the current weight configuration of the brow was pulling the forehead too much, and the underlying skull was losing volume
- the brow was exhibiting volume artifacts because its sub-segments were not rotating
( Above: we can see shearing in the brow's segments. )
One solution to counter this artifact is to use a splineIK, extracted from the brow surface strip, to drive the position and orientation of a series of regular joints.
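In 2D the fix can be sketched like this: sample joints along the curve and aim each one along its local tangent, so each sub-segment rotates instead of shearing ( a plain-Python illustration of the principle, not the actual rig ):

```python
import math

def joints_along_curve(points):
    """Place a joint at each polyline point, oriented along the local tangent
    (central difference where possible), so each sub-segment rotates with the
    curve instead of shearing."""
    joints = []
    for i, p in enumerate(points):
        a = points[max(i - 1, 0)]
        b = points[min(i + 1, len(points) - 1)]
        angle = math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
        joints.append((p, angle))
    return joints

brow = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.5)]
for pos, rot in joints_along_curve(brow):
    print(pos, rot)
```

A Maya splineIK does essentially this in 3D, with the curve's tangent driving each joint's aim axis.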
f) Parting words and future implementation:
After shifting curve points and offsetting curve ranges, the last node I wrote is able to shift a whole curve along a path.
This was good practice for me: implementing in one node what I used to do with 30 regular Maya nodes. In the image above it doesn't seem like much, but it starts showing its full potential on limb twisting.
By default Maya has several excellent curve nodes:
- subCurve: creates a curve portion and can be hacked to slide it along a path
- pointOnCurveInfo: retrieves position, tangent, normal, derivatives etc… one of the nodes that exposes almost all the useful methods from MFnNurbsCurve to the average user.
- rebuildCurve: equalizing point distribution on a curve made easy
I hope this post was informative and that it can breed new ideas for the readers of this blog.