New York Film Academy Instructor Craig Caton has created a new plug-in for Autodesk Maya that may revolutionize the way digital puppetry is performed in both independent productions and major motion pictures.
The new software, called Animservo, is non-real-time facial capture software that allows a single puppeteer to craft and save a performance before ever arriving on set. A GoPro records the performer, facial recognition software captures the nuances of the performance, and the data is uploaded to Maya, where the performance is refined and then downloaded into the puppet.
Because the system relies only on a GoPro and markerless facial recognition, the puppeteer does not even have to be in the same state as the production. In a recent test, a puppeteer in Texas gave a performance for a puppet in Los Angeles.
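Animservo's internal workings have not been published, but the general workflow described above, capturing facial channel weights from the performer, refining them as keyframes in Maya, and then driving the puppet's servos from those curves, can be sketched with Maya's standard Python interface. In the illustrative example below, the rig node names, channel names, and servo ranges are purely hypothetical stand-ins and are not part of Animservo itself.

```python
# Illustrative sketch only -- not the actual Animservo API.
# Maps hypothetical facial-capture channel weights (0.0-1.0) onto
# keyframes of a rig node in Maya; a servo controller could later
# read those animation curves and translate them into motor positions.
import maya.cmds as cmds

# Hypothetical capture data: one dict of channel weights per frame.
capture_frames = [
    {"jaw_open": 0.10, "brow_raise": 0.00, "smile": 0.25},
    {"jaw_open": 0.45, "brow_raise": 0.30, "smile": 0.20},
    {"jaw_open": 0.80, "brow_raise": 0.55, "smile": 0.05},
]

# Hypothetical mapping from capture channels to rig attributes,
# with the output range each channel drives on the puppet.
channel_map = {
    "jaw_open":   ("puppet_rig", "jawRotateX",  0.0, 35.0),
    "brow_raise": ("puppet_rig", "browTransY",  0.0,  1.5),
    "smile":      ("puppet_rig", "mouthCorner", 0.0, 20.0),
}

for frame, weights in enumerate(capture_frames, start=1):
    for channel, weight in weights.items():
        node, attr, lo, hi = channel_map[channel]
        value = lo + weight * (hi - lo)  # scale the weight into the output range
        cmds.setKeyframe(node, attribute=attr, time=frame, value=value)
```

Once the weights are baked to keyframes like this, an animator can adjust the curves in Maya's Graph Editor before the performance is exported to the puppet, which is the kind of between-takes refinement the article describes.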
Usually, crafting a performance with a puppet requires quite a few performers. For example, the T-Rex in “Jurassic Park” utilized five union performers: one controlled the eyebrows, another the mouth, a third the neck, and so on.
Animservo can save productions a ton of money on performers and allows directors to have a picture-ready performance with less rehearsal time. If a director changes his or her mind about the way a performance looks, the puppeteer can make adjustments in just a few minutes and the puppet will be ready for the next take.
As valuable as this invention is for both budgets and schedules, Caton says he has “something even better on the horizon.” In the meantime, Caton will be previewing Animservo at SIGGRAPH, the Special Interest Group on Computer Graphics and Interactive Techniques conference, in Los Angeles.
To get the new plug-in, sign up for the training class here. The software comes free with the class.