But now that that’s out, I’m working on something new that gets around one of the biggest remaining issues with rigging 2.5D Kinect characters: automating layer scaling based on Z-distance. It’s one of the most annoying things to deal with, and until now the best options were “stay in one depth plane” or “manually scale things up and down.” Ugh.
The guy on the left is what happens if you walk back and forth toward the camera and don’t account for it:
The little expression I came up with this morning turns the same character with the same mocap data into the guy on the right.
Keep in mind this is an experimental feature, and at the moment it only works for camera-facing characters. It won’t be added to the UI Panel until I’ve worked out the necessary layer space transforms and squashed a couple of bugs. In the meantime, if you’d like to try it out, here’s the code:
In the 3D template, set this as the “mocap” layer’s position expression:
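The actual expression lives in the template, but the idea behind it can be sketched in plain JavaScript. This is just an illustration of the underlying perspective math, not the KinectToPin expression itself, and the function and parameter names below are hypothetical:

```javascript
// Sketch of the Z-distance scaling idea (not the actual KinectToPin code):
// a 2D layer's scale is tied to its distance from the camera, so walking
// toward the camera enlarges the puppet the way true 3D perspective would.
// zRef is the depth at which the puppet reads at 100% scale.
function scaleForDepth(z, zRef) {
  // Perspective projection: apparent size is inversely proportional to distance.
  return 100 * (zRef / z);
}

// At the reference depth the layer stays at 100%...
console.log(scaleForDepth(1000, 1000)); // → 100
// ...and at twice the distance it shrinks to 50%.
console.log(scaleForDepth(2000, 1000)); // → 50
```

In an After Effects expression you’d read the depth from the layer’s Z position relative to the camera instead of passing it in as an argument.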
Along with a bunch of other cool features, the next release of KinectToPin is going to include both 2D and 3D automatic template setup. Here are two quick experiments with the 3D version:
I really like the foreshortening. It was actually a bit of a happy accident, coding-wise, an unexpected side effect of the expression that ties the scale of the 2D puppet layers to the 3D camera’s position.
Instead of keyframing the characters by hand to match the existing audio track, “Machine Politics” was created using Kinect motion capture. I think the visuals add a lot to what was already one of my favorite bits we’ve ever done. There are a few spots where it gets a tiny bit uncanny valley, but there are also moments that really freak me out with how natural-looking they are. Kevin’s facial expressions in particular just… look like Kevin. It also looks like a real live panel game! And using the traditional panel show format lets us get around some of the serious limitations of this technique (i.e. characters can only face forward).
The skeletal tracking data was captured in Processing with KinectToPin running SimpleOpenNI, then applied to multi-layered puppets rigged in After Effects. Facial animation was done with a combination of automatic lip sync to audio waveforms and a couple of Motion Sketch nulls controlling smile-vs-frown and eyebrow height. I then switched between the 10 different camera angles (all using the same puppet precomps as sources) by bringing them into Premiere via Dynamic Link and creating nested multi-cam sequences.
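The waveform-driven part of that lip sync can be sketched in plain JavaScript. This is my own illustration of the general approach, not the project’s actual setup: audio amplitude (e.g. the slider produced by After Effects’ “Convert Audio to Keyframes”) gets bucketed into a handful of mouth shapes by simple thresholding. The shape names and thresholds here are hypothetical:

```javascript
// Sketch of waveform-driven lip sync: normalize the amplitude, then pick
// a mouth shape by which bucket the normalized value falls into.
const MOUTHS = ["closed", "small", "mid", "open"]; // hypothetical shape names

function mouthForAmplitude(amp, maxAmp) {
  const t = Math.min(amp / maxAmp, 1); // normalize to 0..1
  // Map 0..1 onto the mouth-shape list, clamping the top end.
  const idx = Math.min(Math.floor(t * MOUTHS.length), MOUTHS.length - 1);
  return MOUTHS[idx];
}

console.log(mouthForAmplitude(0, 10)); // → "closed"
console.log(mouthForAmplitude(9, 10)); // → "open"
```

In the rig itself, the chosen index would drive the Time Remap (or opacity) of a mouth-shapes precomp rather than returning a string.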
All that meant there was a lot of asking software to do things it wasn’t meant to do, and I spent a good bit of the animation process going “I can’t believe this is actually working…” There were a few hiccups, though, and lessons learned for the future: next time I’m going to do the sequence edit before I add the Kinect and lip sync data — there were close to a million keyframes in the project at that point and Premiere really started to choke. But now that everything’s rigged and ready to go, I could probably turn around a new episode in a single day. Which, for several minutes of full-color, full-motion animation, is insane.
So in my Kinect + After Effects tutorials I offer a couple ways to rig the puppet’s head, but neither is an ideal solution: the first leads to occasional face-stretching, and the second substantially increases the manual animation workload.
But there’s a better way! Put the anchor point in the center of the face, and attach the position keyframe to the Head control point. Then apply the following expression (based on one originally found here) to the rotation parameter:
ang = radiansToDegrees(angle);
Now the head will rotate to match the angle formed by the head and neck points, but without the weird distortion the Puppet Tool can cause. You can tweak the head’s attach point by shifting the anchor point.
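The math behind that rotation can be shown in plain JavaScript. This is a sketch of the general technique (in the actual expression you’d fetch the two points with `thisComp.layer(...)` lookups); the function name is mine:

```javascript
// Rotate the head layer to the angle of the neck-to-head segment.
// Points are [x, y] pairs; atan2 gives radians, AE rotation wants degrees.
function headRotation(head, neck) {
  const dx = head[0] - neck[0];
  const dy = head[1] - neck[1];
  return Math.atan2(dy, dx) * (180 / Math.PI);
}

// Head directly above the neck (AE's y axis points down, so "above" is -y):
console.log(headRotation([0, -100], [0, 0])); // → -90
```

In practice you’d add a constant offset (e.g. +90) so an upright head reads as 0° rotation, then tweak it to taste.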
This tutorial is now obsolete. Check out the new KinectToPin website for the latest version of the software and how to use it — it’s dramatically easier now.
I’m working on a series of films about juries at the moment. They should be pretty fun to do (I get to animate trial by ordeal, for one), but there’s a lot of character work and not a lot of time. Thus, digital puppetry.
I was hoping to work with After Effects’ extremely fun Puppet Tool, but the results I got while experimenting were just a little too squishy for this project. (Anyone know some tricks for getting convincing, not-too-exaggerated motion out of it? Even liberal use of the starch tool seemed unhelpful, and I couldn’t for the life of me figure out a way to make elbows and knees bend properly.) So for the moment it’s back to IK rigging — and a lot of carefully placed anchor points.
I’m much more satisfied with the results, particularly now that I have a keyframable checkbox parameter that switches the bend direction of the joints. In plain English, I can make someone’s elbows bend both ways — e.g. a character can go from having their hands on their hips to picking something up off the table next to them with very little trouble.
Creating the jurors themselves was a lot of fun — the characters need to function more as archetypes than individuals. The result: a wide range of ages and races and a complete lack of faces.