When Kevin’s brother turned 30, Kevin said we should get him something “monumental”. I, being my usual smartass self, said “What, like a monument?”
And, well, here it is: a victory stele commemorating the first 30 years of Dennis’ life.
There are fish because he’s had to travel to Norway in the dead of winter to inspect salmon farms, and a winged lion because WINGED LION.
It’s laser-engraved on stone, a medium I’ve never had a chance to try before. The design is an intentional mishmash of multiple ancient cultures, and the text is transliterated into Linear B, so, uh, it’s going to be a delightfully confusing find for any future archaeologist who stumbles across it.
The stone is actually a floor tile, and for whatever reason the laser turned it white when the EtchPop guys carved it, so it’s easier to read than I expected!
…except for the whole “Linear B” thing, which makes it… not exactly easy to read? Dennis is trying, though:
This was really fun to do, so… If you need anything designed to be carved into stone with a laser, let me know!
But now that that’s out, I’m working on something new that gets around one of the biggest remaining issues with rigging 2.5D Kinect characters: automating layer scaling based on Z-distance. It’s one of the most annoying things to deal with, and until now the best options were “stay in one depth plane” or “manually scale things up and down.” Ugh.
The guy on the left is what happens if you walk back and forth toward the camera and don’t account for it:
The little expression I came up with this morning turns the same character with the same mocap data into the guy on the right.
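The idea behind it is simple perspective math: scale each layer in proportion to its Z-distance from the camera. Here’s a rough sketch of that math in plain JavaScript (the focal distance is an assumed placeholder value, and this is just the underlying idea, not the actual expression):

```javascript
// Perspective scaling by Z-distance: a layer at z = 0 stays at 100%,
// and layers farther from the camera shrink proportionally.
// (Illustrative math only; in an AE expression you'd read z from the
// mocap layer's position and return [s, s] for the Scale property.)
function scaleForZ(z, focalDistance) {
  focalDistance = focalDistance || 1000; // assumed camera focal distance, in pixels
  return 100 * focalDistance / (focalDistance + z);
}
```

Drive every layer’s scale with the same function and the whole rig stays in proportion as the performer walks toward or away from the camera.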
Keep in mind this is an experimental feature and at the moment only works for camera-facing characters. It won’t be added to the UI Panel until I’ve worked out the necessary layer space transforms and a couple of bugs. In the meantime, if you’re eager to try it out, here’s what to do:
In the 3D template, set this as the “mocap” layer’s position expression:
So Kevin needed a title sequence for one of his latest projects, a series of (very!) short films of his compositions based on Erin Watson’s poems inspired by @horse_ebooks tweets. One of them is set on a train platform, but was actually shot in a parking garage, so I wanted to add some “train-ness” to the film. But this was a super-super-super quick thing, so I didn’t want to bother with, say, actually going out and filming a train. So I made one.
Out of five rectangles and two keyframes.
How does that work? Roughly like this:
1. Make a “train car” comp that’s a big rectangle with some smaller ones for windows, like so:
2. Keyframe it looping once forward using the Offset effect, and cycle that with a loopOut expression.
3. Blur the hell out of everything. Crank the shutter angle way up on the motion blur.
4. Add some flicker by applying the Exposure effect controlled by a wiggle expression.
5. Tint, add some grain and vignetting.
6. Throw some text on top and some train sounds underneath, and voila — easy semi-abstract titles in about five minutes!
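For the curious, steps 2 and 4 each boil down to a one-line AE expression: loopOut("cycle") on the Offset effect’s Shift Center To property, and something like wiggle(8, .5) on Exposure (those wiggle values are a guess; tune to taste). The cycling itself is just time wrapping, roughly like this plain-JavaScript sketch (not AE’s actual implementation):

```javascript
// How loopOut("cycle") repeats a two-keyframe linear Offset animation:
// wrap time back into the keyframed range, then interpolate.
function cycledOffset(time, duration, span) {
  var t = time % duration;      // wrap past the last keyframe
  return (t / duration) * span; // linear ramp from 0 to span, repeating forever
}
```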
After a lot of work (and not a lot of sleep), the all-new KinectToPin has arrived. It’s Kinect motion capture for After Effects in one convenient package! Well, actually, two convenient packages: KinectToPin now includes both a standalone capturing app and an After Effects UI Panel plugin. It’s also about a million times easier to use, and has a ton of new features — automatic setup and rigging (no more coding), direct XML import, 3D features, audio playback during capture… I’m tempted to quote the RUCKUS NYC Kickstarter infomercial and say “And it just! Keeps! Going!”
It also has a snazzy new website all to itself: kinecttopin.fox-gieg.com. Download it and give it a try, I’d love to see what you can do with it!
Instead of keyframing the characters by hand to match the existing audio track, “Machine Politics” was created using Kinect motion capture. I think the visuals add a lot to what was already one of my favorite bits we’ve ever done. There are a few spots where it gets a tiny bit uncanny valley, but there are also moments that really freak me out with how natural-looking they are. Kevin’s facial expressions in particular just… look like Kevin. It also looks like a real live panel game! And using the traditional panel show format lets us get around some of the serious limitations of this technique (i.e. characters can only face forward).
The skeletal tracking data was captured in Processing with KinectToPin running SimpleOpenNI, then applied to multi-layered puppets rigged in After Effects. Facial animation was done with a combination of automatic lip sync to audio waveforms and a couple of Motion Sketch nulls controlling smile-vs-frown and eyebrow height. I then switched between the 10 different camera angles (all using the same puppet precomps as sources) by bringing them into Premiere via Dynamic Link and creating nested multi-cam sequences.
All that meant there was a lot of asking software to do things it wasn’t meant to do, and I spent a good bit of the animation process going “I can’t believe this is actually working…” There were a few hiccups, though, and lessons learned for the future: next time I’m going to do the sequence edit before I add the Kinect and lip sync data — there were close to a million keyframes in the project at that point and Premiere really started to choke. But now that everything’s rigged and ready to go, I could probably turn around a new episode in a single day. Which, for several minutes of full-color, full-motion animation, is insane.
You can’t animate the masks directly inside the script palette itself (although you can generate them at multiple keyframes along a single mask path), but that isn’t really a problem — I managed a lot of neat effects using a single static shape. These patterns work especially well as inputs for the Radio Waves effect — add a generous amount of blur and glow, and boom, instant animated BGs.
Here are stills of a few things I came up with during testing. If your screen’s 1920×1080, they make decent wallpaper:
I still want to experiment a bit more, but once I have things worked out I’ll do some presets and tutorial stuff for your downloading enjoyment.
So what exactly am I doing with all the mocap stuff I’ve been working on with KinectToPin + After Effects? Well… add in expression-controlled facial animation and using Dynamic Link to live-switch unrendered AE comps via Premiere’s multicam setup (I am kind of freaked out that this seems to Just Work), and it looks like we’re about to have an animated Actually Happening. Shhh!
I’ve been figuring this out as I go, but once I have all the elements rigged it should be almost trivial to make new episodes. Also, I built the set in PHOTOSHOP, which is ridiculous. I don’t have any proper 3D software on my laptop, so the table is all Repoussé shapes extruded from rounded rectangles.
So in my Kinect + After Effects tutorials I offer a couple of ways to rig the puppet’s head, but neither is an ideal solution: the first leads to occasional face-stretching, and the second substantially increases the manual animation workload.
But there’s a better way! Put the anchor point in the center of the face, and attach the position keyframe to the Head control point. Then apply the following expression (based on one originally found here) to the rotation parameter:
ang = radians_to_degrees(angle); // "angle" is the angle of the neck-to-head vector, in radians
Now the head will rotate to match the angle formed by the head and neck points, but without the weird distortion the Puppet Tool can cause. You can tweak the head’s attach point by shifting the anchor point.
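If it helps to see the full calculation spelled out, here it is as runnable plain JavaScript (the layer names and the 90° offset are my assumptions; in the actual expression the positions would come from the Head and Neck control layers via thisComp.layer(...).position):

```javascript
// Rotation matching the neck-to-head vector, in AE's y-down coordinates.
// An upright head (directly above the neck) reads as 0 degrees.
function headRotation(headPos, neckPos) {
  var dx = headPos[0] - neckPos[0];
  var dy = headPos[1] - neckPos[1];
  var angle = Math.atan2(dy, dx);    // angle of the neck->head vector, in radians
  return angle * 180 / Math.PI + 90; // AE: radians_to_degrees(angle), offset so upright = 0
}
```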
This tutorial is now obsolete. Check out the new KinectToPin website for the latest version of the software and how to use it — it’s dramatically easier now.