I was sick for much of last week, and I’ve also been planning a short film. But my Kinect finally arrived today, so I was ready to move forward with another project, based on the video below.
The video is excellent and thorough, and the associated .blend file with prebuilt rigs was a big time-saver. I got almost all the way through before I ran out of time while attaching the capture rig to a new armature. To be fair, I also spent three hours tonight playtesting my RPG.
Still, I learned a lot!
- The Kinect comes with warnings about damaging certain USB ports. My finding is that you need to plug it into a Gen 2 USB port. Recent motherboards, like the AMD Ryzen board I’ve got, have one Gen 2 port on the back, so I plugged it into that.
- During installation of the Kinect SDK and other tools, I got warnings about problems with both the USB port and the Kinect configuration. Capture seems to work just fine, though, so I think these warnings can be ignored.
- Kinect Studio is very laggy on displays above 60 Hz, so I went straight to NI-Mate’s own preview instead.
- Every time you start NI-Mate, you need to re-enable the skeleton-tracking option.
- For the Kinect to start doing skeletal tracking, you need to move around and otherwise present yourself as a humanoid shape to the sensor.
- I mounted the Kinect on a camera tripod, with the sensor at about eye level when I’m sitting down. At a distance of 10 feet, that gave me a good full-body view.
- The full-body view wasn’t as helpful as I hoped, since (to nobody’s surprise) depth sensors don’t always know what to do with fat people such as myself. I’d probably get more accurate tracking with VIVE trackers, but those are $100 a pop. To be fair, I paid around $180 for the Kinect plus adapter.
- Auto-Rig Pro seems to want me to pre-bake animations before it will retarget rigs. If there’s a way to make it track stuff live, I haven’t found it.
- I haven’t tried BVH Retargeter, but I think it would work for my scenario. Failing that, I’ll write a Python script to automate the bone retargeting I want to do – probably starting with MB-Dev and VRoid Studio rigs.
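As a first pass at that retargeting script, the name-mapping half can be sketched outside Blender entirely. The bone names below are hypothetical placeholders, not real NI-Mate or VRoid names, and `build_retarget_pairs` is just an illustrative helper – inside Blender, each resulting pair would then drive a Copy Rotation constraint via bpy.

```python
# Sketch of the bone-name mapping step for a retargeting script.
# All bone names here are hypothetical placeholders; real NI-Mate
# capture rigs and VRoid armatures use different names.

MOCAP_TO_TARGET = {
    "Torso": "Spine",
    "Neck": "Neck",
    "Head": "Head",
    "Left_Shoulder": "UpperArm.L",
    "Left_Elbow": "LowerArm.L",
    "Left_Hand": "Hand.L",
}

def build_retarget_pairs(source_bones, target_bones, mapping=MOCAP_TO_TARGET):
    """Return (source, target) bone pairs present in both armatures,
    plus a list of mocap bones whose target bone is missing."""
    pairs = [(s, t) for s, t in mapping.items()
             if s in source_bones and t in target_bones]
    unmatched = [s for s in mapping
                 if s in source_bones and mapping[s] not in target_bones]
    return pairs, unmatched
```

Keeping the mapping as plain data like this means swapping in an MB-Dev or VRoid naming scheme is just a new dictionary, not new logic.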
My next step is the last one – tie the capture rig from the Kinect to an existing character, so I can start really animating. After that, I need to build a face-capture helmet. Once I’ve got those pieces together, I should be able to handle everything but finger motion.
I also found an interesting plugin: https://github.com/nerk987/triangulate
It looks like this is another way to improve optical motion capture via multiple cameras. The advantage of this system is that you can do whatever kind of tracking you like, such as setting up your own markers. The disadvantage is that it requires significant setup, two fixed cameras, and some timing foo to make sure the clips from those cameras start at the same moment. All doable, but only worth doing if the Kinect solution turns out to be unable to capture at the resolution I want.
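The geometric core of that multi-camera idea is simple enough to sketch: each camera sees a marker along a ray, and the 3D position is estimated as the midpoint of closest approach between the two rays. The camera positions and directions below are made-up illustrations, and this ignores the lens calibration and timing sync the plugin actually has to handle.

```python
# Triangulating a point from two camera rays: find the midpoint of
# the shortest segment connecting ray p1 + t*d1 and ray p2 + s*d2.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, k): return tuple(x * k for x in a)

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between two 3D rays."""
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = sub(p2, p1)
    d, e = dot(w, d1), dot(w, d2)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel")
    t = (d * c - b * e) / denom   # parameter along ray 1
    s = (b * d - a * e) / denom   # parameter along ray 2
    q1 = add(p1, scale(d1, t))    # closest point on ray 1
    q2 = add(p2, scale(d2, s))    # closest point on ray 2
    return scale(add(q1, q2), 0.5)

# Two cameras at (0,0,0) and (2,0,0), both sighting a marker at (1,1,5).
point = triangulate((0, 0, 0), (1, 1, 5), (2, 0, 0), (-1, 1, 5))
```

With perfectly aimed rays the midpoint recovers the marker exactly; with real, noisy tracking the two rays won’t intersect, which is exactly why the midpoint formulation is used.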