There are a lot of low-cost motion capture solutions available. Unfortunately, none of them offer full coverage.
The ones that I’ve seen include:
- Kinect and NI-Mate (see this video) – covers the whole body (the areas shown in red), but not facial expressions or hands
- OpenMocap – uses existing VR hardware to capture the full body (the red area), but because it relies on a head-mounted display, it rules out face capture
- BMC – covers both body and face capture (red and green), but currently requires mailing the data to yourself, which rules out real-time scenarios
- Valve Knuckles and VIVE Trackers – can capture more specific positional data, including fingers, but require software that hasn’t been written yet (as far as I know)
The most promising long-term solution is probably BMC, provided the mobile app can also transmit data to Blender in real time, and once it gains finger support.
The most promising short-term solution is OpenMocap, because it uses hardware I already own to capture the data I need most. For animation practice, I can mostly ignore the hands.
My ideal, but time-consuming, solution would be to write my own code, using OpenXR and other libraries to synthesize data from different sources and feed it into Blender. At that point I might as well have re-invented NI-Mate, and that’s well beyond the scope of what I feel ready to do.
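For what it’s worth, the “synthesize data from different sources” step could start out very simple. Here’s a minimal Python sketch that merges per-joint positions from a hypothetical body tracker and a hypothetical hand tracker, letting the more specialized source win where both report the same joint. The joint names and data shapes are invented for illustration; a real pipeline would read poses via OpenXR and push them into Blender through its `bpy` API.

```python
# Hypothetical sketch: merge per-joint pose data from two capture sources.
# Joint names and dict structure are made up for illustration; a real
# pipeline would read poses via OpenXR and apply them in Blender via bpy.

def merge_poses(body_source, hand_source):
    """Combine two {joint_name: (x, y, z)} dicts into one pose.

    The hand tracker is assumed to be more precise for fingers, so where
    both sources report the same joint, the hand source wins.
    """
    merged = dict(body_source)   # start with full-body data
    merged.update(hand_source)   # overwrite/extend with finger data
    return merged

# Example frames from the two (imaginary) devices:
body_frame = {
    "hips": (0.0, 1.0, 0.0),
    "hand_L": (0.30, 1.20, 0.10),   # coarse hand position from body tracker
}
hand_frame = {
    "hand_L": (0.31, 1.21, 0.10),   # finer hand position from finger tracker
    "index_L": (0.35, 1.22, 0.12),  # fingertip the body tracker can't see
}

frame = merge_poses(body_frame, hand_frame)
```

In practice each source would also need timestamp alignment and conversion into a shared coordinate space before merging, which is where most of the real work (and most of what NI-Mate already does) would be.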
Image by Jonggun Go from Pixabay