In the fourth part of this series, Matthew Collu takes an in-depth look at two of the more prominently used types of motion capture in a virtual production environment.
In recent decades of digital storytelling, whether in gaming or filmmaking, the incredible technological feat of performance capture has become commonplace in our virtual ecosystems. Not long ago, the subtlety and nuance of a completely digital, fantastical avatar had to be added meticulously by an artist, or existed only as a retrofuturistic concept in the far reaches of a fictional future. Nowadays, it’s not only a viable means of content creation but an accessible one too, with no better example than the new virtual production workflow.
Over the years, as this technology has developed to serve different purposes across industries, a range of products and pipelines have hit the market. All offer attractive benefits and intriguing use cases, but a quick definition of the practice may prove helpful first.
Motion capture is a way of tracking and recording the motion of an object – sometimes a prop or part, other times a living creature like you or me. This is done in a couple of ways. The most common in studio production uses infrared light and small reflective markers that bounce it back to specialised cameras collecting the data. The second is a much more recent solution: a series of accelerometers is lined across the surface of an object, and their readings are translated back into rigid-body data. This allows wireless motion capture, but the result can be far less detailed.
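To make that inertial approach concrete, here is a minimal sketch – in Python with NumPy, and very much a simplification – of the core update step: integrating an angular rate reading into an orientation quaternion each frame. Real inertial suits fuse multiple sensors to fight drift; none of that is shown here.

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0 * w1 - x0 * x1 - y0 * y1 - z0 * z1,
        w0 * x1 + x0 * w1 + y0 * z1 - z0 * y1,
        w0 * y1 - x0 * z1 + y0 * w1 + z0 * x1,
        w0 * z1 + x0 * y1 - y0 * x1 + z0 * w1,
    ])

def integrate_rate(q, omega, dt):
    """Advance orientation q by angular rate omega (rad/s) over dt seconds."""
    # q_dot = 0.5 * q ⊗ (0, omega); one Euler step, then renormalise
    # so the result stays a unit quaternion.
    dq = 0.5 * dt * quat_multiply(q, np.array([0.0, *omega]))
    return (q + dq) / np.linalg.norm(q + dq)

# One sensor sample at 120 Hz: a slow yaw of 0.1 rad/s.
q = np.array([1.0, 0.0, 0.0, 0.0])          # identity orientation
q = integrate_rate(q, np.array([0.0, 0.0, 0.1]), 1.0 / 120.0)
```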
Motion capture is thus a workflow with a simple definition, but with numerous use cases, types and applications across the creative spectrum. Simply being able to capture the movement of something through a volume of space and transfer that data in real time to a digital counterpart presents an entire universe of potential.
“While there are a number of existing options for camera tracking, the one to be selected is one that can be deployed without limiting production,” says The Other End’s Head of Virtual Production, Kevin McGeagh. “Crews need to be comfortable performing their duties without interference while still ensuring a stable, accurate output.”
Without some kind of motion capture solution, there is no reliable way to track the camera’s position correctly, which means no reprojection of its perspective on an LED wall. Most of the time, you won’t have to worry about going without tracking entirely; the real dilemma is one of choice. Of the many pipelines forged during the advent of this technology, a couple have proven to be the winners when it comes to reliable use in studio scenarios. So instead of explaining every motion capture solution, I’ll shift the focus to two of the more prominently used types in a virtual production environment: outside-in and inside-out.
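Before getting into the two types, it’s worth sketching what that tracked pose actually drives. The toy example below (Python with NumPy; the offset values are placeholders, not anyone’s real calibration) composes the raw tracker pose with a fixed tracker-to-lens offset – the calibrated transform that keeps the rendered frustum locked to the physical camera.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 rigid transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Fixed offset from the tracking device to the lens, measured once
# during calibration. These numbers are placeholders.
TRACKER_TO_LENS = pose_matrix(np.eye(3), np.array([0.0, -0.05, 0.12]))

def virtual_camera_pose(tracker_pose):
    """Per frame: tracked pose in, lens pose out. This transform is what
    drives the perspective reprojected on the LED wall."""
    return tracker_pose @ TRACKER_TO_LENS
```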
Outside-in
The most widely understood and used solution type, outside-in motion capture is the workflow we’ve all come to commonly associate with the topic. Motion capture cameras are scattered around a volume, all pointed inward to track whatever is marked and moving through a cloud of infrared light. Companies like OptiTrack and PhaseSpace are the largest providers of this kind of workflow, which is used well beyond virtual production – from gaming to visual effects, it is found in some of the largest motion capture studios worldwide. It is precisely what the name suggests: motion is captured from outside sources looking in on a volume.
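To make the geometry concrete, here is a minimal sketch of the outside-in principle using OpenCV: two calibrated cameras spot the same reflective marker, and triangulation recovers its 3D position. Every number below is a placeholder, and a real system would do this across dozens of cameras and markers at once.

```python
import numpy as np
import cv2

# Shared intrinsics for two identical cameras (placeholder values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# 3x4 projection matrices: camera one at the origin, camera two half a
# metre to its right, both looking into the volume.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Pixel coordinates of the same marker in each view, shaped (2, N).
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[310.0], [241.0]])

# Triangulate to homogeneous coordinates, then normalise to metres.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
marker_xyz = (X_h[:3] / X_h[3]).ravel()
```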
However, this means you’re at the mercy of something every good director always wants – coverage. Looking inwards at a select group of markers, the solution can only track what it sees. If anything occludes those markers, tracking consistency erodes quickly, and your data will bounce around like a ping-pong ball in a paint shaker if you’re not careful.
Inside-out
More recently developed, inside-out motion capture is the same type of solution found on new-generation VR headsets. Rather than infrared cameras pointing in towards an allocated space with markers placed within it, cameras face outwards and track infrared light bouncing around the space. The points of contrast that make up a room can be tracked by how and where that light bounces.
Another, more advanced and consistent system that leverages this kind of solution is Mo-Sys’ StarTracker. It places markers – ‘stars’ – on the ceiling above the tracked object; an upward-facing sensor maps those points and computes its position relative to them in real time. Both follow the same principle but differ in how they read the space.
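As a sketch of that principle in code, OpenCV’s solvePnP recovers a camera’s pose from a known map of ceiling points and where an upward-facing sensor sees them this frame. The marker layout and pixel values below are placeholders, not Mo-Sys’ actual data.

```python
import numpy as np
import cv2

# Known 3D positions of ceiling markers in stage coordinates – the map
# built once during setup (placeholder values, in metres).
ceiling_map = np.array([[0.0, 0.0, 4.0],
                        [1.0, 0.0, 4.0],
                        [0.0, 1.0, 4.0],
                        [1.0, 1.0, 4.0]])

# Where the upward-facing camera sees those markers this frame (pixels).
observed_px = np.array([[310.0, 230.0],
                        [560.0, 228.0],
                        [312.0, 480.0],
                        [558.0, 482.0]])

# Camera intrinsics (placeholder values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Solve for the camera's pose relative to the marker map.
ok, rvec, tvec = cv2.solvePnP(ceiling_map, observed_px, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()   # sensor position in stage space
```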
Despite seeming to be the opposite of outside-in, inside-out systems have the same issue: they need enough of something to look at. If there aren’t easily identifiable points around them, they can’t determine their correct location. This becomes a real concern when contemplating use with an LED wall accompanied by a ceiling LED rig. I know the sky is usually where you find the stars, but in this situation, not so much. Again, though, this is an easy fix once you understand the inner workings of each kind of solution.
Much as virtual production is a product of astute reconfiguration and inventive, creative problem-solving, so too are the many motion capture solutions. This broad term covers a multitude of ways to attack a creative endeavour, and even more ways to bring something from synapses to screen. The real challenge is choosing the solution that works best for you.
Matthew Collu is Studio Coordinator at The Other End, Canada. He is a visual effects artist and cinematographer with experience working in virtual production pipelines and helping production teams leverage the power of Unreal Engine and real-time software applications.