Specs, Integrations, and Production
What makes this technology special is the intersection of fidelity and compression. There is no reanimation of the 3D assets and no CG intervention in their movement; every asset is derived directly from photographic and video data, so the resulting performances are lifelike and authentic.
Our stage uses 106 cameras and 8 shotgun microphones to capture performances from every possible angle. Point-cloud data is converted to mesh data, then smoothing, compression, and encoding produce the final deliverable. We can output meshes at polycounts from 5K to 250K and textures from 1K to 4K, so assets can serve a wide range of applications, from cinematic VR experiences to streaming mobile AR, with little to no quality loss. Depending on shot length and final integration needs, an asset can be as small as 5 MB for 10 seconds of playback.
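The 5 MB / 10 second figure implies a very modest streaming budget. A quick back-of-envelope check (illustrative only; the variable names are ours, not part of any delivered tooling):

```python
# Rough bitrate implied by a 5 MB asset covering 10 seconds of playback.
asset_size_mb = 5   # delivered, encoded asset size in megabytes
duration_s = 10     # shot length in seconds

bitrate_mbps = asset_size_mb * 8 / duration_s  # megabits per second
print(bitrate_mbps)  # 4.0 Mbps
```

At roughly 4 Mbps, such an asset sits comfortably within typical mobile streaming budgets, which is what makes the mobile AR use case practical.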
Prior to production, we schedule a test day where every detail of the capture, from wardrobe to props, is assessed.
On Production Day, clients can capture multiple takes to get the best shot. Shooting with 106 cameras yields 10 GB of raw data per capture (or more!), so during capture we take careful notes on every take of every shot to track selects and safeties (critical to avoid overloading our servers).
After production, we send time-coded dailies of each select shot so that frame ranges can be chosen for final data processing. Delivery typically takes 2-3 weeks after we receive frame ranges, depending on the amount of data being delivered.
Deliverables and Tools
Encoded .mp4 containers: high-detail, compressed and ready to go
Un-encoded .obj/.png sequences and .wav files (for assets with audio capture) per shot
Plugins: Unity, Unreal, Magic Leap, Quest
Tools: Gaze Adjustment shader, Maya Roto Tool (for locator tracking, and mesh retargeting)
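For pipelines consuming the un-encoded deliverables, each shot is a per-frame sequence of mesh and texture files. A minimal sketch of pairing those frames, assuming a hypothetical naming layout like `frame_0001.obj` / `frame_0001.png` (the actual delivered naming may differ):

```python
from pathlib import Path

def list_shot_frames(shot_dir):
    """Pair each per-frame .obj mesh with its matching .png texture.

    Assumes one shot per directory with matching basenames, e.g.
    frame_0001.obj alongside frame_0001.png -- a hypothetical layout
    for illustration, not a documented delivery spec.
    """
    frames = []
    for obj in sorted(Path(shot_dir).glob("*.obj")):
        tex = obj.with_suffix(".png")
        if tex.exists():  # skip meshes with no matching texture
            frames.append((obj, tex))
    return frames
```

A game engine importer would iterate this list in order, loading each mesh/texture pair as one frame of the volumetric sequence (with the shot's .wav file, where present, played alongside).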
The power of human presence, in virtual 3D space.