Here we are again, ready for a fresh start after a year of technological progress and humanitarian setbacks. But let’s focus on the things that have stood out in our industry over the past 12 months. Most of them feel helpful, and some could actually upend the way we’ve been doing things for decades:
- Gaussian Splatting. This is not a brand-new technology, but with 3D scanning on the rise, it has become more prevalent. It's essentially an uber-point cloud where a "splat" is placed at each point and then blended with its neighbors. It's really lightweight and great for visualizing scenes in real time. It can't be used for all 3D work, though, because it doesn't produce a polygonal mesh. But the same captured data used to derive the splats also feeds photogrammetry and NeRF pipelines: There are multiple paths to take depending on your needs.
- Neuralangelo. Here we have yet another tool for generating 3D meshes. Developed by NVIDIA and Johns Hopkins University, the technique takes video and extracts a more detailed mesh than photogrammetry or straightforward NeRFs. The math is multi-sampled, iterative and more complex than I want to get into here. But given that NVIDIA is part of the science, you know there will be GPU acceleration involved, and there are applications ready to implement it, and not just for creating visual effects.
- MetaHuman Animator. MetaHuman (Epic's realistic human-creation tool) is back again, and Epic has expanded the tool set to make it relatively easy to capture a facial performance on your phone and apply it to a MetaHuman.
- Also of note is a lightweight parametric surface modeler geared mostly to product and engineering design. It doesn't seem to be meant for precise manufacturing, but more for quickly prototyping designs, kind of like SketchUp. Plus, at $100, its barriers to entry are super low.
- Move One. What's not to like about single-camera motion capture using your iPhone? You can capture single subjects anywhere without suits or studios. With Move One, you can record takes up to 60 seconds, get your motion data back in five minutes, and apply it to a character via FBX or USD.
- Stable Video Diffusion. This is a useful expansion of Stable Diffusion to make moving images. The noncommercial version is limited to a few seconds, and some results out of the box are frightening, more in a Jacob's Ladder-nightmare way than an AI-is-taking-over-the-world way. But combine this with a quiver of additional tools for de-flickering, continuity, uprezzing, etc., and some intriguing possibilities are on the horizon.
- Firefly/Photoshop Generative Fill. Adobe’s AI is astonishingly powerful and is really well integrated into its tools. Bonus points go to Adobe for proclaiming that the company only trains Firefly on its own image library.
- Vizcom.ai. Designers can now conceptualize ideas through sketches, photos, 3D models, etc., and then feed them into Vizcom.ai, which will attempt to translate the idea into a more finalized image. The result can then be adjusted and iterated on, providing an artist-AI synergy that I like to see (as opposed to simply typing words and proclaiming victory at finally becoming an artist)!
- LucidLink. With all the cloud storage and collaboration environments such as Dropbox, Google Drive and Box, do we really need another system? Evidently, according to my visual effects cohorts, LucidLink is the next step up for launching and distributing projects to remote teams.
- ProdPro. This is a helpful tool for tracking data on film productions and providing analytics based on that data. ProdPro helps with crewing, networking, production planning, release planning, etc. And while most of that is above the pay grade of us wee artists, we can use it to be a little more engaged in choosing our next projects.
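For the curious, the splat blending described under Gaussian Splatting above boils down to classic front-to-back alpha compositing. Here's a minimal toy sketch in Python (a hypothetical single-pixel simplification; it assumes the splats have already been projected to the pixel and sorted front-to-back by depth, which is where the real work happens):

```python
import numpy as np

def composite(colors, alphas):
    """Alpha-composite sorted splats into one pixel: each splat
    contributes its color weighted by its opacity and by the
    transmittance left over from the splats in front of it."""
    pixel = np.zeros(3)
    transmittance = 1.0
    for color, alpha in zip(colors, alphas):
        pixel += transmittance * alpha * np.asarray(color, dtype=float)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early exit once the pixel is effectively opaque
            break
    return pixel

# A mostly opaque red splat in front of a green one:
# red contributes 0.7, green only what shines through (0.3 * 0.5).
print(composite([(1, 0, 0), (0, 1, 0)], [0.7, 0.5]))
```

The real technique evaluates anisotropic 3D Gaussians projected to 2D and runs this loop per pixel on the GPU, but the compositing math is the same, which is why it renders so fast compared to a NeRF's per-ray network queries.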
Todd Sheridan Perry is an award-winning VFX supervisor and digital artist whose credits include I'm a Virgo, For All Mankind and Black Panther. You can reach him at teaspoonvfx.com.