Blackmagic Design’s DaVinci Resolve 19
After a protracted beta phase (compared to previous releases), Blackmagic finally released Resolve 19, and as usual, I don’t have space here to cover all of its nearly 100 new features. There are advances across Cloud Collaboration, Cut and Edit, Color, Fusion and Fairlight. I’m going to keep the focus on us mortals with a Resolve workstation in the living room, rather than the bigger facilities with multiple editors working around the world on the same project or the broadcast studios covering live sporting events (not that there aren’t important tools for those folks too!).
In the Cut and Edit modes, we have a Speech to Text Transcription feature, which generates editable text from clips with dialogue in your media bin. In turn, you can edit your clips in the timeline based on that transcribed text. And speaking of text, you can autogenerate subtitles into the subtitle track on the timeline; the subtitles appear as clips, which you can click on and edit. Further, on the Cut timeline, you can run Scene Cut Detection, which identifies shot changes in the footage and cuts up the clip based on them.
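Scene cut detection in general works by flagging frames where the image changes abruptly from the previous one. This is not Resolve’s actual algorithm, just a minimal Python sketch of the idea, thresholding frame-to-frame differences on invented per-frame luminance values:

```python
# Minimal scene-cut detection sketch: flag frames whose average
# luminance jumps sharply from the previous frame. Real detectors
# (including Resolve's) are far more sophisticated; this only shows
# the core thresholding idea on made-up per-frame luminance values.

def detect_cuts(frame_lumas, threshold=0.3):
    """Return frame indices where a cut likely occurs."""
    cuts = []
    for i in range(1, len(frame_lumas)):
        if abs(frame_lumas[i] - frame_lumas[i - 1]) > threshold:
            cuts.append(i)  # frame i starts a new shot
    return cuts

# Two steady "shots" with an abrupt brightness change at frame 4
lumas = [0.20, 0.21, 0.19, 0.20, 0.75, 0.74, 0.76]
print(detect_cuts(lumas))  # [4]
```

In practice the per-frame measure would be a full-image difference (or a histogram comparison) rather than a single luminance number, but the thresholding logic is the same.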
OpenTimelineIO is now supported as an importable/exportable timeline format (see the Nuke 15.1v2 review for how that works together). And, as if that weren’t enough, there is an AI-based voice isolation tool and a Dialogue Leveler. (I suspect these might be in Fairlight as well.)
On the Color Mode side: Along with many other AI tools, Resolve has the UltraNR Noise Reduction palette with individual controls for spatial noise and temporal noise. Film Look Creator introduces grain, halation, weave, flicker, vignetting, etc. I haven’t taken a deep dive like I did with Dehancer, but it’s definitely powerful. Composite Modes have been moved inside the color node via an RMB click. For me, this is really handy because I like to compare versions via a difference node; before, I would bounce back and forth between Edit and Color.
Fusion, the sorely underrated compositing tool within Resolve, has received some love in the form of some really sophisticated nodes. There is support for VDB files for rendering volumetric effects such as clouds and explosions! The Multipoly Tool brings all of your roto shapes into a controllable list (not gonna lie, this probably should have arrived much sooner). There is USD support as well. Because Fusion has a full 3D system, it absolutely makes sense that you can bring in USD stages, either for rendering in Fusion through Hydra-based renderers or for generating extra AOV passes without going back into a 3D DCC or requesting them from the lighter. Lastly, Fusion gains a Multi-Merge Tool that accepts multiple inputs into a layer system, making for a sort of hybrid node/layer compositing workflow.
We also get a slew of AI-driven tools fueled by the DaVinci Neural Engine: Person Masking, SuperScaling, Smart Reframing (for social media platforms), NPR Stylizing, Face Refinement (!), Dead Pixel Fixing, Object Removal and Patch Replacing. Last but not least, we have the Intellitracker for tracking and stabilizing, which can also be used to track characters or objects and then drive audio panning in Fairlight. Wow! There are so many new features, and the price is right too!
Website: blackmagicdesign.com/products/davinciresolve
Price: $219
ZBrush for iPad
After two years in development, ZBrush has been ported over to the iPad. Perhaps that time was spent rebuilding ZBrush from the ground up while assessing what could make it better as a whole, not just on the iPad. Perhaps the hardware needed time to mature with the M2 and M4 chips to handle what ZBrush demands. Regardless, the new app is shipping as of this printing. I have a feeling users are going to love it … and I say that because beta feedback is positive, and any new functionality will end up migrating to the next release of the desktop version.
ZBrush users will be happy to know that many, if not most, of the desktop features are found in the mobile version. There are hundreds of built-in sculpting brushes, as well as the ability to import user-created brushes. This accessibility also extends to ZTools and ZProjects, both interchangeable between the iPad and desktop versions.
The interface has been reconfigured for the device, so there is some initial searching around for your favorite tools. But rest assured that most everything is there (tools that aren’t in the initial release are definitely being worked on). You have to rewire your brain a little because instead of the modifier keys, you have an on-screen puck with customizable buttons. Moreover, it can be dragged around the screen so that it’s thumb-accessible whether you are holding the iPad from the top or the bottom, and whether you are a righty or a lefty. The Pencil taps and Pencil Pro squeezes are also customizable. Combined with multi-finger touch interactions, these features make for a very natural and intuitive experience.
High-power tools are part of ZBrush for iPad: ZRemesher, Sculptris Pro, Dynamesh, Live Boolean, Array Meshes, Dynamics and PolyPaint. All of these are available to the artist, and things stay amazingly responsive. The M2 iPad handles meshes as dense as 40M polys, while the M4 can tackle up to 92M. It’s honestly remarkable.
The beta testers have been loving the interaction and customizability so much that they are clamoring to have the same features implemented in the desktop version. For users who are new and want to dip their toe into digital sculpting, Maxon is offering a free version of ZBrush for iPad with 28 of the most popular brushes and ratcheted-down versions of Dynamesh, Sculptris Pro, ZSpheres and ZRemesher. It’s a great opportunity to learn the basics. When you’re ready to buy, ZBrush for iPad is part of the Maxon One subscription, or you can get a standalone subscription that covers a license for both the iPad and desktop versions.
This is the result of thousands of hours of dedicated time from passionate and super smart individuals spending weekends and overtime, filling buckets with sweat and tears, to make sure that they get this right. I’m happy to report that they’ve achieved their goals!
ZBrush for iPad requires iPadOS 17 or later, and is available on iPad models with A12 Bionic or later.
Website: zbrushforipad.com
Price: $33.25 per month; $399 per year
Foundry’s Nuke 15.1v2
It hasn’t been that long since my last review of Nuke, but in the new 15.1v2, there are some important technical features that need to be mentioned. As our readers know, Nuke is the industry go-to compositing system brought to us by Foundry and born as the in-house compositor for Digital Domain.
Let’s take a quick look at the new things going on in this deceptively thick point release, which includes Nuke, Nuke Studio and Hiero:
BlinkScript, Nuke’s internal C++-like framework for running code on pixel data, has some changes to make it more accessible and efficient. Some of these come through more thorough documentation and better visibility for variable types and such. But the BlinkScript node now allows you to pass in four channels of data from a channel layer, which makes room for channels such as motion and depth to be modified alongside the RGBA data, streamlining what would have been multiple Blink scripts down to one.
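To picture why four-channel access helps, here is a conceptual sketch in plain Python (not BlinkScript, and not Foundry’s API) of a single per-pixel pass that touches both the RGBA layer and a four-channel motion layer, the kind of work that previously needed separate scripts. All pixel values and the operations are invented:

```python
# Conceptual sketch of one per-pixel pass operating on two
# four-channel layers at once (RGBA plus a motion layer), roughly
# the workflow the updated BlinkScript node enables. This is plain
# Python, not BlinkScript; the data and operations are invented.

def process_pixel(rgba, motion):
    """One pass: grade the color and rescale the motion vectors."""
    r, g, b, a = rgba
    graded = (r * 0.9, g * 0.9, b * 1.1, a)        # simple color tweak
    scaled_motion = tuple(m * 0.5 for m in motion)  # halve motion length
    return graded, scaled_motion

pixel_rgba = (0.5, 0.5, 0.5, 1.0)
pixel_motion = (2.0, -1.0, 0.0, 0.0)  # e.g. motion x/y plus unused pads
print(process_pixel(pixel_rgba, pixel_motion))
```

The point is that both layers are read and written in the same pass, so one kernel replaces what used to be a chain of separate Blink scripts.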
The CopyCat node, which is the core of Nuke’s machine-learning capability, is faster thanks to mixed-precision training, which dynamically adapts numerical precision at different stages of the training process. And the workflow is more robust and optimized, providing controls for pausing and resuming training, deleting a previous training run or creating inferences, all within the CopyCat node, where before it was a much more manual process.
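CopyCat handles this internally, but the pause-and-resume workflow boils down to checkpointing: periodically saving trainable state so a run can stop and later pick up where it left off. Here is a stdlib-only Python sketch with an invented toy “model” (a single number nudged toward a target), which has nothing to do with CopyCat’s real internals:

```python
import json
import os
import tempfile

# Toy illustration of pause/resume training via checkpoints, the idea
# behind CopyCat's new workflow controls. The "model" is one number
# being nudged toward a target; real training state would be network
# weights plus optimizer state.

def train(steps, target=10.0, checkpoint_path=None):
    weight, start = 0.0, 0
    if checkpoint_path and os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:           # resume from checkpoint
            state = json.load(f)
        weight, start = state["weight"], state["step"]
    for step in range(start, steps):
        weight += (target - weight) * 0.5          # one "training" update
        if checkpoint_path:
            with open(checkpoint_path, "w") as f:  # save progress
                json.dump({"weight": weight, "step": step + 1}, f)
    return weight

ckpt = os.path.join(tempfile.mkdtemp(), "run.json")
paused = train(3, checkpoint_path=ckpt)   # "pause" after 3 steps
resumed = train(6, checkpoint_path=ckpt)  # resume, finishing 6 total
print(resumed == train(6))                # matches an uninterrupted run: True
```

The interrupted-then-resumed run lands on exactly the same weight as a straight-through run, which is the property a pause/resume control needs.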
USD integration continues to advance with new staging tools and time-remapping features to retime 3D animations within Nuke. Foundry is keeping up with Pixar’s latest version, and it provides a new environment variable to easily swap between versions in case a collaborator, or other software in your pipeline, doesn’t use the same flavor of USD.
Foundry also continues to push open standards in an effort to bring studios and DCCs together. Nuke Studio supports a full OpenTimelineIO round trip for moving editorial information between platforms. And OpenAssetIO, which allows Nuke to access published assets in a production-tracking system, has further integration for retrieving frame ranges, color-space properties and ingest types. It relies on the tracking system’s “entity reference” rather than an explicit file path, so if things change, the Nuke script won’t break because it can’t find a file.
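The entity-reference idea is easy to picture: instead of a hardcoded path, the script stores a stable identifier and asks the asset system for the current file path at load time. Below is a toy Python sketch; the references, paths and dictionary-based resolver are all invented (a real OpenAssetIO host queries an asset-manager plugin, not a dict):

```python
# Toy sketch of OpenAssetIO's entity-reference idea: the comp script
# stores a stable reference, and the current path is resolved at load
# time. References, paths and the resolver here are all invented.

ASSET_DB = {
    "demo:///shots/sh010/plate": "/proj/plates/sh010_plate_v003.exr",
    "demo:///shots/sh010/cg_bg": "/proj/renders/sh010_bg_v012.exr",
}

def resolve(entity_ref):
    """Return the current file path for a stable entity reference."""
    return ASSET_DB[entity_ref]

# The script keeps the reference; a re-publish just updates the
# database, and the script keeps working without any edits.
print(resolve("demo:///shots/sh010/plate"))
ASSET_DB["demo:///shots/sh010/plate"] = "/proj/plates/sh010_plate_v004.exr"
print(resolve("demo:///shots/sh010/plate"))  # now the new version
```

Because the script never stored the path itself, moving or versioning the file can’t break it, which is exactly the failure mode the text describes.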
Yet the thing that caught my eye is actually hidden in this latest build and can be revealed through an environment variable change: Multi-Shot Support. It makes managing similar shots easier and more consistent. For example, you might have eight shots of a CU of Scarlett Johansson with a CG background. Usually, that would mean eight nearly identical Nuke scripts. If you got a note to make the BG more blue, or the CG changed, that meant opening all eight scripts, making the same change in the same way in each, and rendering them all. In the new methodology, you have one script covering all the similar shots; control variables drive switch nodes so that frame ranges, roto, tracking and so on are determined by the specific shot number you are rendering, while the overall note is addressed in all of the shots at once. I guess it’s like a state change in 3D software.
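Conceptually, multi-shot support amounts to one script holding shared settings plus per-shot variables that drive switches. Here is a toy Python sketch (all shot numbers and values invented, with no relation to Nuke’s actual implementation) showing a single note being addressed once while per-shot settings still vary:

```python
# Conceptual sketch of the multi-shot idea: one "script" holds shared
# settings plus per-shot overrides, and the active shot number selects
# which overrides apply, like a switch node driven by a variable.
# All shot numbers and values are invented.

SHARED = {"bg_blue_gain": 1.0}  # the "note" lives in exactly one place

PER_SHOT = {                    # per-shot variation (ranges, roto, ...)
    10: {"frame_range": (1001, 1080)},
    20: {"frame_range": (1001, 1132)},
}

def settings_for(shot):
    """Merge shared settings with the active shot's overrides."""
    return {**SHARED, **PER_SHOT[shot]}

# Address the note once; every shot picks it up on its next render.
SHARED["bg_blue_gain"] = 1.2

for shot in sorted(PER_SHOT):
    print(shot, settings_for(shot))
```

The change to the shared value propagates to every shot automatically, while each shot keeps its own frame range, which mirrors the eight-scripts-become-one workflow described above.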
I’ve used my time, and I’m being asked to leave the stage. Have a good night and tip your waitstaff!