
May 2017 Tech Reviews


Crazy Talk Animator 3

www.reallusion.com/crazytalk-animator

So, let’s talk about this CrazyTalk Animator 3.

In last issue’s discussion about Perception Neuron, I mentioned that 3D motion-capture data could be fed into CTA3 and applied to 2D characters through the bone-deformation system. But that is certainly not the end of CTA3’s functions.

The bone-based skeletal system has quite a few applications in a number of different areas within CTA3. The deforming capabilities allow you to attach bones to any imported image to add subtle — or not-so-subtle — animation. Get a scan of the Mona Lisa, extract her from the background, bring her into CTA3, attach a series of bones, and then add a little head bob to some music. This bone-driven approach also lets you break characters apart for more complex animations, separating limbs and head from the body, and mask out the influences so the arms don’t affect the chest, for instance.
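
Under the hood, this kind of bone-driven image deformation usually comes down to something like linear blend skinning: every point of the image mesh follows a weighted mix of the bones' transforms, and a weight of zero is exactly the "mask out the influence" idea. Here is a minimal NumPy sketch of the general concept, not CTA3's actual implementation; the bones, weights and numbers are invented for illustration.

```python
import numpy as np

def rotation(theta):
    """2x2 rotation matrix for an angle in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A few mesh points sampled from the scanned image (think Mona Lisa's head).
points = np.array([[0.0, 1.0], [0.1, 1.2], [-0.1, 1.15]])

# Two made-up "bones": each has a pivot and a rotation for this frame.
bones = [
    {"pivot": np.array([0.0, 0.8]), "theta": np.radians(8)},  # neck bone: small head bob
    {"pivot": np.array([0.0, 0.0]), "theta": np.radians(0)},  # body bone: stays put
]

# Per-point weights; a weight of 0 would fully mask a bone's influence.
weights = np.array([[0.90, 0.10],
                    [0.80, 0.20],
                    [0.85, 0.15]])

def deform(points, bones, weights):
    """Linear blend skinning: each point is a weighted sum of where
    every bone's rigid transform would carry it."""
    out = np.zeros_like(points)
    for b, bone in enumerate(bones):
        R = rotation(bone["theta"])
        moved = (points - bone["pivot"]) @ R.T + bone["pivot"]
        out += weights[:, b:b + 1] * moved
    return out

print(deform(points, bones, weights))
```

Keep the bones and weights, swap the image points, and the same animation drives a different character, which is the template reuse described next.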

Then, these bone systems can be saved as templates and you can swap out the characters, but use the same bone systems and same animations between them. So maybe you have five zombies. You can animate one and use the bone setup and animation as at least a foundation for the others. (You wouldn’t want to use exactly the same animation because zombies are individuals, of course.)

Included in CTA3 is a library of human and animal motions, which can be layered into the timeline as sequences of animation that blend into one another. You can then take sprites that you’ve built that are components of your character, and attach them to the correlating pieces on the template, including accessories like hats, jewelry, etc.

Facial animation has been enhanced with some key audio features, like scrubbing and text-to-speech tools for syncing the audio to phonemes. But with an added free-form deformation tool, you can add more movement into your original sprites to put in some additional personality. The facial animation system has been expanded beyond human faces to include animals, too.

There are definitely more things to find in the CrazyTalk Animator package, including drag-and-drop animation behaviors and curves, customized FFDs for props, and expression-based animations, as well as the integration of 3D motion, including motion-capture data, as mentioned in conjunction with the Perception Neuron.

But try it out for yourself, or ask clients like Jimmy Kimmel, HBO, and Keanu Reeves, for starters.


V-Ray 3.5.5

www.chaosgroup.com/vray/3ds-max

These guys at Chaos Group. They just seem to never rest. And all of this lack of sleep has really come to fruition. I mean, not only is V-Ray well loved around the world, but creator Vlado Koylazov received a Sci-Tech plaque from the Academy of Motion Picture Arts and Sciences, meaning Oscars folks loved it, too.

But not one to rest on the golden laurels of awards, Chaos Group has pushed out yet another version, V-Ray 3.5 for Max, with a Maya version hot on its tail.

The principal addition to 3.5, and a huge render-time saver, is adaptive lights, which feels like the evolutionary next step from the probabilistic lights of the last release. Instead of choosing a specific number of lights that will “probably” affect the solution, V-Ray uses the light cache (known from the global-illumination algorithms) to decide which lights can be eliminated from the calculation without affecting the end result. This may not help as much if you have, say, eight lights, but when you are getting into the hundreds of lights, the time savings are dramatic.
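
In spirit, that is importance sampling over the light list: use a cheap estimate of each light's contribution at the shading point to decide which few lights to actually sample, and reweight so the average stays right. A loose Python sketch of the general concept follows; it is not Chaos Group's actual algorithm, and the numbers and the pick_lights helper are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend we have 500 lights; the light cache supplies a rough estimate of how
# much each one contributes at the shading point we're working on.
cached_contribution = rng.pareto(2.0, size=500) + 1e-6  # a few lights dominate

def pick_lights(estimates, n_samples=8):
    """Sample a handful of lights with probability proportional to their
    cached contribution, instead of looping over all of them."""
    p = estimates / estimates.sum()
    chosen = rng.choice(len(estimates), size=n_samples, p=p)
    # Dividing by the pick probability keeps the estimate unbiased, which is
    # why culling hundreds of lights doesn't change the final image on average.
    weights = 1.0 / (p[chosen] * n_samples)
    return chosen, weights

lights, weights = pick_lights(cached_contribution)
print(lights, weights)
```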

You now have interactive production rendering. “But isn’t that what V-Ray RT is for?” you may ask. Sort of. IPR actually works in conjunction with the advanced renderer, while RT is a separate renderer altogether. RT must export the scene before it can start rendering, while IPR accesses the scene directly, which means it can start rendering almost immediately.

Also, V-Ray 3.5 has established “resumable rendering,” which, like it sounds, allows you to pick up a render where it left off. Maybe your ferret chewed through your power cable while you were rendering. Once you’ve bought a new cable, you can restart the render from the point when Minky bit through. And it works in both bucket and progressive render modes.

Some third-party shaders have received some love. The alSurface shader that Arnold lovers know and love has been adapted for V-Ray, primarily as a complex skin shader. And MDL shaders from the NVIDIA library have been incorporated, as well as Forest Color support.

Furthermore, a ton of stuff has been pushed to the GPU for faster processing, including in-buffer lens effects, aerial perspective, V-Ray clipper, directional area lights, stochastic flakes, rounded corners, matte shadow, render mask, irradiance maps, and on-demand MIP-mapping. And they threw in a low GPU thread priority for load balancing.

Everyone loves beautiful renders. But everyone loves them more when they’re faster!


Phoenix FD 3.0 for Maya

www.chaosgroup.com/phoenix-fd/maya

And while we are on the topic of Chaos Group — you know, and that technical Oscar — they also have a fluid solver called Phoenix FD, and version 3.0 was recently released for Maya.

Originally something one would turn to for smoke and fire, 3.0 now has an actual fluid — as in water — FLIP solver, which is all the rage in Houdini and RealFlow. Phoenix has all that, including the extra generated maps for creating foam on the surface of the water and wet maps for the geometry it’s interacting with.

But don’t forget about the original tried and true fire and smoke. The solver has been updated to handle finer detail resolutions. But even if you have all that, you still have to render those volumes — and so the volume rendering has gotten a speed boost.

Setups for all that smoke and such can be time-consuming, you may say. And for the most part you are right. But quick preset buttons get all that foundation work out of the way, so you can get straight to tweaking and making it super cool.

Additionally, the team at Chaos Group has added some fancy forces that interact with both the fire-and-smoke fluids and the water fluids. Path follow does what it says it does: the fluids will follow a chosen spline or splines. Then there is body force, which allows you to use a mesh to determine the shape of the force.

Basically, Phoenix is a light form of RealFlow or Houdini without the overhead — but also without many of the bells and whistles. Chaos Group is firmly hitting the soft belly of the same market as FumeFX.


Flowbox

flowbox.io

So, I’m just gonna say it. Roto blows. Honestly. I’m just not a fan.

But then there are those times where a tool comes along that makes you just a bit giddy because, like Tom Sawyer convincing his friends that painting the fence is fun, something draws you closer to believing rotoscoping is something that you don’t need to use as punishment.

I listed Flowbox as a top tech to check out last year, but I’m only just getting to it now, mainly because this cracking group of upstarts had some features they really wanted to nail down before people started clamoring about it from the mountaintops.

Flowbox looks and feels like Nuke, but using pen strokes, rather than click-dragging, you get a freeform style of connecting and disconnecting nodes. But the workflow feels comfortable, like your slippers. Among the familiar roto tools, though, are some powerful ones that could be potential game changers.

The first is the stroke mode, which essentially puts you into a freehand mode to trace an outline using your Wacom or whatnot. Or you can be laying points the old-fashioned way, switch over to stroke mode, and then back again. The completed stroke becomes a point-based, controllable curve whose density can be adjusted. Now, you can’t just go freestylin’ and draw curves all over the place and expect clean, non-fluttery rotoshapes. Or can you?

The snap line feature understands the structure of the previously drawn stroke and projects it onto the new stroke you’ve drawn on the new frame. The points move with an intelligence that tries to ensure the fidelity of the silhouette.
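
How Flowbox does this exactly is its own secret sauce, but the basic trick of carrying an existing point structure onto a freshly scribbled stroke can be sketched as arc-length resampling: give the new stroke the same point count and ordering as the old curve, so point number 7 still sits at the same fraction of the way around the silhouette. A toy NumPy version, with made-up curves and point counts:

```python
import numpy as np

def resample(stroke, n):
    """Resample a polyline to n points spaced evenly by arc length."""
    stroke = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    targets = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(targets, t, stroke[:, 0]),
                     np.interp(targets, t, stroke[:, 1])], axis=1)

# Yesterday's roto shape had 12 control points; today we scribble a rough
# freehand stroke over the new frame and snap the old structure onto it.
old_curve = resample([[0, 0], [1, 0.2], [2, 0.1], [3, 0.5]], 12)
new_stroke = [[0, 0.1], [1.2, 0.4], [2.1, 0.2], [3.0, 0.7]]
snapped = resample(new_stroke, len(old_curve))  # same count, same ordering

print(snapped.shape)  # (12, 2): point i still "means" the same bit of silhouette
```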

Now, if that isn’t enough to draw you back into being a lover of rotoscoping, Flowbox has an intelligent ripple edit, which means that changes made to a point on a curve will propagate over all the keyframes on that shape in the sequence. What other flavors of this tool don’t have is an understanding of where those points go when the overall rotoshape rotates. Not so for this tool — the adjusted points follow the ripple in a more useful way.
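
My guess at the mechanism (and it is only a guess, not Flowbox's actual math) is that the tweak gets stored in the shape's local frame rather than in screen space, so re-expressing it on every keyframe makes it turn with the shape instead of drifting off in a fixed direction. A toy sketch with invented transforms:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Per-keyframe orientation of the whole rotoshape (made-up angles).
keyframe_angles = [np.radians(a) for a in (0, 30, 60)]

# The artist nudges one control point by this much on the first keyframe.
edit_screen = np.array([0.05, 0.02])

# Express the edit in the shape's local frame on the frame where it was made...
edit_local = rot(-keyframe_angles[0]) @ edit_screen

# ...then ripple it to every other keyframe, rotated into that frame's
# orientation, so the tweak follows the shape as it turns.
for theta in keyframe_angles:
    print(rot(theta) @ edit_local)
```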

But the Flowbox guys aren’t stopping there. As more tools become available, it won’t be surprising to see this evolve into a compositing tool. In fact, Flowbox FX is already getting some buzz.

But back in the rotoscoping world, one of the forthcoming features is a workflow for real-time collaboration in the same file, with multiple artists working on different shapes for the same roto. I have a few shots heading my way right now that could use that kind of collaboration.


Ziva Dynamics

www.zivadynamics.com

While rotoscoping is just kind of tedious, rigging, on the other hand, is hard. Which is why I usually leave the rigging to the riggers — those special guys and gals who simply need to solve incredibly complex problems with a combination of guts, code and coffee.

But how can we all benefit and ride on the shoulders of these giants of rigging? Well, some people from Weta who helped with the development of the character rigs in Avatar and the new Planet of the Apes movies think they have something. You know. Those smart guys!

Essentially, the team at Ziva Dynamics has taken its experience and high-end degrees and niched down to provide a product for recreating muscle, fascia, fat and skin simulations on characters. It’s the combination of all of these that gives recent CG characters their lifelike realism: the complexity of the entire anatomical system working together.

Ziva uses the concept of the finite element method found in many if not most engineering practices to analyze forces, fluid flows, etc. Discretization takes the form of a shape similar to the shape of a muscle. The shape is made of tets, which act something like a cage around the geometry of the muscle. Forces applied to the tets are transferred to the model.
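
To make that concrete: the standard way an embedded surface rides along with a tet cage is through barycentric coordinates. Each surface point is written as a weighted combination of its tet's four corners, so when the simulation pushes the corners around, the point follows for free. A minimal NumPy sketch of that embedding step, illustrative only and not Ziva's actual solver:

```python
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of point p inside a tetrahedron (4x3 array)."""
    a, b, c, d = tet
    T = np.column_stack([b - a, c - a, d - a])
    u, v, w = np.linalg.solve(T, p - a)
    return np.array([1 - u - v - w, u, v, w])

# One tet of the cage around a muscle, and a surface point embedded inside it.
tet_rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
point_rest = np.array([0.25, 0.25, 0.25])
coords = barycentric(point_rest, tet_rest)  # computed once, at rest

# The simulation moves the tet's corners (a made-up squash here); the embedded
# point simply follows with the same barycentric weights.
tet_deformed = tet_rest * np.array([1.1, 0.9, 1.0])
point_deformed = coords @ tet_deformed
print(point_deformed)
```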

Mind you, the above paragraph hardly taps into the real math that goes into this stuff.

You set up your character in Maya — yes, this is a Maya plugin — from the inside out. The skeleton is a controlled hierarchy with traditional Maya controls. The muscles and tendons are attached to the bones, the fascia and fat wrap around those, the skin wraps around the fascia, and the cloth wraps around the skin.

Anyway, it’s this collection of simulations, responding not only to the movement of the skeleton, but to gravity and their own weight and momentum, that provides the realism everyone is looking for.

This technology used to be developed internally at large visual-effects facilities with R&D money, or hacked together in a pseudo-functional way that got us believing the characters were sort of alive. But it’s the subtlety in the simulation that brings out the reality.

Those interested in 3D character animation should really check this out to bring their characters up to the next level.

Look for an update pretty soon, as they were kinda excited for me to see some new stuff. But the review was slated for now, so I guess you all will just have to wait.


Substance Designer 6.0

www.allegorithmic.com/products/substance-designer

Allegorithmic has been going strong ever since it came out of the gate with Substance Designer and Substance Painter, taking the game and visual-effects industries by storm with its PBR approach to texture and shader design, as well as the intelligent workflow for dynamic shaders that use extra maps such as normals, height, occlusion, etc., to drive how the shader behaves. These are the “substances.” And Substance Designer is where they are built.

In Substance Designer 6.0, Allegorithmic appears to have found room to make a powerful piece of software that much more powerful. Among all kinds of preferences to make the user experience better, and some tweaks under the hood to make things faster, there are a number of new nodes to play with in your Substance script.

A seemingly innocuous but deceptively powerful addition is the curve node. We are all familiar with controlling colors and such with curves in Photoshop or Nuke or any number of color-grading tools. And in SD6, you can drive color corrections or gamma or whatnot with Bezier nodes on the curve. That’s the bread-and-butter stuff, though. Remember, in Substance Designer, you have other map parameters that can be affected, like normals and height. By feeding the curve into the height parameter, you are essentially defining the equivalent of a loft profile — the curve defining the top surface of the geometry the substance is attached to. Think of it like wainscoting on a wall, or intricate Rococo etching — all without the extra geometry.
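
To make that concrete: a curve node is essentially a per-pixel lookup, and piping it into the height channel sweeps the curve's profile across the surface. A rough NumPy sketch of that remapping (the profile values are invented, and real Substance curves are Bezier-based rather than the straight-line interpolation used here):

```python
import numpy as np

# A horizontal gradient standing in for whatever feeds the height channel.
width, height = 256, 64
gradient = np.tile(np.linspace(0.0, 1.0, width), (height, 1))

# The "curve": a handful of control points describing a wainscoting-like
# step-and-groove cross-section.
profile_x = np.array([0.0, 0.30, 0.35, 0.65, 0.70, 1.0])
profile_y = np.array([0.2, 0.20, 0.80, 0.80, 0.20, 0.2])

# Remap every pixel through the curve: the profile becomes the cross-section
# of the relief the substance will produce, with no extra geometry involved.
height_map = np.interp(gradient, profile_x, profile_y)
print(height_map.shape, float(height_map.min()), float(height_map.max()))
```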

The text node is a similarly simple node that allows you to add text to the substance (duh!), driven by system or custom fonts, and fully tileable.

Nodes can now be 16-bit or 32-bit float, taking advantage of high dynamic range and allowing for internal creation and editing of HDR environments for lighting.

And you can now bake out textures to 8K!

But my favorite is the ability to shoot and process your own surface textures. By taking a sampling of your material with the lights at different angles, you can, through Substance Designer 6.0, extract proper normal, height and albedo maps — on top of the color — to get a more precise replication of the real-world material. That’s pertinent to shader development both inside and outside of the Substance Designer workflow.
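
That workflow is, at heart, photometric stereo: under a Lambertian assumption, a few shots lit from known directions give a per-pixel least-squares solve for a scaled normal, whose length is the albedo and whose direction is the surface normal. A toy NumPy sketch (light directions and intensities are made up, and Allegorithmic's scan processing certainly does far more than this):

```python
import numpy as np

# Intensities of the same four pixels photographed under three known lights.
light_dirs = np.array([[0.0, 0.0, 1.0],
                       [0.7, 0.0, 0.7],
                       [0.0, 0.7, 0.7]])          # (n_lights, 3)
intensities = np.array([[0.9, 0.8, 0.5, 0.4],
                        [0.6, 0.9, 0.4, 0.3],
                        [0.7, 0.5, 0.8, 0.2]])    # (n_lights, n_pixels)

# Lambertian model: I = L @ (albedo * normal). Least squares recovers the
# scaled normal per pixel; its length is the albedo, its direction the normal.
g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
albedo = np.linalg.norm(g, axis=0)
normals = g / albedo

print(albedo)       # one albedo value per pixel
print(normals.T)    # one unit normal per pixel
```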

As I said earlier, a super strong release to an already super strong product.
