
Title: 3D2D Animation Workflow Proposal
Post by: astraldata on April 27, 2016, 02:14:11 am
2D Animation is HARD work.

It is oftentimes A LOT of work too. That workload only increases the more you want to animate a character in a 2D game. There must be another way for people with little time who want to develop cartoon-looking art. And if the ideas/workflow in the video below could be expanded into a more user-friendly form, we could really be onto something:


As a die-hard 2D artist AND animator, I have often envied the fancy toolsets 3D artists have available to them. In many 3D apps you can just click a button, slide some edge-loops and vertices around, and BAM -- you have a sword, or a barrel, or even a full-on character, ready for you to spiffy-up and throw a skeleton on. You can then animate it with as many different animations as you'd like (often by simply reusing pre-made animations with a whole different visual representation thrown on top!). And even with all of that, the most time-saving thing is that you never have to worry about drawing your subject ever again. You can simply animate -- and give life to your otherwise static drawing. The problem is, 3D, unlike 2D, isn't really "drawing" in the traditional sense... but it can definitely be just as much fun.

While creating sprites and drawing frame-by-frame is easy enough for me, I've often wondered whether there is a better way to save time on developing assets that aren't aimed at being photo-realistic.

If you've watched the video above, especially all the way to the end, you should probably be aware that this stuff isn't /easy/ to do -- at least not yet. However, what they did do -- and shipped as a AAA game, I might add -- is surprisingly low-tech for what's possible with the visual style they've accomplished.

Speaking of what's possible -- in the question/answer session they mentioned (sadly very short-sightedly, I'll admit) that something like an RPG isn't quite possible with this technique. But if you think about TRUE classic top-down RPGs of the past (or even something like Zelda: A Link Between Worlds, for example!), a visual style like this could work /very/ well. You wouldn't need a global light source to do that (as you have a 'sprite' layer!), and most importantly, your art style would mimic the beautiful sprites of the past, each with its own light source (mostly similar, but still independent of the environment it's a part of). These 'sprites' would simulate "infinite resolution", which is the secret to all really great pixel art. And, to think, creating epic games like this with minimal man-hours in art would be many people's dream. My own included.

So... A proposal.

I want us, as a community, to consider developing a more general-purpose workflow to create 2d3d 'sprites' like the ones in the video. If we gained enough support, we could have dedicated tools that make developing these 2d3d sprites much easier than winging them in something like Softimage XSI -- which is nice, but quite obviously not optimal. Perhaps a shader and workflow in some currently-available 3D program could suffice until then.

Any technically-inclined fellow 2d3d animators/programmers up to a challenge...?

If so, feel free to post any potential workflows you have to accomplish something like this here in this thread. We can all tweak and discuss these workflows in order to potentially come up with something reasonably faster than the one they came up with for this Guilty Gear game (that could work for top-down RPG games too, for example!)

I've got a few workflows in mind myself, but I'd like to hear your input first!

PS: Here is some additional information to consider:

Guilty Gear PDF (unofficial)

Japanese Cel-Shading Plugin

Softimage Video of Cel-Shading Normal Process

Older Discussion

Title: Re: 3D2D Animation Workflow Proposal
Post by: 0xDB on April 27, 2016, 08:35:04 am
Sadly, there seems to be zero interest (http://pixelation.org/index.php?topic=20016.0) in marrying 2D with 3D for the end goal to create Pixel Art. I will however continue to explore this in Blender and post again when I have more to show than the rotating monkey head. It's actually pretty easy to set up a basic cel-shader in there (all info included in that thread).
Title: Re: 3D2D Animation Workflow Proposal
Post by: Cherno on April 27, 2016, 11:46:09 am
Sadly, there seems to be zero interest (http://pixelation.org/index.php?topic=20016.0) in marrying 2D with 3D for the end goal to create Pixel Art.

Well, this thread (http://pixelation.org/index.php?topic=18842.0) has 12 pages, so there is some interest.
Title: Re: 3D2D Animation Workflow Proposal
Post by: 0xDB on April 27, 2016, 12:49:16 pm
Yes, I am aware of that thread, but it appears old and does not seem to have led to anything that is easily accessible or adopted as a workflow to pimp our Pixel Art by utilizing 3D, so while there was interest, there currently does not seem to be any.
Title: Re: 3D2D Animation Workflow Proposal
Post by: astraldata on April 27, 2016, 08:50:23 pm
I think the biggest problem with using Blender to render 'pixel art' via any current 3D shader method is that you're unable to control the shadowing, and thus cannot achieve an animated/anime/cartoony look via specularity and shadow control the way you can via normal maps using Softimage's tools or some other such tool. If you could achieve a shader akin to the one in the GG video I linked above, you'd effectively have a 'pixel-art generator' right there, with your ability to control a ramp for each object (assuming you can potentially add texture somehow with your shader!)
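For anyone unfamiliar with what this "ramp" control boils down to: a basic two-band cel shader is just a hard cut applied to the N·L lighting term. Here is a minimal sketch of that thresholding in plain Python -- the function name, colors, and threshold value are purely illustrative, not anything from the actual GG shader:

```python
# Minimal two-band cel ("toon ramp") shading sketch for a single
# directional light. All names and values here are illustrative.
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def cel_shade(normal, light_dir, threshold=0.3,
              lit_color=(1.0, 0.85, 0.7), shadow_color=(0.45, 0.35, 0.5)):
    """Pick lit_color or shadow_color from a hard threshold on N.L."""
    n = normalize(normal)
    l = normalize(light_dir)
    n_dot_l = sum(a * b for a, b in zip(n, l))
    # The hard cut (instead of a smooth falloff) is what gives the
    # "anime" look; editing the normals moves where this cut lands
    # on the surface without changing the geometry.
    return lit_color if n_dot_l > threshold else shadow_color
```

The point of painting normals is exactly to control where that `n_dot_l > threshold` boundary falls on the model.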

Softimage Video of Cel-Shading Normal Process
https://youtu.be/YE-AnfhqZqg (https://youtu.be/YE-AnfhqZqg)

Being able to modify the normals in a user-intuitive way is the secret to being able to render shadows/highlights properly.

When we find a tool (preferably something free, like Blender, rather than XSI Softimage!) that allows us to manually paint normals as if we were in a paint program (like in that video I linked just now), that will easily be the best way to achieve the cartoon look you're going for in your shader.


The modification of normals to be precisely distributed is essential to retaining light on, for example, a face, because you'd want select portions of the face to be visible at almost all angles (such as the bridge of the nose, the cheekbones/eyesockets, and probably most of the chin area, depending on the character). Getting this wrong is the one thing that makes even the best rendering of a cel-shaded 3D object look so BAD when it tries to pass as a cartoon.

Guilty Gear Normals:

Unfortunately, most of the industry doesn't reference the Guilty Gear game here to see how to do cel-shading in 3D right -- and even if they /did/, they would create a /proprietary/ solution that the rest of us could never see or know about! This is why I'm suggesting WE come up with something, so that we'll ALL have access to it! :(

I'm getting tired of looking at bland COD-looking ripoffs that lack the evocative colors of the golden age of gaming simply because these types of photo-realistic games are easier and more cost-effective to produce. If we could paint normals easily enough, we might be onto something if we could then figure out how to determine the threshold with the specular shading somehow.

Guilty Gear AO (Ambient Occlusion) Maps
https://youtu.be/yhGjCzxJV3E?t=1041 (https://youtu.be/yhGjCzxJV3E?t=1041)

It looks like AO (Ambient Occlusion) maps helped make this happen a lot more easily. You essentially shade the darker crevices and the areas of the model that are more likely to be in shadow, effectively /occluding/ vertices more or less, forcing them to take on shadow more heavily than other, un-occluded vertices. Take a peek at the tiny black-and-white image below the two larger ones in this section of the video:

You'll see that vertex lighting control through AO maps (despite what the guy says in the video about vertex channels) is probably /mostly/ controlled like this because, in the image, it looks A LOT like a grayscale 2d fighting sprite would. As he mentions, this is connected to the Threshold value through the vertex coloring channel, and he specifically mentions that heavily-occluded pixels (in the AO map) are more likely to be in shadow.
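The relationship described there -- heavily occluded areas fall into shadow earlier -- can be sketched as a per-vertex AO value biasing the shadow threshold. The function, parameter names, and numbers below are assumptions for illustration, not values from the actual Xrd shader:

```python
# Sketch: a per-vertex AO value shifts the cel-shading threshold,
# so occluded areas go dark at shallower light angles. All names
# and constants are illustrative assumptions.
def shade_with_ao(n_dot_l, ao, base_threshold=0.3, ao_influence=0.5):
    """ao = 1.0 means fully open, ao = 0.0 means fully occluded.
    Lower ao raises the effective threshold, pushing the vertex
    into shadow earlier."""
    threshold = base_threshold + (1.0 - ao) * ao_influence
    return "lit" if n_dot_l > threshold else "shadow"
```

With the same lighting angle, an open vertex stays lit while an occluded one drops into shadow, which matches the grayscale "sprite-like" AO image described above.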

Title: Re: 3D2D Animation Workflow Proposal
Post by: 0xDB on April 27, 2016, 09:41:59 pm
I think the biggest problem with using Blender to render 'pixel art' via any current 3D shader method is that you're unable to control the shadowing, and thus cannot achieve an animated/anime/cartoony look via specularity and shadow control the way you can via normal maps using Softimage's tools or some other such tool. If you could achieve a shader akin to the one in the GG video I linked above, you'd effectively have a 'pixel-art generator' right there, with your ability to control a ramp for each object (assuming you can potentially add texture somehow with your shader!)
Nope, you're not unable -- you have full control over all of that in Blender. That thread I linked to is just not up to date with my experiments. I had already continued experimenting because of the normal manipulation required and modified the shader accordingly, but got distracted and haven't actually painted manual normals yet. Here, have my last example .blend file from April 5th: .blend example file (wip) (http://www.dennisbusch.de/shared/2016_04_05_CellShaderShaderTest7pixelArtBakedNormals.blend.zip)

It contains a modified shader noodle (someone should probably double-check the math for correctness) and un-modified normals baked from the geometry itself onto a separate layer of vertex colors (it's also possible to bake the normals to a texture and use that, if you prefer):
noodle root:



In Blender you can paint the normals intuitively, directly on the model as vertex colors or as a texture, you can also parent the model to the light source (not done in the example file either but the shader is set up to use only that lightsource). In this screenshot, vertex paint mode is on with the normals layer active, normals seen are the un-modified result from the baking action:

Well well... I haven't had the time/motivation to do a proper write-up of this process yet as it's still exploratory and not mature, so uh... learn Blender and check the example file. I would strongly suggest using Blender rather than developing something entirely new from scratch. I know I will continue to use Blender, explore the topic in there, and share what I find when ready.

In any case, mixing in a texture would be trivial with a few more nodes. Making a ramp per object is too: just duplicate and modify the shader for each object. I actually think that for clean infinite-resolution results it would be preferable not to use textures at all and instead model everything as its own little piece of geometry with its own variant of the shader. Performance in real-time could potentially be a problem, but you could render everything with alpha transparency and use the results as regular sprite sheets in any 2D engine.

There is also the hurdle that the Blender shader noodles need to be converted to GLSL (a minor one, perhaps there is already a way to automate that).

What the example file looks like at the moment when rendered (again, normals are from geometry and un-touched yet) with Blenders "OpenGL Render Image" function:
Title: Re: 3D2D Animation Workflow Proposal
Post by: 0xDB on April 27, 2016, 09:54:08 pm
P.S.: The one thing I am most uncertain about in the noodle in the vectormathstage is how to correctly convert the vertex color (which is the encoded Normal) back to a vector. I probably left it at some experimental stage that does not make too much sense when I got distracted by other things. I'll be gone for a couple of days. I hope you'll have it all figured out when I get back.  :crazy:
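[For reference: the conventional way to store a unit normal in a color channel is to remap each component from [-1, 1] into [0, 1], and to reverse that on decode, re-normalizing to absorb quantization error. A sketch of that standard mapping, which may or may not match the node math in the .blend file:]

```python
# Standard normal <-> color encoding used by normal maps and
# vertex-color normal channels: [-1, 1] maps to [0, 1] per component.
def encode_normal(n):
    """Pack a unit normal into a color triple."""
    return tuple(c * 0.5 + 0.5 for c in n)

def decode_normal(color):
    """Unpack a color triple back into a unit normal,
    re-normalizing to absorb quantization error."""
    v = tuple(c * 2.0 - 1.0 for c in color)
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)
```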
Title: Re: 3D2D Animation Workflow Proposal
Post by: astraldata on April 28, 2016, 12:25:41 am
Only have a few moments to reply, but what I meant is painting the actual *mesh* normals, not the normal map itself. (I'm not even sure how to manage painting the normal map, as Blender's rendering system isn't something I've focused much on learning -- I've been more invested in the in-game side of things myself.)
Title: Re: 3D2D Animation Workflow Proposal
Post by: Indigo on April 28, 2016, 06:28:19 pm
Don't have time to reply, but a coworker brought this to my attention today. Pretty relevant to the discussion. Seems heavily based on Disney's Paperman tech.
Title: Re: 3D2D Animation Workflow Proposal
Post by: astraldata on April 29, 2016, 02:50:02 am

That's a bad-ass animation process right there. I'm assuming you could do something similar in Toon Boom Harmony nowadays (as far as rotoscoping the 3D background, vector line widths, etc.), but to be able to do all that in Blender is quite nifty! No idea, though, what's behind that monster of a shader setup he uses to do the rim-lighting and auto-light the 3D background in the video -- there's a TON of those 'noodles' going on in the first couple of frames of the BG rotoscoping portion. I'd imagine it has something to do with averaging the normals of the vector lines/splines of the Grease Pencil and their angle to the lighting widget (which I don't think can actually be done in Toon Boom Harmony yet...), but it's definitely a cool concept and would probably be great if it was something we could all have access to! Either way, thanks for sharing dude!

Regarding rendering and the concept of animation using 3d2d sprites:

As mentioned before, I'm not as familiar with shaders as many 3D artists are, but I know what makes 3D stuff tick and how that applies to games efficiency-wise in terms of what's doable in a real-time engine. That said, I've been struggling between 3D and 2D recently, mostly because of what 2D generally LOOKS like compared to 3D mimicking the same styles -- and until seeing that Guilty Gear video I linked above, I wasn't convinced the two could ever be married convincingly without A LOT of effort that's, quite frankly, very unrealistic for a small indie team:

Some examples of unrealistic indie 3d2d sprite workflows:

On the other hand, when I saw the video in my original post for the first time, I had the spark of an idea: it might be possible to create these sorts of 'sprites' as ultra high-resolution, vector-style 3D models in realtime (if desired) -- and all it would take is a proper shader (realtime would be ideal) with the proper threshold for highlights and shadows, a hand-painted AO map, and a 3D model with a texture used only for colors and linework (as mentioned in the video, using UV maps to control the line width!)

This simple process would obviously need to be tweaked, but if perfected, we could use 3D models directly in games (with linear animation curves) and have 3d2d 'sprites' of infinite resolution that look perfectly like cartoons -- possibly surpassing pixel art in some areas -- without the heavy workload of animating every lighting or clothing change by hand for every single frame of every single action. It could also lead to much more varied environments that affect the characters more, meaning more immersion and better-looking 3D cartoon-style games in general, at least in my opinion.

The shader, if created properly, could simulate actual airbrushed falloff in some cases too, letting the same tools be used for both games and film-style animation (such as what Indigo posted), where all the heavy lifting would be done for you (linework/coloring/shading/etc.) and all you'd have to do is draw over it in your own style. Since anime, at its core, is the cleanest possible cartoon rendering style (aside from pure black and white!), it would lend itself well to any further shading tweaks/styles one could imagine, with all the form and coloring intact from the outset!

My personal interest in this is to create workflows that allow 3D models in 3D games to look 2D-rendered, especially in traditional 2D-oriented views (particularly side-scrolling and top-down), so that games with gameplay akin to SNES-era RPGs and platformers can get the high-res treatment they deserve, along with some fun camera and lighting shift effects to play with in the process. Much of what made those games great came from their limitations. With a limited perspective on, say, an isometric strategy game (or whatever other type of game you can imagine), mixed with 3d2d sprites (again, do reference the Guilty Gear video), each character having its own light source, our games can again resemble the amazing visuals of games of the past -- without having to hand-paint the lighting on our 3D models to make them look like sprites. (After all, lighting and shadow do change when sprites move... yet it doesn't seem to on a lot of the 3D DBZ games, does it? [Tenkaichi series, I'm looking at YOU!])

To close:
A pretty decent example of a high-res sprite that could have potentially been better served as a 3d2d model (the rest of the iOS game looks MUCH worse! D: )

Title: Re: 3D2D Animation Workflow Proposal
Post by: 0xDB on May 01, 2016, 07:30:25 pm
Only have a few moments to reply, but what I meant is painting the actual *mesh* normals, not the normal map itself (in which I am not sure how to manage even painting the normal map, as Blender's rendering system hasn't been something I've focused much on learning due to being more invested in the in-game side of things myself.)
By my understanding, the mesh is the definition of the geometry itself: the vertices, the edges between vertices, and the definition of which vertices or edges form the faces. The normals are derived from that (normalized cross product of any two vectors that make up the plane of the face). Usually, I think the normals of a face are just calculated from its triangulation and then perhaps averaged for n-gon faces. The normals for edges are probably simply the average of the normals of adjacent faces, and the normals for vertices are probably just the averaged normals of the edges meeting in that vertex.
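[The derivation described there can be sketched directly -- a face normal as the normalized cross product of two edge vectors, and a vertex normal as the normalized average of the normals of the faces sharing that vertex:]

```python
# Normals derived from geometry: face normal = normalized cross
# product of two edge vectors; vertex normal = normalized average
# of adjacent face normals.
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def face_normal(v0, v1, v2):
    """Normal of the triangle (v0, v1, v2), CCW winding."""
    return normalize(cross(sub(v1, v0), sub(v2, v0)))

def vertex_normal(adjacent_face_normals):
    """Average the normals of the faces meeting at a vertex."""
    summed = tuple(sum(components) for components in zip(*adjacent_face_normals))
    return normalize(summed)
```

These derived normals are exactly the "fixed" starting point; painting normals means overriding this result per vertex.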

So... I don't understand what you mean by "painting the mesh normals", as that would require modifying the geometry (the mesh) itself. I think what they do in Guilty Gear is painting the normals on top of the geometry, as vertex colors or as a texture, and using those painted normals in their shader. That is conceptually exactly the same as what I'm doing in Blender, just that I have so far only baked the normals derived from the geometry (in Object Space) into a vertex color layer called "Normals", which I feed to the shader for the light/shadow calculations. Those baked normals are just a starting point for cleaning up the shading; they can be painted manually very much the same way as shown in the Softimage Cel-Shading Normal video you linked to. And yes, you can also have a second 3D view next to the painting window to preview how the modified normals affect the shading in realtime. (I noticed that Softimage video is several years old... Blender has come a long way and is pretty powerful these days, but it takes some dedication to learn to use that power.)

Perhaps this last paragraph becomes more clear by looking at this noodle-section again:

See the node at the top left titled "Geometry". The blue dotted output of that node, named Normal, carries the normals that are directly calculated from the geometry itself. I am using those only for testing the "inverted hull, backfaces culled, toon outline" trick the guy also talks about briefly in the GG video. It works by flipping the normals of all faces in the entire volume, removing the faces that don't face us after the flip, and using a threshold combined with the view distance to paint the remaining faces (from the inverted hull) in the outline color. (The UV-texture-based interior linework is not implemented anywhere in the Blender file yet, and I think I'll pass on that because it would require lots of texture mapping work.)
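[The inverted-hull trick can be sketched as: duplicate the shell, push each vertex outward along its normal by the outline width, then flip the normals so only the back-facing shell survives culling and reads as a silhouette line. A pure-Python illustration of just the vertex/normal step -- a real implementation lives in the modeling package or shader:]

```python
# Inverted-hull outline sketch: offset vertices along their normals
# and flip the normals. Rendering the result with front faces culled
# leaves a silhouette shell around the original model.
def inverted_hull(vertices, normals, outline_width=0.02):
    """Return (expanded_vertices, flipped_normals) for the outline shell."""
    expanded = [tuple(p + n * outline_width for p, n in zip(v, nrm))
                for v, nrm in zip(vertices, normals)]
    flipped = [tuple(-c for c in nrm) for nrm in normals]
    return expanded, flipped
```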

So, the blue dotted output "Normal" is from the geometry, and that's fixed (unless the geometry, the mesh itself, is changed). But the yellow dot output in that node, called "Vertex Color", comes from the painted (currently baked and then untouched) normals in the "Normals" vertex color channel (or layer -- conceptually identical terms for our purpose here), which is selected in the little box at the lower end of the Geometry node of the shader noodle.

I think for clean Pixel-Art(ish) produced by this process it is probably best to avoid interior and exterior outlines, create each detail as its own model with its own ramp for the shader and manually paint/adjust the Normals to use for the shader to eliminate small variations in the geometry that would otherwise end up giving dirty patches of "stray shading" (which would be technically correct but would just not be a clean cel-shaded appearance).

Textures should also be avoided unless the target resolution and the size/zoom level at which the output is rendered is known and never going to change in the future, for the same reasons they used only 90-degree angles on the texture for the interior outlines on the models in GG.

I have not looked into AO maps.

Well, I am not trying to reproduce the effects in the GG video 100% because I think those are meant for high resolution and a "drawn" look, while the goal for me is to get a hybrid between 3D-rendered and hand-pixeled that requires minimal cleanup (or ideally no cleanup but I think that might also require fairly high resolution and very careful modelling and normals adjustment, so careful that in the end it will only pay in terms of saving time over doing it all in 2D from scratch if a model needs many different animations).

Also currently busy with other things (taking a liking to high-res, high-end 3D raytracing -- what they do at Pixar) and learning more about all the things that can be done in Blender, so I'll get back to this in a few weeks, I think.

I want to comment on "2D Animation is HARD work" as well:

So is 3D Animation. It requires different kinds of hard work, though, mostly while making the actual models to be animated and building good rigs to be able to animate them intuitively afterwards. Mesh topology is very important there; you can't just mindlessly sculpt something and expect to animate it easily. It seems to work best with a carefully crafted topology of clean quad stripes that follow the flow of the structure of the surface, e.g. muscles or folds, because otherwise things will look "ugly" and morph and squish and squash around quite uncontrollably with both bone- and mesh-shape-key-based animation. Still learning about all this myself, but I have no illusions anymore that 3D will be easier or even save time until one gets fairly proficient at the modelling and rigging process (and then it will probably still not look like beautifully hand-crafted PixelArt... though I am starting to not mind that, as long as it does not look like the usual train-wreck 3D rendered art that claims to look like it but does not).

One thing that is fairly easy to do is to make a very basic skeleton (like an art dummy, or even simpler), rig and animate it in 3D, and use that to paint over (like rotoscoping). Then you get all the fancy animation tools 3D artists use to iterate on the timing of your animation faster than having to draw each frame; when you're happy with it, just draw on top of it to flesh it out. That's what the video Indigo linked seems to be all about, so it's more about how to rotoscope in Blender than about how to make a shader that automates "drawing" your models from real 3D data.

Haven't looked too closely, but I think the rim-lighting near the end is not any 3D shader but only compositing performed on the output image after the render pass, potentially using regular 2D edge detection algorithms and only applying it to selected "strokes". (I think the Grease Pencil in Blender basically creates 2D geometry which can be modified and animated in all the same ways as 3D geometry, but lacking depth, you can't really throw any light on it... but don't take my word for it, as I'm not familiar with the details of the Grease Pencil and this is just speculation from watching that video only once.)
Title: Re: 3D2D Animation Workflow Proposal
Post by: Conzeit on May 02, 2016, 11:27:39 pm
Just had to say I love this stuff. I think I was the first to flip out about GGXrd. I totally do want to use some 3D because it seems like a great way to tween and do environments, but I'm kinda scared of the learning curve... I know very, very little.

will look into this eventually.

EDIT: After reading the thread a bit better: Howard Day said he was already modifying the normals in his Unity experiments, and he's also shared his process for making 3D sprites before, specifically IRKALLA-style sprites: https://forums.tigsource.com/index.php?topic=35320.msg1029055#msg1029055 -- we should try to get him in here.

Thx tsej, we'll be talkin =)

Has anybody tried making resizable bones like those in GGXrd? That's one of the things that interests me the most.

Also, has anybody tried contacting this yadoob person who made the GGXrd PDF? He seems to be looking at the GGXrd model himself -- he reveals a nose geometry used to create an outline that isn't revealed elsewhere, AND he uses Blender.
Title: Re: 3D2D Animation Workflow Proposal
Post by: tsej on May 03, 2016, 03:08:29 am
It's actually really easy, and you can get tools for free. I plan to do a few things in the near future that might help a bit.
If you need more resources/help, PM me.
Title: Re: 3D2D Animation Workflow Proposal
Post by: astraldata on August 13, 2016, 11:42:58 pm
Some new insight into my ventures into the 3D2D realm to reproduce the Guilty Gear Xrd model shading:

The one critical aspect of this workflow that has eluded me since I started this topic has been an editor for modifying a 3D model's normals that works in modern 3D modeling software and does NOT rely on old/outdated software such as Lightwave or uncommon software like Softimage, which most 3D modelers don't know or use these days.

The good news is, I've managed to find an intuitive (and free!) normals editor as an addon for the popular 3D software Blender. Take a look at this article if you'd like to know more about it:

https://www.blend4web.com/en/community/article/131/ (https://www.blend4web.com/en/community/article/131/)

Along with editing the Ambient Occlusion maps/channels, one can create 3D models that render with proper anime/cartoon shading in realtime (using a proper shader, of course), without the gross visual artifacts that plague most 3D models with cartoon shading.

As mentioned before, these are 3D models:



Their shading is done something like what is done in the video below:

https://youtu.be/iH3p8N7qbv8?t=270 (https://youtu.be/iH3p8N7qbv8?t=270)

And for a visual example of what's going on, the following is the same geometry -- only the normals on the face have been edited:


This is the power of normal editing, but for some reason it's just been lost to the sands of time until now. Thankfully Softimage still exists and has shown, through Guilty Gear Xrd, that it's possible to make 3D look bad-ass without relying solely on shaders. Unfortunately, fancy shaders get people thinking "if I just find the right one, I'll suddenly get insta-anime somehow!" Sadly, a very false assumption. The stylization comes from many places, but at the most basic level, it comes from the brain saying "hmm, some part of this picture isn't right... but it was intentional... so I like it!" -- which could be anything out of the ordinary, from form and color to lighting itself. Since our brains always notice contrast before anything else (aside from motion), lighting alone is a very powerful tool -- and it's a staple of any 3D visual technique, which means control over it is absolutely critical.

Being able to control normals properly is as fundamental to the success of the modeling process as the texturing, shaping, or even the lighting itself -- or even more so.

Once I've worked through the full process, I'll share my findings on the workflow with you guys if you're interested. I'm after the highly vector-art look of models because I feel that 3D (done properly, as it was in GGXrd) is the future of serious 2D animation.