Topic: 3D2D Animation Workflow Proposal

Offline 0xDB

  • Posts: 873

Re: 3D2D Animation Workflow Proposal

Reply #10 on: May 01, 2016, 07:30:25 pm
Quote
"Only have a few moments to reply, but what I meant is painting the actual *mesh* normals, not the normal map itself (I'm not sure how I'd even manage painting the normal map, as Blender's rendering system isn't something I've focused much on learning, being more invested in the in-game side of things myself)."
By my understanding, the mesh is the definition of the geometry itself: the vertices, the edges between vertices, and the definition of which vertices or edges form the faces. The normals are derived from that (the normalized cross product of any two vectors that make up the plane of the face). Usually, I think the normals of a face are just calculated from its triangulation and then perhaps averaged for n-gon faces. The normals for edges are probably simply the average of the normals of the adjacent faces, and the normals for vertices are probably just the averaged normals of the edges meeting in that vertex.
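In plain Python, the derivation I mean looks roughly like this (just a sketch of the math, assuming triangle faces; real mesh code usually weights the average by face area or corner angle, which I'm skipping here):

[code]
import math

def cross(a, b):
    # Cross product of two 3D edge vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return tuple(c / length for c in v)

def face_normal(p0, p1, p2):
    # Normal of a triangle: normalized cross product of two vectors
    # that span the plane of the face.
    e1 = tuple(b - a for a, b in zip(p0, p1))
    e2 = tuple(b - a for a, b in zip(p0, p2))
    return normalize(cross(e1, e2))

def vertex_normal(adjacent_face_normals):
    # Vertex normal as the normalized average of the adjacent face normals.
    summed = [sum(n[i] for n in adjacent_face_normals) for i in range(3)]
    return normalize(summed)
[/code]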

So... I don't understand what you mean by "painting the mesh normals", as that would require modifying the geometry (the mesh) itself. I think what they do in Guilty Gear is paint the normals on top of the geometry, as vertex colors or as a texture, and use those painted normals in their shader. That is conceptually exactly the same as what I'm doing in Blender, except that so far I have only baked the normals derived from the geometry (in object space) into a vertex color layer called "Normals", which I feed to the shader for the light/shadow calculations. Those baked normals are just a starting point for cleaning up the shading, and they can be painted manually very much the same way as shown in the Softimage cel-shading normal video that you linked to. And yes, you can also have a second 3D view next to the painting window to preview how the modified normals affect the shading in realtime. (I noticed that Softimage video is several years old... Blender has come a long way and is pretty powerful these days, but it takes some dedication to learn to use that power.)
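For reference, that baking step can be scripted; this is only a rough sketch, assuming the Blender 2.7x-era Python API (where a loop color was three floats):

[code]
# Bake object-space vertex normals into a vertex color layer named
# "Normals" so the shader noodle can read them.
import bpy

obj = bpy.context.active_object
mesh = obj.data

# Create (or reuse) the "Normals" vertex color layer.
layer = mesh.vertex_colors.get("Normals") or mesh.vertex_colors.new(name="Normals")

for loop in mesh.loops:
    n = mesh.vertices[loop.vertex_index].normal  # object-space normal
    # Remap each component from [-1, 1] to [0, 1] to store it as a color.
    layer.data[loop.index].color = ((n.x + 1) / 2, (n.y + 1) / 2, (n.z + 1) / 2)
[/code]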

Perhaps all of that becomes clearer by looking at this noodle-section again:

[screenshot of the shader node noodle]

See the node at the top left titled "Geometry". The blue dotted output of that node, named "Normal", carries the normals that are calculated directly from the geometry itself. I am using those only for testing the "inverted hull, backfaces culled" toon outline trick the guy also talks about briefly in the GG video. It works by flipping the normals of all faces in the entire volume, removing the faces that no longer face us after the flip, and using a threshold combined with the view distance to paint the remaining faces (from the inverted hull) in the outline color. (The UV-texture-based interior linework is not implemented anywhere in the Blender file yet, and I think I'll pass on that because it would require lots of texture mapping work.)
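The hull part of that trick can be set up with a Solidify modifier; here's a rough Blender Python sketch (the thickness value and the assumption that the outline material sits in the object's second material slot are mine):

[code]
# Rough sketch of the inverted-hull outline: a Solidify modifier grows a
# flipped-normal shell around the mesh; with backface culling on the
# outline material, only the shell's silhouette remains visible.
import bpy

obj = bpy.context.active_object

solidify = obj.modifiers.new(name="Outline", type='SOLIDIFY')
solidify.thickness = 0.02          # hull offset; tune per model (my guess)
solidify.use_flip_normals = True   # invert the shell's normals
solidify.material_offset = 1       # shell uses the next material slot

# Assumes the second material slot holds a flat, outline-colored material
# with backface culling enabled.
[/code]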

So, the blue dotted output "Normal" comes from the geometry, and that is fixed (unless the geometry, the mesh itself, is changed). But the yellow dot output on that node, called "Vertex Color", comes from the painted (currently baked and then untouched) normals in the "Normals" vertex color channel (or layer; conceptually identical terms for our purpose here), which is selected in the little box at the lower end of the Geometry node of the shader noodle.

I think for clean pixel-art(ish) output from this process, it is probably best to avoid interior and exterior outlines, create each detail as its own model with its own ramp for the shader, and manually paint/adjust the normals used by the shader to eliminate small variations in the geometry that would otherwise end up giving dirty patches of "stray shading" (which would be technically correct but would not give a clean cel-shaded appearance).
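Conceptually, each model's ramp boils down to quantizing a Lambert term; a plain-Python pseudo-shader sketch (the thresholds are made-up values, and the light direction is assumed to be normalized):

[code]
def unpack_normal(color):
    # Painted normals are stored as colors in [0, 1]; map back to [-1, 1].
    return tuple(2.0 * c - 1.0 for c in color)

def cel_shade(vertex_color, light_dir, ramp=(0.2, 0.5, 0.8)):
    n = unpack_normal(vertex_color)
    # Lambert term against a normalized light direction.
    lambert = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
    # Count how many ramp thresholds are cleared -> discrete shading band.
    band = sum(lambert >= t for t in ramp)
    return band / len(ramp)  # 0.0 = darkest band, 1.0 = brightest
[/code]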

Textures should also be avoided unless the target resolution and the size/zoom level at which the output is rendered are known and never going to change in the future (for the same reasons they used only 90-degree angles on the texture for the interior outlines on the models in GG).

I have not looked into AO maps.

Well, I am not trying to reproduce the effects in the GG video 100%, because I think those are meant for high resolution and a "drawn" look, while my goal is a hybrid between 3D-rendered and hand-pixeled that requires minimal cleanup (or ideally no cleanup, but I think that might also require fairly high resolution and very careful modelling and normals adjustment; so careful that, in the end, it will only pay off over doing it all in 2D from scratch if a model needs many different animations).

I'm also currently busy with other things (taking a liking to high-res, high-end 3D raytracing, what they do at Pixar) and with learning more about all the things that can be done in Blender, so I'll get back to this in a few weeks, I think.

I want to comment on "2D Animation is HARD work" as well:

So is 3D animation. It requires different kinds of hard work, though: mostly while making the actual models to be animated, and while building good rigs to be able to animate them intuitively afterwards. Mesh topology is very important there; you can't just mindlessly sculpt something and expect to animate it easily. It seems to work best with a carefully crafted topology of clean quad strips that follow the flow of the structure of the surface (e.g. muscles or folds), because otherwise things will look "ugly" and morph and squish and squash around quite uncontrollably with both bone- and mesh-shape-key-based animation. I'm still learning about all this myself, but I have no illusions anymore that 3D will be easier or even save time until one gets fairly proficient at the modelling and rigging process (and then it will probably still not look like beautifully hand-crafted pixel art... though I am starting to not mind that, as long as it does not look like the usual train-wreck 3D-rendered art that claims to look like it but does not).

One thing that is fairly easy to do is to make a very basic skeleton (like an art dummy, or even simpler), rig and animate it in 3D, and use that to paint over, like rotoscoping. You get all the fancy animation tools 3D artists use, so you can iterate on the timing of your animation faster than having to redraw each frame; then, when you're happy with it, you just draw on top of it to flesh it out. That's what the video Indigo linked seems to be all about, so it's more about how to rotoscope in Blender than about how to make a shader that automates "drawing" your models from real 3D data.
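In Blender, that tweening is just a couple of keyframes on a pose bone; a tiny sketch (the armature name "Dummy" and bone name "arm" are made up):

[code]
# Keyframe a pose bone at two frames and let Blender interpolate the
# in-betweens; retiming those keys is the fast iteration loop.
import math
import bpy

rig = bpy.data.objects["Dummy"]   # hypothetical armature object
bone = rig.pose.bones["arm"]      # hypothetical bone name

bone.rotation_mode = 'XYZ'
bone.rotation_euler = (0.0, 0.0, 0.0)
bone.keyframe_insert(data_path="rotation_euler", frame=1)

bone.rotation_euler = (0.0, 0.0, math.radians(45.0))
bone.keyframe_insert(data_path="rotation_euler", frame=12)
[/code]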

I haven't looked too closely, but I think the rim lighting near the end is not a 3D shader but compositing performed on the output image after the render pass, potentially using regular 2D edge detection algorithms and applying the effect only to selected "strokes". (I think the Grease Pencil in Blender basically creates 2D geometry which can be modified and animated in all the same ways as 3D geometry, but, lacking depth, you can't really throw any light on it... but don't take my word for it, as I'm not familiar with the details of the Grease Pencil and this is just speculation from watching that video only once.)
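If it really is plain 2D edge detection, it would be something like a Sobel filter over the rendered frame; a hedged NumPy sketch (nothing Blender-specific, and not necessarily what the video actually does):

[code]
import numpy as np

def sobel_edges(gray, threshold=0.25):
    # gray: 2D float array in [0, 1] (e.g. the luminance of the render).
    # Returns a boolean mask marking pixels with a strong gradient.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    h, w = gray.shape
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy) > threshold
[/code]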
« Last Edit: May 01, 2016, 07:32:18 pm by 0xDB »

Offline Conzeit

  • Posts: 1448

Re: 3D2D Animation Workflow Proposal

Reply #11 on: May 02, 2016, 11:27:39 pm
Just had to say I love this stuff. I think I was the first to flip out about GGXrd. I totally do want to use some 3D because it seems like a great way to tween and do environments, but I'm kinda scared of the learning curve... I know very, very little.

will look into this eventually.

EDIT: After reading the thread a bit better: Howard Day said he was already modifying the normals in his Unity experiments, and he has also shared his process for making 3D sprites before, specifically IRKALLA-style sprites (https://forums.tigsource.com/index.php?topic=35320.msg1029055#msg1029055). We should try to get him in here.

Thx tsej, we'll be talkin' =)

Has anybody tried making resizable bones like those in GGxrd? That's one of the things that interests me the most.

Also, has anybody tried contacting this yadoob person who made the GGxrd PDF? He seems to be looking at the GGxrd model himself; he reveals a nose geometry used to create an outline that isn't revealed elsewhere, AND he uses Blender.
« Last Edit: May 03, 2016, 08:07:01 pm by Conzeit »

Offline tsej

  • Posts: 77

Re: 3D2D Animation Workflow Proposal

Reply #12 on: May 03, 2016, 03:08:29 am
@Conzeit
It's actually really easy, and you can get tools for free. I plan to do a few things in the near future that might help a bit.
If you need more resources/help, PM me.
Correct me if I'm wrong

Offline astraldata

  • Posts: 391

Re: 3D2D Animation Workflow Proposal

Reply #13 on: August 13, 2016, 11:42:58 pm
Some new insight from my ventures into the 3D2D realm, trying to reproduce the Guilty Gear Xrd model shading:

The one critical piece of this workflow that has eluded me since I started this topic has been an editor that could modify a 3D model's normals from inside modern 3D modeling software, one that did NOT require or rely on old/outdated software such as Lightwave, or on uncommon software like Softimage that most 3D modelers don't know or use these days.

The good news is, I've managed to find an intuitive (and free!) normals editor in the form of an addon for the popular 3D software Blender. Take a look at this article if you'd like to know more about it:


https://www.blend4web.com/en/community/article/131/


Along with editing the ambient occlusion maps/channels, one can create 3D models that render as proper anime/cartoon art with realtime shading (using a proper shader, of course), without the gross visual artifacts that plague most 3D models with cartoon shading.

As mentioned before, these are 3D models:

[images of the cel-shaded 3D models]

Their shading is done something like what's shown in the video below:

https://youtu.be/iH3p8N7qbv8?t=270


And for a visual example of what's going on, the following is the same geometry; only the normals on the face have been edited:

[before/after images: identical geometry with edited face normals]

This is the power of normal editing, but for some reason it's just been lost to the sands of time until now. Thankfully Softimage still exists and, through Guilty Gear Xrd, has shown that it's possible to make 3D look bad-ass without relying solely on shaders. Unfortunately, fancy shaders get people thinking "if I just find the right one, I'll suddenly get insta-anime somehow!" Sadly, that's a very false assumption. The stylization comes from many places, but at the most basic level it comes from the brain saying "hmm, some part of this picture isn't right... but it was intentional... so I like it!", which could be triggered by anything out of the ordinary, from form and color to the lighting itself. Since our brains notice contrast before anything else (aside from motion), lighting alone is a very powerful tool, and a staple of any 3D visual technique, which means control over it is absolutely critical.

Being able to control normals properly is fundamental to the success of the modeling process, just as much as (or even more than!) the texturing, the shaping, or even the lighting itself.
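For those curious what "editing normals without touching the geometry" means in script terms, here's a rough Blender Python sketch using custom split normals (the sphere-projection idea and the center point are just an illustration, not the addon's actual method):

[code]
# Override the derived normals with custom split normals, without moving
# a single vertex. Bending them toward a sphere around `center` is a
# common way to smooth facial cel shading.
import bpy
from mathutils import Vector

obj = bpy.context.active_object
mesh = obj.data
center = Vector((0.0, 0.0, 0.0))  # hypothetical: e.g. the middle of the head

mesh.use_auto_smooth = True  # custom split normals need this enabled
custom = [(v.co - center).normalized() for v in mesh.vertices]
mesh.normals_split_custom_set_from_vertices(custom)
[/code]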

Once I've worked through the full process, I'll share my findings on the workflow with you guys if you're interested. I'm after the vector-art look of models because I feel that 3D (done properly, as it was in GGXrd) is the future of serious 2D animation.
« Last Edit: August 13, 2016, 11:46:23 pm by astraldata »
I'm offering free pixel-art mentorship for promising pixel artists. For details, click here.

     http://mugenzero.userboard.net/