Only have a few moments to reply, but what I meant is painting the actual *mesh* normals, not the normal map itself (I'm not even sure how I would manage painting the normal map, since Blender's rendering system isn't something I've focused much on learning; I'm more invested in the in-game side of things myself).
By my understanding, the mesh is the definition of the geometry itself: the vertices, the edges between vertices, and the definition of which vertices or edges form the faces. The normals are derived from that (the normalized cross product of any two vectors that span the plane of the face). Usually, I think the normals of a face are calculated from its triangulation and then perhaps averaged for n-gon faces. The normals for edges are probably simply the average of the normals of the adjacent faces, and the normals for vertices are probably just the averaged normals of the edges meeting in that vertex.
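To make that derivation concrete, here is a minimal sketch of the cross-product and averaging steps described above (illustrative only; Blender's actual implementation differs in its details):

```python
# Sketch: deriving normals from geometry, as described above.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def face_normal(v0, v1, v2):
    # Normalized cross product of two edge vectors spanning the face.
    return normalize(cross(sub(v1, v0), sub(v2, v0)))

def vertex_normal(adjacent_face_normals):
    # Average the normals of the adjacent faces, then renormalize.
    s = (sum(n[0] for n in adjacent_face_normals),
         sum(n[1] for n in adjacent_face_normals),
         sum(n[2] for n in adjacent_face_normals))
    return normalize(s)

# A triangle lying in the XY plane faces straight up (+Z):
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> (0.0, 0.0, 1.0)
```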
So... I don't understand what you mean by "painting the mesh normals", as that would require modifying the geometry (the mesh) itself. I think what they do in GuiltyGear is paint the normals on top of the geometry, as vertex colors or as a texture, and use those painted normals in their shader. That's conceptually exactly what I'm doing in Blender, except that so far I have only baked the normals derived from the geometry (in object space) into a vertex color layer called "Normals", which I feed to the shader for the light/shadow calculations. Those baked normals are just a starting point; for cleaning up the shading, they can be painted manually very much the same way as shown in the Softimage cel-shading normal video that you linked to. And yes, you can also have a second 3D view next to the painting window to preview in realtime how the modified normals affect the shading. (I noticed that Softimage video is several years old... Blender has come a long way and is pretty powerful these days, but it takes some dedication to learn to use that power.)
Perhaps this last paragraph becomes clearer by looking at this noodle section again:
See the node at the top left titled "Geometry". The blue dotted output of that node, named "Normal", carries the normals that are directly calculated from the geometry itself. I am using those only for testing the "inverted hull, backface-culled toon outline" trick the guy also talks about briefly in the GG video. It works by flipping the normals of all faces of the entire volume, removing the faces that don't face us after the flip, and using a threshold combined with the view distance to paint the remaining faces (from the inverted hull) in the outline color. (The UV-texture-based interior linework is not implemented anywhere in the Blender file yet, and I think I'll pass on that because it would require lots of texture mapping work.)
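As a hypothetical sketch of that inverted-hull trick (not how Blender does it internally): duplicate the mesh, push the vertices out along their normals by the outline thickness, and reverse each face's winding so its derived normal points inward. With backface culling enabled, only the back side of the shell stays visible, forming a silhouette outline around the original model.

```python
# Sketch of the "inverted hull" outline trick (names and data layout are
# made up for illustration).

def inverted_hull(vertices, normals, faces, thickness=0.02):
    # Push each vertex outward along its (unit) vertex normal.
    shell_verts = [
        (v[0] + n[0] * thickness,
         v[1] + n[1] * thickness,
         v[2] + n[2] * thickness)
        for v, n in zip(vertices, normals)
    ]
    # Reversing a face's vertex order flips its winding, and with it the
    # direction of the face normal derived from that winding.
    shell_faces = [tuple(reversed(f)) for f in faces]
    return shell_verts, shell_faces

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
norms = [(0, 0, 1)] * 3
faces = [(0, 1, 2)]
shell_verts, shell_faces = inverted_hull(verts, norms, faces, thickness=0.1)
print(shell_faces)  # -> [(2, 1, 0)]
```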
So, the blue dotted output "Normal" comes from the geometry, and that's fixed (unless the geometry, the mesh itself, is changed). But the yellow dot output on that node called "Vertex Color" comes from the painted (currently baked and then untouched) normals in the "Normals" vertex color channel (or layer; conceptually identical terms for our purpose here), which is selected in the little box at the lower end of the Geometry node of the shader noodle.
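For storing normals in a color channel, the usual convention (which I assume is what the bake produces) is to remap each component from the [-1, 1] normal range into the [0, 1] color range, and decode it again in the shader before doing the lighting math:

```python
# Assumed encoding for baking object-space normals into vertex colors:
# [-1, 1] per component is remapped to the [0, 1] color range.

def normal_to_color(n):
    return tuple(c * 0.5 + 0.5 for c in n)

def color_to_normal(col):
    return tuple(c * 2.0 - 1.0 for c in col)

n = (0.0, 0.0, 1.0)  # straight "up" in object space
print(normal_to_color(n))                    # -> (0.5, 0.5, 1.0)
print(color_to_normal(normal_to_color(n)))   # -> (0.0, 0.0, 1.0)
```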
I think for clean pixel-art(ish) output from this process it is probably best to avoid interior and exterior outlines, create each detail as its own model with its own ramp for the shader, and manually paint/adjust the normals used by the shader to eliminate small variations in the geometry that would otherwise give dirty patches of "stray shading" (which would be technically correct, but just not a clean cel-shaded appearance).
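The per-model ramp idea can be sketched as follows: compute a Lambert term from the (painted) normal and the light direction, then snap it to a few discrete bands instead of using the smooth falloff. The band thresholds and intensities here are made up for illustration:

```python
# Minimal cel-shading ramp sketch (thresholds/intensities are invented).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cel_shade(normal, light_dir, ramp=((0.5, 1.0), (0.0, 0.6))):
    # ramp: (threshold, output intensity) pairs, checked brightest-first.
    d = max(dot(normal, light_dir), 0.0)
    for threshold, intensity in ramp:
        if d >= threshold:
            return intensity
    return 0.0

light = (0.0, 0.0, 1.0)
print(cel_shade((0.0, 0.0, 1.0), light))  # facing the light -> 1.0
print(cel_shade((1.0, 0.0, 0.0), light))  # perpendicular -> shadow band 0.6
```

This is also why tiny normal variations give "stray shading": a normal that wobbles across a band threshold flips the output between two flat tones, which is exactly what painting the normals smooth is meant to prevent.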
Textures should also be avoided unless the target resolution and the size/zoom level at which the output is rendered are known and never going to change in the future (for the same reasons they used only 90-degree angles on the texture for the interior outlines on the models in GG).
I have not looked into AO maps.
Well, I am not trying to reproduce the effects in the GG video 100%, because I think those are meant for high resolution and a "drawn" look, while my goal is a hybrid between 3D-rendered and hand-pixeled that requires minimal cleanup. (Ideally no cleanup, but I think that would also require fairly high resolution and very careful modelling and normals adjustment; so careful that, in the end, it will only pay off in time saved over doing it all in 2D from scratch if a model needs many different animations.)
Also, I'm currently busy with other things (taking a liking to high-res, high-end 3D raytracing, what they do at Pixar) and learning more about all the things that can be done in Blender, so I'll get back to this in a few weeks, I think.
I want to comment on "2D Animation is HARD work" as well:
So is 3D animation. It requires different kinds of hard work, though, mostly while making the actual models to be animated and building good rigs to animate them intuitively afterwards. Mesh topology is very important there: you can't just mindlessly sculpt something and expect to animate it easily. It seems to work best with a carefully crafted topology of clean quad strips that follow the flow of the structure of the surface, e.g. muscles or folds, because otherwise things will look "ugly" and morph and squish and squash around quite uncontrollably with both bone-based and mesh-shape-key-based animation. I'm still learning about all this myself, but I have no illusions anymore that 3D will be easier or even save time until one gets fairly proficient at the modelling and rigging process (and then it will probably still not look like beautifully hand-crafted PixelArt... though I am starting not to mind that, as long as it does not look like the usual train-wreck 3D rendered art that claims to look like it but does not).
One thing that is fairly easy to do is to make a very basic skeleton (like an art dummy or even simpler), rig and animate it in 3D, and use that to paint over, like rotoscoping. Then you get all the fancy animation tools 3D artists use to iterate on the timing of your animation faster than having to draw each frame, and when you're happy with it, you just draw on top of it to flesh it out. That's what the video Indigo linked seems to be all about, so it's more about how to rotoscope in Blender than about how to make a shader that automates "drawing" your models from real 3D data.
I haven't looked too closely, but I think the rim lighting near the end is not any 3D shader but only compositing performed on the output image after the render pass, potentially using regular 2D edge detection algorithms and only applied to selected "strokes". (I think the Grease Pencil in Blender basically creates 2D geometry which can be modified and animated in all the same ways as 3D geometry, but lacking depth, you can't really throw any light on it... but don't take my word for it, as I'm not familiar with the details of the Grease Pencil, and this is just speculation from watching that video only once.)
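Since this is speculation anyway, here is a purely illustrative stand-in for what such a 2D compositing pass might do: a simple gradient-based edge detector over a rendered grayscale image, whose result could then be masked and tinted to fake rim light.

```python
# Speculative sketch: gradient-magnitude edge detection on a grayscale
# image, as a stand-in for a compositor-side rim-light mask.

def edge_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]  # vertical gradient
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A tiny image with a vertical edge between a dark and a bright half:
img = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
edges = edge_magnitude(img)
print(edges[1])  # -> [0.0, 1.0, 1.0, 0.0], peaking where the edge sits
```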