Experiment: using AI to turn pixel art into paintings

Gil:
I'm using Waifu2x with noise reduction level 3 at double resolution, then running the result through ESRGAN, and then doing another Waifu2x scale pass without noise reduction. I'm getting really nice results. If you want to try a similar experiment, you'll have to play around with it: ESRGAN is wonderful, but it doesn't seem very good as the first step.

Waifu2x: https://github.com/nagadomi/waifu2x
ESRGAN: https://github.com/xinntao/ESRGAN
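
For reference, here is a minimal Python sketch of the three-pass pipeline Gil describes (Waifu2x with noise reduction, then ESRGAN, then a clean Waifu2x scale pass). The flag names, script names, and folder layout are assumptions based on the linked repositories and vary between versions and ports, so adapt them to the builds you actually have.

Code:
import subprocess

def waifu2x(src, dst, noise_level=None):
    # One waifu2x pass: 2x scale, with optional noise reduction.
    # Flags follow the Torch CLI (waifu2x.lua); adjust for other ports.
    mode = "noise_scale" if noise_level is not None else "scale"
    cmd = ["th", "waifu2x.lua", "-m", mode, "-i", src, "-o", dst]
    if noise_level is not None:
        cmd += ["-noise_level", str(noise_level)]
    subprocess.run(cmd, check=True)

def esrgan():
    # ESRGAN's bundled test script typically reads images from LR/ and
    # writes *_rlt.png files into results/; the exact invocation and
    # model path differ between versions, so treat these as placeholders.
    subprocess.run(["python", "test.py", "models/RRDB_ESRGAN_x4.pth"],
                   check=True, cwd="ESRGAN")

# Pass 1: waifu2x, noise reduction level 3, 2x scale.
waifu2x("sprite.png", "ESRGAN/LR/sprite.png", noise_level=3)
# Pass 2: ESRGAN 4x over the intermediate image.
esrgan()
# Pass 3: waifu2x scale only, no noise reduction.
waifu2x("ESRGAN/results/sprite_rlt.png", "final.png")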


Original: [pixel art image]

Paintings: [upscaled results]

pistachio:
Hey Gil.
Looks pretty good, though there is some weird convolution going on where there is no AA.
So because of these artifacts it would probably be an interesting upscaling algorithm for more heavily rendered/realistic stuff. (AFAIK there was upscaling posted here many years back for flatter stuff, no AI invasion involved though, so it was basically just pragmatic.)
Six years on, machine learning is still arcane voodoo to me, so I don't have anything to say yet about the tech behind it.
But I'll compile the tools and maybe give this a shot with PA in different styles for testing, with credit of course.

Gil:
I don't mind the convolution artifacts that much (I could explain exactly why they appear, btw, and you're correct that it's partly because of the sharp edges), because they make for a good story when I show these.

I've been thinking about going back and touching up areas that don't scale up nicely though. I assume that minor touchups could easily remove most of the artifacts.

That also brings me to maybe the most important aspect of this: I think this is another great tool for artists to use, but it's more fun if it's not a magic bullet. My next experiment will be to try changing the input to get better output and to work with better intermediate images. I think the real fun will be to scale up, make adjustments, scale up, make adjustments, in several steps.

Very thought-provoking in any case to work with these new technologies.
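
A minimal sketch of that scale-then-retouch loop, assuming a hypothetical upscale() wrapper around whichever model you use; the input() pause just stands in for the manual touch-up round in your paint program.

Code:
def upscale(src, dst):
    # Placeholder: wrap whichever upscaler you settle on (waifu2x,
    # ESRGAN, or a combination of passes) behind this call.
    raise NotImplementedError("hook up your upscaler here")

def iterate(src, passes=3):
    # Alternate an automated upscale pass with a manual touch-up round,
    # feeding the edited result into the next pass.
    current = src
    for i in range(1, passes + 1):
        out = f"pass{i}.png"
        upscale(current, out)
        input(f"Touch up {out} in your paint program, then press Enter...")
        current = out
    return current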

pistachio:
Can't get at those 2x2 clusters :crazy: but I'll give it the slack it needs. It is pretty cool what it dredges up when you feed it even this limited info.

And this is way, way out there, but I'm thinking it would beat the upscaling algorithms you get in some emulators, until you factor in the time cost (and probably being limited to static art; it would be cool to see if animations go wack, I must try that). Until that day I'm still seeing only a weird, limited use case here.

That said, most useful stuff (and probably most revolutionary stuff!) starts out like that: a fun tool, a "useless" side project. It gets you to places you'd never go by building a straight tool with a single goal in mind. Hammer/nail. OFC it depends on the situation. Yes, keep it up, dude.

Gil:
I think it's very useful for upscaling PS1 games. Like you said, it's probably still a bit expensive to do on the fly, but you can prebake all the textures and use an emulator that supports texture packs; there are plenty of those out there for different platforms (PS1 and N64 for sure, not sure if there are SNES ones).

https://www.theverge.com/2019/4/18/18311287/ai-upscaling-algorithms-video-games-mods-modding-esrgan-gigapixel
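
A rough sketch of that prebaking step, under the assumption that the emulator can dump its textures to one folder and load replacements from another; the folder names and the waifu2x call are illustrative placeholders, not tied to any particular emulator.

Code:
import subprocess
from pathlib import Path

DUMP_DIR = Path("texture_dump")   # textures dumped by the emulator
PACK_DIR = Path("texture_pack")   # folder the emulator loads replacements from

def upscale(src, dst):
    # One waifu2x scale pass per texture, as in the earlier sketch;
    # swap in ESRGAN or a combined pipeline as needed.
    subprocess.run(["th", "waifu2x.lua", "-m", "scale",
                    "-i", str(src), "-o", str(dst)], check=True)

# Walk the dump, upscale each texture, and mirror the folder layout so
# the emulator can match replacements to the originals. Naming and
# layout conventions differ per emulator, so check its documentation.
for tex in DUMP_DIR.rglob("*.png"):
    out = PACK_DIR / tex.relative_to(DUMP_DIR)
    out.parent.mkdir(parents=True, exist_ok=True)
    upscale(tex, out)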
