Topic: Pixaki  (Read 13529 times)

Offline 32

  • 0011
  • **
  • Posts: 535
  • Karma: +1/-0
    • @AngusDoolan
    • http://pixeljoint.com/p/19827.htm
    • View Profile

Re: Pixaki

Reply #10 on: May 30, 2016, 11:24:55 am
1. Definitely no expert on this, but that's generally how I would expect it to work; whether there are more effective ways to do it, I don't know.

2. You're definitely right that losing colour detail in this process makes it basically useless. I think if the new palette has fewer entries I would match up closest colours and keep the extra entries. Options make a happy artist though so an interactive colour merge sounds great ;D

3. In addition to what you've mentioned, an option to colour the front and back frames is extremely useful: say the forward frame is tinted green and the back frame red. Otherwise it's basically impossible to distinguish between the two frames if you have them both turned on. Being able to see multiple frames forward and backward with diminishing opacity is also nice, but there are fewer instances where I feel I need that functionality in normal pixel art animating.

Offline Ai

  • 0100
  • ***
  • Posts: 1057
  • Karma: +2/-0
  • finti
    • http://pixeljoint.com/pixels/profile.asp?id=1996
    • finticemo
    • View Profile

Re: Pixaki

Reply #11 on: May 30, 2016, 12:22:33 pm
Does anyone have any thoughts on how the colour quantisation algorithm should work? My first thought was maybe to calculate the difference in hue, saturation, and brightness for each colour in the palette and the target colour, then take the palette colour with the smallest ΔH + ΔS + ΔB?
No. I mean, it's an understandable first thought, and the basic method is sound, but it would give crap results because HS* are pretty bad for measuring difference, so please don't do that. I'd also suggest staying away from standard -- non-linear -- RGB for this purpose.

Ideally: see https://en.wikipedia.org/wiki/Color_difference . Delta-E, which is described in this article, does a pretty good job, even the simpler versions of it. It uses LAB or LCH colorspace, depending on exactly which version you are using. GIMP's 'convert to indexed' quantizes using LAB colorspace, so it can serve as a demo of the results.

However, that is relatively computation-intensive, and also tricky to implement in a genuinely correct way. So if you can't do that, use linear RGB differences (see https://en.wikipedia.org/wiki/SRGB#The_reverse_transformation , and ignore the part that turns linear RGB into XYZ). Linearization of RGB can be done with a 256-entry lookup table, so it's fast, and mathematically it's pretty easy to do right.
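For what it's worth, the lookup-table approach above can be sketched in a few lines of Python. This is just a rough illustration, not Pixaki's code; the function names and the palette values are made up for the example:

```python
# Build a 256-entry lookup table for the sRGB -> linear transfer function.
def _srgb_to_linear(c8):
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

LUT = [_srgb_to_linear(i) for i in range(256)]

def nearest(colour, palette):
    """Return the palette entry with the smallest squared distance
    to `colour`, measured in linear RGB. Colours are (r, g, b) 0-255."""
    cl = tuple(LUT[v] for v in colour)
    def dist2(p):
        pl = tuple(LUT[v] for v in p)
        return sum((a - b) ** 2 for a, b in zip(cl, pl))
    return min(palette, key=dist2)

# Example palette; a desaturated red should snap to the dark red entry.
palette = [(0, 0, 0), (255, 255, 255), (136, 0, 0), (0, 136, 0)]
print(nearest((200, 30, 30), palette))
```

Squared distance is fine here since we only compare magnitudes; skipping the square root saves a little work per pixel.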
« Last Edit: June 01, 2016, 09:29:03 am by Ai »
If you insist on being pessimistic about your own abilities, consider also being pessimistic about the accuracy of that pessimistic judgement.

Offline Probo

  • 0010
  • *
  • Posts: 317
  • Karma: +1/-0
    • View Profile

Re: Pixaki

Reply #12 on: May 31, 2016, 12:49:24 pm

1. I don't have a suggestion for an algorithm, but if it's not too much work to integrate the kind of existing algorithms Ai mentioned, why not add multiple algorithms and give us some choice in the UI? That would be pretty cool, I've got to say.

2. Totally agree with what 32 said. A detailed palette merger where you can either manually decide on a per-colour basis whether a colour comes over (and choose which colour it replaces, etc.), or just apply an automatic colour-matching algorithm that does the work for you and stores the unused new colours in the palette, would be good. A 'remove unused colours' option like GG's would be good too, either in the general palette options or as a button.
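The automatic version of that merge is pretty simple to sketch. This is a hypothetical illustration (plain squared-RGB distance as a placeholder metric, made-up function name): every old palette entry maps to its closest match in the new palette, and new entries nothing mapped to are kept as extras.

```python
def merge_palettes(old, new):
    """Map each colour in `old` to its nearest colour in `new`;
    return the mapping plus the unused `new` entries as extras."""
    def nearest(c):
        return min(new, key=lambda p: sum((a - b) ** 2 for a, b in zip(c, p)))
    mapping = {c: nearest(c) for c in old}
    extras = [p for p in new if p not in mapping.values()]
    return mapping, extras
```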

3. Yeah, that sounds good. I'd like the option to show only the frame behind or only the frame in front though; it can sometimes be a bit confusing to see both.

Offline Ai

  • 0100
  • ***
  • Posts: 1057
  • Karma: +2/-0
  • finti
    • http://pixeljoint.com/pixels/profile.asp?id=1996
    • finticemo
    • View Profile

Re: Pixaki

Reply #13 on: June 02, 2016, 09:08:50 am

1. I don't have a suggestion for an algorithm, but if it's not too much work to integrate the kind of existing algorithms Ai mentioned, why not add multiple algorithms and give us some choice in the UI? That would be pretty cool, I've got to say.
MTPaint is a good(?) example of a program that does this.

It offers the following options:

* Number of colours
* Color matching:
  * Colorspace:
    * RGB
    * sRGB
    * LXN [note: similar to LAB. I have no idea why they felt they had to implement this instead.]
  * Difference measure:
    * Largest (Linf)
    * Sum (L1)
    * Euclidean (L2)
  * Reduce color bleed:
    * Gamut
    * Weakly
    * Strongly
  * Serpentine scan
  * Error propagation % [note: only applies to FS dithers AFAIK; used to reduce 'noisiness']
  * Selective error propagation: Off | Separate/Split | Separate/Sum | Length/Sum | Length/Split
  * [X] Full Error precision
* Method of deriving palette:
  * Exact
  * Current palette
  * PNN Quantize (slow, better quality)
  * Wu Quantize (fast)
  * Min-Max quantize (best for small pictures and dithering)
* Dithering:
  * None
  * Floyd-Steinberg
  * Floyd-Steinberg (quick)
  * Stucki
  * Dithered (effect)
  * Scattered (effect)

To me, this is an example of what not to do -- overwhelm the user with a huge array of options, rather than hardcoding many settings to generally-sensible values. However, it does illustrate the breadth of the problem domain, and the program itself could also serve as a tool for you, @rizer, to figure out exactly what color reduction algorithm is best for you to implement in Pixaki.
If you insist on being pessimistic about your own abilities, consider also being pessimistic about the accuracy of that pessimistic judgement.

Offline rizer

  • 0001
  • *
  • Posts: 10
  • Karma: +0/-0
    • View Profile
    • Pixaki

Re: Pixaki

Reply #14 on: June 02, 2016, 10:28:40 am
That's a lot of options!

Thanks for the feedback, especially @Ai for pointing me to Delta-E  ;D

I've been busy writing the code over the last couple of days, and I've just finished my Delta-E implementation, which seems to be working. I went with the '94 formula, which seems to be a nice middle ground between accuracy and speed. Quantising an image is my next step… maybe I'll post the results.
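For anyone following along, the simpler '76 variant is just Euclidean distance in LAB, so it makes a useful baseline. Here's a rough sketch (D65 white point, 8-bit sRGB input assumed; not the '94 formula above, which adds weighting terms on top of this):

```python
def srgb_to_lab(rgb):
    # 1. sRGB -> linear RGB
    def lin(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(v) for v in rgb)
    # 2. linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # 3. XYZ -> LAB
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    xn, yn, zn = 0.95047, 1.0, 1.08883   # D65 reference white
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(rgb1, rgb2):
    """Delta-E 76: straight Euclidean distance in LAB."""
    l1, a1, b1 = srgb_to_lab(rgb1)
    l2, a2, b2 = srgb_to_lab(rgb2)
    return ((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2) ** 0.5
```

Black to white comes out at a Delta-E of about 100, which is a handy sanity check for any implementation.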

@32 The shift in colours for the onion skin sounds like a good idea, and shouldn't be too hard to implement.

Thanks everyone!

Offline rizer

  • 0001
  • *
  • Posts: 10
  • Karma: +0/-0
    • View Profile
    • Pixaki

Re: Pixaki

Reply #15 on: June 02, 2016, 05:45:38 pm
And here are some results…


Original on the left, quantised to the Android Arts palette on the right.

Full size original | Full size quantised

It's pretty slow at the moment, particularly for such a large image, but the results seem good to me. Any thoughts?

A couple of sites that have been really useful for this are ColorMine and EasyRGB if anyone's interested in this sort of thing (and Wikipedia of course!).

Offline Ai

  • 0100
  • ***
  • Posts: 1057
  • Karma: +2/-0
  • finti
    • http://pixeljoint.com/pixels/profile.asp?id=1996
    • finticemo
    • View Profile

Re: Pixaki

Reply #16 on: June 03, 2016, 07:15:05 am
Nominally good, yes.

By which I mean, that sample picture is pretty big by pixel art standards. So while your results look pretty good -- color matching is quite accurate -- I don't think they really represent the scale or style of picture that people will be likely to reduce.

I filtered your sample a bit (GMIC Anisotropic smooth + local normalization + gamma correct downscale to 25% the size) to arrive at something that IMO is more representative:



GIMP reduction of this image to Android Arts palette:


(tell me if the above images don't show up. They work for me but I'm getting suspicious of Imgur recently)

EDIT: To clarify, a wide variety of images might be used. But I don't really see why any of them would be hi res or contain many fine details, given the context.
« Last Edit: June 04, 2016, 12:30:20 am by Ai »
If you insist on being pessimistic about your own abilities, consider also being pessimistic about the accuracy of that pessimistic judgement.

Offline rizer

  • 0001
  • *
  • Posts: 10
  • Karma: +0/-0
    • View Profile
    • Pixaki

Re: Pixaki

Reply #17 on: June 04, 2016, 01:23:12 pm
Thanks @Ai — those images seem to be working fine. Yeah, I agree it's good to test results with images that are closer to what people are likely to be using. Here are my results with the image you posted:



It's certainly different. Do you think the GIMP results are better? I wonder whether they're either using a different Delta-E algorithm, or performing some other steps (or maybe both).

Thanks!

Offline rizer

  • 0001
  • *
  • Posts: 10
  • Karma: +0/-0
    • View Profile
    • Pixaki

Re: Pixaki

Reply #18 on: June 04, 2016, 01:31:37 pm
Thought I'd quickly implement the Delta-E 76 algorithm to see what the results are:



Certainly closer to my previous results than to GIMP's. I would guess that GIMP is doing some sort of smoothing, which is probably desirable for photos, but less so for pixel art.

Offline Ai

  • 0100
  • ***
  • Posts: 1057
  • Karma: +2/-0
  • finti
    • http://pixeljoint.com/pixels/profile.asp?id=1996
    • finticemo
    • View Profile

Re: Pixaki

Reply #19 on: June 04, 2016, 02:23:21 pm
I think your results are better. The '76 algorithm seems better, all other things being equal.

FWIW, GIMP doesn't do "smoothing" per se. It builds a LAB-colorspace-based histogram of the image and divides this histogram up into chunks ("bins"), each of which it assigns a color. If a color is inside that bin, it gets that color. Naturally, this is faster and somewhat less accurate than doing color difference calculations for each unique input color (which I guess is what you are currently doing. Anyway, AFAIK the algo I described for GIMP is known as octree cut or median cut - pretty standard.)
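A rough median-cut sketch, for anyone curious how the bin-splitting works: repeatedly split the box of colours along its widest channel at the median, then average each box into one palette entry. This is a toy version with made-up names; GIMP's actual implementation differs in detail.

```python
def median_cut(colours, n_colours):
    """colours: list of (r, g, b); returns up to n_colours palette entries."""
    boxes = [list(colours)]
    while len(boxes) < n_colours:
        # Pick the box with the widest range on any channel.
        def spread(box):
            return max(max(c[ch] for c in box) - min(c[ch] for c in box)
                       for ch in range(3))
        box = max(boxes, key=spread)
        if spread(box) == 0:
            break   # every remaining box is a single colour
        # Sort the box along its widest channel and split at the median.
        ch = max(range(3),
                 key=lambda k: max(c[k] for c in box) - min(c[k] for c in box))
        box.sort(key=lambda c: c[ch])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]
    # Average each box down to one palette colour.
    return [tuple(sum(c[ch] for c in box) // len(box) for ch in range(3))
            for box in boxes]
```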
If you insist on being pessimistic about your own abilities, consider also being pessimistic about the accuracy of that pessimistic judgement.