Discuss Scratch

MartinBraendli2
Scratcher
100+ posts

Non-photorealistic rendering projects

Layzej wrote:

MartinBraendli2 wrote:

Just keep in mind that you can't take a simple average, since colour/brightness isn't linear. The average red value of 3 colours is not
(r1+r2+r3)/3
Actually it's
sqrt(r1*r1+r2*r2+r3*r3)/3

Would this affect the 3d colour distance formula as well?

sqrt((r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2)

Firstly, I just realized that I got the formula for the average wrong. For 3 colours it should be
sqrt((r1*r1+r2*r2+r3*r3)/3)
(division before taking the square root)


And yes, I think to be mathematically correct you'd need to do the same to get the correct colour distance. If we have two colours c1(100,0,0) and c2(200,0,0) then, according to the (corrected) formula, the average should be m(158,0,0). If you now use your formula to calculate the colour distance, for c1-m you'd get 58, while for c2-m you'd get 42. IMO both colours should have the same colour distance to the average. So the formula should be

deltaR = sqrt(abs(r1^2-r2^2)) 
deltaG = sqrt(abs(g1^2-g2^2))
deltaB = sqrt(abs(b1^2-b2^2))
colorDist = sqrt(deltaR^2+deltaG^2+deltaB^2)
So in short you'd get
sqrt(abs(r1^2-r2^2)+abs(g1^2-g2^2)+abs(b1^2-b2^2))
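A minimal sketch of the two metrics in Python (the function names are mine; 8-bit gamma-encoded channel values are assumed):

```python
import math

def naive_distance(c1, c2):
    # Plain Euclidean distance on the stored (gamma-encoded) channel values.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def corrected_distance(c1, c2):
    # Difference the squared channel values (approximately linear intensity):
    # sqrt(abs(r1^2-r2^2) + abs(g1^2-g2^2) + abs(b1^2-b2^2))
    return math.sqrt(sum(abs(a * a - b * b) for a, b in zip(c1, c2)))

c1, c2 = (100, 0, 0), (200, 0, 0)
# The corrected average from above, sqrt((100^2 + 200^2)/2) ~ 158.1:
m = (math.sqrt((100 ** 2 + 200 ** 2) / 2), 0, 0)
d1, d2 = corrected_distance(c1, m), corrected_distance(c2, m)
print(round(d1, 2), round(d2, 2))  # both ~122.47: equidistant from the average
```

With the corrected metric both colours sit at the same distance from their average, which is exactly the symmetry argued for above.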


However I don't know whether such a formula would make sense in practice. Other formulas reflect our colour perception better.

Off topic: Do Canadians use British English or use a mix of BE/AE? (because you spelled “colour”)
Edit: Changed brackets, so the numbers are now visible.

Last edited by MartinBraendli2 (June 13, 2016 07:06:12)


gtoal
Scratcher
1000+ posts

Non-photorealistic rendering projects

gtoal wrote:

novice27b wrote:

I came across this interesting article about different dithering algorithms:

http://www.tannerhelland.com/4660/dithering-eleven-algorithms-source-code/

I don't have time to implement any at the moment, but maybe someone else does.

Good article! Clearly the thing to implement is not the 11 different algorithms but the general-purpose matrix solution…

Notice the close relationship between this and a convolution kernel. It makes me think there is another, more general-purpose operation on an image matrix of which a convolution kernel and a dither matrix are special cases (subsets). I wonder if that more general operation has been implemented, or has a name? And are there other as-yet undiscovered neat tricks that can be done with it?

Note that dithering applies carried-forward data to the right and down. This general operation I'm hypothesizing would allow data to be carried backwards as well (ie the target being any element of the matrix). If a subsequent part of the operation was convolution-kernel-like, it would make sense to be able to access modified data above and left of the pixel under examination as well.

With the graphics processors we have nowadays, this hypothetical operation would be cheap to implement on a graphics card.

G
Layzej
Scratcher
100+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

So in short you'd get
sqrt(abs(r1^2-r2^2)+abs(g1^2-g2^2)+abs(b1^2-b2^2))

Thanks!

MartinBraendli2 wrote:

Off topic: Do Canadians use British English or use a mix of BE/AE? (because you spelled “colour”)

Primarily BE, but with a smattering of AE, though pronunciation can often be French rather than either British or American. We would pronounce foyer as fo-yay rather than the American fo-yer, and a filet of fish fil-ay rather than the British fill-et.
gtoal
Scratcher
1000+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

deltaR = sqrt(abs(r1^2-r2^2)) 
deltaG = sqrt(abs(g1^2-g2^2))
deltaB = sqrt(abs(b1^2-b2^2))
colorDist = sqrt(deltaR^2+deltaG^2+deltaB^2)
So in short you'd get
sqrt(abs(r1^2-r2^2)+abs(g1^2-g2^2)+abs(b1^2-b2^2))

It occurs to me that the desired colour is a vector in a 3D space and the actual colour is also a vector in a 3D space, and the closeness of the two is a combination of the angle between them, θ = arccos(P⋅Q / (∣P∣∣Q∣)), and the length of the vector joining one to the other (which is what I think you've derived above)

Anyway, I'm saying we can reuse 3D vector maths formulae without having to reinvent anything if we treat the colours as the right kind of 3D vector, as opposed to simply a point in 3D space?

G

Last edited by gtoal (June 12, 2016 23:06:20)

Layzej
Scratcher
100+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

I think to be mathematically correct you'd need to do the same to get the correct colour distance. …
However I don't know whether such a formula would make sense in practice. Other formulas reflect our colour perception better.

This tests three different colour difference methods to draw the @NPRguy using the C64 palette:

1) DeltaE 1994 (theoretically this should match how your eye perceives distances between colours)
2) 3D distance. sqrt((r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2)
3) like above but corrected for the fact that the computer stores the square root of the intensity rather than absolute intensity: sqrt(sqrt((r2^2-r1^2)^2 + (g2^2-g1^2)^2 + (b2^2-b1^2)^2))
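For what it's worth, the difference between methods 2 and 3 shows up in a toy nearest-colour lookup (the palette, colour values, and helper names here are illustrative, not the project's actual code):

```python
def dist_plain(c1, c2):
    # Method 2: squared Euclidean distance on stored channel values.
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def dist_corrected(c1, c2):
    # Method 3 (outer square roots dropped, which doesn't change the ranking):
    # compare squared channel values, approximating linear-light intensity.
    return sum(abs(a * a - b * b) for a, b in zip(c1, c2))

def nearest(colour, palette, dist):
    # Pick the palette entry closest to `colour` under the given metric.
    return min(palette, key=lambda p: dist(colour, p))

palette = [(0, 0, 0), (128, 128, 128), (255, 255, 255)]  # toy 3-colour palette
c = (195, 195, 195)
print(nearest(c, palette, dist_plain))      # (255, 255, 255)
print(nearest(c, palette, dist_corrected))  # (128, 128, 128)
```

A grey of 195 is nearer white by stored value, but nearer the mid grey once intensities are compared in (approximately) linear light, so the two metrics really can pick different palette entries.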

MartinBraendli2
Scratcher
100+ posts

Non-photorealistic rendering projects

Layzej wrote:

MartinBraendli2 wrote:

I think to be mathematically correct you'd need to do the same to get the correct colour distance. …
However I don't know whether such a formula would make sense in practice. Other formulas reflect our colour perception better.

This tests three different colour difference methods to draw the @NPRguy using the C64 palette:

1) DeltaE 1994 (theoretically this should match how your eye perceives distances between colours)
2) 3D distance. sqrt((r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2)
3) like above but corrected for the fact that the computer stores the square root of the intensity rather than absolute intensity: sqrt(sqrt((r2^2-r1^2)^2 + (g2^2-g1^2)^2 + (b2^2-b1^2)^2))
I'm surprised by a) the poor results of DeltaE 1994 (which needs a LOT of calculation power) and b) the good results of the “corrected” 3D distance. Now I HAD to update my Floyd-Steinberg Dithering project:






The corrected 3d distance fixes the problem I had with the HTML colours (losing colour):






Intuitively I would have guessed that the HTML colours would give better results than the C64 palette (since C64 colours are so dull). However, the results show the opposite. The C64 colours were chosen not only to reflect all colours, but also the whole luminance spectrum:

The results show that for images, the C64 palette is superior to the HTML palette (both 16 colours):




So in conclusion
sqrt(abs(r1^2-r2^2)+abs(g1^2-g2^2)+abs(b1^2-b2^2))
seems superior to
sqrt((r1-r2)^2+(g1-g2)^2+(b1-b2)^2)
I wonder why I have never seen the “corrected” formula before.
BTW, why do you use sqrt(sqrt((r2^2-r1^2)^2 + (g2^2-g1^2)^2 + (b2^2-b1^2)^2))?

Layzej wrote:

MartinBraendli2 wrote:

Off topic: Do Canadians use British English or use a mix of BE/AE? (because you spelled “colour”)

Primarily BE but with a smattering of AE though pronunciation can often be French rather than either British or American. We would pronounce foyer as fo-yay rather than the American fo-yer and a filet of fish fil-ay rather than the British fill-et.
In (German-speaking) Switzerland it's not just the pronunciation but also the vocabulary that is influenced by our French-speaking minority. We use words like “Trottoir” and “Glace”, while in other German-speaking countries “Gehsteig” and “Eis” (sidewalk, ice cream) are used. There are other quirks: we use “parkieren” and “grillieren”, which are derived from the French “parquer” and “griller”, while Germans use “parken” and “grillen”, which are derived from the English “to park” and “to grill”.


Edit: I just realized that square brackets hide stuff in BBCode, so you weren't able to see the numbers I used in my example. With parentheses they are:

MartinBraendli2 wrote:

And yes, I think to be mathematically correct you'd need to do the same to get the correct colour distance. If we have two colours c1(100,0,0) and c2(200,0,0) then, according to the (corrected) formula, the average should be m(158,0,0). If you now use your formula to calculate the colour distance, for c1-m you'd get 58, while for c2-m you'd get 42. IMO both colours should have the same colour distance to the average. So the formula should be

Edit2: Another thing I just realized is that the Floyd-Steinberg algorithm diffuses error as if RGB colour were linear (which it isn't). This might explain why some of the images get too dark/bright. I'll try whether squaring makes a difference.
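That squaring experiment can be sketched for a single grey channel, with Floyd-Steinberg error diffusion done on squared values (this is my own illustrative code, not the project's):

```python
def fs_dither_squared(pixels, levels=(0, 255)):
    # Floyd-Steinberg on squared channel values (approximately linear light).
    # pixels: 2D list of grey values 0..255; returns a 2D list drawn from levels.
    h, w = len(pixels), len(pixels[0])
    lin = [[float(v) * v for v in row] for row in pixels]  # square = 'linearise'
    pal = [float(p) * p for p in levels]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = lin[y][x]
            i = min(range(len(pal)), key=lambda k: abs(pal[k] - old))
            out[y][x] = levels[i]
            err = old - pal[i]
            # Standard FS weights: 7/16 right; 3/16, 5/16, 1/16 on the row below.
            if x + 1 < w:
                lin[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    lin[y + 1][x - 1] += err * 3 / 16
                lin[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    lin[y + 1][x + 1] += err * 1 / 16
    return out

flat = [[180] * 16 for _ in range(16)]
result = fs_dither_squared(flat)
white = sum(v == 255 for row in result for v in row)
# 180^2 is almost exactly half of 255^2, so roughly half the pixels go white.
```

Since 180²/255² ≈ 0.50, a flat 180 field should come out about half white and half black. Diffusing the raw values instead would make about 180/255 ≈ 71% of the pixels white, which would explain images coming out too bright.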

Last edited by MartinBraendli2 (June 13, 2016 07:15:32)


Layzej
Scratcher
100+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

I'm surprised by a) the poor results of DeltaE1994 (which needs a LOT of calculation power)

Yeah. Did you rewrite or use the same code as me? Possibly I've mucked something up?

MartinBraendli2 wrote:

The corrected 3d distance fixes the problem I had with the HTML colours (losing colour):

Wow. That is way better.

MartinBraendli2 wrote:

Intuitively I would have guessed that the HTML colours would give better results than the C64 palette (since C64 colours are so dull). However, the results show the opposite. …
The results show that for images, the C64 palette is superior to the HTML palette (both 16 colours):

Both colour distance formulas look pretty good with the C64 palette.

MartinBraendli2 wrote:

BTW, why do you use sqrt(sqrt((r2^2-r1^2)^2 + (g2^2-g1^2)^2 + (b2^2-b1^2)^2))?

Looks like I wasn't thinking straight last night. I fixed it. Thanks!

MartinBraendli2 wrote:

In (German-speaking) Switzerland it's not just the pronunciation but also the vocabulary that is influenced by our French-speaking minority.

Interesting. We use “toque” for a winter hat but otherwise avoid French words… although there are parts of the country where the two languages become muddled into something new: https://www.youtube.com/watch?v=7cRPH4lb8UI


MartinBraendli2 wrote:

Edit: I just realized that square brackets hide stuff in BBCode, so you weren't able to see the numbers I used in my example. With parentheses they are:

That makes more sense!

MartinBraendli2 wrote:

Edit2: Another thing I just realized is that the Floyd-Steinberg algorithm diffuses error as if RGB colour were linear (which it isn't). This might explain why some of the images get too dark/bright. I'll try whether squaring makes a difference.

It makes me wonder what else is affected by this issue…
MartinBraendli2
Scratcher
100+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

Edit2: Another thing I just realized is that the Floyd-Steinberg algorithm diffuses error as if RGB colour were linear (which it isn't). This might explain why some of the images get too dark/bright. I'll try whether squaring makes a difference.

Wow, the difference is big.
Original Floyd-Steinberg:


My modified FSD:

I have to solve the problem with the artefacts on the left side, but otherwise the result is great (you wouldn't even guess that it's only 3-bit colour).

Edit: I will later release the updated version. It will be able to draw with other “dithering kernels” now.
Here it is with C64 colours

Last edited by MartinBraendli2 (June 13, 2016 11:39:39)


Layzej
Scratcher
100+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

My modified FSD:

I have to solve the problem with the artefacts on the left side, but otherwise the result is great (you wouldn't even guess that it's only 3-bit colour).

That looks great! How is it possible that this issue is largely ignored? You would think that the fix would be baked into the basic algorithms.
MartinBraendli2
Scratcher
100+ posts

Non-photorealistic rendering projects

layzej wrote:

That looks great! How is it possible that this issue is largely ignored? You would think that the fix would be baked into the basic algorithms.
That's exactly what I thought.
I've seen the formula for 3D colour distance several times in other people's code, but never the corrected one. I guessed that the difference wouldn't be noticeable. So thanks for trying; it opened my eyes.

Layzej
Scratcher
100+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

layzej wrote:

That looks great! How is it possible that this issue is largely ignored? You would think that the fix would be baked into the basic algorithms.
That's exactly what I thought.
I've seen the formula for 3D colour distance several times in other people's code, but never the corrected one. I guessed that the difference wouldn't be noticeable. So thanks for trying; it opened my eyes.

I wonder whether a short letter on the subject would get published?
gtoal
Scratcher
1000+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

My modified FSD:

I have to solve the problem with the artefacts on the left side, but otherwise the result is great (you wouldn't even guess that it's only 3-bit colour).

Try extending the picture (virtually) left and starting the process in the virtually extended area?
gtoal
Scratcher
1000+ posts

Non-photorealistic rendering projects

gtoal wrote:

MartinBraendli2 wrote:

My modified FSD:

I have to solve the problem with the artefacts on the left side, but otherwise the result is great (you wouldn't even guess that it's only 3-bit colour).

Try extending the picture (virtually) left and starting the process in the virtually extended area? In fact for continuity, not merely replicating the leftmost column leftwards, but mirroring what's to the right of the leftmost column?

Or you could do it ‘middle-out’ and not have any starting edges :-) (though the center may be a bit dodgy that way instead???) I guess the fix there would be middle-out with overlap…

Yeah - that's it. Do one pass right to left, and a second pass left-to-right, and find a way to merge the two sets of results. There's already a variation of the algorithm that does alternating rows in opposite directions. Maybe that alone is enough?
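The alternating-direction (“serpentine”) variant mentioned here just reverses the traversal on odd rows and mirrors the kernel to match; a tiny sketch of the traversal order (illustrative only):

```python
def serpentine_order(width, height):
    # Even rows scan left-to-right (direction +1), odd rows right-to-left (-1);
    # the direction tells the error-diffusion kernel which way to mirror.
    for y in range(height):
        if y % 2 == 0:
            xs, d = range(width), 1
        else:
            xs, d = range(width - 1, -1, -1), -1
        for x in xs:
            yield x, y, d

print(list(serpentine_order(3, 2)))
# [(0, 0, 1), (1, 0, 1), (2, 0, 1), (2, 1, -1), (1, 1, -1), (0, 1, -1)]
```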

Last edited by gtoal (June 13, 2016 15:46:47)

novice27b
Scratcher
1000+ posts

Non-photorealistic rendering projects

gtoal wrote:

gtoal wrote:

MartinBraendli2 wrote:

My modified FSD:

I have to solve the problem with the artefacts on the left side, but otherwise the result is great (you wouldn't even guess that it's only 3-bit colour).

Try extending the picture (virtually) left and starting the process in the virtually extended area? In fact for continuity, not merely replicating the leftmost column leftwards, but mirroring what's to the right of the leftmost column?

Or you could do it ‘middle-out’ and not have any starting edges :-) (though the center may be a bit dodgy that way instead???) I guess the fix there would be middle-out with overlap…

Yeah - that's it. Do one pass right to left, and a second pass left-to-right, and find a way to merge the two sets of results. There's already a variation of the algorithm that does alternating rows in opposite directions. Maybe that alone is enough?


Why not just add a random offset at the start?

i use arch btw
MartinBraendli2
Scratcher
100+ posts

Non-photorealistic rendering projects

gtoal wrote:

Try extending the picture (virtually) left and starting the process in the virtually extended area? In fact for continuity, not merely replicating the leftmost column leftwards, but mirroring what's to the right of the leftmost column?

Or you could do it ‘middle-out’ and not have any starting edges :-) (though the center may be a bit dodgy that way instead???) I guess the fix there would be middle-out with overlap…

Yeah - that's it. Do one pass right to left, and a second pass left-to-right, and find a way to merge the two sets of results. There's already a variation of the algorithm that does alternating rows in opposite directions. Maybe that alone is enough?

Firstly, merging two sets (with a limited palette!) is useless, since the fusion/conversion itself will lead to more new artefacts than it fixes. Secondly, I think the artefacts are not the result of the algorithm but of the image (the artefacts appear around the seventh column, so the problem probably isn't the first column). In Scratch, scanning an image will kill the last 3 bits. This leads to unnaturally big areas with the same colour value (here white), which produce artefacts when a dithering algorithm is run on them. I guess those artefacts would be smoothed out on an image with true 24-bit depth.

gtoal
Scratcher
1000+ posts

Non-photorealistic rendering projects

MartinBraendli2 wrote:

gtoal wrote:

Try extending the picture (virtually) left and starting the process in the virtually extended area? In fact for continuity, not merely replicating the leftmost column leftwards, but mirroring what's to the right of the leftmost column?

Or you could do it ‘middle-out’ and not have any starting edges :-) (though the center may be a bit dodgy that way instead???) I guess the fix there would be middle-out with overlap…

Yeah - that's it. Do one pass right to left, and a second pass left-to-right, and find a way to merge the two sets of results. There's already a variation of the algorithm that does alternating rows in opposite directions. Maybe that alone is enough?

Firstly, merging two sets (with a limited palette!) is useless, since the fusion/conversion itself will lead to more new artefacts than it fixes. Secondly, I think the artefacts are not the result of the algorithm but of the image (the artefacts appear around the seventh column, so the problem probably isn't the first column). In Scratch, scanning an image will kill the last 3 bits. This leads to unnaturally big areas with the same colour value (here white), which produce artefacts when a dithering algorithm is run on them. I guess those artefacts would be smoothed out on an image with true 24-bit depth.

There are two edge problems. One is that any kernel-based system is going to fail near the edge, because it samples pixels that don't exist. You can fudge (I'm reluctant to say ‘fix’) this by expanding the image past the actual edge so there are enough virtual pixels to cover the matrix that is sampling the image (typically no more than 2 pixels outside the border for a large 5x5 matrix). I expect the current code is picking up zeros for off-screen pixels, or possibly wrapping from the right (or possibly just passing through the original pixels? I haven't examined the images closely enough to tell). This issue has to be tackled in any implementation, otherwise you get a 1- or 2-pixel border round any area that is in some way different from the rest of the image.
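The virtual-pixel idea can be sketched as a mirror pad on one row (a generic helper of my own, assuming the pad is smaller than the row):

```python
def mirror_pad(row, pad):
    # Reflect the interior outwards so a kernel never samples missing pixels:
    # [a, b, c, d] with pad=2 becomes [c, b, a, b, c, d, c, b].
    assert 0 < pad < len(row)
    left = row[1:pad + 1][::-1]    # mirror around the first pixel
    right = row[-pad - 1:-1][::-1]  # mirror around the last pixel
    return left + row + right

print(mirror_pad([10, 20, 30, 40], 2))  # [30, 20, 10, 20, 30, 40, 30, 20]
```

Mirroring (rather than padding with zeros or wrapping) keeps the border continuous, so the kernel sees plausible neighbours instead of an artificial dark edge.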

The other problem is that if you have a solid area with no random pattern to the right of a left edge, the generated dither is going to be completely in sync, generating either a vertical or a regular diagonal repeating pattern. This is the problem I suggested would be ameliorated by one of several methods (hacks) in the particular instance of the “NPR Guy” image.

I was not suggesting merging a left and a right scan by averaging or any other sort of combination of pairs of pixels; I was suggesting finding a suitable point along the line where you could switch from the left-hand part of the image, generated by a right-to-left scan, over to the right-hand part, generated by a left-to-right scan. Anywhere along the two scans that had <n> pixels in common would be a good place to splice in the other row, as would any sharp boundary such as between a light strip and a dark strip.

However, having said that, I realised (and suggested) that a simple alternation of left-to-right and right-to-left on alternate rows would have a fairly similar effect, especially since the error terms propagate downwards as well as sideways, which would be enough to break up the monotony of the solid area on the left-hand side.

Does that make my intention clearer?

Try some dithering experiments with images that contain only solid rectangles of unvarying colour. I expect you'll see the same artefacts as the regular pattern to the left of NPR guy. Alternating scans won't fix it completely, but the patterns may be more aesthetically pleasing.

G
gtoal
Scratcher
1000+ posts

Non-photorealistic rendering projects

gtoal wrote:

Notice the close relationship between this and a convolution kernel. It makes me think there is another, more general-purpose operation on an image matrix of which a convolution kernel and a dither matrix are special cases (subsets).

Also halftoning.

Last edited by gtoal (June 13, 2016 21:17:34)

MartinBraendli2
Scratcher
100+ posts

Non-photorealistic rendering projects

Layzej wrote:

How is it possible that this issue is largely ignored? You would think that the fix would be baked into the basic algorithms.

As far as I can tell, it's done incorrectly almost everywhere. I've been experimenting a bit.

Let's get the grey that lies halfway between black and white in linear intensity:
sqrt((255*255 + 0*0)/2) = 180.31
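A one-line check of that value:

```python
import math

# Stored value whose squared ('linear') intensity is halfway between
# black (0^2) and white (255^2):
mid = math.sqrt((255 * 255 + 0 * 0) / 2)
print(round(mid, 2))  # 180.31
```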

If I dither an image that is fully (180,180,180), I get this checkerboard as a result (as one might expect):


Now compare this to the dithering that Photoshop does:

There is no 1:1 ratio between white and black pixels. The image is too white/bright.

Squint your eyes or move a bit away from your monitor. Which of the two texts disappears?


This might also serve as a test of how well adjusted your monitor is (ASUS MW201 on the left, Samsung T28D310EW on the right):


Adjusting some settings (left is after adjusting):

MartinBraendli2
Scratcher
100+ posts

Non-photorealistic rendering projects

Have a look at these two gradients (original in the background). One of them was dithered in market-leading graphics software, the other in a programming environment for children. Can you guess which one is which?



Layzej
Scratcher
100+ posts

Non-photorealistic rendering projects

The undithered reference grey is 180,180,180? Photoshop is not even close.
