Updated this page’s header image with something a little less monotone. Of course, it’s a Propaganda tile, called Praise-The-Blue-Steel-1. I ran it through GIMP to wash out the colors a little and tile it to the appropriate size. The original is on the left.

I’ve had some partial success with my project to write an application for searching the Propaganda tiles by color. I’ve extracted the top three colors from each image, along with the percentage of each. I’ve run a few tests to calculate the “distance” between a chosen color and the most-prominent (highest percentage) color in the images.

The good news is that the algorithm is fast. Processing the images and extracting the colors took some time (maybe two hours total, for 1,000+ images), but the mathematical formula to find the closest matching color runs impressively fast. It should be a shade faster still when I rewrite it in C++ instead of Python.
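For the curious, here's a minimal sketch of the kind of nearest-color search described above: squared Euclidean distance in RGB space, compared against each image's dominant color. This is my guess at the general technique, not the project's actual code, and the data shapes are assumptions.

```python
def color_distance_sq(c1, c2):
    """Squared Euclidean distance between two (r, g, b) colors.

    Skipping the square root keeps comparisons cheap; ordering by
    squared distance is identical to ordering by true distance.
    """
    return sum((a - b) ** 2 for a, b in zip(c1, c2))


def closest_image(target, images):
    """Return the image whose dominant color is nearest to `target`.

    `images` maps an image name to its dominant (r, g, b) color.
    """
    return min(images, key=lambda name: color_distance_sq(target, images[name]))
```

With a dictionary of dominant colors in hand, a single `min()` pass like this over a thousand entries is effectively instantaneous, which matches the "impressively fast" observation.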

The issue I’m running across isn’t my fault or a flaw in the code, but a fact about the tiles themselves. Most of them are predominantly dark. So if a user chooses a bright, vivid color, the closest match may not be as bright.

For example, look at the original Praise-The-Blue-Steel up there. Its dominant color, at 40.1%, is this extremely dark shade of blue:

That’s according to the mathematics. To my eye, the image “feels” much lighter than that. I’ve run the image through two different color-extraction routines: k-means clustering and finding the maximum eigenvalue. I don’t fully understand the math behind either implementation, but I know they generate nearly identical results. So who am I to argue with the math?
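To illustrate the first of those two routines, here's a toy k-means color extraction in plain Python. It's a sketch of the general technique, not the routine actually used on the tiles; a real run would load pixels with a library like Pillow and use far more iterations (or scikit-learn's `KMeans`).

```python
import random


def assign(pixels, centers):
    """Group pixels by their nearest center (squared Euclidean distance)."""
    clusters = [[] for _ in centers]
    for p in pixels:
        i = min(range(len(centers)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
        clusters[i].append(p)
    return clusters


def kmeans_colors(pixels, k=3, iters=10, seed=0):
    """Cluster (r, g, b) pixels into k colors.

    Returns (color, share) pairs sorted by share, largest first --
    the "top colors with percentages" idea from the post.
    """
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)
    for _ in range(iters):
        clusters = assign(pixels, centers)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [tuple(sum(ch) / len(c) for ch in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    clusters = assign(pixels, centers)
    return sorted(((centers[i], len(c) / len(pixels)) for i, c in enumerate(clusters)),
                  key=lambda cs: cs[1], reverse=True)
```

Note what this explains about the “too dark” result: if most of an image’s pixels sit in a dark cluster, the dominant color is that cluster’s *mean*, which can come out darker than anything the eye actually fixates on.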

Well, I’m me and I argue with the math. I don’t like these results.

Perception is reality, and my perception tells me that image isn’t as dark as the swatch. I don’t quite know what to do. I could extract more than three colors, which would certainly pick up those lighter shades that are catching my eye, but then I’d have to disregard the percentages. I may try extracting five colors instead of three (which will mean re-processing all the images) and, when comparing, check all five colors against the selected color to see what kind of matches I get.
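The "check all five colors" idea might look something like this: instead of comparing the chosen color only against each image's top color, take the minimum distance over every color in the image's palette. The function and data shapes here are my assumptions, not the project's code.

```python
def best_match(target, palettes):
    """Return the image whose palette contains the single closest
    color to `target` (squared Euclidean distance in RGB).

    `palettes` maps an image name to a list of (r, g, b) colors --
    e.g. the top five extracted per image.
    """
    def nearest(colors):
        return min(sum((a - b) ** 2 for a, b in zip(target, c))
                   for c in colors)
    return min(palettes, key=lambda name: nearest(palettes[name]))
```

The trade-off is exactly the one the post anticipates: a bright color deep in an image's palette can now win the match even if it covers only a sliver of the tile, since the percentages are ignored.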

It’s an interesting problem. It’s also a bear to solve. I don’t have anything written in the way of an interface yet – I’ve just got Python code spitting out raw data to the console.

But this is how you learn, I suppose. Trial and error. Process and re-process. There’s an entire section of the library at MIT devoted to image analysis, and I’m just a guy in his living room with a laptop and a halfway-decent idea. But we’ll get there. Or we’ll get somewhere. Exactly where remains to be seen… and it’s part of the fun.
