I finally found some time (and energy) (and motivation) to get back to work on the Propaganda project.
Thanks to an inspired suggestion from Bowie Poag, rather than try to find the dominant color with complicated math, I applied a very heavy “blur” effect to each image. The end result is an image with all the detail wiped out, with only the largest area of color remaining. You can click the thumbnail on the left to see a larger version of the preview.
Since some of the tiles didn’t blur to a single major color, I’m going to pick five random pixels, average them, and call that the dominant color.
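If you're curious, the sampling step is simple enough to sketch in Python. This is a toy version, not my actual pipeline: it assumes the pixels have already been run through the heavy blur (the real blur happens in an image library), and the "tile" below is made up.

```python
import random

def dominant_color(pixels, samples=5, seed=42):
    """Estimate the dominant color by averaging a few random pixels.

    pixels: a flat list of (r, g, b) tuples from a heavily blurred tile.
    """
    rng = random.Random(seed)
    picks = [rng.choice(pixels) for _ in range(samples)]
    r = sum(p[0] for p in picks) // samples
    g = sum(p[1] for p in picks) // samples
    b = sum(p[2] for p in picks) // samples
    return (r, g, b)

# A toy "blurred" tile: mostly one flat dark blue with slight variation.
tile = [(10, 20, 120)] * 80 + [(14, 24, 130)] * 20
print(dominant_color(tile))
```

Since the blur has already smeared everything together, any five pixels land close to the same color, and the average is a reasonable stand-in for the dominant shade.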
This was dramatically easier, but required a lot more processing time. Remember, I’ve got 850+ of these to work on.
That’s the official count of Propaganda Tiles, according to Bowie Poag himself. These 852 were created by Bowie; someone else took over and added about 200 more. Those aren’t quite the same quality, so Bowie and I decided these 852 are canon, and the others are not. My programming project will focus only on these tiles; the others will be excluded.
In other words, I am in possession of the official Propaganda tiles, direct from the creator. I will brag about this.
Talking with him was like talking to the Oracle of Delphi. He gave me suggestions on how to improve my color matching, suggested some really awesome features to add to my program, and more. He’s given his full blessing to my little project, including modifying and remixing the images if I wish.
I’d love to get this polished to the point that I could get it included as an official KDE product, to be included with each new release of the KDE or Plasma versions. I’m a long way from that, but it’s a good goal.
This morning, I had a lengthy Facebook chat conversation with none other than Bowie Poag, the creator of the Propaganda tiles. He was flattered that someone remembers his work from 20 years ago.
I found him through a group for Chicago area Diversi-Dial users. Way back when, a Diversi-Dial (or DDial) was a remarkably versatile chat system. The host was an Apple ][ computer, with all 7 (8?) internal slots populated with dial-up modems. The operator literally had 7 different telephone lines coming into their home, one for each modem. You’d connect to a dedicated number, and some switching mechanism would try all the modems to see if one was free. Then you had an IRC-type multi-user chat system.
But I digress. I’ll write about DDials some other time.
Bowie seemed genuinely shocked that anyone remembers his project, nearly 20 years after its creation. I told him I’ve been carrying around a copy of Propaganda on various flash drives – and now Dropbox – ever since I found them.
I was hoping he could provide me with a definitive set of the tiles – direct from the source. He thinks he might have them stashed on an old computer that’s not currently in use. He’s going to do some digging and see what he can find.
He was very pleasant and helpful. I look forward to discussing my Propaganda For Plasma project with him in the future.
The downside is that I’m now committed to seeing this thing through. I’ve been slacking off working on it for the past month or two, devoting my time to various KDE patches, which always expand into larger and more complex projects than they originally appear. But I’m continuing to learn and sharpen my skills. My latest patch got more user interface comments than technical error comments. But these all take time.
It was a kick to talk with Bowie. I’m glad he approves of my project.
“When I cry, I see millions of circles.” – Bob Mould, “Circles”
It was supposed to be so easy. KDE has an application launcher that also shows your user account picture (“avatar”). A user submitted a “wishlist” bug to have the normally square avatar cropped to a circle, which would better match other aspects of the interface. Fair enough, I thought. I can handle this.
I did a small amount of research and found the technique for cutting out a circle. I got this:
Nifty. A circle.
Reviewers check it out and say,
“Hey, that looks great!”
So I’m pleased. Wasn’t that hard. Cutting out the circle was done in a slightly odd way, but it worked.
Then, a few moments later, another reviewer:
“You know what would be even better? If the circle had a ring around it.”
It’s common for the review process to take several passes, with changes, tweaks, corrections, additions, subtractions, etc. So I open my code editor and take a stab at adding a contrasting-color circle around the avatar. I end up with:
Again, looks pretty cool. The frame was a good idea. It contrasts nicely against the background.
Then someone points out
“The circle is not very smooth; you can make out all the pixels.”
Which, upon close inspection, is true. It’s jagged around the edges instead of smooth. The problem is called aliasing, so the fix for it is known as anti-aliasing. The general principle involves adding extra pixels that are a sort of average between the color being drawn and the background color. This image from Wikipedia shows the same letter, with and without anti-aliasing. How exactly the anti-aliasing is computed is, like, dude, way complicated. But this should show you the difference.
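The averaging idea can be shown in a few lines. This is a toy illustration of the principle, not what the actual drawing code does: each edge pixel gets a mix of the drawn color and the background, weighted by how much of the pixel the shape covers.

```python
def blend(fg, bg, coverage):
    """Anti-aliasing in miniature: mix the drawn color with the
    background in proportion to how much of the pixel the shape
    actually covers (0.0 = fully outside, 1.0 = fully inside)."""
    return tuple(round(f * coverage + b * (1 - coverage))
                 for f, b in zip(fg, bg))

black, white = (0, 0, 0), (255, 255, 255)
print(blend(black, white, 1.0))  # fully inside the circle -> pure black
print(blend(black, white, 0.5))  # edge pixel, half covered -> mid gray
print(blend(black, white, 0.0))  # fully outside -> pure white
```

Those in-between grays along the edge are what fool your eye into seeing a smooth curve.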
So I spend a chunk of time looking for some way to enable anti-aliasing in the software we use. Not having much luck. My circle is still being drawn with just the one color, so it’s ugly and jagged.
Then someone says,
“Hey, why don’t you re-use the method Programmer A used for Project B?”
So I scrap what I’ve done and go digging elsewhere in the source code. The technique is far more advanced and difficult to understand, but it generates a great result:
The circle is now nice and smooth. I post the updated code for people to review.
It doesn’t take long for someone to find a new problem.
“I went into System Settings and changed my avatar. Now all I have is a blank circle.”
I try it for myself. Sure enough, my lovely round avatar and anti-aliased circle have become:
And it won’t update until you log out and log back in. I check the file(s) that hold the user avatar. They’re changing immediately when a new avatar is selected. And the fancy circle code obviously notices the change, because it’s blanked out the old avatar.
I spend an ungodly amount of time trying to find some kind of “force-refresh” function, but can’t.
I ask Programmer A for help, since he coded this. He’s a very talented, very senior developer. And overall, a pretty good guy. But his team is crunching towards a deadline, and this is small, small stuff by comparison.
Another very senior programmer comments on my dilemma, saying,
“I had problems trying to use Programmer A’s technique on a different project. Have you tried [the technique I originally tried]?”
While I’m waiting for some guru intervention, I unearth my old code, rewrite bits of it, and get a good result – a smooth circle, but without Programmer A’s tricky code. It looks just like this:
Which makes me say AWESOME to myself. This code is easier to read and change, so I tidy everything up and do some testing.
I change my avatar in System Settings. And the result is…
A goddamned blank circle. Again. Two totally different techniques, same problem.
I started on this about 9 days ago. Lots of trial and error. Two radically different drawing techniques. But still the same problem.
I need a guru, but the gurus are preoccupied and/or in distant time zones. I’ve got no choice but to wait for someone who knows the innards of the system to join the party.
In the meantime, I’ve grown to dislike circles immensely. It’s a little irrational to hate a geometric shape, but I’m really starting to hate circles.
For the past 10 years or so, I’ve been a devoted Apple user. I have a MacBook (with TouchBar), a 12.9″ iPad Pro, and a 256GB iPhone X.
But as of the first of the year, I’ve gotten hooked into KDE and the open source software culture. Being accepted and invited to participate and contribute is a huge draw. So is knowing that code you wrote will make it into the final release of a product used by thousands and thousands of people around the world.
You just don’t get that as an Apple user. You’re immunized from almost all bad things, but you’ve got very little control over your experience.
Now I have a laptop dedicated to running openSUSE Krypton, with the KDE desktop. I use it every day, for everything. Email, Facebook, coding, graphics, etc. The only thing I miss is having iMessage right on my desktop, but my phone is always right next to me.
KDE’s desktop environment is called Plasma, and there’s a small but dedicated team working on Plasma Mobile – which will run on cell phones and tablets. The systems and applications will be interchangeable, with most apps running on the desktop and the mobile device without the need for (much) additional coding. Pretty slick stuff.
The only problem is that I’ve learned that Plasma Mobile is dramatically unfinished and in need of lots of work. It runs well on a pair of Samsung phones, but that’s it, at least in terms of official releases. I’m too addicted to my state-of-the-art iPhone X to trade it in for a phone from 2015. Sorry.
So I figured I’d get a tablet instead. There hasn’t been much work done yet on getting Plasma Mobile on tablets, so I thought I’d give it a try.
My first attempt at buying a used Android tablet went sideways when reseller Blinq sent me the wrong device. So I picked a different tablet, a Samsung Galaxy Tab S2. I went a little over my gadget budget for this one, but it’s a nice, reasonably current (2016) device, and from what I can tell, it should be very possible to get Plasma Mobile running on it.
But today, two days from delivery, I learned, by chatting with some of the developers, that Plasma Mobile isn’t really ready for full-time use. Lots of things still need to be coded, even apps as basic as a calculator.
I can contribute to the development by running a “virtual machine,” which simulates Plasma Mobile on my laptop.
But in the mean time, I think I’m going to have a fairly spiffy Android tablet without Plasma Mobile. I can’t justify the expense for a lab rat device that barely works.
That doesn’t mean I can’t and won’t hack it, though. I’ll be dog-sitting a fleet of 3 basset hounds this weekend, which means plenty of downtime. The dogs are non-needy blobs of adorable canine laziness (a good example for all of us), so I should have lots of time to play.
To my millions of devoted readers, I apologize for the recent downtime.
From time to time, the script kiddies decide it’s a good time to try breaking into any WordPress sites they can find. This is one of them. Thankfully, my hosting service, NearlyFreeSpeech.net (blatant plug), is clever enough to detect these incoming attacks and preemptively disable the blogs under siege. When the kiddies get bored and the flood of requests ends, I need to log in and reset the system.
Except sometimes I forget. I’ve got a few issues.
Speaking of issues, one of my doctors changed one of my medications a few weeks ago. My sleep got terribly disrupted, my logic circuits got drowned out by static, and I spent the last few weeks being unproductive.
I slept about 10 hours last night and maybe even more the previous night. I’m finally feeling semi-productive and ready to work and contribute again. Before 7:30 this morning, I finished an outstanding KDE patch and hope it will be approved. I’d like to commit it to the master code and get it off my plate.
I’ll have some more details on how I’ve spent my time in the next post.
Artificial intelligence systems aren’t born smart. They have to be taught and trained. If you’ve got an iPhone, you might remember training Siri by repeating a few phrases, like “Hey Siri, it’s me”. That’s how it learns your voice – accurately. I believe Amazon’s Alexa has a similar feature.
But these guys have a head start: a huge set of training data. For the most part, your system already knows how to hear and understand you. That’s because big companies like Apple and Amazon have the resources to run a zillion samples through their speech recognition system and teach it.
Now consider an independent AI assistant, one that doesn’t spy on you or use your data for marketing purposes. That would be Mycroft, which I’ve written about before. I don’t have one of their standalone devices, but I do have the KDE desktop widget up and running. It’s not perfect, but it’s free, open source, and it does indeed work.
Here’s a clip of it in action. You can’t hear me, but you’ll see it processing and hear it answering.
It’s not perfect by any means, but considering it’s built by volunteers with donated money, I think it’s pretty impressive.
They’ve gotten the community to submit a collection of training data – people from around the world waking up Mycroft by saying “Hey, Mycroft.” The rest of us get to listen to the samples and grade the clarity of the recording. If you definitely hear “Hey, Mycroft,” you give that sample a thumbs-up. If it’s murky or unclear, you tag it as a maybe. And of course, if it’s just background chatter or noise, you flag it as a negative.
All this (anonymous) data is used to teach the AI what “Hey, Mycroft” sounds like. The community provided the samples, now the community is helping to teach the AI.
It’s really pretty cool. Although I admit my bias and fondness for the KDE ecosystem, Mycroft still has catching up to do. It’s not nearly as responsive as Alexa or Siri, but then, it simply can’t match Amazon or Apple or Google in terms of development resources. And so I pitch in. I’ll take a half hour to listen to samples and grade them. I’ll fiddle about with the desktop widget and report bugs or issues.
Because it’s open source, I might even dream up and program a skill for it.
My latest patch for KDE was purely cosmetic, but it was to a key component: the system file manager, known as Dolphin.
A user reported that, when increasing the system font size, the icons would lose their horizontal centering. It’s not a huge deal, but KDE takes a lot of pride in producing a professional product. In fact, my mentor is a user interface/user experience guy, so these kinds of things get his attention.
It was a small task, but Dolphin is a large project. The whole package is spread over 426 files and an estimated 36,992 lines of source code.*
Before patching, the icons got stuck to the top boundary of their rows:
This was only evident when the font size rendered larger than the icon size. It went unnoticed for ages, until an astute user reported the issue.
I reworked the Y position for the icons, basing it off the centerline of the text. Previously, the Y calculation was a wonky combination of the icon height and the padding around rows. It didn’t work properly, at least not under all conditions.
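The actual fix lives in Dolphin’s C++ sources, but the arithmetic boils down to something like this. This is a simplified sketch, not the real code, and the function name and parameters are mine:

```python
def icon_y(row_top, row_height, icon_height):
    """Simplified: place the icon so its vertical center matches the
    row's center, instead of pinning it to the row's top edge."""
    return row_top + (row_height - icon_height) / 2

# A 32-pixel icon in a 48-pixel-tall row (large font) sits 8 px down,
# centered, instead of hugging the top.
print(icon_y(0, 48, 32))  # -> 8.0
```

The point is that the Y offset now grows with the row height, so a bigger font (taller row) pushes the icon down to stay centered.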
With my change, everything stays centered:
Special thanks to the original author – or maybe a subsequent maintainer – for writing in the debugging code that draws those frames around the items.
Naturally, that’s not the normal view:
It came in very handy for checking the alignment.
My patch is still in for review. It’s very minor (what KDE calls a “junior job”), but it’s how new contributors get exposed to the products and how they’re created and maintained. Reported bugs or feature requests that are limited in scope or scale get flagged as junior jobs, ideal for newcomers to practice on. I’ve done a handful of them now. But it’s only recently that I’ve gained the confidence to attempt surgery on major components of the system, as opposed to an isolated application. Even if my fix is approved, it won’t see the light of day for several months; KDE follows a strict release schedule. Hell, this wide-ranging global free software project is better organized than some multinational corporations I’ve worked for. And they’re sufficiently confident in their review processes that they let “just anyone” take a crack at fixing problems.
I’ve still got ground to cover before they’ll grant me a developer account, which would allow me to publish changes myself. But the more small things I fix, the more experience I’ll gain. I think this was straightforward in the end; the hardest part was finding the two lines of code that needed adjusting.
And I think I accomplished it without causing any damage to a key system component.
*calculated with David A. Wheeler’s “SLOCCount” tool
Updated this page’s header image with something a little less monotone. Of course, it’s a Propaganda tile, called Praise-The-Blue-Steel-1. I ran it through GIMP to wash out the colors a little and tile it to the appropriate size. The original is on the left.
I’ve had some partial success with my project to write an application for searching the Propaganda tiles by color. I’ve extracted the top three colors from each image, along with the percentage of each. I’ve run a few tests to calculate the “distance” between a chosen color and the most-prominent (highest percentage) color in the images.
The good news is that the algorithm is fast. Processing the images and extracting the colors took some time (maybe two hours total, for 1,000+ images), but the mathematical formula to find the closest matching color is impressively fast. It should be even a shade faster when I rewrite it in C++ instead of Python.
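The formula itself is nothing exotic. Here’s a sketch of the idea in Python; the tile names and swatch values below are made up for illustration, not my actual extracted data:

```python
def color_distance(c1, c2):
    """Straight-line (Euclidean) distance between two RGB colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def closest_tile(target, tiles):
    """tiles maps a tile name to its most-prominent color; return the
    name whose dominant color is nearest the target."""
    return min(tiles, key=lambda name: color_distance(target, tiles[name]))

# Made-up tile names and swatches for illustration.
tiles = {
    "Blue-Steel-ish": (18, 27, 61),
    "Red-ish": (140, 30, 25),
    "Green-ish": (30, 120, 45),
}
print(closest_tile((200, 40, 40), tiles))  # -> Red-ish
```

With the colors pre-extracted, each lookup is just one distance calculation per tile, which is why it runs so fast even over hundreds of images.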
The issue I’m running across isn’t my fault or a flaw in the code, but a fact about the tiles themselves. Most of them are predominantly dark. So if a user chooses a bright, vivid color, the closest match may not be as bright.
For example, look at the original Praise-The-Blue-Steel up there. Its dominant color, at 40.1%, is this extremely dark shade of blue:
That’s according to the mathematics. To my eye, the image “feels” much lighter than that. I’ve run the image through two different color-extraction routines: k-means clustering and finding the maximum eigenvalue. I don’t understand the calculus behind either implementation, but I know they generate nearly-identical results. So who am I to argue with the math?
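For the curious, the k-means half is easy enough to sketch, even if the theory behind it isn’t. This is a toy version, not the routine I actually ran, and the fake tile data is mine; note the seeding shortcut, which keeps the toy deterministic where real implementations usually seed from random pixels:

```python
import random

def kmeans_colors(pixels, k=3, iters=10, seed=1):
    """Bare-bones k-means over RGB pixels: pick k starting centers,
    assign each pixel to its nearest center, recompute each center as
    its cluster's average color, and repeat."""
    rng = random.Random(seed)
    # Shortcut: seed from distinct colors so this toy stays deterministic.
    centers = rng.sample(sorted(set(pixels)), k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])),
            )
            clusters[nearest].append(p)
        centers = [
            tuple(sum(ch) // len(c) for ch in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    # Pair each center with the share of pixels it claims, biggest first.
    return sorted(
        ((len(c) / len(pixels), ctr) for c, ctr in zip(clusters, centers)),
        reverse=True,
    )

# A fake 100-pixel tile: 60% dark blue, 30% red, 10% green.
pixels = [(15, 25, 60)] * 60 + [(200, 30, 30)] * 30 + [(30, 180, 50)] * 10
for share, center in kmeans_colors(pixels):
    print(f"{share:.0%}  {center}")
```

Run over a real tile, this is where the “40.1% dark blue” style of answer comes from: the biggest cluster wins, no matter how the image feels to a human eye.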
Well, I’m me and I argue with the math. I don’t like these results.
Perception is reality, and my perception tells me that image isn’t as dark as the swatch. I don’t quite know what to do. I could extract more than three colors, which will certainly include those lighter shades that are catching my eye. But then I’d have to disregard the percentages. I may try extracting five colors instead of three (which will mean re-processing all the images) and, when I compare, try checking all five colors against the selected color. And see what kind of matches I get.
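That five-color plan would look something like this. The tile names and palettes here are hypothetical, not my real extraction output: score each tile by whichever of its extracted colors sits closest to the target, ignoring the percentages entirely.

```python
def best_match(target, palettes):
    """palettes maps a tile name to its list of extracted colors.
    Score each tile by its closest color to the target, disregarding
    the extraction percentages."""
    def dist(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2))
    return min(palettes,
               key=lambda name: min(dist(target, c) for c in palettes[name]))

# Hypothetical five-color palettes, not real extraction output.
palettes = {
    "Dark-Tile": [(18, 27, 61), (10, 12, 30), (40, 45, 80),
                  (5, 8, 20), (60, 70, 110)],
    "Lighter-Tile": [(25, 30, 70), (90, 110, 200), (130, 150, 220),
                     (15, 20, 50), (70, 85, 160)],
}
print(best_match((120, 140, 210), palettes))  # -> Lighter-Tile
```

The hope is that a bright target color would then latch onto a tile’s lighter accent shades, even when its biggest cluster is dark.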
It’s an interesting problem. It’s also a bear to solve. I don’t have anything written in the way of an interface yet – I’ve just got Python code spitting out raw data to the console.
But this is how you learn, I suppose. Trial and error. Process and re-process. There’s an entire section of the library at MIT devoted to image analysis, and I’m just a guy in his living room with a laptop and a halfway-decent idea. But we’ll get there. Or we’ll get somewhere. Exactly where remains to be seen… and it’s part of the fun.