Circles Within Circles

“When I cry, I see millions of circles.” – Bob Mould, “Circles”

It was supposed to be so easy. KDE has an application launcher that also shows your user account picture (“avatar”). A user submitted a “wishlist” bug to have the normally square avatar cropped to a circle, which would better match other aspects of the interface. Fair enough, I thought. I can handle this.

I did a small amount of research and found the technique for cutting out a circle. I got this:

Nifty. A circle.

Reviewers check it out and say,

“Hey, that looks great!”

So I’m pleased. Wasn’t that hard. Cutting out the circle was done in a slightly odd way, but it worked.

Then, a few moments later, another reviewer:

“You know what would be even better? If the circle had a ring around it.”

It’s common for the review process to take several passes, with changes, tweaks, corrections, additions, subtractions, etc. So I open my code editor and take a stab at adding a contrasting-color circle around the avatar. I end up with:

Again, looks pretty cool. The frame was a good idea. It contrasts nicely against the background.

Then someone points out

“The circle is not very smooth; you can make out all the pixels.”

Which, upon close inspection, is true. It’s jagged around the edges instead of smooth. The problem is called aliasing, so the fix for it is known as anti-aliasing. The general principle involves adding extra pixels that are a sort of average between the color being drawn and the background color. This image from Wikipedia shows the same letter, with and without anti-aliasing. How exactly the anti-aliasing is computed is, like, dude, way complicated. But this should show you the difference.
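
To make the averaging idea a little more concrete, here’s a toy sketch in Python. It’s not how Qt or any real renderer does it; it just blends a foreground color toward the background in proportion to how much of each pixel the shape covers (the coverage values are invented for the example).

    # A toy illustration of anti-aliasing: blend the drawn color toward the
    # background based on how much of the pixel the shape actually covers.
    # (Coverage values here are made up; real renderers compute them.)

    def blend(foreground, background, coverage):
        """Mix two RGB colors; coverage runs from 0.0 (all background) to 1.0 (all shape)."""
        return tuple(
            round(f * coverage + b * (1 - coverage))
            for f, b in zip(foreground, background)
        )

    circle_color = (30, 30, 30)       # dark ring
    page_color = (255, 255, 255)      # white background

    # Pixels near the edge get partial coverage instead of all-or-nothing,
    # which is what smooths out the jaggies.
    for coverage in (1.0, 0.75, 0.5, 0.25, 0.0):
        print(coverage, blend(circle_color, page_color, coverage))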

So I spend a chunk of time looking for some way to enable anti-aliasing in the software we use. Not having much luck. My circle is still being drawn with just the one color, so it’s ugly and jagged.

Then someone says,

“Hey, why don’t you re-use the method Programmer A used for Project B?”

So I scrap what I’ve done and go digging elsewhere in the source code. The technique is far more advanced and difficult to understand, but it generates a great result:

The circle is now nice and smooth. I post the updated code for people to review.

It doesn’t take long for someone to find a new problem.

“I went into System Settings and changed my avatar. Now all I have is a blank circle.”

I try it for myself. Sure enough, my lovely round avatar and anti-aliased circle have become:

And it won’t update until you log out and log back in. I check the file(s) that hold the user avatar. They’re changing immediately when a new avatar is selected. And the fancy circle code obviously notices the change, because it’s blanked out the old avatar.

I spend an ungodly amount of time trying to find some kind of “force-refresh” function, but can’t.

I ask Programmer A for help, since he coded this. He’s a very talented, very senior developer. And overall, a pretty good guy. But his team is crunching towards a deadline, and this is small, small stuff by comparison.

Another very senior programmer comments on my dilemma, saying,

“I had problems trying to use Programmer A’s technique on a different project. Have you tried [the technique I originally tried]?”

While I’m waiting for some guru intervention, I unearth my old code, rewrite bits of it, and get a good result – a smooth circle, but without Programmer A’s tricky code. It looks just like this:

Which makes me say AWESOME to myself. This code is easier to read and change, so I tidy everything up and do some testing.

I change my avatar in System Settings. And the result is…

A goddamned blank circle. Again. Two totally different techniques, same problem.

I started on this about 9 days ago. Lots of trial and error. Two radically different drawing techniques. But still the same problem.

I need a guru, but the gurus are preoccupied and/or in distant time zones. I’ve got no choice but to wait for someone who knows the innards of the system to join the party.

In the meantime, I’ve grown to dislike circles immensely. It’s a little irrational to hate a geometric shape, but I’m really starting to hate circles.

Except these circles, which are great. Enjoy.


Kicking the Habit

For the past 10 years or so, I’ve been a devoted Apple user. I have a MacBook (with TouchBar), a 12.9″ iPad Pro, and a 256GB iPhone X.

But as of the first of the year, I’ve gotten hooked into KDE and the open source software culture. Being accepted and invited to participate and contribute is a huge draw, as is knowing that code you wrote will make it into the final release of a product used by thousands and thousands of people around the world.

You just don’t get that as an Apple user. You’re insulated from almost all bad things, but you’ve got very little control over your experience.

Now I have a laptop dedicated to running openSUSE Krypton, with the KDE desktop. I use it every day, for everything. Email, Facebook, coding, graphics, etc. The only thing I miss is having iMessage right on my desktop, but my phone is always right next to me.

KDE’s desktop environment is called Plasma, and there’s a small but dedicated team working on Plasma Mobile – which will run on cell phones and tablets. The systems and applications will be interchangeable, with most apps running on the desktop and the mobile device without the need for (much) additional coding. Pretty slick stuff.

The only problem is that I’ve learned that Plasma Mobile is dramatically unfinished and in need of lots of work. It runs well on a pair of Samsung phones, but that’s it, at least in terms of official releases. I’m too addicted to my state-of-the-art iPhone X to trade it in for a phone from 2015. Sorry.

So I figured I’d get a tablet instead. There hasn’t been much work done yet on getting Plasma Mobile on tablets, so I thought I’d give it a try.

My first attempt at buying a used Android tablet went sideways when reseller Blinq sent me the wrong device. So I picked a different tablet, a Samsung Galaxy Tab S2. I went a little over my gadget budget for this one, but it’s a nice, reasonably current (2016) device, and from what I can tell, it should be very possible to get Plasma Mobile running on it.

But today, two days from delivery, I learned by chatting with some of the developers that Plasma Mobile isn’t really ready for full-time use. Lots of things still need to be coded, even apps as basic as a calculator.

I can contribute to the development by running a “virtual machine”, which runs Plasma Mobile as a sort of simulation on my laptop.

But in the meantime, I think I’m going to have a fairly spiffy Android tablet without Plasma Mobile. I can’t justify the expense for a lab-rat device that barely works.

That doesn’t mean I can’t and won’t hack it, though. I’ll be dog-sitting a fleet of 3 basset hounds this weekend, which means plenty of downtime. The dogs are non-needy blobs of adorable canine laziness (a good example for all of us), so I should have lots of time to play.

And not on an Apple device. Imagine.

Downtime, technical and personal

To my millions of devoted readers, I apologize for the recent downtime.

From time to time, the script kiddies decide it’s a good time to try breaking into any WordPress sites they can find. This is one of those times. Thankfully, my hosting service, NearlyFreeSpeech.net (blatant plug), is clever enough to detect these incoming attacks and preemptively disable the blogs under siege. When the kiddies get bored and the flood of requests ends, I need to log in and reset the system.

Except sometimes I forget. I’ve got a few issues.

Speaking of issues, one of my doctors changed one of my medications a few weeks ago. My sleep got terribly disrupted, my logic circuits got drowned out by static, and I spent the last few weeks being unproductive.

I slept about 10 hours last night and maybe even more the previous night. I’m finally feeling semi-productive and ready to work and contribute again. Before 7:30 this morning, I finished an outstanding KDE patch and hope it will be approved. I’d like to commit it to the master code and get it off my plate.

I’ll have some more details on how I’ve spent my time in the next post.

Training an AI

Artificial intelligence systems aren’t born smart. They have to be taught and trained. If you’ve got an iPhone, you might remember training Siri by repeating a few phrases, like “Hey Siri, it’s me”. That’s how it learns your voice – accurately. I believe Amazon’s Alexa has a similar feature.

But these guys have a head start: a huge set of training data. For the most part, your system already knows how to hear and understand you. That’s because big companies like Apple and Amazon have the resources to run a zillion samples through their speech recognition system and teach it.

Now consider an independent AI assistant, one that doesn’t spy on you or use your data for marketing purposes. That would be Mycroft, which I’ve written about before. I don’t have one of their standalone devices, but I do have the KDE desktop widget up and running. It’s not perfect, but it’s free, open source, and it does indeed work.

Here’s a clip of it in action. You can’t hear me, but you’ll see it processing and hear it answering.


It’s not perfect by any means, but considering it’s built by volunteers with donated money, I think it’s pretty impressive.

They’ve got a new project underway right now: training a better voice recognition system. A screenshot of it is at the top of this post.

They’ve gotten the community to submit a collection of training data – people from around the world waking up Mycroft by saying “Hey, Mycroft.” The rest of us get to listen to the samples and grade the clarity of each recording. If you definitely hear “Hey, Mycroft”, you give that sample a thumbs-up. If it’s murky or unclear, you tag it as a maybe. And of course, if it’s just background chatter or noise, you flag it as a negative.

All this (anonymous) data is used to teach the AI what “Hey, Mycroft” sounds like. The community provided the samples, now the community is helping to teach the AI.

It’s really pretty cool. Although I admit my bias and fondness for the KDE ecosystem, Mycroft still has catching up to do. It’s not nearly as responsive as Alexa or Siri, but then it simply can’t match Amazon, Apple, or Google in terms of development resources. And so I pitch in. I’ll take a half hour to listen to samples and grade them. I’ll fiddle about with the desktop widget and report bugs or issues.

Because it’s open source, I might even dream up and program a skill for it.

After I do everything else on my list, naturally.

Patching Plasma’s file manager

My latest patch for KDE was purely cosmetic, but it was to a key component: the system file manager, known as Dolphin.

A user reported that, when increasing the system font size, the icons would lose their horizontal centering. It’s not a huge deal, but KDE takes a lot of pride in producing a professional product. In fact, my mentor is a user interface/user experience guy, so these kinds of things get his attention.

It was a small task, but Dolphin is a large project. The whole package is spread over 426 files and an estimated 36,992 lines of source code.*

Before patching, the icons got stuck to the top boundary of their rows:

This was only evident when the font size rendered larger than the icon size. It went unnoticed for ages, until an astute user reported the issue.

I reworked the Y position for the icons, basing it off the centerline of the text. Previously, the Y calculation was a wonky combination of the icon height and the padding around rows. It didn’t work properly, at least not under all conditions.
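
The real fix is a couple of lines of C++ inside Dolphin, but the arithmetic is simple enough to sketch in Python. The names and numbers below are mine, not Dolphin’s; the point is just that the icon gets centered on the text’s vertical midline instead of pinned to the top of the row.

    # Illustrative only -- not Dolphin's actual code. The idea: center the
    # icon on the vertical midline of the text instead of pinning it to the
    # top of the row.

    def icon_y(row_top, row_padding, text_height, icon_height):
        """Return a Y offset that vertically centers the icon on the text."""
        text_center = row_top + row_padding + text_height / 2
        return text_center - icon_height / 2

    # With a large font (text taller than the icon), the icon stays centered:
    print(icon_y(row_top=0, row_padding=4, text_height=48, icon_height=32))  # 12.0
    # With a small font, the same formula still works:
    print(icon_y(row_top=0, row_padding=4, text_height=16, icon_height=32))  # -4.0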

With my change, everything stays centered:

Special thanks to the original author – or maybe a subsequent maintainer – for writing in the debugging code that draws those frames around the items.

Naturally, that’s not the normal view:

It came in very handy for checking the alignment.

My patch is still in for review. It’s very minor (what KDE calls a “junior job”), but it’s how new contributors get exposed to the products and how they’re created and maintained. Reported bugs or feature requests that are limited in scope or scale get flagged as junior jobs, ideal for newcomers to practice on. I’ve done a handful of them now, but it’s only recently that I’ve gained the confidence to attempt surgery on major components of the system, as opposed to an isolated application. KDE follows a strict release schedule, so even if my fix is approved, it won’t see the light of day for several months. Hell, this wide-ranging global free software project is better organized than some multinational corporations I’ve worked for. And they’re sufficiently confident in their review processes that they let “just anyone” take a crack at fixing problems.

I’ve still got ground to cover before they’ll grant me a developer account, which would allow me to publish changes myself. But the more small things I fix, the more experience I’ll gain. I think this was straightforward in the end; the hardest part was finding the two lines of code that needed adjusting.

And I think I accomplished it without causing any damage to a key system component.



*calculated with David A. Wheeler’s “SLOCCount” tool


Praise The Blue Steel

Updated this page’s header image with something a little less monotone. Of course, it’s a Propaganda tile, called Praise-The-Blue-Steel-1. I ran it through GIMP to wash out the colors a little and tile it to the appropriate size. The original is on the left.

I’ve had some partial success with my project to write an application for searching the Propaganda tiles by color. I’ve extracted the top three colors from each image, along with the percentage of each. I’ve run a few tests to calculate the “distance” between a chosen color and the most-prominent (highest percentage) color in the images.

The good news is that the algorithm is fast. Processing the images and extracting the colors took some time (maybe two hours total, for 1,000+ images), but the mathematical formula to find the closest matching color is impressively fast. It should be a shade faster still when I rewrite it in C++ instead of Python.
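
For the curious, the formula itself is nothing exotic. One simple way to do it is to treat each color as a point in three dimensions and measure the straight-line distance between the points; something along these lines (my actual script may differ in the details):

    import math

    def color_distance(c1, c2):
        """Euclidean distance between two (r, g, b) colors treated as 3-D points."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

    chosen = (70, 130, 180)     # a user-picked steel blue
    dominant = (12, 24, 48)     # a tile's most-prominent color (illustrative values)
    print(color_distance(chosen, dominant))   # smaller numbers mean closer colors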

The issue I’m running across isn’t my fault or a flaw in the code, but a fact about the tiles themselves. Most of them are predominantly dark. So if a user chooses a bright, vivid color, the closest match may not be as bright.

For example, look at the original Praise-The-Blue-Steel up there. Its dominant color, at 40.1%, is this extremely dark shade of blue:

That’s according to the mathematics. To my eye, the image “feels” much lighter than that. I’ve run the image through two different color-extraction routines: k-means clustering and finding the maximum eigenvalue. I don’t understand the calculus behind either implementation, but I know they generate nearly identical results. So who am I to argue with the math?

Well, I’m me and I argue with the math. I don’t like these results.

Perception is reality, and my perception tells me that image isn’t as dark as the swatch. I don’t quite know what to do. I could extract more than three colors, which would certainly include those lighter shades that are catching my eye, but then I’d have to disregard the percentages. I may try extracting five colors instead of three (which will mean re-processing all the images) and, when comparing, check all five colors against the selected color to see what kind of matches I get.
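
If I go that route, the matching side would look roughly like this. It’s only a sketch of the plan, assuming Pillow and scikit-learn for the extraction (my existing script may do things differently), with each tile scored by its single closest color and the percentages ignored:

    # Sketch of the "extract five, compare against all five" idea.
    # Assumes Pillow + scikit-learn; my existing script may differ in the details.
    import math
    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    def extract_colors(path, k=5):
        """Return the k cluster-center colors of an image (percentages ignored)."""
        pixels = np.asarray(Image.open(path).convert("RGB"), dtype=float).reshape(-1, 3)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
        return [tuple(map(int, c)) for c in km.cluster_centers_]

    def distance(c1, c2):
        """Plain Euclidean distance in RGB space."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

    def best_match(chosen, tiles):
        """tiles maps a tile name to its extracted colors; score by the closest one."""
        return min(tiles, key=lambda name: min(distance(chosen, c) for c in tiles[name]))

    # Usage sketch (filenames are placeholders, not real tile names):
    # tiles = {p: extract_colors(p) for p in ("tile-001.jpg", "tile-002.jpg")}
    # print(best_match((70, 130, 180), tiles))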

It’s an interesting problem. It’s also a bear to solve. I don’t have anything written in the way of an interface yet – I’ve just got Python code spitting out raw data to the console.

But this is how you learn, I suppose. Trial and error. Process and re-process. There’s an entire section of the library at MIT devoted to image analysis, and I’m just a guy in his living room with a laptop and a halfway-decent idea. But we’ll get there. Or we’ll get somewhere. Exactly where remains to be seen… and it’s part of the fun.

Fig. 5

From the days when cool and unusual things were drawn by hand, I present you with Fig. 5:

It’s part of a patent for the use of the HSL color space in computer graphics terminals.

You don’t have to understand it. Just look at it.

It’s spectacular.


Mycroft

Wrong Mycroft.

The right Mycroft is an open-source AI speech-enabled personal assistant. That’s a mouthful, but it’s synonymous with one more familiar word: Alexa. Or maybe you prefer Siri. Or Cortana. Or Hey Google, even though that’s two words.

It’s an ambitious project, going up against giants like Amazon, Apple, Microsoft, and Google. But like all good open-source projects, they’re giving it a genuine, honest try.

And the key selling point is… privacy and transparency. They don’t have anything to sell you, so they don’t keep your recordings for marketing purposes. The code for all the skills is freely available, so you can inspect it, change it, improve it.

And yeah, it already exists. It’s not just a dream project. Admittedly, the Mark I unit looked a little too cutesy:

But they’ve already raised over $425,000 on Indiegogo for the creation of the Mark II speaker unit, which looks a lot more professional and more in line with what’s on the market right now:

It can still do smiley faces and whatnot, but it looks a lot more like the rest of the smart-speaker units out there. And again, it’s all open-source, even the hardware. They’ll sell you a development kit that’s just the screen and the guts and let you create your own enclosure. 3D printing, anyone?

There’s also a prototype desktop widget for your KDE Linux desktop. You can say, “Hey Mycroft” and it perks up and performs a Wikipedia search, gets the weather, etc. Well, it’s supposed to. I’m still in the process of getting it to work. The first hurdle was getting my laptop’s microphone enabled, but that’s not really Mycroft’s fault. The documentation for running this on your computer is a little sparse, at least for this desktop widget. And it’s got a few bugs, like displaying the weather in degrees Kelvin:

I think this has the potential to be a very cool project. I’ve already submitted a bit of code to it: a simple script to help install some of the needed software. It’s not much, but it’s something. We’ll see if they accept it. Likewise, I’m going to see if I can find the degrees-Kelvin glitch and patch it. It’s the least I can do, considering I submitted a bug report featuring Joe Strummer.

I don’t have it working right just yet. Every time I try to speak to it, it thinks I’m saying “pair my device”. Hmm.

But it’s another cool project to experiment with; something new to try.

And isn’t that what it’s all about, in the end?

Propaganda: Processing Begins

The Image Processing Division of Bundito Heavy Industries reports success in analyzing the first batch of Propaganda tiles.

Terminal output attached as proof of life.

UPDATE (19:03)

I’ve crunched through all 19 (or is it 18.5?) volumes of the Propaganda tiles and stored all the color information in a SQLite database. I’ve got the three most-prominent colors in each image extracted and filed away. There are a total of 1,047 tiles, by the way.
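
For the curious, the layout is roughly the following. It’s a sketch; the actual table and column names in my script may differ, and the sample values are purely illustrative:

    # Rough sketch of the SQLite layout: one row per (tile, color), with each
    # color's share of the image. Table/column names and values are illustrative.
    import sqlite3

    conn = sqlite3.connect("propaganda.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS tile_colors (
            tile        TEXT NOT NULL,      -- image filename
            color_rank  INTEGER NOT NULL,   -- 1 = most prominent
            r           INTEGER NOT NULL,
            g           INTEGER NOT NULL,
            b           INTEGER NOT NULL,
            share       REAL NOT NULL       -- fraction of pixels, e.g. 0.401
        )
    """)
    conn.execute(
        "INSERT INTO tile_colors VALUES (?, ?, ?, ?, ?, ?)",
        ("some-tile.jpg", 1, 12, 24, 48, 0.401),
    )
    conn.commit()
    conn.close()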

The next trick will be matching that color data to a color chosen by a user. Since it’s unlikely that the user’s choice will hit an exact match, I need to find colors that are close to the one they picked, and deciding what counts as “close” isn’t a straightforward endeavor. There’s going to be more math tomorrow. Lots more math.

In the meantime, I got feedback on some bug fixes I’d submitted for KDE. Henrik has more guidance and specific methods he’d like me to employ. More learning!


Acer Aspire E15

With my interest in Linux growing, I decided it was time to get a machine dedicated to running KDE directly. I already had my machine in the basement, which I connected to using NoMachine. It’s an excellent (free!) product if you want to remotely control another computer. I was able to run a reasonably high-quality connection from my MacBook Pro at 1680×1050. The connection never lags on NoMachine; it dials back the quality temporarily when the screen gets busy. That meant that scrolling through long directory listings or pages of source code would go blurry for a moment, then sharpen up. But it still wasn’t like a real-time, live screen. There was always a hint of softness and filtering to the display. Plus, if the thing crashed (more like, if I crashed it), I’d have to go downstairs, climb over bicycles and boxes of CDs, and power on my ancient LCD monitor to see what happened. I know, such a trauma.

But I digress.

After spending my vacation downtime doing research, I settled on the Acer Aspire E15 (specifically model E5-576G-5762; there are multitudes named “E15”). It’s got an Intel Core i5 quad-core processor, 8GB of RAM, a 2GB NVIDIA graphics card, a full 1920×1080 screen, and a 256GB solid-state drive. It shipped with Windows 10 Home pre-installed, so the $599 price tag certainly included the infamous “Microsoft Tax” for an operating system I didn’t want and planned to erase.

My reading and research told me that upon arrival, I should let Windows 10 do its painstaking update processes. Apparently Windows is the only way to update the BIOS on this machine; Acer doesn’t provide any other option. So I let it grind away for the better part of three hours performing updates. Meanwhile, I downloaded the latest developer edition of KDE neon and put it on a bootable USB drive. Once the updates finished, I made a Windows Rescue Disk on another USB drive (I really need to put labels on those…) and prepared to install Linux.

Linux on laptops has traditionally been a tricky proposition. If there’s not a problem with the touchpad, there will be hassles with the WiFi adapter. Or things like not going into suspend mode when you close the lid. There’s so much varied hardware in the world that even the agile open-source community can’t keep tabs on all of it. Which is why I did so much research and bugged so many people on Reddit, Ask Ubuntu, the KDE forums, and elsewhere. Finally I got a handful of partial answers, which I pieced together and decided this machine would work.

Booting from the USB stick was easy and simple. The install was quick and uneventful. I considered shrinking the Windows 10 partition down to the bare minimum and keeping it available as a boot option if I needed it, but at only 256GB, I wanted all the bytes I’d paid for. Plus, I had the bootable recovery USB stick if I needed it (I really need to label that).

The machine booted into KDE quickly and the touchpad and backlit keyboard worked flawlessly, with no headaches. The WiFi, though, was giving me trouble. I kept losing the connection, having to rejoin the network, and then endure awful speeds as I tried to search for the issue. I ended up tethering the laptop to my iPhone over Bluetooth and using the phone’s connection for a while. That worked great.

I narrowed the problem down to two suspects: the WiFi broadcast channel, or the repeater I had plugged in. Changing the WiFi channel made no difference. Once I yanked the repeater out of the wall, the machine snapped to life and browsing the web and downloading updates ran at full speed. I think the repeater and the main router were on different channels, which was causing a sort of interference. Apparently other devices and operating systems can negotiate this, but not the Linux drivers. I need to get that repeater reconnected and reconfigured – I may have reintroduced a problem on my wife’s MacBook Air. But I don’t think so. The Air is a tricky little sucker when it comes to WiFi – troubleshooting it for her in the past (without the repeater, or on hotel WiFi) has proven that much.

The only additional package I had to install was TLP – Advanced Power Management For Linux. It added or enabled kernel modules for full power management support, like dimming the screen when on battery power, or going into suspend mode when shutting the lid.

Music playback is pretty lousy. It’s got typical laptop speakers with no bass and tinny highs. I’d like to connect to my Amazon Echo as an external Bluetooth speaker, but I’ve got some work to do there. The laptop is insisting on pairing with a PIN, which the Echo doesn’t support. PIN-pairing is old-school Bluetooth, and this laptop supposedly supports the almost-latest version of Bluetooth. I’ll get back to you on this.

The screen is nice and sharp, much sharper than the NoMachine connection I was used to. Everything scrolls smoothly and crisply. I’m very pleased. I can’t find which laptop review site said this machine has a lousy display, but they need to clean their eyeballs. I don’t know about off-angle viewing, since I’m the only one using it. I won’t find out how good the screen is in daylight for another couple of months. But I’m perfectly happy. A full 1920×1080 (IPS) display in this price range is uncommon.

The keyboard is a little squishy, as is clicking the trackpad. But this is a $599 mid-tier laptop, not a $2500 MacBook. The body is plastic, not aircraft-grade machined aluminum. But these are compromises I can definitely live with.

So far, my fears about the laptop being underpowered have proven unfounded. Even with a dozen open browser tabs, Thunderbird email, and a music player all running, I haven’t hit the wall on memory or speed yet. I’ve been able to compile code quickly and effortlessly.

In short, a few minor bumps in the road, but overall I’m very pleased. It’s been only a few days, but I’ve made sure they were thorough days with plenty of testing.

And after I peeled off the last of the stickers, no signs of Windows anywhere. Except this damned key on the keyboard…