I've been trying to learn programming ever since I was a teenager. And I'm terrible. It's not too much of a stretch to say that I became a writer because I couldn't program — the last job I had before I started blogging for Engadget was web design. My natural intuition for computers, which has kept me employed as a technology pundit, extends only to the glossy graphical user interfaces that became popular in the '90s. But machines don't think in terms of graphical user interfaces; they think in terms of cold, unyielding logic.

We all know that deep down somewhere inside of all digital technologies are 0s and 1s — also known as bits. Computer programming is the generation and transformation of those 0s and 1s into something really great, like, for example, this web page. Somewhere in the world, this blog post is stored as a series of bits on a physical hard drive. When your web browser requests the blog post, those bits are gathered up, transmitted over the internet, and interpreted as a fancy web page by your computer.
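To make that concrete, here's a toy Python sketch (my own illustration, not anything from this site's actual plumbing) that turns a phrase into the bits it would be stored as:

```python
# A toy sketch: turn a phrase into the 0s and 1s it's stored as on disk.
text = "Why programming is hard"

# Encode the text as UTF-8 bytes, then show each byte as eight bits.
bits = " ".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)  # 01010111 01101000 01111001 ...
```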


The way this works is through piles and piles of logic. A computer processor sees this stream of incoming data and executes logical operations on it. "If I see this chunk of data, do this; if I see that chunk of data, do that." A computer has physical logic gates, millions of them, that make decisions and transformations based on the incoming bits.
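Here's a toy Python sketch of that idea, miles above the actual silicon, obviously, but the same shape: a NAND gate is one of the physical building blocks, and you can wire two of them together to get an AND gate.

```python
# A toy model of a logic gate, written as code instead of silicon.
def nand(a, b):
    # NAND: output 0 only when both inputs are 1.
    return 0 if (a and b) else 1

def and_gate(a, b):
    # An AND gate built out of nothing but NANDs.
    return nand(nand(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {and_gate(a, b)}")
```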

What made computers revolutionary, and so much more than a fancy calculator, is that both the data for computation (like, this parenthetical sentence), and the instructions for computation (like, put this parenthetical sentence on a computer screen) are malleable. A CD player, for instance, knows exactly how to take an input of 1s and 0s set down in a specific order and turn it into music, but a computer can take that input and do anything conceivable — make a remix, make an MP3, make a visualizer, make a MySpace page.
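Here's a made-up little Python sketch of that idea: the "instructions" are just another thing the program can pick up, swap around, and apply to the data.

```python
# Toy illustration of the stored-program idea: instructions handled like data.
samples = [3, 1, 4, 1, 5, 9]  # pretend these numbers are audio off a CD

def remix(data):
    return list(reversed(data))

def quieter(data):
    return [s // 2 for s in data]

# The computer doesn't care which transformation it runs; the instructions
# themselves are swappable, just like the data they operate on.
for transform in (remix, quieter):
    print(transform.__name__, transform(samples))
```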

My job as a writer is to take a big idea, supply some useful examples or analogies, type it all up, spell check it, ignore most of my grammar mistakes, and press "publish". I'm encoding human thought in the vaguely-defined data format known as "English," for consumption by other humans. I make inaccurate, but useful, assumptions about how other humans will interpret what I write, but ultimately that's out of my hands.

Programmers have to think about data formats, but they have to be a lot more precise because their audience isn't humans, it's machines. For instance, I talked with Karma's CTO Stefan Borsje about the very first code he wrote for Karma: it was a definition of a format to store a list of Karma devices. "Because I know we'll have those," he reasoned.
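I never saw that code, so what follows is purely a hypothetical sketch in Python; the field names are my own invention, not Karma's.

```python
# Hypothetical sketch of "a format to store a list of Karma devices."
# These field names are invented for illustration; I don't know what
# Stefan's real definition looked like.
from dataclasses import dataclass

@dataclass
class Device:
    serial_number: str
    owner_email: str
    activated: bool = False

devices = [
    Device(serial_number="KARMA-0001", owner_email="stefan@example.com"),
    Device(serial_number="KARMA-0002", owner_email="paul@example.com"),
]
```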

These machine-readable definitions of information are precious to, and hotly debated by, programmers because the way information is stored on a computer informs how it's used and manipulated by a computer. I could put this whole blog post through Google Translate and publish it in Korean, but that would ultimately limit, confuse, and inconvenience my readers. Computation can be similarly directed and inhibited by a data format.

But data, on its own, isn't impossible to deal with. If you've ever written some HTML, you were encoding information in a machine-friendly format. Maybe the < > brackets and nested structure took some getting used to, but HTML's rules are much more straightforward than the rules of written English.
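As a rough illustration (a toy, not how any real browser works), here are a few lines of Python using the standard library's HTMLParser to walk a scrap of markup. Notice how little guessing the machine has to do:

```python
# A scrap of HTML is information with explicit, machine-friendly structure.
from html.parser import HTMLParser

page = "<article><h1>Why programming is hard</h1><p>And I'm terrible.</p></article>"

class TagLister(HTMLParser):
    def handle_starttag(self, tag, attrs):
        # Unlike English, the structure is unambiguous: the machine just reads it.
        print("opening tag:", tag)

TagLister().feed(page)  # opening tag: article, then h1, then p
```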

Where it really starts to fall apart for me is when I try to perform some sort of computation on data. While most programmers don't have to think about the logic gates inside their processor, the code they write is ultimately interpreted by those logic gates, and therefore has to be in some way logically correct.

What do I mean by logically correct? Let's do some math! So, like, x = x, right? And x + 0 = x, yes? Maybe even x * 2 = 2x, I don't know, I'm just riffing. In the same way algebra defines a consistent system for manipulating numbers, everything a computer does to data needs to be consistent and predictable in order to be manipulable. Stefan's data format for devices, and any logical transformations he wants to apply to it (like adding more devices, for instance), ultimately have to adhere to the computer's system of algebra, or everything breaks.
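Python will happily confirm my riffing, and just as happily refuse to play along the moment I step outside its rules. A quick, made-up demonstration:

```python
# The machine's algebra is consistent and predictable...
x = 3
assert x == x          # x = x
assert x + 0 == x      # x + 0 = x
assert x * 2 == 2 * x  # x * 2 = 2x

# ...and it refuses to riff along when the rules are broken:
# "3" + 1  # TypeError: can only concatenate str (not "int") to str
```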

As a writer, I make logical assumptions, jumps, or fallacies all the time. That's tolerable when talking to humans, but it's simply insufficient for talking to machines.


When I get an idea for a program in my head, I tend to think about the big picture, and the world-changing implications. Then I might even consider a few of the implementation details. But once I start programming, I quickly realize that I have no idea how to actually tell a machine what I want to do. I can't actually describe the logic of what I want to accomplish. A good programmer breaks a program down into minuscule, correct chunks. Then they build these chunks on top of each other, hoping none of the chunks break the logical chain. They fail repeatedly, and the program breaks constantly, until finally, at last, it doesn't.

If they're doing it right, those chunks are called "abstractions," and they encapsulate the nitty gritty details so the programmer's brain can focus on the nitty gritty of the next chunk. A programmer, like any human, can only hold so many elements of a problem in his head at one time. Abstractions let him generalize about the behavior of one part of his program so he can dump those details out of his brain and load in some new details. The problem with abstractions is... they're abstract, and could be totally wrong, and then everything breaks and you have to dive back into the details to figure out what went wrong.
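Here's a made-up Python sketch of what I mean (again leaning on my imaginary Karma devices): once activate() is trusted to do its job, the chunk above it never has to peek inside.

```python
# A sketch of an abstraction: wrap up the messy details behind one name.
def activate(device):
    # Pretend this hides all the nitty-gritty of talking to real hardware.
    device["activated"] = True
    return device

def activate_all(devices):
    # This chunk only trusts that activate() keeps its promise;
    # it never looks at the details underneath.
    return [activate(d) for d in devices]

fleet = [{"serial": "KARMA-0001", "activated": False},
         {"serial": "KARMA-0002", "activated": False}]
print(activate_all(fleet))
```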

When I write wrong bad sentence words, I can still press "publish" and be read by humans. When I write wrong bad code programs, the machine spits it back at me, refusing to read it.

If you've gotten this far through my English-format document titled "Why programming is hard," I suppose I owe you some takeaways. First, I'd like to reassure you that I haven't given up yet on programming. In fact, I'm making a lot of progress. I find programming so difficult that there's an almost perverse satisfaction I gain by banging my head against its ivory tower.

Secondly, I'd like you to give any programmers in your life a subtle head nod of appreciation now and then, because they're doing this insane job of translating human intuitions and understanding into computer logic. The next time they're slow on a project, or skeptical of your feature request, wonder if maybe their data formats or existing logical program structure or the current state of computer science are poorly suited to the transformations or additions required by a new specification. Or, like, maybe they're tired.

