Dumbo’s Feather

Is your new tech successful because it’s great, or because it’s new?

In technology education, or what passes for it in today’s world of blog-tweets and YouTube-synced slide decks, signal-to-noise ratio is only part of the problem. The larger problem is signal degradation. Today’s hot technology, the one that’s doubling productivity around the world, is tomorrow’s dead dog rotting in a gutter, and the pace seems to be quickening. And whither the legacy codebase?

Every dead technology started the same way: somebody had a problem and they fixed it. Pleased with themselves, they told somebody else. Eventually, an evangelist heard about it, and they told everybody who would listen. Developers heard; most ignored it, but the truly curious tried it out. Some of them agreed — this is the hot new thing. They applied it to a project, the project worked, and suddenly the hot new thing is making IPO millionaires out of guys writing LinkedIn articles about their culture while wearing sketchier beards than mine.

Then out of nowhere, our new technology isn’t new anymore. It’s missing X or too full of Y, doesn’t scale, isn’t elegant. Suddenly this once-great technology is a doorstop, and anybody gauche enough to program with it is an industry pariah, relegated to cleaning up messes in the machines that made other people rich.

I wonder how many technologies have their reputations built on what I call the Dumbo’s Feather effect. You remember the story of Dumbo — baby elephant discovers, by virtue of waking up on a telephone wire, that he might be able to fly. But he has no faith in himself; after all, the idea is preposterous. Until, that is, he acquires a “magic” feather — a placebo totem that gives him the confidence to grow from doubter to pachyderm Icarus.

Developers, like all skilled professionals, naturally mature. As their skills increase, their productivity improves. But this doesn’t protect them from failure. It’s inevitable — you can’t program without making assumptions, and some of them are going to be wrong. Not to mention the other inevitabilities of software, like flip-flopping customers, pushy bosses, shoddy analyses, short schedules. And in the digital space, where only the developers know what’s going on, obviously it’s their fault.

A developer can’t help blaming themselves for software failures, even if nobody else does, and even if it’s not their fault. A sensitive dev remembers every great idea scrapped, every beautiful implementation sullied by haste and hack, and every production failure in a class bearing their @author tag. If only they’d been precognitive, clairvoyant, and infinitely productive!

And for many, this feeling of failure compounds; it begins to weigh heavy. It makes the developer risk-averse, sometimes to the point of being afraid to do anything bold. And that’s a problem. A developer without confidence is supremely unproductive, refusing to make any assumptions and turning every decision point into a meeting.

Then a new technology comes around. And hey — says in this one blog post that it’s specifically designed to fix the problem the developer’s been dwelling on all this time. So he checks the documentation — say, hello world looks easy! He types it up, compiles after only a few tries! He stays up all night trying different permutations. He posts in the forums, reads everything he can, attends a conference, and is most careful not to repeat the mistakes he made on that OLD technology.

And when that first big project on the new technology hits production — a rocky release, to be sure, but who cares about that, it actually worked — it becomes the star. The elephant flew, and it was the feather what done it.

Confidence. It’s essential to every stack. Even a battle-hardened skeptic like me needs it, sometimes, and the allure of a new technology brings with it an awful lot of confidence-building. Sometimes, it’s even enough to overcome the interruptions, the missteps and the other inherent risks in adopting it. The rest of the time — hack it, patch it, decay sets in, and it’s on to the next new tech.

The Cardboard Architect (#2 in a series)

[Photo, Apr 01, 3:34 PM]

The architect has just discovered an unexpected change to an integral design, one you did not discuss with him and that will now require six weeks of work to repair.

Keep in mind: these are all real expressions, captured just moments after the event described. In this case, two competing implementations of the same functionality had been produced to handle two distinct and only accidentally related business cases. The original goal was for the more flexible set, which mirrors other configuration in the system and can be deployed in real time, to become the standard, and for a basic set implemented in compiled Java classes to be replaced. Once the pair had run concurrently in production for long enough, the need for the second business case would be cleared entirely, so it didn’t matter that the basic set wasn’t flexible. I was shocked, however, to discover that developers pursuing customer on-boarding automation had replaced all the carefully constructed logic in the flexible configuration files — with hard links to the Java classes!

With the benefit of hindsight and about three fingers of Albany Distilling Company’s Coal Yard Rye, I realized why this had happened: the Java configuration, by virtue of a simple set of static requirements, had a really straightforward API and beautiful, self-documenting implementations. The flexible configuration, which reused a complex, non-intuitive DSL and implementation from an earlier project, looked a mess. And while the automaton didn’t care about the legibility of its output, the clean and concise Java classes had much higher appeal to its developer.

It’s flattering, really. When it isn’t maddening. And hey, maybe we’ll buy back those six weeks with the tool!

The Cardboard Architect (#1 in a Series)

[Photo, Feb 22, 1:51 PM]

The architect is not impressed by your proposed solution, and is trying not to downgrade his assessment of your skill level.

The “cardboard architect” is an exercise wherein you present an idea out loud to an inanimate object, and in the telling both practice and hone the argument. It is, frankly, a silly idea that nobody practices more than once or twice, sheepishly. However, as I am generally known for wearing my design opinions on my sleeve, not to mention all over my face, I thought it’d be fun to put together a set of stock facial expressions for common software scenarios that might be encountered in an actual design review.

Collect ’em all!

How to learn how to program

First of all: don’t be scared! You can’t break a computer by trying to program it. Oh sure, it’s possible to break a computer…but not for YOU as a beginner. And who cares anyway? The computer was made to serve you, somebody plunked down good money for it to do so, and if it isn’t serving you right, then it’s already broken. Programming should always be seen as a way forward, and any frustrations or missteps along the way merely part of the journey. You will break programs, and then you will fix them, better than before.

Programming is like writing: you need to have an idea and an intended audience before you can get started in earnest. This will help identify the platform and the language you will be writing in. There is no such thing as a “best language” — every computer language is a set of trade-offs distinct from other languages with different trade-offs — but there are better platforms and languages for specific tasks.

What’s a platform? Well, that’s the environment in which your program is going to run. It’s usually a program itself, such as an operating system, a device, a server, a virtual machine or a web browser. Programs written for one platform might not be portable to another and each has its own input and output restrictions, so it’s a pretty important choice. An easy way to start might be to write a program that lives in a web browser, or one that is interpreted by your computer’s operating system on the command line.

Which language should you learn? Well, it doesn’t matter all that much – all languages are very similar if you don’t know anything, and if you do any programming of real merit you’ll probably learn a dozen or more in your lifetime. A journeyman programmer should be able to pick up a new language and be productive in it in a couple of days, though mastery takes years. But you’re going to want to pick a language that works with your target platform, one that a lot of people know about, and preferably one whose platforms and tools don’t cost much. There are toy languages, like Logo, and academic languages, like Eiffel. They’re fine for learning specific techniques, but there is no reason not to get started immediately in a language like JavaScript (for the browser), PowerShell or Perl (for the command line) or Java (for lots of things). These are low-barrier-to-entry languages with free platforms and lots of great free tools, and you can get seriously paid working with them.
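
To make “low barrier to entry” concrete, here is roughly what a first Java program looks like. It prints a single line and exits; compile it with javac HelloWorld.java and run it with java HelloWorld.

```java
// HelloWorld.java
public class HelloWorld {
    public static void main(String[] args) {
        // Print one line of text to the console.
        System.out.println("Hello, world!");
    }
}
```

Two commands and you have a running program; everything after that is elaboration.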

What about academics? Well, math is pretty important, specifically algebra. Most computer languages owe their lineage to algebraic equations, and you will be building lots of them. Higher math, like trigonometry or calculus, is exceedingly important for some very rare programming tasks, but for basic programming, algebra is it. I wrote my first program in second or third grade, so if you’re at least as smart as a sixth grader you’ll do fine. English is fairly important, as many programming languages use English commands in their grammar and many programmers use English in their variable names.
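
To show what “building algebraic equations” looks like in practice, here is simple interest, A = P(1 + rt), turned into a program. The numbers are invented purely for illustration.

```java
public class Interest {
    public static void main(String[] args) {
        // Simple interest, straight from algebra class: A = P(1 + rt)
        double principal = 1000.0; // P, the starting amount
        double rate = 0.05;        // r, five percent per year
        double years = 3.0;        // t
        double amount = principal * (1 + rate * years);
        System.out.println(amount); // prints 1150.0
    }
}
```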

Some understanding of how computers function at a physical level can help you understand what’s going on, but you could get this from a book, like How the Internet Works. At the most basic level, though, you’ll need to know the following things about computers:

  • What a file is and how to manage files using your operating system.
  • How to open programs, use their menus, and switch between them.
  • How to copy and paste, and perform other editing functions.
  • How to undo mistakes.

To get you moving in your language of choice, you’re going to want a guide. For the languages above there are excellent online resources, so you can just start Googling or Binging until you find one you like. You can also pay to be taught the language, either at a school or at a technical training company like New Horizons. Your best bet, however, would be to buy or borrow a book. Try to get a thin one that makes sense to you — no two readers are the same, and that’s why there are so many tech books on the same subject. The Head First series from O’Reilly is a great example of books that are like nothing else, attempting to teach the same subject using multiple techniques at the same time. Personally I find them insane.

So. Armed with courage, a platform and language choice, a guide, some basic academic skills and an idea, you’re ready to start learning programming. Here’s how you do it:

Program!

Write the examples from the book, then change them to do something else. Notice how each change to the program or to the input changes the output. Notice how the system complains or fails when you do something wrong. Figure out how to fix it — now you’re debugging! See something you don’t understand, a message or a command? Google it. Ask a friend. See if you can change the program around to make a different message. Try removing the command, see if things don’t get worse!
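
Here is a small, deliberately broken example of that cycle, with invented values. It compiles cleanly but fails the moment it runs, and the error message points straight at the guilty line.

```java
public class Oops {
    public static void main(String[] args) {
        int[] scores = {90, 85, 70};
        // Arrays are numbered from zero, so the last valid index is 2.
        // Asking for index 3 compiles fine but throws an
        // ArrayIndexOutOfBoundsException at runtime, naming this line.
        System.out.println(scores[3]);
    }
}
```

Change scores[3] to scores[2], watch it print 70, and you have just debugged your first program.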

Install some tools to help you program better. Read more books. Look at other languages and translate your knowledge. Learn some task-specific languages like SQL or R. Read some blogs. Read Code Complete, The Pragmatic Programmer, Clean Code, Effective Java. Become a software developer, a software engineer, a software craftsman, a software designer, a software architect, a software artist. Read more books! Go to a “No Fluff Just Stuff” convention. Go to a users’ group.

Programming is easy! Learning to program is easy! And pretty much everybody can do it.

Software, on the other hand, is hard.

Velocity and quality attributes

Google Image Search is awesome, you guys.

Believe it or not, this is the first result when I did an image search on “quality attributes.” It made me think, but it also made me hungry.

Years ago, Bruce Eckel (of the excellent Thinking in Java series, a must-read for newcomers to the language) started work on a patterns book. This was about the time that everybody was writing patterns books, and as a result the book never really hit press and is still under development, where it may remain for life. However, the notes are available here, and are worth a glance if you like well-stated rehashings of common sense.

I do. And I got quite a lot out of his “Design Principles” section, wherein he rephrased the impetus behind SOLID in a way that I feel transcends that collection of OO software principles and can be applied to any computing task (UI design, DSL design, database design, etc.). I have a copy of my own rewording of his attempt on my wall.

One of these principles is called “Complexity,” and paraphrasing:

The more random rules you pile onto a developer, rules that have nothing to do with the problem at hand, the slower he can produce. And this effect is exponential, not linear.

In other words, if you take a competent developer, provide him with a set of requirements, and he produces slowly: you’re asking him to do too many random things. And if you were feeling masochistic, you could work backward from the magnitude of this poor performance and take its logarithm to uncover just how many random and “unnecessary” requirements you imposed to create this quagmire.
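
To put invented numbers on that: if each extraneous rule roughly halves throughput, then a developer producing at one-sixteenth their normal pace is carrying about log2(16) = 4 such rules. The figures are made up; the shape of the curve is the point.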

Of course, in software there really are no random requirements. Everything you ask a developer for, you think you need. And in some cases, you’re quite right: if you’ve got an application handling data more interesting than, say, a blog post about software, you’re probably going to need a boatload of things that aren’t called for in the explicit business case. Things that we architectural nerds like to call “quality attributes.” Things like supportability, flexibility to changing business requirements, scalability, performance, multi-tenancy, auditability, transactionality and so on.

The way I see it, developer velocity is rarely a problem of not typing fast enough. Instead, it’s a function of how many quality attributes they need to address, in addition to their core requirement, to produce an acceptable deployment artifact, and again this is an exponential problem.

There is no silver bullet for this problem — it’s inherent to the task of software development — but there are some ways to reduce it. Using tools that make one or more attributes trivial or “free” is one — for example, a cache framework that scales from in-memory to distributed with a minimum of fuss. Building on top of a quality AOP framework, thus allowing the incremental addition of certain attributes that can be described as aspects (transactionality, caching, monitoring, auditing, etc.), is another. Baking essential quality attributes into the architecture is a third — think of web frameworks that make SSO, auditing and error logging a function of the web container, leaving applications free to ignore these concerns. Finally, an organization can take a layered approach to development, in which the implementer of the software builds the business function to completion and then adds (and tests!) one quality attribute at a time, working from the most essential to the least essential until they run out of time.
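
As a rough sketch of the “attributes as aspects” idea (all of the names here are invented), the business interface stays oblivious while wrappers layer caching and auditing on top of it:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical business interface: the core requirement and nothing else.
interface RateQuoter {
    double quoteFor(String customerId);
}

// The real implementation worries only about the business rule.
class SimpleRateQuoter implements RateQuoter {
    public double quoteFor(String customerId) {
        return customerId.startsWith("VIP") ? 0.02 : 0.05; // toy logic
    }
}

// Caching bolted on as a wrapper, without touching the business code.
class CachingRateQuoter implements RateQuoter {
    private final RateQuoter delegate;
    private final Map<String, Double> cache = new ConcurrentHashMap<>();
    CachingRateQuoter(RateQuoter delegate) { this.delegate = delegate; }
    public double quoteFor(String customerId) {
        return cache.computeIfAbsent(customerId, delegate::quoteFor);
    }
}

// Auditing added the same way; stack as many wrappers as the release needs.
class AuditingRateQuoter implements RateQuoter {
    private final RateQuoter delegate;
    AuditingRateQuoter(RateQuoter delegate) { this.delegate = delegate; }
    public double quoteFor(String customerId) {
        double quote = delegate.quoteFor(customerId);
        System.out.println("AUDIT quoteFor(" + customerId + ") -> " + quote);
        return quote;
    }
}

public class QuoterDemo {
    public static void main(String[] args) {
        RateQuoter quoter =
            new AuditingRateQuoter(new CachingRateQuoter(new SimpleRateQuoter()));
        System.out.println(quoter.quoteFor("VIP-123")); // audited, cached, prints 0.02
    }
}
```

An AOP framework automates the same shape: the wrapper is generated rather than hand-written, but the business interface never learns about caching or auditing.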

This last one is the tactic taken, often accidentally, by most “high performance” startups, and it is likely the least feasible for more mature organizations. The layered approach owes its performance to the attitude summed up in the acronym YAGNI — You Ain’t Gonna Need It [some feature] — and the reason it’ll work poorly in a more mature org is an opposing acronym, YWFW — Yeah We Freakin’ Will. Your sales staff, your operations staff, the gals and guys on the frontline phones, they know: we will need auditability, some day, and we’ll need to manage the business process and force changes to the state machine, and so on. We need all of those things, and if we need them on the day of release we don’t want some smarmy designer apologizing for the tools we asked for in the first-dang-place.

So okay. Maybe you are going to need it. The same rules apply. The attributes that are most essential: bake those into the architecture. The next most essential: build those robustly into your toolset via good APIs injected into, or enhancing, your business code using a convention-over-configuration approach. And as a last resort, build them into phases of development that come AFTER you’ve successfully solved the use case.
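
Here is a hedged illustration of the convention-over-configuration flavor, using nothing but the JDK’s dynamic proxies and an invented convention: any method whose name starts with “update” gets audit logging, and nobody configures anything per method.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;

// Hypothetical business interface; the names are invented for illustration.
interface AccountService {
    void updateEmail(String accountId, String email);
    String findEmail(String accountId);
}

class PlainAccountService implements AccountService {
    public void updateEmail(String accountId, String email) {
        // Pretend this writes to a database.
    }
    public String findEmail(String accountId) {
        return "someone@example.com"; // toy value
    }
}

public class ConventionDemo {
    // Convention: methods named update* are audited; everything else passes through.
    static <T> T audited(Class<T> type, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().startsWith("update")) {
                System.out.println("AUDIT " + method.getName() + " " + Arrays.toString(args));
            }
            return method.invoke(target, args);
        };
        return type.cast(Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[] {type}, handler));
    }

    public static void main(String[] args) {
        AccountService service = audited(AccountService.class, new PlainAccountService());
        service.updateEmail("42", "new@example.com"); // logged by the naming convention
        service.findEmail("42");                      // passes through silently
    }
}
```

In a real codebase the convention lives in the framework layer rather than a demo class, but the shape is the same: the quality attribute rides in on a naming convention instead of on every developer’s to-do list.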

In theory, this would be the fastest way to develop software, but don’t worry. You’ll still find ways to muck it up. Resisting good-but-wasteful ideas and identifying cross-cutting concerns are both really hard during development, and it’s not like you’ve a surplus of architects to help out. But do consider identifying these as a primary output of your retrospectives or, god forbid, post-mortems.