The tools are sharper, but software development remains rife with misconceptions about productivity, code efficiency, offshoring, and more
Even among people as logical and rational as software developers, you
should never underestimate the power of myth. Some programmers will
believe what they choose to believe against all better judgment.
The classic example is the popular fallacy that you can speed up a
software project by adding more developers. Frederick P. Brooks debunked
this theory in 1975, in his now-seminal book of essays, "The Mythical
Man-Month."
Brooks' central premise was that adding more developers to a late
software project won't make it go faster. On the contrary, they'll delay
it further. If this is true, he argued, much of the other conventional
wisdom about software project management was actually wrong.
Some of Brooks' examples seem obsolete today, but his premise is still sound.
He makes his point cogently and convincingly. Unfortunately, too few
developers seem to have taken it to heart. More than 35 years later,
mythical thinking still abounds among programmers. We keep making the same mistakes.
The real shame is that, in many cases, our elders pointed out our errors years ago, if only we would pay attention. Here are just a few examples of modern-day programming myths, many of which are actually new takes on age-old fallacies.
Programming myth No. 1: Offshoring produces software faster and cheaper
These days, no one in their right mind thinks of launching a major software project without an offshoring strategy.
All of the big software vendors do it. Silicon Valley venture
capitalists insist on it. It's a no-brainer -- or so the service
providers would have you believe.
It sounds logical. By off-loading coding work to developing economies, software firms can hire more programmers for less. That means they can finish their projects in less time and with smaller budgets.
But hold on! This is a classic example of the Mythical Man-Month fallacy. We know that throwing more bodies at a software project won't help it ship sooner or cost less -- quite the opposite. Going overseas only makes matters worse.
According to Brooks, "Adding people to a software project increases the total effort necessary in three ways: the work and disruption of repartitioning itself, training new people, and added intercommunication."
Let's assume that the effort required for repartitioning and training is the same for outsourced projects as for homegrown ones (a dangerous assumption). The communication effort required for outsourcing is much higher. Language, culture, and time-zone differences add overhead. Worse, offshore development teams are often prone to high turnover rates, so communication rarely improves over time.
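Brooks even put a number on that last item: if every member of a team must coordinate with every other, the number of communication channels grows as n(n-1)/2 -- faster than the headcount itself. A back-of-the-envelope sketch (team sizes invented for illustration):

    def communication_channels(team_size: int) -> int:
        """Pairwise communication channels among n people: n * (n - 1) / 2."""
        return team_size * (team_size - 1) // 2

    # Doubling headcount far more than doubles the coordination burden.
    for n in (4, 8, 16, 32):
        print(f"{n:>2} developers -> {communication_channels(n):>3} channels")
    # 4 -> 6, 8 -> 28, 16 -> 120, 32 -> 496

Now imagine each of those channels carrying a language barrier, a cultural gap, and a 10-hour time difference.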
Little wonder there's no shortage of offshoring horror stories.
Outsourcers who promise more than they deliver are a recurring theme.
When deadlines slip and clients are forced to finish the work in-house,
any putative cost savings disappear.
Offshoring isn't magic.
In fact, it's hard to get right. If an outsourcer promises to solve all
of your problems for nothing, maintain a healthy skepticism. That free
lunch could end up costing more than you bargained for.
Programming myth No. 2: Good coders work long hours
We all know the stereotype. In popular culture, programmers stay up late into the night, coding. Pizza boxes and energy-drink cans litter their desks. They work weekends; indeed, they seldom go home.
There's some truth to this caricature. In a recent analysis of National Health Interview Survey data, programming tied for the fifth most sleep-deprived profession. Long hours are particularly endemic in the video game industry, where developers must endure "crunch time" as deadlines approach.
But it doesn't have to be that way. There's plenty of evidence to
suggest that long hours don't increase productivity. In fact, crunch
time may hurt more than it helps.
There's nothing wrong with
putting in extra effort. Fred Brooks praises "running faster than
necessary, moving sooner than necessary, trying harder than necessary."
But he also warns against confusing effort with progress.
More often than not, Brooks says, software projects run late due to chronic schedule slippage, not catastrophes. Maybe the initial estimates were unrealistic. Maybe the project milestones were fuzzy and poorly defined. Or maybe they changed midstream when the client added requirements or requested new features.
Either way, the result is the same. As the
little delays add up, programmers are forced into crisis mode, but
their extra efforts are spent chasing goals that can no longer be
reached. As the project schedule slips further, so does morale.
Some programmers might be content to work until they drop, but most have families, friends, and personal lives, like everyone else. They'd be happy to leave the office when everyone else does. So instead of praising coders for working long hours, concentrate on figuring out why they have to -- and how it can stop. They'll appreciate it far more than free pizza, guaranteed.
Programming myth No. 3: Great developers are 10 times more productive
Good programmers are hard to find, but great programmers are the stuff of legend -- or at least urban legend.
If you believe the tales, somewhere out there are hackers so skilled
that they can code rings around the rest of us. They've been dubbed
"10x developers" -- because they're allegedly an order of magnitude more
productive than your average programmer.
Naturally, recruiters and hiring managers would kill to find these
fabled demigods of code. Yet for the most part, they remain as elusive
as Bigfoot. In fact, they probably don't exist.
Unfortunately, the blame for this myth falls on Fred Brooks himself. Well, almost -- he's been misquoted. What Brooks actually says is that, in one study, the very best programmers were 10 times more productive than the very worst programmers, not the average ones.
Most developers fall somewhere in the middle. If you really see a 10-fold productivity differential in your own staff, chances are you've made some very poor hiring choices in the past (along with some very good ones).
What's more, the study Brooks cites was from 1966. Modern software project managers know better than to place too much faith in developer productivity metrics,
which are seldom reliable. For one thing, code output doesn't tell the
whole story. Brooks himself admits that even the best programmers spend
only about 50 percent of the workweek actually coding and debugging.
This doesn't mean you shouldn't try to hire the best developers you can. But waiting for superhuman coders to come along is a lousy staffing strategy. Instead of obsessing over 10x developers, focus on building 10x teams. You'll have a much larger talent pool to choose from, which means you'll fill your vacancies faster and ship your project that much sooner.
Programming myth No. 4: Cutting-edge tools produce better results
Software is a technology business, so it's tempting to believe technology can solve all of its problems. Wouldn't it be nice if a new programming language, framework, or development environment could slash costs, reduce time to market, and improve code quality, all at once? Don't hold your breath.
Plenty of companies have tried using unorthodox languages to outflank their competitors. Yammer, a social network, wrote its first version in Scala. Twitter began life as a Ruby on Rails application. Reddit and Yahoo Store were both built with Lisp.
Unfortunately, most such experiments are short-lived. Yammer switched to Java
when Scala couldn't meet its needs. Twitter switched from Ruby to Scala
before also settling on Java. Reddit rewrote its code in Python. Yahoo Store migrated to C++ and Perl.
This isn't to say your choice of tools is irrelevant. Particularly in server environments, where scalability is as important as raw performance, platforms matter. But it's telling that the aforementioned companies all switched from trendy languages to more mainstream ones.
Fred Brooks foresaw this decades ago. In his essay "No Silver Bullet," he writes, "There is no single development, in either technology or management technique, that promises even one order of magnitude improvement in productivity, in reliability, in simplicity."
For example, when the U.S. Department of Defense developed the Ada language in the 1970s, its goal was to revolutionize programming -- no such luck. "[Ada] is, after all, just another high-level language," Brooks wrote in 1986. Today it's a niche tool at best.
Of course, this won't stop anyone from inventing new programming languages,
and that's fine. Just don't fool yourself. When building quality
software is your goal, agility, flexibility, ingenuity, and skill trump
technology every time. But choosing mature tools doesn't hurt.
Programming myth No. 5: The more eyes on the code, the fewer bugs
Open source developers have a maxim: "Given enough eyeballs, all bugs are shallow." It's sometimes called Linus' Law, but it was really coined by Eric S. Raymond, one of the founding thinkers of the open source movement.
"Eyeballs" refers to developers looking at source code.
"Shallow" means the bugs are easy to spot and fix. The idea is that open
source has a natural advantage over proprietary software because anyone
can review the code, find defects, and correct them if need be.
Unfortunately, that's wishful thinking. Just because bugs can be found doesn't mean they will
be. Most open source projects today have far more users than
contributors. Many users aren't reviewing the source code at all, which
means the number of eyeballs for most projects is exaggerated.
More importantly, finding bugs isn't the same as fixing them. Anyone can report a defect; writing a correct patch is another matter entirely. Even if we assume that every pair of eyeballs that spots a bug is capable of fixing it, we end up with yet another variation on Brooks' Mythical Man-Month problem.
One 2009 study found that code files that had been patched by many separate developers contained more bugs than those patched by small, concentrated teams. By studying these "unfocused contributions," the researchers inferred an opposing principle to Linus' Law: "Too many cooks spoil the broth."
Brooks was well aware of this phenomenon.
"The fundamental problem with program maintenance," he wrote, "is that
fixing a defect has a substantial (20 to 50 percent) chance of
introducing another." Running regression tests to spot these new defects
can become a significant constraint on the entire development process
-- and the more unfocused fixes, the worse it gets. It's enough to make
you bug-eyed.
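Concretely, a regression test is just an automated check that pins down behavior a past fix established, so the next "unfocused" patch can't quietly undo it. A minimal sketch using Python's built-in unittest (the function and the old bug are invented for illustration):

    import unittest

    def parse_port(value: str) -> int:
        """Parse a TCP port; an earlier version crashed on surrounding spaces."""
        return int(value.strip())

    class PortParsingRegressionTest(unittest.TestCase):
        def test_whitespace_bug_stays_fixed(self):
            # Pins the old fix in place: if a later patch drops the strip(),
            # this test fails before the defect reaches users.
            self.assertEqual(parse_port(" 8080 "), 8080)

    if __name__ == "__main__":
        unittest.main()

Each test is cheap on its own; it's the ever-growing suite, rerun after every fix, that becomes the constraint Brooks describes.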
Programming myth No. 6: Great programmers write the fastest code
A professional racing team's job is to get its car to the finish line before all the others. The machine itself is important, but it's the hard, painstaking work of the driver and the mechanics that makes all the difference. You might think that's true of computer code, too.
Unfortunately, hand-optimization isn't always the best way to get the
most performance out of your algorithms. In fact, today it seldom is.
One problem is that programmers' assumptions about how their own code actually works are often wrong. High-level languages shield programmers from the underlying hardware by design. As a result, coders may try to optimize in ways that are useless or even harmful.
Take the XOR swap algorithm, which uses bitwise operations to swap the values of two variables without a temporary. Once, it was an efficient hack. But modern CPUs boost performance by executing multiple instructions in parallel, using pipelines, and XOR swap defeats them: each of its three operations depends on the result of the one before, so they can't overlap. Try to "optimize" your code with XOR swap today and it will likely run slower than a plain swap, which the hardware handles almost for free.
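For reference, here is the trick next to the ordinary swap -- a sketch in Python for readability (the pipeline argument properly applies to compiled code, but the dependency chain is visible in any language):

    # XOR swap: no temporary variable, but each step needs the previous result,
    # so the three operations form a chain the CPU cannot overlap.
    a, b = 6, 9
    a ^= b
    b ^= a
    a ^= b
    assert (a, b) == (9, 6)

    # Plain swap: clearer, and on modern hardware at least as fast.
    x, y = 6, 9
    x, y = y, x
    assert (x, y) == (9, 6)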
Multicore CPUs complicate
matters further. To take advantage of them, you need to write
multithreaded code. Unfortunately, parallel processing is hard to do
right. Optimizations that speed up one thread can inadvertently throttle
the others. The more threads, the harder the program is to optimize.
Even then, just because a routine can be optimized doesn't mean it should be. Most programs spend 90 percent of their running time in just 10 percent of their code.
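That's why measurement comes before optimization: find the hot 10 percent first. A minimal sketch using Python's built-in cProfile (the functions and workload are invented for illustration):

    import cProfile

    def hot_loop(n):
        """The '10 percent' of the code that eats nearly all the running time."""
        total = 0
        for i in range(n):
            total += i * i
        return total

    def everything_else():
        """The other '90 percent': plenty of code, hardly any runtime."""
        return sorted(str(i) for i in range(10_000))

    def main():
        everything_else()
        hot_loop(2_000_000)

    if __name__ == "__main__":
        # The report attributes almost all cumulative time to hot_loop(),
        # which is where any optimization effort actually belongs.
        cProfile.run("main()")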
In many cases, you're better off simply trusting your tools. As early as 1975, Fred Brooks observed that some compilers produced output that handwritten code couldn't beat. That's even truer today, so don't waste time on unneeded hand-optimizations. In your race to improve the efficiency of your code, remember that developer efficiency is often just as important.
Programming myth No. 7: Good code is "simple" or "elegant"
Like most engineers, programmers like to talk about finding "elegant" or "simple" solutions to problems. The trouble is, this turns out to be a poor way to judge code.
For one thing, what do these
terms really mean? Is a simple solution the same as an elegant one? Is
an elegant solution one that is computationally efficient, or is it one
that uses the fewest lines of code?
Spend too long searching for either, and you risk ending up with that
bane of good programming: the clever solution. It's so clever that the
other members of the team have to sit and puzzle over it like a
crossword before they understand how it works. Even then, they dare not
touch it, ever, for fear it might fly apart.
In many cases, the solution is too clever for its own good. In their 1974 book, "The Elements of Programming Style," Brian
Kernighan and P.J. Plauger wrote, "Everyone knows that debugging is
twice as hard as writing a program in the first place. So if you're as
clever as you can be when you write it, how will you ever debug it?" For
that matter, how will anyone else?
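A small, invented illustration (Python, for brevity): both snippets flatten a list of lists, but only one can be read, stepped through in a debugger, and extended without head-scratching:

    rows = [[1, 2], [3], [4, 5]]

    # "Clever": abuses sum() with an empty-list start value. It works, but the
    # reader must stop to figure out why -- and it copies the list repeatedly.
    flat_clever = sum(rows, [])

    # Plain: says exactly what it does, and is easy to debug or extend.
    flat_plain = []
    for row in rows:
        flat_plain.extend(row)

    assert flat_clever == flat_plain == [1, 2, 3, 4, 5]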
In a sense, concentrating on
finding the most "elegant" solution to a programming problem is another
kind of premature optimization. Solving the problem should be the primary goal.
So be wary of programmers who seem more interested in feathering their own caps than in writing code that's easy to read, maintain, and debug. Good code might not be that simple. Good code might not be that elegant. The best code works, works well, and is bug-free. Why ask for more?
Source: http://www.infoworld.com