Recently, the site was renamed to "Worse Than Failure", because the site owner had trouble explaining to his mom that the name of his site, when spoken in full, contained the word 'fuck'.
Obviously, various 'conservatives' were not pleased with this, and a storm of criticism rained down. People also asked: "What could be worse than failure?"
Today, Alex, the site owner, answers that question.
What could be worse than failure is that we IT professionals regard anything that reaches production as a success. But what happens with that system two, three years in the future? Will it scale, or will it crash and burn? Just how much of the code that is produced really lasts fifteen years? By then, the consultants who did the initial implementation are long gone, and the poor support engineers are left to struggle along, battling against a multi-headed monster that no-one can tame.
But all this time, the consultants who first implemented the system will think of it as a success story. They will apply the 'lessons learned' from that project in the next one -- creating a feedback loop of ever-increasing fucked-upness.
Obviously, that is worse than failure.
When I studied computer science, there was talk of 'the software crisis': more software systems were needed than could possibly be built. Every day, the gap between demand and supply widened.
New technologies like J2EE and .Net have not solved the software crisis. Instead, with a perpetual cycle of planned obsolescence, every system has to be rebuilt from scratch every few years. Perhaps technology vendors are partially to blame: every year, there is the Next Big Thing That Will Solve All Problems, and surely it is a good idea to code your next Killer App with that technology? But three years on, when a particularly large piece of maintenance has to be done on that system, the Next Next Big Thing will have come along, and there will be pressure to conform to that.
Not all of that pressure will come from Pointy-Haired Bosses who think they can commoditise their software development cycle by using a certain framework (hint: you cannot); some of it will come from developers who have been trained in that particular framework. As frameworks continue to grow ever more complex, developers have to specialise -- it is not feasible to be both a top-notch J2EE developer and a top-notch .Net developer.
But because of the software crisis, most organisations do not have the budget to completely rewrite all of their applications -- they need more applications, so naturally there is less budget to maintain the existing ones. Especially the applications that do not touch the core business of the organisation.
This means that the life-span of a software product is much longer than the initial programmers would have expected. This is also why there is still a lot of COBOL code running in production environments -- it's simply too costly to rebuild.
The maintenance people have to live with the decisions made by the initial developers, leading to completely unwieldy behemoths that spiral out of control.
And yet we don't learn anything from that. I certainly don't think about what the applications I code today will be doing in three years. I might have contributed to completely fucked-up systems -- and with my (deserved) reputation for "creative solutions", I undoubtedly have done so.