OSNews is accompanied by the by-line “Exploring the Future of Computing”. In this series I’ve decided to do exactly that: to go beyond the daily stream of the latest updates and rumours and cast my eyes at the future. What will happen to the software, hardware, companies and technologies involved, and how will they be developed? I for one think there are big changes to come, some for the better, some for the worse.
Predicting the future is not an easy task; if it were, life would be a lot easier.
Sometimes people get it right, more often wrong. Sometimes predictions become self-fulfilling prophecies: someone writes about a future invention, then someone else reads it and decides to build one…
I don’t know if what I write is going to happen, but I suspect much of it will in one form or another. I’ll tell you what I think is going to happen and why; only Father Time can tell us if I’m right or wrong.
Part 1: The Technology industry catches up with the rest of the world
The transfer of jobs overseas in the technology industry is the continuation of a process started in manufacturing many, many years ago. Organised workers and unionisation are next, a process started well over a century ago in other industries.
One way to predict the future is to look at the past. This may sound daft in the technology industry but perhaps surprisingly it is just as true here as anywhere else. The processes that will happen in the technology industry will mirror the changes in other industries as they too have had to adapt.
Liability and Regulation become the norm
The technology industry is still young: anyone can start a company and start selling software. They don’t, at this point, have to worry about officially sanctioned certifications or liability, but not for long; they’re both in the post, and in some cases they’ve already arrived [1]. With the increased complexity of software and our ever-increasing dependence on it, regulation of the software industry can’t be too far away. In some countries it is illegal to call yourself an Architect or Engineer unless you have been authorised to do so; this does not really apply in the software industry at the moment, but it will.
Some fear the idea of liability because it will mean the end of the programmer’s freedom to create software. No, it does not mean that; it means that for major projects you will need to be qualified to participate in the most important aspects. Programmers will become like construction workers: as long as you can churn out working code that’s fine, but the Architecture and Engineering within the system will be done by highly qualified people, and if it goes wrong they will be liable for the failure.
When errors hit critical infrastructure and lives are lost, it will all begin to change. The technology industry will mature and turn into just another infrastructure industry. I for one think software would work a lot better if those creating it were liable when it failed; programmers would be a lot more careful if they knew a huge fine was waiting if they got it wrong.
In the construction industry the Architect and Engineers are liable if something goes wrong; as a result they have to be very careful in their work and need to be certified competent in their profession before they practise. As a result you don’t see many buildings falling out of the sky, and if you do it’s major news [2]. Despite wind, rain, sun and snow – the most powerful forces in nature – there are buildings still standing after thousands of years.
The Problem with Software
In order to create the more reliable software that’s going to be required, the way software is developed is going to have to change. Programmers tend to use a bottom-up approach: each is responsible for their own little part, and there isn’t much, if anything, in the way of an overall design. You don’t know at the beginning what the end result will be.
Could you imagine constructing a building like this? It’d be a complete disaster! [3] (see pages 5-6). Yet this is exactly how software is built; is it any wonder that so much of it fails? Fails? Yes: there is a problem known as “The Software Crisis”, which is the fact that 88% of business software is late, works but not to specification, or just doesn’t work properly. In an industry where the practitioners pride themselves on their intelligence, how come we get it wrong nine times out of ten?
There are many reasons for this. Bad software development models are just one [4]; another is that business people and technical people don’t understand one another: there is a miscommunication of requirements, so what one wants and the other delivers are two different things. It is the attempt to fix this that causes many problems; the fix is often applied to software never designed to change, late in the process, so there isn’t sufficient time to make the necessary changes in a good way.
There is also an unfortunate tendency in the technology industry for using young workers willing to work long hours, this has a double negative effect:
Young programmers are inexperienced, and long hours mean programmers get tired and make mistakes; mistakes take many times longer to fix than to make, so all those extra hours go to waste. I’d bet a team of older developers working 40-hour weeks will produce better code, faster, than a bunch of recent graduates working 60 (or more) hours.
An Opportunity for Open Source: No
Some may consider that Open Source is the answer to this, but that’s because Open Source gets its reputation from a relatively small number of successful projects, all of which have a large number of contributors and active members. For the most part these are done as a hobby, for the sheer love of it. Unfortunately software produced by companies is done for rather different reasons, and love isn’t one of them. Opening your source code will not automatically guarantee your software will get developed at all, never mind on time. It does however guarantee potential attackers can see if there are holes in your software.
Programmers in business are under tight schedules to deliver, so there is no time for love; time-consuming optimal solutions are forgone in favour of more quickly implemented approaches. Open Source teams may have found a solution to these problems, but without the love this technique won’t help business.
The Real Problem
The biggest problem is that software is complex and as it grows it becomes exponentially more complex, no single developer is going to see the whole picture. Even if they could the skills required to understand the big picture are different from those required at the code level. Coding requires a good memory of techniques and the ability to use the right techniques to solve the right problems, creativity isn’t really a requirement.
The big picture, the Architecture, is a different area altogether: it requires the ability to solve problems which have no ready answer; it requires creativity, the ability to design. Software developers are trying to solve this problem by applying the techniques of software development. Software development can be taught: you have this problem, you use this solution. Design is a skill, and as such it is a great deal more difficult to teach.
Software Architecture will become popular, then a legal requirement
The original idea behind Software Architecture is to use a top down approach – to produce a design first then start developing. Unfortunately this method (referred to as the waterfall method) doesn’t work very well for systems which change and large systems tend to change.
One method which does work is a formalised version of the bottom-up approach called Extreme Programming [5]. This method advocates the use of small iterative development stages and constant refactoring (the cleaning up of software). In Extreme Programming there is no large design phase at the start; instead the refactoring technique allows a design to evolve into existence. You don’t know at the beginning what the end result will be. While this may sound counter-intuitive to non-programmers, this method works very well. The changes which mess up so many systems are caught and handled by the refactoring.
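Refactoring, in this sense, means restructuring working code without changing its behaviour so that the design stays clean as it evolves. A minimal sketch in Python (the invoice example and all the names in it are invented purely for illustration):

```python
# Before refactoring: pricing policy and arithmetic are tangled
# together in one function.
def invoice_total(items):
    total = 0
    for price, quantity in items:
        total += price * quantity
    if total > 100:
        total = total * 0.9  # bulk discount
    return total

# After refactoring: identical behaviour, but the discount rule is
# extracted, so the design (pricing policy vs. arithmetic) is visible
# and each part can change independently.
def line_total(price, quantity):
    return price * quantity

def apply_discount(total, threshold=100, rate=0.1):
    return total * (1 - rate) if total > threshold else total

def invoice_total_refactored(items):
    return apply_discount(sum(line_total(p, q) for p, q in items))
```

The key discipline is that each small step preserves behaviour (ideally verified by tests), which is how a design can safely “evolve into existence”.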
Modern Software Architecture methods have a large initial design phase, but the design is continued alongside an iterative development process. Actual code tends to move away from the initial design, so the design is tracked as it changes: when a new requirement is added or something needs to be changed, the Architect can look at the overall design and modify it before coding of that change begins. This process is similar to the refactoring used in Extreme Programming, but is done at the Architectural level first and later at the code level.
The benefit of this approach is that someone always knows what they are actually building, what fits in where and affects what; as such it is much easier to make changes and keep the software maintained. Some Architects already use an iterative Architectural technique with Extreme Programming as the development model, with success.
Managers and developers may currently see Architecture as an extraneous phase but when businesses realise the productivity gains and cost savings it can deliver it will come into focus.
If a change at the design phase takes 1 day, it will take 10 days if made at the development stage and 100 days if made in the maintenance phase. Given that software spends most of its life in the maintenance phase, Software Architecture makes a lot of commercial sense.
When liability becomes a major issue, the reliability and accountability Architecture can deliver will be seen as the key to producing better software. Software Architecture will become a “real” job, complete with real official certifications and liability. I don’t mean company-based certifications as is common in the technology industry today; these will be industry-wide with legal standing. It’ll be a difficult, risky job, but we can learn from the past: the exact same thing happened in the building industry many years ago.
Some people have seen this process coming and formed their own organisation [6]. There are many books to be read for the aspiring Software Architect. Many programmers already have the requisite skills but probably don’t know it, Architecture is a different area and you’ll find out if it’s the area for you by reading some of these books which cover the different aspects [7] [8].
Unlike the building industry, software development can be closely monitored, so I expect we will see real-time tools for visualising and monitoring architecture. I also expect that development models will evolve. Some models are huge and require a great deal of complex and expensive tools; others are too simple and end up in chaos. Consequently I expect we will eventually find a happy medium where an extensive design phase is followed by a process similar to Extreme Programming. I also expect testing will play a much greater role in development.
Wither the Penguin?
Some have suggested that liability will kill off Open Source. Is this true? Will the penguins shuffle off back to the South Pole?
I do not expect this will be a problem for Open Source development, it could in fact have quite the opposite effect and make it more popular than ever. If you are liable for your product you will be very keen on using tried and trusted technology. I expect the model used by Java and Perl programmers will become more popular, they use a great deal of Open Source modules in their systems.
For operating systems, IBM, HP and friends can all control and make sure their own in-house Unix systems are ready for liability; Unix isn’t dead yet by any means, and this may keep it alive for a long time yet. I don’t expect the BSDs will have too many problems: they are generally created by smaller teams, and FreeBSD, OpenBSD and NetBSD all focus on stability anyway. OpenBSD focuses on security, but stability is a prerequisite for that. NetBSD is more research-orientated, but its ultra-portability means it too has to be a highly stable system.
Liability may prove difficult however for other systems:
Small modules, even of unknown origin, can be checked, but it’s a different story for large monolithic systems such as Linux. Would you trust your life to Linux? Given its distributed development model, who is going to accept liability for it? This won’t kill Linux, but it will lock it out of some applications until these issues are solved.
Microsoft will have to accept liability for Windows; when they decide to do another security binge they will have to do more than a marketing exercise. Microsoft have built their empire by building software which works most of the time; I expect they will have a hard time adapting to developing software which works all of the time.
Software Patents and Other Issues
Patents will not cause the death of the industry some predict; this is typical computer industry over-hyping. Patents are a double-edged sword for the computer industry. They do allow big companies to stifle competition, but on the other hand they also allow chancers (people who take a chance) to sue those very same big companies for very large sums of money – just ask Microsoft. I expect some companies will become rather less enthusiastic about patents when they’ve had to pay out a few billion dollars.
I think this may lead to a change in the way patents are issued and administered. I also expect (or rather, hope) that patents will eventually be issued for shorter lengths of time, as befits a rapidly developing industry; this will benefit everyone from the big companies to open source developers, and it will also spur innovation since you won’t be able to live off past glories for long.
The DMCA and similar laws will be seen for exactly what they are: a complete overreaction.
They will eventually be toned down to sensible levels. This may happen through courtroom precedents, though, rather than in government. In Europe it’ll happen the same way it always has with stupid laws: we’ll just ignore them. Some countries will probably not implement them in the way intended and thus take the bite out of them.
I’d like to see government mandated data exchange formats, i.e. standard office use file formats.
This would do more against Microsoft’s desktop monopoly in one night than Linux has done in ten years. Unfortunately I don’t see any move towards this whatsoever.
I fully expect SCO to lose their case and go bankrupt.
The management will not care because they got rich anyway.
This will raise questions and investors will end up suing SCO; this could get messy, but could potentially lead to new company regulations in the US.
Germany already appears to have these regulations, so they won’t care either.
Banging the DRM
DRM is not going to work. In some cases, such as digital TV, it’s possible to control both the source and destination for the media, yet even these sophisticated DRM systems are cracked. The only reason the TV companies can keep making money is because they still control the set-top box and can rapidly update the decryption keys.
DRM for mass market media is a different matter altogether, the companies do not control the media players and thus cannot update them. Once one of these DRM systems is broken that’s it – it’s broken forever. Once someone implements a working anonymous file sharing network it’ll all get shared.
Old-style copy protection mechanisms worked because even when they were broken there was no easy way to get copies to everyone. These days we have the internet, so it doesn’t matter how difficult the DRM system is to break: it only needs to be broken once and that’s it, everyone can copy the files.
The future is flat rate media [9]: pay $10 a month and listen to whatever you want, whenever you want; royalties are distributed according to what’s being listened to. There will be no need for DRM as there’s no point trying to cheat. You could try to download everything and stop paying, but it’ll be more expensive to store it all than to pay the $10; more importantly, it’ll be easier to pay the $10, so even if a few do cheat, 99% of people won’t be bothered.
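The proportional royalty split described above is simple to sketch. Here is one possible implementation in Python (the pool size, track names and play counts are all invented for illustration):

```python
def distribute_royalties(pool, play_counts):
    """Split a subscription revenue pool across tracks in
    proportion to how often each track was listened to."""
    total_plays = sum(play_counts.values())
    if total_plays == 0:
        return {track: 0.0 for track in play_counts}
    return {track: pool * plays / total_plays
            for track, plays in play_counts.items()}

# One subscriber's $10 for the month, split by their listening habits.
payouts = distribute_royalties(10.0, {"track_a": 6, "track_b": 3, "track_c": 1})
```

In practice a real scheme would aggregate plays across all subscribers before splitting, but the principle is the same: revenue follows listening, so there is nothing to gain by cracking the files.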
—
That’s part one. Next time I’ll cover the radical changes I expect to happen in the hardware domain, some of which will leave the industry reeling. I’ll also cover how Microsoft will attempt to reverse the trends and try to regain its monopoly in a way nobody will expect.
Part 1 References
[1] Some organisations already legally require an Architecture:
“Enterprise Architecture by Legislation”
[2] When a building falls it’s news:
“Many die as Turkey flats collapse”
http://news.bbc.co.uk/2/hi/europe/3453131.stm
[3] The Software Architect’s Profession – Marc Sewell and Laura Sewell.
A philosophical look at Software Architecture and why we need it (scroll down page):
http://www.wwisa.org/wwisamain/books.htm
[4] Article on software development myths:
“‘The Demise of the Waterfall Model Is Imminent’ and Other Urban Myths”
[5] Extreme Programming:
http://www.extremeprogramming.org/
[6] World Wide Institute of Software Architects:
http://www.wwisa.org
[7] Software Architect – Nigel Leeming.
What do they do? Read this on-line book to find out:
http://www.softwarearchitect.biz/arch.htm
http://www.softwarearchitect.biz/frames.htm (Older browsers)
[8] Other books on modern Software Architecture:
Software Architect Bootcamp – Raphael C. Malveau and Thomas J. Mowbray.
Want to be an Architect? Get your hair cut short and enlist for basic training here (scroll down page):
http://www.wwisa.org/wwisamain/books.htm
Software Architecture: Organizational Principles and Patterns – David M. Dikel, David Kane and James R. Wilson.
http://www.vraps.com/index.jsp
[9] http://www.theregister.co.uk/content/6/35260.html
Copyright (c) Nicholas Blachford, February-March 2004
Disclaimer:
This series is about the future and as such is nothing more than informed speculation on my part. I suggest future possibilities and actions which companies may take but this does not mean that they will take them or are even considering them.
because people die if the buildings fail. Nobody dies or cares if your peecee crashes. Save your work.
When systems do fail it’s often not the software vendor at fault. For example, when the USS Yorktown was dead in the water because NT failed, reasonable people didn’t blame Microsoft. It should be obvious to anyone that Microsoft operating systems are not suitable for running industrial-grade applications.
As far as software patents go, I view creating and distributing software as speech, and so patents shouldn’t be able to stop freedom of speech. Sometimes the courts agree and sometimes they don’t. American software regulation is seriously a problem for me. I have considered moving to some place with more freedom. We’ll see how the next couple of years pan out.
PS: I renewed my EFF membership last night and so should you.
Top-down software design by committee doesn’t work. Go read Linus’ comments on how software evolution and design very much follows the biological world.
Code evolves.
If you were to implement TCP/IP as per the specification, you wouldn’t get a connection. Search for what Alan Cox had to say on this issue.
This article is naive and out of touch with much of the work done on software development theory from the 1960s to today. What’s worse, were any of these silly forecasts to come true, software development would stagnate in a haze of regulations.
because people die if the buildings fail. Nobody dies or cares if your peecee crashes. Save your work.
People do care when software crashes, though it might not be as obvious as buildings coming crashing down. The recent blackouts throughout North America are suspected to have been caused by software failures. It would be wrong to say nobody was affected by this.
In the future, people will care more about reliability than features. How many people would go for cars that can clock 300 mph but crash every so often ? A poor analogy I admit, but it shows that in the long term, every technology reaches a stabilization phase, after which feature upgrades are incremental, and not really necessary.
As for the article, it looks extremely balanced. I commend the author on a well written commentary.
Clearly standards are a good thing, even in software. Most OSS projects do fail using the Bazaar method of development. Linux works because it has a benevolent dictator watching over it. He doesn’t like to take credit, but clearly Linus is a major reason for the success of Linux.
The author says that the OSS development model won’t work for business software projects. Check out http://forums.asp.net. Here is a quote by an MS developer/project manager: “If you can help build such a tool that would be great.” If MS can get help building software at no charge, then I think it’s fair to say that other large companies can too, even if the submitted code is a neo-hightech job application.
My major problem with the article is that it doesn’t break any new ground. I think everybody is in agreement with most of the author’s predictions. I think he also underestimates the collaborative snowballing of ideas and the resultant software that has been enabled by the Internet. I know enough about history to know that this is a time in history that isn’t predictable.
Unions were started before globalization (or at least, globalization as we see it now). Factories getting moved overseas happened /after/ unions. Being able to move to other countries is terrible for unions, I don’t see why unions would be introduced into technological fields. Current laws make it harder for traditionally white-collar jobs to have unions. Not that they shouldn’t have more unions, sounds like a good idea. Just not the best prediction.
As usual, people commenting on open source get their impression of it from all the hobby projects on Freshmeat. It is true that numerically most projects are like this. But as Blachford said, it does work differently for the important ones. The ones that matter. What about Eclipse? Basically Eclipse shows how open source can be used as a tool for corporations to collaborate and make software for mutual benefit. The fact that open source isn’t a way for companies to get free development (if that was ever the point) doesn’t make it any less valid. Eclipse is obviously not as revolutionary as pure KDE-style open source projects (though even KDE has important corporate backing), but it is another path for future development regardless.
Also, it’s contradictory to argue that opening the source opens it to crackers but not to active developers. To say that open source poses a security risk requires some citation of examples and a demonstration of closed source software’s better security.
windows crashes, Mac freezes, linux panics with three insulting slashing lights.
When the OS is rock solid, the flash memory got burnt.
That’s all we need: more big government coming in and trying to regulate programmers. I’m sure the freaking trial lawyers will love it.
From your article, I gather that although you’ve read about Extreme Programming, you haven’t actually tried it. That’s a shame, as you would have discovered that some of your speculations about it are, at best, dubious.
Your first mistake is in thinking that XP projects end up without an overall design. That’s just wrong. When’s the time when you know the most about the project? Obviously, at the end. Why, then, do all your design up front, which is when you know the least about things? Because XP teams are continually improving the design, all of my XP projects have ended up with much cleaner, clearer architectures than my non-XP ones. If design is important, you should do it all the time, not just at the beginning.
Another is that the building of buildings has much to do with the building of software. Unlike most every other kind of engineered product, software is soft. Most of the human activity involves design; the equivalent of a building’s construction phase is what your compiler and build scripts do.
A third implication is that XP practices would lead to reduced quality. Every team I know of who has adopted XP practices has reduced their bug rates radically. The worst I’ve seen was an outfit who halfheartedly adopted 4 of the 12 practices; their bug rates dropped by 80%. Many others report projects with 1 bug per month or less:
http://www.martinfowler.com/bliki/VeryLowDefectProject.html
You go on to complain that working long hours reduces code quality, but don’t acknowledge that one of Extreme Programming’s mandates requires a sustainable pace, generally interpreted as a strict 40-hour week.
Perhaps you know more about the rest of the topics you write about, but to find such poor research certainly makes me wonder.
So this guy doesn’t like extreme programming and apparently doesn’t like a bottom-up, distributed approach. He doesn’t propose a solution, but once again his central theme of a genius architect to solve software problems seems to creep in.
Doesn’t this guy realize that people have been thinking about these issues for 50 years or so now? He falls for the omnipresent fallacy that building software should be like building a bridge. Sorry to burst your bubble man, but people have thought about this before you were born and nothing has changed. Building software will never be like building a bridge. That’s not to say that people shouldn’t use some engineering principles in software construction, but that you can’t equate designing software systems to building bridges.
His two central themes tend to be genius architect to solve all problems and legally mandate that anybody that writes code carry insurance or something. This sounds like a recipe for disaster to me.
Building software is definitely not like building a building, and that has been a large part of what’s wrong with the software development process. The author seems to be responding to the unfortunate choice of name of eXtreme Programming. I recommend more research on XP and how and why it came about.
New development processes such as XP are working to resolve the extremely flawed analogy of treating software development as a typical construction project. Even XP misses the mark in some ways, but it gets a lot closer to what works and what doesn’t. The success of Linux is based in large part on the evolutionary nature of the codebase, and its ability to adapt over time.
Ultimately, it’s the market that demands immature software. Very few projects are successful that are not first to market. The hard numbers dictate a short development cycle and a lot of overlooked bugs. Nobody is going to pay for what it would cost to develop a mission-critical MP3 player. Liabilities are already in place in truly mission-critical systems such as aircraft and medical systems.
He mentioned NetBSD as being unlike the other BSDs in not having stability as one of its project goals. Nothing could be further from the truth. See http://www.netbsd.org/Goals/ , where stability is one of the goals displayed prominently. Also note that Wasabi Systems, NetBSD’s biggest corporate benefactor, is in the embedded systems business; the embedded world values stability above all and certainly wouldn’t support NetBSD if it was *only* good for research. (Though it is a research Un*x too.) Although for a variety of reasons I’m not currently running NetBSD as my primary OS (Slackware currently fills that role for me), it was the most stable OS I’d ever used.
By the way, what if they start including this ‘downloading’ fee or whatever media fee in the price of the internet connection? I wonder… Since you pay for Windows when it comes preinstalled with the computer even if you don’t need it. Now they could as well include this in the monthly internet fee.
C’mon Nicholas… Strong Artificial Intelligence must be mentioned in an article about the future of computing! 🙂
It seems that the author doesn’t know the first thing about software development nor about development processes.
The first top-to-bottom (also called waterfall) software development approaches were created based on the experience gained in other engineering/construction processes – the textbook example being constructing a building: first all the analysis and design was done, and then at the end the coding and implementation.
But as software projects grew and business requirements started to change more rapidly, the waterfall development processes started to fail. It was realised that constructing software is not the same as constructing a building, where the waterfall approach works in most cases because the requirements just do not change as fast as they do for software (especially business software).
At that point software development processes appeared where the whole development process was divided into iterative cycles of requirements analysis, design and coding (with code refactoring), so that the development process could keep up with the changing real-world requirements (for example RUP – the Rational Unified Process).
XP (Extreme Programming) is a so-called lightweight software development process with shorter iterative cycles and an emphasis on testing, which is a “must have” for reducing bugs and increasing stability.
—–
About patents… first, patents are normally granted to the author(s) of a unique solution to a problem, not for an idea. As I understand it, the fear with software patents is that for common, simple things there aren’t that many solutions.
For example if someone would get a patent on a progress bar… how many ways are there to implement one?!
Howdy
OK, I stopped reading after the part about XP (Extreme Programming) being one of the causes of the “software crisis”.
Iterative development is ONLY ONE form of software creation process, and there are many models to choose from, each with downsides and upsides like everything, but to say this is just plain heresy!
The truth of the matter is we are only on the verge of figuring out how to create reliable software designs and representative requirements; to make some sort of brash comment like this is worthy of the sort of spin doctoring the SCO group do.
Please OSNews crew, don’t let this FUD propagate through this truly great and normally informative site.
Howdy
Just another aside: who is the author and what are his qualifications?
I can’t seem to find this info in the article; anyone know?
Try his website:
http://www.blachford.info/
When systems do fail it’s often not the software vendor at fault. For example, when the USS Yorktown was dead in the water because NT failed, reasonable people didn’t blame Microsoft. It should be obvious to anyone that Microsoft operating systems are not suitable for running industrial-grade applications.
However the Navy was obviously sold on it being suitable for running industrial-grade applications. If Microsoft had been liable for this debacle they would not have sold NT in the first place. They would have bid with a Unix instead, possibly Xenix (being from MS).
While it is provably impossible to guarantee bug-free programs (due to the Halting Problem, which the judges must be told about), if there was a possibility of getting sued then companies would have to be more realistic when bidding. On the other hand it would also require judges educated in the practicalities of IT projects and more willing to throw out spurious lawsuits. If you can sue for things as dumb as seeing Janet Jackson’s Super Bowl display (source: The Independent on Sunday, British newspaper, offline edition), software companies will get sued sometime, and one will eventually be held liable, setting the precedent.
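For what it’s worth, the Halting Problem argument mentioned above can be sketched in a few lines of Python: assume a perfect halts() checker existed, then construct a program it must get wrong (all names here are illustrative):

```python
# A sketch of the classic diagonalisation argument. Suppose,
# hypothetically, that halts(f) always correctly reported whether
# calling f() would terminate.
def make_troublemaker(halts):
    def troublemaker():
        if halts(troublemaker):
            while True:       # predicted to halt, so loop forever
                pass
        return None           # predicted to loop forever, so halt at once
    return troublemaker

# Whatever answer halts() gives about troublemaker is wrong, so no
# total, correct halts() can exist. A law can demand due care from
# vendors, but not a mathematical guarantee of bug-freedom.
```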
I agree with the arguments for the downloading tax system; DRM will not work in the long term, but I would prefer a shareware-type system.
I have no knowledge of the XP style of programming but it does seem a natural way of going about things from what I have read and more importantly stresses testing which is always a good thing.
They might start to regulate the monopoly because their software has turned out to be such a large liability. However, open source software should be given more freedom, and programmers should be able to find a nurturing and stable environment where people adopt the best ideas. Innovation will result from providing resources and time to developers. Innovation doesn’t occur on a tight schedule in a factory.
Microsoft sells products, however what it really actually sells is its vision. That’s what the people buy. The more ignorant the people, the easier it is to sell them a vision.
…or you could also say that the fewer competitors and choices available, the easier it is to sell a vision of the future.
If you want better software and superior ideas, then you have to give programmers more time and an environment of rich learning resources. You have to create an environment that nurtures development efforts. You can’t force people to develop quality software. Poor software is the result of running out of time on a tight schedule, more than anything else.
I think that, in part #1, he’s right and wrong at the same time.
He’s right that software development is taken too lightly. So what if it fails: it’s beta / just reboot / the next patch will fix it. Tons of shoddy software are released to the public, and the Open Source / free software movement has done its part in this.
But the solution can be neither certification nor top-down development.
An engineer can learn to build bridges. Every bridge he will ever build will be a variation of a basic concept.
But when you write software, most of what you do is a first-time. The repetitive parts – those things that are just variations of a basic concept – are in some libraries. In this, no kind of certification can keep you safe from making mistakes.
Add to this the ever-present customer, with unclear or even shifting requirements – the engineer does not have that problem either. He’s told to construct a bridge, he presents the blueprints, and the stuff is decided upon before the first brick is laid. Sadly enough, customers in the software industry don’t understand that this *should* be the same here – because they cannot *see* the construction, they don’t understand why they have to be precise in their specification and why it’s a bad idea to change things afterwards.
Then, the myth of top-down… that one only works if you have a very good understanding of what “down” implies. Far too often, the “top” is some CS student with a degree and a big ego, but little to no experience in “laying the bricks”. Sure he can come up with a brilliant design, UML graphs and everything… but that doesn’t mean it will work.
“Bottom-up” is about the only way to go if you don’t know your building blocks. Two factors explain why software engineers usually don’t:
1) Time is always shorter than necessary. You are never given the time to learn about the building blocks, then sit back and make your design, then build it. You’re expected to deliver next month, and if you argue, you’re just uncooperative.
2) The building blocks are not uniform, and usually come with poor documentation or none at all – either because the builders of those blocks suffered from 1) above, or lived by the proverb that “true programmers don’t comment / document”.
It’s not a matter of procedures, or of certifications. It is a matter of mindset: Only if managers and programmers alike start looking at software engineering as being the engineering discipline it is – which requires skill, time, accepting that the engineer knows what can be done and what cannot, and accepting that incorrect software must not be accepted, even if it means missing a deadline and exceeding a budget – only then can we expect our software to become as solid as our bridges.
Microsoft sells products, however what it really actually sells is its vision. That’s what the people buy. The more ignorant the people, the easier it is to sell them a vision.
And if they were about to get sued when their vision turned out to be mushroom-induced, they would stick closer to reality and fewer software projects would end up failing. Like how if you sell quack medicine you’re going to get sued into the ground, leaving only the real cures.
They might start to regulate the monopoly because their software has turned out to be such a large liability; however, open source software should be given more freedom, and programmers should be able to find a nurturing and stable environment where people adopt the best ideas. Innovation will result from providing resources and time to developers. Innovation doesn’t occur on a tight schedule in a factory.
I agree with innovation needing space to breathe, but it also needs space to make mistakes. And doing so with millions of other people’s money, for a project that is actually needed, is IMHO not the place for experimentation, unless they really know the risk (rather than having been sold some vapourware by marketing that the developers have to rush into production because of an imminent contract deadline). Do it in your own time, as part of an OSS group, as part of company R&D, or in academia. Then if it is good, innovative, and works, people will buy. Or at least they will when MS gets sued into the ground for being a viral swamp and so can no longer cut off the air supply of innovative companies while they are still babies ;-).
What bothers me the most about this article is its closed nature. If we only let specific people, somehow predetermined to be qualified, build systems, then the factors of production would be locked away from the real world and limited to a controlled, static environment. I think that this would stifle competition and lead to instability. Rather than place such strict control on a company and hold a monopoly accountable, the people should be able to decide in a competitive environment which vendor they want to purchase products from. If a vendor supplies a poor-quality product, then they can choose to purchase from some other vendor.
With regard to a company’s vision: yes, that is what they sell. MS did not sell quality products; even to this day there are major problems with the quality of their products. What they sold was their vision. That’s why people joined. They were also one of the first to target the home PC market, and they were able to sell their vision to households who wanted to participate (they wanted a computer too), and they monopolized that market.
The real stranglehold on innovation is not exactly Microsoft. It’s software patents.
As a mortal, or even a group of mortals, you have no way of altogether avoiding them. Some part of your code, no matter how genuinely home-grown, will be covered by some patent. If your product becomes successful enough to be a threat to anyone, they’ll just sue you on grounds of patent / copyright infringement.
The attorney costs alone will eat you alive.
What can you do, except sell your code base to the company suing, in exchange for them dropping the charges?
The alternative is, of course, that they’ll buy your code outright.
Provocative, thoughtful article, well-written, and, IMO, well-meaning. “Architecture”, however, and other analogies to building construction, should be applied more loosely, less literally.
Liability: It would be great if Bill Gates paid out of his own bulging pockets even a fraction of the downtime his Product has caused.
There won’t be a “flat rate” of $10 for content, IMO, ever. Media corporations will fight to the death to prevent the commoditization of content (and the delivery system for it, which is almost one and the same). Even now, after all these years of the internet, the cheapest DSL price is about $25-30/month.
If companies are going to continue to buy from Microsoft, then they shouldn’t complain about security problems. The way I see it, their own stupidity is their problem. Try buying somewhere else for a change.
People should be held accountable for their own stupidity.
OK, I stopped reading after the part about XP (Extreme Programming) being one of the causes of the “software crisis”.
I never state that XP is a cause of the software crisis. I only gave it as an example of a software development process without a strong (initial) design phase.
Your first mistake is in thinking that XP projects end up without an overall design.
The design is known at the end, I only pointed out it is not known at the beginning.
It is, if you like, a “construction method” and in itself works well.
The truth of the matter is we are only on the verge of figuring out how to create reliable software designs and representative requirements.
I agree. Indeed I go on to state I expect we will end up with a hybrid method which will have elements of both Top-Down and Bottom-Up, and parts of XP may form part of that.
Top-down software design by committee doesn’t work.
There are examples a plenty in Architecture books where top down does work – but again they are moving (or perhaps evolving..) into a hybrid approach. It’s not one or the other.
A third implication is that XP practices would lead to reduced quality
This is not meant to be an attack on XP, please don’t take it as one.
This is not as bad as it looks… I expect innovation and research to continue; liability will not affect all software or developers.
I just wanted to mention as a first thing that the author says programmers will become like “brickies” and architects will be responsible for everything. Programmers will just “churn out code.”
Aside from the fact that I think this is already true, and is precisely WHY bad code is produced (our “architects” are Managers leading hordes of India-based “brickies”), I also think the analogy doesn’t hold at all.
(Not to mention that I may not be a “professional programmer,” but I do enjoy programming, and I don’t think I’d enjoy it if it were like laying fucking bricks. I find it offensive on behalf of actual skilled programmers. You never saw a movement like open source forming out of a large group of bricklayers across the globe. Etc. Etc. Etc.)
In the last three or four decades, businesses have tried to convince themselves that a single, skilled architect can produce a functional system if they control their programmers like tools from the top. Read The Cathedral and the Bazaar for one perspective on why this didn’t work. Read The Mythical Man-Month for another. The basic problem is this: in your example, you say software architects are like building architects, and programmers are like brickies. But this is dead wrong. Programming is not brick-laying – it involves much more mental work than that. The equivalent of what you’re describing would be a single man coding an ENTIRE program, and then having 40 people transcribe it into a computer and compile the source code. That’s what a building architect does: creates the plans, including the PLANS for implementation, and only because buildings are physical entities must they then be “built.” The building architect doesn’t just say, “I want a building, maybe 50 stories high. And make sure the support against wind is good” – which is essentially what modern-day “Software Architects” say to programmers.
It has only been discovered recently (recently? I mean in the last DECADE or so, recently to you perhaps) that the way a software architect creates a good system is by being surrounded by skilled programmers and giving them more autonomy to innovate and get intimate with their code. An architect’s true goal is to participate in that development process with them, but give them perspective as to where the project is going, when critical elements have to be done, etc.
The reason open source works for wide-scale development is that each programmer is intimate with the code he writes. He is not TOLD: “you, write this function.” He “scratches his own itch,” which, in itself, produces better code.
Project leaders act as architects in the open source world by accepting or rejecting changes based on the overall goal of the project, by assembling a whole group of changes into “releases” (think the Linux kernel) and by maintaining the facilities that lead to code debugging and overall software improvement (think CVS, bugzilla).
The only point I agree with in the article is that the term “Software Architect” is being given to unqualified people currently. That is most definitely true. Will this change by fiat, by government certification or something? I doubt it.
“Software Architects” should be able to understand and even implement code, but also have the ability to manage people, resolve conflicts, and meet real-world goals (which are often out of sight for truly skilled programmers, who find themselves constantly thinking within the constraints of a machine). But the best “Software Architect” is one who is respected by the rest of the team as a peer, a peer who provides the necessary direction for the project. Not as someone who tells me where to put my bricks. Savvy?
I kinda wonder if Breadbox buying the GEOS source code will ever pay off. They say they are creating a 32-bit OS using the GEOS source code. I have heard it might take 3 years before they get it created. GEOS was a great thing, and it still has a small base of users.
A few have talked about building an OS similar to GEOS using DOS as the backbone. But I wonder if it’s worth it; plus GEOS software is patented, but object oriented is not.
But the best “Software Architect” is one who is respected by the rest of the team as a peer, a peer who provides the necessary direction for the project.
As I said, it’ll be a hybrid approach so yes, I agree.
You never saw a movement like open source forming out of a large group of bricklayers across the globe. Etc. Etc. Etc.)
Never heard of DIY?
(Not to mention that I may not be a “professional programmer,” but I do enjoy programming, and I don’t think I’d enjoy it if it were like laying fucking bricks. I find it offensive on behalf of actual skilled programmers.
Programming for fun and programming for a job are two different things, in business you do end up doing some pretty boring stuff.
You are assuming I mean repetitive manual labour and that brickies are brain-dead morons – something they might find rather offensive. There are highly dangerous and highly skilled jobs in the construction industry. I also explicitly mention the term “Engineer”, which you didn’t mention.
That said, I do admit that “brickies” isn’t the best term I could have used.
People should be held accountable for their own stupidity.
True, but this should include the stupidity of selling something that you can’t produce, particularly if others before have done equivalent things (e.g. selling NT as a stable industrial-grade OS when others have created OpenBSD, Solaris, etc.), as well as buying something that obviously will never be produced (e.g. buying NT as a stable industrial-grade OS when others have created OpenBSD, Solaris, etc.).
MS has a right to sell junk if people are willing to pay money for it. This is a better solution than instituting strict controls on businesses by governments. As soon as strict controls are initiated, they will be creating a new monopoly in some other unknown area. It should be left up to the smaller competitors to realize where the opportunities are, rather than the government trying to force something unnatural.
Competitors who do not like the influence of a monopoly should not support a monopoly by marketing their products on Windows. They should expect to be absorbed by Microsoft because the wolf will eat the lamb, especially if the lamb tries to live in the wolf’s den.
In an architecture shop, on a large project, there is a chief architect, but he is helped by a large number of other architects who resolve specific design problems down to very small details. That’s exactly what coders do. They resolve specific design problems.
The difference is that the brick layer is the compiler, not an actual person. The code is actually a design, a plan that the compiler turns into a computing reality that can be made to exist by an executing architecture.
A design flaw in a building design doesn’t mean that the building necessarily collapses. It can be just that there aren’t enough lifts to cope with the employees all arriving between 8:50 and 9:10, or that the air conditioning can’t cope with the windowed surface, turning the place into an oven in summer and a fridge in winter.
The big difference is that software components can be changed for the cost of the work. Try changing an elevator shaft in a building! Possible, but costly! In software, it’s a lot cheaper, so people take the opportunity.
But the biggest difference is that a building is ALWAYS composed of the same elements: floors, walls, windows, doors, plumbing, wiring. The sub-components are designed and provided by specialised companies.
The number of possible purposes for software components is just a lot bigger. The small cost of software design flexibility and the variety of purposes makes it just not comparable with any other physical industry except for the bigger projects.
I read the article and skimmed some of the responses. Most of the responses seem to get hung up on the comparison of software to building construction. Sigh.
Anyway, I fail to see the “predictive” aspects of this article at all.
With regard to shipping software work overseas – we already do that. Our company found it was much cheaper to pay the requisite labor costs for a 3D modeling team in India than it was to hire a sufficiently sized group in the US. The costs weren’t even close. Now we are looking at shipping other aspects of our rendering process to Asia later this year.
As for the construction analogy, it just works. We are already seeing less and less of bottom-up design in my company and more of a top-down, especially in the higher-profile projects. The days of when you developed software tools just so something *would* be available are waning. Now, most of the required niches are filled and quality is as big of an issue as development time is. For the construction analogy impaired — you could build mediocre (or in some cases, shoddy) housing rapidly when people had no shelter. That is no longer the case.
Finally, there are already a number of flat fee services available on the internet and several big companies have stated their services will be available shortly.
So…I’m still waiting for the predictions. Anyone who’s not been under a rock the last few years could have made these predictions.
“It [Open source code] does however guarantee potential attackers can see if there are holes in your software.”
Which also means anyone can fix the holes, instead of one small group. Believing closed source code is more secure because the source isn’t open is security through obscurity. Audits of closed code can still happen, but fixes can only be done by a small, niche group.
The latest MSIE patch is a bad benchmark for measuring how long it took them to fix it, and it was not even a fix; it was a dirty hack which just deletes functionality. It is, however, just one example which proves how bad this very dependence can be.
The design is known at the end, I only pointed out it is not known at the beginning.
You make this statement in a section called “The Problem With Software,” and then talk about “bad software development models,” apparently referring to the paragraph about models without strong initial design phases. If you aren’t implying that XP’s design approach is flawed and responsible for project failures, then I have no idea what you’re trying to convey here.
Personally, I’ve tried it both ways. I did OO architecture for years with a big up-front design phase. I’ve also done the XP-style emergent design approach for a few years now. There’s no question in my mind that I and my teams produce better designs using short-cycle iterative approaches like XP.
Setting most of the design in stone up front is making a bet that you will learn nothing important, think of nothing new, and have no requirements change in the time between the end of the architecture phase and the delivery of the software. This is radically different than my experience on pretty much every project. If you can find projects where the requirements never change and the design’s so simple that you can figure out every aspect months in advance, more power to you.
XP, on the other hand, is really just a set of techniques for embracing the fact that things change. If today I realize that there’s a better way to do the architecture of my app, then I’ll change it. If tomorrow a new requirement comes up that requires a fundamental change, then I’ll welcome it. Why? Because that means my end product will be closer to what the users need, and I haven’t put a lot of effort into an architecture based on speculation. What I need today, I build today. When what I built yesterday does not meet today’s requirements, I improve it until it does.
Why are all the articles on the future of computing about software development? I never see anything on e.g. system integration, which increasingly is becoming a problem as software grows more and more complex. Take a random infrastructure project, those tend to exceed development projects both in time, resources and cost just because the task of fitting all the pieces together is becoming increasingly daunting. Making the right choices on what *not* to use out of an overload of features and options is an undertaking in itself.
So my prediction for the future is that in a couple of years time people will be screaming for less complexity, and a slowdown of the pace of development will occur to let the market catch up. I think it’ll start around the time Longhorn will be released (too late as usual), and the market won’t pick up on it.
I’m sure embedded systems will be the first to be (actually, they already are) liable for any damage. Think about your car, for example: would you like the embedded computer to lock the brakes because of a glitch? I didn’t think so either… Maybe talk about it in part 2?
(For those who didn’t know: YES, modern cars have embedded computers, and they know EVERYTHING you did with the car…)
Oh, yeah, btw…doesn’t anyone proofread these articles anymore before they’re slapped on the front page?
Especially the first page isn’t very easy to read due to the funky grammar used…though I have to admit I wasn’t really focused when going through page 2 and 3, so it might be similar. Some editorial touches wouldn’t hurt.
The point that it’s so much easier to find bugs and buffer overflows in open source isn’t that valid. It’s not that much more work to find the buffer you want to overflow in the machine code. Assembler code is easy to get from binaries, and those who write cracks are often good with assembly. So closed source is not a good means of security.
1) My company’s proprietary systems will never be open source.
2) My company will never increase the budget to hire a ‘Software Architect’.
3) My company will never find off shore resources with the required business knowledge to compete with in-house development.
There have been a few times now where they’ve posted something of rather dubious quality. As someone said before, it’s difficult to see the predictive nature of the commentary. Numerous other comments have pointed out his lack of knowledge in what he’s talking about.
In addition, I’ve never heard of him, I assume not many others have, so why should his opinion be important to us and exactly how is it relevant to OSNews?
Frankly he should get a blog and dump his rubbish there.
Oh yeah, has anyone else noticed he’s got an anti-gravity engine design on his website??
The future of MS VS.Net software engineering will be sort of like this:
Click this.
Type your name.
Click that over there.
Go for a washroom break.
Click this button.
…
Let’s see: if we had built a house in 1898 according to the design-everything-at-the-beginning method, it would go something like this.
When electricity becomes available in the block in 1905, we don’t do anything, as that was not in the plan. When central heating becomes available, we do nothing, as it was not part of the original plan. When the telephone becomes available, we don’t connect to the telephone network, as that was never in the original plan. When competition from a nearby supermarket forces the small shops to close, we only look for new tenants that can use the shops in the same configuration they were originally built for.
Of course this is insane; nobody builds houses this way. Just like in XP, we redesign all the time. The author doesn’t seem to realize that software has a much shorter lifespan than buildings, but during that short lifespan the requirements change just as they do in the lifetime of a building. Many times reality changes much faster than you are able to plan or design for, meaning that the design will be outdated even before you start the building process.
These types of changes are what XP is designed to handle. To maintain stability we use tests. In my experience XP gives you higher-quality software than any “design everything before you start” method.
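A minimal sketch of that test-first safety net, in Python (the function and figures are invented for illustration, not from any real project): the test pins down the behaviour first, so the implementation can be redesigned later and re-verified in seconds.

```python
# XP-style cycle, illustrative names only: the test is written
# first and pins down the behaviour, so the code underneath can
# be redesigned at any time without fear.

def total_price(items, tax_rate):
    """First, deliberately naive implementation."""
    total = 0.0
    for price in items:
        total = total + price
    return total * (1 + tax_rate)

def test_total_price():
    assert total_price([10.0, 20.0], 0.0) == 30.0
    assert total_price([], 0.25) == 0.0
    assert total_price([100.0], 0.25) == 125.0

test_total_price()  # green

# Later redesign: simpler implementation, same pinned behaviour.
def total_price(items, tax_rate):
    return sum(items) * (1 + tax_rate)

test_total_price()  # still green, so the redesign was safe
```

The point is not the arithmetic but the cycle: requirements can shift, and the tests are what keep each redesign honest.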
For example, when the USS Yorktown was dead in the water because NT failed, reasonable people didn’t blame Microsoft.
Reasonable people didn’t blame Microsoft because it wasn’t NT that failed, it was the software running on it.
There is a fundamental misunderstanding at the root of this article and it’s that software engineering is analogous to building construction. In fact this is a fairly old view that’s been around since the advent of software engineering and it shows just how little, we as a young industry, grasp what it is that we are doing.
First off, when we talk about the software crisis, we are largely talking about business software. If you want to apply an electrical or mechanical engineering model to software, you can do it better if you restrict discourse to simple embedded systems and hardware controllers. In this circumstance you have a generally limited and focused goal for the software as it maps to a physical device that can be specified and finalized. Because you’re dealing with a tightly coupled system of software and hardware in real time, you can start to deal with very specific expected behavior and code to that spec.
When we deal with the very complicated realms of various types of business systems, the picture gets a little more shaky. Often these systems are not well specified and quite often the client requesting the system is incapable of describing the system in whole as they may not even be sure what they really want. More often than not, this notion of what the business system is and what exactly it does is a moving target and it moves faster than the pace of development. Trying to hit a moving target with a fixed schedule is where the software crisis comes from, not unskilled professionals. This is not to say that we don’t make mistakes, but this is not the primary cause of software project failures.
Another misconception, not just about software project scheduling, but scheduling in general that is implied in this article is that a project, once initiated, is under control. If you read some of the great books on the subject by McConnel and DeMarco, you come to realize that software projects are like a kayaking expedition. You know where you’re starting and you know pretty much where you want to get to, but getting there is not something you’re entirely in control of. You have to watch out for and mitigate hazards on the course. Sometimes there are factors on the river that you can’t anticipate, but the art of project management is to know what you know and know what you only think you know.
So according to this article, the industry should move towards the Waterfall model of development, a model which I don’t think is currently employed in its raw form in any professional institution. The current state of the art in project methodologies are largely risk management based. They attempt to control risk by minimizing risk factors before they hit you. You can see this in RUP, Extreme Programming, and various spiral methodologies. This is the direction in which we are heading. What the future holds is more acceptance of these practices and automated systems to aid in testing, code review, and project tracking.
On the topic of certifications and oversight, this is a possibility, but not as the author describes. Placing liability at the level of the architect or the programmer will not have the desired effect of increased quality. As I’ve mentioned, business software systems are often moving targets and they are subject to modification. While quality is the responsibility of all those working on a project, the important judgments are usually not left in the hands of the coders or the planners. Project deadlines and deployments tend to fall into the jurisdiction of business managers. Does it make sense to fine the software architect if the details of a system change and then it fails? How is this related to poor architecture? Can you blame a programmer if a manager rushes a project? What we need to do is stop thinking of software development as a concrete plan with failure lying in a defect with the team. Software engineering is a living process that grows from inception to delivery. It requires trust, communication, and cooperation from management, coders, planners, clients, and all other parties involved.
Furthermore, in the cases of certifications, I have to pose the question, just what skills do you intend to certify? A language? A modeling technique? A design pattern? You could, but as a programmer, I would not pay money to be certified in skills that become quickly obsolete. We already have accreditation boards that certify academic computer science programs. If you look at these curriculums, you will find courses so general that you’d have to wonder how programmers and architects learn any specific skills at all. The answer is they adapt. Constantly. A programmer may work on a project in Java one month and then in C++ or perl the next. They could use any number of APIs from MFC, Cocoa, CORBA, etc. Do you expect that someone would get a certification in Java? We may not even be using it in five years. Even if we are, consider the real differences between Java 1 and Java 2. Just how long would a Java 1 certification have been relevant?
On the architecture side, we could give certifications in middleware design patterns or MVC or any number of methodologies, but as anyone in the biz will tell you there is no single right solution to any problem. In fact, methodologies come and go. We could certify people in object oriented design, but we are already seeing the emergence of attribute and aspect oriented approaches. Certification in our absurdly young discipline of software engineering is not yet practical.
I think we can start to look at software in terms of corporate responsibility, but even this I think is far into the future. Your ability to sue a company for poor quality or faulty software is generally limited or nullified by software licenses. Should Microsoft be held accountable for the security flaws in Windows? Maybe. Should you be able to sue a company if a product does not conform to spec? Maybe? Certainly if you have it specified in a development contract.
The problem here is testing. This again is something that is being addressed by newer development methodologies. We are tending towards systems that can be tested before coding actually begins. Also, test first, code second methodologies like XP are creating higher quality systems in less time. The question here is what constitutes testing and quality assurance. What is liability in this case?
On one hand we have actual tests and test data. Frameworks like xUnit make this easier than ever, though the bottom line is that we aren’t going to write test cases for absolutely every portion of code. Using code generation techniques is one way to mitigate this risk. Still, unit, regression, and acceptance testing is one way to feel confident that our software is of sufficient quality.
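For instance, here is a minimal xUnit-style test using Python’s unittest module; the `parcel_fee` function is invented purely to illustrate the shape of such tests:

```python
import unittest

def parcel_fee(weight_kg):
    # Invented example: flat fee of 5 plus 2 per kilo.
    if weight_kg < 0:
        raise ValueError("weight cannot be negative")
    return 5 + 2 * weight_kg

class ParcelFeeTest(unittest.TestCase):
    # Each test_* method is one unit test; the framework finds
    # and runs them all, reporting any failures.
    def test_zero_weight(self):
        self.assertEqual(parcel_fee(0), 5)

    def test_typical_weight(self):
        self.assertEqual(parcel_fee(3), 11)

    def test_negative_weight_rejected(self):
        with self.assertRaises(ValueError):
            parcel_fee(-1)

# Run the suite explicitly and collect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParcelFeeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same pattern scales up to regression suites: every fixed bug gets a test, and the whole suite is re-run before each release.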
Notice that I used “confident” and not “sure”. It’s not just that tests only simulate uses rather than actual use; we know in the study of computer science that no amount of actual testing can conclusively prove that software behaves properly. We would have to execute a test for all possible inputs to each portion of code, and all branches of code execution. In many cases the domain of possible input is infinite. The way around this is to model the code logically and create mathematical, logical proofs that the system works properly for all input. There are various models for doing this kind of formal verification, but the time required to do it is often very large. If you’re writing a control program that adjusts the wing flaps of a jet to keep it airborne, then this sort of rigorous verification may make sense. It doesn’t really make sense for a VB app used in the office to track packages left at the reception desk.
Obviously our quality standards vary based on the project and in most cases it is not time or cost effective to verify systems conclusively.
So we can insist on some certification system or government body to test or verify some commercial products, but you have to accept that each case will require its own specifications and tests. It just may not be feasible. Currently we have a market driven system that controls the quality of software. If software doesn’t meet your quality needs, you as the client move to another product. I think what’s missing from the software industry is antitrust action. We need this to ensure that there is a variety of competing projects to choose from. I predict that quality will still be linked to the judgment of the client, and this will continue to depend on acceptance tests.
On the topic of Open Source software, I think it’s foolish to assume that you can impose legal restrictions on what is not a commercial endeavor. Certainly there are commercial applications of open source projects, but the liability for software failures and quality lies with the companies or individuals that apply this code to their own products. If Apache Tomcat were a buggy product, I don’t see where you would have recourse against the Apache Foundation. If some company made an application server that rolled in the Tomcat code and sold it to you, you may have some recourse against that company. Again, that goes back to the quality control question.
The bottom line is that we have to remember that software engineering, and computer science in general, are very young disciplines. We are still defining what we can do with software and how we get it done. I don’t think the discipline is yet mature or stable enough to survive under strong regulation. We are working on the software crisis as both a methodology problem and a management problem, and we are going in the opposite direction to the one predicted in this article. I think the future may hold corporate regulations that hold companies responsible for damages due to faulty software, but this is more of an economic reform than a technological one.
– Booleanman
It’s clear now from yours and many other comments that I haven’t emphasised the “hybrid approach” which I think will emerge.
I mentioned this in my comments above but didn’t really push it much in the article.
Essentially I think we’ll end up with an Architecture stage for the primary design, then something like XP for the actual building of the software, but with the Architecture being constantly updated alongside. It’ll be an Architecture with iterations rather than one or the other.
As for the quality, the Architect and/or Engineers will have oversight, and if it’s not good enough they won’t put their name to it.
In order for this to work though, the requirements will need to be nailed down properly, and this is something which will need to change. Indeed the ability to understand requirements will, I expect, form part of any certification. Without that, liability simply won’t be workable.
As for your other points, I think you’re perhaps looking too much at the detail. I don’t know what the solutions are, but I think they will come in time and will hopefully be reasonably sensible. For example, I don’t think all software will be regulated, but those producing systems will be – this is already happening in some areas, but it has no legal backing as yet apart from existing contract law.
Excellent comments, thanks.
You should have written them as an article…
Regarding liability for software: not all software or developers will be regulated, only those in areas where software failure can lead to injury or death.
At the moment, if you want to build a bridge for the government, that is, to build “public infrastructure”, everything has to be certified: building materials, building tools, building procedures, contractors… everything.
What “may” happen in the future is that, due to this “irresponsible” way we develop software (changing technologies often, without proper testing), people start to be injured or die. If that ever starts to happen, laws will be made, so that if you want to make or deploy software for the public sector (cars/buses, airports, bus stations, power stations, intelligent buildings, etc.) it will have to be certified software, made by certified engineers.
And no, it won’t mean the end of open source: some organisation or group of individuals can take that OSS and make it compliant, or certify its compliance with the law.
What will happen for sure in such a scenario is that this crazy technology race will slow down abruptly in the public sector, and believe me, that’s a good thing for the taxpayer and the public in general.
the requirements will need to be nailed down properly
I think this is a will o’ the wisp. I have never seen a non-trivial project where requirements are nailed down, by which I presume you mean that they are well specified in advance and don’t change during construction. Has anybody?
Re: William Pietri
Probably not, but that’s the point: they will have to be at some point – or you say, “OK, you want this added? Then you pay $$$ extra.”
There is clearly a communication failure which needs to be solved, and that alone will take some special skills; you’ll need to find someone who understands both management-speak and technical people… a difficult task indeed.
This is one point the Architecture books I reference mention, the aim is not to produce a single fixed architecture at the start but to make an architecture ready for change, to explicitly allow change in the process.
This gives the best of both worlds.
Re: Ucedac
You have hit the nail right on the head – that is exactly the point I wanted to make.
Re:all
Clearly my emphasis on the hybrid nature I think things will evolve to was not enough, I’ll see if I can add a conclusion of some form…
http://www.cs.utexas.edu/users/EWD/indexEWDnums.html
Browse these documents.
First: if you think the article is rubbish, write another article explaining why he is wrong.
Second: it looks like you are suggesting the article is so bad that it shouldn’t even have been published, and hence nobody else should read it. Basically you are deciding what’s good for the rest of the world based on your own prejudice. That’s called censorship.
Third: have you never heard of freedom of speech?
Fourth: no opinion is ever absolutely right or absolutely wrong; there’s always something interesting to learn, even from people who you say appear to talk rubbish.
Having said that, I personally think that he’s right (or may be) on many points. What I personally think is wrong with the article is that he should give even more background information and examples to support his points.
@ Nicholas Blachford
>> There is clearly a communication failure which needs to
>> be solved, that alone will take some special skills,
>> you’ll need to find someone who understands management
>> speak and technical people… a difficult task indeed.
Wise words…
Communication between management and technical staff is virtually nonexistent in 90% of companies.
Corporate management doesn’t care to understand (or give a shit about) what the technical staff say, recommend, or care about.
They live in a world which they understand well, a world of sales, profit, HR, etc.
They ruin most projects because of their lack of technical knowledge (or just plain ignorance); it is damned difficult to make them understand even the most basic things, ranging from what you are trying to achieve to what you are doing. They live in an old world in which the rules change very rarely, as opposed to ours, in which we are always adapting to new rules. Ironically, they demand immediate changes to projects which took months to get into their current state. Basically they are not able to see what’s behind the physical screen.
What amazes me the most are those situations in which they agree to spend thousands of pounds in a solution which is going to be used very rarely (or it is even not really needed), and later they refuse to spend thirty or forty pounds on a piece of software which is vital.
Most of the time, even the IT managers are people who come from a management background rather than a technical one. They tend to care more about covering their own ass and their interests than about projects, technicians and so on.
I think I could write a book on the subject.