“There is something fundamentally wrong with the way we create software. Contrary to conventional wisdom, unreliability is not an essential characteristic of complex software programs. In this article, I will propose a silver bullet solution to the software reliability and productivity crisis. The solution will require a radical change in the way we program our computers. I will argue that the main reason that software is so unreliable and so hard to develop has to do with a custom that is as old as the computer: the practice of using the algorithm as the basis of software construction. I will argue further that moving to a signal-based, synchronous software model will not only result in an improvement of several orders of magnitude in productivity, but also in programs that are guaranteed free of defects, regardless of their complexity.”
Why Software Is Bad and What We Can Do to Fix It
Because of bad managers and bad decisions that kill innovation and push closed source. Especially in the case of device drivers, the closed nature of many devices and mistakes in binary blobs make them misbehave, and his reasoning could be applied there to improve things.
As a matter of fact I believe OSS is not bad.
-
2006-06-13 1:37 pm SteveT
read the article…
Apparently the reason software is bad is the algorithmic nature of software, not its complexity, human fallibility, bad management, or lazy coders.
He seems to think hardware is more reliable than software and so we should design s/w like h/w.
He also seems to be under the impression that h/w is not algorithmic… hmmmm….
Users! They always want to input invalid data. No matter how carefully I try to validate every textbox, within two hours of the site being updated I get an email saying it's broken, because someone typed “1,000” into a textbox, Convert.ToInt32() isn't smart enough to drop the comma, and I was dumb and wrote a regex looking for characters that aren't supposed to be there instead of looking for only numerals.
[0-9] is what I should have done instead of [a-zA-Z…]
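For what it's worth, this is roughly the whitelist-plus-TryParse check I should have written in the first place (the TryReadQuantity helper is made up; it just leans on the standard NumberStyles flags):

    using System;
    using System.Globalization;
    using System.Text.RegularExpressions;

    class InputValidation
    {
        static bool TryReadQuantity(string raw, out int value)
        {
            // Whitelist digits and thousands separators instead of blacklisting letters.
            if (!Regex.IsMatch(raw ?? "", @"^[0-9,]+$"))
            {
                value = 0;
                return false;
            }
            // AllowThousands lets "1,000" parse; the culture decides which separator is legal.
            return int.TryParse(raw, NumberStyles.Integer | NumberStyles.AllowThousands,
                                CultureInfo.CurrentCulture, out value);
        }

        static void Main()
        {
            int n;
            Console.WriteLine(TryReadQuantity("1,000", out n) ? n.ToString() : "invalid");
        }
    }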
-
2006-06-13 4:11 pm MikeekiM
No, you were screwed by Microsoft with:
– No checked exceptions
– Microsoft allowing properties instead of getMethod() setMethod()
– ASPX Memory hog Datagrid
– IIS’s memory crashing “recycling” every time it gets to run a big query.
Does IIS have ANY memory management routines a 4 year old couldn’t write?
But, yeah, you should have thought of that comma in 1,000.
But why int.Parse() doesn’t use any language localization feature by default is a good question.
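To be fair, int.Parse() does take a culture if you use the right overload; you just have to opt in to thousands separators. A quick sketch (the literals are only examples):

    using System;
    using System.Globalization;

    class ParseWithCulture
    {
        static void Main()
        {
            // int.Parse accepts a culture; AllowThousands opts in to group separators.
            int us = int.Parse("1,000",
                               NumberStyles.Integer | NumberStyles.AllowThousands,
                               new CultureInfo("en-US"));   // ',' is the group separator here

            int de = int.Parse("1.000",
                               NumberStyles.Integer | NumberStyles.AllowThousands,
                               new CultureInfo("de-DE"));   // '.' is the group separator here

            Console.WriteLine(us + " " + de);               // prints: 1000 1000
        }
    }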
-
2006-06-13 6:13 pm eggs
– No checked exceptions
boo on checked exceptions
– Microsoft allowing properties instead of getMethod() setMethod()
I don’t understand why people are anti-properties; they are perfect as far as I am concerned.
– ASPX Memory hog Datagrid
Well, .NET 2.0 uses a gridview which seems a lot better than the Datagrid.
I don’t deal much with the webserver (just copy my files mostly, do some basic configs of virtual directories and the such), but it runs very well for me.
-
2006-06-14 12:27 pm MikeekiM
Here’s the deal with checked exceptions.
When you write a class method that throws a checked exception, the CALLER is required to catch or declare that exception. So you are giving the caller a heads up:
“Hey, I tested this with production data, and this exception WILL BE THROWN”. Why would you want to ignore a WARNING OF FAILURE IN PRODUCTION?
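Here's a rough C# sketch of what I mean (ParseAge is a made-up helper): since nothing is checked, the compiler never forces the caller to acknowledge the failure, so the warning lives only in comments.

    using System;

    class UncheckedExceptions
    {
        // Documented to throw FormatException/OverflowException on bad input,
        // but nothing forces the caller to notice that documentation.
        static int ParseAge(string raw)
        {
            return int.Parse(raw);
        }

        static void Main()
        {
            // Compiles with no warning and blows up at runtime on "abc".
            // With Java-style checked exceptions this call site would not compile
            // until the exception was caught or redeclared.
            Console.WriteLine(ParseAge("abc"));
        }
    }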
The problem with Properties:
– No editing of data entering the class.
Take Int.Parse() for example.
It throws:
ArgumentNullException
FormatException
OverflowException
If you don’t parse an integer entering your class,
then you can’t protect your class from these 3 error conditions.
You have to depend upon the CALLER to edit the data before it gets to your class.
Well, the caller could be any of these:
Aspx page
Console application
Web Service
Windows Service
Windows Application
All of which are going to have to duplicate the validation of the data on their side of the fence.
Summary:
– Properties are UNSAFE.
– Properties are UNPRODUCTIVE.
-
2006-06-14 1:58 pm eggs
No editing of data entering the class
That’s false…
public int SomeNumber
{
    // the setter can transform (or validate) whatever enters the class
    set { this.someNumber = value + 1; }
    get { return this.someNumber; }
}
I pass it 5 and 6 is stored in the class.
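And if you want real validation rather than a +1, the setter can reject bad data too, so every caller goes through the same check. A rough sketch (the Order/Quantity names are just for illustration):

    using System;

    class Order
    {
        private int quantity;

        // The setter guards the class itself, so ASPX pages, services and console
        // callers all go through the same check instead of duplicating it.
        public int Quantity
        {
            get { return this.quantity; }
            set
            {
                if (value < 0)
                    throw new ArgumentOutOfRangeException("value", "Quantity cannot be negative.");
                this.quantity = value;
            }
        }

        static void Main()
        {
            var o = new Order();
            o.Quantity = 5;               // fine
            Console.WriteLine(o.Quantity);
            // o.Quantity = -1;           // would throw at the single guarded entry point
        }
    }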
-
2006-06-14 10:51 pm fanbanlo
As an undergrad CS student… I must say this article and thread are very interesting. However, I’m not fond of those who aren’t open to new ideas and just shoot them down right away.
“The reason software is bad…” is a wonderful topic for people like me. Are there any links you can share that discuss solutions or workarounds for common software (engineering) problems with today’s tools, without a radical, non-existent solution like COSA?
Thanks.
My two cents on the reason software is bad: because programmers are too concerned with performance and too lazy? (e.g. properties vs. setter/getter function calls, or not using exceptions)?
Edited 2006-06-14 22:56
-
2006-06-15 6:32 am corentin
> Are there any links you can share that discuss solutions or workarounds for common software (engineering) problems with today’s tools, without a radical, non-existent solution like COSA?
Praxis High Integrity Systems developed SPARKAda, a version of Ada with additional static checks: http://www.praxis-his.com/sparkada/ (you will find a lot of whitepapers there, too).
You may be interested in this article by Eiffel Software on Design by Contract : http://archive.eiffel.com/doc/manuals/technology/contract/
(Eiffel was the first language to include DbC).
A subset of C, MISRA-C (currently in its 2004 edition) is used by embedded software developers in the automotive industry: misra.co.uk
There is also a quite similar project for C++, the “High Integrity C++ Coding Standard Manual” available here : http://www.codingstandard.com/HICPPCM/index.html
On his webpage, Jack Ganssle (a veteran embedded systems engineer) has a lot of cool articles; some of them deal with quality: http://www.ganssle.com. Check his columns on embedded.com too; he often talks about embedded software quality.
-
2006-06-13 7:37 pm
The author mentions that hardware is more reliable than software for a number of reasons. I agree!
The cost of testing hardware is low relative to the cost of a product recall or a physical repair. For this reason, hardware is subjected to strenuous tests simulating long-term usage conditions. These tests may take lots of time and money, but (most) companies pay the price to avoid a serious recall.
Conversely, the cost of testing software is high relative to the cost of a recall or repair. With software, a repair often consists of downloading a patch or modifying a few lines of a text file. For this reason, software is typically subjected to less intense testing than the corresponding hardware.
Note that I’m not saying that this is a good thing. For example, do a google search for Therac 25, or check it out here:
http://en.wikipedia.org/wiki/Therac-25
It’s a prime example of software in a critical application not being tested to the same degree as the corresponding hardware. The result — read the article for yourself, but I can say that it wasn’t pretty.
-
2006-06-13 2:28 pm SteveT
True, in this sense h/w is more reliable than s/w… not because it is inherently so, but because we spend the time/money up front testing it.
The author seems to have a bit of an obsession with neural nets and has tried to apply these ideas to a general s/w architecture.
See his other article “AI in the bible” above.
He’s taken the concept of many simple parallel units and attempted to generalise it to encompass the whole of software.
He has a misunderstanding about algorithms: I think he uses the term to mean sequential algorithms as opposed to parallel implementations. If this is so then he is still incorrect. An algorithm implemented, for example, using VHDL on an FPGA is parallel but is just as likely to have bugs as the sequential equivalent.
For those interested in the field of software reliability, there is an excellent article in this month’s issue of Scientific American. It describes a software tool that evaluates design specifications using theorem provers, and checks them for design flaws.
Interestingly enough, it also describes a technique that lifts a page from the testing of integrated circuits. The authors of the article describe a tool that considers the possible states of the software, and searches for failure modes.
-
2006-06-13 4:22 pm kokara4a
I guess there’s a lot of room for improvement in software verification. But first one has to state the properties to be verified. I assume this is far from trivial. And how can we decide it’s exhaustive? Anyhow, verification is bound by the properties of our computational model (which according to the Church-Turing thesis DEFINES computation). Just remember the halting problem, and by implication that every non-trivial property is undecidable. For many cases it might be possible to verify properties, but in the general case no computing device (including our brains, if they are computing devices in the sense of Church-Turing) can help us. Alas!
-
2006-06-13 5:15 pm rayiner
The halting problem does not say that any non-trivial property is undecidable. It says that there is no general algorithm for determining whether a given algorithm will halt. Heuristic approaches can, however, guide automatic theorem provers to solutions for specific algorithms in many cases. The technique described in the article is built on such solvers.
Also, it should be pointed out that the halting problem only applies to a true Turing machine — one with unbounded memory. Since memory is bounded in every real computer, so is the set of all possible machine states, which means that the halting problem is solvable (inefficiently) by considering all the states of the machine. The technique in the article takes advantage of this fact by considering not the possible states of memory, but the more abstract and less numerous logical states of the system. Such sets are still huge, but modern solvers can be applied to them with good results in many cases.
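As a toy illustration of the bounded-state point (just a sketch, not the technique from the article): a machine with finitely many states either halts or revisits a state, so tracking visited states decides the question, however inefficiently.

    using System;
    using System.Collections.Generic;

    class BoundedHalting
    {
        // Toy "machine": the state is a single int and step is the transition function
        // (null means the machine halted). With finitely many states, a run either
        // halts or revisits a state, so tracking visited states decides the question.
        static bool Halts(int start, Func<int, int?> step)
        {
            var seen = new HashSet<int>();
            int? state = start;
            while (state.HasValue)
            {
                if (!seen.Add(state.Value))
                    return false;             // revisited a state: it loops forever
                state = step(state.Value);
            }
            return true;
        }

        static void Main()
        {
            Console.WriteLine(Halts(0, s => s < 10 ? s + 1 : (int?)null));  // True
            Console.WriteLine(Halts(0, s => (s + 1) % 4));                  // False
        }
    }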
You are, of course, right, in that design specification is the weakest link in the system. However, automatic verification of a formal design specification is still a whole lot better than what we have today. Moreover, in the future, such techniques could be combined with techniques for automatic verification of code, to ensure that the code matches the design. Such a solution would be far from perfect, in that it would not be exhaustive and there would be limits to the set of provable properties. However, even an imperfect solution could dramatically increase the reliability of software in practice.
-
2006-06-14 7:01 am kokara4a
The halting problem does not say that any non-trivial property is undecidable.
I didn’t say that either. But the halting problem being what it is, you can prove it:
http://en.wikipedia.org/wiki/Rice's_theorem
You are right about the finite state space, but the same goes for factoring numbers. It’s not impossible, given resources and (mostly) time. It’s just very impractical. I think what people are trying to address in automatic theorem proving is how to avoid the combinatorial explosion. I was just trying to suggest that there are fundamental limits to automatic verification. It’s not a silver bullet. And it can never be. Which is lucky for computer scientists, because it means there’s always room for improvement.
-
2006-06-13 6:57 pm sappyvcv
I think that fits along with the idea of using tools to make software development easier and more robust. The coder should be working in a more abstract manner, worrying more about higher level flow rather than every little low-level detail.
Being able to design software more abstractly also makes it [hopefully] easier for the tools to trap any errors or unhandled paths or what have you.
Think something like Windows Workflow Foundation, but much more evolved. You tell a software development tool (an IDE) the flow of the software and any implementation details and it handles the rest.
That’s a very basic view of it. I don’t have time to go much more in depth.
-
2006-06-14 2:50 am Cloudy
It describes a software tool that evaluates design specifications using theorem provers, and checks them for design flaws.
Someone has resurrected program correctness provers? Damn. I was sure David Gries drove a stake through the heart of that madness 20 years ago.
Hint: the most a correctness prover can prove is that a specification has been properly transformed from one representation to another. It can prove nothing about the validity of the original specification, and therefore nothing about the validity of the resulting transformation.
And even the proof of transformation correctness is only possible in the simplest cases.
This article was already covered in January 2005:
Regarding Project COSA: http://www.osnews.com/story.php?news_id=9473
Linked by Eugenia Loli-Queru on 2005-01-21 19:03:29 UTC
“There is something fundamentally wrong with the way we create software. Contrary to conventional wisdom, unreliability is not an essential characteristic of complex software programs. In this 4-page article, Louis Savain will propose a silver bullet solution to the software reliability and productivity crisis. The solution will require a radical change in the way we program our computers.”
The article did relocate from its former location at http://users.adelphia.net/~lilavois/Cosas/Reliability.htm to http://www.rebelscience.org/Cosas/Reliability.htm however.
I’d like to point out that this, hmm, “article” has already been posted last year. Yes, the exact same article, linking to the phantom project from the same author, just with a new date. Dig the archives and look for a comment I made last year if you don’t believe me.
Heck, anything that has “silver bullet” in it is suspicious in the first place. And even without that, shouldn’t such highly controversial claims raise alarms? And if you care to dig a bit, you’ll see the guy has nothing new to say and nothing to prove his claims.
OSNews editors: you’ve been trolled. Admit it and take the article down.
-
2006-06-13 4:47 pm Thom Holwerda
OSNews editors: you’ve been trolled. Admit it and take the article down.
You can always ask, but ordering us to do something won’t really work.
Ok, so this article has been posted before. Before my time, I might add. Big deal, it’s up now, people have commented, go on and debunk it, prove it, whatever. Just don’t go demanding we do something. That’s just not British.
Can’t we block every article with “signal-based, synchronous” in it???
Great idea. I suggest that the word “microkernel” should also be blocked.
-
2006-06-14 5:28 pm
“The biggest contributing factor to unreliability in software has to do with unresolved dependencies… every time a property is changed by a statement or a subroutine, it is up to the programmer to remember to update every other part of the program that is potentially affected by the change.”
This isn’t a necessary aspect of software, it’s evidence of badly designed software. If your code contains global variables, “magic numbers” or string literals, then of course there will be unresolved dependencies when you make changes to one part of the code.
A good programmer writes orthogonal, loosely-coupled routines that don’t exhibit the symptoms the author identifies as reasons why software is bad.
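A trivial sketch of the kind of hidden dependency I mean (the pricing example is made up): a magic number couples two routines silently, and a named constant resolves the dependency in one place.

    using System;

    class PricingBefore
    {
        // The same magic number lives in two places; change one and forget the other
        // and the program silently disagrees with itself.
        public static double Tax(double amount)   { return amount * 0.21; }
        public static double Refund(double gross) { return gross / 1.21; }
    }

    class PricingAfter
    {
        // One named constant: the dependency is explicit and resolved in one place.
        private const double TaxRate = 0.21;

        public static double Tax(double amount)   { return amount * TaxRate; }
        public static double Refund(double gross) { return gross / (1 + TaxRate); }
    }

    class PricingDemo
    {
        static void Main()
        {
            Console.WriteLine(PricingAfter.Tax(100));     // 21
            Console.WriteLine(PricingAfter.Refund(121));  // 100
        }
    }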
-
2006-06-13 6:59 pm
If you dislike what the article states, go write a refutation, and link to it. I’ve done it before. http://osnot.infogami.com/blog/
Although I dislike the rambling writing style, I think the ideas the author is attempting to explain are sound.
Even if you shoot down the brain analogy, CPUs and other integrated circuits are much more reliable than software. Of course, many of you probably never read that far into the article…
Also, try to be a little open to the possibility that the author is right, instead of rejecting it because it’s radically different, or because he says current programming techniques are screwed up.
The proposed system allows actions to happen synchronously in parallel, which current algorithmic software does not support, including functional programming languages like Haskell, Clean, the somewhat-functional Erlang, and the declarative and functional Mercury. The same goes for the imperative/object-oriented languages: C, C++, Smalltalk, D, Ruby, Python, etc.
Edited 2006-06-13 17:10
-
2006-06-13 5:34 pm rayiner
The analogy to CPUs and ICs is really tenuous. CPUs and ICs are reliable because they are far easier to decompose into components, and interactions between components are far more limited. Most ICs are built out of macro libraries, containing high-level components like adders, muxes, etc. There is nothing really comparable for software.
I should also point out that hardware is a lot more expensive than software too, relative to its level of functionality. Making a modern CPU costs hundreds of millions of dollars and takes years of work by highly trained people, even though the design specification of a CPU is relatively small and simple. If you created a software program with a comparably simple design specification, with a comparably skillful programming staff, a comparable budget, a comparable schedule, and comparable testing, you’d probably get a comparably reliable piece of software.
As for rejecting it out of hand because it’s radical, that’s just good sense. Most radicals are wrong, and the ones that work outside the established academic framework are more likely to be wrong than the ones that work inside it. Historical examples to the contrary are… rare.
If this guy wants some credibility, he’d write up his ideas as proper papers and submit them to the ACM for publication. Otherwise, he’ll get labeled a quack, and rightly so.
-
2006-06-13 5:51 pm j-s-h
So let’s apply the same techniques to software as to hardware. Software is more flexible than hardware. Therefore, anything you can do for hardware could also be done for software.
Yes, hardware is more expensive than software. And that has absolutely no bearing on his arguments at all.
I disagree on your view of credibility there. I’d say if he wants some credibility, he’d implement the system, and show how it’s better than the current paradigm with examples. After all, did Matz, the author of the Ruby language get published in the ACM? What about the author of Perl? I think your position is quite absurd.
In any case, there seems to be no point in arguing with people on this topic, because people seem to be quite close-minded. So I’ll let it rest.
-
2006-06-13 7:33 pm rayiner
So let’s apply the same techniques to software as to hardware. Software is more flexible than hardware. Therefore, anything you can do for hardware could also be done for software.
That doesn’t follow. In fact, the inverse is usually true in practice. Things that are more flexible tend to be less amenable to the same sorts of analyses.
Yes, hardware is more expensive than software. And that has absolutely no bearing on his arguments at all.
Sure it does. A larger budget means you can hire more skilled and experienced people to implement a design, you can give them more time to complete the design, and you can afford to pay more people to test the design. That counts for an enormous amount.
After all, did Matz, the author of the Ruby language get published in the ACM? What about the author of Perl? I think your position is quite absurd.
Matz and Wall didn’t claim that the theoretical foundations of modern computer science were fundamentally flawed.
-
2006-06-14 12:53 am j-s-h
What you say doesn’t make sense at all. If something is more flexible, you can implement a less flexible system on top of it. You are only interested in shooting down this guy’s vision for programming and advocating the status quo. Cast not your pearls before swine. Argument with you is pointless.
One can have a much more useful, reliable system without publishing papers in the ACM. What matters is how good the ideas are, not in which forum they are presented. Hey, you aren’t published in the ACM either, right? If so, you have no credibility to shoot down his ideas, according to your own rules, and you are a hypocrite.
-
2006-06-14 1:43 am rayiner
What you say doesn’t make sense at all. If something is more flexible, you can implement a less flexible system on top of it.
Your original statement was: “So let’s apply the same techniques to software as to hardware. Software is more flexible than hardware. Therefore, anything you can do for hardware could also be done for software.”
These two statements mean different things. The key words are “on” versus “for”. “What you can do on a more restrictive system you can do on a more flexible one” is true, while “what you can do for a more restrictive system you can do for a more flexible one” is not generally true. In fact, the opposite is generally true. Solution techniques that work for restricted systems often do not work for more general ones.
Cast not your pearls…
You misspelled “quackery”
What matters is how good the ideas are, not in which forum they are presented.
The goodness of an idea can only be determined by an evaluation of that idea. The ACM is a forum for the evaluation of theories in computer science. Since what we have here is just an idea — a theory, not an implementation, the ACM is absolutely the appropriate forum for its evaluation.
Hey, you aren’t published in the ACM eiher, right? If so you have no credibility to shoot down his ideas, according to your own rules, and you are a hypocrite.
I’m not claiming that computer science as a field is barking up the wrong tree. The bigger the claim, the more credibility you need. You don’t need to be a rocket scientist to credibly claim that rockets move as the result of conservation of momentum. You’d better be a rocket scientist if you’re claiming that astrophysics is fundamentally flawed!
-
2006-06-14 2:57 am Cloudy
Although I dislike the rambling writing style, I think the ideas the author is attempting to explain are sound.
They’re not. It’s trivial to show why, and the demonstration has been done in OSNews before.
First, integrated circuits aren’t as reliable as the author seems to think.
Second, the reliability of an integrated circuit is directly related to the complexity of the circuit.
Third, there’s a simple reason why simple ICs are easy to verify: they implement a finite state machine, and can be validated by enumerating the input states and expected output state and simply wiggling bits to exhaustively verify the match.
The more complex the IC, the more difficult and time consuming to enumerate, the more likely that there will be faults in the implementation.
Typical software has far more input/output combinations to match than any IC, and so, due to the complexity, is far more difficult to test.
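A toy version of that bit-wiggling, just to make the point concrete (a 4-bit adder model checked against a reference over all 512 inputs; real software has nothing like so small an input space):

    using System;

    class ExhaustiveCheck
    {
        // Device under test: a 4-bit ripple-carry adder modeled bit by bit.
        static int Adder4(int a, int b, int carryIn)
        {
            int sum = 0, carry = carryIn;
            for (int i = 0; i < 4; i++)
            {
                int ai = (a >> i) & 1, bi = (b >> i) & 1;
                sum |= (ai ^ bi ^ carry) << i;
                carry = (ai & bi) | (carry & (ai ^ bi));
            }
            return sum | (carry << 4);   // 5-bit result including carry out
        }

        static void Main()
        {
            // "Wiggle" every one of the 16 * 16 * 2 possible inputs against a reference.
            for (int a = 0; a < 16; a++)
                for (int b = 0; b < 16; b++)
                    for (int c = 0; c < 2; c++)
                        if (Adder4(a, b, c) != a + b + c)
                            Console.WriteLine("Mismatch at {0},{1},{2}", a, b, c);
            Console.WriteLine("Exhaustive check done.");
        }
    }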
-
2006-06-14 8:25 am PowerMacX
Even if you shoot down the brain analogy, CPUs and other integrated circuits are much more reliable than software. Of course, many of you probably never read that far into the article…
Of course, you never heard of CPU bugs, which are THE NORM, and fixed/worked around in software.
Intel:
– Pentium 4, 65 bugs: http://download.intel.com/design/mobile/datashts/30922101.pdf (warning: PDF)
– Core Duo, 30+ bugs: http://www.macnn.com/articles/06/01/23/core.duo.errors/
AMD:
– Athlon/Duron: http://slashdot.org/article.pl?sid=02/01/21/0750226
– Athlon: http://216.239.51.104/search?q=cache:dfnSY2EXCOEJ:www.amd.com/us-en…
– Athlon: http://216.239.51.104/search?q=cache:HFDm3zBojEcJ:www.amd.com/us-en…
-
2006-06-14 4:12 pm Mapou
Of course, you never heard of CPU bugs, which are THE NORM, and fixed/worked around in software.
That’s precisely it. Logic bugs are fixed/worked around before release. They are known bugs that are easily found during testing. Most software bugs, on the other hand, even in expensive, highly tested, safety-critical systems, are not fixed and are unknown until they cause major problems.
Louis Savain
-
2006-06-14 6:33 pm Cloudy
Logic bugs are fixed/worked around before release. They are known bugs that are easily found during testing.
You’ve never worked on a hardware design, have you?
Many hardware logic bugs are not found before the hardware goes into production.
Most software bugs, on the other hand, even in expensive, highly tested, safety-critical systems, are not fixed and are unknown until they cause major problems.
Alas, reality disputes this claim. If you check the literature, you will find that most software bugs are found in beta-testing, and that, further, most software bugs do not cause major problems.
-
2006-06-14 10:01 pm Mapou
Wow! Are you getting paid for this shit or are you possessed by some demon? ahahaha… OTOH, could be some form of autism mixed with anal retentive tendencies. ahahaha…
In the meantime, the Silver Bullet site is getting thousands of hits a day. From all over the world! ahahaha… I feel like the Joker today. ahahaha… AHAHAHA… ahahaha…
-
2006-06-15 2:03 am Cloudy
Get back to me when you’ve actually done something besides rant about stuff you don’t understand and insult people who understand it, kthxbye
First they tell you you’re wrong and they can prove it. Then they tell you you’re right, but it’s not important. Then they tell you it’s important, but they’ve known it for years…
Charles Kettering
******
All truth goes through three steps: First, it is ridiculed. Second, it is violently opposed.
Finally, it is accepted as self-evident.
Arthur Schopenhauer
******
Laughs best he who laughs last. ahahaha…
The Joker
The author would have us believe, based on his empirical experience, that there is somehow a disparity between the reliability of hardware and the reliability of software as complexity increases. This is absurd.
I should go further; he doesn’t seem to possess much understanding of either hardware or software and how it’s designed.
All the good things that are implicit and taken for granted in hardware logic design become explicit and a constant headache for the algorithmic software designer.
Hah! As if hardware designers didn’t have to worry about signal timing, clock synchronization, failing components, etc. etc. etc.
It’s hard to even respond to this article, because you’d end up teaching 2 years worth of a college education to set this guy straight.
He advocates that software should be designed like hardware because hardware logic rarely has flaws. Firstly, that’s patently absurd; there are *tons* of flaws in hardware. There are fewer than in software because hardware logic is simpler and less dynamic than the logic often employed in software and thus easier to validate. Given a finite number of bits of input, one can validate all possible inputs. Even so, logic errors still make it out of the fab.
Not so with software, and he doesn’t seem to see the distinction.
Now that would solve dependency.
We let robots weld cars.
Let humans describe what they want, let computers write software.
-
2006-06-13 6:54 pm rycamor
Now that would solve dependency.
We let robots weld cars.
Let humans describe what they want, let computers write software.
**sigh…**
*Why* should we read the rants of someone who can show no track record of complex software development (an operating system, a compiler, a language, a RDBMS, etc.)?
When I read something written by people heavily involved in Multics, Unix, Plan9, BSD, Linux, NT or whatever I tend to trust them.
There already is a practical solution to the software crisis: it’s called (real) engineering. And buying snake oil technology is not something a decent engineer would do in order to solve problems.
I actually designed hardware using that model. And I can tell you that it’s way more complex to design and to debug than any software model that is in use. For hardware it’s the best we have. But give me some actual *software* projects which successfully use this model, and you might convince me…
-
2006-06-13 8:29 pm Mapou
You might want to take a look at Occam, a parallel computer language which was used back in the 90s (I believe) to program the INMOS transputer. Also, take a look at all the synchronous reactive languages (e.g., Esterel, Signal, etc…) that are already in use by companies like Airbus Industries for safety-critical software systems. COSA just goes much further, down to the instruction level, and eliminates algorithmic code altogether. Admittedly, COSA will not come of age until processors are designed and optimized for the new model. Sooner or later, the serious nature of the software crisis will force the computer industry to wake up out of its collective coma and realize that they’ve been doing it wrong from the beginning. They’ve been doing it wrong ever since Lady Ada wrote the first computer algorithm more than 150 years ago. It’s time for a change.
Louis Savain
PS. Thank you, Thom Holwerda, for putting this link on the front page. One day, the entire computer world will thank you.
Edited 2006-06-13 20:33
-
2006-06-13 9:36 pm Thom Holwerda
Thank you, Thom Holwerda, for putting this link on the front page. One day, the entire computer world will thank you.
Maybe, but not before the Mac community rips my arms off, the GNOME guys burn my legs off, the KDE guys stuff boiling lead down my throat, the SkyOS community tears my hair out, and the AmigaONE owners finish it off.
Oh my.
-
2006-06-14 3:15 am Cloudy
You might want to take a look at Occam, a parallel computer language which was used back in the 90s (I believe) to program the INMOS transputer.
Mid 80s. And if you’re going to mention Occam, point out that it’s derived from Tony Hoare’s “Communicating Sequential Processes” (CSP) and was an algorithmic language.
Actually, all of the languages you mention are algorithmic languages.
CSP and related programming languages have fallen out of favor because they turn out to not be very good ways of expressing parallelism.
Not surprisingly, the clever flow control of CSP turns out to make it harder, rather than easier, to reason about concurrency, except with very rigorous formalisms that are only suitable for toy problems.
-
2006-06-14 4:52 am Mapou
Actually, all of the languages you mention are algorithmic languages.
Of course, they’re algorithmic languages. That’s why there is a need for COSA. What synchronous reactive languages have that others don’t is the idea that every reaction to an event takes exactly one virtual cycle, hence the reason that they are said to be synchronous. In COSA, a reaction is an elementary operation whereas in a reactive language like Esterel, a reaction can be an algorithm, albeit a short one.
Louis Savain
Edited 2006-06-14 05:01
-
2006-06-14 5:30 am Cloudy
every reaction to an event takes exactly one virtual cycle
would not make a system synchronous.
In COSA, a reaction is an elementary operation
you say that like you think it means something new. You’re really just describing the Connection Machine, only in vague, obscure language.
There’s a famous comment made in the physics community that describes theories like COSA: “That’s not right; that’s not even wrong”. (Usually attributed to Pauli.)
With all due respect, you’re stumbling around the edges of well covered ground, and you’d do yourself a great service if you obtained a bit more education in the literatures of complexity, concurrency, and circuitry. You’ll find of particular interest:
Knuth – complexity
Chandy and Misra – concurrency
Carver Mead – circuitry
although I suspect that you may not have the math background for Chandy and Misra, and so should probably start with Hoare.
Here are a few fundamental flaws in COSA to address once you’ve mastered that material:
1) The real world isn’t synchronous
2) Your characterization of hardware failure is inaccurate. Design-related hardware failure is directly correlated with design complexity.
3) You fail to address Gries’ observation: There is no automatic way of determining if a specification is good. That’s an infinitely recursive problem.
4) Algorithms are far from “utterly alien” to a hardware designer. Ask anyone who has ever implemented a Wallace tree.
Here’s a simple task: show how one would use COSA to answer the question “How many prime numbers are there less than or equal to 2^128?”
Once you’ve failed to do that without encoding an algorithm, you’ll understand why algorithms are such a critical part of programming.
-
2006-06-14 6:45 am Mapou
every reaction to an event takes exactly one virtual cycle
would not make a system synchronous.
Too bad for your supposed expertise because that’s precisely the reason given by experts in synchronous languages. Axel Poigné of the German National Research Center in Information Technology put it this way:
“Synchronous programming reinterprets the basic idea of hardware design, and of discrete engineering formalisms, namely that processes run on a clock; a system reacts to each stimulus of the environment instantaneously meaning that the reaction consumes no (observable) time, or in hardware terms: every reaction terminates before the next ‘clock tick’. Software design is less accustomed to this idea which, however, should be the basis of systems with hard real time constraints. For these a reaction must be guaranteed to take place within a specified reaction time.”
There’s a famous comment made in the physics community that describes theories like COSA: “That’s not right; that’s not even wrong”. (usually attributed to Pauli.)
Well, personal insults fall into the ad hominem argument category. Didn’t you just complain against ad hominems in another post? Do you know the meaning of the word “hypocrite” by any chance?
1) The real world isn’t synchronous
The fact that the universe is discrete (quantized) means that it runs on a single heartbeat, i.e., all interactions are synchronous. If they weren’t, particles would have a hard time interacting.
2) Your characterization of hardware failure is inaccurate.
Which characterization are you referring to?
3) You fail to address Gries’ observation: There is no automatic way of determining if a specification is good.
Did I say there was?
4) Algorithms are far from “utterly alien” to a hardware designer. Ask anyone who has ever implemented a Wallace tree.
I define ‘algorithm’ on my site. It is not synonymous with method.
Once you’ve failed to do that without encoding an algorithm, you’ll understand why algorithms are such a critical part of programming.
I never said that algorithms should not be used to solve problems. In fact, if you had read my treatise instead of shooting from the hip, you would have seen that I praise the power of algorithms to do just that. What I am against is the idea of basing software on the algorithm. COSA does not prevent one from using traditional algorithmic solutions to problems as long as it is done synchronously using sensors and effectors.
You know, I am really tired of defending my work against pompous, hypocritical know-it-alls who have nothing better to do with their time than to show off their supposed knowledge and wrestle with chicken feather strawmen of their own making. Believe me, I’ve seen every argument against COSA at one time or another. Some are good and others are stupid. Yours are not that clever, nor are they novel. See you around. ahahaha…
-
2006-06-14 9:23 am Cloudy
You might want to read Poigné’s comment again, carefully. Nothing in it implies that ‘reactions’ taking a single “virtual” cycle makes a system synchronous.
I suspect your lack of familiarity with hardware design doesn’t make that clear to you, but Poigné doesn’t seem to understand synchronicity, either. Hint: Even in clocked based systems, skew is a problem, and events do take observable time, as well as taking more than one cycle.
‘There’s a famous comment made in the physics community that describes theories like COSA: “That’s not right; that’s not even wrong”. (usually attributed to Pauli.)’
Well, personal insults fall into the ad hominem argument category. Didn’t you just complain against ad hominems in another post? Do you know the meaning of the word “hypocrite” by any chance?
Why yes I do. Apparently, though, you don’t know the meaning of ‘ad-hominem’. There’s nothing in Pauli’s famous quote that’s directed to the man. It is a comment on the quality of an idea.
The fact that the universe is discrete (quantized) means that it runs on a single heartbeat, i.e., all interactions are synchronous. If they weren’t, particles would have a hard time interacting.
Ah, physics is something else you’re not versed in, I see. First, there’s no evidence that either space or time is quantized, only that mass and energy are; although there are some tantalizing theories that indicate they might be. More importantly, however, the universe is non-deterministic at the quantum level, as demonstrated rather clearly by Hawking radiation.
I define ‘algorithm’ on my site. It is not synonymous with method.
You don’t actually define algorithm. You do have a footnote that reads: A purely algorithmic procedure is one in which communication is restricted to only two elements or statements at a time. In a non-algorithmic system, the number of elements that can communicate simultaneously is only limited by physical factors. It would take about six paragraphs to explain all the mistakes in that assertion, but basically, you’re making a domain-of-discourse transition error.
You know, I am really tired of defending my work against pompous, hypocritical know-it-alls
Why am I not surprised that when challenged to actually demonstrate COSA rather than pontificate about it you would issue insults and then cut-and-run?
There’s no there there. (Apologies to Gertrude Stein.) COSA is merely confused mutterings on topics that have long been fully explored by others. At least you appear to follow Bohr’s dictum to never express yourself more clearly than you think.
-
2006-06-14 6:04 pm Morin
> > 1) The real world isn’t synchronous
>
> The fact that the universe is discrete (quantized)
> means that it runs on a single heartbeat, i.e., all
> interactions are synchronous. If they weren’t,
> particles would have a hard time interacting.
The state of particles is quantized, i.e. particles jump from one state to another without a smooth transition. Whether the universe (i.e. space and time themselves) is quantized is a theory that tries to replace string theory, and to my knowledge this is still an open question. If you know better, please give a link. Anyway, I don’t see how this affects software development.
> Anything you can do with a Turing machine can be done
> with a behavioral machine
I can implement a Turing machine in C, Java, Lisp, Haskell, C#, C++, whatever. Think again about what makes a language *useful* (as opposed to Turing-complete).
I must say I found this article interesting, and continued reading about COSA too.
Damn, this subject really stirs up feelings in some people.
It seems that management likes to leap from one “new” development approach or technology to another like a thirsty kangaroo hops between water holes, but most of the experienced developers I know simply do what they’ve always done for years.
Leave the development processes and tool choices to the pros. If you are not a programmer, don’t try to tell us what to do or how to do it, at least if you expect to see good results. That MBA of yours doesn’t give you some magical level of insight into technical processes, and we already know what needs to be done. Really.
Edited 2006-06-13 22:45
I didn’t read the entire article, because I didn’t like the writing style, but it somehow made me think of spreadsheets where you can have cells depending on other cells. Change some value and all dependent cells are updated.
If you generalise this to other data types than numbers (actually allow specifying data type invariants, because it’s all numbers in the end), then I suppose you have something usable.
You could also model it as data objects flowing through a system of transformations (e.g. from a gui description instance to a 2D bitmap instance).
In any case, I like the inherent parallelism. Combine that with a JIT compiler and we might be able to manage all those multi-core/CPU systems that are just around the corner.
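Something like this crude sketch is what I have in mind (all the names are made up): cells carry formulas, and dependents are recomputed whenever an input changes.

    using System;
    using System.Collections.Generic;

    // A crude spreadsheet-style cell: dependent formulas are recomputed
    // automatically whenever one of their input cells changes.
    class Cell
    {
        private double current;
        private readonly List<Action> dependents = new List<Action>();

        public double Value
        {
            get { return current; }
            set { current = value; foreach (var d in dependents) d(); }
        }

        public void DependsOn(Func<double> formula, params Cell[] inputs)
        {
            Action recompute = () => Value = formula();
            foreach (var c in inputs) c.dependents.Add(recompute);
            recompute();   // compute the initial value
        }
    }

    class CellDemo
    {
        static void Main()
        {
            var a = new Cell();
            var b = new Cell();
            var sum = new Cell();
            sum.DependsOn(() => a.Value + b.Value, a, b);

            a.Value = 2;
            b.Value = 3;
            Console.WriteLine(sum.Value);   // 5, without anyone remembering to update it
        }
    }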
“To solve the crisis, we must adopt a synchronous, signal-based software model.”
How?
For that to happen we need a new, remodeled OS-building language other than C. And this language should be very well modeled and complex, to reduce the complexity of the OSs it produces.
Creating a defect-free OS or application is impossible simply because we are humans, not gods. And if we are to model an OS or application on our brains, then why are we still unable to understand the human brain’s structure and function?!
Maybe we will be close to creating such an OS or application when antigravity vehicles dominate our lives, much like in Star Wars!!!
That was a dumb article, I thought. Hardware is the silver bullet, eh? NO. If you put Windows, or the complexity of Windows, on a hardware chip, it would be just as insecure and buggy.
People need to GET REAL. Hardware is way simpler than even the simplest software projects. It is completely direct and single-minded: it processes, and that’s about it. If a chip had to do everything that Windows or Linux does, it would fail just as much. And to make matters worse, you wouldn’t be able to update the software, because the software would be stuck in hardware.
Comparing the human brain to the problem only makes things worse. I believe humans are almost 99% hardware. We obviously have a body, we obviously have neurons and a brain, much like a microprocessor.
So what if you make a computer that’s more like a human? Humans are simple. We can’t accomplish much, and if we make a computer like us, it wouldn’t be able to do anything complex without tools.
SOFTWARE is like a human’s tools. If you give a human a broken hammer he won’t be able to do anything with it, just like if you give a computer a broken piece of software it will also be worthless. The difference is the computer doesn’t know how to recognise when something is broken.
The real silver bullet will happen when computers get the ability to understand what a particular piece of software is *supposed* to do, analyze the results of the program, and fix it as necessary until it gets the result the tool was supposed to produce.
Basically, AI will fix things. AI is the silver bullet. and IMO this guy is silly for thinking otherwise.
The lack of references suggests the author thinks he came up with all the ideas himself. This is sad, because it is old news. It has been, and still is, an active research area.
Take a look at: http://en.wikipedia.org/wiki/Communicating_Sequential_Processes
Edited 2006-06-14 01:34
-
2006-06-14 1:48 am rayiner
The question is, are the references omitted because the author chose not to cite them, or because the author isn’t aware of them?
Because as far as I can see from reading his site, his ideas don’t have any particular theoretical foundation — they’re various high-level features lifted from the world of hardware design without any unifying theoretical framework.
-
2006-06-14 2:43 am Mapou
My main knowledge of CSPs came from reading various articles and papers on Occam many years ago. While on the surface similar to COSA’s signal pathways, CSPs lack one essential ingredient: synchronized operations. CSPs are independent threads, AFAIK. In addition, COSA introduces into general programming such concepts as sensors, effectors (borrowed from AI), the automatic resolution of event dependencies, sensory and motor coordination, etc… None of these things are part of CSPs as far as I know. The closest things to COSA are synchronous reactive programming languages. I give several references on the site.
Louis Savain
-
2006-06-14 3:36 am Cloudy
CSPs lack one essential ingredient: synchronized operations.
The ignorance that this statement shows is breathtaking, coming from someone who is making strong claims about programming language design.
In addition, COSA introduces into general programming
nothing at all, in reality. The ideas behind “sensor” and “effector” have been around since Minsky’s work in the late 50s.
-
2006-06-14 5:00 am Mapou
The ignorance that this statement shows is breathtaking, coming from someone who is making strong claims about programming language design.
Are you saying that operations in CSPs are synchronous across communicating processes? If that is true, then I confess that I did not know that. BTW, I am not making any claim about programming languages. Nor do I claim to be an expert in all languages. My claim is strictly about the underlying software model.
The ideas behind “sensor” and “effector” have been around since Minsky’s work in the late 50s.
Are you implying that sensors and effectors are already used in general programming and operating systems? Or are you just being a know-it-all nerd with an axe to grind?
Louis Savain
-
2006-06-14 5:36 am Cloudy
I am not making any claim about programming languages.
You made incorrect claims about CSP.
Are you implying that sensors and effectors are already used in general programming and operating systems? Or are you just being a know-it-all nerd with an axe to grind?
Neither. I am merely observing that your claim to novelty is incorrect.
By the way, ad-hominem is not a particularly effective way of defending your ideas. You should try addressing the criticism. In doing so, you will come to understand the weaknesses in your arguments.
I find it funny that most of the people here criticizing the article didn’t appear to actually read it.
Anyways, the COSA model for software design is perfectly feasible today, albeit we would be emulating a physical synchronous machine (where all actions are computed within a single clock, i.e. the smallest unit of granularity, instantaneous in the real world) with a linear Turing machine (where actions are performed in order, and order matters).
The only possible point of failure in his programming model is the emulation layer. And because that layer will never change after it has been thoroughly tested and improved, there is a high probability of 100% software reliability.
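As far as I know nothing COSA-specific exists as code, but an emulation layer of that sort could be as small as this kind of sketch: every cell computes its next state from the frozen current snapshot, and all updates are committed together at the tick.

    using System;

    class SynchronousEmulator
    {
        // Each cell's next state is a pure function of the frozen current snapshot,
        // so the order in which cells are visited within a tick cannot matter.
        static int[] Tick(int[] current, Func<int[], int, int> rule)
        {
            var next = new int[current.Length];
            for (int i = 0; i < current.Length; i++)
                next[i] = rule(current, i);
            return next;   // committed all at once: the "clock edge"
        }

        static void Main()
        {
            // Toy rule: every cell becomes the sum of itself and its left neighbour.
            int[] state = { 1, 0, 0, 0 };
            for (int t = 0; t < 3; t++)
                state = Tick(state, (s, i) => s[i] + (i > 0 ? s[i - 1] : 0));
            Console.WriteLine(string.Join(" ", state));   // 1 3 3 1
        }
    }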
reply to Cloudy:
1) The real world isn’t synchronous
Duh!? Of course it is. Any action I take at any instant is immediately effected. Think Einstein and “action at a distance.”
2) Your characterization of hardware failure is inaccurate. Design-related hardware failure is directly correlated with design complexity.
You obviously didn’t read the article. Or chose to ignore its basic principles. Yes, hardware does get more complex, but most of our highly complex CPUs don’t fail because of physical reasons, but because of software errors.
3) You fail to address Gries’ observation: There is no automatic way of determining if a specification is good. That’s an infinitely recursive problem.
That is the whole point of COSA. There is no need for an algorithm to determine if a value is good. All values should be good before an operation is performed on them. And in the case where one isn’t, the whole system doesn’t come to a halt. COSA helps us determine how and when something can go wrong and prevent it from ever happening.
Here’s a simple task: show how one would use COSA to answer the question “How many prime numbers are there less than or equal to 2^128?”
Anything you can do with a Turing machine can be done with a behavioral machine
Anyways…
-
2006-06-14 6:42 am Cloudy
“The real world isn’t synchronous”
Duh!? Of course it is. Any action I take at any instant is immediately effected. Think Einstein and “action at a distance.”
You’re confusing causality with synchronicity. I suspect the author is as well.
You obviously didn’t read the article. Or chose to ignore its basic principles. Yes, hardware does get more complex, but most of our high complex CPU’s don’t fail because of physical reasons, but software errors.
Actually, you seem to be the one who didn’t read the article, which states “hardware faults are mostly physical faults, while software faults are design faults, which are harder to visualize, classify, detect, and correct.” This is an incorrect statement, but it is one made by the author.
“You fail to address Gries’ observation: There is no automatic way of determining if a specification is good. That’s an infinitely recursive problem.”
That is the whole point of COSA. There is no need for an algorithm to determine if a value is good. All values should be good before an operation will be performed on it. And in the case where it isn’t, the whole system doesn’t come to a halt. COSA helps us determine how and when something can go wrong and prevent it from ever happening.
And the way you know that the operation that was just performed is the operation that should have been performed is?
Here’s a simple task: show how one would use COSA to answer the question “How many prime numbers are there less than or equal to 2^128?”
Anything you can do with a Turing machine can be done with a behavioral machine
That’s an unproven assertion. Feel free to demonstrate its correctness, either by formal proof, or by solving the problem specified, using COSA.
.. comes up every few years as a radical new idea
Maybe I’ll actually read all that stuff one day.
Computer science theory isn’t an end in itself. Of course it’s far from being useless; yet, it’s nothing but a (very valuable) *tool* to help the poor engineers who have to face the complexity of the real world.
So, yes, COSA may be the shiny perfect solution in a wonderful theoretical world. Still, you need to show us a real example and we’ll decide for ourselves if it really helps us building better systems.
Lambda calculus and Turing machines are wonderful abstractions but in our real world we really have to sacrifice a lot of purity in order to build real things (e.g. the Haskell language could be described quite accurately as nothing more than three pounds of syntactic sugar over a tiny lambda calculus cake).
Before someone tries to implement COSA, we must have a reactive programming language, in order to show that reactive programming is good for the masses. I suspect that it is, and that’s the reason I have posted a long example of a PhoneBook application on their site (http://www.rebelscience.org/phpBB2/viewtopic.php?t=33).
In my opinion, what is needed to increase productivity is to have a language where the mechanism of signals is embedded into the language. Through many thought experiments like the above, I have found that I can write applications with much less effort. Of course that does not mean that the algorithm, as a fundamental unit of programming, is to go away. Signal-based programming is about forming algorithms as well… and chips have algorithms in them.
The main advantage of reactive programming, for me, is the high degree of decoupling of the application from the sequential execution model. A large part of the effort in today’s applications goes towards trying to put a non-sequential style of programming over a sequential style of programming.
All languages have the notion of signal:
- C callbacks,
- C++ functors,
- ML and Haskell combinators and tail calls,
- C# events,
- Java event listeners,
- Smalltalk blocks,
- Qt signals and slots,
- D delegates,
etc. are one and the same thing: signals and reactions. The degree of ease of support differs between languages, but the actual concept is the same: a block of code can output a signal, and reactions are attached to possible signals. I think it is time that some serious consideration was put into formalizing signal support in programming languages.
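To make the signal/reaction idea concrete, here is a minimal sketch using plain C# events (the names are made up): one block of code emits a signal, and any number of reactions are attached to it.

    using System;

    class Button
    {
        // The "signal": anything can attach a reaction without the button
        // knowing who is listening.
        public event EventHandler Clicked;

        public void Click()
        {
            var handlers = Clicked;
            if (handlers != null) handlers(this, EventArgs.Empty);
        }
    }

    class SignalDemo
    {
        static void Main()
        {
            var button = new Button();
            button.Clicked += (sender, e) => Console.WriteLine("logged");   // reaction 1
            button.Clicked += (sender, e) => Console.WriteLine("saved");    // reaction 2
            button.Click();   // one signal, every attached reaction runs
        }
    }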
Another thing that should be done in order to increase correctness is to maximize correctness checking at compile time. Although it is not possible in general to prove that an algorithm halts, it is quite possible to reason about the set of values an algorithm accepts and produces; this kind of check, I believe, would solve many problems and improve software quality quite a lot. In order to do so, I think that values must be treated as types in a programming language. Here is an interesting discussion: http://lambda-the-ultimate.org/node/1518
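A small sketch of what I mean by treating values as types (not a full compile-time proof, but the invariant is checked once and then carried by the type everywhere the value flows):

    using System;

    // A value that is valid by construction: once you hold a Percentage,
    // no further range checks are needed anywhere it flows.
    struct Percentage
    {
        public readonly double Value;

        public Percentage(double value)
        {
            if (value < 0 || value > 100)
                throw new ArgumentOutOfRangeException("value", "Must be between 0 and 100.");
            Value = value;
        }
    }

    class PercentageDemo
    {
        static void ApplyDiscount(Percentage discount)
        {
            // No validation here: the type carries the invariant.
            Console.WriteLine("Applying " + discount.Value + "%");
        }

        static void Main()
        {
            ApplyDiscount(new Percentage(15));      // fine
            // ApplyDiscount(new Percentage(150));  // rejected at the single construction point
        }
    }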
Do we really have to keep reading this stuff?
It’s bad enough that this same c**p is posted in the comments of every other article, but it’s now getting rather silly.
Can’t we block every article with “signal-based, synchronous” in it???
Apparently not, since Thom, knowing nothing about computers, thinks such crap is interesting and/or relevant.
Rather than just call it crap, could you debunk this article? It’d be more useful…
Admittedly it would be more useful to debunk it, but to be honest I really can’t be bothered. Just reading this and the comments he usually puts up advertising his “Rebel Science” website is normally enough to show him up.
Trying to debunk him is like trying to debunk religion…
He just seems like someone hoping to change the world with his pet idea, which in itself isn’t wrong, but he just seems to lack perspective.
Then don’t comment FFS.
I find the idea of radically changing the way software is written very compelling and important.
However, I think this guy’s propositions are way too shallow and don’t even begin to touch what is necessary to make software development into a true science. He’s looking at the wrong things.
fithisux
This has nothing to do with OSS. Why can’t you guys keep OSS out of every little article, please. This is about software development as in the code itself, not the business model.
Edited 2006-06-13 13:37
This guy’s idea sucks. I don’t have to explain why that statement is true for it to be true.
Writing fault tolerant code is hard. Replacing function calls with something that does the same thing in a more complicated manner doesn’t make things simpler or better.
Better error handling routines like try/catch in languages are a good thing (except please do it less stupidly than Java did). That by itself would accomplish more than this guy’s bad idea.
I text-searched through the article for the words << functional declarative HASKELL >> in addition to skimming. At university I studied Prolog and Lucid; this was in the old days. “Data Flow” is quite a seductive meme. I wistfully remember the articles anticipating that the Japanese “5th Generation” would rule the world. It’s a mental “pitcher plant” of computer science, but the author seems to think he personally came up with the idea “out of the blue”, which is EMBARRASSING!
Trying to debunk this article would be like trying to debunk: http://www.timecube.com/
It’s not that he says anything inherently wrong, but rather that he formulates his argument as propaganda, rather than as a thesis that can be analyzed and critiqued. His points are ultimately shallow, and he fails to offer a well thought-out solution to the problem, and instead just offers vague generalizations about what such a solution would look like.
There is also the problem that he’s a crackpot. Not only does he think the computer scientists of our time are heading down the wrong road, he thinks the physicists of our time are as well: http://www.rebelscience.org/Crackpots/notorious.htm
Of course, he is neither a computer scientist nor a physicist…
I do believe this has been talked about before on OSNews. So what’s new about it?
I think the way we develop software today has found a balance between productivity and the flexibility needed to truly do the job in a dynamic way.
I write Java for a living…it’s quite productive *and* rewarding. Sure, it *can* be complex but *some* power has to be left in the hands of the programmer to do his/her thing.
Why do you think, fundamentally, programming hasn’t changed much in about 25-30 years? Most of the concepts are the same and the developer is doing much of the same steps. It works!
That’s not to say we don’t have progress to make in the way we do things…I just don’t think it’s a radical shift one way or another. I’ve read articles like this a dozen times or more in the last ten years…and these “shifts” never happen – for a very good reason, they’re not feasible.
The fundamental missing assumption in his brain analogy is *time*.
It is at first attractive to say that we can broadly agree that the brain is a massively complex, largely reliable system; in fact, the detail of that is arguable.
It *is* capable of degrading its functionality quite gracefully, and is heavily redundant, but it is also prone to massive system failures and regularly produces small errors, in computation, in memory, in accuracy for example.
And we are not on Brain 1.0. We’ve had literally millions of generations of evolution and versioning that has produced our brains. Brains that don’t work properly continue to cause massive system failures – failed pregnancies and other abnormalities.
The process that produced the brain is iterative and incremental – testing and rejecting failures, accepting changes in behaviour (including minor bugs) if they do not seriously impact on the usefulness of the device. (Providing you broadly accept evolutionary principles, of course).
This sounds rather like our current software development processes, in fact – but with a longer schedule.
So, in essence, I would contend that his basic argument that we have an example of a highly complex system that is highly reliable, that has been built by a different kind of process is dubious, at best, and that the rest of his argument is therefore also flawed.
Moving further along, most of the “specification” of the brain is, of course, emergent, rather than a priori. For our systems to be useful, on the whole, we start with a definition of “what we want” and build towards it. Whether we do that in hardware or in software, taking a linear or a parallel approach, we need a means of determining whether the system does what we want, or not, and what we need to change to correct any discrepancies.
Or we build large numbers of systems that *might* do what want, and pick the closest fit (more like the brain analogy).
The latter is at best commercially, and at worst practically, unviable. The former is harder to achieve with the kind of system he describes than a linear system. That’s why we try to simplify to as a linear a system as possible.
If he *is* arguing for system building by massively parallel exploration of emergent behaviour, then I refer him to another analogy: the drug discovery process. For a number of years, this has been trying to take exactly that approach: run thousands of gels with millions of variants of compound, looking for something which happens to be useful. This has proven pretty ineffective, and there is a return to a more understanding-based approach to new-compound discovery. Of course, those databases of protein analysis and gene expression are vital input to the more “linear” approach to drug discovery, but something analogous to the traditional computational approach is proving to be the way forwards.
Can we have this on the front page too??
<sarcasm>Another great article from the same author…</sarcasm>
“Artificial Intelligence From the Bible”
http://www.rebelscience.org/Seven/bible.html