There is something fundamentally wrong with the way we create software. Contrary to conventional wisdom, unreliability is not an essential characteristic of complex software programs. In this 4-page article, Louis Savain will propose a silver bullet solution to the software reliability and productivity crisis. The solution will require a radical change in the way we program our computers.
Perfect for the ‘Cell’ processor…?
Objects in the real world behave synchronously and causally. Why should software objects be any different?
Because they’re not in the real world? Hahaha!
By that logic: “Recycle bins in the real world are made of plastic or metal. Why should software ‘recycle bins’ be any different?”
While reading the article it became more and more obvious that you can transform the author’s model into the model he tried to oppose. So they are basically the same.
But, made me smile
By “four pages”, he means four web pages. The printed material is quite long and involved, but there’s too much bluster and handwaving for my taste; I was hoping for something more concrete.
No other paper in the annals of software engineering has had a more detrimental effect on humanity’s efforts to find a solution to the software reliability crisis. Almost single-handedly, it succeeded in convincing the entire software development community that there is no hope in trying to find a solution. It is a rather unfortunate chapter in the history of programming. Untold billions of dollars and even human lives have been and will be wasted as a result.
This is completely illogical. He evades the argument and attacks it by saying it is ‘negative’. He also makes the assumption that productivity and quality are equivalent (“Untold billions…”) and proceeds with an emotional attack (“…even human lives…”).
It could be said that “No Silver Bullet” has a positive effect, making programmers more wary of dubious solve-all solutions like the one the article is about.
Why does every computer pundit seem to be completely incapable of fighting the urge to use some gross misrepresentation of the way the human brain works as support for their Amazing Hand-wavy Idea(TM)?
<the big long article read and snipped>
OK. Whatever. Now Dear Author, show me one piece of software that you have written using your great ideas that is 100% reliable and what not *and* that I can use right now.
Osho
Who’s going to be the first to mention to the author that every processor ships with a large number of bugs, which end up being worked around in the firmware?
One of you idiots laughed at the idea of hardware being reliable. How many transistors does a modern CPU have? Translate that into lines of code. Which has fewer bugs, a 100 million transistor CPU or a 10 million line program?
The CPU has fewer bugs. But comparing lines of code to transistors is stupid. Once you design one working transistor, you know that 100 million of them will also work. Each line of code, however, could be completely different from the one preceding it.
CPUs have fewer bugs because there is a certain limited set of inputs and variables that can be fully tested. For software, the variables are infinite, and many are entirely external to the application.
Another moron dismissed the possibility of software elements behaving “synchronously and causally”. Are you suggesting that if we can make them behave as such, we shouldn’t, merely because they are not in the “real world”?
No, more like, the real world is different from the software world so any analogy doesn’t hold that much weight.
By the way, calling people “morons”, “idiots”, and “clueless fellows” doesn’t strengthen your argument. It only shows that you have no valid points yourself and must resort to personal insults.
I agree with Osho, let’s see an example of this magic new paradigm in software development. I’ve spent enough time in a research setting to know that lots of things sound good in theory and then completely fail in practice.
The article wasted a half hour of my life the first time I came across it on comp.os.research in 2002, and I’m still amazed that it’s under discussion. (as found on google). http://groups-beta.google.com/group/comp.os.research/browse_thread/…
Feeding fuel to the fire
The more of this I read, the more off track this guy sounds.
His comparison of software to hardware is completely pointless. Just because a CPU has millions of transistors doesn’t say anything about its complexity. The hard part about making faster CPUs is shrinking the transistors, making them faster, and handling the heat dissipation. The logic of it is fairly well understood and not that complex.
If I make a binary adder, and then put a million of them onto a chip, that doesn’t make that chip significantly more complex logically than putting one on a chip.
We will get better at solving the reliability problem, but it won’t be because of some magic bullet. It will be because of better tools, more mature libraries, and more experience.
is solid. rather than building a house of cards where if a lower item fails the whole construction fails, one creates a construct that is able to bypass holes, replace parts without the need to shut down the whole construct and so on.
it’s not that dissimilar from a microkernel in concept. but taken so deep that it affects the way every program is designed.
this is a design metaphor, similar to object oriented programming. and also similar to the idea of command-line pipes. but it’s a shell script where you can restart parts while it’s running, with no other effect than a moment’s slowdown.
it’s the internet in code. buffers will be abundant…
“By the way, calling people ‘morons’, ‘idiots’, and ‘clueless fellows’ doesn’t strengthen your argument. It only shows that you have no valid points yourself and must resort to personal insults.”
Sorry for the name calling. That being said, no one posting had any valid points. Did anyone address the paradigm that COSA puts forward? No, they dismissed it out of hand because of the analogies used to describe it.
Also, your assertion that name calling shows that I have no valid point is a fallacy of logic. I may have called people names but it is entirely possible to (A) name call and (B) have a valid point at the same time. To suggest that the two are mutually exclusive is to engage in an Ad Hominem attack. You suggest that there is something objectionable about me therefore what I write is not valid. It is possible to be valid and offensive.
By the way, sorry for the offense.
I would suspect, in addition to the reasons already mentioned in other comments, that hardware has traditionally tended to be more reliable because in the past it was (almost literally) set in stone once the design went to the factory. With software, particularly with the internet, it’s a lot easier to shove it out the door before it’s ready (to meet marketing deadlines) and then release point releases to fix the bugs later. Now, with FPGAs and firmware updates becoming more common, I would guess that in some industries hardware reliability will start to take on a similar curve.
Having worked on a large microprocessor, I can say that hardware logic is not simple. Logic bugs happen all the time, and they can be very difficult to detect.
Also, the verification effort for detecting logic bugs is immense. There are many more verification people than architects. The reason microprocessors are relatively free of bugs is because companies can’t afford them. Creating new masks and fabbing new versions of silicon is very expensive. Microprocessor teams HAVE to do verification.
The author suggested relaying a message synchronously to dependent controllers, unlike the current event model we have that calls each listener in turn. I think he might have missed the fact that there is a reason for the linear programming. The message or object being passed around isn’t necessarily immutable/final, and each one of the controllers might change it, so the other controllers would miss that change. For his system to work in software, every message would need to be cloned, or each method of a message would need to be synchronized to avoid “reads” while “writing” problems, which unfortunately puts you back on the linear path anyway…
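To make the point concrete, here is a minimal sketch (made-up names, nothing to do with COSA’s actual code) of a mutable message handed to listeners in turn – the second listener never sees the original value unless you clone per listener:

    # A mutable event is handed to listeners one after another, so a listener
    # that changes it in place changes what every later listener observes.
    import copy

    class PriceChanged:
        def __init__(self, price):
            self.price = price

    def tax_listener(event):
        event.price *= 1.2          # "helpfully" rewrites the event in place

    def audit_listener(event):
        print("audit saw price:", event.price)

    def dispatch(event, listeners, clone=False):
        for listener in listeners:
            # Cloning per listener restores isolation, at a cost.
            listener(copy.deepcopy(event) if clone else event)

    dispatch(PriceChanged(100.0), [tax_listener, audit_listener])              # audit saw price: 120.0
    dispatch(PriceChanged(100.0), [tax_listener, audit_listener], clone=True)  # audit saw price: 100.0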
Electrons make for very simple messages, and one electron is essentially the same as any other. Objects in software are unique, even if they have the same values.
If anyone is interested in the most recent development in solving the “dependency problem”, they should look at one of the “dependency injection” frameworks being developed, like the Spring framework.
Great article none the less.
… until I see a moderately large scale system implemented both in this COSA and in a traditional language, together with a comparison between the two. Since the author of COSA seems to have quite a bit of time on his hands (given the length of the article and the posts in this forum), this should not be hard for him to do.
Just like hardware is made from individual parts – various types of transistor sets – software is made from individual parts – the individual commands of a language.
The complexity and unpredictability come in when lots of the smaller parts are combined into a large system. Isn’t this conceptually the same for software as it is for hardware – something many commenters here have disregarded?
It seems that there isn’t anything concrete and final from the author, but that doesn’t mean that he hasn’t got well-founded initial ideas. Even if nothing develops in the long term, it may not be due to the impossibility of it; it may be due to the lack of support, the lack of time, the lack of finance or a million other reasons.
But as far as I can see, the ‘concepts’ introduced by the author are very interesting, and I would encourage him to develop them further.
Why does he have such a problem understanding the Universal Turing Machine? It’s difficult at the bottom level, but in abstract it’s a very simple concept. Hence it being the general way computability is explained (rather than Church’s lambda calculus).
But no, he totally misrepresents it and then suggests his theory has a way of getting round these ‘flaws’. A UTM doesn’t deal with time; once an algorithm is established, all you have to do is speed up the hardware to the point where it appears to the “real world” that things are happening concurrently.
So, as mentioned in an earlier post: his technique can be realised in a UTM (as have so many other supposedly different machines before) and thus *is* a UTM.
If COSA is the next big thing and COSA-developed programs can be compiled to work reliably outside of the COSA framework (like the author suggested), then I would like to see a moderate COSA program compiled to a sequential platform and have the living daylights tested out of it. If this program proves to be bugless, then COSA deserves major funding…
@Ithika, but just because something can be realised somehow/ultimately as a UTM externally, it doesn’t necessarily mean that this would be the most efficient way to have something operate internally.
A UTM doesn’t deal with time, but its physical real-world implementation takes time to perform each step. This is about the pure abstract UTM, the real-world UTM, and the real real world (if it differs at all).
The real world is quantum superpositions, quantum entanglement, general relativity and all that – not abstract UTMs.
I think this is a very interesting theory, but there is one weakness. Because every COSA equivalent of a subroutine is active all the time, a COSA program would use as much memory as the sum of all automatic variables of all functions in a conventional program. That may not be a problem for the huge memories of today’s computers, but it might add up for really large applications.
Interesting article. Too bad it doesn’t have any content.
Half of it describes a well-known problem and the other half mostly remains silent on the magical solution proposed, besides generic and unproven claims.
There is actually more content on the home page of the project than this article.
There is an awful lot of truth in that article but a bit of faith gets in the way.
Synchronous programming is what a spreadsheet does. Tada! Cells containing basic logic listen to each other, and any change is relayed to listening cells without the “coder”’s intervention. Spreadsheets are built using orthogonal branching and emulated synchronicity.
Better, the system functions as soon as it is coherent. But the catch (as a chip designer pointed out here) is that it has to be coherent. Most spreadsheet designers don’t go too far in complexity but quickly face that constraint. Any real-world designer does too anyway.
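To illustrate what I mean by cells listening to each other, here is a rough toy sketch in Python (all names invented, and obviously nothing like a real spreadsheet engine): changing a value re-evaluates every dependent cell without the “coder” calling anything:

    class Cell:
        # A cell holds either a plain value or a formula over other cells.
        def __init__(self, value=None, formula=None, inputs=()):
            self.value = value
            self.formula = formula
            self.inputs = list(inputs)
            self.listeners = []                 # cells that depend on this one
            for cell in self.inputs:
                cell.listeners.append(self)
            if self.formula:
                self.recompute()

        def set(self, value):
            self.value = value
            self.notify()

        def notify(self):
            for cell in self.listeners:
                cell.recompute()

        def recompute(self):
            self.value = self.formula(*[c.value for c in self.inputs])
            self.notify()                       # keep propagating down the graph

    a = Cell(value=2)
    b = Cell(value=3)
    total = Cell(formula=lambda x, y: x + y, inputs=(a, b))   # total.value == 5
    a.set(10)                                                 # propagates by itself
    print(total.value)                                        # 13
    # Note: a circular reference would recurse forever here, which is exactly
    # the coherence constraint mentioned above.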
Networks are other systems that function that way, since they interconnect logic agents that listen to each other.
But as far as I can see, a spreadsheet is by no means bug free, and that’s without taking into account the program running it (Excel for most). I obviously exclude scripts that use a spreadsheet as data storage.
But it has undeniable advantages: when something fails in a pure spreadsheet (no script), looking at the point of failure tells you immediately where to look for the unhandled circumstance. It’s either the formula that is wrong, or the data it reacts to that is unmanageable (a zero in a division). But again, there is no guarantee that it is the next point up the chain where the logic problem is.
In Java, listeners try to mimic that, albeit in a cumbersome way (imho, and Java is the language I prefer by the way, no troll intended), and Trolltech’s Qt is based on objects subscribing to signals that they listen to and can change their state accordingly.
A slight leap of faith is the claim that this type of programming resolves latency problems. Spreadsheets don’t, but network-based systems (middleware comes to mind) implement both synchronous and asynchronous communications, and that’s because the hardware takes a lot more time to process data than a circuit does to react to electrons. But it is true that faster computers and chipsets make the problem fixable even in our world of algorithms.
Although the expression ‘silver bullet’ is bound to lead to disappointment, it is obvious that it is a great way to develop apps and manage complexity, although it by no means makes logic bugs a thing of the past: just forget to connect a listener to a relevant source of data and you’ll soon find out.
Now I’d love to find a development environment based on that principle (outside Excel I mean) that can be used to develop real-world programs. I thought Java Beans (not EJB) was it, but I got the impression (from the lack of books on the subject mainly) that it was not well covered and that Sun was not that interested after all. Did I get it wrong?
Any suggestion welcome.
And great article by the way but it suffers from being a profession of (justifiable) faith and trying to look like a subject study. Apart from that, great read.
Oh, and something else: the brain is the most difficult system to investigate, and its reliability (well, for day-to-day things) is also due to something way beyond our understanding – its ability to, to a certain extent, rewire itself according to experience.
He loves to talk about how reliable ICs are in hardware. But there really isn’t anything special about ICs. Of course, you put an adder IC into a circuit and it will always work. In the same way, I can write an object in any number of programming languages, arbitrarily complex, and as long as I have tested every possible combination of inputs and outputs it will always work.
This is what unit testing is designed for; I don’t see how this is such a fantastic new idea.
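For instance, something like this exhaustively checks a little 8-bit adder object against ordinary arithmetic for every possible input – once it passes, it will always work, just like the adder IC (a toy sketch, of course):

    def add8(a, b, carry_in=0):
        """Add two 8-bit values, returning (sum mod 256, carry_out)."""
        total = a + b + carry_in
        return total & 0xFF, total >> 8

    def test_exhaustively():
        # Bounded inputs: every combination can actually be checked.
        for a in range(256):
            for b in range(256):
                for carry in (0, 1):
                    s, c = add8(a, b, carry)
                    assert s == (a + b + carry) % 256
                    assert c == (a + b + carry) // 256
        print("all 131072 cases pass")

    test_exhaustively()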
The bit about automatic change notification for dependent cells is interesting, but it doesn’t make a damn bit of difference to bugs in an application. If you can guarantee that the dependent objects have been informed of the change, it’s still up to the programmer to make sure that the objects acted on that change and acted correctly.
A few working demos of this guy’s ideas would go a thousand times farther than his endless writing on the subject. History has shown that if you overengineer your concept from the start it will never be particularly useful. Look at the Hurd: they’ve been dreaming of a perfect kernel for over a decade.
> Having worked on a large microprocessor, I can say that hardware logic is not simple. Logic bugs happen all the time, and they can be very difficult to detect.
> Also, the verification effort for detecting logic bugs is immense. There are many more verification people than architects. The reason microprocessors are relatively free of bugs is because companies can’t afford them. Creating new masks and fabbing new versions of silicon is very expensive. Microprocessor teams HAVE to do verification.
Exactly. I have not worked on hardware projects, although I understand several concepts of CPU design at the theoretical level. Bugs make no distinction between hardware and software. Bugs are simply the result of human error, whether they occur while designing a logic circuit or writing a software application. Even though modern CPUs have the capability to update their firmware, that doesn’t mean they can fix ALL problems that could occur due to design mistakes, which can lead to a costly product recall. On the other hand, the impact of bugs on software is not so costly… when the Internet is there to ease the patching process. See how many times you had to patch a Sega Dreamcast game or a Nintendo GameCube one. They have very few bugs because the relevant companies spend more time ironing them out, since those systems do not have a patching mechanism. This may change, though: as new-generation consoles include both hard disks and Internet access, companies may become more lax.
I could summarize it as follows. The less of a hardware factor is involved in a product, the less costly it is to fix problems that slip into retail; therefore the more lax the company is in its verification procedures and the more bug-prone the product is. Pure and simple!
This guy really doesn’t understand what he is talking about with respect to avoiding the algorithmic model of computation. The responses above about universal Turing machines have the right idea: it simply doesn’t matter what model you are using; the problem of writing a program (in whatever form) to do a particular computation is hard.
So I do mathematical logic, and one of the things we do is prove things to show statements like this are flat out impossible. One of these results says that for any recursive way of writing programs there is no recursive predicate for halting. In layman’s terms this means no matter what model you use for your programs, if it can be run on a chip, you can’t catch every infinite loop. Basically, the programming model you are using can provide a boost or a hindrance to writing code, but ultimately writing computer programs is as hard as doing mathematics (number theory). Just as no magical new language can appear which makes it easy to solve every open number-theoretic statement, no new programming language can appear which makes all programming bugs obvious.
Think of it this way: no matter what you implement Mathematica in, the math is going to be tough to get right.
So apparently I didn’t realize that this nutcase attaches some special meaning to the word simulation. You can replace simulation in the mathematical statements I gave above with emulation and they are still true.
To make what he is saying clear: he is attaching a metaphysical claim to what it means to simulate something. In order to simulate a synchronous system, he is saying, the components of your system must be *actually* working simultaneously. So even though a modern Turing-style processor might produce exactly the same output in the same order as a UBM, he doesn’t want to call this simulation because the computer isn’t ‘really’ doing it synchronously.
Frankly this is just dumb. The user doesn’t give a fuck whether his application was *really* executed synchronously or in parallel; he just cares what the output is and how fast it went. It also flatly contradicts the common usage of simulate. One doesn’t need to actually be spinning air to simulate a tornado, nor should one need to be actually simultaneous to simulate a simultaneous event.
Speaking as an expert in the subject, this guy is a nutcase as far as the Turing machine and math stuff go. I don’t know what it is about my field, but this and Gödel’s theorem attract more cranks than almost anything else.
I like your spreadsheet analogy, but I would like to offer a different angle. In a spreadsheet, when you have an error it is immediately apparent that there is a bug, and you know that you have something to fix. It is because of its synchronous behaviour that you know this instantly. In a non synchronous universe you might only find out that your product is defective once it gets extensive use after it is deployed in many different environments.
It is because of this synchronous nature that your bugs will surface immediately, so that they can be fixed before releasing your product. I think it is this artifact of being synchronous that the author is claiming will help in releasing a bug free product.
Learn how Bell Labs programs distributed “pluggable” software:
http://www.cs.bell-labs.com/who/rob/lec1.pdf
http://www.cs.bell-labs.com/who/rob/lec3.pdf
http://www.cs.bell-labs.com/who/rob/lec5.pdf
http://www.cs.bell-labs.com/who/sape/gos/
Then try it out:
http://c2.com/cgi/wiki?LimboLanguage
http://www.vitanuova.com/inferno/net_download4T.html
The only good idea in the rant is “distributed pluggable software is good”, but who hasn’t thought that?
These COSA pages have been around for a while. I’m thinking of something similar myself (just click my name, not making much progress ATM though).
Take some complex program. Some numeric value is created and used in X number of places in the program. Now make some changes, and the programmer has to find all the places where this value is used, change them, and make sure that the entire ‘dependency chain’ around this value remains correct. Programmers are humans -> bug says “hello!”
One idea here is to guarantee some of these dependency aspects of large systems. Update a value -> everything depending on it gets automatically recalculated (that spreadsheet example was a good analogy). Embed this in the grain of the woodwork, so to speak, like the underlying architecture of a CPU is embedded in the woodwork when you’re writing assembly code. Make it impossible for a programmer to produce certain types of bugs.
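Something like the following toy sketch (hypothetical names, not this guy’s design) is what I mean by putting the dependency bookkeeping in the woodwork: dependencies are recorded automatically while a derived value is computed, so the programmer can’t forget to update one of the places where the value is used:

    _computing = []   # stack of derived values currently being evaluated

    class Source:
        def __init__(self, value):
            self._value = value
            self._dependents = set()

        def get(self):
            if _computing:                        # record who is reading us
                self._dependents.add(_computing[-1])
            return self._value

        def set(self, value):
            self._value = value
            for d in list(self._dependents):      # recalc everything that read us
                d.refresh()

    class Derived:
        def __init__(self, compute):
            self._compute = compute
            self.refresh()

        def refresh(self):
            _computing.append(self)
            try:
                self.value = self._compute()
            finally:
                _computing.pop()

    price = Source(100.0)
    with_vat = Derived(lambda: price.get() * 1.2)
    doubled = Derived(lambda: price.get() * 2)

    price.set(250.0)                      # both derived values refresh by themselves
    print(with_vat.value, doubled.value)  # 300.0 500.0
    # (Chains of Derived-on-Derived would need ordering; this only shows the idea.)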
Whether this guy’s design would work in real-life projects or not, I don’t know. But he’s got some good points – don’t ridicule him for trying, okay? BTW: another guy seems to have -something- working (download gave me an error though):
http://www.iki.fi/marcus.sundman/COSAed/
Frankly, I find it weird that the design of hardware and software is treated so differently. @TechStorm: Software bugs not so costly? Under what stone have you been hiding for the last couple of years?
* $500 million satellite lost due to a programming +/-1 error
* Dozens of airplanes grounded for hours when some engineer forgot to reset a buggy Win95 system used for scheduling flights
* See the size of the virus/anti-spyware business, fuelled by IE bugs etc.
* Admins working countless hours to upgrade/test/deploy the latest security patches, released daily
* Huge lost productivity because of bogged-down Windows systems (how much is an hour of your time worth?)
* Gloria Estefan concert cancelled due to a software bug, fans angry
* Early Airbus model crashed on demo – wasn’t that related to a software bug as well?
* The list goes on and on and on…
Unreliable software costs *heaps* of money, and yes, in some cases, even people die. Not often, but it happens. Just read the news carefully if you don’t believe me.
> Frankly, I find it weird that the design of hardware and software is treated so differently. @TechStorm: Software bugs not so costly? Under what stone have you been hiding for the last couple of years?
And for mission-critical software, we see excellent verification efforts, such as three separate design teams all solving the same problem. Unfortunately, Microsoft does not appear to be as rigorous. Given their monopoly, I guess they can afford to skimp on verification…
I think therandthem’s point about new ideas is really relevant and pretty much the main issue here. A new idea is suggested for replacing something we don’t even have a name for, and it’s being adamantly rejected by people who have faithfully followed the current system.
It seems that people are less bashing the ideas presented and more often bashing the analogies, which suggests that there is very little wrong with the logic; the problem is in the presentation.
People say they’d like to see a software system built using this model and THEN they’d give it a serious look. That’s what the whole darn world says about it. Some of you developers out there need to be willing to take a leap of faith and give it a shot before anything can be proven. If it doesn’t work, too bad. But if it does, it could be a paradigm shift in the way software is developed and lead to a whole new era of computing.
Too many great ideas have withered and died because they were too far from the comfort levels of people who have learned to do things a specific way. I thought one of the key concepts in geek/hacker culture is to find creative ways to do things better, not to shun new ideas as if software development models were a new religion.
It reminds me of the old saying “if all you have is a hammer then all problems are nails”.
There are surely problems that are better handled in a programming system that natively supports abstractions of sensors, actors, signals and such. And sure, most prominent programming languages are descendants of Fortran (formula translator) spiced up with object / interface / modularization techniques to help in a world with teams of programmers. I do buy that some problems are better solved with a BTM-style model approach, but I do not buy the overgeneralisation of trying to enforce it on all problems. See above.
Use the right tools for the right problems.
Well, I can’t speak to the usefulness of his system, but his statements about Turing machines and UBMs are just downright quackery. I’m a grad student in this area, and frankly he doesn’t have the slightest clue what he is talking about and sounds no different from all the other crackpots who come in and think they can solve the halting problem.
Let me give you an analogy. Suppose someone started their article by describing how their compression system would reduce the size of *every* file. Since we know this is mathematically ridiculous (you can’t map every combination of 65 bits into combinations of 64 bits uniquely), you wouldn’t be inclined to trust his other ideas on compression, would you? The situation isn’t any different here, except the math he has chosen to misunderstand is less well known.
Look, he starts out his paper with a massive misrepresentation of the No Silver Bullet paper. What the paper establishes is that you will have the problem in *any* model of computation, which includes anything that can run on a processor, including his system.
Besides, his idea sounds a great deal like Smalltalk. It isn’t new or earth-shattering.
I actually sat down and wrote some code based around ideas similar to COSA about six months ago. There are some non-trivial problems trying to do this sort of thing though. Memory and performance were my chief concerns.
The way I was working things was slightly different. I had a list of working cells (COSA calls them effectors) and data cells. Every working cell had a number of terminals (data cells only had one terminal, to indicate a change in value) which connectors (COSA calls them synapses) could be attached to; information on connectors was held in a list (the Connector List). For every clock tick the kernel just ran down a list (the Scheduled List) of working cell/terminal ID pairs that had to be fired during that tick; if those cells decided to fire a terminal, the cell/terminal ID pair was placed on a second list (the Output List). At the end of the clock tick all cell/terminal ID pairs on the Output List were checked against the Connector List, and if a match was found the destination cell/terminal ID pair stored in the connector was placed on the Scheduled List. This worked pretty flawlessly and is incredibly simple (although I’d probably switch to trees if I went back to the project).
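For anyone curious, the tick loop boiled down to something like the sketch below (this is a from-memory reconstruction with made-up cell behaviours, not the original code):

    # connectors: (source cell, source terminal) -> list of (dest cell, dest terminal)
    connectors = {}

    def connect(src, dst):
        connectors.setdefault(src, []).append(dst)

    # Each working cell is just a function here: given the terminal that fired
    # on it, it returns the terminals it fires in response (possibly none).
    cells = {
        "A": lambda terminal: [0],                          # always passes the signal on
        "B": lambda terminal: [0] if terminal == 1 else [],
    }

    connect(("A", 0), ("B", 1))

    def run(scheduled, ticks):
        for tick in range(ticks):
            output = []                                     # the Output List
            for cell, terminal in scheduled:                # the Scheduled List
                for fired in cells[cell](terminal):
                    output.append((cell, fired))
            # End of tick: match outputs against the Connector List to build
            # the next tick's Scheduled List.
            scheduled = [dst for pair in output for dst in connectors.get(pair, [])]
            print("tick", tick, "-> next scheduled:", scheduled)

    run([("A", 0)], ticks=3)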
However the book-keeping here is a very clear overhead. Each cell/terminal pair was a 32-bit integer, so 16 bytes/connector. However every working cell also needed information stored about which data cells it was attached to, which meant another 16 bytes/cell (Since I only implemented simple logic operations. Anything that needed to store an internal state, eg Waiting on 2 input terminals, would need more). Now that doesn’t sound like much, but over the course of even a simple program (I was playing with adders) it starts to add up.
It wasn’t any big surprise that performance took a hit, coming in at about 5 times slower than a comparable native code algorithm, but it was actually a lot faster than I thought (It could have been improved a lot by not using such primitive working cells).
On the plus side there were some very cool things that even my simple testbed showed me. Blocking seemed to be a thing of the past since the order of events was so totally predictable (Although it would still be required for networks with unpredictable run times). It was quite fun branching the running network to run different parts of a calculation in parallel (At least on the virtual clock). This is where this type of system really gets quite odd (In comparison to traditional programming). Since there is no real thread of execution branches of the network can just come to a complete stop. This sounds useless at first, but consider this scenario. I have value X, which is going to have a calculation performed on it, but I know that some ticks away I’m going to need 2*X. A trivial example, but in C I’d probably wait to calculate 2*X until just before it was required. With the cells I can just branch off a sub-network to calculate 2*X and die, whilst at the same time continuing with the calculation, secure in the knowledge that as long as it takes less ticks to calculate 2*X than it does for the original network to reach the point where it needs the value of 2*X then everything will be perfectly fine. The closest traditional technique would be threading I think, but without needing blocking.
That’s around about the point I moved on to other things. To write a really complex program I’d need to create some kind of language that represents the network (I can understand why the COSA folks like the idea of a graphical representation). Hardware operations are still going to have to be done by some sort of microkernel (Virtual memory isn’t a problem, data cells could be marked not-present with a simple flag). User permissions ARE a problem however. Do you maintain permissions over a whole network? Each cell in a network? Only on the data cells? What mechanism do you use to make a network run with higher priority (Shutdown the reactor NOW!!!)? Big questions to be answered.
***
I really like the idea (Don’t know how close to COSA I was, but it was plenty of fun), and I think with specialist hardware it could work very well indeed. Problem is that the hardware isn’t actually there yet (Cell processors are interesting, but aimed elsewhere I think). What would be very neat would be to have a chip on which the working cells are hardware, on which those cells could be altered to different primitives based on the demands of the Scheduled list. That’s something for the future :>
What this guy presents sounds like a possible approach. Just like OOP, functional programming or whatever is a good approach – for SOME problems. A good approach would be to apply it to some problems where it fits well, and then try to extend it to others. My guess would be that far from all problems can be well modeled as synchronous logic.
By that I don’t want to say there is no silver bullet. Other areas HAVE found silver bullets, like CMOS transistors for VLSI chips. But I don’t think the silver bullet for programming will be a certain program model, but rather some concept that can be applied to all models.
BTW, spreadsheets are NOT synchronous logic. Synchronous logic consists of units which act on some uniform timing, while asynchronous logic works with individual, independent timing. The difference is best seen when data is passed around in a ring of circularly connected units, or when data races along a split-then-joined path. Spreadsheets, on the other hand, do not have a notion of timing at all. They are based on functional programming, which is why they do not allow circularly connected cells (those would correspond to infinite recursion in FP), and split-then-joined paths never show race conditions (functional programs automatically wait until the slowest path has finished).
The spreadsheet analogy is really great – and it also helps point out flaws in the original idea
> I like your spreadsheet analogy, but I would like to offer a different angle. In a spreadsheet, when you have an error it is immediately apparent that there is a bug, and you know that you have something to fix. It is because of its synchronous behaviour that you know this instantly.
Wrong. You make the implicit reverse assumption that if there’s a bug, you can see the error instantly. Usually you can’t, and to find the error you have to walk through many spreadsheet formulas and associations before you find the bug. Nothing different from usual debugging :)
If, using any “algorithmical” program, you get the dialog “This program has performed an illegal action and will be closed”, then it is immediately apparent that there is a bug in it, too.
More about spreadsheets.
Another issue with spreadsheets (and with the author’s idea) is cyclic relations. A spreadsheet usually gives you the chance to evaluate them step by step, or to run N cycles, or something similar; I think similar functionality would have to exist in COSA. If we evaluate such a cyclic dependency on an idealistic parallel (synchronous) system, we are dealing with feedback – either positive or negative. If the feedback happens to be positive, the entire system will oscillate (if not crash). This is similar to analog circuitry – the entire circuit has to be designed carefully to disallow this kind of effect (unless it is done intentionally, of course). It is not enough to take blocks and connect them; you have to know the blocks’ temporal, transitional and other behavioral characteristics.
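A quick toy example of the feedback problem (invented formulas, nothing COSA-specific): two circularly dependent “cells” evaluated for a fixed number of cycles, the way a spreadsheet’s iterative calculation mode does it. With a loop gain below 1 the values settle; above 1 they run away:

    def iterate(gain, cycles=20):
        x, y = 1.0, 0.0
        for _ in range(cycles):
            # x depends on y and y depends on x: a circular reference,
            # stepped a fixed number of cycles like a spreadsheet would.
            x, y = 1.0 + gain * y, gain * x
        return x, y

    print(iterate(gain=0.5))   # gain < 1: settles toward a fixed point
    print(iterate(gain=1.5))   # gain > 1: runs away - the feedback problem above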
The advertised system (yes, the entire article reads like an advertisement for some magic product) can be usable for controlling technological processes, for example. Many years (15) ago I worked for a company providing automated control systems – I remember putting together logical blocks with predetermined inputs, outputs and behavioral characteristics :) We used an interpreter which executed one step for all defined objects (regulators etc.) in our system – exactly how the author imagines emulating his COSA. This worked extremely well on i8080-based controllers. Of course, the primary blocks were coded algorithmically. (I fail to understand how COSA can avoid this.)
We got stuck after we started to give a usable user interface to some of our control systems. We attempted to expand our block-based [state machine] ideology to the user interface – it was possible, but very limited. After struggling with this for some years we started coding the UI in C++. The technological control of course kept the previous design.
What I wanted to say with this is that all these theories (OOP, data driven, state machines etc.) tend to work well in some specific area, but they start to fail when the user interface comes into play. I mean a really usable interface, where the user can do whatever he wants, whenever he wants, without errors or strange responses from the system. There will be so many different runtime dependencies and event sequences created that no design-time magic can predict them all.
It would be interesting to see how a COSA-based system would handle, for example, a real-time order-based warehouse application where the operator starts filling order rows not from left to right but from top to bottom (first column, then the second column from bottom to top, and so on), with data entered in any column recalculating many others. Our normal (algorithmic) application went crazy (it was a minor design flaw in a situation never expected to happen :).
In hardware this just can’t happen – no memory controller can decide that “this time I’ll give data to the CPU in swapped byte order” or anything similar.
Look, the issue of no silver bullet is not just someone’s opinion. In a very real and precise sense it is provable.
We can introduce languages and models which make many mistakes more obvious and are easier for us to handle, but we can’t ever hope to eliminate errors entirely.
Why? Because ultimately programming is just as hard as mathematics. If you want to write code to factor a number or search efficiently for the next Mersenne prime, you are essentially relying on the truth of a mathematical theorem. Namely, that your search/representation of a factor/Mersenne prime is guaranteed to be equivalent to the standard definition.
As a trivial example, suppose you are asked to provide a function F(n) which returns the first even integer m>n such that m is not the sum of two primes, and 0 if no such integer exists. Now, if one believes that such numbers exist, the correct way to program the function may be just to start searching through even numbers greater than n. On the other hand, if the Goldbach conjecture is true, every even number is the sum of two primes, and the code you just wrote is actually an infinite loop. In other words, writing this function correctly depends on solving a famous unsolved number-theoretic conjecture: if it is true you always return 0, if it is false infinitely many times you search, and if it is false finitely many times you keep a pre-defined list to check. This may be a rigged example, but it emphasises the point: you might be able to get rid of simple errors with a good enough computational model, but you can always find arbitrarily hard errors.
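Written out naively (just a sketch to make the example concrete), the function looks like this – whether the loop ever terminates for a given n is exactly the open question:

    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    def is_sum_of_two_primes(m):
        return any(is_prime(p) and is_prime(m - p) for p in range(2, m // 2 + 1))

    def F(n):
        """First even m > n that is NOT the sum of two primes (never returns
        if no such m exists, which is where Goldbach comes in)."""
        m = n + 1 if (n + 1) % 2 == 0 else n + 2    # first even number above n
        while True:
            if not is_sum_of_two_primes(m):
                return m
            m += 2

    # If Goldbach's conjecture is true, F(10) loops forever and the "correct"
    # implementation would have been to return 0; if it is false, this naive
    # search is the right thing to do. Nobody knows which.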
The point the author really doesn’t seem to understand is that this is an unavoidable complexity in any computational system, where computational system includes any device whose operation can be specified in some finite number of rules – including ICs, synchronous processors, or whatever other object we can simulate on a PC.
Dude, programming isn’t “just as hard as” mathematics. Programming, deep down at its very core, IS mathematics. It’s an extension of number theory, formal logic and systems, set theory, and friends.
Think of it this way (and mind you, this is oversimplifying grossly). Your input data are given axioms. Each expression in the program that manipulates a datum is a theorem. The result of execution (the new overall state of the system) is a proof (or disproof) of that theorem using your axioms (data). Since the theorem is now proven, it is equivalent IN PRACTICE to an axiom. So the theorem and its proof can now be used in proving new program statements (theorems).
In COSA, the environment is the set of all your axioms. A sensor cell generates a theorem sent across a synapse to an effector to form a proof (using the environment). Only, ideally, it’s all instantaneous because it’s all connected. Thus if it halts (a disproof), or loops indefinitely (a tautology?) it should halt quickly.
Gödel and friends basically proved that any system powerful enough to describe, manipulate, theorize and prove things about itself can never be totally bug-free. Ever.
So, we introduce graceful handling of failure, another multi-billion dollar industry. I wonder how COSA would do this?
> Dude, programming isn’t “just as hard as” mathematics. Programming, deep down at its very core, IS mathematics. It’s an extension of number theory, formal logic and systems, set theory, and friends.
Dude, that’s basically what the logicnazi said. You both seem to be on the same track (which is a good track).
Software engineering is hard, and systems are complex, because we are trying to solve real-world problems, and understanding that domain and conveying its intricacies to the computer is not a simple thing to do, especially when one is after a scalable solution to a very complex problem.
Some things to note:
1) The above is complex; I don’t care how you write your software. The tool can make it easier or harder, but it will still be complex.
2) Splitting things up like this seems a little bit overblown. Nearly every single line of code in a system becomes a branch point in this guy’s plan (i.e. either an observer or an observable). Serious performance implications.
3) The guy is somewhat nuts. He’s effectively saying: “My idea is the best. Everybody will eventually recognize this. Even CPUs will be created for it. The entire industry will go this way.” I think that’s quite a bit of hubris coming from a guy who hasn’t shown us a piece of working software.
Overall, I think it’s a waste of time. As has been stated numerous times: show me a reasonable sized project that’s been completed with it, with statistics on productivity (how many people, how long), as well as how many bugs.
…in his insight into the problem of software complexity: a lot of it comes from the synchronization of various internal state – updating data structures after a transformation pass (say, in a compiler), all sorts of management state, etc. If COSA helps make that easier to manage, it will be a step forward (though I doubt it will be a silver bullet).
The problem is in how he tries to introduce it; challenging Turing will not make you more convincing, especially under such a vague and not-mathematical setting, using only abstract concepts.
The part where hardware design is taken as an example is also simply a joke: the ‘synchronizing’ behavior of hardware does NOT happen magically, as he tries to depict. Wires carry signals automatically, but the wires have to be PLACED RIGHT during the DESIGN, which is no different from the need to write correct code when programming software. When done right, hardware OR software will behave as planned.
The illusion that hardware is stable and software is not (which of course many will not buy anyway) exists because the complexity of hardware design does not lie entirely in its _functional_ complexity, but in many other factors such as layout, verification, etc. In terms of pure complexity of design and contained logic, software is WAY beyond hardware.
I remember RMS said something about this too: how software, due to its “precise” nature, has an advantage in that respect over other engineering disciplines, e.g. building a physical object, where you need to combat all sorts of physical imprecision. Software does not need to deal with such issues, and makes up the gap with its sheer complexity in design.
> I like your spreadsheet analogy, but I would like to offer
> a different angle. In a spreadsheet, when you have an
> error it is immediately apparent that there is a bug, and
> you know that you have something to fix. It is because of
> its synchronous behaviour that you know this instantly.
> Wrong. You make the implicit reverse assumption that if there’s a bug, you can see the error instantly. Usually you can’t, and to find the error you have to walk through many spreadsheet formulas and associations before you find the bug. Nothing different from usual debugging :)
> If, using any “algorithmical” program, you get the dialog “This program has performed an illegal action and will be closed”, then it is immediately apparent that there is a bug in it, too.
You are not thinking synchronously. In an “algorithmical” program you get the dialog “This program has performed an illegal operation” only if the code with the bug in it gets executed. In a very complex piece of software you might not see that bug until a year later. In a spreadsheet all the cells are running at the same time, along with all their dependencies.
The spreadsheet is just an analogy, by the way. A spreadsheet program can easily be written to show you all the cells with errors in them, so you don’t have to look all over the spreadsheet for cells that look like “ERROR!”. If the spreadsheet program knows to show the “ERROR!” string, then it also knows where the errors are.
I would like to try whatever he is smoking
A little background on the author is obviously needed:
http://users.adelphia.net/~lilavois/Seven/bible.html
and
http://pharyngula.org/index/hmm_a_little_knowledge_really_is_a_dang…
No other comments.
Definitely there is more than one error of fact here… The processor is also a UTM, so the UTM cannot be the problem causing the complexity…
Any synchronous system of n machines doing m calculations can “trivially” be transferred to a single machine doing the n×m calculations one at a time.
In theory, any system not relying on random/“quantum” operations can be emulated by a UTM – even the internet at any given point in time. It could, theoretically, be checked for any output given any input. A system of (sorry) n UTMs sitting on M communication lines is still a UTM.
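For example, a handful of “synchronous” machines can be emulated on one sequential machine by computing every next state from the current states and only then committing them, something like this toy sketch (machine definitions invented for the example):

    def step_all(machines, states):
        # Compute every machine's next state from the *current* states first...
        next_states = [m(states) for m in machines]
        # ...then commit them together: that is all "synchronous" means here.
        return next_states

    # Two toy machines: one counts, one mirrors the counter from the last tick.
    counter = lambda s: s[0] + 1
    mirror = lambda s: s[0]

    states = [0, 0]
    for tick in range(3):
        states = step_all([counter, mirror], states)
        print("tick", tick, states)   # [1, 0], then [2, 1], then [3, 2]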
This doesn’t mean synchronous behaviour cannot act as a “paradigm” (hmmm, nice word), just as objects are just a p’digm. No hardware guy has ever spotted an object in the wild.
> A little background on the author is obviously needed
Of course it’s needed! How else could we make ad hominem arguments left and right?
> BTW: another guy seems to have -something- working
> (download gave me an error though):
>
> http://www.iki.fi/marcus.sundman/COSAed/
Ouch! The versions that are available for download are *awful*! I really mean it. Even the source code is ugly! I have better versions, but none that would work well enough for me to put it up for download. The current state of the source code is not something I want to share with other people. 😉 Unfortunately I haven’t had time to work on this project for a long time now, but eventually I will resume my research in this area and then I’ll probably release a version of COSAed that is actually usable. Until then I recommend that people make their own implementations.