Saw this over at BSDForums.org. In this final article of the series, JavaWorld’s Jeff Friesen completes his exploration of threads by focusing on thread groups, volatility, thread-local variables, timers, and the ThreadDeath class. Also, the accompanying sidebar, “Finalization and Threads,” explores Java’s finalization activity in the context of threads. [Part 4] [Part 3] [Part 2] [Part 1]
“Java Threads considered harmful” (from the JavaCSP guys).
Threads & long thesis about SW v HW par programming
After reading even a small amount of this Java thread code, I know why I prefer to do par programming with CSP langs like Occam or HandelC or even a Hardware Description Language. I believe C# is essentially the same as Java on this. There has been much criticism of Java Threads (& I would imagine C#) in UK academic circles (U of Kent, Oxford) for not being thread safe, esp when used with Swing, so much so that even Sun was forced to admit it. Their answer was JavaCSP & JavaPP, which are provably thread safe as they are based on CSP semantics. This is important if Java (or C#) is ever to be used to fly an airplane or control any cpu whose failure could cause death.
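To make that concrete, here is roughly what the CSP style boils down to: two processes that interact only through a blocking rendezvous channel, no shared variables, no locks to forget. This is just a sketch in plain Java, using a SynchronousQueue to stand in for a zero-buffered CSP channel; it is NOT the real JavaCSP/JCSP API (which has its own Channel & process classes), and the class and variable names below are made up for illustration.

    import java.util.concurrent.SynchronousQueue;

    // Sketch of CSP-style message passing in plain Java.
    // A SynchronousQueue behaves like a zero-buffered CSP channel:
    // put() blocks until a matching take(), i.e. a rendezvous.
    public class CspSketch {
        public static void main(String[] args) throws InterruptedException {
            final SynchronousQueue<Integer> channel = new SynchronousQueue<>();

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        channel.put(i);      // blocks until the consumer reads
                    }
                    channel.put(-1);         // sentinel: end of stream
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    int v;
                    while ((v = channel.take()) != -1) {   // blocks until the producer writes
                        System.out.println("received " + v);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }

The point is that the two threads can only rendezvous; there is no shared state to corrupt, which is what makes CSP-style designs so much easier to reason about (and to check formally).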
There seems to be a fundamental difference in the way SW & HW people think about par/seq programming. In HW, par is damned easy and seq is a damn pain. In SW it’s the other way around, except that par can also be life threatening when it fails.
In SW, seq is what everybody knows as just plain code, & par is simulated by time-sharing the par threads and synchronising & communicating results. It seems as if most of the SW par langs today can suffer from variations of deadlocks that you just don’t hear about in HW. They are mostly based on the early work of Dijkstra & Hoare from the 60s & 70s, ie semaphores, monitors, mutexes etc.
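To make those “variations of deadlocks” concrete, here is the textbook Java example: two threads grabbing two locks in opposite order. Everything below (class & lock names) is made up for illustration.

    // Textbook lock-ordering deadlock: each thread grabs one lock
    // and then waits forever for the other. This program usually hangs,
    // which is exactly the bug being demonstrated.
    public class DeadlockSketch {
        private static final Object lockA = new Object();
        private static final Object lockB = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (lockA) {
                    sleep(100);                   // widen the race window
                    synchronized (lockB) {        // waits for thread 2
                        System.out.println("t1 got both locks");
                    }
                }
            }).start();

            new Thread(() -> {
                synchronized (lockB) {
                    sleep(100);
                    synchronized (lockA) {        // waits for thread 1 -> deadlock
                        System.out.println("t2 got both locks");
                    }
                }
            }).start();
            // Fix: always acquire the locks in one global order (A before B).
        }

        private static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

Nothing in the language stops you from writing this; you only find out at run time, if you are lucky.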
In HW, par is just a natural instance of thousands of modules which may be nested. At the bottom are gates & flops wired together. Signals flow from gate/flop to other gate/flop. The modern HW guy doesn’t express designs in gates/flops anymore though; they write lots of assignments in a C- or ADA-like language (C/Verilog==good & ADA/VHDL==bad IMO). Note that all variables are only assigned (written) once. The EDA tools turn these assignments into potentially thousands of gates or flops each. So a million-gate chip might be expressed as 10K-100K lines of HDL code.
When a HW guy includes seq code, this has to be synthesized by hand or by a behavioral compiler into something that bears little resemblance to the seq code. Seq code can allow variables to be assigned repeatedly as in SW langs, and allows loops, if-then, procs etc. If the seq code is truly general, then it must be instanced as some sort of cpu that just runs the compiled seq code, only it’s not that easy to just plonk down a cpu wherever seq code is mentioned. So this part is managed by hiding the seq code as perhaps an instance of a ppc/arm/mips core, & moving the actual seq code, now expressed as a compiled C/asm module, outside the HDL EDA tool.
In the CSP model (Communicating Sequential Processes, Tony Hoare, Oxford U), the idea is to constrain par so that it models processes in a natural way. Just as a process can model a small piece of HW, so CSP, JavaCSP, Occam, HandelC can all be used to model processes, & processes can model HW; therefore all CSP-model languages can be used to model HW & are therefore HDLs (created by & for SW guys). But if you really want to model HW easily, just use Verilog or VHDL, it’s a lot more natural for HW things.
CSP was just so much unreadable academic math until it was made into a practical language, ie Occam. Occam did not sit well with SW or HW people due to its strange lexical style & other constraints, so it was reborn as a C dialect, HandelC, that just adds the Occam par & channel features. Remember that 40 yrs ago Tony was a HW guy before becoming the famous SW Sir that he is today, so CSP was really based on practical HW all along.
So if all CSP-model langs can model HW, then HW HDLs can also model SW, esp VHDL (ie ADA++) & newer versions of Verilog (C–). Normally par programs expressed in HDLs are shipped as ASICs (or FPGAs) & the HDL source code is very securely locked up. But what if the delivery is the HDL source (compiled into a binary of course) and that runs on the end user’s PC? Then it’s just a par programming language that is absolutely thread safe but requires a simulator (a few $K) to run the program (you could call it the par HDL runtime engine).
So how does Occam get executed? Well, it gets compiled into seq threads that are executed by a runtime kernel (you could call it the par PDL runtime engine). At one time it used a special uP called a Transputer to execute Occam assembler code; this chip included the runtime engine in HW, so par programming was both natural & had very low overhead. These days, that HW scheduler engine is emulated by a SW kernel that can run on most recent uPs.
So now you have the idea that SW & HW are somewhat interchangeable, & that the result can be as fast as an ASIC/FPGA or as slow as a simulation. This is much clearer for things like MPEG engines or 56k modems, which exist as either SW or HW.
So what goes on inside the simulator (or HDL runtime)? Obviously it’s a little operating system for time-sharing lots of tiny threads and scheduling threads to run as their inputs become available, using an event wheel. If your favourite par project can be expressed in any of these CSP langs or even a HDL, you also get both safety & possibly huge speed-ups if synthesized to HW. Although the HDLs are usually used on the HW side, there is nothing to stop SW guys from using HandelC to build HW or SW except too many $, & HW guys often use C to model simple HW for its pure speed & low $.
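For anyone curious, the core of such an event wheel is tiny. Here is a rough sketch in Java (class & method names are mine, and a real HDL simulator also has delta cycles, sensitivity lists & signal fan-out that this completely ignores); it is just a priority queue of (time, action) pairs and a loop that advances simulated time.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Toy "event wheel": a priority queue of timed events.
    public class EventWheelSketch {
        record Event(long time, Runnable action) {}

        private final PriorityQueue<Event> wheel =
                new PriorityQueue<>(Comparator.comparingLong(Event::time));
        private long now = 0;

        void schedule(long delay, Runnable action) {
            wheel.add(new Event(now + delay, action));
        }

        void run() {
            while (!wheel.isEmpty()) {
                Event e = wheel.poll();
                now = e.time();        // advance simulated time to the next event
                e.action().run();      // "execute the process"
            }
        }

        public static void main(String[] args) {
            EventWheelSketch sim = new EventWheelSketch();
            sim.schedule(10, () -> System.out.println("clock edge at t=10"));
            sim.schedule(5,  () -> System.out.println("reset released at t=5"));
            sim.run();   // prints the t=5 event before the t=10 event
        }
    }

Scheduled actions would themselves call schedule() to model signals propagating to downstream gates; that is essentially all a logic simulator’s inner loop does.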
To be fair, VHDL is a higher-level, more abstract language since it is just ADA with extensions to support HW, and from a HW point of view it just looks foreign, 2x the typing. Verilog was more of a quick hack to make a C-like language useable for HW design & is very poor on abstraction, so it is a poor SW development lang.
http://www.accellera.org/
The 2 langs are now under the same .org to improve both; Verilog is sucking up a big chunk of C++. VHDL is more used for system modelling that contains HW & SW processes. You will all know that ADA is the only lang considered mission-critical safe for the gov & space industry.
Now I guess I’m in deep doo dah
I have been programming 100% concurrent applications in Java for 1 year. My apps have always had blazing speed, even more so on SMP systems. So please, don’t tell me that Java threading is bad….
What I am suggesting is that some forms of par SW & HW are so closely related as to be interchangeable, but only if the SW is developed with CSP-type languages. It can easily be demonstrated that one can be transformed into the other within limits, and as I said, Java can be made utterly thread safe with the CSP classes that are in JavaCSP.
But clearly most SW & most HW people don’t want to be in the same boat or even speak a common language, & that is a shame as much more performance can be achieved for some systems by being on both sides of the fence. The HW people are just as anxious not to use or know C/Java either.
As for your work, no doubt your apps are blazing fast, but safety is hard to prove if the system never appears to go bad. JavaCSP can be put through formal checks to prove that it is safe. If you are not doing life-threatening SW, a restart may be fine & you needn’t care about formal checks.
In the industry that I am in, nothing gets shipped till it’s been verified & formally checked to death, lest the company lose a few mil $ & a few months to delays. Remember the Intel Pentium bug?
Would you get on an airplane with avionics SW written in Java with extensive use of threads? I sure wouldn’t and nor would Sun executives. The gov wouldn’t allow it either.
PS I really wish people would not be anon unless you really don’t want to be found in a web search.
Well, I have to admit that I haven’t done much concurrent programming with Swing, and all my fast apps were done without a UI running (backend apps). I have also checked my code for any bugs and I have tested every case where:
1. exceptions would occur
2. threads would stay asleep forever
3. any other bugs
While I was checking, I found several ways to speed the whole thing up.
So if my code is found to crash, it is either the JVM’s fault, the fault of the application which uses my code, or a hardware/OS fault.
I would gladly put my code into the JF-35 fighter on the condition that the underlying code and the application code get tested as much as my code was tested.
I guess it all depends on the programmer. If the programmer is lazy and doesn’t have any talent, there will be buggy and slow code like WebSphere Commerce Suite for AS/400 (inside info from IBM, hehe).
PS: I’m too paranoid to not be anonymous
I can’t really comment about your particular env as I would never do Java programming that couldn’t also be done in C++ or Verilog etc. I use Java only on occasion because it forces some restrictions on ptrs (removes ’em) & makes for quick & dirty apps that will be sure to be correct in less time than the C++ equiv, albeit slower than C. No wars please on that. I do follow the threading stuff as that relates directly to a language being usable for par programming & even potentially HW design (perhaps on a quick & dirty basis).
Now I am very (extremely) surprised if you would be submitting java code for any Fighter as that is clearly a life/death situation possibility. You jest surely. Read the Sun user clause if you said yes.
Fighters seem to crash rather often, as happened in Ukraine a few days ago. Was it pilot error, mechanical failure, or avionics SW or HW? It seems in that case it was engine failure, but did that engine have a sophisticated engine-control SW package? I wouldn’t be surprised if it did. I bet a US plane would have cpu engine control. Anyway 70+ people died, but not the crew.
Also IIRC, only certified experts are allowed to write SW for these situations (and a lawsuit could & would result if it fails), and ADA is the only language that I thought was allowed. Why is that? It’s because ADA has several models for describing concurrent behaviour, one of which is the rendezvous mechanism, which has been shown to be CSP compatible. Which is also why the VHDL version of ADA can be used as a logic simulator, which is in essence a microscheduling OS kernel or event wheel. The CSP aspect essentially simulates the behaviour of wires & signals: signals must eventually arrive & be consumed.
It’s not even a question of the SW guy checking their own code; you should be checking everybody else’s code & they yours. Sure, you check for stupid errors, but after that, if it seems to work, what else can you do but wait for a failure?
The work done at Kent U by the JavaCSP people demonstrates in some clear examples how Java threading can get far more complex than anyone can really understand &, under the right conditions, failure is the inevitable result. IIRC they weren’t even talking about Swing, but I’d have to find the pdf doc; google javacsp threading etc.
I was just explaining to my dear wife that the Florida election result was the HW equivalent of metastability: the judges’ decision per vote was millions of times more expensive & important than if the voting had been done properly. Just like when a D-flop can’t decide a 1/0 due to overclocking or bad design or just the chip aging & slowing down. Chips do get slower over time. Chips also suffer errors SW guys or end users will not be familiar with, usually cosmic in nature, esp at high altitudes.
I also explained the five dining philosophers problem you no doubt learnt in college; this is the classic deadlock situation.
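For reference, here is a quick Java sketch of the philosophers with the standard fix: always grab the lower-numbered fork first, which breaks the circular wait. Drop that ordering (everyone grabs left then right) and all five can end up holding one fork forever. Names and loop counts are made up.

    // Dining philosophers sketch. The naive version (grab left fork, then right)
    // deadlocks when all five philosophers hold one fork each.
    // Fix shown here: a global lock order, lower-numbered fork first.
    public class PhilosophersSketch {
        public static void main(String[] args) {
            final int n = 5;
            final Object[] forks = new Object[n];
            for (int i = 0; i < n; i++) forks[i] = new Object();

            for (int i = 0; i < n; i++) {
                final int left = i;
                final int right = (i + 1) % n;
                final int first = Math.min(left, right);    // global ordering
                final int second = Math.max(left, right);
                new Thread(() -> {
                    for (int meal = 0; meal < 3; meal++) {
                        synchronized (forks[first]) {
                            synchronized (forks[second]) {
                                System.out.println(Thread.currentThread().getName() + " eats");
                            }
                        }
                    }
                }, "philosopher-" + i).start();
            }
        }
    }

The fix is a one-line change, but nothing in the language forces you to make it, which is the whole point.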
The same can be said for those damn fools who overclock their cpus & naively expect the same results to come out more quickly. Any engineer doing that sort of thing for a life/death situation is waiting for a prison term.
All of these are examples of the same thing: unintended failure & metastability. HW guys have really simple rules for avoiding these things, like triple-registering inputs, using spare timing margins etc. And mostly brute-force verification by billions of simulation cycles over as many types of cases as possible. Even booting Linux in simulation is doable; it just takes a few days/weeks to boot up on a simulation farm.
I can even tell the world that somebody close to me, in spite of their very best effort & due diligence, made a mistake on a chip that cost >$1M, & the chip was probably cancelled as a result. The company can’t blame him; they have a history of designing chips that can’t be understood by the best engineers & they use techniques that are just asking for that kind of mistake, but they won’t change what they do. I will never work on such chips. Perhaps that’s why their stock is so low.
Remember the Intel Pentium bug? It was just a typ0 in some test script. The eye couldn’t pick that up, the script still ran, but it missed a test case. You did catch that typ0, right? The result was that something like 3.47591278 / 3.47591279 didn’t give 0.99999999, it gave something vastly different. If the engine control had used an equation that checked fuel with numbers like that, it could have done something really stupid like empty the tanks because there was too much fuel on board. Luckily it was caught by working mathematicians in the real world.
It is my exp that most SW is not checked even remotely as rigorously as HW is (esp ASICs). Why? Because you can always fix it in a jiffy. Well, MS’s SW error rate is probably 100s of times higher than Intel’s ASIC SW/HW error rate; that’s because Intel & the ASIC industry will get sued for a $B if they let crap chips out, which is what the Pentium FPU bug cost. MS of course gets a slap on the wrist from the DOJ for nothing; few SW companies ever seem to get sued for buggy SW. Yet if SW can be made more like HW, it can be tested & qualified as being ultimately more reliable & bug free.
>Now I am very (extremely) surprised if you would be
>submitting java code for any Fighter as that is clearly a
>life/death situation possibility. You jest surely. Read the
>Sun user clause if you said yes.
If Sun tested its JVM as well as I tested my code, then it would work 100%. Unfortunately, the JVM still has bugs, and that renders my code buggy too.
I have tested every possible case that could happen to my code and made all possible adjustments.
My code isn’t really an application, it’s an API for use in “embarrassingly” parallel applications, covering the base stuff like Threads, Organization and so on.
Well, if you don’t want it put in the JF-35, it can be put in the city’s Traffic Lights system or the telephone station.
My code has 0 bugs, but it’s only a library, not an app.
Some people would argue that even a traffic light or the telephone network is mission critical, as loss of 911/999 etc or letting all the lights go green would probably cause loss of life, so I don’t know if Java would get a clearance for those. But then again, traffic lights used to be done by the simplest 8-bit cpus in asm for who knows how long.
My neighbour works for a mil contractor & is using Java for air traffic control, so either that got a waiver or it’s a less critical project. Anyway, he’s got many yrs of mil project exp & a French EE degree, so I think that’s ok. The gov has been forced to waive ADA too, replaced by C++ as that’s what most people want to use, but probably not for the most critical life/death stuff.
If you have a library then you are in the same position as Sun, the middleman. What if someone uses your library and they mess up? Does everyone just pass the buck back to Sun, or do you use the same EULA as Sun to remove yourself from legal problems?
Pretty quiet today!
>or do you use the same EULA as Sun to remove yourself from legal problems?
I would just say that:
“In the event of this software failing, the buyer will be refunded the cost of this software at the time of purchase.”
In normal language:
If this thing fails, you will be refunded the cost you bought this thing for.
“If Sun tested its JVM as well as I tested my code, then it would work 100%. Unfortunately, the JVM still has bugs, and that renders my code buggy too.”
If this is really true after all this time (6 yrs?), then you should enlighten the rest of us, or maybe you should be using another language, not that there are many to choose from that support par & are wildly popular. I would suggest that the majority of Java programmers should not be using threads, period, unless they are quite well qualified in the fundamentals.
If Java were really robust in its threads, then I would expect to be able to write a crude chip simulator that could spawn a million or more threads without blinking. I don’t think it could stand such a beating, which is why developers inevitably end up building these things directly in C/C++ & optimising the hell out of them, maybe even using assembler.
Directly coding the primitive objects without needless new & implied delete will speed up the task enormously. Actually threadless Java could be used too, just not quite so fast.
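For the record, here is roughly how you would have to fake it instead: a small fixed pool of worker threads chewing through a million tiny “gate” tasks rather than a million OS threads. This is only a sketch with made-up names, using the modern java.util.concurrent classes for brevity, and the increment is just a stand-in for evaluating one gate.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.LongAdder;

    // A million "gates" as tasks on a small fixed thread pool, rather than
    // a million OS threads (which would exhaust memory long before they all start).
    public class MillionTasksSketch {
        public static void main(String[] args) throws InterruptedException {
            int gates = 1_000_000;
            LongAdder evaluated = new LongAdder();
            ExecutorService pool =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

            for (int i = 0; i < gates; i++) {
                pool.submit(evaluated::increment);   // stand-in for "evaluate one gate"
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println("gates evaluated: " + evaluated.sum());
        }
    }

In other words the simulator ends up scheduling its own lightweight work items on a handful of real threads, which is exactly the event-wheel-plus-kernel arrangement described earlier.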
Well, what I meant to say is that Java is not 100% bug-free.
Remember that suspend(), stop() and resume() were not deprecated before 1.2. The developers made a mistake, and software that used these methods cannot rely on them now….
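For what it’s worth, the replacement Sun pushed for those methods is cooperative cancellation: a volatile flag (or an interrupt) that the thread checks itself, so it can stop at a safe point instead of being killed mid-update the way Thread.stop() does. A minimal sketch, with hypothetical names:

    // Cooperative cancellation in place of the deprecated stop()/suspend()/resume().
    public class StoppableWorker implements Runnable {
        private volatile boolean running = true;   // visible across threads

        public void requestStop() {
            running = false;
        }

        @Override
        public void run() {
            while (running) {
                Thread.yield();   // stand-in for one unit of real work, then re-check the flag
            }
            // clean up here; unlike Thread.stop(), no locks are left in a broken state
        }

        public static void main(String[] args) throws InterruptedException {
            StoppableWorker worker = new StoppableWorker();
            Thread t = new Thread(worker);
            t.start();
            Thread.sleep(100);
            worker.requestStop();
            t.join();
            System.out.println("stopped cleanly");
        }
    }

It works, but it is the programmer’s discipline, not the language, that keeps it correct.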
I’m sure that Java has fewer mistakes than any other language, but everyone makes at least one mistake.
I’ve read several academic papers on Java and several articles which complain about Java not being thread safe. Some of them are legitimate and others are pedantic and peevish. However, there are at least an equal number of articles and papers that will tutor you in ways of getting around the threading problem in Java.
To say that Java has no value because it doesn’t automagically handle every threading need is like saying C is bad because it allows you to overrun your array’s bounds. A good programmer knows how to avoid these pitfalls, and bad ones just complain.
Java may never be a good programming language for the chip that controls Otto the Autopilot in a commercial jet, but who cares? It has a vast number of other uses, which it handles quite well.
I think the biggest problem with programming Java or any other language is that, metaphorically, programmers try to insert a small screw into a watch by using a large gorilla taped to a stick. If programmers would actually stick several programming languages into their toolbelt instead of searching for the one magic sledgehammer that retracts into a screwdriver, folds out into a cement trowel, and contorts into a back-hoe tractor, their lives wouldn’t be so hard and we wouldn’t have to waste time discussing these fatuous issues.
>the one magic sledgehammer
Maybe this wonder language needs to be created. A language that is both high level and low level. The one that has a lot of flexibilty yet it employs safety. A language that is running an OS that is highly scalable and extremely small. Blazing speed and nine nines uptime! Running on the best hardware the human progress has to offer. Matching chess players around the world and exploring space! The computing of the new century!!!!!!
I will call my psychiatrist tomorrow
You are missing the point here when it comes to life-critical applications.
Air traffic control, avionics, X-ray beamers and pacemakers depend on predictable scheduling and performance.
This means that if the physician sets up the X-ray machine to beam for 1 msec, the order to shut off must be sent exactly (or at most) 1 msec after the start order.
Now imagine the situation in Java: at the moment the shut-off order is due, garbage collection occurs. Now the patient’s chance of getting a tumor has increased.
The stock JVMs are not suited for realtime applications because of garbage collection and imprecise thread scheduling.
Performance is not relevant here. What you want is predictability, which often works against raw performance.
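You can see this for yourself with a crude probe like the one below: ask a stock JVM for a 1 msec sleep a thousand times and look at the worst overshoot. This is only a sketch, not a proper realtime test, and the numbers will vary wildly by JVM, OS and load; the point is simply that the worst case is unbounded from the application’s point of view.

    // Crude jitter probe: request a 1 ms sleep repeatedly and record the
    // worst overshoot. On a stock JVM/OS the worst case is routinely far
    // above 1 ms, which is the unpredictability that rules out hard realtime.
    public class JitterSketch {
        public static void main(String[] args) throws InterruptedException {
            long worstOvershootNanos = 0;
            for (int i = 0; i < 1000; i++) {
                long start = System.nanoTime();
                Thread.sleep(1);                       // request: 1 ms
                long actual = System.nanoTime() - start;
                worstOvershootNanos = Math.max(worstOvershootNanos, actual - 1_000_000);
            }
            System.out.println("worst overshoot: " + worstOvershootNanos / 1_000_000.0 + " ms");
        }
    }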
For lots of realtime applications, c/c++ on microcontrollers or realtime OSes is the only way to go.
About the threading bit: Java threading is versatile, which opens the door to abuse and mistakes (the way C/C++ does with pointers).
The ultimate is provable models, in which you can formally prove an application to be correct. In the current state of the art, this precludes dynamic memory allocation and Java-like threading. CSP semantics are a step in the direction of provable threading models.
Now for shopping carts and web apps, J2EE provides for rapid application development and predictability is not an issue.
Someone claiming zero bugs can’t be serious, or is all it does exit(0)?
@Camel
“Some of them are legitimate and others are pedantic and peevish”
And the legit ones you are thinking of are…?
“To say that Java has no value….”
Agreed, absolutely; Java/C# already clean up the ptr mess. Of course STL etc & C++’s overloaded [] can do the same.
“Java may never be a good programming language for the chip that ..”
Agreed, that’s why ADA has its place.
“I think the biggest problem with programming Java or any other language is that, metaphorically,”
That’s funny, and since you mention super transformer tools….
The ASIC industry is right now trying to create or choose such a language that fits both HW & SW design. They are moving towards Superlog, a revised Verilog with half(?) of C++ thrown in. C++ compilers are cheap & plentiful. Combining langs only drives up the cost of the tools enormously, esp since the ASIC/EDA world has only a very few vendors, mostly 5-6 figure stuff.
Myself, I would prefer to break up Verilog into smaller pieces; the part that I actually use for ASIC design I would compile back to C (Vpp or V2C it), preserving the semantics of the logic design, and #include this with any other C code on any platform. But I would be a lone voice in the wind on this.
@Anonymous
The wonder languages already exist: ADA & its daughter VHDL.
The SW guys don’t like ADA much; it’s several times bigger than say C/Java. It’s usually not as fast to code or run, & source can easily be >2x bigger. But the mil/gov people like it since it has far more checks & balances.
The HW guys don’t much like VHDL either, for the same reasons, esp since its wording is very unfriendly to logic design. Both of these are committee langs from the 70s & 80s.
The more wonder the language/tool, the more lonely it is.
@Serge
I wasn’t missing the point, but I did forget to mention it.
Predictability is absolutely most important. It is what makes a piece of SW feel like a piece of HW. Embedded systems are always like that: consume some inputs, produce outputs, in an endless cycle within a specified time. This was done with clockwork & relays & TTL logic, then F8 cpus, long before C was widely used. Fortunately these embedded systems don’t follow fads too much.
“For lots of realtime applications, c/c++ on microcontrollers or realtime OSes is the only way to go.”
Agreed
“About the threading bit: Java threading is versatile, which opens the door to abuse and mistakes (the way C/C++ does with pointers).”
Agreed
“Now for shopping carts and web apps, J2EE provides for rapid application development and predictability is not an issue.”
Agreed
@me
Well, 0 bugs is almost achievable; your Pentium is probably as close to 0 defects as it gets for such a complex beast, yet Windows is thousands of times more buggy per $ paid. If you buy any faulty HW, a toaster, bicycle or PC, you take it back to the store, and that costs the store & supplier big $. Why can’t SW be as reliable?
The lawyers at MS, Sun, Apple & others use the EULA to protect themselves from lawsuits, so they have permission to ship code as buggy as they think their market can stand, just ship it often and charge $ for fixes that should be free with an apology. The gov gave the SW industry a pass on this as they see the SW industry as too valuable to allow to be sued out of business. And lower-defect design would certainly slow product cycles.
Perhaps eventually the SW EULA will be sued away, or the HW industry might start to use it too, because HW can be relabeled as SW if it lives in an FPGA, and then quality HW could go downhill.
I would paint an xy graph, +ve quadrant only.
The x axis represents HW, par, speed & taming of physics
The y axis represents SW, seq, abstraction & creativity
Let y - n = 1/(x - m) define an area to work in.
Further out is more complexity: for HW, sheer speed and gate counts; even further out is the analog world, physical issues, then the real world. Up the y axis lie abstractions, SW that is more & more layered & incomprehensible and slower, ie WinXP.
At the 0,0 origin, though, is simple boolean algebra; all CS, EE & MA 101 courses teach this stuff (even some junior schools). Near the origin it is possible to do work on either axis, substituting HW for SW, par for seq; here all the rules have to be simple & precise. No unbounded loops, mem refs or recursion etc. Here is where 0 defects are easy. This is where embedded SW-HW & math-heavy data pumps, mpeg codecs, filters and general algorithms with O(n)-type solutions live. This is also where drivers live, where the HW is touched by the SW, right on the x==y line up to m,n.
SW guys are getting used to more and more abstractions, layer upon layer. Each layer is unspecified in timing & performance. Each new language adds 1 or 2 more wrinkles up the y axis. It’s been a long time since I heard of a compiler that even tries to estimate what the cycle counts are for statements, because I guess it’s too complicated now.
HW guys are getting used to more and more complexity too, but all layers are well specified with detailed specs; the total spec is the sum/product of the lower specs. Ever buy a Pentium that runs at “so & so”, “it’s fast if you don’t do that”, “a bit faster than that” freq? Mind you, it’s getting harder to extract those specs from ASIC/CPU vendors.
With FPGAs in the PC, the origin region gets much bigger, m,n push out, and now bigger pieces of SW & HW can be interchanged. The rules must still be kept simple, hence CSP, Occam, HandelC & HDLs. Now more & more SW & HW guys have to be able to cross over the x==y diagonal.
Nuff said
Hi,
I’m reading your chat. Quite interesting stuff you mention.
While I have programmed for 20 years, I started studying computer science only 3 years ago (part-time study).
Reading about program verification there (esp. Tony Hoare’s calculus) and seeing the relationship between mathematics/logic and programming was eye-opening (I finally understood what PROLOG, which I first saw in the incarnation of Borland’s 80ies compiler, was really about).
I want to add to the discussion that Java was, for a good part, sold to the pointy-haired bosses because it was a C++ for dummies at a time when programming talent was in high demand. These are usually not the people that are able to do formal proofs of program correctness.
I agree that threading and Java is a mind-boggling issue. Such programs are hard to debug (and remember that different platforms have different threading mechanics).
It was not only the Java Thread API that got modified heavily (I agree that if Sun’s engineers got it wrong the first time, it is probably not a trivial issue); the SwingWorker helper class, which is offered on Sun’s Java web site, also had at least 3 major revisions because some non-obvious bugs crept in.
Thus threading is not easy.
A solution I want to look into, but haven’t had the opportunity to, is Doug Lea’s Java lib for concurrent programming (our anonymous cowboy might be him).
Not having read his book, and just looking at his lib’s web page, the offered classes are mind-boggling as well.
This is not obvious stuff.
Thus even if Doug’s lib gets used and helps to improve matters, it will only be used by some specialists who have invested time in understanding it. I doubt that the general Java programming crowd will use that stuff.
And it is not because I deem them stupid or lazy; no, their bosses would flog them if they spent time on that instead of cranking out more Swing dialogs and such.
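For the curious, day-to-day use of those utilities is less scary than the class list suggests. The sketch below uses the java.util.concurrent names that Doug Lea’s util.concurrent work eventually turned into (so the class names here are an assumption relative to his original package): hand a task to an executor and pick up the result through a Future, instead of hand-rolling Thread plus shared variables plus wait/notify. The task itself is a made-up example.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Hand a task to a thread pool and collect the result via a Future.
    public class FutureSketch {
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            ExecutorService pool = Executors.newFixedThreadPool(2);

            Callable<Long> slowSum = () -> {
                long sum = 0;
                for (int i = 1; i <= 1_000_000; i++) sum += i;
                return sum;
            };

            Future<Long> result = pool.submit(slowSum);
            System.out.println("sum = " + result.get());   // blocks until the task finishes
            pool.shutdown();
        }
    }

The synchronization is all hidden inside the executor and the Future, which is exactly the kind of packaging that ordinary application programmers could actually use safely.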
That leaves the question of whether it is possible to embed those mechanisms in the language such that using them is foolproof and low-effort; I suppose this is what JJ wanted to express by mentioning Occam.
I don’t know yet, if this is possible.
Right now I’m reading my way into the Erlang language (http://www.erlang.org), a functional language created by the Swedish communications giant Ericsson, which is designed to support verification and parallelism.
Give me some time to digest it and I might add to this discussion.
Regards,
Marc
P.S.: If anyone is interested in a nice video presentation by Tony Hoare and other gurus (Dijkstra ..) of the early computing era, there is a nice web cast and a great book (with original papers and 4 DVDs of videos: Software Pioneers, Springer Verlag).
Here is a link
http://www.sdm.de/dt/tec/eve/2001/2001.htm
most presentations are in English.
>our anonymous cowboy might be him
Yea, I wish…
I read Doug Lea’s book and gained a lot of insight.
After that I developed the “famed” concurrency library. I have never released it to the public because:
1. Someone might rip it off.
2. I’d really like to patent the concept. (getting ready for software patent flames)
3. I will submit it to the JCP a year later, when my brother and I are done with it.
I wish I knew Doug Lea in person, he has a lot of knowledge of the stuff I love: Concurrency!