“Last week we spent some time with Sergey Dmitriev, who talked about his Meta Programming System, which he says fits Charles Simonyi’s model of Intentional Programming. This generated a lot of interest, so I contacted Charles himself to see if we could talk with him about it, and he graciously agreed to talk about his work at Intentional Software. He was also very speedy in his response to my questions, which allowed me to publish his interview the following week.” Read the interview here.
Another CASE hype.
People tried metaprogramming with CASE tools before and failed; nowadays CASE is only used for a few production steps.
I am not 100% sure, but what Simonyi is working on sounds to me like CASE combined with a component library. Nothing new under the sun, as always.
The problem with all these models is that at some point you run into a problem you can't solve at the higher abstraction layer anymore, and then coding knowledge is needed.
I've seen companies go belly up in the past because they thought they could deliver entire projects with CASE alone.
There is a chapter about Intentional Programming in the book by Czarnecki/Eisenecker on Generative Programming (the black one with a Tangram on the cover).
One idea reminds me of a concept from differential geometry.
To describe certain geometrical manifolds, one chart is not sufficient; you need a collection of charts (an atlas) to describe the object properly.
The idea in Intentional Programming is to describe the software not in just one format (like a Java program) but with many different descriptions.
Each one describes a part of the system (Simonyi speaks of projections here, the mathematical term for retaining only part of the information).
So what is a program?
Theoretical computer scientists have developed very simple but exact models of computation (see the theory of computation, recursion theory, computable analysis).
Computation is, to put it simply, the realization of a function that takes some input value and produces some output value. If the evaluation takes only finitely much time and memory, the function is computable.
There are countably infinitely many such computable functions.
If you have a general-purpose computer, something that can compute every computable function, you must somehow tell it which computable function to pick from the set of all possible computable functions.
You do this by providing the computer with a kind of key.
Thus there is a mapping from the set of keys to the set of computable functions.
That key is the program.
That mapping is the semantics.
One of the simplest key sets is the set of all natural numbers |N, making it the simplest programming language around.
E.g., 243864836585398752976872084378 is the program that calculates the arithmetic mean of five input values.
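To make that concrete, here is a toy sketch in Python (my own illustration, nothing to do with IP; the token set and base-5 encoding are made up for the example): decode maps every natural number to a token string of a tiny postfix language over two inputs, and interpret is the semantics that turns the key into a function. Ill-formed programs fall back to the zero function, so the mapping stays total.

TOKENS = ["x", "y", "1", "+", "*"]

def decode(n):
    """Map a natural number to a token list (its base-5 digits)."""
    if n == 0:
        return ["x"]
    tokens = []
    while n > 0:
        n, digit = divmod(n, 5)
        tokens.append(TOKENS[digit])
    return tokens

def interpret(n):
    """The semantics: map the key n to a function of (x, y)."""
    tokens = decode(n)
    def func(x, y):
        stack = []
        try:
            for t in tokens:
                if t == "x":
                    stack.append(x)
                elif t == "y":
                    stack.append(y)
                elif t == "1":
                    stack.append(1)
                else:                      # "+" or "*"
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b if t == "+" else a * b)
            return stack[-1]
        except IndexError:                 # ill-formed postfix program
            return 0
    return func

f = interpret(80)                          # decode(80) == ["x", "y", "+"]
print(f(2, 3))                             # 5: key 80 names the function x + y

Every natural number names some function here; almost all of them are of course uninteresting.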
Why did I tell you this?
I want to illustrate that a program can be something quite different from what one is used to (like Java code).
The IP people of course do not use the conceptually simplest language (like |N) but rather aim for the specific languages of certain domains.
Why not program a mathematical routine by entering the formulas with a graphical formula editor?
Or program a sound filter by providing the circuit notation of digital signal processing?
The IP people forecast systems where you program in many more input formats, many of a graphical nature, instead of the plain old ASCII-based programming.
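As a hypothetical sketch of that idea (again my own toy, not IP itself): take a formula expressed as a small tree, the kind of thing a graphical formula editor might produce, and let a generator turn that domain description into executable code. The node types "mean" and "sqrt" are invented for the example.

import math

formula = ("sqrt", ("mean", "a", "b"))      # sqrt((a + b) / 2)

def generate(node):
    """Walk the domain description and emit an executable function."""
    if isinstance(node, str):               # a variable reference
        return lambda env: env[node]
    op, *args = node
    parts = [generate(arg) for arg in args]
    if op == "mean":
        return lambda env: sum(p(env) for p in parts) / len(parts)
    if op == "sqrt":
        return lambda env: math.sqrt(parts[0](env))
    raise ValueError("unknown node type: " + op)

f = generate(formula)
print(f({"a": 2.0, "b": 6.0}))              # 2.0

The tree is just one projection of the program; a different editor could present the same thing as a rendered formula or a block diagram.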
I don’t know if that will make a big difference, but it might be one direction in which software development evolves.
Regards,
Marc
You describe exactly what the CASE people have been trying for years: simplifying programming by mapping various program parts (a.k.a. functions) onto different kinds of diagram types, according to the problem.
So I still see no difference between Simonyi's approach and normal CASE tools.
Sure, you can encapsulate those mappings, but in the end you just have a CASE tool with a set of high-level components suited to certain tasks.
And we all know that, out of an infinite number of problems and an infinite number of solutions, you can only cover a limited portion with a given set of mappings.
And that is exactly why the CASE-for-everything approach has failed in the past.
The day programming can be done by people who don't have any clue how a computer works is the day the computer has intelligence, so that you can tell it what you want and it can, in finite time, map the input/output conditions onto concrete algorithm descriptions computable by the processor (given that the von Neumann concept is still the one computers use by then).
The finite-time problem was the main reason why languages like Prolog, which relied heavily on backtracking search trees, failed miserably; such methods are only applied to a limited set of problems (like symbolic integration or chess) where they make sense.
> You describe exactly what the CASE people have been
> trying for years: simplifying programming by mapping
> various program parts (a.k.a. functions) onto different
> kinds of diagram types, according to the problem.
> So I still see no difference between Simonyi's approach
> and normal CASE tools.
If you say a CASE tool is something that generates programs from diagrams, then IP is a CASE tool, I think, as much as any UML tool today.
The idea of IP seems to be to use not the abstract diagrams of computer science, but very domain-specific formalisms, obvious stuff like math formulas or electrical circuits.
The hope seems to be that the domain expert (mathematician or electrical engineer) is then empowered to provide part of the solution.
Whether this is something new, I don’t know. Whether it is useful, hm... it might be.
In the end, the software guys envy the architects and CAD folks for their comparatively simple notations.
Do we simply not yet have a sufficiently developed notation of our own for doing our job, or is our problem so complex that there is no easy notation?
I feel that in the end we might just end up with a next generation of super-powerful IDEs that allow richer input than just text.
By the way, here is the link to the software company of Simonyi:
http://intentsoft.com/
> And we all know that, out of an infinite number of
> problems and an infinite number of solutions, you can
> only cover a limited portion with a given set of mappings.
The problem in the theory of computation is that we have to deal with infinities of different magnitudes.
The key set (the programs) traditionally consists of finite strings, or comes from some other countably infinite set, and is thus bijectively mappable onto the natural numbers |N (integers >= 0).
But the set of all functions, in the simplest case mappings f : |N -> |N, can be mapped bijectively onto the set of all subsets of |N.
So we have only card(|N) programs to index card(2^|N) functions. That is the reason why we can't catch all functions with programs; the computable ones are exactly the countably many we can reach. There is no magic, just more functions than programs.
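The counting step behind this is Cantor's diagonal argument (standard textbook material, added here for completeness). Suppose some list f_0, f_1, f_2, ... of programs enumerated all total functions f : |N -> |N, and define the diagonal function

    g(n) = f_n(n) + 1

Then g(n) != f_n(n) for every n, so g differs from each f_n at argument n and is missed by the list. No countable key set can index all of the card(2^|N) functions.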
> The day programming can be done by people who don't
> have any clue how a computer works is the day the
> computer has intelligence, so that you can tell it what
> you want and it can, in finite time, map the input/output
> conditions onto concrete algorithm descriptions
> computable by the processor (given that the von Neumann
> concept is still the one computers use by then).
I doubt that we will see any intelligent system soon.
> The finite-time problem was the main reason why
> languages like Prolog, which relied heavily on
> backtracking search trees, failed miserably; such methods
> are only applied to a limited set of problems (like
> symbolic integration or chess) where they make sense.
Prolog is very cool.
But way too advanced for the average programmer.
Who has the necessary background in logic and computer science to make good use of it?
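To show what that backtracking core looks like, here is a miniature in Python (my sketch, not real Prolog: no variables, no unification, just propositional Horn clauses, and the rules are invented). It is exactly this exhaustive depth-first search that can explode in time on hard problems.

# rules: head <- body; a goal holds if ALL goals in some body hold.
# Facts are rules with an empty body.
RULES = {
    "grandparent": [["parent", "parent_of_parent"]],
    "parent": [[]],                    # a fact
    "parent_of_parent": [[]],          # a fact
}

def prove(goal):
    """Try each clause for the goal depth-first; backtrack on failure."""
    for body in RULES.get(goal, []):   # alternative clauses to try
        if all(prove(sub) for sub in body):
            return True                # this clause succeeded
        # fall through: backtrack and try the next clause
    return False

print(prove("grandparent"))            # True
print(prove("uncle"))                  # False: no clause applies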
Regards,
Marc