JetBrains IDEA developer, Sergey Dmitriev, talks future programming paradigms and the problems with today’s programming models.
Everything old is new again. This “language oriented programming” stuff seems awfully similar to Lisp macros to me. The underlying concept behind Lisp macros is painfully simple — code itself is a first-class object, just like an integer or a string. All the other stuff falls naturally out of that single point. So you want a domain-specific language for your simulation? Just write the macro. Want domain-specific languages for traversing collections (or databases, or XML files, or whatever)? Just write the macro. Want to control exactly how your code gets compiled? Just write the macro!
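To make that concrete, here is a minimal Common Lisp sketch (my own toy example, not code from the article or from JetBrains): a two-line macro that gives callers a tiny collection-traversal “language” by expanding into an ordinary DOLIST. The name FOR-EACH-IN is hypothetical.

;; A tiny DSL construct built with DEFMACRO; it expands into plain DOLIST.
(defmacro for-each-in ((var list) &body body)
  "Iterate VAR over LIST, running BODY for each element."
  `(dolist (,var ,list)
     ,@body))

;; Usage:
;; (for-each-in (x '(1 2 3))
;;   (print (* x x)))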
The concept this article describes isn’t really anything new – it’s just a rehashing of any macro language. You write your own syntax rather than sticking to a set syntax defined by the language. Other than that, it’s the same thing all over again.
I couldn’t get past the first page without thinking:
A) Rehash of the 4GL concept.
B) He should probably look into Lisp.
OK, we all agree on one thing. And it is…
Lisp
Lisp comes up in the discussion forums of this article over at jetbrains.com and Sergey responds.
Charles Simonyi, of Microsoft fame, is working on stuff like this too.
Here’s the link to Sergey’s response on it being Lispy.
Well, maybe not the worst, but I’ve rarely read so many paragraphs that go on and on without saying anything. Yes, we know that we don’t program the way we think. What’s your grand alternative? It’s vague and not well thought out, to say the least.
Patrick
Sounds like what he needs is Forth.
The problem with translating concepts into computer languages is not the design and the writing; it’s dealing with the ambiguity inherent in human languages. And there is no clean way of doing that.
Domain-specific language generators are nothing new either. Such meta programming systems (and I loathe the author for taking a generic term and using it to name his application) have already been widely discussed. `C (Tick C), JSE, Stratego/XT, JTS, Tempo… the list goes on.
Playing the devil’s advocate for a moment here, I do have to defend meta programming systems from those who claim that they are just macros – yes, they are based on macros, but the gist of an MPS is that it is meant to preserve hygiene throughout the program. Again, this is a known issue and there is no one true magic solution.
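For anyone who hasn’t hit the hygiene problem, here is a toy Common Lisp sketch (my own example, not from the article) of the variable capture that hygienic macro systems are designed to prevent:

;; Unhygienic: the literal symbol TMP can collide with a caller's variable.
(defmacro swap-unhygienic (a b)
  `(let ((tmp ,a))
     (setf ,a ,b
           ,b tmp)))
;; (swap-unhygienic x tmp) expands into broken code because TMP is captured.

;; Conventional fix: generate a fresh, uncapturable symbol with GENSYM.
(defmacro swap-hygienic (a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b
             ,b ,tmp))))

Scheme-style hygienic macro systems do this bookkeeping automatically; in Common Lisp it stays the macro writer’s job.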
The proposed system in the article itself seems to be concerned more with statically typed procedural languages. I cannot see it being too useful in dynamically typed languages and in functional languages.
In summary: The article presents domain-specific languages as a new concept (which they are not) and does not give proper credit to its predecessors. In an academic environment that would be considered a serious breach of research ethics.
One very nice idea is the ability to create special editors/IDEs that support your new syntax. That’s surely not Lisp with all its (((())()())))
Thibault et al. provide a nicer paper concerning the practical design of domain-specific languages:
http://portal.acm.org/citation.cfm?id=316286
It might need some additional background reading, but it is interesting nevertheless.
Want a search term? Google API
Want a sorted list? Google API
Your next API is GET and POST
> The article presents domain-specific languages as a new concept (which they are not) and does not give proper credit to its predecessors. In an academic environment that would be considered a serious breach of research ethics.
Don’t make such accusations unless you read the entire article. Hint – the end.
I much prefer C and assembler as they are now. I think it would be worthless to have such overhead in a compiler to interpret language when it’s just going to asm/machine code in the long run anyway.
I think what most – not all – of us might agree is that priests of the devil’s cult (java, C++, C, anything Assemblyalgoloid) should be the least qualified persons for enlightening others on where the trend of programming should be. Of course they find the ‘current paradigms’ limiting – all they know of them are their tenth-arsed versions.
@Anonymous: Who cares about overhead in the compiler? The compile stage is hardly a bottleneck to overall system performance. In any case, the ability to hook into the compilation process can be very powerful for performance optimization. If you’ve built a domain-specific language, chances are that the compiler’s built-in optimizer doesn’t know anything about optimizations that could be applied to your DSL. If you can control how the code gets compiled, you can add these smarts yourself for your DSL.
@Josh Goldberg: Using higher-level constructs doesn’t mean that you don’t know what’s going on at the low levels. As long as you are familiar with your compiler, you will have an intuition for what kind of code it’ll generate in a given circumstance. In any case, programmer knowledge is always rather undependable. Nothing is as good for tweaking performance as a good profiler, and those work just as well on high-level languages as on low-level ones. Additionally, I must say that throwing more hardware at slow software is often a perfectly acceptable solution. Buying a $3000 dual-processor server might double or quadruple your software’s performance. Spend the same $3000 on developer time (1 person, 2 weeks), and you’re unlikely to see even a 50% performance improvement.
I’m guessing, with all the stories on software tools and language-oriented programming, that there’s an undercurrent of dissatisfaction with what we presently have.
Maybe the next thing will be tuple space: http://c2.com/cgi/wiki?TupleSpace
or MDA: http://ase.arc.nasa.gov/uml03/rouquette.pdf
There are definitely some old concepts here. I don’t think the author has attempted to hide that fact. Lisp is proof that everything boils down to macros. The problem is that Lisp is worse than reading assembler.
There are quite a few important points that need to be reiterated and brought back to the attention of developers and researchers. One I believe is important is that syntax != language. Using a good editor like Eclipse, IDEA, or Visual Studio shouldn’t mean that the source is saved as an ASCII text file. Using a binary description that can interleave various concepts would create much more dynamic and expressive language interfaces.
Maybe it’s not computer languages that need to become more like natural languages. After all, there are other ways to bridge the gap. An example would be to create high-level unambiguous spoken/written languages and then use these to write programs, laws, medical texts…
http://lojban.org/
Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp Lisp!
Why do people keep trying to reinvent Lisp instead of JUST USING LISP!!!
Because there is always a better language for a given need, or simply just a better language. And LISP is not the end-all of very high level languages.
Because not everything flows well recursively…
The concept of Lojban is nice, but have you ever tried to learn that language?
“Why do people keep trying to reinvent Lisp instead of JUST USING LISP!!!”
The article mentions the following example: you are using some lower-level functionality (say, calling a function) that takes a color as its input. In the LOP source code for that call, a constant argument could appear as a colored dot. Please explain how you’d do that in Lisp. Another example: a function call that takes a GUI button as input could draw that button directly in the source code. Something like:
myDialog.addButton([ok]);
The [ok] was meant to be a graphical representation of the button, but this forum can’t handle it. Again, you’re saying Lisp can already do that?
About compilers: as far as I understood, the creator of a specialized mini-language not only creates the language, but also defines how it is mapped to other languages. That way, he/she can optimize as needed.
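As a rough illustration of that point in macro terms (my own sketch, not code from the article): the author of a mini-language construct decides what it expands into, so domain-specific optimizations can be baked into the mapping itself rather than left to a general-purpose optimizer. SQUARE-SUM below is a hypothetical name.

;; The DSL author chooses the translation: expand straight into an
;; accumulation loop instead of building intermediate lists.
(defmacro square-sum (list-form)
  (let ((x (gensym)) (acc (gensym)))
    `(let ((,acc 0))
       (dolist (,x ,list-form ,acc)
         (incf ,acc (* ,x ,x))))))

;; (square-sum '(1 2 3)) => 14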
Overall, the article is a nice introduction for someone who has never heard of LOP.
You mean something like
http://www.cs.utah.edu/~mflatt/tmp/guibuilder.jpg
that?
This is a proven concept and it works. REBOL is known to use it. A few examples:
1) Function Dialect used to define functions
2) Parse Dialect used to define dialects
3) VID Dialect used to define GUI
4) Control functions are dialects; a programmer can define new ones
5) Draw Dialect
There was a nice series of interviews on CGN (the Code Generation Network). I have submitted these links to OSNews before, but they did not make it, so I am glad someone got this article posted.
Charles Simonyi: (Intentional Programming)
http://www.codegeneration.net/tiki-read_article.php?articleId=61
Krzysztof Czarnecki: (Generative Programming)
http://www.codegeneration.net/tiki-read_article.php?articleId=64
What is behind it?
First you must recognize that computation means evaluating some worker function that gets some input data and returns some output data using only finite resources of time and space.
A program is just a key, a piece of data used by a certain mapping (the semantics) to pick the worker function.
It could be a series of instructions, a function graph, a piece of musical notation, whatever.
For programming, and thus for specifying what worker function the universal computer should evaluate, one should use whatever format is best suited.
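A toy Common Lisp sketch of that view (my own example, not from the comment above): the “program” is plain data, and a hypothetical SEMANTICS function maps it to the worker function that actually does the computing.

;; The program is just a list of opcodes; SEMANTICS turns it into a closure.
(defun semantics (program)
  (lambda (input)
    (reduce (lambda (value op)
              (ecase op
                (:inc    (+ 1 value))
                (:double (* 2 value))))
            program
            :initial-value input)))

;; (funcall (semantics '(:inc :double)) 5) => 12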
Another idea is the inadequacy of mapping. It is impossible to map a globe onto a flat map without losing some properties. But it is possible to cover the globe with a set of flat maps with much less error. If the globe stands for the problem, the flat maps stand for a certain locally optimal representation.
Regards,
Marc
The association I got when reading the article was the visual programming stuff like Borland Delphi and Visual Basic and things like that. Apart from the part about creating our own language, it seemed somewhat the same to me.
I am not an experienced programmer, so I might have misunderstood it all.
… is another way to look at these issues. See http://xlr.sf.net for more information.
> You mean something like
> http://www.cs.utah.edu/~mflatt/tmp/guibuilder.jpg
> that?
Exactly. This is embedding GUI components as I imagined.
Now it’s time to prove that standard Lisp is up to the task (if possible, let’s use Common Lisp because I have that on my machine). What code do I have to write in my Lisp source to write runnable code like that?
Note: Modifying the compiler, IDE or similar is not allowed since that would not be required in LOP either.
> Note: Modifying the compiler, IDE or similar is not allowed since that would not be required in LOP either.
Well, you don’t have to modify DrScheme, because it already comes with that functionality. In any case, why isn’t modifying the IDE allowed? The thing you forget is that most Lisp IDEs are tightly integrated with the language and the environment. So adding a module to handle something like that is really no different than adding a library to your program.
Do you think that some future LOP environment (which doesn’t yet exist, mind you!), could implement such a feature as anything *other* than an IDE plugin? Because even if a language understands the reference to the button image natively (you could do that in Lisp, using a suitable macro), the IDE still has to contain the code for manipulating the button.
One thing that seems to be missing from the discussion thus far is the idea that, with the introduction of modern GUIs, the computer moved from a language of words to a language of symbols (i.e. icons, buttons, taskbars, etc.).
It would only make sense that programming will eventually shift from being an intricate and increasingly cryptic collection of the tokens we define as “characters” (see the buttons on your keyboard) to a set of relationships defined amongst various generic visual tokens.
Imagine a graphic to represent a procedure. Clicking on it would produce a view of other graphics that represent arguments, imported modules, etc.
Programming is no longer the domain of a select few. Many non-English speakers are programmers. Moving to a truly visual system of programming could bridge the natural language gap, as well as the programming language gap.
> Using a binary description that can interleave various concepts would create much more dynamic and expressive language interfaces.
Well, you’re right, that’s not Lisp. That’s Smalltalk and Dylan, which are merely offshoots of Lisp. In particular, this smacks of Squeak.
> Imagine a graphic to represent a procedure. Clicking on it would produce a view of other graphics that represent arguments, imported modules, etc.
Sounds like a _very_ inefficient and painful way of writing a program.
> Programming is no longer the domain of a select few.
Nope. You still have to understand the programming concepts in order to be able to produce something meaningful.
> Well, you don’t have to modify DrScheme, because it
> already comes with that functionality.
Yes, but the point was being able to add such things if the IDE _doesn’t_ come with it, because the IDE cannot include anything you could think of.
> In any case, why isn’t modifying the IDE allowed? The
> thing you forget is that most Lisp IDE’s are tightly
> integrated with the language and the environment. So
> adding a module to handle something like that is really no
> different than adding a library to your program.
Sorry for the misunderstanding; by saying that the IDE should not be modified I mean that it should be extensible that way without rewriting it completely. Admittedly, by using an extensible IDE you can achieve mostly the same, I think (probably that *is* some kind of LOP then, by definition). I wasn’t right either to say that Lisp macros are not LOP – they are a limited variant because they’re text only (as most programming languages are a limited variant).
Still, I think the comment by someone saying that people should use Lisp instead of re-inventing it is not valid. Original Lisp did NOT provide all that. If the original was enough, why use DrScheme after all?
I really want to play devil’s advocate with this one. I really don’t think symbols and graphics will be the next generation in programming. I think that the next generation is already underway. XAML, meta-associated technologies, and their next generations will take off and be the de facto standard for the next 10-20 years at least.
We will see more and more that companies will bring together XHTML-ish applications, XML, and <insert your favorite programming or scripting language here>. Symbol programming will probably not take hold for another 50-100 years, if that. The world is not ready to go back to hieroglyphics. English has become the international language of business and trade. Until that changes, Unicode will be used with English as a base language.
Symbols should be relegated to being a high-level view of hand code. How could it possibly stand on its own?
>(probably that *is* some kind of LOP then by definition).
You mean that LOP is DrScheme by definition?
Joking aside, I’d call this approach “embeddable resources” with an “OLE-aware” editing widget (an Office-like approach applied to the development environment).
But the complete infrastructure can be done much more elegantly with a good macro-aware language than with separate systems.
More than a language which has control over itself, we need our languages to interact.
We have a different tool for each job, but how many people build houses with JUST a Sawzall (reciprocating saw)?
Not all languages need do everything; each language has its place. LISP has some amazing powers to self-transmogrify, powers this article talks a lot about, but it lacks in other places. The real key is being able to combine languages: write the performance-critical core in a fast, mostly-static language, and run the late-bound logic-execution engine in LISP.
More than anything else, Microsoft scares me on this front. CIL is the single most useful forward-facing quantum of progress I’ve seen from the computer industry as a whole in 10 years.
I appreciate the feedback given so far, even though it tends to discourage my idea, but that’s OK. Many great ideas have had to overcome a lot of discouraging words before succeeding in ways that no one else had envisioned.
The truth is, we are already on our way towards a more visual means of communicating our ideas for computer program functionality. UML is an excellent example of a step in that direction. Keeping in mind that UML is still relatively young, and its use, if I’m not mistaken, is not so widespread, we can draw some conclusions:
a) Many programmers have little or no experience in using modeling techniques, let alone believe they are beneficial to the process of creating well-designed, efficient and manageable code.
b) Most programmers are trained to think of programming based on a heritage of, as Sergey Dmitriev said, “limitations of programming which force the programmer to think like the computer rather than having the computer think more like the programmer.”
c) Most people using UML see it as a means of *modeling* higher-level structural and behavioural concepts, and not as a new visual symbolic language for *expressing* program structure and behaviour.
If there had not been the marriage of graphic design and usability with the logical needs of the machine, we’d all still be using the CLI in a GUI-less world. Having a nice GUI doesn’t mean that the underlying commands themselves have vanished, but that we instead have a choice as to how we would like to invoke them – we can type them out in a terminal, or we can click a graphic – in either case the underlying actions and the returned result are often unchanged.
Similarly, symbolic representations of programming logic will not necessarily do away with more traditional means, but will provide a clearer set of tokens with which to accomplish the task. (As an aside, I am intrigued by the comment about hieroglyphics. Perhaps an archeologist of the future will someday look upon the original Mac icons as some form of ancient hieroglyphics.)
So, to wrap things up for now, I firmly believe we will progress from UML as a modeling language, to UML as a programming language (which is already being done), and so on and so on, to eventually create a series of symbolic languages which allow for a more intuitive and elegant representation of programming concepts.
> c) Most people using UML see it as a means of *modeling*
Well, this [look at the subject] isn’t exactly true. I can agree that formalization of the programming task is the most time-consuming step in software development (forget testing for now), but IMHO the problems pointed out with the current development process are caused rather by poor project management.
What do I mean (very briefly)?
Take the first diagram from the article:
Task->Solution—————–>Sourcecode->Executable
The long line includes the formalization of the initial problem, and apparently this shouldn’t be done by the programmer at all! But where is the formalization in the second diagram?
Task->Solution->Sourcecode—————–>Executable
The source->exe path is automated, so it cannot contain formalization (no computer system can resolve ambiguity, think in human categories, etc.). Something is hidden here – the formalization of the needed specific languages, and this is manual work again.
If any (nontrivial) software project is managed in either of the ways above, then the “solution” is not actually a solution and is not suitable for writing code efficiently. The actual formalization should be done at the Task->Solution step; this includes writing detailed documentation, establishing the design of the entire application, long annoying talks with customers, fighting with management (“We want to see a working model NOW!”), etc. This way we don’t see many of the problems (discussed in the article) at all. Some examples:
“An application starts off needing some form of configuration, be it a simple options file, or a more complete deployment descriptor file. Eventually, configurations become more complex, and the application ends up needing a scripting language.”
The need for a scripting language (or other configuration options) should be visible at the “Solution” step – this isn’t the programmer’s decision.
“MPS is currently not ready for the real world, but it is getting there. There is also no documentation yet, except for this article.”
The worst example – design/coding is already half done, but there’s no documentation yet. Probably it’ll appear after the system is ready – that’s like writing a car’s service manual after repairing it.
Finally, someone somewhere stated that the time for designing and coding any [nontrivial] application doesn’t depend on the tools used. I personally have coded in various environments, from assembler to semi-automated CASE tools, and I can assure you that this hypothesis appears to be true.
Some filtering firewall won’t let me type the word “M-O-D-E-L-I-N-G”
I find it hard to believe that many people think that this is reinventing LISP or Scheme or list-processing languages. It’s not; those are heavily text-oriented. This seems to be strongly addressing the fact that one must be more symbol-agnostic.
Supposedly pithy remarks about project management really don’t mean much. Getting everything figured out from the get-go doesn’t work; it’s been shown to be wrong over and over. It’s a case of people taking “starting off on the right foot” too far: the expression says to start off on the right foot, not to start and finish on that foot. Which is why more organic or iterative processes work out better, even though they might seem slower and with more overhead.
LOP, and MPS in particular, offers many things that LISP environments don’t; chief among them is discipline, by way of giving us tools to define structure unambiguously so that editors can work with you to determine your intentions and speed up expression. The editor is a key tie-in, something that people seem to be leaving out. Moreover, one can manufacture forcing functions or desired restrictions, which are a big aid.
There are two views on the same thing: one is the linguistic point of view and the other is the presentation point of view. From the point of view of languages with extendable syntax there is nothing new here.
And even from the presentation view it is not a completely new idea, though you have to be aware that the linguistic side doesn’t determine how things can be presented. The linguistic point of view is mostly that, with an LOP-like approach, problems wouldn’t be expressed any more easily than with already-existing methodologies.
I think some interesting points have been raised in this discussion. As many have identified, much of what is being proposed is already out there in one form or another (Lisp has been mentioned a few times). However, a key challenge is making these approaches usable by everyday developers – Lisp is not. There is no silver bullet as far as language engineering is concerned, but the process can be made easier by providing the right language engineering mechanisms – particularly ensuring reuse is maximised so that languages aren’t always built from primitive constructs. A further challenge is making sure a collection of domain-specific languages makes collective sense, since the more “domain specific” particular languages are, the more languages are going to be required to express a non-trivial system.
Finally, I would like to plug a book “Applied Metamodelling – A foundation for language driven development” which I co-authored. This book can be freely downloaded from http://www.xactium.com .
James