Xactium has published a white paper on Language Driven Development (LDD), a revolutionary approach to designing and implementing software and systems. LDD promises huge productivity gains by bridging the gap between the way developers think about their problem domain and the languages and tools that implement the solution.
Aren’t fourth generation programming languages (like SQL) already “language driven”? Though the language is only English, haha…
that site is sooooooo slow, can’t even read the article
It’s Lisp all over again… done badly.
The Flash demos show what is basically a graphical state-machine modeling tool and infrastructure for composing DSLs. It doesn’t seem all that revolutionary, although something like it could be interesting for teaching children.
Language Driven Development is a very promising way to solve problems in the computer science domain.
It is the next step after 4th-generation languages such as SQL, which are themselves high-level domain-specific languages.
The LDD approach says that when facing a problem, you should design a language specifically for that problem. All the reasoning about the problem must be expressed using this new language.
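To make that concrete, here is a minimal sketch in Common Lisp (a hypothetical example of mine, nothing to do with Xactium’s notation): instead of writing, say, order validation as general-purpose code, you first define a tiny language for stating validation rules, and then state the problem in it.

;; A toy rule language: each rule is (test error-message).
;; DEFVALIDATION expands into an ordinary function that returns the
;; list of error messages for a given order.  (The rules refer to
;; ORDER, the parameter the macro introduces.)
(defmacro defvalidation (name &body rules)
  `(defun ,name (order)
     (let ((errors '()))
       ,@(loop for (test message) in rules
               collect `(unless ,test (push ,message errors)))
       (nreverse errors))))

;; The problem is now stated in domain terms, not in terms of loops
;; and accumulators:
(defvalidation check-order
  ((> (getf order :quantity) 0)  "quantity must be positive")
  ((getf order :customer)        "order needs a customer")
  ((< (getf order :total) 10000) "total exceeds approval limit"))

;; (check-order '(:quantity 3 :customer "acme" :total 250)) => NIL
;; (check-order '(:quantity 0 :total 20000)) => all three messages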
I do not know about Xactium’s products, and they may be good or bad.
The main problem with the LDD approach is that you need competent people (really competent) to use it, while to use an object-oriented approach you just need some Java code monkey.
> … while to use an object-oriented approach you just need some Java code monkey.
I disagree with your comment.
A code monkey may suffice to produce code in Java, or almost any language for that matter, but it takes a highly trained chimpanzee to produce well-designed object-oriented code that adheres to the OO principles.
The idea of developing software in a language specific to the domain of the problem is an underlying theme of the latest developments in software design: metaprogramming, generative programming, abstract state machines.
But this isn’t what they are doing… They just have a glorified UML modeler that can have a domain-specific language layered on top. In a sense, they are backwards. With their product you can specify the OO implementation of some basic primitives and then layer a language on top. You cannot describe the problem and implementation in a domain-specific language and then elaborate into an implementation. Without that, the rest is just fluff of questionable utility.
Consider that .NET is designed to be language-agnostic in the first place. Lots of code generators and tools can be built around the CLR, and I’m not just talking about more Visual Studio plug-ins.
>> The LDD approach says that when facing a problem, you should design a language specifically for that problem.
So before you solve a specific problem, you should create a language that solves all related problems in the domain space?
Why not instead decompose the problem until it maps onto proven tools? Come on, 99% of programs are searching, storing, and sorting. You may think your problem is more than this, but in the end it likely isn’t.
>>while to use an object-oriented approach you just need some Java code monkey.
With that punchline you pretty much turned your whole argument into cheap flamebait.
I hope you sleep well knowing that lots of those “code monkeys” make the software that controls your bank account and stuff like that.
I’d like to see scalable, robust, efficient, reliable systems built on these theories. You know, like real world projects. The tools available for these theories also need to be intelligent and convincing. Otherwise, it will be a cold day in hell before I write a language to solve a specific domain problem.
Algorithms and data structures, that’s all you need to solve problems. Oh, and add to that powerful development tools.
Obviously a good way to solve a problem. Lisp hackers have been doing this for half a century, and Rubyists, Pythonistas, Tclers and Smalltalkers did the same for a while. Note that 3 out of 4 of those languages are object-oriented, and the other two (Lisp and Tcl, thinking at least of CLOS and XOTcl) actually have DSLs for writing OO software.
Corey: I tend to stay out of these debates, but I feel the need to respond to your post. Two points:
1) At the heart of XMF-Mosaic is a very small executable meta-kernel (defined in terms of itself). Everything the user sees on the surface – the languages, the tools, the GUI, the text parsers, XML parsers, transformation mechanisms … – is defined in terms of this kernel. XMF-Mosaic is *not* a UML tool with DSL facilities. In fact, nothing could be further from the truth: XMF-Mosaic is a language modelling tool which has layers and layers of further language modelling tools, languages and other tools defined in terms of the kernel. Each layer is bootstrapped in terms of the previous layer. The UML-style tools you are referring to are simply tools defined on top of the meta-kernel. To give you a feel for this, the XMF-Mosaic meta-kernel written in Java is under 50K LOC, but there are 120K LOC of models running on the kernel. Literally everything is modelled. In that sense it is much closer to Lisp than any UML tool.
2) While the diagrams may look like UML, they are very different in that they have a full (modelled) semantics, both declarative and executable. They are as rich as programming languages.
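To give a flavour of what “bootstrapped in terms of the previous layer” means in point 1, here is a throwaway toy (in Lisp, since that is the closest analogy; it is nothing like our actual kernel): a tiny evaluator plays the role of the kernel, and a “higher” construct such as let is defined purely in terms of it rather than built into it.

;; The "kernel": a minimal evaluator for variables, constants,
;; lambda and application.
(defun kernel-eval (form env)
  (cond ((symbolp form) (cdr (assoc form env)))
        ((atom form) form)
        ((eq (first form) 'lambda) (list :closure form env))
        (t (kernel-apply (kernel-eval (first form) env)
                         (mapcar (lambda (f) (kernel-eval f env))
                                 (rest form))))))

(defun kernel-apply (closure args)
  (destructuring-bind (tag (lambda-kw params body) env) closure
    (declare (ignore tag lambda-kw))
    (kernel-eval body (append (mapcar #'cons params args) env))))

;; A "higher layer" construct, LET, defined purely in terms of the
;; kernel's lambda rather than added to the kernel:
(defun expand-let (bindings body)
  `((lambda ,(mapcar #'first bindings) ,body)
    ,@(mapcar #'second bindings)))

;; (kernel-eval (expand-let '((x 2)) 'x) '()) => 2

Repeat that move layer after layer and you get the situation described above: 120K LOC of models running on a 50K LOC kernel.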
James Willans (Xactium)
James Willans: Yes, I was too quick with my conclusions 🙂 I did a nice Homer-style forehead-smacking “doh!” when I read through your documentation some more. Sorry for the bad-mouthing! 🙂
This would sell me, and maybe it’s possible with the XMF-Mosaic tool:
– Design a language model that captures the correct semantics of an existing C++ API. Not just a mapping from language term to function, but capturing any constraints on function call order: the semantics that C++ just can’t verify at compile time without some mad template metaprogramming trickery, which isn’t very kind to old APIs anyway.
– Enable users to use this language to design applications using this API that can be checked against the defined model. If the user’s application does not check against the model, then the application cannot be successfully implemented, as they described it, using the API. If the user’s application does check against the model, then their application can be implemented using the API and will work as expected. In other words, models rigorous enough to allow only valid applications, but not disallow any (reasonably) valid applications, must be definable.
– Allow users to transform the application into regular C++ that could then be compiled, without any tweaking required, into a package that would work as expected.
That would be sweet. The API users get a higher-level language to use that can be checked for conformance to an API’s semantics. The cost to use the product isn’t prohibitively high (unlike many other products of this sort) since the existing API would not have to be changed. And believe me, the last thing I want to do is rewrite working, ugly code just to fit a different development cycle. 🙂
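To make the call-order point concrete, here is roughly what I mean, sketched in Lisp for brevity with a made-up open/write/close API (not any real tool’s notation): the constraints are an explicit state machine that a proposed call sequence can be checked against before any C++ is generated.

;; Hypothetical protocol for a file-ish API: you must open before
;; writing, and close when done.  Each entry is (state call next-state).
(defparameter *api-protocol*
  '((closed open  opened)
    (opened write opened)
    (opened close closed)))

(defun check-call-sequence (calls &optional (state 'closed))
  "Return T if CALLS is legal under *API-PROTOCOL*; otherwise
report the first illegal call and return NIL."
  (dolist (call calls t)
    (let ((transition (find-if (lambda (tr)
                                 (and (eq (first tr) state)
                                      (eq (second tr) call)))
                               *api-protocol*)))
      (unless transition
        (format t "illegal call ~A in state ~A~%" call state)
        (return nil))
      (setf state (third transition)))))

;; (check-call-sequence '(open write write close)) => T
;; (check-call-sequence '(write open))             => NIL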
Think I’ll spend some time looking at the papers. Maybe get the book.
> I’d like to see scalable, robust, efficient, reliable systems built on these theories. You know, like real world projects.
Ever play a game by Naughty Dog Entertainment? From Crash Bandicoot to Jak & Daxter, their games used a lot of Lisp. For Jak and Daxter, they used Allegro Common Lisp to create a game-oriented Lisp called GOAL. Jak and Daxter 2 is about 900,000 lines of GOAL code, and all the games in the series were commercially successful.* Ever book a flight using Orbitz? Their reservation system was written in Lisp, with 25% of the code consisting of the macros used to implement a domain-specific language suited to the problem.
* Of course, the scarcity of Lisp programmers did ultimately cause them problems. The tools were very successful, but there were not a lot of people who had the requisite skills to maintain them. Such is the weakness of using superior, but less widely-used, tools.
While Lisp pioneered the idea, there has also been a lot of other research into the field of domain specific languages. http://lambda-the-ultimate.org/ has some excellent discussions regarding developments in the field.
Oh, another example. Allegro CL was used to implement Nichimen’s N-World development suite, which was used for Mario 64. It too took advantage of domain-specific languages:
But perhaps the most important feature of Allegro CL for Nichimen’s product development is its support for domain-specific macros. Nichimen extends the Common Lisp language by adding an application-specific syntax and semantics. These take paradigms common to the game application area and directly express them as extensions to the language. The extensions hide complexity, so that code becomes much easier to understand and much more compact.
http://www.franz.com/success/customer_apps/animation_graphics/nichi…
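To show the flavor of what that quote describes (a toy of my own, not Nichimen’s actual code), a domain-specific macro turns a one-line domain statement into the underlying class machinery:

;; A made-up game-domain construct: DEFINE-ENTITY hides the CLOS
;; boilerplate behind domain vocabulary.
(defmacro define-entity (name &key (health 100) (speed 1.0))
  `(progn
     (defclass ,name ()
       ((health :initform ,health :accessor health)
        (speed  :initform ,speed  :accessor speed)))
     (defun ,(intern (format nil "SPAWN-~A" name)) ()
       (make-instance ',name))))

;; The game programmer writes a domain statement...
(define-entity goomba :health 10 :speed 0.5)

;; ...and gets an ordinary class and constructor for free:
;; (health (spawn-goomba)) => 10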
Free, Open Source, Multi-Language Programming Language:
Logix http://www.livelogix.com
Oh, and one can create a new language through this amazing interface called “plain text”. I know, it sounds crazy, but it is true.
Cheers,
therandthem
Looks remarkably like the CASE tools I used back when (’91?) on OS/2 and other platforms.
I don’t know if I’d agree with the MetaCase assessment in the PDF – it seemed we were always generating the framework and then jumping in and editing the code to finish up connecting the dots. The failure of the tools was that they never quite completed the task 100%, and always generated code slightly differently depending on layout (like a bad WYSIWYG HTML editor).
I’m willing to bet this evolution will still have limited scope, which was another problem with the MetaCase tools. Maybe limited by features, and maybe by user base.
Sounds a lot like the stuff the IntelliJ IDEA developer was talking about a while ago. What was it called? LOP? Language-oriented programming? He had some kind of IDE in development for it too…
You know… I went to the website and found the white-paper download and the “product website”, but I am very suspicious when someone advocates a new methodology in order to sell a product that supports it.
I would say the demo was nice, but I immediately became suspicious about this. I belong to the ACM, and am, for example, NOT suspicious when I read an academic article or research paper in the Journal of the ACM or something like that.
Perhaps later I can read the white paper, but I would rather read the proposal in a professional journal (which I am really behind on). Sorry, but this whole article just seems like a “commercial”.
“Obviously a good way to solve a problem. Lisp hackers have been doing this for half a century, and Rubyists, Pythonistas, Tclers and Smalltalkers did the same for a while.”
…and Forthers.
> I’d like to see scalable, robust, efficient, reliable
> systems built on these theories. You know, like real world
> projects. The tools available for these theories also need
> to be intelligent and convincing. Otherwise, it will be a
> cold day in hell before I write a language to solve a
> specific domain problem.
>
> Algorithms and data structures, that’s all you need to
> solve problems. Oh, and add to that powerful development
> tools.
Then take YACC as an example (focusing on the parsing process, not the construction of the AST). Its input files are written in a DSL. You specify neither algorithms nor data structures explicitly. It is a perfect example of LDD. (Side note: wasn’t this called LOP some time ago?)
Also, many programmers apply LDD unconsciously in other languages, e.g. Java. They try to use Java’s features to construct DSLs. These DSLs can never be designed as cleanly as a real DSL, so they are always inferior in one way or another, but on the other hand they are 100% pure Java code.
A good way to find such cases is to look at “frameworks” in Java which not only provide a library of pre-made classes, but also require you to “use them in a certain way, otherwise bad things/exceptions will happen”. GUI systems are usually good examples of this (AWT, Swing, SWT).
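In a language with macros you can fold that “use them in a certain way” rule into the language itself, so it cannot be broken. A generic sketch (MAKE-BUTTON and DESTROY-WIDGET are hypothetical stand-ins for whatever the framework provides); it is the same pattern as Common Lisp’s built-in WITH-OPEN-FILE:

;; The mandatory teardown step the framework documentation would
;; otherwise demand is guaranteed by the macro, even on errors.
(defmacro with-widget ((var create-form) &body body)
  `(let ((,var ,create-form))
     (unwind-protect (progn ,@body)
       (destroy-widget ,var))))

;; Callers can no longer forget the cleanup call:
;; (with-widget (w (make-button "OK"))
;;   (show w))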
This will solve the software problem once and for all.
In one year all software will have been rewritten using this “REVOLUTIONARY” technology. People will focus on curing cancer and killing politicians and other useful things. There will be no more bugs, and productivity will rise by a factor of 20.