The experts at the Applied XML DevCon last week didn’t just show off shiny new features (though they did get a preview of the XML capabilities built into Whidbey). They honestly discussed the flaws in XML technologies and what has to be done to address them.
No?
…and thought it was an Amiga article.
What an uninspired set of conclusions from the industry (read Microsoft and Sun) experts.
Experts agree: XML is not a magical cure-all . . .
How can XML be considered an invention? Structuring data into hierarchies is a concept older than Knuth. What is it, then? The big innovation was structured data in raw text? Being able to reformat structured text data in a way other than regular expressions?
They mentioned a few problems with XML and offered no new solutions; the intro to the article and its contents didn’t really jibe too well.*
*Never really understood why you’d want to use an XML Schema – they remind me of Lisp macros.
XML Schemas aren’t really anything like Lisp macros. Lisp doesn’t really have anything equivalent to schemas or DTDs. Lisp macros are more like XSL transforms, except not sucky. Indeed, the biggest fundamental problem with XML is not XML itself, but rather, the hideous kludge that is XSLT.
Why is XSLT a hideous kludge? Well, because it was designed for a single specific purpose (namely, converting an XML document into HTML). Then, somebody got the bright idea to try and make it into a Swiss Army knife.
XSLT wasn’t just designed to convert XML documents into HTML. It was designed to allow creation of documents that specified arbitrary transformations of XML documents. Hence the name — XSL *transformations*. The problem is that the syntax is awful, and the language semantics are severely limited and quite alien to both functional and procedural programmers. They are at the same time harder to understand and much less powerful than Lisp macros (their moral equivalent).
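For readers who haven’t used either: here’s a rough sketch, in Python rather than XSLT itself, of the recursive, template-driven processing model an XSLT transform expresses. The element names (`article`, `title`, `para`) are invented for illustration.

```python
# Each "template" handles one element type and recurses into the
# children, emitting HTML -- roughly what XSLT template rules do.
import xml.etree.ElementTree as ET

def transform(node):
    if node.tag == "article":
        inner = "".join(transform(child) for child in node)
        return "<html><body>" + inner + "</body></html>"
    if node.tag == "title":
        return "<h1>" + node.text + "</h1>"
    if node.tag == "para":
        return "<p>" + node.text + "</p>"
    return ""  # the "default template": drop anything unmatched

doc = ET.fromstring("<article><title>Hello</title><para>World</para></article>")
print(transform(doc))  # <html><body><h1>Hello</h1><p>World</p></body></html>
```

In real XSLT, each `if` branch would be an `<xsl:template match="...">` rule, and the recursion into children would be `<xsl:apply-templates/>`.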
Heya,
I’ve never worked with Lisp before (only some Prolog … if you can compare those. I suppose not). I’ve also worked with XSLT.
Ok, it’s not easy to work with XSLT, but you can do some nice stuff with it. As far as I know, XSLT is a lot about recursion, and I thought (not sure!) the latest version of XSLT is Turing-complete …
So, XSLT is just as powerful as any other Turing-complete programming language, if all the stuff I am telling is true. Of course, I can’t really compare it with Lisp. Maybe it is easier to do in Lisp …
greetz,
Michel
Not only that, but most of the specifications around XML are overly complex and will likely never make it to implementation without being bastardised by each and every vendor in some incompatible fashion.
And when Microsoft is a big XML booster, well, you might as well pack up and go home, because the only ‘standards’ they support will be Microsoft standards.
Personally, I have used XML as a file format and have also seen XML/XSLT used via Apache Cocoon as a web application development framework.
It’s adequate – even good – in its role as a file format, but downright awful as a programming language/framework.
XSLT enables the developer to do some clever, but incredibly opaque-to-the-casual-observer, things with XML documents, and is far worse than even the much-maligned Perl for creating code that is unintelligible by anyone, including its creator.
XSLT documents that do moderately complex things are slow and unwieldy, and while I can sort of see where the designers of the language were going with the ‘the code is an XML document’ thing, I wouldn’t touch it with a ten-foot pole if I actually had to get something useful done.
And then we have the various RPC-over-XML standards. I have some experience with SOAP – and as far as I can see – it, like XML itself, defines so little in the way of a standard framework that anything you build on top of it is practically guaranteed not to interface with your neighbour’s SOAP system despite the common ‘wire protocol’.
A common wire protocol is a reasonable thing to have, but we have had that for a decade or so with HTTP, and I have yet to see an application that tangibly benefits from having its RPC calls wrapped in SOAP as opposed to simply being sent over HTTP, SOAP-less.
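To make the wrapping overhead concrete, here is a hedged sketch comparing the same single-argument call as a bare HTTP POST body and inside a minimal SOAP 1.1 envelope. The method name `GetQuote`, the namespace, and the argument are all invented for illustration.

```python
# The same payload ("give me a quote for MSFT") two ways:
# a plain form-encoded POST body vs. a minimal SOAP 1.1 envelope.
plain_body = "symbol=MSFT"

soap_body = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body>"
    '<GetQuote xmlns="http://example.com/stock">'
    "<symbol>MSFT</symbol>"
    "</GetQuote>"
    "</soap:Body>"
    "</soap:Envelope>"
)

# The envelope is an order of magnitude larger than the payload it carries.
print(len(plain_body), len(soap_body))
```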
I have gone from being rather negative about XML, to being quite interested in it, and back to not caring – tagged data files have been around forever, and languages like Perl make manipulating tokenised files (whether XML or CSV or something else) as straightforward as I need it to be.
I guess I just don’t ‘get it’ with regard to XML – I don’t see the vision of a ‘semantic web’ being realisable while Microsoft etc. insist on breaking anything ‘Not Invented Here’, and I don’t see any concrete benefits for using XML over other data structures for any of the work that I, personally, do.
I’m sure there are people out there for whom XML is something that allows truly amazing things to be done, where they couldn’t before – What are some examples of current XML ‘killer apps’ past usage as a file format?
Is it just me, or did anybody else initially read the title of this thread as “Wal-Mart B Gone”?
Hi there
I have not used XML extensively, but I was interested in using it for transferring data between companies, for example. There are a lot of companies that transfer information to and from their sponsoring bank. XML was explored for this task because it was meant to be ‘interoperable’, and I was told that it was an emerging format that would take data exchange and make it better.
In the end, files that are 7MB as CSV can be 10 to 14MB as XML, which is not bad. But it makes imports and exports slower, and it is a pain to edit by hand if a value needs changing that the system did not like. And it seems to use more CPU time than other formats (well, CSV in this case).
On the upside for the XML family, I have seen good UIs created that can be deployed over an intranet across platforms. And anything that requires input from a user can be validated on the client side.
On the downside for XML, it makes bulky files. I’m aware that HD space has gone up fast, but there is no point in wasting resources just because we can.
Two file formats…
'csv','UTF-8','22.10.2004','Bank information','f6f41f231hgghjfhgj'
string,'value','value','value','value','value','value',
string,'value','value','value','value','value','value',
string,'value','value','value','value','value','value',
string,'value','value','value','value','value','value',
<?xml version="1.0" encoding="UTF-8"?>
<Bank_information date='22.10.04' code='f6f41f231hgghjfhgj'>
<data>string,'value','value','value','value','value','value',</data>
<data>string,'value','value','value','value','value','value',</data>
<data>string,'value','value','value','value','value','value',</data>
<data>string,'value','value','value','value','value','value',</data>
<data>string,'value','value','value','value','value','value',</data>
</Bank_information>
Sorry for the very rough example.
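As a quick sanity check, the XML bank-file sample above (a trimmed two-row version, with angle brackets) parses fine with Python’s stdlib ElementTree; the tag and attribute names come straight from the sample.

```python
# Parse a trimmed version of the bank-file sample and pull out the
# header attributes and the row elements.
import xml.etree.ElementTree as ET

xml_sample = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    "<Bank_information date='22.10.04' code='f6f41f231hgghjfhgj'>"
    "<data>string,'value','value'</data>"
    "<data>string,'value','value'</data>"
    "</Bank_information>"
)

root = ET.fromstring(xml_sample)
print(root.get("date"))           # 22.10.04
print(len(root.findall("data")))  # 2
```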
The csv file is 294 bytes (zipped size is 197 bytes)
The xml file is 492 bytes (zipped size is 245 bytes)
In summary, XML is very good at some things but is not the fix to every problem.
Please note this is IMHO
Regards
Aaron
The size of XML is irrelevant: since it is not a random-access file format (neither is CSV), it is a stream of characters, and that stream can be compressed using e.g. gzip.
Providing a <500-byte sample is next to irrelevant no matter which compression algorithm you use. I even advocate the use of whitespace in XML, since it makes debugging easier and only marginally adds to the compressed size of a stream.
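A quick sketch of that whitespace claim, using Python’s stdlib `gzip`: indentation inflates the raw byte count noticeably, but the gzipped sizes of the indented and compact versions stay close. The document content is invented for illustration.

```python
# Compare a compact XML stream against a pretty-printed one,
# raw and gzipped.
import gzip

row = "<data>string,'value','value','value'</data>"
compact = "<doc>" + row * 50 + "</doc>"
pretty = "<doc>\n" + ("  " + row + "\n") * 50 + "</doc>\n"

raw_diff = len(pretty) - len(compact)
gz_diff = len(gzip.compress(pretty.encode())) - len(gzip.compress(compact.encode()))

# The raw size grows by 3 bytes per row plus two newlines; the
# compressed streams differ by only a handful of bytes, because the
# repeated whitespace compresses almost for free.
print(raw_diff, gz_diff)
```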