“Programming languages are living phenomena: They’re born, the lucky ones that don’t die in infancy live sometimes long, fruitful lives, and then inevitably enter a period of decline. Unlike real life, the decline can last many, many years as the presence of large legacy codebases means practiced hands must tend the code for decades. The more popular the language once was, the longer this period of decline will be.”
C# dulls the mind.
Would anyone use Objectionable-C were it not required for Apps for Apple?
Java is cross-platform and consistent, but Android and some web apps need it.
FORTRAN, nor its more malignant sibling, COBOL, aren’t dead yet.
C comes closest to the hardware, a high-level assembler. C++ is nonplussed. Scriptography needs simplicity and flexibility, so Python is squeezing out the competitors.
I like the a-b-c’s: Awk, bash, C.
Fortran is the correct spelling, and it’s been the correct spelling since Fortran 90 was ratified some 23 years ago.
BTW, what language were you trying to use when you composed the above sentence?
I’m with you on everything except Objective-C — what’s wrong with it in your estimation?
To me it’s a winner: all the benefits of C, combined with an object-orientation system with a simple, Smalltalk-like syntax and dynamic dispatch.
All the drawbacks of C too, unfortunately.
I really wish Apple would take Cyclone (http://cyclone.thelanguage.org/), which is basically C done right, lash it to their existing OO mechanism, and call it Objective-C 3.0. It wouldn’t be perfect (e.g. error handling would still suck), but it’d address [Obj]-C’s single biggest weakness (safety) and with luck help drive the wider adoption of ‘safe C’ dialects on other platforms too (something long overdue).
Can Cyclone compile ANSI C code? If not, I don’t think it will ever gain traction, so would not be worth including in a new Obj-C.
The beauty of Obj-C is that you have standard C (with all of its warts), plus an elegant OO system that includes all the niceties of a modern OO language. This gives you one toolchain for the entire OS stack, from kernel up to web services.
I think that would depend on the particular piece of code. The following might help:
http://cyclone.thelanguage.org/wiki/Cyclone%20for%20C%2…
OTOH, Cyclone already has an ‘extern “C”’ feature for interfacing to C code, so it wouldn’t be a big leap to extend that to Obj-C 2.0, which could be kept around for as long as needed. There’s also the option of cross-compiling – I suspect with backing from the likes of Apple and LLVM it wouldn’t take long for big improvements to appear there. The ‘Porting C code to Cyclone’ section of the Cyclone user manual has more info:
http://cyclone.thelanguage.org/wiki/User%20Manual/
Really, at age 40-something it’s long past time C grew up and stopped behaving like a sloppy, stroppy teenager. The question is, who has the motivation to drag it up by its bootstraps? MS has no need to do so since it’s already invested in C# and C++. The Linux world won’t push it forward either, since it’s even more sloppy and stroppy than C is.
The only real hope (for better or worse) is Apple, since C remains a foundation stone of their whole platform and therefore of their developer community; as more (often less skilled) developers take up Cocoa development, C’s flaws become more of a liability. To their credit, they have been trying to modernize the Obj-C language a bit, but so far the front-of-house changes are just nibbling at the edges. OTOH, now that the move to LLVM is done, they’re in a much stronger position to innovate aggressively.
Remember, Apple have pulled this sort of trick off before, in the transition from ‘Mac’ OS 9 to ‘Mac’ OS X. It’d be nice to see Apple demonstrate the same boldness in their toolchain that they’ve shown, to such great success, in their hardware design and supply chain.
Personally I really wish Dylan had worked out – it had a far more powerful and elegant OO model than ObjC/Smalltalk. Infinitely better macro system than [Obj]C too. <wistful-sigh>
OTOH, even the Unix Philosophy says it’s better to have a complement of dedicated tools that play well together than a single Swiss Army tool that tries to do it all. And OS X is nothing if not opportunistic, happy to integrate whatever works.
Your comment has a number of interesting points; I’ll try to respond:
1) There’s so much C code, and C retains such popularity, that porting is a non-starter IMHO. However, ‘extern “C”’ could work if perchance Cyclone were to catch on. But, as discussed on OSNews and elsewhere, “good enough” with inertia behind it is far more powerful than “better” without inertia. Even within Apple (it’s such a huge company now), the current toolchain has a lot of momentum, and there’s no Steve Jobs anymore to impose big changes of direction.
2) Whether Dylan has a more elegant OO model than Smalltalk is a matter of taste. On a practical level, what can macros do that Obj-C categories cannot?
3) To “play well together” in the Unix philosophy means to have a simple common data format, the text file/stream. When it comes to system programming interfaces in Unix, this means SysV-like APIs in C. Most crusty old Unix types prefer to just code in C rather than have multiple language bindings and a more complex software stack; Obj-C is a viable compromise to many since it retains the “virtues” of C while including a usable OO model.
It seems to me there are two design questions here: (a) whether C is worth continuing to use as a systems programming language and (b) what should the higher levels of the toolchain look like — many languages (hopefully with a common runtime, e.g. LLVM) or just one tool like Obj-C?
Nobody expects a C++ renaissance.
Even so, it’s a bit premature to declare anything, given that the new C++ standard has only just been ratified and compiler support traditionally lags behind.
Renaissance? What for? It never died!
In the meantime, C code gets replaced by C++ (Windows, gcc, Quake3, …).
Nobody expects the Spanish Inquisition!
Seriously though, I fully agree with you. I can’t see how someone could expect a long-established language like C++ to suddenly have a noticeable user surge due to new features being added.
Also, I can’t see how JavaScript could possibly be ‘treading water’ given how pervasive it’s become on the web, not to mention new ‘technologies’ like HTML5 relying on it.
Maybe for productivity applications it doesn’t make any sense; but if you plan to write something low-level, the new features of C++ make the language more attractive than before.
And it makes me wonder about the figures when Java moved from 1.4 to 5 to 6 to 7.
I do hope that functional programming languages undergo a renaissance. However, in my opinion, imperative-style languages make it harder for people to think that way once they have been coding imperatively for too long. The thought process requires a radical shift, which takes time. For me, picking up a new OO or imperative-style language became relatively easy, yet it took several months of wracking my brain to even begin to program in a functional style. A few of the mental shifts involved:
immutable vs. mutable data
recursion vs. iteration
closures vs. objects
method dispatch vs. polymorphism
pattern matching vs. switches
composition of functions vs. aggregation of objects
macros vs. DSLs
continuations vs. exceptions/goto/setjmp
But once you do figure it out, it all becomes more elegant. I don’t know how software is going to get faster on multi-core machines unless people switch to functional languages. Lock-based concurrency is just really hard to get right.
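To make a couple of those contrasts concrete, here is a tiny Haskell sketch (my own toy example, not taken from anywhere in particular) that leans on pattern matching and recursion over an immutable list, where imperative code would mutate a counter in a loop:

-- sum a list by matching on its structure and recursing;
-- no loop variable, no mutation
sumList :: [Int] -> Int
sumList []     = 0                -- base case: the empty list
sumList (x:xs) = x + sumList xs   -- head plus the sum of the tail

sumList [1, 2, 3, 4] evaluates to 10, and the input list is never modified along the way.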
If you don’t believe me, read Bartosz Milewski’s post on why he switched to FP (and if you don’t know who Bartosz is, just google him).
http://fpcomplete.com/the-downfall-of-imperative-programming/
My own dabbling with functional languages has been mostly Clojure- and Scheme-based; next on my list to learn is Haskell. I tried Scala and didn’t like it much (personal choice: it reminded me of Perl’s TIMTOWTDI, and it wasn’t really all that functional, to which Odersky himself agreed). Clojure is nice, and having [] for vectors and {} for maps makes it far more readable than other Lisps and Schemes. That said, being hosted on the JVM sucks for systems programming, which is why I’ve been learning Scheme, with Haskell up next.
I started programming in 1975 with BASIC and COMAL, and then later Pascal, FORTRAN, COBOL and assembly language at university. In my third year I was taught LISP and was immediately sold. I have also used C, Java, Scheme, ML, Haskell and a dozen lesser-known languages. I tend to use mainly Standard ML. Though it is not purely functional and rather less advanced than Haskell, I like its simplicity: I find that Haskell has evolved into a test bench for weird language and type features, so people can write extremely generic programs that you need a PhD to understand.
Standard ML is, however, rather dated. Some attempts at revising it have been made, but they have, IMO, failed, mainly by trying to add too many new features to the language. Then there are what I would call misguided derivatives (OCaml and F#) that add object-oriented features to the language. I would rather see a minimal update that solves some of the more pressing problems (such as the lack of Unicode support) and a more modern standard library. And parallelism, of course.
Have you tried F#? It’s a functional language; it comes with Visual Studio and can be used with Mono.
Functional languages are beautiful, but getting tied to a specific proprietary platform (the .NET Framework) is not a good idea.
I would take a look at Jaskell ( http://jaskell.codehaus.org ) instead.
Except it can be used with Mono, which is open source.
Functional programming is going to be just another tool in multi-paradigm programming languages, I think.
From my enterprise experience, I think functional programming has a better chance of success by being integrated into mainstream languages (C#, Scala, …) than by staying purely functional.
People finally discovered that single paradigm languages are not a good idea.
I disagree. Maintaining a clear, precise focus is a Very Good Thing (see: Coupling and Cohesion 101). You never see a joiner using the same tool to do everything from slice wood to turn screws to drive nails to apply varnish. There’s a reason for that.
I think the real problem is developers not being able to hop quickly and effortlessly between different languages within a project. That may be partly down to bridging and tools not being good enough to allow seamless mixing and matching. But I suspect the biggest barrier is developers themselves lacking the mental agility to switch between languages and idioms. That, compounded by a self-indulgent fondness for inventing complex solutions using tools they already know rather than seeking out simple solutions involving tools they don’t. The modern trend for Computer Science courses to silently retool as Software Engineering, and from there to lowest-common-denominator Java diploma mills, probably doesn’t help either; but that’s another debate.
I share the same opinion, although I do like multi-paradigm languages.
I loathe the way some developers seem to try to bend a given language to use it in every conceivable scenario.
Don’t conflate Software Engineering and CS by name alone. My university degree, when I took it, was called Software Engineering in my mother tongue, but the actual contents were what is known as CS in other countries.
I’m absolutely for the existence of Software Engineering courses; after all, there’s a huge need in today’s world for applied software engineers, and plenty of folks who want such jobs. I just hate it when such courses call themselves ‘Computer Science’ [1] when the curriculum clearly is not.
Personally I think a big heaping dose of honesty is needed from universities… and from students too. Everyone’s perfectly happy to distinguish between, say, Applied and Theoretical Physics, so why conflate Applied and Theoretical Computing? The only reason I can think of is ego marketing: students want to be software engineers but call themselves ‘computer scientists’ because that sounds cleverer.
And so everyone forgets what CS actually is and what it’s meant to do: push the boundaries and [re]define the state of the art. Just as in the sciences, you need a vibrant theoretical community generating new ideas and improvements that can eventually feed into the applied world, otherwise the latter grows closed-minded and stagnant. It’s not healthy, but I suspect a lot in the programming world isn’t what it ought to be.
[1] Which should probably be called ‘Computer Math’, but that’s a separate gripe.
Eh, the line is somewhat blurry. Most CS degrees increasingly include a lot of practical stuff, while SE also covers theory. Not worth making a fuss over, I think.
That is because the boundary between Applied and Theoretical Computing is blurred.
“If I apply this patch, this should theoretically work”
Perhaps FP is a special case, but it only takes one mutable piece of data or a function with a side effect to make it useless (in a concurrent/parallel context).
So I agree that single paradigm languages have their place. Also, many of the things done in OOP can be done in FP but using a different way of thinking.
And I think that is the crux of the problem. People don’t want to have to learn a new way of thinking. They are OK with learning new APIs, or even new languages that fit their mold (OO, imperative, relational, logical, etc.), but having to think in a new way is too much work for many people.
Agreed. Over the years, I’ve found very little value in purely-functional languages – but I find a *lot* of value in having functional elements within more traditional procedural or OO languages.
Fixed that for ya
Seriously now.
It’s true, pure functional is only a marginal improvement over impure functional. It’s sort of like the distinction between weak and strong typing. But just as we prefer strong typing for the guarantees it gives us, we should prefer pure functional to impure.
“we prefer”?
If you’d like to be helpful, please point me to someone who prefers weak typing to strong typing. Then put that in context of everyone who prefers the reverse.
Please note that this is not a comment on the preference for dynamic or not. A dynamic language can still be weakly or strongly typed underneath (Python for instance is a strongly typed dynamic language.)
“We” in this case is in reference to the general populace of programmers.
There are many programmers out there who prefer weak typing, so “general populace of programmers” is, well, a generalization.
I’m somewhere in between. Sometimes I like strong typing (large models with a simple structure), sometimes not (complex or dynamic models). A quick exercise: how (and why) would you classify a small, hungry, short-haired, black cat?
Also, while dynamic languages may or may not be strongly typed (semantically), they all have to be weakly typed during compilation – the only time strong typing actually matters.
Of course it is a generalization, however, it’s a perfectly valid one. There are very concrete reasons to prefer strong typing to weak typing in nearly all cases.
If anything, complex models dictate strong typing, if only to give you appropriate guarantees and signposts for navigating the model. It’s simple models in which adhering to rules isn’t so important. For example, your cat is a simple model with just 4 properties to vary on (plus animal type, if we’re dealing with other animals).
A strongly typed language is a strongly typed language, no matter when the enforcement occurs. Static types merely help catch violations sooner, but you’re still not allowed to violate the system if we check later.
No generalizations are valid, especially when there are whole groups of people who think otherwise (here: people who, for very concrete reasons, prefer weak typing to strong typing in nearly all cases).
Strong typing works extremely well with abstract single-function objects (a number, a string, a list, a set) and fails *badly* at any attempt of modeling real objects. In a way, you have admitted that yourself – a “cat” from my example would be an “Object” type, distinguished only by its fields, not the type.
Strong typing implies enforcement, otherwise it wouldn’t be “strong”. Preferably at compile time, and at least at runtime, although many advocates of strong typing would disagree with that. The problem with your example is that Python doesn’t even enforce types at runtime – a “type” (class) is just a method of reusing implementation. At runtime the interpreter is only interested in whether the field/method you’re accessing is defined or not – something that may and does change at any time.
I see what you did there.
Your cat example was simple, so it deserves a simple implementation. If you’d wanted me to model the complete biological workings of a cat, that’d be an example of a complex model that would very much have components that differ in type. And I’d almost certainly want them to be strongly typed; no using a lung as a leg!
All current development models fail horribly at modeling the real world. Weak typing gives you no leverage here. Dynamic typing gives you no leverage. Structured, functional, or object oriented paradigms give you no leverage. The world is a messy graph that you can’t simplify into something nice and neat.
Python is a strongly typed language. There isn’t any debate about this. It’s simply a fact.
The “cat” example was a simple object, yet most type systems would fail to classify it properly. Real objects have multiple “types” in many hierarchies, often changing over time.
As for Python – there was never any debate about it. It has always been a weakly typed language by most definitions (admittedly, not yours). In fact, Guido often had to defend his choice against legions of people who prefer strong typing:
http://www.artima.com/intv/strongweak.html
Then why does the second page of what you linked explicitly say it’s not weak typing? Why does Wikipedia’s page say Python is strongly typed? Why do all of c2’s pages on typing say Python is strongly typed?
No, I’m afraid you’re wrong. Python is strongly typed. It’s just checked at runtime.
How would most type systems fail to classify a cat properly?
-- enumerate the simple attributes as their own types…
data FurType = Short | Long | Medium
data Color = Black | Orange | White | Tabby | …
-- …then a cat is just a record of those attributes
data Cat = Cat { hungry :: Bool, fur :: FurType, color :: Color, … }
mycat = Cat True Short Black …
It isn’t that cats are hard to design a strong model for. It’s that there’s no point in doing it, because it’s irrelevant. The following is just as sufficient for this:
(True, "short", "black", …)
Easy enough to check:
“Weak typing is not really a fair description of what’s going on in Python.”
Which is fair given that Python uses types for implementation reuse and error reporting. This of course doesn’t stop me from adding a method “bark()” to an existing class “cat”.
It also says that Lisp is strongly typed (a language specifically designed for making all non-primitive values untyped). The reason they give is that it doesn’t do type coercion, which is IMHO a stupid way of defining “strong typing”. But I will accept that, so if you wish, we may end the discussion here.
Interesting (really!). But note that Haskell’s type system is definitely not what people would call “most type systems”.
And even Haskell can’t make a “cat” object sometimes hungry, sometimes not. (True, this goes deeper than the type system in Haskell).
http://c2.com/cgi/wiki?WeakAndStrongTyping
http://c2.com/cgi/wiki?TypingQuadrant
http://en.wikipedia.org/wiki/Weak_typing
Type coercion is a generally accepted criterion for strong/weak. Not the only one, but it’s an easy tool to use.
Strong/Weak: Does the type of a value change? Or, is type checking even done?
Type systems are weak if the answers fall closer to “all the time and never” and type systems are strong if “never and always”. Irrespective of when the checking is done. Mind you, this is a gradient. The exact point when a language ceases to be strong and becomes weak is fuzzy (and may effectively change depending on what subset of the language we’re talking about.)
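To illustrate the “never and always” end of that gradient with a quick sketch of my own (Haskell here, since it sits at that extreme): a value’s type never changes implicitly, and every use is checked.

-- bad = 1 + "2"             -- rejected at compile time: no implicit coercion
half :: Int -> Double
half n = fromIntegral n / 2  -- any conversion has to be written out explicitly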
IMHO, adding cat.bark() isn’t weak typing so much as open typing (not sure that’s an actual phrase, but I’ve heard people describe class systems as “open” when you’re allowed to add things after definition). It’s not really that the cat is no longer a Cat; it just got new abilities.
And as you say yourself, Lisp’s data abstractions are untyped. But when there are types (e.g. primitives, functions, macros,) pretty much all Lisps use strong typing (some even insert static typing when they can.) In this case, it’s sort of a value judgement. Is untyped the same as weakly typed? After all something that is untyped doesn’t even have a notion of type to change, but it may have a structure that does get checked (which amounts to type checking.)
If we go meta and take a look at this whole conversation (which has gone on since the invention of those terms…)
The fact that you can categorize something as strong or weak typing, but differentiate it from compile-time or run-time typing, and differentiate it from static and dynamic typing, and the conflation with OO, says something about the inherent flaws of any kind of language typing.
Really the conversation should be about:
Strong, compile-time, static typing vs
Strong, compile-time, dynamic typing vs
Strong, run-time, static typing vs
Strong, run-time, dynamic typing vs
Weak, compile-time, static typing vs
Weak, compile-time, dynamic typing vs
Weak, run-time, static typing vs
Weak, run-time, dynamic typing.
Sadly, that’s not even a comprehensive listing of possible type systems…
Of course, Gödel’s incompleteness theorems apply to type systems. So any type system is going to be incomplete or inconsistent. Generally speaking, the most extreme strong type system is going to be incomplete and a very weak type system (say that has no types) is going to be inconsistent. Compile/runtime follows the same sort of pattern (in that compile time checked systems will filter out some “valid” programs and runtime checked systems will have some inconsistent programs [corner cases where there’s no way to figure out how to operate on what we have].)
Static and dynamic are the outliers here. Instead of being a characteristic of the expressiveness of a type system, static/dynamic is more about the actual expression of the types. They’re more akin to SVO vs SOV vs VSO vs … sentence structures than they are to choices in strong/weak or compile/runtime.*
*Dynamic languages are often conflated with runtime type checking, but this is technically unnecessary. Runtime checking is easier than compile time for dynamic languages (due to names/variables having no defined type,) but (for example) many lisp compilers have done compile time type checking en route to optimization.
Disagree: pure functional is a serious step backward compared to a language that allows you to mix functional with procedural or OO techniques. Seriously, why should I prefer pure functional when all that gives me is reduced flexibility?
“pure functional” = functional,
“functional” = maybe functional, maybe not – review the code.
OTOH, I agree that current pure functional languages are too limiting/inconvenient in practice.
In very simple terms, you’ve gotten the core of my argument.
Personally, I don’t find the limitations of a pure language to be particularly restrictive. I mean, I am most productive in Haskell.
Exactly because it reduces flexibility. That’s why I compared it to weak/strong typing.
Both strong typing and “pure functional” provide additional guarantees about the content of some object (values/variables and functions respectively.) In the case of pure functions, the guarantee is that they will always be referentially transparent and have no side effects. It’s simply impossible to construct something which isn’t. This is a valuable guarantee, at least for a compiler, due to the optimizations such a function allows.
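A minimal sketch of what that guarantee looks like in practice (Haskell, with names I made up): the pure function can be cached, reordered or parallelized freely, while the type of the impure one warns you that the outside world is involved.

-- pure: equal inputs always give equal outputs, no side effects,
-- so calls can be shared, reordered or run in parallel
price :: Double -> Double -> Double
price net taxRate = net * (1 + taxRate)

-- impure: the IO in the type advertises the side effect
priceFromDB :: String -> IO Double
priceFromDB item = do
    putStrLn ("looking up " ++ item)   -- visible effect
    return 42.0                        -- placeholder value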
I would disagree here. The problem is that it only takes one mutable piece of data or one function with a side effect to ruin an entire call chain. Mixing and matching pure FP with anything else renders your code non-pure and thus you may as well not have written in an FP style at all.
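In Haskell that “ruined call chain” is at least visible in the types (a hedged sketch with invented names): one effectful call at the bottom forces IO onto every caller above it.

readConfig :: IO String             -- effectful at the bottom…
readConfig = readFile "app.conf"    -- hypothetical file name

describe :: IO String               -- …so every caller is IO too
describe = do
    cfg <- readConfig
    return ("config: " ++ cfg)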
Less seriously, it forces users to think multiple ways. I saw many java developers complain vehemently about the upcoming lambdas in Java 8. So one developer might write code in an OO style, and another in an FP style, and now everybody has to learn both ways of doing it.
Carmack had an (IMO) interesting blog post on functional programming from a C++ perspective:
http://www.altdevblogaday.com/2012/04/26/functional-programming-in-…
I love functional programming.
But imperative programming more closely matches the way that people actually think. People think in concrete, step-by-step terms: I get out of bed, I take a shower, I brush my teeth, I comb my hair, I get dressed, I eat breakfast, I leave the house, I go to work, I park my car, I walk into the office, etc., etc. In order to achieve a particular goal, people do each step in a particular order.
Functional (and logic) programming is more attuned to abstract/mathematical thinking (and the more declarative the programming is, the more it becomes like math). Most people are not adept at that.
If that were how OO programs were written, the world would be a better place. Most programs (and toolkits) look more like this:
“[You,] get out of bed! Take shower! Brush teeth! Comb your hair!”
or even:
“take a comb, raise your hand, do {stroke your hair} until (enough)”.
Calling another object’s “doSomething” or “setSomething” methods is the modern equivalent of calling “goto”. A single setter may look innocent, but try to sequence a number of them and you will have to deal with all the complexity (structure, sequencing, side effects) that leaked out of the object. That’s because the control is no longer inside the object but in the caller of the method.
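For contrast, here’s a hedged functional sketch of that morning routine (invented names, nothing canonical): each step is a pure transformation returning a new value, so the sequencing lives in one composed pipeline instead of leaking out through setters.

data Person = Person { inBed :: Bool, clean :: Bool, dressed :: Bool }

getUp, shower, dress :: Person -> Person
getUp  p = p { inBed = False }    -- each step returns a new Person
shower p = p { clean = True }
dress  p = p { dressed = True }

morning :: Person -> Person
morning = dress . shower . getUp  -- the whole routine, composed in one place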
How did I know in advance that the article would link to the TIOBE Index?
into Perl 6, a complete rewrite of Perl that began 12 years ago.