While demonstrating the rudiments of anonymous methods, Paul Kimmel answers the question “Are anonymous methods just someone being a bit too clever?”
Looks like the author of this article didn’t grasp the power of closures. He only sees them as syntactic sugar for binding GUI event handlers. Fortunately, the C# designers know better than he does.
Someone said that every language sooner or later reinvents Lisp.
http://www.theserverside.net/articles/showarticle.tss?id=AnonymousM…
” … anonymous methods may just be an example of some inventive person at Microsoft being a bit too clever.”
Ditto to the previous “Lambda” comment. Anyone care to take a stab at how many languages already implement anonymous functions/methods?
Besides, if Microsoft invented them, they wouldn’t be “anonymous” — they’d be named either “MS method” or “Visual function”. 😉
The author of the article is just another enterprise programmer. Seriously guys, before bashing things and appearing ignorant, learn some computer science and stop programming 10-tier XML beans.
RE: the serverside article, a very good read.
Re anonymous functions and capturing local variables…
Wow, that seems like an easy-to-misunderstand ‘feature’. It seems to me that the rules for how local variables are captured into anonymous delegates are not well defined. Is there a reference? Why do they get promoted to some special semi-real variables instead of something ‘simple’ that won’t get abused? I’m a Java person so I’m biased, but I find the rule that local variables cannot migrate into anonymous inner class definitions without being declared final a good one: it makes debugging easier, makes the code easier to read, and makes it ‘more’ deterministic, rather than like his own example, where you’re not sure what will happen due to thread inconsistency.
The rules seem well-defined enough to me. I think the article’s focus on the generated code somewhat obscures the underlying simplicity of the mechanism. It helps to think of closures as just being objects with a single method, created at runtime. When a function may create a closure, its stack frame is allocated on the heap. Any created closures retain a pointer to that stack frame; it is this stack frame that comprises the “member data” of the closure. Therefore, two closures created by the same function on different invocations will refer to two different stack frames, while two closures created by the same function on a single invocation will refer to the same stack frame.
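To make that concrete, here is a minimal sketch of my own (not from the article): two anonymous methods created in the same invocation capture the same local, so each sees the other’s updates.

class SharedCaptureDemo
{
    delegate void Action0();   // hand-rolled parameterless delegate, to stay within C# 2.0

    static void Main()
    {
        int counter = 0;                                        // the captured local
        Action0 increment = delegate { counter++; };            // closure #1
        Action0 report = delegate { System.Console.WriteLine(counter); };   // closure #2

        increment();
        increment();
        report();   // prints 2: both closures share the same captured 'counter'
    }
}

If the two delegates were instead created by a helper method that you called twice, each call would produce a fresh pair of closures over its own independent counter.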
Do you really think that this description simplifies the matter at hand, since you use terms like ‘stack frame,’ ‘pointer,’ ‘heap’ and so forth?
These are essentially lexical closures and can be understood abstractly in terms of capturing the enclosing lexical environment, which means that free variables in a method body take their values from the environment in which the anonymous method was created. That’s it. It’s not especially complicated. All anyone has to do is think of a lexical closure capturing free variables from its environment, much as they would think of free variables (fields, for example) in a method body being obtained from the implicit receiver at method invocation.
class X
{
    int z;
    public int Foo(int y) { return y + z; }
    public X(int s, int r) { z = s * r; }
}

X e1 = new X(1, 2);
e1.Foo(1);   // z is e1.z == 2
X e2 = new X(2, 3);
e2.Foo(1);   // z is e2.z == 6
Here z is bound to e1.z and e2.z respectively when Foo is invoked.
public delegate int FooFN(int y);

public FooFN MkFoo(int s, int r) {
    return delegate(int y) { return s * r + y; };
}

FooFN Foo1 = MkFoo(1, 2);
FooFN Foo2 = MkFoo(2, 3);
Foo1(1);   // returns 1 * 2 + 1 == 3
Foo2(1);   // returns 2 * 3 + 1 == 7
Here s and r are bound to 1 and 2 in Foo1, and to 2 and 3 in Foo2. This isn’t all that hard to understand, and it isn’t bogged down by implementation details like what sort of activation records one has.
I’m a Java person so I’m biased, but I find the rule that local variables cannot migrate into anonymous inner class definitions without being declared final a good one: it makes debugging easier, makes the code easier to read, and makes it ‘more’ deterministic, rather than like his own example, where you’re not sure what will happen due to thread inconsistency.
But then you lose all the power of closures. The ability of the anonymous method not only to access but also to mutate the local variables is very important. I use closures *all the time* in C# now that it has them, and they have greatly increased code reuse (how could passing around closures/bits-of-code-with-context not?).
Thread safety is no more an issue with closures than with anything else you write. Java only avoids thread-safety issues for some uses of closures by simply NOT supporting closures and not allowing you to write your code in that simple manner. If you try to emulate closures in Java and pass the closure to multiple threads, you will run into exactly the same problems.
I recall Guy Steele saying that inner classes in Java originally permitted mutable capture, and that the feature was scrapped because people complained about the automatic heap allocation necessary for the implementation.
It’s not a matter of “non-determinism.” Accessing a field from multiple threads without synchronization is fundamentally no different from accessing a captured variable of a closure from multiple threads without synchronization.
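As a rough sketch of that point (my example, not from the thread), a captured local is protected with a lock in exactly the same way a field would be:

class LockedCaptureDemo
{
    static void Main()
    {
        int total = 0;                         // captured local, shared by both threads
        object gate = new object();

        System.Threading.ThreadStart work = delegate
        {
            for (int i = 0; i < 100000; i++)
                lock (gate) { total++; }       // same locking discipline you'd use for a field
        };

        System.Threading.Thread t1 = new System.Threading.Thread(work);
        System.Threading.Thread t2 = new System.Threading.Thread(work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        System.Console.WriteLine(total);       // always 200000 with the lock; unpredictable without it
    }
}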
> Anyone care to take a stab at how many languages
> already implement anonymous functions/methods?
That’s hardly possible, because there is a great number of languages that almost nobody knows. I know about Lisp/Scheme, Haskell, Miranda, (S)(Oca)ML, JavaScript(!), Smalltalk, Ruby, Python, and C#; Java has very lame read-only closures a.k.a. anonymous inner classes; and probably future versions of VB.NET will have them too.
perl http://hop.perl.plover.com/
Actually the quote goes:
“Any sufficiently complicated program contains an informal ad hoc implementation of half of Common Lisp.”
or “Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.” (Philip Greenspun)
No.
I wish he had said this particular statement, “Linguistically, I love methods, but as a practical matter anonymous methods may just be an example of some inventive person at Microsoft being a bit too clever,” at the beginning, so I would have known to stop right there and would have been spared having to read the whole article.
Heh, the OSN tagline alone warned me not to bother reading it. I actually think most of the informit.com stuff that makes it here tends to be junk, though.
Just keeping the order and number of the parentheses, semicolons, and brackets straight is a pain in the neck.
I’m speechless. It’s called a bracket-matching editor; ever heard of it?
Does anybody know why the <, <=, >, >= operators aren’t implemented for strings in C#/.NET (and Java)? Having to use String.Compare seems a little strange in a language at the level of C# or Java, especially after Object Pascal and C++.
Because of different character codepages. Different codepages will give you different results.
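A small illustration of why that matters (my own sketch, not from the original comment): even on one machine, ordinal and culture-sensitive comparisons can disagree about the ordering, so a built-in < for strings would have to pick one of them arbitrarily.

class CompareDemo
{
    static void Main()
    {
        // Ordinal: compares raw character codes; 'a' (97) > 'B' (66), so the result is positive.
        System.Console.WriteLine(string.CompareOrdinal("a", "B"));

        // Culture-sensitive (current culture): linguistic ordering puts "a" before "B"
        // in most cultures, so the result is negative.
        System.Console.WriteLine(string.Compare("a", "B"));
    }
}

Either answer is defensible, which is presumably why the designers make you say which comparison you mean.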
I’m confused on that, does C# 2.0 have parametric types yet as specified in the recent ECMA standards?
Yes. C# 2.0 corresponds to the 3rd edition of the ECMA C# standard, which includes Generics.
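For anyone who hasn’t seen them yet, a tiny sketch of what C# 2.0 generics look like (the type and names here are just an invented example):

class Pair<TFirst, TSecond>
{
    public TFirst First;
    public TSecond Second;
    public Pair(TFirst first, TSecond second) { First = first; Second = second; }
}

class PairDemo
{
    static void Main()
    {
        Pair<string, int> p = new Pair<string, int>("answer", 42);
        System.Console.WriteLine(p.First + " = " + p.Second);   // prints: answer = 42
    }
}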
Why do non-computer science graduates also insist on making these inane comments about features they don’t understand?
Why is it that you think these people don’t have degrees in CS? Undergraduate CS curriculums can be fairly superficial in their treatment of everything, including programming languages. Then there are the many deficiencies of the actual students…
However, in Paul Kimmel’s case, his resume lists:
Michigan State University East Lansing, MI
Computational Mathematics
That’s a little vague in terms of what level of education was obtained, but from a casual perusal of http://www.mth.msu.edu/UGAdvise/computational_math_bs_new.html I can get a rough idea of what he’d have studied were I to ignore that he doesn’t mention the years he attended (and thus there may be some variations in curriculum requirements).
His resume also appears to state that he has written or co-written about sixteen books. Of course, they all look like the sort of books one would be better off burning for heat than reading. Perhaps even better for throwing at people.
Well, if the mathematics portion of his course consisted of constantly proving the flux of f(x) = something, then he probably would be stumped at closures. Maths is most definitely a necessity for computer science, but it needs to be the appropriate section of maths.
(with horrible syntax)
Guy Steele:
http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg04044…
Their excuse regarding heap allocation is pretty lame. Non-const string concatenation already results in heap allocation.
And as the poster said, autoboxing in Java 1.5 does this too (as does varargs).
Java programmers often use a final Object[] of size one to simulate real closures. That is extremely ugly.
It is much better to have real support for this. The lack of real closures in Java did not prevent anyone from using them; it just makes the code uglier.
Java programmers often use a final Object[] of size one to simulate real closures.
Can you explain further what’s achieved through that? It still won’t capture local variables, right?
Can you explain further what’s achieved through that [the use of a final Object[]]? It still won’t capture local variables, right?
It can capture local variables. Consider:
String s = "Hello";
final Object[] closure = new Object[] { s };   // the one-element array is the mutable slot
return new Runnable() {
    public void run() { closure[0] = "modified"; }
};
There you have it: s is captured within closure, because final applies to the object reference closure, not to the contents of closure.
Why would you want to do this? Because fully capturing closures are terribly useful in some circumstances, and the alternative is to do by hand what C#’s anonymous delegate mechanism does: create a new inner class and populate its variables. Creating a new class requires more lines of code, which means reading (and keeping in human memory) more information to understand what’s going on.
The problem is that you want to create an anonymous class that returns something to the local context. Since anonymous classes capture only final local variables, you just have a local final Object[] of length 1 and store the result in the first position of the array.
It is just an ugly hack, but I saw it quite a lot back when I wrote Java code (I switched to .NET in 2003).
Since anonymous classes capture only final local variables, you just have a local final Object[] of length 1 and store the result in the first position of the array.
I see. That really is ugly.
I guess Sun could employ the same trick, or an automatically generated holder object, to implement proper closures in the language without having to change the JVM.
Thanks for the explanations.
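For what it’s worth, the “automatically generated holder object” idea can be written out by hand. Here is a rough C# sketch (all the names are invented for illustration, and the class the real compiler generates looks different, but the idea is similar): the captured local is hoisted into a heap-allocated object, and the delegate is bound to a method on that object.

class CounterHolder
{
    public int counter;                        // the "captured" local lives here, on the heap
    public void Increment() { counter++; }     // the body of the anonymous method, moved onto the holder
}

class HolderDemo
{
    delegate void Action0();

    static Action0 MakeIncrementer(CounterHolder holder)
    {
        // Instead of capturing a local directly, bind the delegate to a method
        // of the holder; the holder outlives this call because it is on the heap.
        return new Action0(holder.Increment);
    }

    static void Main()
    {
        CounterHolder h = new CounterHolder();
        Action0 inc = MakeIncrementer(h);
        inc();
        inc();
        System.Console.WriteLine(h.counter);   // prints 2
    }
}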
Another problem with Java “closures” is that you can’t throw checked exceptions out of the body unless they are declared in the interface. You can’t throw them from a Runnable because Runnable’s run() declares no exceptions, and if you declare your own Runnable-like interface, you have to declare all possible exceptions there. And you have to repeat those declarations in *each* anonymous inner class you write.
You could wrap the exception in a RuntimeException and then unwrap it on the other end.
Ugly I know…
if you declare your own Runnable-like interface, you have to declare all possible exceptions there.
I was going to suggest providing the exception type as a type parameter, but it appears type parameters are not allowed in throws clauses. Why not?
Yes, that’s the general problem with anonymous inner classes used as closures: the ugliness piles up into utter crap, and then you finally give up on them.
Looks like the author of this article didn’t grasp the power of closures.
He didn’t grasp the concept of C++ inline methods either.