Trove, an open source Java collection package, provides an efficient alternative to the core Java collection classes, especially for implementing collections whose keys or values are primitive types. Is the Trove collection a performance tuner’s dream or an architect’s nightmare? This article takes a look at how the Trove classes differ from the Java collections you are used to, and considers when they should be used.
Elsewhere, Distributed Parallel Programming Environment for Java (DPPEJ) is a set of tools and technologies for developing simple, distributed, parallel applications using the Java programming language. This project is being developed by the IBM India Software Lab.
What’s with the “performance nightmare” in the title? The article discusses the performance advantages of Trove for storing primitives; it doesn’t talk about any downsides of Trove. Autoboxing, on the other hand, was mentioned in the text, and the authors noted that it can cause performance problems: in the worst case, storing a primitive value results in an object allocation. (Boxing an int in the range -128 to 127 does not cause an allocation, because objects for these values are cached; values outside that range do allocate.) These problems can be solved by using Trove.
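The cache behavior described above is easy to check directly: the language specification requires `Integer.valueOf` (which autoboxing uses) to cache values from -128 to 127, while values outside that range allocate a fresh object with the default cache size. A minimal demonstration:

```java
public class BoxingCacheDemo {
    public static void main(String[] args) {
        // Autoboxing goes through Integer.valueOf, which caches [-128, 127],
        // so boxing a small int returns a shared object.
        Integer a = Integer.valueOf(127);
        Integer b = Integer.valueOf(127);
        System.out.println(a == b);   // true: same cached object

        // Outside the cache range, each boxing allocates a new Integer
        // (with the default cache size; the JVM may be told to cache more).
        Integer c = Integer.valueOf(128);
        Integer d = Integer.valueOf(128);
        System.out.println(c == d);
    }
}
```

The second comparison prints `false` on a default JVM configuration, which is exactly the per-entry allocation cost the comment is referring to.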
Trove is an excellent collection library, no argument there.
But the main problem is that the speed increase is not big enough to justify yet another collection library.
The main problem is that collection research has pretty much run its course. Collections, and the algorithms behind them, are well-described, basic programming knowledge.
Unless you have a badly implemented collection, you won’t see any stellar speed increases from swapping collections, even when the keys (in the kinds of collections that allow it) are moved from object types to native data types.
I once had to implement a caching solution and went with the original collections after trying out Trove. The reason was that the speed difference was negligible in my typical situation, because I had to go the thread-safe route, and making something thread-safe carries costs that far exceed anything gained by changing a high-level object key to a native type. As I said, under these conditions the speed difference didn’t justify plugging in a third-party solution for this problem.
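To make the trade-off concrete, here is a hypothetical sketch of the kind of thread-safe cache the comment describes, built on the standard library (the class name `IntKeyCache` and the value format are illustrative, not from the original). The coordination inside `ConcurrentHashMap` is where the cost lives; the boxed `Integer` key is noise by comparison:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative thread-safe cache with int keys. The synchronization
// machinery, not the boxed keys, tends to dominate the cost here.
public class IntKeyCache {
    private final Map<Integer, String> cache = new ConcurrentHashMap<>();

    public String get(int key) {
        // computeIfAbsent boxes the int key, but the whole
        // lookup-or-compute step stays atomic and thread-safe.
        return cache.computeIfAbsent(key, k -> "value-" + k);
    }

    public int size() {
        return cache.size();
    }

    public static void main(String[] args) {
        IntKeyCache c = new IntKeyCache();
        System.out.println(c.get(1)); // computes and stores the entry
        System.out.println(c.get(1)); // hits the cached entry
        System.out.println(c.size());
    }
}
```

Swapping the map for a primitive-keyed one would remove the boxing, but not the synchronization, which is the commenter’s point.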
Hm… I must test this with Jython…
“What’s with the “performance Nightmare” ”
It isn’t a performance nightmare – it is a maintenance nightmare – from the article:
“That’s nine types of keys and nine types of values, making 81 different types of Maps! And worse, a bug in one implies a high likelihood of a bug in the other 80, but the common codebase is limited because you need to implement the algorithms separately to be able to manipulate each data type efficiently, which makes for a heck of a lot of maintenance”
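The reason each of those 81 maps needs its own code is that the backing storage must be typed arrays of the primitive in question, which generics cannot produce. A minimal sketch of an `int`-to-`int` map (hand-rolled for illustration; Trove’s real implementations are far more complete) shows the shape of the per-type work:

```java
// Minimal open-addressing sketch of a primitive int -> int map.
// The backing arrays are int[], so every key/value primitive pair
// needs its own variant of this code: generics can't generate it.
public class IntIntMap {
    private final int[] keys;
    private final int[] values;
    private final boolean[] used;

    public IntIntMap(int capacity) {
        keys = new int[capacity];
        values = new int[capacity];
        used = new boolean[capacity];
    }

    // Linear probing; assumes the map is never completely full.
    private int slot(int key) {
        int i = Math.floorMod(key, keys.length);
        while (used[i] && keys[i] != key) {
            i = (i + 1) % keys.length;
        }
        return i;
    }

    public void put(int key, int value) {
        int i = slot(key);
        used[i] = true;
        keys[i] = key;
        values[i] = value;
    }

    // Returns 'missing' when the key is absent -- no null, no boxing.
    public int get(int key, int missing) {
        int i = slot(key);
        return used[i] ? values[i] : missing;
    }
}
```

Multiply this by every key/value type pair and the maintenance burden the article describes becomes obvious: the probing logic is identical, but the array types force separate copies.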
Howdy
This is the kind of thing the compiler should do; generics should allow it to at least make a somewhat informed decision about whether Objects are to be used.
With a preprocessor they could greatly reduce the maintenance nightmare.
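The reason the compiler can’t do this today is type erasure: every parameterization of a generic class shares a single runtime class, so there is no point at which the compiler could emit a primitive-specialized variant the way a C++ template (or a preprocessor) would. This is directly observable:

```java
import java.util.HashMap;

public class ErasureDemo {
    public static void main(String[] args) {
        // Generics are erased at compile time: HashMap<Integer, Integer>
        // and HashMap<String, String> are the same class at runtime,
        // so no primitive specialization can happen per-instantiation.
        Object intMap = new HashMap<Integer, Integer>();
        Object strMap = new HashMap<String, String>();
        System.out.println(intMap.getClass() == strMap.getClass()); // true
    }
}
```

This is why libraries like Trove hand-write (or generate) one class per primitive type instead of relying on generics.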
There is a reason why C++ wanted to do away with the preprocessor, and a reason why Java got rid of it.
Yes, it can be powerful… but it can also wind up being a nightmare.
If you want to use the preprocessor with .java files, go right ahead – nothing is stopping you – it just isn’t integrated.