Effective C#: Implement the Standard Dispose Pattern
Learn how to write your own resource-management code when you create types that contain resources other than memory, particularly for disposing of nonmemory resources.
It is broken because not only do I, the implementer of an object, have to implement Dispose() when my object uses an unmanaged resource, but you, the client using my object, must remember to call Dispose() on it (or use a “using” statement for my particular object).
This is totally false. If you were a good developer, you would do the following (don't fault the good developers for the few bad ones out there):
<pre>
class MyClass : IDisposable {
    public MyClass() { /* constructor */ }

    public void Dispose() {
        Dispose(true);
        GC.SuppressFinalize(this); // very important: the finalizer no longer needs to run
    }

    protected virtual void Dispose(bool disposing) {
        if (disposing) {
            // clean up managed resources here
        }
        // clean up unmanaged resources here
    }

    ~MyClass() {
        Dispose(false);
    }
}
</pre>
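Client code then looks roughly like this (just a sketch, using the hypothetical MyClass above):
<pre>
MyClass obj = new MyClass();
try {
    // ... use obj ...
} finally {
    obj.Dispose(); // deterministic cleanup, on the caller's schedule
}

MyClass forgotten = new MyClass();
// No Dispose() call here: the finalizer (~MyClass) will eventually run Dispose(false)
// and release the unmanaged resources, but only when the GC gets around to it.
</pre>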
I hope this was educational for all of you out there who say you can't control how and when the cleanup happens. Not only can I clean up on my own schedule, but if I don't, the GC will clean up the unmanaged resources for me.
Ignorance is no excuse to rag on something you don't understand. It's even more ignorant to claim you understand the language when you are missing something as simple as the code I just showed above.
Do you see any explicit resource management here? Did you see any need for it? Did you see a leak?
Great, you created three lines of code that don't have a memory leak; however, most applications I deal with aren't three lines of code. In addition, from what I gather from your explanation above, the code can turn into real spaghetti code if you are not careful, which in turn makes debugging a whole hell of a lot harder.
…is that he doesn't even realize that C++ is only a quasi-object-oriented language, as is Java. .NET, on the other hand, corrected the problem of using primitives, so it is truly object-oriented.
OO is not the end-all, be-all of programming paradigms.
And give me a definition of OO that a majority of people agree upon. I don't think it exists. Various people have different conceptions of what exactly defines OO and what the minimal feature set of a language must be to be deemed OO. Therefore, saying that language X is more object-oriented than language Y doesn't make much sense.
And please define “primitives”.
Great, you created three lines of code that don't have a memory leak; however, most applications I deal with aren't three lines of code.
This is called a sample. A snippet. Something short, that can be posted in a comment, to outline the principle of something.
I could have made the same pointless argument about your own example.
In addition, from what I gather from your explanation above, the code can turn into real spaghetti code if you are not careful,
How so? How can using a custom pointer instead of regular ones be at all related to the overall structure, or lack thereof, of the code?
The C++ for .NET which is currently being worked on will offer a “best of both worlds”: The destructor gets called at a deterministic time, yet memory management is left to the garbage collector.
I am certainly not dissing C# as a welcome alternative for rapid application development, and it’s much nicer than, say, jscript for that purpose.
Also, I’m not suggesting C++ is perfect.
What I take issue with is the hordes of people who have coded their Easter Date Calculator in .NET in less time than it took in C++, and hence deride everyone "still" using C++ as being "old-fashioned" and outdated.
That’s why I wrote the article – C# is not my favorite programming language, because it doesn’t fit the kind of work I need to do.
As an example, someone here at our company writes an application in C# on top of a server in C++. There are objects representing (large) images, which are wrappers around shared memory. To the .NET garbage collector, these objects look quite small, so it doesn’t bother cleaning them up. However, virtual memory space was getting exhausted and the application ground to a halt.
The programmer had to resort to manually implementing a Dispose() method which explicitly freed the memory when it was no longer in use (despite the original object being a COM object, which already has reference counting) and calling Dispose() “every once in a while”.
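A sketch of what that kind of manual workaround can look like (SharedImage is a hypothetical wrapper, Marshal.AllocHGlobal stands in for the real shared-memory allocation, and GC.AddMemoryPressure assumes .NET 2.0 or later): expose Dispose() for deterministic release, and tell the collector how expensive the object really is so it doesn't treat it as small.
<pre>
using System;
using System.Runtime.InteropServices;

class SharedImage : IDisposable {
    private IntPtr buffer;       // unmanaged image buffer (stand-in for the shared memory)
    private readonly long size;  // size of the unmanaged block, in bytes

    public SharedImage(long size) {
        this.size = size;
        buffer = Marshal.AllocHGlobal((IntPtr)size); // stand-in for the real shared-memory allocation
        GC.AddMemoryPressure(size); // .NET 2.0+: tell the GC what this object really costs
    }

    public void Dispose() {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing) {
        if (buffer != IntPtr.Zero) {
            Marshal.FreeHGlobal(buffer);
            buffer = IntPtr.Zero;
            GC.RemoveMemoryPressure(size);
        }
    }

    ~SharedImage() {
        Dispose(false);
    }
}
</pre>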
Since the whole concept of garbage collection was “you don’t have to worry about your old objects anymore”, I don’t think this is something this programmer should have to waste his time on. Especially since it would have been avoided if destructors had been called at a deterministic time.
This is totally false. If you were a good developer, you would do the following (code snippet calling Dispose() from the destructor removed).
This is only half of the story. As I keep trying to explain again and again, there’s no way of knowing when my destructor gets called, so neither am I sure when my Dispose() gets called. Again: my client has to remember to do so, which is a source of errors.
And please define “primitives”.
A primitive, in essence, is something that acts like an object to the programmer but is interpreted by the compiler and translated into something the processor understands. For example, an int on a 32-bit processor is different from one on a 16- or 64-bit processor. Usually primitives are numbers and other processor-specific "devices". (I use "devices" loosely, because I am not sure whether all languages have only numbers as primitives; all the ones I have run into do, though.)
However, the "primitives" (a better name for them is aliases) in .NET ARE objects, and carry with them everything that an object normally would (i.e. methods, properties, events, whatever).
I also meant no offense, which is why I used "quasi-object-oriented", which is an accepted term among academics, at least when I was in school, for describing C++ and Java. The term applies because the building blocks for all the rest of the classes in a package, namespace, or whatever are built off of the primitives, which are not objects. That is what makes C++ and Java quasi-object-oriented.
The C++ for .NET which is currently being worked on will offer a “best of both worlds”: The destructor gets called at a deterministic time, yet memory management is left to the garbage collector.
YES YES YES, you guys are finally getting it: it doesn't depend on the language at all. If I wanted to program everything in unsafe code in C#, I could use pointers and clean stuff up myself. But I don't want to. And if the functionality you mentioned is available for C++, it is available for C#, VB.NET, etc., because they all compile down to the same IL; in addition, they can use each other's strengths and weaknesses.
IT'S NOT ABOUT THE LANGUAGE, IT'S WHAT YOU CAN ACCOMPLISH WITH THEM. Everybody in this forum was making a big deal about the language when it's not really about the language. In .NET it's about what you can accomplish with the many languages that you have at your disposal.
you guys are finally getting it
Sigh. I spent several postings here trying to explain how, currently, C# does not have deterministic destructor behaviour and why that is bad, and got posts in return stating that this is not a problem at all and that "this guy really doesn't understand it", and that what I say is "totally false". Then I point out that even the .NET people recognize it, and suddenly "we" are finally getting it?
Oh well. Since I refuse to get with the times and am still stuck using an outdated programming language, I have some more work to do while you guys can already go home, so I'd better get to it. Cheers!
Alright. See how even simple concepts have different names across different programming languages…
What is called a primitive in C# is (at least in Stroustrup's book) called a concrete type or value type in C++.
And yes, it definitely exists in C++. It's indeed missing from Java.
I give you that int, long, char and the other basic types in C++ aren't objects. But you can very well define a class with, for instance, only one int member and inline functions (including operators), and you get the exact same thing as C# primitives: it ends up in a single register, operations are inlined, and you can also define custom treatments, specific member functions, and such.
The smart_pointer in my example would be exactly that kind of thing: only a regular pointer as a member, and construction/destruction would increment/decrement the reference counter of the object (and possibly delete it).
But, in my mind (and obviously in the minds of Java's designers as well), defining custom types is not the same paradigm as OO programming, even though in C++ and C# it uses the same mechanism.
Don’t forget about the Boost smart pointers! You really don’t have to implement them yourselves all that often anymore. Other than that, you’re dead on.
he doesn’t even realize that C++ is only a quasi-object-oriented language
You say that like it's a bad thing. Being able to use whatever programming paradigm is appropriate to the problem domain is a strength as far as I'm concerned. Though unquestionably useful, OO is not the end-all, be-all that it's cracked up to be. Template metaprogramming, for example, is a very powerful way to solve problems.
I spent several postings here trying to explain how, currently, C# does not have deterministic destructor behaviour and why that is bad, and got posts in return stating that this is not a problem at all and that "this guy really doesn't understand it".
All I can say is you're absolutely correct. Anybody arguing against this point hasn't been reading MSDN lately, and obviously hasn't been reading about the advantages of using C++/CLI. Since there's information aplenty on MSDN, I won't repeat what's written there… if you doubt what Sander has been saying, go do some homework.
I don’t know C#, but from what I’ve read here, this is the problem:
A simple example (not proper code).
<pre>
public void talkToGoogle()
{
    MySocket abc = new MySocket("www.google.com");
    abc.send("Hello");
    // do other stuff
}
</pre>
Now, at some reasonable time, when the object gets deleted, abc.closeSocket() would have to be called.
A fantasy C# compiler/runtime would recognize that abc is not used again and delete it immediately. However, there is no guarantee this will happen. abc may actually stick around for a while (hogging socket connections).
The easy solution is to take the view that closing the socket is separate from deleting the object. Closing it is just another function call, like send().
To help differentiate between deletion and merely closing resources, C# introduces a 'dispose' function. In this case closeSocket() is called inside dispose(). This way, any programmer knows the name of the function to release resources. Without it, programmers would have invented solutions like close() for sockets, freeImageMemory() for large image files…
So, a smart/safe programmer would write this code as follows.
<pre>
public void talkToGoogle()
{
    MySocket abc = new MySocket("www.google.com");
    abc.send("Hello");
    abc.dispose();
    // do other stuff
}
</pre>
The question is what to do with the programmer who doesn't call abc.dispose(). IMHO, he is 'telling' the compiler/runtime that he does not care when the object's resources get released. As a result, if MySocket implements the proper dispose pattern, it would automatically call dispose from the finalizer, so the socket would eventually, at some point, get released.
The only thing that comes to my mind is for the .NET compiler/runtime to guarantee that, as soon as it knows an object is not needed and that object implements the IDisposable interface, it automatically calls dispose(). This way, it would also separate releasing resources from releasing memory.
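A minimal sketch of that safety net (in real C#, the method is the capitalized Dispose() from IDisposable; the actual connection-closing code is elided):
<pre>
class MySocket : IDisposable {
    private bool closed;

    public void Dispose() {            // the "dispose()" discussed above
        CloseSocket();
        GC.SuppressFinalize(this);     // the safety net below is no longer needed
    }

    // Safety net: if the caller never calls Dispose(), the GC eventually runs
    // this finalizer and the socket still gets closed, just at a time of the
    // GC's choosing rather than deterministically.
    ~MySocket() {
        CloseSocket();
    }

    private void CloseSocket() {
        if (closed) return;
        closed = true;
        // ... actually close the underlying connection here ...
    }
}
</pre>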
I wasn't saying that in C# they are called primitives. In Java (and I have heard it said many ways for C and C++) they are called primitives. In C# the best term is alias.
I don't care what you call them. I was responding to the implication you were making that C++ wasn't a proper OO language because you couldn't do that. I just needed a clarification as to what you were referring to first.
It's held as true in most academic classrooms (by people who actually care about this stuff) that because C++ uses primitives (value types or whatever you want to call them), it is not truly object-oriented. A language is not truly object-oriented until you can do:
int.SomeStaticMethod()
— or —
int i = 0;
if (i is Int32)
Console.Write(i.ToString());
That is what I mean by totally object-oriented. If a part of the language isn't object-oriented but is treated like it is just for the sake of the programmer, then the language isn't totally object-oriented.
For a language to be object-oriented you need "EVERYTHING" to be an object, not just some parts of it. That is what I was talking about. The word "primitive" comes from types that aren't objects and sit at the base when you get down to the nitty-gritty after going through all the levels of inheritance.
Basically everything in C++ all boils down to using “primitives”, much like all of math eventually boils down to addition.
I'm surprised no one has mentioned the "using" keyword. Using this keyword automatically calls Dispose() on objects that implement IDisposable, and makes for much more readable code.
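For instance (a small sketch; "data.txt" is just a placeholder file name, and the System/System.IO namespaces are assumed):
<pre>
using (StreamReader reader = new StreamReader("data.txt")) {
    Console.WriteLine(reader.ReadLine());
} // reader.Dispose() runs here automatically, even if an exception was thrown

// The compiler expands the block above into roughly this:
StreamReader reader2 = new StreamReader("data.txt");
try {
    Console.WriteLine(reader2.ReadLine());
} finally {
    if (reader2 != null) reader2.Dispose();
}
</pre>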
“The programmer had to resort to manually implementing a Dispose() method which explicitly freed the memory when it was no longer in use (despite the original object being a COM object, which already has reference counting) and calling Dispose() every once in a while”.
Then the programmer still did it the wrong way. Have a look at System.Runtime.InteropServices.Marshal.ReleaseComObject(), there is an excellent blog about using this method here: http://blogs.msdn.com/cbrumme/archive/2003/04/16/51355.aspx.
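The shape of it is roughly this (a sketch; GetImageFromServer() is a hypothetical stand-in for however the C++ server hands back its COM object):
<pre>
using System.Runtime.InteropServices;

object comImage = GetImageFromServer(); // hypothetical call returning the COM RCW
try {
    // ... use the image ...
} finally {
    // Release the runtime callable wrapper's reference immediately, instead of
    // waiting for the GC/finalizer to do it. Returns the remaining reference count.
    Marshal.ReleaseComObject(comImage);
}
</pre>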
It's a good idea NOT to get overzealous with implementing the Dispose pattern; you only need it if you need deterministic finalization, which for 90% of the scenarios out there is simply not needed. If you need more aggressive GC'ing, use the server GC, not the client one (google mscorsvr.dll and mscorwks.dll for more information concerning the differences between the two GCs; it is worth noting that the two have merged in 2.0 and mscorsvr.dll appears to be the GC engine). Cheers.
(1) Someone said that C++ for .NET offers the best of “both worlds”, namely automatic destructor calling and automatic memory management (MM). I just wonder, when the destructor is called, why not free the memory, so the system does less swapping?
(2) Sander said that you can’t know when the destructor is called (or if?). Someone else(?) said that the problem is that the *client* has to remember to destruct. That’s the exact reason why people should use blocks in C++ where appropriate (because cons-/destruction is automatic then), or use methods like the NSAutoReleasePool in OpenStep, so that at some point the object is scheduled for destruction, even if you forget about it.
Ulrich
Hi. I just noticed your moderated-down link (which is IMHO at least partly on-topic…).
You hit the nail on the head. What you write is *part* of what I think is wrong with Java, and OOP in general, as it is used!
Too bad the rest of your website is all in Dutch
Why does Microsoft persist with the thoroughly discredited Hungarian Notation? E.g. IDisposable. It should be 'Disposable', as is customary in Java, because the compiler should sort out whether it is a concrete class or an interface, and it means you can switch Disposable between a base class and an interface without having to change all your client code.
Also, Sander is right about the impact of non-determinism. Try writing image processing applications (which I do) which combine lots of images and require the memory to be disposed of as soon as possible. Having guaranteed disposal is not the same as timely disposal – which is why it is effectively possible to have memory leaks in both Java (which is well-documented) and .NET. At least in Java if you set the reference to null and call System.gc() there is a good chance that cleanup will happen.
I think it’s hard to change an interface to a class without breaking all code (in a single-inheritance language).
Also, inheritance in general is a bad/dangerous thing (fortunately more and more people adopt this opinion). Even James Gosling (the main Java designer) said that if he were to design Java today he would leave out classes (and keep interfaces).
As IDisposable indicates merely that a class conforms to the Dispose protocol, I think the name is not too bad.
BTW. I did a quick google on Hungarian Notation. What’s so bad about it? It seems to be the notation that most non-C libraries use (Java certainly), and I find o.addVal(v) better than o.add_val(v), for that matter (partly because the _ is awkward to type…).
Why oh why oh why would you want to manually invoke the gc? This has been beaten to death in the .Net circles, but as a refresher (for starters at least):
http://blogs.msdn.com/maoni/archive/2004/06/15/156626.aspx
http://blogs.msdn.com/maoni/archive/2004/09/25/234273.aspx
If your app is properly designed (and you keep objects in gen0/gen1), the GC should do a fine job left to its own devices. Manually invoking it will certainly leave you worse off performance-wise than taking the initial memory hit.
BTW. I did a quick google on Hungarian Notation. What’s so bad about it? It seems to be the notation that most non-C libraries use (Java certainly), and I find o.addVal(v) better than o.add_val(v), for that matter (partly because the _ is awkward to type…).
This is not Hungarian Notation. Hungarian Notation means you prepend variable names by a moniker depending on their type, so you’d have int iIndex, double dValue, etc. all the way up to LPZWRSTR or whatever it’s called.
This is bad, because the name of the object should say something about what it represents, not how it represents it.
So, a smart/safe programmer would write this code as follows.
<pre>
public void talkToGoogle()
{
    MySocket abc = new MySocket("www.google.com");
    abc.send("Hello");
    abc.dispose();
    // do other stuff
}
</pre>
But a “smart/safe programmer” wouldn’t need to rely on a garbage collector, because she would never forget to release her resources.
The whole point is that you don’t have to bother with disposing resources yourself. I find that with deterministic destructors and the RAII idiom, you don’t really need a GC.
By the way, even Herb Sutter agrees:
http://blogs.msdn.com/hsutter/archive/2004/07/31/203137.aspx
(excerpt for the lazy: “The C++ destructor model is exactly the same as the Dispose and using patterns, except that it is far easier to use and a direct language feature and correct by default, instead of a coding pattern that is off by default and causing correctness or performance problems when it is forgotten.”)
Tada.
Try writing image processing applications (which I do)
We’re looking to hire another Image Processing Guy. Care to move to The Netherlands? The weather sucks, but coffee’s free here at work.
C# has destructors, but they get called at an indeterminate time. That is what’s broken about the language.
It is broken because not only do I, the implementer of an object, have to implement Dispose() when my object uses an unmanaged resource, but you, the client using my object, must remember to call Dispose() on it (or use a “using” statement for my particular object).
Ridiculous. There’s nothing “broken” about it. Languages such as Java and C# use a garbage collector to manage memory resources. By definition, garbage collectors abstract this memory management away from apps because (a) it’s tedious, (b) it’s error-prone to have apps do it, and (c) forcing immediate destruction of resources isn’t performant. If that isn’t good enough, both Java and C# provide means to force the garbage collector to invoke the finalizers that are sitting in the pending queue.
Using a Dispose pattern is really a marginal-use scenario. Very few apps are going to need this kind of capability unless they manage a lot of (or large-sized) unmanaged resources. But don’t blame the language for a lack of understanding of garbage collection.
But don’t blame the language for a lack of understanding of garbage collection.
I assume you meant my lack of understanding, but your sentence can also be parsed like this: “Don’t blame the language because it lacks understanding of garbage collection.” And this is exactly what I want to do. If a language provides a garbage collector, it should damn well understand garbage collection. So far, nobody here has pointed out that there are other resources beside just memory, and the garbage collector currently in .NET doesn’t cope well with that.
Everyone keeps bringing up that this is a "marginal use case". Exactly! I, the implementer of an object, know that I'm using precious resources which should be released ASAP. The only way I can "enforce" this in .NET is by printing in big red letters on the box I'm selling my component in that its users are supposed to call Dispose() "every once in a while". Programmers will forget this, because they're told that they "almost never" need to worry about disposing of resources themselves.
Had C# offered deterministic destructors, this problem would have been easily overcome by me, implementer of the object, by giving it a destructor and calling Dispose() from it.
I’ll stop wasting my time on the subject until someone can give me an example of why it would be better to have indeterministic destruction (random generators are explicitly excluded!).
As you point out, Hungarian Notation was originally used for C types. However, 'IDisposable' mangles the type into the name, so I consider it the OO equivalent (i.e. quite poor programming style, but perhaps that's just my opinion).
I agree with regard to inheritance, I have been arguing for over a decade that inheritance has been used inappropriately and excessively – although sometimes it still has its place. Fortunately the Computer Science people and industry have now caught up with me [I have a PhD in Astrophysics so I usually have a different way of attacking problems than the classic Computer Science approach].
> We’re looking to hire another Image Processing Guy. Care to move to The Netherlands? The weather sucks, but coffee’s free here at work.
Cheers, my workplace is in the middle of the Botanic Gardens in Wellington, New Zealand, and it has just turned summer, which is pretty hard to beat, especially when plenty of your lovely Dutch ladies come to visit the gardens. Wanna swap places? I'd love to get paid in Euros given the exchange rate with the NZ dollar (1 Euro is $1.75 NZ)
Memory Management is tedious, if:
(1) you use a library (like malloc in C) instead of some nicer syntax, like new in C++ for instance
(2) the programmer has no concept of his program at all (its structure) and just allocates/deallocates resources all over the place. Since you have to manage non-mem resources anyway, I see no reason not to just do all cons/destruction explicitly.
Note that I’m not really a C++ guy, and C is pretty low-level for most things, but in principle I’m doing well with manual MM (would be nice to see a decent high-level language with explicit resource management; maybe I should build one some time…)
All,
Maybe I’m missing the point, but doesn’t C# offer *both* non-deterministic and deterministic GC?
Most of the time, the GC frees memory for old objects as and when it feels like it. You can also add a destructor to your class if you want it to clean up other 'unmanaged' resources automatically.
Sometimes, however, we need to free up memory and other resources deterministically. In which case we implement the Dispose() method in our class. Calling Dispose() causes the resources to be released immediately.
Isn’t this the best of both worlds?
Kramii
Why? It's not like OSNews is specific to Linux, or even to open source. Material on .NET is perfectly reasonable, and interesting to some of us.
While your post is very truthful, the fact remains that .NET is bad for the human race.
Well, I find it interesting to read about .NET, especially since it’s new and strongly hyped and I don’t use it.
Actually, this article reinforces my opinion not to use it, even if it had a cross-platform GUI. They got rid of manual memory management only to reintroduce the same hassle for non-memory resources.
I don't get it. Code that is guaranteed to be called when your object goes out of scope (like C++ destructors or VB6's Terminate method) is tremendously useful. Such code makes any kind of resource management simple, automatic, and transparent. Why don't managed languages like C# and Java have that? Having to call cleanup methods for objects always makes for ugly, error prone, verbose, redundant code. Those are all major bad smells in any code. Every time I write a Java/C# "finally" block I curse Sun and Microsoft.
I didn't read the article but I can answer your question (sorta). C#/.NET has destructors, but they are called when the garbage collector gets ready to reclaim the object, which is not necessarily when the object goes out of scope. IDisposable allows you to implement your object in a way that you can manually release the object BEFORE the garbage collector decides to reclaim it. I've used this in cases where I'm using Direct3D or DirectShow, etc. in C# and I need to get rid of a precious resource. :-D. So, yeah… I guess it *could* be nice if you knew that a destructor would be called as soon as the object goes out of scope, but that complicates things and leads to the possibility that the programmer expects a destructor to be called when the object hasn't really gone out of scope, which causes problems. But then again… that's what IDisposable is for :-D.
"IDisposable allows you to implement your object in a way that you can manually release the object BEFORE the garbage collector decides to reclaim it."
You don’t need an interface and 15 articles to do that… Just call your myMethodForCleaningUp() method when done with the object!
The reason for doing manual cleanup (IF you need to do that) is quite simply this statement from the article:
“Disposing of objects can happen in any order.”
Howdy all
Why don’t managed languages like C# and Java have that? Having to call cleanup methods for objects always makes for ugly, error prone, verbose, redundant code.
Ummm, you don't have to at all, and when you do want to use it, this feature comes in VERY handy.
And who said you wanted to get rid of it once it's out of scope?
What if other objects still have references to the resources? Deleting them because you've simply moved outside a method/function/procedure call is kinda retarded.
Actually, it runs under Linux using Mono or DotGNU (I'm using it on my Linux+PPC box myself and it works quite well).
I’ve been programming without destructors for a while. You know, it’s not that bad as it seem, really.
Pythonistas, Rubyists, and the Javanese don't give a f*ck about finalizers and destructors.
They have them, just like in .NET (but obviously without the stupid need for both a real GC and refcounting), but I have really not seen them used yet. Go figure.
What if other objects still have references to the resources? Deleting them because you've simply moved outside a method/function/procedure call is kinda retarded.
That is the exception, not the rule. If you hand out a reference to your object to someone else, then obviously it needs to be kept alive. This has nothing to do with deterministic destruction.
There are destructors in .NET; everybody seems to be missing the point of IDisposable. In .NET there are a lot of resources that need to be consumed from unmanaged code, such as graphics, network, IO, etc., and since these are unmanaged they bring with them the hassle of unmanaged objects. What makes these unmanaged objects easy to use is the IDisposable interface, an interface that was specifically designed to let you clean up objects in your own time and not have to wait for the garbage collector.
In addition, for all of you saying there is no destructor in .NET: that is totally false. Just declare a method ~MyClass() to create a destructor.
I really hate all you guys commenting on something that you hate and have never bothered to look into. I really can't tell you how nice it is to clean up an object by calling a method like Dispose(). It releases the unmanaged object, such as an IO handle editing a file, releasing the file so it can be used by some other process.
In addition, you know what is using unmanaged objects in the framework because those classes implement the IDisposable interface. Also, the using feature works very well, so you don't have to remember to call .Dispose():
<pre>
using (MyStream s = new MyStream(@"c:\myfile.txt")) { // MyStream : Stream, IDisposable (hypothetical)
    int i = 0;
    while (i < 10) {
        s.Write(i);
        i++;
    }
    s.Write("Your last int was {0}", i);
}
</pre>
And that is all that I have to do to clean up the code, get the garbage collector to get rid of the object i, and call Dispose() on the s object.
If Stream didn't implement the IDisposable interface, the garbage collector would have no idea that there were unmanaged resources in the class and would never clean the unmanaged resources out of memory. That leaves a memory leak, which many of you C and C++ guys out there have to deal with on a regular basis. Personally I love not having to worry about cleaning up objects, because that is not what coding is about, it's about getting the job done. But that also doesn't mean that I am going to be sloppy.
My final thought to all of you is to stop ragging on something you have never used, and the only reason you hate it is because it’s new and you are no longer on the cutting edge of anything. It is really rather sad.
C# has destructors, but they get called at an indeterminate time. That is what’s broken about the language.
It is broken because not only do I, the implementer of an object, have to implement Dispose() when my object uses an unmanaged resource, but you, the client using my object, must remember to call Dispose() on it (or use a “using” statement for my particular object).
Don’t assume we’re all ragging on something because we have never used it. Perhaps we have, and found it lacking. For a more detailed explanation as to why C# is not my favorite programming language, see the link I posted above.
That leaves a memory leak, which many of you C and C++ guys out there have to deal with on a regular basis. Personally I love not having to worry about cleaning up objects, because that is not what coding is about, it's about getting the job done.
I have no idea what you're talking about. Did YOU bother to look into how things are done in C++?
<pre>
void someclass::somefunc()
{
    smart_pointer< Whatever > pSomething( new Whatever );
    smart_pointer< Whatever > pSomethingElse( new Whatever );
    m_pSomeOtherInstance->SetWhatever( pSomething );
}
</pre>
pSomething would then not be freed as long as the instance pointed to by m_pSomeOtherInstance exists.
pSomethingElse would be freed when leaving somefunc.
Do you see any explicit resource management here? Did you see any need for it? Did you see a leak?
I left out the implementation of smart_pointer because it's trivial and beside the point, which is to show what day-to-day programming in C++ using smart pointers that automatically count references looks like.
I'm tired of people saying that memory management doesn't exist in C++. It does. It's only done in a different way than in all those garbage-collector-based languages around, a way that is deterministic and much simpler to implement than a garbage collection solution.
Yes, you have to do it yourself; it's not provided as standard by the language. Instead, the language provides you the tools to do it easily, and the flexibility to do it any way you want (you could very well build a garbage collection system that would be just as easy and transparent to use).
http://www.stoks.nl/rants/favorite.html