Undefined behavior. There are no guarantees whatsoever made by the standard. Probably your operating system makes some guarantees, like "you won't corrupt another process", but that doesn't help your program very much.
Your program could crash. Your data could be corrupted. The direct deposit of your next paycheck could instead take 5 million dollars out of your account.
It's undefined behavior, so the actual result will vary depending on the compiler & runtime environment.
In most cases, the compiler won't notice. In many, if not most, cases, the runtime memory management library will crash.
Under the hood, any memory manager has to maintain some metadata about each block of data it allocates, in a way that allows it to look up the metadata from the pointer that malloc/new returned. Typically this takes the form of a structure at fixed offset before the allocated block. This structure can contain a "magic number" -- a constant that is unlikely to occur by pure chance. If the memory manager sees the magic number in the expected place, it knows that the pointer provided to free/delete is most likely valid. If it doesn't see the magic number, or if it sees a different number that means "this pointer was recently freed", it can either silently ignore the free request, or it can print a helpful message and abort. Either is legal under the spec, and there are pro/con arguments to either approach.
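The magic-number scheme described above can be sketched like this. This is a toy illustration only, not the implementation of any real allocator; the constants, the header layout, and the function names are all invented for the example:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Hypothetical metadata kept at a fixed offset before each allocated block.
const unsigned MAGIC_LIVE = 0xFEEDFACEu; // block is currently allocated
const unsigned MAGIC_FREE = 0xDEADBEEFu; // block was recently freed

struct Header { unsigned magic; };

void* toy_malloc(std::size_t n) {
    Header* h = (Header*)std::malloc(sizeof(Header) + n);
    h->magic = MAGIC_LIVE;
    return h + 1;                  // hand out the memory just after the header
}

void toy_free(void* p) {
    Header* h = (Header*)p - 1;    // look up the metadata from the user pointer
    if (h->magic == MAGIC_FREE) {  // double free detected
        std::fprintf(stderr, "pointer being freed was already freed\n");
        return;                    // silently ignore (printing and aborting is equally legal)
    }
    assert(h->magic == MAGIC_LIVE); // neither magic number: corrupted or foreign pointer
    h->magic = MAGIC_FREE;
    // A real allocator would return the block to its free lists here;
    // this sketch deliberately keeps the header readable for the check above.
}
```

This is exactly why the OSX run below can print "pointer being freed was not allocated": the runtime looked for its metadata next to the pointer and didn't find what it expected.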
If the memory manager doesn't keep a magic number in the metadata block, or doesn't otherwise check the sanity of the metadata, then anything can happen. Depending on how the memory manager is implemented, the result is most likely a crash without a helpful message, either immediately in the memory manager logic, somewhat later the next time the memory manager tries to allocate or free memory, or much later and far away when two different parts of the program each think they have ownership of the same chunk of memory.
Let's try it. Turn your code into a complete program in so.cpp:
class Obj
{
public:
    int x;
};

int main( int argc, char* argv[] )
{
    Obj *op = new Obj;
    Obj *op2 = op;
    delete op;
    delete op2;
    return 0;
}
Compile it (I'm using gcc 4.2.1 on OSX 10.6.8, but YMMV):
russell@Silverback ~: g++ so.cpp
Run it:
russell@Silverback ~: ./a.out
a.out(1965) malloc: *** error for object 0x100100080: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap
Lookie there, the gcc runtime actually detects that it was a double delete and is fairly helpful before it crashes.
int* a = new int;
delete a;
a = nullptr; // or just NULL or 0 if your compiler doesn't support C++11
delete a; // nothing happens! deleting a null pointer is guaranteed to be a no-op
Thought I should post it since no one else had mentioned it.
The compiler may give a warning, especially in obvious cases (like your example), but it is not possible for it to detect this in general. (You can use a tool like Valgrind, which can detect it at runtime.) As for the behaviour, it can be anything: a defensive runtime library might check and handle it gracefully, but other runtimes (for speed) will assume your call is correct (which it isn't) and then crash or worse. The runtime is allowed to assume you're not double-deleting, even if a double delete would do something bad (e.g. crash your program).
Everyone already told you that you shouldn't do this and that it will cause undefined behavior. That is widely known, so let's elaborate on this on a lower level and let's see what actually happens.
The standard universal answer is that anything can happen, but that's not entirely true. For example, the computer will not attempt to kill you for doing this (unless you are programming the AI for a robot) :)
The reason there can't be any universal answer is that, since this is undefined, it may differ from compiler to compiler and even across different versions of the same compiler.
But this is what "roughly" happens in most cases:
delete consists of 2 primary operations:
- it calls the destructor, if one is defined
- it somehow frees the memory allocated to the object
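Roughly speaking (ignoring arrays and various edge cases), the compiler expands a delete expression into exactly those two steps. The class and the destructor counter below are made up for the illustration:

```cpp
#include <cassert>

struct Obj {
    static int destroyed;        // counts destructor calls, just for demonstration
    ~Obj() { ++destroyed; }      // step 1 runs this body
};
int Obj::destroyed = 0;

// What `delete p;` roughly expands to under the hood:
void destroy(Obj* p) {
    p->~Obj();                   // 1. call the destructor, if defined
    operator delete(p);          // 2. free the memory allocated for the object
}
```

Do either step a second time on the same pointer and you are touching memory you no longer own, which is where everything described below goes wrong.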
So, if your destructor contains any code that accesses data of the class that was already deleted, it may segfault OR (most likely) you will read some nonsense data. If the deleted data are pointers, then it will most likely segfault, because you will attempt to access memory that contains something else, or that doesn't belong to you.
If your destructor doesn't touch any data or isn't present (let's not consider virtual destructors here for simplicity), it may not be a cause of a crash in most compiler implementations. However, calling the destructor is not the only operation that happens here.
The memory needs to be freed. How that's done depends on the compiler implementation, but it may well execute some free-like function, giving it the pointer and size of your object. Calling free on memory that was already freed may crash, because the memory may not belong to you anymore. If it does still belong to you, it may not crash immediately, but it may overwrite memory that was already allocated for some different object of your program.
That means one or more of your memory structures just got corrupted and your program will likely crash sooner or later or it might behave incredibly weirdly. The reasons will not be obvious in your debugger and you may spend weeks figuring out what the hell just happened.
So, as others have said, it's generally a bad idea, but I suppose you already know that. Don't worry though, innocent kitten will most likely not die if you delete an object twice.
Here is example code that is wrong but may work just fine as well (it works OK with GCC on linux):
class a {};

int main()
{
    a *test = new a();
    delete test;
    a *test2 = new a();
    delete test;   // deletes test again, not test2!
    return 0;
}
If I don't create an intermediate instance of that class between the deletes, the two calls to free on the same memory happen as expected:
*** Error in `./a.out': double free or corruption (fasttop): 0x000000000111a010 ***
To answer your questions directly:
What is the worst that can happen:
In theory, your program causes something fatal. In some extreme cases it may even randomly attempt to wipe your hard drive. The chances depend on what your program actually is (a kernel driver? a user-space program?).
In practice, it would most likely just crash with segfault. But something worse might happen.
Is the compiler going to throw an error?
It shouldn't.
No, it isn't safe to delete the same pointer twice. It is undefined behaviour according to C++ standard.
From the C++ FAQ:
Is it safe to delete the same pointer twice?
No! (Assuming you didn’t get that pointer back from new in between.)
For example, the following is a disaster:
class Foo { /*...*/ };

void yourCode()
{
    Foo* p = new Foo();
    delete p;
    delete p; // DISASTER!
    // ...
}
That second delete p line might do some really bad things to you. It might, depending on the phase of the moon, corrupt your heap, crash your program, make arbitrary and bizarre changes to objects that are already out there on the heap, etc. Unfortunately these symptoms can appear and disappear randomly. According to Murphy’s law, you’ll be hit the hardest at the worst possible moment (when the customer is looking, when a high-value transaction is trying to post, etc.).
Note: some runtime systems will protect you from certain very simple cases of double delete. Depending on the details, you might be okay if you happen to be running on one of those systems and if no one ever deploys your code on another system that handles things differently and if you are deleting something that doesn’t have a destructor and if you don’t do anything significant between the two deletes and if no one ever changes your code to do something significant between the two deletes and if your thread scheduler (over which you likely have no control!) doesn’t happen to swap threads between the two deletes and if, and if, and if. So back to Murphy: since it can go wrong, it will, and it will go wrong at the worst possible moment.
A non-crash doesn’t prove the absence of a bug; it merely fails to prove the presence of a bug.
Trust me: double-delete is bad, bad, bad. Just say no.