Yes, it can matter. No, the compiler isn't going to "throw an error". Just don't do it :) – paulsm4 Feb 7, 2012 at 1:21

I would think in practice the worst that can plausibly happen is that some time later, the memory allocator misbehaves. This (a) forces you to spend a very long time trying to debug it, since the symptoms are nowhere near the cause, and (b) creates a serious security vulnerability in the software, which your employer gets sued over, loses, goes bust, and your contract ends up owned by a guy named Clive who does house clearances at a fixed price per truckload. He forces you to work your notice period bug-fixing Japanese tentacle porn RPGs. – Steve Jessop Feb 7, 2012 at 1:53

Don't, ever. It is undefined behavior, and an implementation of C++ can do anything in such cases: it might corrupt your heap, make arbitrary and bizarre changes to other objects on the heap, or do even worse things. If you use raw pointers, it is better to set the pointer to null after the first delete, because deleting a null pointer is safe; otherwise, use smart pointers in modern C++. – Destructor May 29, 2015 at 8:05

This is a good academic question. Everyone says "don't do it", but I think the OP is already aware that they shouldn't. It's not important whether they should or not; they want to know what would happen from a low-level point of view, which is indeed worth knowing in order to understand the language properly. – Petr Nov 21, 2015 at 9:12

I tried the OP's example, just to see the result. I did not experience a runtime crash. On the other hand, my office building began breathing cheese. Be warned: anthyding can hadplen! – Michael Krebs Oct 29, 2018 at 15:34

You only get an immediate runtime crash if you are very lucky. More often, you will delete some other innocent object or otherwise trash memory that has been reused, thereby killing the puppies. (And it can be completely undetectable until your program does something weird, which might not be a crash, at some unknown time in the future.) The worst part is that it cascades: if the double delete did successfully delete another, unintended object, then when that object eventually reaches its intentional deletion, that will be another double delete, and you've killed more puppies. – Miral Nov 13, 2018 at 6:16

Undefined behavior. There are no guarantees whatsoever made by the standard. Probably your operating system makes some guarantees, like "you won't corrupt another process", but that doesn't help your program very much.

Your program could crash. Your data could be corrupted. The direct deposit of your next paycheck could instead take 5 million dollars out of your account.

It's undefined behavior, so the actual result will vary depending on the compiler & runtime environment.

In most cases, the compiler won't notice. In many, if not most, cases, the runtime memory management library will crash.

Under the hood, any memory manager has to maintain some metadata about each block of data it allocates, in a way that allows it to look up the metadata from the pointer that malloc/new returned. Typically this takes the form of a structure at a fixed offset before the allocated block. This structure can contain a "magic number" -- a constant that is unlikely to occur by pure chance. If the memory manager sees the magic number in the expected place, it knows that the pointer provided to free/delete is most likely valid. If it doesn't see the magic number, or if it sees a different number that means "this pointer was recently freed", it can either silently ignore the free request, or it can print a helpful message and abort. Either is legal under the spec, and there are pro/con arguments for either approach.

If the memory manager doesn't keep a magic number in the metadata block, or doesn't otherwise check the sanity of the metadata, then anything can happen. Depending on how the memory manager is implemented, the result is most likely a crash without a helpful message, either immediately in the memory manager logic, somewhat later the next time the memory manager tries to allocate or free memory, or much later and far away when two different parts of the program each think they have ownership of the same chunk of memory.

Let's try it. Turn your code into a complete program in so.cpp:

class Obj {
public:
    int x;
};

int main( int argc, char* argv[] )
{
    Obj *op = new Obj;
    Obj *op2 = op;
    delete op;
    delete op2;
    return 0;
}
Compile it (I'm using gcc 4.2.1 on OSX 10.6.8, but YMMV):

russell@Silverback ~: g++ so.cpp

Run it:

russell@Silverback ~: ./a.out
a.out(1965) malloc: *** error for object 0x100100080: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap

Lookie there, the gcc runtime actually detects that it was a double delete and is fairly helpful before it crashes.

Also, in case there's any chance a "double delete" may happen, a smart pointer is usually the best solution. – paul23 Feb 7, 2012 at 1:23

The problem with this answer is that on my computer the same code runs fine with no crashes. – Jesse Good Feb 7, 2012 at 1:35

Here you go @Russell, an upvote for your excellent comment ;) Oh, and if you do not consider C++ "a real programming language", you are hereby invited to add it to your list of ignored tags. After that we'll all get together and crank out some machine code with a hex editor. – Ben Voigt Feb 7, 2012 at 2:20
int* a = new int;
delete a;
a = nullptr; // or just NULL or 0 if your compiler doesn't support C++11
delete a; // nothing happens!

Thought I should post this since no one else mentioned it.

So I wonder why the compiler doesn't assign nullptr to the pointer after delete. Is this for exposing double-delete bugs? What is the actual value of the pointer after deletion? – Vlad May 7, 2020 at 17:31

@Vlad The value of the pointer after delete is the same as it was before the delete — in other words, the address within the pointer is still the same, except the data at that address has now been freed. (Therefore, attempting to access that data by dereferencing the pointer is undefined behaviour.) The reason why the compiler doesn't automatically assign nullptr to deleted pointers is likely both for historical and performance reasons. – Enn Michael May 9, 2020 at 17:35

@EnnMichael: Actually after delete there are no guarantees about the address stored inside the pointer used to delete it or any other pointer to the same object. It could be left the same as it was before. It could be changed to nullptr. You don't know, and according to the Standard you can't know, because even reading a value (without dereferencing) from such an invalid pointer is UB. – Ben Voigt Nov 19, 2020 at 22:50

The compiler may give a warning, especially in obvious cases (like in your example), but it is not possible for it to always detect this. (You can use something like valgrind, which can detect it at runtime, though.) As for the behaviour, it can be anything. Some safe library might check for it and handle it fine, but other runtimes (for speed) will assume your call is correct (which it's not) and then crash or worse. The runtime is allowed to assume you're not double deleting (even if double deleting would do something bad, e.g. crash your computer).

Everyone has already told you that you shouldn't do this and that it will cause undefined behavior. That is widely known, so let's elaborate at a lower level and see what actually happens.

The standard universal answer is that anything can happen, but that's not entirely true. For example, the computer will not attempt to kill you for doing this (unless you are programming the AI for a robot) :)

The reason there can't be any universal answer is that, since this is undefined, it may differ from compiler to compiler and even across different versions of the same compiler.

But this is what "roughly" happens in most cases:

delete consists of two primary operations:

  • it calls the destructor, if one is defined
  • it somehow frees the memory allocated to the object

So, if your destructor contains any code that accesses data of the class that was already deleted, it may segfault, or (most likely) you will read some nonsense data. If the deleted data are pointers, then it will most likely segfault, because you will attempt to access memory that contains something else, or that doesn't belong to you.

If your destructor doesn't touch any data or isn't present (let's not consider virtual destructors here for simplicity), it may not be a reason for a crash in most compiler implementations. However, calling the destructor is not the only operation that is going to happen here.

The memory needs to be freed. How that's done depends on the compiler's implementation, but it may well execute some free-like function, giving it the pointer and size of your object. Calling free on memory that was already freed may crash, because the memory may not belong to you anymore. If it does belong to you, it may not crash immediately, but it may overwrite memory that was already allocated for some different object of your program.

That means one or more of your memory structures just got corrupted, and your program will likely crash sooner or later, or it might behave incredibly weirdly. The reasons will not be obvious in your debugger, and you may spend weeks figuring out what the hell just happened.

So, as others have said, it's generally a bad idea, but I suppose you already know that. Don't worry though, an innocent kitten will most likely not die if you delete an object twice.

Here is example code that is wrong but may work just fine as well (it works OK with GCC on Linux):

class a {};

int main()
{
    a *test = new a();
    delete test;
    a *test2 = new a();
    delete test;
    return 0;
}

If I don't create an intermediate instance of that class between the deletes, two calls to free on the same memory happen as expected:

*** Error in `./a.out': double free or corruption (fasttop): 0x000000000111a010 ***

To answer your questions directly:

What is the worst that can happen?

In theory, your program causes something fatal. It might even randomly attempt to wipe your hard drive in some extreme cases. The chances depend on what your program actually is (a kernel driver? a user-space program?).

In practice, it would most likely just crash with a segfault. But something worse might happen.

Is the compiler going to throw an error?

It shouldn't.

I think most people know what double delete results in - unexpected behaviour. But this answer is the best in describing what happens behind the curtains and why we actually see that crash caused by two deletes. +1 – Nikola Malešević Apr 25, 2018 at 10:25

No, it isn't safe to delete the same pointer twice. It is undefined behaviour according to the C++ standard.

From the C++ FAQ: visit this link

Is it safe to delete the same pointer twice?
No! (Assuming you didn't get that pointer back from new in between.)

For example, the following is a disaster:

class Foo { /*...*/ };

void yourCode()
{
  Foo* p = new Foo();
  delete p;
  delete p;  // DISASTER!
  // ...
}

That second delete p line might do some really bad things to you. It might, depending on the phase of the moon, corrupt your heap, crash your program, make arbitrary and bizarre changes to objects that are already out there on the heap, etc. Unfortunately these symptoms can appear and disappear randomly. According to Murphy's law, you'll be hit the hardest at the worst possible moment (when the customer is looking, when a high-value transaction is trying to post, etc.).

Note: some runtime systems will protect you from certain very simple cases of double delete. Depending on the details, you might be okay if you happen to be running on one of those systems, and if no one ever deploys your code on another system that handles things differently, and if you are deleting something that doesn't have a destructor, and if you don't do anything significant between the two deletes, and if no one ever changes your code to do something significant between the two deletes, and if your thread scheduler (over which you likely have no control!) doesn't happen to swap threads between the two deletes, and if, and if, and if. So back to Murphy: since it can go wrong, it will, and it will go wrong at the worst possible moment.

A non-crash doesn't prove the absence of a bug; it merely fails to prove the presence of a bug. Trust me: double-delete is bad, bad, bad. Just say no.
