I was performing JMeter load testing of a Spring Boot application (Java 8), firing 50 concurrent requests with an allocated heap size of 6 GB. During testing I encountered

java.lang.OutOfMemoryError: GC overhead limit exceeded

According to the Java docs,

by default, the JVM is configured to throw this error if the Java process spends more than 98% of its time doing GC and less than 2% of the heap is recovered in each run.
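(For reference, a hedged sketch: on HotSpot these thresholds correspond to the -XX:GCTimeLimit and -XX:GCHeapFreeLimit flags, which as far as I know apply to the parallel collector, the Java 8 default. Spelling out the defaults explicitly would look like:

    java -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 -jar app.jar

where app.jar is just a placeholder for the application jar.)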

After this error, the application became extremely slow, which is expected behaviour. I checked for possible memory leaks but found none. What I want to know is how and when the garbage collector will free up the memory occupied by the failed requests.

For example, if I run the test suite again with 20 concurrent requests, will my application be able to serve those requests by freeing up the memory from the previous run, or do I need to restart the application?

Adding a few application details:

  • The application fetches around 50k-200k rows per request from tables having 30 columns (see the rough estimate after this list)
  • It was expected to run out of memory during testing; the goal was to find the number of rows that can be fetched per request
  • My concern is how my application will behave if this error occurs in production. Will it be able to serve future requests at the same speed, or will I have to restart my application?
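As a rough back-of-envelope estimate based on the numbers above (the ~100 bytes per value is an assumption covering object headers and short strings, not a measured figure):

    200,000 rows x 30 columns         = 6,000,000 values per request
    6,000,000 values x ~100 bytes     ≈ 600 MB per request
    50 concurrent requests x ~600 MB  ≈ 30 GB, well above the 6 GB heap

which is consistent with the GC spending nearly all of its time trying to reclaim space.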

    With the limited information here, it's very hard to answer accurately. Maybe your application loads a PDF and genuinely needs a lot of memory? Maybe your GC algorithm is not suited to the workload? Maybe you have a memory leak? Maybe you do not close some resources? – Eugene Apr 22, 2020 at 13:43

    Java is unhappy due to the high level of GC, which is indicative of a serious problem. However, this may be legitimate due to a high level of object churn or complex circular relationships that make the GC spend more time determining which objects are dead. It also does not help you determine exactly where the leak is occurring (if one is happening).

    When this error happens in testing, the first thing I do is turn off the limit. This can be done with the -XX:-UseGCOverheadLimit JVM flag. This allows the GC to continue and will better highlight any "true" memory issues.
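    For example (a sketch; app.jar is just a placeholder name), with an executable Spring Boot jar the flag is passed like any other JVM option:

        java -XX:-UseGCOverheadLimit -Xmx6g -jar app.jar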

    There are no memory issues with the application; I used VisualVM to analyze it. My concern is how my application will behave if this error occurs in production. Will it be able to serve future requests at the same speed? – Mayank Apr 22, 2020 at 16:37

    There absolutely has to be a memory "issue", otherwise you would not be getting this error. The JVM does not have enough memory to do what it needs to do. What's not clear is whether this is a leak or just not enough memory. If it is not a memory leak, the application should eventually recover; in the meantime, some requests may error out and not be fulfilled, and others may be handled very slowly. If it is a memory leak, that memory is "lost" until you restart. In production, the application will behave the same way. – Necreaux Apr 22, 2020 at 19:08
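    One way to check whether the heap actually recovers after a bad run (a hedged sketch using the standard Java 8 GC-logging flags; the log file name is arbitrary):

        java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -Xmx6g -jar app.jar

    If the full-GC entries show used heap dropping back toward its idle baseline once load stops, the memory from the failed requests has been reclaimed and no restart is needed; if it stays pinned near 6 GB, something is still holding references.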
