Suddenly we started to experience the following problem when connecting to Mongo:
{u'code': 261, u'ok': 0.0, u'$clusterTime': {u'clusterTime': Timestamp(1614532995, 3141), u'signature': {u'keyId': 0L, u'hash': Binary('\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 0)}}, u'codeName': u'TooManyLogicalSessions', u'operationTime': Timestamp(1614532995, 3141), u'errmsg': u'Unable to add session into the cache because the number of active sessions is too high'}
This happens for any connection driver we use:
Mongo shell
pymongo 3.6
pymongo 3.11
It does not happen for every query, but for roughly 30-40% of all queries.
Meanwhile, maxSessions has its default value (1000000), and I have the following data from the server status output:
"logicalSessionRecordCache" : {
"activeSessionsCount" : 2244,
"sessionsCollectionJobCount" : 48430,
"lastSessionsCollectionJobDurationMillis" : 0,
"lastSessionsCollectionJobTimestamp" : ISODate("2021-02-28T16:56:03.438Z"),
"lastSessionsCollectionJobEntriesRefreshed" : 0,
"lastSessionsCollectionJobEntriesEnded" : 0,
"lastSessionsCollectionJobCursorsClosed" : 0,
"transactionReaperJobCount" : 49566,
"lastTransactionReaperJobDurationMillis" : 1,
"lastTransactionReaperJobTimestamp" : ISODate("2021-02-28T17:03:17.631Z"),
"lastTransactionReaperJobEntriesCleanedUp" : 0,
"sessionCatalogSize" : 33
Once the issue was discovered, I periodically checked the number of sessions in config.system.sessions. It varied from 11k to 560k (most of the time it was between 80k and 350k), which seems quite high.
However, the problem persisted regardless of the number of sessions.
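A rough sketch of that periodic check with pymongo (the URI and the one-minute interval are placeholders):

import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI

# Sample the persisted session count once a minute to watch the trend.
while True:
    print(time.strftime("%H:%M:%S"),
          client.config.system.sessions.estimated_document_count())
    time.sleep(60)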
The error appeared suddenly, and we have the same load as before (I don't know how many sessions we used to have, but we didn't add any new clients - we have about 3k connections).
There is no sharding, only a replica set (one primary and one secondary).
I would really appreciate any advice on how to overcome this problem.
UPD: another thing that looks weird to me:
> db.system.sessions.count()
416068
> db.currentOp(true).inprog.length
How is it possible to have such a difference?
Most likely you are going to need to do some debugging in your application to figure out where you are leaking sessions.
Update your driver and server to the most recent versions.
Identify where your application is using explicit sessions. Explicit sessions are those that you start via a start_session call. The driver also uses sessions automatically by itself; those are called implicit sessions.
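For instance, in pymongo the distinction looks roughly like this (a sketch; the URI and collection names are made up):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI

# Implicit session: the driver checks a session out of its internal pool for
# this operation and returns it automatically when the operation finishes.
client.mydb.mycoll.find_one({})

# Explicit session: you start it yourself and are responsible for ending it.
# The "with" block guarantees end_session() is called even on errors.
with client.start_session() as session:
    client.mydb.mycoll.find_one({}, session=session)

# Without the context manager, the session lingers until you end it yourself.
session = client.start_session()
try:
    client.mydb.mycoll.find_one({}, session=session)
finally:
    session.end_session()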
Lacking evidence to the contrary, you have a session leak. Use https://docs.mongodb.com/manual/reference/command/killAllSessions/ to destroy all sessions, then graph the number of active sessions over time to see what your trend looks like.
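A sketch of that reset-and-observe step with pymongo (killAllSessions with an empty array kills the sessions of all users and requires the corresponding privilege):

import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI

# Destroy every session on the deployment (disruptive: operations running on
# those sessions may be interrupted).
client.admin.command("killAllSessions", [])

# Then log activeSessionsCount periodically; a steady climb under constant
# load points at a leak.
for _ in range(60):
    status = client.admin.command("serverStatus")
    print(status["logicalSessionRecordCache"]["activeSessionsCount"])
    time.sleep(60)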
Review your code and match every start_session call with the place where that session is ended (if it is ended at all). If you do not use a scoped API like https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-sessions/#creating-a-session-from-a-mongo-client you need to CAREFULLY consider where each of the sessions is going to get destroyed.
Check your code for no-timeout cursors. Those would probably hold session references (explicit or implicit).
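For example, this pymongo pattern is a typical culprit (a sketch; the URI and collection name are made up):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI

# no_cursor_timeout=True disables the server-side idle timeout, so a cursor
# that is never exhausted or closed keeps its server-side resources (and the
# session it references) alive indefinitely.
cursor = client.mydb.mycoll.find({}, no_cursor_timeout=True)

# Either iterate it to completion or close it explicitly.
try:
    for doc in cursor:
        pass
finally:
    cursor.close()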
Going by the information you provided in the question, my guess is that your session state inspection isn't being done properly, so go over that again and make sure you are looking at the right things.