Troubleshooting
Problem
Checking the pod, you can see that only one of the containers in the mongo-ce-x pod is running.
Symptom
One or more pods of the MongoDB replica set do not reach the ready state.
Cause
The replica set member mongo-ce-x has fallen too far behind the primary and can no longer sync.
Diagnosing The Problem
Examine the mongod logs for the mongo-ce-x pod:
2023-07-31T09:25:10.170+0000 I REPL [rsBackgroundSync] sync source candidate: mas-mongo-ce-2.mas-mongo-ce-svc.mongoce.svc.cluster.local:27017
2023-07-31T09:25:10.171+0000 I CONNPOOL [RS] Connecting to mas-mongo-ce-2.mas-mongo-ce-svc.mongoce.svc.cluster.local:27017
2023-07-31T09:25:10.176+0000 I REPL [replication-0] We are too stale to use mas-mongo-ce-2.mas-mongo-ce-svc.mongoce.svc.cluster.local:27017 as a sync source. Blacklisting this sync source because our last fetched timestamp: Timestamp(1681366511, 1) is before their earliest timestamp: Timestamp(1690718891, 249) for 1min until: 2023-07-31T09:26:10.176+0000
2023-07-31T09:25:10.176+0000 I REPL [replication-0] sync source candidate: mas-mongo-ce-1.mas-mongo-ce-svc.mongoce.svc.cluster.local:27017
2023-07-31T09:25:10.176+0000 I CONNPOOL [RS] Connecting to mas-mongo-ce-1.mas-mongo-ce-svc.mongoce.svc.cluster.local:27017
2023-07-31T09:25:10.187+0000 I REPL [replication-0] We are too stale to use mas-mongo-ce-1.mas-mongo-ce-svc.mongoce.svc.cluster.local:27017 as a sync source. Blacklisting this sync source because our last fetched timestamp: Timestamp(1681366511, 1) is before their earliest timestamp: Timestamp(1690715297, 5005) for 1min until: 2023-07-31T09:26:10.187+0000
2023-07-31T09:25:10.187+0000 I REPL [replication-0] could not find member to sync from
2023-07-31T09:25:10.188+0000 E REPL [rsBackgroundSync] too stale to catch up -- entering maintenance mode
2023-07-31T09:25:10.188+0000 I REPL [rsBackgroundSync] Our newest OpTime : { ts: Timestamp(1681366511, 1), t: 1 }
2023-07-31T09:25:10.188+0000 I REPL [rsBackgroundSync] Earliest OpTime available is { ts: Timestamp(1690715297, 5005), t: 6 }
2023-07-31T09:25:10.188+0000 I REPL [rsBackgroundSync] See http://dochub.mongodb.org/core/resyncingaverystalereplicasetmember
2023-07-31T09:25:10.188+0000 I REPL [rsBackgroundSync] going into maintenance mode with 0 other maintenance mode tasks in progress
2023-07-31T09:25:10.188+0000 I REPL [rsBackgroundSync] transition to RECOVERING from SECONDARY
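The log messages show why the member cannot sync: its last fetched timestamp (1681366511) predates the earliest timestamp still available in any sync source's oplog (1690715297), so the missing operations can no longer be replayed. A quick shell calculation on the epoch seconds from the timestamps above illustrates the size of the gap:

```shell
# Epoch seconds taken from the log lines above.
last_fetched=1681366511   # our last fetched timestamp (Our newest OpTime)
earliest=1690715297       # earliest OpTime still in the sync source's oplog

# The member must be within the oplog window to catch up; here the gap
# is far larger than any realistic oplog retention.
gap=$(( earliest - last_fetched ))
echo "member is $(( gap / 86400 )) days behind the oplog window"
```

Since the member is roughly 108 days behind, no amount of waiting lets it catch up; it must be resynced from scratch.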
Resolving The Problem
The easiest way to fix the issue is to delete all the files under the /data directory in the mongod container, which forces the member to perform a full initial sync from the primary. Follow these steps:
- enter the mongod container of the pod mongo-ce-x
- delete everything under /data
- restart the pod mongo-ce-x
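The steps above can be sketched with kubectl. The pod and namespace names below are assumptions taken from the log output (mas-mongo-ce-* pods in the mongoce namespace) and must be adjusted for your cluster; the script only prints the commands so they can be reviewed before being run:

```shell
# Sketch only: pod, container, and namespace names are assumptions
# based on the log output above; adjust them for your environment.
NAMESPACE=mongoce
POD=mas-mongo-ce-1          # the stale member (mongo-ce-x)

# 1. Delete everything under /data inside the mongod container.
CMD_WIPE="kubectl -n $NAMESPACE exec $POD -c mongod -- sh -c 'rm -rf /data/*'"
# 2. Restart the pod; the StatefulSet controller recreates it, and the
#    empty member performs an initial sync from the primary.
CMD_RESTART="kubectl -n $NAMESPACE delete pod $POD"

# Print the commands for manual review and execution.
echo "$CMD_WIPE"
echo "$CMD_RESTART"
```

Run the printed commands one at a time and wait for the member to return to SECONDARY state before proceeding.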
Document Location
Worldwide
Document Information
Modified date:
07 December 2023
UID
ibm17091465