We recently moved to Nitrogen-SR3, and we have customized clustering with 2 nodes. When we restart a node (i.e., after failback), we observe the following exception in karaf.log and the node is unable to join the cluster. Any help is highly appreciated.

java.util.concurrent.TimeoutException: Connection attempt failed
        at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.wrap(AbstractShardBackendResolver.java:129)[505:org.opendaylight.controller.sal-distributed-datastore:1.6.3]
        at org.opendaylight.controller.cluster.databroker.actors.dds.AbstractShardBackendResolver.lambda$connectShard$2(AbstractShardBackendResolver.java:142)[505:org.opendaylight.controller.sal-distributed-datastore:1.6.3]
        at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)[:1.8.0_66]
        at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)[:1.8.0_66]
        at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)[:1.8.0_66]
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)[:1.8.0_66]
        at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)[:1.8.0_66]
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)[:1.8.0_66]
        at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)[:1.8.0_66]
Caused by: org.opendaylight.controller.cluster.access.concepts.RetiredGenerationException: Originating generation was superseded by 3
        at org.opendaylight.controller.cluster.datastore.Shard.findFrontend(Shard.java:482)[505:org.opendaylight.controller.sal-distributed-datastore:1.6.3]
        at org.opendaylight.controller.cluster.datastore.Shard.handleConnectClient(Shard.java:522)[505:org.opendaylight.controller.sal-distributed-datastore:1.6.3]
        at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:325)[505:org.opendaylight.controller.sal-distributed-datastore:1.6.3]
        at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:270)[490:org.opendaylight.controller.sal-akka-raft:1.6.3]
        at org.opendaylight.controller.cluster.common.actor.AbstractUntypedPersistentActor.onReceiveCommand(AbstractUntypedPersistentActor.java:44)[498:org.opendaylight.controller.sal-clustering-commons:1.6.3]
        at akka.persistence.UntypedPersistentActor.onReceive(PersistentActor.scala:170)[321:com.typesafe.akka.persistence:2.4.20]
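
One way to check whether the restarted node's shards have actually rejoined and elected a leader is to query the datastore shard MBeans over Jolokia. This is a diagnostic sketch only: it assumes the default Jolokia endpoint on port 8181, and the member and shard names below are placeholders for your deployment.

    # Check the status of the default config shard on the restarted member (placeholder names)
    curl -u admin:admin \
      "http://localhost:8181/jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=Shards,name=member-1-shard-default-config"

The response typically includes fields such as RaftState, Leader and PeerAddresses; a shard with no leader, or a peer list that does not show both members, points at the same connection problem the TimeoutException above reports.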
                We also observe that the following bundles take longer to transition from the "GracePeriod" to the "Active" state: mdsal-eos-binding-adapter and mdsal-singleton-dom-impl.
– satlearner
                Jul 10, 2018 at 17:58
                We see some code changes being committed; can we know the root cause of this issue?
– satlearner
                Jul 10, 2018 at 18:04
                You should interact with the JIRA ticket to get more details, such as the root cause. Just post a comment/question there.
– jamo
                Jul 12, 2018 at 1:00
                We observe that OpenDaylight bundles wait indefinitely in the "GracePeriod" state, but our application bundles transition to the "Failure" state after 5 minutes. Could you please let us know how to increase this timeout, similar to the ODL bundles?
– satlearner
                Jul 14, 2018 at 19:20
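
If the 5-minute failure is the Aries Blueprint dependency-wait timeout (its default is 300000 ms, which matches the behaviour described in the comment above), it can usually be raised per bundle with directives on the Bundle-SymbolicName manifest header. A minimal sketch, assuming your application bundles use Aries Blueprint; the bundle name and timeout value are illustrative:

    Bundle-SymbolicName: com.example.myapp;blueprint.graceperiod:=true;blueprint.timeout:=600000

Here blueprint.timeout is in milliseconds, so 600000 raises the wait to 10 minutes; setting blueprint.graceperiod:=false instead would skip the GracePeriod state entirely rather than waiting for mandatory service dependencies.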
        
