  • Launch ZooKeeper
  • Launch Kafka: .\bin\windows\kafka-server-start.bat .\config\server.properties
  • At the second step, the error happens:

    ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
    kafka.common.InconsistentClusterIdException: The Cluster ID Reu8ClK3TTywPiNLIQIm1w doesn't match stored clusterId Some(BaPSk1bCSsKFxQQ4717R6Q) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
        at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:84)
        at kafka.Kafka.main(Kafka.scala)

    When I trigger .\bin\windows\kafka-server-start.bat .\config\server.properties, the ZooKeeper console returns:

    INFO [SyncThread:0:FileTxnLog@216] - Creating new log file: log.1

    How can I fix this issue and get Kafka running?

    Edit: You can access the corresponding question on the right site (Server Fault) here.

    Edit: Here is the answer.

    Comments:

    • skomisa (Apr 15, 2020): Voting to reopen in order to close for the right reason, since: [1] The question is clearly a duplicate of "Kafka Broker doesn't find cluster id and creates new one after docker restart", as noted by the OP. [2] The current reason for closing is invalid, since the question is not about "professional server or networking-related infrastructure administration" at all; it is about a Kafka exception on startup. (And if this question really were off topic, then thousands of other questions tagged Kafka on SO would be as well.)
    • TourEiffel (Apr 15, 2020): @skomisa This issue is slightly different from the other one, since it doesn't use Docker. Please also note that my issue was posted before the one you are referring to.
    • skomisa (Apr 15, 2020): @Dorian I'm really confused now... You have updated this question and linked to another answer written by yourself as the solution. If you are now claiming that it is not a solution, then delete the text "Edit: Here is the answer" from your question above.
    • TourEiffel (Apr 15, 2020): @skomisa Yes, because I wasn't allowed to ask to reopen until today, and I wanted to share with the community how I solved my issue.
  • Just delete all the log/data files created (or generated) by ZooKeeper and Kafka (a minimal command sketch follows this list).
  • Run ZooKeeper
  • Run Kafka
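
    A minimal sketch of that sequence on Linux/macOS, assuming the default data directories /tmp/kafka-logs and /tmp/zookeeper (on Windows, delete C:\tmp\kafka-logs and C:\tmp\zookeeper instead):

        # stop the broker and ZooKeeper first, then remove the generated data
        rm -rf /tmp/kafka-logs /tmp/zookeeper   # adjust if log.dirs / dataDir point elsewhere

        # restart ZooKeeper, then Kafka
        bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
        bin/kafka-server-start.sh config/server.properties
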
  • [Since this post is open again, I'm posting my answer here so you have everything in the same post]

    1. The easiest solution is to remove the Kafka logs and start again.

    2. But the root cause is that Kafka saved the old cluster ID in meta.properties.

    Try to delete kafka-logs/meta.properties from your tmp folder (C:/tmp/kafka-logs by default on Windows, /tmp/kafka-logs on Linux).

    3. How to find the Kafka log path:

    Open the server.properties file, which is located in your Kafka folder, e.g. kafka_2.11-2.4.0\config\server.properties (depending on your Kafka version, the folder name will be kafka_<kafka_version>):

    Then search for the log.dirs entry to see where the logs are located, e.g. log.dirs=/tmp/kafka-logs
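
    A hedged sketch of steps 2 and 3 on Linux, assuming the default log.dirs value (only meta.properties is removed, so topic data is kept):

        # find the configured log directory (run from the Kafka install directory)
        grep '^log.dirs' config/server.properties   # e.g. log.dirs=/tmp/kafka-logs

        # remove only the stored cluster id
        rm /tmp/kafka-logs/meta.properties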

    Comments:

    • Emre (Jun 20, 2021): If you need to find your log directory first, look at <your-kafka-install-directory>/config/server.properties and search for the log.dirs=... row.
    • Chris Halcrow (Jun 21, 2021): Note that if Kafka is running in Docker containers, the log path may be specified by a volume config in the docker-compose file; see docs.docker.com/compose/compose-file/compose-file-v2/#volumes
  • Stop the Kafka service: brew services stop kafka
  • Open the Kafka server.properties file: vim /usr/local/etc/kafka/server.properties
  • Find the value of log.dirs in this file. For me, it is /usr/local/var/lib/kafka-logs
  • Delete the path-to-log.dirs/meta.properties file
  • Start the Kafka service: brew services start kafka (the combined sequence is sketched below)
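
    Put together, the Homebrew variant looks roughly like this (grep replaces the vim step; /usr/local/var/lib/kafka-logs is just the value reported above, yours may differ):

        brew services stop kafka
        grep '^log.dirs' /usr/local/etc/kafka/server.properties   # find the log directory
        rm /usr/local/var/lib/kafka-logs/meta.properties          # delete the stored cluster id
        brew services start kafka
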
  • No need to delete the log/data files on Kafka. Check the Kafka error logs and find the new cluster ID, update the meta.properties file with that cluster ID, and then restart Kafka.

    /home/kafka/logs/meta.properties
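
    For example, taking the new cluster ID from the exception in the question (Reu8ClK3TTywPiNLIQIm1w) and the meta.properties path shown above; the sed call is just one way to make the edit (GNU sed assumed):

        # meta.properties contains a line like: cluster.id=<old-id>
        sed -i 's/^cluster.id=.*/cluster.id=Reu8ClK3TTywPiNLIQIm1w/' /home/kafka/logs/meta.properties
        # then restart the Kafka broker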
    

    To resolve this issue permanently, do the following.

    Check your zookeeper.properties file, look for the dataDir path, and change it from the tmp location to any other location that will not be removed after a server restart.

    /home/kafka/kafka/config/zookeeper.properties
    

    Copy the zookeeper folder and files to the new (non-tmp) location, then restart ZooKeeper and Kafka.

    cp -r /tmp/zookeeper /home/kafka/zookeeper
    

    Now a server restart won't affect the Kafka startup.
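
    A sketch of that permanent fix with the paths used in this answer (the target directory is just this answer's choice; anything outside /tmp works, and GNU sed is assumed):

        # point ZooKeeper at a non-tmp data directory
        sed -i 's|^dataDir=.*|dataDir=/home/kafka/zookeeper|' /home/kafka/kafka/config/zookeeper.properties

        # move the existing state, then restart ZooKeeper and Kafka
        cp -r /tmp/zookeeper /home/kafka/zookeeper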

    If you use Embedded Kafka with Testcontainers in your Java project, as I do, then simply delete your build/kafka folder and Bob's your uncle.

    The mentioned meta.properties can be found under build/kafka/out/embedded-kafka.
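
    In that setup it is enough to remove the generated folder before the next test run; a Gradle build is assumed here (the ./gradlew wrapper is an assumption):

        rm -rf build/kafka        # also removes build/kafka/out/embedded-kafka/meta.properties
        ./gradlew clean test      # optional: rebuild and rerun the tests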

    I had some old volumes lingering around. I checked the volumes like this:

    docker volume list
    

    And pruned old volumes:

     docker volume prune
    

    And also removed the ones that were Kafka-related, for example:

    docker volume rm test_kafka
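
    To double-check which volumes are Kafka-related before removing anything, a name filter can help (test_kafka above is just an example name from this answer):

        docker volume ls --filter name=kafka   # list only volumes whose name contains "kafka"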
    

    I deleted the following directories:

    a) The logs directory from the Kafka server's configured location, i.e. the log.dirs property path.

    b) The tmp directory from the Kafka broker's location.

    log.dirs=../tmp/kafka-logs-1

    I was using docker-compose to re-set up Kafka on a Linux server, with a known, working docker-compose config that sets up a number of Kafka components (broker, ZooKeeper, Connect, REST proxy), and I was getting the issue described in the OP. I fixed this for my dev server instance by doing the following (the full command sequence is sketched after this list):

  • docker-compose down
  • back up the kafka-logs directory: cp -r kafka-logs kafka-logs-bak
  • delete the kafka-logs/meta.properties file
  • docker-compose up -d
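
    The whole sequence as shell commands, run from the directory containing the docker-compose file (the kafka-logs location is explained in the note below):

        docker-compose down
        cp -r kafka-logs kafka-logs-bak     # back up first
        rm kafka-logs/meta.properties       # drop only the stored cluster id
        docker-compose up -d
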
  • Note for users of docker-compose:

    My log files weren't in the default location (/tmp/kafka-logs). If you're running Kafka in Docker containers, the log path can be specified by a volume config in the docker-compose file, e.g.

    volumes:
          - ./kafka-logs:/tmp/kafka-logs
    

    This specifies SOURCE:TARGET. ./kafka-logs is the source (i.e. a directory named kafka-logs in the same directory as the docker-compose file), and it is mounted to /tmp/kafka-logs inside the Kafka container. So the logs can be deleted either from the source folder on the host machine, or from the mounted volume after doing a docker exec into the Kafka container.

    see https://docs.docker.com/compose/compose-file/compose-file-v2/#volumes
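
    If you prefer the in-container route, something like the following works, assuming the container is actually named kafka (check with docker ps):

        # remove the stored cluster id from inside the running container
        docker exec -it kafka rm /tmp/kafka-logs/meta.properties
        docker restart kafka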

    I also deleted all the contents of the folder containing the data generated by Kafka. I could find the folder in my .yml file:

     kafka:
        image: confluentinc/cp-kafka:7.0.0
        ports:
          - '9092:9092'
        environment:
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
          KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
          KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
          KAFKA_BROKER_ID: 1
          KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
          KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
        volumes:
          - ./kafka-data/data:/var/lib/kafka/data
        depends_on:
          - zookeeper
        networks:
          - default
    

    The location is given under volumes:. So, in my case, I deleted all files in the data folder located under kafka-data (see the sketch below).
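
    With that compose file, clearing the host-side bind mount looks roughly like this, run next to the docker-compose file (this wipes all broker data, not just meta.properties):

        docker-compose down
        rm -rf ./kafka-data/data
        docker-compose up -d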

    I tried deleting the meta.properties file, but it didn't work.

    In my case, it was solved by deleting legacy Docker images and exited containers.

    But the problem with this is that it deletes all previous data, so be careful: if you want to keep the old data, this is not the right solution for you.

    docker rm $(docker ps -q -f 'status=exited')
    docker rmi $(docker images -q -f "dangling=true")
            
