Kafka Schema registry error: Failed to write Noop entry to kafka store

I'm trying to start the Kafka Schema Registry, but I get the following error: "Failed to write Noop entry to kafka store". Stack trace below. I checked the connections to ZooKeeper and the Kafka brokers; everything is in order, and I can send messages to Kafka. I tried deleting the _schemas topic and even reinstalling Kafka, but the problem persists. Yesterday everything worked fine, but today, after rebooting my Vagrant box, this problem appeared. Is there anything I can do? Thanks

[2015-11-19 19:12:25,904] INFO SchemaRegistryConfig values:
    master.eligibility = true
    port = 8081
    kafkastore.timeout.ms = 500
    kafkastore.init.timeout.ms = 60000
    debug = false
    kafkastore.zk.session.timeout.ms = 30000
    request.logger.name = io.confluent.rest-utils.requests
    metrics.sample.window.ms = 30000
    schema.registry.zk.namespace = schema_registry
    kafkastore.topic = _schemas
    avro.compatibility.level = none
    shutdown.graceful.ms = 1000
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    metrics.jmx.prefix = kafka.schema.registry
    host.name = 12bac2a9529f
    metric.reporters = []
    kafkastore.commit.interval.ms = -1
    kafkastore.connection.url = master.mesos:2181
    metrics.num.samples = 2
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2015-11-19 19:12:26,535] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2015-11-19 19:12:27,167] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:172)
[2015-11-19 19:12:27,262] INFO [kafka-store-reader-thread-_schemas], Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2015-11-19 19:13:27,350] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
    at io.confluent.rest.Application.createServer(Application.java:104)
    at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
    ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:363)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:220)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
    ... 5 more
3 answers

The error message is misleading. As suggested by other developers in other posts, I would recommend the following:

1) Make sure ZooKeeper is running (check the log files and that the process is active).

2) Make sure the various nodes in your Kafka cluster can communicate with each other (e.g., telnet to each host and port).

3) If both 1 and 2 are fine, I do not recommend creating another topic (for example, _schema2, as suggested by some people in other posts) and updating the kafkastore.topic setting in the Schema Registry configuration file to point at it.
Instead: 3.1) stop the processes (ZooKeeper, Kafka server); 3.2) clear the data in the ZooKeeper data directory; 3.3) restart ZooKeeper, the Kafka server, and finally the Schema Registry service (this should work!).

PS: If you do create another topic, you may get stuck when trying to consume data from the Kafka topic (it happened to me, and it took me several hours to figure out!).
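A minimal shell sketch of the checks and the reset in steps 1-3. The ZooKeeper data directory, host names, and Confluent start/stop script paths below are assumptions; adjust them to your installation, and note that wiping the ZooKeeper data directory destroys all cluster state, so this is for dev/test environments only:

```shell
# Step 1: check that ZooKeeper answers (the "ruok" four-letter word returns "imok")
echo ruok | nc -w 2 localhost 2181

# Step 2: check that each broker port is reachable (repeat per node)
nc -z -w 2 broker1.example.com 9092 && echo "broker1 reachable"

# Step 3: stop everything, clear the ZooKeeper data directory, restart in order
ZK_DATA_DIR=/var/lib/zookeeper                 # the dataDir from zookeeper.properties (assumption)
./bin/kafka-server-stop
./bin/zookeeper-server-stop
rm -rf "${ZK_DATA_DIR:?}/version-2"            # wipes ALL ZooKeeper state -- dev/test only!
./bin/zookeeper-server-start -daemon ./etc/kafka/zookeeper.properties
./bin/kafka-server-start -daemon ./etc/kafka/server.properties
./bin/schema-registry-start -daemon ./etc/schema-registry/schema-registry.properties
```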


I got the same error. The problem was that I wanted Kafka to use the kafka namespace (chroot) in ZooKeeper, so I set it in schema-registry.properties:

 kafkastore.connection.url=localhost:2181/kafka 

but in Kafka's server.properties I had not set it at all. The configuration contained

 zookeeper.connect=localhost:2181 

so I simply added the ZooKeeper namespace to this property and restarted Kafka:

 zookeeper.connect=localhost:2181/kafka 

Your problem is probably that your registry is expecting the '/' namespace, but you have defined something else in your Kafka configuration. Could you post your Kafka configuration?
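The chroot is everything after the host:port list in the connect string. A small helper makes the comparison explicit (the two property values below are examples, not your actual configs; substitute what your own server.properties and schema-registry.properties contain):

```shell
# Extract the ZooKeeper chroot (the path after host:port) from a connect string.
chroot_of() {
  case "$1" in
    */*) echo "/${1#*/}" ;;   # keep everything from the first slash on
    *)   echo "/" ;;          # no chroot configured means the root path
  esac
}

# Example values -- substitute your own:
kafka_zk="localhost:2181/kafka"        # zookeeper.connect in server.properties
registry_zk="localhost:2181"           # kafkastore.connection.url in schema-registry.properties

if [ "$(chroot_of "$kafka_zk")" = "$(chroot_of "$registry_zk")" ]; then
  echo "chroots match"
else
  echo "chroot mismatch: Kafka=$(chroot_of "$kafka_zk") registry=$(chroot_of "$registry_zk")"
fi
```

With the example values above this prints a mismatch, which is exactly the situation that produced the error for me.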

Alternatively, you can use zkCli.sh to find out where Kafka stores its topic information in ZooKeeper:

 /bin/zkCli.sh localhost:2181
 Welcome to ZooKeeper!
 ls /kafka
 [cluster, controller, controller_epoch, brokers, admin, isr_change_notification, consumers, latest_producer_id_block, config]

I made the following changes to schema-registry.properties, which helped me:

 #kafkastore.connection.url=localhost:2181
 kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
 kafkastore.topic=<topic name>

I also ran the following command to fix another server startup problem:

 ./bin/kafka-topics --alter --zookeeper localhost:2181 --topic <topic name> --config cleanup.policy=compact 
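To confirm the policy took effect, you can describe the topic afterwards (a sketch; assumes the same local ZooKeeper and the old ZooKeeper-based kafka-topics flags from this era):

```shell
./bin/kafka-topics --describe --zookeeper localhost:2181 --topic <topic name>
# The output should list cleanup.policy=compact under Configs; the Schema Registry
# needs its storage topic compacted so older schema records are retained by key.
```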

Good luck


Source: https://habr.com/ru/post/1236364/

