Streamparse / Python - custom fail () method doesn't work for error tuples

I use Storm to process messages with Kafka in real time and using streamparse to build my topology. For this use case, it is imperative that we have a 100% guarantee that any message in Storm will be processed and ack'd. I implemented the logic on my bolt using try / catch (see below), and I would like Storm to reproduce these messages in addition to writing this on another topic with an error in Kafka.

In my KafkaSpout, I set tup_id to be the id of the offset from the Kafka theme that my consumer is feeding. However, when I introduce an error into my bolt using the link to the wrong variable, I do not see the message being played. I really see that someone writes in the topic of Kafka's "mistake", but only once - this means that the motorcade will never be re-submitted to my bolts. My setting for TOPOLOGY_MESSAGE_TIMEOUT_SEC = 60, and I expect Storm to continue playing the failed message every 60 seconds, and my error will continue to write to the error subject, constantly.

KafkaSpout.py

class kafkaSpout(Spout):

    def initialize(self, stormconf, context):

        self.kafka = KafkaClient(str("host:6667"))#,offsets_channel_socket_timeout_ms=60000)
        self.topic = self.kafka.topics[str("topic-1")]
        self.consumer = self.topic.get_balanced_consumer(consumer_group=str("consumergroup"),auto_commit_enable=False,zookeeper_connect=str("host:2181"))

    def next_tuple(self):
        for message in self.consumer:
            self.emit([json.loads(message.value)],tup_id=message.offset)
            self.log("spout emitting tuple ID (offset): "+str(message.offset))
            self.consumer.commit_offsets()

    def fail(self, tup_id):
        self.log("failing logic for consumer. resubmitting tup id: ",str(tup_id))
        self.emit([json.loads(message.value)],tup_id=message.offset)

processBolt.py

class processBolt(Bolt):

  auto_ack = False
  auto_fail = False

  def initialize(self, conf, ctx):
      self.counts = Counter()
      self.kafka = KafkaClient(str("host:6667"),offsets_channel_socket_timeout_ms=60000)
      self.topic = self.kafka.topics[str("topic-2")]
      self.producer = self.topic.get_producer()

      self.failKafka = KafkaClient(str("host:6667"),offsets_channel_socket_timeout_ms=60000)
      self.failTopic = self.failKafka.topics[str("topic-error")]
      self.failProducer = self.failTopic.get_producer()


  def process(self, tup):
      try:
          self.log("found tup.")
          docId = tup.values[0]
          url = "solrserver.host.com/?id="+str(docId)

          thisIsMyForcedError = failingThisOnPurpose ####### this is what im using to fail my bolt consistent

          data = json.loads(requests.get(url).text)

          if len(data['response']['docs']) > 0:
              self.producer.produce(json.dumps(docId))
              self.log("record FOUND {0}.".format(docId))

          else:
              self.log('record NOT found {0}.'.format(docId)) 

          self.ack(tup)

      except:
          docId = tup.values[0]
          self.failProducer.produce( json.dumps(docId), partition_key=str("ERROR"))
          self.log("TUP FAILED IN PROCESS BOLT: "+str(docId))
          self.fail(tup)

I would appreciate any help on how to properly implement the custom fault logic for this case. Thanks in advance.

+4

Source: https://habr.com/ru/post/1615455/


All Articles