cassandra-driver raises an OperationTimedOut error when executing a request

I am using a Python script that sends a batch request to Cassandra, for example:

query = ('BEGIN BATCH '
         'insert into ... ; insert into ... ; insert into ... ; '
         'APPLY BATCH;')
session.execute(query)
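For context, here is a minimal runnable sketch of how such a batch string can be assembled from individual statements (the keyspace, table, and column names are invented for illustration; the real INSERTs are elided in the question):

```python
# Hypothetical example: build a BEGIN BATCH ... APPLY BATCH string from a
# list of statements so each INSERT stays readable. Names are made up.
statements = [
    "INSERT INTO ks.events (id, payload) VALUES (1, 'a')",
    "INSERT INTO ks.events (id, payload) VALUES (2, 'b')",
]
query = "BEGIN BATCH " + "; ".join(statements) + "; APPLY BATCH;"
print(query)
```

Note that very large batches are a common cause of slow requests; the driver also offers a `BatchStatement` class as an alternative to hand-building the CQL string.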



This worked for a while, but about two minutes after starting, the script stops working and prints:

Traceback (most recent call last):
  File "/home/fervid/Desktop/cassandra/scripts/parse_and_save_to_cassandra.cgi", line 127, in <module>
    session.execute(query)
  File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 1103, in execute
    result = future.result(timeout)
  File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 2475, in result
    raise OperationTimedOut(errors=self._errors, last_host=self._current_host)
cassandra.OperationTimedOut: errors={}, last_host=127.0.0.1

I changed the timeouts in cassandra.yaml to:
read_request_timeout_in_ms: 15000
range_request_timeout_in_ms: 20000
write_request_timeout_in_ms: 20000
cas_contention_timeout_in_ms: 10000
request_timeout_in_ms: 25000


Then I restarted Cassandra, but that didn't help. The error keeps repeating!

Lines from the log at the time the script failed:

INFO [BatchlogTasks:1] 2014-06-11 14:18:10,490 ColumnFamilyStore.java (line 794) Enqueuing flush of Memtable-batchlog@28149592(13557969/13557969 serialized/live bytes, 4 ops)
INFO [FlushWriter:10] 2014-06-11 14:18:10,490 Memtable.java (line 363) Writing Memtable-batchlog@28149592(13557969/13557969 serialized/live bytes, 4 ops)
INFO [FlushWriter:10] 2014-06-11 14:18:10,566 Memtable.java (line 410) Completed flushing; nothing needed to be retained. Commitlog position was ReplayPosition(segmentId=1402469922169, position=27138996)
INFO [ScheduledTasks:1] 2014-06-11 14:18:13,758 GCInspector.java (line 116) GC for ParNew: 640 ms for 3 collections, 775214160 used; max is 1052770304
INFO [ScheduledTasks:1] 2014-06-11 14:18:16,155 GCInspector.java (line 116) GC for ConcurrentMarkSweep: 1838 ms for 2 collections, 810976000 used; max is 1052770304
INFO [ScheduledTasks:1] 2014-06-11 14:18:17,959 GCInspector.java (line 116) GC for ConcurrentMarkSweep: 1612 ms for 1 collections, 858404088 used; max is 1052770304
INFO [ScheduledTasks:1] 2014-06-11 14:18:17,959 StatusLogger.java (line 55) Pool Name               Active  Pending  Completed  Blocked  All Time Blocked
INFO [ScheduledTasks:1] 2014-06-11 14:18:17,959 StatusLogger.java (line 70) ReadStage                    0        0        627        0                 0
INFO [ScheduledTasks:1] 2014-06-11 14:18:17,960 StatusLogger.java (line 70) RequestResponseStage         0        0          0        0                 0
INFO [ScheduledTasks:1] 2014-06-11 14:18:17,960 StatusLogger.java (line 70) ReadRepairStage              0        0          0        0                 0
INFO [ScheduledTasks:1] 2014-06-11 14:18:17,960 StatusLogger.java (line 70) MutationStage                0        0     184386        0                 0
INFO [ScheduledTasks:1] 2014-06-11 14:18:17,960 StatusLogger.java (line 70) ReplicateOnWriteStage        0        0          0        0                 0

2 answers

According to the docs, this error indicates that the operation took longer than the client-side timeout allows. The error is raised by the driver, not by Cassandra itself (which is why changing the server-side timeouts in cassandra.yaml did not help). I'm still looking for a good way to handle this error.

http://datastax.imtqy.com/python-driver/api/cassandra.html#cassandra.OperationTimedOut
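One common way to cope on the client side is to retry the request when the driver times out. Below is a minimal, hedged sketch using plain Python (no running cluster is needed to follow the logic); with the real driver you would catch `cassandra.OperationTimedOut` instead of `TimeoutError`, and you can also raise the client-side limit per request, e.g. `session.execute(query, timeout=30)`, in driver versions that accept a `timeout` argument:

```python
import time

def execute_with_retry(fn, retries=3, delay=0.0):
    """Call fn(), retrying on TimeoutError up to `retries` attempts.

    Stand-in sketch: with cassandra-driver, fn would be a lambda wrapping
    session.execute(...) and the caught exception would be
    cassandra.OperationTimedOut.
    """
    last_err = None
    for _attempt in range(retries):
        try:
            return fn()
        except TimeoutError as err:  # substitute OperationTimedOut here
            last_err = err
            time.sleep(delay)
    raise last_err
```

Retrying only masks the symptom, of course; the GC pauses visible in the log above suggest the node itself is under memory pressure, which is worth addressing separately.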


Source: https://habr.com/ru/post/972336/

