Django 1.6 + RabbitMQ 3.2.3 + Celery 3.1.9 - why does my celery worker die with: WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV)?

This looks like a very similar problem, but it doesn't give me enough insight: https://github.com/celery/billiard/issues/101 . It suggests that trying a database other than SQLite might be a good idea ...

I have Celery installed directly alongside my Django app. In my settings.py file, I set up the scheduled task as follows:

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'sync_database': {
        'task': 'apps.data.tasks.celery_sync_database',
        'schedule': timedelta(minutes=5)
    }
}
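
For reference, the task itself is registered as an ordinary shared task. A minimal sketch of apps/data/tasks.py would look roughly like this (the body below is just a placeholder, not my real sync logic):

from __future__ import absolute_import

from celery import shared_task

@shared_task
def celery_sync_database():
    # placeholder body: the real task walks the database and syncs records
    pass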

I followed the following instructions: http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
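
Following that guide, the myproj/celery.py module ends up looking roughly like this (the module and app names are my project's, the rest is the boilerplate from the guide):

from __future__ import absolute_import

import os

from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproj.settings')

app = Celery('myproj')

# read celery configuration from the Django settings, and discover
# task modules in all installed apps
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)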

I can open two new terminal windows and start celery processes as follows:

ONE is the celery beat process, which is required for scheduled tasks; it queues the task:

PROMPT> celery -A myproj beat
celery beat v3.1.9 (Cipater) is starting.
__    -    ... __   -        _
Configuration ->
    . broker -> amqp://myproj@localhost:5672//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> djcelery.schedulers.DatabaseScheduler

    . logfile -> [stderr]@%INFO
    . maxinterval -> now (0s)
[2014-02-20 16:15:20,085: INFO/MainProcess] beat: Starting...
[2014-02-20 16:15:20,086: INFO/MainProcess] Writing entries...
[2014-02-20 16:15:20,143: INFO/MainProcess] DatabaseScheduler: Schedule changed.
[2014-02-20 16:15:20,143: INFO/MainProcess] Writing entries...
[2014-02-20 16:20:20,143: INFO/MainProcess] Scheduler: Sending due task sync_database (apps.data.tasks.celery_sync_database)
[2014-02-20 16:20:20,161: INFO/MainProcess] Writing entries...

TWO is the celery worker, which picks the task off the queue and runs it:

PROMPT> celery -A myproj worker -l info

 -------------- celery@Jons-MacBook.local v3.1.9 (Cipater)
---- **** -----
--- * ***  * -- Darwin-13.0.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         myproj:0x1105a1050
- ** ---------- .> transport:   amqp://myproj@localhost:5672//
- ** ---------- .> results:     djcelery.backends.database:DatabaseBackend
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery


[tasks]
  . apps.data.tasks.celery_sync_database
  . myproj.celery.debug_task

[2014-02-20 16:15:29,402: INFO/MainProcess] Connected to amqp://myproj@127.0.0.1:5672//
[2014-02-20 16:15:29,419: INFO/MainProcess] mingle: searching for neighbors
[2014-02-20 16:15:30,440: INFO/MainProcess] mingle: all alone
[2014-02-20 16:15:30,474: WARNING/MainProcess] celery@Jons-MacBook.local ready.

When the task is sent, however, the worker completes it only about 50% of the time; the other 50% of the time it fails with the following error:

[2014-02-20 16:35:20,159: INFO/MainProcess] Received task: apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25]
[2014-02-20 16:36:54,561: ERROR/MainProcess] Process 'Worker-4' pid:19500 exited with exitcode -11
[2014-02-20 16:36:54,580: ERROR/MainProcess] Task apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25] raised unexpected: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
  File "/Users/jon/dev/vpe/VAN/lib/python2.7/site-packages/billiard/pool.py", line 1168, in mark_as_worker_lost
    human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).

I am developing on a MacBook Pro running Mavericks.

Versions: Celery 3.1.9, RabbitMQ 3.2.3, Django 1.6.

Please note that I am using django-celery 3.1.9 and have included the djcelery application.
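
For completeness, the django-celery pieces of my settings look roughly like this (the values are inferred from the beat and worker output above):

INSTALLED_APPS += ('djcelery',)

# store task results in the database via django-celery,
# and use its database-backed scheduler for beat
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'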

1 answer

When I switched from SQLite to PostgreSQL, the problem disappeared.
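
In practical terms, that means pointing Django at a PostgreSQL database instead of the default SQLite file, along these lines (the names and credentials below are placeholders):

DATABASES = {
    'default': {
        # PostgreSQL backend shipped with Django 1.6
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproj',
        'USER': 'myproj',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}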
