Overview:

I'm trying to run Celery as a daemon for tasks that send emails. It worked fine in development, but not in production. My website is up now and every function works fine (no Django errors), but the tasks aren't going through because the daemon isn't set up properly, and I get this error on Ubuntu 16.04:

project_celery FATAL can't find command '/home/my_user/myvenv/bin/celery'

Installed programs / hardware, and what I've done so far:

I'm using Django 2.0.5, Python 3.5, Ubuntu 16.04, RabbitMQ, and Celery, all on a VPS and all inside a venv. I've also installed Supervisor, and it's running when I check with sudo service --status-all because it has a + next to it. Erlang is installed as well, and when I check with top, RabbitMQ is running. sudo service rabbitmq-server status shows RabbitMQ is active too.

Originally I followed the directions on the Celery website, but they were very confusing and I couldn't get it to work after ~40 hours of testing/reading/watching other people's solutions. Feeling very aggravated and defeated, I switched to the directions here to get the daemon set up. That got me further, but now I hit the error above.

I read through the Supervisor documentation, and checked the process states and program settings to try to debug the problem, but I'm lost: as far as I can tell, my paths are correct according to the documentation.

Here's my file structure stripped down:

home/
    my_user/               # is a superuser
        portfolio-project/
            project/
                __init__.py
                celery.py
                settings.py     # this file is in here too
            app_1/
            app_2/
        logs/
            celery.log
        myvenv/
            bin/
                celery       # executable file, is colored green
    celery_user_nobody/      # not a superuser, but created for celery tasks
    supervisor/
        conf.d/
            project_celery.conf

Here is my project_celery.conf:

[program:project_celery]
command=/home/my_user/myvenv/bin/celery worker -A project --loglevel=INFO
directory=/home/my_user/portfolio-project/project
user=celery_user_nobody
numprocs=1
stdout_logfile=/home/my_user/logs/celery.log
stderr_logfile=/home/my_user/logs/celery.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
stopasgroup=true
priority=1000

Here's my __init__.py:

from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ['celery_app']

And here's my celery.py:

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
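
For context, the email tasks themselves are just plain shared tasks, roughly like the sketch below (the task name, addresses, and message are illustrative, not my exact code):

# app_1/tasks.py -- illustrative sketch of the kind of email task I'm queuing
from __future__ import absolute_import, unicode_literals

from celery import shared_task
from django.core.mail import send_mail


@shared_task
def send_notification_email(recipient):
    # Called from a view as send_notification_email.delay('someone@example.com')
    send_mail(
        'Hello from the site',                               # subject
        'This is a test message sent from a Celery task.',   # body
        'noreply@example.com',                               # from address (placeholder)
        [recipient],                                         # recipient list
        fail_silently=False,
    )
    return 'sent to {}'.format(recipient)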

UPDATE: Here is my settings.py:

This is the only Celery setting I have, because the example in the Celery website's Django instructions shows nothing more unless you use something like Redis. I put this in my settings.py because the Django instructions say you can:

CELERY_BROKER_URL = 'amqp://localhost'

UPDATE: I created the rabbitmq user:

$ sudo rabbitmqctl add_user rabbit_user1 mypassword
$ sudo rabbitmqctl add_vhost myvhost
$ sudo rabbitmqctl set_user_tags rabbit_user1 mytag
$ sudo rabbitmqctl set_permissions -p myvhost rabbit_user1 ".*" ".*" ".*"
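
If I actually point Celery at rabbit_user1 instead of the default guest user, I assume the broker URL in settings.py would need to carry the new credentials and vhost, roughly like this (the user, password, and vhost names are just the placeholders from the commands above):

# settings.py -- sketch only, reusing the rabbit_user1 / myvhost placeholders created above
CELERY_BROKER_URL = 'amqp://rabbit_user1:mypassword@localhost:5672/myvhost'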

When I run sudo rabbitmqctl status, I get Status of node 'rabbit@django2-portfolio', but oddly I don't see any running nodes listed like the following, even though the directions here show that I should:

{nodes,[rabbit@myhost]},
{running_nodes,[rabbit@myhost]}]

Steps I followed:

  1. I created the .conf and .log files in the places I said.
  2. sudo systemctl enable supervisor
  3. sudo systemctl start supervisor
  4. sudo supervisorctl reread
  5. sudo supervisorctl update    # no errors up to this point
  6. sudo supervisorctl status

After step 6 I get this error:

    project_celery FATAL can't find command '/home/my_user/myvenv/bin/celery'

UPDATE: I checked the error logs, and I have multiple instances of the following in /var/log/rabbitmq/rabbit@django2-portfolio.log:

    =INFO REPORT==== 9-Aug-2018::18:26:58 ===
    connection <0.690.0> (127.0.0.1:42452 -> 127.0.0.1:5672): user 'guest' authenticated and granted access to vhost '/'
    =ERROR REPORT==== 9-Aug-2018::18:29:58 ===
    closing AMQP connection <0.687.0> (127.0.0.1:42450 -> 127.0.0.1:5672):
    missed heartbeats from client, timeout: 60s
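
My guess is that the missed-heartbeat entries come from a client that connected and then went idle (the daemonized worker never starts because of the FATAL error above), rather than from a broker problem. If it ever did turn out to be a real heartbeat issue, I believe it could be tuned from settings.py through the CELERY_ namespace, though that is only an assumption on my part:

    # settings.py -- assumption: with namespace='CELERY', this maps to Celery's broker_heartbeat option
    CELERY_BROKER_HEARTBEAT = 120   # seconds; 0 would disable broker heartbeats entirely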
    

Closing statement:

Does anyone have any idea what's going on? When I look at the absolute paths in my project_celery.conf file, everything seems set correctly, but something's obviously wrong. Looking over it more: rabbitmqctl says no nodes are running when I do sudo rabbitmqctl status, but Celery says one is when I do celery status (it shows OK, 1 node online).

Any help would be greatly appreciated. I made this account specifically because of this problem; it's driving me mad. If anyone needs more info, please ask. This is my first time deploying anything, so I'm not a pro.

If you're shelled into the server, can you run the command /home/djangodeply/myvenv/bin/celery worker -A project --loglevel=INFO directly from the prompt? – Jack Shedd Aug 9, 2018 at 5:39

Not sure if you guys can see my edits in the original question, but I added some stuff that might help more where I put 'UPDATE'. – CyberHavenProgramming Aug 10, 2018 at 0:04

The fact you can't run the command from the prompt means the file doesn't exist or doesn't have its permissions set correctly. What does ls -la /home/djangodeply/myvenv/bin/celery report? – Jack Shedd Aug 10, 2018 at 0:49

Thanks for the help, but the guy below got it right. I just had the pathing wrong. Your comments helped me debug it too, though. With your tips and an idea from the answer below, I ran celery worker -A celery --loglevel=INFO instead of passing the app as project, and then it worked. Once I tried that, Celery ran, and I was able to call some_function.delay() to send emails (which came from a return statement). Then I changed the pathing in my .conf daemon file and it worked. – CyberHavenProgramming Aug 10, 2018 at 22:23

Can you try either of the following in your project_celery.conf:

    command=/home/my_user/myvenv/bin/celery worker -A celery --loglevel=INFO
    directory=/home/my_user/portfolio-project/project
    
    command=/home/my_user/myvenv/bin/celery worker -A project.celery --loglevel=INFO
    directory=/home/my_user/portfolio-project/
    

Additionally, in celery.py, can you add the parent folder of the project module to sys.path (or make sure that you've packaged your deploy properly and installed it via pip or otherwise)?

I suspect (from your comments with @Jack Shedd) that you're referring to a non-existent project, due to where directory is set relative to the magic celery.py file.
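
For the sys.path idea, I mean something like the following at the top of celery.py; a rough sketch only, with the path worked out from the directory layout shown in the question:

    # celery.py -- sketch: make the folder that contains the `project` package importable
    import os
    import sys

    # .../portfolio-project/project/celery.py -> .../portfolio-project
    sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))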

ヽ༼ຈل͜ຈ༽ノ OMG I LOVE YOU! Just tried the first and it worked! Seriously man, I've been on this for like 60 hours, and all it was was fixing the path. Both options work too! ヽ༼ຈل͜ຈ༽ノ – CyberHavenProgramming Aug 10, 2018 at 22:15
