I am in a restricted environment where I cannot install Docker or Node.js. Does anyone have a simple Nomad job that I can test with? I have created my first cluster and would love to try a job.

I did create a job using this link, Running bash scripts as a task in nomad using raw_exec driver, but I am getting the following error:

Task Group "cache" (failed to place 1 allocation):

  • Constraint "missing drivers": 4 nodes excluded by filter

I see the job in the UI but it shows pending, and on my command line it shows "deployment in progress". Perhaps a simple job is not my problem, but I would love to try something different.

Any help is appreciated.

Hi @sammy676776!

For the simplest job imaginable, I typically use this one, which just calls sleep using the raw_exec driver:

    job "example" {
      group "group1" {
        task "sleep" {
          driver = "raw_exec"
          config {
            command = "sleep"
            args    = ["infinity"]
          resources {
            cpu    = 10
            memory = 10
    

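If it helps, here is a rough sketch of how I would submit and check it from the CLI (the file name is just an example, not something Nomad requires):

    # save the spec above as sleep.nomad.hcl, then submit it
    nomad job run sleep.nomad.hcl

    # the job and its allocation should show up as running
    nomad job status example
    nomad alloc status <alloc-id>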
For a simple HTTP server, I like to use this job, which just uses python3's built-in http.server module:

    job "example2" {
      group "group" {
        network {
          mode = "host"
          port "http" { static = 8888 }
        task "pyhttp" {
          driver = "raw_exec"
          config {
            command = "python3"
            args    = ["-m", "http.server", "${NOMAD_PORT_http}", "--directory", "/tmp"]
          service {
            name     = "py"
            port     = "http"
            provider = "nomad"
            check {
              type     = "http"
              path     = "/"
              interval = "3s"
              timeout  = "1s"
    

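If that one places correctly, here is a quick sketch of how to poke at it (assuming Nomad 1.4+ for the native service discovery command; the file name is again just an example):

    nomad job run pyhttp.nomad.hcl

    # the service block registers "py" in Nomad's built-in service catalog
    nomad service info py

    # from the client node the alloc landed on, the Python server answers on the static port
    curl http://localhost:8888/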
Also note that the raw_exec driver is not available by default on normal clusters; you have to enable it manually as a plugin option in the client configuration.
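A quick way to check whether a given client actually has it enabled is to look at the drivers section of that node's status output (the node ID below is a placeholder):

    # list the client nodes, then inspect one; raw_exec should show as detected and healthy
    nomad node status
    nomad node status -verbose <node-id>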

Perhaps there is some other issue going on in the new cluster that I built, as even this simple job failed.

In my Server.hcl I have:

    > plugin "raw_exec" {
    >     config {
    >       enabled    = true
    >       no_cgroups = true
    

And in my client.hcl:

options {
  "driver.allowlist" = "raw_exec"
}

plugin "raw_exec" {
  config {
    enabled    = true
    no_cgroups = true
  }
}

When I submit the job it appears in the UI but shows failed. It also shows the alloc and clients. How can I be sure that the server has communicated to the clients properly? Is there anything I can check?

I ran "nomad server members -detailed" and I can see all server details including the clients. I can go to the UI and see both servers and clients. When I run the job, does it deploy to the clients, and if it was successful will it say something in the client or server log? If so, where?
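In case it matters, these are the other commands I know of for digging further once a job is submitted; I am not sure which of them would show a placement problem (the IDs below are placeholders):

    # client list as seen from the servers
    nomad node status

    # per-allocation events and task output
    nomad alloc status <alloc-id>
    nomad alloc logs <alloc-id> <task-name>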

    thanks in advance

Meant to say, I also ran "nomad node status -stats" and it shows all clients. I am running your "sleep" raw_exec job and here is the status:

    nomad job status test
    ID            = test
    Name          = test
    Submit Date   = 2023-02-24T03:37:59-05:00
    Type          = service
    Priority      = 50
    Datacenters   = <removed>
    Namespace     = default
    Status        = pending
    Periodic      = false
    Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost  Unknown
group1      0       0         0        6       0         0     0

Future Rescheduling Attempts
Task Group  Eval ID   Eval Time
group1      dd05e2f2  14m5s from now

Latest Deployment
ID          = 648402a6
Status      = running
Description = Deployment is running

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
group1      1        2       0        2          2023-02-24T03:47:59-05:00

Allocations
    ID        Node ID   Task Group  Version  Desired  Status  Created     Modified
    66e60599  24d34bf1  group1      2        run      failed  1m55s ago   1m54s ago
    8a0272fe  24d34bf1  group1      2        stop     failed  9m56s ago   1m55s ago
    e861a442  15ceea08  group1      0        stop     failed  15m7s ago   9m56s ago
    22defb0d  24d34bf1  group1      0        stop     failed  17m7s ago   15m7s ago
    a3d18006  098cf534  group1      0        stop     failed  18m7s ago   17m7s ago
    1d3fdcf0  dd964482  group1      0        stop     failed  18m37s ago  18m7s ago
    

    Based on the above output I also ran

    nomad alloc status --short 66e60599
    ID       = 66e60599-f557-8d35-f021-266e7e7013c5
    Name     = test.group1[0]
    Created  = 4m10s ago
    Modified = 4m9s ago

Tasks
    Name   State  Last Event      Time                       Lifecycle
    sleep  dead   Not Restarting  2023-02-24T03:45:59-05:00  main
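
I guess my next step is to pull the full status for that allocation (without --short) and the task's stderr, something like:

    # full output includes the per-task event table with the error message
    nomad alloc status 66e60599

    # stderr from the failed sleep task
    nomad alloc logs -stderr 66e60599 sleep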