Daggy Daemon

daggyd is the REST server process that receives and runs DAG specs.

Running it

daggyd    # That's it: listens on 127.0.0.1:2503 and runs with a local executor
daggyd -d # Daemonize

daggyd --config FILE # Run with a config file

Config Files

{
  "web-threads": 50,
  "dag-threads": 50,
  "port": 2503,
  "ip": "localhost",
  "logger": {
    "name": "LoggerName",
    "config": {
      ...
    }
  },
  "executor": {
    "name": "ExecutorName",
    "config": {
      ...
    }
  }
}
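
Putting the pieces together, a complete config using the components documented below (OStreamLogger and ForkingTaskExecutor) might look like this. The values shown are illustrative, not required:

{
  "web-threads": 50,
  "dag-threads": 50,
  "port": 2503,
  "ip": "localhost",
  "logger": {
    "name": "OStreamLogger",
    "config": {
      "file": "-"
    }
  },
  "executor": {
    "name": "ForkingTaskExecutor",
    "config": {
      "threads": 10
    }
  }
}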

Loggers

OStreamLogger

OStreamLogger doesn't persist data, but it can write event updates to a file or stdout.

The config for OStreamLogger looks like this:

{
  ...
  "logger": {
    "name": "OStreamLogger",
    "config": {
      "file": "/path/to/file"
    }
  }
  ...
}

If file is set to "-", the logger prints events to stdout. This configuration is the default if no logger is specified at all.
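
For example, to stream event updates to stdout explicitly (which, per the above, matches the default behavior):

{
  ...
  "logger": {
    "name": "OStreamLogger",
    "config": {
      "file": "-"
    }
  }
  ...
}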

RedisLogger

RedisLogger stores state in a Redis instance.

The config for RedisLogger looks like this (shown with default values):

{
  ...
  "logger": {
    "name": "RedisLogger",
    "config": {
      "prefix": "daggy",
      "host": "localhost",
      "port": 6379
    }
  }
  ...
}

The prefix attribute is used to distinguish daggy instances. All keys will be prefixed with the value of prefix.
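
For example, two daggy instances sharing one Redis server can be kept separate by giving each a different prefix. The prefix and host values here are hypothetical:

{
  ...
  "logger": {
    "name": "RedisLogger",
    "config": {
      "prefix": "daggy-prod",
      "host": "redis.example.internal",
      "port": 6379
    }
  }
  ...
}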

Executors

ForkingTaskExecutor

ForkingTaskExecutor does pretty much what the name implies: it will execute tasks by forking on the local machine.

Its config with default values looks like:

{
  ...
  "executor": {
    "name": "ForkingTaskExecutor",
    "config": {
      "threads": 10
    }
  }
  ...
}

If no executor is specified in the config, this is the executor used.
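
For example, to cap local parallelism at 4 concurrent tasks (the value is illustrative; pick something appropriate for the machine):

{
  ...
  "executor": {
    "name": "ForkingTaskExecutor",
    "config": {
      "threads": 4
    }
  }
  ...
}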

SlurmTaskExecutor

The SlurmTaskExecutor will execute tasks on a Slurm cluster. It relies on the Slurm configuration to manage any parallelism limits and quotas.

Its config with default values looks like:

{
  ...
  "executor": {
    "name": "ForkingTaskExecutor",
    "config": { }
  }
  ...
}
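
As a final illustration, a config that logs run state to Redis and dispatches tasks to Slurm might look like the following. All values are illustrative:

{
  "web-threads": 50,
  "dag-threads": 50,
  "port": 2503,
  "ip": "localhost",
  "logger": {
    "name": "RedisLogger",
    "config": {
      "prefix": "daggy",
      "host": "localhost",
      "port": 6379
    }
  },
  "executor": {
    "name": "SlurmTaskExecutor",
    "config": { }
  }
}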