Consul: health check of a Redis job flagged as critical in Nomad
When deploying a Redis job in Nomad (0.6), I can't get it to show as healthy in Consul.
I start Consul in a container and make port 8500 available on localhost:
$ docker container run --name consul -d -p 8500:8500 consul
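As a quick sanity check (not part of the original setup), you can confirm the agent is reachable from the host by querying Consul's standard HTTP API:

$ curl http://localhost:8500/v1/status/leader

This should return the leader's address; the exact value depends on the container's network configuration.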
When I run Nomad, it connects to Consul correctly, as I can see in the logs:

$ nomad agent -dev
No configuration files loaded
==> Starting Nomad agent...
==> Nomad agent configuration:

            Client: true
         Log Level: DEBUG
            Region: global (DC: dc1)
            Server: true
           Version: 0.6.0

==> Nomad agent started! Log data will stream in below:

    ...
    2017/08/18 15:45:28.373766 [DEBUG] client.consul: bootstrap contacting following Consul DCs: ["dc1"]
    2017/08/18 15:45:28.377703 [INFO] client.consul: discovered following servers: 127.0.0.1:4647
    2017/08/18 15:45:28.378851 [INFO] client: node registration complete
    2017/08/18 15:45:28.378895 [DEBUG] client: periodically checking for node changes at duration 5s
    2017/08/18 15:45:28.379232 [DEBUG] consul.sync: registered 1 services, 1 checks; deregistered 0 services, 0 checks
    ...
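The consul.sync line above shows that Nomad has already registered its own agent service in Consul. Assuming the default ports, you can verify this with Consul's catalog API:

$ curl http://localhost:8500/v1/catalog/services

This should list the Nomad agent's services alongside consul itself.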
I run a Redis job with the following configuration file:

job "nomad-redis" {
  datacenters = ["dc1"]
  type = "service"

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
        port_map {
          db = 6379
        }
      }

      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256MB
        network {
          mbits = 10
          port "db" {}
        }
      }

      service {
        name = "redis"
        port = "db"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
The redis service is added to Consul, but it appears as critical. It seems the health check cannot be performed. As I understand it, checks are done from within the task. Is there something I'm missing?

Running Consul directly on localhost, or in a container attached to the host network (--net=host), fixed it. The check is not executed by the Nomad task: Nomad only registers it, and the Consul agent itself opens the TCP connection. Nomad advertises the check target with the host's address (127.0.0.1 in dev mode), and from inside a bridge-networked container 127.0.0.1 refers to the container itself, so the agent can never reach the Redis port and the check stays critical.
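For reference, a minimal sketch of the working setup, using the same image as above but attached to the host network (with --net=host the -p mapping is unnecessary, since the container shares the host's ports):

$ docker container run --name consul -d --net=host consul

Sharing the host's network namespace lets the Consul agent connect to the dynamically allocated Redis port that Nomad advertises.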