
I have a cephadm-managed Ceph cluster consisting of 2 nodes. When I try to add a third node, cephadm reports success:

main@node01:~$ sudo ceph orch host add node03 10.0.0.155
Added host 'node03' with addr '10.0.0.155'

Checking the cluster, however, shows that this isn't the case:

main@node01:~$ sudo ceph node ls
{
    "mon": {
        "node01": [
            "node01"
        ],
        "node02": [
            "node02"
        ]
    },
    "osd": {},
    "mgr": {
        "node01": [
            "node01.gnxkpe"
        ],
        "node02": [
            "node02.tdjwgc"
        ]
    }
}

Edit: changed the title. Apparently the host is added, but the mon daemon is not started.

main@node01:~$ sudo ceph orch host ls
HOST    ADDR        LABELS  STATUS  
node01  10.0.0.101  _admin          
node02  10.0.0.131  _admin          
node03  10.0.0.155  _admin          
3 hosts in cluster
main@node01:~$ sudo ceph status
  cluster:
    id:     cab58bfb-9cef-11ef-a862-408d5c51323a
    health: HEALTH_WARN
            1 failed cephadm daemon(s)
            failed to probe daemons or devices
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 2 daemons, quorum node01,node02 (age 3h)
    mgr: node01.gnxkpe(active, since 6h), standbys: node02.tdjwgc
    osd: 0 osds: 0 up, 0 in
main@node01:~$ sudo ceph orch daemon add mon node03
Error EINVAL: name mon.node03 already in use
  • what is result of ceph orch host ls ? Commented Nov 7, 2024 at 15:52
  • Hi, that would be HOST ADDR LABELS STATUS node01 10.0.0.101 _admin node02 10.0.0.131 node03 10.0.0.155 3 hosts in cluster - seems like they are all added, but only node01 is admin, and mgr and mon only run on the first 2. Commented Nov 7, 2024 at 16:11
  • I just tried to add the mon to node03: sudo ceph orch daemon add mon node03 gave me Error EINVAL: name mon.node03 already in use. Commented Nov 7, 2024 at 16:20
  • in my ceph version (pacific), when adding a node, one can specify --labels=mon (or other functions). However, I don't know how to add it afterwards (hence this is a comment). Commented Nov 7, 2024 at 20:11
  • Please share the output of: ceph orch ls mon --export. You might want to run: ceph orch apply mon --placement="count:3"; alternatively, you can specify the hosts in the placement parameter. Or even better: create a spec file that contains the required hosts. There are several ways. If the CLI command doesn't work, you'll need to inspect the logs of the active mgr. Commented Nov 7, 2024 at 22:58
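For reference, the spec-file approach suggested in the last comment could look roughly like this (a sketch run from an admin node such as node01; the hostnames are taken from the ceph orch host ls output above, and the commands operate on a live cluster, so there is nothing to verify offline):

```shell
# Export the current mon service spec to inspect the existing placement.
sudo ceph orch ls mon --export > mon.yaml

# Edit mon.yaml so the placement lists all three hosts, e.g.:
#   service_type: mon
#   placement:
#     hosts:
#       - node01
#       - node02
#       - node03

# Re-apply the spec; cephadm should then schedule a mon on node03.
sudo ceph orch apply -i mon.yaml
```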

1 Answer


I found that node03 had some leftovers from a previous installation; that stale state is also why ceph orch daemon add mon node03 failed with "name mon.node03 already in use". I did a complete removal of the Ceph installation on that node, including removing /etc/ceph, and now everything is working as expected.
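For anyone hitting the same issue, the cleanup looked roughly like this (a sketch; <old-fsid> is a placeholder for the fsid of the stale cluster on node03, NOT the fsid of the current cluster shown in ceph status above — double-check it, as these commands are destructive):

```shell
# On node03: list leftover daemons from the previous installation
# and note the fsid of the stale cluster.
sudo cephadm ls

# Remove the stale cluster's daemons and data for that fsid.
sudo cephadm rm-cluster --force --fsid <old-fsid>

# Remove remaining configuration.
sudo rm -rf /etc/ceph

# Then, from an admin node, re-add the host:
sudo ceph orch host add node03 10.0.0.155
```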
