Pacemaker unable to communicate with corosync on restart under lxc

Bug #1439649 reported by James Page
This bug affects 9 people
Affects              Status        Importance  Assigned to  Milestone
pacemaker (Ubuntu)   Fix Released  Undecided   Unassigned
  Trusty             Won't Fix     Undecided   Unassigned
  Xenial             Won't Fix     High        Unassigned
  Bionic             Fix Released  Undecided   Unassigned

Bug Description

We've seen this a few times with three-node clusters, all running in LXC containers: pacemaker fails to restart correctly because it can't communicate with corosync, resulting in a down cluster. Rebooting the containers resolves the issue, so we suspect some sort of bad state in either corosync or pacemaker.

Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: mcp_read_config: Configured corosync to accept connections from group 115: Library error (2)
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: main: Starting Pacemaker 1.1.10 (Build: 42f2063): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart nagios heartbeat corosync-native snmp libesmtp
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: cluster_connect_quorum: Quorum acquired
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: corosync_node_name: Unable to get node name for nodeid 1000
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: corosync_node_name: Unable to get node name for nodeid 1001
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: corosync_node_name: Unable to get node name for nodeid 1003
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: corosync_node_name: Unable to get node name for nodeid 1001
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: crm_update_peer_state: pcmk_quorum_notification: Node juju-machine-4-lxc-4[1001] - state is now member (was (null))
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: corosync_node_name: Unable to get node name for nodeid 1003
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1003] - state is now member (was (null))
Apr 2 11:41:32 juju-machine-4-lxc-4 crmd[1033748]: notice: main: CRM Git Version: 42f2063
Apr 2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Apr 2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]: notice: corosync_node_name: Unable to get node name for nodeid 1001
Apr 2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Apr 2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Apr 2 11:41:32 juju-machine-4-lxc-4 corosync[1033732]: [MAIN ] Denied connection attempt from 109:115
Apr 2 11:41:32 juju-machine-4-lxc-4 corosync[1033732]: [QB ] Invalid IPC credentials (1033732-1033746).
Apr 2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]: error: cluster_connect_cpg: Could not connect to the Cluster Process Group API: 11
Apr 2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]: error: main: HA Signon failed
Apr 2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]: error: main: Aborting startup
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: error: pcmk_child_exit: Child process attrd (1033746) exited: Network is down (100)
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: warning: pcmk_child_exit: Pacemaker child process attrd no longer wishes to be respawned. Shutting ourselves down.
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: pcmk_shutdown_worker: Shuting down Pacemaker
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: stop_child: Stopping crmd: Sent -15 to process 1033748
Apr 2 11:41:32 juju-machine-4-lxc-4 crmd[1033748]: warning: do_cib_control: Couldn't complete CIB registration 1 times... pause and retry
Apr 2 11:41:32 juju-machine-4-lxc-4 crmd[1033748]: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
Apr 2 11:41:32 juju-machine-4-lxc-4 crmd[1033748]: warning: do_log: FSA: Input I_SHUTDOWN from crm_shutdown() received in state S_STARTING
Apr 2 11:41:32 juju-machine-4-lxc-4 crmd[1033748]: notice: do_state_transition: State transition S_STARTING -> S_STOPPING [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
Apr 2 11:41:32 juju-machine-4-lxc-4 cib[1033743]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Apr 2 11:41:32 juju-machine-4-lxc-4 crmd[1033748]: notice: terminate_cs_connection: Disconnecting from Corosync
Apr 2 11:41:32 juju-machine-4-lxc-4 corosync[1033732]: [MAIN ] Denied connection attempt from 109:115
Apr 2 11:41:32 juju-machine-4-lxc-4 corosync[1033732]: [QB ] Invalid IPC credentials (1033732-1033743).
Apr 2 11:41:32 juju-machine-4-lxc-4 cib[1033743]: error: cluster_connect_cpg: Could not connect to the Cluster Process Group API: 11
Apr 2 11:41:32 juju-machine-4-lxc-4 cib[1033743]: crit: cib_init: Cannot sign in to the cluster... terminating
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: stop_child: Stopping pengine: Sent -15 to process 1033747
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: error: pcmk_child_exit: Child process cib (1033743) exited: Network is down (100)
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: warning: pcmk_child_exit: Pacemaker child process cib no longer wishes to be respawned. Shutting ourselves down.
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: stop_child: Stopping lrmd: Sent -15 to process 1033745
Apr 2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: stop_child: Stopping stonith-ng: Sent -15 to process 1033744
Apr 2 11:41:34 juju-machine-4-lxc-4 corosync[1033732]: [TOTEM ] A new membership (10.245.160.62:284) was formed. Members joined: 1000
Apr 2 11:41:41 juju-machine-4-lxc-4 stonith-ng[1033744]: error: setup_cib: Could not connect to the CIB service: Transport endpoint is not connected (-107)
Apr 2 11:41:41 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: pcmk_shutdown_worker: Shutdown complete
Apr 2 11:41:41 juju-machine-4-lxc-4 pacemakerd[1033741]: notice: pcmk_shutdown_worker: Attempting to inhibit respawning after fatal error

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: pacemaker 1.1.10+git20130802-1ubuntu2.3
ProcVersionSignature: User Name 3.16.0-33.44~14.04.1-generic 3.16.7-ckt7
Uname: Linux 3.16.0-33-generic x86_64
NonfreeKernelModules: vhost_net vhost macvtap macvlan xt_conntrack ipt_REJECT ip6table_filter ip6_tables ebtable_nat ebtables veth 8021q garp xt_CHECKSUM mrp iptable_mangle ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xt_tcpudp iptable_filter ip_tables x_tables nbd ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi openvswitch gre vxlan dm_crypt bridge dm_multipath intel_rapl stp scsi_dh x86_pkg_temp_thermal llc intel_powerclamp coretemp ioatdma kvm_intel ipmi_si joydev sb_edac kvm hpwdt hpilo dca ipmi_msghandler acpi_power_meter edac_core lpc_ich shpchp serio_raw mac_hid xfs libcrc32c btrfs xor raid6_pq hid_generic usbhid hid crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd psmouse tg3 ptp pata_acpi hpsa pps_core
ApportVersion: 2.14.1-0ubuntu3.7
Architecture: amd64
Date: Thu Apr 2 11:42:18 2015
SourcePackage: pacemaker
UpgradeStatus: No upgrade log present (probably fresh install)

Revision history for this message
James Page (james-page) wrote :
Revision history for this message
James Page (james-page) wrote :

sudo: unable to resolve host juju-machine-4-lxc-4
Quorum information
------------------
Date: Thu Apr 2 11:46:03 2015
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 1001
Ring ID: 280
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 2
Quorum: 2
Flags: Quorate

Membership information
----------------------
    Nodeid Votes Name
      1001 1 10.245.160.76 (local)
      1003 1 10.245.160.96

Revision history for this message
James Page (james-page) wrote :

This appears to degrade over time: after rebooting all three nodes, restarting corosync/pacemaker works just fine.

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in pacemaker (Ubuntu):
status: New → Confirmed
Revision history for this message
James Page (james-page) wrote :

Might be something to do with this:

https://github.com/ClusterLabs/pacemaker/commit/1f0b85af6de7ca0c37831a8f2375e6b1f6c02c42

I'm building a test package to see.

James Page (james-page)
summary: - Pacemaker unable to communicate with corosync on restart
+ Pacemaker unable to communicate with corosync on restart under lxc
Revision history for this message
James Page (james-page) wrote :

I've been trying to reproduce this under KVM and on hardware (no LXC), but I'm unable to reproduce the problem, so this appears isolated to LXC.

The pacemaker <-> corosync communication occurs over IPC implemented using shared memory. I'm wondering whether this is managing to get into an inconsistent state.

The other thing worth noting is that in the deployment impacted, there are multiple corosync/pacemaker clusters running on the same physical host, but under different LXC containers.
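
Since libqb backs that IPC with files under /dev/shm, one way to look for inconsistent state is to inspect those segments directly. This is a diagnostic sketch, not something from the bug report; paths are the usual libqb defaults, and stale or wrongly-owned qb-* files left behind by an unclean shutdown would be one way for the IPC to end up inconsistent.

```shell
# List any libqb shared-memory segments inside the container (exits
# cleanly even when none exist). Stale files here after an unclean
# shutdown are a plausible source of the bad state described above.
find /dev/shm -maxdepth 1 -name 'qb-*' -exec ls -l {} +

# Cross-check ownership against the corosync process, if it is running:
pidof corosync || echo "corosync not running"
```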

Revision history for this message
Mario Splivalo (mariosplivalo) wrote :

Hi, guys.

I am also actively trying to reproduce a similar issue (one our customer complained about): I'm deploying a three-unit percona-cluster (each unit in a separate LXC on a separate physical machine) with a three-unit keystone (both services with hacluster subordinated), but have failed to reproduce it so far. We made sure everything was set up correctly, ran 'keystone service-list' and similar, and did a hard reset via IPMI of the node where the percona-cluster VIP resides; corosync/pacemaker did their job, the VIP moved, and 'keystone service-list' is happy.

I'll update here with more info as I go along.

Revision history for this message
Felipe Reyes (freyes) wrote :

I'm seeing this problem in another environment with a similar deployment (3 LXC containers):

Apr 20 16:39:26 juju-machine-3-lxc-4 crm_verify[31774]: notice: crm_log_args: Invoked: crm_verify -V -p
Apr 20 16:39:27 juju-machine-3-lxc-4 cibadmin[31786]: notice: crm_log_args: Invoked: cibadmin -p -P
Apr 20 16:50:01 juju-machine-3-lxc-4 cib[780]: error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
Apr 20 16:50:01 juju-machine-3-lxc-4 cib[780]: error: cib_cs_destroy: Corosync connection lost! Exiting.
Apr 20 16:50:01 juju-machine-3-lxc-4 crmd[785]: error: crmd_quorum_destroy: connection terminated
Apr 20 16:50:01 juju-machine-3-lxc-4 attrd[783]: error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
Apr 20 16:50:01 juju-machine-3-lxc-4 stonith-ng[781]: error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
Apr 20 16:50:01 juju-machine-3-lxc-4 crmd[785]: notice: crmd_exit: Forcing immediate exit: Link has been severed (67)
Apr 20 16:50:01 juju-machine-3-lxc-4 lrmd[782]: warning: qb_ipcs_event_sendv: new_event_notification (782-785-6): Bad file descriptor (9)
Apr 20 16:50:01 juju-machine-3-lxc-4 lrmd[782]: warning: send_client_notify: Notification of client crmd/8ad990ba-cf09-4ba3-b74b-a7d05d377a1b failed
Apr 20 16:50:01 juju-machine-3-lxc-4 lrmd[782]: error: crm_abort: crm_glib_handler: Forked child 760 to record non-fatal assert at logging.c:63 : Source ID 4601370 was not found when attempting to remove it
Apr 20 16:50:01 juju-machine-3-lxc-4 pacemakerd[773]: error: pcmk_child_exit: Child process cib (780) exited: Invalid argument (22)
Apr 20 16:50:01 juju-machine-3-lxc-4 pacemakerd[773]: notice: pcmk_process_exit: Respawning failed child process: cib
Apr 20 16:50:01 juju-machine-3-lxc-4 pacemakerd[773]: error: pcmk_child_exit: Child process crmd (785) exited: Link has been severed (67)
Apr 20 16:50:01 juju-machine-3-lxc-4 pacemakerd[773]: notice: pcmk_process_exit: Respawning failed child process: crmd
Apr 20 16:50:01 juju-machine-3-lxc-4 attrd[783]: crit: attrd_cs_destroy: Lost connection to Corosync service!
Apr 20 16:50:01 juju-machine-3-lxc-4 attrd[783]: notice: main: Exiting...
Apr 20 16:50:01 juju-machine-3-lxc-4 attrd[783]: notice: main: Disconnecting client 0x7ff985e478e0, pid=785...
Apr 20 16:50:01 juju-machine-3-lxc-4 pacemakerd[773]: error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
Apr 20 16:50:01 juju-machine-3-lxc-4 pacemakerd[773]: error: mcp_cpg_destroy: Connection destroyed
Apr 20 16:50:01 juju-machine-3-lxc-4 attrd[783]: error: attrd_cib_connection_destroy: Connection to the CIB terminated...
Apr 20 16:50:01 juju-machine-3-lxc-4 cib[761]: debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Apr 20 16:50:01 juju-machine-3-lxc-4 crmd[767]: debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Apr 20 16:50:01 juju-machine-3-lxc-4 crmd[767]: notice: main: CRM Git Version: 42f2063
Apr 20 16:50:01 juju-machine-3-lxc-4 stonith-ng[781...


Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in lxc (Ubuntu):
status: New → Confirmed
Felipe Reyes (freyes)
tags: added: sts
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

@jamespage,

I assume the answer to comment #5 was no, the test package didn't fix it? Was this in the end due to the MTU issue?

Revision history for this message
Jill Rouleau (jillrouleau) wrote :

We've run into this problem after an extended MAAS/DHCP outage with leases expiring on metals and units. All hacluster-deployed LXCs (openstack-ha services) lost corosync-pacemaker connectivity with "corosync Invalid IPC credentials", resolved by LXC reboots. This is a staging cloud, so we could take down maas-dhcp to replicate/test.

tags: added: canonical-bootstack
Revision history for this message
Mario Splivalo (mariosplivalo) wrote :

A restart of pacemaker and corosync (down pacemaker on all units, down corosync on all units, start corosync, verify all is good, start pacemaker) should resolve the issue.
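
The ordered restart described above can be sketched as follows. RUN is a hypothetical stand-in for however you execute a command on every unit (juju run, an ssh loop, ...); here it only echoes, so the sequence is visible without touching a real cluster.

```shell
# Hypothetical sketch of the full-cluster restart sequence above.
# RUN just echoes; substitute your real fan-out mechanism.
RUN="echo on-all-units:"
$RUN "service pacemaker stop"    # 1. stop pacemaker on all units first
$RUN "service corosync stop"     # 2. then stop corosync on all units
$RUN "service corosync start"    # 3. start corosync everywhere
$RUN "corosync-quorumtool -s"    #    e.g. verify quorum before continuing
$RUN "service pacemaker start"   # 4. only then start pacemaker again
```

The ordering matters: pacemaker must be fully down everywhere before corosync is cycled, and corosync must be verified healthy before pacemaker comes back.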

@Jill: are you able to reproduce the issue? I'm assuming you're running trusty - can you try reproducing using corosync and related packages from vivid?

Revision history for this message
Jill Rouleau (jillrouleau) wrote :

I can reproduce with corosync 2.3.3. Using corosync 2.3.4 from ppa:mariosplivalo/corosync on trusty I've not been able to reproduce on 2 tries.

Revision history for this message
JuanJo Ciarlante (jjo) wrote :

After trying several corosync/pacemaker restarts without luck,
I was able to work around this by adding a 'uidgid'
entry for hacluster:haclient:

* from /var/log/syslog:
Aug 31 18:33:18 juju-machine-3-lxc-3 corosync[901082]: [MAIN ] Denied connection attempt from 108:113
$ getent passwd 108
hacluster:x:108:113::/var/lib/heartbeat:/bin/false
$ getent group 113
haclient:x:113:

* add uidgid config:
# echo $'uidgid {\n uid: hacluster\n gid: haclient\n}' > /etc/corosync/uidgid.d/hacluster

* restart => Ok (crm status, etc)

I can't explain why other units are working ok without
this ACL addition (racing at service setup/start?).
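
The steps above can be sketched as a small helper: given a corosync "Denied connection attempt from UID:GID" syslog line, extract the numeric ids and print the matching uidgid stanza (which you would then write to a file under /etc/corosync/uidgid.d/, as in the workaround). This is an illustrative sketch; the sample line is taken from the log excerpt above, and corosync also accepts user/group names here if you prefer to resolve them via getent first.

```shell
# Hypothetical helper: derive a uidgid stanza from a denied-connection
# log line. Sample line copied from the syslog excerpt above.
line='corosync[901082]: [MAIN ] Denied connection attempt from 108:113'
ids=$(printf '%s\n' "$line" | grep -oE '[0-9]+:[0-9]+$')
uid=${ids%%:*}
gid=${ids##*:}
# Numeric ids are used directly; names (hacluster/haclient) work too.
printf 'uidgid {\n    uid: %s\n    gid: %s\n}\n' "$uid" "$gid"
```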

Revision history for this message
JuanJo Ciarlante (jjo) wrote :

FYI, to re-check the workaround (and then a possible actual fix), I kicked corosync+pacemaker
on the cinder and glance services deployed with juju:

$ juju run --service=cinder,glance "service corosync restart; service pacemaker restart"

which broke pacemaker startup on all of them with the same "Invalid IPC credentials" error:
http://paste.ubuntu.com/12240477/ ('crm status' etc. then obviously fail).

Fixing using the workaround from comment #14:
$ juju run --service=glance,cinder "echo -e 'uidgid {\n uid: hacluster\n gid: haclient\n}' > /etc/corosync/uidgid.d/hacluster; service corosync restart; service pacemaker restart"

$ juju run --service=glance,cinder "crm status"
=> Ok

David Britton (dpb)
tags: added: landscpae
tags: added: landscape
removed: landscpae
Revision history for this message
Billy Olsen (billy-olsen) wrote :

Looking at logs from bug 1491228, it would appear that the first time that pacemaker goes to talk to the corosync daemon it gets denied. Per upstream docs [0], if the enable-acl property isn't explicitly enabled, then any user in the haclient group should have access. Since the hacluster charm doesn't explicitly enable the acl, I'd expect pacemaker to be running under the haclient/hacluster uid/gid.

The charms don't enable the acl and the package creates the hacluster user in the haclient group, so I suspect there's something additional going on here. For completeness, I'll attach the syslog in here. The pacemaker node did start after following JuanJo's workaround in comment #15.

[0] - http://clusterlabs.org/doc/acls.html

Revision history for this message
Billy Olsen (billy-olsen) wrote :
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Hi Billy,

So can you confirm that pacemaker *is* running under haclient/hacluster uid/gid?

Note, the comments above don't seem correct to me. The 'INSTALL' file shipped with corosync says:

> Before running any of the test programs
> ---------------------------------------
> The corosync executive will ensure security by only allowing the UID 0(root) or
> GID 0(root) to connect to it. To allow other users to access the corosync
> executive, create a directory called /etc/corosync/uidgid.d and place a file in
> it named in some way that is identifiable to you. All files in this directory
> will be scanned and their contents added to the allowed uid gid database. The
> contents of this file should be
> uidgid {
> uid: username
> gid: groupname
> }
> Please note that these users then have full ability to transmit and receive
> messages in the cluster and are not bound by the threat model described in
> SECURITY.

So the 'workaround' in comment #14 seems to be not a workaround but required configuration (for the charm to do).

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Serge,

I did double check that the pacemaker processes were running under hacluster/haclient uid/gid. I will double check for my own sanity (I may have seen one running as root). However, according to the pacemaker docs that I referenced above, root and hacluster users should always have full access (which is somewhat in conflict with the INSTALL file you reference):

> Users are regular UNIX users, so the same user accounts must be present on all nodes in the cluster.
>
> All user accounts must be in the haclient group.
>
> Pacemaker 1.1.5 or newer must be installed on all cluster nodes.
>
> The CIB must be configured to use the pacemaker-1.1 or 1.2 schema. This can be set by running:
>
> cibadmin --modify --xml-text '<cib validate-with="pacemaker-1.1"/>'
> The enable-acl option must be set. If ACLs are not explicitly enabled, the previous behaviour will be used (i.e. all users in the haclient group have full access):
>
> crm configure property enable-acl=true
> Once this is done, ACLs can be configured as described below.
>
> Note that the root and hacluster users will always have full access.
>
> If nonprivileged users will be using the crm shell and CLI tools (as opposed to only using Hawk or the Python GUI) they will need to have /usr/sbin added to their path.

If it were a necessity to add the ACL entry, then I would have expected that the hacluster charm code would always have needed this requirement and pacemaker should have always denied access. Additionally, since the charm has done no configuration of the ACLs, I would expect all nodes to get denied or allowed the same. Instead, what has been observed is that *some* of the nodes in the cluster have the pacemaker process successfully communicate with the corosync process, while others get this invalid credentials error that is seen.

I've already proposed a change (which has been merged into the /next branches of the hacluster charm) that incorporates JuanJo's comments (thank you JuanJo!) by explicitly defining the ACL entry, but I would like to better understand why the behavior is inconsistent.

no longer affects: lxc (Ubuntu)
Changed in pacemaker (Ubuntu):
status: Confirmed → Fix Released
Changed in pacemaker (Ubuntu Bionic):
status: New → Fix Released
Changed in pacemaker (Ubuntu Xenial):
status: New → Confirmed
Changed in pacemaker (Ubuntu Trusty):
status: New → Confirmed
Changed in pacemaker (Ubuntu Xenial):
importance: Undecided → High
Changed in pacemaker (Ubuntu Trusty):
importance: Undecided → Medium
Changed in pacemaker (Ubuntu Xenial):
importance: High → Medium
Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

From Corosync 2.4.1 Release Notes:

This release contains a fix for one regression and a few smaller fixes.

"""
During 2.3.6 development the bug which is causing pacemaker to not work after corosync configuration file is reloaded happened. Solution is ether to use this fixed version (recommended) or as a quick workaround (for users who wants to stay on 2.3.6 or 2.4.0) is to create file pacemaker (file name can be arbitrary) in /etc/corosync/uidgid.d directory with following content (you can also put same stanza into /etc/corosync/corosync.conf):

uidgid {
    gid: haclient
}
"""

Anyone relying on Trusty or Xenial corosync:

 corosync | 2.3.3-1ubuntu1 | trusty
 corosync | 2.3.3-1ubuntu4 | trusty-updates
 corosync | 2.3.5-3ubuntu1 | xenial
 corosync | 2.3.5-3ubuntu2.3 | xenial-security
 corosync | 2.3.5-3ubuntu2.3 | xenial-updates

should apply the mitigation above, as discovered previously by commenters on this bug.

Note: Trusty is already EOS so I'm marking it as "won't fix".

Xenial should include the mitigation in a SRU.

Changed in pacemaker (Ubuntu Trusty):
status: Confirmed → Won't Fix
importance: Medium → Undecided
Changed in pacemaker (Ubuntu Xenial):
importance: Medium → High
Paride Legovini (paride)
tags: added: server-next
Revision history for this message
Paride Legovini (paride) wrote :

Xenial reached ESM, so I'm marking the Xenial task as Won't Fix.

tags: removed: server-next
Changed in pacemaker (Ubuntu Xenial):
status: Confirmed → Won't Fix