attached block devices not showing up

Bug #1479031 reported by Edward Hope-Morley
This bug affects 3 people
Affects                  Status        Importance  Assigned to     Milestone
linux (Ubuntu)           Invalid       Undecided   Unassigned
  Precise                Fix Released  High        Dan Streetman

Bug Description

I have Ubuntu Precise guests running in KVM to which I attach volumes immediately after boot. Sometimes I notice that the devices do not show up in the guest, e.g. they are not visible in /proc/partitions and the dmesg boot log shows no sign of them being detected. Qemu itself shows that the devices are attached, and if I do a rescan within the guest the devices appear. A reboot also causes the devices to appear. So, it feels like there is enough evidence to suggest that the devices were properly attached by qemu/seabios but they just didn't get noticed by the guest.

This issue is reproducible with Precise running the 3.2, 3.5 (lts-quantal) and 3.8 (lts-raring) kernels, but not as of 3.11 (lts-saucy), so it would appear that something changed between 3.8 and 3.11 that resolved this issue; if we can identify it, we should backport it to 3.2.

I am able to reproduce this issue using Openstack to spin up vms and attach volumes using the following script:

http://pastebin.ubuntu.com/11954253/

I hit this issue in roughly 1 out of 2 instances.

As a workaround, one can rescan the PCI bus and the device shows up:
echo 1 | sudo tee /sys/bus/pci/rescan
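
For example, a minimal guest-side sketch (assuming the attached volume is expected to show up as /dev/vdb; adjust the device name as needed) that only rescans when the device is missing:

#!/bin/bash
# Sketch: check whether an attached volume is visible in the guest and
# force a PCI bus rescan if it is not. /dev/vdb is only an example name.
DEV=${1:-/dev/vdb}

if ! grep -qw "$(basename "$DEV")" /proc/partitions; then
    echo "$DEV not visible, forcing a PCI bus rescan"
    echo 1 | sudo tee /sys/bus/pci/rescan > /dev/null
    sleep 2
fi

grep -w "$(basename "$DEV")" /proc/partitions || echo "$DEV still missing"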

description: updated
Chris J Arges (arges)
Changed in linux (Ubuntu):
status: New → Fix Released
Changed in linux (Ubuntu Precise):
assignee: nobody → Chris J Arges (arges)
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in linux (Ubuntu Precise):
status: New → Confirmed
Chris J Arges (arges)
description: updated
Revision history for this message
Chris J Arges (arges) wrote :

Problem exists between v3.9 and v3.10 (i.e., it still reproduces on v3.9 but not on v3.10).

Changed in linux (Ubuntu Precise):
assignee: Chris J Arges (arges) → Rafael David Tinoco (inaddy)
Changed in linux (Ubuntu Precise):
assignee: Rafael David Tinoco (inaddy) → nobody
Revision history for this message
Chris J Arges (arges) wrote :

This commit fixes the issue, whereas the commit just before it still results in failure:
3b63aaa70e1ccc4b66d60acc78da09700706a703

We need to figure out how to backport this to 3.2, since it is not a clean cherry-pick and is a fairly large change.
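
For reference, a first backport attempt would look something like the following (a sketch only, assuming a Precise 3.2 kernel tree is checked out and the upstream kernel tree is available as a remote named 'upstream'):

# Sketch: try the upstream fix on top of the 3.2 tree and inspect the
# conflicts; any prerequisite commits still need to be identified by hand.
git fetch upstream
git cherry-pick -x 3b63aaa70e1ccc4b66d60acc78da09700706a703
git status    # lists the files that did not apply cleanly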

Chris J Arges (arges)
Changed in linux (Ubuntu Precise):
importance: Undecided → High
Changed in linux (Ubuntu Precise):
assignee: nobody → Rafael David Tinoco (inaddy)
Changed in linux (Ubuntu Precise):
status: Confirmed → In Progress
Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Okay, following the instructions already provided by Chris Arges (responsible for the kernel bisection so far) and Ed (working on the case since its beginning), I was able to create another reproducer, not OpenStack dependent, that fails close to 100% of the time.

Output:

# running test

----

inaddy@alien:test$ ./teste.sh
% RUN: 1
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
! Adding disk 0: /dev/vdb
! Adding disk 1: /dev/vdc
! Waiting virtual machine network to be up
! Checking if disk is online:
* Found 0 extra disks online
$ Missing disk(s) exiting

# checking results

inaddy@alien:test$ ssh sf00068947precise
Last login: Tue Aug 11 14:07:11 2015 from 192.168.128.1
inaddy@sf00068947precise:~$ ls -lah /dev/vd*
brw-rw---- 1 root disk 253, 0 Aug 11 14:29 /dev/vda
brw-rw---- 1 root disk 253, 1 Aug 11 14:29 /dev/vda1
brw-rw---- 1 root disk 253, 2 Aug 11 14:29 /dev/vda2
brw-rw---- 1 root disk 253, 5 Aug 11 14:29 /dev/vda5
inaddy@sf00068947precise:~$ echo 1 | sudo tee /sys/bus/pci/rescan
1
inaddy@sf00068947precise:~$ ls -lah /dev/vd*
brw-rw---- 1 root disk 253, 0 Aug 11 14:29 /dev/vda
brw-rw---- 1 root disk 253, 1 Aug 11 14:29 /dev/vda1
brw-rw---- 1 root disk 253, 2 Aug 11 14:29 /dev/vda2
brw-rw---- 1 root disk 253, 5 Aug 11 14:29 /dev/vda5
brw-rw---- 1 root disk 253, 16 Aug 11 14:29 /dev/vdb
brw-rw---- 1 root disk 253, 32 Aug 11 14:29 /dev/vdc

----

Attaching files for this "second" reproducer. This will guide me in verifying a possible fix.
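
For readers without access to the attachments, a heavily simplified sketch of what this reproducer loop looks like (the attached script is the authoritative version; the domain name and the disk XML file names below are assumptions):

#!/bin/bash
# Simplified sketch: start the guest, attach two disks right away, then
# check over ssh whether the guest actually noticed them.
DOMAIN=sf00068947precise          # assumed libvirt domain name

virsh start "$DOMAIN"
virsh attach-device "$DOMAIN" disk0.xml --live   # should become /dev/vdb
virsh attach-device "$DOMAIN" disk1.xml --live   # should become /dev/vdc

sleep 30                                         # wait for the guest network to come up

FOUND=$(ssh "$DOMAIN" 'ls /dev/vd[b-z] 2>/dev/null | wc -l')
echo "* Found $FOUND extra disks online"
if [ "$FOUND" -lt 2 ]; then
    echo '$ Missing disk(s) exiting'
    exit 1
fi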

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Disk definition to be attached (using ceph).

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Second reproducer for this bug (a shell script attaching ceph volumes to a KVM machine).
Make sure to run this as a 1st-level guest (a 2nd-level guest is slow and doesn't trigger the issue).

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Virtual machine definition (being used to attach/detach ceph volumes)

Note: ceph is being used only because the original problem also involved ceph.

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Just out of curiosity: if I sleep between starting the virtual machine and adding the disks (sleep 10, for example), then the disk attachment works almost 100% of the time (see the sketch after the output below):

% RUN: 1
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
! Adding disk 0: /dev/vdb
! Adding disk 1: /dev/vdc
! Waiting virtual machine network to be up
! Checking if disk is online:
* Found 2 extra disks online
! Destroying virtual machine
% RUN: 2
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
...
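
Relative to the reproducer sketch above, the change is just a delay between starting the domain and attaching the disks, e.g. (same assumed names):

DOMAIN=sf00068947precise
virsh start "$DOMAIN"
sleep 10                                         # give the guest a moment before hotplugging
virsh attach-device "$DOMAIN" disk0.xml --live
virsh attach-device "$DOMAIN" disk1.xml --live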

Changed in linux (Ubuntu Precise):
assignee: Rafael David Tinoco (inaddy) → nobody
Dan Streetman (ddstreet)
Changed in linux (Ubuntu Precise):
assignee: nobody → Dan Streetman (ddstreet)
Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Per Dan's request, the second test below is what he asked me to run; it reasonably mitigates the issue:

#
# Precise latest -updates kernel
#

inaddy@sf00068947precise:~$ uname -a
Linux sf00068947precise 3.2.0-89-generic #127-Ubuntu SMP Tue Jul 28 09:28:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

inaddy@alien:teste$ ./teste.sh
% RUN: 1
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
! Adding disk 0: /dev/vdb
! Adding disk 1: /dev/vdc
! Waiting virtual machine network to be up
! Checking if disk is online:
* Found 0 extra disks online
$ Missing disk(s) exiting

inaddy@alien:teste$ virsh shutdown sf00068947precise
Domain sf00068947precise is being shutdown

inaddy@alien:teste$ ./teste.sh
% RUN: 1
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
! Adding disk 0: /dev/vdb
! Adding disk 1: /dev/vdc
! Waiting virtual machine network to be up
! Checking if disk is online:
* Found 0 extra disks online
$ Missing disk(s) exiting

#
# Precise latest kernel
#
# CONFIG_HOTPLUG_PCI=y
# CONFIG_HOTPLUG_PCI_FAKE=m
# CONFIG_HOTPLUG_PCI_ACPI=y
#

inaddy@sf00068947precise:~$ uname -a
Linux sf00068947precise 3.2.69-inaddy #1 SMP Wed Aug 12 15:43:37 BRT 2015 x86_64 x86_64 x86_64 GNU/Linux

inaddy@alien:teste$ ./teste.sh
% RUN: 1
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
! Adding disk 0: /dev/vdb
! Adding disk 1: /dev/vdc
! Waiting virtual machine network to be up
! Checking if disk is online:
* Found 2 extra disks online
! Destroying virtual machine
...
...
% RUN: 16
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
! Adding disk 0: /dev/vdb
! Adding disk 1: /dev/vdc
! Waiting virtual machine network to be up
! Checking if disk is online:
* Found 2 extra disks online
! Destroying virtual machine
...

Revision history for this message
Dan Streetman (ddstreet) wrote :

Here is a PPA with a fix for this:

https://pad.lv/ppa/ddstreet/lp1479031

There was no code change required; the only change is that the acpiphp driver is now built-in instead of a module.

During boot, when the pci-acpi driver is loaded, it enumerates the bus. Later, when the acpiphp driver is loaded, it registers for any pci-acpi hotplug events. Between the pci-acpi driver loading and the acpiphp driver loading, any PCI ACPI hotplug events are lost, since the bus was already enumerated and there is no driver monitoring hotplug events.

Currently (without this PPA), pci-acpi is built-in and enumerates the PCI ACPI devices/buses during boot. The acpiphp driver is a module, however, and therefore loads much later (more than 4 seconds later in my brief tests). With this PPA, the acpiphp driver is built-in and loads a fraction of a second after the pci-acpi driver (less than 1/4 second in my brief tests).

So, to clarify, this does not eliminate the window of time where PCI ACPI hotplug events will be lost; it only reduces it from multiple seconds down to a small fraction of a second.
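
Concretely, the only delta is the kernel config: CONFIG_HOTPLUG_PCI_ACPI goes from =m (module) to =y (built-in). On a running kernel this can be checked with something like:

# The fixed kernel should report CONFIG_HOTPLUG_PCI_ACPI=y (built-in)
# rather than =m (module); the path assumes the stock Ubuntu config naming.
grep CONFIG_HOTPLUG_PCI_ACPI /boot/config-"$(uname -r)"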

You can check the size of the window where hotplug events will be lost with:
# dmesg | grep "ACPI: bus type pci"
[ 0.230350] ACPI: bus type pci registered
# dmesg | grep "acpiphp: ACPI"
[ 0.517999] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5

That's not the exact window, since PCI enumeration happens after the bus type is registered, so the real window is a bit smaller than that.
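
A rough upper bound on that window can be computed directly from the two timestamps, for example:

# Sketch: subtract the "bus type pci registered" timestamp from the acpiphp
# one; with the values above this prints roughly 0.29 seconds.
dmesg | awk -F'[][]' '/bus type pci registered/ {a=$2} /acpiphp: ACPI Hot Plug/ {b=$2} END {print b - a}'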

tags: added: sts
Brad Figg (brad-figg)
Changed in linux (Ubuntu Precise):
status: In Progress → Fix Committed
Changed in linux (Ubuntu):
status: Fix Released → Invalid
Revision history for this message
Luis Henriques (henrix) wrote :

This bug is awaiting verification that the kernel in -proposed solves the problem. Please test the kernel and update this bug with the results. If the problem is solved, change the tag 'verification-needed-precise' to 'verification-done-precise'.

If verification is not done by 5 working days from today, this fix will be dropped from the source code, and this bug will be closed.

See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Thank you!
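
For reference, a typical way to enable -proposed on Precise and pull in the proposed kernel (a sketch based on the wiki page above; adjust the mirror and component list as needed):

# Sketch: enable precise-proposed and install the proposed kernel.
echo 'deb http://archive.ubuntu.com/ubuntu/ precise-proposed main restricted universe multiverse' | \
    sudo tee /etc/apt/sources.list.d/proposed.list
sudo apt-get update
sudo apt-get install -t precise-proposed linux-generic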

tags: added: verification-needed-precise
Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

#
# Precise latest kernel
#
# CONFIG_HOTPLUG_PCI_ACPI=y
#

inaddy@sf00068947precise:boot$ uname -a
Linux sf00068947precise 3.2.0-92-generic #130-Ubuntu SMP Mon Oct 5 21:43:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

inaddy@alien:teste$ ./teste.sh
% RUN: 1
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
! Adding disk 0: /dev/vdb
! Adding disk 1: /dev/vdc
! Waiting virtual machine network to be up
! Checking if disk is online:
* Found 2 extra disks online
! Destroying virtual machine
...
...
% RUN: 16
+ Creating disk 0: /dev/vdb
+ Creating disk 1: /dev/vdc
! Starting virtual machine
! Adding disk 0: /dev/vdb
! Adding disk 1: /dev/vdc
! Waiting virtual machine network to be up
! Checking if disk is online:
* Found 2 extra disks online
! Destroying virtual machine
...

-> verification done

tags: added: verification-done
removed: verification-needed-precise
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package linux - 3.2.0-92.130

---------------
linux (3.2.0-92.130) precise; urgency=low

  [ Brad Figg ]

  * Release Tracking Bug
    - LP: #1500854

  [ <email address hidden> ]

  * [Config] HOTPLUG_PCI_ACPI=y
    - LP: #1479031

  [ John Johansen ]

  * SAUCE: (no-up) apparmor: fix mount not handling disconnected paths
    - LP: #1496430

  [ Upstream Kernel Changes ]

  * RDS: verify the underlying transport exists before creating a
    connection
    - LP: #1496232
    - CVE-2015-6937
  * virtio-net: drop NETIF_F_FRAGLIST
    - LP: #1484793
    - CVE-2015-5156

 -- Brad Figg <email address hidden> Mon, 05 Oct 2015 13:50:43 -0700

Changed in linux (Ubuntu Precise):
status: Fix Committed → Fix Released