Stalled IO Operations During MySQL Tests (with sysbench)

Bug #1970737 reported by StorPool Storage
This bug affects 1 person
Affects Status Importance Assigned to Milestone
qemu (Ubuntu)
Fix Released
Undecided
Sergio Durigan Junior
Jammy
Fix Released
Undecided
Sergio Durigan Junior
Kinetic
Fix Released
Undecided
Sergio Durigan Junior

Bug Description

[ Impact ]

* I/O stalls from the guest's point of view, details:

* QEMU's internal I/O infrastructure keeps an "io-plug" counter that could become unbalanced. Because of that, I/O stalls could happen, as newly submitted requests could be skipped while the queue was still considered plugged.

* Upstream identified and fixed the issue; this update backports the fix.
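The mechanism can be illustrated with a simplified model. This is not the actual QEMU code (the real fix lives in C, in QEMU's linux-aio code); it is a toy sketch showing why an unbalanced plug counter stalls I/O: requests queued while "plugged" are only flushed when the counter returns to zero, so a counter that never reaches zero leaves them queued forever.

```python
from collections import deque

class IoQueue:
    """Toy model of plug/unplug request batching (illustrative only,
    not the actual QEMU implementation)."""

    def __init__(self):
        self.plugged = 0        # depth of nested plug() calls
        self.pending = deque()  # requests waiting to be submitted
        self.submitted = []     # requests handed to the backend

    def plug(self):
        self.plugged += 1

    def submit(self, req):
        self.pending.append(req)
        if self.plugged == 0:   # not batching: submit immediately
            self.flush()

    def unplug(self):
        assert self.plugged > 0
        self.plugged -= 1
        # Requests are only flushed once the counter drops back to zero.
        # An unbalanced counter that stays > 0 means this flush never
        # runs, and the pending requests stall -- the bug's symptom.
        if self.plugged == 0 and self.pending:
            self.flush()

    def flush(self):
        while self.pending:
            self.submitted.append(self.pending.popleft())

q = IoQueue()
q.plug()
q.submit("req1")
q.submit("req2")
q.unplug()
print(q.submitted)  # both requests flushed once the counter drops to zero
```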

[ Test Plan ]

This is a somewhat tricky issue to reproduce, so we will mostly rely on the reporter's feedback for the final SRU verification. Below are the instructions (provided by the reporter) to set up a testing environment for this bug. Keep in mind that the problem doesn't always manifest, so several attempts may be necessary.

You will also need access to an NVMe storage device.

- Using a Jammy host with an NVMe storage device, install qemu/libvirt and make sure the setup is properly configured to create VMs. The following guide might be helpful: https://ubuntu.com/server/docs/virtualization-libvirt.

- Create an Ubuntu Jammy LTS VM. Make sure that the host NVMe device can be accessed by the VM.

- Run:

# apt-get install mysql-server mysql-common sysbench apparmor-utils
# systemctl disable --now mysql.service
# aa-complain /usr/sbin/mysqld
# reboot

Assuming that your NVMe device is mapped to /dev/vdb inside the VM:

# mkdir -p /data
# mkfs.ext4 /dev/vdb
# mount /dev/vdb /data
# mkdir /data/mysql
# mkdir /var/run/mysqld

# /usr/sbin/mysqld --no-defaults --datadir=/data/mysql --lc-messages-dir=/usr/share/mysql/english --log-error --max_connections=256 --socket=/var/run/mysqld/mysqld.sock --table_open_cache=512 --tmpdir=/var/tmp --innodb_buffer_pool_size=1024M --innodb_data_file_path=ibdata1:32M:autoextend --innodb_data_home_dir=/data/mysql --innodb_doublewrite=0 --innodb_flush_log_at_trx_commit=1 --innodb_flush_method=O_DIRECT --innodb_lock_wait_timeout=50 --innodb_log_buffer_size=16M --innodb_log_file_size=256M --innodb_log_group_home_dir=/data/mysql --innodb_max_dirty_pages_pct=80 --innodb_thread_concurrency=0 --user=root --initialize-insecure

# /usr/sbin/mysqld --no-defaults --datadir=/data/mysql --lc-messages-dir=/usr/share/mysql/english --log-error --max_connections=256 --socket=/var/run/mysqld/mysqld.sock --table_open_cache=512 --tmpdir=/var/tmp --innodb_buffer_pool_size=1024M --innodb_data_file_path=ibdata1:32M:autoextend --innodb_data_home_dir=/data/mysql --innodb_doublewrite=0 --innodb_flush_log_at_trx_commit=1 --innodb_flush_method=O_DIRECT --innodb_lock_wait_timeout=50 --innodb_log_buffer_size=16M --innodb_log_file_size=256M --innodb_log_group_home_dir=/data/mysql --innodb_max_dirty_pages_pct=80 --innodb_thread_concurrency=0 --user=root &

# echo 'status' | mysql -uroot # verify that MySQL server is up
# echo 'drop database test1m' | mysql -uroot
# echo 'create database test1m' | mysql -uroot

# /usr/share/sysbench/oltp_read_write.lua --threads=10 --table-size=20000000 --events=0 --time=900 --mysql-user=root --tables=10 --delete_inserts=10 --index_updates=10 --non_index_updates=10 --db-ps-mode=disable --report-interval=1 --db-driver=mysql --mysql-db=test1m --max-requests=0 --rand-seed=303 prepare

# /usr/share/sysbench/oltp_read_write.lua --threads=6 --table-size=20000000 --events=0 --time=900 --mysql-user=root --tables=10 --delete_inserts=10 --index_updates=10 --non_index_updates=10 --db-ps-mode=disable --report-interval=1 --db-driver=mysql --mysql-db=test1m --max-requests=0 --rand-seed=303 run

According to the reporter's feedback, when the bug manifests you will see output like the following:

...
[ 620s ] thds: 6 tps: 327.00 qps: 18348.00 (r/w/o: 4578.00/13116.00/654.00) lat (ms,95%): 30.81 err/s: 0.00 reconn/s: 0.00
[ 621s ] thds: 6 tps: 320.00 qps: 17930.85 (r/w/o: 4479.96/12810.89/639.99) lat (ms,95%): 39.65 err/s: 0.00 reconn/s: 0.00
[ 622s ] thds: 6 tps: 317.00 qps: 17670.96 (r/w/o: 4432.99/12603.97/634.00) lat (ms,95%): 30.81 err/s: 0.00 reconn/s: 0.00
[ 623s ] thds: 6 tps: 299.83 qps: 16896.41 (r/w/o: 4202.61/12094.14/599.66) lat (ms,95%): 25.28 err/s: 0.00 reconn/s: 0.00
[ 624s ] thds: 6 tps: 0.00 qps: 6.00 (r/w/o: 0.00/6.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 625s ] thds: 6 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 626s ] thds: 6 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 627s ] thds: 6 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
...

which indicates that there's no I/O happening in the NVMe device at all.
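When running long verification iterations, the stall can be spotted automatically rather than by eye. The following is a small hypothetical helper (the function name and the three-line window threshold are our choices, not part of the test plan) that parses sysbench's per-second report lines and flags a run of consecutive lines with both tps and qps at zero:

```python
import re

# Matches sysbench's per-second report lines, e.g.
# "[ 624s ] thds: 6 tps: 0.00 qps: 6.00 (r/w/o: ...) ..."
LINE_RE = re.compile(r"\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)\s+qps:\s*([\d.]+)")

def find_stall(report_lines, window=3):
    """Return the second at which a stall starts, defined as `window`
    consecutive report lines with tps == 0 and qps == 0; else None."""
    run_start, run_len = None, 0
    for line in report_lines:
        m = LINE_RE.search(line)
        if not m:
            continue  # skip non-report lines (headers, summaries)
        sec, tps, qps = int(m.group(1)), float(m.group(2)), float(m.group(3))
        if tps == 0.0 and qps == 0.0:
            if run_len == 0:
                run_start = sec
            run_len += 1
            if run_len >= window:
                return run_start
        else:
            run_len = 0  # activity resumed; reset the window
    return None
```

Fed the sample output above, it would report the stall starting at second 625 (second 624 still shows qps 6.00, so it is not counted as fully stalled).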

[ Where problems could occur ]

The changes are isolated, small, and touch only one subsystem, so we can reasonably assume that regressions, if any, would appear in the I/O subsystem. That is a small subset of the many things qemu does, which should make any regression introduced by this update easier to spot. So watch out for any odd disk I/O behavior with this SRU.

[ Original Description ]

---Problem Description---

In a virtual machine, during MySQL performance tests with sysbench, IO operations freeze and the virtual disk stops responding. The MySQL data is on a virtual drive backed by the host's local NVMe, attached to the VM as a raw virtio-block device. The test runs smoothly for a few minutes; after a while, IO operations freeze, and any attempt to read from or write to the virtual drive blocks indefinitely. Once the problem occurs, every read operation on the affected drive (e.g. ls, cat, etc.) also hangs forever.

---Host Hardware---

CPU: AMD EPYC 7302P 16-Core Processor (32 threads)
RAM: 128 GB
OS Drive: Toshiba KXG60ZNV256G M.2 NVMe PCI-E SSD (256 GB)
Data Drive: Samsung PM983 MZQLB960HAJR-00007 U.2 (960 GB)

---Host Software---

OS: Ubuntu 22.04 LTS
Kernel: 5.15.0-27-generic
Qemu: 1:6.2+dfsg-2ubuntu6
Libvirt: 8.0.0-1ubuntu7

---VM Hardware---

vCPU: <vcpu placement='static'>8</vcpu>
CPU Mode: <cpu mode='host-passthrough' check='none' migratable='on'/>
RAM: 64 GB
OS Type: <type arch='x86_64' machine='pc-q35-6.2'>hvm</type>
OS Drive (64 GB):
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none' io='native' discard='unmap'/>
    <target dev='vda' bus='virtio'/>
Block Data Drive:
  <disk type="block" device="disk">
    <driver name="qemu" type="raw" cache="none" io="native" discard="unmap"/>
    <target dev="vdb" bus="virtio"/>

---VM Software & Configuration---

OS: Ubuntu 22.04 LTS (minimized)
Kernel: 5.15.0-27-generic
Swap: disabled
OS Drive: /dev/vda2; file-system: ext4; mount-options: defaults; mount-point: /
Data Drive: /dev/vdb
MySQL: 8.0.28-0ubuntu4
Sysbench: 1.0.20+ds-2

---Prepare the VM---

1. Install Ubuntu 22.04 LTS (minimized) as VM OS
2. Boot the VM & log-in as root
3. apt-get install mysql-server mysql-common sysbench apparmor-utils
4. systemctl disable --now mysql.service
5. aa-complain /usr/sbin/mysqld
6. systemctl restart apparmor

---Reproduction---

1. Reboot the VM & log-in as root
2. mkdir -p /data
3. mkfs.ext4 /dev/vdb
4. mount /dev/vdb /data
5. mkdir /data/mysql
6. mkdir /var/run/mysqld
7. /usr/sbin/mysqld --no-defaults --datadir=/data/mysql --lc-messages-dir=/usr/share/mysql/english --log-error --max_connections=256 --socket=/var/run/mysqld/mysqld.sock --table_open_cache=512 --tmpdir=/var/tmp --innodb_buffer_pool_size=1024M --innodb_data_file_path=ibdata1:32M:autoextend --innodb_data_home_dir=/data/mysql --innodb_doublewrite=0 --innodb_flush_log_at_trx_commit=1 --innodb_flush_method=O_DIRECT --innodb_lock_wait_timeout=50 --innodb_log_buffer_size=16M --innodb_log_file_size=256M --innodb_log_group_home_dir=/data/mysql --innodb_max_dirty_pages_pct=80 --innodb_thread_concurrency=0 --user=root --initialize-insecure
8. /usr/sbin/mysqld --no-defaults --datadir=/data/mysql --lc-messages-dir=/usr/share/mysql/english --log-error --max_connections=256 --socket=/var/run/mysqld/mysqld.sock --table_open_cache=512 --tmpdir=/var/tmp --innodb_buffer_pool_size=1024M --innodb_data_file_path=ibdata1:32M:autoextend --innodb_data_home_dir=/data/mysql --innodb_doublewrite=0 --innodb_flush_log_at_trx_commit=1 --innodb_flush_method=O_DIRECT --innodb_lock_wait_timeout=50 --innodb_log_buffer_size=16M --innodb_log_file_size=256M --innodb_log_group_home_dir=/data/mysql --innodb_max_dirty_pages_pct=80 --innodb_thread_concurrency=0 --user=root &
9. echo 'status' | mysql -uroot # verify that MySQL server is up
10. echo 'drop database test1m' | mysql -uroot
11. echo 'create database test1m' | mysql -uroot
12. /usr/share/sysbench/oltp_read_write.lua --threads=10 --table-size=20000000 --events=0 --time=900 --mysql-user=root --tables=10 --delete_inserts=10 --index_updates=10 --non_index_updates=10 --db-ps-mode=disable --report-interval=1 --db-driver=mysql --mysql-db=test1m --max-requests=0 --rand-seed=303 prepare
13. /usr/share/sysbench/oltp_read_write.lua --threads=6 --table-size=20000000 --events=0 --time=900 --mysql-user=root --tables=10 --delete_inserts=10 --index_updates=10 --non_index_updates=10 --db-ps-mode=disable --report-interval=1 --db-driver=mysql --mysql-db=test1m --max-requests=0 --rand-seed=303 run

---Resulting Output---

...
[ 620s ] thds: 6 tps: 327.00 qps: 18348.00 (r/w/o: 4578.00/13116.00/654.00) lat (ms,95%): 30.81 err/s: 0.00 reconn/s: 0.00
[ 621s ] thds: 6 tps: 320.00 qps: 17930.85 (r/w/o: 4479.96/12810.89/639.99) lat (ms,95%): 39.65 err/s: 0.00 reconn/s: 0.00
[ 622s ] thds: 6 tps: 317.00 qps: 17670.96 (r/w/o: 4432.99/12603.97/634.00) lat (ms,95%): 30.81 err/s: 0.00 reconn/s: 0.00
[ 623s ] thds: 6 tps: 299.83 qps: 16896.41 (r/w/o: 4202.61/12094.14/599.66) lat (ms,95%): 25.28 err/s: 0.00 reconn/s: 0.00
[ 624s ] thds: 6 tps: 0.00 qps: 6.00 (r/w/o: 0.00/6.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 625s ] thds: 6 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 626s ] thds: 6 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 627s ] thds: 6 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
...

---Expecting to happen---

No lines with "tps: 0.00 qps: 0.00" should appear, unlike the last four lines in the example above.

---Additional Notes---

1. This is not happening on every run, so it is possible for some test iterations to complete successfully.
2. The same happens with a larger number of sysbench threads (e.g. 8, 16, 24, 32) too.
3. The problem does not occur if the io policy of the data drive is changed from io="native" to io="io_uring" (at least for 7 hours of continuous testing).
4. While IO operations in the VM are frozen, the NVMe device responds to requests from the host. (e.g. dd if=/dev/nvme1n1 of=/dev/null bs=512 count=1 iflag=direct).

Please find attached the libvirt XML configuration of the example VM.

Best regards,
Nikolay Tenev


Revision history for this message
StorPool Storage (storpool) wrote :
Revision history for this message
Ubuntu Foundations Team Bug Bot (crichton) wrote :

Thank you for taking the time to report this bug and helping to make Ubuntu better. It seems that your bug report is not filed about a specific source package though, rather it is just filed against Ubuntu in general. It is important that bug reports be filed about source packages so that people interested in the package can find the bugs about it. You can find some hints about determining what package your bug might be about at https://wiki.ubuntu.com/Bugs/FindRightPackage. You might also ask for help in the #ubuntu-bugs irc channel on Libera.chat.

To change the source package that this bug is filed about visit https://bugs.launchpad.net/ubuntu/+bug/1970737/+editstatus and add the package name in the text box next to the word Package.

[This is an automated message. I apologize if it reached you inappropriately; please just reply to this message indicating so.]

tags: added: bot-comment
Changed in ubuntu:
status: New → Incomplete
affects: ubuntu → mysql-8.0 (Ubuntu)
Changed in mysql-8.0 (Ubuntu):
status: Incomplete → New
description: updated
description: updated
Revision history for this message
StorPool Storage (storpool) wrote :

Hello,

I'm adding additional information here. It seems that the issue is not related to mysql-8.0, but to the kernel or qemu. After the sysbench test starts to output lines with "tps: 0.00 qps: 0.00", the command:

# ls /data/mysql

sometimes lists the files, but sometimes hangs.

Whatever the MySQL daemon does, it should not cause the file-system to hang for other processes. Also, the command

# dd if=/dev/vdb of=/dev/null bs=512 count=1 iflag=direct

hangs, but does not show up as an outstanding storage operation in iostat (%util of /dev/vdb remains 0).

Additionally, please find in the attached file the output from:

# echo w > /proc/sysrq-trigger

I hope that this helps!

Best regards,
Nikolay Tenev

Revision history for this message
Sergio Durigan Junior (sergiodj) wrote :

Thank you for taking the time to report the bug and make Ubuntu better.

I am trying to obtain access to a setup where I can attempt to reproduce the issue as described in the bug description (thanks for the detailed instructions, by the way!). I'll let you know when I have more info, but meanwhile feel free to provide more details if you have them.

Thanks.

tags: added: server-triage-discuss
description: updated
Robie Basak (racb)
tags: removed: server-triage-discuss
Revision history for this message
StorPool Storage (storpool) wrote :

The issue is found and fixed in a development version of QEMU. For more information - https://lists.gnu.org/archive/html/qemu-devel/2022-06/msg01771.html

affects: mysql-8.0 (Ubuntu) → qemu (Ubuntu)
Revision history for this message
Lucas Kanashiro (lucaskanashiro) wrote :

Thanks for the link to the upstream patches. We need someone to work on it, not sure if Sergio is still planning to do it. FWIW, I am subscribing ubuntu-server to this bug.

Revision history for this message
Sergio Durigan Junior (sergiodj) wrote :

Thanks for the update, StorPool Storage and Lucas.

I haven't been able to reproduce the bug so far, but in the interest of speeding up the process I've prepared a PPA containing the fixes from https://lists.gnu.org/archive/html/qemu-devel/2022-06/msg01771.html. The link to the PPA is:

https://launchpad.net/~sergiodj/+archive/ubuntu/qemu-bug1970737

Bear in mind a few things, though:

- This PPA is obviously just an attempt to verify that the fix works for StorPool Storage. It's not supported by Canonical/Ubuntu in any way.

- If possible, please do this verification in a testing environment (i.e., not in production).

- Even if the backported patches prove to be the real fix for this issue, we'll have to come up with a good SRU template in order to get it accepted into Jammy. But let's worry about this if/when needed.

Revision history for this message
StorPool Storage (storpool) wrote :

Hi,

Today I will run the tests with the package from Sergio's PPA and report the results here.

Best regards,
Nikolay Tenev

Revision history for this message
StorPool Storage (storpool) wrote :

Hello,

Today we have finished the tests with the qemu packages from Sergio's PPA and we did not hit the issue even after a 12-hour run.

So it seems that this issue is solved. The fix is now merged into upstream qemu.git/master, so it is up to the qemu package maintainers to backport it to the distribution packages.

Best regards,
Nikolay Tenev

Changed in qemu (Ubuntu Jammy):
status: New → Confirmed
Changed in qemu (Ubuntu Kinetic):
status: New → Confirmed
description: updated
Changed in qemu (Ubuntu Jammy):
assignee: nobody → Sergio Durigan Junior (sergiodj)
Changed in qemu (Ubuntu Kinetic):
assignee: nobody → Sergio Durigan Junior (sergiodj)
tags: added: server-todo
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

@Sergio - I reviewed your MP, and the upstream changes as well as your packaging thereof LGTM.
You can upload it once the ongoing fixes I mentioned have landed.

Changed in qemu (Ubuntu Jammy):
status: Confirmed → In Progress
Changed in qemu (Ubuntu Kinetic):
status: Confirmed → In Progress
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package qemu - 1:6.2+dfsg-2ubuntu8

---------------
qemu (1:6.2+dfsg-2ubuntu8) kinetic; urgency=medium

  [ Marc Deslauriers ]
  * SECURITY UPDATE: heap overflow in floppy disk emulator
    - debian/patches/CVE-2021-3507.patch: prevent end-of-track overrun in
      hw/block/fdc.c.
    - CVE-2021-3507
  * SECURITY UPDATE: use-after-free in nvme
    - debian/patches/CVE-2021-3929.patch: deny DMA to the iomem of the
      device itself in hw/nvme/ctrl.c.
    - CVE-2021-3929
  * SECURITY UPDATE: integer overflow in QXL display device emulation
    - debian/patches/CVE-2021-4206.patch: check width and height in
      hw/display/qxl-render.c, hw/display/vmware_vga.c, ui/cursor.c.
    - CVE-2021-4206
  * SECURITY UPDATE: heap overflow in QXL display device emulation
    - debian/patches/CVE-2021-4207.patch: fix race condition in qxl_cursor
      in hw/display/qxl-render.c.
    - CVE-2021-4207
  * SECURITY UPDATE: potential privilege escalation in virtiofsd
    - debian/patches/CVE-2022-0358.patch: Drop membership of all
      supplementary groups in tools/virtiofsd/passthrough_ll.c.
    - CVE-2022-0358
  * SECURITY UPDATE: memory leakage in virtio-net device
    - debian/patches/CVE-2022-26353.patch: fix map leaking on error during
      receive in hw/net/virtio-net.c.
    - CVE-2022-26353
  * SECURITY UPDATE: memory leakage in vhost-vsock device
    - debian/patches/CVE-2022-26354.patch: detach the virqueue element in
      case of error in hw/virtio/vhost-vsock-common.c.
    - CVE-2022-26354

  [ Sergio Durigan Junior ]
  * Fix I/O stalls when using NVMe storage (LP: #1970737).
    - d/p/lp1970737-linux-aio-*.patch: Fix unbalanced plugged counter
      in laio_io_unplug.

 -- Sergio Durigan Junior <email address hidden> Wed, 22 Jun 2022 15:38:37 -0400

Changed in qemu (Ubuntu Kinetic):
status: In Progress → Fix Released
Revision history for this message
Robie Basak (racb) wrote :

This needs the SRU information completing please.

Changed in qemu (Ubuntu Jammy):
status: In Progress → Incomplete
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

FYI - I completed the SRU info as Sergio is out atm.

description: updated
Revision history for this message
Steve Langasek (vorlon) wrote : Please test proposed package

Hello StorPool, or anyone else affected,

Accepted qemu into jammy-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/qemu/1:6.2+dfsg-2ubuntu6.3 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-jammy to verification-done-jammy. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-jammy. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in qemu (Ubuntu Jammy):
status: Incomplete → Fix Committed
tags: added: verification-needed verification-needed-jammy
Revision history for this message
Ubuntu SRU Bot (ubuntu-sru-bot) wrote : Autopkgtest regression report (qemu/1:6.2+dfsg-2ubuntu6.3)

All autopkgtests for the newly accepted qemu (1:6.2+dfsg-2ubuntu6.3) for jammy have finished running.
The following regressions have been reported in tests triggered by the package:

systemd/249.11-0ubuntu3.4 (armhf)
vagrant-mutate/unknown (s390x)
sbuild/unknown (s390x)
open-iscsi/unknown (s390x)
ubuntu-image/unknown (ppc64el, s390x)
systemd/unknown (ppc64el, s390x)

Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUpdates policy regarding autopkgtest regressions [1].

https://people.canonical.com/~ubuntu-archive/proposed-migration/jammy/update_excuses.html#qemu

[1] https://wiki.ubuntu.com/StableReleaseUpdates#Autopkgtest_Regressions

Thank you!

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

All autopkgtest issues were flaky tests; I have resolved them by now.

Revision history for this message
Sergio Durigan Junior (sergiodj) wrote :

Hi StorPool Storage,

Can you please perform the verification steps for this bug in order to unblock the package and allow it to migrate to the release pocket? Since this is a bug that's hard to reproduce, we are relying on you to tell us if the package works fine.

Thanks in advance!

Revision history for this message
StorPool Storage (storpool) wrote :

Hi Sergio,

Somehow I have missed the latest updates. I will perform the tests today and report the results here.

Best regards,
Nikolay Tenev

Revision history for this message
Sergio Durigan Junior (sergiodj) wrote : Re: [Bug 1970737] Re: Stalled IO Operations During MySQL Tests (with sysbench)

On Thursday, July 07 2022, StorPool Storage wrote:

> Hi Sergio,
>
> Somehow I have missed the latest updates. I will perform the tests today
> and will report after them.

No problem, and thank you very much!

--
Sergio
GPG key ID: E92F D0B3 6B14 F1F4 D8E0 EB2F 106D A1C8 C3CB BF14

Revision history for this message
StorPool Storage (storpool) wrote :

Hi all,

We have performed the tests with qemu 6.2+dfsg-2ubuntu6.3 (from http://archive.ubuntu.com/ubuntu/ jammy-proposed restricted main multiverse universe), and after two iterations of 12 hours each, we were unable to reproduce the issue.

We can probably consider the issue fixed.

Best regards,
Nikolay Tenev

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Thank you for the check on your Hardware!

tags: added: verification-done verification-done-jammy
removed: verification-needed verification-needed-jammy
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package qemu - 1:6.2+dfsg-2ubuntu6.3

---------------
qemu (1:6.2+dfsg-2ubuntu6.3) jammy; urgency=medium

  * Fix unbalanced plugged counter in laio_io_unplug (LP: #1970737)
    - d/p/lp1970737-linux-aio-*.patch: Upstream patches.

 -- Sergio Durigan Junior <email address hidden> Tue, 21 Jun 2022 17:07:50 -0400

Changed in qemu (Ubuntu Jammy):
status: Fix Committed → Fix Released
Revision history for this message
Brian Murray (brian-murray) wrote : Update Released

The verification of the Stable Release Update for qemu has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.
