Timeout waiting for vif plugging callback for instance

Bug #1357476 reported by Attila Fazekas
Affects                    Status     Importance  Assigned to  Milestone
OpenStack Compute (nova)   Expired    Undecided   Unassigned
  Icehouse                 Invalid    Critical    Unassigned
neutron                    Invalid    Undecided   Unassigned

Bug Description

n-cpu times out while waiting for the network-vif-plugged notification from neutron.

Logstash
========
http://logstash.openstack.org/#eyJzZWFyY2giOiIgbWVzc2FnZTogXCJUaW1lb3V0IHdhaXRpbmcgZm9yIHZpZiBwbHVnZ2luZyBjYWxsYmFjayBmb3IgaW5zdGFuY2VcIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwODEyMjI1NjY2NiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

message: "Timeout waiting for vif plugging callback for instance" AND tags:"screen-n-cpu.txt"

Logs
++++
http://logs.openstack.org/09/108909/4/gate/check-tempest-dsvm-neutron-full/628138b/logs/screen-n-cpu.txt.gz#_2014-08-13_21_14_53_453

2014-08-13 21:14:53.453 WARNING nova.virt.libvirt.driver [req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 ServerActionsTestXML-1011304525] Timeout waiting for vif plugging callback for instance 794ceb8c-a08b-4b02-bdcb-4ad5632f7744

2014-08-13 21:14:55.408 ERROR nova.compute.manager [req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 ServerActionsTestXML-1011304525] [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] Setting instance vm_state to ERROR
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] Traceback (most recent call last):
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] File "/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] disk_info, image)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] File "/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] old_instance_type, sys_meta)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] six.reraise(self.type_, self.value, self.tb)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] File "/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] block_device_info, power_on)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in finish_migration
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] block_device_info, power_on)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in _create_domain_and_network
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] raise exception.VirtualInterfaceCreateException()
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744] VirtualInterfaceCreateException: Virtual Interface creation failed
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 794ceb8c-a08b-4b02-bdcb-4ad5632f7744]

2014-08-13 21:14:56.138 ERROR oslo.messaging.rpc.dispatcher [req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 ServerActionsTestXML-1011304525] Exception during message handling: Virtual Interface creation failed
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher payload)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 292, in decorated_function
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher pass
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 278, in decorated_function
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 342, in decorated_function
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 266, in decorated_function
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher migration.instance_uuid, exc_info=True)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 253, in decorated_function
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 320, in decorated_function
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher kwargs['instance'], e, sys.exc_info())
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 308, in decorated_function
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 3726, in finish_resize
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher self._set_instance_error_state(context, instance)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher disk_info, image)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher old_instance_type, sys_meta)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher block_device_info, power_on)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in finish_migration
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher block_device_info, power_on)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in _create_domain_and_network
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher raise exception.VirtualInterfaceCreateException()
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher VirtualInterfaceCreateException: Virtual Interface creation failed
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher

Tags: network
Revision history for this message
yong sheng gong (gongysh) wrote :

Have you set:
notify_nova_on_port_data_changes = True
notify_nova_on_port_status_changes = True

in neutron.conf?
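
For reference, both options belong in the [DEFAULT] section of neutron.conf; a minimal excerpt (the comments paraphrase the option help text):

    [DEFAULT]
    # Send the network-vif-plugged/unplugged events to nova when a
    # port's status changes.
    notify_nova_on_port_status_changes = True
    # Notify nova when port data (e.g. fixed_ips) changes, so nova can
    # update its network info cache.
    notify_nova_on_port_data_changes = True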

Changed in neutron:
status: New → Incomplete
Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

Marking as Incomplete until gongysh's question is answered.

Revision history for this message
Attila Fazekas (afazekas) wrote :

Those are both the default and the configured values:

notify_nova_on_port_data_changes = True
notify_nova_on_port_status_changes = True

http://logs.openstack.org/09/108909/4/gate/check-tempest-dsvm-neutron-full/628138b/logs/etc/neutron/neutron.conf.txt.gz

Changed in neutron:
status: Incomplete → New
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

We are already aware of this race in the neutron full job.
We had not seen this bug and reported a different one: bug 1357599

I will dig a bit more into it before declaring it a dupe... it would not be the first time a failure has multiple root causes.

Changed in neutron:
importance: Undecided → Medium
status: New → Confirmed
Revision history for this message
Hua Zhang (zhhuabj) wrote :

One possible reason is that applying iptables rules takes a long time when there are many VMs on the compute node; if so, this patch should help: https://review.openstack.org/#/c/115719/
So this may be a duplicate of https://bugs.launchpad.net/neutron/+bug/1352826

Aaron Rosen (arosen)
Changed in neutron:
assignee: nobody → Aaron Rosen (arosen)
Joe Gordon (jogo)
Changed in neutron:
importance: Medium → Critical
Revision history for this message
Joe Gordon (jogo) wrote :

Marked as critical because of 40 hits in elastic-recheck in 24 hours.

Changed in nova:
importance: Undecided → Critical
Aaron Rosen (arosen)
Changed in neutron:
assignee: Aaron Rosen (arosen) → nobody
Sean Dague (sdague)
Changed in nova:
status: New → Confirmed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/120842
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=9e856922e33b2e7d1c51098454ac85591a90df4f
Submitter: Jenkins
Branch: master

commit 9e856922e33b2e7d1c51098454ac85591a90df4f
Author: Dan Smith <email address hidden>
Date: Thu Sep 11 10:10:58 2014 -0700

    Improve logging of external events on the compute node

    We don't log every event we receive on the compute node, which could
    allow some to slip through and look like they were never received. This
    patch makes sure to debug log each one so we at least know that it made
    it from API to compute.

    Change-Id: I817248ea8167be21c55fecad3a39cb45b027596c
    Related-bug: #1357476

Joe Gordon (jogo)
Changed in nova:
milestone: none → juno-rc1
Revision history for this message
Michael Still (mikal) wrote :

Unless someone is actively working on this, we need to untarget it from rc1.

Revision history for this message
Matt Riedemann (mriedem) wrote :
Changed in nova:
milestone: juno-rc1 → none
Changed in neutron:
assignee: nobody → Oleg Bondarev (obondarev)
Changed in nova:
assignee: nobody → Oleg Bondarev (obondarev)
Revision history for this message
Boden R (boden) wrote :

I'm hitting this on my devstack all-in-one node using neutron. I've done some debugging, and it looks like a timing issue where nova "misses" the notification about the vif plug completion. Here are some notes on the flow.

------

FLOW (during instance boot):
- nova.compute.manager.ComputeManager._allocate_network() returns an async wrapper instance of network_model.NetworkInfoAsyncWrapper

- shortly thereafter, neutron finishes plugging the VIF and sends the notification to nova (a POST to os-server-external-events, per the log below), e.g.:
2014-09-29 08:43:48.130 DEBUG neutron.context [req-881fbe63-8722-4ce0-a068-5c3880db8294 None None] Arguments dropped when creating context
: {u'project_name': None, u'tenant': None} from (pid=5674) __init__ /opt/stack/neutron/neutron/context.py:83
2014-09-29 08:43:48.650 DEBUG neutron.notifiers.nova [-] Sending events: [{'status': 'completed', 'tag': u'7ffbe55f-f1bd-4742-8ed9-9b238c456555', 'name': 'network-vif-plugged', 'server_uuid': u'd9574d53-12b3-4a4c-b308-3dd522dffdee'}] from (pid=5674) send_events /opt/stack/neutron/neutron/notifiers/nova.py:220
2014-09-29 08:43:48.827 DEBUG urllib3.connectionpool [-] "POST /v2/5c2ed97c153f4e02988214fe2a25e753/os-server-external-events HTTP/1.1" 200 183 from (pid=5674) _make_request /usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
2014-09-29 08:43:48.828 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'7ffbe55f-f1bd-4742-8ed9-9b238c456555', u'name': u'network-vif-plugged', u'server_uuid': u'd9574d53-12b3-4a4c-b308-3dd522dffdee', u'code': 200}

- nova.compute.manager.ComputeManager calls driver.spawn() ... note the neutron vif-completed notification has already been sent at this point.
Here's a sample of the network_info passed into driver.spawn()... notice it's not active:
[VIF({"profile": {}, "ovs_interfaceid": "491f5df8-277d-4163-917f-2eba8b25ff36", "network": Network({"bridge": "br-int", "subnets": [Subnet({"ips": [FixedIP({"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.0.0.2"})], "version": 4, "meta": {"dhcp_server": "10.0.0.3"}, "dns": [], "routes": [], "cidr": "10.0.0.0/24", "gateway": IP({"meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1"})})], "meta": {"injected": False, "tenant_id": "15045704e84f4fb9be69a4aabc10905d"}, "id": "92aabdcd-4a9d-45ba-96d0-cd6b9c314434", "label": "private"}), "devname": "tap491f5df8-27", "vnic_type": "normal", "qbh_params": None, "meta": {}, "details": {"port_filter": True, "ovs_hybrid_plug": True}, "address": "fa:16:3e:92:e4:f7", "active": False, "type": "ovs", "id": "491f5df8-277d-4163-917f-2eba8b25ff36", "qbg_params": None})]

- spawn() calls _create_domain_and_network()

- _create_domain_and_network() calls _get_neutron_events()

- _get_neutron_events() returns events indicating the driver needs to wait for the network-vif-plugged notification; however, as mentioned above, that notification has already been sent, so we time out.
------
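
To make the race concrete, here's a minimal standalone sketch (illustrative only, not nova's code; the event name is just a label): if the notification fires before anyone has registered to wait for it, the later wait can only time out.

    # Minimal illustration of the lost-notification race (not nova code).
    import threading

    events = {}             # event name -> threading.Event
    lock = threading.Lock()

    def notify(name):
        # Naive notifier: only wakes waiters that already registered.
        with lock:
            evt = events.get(name)
        if evt:
            evt.set()
        # else: the notification is silently dropped

    def wait_for(name, timeout):
        with lock:
            evt = events.setdefault(name, threading.Event())
        return evt.wait(timeout)

    notify('network-vif-plugged')              # neutron's callback arrives early...
    print(wait_for('network-vif-plugged', 2))  # ...nova starts waiting too late: False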

I'm looking into a fix even though this one is assigned to Oleg, as it's blocking me... Note that in my scenario I'm passing a preallocated neutron port on the nova boot call.

Revision history for this message
Boden R (boden) wrote :

btw -- I am aware this issue can be worked around via CONF.vif_plugging_is_fatal
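
Concretely, that workaround goes in nova.conf; a minimal excerpt (option names as in nova's DEFAULT group). Note it only masks the symptom: nova will then proceed with the boot even if the event never arrives.

    [DEFAULT]
    # Don't raise VirtualInterfaceCreateException when the
    # network-vif-plugged event is not received in time.
    vif_plugging_is_fatal = False
    # Seconds to wait for neutron's vif plugging events; 0 disables waiting.
    vif_plugging_timeout = 300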

Revision history for this message
Boden R (boden) wrote :

Here's one fix that works, but I didn't have time to do more than a simple verification with it.

In libvirt.driver.LibvirtDriver._create_domain_and_network(), change the following snippet from:

        if (self._conn_supports_start_paused and
            utils.is_neutron() and not
            vifs_already_plugged and power_on and timeout):
            events = self._get_neutron_events(network_info)
        else:
            events = []

to:

        if (self._conn_supports_start_paused and
            utils.is_neutron() and not
            vifs_already_plugged and power_on and timeout
            and not instance.system_metadata['network_allocated']):
            events = self._get_neutron_events(network_info)
        else:
            events = []

This works because the async network allocation callback populates the instance.system_metadata['network_allocated'] value.
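
A side note on the guard itself: nova's system_metadata values are strings (the callback stores the string 'True'), so the truth test behaves like a presence check rather than a boolean check. A quick standalone illustration, assuming the string storage just described:

    # system_metadata stores strings, so "not sys_meta['network_allocated']"
    # is False for any non-empty value, and raises KeyError when unset.
    sys_meta = {'network_allocated': 'True'}
    print(not sys_meta['network_allocated'])   # False -> event waiting skipped
    sys_meta['network_allocated'] = 'False'
    print(not sys_meta['network_allocated'])   # also False: non-empty strings are truthy
    # sys_meta.get('network_allocated') would avoid the KeyError when the
    # async callback has not run yet.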

Revision history for this message
Boden R (boden) wrote :

Note that #12 above is just a "hint", as I have to run for the day... The system_metadata approach above obviously lacks additional synchronization between the neutron event waiting and the async callback's update of the metadata itself.
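
For comparison, the usual way to avoid losing an early notification is to register the waiter before kicking off the operation that produces the event; nova's virtapi.wait_for_instance_event context manager is built around the same idea. A minimal sketch of that prepare-then-wait pattern (illustrative, not nova's implementation):

    # Prepare-then-wait: register interest *before* triggering the async
    # operation, so an early notification is recorded instead of dropped.
    import threading

    class EventWaiter(object):
        def __init__(self):
            self._events = {}
            self._lock = threading.Lock()

        def prepare(self, name):
            # Call this before starting the operation that emits the event.
            with self._lock:
                self._events.setdefault(name, threading.Event())

        def notify(self, name):
            with self._lock:
                evt = self._events.setdefault(name, threading.Event())
            evt.set()   # recorded even if nobody is waiting yet

        def wait(self, name, timeout):
            with self._lock:
                evt = self._events[name]
            return evt.wait(timeout)

    w = EventWaiter()
    w.prepare('network-vif-plugged')         # register first...
    w.notify('network-vif-plugged')          # ..."early" event is not lost
    print(w.wait('network-vif-plugged', 2))  # True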

Revision history for this message
Oleg Bondarev (obondarev) wrote :

@boden, can you please describe how you are reproducing the issue on devstack?
I am able to reproduce it by launching 50-60 instances at the same time, but that may be a different issue.

Regarding your fix: indeed, the async network allocation callback sets instance.system_metadata['network_allocated'] to True, but this happens right after Neutron creates the port and sends the result to Nova. VIF plugging, however, is a separate async operation. So with your fix Nova simply doesn't wait for the vif plugging event from neutron, which is effectively the same as setting vif_plugging_is_fatal = False.

Revision history for this message
Boden R (boden) wrote :

@Oleg -- I'm using the master stream with a devstack all-in-one configuration (neutron networking); the error occurs with every server boot.

Revision history for this message
Boden R (boden) wrote :

As it turns out, my devstack setup accidentally had nova cells enabled, which meant it was using some of the cells_api integration (e.g. ComputeRPCAPIRedirect for the compute rpcapi). Once I disabled cells, I no longer hit the error discussed in this bug.

Changed in nova:
assignee: Oleg Bondarev (obondarev) → nobody
Changed in neutron:
assignee: Oleg Bondarev (obondarev) → nobody
Revision history for this message
Matt Riedemann (mriedem) wrote :

This might be fixed; no hits in 10 days.

Revision history for this message
Joe Gordon (jogo) wrote :

Since we haven't seen this in 10 days, closing the bug. If this is seen again feel free to re-open.

Changed in nova:
importance: Critical → Undecided
Changed in neutron:
status: Confirmed → Invalid
importance: Critical → Undecided
Changed in nova:
status: Confirmed → Invalid
Matt Riedemann (mriedem)
no longer affects: oslo.messaging
Revision history for this message
puppet-py (jbanerje) wrote :

I have hit this bug repeatedly when creating a new instance on a compute node on which 4 instances already exist.
Resource-wise, less than 10% of the hypervisor is used by the existing instances.
Any new instance creation fails at the same place, per the log trace. I have attached the log for my instance named 'AGAIN'.

Below is the log snippet:
=======================

2015-06-09 08:03:47.153 19692 WARNING nova.virt.libvirt.driver [-] Timeout waiting for vif plugging callback for instance b8c3a1dc-8780-4495-8a7e-3f03f14c8475
2015-06-09 08:03:47.363 19692 DEBUG nova.virt.libvirt.vif [-] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=True,config_drive='',created_at=2015-06-09T14:58:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='AGAIN',display_name='AGAIN',ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,host='f3-compute-2.noiro.lab',hostname='again',id=77,image_ref='82cd7cee-ade0-43b2-94ac-b7cb8c0c941b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,launch_index=0,launched_at=None,launched_on='f3-compute-2.noiro.lab',locked=False,locked_by=None,memory_mb=4096,metadata={},node='f3-compute-2.noiro.lab',numa_topology=None,os_type=None,pci_devices=PciDeviceList,power_state=0,progress=0,project_id='26acdadf476e49e78172ef6f0595f9a1',ramdisk_id='',reservation_id='r-51nqu02g',root_device_name='/dev/vda',root_gb=40,scheduled_at=None,security_groups=SecurityGroupList,shutdown_terminate=False,system_metadata={image_base_image_ref='82cd7cee-ade0-43b2-94ac-b7cb8c0c941b',image_container_format='bare',image_disk_format='qcow2',image_min_disk='40',image_min_ram='0',instance_type_ephemeral_gb='0',instance_type_flavorid='3',instance_type_id='1',instance_type_memory_mb='4096',instance_type_name='m1.medium',instance_type_root_gb='40',instance_type_rxtx_factor='1.0',instance_type_swap='0',instance_type_vcpu_weight=None,instance_type_vcpus='2',network_allocated='True'},task_state='spawning',terminated_at=None,updated_at=2015-06-09T14:58:44Z,user_data=None,user_id='3fd8184442334332be33cdfe5b57b1ae',uuid=b8c3a1dc-8780-4495-8a7e-3f03f14c8475,vcpus=2,vm_mode=None,vm_state='building') vif=VIF({'profile': {}, 'ovs_interfaceid': u'8d51f6bd-6a8b-43a0-9d86-b8384225eab5', 'network': Network({'bridge': 'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'5.5.5.7'})], 'version': 4, 'meta': {'dhcp_server': u'5.5.5.3'}, 'dns': [], 'routes': [Route({'interface': None, 'cidr': u'5.5.5.0/24', 'meta': {}, 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'5.5.5.1'})})], 'cidr': u'5.5.5.0/28', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'5.5.5.1'})})], 'meta': {'injected': False, 'tenant_id': u'26acdadf476e49e78172ef6f0595f9a1'}, 'id': u'4659e62b-f4fd-43ac-b18a-5ef30fac028e', 'label': u'l2p_demo_same_ptg_l2p_l3p_bd'}), 'devname': u'tap8d51f6bd-6a', 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 'details': {u'po...


Changed in nova:
status: Invalid → New
tags: added: network
Revision history for this message
puppet-py (jbanerje) wrote :

Forgot to mention that this bug persists in stable/juno.

Revision history for this message
Davanum Srinivas (DIMS) (dims-v) wrote :

Does this happen on master? Originally this was filed for logs we saw in CI jobs.

Changed in nova:
status: New → Incomplete
Revision history for this message
puppet-py (jbanerje) wrote :

Yes, this is Juno, branched off master.

Revision history for this message
puppet-py (jbanerje) wrote :

May I know why the bug state is Incomplete?

I have provided all the necessary logs, captured while invoking two different VMs at two different times.

Please provide your analysis of the logs if they have already been triaged.

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]

Changed in nova:
status: Incomplete → Expired