mysql charm blows up on out of memory error

Bug #1294334 reported by Brian Wawok
This bug affects 5 people
Affects Status Importance Assigned to Milestone
mysql (Juju Charms Collection)
Confirmed
Undecided
Unassigned

Bug Description

I was following the juju tutorial for a mysql / wordpress relationship.

I am deploying to a local environment on a machine with 24GB of ram.

MySQL would not start; the start hook failed. After some troubleshooting, I saw that my innodb_buffer_pool_size was set to 20 GB. I reduced it to 1 GB, restarted mysql, and it worked.

I have no idea how this parameter is set, but I assume it defaults to roughly 75% of available RAM. That is fine on a cloud VM, but pretty bad on a local instance. Maybe cap it at 4 GB unless a config value is passed in?
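For anyone hitting the same thing, the workaround amounts to overriding the value in the MySQL config and restarting (the config path is assumed here; on Ubuntu it is typically /etc/mysql/my.cnf, but your install may differ):

```ini
# /etc/mysql/my.cnf (path assumed; adjust for your install)
[mysqld]
# Was auto-set to ~20 GB on my 24 GB host; 1 GB was plenty for the tutorial.
innodb_buffer_pool_size = 1G
```

followed by restarting the mysql service.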

Revision history for this message
Brian Wawok (bwawok) wrote :

The exact error I saw in my juju log was:

2014-03-18 19:32:27 INFO juju.worker.uniter uniter.go:348 running "start" hook
2014-03-18 19:32:27 INFO juju.worker.uniter context.go:255 HOOK mysql stop/waiting
2014-03-18 19:32:30 INFO juju.worker.uniter context.go:255 HOOK start: Job failed to start
2014-03-18 19:32:30 ERROR juju.worker.uniter uniter.go:350 hook failed: exit status 1

Revision history for this message
Charles Butler (lazypower) wrote :

This appears to be affecting multiple people developing using LXC - https://lists.ubuntu.com/archives/juju/2014-February/003421.html

This needs a bit more discussion on what to do, but I propose reducing the default innodb memory pool to a more manageable range until a proper fix lands.

Changed in mysql (Juju Charms Collection):
importance: Undecided → High
status: New → Triaged
Revision history for this message
Curtis Hovey (sinzui) wrote :

I wonder if this is a factor in the HP Cloud issue. The mysql charm works fine on AWS, Azure, and localhost when constrained to 2G, but it fails on HP Cloud. The charm does work when mem is increased to 4G.

Revision history for this message
Marco Ceppi (marcoceppi) wrote :

This is because dataset-size defaults to 80% of available memory, which is why you're seeing that huge innodb_buffer_pool_size. We have a maximum allotment for i386 architectures; we may just make that global. The problem is that some people may actually want that much innodb_buffer_pool_size (probably a bad idea, but still). A first step would be to update the MySQL README to note this caveat and how to work around it.
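To make the sizing concrete, here is a minimal sketch of the behaviour described above plus the proposed global cap. This is not the charm's actual code; the function name and cap value are illustrative:

```python
def buffer_pool_bytes(total_ram_bytes, dataset_size=None, cap_bytes=4 * 1024**3):
    """Sketch of the sizing rule: an explicit dataset-size wins;
    otherwise default to 80% of RAM, clamped to a proposed global cap."""
    if dataset_size is not None:
        return dataset_size
    return min(int(total_ram_bytes * 0.8), cap_bytes)

# On the reporter's 24 GB machine the 80% default would ask for ~19.2 GB,
# which a 4 GB global cap would clamp down to 4 GB.
print(buffer_pool_bytes(24 * 1024**3))
```

An operator who genuinely wants a larger pool could still set dataset-size explicitly via the charm config, which would bypass the default.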

Revision history for this message
Charles Butler (lazypower) wrote :

Probably not a bad idea: stuffing it in the Caveats/Known Issues section with specific callouts for what we see per environment. I +1 this idea. I don't know that there's much else we can do, as there's no reliable way to determine which substrate we are running on.

Changed in mysql (Juju Charms Collection):
status: Triaged → Confirmed
importance: High → Undecided
