Nov 04 2010

High Availability MySQL Cookbook, the review

When I read on the internetz that Alex Davies was about to publish a Packt book on MySQL HA, I pinged my contacts at Packt and suggested that I review the book.

I've run into Alex at some UKUUG conferences before and he has a solid background in MySQL Cluster and other HA alternatives, so I was looking forward to reading the book.

Alex starts off with a couple of in-depth chapters on MySQL Cluster. He does mention that it's not a fit for all problems, but I'd hoped he'd done so a bit more prominently ... an upfront chapter outlining the different approaches and when each approach is a match would have been better. The avid reader might now be 80 pages into MySQL Cluster before he realizes it's not going to be a match for his problem.

I really loved the part where Alex correctly mentions that you should probably be using Puppet or similar to manage the config files of your environment, rather than scp'ing them around your different boxes ..

Alex then goes on to describe setting up MySQL replication and Multi-Master replication, with the different approaches one can take here. He gives some nice tips on using LVM snapshots to reduce the downtime of your MySQL server when having to transfer the dataset of an already existing MySQL setup, good stuff.
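That LVM trick is worth sketching. A minimal, hypothetical version (volume names, sizes and paths are made up for illustration): the read lock is held in a single client session while the snapshot is created, so the recorded binlog position matches the frozen data.

```shell
# Assumption: the MySQL datadir lives on logical volume /dev/vg0/mysql.
# The lock only needs to be held for the seconds lvcreate takes.
mysql <<'EOF' > master-status.txt
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
system lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# The master keeps serving traffic while we copy the frozen snapshot.
mount -o ro /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ newslave:/var/lib/mysql/
umount /mnt/snap
lvremove -f /dev/vg0/mysql-snap
```

The heredoc matters: FLUSH TABLES WITH READ LOCK is released as soon as the client disconnects, so running the lock and the snapshot in separate invocations would silently drop the lock before the snapshot is taken.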

He then goes on to describe MySQL with shared storage ... if you only mount your redundant SAN disk on one of your MySQL nodes at a time, my preference would probably be a Pacemaker stack rather than a Red Hat Cluster based setup, but his setup seems to work too. Alex quickly touches on using GFS to have your data disk mounted simultaneously on both nodes (keep in mind, with only 1 active mysqld) and then goes on to describe a full DRBD based MySQL HA setup.

The last chapter, titled Performance Tuning, gives some very nice tips on tuning both your regular storage and your GFS setup, but also the tuning parameters for MySQL Cluster.

I was also really happy to see the appendices on the basic installation, where he advocates the use of Cobbler, Kickstart and LVM ..

One of the better books I've read the past couple of years .. certainly the best book from Packt so far, I hope there is more quality stuff coming from that direction !

Nov 12 2009

Yet Another DNS Issue

While browsing through my enormous mailing list backlog I ran into the following message from Gianluca Cecchi on the drbd-user mailing list:


  From: Gianluca Cecchi
  To: drbd-user@lists.linbit.com
  Subject: [DRBD-user] notes on 8.3.2

  - drbdadm create-md r0 segfaults when the command "hostname" on the
  server contains the fully qualified domain name but you have put only
  the hostname part in drbd.conf
  Instead, the command "drbdadm dump" correctly gives you a warning in
  this case (suggesting how to correct the error you made....):

  suppose complete hostname is virtfed.domainname.com and you put
  virtfed alone in drbd.conf

  [root@virtfed ~]# drbdadm dump
  WARN: no normal resources defined for this host (virtfed.domainname.com)!?

  while

  [root@virtfed ~]# drbdadm create-md r0
  Segmentation fault

Guess I`ll have to give the Linbit crowd a T-Shirt when we next meet ;)
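For the record, the gotcha boils down to the `on <hostname>` sections in drbd.conf having to match the output of `hostname` / `uname -n` exactly. A hypothetical fragment matching the report above (device, disk and address are made up):

```
resource r0 {
  on virtfed.domainname.com {   # NOT just "virtfed" when hostname returns the FQDN
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  # ... second node's "on" section here ...
}
```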

Oct 19 2009

Nines, Damn Nines and More Nines

Funny how different experiences lead to different evaluations of tools. The MySQL HA solutions the MySQL Performance Blog lists are in almost the complete opposite order of what my impressions are.

Ok, agreed, I should probably not take my MySQL NDB experiences from 2-3 years ago, with multiple query-of-death bugs and more problems than I care to remember, into account anymore, but back then it would have gone into the list as less stable than a single node. I've had NDB POC setups go down for much more than 05:16 minutes.
NDB also comes with a lot of restrictions.

As for MySQL on DRBD, I've said this before: I love DRBD, but having to wait for a long InnoDB recovery after a failover just kills your uptime.
I remember being called by a customer during Fred last holiday who was waiting over 20 minutes for recovery, twice, so putting the DRBD/SAN setup second would not be my preference. But agreed .. it's only listed at 99.9%, meaning almost 9 hours of downtime per year are allowed.

On the other hand, we've seen database uptime of MySQL Multi-Master setups with Heartbeat reach better figures than 99.99%. Heck, I've seen single nodes achieve better than 99.99% :)

So what does this teach us ... there is no golden rule for HA. Lots of situations are different: it's the preferences of the customer, the size of the database, the kind of application, and much more .. you always need to think and evaluate the environment ...

Jul 01 2009

DRBD2, OCFS2, Unexplained crashes

I was trying to set up a dual-primary DRBD environment with a shared disk, using either OCFS2 or GFS. The environment is CentOS 5.3 with DRBD82 (but I also tried with DRBD83 from testing).
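For completeness: dual-primary mode in DRBD 8.x has to be explicitly enabled in the resource's net section, and needs synchronous replication. A fragment along these lines (resource name is just illustrative):

```
resource r0 {
  protocol C;              # synchronous replication, required for dual-primary
  net {
    allow-two-primaries;   # let both nodes be Primary at the same time
  }
  # ... on-host sections as usual ...
}
```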

Setting up a single primary disk and running bonnie++ on it worked. Setting up a dual-primary disk, only mounting it on one node (ext3) and running bonnie++ worked too.

When setting up OCFS2 on the /dev/drbd0 disk and mounting it on both nodes, basic functionality seemed in place, but usually less than 5-10 minutes after I started bonnie++ as a test on one of the nodes, both nodes power cycled with no errors in the logfiles, just a crash.

When at the console at the time of the crash, it looks like a disk IO block happens (you can type, but nothing happens) and then a reboot; no panics, no oops, nothing (sysctl panic values were set to timeouts etc.).
Setting up a dual-primary disk with OCFS2 but only mounting it on one node and starting bonnie++ causes only that node to crash.

On the DRBD level I got the following error when that node disappeared:

  drbd0: PingAck did not arrive in time.
  drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure )
  pdsk( UpToDate -> DUnknown )
  drbd0: asender terminated
  drbd0: Terminating asender thread

That however is an expected error because of the reboot.

At first I assumed OCFS2 to be the root of this problem .. so I moved forward and set up an iSCSI target on a 3rd node, and used that device with the same OCFS2 setup. There, no crashes occurred and bonnie++ flawlessly completed its test run.

So my attention went back to the combination of DRBD and OCFS2.
I tried both DRBD 8.2 (drbd82-8.2.6-1.el5.centos, kmod-drbd82-8.2.6-2) and the 8.3 variant from CentOS Testing.

At first I was trying with the ocfs2 1.4.1-1.el5.i386.rpm version, but upgrading to 1.4.2-1.el5.i386.rpm didn't change the behaviour.

Both the DRBD and the OCFS2 mailing lists were fairly supportive, pointing out that it was probably OCFS2 fencing both hosts after missing the heartbeat, and suggesting I increase the heartbeat dead threshold values.

I however wanted to confirm that. As I got no entries in syslog, I attached a Cyclades, err, Avocent terminal server to the device in the hope that I'd capture the last kernel messages there ... no such luck either.

On the OCFS2 mailing list people pointed out that I could use netconsole to catch the logs on a remote node.
I set up netconsole using:

  modprobe netconsole netconsole="@/,@"
  sysctl -w kernel.printk="7 4 1 7"
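The `@/,@` shorthand lets the module pick defaults for everything; spelled out, the parameter is `src-port@src-ip/dev,dst-port@dst-ip/dst-mac`. An explicit version (all addresses below are made up):

```shell
# Send kernel messages from 10.0.0.1 (eth0) to a listener on 10.0.0.2:6666.
modprobe netconsole netconsole=6665@10.0.0.1/eth0,6666@10.0.0.2/00:11:22:33:44:55
# Raise the console loglevel so all kernel messages get sent.
sysctl -w kernel.printk="7 4 1 7"

# On the receiving host:
nc -l -u -p 6666
```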

After which I indeed caught the error on my remote host:

  [base-root@CCMT-A ~]# nc -l -u -p 6666
  (8,0):o2hb_write_timeout:166 ERROR: Heartbeat write timeout to device
  drbd0 after 478000 milliseconds
  (8,0):o2hb_stop_all_regions:1873 ERROR: stopping heartbeat on all active
  regions.
  ocfs2 is very sorry to be fencing this system by restarting

One would think it would output over the serial console before it logs over the network :) It doesn't.

Next step is that I`ll start fiddling some more with the timeout values :)
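On EL5 the relevant knob lives in /etc/sysconfig/o2cb. If I remember the formula right, the effective write timeout is roughly (threshold - 1) * 2 seconds, which would make the 478000 ms in the log above correspond to a threshold of 240; treat the numbers below as an assumption to verify against the o2cb docs, not a recommendation.

```shell
# /etc/sysconfig/o2cb — raise the heartbeat dead threshold
# (default 31, roughly 60s) so a briefly stalled DRBD device
# doesn't immediately trigger self-fencing.
O2CB_HEARTBEAT_THRESHOLD=61   # roughly 120 seconds

# Then restart the cluster stack:
service o2cb restart
```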

Apr 28 2008

MySQL and DRBD, Often say NO :)

Florian is replying to James on the subject of using DRBD for MySQL HA, a discussion started earlier by Eric. Florian is refuting most of the arguments that James has against using MySQL and DRBD together.

I`m also saying NO to MySQL and DRBD in most of the cases .. but not for any of the reasons James mentions.

I must say upfront: I love DRBD and I have been using it in production for a long time, but not for MySQL HA.

The problem with using MySQL on DRBD is the same problem you have when killing the power on a standalone MySQL machine and rebooting it.
DRBD saves you the time of powering up your machine and OS, but MySQL still needs to be started again on the standby machine (and in some cases you might have a lengthy startup process due to e.g. InnoDB consistency checks). For lots of organisations this (even limited) downtime is not acceptable.
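The length of that InnoDB recovery is largely driven by how much redo log there is to replay, so the classic tuning trade-off applies here; a my.cnf sketch (values are illustrative, not recommendations):

```
# my.cnf — bigger redo logs improve steady-state write throughput,
# but lengthen crash recovery after a failover; smaller logs
# recover faster at the cost of more checkpoint activity.
[mysqld]
innodb_log_file_size      = 256M
innodb_log_files_in_group = 2
```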

Both MySQL Cluster and Multi-Master replication give you constant access to your data on more nodes.
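A Multi-Master pair typically interleaves auto-increment values so both nodes can take writes without key collisions; a minimal my.cnf sketch for a hypothetical two-node setup (server-ids are arbitrary):

```
# my.cnf on node A — node B is identical except
# server-id = 2 and auto_increment_offset = 2.
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2   # step by the number of masters
auto_increment_offset    = 1   # this node's slot: 1, 3, 5, ...
```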

For lots of shops, those not needing to scale, those that can live with a limited downtime, DRBD and MySQL is a good match.

But if you want to achieve real high availability, as opposed to just less downtime, or if you are looking to scale your MySQL setup and want to benefit from HA while you are at it, then Multi-Master is probably the preferred alternative to DRBD.

In the meanwhile I`ll be happy serving other data from my DRBD volumes ;)