
Mar 04 2011

24 hours of Puppet Drama

Over the past couple of days I've been fighting a weird Puppet problem. We eventually cracked it, but I promised a bunch of you I'd fully explain it here ;)

So we were deploying two blade chassis at a pretty remote location, with a mix of physical and virtual machines, some 48 instances in total. This was a pretty standard rollout; we've got a bunch of similar platforms in our lab, so we knew about a couple of glitches, what to expect, etc.

I was just keeping an eye on the deployment, looking at the logs to see if things were running fine, when suddenly a couple of Puppet runs didn't come through. We had seen such behaviour before; usually it's a matter of running them again a couple of times and they will come through. (Upgrading Ruby and putting Passenger in front of Puppet actually solved those issues. We had even built a loop into the platform that runs Puppet a couple of times until it returns the correct exit code, just to make sure.)
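(For the curious: that boot-time loop was essentially a retry wrapper around the agent run. A minimal sketch, assuming Puppet 2.6's --detailed-exitcodes, where 0 means "no changes" and 2 means "changes applied"; the retry cap and the sleep are placeholder values, not our exact numbers:)

  #!/bin/bash
  # Retry the puppet run at boot until it reports a successful run.
  # With --detailed-exitcodes: 0 = no changes, 2 = changes applied,
  # anything else = a failure worth retrying.
  for attempt in 1 2 3 4 5; do
      puppet agent --test --detailed-exitcodes
      rc=$?
      if [ $rc -eq 0 ] || [ $rc -eq 2 ]; then
          break
      fi
      sleep 30
  done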

We first rebuilt the A chain of our setup from scratch, so that in the event of failure we could still bring up the B chain of the platform and be up and running again. Machines were actually coming up... slowly... some of them took a bit longer. One machine's clock was seriously off and SSL was barfing on it, so we set the BIOS clock and restarted. It was the machine hosting 6 VMs, so it took a while, but everything was back on schedule. Then suddenly things were going down fast: more and more Puppet runs started failing, and at some point in time none of our Puppet runs were working anymore.
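(Clock skew is a classic SSL killer: if the agent's clock falls outside its certificate's validity window, the handshake fails. A quick way to check, assuming the default agent certificate location under /var/lib/puppet:)

  # Compare the agent's clock against its certificate's validity window.
  date -u
  openssl x509 -noout -dates -in /var/lib/puppet/ssl/certs/$(hostname -f).pem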
I could see the puppetmaster perfectly compile its catalog:

  notice: Compiled catalog for ctl-0-a

Then the client... not wanting to get it:

  Mar 1 11:10:45 ctl-0-a puppet-agent[3674]: Not using expired catalog for ctl-0-a from cache; expired at Tue Mar 01 09:50:06 +0000 2011
  Mar 1 11:10:45 ctl-0-a puppet-agent[3674]: Using cached catalog
  Mar 1 11:10:45 ctl-0-a puppet-agent[3674]: Could not retrieve catalog; skipping run

We had gone from about 60% of our freshly deployed boxen working fine to not a single one.
So what do you do? Indeed... you turn on debugging.
You put both your puppetmaster and client in debug mode. Nothing. No errors, no nothing...
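(For reference, this is roughly how you run both sides in the foreground with debug output on Puppet 2.6:)

  # On the master: stay in the foreground and log everything.
  puppet master --no-daemonize --verbose --debug

  # On a client: a single foreground run with debug output.
  puppet agent --test --debug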

I asked some colleagues, asked on IRC... plenty of ideas, but none of them actually cracked the problem. So I did what I knew had solved similar problems before:

I switched our serialization format from YAML to PSON, and back. No luck.
I upgraded Ruby to a version from the glei.ch repository. No luck.
I upgraded our Puppet 2.6 to a version from the TMZ EPEL repo. We cleaned out the SSL certificates on all sides multiple times, cleaned out /var/lib/puppet, uninstalled Puppet and reinstalled it.
It wasn't a DNS problem.
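(For the record, "cleaning out the certificates" went roughly like this; ctl-0-a stands in for each node name, and the --waitforcert value is arbitrary:)

  # On the master: revoke and remove the node's old certificate.
  puppet cert --clean ctl-0-a

  # On the client: wipe the local SSL state and request a fresh cert.
  rm -rf /var/lib/puppet/ssl
  puppet agent --test --waitforcert 60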

I had started stripping my manifests down to empty runs, which worked, then started uncommenting the actual manifests again... Then, in the middle of the debugging, our VPN connection to the remote location broke down; we'd only be getting it back in the morning, about 12 hours later. Not fun. Murphy, obviously...

So the next morning we dived right back in... making those manifests bigger again, removing all the stages. One or two successful runs, then with the same config... back to failure. On and off, successful and unsuccessful... it wasn't in the manifests.

So we decided to roll the puppetmaster back to its previous version; that one was known to be stable, and there obviously was something really fishy going on, so that was the safest bet.

Wrong. The machine came up, but it took longer than expected, and when we tried to connect new clients to it... nothing worked anymore. Same problem as before: the puppetmaster compiled its catalog, the clients didn't get anything. We started to suspect faulty hardware... but how could that be? The Puppet client looked like the only malfunctioning thing around.

Then Dim0 suggested I look at the one logfile I hadn't looked at yet, /var/log/puppet/masterhttp.log, and then we saw it: it was being flooded with SSL errors, SSL errors from clients that shouldn't even have been connecting to the puppetmaster at all.

  [2011-03-02 13:32:00] ERROR OpenSSL::SSL::SSLError: tlsv1 alert decrypt error
  /usr/lib/ruby/site_ruby/1.8/puppet/network/http/webrick.rb:44:in `accept'
  /usr/lib/ruby/site_ruby/1.8/puppet/network/http/webrick.rb:44:in `listen'
  /usr/lib/ruby/1.8/webrick/server.rb:173:in `call'
  /usr/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
  /usr/lib/ruby/1.8/webrick/server.rb:162:in `start'
  /usr/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
  /usr/lib/ruby/1.8/webrick/server.rb:95:in `start'
  /usr/lib/ruby/1.8/webrick/server.rb:92:in `each'
  /usr/lib/ruby/1.8/webrick/server.rb:92:in `start'
  /usr/lib/ruby/1.8/webrick/server.rb:23:in `start'
  /usr/lib/ruby/1.8/webrick/server.rb:82:in `start'
  /usr/lib/ruby/site_ruby/1.8/puppet/network/http/webrick.rb:42:in `listen'
  /usr/lib/ruby/site_ruby/1.8/puppet/network/http/webrick.rb:41:in `initialize'
  /usr/lib/ruby/site_ruby/1.8/puppet/network/http/webrick.rb:41:in `new'
  /usr/lib/ruby/site_ruby/1.8/puppet/network/http/webrick.rb:41:in `listen'
  /usr/lib/ruby/1.8/thread.rb:135:in `synchronize'
  /usr/lib/ruby/site_ruby/1.8/puppet/network/http/webrick.rb:38:in `listen'
  /usr/lib/ruby/site_ruby/1.8/puppet/network/server.rb:127:in `listen'
  /usr/lib/ruby/site_ruby/1.8/puppet/network/server.rb:142:in `start'
  /usr/lib/ruby/site_ruby/1.8/puppet/daemon.rb:124:in `start'
  /usr/lib/ruby/site_ruby/1.8/puppet/application/master.rb:114:in `main'
  /usr/lib/ruby/site_ruby/1.8/puppet/application/master.rb:46:in `run_command'
  /usr/lib/ruby/site_ruby/1.8/puppet/application.rb:287:in `run'
  /usr/lib/ruby/site_ruby/1.8/puppet/application.rb:393:in `exit_on_fail'
  /usr/lib/ruby/site_ruby/1.8/puppet/application.rb:287:in `run'
  /usr/sbin/puppetmasterd:4
  [2011-03-02 13:32:00] ERROR OpenSSL::SSL::SSLError: tlsv1 alert decrypt error

What happened was that 'we' had decided to bring one of the backup machines back online; after all, once the slow-starting server came through it would be the passive node in the cluster, so no worries there, right? Wrong.
This physical machine hosted 6 virtual machines with old SSL certificates, stuck in the very loop we had put there to make sure their Puppet run came through correctly at boot time.

Those 7 rogue clients, which generated little to no relevant traffic on the network, were saturating the default WEBrick server: the stock WEBrick-based puppetmaster runs in a single Ruby process, so a handful of clients hammering it with failing SSL handshakes is enough to starve everyone else. Killing them solved the problem, and we were back to regular deployment in no time.

The sad part is that our upcoming release already has Passenger, a fresher version of Ruby, etc., so most of the above-mentioned errors won't occur there anymore.
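(If you're still on WEBrick: the usual escape route on Puppet 2.6 is to run the master as a Rack application under Apache and Passenger. A minimal sketch of the vhost, assuming the config.ru shipped in Puppet's ext/rack directory; the paths and certificate filenames depend on your distribution and your master's certname:)

  # Passenger vhost for the puppetmaster, e.g. /etc/httpd/conf.d/puppetmaster.conf
  Listen 8140
  <VirtualHost *:8140>
      SSLEngine             on
      SSLCertificateFile    /var/lib/puppet/ssl/certs/puppetmaster.example.com.pem
      SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppetmaster.example.com.pem
      SSLCACertificateFile  /var/lib/puppet/ssl/ca/ca_crt.pem
      SSLVerifyClient       optional
      SSLOptions            +StdEnvVars

      # Rack application directory holding Puppet's config.ru
      DocumentRoot /usr/share/puppet/rack/puppetmasterd/public
      RackBaseURI  /
  </VirtualHost>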
But in short... don't use the default WEBrick... it will kill you :)

And no, not everything is a freaking DNS problem; SSL is a big pain in the B too... :)