Kris Buytaert's blog

Dec 22 2013

FOSDEM 2014 is coming

and with that almost a full week of side events.
For those who don't know FOSDEM (where have you been hiding for the past 13 years?), FOSDEM is the annual Free and Open Source Developers European Meeting. If you are into open source, you just can't miss this event, where thousands of like-minded people will meet.

And if 2 days of FOSDEM madness isn't enough, people organise events around it.

Last year I organised PuppetCamp in Gent the days before FOSDEM, and a MonitoringLove Hackfest in our office the 2 days after FOSDEM. This year another marathon is planned.

On Friday (31/1/2014) the CentOS community is hosting a Dojo in Brussels at the IBM Forum. (Free, but registration required by the venue)

After the success of PuppetCamp in Gent last year, we decided to open up the discussion and get more Infrastructure as Code people involved in a broader event: CfgMgmtCamp.

The keynotes for CfgMgmtCamp will include the leaders of the 3 most popular tools around: Mark Burgess, Luke Kanies and Adam Jacob will all present at the event, which will take place in Gent right after FOSDEM. We expect people from all the major communities including, but not limited to, Ansible, Salt, Chef, Puppet, CFEngine, Rudder, Foreman and Juju. (Free, but registration required for catering)

And because 3 events in one week isn't enough, the Red Hat community is hosting their conference after CfgMgmtCamp at the same venue. (Free, but registration required for catering)

cya in Belgium next year..

Dec 12 2013

IPv4 Shortage can make you a billionaire

A couple of weeks ago I had this mail conversation...

I just had to share it ..

To:  Kris Buytaert
Subject:  IPaddress Requirement
Date:  Tue, 05 Nov 2013 11:50:41 -0600 (11/05/2013 06:50:41 PM)

Hi Kris,

Thanks for your response. Please share your contact details including
skype ID so we can discuss further.


On 05.11.2013 08:26, Kris Buytaert wrote:
> On Mon, 2013-11-04 at 04:28 +0000, Sales wrote:
>> Hello,
>> We are looking for IP addresses as we are growing Internet Solutions
>> Provider.
>> We are looking for Ip addresses(Ipv4) anywhere from /22 to /16 Ipv4
>> to
>> host it. We look forward for your response to discuss further on the
>> pricing terms to take this forward.
>> If you are not the concerned person to discuss about this, please
>> forward it to the appropriate department of your company .
>> Please get back to us as soon as possible.
>> Regards,
> Hi there,
> we can offer you a whole range of IP addresses..
> We can provide you with , , and
> Please send us your best offer.
> greetings
> Kris Buytaert

Nov 27 2013

Docker vs Reality , 0 - 1

(aka the opinionated summary of the #devopsdays London November OpenSpace on Containers and the new flood of Image Sprawl)

There's a bunch of people out there who think I don't like Docker; they are wrong.

I just never understood the hype around it, since I didn't (and still don't) see it being used at scale, and people seem to interpret that as being against it.

So let me put a couple of things straight :

There's absolutely nothing wrong with using a container-based approach when deploying your infrastructure. If you remember my talks about the rise of Open Source Virtualization some years ago, you'll have noticed that I've always mentioned OpenVZ and friends as good alternatives if you wanted to have a lot of isolated platforms on one machine. LXC and friends have grown; they are even more usable these days. Years ago people bought bare metal and ran hypervisors on it to isolate resources. These days people rent VMs and want the same functionality, so the combination of virtualization and container-based technologies is a very good match there.

There's also nothing wrong with using Infrastructure as Code tools to build a reproducible image you are going to deploy. That gives you a disposable image which allows you to quickly launch a reproducible and versioned platform for your application, if that application is supposed to be short-lived. The tooling around today is not yet there to make these images long-lived, as you still need to manage the config inside the containers: your application will evolve, it will change, your environment will change (think even about changing to a different loghost...), but when you don't have to keep state you can dispose of the image and redeploy a new reproducible one.

In the embedded world this kind of approach, with multiple banks, has been around for a while: one image running, a second bank as a fallback, and when you upgrade the passive bank you can swap the roles and still have a rollback.

There is also nothing wrong with combining these two approaches, using tools such as Docker and Packer.
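As a minimal sketch of what that combination can look like: a Packer template that uses the Docker builder to provision and commit an image (the base image and the installed package here are just placeholders):

```json
{
  "builders": [
    { "type": "docker", "image": "ubuntu:12.04", "commit": true }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["apt-get update", "apt-get -y install apache2"] }
  ]
}
```

Run through `packer build`, this gives you a versioned, reproducible image recipe rather than a hand-crafted appliance.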

But there is a lot wrong with building images that then start living a life of their own. Tools like Veewee saw the light to create an easy way to make sure the JeOS image (Just Enough Operating System) we created was reproducible, not to ship around virtual appliances.

But, let's be realistic, the number of applications that are suitable for this kind of environment is small. Most applications these days are still very stateful, and when your application contains state you need to manage that state; you can't just dispose of an image which has state. Especially in an Enterprise environment, stateless, immutable applications are really the exception rather than the rule.

When your application maps to stateless and short-lived, or as some people like to call it, Immutable, please do so.. but if it doesn't, please remember that we started using configuration management tools like CFEngine, Puppet and Chef to prevent Image Sprawl and Config Drift.
There are proprietary businesses out there building tools to detect config drift and extort organisations to solve problems that shouldn't have existed in the first place.

Luckily the majority of smart people I've spoken to over the past couple of weeks pretty much confirmed this...
Like one of the larger devops-minded application hosting outsourcers in EMEA: I asked them what percentage of their customer base they could call "Immutable", and exactly 0% was the answer.

Image-based container solutions are definitely not a one-size-fits-all solution, and we have a long way to go before we get there, if at all..

Till then I prefer not to diffuse my attention over too many different ways of deploying platforms, just to not make stuff more complex than it already is: complexity is the enemy of reliability.

Jul 28 2013

Robomow vs iRobot

(aka the follow-up to the earlier Robomow post)
Lennert was really helpful in dropping by and providing us with some extra isolators for the cable... the RoboMow is now doing its work nicely again.

Yet we ran into another hiccup. For some reason the Robomow stopped. The beeping sound indicated yet another cable cut.. but I couldn't find it... until I opened the docking station... where I saw the cable had come loose. After fixing that... everything started working fine again..

Now somehow the docking station of the RoboMow doesn't close well anymore; it really is a hassle to pull it tight.. you need to lift it a bit and pull in the lid from the inside... with the risk of needing to realign the docking station with the cable again so the mower can find its way home.

Also, last week I got a new battery for our Roomba, and some things dawned on me. The Roomba does not need a cable to find its way. It uses lighthouses to create borders, aka virtual walls, or detects obstacles it runs into and backs up. Much easier to install than a cable in your garden. Also, a virtual wall can't be cut by a knife.

The Roomba is also capable of finding its base station with no cables. It's actually pretty good at that: if you tell it to dock, it goes straight to its target.. no need to first find a cable to follow back home.

The Roomba we have is almost 4 years old, so it's not like this is bleeding edge technology. So it makes you wonder why a RoboMow needs a cable at all..

Also, putting a sensor on the robot itself to detect when it leaves the mowing area would make life a lot easier. When you read this you might think I'm not really satisfied with the Robomow; on the contrary..

Like a lot of technologies, it takes a while to settle into your environment and to tune it so it fits your needs better. RoboMow is no different.. once you have the layout of your cable solved (preferably dug in) it works awesome.. it provides us with some free time we didn't have before.. It's just that for a 4-week test you don't want to go through the trouble of actually digging it in.. especially since we opted to not install it on the front lawn yet and it will need rerouting then anyhow.

As you might notice, this post is a couple of weeks late... and obviously you want to know the final verdict.

Well.. as you might have figured.. the Robomow is still happily mowing my lawn.. we're figuring out whether we are going to redesign our garden before adding the second area in the front, but fundamentally it's a great device that helps us a lot and that we couldn't live without anymore.

Jul 10 2013

Open Business Models

When I started writing this I wrote "Last week Opscode came"; obviously now that is "A couple of months ago Opscode came" with a bunch of announcements... one of them being that they are also going to support the Open Source Chef, rather than only their own platform.

I'd love to see more companies formally do this.. Over the past couple of years I've had numerous situations where organizations were happy to pay for support to a commercial backer of Open Source software... but they were not interested in software updates, fancy dashboards or unneeded features.

Let alone being limited by some of the features of the Enterprise product. (What do you mean there's no VLAN support in Xen? We've been using that for ages, anno 2008.)

Even right now I'm talking with a customer that is interested in getting commercial support for an open source project, but he feels that by choosing the Enterprise version of the software he will be limiting his options...

We've had these kinds of situations with MySQL, Xen, KnowledgeTree and others..

The sad story is that with the growth of Open Source adoption, lots of companies are finding their commercial talent in the pool of people that used to work for the proprietary vendors: the kind of sales people that don't get Open Source (aside from some exceptions) and are still trying to hard-sell a product based on spec sheets and feature roadmaps. Most quality open source software is built by people to solve problems, yet those new sales people keep doing their old job selling products while not listening to their actual customers' needs.

I've seen this escalate up to the point where people that are willing to support the Open Source project by paying a vendor for support don't do so because it's not offered in the right form for them, eventually leading to even less revenue for the said vendor.

Yes, I know that supporting a multitude of distributions, library combinations and architectures is a complex thing to do, and a lot of the proprietary vendors ruined the market by inventing something like certified platforms on which they supported their software.

But if you as an open source software company are really interested in improving your product, why wouldn't you take money from a customer that wants to pay for bugs to be fixed or features to be implemented in your product?
You've already realized that the software industry is different from 10 years ago and that Open Source is here to stay.. yet you are still thinking in the sales model, with products and spec sheets, of that era.

Jul 10 2013

Using broken development frameworks, or why we don't use Zurmo

People often wonder why DBAs used to hate developers, and along with the DBAs, also the System Engineers.
(Note that I just expanded devops by adding DBAs to the picture..)

So let me tell you a story ..

A couple of weeks ago one of our customers wanted to start experimenting with a new type of CRM. A gamified CRM.
Zurmo ...

So we set this thing up in a dev environment and started playing with it. While at first it looks nice,
the application actually felt pretty slow.. however, given that it was a low-resource development environment, we looked no further.

Yet the next step was that we ran into missing features, such as the fact that every contact you create is by default
set to private.. which really isn't productive for a CRM system where you want to be able to follow up on different
customers and share information.

So we tried figuring out what the database changes to do this in bulk would mean; surely it had to be a flag on the contact record.
Wrong. Zurmo uses an ORM for its database connectivity, and its data model isn't really trivial.

So we decided to look at the MySQL log file to figure out what db changes happened when updating the record.
Yes, there are better approaches, but this one taught us a lot..
The procedure I followed was: point my browser to the page where I wanted to switch the checkbox,
log on to the MySQL box and turn the general query log on, click the checkbox, and turn the log off again.

This gave me a log file with all the database actions required to make that one single change.
I had to cross-check a number of times... the file created by this short and small action was
about 70K.
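Toggling the general query log on the MySQL box, as described above, looks roughly like this (the log file path is just an illustration):

```sql
-- Write the general query log to a file (path is illustrative)
SET GLOBAL general_log_file = '/tmp/zurmo-click.log';
-- Log every statement the server receives from now on
SET GLOBAL general_log = 'ON';

-- ... perform the single checkbox change in the browser ...

-- Stop logging again
SET GLOBAL general_log = 'OFF';
```

Every statement the application fired in between ends up in that file (note this needs the SUPER privilege on the server).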

Puzzled, you start looking at the queries...
The query list was full of "SELECT * FROM" stanzas..
A whopping 70K of queries that make your hair turn grey...

I figured I'd file a bug.. but I couldn't find a bugtracker for Zurmo, only a forum (and forums are the most broken form of communication imvho), yet the developers responded on Twitter.

The feedback wasn't really satisfying, so we quickly decided that supporting this application was not something we would like to do..
and abandoned it..

The real question is who needs a Gamified CRM anyhow...

PS. While finishing up this article on a late evening this week I might not have made it clear enough that the generated logfile was 70Kb.. I fear some people misunderstood that it generated 70,000 queries. Obviously a huge difference. But still, the log file shouldn't have been bigger than 1Kb; there should have been 2-3 queries max.

But imvho, if the size of the queries you are generating is bigger than the page you are generating, you are most often doing it wrong.