rpm

Jul 28 2015

The power of packaging software, package all the things

Software delivery is hard: plenty of people all over this planet are struggling with delivering software in their own controlled environment. They have invented great patterns that will build an artifact, then do some magic, and the application is up and running.

When talking about continuous delivery, people invariably discuss their delivery pipeline and the different components that need to be in that pipeline.
Often, the focus on getting the application deployed or upgraded from that pipeline is so strong that teams
forget how to deploy their environment from scratch.

After running a number of tests on the code and compiling it where needed, people want to move forward quickly and deploy their release artifact on an actual platform.
This deployment typically happens via a file upload or a checkout from a source-control tool onto the dedicated computer on which the application resides.
Sometimes, dedicated tools are integrated to simulate what a developer would do manually on a computer to get the application running: copy three files left, one right, and make sure you restart the service. Although this is obviously already a large improvement over people manually pasting commands from a 42-page run book, it doesn't solve all problems.

Take the guy who quickly makes a change on the production server and never commits it (say goodbye to git pull for your upgrade process).
If you package your software, there are a couple of things you get for free from your packaging system.
Questions like "has this file been modified since I deployed it?", "where did this file come from?", "when was it deployed?" and "what version of software X do I have running on all my servers?" are easily answered by the same tools we already use for every other package on the system. Not only are these existing tools, they are also tools your ops team knows well and already uses for every other piece of software on your system.
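
Answering each of those questions is a one-liner. A quick sketch, using a hypothetical yaja package as an example (any installed rpm works the same way):

  # has any file of this package been modified since it was deployed?
  rpm -V yaja
  # where did this file come from?
  rpm -qf /usr/share/yaja/yaja.war
  # when was it deployed, and from which build?
  rpm -qi yaja
  # what version is installed on this server?
  rpm -q yaja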

If your build process creates a package and uploads it to a package repository that is available to the hosts in the environment you want to deploy to, there is no need anymore for
a script that copies the artifact from a 3rd-party location, and even less for that 42-page text document which never gets updated and still tells you to download yaja.3.1.9.war from a location where you can only find
3.2 and 3.1.8, while the developer who knows whether you can use 3.2, or why 3.1.9 got removed, just left for the long weekend.
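
With the package in a repository the hosts can reach, that whole document collapses into something like this (sticking with the fictional yaja package; exact version syntax differs between yum and apt):

  # see which versions the repository actually has
  yum --showduplicates list yaja
  # install the exact version you tested
  yum install yaja-3.1.9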

Another, and maybe even more important, point is the sadly growing practice of having yet another tool in place that translates that 42-page text document into a bunch of shell scripts created from a drag-and-drop interface; typically that "deploy tool" is even triggered from within the pipeline. Apart from the fact that it usually encourages non-reusable code, distributes even more ssh keys, or adds yet another agent on all your systems, it doesn't take into account that you want to think of your servers as cattle and be able to deploy new instances of your application fast.
Do you really want to deploy your five new nodes on AWS with a full Apache stack ready for production, then reconfigure your load balancers, only to figure out that someone needs to go click in your continuous integration or deployment tool to deploy the application to the new hosts? That one manual action someone forgets?
Imvho, deployment tools are a phase in the maturity process of a product team. Yes, they are a step up from manually deploying software, but they create different problems; once your team grows in maturity, refactoring that tool out is trivial.

The obvious and trivial approach to this problem, and it comes with even more benefits, is called packaging. When you package your artifacts as operating system (e.g., .deb or .rpm) packages,
you can include that package in the list of packages to be deployed at installation time (via Kickstart or debootstrap). Similarly, when your configuration management tool
(e.g., Puppet or Chef) provisions the computer, you can specify which version of the application you want to have deployed by default.
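
A minimal sketch of the config-management side, with a made-up yaja package and version (Puppet syntax, since Puppet is mentioned above):

  package { 'yaja':
    ensure => '3.1.9-1',
  }

And for fresh installs, the same package simply goes into the Kickstart %packages section:

  %packages
  yaja
  %end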

So, when you’re designing how you want to deploy your application, think about deploying new instances or deploying to existing setups (or rather, upgrading your application).
Doing so will make life so much easier when you want to deploy a new batch of servers.

Nov 01 2010

To Package, and what to package

One of the open sessions last month at Devopsdays 2010 Hamburg was the one on packaging software. It's always a big question whether you package the software that runs in your infrastructure or not. And if you package it .. what do you package ..

The general consensus of the open space was pretty much that you always package the software you deploy, unless you have some very good reasons not to. Pretty much the way I've been doing it for ages ..

Good reasons that were mentioned include the use of scripting languages that update extremely frequently; that excuse certainly doesn't hold for compiled code, and compiling code on a production machine is also a big no-no.

There also was a consensus that you DO NOT PUT CONFIGURATION inside a package. You can put in default templates, but you don't put in config files that should change frequently .. There are plenty of configuration mgmt systems out there that do that kind of stuff for you.
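
In rpm terms, that means marking the shipped default template as %config(noreplace) in the spec file, so an upgrade never clobbers a locally managed file. A hypothetical %files fragment (the myapp names are made up for illustration):

  %files
  /usr/share/myapp/myapp.war
  %config(noreplace) %{_sysconfdir}/myapp/myapp.conf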

The naysayers claimed that packaging brings way too much overhead ... and others claimed it takes too much time... However, I feel it
should just be a one-time effort that brings devs and ops closer to each other, and from there on it should be automated.
New versions of software don't mean that the packaging effort needs to be done again..

Another topic that gathered lots of questions was whether you should be capable of installing multiple versions of the same package. Lots of people mentioned they didn't like fiddling with symlinks; however, the best comment in that discussion was that there is already a system out there, the alternatives setup, provided by most operating systems, that allows you to do so in a pretty clean way. I must admit I should look into alternatives more in depth too ..
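
For the record, a minimal sketch of how alternatives works (names, paths, and priorities made up): you register each installed version with a priority and let alternatives manage the symlink:

  alternatives --install /usr/bin/myapp myapp /opt/myapp-1.0/bin/myapp 10
  alternatives --install /usr/bin/myapp myapp /opt/myapp-2.0/bin/myapp 20
  # pick the active version interactively
  alternatives --config myapp

(On Debian-based systems the command is update-alternatives.)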

The ever recurring question is whether one should package war files. Sure, as you can then also use the dependency model a package mgmt system brings to deploy the dependent libraries.
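
Declaring those dependencies is a couple of lines in the spec file; a hypothetical fragment for a war package that needs a servlet container (package names and versions invented for illustration):

  Name:     myapp
  Version:  1.0
  Release:  1%{?dist}
  Requires: tomcat5 >= 5.5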

However, when people ship products rather than a live service, they seem to package everything, mainly because the code in a product isn't changing as quickly as a live website or an internally used application.

The biggest problem, however, is the frustration people have with GEM or CPAN packages .. they add yet another layer of management to a system. Lots of CPAN packages are already packaged for distributions, but when it comes to GEMs, disaster strikes. There's a lot of work left for distributions to integrate GEM- and CPAN-style packages.

Feb 16 2010

Packaging Drupal Modules or not ?

So John wrote down his experiences deploying Drupal sites with Puppet.

It's not a secret that I've been thinking about similar stuff and how I could get to the best possible setup.

John starts off with using Puppet to download Drush... while I want to use rpm for that ...

I want my core infrastructure to be fully packaged... not downloaded and untarred. I want to be able to reproduce my platform in a couple of months, with the exact same versions I'm using now .. not with the version that happens to be on ftp.drupal.org at that point in time, or with ftp.drupal.org being down.

Now the next question, of course, is what counts as core infrastructure.
Where does the infrastructure end and where does the application start? There's little discussion about having a puppet-created vhost, an apache conf.d file, a matching .htaccess file if wanted, and the appropriate settings.php for a multisite drupal config.

There's also little doubt to me about using drush to run the updates, manage the drupal site, etc. Reading John's article made me think some more about what and when I want things packaged.

John's post led to a discussion on #infra-talk with Karan and some others about getting all drupal modules packaged for CentOS.

In a development environment I probably want periodic drush updates pulling the latest modules from the interwebs and potentially breaking my devs' code, making sure that when you put a site in production it will be on a fairly up-to-date platform, and not on the platform you started developing on 24 months ago.

In a production environment however you only want tested updates of your modules as indeed they will break code.

It's probably going to be a mix-and-match setup: having a local rpm/deb repo with packaged modules that have been tested and validated in your setup, and using drush to enable or configure them for that production setup.

But also having a CI environment where Drush will get the new modules from the interwebs when needed and package them for you.
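
The publishing side of that is simple enough; a minimal sketch, assuming the module rpms have already been built and tested (package names and paths made up):

  # drop the validated module rpms into the local repository
  cp drupal6-views-2.8-1.noarch.rpm /srv/repo/drupal/
  # regenerate the repository metadata
  createrepo /srv/repo/drupal/

After that, the production hosts just point yum at that repo.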

To me that sounds better than grabbing all the available Drupal modules and packaging them, even automated, and preparing a repository of modules of which only a small percentage will actually be used by people.

But I need to think about it some more :)

Feb 16 2010

To not yum or to not apt-get, that's NOT the question.

Over at the openark blog, Shlomi Noach argues that using apt-get or yum to install your MySQL instance will one day most likely break your MySQL setup. Dependencies, distros not shipping the MySQL version you want to use, and on some distros indeed the mysql vs MySQL issue; agreed, it all makes things less trivial.

However, why give up a clean packaged system if there are other ways out?

First of all, claiming that such an installation can break a working production environment looks to me like admitting you don't have split development and production environments, and that rather than testing stuff upfront you just hack along in production.

So rather than using a tarball for the MySQL instance and --force to satisfy the missing dependencies (hence also cluttering your system), a much cleaner and less error-prone setup is to only deploy from your own, self-controlled repository, in which you only allow tested packages, most probably not the distro-based ones, hence packages that won't break your setups ;) But you will still be using apt or yum and deploying rpms and debs, perfectly satisfying dependency needs.
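
Pointing a host at such a self-controlled repository is a tiny bit of configuration; a sketch with made-up names and URLs:

  # /etc/yum.repos.d/internal.repo
  [internal]
  name=Internal tested packages
  baseurl=http://repo.example.com/el5/x86_64/
  enabled=1
  gpgcheck=1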

Apart from that .. watch out for Banquise .. :) Coming to your favourite distro soon..

Jan 19 2010

F12 Dependency failure

Fresh laptop arrived; obviously the first thing to do is to install the latest Fedora, then do a full yum update.

However, that failed with the following broken dependency:

  mesa-libGL-7.7-2.fc12.i686 from updates has depsolving problems
  --> Missing Dependency: libdrm >= 2.4.17-1 is needed by package mesa-libGL-7.7-2.fc12.i686 (updates)
  Error: Missing Dependency: libdrm >= 2.4.17-1 is needed by package mesa-libGL-7.7-2.fc12.i686 (updates)
   You could try using --skip-broken to work around the problem
   You could try running: package-cleanup --problems
                          package-cleanup --dupes
                          rpm -Va --nofiles --nodigest

Now I don't really use all the fancy compiz stuff, so for now I can just solve it by running:

  [root@stillmine ~]# yum remove mesa-libGL

Jan 06 2010

Drupal6 in EPEL

Dear Drupal Community,

If any of you are interested in getting a packaged version of Drupal 6 into Fedora's EPEL repository (Extra Packages for Enterprise Linux), and therefore usable on RHEL and CentOS,
please comment on the bug I filed to get its introduction started.

Any pitfalls, benefits, etc. are welcome ..

thnx in advance !

Dec 20 2009

Packaging Djagios

After all the politics involved in getting a package into a distro, or not, it was time for a nice, small, and clean package of a fresh and promising open source project. Djagios was an easy choice.

I've uploaded the rpm and source RPM to repo.inuits.be, and getting the SPEC file into the upstream repo was 10 minutes' work.

Next step is to get it into Fedora and EPEL :)

Dec 20 2009

Packaging Drush

A couple of weeks ago I was once again manually installing Drush, as there were no packages for CentOS / EPEL or whatever, on top of applying the patch needed to get it running on a 5.1.X RHEL php.

I had found this thread on Drupal.org mentioning that a package already exists,
however David had not yet answered with the exact location.
So I created a drush package with the above-mentioned patch and sent it to Jon Ciesla; again he gave some surprising feedback ;)


Drush itself might need to be modified in Fedora. It seems
like one of the major functions of drush is to install and update
modules. That's great for modules we don't ship as rpms, but we can't
allow drush to modify modules that we ship.

This feedback pretty much leaves me with 3 options.

The first one is the easiest: I just forget about packaging drush for Fedora.

The second one would require me to patch Drush so that, for all existing drupal modules that have been packaged for Fedora, Drush calls yum to install them. This would obviously create a lot of work maintaining that exclude list.

The third one would be to disable the download functionality for Drush in a Fedora/RHEL environment; Jon suggested that this would probably be the safest path.

(Jon also suggested a fourth option, namely removing all drupal modules from Fedora and adding a prohibition against packaging them to the Packaging Guidelines, which he immediately called ridiculous.)

I once again understand the problem of the distribution maintainer, but on the other hand, if I were the upstream Drush developer I wouldn't want to see my software severely disabled in a distribution.

So what do you folks think, disable the functionality or not?

PS. Yes, I've contacted upstream, but I haven't gotten a reply yet.

Dec 20 2009

Drupal 6 for EPEL

Some of you might have noticed that Fedora 11 and up already have an up-to-date Drupal 6 version, but EPEL, which is what a lot of people are using on their CentOS or RHEL builds, only has Drupal 5. I asked Jon Ciesla, who is maintaining the Drupal packages in Fedora, why:


Because when Drupal was initially built for EL-4 and EL-5, the 5.x
branch was the current release. It's up to date, 5.20 is the most
recent release, and is still supported upstream in terms of security
fixes. 6 is out, and has been for awhile, but we have the following:

http://fedoraproject.org/wiki/EPEL/GuidelinesAndPolicies

Since 5.x isn't broken or insecure, it'll be a tough sell to move to
6.x. Once upstream drops support, this may change.

It's a correct answer from a distribution point of view, but the fact is that it widens the gap between the ops and the devs. If the ops want to keep their platform clean, we need to have our software packaged for the platform we want to use, which is most often an Enterprise Linux distro; on the other hand, understandably, no dev would even dream of building a new site on a Drupal 5 platform.

So until the Drupal community declares Drupal 5 dead, RHEL and CentOS users will have to use 3rd-party Drupal 6 RPMs, or rebuild the F12 rpm from source again.

Jan 04 2008

Recent MySQL builds in CentOSPlus

Peter notes that you can indeed find pretty recent enterprise-level MySQL rebuilds over at the CentOSPlus repository.

Good things come to those who wait :)