Everything is a Freaking DNS problem - infrastructure http://127.0.0.1:8080/blog/taxonomy/term/632/0 en Appliance or Not Appliance http://127.0.0.1:8080/blog/appliance-or-not-appliance <p>That's the question <a href="http://blog.rootshell.be/2011/01/05/security-diy-or-plugnplay/" rel="nofollow">Xavier</a> asks in his blog entry titled<br /> Security: DIY or Plug’n'Play</p> <p>To me the answer is simple: most of the appliances I have run into so far have no way of being configured apart from the ugly web GUI they ship with their device. That means that I can't integrate them with the configuration management framework I have in place for the rest of the infrastructure. There is no way to automatically modify e.g. firewall rules together with the relocation of a service (which does happen automatically); some kind of manual interaction is always required. Appliances tend to sit on an island: they either stay unmanaged (be honest, when's the last time you upgraded the firmware of that terminal server?) or take a lot of additional effort to manage manually. They require yet another set of tools beyond the set you are already using to manage your network.<br /> They don't integrate with your backup strategy, and don't tell me they all come with perfect MIBs.</p> <p>There are other arguments one could bring up against appliances. Obviously people can spread FUD about some organisation allegedly paying people to put backdoors in certain operating systems... so why would they not pay people to put backdoors in appliances? They don't even need to hide them in there. But my main concern is manageability,
and having only a web GUI to manage the box just means to me that the <a href="http://queue.acm.org/detail.cfm?id=1921361" rel="nofollow">vendor</a> hates me and doesn't want my business</p> <p>A good appliance (security or any other type) needs to provide me an API that I can use to configure it; in all other cases I prefer a DIY platform, as I can keep it in line with all my other tools: config mgmt, deployment, upgrade strategies, etc.</p> <p>Maybe a last question for Xavier to finish my reply... I'm wondering how Xavier thinks he can achieve high availability by running, in a virtual environment, virtual appliances that are not cluster aware. A falsely comfortable feeling of higher availability, maybe... but real High Availability? That I'd like to see.</p> http://127.0.0.1:8080/blog/appliance-or-not-appliance#comments automation devops infrastructure opensource puppet security Wed, 12 Jan 2011 20:28:58 +0000 Kris Buytaert 1028 at http://127.0.0.1:8080/blog Scaling Drupal http://127.0.0.1:8080/blog/scaling-drupal <p><a href="http://www.johnandcailin.com/usernode/john" rel="nofollow">John Quinn</a> writes about <a href="http://www.johnandcailin.com/blog/john/scaling-drupal-open-source-infrastructure-high-traffic-drupal-sites" rel="nofollow">Scaling Drupal</a>; he is taking a one-step-at-a-time approach and is still writing his 4th and 5th stages.</p> <p>His first step, obviously, is separating Drupal from the database server, and he chooses MySQL for this purpose; moving your DB to a different machine is a good thing to do.</p> <p>However, then he gets this crazy idea of using NFS to share his Drupal shared files :(<br /> (he even dares to mention that the setup ease is good). Folks, we abandoned NFS in the late nineties. 
NFS is still a recipe for disaster: it has performance issues, it has stability issues (stale locks), and no security admin in his right mind will tolerate portmap running in his DMZ.<br /> (Also think about the I/O path that has to be followed to serve a static file to a visitor when the file is stored on a remote NFS volume.) </p> <p>On top of that he adds complexity in a phase where it isn't needed yet. Because he now needs to manage and secure NFS, and because he is storing his critical files on the other side of an Ethernet cable, he has created a single point of failure he didn't need to create yet.<br /> Yes, as soon as you start to scale you need to look at a scalable and redundant way to share your files.<br /> When those files are pretty static you'll start out with a set of rsync scripts, or scripts that push them to the different servers upon deployment of your application. When they change often you start looking into filesystems or block devices that bring you replication, such as DRBD or Lustre.<br /> But if today his NFS server goes down he is screwed, much harder than when his database has a hiccup.</p> <p>One could discuss the order of scaling: adding more webservers might not always be the first step to take, as one might want to tackle the database first, depending on the application.<br /> He decides to share the load of his application over multiple Drupal instances using apache mod_proxy, then adds Linux-HA to make it highly available.<br /> I'm interested in knowing why he chose apache mod_proxy and not LVS. </p> <p>Although using NFS for me belongs in a <cite>How NOT to scale</cite> tutorial, his other steps give you a good idea of the steps to take. </p> <p>I'm looking forward to his next steps :) I hope that in part 4 he also removes NFS in favour of a solution without the performance and locking issues, one that really takes away a big fat single point of failure. In part 5 he will discuss how to scale your database environment. 
The actual order of implementing steps 2 and 5 will be different for each setup. </p> <p>Anyway... I'm following up on his next steps... interesting reading.</p> http://127.0.0.1:8080/blog/scaling-drupal#comments drupal ha hpc infrastructure mysql scaling Sat, 10 Nov 2007 15:54:46 +0000 Kris Buytaert 504 at http://127.0.0.1:8080/blog
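<p>The rsync-based push for pretty static files mentioned above can be sketched as a small deploy script. This is a minimal sketch, not John's actual setup: the hostnames and paths are hypothetical placeholders, and the <code>echo</code> keeps it a dry run that only prints the rsync commands it would execute.</p>

```shell
#!/bin/sh
# Dry-run sketch of pushing static Drupal files to each webserver on deploy.
# WEBSERVERS and FILESDIR are assumed placeholders, not values from the post.
WEBSERVERS="web1 web2 web3"          # assumed pool of web servers
FILESDIR="/var/www/drupal/files"     # assumed shared-files directory

for host in $WEBSERVERS; do
    # -a preserves permissions/ownership/timestamps, -z compresses in transit,
    # --delete keeps every replica identical to the deploy source.
    # Drop the leading "echo" to actually run the transfers.
    echo rsync -az --delete "$FILESDIR/" "$host:$FILESDIR/"
done
```

<p>The point of a push like this is that it takes the remote mount out of the read path: every webserver serves the files from local disk, and losing one replica doesn't take the whole site down the way a dead NFS server would.</p>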