Over the past five years I’ve come to experience the delights of Puppet, CFEngine and Chef across a wide range of deployments, from a couple of web servers hosting two or three hundred sites to thousands of servers underpinning an OpenStack-based cloud solution.
I’d like to share a few thoughts on what I’ve learned, how to avoid the mistakes I’ve made, and how to ensure that the next time you reach for your modules or cookbooks, you do it in a structured and sensible manner.
LAAAAAAADIESS ANNNNDDD GENNNELMEN…… LET’S GET READYYYYYYY TO RUMMMMMBBBBBLLLLLEEEEEEEE!!!!!!!!!
Ok, enough silliness. For the next few sentences anyway…
I’ve been using Puppet to manage systems for the last four years (at least!); however, a new contract has meant I’ve needed to learn Chef. A few months ago I was looking for a blog post on the differences between Puppet and Chef written from an objective point of view, and the fantastic @nathenharvey produced a post on exactly that topic. The main takeaway was that you should stop arguing about which is better and just make sure you’re using some form of systems management tool, but there are many other good points made there too.
This post is more of a comment on my feelings about the two systems and a comparison of the way in which they work. Unlike Nathen, I didn’t have to make any of the WIIF decisions – the only question I had to ask was “Oh, they use Chef. Do I want a contract?”, and the answer to the second part will always override the first. I am new to Chef (I’ve been using it for a total of four hours now!), but I’m already starting to see some of the differences, and I hope this post will help others who find themselves in my position.
This is a subject which has been blogged about at length; however, now that I’ve got this working I thought I’d write up the key things I found, both to make sure I get it right in future and to provide a reference for others trying to fix the same issues.
I’ve had pdnssec running for this domain and a few others for some time now and the domains have been signed “locally” in preparation for signing by the parent domain.
This post is more of a reference for myself than anything else, however I thought it might come in handy for some others. It’s bastardised from the install.sh which comes with nventory and will configure an Amazon Linux AMI for you.
The install script below builds nventory, rubygems and nginx from source and installs everything else from either the Amazon repo or EPEL.
This means that you can install and configure this system on the Amazon Free Usage Tier using the “tiny” Amazon Linux AMI (amzn-ami-2011.09.1.x86_64-ebs) and get a complete inventory server and puppetmaster for a year at zero cost!
The last two posts in this series covered what the overall system will look like and how to ensure your Puppet server is ready to receive the files from the SCM repo via Capistrano. This post will cover setting up the test server using Jenkins CI and creating some tests.
We start by installing Jenkins.
I’ve been playing around with Capistrano over the past few weeks and I’ve recently created a way to use the power of Capistrano’s “deploy” and “rollback” features with Puppet and MCollective to enable me to have complete control over the deployment of my system configurations.
OK, it’s May the fourth as I start writing this and I couldn’t resist the title. I hope this first post in what I aim to deliver as a series of tutorials will help you move towards full testing, integration and deployment of your systems, turning what could be a five-hour manual build and deploy routine into a single code commit.
Deploying web applications can be a real nightmare at times, especially when you have numerous SVN repositories of code which all link together when installed on the server to create your application.
I’ve started using Murder to try and work around the headaches and apart from a very small issue (which I’ll discuss at the end!) it’s working perfectly.
After reading the thread in the Devops-Toolchain Google Group (http://bit.ly/devops-vmth), I realised it was about time I dusted down Cucumber-Vhost and gave it a quick once-over.
The main addition tonight is way overdue: a simple configuration file. I chose YAML because XML is not a human-readable format, and the result is a configuration file which looks as follows:
# Configuration file for cucumber-vhost
# Change the settings below to reflect your environment
#### COBBLER ####
server: localhost # The server which is running the cobbler XMLRPC API
port: 80 # The port number to connect to for accessing the API
#### LIBVIRT ####
driver: qemu # The libvirt driver to use as documented at http://libvirt.org/drivers.html#hypervisor
host: # The host to connect and launch the virtual machines (leave blank for localhost)
type: system # How to connect to the hypervisor (see man virsh(1) for more information)
# "system" connects to qemu as root, "session" connects to qemu as the current user.
# Leave blank if using Xen as the driver
storage_pool: default # The name of the libvirt storage pool to use
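As a rough sketch of how those settings parse, here is how Ruby (the language cucumber-vhost is written in) reads them with the standard YAML library. The heredoc stands in for the on-disk file, and the loader logic is my illustration rather than cucumber-vhost’s actual code; note that a blank value like `host:` comes back as nil, which is what makes “leave blank for localhost” work.

```ruby
require 'yaml'

# A trimmed copy of the configuration file shown above.
raw = <<~CONFIG
  # Configuration file for cucumber-vhost
  server: localhost   # The server which is running the cobbler XMLRPC API
  port: 80            # The port number to connect to for accessing the API
  driver: qemu        # The libvirt driver to use
  host:               # Leave blank for localhost
  type: system        # "system" connects as root, "session" as the current user
  storage_pool: default
CONFIG

config = YAML.load(raw)

# Inline "# ..." comments are stripped by the YAML parser, so the values
# come through clean: "localhost", 80, and nil for the blank host.
cobbler_url = "http://#{config['server']}:#{config['port']}/cobbler_api"

puts cobbler_url            # => http://localhost:80/cobbler_api
puts config['host'].nil?    # => true (blank YAML value parses as nil)
```
The one thing worth calling out in this design is that YAML gives you typed values for free: `port` arrives as an integer and blank keys as nil, so there is no string-munging before handing them to the libvirt or Cobbler client code.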
Once the configuration file is set up, you can run the usual command and cucumber-vhost will launch your VMs based on the configuration and kickstart in Cobbler.