Aug 06, 2014

Key listing support in Consul client

For integration between Ansible and Consul I've been using a third-party Python client called consulate. It is decent; however, both it and Consul are new, and it doesn't support the full Consul HTTP API yet.

Currently I'm trying to model our topology in Consul's key-value store, but lists of values are not intuitive. Consul seems to only store strings, so without doing string parsing / casting I am unable to get complicated values out.

For example, I'd like to store something like this:

/roles/zookeeper/zones = ['us-west-2a', 'us-west-2b', 'us-west-2c']

and when I do a query against Consul, get back a list to work with. Since I can't, I had been doing terrible things like this:

def get_keys(self):
    output = {}
    for k, v in self.session.kv.items().iteritems():
        # crude list detection: strip the brackets and split on ", "
        if v is not None and isinstance(v, str) and v.startswith("[") and v.endswith("]"):
            output[k] = v.replace('[', '').replace(']', '').split(', ')
        else:
            output[k] = v
    return output

Reading the Consul API docs more closely, though, I found that they have a weird way of supporting this. Specifically:

It is possible to also only list keys without their values by using the "?keys" query parameter along with a GET request. This will return a list of the keys under the given prefix. The optional "?separator=" can be used to list only up to a given separator.

For example, listing "/web/" with a "/" separator may return:

[ "/web/bar", "/web/foo", "/web/subdir/" ] Using the key listing method may be suitable when you do not need the values or flags, or want to implement a key-space explorer.

So I dug into the (simple) consulate find method and added support for ?keys to it. The pull request is logged with consulate, and we can use my consulate fork for the time being.
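
Under the hood this is just a query parameter on the KV endpoint, so you can see the behavior by hitting the HTTP API directly. A rough sketch, assuming a local agent on the default port and the key layout from above:

import json
import urllib2

# List keys under a prefix without fetching values; adding separator=/
# collapses anything below the next "/" into a single "dir/" style entry.
url = 'http://localhost:8500/v1/kv/roles/?keys&separator=/'
keys = json.load(urllib2.urlopen(url))
print(keys)  # something like [u'roles/zookeeper/']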

Dynamic inventory and variables in Ansible

I've been building out automation for deploying microservices in EC2. We're using Consul for service registration, discovery, health checks, and configuration. Since Consul provides an available key-value store for configuration, we've been trying to define the topology that way. Ansible has some very good documentation, and it is one of the things I like most about the project.

Documentation for building your own dynamic inventory is fairly complete, but I was having trouble including variables in that inventory. The dynamic inventory documentation shows an example of host-level variables in its JSON output, e.g.:

{
  "databases": {
    "hosts"   : [ "host1.example.com", "host2.example.com" ],
    "vars"    : {
      "a"   : true
    }
  }
}
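
For context, a dynamic inventory is just an executable that prints JSON like this to stdout when Ansible invokes it with --list. A minimal sketch, with the data hardcoded where a real script would query Consul:

#!/usr/bin/env python
# Minimal dynamic inventory sketch. Ansible invokes the script with --list
# and expects the group -> hosts/vars structure above on stdout; a real
# version would build this from Consul instead of hardcoding it.
import json
import sys

inventory = {
    "databases": {
        "hosts": ["host1.example.com", "host2.example.com"],
        "vars": {"a": True},
    }
}

if __name__ == '__main__':
    if len(sys.argv) > 1 and sys.argv[1] == '--list':
        print(json.dumps(inventory))
    else:
        # Ansible also calls --host <hostname> for per-host vars; nothing here.
        print(json.dumps({}))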

However, in the standard inventory documentation there is a different type of variable, specifically group variables.

[atlanta]
host1
host2

[atlanta:vars]
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com

So my assumption was that I could use the group variable syntax in the dynamic inventory output to achieve the same thing. The power here was that different Consul instances could contain different values, allowing me to build a fairly dynamic infrastructure. Combining the documentation from the static inventory with the dynamic output gave me something that looked like this:

{
  "databases": {
    "hosts"   : [ "host1.example.com", "host2.example.com" ],
    "vars"    : {
      "a"   : true
    }
  },
  "databases:vars": {
    "postgres": {
       "version": "9.3"
     }
  }
}

Unfortunately ansible-playbook wanted nothing to do with this. 'databases:vars' was being cast to a list and treated as a group, which was stomping on the variables I was trying to pass around.

I spent a while thinking about the problem and decided that inventory wasn't actually where I needed these variables passed in. Instead it would be fine to use Consul as a facts source and use that to drive role behavior. I started out trying to augment the magic variable 'ansible_facts' by modifying the [setup module](https://github.com/ansible/ansible/blob/devel/library/system/setup), but ultimately I didn't want to maintain my own core module.

Instead I was able to find two good blog posts about writing a fact-gathering module. The first is a little old but comes from one of the best Ansible blogs I've found: Ansible: variables, variables, and more variables explains the basic approach well, but the example code seemed to no longer work. Docker: Containers for the Masses -- The docker_facts module is a much more recent post and had a working example to go along with it. It turned out the reason the first example wasn't working was due to how the module was exiting. The first example just printed the output; the correct approach now is to use the module.exit_json method.

module.exit_json(changed=changed, ansible_facts=ansible_facts)

I've posted the module on my ansible fork and may send a pull request over.
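
For reference, the overall shape of such a fact-gathering module is roughly the sketch below; the consul_facts idea, the localhost:8500 address, and the flat key/value mapping are assumptions for illustration, not the actual code from my fork:

#!/usr/bin/env python
# Sketch of a fact module that exposes Consul's KV store under ansible_facts.
import base64
import json
import urllib2

# Standard Ansible module boilerplate; Ansible swaps this in when it ships
# the module to the target host.
from ansible.module_utils.basic import *


def fetch_kv(prefix):
    # GET /v1/kv/<prefix>?recurse returns a list of {Key, Value, ...} dicts,
    # with Value base64 encoded (assumes a local agent on the default port).
    url = 'http://localhost:8500/v1/kv/%s?recurse' % prefix
    entries = json.load(urllib2.urlopen(url))
    facts = {}
    for entry in entries:
        value = entry.get('Value')
        facts[entry['Key']] = base64.b64decode(value) if value else None
    return facts


def main():
    module = AnsibleModule(argument_spec=dict(prefix=dict(default='')))
    facts = {'consul_kv': fetch_kv(module.params['prefix'])}
    module.exit_json(changed=False, ansible_facts=facts)


if __name__ == '__main__':
    main()

The idea is that once a task runs this module, everything under the given prefix is available to later tasks via the consul_kv fact.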

Dec 10, 2013

One way sync Jira to Things

Little Ruby script to take the results of a JQL query (advanced search) in Jira and create todo items for Cultured Code's Things app. I use it at the beginning of a sprint to keep track of my own work and progress. If I get less lazy I could always sync back in the other direction.

Jan 02, 2012

Getting Tumblr to Octopress Working

Following https://github.com/mojombo/jekyll/wiki/blog-migrations to get posts from Tumblr, I then get YAML errors due to bad characters in the title sections. A little grep and awk shows me the bogus titles.

fgrep -R title: * | awk '{$1 =""; print }' |grep :
 Awesome Blog: I keep a diary
 Tutorial: Use Coda with locally stored Django documentation - Small Victory
 Liquidware Antipasto: The First 10 Things Everyone Does with their New Arduino
 How to: Delete all photos off an iPhone by Colin Devroe
 visualvm: Home
 The Nation: The Migration Back To Local Banks : NPR
 30 Hour Drunk: Do You Have What it Takes?
 rentzsch.tumblr.com: Pages-Only GitHub Projects
 rentzsch.tumblr.com: Pages-Only GitHub Projects
 Marco.org: The iPad doesnt need to do everything
 An Innovative Web: SaveTabs Safari Extension
 From the Tips Box: Trimming Photos, Timed Shutdown, and Minimalist Twitter [From The Tips Box]
 VMware Communities: VMware Project Onyx
 Bash: parsing arguments with getopts
 Swirl: some sugar for Tornado

It was the colons.
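
If you'd rather not fix them all by hand, here's a quick sketch that double-quotes any unquoted title containing a colon (assuming the imported posts live under _posts/):

import glob
import re

# Double-quote any unquoted title that contains a colon so the YAML front
# matter parses. Assumes the imported posts live under _posts/.
for path in glob.glob('_posts/*'):
    with open(path) as f:
        lines = f.readlines()
    fixed = []
    for line in lines:
        m = re.match(r'title: (?!["\'])(.*:.*)$', line)
        if m:
            line = 'title: "%s"\n' % m.group(1).replace('"', '\\"')
        fixed.append(line)
    with open(path, 'w') as f:
        f.writelines(fixed)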

Dec 28, 2011

Setting up Gource on OS X

Setting up Puppet with Passenger

Setup your puppet.conf

Make sure you have the following set in your puppetmaster's puppet.conf:

[puppetmasterd]
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
Install apache2, passenger, and rack

yum install httpd httpd-devel ruby-devel rubygems
yum install gcc-c++
gem install -v 1.1.0 rack
gem install -v 2.2.15 passenger
passenger-install-apache2-module

Hit enter and watch it build.

Add the following to your Apache config:

   LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15/ext/apache2/mod_passenger.so
   PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15
   PassengerRuby /usr/bin/ruby

Create /etc/httpd/conf.d/puppetmaster.conf

Listen 8140
<VirtualHost *:8140>

    SSLEngine on
    SSLCipherSuite SSLv2:-LOW:-EXPORT:RC4+RSA
    SSLCertificateFile      /var/lib/puppet/ssl/certs/puppet.vmhosted.jiveland.com.pem
    SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/puppet.vmhosted.jiveland.com.pem
    SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
    SSLCACertificateFile    /var/lib/puppet/ssl/ca/ca_crt.pem
    # CRL checking should be enabled; if you have problems with Apache complaining about the CRL, disable the next line
    SSLCARevocationFile     /var/lib/puppet/ssl/ca/ca_crl.pem
    SSLVerifyClient optional
    SSLVerifyDepth  1
    SSLOptions +StdEnvVars

    # The following client headers allow the same configuration to work with Pound.
    RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

    RackAutoDetect On
    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public/
    <Directory /usr/share/puppet/rack/puppetmasterd/>
        Options None
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>


mkdir -p /usr/share/puppet/rack/puppetmasterd
mkdir /usr/share/puppet/rack/puppetmasterd/public /usr/share/puppet/rack/puppetmasterd/tmp

Create /usr/share/puppet/rack/puppetmasterd/config.ru

# a config.ru, for use with every rack-compatible webserver.
# SSL needs to be handled outside this, though.

# if puppet is not in your RUBYLIB:
# $:.unshift('/opt/puppet/lib')

$0 = "puppetmasterd"
require 'puppet'

# if you want debugging:
# ARGV << "--debug"

ARGV << "--rack"
require 'puppet/application/puppetmasterd'
# we're usually running inside a Rack::Builder.new {} block,
# therefore we need to call run *here*.
run Puppet::Application[:puppetmasterd].run


/etc/init.d/puppetmasterd stop
chkconfig puppetmaster off
chkconfig httpd on
/etc/init.d/httpd restart

Ruby 1.9.2, Jekyll, and OS X