DevOps: Building Projects With Ant

I’ve been running a rather complicated build process for an application that I am working on. Originally I managed the build manually, with a few scripts to supplement my procedures. The past few days I have finally had the time to sit down and consolidate my build into a single Ant script. The company that I am contracting for is a heavy Java shop, so I am using Ant mostly because everyone will have access to it. I am used to make, so it’s been a bit of a learning curve, but Ant is a very cool thing. So, for starters, let me outline my current build process, and then I will show you how I have automated it.

  1. We pull the code from the QA branch in GitHub
  2. Run Composer over the code to install all the dependencies
  3. We tag the build with a version tag
  4. We write the version tag and the date to a version file
  5. We tar up ONLY the files that are absolutely needed to run the application (we have tools, docs, etc. that we do not need)
  6. We send the final archive to the server team for deployment
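Once each of those steps has its own target, the whole flow can be wired together with a thin wrapper target. This is just a sketch of the shape — the project name and the target names here are my own placeholders, not the final script:

```xml
<project name="qa-build" default="package" basedir=".">

  <!-- Hypothetical wrapper target: each of the targets it depends on
       corresponds to one of the steps listed above -->
  <target name="package"
          depends="clone, init, version, deploy"
          description="Clone from QA, install dependencies, stamp a version file, and tar" />

</project>
```

Ant resolves the depends list left to right, so the wrapper just encodes the ordering of the manual procedure.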

Actually, I started using Ant early on for the final tar step. Ant has a great task for tar. Here is the target for that:

<target name="deploy">
      <tar destfile="deploy.tar"
           basedir="build/"
           excludes="build/**, database/**, docs/**, .vagrant/**"/>
</target>

This essentially creates a file called deploy.tar from the contents of the build/ sub-folder. It excludes anything already in build/ as well as database/, docs/, and .vagrant/. Obviously I truncated my list…it is a lot longer in reality! A lot of files sit in the root, like the Vagrantfile, a Makefile, and the files for Composer.

So that covers the final step, but I still need to automate the rest.

The first step is grabbing my code from GitHub. Eclipse offers a set of Ant tasks based on JGit. You need a few dependencies for this to work, namely the jgit jar and the jgit-ant jar. You also need the JSch ssh library, which I was already using for some other scripts.

You need to load them as resources, which is done like so:

<taskdef resource="org/eclipse/jgit/ant/ant-tasks.properties">
     <classpath>
       <pathelement location="resources/org.eclipse.jgit.ant-3.0.0.2013061825-r.jar"/>
       <pathelement location="resources/org.eclipse.jgit-3.0.0.2013061825-r.jar"/>
       <pathelement location="../jsch-0.1.49.jar"/>
     </classpath>
</taskdef>

Then we set up the task for cloning:

<target name="clone">
     <git-clone 
         uri="git@github.com:weatheredwatcher/weatheredwatcher.git" 
         branch="testing"
         dest="build/" />
</target>

(Note that I am using this site rather than the project that I am working on…)

The next step is running Composer. For those that do not know, Composer is a dependency management tool for PHP. We are loading several dependencies as well as some custom libs via composer so it is important that we generate the right files.

<target name="init" description="Installing Dependencies">
  <delete file="build/composer.lock" />

  <exec executable="php" failonerror="true"
        dir="build/">
    <arg value="composer.phar" />
    <arg value="install" />
    <!-- skip the dev-only requirements -->
    <arg value="--no-dev" />
  </exec>
</target>

So what we are doing here is: first we delete the lock file. Typically the devs install a few extra tools that are not needed on the QA server, so we remove the lock file and then only install the production-level requirements with Composer. failonerror ensures that we get an error if anything bad happens rather than a success.

As far as tagging goes, I feel it is better to do the tagging in GitHub rather than in the build process. So we will only be writing to a version file. We need to write the current tag as well as the build date to this file. The git command for displaying the current tag is git describe --exact-match --abbrev=0. We antify this like so:

<exec executable="git" failonerror="true"
   dir="build/">
   <arg value="describe" />
   <arg value="--exact-match" />
   <arg value="--abbrev=0" />
   <redirector output="build/version" />
</exec>

The last part was the hard bit. You cannot pass a shell redirect (>) through the exec task; instead, we use the redirector element. The date is similar, but we add an append option to the redirector to make sure we do not overwrite the file.

<exec executable="date" dir="build/">
  <redirector output="build/version" append="true"/>
</exec>  

So if we put it all together into one big Ant script, we have nearly the entire deploy build. The last remaining step is sending the archive along to the Server Team’s folders via a mount and a copy.
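That hand-off can be one more tiny target. Again, just a sketch — /mnt/server-team is a placeholder for the actual mounted share, not the real path:

```xml
<target name="handoff" depends="deploy">
  <!-- /mnt/server-team is a placeholder for the mounted share -->
  <copy file="deploy.tar" todir="/mnt/server-team/" />
</target>
```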

The final thing to do is to clean up.

<target name="clean">
    <delete dir="build"/>
</target>

My next step will be taking this process and integrating it into a Hudson build for Continuous Integration (CI). Obviously, Hudson can take on a lot of this functionality without any Ant scripts…but I can also just have Hudson run the Ant script if I want. We will see. Until then, happy coding!!

Using Zend Style Config Files Everywhere

It’s been a while since my last blog entry, I know!! I feel bad, so here is a bit of PHP goodness to make us all feel better!

It’s always good security practice to keep configuration out of your web app. One way to do this is to use a configuration file. Now, Zend has a very cool way of doing this (Zend\Config\Reader), but the client that I am with right now is using a solution based on CodeIgniter. However, since they are planning on eventually moving to Zend anyway, I figured I would implement a solution based on the Zend config approach.

First comes the file, which I called environment.ini and placed in /etc.

The entries are in the following format:

cg.database.name=name
cg.database.username=username
cg.database.password=password
cg.database.hostname=hostname
cg.services.name=dev
cg.services.port=8080

To utilize this for the database, for example, let’s create a helper with the following function:

function get_environment() {
    $config = array();

    foreach (file('/etc/sitename/environment.ini') as $line) {
        // Split on the first '=' only, so values may themselves contain '='
        list($keys, $value) = explode('=', $line, 2);

        // Walk the dotted key, creating nested arrays as we go
        $temp =& $config;
        foreach (explode('.', $keys) as $key) {
            $temp =& $temp[$key];
        }
        $temp = trim($value);
    }

    return $config;
}

This will return an array like this:

array
  'cg' =>
    array
      'database' =>
        array
          'name' => string 'name'
          'username' => string 'username'
          'password' => string 'password'
          'hostname' => string 'hostname'
      'services' =>
        array
          'name' => string 'dev'
          'port' => string '8080'
The next part is making use of the data in your application. In the database config file for CodeIgniter, for example:

$dbconfig = get_environment();

$db = $dbconfig['cg']['database'];

$db['default']['hostname'] = $db['hostname'];
$db['default']['username'] = $db['username'];
$db['default']['password'] = $db['password'];
$db['default']['database'] = $db['name'];

And there you have it! Of course, I based this on CI, but you should be able to use this code with any framework…or even no framework.

Preparing a Dev Environment With Puppet

For starters, I now have markup support installed on my blog, so no more typing HTML!! Yea!

Today we are going to talk about Puppet. No, not Pinocchio, or those Punch and Judy dolls. This is Puppet as in the server provisioning tool.

At work I am setting up a development environment for our dev team. Since most of them are just learning PHP, and for overall consistency, I am using Vagrant to build a standard dev VM for everyone to work off of.
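For reference, the Vagrant side of this is only a few lines. This is a hypothetical sketch — the box name and the manifest paths are assumptions, not my actual setup:

```ruby
# Hypothetical Vagrantfile: box name and paths are placeholders
Vagrant.configure("2") do |config|
  config.vm.box = "centos-6-x86_64"

  # Hand provisioning off to the Puppet manifest discussed in this post
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "default.pp"
  end
end
```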

The general requirements are simple:

  1. We must run Zend Server
  2. We must load the PHP drivers for MS SQL
  3. We must install subversion

With these requirements in mind, I set out to build my first puppet script.

The first class that we define is our services class. I need to make sure that Apache is running. Also, I found out that CentOS turns iptables on by default. That interferes with the dev box, as well as being unnecessary! So we make sure that iptables is off.

class services {
  #we want apache
  service {
    'httpd':
      ensure => running,
      enable => true
  }

  service {
    'iptables':
      ensure => stopped,
      enable => false
  }
}

The next two classes work in tandem. The repos class defines our Zend Server repo, and the packages class installs the required packages.

class packages {
  package {
    "httpd":                      ensure => "present"; # Apache
    "subversion":                 ensure => "present"; # Subversion
    "zend-server-ce-php-5.3":     ensure => "present"; # Zend Server (CE)
    "php-5.3-mssql-zend-server":  ensure => "present"; # MSSQL Extension - provided by Zend
  }
}


class repos {
  #lets install some repos
  file { "/etc/yum.repos.d/zend.repo":
    content => "[Zend]
name=Zend Server
baseurl=http://repos.zend.com/zend-server/rpm/x86_64
enabled=1
gpgcheck=1
gpgkey=http://repos.zend.com/zend.key

[Zend_noarch]
name=Zend Server - noarch
baseurl=http://repos.zend.com/zend-server/rpm/noarch
enabled=1
gpgcheck=1
gpgkey=http://repos.zend.com/zend.key
"
  }
}

If anyone wants to see the entire file, here it is:

stage {

  'users':      before => Stage['repos'];
  'repos':      before => Stage['packages'];
  'packages':   before => Stage['configure'];
  'configure':  before => Stage['services'];
  'services':   before => Stage['main'];

}

class services {
  #we want apache
  service {
    'httpd':
      ensure => running,
      enable => true
  }

  service {
    'iptables':
      ensure => stopped,
      enable => false
  }
}

class configure {

  # symlinking the code from /home/vagrant/public to /var/www/public
  exec { "public symlink":
    command => "/bin/ln -s /home/vagrant/public /var/www/",
    unless  => "/usr/bin/test -L /var/www/public",
  }
  file { "/var/www/index.html":
    ensure => "absent"
  }
}

class packages {
  package {
    "httpd":                      ensure => "present"; # Apache
    "subversion":                 ensure => "present"; # Subversion
    "zend-server-ce-php-5.3":     ensure => "present"; # Zend Server (CE)
    "php-5.3-mssql-zend-server":  ensure => "present"; # MSSQL Extension - provided by Zend
  }
}

class repos {

  file { "/etc/yum.repos.d/zend.repo":
    content => "[Zend]
name=Zend Server
baseurl=http://repos.zend.com/zend-server/rpm/x86_64
enabled=1
gpgcheck=1
gpgkey=http://repos.zend.com/zend.key

[Zend_noarch]
name=Zend Server - noarch
baseurl=http://repos.zend.com/zend-server/rpm/noarch
enabled=1
gpgcheck=1
gpgkey=http://repos.zend.com/zend.key
    "
  }

}

class users {
  group { "puppet":
    ensure => "present",
  }
  user { "vagrant":
    ensure => "present",
  }
}

class {
  users:      stage => "users";
  repos:      stage => "repos";
  packages:   stage => "packages";
  configure:  stage => "configure";
  services:   stage => "services";
}

Running Multiple Apps on Node.js

So, what I want to do is run multiple apps on Node.js. Specifically, I want to be able to use node-static to serve static files and some other app (yet to be determined) to serve up my blog as flat files. I’ve done this before in Ruby using Rack and Sinatra…so I figured I would give Node.js’s Bogart a try!

With a little bit of trial and error, I have come up with the best solution to this: http-proxy.

The first thing to look at is my package.json file.

{
    "name": "bogart-test",
    "description": "Testing Bogart/FlatFile/Static structures",
    "version": "0.1.0",
    "author": "David Duggins",
    "email": "David Duggins",
    "main": "./app",
    "directories": { "lib": "./lib" },
    "dependencies": {
      "node-static": ">=0.6.5",
      "bogart": ">=0.2.0",
      "mustache": "0.3.1-dev",
      "http-proxy": ">=0.0.0"
    }
}

The important stuff to note is node-static, bogart and http-proxy. I have not started to use mustache yet, but it may or may not end up being the templating engine.

Bogart by itself is fairly straightforward. It’s just as easy to configure as Sinatra is for Ruby or Silex is for PHP. It just handles routes.

var bogart = require('bogart');
var router = bogart.router();

router.get('/', function(req) {

  return bogart.html("hello world");
});

var app = bogart.app();
app.use(bogart.batteries); // A batteries included JSGI stack including streaming request body parsing, session, flash, and much more.
app.use(router); // Our router

app.start();

The above example will simply echo “Hello World” on the index of our site. It is set to use the default port 8080. That can be easily changed with app.start('10000', '127.0.0.1').

The next part is node-static. I want to be able to serve static files, like an about page. Fairly simple as well:

var static = require('node-static');

//
// Create a node-static server to serve the current directory
//
var file = new(static.Server)('.', { cache: 7200, headers: {'X-Hello':'World!'} });

require('http').createServer(function (request, response) {
    request.addListener('end', function () {
        //
        // Serve files!
        //
        file.serve(request, response, function (err, res) {
            if (err) { // An error has occurred
                console.error("> Error serving " + request.url + " - " + err.message);
                response.writeHead(err.status, err.headers);
                response.end();
            } else { // The file was served successfully
                console.log("> " + request.url + " - " + res.message);
            }
        });
    });
}).listen(1337);

This code merely pulls any static files and serves them. It relies on naming conventions like index.html to make sure that a file is pulled up via ‘/’. You can also call other pages just like you would on a normal Apache server.

The final part of this is configuring Bogart to use http-proxy so that we can load the static pages only when we want to. To load http-proxy we need these two lines:

var http = require('http')
, httpProxy = require('http-proxy');

Then to proxy a route, we need this:

router.get('/', function(req) {
  return bogart.proxy('http://127.0.0.1:1337');
});

Remember that the static app is running on port 1337. Well, that is all for now. I will be working on the other parts of this experiment and write more on it later.

Cloud9 IDE

A few days ago I went ahead and installed the Cloud9 IDE onto my laptop. I’ve been using the cloud version for editing this site as well as some other GitHub-based sites, and I love it. I didn’t think that I could love it even more than I did before, but I do!

Locally, I can launch a workspace and start editing local files…and the console gives me complete shell access to my system! I’m working on a project that is being managed with Subversion =( and the design team is using Sass. So when a change is made in the core style, I can go to the console, update from svn, then compile the Sass with Compass, and we are good to go!! Obviously I also use it with all my git-based projects as well!! It’s very nice.

It’s also very helpful with my New Year’s Resolution to learn Node. Cloud9 is Node-based and gives a GREAT environment to develop Node in. Also, when I am using a single screen on the laptop, it’s a great space saver. C9 loads up in Chrome right next to the site I am working on. I have all the dev tools handy and can just go back and forth between the tabs!! So go ahead and give it a try! You can install it as easy as pie…look into the GitHub repo: https://github.com/ajaxorg/cloud9!!