creating chaos since 1981.

Brand New Design Coming Soon!

2013-10-08 16:49:13

Well, the old blog here is going away. I am making the switch to Jekyll for my blog. Obviously it will still be here at the same address; it will just be a different animal.

I realize that I do not really need a 'homepage', so I am going straight to the blog. I will still include the other pages, same as here. Of course the biggest change is me going back to a flat file system. I really would rather use vi to post to my blog, so there you go.

I have a new design up. I have been working on it for the past week and I rather like it. I still need to do some clean up work on the current posts' markup, as well as add the newer entries from my blog. Please check out the new design if you like. I plan on having it live by the end of the week.

DevOps: Building Projects with Ant

2013-08-07 21:07:40

I've been running a build process for an application that I am working on that is rather complicated. Originally I was manually managing the build process, with a few scripts to supplement my procedures. The past few days I have finally had the time to sit down and consolidate my build into a single Ant script. The company that I am contracting for is a heavy Java shop, so I am using Ant mostly because everyone will have access to it. I am used to make, so it's been a bit of a learning curve, but Ant is a very cool thing. So, for starters, let me outline my current build process and then I will show you how I have automated it.

  1. We pull the code from the QA branch in GitHub
  2. Run Composer over the code to install all the dependencies
  3. We tag the build with a version tag
  4. We write the version tag and the date to a version file
  5. We tar up ONLY the files that are absolutely needed to run the application (we have tools, docs, etc. that we do not need)
  6. We send the final archive to the server team for deployment

Actually, I started using Ant early on for the final tar build. Ant has a great task for tar. Here is the target for that.

<target name="deploy">
      <tar destfile="build/deploy.tar"
           basedir="."
           excludes="build/**, database/**, docs/**, .vagrant/**"/>
</target>


This essentially creates a file called deploy.tar in a sub-folder called build/. It excludes anything already in build/ as well as database/, docs/, and .vagrant/. Obviously I truncated my excludes list; it is a lot longer in reality! A lot of files sit in the root, like the Vagrantfile, a Makefile, and the files for Composer.

So this is about the final step. But I still need to automate the rest.

The first step is grabbing my code from GitHub. Eclipse offers a set of Ant tasks based on jgit. You need a few dependencies for this to work, namely the jgit jar and the jgit-ant jar. You also need the jsch ssh library, which I was already using for some other scripts.

You need to load them as resources, which is done like so:

<taskdef resource="org/eclipse/jgit/ant/">
    <classpath>
        <pathelement location="resources/org.eclipse.jgit.ant-"/>
        <pathelement location="resources/org.eclipse.jgit-"/>
        <pathelement location="../jsch-0.1.49.jar"/>
    </classpath>
</taskdef>

Then we set up the task for cloning:

<target name="clone">
    <git-clone uri=""
               dest="build/" />
</target>

(Note that I am using this site rather than the project that I am working on...)

The next step is running Composer. For those that do not know, Composer is a dependency management tool for PHP. We are loading several dependencies as well as some custom libs via composer so it is important that we generate the right files.
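For reference, Composer reads those dependencies from a composer.json in the project root. A minimal sketch of what such a file looks like (the package names here are only examples, not our actual requirements):

```json
{
    "require": {
        "monolog/monolog": "1.5.*"
    },
    "require-dev": {
        "phpunit/phpunit": "3.7.*"
    }
}
```

The require-dev section holds the extra tooling the devs use locally, which is exactly the part we do not want on the QA server.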

<target name="init" description="Installing Dependencies">
  <delete file="build/composer.lock" />

  <exec executable="php" dir="build/" failonerror="true">
      <arg value="composer.phar" />
      <arg value="install" />
      <arg value="--no-dev" />
  </exec>
</target>

So what we are doing here is first we delete the lock file. Typically the devs install a few extra tools that are not needed on the QA server, so we remove the lock file and then only install the production level requirements with Composer. failonerror ensures that we get an error if anything bad happens rather than a success.

As far as tagging goes, I feel it is better to do the tagging in GitHub rather than in the build process. So we will only be writing to a version file. We need to write the current tag as well as the build date to this file. The git command for displaying the current tag is git describe --exact-match --abbrev=0. We antify this like so:

<exec executable="git" failonerror="true">
   <arg value="describe" />
   <arg value="--exact-match" />
   <arg value="--abbrev=0" />
   <redirector output="build/version" />
</exec>

The last part was hard. You cannot pass a redirect > through the exec task. Instead, we use the redirector. The date is similar, but we add an append option to the redirector to make sure we do not overwrite the file.

<exec executable="date" dir="build/">
  <redirector output="build/version" append="true"/>
</exec>

So if we put it all together into a massive ant script, we have nearly the entire deploy build. The last part is the one where we send it along to the Server Team's folders via a mount and a copy.
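A sketch of how I wire the targets together (the depends chain here is my own way of gluing it; the target bodies are the ones shown above):

```xml
<project name="deploy-build" default="deploy" basedir=".">

    <target name="clone">
        <!-- git-clone the QA branch into build/ -->
    </target>

    <target name="init" depends="clone">
        <!-- composer install inside build/ -->
    </target>

    <target name="version" depends="init">
        <!-- git describe and date, redirected into build/version -->
    </target>

    <target name="deploy" depends="version">
        <!-- tar up only the needed files -->
    </target>

</project>
```

With that chain in place, running `ant` walks clone, init, version, and deploy in order.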

The final thing to do is to clean up.

<target name="clean">
    <delete dir="build"/>
</target>

My next step will be taking this process and integrating it into a Hudson build for Continuous Integration or CI. Obviously, Hudson will be able to take on a lot of this functionality without any ant scripts...but I also can just have Hudson run the ant script if I want. We will see. Until then, happy coding!!

Hello i3

2013-08-05 01:21:06

So, just to give some back story to all this: I first installed Linux on my computer in the late 90's and it was Red Hat. The wm was, of course, Gnome. Over the years I have tried pretty much every distro and every wm there is. I've always stuck to Gnome. Mainly, it was because I liked the way it worked. If you've ever seen any of my posted screenshots you know what I am talking about. Statusbar/menu on the top with a taskbar on the bottom. It's been the way I've worked. My fondness for Gnome led me to give Gnome3 a try, despite it being different. Ultimately, with a few tweaks and a lot of plugins I've grown to like it. I certainly have never been one of its detractors.

That being said, this weekend my system nearly got fubar'd thanks to gnome-shell. Even after partially fixing things with a home folder permissions reset, I still can't get everything back in place. I had been eying several alternatives ever since I got my new laptop, for several reasons really. I am a Debian fan and I was very happy when I finally had an AMD based laptop with a Wifi adapter that worked with the free kernel... such a short lived happiness when I had to install non-free firmware to run gnome-shell... damn graphics card!! The instability to me is not worth the 'prettiness'.

Mainly, I have been looking at tiling wms. It was between Awesome and i3. Awesome has its points, but I found myself drawn to i3 and I am, after a weekend of playing around, very happy.

For starters, the tiling wm just plain fits my workflow! I have always struggled to fit my windows around my screen, adjusting them to my satisfaction. I like to have a browser, vi and various terminals with logging all on a single screen (as well as browser tools when I am dealing with JavaScript). i3 is remarkably good at this!

Its status bar shows me just the information that I want...and is customizable to boot! I can make use of multiple workspaces and even program certain applications to open on certain workspaces!! The downside is that there is no longer any point in me posting screenshots!! (Well, I am keeping e17 as my alternative wm, so maybe some stuff from that realm!!)
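Those per-application workspace rules are just a couple of lines in the i3 config (~/.i3/config); the window classes below are only examples, not my actual setup:

```
# send browsers to workspace 2, terminals to workspace 1
assign [class="Firefox"] 2
assign [class="URxvt"] 1

# the customizable status bar
bar {
    status_command i3status
}
```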

Use Zend Style Config Files Everywhere

2013-07-02 13:21:13

It's been a while since my last blog entry, I know!! I feel bad, so here is a bit of php goodness to make us all feel better!

It's always a good security practice to remove configuration from your web-app. One way to do this is to use a configuration file. Now, Zend has a very cool way of doing this (Zend\Config\Reader), but the client that I am with right now is using a solution based on CodeIgniter. However, since they are planning on eventually moving to Zend anyway, I figured I would implement a solution based on the Zend config solution.

First comes the file, which I called environment.ini and placed in /etc/sitename/.

The entries are dotted keys, one per line, in the following format:

cg.database.name = name
cg.database.hostname = hostname
cg.services.url = dev
cg.services.port = 8080

To utilize this for the database, for example, lets create a helper with the following function:

function get_environment() {
    $config = array();
    foreach( file( '/etc/sitename/environment.ini') as $line) {
        list( $keys, $value) = explode( '=', $line);

        $temp =& $config;
        foreach( explode( '.', $keys) as $key)
            $temp =& $temp[$key];
        $temp = trim( $value);
    }

    return $config;
}

This will return an array like this:

  'cg' =>
      'database' =>
           'name' => string 'name'
           'hostname' => string 'hostname'
           'username' => string 'username'
           'password' => string 'password'
      'services' =>
           'url' => string 'dev'
           'port' => string '8080'
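The trick in that helper is walking the dotted key one segment at a time with a reference. Since I have been on a Node kick lately, here is the same idea sketched in JavaScript for illustration (the function name is mine, not part of any library):

```javascript
// Expand flat "a.b.c = value" pairs into a nested object,
// mirroring what get_environment() does with PHP references.
function expandKeys(lines) {
  var config = {};
  lines.forEach(function (line) {
    var parts = line.split('=');
    var keys = parts[0].trim().split('.');
    var value = parts[1].trim();
    var temp = config;
    keys.forEach(function (key, i) {
      if (i === keys.length - 1) {
        temp[key] = value;           // last segment: assign the value
      } else {
        temp[key] = temp[key] || {}; // walk (and create) one level down
        temp = temp[key];
      }
    });
  });
  return config;
}
```

Each intermediate segment creates a sub-object on first sight, so sibling keys like cg.database.* and cg.services.* end up under the same cg branch.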

The next part of this is making use of the data in your application. In the database config file for Codeigniter, for example:

$dbconfig = get_environment();

$cg = $dbconfig['cg']['database'];

$db['default']['hostname'] = $cg['hostname'];
$db['default']['username'] = $cg['username'];
$db['default']['password'] = $cg['password'];
$db['default']['database'] = $cg['name'];

And there you have it! Of course, I based this on CI but you should be able to use this code for any framework..or even no framework.

Preparing a Dev Environment with Puppet

2013-02-01 22:47:00

For starters, I now have Markdown installed on my blog, so no more typing HTML!! Yea!

Today we are going to talk about Puppet. No, not Pinocchio, or those Punch and Judy dolls. This is Puppet as in the server provisioning tool.

At work I am setting up a development environment for our dev team. Since most of them are just learning php, and for overall consistency, I am using Vagrant to build a standard dev vm for everyone to work off of.

The general requirements are simple:

  1. We must run Zend Server
  2. We must load the php drivers for MS SQL
  3. We must install Subversion

With these requirements in mind, I set out to build my first puppet script.

The first class that we define is our services class. I need to make sure that Apache is running. Also, I found out that CentOS turns iptables on by default. That interferes with the dev box, as well as being unnecessary! So we make sure that iptables is off.

class services {
  #we want apache
  service { 'httpd':
      ensure => running,
      enable => true
  }

  service { 'iptables':
      ensure => stopped,
      enable => false
  }
}

The next two classes work in tandem. The repos class defines our Zend Server repo and packages install the required packages.

class packages {
  package {
    "httpd":                      ensure => "present"; # Apache
    "subversion":                 ensure => "present"; # Subversion
    "zend-server-ce-php-5.3":     ensure => "present"; # Zend Server (CE)
    "php-5.3-mssql-zend-server":  ensure => "present"; # MSSQL Extension - provided by Zend
  }
}

class repos {
  #lets install some repos
  file { "/etc/yum.repos.d/zend.repo":
    content => "[Zend]
name=Zend Server

[Zend_noarch]
name=Zend Server - noarch
"
  }
}

If anyone wants to see the entire file, here it is:

stage {
    'users':     before => Stage['repos'];
    'repos':     before => Stage['packages'];
    'packages':  before => Stage['configure'];
    'configure': before => Stage['services'];
    'services':  before => Stage['main'];
}

class services {
  #we want apache
  service { 'httpd':
      ensure => running,
      enable => true
  }

  service { 'iptables':
      ensure => stopped,
      enable => false
  }
}

class configure {
  # symlinking the code from /home/vagrant/public to /var/www/public
  exec { "public simlink":
      command => "/bin/ln -s /home/vagrant/public /var/www/",
      unless  => "/usr/bin/test -L /var/www/",
  }

  file { "/var/www/index.html":
      ensure => "absent"
  }
}

class packages {
  package {
    "httpd":                      ensure => "present"; # Apache
    "subversion":                 ensure => "present"; # Subversion
    "zend-server-ce-php-5.3":     ensure => "present"; # Zend Server (CE)
    "php-5.3-mssql-zend-server":  ensure => "present"; # MSSQL Extension - provided by Zend
  }
}

class repos {
  file { "/etc/yum.repos.d/zend.repo":
    content => "[Zend]
name=Zend Server
baseurl=
enabled=1
gpgcheck=1
gpgkey=

[Zend_noarch]
name=Zend Server - noarch
baseurl=
enabled=1
gpgcheck=1
gpgkey=
"
  }
}

class users {
  group { "puppet":
      ensure => "present",
  }

  user { "vagrant":
      ensure => "present",
  }
}

class {
  users:     stage => "users";
  repos:     stage => "repos";
  packages:  stage => "packages";
  configure: stage => "configure";
  services:  stage => "services";
}


Running Multiple Apps On NodeJs

2013-01-24 17:46:57

So, what I am wanting to do is to be able to run multiple apps on Nodejs. Specifically, I want to be able to use node-static to serve static files and some other app (yet to be determined) to serve up my blog as flat files. I've done this before in Ruby using Rack and I figured I would give Nodejs's Bogart a try!

With a little bit of trial and error, I have come up with the best solution to this: Http-Proxy.

The first thing to look at will be my package.json file.

{
    "name": "bogart-test",
    "description": "Testing Bogart/FlatFile/Static structures",
    "version": "0.1.0",
    "author": "David Duggins",
    "email": "David Duggins",
    "main": "./app",
    "directories": { "lib": "./lib" },
    "dependencies": {
      "node-static": ">=0.6.5",
      "bogart": ">=0.2.0",
      "mustache": "0.3.1-dev",
      "http-proxy": ">=0.0.0"
    }
}
The important stuff to note is node-static, bogart and http-proxy. I have not started to use mustache yet, but that may or may not be the templating engine.

Bogart by itself is fairly straight-forward. It's just as easy to configure as Sinatra is for Ruby or Silex for php. It just handles routes.

var bogart = require('bogart');
var router = bogart.router();

router.get('/', function(req) {
  return bogart.html("hello world");
});

var app = bogart.app();
app.use(bogart.batteries); // A batteries included JSGI stack including streaming request body parsing, session, flash, and much more.
app.use(router); // Our router

app.start();


The above example will simply echo "hello world" on the index of our site. It is set to use the default port 8080. That can be easily changed with app.start('10000', '')

The next part is node-static. I want to be able to serve static files, like an about page. Fairly simple as well:

var static = require('node-static');

//
// Create a node-static server to serve the current directory
//
var file = new(static.Server)('.', { cache: 7200, headers: {'X-Hello':'World!'} });

require('http').createServer(function (request, response) {
    request.addListener('end', function () {
        //
        // Serve files!
        //
        file.serve(request, response, function (err, res) {
            if (err) {
                // An error has occurred
                console.error("> Error serving " + request.url + " - " + err.message);
                response.writeHead(err.status, err.headers);
                response.end();
            } else {
                // The file was served successfully
                console.log("> " + request.url + " - " + res.message);
            }
        });
    });
}).listen(1337);

This code merely pulls any static files and serves them. It requires that you use naming conventions like index.html to make sure that a file is pulled up via '/'. You also can call other pages just like you would on a normal apache server.

The final part of this is configuring Bogart to use Http-proxy so that we can load the static pages only when we want to.

To load http-proxy we need these two lines:

var http = require('http')
, httpProxy = require('http-proxy');

Then to use a proxy, we need this line:

router.get('/', function(req) {
  return bogart.proxy('');
});

Remember that the static app is running on 1337.

Well that is all for now. I will be working on the other parts of this experiment and write more on it later.

Cloud9 IDE

2013-01-09 16:37:57

A few days ago I went ahead and installed Cloud9 ide onto my laptop. I've been using the cloud version for editing this site as well as some other git-hub based sites and I love it. I didn't think that I could love it even more than I did before, but I do!

Locally, I can launch a workspace and start editing local files...and the console gives me complete shell access to my system! I'm working on a project that is being managed with subversion =( and the design team is using sass. So when a change is made in the core style, I can go to the console and update the svn then compile sass with compass and we are good to go!!

Obviously I also use it with all my projects that are git based as well!! It's very nice. It's also very helpful with my New Years Resolution to learn Node. Cloud9 is node based and gives a GREAT environment to develop node in. Also, when I am using a single screen of the laptop, it's a great space saver. C9 loads up in Chrome right next to the site I am working on. I have all the Dev tools handy and can just go back and forth between the tabs!!

So go ahead and give it a try! You can install it as easy as pie...look into the git hub repo!!

2013 is here people

2013-01-07 20:33:17

Well, it's here. 2013. We made it! 2012 was a blast, but now we have another year to make plans for and strive for greatness! I am pleased to say that a majority of my resolutions from last year were met! So, in that spirit, I shall now give you all my goals for 2013:

  • Learn Node.js
  • Finish my python VHost app
  • Write a better mysql gui for *nix
  • Become a Debian Developer
  • Speak even more on php and Open Source
  • Get at least five articles published
  • Get started on my php book
  • Switch from CodeIgniter to Lithium

So there it is! My little wish list for 2013!

I've installed cloud9 ide on my laptop today (if you are wondering, I will blog about it next) and that is a good start on the Node.js goal.

Hark A Vagrant

2012-12-09 22:01:09

Ok, so we start out today with a double reference!! First we are paying homage to the incredibly funny web-comic Hark! A Vagrant. If you are a history nerd certainly check it out!! (Hark A Vagrant). But we are really talking about the really cool vm utility Vagrant. In a nut shell: You download/load a VM packaged as a vagrant box. It is loaded and run in the background using Virtual Box. Once properly setup, you can ssh into it as well as view its web contents in a browser (using port forwarding). It can also be provisioned using Chef or Puppet.
What this means is that you can configure a custom server environment ready for your entire team and they can download it ready to go....or if you want to save on space, you can create a base system and then write a provisioning script that installs EVERYTHING that is needed. It's a pretty sweet little setup! It's also a great way to play around with different languages/environments on the fly. I built my own base Debian Wheezy box and I have been using it to play around with Node.js without compromising my work environment!
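In practice the whole workflow is a handful of commands around a small Vagrantfile. A sketch using the Vagrant 1.x syntax of the day (the box name and puppet paths are just examples):

```ruby
# Vagrantfile -- box name and manifest paths are examples
Vagrant::Config.run do |config|
  config.vm.box = "wheezy-base"            # the packaged base box
  config.vm.forward_port 80, 8080          # view its web contents in a browser
  config.vm.provision :puppet do |puppet|  # or :chef_solo
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
  end
end
```

`vagrant up` boots and provisions the box, and `vagrant package` is the single command that turns a customized VM back into a distributable .box file.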

I must say, that despite what I have heard, it was very easy to get setup and going...but that might also be because I am already using Linux, and so all the tools are running native. I'm not going to go through the step by step here...the site does a good job of that. I thought about linking my Wheezy box, but it is pretty big (700+ mb image). I still might, and post it as an update. Regardless, it is fairly easy to get started...they link to a base Ubuntu box in the instructions.

Once you have practiced deploying a server, the provisioning tool is a lot of fun. I have been using Puppet and it is fairly easy to use. The fun trick was creating my own image. You have to build a base VM in Virtual Box. Make sure that you do not install any kind of GUI/Window Management on the box as it is not needed!! You can actually customize the image before you package it...add users, software, even sites. If you are packaging for a team and size is not a major issue (say, if you are going to distribute it on a network share internally or something) you can forget about provisioning and load everything manually. Otherwise making good use of provisioning can bring the size down. Once you have built the VM, it is a single command line in Vagrant to build the box for you.

Well...that's all that I have time for today! If you are interested in my base Wheezy box let me know in the comments and I will make sure and post it on GitHub or something (that seems to be where the big kids are posting their vagrant boxes...)

New Relic

2012-12-03 19:30:44

I have been using New Relic on my commercial server for a while now. It's a great way to monitor all your customers' sites/apps and make sure that things are running smoothly. I really enjoy the weekly reports and the daily warnings if things are not running right. I recently had a report of high cpu usage on my personal server and it let me know exactly what was causing the problem, so when I ssh'ed in to the server, I knew exactly what I needed to do!! Go and set up a free trial account right now and once you deploy to your server, they will send you a really cool t-shirt!! And don't worry about cost!! They have a basic version for free...or several hosts (AppFog, Rackspace) offer free standard monitoring with your server!! New Relic