Combining and minifying assets on a PHP site with PHP minify

This post was written 2 years ago.
Wed, 06 Feb 2013
loading separate css and js assets
I've been getting carried away with my Camper Van blog over the last couple of weeks, overcompensating for my lack of actual design skills by adding loads of fancy effects such as Supersized full-screen background images, and PhotoSwipe for a responsive photo gallery/lightbox.

Looking at the network tab in Chrome developer tools I was reminded how many HTTP requests are needed to serve all the separate CSS and JavaScript files, and that I needed to optimise things a bit. There are loads of different ways to combine and minify CSS and JavaScript assets - for example using something like LiveReload on the desktop during development, or using a server-side on-the-fly system, e.g. Django Compressor on a Django site. In either case this is usually in conjunction with a CSS pre-processor such as SASS.

As "on the road" is a PHP site, and I haven't got round to setting up SASS for it, I decided to use PHP Minify, which lets you specify groups of assets to be combined and minified, then serves them up on the fly, using caching (filesystem or memcache) to keep it snappy. The set-up is fairly straightforward - the only thing that might trip up a novice is setting up the caching.
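For anyone curious, the group definitions live in Minify's groupsConfig.php file - something along these lines (the file names here are made up for illustration, not my actual assets):

```php
// min/groupsConfig.php - maps a short group key to the files it combines.
// A leading "//" means the path is relative to the document root.
return array(
    'js'  => array('//js/supersized.js', '//js/photoswipe.js', '//js/site.js'),
    'css' => array('//css/screen.css', '//css/gallery.css'),
);
```

The combined, minified group is then requested as /min/?g=js (or ?g=css) in place of all the individual script and link tags.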

optimised assets loading
As a result (after a bit of refactoring to get things working once I'd moved the js from the head to just before the closing body tag), I now have the site loading a single js and a single css file, considerably improving the load time and neatening up the source code. Note that these two screen grabs were taken on different internet connections, so the actual load times shown aren't a good comparison.
Tags: php / css / javascript / site build /

Setting up Apache on OSX Lion

This post was written 3 years ago.
Mon, 19 Dec 2011
For general website/PHP development, I like to have multiple local sites running from my Sites folder using virtualhosts. Up until now I've usually done this with XAMPP, but after hearing that the version of Apache shipped with OSX 10.7 is fairly usable out of the box I thought I'd run with it.

First of all I uncommented a couple of lines from /etc/apache2/httpd.conf.

To enable PHP5:-

LoadModule php5_module libexec/apache2/libphp5.so

To enable the use of virtualhosts:-

Include /private/etc/apache2/extra/httpd-vhosts.conf

I can then create virtualhosts in /private/etc/apache2/extra/httpd-vhosts.conf like so:-

<VirtualHost *:80>
    DocumentRoot "/Users/rickhurst/Sites/sandbox"
    ServerName sandbox.macbook.local
</VirtualHost>

Then to serve a local site from http://sandbox.macbook.local add a line to /etc/hosts:-

127.0.0.1 sandbox.macbook.local

One issue that took me a while to figure out was that I was getting a 403 error when I tried to use AllowOverride All, which lets me use .htaccess files.

I found the answer here. Basically, the options line needs to be set to "Options All":-

<Directory "/Users/rickhurst/Sites/">
    Options All
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

(The above rules are added to a file /private/etc/apache2/users/yourusername.conf). Remember to restart Apache (sudo apachectl restart) after making these changes!
Tags: apache / php / osx /

Object storage and retrieval in PHP part 2 - MongoDB

This post was written 3 years ago.
Mon, 12 Sep 2011
In part one, I talked about how to save and retrieve a PHP object instance using JSON files; in this post I cover the same operation using MongoDB, and some gotchas.

I've only tried this in very limited circumstances, mainly to see how feasible it would be to make eatStatic switch seamlessly between JSON files and Mongo - I naively thought that you would just throw a JSON file at Mongo and have it store it for you, but the examples I've found take the PHP object and convert it magically, and also pass back a PHP array rather than raw JSON.

This post doesn't cover installing Mongo - I skipped between several different examples/tutorials before I got it working, so can't remember exactly how I did it in the end. Once installed though, you can connect to it from PHP like this:-

$m = new Mongo(); // connect
$db = $m->cms; // select a database

For comparison purposes we'll create a simple case study object like in Part 1:-

class case_study {
    var $id;
    var $title;
    var $body_text;
    var $skills = array();
}

$case_study = new case_study;

$case_study->id = 'my/case_study';
$case_study->title = 'My case study';
$case_study->body_text = 'Some text for the case study';
$case_study->skills['css'] = 'CSS';
$case_study->skills['sitebuild'] = 'Site Build';

which gives us:-

case_study Object
(
[id] => my/case_study
[title] => My case study
[body_text] => Some text for the case study
[skills] => Array
(
[css] => CSS
[sitebuild] => Site Build
)

)

To store this in MongoDB, we simply specify a collection to use ("collection" is analogous to a table in a relational database, but you can create them on demand and don't have to specify a schema) and then insert our object:-

$case_studies = $db->case_studies;
$case_studies->insert($case_study);

To get it back we use:-

$case_study = $case_studies->findOne(array('id' => 'my/case_study'));

Passing this to print_r() gives us:-

Array
(
[_id] => MongoId Object
(
[$id] => 4e6de720d2db288b0c000000
)

[id] => my/case_study
[title] => My case study
[body_text] => Some text for the case study
[skills] => Array
(
[css] => CSS
[sitebuild] => Site Build
)

)

Note a couple of things:-

  • It has given us back an Array, instead of an object
  • It has inserted its own unique ID [_id]

We don't need to worry about the extra ID, as we'll be using our own for lookups, so it can be ignored. To convert the array to an object, simply do:-

$case_study = (object) $case_study;

Which takes us back to:-

stdClass Object
(
[_id] => MongoId Object
(
[$id] => 4e6de720d2db288b0c000000
)

[id] => my/case_study
[title] => My case study
[body_text] => Some text for the case study
[skills] => Array
(
[css] => CSS
[sitebuild] => Site Build
)

)
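Wrapping the lookup and the cast up together, a small helper might look like this (the function name is my own invention, not part of eatStatic; findOne() is the standard Mongo PHP driver call shown above):

```php
// Hypothetical helper: fetch a document by our own id field and cast
// the array the driver hands back into a generic object
function fetch_case_study($collection, $id) {
    $doc = $collection->findOne(array('id' => $id));
    if ($doc === null) {
        return null; // no matching document
    }
    return (object) $doc;
}
```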
Tags: php / mongodb / eatStatic /

Object storage and retrieval in PHP part 1 - JSON files

This post was written 3 years ago.
Fri, 09 Sep 2011
I mentioned in my post about eatStatic that I was using JSON files for storage of objects and arrays, but hoped to make it switchable to use MongoDB. This is part one of a two-part post, demonstrating the use of JSON files with json_encode() and json_decode().

Take the following simple class:-

class case_study {
    var $id;
    var $title;
    var $body_text;
    var $skills = array();
}

If we create an instance of this and add some data:-

$case_study = new case_study;
$case_study->id = 'my/case_study';
$case_study->title = 'My case study';
$case_study->body_text = 'Some text for the case study';
$case_study->skills['css'] = 'CSS';
$case_study->skills['sitebuild'] = 'Site Build';

and pass it to print_r(), we get this:-

case_study Object
(
[id] => my/case_study
[title] => My case study
[body_text] => Some text for the case study
[skills] => Array
(
[css] => CSS
[sitebuild] => Site Build
)

)

If we now encode it as JSON:-

$json_str = json_encode($case_study);

At this point, we can save the file to the filesystem - I tend to create a unique ID based on the current date/time and a random string. I won't detail it all here, but you can see some of the helper functions I use in eatStaticStorage.class.php and eatStatic.class.php. One thing worth noting: sometimes when reading a .json file back in from the filesystem, I hit a bug where the last three characters were omitted - I'm not sure what was causing this, but it was fixed by changing my read_file() method to use file_get_contents() instead of fread().
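The save and load steps boil down to something like this (a minimal sketch - the function names are mine, not eatStatic's actual API):

```php
// Encode an object and write it out as a .json file
function save_object($path, $obj) {
    file_put_contents($path, json_encode($obj));
}

// Read it back and decode it - file_get_contents() reads the whole
// file in one go, which avoided the truncated-read bug I hit with fread()
function load_object($path) {
    return json_decode(file_get_contents($path));
}
```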

Once you have retrieved your JSON string you can decode it again:-

$case_study = json_decode($json_str);

and we end up with this:-

stdClass Object
(
[id] => my/case_study
[title] => My case study
[body_text] => Some text for the case study
[skills] => stdClass Object
(
[css] => CSS
[sitebuild] => Site Build
)

)

Notice that the array "skills" is now an object. We can set it back to an array using get_object_vars():-

$case_study->skills = get_object_vars($case_study->skills);

NB: this only happens for key => value arrays; if it was just a simple array, e.g. array('css','sitebuild'), we wouldn't need to pass it through get_object_vars(), as it would be maintained as an array.
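As an aside, json_decode() accepts a second argument that forces associative arrays all the way down, which sidesteps the get_object_vars() step entirely if you'd rather work with arrays:

```php
// Passing true as the second argument makes json_decode() return
// nested associative arrays instead of stdClass objects
$data = json_decode('{"skills":{"css":"CSS","sitebuild":"Site Build"}}', true);
// $data['skills'] is now a plain array - no get_object_vars() needed
```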

This gets us back to where we started:-

stdClass Object
(
[id] => my/case_study
[title] => My case study
[body_text] => Some text for the case study
[skills] => Array
(
[css] => CSS
[sitebuild] => Site Build
)

)

sort of - we now have an object instance with all the attributes of the original, but it doesn't know it is a case_study object. In fact it isn't a case_study instance at all - we would have to create a new instance of case_study and copy the attributes across if we needed the real thing, but if you just want the data, it can be used as-is in most cases.
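For the rare case where the real class matters, the copy-across is straightforward - something like this sketch (to_case_study() is my own name, not part of eatStatic):

```php
// The simple class from earlier in the post
class case_study {
    var $id;
    var $title;
    var $body_text;
    var $skills = array();
}

// Rehydrate a real case_study from the decoded stdClass by copying
// each attribute onto a fresh instance
function to_case_study($data) {
    $cs = new case_study;
    foreach (get_object_vars($data) as $key => $value) {
        $cs->$key = $value;
    }
    return $cs;
}
```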

The above example is very simple, but things can get quite complex when your object contains arrays of objects, which in turn may contain arrays (and arrays of objects). The initially cheap and convenient trick of encoding an object instance and saving it, then retrieving, decoding and using it, can then get quite hairy - but it's still less effort than splitting it out into separate objects and maintaining them in several different relational database tables.

In part two I'll talk about how to use MongoDB to save and retrieve object instances in PHP.
Tags: php / eatStatic / JSON /

Introducing eatStatic blog engine

This post was written 3 years ago.
Sun, 04 Sep 2011
creating a new blog post in textmate
Recently I ported this blog from an ancient version of WordPress to my own simple blog engine, which uses my PHP5 micro-framework, "eatStatic". I use the phrase "blog engine" rather than blog software, as it isn't really packaged up yet as something I would describe as software - it's more just a collection of classes and templates that can be used to keep a blog.

The bulk of the code was written last year in the space of a couple of hours while sitting in a garage waiting for my car to be fixed - I was about to go on a long road trip and wanted a blogging solution that let me create blog posts and organise photos offline and then conveniently sync it to the live site when I had an internet connection. The result was my "on the road" blog about mobile working.

The thing that sets this apart from other blog engines (and the origin of the name "eatStatic", along with a nod to a 90's techno act), is that instead of using a relational database to store content, it uses simple text files for blog posts, and cached JSON files to store collections of data (e.g. post lists, tag references etc.). I have it set up to run with Dropbox so that I compose my posts in TextMate and they are synced to a Dropbox folder on the webserver. You don't have to use Dropbox though - you can use any technique you like to upload the data files to the server - for "on the road" I use Subversion, which means I also get versioning of blog content. Draft posts are composed in a drafts folder and moved into the main posts folder to push them live. There is currently no admin area on the site, though I might add one later.

The published date and URI for each post are taken from the text file name - I've adapted it for this blog to use the same URL scheme as WordPress, to avoid link rot on legacy content. Some people asked why I don't just use the title and created/modified date of the text file to make it even simpler; the answer is that I wanted finer control, and the option to specify the publish date - using created/modified would have been a disaster for the content I imported from WordPress. Also, by naming each file starting with YYYY-MM-DD, the post files are easier to sort/find in the posts folder, both visually/manually and in code. You can use HTML in the blog post, and additionally line breaks are converted to br tags, other than immediately after a closing tag. You can add tags and metadata at the end of the text file.
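So the data folder ends up looking something like this (the exact file names are invented here, purely to illustrate the YYYY-MM-DD convention):

```
data/
    posts/
        2011-09-04_introducing-eatstatic-blog-engine.txt
        2011-09-09_object-storage-part-1.txt
    drafts/
        half-written-idea.txt
```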

I've also got a simple thumbnail gallery which can be included in a post (see below) by uploading a folder full of full-size images with the same name as the post. The idea behind this is that a set of jpeg/png images can be imported from a camera and automatically pushed to the server by Dropbox. A caching script creates the thumbnails and web-size versions on demand, which are saved to the filesystem for efficiency on subsequent requests. I considered setting it up so that each post had its own folder, which could then contain images, but the blog engine was mostly written with the idea of quickly creating posts by opening TextMate/Emacs, writing and saving, rather than faffing around creating folders.

I made the decision not to build in any commenting functionality - the anti-spam/moderation features needed are too much of a pain to deal with - so I've archived the old WordPress comments into the post body and integrated Disqus instead.

As I mentioned before, I've been using a previous version of eatStatic successfully for my "on the road" blog, but I wanted to see how it coped with hundreds of posts rather than just a handful - it seems to be doing fine, coping with over 600 posts, but I'm sure there is room for improvement. I've also been investigating making the JSON read/write switchable to use MongoDB so that it could potentially be very scalable - I've encountered a few inconsistencies in the way that PHP's json_decode() and MongoDB object retrieval work, but nothing that can't be worked around - expect a blog post on that later!

I don't expect eatStatic blog to be a WordPress killer, but it may appeal to techie types who want a lightweight PHP5 blog engine, maybe to plug into an existing site, and people who want to compose posts in TextMate/Emacs (or any other code editor) rather than in a web form. If you are interested in trying it, keep an eye on the github repo, as I'll commit an example of how this blog is formed once I've ironed out the more embarrassing bugs! I may add a simple admin area at a later date, to allow publishing entirely via the web, and I think it would also benefit from a "post by email" feature for convenient moblogging, but don't hold your breath!

When I was importing content (I actually wrote a python script to parse a WordPress XML export file and create the text files), I found it quite fitting that the first ever post on this blog, nearly ten years ago, was made on a home-brewed ASP blog engine which used XML for data storage. I think before then I kept a static HTML blog of sorts on a Freeserve site, but unfortunately haven't got a copy of that for completeness.

Lastly, whether or not you want to set up an eatStatic-based blog, if you aren't already using Dropbox, it really is excellent - so why not sign up for a free 2GB account using my referral URL, so I can get some more free space? Even though I have a paid Dropbox account, I use a second free account to mount on my server for automated site/database backups and for this blog, and it keeps filling up!

A few gotchas when your Drupal site is being deployed behind caching servers and proxy_pass

This post was written 4 years ago.
Fri, 04 Mar 2011
I recently launched a Drupal-based site that forms part of the website for a global company (being a freelancer working through a design agency with an NDA, I can't talk about it any more than that, or stick it on my portfolio, unfortunately!). I thought I'd make a few notes about some of the issues I had to overcome.

The site itself is hosted on a dedicated VPS, but served to the world through Akamai caching servers, which means that everything is cached by default. Therefore in this set-up the CMS is only available at a different URL, where caching is bypassed. Gotcha no.1 is that cookies set and read via PHP do not work in this scenario*. Fortunately JavaScript cookies can still be used.

In addition to the caching, the site is served up as part of a much bigger site /deep/down/in/the/url/structure, so proxy_pass is used (before caching) to rewrite the paths. Gotcha no.2 is that base_path() in Drupal picks up the path of the origin server, so I had to add a condition like this in my settings.php:-

if (isset($_SERVER['HTTP_X_FORWARDED_HOST']) && $_SERVER['HTTP_X_FORWARDED_HOST'] != '') {
    $base_url = 'http://' . $_SERVER['HTTP_X_FORWARDED_HOST'] . '/path/to/my/proxied/site';
}

The clue to gotcha no.3 is in that last example. $_SERVER['HTTP_HOST'] reports the host of the origin server, rather than the public host, so if it is used anywhere in your code it may cause issues. I ended up adding a function getHost() that returns the appropriate host, depending on where the site is being viewed:-

function getHost() {
    $host = $_SERVER['HTTP_HOST'];
    if (isset($_SERVER['HTTP_X_FORWARDED_HOST'])) {
        $host = $_SERVER['HTTP_X_FORWARDED_HOST'];
    }
    return $host;
}

Gotcha no.4 was image paths in optimised CSS - both the Drupal-provided CSS caching and some external stuff I had using Minify rewrote the image paths to use those from the origin server. This meant that CSS background images worked in the non-cached editing environment but not on live. I haven't come up with a solution for that one yet, other than to leave some of the stylesheets non-minified.

* maybe there is a solution to this, but assume in this case there is little or no scope to request server config changes.
Tags: drupal / php /

Plone Conference 2010 Day 1

This post was written 4 years ago.
Wed, 27 Oct 2010
quality schwag from Plone Conference 2010
Just back home after day one of Plone Conference 2010 with my mind buzzing, so I thought it would be a good time to write up some of my thoughts and notes. It was really difficult to choose between the talks on offer across the three different tracks, but here are some thoughts on the ones I attended.

Keynote by Alexander Limi and Alan Runyan


Two main themes here - ubiquity/availability and designer friendliness.

To make Plone more mainstream it needs to be available to non-technical end users through the same means that other systems are already available - namely being able to deploy easily on cheap hosting, specifically the one-click installers on shared hosting in cPanel and similar. This would allow users to easily evaluate Plone for their needs in the same way that they already can with WordPress, Drupal and Joomla - apparently a new Joomla instance is created about every two minutes. I must stop Alan Runyan and see if he has thought about the Microsoft Web Platform Installer - nowadays this includes the option to install WordPress, Drupal, MODX and loads of other systems, including downloading and installing dependencies. It would be great if Plone was in that list.

One of the aims of Plone 5 is to make it more designer friendly. I think this is really important - even though since the release of Plone 4 I've started using Plone again for intranets and extranets (mainly straight out of the box with a few minor cosmetic tweaks), I currently still use something like WordPress, or a home-rolled CMS, for website builds. That is now going to change - the theming story is being completely re-written by the introduction of Deliverance/XDV/Diazo (already available - more on that later) and Deco (TTW layout and content editing). The aim is to make Plone appeal to designers as something that helps, not hinders, them.

Quote of the talk has to be from Limi - "Plone doesn't suck, because the developers don't hate the core technology" (or something like that) - in reference to the revelation that many Drupal/WordPress/Joomla developers admit they actually hate PHP, whereas Plone developers love Python.

Deco: new editing interface for plone 5


The next talk I attended was Rob Gietema's demo of Deco. This is looking really good, although I'm a little bit skeptical of drag-and-drop and in-place editing (I like front-end based editing, but prefer lightboxed modal editing to in-place), mainly because I've seen layouts explode and page elements disappear, or refuse to drop in the correct place, on similar systems in the past. However, I haven't actually tried this one yet - maybe I'm just clumsy! I think in general designers and content editors are going to love it.

LDAP and Active Directory integration


I attended Clayton Parker's talk on LDAP and Active Directory integration - can't say I absorbed much, but I'm sure I'll be asked to do this one day, so it's good to know that this is tried and tested and the tools are already there.

Easier and faster Plone theming with Deliverance and xdv


Nate Aune gave us an overview of Deliverance. I've known about Deliverance for ages, but the penny dropped for me today about how useful this is. The basic principle is this - Deliverance acts as a proxy, transparently taking the HTML output of a website and merging it with HTML from a theme, according to a simple set of rules. In the case of Plone, this means you can create a theme in static HTML and have content from a default-theme Plone site displayed wrapped up in the static HTML. Simple rules can be applied, e.g. "take the news portlet from the Plone site, drop the header and footer and all the images, and display it in the element with an id of "recent-news" from my HTML theme". Magic!

Nate quoted one example where the HTML theme is stored in a Dropbox folder which the client has access to, to make tweaks and changes. I can see front-end developers and designers loving this.

There was much discussion at the end over which technology should be used for this - XDV is a fork of an earlier version of Deliverance, with slightly different functionality. XDV, which is to be renamed Diazo, will be the theming engine for Plone 5. With that in mind, I'll concentrate my efforts on Diazo. I'm excited by this for non-Plone reasons - a majority of my work seems to involve integrating technologies that don't belong together - this will really help.

Design and development with Dexterity and convention-over-configuration


Martin Aspeli gave a talk on Dexterity - the (eventual) replacement for Archetypes. This is already available, but not mature yet. The talk was mainly conceptual rather than code-led, focussing on best practice for designing your site or application - when it is suitable to create a content type, and when you might be better off creating a form, or using a relational database. Best quote: "code is like a plastic bag" (reduce, reuse, recycle). Write less code.

Laying Pipe with Transmogrifier


Another talk from Clayton Parker - Transmogrifier is a system to package up migrations of content from other systems. My thoughts on this were that it looked like hard work for a one-off import (usually I'd write a one-off python script for something like this), but creating packages would benefit the Plone community, e.g. if there were packages available covering migrations from a standard WordPress, Drupal or Joomla install. I suppose this could also be used to import content from older instances of Plone, where the upgrade path is broken.

Multilingual sites - caveats and tips


Sasha Vincic talked about strategies and gotchas for multilingual site builds. Even though Plone has tools for this, there are common scenarios, such as the "missing page" scenario, where a translation of a site may not have the same number of pages as the base translation. He also covered common issues such as escaped HTML being translated by third parties, and being delivered content where HTML attributes have been translated, thereby breaking the HTML.

Guest Keynote: Challenging Business


This was an inspirational treat for the end of day one - Richard Noble is a fantastic speaker, and after a day of CMS talk it was great to hear his story of the challenges of his past world land speed record achievements, and the current one - the Bloodhound SSC project. As well as building insane rocket-powered cars (the current one has an F1 engine on board just to drive the fuel pump!), his goal is to inspire children and young people to become engineers, as there is an impending massive shortage of engineers in training. I was also interested to hear that there will be no patents on the technology developed for the new car - the advancements will be made available for anyone in the engineering industry to build on - sound familiar?


archived comments
Thanks for the roundup Rick! Looking forward to the rest.

Mike 2010-10-27 21:29:18
Good, clear writeup. I could not make it to the conference this year. It's nice to be able to follow it a bit this way. Thanks, Rick!

Maurits van Rees 2010-10-27 21:42:13
Thanks for the comments guys - videos for all the talks will eventually be online, so hopefully everyone who couldn't make it won't miss out :)

Rick 2010-10-27 21:58:26
Thanks for the post!

I agree with you re: transmogrifier which is (partially) why I wrote http://pypi.python.org/pypi/parse2plone

Keep the blogs coming :-)

Alex Clark 2010-10-28 00:50:01

More php framework musings

This post was written 5 years ago.
Sun, 21 Feb 2010
arty egg sandwich and latte
Just like CMSs, I'm incredibly indecisive when it comes to frameworks. I spent years dragging my feet over deciding which PHP framework I would "standardise" on, before deciding on Zend, then deciding against it (too big), then getting as far as using Cake for a couple of things (it's pretty good, but felt a bit too rigid, and call me ridiculous, but I just hate the name). I've now gone full circle and started using CodeIgniter (the first one I tried a few years back) for a couple of main reasons - firstly it is very light and loosely coupled; secondly I'm already using it on toooldtoskate.com, which runs Sweetcron. I'm also, in tandem, using a pure PHP5 port of CodeIgniter called Kohana on another project.

I'm hoping the techniques I learn for these will be mostly interchangeable, so juggling the two won't be counterproductive. There was no big deciding factor really - I had just started to realise that the ever-growing collection of PHP code I had been using could be considered a framework of sorts, but was lacking a number of features I would get for free if I adopted an established framework. A friend suggested I steal bits of other frameworks to add to my code base, and another friend mentioned Kohana, just as I came to a key point in my longest-running project (referred to as "frankenapp", due to the mixture of technologies in use!). I started looking at how I could impose a standard structure for future code and make it more modular, and before I knew it I had the front end being served up by Kohana, and a library acting as a simple bridge to legacy code. At the very least, adopting Kohana will encourage me to stick to a structure rather than making it up as I go along!
This post was written 5 years ago, which in internet time is really, really old. This means that what is written above, and the links contained within, may now be obsolete, inaccurate or wildly out of context, so please bear that in mind :)
Tags: php / cake / codeigniter / frameworks /

Aardman Timmy Time

This post was written 5 years ago.
Wed, 23 Dec 2009
aardman timmy time
Yesterday www.timmytime.tv was finally launched. I worked on this project along with the Aardman digital team, building a bespoke PHP CMS and putting the site together to accommodate the mostly-Flash content with an HTML alternative (where possible). XML was used to share the same data between Flash and PHP, making it easier to maintain.

archived comments
Congratulations for this work Rick. It's a really nice one!

nicolas 2009-12-24 09:32:26
Tags: css / php / portfolio /

Deploying and maintaining a website using svn

This post was written 5 years ago.
Thu, 04 Jun 2009
I use svn (Subversion code version control) in a very simple way, but it has become an essential tool for how I build, deploy and manage websites. If you aren't using some sort of source control, you should be. If you are, but only as a source-code backup, you might be interested to see one of the ways it can be used as an integral part of the workflow for building, deploying and managing website updates. I'm not going to go into svn commands - this is more of an overview of the process.

This is my typical workflow, based on a PHP/MySQL website build, but the process is relevant to most technologies:-

Create a new project in your svn repo
This basically consists of making a new directory in the root of the repo. At this stage, an experienced svn user might suggest you create sub-directories for "trunk" and "branches", but I don't bother (yet). I have my svn repo hosted on a web server so it can be accessed over https by any machine with web access and an svn client installed.

Check it out to your local machine
I always develop on my local machine - I will jump through hoops to make sure I have a portable, standalone development environment for whatever I'm working on. It's ok to develop on a remote server, as long as you have your own "sandbox" to work in, where you won't be treading on anyone's toes or vice versa. I always set up local hostnames and virtual hosts on my machine, e.g. myproject.macbook.local, to allow me to easily work on multiple sites on the same port no. on my laptop. Check the project out and set the working copy folder as the root of your local website. When I've subcontracted work in the past, I have insisted that freelancers set up a local development environment and work with svn, even if we aren't collaborating - there's no way that I'm going to accept the work as a zip file over email! Working this way means that I can periodically check out the project to my machine to see how they are getting on, rather than having to ask them.

Build your site
As you add files to the site, add and commit them to svn. If you need to rename or delete a file, use svn to do it, to stop it getting out of sync. If you are collaborating with someone else, do an svn update and commit frequently (in that order). This way if you get any conflicts you can resolve them more easily than if you leave it till the end of the day. I find in that scenario it often becomes a "race" to commit stuff frequently, so that the other person is more likely to have to resolve conflicts if any turn up!
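The add/commit/update rhythm described above is only a handful of commands; a typical session might look like this (the file names are placeholders):

```shell
svn update                          # pull down collaborators' changes first
svn add templates/new-page.php      # tell svn about any new files
svn delete old-page.php             # rename/delete via svn so it stays in sync
svn commit -m "add new page, remove old one"
```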

Contextual config file
I want to be able to deploy an identical code base to the live (and test) server environments, so I use some "if" statements that look at the host name to selectively load variables such as database config and anything else site/server specific, depending on which environment the site is running on.
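In practice that looks something like this (the hostnames, function name and values here are placeholders, not my real settings):

```php
// Pick environment-specific settings based on the host name the
// site is being served from
function get_site_config($host) {
    if ($host == 'myproject.macbook.local') {
        // local development environment
        return array('db_name' => 'myproject_dev', 'db_user' => 'root');
    }
    // live/test server
    return array('db_name' => 'myproject_live', 'db_user' => 'myproject');
}

// In config.php you would pass in $_SERVER['HTTP_HOST']
$config = get_site_config('myproject.macbook.local');
```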

Check out the project on the live/test server
Firstly, this scenario will only work for you if you have terminal/ssh/remote desktop access to your server. If you only have ftp or sftp available, this technique won't work, and you may have to resort to uploading the files individually or in bulk - a massive pain compared to being able to run an svn checkout/export/update on the live server.

There is also some debate about whether deploying or managing the live codebase as an svn working copy is a good idea. I think it is, for the following reasons:
  1. Occasionally you may need to debug something on the live/test server, e.g. if something behaves differently from your local environment, or some other edge case. In that scenario, if you fix the bug on the live server, you can commit it back to the repo and do an update on your local copy.
  2. If you are managing user-contributed uploads and want to keep them in svn, you can log in to the server occasionally and add/commit them back to the repo.
  3. Keeping an eye on things. A while back I spotted that an old WordPress site I was hosting had been hacked. Simply by doing an svn stat in the root of the site, I could see that some files had been modified. That's an edge case, but it's a useful way of getting quick feedback if anything has changed on the live server that you weren't expecting.
One thing you will want to do if you manage the live site as a working copy is make sure your web server is configured not to serve up the .svn folders, or any other folders not part of the live site.
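With Apache, one way to block those folders is a RedirectMatch rule (from mod_alias) in the virtual host config:

```apache
# Return a 404 for any request touching an .svn directory,
# rather than revealing that the path exists
RedirectMatch 404 /\.svn(/|$)
```

The same idea applies to any other directories that live in the working copy but shouldn't be public, such as the database dump folder mentioned below.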

Database changes
I always keep a mysql dump in the svn repo. This may or may not include data, depending on the size of the database and the need to keep config data in the database. Once again, this should be in a directory that is not served up by your webserver. For a first deployment, I will use a checkout of the mysql dump on the live server to create and populate the database. Thereafter, I use numbered migration scripts to manage any (structural) changes to the database, such as adding or altering tables. These are added/committed to svn, then checked out on the live server and run to apply the changes to the live site.
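As an illustration, a very small runner for numbered migration scripts might look like the sketch below. The file layout (001_create_vans.sql etc.), the .migration_version tracking file and the MYSQL_CMD variable are my assumptions, not details from the post; MYSQL_CMD defaults to cat, so running it as-is is a harmless dry run that just prints the SQL.

```bash
#!/bin/bash
# Sketch: apply numbered migration scripts in order, recording the number of
# the last one applied in a plain text file so each script runs exactly once.
# On a real server MYSQL_CMD would point at a wrapper script that invokes
# mysql with the right credentials and database name.
set -u

run_migrations() {
    local dir=${1:-migrations}
    local version_file=${2:-.migration_version}
    local mysql_cmd=${MYSQL_CMD:-cat}
    local last=0 f num
    if [ -f "$version_file" ]; then
        last=$(cat "$version_file")
    fi
    # assumes migration filenames contain no spaces
    for f in $(ls "$dir"/[0-9]*.sql 2>/dev/null | sort); do
        num=$((10#$(basename "$f" | cut -d_ -f1)))  # strip leading zeros
        if (( num > last )); then
            echo "applying $f"
            "$mysql_cmd" < "$f"
            echo "$num" > "$version_file"
        fi
    done
}

run_migrations "$@"
```

Committing the scripts and the version file convention to svn means every environment (each developer's machine, test, live) can be brought up to date with the same checkout-and-run routine.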

Getting started with svn
If you are reading this and thinking "sounds good, but I've never used svn", a good place to start would be to sign up for a free hosted svn repo with someone like Beanstalk, and then try one of the many GUI svn clients. RapidSVN is a good free one for most platforms, but there are tons out there. It will also help to learn the basic commands - essential if you only have ssh access to your server. I said I wouldn't go into commands here, but to put things in perspective, I get by with very few: svn co (check out a project), svn add (add file(s) to the project), svn commit (commit changes to file(s)), svn update (update the working copy with the latest changes), svn resolved (mark a conflict as resolved), svn mv (rename a file and tell svn to sync on the next commit), svn rm (delete a file and tell svn to sync on the next commit) and finally svn revert (throw away your local changes to a file and restore the last checked-out version).

There's more (but I don't know it)
I'm fully aware that there are more sophisticated ways to use svn to manage and deploy projects, involving branches, merging, switching, externals and a whole load of other clever stuff that I have yet to become familiar with. However, a simple project with checkouts, commits and updates on the local and live servers works well for me, and is a good starting point to get stuck in and start to feel comfortable with it. Also, apparently all the cool kids have abandoned svn for git, which works in a similar but different way, so that might be worth a look too, if you aren't already familiar with svn.

You'll never go back
I've been working this way for a few years now, but occasionally I'll do some work for a company that doesn't or won't use version control, or doesn't use it as an integral part of the workflow (i.e. they just use it as a backup). Without version control at the heart of the workflow, I have often had to resort to pen and paper to remember which files have changed so I know what to upload; I've accidentally overwritten new code with old versions, and mysteriously lost days of work when someone else did an update from their non-versioned working copy. The learning curve is worth it, and so is getting your head round how to resolve conflicts (usually the first thing that scares people away from svn).

archived comments
Nice post Rick.. I'm interested to find out a bit more about how you manage which files are served up by your webserver.

That's the bit I'm a little lost on at the moment.. I use SVN a lot more now thanks to VersionsApp (http://versionsapp.com)

Nik 2009-06-04 10:20:09
I've been using SVN for a while now, but you've pointed out some uses/applications that should be pretty useful - thanks. I can recommend tortoise svn as a client.

I'm sure there is more that can be done with DB version control, but as you say there is a ton to learn.

Joel Hughes 2009-06-04 10:21:45
@Nik in my virtual host config I have something like the following:-

RedirectMatch 404 /\.svn(/|$)

This stops svn stuff being served up. (I did write some other stuff to help with hiding other directories, but comment formatting messed it up, so I've removed it!)

@Joel +1 for tortoise svn on windows. I like the feature that you can commit a directory and it gives you the option to add any unadded files, and select which other files should/shouldn't be committed at the same time. A nice time saver.

Rick 2009-06-04 10:58:16
Weighing in on the whole "working copy" debate (which you probably don't want to get into!)... I really have never heard even a slightly valid argument for using an export rather than a checkout.

One thing I would say, though, is as well as denying access to .svn (a must), also make sure that your SVN credentials (passwords, and so forth) are NOT stored on the webserver. When you do an update/checkout, your SVN username and password will probably be stored (by default) in ~/.subversion/auth.

Instead, you can set the "store-passwords = no" and "store-auth-creds = no" in your ~/.subversion/config file. This is a bit of a chore as you have to type the password each time you update.

The alternative is to make sure the SVN user is read-only, but even that gives a hacker access to historical (and future) release info.

Tom 2009-06-04 11:10:11
Good post Rick. I'm going to point my SVN shy housemate at it later.

Not wanting to sound condescending, as I sometimes run my own personal projects as you describe, i.e. just using the trunk, but when working on bigger, more complicated sites you should really consider creating a tag (just an svn cp of trunk) and deploying that, so you can roll back in case of error.

Creating an SQL patch to the tag and one from the tag (i.e. one to add the new column and one to remove it) helps when deploying sites that need to stay up.

Andy Gale 2009-06-08 13:39:04
@Andy - yep absolutely - I just haven't got round to trying it out so that I feel comfortable with it. Do you have a way of running sql patches automatically?

Rick 2009-06-08 13:55:35
Not yet... sometimes the order the patches are run in matters... but it would be fairly easy to automate if you worked out a naming convention. That's my plan for our next major project, as it would help to auto-update each development environment as well as the live site.

Andy Gale 2009-06-10 11:44:17
This post was written 5 years ago, which in internet time is really, really old. This means that what is written above, and the links contained within, may now be obsolete, inaccurate or wildly out of context, so please bear that in mind :)
Tags: php / svn / sitebuilding /