Rick Hurst – Web Developer in Bristol, UK


Category: php

My current tools and tech in 2019

I try to avoid jumping on every new trend that comes along in the web world, especially over the last few years, where things seem to have moved very fast, with JavaScript frameworks becoming obsolete before I’ve even got round to trying them. But I also need to keep my skills and tech relevant, so here’s what I’m using as of late summer 2019.

Editor/IDE

I’ve been consistently using Sublime Text (version 2, because I never got round to upgrading to 3) for years as my code editor, though I’ve never really taken full advantage of the extensions and customisations available. It just works, and I saw no real need to change. One thing it has never done well is working over SSH, or rather from a mounted sshfs folder on a Mac. I have an occasional need to do this, when using the likes of nano in a terminal doesn’t feel good enough, and it’s one thing I missed from Coda, when I used that for a while, years ago.

Then, after spotting a tweet from an old colleague mentioning remote SSH editing in VS Code (via the Remote – SSH extension), I thought I’d take a look at it. Having used old-skool Microsoft Visual Studio in the past, I was surprised to see that Microsoft had built and released such a good free editor, with builds for Windows, Mac and even Linux. It’s already become my default editor – the IntelliSense and the built-in terminal and Git functionality are really good. There’s a lot to explore and a ton of extensions. It also looks good.

Languages and frameworks

Most of my work is currently PHP, and I no longer feel the need to apologise for that! PHP 7 is mature, stable and fast. I work on a couple of legacy PHP projects: one uses a combination of bespoke MVC code, an old fork of CodeIgniter (currently being refactored out) and the RedBean ORM; the other uses Zend Framework, but is being rebuilt as a de-coupled app with Lumen (the micro-framework by Laravel) on the back end and Vue.js on the front end.

I haven’t done any web projects with Python recently (though I’m keen to try Pyramid), but I have been using it for a custom FTP server, using pyftpdlib (yes, FTP – for an old-skool EDI project; an old but still very widely used technology where XML and FTP are used to exchange data, probably predating the invention of JSON and REST). I’ve also been using Python for a Raspberry Pi datalogger project, reading serial data from microcontrollers via USB and relaying it to a REST API.

JavaScript-wise, I still use bits and pieces of jQuery to get stuff done while I learn Vue.js properly, which is happening as part of the web app rebuild mentioned above. We settled on Vue.js instead of React because Vue seems to be a popular choice in the Lumen/Laravel ecosystem.

For CSS, I’ve settled for now on Sass and Gulp. Sass is a keeper; I’m just using Gulp until I find something I like better. I tend to use Susy for grid/layout.

Ansible is my go-to tool for DevOps – I use it to provision web servers and for formerly manual tasks such as website migrations and moving content between dev, staging and live on WordPress sites. For local development of any sort I almost always use Vagrant now, with an Ansible provisioning script, which gives me a very portable set-up, provided I have enough RAM to keep multiple virtual machines running as I juggle projects.

During my last foray back into a day job (back in 2016), working at an agency, I got a fair amount of experience using WordPress as a CMS, including on some fairly complex projects, relying heavily on the Advanced Custom Fields (ACF) plugin. Historically I haven’t always been a fan of WordPress, but it proved to be an excellent tool when used with ACF, making the creation of custom page types incredibly quick and easy compared to other content management systems I’ve used, and even compared to using frameworks, which would be my usual preference for custom content. Being able to create custom fields, including nested repeater fields, via the WordPress admin, with the field definitions saved to the filesystem and versioned in Git for seamless syncing to production, is a game changer!
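This is ACF’s “local JSON” feature, pointed at a version-controlled folder with a couple of filters in the theme. A minimal sketch (the folder and theme set-up here are just examples, not the code from those projects):-

// hypothetical snippet for a theme's functions.php - assumes the ACF plugin is active
// save field group definitions as JSON into a folder that lives in the Git repo...
add_filter('acf/settings/save_json', function ($path) {
    return get_stylesheet_directory() . '/acf-json';
});

// ...and load them back from the same folder, so field definitions travel with
// the theme code between dev, staging and production
add_filter('acf/settings/load_json', function ($paths) {
    $paths[] = get_stylesheet_directory() . '/acf-json';
    return $paths;
});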


Using Vagrant for local LAMP development

I’ve always used a local version of Apache for PHP dev – either the version provided with OS X, or something like XAMPP or MAMP. On a recent freelance contract I was introduced to using Vagrant to spin up a tailored virtual machine with specific versions of PHP, MySQL and any other relevant dependencies.

Vagrant will download a base virtual machine and then run a provisioning script to install dependencies. It also sets up file sharing from your host machine and port forwarding, so you can edit locally in your normal editor and view the site in a web browser as if you were running a local Apache instance.

The advantage of this is that once you are up and running, the Vagrant configuration can be stored in the Git repo, so that other developers can quickly start developing against exactly the same dev environment.

A typical Vagrantfile looks like this:-

Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"

  # SSH username/password
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"

  # define the "lamp" machine
  config.vm.define "lamp" do |lamp|
    lamp.vm.hostname = "lamp"                                 # set the hostname
    lamp.vm.network "private_network", ip: "192.168.205.10"   # set the machine's IP address
    lamp.vm.synced_folder "siteroot", "/var/www", owner: "root", group: "root"
    # lamp.vm.provision :shell, path: "provision.sh"          # provision with provision.sh
  end

  config.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true
end

Running “vagrant up” in a directory containing this file will spin up an Ubuntu VM, serving files from the “siteroot” directory. With the provisioning line uncommented, it will also run provision.sh on the VM, which can be used to install PHP, MySQL etc.

The getting started guide on the Vagrant site is the best way to get up and running if you are interested.

Combining and minifying assets on a PHP site with PHP minify

loading separate css and js assets

I’ve been getting carried away with my Camper Van blog over the last couple of weeks, overcompensating for my lack of actual design skills by adding loads of fancy effects, such as Supersized full-screen background images and PhotoSwipe for a responsive photo gallery/lightbox.

Looking at the network tab in Chrome developer tools, I was reminded how many HTTP requests are needed to serve all the separate CSS and JavaScript files, and that I needed to optimise things a bit. There are loads of different ways to combine and minify CSS and JavaScript assets – for example using something like LiveReload on the desktop during development, or using a server-side on-the-fly system such as Django Compressor on a Django site. In either case this is usually done in conjunction with a CSS pre-processor such as Sass.

As “on the road” is a PHP site, and I haven’t got round to setting up Sass for it, I decided to use the PHP Minify library, which lets you specify groups of assets to be combined and minified (see the sketch below), then serves them up on the fly, using caching (filesystem or memcache) to keep it snappy. The set-up is fairly straightforward; the only thing that might trip up a novice is setting up the caching.
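The asset groups are defined in a config file that returns an array. A rough sketch (the file names here are made up for the example, not the actual ones from the site):-

// min/groupsConfig.php - each key becomes a group name, and paths starting
// with // are relative to the document root
return array(
    'css' => array(
        '//css/style.css',
        '//css/supersized.css',
        '//css/photoswipe.css',
    ),
    'js' => array(
        '//js/jquery.min.js',
        '//js/supersized.js',
        '//js/photoswipe.js',
    ),
);

Each group can then be pulled in with a single script or link tag pointing at a URL along the lines of /min/?g=js.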

optimised asset loading

As a result (after a bit of refactoring to get things working after moving the JS from the head to just before the closing body tag), the site now loads a single JS file and a single CSS file, considerably improving the load time and neatening up the source code. Note that these two screen grabs were taken on different internet connections, so the actual load times shown aren’t a fair comparison.

Setting up Apache on OS X Lion

For general website/PHP development, I like to have multiple local sites running from my Sites folder using virtual hosts. Up until now I’ve usually done this with XAMPP, but after hearing that the version of Apache shipped with OS X 10.7 is fairly usable out of the box, I thought I’d run with it.

First of all, I uncommented a couple of lines in /etc/apache2/httpd.conf.

To enable PHP5:-


LoadModule php5_module libexec/apache2/libphp5.so


To enable the use of virtualhosts:-


Include /private/etc/apache2/extra/httpd-vhosts.conf


I can then create virtualhosts in /private/etc/apache2/extra/httpd-vhosts.conf like so:-


<VirtualHost *:80>
    DocumentRoot "/Users/rickhurst/Sites/sandbox"
    ServerName sandbox.macbook.local
</VirtualHost>


Then, to serve a local site from http://sandbox.macbook.local, add a line to /etc/hosts:-


127.0.0.1 sandbox.macbook.local


One issue that took me a while to figure out was that I was getting a 403 error when I tried to use AllowOverride All, which lets me use .htaccess files.

I found the answer here. Basically, the Options line needs to be set to “Options All”:-


<Directory "/Users/rickhurst/Sites/">
    Options All
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>


(The above rules are added to a file at /private/etc/apache2/users/yourusername.conf.) Remember to restart Apache after making these changes!

Object storage and retrieval in PHP part 2 – MongoDB

In part one, I talked about how to save and retrieve a PHP object instance using JSON files. In this post I cover the same operation using MongoDB, along with some gotchas.

I’ve only tried this in very limited circumstances, mainly to see how feasible it would be to make eatStatic switch seamlessly between JSON files and Mongo. I naively thought that you could just throw a JSON file at Mongo and have it store it for you, but the examples I’ve found take the PHP object and convert it behind the scenes, and also pass back PHP data structures rather than raw JSON.

This post doesn’t cover installing MongoDB – I skipped between several different examples and tutorials before I got it working, so I can’t remember exactly how I did it in the end. Once it’s installed, though, you can connect to it from PHP like this:-


$m = new Mongo(); // connect
$db = $m->cms; // select a database


For comparison purposes, we’ll create a simple case study object as in part one:-


class case_study {
    var $id;
    var $title;
    var $body_text;
    var $skills = array();
}

$case_study = new case_study;

$case_study->id = 'my/case_study';
$case_study->title = 'My case study';
$case_study->body_text = 'Some text for the case study';
$case_study->skills['css'] = 'CSS';
$case_study->skills['sitebuild'] = 'Site Build';


which gives us:-


case_study Object
(
    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


To store this in MongoDB, we simply specify a collection to use (a “collection” is analogous to a table in a relational database, except you can create them on demand and don’t have to specify a schema) and then insert our object:-


$case_studies = $db->case_studies;
$case_studies->insert($case_study);


To get it back we use:-


$case_study = $case_studies->findOne(array('id' => 'my/case_study'));


Passing this to print_r() gives us:-


Array
(
    [_id] => MongoId Object
        (
            [$id] => 4e6de720d2db288b0c000000
        )

    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


Note a couple of things:-

  • It has given us back an Array, instead of an object
  • It has inserted its own unique ID, [_id]

We don’t need to worry about the extra ID, as we’ll be using our own for lookups, so it can be ignored. To convert the array to an object, simply do:-


$case_study = (object) $case_study;


Which takes us back to:-


stdClass Object
(
    [_id] => MongoId Object
        (
            [$id] => 4e6de720d2db288b0c000000
        )

    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)

Object storage and retrieval in PHP part 1 – JSON files

I mentioned in my post about eatStatic that I was using JSON files for storage of objects and arrays, but hoped to make it switchable to use MongoDB. This is part one of a two-part post, demonstrating the use of JSON files with json_encode() and json_decode().

Take the following simple class:-


class case_study {
    var $id;
    var $title;
    var $body_text;
    var $skills = array();
}


If we create an instance of this and add some data:-


$case_study = new case_study;
$case_study->id = 'my/case_study';
$case_study->title = 'My case study';
$case_study->body_text = 'Some text for the case study';
$case_study->skills['css'] = 'CSS';
$case_study->skills['sitebuild'] = 'Site Build';


and pass it to print_r(), we get this:-


case_study Object
(
    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


If we now encode it as JSON:-


$json_str = json_encode($case_study);


At this point, we can save the file to the filesystem – I tend to create a unique ID based on the current date/time and a random string (roughly as sketched below). I won’t detail it all here, but you can see some of the helper functions I use in eatStaticStorage.class.php and eatStatic.class.php. One thing worth noting is that sometimes, when reading a .json file back in from the filesystem, I was experiencing a bug where the last three characters were omitted – I’m not sure what was causing this, but it was fixed by changing my read_file() method to use file_get_contents() instead of fread().
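Something along these lines (a rough sketch, not the actual eatStatic helper code – the path is just a placeholder):-

// build a simple unique file name from the current date/time plus a random string
$file_name = date('Y-m-d-His') . '-' . substr(md5(uniqid('', true)), 0, 8) . '.json';
$file_path = '/path/to/data/case_studies/' . $file_name;

// save the JSON-encoded object to the filesystem
file_put_contents($file_path, json_encode($case_study));

// ...and read it back later - file_get_contents() avoids the truncation
// problem I was seeing with fread()
$json_str = file_get_contents($file_path);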

Once you have retrieved your JSON string you can decode it again:-


$case_study = json_decode($json_str);


and we end up with this:-


stdClass Object
(
    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => stdClass Object
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


Notice that the “skills” array is now an object. We can set it back to an array using get_object_vars():-


$case_study->skills = get_object_vars($case_study->skills);


NB: this only happens for key => value arrays – if it was just a simple indexed array, e.g. array('css', 'sitebuild'), we wouldn’t need to pass it through get_object_vars(), as it would be maintained as an array.
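As an aside, if you would rather have plain arrays all the way through, json_decode() takes a second argument for exactly that:-

// passing true as the second argument returns nested associative arrays
// throughout, instead of stdClass objects
$case_study_data = json_decode($json_str, true);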

Passing the skills attribute through get_object_vars() gets us back to where we started:-


stdClass Object
(
    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


Sort of – we now have an object instance with all the attributes of the original object instance, but it doesn’t know it is a case_study object. In fact it isn’t a case_study instance at all – we would have to create a new instance of case_study and copy the attributes across if we needed the real thing, but if you just want the data, it can be used as it is in most cases.

The above example is very simple, but things can get quite complex when your object contains arrays of objects, which in turn may contain arrays (and arrays of objects). The initial cheap and convenient trick of encoding an object instance and saving it, then retrieving, decoding and using it, can get quite hairy at that point, but it’s still less effort than splitting it out into separate objects and maintaining them in several different relational database tables.
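If you do end up with a deeply nested stdClass structure, a small recursive helper can convert it back to nested arrays – a quick sketch, not part of eatStatic:-

// recursively convert a decoded stdClass structure (and anything nested
// inside it) back to plain associative arrays
function object_to_array($data) {
    if (is_object($data)) {
        $data = get_object_vars($data);
    }
    if (is_array($data)) {
        return array_map('object_to_array', $data);
    }
    return $data;
}

$case_study_data = object_to_array(json_decode($json_str));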

In part two I’ll talk about how to use MongoDB to save and retrieve object instances in PHP.

Introducing eatStatic blog engine

creating a new blog post in TextMate

Recently I ported this blog from an ancient version of WordPress to my own simple blog engine, which uses my PHP 5 micro-framework, “eatStatic”. I use the phrase “blog engine” rather than “blog software”, as it isn’t really packaged up yet as something I would describe as software – it’s more a collection of classes and templates that can be used to keep a blog.

The bulk of the code was written last year in the space of a couple of hours while sitting in a garage waiting for my car to be fixed – I was about to go on a long road trip and wanted a blogging solution that let me create blog posts and organise photos offline and then conveniently sync it to the live site when I had an internet connection. The result was my “on the road” blog about mobile working.

The thing that sets this apart from other blog engines (and the origin of the name “eatStatic”, along with a nod to a 90s techno act) is that instead of using a relational database to store content, it uses simple text files for blog posts, and cached JSON files to store collections of data (post lists, tag references etc.). I have it set up to run with Dropbox, so that I compose my posts in TextMate and they are synced to a Dropbox folder on the web server. You don’t have to use Dropbox though – you can use any technique you like to upload the data files to the server. For “on the road” I use Subversion, which means I also get versioning of blog content. Draft posts are composed in a drafts folder and moved into the main posts folder to push them live. There is currently no admin area on the site, though I might add one later.

The published date and URI for each post are taken from the text file name (there’s a rough sketch of the idea below) – I’ve adapted it for this blog to use the same URL scheme as WordPress, to avoid link rot on legacy content. Some people asked why I don’t just use the title and the created/modified date of the text file to make it even simpler; the answer is that I wanted finer control, and the option to specify the publish date – using created/modified dates would have been a disaster for the content I imported from WordPress. Also, by naming each file starting with YYYY-MM-DD, the post files are easier to sort and find in the posts folder, both visually and in code. You can use HTML in the blog post, and line breaks are additionally converted to br tags, other than immediately after a closing tag. Tags and metadata can be added at the end of the text file.
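As a rough illustration (simplified from the actual eatStatic code, and assuming a file name of the form YYYY-MM-DD-post-slug.txt):-

// pull the publish date and URL slug out of an example file name
$file_name = '2011-09-12-introducing-eatstatic-blog-engine.txt';

$base = basename($file_name, '.txt');
$published_date = substr($base, 0, 10); // "2011-09-12"
$slug = substr($base, 11);              // "introducing-eatstatic-blog-engine"

// because the prefix is YYYY-MM-DD, sorting file names alphabetically
// also sorts the posts chronologically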

I’ve also got a simple thumbnail gallery which can be included in a post (see below) by uploading a folder full of full-size images with the same name as the post. The idea is that a set of JPEG/PNG images can be imported from a camera and automatically pushed to the server by Dropbox. A caching script creates the thumbnails and web-size versions on demand (roughly as sketched below), and these are saved to the filesystem for efficiency on subsequent requests. I considered setting it up so that each post had its own folder, which could then contain images, but the blog engine was mostly written with the idea of quickly creating posts by opening TextMate or Emacs, writing and saving, rather than faffing around with creating folders.
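The caching part works roughly like this – a simplified GD-based sketch, not the real gallery code (the function and directory names are made up):-

// serve a cached thumbnail if it exists, otherwise generate it once with GD
// and save it to the filesystem for subsequent requests
function thumbnail_path($source_path, $thumb_dir, $width = 150) {
    $thumb_path = $thumb_dir . '/' . $width . '-' . basename($source_path);

    if (!file_exists($thumb_path)) {
        list($src_w, $src_h) = getimagesize($source_path);
        $height = (int) round($src_h * ($width / $src_w));

        $src = imagecreatefromjpeg($source_path);
        $thumb = imagecreatetruecolor($width, $height);
        imagecopyresampled($thumb, $src, 0, 0, 0, 0, $width, $height, $src_w, $src_h);
        imagejpeg($thumb, $thumb_path, 85);

        imagedestroy($src);
        imagedestroy($thumb);
    }

    return $thumb_path;
}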

I made the decision not to build in any commenting functionality – the anti-spam and moderation features needed are too much of a pain to deal with – so I’ve archived the old WordPress comments into the post bodies and integrated Disqus instead.

As I mentioned before, I’ve been using a previous version of eatStatic successfully for my “on the road” blog, but I wanted to see how it coped with hundreds of posts rather than just a handful. It seems to be doing fine, coping with over 600 posts, but I’m sure there is room for improvement. I’ve also been investigating making the JSON read/write switchable to use MongoDB, so that it could potentially be very scalable – I’ve encountered a few inconsistencies in the way that PHP’s json_decode() and MongoDB object retrieval work, but nothing that can’t be worked around – expect a blog post on that later!

I don’t expect eatStatic blog to be a WordPress killer, but it may appeal to techie types who want a lightweight PHP 5 blog engine, maybe to plug into an existing site, and to people who want to compose posts in TextMate or Emacs (or any other code editor) rather than in a web form. If you are interested in trying it, keep an eye on the GitHub repo, as I’ll commit an example of how this blog is put together once I’ve ironed out the more embarrassing bugs! I may add a simple admin area at a later date, to allow publishing entirely via the web, and I think it would also benefit from a “post by email” feature for convenient moblogging, but don’t hold your breath!

When I was importing content (I actually wrote a Python script to parse a WordPress XML export file and create the text files), I found it quite fitting that the first ever post on this blog, nearly ten years ago, was made on a home-brewed ASP blog engine which used XML for data storage. I think before then I kept a static HTML blog of sorts on a Freeserve site, but unfortunately I haven’t got a copy of that for completeness.

Lastly, whether or not you want to set up an eatStatic-based blog, if you aren’t already using Dropbox it really is excellent, so why not sign up for a free 2GB account using my referral URL, so I can get some more free space? Even though I have a paid Dropbox account, I use a second free account to mount on my server for automated site/database backups and for this blog, and it keeps filling up!

A few gotchas when your Drupal site is being deployed behind caching servers and proxy_pass

I recently launched a Drupal-based site that forms part of the website for a global company (being a freelancer working through a design agency under an NDA, I can’t say any more than that, or stick it in my portfolio, unfortunately!). I thought I’d make a few notes about some of the issues I had to overcome.

The site itself is hosted on a dedicated VPS, but served to the world through Akamai caching servers, which means that everything is cached by default. In this set-up the CMS is only available at a different URL, where caching is bypassed. Gotcha no. 1 is that cookies set and read via PHP do not work in this scenario*. Fortunately JavaScript cookies can still be used.

In addition to the caching, the site is served up as part of a much bigger site, /deep/down/in/the/url/structure, so proxy_pass is used (before caching) to rewrite the paths. Gotcha no. 2 is that base_path() in Drupal picks up the path of the origin server, so I had to add a condition like this to my settings.php:-


if (!empty($_SERVER['HTTP_X_FORWARDED_HOST'])) {
    // viewed via the proxy - build the base URL from the public-facing host
    $base_url = 'http://' . $_SERVER['HTTP_X_FORWARDED_HOST'] . '/path/to/my/proxied/site';
}


The clue to gotcha no. 3 is in that last example: $_SERVER['HTTP_HOST'] reports the host of the origin server rather than the public host, so if it is used anywhere in your code it may cause issues. I ended up adding a getHost() function that returns the appropriate host, depending on where the site is being viewed from:-


function getHost() {
    // default to the origin host
    $host = $_SERVER['HTTP_HOST'];
    // if the request came through the proxy, use the public-facing host instead
    if (isset($_SERVER['HTTP_X_FORWARDED_HOST'])) {
        $host = $_SERVER['HTTP_X_FORWARDED_HOST'];
    }
    return $host;
}


Gotcha no. 4 was image paths in optimised CSS – both the Drupal-provided CSS caching and some external optimisation I had set up using Minify rewrote the image paths to use those from the origin server. This meant that images applied via CSS worked in the non-cached editing environment but not on live. I haven’t come up with a solution for that one yet, other than leaving some of the stylesheets non-minified.

* Maybe there is a solution to this, but assume in this case there was little or no scope to request server config changes.

Plone Conference 2010 Day 1

quality schwag from Plone Conference 2010

Just back home after day one of the Plone Conference 2010 with my mind buzzing, so I thought it would be a good time to write up some of my thoughts and notes. It was really difficult to choose between the talks on offer across the three different tracks, but here are some thoughts on the ones I attended.

Keynote by Alexander Limi and Alan Runyan

Two main themes here – ubiquity/availability and designer friendliness.

To make Plone more mainstream it needs to be available to non-technical end users through the same means that other systems are already available – namely being able to deploy easily on cheap hosting, specifically via the one-click installers on shared hosting in cPanel and similar. This would allow users to easily evaluate Plone for their needs in the same way that they already can with WordPress, Drupal and Joomla – apparently a new Joomla instance is created about every two minutes. I must stop Alan Runyan and see if he has thought about the Microsoft Web Platform Installer – nowadays this includes the option to install WordPress, Drupal, MODX and loads of other systems, including downloading and installing dependencies. It would be great if Plone was in that list.

One of the aims of Plone 5 is to make it more designer friendly. I think this is really important – even though, since the release of Plone 4, I’ve started using Plone again for intranets and extranets (mainly straight out of the box with a few minor cosmetic tweaks), I currently still use something like WordPress or a home-rolled CMS for website builds. That is now going to change – the theming story is being completely rewritten by the introduction of Deliverance/XDV/Diazo (already available – more on that later) and Deco (TTW layout and content editing). The aim is to make Plone appeal to designers as something that helps, not hinders, them.

Quote of the talk has to be from Limi – “Plone doesn’t suck, because the developers don’t hate the core technology” (or something like that) – in reference to the revelation that many Drupal/WordPress/Joomla developers admit they actually hate PHP, whereas Plone developers love Python.

Deco: new editing interface for Plone 5

The next talk I attended was Rob Gietema’s demo of Deco. This is looking really good, although I’m a little bit skeptical of drag-and-drop and in-place editing (I like front-end editing, but prefer lightboxed modal editing to in-place), mainly because I’ve seen layouts explode and page elements disappear, or refuse to drop in the correct place, on similar systems in the past. However, I haven’t actually tried this one yet – maybe I’m just clumsy! I think in general designers and content editors are going to love it.

LDAP and Active Directory integration

I attended Clayton Parker’s talk on LDAP and Active Directory integration – I can’t say I absorbed much, but I’m sure I’ll be asked to do this one day, so it’s good to know that it is tried and tested and the tools are already there.

Easier and faster Plone theming with Deliverance and xdv

Nate Aune gave us an overview of Deliverance. I’ve known about Deliverance for ages, but the penny dropped for me today about how useful this is. The basic principle is this: Deliverance acts as a proxy, transparently taking HTML output from a website and merging it with HTML from a theme, according to a simple set of rules. In the case of Plone, this means you can create a theme in static HTML and have content from a default-theme Plone site displayed wrapped up in the static HTML. Simple rules can be applied, e.g. “take the news portlet from the Plone site, drop the header and footer and all the images, and display it in the element with an id of ‘recent-news’ from my HTML theme”. Magic!

Nate quoted one example where the HTML theme is stored in a Dropbox folder which the client has access to, so they can make tweaks and changes. I can see front-end developers and designers loving this.

There was much discussion at the end over which technology should be used for this – XDV is a fork of an earlier version of Deliverance with slightly different functionality. XDV, which is to be renamed Diazo, will be the theming engine for Plone 5. With that in mind, I’ll concentrate my efforts on Diazo. I’m excited by this for non-Plone reasons too – a majority of my work seems to involve integrating technologies that don’t belong together, and this will really help.

Design and development with Dexterity and convention-over-configuration

Martin Aspeli gave a talk on Dexterity – the (eventual) replacement for Archetypes. This is already available, but not mature yet. The talk was mainly conceptual rather than code-led, focusing on best practice for designing your site or application – when it is suitable to create a content type, and when you might be better off creating a form or using a relational database. Best quote: “code is like a plastic bag” (reduce, reuse, recycle). Write less code.

Laying Pipe with Transmogrifier

Another talk from Clayton Parker – Transmogrifier is a system for packaging up migrations of content from other systems. My thought was that it looked like hard work for a one-off import (usually I’d write a one-off Python script for something like this), but creating packages would benefit the Plone community, e.g. if there were packages available covering migrations from a standard WordPress, Drupal or Joomla site. I suppose this could also be used to import content from older instances of Plone, where the upgrade path is broken.

Multilingual sites – caveats and tips

Sasha Vincic talked about strategies and gotchas for multilingual site builds. Even though Plone has tools for this, there are common scenarios to deal with, such as the “missing page” scenario, where a translation of a site may not have the same number of pages as the base translation. He also covered common issues such as escaped HTML being translated by third parties, and being delivered content where HTML attributes have been translated, thereby breaking the HTML.

Guest Keynote: Challenging Business

This was an inspirational treat for the end of day one – Richard Noble is a fantastic speaker, and after a day of CMS talk it was great to hear his story of the challenges of his past world land speed record achievements and the current one, the Bloodhound SSC project. As well as building insane rocket-powered cars (the current one has an F1 engine on board just to drive the fuel pump!), his goal is to inspire children and young people to become engineers, as there is an impending massive shortage of engineers in training. I was also interested to hear that there will be no patents on the technology developed for the new car – the advancements will be made available for anyone in the engineering industry to build on – sound familiar?

archived comments

Thanks for the roundup Rick! Looking forward to the rest.

Mike 2010-10-27 21:29:18

Good, clear writeup. I could not make it to the conference this year. It’s nice to be able to follow it a bit this way. Thanks, Rick!

Maurits van Rees 2010-10-27 21:42:13

Thanks for the comments guys – videos for all the talks will eventually be online, so hopefully everyone who couldn’t make it won’t miss out 🙂

Rick 2010-10-27 21:58:26

Thanks for the post!

I agree with you re: transmogrifier which is (partially) why I wrote http://pypi.python.org/pypi/parse2plone

Keep the blogs coming 🙂

Alex Clark 2010-10-28 00:50:01

More PHP framework musings

arty egg sandwich and latte

Just like with CMSs, I’m incredibly indecisive when it comes to frameworks. I spent years dragging my feet over deciding which PHP framework I would “standardise” on, before deciding on Zend, then deciding against it (too big), then getting as far as using Cake for a couple of things (it’s pretty good, but felt a bit too rigid, and call me ridiculous, but I just hate the name). I’ve now gone full circle and started using CodeIgniter (the first one I tried a few years back) for a couple of main reasons: firstly it is very light and loosely coupled, and secondly I’m already using it on toooldtoskate.com, which runs Sweetcron. I’m also, in tandem, using a pure PHP 5 port of CodeIgniter called Kohana on another project.

I’m hoping the techniques I learn for these will be mostly interchangeable, so juggling the two won’t be counterproductive. There was no big deciding factor really; I had just started to realise that the ever-growing collection of PHP code I had been using could be considered a framework of sorts, but was lacking a number of features I would get for free if I adopted an established framework. A friend suggested I steal bits of other frameworks to add to my code base, and another friend mentioned Kohana, just as I came to a key point in my longest-running project (referred to as “frankenapp”, due to the mixture of technologies in use!). I started looking at how I could impose a standard structure for future code and make it more modular, and before I knew it I had the front end being served up by Kohana, with a library acting as a simple bridge to legacy code. At the very least, adopting Kohana will encourage me to stick to a structure rather than making it up as I go along!