Welcome to Cykod. We are a fully-integrated, self-funded web-development startup located in Boston, MA.

Cykod Web Development and Consulting Blog

Effectively Using Low Ceremony Objects

One of my favorite things about working in a higher-level language is what I like to call "Low Ceremony Objects" (if there's a more popular term for them, please let me know and excuse my ignorance) - a.k.a. arbitrary data structures built out of combinations of general-purpose containers like arrays and hashes. They are an effective way to quickly create and manipulate data with a short lifespan, but they can be counterproductive in terms of both code readability and maintainability when overused, and sometimes more structured data or traditional objects are much more effective.

When used correctly, LCOs (yup, tired of typing Low Ceremony Object already) generally exist for only brief chunks of time and are only defined insomuch as they are used. The existence of general-purpose containers in higher-level languages, and the minimal amount of code needed to create and access them, means that data that would otherwise sit in a predefined data structure now often ends up in a combination of Hashes and Arrays (or Lists and Dictionaries if you swing that way, or PHP's bastard stepchild of both).

As for the name - why low ceremony? Well, like a Vegas shotgun wedding, these generally don't come with a lot of planning - no design documents or even a set structure - so there generally aren't a lot of guidelines involved. Now why Low Ceremony Objects and not Low Ceremony Data or Low Ceremony Structures? Because a large part of the value of these objects built out of general-purpose containers is the large and easy-to-use toolkit of methods, either on the objects themselves (Ruby, Python, Java) or via the standard accessor functions (PHP, Lisp), which greatly aid in manipulating the data - adding, removing, searching, sorting, etc.
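In Ruby, for example, that toolkit comes along for free on any ad-hoc structure - a small illustrative sketch:

```ruby
# An ad-hoc menu built from nothing but an Array of Hashes -
# no class definition required.
items = [
  { :title => 'Contact', :url => '/contact' },
  { :title => 'About',   :url => '/about', :selected => true }
]

# Manipulation comes free with the general-purpose containers:
selected = items.find { |i| i[:selected] }   # searching
sorted   = items.sort_by { |i| i[:title] }   # sorting
titles   = items.map { |i| i[:title] }       # transforming
```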

Reading a book on Clojure got me thinking about this again (after struggling with it a couple of years ago - see the footnote). In the Lisp variant I first learned - Scheme - pretty much every piece of data is a Low Ceremony Object that you can car, cdr, or caaaaadr to your heart's content, but without some additional structure or abstraction added on top of the language, complex data quickly becomes difficult to work with.

When used correctly, Low Ceremony Objects are a great boon to development, both in programmer productivity and in code cohesion and the DRY philosophy. Since they are defined instead of declared, their definition is always close in the code base to their usage. If you make a change to the creation of an LCO you have effectively changed its declaration - you don't need to dig up a header file or a separate class source file to make a modification. Want a quick data structure to hold a menu? Two lines and it's done:

[ { :title => 'Item 1', :url => '/item1' },
  { :title => 'Item 2', :url => '/item2', :selected => true } ]

If that menu is going to be created and digested during a portion of one Web request then you don't really want to go through the effort of creating a class, especially if that class is just going to be used as a data structure and isn't going to have any of its own methods. What does the following actually get you?

class MenuItem
  attr_reader :title, :url

  def initialize(title, url)
    @title = title
    @url = url
  end
end

class MenuItemList
  def addItem(item)
    @items ||= []
    @items << item
  end

  def item(idx)
    @items[idx]
  end
end

lst = MenuItemList.new

Not a whole lot (ignoring that no one in their right mind would use anything but an Array for MenuItemList unless more functionality was added). Ruby provides the Struct construct for just this reason - but I'm not sure that using Struct gets you much more than just using a Hash. In particular, I'm not a fan of passing a huge parameter list to the constructor: you need to remember the exact order of the properties every time you read code using the initializer, or you'll have problems. For my money:

menu_item = { :title => 'Item 1', :url => '/item1',
   :selected => true, :dropdown => false, :green => true }

Is more readable than:

menu_item = Struct::MenuItem.new("Item 1", "/item1", true, false, true)
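For reference, here's roughly how that Struct class would be defined in the first place (a sketch - the field list is assumed from the hash example above):

```ruby
# Defining a named Struct registers it under the Struct namespace;
# positional constructor arguments must then match this exact field order.
Struct.new("MenuItem", :title, :url, :selected, :dropdown, :green)

menu_item = Struct::MenuItem.new("Item 1", "/item1", true, false, true)
menu_item.title   # the fields become accessor methods
```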

There are a lot of situations where LCOs are great, but there are two guidelines I now try to follow:

  1. Only use a Low Ceremony Object if it is going to be created and consumed in relatively close proximity code-wise.
  2. Once an LCO gets too complicated to understand easily by looking at its definition or a debug output of the data, it's time to move on to something else.

The reason for the first rule is that as soon as you move away from the definition of the object, errors are going to creep in, since there's no help from the interpreter or compiler in properly generating and consuming the LCO.

The second guideline should be pretty self-evident: since you only have a definition of the data and not a declaration of the type, once the data gets too difficult to understand you are going to make mistakes using it, because you don't have a declaration to fall back on.

LCOs can also be limiting because the data can be hard to extend when a small piece of custom code could achieve the same effect. Let's go back to our menu item - what if we made the menu item responsible for displaying itself? Suddenly the whole menu system could be a lot more powerful by subclassing the base class (or just duck-typing some other type in there):

class MenuItem
  # ...previous definition...
  def display
    "<li><a href='#{@url}'>#{@title}</a></li>"
  end
end

class BlinkingMenuItem < MenuItem
  def display
    "<li><a href='#{@url}' style='text-decoration:blink;'>#{@title}</a></li>"
  end
end

class MenuItemList
  # ...previous definition...
  def display
    "<ul>" + @items.map { |itm| itm.display }.join + "</ul>"
  end
end

menu = MenuItemList.new
menu.addItem(MenuItem.new('Item 1', '/item1'))
menu.addItem(BlinkingMenuItem.new('Item 2', '/item2'))
print menu.display

Because who doesn't like blinking menu items? Achieving the same functionality with just a data structure would mean adding a conditional branch for each added option - to the point where your code can degenerate into if/elsif/else spaghetti.
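For contrast, a data-only version of the same display logic might look something like this (a hypothetical sketch - every new display option adds another branch to the render loop):

```ruby
# Hypothetical data-only version: each display variation needs its
# own conditional branch instead of a polymorphic display method.
def display_menu(items)
  lis = items.map do |itm|
    if itm[:blinking]
      "<li><a href='#{itm[:url]}' style='text-decoration:blink;'>#{itm[:title]}</a></li>"
    elsif itm[:selected]
      "<li class='selected'><a href='#{itm[:url]}'>#{itm[:title]}</a></li>"
    else
      "<li><a href='#{itm[:url]}'>#{itm[:title]}</a></li>"
    end
  end
  "<ul>" + lis.join + "</ul>"
end
```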

Because of how easy LCOs are to create, they tend to get overused whenever additional design-level decisions take more effort than just throwing together an Array of Hashes - don't lose all the benefits of years of work in OOP design just because high-level languages make LCOs so easy to create and consume.

One of my least favorite examples of an LCO is the form system in the Drupal Content Management System - the way to create forms is to generate an enormous associative array where different pound-sign-prefixed keys have different meanings and different nested arrays create different functional groups in the form.

This fails both of the LCO tests. Most people generating Drupal forms never look at the Drupal code that actually uses them (I took a couple of looks, and while it's nice, modular code, it's also very far away from what we're generating), and since there's no strong typing it's impossible to know what went wrong when your form doesn't show up correctly (this may have been fixed with additional error checking in newer releases). Secondly, with huge forms with dozens of items, it's hard to look at the data you're generating and say with any certainty whether or not there's a mistake. Lastly, let's say I want to add a special widget to a form (a slider, for example) - I wouldn't even know where to start, since the data I'm passing into the form just gets magically transformed into HTML output on the other end, and I don't have any control over it (other than putting HTML directly into the form).

Because of the lack of meta-programming at the class level, PHP code in general can suffer from the downsides of lots of complicated LCOs. Let's take the CakePHP framework - from their tutorial, here's an example of a model:

class User extends AppModel {
    var $name = 'User';
    var $validate = array(
        'login' => 'alphaNumeric',
        'email' => 'email',
        'born'  => 'date'
    );
}
Compare this to an example in Rails:

class User < DomainModel
  table_name 'users'
  validates_as_alphanum :login
  validates_as_email :email
  validates_date :born
end
Wait, you say - validates_as_alphanum, validates_as_email and validates_date don't exist in Rails? Except that they do, in the superclass:

class DomainModel < ActiveRecord::Base
  def self.validates_as_alphanum(field, options = {})
    validates_format_of field, options.merge({ :with => ..REG_EXP.. })
  end
end

The same thing is doable in CakePHP, but you end up adding instance methods instead of being able to metaprogram at the class level, since the use of a data structure instead of a method forces the implementation to rely on conditional branching and dispatching instead of letting the language handle that part itself. The advantage of using code instead of an LCO in this case is that there's a lot more help from the language compiler/interpreter than when you try to create a Domain Specific Language solely out of data. You effectively need to write an interpreter for the DSL, while building it out of meta-constructs lets you use the development language itself as the interpreter (in which case, you'll probably end up fulfilling some variation on Greenspun's Tenth Rule [via Proggit]).
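Stripped of Rails, the class-level metaprogramming pattern can be sketched in plain Ruby like this (Model and validates_presence are made-up names for illustration):

```ruby
# A tiny class-level DSL: each macro call in a class body generates
# a validation rule at class-definition time, instead of dispatching
# on a data structure at runtime.
class Model
  # Each subclass accumulates its own list of validation checks.
  def self.validations
    @validations ||= []
  end

  # The "macro" - calling this in a class body adds a rule.
  def self.validates_presence(field)
    validations << lambda { |record| !record.send(field).nil? }
  end

  def valid?
    self.class.validations.all? { |v| v.call(self) }
  end
end

class User < Model
  attr_accessor :login
  validates_presence :login
end
```

The language itself acts as the interpreter here: the class body runs the macro calls, so there's no hand-rolled dispatch over a configuration array.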

So, in conclusion: LCOs are great where you need to quickly create data types to be consumed just as quickly, but they can become a drag on a project when they get too complicated or are used as complicated interfaces, since they don't self-document and can make it difficult to track down bugs when they become overly complex. Of course, given the apparent rise of schema-free databases (like CouchDB) and the NoSQL movement, I might soon be in the minority arguing against the limiting overuse of LCOs.

As an aside, in the development of Webiva (our newly-released, newly-open-sourced Rails CMS) we came across the problem that CMSs require a boatload of customizable features so that modules can be developed effectively and generally. This customization must be easy to add to the system from a developer perspective and easy to extend later on as more options are needed. The former cried out for the support and validation offered by ActiveRecord, while the latter made more sense as an LCO - after all, who wants to update a bunch of models or the database every time a new option is added to the thousands of existing ones?

I started out using just Hashes, as that's what comes in via the params object, with some custom inline validation. But that quickly became painful and repetitious, so what we finally ended up with was the HashModel - a hybrid between an LCO and a full model that allows easy, standard ActiveRecord usage in forms but is easy to create, update, use and store in generic DB text fields. Here's a made-up example usage:

class BlogDisplayOptions < HashModel
  attributes :blog_id => nil, :per_page => 20, :category => '', :active => true

  integer_options :per_page
  boolean_options :active

  def blog
    @blog ||= Blog.find_by_id(self.blog_id)
  end

  def validate
    # we need a valid blog
    self.errors.add(:blog_id, :invalid) unless self.blog
  end
end

Usage - storing values as a hash in a serialized column:

@options = BlogDisplayOptions.new(params[:blog])
if @options.valid?
  paragraph.data = @options.to_hash
end

@options = BlogDisplayOptions.new(paragraph.data)
# ...do something...

Switching from Hashes to HashModels was a huge win in reusability and simplicity. Now I just need to fix all the other places in the system where I ignored the two rules from above.

Posted Thursday, Oct 22 2009 04:59 AM by Pascal Rettig | Development, Rails

Why I hate CSS

As a developer in any non-web language (read: anything but HTML and CSS) there's pretty much only one hard and fast rule: it's not the compiler's or interpreter's fault. It's yours. One of the differences between a good programmer and a bad programmer is knowing that you can't blame the tools for something that you did.

When I was just starting out in C, I remember being convinced on numerous occasions that the Turbo C compiler had a bug, because my code **had** to be right - I'd double-checked it a number of times - so there **had** to be a bug in the compiler. But there never was - pretty close to 100% of the time it's not the compiler/interpreter/debugger's fault.

Except with CSS and IE/Opera/Safari/Firefox/Epiphany/Konqueror (insert your least-favorite browser here - I'm guessing IE, but that's just me).

Browsers wouldn't just randomly double margins, change list items' whitespace or position objects incorrectly? Would they? Things are getting better as IE6 finally phases out, but that's also part of the problem - my memory of all the necessary IE6 hacks is starting to fade.

But it's still not perfect (is your browser 100/100 on Acid3? Firefox 3.5.5pre on Ubuntu is still 93/100). Since browsers still render things differently and react to the same code differently (and often all incorrectly according to the standard), you have a situation where fairly often it really is the language's fault. And since we programmers are generally egotistical types, give me an inch of believing it's not my fault and that will be my first conclusion half the time. Even if it turns out to be just a darn missing semicolon; again.

Posted Tuesday, Oct 20 2009 06:00 AM by Pascal Rettig | Rant

What's wrong with BDD

Nothing. Done, shortest blog post ever.

Ok - but let's step back. I like BDD (Behavior Driven Development) a lot, and the benefits at the end of the day are definitely there to see, but I have noticed a shift in how I develop now that I'm focusing on BDD. While I used to take a very high-level attack on problems, the use of BDD is somewhat subversively shifting me to a bottom-up approach instead of a top-down one. Instead of adding functionality horizontally across a whole bunch of different parts of a project, I end up working vertically and locally on individual classes, because it's easier to move forward from a testing perspective.

While that might seem normal or even desired in larger shops (where developers are assigned smaller pieces) and on fully spec'd-out projects, we usually count pretty heavily on the feedback loop that develops with clients by getting usable prototypes out as early as possible. Working vertically on individual pieces of a project instead of horizontally across its breadth makes it harder to have a working prototype to show at any given time, so the shift that's happening because of BDD isn't necessarily beneficial.

Occasionally I have to remember to see the forest for the trees instead of overworking certain interfaces. I end up adding more functionality than is needed at a given stage of the project, because BDD makes it incredibly easy to add functionality to interfaces you are already testing, and gives you a warm fuzzy feeling with each additional test that passes and will now sit in a state of permanent watchfulness over the correctness of your code.

Testing of new Controllers, Models and other classes (in Rails' case) that require a fair amount of bootstrapping and tear-down state around them - whether with Mocks or filters or whatever - ends up getting put off, while classes that are easier to test effectively get further along earlier in the process, even if they aren't that complicated.

There's no easy solution around this - tests are always going to add some additional friction to creating new classes that need to be unit tested separately. I've found that making note of the issue and stubbing out a bunch of tests across a couple of different pieces of the project before writing any code - don't even write the actual test right away (in RSpec, an it "should ..." without the actual test block acts as a nice stub) - makes it easier from a mental standpoint to jump around between different classes and unit tests as the code gets developed.
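As a sketch, a spec file stubbed out this way might look like the following fragment (the class and behaviors are made up; bodiless it blocks are reported as pending rather than passing or failing):

```ruby
# spec/menu_item_spec.rb - stubbed-out examples with no block body
# act as pending placeholders until the real tests are written.
describe MenuItem do
  it "should generate a link from its title and url"
  it "should escape HTML in the title"
end

describe MenuItemList do
  it "should wrap its items in a ul tag"
  it "should render each item via its display method"
end
```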

Further, I think there is real value in writing some test abstractions in the form of helper methods, even if you end up pulling your tests to a slightly higher level, as long as what you're testing is still clear from the test code. RSpec helps with this by making it easy to write additional matchers that you can then use naturally throughout the rest of your code without a lot of ceremony.

Posted Wednesday, Oct 14 2009 07:50 AM by Pascal Rettig | Development

@font-face: The Fonts are coming, the Fonts are coming!

The times they are a-changin' - fonts are coming to the Web and there's nothing anyone can do to stop it. Unfortunately, the font industry has so far been hesitant to embrace a new potential revenue source and change along with the technology.

With some form of @font-face now supported in the latest version of every major browser (Firefox, Opera, Chrome _sort of_, Safari and IE8 - yes, you'd need two versions of each font file, in EOT and TTF, to support all browsers, but it's a step in the right direction), there is finally a chance for Web developers to break out of the painfully restricting confines of the ten web-safe fonts. The simplicity of the CSS needed for @font-face, as well as how easily it handles backwards compatibility, makes it a pretty sure hit in the near future. We're not 100% there yet, but it looks like it's going to be months, not years.

As an example, to use a free font from the people at fonts.info that lives in the same directory as your CSS, all you need is the following (ignoring the alternative custom CSS file necessary for IE):

<style type="text/css">
@font-face { font-family: GraublauWeb; src: url(GraublauWeb.otf); }
p { font-family: GraublauWeb, Verdana, sans-serif; }
</style>

And the fallback to the legacy ten is right there in the code (I ignored the bold font for brevity's sake).

Except for the fact that it's still darn hard to find quality fonts to embed. One of the problems is that many designers seem to be equating web fonts with free fonts, probably because many web developers are pushing for plentiful free fonts and web-specific licensing isn't really set up yet. Since free is tough to make a living off of (although, who knows, we could end up with "Frutiger, sponsored by Dole!"), very few people seem willing to commercially license fonts that can be embedded on the web unless there's some sort of digital restriction (even a weak one). Judging from some of the animated discussions about free fonts at Typophile, font designers aren't happy about free, and it seems like the only solution being presented is to replicate the RIAA and MPAA's winning strategies in the area of digital rights warfare: first resist any change for as long as you can, then try to railroad some poorly-thought-out DRM scheme into the technology that ends up hurting paying users much more than the pirates.

Unfortunately the tribulations of the music and movie industry have shown us one thing:

People who don't want to pay for stuff WON'T pay for it no matter what. People who do want to pay for stuff WILL pay for it if it's easier than not paying for it.

The truth of the matter is that the people who pirate Adobe Font Folio instead of paying for it aren't going to pay the $2.5k for it regardless of how many protections you put around the digital files. When it comes down to it, you need to be able to use the files somehow, and if you can use them, you can get at the data - and there's someone out there who can crack that data out of its DRM shell.

So, if we accept that there are two types of people out there - pirates and customers - the question becomes: how do we make life more difficult for pirates and easier for customers?

Font designers are scared that if their work is embedded it automatically becomes publicly available to pirates and thus "free" - but the truth is that it's already publicly available (once one person buys it, they can upload it to a p2p network without DRM restrictions or watermarks). So let's take as an example the creation of a simple licensing system that uses the publicly-available part to its advantage in enforcing payment on commercial fonts:

Imagine if you could buy a web font, but then you needed to include a comment before each @font-face declaration with a license number:

/* FONT FOUNDRY LICENSE: #5648961561565 */
@font-face { font-family: GraublauWeb; src: url(GraublauWeb.otf); }

The license number could be keyed to an account that is licensed to one or more domains - if that font and license number appears on a domain that's not in the account, have a system generate an automated email to the owner of the domain saying the font is licensed incorrectly. If there's no action on the font, send a DMCA notice to the domain's ISP requesting removal.

Since @font-face fonts need to be publicly available to download, it's easy to have a web crawler generate a unique id (i.e. a hash) based on the metadata or a binary signature of every font it finds on the web, and compare that id to a list of hashes in its database. If there's a match and no corresponding license number, or a domain mismatch on the license number, send that automated email to the address in the domain's WHOIS info.
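A minimal Ruby sketch of that crawler-side check (the registry structure and function names are made up for illustration):

```ruby
require 'digest'

# Fingerprint a font file by hashing its raw bytes, so renaming
# the file doesn't change its identity.
def font_fingerprint(path)
  Digest::SHA1.hexdigest(File.binread(path))
end

# Hypothetical license registry: maps a font's fingerprint to the
# set of domains licensed to serve it.
def licensed?(registry, fingerprint, domain)
  registry.fetch(fingerprint, []).include?(domain)
end
```

A crawler would fingerprint each font it downloads, then check the serving domain against the registry and queue the automated email on a mismatch.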

This would require a sea-change of how font licensing is done - not per user but for per-domain usage - but it would keep the industry making money and even allow for new revenue streams from embedded licensing.

As for the pirates: you won't be able to catch every one, but you will be able to catch the most popular offenders (since most crawlers will generally crawl the most popular sites first), and since those are the people most likely to have a commercial interest in not getting sued for statutory damages, you'll be able to recoup your expenses.

Scouring the whole web isn't that easy, but if a search company offered this service to font foundries, it would be pretty simple to create an economy of scale that would make it economically feasible. Google, after all, already indexes some binary files - get them to add fonts to their index, searchable by the aforementioned data hash (so people can't just change the file name), and the above technique could theoretically be executed with a few hundred lines of a Google-API-connected search script.

I don't know if something like this is going to happen soon - but is there any reason fonts can't follow the lead of the stock image market? Yes, there are images available for free or for $1.00 a pop (Istockphoto.com), but places like Getty are still where you go if you want that one-of-a-kind image and are willing to pay for it. Some fonts will be free, but there will still be a market for the unique fonts that people are willing to pay for.

Please, font people: there are companies like us that are very willing to pay for the fonts we use on the web. Learn from the lessons of the RIAA - after years of pushing DRM restrictions, almost everyone who sells music online now makes the files available in a DRM-free format, yet Bono's still making money hand over fist. That, and I'm really getting friggin' tired of Arial.

Posted Thursday, Oct 08 2009 12:48 PM by Pascal

Legacy Support for Ajax CSRF token in Rails

In the process of updating Webiva to the newest version of Rails, one of the obstacles we had to overcome was adding support for CSRF protection throughout the system. This is an essential protection for any system, but an absolute must for something like an open-source CMS, where anyone can study the code and make an educated guess about which users and content to attack (user id #1, for example).

Working with standard forms in the Webiva code base proved not that difficult, as the form_tag function automatically attaches the required CSRF token. Dealing with hand-coded Ajax calls in the Prototype library, however, seemed like it was going to be a major pain, with the worst-case scenario being manually attaching an authenticity_token= parameter to each request (there were probably a couple hundred of them). Luckily there was an easy workaround. We ended up just adding the following code to the top of each layout:

 <script> var AUTH_TOKEN = "<%= form_authenticity_token.to_s %>";</script>

And then just added the following to the bottom of application.js:

try { if (!AUTH_TOKEN) AUTH_TOKEN = 'DummyToken'; }
catch (e) { AUTH_TOKEN = 'DummyToken'; }

Object.toQueryString = function (object) {
  var result = $H(object).toQueryString();
  if (!result.include("authenticity_token"))
    result += "&authenticity_token=" + encodeURIComponent(AUTH_TOKEN);
  return result;
};

Since Prototype calls Object.toQueryString on any parameters passed to Ajax.Request or Ajax.Updater, the authenticity token gets added in automatically. I added the try/catch block just to make sure something is set, so that if AUTH_TOKEN isn't defined we get a server error that can be tracked more easily than a JavaScript error on the client side.

Posted Wednesday, Oct 07 2009 10:35 AM by Pascal | Development, Rails

Stake a virtual claim

Having been in the Web development business for the past 10 or so years, I've managed to accumulate a fair share of website ideas, as well as domain names to go along with them. While some of the ideas are now obsolete, I believe a lot of them still have value, and we're only now finally able to start working on some of them.

If you've ever had an idea for a website, you're probably aware that one of the hardest things to do is come up with a name for the site. People make fun of all the "Web 2.0" names - Reddit, Flickr, Tumblr, Digg - but there's a chance the reason for those names is that Readit, Flicker, Tumbler and Dig weren't available. People weren't necessarily trying to be cool and edgy - they just wanted a nice short name and couldn't secure a real English word with a useful meaning.

Finding an available .com domain with a name that at least makes some sense is very hard, but once you find that magical name it feels like you've conquered half the problem, even if it takes years to actually get the talent and/or money together to get the project going.

So imagine your surprise when you check back anywhere from a few months to a year later and there's a hyphenated version of your-sweet-name.com, and someone banging at your door asking you to hand over yoursweetname.com because they've actually built something with the name while you've been busy with other things.

Why does this happen? Why would someone pick the same made up word or combination of words as you had already registered when they can see that you've had your domain since 2006?

I'd say the answer lies in that crappy Network Solutions or GoDaddy.com landing page that newly-purchased domains default to. I've come to the conclusion that at this point everyone sees themselves as an Internet entrepreneur. Everyone can go online and buy themselves a domain for $10.00 a year, and everyone does - but most of the time the name just sits there. This means that of the 50 or so names you come up with, 48 of them probably look like someone just bought the domain and isn't doing anything with it - it's just sitting there in its generic landing-page state. At some point people run out of names and just go with a variation on something that's already taken: cool-site.com is available even if coolsite.com isn't. We did this ourselves a couple of times, figuring that if the project took off, we'd buy the unhyphenated domain from some squatter once we'd made our first million.

The best way to combat this is to not leave the default landing page up, but to stake a virtual claim with some sort of content on the site. Put up a nice-looking coming-soon graphic (not a "Website Under Construction" road sign) on your most important projects-in-waiting, and people will most likely see it and move on to the next made-up word on their list. Then you just need to fulfill your end of the bargain and actually build the site.

Posted Friday, Oct 02 2009 09:04 AM by Pascal | Marketing