Welcome to Cykod. We are a fully-integrated, self-funded web-development startup located in Boston, MA.

Cykod Web Development and Consulting Blog

We must be getting better: The Simplicity Era of TiVo, Apple, Google (and Rails)

Have you ever had a conversation with a fellow developer that went something like the following:

Q: Why is this so complicated?
A: Because I didn't have enough time to make it simple.

To someone outside the field it might seem odd, but to a programmer it's pretty straightforward. Simplicity is hard to do the first time around. During development you need to make guesses about how your code and end product will be used, well above and beyond what the requirements say. This leads to over-complication: without knowing the exact use cases, you can't add the right abstractions (and the wrong abstractions often do worse things to complexity than none at all). Writing a blog post is the same way - editing and reworking tends to chop and simplify rather than add.

As Alan Perlis said: "Simplicity does not precede complexity, but follows it."

In the pre-DVD days, when people actually recorded shows onto these huge VHS tape things (I know, crazy, right?), a common joke was how hard it was to correctly set a VCR to record a show taking place at a future date. Entire sitcom episodes were written around messing up the recording of some important show or event.

Technology was something to be afraid of, something that only the geeks and 7-year-olds could figure out. For all intents and purposes it looked like we were heading towards a bureaucracy-led eventual meltdown of society, as things got more and more complicated to the point where nothing worked anymore. Terry Gilliam captured this idea perfectly in his fantastically over-the-top movie Brazil. In the movie, one justification for the film's military state is a series of constant terrorist explosions, but while it's never outright explained, those explosions are arguably just caused by a society overtaken by bad duct-work. I couldn't find the intro clip with the malfunctioning apartment, so here's a younger De Niro as a rogue heating engineer:

[Embedded video: Robert De Niro as renegade heating engineer Harry Tuttle in Brazil]

Things have to get worse before they get better.

It's understandable, however, that things have to get worse before they get better. Every advance forward is preceded by a minor step back as the kinks in a new technology get worked out. The first TV tuner card I bought in the mid-90s was a complete disaster - my computer didn't have enough power to run it correctly, so my dreams of coding while watching the game turned into an exercise in frustration. A cheap $30 portable TV would have gotten the job done ten times better.

Instead I futzed around with getting drivers to work for Windows 95 and tried to use a UI that should have been taken out back and shot. But because of idiots like me who kept buying these crappy cards and validating the market, people kept developing the technology to the point where someone came up with TiVo.

At that point not only could my parents record TV shows, but they could do it without reading a manual. TiVo simplified the interface to the point where it was usable without needing to think about it. TiVo was a hundred times more powerful than programming a VCR, yet it could be done without even glancing at the instructions.


When simplicity gets the job done, people like it and they will stick with it.

A couple of years earlier, Google launched its search engine and overwhelmed people by underwhelming them. I still remember the first time I saw the Google home page with none of that "Portal" junk that was popular at the time - just search. It was an epiphany. Turns out that's what people wanted. Yes, Google eventually launched iGoogle, but it kept it off by default, and that's the way it stays for most users.

When simplicity gets the job done, people like it and they will stick with it. Like the saying goes: KISS.

Which brings us to Apple. When the iPod launched, it really only had one thing going for it: simplicity. A simple, elegant interface and a simple, elegant way to get music onto it via iTunes. It lacked any compatibility with other software and had less storage than competitors, but since you could use it without having to "learn" anything, it quickly won people over (I'm air-quoting "learn" because yes, you did learn to use it, but it wasn't a struggle). Apple followed with the iPhone, using the same formula of making things that were hard to figure out on other phones easy on the iPhone, and then took it to a whole new level with the iPad. Say what you will about the lack of features or restrictions (and I have), the most common review of the iPad by a member of the tech press goes something like this: "My [Dad/Mom/Husband/Wife/Kid] picked it up and just started using it and I couldn't get it back from them."

Companies had been trying and failing to make a "simple" computer for years (anyone remember Microsoft Bob?). However, once the technology caught up to Steve Jobs's obsessive vision and delivered thin screens, SSDs, multi-touch and fast-enough processors, it became physically possible to build a simple, reasonably-general-purpose computer that you could just pick up and use.

To get back to programming, the '90s poster children for complexity are C++ and Java. If C was the two steps forward of a previous era - a simple, well-defined language that forms the backbone of GNU and L(unix) - C++ was the step backwards as we hurtled ourselves forward into a new OOP phase. C++ was the promise of the great next thing, but was hindered by the complexity of not knowing which parts were needed. I'm not implying C++ was at all a failure, just that its high level of complexity was a result of treading into unknown waters, and so people coped. To quote Joshua Bloch from Peter Seibel's Coders at Work:

I think C++ was pushed well beyond its complexity threshold and yet there are a lot of people programming it. But what you do is you force people to subset it. So almost every shop that I know of that uses C++ says, “Yes, we’re using C++ but we’re not doing multiple-implementation inheritance and we’re not using operator overloading.” There are just a bunch of features that you’re not going to use because the complexity of the resulting code is too high. And I don't think it's good when you have to start doing that. You lose this programmer portability where everyone can read everyone else's code, which I think is such a good thing.

The next step in the same space, following C++, was Java - and the primary thing Java did was simplify coding by removing features from C++. Memory leaks? Gone, with garbage collection. Multiple inheritance? No go, but you can have a simplified version called interfaces. Operator overloading? Gone. People figured out the parts of OOP that were necessary to get the job done and dropped the rest.

Unfortunately, Java focused on simplifying the language and forgot about simplifying the development ecosystem (although they did nail deployment with write-once, run-anywhere). While the Java language is simplified, the steps to set up and build any Java project are anything but - and anyone who argues otherwise should be required to write an Ant build.xml from scratch.
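For a sense of the ceremony involved, here's a minimal build.xml sketch of my own (a toy example, not from any real project) that does nothing but compile and jar a hello-world:

<project name="hello" default="jar">
  <!-- Even "compile it and jar it" requires explicit targets and tasks -->
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes"/>
  </target>
  <target name="jar" depends="compile">
    <jar destfile="build/hello.jar" basedir="build/classes"/>
  </target>
</project>

And that's before classpaths, properties, unit tests or a single external library enter the picture.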

What if we wanted something simpler than Java? What could be simpler than the following "Hello World!" program (written in PHP):

        Hello World!

(As linked to by this Reddit post: technically, even without any <?php .. ?> tags in it, it's still a valid .php file.)

PHP filled the need for simplicity and quickly became the de facto open-source web development language, but as projects grew and things started to get complicated, the same impulse to mitigate complexity with OOP pushed people towards other solutions. As Einstein's famous saying goes, "make everything as simple as possible, but not simpler." Out of that need for more expressiveness while still keeping the simplicity came David Heinemeier Hansson's decision to try Ruby and create what would eventually be Rails.

Rails was simple because it was opinionated. It came with a set belief system about how you should go about your business. This was something that had never been expressed so openly by a programming framework - essentially: "we know we can't make everyone happy and keep it simple, so instead let's just make most people happy." (Caveat: I work on a Rails CMS, so I'm more than a little biased.)
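Convention over configuration is what that opinion looks like in practice. A minimal sketch (hypothetical model names, but this is vanilla ActiveRecord):

class Post < ActiveRecord::Base
  # No mapping file, no XML: Rails assumes a Post lives in a "posts" table
  # and turns each of that table's columns into an attribute automatically.
  belongs_to :author   # assumes an author_id column and an Author model
end

Post.find(1).author.name   # the only "configuration" is the naming convention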

When we know what we're doing, we usually are willing to make the hard decisions for our users about what they don't need.

Simplicity and a strong opinion usually go hand in hand. When we know what we're doing, we usually are willing to make the hard, initially unpopular decisions for our users about what they don't need. When things get muddled, we have to go down multiple tracks as we don't know the destination.

So many different things have cropped up recently that seem just, well, simpler than what came before them that I'm starting to conclude that we must be getting better at this whole computer thing - we're finally comfortable with what we don't need in our software.

Complicated Version                            Simple Version
Distributed file systems and complex CDNs      Amazon's S3 & CloudFront
Oracle, MySQL and other RDBMSs                 NoSQL stores like MongoDB, Riak and Cassandra
Complicated caching and proxying               Memcached, HAProxy
Apache                                         Nginx, Thin
SOAP, CORBA, XML-RPC                           REST


I once assumed that the graph of technological complexity was rising infinitely up and to the right, but now I think there's hope. Computers arrived and took over so quickly in the grand scheme of things that it just took us a few decades to figure out a way to simplify and still end up where we needed to be. That's not to say that everything is getting simpler right away - once a system is in production it's going to be installed and supported for a long time, and unfortunately this generation's simplicity is necessarily built on the back of last generation's complexity - but I think we're headed in the right direction.

So here's to the Simplicity Era, may we stay simple and happy with our two steps forward until we find the next big step back to take.

[ As an aside, I've intentionally mostly ignored Microsoft in this discussion - from a programming standpoint they exist in a world adjacent to the one I've been paying attention to for the past 10 years, but I think their (now-waning) might made them somewhat oblivious to the forces of simplicity - and no, I don't think Visual Basic is simple, Basic or not. Now with Windows, I told them they needed to make it simpler, and so Windows 7, that was my idea. ]

Let me know your thoughts on any other technologies or trends that are getting the simplification bug, or feel free to burst my tiny, happy bubble and share your irrefutable evidence that we're headed into a Brazil-like complexity meltdown. Do you think we are entering a new Era of Simplicity or is it just a temporary trend, like drop shadows, rounded corners and Lady Gaga?

Posted Wednesday, Aug 18 2010 10:41 AM by Pascal Rettig | Development

You don't want a quote for that project

Note: this post takes the iterative nature of software development for granted. As a consultant on dozens upon dozens of Web projects, I take it for granted. If you have had great experiences in complicated projects that go from specification to completion without change, I've never met you but I would love to as you sound like a mythically perfect client.

So you've got to get a project done that needs outside help. Whether it's for a corporate site or your next change-the-world idea, the normal sequence of events from this step is probably:

1. Write up a Specification
2. Put out a Request for Proposal
3. Look over the responses
4. Of the responses from reputable-looking companies, pick the cheapest

Number four might get more complicated if you are working with companies you already have a relationship with, but the bottom line usually comes down to finding the right balance of price and security. How much are you willing to wager on your project's success to get a cheaper price?

Now let's take a look at this from the perspective of the consultant. I assume most consultants work off a standard hourly rate, estimate the number of hours and other costs (software, hardware, etc) and come up with a quote.

For inexperienced consultants, the number of estimated hours is usually a vast underestimate of the actual time it takes to complete the project. For experienced consultants, it's usually a significant overestimate of the time.

In the former case (and I've been guilty of this), the estimate is based on a best-case scenario that fails to account for the risks of the project. It's an if-everything-goes-right number. Developers are a cocky bunch and tend to see the world through rose-colored glasses right up until the first bug. In the real world, rarely does everything go right, and if the project is done at too low of a number, disaster will ensue.

In the real world rarely does everything go right, and if the project is done at too low of a number, disaster will ensue.

The consultant will be motivated to triage features on the project, because once they have gone over the number of hours estimated, they are essentially losing money for every additional hour of work. Since all good software development is iterative, it is a given that there will be new requests from the client based on feedback from prototypes - requests that weren't part of the spec but are ESSENTIAL to the project being useful when it's all said and done. These will be refused or re-quoted, and features that are no longer necessary, but are in the spec, will be added in.

Barring a mid-project renegotiation, one of two things will happen. First, the project might degrade into an acrimonious back-and-forth battle of wills as both the developer and client become less and less flexible - the former trying to do the minimum as outlined exactly in the spec, the latter holding the developer to features that have no reason to exist except for their presence in the original spec. Alternatively, and much more seldom, the developer will eat the cost of the additional time to provide a quality product and a happy client. While this looks like a win from the client's perspective, future projects from the client will either be turned down or vastly overestimated to prevent a repeat. Any changes to the system will need to be done by a new party (and remember, software is iterative, so there will be changes), bringing a whole new set of risks into the mix.

Experienced consultants, i.e. ones who have gone through this disaster-cycle a couple of times, will see risks and dangers hiding in every crack of the specification and will base their estimate on Murphy's law, making sure their ass is covered for any eventuality. What consultant, after all, wants to lose money on a project? The risks on the consultant's side - finicky clients, non-payment, etc. - are just as real as the risks on the client's side.

To sum up, as it stands now, someone who needs a project done has two options:

1. Underpay an inexperienced consultant and count on their goodwill and professionalism to complete the project.
2. Overpay an experienced consultant

Neither of those two options should sound particularly good, so what's the solution?

Most projects, from a risk/completion-versus-time standpoint, operate something like this:

The risk is enormous at the start of the project. It falls quickly, however, as any developer worth their salt will try to solve the project's unknowns first and foremost. If you are building a real-time stock viewer, you don't build a whole website, membership and payment system before you figure out how to get the real-time data. Thus, a couple of weeks into a project, a lot of risk has already fallen by the wayside. Things will either work the way anticipated, or they won't. If you pay a developer hourly for the first couple of weeks, you'll be able to remove a tremendous amount of risk from the process and won't force the developer to quote a number that covers that risk. Since you don't win if the developer underquotes the project (see above), why should you lose if they overquote it?

What if, as an alternative to asking for a quote in response to an RFP, you asked potential consultants for their hourly rate and an estimate on the number of man-hours each of the chunks of the project might take along with a confidence measure of each of those chunks?

While these early estimates will be a little all over the board, you should see some convergence on the number of hours from the reputable shops and you should be able to pick out the outliers on both ends to stay away from.

Now pick a firm on the same criteria you would have originally, but make sure to request a number of smaller milestones with working prototypes. Each of these milestones, coming in at a specific date, represents a certain amount of developer time. If the developer is ahead of schedule, they can add features to the milestone; if they are behind schedule, features will need to be removed - but the date for a working prototype is the sticking point.

Provided that you have workable milestones, you, as a client, will have a much better idea of how the project is progressing, and, since you are paying hourly, can iterate on changes as necessary to get the best outcome for the project. If the developer can't deliver the first milestone, it's better to know that immediately and get out of the arrangement as early as possible rather than hang onto that sinking ship all the way down to the bottom of the ocean.

At first glance it might appear that you've shifted the risk of the project from yourself to the developer, but the truth is that the risk was always on you. If the project was under-quoted, you're set up for failure (again, see above), while if it was over-quoted, you'll never see that money back (do you think a consultant who was forced to quote a flat-fee will invoice for less?)

While I've plugged it in other posts, the first few chapters of Head First Software Development from O'Reilly give a great overview of why you should do iterative software development, and the book is very readable by anyone interested in software consulting, not just developers. If you are in charge of hiring or managing software consultants, I'd very much recommend you spend a couple of hours on it.

So, like I said, you don't want a fixed quote for that project. A fixed quote is heartbreak on a stack of paper. You want a great developer with an estimate, an hourly rate and a number of measurable milestones to take you to that happy place in the ether where software is delivered on budget and on time.

Posted Tuesday, Jul 06 2010 01:54 PM by Pascal Rettig | Business, Consulting

Lazy Conventioneering

With any codebase there is a set of conventions that should be followed. For example: CONSTANTS_SHOULD_BE_UPPER_CASE, functionsShouldBeCamelCase, _private_variables_should_be_underscored, spaces not tabs, etc. Making the decision early on as to which conventions to follow will give you a consistent codebase. The big benefit, though, is more than just aesthetic appeal. There are real, solid benefits to having consistent conventions in a project.

By giving your entire codebase a consistent structure, you're giving your brain a distinct advantage in understanding what's going on in the code in front of you

Reading, as a mental process, is really hard work for your eyes - the human eye can only focus on a very small portion of its entire field of view to actually read the individual words in a line of text; however, we are able to pick up a lot more information than just the small piece we're focusing on. By giving your entire codebase a consistent structure, you're giving your brain a distinct advantage in understanding what's going on in the code in front of you. Similar to syntax highlighting (find me one programmer who isn't a fan), giving your overworked brain additional cues can lead to significant advantages in scanning over code to find bugs and make changes. Two of my favorite languages take this one step further. Ruby (and Rails specifically) pretty much defines one standard set of conventions to use throughout your code - classes are UpperCase, methods_are_underscored, etc. Python makes indentation (one of the oldest visual cues programmers use to structure code) a feature of the language, ensuring consistent usage across all Python code.
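To make the visual-cue argument concrete, here's a contrived Ruby snippet (hypothetical names) where the casing alone tells you what each identifier is before you've read a line of logic:

MAX_RETRIES = 3            # constants are SCREAMING_CASE
class HttpFetcher          # classes are UpperCase
  def fetch_with_retries   # methods_are_underscored
    @last_error = nil      # instance variables carry an @ sigil
  end
end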

The flip side is also true: code that has no consistent structure or conventions is much harder to grok at a higher level. One of the major differences between novices and masters - the ability to quickly scan and understand code - flies out the window if the code isn't consistently structured and readable. Being lazy about coding conventions does more than just look unprofessional; it makes it more difficult to understand what's going on and leads to more bugs in the codebase.

This doesn't mean you have to recode your entire codebase in Hungarian notation and implement a 100-page style guide for every project - but make sure you pick conventions early on and stick with them. It's more than just an aesthetic advantage; it will lead to better software in the long run.

Posted Sunday, Jun 27 2010 10:49 AM by Pascal Rettig | Development, FNEO

Apple's Eden

Apple passed Microsoft last week in market capitalization. This is big news - something that would have been absolutely unthinkable even a few years ago, pre-iPhone.

Apple was a niche product. It was for designers and the Starbucks crowd, and that was it. Now, if I go to a Ruby meeting, everyone is running on a MacBook. It's ubiquitous. Everyone has an iPhone.

The question is, do we welcome our new sleek, futuristic-dancing-silhouetted overlords with open arms or do we need to be a little careful lest we lose something as Apple becomes the market leader?

We are being tempted to take a bite of their succulent technology, but the danger is not that we are going to be thrown out of paradise, rather that we'll be let back in.

Well, I think the name Apple is incredibly fitting if we reverse the myth of the Garden of Eden (heck, the Apple logo even has a bite out of it). We are being tempted to take a bite of their succulent technology, but the danger is not that we are going to be thrown out of paradise, rather that we'll be let back in. Back into a walled garden where we no longer have the free will to use bad technology (Flash), but also where a gatekeeper controls what gets let in and what doesn't.

I believe Linux users and users of free software should be afraid (yes, I'm liberally sowing FUD right now, but I think a little caution is in order). I never liked Microsoft, but I was never scared they would become dominant to the point where they could affect my personal freedom. Why? Because people used Microsoft products, but they were never really happy about it. Even the MS fanboys were pretty tame, sort of like they accepted their position as evangelizers of the platform but secretly plugged black headphones into their iPods so no one would know.

Apple users are different. They believe.

To use a Firefly reference, who scares you more:

On the left you have a sociopathic killer who will electrocute and flay you upside down if a job goes wrong
(in this example, that would be Microsoft)


On the right you have the idealistic operative who wholly believes they are guiding you to a better world
(Apple).


I'd take the sociopath every time over the idealist who is willing to destroy every right you have along the way just to get you to a better world. [ This is a just a metaphor, I don't really think either company has killed anyone - Chen's still alive, isn't he? ]

Steve Jobs thinks he's building a better world. They are refusing to let political satire apps into the App Store to protect the children. They are wiping years of developer hours off the face of the earth because they don't want apps that create "desktops" in the App Store (see the MyFrame story). But he wholly believes: "We always saw ourselves as building the best computers we could build for people."

A future of the Web that has a gatekeeper scares the hell out of me.

Do you think Steve would think twice, because of Web standards, about enabling Objective-C scripting in the iPad browser if he thought it would be better for iPad users or Apple? I don't think so. A future of the Web that has a gatekeeper scares the hell out of me (I'm a web developer, so clearly I have a selfish vested interest). Update: see this great example of how Apple treats the "open" web.

Now don't get me wrong, I love Apple products. In fact, after I poured wine over my MacBook and got angry that they wouldn't fix it for me at any price (once a unit is water-damaged they won't touch it - even though everything on the machine works except the backlight), I'm going back with my tail between my legs today to get another MacBook. It's the best laptop out there.

But I'm holding off on the iPad as long as I possibly can because I don't want to fade away into the walled paradise just yet.

Posted Thursday, Jun 03 2010 11:01 AM by Pascal Rettig | FUD, Rant, Troll

You're a dork

At least I hope you are. If you aren't you're going to be at an undeniable disadvantage to those who are.

My wife and I recently went to see Josh Ritter at the Orpheum. When he came out on the stage it was odd because he was smiling. I mean really smiling. I had assumed that he was of the angsty alt-country mold I was used to (along the lines of a pre-marital-bliss Ryan Adams), so it was unexpected that the guy on the stage belting out soulful, intricately-worded Midwestern gems was grinning ear to ear.

"Oh wow," I thought "Josh Ritter is a real dork."

After being startled by this fact for a few songs, it sank in that this wasn't an act - he was genuinely excited to be there on stage singing. His excitement and enthusiasm overwhelmed the audience, and we quickly became dorks right along with him. When the lights turned off and he sang a song minus microphone or any amplification - just him with an acoustic guitar, strumming and belting out to the hushed crowd - we ate it up. When, mid-concert, one of his bandmates' mothers came out and recited the poem "Annabel Lee" on stage, none of us jaded Boston yuppies in the crowd batted an eye.

Passion is intoxicating. Watching someone do something well that they are passionate about is an enthralling experience.

And passion is Dorkiness. It's the kid back in high school who was a little too excited about model trains and took a lot of abuse for that fact. But no matter how many times he got made fun of for responding seriously to a question asked in jest, he continued to answer in earnest when someone asked him about his newest locomotive.

While that passion might get you a wedgie in high school, it's a recipe for success in real life.

Jesse Schell has a section in his "The Art of Game Design" called "The Secret of the Gifted" that speaks to this:

Well, here is a little secret about gifts. There are two kinds. First, there is the innate gift of a given skill. This is the minor gift. If you have this gift, a skill such as game design, mathematics, or playing the piano comes naturally to you. You can do it easily, almost without thinking. But you don't necessarily enjoy doing it. There are millions of people with minor gifts of all kinds, who, though skilled, never do anything great with their gifted skill, and this is because they lack the major gift.

The major gift is love of the work.[ Pg.6 ]

Love will keep you working at that skill, keep you growing at it. As Schell writes: "And your love for the work will shine through, infusing your work with an indescribable glow that only comes from the love of doing it."


Being across the table actually interviewing people, I finally understand why interviewers always say they look for candidates with passion.

Being across the table actually interviewing people, I finally understand why interviewers always say they look for candidates with passion. When I first heard this, I called shenanigans. Passion above IQ, resume and schooling? But now, speaking from personal experience, passion really is what you look for as a prospective employer. Most important is whether the candidate would overall be a good fit in the office. The next most important thing, by far, is what gets them fired up - what makes their pre-interview nerves melt away as they go off on a slightly-too-long tangent about something work-related they love. Someone with a limited skillset but a passion to learn your business beats a learned automaton every time.

Me, I'm a Web dork. If you want to get into an uncomfortably animated discussion, ask me about anything related to current web technologies (I'll get extra excited if you mention the word "Rails") and you'll be in for at least a half-hour discussion of how HTML5 is going to cure cancer and bring about world peace. Make the mistake of asking our designer (and my wife) Martha about typography, and you'll learn far more than you ever wanted to about kerning and serifs.

So, like "Nerd" and "Geek" have morphed from insults into badges of pride, we're reclaiming "Dork" too. In fact, I couldn't imagine hiring someone who didn't give me a "Wow, they are a dork" moment.

Oh, and go buy Josh Ritter's Albums. We need more dorks like him in the music business.

Posted Tuesday, Jun 01 2010 02:11 PM by Pascal Rettig | Rant, Troll

Thanks for the Semantic Web, Facebook (I still don't like you though)

Everyone is talking about what a bunch of evil privacy destroying, soul sucking jerks Facebook is. There's even a quit Facebook day. And I agree. I like my privacy and I would love something like Diaspora (despite their curse) to move the social graph out of one corporation's greedy hands.

What seems to have been overlooked by the tech media however is what an absolutely fantasmagastical (I'm really that excited) thing Facebook did to push us all towards that wondrous pie in the sky of Web 3.0.

With OpenGraph, we have metadata now. Real, usable, indexable metadata.

With OpenGraph, a web standard that Facebook released to help it pull information from web pages people "Like," we now have metadata. Real, usable, indexable metadata. Facebook is throwing its full weight behind OpenGraph by making it invaluable for maximizing the usefulness of its new unit of social currency, the Facebook "Like." If you haven't peeked at the Open Graph Protocol, take a quick look.

Yes, it doesn't do everything, but notice how simple it is. There isn't even a mention of the words "semantic" or "triples". It's something that's very much targeted at being useful and usable. It's something you could explain to your boss and they'd understand it.
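The whole implementation burden is a few meta tags in a page's head. Here's a sketch of what a movie page might carry (values borrowed from the Ruby example below; og:title, og:type, og:url and og:image are the protocol's required properties):

<meta property="og:title" content="Kick-Ass" />
<meta property="og:type" content="movie" />
<meta property="og:url" content="http://www.rottentomatoes.com/m/1217700-kick_ass/" />
<meta property="og:image" content="http://images.rottentomatoes.com/images/movie/custom/00/1217700.jpg" />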

Facebook released OpenGraph at F8 in 2010, and it's already showing up all over the Web because of Facebook's market weight (I noticed it in the HTML of GoodReads pages). While OpenGraph is based on RDFa, OpenGraph and RDF are very different beasts. RDF has been around for most of the decade but hasn't gained much traction on the commercial Web. If you want to understand why, try to read the primer, or even the primer primer. (The fact that someone even felt compelled to write a primer primer speaks to the issue - RDF is complicated to understand and implement.)

So here's the awesome part: everyone's implementing OpenGraph for Facebook, but us Web developers can hitch our metaphorical 'poon to the back of Facebook's speeding truck and get to the same destination: the semantic web.

There are already libraries in a number of languages to support OpenGraph, and using them (at least the Ruby one) is dead simple:

require 'opengraph'
movie = OpenGraph.fetch('http://www.rottentomatoes.com/m/1217700-kick_ass/')
movie.title # => 'Kick-Ass'
movie.movie? # => true
movie.image # => 'http://images.rottentomatoes.com/images/movie/custom/00/1217700.jpg'

[To put a slight damper on the Unicorns-and-Rainbows tone of this post: the opengraph gem didn't want to pull metadata from the GoodReads site.]

Contrast this with an actual API - even a simple REST one like the GoodReads API: you need to get an API key, limit yourself to one request per second, and handle the data that comes back on a per-website basis.

Perhaps more importantly, you are bound by the terms of use of the website's API for anything you pull down. In the case of the GoodReads API, you need to remove anything you pull from their API within 24 hours. In the case of OpenGraph, while I'm sure the legal responsibilities are a little murky, you can pull and store data as you like provided you abide by fair use (e.g. Google doesn't need to remove all the data it scrapes from the Web every 24 hours).

Now, something like oEmbed is obviously a competitor to OpenGraph; however, oEmbed is actually a lot more work for a website to implement than OpenGraph, and there's not the same motivation - Facebook "Likes" - driving adoption. We use a (great) third-party provider called Embed.ly to support a lot of extra sites that haven't implemented oEmbed, but truthfully, it would be nice to be able to drop the middleman and pull metadata straight from the source.

OpenGraph is the classic case of the Bazaar functioning better than the Cathedral.

OpenGraph is pretty new, and given Facebook's willingness to change and tweak constantly, I'm sure there's more useful stuff that will be added to the protocol soon. So even if the semantic Web isn't here, we're at least pointed in the right direction. OpenGraph is the classic case of the Bazaar functioning better than the Cathedral. I'll take a quick-and-dirty approach that's actually implemented by millions of sites over a "perfect" idea that no one can understand, and that's exactly what OpenGraph is starting to get us. It may be Web 2.5 - but it's here today.

Posted Thursday, May 20 2010 10:22 AM by Pascal Rettig

A band-aid for the comments conundrum: Developing a Tilder filter

I made the mistake yesterday of reading through the comments section on a Boston.com article. I should know better and I quickly regretted my decision.

The comment sections on websites these days suck, either because they are completely devoid of any sort of intelligent thought (poster child: YouTube) or because people really enjoy expressing strong opinions tangential to the subject matter just to make themselves feel smarter (poster child: Slashdot).

For the former problem there's not much you can do - to have a good conversation you need people willing to write coherent sentences. For the latter problem, unfortunately, it's not a question of being able to write the comment; it's being willing to take the time to read the article and add something productive to the conversation.

Now, the problem may stem from us internet-friendly millennials, who think we're terrific on account of our extensive collection of after-school sports trophies and refrigeratored A+'s, and so are convinced that the world really, REALLY needs to hear what we have to say, regardless of what we're saying. Or maybe it's a universal issue and people just like griefing. It's tough to tell.

In any case, for our purposes people who read blog posts fall into three overlapping categories: people who want to write comments, people with something intelligent to say, and people who actually RTFA. Ideally, you'd like the comments on your blog to come from only the intersection of all three of those groups.

What often happens, however, is that you just end up with the difference of the first group minus the other two. People who have RTFA will scroll down to the comments, see all the comments that say "You are an idiot and should never have children" and decide to move on. The other group - people with something intelligent to say - will often make a great comment that adds nothing to the subject matter because they can't be bothered to RTFA to the end. Furthermore, Austrian goats are my favorite animals!

As a partial solution to this problem we've come up with a concept called the Tilder filter that works like this: somewhere in the blog post, preferably near the middle, there needs to exist a complete non sequitur. Something that's really out of place and will catch the attention of anyone who reads the post.

Next, as soon as someone tries to submit a comment, the system sets a cookie via JavaScript (more on this in a second) and darkens the screen with a lightbox-like popover obscuring the entire page. On this popover is a multiple-choice question with 8 or so answers, which the user has twenty seconds to answer. The answer is, of course, the non sequitur mentioned earlier.

If they get it wrong - too bad. No comment for them. Of course they could remove their site cookies and try again but at that point they are going to be so angry that their comment will be easily discernible and moderated (e.g. "Your site sucks, I just lost my f#$@%ing comment you worthless pile of ..." and so on)

Now, regarding the cookie: since we set a cookie the minute they press submit, the system can black out the page if they bring it up in another browser and disable the comments form after they've failed to answer the question correctly the first time. This could all be done very simply in JavaScript as a proof of concept, while a real implementation would need some server-side support.
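To make the server-side half concrete, here's a minimal sketch in Ruby (Sinatra-style, with hypothetical route and parameter names - the real version would live in whatever stack runs your comments):

require 'sinatra'

# Hypothetical: the non sequitur answer for each post, keyed by post id.
NON_SEQUITURS = { '42' => 'Austrian goats' }

post '/posts/:id/comments' do
  # Anyone who already failed the quiz has the cookie and gets blacked out.
  halt 403, 'No comment for you.' if request.cookies['tilder_failed']

  unless params[:tilder_answer] == NON_SEQUITURS[params[:id]]
    response.set_cookie 'tilder_failed', :value => '1'  # lock out retries in this browser
    halt 403, 'Wrong answer - no comment for you.'
  end

  # ...otherwise store the comment as usual...
  'Comment accepted.'
end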

I'd love to hear feedback on the idea, provided, of course, you've taken the time to read the full post.

Posted Thursday, May 13 2010 07:36 PM by Pascal Rettig | Development, Troll

Thriving in the coming game mechanics hype cycle

It was the famous Jesse Schell video from DICE 2010 that finally convinced me that game mechanics on websites are here to stay. I had hoped that once people got tired of mayoring-up and badges the trend would die down, but that video opened my eyes to the fact that it's not going anywhere.

Why wasn't I convinced before that? I think it might have something to do with the fact that currently implemented game mechanics don't particularly engage me. I love video games and play on Steam and Xbox but have never gotten into achievements, so I idiotically assumed that since I wasn't that interested most other people weren't either. Some recent conversations I've had have clearly indicated that's not true and that a significant portion of the population can be manipulated to change their behavior via achievements.

Two things someone said in particular amazed me:
1) People will buy a crappy movie knock-off game because those games intentionally make achievements easier than normal games.
2) People will pay to skip portions of a game to keep up with their friends (Farmville, WoW, etc.) but don't think their friends do the same.
(Notice that neither of these is actually "fun." In fact, they both constitute, in a loose moral sense, "cheating.")

Just like "niche social networks" were the big thing a few years ago game mechanics are the new hotness.

So I now very much believe, as someone poignantly wrote, that game mechanics are the new black. We are already heading into the overhype stage. Just like "niche social networks" were the big thing a few years ago, game mechanics are the new hotness. The idea of "game mechanics" has filtered down to the client level, and clients are asking for them. However, just like with "niche social networks," few of the multitude of projects are going to bring real innovation to the space for a period of time. Why? They don't need to - clients are going to be asking for plain-jane points and badges because that's what they've seen work, so most developers aren't going to push back and try to innovate, because that's not what clients want.

Most developers aren't going to push back and try to innovate because they don't have to and that's not what the client wants.

Simple game mechanics are going to sprout up everywhere soon, and it's going to get painful. If you think constant Foursquare Twitter updates are already a pain, extrapolate that exact same idea to every area of your life. To use the example from the Schell talk: want to get your teeth-brushing points? You'll have to let your wifi-enabled toothbrush tweet every time you brush your teeth. "Just made my teeth extra clean with #colgate!"

At an abstract level, one of the reasons for the success of points, badges, etc is that they guide our efforts. They place a clear valuation on our time and say "You can do A or B or C, but B is worth 2x as much as A and C doesn't get you anywhere." They are an effective way in our mentally-exhausting media saturated lifestyles to cut through the noise to a clear quantitative signal and feel rewarded for our efforts with tangible results. Once they are everywhere, however, their apparent value is going to decrease as the noise increases.

The argument for implementing basic game mechanics is that it doesn't matter whether everyone's overdoing them, because they will still help engagement, expansion and retention on my site. Unfortunately, that's not necessarily true, and let me try to explain why:

We call these things game mechanics but they aren't really. They are meta-game mechanics. Mechanics that operate on a level outside the game. For example in a 3D FPS, the game mechanics would be the running around and the shooting of aliens in the head. Your score and achievements are meta-mechanics outside of the game. Shooting aliens is fun. Getting points for doing so isn't, in and of itself, at all fun.

In a similar fashion, points and badges on Web sites operate outside of the core feature of the website. You could argue that on a site like FourSquare things get a little muddled - but even there the "Game" is the tips and location aware check-ins - the badges or mayor-ships you get for doing so are a meta-mechanic.

(As an aside, has anyone come across a meta-meta game mechanics website? A site that gives you points for the points you get on other sites - I'm sure there's at least one out there already.)

To get back to why a focus only on the meta-mechanics might be bad, let's take the example of MySpace and Facebook. The game mechanic, aka the "meta," is the number of friends you have in your social network. For a lot of MySpace users that was all there was to it - get as many friends on your account as possible (well, besides ugly wallpaper and writing idiotic shout-outs on people's walls). Because that was the game, MySpace grew like crazy, but once people got tired of adding friends it was done [a gross over-simplification, I know]. Facebook was much the same at the beginning, with a focus on adding friends, but then they innovated the hell out of the game. Making it actually social by pushing other users' updates to your wall and adding third-party applications kept users engaged and Facebook growing like crazy.

So a game mechanic can work to your advantage, but only if you don't let it dominate the show.

Companies shouldn't really care about Game Mechanics, they should care about Viral Mechanics

Sachin Agarwall made a great point in his "Don't be a douche" BarCamp Boston presentation. The slides are sort of hard to follow, but here's the gist: companies shouldn't really care about game mechanics; they should care about viral mechanics. How the game mechanics promote viral expansion of the user base is what is actually important to the bottom line. Just throwing a couple of points into the system amounts to Cargo Cult Game Design. As Agarwall pointed out, the issue is that at some point notching the viral mechanic up in favor of the company will make it annoying enough that users will turn it off or leave.

To look at it another way, the fact that the simple game mechanics most people are talking about put the focus on the meta means that when it's all said and done they actually take away from the core of the game. As Jeff Atwood wrote a while back: Meta is murder. Even though he was talking about online forums and discussions, I think he had it right.

The tagline from FourSquare is: "Foursquare on your phone gives you & your friends new ways of exploring your city. Earn points & unlock badges for discovering new things."

FourSquare is more about the meta than the game...

if they don't provide real value, people will quickly move on to the next flash in the pan

Yet I never hear people discussing the great things they found or discovered on FourSquare. I have seen and heard plenty about that second part. Go to http://foursquare.com/ and watch the "recent activity." What is the Tip-to-Badge/Mayor ratio you see? The times I looked, I saw a lot fewer Tips than other stuff. FourSquare is more about the meta than the game. That's obviously working well currently, but I wouldn't count on it continuing if they don't provide real value. People are always quick to move on to the next flash in the pan.

To go back to the Atwood blog: "If you don't control it, meta-discussion, like weeds run amok in a garden, will choke out a substantial part of the normal, natural growth of a healthy community." And yet with Game Mechanics we are intentionally adding meta-elements into our systems and making the conversation revolve around them.

Now as businesses we like the meta-elements because they take advantage of people's strong innate desire for personal-validation and self-aggrandizement, but more than just muddying the waters, they also unfortunately bring some bad behavior into the mix.

To put it simply: if it's set up like a game, people will play (and cheat) to win.

When users are playing against themselves, you don't care that much if they cheat - engagement is king on the web, and if players are taking the time to cheat, you're actually probably OK with it as a site owner. When users are competitive with other users, however, bad things can happen. I stay away from Digg these days because the game part of it has skewed to dramatically favor the all-powerful Digg superusers, and that means it no longer feels like a balanced, social environment.

Summary / TL;DR:

Game mechanics - at least the simple ones people are talking about, like points and badges - aren't actually the mechanics of the game at all. They are the meta-mechanics that surround the game, built to induce behavior inside the game. But there's no reason it has to be that way. As we roll through the game mechanics hype cycle, the way to avoid launching a dud is to focus your efforts on making the core of whatever system you are building more game-like (put simply, fun), not to toss a thin veneer of simple game mechanics around the outside. Read the literature and blogs on the subject and brush up on what makes a game fun.

Try to make sure your app is a game worth playing in-and-of itself before resorting to tricks.

I love the Mega Man games because, unlike some other games that just pat you on the back when you complete a level, in Mega Man you get a kick-ass new weapon. It adds something of value inside the game. FourSquare's push to have mayorships translate to perks in real life is a great piece of value getting added, but unfortunately, because it's an out-of-game perk that is also a scarce resource, it encourages cheating that has negative effects (think FourSquare wants to play check-in cop?). My opinion would be to try to make sure your app is a "game" worth playing in and of itself before resorting to those types of tricks.

And in any case, you'll have plenty of chances to do in-real-life rewards next year when we roll through an augmented-reality overhype cycle.

[ Update 5/20: Case in point "Badge" support on HuffPost ]

Posted Monday, May 10 2010 11:18 AM by Pascal Rettig | Development

Ugly Abstraction, or "Just one more option parameter"

One of the mistakes I was (and probably still am - though being aware of the problem has helped) guilty of is the crime of trying to make one piece of code do too many things.

Like a vacuum cleaner with too many useless attachments that don't work correctly and keep getting lost, expanding one function or piece of code by adding too many different conditional branches leads to a quick and ugly code death. At first glance, adding options might seem like vintage DRY - if you already have code that "almost" performs a certain function, why not add a couple of lines and a parameter or two to make it do a little bit more? In reality, though, a hundred and one conditional branches are the quickest way to take an ugly-stick to your code. The next guy forced to look at the spaghetti you've written will probably just end up quitting so that he doesn't have to maintain it.

The better option, which is still DRY but also allows for separation of concerns, is to extract the shared functionality and then invoke both your old function and a new function to get the job done. To give a simplified example (extracted from some old PHP code I looked at recently), what's better:

function generate_table(&$data, $wrap_in_a_div = false) {
  $table_info = ... // Generate your table
  if ($wrap_in_a_div) {
    return "<div>" . $table_info . "</div>";
  } else {
    return $table_info;
  }
}

Or:

function generate_table(&$data) {
  $table_info = ... // Generate your table
  return $table_info;
}

function generate_table_in_div(&$data) {
  return "<div>" . generate_table($data) . "</div>";
}

 

Ignoring the triviality of the example, for my money the second one is liquid gold compared to the first. There may be some perfectly valid need to have the table in a <div> tag sometimes and not others, but the table-generating code shouldn't worry about it, because it's not its job. If it turns out you need a whole bunch of junk occasionally wrapped in a <div>, you can, down the road, extract a generic wrap_in_div method and refactor generate_table_in_div to use that method.
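That refactor would look something like this (a sketch continuing the toy example above; wrap_in_div is the hypothetical helper just mentioned):

function wrap_in_div($content) {
  return "<div>" . $content . "</div>";
}

function generate_table_in_div(&$data) {
  return wrap_in_div(generate_table($data));
}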

Posted Friday, May 07 2010 11:45 AM by Pascal Rettig | Development, FNEO

Beware the Faux Rush Project

Ring! goes the phone with a certain noticeable urgency. You reach over and answer it with some trepidation.

"We've got a great project" says the breathless client, "but we need to quote it today and it's gotta get done in the next two weeks!"

Looking over your schedule you think, "Well, I can push this here and this there and I think I can get it done."

"And how much?" asks the client

You crunch some numbers and they come out too high - you only have two weeks after all, so even with the rush fee you can't charge them that much more than two straight weeks of time, right? So you fudge the numbers, tell the client that you've marked some features as "we'll get to them if we have time," and make the price tag two weeks of your company's time.

...Two Weeks later...

You've been at the office every night until 11, weekends included. You busted your butt, but guess what? It got done. It's a little rough around the edges and you didn't get to any of the "nice to have" features, but you snuck it in before the deadline and it even has some decent test coverage. A couple of your other clients are pissed off because you've been a little less responsive than normal, but you'll have plenty of time to take care of them now that this is done.

Ring! goes the phone, a little too perky for your current state.

"Great news!" says the client, "The boss is out of town so we can't launch for another two weeks. He took a quick look at the project last night though and has a bunch of changes. We need to put in those features that said you would get to if you had time. Now you have time."

Crap.

This story has happened to me in some form more than once, and it has made me realize that often *urgent* client deadlines are closer to preferences than actual stop-dead dates. Don't let the prospect of a filled schedule and a quick buck convince you to make decisions about schedules, features and prices that you normally wouldn't. Stick to your guns and make sure the project works for you. Otherwise the project may drag on and you'll end up burnt out, having neglected your other clients, with weeks still to go on a project that you underquoted. We're still happy to do the occasional rush project, but the same rules apply as for any other project.

Posted Friday, Apr 30 2010 11:46 AM by Pascal Rettig