Friday, November 23, 2007

Typophile Temper Tantrum

Leopard has a bunch of neat new APIs that we developers can have fun with. There are also some nice performance improvements under the hood.

When it comes to usability, though, Apple has either moved every last person with even a smidgen of design sense to the iPhone, or the programmers took the designers out back, shot them, turned around, and found these new, shiny APIs much like a child finding a mother's jewelry and makeup case for the first time.


In particular, the built-in dictionary has a nice addition: Wikipedia search. I've wanted this for a long time, so there was much rejoicing when my office mate clued me in. Whoever worked on this did at least 2 things right. The first is actually including Wikipedia search. The second I'm saving for the punch line.

There are a couple of technical issues (like doing the search on the main UI thread) but the main issue, the one that essentially made this feature unusable for me, was the choice of typeface. Baskerville! Seriously, whoever chose Baskerville should be taken out back with the rest of the designers. If it wasn't consciously chosen then shame on you - you should always choose your fonts with care. Baskerville is cool for the first few seconds, giving the dictionary and thesaurus parts of the app that old, leather-bound book feel. I put up with it because there was never a large amount of reading to do when grabbing a definition now and again, so there wasn't much of an effect on my reading speed. For Wikipedia-length articles, though, Baskerville was enough to make the feature unusable for me (1).

This needed to be fixed!

After a bit of poking around, though, I discovered thing-that-was-done-right #2: the app renders its content using WebKit and simply grabs the Wikipedia results and applies an XML transform to them.
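Just to illustrate the general mechanism (this is not Apple's actual pipeline - the URL and stylesheet below are hypothetical placeholders), here's roughly what "grab some XML, apply a transform" looks like in Python with lxml:

    from lxml import etree
    import urllib.request

    # Fetch an XML document (placeholder URL) and parse it.
    with urllib.request.urlopen("https://example.org/article.xml") as f:
        doc = etree.parse(f)

    # The stylesheet owns the presentation, which is exactly why a
    # single stylesheet tweak can change the rendered typeface everywhere.
    stylesheet = etree.parse("article-to-html.xsl")  # placeholder stylesheet
    transform = etree.XSLT(stylesheet)
    print(str(transform(doc)))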

Bingo! It was only a matter of time before I found the right switch (2).

Turns out everything I needed was in one CSS file. Also turns out I had been waiting for something of just the right size to attack with my new Python skills (3). Enter grotesque!

> open /Applications/
> grotesque
> grotesque --revert

The script is pretty straightforward, well documented, and should be non-destructive (4). It's also written so that if you'd rather not use 10pt Verdana, you can easily parameterize that.
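For the curious, here's a minimal sketch of the idea (the stylesheet path is a placeholder; the real grotesque locates the CSS file inside the app bundle and does more careful bookkeeping):

    import shutil
    import sys

    CSS_PATH = "/path/to/the/bundled/stylesheet.css"  # placeholder path
    BACKUP_PATH = CSS_PATH + ".orig"

    # A rule appended to the end of the stylesheet wins the cascade.
    OVERRIDE = ('\nhtml, body { font-family: "Verdana" !important; '
                'font-size: 10pt !important; }\n')

    def apply_override():
        shutil.copy2(CSS_PATH, BACKUP_PATH)  # keep a pristine copy: non-destructive
        with open(CSS_PATH, "a") as css:
            css.write(OVERRIDE)

    def revert():
        shutil.copy2(BACKUP_PATH, CSS_PATH)  # restore the original stylesheet

    if __name__ == "__main__":
        revert() if "--revert" in sys.argv else apply_override()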

1) For those who don't know, I have slow visual processing speed, so I normally read much slower than the average person (I'm in the 4th percentile). When displays are visually cluttered, others might slow down by a constant factor whereas I'll slow down by an order of magnitude.
2) And since I knew it was Baskerville from the minute I looked at the thing, grep was my friend. I must admit I even went as far as patching the binary, but it seems the 2 instances of 'Baskerville' in the binary now control nothing at all.
3) Yes, yes I know, I'm late to get on this Python train and Ruby is the new cool kid now.

4) I've tested it well on my machine but if you're really paranoid you can make a backup of the CSS file first.

Tuesday, October 16, 2007

Waterloo you... amuse me!

So I've now been at the University of Waterloo for 5 years and a month. I've been interacting with our brain-dead PeopleSoft course registration software for about 7 months longer than that. What's more, I've known I wanted to go to Waterloo since about grade 9. For the sake of argument, though, let's say I started seriously researching the university in grade 10 (Jan. 2000). I know where to go on their web site to get info; I've been doing it for nearly 8 years now. None of the important landing pages or resource pages have changed in that time.

Why oh why then did I get this email late last week:

Hi Shawn,

We want to let you know about our three information websites.

The Quest website offers information about enrolment appointments, deadlines, unofficial transcripts, etc.; the Registrar's Office website offers information about convocation, final examinations, ordering official transcripts, the undergraduate calendar, etc.; and the Student Awards & Financial Aid website offers information about scholarships, bursaries, OSAP, Work Study, etc.

We recommend that you bookmark them for future use.

This mailbox is not monitored; do not reply using your mail "reply" feature.


Office of the Registrar

University of Waterloo


Monday, September 17, 2007

Sunday evening design shenanigans

I was thinking lately about how to write a blog entry summarizing how everyone always has an opinion on design. Why is it that so many people will always have an opinion about how something should be designed? Software engineers certainly don't have random people walking up to them suggesting how they should architect their next framework. You don't see people randomly telling civil engineers, or for that matter construction workers, how to place the trusses on that new bridge. Better yet, when's the last time you saw a patient tell a surgeon how to perform the procedure? (OK, I suppose the patients are under... maybe that's what more designers should do with their clients while they work up a polished idea.)

But why bother writing about it... I just stumbled on this tongue-in-cheek song about the most common phrase a designer is likely to hear:

If you don't get the song then maybe it's a clue that you should stay out of the way of the designers you are paying so much money for and let them do what they do best.

Monday, August 06, 2007

generalAvailability(VMware Fusion) && num(VCPUs) != numRunning(VMs)

Because every geek needs to post a blog entry with a title written in predicate logic every once in a while :)

Fusion 1.0 is now generally available! Woo Hoo

This post links to a ZDNet article (though there are many others) because I wanted to make a slight correction. I don't have anything against ZDNet - thanks for the coverage. I can see how VCPUs and VMs might be confounded, and this little error seems to have spread beyond the land down under; the tubes must have been particularly empty during the last week.

VCPUs, or virtual CPUs, are what Fusion (and other VMware products) expose to the virtual machine (VM) to allow the VM to execute instructions. In other words, when you run programs in a virtual machine, they think they are running on a real computer, executing instructions on a real CPU. To greatly simplify things, the VCPU abstraction confines the execution of a single virtual machine and ensures that it can't wreak havoc on the rest of the system it's running on (including other VMs).

Just like a real computer can have 1 or more processors (CPUs), a VM can have 1 or more VCPUs.

How many VCPUs can a VM have?
In Fusion each VM can have 1 or 2 VCPUs, the same as Workstation 6. We actually create new VMs with a default of 1 but allow people to set it to 2. Why, you may ask? The more the merrier, right? Well, it turns out that's not quite the case. There are certain things that will run "faster" given the extra parallelism, but some things actually run slower. This is because of our dependence on a host operating system: OS X in the case of Fusion, Linux and Windows in the case of Workstation. Because the programs running in the VM have no idea they are running inside a VM, they assume they have full control of the hardware. In particular, they can ask that certain instructions be run simultaneously on certain CPUs. One of the main tenets of virtualization is that execution in a virtual machine should behave the same and yield the same results as running on a real computer. This means that Fusion can't turn around and run the requested instructions at different times; they must be run at the same time on the different CPUs. That turns out to be rather hard to do efficiently on most systems.

So if it's sometimes slower why run with 2 VCPUs?
The main reason would be to actually test out multithreaded code. It's hard to guarantee that you've written thread-safe code, and most problems surface when running stress tests and going from 1 to 2 CPUs.
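To make the thread-safety point concrete, here's a toy sketch (in Python, purely for illustration; it isn't Fusion code) of the kind of latent race that parallel stress testing tends to flush out:

    import threading

    class Counter:
        def __init__(self):
            self.value = 0

        def increment(self):
            # Non-atomic read-modify-write: another thread can interleave
            # between the read and the write, losing an update.
            current = self.value
            self.value = current + 1

    def hammer(counter, n=100_000):
        for _ in range(n):
            counter.increment()

    counter = Counter()
    threads = [threading.Thread(target=hammer, args=(counter,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 200000; lost updates show up as a smaller number. Truly
    # parallel execution (2 CPUs or VCPUs) makes the bad interleavings
    # far more likely than timeslicing on a single CPU does.
    print(counter.value)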

OK, so how many VMs can I really run then?
In theory, we set an (artificial) upper limit of 24 VMs. In practice, we will never let you start a VM if the computer you are running on doesn't have enough resources. For example, if you have a limited amount of memory and you are trying to start a VM that would consume much more than the memory available (to the point that your entire system, not just the VM, would become unusable), then we won't let you start that VM. This means that even with an Xserve maxed out at 16 GB of RAM you might never reach the 24 VM limit.
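A back-of-the-envelope sketch of that kind of admission check (the numbers and names here are made up for illustration; Fusion's real resource accounting is more involved):

    def can_power_on(vm_memory_mb, host_free_mb, running_vms,
                     overhead_mb=128, max_vms=24):
        """Refuse to start a VM that would exceed the cap or exhaust the host."""
        if running_vms >= max_vms:
            return False
        return vm_memory_mb + overhead_mb <= host_free_mb

    print(can_power_on(1024, host_free_mb=4096, running_vms=3))  # True
    print(can_power_on(8192, host_free_mb=4096, running_vms=3))  # False: host would thrash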

Long story short, we don't limit you to 2 VMs (we limit you to 24, but that won't really matter in practice given the current state of hardware); we do limit you to 2 VCPUs, since more wouldn't effectively run any faster. Load up that system with RAM and run the VMs you need to get your work done :)

Sunday, June 10, 2007

Back to sleeping a little more... for now

So at Macworld last January I talked to a bunch of people who all wanted Fusion to have better Mac integration. A lot of these questions at the time were prompted by Parallels' announcement of Coherence. My answer to all of you at the time was "We (internally) have a bunch of ideas of how to make the integration better. What exactly is it you would like to see; we're listening."

Well, we listened. I've been pretty quiet on my blog lately. In January I had to go back to school. Then, because of a turn of personal events, I ended up taking some time off from school (sort of). That meant I had more time to work with the rest of the team on Fusion. It turns out that timing was just right; I ended up doing a bunch of work on what came to be known as Unity, which all of you can now play around with in the latest beta. You can also get a sneak peek of Unity in a YouTube video that Regis and I put together:

We still have a bunch of ideas about how to keep pushing the integration even further. If you'd like to give us suggestions about how we can do even better, find us at WWDC.

Wednesday, March 21, 2007


I've been back home for the last few weeks, so I've fallen back into some of the good old Quebec culture. The reference is to one of the more recent Quebec movies, Bon Cop, Bad Cop: a bilingual movie in which a murder scene set on the Quebec-Ontario border opens the story of the two provincial police forces working together.

For those of you who know a bit more about me, you may also notice that if I'm back home (proverbial stepping stone) and not in Waterloo right now, it's because I'm getting ready for yet another move. A few personal circumstances came together in an odd way; I'm about to head back down to California to work at the VMware mother ship on something really really cool (tm). No, I'm still not quite done school yet. Yes, this really really cool (tm) thing is Mac related. I can't say more right now, but you can be sure I'll keep you posted the minute I can :)

What prompts this post, though, is an interview synopsis I just came across: I've criticized executives and companies for ignoring competition on this blog before. Some of those companies I've worked for. Also, less on this blog, but definitely if you have me at a dinner table with a glass of wine in hand, you'll get a very passionate speech about the sad state of the music industry. I've had variations of the "fix that shit" talk with friends of the family and even higher-ups at Sony Music for at least 3 years now.

Well, for the first time in a while, and most notably since Sony's rootkit debacle, I've started to have a bit of restored faith in the company. If they have a CEO who can truly mix passion, humility, determination, and humour the way he seems to in the interview, then things can't be all bad. Being a Western CEO at the head of Sony is also no easy feat. This guy seems to get how software should play with hardware, and he gets digital media. Hopefully some of that can trickle down to Sony BMG (their music business).

Just like "Bon cop bad cop" is very close to home for more than just superficial reasons so is Sony. I'll be keeping an eye out for them. They did after all invent the Walkman and I kind of sort of remember that being a big deal (I wasn't around when it first came out and very young still in the hay day of it's popularity).

Friday, March 16, 2007

Daring Fireball on Thurrott & Paul Kedrosky on Jobs on DRM & Music

Over the last couple of months I was beginning to lose faith in Gruber's ability to present a good solid argument. In one of his more recent posts, Gruber does an excellent job of addressing key issues concerning the open letter from Steve Jobs about removing DRM from music:

He does a better job than I could have at getting at the crux of the matter. I still really wish he would do away with calling people jackasses. You hear me, John: you are a better writer than that and can do a much better job of convincing people with solid research and well composed arguments.

It's well worth the read.

Thursday, March 15, 2007

Bone Chilling Beats

Hot off the presses at TUAW is a new way to enjoy your tunes.

The author seems skeptical, but I would definitely go for one should it be sold in North America (for biking, not running, in my case). When your heart is pounding at 170 BPM and you're breathing heavily, the last thing on your mind is sound quality. If I'm sitting at home enjoying some jazz or house, it's a completely different matter. When I'm running or biking, though, if I could just have a nice beat going in the background and still be able to hear oncoming traffic, I would be more than happy.

Sunday, February 25, 2007


No matter how I say this I'm going to come off as a pompous prick and an Apple zealot. This really has nothing to do with Apple and everything to do with product design, so here goes.

The link is to an article about a new laptop design by Samsung, their Aurora model. Before I go on, let me give kudos to Samsung for a great effort. This is a very nice laptop design, right up there with the Toshiba Qosmio series; if anything I find the Qosmio even sexier than the Aurora. I tend not to bash a company that is at least trying to embrace industrial design, especially when they're in the learning stages.

The author of this article, though, I think needs a lesson in aesthetics.

But now the Samsung design team has pushed the boat out and created the new Aura series of notebooks, which look set to go head to head with Apple’s ultra-stylish MacBooks.
The first Aura out of the gates will be the R20, which Samsung will be officially announcing next week. The R20 really does look the business with its glossy black finish and minimalist styling.


I would say the R20 is the epitome of Rocotechno design*. There are numerous conflicting lines and curves ornamenting** the laptop's profile. There is a very prominent latch mechanism. There is no attempt to conceal the various ports to blend with the ID (Industrial Design) of the laptop. Rather, each port is prominently displayed in a way that pronounces its unique shape and distinguishes it from all the other ports. Samsung felt that people were comfortable enough to understand that USB ports were still USB ports even though they were black. By the same token, why are the consumers of this laptop NOT smart enough to know that the D-Sub 15 port is for an external monitor? Why the PC '97 blue colour coding when there aren't even any serial ports?

ASIDE: I like to use the example of a toy I played with as a kid, where you stick pieces of a certain shape (square, star, moon, circle) into holes that match that shape. Toddlers are doing this with PC peripherals now. When you remove the element of fear of technology, you allow the natural mechanisms of learning by interacting with our environment to kick in. Consumers are slowly coming more and more from the generation of digital natives, so why not embrace that in our designs?

I'm not necessarily saying there's anything wrong with ID that is not minimalist (though in many cases it tends to be my preference, simply because I am a romantic modernist at heart). Just don't call something minimalist when it isn't.

In fact, if you placed it next to a black MacBook you’d be forgiven for thinking that they share the same DNA. That said, you could say the same about a Nintendo DS Lite!

Come again? If you don't know anything about art, art history, aesthetics, or product design, then please don't talk about it as if you do! The article would have had much more merit had the author left it at the nice technical specs, a note that he thought the laptop was pretty and stylish, and kudos to Samsung for joining the growing number of manufacturers concerned with the aesthetics of their products.

*As far as I know I just invented the term Rocotechno (Rococo + Techno) and I think I like it.
** I use ornamenting in the pejorative sense here. Added visual complexity but useless in function.

Saturday, February 10, 2007

Reflections; cybernetics and augmented living

I was just browsing some of the products offered by Jabra and it hit me: cybernetics is going to be one of those quick and quiet revolutions. Ten to fifteen years from now we won't know what hit us, and our value systems won't have adapted at all because we won't have seen it coming. It reminds me of the analogy of the frog in slowly boiling water.

In particular, I was looking at the Flash promo for one of their new headsets, the JX10. It starts out like a nice piece of promotional techno-fetishism, although I don't particularly like the design of this piece for a few reasons that I won't get into.

Right about when they show the man and woman both wearing the headset, I'm assuming the goal was to evoke feelings of a sexual nature to make the desire for the headset stronger (a rather common practice). The image evoked a completely different emotion in me: a glimpse of how we will embrace being cyborgs without ever realizing what we are getting into until it's too late.

I suppose, to put things into perspective, there are a few really good TED talks:

I'm at odds with my feelings on this one. I'm morally against full-blown human cyborg augmentation for similar reasons that I'm against doping in sports: it's a slippery slope, and then if you even want to stand a fighting chance you need to jump on board too. At the same time, I'm more than OK with, and probably would be an early adopter of, most of the products and social phases that would form the stepping stones to that possible future.

Thursday, February 08, 2007

Longing for the end of University


This post won't have much techno-design content but bear with me.

If a day like yesterday can't help me break out of a SAD cycle I don't know what can.

Waterloo this time of year can be really hard on the soul. It's my first Canadian winter in 2 years; I was in California at the VMware head office all of last winter. Let me tell you, you can get used to a winter in Silicon Valley really fast. To make matters worse, Waterloo had a crappy warm month of January. It was really grey and rainy, without an inch of snow, at a miserable 10 degrees (that's about 50 for the other fraction of the world). Then my birthday comes along at the end of January and the temperature drops to -30 (-22 Fahrenheit). WTF, mate! Since then Waterloo has been having a record cold streak. I've been stuck with too much school work to help out my team in any meaningful way; not to mention that most of what I have left to learn in class is pretty insignificant compared to what I'm able to learn on my own at this point.

So back to yesterday. I woke up feeling a little more energy than I'd had in the last couple of days and decided to finally tackle that essay due later that afternoon. I'd been mulling it over on and off in my head for about 5 days, but in about 2 hours' time I put out a pretty kick-ass 5-pager on the impact of electronic forms of communication on the value systems of youth and how academia should change the way it evaluates how youth are impacted by technology. Then I had a bunch of really good interactions with people all day, a guitar lesson, finished off with an amazing yoga session (Ashtanga is my medicine of choice, for the curious).

I was ready to come home, cook myself a really good meal, shower and take it easy checking up on email etc.

So I start checking work email when what do I see? People are talking about this YouTube video of a leak of a really sweet feature we have lined up for Fusion: that's right, 3D acceleration in a VM. Since I've been working remotely on my own little features and bogged down with school, I hadn't had a chance to see 3D in action on my own computer yet; I was seeing it for the first time like most of you. My jaw literally dropped!

Then I thought my day was really done... NO! I continue checking email and see a synopsis of our quarterly all-hands meeting. VMware is going IPO!!! Here's the link to the press release.

I want to be done with school... last week!

Wednesday, January 17, 2007

iPhone... Industry Impact

How could I not write something about the iPhone?

My last post directly challenged Palm's CEO and his inability to fully appreciate that a company is always capable of surpassing itself; to assume that a competitor will execute in the same way and make the same mistakes years later is a little naive and short-sighted.

On another related note, the launch of the iPhone marks (almost to the day) the two-year anniversary of my death proclamation on RIM. A little more background is clearly needed here. You see, the University of Waterloo has a Co-op program (they pretty much wrote the book on Co-op, actually) and RIM hires a LOT of UW students since their campus is right next door; quite literally, I'm not kidding. Probably half the students walk through RIM's campus to make it to class every morning. Anyway, I did two Co-op terms with RIM (January-April and then September-December 2004). I always made jokes with family and friends because shortly after I joined, RIM's stock soared, going from roughly $34 to $92 (adjusted for a 2:1 split), and then shortly after I left it tanked: come March 2005 it was down to about $60. Stocks really aren't what I care about, because they say very little about a company. Rather, they tend to be a completely different game played in tandem with a company trying to innovate and deliver products; they are a game to allow others to make money, and that might just impair the company's ability to execute and deliver the products it would like to.

To get to the point: in mid December 2004 I decided not to go back to RIM for a third Co-op term and to try my luck with another company. Try explaining that to parents who are helping you through school and who, up until that point, think RIM is doing fantastic because their stock is still soaring (it was at its high point then, actually, and has risen above that since). Basically, at that point I had come to the realization that RIM was surfing a 10-year-old wave about to crash on shore; the top brass just kept saying how cool it was that RIM did wireless email. To add a bit to the context: Good was just starting to offer some interesting services that included wireless email but also full integration into company intranets. Sybase (more precisely their iAnywhere subsidiary) was releasing a solution to wirelessly connect to their back-end databases; imagine being able to query sales reports and place customer orders all from your handheld (unheard of at the time). Finally, Microsoft was readying a free component that integrated with Exchange Server to push wireless email, i.e. the equivalent of RIM's BlackBerry Enterprise Server, a crucial part of their financial mix at the time (though that has changed somewhat now). RIM in the meantime was busy settling a bogus lawsuit with NTP, one it had told its employees it would never settle, and boasting about doing wireless email.

Long story short, when people asked me about RIM I told them I had a great time, worked on an awesome team, and learned a lot, but that if the execs didn't realign their strategy within the next year (possibly two), then within five years RIM would essentially be gone. Clearly it will take longer than five years to completely destroy a company like RIM; heck, even Nortel is still kicking around, and look at Sun's recovery and SGI now out of bankruptcy. But from a technology and innovation point of view RIM would be insignificant within five years, and that's as good as dead.

Sure, RIM will keep having corporate contracts for some time to come since they are so well established there. That really doesn't matter; Apple isn't going after the corporate market. This might be one of the few times that we get to witness a reversal of buying power. For the last several years, techno-fetishery like the BlackBerry was left in the realm of the C-level and top execs, and companies could justify the cost. Regardless of what people are saying about the price of the iPhone right now, people bought an iPod five years ago at similar prices. The iPhone's price will drop with volume and variations on the base model. It will be consumers buying this device: mid-level management and the like buying it for personal use, expensing the carrier service fees, and asking their IT departments to make their email work with it. There are privacy concerns remaining about exposing corporate email to iPhones compared to BlackBerries, but there is so little information about how to access email beyond Yahoo!'s free push email on the iPhone at this point that I won't even bother speculating. For a brief moment (two years or so) we'll be able to see a shift from corporate buying power to consumer buying power shaping the landscape of tech toys.

I have more to say about the iPhone design itself. That will be for another post.

Wednesday, November 22, 2006

Death Wish of a Fading Rock Star?

I'm not coming at this as an Apple aficionado. It simply strikes me that people can be so arrogant and blind to simple facts of life.

Ed Colligan is the CEO of Palm and obviously a bright person. Sure, this was something he answered in the spur of the moment, to a reporter asking him to comment on rumours of a possible Apple phone. Spur-of-the-moment answers are interesting, though. They tell you either what a person's gut feeling is, if they haven't given the matter much thought, or what they truly think after considerable reflection. In either case, brushing Apple off in this manner is worrisome.

The situation reminds me very clearly of a company all hands meeting in the summer of 2003 while I was working at Roxio.

BACKGROUND: Roxio had just recently bought the Napster brand and planned to re-launch it as Napster 2.0. Of course none of the original infrastructure could be used, because the original Napster didn't really have any infrastructure, so Pressplay was also acquired in May 2003 to actually run the (then) subscription-only service. Basically Roxio had put almost all its spare cash into these purchases.

In the meantime, Apple was busy with a few things. Apple had started staging its multi-year strategy to regain market share and acceptance and clean up the disastrous effects of the '90s. The first phase was a massive product-line cleanup and a focus on core competencies. Part of this was reaching out to part of its core market through the "Rip, Mix, Burn" marketing and engineering campaign, where the iMac was equipped with CD-burning hardware and software. Shortly after, in October 2001, Apple introduced the iPod, originally a Mac-only product. The term is now synonymous with MP3 player, but at the time it had just started generating buzz. Then Apple made a marketing splash by announcing a Windows-compatible iPod in July 2002. At the time Apple already had iTunes, considered by many to be the best music management software around, but it chose to release the Windows iPod with a special version of MusicMatch Jukebox, at the time arguably one of the better-suited pieces of software available on Windows to get the job done. Fast forward a few more months to the end of April 2003 and Apple announces the iTunes Music Store. With it, Apple had firmly placed its stake in the ground, stating that pay-per-download, and NOT subscription models, was what users wanted. But at the time the iTunes Music Store was obviously available only through iTunes and hence only on the Mac; hey, it was Apple after all, what could you expect... right?
Watch the introduction of the original iPod:
Review information about the iPod and its timeline:
Background on the iTunes Music Store:

So that's about more than enough background info to be just right. During the Q&A session at the end of the all-hands meeting, one brave soul asked what Roxio's take on the iTunes Music Store was, since there had been some rumours of a Windows version. The response was as simple as can be: "It'll never happen." No one even paused, and the Q&A session went on. To that I told my boss at the time not to underestimate Apple's position in this market. Fast forward a few months, and in October 2003 Apple was announcing a Windows version of iTunes (the "hell froze over" event), the Windows version of the iTunes Music Store, and a completely new generation of the iPod; all in time for the holiday season. Napster 2.0 wasn't even ready yet... Oops!

Now, back in the present, Napster is still met with lackluster acceptance, while iTunes and the iPod hold roughly 85% and 75% market share in the US, respectively.

Those comments about Apple were made at a time when Apple had just set in place a strategy to come back from the brink of extinction, and it was just barely starting to work. Other than to the ones who really wanted to believe Apple could make a comeback, or to companies like eMachines copying the iMac because it was trendy and fashionable, Apple was a nobody. It almost sort of made sense for a comment like that to slip by at the time. But in our day and age? How can somebody as bright as the CEO of Palm say something like "it will take Apple at least as long as us to get the right mix"?

He makes some logical conclusions about how Apple might go about executing a mobile phone strategy, but the only problem is that it's set 3-4 years in the past. For one, you don't have to worry nearly as much about the childish bickering among the various carriers: you can operate as a Mobile Virtual Network Operator (MVNO). Virgin Mobile is one of the very successful MVNOs, and companies like Disney have had more trouble (I think because they don't have the right mix). Here's a good article summing up the current state of MVNOs: So let's see: Apple has core competencies in hardware and software (especially in integrating the two really, really well), it has shown that it has a knack for finding the right mix of content and features that people will go for, it has a CEO who might just happen to have good connections at Disney (read: might have access to contacts to get a head start over the hurdles of setting up an MVNO), and finally it runs an incredibly tight ship.

I'm not saying that's a 100% formula for success, and of course in all of this discussion I took the luxury of assuming that the iPhone rumour was actually true, just so I could illustrate a point about Palm's position. It does seem like a risky move, when you are already struggling as a company, to completely brush off the media and pop-culture darling. I'm just picking on Palm here because the example is too easy, but Palm is not alone in thinking like this (I've made my opinions on RIM quite clear in the past, although not on this blog).


I really need to blog more!


For lots and lots of reasons.
- As my day-to-day activities migrate away from academia (for the time being) and more towards programming, I want to keep practising the art of writing effective prose.
- Blogging is a very effective (modern) tool to capture the progression of my ideas so I can review them later.
- I have experienced tremendous internal resistance to blogging regularly. Figuring out what can reduce the activation energy to write regularly might hold the key to finding out how to design interfaces with low activation energy to accomplish tasks in general.
- I have this asinine fear of exposing things that aren't perfect or at least close to completed. I need to start embracing organic development and releasing my need to control everything so tightly.
- I need to figure out how to focus my thoughts. Too often I feel the need to give much too long a preamble to establish shared context and then diverge into way too many avenues of the ideas I'm trying to express. This is related to the need to feel like everything I write is "complete."
- But mostly to collect my ideas and share them. I have spurts of creativity that I want to capture, both for myself to come back to and think about more deeply later, and so they get out there, since I don't have the time to pursue all the ideas that come to mind.
ASIDE: My thoughts on distributed cognition and the socio-emergence of ideas are probably best left for an entirely separate entry (or set of entries). That's all.

Thursday, October 05, 2006

Mental Explorations in the Land of Virtualization - MELV

I work on the team responsible for delivering VMware's hosted suite of products (Workstation, ACE, Server, Player). Looking at that line-up, you could say our part of the company is focused on delivering solutions that push the envelope of what can be done with virtualization. I personally have come to think that virtualization will provide the computer industry with its next paradigm shift; how very back-to-the-future! I think part of the reason has to do with how nascent computing (the industry, field of research, and trade) really is compared to what else is around. Many of the things needed to make virtualization worthwhile are just starting to come together.

A particular interest of mine is how virtualization will fundamentally affect the way Joe Consumer uses his computer; how does virtualization affect the end consumer's computing experience? At this point it's a no-brainer to me that virtualization should be used in the data centre, and if you are a developer (web designer, software engineer, or any other flavour) and are not using Workstation, then you are at a disadvantage to competitors who are using it to be more productive. Plain and simple!

With the easy part out of the way let's move onto the harder questions. The issue I'll target in this blog post is application packaging and virtual appliances.

I think virtual machines provide an interesting way of rethinking how applications should be packaged, for consumers and enterprises alike. Imagine if Oracle developed their next database engine tied to a specific kernel that they tuned to meet their needs, then shipped it as a VM that could simply be copied onto a SAN, with barely anything else to do to get it up and running. Better yet, imagine a game publisher having complete control over the runtime environment by compiling a custom system into a VM; no more need to worry about what other cruft is already on the gamer's system, since the game runs inside the VM.

VMware is promoting this concept under the moniker "virtual appliance." As far as I'm concerned, there have been a few imaginative examples of virtual appliances, but most are designed with virtualization as a peripheral thought. Another huge issue is the size of these appliances, often ranging from hundreds of megabytes to gigabytes. They are often full distros plus a few tools that are actually pertinent to the appliance.

In some ways I see this as an opportunity for the second coming of micro-kernels (I know I'm painfully abusing the meaning of micro-kernel here). If there were a streamlined way to create a VM with an open source kernel containing only the bits you need to run the VM, and then include your pertinent additions, you would end up with a truly streamlined solution. There might be some wasted hard drive space, but we are talking in the tens or maybe, maybe a couple of hundreds of megabytes. RAM is even less of an issue because of the memory page sharing techniques that the VMware platform can offer.

All right, enough rambling. I've decided to put my money where my mouth is and personally explore various applications. The basic premise is that you become creative by, well... being creative. You need to jump into things and experiment with solutions, try them on for size. These are the sorts of ideas that grow organically. You start with some basic stuff and slowly your mind starts to form a mental model. I personally think that the concept of properly using virtual machines is so radically different from how most of us have grown to think of computing that we literally need to bootstrap a new model.

That's quite literally what I've set out to do. I'm going to start very simple (conceptually, anyway) and explore the various ideas I come up with that I think may be interesting to pursue. I suppose I should also set a few ground rules, though nothing so limiting that it will keep me from exploring an idea.

1 - I'll try and keep a series of blog posts about my experiences. All these blog entries will have subject lines that end in - MELV for easy searching.
2 - When applicable (if a usable VM is produced out of an exploration) I'll do my best to make it available in some form.
3 - The initial exploration will be focused on application packaging and virtual appliances. Given my current mental model and ideas that have been brewing in my head this is an easy place to start my bootstrapping process.

Note that I've made no promise that an exploration should end in a usable VM of sorts. That would sort of defeat the purpose of exploration now, wouldn't it?

Shawn Morel works for VMware but eats, sleeps, and blogs for himself... He certainly doesn't speak for VMware.

Friday, June 09, 2006

Microsoft's new type faces... drool

As some of you may or may not know, Microsoft's typography lab has been hard at work on a new version of ClearType and a new set of typefaces slated for release with Office 12 and Windows Vista.

I'm not ashamed to admit that I'm a typophile at heart, and I've been following the development of these typefaces on and off for a little more than a year. I was browsing the latest articles on OSNews (as I do daily) and came across "A Comprehensive Look at the New Microsoft Fonts," an article about the new typefaces. How could I resist!

Reading the article, though, I think the author missed some important points. Granted, the article does claim to be comprehensive, and my writing has the brevity of Buckminster Fuller or Hofstadter. I do commend the author on being brief; it's a skill that I'm always working hard to achieve.

I must say that the new typefaces are absolutely beautiful. I'm a computer science student and I do contract work for a software company. I also happen to have a condition that makes my reading speed very slow (I fall in the 4th percentile of the population). For someone who stares at a computer screen all day, a new typeface like Consolas would warrant the cost of Vista alone. Yes, that's right: I would pay $200 for a monospaced font to use in my code editor!

Back to the article, though. The author first points out that the majority of the new typefaces are sans-serif (simply put, that means the letters don't have any decorations on their stems and terminals). A typical example of a serif font is Times New Roman; examples of sans-serif fonts are Futura and the more commonly known Arial. Obviously there are many typefaces of either kind. The author seems a little confused that the majority of the new fonts are sans-serif, since in theory serif fonts are more legible.

Typography myth number 1: serif typefaces are easier to read. This is still a debated point, but there is not much conclusive evidence that serifs help with readability (based on testing reading speed of on-screen type and print). The general idea was that well designed serifs helped the eye follow the baseline of the font, and that the more information (differentiation) a font displays in its outline, the more easily the eye will pick up and differentiate between letters of a typeface. What the research shows is that inter-letter spacing (kerning) and inter-word spacing play a much more important role in readability than serifs. This argument is sometimes taken to the extent that serifs become a matter of preference and habit; someone who does a lot of reading, and only in a typeface with serifs, might struggle a little at first when reading a sans-serif font until they get used to it. The same would be true of someone who normally reads sans-serif and then must read a font with serifs.

The other important point to consider is that computer screens are incredibly low resolution compared to print. Considering that most modern LCDs fall somewhere in the 100 dpi range, every pixel you have to represent a letter counts. Representing the spine of the letter in as true a form as possible is much more important than sacrificing that to squeeze in serifs. Also, at 100 dpi it is nearly impossible to accurately represent the serifs, so the debatable good they may have brought to readability is negated. Serifs at this resolution fall more in the category of visual noise than valuable structures of a typeface. Once screens start reaching 300 dpi (and for various reasons I would argue 600 dpi), then we could consider using serif and sans-serif fonts interchangeably for screen reading. Support for resolution independence in modern operating systems, and the social aspects of using serif vs. sans-serif fonts brought about during the Bauhaus movement, are two separate articles altogether.
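As a quick back-of-the-envelope check (illustrative numbers, not a claim about any specific panel), a 15.4-inch laptop screen at 1440x900 works out to roughly 110 dpi:

    import math

    w_px, h_px, diagonal_in = 1440, 900, 15.4
    dpi = math.hypot(w_px, h_px) / diagonal_in  # diagonal pixels / diagonal inches
    print(round(dpi))  # ~110 dpi - a long way from 300 dpi print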

The author also brings up the use of the letter 'C' in the names of the new typefaces. These fonts all start with C because they are the first typefaces actually designed with Microsoft's ClearType technology in mind. Originally, fonts like Verdana were designed with the screen in mind and ClearType made them a little better. As we can see with these new typefaces, designing with ClearType in mind from the start has produced some clearly impressive results.

I especially enjoyed some of the conspiracy-theory comments posted by readers about the name choices. Some went as far as to speculate that Microsoft wanted to ensure that the fonts would be at the top of font selection lists. I don't know about you, but personally I have a bunch of fonts with names starting in 'A' and 'B', and Microsoft's fonts will also be scattered among a bunch of fonts starting in 'C.' As for personal bias, well, I'll let you be the judge of that. On campus I work as the Apple Campus Rep, and I do contract work for a software company that is a direct competitor to Microsoft.

Typography myth number 2: anti-aliasing makes fonts more readable. This point was not made by the author but rather by one of the readers in a comment.

I think you mean for the first time in history on MS-Windows systems. Mac OS X has had an advanced anti-aliasing system in place for a few years now.

Let me say this rather bluntly: anti-aliasing is bad for fonts! Anti-aliasing actually blurs edges to trick the eye into thinking that lines are continuous instead of discrete pixels. You want your font to be rendered with high fidelity. Anti-aliasing throws in some uncertainty, because you can't be sure how the spine of a font will be deformed and blurred when letters are placed at arbitrary locations on the screen. I have to agree that in practice, though, OS X does look better for the time being. I think this is partly due to the type system making the spines of fonts a little bolder and having more flexibility in kerning adjustments. This is all going to change soon as we start to see 150 dpi and 200 dpi displays. Microsoft's type rendering engine is, hands down, technologically superior.
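A toy model makes the blurring concrete (this is simple box-filter grayscale coverage; real rasterizers, ClearType included, are far more elaborate): a one-pixel-wide stem that lands on the pixel grid renders crisply, while the same stem shifted half a pixel smears across two columns at half intensity.

    def render_stem(x_left, width=1.0, n_px=4):
        """Per-pixel coverage of a vertical stem across a row of pixels."""
        x_right = x_left + width
        return [max(0.0, min(px + 1, x_right) - max(px, x_left))
                for px in range(n_px)]

    print(render_stem(1.0))  # [0.0, 1.0, 0.0, 0.0] on-grid: crisp
    print(render_stem(1.5))  # [0.0, 0.5, 0.5, 0.0] off-grid: blurred both ways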

Finally, how can you talk about all these new fonts without even pointing to the people responsible for them?
An amazing video on Channel 9: Cleartype Team - Typography in Windows Vista
Microsoft's typography group web site:

Monday, October 24, 2005

Why Wil, WHY!

Wil Shipley is the co-founder of Delicious Monster. Before that he co-founded the Omni Group, which originally developed software for NeXT and later moved on to Apple's OS X. I generally enjoy reading Wil's blog. Although he can be rather blunt at times, and probably has an approach that wouldn't fly in most corporate environments, he is strongly opinionated and he sticks to his guns. For the most part this is a quality I admire in people, regardless of whether I agree with them or not.

Well, in this instance I disagree with Wil. He recently posted a blog entry about his views on unit testing: "Unit Testing is teh Suck, Urr." The title alone should be enough to set a wise developer off.

I'm not about to blindly promote unit testing and test-driven development just because it's "what's in" right now. I have used unit testing techniques, encouraged a team of developers to use them on a commercial web application, and written developer tools to help integrate unit testing into a continuous build workflow. I have even purposely not unit tested certain projects, just to see how much extra time I was spending on "manual" testing. I believe in unit testing because I have seen it work, and I have seen projects fail because of a lack of unit testing.

First of all, talking about unit testing separately from test-driven development is nonsense. Unit tests are a developer tool! Unit tests are written by the developer to isolate a specific section of functionality while the program is being worked on, be that initial coding, debugging, or bug fixing. If this is not the case, call your tests anything else, but please don't call them unit tests. Obviously this is my personal view of things, although it is supported by other developers. In Wil's case, his exposure to "unit tests" was clearly one of the bad uses of the methodology. He mentions being hired by Lighthouse Design to write unit test code as one of his first jobs. Huh?

A company that hires devs/testers for the specific purpose of writing "unit tests" is not doing unit testing! Of the many benefits of unit testing, most are developer-centric, and they are completely countered by having someone solely writing unit tests; the company is essentially paying someone to spend a considerable amount of time working through code in hopes of recreating the developer's original state of mind and capturing the results in the form of executable code. This is something that would have taken the developer very little time to do in the first place.

Wil's first general guideline is:

When you modify your program, test it yourself. Your goal should be to break it, NOT to verify your code. That is, you should turn your huge intellect to "if I hated this code, how could I break it" as SOON as you get a working build, and you should document all the ways you break it. [Sure, maybe you don't want to bother fixing the bug where if you enter 20,000 lines of text into the "item description" your program gets slow. But you should test it, document that there is a problem, and then move on.] You KNOW that if you hated someone and it was your job to break their program, you could find some way to do it. Do it to every change you make

So let me think about this for a second. When I'm coding along and I make a change, I should stop and take the time to test it. Now, how different is taking the time to manually run the program and put it into a few different configurations compared to taking that same time and writing a few lines of test code? Think about this honestly. Switching from developer to tester is a cognitive task switch; you have to shift your world view from building to breaking. To make matters worse, you have to stop thinking in code and start thinking in whatever interface your program provides, which may still be crude since you are just developing it. This is an expensive context switch.

Let's extend this line of thought. What about the time you spent thinking about how you were going to create a routine? What were you doing? Surely your mind was active, but could the time have been spent doing something more productive? For example, you could have been creating a very basic set of unit tests to handle the most common functionality; a skeleton functional test. When you approach unit testing from this angle, writing and maintaining unit tests takes only a little more time than doing your regular development.
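For instance, in Python such a skeleton costs a handful of lines (the function under test here is hypothetical; the point is that the checks are captured as code and rerun for free on every change):

    import unittest

    def parse_version(s):
        """Hypothetical routine under development."""
        major, minor = s.split(".")
        return int(major), int(minor)

    class ParseVersionTests(unittest.TestCase):
        def test_simple(self):
            self.assertEqual(parse_version("1.0"), (1, 0))

        def test_garbage_rejected(self):
            # Actively trying to break it, captured as executable code.
            with self.assertRaises(ValueError):
                parse_version("not a version")

    if __name__ == "__main__":
        unittest.main()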

But the benefits don't stop there. Let's say you are starting the development of an app, or more realistically a component. You have some basic functionality already in place that you can manually run 10 tests against and be fairly sure the general behaviour is correct. So you go back to the code and change it to keep adding the functionality that's required. Now let's say you need to run 2 tests to check this new functionality. But what about the functionality that was already there? Did you break anything with the changes? I can think of many examples during the early stages of development where you make changes that will almost inevitably affect what little functionality was already there. So you really have to run 12 tests: the 10 original tests and the 2 new ones. If you consider the frequency at which you iterate through these develop/test micro-cycles, you can quickly get a sense of just how much time you are wasting, even in the course of a few hours, manually re-running these tests.

Shipley adds:

Too often I see engineers make a change, run the program quickly, try a single test case, see it not crash, and decide they are done. THAT IS TEH SUCK! You've got to TRY to break that shit! What happens when you add 10 more points to that line?

I'm going to argue that developers fall into this pattern because of the tedium of constantly repeating very similar trial runs of their program.

He then goes on to compare testing strategies to nature and evolution.

When you get the program working to the point where it does something and loads and saves data, find some people who love it and DO A BETA TEST. The beta test is often maligned, but the most stable programs I've ever written are stable because of beta testing. Essentially, beta testing is Nature's Way (TM) of making systems stable. You think nature creates unit tests and system tests every time it mutates a gene? Aw hell nah. It puts it out in the wild, and if it seems better it sticks around. If not, it's dead.

I really don't follow his argument on this one. Nature never just changes something and then sees if that change is successful; that kind of pseudo-random genome re-sequencing is actually quite unnatural. If you think about it, nature is intrinsically very efficient (lazy?). According to the theory of evolution, changes occur because of stresses that an organism is put under; no stress to change means no changes. That's not unlike unit testing: changes are made to the system in hopes of allowing it to cope better with the challenges that are already in place.

I am in no way saying that some form of beta testing is not required. On the contrary, beta testing is a much-needed phase in the development life cycle. Beta testing can catch bugs ranging from localization issues to inconsistencies in the user interaction model. Unit testing is especially hard to carry out on the graphical user interface of a program.

Bill Bumgarner, one of the developers who worked on Apple's Core Data framework, posted "Unit Testing" on his blog shortly after Wil's comments. Bill points out that Core Data makes heavy use of unit tests and that it would not have made it to market in the time frame it did had it not been for unit tests. Bill also points to Delicious Library's reliance on Core Data. In effect, Wil was able to get by without unit testing mainly because of the richness of the Cocoa frameworks.

I wanted to post this blog entry a while back but got caught up in school work and job interviews. Now, in all fairness, I should mention that Wil briefly returned to the topic of unit testing (a few sentences compared to a few pages in his original article), directly responding to Bumgarner's post. He acknowledged that unit tests were great for frameworks and bad for UIs; it really felt more like he was brushing off the topic than honestly addressing it. The remainder of the post had a very apologetic tone, since he was addressing the large amount of criticism of the post that came right after the unit testing one, "Quit School and Set Things On Fire."

Alright, this post has gone on too long as it is. Considering one of my goals with this blog is to work on my ability to write concisely, I don't think I'm doing all that great. Now that interviews are over, I'll hopefully have a little more time to blog about how they went, after the next 2 weeks of assignment and midterm hell.

Wednesday, October 12, 2005

Thank you ADC

Every IDE has its own quirks and ways of seeing the world. Unfortunately for developers, that means spending time getting used to viewing the world in that particular way. Whether this is good interface design practice I will leave for another post, but I'll leave a quick note in passing. Since an IDE is really an environment rather than an application like Word, and it's a tool that will be used by experts much like Photoshop or Illustrator, it may not be a very bad thing to make it just hard enough to use that newcomers have to spend some time learning how to do things in the environment. The benefit of this approach is that once users have gone through this learning process they are much more proficient; compare your average Word user, who has discovered most of the features through self-discovery (aka snooping around the interface) and uses on average 10 features, to your average Photoshop or Illustrator user, who has invested time and energy learning how to use the application. Are graphic designers more intelligent than office knowledge workers? I don't think so. It's the interface itself that has made them more proficient.

Apple's IDE, Xcode, is actually quite powerful once you finally figure out what's what in their world. After spending some time using Xcode and not fully understanding the reasoning behind everything, I think things finally clicked sometime this summer. After using Xcode for months, I now finally understand the underlying principle behind Xcode's project structure. Like most things of this nature, once you finally understand, it's not only simple but painfully obvious.

Xcode is not alone when it comes to applications that just make you feel stupid. I'm not going to name any applications or peg them to the open source world or commercial software shops. What it does come down to, however, is the complete user experience. It might not even be a question of the developers not caring. Rather, many times the developers forget what it was like to first start learning and using their own product. They become so immersed in their paradigm that it is obvious to them, and they very casually overlook the difficulties that newcomers might have with the product.

Although "Understanding Xcode Projects" published on the ADC is long overdue it is nonetheless a very elegant solution. No more than a few pages with accompanying annotated screen shots, "Understanding Xcode Projects" succinctly explains Xcode's philosophy. The document can prove useful to new-comers and experienced Xcode users alike. In about ten minutes you can look over the article and sit back and say "AHHH! So that's how things work around here." Rather than spend many frustrated attempts at tackling Xcode by the horns.

I think more developers should learn from this example; struggling with a user interface should not be viewed as a rite of passage. This is probably even more true in the open source world, if only because technical writers are not as readily available. If you're a developer, what's an hour spent preparing a help article like this one compared to the countless hours you've spent implementing all those neat features that you hope will get used?

Sunday, September 25, 2005

Surf's up

Earlier this year there was a small uprising in the open source world over Apple's commitment to giving back to the community. At the core of the dispute was Apple's WebCore project. Open source advocates were upset that Apple wasn't giving more back to the KHTML project, considering what had been gained for free. The argument really heated up shortly after Dave Hyatt posted to his original Surfin' Safari blog that WebCore, and hence Apple's Safari browser, now passed the Acid2 CSS test.

Because of the publicity Apple had given to the close relationship between the WebCore and KHTML projects, members of the KHTML project were being asked how soon KHTML itself would pass the Acid 2 test. This led them to post a few open letters to the community expressing their discontent over how Apple had failed to collaborate effectively with the open source project.

Hyatt soon posted a reply to his blog asking for suggestions on how the WebCore project could change to improve the situation. Since the situation was already heated, many of the comments fell into what I would consider non-productive chatter. I wouldn't say it escalated into a flame war, but there was much blame being placed on both parties and a definite lack of concrete proposals.

I care about the success of Safari and the WebCore project just as I care about seeing an open source project like KHTML succeed. So I decided that enough was enough and that I should write a detailed response proposing a handful of manageable changes that would allow both teams to benefit even more from their partnership.

The WebCore project is now being managed differently and there is a new Surfin Safari blog. Although I have no way of knowing for sure, some of the ideas I suggested are very similar to the changes that have actually been put in place. It would be extremely vain of me to think that there are no other bright people at Apple who could have come up with similar ideas. Still, it is somewhat comforting to think I may have been partly responsible for getting things rolling in what seems to be a productive direction.

Here was my response:

First let me give some background about myself so you can assess my level of bias. I am a computer science student at a Canadian university and I have an Apple ADC Student membership. I am also an Apple Campus Rep, and many of the students in my program are very pro open source (as am I), so responding to issues like this one is something I deal with often. Our university has a co-op program, and while I am only beginning my 3rd year I effectively have one full year of experience working in a commercial setting, particularly using open source frameworks to produce commercial web apps. I know this isn't a great amount of work experience but it's better than none. My minor is in business with a focus on management sciences.

With that said, here are a few things I think could be done better based on the information I have, which I admit is incomplete, much like that of most people who have posted.

1- From what I gather, communication between Apple and the KDE group has been only through email or informally through blogs. One of my co-op jobs was with Research In Motion (makers of the BlackBerry), so if anyone can appreciate the benefits of email, I can. However, email and blog postings fall very far short of being a rich enough communication medium to coordinate development efforts. In my opinion many developers fall prey to over-relying on email because of its ease and speed; both companies and open source projects use it because it's cheap and widespread, and both run into problems when they rely on it too much. Email can't capture facial expressions or vocal intonations, so it is often misinterpreted, and the misinterpretation can't be corrected on the fly the way it can when you are talking face to face.

Soln: Organize a monthly or bimonthly conference call with the lead developers of both parties (I know Apple has the resources for this, since Campus Reps have regular calls). Invite some of the KHTML devs to WWDC and send some WebKit devs to the KDE conference. I think this alone would change the outlook on the situation; knowing that there are real people on the other end makes a difference.

2- Apple is very closed about the bugs it is working to fix. This is a company policy and it is completely up to Apple if it wants to leave it as such. However, I think it would not only help external relations (with members of the community wanting to contribute to an open source project) but also help Apple internally if bugs logged against all of its open source projects were also open.

Soln: There is no doubt already a team responsible for the initial triage of bugs that come into Radar (be they crash reports, people clicking the report-a-bug button in Safari, or people logging bugs on their own through Apple's bug reporter). When these bugs are initially reviewed and dispatched, if they are dispatched to an open source project (WebKit, Darwin, GCC, Bonjour, etc.) they should be sent to a separate, open bug tracking system. For example, if a Safari bug is logged but it ends up being a bug in the client, then don't put it in the open source tracking system; if the bug originates in WebKit, then go ahead and open source the bug. I have no idea what Apple uses for issue tracking software, but I have experience with three of the most popular issue tracking products, and all of them provided some way of synchronizing the status of bugs between two databases. Better yet, the Radar database schema could be changed to incorporate an open source flag. Whatever happens, Apple needs to let other people interested in one of the open source projects know what is currently being worked on, so that work is not duplicated and is well coordinated. What is the point of open source otherwise?
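
To make the idea concrete, here is a rough sketch in Python of what that dispatch step might look like. This is purely illustrative: Radar has no public API that I know of, and every name below is invented for the sake of the example.

# Hypothetical sketch of the triage idea above. All names are made up.

OPEN_SOURCE_COMPONENTS = set(["WebKit", "Darwin", "GCC", "Bonjour"])

def publish_to_open_tracker(bug):
    # Stand-in for whatever synchronization hook the issue trackers
    # provide; here we just print the bug that would be mirrored.
    print("[public] %s: %s" % (bug["component"], bug["title"]))

def dispatch(bug):
    # Bugs triaged into an open source component get mirrored to the
    # open tracker; everything else stays internal to Radar.
    if bug["component"] in OPEN_SOURCE_COMPONENTS:
        publish_to_open_tracker(bug)

dispatch({"component": "WebKit", "title": "Table layout breaks Acid 2"})
dispatch({"component": "Safari", "title": "Crash on bookmark import"})  # stays internal

Whether this is implemented as a sync between two databases or as a flag on the Radar schema is a detail; the point is that the triage team already makes the internal/open-source call, so mirroring it costs little.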

3- There were both suggestions and criticisms regarding the adoption of a common code base. A point to keep in mind: KHTML and WebKit, in their current state, are two separate, albeit open source, projects. I don't think Apple has done any false publicity here, since they were fairly straightforward when they said that WebKit was a fork. Apple, in my opinion, has also done a good job of helping the KHTML project, both in terms of added functionality and publicity. KHTML has come a long way since Safari was released (I'm happy about this, since KDE is my platform of choice when I run Linux), and although Apple's contributions seem to have slowed down lately compared to the initial benefits, I think this is mainly a communication problem. If, because of differences between the two projects, it takes more time to integrate a fix from one project into the other than it does to actually write the patch (e.g., KHTML adapting Apple patches and vice versa), then it only makes sense that a developer would rather spend the time fixing it themselves, and feel some sense of accomplishment, than spend the time sorting through some other developer's work.

As for what Apple has to gain from going through the trouble of moving to a common code base: lots. Better PR than it has been getting lately; increased awareness and use of a common web engine; a more robust rendering engine, since any abstractions introduced separate the engine functionality from native UI toolkit functionality that may have crept in undocumented; and, in the end, less work, since the community can fix some of the bugs. Notice that, in my opinion, these are potential benefits for KDE as well.

Soln: I think that fundamentally both teams still share many of the same core values and the same direction for the project. Both teams still care very much about making a lightweight, fast, and common rendering engine. In my opinion the problems here stem more from what I would call a superficial project management issue than from a core divergence in project direction. Simply put, I think it would be a shame for a combined effort with so much potential to cease existing. Most of the complaints I can gather from the KDE camp are frustrations about how hard it is to reap the benefits of the combined effort; how hard it is to actually do more of the work they love doing rather than code-base administrivia. Mr. Hyatt's offer to simply use WebCore (which may seem drastic at this point in the game), his continued posting of patches, and the very fact that he posed the question "What can Apple do?" are a very clear indication to me that at least he, if not more people on the WebKit team and higher up at Apple, is more than willing to try to improve the situation. If Apple truly didn't care, why would they even bother with this sort of thing? The proposal to just go ahead and use WebKit as a common code base may seem like an effort from Apple to simply supplant KHTML, but, although I am just guessing here, I think it shows just how much Mr. Hyatt wants to improve the situation, maybe even how desperate he might be, and the lengths he is willing to go to help resolve this issue.

Just so this post doesn't suffer from any recency effect, with people finishing it thinking that I am putting most of the pressure on the KDE devs: I think both teams have to work equally hard if they hope to gain.

Sunday, August 28, 2005

Finally, my own blog

I've been wanting to start my own blog and web site for some time now. While the web site is still a work in progress, I thought the end of the Spring term at university was as good a time as any to get things going.

For the last six months or so I've kept coming across various articles and ideas and saying to myself, "Oh, I should really blog about this." Call me a hopeless dreamer, but I truly believe that one person can make a difference in the world, and I won't stop until I've made my dent. I may only be a 21-year-old student, but I am very opinionated. At the same time there are several people I admire, a few of them being John Gruber over at Daring Fireball, Joel Spolsky of Joel on Software, Bob Cringely, who writes the I, Cringely column, and Bill Cowan, a professor at the University of Waterloo with whom I've been doing research and who doesn't keep much of an online presence. The main reason I admire these people is that they all have strong opinions that are very well reasoned yet expressed with a simplicity and elegance few people can match. Talk is cheap, and dreaming without action would truly prove hopeless. If I really do hope to make a difference in this world, I have to learn to express myself with as much eloquence as the best of them.

As they say, practice makes perfect, and what better place to practice and get feedback than on a blog?

So, why Blogger? I must admit that I am somewhat of a perfectionist; that, combined with how much work they love to give us at university, has led me to put off creating my own blog. Blogger has everything I need for the time being, and by the time I need more features I can either migrate my blog to some custom solution or, who knows, Blogger might have them by then. At this point I think that getting into the rhythm of blogging regularly is the most important part of the whole process. This kind of thinking goes hand in hand with an approach to technology development that I have been pondering for a while now: the most effortless solution often tends to be the best one. For a perfectionist it can be hard to let go of the notion that solutions must be rigorous and completely defined over the problem domain. Sometimes the quick and dirty solution can turn out to be even more enabling; it can turn out to be exquisite.