Wednesday, November 22, 2006

Death Wish of a Fading Rock Star?

I'm not coming at this as an Apple aficionado. It simply strikes me that people can be so arrogant and so blind to simple facts of life.

Ed Colligan is the CEO of Palm and obviously a bright person. Sure, this was something he answered on the spur of the moment when a reporter asked him to comment on rumours of a possible Apple phone. Spur-of-the-moment answers are interesting, though. They tell you either what a person's gut feeling is, if they haven't given the matter much thought, or what they truly think after they've thought it through considerably. In either case, brushing Apple off in this manner is worrisome.

The situation reminds me very clearly of a company all-hands meeting in the summer of 2003, while I was working at Roxio.

BACKGROUND: Roxio had just bought the Napster brand and planned to re-launch it as Napster 2.0. Of course none of the original infrastructure could be used, because the original Napster didn't really have any infrastructure, so Pressplay was also acquired in May 2003 to actually run the (then) subscription-only service. Basically Roxio had put almost all its spare cash into these purchases. More info here:
http://en.wikipedia.org/wiki/Napster#Current_status
and here:
http://en.wikipedia.org/wiki/Pressplay

In the meantime Apple was busy with a few things. Apple had started staging its multi-year strategy to regain market share and acceptance and clean up the disastrous effects of the '90s. The first phase was a massive product line cleanup and a focus on core competencies. Part of this was reaching out to part of its core market through the "Rip, Mix, Burn" marketing and engineering campaign, in which the iMac was equipped with CD-burning hardware and software. Shortly after, in October 2001, Apple introduced the iPod, originally a Mac-only product. The term is now synonymous with MP3 player, but at the time it had just started generating buzz. Then Apple made a "hell froze over" marketing splash by announcing a Windows-compatible iPod in July 2002. At the time Apple already had iTunes, considered by many to be the best music management software around, but it chose to ship the Windows iPod with a special version of MusicMatch Jukebox, which at the time was arguably one of the better-suited pieces of software available on Windows to get the job done. Fast forward a few more months to the end of April 2003 and Apple announced the iTunes Music Store. With it Apple had firmly placed its stake in the ground, stating that pay-per-download, and NOT subscription, models were what users wanted. But at the time the iTunes Music Store was obviously available only through iTunes and hence only on the Mac; hey, it was Apple after all, what could you expect... right?
Watch the introduction of the original iPod:
http://uneasysilence.com/archive/2006/10/8008/.
Review information about the iPod and its timeline:
http://en.wikipedia.org/wiki/Ipod#iPod_models.
Background on the iTunes Music Store:
http://en.wikipedia.org/wiki/ITunes_Store#Background

That's more than enough background info. During the Q&A session at the end of the all-hands meeting, one brave soul asked what Roxio's take on the iTunes Music Store was, since there had been some rumours of a Windows version. The response was as simple as can be: "It'll never happen." No one even paused and the Q&A session went on. Afterwards I told my boss at the time not to underestimate Apple's position in this market. Fast forward a few months and in October 2003 Apple was announcing a Windows version of iTunes, the Windows version of the iTunes Music Store and a completely new generation of the iPod; all in time for the holiday season. Napster 2.0 wasn't even ready yet... Oops!

Now, back in the present, Napster is still met with lackluster acceptance, while iTunes and the iPod hold roughly 85% and 75% market share in the US respectively.

Those comments about Apple were made at a time when Apple had just set in place a strategy to come back from the brink of extinction, and it was just barely starting to work. Other than to the people who really wanted to believe Apple could make a comeback, or to companies like eMachines copying the iMac because it was trendy and fashionable, Apple was a nobody. It almost made sense for a comment like that to slip by at the time. But in this day and age? How can somebody as bright as the CEO of Palm say something like "it will take Apple at least as long as us to get the right mix"?

He makes some logical conclusions about how Apple might go about executing a mobile phone strategy; the only problem is that it's set 3-4 years in the past. For one, you don't have to worry nearly as much about the childish bickering among the various carriers: you can operate as a Mobile Virtual Network Operator (MVNO). Virgin Mobile is one of the very successful MVNOs, while companies like Disney have had more trouble (I think because they don't have the right mix). Here's a good article summing up the current state of MVNOs: http://news.zdnet.com/2100-1035_22-6106423.html. So let's see: Apple has core competencies in hardware and software (especially in integrating the two really, really well), it has shown a knack for finding the right mix of content and features that people will go for, it has a CEO who might just happen to have good connections at Disney (read: access to contacts to get a head start over the hurdles of setting up an MVNO), and finally it runs an incredibly tight ship.

I'm not saying that's a 100% formula for success, and of course in all of this discussion I took the luxury of assuming that the iPhone rumour was actually true, just so I could illustrate a point about Palm's position. Still, completely brushing off the media and pop-culture darling seems like a risky move when your own company is already struggling. I'm just picking on Palm here because the example is too easy, but Palm is not alone in thinking like this (I've made my opinions on RIM quite clear in the past, although not on this blog).

Meta-blogging

I really need to blog more!

Why?

For lots and lots of reasons.
- As my day to day activities migrate away from academia (for the time being) and more towards programming I want to keep practising the art of writing effective prose.
- Blogging is a very effective (modern) tool to capture the progression of my ideas so I can review them later.
- I have experienced tremendous internal resistance to blogging regularly. Figuring out what can reduce the activation energy to write regularly might hold the key to finding out how to design interfaces with low activation energy to accomplish tasks in general.
- I have this asinine fear of exposing things that aren't perfect or at least close to completed. I need to start embracing organic development and releasing my need to control everything so tightly.
- I need to figure out how to focus my thoughts. Too often I feel the need to give much too long of a preamble to establish shared context and then diverge into way too many avenues of the ideas I'm trying to express. This is related to the need to feel like everything I write is "complete."
- But mostly to collect my ideas and share them; I have spurts of creativity that I want to capture both for myself to come back to and think about more deeply later or just so they get out there since I don't have the time to pursue all the ideas that come into my mind.
ASIDE: My thoughts on distributed cognition and the socio-emergence of ideas are probably best left for an entirely separate entry (or set of entries). That's all.

Thursday, October 05, 2006

Mental Explorations in the Land of Virtualization - MELV

I work on the team responsible for delivering VMware's hosted suite of products (Workstation, ACE, Server, Player). Looking at that line-up of products you could say our part of the company is focused on delivering solutions that push the envelope of what can be done with virtualization. I personally have come to think that virtualization will provide the computer industry with its next paradigm shift; how very back to the future! I think part of the reason has to do with how nascent computing (the industry, the field of research, and the trade) really is compared to what else is around. Many of the things that are needed to make virtualization worthwhile are just starting to come together.

A particular interest of mine is how virtualization will fundamentally affect the way Joe Consumer uses his computer; how does virtualization affect the end consumer's computing experience? At this point it's a no-brainer to me that virtualization should be used in the data centre, and if you are a developer (web designer, software engineer, or any other flavour) and are not using Workstation, then you are at a disadvantage to the competitors who are using Workstation to be more productive. Plain and simple!

With the easy part out of the way let's move onto the harder questions. The issue I'll target in this blog post is application packaging and virtual appliances.

I think virtual machines provide an interesting way of rethinking how applications should be packaged, for consumers and enterprises alike. Imagine if Oracle developed their next database engine tied to a specific kernel that they tuned to meet their needs and then shipped it as a VM that could simply be copied onto a SAN, with barely anything else to do to get it up and running. Better yet, imagine a game publisher having complete control over the runtime environment by compiling a custom system into a VM; no more need to worry about what other cruft is already on the gamer's system, since the game runs inside the VM.

VMware is promoting this concept under the moniker "virtual appliance." As far as I'm concerned, there have been a few imaginative examples of virtual appliances, but most are designed with virtualization as an afterthought. Another huge issue is the size of these appliances, often ranging from hundreds of megabytes to gigabytes. They are often full distros plus the few tools that are actually pertinent to the appliance.

In some ways I see this as an opportunity for the second coming of micro-kernels (I know I'm painfully abusing the meaning of micro-kernels here). If there were a streamlined way to create a VM with an open source kernel containing only the bits you need to run the VM, and then include your pertinent additions, you would end up with a truly streamlined solution. There might be some wasted hard drive space, but we are talking in the tens or maybe, at most, a couple of hundred megabytes. RAM is even less of an issue because of the shared memory page techniques that the VMware platform can offer.
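To make the packaging idea a little more concrete, here is a rough Python sketch of the very last step: wrapping a pre-built, stripped-down disk image (kernel plus only the bits the application needs) in a .vmx descriptor so the whole thing can be copied somewhere and powered on. This is purely illustrative; the script name, key names, and values are my assumptions based on common hosted-product .vmx files, not an official appliance-building tool.

# appliance_package.py - hypothetical sketch, not a supported VMware tool.
# Wraps a pre-built minimal disk image (kernel + app only) in a .vmx
# descriptor so it can be distributed as a "virtual appliance."
import os

def write_vmx(name, disk_path, mem_mb=128, guest_os="otherlinux", out_dir="."):
    # A minimal set of .vmx keys; a real appliance would set many more.
    lines = [
        'config.version = "8"',
        'virtualHW.version = "4"',
        f'displayName = "{name}"',
        f'guestOS = "{guest_os}"',
        f'memsize = "{mem_mb}"',
        'scsi0.present = "TRUE"',
        'scsi0:0.present = "TRUE"',
        f'scsi0:0.fileName = "{os.path.basename(disk_path)}"',
        'ethernet0.present = "TRUE"',
    ]
    vmx_path = os.path.join(out_dir, name + ".vmx")
    with open(vmx_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return vmx_path

if __name__ == "__main__":
    # Assumes someone has already produced the stripped-down disk image.
    print(write_vmx("tiny-db-appliance", "tiny-db-appliance.vmdk", mem_mb=256))

The point isn't the particular keys; it's that the entire "installer" for the appliance collapses into a disk image plus a small text descriptor.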

All right, enough rambling. I've decided to put my money where my mouth is and personally explore various applications. The basic premise is that you can be creative by well... being creative. You need to jump into things and experiment with solutions, try them on for size. These are the sorts of ideas that grow organically. You start with some basic stuff and slowly your mind starts to form a mental model. I personally think that the concept of properly using virtual machines is so radically different from how most of us have grown to think of computing that we literally need to boot-strap a new model.

That's quite literally what I've set out to do. I'm going to start very simple (conceptually anyway) and explore the various ideas I come up with that I think may be interesting to pursue. I suppose I should also set a few ground rules but nothing too limiting that it will keep me from exploring an idea.

1 - I'll try and keep a series of blog posts about my experiences. All these blog entries will have subject lines that end in - MELV for easy searching.
2 - When applicable (if a usable VM is produced out of an exploration) I'll do my best to make it available in some form.
3 - The initial exploration will be focused on application packaging and virtual appliances. Given my current mental model and ideas that have been brewing in my head this is an easy place to start my bootstrapping process.

Note that I've made no promise that an exploration should end in a usable VM of sorts. That would sort of defeat the purpose of exploration now, wouldn't it?

------
Shawn Morel works for VMware but eats, sleeps, and blogs for himself... He certainly doesn't speak for VMware.

Friday, June 09, 2006

Microsoft's new type faces... drool

As some of you may or may not know, Microsoft's typography lab has been hard at work on a new version of ClearType and a new set of typefaces slated for release with Office 12 and Windows Vista.

I'm not ashamed to admit that I'm a typo-phile at heart, and I've been following the development of these typefaces on and off for a little more than a year. I was browsing the latest articles on OS News (as I do daily) and came across "A Comprehensive Look at the New Microsoft Fonts," an article about the new typefaces. How could I resist!

Reading the article, though, I think the author missed some important points. Granted, the article does claim to be comprehensive, and my own writing has all the brevity of Buckminster Fuller or Hofstadter. I do commend the author on being brief; it's a skill I'm always working hard to achieve.

I must say that the new typefaces are absolutely beautiful. I'm a computer science student and I do contract work for a software company. I also happen to have a condition that makes my reading speed very slow (I fall in the 4th percentile of the population). For someone who stares at a computer screen all day, a new typeface like Consolas alone would warrant the cost of Vista. Yes, that's right, I would pay $200 for a monospaced font to use in my code editor!

Back to the article though. The author first points out that the majority of the new typefaces are sans-serif (simply put, that means the letters don't have any decorative strokes on their stems and terminals). A typical example of a serif font is Times New Roman, and examples of sans-serif fonts are Futura and the more commonly known Arial. Obviously there are many typefaces of either kind. The author seems a little confused that the majority of the new fonts are sans-serif, since in theory serif fonts are more legible.

Typography myth number 1: Serif typefaces are easier to read. This is still a debated point, but there is not much conclusive evidence that serifs help with readability (based on testing the reading speed of on-screen type and print). The general idea was that well-designed serifs helped the eye follow the baseline of the font, and that the more information (differentiation) a font displays in its outline, the more the eye will pick up and the more easily it will differentiate between the letters of a typeface. What the research shows is that inter-letter spacing (kerning) and inter-word spacing play a much more important role in readability than serifs do. This argument is sometimes taken to the point that serifs become a matter of preference and habit; someone who does most of their reading in a serif typeface and then must read something in a sans-serif font might struggle a little more at first as they get used to it. The same would be true of someone who normally reads sans-serif and then must read a font with serifs.

The other important point to consider is that computer screens are incredibly low resolution compared to print. Considering that most modern LCDs fall somewhere in the 100 dpi range, every pixel you have to represent a letter counts. Representing the spine of the letter in as true a form as possible is much more important than sacrificing that to squeeze in serifs. Also, at 100 dpi it is nearly impossible to accurately represent the serifs, so the debatable good they may have brought to readability is negated. Serifs at this resolution fall more into the category of visual noise than valuable structures of a typeface. Once screens start reaching 300 dpi (and for various reasons I would argue 600 dpi), then we could consider using serif and sans-serif fonts interchangeably for screen reading. Support for resolution independence in modern operating systems, and the social aspects of using serif vs. sans-serif fonts brought about during the Bauhaus movement, are two separate articles altogether.
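If you want to see the pixel budget for yourself, here is a quick back-of-the-envelope calculation (a Python sketch; the 10 pt body size and the x-height being roughly half the em are my own rough assumptions):

# Rough pixel budget for a 10 pt letterform at various screen resolutions.
POINTS_PER_INCH = 72

def pixels_per_em(point_size, dpi):
    # The em square scales linearly with both point size and resolution.
    return point_size / POINTS_PER_INCH * dpi

for dpi in (100, 300, 600):
    em = pixels_per_em(10, dpi)   # height of the em square in pixels
    x_height = em * 0.5           # assume the x-height is about half the em
    print(f"{dpi:>3} dpi: em ~ {em:.1f} px, x-height ~ {x_height:.1f} px")

# At ~100 dpi the whole em is only about 14 px tall, so a serif gets a pixel
# or two at best; at 300 dpi it finally has a handful of pixels to work with.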

The author also brings up the use of the letter 'C' for the names of the new typefaces. These fonts all start with C because they are the first typefaces actually designed with Microsoft's ClearType technology in mind. Originally, fonts like Verdana were designed with the screen in mind and ClearType made them a little better. As we can see with these new typefaces, designing with ClearType in mind from the start has produced some clearly impressive results.

I especially enjoyed some of the conspiracy-theory comments posted by readers about the name choices. Some went as far as to speculate that Microsoft wanted to ensure that the fonts would be at the top of font-selection lists. I don't know about you, but personally I have a bunch of fonts with names starting in 'A' and 'B,' and Microsoft's fonts will also be scattered among a bunch of fonts starting in 'C.' As for personal bias, well, I'll let you be the judge of that. On campus I work as the Apple Campus Rep and I do contract work for a software company that is a direct competitor to Microsoft.

Typography myth number 2: Anti-aliasing makes fonts more readable. This point was not made by the author but rather by one of the readers in a comment:

I think you mean for the first time in history on MS-Windows systems. Mac OS X has had an advanced anti-aliasing system in place for a few years now.

Let me say this rather bluntly: anti-aliasing is bad for fonts! Anti-aliasing actually blurs edges to trick the eye into thinking that lines are continuous instead of discrete pixels. You want your font to be rendered with high fidelity. Anti-aliasing actually throws in some uncertainty, because you can't be sure how the spine of a font will be deformed and blurred when letters are placed at arbitrary locations on the screen. I have to agree that, in practice, OS X does look better for the time being. I think this is partly due to the type system making the spines of fonts a little bolder and having more flexibility in kerning adjustments. This is all going to change soon as we start to see 150 dpi and 200 dpi displays. Microsoft's type rendering engine is, hands down, technologically superior.
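Here's a toy illustration of the deformation I mean (a Python sketch with a deliberately simplified coverage model, not how any real rasterizer works): a one-pixel-wide stem that lands on a pixel boundary stays crisp, but the same stem placed at a fractional position gets its ink smeared across two grey columns.

# Toy model of anti-aliased coverage for a 1-pixel-wide vertical stem.
def stem_coverage(x, width=1.0):
    # Returns (left pixel coverage, right pixel coverage) for a stem whose
    # left edge sits at horizontal position x, measured in pixels.
    left_pixel = int(x)
    room_in_left = left_pixel + 1 - x   # ink that fits in the left pixel column
    left_cov = min(width, room_in_left)
    right_cov = width - left_cov
    return left_cov, right_cov

print(stem_coverage(3.0))  # (1.0, 0.0) -- pixel-aligned: one solid black column
print(stem_coverage(3.4))  # (0.6, 0.4) -- the same stem becomes two grey columns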

Finally, how can you talk about all these new fonts without even pointing to the people responsible for them?
An amazing video on Channel 9: Cleartype Team - Typography in Windows Vista
Microsoft's typography group web site: www.microsoft.com/typography

Monday, October 24, 2005

Why Wil, WHY!

Wil Shipley is the co-founder of Delicious Monster. Before that he co-founded the Omni Group, which originally developed software for NeXT and later moved on to Apple's OS X. I generally enjoy reading Wil's blog. Although he can be rather blunt at times, and probably has an approach that wouldn't fly in most corporate environments, he is strongly opinionated and he sticks to his guns. For the most part this is a quality I admire in people, regardless of whether I agree with them or not.

Well in this instance I disagree with Wil. He recently posted a blog entry about his views on unit testing: Unit Testing is teh Suck, Urr.. The title alone should be enough to set a wise developer off.

I'm not about to blindly promote unit testing and test-driven development just because it's "what's in" right now. I have used unit testing techniques, encouraged a team of developers to use these techniques on a commercial web application, and written developer tools to help integrate unit testing into a continuous build workflow. I have even purposely not unit tested certain projects just to see how much extra time I was spending on "manual" testing. I believe in unit testing because I have seen it work and I have seen projects fail because of lack of unit testing.

First of all, talking about unit testing separately from test-driven development is nonsense. Unit tests are a developer tool! Unit tests are written by the developer to isolate a specific section of functionality while the program is being worked on; be that initial coding, debugging, or bug fixing. If this is not the case, call your tests anything else, but please don't call them unit tests. Obviously this is my personal view of things, although it is shared by other developers. In Wil's case, his exposure to "unit tests" was clearly one of the bad uses of the methodology. He mentions being hired by Lighthouse Design to write unit test code as one of his first jobs. Huh?

A company that hires devs/testers for the specific purpose of writing "unit tests" is not doing unit testing! Of the many benefits of unit testing, I would say most are developer-centric. The benefits of having someone solely writing unit tests are completely countered; the company is essentially paying someone to spend a considerable amount of time working through code in hopes of recreating the developer's original state of mind and capturing the results in the form of executable code. This is something that would have taken the developer very little time to do in the first place.

Wil's first general guideline is:

When you modify your program, test it yourself. Your goal should be to break it, NOT to verify your code. That is, you should turn your huge intellect to "if I hated this code, how could I break it" as SOON as you get a working build, and you should document all the ways you break it. [Sure, maybe you don't want to bother fixing the bug where if you enter 20,000 lines of text into the "item description" your program gets slow. But you should test it, document that there is a problem, and then move on.] You KNOW that if you hated someone and it was your job to break their program, you could find some way to do it. Do it to every change you make

So let me think about this for a second. When I'm coding along and I make a change, I should stop and take the time to test it. Now, how different is taking the time to manually run the program and put it into a few different configurations compared to taking that same time and writing a few lines of test code? Think about this honestly. Switching from developer to tester is a cognitive task switch; you have to shift your world view from building to breaking. To make matters worse, you have to stop thinking in code and start thinking in whatever interface your program provides, which may still be crude since you are just developing it. This is an expensive context switch.

Let's extend this line of thought. What about the time you spent thinking about how you were going to create a routine? What were you doing? Surely your mind was active, but could the time have been spent doing something more productive? For example, you could have been creating a very basic set of unit tests to handle the most common functionality; a skeleton functional test. When you approach unit testing from this angle, writing and maintaining unit tests takes only a little more time than doing your regular development.

But the benefits don't stop there. Let's say you are starting the development of an app or more realistically a component. You have some basic functionality already in place that you can manually run 10 tests against and be fairly sure the general behaviour is correct. So you go back to the code and change it to keep adding the functionality that's required. Now let's say you need to run 2 tests to check this new functionality. But what about the functionality that was already there? Did you break anything with the changes? I can think of many examples during the early stages of development where you make changes that will almost inevitably affect what little functionality was already there. So you really have to run 12 tests, the 10 original tests and the 2 new tests. If you consider the frequency at which you iterate through these develop/test micro-cycles you can quickly get a sense of just how much time you are wasting, even in the course of a few hours, manually re-running these tests.
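To put that arithmetic in code form, here is a tiny sketch in Python's unittest (purely for illustration; Wil's world would be Objective-C, and the function under test is made up): the "original ten" checks and today's two new ones live in the same suite, so every develop/test micro-cycle re-runs all of them with one command.

# regression_sketch.py - hypothetical example of the "10 old + 2 new tests"
# situation described above.
import unittest

def parse_price(text):
    # The component under development; imagine it growing new behaviour
    # (currency symbols, thousands separators, ...) every few hours.
    return float(text.replace("$", "").replace(",", ""))

class PriceParserTests(unittest.TestCase):
    # The "original ten" tests would live here; two shown for brevity.
    def test_plain_number(self):
        self.assertEqual(parse_price("19.99"), 19.99)

    def test_dollar_sign(self):
        self.assertEqual(parse_price("$19.99"), 19.99)

    # The two tests added for today's change.
    def test_thousands_separator(self):
        self.assertEqual(parse_price("1,234.50"), 1234.50)

    def test_dollar_and_separator(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

if __name__ == "__main__":
    unittest.main()  # one command re-runs every check, old and new

The manual alternative is clicking through all twelve scenarios by hand, every time; the tests amortize that cost to essentially zero.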

Shipley adds:

Too often I see engineers make a change, run the program quickly, try a single test case, see it not crash, and decide they are done. THAT IS TEH SUCK! You've got to TRY to break that shit! What happens when you add 10 more points to that line?

I'm going to argue that developers fall into this pattern because of the tedium of constantly repeating very similar trial runs of their program.

He then goes on to compare testing strategies to nature and evolution.

When you get the program working to the point where it does something and loads and saves data, find some people who love it and DO A BETA TEST. The beta test is often maligned, but the most stable programs I've ever written are stable because of beta testing. Essentially, beta testing is Nature's Way (TM) of making systems stable. You think nature creates unit tests and system tests every time it mutates a gene? Aw hell nah. It puts it out in the wild, and if it seems better it sticks around. If not, it's dead.

I really don't follow his argument on this one. Nature never just changes something and then sees if that change is successful; that kind of pseudo-random genome re-sequencing is actually quite unnatural. If you think about it, nature is intrinsically very efficient (lazy?). According to the theory of evolution, changes take hold because of the stresses an organism is put under; no stress to change means no changes. That's not unlike unit testing. The changes are made to the system in hopes of allowing it to cope better with the challenges that are already in place.

I am in no way saying that some form of beta testing is not required. On the contrary beta testing is a much needed phase in the development life cycle. Beta testing can catch bugs ranging from localization issues to inconsistencies in the user interaction model. Unit testing is especially hard to carry out on the graphical user interface of programs.

Bill Bumgarner, one of the developers who worked on Apple's Core Data framework, posted "Unit Testing" on his blog shortly after Wil's comments. Bill points out that Core Data makes heavy use of unit tests and that it would not have made it to market in the time frame that it did had it not been for unit tests. Bill also points to Delicious Library's reliance on Core Data. In effect, Wil was able to get by without unit testing mainly because of the richness of the Cocoa frameworks.

I wanted to post this blog entry a while back but got caught up in school work and job interviews. Now, in all fairness, I should mention that Wil briefly returned to the topic of unit testing (a few sentences compared to a few pages in his original article), directly responding to Bumgarner's post. He acknowledged that unit tests are great for frameworks and bad for UIs; it really felt more like he was brushing off the topic than honestly addressing it. The remainder of that entry had a very apologetic tone, since he was addressing the large amount of criticism of the post that came right after the unit testing one, "Quit School and Set Things On Fire."

Alright, this post has gone on too long as it is. Considering that one of my goals with this blog is to work on my ability to write concisely, I don't think I'm doing all that great. Now that interviews are over I'll hopefully have a little more time to blog about how that went, after the next 2 weeks of assignment and midterm hell.

Wednesday, October 12, 2005

Thank you ADC

Every IDE has its own quirks and ways of seeing the world. Unfortunately for developers, that means spending time getting used to viewing the world in that particular way. Whether this is good interface design practice I will leave for another post, but I'll leave a quick note in passing. Since an IDE is really an environment rather than an application like Word, and since it's a tool that will be used by experts much like Photoshop or Illustrator, it may not be a bad thing to make it just hard enough to use that newcomers have to spend some time learning how to do things in the environment. The benefit of this approach is that once users have gone through this learning process they are much more proficient; compare your average Word user, who has discovered most of the features through self-discovery (aka snooping around the interface) and uses on average 10 features, to your average Photoshop or Illustrator user, who has invested time and energy learning how to use the application. Are graphic designers more intelligent than office knowledge workers? I don't think so. It's the interface itself that has made them more proficient.

Apple's IDE, Xcode, is actually quite powerful once you finally figure out what's what in its world. After spending some time using Xcode and not fully understanding the reasoning behind everything, I think things finally clicked sometime this summer. After using Xcode for months I now finally understand the underlying principle behind Xcode's project structure. Like most things of this nature, once you finally understand it, it's not only simple but painfully obvious.

Xcode is not alone when it comes to applications that just make you feel stupid. I'm not going to name any applications or peg them to the open source world or to commercial software shops. What it does come down to, however, is the complete user experience. It might not even be a question of the developers not caring. Rather, many times the developers forget what it was like to first start learning and using their own product. They become so immersed in their paradigm that it seems obvious to them, and they very casually overlook the difficulties that newcomers might have with the product.

Although "Understanding Xcode Projects" published on the ADC is long overdue it is nonetheless a very elegant solution. No more than a few pages with accompanying annotated screen shots, "Understanding Xcode Projects" succinctly explains Xcode's philosophy. The document can prove useful to new-comers and experienced Xcode users alike. In about ten minutes you can look over the article and sit back and say "AHHH! So that's how things work around here." Rather than spend many frustrated attempts at tackling Xcode by the horns.

I think more developers should learn from this example; struggling with a user interface should not be viewed as a rite of passage. This is probably even more true in the open source world, if only because technical writers are not as readily available. If you're a developer, what's an hour spent preparing a help article like this one compared to the countless hours you've spent implementing all those neat features that you hope will get used?

Sunday, September 25, 2005

Surf's up

Earlier this year there was a small uprising in the open source world over Apple's commitment to giving back to the community. At the core of the dispute was Apple's WebCore project. Open source advocates were upset that Apple wasn't giving more back to the KHTML project considering what it had gained for free. The argument really heated up shortly after Dave Hyatt posted to his original Surfin' Safari blog that WebCore, and hence Apple's Safari browser, now passed the Acid2 CSS test.

Because of the publicity Apple has given to the close relationship between the WebCore and KHTML projects, the members of the KHTML project were being asked how soon it would be until KHTML would also pass the Acid2 test. This led members of the KHTML project to post a few open letters to the community expressing their discontent over how Apple had failed to collaborate effectively with the open source project.

Hyatt soon posted a reply to his blog asking for suggestions on how the WebCore project could change to improve the situation. Since the situation was already heated, many of the comments fell into what I would consider non-productive chatter. I wouldn't say that it evolved into a flame war, but there was much blame being placed on both parties and a definite lack of concrete proposals.

I care about the success of Safari and the WebCore project just as I care about seeing an open source project like KHTML succeed. So I decided that enough was enough and that I should write a detailed response proposing a handful of manageable changes that would allow both teams to benefit even more from their partnership.

The WebCore project is now being managed differently and there is a new Surfin' Safari blog. Although I have no way of knowing for sure, some of the ideas I suggested are very similar to the changes that have actually been put in place. Now, it would be extremely vain of me to think that there are no other bright people at Apple who could have come up with similar ideas. However, it is somewhat comforting to think that I was partly responsible for getting things rolling in what seems to be a productive direction.

Here was my response:

First let me give some background about myself so you can assess my level of bias. I am a computer science student at a Canadian university and I have an Apple ADC Student membership. I am also an Apple Campus Rep, and many of the students in my program are very pro open source (as am I), so responding to issues like this one is something I must deal with often. Our university has a co-op program, and while I am only beginning my 3rd year, I effectively have one full year of experience working in a commercial setting, particularly using open source frameworks to produce a commercial web app. I know this isn't a great amount of work experience but it's better than none. My minor is in business with a focus on management sciences.

With that said, here are a few things I think could be done better based on the information I have, which I admit is incomplete, much like that of most people who have posted.

1- From what I gather, communication between Apple and the KDE group has been only through email or informally through blogs. One of my co-op jobs was with Research In Motion (makers of the BlackBerry), so if anyone can appreciate the benefits of email, I can. However, email and blog postings fall very far short of being a rich enough communication medium to coordinate development efforts. In my opinion many developers fall prey to over-relying on email because of its ease and speed. Both companies and open source communities use it because it's cheap and widespread, and I think both run into problems when they rely on it too much. Email can't capture facial expressions or vocal intonations, it is often misinterpreted, and the misinterpretation can't be corrected on the fly the way it can when you are talking face to face.

Soln: Organize a monthly or bi-monthly conference call with the lead developers of both parties (I know Apple has the resources for this, since Campus Reps have regular calls). Invite some of the KHTML devs to WWDC and send some WebKit devs to the KDE conference. I think this alone would change the outlook on the situation; knowing that there are real people on the other end goes a long way.

2- Apple is very closed about the bugs it is working to fix. This is a company policy and it is completely up to Apple whether it wants to leave it as such. However, I think it would help not only external relations (members of the community wanting to contribute to an open source project) but also Apple internally if bugs logged against all of its open source projects were also open.

Soln: There is no doubt already a team responsible for the initial triage of bugs that come into Radar (be they crash reports, people clicking the Report a Bug button in Safari, or people logging bugs on their own through bugreporter.apple.com). When these bugs are initially reviewed and dispatched, if they are dispatched to an open source project (WebKit, Darwin, GCC, Bonjour, etc.) they should be sent to a separate, open bug tracking system. For example, if a Safari bug is logged but it ends up being a bug in the client, don't put it in the open source tracking system; but if the bug originates in WebKit, then go ahead and open-source the bug. I have no idea what Apple uses for issue tracking software, but I have experience with three of the most popular issue tracking products and they all provided some way of synchronizing the status of bugs between two databases. Better yet, the Radar database schema could be changed to incorporate an open source flag. Whatever happens, Apple needs to let people interested in one of the open source projects know what is currently being worked on, so that work is not duplicated and is well coordinated. What is the point of open source otherwise?
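To illustrate the routing rule I have in mind, here is a small Python sketch. The component names, the "open source flag," and the idea of mirroring into a public tracker are my own assumptions for the sake of the example; I obviously have no knowledge of Apple's actual Radar schema or triage workflow.

# triage_sketch.py - hypothetical illustration of the proposed routing rule.
OPEN_SOURCE_COMPONENTS = {"WebKit", "Darwin", "GCC", "Bonjour"}

def route_bug(bug):
    # Decide where a freshly triaged bug report should live.
    if bug["component"] in OPEN_SOURCE_COMPONENTS:
        bug["open_source"] = True      # the proposed flag in the schema
        return "public tracker"        # mirrored where outside contributors can see it
    bug["open_source"] = False
    return "internal tracker"          # e.g. a Safari-client-only issue

print(route_bug({"id": 1, "component": "WebKit"}))     # -> public tracker
print(route_bug({"id": 2, "component": "Safari UI"}))  # -> internal tracker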

3- There were both suggestions and criticisms about adopting a common code base. A point to keep in mind: KHTML and WebKit, in their current state, are two separate, albeit open source, projects. I don't think Apple has done any false publicity here, since they were fairly straightforward when they said that WebKit was a fork. Apple, in my opinion, has also done a good job of helping the KHTML project, both in terms of added functionality and publicity. KHTML has come a long way since Safari was released (I'm happy about this since KDE is my platform of choice when I run Linux), and although Apple's contribution seems to have slowed down lately compared to the initial benefits, I think this is mainly a communication problem. If it takes more time to integrate a fix from one project into the other than it does to actually write the patch, because of differences in the two projects (i.e. KHTML using Apple patches and vice versa), then it only makes sense that a developer would rather spend the time fixing it and feeling some sense of accomplishment than spend it sorting through some other developer's work. As for what Apple has to gain from going through the trouble of moving to a common code base, well, lots: better PR than it has been getting lately, increased awareness and use of a common web engine, a more robust rendering engine (since any abstractions introduced separate the engine functionality from native UI toolkit functionality that may have crept in undocumented), and, in the end, less work, since the community can be fixing some of the bugs. Notice that these are potential benefits for KDE as well, in my opinion.

Soln: I think that, fundamentally, both teams still have many of the same core values and direction for the project. Both teams still care very much about making a light-weight, fast, and common rendering engine. In my opinion the problems here stem more from what I would call a superficial project management issue than from a core divergence in project direction. Simply put, I think it would be a shame for a combined effort with so much potential to cease existing. Most of the complaints I can gather from the KDE camp are frustrations about how hard it is to reap the benefits of the combined effort; how hard it is to actually do more of the work they love doing rather than code base administrivia. Mr. Hyatt's offer about simply using WebCore (which may seem drastic at this point in the game), his continued posting of patches, and the very fact that he posed the question "what can Apple do?" are a very clear indication to me that at least he, if not more people on the WebKit team and higher up at Apple, are more than willing to try to improve the situation. If Apple truly didn't care, then why would they even bother with this sort of thing? The proposition to just go ahead and use WebKit as a common code base may seem like an effort by Apple to simply supplant KHTML but, although I am just guessing here, I think it shows just how much Mr. Hyatt wants to improve the situation, and maybe even how desperate he might be and the lengths he is willing to go to help resolve this issue.

Just so that this post doesn't suffer from any recency effects and people finish reading this thinking that I am putting most of the pressure on KDE devs, I think that both teams have to work equally if they hope to gain.

Sunday, August 28, 2005

Finally, my own blog

I've been wanting to start my own blog and web site for some time now. While the web site is still a work in progress, I thought that the end of the Spring term at university was as good a time as any to get things going.

For the last six months or so I have kept coming across various articles and ideas and saying to myself, "oh, I should really blog about this." Call me a hopeless dreamer, but I truly believe that one person can make a difference in the world, and I won't stop until I've made my dent. I may only be a 21-year-old student, but I am very opinionated. At the same time there are several people that I admire, a few of them being: John Gruber over at Daring Fireball, Joel Spolsky of Joel on Software, Bob Cringely, who writes the I, Cringely column, and Bill Cowan, a professor at the University of Waterloo with whom I've been doing research and who doesn't keep much of an online presence. The main reason I admire these people is that they all have strong opinions that are very well reasoned, yet expressed with a simplicity and elegance that few people are able to match. Talk is cheap, and dreaming without action would truly prove to be hopeless. If I really do hope to make a difference in this world, I have to learn to express myself with as much eloquence as the best of them.

As they say, practice makes perfect, and what better place to practice and get feedback than on a blog?

So, why Blogger? I must admit that I am somewhat of a perfectionist; that, combined with how much work they love to give us at university, has led me to put off creating my own blog. Blogger has everything I need for the time being, and by the time I need more features I can either migrate my blog to some custom solution or, who knows, Blogger might have the features I need by then. At this point I think that getting into the rhythm of blogging regularly is the most important part of the whole process. This kind of thinking goes hand in hand with an approach to technology development that I have been pondering for a little while now: the most effortless solution often tends to be the best one. For a perfectionist it can be hard to let go of the notion that solutions must be rigorous and completely defined over the problem domain. Sometimes the quick and dirty solution can turn out to be even more enabling; it can turn out to be exquisite.