Tuesday, November 25, 2003

Mechanism and Policy

What can we do about the tradeoff between flexibility and convenience in interface design? Do users want us to provide just the means to get the job done (the mechanism), or do they want to be told how to do that job (the policy)?


I’ve been reading Eric Raymond’s “The Art of Unix Programming” (a good book that could have been great had he managed to find a more balanced voice). In the section on user interfaces, he reminds his readers of the decision of the designers of the X windowing system not to impose look-and-feel constraints on X applications. The designers say that X supports “mechanism, not policy.”


The X windowing system provides the underlying graphical user interface for most Unix systems (the Mac is a notable exception, as we’ll see). Perhaps surprisingly, X itself offers almost no user-level features. Instead, it concentrates on providing a set of low-level primitives for drawing windows and filling those windows with graphics and text.


In order to make X usable, you need to supply an application program called a “window manager.” This hooks into X and handles events: for example, X may create a window, but the window manager can decide where to place it on the screen. To fill windows with widgets (standard interface components), you need another layer of software, the various X toolkits.


The designers of X felt that building a lot of behavior and standard interaction models into X would limit the users of X. Instead, they provided a (fairly low-level) API, and allowed their users to build any style of interface they wanted. They provide the mechanism, but enforce no policies on how that mechanism is used.


By contrast, the windowing systems from Microsoft and Apple (as well as those from Be and NeXT) were rich in policy. These systems imposed a number of look-and-feel constraints and behavioral similarities between applications. There were even documents for application designers dictating just how their applications should look and react.


So what are the tradeoffs? Raymond says:


The difference in approach [between X and Mac/Windows] ensured that X would have a long-run evolutionary advantage by remaining adaptable as new discoveries were made about human factors in interface design—but it also ensured that the X world would be divided by multiple toolkits, a profusion of window managers, and many experiments in look and feel.


Ignoring the interesting spin on “evolutionary advantage” (I don’t often see X applications edging out Windows and Mac ones on my clients’ desktops), the point is a good one. By keeping the underlying framework free of particular implementation decisions, you make it more flexible. This flexibility is a two-edged sword. On the one hand, it allows multiple competing ideas to duke it out: the winners will be selected by their users, and not just by developers (perhaps this is what he meant by evolution). But on the other hand, it also leads to the fragmentation he describes.


But he’s also being disingenuous here: the reality is that it isn’t the X windowing system itself that’s adapting at all. Instead, it’s the efforts of hundreds of people writing the stuff on top of X that have provided the evolving interfaces he describes. Underneath the covers, X is basically the same old X. In a way, you could say that all their effort was expended making up for things that X didn’t provide itself.


So, by providing policy, the designers of Windows and Mac interfaces have provided their end-users with a consistent look and feel, and a base set of application behaviors. By instead focusing on mechanism and ignoring policy, the designers of X allowed developers to experiment, but gave the users of X applications a very inconsistent interface experience. Arguing one approach is better than the other is pretty pointless: they’re just different.


What can we learn here when it comes to applications and designs?


When I first started thinking about this, I was reminded of the audience discussions that sometimes erupt when I talk about Naked Objects. A Naked Objects application exposes the core business objects of an application directly to the end user. They can manipulate these in any way the objects allow: there is no overall application GUI imposing a certain way of doing things. A Naked Objects system provides mechanism, but little in the way of policy. When I describe this, some folks have a strong reaction against the idea. “Without high-level policy imposed by the GUI (scripting, or a series of modal dialogs that have to be filled in in a prescribed order), how can we ensure our users do everything that needs to be done?” they ask. And it’s a good question. Experienced users, folks who understand the domain, love Naked Object systems because they get the control and flexibility they need to get the job done. But inexperienced users can be confounded by that same flexibility (“what should I do now?”). (In a way, this also relates back to the Dreyfus model of skills acquisition: beginners need to be guided, while experts need to be left alone to get on with their jobs.)


In the Naked Objects world, it turns out that there’s a compromise. Because the Naked Objects are themselves just Java business objects, there’s nothing stopping you putting a more conventional view and controller on top of them, converting your Naked Objects application into a conventional GUI or Struts-style app. And, because the objects are the same beneath the covers, you could probably arrange to run both the Naked Objects and conventional application at the same time. The conventional application would have less flexibility and functionality, but would be easier for casual users. The Naked Objects system would have full flexibility for more experienced users.
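

To make that idea concrete, here’s a minimal sketch of my own (the class and method names are invented, and this isn’t the Naked Objects framework API): the same plain business object can sit underneath both styles of interface, with a conventional controller scripting a fixed sequence of steps for casual users while expert users drive the object directly.


  // Hypothetical business object, usable from either style of interface.
  public class Order {
      private boolean paid;
      private boolean shipped;

      public void recordPayment() {
          paid = true;
      }

      public void ship() {
          if (!paid) throw new IllegalStateException("unpaid order");
          shipped = true;
      }
  }

  // A conventional "policy" layer that walks casual users through the steps
  // in a fixed order. Expert users working through the direct, Naked Objects
  // style view would call recordPayment() and ship() themselves, in whatever
  // order the objects allow.
  class CheckoutController {
      public void checkout(Order order) {
          order.recordPayment();
          order.ship();
      }
  }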


In a way, this compromise seems like the OS X way of doing things. Apple has taken a Unix operating system and wrapped it in a fantastic user interface. Not only does this interface work at the application level, but it also gives you the ability to do most of the administration of a box without dropping to the command line or editing files. I love this: I’ve spent all too many years administering Unix boxes the hard way. But what I love just as much is that when I need to, I can still get down and dirty. It’s the best of both worlds: regular users get a great, easy-to-use interface, but power users get to strip away the facade and work down at the lower levels.


Increasingly, I think the “one-size-fits-all” mentality is going to break down. We need to think about delivering our application functionality using multiple modalities, each targeted at specific user communities. Mechanism versus policy is one axis we need to consider, and one that’s relatively easily addressed in a well-designed application. We don’t need to decide up front whether to deliver one or the other; instead we need to work out how to provide both.

Tuesday, October 14, 2003

My Kind of Warranty

Over on the ruby-talk mailing list, why the lucky stiff just announced a new version of the Syck library (it reads and writes the excellent YAML format). In his announcement, he includes a warranty and a disclaimer:


  • Some of this code is still beta software and here’s my disclaimer as far as that goes: I’m not going to say “Use at your own risk” because I don’t want this library to be risky. If you trip on something, I’ll share the liability by repairing things as quickly as I can. Your responsibility is to report the inadequacies.

That, my friends, is what a warranty should be.

Monday, October 13, 2003

Tangled Up In Packing Tape

It’s been a busy couple of months here as we prepare to launch our new book-printing imprint, The Pragmatic Bookshelf. We spent the year writing the first two books, Pragmatic Version Control and Pragmatic Unit Testing. The interesting part was what happened next.


Tuesday, August 12, 2003

Want to Work for Amazon?

Then apparently we’re required reading! (And number one, no less…)


Obviously a company with taste.

Tuesday, July 29, 2003

Prowling the ruins of ancient software

Link: Prowling the ruins of ancient software


Sam Williams recently interviewed Grady Booch, Ward Cunningham, and myself about software archaeology: the issues surrounding preserving and understanding existing software. Grady focused on the preservation aspects, keeping archives of worthy software in museums. Ward and I concentrated on the issues involved in understanding the code that you come across (not just in a historical sense; this stuff is useful when maintaining code that’s six months old). The resulting article in Salon is fairly high level, but the underlying message is an important one.

Friday, July 4, 2003

experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed.

On the Fourth of July, I went over to the archives to read a transcription of the Declaration of Independence. In a way it seems cheap to draw project-team lessons from such a document, but there is a wonderful quote in the middle that I hadn’t noticed before:


Are we currently putting up with things “to which we have become accustomed” rather than fighting to right them?

Tuesday, June 17, 2003

Debating with Knives

A couple of years ago at OOPSLA I saw a wonderful panel format. On stage there was a table, short side facing the audience. People sitting on one side of the table had to support the motion, those on the other had to oppose it. At any time, anyone could stand up, walk to the other side of the table, and tap someone on the shoulder. Those two people then swapped places: the one arguing for then had to argue against. After a while, they let members of the audience come up and tap on shoulders too.


At last night’s Pragmatic Practitioner dinner here in Dallas we tried the same thing. After the meal was cleared away, we used our left-over knives to indicate the position we were taking: a knife lying in the customary end-on position meant you were supporting the statement “statically typed languages are better than dynamically typed ones.” A knife lying crossways meant you were opposing the motion. We started with knives alternating around the table, and tried to maintain a kind of parity: you could only swap your knife’s position if someone else did. Every now and then we had a group swap, where every knife switched.


The result was a fun and not too serious debate. It was good to be able to argue both sides of a position; very few things are black and white, and it’s nice to be able to acknowledge opposing points of view.


Now I’m wondering if the same technique could work in a business setting. Could it take the heat out of the discussions we have about architectures, design, timescales, and so on?

Wednesday, June 11, 2003

Construction Methods

A recent thread in ruby-talk reminded me of a change in the way I’ve been writing classes over the last year or so.


In the past, I used to enjoy the ability to overload constructors. To create a rectangle given two points, I might write:


  Point p1 = new Point(1,1);
  Point p2 = new Point(3,4);

  Rectangle rect = new Rectangle(p1, p2);

Alternatively, I might want to create a rectangle given one point, a width, and a height:


  Rectangle rect = new Rectangle(p1, 2, 3);

Inside class Rectangle, I’d have two constructors:


  public Rectangle(Point p1, Point p2) { ... }

  public Rectangle(Point p1, double width, double height) { ... }

The problem with this approach is that it breaks down when the different styles of constructor can’t be distinguished based on argument type. It also makes the code harder to read: if you see


  new Rectangle(p, r, t);

where are the hints as to what’s going on?


So now when I write a class with multiple construction requirements, I tend to make the constructor itself private, so it can’t be called outside of the class. Instead I use a number of static (class) methods to return new objects. These methods can have descriptive names (often with_xxx, for_xxx, or having_xxx), and I don’t have to worry about parameter types. As a silly example, the following Ruby class has three constructor methods, letting me build a square by giving its area, its diagonal, or a side. What it won’t let you do is construct the object by calling new, as the constructor itself is private.


  class Square
    def Square.with_area(area)
      new(Math.sqrt(area))
    end

    def Square.with_diagonal(diag)
      new(diag/Math.sqrt(2))
    end

    def Square.with_side(side)
      new(side)
    end

    private_class_method :new

    def initialize(side)
      @side = side
    end
  end

  s1 = Square.with_area(4)
  s2 = Square.with_diagonal(2.828427)
  s3 = Square.with_side(2)

Of course this is nothing new to Smalltalk folks (nor is it particularly new or revolutionary elsewhere). However, it does seem to be less common than it should be in the Java world. Just because you can overload constructors doesn’t mean that it’s the best way to code your classes.
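

For what it’s worth, here’s roughly how the same technique reads in Java (my translation of the Ruby example above, not code from the original post): make the constructor private and expose descriptively named static factory methods instead.


  // Private constructor plus named static factory methods.
  public class Square {
      private final double side;

      private Square(double side) {   // can't be called from outside the class
          this.side = side;
      }

      public static Square withArea(double area) {
          return new Square(Math.sqrt(area));
      }

      public static Square withDiagonal(double diag) {
          return new Square(diag / Math.sqrt(2));
      }

      public static Square withSide(double side) {
          return new Square(side);
      }
  }

  // Usage:
  //   Square s1 = Square.withArea(4);
  //   Square s2 = Square.withDiagonal(2.828427);
  //   Square s3 = Square.withSide(2);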

Thursday, June 5, 2003

Change Control is Location Independence

Brian Marick just blogged on Configuration Management. He talks about how he uses CM differently in work and personal contexts: in a work context he uses it as an audit and exploration tool, but at home it is basically a big UNDO button.


I’m guessing that Brian primarily uses a single computer at home, or that he has different computers dedicated to different tasks. Andy and I don’t. I have four desktops and two laptops I use pretty regularly (although the Powerbook has become my workhorse machine recently). And I migrate my development between them in a pretty ad-hoc way. If I happen to be in front of my big monitor when I realize how to add something to RDoc, I’ll start typing in to my Linux box. If half way through I have to take Zachary to his karate class, I’ll check in what I have, check it back out on the Powerbook, and carry on while I wait for him. I can work on articles, coding, books, or whatever on just about any of my machines. If I’m in a hotel and I need something, I can just check it out. Even this blog is stored under CVS (and served from it as well: all I have to do to post an article is check it in).


There are many other good reasons for using source control, but freedom from a particular hard drive is a significant one for me.

Tuesday, June 3, 2003

Dynamically Scoped Variables

Whenever I write complex systems, I find I need a way to keep context information lying around. For example I may pick up a set of user preferences for colors at the top level of some code, but then I need them when I get fifteen levels deep, somewhere within the bowels of a component’s paint method. Or perhaps I get a database connection at the start of request handling, but then need to use it when I get deeply nested inside some application code.


These types of scenario seem to have no easy answer. Sometimes you solve it by passing a common parameter to all your methods. This parameter then contains references to all the context information needed by the application code. But this is a messy approach: it means that all your methods have to accept and pass on a parameter that they don’t necessarily need themselves.


A variant of the above is to pass the context object to every constructor, and then store a reference in an instance variable. This suffers from the same drawbacks; every object is carrying around payload that it might not itself need.


Sometimes you can get away with using singletons to store this kind of stuff, but this rapidly breaks down (or at least becomes unwieldy) in the face of multi-threading.


There is another answer, though: dynamically scoped variables.


Most languages offer lexically-scoped variables. When a program is compiled, variable names are looked up by first examining the enclosing scope, then the scope that lexically encloses that scope, and so on. Variables are bound according to their static location in the source code.


However, another kind of variable binding is remarkably useful for passing around context information. Dynamically scoped variables are resolved not at compile time but at run time. When a dynamically scoped variable is referenced, the runtime looks for an appropriate variable in the current stack frame. If none is found, it looks in the caller’s stack frame, and then in that stack frame’s caller, and so on. That way you can set the context in one method, then call multiple levels deep, and still reference it.


Many languages offer dynamically scoped variables: Lisp, TCL, Postscript, and Perl to name a few. In Perl, you could use local to achieve the effect:


  sub update_widget() {
    print "<$color>$name</$color>\n";
  }

  sub update_screen() {
    update_widget;
  }

  sub do_draw() {
    local $name  = "dave";
    local $color = "red";
    update_screen();
  }

Although easy to use, locals in Perl are hard to control. And Perl’s features don’t help me much anyway; I needed a Ruby solution. I came up with something that’ll let me do the following.


  def update_widget
    name  = find_in_context(:name)
    color = find_in_context(:color)
    puts "<#{color}>#{name}</#{color}>"
  end

  def update_screen
    update_widget
  end

  with_context(:name => 'dave', :color => 'red') do
    update_screen
  end

The with_context block establishes a set of dynamic variables (the parameters to the call). Within any method called at any level during the execution of the with_context block, a call to find_in_context looks up the appropriate dynamic variable’s value and returns it.


The implementation I came up with allows nested dynamic scopes, so the code:


  with_context(:name => 'dave', :color => 'red') do
    with_context(:name => 'fred', :color => 'green') do
      update_screen
    end
    update_screen
  end

outputs:


  <green>fred</green>
  <red>dave</red>

The actual implementation itself is a tad ugly (and I’d welcome alternatives), but right now I view it as something of a singing pig.


  def with_context(params)
    finder = catch(:context) { yield }
    finder.call(params) if finder
  end

  def find_in_context(name)
    callcc do |again|
      throw(:context, proc {|params|
        if params.has_key?(name)
          again.call(params[name])
        else
          raise "Can't find context value for #{name}"
        end
      })
    end
  end

Update…


And of course, it took less than eight hours for a more elegant implementation to surface (I love the Ruby community). Tanaka Akira posted:


  def with_context(params)
    Thread.current[:dynamic] ||= []
    Thread.current[:dynamic].push params
    begin
      yield
    ensure
      Thread.current[:dynamic].pop
    end
  end

  def find_in_context(name)
    Thread.current[:dynamic].reverse_each {|params|
      return params[name] if params.has_key? name
    }
    raise "Can't find context value for #{name}"
  end

Update #2…


And Avi Bryant massages the original into this masterpiece of minimalism…


  def with_context(params)
    k, name = catch(:context) {yield; return}
    k.call(params[name] || find_in_context(name))
  end

  def find_in_context(name)
    callcc{|k| throw(:context, [k, name])}
  end

Sunday, June 1, 2003

The Joy of Lego

Martin Fowler has put up a link to an IEEE Software Design Column article by Rebecca Parsons called Components and the World of Chaos (pdf).


In part, the paper argues that assembling large numbers of components could potentially lead to behavior that would be hard to predict ahead of time: the interaction of these simple components could lead to complex (or emergent) behavior. Components could interact in ways not foreseen by their original designers.


The paper suggests that this might be a bad thing: it would be hard to predict the exact behavior of these component-based applications in advance, and so they would be risky to deploy.


I can see that argument: even without worrying about the distribution of heterogeneous, multi-vendor, high-level components, I know that I’ve been bitten in the past by different parts of systems interacting in ways that I hadn’t expected.


But at the same time, a part of me wonders if there isn’t some potential magic to exploit here. Say we can find ways of specifying the stuff we definitely don’t want to happen, perhaps by specifying business rules as invariants or mini-contracts, stuff such as “you can’t sell something if it isn’t in stock,” and “you can’t refund more than you were originally paid,” that kind of thing. These rules define a kind of business baseline: something that the application must respect. We implement the rules at some kind of meta-level; some are associated with individual components, and others, specified in the component assembler/aggregator layer, apply to the component’s interactions. They give us our safety net.
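

As a sketch of the kind of thing I mean (the names are invented, and this is just one of many ways to express such a rule), the refund invariant might live right on the component it protects, so it holds no matter which workflow, or which emergent combination of components, ends up calling it:


  // Illustrative only: a business-rule invariant attached to the component itself.
  public class Payment {
      private final long amountPaidCents;
      private long refundedCents = 0;

      public Payment(long amountPaidCents) {
          this.amountPaidCents = amountPaidCents;
      }

      public void refund(long cents) {
          if (refundedCents + cents > amountPaidCents) {
              throw new IllegalStateException(
                  "invariant violated: refunds may not exceed the amount paid");
          }
          refundedCents += cents;
      }
  }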


But we don’t try to box the application in totally. Instead, we wait to see if other, potentially unexpected behaviors emerge. Our business rules act as some form of guarantee that the new behavior won’t hurt us, but they don’t prevent us from benefiting from any new and valuable behavior that might pop up.


Can we really produce working systems where we don’t know all the ways in which they will behave up-front? Just look at The Sims (or Lemmings, for those feeling nostalgic). Look at the way folks are using scripting languages to produce small component-like interfaces for existing applications, and then using those interfaces to combine the applications in unexpected ways. Clearly at some level we can. Right now we can’t do the same kind of thing for business applications: we don’t know enough about specification techniques to be able to plug all (or even most of) the holes up-front. But in the next few years, perhaps we will. And perhaps systems such as Naked Objects suggest how some of the lower-level building blocks might work.


All the truly interesting behavior is emergent (if for no other reason than that if we can predict it ahead of time, it really isn’t too interesting when it happens). And this emergent behavior has an amplifying effect on our productivity as developers: combine simple things using elementary rules to produce a whole that has complex and rich behavior. So I’d argue that having component-based architectures produce systems with emergent properties is not a risk: it’s a requirement. We’re just not there yet.

Thursday, May 8, 2003

Vicarious Seat Backs

After a hellacious trip across to Norway for rOOts (note to self: never fly Lufthansa transatlantic), the return came as a pleasant relief. Not only was the SAS flight half-empty, letting me claim an entire center row to myself, but their new A340 had something I hadn’t seen before: nose and belly cameras wired into the seat-back video displays.


A couple of touch-screen menu picks, and I had one seat-back looking forward, one looking down, and a third on the moving map. It says something about the state of mind you get into on long flights that I started playing a game, trying to tie moving-map features up with the downward-pointing camera. It turned out to be easy (which I guess is what you’d expect): just as the moving map said we were over the coast of Iceland, a rocky shoreline scrolled beneath us. Coming across Canada approaching the Great Lakes, most of the larger rivers on the map seemed to tie in with what I was seeing below. Looking out the front and seeing the runway appear through the murk during our final into O’Hare was a nice way to end the trip. +1 SAS.

Tuesday, May 6, 2003

First Kill the Architects

I’m over in Bergen for the rOOts conference. Martin Fowler gave an interesting 30-minute talk on the role of architecture in software development, and on how the forces that drive architecture also drive other aspects of the overall process. He started by mentioning Ralph Johnson’s discussion of architecture: we define architectures to document the things that we perceive as being hard to change. Being agile, Martin then went on to say that the role of an architect is to make himself redundant: to find ways of implementing systems which can roll with the punches, and where everything is amenable to change.


As an example, he talked about databases and schemas. Conventional thinking tells us that database schemas are hard to change: once you code to a schema, every change involves updating the database, the code, and also all the data affected by the change. As a result, people tend to treat schemas as scary things: we define them and then code around them. At Thoughtworks, though, they have developed techniques for incremental migration through schema changes: the database, data, and code all update in parallel. As a result, the schema no longer has to be defined up front: it is no longer an architectural element.
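

I don’t know the details of the Thoughtworks technique, but one common shape for this kind of incremental migration (a sketch with invented table and class names, not their implementation) is to keep an ordered list of small schema changes and have the database record how far it has been migrated, so schema, data, and code can move forward in lockstep:


  // Sketch only: numbered migration steps applied in order; a hypothetical
  // schema_info table remembers which step the database has reached.
  import java.sql.Connection;
  import java.sql.Statement;

  class Migrator {
      private static final String[] MIGRATIONS = {
          "CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(100))",
          "ALTER TABLE customers ADD COLUMN email VARCHAR(100)"
      };

      void migrate(Connection db, int currentVersion) throws Exception {
          Statement stmt = db.createStatement();
          for (int v = currentVersion; v < MIGRATIONS.length; v++) {
              stmt.execute(MIGRATIONS[v]);                                // apply step v
              stmt.execute("UPDATE schema_info SET version = " + (v + 1));
          }
          stmt.close();
      }
  }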


The driver for all this, of course, is flexibility: we need to find ways of writing applications that work in the face of a set of volatile requirements. Cut down the number of up-front constraints, and we increase our degrees of freedom. It also helps us start delivering earlier, allowing us to get feedback and refine our applications as we go.


The alternative to killing all the architects, of course, is to kill all the developers. Rather than spending time coding flexible applications, find ways of throwing together disposable solutions to business problems at greatly reduced cost. Don’t worry about flexibility: if the application no longer works when the environment changes, throw it away and write it again. If the cost of code is small, then the investment can be written off in almost no time.

Wednesday, April 16, 2003

the difference between a good movie and a bad movie is getting everyone involved in making the same movie.

Francis Ford Coppola (This one has no definitive source, so it may well be totally inaccurate, but the concept is sound even if it was never said. :)


How could this apply to your current project?

Tom Lehrer's "The Elements". A Flash animation by Mike Stanfill, Private Hand

Link: Tom Lehrer's "The Elements". A Flash animation by Mike Stanfill, Private Hand


Fans of Tom Lehrer’s Elements song might enjoy Mike Stanfill’s Flash adaptation.

Sunday, March 23, 2003

Artifacting

Software development is a discipline of artifacts; for a bunch of folks who like to do things, we seem surprisingly wedded to nouns, not verbs. Just look at the vocabulary of methodologies: requirements, designs, quality, communication, tests, deliverables—all good solid things. And yet increasingly I’m realizing that these things, these nouns, are not really all that useful. Let’s look at just two of them (for now), requirements and quality.


The value of spending three months doing a requirements analysis is not the 150 page document that is produced. It certainly doesn’t capture the full nuance of the system, and it’s just about certain that it will gradually become outdated as the project butts up against reality and the implementation adapts accordingly. No, the value of requirements is not the deliverable document, the artifact. Instead the value is the process that everyone goes through in order to produce the document: the understanding gained, the relationships forged, the negotiations, the compromises, and all the other little interactions which share information.


Quality is another terribly important word. We plan it, measure it, hire folks to manage it, and put big posters about it on our walls. But again, quality should not be a noun: you can’t measure it, or draw it, or describe it. Quality is a part of the process; it’s in the doing. Quality isn’t a set of rules, or a set of metrics. Quality is in the spirit of the smallest of our daily activities.


Once I started thinking about this as a pattern, it started to change the way I look at many of the other artifacts we produce (including the delivered programs themselves). Often the true value of a thing isn’t the thing itself, but instead is the activity that created it.


So, a challenge. Think of some of the common nouns we deal with on a daily basis (test, UML diagram, and architecture might be interesting starting places). Then try to recast them (somehow) as verbs. Where do you find the value? Should we be emphasising the doing of things more, and the artifacts less? How?

Thursday, March 6, 2003

Topless Systems, Naked Objects

When talking about the failings of top-down development, Bertrand Meyer says “Real systems have no top.”[1] And yet the GUI-based applications we produce run counter to this: our code typically does have a “top,” at least from the user’s perspective. The top in this case is the user interface, the collection of mini-scripted activities that we provide to allow our users to interact with our underlying application. Everything else in a typical interactive application is there simply to support this GUI, and this affects the way we both design and implement the code.


Interestingly, Naked Object applications (www.nakedobjects.org) do not have a top in this sense: the user is instead presented with a group of business classes and business objects. The user is free to interact with them in any way that makes sense.


If Meyer is correct (and I think he is), then Naked Object systems do indeed seem to be closer to the true spirit of OO development.



[1] Bertrand Meyer, Object-Oriented Software Construction, 2nd ed

Sunday, March 2, 2003

Bill Venners' Interview

Andy and I were in Seattle in January for Scott Meyers’ Code Quality get-together. Bill Venners took the opportunity to interview the two of us for his online magazine. The first of eight installments (!), a discussion of broken windows, is at www.artima.com/intv/fixit.html.



The third installment of Bill Venners’ interview with Andy and me is now online. My favorite line is Andy’s: "Having a quality officer is kind of like having a breathing officer." Bill really does a remarkable job of taking our ramblings and rendering them down into something that might almost be considered coherent. He’s a nice guy too: have a look at the stuff on his site, www.artima.com.



Bill Venners has posted the fourth article extracted from an interview he did with Andy and me in Portland. This one’s about our assertions that you should “Put abstractions in code, details in metadata.” Reading the transcript, we come off dissing XP a fair amount; YAGNI comes in for some criticism. We claim that it’s OK to use experience to predict which parts of a system will be volatile, and it’s OK to build in support to handle that volatility (particularly by using metadata to describe the details of the application). I’m expecting a lot of e-mail on this one… :)



The fifth installment of Bill Venners’ interview with Andy and me is all about building adaptable systems. The previous article generated an enormous thread (still running) of negative comment in the Extreme Programming mailing list. This one will probably generate death threats…



In the seventh part of Bill Venners’ discussion with Andy and me, we’re talking about gardening as a metaphor for software development.


Thinking in terms of analogies is a useful way of extracting hidden meaning. Brian Marick and Ken Schwaber are co-hosting an interesting workshop at Alistair Cockburn’s Salt Lake Agile Development Conference. I particularly like the first phrase in the description: The Analogy Fest is an attempt to manufacture serendipity.


The eighth installment of Bill Venners’ interview of Andy and me is now online. We’re talking about tracer bullets, prototypes, and iterations. The key to tracer bullets is the feedback they give: they let you know how well you’re aiming in a real-world environment. Short iterations and lots of feedback are the software development equivalent.





Friday, February 28, 2003

Io, Io, it's Off to Play I Go

Call me slow, but I hadn’t come across the Io language before today. I popped over to its site (http://iolanguage.org/) and played for an hour. It seems to have a lot going for it:


  • full OO (like Smalltalk)

  • very simple semantics (even assignment is a message)

  • nice orthogonal structure

  • very compact

  • fast enough

  • different enough to be interesting, similar enough to be easy to learn

You can even get Io T-shirts and mugs at www.cafeshops.com/IoLanguage (what else do you need in a language?).


Io’s objects are generated by cloning existing objects, rather than instantiating classes (so it has some similarity to Self). So I could create a Dog object using something like


  Dog = Object clone
  Dog sound = "woof"
  Dog bark = block( write(self sound, "\n") )

  Dog bark

The first line creates a new object based on Object, assigning it to a slot called Dog. The second creates a slot in Dog called sound and arranges for it to reference the string “woof”. The third line creates an anonymous block (which writes “woof”) and assigns that block to the bark slot in Dog. Finally we call it.


We can now create some objects based on a Dog: note that the mechanism is the same:


  rover = Dog clone
  fido = Dog clone

  fido bark   #=> woof
  fido sound = "bow wow"
  fido bark   #=> bow wow
  rover bark  #=> woof

Io has differential prototyping: the only slots created in sub objects are those specialized in those objects.


Io has lots of interesting features. It keeps its code lying around in a message tree structure, allowing you to inspect and alter it at runtime (yup, alter. You can write self-modifying Io programs, so I guess adding aspects would be fairly straightforward).


Because it’s becoming a tradition, here’s 99 bottles of beer in Io (taken from the distribution).


  bottle = block(i,
    if(i==0, return "no more bottles of beer")
    if(i==1, return "1 bottle of beer")
    return i asString("%i") .. " bottles of beer"
  )

  for(i, 99, 1,
    write(bottle(i), " on the wall, ", bottle(i), ",\n")
    write("take one down, pass it around,\n")
    write(bottle(i - 1), " on the wall.\n\n")
  )

I’m not sure if Io is a keeper for me. It is certainly interesting, but it has a slightly pedantic feel to it (especially compared with Ruby). But it’s fun to play with.

Saturday, February 22, 2003

That which is overdesigned, too highly specific, anticipates outcome; the anticipation guarantees, if not failure, the absence of grace.

William Gibson. All Tomorrow’s Parties

Saturday, February 8, 2003

Every day in every way…

Imagine a simple (and somewhat boring) card game. In each round, each player is dealt one card. Each player may hold at most three cards (so after the third round they must start discarding). After an arbitrary number of rounds, the player with the highest card total wins.


The strategy is pretty simple: when forced to discard, always discard your lowest card. No rocket science here: when you can’t control the cards you’re dealt, you win by eliminating the weakest of your holdings.
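

In code, the whole strategy fits in a few lines (a toy sketch I’ve added, nothing more):


  // Hold at most three cards; when forced to discard, drop the lowest.
  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;

  class Hand {
      private final List<Integer> cards = new ArrayList<Integer>();

      void deal(int card) {
          cards.add(card);
          if (cards.size() > 3) {
              cards.remove(Collections.min(cards));   // discard the weakest holding
          }
      }

      int total() {
          int sum = 0;
          for (int c : cards) sum += c;
          return sum;
      }
  }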


This seems to be a reasonable strategy in any situation where you need to optimize some collection of “things,” but where the resources you receive have unknown characteristics.


Our industry is suffering from an embarrassment of bad programmers. Much of the blame can be leveled at the hiring frenzy that occurred during the dot com boom, where anyone who could play Quake (or who had once watched someone play Quake) could get a job coding (and playing foosball at work). Many of these people are still in the industry.


So now we have a problem. Interviewing and recruiting good people is very difficult; for most organizations we’ve seen, it’s a hit-or-miss affair. This means that the bad get let in along with the good (just like being dealt random cards). However, once the bad folks have been hired, it turns out to be hard to fire them. Unlike the card game, they stay in your hand, dragging down your overall score.


There are three things to be done here. First, companies could get better at recruiting. Unfortunately, one of the best indicators available to recruiters, past performance, is hard to come by. In the US at least, employers now tend to give anodyne references to ex-employees rather than tell the truth and risk being sued.


Second, we could find a way to fire the ineffective developers. If that happened enough times to an individual, they might get the hint and leave the industry (which would be good for all of us). Unfortunately, that’s also unlikely to happen. Even though most developers in the US are employees at-will (meaning they are in theory employed at the whim of their employer), in reality the various anti-discrimination laws make firing a risky business for most companies.


Our third strategy isn’t available to the card players: we can improve the individual cards in our hand. This means working hard to train and retrain folks, not just in the specifics of technologies and languages, but also in the soft disciplines: communications, business practices, and so on. Some developers aren’t trainable, but I’m thinking that the vast majority will benefit.


Interestingly, there are companies who recognize the value of attrition. GE, for example, has every level of manager rank their employees. After justifying these ranks to the manager’s manager, the company then puts in place an action plan for the bottom 10% (a plan which can include termination, a performance improvement plan with a time limit, or a move to a more suitable position). I don’t think I like the rigidity of GE’s policy (at least as externally stated), but I think the intention is good.


Over the next few years, we all have to do something to improve the quality of the work delivered by our industry. If we don’t, we’ll find legislators doing it for us (possibly with professional licensing schemes and attempts to hold developers liable for faults in software). And improving the quality of work means improving the quality of the development community. Maybe it would be in our long-term interests to find ways to make recruiting more reliable (perhaps by setting up a way for employers to comment truthfully on a developer’s past performance). We need to make it easier to fire the truly bad developers (contract to hire and probationary periods are a good interim measure). And we need to find ways to promote on-going professional training. If we don’t help ourselves, someone in government will do it to us.

Thursday, February 6, 2003

Passing Information to Our Children's Children's ... Children

There’s a great article in the January 2003 CACM which describes some very long-term data storage technology (Organic Data Memory using the DNA Approach).


In a nutshell, you encode your information using sequences of DNA base triplets (AAA, AAC, AAG, and so on), then splice these on to the end of a DNA strand, making sure that the stuff you write is past that strand’s stop codon. You then perform the necessary magic to get this DNA into the host’s genome. That way the new material will not take part in protein synthesis, but will be passed down as genetic material from generation to generation.
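

Just to make the encoding idea concrete, here’s a toy version of my own (not the scheme the researchers actually used): four bases give 4^3 = 64 possible triplets, enough to assign each symbol of a small alphabet its own codon.


  // Toy illustration only; not the encoding from the CACM paper.
  public class DnaCodec {
      private static final String BASES    = "ACGT";
      private static final String ALPHABET =
          " 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";  // 64 symbols

      // Each symbol becomes one triplet: its index (0..63) written in base 4.
      public static String encode(String text) {
          StringBuilder dna = new StringBuilder();
          for (char c : text.toCharArray()) {
              int idx = ALPHABET.indexOf(c);
              if (idx < 0) throw new IllegalArgumentException("no codon for: " + c);
              dna.append(BASES.charAt((idx >> 4) & 0x3));
              dna.append(BASES.charAt((idx >> 2) & 0x3));
              dna.append(BASES.charAt(idx & 0x3));
          }
          return dna.toString();
      }

      public static String decode(String dna) {
          StringBuilder text = new StringBuilder();
          for (int i = 0; i < dna.length(); i += 3) {
              int idx = 0;
              for (int j = 0; j < 3; j++) {
                  idx = idx * 4 + BASES.indexOf(dna.charAt(i + j));
              }
              text.append(ALPHABET.charAt(idx));
          }
          return text.toString();
      }

      public static void main(String[] args) {
          String strand = encode("It's a Small World");
          System.out.println(strand);           // the encoded base sequence
          System.out.println(decode(strand));   // round-trips back to the text
      }
  }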


This isn’t science fiction: the researchers encoded the words of “It’s a Small World”, added them to a bacterium’s genome, then extracted the information again. Because bacteria can withstand all kinds of abuse (desiccation, extremes of temperature, and so on), they believe that this gives us a good long-term storage scheme. (There’s the problem of mutations to deal with, but decent error-correcting codes could probably deal with this.)


Now, of course, we’ll see the RIAA step into the act and insist that they need to add unique digital signatures into every human being.

Sunday, February 2, 2003

Learning from Mistakes

I read a lot of aviation magazines. In every one, you’ll find at least one column dedicated to reporting on accidents. These reports are fairly dry: a restatement of the facts issued by the various government agencies that investigate transportation problems. Depressingly, a large number end with the summary “pilot error.”


Are our pilots a bunch of cowboys, recklessly flying planes into the ground? Quite the reverse: the vast majority are conservative, careful aviators. So why does “pilot error” figure so prominently?


The authorities quite rightly give the pilot of an aircraft ultimate authority over that aircraft’s operation. It’s up to the pilot to check the weather, the condition of the plane, the distribution of weight, the fuel required, and many other factors, all before setting foot inside the plane. Once flying, the pilot’s job continues, monitoring weather, fuel remaining, aircraft performance, navigation, collision avoidance: the list is long and complex.


It isn’t easy keeping all these factors balanced, particularly not when the weather is closing in, fuel is starting to look marginal, turbulence is jarring your teeth loose and you’re at the end of a long, exhausting day. And yet we all (quite rightly) expect our pilots to maintain a level of near perfection. So pilots, being human, make mistakes, and sometimes these mistakes have tragic consequences.


That’s where the accident reports come in. Pilots read them, and read them avidly. They aren’t reading to gloat. They’re reading to learn. The pilots they’re reading about are for the most part every bit as careful as they are, and yet still something went wrong. So pilots read the reports to find out what happened, and maybe to try to tune their personal procedures to stop it happening to them. By reading these reports, pilots improve both their own performance and aviation’s overall safety.


Computer programming is perhaps half the age of powered flight. We face different issues; our mistakes can cause inconvenience, but rarely loss of life. But perhaps still there’s something that programmers can learn from the attitudes of pilots. Is it conceivable that we might one day have a way for developers to report problems during development to some anonymous forum so that others might learn? Could we start to use our own history as a tool to help us all improve?