At XP2011 in Madrid, Matt Wynne and I ran a workshop called Extreme Startup. This is a workshop that we devised to simulate the environment of a startup, where there is high uncertainty as to what the market wants, and teams must iterate rapidly to develop a product. The accepted agile engineering practices are supposed to support working in an iterative fashion, with changes of direction and strategy, without compromising quality. We wanted to see what happened when we increased the frequency of iteration, under the pressure of competition. Would the practices help? Or would some (or all) of them fall by the wayside?
We asked the participants to form teams to compete against each other. Most formed pairs. The task we set them was to build a small webserver that could respond to requests that we would send them. We didn’t tell them in advance what the requests would be; the only way for them to discover this was to build something, launch it into the marketplace, and see what they received. Each response was scored, and a running total was kept for each team. We displayed a leaderboard on screen throughout the session.
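To make the setup concrete, here is a minimal sketch of the kind of competitor a pair might start with, assuming the game engine sends plain HTTP GET requests with the question in a query parameter (the parameter name `q` and the example question are hypothetical; in the actual session, teams had to discover the format from their logs):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def answer(path):
    """Return a reply for a given request path.

    Start with a dumb default, then add cases as questions
    are discovered in the logs.
    """
    query = parse_qs(urlparse(path).query)
    question = query.get("q", [""])[0]
    if question == "what is 2 plus 2":  # hypothetical question
        return "4"
    return "42"  # default guess until we learn what the market wants

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        print("received:", self.path)  # poor man's log-tailing
        body = answer(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    HTTPServer(("", port), Handler).serve_forever()

# serve()  # uncomment to start answering requests on port 8000
```

The pressure of the game tends to push everything into a single growing function like `answer` above, which is exactly the tension between speed and quality the workshop is designed to expose.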
We played the main part of the game for about an hour, and the atmosphere was tense. Pairs huddled over laptops, tailing logs, programming solutions, checking their scores. The competition was tight, and the top of the leaderboard changed often throughout the session. We spent some time peering over their shoulders to try to determine their various strategies.
After an hour or so, we called the end of the competition. As a group, we reflected on the exercise, and the decisions that people had made. There were varying strategies, but a common thread was that under the pressure of the competition and varying requests, people had coded quickly, and perhaps messily, and had not written many tests. They weren’t particularly proud of what they had written, but it had worked. Presumably it was “good enough” for the task in hand.
We want to run this session again, and next time we want to take more notice of how the scores vary throughout the session. Perhaps we could plot a graph and tie specific events or phases back to particular features of it. We also wonder what would happen if we ran the exercise over a longer timescale, perhaps a whole day. It wouldn’t be possible for people to maintain the same intensity and concentration over a whole day, so we wonder how their strategy would change. I also wonder what would happen if the teams were larger.
This is definitely a fun exercise. We hope to run it again in future, and some of the participants from XP2011 have gone on to run the session with their own user groups.