In Windows Vista, it was possible to run a program as administrator by right-clicking on a file and choosing “Run as administrator”. However, it wasn’t possible to run as a different user. When developing software, this can be really useful for testing—seeing if your program works with a specially-configured limited user, for example.
At first glance, Windows 7 appears to have the same limitation; here’s the menu obtained by right-clicking:
However, in Windows 7, holding down the SHIFT key and then right-clicking adds a new option: “Run as different user”:
This then allows the execution of the program or file by any user:
In the past, on x86 Vista, I used the ShellRunAs program from Windows Sysinternals to do this. When I tried it on my x64 Windows 7 machine, I found that it did not work correctly: the program launched, but it couldn’t open its own config file, and so it didn’t run correctly. I’m guessing this might be an issue with ShellRunAs and x64. Since this functionality is now built into Windows 7 and works with x64, it isn’t really a problem for me.
For a bit more about this, see http://forum.sysinternals.com/forum_posts.asp?TID=19939.
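As an aside, if you ever need to launch a program under different credentials from code (in an automated test harness, for example), the .NET Framework can do this through ProcessStartInfo. Here's a minimal sketch; the account name, password, and target program are placeholders I made up, not anything specific to the tools above:

using System;
using System.Diagnostics;
using System.Security;

class RunAsDifferentUser
{
    static void Main()
    {
        // Placeholder credentials for a specially-configured limited test account.
        var password = new SecureString();
        foreach (char c in "P@ssw0rd!")
            password.AppendChar(c);

        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Windows\System32\notepad.exe",
            UserName = "LimitedTestUser",
            Password = password,
            Domain = Environment.MachineName,  // a local account on this machine
            UseShellExecute = false            // required when supplying credentials
        };

        Process.Start(startInfo);
    }
}

The built-in runas.exe command does essentially the same thing from a command prompt.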
I think Microsoft’s decision to make SQL Azure more like SQL Server (and less like Windows Azure Storage) is great. Relational databases are complex, and having to learn a whole new technology for cloud usage seemed like a very significant barrier to entry. In addition, this makes it much easier to design an application that could be run on-premises or in the cloud—you don’t need a completely different data storage technology. (By the way, I like Azure Storage—it’s just that the previous version of SQL Data Services offered very similar functionality to Azure Storage, and we didn’t really need two of them!)
I’m less comfortable with some of the security aspects of SQL Azure. Specifically, if I understand correctly, developers and DBAs will access their cloud database instance using SQL Server authentication—in short, a username and password.
The issue here is that in a traditional deployment there are usually many layers of security between attackers and the database—for example, the database is usually behind a corporate or hosting firewall. This firewall prevents users from outside the organization from accessing the database instance. Often, getting through this firewall requires something like a digital certificate or a smart card, which makes it fairly formidable to attackers.
However, with SQL Azure, (as I understand it) there is no firewall—the only thing between you (and millions of potential attackers) is a username and password. (Presumably there’s also the equivalent of a database or server name.) The point is that the password becomes very important in this scheme—you should be sure to pick a really strong password. (There’s also an interesting Wikipedia article about strong passwords.)
At least until I understand this better, I’d be a bit uncomfortable using SQL Azure for really sensitive data—modern computers and networks are fast enough that brute-forcing even a really strong password is just a matter of time. I’m hoping that I’m being too paranoid here—perhaps there is a whole layer of security in SQL Azure that I’m just not aware of yet. But picking a strong password still never hurts!
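On the subject of strong passwords, here's a small sketch of one way to generate one using the .NET cryptographic random number generator. The character set and length are arbitrary choices of mine, not anything prescribed by SQL Azure:

using System;
using System.Security.Cryptography;
using System.Text;

class StrongPasswordGenerator
{
    static string Generate(int length)
    {
        // Arbitrary mix of upper case, lower case, digits, and symbols.
        const string chars =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+";

        var bytes = new byte[length];
        new RNGCryptoServiceProvider().GetBytes(bytes);

        var result = new StringBuilder(length);
        foreach (byte b in bytes)
            result.Append(chars[b % chars.Length]);  // slight modulo bias; fine for a sketch

        return result.ToString();
    }

    static void Main()
    {
        Console.WriteLine(Generate(20));
    }
}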
Recently Microsoft announced pricing of Windows Azure. There are different pricing models, but the consumption-based model is the one that’s currently well-defined.
I was curious what all the hourly rates would add up to for a few representative types of sites using Azure. Therefore, I made some assumptions about usage for each type of site to see what the monthly bill would come out to.
Just to be clear, these are just estimates—for any given application, the monthly charges could be quite different! Also, I don’t have much experience with how much load a single web role in Azure can support, so the estimates about how many would be necessary for a given level of traffic are just guesses.
Small web application
Let’s assume we want to host a small web application on Azure. It only has a few users, but we want it to be fairly available, so we need at least two web roles: when one is updating or its hardware fails, the other is still available.
- Web Edition of SQL Azure is sufficient (up to 1 GB database)
- 1000 hits / day
- 2 web roles are sufficient to support the traffic
- 10 KB web site requests
- 100 KB web site responses
- Average of one .NET Services call per hit (for authentication)
- 31 days per month
- Billing does not round up for bandwidth—I have no idea if they do or not, and it’s just pennies either way
- .NET Services messages are billed in 100K blocks, rounded up (their pricing page says so)
| Resource | Price | Amount used | Cost per month |
|---|---|---|---|
| Web roles | $0.12 / hour | 2 roles | $178.56 |
| SQL Azure (Web Edition) | $9.99 / month | 1 instance | $9.99 |
| Bandwidth in | $0.10 / GB | 0.31 GB | $0.03 |
| Bandwidth out | $0.15 / GB | 3.1 GB | $0.47 |
| .NET Services | $0.15 / 100K | 31,000 messages | $0.15 |
Total monthly charge: $189.20
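For what it's worth, here's a small sketch showing how those line items fall out of the assumptions above; the prices come from the table, and the rest is just arithmetic:

using System;

class SmallWebAppEstimate
{
    static void Main()
    {
        const int hoursPerMonth = 24 * 31;
        const int hitsPerMonth = 1000 * 31;

        double webRoles = 2 * hoursPerMonth * 0.12;                          // 2 roles at $0.12/hour = $178.56
        double sqlAzure = 9.99;                                              // Web Edition, flat monthly charge
        double bandwidthIn  = (hitsPerMonth * 10.0 / 1000000) * 0.10;        // 0.31 GB in at $0.10/GB
        double bandwidthOut = (hitsPerMonth * 100.0 / 1000000) * 0.15;       // 3.1 GB out at $0.15/GB
        double netServices  = Math.Ceiling(hitsPerMonth / 100000.0) * 0.15;  // 31,000 messages, rounded up to one 100K block

        double total = webRoles + sqlAzure + bandwidthIn + bandwidthOut + netServices;
        Console.WriteLine("Estimated monthly charge: {0:0.00}", total);      // about 189.20
    }
}

The medium web application and video site estimates below follow exactly the same pattern with different assumptions.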
Here’s a chart of how the various resources compare in cost for this example:
Medium web application
- Business Edition of SQL Azure is needed (up to 10 GB database)
- 10,000 hits/day
- 4 web roles are sufficient to support the traffic
- 10 KB web site requests
- 100 KB web site responses
- Average of three .NET Services calls per hit
| Resource | Price | Amount used | Cost per month |
|---|---|---|---|
| Web roles | $0.12 / hour | 4 roles | $357.12 |
| SQL Azure (Business Edition) | $99.99 / month | 1 instance | $99.99 |
| Bandwidth in | $0.10 / GB | 3.1 GB | $0.31 |
| Bandwidth out | $0.15 / GB | 31 GB | $4.65 |
| .NET Services | $0.15 / 100K | 930,000 messages | $1.50 |
Total monthly charge: $463.57
Small video delivery site
This is a small site that allows users to watch videos online. I was curious what the costs would be for a site that uses a lot more bandwidth and storage than the examples above.
- Uses Azure Storage to store videos
- 1000 hits/day
- 2 web roles sufficient to support the traffic
- 10 KB web site requests
- 10 MB web site responses (a reasonably sized video)
- 100 GB of storage space for videos in Azure storage
- One storage transaction request per web request
| Resource | Price | Amount used | Cost per month |
|---|---|---|---|
| Web roles | $0.12 / hour | 2 roles | $178.56 |
| Storage | $0.15 / GB stored / month | 100 GB | $15.00 |
| Storage transactions | $0.01 / 10K | 31,000 requests | $0.04 |
| Bandwidth in | $0.10 / GB | 0.31 GB | $0.03 |
| Bandwidth out | $0.15 / GB | 310 GB | $46.50 |
Total monthly charge: $240.13
For each of these examples, the web roles dominated the cost—partly because I had at least two of them (see next paragraph). Bandwidth was relatively inexpensive, even for the video site. In addition, the .NET Services and storage costs were relatively low in all of these examples. However, for some sites and services, the bandwidth or other resources might become the dominant cost—for example, a service that mainly served many large videos or files directly from Azure storage. Interestingly, even the business edition of SQL Azure is cheaper than two web roles for a month.
One interesting tradeoff is that you need to have at least two web roles to ensure that your application is available, given failures and updates. For a really small application it seems like overkill (and really adds to the expense if the application doesn’t consume a lot of other resources). However, I’m hard-pressed to imagine a situation, even with a relatively small application, where it would be fine for it to be unavailable for minutes at a time on a relatively frequent basis. Note that you could save quite a bit in hosting costs if you were able to go with a single web role.
Another question is how this compares with traditional, on-premises hosting. It seems like one factor is the timeframe of the deployment. If it’s a very short-lived site (like an event site), then this is incredibly cost effective—you don’t need to buy a bunch of servers and deploy and configure them, but the site isn’t up for long enough to incur huge charges in Azure. For a longer-lived site, it’s not quite so clear cut, but it does still seem compelling, especially considering the level of redundancy and stability that you get.
Yet another question is how it compares with traditional dedicated hosting. Generally, Azure tends to be more expensive than dedicated hosting, but it is also much more flexible. Hosting providers often require contracts with specific levels of resources—if your needs change, it can be difficult to change the resourcing level, especially if expected demand doesn't materialize.
I recently attempted to add keyboard support for deletion to a DataGrid control in a Silverlight application.
To help decide which event I should use—KeyUp or KeyDown—I decided to see how Microsoft Excel handled the Delete key. A quick test revealed that Excel deleted on KeyDown, and I figured if it was good enough for Excel, it was good enough for my app.
Therefore, I wired the delete operation up to the KeyDown event, and (thoughtfully) added a confirmation MessageBox asking the user whether they actually wanted to delete the item. However, as soon as I started testing it, I noticed that it worked fine in Firefox but would sometimes cause IE to crash and close.
After doing some debugging, I found that a serious-sounding exception (System.ExecutionEngineException) was getting thrown at the MessageBox.Show call (here it is recreated in a new project):
My first thought was that it was a bug with IE—mainly because of the violence of the crash. However, at lunch, I mentioned this to a colleague, and he mentioned that a crucial difference between KeyUp and KeyDown is that KeyDown repeatedly fires as long as the key is held down, whereas KeyUp does not. He suggested that the exception (and subsequent browser crash) might be caused by multiple events being fired.
A bit more research revealed that this was the issue. The KeyDown event was firing multiple times while the key was held down, and each firing tried to display another MessageBox. Since the MessageBox is modal, only one can be displayed at a time, and trying to display another caused the error.
Changing to the KeyUp event fixed the entire issue!
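For reference, here's roughly what the working version looks like. This is just a sketch; the grid name, the items collection, and the item type are placeholders for whatever your application actually uses:

// Assumes a DataGrid named itemsGrid whose ItemsSource is the ObservableCollection<MyItem> "items".
private void itemsGrid_KeyUp(object sender, KeyEventArgs e)
{
    if (e.Key != Key.Delete || itemsGrid.SelectedItem == null)
        return;

    // KeyUp fires only once per key press, so only one MessageBox is ever shown.
    MessageBoxResult result = MessageBox.Show(
        "Delete the selected item?",
        "Confirm delete",
        MessageBoxButton.OKCancel);

    if (result == MessageBoxResult.OK)
    {
        items.Remove((MyItem)itemsGrid.SelectedItem);
    }

    e.Handled = true;
}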
A common user interaction is the humble MessageBox. Doing a quick web search of “silverlight messagebox” tends to produce a number of results lamenting how MessageBox is not available in Silverlight, and discussing various techniques to simulate a MessageBox.
While it’s true that there wasn’t a MessageBox until the Release Candidate, there IS a MessageBox class in the final (RTW, Release To Web) version of Silverlight 2.0!
It’s System.Windows.MessageBox. It’s a bit limited:
- It doesn’t display an icon on Windows (although according to the docs, the Macintosh version does display an icon!)
- It offers a choice of a single button called OK or two buttons: OK and Cancel.
Here's a typical call showing both buttons:

MessageBox.Show("This is the messageBoxText",
                "This is the caption",
                MessageBoxButton.OKCancel);
This renders like this on Windows Vista:
If you only want the OK button, just pass MessageBoxButton.OK as the last parameter:
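MessageBox.Show("This is the messageBoxText",
                "This is the caption",
                MessageBoxButton.OK);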
Finally, there’s a simpler overload that only takes the messageBoxText:
MessageBox.Show("This is the messageBoxText");
This generates a MessageBox with only the OK button and no text in the title bar:
In addition, the Show method returns a MessageBoxResult, which has the following values: OK, Yes, No, Cancel, None. According to the docs, the Yes, No, and None values are currently unused. (This makes sense, as there are only OK and Cancel buttons.)
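For example, here's a small sketch of checking the result (the strings are just placeholders):

MessageBoxResult result = MessageBox.Show(
    "Are you sure you want to continue?",
    "Please confirm",
    MessageBoxButton.OKCancel);

if (result == MessageBoxResult.OK)
{
    // The user clicked OK; go ahead with the operation.
}
else
{
    // The user clicked Cancel (or closed the dialog).
}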
Here’s a link to the documentation: http://msdn.microsoft.com/en-us/library/system.windows.messagebox(VS.95).aspx
For the most part I really like Microsoft Windows. But when something doesn't work the way I expect it to, I often reboot the computer. I'd say it fixes the problem something like 10% to 20% of the time, which is relatively infrequently, but it's so easy to do that it's often the first thing I try.
Recently, we had an issue in a production system where the date format being passed to the database was suddenly different. We had not changed any code, and no updates had been applied to the server for several weeks. Instead of passing midnight as "0:00" or "12:00 AM", the application was passing it as "12:00", which the database interpreted as noon rather than midnight. This caused a variety of failures.
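To see how the same instant can come out as any of those strings, here's a small illustration of the format specifiers involved (this isn't our actual code, just a sketch):

using System;

class MidnightFormats
{
    static void Main()
    {
        DateTime midnight = new DateTime(2009, 8, 1, 0, 0, 0);

        Console.WriteLine(midnight.ToString("h:mm tt"));  // "12:00 AM"
        Console.WriteLine(midnight.ToString("H:mm"));     // "0:00"
        Console.WriteLine(midnight.ToString("hh:mm"));    // "12:00" -- ambiguous without AM/PM
    }
}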
We looked into it, and determined that nothing had changed. We could not duplicate the issue with the same code on any other machines--they all worked correctly.
This was all a bit stressful, as the application was significantly reduced in functionality due to the issue, and the clock was ticking.
After a bit, we decided that we might as well reboot the server. This isn't the first thing I try with a production server, because rebooting takes the server down, and the 10% to 20% payoff isn't high enough to justify that as a first step.
Anyway, we rebooted the server, and it resolved the issue. It actually worked!
A subsequent set of diagnostics on the server revealed no hardware issues with the server. Perhaps an infamous "cosmic ray" flipped a bit in RAM somewhere...
Recently, the team I work with started adding a new Silverlight module to our large web application.
There are lots of great reasons to use Silverlight:
- Simpler and faster development compared to ASP.NET AJAX
- More responsive user interface, and the possibility of a richer user experience
- Less exposure to differences between browsers
However, in the relatively data-centric enterprise web application we're currently working on, we're also starting to miss some of the browser features that we take for granted in an HTML web application:
- Copy and paste--users cannot select static text in a Silverlight grid or label and paste it into another application
- Search within a page--users cannot search for a customer name or account number in a large table of data displayed on a page
- Font resizing--users cannot resize the font in a Silverlight control to make it easier to read
All of these can be implemented in Silverlight with custom code, but it ends up being a fair amount of effort, especially the copy and paste across multiple controls, and the searching within a page across all controls.
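For example, one common workaround for the copy-and-paste limitation is to show static text in a read-only TextBox styled to look like a label, since the TextBox keeps its built-in selection and Ctrl+C support. A minimal sketch (the text, styling, and panel name are placeholders):

// A read-only TextBox lets the user select and copy its text, unlike a TextBlock.
var accountNumber = new TextBox
{
    Text = "Account number: 1234567890",
    IsReadOnly = true,
    BorderThickness = new Thickness(0),
    Background = new SolidColorBrush(Colors.Transparent)
};
LayoutRoot.Children.Add(accountNumber);  // LayoutRoot: the root panel in the page's XAML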
Just to be clear, what I'm talking about here are the features that are built into web browsers, and therefore are available to all HTML web applications. For example, here's a screenshot of searching for the word "android" in the news.com homepage in Microsoft Internet Explorer 7:
In Silverlight, the browser doesn't have direct access to the content in the Silverlight control, so it cannot search it. Being able to search within the page can be very useful when you have a list of customers as part of a workflow page (everyone who is late paying their bill, for example), and you want to find a specific customer by name.
And here's an example of selecting text across multiple HTML tags from news.com:
Here's what I get when I copy and paste this selection:
This week in Apple App Store angst
1 hour, 25 minutes ago
Developers are still wondering what Apple considers improper iPhone applications, and now might not even be able to compare rejection notes in hopes of figuring that out.
(Posted in Apple by Tom Krazit)
Firefox update fixes a dozen flaws
54 minutes ago
Update spans Firefox 2 and Firefox 3 and will be pushed out to current users to take affect the next time the browser relaunches.
(Posted in Security by Robert Vamosi)
Not a perfect reproduction, but good enough for many situations, and great considering that the news.com developers didn't need to do any development to get it working! This type of copy and paste can be crucial when you want to copy a customer's account number into an email or other document without mistyping it, or when you want to copy a larger amount of table data into Excel for some ad-hoc analysis.
The ability to change font sizes is great for accommodating different users--both users who have vision issues (and want a larger font), and for power users with excellent vision who want to fit more data onto their monitors (and want a smaller font).
For some web applications these features aren't important--for example, at home I use my bank's web-based bill pay system, and I've never used any of these features while doing that. However, in an enterprise system with lots of customers, accounts, and information, these features often become very useful.
Does this mean data-centric enterprise applications should not be implemented in Silverlight? Certainly not! (See some of the advantages listed above!) I just want to point out that there are some subtle things that users will lose, and it's important to be aware of these when choosing the technology, and to be prepared to add them in Silverlight if and where necessary.
In an earlier post, I discussed using and configuring WinMerge as a compare tool in Team System. Here I discuss using SourceGear's DiffMerge as a custom merge tool.
It all started recently when I had a tricky merge to do in Team System, and I realized that I wanted a bit of their changes and a bit of mine—all from the same block of changes. The problem is that the default merge tool in Team System renders the text change blocks as huge buttons—given this, it's not possible to select a chunk of text from one of them. (Trying to select the text simply clicks the button that is displaying the text.) In addition, the default tool does not display differences within a line—it only shows which lines have differences. This can be very annoying when a very long line is highlighted as different: you know something on the line changed, but the tool won't show you the specific characters that differ.
Given this, I backed myself out of the tricky merge and installed the latest version of SourceGear's DiffMerge. From some preliminary research, it seems to be the best free, three-way merge tool.
Things I really like about DiffMerge include:
- You can select text from any of the panes and patch it together in the merge result window as necessary
- It displays differences within a line, which can be a huge time-saver!
- It does true three-way merges (it uses the common ancestor of a file to determine what it can merge automatically)
- It is completely free for commercial or personal usage
The only thing that I don't really like about it is that it doesn't have the super-cool monitor-wide diff pane that WinMerge has—given this, I still use WinMerge as my compare tool, and I use DiffMerge as my merge tool. However, I can certainly understand this, as merging is quite different than comparing, and it wouldn't really make sense to have the wide pane at the bottom for merging.
Here's a screenshot of DiffMerge with some silly sample code:
This is the merge view. The center file is always the ancestor—the file that existed before any of the current edits started. Note the highlighting of changes within lines. Also notice the tabs at the bottom—they switch the middle pane between the original ancestor and the merge result, which can be quite useful.
You can simply select text from any window and paste it into the Merge Result window, as well as directly editing the text in the Merge Result window.
One trick with this tool is that the auto-merge is called "merge to center"; it's the toolbar button on the far right that shows two green arrows pointing into the middle. This is the three-way merge—it uses the information about the common ancestor to figure out which changes it should keep and which it should not. (Without a common ancestor, the tool can't tell what to do; with one, it can keep everybody's changes as long as they don't collide.) If the tool detects conflicting changes to the same text block, the auto-merge will not do anything with the conflicts. In the sample case above, the namespace is a conflict because two different people changed it from the base (ancestor) file, so the tool displays a dialog like this:
Here's what the merge result looks like after auto-merging (note that the "merge to center" button is now grayed out):
And here's the reference view showing the original ancestor file (bottom tab selection):
Tip: The documentation suggests that, if you are going to use the auto-merge, you should do it before you do any manual merging, so that the auto-merge doesn't reverse or modify what you've already done. In addition, you can only use the auto-merge once—after that, the button is disabled.
Configuration of DiffMerge for Team System Merges
Once again, the web page with all the installation details for custom compare and merge tools is James Manning's blog post. Here is a walk-through for configuring DiffMerge as the merge tool.
First, in Visual Studio, click Tools, then Options. The options dialog appears—click the Source Control node, and then the Visual Studio Team Foundation Server subnode:
Click the Configure User Tools... button and the following dialog appears:
This list will be empty if you have the default tools configured. Here I have it already configured for WinMerge for the compare operation, but nothing for merge. Click Add... to add a new non-default tool. Here's the dialog when it first opens:
Enter the following data:
- .* for the extension (this will use the merge tool for all extensions)
- Merge for the operation
- Browse to the DiffMerge.exe file in C:\Program Files\SourceGear\DiffMerge for the Command
- Enter the following DiffMerge command line arguments from James Manning's incredibly useful post: /title1=%6 /title2=%8 /title3=%7 /result=%4 %1 %3 %2
It should look something like this when you are done:
Next, click OK. You should now see your merge tool in the tool list:
Click OK, and then OK again to save the settings, and you should be all set!
Thanks to SourceGear for making this tool available to everyone for free! I think it's a winner!
PS: If you try using a custom compare/merge tool and don't like it, it's super easy to undo this process. To go back to the default tool, simply return to the Configure User Tools dialog, select the desired tool under file extensions, and click Remove. (If nothing is configured here, then it uses the default tools.)
Scrum has a direct and timely feedback loop built in for optimizing the software development process—the retrospective. Here are some of my thoughts about feedback and optimization of process in software development.
Development in large organizations
Many large organizations tend to have significant amounts of process around their development efforts—so much that sometimes the development efforts slow to a crawl. By process I mean various rules and procedures where forms need to be filled out, steps need to be taken, checklists need to be followed, approvals need to be obtained, emails need to be sent, people need to be informed, tests need to be performed, etc. Speaking with the developers in such situations often reveals a similar story—they spend much of their time following complex procedures that don't add directly to development productivity or quality. It's not unusual to hear stories where it takes ten hours to complete one hour's development work due to nine hours of process overhead or delays. For example, one client told us a true story in which removing a single misplaced hyphen from a web page cost the organization nearly $100,000. The cost included time to schedule resources, develop project plans, consider and analyze the impact, do a risk assessment, do the actual development (delete the hyphen in a text editor), and then test, retest, and deploy the solution to a variety of staging systems, and finally to the production system. Despite everyone agreeing that the hyphen was a mistake, it took far more time to complete the required process than to delete the hyphen itself.
How do things get this way?
I suspect that this is partially related to hierarchical management—as managers become more separated from the developers, less communication occurs. And when mistakes are made (which is nearly inevitable given the complexity of modern software), management's response is often to create a new procedure or process to try to prevent future problems. This is completely well-intentioned, but by adding a bit of process each time something goes wrong, eventually the process itself becomes part of the problem. And if the original issue was rare, the new process may consume thousands of hours in subsequent years trying to prevent something that likely would not have happened again anyway.
How to avoid this?
I think one way to avoid this type of problem is by using Scrum. In Scrum, you develop in short iterations (sprints), and at the end of each iteration you have a retrospective. In the retrospective, the dev team reviews what they liked and didn't like, and creates improvements. Often these improvements involve new process or changes to process. So why is this any better than what was described above? In Scrum there is feedback from the dev team.
The importance of feedback
From my experience with Scrum, retrospectives will push the team in one of two directions—if the team is lacking in process, it will create new process to assist it. But if the team has too much process, it will recognize this and remove some of the less-effective processes. It's the feedback from the dev team every sprint (i.e., every month or so) that keeps things from drifting too far from an optimum state.
To make an analogy, the best way to drive a car down a freeway is to continually look out the windshield, see where you are, and based on that immediate, direct feedback, adjust course and speed. (This is analogous to the development team creating and removing process as they experience how everything is going.) On the other hand, imagine trying to drive a car blindfolded based on telephone instructions from someone who drove down the same freeway a few years ago and is now watching your car from a nearby hill as you drive by. This is somewhat analogous to management creating process without being directly involved in the current development work. (I actually think this overstates the issues a bit, but it was the best analogy I could think of...) The feedback over the telephone may work, but it is much less direct and timely than simply looking out the window and driving. To be effective, feedback needs to be timely and direct.
Consider the following hypothetical plot of a team's productivity as a function of the amount of process. Just to be clear, it is not based on any measurements of productivity or process—I just made it up. It is simply a general guess of what this curve might look like:
The sketch attempts to communicate the following points:
- If you have too little process (the far left side of the curve), your overall productivity declines—things become chaotic and error-prone, and less useful work gets done (and sometimes the work that does get done causes a lot of damage!) In the extreme, with no process (no planning, no guidance, no direction, etc), you don't get anything useful done because the work being done isn't the right work, so productivity is zero.
- If you have too much process (the right side of the curve), your overall productivity declines—the team spends too much of its time doing process, which often does not add directly to productivity, especially beyond a certain point. In the extreme of infinite process, no real work is ever done, so productivity is zero.
- For a given combination of project, team, situation, etc, there is an optimum amount of process where productivity is at a maximum—add more and productivity declines, but remove process and productivity also declines.
Given these assumptions, how do you optimize the amount of process? Feedback. The development team needs to be told what the priorities of the organization are, and then the team needs to be given the freedom to control the amount of process it uses. Usually management—especially if they are significantly removed from the development process—does not have as good an understanding as the team of what will really be useful process and what simply impedes development without adding much value.
To describe the feedback process in terms of the curve, when the team finds themselves in a chaotic and random environment, they will notice and, as part of the retrospective, create new process to assist in controlling the chaos. On the other hand, if the team notices that they are spending most of their time on process, they can remove the least-valuable of these processes and then see where they are. Eventually, after several sprints, the development team will hopefully be somewhere near the optimal point of the curve, and as the situation evolves from this point, they can continue to adjust to maintain the optimal position.
Some people might argue that the development team inherently doesn't like process and therefore will remove it regardless of whether it is useful or not. (It's only management that doesn't mind processes and therefore has the will to add them, because they will not be directly affected by them.) However, I've seen the opposite in many cases—for example, early in my current project, testing was very chaotic, because we had not defined much process around it. The team did not like this—it was chaotic and stressful for them. In the retrospectives, the team brought up that testing was chaotic and stressful, and the improvement they arrived at was more process. In addition, they defined the process—and it worked incredibly well. The process has since been fine-tuned and adjusted many times during subsequent retrospectives.
Will the team sometimes add too much process, or remove a valuable bit of process? Of course—but within a sprint or two, assuming there's a significant impact, they'll notice and correct. Constant correction via feedback is the key to optimizing the process. And Scrum includes the retrospective as a way of building feedback into the process at the most basic level.
One additional point that may not be obvious—in some cases, maximum productivity may not be the goal. For some organizations, reliability and stability or something else may be more important than pure productivity, but this is basically just a redefinition of what productivity is. (For most organizations reliability and stability are important, but so is the rate at which new features are added, so the definition of productivity needs to take balancing these various priorities into account.) The bottom line is that these priorities need to be communicated clearly to the development team—they are the ones who are steering the car down the freeway.
The retrospective is a key and often overlooked part of the Scrum process. I believe one reason for the success of my Scrum team in the past few years is related to our consistency in doing retrospectives at the end of each sprint.
I've seen other teams sometimes skip over retrospectives, for a variety of reasons:
- Retrospectives don't seem that important
- The team is too busy to take time to retrospect
- They believe that doing retrospectives won't improve anything
Having done a lot of retrospectives, from my experience, the retrospective serves a number of purposes:
- It provides a structured environment for the entire team to brainstorm ways to improve things
- It helps the team to bond—they have a time specifically to discuss their feelings and experiences, and to listen to each other
- It provides the team a sense of closure at the end of each iteration
- It allows team members to voice grievances and annoyances in a supportive environment
- Simply scheduling the time for the retrospective communicates to the team that their feelings and opinions are important and valued
The most obvious purpose of the retrospective is making improvements to the process. It may seem that making a few small process improvements each sprint won't really add up to much, but this has not been my experience. Although the individual improvements can seem very small, the cumulative effect can be quite surprising.
Thoughts and suggestions about retrospectives
(If you are unfamiliar with Scrum or the retrospective, I encourage you to read the excellent book Agile Software Development with SCRUM by Ken Schwaber and Mike Beedle. I strongly recommend this book—I think it is the best book I've seen about Scrum.)
First of all, I always refer to it as a “retrospective”, not a “postmortem” or anything else. Postmortem has the smell of death around it—I can understand people not wanting to go talk about how the iteration (and/or team?) “died” over the last few weeks. This word also implies a sense of hopelessness—the iteration or the team is dead and cannot be resurrected—instead we’ll just talk about how it died. On the other hand, the word “retrospective” gives the process a more hopeful tone—that you will look back in order to learn something that can make the process more pleasant going forward.
We start our retrospectives by making two lists on the whiteboard:
- What we liked in the previous iteration
- What we did not like in the previous iteration
We usually draw it with a plus sign at the top of the column for things we liked, and a minus sign at the top of the column for things we didn't like, for example:
Note that the second list is NOT defined as a list of mistakes that we made in the last iteration! This may be a subtle point, but it's important--not many people enjoy going to a meeting where they have to confess their mistakes over the past month. The list consists of things that we (members of the team) did not like. This can include mistakes we made, but the scope is much larger than that.
Note that, as the ScrumMaster for the team, I try to encourage and write down items in both areas, without adding too many of my own, or doing any editing or censorship of items. For the most part I just write items down—I don’t want to hijack the meeting so that it ends up being all about me.
In addition, I don't try to make the two lists “balance out” or be equal in some way—this is not a zero-sum game. If we had an unpleasant iteration, it's likely that we'll have more items on the minus side than the plus side—and that's fine! On the other hand, after a really smooth iteration it may be the other way around, and that's fine too.
I do what I can to encourage everyone to be honest—for example, if there is an event I recall that people didn't like but nobody is bringing up, I'll ask about it. Honesty is important—the more honest everyone is, the better the whole thing works. Sometimes this means taking social risks—we've had several retrospectives with people getting frustrated or angry and saying things like “I didn't like how Bob would criticize my code!” If people are feeling it, this is probably the best time to talk about it, and it certainly makes the process a lot more interesting! Not talking about it just means that it likely won't change. In my experience, most of these types of interpersonal team issues were greatly improved after being brought up in a retrospective, and this greatly added to the team's morale and productivity.
When we start running out of items, we prioritize the top few items that we didn't like. The prioritization is based on the importance of improvement in that area to us as a team. Usually the top priority items are the biggest pain points of the iteration for the team, and fairly easy to agree upon. Again, as ScrumMaster, I usually leave my own opinions out of the prioritization—I’ll sometimes add a bit, but I really don't want to control the prioritization.
Finally, we take the top two or three priority items from the minus side, and brainstorm as a team how we could improve the situation going forward. In my experience it's important to not select more than 2 or 3 items to improve, because they tend to be too many and some get forgotten, which makes the whole process seem pointless and frustrating.
It’s during the brainstorming about possible improvements where the real magic happens—the team is smarter as a whole than any individual on it. Issues that at first seem impossible to improve (items that you seem to just be stuck with) can often be improved in some way when the whole team focuses on them. This means it often makes sense to spend some of the brainstorming time on a painful item that everyone initially believes cannot be changed; it is only in the brainstorming that the possibilities emerge. These improvements may not resolve the issue entirely, but they improve things incrementally—the tricky bit is that these possibilities can be difficult to see in advance. In my experience, the improvements sometimes involve more communication with someone (the product owner, users, someone on the team, etc.), more training, new procedures, different tools, or a technical task for the next iteration to improve something. But there are no limits to the types of improvements you can brainstorm into existence.
And, as I said above, it’s important to follow through with the improvements—otherwise it undercuts the whole process.
In summary, if you are using Scrum but skipping retrospectives, I encourage you to try doing them seriously and consistently! From my experience, it may well improve your team’s productivity and morale.