Wednesday, March 31, 2010

How To Get Moles Into Your Test Project

In my last post, I talked about what Moles can do for your testing capabilities. In this one, I’m going to give a quick overview of what you need to do to start using Moles in your project. For this post and most others, I’ll be talking from the viewpoint of Visual Studio 2008 Team Suite (all of the stuff on the Pex page trumpets VS 2010, but it works fine with 2008), so I’ll be using the VS Team Test unit test tools as opposed to something like NUnit, although Pex and Moles come out of the box with a few switches you can flip to make them friendly to NUnit and a couple of other testing frameworks.
Like most things, you’re going to need to install it first. I recommend the full Pex + Moles installer - you can grab it from here. I try to be careful with what I install and avoid installs that make a mess or require a lot of steps, and I can report that the Pex + Moles installer is simple and clean as can be.
Once the installer finishes, fire up VS with your test project. In your test project (as opposed to the project-under-test), do an Add > New Item. You should find a new template called “Moles and Stubs for Testing.” Before you fire away and click OK, there’s a small bit to understand about what adding a .moles file does and what effect the name has.


Running this template will add a .moles file and subfiles to your solution and add two references to your project: one to the newly compiled Moles.dll and another to the Moles framework types. A .moles file is a small XML file in your solution that contains configuration for a set of MSBuild tasks that also get added to your project when you add a mole. Most importantly, it points to the assembly that you want to mole. A .moles file only moles a single assembly, so if you want to mole types in multiple assemblies, you’ll need to add multiple .moles files. The .moles file has three subfiles when viewed in Solution Explorer:
  1. A designer.cs file. To be honest, I have no idea what this does, as it contains nothing but a commented-out Base64 string.
  2. A Moles.xml file. This contains generated documentation for all the mole types generated.
  3. A Moles.dll file. This is a compiled assembly that contains the mole types. When your test project is built, this assembly is generated by MSBuild, and a reference to it is automatically added to your project. If you mole a big assembly, like mscorlib, it can add a good chunk of time to your build process, but it will only rebuild the assembly if the assembly being moled has changed since the last build.

When you add your .moles file, the name you use (minus the .moles part) will, by default, be the name of the assembly that it moles. It has visibility to everything in the GAC plus everything referenced by the containing project. You can change the target assembly by opening the .moles file in an XML editor, but it’s easier to just name it correctly to begin with. One tweaky thing I’ve noticed is that, unlike a lot of other VS template items, you really should explicitly include “.moles” at the end of the filename; otherwise the template gets confused if the targeted assembly has dots in its name.

Just like that, you’re done. You now have access to moles and stubs for all the types in the referenced assembly. Now, what exactly does that mean? Check out the next post for information about how mole and stub types are generated.

Monday, March 29, 2010

Moles!

As I write more of them, I'm finding that these "unit test" things aren't so bad, especially if you write them properly. That means getting rid of your external dependencies - code that's not yours, code that is yours but isn't being tested, networks, file systems, and all that other junk.

If you're working in the ivory tower of Objectorientedland, this is always dead simple: the dependencies of every class you use are abstracted away as interfaces, and concrete implementations of those interfaces are controlled by either the caller, who provides them to the class' constructor, or a dependency-injection framework, which I admittedly haven't messed around with. Your test should provide suitable test doubles of those dependencies.

There are lots of ways of thinking about test doubles - the two most common that I've seen are stubs and mocks. Stubs are little dummy objects with methods that basically do the simplest thing possible to make your test run. For example, if I have an object with a method that takes a DataSet and returns a scalar value, my stub implementation of that method will be a one-liner that just returns the value I'm expecting. Another kind of test double is a mock. A mock performs the same actions as a stub when it comes to the code that's calling it, but it also has "expectations:" you set up a mock to expect method calls with certain parameters in a certain order, because that's what you're expecting your code-under-test to do. If the mock's expectations aren't met, the test fails.
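
To make the stub idea concrete, here’s a minimal hand-rolled sketch; the IScalarCalculator interface and the value 42 are invented for illustration.

using System.Data;

// Hypothetical dependency of some class under test.
public interface IScalarCalculator
{
    int Calculate(DataSet data);
}

// A hand-rolled stub: the simplest thing possible to make the test run.
public class ScalarCalculatorStub : IScalarCalculator
{
    public int Calculate(DataSet data)
    {
        return 42; // just hand back the value the test expects
    }
}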

Back to Objectorientedland - parachuting down from the ivory tower and making our way to the swamp of spaghetti code, we see that things aren't always that simple. Sometimes corners are cut, sometimes refactoring never happens, sometimes it's just simpler to use that static filesystem accessor method, and sometimes creating a zillion classes and interfaces to accomplish a simple task (aka "enterprise design") is just overkill.

Enter Moles.

Moles are part of Microsoft Research's new test tool Pex. I'm not going to go into Pex just yet - I'm really excited about it but I haven't explored it fully enough. Both Moles and Pex help you to write better and more complete test cases for your code. Pex does it by "exploring" your code, literally finding ways to take all the branches and break it if possible. Moles gives you a way to easily stub out methods, even methods you shouldn't technically be able to stub out, without writing a ton of code.

I'm going to be writing a bunch of posts about Moles and Pex, but just to get started, I'm going to toss out a perfect use case for Moles: getting around the static File methods. In a properly built OO application, the file system will be abstracted away behind a set of classes and interfaces that can be swapped out so your tests can effectively "fake it." However, we all know that File.Open is way too tasty, and sometimes it's really all that's called for. The problem with File.Open is that you have inexorably chained your code to the filesystem - have fun trying to make a test suite that doesn't refuse to run on any machine but yours without 20 minutes of setup. The whole point of unit tests is that you should be able to click a couple buttons and watch the green checkmarks roll in within seconds.

With Moles, you can literally rewrite File.Open and any other static methods you like, right in your test case, to make them do whatever you want. Check out the following code and the test method for it:

public int SumNumbersFromAFile(string filename)
{
    string fileText = File.ReadAllText(filename);
    string[] numbers = fileText.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
    return numbers.Select((str) => Int32.Parse(str)).Sum();
}

[TestMethod()]
[HostType("Moles")]
public void SumNumbersFromAFileTest()
{
    Class1 target = new Class1();
    string input = "filename";
    string textFromFile = "1, 2, 3, 4";
    MFile.ReadAllTextString = path =>
        {
            Assert.AreEqual(input, path);
            return textFromFile;
        };
    int actual = target.SumNumbersFromAFile(input);
    Assert.AreEqual(10, actual);
}


SumNumbersFromAFile is the method under test. As you can see, it reads the text from a file, expecting to find a list of numbers separated by commas, and sums them. It's a pretty contrived example, and one without a lot of error handling or checking of assumptions, but it does the job for illustrating Moles as I'm doing here.

The test case looks like a pretty standard test case - setup of input and expected results, method call, and an assertion - with the exception of that "MFile" sitting there. MFile is actually the Mole of the static File class. As a mole of a static class, it has a static property for each method overload, named after the method with its parameter types appended (hence ReadAllTextString for ReadAllText(string path)). The type of each property is a delegate type that matches the signature of the method being moled. By providing an implementation for the delegate, you are overriding the behavior of the method. When my method under test calls File.ReadAllText("filename"), it will call my delegate implementation instead. The assert in there is a poor man's mock-like expectation: it checks that the method was called with the expected parameter. Then it returns my nicely prepared string to execute on.

How does this sorcery work? The key is the [HostType("Moles")] attribute at the top. This causes the test to run in a host that can rewrite the compiled IL as it executes. The debugger behaves just a little bit funny when you debug inside a Moles-hosted test (it looks like it's jumping around a bit, but it actually is working properly); if you step through SumNumbersFromAFile you can see it call out to my Mole delegate to get my pre-prepared string, bypassing the file system entirely.

You can use Moles to bypass most anything, and also included is a simple little stubbing framework for doing regular ol' stubs of interfaces, abstract classes and classes with virtual methods. For stuff where a regular stub won't cut it, you use a mole.

In the coming weeks I'll be posting some more about how to actually get Pex and Moles into your test project, neat examples of how to use Moles and stubs, and how to do things like mole constructor methods, create mole instances (for classes that can't be stubbed because they're sealed or have no virtual methods), and create nested stubs.

Caching with Enterprise Library

I can't say I was too excited when I was asked to dive back into some web service code I had written for a recent project to implement caching in a few methods. Caching. That just sounds like one of those things that's going to be a pain in the ass to implement, doesn't it? There was a major risk that I was actually going to have to put some thought into this lumped-together collection of dumb/stateless service methods and make it do something.

What's more, my colleague suggested that I take a look at the Caching application block in Enterprise Library. Wonderful, I thought, now it's expected to be "enterprise-grade" and make use of this big honking library too.

I must have been feeling lazy or interested in/distracted by some other task at the time, because usually this kind of opportunity excites me - new tools I've never played with before! Fortunately I got over the "starter's hump" and dove into it, and I'm glad I did. Enterprise Library is something I haven't played with before, and I still haven't touched the majority of it, but I wanted to share my experience working with the Caching block to show you how easy it is to use. I'm going to implement in-memory caching in a simple little WCF service method:

public int SlowAdd(int value1, int value2)
{
    Thread.Sleep(5000);
    return value1 + value2;
}

Yeah, it's contrived, but it simulates the service's reliance on some slow resource, like another service. Is it a good example of a method that should be cached? Sort of. It's definitely slow, and there are only two parameters, but because we don't have any contextual information about the kind of usage this method sees, we don't know if those parameters are constrained at all. Are callers of this service method really only ever interested in values from 1 to 50? Or does this service get nailed with int values from all over the board? Caching will work fine in either case, but remember that if the number of potential combinations of arguments you care about is very high, the cache may grow to be very large, and callers may not get any benefit out of it if two calls using the same parameters are rare.

OK, back to EntLib. First things first... what is it? Wikipedia defines it pretty well: "a set of tools and programming libraries for the Microsoft .NET Framework. It provides an API to facilitate best practices in core areas of programming including data access, security, logging, exception handling and others." Really, that's it. I like to think of it as simple, configurable helper code that wasn't low-level enough to include in the framework itself. It's broken up into "application blocks," which is really just an enterprisey way of saying it provides multiple pieces of functionality. A few examples of blocks are cryptography, data access, exception handling, logging, and of course caching - all stuff that most developers could probably write themselves, but with EntLib it's standardized and already done for you. If one person on your team knows how to use a few of the blocks, he can easily share his knowledge with others, and since those blocks are simple and configurable enough to use everywhere, everyone he teaches becomes a more effective developer with a bigger toolkit of standardized, recognized tools.

To actually use it, you need to grab a copy from here. I had no problems downloading it and installing it using all the default options (make sure you check "build application blocks" at the end of the first installer - it's a multi-stage process that extracts the source, builds the application blocks, and installs the VS tools).

Now let's dig in! You need to add a reference from your service project to Microsoft.Practices.EnterpriseLibrary.Caching.dll. If you completed the installation properly and built all the application blocks, this should be in the Bin folder of the Enterprise Library installation directory (Program Files\Microsoft Enterprise Library 4.1 - October 2008 or similar, depending on your version).

Before adding any code, you need to configure your service application to use the Caching application block. You can do this by manually editing your web.config, but it's much easier to start by using the EntLib configuration tool: right-click on your Web.config in Solution Explorer and choose Edit Enterprise Library Configuration:

What you'll get is a nice graphical view of your web.config, with an interface designed to make it easy to drop in EntLib-specific configuration:
Right-click the Web.config entry (the one right under the root) and click New > Caching Application Block. At this point, you've configured everything you need for simple, in-memory caching. You may want to look at some of the options that pop up in the Properties pane when you click your new Cache Manager entry. If you want to use a different cache provider, like a database, this would be the place to specify that, but I'm not going to go into that here. Now is a good time to look at your web.config in an XML editor if you want to see the changes that were made: EntLib dropped in a couple new configSection definitions and a new section to describe the caching provider.
Now, onto the code. Here's the new method that takes advantage of caching:


public int SlowAdd(int value1, int value2)
{
    string cacheKey = "SlowAdd-" + value1 + "|" + value2;
    ICacheManager cacheManager = CacheFactory.GetCacheManager();
    if (cacheManager.Contains(cacheKey))
    {
        return (int)cacheManager.GetData(cacheKey);
    }

    Thread.Sleep(5000);
    int result = value1 + value2;

    cacheManager.Add(cacheKey, result);
    return result;
}

That's it! Before I step through this line by line to examine exactly how it works, here are a few things to know:
  • This is by no means an example of architectural excellence, just an example of how easy it is to use the EntLib caching api. You may want to abstract away the caching in some other method or perhaps use inheritance to implement versions of the method that work without caching enabled.
  • Our "cache key" is a dumb string based on our parameters. Basing your cache on a simple combination of all the parameters to your method isn't always the right thing to do.
  • Our code does not take into account that a call to SlowAdd(x, y) returns the same value as SlowAdd(y, x). This is easy to fix (see the sketch after this list), but I'm lazy.
  • The cache object that holds the values lives independently of the service instance that calls it. This means that the same cache object is used by all instances of the service. This is exactly what you want - otherwise, the cache wouldn't be all that useful.
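
For the curious, here's a minimal sketch of that order-independence fix: build the key so the smaller value always comes first, so SlowAdd(2, 7) and SlowAdd(7, 2) share a cache entry.

// Order-independent cache key: always put the smaller value first.
string cacheKey = "SlowAdd-" + Math.Min(value1, value2) + "|" + Math.Max(value1, value2);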
Alright, let's take a slightly closer look at what's going on. If you couldn't tell, the cache is essentially a Dictionary hidden behind an API that isn't much simpler or more complicated than the Dictionary itself. So why not just whip up your own dictionary and use that? Like I said before, you could, but with EntLib you don't have to worry about concurrent modifications and accesses, configurability points, limiting the size, implementing a sane entry-deletion policy, or reuse (you did think about all those things when you considered rolling your own, right?).

Let's start from the top: cacheKey is our key into the cache for this particular set of parameters. CacheFactory.GetCacheManager() returns an ICacheManager, which is the object we use to manipulate our cache. Our application only has one cache object - the default one, which is what a call to GetCacheManager() without parameters returns. You can give it a string if your code uses multiple cache objects. The rest is pretty self explanatory: if we find our precalculated result already in the cache, we return it. If not, we take the long road, calculating the result ourselves, and then we save it into the cache before returning it. The next call to this method with the same parameter set will return in a fraction of the time.
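
As a quick aside on that named-cache point, grabbing a non-default cache object looks like this (the name "SlowAddCache" is made up, and you'd also need to define a cache manager with that name in the Caching block configuration):

ICacheManager slowAddCache = CacheFactory.GetCacheManager("SlowAddCache");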

One common point of configuration I didn't show here is how to control how long something lives in the cache. This is done with an overload to the add method:

cacheManager.Add(cacheKey, result, 
                CacheItemPriority.Normal, null, new SlidingTime(TimeSpan.FromHours(5)));

The first couple parameters haven't changed. The third is a priority that determines how "important" the cached result is, which in this context means how likely the cache "scavenger" process is to consider it for deletion when the cache is full and needs to make room for something new. The fourth is an instance of an ICacheItemRefreshAction, which I won't discuss in detail here, but is essentially a way to control what happens when your item gets deleted - you can implement refresh logic if you like, so your cache keeps itself up to date outside of the control of the service code. The last parameter is actually a params array of ICacheItemExpiration objects, five of which are available out of the box with EntLib: AbsoluteTime, ExtendedFormatTime, FileDependency, NeverExpired, and SlidingTime. Here, I've used SlidingTime, which considers an item expired once a preset amount of time has passed since its last access or modification.
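
For comparison, here's a rough sketch of a couple of the other expiration types listed above, using the same Add overload (the time values are arbitrary):

// Expire at a fixed point in time, no matter how often the item is accessed.
cacheManager.Add(cacheKey, result,
    CacheItemPriority.Normal, null, new AbsoluteTime(DateTime.Now.AddHours(5)));

// Never expire - the item sticks around unless the scavenger evicts it to make room.
cacheManager.Add(cacheKey, result,
    CacheItemPriority.Normal, null, new NeverExpired());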

This is by no means an exhaustive manual for the Caching application block, but hopefully it will help you get started if you are considering using it, or any part of EntLib, in your project. It's actually pretty easy to set up and get going on, and I encourage you to give it a try if you haven't already.

Saturday, March 27, 2010

Be Good at Your Job

One of my favorite bits of advice anywhere is Chuck Jazdzewski's Fatherly Advice to New Programmers. Three of his points (Keep Learning, Learn to Communicate, and Own Up to Your Mistakes) I feel like I've never had a problem with, but the other three align really well with some of the goals I have for myself:
  • Be Predictable: Not much to say here - under-promising, over-delivering, and giving realistic estimates.
  • Never Let Bad Code Off Your Desk: This one, to me, is all about not being lazy. In a lot of my previous work, where I was the only person working on a well-defined portion of a project, the prevailing attitude was that there was no time for things like unit tests, and it was easier to just check in your work and find out if it broke things later. With this kind of attitude, not only are you robbing your team of time and quality, you are denying yourself the awesome feeling of "pride in craftsmanship."
  • Programming is Fun But Shipping is Your Job: I like this one so much that I set it in big 22 point font, printed it out and hung it over my desk where I can see it every day. The opportunities my role provides remind me frequently that I really do love learning about, designing and implementing software, but that can make it easy to forget that so many other things need to happen to make a project successful.
I've been trying to find ways to make some of the less-fun stuff easier to accomplish, less intimidating and more fun. One thing that helps me is to simply embrace that craftsman's pride that comes with knowing you've made something great that's not missing any parts: it has to be fast, understandable, simple, tested and well documented. Another is using a Hipster PDA to keep track of the million to-do items I have and help me focus on the one or two that need all of my attention at any given time. Finally, that "document strategy" I mentioned previously helps motivate me to write documentation without getting bogged down or intimidated by gigantic spec templates - when you make something interesting, spin up a new Word doc about it, write about how it works and why you made it, and share it!

Wednesday, March 24, 2010

CryptoStream - direction, flushing, and length

I was tooling around with some encryption code, trying to learn a bit more about the .NET encryption APIs, and it occurred to me that some of the documentation for the general usage of CryptoStream was a little bit confusing. I am referring to the "direction" of the stream, and what effect the CryptoStreamMode argument in the constructor actually has.

The reason for my initial confusion was the EncryptTextToFile and DecryptTextFromFile methods the documentation uses in its sample. These methods kind of make it look as if you should use CryptoStreamMode.Read if you are doing decryption, and CryptoStreamMode.Write if you are doing encryption. This is not the case - you are free to use either direction no matter what kind of transformation you are applying.

My confusion probably originated from my one-track mind when it comes to streams: In all my experience with BizTalk pipeline components, I don't do a whole lot of writing to streams, and so I default to thinking of streams either as objects that access a resource, like the file system or the network, or as machines that apply a transform as you pull data through them. You can layer these machines like Matryoshka dolls, with the data source at the center, and pull the data all the way out, and what you'll get out is the grand sum of all the transformations applied in order, without necessarily having to get the full result of each transformation in sequence. This is exactly how CryptoStream works in Read mode - the first argument to the constructor is the stream you are wrapping. The second argument, the ICryptoTransform, determines the transformation that the stream applies as you pull the original stream through. Just like any other stream, you can layer a CryptoStream and return it, and the consumer can use it just like any other stream. Keep in mind, though, that a CryptoStream in Read mode is read-only, and CryptoStreams are never seekable.
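
Here's a minimal sketch of that Read-mode layering, assuming you already have a SymmetricAlgorithm (an AesManaged instance, say) set up with the right key and IV:

using System.IO;
using System.Security.Cryptography;

public static byte[] DecryptAll(Stream encryptedSource, SymmetricAlgorithm algorithm)
{
    using (ICryptoTransform decryptor = algorithm.CreateDecryptor())
    using (CryptoStream cryptoStream = new CryptoStream(encryptedSource, decryptor, CryptoStreamMode.Read))
    using (MemoryStream result = new MemoryStream())
    {
        // Pulling from the CryptoStream pulls from the inner stream and
        // transforms the bytes on the way out.
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = cryptoStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            result.Write(buffer, 0, bytesRead);
        }
        return result.ToArray();
    }
}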

A CryptoStream in write mode works the other way: with the first argument to the constructor, you define the stream that goes inside of it. When you write to the CryptoStream, the result of the transform gets pushed to the underlying stream.
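
And the Write-mode equivalent, a sketch under the same assumptions:

public static void EncryptTo(Stream destination, byte[] plaintext, SymmetricAlgorithm algorithm)
{
    using (ICryptoTransform encryptor = algorithm.CreateEncryptor())
    using (CryptoStream cryptoStream = new CryptoStream(destination, encryptor, CryptoStreamMode.Write))
    {
        // Writes are transformed and pushed into the underlying stream.
        cryptoStream.Write(plaintext, 0, plaintext.Length);
        cryptoStream.FlushFinalBlock(); // Flush() alone is a no-op - more on this below
    }
    // Note: disposing the CryptoStream also closes the wrapped destination stream.
}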

So which direction is preferable? It depends on the shape of the information you're putting in and getting out. If your input is not a stream, and is instead something like a string or a byte array that you know the length of, Write is probably a simpler option, because you can just write your data directly to it. If you're dealing with streams, Read is likely easier.

Just like every other kind of stream, don't forget to flush your CryptoStream. CryptoStream does have one twiddly bit about it - Flush is a no-op. For whatever reason, the important flushing method in CryptoStream is FlushFinalBlock. The documentation does a good job of hammering it home that you can call Close to make sure this gets done, but calling Dispose does it as well, which is good to know because "using" statements are generally preferable where applicable.

One more important bit that's easy to overlook - while you may be able to determine the length of a CryptoStream you are reading from at runtime, the length of the CryptoStream is not necessarily the size of the data you will get out of it, because it's transforming the data as it zips through. This goes for all streams - don't be lazy, and don't use the length of a stream to set the size of a byte array you're going to store the data in!

Tuesday, March 23, 2010

Publishing

I remember a few times throughout my childhood when a teacher would extol the virtues of writing for oneself. Whether it was taking notes, doodling, "mind-mapping," creating notecards from individual ideas, or simply writing down all the ideas that came into your head, there were always a few key messages:
  • You don't have to worry about remembering it anymore. A related Chinese proverb is, "the weakest ink is better than the strongest memory."
  • Once you get your thoughts down, you can physically/visually rearrange them to fit your goal (remembering them, making sense of them, etc.)
  • Writing down your thoughts helps you to find gaps in your reasoning or explanations, reinforce your opinions, and clarify your message.
This is all well and good, but ultimately, writing for yourself and yourself alone only serves to make you better. A noble pursuit, to be sure, but by making a simple tweak, you can get much more bang for your buck, not only increasing the positive effect that writing has on you, but broadening that positive influence to everyone you work with as well.

Publish.

The goal I'm most excited to pursue over the next few months is a "documentation strategy" I'm going to assemble and present to my team. Its goal, in short, is to supplement the lifeless, monolithic spec templates we use here and encourage easy, simple and succinct publishing with a minimalist template. The focus is on publishing as opposed to writing because the benefits of having easily editable documents to collaborate over and record decisions and ideas are just too numerous to ignore.

It's not going to be easy - I need to develop and sell a strategy that isn't too "open" (I can't just tell my teammates to write clearer, more succinct documents without delivering something that they can use, like samples and templates) and isn't too "closed" (introducing another monolithic template and more hard rules defeats the point), all while convincing everyone that what looks like more work actually results in better quality with less effort. I have a whole lot of ideas and there are a few specific ones that a few specific, important people want to see, and so I need to make sure those get a good treatment as well.

I'm not going to get into heavy details in this post, but I am going to be writing about this more as I put this strategy together and figure out how to make it work for my team. Actually, one of the primary reasons I want to continue to write about this and show my work in progress is one of the reasons I want to introduce a strategy like this in the first place - it leads to greater collaboration and higher quality work. These two fellows from Google have some great insights about collaboration and showing your work to others, and how it leads to better quality work, faster. They talk about it in the context of version control systems, but these ideas outline one place I want to go to with this idea.

Monday, March 22, 2010

I Just Don't Have the Time

A small insight that someone shared with me a long time ago, paraphrased:

The concept of "not having enough time" to do something is a non-entity: it literally does not exist. When you tell someone that you don't have enough time to do what they want you to do, what you are telling them is that their task is less important than everything else you are doing. There's nothing inherently wrong with this - I get plenty of requests every day that are less important than what I'm doing, both at work and from friends and loved ones - but there are a couple things you need to think about:
  • Saying "I don't have the time to do that" means that the request is literally less important than everything else you are doing. Is it really? Can you take just a minute to try to fit it in with everything else going on?
  • Most importantly, how would you feel about yourself if you just went ahead and said what you meant, even if you said it as nicely as possible? "Look, I'm really sorry, but your request just isn't important enough for me to get it done today/this week/ever." How would the person with the request feel about you?
When you think about it this way, every time you say "I don't have the time" becomes an opportunity to reflect on your priorities for a second.

IDisposable WCF Services

If you do any searching for the terms IDisposable and WCF together, you're likely to hit a lot of posts about my previous topic. However, last time I was searching for them, I was more interested in whether or not you can make a WCF service disposable.

The answer is yes - if the class you are using as a WCF service implementation implements IDisposable, WCF will call Dispose on it for you when the service instance is released (for example, when the connection is closed).
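
A minimal sketch, with a made-up contract (IGreetingService) and a file-based log standing in for whatever resource you actually need to clean up:

using System;
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService, IDisposable
{
    // Hypothetical resource held for the life of the service instance.
    private readonly StreamWriter log = new StreamWriter("greetings.log");

    public string Greet(string name)
    {
        log.WriteLine("Greeted " + name);
        return "Hello, " + name;
    }

    // WCF calls this for us when it releases the service instance.
    public void Dispose()
    {
        log.Dispose();
    }
}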

Closing A WCF Proxy

MSDN has a whole page on why you shouldn't use a using statement to manage the lifetime of a WCF service proxy. Succinctly, closing a WCF proxy actually makes a call to the server to tell it to "hang up", and this action can fail if the server is unreachable. Disposing a WCF proxy closes it. If your proxy gets disposed and closed at the end of a using block and the call to the server fails, the closing brace of your using statement will throw an exception. What's more, if you threw an exception inside the using block, it disappears, because the failing close/dispose throws its own exception afterwards.

The page gives an example of how to properly handle closing a proxy, but like many MSDN pages, a better answer is given in the Community Content at the end of the page: don't use the using statement, wrap your proxy work in a try, and in the finally block, have another try/catch that tries to close the proxy and aborts instead if the close fails.

finally
{
   try
   {
      client.Close();
   }
   catch
   {
      client.Abort();
   }
}
Don't forget to catch CommunicationExceptions and TimeoutExceptions before catching any others!
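
Putting it all together, here's a rough sketch of the whole pattern, assuming a hypothetical generated proxy class named CalculatorClient:

CalculatorClient client = new CalculatorClient();
try
{
    int sum = client.Add(1, 2);
    // ... whatever else you need the proxy for ...
}
catch (CommunicationException)
{
    // handle communication failures from the call itself
}
catch (TimeoutException)
{
    // handle timeouts from the call itself
}
finally
{
    try
    {
        client.Close();
    }
    catch
    {
        client.Abort();
    }
}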

Friday, March 19, 2010

Goals

Where I work, just like most places, we have to set SMART goals for ourselves. Specific, measurable, achievable, results-based and time-bound goals – you know, the kind that you actually have to put some thought into and require you to challenge yourself. I think a lot of people dislike SMART goals and dislike their particular organization’s goal-setting metrics because they have enough work to do already, but if you hunker down and spend a couple hours on it, it can be a pretty rewarding activity that can help you to focus on what’s important to you and set aside things that aren’t.

What I Say I’ll Do vs. What I’m Responsible For – What’s the Difference?

My organization, like many, differentiates between “action plans” and “accountabilities.” That is to say, “what I’m going to do” and “what I’m ultimately responsible for delivering on” are two different things. At first glance, there is a problem here, because those two sound like the same thing. If I'm going to execute activity X, the result is that I executed activity X. If I'm going to publish document Y, the result is that the document is published. So why bother trying to differentiate between my “to-do’s” and my “responsibilities?”
The answer is the unwritten subtext after the word “accountabilities:” “to other people.” The items you are ultimately responsible for are effects or changes that manifest themselves in others. Publishing document Y is my action plan, and my accountability should be that my intended audience for that document should take up or at least consider the process I discuss, install and try out the tools I recommend, implement the pattern I illustrate in their future work, or start a conversation about how to make things better using my ideas as a foundation.

So My Goals Are All About Everyone Else?

Is it scary that your goals are things that hinge on other people; things that you may feel like you don't have full control over? Yes. The reason you set them that way is to encourage yourself to do everything that you can do to make them happen. Before you give the presentation that you want to affect your team, chat with some of them to make sure it’s on target. After you give it, if it wasn't enough, follow up on it. If your document wasn't clear, revise it. Sell it. If you fail, at least you will have a laundry list of everything you did to try to make it work, and that counts for something (you did do everything you could do, didn’t you?). Find your radius of impact and focus your accountabilities on influencing people within that radius – make their work better, make their job easier, raise their efficiency or satisfaction, and expand their minds. Also, make sure you let the people in your radius know that they are a big (possibly the biggest) part of your goals and that you really appreciate their time and feedback, so when you bother them with a request for a follow-up or tell them that the quality bar for the organization can be raised by giving them more work, they don’t get too fed up with you.

On Measurements

Not every accountability has to be "improve this by 5%" or "have 25% of participants do X". Just because your goals need to be measurable doesn't mean you need to set them with pinpoint accuracy, especially if the context doesn't call for it. One of my accountabilities is centered on evangelizing some development strategies not currently located in the worldview of my team and getting them to consider them for future projects, or at least making sure they understand what they are and what tradeoffs are involved so they can make more informed decisions. Is this goal specific, measurable, attainable, relevant and time-bound? Yes, it is all of those things, and with no percentages required. For those asking, “How do you measure it?” It’s pretty simple, really: ask my teammates. We’re all humans and goal-setting is a human process, so there needs to be room in your goals for explanations that go beyond ticking boxes and filling in percentages. If there’s not, there really is something wrong with your organization’s goal-setting processes and metrics.

What If I Really Am Accountable For a “To Do?”

Sometimes, "things I do" will still bleed into my accountabilities, and they can’t all be all about making others great. One place this happened for me is in my personal improvement commitments: I have a lot of accountabilities there about publishing things to my blog. For me, publishing a writeup is proof positive that I have learned and internalized what I have written about (and my boss knows this… or hopefully he does now), and even though I don’t exactly count on a lot of people reading it, I can share it with people in my radius to help them learn, show them what I’m learning, and encourage them to write too. I don't want my accountability to simply be "learn this,” because that’s too vague, and not SMART. So, publishing something here becomes a "proxy accountability:” by doing it, I prove that I have reached my actual goal in a way that meets the SMART criteria, and hopefully affects others in a positive way too.

Thanks to Hard Code for the inspiration. In response to some of the comments there… I don’t really have any comments on the effectiveness of any particular style of goal setting for ranking people in an organization, determining promotions and the like. My intention here was just to discuss how to think about goal-setting in a way that helps me challenge myself, expand my horizons and contribute to my team.

"throw" vs. "throw ex"

Here is an interesting bit of C# knowledge I just picked up from a coworker the other day: rethrowing an exception with "throw ex" usually isn't what you want to do. You can do it, and at first glance it works fine, but when you use "throw ex" the original stack trace of the exception is wiped out, making it appear as if the exception originated at the point of the rethrow and making debugging more difficult than it should be.

If you are catching an exception and you want to rethrow the exact same exception, just use "throw;". This will rethrow the exception with the stack trace intact. If you are throwing a new exception, make sure to set the original exception as the inner exception before throwing.
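
Here's a quick sketch of the options side by side (DoSomethingThatThrows is a made-up method standing in for whatever might fail):

try
{
    DoSomethingThatThrows();
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);      // do whatever handling you need here

    throw;                              // rethrows with the stack trace intact
    // throw ex;                        // rethrows, but the trace now starts at this line
    // throw new InvalidOperationException("context about what failed", ex);
    //                                  // new exception, original preserved as InnerException
}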

Thanks to this post for making it clear.