Monitoring execution using Mono Cecil

This post will demonstrate how to monitor the execution of .NET code using Mono Cecil. This can be useful for logging, for performance analysis and just for fun. The technique is, of course, IL weaving: we look for entry points and for existing IL instructions, and weave new IL around them. In this post we’ll show only four types of monitoring (in reality we use a few more): Enter method, Exit method, Jump from method and Jump back to method. Jump, in this context, means calling another method and returning from it.
In our example we’ll assume we have a simple ‘notifier’ which the weaved code will call:

public class Notifier
{
    public static Action<string> Enter;
    public static Action<string> Exit;
    public static Action<string> JumpOut;
    public static Action<string> JumpBack;

    public static void NotifyEnter(string methodName)
    {
        if (Enter != null)
        {
            Enter(methodName);
        }
    }

    public static void NotifyExit(string methodName)
    {
        if (Exit != null)
        {
            Exit(methodName);
        }
    }

    public static void NotifyJumpOut(string methodName)
    {
        if (JumpOut != null)
        {
            JumpOut(methodName);
        }
    }

    public static void NotifyJumpBack(string methodName)
    {
        if (JumpBack != null)
        {
            JumpBack(methodName);
        }
    }
}

Monitoring enter

This is the most trivial weave: it inserts a call to the Enter callback before the first instruction in the method body. In order to do so, we first need to load the assembly and find all the methods we can weave into:

public void Weave()
{
    AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(assemblyPath);

    IEnumerable<MethodDefinition> methodDefinitions = assembly.MainModule.GetTypes()
        .SelectMany(type => type.Methods).Where(method => method.HasBody);
    foreach (var method in methodDefinitions)
    {
        WeaveMethod(assembly, method);
    }

    assembly.Write(assemblyPath);
}

Now we add references to the callbacks to the weaved assembly. This is not the weaving itself; these are the definitions the weaved assembly needs in order to call the notifier. First, we get the callback methods using reflection:

Type notifierType = typeof (Notifier);
enterMethod = notifierType.GetMethod(
    "NotifyEnter", BindingFlags.Public | BindingFlags.Static, null, new[] {typeof (string)}, null);
exitMethod = notifierType.GetMethod(
    "NotifyExit", BindingFlags.Public | BindingFlags.Static, null, new[] {typeof (string)}, null);
jumpFromMethod = notifierType.GetMethod(
    "NotifyJumpOut", BindingFlags.Public | BindingFlags.Static, null, new[] {typeof (string)}, null);
jumpBackMethod = notifierType.GetMethod(
    "NotifyJumpBack", BindingFlags.Public | BindingFlags.Static, null, new[] {typeof (string)}, null);

Afterwards, we’ll add the references to the weaved assembly:

MethodReference enterReference = assembly.MainModule.Import(enterMethod);
MethodReference exitReference = assembly.MainModule.Import(exitMethod);
MethodReference jumpFromReference = assembly.MainModule.Import(jumpFromMethod);
MethodReference jumpBackReference = assembly.MainModule.Import(jumpBackMethod);

So our weave method looks like:

private static void WeaveMethod(AssemblyDefinition assembly, MethodDefinition method)
{
    MethodReference enterReference = assembly.MainModule.Import(enterMethod);
    MethodReference exitReference = assembly.MainModule.Import(exitMethod);
    MethodReference jumpFromReference = assembly.MainModule.Import(jumpFromMethod);
    MethodReference jumpBackReference = assembly.MainModule.Import(jumpBackMethod);

    string name = method.DeclaringType.FullName + "." + method.Name;

    // Weave the jumps first so that the notifier calls inserted by the
    // enter/exit weaves are not themselves picked up as jumps.
    WeaveJump(method, jumpFromReference, jumpBackReference, name);
    WeaveEnter(method, enterReference, name);
    WeaveExit(method, exitReference, name);
}

Now, we have everything ready to weave the enter monitoring code:

private static void WeaveEnter(MethodDefinition method, MethodReference methodReference, string name)
{
    var ilProcessor = method.Body.GetILProcessor();

    Instruction loadNameInstruction = ilProcessor.Create(OpCodes.Ldstr, name);
    Instruction callEnterInstruction = ilProcessor.Create(OpCodes.Call, methodReference);

    ilProcessor.InsertBefore(method.Body.Instructions.First(), loadNameInstruction);
    ilProcessor.InsertAfter(loadNameInstruction, callEnterInstruction);
}

The ILProcessor is a helper utility Cecil provides to make weaving simpler. The first instruction we weave loads a string containing the name of the method being entered. The second is a call instruction that consumes the loaded string as its argument. We insert both at the beginning of the method, so from now on the callback is invoked every time the method is entered.
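
On the monitoring side, consuming the notifications is just a matter of subscribing to the Notifier callbacks before running the weaved assembly. For example (console logging is just one possible consumer):

// Subscribe to the callbacks that the weaved code will invoke.
Notifier.Enter += methodName => Console.WriteLine("Enter: " + methodName);
Notifier.Exit += methodName => Console.WriteLine("Exit: " + methodName);
Notifier.JumpOut += methodName => Console.WriteLine("JumpOut: " + methodName);
Notifier.JumpBack += methodName => Console.WriteLine("JumpBack: " + methodName);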

Monitoring exit

Monitoring exit is a little more interesting. In contrast to enter, where we have a single weaving point, exit may have multiple exit points – multiple return statements, thrown exceptions, etc.
Here, for simplicity, we’ll monitor return statements only:

private static void WeaveExit(MethodDefinition method, MethodReference exitReference, string name)
{
    ILProcessor ilProcessor = method.Body.GetILProcessor();

    List<Instruction> returnInstructions = method.Body.Instructions
        .Where(instruction => instruction.OpCode == OpCodes.Ret).ToList();
    foreach (var returnInstruction in returnInstructions)
    {
        Instruction loadNameInstruction = ilProcessor.Create(OpCodes.Ldstr, name);
        Instruction callExitReference = ilProcessor.Create(OpCodes.Call, exitReference);

        ilProcessor.InsertBefore(returnInstruction, loadNameInstruction);
        ilProcessor.InsertAfter(loadNameInstruction, callExitReference);
    }
}

As can be seen, we first find all the return instructions. Afterwards, we insert a call to our callback before each of them, in a similar way to the enter callback.

Monitoring method jumps

This monitoring type will let us know when we jump to another method. If we are doing performance measuring, in an “ideal” world (where we have a single thread and no context switches) this would be the place where we stop and resume measuring the time for the executed method. Here for simplicity we’ll weave around simple call instructions, ignoring other types of call (like callvirt).

private static void WeaveJump(MethodDefinition method, MethodReference jumpFromReference, MethodReference jumpBackReference, string name)
{
    ILProcessor ilProcessor = method.Body.GetILProcessor();

    List<Instruction> callInstructions = method.Body.Instructions
        .Where(instruction => instruction.OpCode == OpCodes.Call).ToList();
    foreach (var callInstruction in callInstructions)
    {
        Instruction loadNameForFromInstruction = ilProcessor.Create(OpCodes.Ldstr, name);
        Instruction callJumpFromInstruction = ilProcessor.Create(OpCodes.Call, jumpFromReference);

        ilProcessor.InsertBefore(callInstruction, loadNameForFromInstruction);
        ilProcessor.InsertAfter(loadNameForFromInstruction, callJumpFromInstruction);

        Instruction loadNameForBackInstruction = ilProcessor.Create(OpCodes.Ldstr, name);
        Instruction callJumpBackInstruction = ilProcessor.Create(OpCodes.Call, jumpBackReference);

        ilProcessor.InsertAfter(callInstruction, loadNameForBackInstruction);
        ilProcessor.InsertAfter(loadNameForBackInstruction, callJumpBackInstruction);
    }
}

Here, we find all the call instructions and insert a call to the JumpOut callback before each of them and a call to JumpBack after it. This way we get a callback just before leaving the method and another one right after returning to it.

Example

public void MethodA()
{
    MethodB();
}

private void MethodB()
{
}

If we execute MethodA, we’ll receive these calls:

  1. Enter MethodA
  2. JumpFrom MethodA
  3. Enter MethodB
  4. Exit MethodB
  5. JumpBack MethodA
  6. Exit MethodA
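
Conceptually, after weaving, MethodA behaves as if it had been written like this (a sketch of the equivalent C#; the namespace and type names are made up, and the real result is of course woven IL, not C#):

public void MethodA()
{
    Notifier.NotifyEnter("MyNamespace.MyType.MethodA");    // woven at the method entry

    Notifier.NotifyJumpOut("MyNamespace.MyType.MethodA");  // woven before the call instruction
    MethodB();
    Notifier.NotifyJumpBack("MyNamespace.MyType.MethodA"); // woven after the call instruction

    Notifier.NotifyExit("MyNamespace.MyType.MethodA");     // woven before the ret
}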

Summary

Mono Cecil can be used for low-level AOP where the aspects’ targets are IL instructions. There are already some great AOP tools out there, like PostSharp, but it is nice to see how simply a solution can be implemented using Cecil.

The synchronized keyword

What it does

A little-known feature of .NET is the synchronized keyword. It can be applied to methods and it ensures the following:

  • Instance method – can be executed by only one thread at a time per instance (different instances are not synchronized). Equivalent to lock(this).
  • Static method – can be executed by only one thread at a time. Equivalent to lock(typeof(TypeName)).

Usage in C#

If you look at the C# specification you’ll see that there’s no mention of this keyword. The reason is that it is an IL keyword, not a C# one. In order to instruct the compiler to mark a method as synchronized, we can use the MethodImplAttribute with MethodImplOptions.Synchronized. For example:

[MethodImpl(MethodImplOptions.Synchronized)]
public void MethodWithSyncAttribute()
{
}
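
The same attribute works on static methods; per the equivalence above, a static synchronized method behaves as if its body were locked on the type (a minimal sketch):

[MethodImpl(MethodImplOptions.Synchronized)]
public static void StaticMethodWithSyncAttribute()
{
    // Behaves as if the body were wrapped in lock (typeof(DeclaringType)),
    // where DeclaringType is a placeholder for the containing type.
}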

The IL result

Using synchronized keyword

In IL, MethodWithSyncAttribute() looks like this:

.method public hidebysig instance void  MethodWithSyncAttribute() cil managed synchronized
{
  // Code size       2 (0x2)
  .maxstack  8
  IL_0000:  nop
  IL_0001:  ret
}

It is very clear that this method has no explicit locking instructions, such as calls to Monitor.Enter. Yet it still behaves the same as if we had used a lock block around the method body.

Using lock block

The previous method is equivalent to the following one:

public void MethodWithExplicitLock()
{
    lock (this)
    {
    }
}

This method translates into:

.method public hidebysig instance void  MethodWithExplicitLock() cil managed
{
  // Code size       36 (0x24)
  .maxstack  2
  .locals init ([0] bool 's__LockTaken0',
           [1] class Sync.Logger CS$2$0000,
           [2] bool CS$4$0001)
  IL_0000:  nop
  IL_0001:  ldc.i4.0
  IL_0002:  stloc.0
  .try
  {
    IL_0003:  ldarg.0
    IL_0004:  dup
    IL_0005:  stloc.1
    IL_0006:  ldloca.s   's__LockTaken0'
    IL_0008:  call       void [mscorlib]System.Threading.Monitor::Enter(object,
                                                                        bool&)
    IL_000d:  nop
    IL_000e:  nop
    IL_000f:  nop
    IL_0010:  leave.s    IL_0022
  }  // end .try
  finally
  {
    IL_0012:  ldloc.0
    IL_0013:  ldc.i4.0
    IL_0014:  ceq
    IL_0016:  stloc.2
    IL_0017:  ldloc.2
    IL_0018:  brtrue.s   IL_0021
    IL_001a:  ldloc.1
    IL_001b:  call       void [mscorlib]System.Threading.Monitor::Exit(object)
    IL_0020:  nop
    IL_0021:  endfinally
  }  // end handler
  IL_0022:  nop
  IL_0023:  ret
}

As can be seen, the lock block translates naturally into a try/finally block with calls to Monitor.Enter and Monitor.Exit.

Summary

The synchronized keyword is an IL keyword that synchronizes calls to the marked method. It causes the method to behave as if the whole body were surrounded with a lock block. It is interesting to note that, when the keyword is used, the locking instructions are generated only at JIT time.
The bottom line is that for C# developers it is mostly another piece of syntactic sugar for defining a trivial lock.

Visual Studio 2010 code complexity extension

An alpha version of a code complexity add-in for Visual Studio 2010 is available. The extension can be found at the project page on CodePlex.
The extension shows the complexity of each method in the IDE, next to the method, and measures the method’s “health”.
For example, a view of healthy methods (low complexity): (screenshot: GoodMethods)
And an example of a method that is too complex: (screenshot: BadMethod)
Currently, the complexity metric shown is simple complexity (defined by Code Complete). This metric counts the number of possible paths in the method. A healthy method is one with low complexity; a method with 10 paths is not so good and is worth simplifying.
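
As a rough illustration of the counting (a hypothetical method, using the “start at 1 and add 1 for each decision point” rule of thumb from Code Complete):

// Hypothetical example: the straight-through path counts as 1,
// and each decision point (here, two if statements) adds 1,
// giving a complexity of 3.
public decimal CalculateDiscount(decimal total, bool isReturningCustomer)
{
    decimal discount = 0;

    if (total > 100)             // +1
    {
        discount = 10;
    }

    if (isReturningCustomer)     // +1
    {
        discount += 5;
    }

    return discount;
}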

Keep it green!

Intercepting unmanaged call in managed code

This post will demonstrate how to intercept unmanaged calls in the executing process. There are many reasons for intercepting unmanaged calls, among them monitoring, debugging and various other hacks.
This post will demonstrate how to intercept calls to CreateFile from the Kernel32 library. The CreateFile function is called for opening an existing file or creating a new one. For example, a call to File.OpenText will initiate a call to CreateFile.

Hooking CreateFile in unmanaged code

We’ll define a function pointer type for CreateFile:

typedef HANDLE (WINAPI *FileCreateFunction)(
    LPCWSTR lpFileName,
    DWORD dwDesiredAccess,
    DWORD dwShareMode,
    LPSECURITY_ATTRIBUTES lpSecurityAttributes,
    DWORD dwCreationDisposition,
    DWORD dwFlagsAndAttributes,
    HANDLE hTemplateFile);

Afterwards, we’ll store a pointer to the original CreateFile and define the hook to which calls will be redirected:

FileCreateFunction OriginalCreateFile = (FileCreateFunction)GetProcAddress(GetModuleHandle(L"kernel32"), "CreateFileW");

HANDLE WINAPI CreateFileHook(
    LPCWSTR lpFileName,
    DWORD dwDesiredAccess,
    DWORD dwShareMode,
    LPSECURITY_ATTRIBUTES lpSecurityAttributes,
    DWORD dwCreationDisposition,
    DWORD dwFlagsAndAttributes,
    HANDLE hTemplateFile)
{
    bool hasListener = createFileCallback != NULL;
    if (hasListener)
    {
        createFileCallback(lpFileName);
    }

    return OriginalCreateFile(
        lpFileName,
        dwDesiredAccess,
        dwShareMode,
        lpSecurityAttributes,
        dwCreationDisposition,
        dwFlagsAndAttributes,
        hTemplateFile);
}

For now, ignore the code involving the callback; we’ll use it later for notifying the managed code when a file is created.

As we can see, the hook function has the same signature as the original one. This is necessary, since the callers are not changed – only the call target is. The hook observes the function arguments and forwards the call to the original function.

Now, we’ll redirect the calls from CreateFile to the new hook using the mhook library:

BOOL APIENTRY DllMain(HMODULE hModule,
                      DWORD ul_reason_for_call,
                      LPVOID lpReserved)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        Mhook_SetHook((PVOID*)&OriginalCreateFile, CreateFileHook);
        break;
    case DLL_PROCESS_DETACH:
        createFileCallback = NULL;
        Mhook_Unhook((PVOID*)&OriginalCreateFile);
        break;
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
        break;
    }

    return TRUE;
}

This code redirects the CreateFile function (stored in OriginalCreateFile) to our newly defined hook when the library is loaded into the process.

Using the library in managed code

In order to load the unmanaged library we’ll use P/Invoke:

[DllImport("kernel32", SetLastError = true)]
static extern IntPtr LoadLibrary(string lpFileName);

[
DllImport("kernel32.dll", SetLastError = true)]
static extern bool FreeLibrary(IntPtr hModule);

Loading the library:

static void Main(string[] args)
{
    IntPtr library = LoadLibrary(@"..\..\..\Debug\InterceptionLibrary.dll");
    FreeLibrary(library);
}

Right now, all the calls will behave the same, but all will be routed through our new hook (a user of the software will experience no difference).

Preparing a callback in unmanaged code

We would like to know which file is being created, so we’ll define a matching callback function pointer type:

typedef void (*NotifyCallbackFunction)(const TCHAR* fileName);

The managed code will register a callback using the method:

extern "C" __declspec(dllexport) void RegisterFileCreateListener(
NotifyCallbackFunction callback )
{
createFileCallback = callback;
}

The method is exported so that it can be called from managed code, using:

extern "C" __declspec(dllexport)

Registering the callback through managed code

First, in order to call the register method, we’ll need to define a delegate type through which the register method will be invoked (ignore the callback delegate for now):

[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
private delegate void RegisterFileListenerDel(OnFileCreatedDel callback);

In order to load the register method into the managed assembly, we’ll need to use:

[DllImport("kernel32", SetLastError = true)]
static extern IntPtr GetProcAddress(IntPtr hModule, string procName);

Loading the method is done by:

IntPtr pAddressOfFunctionToCall = GetProcAddress(library, "RegisterFileCreateListener");
var registerListener = (RegisterFileListenerDel)
    Marshal.GetDelegateForFunctionPointer(
        pAddressOfFunctionToCall,
        typeof (RegisterFileListenerDel));

Now, we’ll have to define a callback delegate and method:

[UnmanagedFunctionPointer(CallingConvention.Cdecl, CharSet = CharSet.Unicode)]
private delegate void OnFileCreatedDel(string fileName);

private static void OnFileCreated(string fileName)
{
    Console.WriteLine("File opened: {0}", fileName);
}

Now, all that’s needed is to register the callback using the method loaded in the previous step:

registerListener(OnFileCreated);

That’s all; as long as the library is loaded, our callback will be notified on every CreateFile call.

Example

This is our final version of the main method in the managed code:

static void Main(string[] args)
{
    IntPtr library = LoadLibrary(@"..\..\..\Debug\InterceptionLibrary.dll");

    IntPtr pAddressOfFunctionToCall = GetProcAddress(library, "RegisterFileCreateListener");
    var registerListener = (RegisterFileListenerDel)
        Marshal.GetDelegateForFunctionPointer(
            pAddressOfFunctionToCall,
            typeof (RegisterFileListenerDel));
    registerListener(OnFileCreated);

    File.OpenText(@"C:\Development\wow.txt").Dispose();
    File.CreateText(@"C:\Development\wow2.txt").Dispose();

    FreeLibrary(library);
}

Running this code produces a notification for each of the two files (screenshot: console output).

Summary

In order to be notified about a native call in managed code:

  1. Create an unmanaged library
  2. Hook the requested method using mhook
  3. Create and expose a callback registration method
  4. In managed code load the library
  5. Register a callback

You can download the source code example here.

Alternatives

There are several alternatives. For example, another way to intercept unmanaged calls is Detours, a library developed by Microsoft. Another possible solution is EasyHook, a library that allows intercepting unmanaged calls directly from managed code.

Agile means a release is ready today

The trigger for this post is an interesting experience I had during our weekly demo. I led the demo, and it left the client worried. The demo had a positive side – only a few crashes, while our system had stability problems during the previous phases. Being able to run with almost no exceptions is great progress. So far, so good. The downside was that we had to fix up the environment at several points during the demo, for example:

  • Run a script to fix the registry
  • Watch the task manager to ensure processing is done before sending a new request
  • Delete local DB between sessions

This repeated itself during the last few demos; it annoyed us – the developers – and it definitely annoyed the client. The result was a request from the client to change the architecture. This is rarely a legitimate request: the client asks for features, for “demoable” value; architecture and design should not be the client’s concern. It made us revisit some basics about demos: how did a demo showing such progress make the client so worried?

The purpose of demo

A demo has two possible purposes. They can exist side by side, but it’s crucial to know which one each demo serves:

Feedback for new concepts

This is a clear characteristic of demos of new products. The client has a hard time telling exactly which features solve the problem the product is aimed at. In this phase, the developers are in charge of demonstrating how the initial ideas “feel” in real life – a POC the client can interact with. The POC phase is an important part of the product’s lifetime. In this phase the features must be perceptible, but not perfect. The ideal result of such a demo is a client decision on whether the feature is good or not – should it be thrown away, or should it go into the product?

Show the value the product provides

Hopefully, this is the characteristic of most demos. These demos give the client a close view of the development progress – the client knows what value the product provides. At this point decisions can be made – use the new value (for example, sell the new version) and choose the next important value the product should provide.
Choose wisely – which purpose does this demo serve? Defining, before the demo, what it is supposed to achieve is crucial.

When should we choose?

The answer is easy – before planning the iteration. The reasons are clear: for a POC, as little as possible should be done – the feature may never become part of the product, and there is no need to put effort there at this point. For a progress demo, a feature will be planned differently – coding will take longer (this is not throwaway code: refactoring, unit testing…) and satellite tasks will be required (installer, licensing, etc.). In this iteration the feature will require more tickets, so it must be planned accordingly.

How to choose?

If there’s confidence that the feature is what the client wants in the product, plan the iteration so that the feature makes it into the product and is “productized”. If there’s doubt, plan so that the feature is demoable with as little effort as possible.

What is done

“What is done” matters most in a demo that shows the current value of the product. So what is “done”? The answer is very simple: the product can be released right after the demo. For these demos there’s a simple but important guideline – perform the demo on a neutral machine, from an installed version. Meaning: running the product from the development IDE on a development machine is not good enough.
If the demo’s purpose is to show what the client can use, show the client what can be used. The client can use only what is already “productized”, so if the feature is not part of the installer, it doesn’t help. If the feature is not linked to the license, it doesn’t help. If the feature crashes and constantly requires workarounds, it doesn’t help.

Bottom line

All developers know how to deliver a successful demo; the problem is identifying the next demo’s purpose. Before planning the next iteration, be clear about the iteration’s “What is done”. One thing that can cause a failure is missing the moment when the product passes the POC phase and becomes a real product (this is where we failed this week). A good indicator for moving between the phases comes from the question “when will users use it?”.
After identifying the demo’s purpose, plan the iteration accordingly and make sure the demo fits its purpose.

Automatic generation of View-Model – test drive

In the previous post I presented an approach for automatic View-Model generation. When trying to use it in a real-life scenario as simple as a registration form, many missing features were revealed.

Here’s a list of issues noticed with the simple implementation:

  • There’s no way to access the model from the abstract View-Model
  • There’s no way to pass arguments to the constructor
  • No built-in way to define the order of validations
  • No way to raise the PropertyChanged event for a property other than the one being set
  • No way to declare additional errors on properties which are not mapped to the model

An actual attempt to use the framework in a registration form

This is a simple registration form – all fields are mandatory, the email must match a regular expression and the password verification must match the password. The model has three properties – Email, Name and Password – which are mapped to the View-Model. Let’s see what the View-Model looks like using the new framework:

public abstract class UserRegistrationViewModel :
    INotifyPropertyChanged, IDataErrorInfo, IUserViewModel
{
    private static readonly ViewModelGenerator viewModelsGenerator = new ViewModelGenerator();

    public static UserRegistrationViewModel CreateUserRegistrationViewModel(User user)
    {
        return viewModelsGenerator.Generate<UserRegistrationViewModel>(user);
    }

    protected UserRegistrationViewModel() { }

    private const string MANDATORY_FIELD_ERROR_MESSAGE = "This field is mandatory";
    private const string PASSWORD_VERIFICATION_PROPERTY_NAME = "PasswordVerification";

    [Model]
    private readonly User user;
    private ICommand save;

    public event PropertyChangedEventHandler PropertyChanged = (sender, args) => { };

    [Validation(typeof (MandatoryValidator))]
    public abstract string Name { get; set; }

    [Validation(typeof(EmailValidator))]
    [Validation(typeof (MandatoryValidator))]
    public abstract string Email { get; set; }

    [Validation(typeof(MandatoryValidator))]
    [RelatedProperty(PASSWORD_VERIFICATION_PROPERTY_NAME)]
    public abstract string Password { get; set; }

    private string passwordVerification;
    public string PasswordVerification
    {
        get { return passwordVerification; }
        set
        {
            passwordVerification = value;
        }
    }

    private string GetPasswordVerificationValidation()
    {
        if (string.IsNullOrEmpty(passwordVerification))
        {
            return MANDATORY_FIELD_ERROR_MESSAGE;
        }

        if (user.Password != passwordVerification)
        {
            return "Password and Verification must be same";
        }

        return null;
    }

    public virtual string this[string columnName]
    {
        get
        {
            if (columnName == PASSWORD_VERIFICATION_PROPERTY_NAME)
            {
                return GetPasswordVerificationValidation();
            }

            return null;
        }
    }

    public string Error { get { return null; } }

    public ICommand Save
    {
        get
        {
            if (save == null)
            {
                save = new SaveCommand(user);
            }

            return save;
        }
    }
}

Let’s see how this code demonstrates the solutions to some of the missing features.

Accessing the model – in order to get an instance of the model, a new attribute is introduced: [Model]. Decorating a field with it makes sure that the field is initialized with the model instance.

Raising the PropertyChanged event for a property other than the one being set – to deal with this, another new attribute is introduced: [RelatedProperty]. When the property is set, the PropertyChanged event is raised for all the relevant properties – the one being set and its related properties.

Defining additional errors for non-mapped properties – the generated View-Model tries to resolve errors for mapped properties; if it finds none, it forwards the call to the base View-Model (the abstract class) and checks for additional errors there.
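
The attribute declarations themselves are not shown in the post; minimal sketches of what they might look like are below (the actual definitions in the framework source on CodePlex may differ):

using System;

// Sketch: marks the field that should receive the model instance.
[AttributeUsage(AttributeTargets.Field)]
public class ModelAttribute : Attribute { }

// Sketch: names an additional property to raise PropertyChanged for.
[AttributeUsage(AttributeTargets.Property, AllowMultiple = true)]
public class RelatedPropertyAttribute : Attribute
{
    public RelatedPropertyAttribute(string relatedPropertyName)
    {
        RelatedPropertyName = relatedPropertyName;
    }

    public string RelatedPropertyName { get; private set; }
}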

Download

The framework source can be found on CodePlex. It is still very partial and contains many bugs, but it already works in the basic scenarios 🙂

Automatic generation of View-Model – first attempt

I recently found myself writing the same code again and again. As you can guess, I’m writing a WPF application based on the MVVM architecture. To avoid the repetition, I am trying to generate the View-Model automatically using Castle DynamicProxy.

Simplest scenario – forwarding calls to model

This is probably the simplest case we encounter. It is so simple that we are tempted to bind the view directly to the model.

Naive implementation

public class Model
{
    public object Prop { get; set; }
}

public class ViewModel
{
    private readonly Model model;

    public ViewModel(Model model)
    {
        this.model = model;
    }

    public object Prop
    {
        get { return model.Prop; }
        set { model.Prop = value; }
    }
}

The generated alternative

So, what we’d like to achieve is skipping the forwarding implementation. The View-Model can look like:

public abstract class ViewModel
{
    public abstract object Prop { get; set; }
}

So far, it’s very simple 🙂 Let’s see how it’s being used with the Model:

[Test]
public void Generate_SetPropertyValue_ModelPropertyUpdated()
{
    var viewModelGenerator = new ViewModelGenerator();
    var model = new Model();

    ViewModel generatedViewModel = viewModelGenerator.Generate<ViewModel>(model);

    object valueToAssign = new object();
    generatedViewModel.Prop = valueToAssign;

    Assert.That(model.Prop, Is.SameAs(valueToAssign));
}

This example shows that we’ve created a View-Model based on a Model instance, simulated a call to a property setter, and the new value was automatically reflected in the Model. That was simple, wasn’t it?

Next scenario – Implementing INotifyPropertyChanged

This is a very common scenario, which has a very common implementation. It’s so common I’ll skip the naive implementation and jump to the generated version directly:

The generated alternative

public abstract class ViewModel : INotifyPropertyChanged
{
    public abstract object Prop { get; set; }
    public event PropertyChangedEventHandler PropertyChanged;
}

[Test]
public void Generate_SetPropertyValue_PropertyChangedRaised()
{
    var viewModelGenerator = new ViewModelGenerator();
    var model = new Model();
    ViewModel generatedViewModel = viewModelGenerator.Generate<ViewModel>(model);

    bool wasRaised = false;
    generatedViewModel.PropertyChanged += (sender, args) => wasRaised = true;

    generatedViewModel.Prop = new object();

    Assert.That(wasRaised, Is.True);
}

In this case we’ve generated a View-Model that implements INotifyPropertyChanged. We had to do nothing but declare that the View-Model implements INotifyPropertyChanged. Whenever a property value changes, the event is raised with the property name.
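
The post doesn’t show the generator’s internals, but the idea maps naturally to a Castle DynamicProxy class proxy with an interceptor that implements the abstract properties by forwarding them to the model. A minimal sketch of that idea follows (the real ViewModelGenerator also raises PropertyChanged and runs validations, which is omitted here):

using System.Reflection;
using Castle.DynamicProxy;

// Sketch only: forwards abstract property getters/setters to the model.
public class ForwardingInterceptor : IInterceptor
{
    private readonly object model;

    public ForwardingInterceptor(object model)
    {
        this.model = model;
    }

    public void Intercept(IInvocation invocation)
    {
        string methodName = invocation.Method.Name;

        if (methodName.StartsWith("set_") || methodName.StartsWith("get_"))
        {
            PropertyInfo modelProperty = model.GetType().GetProperty(methodName.Substring(4));
            if (modelProperty != null)
            {
                if (methodName.StartsWith("set_"))
                {
                    modelProperty.SetValue(model, invocation.Arguments[0], null);
                    // The real generator would also raise PropertyChanged here.
                }
                else
                {
                    invocation.ReturnValue = modelProperty.GetValue(model, null);
                }
                return;
            }
        }

        // Anything else (non-abstract virtual members) runs as usual.
        invocation.Proceed();
    }
}

public class SimpleViewModelGenerator
{
    private readonly ProxyGenerator proxyGenerator = new ProxyGenerator();

    public TViewModel Generate<TViewModel>(object model) where TViewModel : class
    {
        // Creates a subclass of the abstract View-Model whose overridden
        // members are routed through the interceptor.
        return proxyGenerator.CreateClassProxy<TViewModel>(new ForwardingInterceptor(model));
    }
}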

Last scenario – Implementing IDataErrorInfo

This case is trivial too, so I’ll again skip the naive solution.

The generated alternative

First we’ll take a look at the format of the abstract View-Model and the validation declaration:

public abstract class ViewModel : IDataErrorInfo
{
    public abstract string this[string columnName] { get; }
    public abstract string Error { get; }

    [Validation(typeof(DummyValidator))]
    public abstract object Prop { get; set; }
}

public class DummyValidator : IValidator
{
    public string Validate(object value)
    {
        return "error";
    }
}

The View-Model now implements the IDataErrorInfo interface. The interface requires an indexer that maps a property name to an error message; the automatic View-Model generation takes care of that. In order to declare the required validations we use the Validation attribute. The attribute defines which validator to run on the property, and the validation result is mapped to the property name.
For example:

[Test]
public void Generate_SetInvalidPropertyValue_PropertyErrorIsCorrect()
{
    var viewModelGenerator = new ViewModelGenerator();
    var model = new Model();

    ViewModel generatedViewModel = viewModelGenerator.Generate<ViewModel>(model);

    generatedViewModel.Prop = new object();

    Assert.That(generatedViewModel["Prop"], Is.EqualTo("error"));
}

Conclusion

MVVM code is often written for very common scenarios. Such code is usually a template which we can generate dynamically; this drastically reduces code duplication and allows a uniform implementation.
The examples here were simplified, but the concept should be clear. There’s much more work to do on the framework in order to cover more real-life scenarios, soon to come 🙂 I’ll upload the framework source code to CodePlex before the next post.

Balancing working hours toward better productivity

I’ll start by saying – I don’t know what the best balance is. So what can I tell?

The obvious

  1. The ratio between our working hours and productivity is not linear. We all know that, nothing new here. If we work 12 hours a day instead of 10 we won’t produce 20% more features.
  2. When we need to get more work done, we work more hours.

The almost obvious

What does our productivity look like during the day?
(chart: productivity over the course of the day)
Everybody agrees with this graph until the productivity gets close to zero. But is it possible to have negative productivity?

Negative productivity

This is the counter-intuitive part; we have encountered it so many times, yet it’s still hard to accept. Starting at some point during the day, we do more harm than good – we introduce more bugs, produce fragile design and write less readable code. Even though the feature might work just fine, we did more harm than good – next week, when we have to extend the feature, we’ll have to understand code of an inferior standard. We’ll fight the design and, most likely, we’ll fight the bugs we missed before.
From my personal experience, during the last year I found myself fighting with features for very long hours. Most of the time, when I gave up and left the office, the next morning was extremely productive. I could throw away all the mess I had made the previous evening and write nice code within an hour.

Conclusion

The conclusion is very straightforward – when you’re getting to the zone of overwork, Go Home! You’re wasting your time, you’re wasting your boss’s money and you’re planting seeds in code that’ll annoy your teammates in the near future. We all know this is counter intuitive, but working shorter hours makes us more productive.

More thoughts

We have concentrated on the productivity of a single day. It can be even more interesting to analyze a whole week. Is it possible that working one day less a week will improve our productivity? If we find this to be true, will we act on it and shorten our work week? If we shorten our work week, should we get paid less or more? On one hand we’re more productive, so we should be paid more; on the other hand, we work less, so we should be paid less. This leads to an interesting question – are we paid for our time or for our results?
There’s a lot to think about here, but we must find first the optimal balance of working hours and days.

Worker thread using parallel tasks

The worker thread is a well-known pattern – there’s work to do, it needs to be done asynchronously, and we want to collect all the results when it’s done. What we’re going to see is an implementation that serves as an alternative to the common ones, taking advantage of the new parallel tasks library (the TPL).
To formalize the requirements:

  • The worker queues items to process
  • The items are processed asynchronously
  • Only one item can be processed at a time
  • The items are processed in the order they were queued
  • The worker will store the processed results in the order they were processed

The worker class

public class Worker<TItem, TResult>
{
    private readonly IItemsProcessor<TItem, TResult> itemsProcessor;
    private Task lastTask;

    public IList<TResult> ProcessedItems { get; private set; }

    public Worker(IItemsProcessor<TItem, TResult> itemsProcessor)
    {
        this.itemsProcessor = itemsProcessor;
        ProcessedItems = new List<TResult>();
        InitializeNullTask();
    }

    private void InitializeNullTask()
    {
        lastTask = new Task<TResult>(() => default(TResult));
        lastTask.Start();
    }

    public void ProcessItem(TItem item)
    {
        var nextTask = lastTask
            .ContinueWith(task =>
            {
                var processedItem = itemsProcessor.ProcessItem(item);
                ProcessedItems.Add(processedItem);
            });
        lastTask = nextTask;
    }

    public void WaitForPendingItems()
    {
        using (var sync = new ManualResetEvent(false))
        {
            lastTask.ContinueWith(task => sync.Set());
            sync.WaitOne();
        }
    }
}

The worker creates a task for each item that needs to be processed. Each task is executed on the thread pool; the point where we ensure the tasks run in the correct order is the ContinueWith call, which chains each task after the previous one.

InitializeNullTask creates a task that does nothing but serve as the head of the task queue. It lets ProcessItem avoid checking whether this is the first item to process: the first task is started with a call to Start, while all the others are started with ContinueWith.

WaitForPendingItems also enqueues a task. This time the task does nothing but signal that it has been reached, which means all previously queued items have already been processed. When that task runs, it releases the waiting thread.
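
The IItemsProcessor interface itself isn’t shown; given how the worker and the downloader below use it, it presumably looks like this:

// Assumed shape of the processing contract used by Worker<TItem, TResult>.
public interface IItemsProcessor<TItem, TResult>
{
    TResult ProcessItem(TItem item);
}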

Usage example

In this example we’ll download a list of web pages and print their sizes. The downloader implements the IItemsProcessor interface the worker expects.

public class WebUrlsDownloader : IItemsProcessor<string, byte[]>
{
    public byte[] ProcessItem(string url)
    {
        using (var webClient = new WebClient())
        {
            return webClient.DownloadData(url);
        }
    }
}

And the actual usage:

public void DownloadFiles()
{
    var worker = new Worker<string, byte[]>(new WebUrlsDownloader());
    worker.ProcessItem(@"http://msdn.microsoft.com/en-us/library/dd537608.aspx");
    worker.ProcessItem(@"http://msdn.microsoft.com/en-us/library/dd537609.aspx");
    worker.ProcessItem(@"http://msdn.microsoft.com/en-us/library/dd997405.aspx");
    worker.WaitForPendingItems();
    Console.WriteLine("Finished downloading files:");
    foreach (var processedItem in worker.ProcessedItems)
    {
        Console.WriteLine("Downloaded file with size: {0}", processedItem.Length);
    }
}

Unit testing F#

F# is a cool and exciting language. It’s much more than an academic language; it can solve many real-life (coding) problems in its functional manner. As with any production code, it must be covered with unit tests. I’ll show a simple example of a unit test written in C# with Moq against F# code.
The code under test is a simple lottery calculator: it takes a list of participants and says how much each participant won – 5 times the number of hits. Simple, isn’t it?
Let’s take a look at the code under test; it could probably be written better, but ignore that for now 🙂

type LotteryCalculator(winningNumbers: System.Collections.Generic.IList<int>) =
    let calculatePrize(participatingNumbers) =
        let numOfHits =
            participatingNumbers
            |> Seq.filter (fun participatingNumber -> winningNumbers.Contains participatingNumber)
            |> Set.ofSeq
            |> Set.count
        numOfHits * 5

    member this.CalculatePrizes(participants: System.Collections.Generic.IEnumerable<IParticipant>) =
        participants
        |> Seq.map (fun participant -> (participant, calculatePrize(participant.GetTicket())))
So we have an API method, CalculatePrizes(), which takes a collection of participants, looks at their tickets and returns a sequence of (participant, prize) tuples. The signature is roughly: CalculatePrizes: System.Collections.Generic.IEnumerable<IParticipant> -> seq<IParticipant * int>
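
The IParticipant interface isn’t shown in the post; judging by the F# code and the mock setup below, it presumably looks like this:

using System.Collections.Generic;

// Assumed participant contract: the calculator only needs the ticket numbers.
public interface IParticipant
{
    IEnumerable<int> GetTicket();
}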

After dealing with some issues, which I’ll write about in a future post, I wrote this test:

[Test]
public void CalculatePrizes_RecievesTwoParticipant_MatchCorrectPrizes()
{
    var firstParticipantMock = new Moq.Mock<IParticipant>();
    IParticipant firstParticipant = firstParticipantMock.Object;
    firstParticipantMock.Setup(fake => fake.GetTicket()).Returns(new List<int> { 1, 4, 5 });

    var secondParticipantMock = new Moq.Mock<IParticipant>();
    IParticipant secondParticipant = secondParticipantMock.Object;
    secondParticipantMock.Setup(fake => fake.GetTicket()).Returns(new List<int> { 1, 2, 4 });

    var lotteryCalculator = new LotteryCalculator(new List<int> { 1, 2, 3 });
    var participantPrizes = lotteryCalculator.CalculatePrizes(new[] { firstParticipant, secondParticipant });

    var prizes = participantPrizes.ToDictionary(tuple => tuple.Item1, tuple => tuple.Item2);

    Assert.That(prizes[firstParticipant], Is.EqualTo(5));
    Assert.That(prizes[secondParticipant], Is.EqualTo(10));
}

It’s nice that we can keep our unit tests written in C#; it’s also important in cases where the consumers of the code (most likely us, in another module) will use it from C# as well.