AOP without weaving

In this post I’ll present a use of a runtime method replacer in an AOP context. The idea is to change the behavior of an application without changing the IL of its methods. As an example, I’ll show how to log an exception thrown from a method.

This post is based on the work of Ziad Elmalki who posted the original method replacer. It is also based on the updated code for the method replacer by Chung Sung which is compatible with the new .NET framework versions. Lastly thanks to Roy Osherove who mentioned those recently.

Replacing methods

The method replacer relies on the following concept – after a method is jitted, the runtime holds a pointer to its jitted code. You can see how to extract that address in the original post. After extracting the addresses, we can simply replace one method with another:

public static void ReplaceMethod(IntPtr srcAdr, IntPtr destAdr)
{
    unsafe
    {
        if (IntPtr.Size == 8)
        {
            ulong* d = (ulong*)destAdr.ToPointer();
            *d = (ulong)srcAdr.ToInt64();
        }
        else
        {
            uint* d = (uint*)destAdr.ToPointer();
            *d = (uint)srcAdr.ToInt32();
        }
    }
}

As a simple example, if we have these two methods:

public class MyClass
{
    public static void Foo()
    {
        Console.WriteLine("In Foo");
        throw new Exception("I am done here!");
    }

    public static void Bar()
    {
        Console.WriteLine("In Bar");
    }
}

Then executing Foo in the following context:

MethodInfo barMethod = typeof (MyClass).GetMethod("Bar");
MethodInfo fooMethod = typeof (MyClass).GetMethod("Foo");
MethodUtil.ReplaceMethod(barMethod, fooMethod);
MyClass.Foo();

will actually produce the following result:

[Image: console output showing "In Bar"]

Which is… Cool!

Catching exceptions in Foo

What I’d like to present is a simplified example of how to catch an exception thrown from business code without modifying it – functionality similar to PostSharp’s exception handling. We’re going to hijack the original calls to Foo and redirect them to a new wrapper method, which will call the original one inside a try/catch block.

Storing the original Foo

Since we’re about to intercept calls to Foo based on its address, we need to store a “way” to call the original method later. The “way” is simple: we’ll extract the method address before starting the interception and create a delegate to it using marshaling. The delegate will be stored in a field:

MethodInfo fooMethod = typeof (MyClass).GetMethod("Foo");
IntPtr fooAddress = MethodUtil.GetMethodAddress(fooMethod);
OriginalFoo = Marshal.GetDelegateForFunctionPointer(fooAddress, typeof (Action));

Creating the wrapper

For the purpose of this example we could prepare a stub in the project itself. But, in order to show that a more general solution is possible, we will generate the wrapper at runtime.

Since the wrapper is going to receive the calls instead of Foo, it must have the same signature. In addition, the wrapper will retrieve the original Foo delegate from a static field named OriginalFoo and invoke it inside a try/catch block.
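
For clarity, here is a rough hand-written C# equivalent of the wrapper we are about to generate. This is only a sketch, assuming OriginalFoo is declared as a Delegate on the FooProtector class used throughout this example:

public class FooProtector
{
    // Holds the delegate to the original Foo (assumed to be of type Delegate)
    public static Delegate OriginalFoo;

    public static void FooWrapper()
    {
        try
        {
            // Call the original Foo through the stored delegate
            OriginalFoo.DynamicInvoke(null);
        }
        catch (Exception e)
        {
            // DynamicInvoke wraps thrown exceptions in a TargetInvocationException,
            // so the original exception is found in InnerException
            Console.WriteLine(e.InnerException.Message);
        }
    }
}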

We will generate a dynamic method that replaces the original method:

// The field holding the delegate to the original Foo
FieldInfo originalFooDelegateField = typeof (FooProtector).GetField("OriginalFoo");

MethodInfo invokeDelegateMethod = OriginalFoo.GetType().GetMethod("DynamicInvoke");
MethodInfo innerExceptionGetter = typeof (Exception).GetProperty("InnerException").GetGetMethod();
MethodInfo exceptionMessageGetter = typeof (Exception).GetProperty("Message").GetGetMethod();

var dynamicMethod = new DynamicMethod("FooProtector", typeof (void), new Type[0]);
ILGenerator ilGenerator = dynamicMethod.GetILGenerator();

Label beginExceptionBlock = ilGenerator.BeginExceptionBlock();

// Preparing the call to the original Foo -
// Load the original Foo delegate
ilGenerator.Emit(OpCodes.Ldsfld, originalFooDelegateField);
// Load "no arguments" for the delegate invocation
ilGenerator.Emit(OpCodes.Ldnull);
// Invoke the delegate and call the original Foo
ilGenerator.Emit(OpCodes.Callvirt, invokeDelegateMethod);
// Discard the return value of DynamicInvoke
ilGenerator.Emit(OpCodes.Pop);

ilGenerator.Emit(OpCodes.Leave, beginExceptionBlock);
ilGenerator.BeginCatchBlock(typeof (Exception));

// Extract the original exception and its message
ilGenerator.Emit(OpCodes.Callvirt, innerExceptionGetter);
ilGenerator.Emit(OpCodes.Callvirt, exceptionMessageGetter);

// Print the exception message
MethodInfo info = typeof (Console).GetMethod("WriteLine", new[] {typeof (string)});
ilGenerator.Emit(OpCodes.Call, info);

ilGenerator.Emit(OpCodes.Leave, beginExceptionBlock);
ilGenerator.EndExceptionBlock();
ilGenerator.Emit(OpCodes.Ret);

// Trigger method compilation
dynamicMethod.CreateDelegate(typeof (Action));

This wrapper calls the original method through the stored delegate. If an exception is thrown, DynamicInvoke wraps it in a TargetInvocationException, so the wrapper extracts the original exception from InnerException and prints its message to the console.
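
The last step is to redirect calls from Foo to the compiled wrapper. Below is a sketch of how FooProtector.ProtectFoo might tie the pieces together; BuildFooWrapper is a hypothetical helper wrapping the IL generation shown above, and whether ReplaceMethod accepts a DynamicMethod directly depends on the MethodUtil implementation you use:

public static void ProtectFoo()
{
    // 1. Store a delegate to the original Foo (as shown earlier)
    MethodInfo fooMethod = typeof (MyClass).GetMethod("Foo");
    IntPtr fooAddress = MethodUtil.GetMethodAddress(fooMethod);
    OriginalFoo = Marshal.GetDelegateForFunctionPointer(fooAddress, typeof (Action));

    // 2. Generate the wrapper and force its compilation (as shown above)
    DynamicMethod wrapper = BuildFooWrapper(); // hypothetical helper
    wrapper.CreateDelegate(typeof (Action));

    // 3. Redirect calls to Foo to the wrapper
    MethodUtil.ReplaceMethod(wrapper, fooMethod);
}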

Is it working?

Let’s revisit the original code and add the protection:

FooProtector.ProtectFoo();
MyClass.Foo();

The expected result is two printed messages, the second one being the exception message “I am done here!”. As we can happily see, this is exactly what we get:

[Image: console output showing "In Foo" followed by "I am done here!"]

Conclusion

The concept of replacing methods through their jitted versions can be useful. It can serve AOP scenarios such as logging, exception handling and basically applying any custom aspect. It can also be used to modify the behavior of 3rd party code for which we have no source. Additionally, as Roy says, it can be used as an engine for mocking frameworks.

But there are some disadvantages too. Firstly, it depends heavily on the compilation outcome, which makes it quite fragile. Secondly, it is sensitive to optimizations; inlined methods, for example, cannot be handled. Thirdly, when used extensively it requires generating and jitting many dynamic methods, which might lead to a performance hit.

Behind the scenes of events

Events are a classic implementation of the observer pattern. Many languages, such as C#, have built-in syntax for events. In this post I’ll explain the internals of events.

Delegates’ background

The most abstract way to describe a delegate is as a “pointer to a method”. A very relevant feature of delegates is that they can “point” at multiple methods. To do so we use the += operator to combine delegates, for example:

[Test]
public void Invoke_TwoDelegatesCombined_BothCalled()
{
    bool wasACalled = false;
    bool wasBCalled = false;

    Action delA = () => wasACalled = true;
    Action delB = () => wasBCalled = true;

    Action combine = null;
    combine += delA;
    combine += delB;

    combine.Invoke();

    Assert.That(wasACalled);
    Assert.That(wasBCalled);
}

But, what actually happens here? Let’s take a look at this code:

combine += delA;

This code compiles to the following IL:

IL_0031: ldloc.2
IL_0032: ldloc.0
IL_0033: call class [mscorlib]System.Delegate [mscorlib]System.Delegate::Combine(class [mscorlib]System.Delegate, class [mscorlib]System.Delegate)
IL_0038: castclass [mscorlib]System.Action
IL_003d: stloc.2

Which is equivalent to:

combine = (Action) Delegate.Combine(combine, delA);

So the compiled code is a direct call to Delegate.Combine, which makes any future invocation of the combined delegate be forwarded to both delegates.
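
The -= operator is handled analogously through Delegate.Remove, so for completeness:

combine -= delA;
// is compiled, just like the += case above, into:
combine = (Action) Delegate.Remove(combine, delA);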

Default event

If we use the default event implementation, the compiler generates two methods and a backing field. The backing field is a delegate that stores the subscribers; the methods add subscribers to and remove them from that delegate. This implementation allows any code that can see the event to add and remove subscribers, and allows the declaring type to raise the event. For example:

public class Publisher
{
    public event EventHandler MyEvent;

    public void Publish()
    {
        MyEvent(this, EventArgs.Empty);
    }
}

The event compiles into a field, which is a delegate of the event type:

.field private class [mscorlib]System.EventHandler MyEvent

And into two methods for adding and removing subscribers:

.event [mscorlib]System.EventHandler MyEvent
{
.addon instance void Events.Publisher::add_MyEvent(class [mscorlib]System.EventHandler)
.removeon instance void Events.Publisher::remove_MyEvent(class [mscorlib]System.EventHandler)
}

With the signatures:

.method public hidebysig specialname 
instance void add_MyEvent (
class [mscorlib]System.EventHandler 'value'
) cil managed

.method public hidebysig specialname
instance void remove_MyEvent (
class [mscorlib]System.EventHandler 'value'
) cil managed

The bodies of these methods, not surprisingly, manipulate the backing field.
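
For reference, the generated bodies are roughly equivalent to the following C# sketch (recent compilers emit a lock-free loop using Interlocked.CompareExchange from System.Threading; older ones simply locked on this). The backing field is renamed here because hand-written C# would not allow a field and an event with the same name:

private EventHandler myEvent; // the backing field

public event EventHandler MyEvent
{
    add
    {
        EventHandler current = myEvent;
        EventHandler previous;
        do
        {
            previous = current;
            var combined = (EventHandler) Delegate.Combine(previous, value);
            current = Interlocked.CompareExchange(ref myEvent, combined, previous);
        } while (current != previous);
    }
    remove
    {
        EventHandler current = myEvent;
        EventHandler previous;
        do
        {
            previous = current;
            var removed = (EventHandler) Delegate.Remove(previous, value);
            current = Interlocked.CompareExchange(ref myEvent, removed, previous);
        } while (current != previous);
    }
}
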
So far we’ve seen what an event declaration compiles into – an event declaration in metadata, a backing field which is a delegate of the event type, and two methods for adding and removing subscribers. All this magic from a single line of C# code.
The other side of the event is what happens when we raise it. The event can be raised only from within the type that declares it. For example:

MyEvent(this, EventArgs.Empty);

Compiles into:

IL_0000: nop
IL_0001: ldarg.0
IL_0002: ldfld class [mscorlib]System.EventHandler Events.Publisher::MyEvent
IL_0007: ldarg.0
IL_0008: ldsfld class [mscorlib]System.EventArgs [mscorlib]System.EventArgs::Empty
IL_000d: callvirt instance void [mscorlib]System.EventHandler::Invoke(object, class [mscorlib]System.EventArgs)

All this code does is access the delegate backing field and invoke it.
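
Note that since the backing field is just a delegate, raising the event when there are no subscribers would throw a NullReferenceException. A common pattern (not shown in the Publisher above) is to copy the field to a local and check it first:

public void Publish()
{
    // Copy to a local to avoid a race with the last subscriber unsubscribing
    EventHandler handler = MyEvent;
    if (handler != null)
    {
        handler(this, EventArgs.Empty);
    }
}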

Custom event

In fact, the event itself is not custom – the add/remove methods are. Custom add/remove accessors are a C# feature which I think is not very commonly used (in contrast to custom property accessors). They allow the developer to provide an alternative implementation for event subscription.

public event EventHandler MyCustomEvent
{
    add { }
    remove { }
}

In this case the compiled class does not contain a backing field. It contains the declaration of the event and the two methods with the custom bodies we provided.
A consequence of this difference in the compiled code is that there’s no way to raise the event directly. This makes sense, since the custom code can do many things (or nothing) with the subscribers and doesn’t necessarily store them in a common place for later invocation. If we try to raise MyCustomEvent the same way we raised MyEvent, we’ll get a compilation error.
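
As a sketch of a meaningful custom implementation, the accessors can store subscribers in a structure of our choice, which the type can then use to raise the event itself. The List-based storage below is purely illustrative and, unlike the generated implementation, is not thread-safe:

private readonly List<EventHandler> subscribers = new List<EventHandler>();

public event EventHandler MyCustomEvent
{
    add { subscribers.Add(value); }
    remove { subscribers.Remove(value); }
}

public void PublishCustom()
{
    foreach (EventHandler handler in subscribers)
    {
        handler(this, EventArgs.Empty);
    }
}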

Retrieving property value by name using dynamic method

In the previous post we compared some alternatives to the dynamic keyword. One important and very interesting alternative is based on reflection emit, which enables us to generate IL code at runtime, compile it and execute it straight away.
In this post we’ll see how to extract the value of a string property named ‘Name’ from an object of an unknown type using a dynamic method.

The code

// Cache of compiled getter delegates per type
private static readonly Dictionary<Type, Func<object, string>> typeToEmitDelegateMap =
    new Dictionary<Type, Func<object, string>>();

public static string GetNameByDynamicMethod(object arg)
{
    Type type = arg.GetType();

    Func<object, string> getterDelegate;
    if (!typeToEmitDelegateMap.TryGetValue(type, out getterDelegate))
    {
        string typeName = type.Name;

        PropertyInfo nameProperty = type.GetProperty("Name");
        Type returnType = typeof (string);

        // Define a new dynamic method
        // The method returns a string
        // The method expects a single object parameter
        var method = new DynamicMethod("GetNameFrom" + typeName,
                                       returnType,
                                       new[] {typeof (object)});

        ILGenerator ilGenerator = method.GetILGenerator();

        // Load the first method argument onto the stack.
        // In our case, this is an object whose type we already know
        ilGenerator.Emit(OpCodes.Ldarg_0);

        // Cast the object to the type we already know
        ilGenerator.Emit(OpCodes.Castclass, type);

        // Call the getter method on the casted instance
        ilGenerator.EmitCall(OpCodes.Call, nameProperty.GetGetMethod(), null);

        // Return the value of the Name property
        ilGenerator.Emit(OpCodes.Ret);

        // Compile the method and create a delegate to the new method
        getterDelegate = (Func<object, string>)method.CreateDelegate(typeof(Func<object, string>));

        typeToEmitDelegateMap.Add(type, getterDelegate);
    }

    return getterDelegate(arg);
}

What we did here was define a new method, generate its body in IL, compile it and execute it. The new method is equivalent in many ways to a method we could have written in the original program, and it is hosted in a dynamic module in memory.
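
A usage sketch (the Person type here is purely illustrative):

public class Person
{
    public string Name { get; set; }
}

// ...

string name = GetNameByDynamicMethod(new Person { Name = "Alice" });
Console.WriteLine(name); // prints "Alice"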

The advantage of this approach over plain reflection is that the code is compiled once, so we don’t need to explore the type again every time we need the property value.
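
For comparison, a plain reflection version would look roughly like this and would pay the lookup cost on every call (unless we cached the PropertyInfo ourselves):

public static string GetNameByReflection(object arg)
{
    // The type is explored on every call
    PropertyInfo nameProperty = arg.GetType().GetProperty("Name");
    return (string) nameProperty.GetValue(arg, null);
}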

Performance

A quick comparison for calling these alternatives 10,000,000 times each:

                  Seconds   Ratio to direct call
Directly          0.0131    1
Dynamic           0.4609    35
Expression        0.9154    70
Reflection emit   0.9832    75

As can be seen, using the dynamic keyword works much faster than compiling an expression or a dynamic method at runtime.

Another interesting data set shows the time that each alternative takes to set up (the time to perform the first call):

                  Seconds
Directly          0.00003
Dynamic           0.08047
Expression        0.00114
Reflection emit   0.02169

Monitoring execution using Mono Cecil

This post will demonstrate how to monitor the execution of .NET code using Mono Cecil. This can be useful for logging, for performance analysis and just for fun. The concept is, obviously, IL weaving: we look for entry points and existing IL instructions and weave new IL around them. In this post we’ll show only four types of monitoring (in reality there are more): Enter method, Exit method, Jump from method and Jump back to method. Jump in this context means calling another method and returning from it.
In our example we’ll assume we have some simple ‘notifier’ which the weaved code will call:

public class Notifier
{
    public static Action<string> Enter;
    public static Action<string> Exit;
    public static Action<string> JumpOut;
    public static Action<string> JumpBack;

    public static void NotifyEnter(string methodName)
    {
        if (Enter != null)
        {
            Enter(methodName);
        }
    }

    public static void NotifyExit(string methodName)
    {
        if (Exit != null)
        {
            Exit(methodName);
        }
    }

    public static void NotifyJumpOut(string methodName)
    {
        if (JumpOut != null)
        {
            JumpOut(methodName);
        }
    }

    public static void NotifyJumpBack(string methodName)
    {
        if (JumpBack != null)
        {
            JumpBack(methodName);
        }
    }
}
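
A usage sketch: before running the weaved code, the host application hooks up whatever callbacks it needs, for example simple console logging:

Notifier.Enter = name => Console.WriteLine("Enter " + name);
Notifier.Exit = name => Console.WriteLine("Exit " + name);
Notifier.JumpOut = name => Console.WriteLine("JumpOut " + name);
Notifier.JumpBack = name => Console.WriteLine("JumpBack " + name);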

Monitoring enter

This is the most trivial weave: it inserts a call to the Enter callback before the first instruction in the method body. In order to do so, we first need to load the assembly and find all the methods into which we can weave:

public void Weave()
{
    AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(assemblyPath);

    IEnumerable<MethodDefinition> methodDefinitions = assembly.MainModule.GetTypes()
        .SelectMany(type => type.Methods).Where(method => method.HasBody);
    foreach (var method in methodDefinitions)
    {
        WeaveMethod(assembly, method);
    }

    assembly.Write(assemblyPath);
}

Now we add references to the callback methods to the weaved assembly. This is not the weaving itself; these references are definitions the weaved assembly needs in order to call the callbacks. First we’ll get the callback methods using reflection:

Type notifierType = typeof (Notifier);
enterMethod = notifierType.GetMethod("NotifyEnter",
    BindingFlags.Public | BindingFlags.Static, null, new[] {typeof (string)}, null);
exitMethod = notifierType.GetMethod("NotifyExit",
    BindingFlags.Public | BindingFlags.Static, null, new[] {typeof (string)}, null);
jumpFromMethod = notifierType.GetMethod("NotifyJumpOut",
    BindingFlags.Public | BindingFlags.Static, null, new[] {typeof (string)}, null);
jumpBackMethod = notifierType.GetMethod("NotifyJumpBack",
    BindingFlags.Public | BindingFlags.Static, null, new[] {typeof (string)}, null);

Afterwards, we’ll add the references to the weaved assembly:

MethodReference enterReference = assembly.MainModule.Import(enterMethod);
MethodReference exitReference = assembly.MainModule.Import(exitMethod);
MethodReference jumpFromReference = assembly.MainModule.Import(jumpFromMethod);
MethodReference jumpBackReference = assembly.MainModule.Import(jumpBackMethod);

So our weave method looks like:

private static void WeaveMethod(AssemblyDefinition assembly, MethodDefinition method)
{
    MethodReference enterReference = assembly.MainModule.Import(enterMethod);
    MethodReference exitReference = assembly.MainModule.Import(exitMethod);
    MethodReference jumpFromReference = assembly.MainModule.Import(jumpFromMethod);
    MethodReference jumpBackReference = assembly.MainModule.Import(jumpBackMethod);

    string name = method.DeclaringType.FullName + "." + method.Name;

    WeaveEnter(method, enterReference, name);
    WeaveJump(method, jumpFromReference, jumpBackReference, name);
    WeaveExit(method, exitReference, name);
}

Now, we have everything ready to weave the enter monitoring code:

private static void WeaveEnter(MethodDefinition method, MethodReference methodReference, string name)
{
    var ilProcessor = method.Body.GetILProcessor();

    Instruction loadNameInstruction = ilProcessor.Create(OpCodes.Ldstr, name);
    Instruction callEnterInstruction = ilProcessor.Create(OpCodes.Call, methodReference);

    ilProcessor.InsertBefore(method.Body.Instructions.First(), loadNameInstruction);
    ilProcessor.InsertAfter(loadNameInstruction, callEnterInstruction);
}

The ILProcessor is a helper utility which Cecil provides to make the weaving simpler. The first instruction we weave loads a string holding the name of the method being entered. The second instruction we weave is a call that takes the loaded string as its argument. We insert these instructions at the beginning of the method, so from now on the callback will be invoked every time the method is entered.

Monitoring exit

Monitoring exit is a little more interesting. In contrast to enter, where we have a single weaving point, a method may have multiple exit points – multiple return statements, thrown exceptions, etc.
Here, for simplicity, we’ll monitor return statements only:

private static void WeaveExit(MethodDefinition method, MethodReference exitReference, string name)
{
    ILProcessor ilProcessor = method.Body.GetILProcessor();

    List<Instruction> returnInstructions = method.Body.Instructions
        .Where(instruction => instruction.OpCode == OpCodes.Ret).ToList();
    foreach (var returnInstruction in returnInstructions)
    {
        Instruction loadNameInstruction = ilProcessor.Create(OpCodes.Ldstr, name);
        Instruction callExitReference = ilProcessor.Create(OpCodes.Call, exitReference);

        ilProcessor.InsertBefore(returnInstruction, loadNameInstruction);
        ilProcessor.InsertAfter(loadNameInstruction, callExitReference);
    }
}

As can be seen, we first find all the return instructions. Afterwards, we insert a call to our callback before each of them, in a similar way to the enter callback.

Monitoring method jumps

This monitoring type lets us know when the method jumps to another method. If we are measuring performance, in an “ideal” world (a single thread and no context switches) this is where we would stop and resume measuring the time of the executing method. Here, for simplicity, we’ll weave around plain call instructions only, ignoring other kinds of calls (like callvirt).

private static void WeaveJump(MethodDefinition method, MethodReference jumpFromReference, MethodReference jumpBackReference, string name)
{
    ILProcessor ilProcessor = method.Body.GetILProcessor();

    List<Instruction> callInstructions = method.Body.Instructions
        .Where(instruction => instruction.OpCode == OpCodes.Call).ToList();
    foreach (var callInstruction in callInstructions)
    {
        Instruction loadNameForFromInstruction = ilProcessor.Create(OpCodes.Ldstr, name);
        Instruction callJumpFromInstruction = ilProcessor.Create(OpCodes.Call, jumpFromReference);

        ilProcessor.InsertBefore(callInstruction, loadNameForFromInstruction);
        ilProcessor.InsertAfter(loadNameForFromInstruction, callJumpFromInstruction);

        Instruction loadNameForBackInstruction = ilProcessor.Create(OpCodes.Ldstr, name);
        Instruction callJumpBackInstruction = ilProcessor.Create(OpCodes.Call, jumpBackReference);

        ilProcessor.InsertAfter(callInstruction, loadNameForBackInstruction);
        ilProcessor.InsertAfter(loadNameForBackInstruction, callJumpBackInstruction);
    }
}

Here we find all the call instructions, insert a call to the JumpOut callback before each of them and a call to the JumpBack callback after each of them. This way we get a callback right before leaving the method and right after returning to it.
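
Conceptually, for a call instruction inside a weaved method the transformation looks like this, shown as C# for readability (the method name string is illustrative):

// Before weaving:
MethodB();

// After weaving:
Notifier.NotifyJumpOut("Ns.TypeName.MethodA");
MethodB();
Notifier.NotifyJumpBack("Ns.TypeName.MethodA");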

Example

public void MethodA()
{
    MethodB();
}

private void MethodB()
{
}

If we execute MethodA, we will receive these callbacks:

  1. Enter MethodA
  2. JumpOut MethodA
  3. Enter MethodB
  4. Exit MethodB
  5. JumpBack MethodA
  6. Exit MethodA

Summary

Mono Cecil can be used for low-level AOP where the aspects’ targets are IL instructions. There are already some great AOP tools out there, like PostSharp, but it is cool to know how simply a solution can be implemented using Cecil.

The synchronized keyword

What it does

A little-known feature of .NET is the synchronized keyword. It can be applied to methods and ensures the following:

  • Instance method – only a single thread at a time can execute the method on a given instance (different instances are not synchronized). Equivalent to lock(this).
  • Static method – only a single thread at a time can execute the method. Equivalent to lock(typeof(TypeName)).

Usage in C#

If you look at the C# specification you’ll see that there’s no mention of this keyword. The reason is that it is an IL keyword, not a C# one. In order to instruct the compiler to mark a method as synchronized, we can use the MethodImplAttribute with MethodImplOptions.Synchronized. For example:

[MethodImpl(MethodImplOptions.Synchronized)]
public void MethodWithSyncAttribute()
{
}
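
For the static case mentioned in the list above, here is a sketch of two behaviorally equivalent methods (TypeName is an illustrative class name):

public class TypeName
{
    [MethodImpl(MethodImplOptions.Synchronized)]
    public static void SynchronizedStatic()
    {
    }

    // Equivalent behavior expressed with an explicit lock
    public static void ExplicitLockStatic()
    {
        lock (typeof(TypeName))
        {
        }
    }
}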

The IL result

Using the synchronized keyword

In IL, MethodWithSyncAttribute() looks like this:

.method public hidebysig instance void  MethodWithSyncAttribute() cil managed synchronized
{
  // Code size       2 (0x2)
  .maxstack  8
  IL_0000:  nop
  IL_0001:  ret
}

It is clear that this method contains no explicit locking calls such as Monitor.Enter. Yet it still behaves as if we had used a lock block around the method body.

Using a lock block

The previous method is equivalent to the next:

public void MethodWithExplicitLock()
{
    lock (this)
    {
    }
}

This method translates into:

.method public hidebysig instance void  MethodWithExplicitLock() cil managed
{
  // Code size       36 (0x24)
  .maxstack  2
  .locals init ([0] bool 's__LockTaken0',
           [1] class Sync.Logger CS$2$0000,
           [2] bool CS$4$0001)
  IL_0000:  nop
  IL_0001:  ldc.i4.0
  IL_0002:  stloc.0
  .try
  {
    IL_0003:  ldarg.0
    IL_0004:  dup
    IL_0005:  stloc.1
    IL_0006:  ldloca.s   's__LockTaken0'
    IL_0008:  call       void [mscorlib]System.Threading.Monitor::Enter(object,
                                                                        bool&)
    IL_000d:  nop
    IL_000e:  nop
    IL_000f:  nop
    IL_0010:  leave.s    IL_0022
  }  // end .try
  finally
  {
    IL_0012:  ldloc.0
    IL_0013:  ldc.i4.0
    IL_0014:  ceq
    IL_0016:  stloc.2
    IL_0017:  ldloc.2
    IL_0018:  brtrue.s   IL_0021
    IL_001a:  ldloc.1
    IL_001b:  call       void [mscorlib]System.Threading.Monitor::Exit(object)
    IL_0020:  nop
    IL_0021:  endfinally
  }  // end handler
  IL_0022:  nop
  IL_0023:  ret
}

As can be seen, the lock block translates naturally into a try/finally block with calls to Monitor.Enter and Monitor.Exit.

Summary

The synchronized keyword is an IL keyword that synchronizes calls to the marked method. It causes the method to behave as if its whole body were surrounded with a lock block. It is interesting to note that when the keyword is used, no explicit locking instructions appear in the IL; the locking is applied only when the method is jitted and executed.
The bottom line is that for C# developers it mostly provides another piece of syntactic sugar for defining a trivial lock.