Posts Tagged ‘.NET’

IDisposable, finalizers and garbage collection

No Comments »

Proper resource management in .NET requires that you understand a few basic concepts, which I will discuss in this posting.

The main resource consumed by your .NET program is memory, and this consumption comes in two flavours: memory consumed by the stack and memory consumed by the heap. Local variables of value types live on the stack (along with references to objects), while the objects themselves are stored on the heap.

In addition to memory your application will probably claim other system resources as well, such as file handles.

Since the available amount of any resource is constrained, its usage has to be managed. This means not only that we need to create or claim resources whenever we want to use them, but also that we need to destroy or release them when we are done. This last point, to which many textbooks do not dedicate enough attention IMO, is the tricky bit.

Because .NET is a garbage collected environment, you usually do not have to think about managing memory: memory used by local variables is managed automatically as the stack grows and shrinks during program execution. Memory allocated on the heap is managed by the .NET framework’s garbage collector. This is a component of the .NET runtime which will periodically freeze program execution, take a look at the memory allocated on the heap, compare it with the set of references currently used by your program, and release memory to which no references exist.

.NET offers (at least) two means to help you manage resources not handled by the garbage collector: the IDisposable interface and object finalizers.

Finalizers

Every .NET object has a Finalize() method. This method is called by the .NET runtime when the garbage collector collects the object. The Finalize() method should be overridden in any class which needs to release unmanaged resources prior to its destruction. However, in C# you cannot directly implement the Finalize method. That is, the compiler will not allow this:

protected override void Finalize()
{
    //Do clean up, releasing resources etc.
}

Instead you must implement a destructor. A destructor is implemented by creating a method whose name is constructed by prefixing the type’s name with a tilde (~). Thus, to create a destructor for the class MyClass, you would write this:

~MyClass()
{
    //Do clean up, releasing resources etc.
}

The C# compiler will transform this into

protected override void Finalize()
{
    try
    {
        //Do clean up, releasing resources etc.
    }
    finally
    {
        base.Finalize();
    }
}

Thus, the point of forcing you to use a destructor instead of overriding Finalize() directly is to help you remember to call base.Finalize(). Notice that it is the responsibility of the .NET runtime to call your destructor. In fact, you are not allowed to call the destructor yourself.

IDisposable

You can’t predict when the garbage collector runs (unless you explicitly force it to run). Thus, the destructor of an object may run long after your application is done with the object. For objects using scarce resources this is not desirable behaviour: you will usually want to perform clean up as soon as you are done with the object. The IDisposable interface is a means to this end.

The IDisposable interface contains a single method:

// Summary:
//     Defines a method to release allocated resources.
[ComVisible(true)]
public interface IDisposable
{
    // Summary:
    //     Performs application-defined tasks associated with freeing, releasing, or
    //     resetting unmanaged resources.
    void Dispose();
}

In this method you are encouraged to put the code which should run to release the resources the current object is holding. By implementing IDisposable you are signaling to users of your class that it uses unmanaged resources and that they should call Dispose() as soon as they are done with an instance of the class to release these resources. Moreover, implementing IDisposable does give you some benefits:

When your class implements IDisposable, the C# compiler will let you create an object of the given type in a using statement, and the compiler will ensure that Dispose() is called when control leaves the statement, even if an exception is thrown:

using (MyDisposableClass m = new MyDisposableClass())
{
    //yada yada yada
}//m.Dispose() will be called here.

By having Dispose() called automatically like this, you make sure that you do not forget to call Dispose() or accidentally remove the call when reorganizing the code.
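For reference, the compiler's expansion of the using statement can be sketched like this. This is a self-contained toy (MyDisposableClass and its Disposed flag are illustrative, not a real API), but the try/finally shape is what the compiler actually emits:

```csharp
using System;

class MyDisposableClass : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() { Disposed = true; }
}

class Program
{
    static void Main()
    {
        // Roughly what the compiler generates for:
        // using (MyDisposableClass m = new MyDisposableClass()) { ... }
        MyDisposableClass m = new MyDisposableClass();
        try
        {
            //yada yada yada
        }
        finally
        {
            if (m != null)
                m.Dispose(); // guaranteed to run, even if an exception was thrown
        }
        Console.WriteLine(m.Disposed);
    }
}
```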

Now, how do destructors and Dispose() relate? Obviously you don't want to duplicate clean-up code in your class, so where should you put it: in the destructor or in Dispose()? Since Dispose() is the more aggressive approach and you aren't allowed to call a destructor directly, you should put your clean-up code in Dispose() and call Dispose() from your destructor.

Be aware that putting a call to Dispose() inside your destructor means that your Dispose() method may be called multiple times: first when control leaves a using statement in which the object is being used, and then again when the garbage collector collects the object and the destructor runs. Thus, your Dispose() method should not blow up if it is called multiple times. Also, Microsoft specifically states that you should not make any assumptions about the order in which objects are collected or have their Finalize methods called. Thus, when your clean-up code runs because of a call to Finalize(), you should not reference other managed objects, since these may already have been finalized.

Suppressing Finalize

Executing a Finalize method can be costly and it actually delays the reclaiming of the memory used by the object. Thus, if appropriate clean up has already been performed for an instance through a call to Dispose(), it is desirable that the destructor is not run when the instance is collected by the garbage collector. To achieve this, your Dispose method should call the GC.SuppressFinalize() method, passing in the instance as a parameter.
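Putting the pieces together, a minimal sketch of the pattern might look as follows. The class name and the disposed flag are my own; the point is the interplay between Dispose(), the destructor and GC.SuppressFinalize():

```csharp
using System;

// Illustrative sketch of a disposable class with a finalizer.
class ResourceHolder : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        if (disposed)
            return;               // safe to call multiple times
        // ...release unmanaged resources here...
        disposed = true;
        GC.SuppressFinalize(this); // the finalizer has nothing left to do
    }

    ~ResourceHolder()
    {
        Dispose(); // only does work if nobody called Dispose() explicitly
    }
}

class Program
{
    static void Main()
    {
        var r = new ResourceHolder();
        r.Dispose();
        r.Dispose(); // second call is a no-op
        Console.WriteLine("ok");
    }
}
```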

That’s it. I hope this has cleared up some of the confusion usually surrounding IDisposable, finalizers and garbage collection.

As you can see, there are a number of rules you should adhere to if you want to manage resources effectively. To help you to arrive at a correct resource management scheme, Microsoft provides a reference implementation here: http://msdn.microsoft.com/en-us/library/system.gc.suppressfinalize.aspx. Notice how the Dispose(bool disposing) overload is used to ensure that only unmanaged resources are accessed when Finalize causes a dispose.


Hey, where are my interface’s properties?

2 Comments »

This post is about the somewhat unexpected behaviour of the Type.GetProperties() method when it is called on an interface.

Last week I began writing an ASP.NET MVC application to act as a dashboard for an application which has previously had a very cumbersome UI (involving pgAdmin – certainly not best practice :-). I wanted it to have a very light feel, hence decided to use as much AJAX as I could get away with. This has worked out very well. I’ve now got 1(!) view and a bunch of controller actions all returning JsonResults.

My business objects all reference each other, even in a circular fashion: an Order has a List<OrderLine> OrderLines property and each OrderLine has an Order property. I would like to pass orders and orderlines to the browser using JSON, but using an Order as data for a JsonResult will cause the serializer to barf because of the circularity. Thus, we’ll need to pass a simpler object to the JsonResult (that’s a good idea anyway, since you will usually not need/want all the object’s properties to be passed to the browser).

My business objects looked something like

namespace X.BO
{
    public class Entity
    {
        public virtual int Id { get;set; }
    }

    public class OrderLine : Entity, IOrderLine
    {
        public virtual string Text { get; set; }
        public virtual int UnitPrice { get; set; }
        public virtual string Quantity { get; set; }
        public virtual string ProductNumber { get; set; }
        public virtual Order Order {get;set;}
    }
}

and I decided that I would want to pass objects like

namespace X.JSON
{
    public class OrderLine
    {
        public int Id { get; set; }
        public string Text { get; set; }
        public int UnitPrice { get; set; }
        public string Quantity { get; set; }
        public string ProductNumber { get; set; }
        public int OrderId {get;set;}
    }
}

to the JsonResult. I furthermore anticipated that I would be passing all kinds of objects to the browser, and hence didn't want to write the boilerplate code for mapping from my business objects to the JSON objects over and over. What I needed was a generic mapper. Among the features I wanted was the ability to automatically map references to business objects in the source object to an identifier in the target JSON object. For an orderline this would mean automatically mapping the BO.OrderLine.Order property of type BO.Order to the JSON.OrderLine.OrderId property of type Int32 by copying the value of BO.Order.Id (all my business objects have an identifier named Id of type Int32).
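A minimal sketch of this kind of mapper could look as follows. This is illustrative, not the actual Business2JSONMapper code: same-named properties are copied directly, and a target property named XxxId is filled from the Id of the source's Xxx reference:

```csharp
using System;
using System.Reflection;

static class SimpleMapper
{
    public static TTarget Map<TTarget>(object source) where TTarget : new()
    {
        var target = new TTarget();
        Type sourceType = source.GetType();

        foreach (PropertyInfo targetProp in typeof(TTarget).GetProperties())
        {
            // Direct match: same name, compatible type.
            PropertyInfo sourceProp = sourceType.GetProperty(targetProp.Name);
            if (sourceProp != null &&
                targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
            {
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
                continue;
            }

            // Reference match: target "OrderId" <- source "Order".Id
            if (targetProp.Name.Length > 2 && targetProp.Name.EndsWith("Id"))
            {
                PropertyInfo refProp = sourceType.GetProperty(
                    targetProp.Name.Substring(0, targetProp.Name.Length - 2));
                if (refProp == null)
                    continue;

                object reference = refProp.GetValue(source, null);
                if (reference == null)
                    continue;

                PropertyInfo idProp = reference.GetType().GetProperty("Id");
                if (idProp != null)
                    targetProp.SetValue(target, idProp.GetValue(reference, null), null);
            }
        }
        return target;
    }
}
```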

I considered using AutoMapper, but, not having used it before, I decided that for my limited needs it would be faster to just roll my own mapper (of course, as is most often the case, my needs changed over time and I am now thinking about investing the time to get acquainted with AutoMapper – I guess hindsight is always 20/20 :-).

So, I got to work on the mapper, initially named Business2JSONMapper. Everything was working out fine, I was writing unit tests, they were all passing and I was feeling good about myself. At last I fired up the application and, lo and behold, the Business2JSONMapper blew up, complaining that the target object X.JSON.OrderLine had a property named ‘Id’ of type Int32, but that the source object had no such property! This had me baffled! The source object certainly had a property named ‘Id’ of type Int32!

I started furiously banging out regression tests to try and reproduce the behaviour in my unit tests. I had been using pretty simple objects previously, so I started adding virtual properties, inherited properties etc.

In the end, it turned out that the problem lay with the IOrderLine interface. In my application, all my references are interfaces, but I hadn’t been using interfaces in the tests. The IOrderLine interface looks something like

namespace X.BO
{
    public interface IEntity
    {
        int Id { get; set; }
    }

    public interface IOrderLine : IEntity
    {
        string Text { get; set; }
        int UnitPrice { get; set; }
        string Quantity { get; set; }
        string ProductNumber { get; set; }
        Order Order { get; set; }
    }
}

In the Business2JSONMapper code I used the Type.GetProperties() method to determine the set of properties on the source object which might act as sources for each of the properties of the target object. This worked great when acting on classes like OrderLine, but when called on the IOrderLine type, the Id property wasn’t among the properties being returned!

Some investigation revealed the reason for this: when an interface inherits from a parent interface, it does not inherit the properties of the parent interface, it only inherits the requirement to implement these properties!

This conforms with section 8.10 of the CLI specification:

Only object types can inherit implementations, hence only object types can inherit members (see §8.9.8). While interface types can be derived from other interface types, they only “inherit” the requirement to implement method contracts, never fields or method implementations.

Thus, in order to get a full list of the properties required by an interface, we will have to traverse the inheritance hierarchy:

public List<PropertyInfo> GetTypesProperties(Type type)
{
    // Requires 'using System.Linq;' for ToList().
    List<PropertyInfo> typesProperties = type.GetProperties().ToList();

    if (!type.IsInterface)
        return typesProperties;

    // For interfaces, GetProperties() only returns the properties declared
    // directly on the interface, so we must visit the inherited interfaces too.
    foreach (Type intface in type.GetInterfaces())
        typesProperties.AddRange(GetTypesProperties(intface));

    return typesProperties;
}

For mapping purposes this might lead to some problems, since we may find properties of the same name and type in different interfaces. Fortunately, I know that I don’t have to worry about this in my particular case, but if you’re writing a mapper yourself, it may be a problem you’ll have to consider.
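If duplicates do matter in your situation, one way to handle them (a sketch, assuming that name plus type is enough to identify a property) is to de-duplicate the collected properties. Note that Type.GetInterfaces() already returns the full closure of inherited interfaces, so no recursion is needed here:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

static class PropertyHelper
{
    public static List<PropertyInfo> GetDistinctProperties(Type type)
    {
        var properties = type.GetProperties().ToList();

        if (type.IsInterface)
        {
            // GetInterfaces() returns all inherited interfaces transitively.
            foreach (Type intface in type.GetInterfaces())
                properties.AddRange(intface.GetProperties());
        }

        // Keep one PropertyInfo per (name, type) pair.
        return properties
            .GroupBy(p => new { p.Name, p.PropertyType })
            .Select(g => g.First())
            .ToList();
    }
}
```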


How to enable your Windows Service to have its name configured during installation

1 Comment »

In this posting I will show you how you can configure the name of a Windows Service as it is installed. Once we have tweaked the service installer properly, we will be able to have InstallUtil.exe install our service under a name which we supply as a command-line parameter.

Let’s say we’ve implemented a Windows Service in C#. To install the service, we’ve implemented an Installer, i.e. something like the following:

    [RunInstaller(true)]
    public class WindowsServiceInstaller : Installer
    {
        public WindowsServiceInstaller()
        {
            ServiceProcessInstaller serviceProcessInstaller =
                               new ServiceProcessInstaller();
            ServiceInstaller serviceInstaller = new ServiceInstaller();

            //# Service Account Information
            serviceProcessInstaller.Account = ServiceAccount.LocalSystem;
            serviceProcessInstaller.Username = null;
            serviceProcessInstaller.Password = null;

            //# Service Information
            serviceInstaller.StartType = ServiceStartMode.Automatic;

            serviceInstaller.DisplayName = "My Windows Service";
            serviceInstaller.ServiceName = "my_windows_service";

            this.Installers.Add(serviceProcessInstaller);
            this.Installers.Add(serviceInstaller);
        }
    }

Assuming that the service is contained in Installable.exe, we install the service using InstallUtil.exe like so:

c:\windows\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe /i c:\service1\Installable.exe

Now, what happens if we want to have two instances of this service running? We might want to have multiple services running, each associated with different databases. If we place Installable.exe and its configuration file in a new folder (say c:\service2) we might try to install this service via

c:\windows\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe /i c:\service2\Installable.exe

InstallUtil will throw an error: "System.ComponentModel.Win32Exception: The specified service already exists". InstallUtil complains because a service named my_windows_service is already installed.

So, what can we do? The obvious thing to do is to just change the lines

            serviceInstaller.DisplayName = "My Windows Service";
            serviceInstaller.ServiceName = "my_windows_service";

in WindowsServiceInstaller, recompile and run InstallUtil.exe again. While this works, it doesn’t play well with source control: what should the serviceInstaller.ServiceName property be set to in the code committed to the repository? How do you ensure that you remember to verify that this property is set correctly, before you compile and deploy the code?

What we really need is a way to supply the service name to the install procedure. Ideally, we would like to be able to do something like

c:\windows\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe /i c:\service1\Installable.exe /servicename="my_service_instance_1" /servicedisplayname="My Service Instance 1"

but InstallUtil doesn’t support these arguments. However, InstallUtil won’t bail out if it encounters unknown arguments either. This observation is going to be half the solution. The other half is knowing that you can use System.Environment.GetCommandLineArgs() to get at the command line arguments provided for the current process.

Thus, in the installer we can access the command line arguments provided to InstallUtil, parse these arguments ourselves and set the service name accordingly.

To do this, we add the following private methods to the installer

private void SetServicePropertiesFromCommandLine(ServiceInstaller serviceInstaller)
{
	string[] commandlineArgs = Environment.GetCommandLineArgs();

	string servicename;
	string servicedisplayname;
	ParseServiceNameSwitches(commandlineArgs, out servicename, out servicedisplayname);

	serviceInstaller.ServiceName = servicename;
	serviceInstaller.DisplayName = servicedisplayname;
}

private void ParseServiceNameSwitches(string[] commandlineArgs, out string serviceName, out string serviceDisplayName)
{
	var servicenameswitch = (from s in commandlineArgs where s.StartsWith("/servicename") select s).FirstOrDefault();
	var servicedisplaynameswitch = (from s in commandlineArgs where s.StartsWith("/servicedisplayname") select s).FirstOrDefault();

	if (servicenameswitch == null)
		throw new ArgumentException("Argument 'servicename' is missing");
	if (servicedisplaynameswitch == null)
		throw new ArgumentException("Argument 'servicedisplayname' is missing");
	if (!servicenameswitch.Contains('=') || servicenameswitch.Split('=').Length < 2)
		throw new ArgumentException("The /servicename switch is malformed");
	if (!servicedisplaynameswitch.Contains('=') || servicedisplaynameswitch.Split('=').Length < 2)
		throw new ArgumentException("The /servicedisplayname switch is malformed");

	serviceName = servicenameswitch.Split('=')[1];
	serviceDisplayName = servicedisplaynameswitch.Split('=')[1];

	serviceName = serviceName.Trim('"');
	serviceDisplayName = serviceDisplayName.Trim('"');
}

The SetServicePropertiesFromCommandLine method retrieves the command line arguments and configures the installer to set the service’s name properties. The second method, ParseServiceNameSwitches, is just a utility method for SetServicePropertiesFromCommandLine.

We call these methods from the installer’s constructor:

public InstallableServiceInstaller()
{
	ServiceProcessInstaller serviceProcessInstaller =  new ServiceProcessInstaller();
	ServiceInstaller serviceInstaller = new ServiceInstaller();

	//# Service Account Information
	serviceProcessInstaller.Account = ServiceAccount.LocalSystem;
	serviceProcessInstaller.Username = null;
	serviceProcessInstaller.Password = null;

	SetServicePropertiesFromCommandLine(serviceInstaller);

	//# Service Information
	serviceInstaller.StartType = ServiceStartMode.Automatic;
	this.Installers.Add(serviceProcessInstaller);
	this.Installers.Add(serviceInstaller);
}

We can now install our service, providing the service names to InstallUtil as desired:

c:\windows\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe /i c:\service1\Installable.exe /servicename="my_service_instance_1" /servicedisplayname="My Service Instance 1"
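Note that uninstalling (/u) also instantiates the installer, whose constructor parses the command line again. So, assuming the same paths as above, the switches must be supplied on uninstall as well, or the installer will throw before it can resolve the service to remove:

```shell
c:\windows\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe /u c:\service1\Installable.exe /servicename="my_service_instance_1" /servicedisplayname="My Service Instance 1"
```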


How to get a pending transaction from a SqlConnection

2 Comments »

This blogpost originated from a question I asked on stackoverflow.

I’ve been building an application on top of a CMS lately. This CMS has a number of extension points. In particular, when certain business objects are updated, I can have the system call a custom handler for this event. In my custom handler I want to perform and persist my own data access operations. If I just new up a SqlConnection and use this for my data access, I run into two problems:

  • The changes that I make will not be enclosed in the transaction that the CMS is using for its updates. Thus, my updates might be persisted while the CMS’ changes are rolled back or vice versa, resulting in my business objects’ state being corrupted.
  • If I am updating database rows related to the business objects updated by the CMS, it is very likely that my updates will block on a lock owned by the CMS’ transaction. This will result in a deadlock, since the CMS won’t commit its transaction until my custom handler has finished.

Okay, so surely the guys building the CMS must have provided some contextual information for the custom handler? Well, sort of. Implementing the custom handler is a matter of implementing and registering a method with signature

void OnAfterUpdate(IDbConnection conn)

Hence, in OnAfterUpdate, the obvious thing to do would be something like

private void OnUpdateHandler(IDbConnection conn)
{
    var cmd = conn.CreateCommand();
    cmd.CommandText = someSQLString;
    cmd.CommandType = CommandType.Text;
    cmd.ExecuteNonQuery();
}

However, the call to cmd.ExecuteNonQuery() throws an InvalidOperationException complaining that

ExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized

So we’ll need to provide a reference to the CMS’ pending transaction through the IDbCommand.Transaction property on cmd. You might suspect that this would be as straightforward as

cmd.Transaction = conn.CurrentTransaction;

While I think this would certainly be a reasonable expectation, things aren’t so simple. The IDbConnection object doesn’t have a public accessor providing access to the current transaction.
Of course, using reflection, you can get a hold of the transaction, and here is how:

The SqlConnection has a private member InnerConnection of type System.Data.SqlClient.SqlInternalConnectionTds, which is an internal type of System.Data.dll. The SqlInternalConnectionTds in turn has a member called AvailableInternalTransaction of type System.Data.SqlClient.SqlInternalTransaction. Finally SqlInternalTransaction has a Parent property of type System.Data.SqlClient.SqlTransaction, which is the transaction we’re looking for. Thus, the transaction may be retrieved as follows:

private SqlTransaction FetchTransaction(SqlConnection conn)
{
    Type sqlConnectionType = conn.GetType();

    object innerConnection =
        sqlConnectionType.GetProperty("InnerConnection",
            BindingFlags.NonPublic | BindingFlags.Instance).GetValue(conn, null);

    object sqlInternalTransaction =
        innerConnection.GetType().GetProperty("AvailableInternalTransaction",
            BindingFlags.NonPublic | BindingFlags.Instance).GetValue(innerConnection, null);

    SqlTransaction transaction =
        (SqlTransaction)sqlInternalTransaction.GetType().GetProperty("Parent",
            BindingFlags.NonPublic | BindingFlags.Instance).GetValue(sqlInternalTransaction, null);

    return transaction;
}
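With FetchTransaction in place, the handler can enlist its command in the pending transaction. This is a sketch assuming the IDbConnection really is a SqlConnection and that someSQLString is defined elsewhere, as in the earlier snippet:

```csharp
private void OnUpdateHandler(IDbConnection conn)
{
    var cmd = conn.CreateCommand();
    cmd.CommandText = someSQLString;
    cmd.CommandType = CommandType.Text;
    // Enlist in the CMS' pending transaction; cast is assumed to be safe.
    cmd.Transaction = FetchTransaction((SqlConnection)conn);
    cmd.ExecuteNonQuery();
}
```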

I have no idea why it must be this hard to get a hold of the transaction, but if you happen to understand the inner workings of SQL connections and transactions and can provide an explanation, please leave a comment.


Mapping a network drive in .NET

No Comments »

I recently had to perform the following operations in an ASP.NET project I was working on:

  • Retrieve the contents of certain directories on a remote server
  • Process the retrieved files
  • Distribute the result of the above process to a number of webservers

I first created a quick and dirty console application which achieved this. Its main thread spawned additional processes running ‘net use LOCALNAME: UNCPATH PASSWORD /user:USERNAME’ to map the required remote drives after which it performed the copying.
My first attempt to wrap this in ASP.NET consisted of trying to continue to use the ‘net use’ approach. This required some configuration of proper impersonation, but otherwise it seemed to work fine at first glance – the mapping processes returned no error codes. However, upon trying to access the mapped drives, the application died with an ‘Unable to find part of the path’ exception. I’m not really into what is going on with ‘net use’, but my investigations showed that the mappings performed by the child processes weren’t reflected back in the parent process. The approach worked in the console app, but not when used from ASP.NET (even though I impersonated the exact same user who had run the console app).
Next, I discussed the problem with a couple of colleagues. The outcome was that I tried to achieve my original goals through FTP. A colleague lent me some code he had written for a seemingly similar task. However, upon investigation I discovered that FTP wasn’t properly configured on the servers I was targeting with my application, so I abandoned the idea (these were production servers and I really didn’t want to mess with their configuration).
In the end I turned to the Windows API.
I had previously shied away from interfacing with unmanaged code, but it turned out to be quite simple once I got an idea of how to map the .NET datatypes to unmanaged types and vice versa. I imported the signatures of two functions for creating and cancelling connections to network resources:

[DllImport("mpr.dll")]
public static extern int WNetAddConnection2(
    [MarshalAs(UnmanagedType.LPArray)] NETRESOURCE[] lpNetResource,
    [MarshalAs(UnmanagedType.LPStr)] string lpPassword,
    [MarshalAs(UnmanagedType.LPStr)] string lpUserName,
    int dwFlags);

[DllImport("mpr.dll")]
public static extern UInt32 WNetCancelConnection2(
    [MarshalAs(UnmanagedType.LPStr)] string lpName,
    UInt32 dwFlags,
    bool fForce);
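The NETRESOURCE type used in the signature above is not provided for you; you have to declare it yourself to match the native NETRESOURCE structure. A declaration along these lines should work (field order and types follow the native struct; dwType = 1 corresponds to RESOURCETYPE_DISK):

```csharp
using System.Runtime.InteropServices;

// Managed mirror of the native NETRESOURCE structure (mpr.dll / winnetwk.h).
[StructLayout(LayoutKind.Sequential)]
public class NETRESOURCE
{
    public int dwScope;
    public int dwType;        // 1 = RESOURCETYPE_DISK
    public int dwDisplayType;
    public int dwUsage;
    [MarshalAs(UnmanagedType.LPStr)] public string lpLocalName;
    [MarshalAs(UnmanagedType.LPStr)] public string lpRemoteName;
    [MarshalAs(UnmanagedType.LPStr)] public string lpComment;
    [MarshalAs(UnmanagedType.LPStr)] public string lpProvider;
}
```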

Once this was in place, putting the functions to work was quite straightforward:

private int UnMapDrive(string localname)
{
    return Convert.ToInt32(WNetCancelConnection2(localname, 0, false));
}

private int MapDrive(string mapto, string drivepath, string password, string user)
{
    NETRESOURCE[] nr = new NETRESOURCE[1];
    nr[0] = new NETRESOURCE();
    nr[0].dwType = 1; /* RESOURCETYPE_DISK */
    nr[0].lpLocalName = mapto;
    nr[0].lpRemoteName = drivepath;
    nr[0].lpProvider = null;
    int dwFlags = 1;

    int res = WNetAddConnection2(nr, password, user, dwFlags);
    if (res == 85) /* system error code 85 ~ ERROR_ALREADY_ASSIGNED - the local device name is already in use */
    {
        LogManager.LogLine(TranslateErrorCodeToDescription(res));
        LogManager.LogLine("Unmapping...");
        int unmapresult = UnMapDrive(mapto);
        if (unmapresult != 0) /* If unmapping fails, bail out. */
        {
            numErrors++;
            LogManager.LogLine("Unmapping failed with error code " + unmapresult);
            LogManager.LogLine(TranslateErrorCodeToDescription(unmapresult));
            BailOut(1);
        }
        else
        {
            LogManager.LogLine("Unmapping succeeded");
        }
        /* If unmapping succeeded, try mapping again */
        LogManager.LogLine("Retrying");
        res = WNetAddConnection2(nr, password, user, dwFlags);
    }
    return Convert.ToInt32(res);
}

This works like a charm and the approach has since been used by a couple of my colleagues.