Wednesday, June 24, 2009

Composite WPF ("Prism") DelegateCommand.CanExecuteChanged Memory Leak

I was happy to hear that the Microsoft Patterns and Practices Team has picked up this Composite WPF ("Prism") bug report that I submitted several weeks ago. From the issue description:

When profiling my application I noticed that plenty of EventHandlers had never been deregistered from DelegateCommand's CanExecuteChanged-Event. So those EventHandlers were never garbage-collected, which caused a severe memory leak.

As registering CanExecuteChanged-EventHandlers is done outside application code scope, I had expected them to be deregistered automatically as well. At first I thought this might just be a third-party WPF control issue, but digging further I read a blog post stating that "WPF expects the ICommand.CanExecuteChanged-Event to apply WeakReferences for EventHandlers". I had a look into RoutedCommand, and noticed it uses WeakReferences as well.


Now this is no showstopper for us as we are very early in the development cycle, and we simply patched DelegateCommand in the meantime by using WeakReferences for its CanExecuteChanged-EventHandlers.

As far as I know the issue has been fixed already, and it is currently going through testing.
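For illustration, here is a minimal sketch of the weak-subscription idea. This is neither our interim patch nor Prism's actual fix, and the WeakDelegateCommand name is made up; it just shows a command storing its CanExecuteChanged subscribers as WeakReferences so that it no longer keeps the subscribing controls alive.

using System;
using System.Collections.Generic;
using System.Windows.Input;

// Sketch only: an ICommand that holds its CanExecuteChanged handlers weakly.
public class WeakDelegateCommand : ICommand
{
    private readonly Action execute;
    private readonly Func<bool> canExecute;
    private readonly List<WeakReference> handlers = new List<WeakReference>();

    public WeakDelegateCommand(Action execute, Func<bool> canExecute)
    {
        this.execute = execute;
        this.canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged
    {
        add { handlers.Add(new WeakReference(value)); }
        remove { handlers.RemoveAll(wr => Equals(wr.Target, value)); }
    }

    public bool CanExecute(object parameter)
    {
        return canExecute == null || canExecute();
    }

    public void Execute(object parameter)
    {
        execute();
    }

    // Prune dead references and invoke the handlers that are still alive.
    public void RaiseCanExecuteChanged()
    {
        handlers.RemoveAll(wr => !wr.IsAlive);
        foreach (WeakReference wr in handlers.ToArray())
        {
            EventHandler handler = wr.Target as EventHandler;
            if (handler != null)
            {
                handler(this, EventArgs.Empty);
            }
        }
    }
}

One caveat of plain weak references: the subscriber has to keep its handler alive itself, otherwise the handler may get collected prematurely; WPF controls hooking ICommand.CanExecuteChanged typically do hold on to their handlers.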

Wednesday, June 17, 2009

NHibernate Criteria-Query: Child Collection Not Populated Despite FetchMode.Join When Criteria Exists For Child Table

Example taken from NHibernate bug report #381:


session.CreateCriteria(typeof(Contact))
    .Add(Expression.Eq("Name", "Bob"))
    .SetFetchMode("Emails", FetchMode.Join)
    .CreateCriteria("Emails")
        .Add(Expression.Eq("EmailAddress", "Bob@hotmail.com"))
    .List();



The resulting SQL includes a Join to Emails as expected, and the resultset returned by the database is fine, but within the object model Contact.Emails does not get populated with that data. This means that once Contact.Emails is accessed in code, lazy loading kicks in, which probably was not the coder's intention. This is not the case when

CreateCriteria("Emails")
.Add(Expression.Eq("EmailAddress",
"Bob@hotmail.com"))


is omitted.

The bug report was closed without a fix, but contained a comment that "According to Hibernate guys this is correct behavior" and a link to Hibernate bug report #918.

To me that does not sound completely implausible. Hibernate's interpretation of this criteria tree is that the Emails criteria is meant to narrow down the Contact parent rows, not the Email child rows. HQL queries act just the other way around: under HQL, additional join-with or where expressions can limit which child rows are loaded into the child collection. I know that HQL - in contrast to Criteria queries - does not apply the fetching strategy defined in the mapping configuration, but with an explicit FetchMode.Join I would have expected the Criteria query to do the same.
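For comparison, here is a hedged HQL sketch, assuming the same Contact/Emails model as in the bug report example above. Here the where condition on the fetched Emails alias does filter which child rows end up in Contact.Emails, which is exactly why such filtered fetch joins are usually discouraged:

IList contacts = session.CreateQuery(
        "from Contact c " +
        "left join fetch c.Emails e " +
        "where c.Name = :name and e.EmailAddress = :email")
    .SetString("name", "Bob")
    .SetString("email", "Bob@hotmail.com")
    .List();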

Apparently under the Criteria API this can be worked around by applying an outer Join (which of course is somewhat semantically different):

session.CreateCriteria(typeof(Contact))
    .Add(Expression.Eq("Name", "Bob"))
    .CreateCriteria("Emails", JoinType.LeftOuterJoin)
        .Add(Expression.Eq("EmailAddress", "Bob@hotmail.com"))
    .List();


That seems kind of inconsistent compared to the inner join scenario, and there is even a Hibernate bug report on that.

What I would recommend anyway: if the goal is to narrow down the parent data, but then fetch all the children, why not apply an Exists subquery for the narrowing, and in the same query fetch-join all children without further filtering (see the sketch below). Or, if you prefer lazy loading, simply define fetch="subselect" on the association.
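Here is a hedged sketch of the Exists variant. The Email class, its Contact back-reference and the Id identifier properties are assumptions derived from the Contact/Emails example above, so the property paths need to be adjusted to the actual mapping:

DetachedCriteria matchingEmail = DetachedCriteria.For(typeof(Email), "e")
    .SetProjection(Projections.Id())
    .Add(Expression.Eq("e.EmailAddress", "Bob@hotmail.com"))
    .Add(Expression.EqProperty("e.Contact.Id", "c.Id")); // correlate child with parent

session.CreateCriteria(typeof(Contact), "c")
    .Add(Expression.Eq("c.Name", "Bob"))
    .Add(Subqueries.Exists(matchingEmail))   // narrow the parents only
    .SetFetchMode("Emails", FetchMode.Join)  // then fetch ALL of their Emails
    .List();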

On a related topic, eagerly joining several child associations has the drawback that the resultset consists of a cartesian product over all children - lots of rows with duplicate data. Let's say there are three child associations A, B and C with 10 rows each for a given parent row; joining all three associations blows up the resultset to 1 x 10 x 10 x 10 = 1000 rows, when only 1 + 10 + 10 + 10 = 31 rows would be needed.

And while those duplicates only lead to duplicate references in the object model (not to duplicate instances), and even those duplicate references can be eliminated by using Maps or Sets for the child collections, such Joins still impose a severe performance and memory penalty at the database and ADO.NET level.

Of course one could simply issue N single select statements, one for each table, with equivalent where-clauses. But that implies N database roundtrips as well. Not so good.

The means to avoid this are NHibernate's Criteria and HQL MultiQueries. Gabriel Schenker has posted a really nice article on MultiQueries with NHibernate.
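A hedged sketch of the MultiCriteria variant (the entity name Parent and the collection names As/Bs/Cs are made up): one criteria per child collection, each with its own fetch join, all executed in a single database roundtrip.

IMultiCriteria multi = session.CreateMultiCriteria()
    .Add(session.CreateCriteria(typeof(Parent))
        .Add(Expression.Eq("Name", "Bob"))
        .SetFetchMode("As", FetchMode.Join))
    .Add(session.CreateCriteria(typeof(Parent))
        .Add(Expression.Eq("Name", "Bob"))
        .SetFetchMode("Bs", FetchMode.Join))
    .Add(session.CreateCriteria(typeof(Parent))
        .Add(Expression.Eq("Name", "Bob"))
        .SetFetchMode("Cs", FetchMode.Join));

// One result list per added criteria; the session's identity map returns the
// same Parent instances each time, now with As, Bs and Cs initialized, at
// roughly 3 x (1 + 10) rows instead of a 1000-row cartesian product.
IList resultSets = multi.List();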


Tuesday, June 16, 2009

My Amazon Listmania Lists

I had nearly forgotten about them, and was surprised to see that over time nearly 25,000 people have viewed my Amazon Listmania Book Recommendation Lists. Hey, I should have received a commission! ;-) Subtle hint: the J2EE list is a little bit outdated by now.

Friday, June 12, 2009

Visualizing TFS Work Items With Treemaps

Microsoft Team System / Team Foundation Server is a really nice line of products. Besides version control we heavily rely on TFS Work Items for organizing development tasks. One of our largest projects is conducted using Scrum, so we are utilizing Conchango's Scrum Plugin for Team System, plus the Conchango Taskboard for Sprint planning. The Taskboard is better suited for this than the general-purpose Work Item lists and forms that are part of Visual Studio Team Explorer. Let's compare.

Visual Studio Work Item list:



Conchango Taskboard:



From a certain project size on, the Visual Studio Work Item lists just don't scale; you end up with heaps of data that you can scroll through forever. Don't get me wrong, those lists are sufficient for standard tasks, but they are cumbersome for gaining insight into the project's big picture. And the Conchango Taskboard is for Sprint planning and Product Backlog maintenance only. The Conchango Scrum Plugin does ship with a set of really nice reports, though.

So this is where I decided to ramp up my own little solution, which would be based on rendering Work Item data into Treemaps. This week I hacked out a little prototype in my spare time (working title "Aurora"):


(this is an old screenshot that is still missing labels on the treemap blocks)

This configuration example provides an overall impression of the sample project's progress: green tasks are done, blue tasks are not done, and their size represents their complexity. And this is by no means limited to Scrum projects; it works for all kinds of TFS project templates.

Three simple input parameters are all it takes:

  • Work Item type (e.g. Product Backlog Item, Sprint Backlog Item, Bug)
  • Work Item attribute defining Treemap size (e.g. Storypoints or any other numeric data, or none in case all items should be rendered with equal size)
  • Work Item attribute defining Treemap color (e.g. State, Sprint ID, etc)

Plus an optional query string for narrowing the list of items.
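To make that concrete, here is a hedged sketch of how those inputs could be turned into work item data for the treemap using the TFS 2008 client API. The TreemapItem type, the method shape and the example field handling are assumptions for illustration, not Aurora's actual code.

using System;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// Illustrative data carrier for one treemap block.
public class TreemapItem
{
    public string Title;
    public double Size;
    public string ColorKey;
}

public static class WorkItemTreemapSource
{
    public static IList<TreemapItem> Load(string serverUrl, string project,
        string workItemType, string sizeField, string colorField, string extraWhere)
    {
        TeamFoundationServer server = TeamFoundationServerFactory.GetServer(serverUrl);
        WorkItemStore store = (WorkItemStore)server.GetService(typeof(WorkItemStore));

        // Build a WIQL query from the three inputs plus the optional filter.
        string wiql = string.Format(
            "SELECT [System.Id] FROM WorkItems " +
            "WHERE [System.TeamProject] = '{0}' AND [System.WorkItemType] = '{1}'{2}",
            project, workItemType,
            string.IsNullOrEmpty(extraWhere) ? "" : " AND " + extraWhere);

        List<TreemapItem> items = new List<TreemapItem>();
        foreach (WorkItem workItem in store.Query(wiql))
        {
            object size = string.IsNullOrEmpty(sizeField)
                ? null
                : workItem.Fields[sizeField].Value;

            items.Add(new TreemapItem
            {
                Title = workItem.Title,
                Size = size == null ? 1.0 : Convert.ToDouble(size),
                ColorKey = Convert.ToString(workItem.Fields[colorField].Value)
            });
        }
        return items;
    }
}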

This approach possibly allows me to visualize about 70% of the reports I could think of. I am still wondering how to implement the missing 30%, as they cannot be covered that easily. For instance I want to group Area Paths with equal prefixes by rendering them with the same color. Or simplify the creation of queries (I can't expect everyone to know WIQL by heart). And I don't want to over-complicate things either. Any suggestions regarding those matters are highly welcome! Another requirement is to let the user define the color mapping. And item hierarchies are still missing, too (that's the "Tree" in "Treemaps" after all).

By the way, I am using woopef's WPF TreeMap control; thanks a lot for making it publicly available. I am also going to open-source Aurora once it provides basic functionality and reaches a certain level of stability, most likely on CodePlex.

Monday, June 08, 2009

Vanilla Data Access Layer Library 0.6.0 Released

I have just updated Vanilla DAL on Sourceforge. Release 0.6.0 is still in Alpha state, and comes with improved automatic transaction management. I chose an approach similar to System.Transactions.TransactionScope. Of course it is nothing as sophisticated: Vanilla DAL's TransactionScope is a simple IDisposable object that wraps a thread-bound transaction and does some reference-counting. The transaction is committed when the last TransactionScope is disposed, or rolled back in case of any exception during execution:


using (accessor.CreateTransactionScope())
{
    accessor.Update(new UpdateParameter(northwindDataset.Customers));
    accessor.ExecuteNonQuery(new NonQueryParameter(new Statement("DeleteTestCustomers")));
}


My main problem was how to find out whether the current call to IDisposable.Dispose() happens within the process of exception unwinding. Several people have recommended Marshal.GetExceptionPointers(), which is the only working solution I have found so far. But I consider this a semi-hack. Any better ideas?
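To make the pattern concrete, here is a hedged sketch of the idea (the names are illustrative, not Vanilla DAL's actual internals): a reference-counted, thread-bound scope whose Dispose() uses Marshal.GetExceptionPointers() to decide between commit and rollback.

using System;
using System.Data;
using System.Runtime.InteropServices;

// Sketch only: reference-counted transaction scope per thread.
public sealed class SimpleTransactionScope : IDisposable
{
    [ThreadStatic] private static IDbTransaction current;
    [ThreadStatic] private static int refCount;

    public SimpleTransactionScope(IDbConnection connection)
    {
        if (refCount == 0)
        {
            current = connection.BeginTransaction();
        }
        refCount++;
    }

    public static IDbTransaction Current
    {
        get { return current; }
    }

    public void Dispose()
    {
        refCount--;
        if (refCount > 0)
        {
            return;
        }

        // Semi-hack: a non-null exception pointer means we are being disposed
        // while an exception is unwinding, so roll back instead of committing.
        if (Marshal.GetExceptionPointers() != IntPtr.Zero)
        {
            current.Rollback();
        }
        else
        {
            current.Commit();
        }
        current.Dispose();
        current = null;
    }
}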

By the way, Luis Ramirez has written a nice article on Codeproject.Com, comparing Vanilla DAL to plain ADO.NET, Microsoft Data Access Application Block and SqlNetFramework.

Sunday, June 07, 2009

Code Coverage Analysis With QmCover

My cousin has developed a graphical code coverage analysis tool for the GNU toolchain. Now I am not really up-to-date regarding the current state of code coverage tools in Unix land (during my old Solaris days all I needed was vi and gcc), but QmCover certainly looks cool!