Creating a Mess
Recently I began work on an XMPP Component Framework in C#, initially trying to reuse code from a previous version. I’m trying to do it right this time, conforming to the official specifications. This has led to a lot of rework, as the previous version was essentially based on code I’d written in 2005. Not only has XMPP continued to evolve since then, it has always been an extremely flexible protocol – you are free to implement as little as you need, and to customize as you see fit, so long as you conform to some basic protocols. My previous efforts really didn’t cover much of the official specifications and relied on a simple convention to manage method calls to components.
As I’ve bumbled my way through the specifications, I’ve found myself moving classes around, renaming them, removing some, adding others, and so on. Furthermore, there have been some fundamental code changes. For example, in the VxElement class, which all XMPP payloads and stanzas inherit from, I was using a composite pattern to handle attributes and elements, but later decided to treat them separately for simplicity of use in the descendant classes. I’m still not done hacking my way through the specifications, but I’ve created so much technical debt I’m feeling somewhat overwhelmed. I need to stop and clean up before proceeding.
But how much technical debt? How healthy is this codebase really? Well, that’s what I’m about to find out, thanks to the awesome NDepend.
What is NDepend?
NDepend is a static analysis tool for .NET managed code. The tool supports a large number of code metrics, allowing to visualize dependencies using directed graphs and dependency matrix. The tool also performs code base snapshots comparisons, and validation of architectural and quality rules. User-defined rules can be written using LINQ queries. This feature is named CQLinq. The tool also comes with a large number of predefined CQLinq code rules. Code rules can be checked automatically in Visual Studio or during continuous integration. – Wikipedia
Out of the box, NDepend comes with over 200 CQLinq code rules to identify issues in several different categories and you are free to add as many as you need. To find out more about CQLinq and rules, see here. This is an example from the NDepend website:
// <Name>Avoid too large methods</Name>
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 30
select m
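Since you can write your own rules, here is a sketch of what a custom CQLinq rule might look like – the rule name and the depth threshold are my own choices, not an NDepend default (CQLinq rules run inside NDepend, not as standalone code):

```
// <Name>Avoid types with a deep inheritance tree (custom)</Name>
// warnif count > 0 fails the rule whenever the query returns a match.
warnif count > 0
from t in Application.Types
where t.DepthOfInheritance >= 5   // threshold is my own choice
orderby t.DepthOfInheritance descending
select new { t, t.DepthOfInheritance }
```

The `select new { ... }` projection means the matching types are listed in the results panel alongside the metric that triggered the match.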
A number of graphs and diagrams are also available to help identify dependency issues and to provide a general overview of the code. This is an example of a dependency matrix showing the namespaces in an application and the number of members involved in the coupling:
NDepend’s static analysis engine provides a number of useful code metrics, including lines of code, cyclomatic complexity, debt measurement, depth of inheritance tree, lack of cohesion of methods (LCOM) and many others.
You can learn more about NDepend from their product features page, or by watching their growing collection of videos and tutorials here. This review by Gabriele Tomassetti is a good overview of the essential features, and there is a PluralSight course available here.
What I hope to achieve in this post is to test NDepend’s proposition that “NDepend allows your team to accurately gauge the overall health of your application, no matter how complex it is.” My application is a mess right now; I’m relying on NDepend to help bring some order to the chaos.
The first thing to do is add NDepend to my Visual Studio solution – the easiest way to do that is to click the NDepend tray icon and select ‘Attach new NDepend Project to current VS Solution’:
And then to select which assemblies I want NDepend to analyse:
I click the Analyze button, and after NDepend completes the analysis, I can see an HTML report and a dialog suggesting I open the NDepend interactive graph:
As expected, there are a number of issues to address; it’s not looking pretty. To learn more about them in the HTML report, expand the menu and select from the available reports.
The first thing I can see is that I have 3 failing Quality Gates and a number of broken rules. In this state, not only is NDepend informing me I have issues, it is also advising me that this software is not releasable and should not be checked in to source control.
What is a Quality Gate? (for more, see here)
A Quality Gate is a code quality goal.
Such a quality goal must be enforced before releasing, and ideally before committing to source control.
A Quality Gate can be seen as a PASS/FAIL criterion for software quality.
A dozen default Quality Gates are provided by NDepend, related to measures like the amount of technical debt, code coverage, or the number of issues of a particular severity.
Notice that the special red / yellow / green lozenge icons show Quality Gate status: fail / warn / pass.
Quality Gates operate at a higher level than Rules
What is a Rule?
A Rule outputs issues. An issue is a code smell that should be fixed to make the code cleaner and avoid potential problems. Typically the team can release to production even if some issues are still reported.
Quality Gate Reporting
Clicking on the ‘Quality Gates’ menu option leads to a very detailed view listing which quality gates failed, where and why they failed.
Interestingly, many of my issues are caused by PostSharp, a third party AOP framework:
It’s the same for the other quality gates. I’ve been meaning to remove PostSharp as I tend to do my C# development on Mac or Linux nowadays, so I’ll do that now and then re-run the analysis…
After removing PostSharp, I re-ran the analysis to produce a new report. I received similar results, but with far fewer issues, as you can see here:
This is a summary of the Quality Gate analysis.
Another very useful report is ‘Hot Spots’. It lists a number of issues that need to be addressed, organized into the following categories:
- Types Hot Spots
- Types to Fix Priority
- Issues to Fix Priority
- Debt and Issues per Rule
- New Debt and Issues per Rule
- Debt and Issues per Code Element
- New Debt and Issues per Code Element
Each issue is given a color-coded DebtRating, DebtRatio and Debt, the last being an estimate of the time it would take the team to bring the offending type back to clean code:
It’s not looking too good for me, there’s a lot of work to do. Even worse, the Object Oriented Design Report is also having a bit of a go at me. This is an excellent report providing the following details:
Rule violations are grouped by type and detailed, for example:
Many of the above issues are by design. I understand inheritance is a form of tight coupling and may make the code brittle (and in general composition is preferred), but in this case an Iq is an XMPP Stanza, is a Packet, is a PacketBase, is a VxElement; whereas Auth is a Packet, is a PacketBase, is a VxElement; and an Error is a Subpacket, is a PacketBase, is a VxElement. I could reduce the inheritance depth by pushing PacketBase into VxElement, and perhaps XMPP Stanza into Packet, but then I lose some precision. For example, an Auth Packet is not an XMPP Stanza. In frameworks it’s common to have deep hierarchies, for example Delphi’s VCL, or WinForms, but then again, Flutter has taken the composition approach. The great thing about NDepend is that it has surfaced these questions so the team can rationalize the design. I have some thinking to do.
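For concreteness, the chains described above can be sketched as empty C# classes. The type names come from my codebase as discussed in the text; the choice of which classes are abstract, and the use of `Stanza` as the class name for an XMPP Stanza, are my guesses for illustration:

```csharp
// Minimal sketch of the inheritance chains discussed above.
// Only the names come from the actual codebase; bodies are elided.
public abstract class VxElement { }        // base of all payloads and stanzas
public abstract class PacketBase : VxElement { }
public class Packet : PacketBase { }
public class Subpacket : PacketBase { }
public class Stanza : Packet { }           // an XMPP Stanza is a Packet
public class Iq : Stanza { }               // Iq -> Stanza -> Packet -> PacketBase -> VxElement
public class Auth : Packet { }             // an Auth is a Packet, but NOT a Stanza
public class Error : Subpacket { }         // Error -> Subpacket -> PacketBase -> VxElement
```

Laying the hierarchy out like this makes the trade-off visible: folding PacketBase into VxElement shortens every chain by one, but then Subpacket and Packet would share a base that also represents arbitrary payload elements.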
No excuses here though, these need to be fixed:
Abstractness vs Instability Report
Another interesting report is one that reports on the Abstractness vs Instability of your assemblies. From the report overview itself:
The Abstractness versus Instability Diagram helps to detect which assemblies are potentially painful to maintain (i.e concrete and stable) and which assemblies are potentially useless (i.e abstract and instable).
- Abstractness: If an assembly contains many abstract types (i.e interfaces and abstract classes) and few concrete types, it is considered as abstract.
- Instability: An assembly is considered stable if its types are used by a lot of types from other assemblies. In this context, stable means painful to modify.
As the report shows, my three assemblies are sitting in the instability corner, but if I am not mistaken the abstractness is looking good. I’m not too worried about the instability at this stage because once I have finished the XMPP layer, the next two layers above it will use it extensively, as will the infrastructure layer. The other layer is the entry point to the application; it will always depend downwards. But I will definitely be keeping an eye on this graph as the code evolves.
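The diagram is based on Robert C. Martin’s package-design metrics. As a rough illustration of how the numbers behind it are derived (the helper class below is my own sketch, not NDepend’s API):

```csharp
using System;

static class DesignMetrics
{
    // Instability I = Ce / (Ce + Ca), where Ce is efferent coupling
    // (types this assembly depends on) and Ca is afferent coupling
    // (types that depend on this assembly). I near 1 means nothing
    // depends on it (easy to change); I near 0 means it is heavily
    // depended upon (painful to change).
    public static double Instability(int efferent, int afferent)
        => (double)efferent / (efferent + afferent);

    // Abstractness A = abstract types (interfaces + abstract classes)
    // divided by total types in the assembly.
    public static double Abstractness(int abstractTypes, int totalTypes)
        => (double)abstractTypes / totalTypes;

    // Distance from the "main sequence" A + I = 1. Low distance is
    // balanced; concrete and stable (both near 0) is the zone of pain,
    // abstract and instable (both near 1) is the zone of uselessness.
    public static double DistanceFromMainSequence(double a, double i)
        => Math.Abs(a + i - 1.0);
}
```

An assembly that few others depend on yet (high I) stays near the main sequence as long as its abstractness is modest, which matches where my three assemblies currently sit.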
There are a number of other useful reports in the HTML view such as metrics, dependencies etc. But to address my issues I’m going to switch to the tools within Visual Studio. These tools can be activated via the dialog shown earlier which pops up after an analysis is performed, or directly from the extensions menu in VS 2019.
The Interactive Tools
First up I’m shown a dependency graph of my code. I can see my three assemblies, and every assembly they depend upon. The size of each assembly is reflected in the display, with Velocity.Xmpp having the lion’s share of the code. Clicking on an assembly opens a popup display listing properties of the assembly including code statistics and the results of the NDepend analysis, such as technical debt. Double-clicking the assembly opens a dialog box listing all the files in the assembly, clicking a file opens it in the IDE.
There is a button on the far left of the toolbar which allows you to select different types of dependency graphs, with the same interactive functionality. In each of these views you can zoom in or out, and export them should you need to.
This is a namespace dependency view, in which my inheritance issues are clearly visible:
NDepend enables you to drill further down by searching for an element and selecting it. For example, here I searched for Iq and right clicked the type under the Xmpp.Stanzas namespace in the Results Panel. Not only did I see a more detailed dependency graph for this type (not shown here), there was also a Class Information dialog, and a menu popup to perform further actions:
Choosing the ‘Select Issues…’ menu option opened the Queries and Rules Explorer, led me straight to the first issue with this class, and opened the file in the IDE ready for me to begin fixing the issue.
Once I selected a specific issue in the Queries and Rules Explorer, the results panel to the right updated with a list of all the types with that particular issue, and the rule description box updated accordingly. This is a great way to navigate between code and issues. The following shows two different issues, the first found in 45 types, the second found in only 1 type. The rule description is very useful for issue resolution.
The view which pulls all this together is the Dashboard view. It couldn’t be easier to work through each of the issues, get information about the broken rules, jump to your code – all from a single view and the click of a button. This view also provides a number of graphs detailing the overall health of your project and can be configured to suit your needs either by adding new Graphs, such as a Trend Chart, or removing others.
This is how I will proceed to fix up my code, issue by issue via the Dashboard view.
Whether or not I reduce the inheritance hierarchy I’m not sure yet, but I have tons of other issues to work through whilst I think about it.
So far PostSharp has been removed, the inheritance model flagged for review, and I’ve begun working through the list of coding issues. Thanks to NDepend’s interactive feedback, I’m confident of getting this project under control quickly.
Once happy, I’ll create a baseline from which to track future quality issues.
Another nice thing about NDepend is it is very configurable.
If you disagree with a particular issue, or decide to allow it to remain for whatever reason, you can suppress the warning. This allows you to create a custom baseline for your solution from which all future issues will be reported.
For example, using a Singleton is generally frowned upon nowadays, and NDepend correctly flags it as a violation in the Object Oriented Analysis report, but not everyone agrees. There may be valid uses, such as representing an Application or Session object; perhaps it’s an older legacy code base, or you rely on third-party code. To be honest, I’d prefer to be notified of this violation and deal with it accordingly, which is NDepend’s default behavior.
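For illustration, this is the general shape of code such a rule flags – a classic, hypothetical singleton of my own, not taken from my codebase:

```csharp
// A classic singleton: a sealed class exposing one static instance.
// Design rules flag this pattern because the global state it creates
// hides dependencies from callers and makes testing harder.
public sealed class Session
{
    // The single, eagerly created instance, shared by all callers.
    public static readonly Session Instance = new Session();

    // Private constructor prevents any other instantiation.
    private Session() { }
}
```

A common way to keep the behavior while satisfying the rule is to register a class like this with a dependency-injection container as a single shared instance, moving the lifetime decision out of the class itself.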
Baselines allow you to have that discussion once, fix the issue or suppress it, then move forwards from there. To learn more about baselines and reporting code diff, see here.
There is a whole lot more NDepend can do, a quick look at the NDepend extensions menu and its tools submenu reveals a lot of the functionality and a list of possible third party integrations:
It is an incredibly useful tool which identifies not only coding issues, but design issues for the team’s consideration. It provides a simple but comprehensive method for managing the quality of your code. I especially like the quality gates which make it clear the code is not ready to release.
I have heard of NDepend’s benefits for teams, but I can also say from my own experience it is very helpful for individual developers. It’s like having a mentor do a code review for you – whenever you want, as often as you want. Or like pair programming by yourself, because the reports really do provoke that kind of dialogue: should I refactor this and reduce the inheritance depth? Why is this assembly reported as useless? Through the comprehensive reporting and interactive functionality, I really feel like I am on top of my code. There are issues, fine… let’s fix them.
The answer to my question “[Does] NDepend allow your team to accurately gauge the overall health of your application, no matter how complex it is?” is an emphatic yes!