I found the talks at The International Conference Of Software Archaeology interesting and thought-provoking. Thanks to Robert Chatley and Duncan McGregor (below) for putting this new conference together, and to Tim Mackinnon, who helped keep things running smoothly on the day.
From Wikipedia: archaeology is “the study of human activity in the past, primarily through the recovery and analysis of the material culture and environmental data that they have left behind, which includes artifacts, architecture, biofacts and cultural landscapes.”
As programmers, it’s natural to use our coding skills to do the digging. We love spotting problems and patterns, so trying to see what’s been done in the past is pretty fascinating. As speakers pointed out, our findings in the code are merely clues to what’s going on outside it; we need to add a lot of context to make sense of them.
When looking at graphs of various metrics over time, I wanted to know more about the organisational structure, locations of people and tooling. Some of the talks correlated source code changes with defects, others with team size and contributors. It might also be handy to know which design approaches and modelling techniques were applied during the lifetime of a codebase, such as TDD and UML.
Aside from code, there are other potentially rich sources of information we could dig into that may help us understand the culture and environment the code was developed in. Other artefacts that might be interesting: email, calendars, budget cycles, user feedback logs, support tickets and revised plans. Perhaps these are a little less attractive to explore as they’re more fiddly to analyse with code.
Clearly some of the metrics presented could be used as evidence to support efforts for code improvement. After seeing quite a few visualisations of code over time, I also found myself wondering whether it's that useful to look at shadows of the organisation in the code. We might get a different perspective from observing and interviewing teams while development is ongoing (ethnography). Although, as someone (possibly Nat Pryce) pointed out, "Code doesn't lie".
Snazzy graphs can be useful to grab management attention (as we already have plenty of competing problems to fix, such as code that doesn’t work as expected and poor user experience). It's typically difficult to get across to non-technical stakeholders just how complex a codebase is and how messy things have become over time.
As a coach, I’d advocate spending time talking to programmers before crunching the codebase into pictures. Solutions to any problems lie in the world of people: the programmers and the businesses they work in. If we are to improve the code, understanding their current concerns is a useful place to start. But I would say that, wouldn’t I? ;-)