Intellect Concept Viability Workshop Review – Digital Forensics


15 August 2013

Last week I was fortunate enough to attend a Concept Viability Workshop at Intellect, where the Home Office CAST (Centre for Applied Science and Technology) requested the collective input of a number of members of the Digital Forensics (herein referred to as DF) community, tasked with addressing the issues of testing and validating forensic software and processes, and the lack of standardisation in the area. This was a golden opportunity for DF practitioners like me, as tool and process validation has been the ‘Pandora’s box’ of DF for some time. The following sections outline some of the issues addressed and will hopefully provoke further thought and insight in the community.

Digital Forensics (DF) vs. Traditional Forensics 

The tools DF practitioners use are plentiful and varied, not to mention updated frequently, which immediately presents validation issues. One of the points raised was the goal of moving DF towards the same degree of specificity shown in more traditional forensic disciplines like fingerprinting or DNA.

Whilst this is a good aim to strive toward, the practicality of it makes it near impossible to achieve or maintain. Traditional forensic disciplines like the aforementioned examples deal in near enough ‘black and white’, true/false answers. Whilst our tools and knowledge allow us to prove, or say with certainty, a lot about the data we analyse, there are far too many unknowns in DF to categorically place it alongside traditional forensics. DNA and fingerprints do not change; DF changes nearly every day. The word ‘forensic’ itself, as an adjective, can be defined as ‘relating to, used in, or appropriate for courts of law, or to stand up to public debate’, and some may go as far as to argue that, because of these rapid changes, ‘Digital Forensics’ is almost undeserving of its title.

At the rate technology and software advance, having all the facts about digital evidence, as opposed to ‘what the data suggests’ or ‘what the data appears to indicate’, is an ongoing battle that I don’t see being won any time soon.

Software Validation

DF software ranges from niche and bespoke tools created for specific purposes, to large and complex forensic suites that can see an investigation through from acquisition to final report.

Whilst the former has its place and we as practitioners like to have our ‘go to’ tool for a set task, can we really be 110% sure that we understand how the tool is obtaining its results, and should we be relying on those results in court? Whilst many of us believe we know the innermost workings of a given tool and would put our hands on our hearts to say it’s reliable, the honest answer is probably no, we cannot. In the absence of a nationally recognised and mediated standard or accreditation for validation, the onus for the acceptance of digital evidence falls heavily on the competency of the individual before tool validation is even considered. Is this backwards? Should we not first be able to take confidence in our software before our interpretation and experience are brought into question? Or is this the right way round, with the competency of the individual judged in part on the tools we choose to use?

In an ideal world, yes, we should be able to place unconditional trust in our tools. However, with larger forensic suites that carry out a multitude of analyses simultaneously, coupled with the rate of change of technology and of the software trying to keep up with that very technology, I don’t currently see a viable way of ensuring that every aspect of every tool could be validated.

An interesting point of discussion was the re-validation of existing tools, or of functions within tools. Take a hypothetical example: a given tool can analyse and extract the metadata from Microsoft Office XP files. A couple of years and several product revisions later, the same tool now analyses Office 365 artefacts and is portrayed as still being compatible with every version of Microsoft Office in between. In each revision of the software, is the Office XP functionality still being tested to ensure no later developments have affected the original extraction and interpretation process? There have been cases where tools have been developed to the point that legacy functionality is actually broken and subsequently reports false results. If we are reaching a point where our own software can suffer from delusions of grandeur, then something needs to change. To complete the story of our example, say the current version of that tool is then used in court to give evidence on Office XP files. Would anyone even know or think to question the tool? Most likely not.
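For what it’s worth, the kind of check being described here is essentially a regression test run against a fixed reference corpus on every release. The sketch below is purely illustrative: the extract_metadata() function, the corpus layout and the expected-results file are all my own assumptions, not features of any real product.

```python
# A minimal sketch only: regression-testing legacy functionality against a
# reference corpus. extract_metadata(), the corpus layout and the baseline
# file are hypothetical assumptions for illustration.
import hashlib
import json
from pathlib import Path

CORPUS_DIR = Path("corpus/office_xp")                            # known Office XP exhibits
EXPECTED = json.loads(Path("corpus/expected.json").read_text())  # agreed baseline results


def sha256(path: Path) -> str:
    """Hash each exhibit so drift in the corpus itself is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def check_legacy_extraction(extract_metadata) -> list:
    """Re-run on every release: results for the old format must match the baseline."""
    failures = []
    for doc in sorted(CORPUS_DIR.glob("*.doc")):
        baseline = EXPECTED[doc.name]
        if sha256(doc) != baseline["sha256"]:
            failures.append((doc.name, "corpus file has changed"))
            continue
        result = extract_metadata(doc)  # the function under validation
        if result != baseline["metadata"]:
            failures.append((doc.name, {"expected": baseline["metadata"], "got": result}))
    return failures
```

Run against every new build, a check along these lines would flag the ‘silently broken legacy functionality’ scenario above long before the tool’s output reaches a courtroom.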

The Validation Scapegoat

So who should take responsibility for this, and how do we go about implementing it? There was no hard and fast answer, but one of the ideas put forward for further thought was whether some of the validation workload for each tool could be distributed through inter-laboratory testing. This could possibly be achieved in academia, with each institution testing and validating its respectively assigned tasks and tools. I imagine a plan such as this would need to be at least partially government funded, as the sole financial burden would be too heavy for academia alone. If there were some sort of accreditation or standard that could be mediated (possibly by CAST?), then recognition of the contributions of the various institutions could be obtained. A centralised database would then need to be maintained and policed, recording every tool and its current ‘validation status’.
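Purely as an illustration of what a per-tool entry in such a database might record, something like the record below would be a starting point. The fields are my own assumptions, not anything proposed at the workshop or by CAST.

```python
# Illustrative only: one possible shape for an entry in a centralised
# validation-status database. All field names are assumptions.
from dataclasses import dataclass


@dataclass
class ValidationRecord:
    tool_name: str                 # e.g. "ExampleForensicSuite" (hypothetical)
    tool_version: str              # exact build that was tested
    function_tested: str           # e.g. "Office XP metadata extraction"
    tested_by: str                 # institution that performed the inter-lab test
    test_corpus_ref: str           # identifier of the shared reference data set
    passed: bool                   # outcome against the agreed expected results
    valid_until_version: str = ""  # re-test required once the tool moves past this
    notes: str = ""                # caveats, limitations, environment details
```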

I agree with others present who noted that it is unlikely SMEs would want to actively commit to a lifetime of what I would consider validation purgatory: one that would inevitably be immensely time-consuming and costly, with no financial return.

Some may say that ISO 17025 encapsulates everything noted here, and that we’re trying to solve a problem that has already been addressed. Whilst ISO 17025 is a gargantuan effort to both establish and maintain, it does not fully address every point raised here, and it was certainly discussed at length during the day. In my opinion, if ISO 17025 were the answer to our digital prayers, workshops such as this one would not have been organised.

Author: Will Hunt – 7Safe Computer Forensic Consultant & Course Manager.

To find out more about computer forensics, or how 7Safe can provide evidence-based analysis of your data, contact us.

 
