A case study in automated records management

Automation is essential if government agencies are to meet their growing audit, FOI, eDiscovery, security and other regulatory obligations. 

In 2019, Castlepoint engaged with a small specialist Federal agency. The agency had been running a manual sentencing process for two years, which involved reviewing content in a legacy shared drive and classifying it against AFDA Express and their Records Authority (RA). It wanted to explore an automation solution as an alternative. 

We implemented Castlepoint and ran it across the same drive, registering, classifying and sentencing the content using Artificial Intelligence (AI). 

In parallel, we selected a random sample of files, and undertook a separate manual sentencing activity by a qualified records manager, blind to the results of both the original manual activity and the Castlepoint audit. 

Cost of sentencing 

The previous manual process had involved: 

  • Gaining access to required shares 
  • Reviewing the documents  
  • Mapping the documents based on topics to an applicable RA 
  • Manually calculating disposition date, based on Class and last-modified date 
  • Renaming the individual document to include the disposition year and Class 

Approximately 1 million items were sentenced in this way. Completing the sentencing of all 4.6 million items in the drive at that rate was projected to take 223 weeks, equivalent to about A$300,000 in APS6 FTE salary. 

Our team installed Castlepoint and used it to run an automated classification and sentencing process. The high-level process looks like this: 

  1. Install the software and connect to the data store 
  2. Register every file and relate it to its aggregation 
  3. Read every word in every item, regardless of format or length 
  4. Extract key phrases and named entities from each item using AI 
  5. Map the key phrases to the Records Authorities to determine all potential matches 
  6. Identify which of the applicable classes has the longest retention 
  7. Sentence the record against that Class, and calculate disposition date based on its metadata 

After installation, the audit took 30 days to run across all 4.6 million items (average 40,000 items a day, or 280,000 per week – one every two seconds). Overall, we reduced the cost of processing by 95% per item. 
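
The sentencing logic in steps 5 to 7 can be sketched in a few lines. This is a minimal illustration of the approach, not Castlepoint's implementation; the class numbers, trigger phrases and retention periods below are invented for the example.

```python
from datetime import date

# Hypothetical Records Authority ruleset: key phrase -> (class number, retention in years).
# Real RA classes and triggers would come from AFDA Express and the agency's RA.
RA_RULES = {
    "grant agreement": ("20344", 10),
    "meeting minutes": ("20314", 7),
    "travel booking": ("20334", 2),
}

def sentence_item(key_phrases, last_modified):
    """Match extracted key phrases against the RA, keep the class with the
    longest retention, and calculate the disposition date from metadata."""
    matches = [RA_RULES[p] for p in key_phrases if p in RA_RULES]
    if not matches:
        return None  # no class matched; refer to a records manager
    class_id, retention_years = max(matches, key=lambda m: m[1])
    # naive year arithmetic; a real implementation would handle 29 February
    disposition = date(last_modified.year + retention_years,
                       last_modified.month, last_modified.day)
    return class_id, disposition

print(sentence_item(["meeting minutes", "grant agreement"], date(2019, 3, 1)))
# -> ('20344', datetime.date(2029, 3, 1)): the longest-retention class wins
```

Note the "longest retention wins" rule in step 6: when an item plausibly matches several classes, the conservative choice is the one that keeps it longest.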

The other huge benefit of automation is extensibility. We ran some other key use cases with Castlepoint, with the following projected results: 

  • Discovery: A$401,535.82 down to A$10,018.40 per annum (a 97.5% reduction) 
  • Disposition: A$1,867.56 down to A$27.33 per action (a 98.5% reduction) 
  • Identifying redundant items: A$85.42 down to A$2.66 per record (a 97% reduction) 
  • Reporting and auditing: A$1,936.63 down to A$4.56 per event (a 99.8% reduction) 

Our projected return on investment for Castlepoint was full cost recovery in under one month. 

One thing to note is that manual classification should still be the gold standard. No AI can make inferences and assessments about the content and context of a document with the sophistication of a human brain. 

But the benefits we can achieve from having human evaluators are quickly undermined by the sheer scale of the problem. To categorise all of these records in the timeframe, the manual sentencer had to sentence on average 20,970 items a week, or nine items per minute (one every seven seconds). There is no time, in this model, to read and understand each document. 

As a result, our validation exercise identified some key issues. 

Under-classification 

We found that 75% of the manually classified records in the sample were potentially under-classified, and should be retained 40% longer than currently planned. This under-classification was caused by assessing records based on their title and a (necessarily) quick scan of their content, which did not allow the sentencer to identify small (but key) portions of text that elevated an item from, for example, a 7-year class to a 10-year class. 

We also found that a lot of content was assigned a Normal Administrative Practice (NAP) classification for ad hoc deletion, as part of a necessary strategy to expedite the sentencing activity. Items were marked as NAP on risk-based grounds, such as their format. Castlepoint’s assessment of the records marked as NAP indicated that most actually needed to be retained. 

Castlepoint had a 100% success rate in retention application when compared against the in-depth sentencing performed by the records manager. Castlepoint also used the retention on individual items to calculate the retention of the whole aggregation, reducing the number of disposition actions required. 

Assessment gaps 

The sentencer could not open some types of files, meaning they could not be sentenced at all. Attachments in emails couldn’t be opened, and hidden files, system files, files with overlong names, and zip files were excluded. In this share, the most common file types included .properties, .bat, .html and .gz, which the sentencer couldn’t read. Castlepoint was able to read all of these. 
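
As a sketch of why machine reading closes this gap: many of these "unopenable" formats are plain text, and compressed ones like .gz only need a decompression step before their content is readable. The file name and content below are invented for the example.

```python
import gzip
from pathlib import Path

def extract_text(path):
    """Return readable text from an item, decompressing .gz archives that a
    manual reviewer could not open directly. .properties and .bat files are
    already plain text, so they fall through to a normal read."""
    p = Path(path)
    if p.suffix == ".gz":
        with gzip.open(p, "rt", errors="replace") as f:
            return f.read()
    return p.read_text(errors="replace")

# demonstration: a compressed file of the kind the manual process excluded
Path("server.log.gz").write_bytes(gzip.compress(b"contract variation approved"))
print(extract_text("server.log.gz"))  # contract variation approved
```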

Processing issues 

We found that the requirement to rename files introduced some classification mistakes through typographical errors: transposing two digits changed the applied Class, and with it the sentence (e.g., 20314/20334 vs. 20344). Opening a file to appraise it can also change its metadata, which affects the sentence calculation, so care must be taken here. Castlepoint avoided both issues by maintaining a standalone register of all items and never modifying the source. 
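
If renaming is unavoidable, one partial mitigation is to validate the class number embedded in each filename against the authoritative class list. The sketch below assumes a hypothetical `<name>_<class>.<ext>` naming convention and invented class numbers; it also shows why this check is only partial.

```python
# Assumed set of valid RA class numbers (illustrative values only).
VALID_CLASSES = {"20314", "20334", "20344"}

def embedded_class(filename):
    """Pull the class number the sentencer typed into the filename,
    assuming a '<name>_<class>.<ext>' convention."""
    stem = filename.rsplit(".", 1)[0]
    return stem.rsplit("_", 1)[-1]

def class_is_valid(filename):
    """Catch typos such as transposed digits, but only when the result is not
    itself a real class: '20341' is caught, but '20334' typed in place of
    '20344' is not. A standalone register avoids the problem entirely."""
    return embedded_class(filename) in VALID_CLASSES

print(class_is_valid("budget-2029_20341.docx"))  # False: transposed digits
print(class_is_valid("budget-2029_20344.docx"))  # True
```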

Limited defensible data 

Disposition decisions need to be defensible. When we give a business owner a file name, out of context with its other related records, they are not able to make informed decisions about whether the sentence is appropriate without themselves also reviewing the document. Castlepoint provided the key phrases that were used to make the decision, so the owner could simply review these to validate the sentence. 

Risk and value 

Class isn’t the only consideration when it comes to disposition. We also need to know if the content could be subject to a retention hold or disposition freeze. We need to know if it relates to any key work that is ongoing, as it may still have real value to the organisation. On the flip side, we need to know if it’s a risky item. If it contains PII, details about Spent Convictions, sensitive commercial or other confidential information, or classified information, it may need to be disposed of more expeditiously (and handled differently) than less sensitive items. 

Castlepoint identified over 500 items with sensitive content, and over 2,500 items subject to a freeze or hold. It also flagged actionable events, including deletions, classification downgrades, unauthorised modifications, or any other action we wanted visibility of. We also created a taxonomy of ‘high value’ terms, so that the agency can easily see and protect information that is of interest to the executive, key projects, or current regulatory activities. 
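
Sensitive-content flagging of this kind can be approximated with pattern rules run over the extracted text. The sketch below is a minimal illustration; the patterns are invented for the example and are not Castlepoint's ruleset, which would be far broader.

```python
import re

# Illustrative risk patterns only; a production ruleset would cover many more.
RISK_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "tax_file_number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),  # Australian TFN shape
    "spent_conviction": re.compile(r"spent conviction", re.IGNORECASE),
}

def flag_risks(text):
    """Return the names of all risk rules that match anywhere in an item's text."""
    return sorted(name for name, rx in RISK_PATTERNS.items() if rx.search(text))

print(flag_risks("Contact j.citizen@example.gov.au re spent conviction disclosure"))
# -> ['email_address', 'spent_conviction']
```

Flags like these support triage, not automatic action: a matched item is surfaced for a human to decide on handling.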

Summary 

Artificial Intelligence has the advantage and relative luxury of being able to read every single word in a document, extremely quickly, and can scan and re-scan 24 hours a day without a break. AI doesn’t suffer from decision fatigue, or compassion fatigue, eye strain, or even a sore finger from clicking and scrolling. AI is a machine, and we can use that to our advantage to do the heavy lifting for us. Making the machine read all of the words, and apply all of the rules, frees up our subject matter experts in records management to add real value, and more easily make decisions. 

So automation can help us make sure that: 

  • The sentence is always current 
  • We sentence the whole aggregation, in context 
  • We can layer our sentencing decisions easily with evidence about classification, holds/freezes, low-value, high-value, or high-risk information 
  • We stop wasting records manager time and business owner time trying to read every document in order to make good decisions 
  • We can let users use any system, in any way they want – we don’t need to get in the way of their work, as automation can process so much, and so quickly, that there is no need to constrain usage scenarios (or enforce metadata rules) to make the outputs more useful. 

Records automation traps 

  • Avoid complex rules engines. If it requires ongoing records-manager effort to maintain, or changes in user behaviour to operate, it’s not effective automation. 
  • Don’t dumb down your governance. If you need to heavily simplify or water down your record-keeping obligations, it’s a sign your automation system can’t handle complexity well. 
  • Don’t auto-archive items. Trimming old files off a record as they age just detracts value and meaning from the aggregation. Sentence your records as a unit. 
  • Be careful automatically moving or copying records out of their context. Archiving a record at the end of its life can strip off its metadata, versions, audit trail, and contextual relationships, damaging its integrity. Automation systems that take a copy double your threat surface and halve your discoverability.  
  • Beware of systems requiring tight integration. If your automation system is too interdependent with your systems of record, you won’t be able to upgrade or patch easily. 
  • Don’t automate deletion of all ‘ROT’. It’s hard to know what truly has no value – even duplicates may be adding meaning to the aggregation they are in, and with new Royal Commissions every year, a ‘trivial’ record might suddenly become important.  
  • Finally, don’t automate disposition. It’s against government guidelines to let a system make any decision requiring discretion. Use automation for augmented intelligence, not just AI. 

Records automation tips 

  • Address all your formats and systems. Important records of business are found in every type of system, and your automation tool needs to be able to access and read them. 
  • Run all Records Authorities through your engine. Don’t assume that only AFDA Express applies just because the files are in the corporate share. If they refer to core business, they may need longer retention – and AI engines are excellent at layering rulesets quickly. 
  • Don’t manage someone else’s files. Make sure your AI can accurately tell the difference between your Disaster Recovery Plan, and a copy of one from another agency saved for reference. And make sure the tool can tell the difference between templates and meaningful content.  
  • Use all of your data. Metadata is not enough – we found in our case study that less than 4% of files had descriptive metadata, <0.1% had structural, and <1% had administrative. Make sure your AI can read all the text in every item. 
  • Go beyond sentencing. Once an AI engine has indexed all your records, the possibilities are limitless. Use the intelligence to find high-value and high-risk content; support eDiscovery; manage audits; and track user behaviour.   

This article by Rachael Greaves was originally published in IDM Magazine
