By Ralph Baxter,
CEO at ClusterSeven
2011 should go down as the most significant year yet for errors in the sea of unstructured financial data – increasingly seen as part of the Big Data problem engulfing all modern institutions. One wonders what 2012 will bring.
Despite the problems that these errors have caused, firms such as ClusterSeven face a huge challenge ensuring that ‘data management risk’ gets its fair share of the media spotlight. For example, the biggest data error story of the year was largely under-reported. At the end of October, the German government announced that the country was €55bn [sic] “richer” after an accountancy error undervalued assets at the state-owned mortgage lender Hypo Real Estate.
As reported by Reuters, the finance ministry said: “It was due to sums incorrectly entered twice.” (1)
One can presume that these incorrect sums were entered into the most common host of unstructured financial data – i.e. a humble spreadsheet. Without a proper data control process in place, nobody knew much about them until checks were made later on. How many other such errors are currently – or historically – littering the balance sheets of the world’s financial institutions and national economic data?
Other errors came to light over the course of 2011. In August, the UK’s Office for National Statistics had to revise UK construction industry data sharply downwards after admitting an “arithmetical error” had caused it to overstate the strength of the sector in Q2 by 180 basis points.
In October, Richard Cuthbert, CEO of UK outsourcing specialist Mouchel, stepped down after a spreadsheet-based accounting error reduced Mouchel’s full-year profits by more than £8.5 million to below £6 million – an error that prompted a profits warning.
As the global financial and economic malaise drags on, the fear is that data management will not be treated as a core risk by financial institutions looking to cut costs. This is particularly ironic given that fewer IT resources will mean more spreadsheets, and more pressurised users will mean it becomes easier for errors to occur.
It is common for audits of spreadsheet estates in leading financial institutions to uncover tens of millions of recently active files – a staggering indication of the deep-rooted nature of spreadsheets in the modern business world. It is also an indication of the huge risks firms are exposed to if these estates are not managed properly.
One positive development is that spreadsheets have become a core focus for market regulators such as the Financial Services Authority (FSA) in the UK. The FSA has stated very clearly that one of the biggest risks for insurance companies in the run-up to Solvency II is managing the data contained in large estates of spreadsheets.
There is clearly a great deal of work to do, however. According to recent research by ClusterSeven, just over half (57 per cent) of spreadsheet users have never received formal training on the spreadsheet package they use.(2)
Almost three quarters of respondents (72 per cent) admitted that no internal department checks their spreadsheets for accuracy, with only 13 per cent reporting that Internal Audit reviews their spreadsheets.
ClusterSeven is in the process of conducting further research into spreadsheet risk among senior business professionals. The early indications are that while spreadsheet risk is seen as serious, companies are simply not putting the right structures in place to mitigate or, better still, eradicate it.
In short, financial institutions will continue to report high profile instances of data mismanagement and fraud unless they take 100 per cent control of the vulnerable financial data files that move and manipulate information between their business systems.
These include files known as CSVs (comma-separated values), plus spreadsheets and Microsoft Access databases. Spreadsheets and CSVs are the ‘glue’ that joins everything else together. If this ‘glue’ is contaminated – for example, with bad data values – then the problem will be extremely difficult to spot further down the line. Many firms get used to accepting exceptions in this data, such as test values or balancing items. However, these loopholes can hide more malicious entries for long periods.
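To make the point concrete, the kind of exception-checking described above can be sketched in a few lines of Python. This is an illustrative sketch only, not ClusterSeven’s product: the field names, labels and threshold are hypothetical, and a real control would validate feeds against reference data rather than a hard-coded list.

```python
import csv
import io

# Hypothetical sample feed; field names and values are invented for illustration.
FEED = """trade_id,desc,amount
1001,Bond purchase,250000.00
1002,TEST,9999999.00
1003,Balancing item,-0.01
1004,Equity sale,87500.50
"""

# Labels that firms often tolerate as 'accepted exceptions' (assumption).
SUSPECT_LABELS = {"TEST", "BALANCING ITEM"}

def flag_exceptions(csv_text):
    """Return (trade_id, reason) pairs for rows that look like placeholder
    entries - the 'accepted exceptions' that can mask bad or malicious data."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        amount = float(row["amount"])
        if row["desc"].strip().upper() in SUSPECT_LABELS:
            flagged.append((row["trade_id"], "placeholder label"))
        elif abs(amount) >= 9_000_000:  # crude plausibility threshold (assumption)
            flagged.append((row["trade_id"], "implausible amount"))
    return flagged

print(flag_exceptions(FEED))
# → [('1002', 'placeholder label'), ('1003', 'placeholder label')]
```

Even this toy check illustrates the article’s point: once “TEST” and “balancing item” rows are routinely whitelisted instead of flagged, anything wearing those labels passes unexamined.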
For firms that have put the right system in place to ‘control’ spreadsheet usage and invested in defining, controlling and validating spreadsheet processes, the rewards are significant.
Actuaries, for instance, benefit from the ability to extract, manipulate and represent the data spreadsheets contain in a very fast and efficient manner, and to re-present this data as the business environment changes – without being delayed by unnecessary checks.
These rewards include data trending over time, automated documentation that avoids increased administration costs, and automated validation of data feeds into the internal model and other core systems.
As ClusterSeven will keep on highlighting, it is only when a serious financial mistake occurs that the subject of spreadsheet risk becomes a discussion point. Many firms do not realise how vulnerable their unstructured financial data processes are until it is too late, because they lack formal processes and tools to make sure all these critical data files are accurate and truthful.
Indeed, many firms do not appreciate just how fundamental spreadsheets are to their business. The outlook for 2012 does not look particularly rosy: we can expect many more errors to swim to the surface over the next 12 months.