How many times have you downloaded a bank statement and had this warning pop up when you saved it: ‘Your spreadsheet may contain features that are not compatible with CSV (Comma delimited). Do you want to keep the workbook in this format?’
Most of us just hit ‘yes’ and assume everything’s going to be OK. In fact, a lot of people simply assume a CSV file is an earlier iteration of today’s spreadsheets. But when the government did something similar recently, it managed to lose 16,000 positive Covid-19 test results, thus triggering a national outcry.
The problem is that a CSV file is not a spreadsheet. It’s a list of fields of data separated by commas, hence ‘comma-separated values’ or CSV file. But when a CSV file is opened by Microsoft’s Excel spreadsheet, things can change. In the government’s case, the outdated version of Excel they used couldn’t support the number of rows in the CSV file, and, without warning, 16,000 test results dropped off the spreadsheet. Had they been using an earlier version still, they would have stopped tracking and tracing after just 65,000-odd rows!
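To see the distinction in practice, here is a minimal sketch in Python (the column names and values are hypothetical): a CSV is nothing more than plain text, one record per line, and reading it directly recovers every row. It is the spreadsheet application, not the file, that imposes a row limit (65,536 rows in the legacy .xls format; 1,048,576 in modern .xlsx).

```python
import csv
import io

# A CSV file is just plain text: one record per line,
# fields separated by commas. (Illustrative data only.)
raw = "test_id,result\n0001,positive\n0002,negative\n"

# Parsing the text directly keeps every row and every field
# exactly as written - no reinterpretation, no row limit.
rows = list(csv.reader(io.StringIO(raw)))
print(rows)
# [['test_id', 'result'], ['0001', 'positive'], ['0002', 'negative']]
```

Nothing here depends on a spreadsheet at all; the file is readable by any tool that can handle text.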
Many parts of the insurance industry rely on these simple file types and similar formats, such as EDI (electronic data interchange) messages. These too are just simple files of data ordered in a particular way, used to move data between different broking systems, or between broking and underwriting systems, with great ease. They all work fantastically well until they don’t!
The moment a human being opens a simple data file like a CSV with a programme such as Excel, things can begin to go wrong. Excel will start to do the clever things it is designed to do for spreadsheet users: converting dates into numbers, removing leading zeros from numeric values or, as in the Track and Trace disaster, quietly losing thousands of rows of data.
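The leading-zero problem is easy to reproduce. In this hedged sketch (the policy references are invented), the ‘naive’ path converts anything that looks numeric, just as a spreadsheet does on import, and the zeros vanish silently; the safe path keeps identifiers as text.

```python
import csv
import io

# Hypothetical broking extract: policy references carry leading zeros.
raw = "policy_ref,premium\n007123,250\n010456,980\n"

# Naive approach: convert fields that look numeric, as a spreadsheet
# would on import. The leading zeros are silently lost.
naive = [int(r["policy_ref"]) for r in csv.DictReader(io.StringIO(raw))]
print(naive)   # [7123, 10456]

# Safer approach: treat identifiers as text unless you know otherwise.
safe = [r["policy_ref"] for r in csv.DictReader(io.StringIO(raw))]
print(safe)    # ['007123', '010456']
```

Once `007123` has become `7123` and been saved back out, there is no way to tell it apart from a genuine reference `7123`; the corruption is permanent.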
So why, if human interaction with data has the potential to change it, do we need to interact with it at all? Can’t we just leave it to the machines? Unfortunately not: people are always going to need to “eyeball” the data, for a number of reasons.
Many brokers, by necessity, run a variety of systems in their business to manage different lines of insurance. To gain an overview of their complete book, or to create an insurer bordereau, they extract data from the different systems and combine it, usually in a spreadsheet. At that point, inadvertent mistakes can happen, unbeknown to the brokers or to any downstream consumers of the data.
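The combining step itself doesn’t have to go through a spreadsheet. As a minimal sketch, assuming the extracts share a common layout (the system names and columns below are invented), the files can be merged as text, so every field passes through untouched:

```python
import csv
import io

# Hypothetical extracts from two broking systems, already in a shared layout.
motor_extract = "policy_ref,line,premium\n007001,motor,350\n"
marine_extract = "policy_ref,line,premium\n007002,marine,1200\n"

# Merge the records without ever opening them in a spreadsheet;
# fields stay exactly as they appeared in the source systems.
combined = []
for extract in (motor_extract, marine_extract):
    combined.extend(csv.DictReader(io.StringIO(extract)))

# Write the combined bordereau back out as a single CSV.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["policy_ref", "line", "premium"])
writer.writeheader()
writer.writerows(combined)
print(out.getvalue())
```

Because nothing is reinterpreted along the way, the leading zeros, dates and row counts all survive; a person can still eyeball the result, but the master copy stays intact.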
The Track and Trace issue also identified another common flaw in the implementation of new technical solutions. Systems and processes seem to work well during their trial phase, but forward planning hasn’t been built into the testing, so as time passes and the volume of data grows, issues can arise. Data begins to disappear as the limits of certain steps in the process, like using Excel, are exposed.
There’s no doubt that insurers, brokers and MGAs are at their most competitive when they fully understand their business. How they read, understand and interpret their data is a key part of that. Ensuring a firm is using the latest versions of software is a sensible step towards making sure data transfers run smoothly. Automating processes will also reduce the opportunity for someone to open a file and inadvertently corrupt it. Otherwise, every time someone decides to manually check the data, the big picture may become distorted rather than clearer.