This is the second article in our seven-part series promoting Aigen’s financial services publication, ‘Counting the Value of AI in Financial Services.’

Click the link at the bottom of the article to download the full publication.

With the digitisation of the economy, fraud has become increasingly prolific. The scale at which it has grown is dramatic, and the hackers behind successful attacks may never be apprehended, in part because it is not in banks’ interest to actively share information about breaches. The fraud epidemic extends to social security, motor vehicle claims, personal injury claims – the list goes on. Organised crime is becoming smarter, harder to combat, and increasingly common. Just recently, for example, the NHS was on the receiving end of a cyber attack that resulted in logistical chaos: patient records were made unavailable, ambulances were diverted, and operations were cancelled.

So what can AI do to help? Firstly, it’s important to recognise that AI is not the answer to this issue; it’s only part of the solution. AI is a tool that can help us in the fight against this type of crime.

Neural networks and, more recently, deep learning can look for patterns in data and identify anomalies – something that humans simply cannot do at scale with traditional analytics tools. AI can also identify collusion and collaboration between seemingly disconnected entities and events in the data. When used appropriately, it is a far more effective tool for detection.
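
To make this concrete, here is a minimal sketch of unsupervised anomaly detection on transaction data, using scikit-learn’s IsolationForest. The features, simulated values and contamination rate are illustrative assumptions, not details of any particular bank’s system.

```python
# Illustrative sketch only: unsupervised anomaly detection on transaction features.
# The features, simulated values and contamination rate are assumptions,
# not details taken from any particular bank's system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history: modest amounts, daytime hours, purchases close to home.
history = np.column_stack([
    rng.normal(25, 10, 500),     # amount in GBP
    rng.integers(8, 22, 500),    # hour of day
    rng.normal(2, 1, 500),       # distance from home in km
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(history)

# A new transaction: a large purchase a long way from the cardholder's usual area.
new_txn = np.array([[250.00, 15, 180.0]])
if model.predict(new_txn)[0] == -1:
    print("Anomaly detected - open a case for investigation")
```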

Once a potential case of fraud has been identified, it needs to be investigated and false positives rejected.

This is done by evaluating the possible scenarios and the available data, combining them with our own inferences, and reaching a decision about whether a particular case is likely to be fraudulent. Traditionally this has been a very time-consuming process, but cognitive reasoning can help. A cognitive reasoning platform can evaluate fraud cases, and provide a rationale for its decisions, in much the same way as a human expert would.

Here is an example to help illustrate the point: imagine you are trying to detect credit card fraud. The neural AI system detects patterns in your spending and observes that you buy things online or from a local store in Norfolk. One day, it spots a large transaction at a cosmetics shop in London. This transaction does not fit your usual pattern of spending, so it is flagged as an anomaly. A case is then created so the anomaly can be investigated, and a robotic automation system gathers data from the different sources available to the bank: other transactions on the account; your location according to the banking mobile app; information about where the transaction was made, and so on.
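
As a rough sketch of that case-building step, the snippet below assembles a case record from several data sources. The sources, field names and sample values are hypothetical placeholders rather than a real banking API.

```python
# Hypothetical sketch of the case-assembly step. The data sources, field names
# and sample values are placeholders, not a real banking API.
from dataclasses import dataclass, field


@dataclass
class FraudCase:
    transaction_id: str
    evidence: dict = field(default_factory=dict)


# Stubbed data sources standing in for the systems an automation layer would query.
def fetch_recent_transactions(txn_id: str) -> list:
    return [{"merchant": "Norfolk grocer", "amount": 23.40}]


def fetch_last_known_app_location(txn_id: str) -> str:
    return "Norwich, Norfolk"


def fetch_merchant_details(txn_id: str) -> dict:
    return {"name": "London cosmetics shop", "category": "cosmetics"}


def build_case(txn_id: str) -> FraudCase:
    """Pull together what the bank already holds about the flagged transaction."""
    case = FraudCase(txn_id)
    case.evidence["recent_transactions"] = fetch_recent_transactions(txn_id)
    case.evidence["app_location"] = fetch_last_known_app_location(txn_id)
    case.evidence["merchant_details"] = fetch_merchant_details(txn_id)
    return case


print(build_case("txn-001").evidence)
```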

This information is sent to the cognitive reasoning AI system. It reviews the data available for the transaction and asks the fraud management handler a number of questions to validate whether it was possible for you to make that transaction in London; critically, the AI itself determines which questions to ask in the face of uncertainty or missing data. The handler inputs that a train ticket was purchased on the same date as the transaction, and that the purchase took place four days before Valentine’s Day. Each of these factors carries a different degree of salience for the validity of the transaction as a whole, and when they are aggregated the AI can reach a reasoned conclusion: the likelihood is that you are simply buying your spouse a Valentine’s Day gift. The system is never going to be 100% confident that the transaction was not fraud, any more than a human could be, but it can systematically review the data and piece together the puzzle to judge, with reasonable certainty, that the transaction is unlikely to be fraudulent. The AI is not subject to human bias at run-time, making it more consistent in its performance. More importantly, the system explains its rationale, which can act as an audit trail and also lead to incremental improvements in the system over time.
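
One simple way to picture the aggregation step is as a weighted combination of evidence, where each factor pushes the score towards or away from fraud and the factors themselves double as the rationale. The factors, weights and threshold below are invented for illustration; a real cognitive reasoning platform would be considerably richer.

```python
# Minimal sketch of weighted evidence aggregation with an explainable rationale.
# The factors, weights and threshold are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Evidence:
    description: str
    weight: float  # positive pushes towards "fraud", negative towards "genuine"


def assess(evidence: list[Evidence], threshold: float = 0.5):
    score = sum(e.weight for e in evidence)
    verdict = "likely fraud" if score >= threshold else "unlikely to be fraud"
    rationale = "; ".join(f"{e.description} ({e.weight:+.2f})" for e in evidence)
    return verdict, score, rationale


evidence = [
    Evidence("Transaction far outside usual spending area", +0.60),
    Evidence("Train ticket to London bought on the same date", -0.50),
    Evidence("Purchase made four days before Valentine's Day", -0.30),
]

verdict, score, rationale = assess(evidence)
print(f"Verdict: {verdict} (score {score:+.2f})")
print(f"Rationale: {rationale}")
```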

All this happens in a split second, and the cardholder never knows. They don’t get a worrying text, their card is never placed on hold, and many more false positives are eliminated. This is important, because one of the main reasons people change providers is that their card is unnecessarily placed on hold at the first sign of potential fraud. As most of us are all too aware, there are few things worse than standing in a checkout queue, having your card declined, and being ushered to a separate counter to call your provider. Not only can AI help prevent fraud by determining when anomalies occur, but, implemented correctly, it can also reduce the number of disruptive false positives that have such a large impact on customer satisfaction.

As the recent NHS and Westminster cyber attacks made clear, fraud and cyber crime are very real issues that can have far-reaching consequences. We need to act quickly to stay ahead of the hackers and thieves in order to protect our data and detect fraud in real time. At Aigen, we’ve been working with a payment services provider to combine AI tools into a system that can detect and validate anomalies – a key step in catching fraud before it has an impact on consumers.
