This is the fourth article in our seven-part series promoting Aigen’s financial services publication: ‘Counting the Value of AI in Financial Services.’

Click the link at the bottom of the article to download the full publication.

Algorithms are becoming more and more complex. With the proliferation of machine learning and the introduction of deep learning, the algorithms used within many of the most advanced AI systems are increasingly difficult to dissect. In fact, some of the most recent systems have learned how to do things without even their creators truly knowing how they managed it.

‘Black box syndrome’ is a growing problem across a variety of industries. In the legal space, there has been a case in which a defendant’s sentence was determined on the basis of a risk assessment generated by an artificial intelligence tool. The defendant answered a number of questions, which were fed into a risk-assessment algorithm. Based on those answers, the algorithm returned a ‘high risk’ rating to the court. The problem, as the defendant himself raised in his appeal, was that the algorithm gave no explanation for its judgement. Were this a human judge and jury, we would expect strong justification for any decision; without it, we could never expect the decision to stand up to an appeal.
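
To make the problem concrete, here is a minimal and entirely hypothetical sketch, in Python with scikit-learn, of how such a tool looks from the court’s point of view. The feature encoding, training data and model choice are invented and stand in for whatever the real tool uses; the point is simply that the output is a label with no accompanying rationale.

```python
# Hypothetical sketch of a black-box risk tool: the court sees only a label.
# Feature encoding, data and model choice are all invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row: hypothetical encoded questionnaire answers
# (e.g. prior offences, age, employment status)
X_train = [
    [3, 22, 0],
    [0, 45, 1],
    [5, 19, 0],
    [1, 37, 1],
]
y_train = ["high", "low", "high", "low"]  # historical risk labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

defendant_answers = [[2, 25, 0]]
print(model.predict(defendant_answers))  # e.g. ['high']
# The many decision trees behind this label offer no human-readable
# justification that a defendant could examine or contest on appeal.
```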

Nvidia’s self-driving car technology is unlike anything previously revealed to the market. Rather than follow instructions written by the engineers who created it, it runs on an algorithm that taught itself to drive by watching humans do it. The problem is that the engineers who built it do not quite know how it does it; they cannot accurately tell you how the system arrives at its decisions. This has huge implications for safety and liability, not to mention something as simple as bug-fixing. Everything is fine while it works, right up to the unpredictable moment when it doesn’t.

The stock exchange has also experienced a monumental shift. Modern-day traders, who once made decisions based on expertise and instinct, now depend more and more on algorithms. Part of the justification for this shift is that algorithmic trading can remove the human entirely, including emotional bias, making for a more stable stock exchange. What is concerning is that these algorithms have become so complex that even the people who created them do not entirely understand how they work. These systems benefit from being able to react to fractional price shifts and market trends in a split second, executing enormous volumes of trades based on this fast-changing information. A rise in computer processing power, coupled with the availability of historic data, has spawned this new age of trading. The attraction seems obvious: faster, more reliable and more profitable trades. However, these same systems have contributed to some of the ‘flash crashes’ seen in the past, where traders have used disruptive algorithms to manipulate the market to their benefit, in a technique known as spoofing.
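
For contrast, a fully transparent trading rule can be interrogated line by line. The sketch below, a simple moving-average crossover in Python with made-up prices and window sizes, is nothing like the learned systems described above; that gap in inspectability is precisely the point.

```python
# A minimal, transparent trading rule: a moving-average crossover whose
# every decision can be traced back to two numbers. Prices and window
# sizes are invented for illustration.
def moving_average(prices, window):
    return sum(prices[-window:]) / window

def signal(prices, short=5, long=20):
    """Return 'buy', 'sell' or 'hold' based on a moving-average crossover."""
    if len(prices) < long:
        return "hold"
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

prices = [100 + 0.1 * i for i in range(30)]  # hypothetical tick data
print(signal(prices))  # 'buy' -- and we can explain exactly why
```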

These sorts of ‘black box solutions’ are not a feasible option for the future progression of AI in enterprise, because they demand too much trust from humans. Leaps of faith do not work on a day-to-day basis; decisions need to be backed by sound reasoning and a rationale that can be explained to others. Admittedly, even humans cannot always give a completely logical reason for every action. However, compared with a computer system that is impossible to interrogate, human reasoning is at least traceable and understandable to those affected by its outcomes. If AI is to become a significant part of our everyday lives, it needs to be able to show us how it reaches its conclusions in order to engender trust.
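
What that traceability might look like in practice is sketched below: a deliberately simple linear scoring model, with invented feature names and weights, that returns not just a decision but each feature’s contribution to it. Real enterprise systems are far more sophisticated, but the same requirement applies: the decision must come with reasons.

```python
# Sketch of a traceable decision: a linear score decomposed into
# per-feature contributions. Feature names and weights are hypothetical.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.3}

def score_with_reasons(applicant):
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    total = sum(contributions.values())
    decision = "approve" if total > 0 else "decline"
    return decision, contributions

decision, reasons = score_with_reasons(
    {"income": 3.0, "debt_ratio": 4.0, "years_employed": 2.0}
)
print(decision)  # 'decline'
print(reasons)   # each feature's signed contribution to the decision
```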
