Financial firms should leverage machine learning to make anomaly detection easier

Anomaly detection is one of the more difficult and underserved operational areas in the asset-servicing sector of financial institutions. Broadly speaking, an anomaly is an event or data point that deviates from what is expected or familiar. Anomalies can be the result of incompetence, malicious activity, system errors or accidents, or they can be the product of shifts in the underlying structure of day-to-day processes.

For the financial services industry, detecting anomalies is critical, as they may be indicative of illegal activities such as fraud, identity theft, network intrusion, account takeover or money laundering, which may result in undesired outcomes for both the institution and the individual.


Detecting outliers, or anomalies, against historic data patterns and trends can strengthen a financial institution’s operational teams by increasing their understanding of the data and their preparedness to act.

The challenge of detecting anomalies

Anomaly detection presents a unique challenge for a variety of reasons. First and foremost, the financial services industry has seen an increase in the volume and complexity of data in recent years. In addition, greater emphasis has been placed on data quality, to the point that it has become a measure of an institution’s health.

To make matters more complicated, anomaly detection often requires predicting something that has never been seen before or prepared for. The growing volume of data, and the fact that it is constantly changing, exacerbates the challenge further.

Leveraging machine learning

There are different ways to address the challenge of anomaly detection, including supervised and unsupervised learning.

With supervised learning, a model is taught to recognize categories from labeled data that maps a set of inputs to a known set of outputs, arriving at a function that can decode incoming signals with sufficient fidelity to determine the correct response. In theory, a supervised model could be trained to recognize previously seen anomalous behavior, reducing the problem to a binary classification of “anomalous” or “not anomalous.” This, however, would not help detect novel forms of anomalous behavior.
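As a minimal illustration (not drawn from the original text), the sketch below trains a standard scikit-learn classifier on a hypothetical, synthetically generated set of labeled records; the feature matrix, the anomaly labels and the probability threshold are all placeholder assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical labeled history: each row is a record, y flags records
# previously confirmed as anomalous (placeholder synthetic data).
rng = np.random.default_rng(0)
X = rng.random((5000, 12))                  # engineered numeric features
y = (rng.random(5000) < 0.02).astype(int)   # ~2% labeled anomalies

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# A supervised classifier reduces detection to "anomalous or not", but it
# can only recognize patterns resembling the labeled examples it was shown.
clf = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0
)
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]    # probability a record is anomalous
flags = scores > 0.5                        # assumed decision threshold
```

Anything such a model flags will necessarily resemble the anomalies it was trained on, which is precisely the limitation noted above.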

That is where unsupervised anomaly detection algorithms come in. Unsupervised learning tools such as autoencoders do not require labeled data; because they learn from the prevalent, normal class, they can detect novel data points or outliers as they arise.

Unsupervised learning offers a solution

Within the larger family of unsupervised learning algorithms for anomaly detection, there are different approaches to take, including clustering algorithms, isolation forests, local outlier factors and autoencoders.
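As a brief sketch of two of these alternatives, the example below fits scikit-learn’s IsolationForest and LocalOutlierFactor on a hypothetical numeric feature matrix; the data, contamination rate and neighbour count are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical numeric feature matrix: rows are records, columns are features.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))

# Isolation forest: anomalies are isolated by short random partition paths.
iso = IsolationForest(contamination=0.01, random_state=1).fit(X)
iso_flags = iso.predict(X) == -1            # -1 marks suspected outliers

# Local outlier factor: compares each point's density to its neighbours'.
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
lof_flags = lof.fit_predict(X) == -1
```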

Autoencoders, in particular, offer a reliable solution. In simple terms, the algorithm seeks to learn “healthy” data patterns through exposure to clean data structures in the training set. It does this by first encoding the inputs into a compressed representation, in a process loosely analogous to principal component analysis, and then decoding that representation in an attempt to recreate the original inputs with some measure of fidelity. That measure is termed the reconstruction error. The idea is that the model will have low reconstruction errors for normal or “healthy” data, but higher errors for anomalous or novel data.
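The sketch below is a minimal illustration of that idea, using scikit-learn’s MLPRegressor trained to reproduce its own inputs through a narrow hidden layer as a stand-in for a purpose-built autoencoder; the data, bottleneck size and 99th-percentile threshold are assumptions, not prescriptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical "healthy" training data: rows are records, columns are
# scaled numeric features (placeholder synthetic values).
rng = np.random.default_rng(0)
X_train = rng.random((1000, 20))

# An MLP trained to reproduce its own inputs through a narrow hidden layer
# acts as a simple autoencoder: the 8-unit bottleneck forces compression.
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                           max_iter=2000, random_state=0)
autoencoder.fit(X_train, X_train)

# Reconstruction error per record: low for data resembling the training set,
# higher for anomalous or novel records.
train_err = np.mean((autoencoder.predict(X_train) - X_train) ** 2, axis=1)
threshold = np.percentile(train_err, 99)    # flag the worst-reconstructed 1%

X_new = rng.random((5, 20))                 # hypothetical incoming records
new_err = np.mean((autoencoder.predict(X_new) - X_new) ** 2, axis=1)
flags = new_err > threshold                 # True where reconstruction is poor
```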

The main advantage of the autoencoder rests on its ability to turn the class imbalance problem on its head, since it relies on the prevalent class to learn good patterns. This removes the need to acquire representative samples of the target class, as supervised learning requires. Focusing on the population (normal) rather than the target (anomalies) makes it more likely to detect novel cases as they arise. Furthermore, unlike clustering algorithms, autoencoders can be taught to treat once-anomalous data points as normal simply by including them in the training set.

The challenge of explainability

One of the biggest challenges in machine learning today is the ability to explain predictions. Accurately justifying the algorithm’s actions and extracting the root cause of an event is of immense value: it allows teams not only to detect and properly audit suspicious, harmful or anomalous events, but also to gain intelligence that enables timelier intervention.

There are very few tools that offer the granularity needed to explain individual predictions. As an example, some models can identify “CURRENCY” as a global feature of high predictive value, but they cannot point out which local value (e.g., USD, JPY, EUR) is most likely to be the correct one.

With that in mind, the solution lies in developing a method that encodes each record’s feature values into a single signature, which can then be compared against known signatures in a clean benchmark set. A similarity measure, such as Levenshtein distance, can be used to find the closest data points to the anomalous entry in the benchmark set, which are then categorized as most similar or as having the most support.

The most-similar measure returns the record that most closely resembles the signature, regardless of how many instances of it in the benchmark set back up the comparison. The best-support measure, on the other hand, considers only similarity scores above an established threshold and chooses the signature with the highest number of supporting samples in the benchmark set. With a record to compare against, the signature can be deconstructed to establish which of its constituent parts is responsible for any differences, thus identifying the likely anomalies.
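A sketch of how such a scheme might look is shown below: records are encoded as signature strings, compared with a plain dynamic-programming Levenshtein distance, resolved through both a most-similar and a best-support lookup, and finally deconstructed field by field. The field names, benchmark records and similarity threshold are illustrative assumptions rather than details from the original method.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def signature(record: dict, fields: list) -> str:
    """Encode a record's field values into a single signature string."""
    return "|".join(str(record[f]) for f in fields)

def similarity(a: str, b: str) -> float:
    """Normalise edit distance into a 0..1 similarity score."""
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

# Hypothetical clean benchmark set and an incoming, possibly anomalous record.
FIELDS = ["currency", "country", "asset_class"]
benchmark = [
    {"currency": "USD", "country": "US", "asset_class": "EQUITY"},
    {"currency": "USD", "country": "US", "asset_class": "EQUITY"},
    {"currency": "JPY", "country": "JP", "asset_class": "BOND"},
]
incoming = {"currency": "USX", "country": "US", "asset_class": "EQUITY"}

inc_sig = signature(incoming, FIELDS)
scored = [(similarity(inc_sig, signature(r, FIELDS)), r) for r in benchmark]

# "Most similar": the single closest benchmark record, regardless of support.
most_similar = max(scored, key=lambda s: s[0])[1]

# "Best support": among records above a similarity threshold, pick the
# signature backed by the largest number of benchmark samples.
THRESHOLD = 0.7
above = [signature(r, FIELDS) for s, r in scored if s >= THRESHOLD]
best_support_sig = max(set(above), key=above.count) if above else None

# Deconstruct the signature: fields that differ from the chosen reference
# point to the likely anomalous values.
reference = most_similar
diffs = {f: (incoming[f], reference[f]) for f in FIELDS if incoming[f] != reference[f]}
print(diffs)   # {'currency': ('USX', 'USD')}
```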

Preventing bad data

With all eyes on data, it’s crucial that financial institutions find solutions to detect anomalies upfront, thereby preventing bad data from infecting downstream processes. Machine learning can be applied both to detect data anomalies and to identify the reasons behind them, effectively reducing the time spent researching and rectifying them.