Hawk AI, an anti-money laundering and fraud prevention platform for banks, raises $17M

Hawk AI, a German company developing anti-money laundering (AML) and adjacent fraud prevention smarts for financial institutions, has raised $17 million in a Series B round of funding.

Hawk AI had previously raised $10 million, and with a fresh $17 million in the bank, the company says it plans to bolster its product development and global expansion. The Series B round was led by Sands Capital, with participation from Picus Capital, DN Capital, Coalition and BlackFin Capital Partners.

It's estimated that up to $2 trillion of ill-gotten gains are laundered each year, representing as much as 5% of global GDP, with just 1% of these illegal profits recovered. And this is where Hawk AI is setting out its stall.

Founded out of Munich in 2018, Hawk AI serves to improve how banks and payment companies manage their compliance risks through a cloud-native, modular AML surveillance system that promises the "highest level of explainability" in its AI-powered decision-making engine, which is pivotal for audits and regulatory investigations.

"Financial Institutions and regulators need to be able to understand and trust AI-driven decisions," Hawk AI co-founder and CEO Tobias Schweiger told TechCrunch. "Full explainability of such an AI is the key to establishing trust and acceptance."

Hawk AI AML transaction monitoring, explainable results. Image Credits: Hawk AI

Hawk AI offers products such as payments screening, customer screening, transaction monitoring, transaction fraud detection and customer risk rating. The risk-rating module lets customers build their own risk model by combining static data (e.g. product or geographical data) with dynamic data (e.g. transaction data such as suspicious activity reports).
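To make the static-plus-dynamic idea concrete, here is a minimal sketch of a weighted risk-rating model. The factor names, scores and weights are invented for illustration; this is not Hawk AI's actual scoring logic or API.

```python
from dataclasses import dataclass

@dataclass
class RiskFactor:
    name: str
    score: float   # normalized risk, 0.0 (low) to 1.0 (high)
    weight: float  # relative importance assigned by the model


def customer_risk_rating(factors: list) -> float:
    """Weighted average of factor scores, normalized to the 0..1 range."""
    total_weight = sum(f.weight for f in factors)
    if total_weight == 0:
        return 0.0
    return sum(f.score * f.weight for f in factors) / total_weight


# Static factors (e.g. geography, product type) and dynamic factors
# (e.g. recent transaction behavior) feed the same combined model.
factors = [
    RiskFactor("high_risk_geography", score=0.8, weight=2.0),        # static
    RiskFactor("product_type", score=0.3, weight=1.0),               # static
    RiskFactor("unusual_transaction_volume", score=0.6, weight=3.0), # dynamic
]
print(round(customer_risk_rating(factors), 3))  # prints 0.617
```

In practice the dynamic side would be recomputed as new transactions arrive, so a customer's rating can drift over time even when their static profile is unchanged.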

Among its customers are European spend-management platform Moss, U.S. payments processing company North American Bancard and Brazil's Banco do Brasil Americas.

Black box

Besides legacy incumbents in the space such as Verafin, BAE Systems and Oracle, there are other notable newer entrants, including financial fraud unicorn Feedzai and VC-backed Featurespace. However, Hawk AI touts its cloud-native credentials and SaaS business model as core differentiators versus the clunky on-premise deployments of many of the legacy players.

But the company is keen to stress its focus on addressing the "black box" world that AI and machine learning algorithms typically inhabit: understanding why an algorithm made a specific decision is key, and companies need to be able to justify why a given customer was flagged as a potential fraudster.

Hawk AI: Customer risk rating. Image Credits: Hawk AI

It's worth noting that other anomaly detection tools do give insights into the factors that led to a flag. But Hawk AI says its patent-pending technology also tells users what the "expected range" of normal behavior is, giving a score for each risk factor in natural language. The company says this context is essential when evaluating whether a case qualifies as suspicious activity.
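The "expected range" concept can be sketched with a simple statistical band: derive a normal range from historical values, then explain a new observation in plain language relative to that range. The range logic (mean ± 2 standard deviations) and the wording are illustrative assumptions, not Hawk AI's method.

```python
import statistics

def explain_observation(history, value, factor, k=2.0):
    """Explain whether `value` sits inside the expected range implied by `history`."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    low, high = mean - k * stdev, mean + k * stdev
    if low <= value <= high:
        return (f"{factor}: {value:.2f} is within the expected range "
                f"[{low:.2f}, {high:.2f}], not suspicious on its own.")
    return (f"{factor}: {value:.2f} falls outside the expected range "
            f"[{low:.2f}, {high:.2f}], flagged for review.")

# Hypothetical past monthly transfer totals for one customer.
history = [120.0, 95.0, 110.0, 105.0, 130.0]
print(explain_observation(history, 480.0, "monthly transfer volume"))
```

A compliance analyst reading such a sentence sees not just *that* a value was flagged, but how far it sits from the customer's own baseline, which is the kind of context the article describes.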

"For Hawk AI, explainability is made up of two areas," Schweiger said. "What is the justification for an AI-driven, individual decision, and how were the algorithms that contribute to AI developed? Compliance officers need to have transparency over both."