A feature investigation into the rising tide of artificial intelligence-enabled financial crime, and what it means for one of the world’s most connected societies.
—
Prologue: A Voice You Know
The phone rings. It is your boss. The voice — cadence, timbre, even the slight Singaporean lilt — is unmistakable. He needs you to authorise an urgent wire transfer before the markets close. You do not hesitate, because why would you? Except your boss never called. The voice was synthesised in milliseconds by an AI model trained on publicly available recordings. By the time the fraud is discovered, the money has passed through three crypto wallets and is untraceable.
This scenario is no longer speculative. It is the leading edge of a new era in financial crime — and Singapore, by virtue of its extraordinary digital connectivity, its cashless economy, and its position as a regional financial hub, finds itself at the very centre of the storm.
—
I. The Scale of the Problem
The numbers are, in the most precise sense of the word, alarming. Singapore lost over S$1.1 billion to scams in 2024 — equivalent, as the Singapore Police Force has noted, to one successful scam every ten minutes. In the first half of 2025 alone, losses totalled S$456.4 million despite a 26% decrease in total case volume. That the quantum of loss remains enormous even as case numbers fall is itself revealing: individual scams are becoming more sophisticated, and their yields per victim are rising. The median loss per case increased from S$1,100 to S$1,500 year-on-year in the same period.
These are not abstract macroeconomic figures. They represent retirement savings drained, small businesses bankrupted, families destabilised. And underpinning this crisis with increasing force is artificial intelligence.
—
II. Why Singapore Is a Prime Target
To understand the particular vulnerability of Singapore, one must understand what makes it exceptional. The city-state is among the most digitally penetrated societies on earth. Cashless payments are ubiquitous; digital banking, almost universal among working-age adults. The 2025 Online Identity Study by identity verification firm Jumio — which surveyed consumers across the US, UK, Singapore, and Mexico — found that 74% of Singapore consumers believe AI-powered fraud now poses a greater threat to their personal security than traditional forms of identity theft. This figure tracks above global averages across nearly every measured dimension of concern.
Singaporean respondents expressed above-average anxiety about fake digital identification documents created using AI (84%, versus a global average of 76%), scam emails using AI to extract passwords or financial credentials (82% versus 75%), and deepfake videos or voice recordings (83% versus 74%). Critically, higher levels of concern in Singapore are not merely psychological artefacts — they reflect rational responses to elevated exposure. The city-state’s cashless infrastructure, which enables frictionless commerce, simultaneously enables frictionless fraud.
Singapore is also the most targeted market in the Asia-Pacific region for job scams, according to Trend Micro’s 2025 consumer study, with 53% of surveyed Singaporeans reporting having been targeted — far exceeding Australia (42%), New Zealand (39%), and Japan (12%). Financial pressure compounds the problem. The same research found that 79% of Singaporeans cite rising costs as their primary concern, and that 47% report their financial security has been negatively impacted in the preceding twelve months — exactly the conditions under which fraudulent job offers become persuasive.
—
III. The AI Transformation of the Fraud Ecosystem
What distinguishes the contemporary fraud landscape from its predecessors is not merely scale but architecture. Fraud has been industrialised.
For decades, the distinguishing markers of a scam were grammatical: poorly spelled phishing emails, awkward phrasing, improbable syntax. These signals are being systematically eliminated. Large language models now draft flawless, culturally calibrated messages in Mandarin, Tamil, Malay, and English — all four of Singapore’s official languages. Voice cloning systems can replicate a target’s family member or superior from a few seconds of audio scraped from social media. Deepfake video technology, once the preserve of sophisticated state actors, is now commercially available.
Trend Micro’s 2026 Consumer Security Predictions Report, published in Singapore in December 2025, warns that the coming year will see scams reach unprecedented AI-driven scale as automation reshapes how fraudsters target victims. Criminal networks are merging automation with emotional manipulation — what researchers call “emotion engineering” — creating operations characterised by unprecedented speed, realism, and reach. Multi-channel scams, in which victims are lured from social media or text messages into encrypted chat applications and then toward fraudulent payment pages, are expected to become the dominant pattern.
The industrialisation metaphor is not incidental. Sumsub’s Identity Fraud Report 2025–2026, which analysed more than four million fraud attempts, identifies what it terms a “Sophistication Shift”: fewer but more professionalised fraud operations, engineered for higher-impact damage. Fraud-as-a-service platforms have democratised the toolkit, making identity crime accessible to operators with no technical background. Deepfake companions, AI chatbots, and synthetic personas blur the distinction between authentic and manufactured human contact — a particular risk in relationship scams and investment fraud, which continue to generate the highest individual losses.
In the Asia-Pacific region more broadly, deepfake-related fraud incidents surged by more than 1,500% between 2022 and 2023. Interpol’s Cybercrime Directorate, headquartered in Singapore, has warned that criminal gangs across South-east Asia are now deploying cheap, widely available AI tools to run scalable scam operations at a speed and volume impossible to achieve through human labour alone.
—
IV. The Business Dimension
It would be a mistake to read this crisis as exclusively a consumer-facing problem. Singapore’s business community is equally exposed, and the consequences of enterprise fraud carry systemic ramifications.
Data from TD Bank’s February 2026 survey of Canadian consumers — which provides a comparative benchmark — shows that 66% of business owners feel more exposed to fraud than in previous years, and 61% identify AI-driven crime as a major threat. Transposed to Singapore’s context, these concerns map onto documented vulnerabilities: Singapore’s Cyber Security Agency revealed that phishing scams cost businesses over S$1.2 million within just three months, from October to December 2024.
Business email compromise (BEC), in which fraudsters impersonate executives, suppliers, or regulatory authorities to authorise fraudulent transactions, is particularly dangerous given Singapore's status as a regional corporate headquarters hub. Companies that process regional payrolls, manage cross-border trade settlements, or hold treasury functions for multinational subsidiaries are high-value targets. AI enables BEC attacks to be conducted at scale, with individualised social engineering that is indistinguishable from legitimate internal communications.
A new scam category emerged in 2025: insurance services fraud, with 791 reported cases in the first half of the year alone. Its appearance illustrates how rapidly the fraud ecosystem adapts to exploit newly digitised sectors.
—
V. The Behavioural Gap
Perhaps the most intellectually troubling finding to emerge from current research is not the sophistication of the adversary but the persistence of the behavioural gap on the defender’s side.
Among Canadian respondents in the TD Bank survey, 52% admitted to engaging in behaviours that heighten fraud vulnerability: using public Wi-Fi for financial transactions, opening email attachments from unknown senders, clicking unverified links. More significantly, 41% said they never consult fraud prevention resources, and 42% do so only a few times per year. There is no reason to believe the situation is materially different in Singapore.
Research on the intention-behaviour gap in cybersecurity contexts identifies several mechanisms by which this discrepancy is sustained. Optimism bias — the conviction that one is less susceptible than average to adverse outcomes — is robust and difficult to correct through information campaigns alone. Cognitive load in everyday digital interactions is high; the friction required to pause and verify competes against the ambient pressure to respond quickly. And as AI-generated fraud becomes indistinguishable from authentic communication, the traditional heuristics that once guided threat recognition cease to function.
Trend Micro’s research underscores this dynamic: 99% of Singapore respondents in its 2025 study reported concern about being targeted by scams or fraud, yet the behaviours most likely to result in victimisation remain widespread. Concern, in other words, does not translate automatically into protection.
—
VI. Singapore’s Regulatory Response: An International Benchmark
Against this backdrop, Singapore’s regulatory architecture represents one of the most comprehensive and analytically coherent anti-fraud frameworks in the world — and it is worth examining in detail, both as a policy achievement and as a window into unresolved tensions.
The Shared Responsibility Framework
The Monetary Authority of Singapore (MAS) and the Infocomm Media Development Authority (IMDA) introduced the Shared Responsibility Framework (SRF), which came into effect on 16 December 2024. The framework’s conceptual innovation lies in its refusal to assign liability exclusively to any single actor in the fraud chain. Instead, it distributes duties — and accountability for losses — among financial institutions, telecommunications companies, and consumers.
Under the SRF, the 17 major banks and 30 payment service providers operating in Singapore are required to: impose a 12-hour cooling-off period upon activation of a digital security token or login on a new device; deliver real-time notification alerts for high-risk activities; provide a 24/7 “kill switch” enabling customers to block account access; and, from June 2025, implement real-time fraud surveillance capable of identifying rapid, large-scale account draining.
Telecommunications operators, meanwhile, are required to restrict SMS Sender ID delivery to authorised aggregators and implement anti-scam filters over their networks — closing one of the most exploited channels for phishing attacks.
Significantly, where institutions fail to fulfil these duties, they bear financial liability for consumer losses. This direct accountability mechanism represents a departure from models in which consumers absorbed the full cost of fraud arising from systemic inadequacies in institutional infrastructure.
The Protection from Scams Act
In parallel, the Protection from Scams Act 2025, which came into force on 1 July 2025, introduced a framework for restriction orders — a legally novel instrument enabling the Police to limit an individual’s banking access where there is reasonable belief they are about to transfer funds to a scammer. This provision addresses a specific and previously intractable problem: the scam victim who, under the influence of social engineering, actively resists intervention. The restriction order framework — applicable through Singapore’s seven Domestic Systemically Important Banks — allows the state to interpose itself between victim and perpetrator as a measure of last resort, subject to strict procedural safeguards and a maximum duration of 30 days per order.
ScamShield and Technological Countermeasures
At the consumer interface, Singapore has deployed the ScamShield suite — an app, website, 24/7 helpline, and alert channels that collectively screen communications for scam indicators and enable rapid reporting. The helpline receives between 500 and 700 calls daily, reflecting both the scale of the problem and the uptake of the protective infrastructure.
The Google Play Protect Enhanced Fraud Protection feature, deployed in Singapore before any other market, had by June 2025 blocked over 2.49 million attempted installations of potentially malicious applications across 553,000 devices, preventing more than 40,000 unique applications from being weaponised for financial fraud. The Singapore Police Force's SATIS (Scam Analytics and Tactical Intervention System) leverages AI and machine learning to triage, assess, and respond to scam intelligence in near-real time.
DBS Bank, Singapore’s largest domestic financial institution, has reported a 25% improvement in fraud prevention efficiency following the integration of machine learning into its anti-fraud systems — suggesting that defensive AI deployment is yielding measurable returns.
—
VII. Unresolved Tensions
Singapore’s regulatory response is sophisticated, but it operates within constraints that no domestic framework alone can resolve.
The transnational nature of the fraud ecosystem is the most fundamental. Interpol has documented that scam centres operating from Myanmar, Cambodia, and other jurisdictions across South-east Asia function as offshore factories supplying fraud-as-a-service to criminals globally. These operations, some staffed by trafficking victims coerced into participation, route communications through local networks to fabricate a domestic origin. A recent case illustrates the technique: a syndicate used GSM gateway devices across three jurisdictions to make calls appear to originate within Singapore, and was linked to more than 480 cases and losses exceeding S$3.1 million.
No MAS guideline can regulate a call centre in a jurisdiction where Singapore law does not run. This places a ceiling on what any domestic framework can achieve in isolation, and argues for the elevation of anti-fraud cooperation to the highest levels of regional diplomacy — an area where progress has been uneven.
A second tension concerns the scope of the SRF itself. The framework, as currently constituted, applies to phishing scams — a defined and significant category, but not coextensive with the full spectrum of AI-enabled fraud. Investment scams, romance scams, and the emerging category of agentic AI fraud — in which autonomous AI systems independently conduct multi-step deception campaigns — fall outside the SRF’s current liability mechanism. As Sumsub’s research notes, roughly one in fifty forged documents is now AI-generated, and the sophistication of synthetic identity attacks is advancing faster than verification infrastructure can adapt.
A third tension is the one that receives the least public attention: the vulnerability of the elderly. Singapore’s rapidly ageing demographic — the proportion of residents aged 65 and above is projected to reach 25% by 2030 — presents a population that combines relatively lower digital literacy with the highest concentration of liquid savings from decades of CPF accumulation. Scam typologies targeting this group, particularly impersonation of government officials and Chinese-language investment fraud, have been among the most damaging in financial terms.
—
VIII. What Individuals and Institutions Must Do
Despite the structural complexity of the problem, several evidence-based protective principles can be stated with confidence.
At the individual level, the most effective defences are procedural rather than technological. Establishing out-of-band verification protocols for any financial instruction received digitally — a call to a known number, not a number provided in the suspicious communication itself — remains the single most reliable countermeasure against social engineering attacks, regardless of how convincingly the initial contact is rendered. The ScamShield helpline (1799) provides a rapid verification channel for suspicious contacts.
The Money Lock facility, through which at least 370,000 Singaporeans have locked over S$30 billion of savings as of June 2025, represents a powerful structural barrier: funds in a Money Lock account cannot be transferred digitally without additional physical verification, rendering them largely inaccessible to remote fraud.
For enterprises, the priority is the development and rehearsal of transaction authorisation protocols that cannot be bypassed by a single communication channel, however apparently authoritative. AI voice cloning and deepfake video have rendered unilateral reliance on audiovisual authentication inadequate. Dual-authorisation requirements, anomaly-detection systems, and staff training programmes that emphasise scepticism rather than responsiveness are necessary adaptations.
For regulators, the outstanding challenge is extending the SRF’s accountability logic — the principle that systemic actors bear proportionate responsibility for systemic failures — to cover the full range of AI-enabled fraud typologies, not merely phishing as currently defined.
—
IX. Conclusion: A Societal Wager
Singapore has, through deliberate institutional design, positioned itself as a laboratory for high-trust, high-connectivity digital society. The ScamShield infrastructure, the Shared Responsibility Framework, the Protection from Scams Act, the Anti-Scam Centre’s Crypto Tracing Team — these represent a genuine and internationally recognised attempt to make that wager viable in an adversarial environment.
But the adversary is adapting, and its tools are becoming cheaper, faster, and more persuasive. The 71% of Singaporean consumers who told Jumio that AI-generated scams are now harder to detect than traditional scams are not expressing irrational anxiety. They are accurately perceiving a shift in the balance of asymmetric advantage.
Ultimately, the contest between AI-enabled fraud and AI-enabled defence will not be resolved at the level of technology alone. It will be resolved — or not — at the level of institutional trust, social cohesion, and the willingness of financial institutions, telecommunications companies, regulators, and individuals to accept that in a networked society, the cost of fraud is a shared cost, and its prevention a shared obligation.
The algorithm does not rest. Singapore cannot afford to, either.
—
Sources: Singapore Police Force Mid-Year Scam and Cybercrime Brief 2025; Jumio 2025 Online Identity Study; Trend Micro 2026 Consumer Security Predictions Report; Trend Micro APAC Consumer Study 2025; Sumsub Identity Fraud Report 2025–2026; GBG Southeast Asia Fraud Trends Report 2025; Monetary Authority of Singapore, Guidelines on Shared Responsibility Framework; Allen & Gledhill, Protection from Scams Act 2025 analysis; TD Bank / Léger Survey, February 2026; Interpol Cybercrime Directorate (via CybersecAsia, February 2026).