Can my device become my captor?
A simulated targeted attack, step by step
Your phone is probably fine right now. But what would it actually look like if it weren't? This post walks through the full lifecycle of a real-world targeted phone attack — from the opening move to total device control — in plain language, with enough technical depth for people who want to understand what's actually happening under the hood.
This isn't a horror story. It's a simulation — a structured walkthrough of how these attacks actually unfold in the real world, drawn from documented incidents and security research. Understanding the playbook is the best defence against it.
We'll go through five phases. Each one builds on the last. By the end, you'll understand not just what attackers do, but why each step works — and what interrupts it.
RECON They already know more about you than you think
Before any attack begins, the attacker does homework. Not on you specifically — most targeted attacks start with a category, not a person. A threat actor running a banking fraud campaign, for instance, might purchase a list of phone numbers from a data broker.
Data brokers are companies that legally buy and sell personal information — your phone number, rough location, the apps you use, your approximate income bracket, whether you have children. This data comes from loyalty cards, app permissions, website visits, and hundreds of other sources you've probably never thought about.
With your phone number and demographic profile in hand, the attacker now personalises the approach. If they know you recently searched for mortgage information (from a data leak or broker record), they might craft a message about your home loan. If they know your approximate neighbourhood, they might reference a local delivery company.
This is what makes modern attacks feel eerily personal — and why the old advice "just ignore messages from unknown numbers" is no longer enough on its own.
How attacker reconnaissance actually works
Attacker data acquisition typically flows through three channels: commercial data brokers (Acxiom, Oracle, LexisNexis); leaked credential databases (Have I Been Pwned indexes over 14 billion compromised records); and open-source intelligence (OSINT) — scraping LinkedIn, public social media, and electoral rolls.
A well-resourced actor correlates these sources using an identity graph — a database structure that links your email, phone number, device IDs, home address, and social handles into a single unified profile. Tools like Maltego automate much of this correlation. The result: an attacker who has never met you may know your employer, your bank, your approximate net worth, and whether you've had your number for more than 3 years.
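The correlation step above can be sketched in a few lines. This is a toy illustration, not how Maltego or any broker actually stores data: records from different (entirely fictional) sources get merged into one profile whenever they share any identifier, using a simple union-find structure.

```python
# Illustrative identity-graph correlation: records from different
# (fictional) sources are merged whenever they share any identifier
# (email, phone, device ID). All names and numbers are invented.

def correlate(records):
    """Group records into profiles via shared identifiers (union-find)."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # identifier value -> first record index that used it
    for i, rec in enumerate(records):
        for value in rec.values():
            if value in seen:
                union(i, seen[value])
            else:
                seen[value] = i

    profiles = {}
    for i in range(len(records)):
        profiles.setdefault(find(i), set()).update(records[i].values())
    return list(profiles.values())

records = [
    {"email": "alex@example.com", "phone": "+44 7700 900123"},  # data broker
    {"phone": "+44 7700 900123", "device": "ad-id-9f3c"},       # leaked app DB
    {"email": "alex@example.com", "handle": "@alex_dev"},       # OSINT scrape
    {"email": "sam@example.com"},                               # unrelated person
]
profiles = correlate(records)
# Three fragments about "alex" collapse into one unified profile;
# the unrelated record stays separate.
```

Three databases that each know one thing about you become, once joined, a single record that knows everything all three knew. That is the entire business model of the identity graph.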
Device fingerprinting adds another layer: even without your phone number, visiting a website from your phone leaks your device model, OS version, screen resolution, timezone, browser plugins, and IP address. Combined, these create a near-unique fingerprint that follows you across sites — even in private browsing mode.
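A minimal sketch of the fingerprinting idea: hash together the attributes a website can read without any permission prompt. Real fingerprinting libraries use dozens more signals (canvas rendering, installed fonts, audio stack quirks); the attribute values below are invented.

```python
import hashlib

# Toy device fingerprint: a stable hash over attributes any website
# can read without asking permission. Values are illustrative.
def fingerprint(attrs: dict) -> str:
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "model": "Pixel 8", "os": "Android 15", "resolution": "1080x2400",
    "timezone": "Europe/London", "language": "en-GB",
}
fp = fingerprint(device)
# A stable device produces the same ID on every visit, including in
# private browsing, because none of this relies on cookies. Change any
# one attribute and the fingerprint changes completely:
changed = fingerprint({**device, "os": "Android 16"})
```

Note the asymmetry: clearing cookies resets tracking identifiers you control, but a fingerprint is derived from properties of the device itself.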
ENTRY The message that doesn't look like a threat
This is the phase most people picture when they think of "getting hacked" — but it rarely looks like the movies. The most effective opening moves are almost boring in their ordinariness.
Smishing (SMS phishing) is now the most common entry vector for mobile device attacks. In 2023, 75% of organisations reported experiencing smishing attacks targeting their staff. Text messages have a click-through rate of around 8–14%, compared to under 2% for emails — attackers know this.
Here's an illustrative reconstruction of what one of these messages looks like (the domain is invented; real campaigns register thousands of lookalike variants):

"Evri: we were unable to deliver your parcel today. To avoid it being returned to sender, please pay the £1.45 redelivery fee: evri-redelivery[.]com"
Notice what makes this effective: it arrives in your Messages app (next to real texts from your bank and your family). It uses a real-seeming sender name. It creates mild urgency — not panic, just mild inconvenience. The fee is tiny enough to feel plausible. And if you've ordered anything recently, your brain autocompletes the rest.
This type of campaign — fake delivery notifications — surged 174% between April 2023 and April 2024, according to UK parcel carrier Evri, which had to shut down over 5,000 cloned versions of its own website.
Two other entry methods are worth understanding — not because they're common for most people, but because they show what's technically possible:
Zero-click exploits are attacks that require absolutely no action from you. A specially crafted message arrives via iMessage, WhatsApp, or even a missed call — and malware installs itself before you've touched your screen. Pegasus spyware, developed by Israeli firm NSO Group, works this way. In 2024, security firm iVerify found Pegasus infections on 2.5 in every 1,000 devices they scanned — a rate far higher than anyone had assumed, and on the devices of ordinary people, not just journalists or politicians.
Malicious apps are a quieter threat. An app that promises a free VPN, a flashlight, or a QR scanner might be harvesting your contact list and location in the background. Kaspersky blocked 33.2 million mobile attacks in 2024, many delivered via apps that had passed basic app store checks.
What a zero-click exploit actually looks like
A zero-click exploit targets a memory safety vulnerability — typically a buffer overflow or use-after-free bug — in the code that processes incoming messages before they're displayed on screen. The messaging app (iMessage, WhatsApp, MMS) has to parse the incoming data. If that parsing code contains a flaw, a maliciously crafted payload can overwrite memory and redirect code execution.
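The core failure mode can be demonstrated without any real exploit code. The sketch below simulates a process's memory as one bytearray, with the message buffer sitting directly next to unrelated secret data. A parser that trusts a length field declared by the sender reads past the end of the message: the same class of bug (an out-of-bounds read) that was behind Heartbleed.

```python
# Toy illustration of why parsers of untrusted input are dangerous.
# We simulate process memory as one bytearray: the incoming message
# sits directly next to unrelated secret data.
memory = bytearray()
msg = b"\x20" + b"hi"            # first byte: sender-declared length = 32
memory += msg
memory += b"SECRET-SESSION-KEY"  # adjacent, unrelated data

declared_len = memory[0]
leaked = bytes(memory[1:1 + declared_len])  # no bounds check!
# The sender claimed 32 bytes but only supplied 2, so the read spills
# into the neighbouring secret: leaked == b"hi" + the session key.
```

Memory-safe languages and hardened allocators make this class of bug rarer, but the message-parsing code in mobile OSes still contains large amounts of legacy C and C++, which is why these vulnerabilities keep surfacing.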
Apple's BlastDoor sandbox (introduced in iOS 14) was a direct response to this. It processes iMessage content in an isolated process, limiting what a successful exploit can reach. NSO Group responded by finding ways around it — the resulting arms race is ongoing.
Network injection is a third variant: if you visit an unencrypted (HTTP) website, a man-in-the-middle attacker on the same network can intercept the response and inject a redirect to a malicious payload. This is why HTTPS everywhere matters — and why public Wi-Fi is riskier than it looks.
The technical attack chain typically runs: initial remote code execution (RCE) → sandbox escape → kernel privilege escalation → persistence mechanism. Each step exploits a different vulnerability. The most sophisticated attacks chain four or five zero-days in sequence, each one patched within days of discovery but already deployed against targets by then.
INSTALL The payload drops — and you feel nothing
You clicked the link. Maybe it took you to a convincing-looking bank login page and you entered your password. Maybe it triggered a file download. Maybe — with a zero-click — nothing at all happened from your perspective.
In a credential-phishing attack (the fake bank page), the damage is immediate: your username and password are now in the attacker's hands. They'll try it on your email, your bank, your Apple ID or Google account — wherever password reuse gives them an opening.
But in a malware attack, something more patient is happening. A small piece of software — called a dropper — installs itself first. Its only job is to survive long enough to download the actual payload. The payload is the malware that does the real work.
Modern mobile malware requests permissions the same way legitimate apps do. You might see a prompt that says "This app needs access to your storage to save files." That's technically true — and it's also how it reads your SMS messages and forwards them to a remote server.
An attacker's ideal permission set on an infected device typically combines SMS access, the Accessibility Service, device-administrator rights, and permission to draw over other apps: exactly the permissions that recent Android versions increasingly gate behind extra confirmation steps.
One documented family worth knowing about is Triada — a backdoor that has been found pre-installed in the firmware of cheap Android devices sold through unofficial channels. It requires no user interaction at all. By 2025, Kaspersky found it was appearing with increasing frequency. If it's baked into the device itself, there's no app to delete.
The technical anatomy of a mobile banking trojan
Modern banking trojans like Mamont, Cerberus, and SharkBot operate on a Malware-as-a-Service (MaaS) model: criminals rent them like software subscriptions, complete with a control panel and customer support. The Mamont family, documented by Kaspersky in late 2024, spread via fake parcel-tracking apps in Russia and spawned thousands of distinct variants through automated repacking.
Core technical capabilities include: Accessibility Service abuse (intercepts screen content and simulates taps), overlay attacks (draws a fake bank login on top of the real one), SMS forwarding (streams incoming messages to C2 server), and keylogging (logs every tap). The Herodotus trojan, discovered in September 2025, even attempts to mimic human touch patterns to bypass biometric behavioural analysis.
Communication back to the attacker's Command and Control (C2) server is typically encrypted over HTTPS — making it visually identical to normal app traffic. Some families use legitimate cloud services like Dropbox or Telegram as C2 infrastructure, so blocking the domain also blocks legitimate use.
The Accessibility Service API is particularly powerful and particularly abused. When granted, it allows an app to see every element on screen, read all text, and simulate user interaction — designed for screen readers, weaponised for fraud. Android 13+ introduced stronger restrictions, but social engineering still tricks users into enabling it manually.
EXFIL Your data starts leaving — silently, constantly
This is the phase that's hardest to visualise, because it's entirely invisible. No notification. No battery spike (modern malware is careful about this). No obvious change in your phone's behaviour.
Your data leaves in small, encrypted packets — designed to look like ordinary background app sync. A banking trojan sends your SMS codes as they arrive. Spyware might take a photo of your surroundings every 15 minutes. Location data reports every movement. Contact lists upload once. Password vaults, if accessible, exfiltrate immediately.
The exfiltration is designed to blend in. An app that legitimately syncs to the cloud sends HTTPS traffic. A malware payload sending your location to a command-and-control server sends HTTPS traffic. From your network router's perspective, they're identical.
One technique worth understanding: SMS interception for two-factor authentication bypass. When you try to log into your bank, the bank texts you a six-digit code. Malware that has SMS access reads that code the moment it arrives and forwards it to the attacker — who is simultaneously entering your password on the real bank site. The whole exchange takes under ten seconds. You receive the code, but by the time you look at your phone, it's already been used.
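The race described above is easy to model. This is a toy simulation, not any bank's real verification logic: the point is that a one-time code check only asks "is this the current code?" and cannot tell who typed it, so whoever submits first wins.

```python
import secrets

# Toy simulation of the SMS-OTP race. The "bank" accepts the current
# code exactly once, from whoever submits it first.
issued_code = f"{secrets.randbelow(10**6):06d}"  # the code the bank texts you
used = False

def bank_verify(code):
    global used
    if used or code != issued_code:
        return "rejected"
    used = True
    return "accepted"

# Malware with SMS access forwards the code the instant it arrives,
# and the attacker (already holding your password) submits it first...
attacker_result = bank_verify(issued_code)
# ...so by the time the victim reads their phone, the code is spent.
victim_result = bank_verify(issued_code)
```

Nothing in this flow is broken cryptographically; the weakness is that SMS delivers the secret to a channel the malware can read.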
Exfiltration channels and evasion techniques
Advanced mobile malware uses multiple techniques to hide data in transit. DNS tunnelling encodes stolen data inside what looks like normal domain lookup requests — traffic that almost no firewall inspects. Steganography (rare but documented) hides data inside image files. The most common approach is simply HTTPS to legitimate-looking domains — sometimes using free cloud storage APIs as relay points.
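DNS tunnelling is worth a concrete sketch. Stolen data is base32-encoded and split into DNS labels (each at most 63 characters) under a domain the attacker controls; each "lookup" is really an upload, logged by the attacker's own DNS server. The domain `exfil.example` below is a placeholder, not a real campaign domain.

```python
import base64

# Sketch of DNS tunnelling: encode stolen bytes as subdomain labels
# under an attacker-controlled domain. "exfil.example" is a placeholder.
def to_dns_queries(data: bytes, domain="exfil.example", label_len=60):
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [encoded[i:i + label_len] for i in range(0, len(encoded), label_len)]
    return [f"{lbl}.{domain}" for lbl in labels]

def from_dns_queries(queries, domain="exfil.example"):
    encoded = "".join(q[: -len(domain) - 1] for q in queries).upper()
    encoded += "=" * (-len(encoded) % 8)  # restore base32 padding
    return base64.b32decode(encoded)

queries = to_dns_queries(b"user=alex;pin=1234")
# Each query looks like an ordinary lookup of a long subdomain. Most
# firewalls forward it without inspection; the attacker's nameserver
# simply logs and reassembles the labels.
recovered = from_dns_queries(queries)
```

The defence implication: a resolver that logs queries (as some local-VPN tools do) will show an app making lookups of long, high-entropy subdomains, which is a strong tell even when the payload itself is unreadable.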
Pegasus-class spyware is especially sophisticated: it uses end-to-end encrypted channels back to the operator's server, self-deletion if forensic activity is detected, and adaptive transmission schedules — only sending data when connected to Wi-Fi to avoid raising battery or data-usage alerts.
The Pegasus control panel allows operators to configure trigger-based collection: "send me audio only when this device is within 500 metres of this GPS coordinate" or "alert me when a message from phone number X arrives." This level of surgical targeting is why detection is so hard — there's no constant high-bandwidth stream to notice.
On the network monitoring side: a local VPN approach (like NetGuard on Android) can surface unexpected outbound connections by logging all DNS queries. If your flashlight app is resolving tracking.adnetwork-somewhere.io at 3am, that's worth investigating. This is technically tractable — and it's the basis of how privacy tools like ours can give you visibility at the network layer.
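The kind of check this makes possible is simple to sketch. The log entries and field names below are invented for illustration (this is not NetGuard's actual export format): the idea is just to flag apps resolving domains during hours the owner is normally asleep.

```python
from datetime import datetime

# Toy DNS-log triage: flag lookups that happen during quiet hours.
# Entries and field names are invented, not a real tool's format.
log = [
    {"app": "com.example.flashlight",
     "domain": "tracking.adnetwork-somewhere.io",
     "time": "2025-03-02T03:14:00"},
    {"app": "com.example.weather",
     "domain": "api.weather.example",
     "time": "2025-03-02T08:05:00"},
]

def odd_hour_queries(entries, quiet=(1, 5)):
    """Return entries whose lookups fall within the quiet-hour window."""
    start, end = quiet
    return [e for e in entries
            if start <= datetime.fromisoformat(e["time"]).hour <= end]

suspicious = odd_hour_queries(log)
# Only the flashlight app's 3am lookup is flagged for investigation.
```

Time-of-day is a crude heuristic on its own (legitimate apps sync overnight too), but combined with "does this app have any reason to talk to this domain?" it surfaces exactly the traffic described above.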
PERSIST The attacker makes themselves at home
The final phase is about staying. A one-time theft is one thing. An attacker who maintains access to your device has something far more valuable: a live feed of everything you do, indefinitely.
Persistence mechanisms vary by attack type. A credential-phishing attack might simply be "your password now works in their login scripts — until you change it." A more sophisticated malware attack embeds itself in ways designed to survive reboots, app updates, and even factory resets in extreme cases.
Common persistence tricks on mobile include: installing as a device administrator (which prevents easy uninstallation), abusing the Accessibility Service to re-enable itself if disabled, and in the case of Triada-style firmware malware, embedding into the operating system itself where standard security tools can't reach.
At this point, the attacker has a choice about what to do with access. Some sell it: there's an active market in "logs" — sets of credentials and device access — on closed forums. Others use it directly for financial fraud. Others, particularly more sophisticated actors, treat it as long-term intelligence: staying quiet, watching patterns, waiting for the right moment.
The average time to identify and contain a breach was 241 days in 2025. That's roughly eight months during which the attacker has access before the intrusion is fully shut down.
Detection evasion and anti-forensic techniques
Sophisticated mobile malware employs several anti-detection strategies. Memory-only operation: Pegasus stores most of itself in RAM, leaving minimal disk artifacts — the approach that made it notoriously hard to detect forensically until Amnesty International's MVT (Mobile Verification Toolkit) developed log-based detection methods.
Code obfuscation: malware packs and re-packs its code to change its cryptographic signature, defeating signature-based antivirus. A single malware family like Mamont can generate thousands of distinct variants. Root detection evasion: some banking trojans check if they're running in a sandbox or analysis environment and go dormant if so — behaving normally until deployed on a real device.
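Why repacking defeats signature-based antivirus is easy to show: flipping a single bit anywhere in a file changes its cryptographic hash completely, so a blocklist of known-bad hashes never matches the new variant. The bytes below are a harmless stand-in for an APK, not real malware.

```python
import hashlib

# One-bit change, completely different signature. The bytes are a
# harmless stand-in for a packaged app, not actual malware.
variant_a = b"\x50\x4b\x03\x04" + b"payload-v1" * 100
variant_b = bytearray(variant_a)
variant_b[-1] ^= 0x01  # flip one bit in the last byte

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(bytes(variant_b)).hexdigest()
# Behaviourally identical variants, cryptographically unrelated hashes.
```

This is why modern mobile detection keys on behaviour (what permissions an app abuses, where it connects) rather than on file signatures alone.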
On iOS, detection is harder because the OS restricts the ability of security apps to inspect other apps' behaviour. The iVerify approach (used to find Pegasus on ordinary users' devices in 2024) works around this by analysing diagnostic logs and crash reports, which contain forensic artifacts even Pegasus couldn't fully clean up. This is a meaningful gap: Apple's security model protects users from most malware, but also limits visibility for researchers and security tools.
The key open-source forensic tool for mobile analysis is Amnesty International's Mobile Verification Toolkit (MVT), whose mvt-ios and mvt-android commands analyse device backups and logs for known indicators of compromise. It's not a consumer tool (it requires Python and command-line comfort), but it's free and peer-reviewed.
Every phase has a weak point — here's where you can break the chain
The five phases aren't inevitable. Each one requires something from the attacker — and each one has a point where it can fail. Understanding the chain means understanding where to apply pressure.
The most powerful intervention is at Phase 2 — the entry point. Once malware is installed and exfiltrating data, your options narrow significantly. Stopping it before the tap is easier by orders of magnitude than forensically recovering afterwards.
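The arithmetic behind this is worth seeing. An attack only succeeds if every phase succeeds, so the overall odds are the product of the per-phase odds. The probabilities below are purely illustrative, not measured rates:

```python
# Back-of-envelope sketch: the attack chain succeeds only if EVERY
# phase succeeds, so overall success is a product. Numbers are
# illustrative, not measured rates.
phases = {"recon": 0.95, "entry": 0.30, "install": 0.80,
          "exfil": 0.90, "persist": 0.70}

def chain_success(probs):
    result = 1.0
    for p in probs.values():
        result *= p
    return result

baseline = chain_success(phases)                     # about 0.14
hardened = chain_success({**phases, "entry": 0.03})  # user verifies links
# Cutting the entry-phase odds tenfold cuts the WHOLE chain tenfold.
```

Because the phases multiply, hardening any single one reduces the total by the same factor, and the entry phase is the one where the defender's action is cheapest.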
Here are the four things that actually break the chain:
What you can actually do today
1. Treat every unexpected link as suspicious — no matter who appears to be sending it. Real delivery companies don't ask you to pay redelivery fees by clicking a link. Real banks don't ask for your password via text. Real tax authorities contact you by mail. If something creates urgency and asks you to tap or enter information, verify it through the organisation's official app or their website — typed in manually, not via the link in the message.
2. Update your operating system immediately when updates are available. Zero-click exploits target specific, known vulnerabilities. When Apple or Google releases a patch, it closes a door. The time between a patch being released and it being widely deployed is when attackers are most active — they know exactly what vulnerability was fixed, and they target unpatched devices. Updates are inconvenient. Spyware is worse.
3. Audit your app permissions, then revoke anything that doesn't make obvious sense. Go to Settings → Privacy (iPhone) or Settings → Apps (Android). Ask: does my weather app actually need my contacts? Does my recipe app need my microphone? A permissions review takes five minutes and removes access that most people granted without thinking years ago.
4. Switch from SMS two-factor authentication to an app-based authenticator where possible. Apps like Google Authenticator, Authy, or Ente Auth generate codes locally on your device. Malware that intercepts your SMS messages cannot intercept an authenticator app's output. For your most important accounts — bank, email, password manager — this single change meaningfully raises the bar against the most common attack pattern described in Phase 4.
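For the curious, the algorithm inside those authenticator apps is short enough to read in full. This is a minimal sketch of TOTP as specified in RFC 6238: the shared secret never leaves the device and no SMS is involved, so message-intercepting malware gains nothing.

```python
import base64
import hashlib
import hmac
import struct

# Minimal RFC 6238 TOTP: the algorithm behind authenticator apps.
def totp(secret_b32: str, for_time: int, step=30, digits=6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(for_time) // step)   # 30-second window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret is "12345678901234567890":
secret = base64.b32encode(b"12345678901234567890").decode()
code = totp(secret, 59)  # matches the RFC's first test vector, truncated to 6 digits
```

Two properties matter for security here: the code is derived from the current 30-second window (so a stolen code expires almost immediately), and the derivation happens entirely on your device, with nothing transmitted that malware could intercept in transit.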
One final thing: if you ever suspect your phone has been compromised — unexplained battery drain, unfamiliar data usage, your bank contacting you about transactions you didn't make — your first call should be to your bank (via the number on the back of your card, not any number in a text), and your second action should be changing passwords on your most sensitive accounts from a different device.
Sources & Further Reading
- Kaspersky — Mobile Threat Report 2024 & 2025: securelist.com
- Lookout — 2024 Annual Mobile Threat Report: lookout.com
- iVerify — Pegasus detected on ordinary devices, 2024: cybernews.com
- Amnesty International — Mobile Verification Toolkit (mvt): github.com/mvt-project
- Palo Alto Unit42 — Smishing Triad global campaign, 2024: unit42.paloaltonetworks.com
- Recorded Future — H1 2025 Malware & Vulnerability Trends: recordedfuture.com
- Bank Policy Institute — Spoofing, Smishing and Deepfakes report, 2025: bpi.com
- Wikipedia — Pegasus (spyware): wikipedia.org