Hackers are finding ways to hide inside Apple’s walled garden
You’ve heard of Apple’s famous walled garden, the tightly controlled tech ecosystem that gives the company unique control of features and security. Every app goes through a strict Apple approval process, apps are confined so they can’t gather sensitive information on the phone, and developers are locked out of places they’d be able to reach in other systems. The barriers are so high now that it’s probably more accurate to think of it as a castle wall.
Virtually every expert agrees that the locked-down nature of iOS has solved some fundamental security problems, and that with these restrictions in place, the iPhone succeeds spectacularly in keeping almost all the usual bad guys out. But when the most advanced hackers do succeed in breaking in, something strange happens: Apple’s extraordinary defenses end up protecting the attackers themselves.
“It’s a double-edged sword,” says Bill Marczak, a senior researcher at the cybersecurity watchdog Citizen Lab. “You’re going to keep out a lot of the riffraff by making it harder to break iPhones. But the 1% of top hackers are going to find a way in and, once they’re inside, the impenetrable fortress of the iPhone protects them.”
Marczak has spent the last eight years hunting those top-tier hackers. His research includes the groundbreaking 2016 “Million Dollar Dissident” report that introduced the world to the Israeli hacking company NSO Group. And in December, he was the lead author of a report titled “The Great iPwn,” detailing how the same hackers allegedly targeted dozens of Al Jazeera journalists.
He argues that while the iPhone’s security is getting tighter as Apple invests millions to raise the wall, the best hackers have their own millions to buy or develop zero-click exploits that let them take over iPhones invisibly. These allow attackers to burrow into the restricted parts of the phone without ever giving the target any indication of having been compromised. And once they’re that deep inside, the security becomes a barrier that keeps investigators from spotting or understanding nefarious behavior—to the point where Marczak suspects they’re missing all but a small fraction of attacks because they cannot see behind the curtain.
This means that even to know you’re under attack, you may have to rely on luck or vague suspicion rather than clear evidence. The Al Jazeera journalist Tamer Almisshal contacted Citizen Lab after he received death threats about his work in January 2020, but Marczak’s team initially found no direct evidence of hacking on his iPhone. They persevered by looking indirectly at the phone’s internet traffic to see who it was whispering to, until finally, in July last year, researchers saw the phone pinging servers belonging to NSO. It was strong evidence pointing toward a hack using the Israeli company’s software, but it didn’t expose the hack itself.
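The kind of indirect analysis Citizen Lab performed — watching which servers a phone talks to rather than inspecting the phone itself — can be sketched in a few lines. This is a generic illustration, not Citizen Lab’s actual tooling, and the domain names below are placeholders, not real NSO infrastructure:

```python
# A minimal sketch of network-based detection: compare the domains a device
# contacts against a list of known-suspicious infrastructure ("indicators of
# compromise"). All domain names here are hypothetical placeholders.
SUSPICIOUS_DOMAINS = {
    "example-exploit-server.com",
    "cdn.example-spyware.net",
}

def flag_contacts(observed_domains):
    """Return observed domains matching the indicator list,
    including subdomains of a listed domain."""
    hits = set()
    for domain in observed_domains:
        parts = domain.lower().split(".")
        # Check the domain itself, then each parent domain, against the list.
        for i in range(len(parts) - 1):
            if ".".join(parts[i:]) in SUSPICIOUS_DOMAINS:
                hits.add(domain)
                break
    return hits

traffic = ["api.apple.com", "a.cdn.example-spyware.net", "news.example.org"]
print(sorted(flag_contacts(traffic)))  # → ['a.cdn.example-spyware.net']
```

Note what this approach can and cannot do, which mirrors the Al Jazeera case: a match is strong circumstantial evidence of contact with attacker infrastructure, but it reveals nothing about the exploit running on the device itself.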
Sometimes the locked-down system can backfire even more directly. When Apple released a new version of iOS last summer in the middle of Marczak’s investigation, the phone’s new security features killed an unauthorized “jailbreak” tool Citizen Lab used to open up the iPhone. The update locked him out of the private areas of the phone, including a folder for new updates—which turned out to be exactly where hackers were hiding.
Faced with these blocks, “we just kind of threw our hands up,” says Marczak. “We can’t get anything from this—there’s just no way.”
Beyond the phone
Ryan Storz is a security engineer at the firm Trail of Bits. He leads development of iVerify, a rare Apple-approved security app that does its best to peer inside iPhones while still playing by the rules set in Cupertino. iVerify looks for security anomalies on the iPhone, such as unexplained file modifications—the sort of indirect clues that can point to a deeper problem. Installing the app is a little like setting up trip wires in the castle that is the iPhone: if something doesn’t look the way you expect it to, you know a problem exists.
But like the systems used by Marczak and others, the app can’t directly observe unknown malware that breaks the rules, and it is blocked from reading through the iPhone’s memory in the same way that security apps on other devices do. The trip wire is useful, but it isn’t the same as a guard who can walk through every room to look for invaders.
Despite these difficulties, Storz says, modern computers are converging on the lockdown philosophy—and he thinks the trade-off is worth it. “As we lock these things down, you reduce the damage of malware and spying,” he says.
This approach is spreading far beyond the iPhone. In a recent briefing with journalists, an Apple spokesperson described how the company’s Mac computers are increasingly adopting the iPhone’s security philosophy: its newest laptops and desktops run on custom-built M1 chips that make them more powerful and secure, in part by increasingly locking down the computer in the same ways as mobile devices.
“iOS is incredibly secure. Apple saw the benefits and has been moving them over to the Mac for a long time, and the M1 chip is a huge step in that direction,” says security researcher Patrick Wardle.
Macs were moving in this direction for years before the new hardware, Wardle adds. For example, Apple doesn’t allow Mac security tools to analyze the memory of other processes—preventing apps from checking any room in the castle aside from their own.
These rules are meant to safeguard privacy and prevent malware from accessing memory to inject malicious code or steal passwords. But some hackers have responded by creating memory-only payloads—code that exists in a place where Apple doesn’t allow outside security tools to pry. It’s a game of hide and seek for those with the greatest skill and most resources.
“Security tools are completely blind, and adversaries know this,” Wardle says.
It’s not just Apple, says Aaron Cockerill, chief security officer at the mobile security firm Lookout: “Android is increasingly locked down. We expect both Macs and ultimately Windows will increasingly look like the opaque iPhone model.”
“We endorse that from a security perspective,” he says, “but it comes with challenges of opacity.”
In fact, Google’s Chromebook—which limits the ability to do anything outside the web browser—might be the most locked-down device on the market today. Microsoft, meanwhile, is experimenting with Windows 10 in S mode, a locked-down flavor of its operating system that is built for speed, performance, and security.
These companies are stepping back from open systems because lockdown works, and security experts know it. Bob Lord, the chief security officer for the Democratic National Committee, famously recommends that everyone who works for him—and most other people, too—use only an iPad or a Chromebook for work, specifically because they’re so locked down. Most people don’t need vast access and freedom on their machine, so closing it off does nothing to harm ordinary users and everything to shut out hackers.
But it does hurt researchers, investigators, and those who are working on defense. So is there a solution?
Making the trade-offs
In theory, Apple could choose to grant certain entitlements to known defenders with explicit permission from users, allowing a little more freedom to investigate. But that opens doors that can be exploited. And there is another consequence to consider: every government on earth wants Apple’s help to open up iPhones. If the company created special access, it’s easy to imagine the FBI knocking, a precarious position Apple has spent years trying to avoid.
“I would hope for a framework where either the owner of a device or someone they authorize can have greater forensic abilities to see if a device is compromised,” Marczak says. “But of course that’s tough, because when you enable users to consent to things, they can be maliciously socially engineered. It’s a hard problem. Maybe there are engineering answers to reduce social engineering but still allow researchers access to investigate device compromise.”
Apple and independent security experts are in agreement here: there is no neat fix. Apple strongly believes it is making the correct trade-offs, a spokesperson said recently in a phone interview. Cupertino argues that no one has convincingly demonstrated that loosening security enforcement or making exceptions will ultimately serve the greater good.
Consider how Apple responded to Marczak’s latest report. Citizen Lab found that hackers were targeting iMessage, but no one ever got their hands on the exploit itself. Apple’s answer was to completely re-architect iMessage with the app’s biggest security update ever. They built the walls higher and stronger around iMessage so that exploiting it would be an even greater challenge.
“I personally believe the world is marching toward this,” Storz says. “We are going to a place where only outliers will have computers—people who need them, like developers. The general population will have mobile devices which are already in the walled-garden paradigm. That will expand. You’ll be an outlier if you’re not in the walled garden.”