Features 14.01.2025

Apple Intelligence Is Here: Should CISOs Be Concerned?

Businesses will soon be brimming with devices running the tool. That could be a challenge.

Callum Booth asks: what can security teams do to mitigate the impact of consumer-grade mobile AI assistants?

Whenever Apple releases a new product or feature, the cybersecurity world pays close attention – and for good reason. With over 2.2 billion Apple devices in the wild, including well over a billion iPhones, the company’s engineering decisions matter. So it was with great fanfare, and some concern, that the tech giant’s latest innovation was recently released.

Apple Intelligence is an iOS and iPadOS AI assistant. Although it can currently help users only with mundane tasks like drafting or summarising text, businesses will soon be brimming with devices running the tool. Should they be concerned about it hoovering up large volumes of corporate data?

How dangerous is Apple Intelligence?

Most experts Assured Intelligence spoke to agree that Apple has got plenty right with the rollout of its new AI assistant.

Chris Hauk, privacy champion at Pixel Privacy, tells Assured Intelligence: “Apple has put several privacy and security safeguards in place, not least the ability to have on-device processing of many AI requests.”

Hauk adds that Apple also includes an anonymity layer for any requests sent to the cloud for processing, something which should prevent specific user targeting. This doesn’t mean everything is fine and dandy for cybersecurity professionals though.

“Consumer AI tech is here to stay – and it’s up to security leaders how much of a threat it’ll be to their companies.”

“Apple has set the standard of data privacy and its security has always been very high,” Zscaler EMEA CISO in residence, Tony Fergusson, tells Assured Intelligence. Yet, this has a rather unfortunate downside, as it makes it “more difficult to intercept and inspect data in transit that is going to Apple Intelligence”, he adds.

In other words, corporate use of Apple Intelligence could reduce IT control over where data is moving. If, for example, the tool is automatically summarising sensitive information from internal emails, security teams may not have the appropriate oversight.

Albert Fox Cahn is the executive director of the Surveillance Technology Oversight Project (STOP), a non-profit privacy group. He tells Assured Intelligence that “while Apple has gone much further than others in promising privacy protections for our AI prompts and outputs, a lot still needs to be seen about how these protections work in practice.”

Mitigating risk from Apple devices

Fortunately, there are ways to mitigate these risks, according to Manoj Bhatt, a cybersecurity consultant who has worked with the Ministry of Justice and Accenture.

“The trick is to embrace AI assistants, but with the right protection.” – Tony Fergusson, Zscaler

He advises that Apple Intelligence should be handled like “any other [cybersecurity] risk”.

The first step is user education.

Tools like Apple Intelligence are often automatically enabled when launched on personal devices. Employees should therefore be made aware of this and encouraged to take any new features they encounter to security teams for assessment, Bhatt tells Assured Intelligence.

“This user engagement is going to be the key, rather than disabling everything or trying to detect all AI features,” he adds.

Zscaler’s Fergusson agrees, adding that when analysing the risk posed by Apple Intelligence, businesses first need to look inwards.

“What kind of data [is your company] dealing with?” he asks. “What’s critical? What’s sensitive? And what needs the most protection? Once that’s clear, companies can set rules about who can access that data and how it’s handled.”

If these principles are followed, security teams can head off threats from Apple Intelligence before they arise. This, combined with Apple’s stringent security policies, means businesses shouldn’t be overly worried about the proliferation of the tool. Or should they?

Threats and assistants

Historically, Apple has profited by taking pre-existing technologies – such as MP3 players and smartphones – and refining them in a way that appeals to the public at large. The same could be true for on-device AI assistants. Cybersecurity leaders and executives therefore need to prepare for Apple Intelligence popularising the use of this technology, to the point that it becomes widespread. The challenge is that not all device makers have the same strict security and privacy policies as Apple.

Lisa Ventura – founder of the Cyber Security Unity community – argues that these types of consumer-facing AI tools can “introduce risks related to eavesdropping, data breaches, and unauthorized access to personal information, especially if vulnerabilities are exploited.”

For corporate leaders, this is a nightmare scenario – and something STOP’s Cahn also fears.

“As long as this AI hype bubble continues, and we see more AI features deployed, it’ll radically expand the catalogue of threats to our data,” he says.

If AI assistants become embedded in everything from phones and watches to smart glasses and tablets, how can security leaders prevent these tools soaking up sensitive data and sending it to the cloud?

Fighting consumer-facing AI

The spread of AI into consumer devices is inevitable. So how should it be dealt with?

“The trick is to embrace [them], but with the right protection,” Zscaler’s Fergusson argues.

Best practices could include close monitoring of data, investment in adaptable security systems, and enabling as many internal controls as possible.

“User engagement is going to be the key, rather than disabling everything or trying to detect all AI features.” – Manoj Bhatt

Security consultant Bhatt agrees, saying the industry must “start to accept a level of risk when it comes to AI and tooling”. If companies want to stay ahead of the curve and attract talent, banning consumer AI tools won’t achieve that goal.

Instead, executives should encourage closer dialogue between security staff and regular employees. The more each side knows about the wants and needs of the other, the quicker and more efficiently risk from rogue consumer-facing AI tools can be managed.

There’s something else CISOs and other C-level executives should do though – and that’s ramp up industry engagement and dialogue.

Cyber Security Unity’s Ventura believes that collaboration between industry leaders, policymakers, and cybersecurity experts is essential to prevent AI misuse and reduce risk, both for society and the corporate world. Working together, these groups should drive standards to reduce the threat posed by consumer-facing and enterprise AI.

Beyond this, Ventura says that making consumer-facing AI safer for businesses should involve “providing users with greater control over their data” through customisable privacy settings and opt-in features.

Finding a way forward

On balance, Apple Intelligence shouldn’t unduly worry businesses with a strong security posture. Apple has a good track record on privacy, and if executives follow best practices, the tool won’t be a major threat.

The bigger issue is what happens if Apple Intelligence normalises the technology and a deluge of less security-conscious AI assistants floods the corporate market. Combatting this will be a challenge, but it can be achieved with careful risk management, the adoption of new tools, and broad user education.

Ultimately, consumer AI tech is here to stay – and it’s up to security leaders how much of a threat it’ll be to their companies.
