Apple, AI, Cybersecurity, and Musk: What ‘Apple Intelligence’ means for AI safety and cybersecurity

In early June, Apple launched its hotly anticipated entry into the ever-evolving AI space with ‘Apple Intelligence’. According to CEO Tim Cook, Apple Intelligence, built in collaboration with OpenAI, will deploy generative AI to enhance the iPhone experience.

For instance, it will be able to prioritise which emails or messages you need to see first, it will enable video search in the Photos app, and, amongst other things, it will let you create a completely new emoji from a text prompt.

Indeed, it all sounded very ‘Apple’. That is, implement something that someone else has already implemented, but throw on an Apple lick of paint to plant the stars in our eyes.

However, critics like the ever-outspoken Mr Elon Musk have come out and dramatically condemned Apple Intelligence, claiming it was ‘patently absurd’ that Apple had to rely on integrating OpenAI’s software into their products instead of developing their own AI.

Musk suggested that ‘Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.’ Mr Musk went on to threaten to ban Apple devices from all of his companies were Apple to integrate OpenAI at OS level, calling it an ‘unacceptable security violation’.

Musk has a habit of making points via internet memes: he once posted a picture on X of an NFL player dodging a pack of opposition players, comparing it to his legal team attempting to stop him from posting ‘fire content’. It is therefore surprising that he does not see the potential humour in creating an emoji from a text prompt.

This is not the first time Musk has come to blows with Apple. In 2022, Musk accused the tech giant of trying to sabotage Twitter by cutting back on advertising and threatening to remove the app from the App Store.

We also know that he has long been an AI sceptic who has called for its regulation and urged us to be wary of its potentially lethal cybersecurity implications. In a 2023 meeting with US senators and tech CEOs, Musk warned that AI poses a ‘civilisational risk’.

Indeed, Mr Musk also has his gripes with OpenAI. Having co-founded the company with Sam Altman in 2015, Musk left in 2018 after what the New York Times described as a ‘power struggle’, and he has since been outspoken about OpenAI betraying its founding principles by turning to profits and commercial interests instead.

Considering these personal dynamics, is Musk right? Is Apple Intelligence a safe pair of Californian hands for your cybersecurity? Is it ‘patently absurd’?

AI models have often been criticised for their tendency to ‘hallucinate’: presenting information that is false or misleading as though it were factual. These errors can occur unpredictably.

In an interview with the Washington Post, Tim Cook admitted that, in terms of hallucinations, Apple Intelligence is not yet ‘100 percent’ bulletproof.

Whilst Cook’s phrasing is euphemistic, none of the current AI leaders has yet been able to eliminate hallucinations, and no doubt they will become less and less common as the technology continues to develop.

Indeed, one of the reasons Apple decided to partner with OpenAI is that the company has already taken strong steps to safeguard users’ privacy.

For instance, OpenAI does not track IP addresses, and according to Apple the improved Siri is ‘aware of your personal information without collecting your personal information’. A peculiar way of saying: we already know so much about you that we do not need to collect this information.

Aside from this clunky phrase, the fact that Apple will continue not to store your data, that the system will only be actioning personal requests, and that the company provides a ‘verifiable privacy promise’ is certainly a step in the right direction.

The integration of ChatGPT into Apple’s operating systems is undeniably useful, and yes, of course it will at times hallucinate, but that is by no means a licence to condemn it as absurd.

Anyone interested in tech and AI has been waiting a long time to see what shape Apple’s entry into the space would take and, like most Apple ventures, this is aimed very much at the consumer.

This is not a business product. In most cases, it will not be plugged into business software in the way that Microsoft Copilot for Office 365 would be.

Instead, it makes it easier to find that photo of the meal you had on holiday in Palm Beach. If this is what Mr Musk deems a patently absurd violation of personal privacy, then perhaps he simply does not want to be reminded of what he eats, or does not want his old friend Sam Altman to see it.

Welcome to EconoTimes
