I, Phone

Reading Time: 15 minutes.

Isaac Asimov dreamed of a world with robots. He gave them the Three Laws of Robotics (and eventually a fourth), which ensured robots were safe for humans to be around. Real AI is far more lawless. There are no laws for AI, even where there should be. Companies seeking to replace human labor with cheap AI, built from that very labor, taken unwillingly, have found it easy to do, with no one in government interested in ensuring humans are paid fairly for their contributions. AI has led to false arrests, longer sentences, denied parole, bannings, sexist hiring decisions, potentially car accidents, and, as creatives would say, theft of intellectual property. Much of it stems from extreme bias.

Apple has added a few new rules to their “Apple Intelligence” AI, but it still seeks to take the humanity out of your interactions. It’ll write a bedtime story for your child, hopefully with all the nice life lessons a human would add, based on the hard work of others who may never have been compensated for it. Obviously the lesson won’t be that people should be paid for their work instead of being forced to contribute it, without reward, to a cause they do not support. The AI will sterilize and repackage these creative works, removing the soul and identity of the original authors who trained the model. AI is empty. Fill it with your creativity, your passions, your soul; it still won’t spit any of that back out. Much like AI images, which all look the same, these writings will too. Devoid of humanity, Apple’s Intelligence will generate text, emails, poems, stories, emoji, and more.

It’s ironic that Apple replaced the human voice so eagerly in a communication device.

What is Apple Intelligence?

Before we get into the problems with AI so deeply intertwined with an operating system, let’s take a pause, a deep breath, put down the pitchforks, and talk about the cool features these updates will bring.

But keep the pitchforks at the ready, we’ll be needing them in a bit.

In iOS 18, iPadOS 18, and macOS Sequoia, and only on Apple silicon (any M-series chip, or the A17 Pro and newer), users will soon have access to “Apple Intelligence.” Apple just had to brand their AI to set it apart, but does anything really set it apart?

“Our unique approach combines generative AI with a user’s personal context to deliver truly helpful intelligence. And it can access that information in a completely private and secure way to help users do the things that matter most to them. This is AI as only Apple can deliver it, and we can’t wait for users to experience what it can do.”

– Tim Cook, CEO of Apple

So, let’s see what it can do.

Writing Tools

ChatGPT writes a story for a Mac user

Making up stories with your kids can be fun. But who has the time? ChatGPT to the rescue! You can even have Siri read it to your child!

Apple’s new “writing tools” will be available system-wide, even in third-party apps. Apple has a long history of preventing third parties from using their best new technologies, so it’s a nice change of pace to see them enabling such a powerful tool in any app.

Writing tools seem to be based heavily on OpenAI’s technology. While most of Apple’s AI tools that integrate into the OS do not require OpenAI, the generative text features seem to be tied directly to it. When you ask the system to perform a request that Apple’s AI can’t solve on its own, like rewriting an email, summarizing a document, or generating a children’s story, it’ll ask whether you want to use OpenAI’s ChatGPT to solve the problem. Apple hasn’t been clear how often this will happen, or exactly what features will be locked away from users who are unwilling to share their data with OpenAI. Many have come to see OpenAI as one of the least trustworthy sources of generative AI, and won’t trust them not to access their data or use requests, data, or responses for training. At least Apple gives us the choice to keep OpenAI away from our private information.

Dear Bosshole, Give Me a MFing Raise Plz? Signed Doesntknowhowtowrite

Have you ever written an email for work and suddenly realized you don’t know how to write professionally? Apple and OpenAI have you covered. Apple Intelligence will offer to “rewrite” the email in question, adding whatever tone you want. Professional, funny, heartfelt: you no longer have to be any of those things with Apple Intelligence by your side. It can even proofread your writing, if you’re still using your own words, to ensure you don’t have any typos.

Do you ever see a giant block of text on some loser’s blog and think, “What the hell is she on? I’m not reading all of that!” You could work on your reading comprehension and speed, but that’s work. You could read a section at a time over a few days, but that’s a commitment. With artificial intelligence, you don’t have to be intelligent anymore! You can simply ask Apple Intelligence to summarize it for you. Siri is better than ever at comprehending speech; you likely won’t even need a complete sentence! AI has been great at topic recognition and creating summaries for some time. Getting permission from the author to run her text through AI? Who cares? The OS will do it for you; you’re not liable! No consideration, thinking, or empathy required. Thanks, Apple Intelligence!

It won’t just work for longwinded women on the internet! Your own handwritten notes can be summarized. You can even transcribe and summarize recorded audio. Sit down in class and let AI take the notes for you! Or just jot down as much as you can and let AI sort out the important parts later. Great for cramming in some studying in the 10 minutes before class because you went to a party last night instead of studying. Where was that when I was in college? I mean, you’ve seen how much I write! Imagine trying to cram these notes before a test! Less time studying, more time partying, what could go wrong?

You know when a friend texts you, and you read the message in the notification but think, “Damn, I don’t feel like typing out a response right now”? Of course you do; socialization is exhausting! Let machines talk to your loved ones instead! Smart suggestions can answer questions like, “Are you driving tonight, Justin?” You’ll get an autocomplete message that might read, “Sure, I doubt it’ll ruin the tour,” or, “No, I’m going to take an Uber.” Interestingly, in Apple’s example, it really did suggest taking an Uber, specifically, by brand name. It’s unknown if Uber has to pay for that or if Apple is just basing it on the apps you use most frequently. New startups will be disappointed to find that breaking into the app market just got a little bit harder, with big, established names getting a boost they don’t need in every message.

Priority Messaging

Emails marked as high priority using AI in the inbox

I know I can mark my emails as a high-priority message in nearly every modern email client, but do I? No. Does anyone outside of overpaid executives eager to assert their own importance do that?

No.

But sometimes your message is more important. Sometimes you really should mark it as such, but politeness keeps you from doing so. Apple Intelligence will analyze your incoming messages and notifications, seemingly without the help of OpenAI, to mark messages that seem more important right away.

Of course, Apple Mail still throws a great deal of my important emails into spam. When I was job searching, I had to check my spam inbox as much as my normal inbox. So let’s take this one with a grain of salt.

Image Playground

Van Gogh created some of my favorite paintings. The way he conveyed emotion through art, focusing more on how something feels than how it looks, captures more of the human experience than I typically see in art. I love a quote from a Doctor Who episode that never fails to make me cry, no matter how many times I’ve seen it.

“He transformed the pain of his tormented life into ecstatic beauty. Pain is easy to portray, but to use your passion and pain to portray the ecstasy and joy and magnificence of our world, no one had ever done it before. Perhaps no one ever will again.”

– From Vincent and the Doctor, Doctor Who

But whoever wrote that clearly never heard of Apple’s Image Playground! Because it can take that pain and beauty and recreate it! No humanity involved at all!

Apple Intelligence generating a slightly off-putting image of a person.

Image Playground will be available in nearly every app. It can generate images in animation, illustration, or sketch styles. However, Apple Intelligence will also be able to create near photo-realistic images from your own terrible sketches, using the context of a note, for example. You can even take your friends’ profile photos, seemingly without permission, to generate uncanny valley representations of them. Generation may happen on-device or off-device in Apple’s cloud (more on that later).

With “Genmoji” you can create custom emoji to use in text or as iMessage stickers. You can even generate images of other people, which I’m sure they’ll love and approve of their likeness being used in that way. They won’t have a choice, it seems! Image Playground will be a fun place to make all kinds of new images, based on the work of others, often taken without permission. What fun!

Image Search/Organization

Apple Intelligence incorporates a lot of features that might previously have just been considered part of Siri, or a generic search bar. However, the new search features will be anything but simple. You’ll be able to search for people in photos, and by activity, with simple queries like, “Photos of Fred’s new bicycle.” Apple hasn’t stated exactly how much of this happens on device, and how much uses their cloud service. Since it once again involves running other people’s images through AI, potentially sending them to the cloud along with other contextual and personal information, it’s yet another question of consent: there’s no way for someone to deny you the ability to run a photo of them through AI.

Private Cloud Compute

An animation of a sketch being turned into a more realistic image

It’ll be nice to have a tool to turn my disturbingly bad sketches into “art!” (Sped up 2x from Apple)

A few times, I’ve mentioned Apple’s new cloud computing service. You might think it should be named something like “iCloud Compute,” but, for some reason, Apple decided not to attach their branding to the product. Instead, they’re just calling it the most generic name they could come up with, “Private Cloud Compute,” or PCC. It’s almost like they’re hiding from it.

PCC will take the requests that your device just isn’t powerful enough to process on its own. If a request requires more memory, storage, or computational power than your device has, it’ll send your relevant data and query to a PCC node, which will use the resources your device lacks to complete the query. Apple hasn’t said exactly which requests will head off to PCC, or whether you will be able to track the jobs performed by your device versus those performed by Apple’s server nodes. Apple claims, however, that it’ll be the most private system for AI ever. They published a detailed blog post, effectively a whitepaper, explaining it.

How Does Apple Claim to Protect Privacy?

Generated emoji from a contact's profile photo

Make sure your profile photo is clear so Apple AI can generate emoji of you!

Apple was in a tricky situation. They wanted the AI capabilities of Google, so they wouldn’t fall behind Android, but didn’t want to violate user privacy or security. If a server is to collect and process user data, then users can’t rely on the usual end-to-end encryption they’re used to. Their private data would have to exist, unencrypted, on Apple’s servers for processing. How would they manage that without dismantling privacy? Credit to Apple, they actually took the time to work this out.

Apple shared a few details in their keynote, but most of what we know comes from a short whitepaper from Apple. Here, they explain the issue and the measures they take to prevent privacy and security issues with PCC.

How PCC Protects Privacy

Obviously, standard protections won’t work here. Apple lays out five tactics they’ve used to ensure the privacy of Private Cloud Compute. First is stateless computation. Basically, this means that data is only stored as long as it’s needed. Data is encrypted on your device and sent across the network, and when the node is done creating a response to your request, not only does it delete the data, it also rotates its encryption keys on restart, ensuring that even if someone accesses the same node you used, no data can remain between runs. The software makes use of sandboxing and pointer authentication to prevent software corruption from leaking memory into areas where data could be accessible. Instead, Apple locks data to the process using it, then deletes it when it’s done.
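To make the stateless idea concrete, here’s a minimal sketch in Swift (using CryptoKit) of what a per-request ephemeral-key flow could look like. This is my own illustration, not Apple’s code: the handler, the key handling, and the runModel placeholder are all hypothetical.

```swift
import CryptoKit
import Foundation

// Hypothetical sketch of "stateless computation": a per-request key decrypts
// the request, the response is produced, and nothing persists afterward.
struct EphemeralRequestHandler {
    func handle(encryptedRequest: Data, requestKey: SymmetricKey) throws -> Data {
        // Decrypt only inside this scope; the plaintext never touches storage.
        let box = try AES.GCM.SealedBox(combined: encryptedRequest)
        let plaintext = try AES.GCM.open(box, using: requestKey)

        let response = runModel(on: plaintext)   // stand-in for inference

        // Encrypt the response with the same per-request key and return it.
        // When this function returns, the plaintext and key go out of scope;
        // a real node would also wipe memory and rotate keys on reboot.
        let sealedResponse = try AES.GCM.seal(response, using: requestKey)
        return sealedResponse.combined!
    }

    private func runModel(on input: Data) -> Data {
        // Placeholder for the actual model work.
        return input
    }
}
```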

Next, what Apple calls “enforceable guarantees.” Think of this like a nuclear launch that requires two keys turned at once. All software images that run on the nodes must be signed to run, and in order to sign a software image, it has to be made available to security researchers for third-party testing. The hardware is verified with high-resolution imaging, then sealed in a tamper-evident case. Nothing can run unless it’s been tested and verified.
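A rough sketch of the “two keys” idea, again in Swift: a node only runs software whose digest has been published where researchers can inspect it. The function and the allowlist are invented for illustration; this is not Apple’s actual attestation flow.

```swift
import CryptoKit
import Foundation

// Illustrative only: the image must be signed (not shown here) AND its digest
// must appear in a public, researcher-auditable log before it can run.
func isImageAllowedToRun(imageData: Data, publishedDigests: Set<String>) -> Bool {
    let digest = SHA256.hash(data: imageData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return publishedDigests.contains(hex)
}
```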

Apple won’t allow admin access. The nodes have no shell application, and Apple’s own employees will be locked out of nodes at runtime. They can’t enable debugging or additional logging. In fact, there are no dynamic logs. Logging occurs at milestones and can include no data from the request. This prevents sneaky employees, or even hackers, from creating nodes that copy or share user data.
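Milestone-only logging is easy to picture in code. In this hypothetical Swift sketch, the logging API simply has no parameter that could carry request contents; the milestone names and subsystem string are made up.

```swift
import os

// The only things that can be logged are named stages, never user data.
enum Milestone: String {
    case requestReceived, modelLoaded, inferenceComplete, responseSent
}

struct MilestoneLogger {
    private let logger = Logger(subsystem: "example.pcc.node", category: "milestones")

    // Note the signature: it accepts only a Milestone, not the request.
    func record(_ milestone: Milestone) {
        logger.info("milestone reached: \(milestone.rawValue, privacy: .public)")
    }
}
```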

One issue I was concerned about was targetability. The FBI has tried to force Apple to loosen its security multiple times. What if they win someday? What about a warrant? Could Apple be forced to make a “poison pill” node and route a particular user’s traffic to it? Basically, no. Nodes are randomized and selected using metadata that describes the task, not the content of the data. No routing layer ever has access to the identity of the person making the request. Data is encrypted between nodes, and each node can’t do more than it needs to do. The one that handles facial recognition, for example, might not have access to contacts or the contextual parts of the request. As a result, no one node contains enough data to completely compromise a person’s request and response information. Nodes are chosen at random at computation time, and with the user’s identity masked from them, requests are effectively impossible to track. Apple can’t execute specific code on a node without signing it, and no one can access data once it’s been used for a particular task. If you wanted to compromise a single person, you’d have to compromise a majority, if not all, of the nodes. That would require extreme oversights on Apple’s part, as well as from the security researchers examining Apple’s hardware and software.
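Here’s a small Swift sketch of what metadata-only routing could look like. The router sees only coarse task information, never the user’s identity or the payload, and picks a node at random from the eligible pool. All of the types and fields below are my own invention, not Apple’s.

```swift
import Foundation

// The router's entire view of a request: what kind of work it is and a rough
// resource hint. No user identity, no request contents.
struct TaskMetadata {
    let taskKind: String        // e.g. "summarization"
    let requiredMemoryGB: Int
}

struct Node {
    let id: UUID
    let supportedTasks: Set<String>
    let memoryGB: Int
}

func selectNode(for metadata: TaskMetadata, from pool: [Node]) -> Node? {
    let eligible = pool.filter {
        $0.supportedTasks.contains(metadata.taskKind) &&
        $0.memoryGB >= metadata.requiredMemoryGB
    }
    // Random selection means no one can steer a specific user to a poisoned node.
    return eligible.randomElement()
}
```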

Finally, Apple wants security researchers to be able to verify its claims. Anyone can download images of the software, snapshots of what’s used on the nodes, to test locally on their machines if they’re running an Apple silicon Mac. Apple will have third parties verify and perform pentesting, and anyone can do their own at home as well. It’s not completely open-sourced, and users still don’t have much transparency into when their requests go to the cloud or which nodes are in use, but it’s a step in the right direction. Apple is notorious for not allowing the larger developer community access to testing tools; we don’t often get a look under the hood. It’s a welcome surprise when Apple opens up at all. While I think they could do more for transparency, Apple has made an attempt to be a little better than they normally are.

Whose Privacy Are We Talking About?

One thing that sticks out to me is how much Apple focused on privacy, but not consent. They show someone using AI to transform a contact’s profile photo into an AI-generated image. Even in Apple’s advertising, the results looked unnatural. How uncomfortable would you be giving your face to OpenAI for the generation of some grotesque AI image of you? Apple is blocking photo-realism for these features, for now, but all AI has been easily tricked. Or what about information? You may be comfortable sharing your contact list with OpenAI, but what about your contacts? Don’t they get a say? All it will take is one inconsiderate idiot putting personal information in an email they allowed OpenAI to generate, and the chances of it ending up in a model increase.

Will Apple make it possible to opt out of other people’s use of AI? If so, they’re not talking about it, and no one has found such an option in the betas. That means that once your friends update to Apple’s new operating systems, you won’t be able to avoid your data ending up on Apple’s or OpenAI’s servers.

OpenAI

OpenAI has been anything but honest, transparent, or trustworthy. They won’t detail the sources of their data. It turns out they may be ignoring requests to exclude material from their training. OpenAI CEO Sam Altman claimed ignorance about NDAs that would strip former employees of their equity in the company if they spoke out about their work, but signed the paperwork himself. They might have even deliberately used a Scarlett Johansson sound-alike to reference a movie.

As long as any of your data is going to OpenAI, I wouldn’t trust its privacy. Apple says OpenAI has agreed not to train their AI based on requests, but how much can we believe them? I wouldn’t believe a single claim from OpenAI. And what about metadata, results, or result acceptance? Just what privacy are users actually guaranteed from a company that refuses to reveal their sources? They only mention the requests, but there is far more data to train a model on from a single request than the request itself.

On-Device

Apple is still trying to focus on keeping your requests as local and private as possible. That means using the device’s power instead of the cloud as often as possible. They may be working on ways to cut OpenAI out of their operating systems, much in the way they eventually replaced Google Maps with Apple Maps in their apps. However, that could be years away.

Apple hasn’t detailed exactly how the on-device-versus-cloud determination is made, or whether it’s logged in any way for users to review on their own. However, it’s interesting when you think about model sizes. Could Apple take advantage of a newer processor and larger memory, so an iPhone 16 Pro could make fewer cloud requests than an iPhone 15 Pro? Could Apple give more power, and better privacy, to those willing to pay more? Could the best advice for protecting your privacy be, “Pay more”? We won’t know until future devices are released.
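To show why newer hardware could mean fewer cloud requests, here’s a purely speculative Swift sketch: if the decision were based on whether a model fits comfortably in available memory, a device with more RAM would clear the bar more often. None of these numbers, names, or thresholds come from Apple.

```swift
import Foundation

struct DeviceProfile {
    let availableMemoryGB: Double
}

enum ExecutionTarget { case onDevice, privateCloudCompute }

// Hypothetical rule: keep the request local if the model fits comfortably.
func chooseTarget(modelMemoryNeedGB: Double, device: DeviceProfile) -> ExecutionTarget {
    return modelMemoryNeedGB <= device.availableMemoryGB * 0.5
        ? .onDevice
        : .privateCloudCompute
}

// Example: a 3 GB model stays on an 8 GB device, but goes to the cloud on a
// 6 GB device under this made-up threshold.
```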

Privacy is a Distraction

One thing I kept reminding myself of during all of this is that privacy is a distraction. Yes, it’s important, far more important than anyone treats it, and if you use OpenAI, I personally suspect you’ll be throwing your privacy out the window. However, privacy is hardly the main concern here. Apple is using generative AI built on the work of multiple parties who never consented to being part of the inputs or contributing to the models. While these features can do cool things, they sacrifice the work of others to do so. Perhaps Private Cloud Compute will set a gold standard for privacy. Will it matter when it’s using the work of others to generate results? Will it matter when you or your friends are feeding it requests about people you know, without their permission? Will anyone care about privacy when people are working in horrible conditions to generate that AI?

What’s the Big Deal?

A haunting robot sketch representing AI with the text "Fill me for I am empty" around it. The eyes, nose, mouth, and ears are cavernous hollows in a foreboding metal shape.

Give AI your art, your soul. It can’t survive without it.

These features sound cool, and more people will use them than contribute to them. Doesn’t that make them a net positive?

First, I will say: trust no guarantees from any company making and selling AI right now. The goal of AI is profit; copyright, human decency, intellectual property, forced labor, none of it matters when profit is on the line. On top of that, laws are extremely lax around AI right now. Companies are hoping to cash out before the laws catch up to their misdeeds. You cannot trust any of them right now; there are simply no repercussions for breaking consumer trust, security, and privacy. As Apple’s updates show, you may have no choice but to participate in AI now.

No Harm Reduction

Apple could have done generative AI right. They could have built their own models from willing participants, using data they chose deliberately to help reduce bias. They could have ensured contributions are tracked so that creators could be compensated when their creations lead to generated output. Apple could have reduced their energy impact, controlled their suppliers more closely, and ensured they were producing AI with more care than even their own devices. They didn’t. Instead, they went with one of the most controversial names in AI. Apple had a chance to reduce harm, and they didn’t take it. That should tell you everything about their priorities in this space. Our assumption is a safe one: if it comes down to doing right by customers and creators, or making a profit, Apple will choose the latter.

Normalizing the Worst in AI: Replacing People

Text written into static with a robot emoji. Text reads, "If you put 10,000 humans in a room, eventually they will write Shakespeare."

Many of the AI features are nice. I especially like using AI to find details in your messages to make actions easier. “Add that to my calendar” as a command Siri can understand, pulling the details out of a text message, is wonderful. I also like the sorted notifications and emails, and I hope that, finally, it improves Apple’s inbox (and doesn’t throw interview requests in my junk mail the next time I’m looking). Summarizing your notes, helping you make bullet points and potentially flash cards? That’s great, and could take away a large part of what I didn’t like about studying.

However, the generative AI features are simply disappointing to see. These are built by shady companies who keep secrets and, it seems, lie and cheat their way to more data. According to the people creating these works of art, AI companies are stealing the output of their labor to replace the very people they stole from. I don’t feel like, in any civilized society, I should have to point out just how evil that is. This should be obvious.

But there’s more. These models are huge, made of untold data from even larger sources. When researchers tricked OpenAI’s ChatGPT into outputting personal information, it seemed as though the information came from personal emails. It’s likely someone hacked those emails and shared them somewhere, either on the dark web or somewhere as simple as Pastebin, and OpenAI scraped them. These models are not safe because they were not built for safety. They have ingested racism, homophobia, transphobia, sexism, religious bias, and many other forms of bias. Companies work hard to filter this out, performing multiple layers of treatment on the data, often at the expense of the people doing that work in conditions they describe as “modern day slavery.”

The accomplishments of these large models are impressive, but smaller models can accomplish much of the same. Less data, curated and specifically generated to nudge AI in the right direction, can be just as powerful without the biases. On top of that, large data means large computations, and far more electricity use. While smaller data sets can produce generative AI more ethically, we’ve instead gone the opposite direction, being as destructive as we can.

Right now? AI looks like the industrial revolution. And while that can be a boon for rich capitalists, it’s generally quite bad for the health and safety of the people building it. That’s us. AI companies are gobbling up our work so someone can do your job “in the style of <you>” for less money than it takes to hire you. The “jobs” it’s creating are, according to the people doing them, nearly equivalent to slavery. All while increasing our energy demands and pollution. This software isn’t just made to replace you; it’s going to cause harm. Hell, AI already has.

Killing Our Humanity

Robot emoji behind a sweating human emoji. Text reads "I copied your homework. Now I have your job."

We’ve had translation software for some time. Obviously, AI has made it better. Topic recognition and context are important pieces of language, and that nuance is lost if you’re just looking at each sentence individually rather than trying to get at the deeper meaning. AI tries to do this, but there’s a reason we don’t use AI translators in place of human translators when it counts. Humans get humans. The soul of a human, the creativity, passion, and mutual understanding between humans, is something that a machine just can’t replicate. Whether it’s a translator for a politician at the U.N., or a book with brilliant prose you wish to carry the same weight in another language, it’s important to ensure art is made by humans, for humans. A machine ruins everything.

And this is the real problem. When you have AI write a lazy story for your kid, you steal the human labor that went into generating that story and pull all the humanity out of it. You produce some silly story that looks like it was written by a toddler instead of someone who cares about teaching kids important life lessons through stories. Stories guide us. What will we do without stories that guide us to be better?

Perhaps we’ll make more AI.

When you pull the humanity out of art, you just make a distraction. Like a scam email or an old “like this post and share or you’ll get bad luck” post. It’s not creative, it’s not passionate, it’s just there. It’s “content,” in name only. It takes all the humanity out of art, repackages it, and sells it back to someone else. In this case, Apple’s likely footing the bill so your kid can get the most bland bedtime stories, your mom can get the lousiest Mother’s Day greeting, and your first love can get a sonnet written by the equivalent of monkeys slamming their fists on typewriters until they produce Shakespeare. Maybe it won’t have soul, but who cares? It’s profitable.

A Better Way to AI

AI can help doctors diagnose diseases and cancerous tumors that humans sometimes miss. When used in conjunction with a doctor’s expertise, it can produce an accurate assessment of a patient’s health. AI can analyze an engine and help a mechanic quickly diagnose issues, getting your car back on the road. It can help a data analyst spot patterns in their earliest stages, reporting and identifying trends so they can make suggestions to decision makers more quickly. It can help detect security flaws in your code.

When AI is used as a tool that benefits humanity, it’s clearly beneficial. When we use it to enhance our own abilities, rather than replace them, we become something greater than the sum of our parts. However, much of AI research has been using the outputs of human labor—often without permission—to replace human labor. It’s taking someone’s work without paying for it. Sanitizing theft and making a profit. I can’t think of a better endgame for capitalism than replacing the labor component so the market is all capital. We’re well on our way!