this post was submitted on 10 Jun 2024
Technology


Actually, really liked the Apple Intelligence announcement. It must be a very exciting time at Apple as they layer AI on top of the entire OS. A few of the major themes:

Step 1 Multimodal I/O. Enable text/audio/image/video capability, both read and write. These are the native human APIs, so to speak.

Step 2 Agentic. Allow all parts of the OS and apps to interoperate via "function calling"; a kernel-process LLM that can schedule and coordinate work across them given user queries.

Step 3 Frictionless. Fully integrate these features in a highly frictionless, fast, "always on", and contextual way. No going around copy-pasting information, prompt engineering, etc. Adapt the UI accordingly.

Step 4 Initiative. Don't just perform a task given a prompt; anticipate the prompt, suggest, initiate.

Step 5 Delegation hierarchy. Move as much intelligence as you can on device (Apple Silicon very helpful and well-suited), but allow optional dispatch of work to cloud.

Step 6 Modularity. Allow the OS to access and support an entire and growing ecosystem of LLMs (e.g. ChatGPT announcement).

Step 7 Privacy. <3

We're quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works. Super exciting and as a user, quite looking forward to it.

https://x.com/karpathy/status/1800242310116262150
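The "function calling" dispatcher of Step 2 combined with the on-device/cloud split of Step 5 could be sketched roughly as below. Everything here is hypothetical (the `Tool` registry, the keyword-based selection standing in for model-driven tool choice); it is not Apple's actual API, just an illustration of the pattern:

```python
# Hypothetical sketch of OS-level "function calling": apps register
# capabilities, and a coordinating process picks one based on the query.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

registry: dict[str, Tool] = {}

def register(tool: Tool) -> None:
    registry[tool.name] = tool

def dispatch(query: str) -> str:
    # A real system would let the LLM choose the tool (and decide whether
    # to run on device or in the cloud); here we fake the selection with
    # keyword matching to keep the sketch runnable.
    for tool in registry.values():
        if tool.name in query.lower():
            return tool.run(query)
    return "no matching tool"

register(Tool("calendar", "create events", lambda q: "event created"))
register(Tool("mail", "draft emails", lambda q: "draft saved"))

print(dispatch("add this to my calendar"))  # event created
```

The key design point in the tweet is that the registry and dispatcher live in the OS, not in any single app, so work can be coordinated across apps from one user query.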

top 34 comments
[–] c10l@lemmy.world 4 points 1 year ago (1 children)

Founding member of company that stands to make fortunes through a product endorses said product.

[–] Z4rK@lemmy.world 0 points 1 year ago

I mean, that’s fair; if you don’t believe in his integrity then this news has very little value to you.

[–] duuuurtymike@lemm.ee 3 points 3 months ago

NONE of the features on this list are in Apple Intelligence. Apple AI is such a flop. They released the iPhone 16 lineup saying it’s for Apple AI and it’s not even going to be released on them for at least another year. What a fail.

[–] someacnt_@lemmy.world 2 points 1 year ago (1 children)
[–] admin@lemmy.my-box.dev 0 points 1 year ago (1 children)

Check out OP defending Apple in every comment in this thread. It would be funny if it weren't so... yeah.

[–] someacnt_@lemmy.world 1 points 1 year ago

I am just sitting here like... how. Am I too autistic to distinguish satire from non-satire?

[–] AverageGoob@lemmy.world 2 points 1 year ago (3 children)

Yikes. Just hit em with the ol' "<3" for privacy. Does not inspire confidence.

[–] eager_eagle@lemmy.world 3 points 1 year ago

#trustmebro

<3

[–] reattach@lemmy.world 2 points 1 year ago

I thought the original post was satire - list all of the privacy issues, then throw in "Privacy <3" at the end. Seriously, almost every one of those points has a potential privacy issue.

Guess I was being too generous.

[–] Z4rK@lemmy.world 0 points 1 year ago (3 children)

How so? Many people want to use AI in private, but it’s currently too hard for most people to set it up for themselves.

Having AI tools at the OS level, so you can use them in almost any app, with processing guaranteed to happen on device in private, will be very useful if done right.

[–] TheFriar@lemm.ee 1 points 1 year ago

You think your iPhone isn’t collecting data on you? Is that what you’re saying?

[–] Zoot@reddthat.com 0 points 1 year ago (1 children)

Yeah just like Microsoft Recall right? An AI that has access to every single thing you do (and would also be recording, otherwise how does it know "you") can never be private by design. Its literal design is to know everything about you, your actions, and your habits. I wouldn't trust anyone to be able to create an actually secure piece of software that does the above. It will always be able to be stolen/sold/abused.

[–] Z4rK@lemmy.world -1 points 1 year ago (1 children)

macOS and Windows could already be doing this today behind your back regardless of any new AI technology. Don’t use an OS you don’t trust.

[–] Zoot@reddthat.com 0 points 1 year ago (1 children)

I don't use either of those thankfully :)

[–] Z4rK@lemmy.world -1 points 1 year ago

That’s fair, but you are misunderstanding the technology if you’re bashing the AI from Apple for making macOS less secure. Most likely, it will be just as secure as, for example, their password functionality, although we don’t have details yet. You either trust the OS or you don’t.

Microsoft Recall was designed so badly, there’s no hope for it.

[–] Rustmilian@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

> you can use it in almost any app
>
> if done right

How are you going to be able to use it in "almost any app" in a way that is secure? How are you going to design it so that the apps don't abuse the AI to get more information on the user out of it than intended? Seems pretty damn inherently insecure to me.

[–] Z4rK@lemmy.world -1 points 1 year ago (1 children)

That’s why it’s at the OS level. For example, for text, it seems to work in any text app that uses the standard text input API, which Apple controls.

The user activates the “AI overlay” in the OS, not in the app; the OS reads the selected text from the app and sends text suggestions back.

The app is (possibly) unaware that AI has been used or activated, and has not received any user information.

Of course, if you don’t trust the OS, don’t use this. And I’m 100% speculating here based on what we saw for the macOS demo.
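The speculated flow above, where the OS (not the app) mediates between the text selection and the model, can be sketched as follows. This is purely illustrative, built on the same speculation; `on_device_model` and the `App` class are stand-ins, not any real Apple API:

```python
# Illustrative sketch of the speculated overlay flow: the OS reads the
# selection and calls the model, so the app never talks to the model.
def on_device_model(text: str) -> str:
    # Stand-in for a local model; just uppercases as a dummy "rewrite".
    return text.upper()

class App:
    """An app exposing a standard text-input API: read/replace a selection."""
    def __init__(self, buffer: str):
        self.buffer = buffer

    def read_selection(self, start: int, end: int) -> str:
        return self.buffer[start:end]

    def replace_selection(self, start: int, end: int, new: str) -> None:
        self.buffer = self.buffer[:start] + new + self.buffer[end:]

def os_overlay_rewrite(app: App, start: int, end: int) -> None:
    # The OS overlay, not the app, reads the selection and calls the model;
    # the app only sees an ordinary text replacement come back.
    selected = app.read_selection(start, end)
    suggestion = on_device_model(selected)
    app.replace_selection(start, end, suggestion)

app = App("please fix this sentence")
os_overlay_rewrite(app, 0, 6)
print(app.buffer)  # PLEASE fix this sentence
```

In this shape the trust boundary sits entirely at the OS: the app receives only the rewritten text, which is exactly the point being debated in this thread.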

[–] Rustmilian@lemmy.world 1 points 1 year ago* (last edited 1 year ago)
  • Malicious actors could potentially exploit vulnerabilities in the AI system to gain unauthorized access or control over device functions and data, potentially leading to severe privacy breaches, unauthorized data access, or even the ability to inject malicious content or commands through the AI system.
  • Privacy breaches are possible if the AI system is compromised, exposing user data, activities, and conversations processed by the AI.
  • Integrating AI functionality deeply into the operating system increases the overall attack surface, providing more potential entry points for malicious actors to exploit vulnerabilities and gain unauthorized access or control.
  • Human reviewers have access to annotate and process user conversations for improving the AI models. To effectively train and improve the AI models powering the OS-level integration, Apple would likely need to collect and process user data, such as text inputs, conversations, and interactions with the AI.
  • Apple's privacy policy states that the company collects data necessary to provide and improve its products and services. The OS-level AI would fall under this category, allowing Apple to collect data processed by the AI for improving its functionality and models.
  • Despite privacy claims, Apple has a history of collecting various types of user data, including device usage, location, health data, and more, as outlined in their privacy policies.
  • If Apple partners with third-party AI providers, there is a possibility of user data being shared or accessed by those entities, as permitted by Apple's privacy policy.
  • With the AI system operating at the OS level, it likely has access to a wide range of user data, including text inputs, conversations, and potentially other sensitive information. This raises privacy concerns about how this data is handled, stored, and potentially shared or accessed by the AI provider or other parties.
  • Lack of transparency for users about when and how their data is being processed by the AI system & users not being fully informed about data collection related to the AI. Additionally, if the AI integration is controlled solely at the OS level, users may have limited control over enabling or disabling this functionality.
[–] JoMiran@lemmy.ml 2 points 1 year ago (2 children)
[–] WorldsDumbestMan@lemmy.today 1 points 4 months ago

Now just need one of those headsets that read your vocal cord movements in order to "read your thoughts", and I can silently make the AI do anything.

I look forward to Apple Marketing coming up with their usual line of nonsense, like a meaningless name for an existing capability that they are claiming to have invented.

[–] LANIK2000@lemmy.world 1 points 1 year ago

The amount of corporate speak makes me sick. Especially the buzzwords mixed with shit like "KERNEL PROCESS", shit's cursed.

[–] henfredemars@infosec.pub 1 points 1 year ago* (last edited 1 year ago) (1 children)

> kernel process LLM

God I hope not. That sounds extremely insecure. Definitely do not do this in the kernel.

[–] Thann@lemmy.ml 2 points 1 year ago (1 children)

Why not just have the LLM replace the kernel?

vibe-syscalls

[–] eager_eagle@lemmy.world 1 points 1 year ago (1 children)

"and it just works"

has he even used an llm before?

[–] Z4rK@lemmy.world 0 points 1 year ago (1 children)

He sort of invented it, so you have to think he’s commenting on the concept here, not the implementation.

I have tried a lot of medium and small models, and there is just no good replacement for the larger ones for natural text output. And they won’t run on device.

Still, fine-tuning smaller models can do wonders, so my guess would be that Apple Intelligence is really 20+ small, fine-tuned models that kick in based on which action you take.
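That guess, many small specialist models behind an action router, can be sketched like this. It is hypothetical (Apple has not published this architecture), and the "models" here are trivial string functions standing in for fine-tuned networks:

```python
# Hypothetical router: classify the user's action, then hand the input
# to a small model fine-tuned for exactly that action.
def summarize_model(text: str) -> str:
    # Dummy specialist: truncates as a stand-in for real summarization.
    return text[:20] + "..."

def proofread_model(text: str) -> str:
    # Dummy specialist: fixes one common typo as a stand-in for proofreading.
    return text.replace("teh", "the")

SPECIALISTS = {
    "summarize": summarize_model,
    "proofread": proofread_model,
}

def route(action: str, text: str) -> str:
    model = SPECIALISTS.get(action)
    if model is None:
        raise ValueError(f"no specialist for {action!r}")
    return model(text)

print(route("proofread", "teh quick brown fox"))  # the quick brown fox
```

The appeal of this design is that each specialist can be small enough to run on device, at the cost of a router that must pick the right one.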

[–] gravitas_deficiency@sh.itjust.works 0 points 1 year ago (1 children)

An LLM has no comprehension of what it says. It’s just a puppy that is really good at performing for treats. This will always yield nonsense a meaningful proportion of the time.

I don’t care how statistically good your model can be under certain constraints and inputs. At the end of the day, all you’ve done is classically condition your computer.

[–] Z4rK@lemmy.world 0 points 1 year ago

It goes a tad bit beyond classical conditioning... LLMs provide a much better semantic experience than any previous technology, and are great for relating input to meaningful content. Think of it as an improved search engine that gives you more relevant info / actions / tool suggestions based on where and how you are using it.

Here’s a great article that gives some insight into the knowledge features embedded into a larger model: https://transformer-circuits.pub/2024/scaling-monosemanticity/

[–] rikudou@lemmings.world 1 points 1 year ago

What the hell is the fella smoking if he thinks Apple would ever let others use their on-device LLM? Like, the company that deems it too dangerous if apps could change a wallpaper?

[–] demonsword@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

> Andrej Karpathy endorses Apple Intelligence

Who is this guy, and why should his opinion mean anything to me?

EDIT: never mind, searched for it and it's some guy who used to work at OpenAI.

[–] xylogx@lemmy.world 1 points 1 month ago

He has some really good, in-depth YouTube explainer videos on LLMs. That said, this bit on Apple Intelligence does not seem to reflect what people are experiencing.