
Posts tagged with "AI"

OpenAI to Buy Jony Ive’s Stealth Startup for $6.5 Billion

Jony Ive’s stealth AI company, known as io, is being acquired by OpenAI for $6.5 billion in a deal that is expected to close this summer, subject to regulatory approvals. According to reporting by Mark Gurman and Shirin Ghaffary of Bloomberg:

The purchase — the largest in OpenAI’s history — will provide the company with a dedicated unit for developing AI-powered devices. Acquiring the secretive startup, named io, also will secure the services of Ive and other former Apple designers who were behind iconic products such as the iPhone.

The partnership builds on a 23% stake in io that OpenAI purchased at the end of last year and comes with what Bloomberg describes as 55 hardware engineers, software developers, and manufacturing experts, plus a cast of accomplished designers.

Ive had this to say about the purportedly novel products he and OpenAI CEO Sam Altman are planning:

“People have an appetite for something new, which is a reflection on a sort of an unease with where we currently are,” Ive said, referring to products available today. Ive and Altman’s first devices are slated to debut in 2026.

Bloomberg also notes that Ive and his team of designers will be taking over all design at OpenAI, including software design like ChatGPT.

For now, the products OpenAI is working on remain a mystery, but given the purchase price and io’s willingness to take its first steps into the spotlight, I expect we’ll be hearing more about this historic collaboration in the months to come.

Permalink

Notes on Early Mac Studio AI Benchmarks with Qwen3-235B-A22B and Qwen2.5-VL-72B

I received a top-of-the-line Mac Studio (M3 Ultra, 512 GB of RAM, 8 TB of storage) on loan from Apple last week, and I thought I’d use this opportunity to revive something I’ve been mulling over for some time: more short-form blogging on MacStories in the form of brief “notes” with a dedicated Notes category on the site. Expect more of these “low-pressure”, quick posts in the future.

I’ve been sent this Mac Studio as part of my ongoing experiments with assistive AI and automation, and one of the things I plan to do over the coming weeks and months is to play around with local LLMs that tap into the power of Apple Silicon and the incredible performance headroom afforded by the M3 Ultra and this computer’s specs. I have a lot to learn when it comes to local AI (my shortcuts and experiments so far have focused on cloud models and the Shortcuts app combined with the LLM CLI), but since I had to start somewhere, I downloaded LM Studio and Ollama, installed the llm-ollama plugin, and began experimenting with open-weights models (served from Hugging Face as well as the Ollama library) in both the GGUF format and Apple’s own MLX format.

LM Studio.
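If you want to replicate the Ollama side of this setup, here’s a minimal sketch of the terminal steps, assuming Homebrew; the model tag is an example from the Ollama library and may not match the exact quant I tested.

```bash
# Minimal sketch of the local LLM setup, assuming Homebrew.
# The model tag below is an example and may not match the exact quant I tested.

# Install Ollama and Simon Willison's LLM CLI.
brew install ollama llm

# Start the Ollama server (the Ollama menu bar app does this for you, too).
ollama serve &

# Teach the LLM CLI about locally served Ollama models.
llm install llm-ollama

# Pull an open-weights model from the Ollama library.
ollama pull qwen3:235b

# Confirm it's registered, then run a quick prompt against it.
llm models
llm -m qwen3:235b "Explain what a Mixture-of-Experts model is in two sentences."
```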

I posted some of these early tests on Bluesky. I ran the massive Qwen3-235B-A22B model (a Mixture-of-Experts model with 235 billion parameters, 22 billion of which are active at a time) with both GGUF and MLX using the beta version of the LM Studio app, and these were the results:

  • GGUF: 16 tokens/second, ~133 GB of RAM used
  • MLX: 24 tokens/second, ~124 GB of RAM used

As you can see from these first benchmarks (both based on the 4-bit quant of Qwen3-235B-A22B), the Apple Silicon-optimized version of the model resulted in better performance both for token generation and memory usage. Regardless of the version, the Mac Studio absolutely didn’t care and I could barely hear the fans going.

I also wanted to play around with the new generation of vision language models (VLMs) to test their OCR capabilities. One of the tasks that has become something of a personal AI eval for me lately is taking a long screenshot of a shortcut from the Shortcuts app (using CleanShot’s scrolling captures) and feeding it either as a full-res PNG or PDF to an LLM. As I shared before, due to image compression, the vast majority of cloud LLMs either fail to accept the image as input or compress it so much that graphical artifacts lead to severe hallucinations in the text analysis of the image. Only o4-mini-high – thanks to its more agentic capabilities and tool-calling – was able to produce a decent output; even then, that was only possible because o4-mini-high decided to slice the image into multiple parts and iterate through each one with discrete pytesseract calls. The task took almost seven minutes to run in ChatGPT.

This morning, I installed the 72-billion parameter version of Qwen2.5-VL, gave it a full-resolution screenshot of a 40-action shortcut, and let it run with Ollama and llm-ollama. After 3.5 minutes and around 100 GB RAM usage, I got a really good, Markdown-formatted analysis of my shortcut back from the model.
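For reference, this is roughly what that run looks like from the terminal with llm-ollama; treat it as a sketch rather than my exact command, since the model tag and file name are placeholders.

```bash
# Rough sketch of the screenshot analysis with a local VLM.
# Model tag and file name are placeholders; -a requires a recent
# version of the LLM CLI with attachment support.
ollama pull qwen2.5vl:72b

llm -m qwen2.5vl:72b \
  -a shortcut-screenshot.png \
  "Analyze this screenshot of a Shortcuts workflow. List every action in order and explain what the shortcut does, formatted as Markdown."
```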

To make the experience nicer, I even built a small local-scanning utility that lets me pick an image from Shortcuts and runs it through Qwen2.5-VL (72B) using the ‘Run Shell Script’ action on macOS. It worked beautifully on my first try. Amusingly, the smaller version of Qwen2.5-VL (32B) thought my photo of ergonomic mice was a “collection of seashells”. Fair enough: there’s a reason bigger models are heavier and costlier to run.
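For context, here’s a hypothetical sketch of what the ‘Run Shell Script’ body could look like – not my exact script – assuming the action is set to pass input as arguments and the llm CLI is installed via Homebrew.

```bash
# Hypothetical 'Run Shell Script' body: Shortcuts passes the selected
# image's file path as an argument ($1) when "Pass Input" is set to
# "as arguments". The model tag is a placeholder.
IMAGE_PATH="$1"

# Use the full Homebrew path, since Shortcuts doesn't load your shell profile.
/opt/homebrew/bin/llm -m qwen2.5vl:72b \
  -a "$IMAGE_PATH" \
  "Describe the contents of this image in detail, formatted as Markdown."
```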

Given my struggles with OCR and document analysis with cloud-hosted models, I’m very excited about the potential of local VLMs that bypass memory constraints thanks to the M3 Ultra and provide accurate results in just a few minutes without having to upload private images or PDFs anywhere. I’ve been writing a lot about this idea of “hybrid automation” that combines traditional Mac scripting tools, Shortcuts, and LLMs to unlock workflows that just weren’t possible before; I feel like the power of this Mac Studio is going to be an amazing accelerator for that.

Next up on my list: understanding how to run MLX models with mlx-lm, investigating long-context models with dual-chunk attention support (looking at you, Qwen 2.5), and experimenting with Gemma 3. Fun times ahead!
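For the mlx-lm part, this is roughly where I plan to start, based on the project’s command line tools; the Hugging Face repo below is just an example quant from the mlx-community organization, not necessarily the one I’ll end up using.

```bash
# Rough sketch of running an MLX-converted model with mlx-lm.
# The repo name is an example quant from mlx-community.
pip install mlx-lm

mlx_lm.generate \
  --model mlx-community/Qwen2.5-72B-Instruct-4bit \
  --prompt "Explain dual-chunk attention in a couple of sentences." \
  --max-tokens 200
```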


Is Apple’s AI Predicament Fixable?

On Sunday, Bloomberg’s Mark Gurman published a comprehensive recap of Apple’s AI troubles. There wasn’t much new in Gurman’s story, except quotes from unnamed sources that added to the sense of conflict playing out inside the company. That said, it’s perfect if you haven’t been paying close attention since Apple Intelligence was first announced last June.

What’s troubling about Apple’s predicament isn’t that Apple’s super mom and other AI illustrations look like they were generated in 2022, a lifetime ago in the world of AI. The trouble is what the company’s struggles mean for next-generation interactions with devices and productivity apps. The promise of natural language requests made to Siri that combine personal context with App Intents is exciting, but it’s mired in multiple layers of technical issues that need to be solved, starting, as Gurman reported, with Siri.

The mess is so profound that it raises the question of whether Apple has the institutional capabilities to fix it. As M.G. Siegler wrote yesterday on Spyglass:

Apple, as an organization, simply doesn’t seem built correctly to operate in the age of AI. This technology, even more so than the web, moves insanely fast and is all about iteration. Apple likes to move slowly, measuring a million times and cutting once. Shipping polished jewels. That’s just not going to cut it with AI.

Having studied the fierce competition among AI companies for months, I agree with Siegler. This isn’t like hardware where Apple has successfully entered a category late and dominated it. Hardware plays to Apple’s design and supply chain strengths. In contrast, the rapid iteration of AI models and apps is the antithesis of Apple’s annual OS cycle. It’s a fundamentally different approach driven by intense competition and fueled by billions of dollars of cash.

I tend to agree with Siegler that, given where things stand, Apple should replace a lot of Siri’s capabilities with a third-party chatbot and, in the longer term, make an acquisition to shake up how it approaches AI. However, I also think the chances of either of those things happening are slim given Apple’s historical focus on internally developed solutions.

Permalink

Google Brings Its NotebookLM Research Tool to iPhone and iPad

Google’s AI research tool NotebookLM dropped on the App Store for iOS and iPadOS a day earlier than expected. If you haven’t used it before, NotebookLM lets you feed in source materials like PDFs, text files, MP3s, and more. Once your sources are uploaded, you can use Google’s AI to query them, asking questions and creating materials that draw on what you’ve added.

Of all the AI tools I’ve tried, NotebookLM’s web app is one of the best I’ve used, which is why I was excited to try it on the iPhone and iPad. I’ve only played with it for a short time, but so far, I like it a lot.

Just like the web app, you can create, edit, and delete notebooks, add new sources using the native file picker, view existing sources, chat with your sources, create summaries and timelines, and use the Studio tab to generate a faux podcast of the materials you’ve added to the app. Notebooks can also be filtered and sorted by Recent, Shared, Title, and Downloaded. Unlike the web app, you won’t see predefined prompts for things like a study guide, a briefing document, or FAQs, but you can still generate those materials by asking for them from the Chat tab.

NotebookLM’s native iOS and iPadOS app is primarily focused on audio. The app lets you generate audio overviews from the Chat tab and ‘deep dive’ podcast-style conversations that draw from your sources. Also, the generated audio can be downloaded locally, allowing you to listen later whether or not you have an Internet connection. Playback controls are basic and include buttons to play and pause, skip forward and back by 10 seconds at a time, control playback speed, and share the audio with others.

Generating an audio overview of sources.

What you won’t find is any integration with features tied to App Intents. That means notebooks don’t show up in Spotlight Search, and there are no widgets, Control Center controls, or Shortcuts actions. Still, for a 1.0, NotebookLM is an excellent addition to Google’s AI tools for the iPhone and iPad.

NotebookLM is available to download from the App Store for free. Some NotebookLM features are free, while others require a subscription that can be purchased as an In-App Purchase in the App Store or from Google directly. You can learn more about the differences between the free and paid versions of NotebookLM on Google’s blog.


Eddy Cue Causes a Stir for Google

2025 is shaping up to be the year of litigation for big tech. Apple’s been held in contempt and has an antitrust case on the horizon, Meta is in the midst of an antitrust trial, and Google is defending two antitrust lawsuits at once. Every one of these cases is a high-stakes challenge to the status quo, and collectively, they have the potential to reshape the tech industry for years to come.

The ultimate question for Google right now is whether it will be broken up. What will become of its ad tech business, and will it be forced to sell Chrome? That will be decided by the judges in those cases, but along the way, there are plenty of sideshow dramas worth keeping an eye on. This week, it was Google’s turn for a little litigation drama that was prompted not by a judge, but by none other than Apple’s SVP of Services Eddy Cue.

As part of Google’s search antitrust case, Cue testified yesterday that in April 2025, Google searches declined in Safari for the very first time. Cue’s testimony, which was reported on by Mark Gurman, Leah Nylen, and Stephanie Lai of Bloomberg, went on to explain that Apple is investigating AI search as an alternative to traditional search engines, noting that the company has had discussions with Perplexity.

Google’s stock immediately began to fall. By the close of trading, it was down around 7.5%, and the decline had caused enough concern internally at Google that the company felt compelled to release a one-paragraph statement on its blog, The Keyword, responding not to the testimony but to “press reports”:

Here’s our statement on this morning’s press reports about Search traffic.

We continue to see overall query growth in Search. That includes an increase in total queries coming from Apple’s devices and platforms. More generally, as we enhance Search with new features, people are seeing that Google Search is more useful for more of their queries — and they’re accessing it for new things and in new ways, whether from browsers or the Google app, using their voice or Google Lens. We’re excited to continue this innovation and look forward to sharing more at Google I/O.

It’s not news that Google Search is under threat from AI. However, Cue’s testimony under oath that Google searches in Safari are in decline is the first concrete evidence publicly shared that the threat is not just theoretical, which is a big deal.

Apple’s exploration of AI-based search is not terribly surprising either, but I do hope they cut a broader deal with Anthropic instead of Perplexity. I understand why Perplexity’s product is popular, but its CEO’s contempt for the open web and user privacy is something that I’d rather not see Apple perpetuate through a partnership.


Post-Chat UI

Fascinating analysis by Allen Pike on how, beyond traditional chatbot interactions, the technology behind LLMs can be used in other types of user interfaces and interactions:

While chat is powerful, for most products chatting with the underlying LLM should be more of a debug interface – a fallback mode – and not the primary UX.

So, how is AI making our software more useful, if not via chat? Let’s do a tour.

There are plenty of useful, practical examples in the story showing how natural language understanding and processing can be embedded in different features of modern apps. My favorite example is search, as Pike writes:

Another UI convention being reinvented is the search field.

It used to be that finding your flight details in your email required typing something exact, like “air canada confirmation”, and hoping that’s actually the phrasing in the email you’re thinking of.

Now, you should be able to type “what are the flight details for the offsite?” and find what you want.

Having used Shortwave and its AI-powered search for the past few months, I couldn’t agree more. The moment you get used to searching without exact queries or specific operators, there’s no going back.

Experience this once, and products with an old-school text-match search field feel broken. You should be able to just find “tax receipts from registered charities” in your email app, “the file where the login UI is defined” in your IDE, and “my upcoming vacations” in your calendar.
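To make this concrete, here’s a hedged sketch of what meaning-based retrieval can look like with the llm CLI’s embedding commands – not how Shortwave or any particular product implements it. The embedding model and the ./mail folder of exported messages are assumptions for illustration.

```bash
# Hedged sketch of meaning-based search over exported email bodies.
# The ./mail/*.txt layout and the embedding model are assumptions;
# 3-small is OpenAI's hosted embedding model, and a local embedding
# plugin could be swapped in instead.

# Embed every message into a local collection called "mail".
llm embed-multi mail \
  --files ./mail '*.txt' \
  --model 3-small \
  --store

# Retrieve by meaning: no exact phrases or search operators required.
llm similar mail -c "what are the flight details for the offsite?"
```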

Interestingly, Pike mentions Command-K bars as another interface pattern that can benefit from LLM-infused interactions. I knew that sounded familiar – I covered the topic in mid-November 2022, and I still think it’s a shame that Apple hasn’t natively implemented these anywhere in their apps, especially now that commands can be fuzzier (just consider what Raycast is doing). Funnily enough, that post was published just two weeks before the public debut of ChatGPT on November 30, 2022. That feels like forever ago now.

Permalink

Sundar Pichai Testifies That He Hopes Gemini Will Be Integrated into iPhones This Fall

Ever since Apple announced its deal to integrate ChatGPT into Siri, there have been hints that the company wanted to make deals with other AI providers, too. Alphabet CEO Sundar Pichai has added fuel to the rumors with testimony given today in the remedy phase of the search antitrust case brought against Google by the U.S. Department of Justice.

In response to questions by a DOJ prosecutor, Pichai testified that he hoped Google Gemini would be added to iPhones this year. According to a Bloomberg story co-authored by Mark Gurman, Davey Alba, and Leah Nylen:

Pichai said he held a series of conversations with Apple Chief Executive Officer Tim Cook across 2024 and he hopes to have a deal done by the middle of this year.

This news isn’t surprising, but it is welcome. Despite Google’s early stumbles with Bard, its successor, Gemini, has improved by leaps and bounds in recent months and has the advantage of being integrated with many of Google’s other products that have a huge user base. What will be interesting to see is whether Gemini is integrated as an alternative fallback for Siri requests or whether Apple and Google ink a broader deal that integrates Gemini into other aspects of iOS.

Permalink

Sycophancy in GPT-4o

OpenAI found itself in the middle of another controversy earlier this week, only this time it wasn’t about publishers or regulation, but about its core product – ChatGPT. Specifically, after rolling out an update to the default 4o model with improved personality, users started noticing that ChatGPT was adopting highly sycophantic behavior: it weirdly agreed with users on all kinds of prompts, even about topics that would typically warrant some justified pushback from a digital assistant. (Simon Willison and Ethan Mollick have a good roundup of the examples as well as the change in the system prompt that may have caused this.) OpenAI had to roll back the update and explain what happened on the company’s blog:

We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

We are actively testing new fixes to address the issue. We’re revising how we collect and incorporate feedback to heavily weight long-term user satisfaction and we’re introducing more personalization features, giving users greater control over how ChatGPT behaves.

And:

We also believe users should have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don’t agree with the default behavior.

Today, users can give the model specific instructions to shape its behavior with features like custom instructions. We’re also building new, easier ways for users to do this. For example, users will be able to give real-time feedback to directly influence their interactions and choose from multiple default personalities.

“Easier ways” for users to adjust ChatGPT’s behavior sound to me like a user-friendly toggle or slider to adjust ChatGPT’s personality (Grok has something similar, albeit unhinged), which I think would be a reasonable addition to the product. I’ve long argued that Siri should come with an adjustable personality similar to CARROT Weather, which lets you tweak whether you want the app to be “evil” or “professional” with a slider. I increasingly feel like that sort of option would make a lot of sense for modern LLMs, too.

Permalink

What Siri Isn’t: Perplexity’s Voice Assistant and the Potential of LLMs Integrated with iOS

Perplexity’s voice assistant for iOS.

You’ve probably heard that Perplexity – a company whose web scraping tactics I generally despise, and the only AI bot we still block at MacStories – has rolled out an iOS version of their voice assistant that integrates with several native features of the operating system. Here’s their promo video in case you missed it:

This is a very clever idea: while other major LLMs’ voice modes are limited to having a conversation with the chatbot (with the kind of quality and conversation flow that, frankly, annihilates Siri), Perplexity put a different spin on it: they used native Apple APIs and frameworks to make conversations more actionable (some may even say “agentic”) and integrated with the Apple apps you use every day. I’ve seen a lot of people calling Perplexity’s voice assistant “what Siri should be” or arguing that Apple should consider Perplexity as an acquisition target because of this, and I thought I’d share some additional comments and notes after having played with their voice mode for a while.

Read more