Federico Viticci

10804 posts on MacStories since April 2009

Federico is the founder and Editor-in-Chief of MacStories, where he writes about Apple with a focus on apps, developers, iPad, and iOS productivity. He founded MacStories in April 2009 and has been writing about Apple ever since. Federico is also the co-host of AppStories, a weekly podcast exploring the world of apps; Unwind, a fun exploration of media and more; and NPC: Next Portable Console, a show about portable gaming and the handheld revolution.

I Finally Tested the M5 iPad Pro’s Neural-Accelerated AI, and the Hype Is Real

The M5 iPad Pro.


The best kind of follow-up article isn’t one that clarifies a topic that someone got wrong (although I do love that, especially when that “someone” isn’t me); it’s one that provides more context to a story that was incomplete. My M5 iPad Pro review was an incomplete narrative. As you may recall, I was unable to test Apple’s claimed 3.5× improvement in local AI processing enabled by the new Neural Accelerators built into the M5’s GPU. It’s not that I didn’t believe Apple’s numbers; I simply couldn’t test them myself due to the early state of the software and the timing of my embargo.

Well, I was finally able to test local AI performance with a pre-release version of MLX optimized for M5, and let me tell you: not only is the hype real, but the numbers I got from my extensive tests over the past two weeks actually exceed Apple’s claims.



iPadOS 26.2 Beta Restores Drag and Drop Gestures for Split View and Slide Over

Following the comeback of Slide Over in iPadOS 26.1, Apple is continuing to iterate on iPadOS 26 multitasking by restoring functionality that had been removed from the launch version of iPadOS 26.0 in September. Yesterday, in the third developer beta of iPadOS 26.2, the company brought back drag and drop gestures that put app windows directly into Split View and Slide Over without having to interact with additional menus. To understand how these old gestures work in the context of iPadOS 26, I recommend watching this video by Chris Lawley:

As you can see, the gestures are pretty much the same as in iPadOS 18, but the interaction is slightly different insofar as the “pull indicator” for Slide Over (re-introduced in iPadOS 26.1) now serves two purposes: it signals that you can drop a window to instantly tile it as one half of a Split View, and it also acts as a drop target to send a window into Slide Over right away. The design is clever, if maybe a little too hard to discover…but that’s always been the case with multitasking gestures that aren’t exposed by a menu – which is exactly why Apple now offers plenty of options in iPadOS 26 to discover different multitasking features in different menus.

I’m glad to see Apple quickly iterate on iPadOS 26 by finding ways to blend the old multitasking system with the platform’s new windowing engine. Based on the comments I received after publishing my iPadOS 26 review, enough people were missing the simplicity of Split View and Slide Over that I think Apple’s doing the right thing in making all these multitasking systems coexist with one another.

As I argued on last week’s episode of Connected, and as Myke and Jason also elaborated on this week’s episode of Upgrade, the problem with the iPad Pro now is that we have a great foundation with iPadOS 26 and very few third-party apps that take advantage of it beyond the usual names. I suspected as much months ago, when I explained why, in a world dominated by web apps, the iPad’s next problem was going to be its app ecosystem. The web services I use on a daily basis (Slack, Notion, Claude, Superhuman, Todoist – the list goes on) simply don’t make iPad apps of the same caliber as their desktop/web counterparts. So I find myself using Safari on the iPad to get my work done these days, but, for a variety of reasons and dozens of small papercuts, Safari for iPad simply isn’t as good as Safari on the Mac.

Given that the third-party app ecosystem story for iPad is outside of Apple’s control and that most companies aren’t incentivized to make excellent native iPad apps anymore, now that multitasking has been largely “fixed” in iPadOS 26.2, I hope Apple turns its attention to something it can control: making Safari for iPad truly desktop-class rather than a baby version of Safari for Mac.


Our Latest App and Automation Experiments

This week, Federico and John kick off their holiday app and automation experimentation season a little earlier than usual with a mix of apps, automations, and services.

On AppStories+, Federico and John look ahead, considering the future of Shortcuts and automation.


We deliver AppStories+ to subscribers early every week, with bonus content, ad-free, and at a high bitrate.

To learn more about an AppStories+ subscription, visit our Plans page, or read the AppStories+ FAQ.


AppStories+ Deeper into the world of apps

AppStories Episode 461 - Our Latest App and Automation Experiments

Duration: 38:52





The Great Digital Declutter

This week, Federico and John clean house, deleting old apps, screenshots, half-built shortcuts, huge downloads, and more. A look at the workflows and apps we use to stay organized and clean up our digital messes.

On AppStories+, Federico’s Typing Mind experiments continue, while John shares his experience using Claude Code to build tools for running MacStories.





AppStories Episode 460 - The Great Digital Declutter

Duration: 31:12




Trying to Make Sense of the Rumored, Gemini-Powered Siri Overhaul

Quite the scoop from Mark Gurman yesterday on what Apple is planning for major Siri improvements in 2026:

Apple Inc. is planning to pay about $1 billion a year for an ultrapowerful 1.2 trillion parameter artificial intelligence model developed by Alphabet Inc.’s Google that would help run its long-promised overhaul of the Siri voice assistant, according to people with knowledge of the matter.

There is a lot to unpack here and I have a lot of questions.



Exploring AI Browsers

This week, Federico and John look at the hype surrounding AI browsers to see if there’s any there there.

Then, on AppStories+, Federico explains his experiments with lightning-fast alternative AI models in Typing Mind.






AppStories Episode 459 - Exploring AI Browsers

Duration: 38:23




On MiniMax M2 and LLMs with Interleaved Thinking Steps

MiniMax M2 with interleaved thinking steps and tools in TypingMind.


In addition to Kimi K2 (which I recently wrote about here) and GLM-4.6 (which will become an option on Cerebras in a few days, at which point I’ll play around with it), one of the more interesting open-source LLM releases out of China lately is MiniMax M2. This MoE model (230B total parameters, 10B activated at any given time) claims to reach 90% of the performance of Sonnet 4.5…at 8% of the cost. You can read more about the model here; Simon Willison blogged about it here; you can also test it with MLX on an Apple silicon Mac.

What I find especially interesting about M2 is that it’s one of the first open-weights models to support interleaved thinking steps in between responses and tool calls, a technique that Anthropic pioneered with Claude Sonnet 4 back in May. Here’s Skyler Miao, head of engineering at MiniMax, in a post on X (unfortunately, most of the open-source AI community is only active there):

As we work more closely with partners, we’ve been surprised by how poorly the community supports interleaved thinking, which is crucial for long, complex agentic tasks. Sonnet 4 introduced it 5 months ago, but adoption is still limited.

We think it’s one of the most important features for agentic models: it makes great use of test-time compute.

The model can reason after each tool call, especially when tool outputs are unexpected. That’s often the hardest part of agentic jobs: you can’t predict what the environment returns. With interleaved thinking, the model can reason after getting tool outputs and try to find a better solution.

We’re now working with partners to enable interleaved thinking in M2 — and hopefully across all capable models.

I’ve been using Claude as my main “production” LLM for the past few months and, as I’ve shared before, I consider the fact that both Sonnet and Haiku think between steps an essential aspect of their agentic nature and integration with third-party apps.

That being said, I have been testing MiniMax M2 on TypingMind in addition to Kimi K2 for the past week and it is, indeed, impressive. I plugged MiniMax M2 into TypingMind using their Anthropic-compatible endpoint; out of the box, the model worked with interleaved thinking and the several plugins I’ve built for myself in TypingMind using Claude. I haven’t used M2 for any vibe-coding tasks yet, but for other research or tool-based queries (like adding notes to Notion and tasks to Todoist), M2 effectively felt like a version of Sonnet not made by Anthropic.
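For the curious, here’s a minimal sketch of what an interleaved-thinking transcript looks like in the Anthropic-style message block format (which MiniMax’s Anthropic-compatible endpoint mirrors): the assistant thinks, calls a tool, receives the result, and then thinks again about that result before answering. The tool name, IDs, and content below are illustrative assumptions, not actual API output.

```python
# Sketch of an interleaved-thinking conversation in Anthropic-style
# content blocks. The "thinking" block in the final assistant turn is
# the interleaved step: reasoning that happens AFTER a tool result.
transcript = [
    {"role": "assistant", "content": [
        {"type": "thinking",
         "thinking": "The user wants today's tasks; call the Todoist tool."},
        {"type": "tool_use", "id": "toolu_1", "name": "todoist_list_tasks",
         "input": {"filter": "today"}},
    ]},
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_1",
         "content": "[] (no tasks found)"},
    ]},
    {"role": "assistant", "content": [
        # Interleaved step: the empty result was unexpected, so the model
        # reasons about it before composing its reply.
        {"type": "thinking",
         "thinking": "Empty list is unexpected; answer and suggest checking the filter."},
        {"type": "text", "text": "You have no tasks due today."},
    ]},
]

def interleaved_thinking_steps(messages):
    """Count thinking blocks that occur after at least one tool result,
    i.e. the 'reason about the tool output' steps that define interleaving."""
    seen_tool_result = False
    count = 0
    for msg in messages:
        for block in msg["content"]:
            if block["type"] == "tool_result":
                seen_tool_result = True
            elif block["type"] == "thinking" and seen_tool_result:
                count += 1
    return count

print(interleaved_thinking_steps(transcript))  # → 1
```

Without interleaving, that second `thinking` block simply wouldn’t exist: the model would go straight from tool result to final text, which is exactly the limitation Miao is describing.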

Right now, MiniMax M2 isn’t hosted on any of the fast inference providers; I’ve accessed it via the official MiniMax API endpoint, whose inference speed isn’t that different from Anthropic’s cloud. The possibility of MiniMax M2 on Cerebras or Groq is extremely fascinating, and I hope it’s in the cards for the near future.