Posts in Linked

Claude’s Chat History and App Integrations as a Form of Lock-In

Earlier today, Anthropic announced that, similar to ChatGPT, Claude will be able to search and reference your previous chats with it. From their support document:

You can now prompt Claude to search through your previous conversations to find and reference relevant information in new chats. This feature helps you continue discussions seamlessly and retrieve context from past interactions without re-explaining everything.

If you’re wondering what Claude can actually search:

You can prompt Claude to search conversations within these boundaries:

  • All chats outside of projects.
  • Individual project conversations (searches are limited to within each specific project).

Conversation history is a powerful feature of modern LLMs, and although Anthropic hasn’t announced personalized context based on memory yet (a feature that not everybody likes), it seems like that’s the next shoe to drop. Chat search, memory with personalized context, larger context windows, and performance are the four key aspects I preferred in ChatGPT; Anthropic just addressed one of them, and a second may be launching soon.

As I’ve shared on Mastodon, despite the power and speed of GPT-5, I find myself gravitating more and more toward Claude (and specifically Opus 4.1) because of MCP and connectors. Claude works with the apps I already use and allows me to easily turn conversations into actions performed in Notion, Todoist, Spotify, or other apps with APIs that can talk to Claude. This is changing my workflow in two notable ways: I now use ChatGPT only for “regular” web search queries (mostly via the Safari extension) and less for work, since it doesn’t match Claude’s extensive MCP and tool support; and I’m prioritizing web apps that have well-supported web APIs that work with LLMs over local apps that don’t (Spotify vs. Apple Music, Todoist vs. Reminders, Notion vs. Notes, etc.). Chat search (and, again, I hope, personalized context based on memory soon) further adds to this change in the apps I use.

Let me offer an example. I like combining Claude’s web search abilities with Zapier tools that integrate with Spotify to make Claude create playlists for me based on album reviews or music roundups. A few weeks ago, I started the process of converting this Chorus article into a playlist, but I never finished the task since I was running into Zapier rate limits. This evening, I asked Claude if we had ever worked on any playlists; it found the old chats and pointed out that one of them still needed to be completed. From there, it got to work again, picked up where it left off in Chorus’ article, and finished filling the playlist with the most popular songs that best represent the albums picked by Jason Tate and team. So not only could Claude find the chat, but it also got back to work with tools based on the state of the old conversation.

Resuming a chat that was about creating a Spotify playlist (right). Sadly, Apple Music doesn’t integrate with LLMs like this.

Even more impressively, after Claude finished the playlist from the old chat, I asked it to take all the playlists created so far and append their links to my daily note in Notion; that also worked. All of this happened from my phone, in a conversation that started as a search test for old chats and grew into an agentic workflow that called tools for web search, Spotify, and Notion.

I find these use cases very interesting, and they’re the reason I struggle to incorporate ChatGPT into my everyday workflow beyond web searches. They’re also why I hesitate to use Apple apps right now, and I’m not sure Liquid Glass will be enough to win me back over.

Permalink

Building Tools with GPT-5

Yesterday, Parker Ortolani wrote about several vibe coding projects he’s been working on and his experience with GPT-5:

The good news is that GPT-5 is simply amazing. Not only does it design beautiful user interfaces on its own without even needing guidance, it has also been infinitely more reliable. I couldn’t even count the number of times I have needed to work with the older models to troubleshoot errors that they created themselves. Thus far, GPT-5 has not caused a single build error in Xcode.

I’ve had a similar initial experience. Leading up to the release of GPT-5, I used Claude Opus 4 and 4.1 to create a Python script that queries the Amazon Product Advertising API to check whether there are any good deals on a long list of products. I got it working, but it typically returned a list of 200-300 deals sorted by discount percentage.

Though those results were fine, a percentage discount only roughly correlates with whether something is a good deal. What I wanted was to rank the deals by assigning different weights to several factors and coming up with a composite score for each. Having reached my token limits with Claude, I went to OpenAI’s o3 for help, and it failed, scrambling my script. A couple of days later, GPT-5 launched, so I gave that a try, and it got the script right on the very first try. Now, my script spits out a spreadsheet sorted by rank, making spotting the best deals a little easier than before.
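The weighted-scoring idea can be sketched in a few lines of Python. The factor names, weights, and sample deals below are illustrative, not the actual ones from my script:

```python
# Rank deals by a weighted composite score instead of raw discount percentage.
# Factors and weights are hypothetical; a real script would tune them over time.

def composite_score(deal, weights):
    """Combine several normalized factors (0.0-1.0) into one score."""
    return sum(weights[factor] * deal[factor] for factor in weights)

deals = [
    {"name": "Headphones", "discount_pct": 0.40, "rating": 0.90, "price_drop_vs_avg": 0.10},
    {"name": "SSD",        "discount_pct": 0.25, "rating": 0.95, "price_drop_vs_avg": 0.30},
    {"name": "Cable",      "discount_pct": 0.50, "rating": 0.60, "price_drop_vs_avg": 0.05},
]

# A steep discount matters less than a strong rating and a real price drop.
weights = {"discount_pct": 0.3, "rating": 0.4, "price_drop_vs_avg": 0.3}

ranked = sorted(deals, key=lambda d: composite_score(d, weights), reverse=True)
for d in ranked:
    print(f"{d['name']}: {composite_score(d, weights):.3f}")
```

Note how the cable’s 50% discount no longer wins: its weak rating and shallow drop versus the average price pull its composite score below the other two.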

In the days since, I’ve used GPT-5 to set up a synced Python environment across two Macs and begun the process of creating a series of Zapier automations to simplify other administrative tasks. These tasks are all very specific to MacStories and the work I do, so I’ve stuck with scripting them instead of building standalone apps. However, it’s great to hear about Ortolani’s experiences with creating interfaces for native and web apps. It opens up the possibility of creating tools for the rest of the MacStories team that would be easier to install and maintain than scripts that require walking people through what I’ve done in Terminal.

This statement from Ortolani also resonated with me:

As much as I can understand what code is when I’m looking at it, I just can’t write it. Vibe coding has opened up a whole new world for me. I’ve spent more than a decade designing static concepts, but now I can make those concepts actually work. It changes everything for someone like me.

I can’t decide whether this is like being able to read a foreign language without knowing how to speak it or the other way around, but I completely understand where Ortolani is coming from. It’s helped me a lot to have a basic understanding of how code works, how apps are built, and – as Ortolani mentions – how to write a good prompt for the LLM you’re using.

What’s remarkable to me is that those few ingredients combined with GPT-5 have gone such a long way to eliminate the upfront time I need to get projects like these off the ground. Instead of spending days on research without knowing whether I could accomplish what I set out to do, I’ve been able to just get started and, like Ortolani, iterate quickly, wasting little time if I reach a dead end and, best of all, shortening the time until I have a result that makes my life a little easier.

Federico and I have said many times that LLMs are another form of automation, and automation is just another form of coding. GPT-5 and Claude Opus 4.1 are rapidly blurring the lines between the two, making automation and coding more accessible than ever.

Permalink

Reuters Reports that Apple’s New EU Developer Terms May Avoid Further Penalties

Reuters reports that Apple is on the brink of satisfying EU regulators with the changes the company has made to its developer program in the EU:

Apple’s changes to its App Store rules and fees will likely secure the green light from EU antitrust regulators, people with direct knowledge of the matter said, a move that would stave off potentially hefty daily fines for the iPhone maker.

Reuters estimates that those fines, which would be on top of the 500 million euro fine already levied against Apple, could be as much as 50 million euros per day.

No deal is finished until it’s formally announced, but if Reuters’ sources are correct, we should see an announcement from the European Commission in the coming weeks.

Permalink

New Emoji Announced for World Emoji Day

Source: Unicode Consortium.

Every year, the Unicode Consortium announces new emoji that will be added in the fall and incorporated into iOS and other OSes in the months that follow. The latest batch, announced today to coincide with World Emoji Day, will be part of Unicode 17 and includes:

  • Trombone
  • Treasure Chest
  • Distorted Face
  • Apple Core
  • Fight Cloud
  • Ballet Dancers 
  • Hairy Creature 
  • Orca

As usual, it’s an eclectic mix that rounds out certain categories and includes other emoji that are just plain fun. I look forward to Federico trying to guess these on Connected. There’s an almost one-to-one overlap between the ones I know I’ll use the most and those that I think Federico will never guess.

Permalink

Ars Technica Takes CarPlay Ultra for a Spin

Michael Teo Van Runkle, writing for Ars Technica, spent eight days testing CarPlay Ultra in an Aston Martin DB12 Volante. Van Runkle walks readers through the setup process, covers the themes available, and describes the experience of monitoring and controlling the car’s systems using Apple’s next-generation version of CarPlay.

By and large, Van Runkle’s experience was positive:

Ultra’s biggest improvements over preceding CarPlay generations are in the center console infotainment integration. Being able to access climate controls, drive modes, and traction settings without leaving the intuitive suite of CarPlay makes life much easier. In fact, changing between drive modes and turning traction control off or down via Aston’s nifty adjustable system caused less latency and lagging in the displays in Ultra. And for climate, Ultra actually brings up a much better screen after spinning the physical rotaries on the center console than you get through Aston’s UI—plus, I found a way to make the ventilated seats blow stronger, which I never located through the innate UI despite purposefully searching for a similar menu page.

That said, it was not without glitches and hiccups along the way, some of which were difficult to pin on CarPlay Ultra versus Aston Martin’s systems.

Precious few automakers have signed on to offer CarPlay Ultra, but Kia and Porsche have said they will, too, which is a start. I remember when CarPlay debuted in 2014 with a similarly small lineup composed mostly of luxury brands like Ferrari and Mercedes-Benz. So it’s not surprising that Ultra is debuting in a car that starts at $265,000. It took years before the original CarPlay trickled down to ordinary, everyday cars. But it did, and now, with a few notable exceptions like Tesla, Rivian, and GM EVs, you can find CarPlay in most makes and models.

I hope CarPlay Ultra follows a similar trajectory. It looks great, and I’d love to have it in my next car, which I can confidently predict now will not be an Aston Martin.

Permalink

CD PROJEKT RED Publishes Mac System Requirements for Cyberpunk 2077

Yesterday, I wrote about the upcoming release of Cyberpunk 2077: Ultimate Edition on the Mac. Today, CD PROJEKT RED published a support document listing the game’s Mac system requirements. As I wrote yesterday, the company says the game will work on all Apple silicon Macs; however, the beefier your CPU and memory, the better.

As reported by Tom Warren at The Verge today, the support document summarizes the game’s system requirements in four categories: Minimum, Recommended, High Fidelity, and Very High Fidelity. It’s worth checking out the support document and Warren’s coverage before buying Cyberpunk 2077, which still hasn’t shown up on the Mac App Store for pre-order, because if you want the Very High Fidelity experience, you’ll need an M3 Ultra or M4 Max with at least 36 GB of memory.

Permalink

The Search for Nintendo’s Elusive iMac G3-Inspired Game Boy Colors

Retro Dodo, linking to the website Console Variations, has a story about the time Nintendo produced variants of the Game Boy Color that matched the iMac G3. All that seemingly remains of these color-matched Game Boys is the low-resolution image above from a 1999 issue of 64 Dream Magazine; Retro Dodo’s Sebastian Santabarbara went looking for the handhelds and was unable to find them anywhere online.

Nintendo wasn’t alone in copying the vibrant translucency of the iMac G3. In the late ’90s, it seemed like every consumer product maker did something similar. Most of those products have been lost to time and forgotten, but Nintendo fans are an intrepid bunch. I wouldn’t be surprised if these iMac-themed Game Boy Colors turn up in an auction online eventually.

Permalink

The Curious Case of Apple and Perplexity

Good post by Parker Ortolani, analyzing the pros and cons of a potential Perplexity acquisition by Apple:

According to Mark Gurman, Apple executives are in the early stages of mulling an acquisition of Perplexity. My initial reaction was “that wouldn’t work.” But I’ve taken some time to think through what it could look like if it were to come to fruition.

He gets to the core of the issue with this acquisition:

At the end of the day, Apple needs a technology company, not another product company. Perplexity is really good at, for lack of a better word, forking models. But their true speciality is in making great products, they’re amazing at packaging this technology. The reality is though, that Apple already knows how to do that. Of course, only if they can get out of their own way. That very issue is why I’m unsure the two companies would fit together. A company like Anthropic, a foundational AI lab that develops models from scratch is what Apple could stand to benefit from. That’s something that doesn’t just put them on more equal footing with Google, it’s something that also puts them on equal footing with OpenAI which is arguably the real threat.

While I’m not the biggest fan of Perplexity’s web scraping policies and its CEO’s remarks, it’s undeniable that the company has built a series of good consumer products; it’s fast at integrating the latest models from major AI vendors, and it has even dipped its toes in the custom model waters (with Sonar, an in-house model based on Llama). At first sight, I would agree with Ortolani and say that Apple would need Perplexity’s search engine and LLM integration talent more than the Perplexity app itself. So far, Apple has only integrated ChatGPT into its operating systems; Perplexity supports all the major LLMs currently in existence. If Apple wants to make the best computers for AI rather than being a bleeding-edge AI provider itself…well, that’s pretty much aligned with Perplexity’s software-focused goals.

However, I wonder if Perplexity’s work on its iOS voice assistant may have also played a role in these rumors. As I wrote a few months ago, Perplexity shipped a solid demo of what a deep LLM integration with core iOS services and frameworks could look like. What could Perplexity’s tech do when integrated with Siri, Spotlight, Safari, Music, or even third-party app entities in Shortcuts?

Or, look at it this way: if you’re Apple, would you spend $14 billion to buy an app and rebrand it as “Siri That Works” next year?

Permalink

Initial Notes on iPadOS 26’s Local Capture Mode

Now this is what I call follow-up: six years after I linked to Jason Snell’s first experiments with podcasting on the iPad Pro (which later became part of a chapter of my Beyond the Tablet story from 2019), I get to link to Snell’s first impressions of iPadOS 26’s brand new local capture mode, which lets iPad users record their own audio and video during a call.

First, some context:

To ensure that the very best audio and video is used in the final product, we tend to use a technique called a “multi-ender.” In addition to the lower-quality call that’s going on, we all record ourselves on our local device at full quality, and upload those files when we’re done. The result is a final product that isn’t plagued by the dropouts and other quirks of the call itself. I’ve had podcasts where one of my panelists was connected to us via a plain old phone line—but they recorded themselves locally and the finished product sounded completely pristine.

This is how I’ve been recording podcasts since 2013. We used to be on a call on Skype and record audio with QuickTime; now we use Zoom, Audio Hijack, and OBS for video, but the concept is the same. Here’s Snell on how the new iPadOS feature, which lives in Control Center, works:

The file it saves is marked as an mp4 file, but it’s really a container featuring two separate content streams: full-quality video saved in HEVC (H.265) format, and lossless audio in the FLAC compression format. Regardless, I haven’t run into a single format conversion issue. My audio-sync automations on my Mac accept the file just fine, and Ferrite had no problem importing it, either. (The only quirk was that it captured audio at a 48KHz sample rate and I generally work at 24-bit, 44.1KHz. I have no idea if that’s because of my microphone or because of the iPad, but it doesn’t really matter since converting sample rates and dithering bit depths is easy.)

I tested this today with a FaceTime call. Everything worked as advertised, and the call’s MP4 file was successfully saved in my Downloads folder in iCloud Drive (I wish there were a way to change this). I was initially confused by the fact that recording automatically begins as soon as a call starts: if you press the Local Capture button in Control Center before getting on a call, as soon as it connects, you’ll be recording. It’s kind of an odd choice to make this feature just a…Control Center toggle, but I’ll take it! My MixPre-3 II audio interface and microphone worked right away, and I think there’s a very good chance I’ll be able to record AppStories and my other shows from my iPad Pro – with no more workarounds – this summer.
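The 48 kHz → 44.1 kHz conversion Snell mentions is normally handled by an audio editor or a tool like ffmpeg, but the underlying idea is just resampling the waveform onto a new time grid. Here’s a toy linear-interpolation resampler in Python to illustrate the concept (real converters also apply low-pass filtering to avoid aliasing, which this sketch skips):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive sample-rate converter using linear interpolation.
    Production tools filter the signal first to prevent aliasing."""
    if src_rate == dst_rate:
        return list(samples)
    ratio = src_rate / dst_rate
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * ratio                # position in the source signal
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

# One second of 48 kHz audio becomes 44,100 samples at 44.1 kHz.
one_second = [0.0] * 48000
converted = resample_linear(one_second, 48000, 44100)
print(len(converted))  # 44100
```

The point is only that the operation is mechanical and lossless enough for speech that, as Snell notes, the mismatch between the iPad’s capture rate and a 44.1 kHz project doesn’t really matter.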

Permalink