Summarizing Scores with Foundation Models, #Playground, and Xcode AI
Apple's AI offerings for developers are knocking it out of the park, both on and off device
After WWDC25, I kept building on my mocked-out MLB Tracker app — this time exploring Apple’s new AI tools: Foundation Models and Xcode 26’s built-in assistant. I also took the new #Playground macro for a spin. This post is part dev log, part proof of concept, and full of baseball energy.
To follow along, check out the latest code on GitHub.
Foundation Models is a Game Changer
Since the app already has two main screens (one for game scores and another for team standings) complete with mock data, the simplest generative AI feature I could think of was a summarizer. Yes, both screens are well served visually by Lists. But what if I were driving or out for a run and preferred hearing an up-to-date summary of where things stood in the MLB?
Starting with game scores, I created a function that prints the game data neatly to a String, then wrote my first Playground to see the result:
#Playground {
    print(printBaseballGamesDetails(generateMockGames()))
}
Tremendously impressed already, but more on that later...
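For context, the formatting function is just plain string building. Here's a minimal sketch of the idea (the BaseballGame shape and property names below are my assumptions; the real model and formatter live in the repo):

// Hypothetical shape of the model; the real BaseballGame lives in the repo.
struct BaseballGame {
    let homeTeam: String
    let awayTeam: String
    let homeScore: Int
    let awayScore: Int
    let inningDescription: String   // e.g. "Bottom of the 8th"
}

// A rough sketch of the formatter, not the repo's exact implementation:
// turn each game into a readable line and join them up for the prompt.
func printBaseballGamesDetails(_ games: [BaseballGame]) -> String {
    games.map { game in
        "\(game.awayTeam) at \(game.homeTeam): \(game.awayScore)-\(game.homeScore), \(game.inningDescription)"
    }
    .joined(separator: "\n")
}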
Then, I created a new class called ScoresSummarizer. In there, I added a static base prompt that would make the request with detailed instructions for what I wanted. To help me write the prompt, I got real meta with it and asked ChatGPT in my browser to make me a perfect prompt (see the full prompt on GitHub).
Then I added a static function that would combine the printed data with the prompt, generate the model session, trigger the combined prompt, and return the response:
static func summarizeGameScores(_ games: [BaseballGame]) async throws -> String {
    let gameData = await printBaseballGamesDetails(games)
    let combinedPrompt = basePrompt + gameData
    let session = LanguageModelSession()
    let response = try await session.respond(to: combinedPrompt).content
    return response
}
To test it out, I created another Playground closure and checked the results. Already I can foresee my future code bases riddled with Playgrounds:
#Playground {
    print(try! await ScoresSummarizer.summarizeGameScores(generateMockGames()))
}
While not instantaneous (I run a Mac mini M4 with 16 GB of RAM. Come at me, bro), I was pleasantly surprised by the response generated:
Meanwhile:
Bottom of the 8th in Atlanta — Braves trailing 8-3 against the Diamondbacks!
Over in:
Bottom of the 8th in Cincinnati — Reds clinging to a 4-3 lead over the Blue Jays!
Also tonight:
Top of the 7th in Oakland — Athletics take an early 9-7 lead over the Cubs!
Meanwhile:
Bottom of the 9th in Milwaukee — Guardians edge out Brewers 1-0!
Also tonight:
Bottom of the 9th in San Francisco — Giants bounce back with an 8-1 victory over the Dodgers!
...
Ground-breaking sports journalism? No. But it's taken my data and turned it into something more readable and, in particular, easy to listen to. And all this without using Generable, Guide, or Tool. Just a basic prompt and response.
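If I want more structure later, guided generation doesn't look like a big leap. Here's a rough sketch based on the FoundationModels API shown at WWDC25 (the GameSummary type is hypothetical, not something in the app):

import FoundationModels

// Hypothetical structured output type; not part of the app (yet).
@Generable
struct GameSummary {
    @Guide(description: "A one-sentence, radio-style recap of a single game")
    var recap: String
}

func structuredSummary(for gameData: String) async throws -> GameSummary {
    let session = LanguageModelSession()
    // Ask the model to fill in the Generable type instead of returning free text.
    let response = try await session.respond(
        to: "Summarize this game for a radio listener:\n" + gameData,
        generating: GameSummary.self
    )
    return response.content
}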
Displaying the Results
To show off the summary in the app, I first added a task to my Scores view that triggers generating the summary. I also added a new @State property to collect the response and another to track whether the summary has been generated.
@State private var summaryReady = false
@State private var generatedSummary: String?
...
.task {
    do {
        generatedSummary = try await ScoresSummarizer.summarizeGameScores(generateMockGames())
        summaryReady = true
    } catch {
        // Handle error if needed
    }
}
Once the summary is generated, a button with a baseball SF Symbol appears in the top toolbar (Liquid Glass style, of course). Tapping it presents a sheet displaying the response:
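The wiring behind that is pretty standard SwiftUI. Roughly (my own sketch, assuming a showingSummary @State flag; the exact layout lives in the repo):

// A sketch of the toolbar button + sheet, not the repo's exact code.
// Assumes: @State private var showingSummary = false
.toolbar {
    if summaryReady {
        ToolbarItem(placement: .topBarTrailing) {
            Button {
                showingSummary = true
            } label: {
                Image(systemName: "baseball")
            }
        }
    }
}
.sheet(isPresented: $showingSummary) {
    ScrollView {
        Text(generatedSummary ?? "")
            .padding()
    }
}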
Speaking the Results
In the GIF above, you'll see I also added a "Read Summary Aloud" button. When tapped, it reads the summary aloud using AVSpeechUtterance and AVSpeechSynthesizer:
private func readSummary() {
    guard let summary = generatedSummary else { return }
    let utterance = AVSpeechUtterance(string: summary)
    utterance.rate = 0.57
    utterance.pitchMultiplier = 0.8
    utterance.postUtteranceDelay = 0.2
    utterance.volume = 0.8
    let voice = AVSpeechSynthesisVoice(language: "en-GB")
    utterance.voice = voice
    // Note: keep the synthesizer somewhere long-lived (e.g. a @State property);
    // if it's deallocated mid-utterance, speech gets cut off.
    let synthesizer = AVSpeechSynthesizer()
    synthesizer.speak(utterance)
}
This was a small new feature, but it highlights how easily we can generate content that powers other features in iOS. Combining summaries and speech can really change the "hands-free" game!
Cleaning Things Up with Xcode AI
I took all the code added for the Scores summarizer (model code and UI additions) and copied it over for the Standings feature. The only difference (besides changing some names) was generating a different prompt for the Standings summary. Beyond that, the structure of the Game Scores and Standings screens is almost identical.
That led me to an idea: I asked Xcode AI to look at the similarities between the two screens and create a new base Template View for reuse. It responded with detailed steps on how it would approach the task, then generated the Template View and refactored the feature views to use it.
I hit run and, predictably (though still amazingly), the app was functionally the same. It was really satisfying not only to watch my POC code undergo some healthy gut rearranging, but to have it work on the first go.
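I won't reproduce the generated code here, but the shape of the idea is something like this (a hypothetical sketch of mine, not what Xcode actually wrote):

import SwiftUI

// Hypothetical template: a list of rows plus an AI summary sheet,
// parameterized by the row type and a summarize function.
// The generated version in the repo differs.
struct SummaryListTemplateView<Item: Identifiable, Row: View>: View {
    let title: String
    let items: [Item]
    let summarize: ([Item]) async throws -> String
    @ViewBuilder let row: (Item) -> Row

    @State private var generatedSummary: String?
    @State private var showingSummary = false

    var body: some View {
        List(items) { item in
            row(item)
        }
        .navigationTitle(title)
        .task {
            generatedSummary = try? await summarize(items)
        }
        .toolbar {
            if generatedSummary != nil {
                Button("Summary", systemImage: "baseball") {
                    showingSummary = true
                }
            }
        }
        .sheet(isPresented: $showingSummary) {
            Text(generatedSummary ?? "")
                .padding()
        }
    }
}

Each screen then just supplies its title, data, row view, and summarizer function.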
Stealing Bases with Xcode AI
While impressed, I wasn't done experimenting with AI just yet. During the opening sessions, Apple showed off drawing a UI by hand, uploading it to the chat, and having Xcode spit out impressively matching SwiftUI.
This felt like the big one to test out. And so, I imagined that, when a user taps on the live score in the tab view bottom accessory, a sheet would appear with a more detailed overview of the game's current status. Breaking out the ancient tools known as paper and pen, I drew a fantastic rendering of my vision:
I snapped a pic, AirDropped it to my machine, and dropped it into the chat with the prompt "when a user taps the tabview bottom accessory, a sheet should appear with the current games details displayed using the attached drawn UI. Create this view and attach it to the tabview bottom accessory". Within seconds, it delivered something that, in all honesty, beat my expectations:
It's far from perfect, for sure. But for a translation of a drawing I made in under 30 seconds (see the squiggles in the game summary section), I was particularly impressed that:
It added relevant mock text to the summary section instead of the squiggles
It nailed the three dots for the outs
It suggested and implemented medium and large detents
It built a baseball diamond (albeit 45 degrees off)
With some small UI fixes (diamond angle and padding on top in medium detent), I have a UI that I can begin to hook up to data in no time (probably seconds with just another prompt).
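The bottom-accessory-to-sheet wiring with those detents is compact in SwiftUI. Roughly (my own sketch with hypothetical view names, not the code Xcode generated):

// My own sketch of the tap-to-sheet wiring, not the generated code.
// Assumes: @State private var showingDetails = false
TabView {
    // ... tabs ...
}
.tabViewBottomAccessory {
    LiveScoreView()   // hypothetical accessory content showing the live score
        .onTapGesture { showingDetails = true }
}
.sheet(isPresented: $showingDetails) {
    GameDetailsView()   // hypothetical detail view built from the drawing
        .presentationDetents([.medium, .large])
}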
AI for MVP?
This time around I got to mess with AI features on-device and off. For all the speculation around AI, Apple Intelligence, and WWDC, at the very least developers won big this year.
On-device
Foundation Models are gonna help Apple developers usher in a new era of AI-powered apps (mind you, I did a basic prompt without even getting into Generable, Guide, or Tool). Yes, we've already seen quite a few AI-powered apps, for sure. But on-device generation brings with it privacy, security, immediacy, seemingly unlimited uses, and network-free access.
Sure, its capabilities are limited compared to server-driven solutions. But we're still at the dawn of this new era, one where individuals and organizations are becoming more aware of, and concerned about, losing privacy and control over their information. As on-device models get better alongside ever-evolving supporting hardware, I guarantee their value will scale exponentially.
Off-device
As for Xcode's AI features, in spite of the concerns I just mentioned about server-driven solutions, they're really fun and a huge game changer for Apple developers. Having worked with other solutions like Cursor, Claude, and Copilot (just realized they all start with C's), I'm positive there are pros and cons to each.
Apple has the advantage of providing a baked-in solution that can integrate with its IDE in ways no other tool could. With what I saw during the sessions and experienced on my machine (so far), I believe they proved that point really well on the first shot. That makes me hopeful that we could see some really incredible stuff over the next few WWDCs. Just imagine bringing that power over to Instruments and supercharging app optimization and performance.
Seventh-inning Stretch
As for my exploration into WWDC goodies, this has only been a taste so far. I've been digging deep into Liquid Glass, understanding more about what's changed, and letting it sink in that Apple means it when they say it's a new design system.
As the summer continues, I'll be sharing more expansions to this little MLB Tracker app as well as focused articles on specific topics. So stand up, stretch those legs, grab some cracker jacks, and stay tuned!