This article in the Apple Developer documentation explains how you can record a video that you can later replay as an ARKit session, making it easy to iterate on your ARKit app without having to walk around with your iPhone or iPad every time you want to test something.
Another article about bindings in SwiftUI: Dean walks us through a couple of options for passing one-way bindings to SwiftUI views, from simple parameters and callbacks to proxy providers.
These are useful techniques to have at your disposal, especially when building reusable components.
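To illustrate the callback flavor of this idea (the view and property names below are my own, not taken from Dean's article), a child view can receive its value read-only and report edits upward through a closure, so the parent stays the single owner of the state:

```swift
import SwiftUI

// A child view that receives its value read-only and reports
// changes through a callback - a "one-way binding".
struct VolumeSlider: View {
    let volume: Double              // read-only input
    let onChange: (Double) -> Void  // change events flow up

    var body: some View {
        Slider(
            value: Binding(
                get: { volume },
                set: { onChange($0) }  // forward edits to the parent
            ),
            in: 0...1
        )
    }
}

// The parent owns the state and decides how to apply changes.
struct PlayerView: View {
    @State private var volume = 0.5

    var body: some View {
        VolumeSlider(volume: volume) { newValue in
            volume = min(max(newValue, 0), 1)  // parent stays in control
        }
    }
}
```

Because the child never holds a two-way `@Binding`, it can be reused in contexts where the parent wants to validate or transform changes before applying them.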
Timo has been working on this nifty little app that not only includes a bunch of useful resources for learning SwiftUI (videos, newsletters, books, SDKs, and more), but also has code snippets for a number of common use cases - for example, a context menu with a preview.
(Full disclosure: I was pleased to see that three of my own resources and the Firebase SDK are included in the app.)
In issue 51, I included Sean Allen's video about ContentUnavailableView. Around the same time, I came across Craig Clayton's excellent video about Empty States in iOS 17. Sean and Craig have very different teaching styles, and I learned a lot from watching both videos. Craig's video covers a few aspects that Sean didn't talk about, so it's definitely worth checking out.
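For context, here is a minimal sketch of the API both videos revolve around - iOS 17's ContentUnavailableView, shown here for an empty search result (the view and data names are my own, for illustration):

```swift
import SwiftUI

struct SearchResultsView: View {
    let results: [String]

    var body: some View {
        if results.isEmpty {
            // Built-in empty state view, new in iOS 17
            ContentUnavailableView(
                "No Results",
                systemImage: "magnifyingglass",
                description: Text("Try a different search term.")
            )
        } else {
            List(results, id: \.self) { Text($0) }
        }
    }
}
```

There is also a ready-made `ContentUnavailableView.search` variant for exactly this scenario, which both videos cover in more depth.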
Fun fact: I only learned touch typing a few years ago - previously, I knew quite well where all the keys were located on the keyboard, but I still had to look down at it to actually type. Learning to type without looking at the keyboard has made me a lot more efficient.
In this blog post, the Tower team provides a couple of tips and tricks for becoming even more efficient by learning keyboard shortcuts.
I especially liked the idea of setting up a hyper key - give it a try and let me know what you think!
As with all new form factors, it takes a while to figure out what the real killer use case for a new device like the Vision Pro is. Most apps so far seem to be slightly adapted versions of existing apps, and most of them use the window metaphor we've been using on desktop computers for the past couple of decades.
Truly immersive or mixed reality apps seem to be an exception.
In this post, John LePore proposes a mixed reality app for Formula 1 races. I am not into F1, but this concept looks very compelling to me, and I think it could be applied to a number of similar use cases (other sports events) and beyond - how about a virtual map that allows you to experience movies in a mixed reality setting? It would make movies like Inception quite the ride...
You might think cloud-based IDEs like Project IDX are only useful for backend or web development. In the latest version of Project IDX, you can now run your apps on a native iOS Simulator or an Android Emulator - right in your browser!
This is still an experimental feature, and the team is looking for feedback.
Ronald Mannak created a reverse proxy to keep your OpenAI API keys safe. This allows you to ship your app without including your API key in your app's binary. Instead, the API key is set in the environment of the reverse proxy's runtime on the server. Given that it's a lot harder to compromise a server environment, this approach provides a lot more protection against your API key(s) being stolen by malicious actors.
In addition, this package has a couple of other nifty features, such as the ability to verify App Store subscriptions.
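To make the reverse proxy idea concrete, here is a rough sketch of the client side (the URL, endpoint, and JSON shape below are placeholders I made up, not part of Ronald Mannak's package): the app talks to your own server, which injects the real OpenAI key from its environment before forwarding the call.

```swift
import Foundation

// The request body the app sends to the proxy.
// NOTE: this shape is a placeholder for illustration.
struct ChatRequest: Codable {
    let model: String
    let prompt: String
}

func sendChatRequest(_ prompt: String) async throws -> Data {
    // Hypothetical proxy endpoint - your server, not OpenAI's.
    var request = URLRequest(url: URL(string: "https://proxy.example.com/v1/chat")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    // Deliberately no Authorization header here: the proxy adds the
    // real OpenAI API key on the server before forwarding the call,
    // so the key never ships inside the app binary.
    request.httpBody = try JSONEncoder().encode(
        ChatRequest(model: "gpt-4", prompt: prompt)
    )
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}
```

Even if someone decompiles the app, all they find is the proxy's URL; the secret itself lives only in the server environment.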