Tutorial: Build an iOS 16 Lock Screen-Inspired AR Experience with RealityKit and Swift

Use ARKit’s Person Segmentation to Place AR Content Behind a User

Cole Dennis
5 min read · Oct 30, 2022

Today we will be using Person Segmentation to make an iOS 16 lock screen-inspired AR experience!

A gif showing the end result of the project, with text that says “9:14” showing behind the user’s head.

We will be starting with the basic Xcode Augmented Reality template using RealityKit and SwiftUI:

Xcode new project screen with Augmented Reality App highlighted.
Xcode new project setup for an Augmented Reality App, with Swift, SwiftUI, and RealityKit selected.

There are 4 steps we will be following in our project to achieve this effect:

  • Adding Person Segmentation to our ARView Session
  • Accessing the Current Time
  • Generating the 3D Text Mesh
  • Adding the 3D Text Mesh to our Project

Adding Person Segmentation to our ARView Session

First we will add Person Segmentation to our ARView, which lets the experience use any people it detects as a mask over AR content. Apple’s documentation on this can be found here: LINK.

Documentation image from Apple showing AR content being segmented behind a user’s body.
Reference from Apple Documentation

To add this to our project, we will use ARFaceTrackingConfiguration as the configuration for our ARView session and insert the .personSegmentation frame semantics into that configuration. Import ARKit at the top of the file and update the makeUIView() function in the ARViewContainer struct as below:

import ARKit

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    // Run a face tracking session with person segmentation enabled.
    var newConfig = ARFaceTrackingConfiguration()
    newConfig.frameSemantics.insert(.personSegmentation)
    arView.session.run(newConfig)

    return arView
}

For this tutorial, we’re using the face tracking configuration, but if your experience uses the rear camera, you can use ARWorldTrackingConfiguration instead.
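As a minimal sketch of that rear-camera variant (assuming the same arView setup as above), it might look like this; person segmentation isn’t supported on every device, so it’s worth checking supportsFrameSemantics first:

// Hypothetical rear-camera variant of the session setup above.
let worldConfig = ARWorldTrackingConfiguration()

// Person segmentation is only available on newer devices, so check support first.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
    worldConfig.frameSemantics.insert(.personSegmentation)
}
arView.session.run(worldConfig)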

While we’re here, we’ll also delete the boxAnchor template code and the Experience file that come with the Augmented Reality App template:

// Remove the below template code from the project.
// let boxAnchor = try! Experience.loadBox()
// arView.scene.anchors.append(boxAnchor)

Accessing the Current Time

To mimic the lock screen, we will read the current time on the user’s device and return it as a string, using DateFormatter. Add this function to ARViewContainer:

func getTime() -> String {
    let formatter = DateFormatter()
    formatter.timeStyle = .short
    var dateString = formatter.string(from: Date())
    // Trim the trailing " AM"/" PM" (when present) to match the lock screen.
    if dateString.hasSuffix("AM") || dateString.hasSuffix("PM") {
        dateString.removeLast(3)
    }
    return dateString
}

This returns the current time as a string and, on devices using 12-hour time, strips the trailing AM/PM characters. Removing them is purely aesthetic, to better match the lock screen.
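If you’d rather skip the string trimming entirely, one alternative (a sketch, not what we use here) is to ask DateFormatter for hours and minutes only:

// Hypothetical alternative: format hours and minutes directly,
// so there is never an AM/PM suffix to trim.
// Note: a fixed dateFormat ignores the user's 12/24-hour preference.
func getTimeNoSuffix() -> String {
    let formatter = DateFormatter()
    formatter.dateFormat = "h:mm"
    return formatter.string(from: Date())
}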

Generating the 3D Text Mesh

We will use RealityKit’s .generateText() function to generate a 3D text mesh from the time string returned by getTime(). For readability, I’ve pulled each argument of this function out into its own let constant, to make clear what goes into it:

  • extrusionDepth: a Float that sets how thick/deep the generated 3D text should be. For our use we keep it thin, since the text should appear nearly 2D for the effect we want to achieve.
  • font: the UIFont to use for the text. I’ve used the system font and tried to match one of the text styles available on the iOS 16 lock screen.
  • containerFrame: a CGRect defining the invisible bounding box the 3D text mesh will occupy in 3D space. You’ll notice in my example below that the x origin is negative half the width of the box, which centers the text on screen (otherwise it would start at the center and extend to the right).
  • alignment: a CTTextAlignment that controls the text alignment. This doesn’t really matter for our use, as we only have a single line of text.
  • lineBreakMode: a CTLineBreakMode that controls when the 3D text mesh should break to a new line. This also doesn’t apply here, since the time string is too short to cause a line break.

The .generateText() function returns a MeshResource, which we still need to wrap in a ModelEntity to use in our RealityKit experience. For the ModelEntity’s material, I’ve created a materialVar that is unlit (so there are no reflections) and black, with some transparency added for visual effect, but you can use whatever color material you want!

Finally, you will need to adjust the transform of the 3D text in order for it to appear correctly placed on screen. I’ve adjusted the transform to be almost a meter back and a tenth of a meter up, but play around with the transform and font size to get to a sizing that looks right for you on your device!

Add the below function to ARViewContainer:

func textGen(textString: String) -> ModelEntity {
var materialVar = UnlitMaterial(color: .black)materialVar.blending = .transparent(opacity: 0.8)let depthVar: Float = 0.005let fontVar = UIFont.systemFont(ofSize: 0.15, weight: .bold, width: .standard)let containerFrameVar = CGRect(x: -0.4, y: 0.0, width: 0.8, height: 0.4)let alignmentVar: CTTextAlignment = .centerlet lineBreakModeVar : CTLineBreakMode = .byCharWrappinglet textMeshResource : MeshResource = .generateText(textString,extrusionDepth: depthVar,font: fontVar,containerFrame: containerFrameVar,alignment: alignmentVar,lineBreakMode: lineBreakModeVar)let textEntity = ModelEntity(mesh: textMeshResource, materials: [materialVar])textEntity.transform.translation = [0, 0.1,-0.9]return textEntity}

Adding the 3D Text Mesh to our Project

The final step is to actually anchor the 3D text to our camera. We will create a new AnchorEntity anchored to the camera and add our 3D text as a child. Update makeUIView() as follows:

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    var newConfig = ARFaceTrackingConfiguration()
    newConfig.frameSemantics.insert(.personSegmentation)
    arView.session.run(newConfig)

    // Anchor the 3D time text to the camera so it stays fixed on screen.
    let textAnchor = AnchorEntity(.camera)
    textAnchor.addChild(textGen(textString: getTime()))
    arView.scene.addAnchor(textAnchor)

    return arView
}

Altogether, the updated ARViewContainer should look like this:
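Pieced together from the snippets above (with the empty updateUIView stub from the Xcode template):

import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // Run a face tracking session with person segmentation enabled.
        var newConfig = ARFaceTrackingConfiguration()
        newConfig.frameSemantics.insert(.personSegmentation)
        arView.session.run(newConfig)

        // Anchor the 3D time text to the camera.
        let textAnchor = AnchorEntity(.camera)
        textAnchor.addChild(textGen(textString: getTime()))
        arView.scene.addAnchor(textAnchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    // Returns the current time as a short string, e.g. "9:14".
    func getTime() -> String {
        let formatter = DateFormatter()
        formatter.timeStyle = .short
        var dateString = formatter.string(from: Date())
        if dateString.hasSuffix("AM") || dateString.hasSuffix("PM") {
            dateString.removeLast(3)
        }
        return dateString
    }

    // Builds a thin, unlit 3D text mesh and positions it behind the user.
    func textGen(textString: String) -> ModelEntity {
        var materialVar = UnlitMaterial(color: .black)
        materialVar.blending = .transparent(opacity: 0.8)

        let depthVar: Float = 0.005
        let fontVar = UIFont.systemFont(ofSize: 0.15, weight: .bold, width: .standard)
        let containerFrameVar = CGRect(x: -0.4, y: 0.0, width: 0.8, height: 0.4)
        let alignmentVar: CTTextAlignment = .center
        let lineBreakModeVar: CTLineBreakMode = .byCharWrapping

        let textMeshResource: MeshResource = .generateText(textString,
                                                           extrusionDepth: depthVar,
                                                           font: fontVar,
                                                           containerFrame: containerFrameVar,
                                                           alignment: alignmentVar,
                                                           lineBreakMode: lineBreakModeVar)

        let textEntity = ModelEntity(mesh: textMeshResource, materials: [materialVar])
        textEntity.transform.translation = [0, 0.1, -0.9]

        return textEntity
    }
}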

Run the project on a device and you should see the time on screen; position your body in front of the text, and it should correctly appear behind you:

I hope this tutorial is helpful! The full GitHub repository for this project can be found here:
