
Stephen Ruiz

Product manager

8 MIN READ

Apple’s visionOS 2.0 introduces a suite of new developer APIs and frameworks across UI, AR/VR, graphics, and collaboration domains. These enhancements expand what’s possible on Apple Vision Pro, enabling richer spatial experiences that are more immersive, interactive, and multi-user. Below is a breakdown of the key new APIs by category—UI improvements, ARKit and RealityKit enhancements, Metal optimizations, and collaborative features—detailing their purpose, capabilities, and benefits, with examples of how developers can use them.

UI Improvements and Volumetric UI Enhancements

VisionOS 2.0 extends SwiftUI and UI frameworks to better support 3D interfaces and user interaction in space.

  • Volumetric UI Resizing & Scaling: Apps can present 3D content in volumes (SwiftUI scenes with depth) that are resizable and can scale appropriately with distance. A new SwiftUI scene modifier, windowResizability, lets developers control whether a volumetric window can be resized by the user (see the sketch after this list). Developers can also specify whether a volume’s content maintains a constant apparent size or diminishes with distance.

  • Volume Ornaments (Attached UI Elements): Developers can now affix ornaments to volumes – small UI components or indicators that remain attached to a volumetric window. Ornaments could be labels, buttons, or decorative elements that float with the window, improving usability in spatial UIs.

  • Hand Depth Interaction Control: visionOS 2.0 gives apps control over how users’ hands render relative to virtual content. A new input option lets developers decide if the user’s hands should appear in front of or behind digital elements. This enhances immersion by properly blending real hand presence with 3D content.

  • Custom Hover Effects in SwiftUI: SwiftUI in visionOS 2 adds new hover effect APIs. Developers can define custom visual responses when the user looks at or hovers over a UI element. This provides clearer feedback in the 3D interface, improving usability.
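
As a rough illustration, the sketch below combines a few of these pieces: a volumetric WindowGroup whose resizability is controlled with windowResizability, an ornament pinned to the volume, and a system hover effect for gaze feedback. The app, view, asset, and button names (ChessApp, BoardView, "ChessBoard", "Reset Board") are placeholders, and the availability of each modifier should be checked against the SDK you target.

```swift
import SwiftUI
import RealityKit

@main
struct ChessApp: App {                        // placeholder app name
    var body: some Scene {
        // A volumetric window; .contentSize lets the user resize it
        // within limits driven by the content itself.
        WindowGroup(id: "board") {
            BoardView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.4, depth: 0.6, in: .meters)
        .windowResizability(.contentSize)
    }
}

struct BoardView: View {
    var body: some View {
        Model3D(named: "ChessBoard") { model in   // placeholder asset name
            model.resizable().aspectRatio(contentMode: .fit)
        } placeholder: {
            ProgressView()
        }
        // Hover effects render on interactive views, so give the model a gesture.
        .onTapGesture { /* handle selection */ }
        .hoverEffect(.highlight)
        // An ornament attached to the bottom edge of the volume.
        .ornament(attachmentAnchor: .scene(.bottom)) {
            Button("Reset Board") { }
        }
    }
}
```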

ARKit Enhancements (Scene Understanding and Tracking)

VisionOS 2.0 builds on ARKit’s world understanding capabilities, offering more powerful environmental sensing and tracking APIs tailored for spatial computing.

  • Plane Detection in All Orientations: ARKit in visionOS can now detect flat surfaces of any orientation, including slanted surfaces, rather than only horizontal and vertical planes (see the session sketch after this list). This greatly expands where virtual objects can be anchored, making apps more versatile.

  • Room Anchors: A new room anchor API allows ARKit to understand and label distinct rooms in the user’s environment. Apps can now “remember” spatial content per room, ensuring virtual elements stay in place when the user returns.

  • Object Tracking API (Known Object Detection): visionOS 2 introduces the ability to track specific real-world objects using pre-scanned 3D reference models. Developers can include reference object files in their app, and ARKit will detect and continuously track those objects in the user’s surroundings.

  • Improved Scene Understanding Fidelity: visionOS 2 significantly improves the fidelity of scene understanding, resulting in more stable and precise anchors. This enables more accurate placement of virtual content and more reliable AR interactions.
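
A minimal sketch of the plane-detection flow is below. It assumes the app has world-sensing permission (NSWorldSensingUsageDescription); the .slanted alignment is the visionOS 2 addition, so verify its availability against your SDK.

```swift
import ARKit

// Detect flat surfaces in every supported orientation and react to updates.
func runPlaneDetection() async {
    let session = ARKitSession()
    // .horizontal and .vertical were already available; .slanted covers
    // arbitrarily angled surfaces (check availability on your SDK version).
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])

    do {
        try await session.run([planeDetection])
        for await update in planeDetection.anchorUpdates {
            switch update.event {
            case .added, .updated:
                // Anchor content using update.anchor.originFromAnchorTransform.
                print("Plane \(update.anchor.id) is \(update.anchor.alignment)")
            case .removed:
                print("Plane \(update.anchor.id) removed")
            }
        }
    } catch {
        print("ARKit session failed: \(error)")
    }
}
```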

 

More details on ARKit updates can be found in Apple’s developer documentation.

RealityKit Enhancements (3D Rendering and Physics)

RealityKit receives upgrades that enhance rendering realism, physical simulation, and cross-device compatibility.

  • 3D Hover Effects & Hand Interactions: RealityKit now supports hover effects on 3D entities, so 3D models can visually respond when the user focuses on them or hovers a hand over them (see the entity sketch after this list).

  • Physics: Force Effects and Joints: visionOS 2 adds new RealityKit APIs for realistic physics simulation, including force effects and joints. Developers can now apply physical forces (like impulses or gravity fields) and connect entities with joints (hinges, springs) for constrained motion.

  • Dynamic Lights and Shadows: Apps can now use real-time lights and shadows in RealityKit, making virtual content blend more naturally into its environment.

  • Portal Crossing Enhancements: The portal API now supports partial portal crossing, allowing objects to smoothly transition through a portal instead of popping from one side to the other.

  • Cross-Platform Support (iPadOS/macOS): Many new RealityKit APIs are cross-platform, meaning the same code works on iOS, iPadOS, and macOS as well as visionOS.
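
As a small sketch of the interaction side, the snippet below builds a RealityKit entity that highlights under the user’s gaze; the collision and input-target components are what make it a valid hover and input target. The sphere size and material are illustrative.

```swift
import RealityKit
import UIKit

// An entity that responds to gaze with a hover highlight. Collision and
// input-target components are required for it to receive gaze/hand input.
func makeInteractiveSphere() -> ModelEntity {
    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 0.1),
        materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
    )
    sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    sphere.components.set(InputTargetComponent())
    // Default hover effect; visionOS 2 also offers configurable styles.
    sphere.components.set(HoverEffectComponent())
    return sphere
}
```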

 

More details are available in the RealityKit API reference.

Metal and Rendering Optimizations

VisionOS 2.0 introduces optimizations and new APIs for custom 3D rendering.

  • Scene-Aware Projection Matrix: A new API provides a projection matrix that factors in the headset’s camera intrinsics and real-time scene understanding data. This improves how virtual objects align with real-world surfaces.

  • Trackable Anchor Prediction (Latency Reduction): visionOS 2.0 provides trackable anchor prediction, which reduces rendering latency in AR scenarios by predicting the future pose of moving anchors; a related pose-prediction sketch follows this list.

  • Optimized Compositor and Foveated Rendering: Developers now have better tools for working with foveated rendering, which enhances performance by focusing detail where the user is looking.

  • Enterprise-Grade Performance Controls: Apps can request more CPU/GPU compute power for intensive workloads, such as professional CAD or data visualization apps.
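
The new trackable-anchor prediction API itself is not shown here; the sketch below illustrates the general pattern it accelerates when driving a custom Metal renderer: query tracking for the pose at the estimated presentation time rather than the current time. The 33 ms look-ahead and the class and function names are illustrative placeholders.

```swift
import ARKit
import QuartzCore

// Query the device pose at the time the frame will actually be displayed,
// rather than "now", so rendered content lags head motion less.
final class PosePredictor {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking])
    }

    func predictedDeviceTransform(lookAhead: TimeInterval = 0.033) -> simd_float4x4? {
        let presentationTime = CACurrentMediaTime() + lookAhead   // illustrative look-ahead
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: presentationTime) else {
            return nil
        }
        return device.originFromAnchorTransform
    }
}
```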

 

More details on Metal updates can be found in Apple’s Metal documentation.

Collaborative Features and Multi-User Experiences

VisionOS 2.0 places a big emphasis on shared experiences, introducing new APIs that let multiple people connect, play, or work together in the same virtual space.

  • TabletopKit Framework: A new framework designed for multiplayer experiences around a virtual table. It provides high-level constructs for board games, turn management, and object placement.

  • SharePlay and Spatial Persona Integration: visionOS 2 deepens integration with Apple’s SharePlay, allowing apps to invite others into a shared Full Space session. FaceTime spatial Personas can now be used directly in apps for a more immersive social experience (a minimal GroupActivities sketch follows).
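
TabletopKit has its own higher-level setup, but the SharePlay side rests on the familiar GroupActivities flow. The sketch below defines a hypothetical activity, offers it for activation, and joins incoming sessions; the activity identifier and title are placeholders.

```swift
import GroupActivities

// A hypothetical shared activity for a tabletop-style game.
struct BoardGameActivity: GroupActivity {
    static let activityIdentifier = "com.example.boardgame.match"

    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Board Game Match"
        meta.type = .generic
        return meta
    }
}

// Offer the activity to the current FaceTime call (or present the share UI).
func startSharedMatch() async throws {
    let activity = BoardGameActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try await activity.activate()
    default:
        break
    }
}

// Join sessions as they arrive and set up shared state for the group.
func observeSessions() async {
    for await session in BoardGameActivity.sessions() {
        session.join()
    }
}
```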

 

More details on shared experiences can be found in Apple’s SharePlay documentation.

Conclusion

The new APIs in visionOS 2.0 empower developers to create apps that are more immersive, interactive, and connected. UI improvements like volumetric resizing and hover effects enable intuitive 3D interfaces. Enhanced ARKit scene understanding and object tracking let apps better integrate with the user’s environment. Upgraded RealityKit rendering and physics produce more realistic visuals and interactions, while Metal-level optimizations ensure smooth performance. Finally, frameworks like TabletopKit open the door for multi-user spatial computing, turning Vision Pro into a device not just for solo experiences but shared ones.

FAQs About App Development

Crafting a mobile app and bringing it to life can feel like a daunting endeavor. With our wealth of knowledge and seasoned expertise, we can answer your questions and guide you through the app creation journey smoothly.

At Frame 60, our software development journey began with a focus on mobile iOS and Android applications built with Objective-C and Java. As technology advanced, we made a seamless transition to modern languages and frameworks like Swift, SwiftUI, and Kotlin.

When it comes to creating AR/VR experiences and games, our primary choice for native applications is Unity, and for web-based projects we leverage 8th Wall, A-Frame, and WebXR. With the recent introduction of Apple Vision Pro, we are actively migrating existing AR/VR Unity apps to Vision Pro and exploring new possibilities with SwiftUI.

For web development, we excel in crafting dynamic websites using React and Node, along with expertise in building platforms on WordPress. In addition to these, we are well-versed in various other frameworks and programming languages such as Firebase, Golang, Python, and PHP, ensuring our ability to tackle a wide range of projects with versatility and proficiency.

At our software development company, we initiate the process with an introductory video conference meeting. During this session, we take the opportunity to establish a strong working relationship and delve into discussions about your project's key features, timelines, dependencies, and other essential aspects. If there is mutual interest and compatibility, you can confidently share any project documentation you may have, and rest assured, we are open to signing NDAs to ensure confidentiality.

In case the project specifications require further refinement, we are happy to offer a ballpark estimate to give you a rough idea of the overall cost. Subsequently, if you find it beneficial, we can continue with additional meetings to meticulously fine-tune the requirements and provide you with an accurate and detailed estimate. Our aim is to foster clear communication and transparency throughout the entire collaboration to ensure the success of your project.

Absolutely! We understand that launching an app can be a challenging process. With our experience submitting hundreds of apps, we are well aware of the intricacies involved, including the occasional feedback from App Store reviewers. Rest assured, we have a thorough understanding of the approval process and are fully equipped to help you achieve a swift and successful approval for your app. Our goal is to make the entire process as seamless and efficient as possible, so you can focus on realizing your app's full potential without unnecessary delays or complications.

Absolutely! At our custom mobile app development company, staying in close communication with our clients is a top priority.

We believe that regular updates are essential for ensuring that your expectations align seamlessly with our work. To achieve this, we encourage prompt feedback from you. To facilitate effective communication, we often break down the project into manageable phases and deliver new features every week or two. For day-to-day interactions, we are proficient in using platforms like Slack and Discord, providing a quick and efficient way to stay connected.

Moreover, we conduct weekly video conference calls, where we can discuss requirements and showcase the progress made on your project. However, we also understand that some clients prefer a more traditional approach, and we are happy to accommodate that as well, offering communication via good old-fashioned email if that's more comfortable for you.

By adopting this communication strategy, we ensure that you are well-informed about the development progress and that your valuable input is incorporated seamlessly into the project, fostering a collaborative and successful partnership.

Certainly! At our software development company, we firmly believe in granting our clients full ownership and control of their applications.

Once the development process is completed, you become the sole owner of the application and its source code. This means you have the freedom to submit the application to the relevant app stores or deploy the website as you see fit. The source code is included as a part of the final deliverable, ensuring that you have complete access and authority over it.

To further empower you, we provide detailed documentation along with the deliverable. This documentation serves as a valuable resource for any future developers who may need to work on the project. It contains all the essential information, enabling them to understand precisely what we have implemented and how the application functions.

Our commitment to granting you ownership of the code and providing comprehensive documentation is aimed at ensuring your long-term success and independence with your software solution.

Yes. We regularly build proof-of-concept apps and MVPs to help clients validate ideas, secure funding, or pitch to internal stakeholders.

Absolutely. We've built medical-grade prototypes, including brain scan visualization tools, and can tailor AR/VR development for healthcare use cases.

For initial development, we use Apple’s visionOS simulator. However, full testing requires access to the Vision Pro device, which we support internally.

Yes. We can migrate existing Unity projects to visionOS using Unity PolySpatial and Apple's APIs, while optimizing for performance and UI/UX standards.

Some components can extend to iPad or iPhone, but immersive spatial features are exclusive to Vision Pro. We can advise on cross-device strategy as part of your AR/VR development roadmap.

Yes. Our AR/VR development team specializes in Quest’s MR capabilities using color passthrough, depth APIs, and environment blending.

Absolutely. We can integrate Photon, Normcore, or custom networking stacks to enable social, collaborative, or multiplayer AR/VR apps.

While all support core features, Quest 3 and Pro offer better passthrough, depth sensing, and hand tracking. We help tailor AR/VR development to the hardware's strengths.

Yes, we implement Quest’s hand tracking APIs for natural gesture-based input, ideal for immersive experiences and accessibility.

Yes. We primarily use Unity for Meta Quest apps but can also develop in Unreal depending on the visual or technical requirements.

Yes. Our team handles the full spectrum—from mobile AR on iOS/Android to fully immersive VR for headsets like Quest and Vision Pro.

Yes. We frequently develop custom graphics pipelines, shaders, and input systems to create high-fidelity AR/VR experiences.

Definitely. Many of our clients engage us early to help shape the product vision, build prototypes, and outline a go-to-market plan.

Yes. We offer ongoing support, feature expansions, and performance optimizations after your AR/VR app is live.

Yes. We’ve integrated AI-based assistants, object tracking, and spatial awareness features into AR/VR projects for smarter interactions.
