Samsung has officially ventured into the mixed reality (MR) headset space with the unveiling of "Project Moohan," an upcoming headset built on the newly announced Android XR platform. Although the consumer release isn't slated until 2025, I had the chance to go hands-on with an early prototype.
Before diving into the specifics, it’s worth noting that Samsung and Google have disclosed little about key specs like resolution, weight, field-of-view, or pricing. Photos and videos were prohibited during my demo, so an official image is all we have to go on for now.
To offer a bit of context, think of Project Moohan as a blend of Quest and Vision Pro, pulling inspiration from both. When you look at the headset’s design, it’s obvious that it borrows heavily from Vision Pro, with similar color schemes, button layouts, and even calibration processes. On the software front, Android XR feels like a perfect melding of Horizon OS and VisionOS.
It’s almost uncanny how much Project Moohan and Android XR echo today’s leading headset platforms. But this isn’t about pointing fingers. Tech companies frequently draw inspiration from one another and build on good ideas. If Android XR and Project Moohan can capture the strengths of their competitors while avoiding the weaknesses, that’s a win for both developers and users.
And it seems like they’re on the right track.
Hands-on with Samsung’s Project Moohan Android XR Headset
Starting with Project Moohan’s design, it’s a snazzy piece of hardware. It carries the ‘goggles’ vibe reminiscent of Vision Pro, but with improvements. For instance, instead of Vision Pro’s often-maligned soft strap, Samsung opted for a more rigid strap with a tightening dial, akin to the Quest Pro’s ergonomic design. It also features an open-peripheral design, which is beneficial for augmented reality applications, along with magnetic snap-on blinders for fully immersive experiences.
While there are resemblances to Vision Pro, like button placements and shapes, Project Moohan lacks the external display that shows the user’s eyes. Vision Pro’s ‘EyeSight’ display might be divisive, but I find it quite innovative, and it’s something I wish Project Moohan included. Being able to see a user’s eyes while they wear the headset adds a personal touch that’s currently missing from Samsung’s offering.
Samsung hasn’t been too forthcoming about tech specs, calling the device a prototype. However, it’s known to house the Snapdragon XR2+ Gen 2, a more advanced variant of the processor found in Quest 3 and Quest 3S.
During my trial, it was clear the headset uses pancake lenses with automatic IPD adjustment, all thanks to integrated eye-tracking. The field-of-view seems narrower than the Quest 3 or Vision Pro, but I reserve final judgment because different forehead pad options could bring my eyes closer to the lenses and perhaps widen the view.
What I experienced was an immersive field-of-view, although the sweet spot felt limited due to brightness fall-off at the edges of the display. Again, this could improve with proximity adjustments, but for now, Meta’s Quest 3 seems to lead the pack, closely followed by Vision Pro, with Project Moohan trailing slightly behind.
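For those curious how eye-tracked IPD adjustment generally works: the headset measures the distance between your tracked pupils and motors the lenses to match. Here’s a rough sketch in Kotlin; every interface in it is hypothetical, since neither Samsung nor Google has published this part of the stack.

```kotlin
import kotlin.math.sqrt

// Hypothetical types for illustration only; the real eye-tracking and
// lens-motor APIs are not public.
data class Vec3(val x: Float, val y: Float, val z: Float) {
    fun distanceTo(o: Vec3): Float {
        val dx = x - o.x; val dy = y - o.y; val dz = z - o.z
        return sqrt(dx * dx + dy * dy + dz * dz)
    }
}

interface EyeTracker { fun pupilCenters(): Pair<Vec3, Vec3>? }  // left, right, in meters
interface LensActuator { fun setSeparationMm(mm: Float) }

// Smooths the measured inter-pupillary distance and drives the lens
// motors toward it, so the lenses follow the wearer's actual IPD.
class AutoIpd(private val eyes: EyeTracker, private val lenses: LensActuator) {
    private var smoothedMm = 63f  // start from an average adult IPD

    fun update() {
        val (left, right) = eyes.pupilCenters() ?: return  // no gaze lock yet
        val measuredMm = left.distanceTo(right) * 1000f
        // Low-pass filter to avoid jittering the motors on noisy samples.
        smoothedMm += 0.1f * (measuredMm - smoothedMm)
        lenses.setSeparationMm(smoothedMm.coerceIn(54f, 74f))
    }
}
```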
Although Samsung confirmed that Project Moohan will have dedicated controllers, I didn’t get to see or test them. It’s undecided whether these will come bundled with the headset or be sold separately.
My interaction relied entirely on hand-tracking and eye-tracking. The combination felt remarkably similar to both Horizon OS and VisionOS: you can use raycast cursors like in Horizon OS, or eye+pinch inputs akin to VisionOS. Samsung’s headset also has downward-facing cameras, so it can detect pinches even while your hands rest comfortably in your lap.
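To make the two input modes concrete, here’s a simplified sketch of how a selection system might arbitrate between a hand-driven raycast cursor and gaze + pinch. This is purely illustrative; the names are made up and it isn’t the actual Android XR input API.

```kotlin
// Two ways to point at UI, both confirmed by a pinch:
// a hand-driven raycast cursor (Horizon OS style) and gaze (visionOS style).
sealed interface Pointer
class HandRay(val origin: FloatArray, val direction: FloatArray) : Pointer
class Gaze(val targetId: String?) : Pointer  // what the eye tracker says you're looking at

fun selectedTarget(
    pointer: Pointer,
    pinching: Boolean,
    raycastHit: (HandRay) -> String?,  // intersects the ray with UI panels
): String? {
    if (!pinching) return null  // a pinch confirms the selection in both modes
    return when (pointer) {
        is HandRay -> raycastHit(pointer)  // cursor mode
        is Gaze -> pointer.targetId        // gaze mode: whatever the eyes rest on
    }
}
```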
When I strapped on the headset, the first thing that caught my eye was the sharpness of my hands in the display. The passthrough cameras seemed to deliver a sharper image than the Quest 3 and had less motion blur than Vision Pro—although, I only tested in ideal lighting conditions. Considering the sharp focus on my hands, it seems the passthrough cameras might be set to focus around arm’s length.
Inside Android XR
Moving on to Android XR, it immediately felt like a hybrid of Horizon OS and VisionOS. The main interface features app icons suspended on a transparent backdrop, similar to Vision Pro. You select an app by looking at it and pinching, which launches it in a floating panel. The home screen is summoned with a similar gesture: looking at your palm and pinching.
However, the windows in this system resemble Horizon OS more closely than VisionOS. They have mostly opaque backgrounds and can be moved anywhere by grabbing an invisible frame encasing them.
Android XR supports fully immersive experiences, too. I explored a VR adaptation of Google Maps, reminiscent of Google Earth VR, which let me visit global locales, including 3D-modeled cities, Street View imagery, and novel volumetric captures of indoor spaces.
While Street View delivers monoscopic 360-degree imagery, volumetric captures provide real-time, explorable renderings. Google described the latter as a "Gaussian splat" solution but didn’t clarify whether it was built from existing photography or required new scans. Though not as sharp as photogrammetry, it wasn’t bad, and Google expects further improvements.
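For context on what a "Gaussian splat" solution means: rather than triangles, the scene is stored as a cloud of translucent, colored Gaussians that get projected to the screen and blended front-to-back. A minimal sketch of that per-pixel blending, ignoring the projection and depth-sorting steps, looks roughly like this (illustrative only, not Google’s implementation):

```kotlin
import kotlin.math.exp

// Each splat is a translucent 2D Gaussian in screen space.
data class Splat2D(
    val cx: Float, val cy: Float,  // screen-space center
    val invCovXX: Float, val invCovXY: Float, val invCovYY: Float,  // inverse 2x2 covariance
    val opacity: Float,
    val r: Float, val g: Float, val b: Float,
)

// Alpha of one splat at pixel (px, py): Gaussian falloff from its center.
fun splatAlpha(s: Splat2D, px: Float, py: Float): Float {
    val dx = px - s.cx
    val dy = py - s.cy
    val power = -0.5f * (s.invCovXX * dx * dx + 2f * s.invCovXY * dx * dy + s.invCovYY * dy * dy)
    return (s.opacity * exp(power)).coerceIn(0f, 1f)
}

// Front-to-back "over" compositing of depth-sorted splats at one pixel.
fun shadePixel(sorted: List<Splat2D>, px: Float, py: Float): Triple<Float, Float, Float> {
    var r = 0f; var g = 0f; var b = 0f
    var transmittance = 1f
    for (s in sorted) {
        val a = splatAlpha(s, px, py)
        r += transmittance * a * s.r
        g += transmittance * a * s.g
        b += transmittance * a * s.b
        transmittance *= (1f - a)
        if (transmittance < 0.001f) break  // early exit once nearly opaque
    }
    return Triple(r, g, b)
}
```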
In addition, Google Photos on Android XR now auto-converts 2D photos and videos from your library into 3D. The transformations looked quite stunning, similar in quality to the same feature on Vision Pro.
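Google hasn’t said how its conversion works, but the common approach is depth-image-based rendering: estimate a depth map for the photo, then shift pixels horizontally in proportion to their depth to synthesize a second eye’s view. A toy sketch, leaving out the hole-filling a real pipeline needs:

```kotlin
// Sketch of the general 2D-to-3D idea (depth-image-based rendering),
// not Google's actual pipeline.
fun synthesizeRightEye(
    color: Array<IntArray>,   // [h][w] packed RGB of the original photo
    depth: Array<FloatArray>, // [h][w] estimated depth in 0..1 (1 = near)
    maxDisparityPx: Int = 12, // strength of the stereo effect
): Array<IntArray> {
    val h = color.size
    val w = color[0].size
    val out = Array(h) { IntArray(w) }
    for (y in 0 until h) {
        for (x in 0 until w) {
            // Nearer pixels shift more, producing stronger parallax.
            val shift = (depth[y][x] * maxDisparityPx).toInt()
            val sx = (x + shift).coerceIn(0, w - 1)
            out[y][x] = color[y][sx]
        }
    }
    return out  // occluded regions would need inpainting in a real pipeline
}
```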
YouTube on Android XR also got an upgrade. Along with giant curved displays for regular content, there’s support for existing 180, 360, and 3D videos. Not all of that content is high-quality, but it’s good to see it remain relevant, and the library will likely grow as more compatible headsets reach the market.
Interestingly, Google presented a YouTube video filmed in 2D but auto-converted to 3D for the headset. The quality was commendable, comparable to Google Photos’ tech. Whether this requires creator participation or is done automatically by YouTube is yet to be clarified.
The Stand-out Advantage (for now)
What positions Android XR and Project Moohan ahead, both in hardware and software, is Google’s next-level conversational AI integration. Known as Gemini (specifically its ‘Project Astra’ edition), this AI agent is accessible directly from the home screen. It can continuously see and hear what you see in both the real and virtual worlds, making it remarkably more intelligent and integrated than existing AI agents on other headsets.
Unlike Siri on Vision Pro, which only processes auditory input for limited tasks, Gemini supports continuous dialogue. Meta’s experimental AI on its Quest headsets can hear you and see the real-world scene, but it has no awareness of the virtual world, resulting in a noticeable disconnect.
Gemini, however, appears to receive a low-framerate video feed of both the real and virtual worlds, giving it continuous, real-time perception: you can ask about something while moving through a space, with no need to stop and pose for a photo.
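A plausible way to build that kind of always-on context, sketched below with entirely hypothetical interfaces, is to periodically capture a composited frame (passthrough plus rendered virtual layer) at a low framerate and stream it to the model:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Illustrative names only; the real Gemini/Android XR plumbing is not public.
interface FrameSource { fun captureCompositedFrame(): ByteArray }  // real + virtual, already blended
interface AgentSession { fun pushFrame(jpeg: ByteArray) }

fun startContextFeed(source: FrameSource, agent: AgentSession, fps: Double = 1.0) {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    val periodMs = (1000.0 / fps).toLong()
    scheduler.scheduleAtFixedRate({
        // A low framerate keeps bandwidth and power in check while still
        // giving the model continuous awareness of the scene.
        agent.pushFrame(source.captureCompositedFrame())
    }, 0, periodMs, TimeUnit.MILLISECONDS)
}
```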
Gemini on Android XR further distinguishes itself with an impressive rolling 10-minute memory, allowing it to recall key details from recent conversation, whether spoken or visual. In a demo room filled with various objects, a familiar scenario for showing off AI agents, I tested Gemini’s limits with trickier questions, and was pleasantly surprised at how deftly it sidestepped my attempts to trip it up.
In one instance, I used Gemini to translate a Spanish sign into English, which it did swiftly and accurately. Then I requested a French translation of a nearby sign, fully aware that it was already in French. Gemini correctly identified that the sign was already in French and read it back with a remarkably good French accent.
Shifting my inquiries towards other items in the room, I later asked Gemini to revisit our earlier translation—“what did that sign say earlier?” Impressively, it recalled the French sign effortlessly. Then, I queried, “what about the one before that?”…
This would have been a daunting feat for AI systems just a few years ago, when contextual understanding and conversational continuity were significant challenges. Gemini, however, seamlessly provided the Spanish sign translation when prompted.
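The rolling 10-minute memory Google describes maps naturally onto a time-windowed buffer that evicts observations once they age out. A minimal sketch, with made-up types, since the real system is surely far more sophisticated:

```kotlin
// Hypothetical rolling memory: old observations fall out once they
// pass the 10-minute horizon.
data class Observation(val timestampMs: Long, val summary: String)

class RollingMemory(private val windowMs: Long = 10 * 60 * 1000L) {
    private val entries = ArrayDeque<Observation>()

    fun add(obs: Observation) {
        entries.addLast(obs)
        evict(obs.timestampMs)
    }

    // Everything still inside the window, oldest first, for use as model context.
    fun recall(nowMs: Long): List<Observation> {
        evict(nowMs)
        return entries.toList()
    }

    private fun evict(nowMs: Long) {
        while (entries.isNotEmpty() && nowMs - entries.first().timestampMs > windowMs) {
            entries.removeFirst()
        }
    }
}
```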
Beyond answering general queries, Gemini also promises seamless headset control. Although the full scope of its abilities won’t be clear until launch, Google demonstrated useful features like transporting me virtually to the Eiffel Tower via an immersive Google Maps experience. Unlike other agents, Gemini understood subsequent natural questions, enhancing interaction with the virtual content.
It also fetched relevant YouTube videos based on my queries: asking "show a video of the view from the ground" while looking at the virtual tower brought up a fitting clip.
While Gemini on Android XR might cover tasks similar to mobile AI agents like messaging or setting reminders, the real interest lies in how far it will extend its XR-specific capabilities.
Gemini on Android XR stands as the most advanced AI agent on any headset to date, outpacing Meta’s current offerings on devices like its Ray-Ban smartglasses. But it’s only a matter of time before Apple and Meta rise to the challenge with similar technology.
Gemini’s real strength with Project Moohan shines in spatial productivity applications, but its potential reaches beyond—hinting at innovations awaiting in everyday wearable smartglasses, which I also sampled… but more on that in a separate review.