How might we connect people across Facebook’s suite of hardware and software products with valuable, connected, and multi-modal experiences?
Initial research went into identifying areas where Facebook was already creating cross-platform experiences across its hardware and software products.
After framing the project, ideation began with an activity called the creative matrix. This matrix helps us generate many wide-ranging ideas in a short amount of time; it's a useful framework because it stimulates cross-pollination.
This workshop occurred during a lunch session of XR club, an internal cross-functional club focused on exploring spatial computing at Connected.
This workshop generated over 90 concepts which were then affinity mapped, evaluated, and developed further.
Potential Impact Prioritization
Nine concepts were brought forward for further prioritization.
A smaller workshop with 10 attendees was used to prioritize these concepts. Attendees were cross-functional across engineering, product strategy, and product design, all with an understanding of both Portal and Oculus products; some were working on various Portal projects.
A presentation and discussion were followed by a dot voting exercise where we prioritized concepts based on potential impact.
All 9 of our concepts were developed into richer conceptual prototypes with 3 being prioritized for further exploration by our team.
Portal Voice Assistant(s) in Oculus Spaces
Exploring the embodiment of virtual personal assistants in Oculus Spaces, including Portal's VPA, and making Portal voice commands consistent across all Facebook hardware.
- How might we represent and embody these virtual agents in Oculus space?
- How might we leverage the context of both real and virtual spaces and associated metadata (nearby objects, IoT states etc.) to give our assistant more understanding and awareness?
- How might an assistant utilize the gesture and body language of both users and assistants to communicate beyond spoken language?
Oculus to Portal Calling
Creating an immersive sense of presence for Oculus users during video calls to Portal users by utilizing the 140-degree wide-angle camera on Portal devices to create a concave, wrapped video experience.
For Portal users, Cameraman AI can work with the virtual camera to keep the avatar(s) in the frame. This creates a consistent experience for Portal users.
- Explore future camera hardware and its impact on UX including higher resolution, frame rates, and stereo cameras.
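As a rough illustration of the concave wrap (a hypothetical sketch, not an implementation detail from the project), the helper below generates vertices for a cylindrical screen segment whose arc angle matches the Portal camera's 140-degree horizontal field of view, so video pixels keep roughly their real-world angular size for the Oculus viewer:

```python
import math

def curved_screen_vertices(fov_deg=140.0, radius=2.0, height=1.5,
                           cols=32, rows=2):
    """Generate a grid of vertices for a concave cylindrical screen
    segment that wraps the Portal video feed around a viewer at the
    origin. The camera's horizontal FOV maps directly onto the arc
    angle of the cylinder."""
    fov = math.radians(fov_deg)
    verts = []
    for r in range(rows + 1):
        y = height * (r / rows - 0.5)
        for c in range(cols + 1):
            # Sweep the arc from -FOV/2 to +FOV/2 in front of the viewer.
            theta = -fov / 2 + fov * (c / cols)
            x = radius * math.sin(theta)
            z = -radius * math.cos(theta)
            verts.append((x, y, z))
    return verts
```

In an engine such as Unity, the same grid would be triangulated into a mesh and the call video applied as its texture; the parameter names here are assumptions for illustration.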
Shared AR Layer
Oculus and Portal users in a video call can share AR objects; these objects are the link between both spaces.
For Portal users, these objects are overlaid on a reflected video while their view of Oculus users becomes a PIP or split screen. Oculus users see AR layers as three-dimensional objects occupying the space between the large immersive video projection of their caller's space (utilizing the 140-degree camera) and their own avatar. This concept aims to enable AR object interaction beyond the current capabilities of Facebook's Spark AR, which is currently limited to face tracking.
- Leverage the user's phone + fiducial marker or custom tracking image to allow users to manipulate objects using their mobile phone
- Explore using motion tracking on Portal TV controller to allow for 3DoF object manipulation
An SDK and component library for Unity developers building for Oculus that allows for fast development of enhanced spectatorship views, particularly geared towards competitive multiplayer team games that may gain a streaming following.
- Spectators can toggle dynamic map views, HUDS (heads up display), or s
- They are able to see a scoreboard view and switch between first-person views of multiple players within the same multiplayer gaming session.
- Simple integration into popular streaming platforms
- Spectator interaction through polling, tips, commenting etc.
Third Person Camera View
Server-side rendering and streaming of additional camera views such as third-person views can allow the spectator to toggle between cameras and even control the camera.
Spectatorship of a first-person view can cause motion sickness for some people and offers limited context on the environment around the player. However, allowing the spectator to switch to an alternative view requires additional graphics rendering resources.
- Game streaming typically requires extremely low latency; however, this is not the case for spectatorship.
- This may also be used for asymmetrical cooperative or competitive gameplay where the Portal user is a more active participant and can view graphics beyond Portal's rendering capabilities.
Co-watch media content on Portal devices with someone using an Oculus headset.
While watching together, the Portal user sees small Oculus avatar(s) from behind in the bottom corner, as if they are in a movie theatre. Gestures are tracked, and the avatars are animated for Portal viewers.
Oculus users might see a small video window of their co-viewer floating beside them while they watch, or the real-world co-viewer could be pose-tracked and present as an avatar on the virtual sofa in Oculus space. Pose tracking translates their gestures and movement to their virtual avatar.
Allowing spectators viewing on Portal devices or within Facebook's News Feed to make simple interactions with Oculus users through a heads-up display. This welcomes engagement from spectators and allows them to participate in the experience, e.g. by helping or hindering the VR player.
e.g. The audience can help or hinder a VR player by choosing the next Tetris block to fall. This could be a 1:1 choice, or a voting poll between pieces dropping if there is a large number of spectators.
e.g. The audience chooses the dialogue of an NPC (non-player character) to help push along the story, giving it some semblance of intelligence.
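The poll mechanic above could be resolved with a small piece of server-side logic. The sketch below is a hypothetical helper (names and behaviour are assumptions, not part of the concept spec) that collapses spectator votes into a single game input, handling both the 1:1 case and the large-audience case:

```python
from collections import Counter

def resolve_spectator_choice(votes, options):
    """Resolve a spectator poll into one game input.

    `votes` is a list of option names submitted by spectators and
    `options` is the list of valid choices (e.g. Tetris pieces or NPC
    dialogue lines). With one spectator, their single vote wins; with
    many, the majority wins, ties broken by option order."""
    valid = [v for v in votes if v in options]
    if not valid:
        return None  # no valid votes; the game can fall back to random
    counts = Counter(valid)
    # max() returns the first option (in `options` order) with the
    # highest vote count, giving deterministic tie-breaking.
    return max(options, key=lambda o: counts.get(o, 0))
```

A fuller version would likely run per voting window and rate-limit submissions per spectator.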
Portal to Oculus: Asymmetrical gameplay
Show simplified gameplay on Portal that runs on local hardware, using Facebook's Instant Games HTML5 platform or 3D games rendered on the integrated mobile chipset.
This could allow for a myriad of asymmetrical competitive and cooperative gameplay configurations. These could include a PIP of the first-person perspective from the VR player, who is in half of the experience, or not sharing their game stream at all in order to facilitate vocal communication and cooperation between players, as shown here.
MultiModal Calling Pt 2 (Oculus to Smartphone)
On video calls between Oculus and mobile users, the mobile user holds their phone as if they are shooting a video into VR space; as they move it around, it becomes a window into the VR space all around them. This is called simulated VR.
In Oculus space, the player sees the caller as an avatar holding a device. The forward-facing camera tracks the mobile user's expression and translates it to the facial animations of their avatar, as well as syncing lip movement. The avatar's body is animated based on the camera position and an approximation of anatomy through the animation rig.
Potential to use forward-facing depth cameras (e.g. on iPhone) for head tracking, enabling camera reprojection to create the illusion of a window/portal into the metaverse.
Using camera-based head tracking to extrapolate the user's approximate x, y, z position allows for the reprojection of the virtual camera and the creation of the illusion of looking through a window.
This illusion is particularly impactful when looking into a virtual scene in VR space but also may be a compelling UX for Portal to Portal calling.
- Future hardware and depth cameras with laser grid projection would increase the accuracy and impact of this illusion.
- Only works for one user; explore and validate the UX of seamlessly switching between the Window and AI Cameraman camera modes.