XR Design / Prototype
Future Casting
UI / UX Design
User Testing
Sep – Dec '22
14 Weeks
Unity
C#
Visual Studio
Figma
Encompass is an exploratory mixed reality smart home system that allows multiple users to interact with their smart homes more naturally and intuitively.
As lightweight MR devices mature, Encompass could let users navigate the smart TV, control the lights, or set up automation scenes in the house simply by using household objects within reach, or through their daily habits and gestures.
We began developing our project as an Immersion Lab experiment, focusing on the concept of objects existing in both virtual and real worlds. As our ideas evolved, we focused on a central theme concerning the future of extended reality (XR) technology. We observed that current market offerings, particularly virtual reality (VR) headsets, impede user interaction with the physical world, limiting potential use cases.
We assert that XR technology should enhance and extend human interaction in the real world, seamlessly integrating with our lives. Mixed reality (MR) offers the greatest potential in this regard, as users can engage with the real world without interruption while benefiting from virtual overlays and enhancements. This realization prompted us to explore how MR could improve aspects of our daily lives, leading us back to a familiar space: our homes and the current smart home experience.
We began by investigating the current market to verify whether our assumptions about the technology were correct. Through trend research, we analyzed existing use cases and the direction in which MR technology is developing.
Google has brought back its lightweight augmented reality (AR) glasses for a second attempt, allowing users to utilize them in public settings. Features like real-time translation and directions inside the discreet lenses make them potentially even more useful.
Meta's new Quest Pro also represents a trend in MR technology. With the introduction of features like color pass-through, hand and eye-tracking, and physical object mapping in virtual space, it allows for more natural interactions with both real and virtual objects.
We are also seeing increased investment from both startups and established tech companies in new form factors, in overcoming technology barriers, and in novel use cases for lightweight MR devices, indicating the potential for a breakthrough in this concept in the future.
We also looked back at our own homes, everyday smart home use cases, and the existing market of smart home products, trying to identify their problems and how they could develop once incorporated with MR technology.
Currently, users can control their homes through smart speakers using natural voice interactions. However, due to the limits of voice user interfaces (VUI), there is much to improve in terms of intuitiveness, precision, and accuracy.
Users can also control their homes with smart home apps. However, because devices come from different manufacturers and ecosystems, it typically takes more than one app to control all of them.
Dedicated controllers are limited in usefulness to the specific appliances they ship with, yet their lack of automation and of communication with the rest of the system keeps them necessary, which adds to the hassle when they are lost.
To identify how we can better address the current solution's weak points, we looked further into the existing XR devices (which may share similarities with the future MR devices) and some of the already established smart home standards. Here are two parts that we think we can leverage when designing our vision.
The existing smart home system includes two main parts: personal devices (phones) for basic automation and triggering, and hubs (smart speakers) for controlling. Lightweight MR devices, however, can be more intuitive and accurate because they track everything from the environment to the user's movement and display information as overlays. This opens up new possibilities for controlling devices and enables more accurate, detailed automations, even down to the posture level.
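To make this concrete, here is a minimal Unity sketch of what a posture-level automation could look like. Everything in it is hypothetical rather than taken from our demo: the `ISmartLight` interface stands in for whatever smart home layer the device would talk to, and the sofa anchor and thresholds are placeholder values.

```csharp
using UnityEngine;

// Hypothetical smart home endpoint; a real system would wrap a
// Matter/ecosystem client behind an interface like this.
public interface ISmartLight
{
    void SetBrightness(float level); // 0 = off, 1 = full
}

// Dims the living room lights when the headset pose suggests the
// user is lying down near the sofa: a posture-level automation.
public class PostureAutomation : MonoBehaviour
{
    public Transform headset;            // e.g. the camera rig's center eye
    public Transform sofaAnchor;         // mapped position of the sofa
    public float lyingHeadHeight = 0.6f; // meters; rough "lying down" cutoff
    public float sofaRadius = 1.5f;      // meters

    private ISmartLight livingRoomLight; // injected by the smart home layer
    private bool dimmed;

    void Update()
    {
        bool nearSofa = Vector3.Distance(headset.position, sofaAnchor.position) < sofaRadius;
        bool lyingDown = headset.position.y < lyingHeadHeight;

        // Fire the automation only on state transitions, not every frame.
        if (nearSofa && lyingDown && !dimmed)
        {
            livingRoomLight?.SetBrightness(0.2f);
            dimmed = true;
        }
        else if (dimmed && (!nearSofa || !lyingDown))
        {
            livingRoomLight?.SetBrightness(1f);
            dimmed = false;
        }
    }
}
```

The same pattern generalizes: because the headset continuously tracks both the environment map and the user's pose, almost any combination of the two can become an automation trigger without extra sensors.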
There are also new smart home standards and technologies on the horizon. The upcoming adoption of the new Matter standard will enable cross-ecosystem compatibility for more smart home devices and appliances, allowing them to be controlled with whichever service the user prefers. Newer wireless standards like Wi-Fi 6E and UWB can also enable a hub-less, interconnected IoT network that is distributed and always tracked.
The whole Encompass demo is intended to showcase what might be possible as lightweight MR devices become available to the general public. However, given the limitations of current technology and platforms, we can only simulate the experience using Unity and Meta's Oculus Integration. All the demo source code is open source on GitHub, so feel free to try it out if you have the adequate hardware!
Developing with Unity and XR platforms is a relatively new experience for our team, and experimenting with the capabilities and constraints of the existing toolkits taught us a great deal. By diving deep into Meta's Oculus Integration and exploring object-oriented programming, we gained a clearer understanding of what the current technology can and cannot do, from gesture tracking accuracy and the current state of MR passthrough to the overall intuitiveness of the prototyping workflow. It's always fun to view a design possibility from the perspective of a developer!
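For a sense of the level at which Oculus Integration exposes hand tracking, here is the basic pinch-detection pattern we built on. The `PinchToToggle` component below is an illustrative sketch rather than code from our repo; `OVRHand` and its members come from Meta's Oculus Integration SDK.

```csharp
using UnityEngine;

// Minimal gesture check using Oculus Integration's hand tracking.
// Attach alongside an OVRHand component (e.g. on OVRHandPrefab).
public class PinchToToggle : MonoBehaviour
{
    public OVRHand hand; // assign in the Inspector
    private bool wasPinching;

    void Update()
    {
        // Skip frames where tracking is unreliable; in practice this
        // happens often enough that every interaction needs a fallback.
        if (!hand.IsTracked || !hand.IsDataValid)
            return;

        bool isPinching = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);

        // Trigger once on the pinch-down edge rather than every frame.
        if (isPinching && !wasPinching)
        {
            Debug.Log("Pinch detected: toggle the targeted device here.");
        }
        wasPinching = isPinching;
    }
}
```

Small edge-detection details like this, multiplied across every gesture, are a big part of what shapes how intuitive the prototyping workflow feels.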
During the exploration and development of our main prototype, the Multi-Affordance scene, we also conducted several rounds of user testing to see how people reacted to the main features we proposed and what we could improve. The results were surprisingly useful and shaped much of the final prototype, including how the user interacts with the multi-controller and how significantly a gesture can be influenced by the posture the user performs it in.
To present the overall concept more persuasively, we also created a small brand guide for Encompass. Utilizing emerging AIGC tools, we were able to quickly iterate through potential names for our concept and goal. We also chose GitHub's new open-source font Mona as both the display and paragraph typeface, thanks to its flexibility as a variable font.
Experimenting with emerging technology on a still-developing platform has been quite an interesting journey for me. Hands-on experience with real development tools is invaluable for a designer. Here are some points I learned during the development of Encompass:
Having hands-on experience creating a scene, experimenting with Unity engine settings, and exploring existing APIs and codebase gave me a better understanding of XR design and development, and why the platform hasn't been popularized yet, even with major pushes from companies like Meta. The constraints of existing APIs and hardware limitations not only challenged us to create novel experiences while considering all the requirements, but also sparked more ideas to improve the workflow in the future.
The original plan for Encompass was to show a complete scenario: a daily routine of how users interact with the smart home system. However, due to time and effort constraints, we had to make trade-offs to finish the design. I learned that the best way to make trade-offs is to evaluate the ultimate goal of the design (in this case, showcasing our vision) against the scope of the entire project. This resulted in our final prototypes, one of which is fully interactive while still demonstrating our vision in full.
There is still much to do regarding both the Encompass vision and the prototypes. Here are some of the points we can further develop based on our existing design:
Haoran and I are also setting up a stage for a live demo of the prototypes at our graduation show. Stay tuned for more updates!