Reimagining design interaction and modeling:
Our approach to this endeavor is based on a few guiding principles, laid out below:
Guiding principles in design:
Interacting with hardware CAD models should be easy and in first-person 3D!
Mixed reality is a must! We built quite a few initial prototypes (in both MR and VR) and quickly concluded that the experience of full immersion is like no other when it's in MR.
Interaction models: No controllers! We wanted to explore interaction models based on our most trusted and intuitive tools: our own hands and eyes.
Building on interaction models: Defining custom poses and gestures that allow for direct interaction, are easy to learn, and are easy to implement, bringing the "as seen in sci-fi movies" experience to life.
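As an illustration only (not our production gesture code), a custom pose like a pinch can be detected from tracked fingertip positions; adding hysteresis keeps the gesture stable against per-frame tracking jitter. The `PinchDetector` name, thresholds, and joint inputs below are all assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class PinchDetector:
    """Detects a thumb-index pinch from hand-tracking joint positions.

    Two thresholds (hysteresis) prevent the gesture from flickering
    on and off when the fingertip distance hovers near the boundary.
    """
    press_threshold: float = 0.02    # pinch engages when fingertips are < 2 cm apart
    release_threshold: float = 0.04  # and releases only once they pass 4 cm
    pinching: bool = False

    def update(self, thumb_tip, index_tip) -> bool:
        """thumb_tip / index_tip: (x, y, z) joint positions in meters."""
        d = math.dist(thumb_tip, index_tip)
        if self.pinching:
            # Stay pinched until the fingers clearly separate.
            self.pinching = d <= self.release_threshold
        elif d < self.press_threshold:
            self.pinching = True
        return self.pinching
```

Calling `update` once per tracking frame yields a debounced pinch state that can drive grab, draw, or menu interactions.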
Moving past visualizing: Adding drawing, writing, and measuring features. Controllers appeared to be the logical choice here (more stable, better fine control), but we took the extra effort to continue refining hand interaction models. We strongly feel that hands will remain the most intuitive input and will open up possibilities not yet imagined.
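To sketch one way hand-driven drawing and measuring can share a data structure (hypothetical, not our actual implementation): a freehand stroke can filter fingertip jitter with a minimum sample spacing, and its accumulated path length then doubles as a simple tape measure. The `Stroke` class and the 5 mm spacing are illustrative assumptions:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Stroke:
    """A freehand stroke sampled from a tracked fingertip, in meters."""
    min_spacing: float = 0.005                 # drop samples closer than 5 mm to the last one
    points: list = field(default_factory=list)  # list of (x, y, z) tuples

    def add_sample(self, p) -> bool:
        """Append a fingertip sample; reject near-duplicates caused by hand jitter."""
        if self.points and math.dist(p, self.points[-1]) < self.min_spacing:
            return False
        self.points.append(p)
        return True

    def length(self) -> float:
        """Total path length of the stroke; usable as a crude measurement."""
        return sum(math.dist(a, b) for a, b in zip(self.points, self.points[1:]))
```

The same spacing filter that smooths drawn ink also makes measured lengths less noisy, which is one reason drawing and measuring benefit from a shared representation.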
Networking and collaboration: Enabled by default, all models imported into the app are networked across players. We felt collaboration was a fundamental feature, with full immersion as the key theme. In both co-located (physically in the same room) and distributed (geographically separate) settings, all assets are shared. Our north-star goal is to blur the line between physical and virtual for all players, building a sense of realism around object presence.
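As a hedged sketch of what "networked by default" can mean at the message level (not our actual wire format): each player broadcasts compact transform updates for shared models, which peers decode and apply. The message fields and names here are illustrative assumptions:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TransformUpdate:
    """One shared-state message: which player moved which model, and where to."""
    model_id: str
    player_id: str
    position: tuple  # (x, y, z) in meters, in the shared world space
    rotation: tuple  # quaternion (x, y, z, w)
    scale: float

def encode(update: TransformUpdate) -> bytes:
    """Serialize an update for broadcast to the other players."""
    return json.dumps(asdict(update)).encode("utf-8")

def decode(payload: bytes) -> TransformUpdate:
    """Reconstruct an update on the receiving side.

    JSON round-trips tuples as lists, so convert them back explicitly.
    """
    d = json.loads(payload.decode("utf-8"))
    d["position"] = tuple(d["position"])
    d["rotation"] = tuple(d["rotation"])
    return TransformUpdate(**d)
```

In a co-located session, the shared world space can be anchored to the physical room so the same message keeps the model aligned for everyone present.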
Guiding principles in development process:
Accelerated development: leverage GenAI for quick prototyping and iterate, iterate, iterate! Refer to our GenAI post if you're interested in our history :)
A build, iterate, test, gather-feedback, and refine approach. This has allowed us to stay laser-focused on feature sets (the potential design space is vast!), build engagement with our active community, and ship the features users want to see.
These principles continue to guide our work as we build; for example, we're exploring AI use cases and integrations, which will leverage similar modalities (hands for sketching, voice recognition, etc.).