How I learned to stop worrying and embrace the GenAI paradigm shift

A quick story on our history


Watching the 2008 movie (‘billionaire building a robotic suit of armor’ :) ) for the first time blew my mind. Fresh out of college, seeing Tony naturally interact with holograms of hardware designs on the big screen, chatting with an AI assistant to generate concepts and offload the bulk of the work, was an engineer’s dream - but it also felt far off in the future.

Fast forward 15 years - from powerful advances in mixed reality experiences to exceptionally powerful AI tools a click away - and I feel all the elements are falling into place for us to truly be immersed in that reality.

If AIs are powerful enough to be multimodal and maintain temporal context, anyone can dive into essentially anything. If skill stacking is the engine of creative problem-solving, LLMs are dual-stage turbochargers: they can quickly orient a person in a new field and rapidly accelerate feedback loops.


How do you build a chess game in mixed reality?

With all the recent advancements, I wanted to dive into tinkering in this space first-hand. The first question I asked myself was: how difficult would it be for me (i.e. a hardware engineer with limited coding experience in MR/AR) to build a simple chess app in Unity by leveraging GenAI? What followed was a series of contextual questions, one after another with a patient mentor, and, to my own surprise, a proof-of-concept game with hand tracking and accurate physics within a week!
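To give a flavor of the kind of building block an LLM can walk you through, here is a minimal Unity C# sketch of that pattern - my own illustration, not the actual app code. The idea: a chess piece is driven by real physics until a tracked hand grabs it, then it goes kinematic while held. The OnGrabbed/OnReleased hooks are hypothetical entry points that a hand-tracking SDK's grab detector would call.

```csharp
using UnityEngine;

// Illustrative sketch (assumed pattern, not the app's code): a physics-driven chess piece
// that switches to kinematic while a tracked hand holds it, then settles naturally on release.
[RequireComponent(typeof(Rigidbody))]
public class ChessPiece : MonoBehaviour
{
    private Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
    }

    // Hypothetical hook, called by whatever grab/pinch detector the hand-tracking SDK provides.
    public void OnGrabbed()
    {
        rb.isKinematic = true;   // while held, the hand drives the piece directly
    }

    // Hypothetical hook, called when the hand lets go.
    public void OnReleased()
    {
        rb.isKinematic = false;  // gravity and collisions take over again
    }
}
```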


Enter Caddy

Caddy (3D CAD in mixed reality) was an idea I wanted to try, and an obvious leap from that point on: if we can bring in assets, we should be able to abstract out the type of asset, and also tailor the experience to what I personally would find useful (i.e. baby steps towards a fully immersive hardware experience).

The first video I shared with my team showed a super rough app: a monochrome, single-part CAD model in MR with barebones hand tracking. But it instantly resonated. I got a call from Jason Putnam immediately after - ‘I think this is it, I’m in’.


Together, we just kept building what we wanted to see: adding interaction models, custom workflows, and networking solutions, essentially laying the base framework for an aspirational, fully AI-enabled experience - a 3D visualization tool in MR with an abstract dataflow pipeline. Each time we used LLMs as a starting point, but interestingly found less and less need for them in familiar areas, picking them back up whenever we dove into new ones.


Which brings us to where we are today: a natural interaction stack built on hand tracking, plus a networking and colocation solution, abstracted so that anyone can visualize, interact with, and collaborate on essentially any CAD model in a shared space.
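As an illustration of what “abstracting out the type of asset” can look like in Unity, here is a hedged sketch - the names (CadPartSetup, MakeInteractable) are mine, not the app’s. Given any imported CAD hierarchy, it adds colliders and a single rigidbody so the hand-tracking and networking layers can treat every part the same way, whatever the geometry.

```csharp
using UnityEngine;

// Illustrative sketch (assumed approach, not the app's code): make any imported CAD
// hierarchy grabbable and physical, regardless of what the part actually is.
public static class CadPartSetup
{
    public static void MakeInteractable(GameObject importedPart)
    {
        // Give every mesh in the imported hierarchy a collider so hands can touch it.
        foreach (var meshFilter in importedPart.GetComponentsInChildren<MeshFilter>())
        {
            var meshCollider = meshFilter.gameObject.AddComponent<MeshCollider>();
            meshCollider.convex = true; // convex, so the part can also be driven by physics later
        }

        // One rigidbody on the root lets the whole part be picked up and moved as a unit.
        var rb = importedPart.AddComponent<Rigidbody>();
        rb.useGravity = false;   // CAD parts stay put in space until a hand moves them
        rb.isKinematic = true;   // pose is driven by the hand-tracking/interaction layer
    }
}
```

Usage would be a single call after import, e.g. `CadPartSetup.MakeInteractable(loadedModel);`, keeping the rest of the pipeline agnostic to the asset type.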


To help level-set the impact of GenAI/LLMs on our development process: building this complete feature set as a proof-of-concept demo took less than a month (in our spare time, as hardware engineers with zero initial C# and minimal coding experience), followed by a similar effort to build out the abstraction layer.

This is just a single application that two hardware engineers wanted to experiment with. In this new AI paradigm, there’s so much more waiting in the wings - it’s time to build!!

- Raghavan

