Our future aspirations for integrating AI (or) Where can we go from here?

Recommended pre-reads:

  • Our Vision

  • Reimagining Design interaction and modeling


Picture this:

  • ‘Hey Caddy, this front cover feels too thin, let’s try increasing it by 25% and, while you’re at it, throw two 10 mm holes here and here’ (or)

  • ‘Here’s a rough wireframe I’ve sketched of this part; please apply design principles and build out the design’ (or)

  • ‘Hey Caddy, I wonder what the stresses will look like if we change the profile on this bracket; here’s a rough profile sketch, use your best judgment to make the design’

All of these are now theoretically possible: natural-language interaction customized for hardware engineers, combined with powerful existing tools such as CAD design, topology optimization, finite element analysis, thermal simulation, etc.

How do we do this?

There’s quite a bit of exciting work here, starting with design AI: at a high level, building custom-trained AI agents that can work with point clouds.

Basic approach: 


Phase 1: CAD modeling has traditionally been based on two main methodologies:

  • Boundary representation (a surface/wireframe-based modeling approach)

  • Solid modeling, or a combination of the two.

AI modeling: training PyTorch models to recognize and polish roughly drawn shapes, based on either solid models or wireframes.

  • Utilizing the PointNet architecture to either build a model trained against a database of basic solids, or to define engineering splines; a minimal sketch of such a model follows.
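
As a rough illustration, here is a minimal PointNet-style classifier in PyTorch. The class name, layer sizes, and the set of eight primitive classes are illustrative assumptions, not the trained model itself:

```python
import torch
import torch.nn as nn

class PointNetClassifier(nn.Module):
    """Minimal PointNet-style classifier: shared per-point MLP,
    symmetric max-pool, then a classification head.

    Input: (batch, 3, num_points) point clouds; output: logits over
    num_classes primitive solids (e.g. box, cylinder, sphere, ...).
    """
    def __init__(self, num_classes: int = 8):
        super().__init__()
        # 1x1 convolutions act as an MLP shared across all points.
        self.features = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        # Classification head on the pooled global feature.
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        x = self.features(points)        # (B, 1024, N)
        x = torch.max(x, dim=2).values   # symmetric max-pool over points
        return self.head(x)              # (B, num_classes)

# Smoke test on a batch of 32 random clouds of 1024 points each.
logits = PointNetClassifier()(torch.randn(32, 3, 1024))
print(logits.shape)  # torch.Size([32, 8])
```

The max-pool over points is the key PointNet idea: a symmetric function makes the prediction invariant to the ordering of points in the cloud, which is exactly what a rough hand sketch captured as a point cloud needs.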

Phase 1 Vision: The user sketches a rough rendition of the shape they’d like; the agent either recognizes a solid shape and replaces it in real time with an actual solid, or recognizes the splines (if it’s a complex surface) and replaces them with plausible curves grounded in engineering design practice.

Mechanics: train the models in PyTorch, export the inference graph to ONNX, and run it with Barracuda to incorporate the model directly into Unity as a game object; a sketch of the export step follows.
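
A sketch of that hand-off, assuming the PointNetClassifier from the previous snippet; the file name and opset version are placeholder choices:

```python
import torch

# Export the trained model to ONNX; Barracuda imports .onnx assets
# dropped into a Unity project and runs them on CPU or GPU.
model = PointNetClassifier().eval()  # eval mode freezes BatchNorm stats
dummy = torch.randn(1, 3, 1024)      # example input fixes the tensor shape

torch.onnx.export(
    model, dummy, "pointnet.onnx",
    input_names=["points"], output_names=["logits"],
    opset_version=11,  # an opset version Barracuda's importer supports
)
```

On the Unity side, Barracuda loads the resulting .onnx asset and evaluates it through its worker API, so the shape recognizer can live on a game object alongside the sketching UI.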

Phase 2: the Phase 1 output feeds into the Phase 2 workflow for final design fabrication.

Leverage custom CAD software APIs: train an AI agent using RAG (Retrieval-Augmented Generation) on LangChain with Llama 2 to enable a language-based solution; a sketch follows below.
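
A minimal sketch of that RAG loop, using the classic LangChain API (module paths have shifted between LangChain versions) with a local Llama 2 checkpoint served through llama.cpp; the document path and model path are placeholders:

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

# Index the CAD package's API documentation for retrieval.
docs = TextLoader("cad_api_docs.txt").load()  # placeholder corpus
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)
index = FAISS.from_documents(chunks, HuggingFaceEmbeddings())

# A local Llama 2 model via llama.cpp (path is a placeholder).
llm = LlamaCpp(model_path="llama-2-7b-chat.gguf", n_ctx=4096)

# Retrieval-augmented chain: fetch the relevant API docs,
# then let the LLM answer grounded in those snippets.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=index.as_retriever())
print(qa.run("Which API call adds a 10 mm through-hole to a face?"))
```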

Phase 2 musings: integrating the model output with trained LLMs to enable direct design manipulation in CAD software packages. One possible wiring is sketched below.
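
One way to wire that together is to prompt the LLM to emit a small structured command that a thin dispatch layer translates into CAD API calls. Everything below (the command schema, the cad object, and its method names) is hypothetical, sketched purely for illustration:

```python
import json

def apply_llm_command(cad, command_json: str) -> None:
    """Dispatch a structured LLM command to a (hypothetical) CAD API.

    `cad` stands in for whatever scripting object the target CAD
    package exposes; the method names here are illustrative only.
    """
    cmd = json.loads(command_json)
    if cmd["action"] == "scale_thickness":
        # e.g. 'this front cover feels too thin, increase it by 25%'
        cad.scale_feature(cmd["feature"], factor=1 + cmd["percent"] / 100)
    elif cmd["action"] == "add_hole":
        # e.g. 'throw two 10 mm holes here and here'
        for point in cmd["locations"]:
            cad.add_hole(face=cmd["face"], center=point,
                         diameter=cmd["diameter_mm"])
    else:
        raise ValueError(f"unknown action: {cmd['action']}")

# The LLM would be prompted to emit commands in this shape:
example = ('{"action": "add_hole", "face": "front_cover", '
           '"locations": [[10, 20], [40, 20]], "diameter_mm": 10}')
```

Constraining the LLM to a narrow command schema keeps the CAD side deterministic and makes bad generations easy to validate and reject before they ever touch the model.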

We’re still noodling on these ideas - comments welcome!
