Show HN: AI in SolidWorks
Hey HN! We're Will and Jorge, and we've built LAD (Language-Aided Design), a SolidWorks add-in that uses LLMs to create sketches, features, assemblies, and macros from conversational inputs (https://www.trylad.com/).

We come from software engineering backgrounds where tools like Claude Code and Cursor have come to dominate, but when poking around CAD systems a few months back we realized there's no way to go from a text prompt to a modeling output in any of the major CAD systems. In our testing, the LLMs aren't as good at making 3D objects as they are at writing code, but we think they'll get a lot better in the coming months and years.

To bridge this gap, we've created LAD, a SolidWorks add-in that turns conversational input and uploaded documents/images into parts, assemblies, and macros. It includes:

- Dozens of tools the LLM can call to create sketches, features, and other objects in parts.

- Assembly tools the LLM can call to turn parts into assemblies.

- File system tools the LLM can use to create, save, search, and read SolidWorks files and documentation.

- Macro writing/running tools plus a SolidWorks API documentation search so the LLM can use macros.

- Automatic screenshots and feature tree parsing to give the LLM context on the current state of the model.

- Checkpointing to roll back unwanted edits, and permissioning to determine which commands wait for user approval.

You can try LAD at https://www.trylad.com/ and let us know what features would make it more useful for your work. To be honest, the LLMs aren't great at CAD right now, but we're mostly curious to hear whether people would want and use this if it worked well.
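For a rough idea of the shape this takes, here's a minimal sketch of what one such tool definition might look like, using an OpenAI-style function schema (the tool name and parameters here are illustrative, not LAD's actual interface):

    # Hypothetical tool schema an add-in like LAD might expose to the LLM.
    # The tool name and parameters are illustrative, not LAD's actual interface.
    create_extrude_tool = {
        "type": "function",
        "function": {
            "name": "create_extrude",
            "description": "Extrude the active sketch into a solid feature.",
            "parameters": {
                "type": "object",
                "properties": {
                    "depth_mm": {
                        "type": "number",
                        "description": "Extrusion depth in millimeters",
                    },
                    "direction": {
                        "type": "string",
                        "enum": ["normal", "reverse", "midplane"],
                    },
                },
                "required": ["depth_mm"],
            },
        },
    }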
161 points by WillNickols - 83 comments
The UI is the inverse of whatever intuitive is. It's built on convention after convention after convention. If you understand the shibboleths (and I'm guessing most people take a certified course by a trainer for it?), then it's great, but if you don't, it really sucks to be you (i.e. me).
I would LOVE to try out what you've built, but I am afraid that if the model misinterprets me or makes a mistake, it'll take me longer to debug / correct it than it would to just build it from scratch.
The kinds of things I want to make in solidworks are apparently hard to make in solidworks (arbitrarily / continuously + asymmetrically curved surfaces). I'm assuming that there won't be too many projects like this in the training dataset? How does the LLM handle something that's so out of pocket?
Personally not familiar with curved models, but my understanding is that surface modelling with lofts guided by spline contours might be the way to go. Not sure if SW has those features.
And yeah, you should find a course from a training firm rather than the official documentation. It sucks, and there's a reason Fusion360 seems to be really eating into the market after 5-10 years.
[1] https://github.com/pedropaulovc/offline-solidworks-api-docs
[2] https://github.com/pedropaulovc/harmonic-analyzer/blob/main/...
https://github.com/MichaelAyles/heph/blob/main/blogs/0029blo...
I need to redo this blog, because I did it on a run where the enclosure defaulted to the exploded view and KiCanvas bugged out. Either way, the bones of it are working. Next up is to add more subcircuits, do cloud compilation of firmware, and go from kicad_pcb to Gerbers.
Then order the first prototype!
https://adamkarvonen.github.io/machine_learning/2025/04/13/l...
Does anyone familiar with what's under the hood know whether the latent space produced by most transformer paradigms is only capable of natively simulating 1-D reasoning, and has to kludge together any process for working out geometry with more degrees of freedom?
An interesting thing about transformers is that they are world-class at compressing 2D image data even when not trained on anything but textual language ( https://arxiv.org/abs/2309.10668 ). Whether that notion is relevant for 3D content would be two or three figures over my pay grade, though.
Solidworks might be as close to a final form for CAD as you're going to get.
oh and they changed the price as well, it went up, and up, and up
OK, how do we work with our CMM and CAM software? How do we send files to clients and vendors? crickets
Not confidence inspiring.
LLMs struggle because I think there's a lot of work to be done with translating colloquial speech. For example, someone might describe creating a tube in fairly ambiguous language, even though they can see it in their head: "Draw a circle and go up 100mm, 5mm thick" as opposed to "Place a circle on the XY plane, offset the circle by 5mm, and extrude 100mm in the Z direction."
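As a rough illustration, the precise version is essentially a structured op list like this (the schema is made up, not any particular CAD API):

    # Made-up structured representation of the "precise" tube description above.
    tube_ops = [
        {"op": "create_sketch", "plane": "XY"},
        {"op": "draw_circle", "center_mm": (0.0, 0.0), "radius_mm": 20.0},
        {"op": "offset_entities", "distance_mm": 5.0},  # wall thickness
        {"op": "extrude", "depth_mm": 100.0, "direction": "+Z"},
    ]

The colloquial phrasing leaves the radius, plane, and offset direction implicit; the structured form has to pin all of them down.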
You can represent a dog as a specific multi-dimensional array (raster image), but the word dog represents many kinds of images.
Amusingly, I did this just a couple of weeks ago and that's how I learned what a chamfer is: a flat angled transition. The adjustment I needed to make to my pins where they are flared (but at a constant angle) is a chamfer. Claude told me this as it edited the OpenSCAD file. And I can just ask it in-line for advice and so on.
I'd love to have that kind of UI for adjusting dimensions in regular (non-CAD) images. Or maybe adjusting the CSS on web pages?
This is definitely my experience as well. However, in this situation it seems we are mostly working in "local" space, not "world" space wherein there are a lot of objects transformed relative to one another. There is also the massive benefit of having a fundamentally parametric representation of geometry.
I've been developing something similar around Unity, but I am not making competence in spatial domains a mandatory element. I am more interested in the LLM's ability to query scene objects, manage components, and fully own the scripting concerns behind everything.
But I think this shows that these models can improve drastically on specific domains.
I think if there were some good datasets/mappings for spatial relations and CAD files -> text, then a fine-tune/model with this in its training data could improve the output a lot.
I assume this project is using a general LLM with a unique system prompt/context/MCP setup for this.
I then waited about an hour for the print to finish, only to discover I wanted to make some adjustments. While I was able to iterate a couple times, I quickly realized that there were certain structures that were difficult to describe precisely enough without serious time spent on wording and deciding what to specify. It got harder as the complexity of the object grew, since one constraint affects another.
In the end, I switched to FreeCAD and did it by hand.
Wasted half an hour generating absolute nonsense (when it even compiled) and ended up going with one of those SVG gear generators instead, lmao.
(Of course, you and I know it is, it's just that you're asking it to do a lot)
I guess it's all in the perspective
https://shapelabvr.com/
Here's an example I finished just a few minutes ago:
https://github.com/jehna/plant-light-holder/blob/main/src/pl...
For people looking at a different angle on the "text to 3D model" problem, I've been playing with https://www.timbr.pro lately. Not trying to replace SolidWorks precision, but great for the early fuzzy "make me something that looks roughly like X" phase before you bring it into real CAD.
I've tried ChatGPT and Claude on datasheets of electronic components, and I'm sorry to say that they are awful at it.
Before that is fixed, I don't have high hopes for an AI that can generate CAD/EDA models that correctly follow some specification.
My wife was designing a spring-loaded model that fits in our baby gate so that we can attach it to our walls more modularly, and she used Blender. Part of it is that it's harder to make a slightly more complex model with an LLM.
Solidworks is out of our budget for the kind of things we're building, but I'm hoping that if this stuff is successful, people will work on tools further down-market. Good luck!
For the most part they still suck at anything resembling real spatial reasoning, but they're capable of doing incredibly monotonous things that most people wouldn't put themselves through, like meticulously labeling every pin, putting strict design rule checks on each net, or setting up DSN files for the autorouter. It even makes the hard routing quite easy, because it can set up the DRC using the Saturn calculator, so I don't have to deal with that.
If you give them a natural language interface [1] (a CLI in a Claude skill, that's it) that you can translate to concrete actions, coordinates, etc., it shines. Opus can prioritize nets for manual vs. autorouting, place the major components using language like "middle of board" (which I then use another LLM to translate to concrete steps), and in general do a lot of the annoying things I used to have to do. You can even combine the visual understanding of Gemini with the actions generated by Opus to take it a step further, by having the latter generate instructions and the former generate a JSON DSL that gets executed.
I'm really curious what the defensibility of all these businesses is going to be going forward. I have no plans to enter that business, but my limit at this point is that I'm not willing to pay more than $200/mo for several Max plans to have dozens of agents running all the time. When it only takes an hour to create a harness that lets Claude go hog wild with desktop apps, there's a LOT of unexplored space, but just about anyone who can torrent SolidWorks or Altium can figure it out. On the other hand, if it's just a bunch of people bootstrapping, they won't have the same pressure to grow.
Good luck!
[1] Stuff like "place U1 to the left of U4, 50mm away" and the CLI translates that to structured data with absolute coordinates on the PCB. Having the LLM spit out natural language and then using another LLM with structured outputs to translate that to a JSON DSL works very well, including when you need Opus to do stuff like click on screen.
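A minimal sketch of that second translation step, assuming an OpenAI-compatible client (the DSL fields are made up for illustration):

    # Sketch: translate a natural language placement instruction into a JSON
    # DSL command with a second LLM. Assumes an OpenAI-compatible client;
    # the DSL schema is made up for illustration.
    import json
    from openai import OpenAI

    client = OpenAI()
    instruction = "place U1 to the left of U4, 50mm away"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Translate the placement instruction to JSON like "
                '{"op": "place", "target": "U1", "ref": "U4", '
                '"relation": "left_of", "offset_mm": 50}'
            )},
            {"role": "user", "content": instruction},
        ],
    )
    command = json.loads(resp.choices[0].message.content)
    # The CLI then resolves the command against the board into absolute
    # coordinates before touching the design files.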
Yes, huge +1 for this. I do this in a different field and it's quite impressive. At first it felt weird using different models together, but they really do have certain strengths and weaknesses (as of January 2026).
Also, fascinating how quickly things are evolving around PCB design. It was only six months ago that "vibecoding" a PCB was far off (look how basic this was: https://news.ycombinator.com/item?id=44542880) and now that may soon become possible!
Two things related to what you said that I hadn't put in the original post:
1. In our experience, the LLMs were awful at taking actions directly with any of the SolidWorks API scripting formats (C#, VBA, etc.). Probably 75% of what they wrote just failed to run, and even when they had access to browse the documentation it wasn't much better. If you're getting Opus or anything else to interact with SolidWorks from the CLI, can you say more about how you're getting it to interface effectively?
2. The LLMs are indeed surprisingly bad at spatial reasoning unless prompted specifically and individually. The most notable case of this is when they need to choose the right plane to sketch on. When creating revolve features, they'll often choose the face that would've only worked if they were going to extrude rather than revolve, and when creating sweeps they'll often try to put the sketch that's going to be swept on the same plane as the path that's being swept. If you go back and ask them why they did that and point out that it's wrong, they can fix it pretty fast, but when left to their own devices they often get quite stuck on this.
I just have a Solidworks plugin that translates CLI calls (as JSON) to Solidworks API calls and back again.
What really started working was creating a bunch of high level CLI commands so that Claude Code could query the part/assembly by asking stuff like "What is the closest distance between the southmost surface of Extrusion1 and the surface of Cylinder2" which would then be translated to a specific high level command or a bunch of lower level commands by Gemini 3 Flash. Those would then be translated to Solidworks API calls, as would any editing commands. It also really helps to give it the ability to parametrize the queries so instead of "all features smaller than 2mm" it can say "all features smaller than $MinFeatureSize", with some logic and a downstream LLM to translate that parameter into values in the design and review it with the human in the loop before committing it to the project.
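A toy version of that late parameter resolution looks something like this (the names and parameter table are hypothetical):

    # Toy version of late parameter resolution: the LLM emits queries with
    # $Name placeholders, and concrete values are substituted in last.
    import re

    params = {"MinFeatureSize": 2.0, "WallThickness": 3.5}  # mm, set up front

    def resolve(query: str, params: dict) -> str:
        # Replace each $Name placeholder with its concrete value.
        return re.sub(r"\$(\w+)", lambda m: str(params[m.group(1)]), query)

    q = "select features where size < $MinFeatureSize"
    print(resolve(q, params))  # -> select features where size < 2.0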
The key is to absolutely minimize how often the LLMs think about numbers and have them think in relationships instead. The hard part is translating those relationships back to the CAD API calls, but LLMs are much better at not hallucinating if you resolve all the parametrized equations last.
I have a mini query language in the CLI that implements a lot of spatial queries, both structured and via prompts (another LLM translates the prompt to a structured query), against the Altium file format and an intermediate representation I keep. Most queries and editing commands use relative positioning ("to the left of"), units ("right edge minus 10mm"), and parameters (U1.P1.Top + MinSpacing * 5), etc. The LLM rarely needs to use concrete units because it's mostly parametrized by component clearances and design rules - I just choose some numbers at the beginning like board size and layer stackup (mostly set by the fab).
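A toy version of evaluating one of those positional expressions (the symbol table is hypothetical, and a real implementation would use a proper parser rather than string substitution and eval):

    # Toy evaluator for positional expressions like "U1.P1.Top + MinSpacing * 5".
    # Symbol values are hypothetical stand-ins for lookups against the design.
    symbols = {
        "U1.P1.Top": 12.7,   # mm, from the intermediate representation
        "MinSpacing": 0.2,   # mm, from the design rules
    }

    def evaluate(expr: str) -> float:
        # Substitute longest names first so partial matches can't clobber others.
        for name in sorted(symbols, key=len, reverse=True):
            expr = expr.replace(name, str(symbols[name]))
        # eval on a vetted arithmetic string; fine for a sketch, not production.
        return eval(expr, {"__builtins__": {}}, {})

    print(evaluate("U1.P1.Top + MinSpacing * 5"))  # -> 13.7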
The CLI has over a hundred subcommands and I use Claude skills to split up the documentation, but the agent actively explores the schematic and PCB themselves. The Claude skills include instructions to use the measurement subcommands to sanity check after making a move or when doing a review at the end, although I'm in the process of implementing a GPU based design rule checker. My Solidworks interface works the same but there are many more "verbs" there for the LLM to manage.
At the end of the day it's mostly just orchestrating another tool which does most of the spatial logic and calculations. It's definitely easier with Altium than Solidworks so far.
Not yet but once I'm ready it's all going to be open source.
> Also looked into automating altium more and found having to do a lot of GUI work to guide the models along.
Have you tried the Windows UI Automation/Accessibility APIs? You can download Accessibility Insights for Windows to see the data structure, and it's well supported by Altium. It has everything you need to tell the LLM what's on screen without ever sending a screenshot (except for the 2D/3D CAD view), and UIA provides an API that can actually click, focus, etc. without manually sending fake keyboard events. When reverse engineering the file format, I put Opus on a loop and it just kept fiddling with the binary file format until Altium stopped throwing parsing errors.
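If it helps, here's a minimal sketch of poking at the UIA tree from Python, assuming pywinauto (the window title pattern is a guess):

    # Read Altium's UI through Windows UI Automation via pywinauto.
    # The window title pattern is a guess; adjust to the installed version.
    from pywinauto import Desktop

    win = Desktop(backend="uia").window(title_re=".*Altium Designer.*")
    win.print_control_identifiers(depth=2)  # dump the accessibility tree

    # Controls found in that dump can be driven directly, e.g.:
    # win.child_window(title="Projects", control_type="TabItem").click_input()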
> How much of going from ‘design in head’ to schematic and layout have you automated?
At this point everything but the final PCB routing. My autorouter and autoplacer are very much a work in progress and LLMs aren't great at routing complex traces, but they can brute force it given enough time and DRC checks. Right now I just shell out to autorouters like Specctra with a DSN and route the most important nets by hand in Altium. Since the LLM sets up all the design rules and can spend hours on a loop figuring out the optimal placement, it's usually a breeze. Especially compared to when I first started out in EE and spent days with part selection and footprint capture. Now a lot of that tedium is completely automated.
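For the shell-out step, a minimal sketch using the open-source freerouting CLI as a stand-in for illustration (file names are placeholders):

    # Shell out to an external Specctra-DSN autorouter. freerouting is used
    # here as an open-source stand-in; file names are placeholders.
    import subprocess

    subprocess.run(
        ["java", "-jar", "freerouting.jar",
         "-de", "board.dsn",   # exported Specctra design
         "-do", "board.ses"],  # routed session, imported back into the EDA tool
        check=True,
    )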
Soon I'll be integrating probe-rs into the whole workflow and making end-to-end vibe PCBs, adding support for FPGAs, and integrating Solidworks so it does enclosures and molds too.
I'm just manipulating the file format directly because I don't want to be tied to Windows (bleh). I only fire up Altium for a few features that I haven't implemented yet, like the autorouting; the UI automation was to let the LLM figure out file corruption issues.
I learned Altium long before KiCad was a viable option and Altium's ActiveRoute routing is the perfect blend between manual routing and autorouting so it's really hard to switch. ActiveRoute gets even better with the LLM filling out all the design rules.
I've been starting to play around with a 2.5D CAD system so I can sketch patterns for leatherwork and auto-convert them into constraints, but I'd like to be able to have an agent in there too eventually.
Have you considered adding support for AutoCAD or AutoCAD verticals like Civil 3D?
Yes - we'll likely look into other 3D systems in the future.
This is exactly what SGS-1 is, and it's better than this approach because it's actually a model trained to generate B-reps, not just asking an LLM to write code to do it.
I am still hoping that OpenSCAD or something similar can grab hold of the community. OpenSCAD needs some kind of npm, as well as imports for McMaster-Carr parts etc., but I think it could work.
All the people who aren't professional CAD users have Fusion, which they can get for free. They would probably benefit the most from text to model AI, but would probably also be the least willing to pay.
Solidworks is the GOAT though
I've watched the video a couple times and the only thing I can see that it does wrong is the fillets on the handle (and maybe the way it used a spline & sweep for the handle could have been improved but it's no worse than you'd see from a new Solidworks user).
How big is that mug? Slowing it down and trying to catch some dimension info, the initial cylinder has a radius of ~39. Whether or not that's in default inches, that's one hell of a mug.
Sure you can scale it, but it's not great practice to design something and then just apply a scaling to get it to the intended size.