I’ve been using AI to assist me with coding (R, in my case) since shortly after ChatGPT came out. In engineering, we tend to run off-the-shelf models that were, of course, written in some kind of code. Sometimes these are open access, but often they are proprietary. The brutality of market discipline pretty much requires specialized off-the-shelf solutions in industry, because customers are not willing to pay for custom coding, and the proprietary ones are often even preferred for legal and liability reasons.

Anyway, the future of modeling appears to be humans providing a detailed specification to an AI agent, which then follows it to write the code, debug it, run the model, and process and present the results. At this point in history, the humans still have to be able to detect whether the results are BS. One can imagine using a different or more specialized agent to assist the humans in the bullshit-detection stage, with that agent getting more independent over time, and so on in a cycle. I wonder whether we will keep using agents to set up, run, and post-process the specialized models, or whether things will trend toward letting the agents write more fundamental code over time. Or maybe the specification is what future scientists, engineers, and business people will focus their efforts on, with translating it into 0s and 1s being basically a commodity done on the fly by AIs. This makes sense to me: the clearest function of AI so far, in my view, is making it easier and easier for humans to communicate with computers in more abstract language, logic, and mathematical symbols.
Anyway, this example used something called Roo Code, which included a couple of versions of Claude Code along with some other agents, to run a fishery-related model. There is a peer-reviewed article, but I also like this blog post and this example of a specification given to the agents.