I spoke with @sableRaph recently about using AI agents to contribute to the Processing ecosystem codebase. We did some initial tests, with one PR developed entirely by a Claude model using my Claudine agent.
Note: I am not referring to AI-assisted software development (with tools like Copilot, Cursor, or Continue), but to a situation where an AI agent is given a task to implement a feature or fix a bug, and carries it out from beginning to end.
I recognize that this topic is very sensitive. Many programmers, including me, consider the latest advancements in LLM development a threat to the economic sustainability of programming as a profession. On the other side, I also recognize that we are witnessing a rapidly progressing revolution in how we use and think about our digital machines and their interaction with us, humans, a revolution whose momentum cannot be stopped by our wishes regarding socioeconomic determinants. We can take political action, such as supporting open source models, but a purist approach of rejecting all AI-authored code seems unrealistic. AI-authored code could also significantly improve the Processing platform as a whole.
I would like to:
- Initiate a discussion about the politics of AI-generated code in Processing and in open source software in general.
- Continue with subsequent experiments in which my AI agents try to fix selected issues in the codebase.
- Understand how to keep the community in the loop: selecting the work to be done by the agent, describing it, answering the agent’s questions throughout the process, and finally testing the outcome.
I know that @sableRaph believes certain low-hanging issues should remain accessible to new potential contributors. I am looking for suggestions on which issue we should try to fix next with my AI agent to continue the experiment. I will document the whole process and ask for help if additional input or testing is needed.