ChatGPT on the Processing forum. Yay or Nay?

Hey y’all,

I’ve been experimenting with ChatGPT to answer some questions on the forum. Here are the messages:

For transparency’s sake, I went back and identified those messages as partially AI generated.

In the second case ChatGPT was even able to provide a full code solution when prompted to do so. I didn’t include it in my reply for obvious reasons.

ChatGPT is a Large Language Model created by OpenAI that can generate very convincing text on a broad range of topics. In some instances, it is able to write complete Processing sketches from only a text prompt.

There are many ways to use this tool. Some of them are problematic: it can generate very convincing but totally wrong answers, and you don’t know when it might be borrowing heavily from existing sources.

Other use cases could be potentially very helpful. A basic example: you can outline a rough reply and let ChatGPT make it sound more friendly and idiomatic. Or help you find the bug in some code, with or without a pirate accent.

My question for you is:

When is it OK to use a Large Language Model to enhance or partially generate posts or answers for this forum?

Should the use of such tools be allowed here? Under which conditions? Should we ask posters to disclose that a reply was partially or totally AI generated? Some other policy?

If you haven’t tried ChatGPT yet, I encourage you to do so if only to get a sense for its capabilities. Whether you decide to use it or not after that, it’s pretty clear that we will be confronted with more generated posts in the future and I think it’s good to be informed and ready for it.

One last thing: this GPT-2 output detector does a great job of flagging text generated with ChatGPT. How about you try it on this message? :wink: This might become a necessary tool for the teachers among us :upside_down_face:

Curious to read your answers!



I didn’t screenshot the original run I used to write my reply, but here’s an example answer to the original message from the dist() collision thread.

Note that it correctly identified the issue with the float array but got the solution wrong in that instance, which you’d only notice if you understand the problem.


Hello Folks! :slightly_smiling_face:

This scenario that unfolded last week on The Verge’s site is worth a read:

If this is something the forum is interested in pursuing, I think there would also need to be a revisiting of guidelines. Especially as it pertains to students and homework.

The Processing forum currently asks that students support their questions with their own code, and that if they are using found code, they attribute the author.

This potentially becomes a situation of “do as I say, not as I do” if the answers to student questions are AI generated. The respondent is not posting their solution/code nor can it be attributed to a definitive author beyond “Note: part of this message was generated with ChatGPT”.

This in turn raises a second point of consideration.

What is the social impact when the human element in a communication exchange is removed? Is it a quality exchange when one side of the conversation is partly or fully bot driven? If part of the conversation is written via AI, do we know which part? Do we have the same level of trust in the accuracy of the knowledge being shared?

As to whether it should be allowed on this forum? I am skeptical. I think it will add an unnecessary element of background noise, and an additional level of scrutiny over the origin and quality of code.

I think it could potentially undermine students’ understanding of the importance of research, perseverance and problem-solving needed to develop their programming skills.

However I do agree with @sableRaph that this is a topic that should be discussed.
And I’m certainly interested to hear the thoughts of other forum members.



Thanks for writing about this, interesting topic! :grinning:

Quoting from the official StackOverflow meta channel:

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting.

And I like this one:

Other commentators pointed out that it can be difficult to determine whether an answer was created by ChatGPT or not.
I’d like to point out that it doesn’t matter. Terrible answers are terrible answers, and anyone posting a stream of terrible answers should be banned or otherwise restricted.
That does not mean the rule is useless. Simply having a rule that says “no AI answers” will discourage many people from trying, thus decreasing the amount of bullshit that humans have to moderate.

And about AI generated answer detection:

This calls for a feature-request to detect AI generated answers/questions and maybe an additional flag option for users to mark a post if an answer/question is suspected to be one.

An interesting point here is that Deepfake detection is a big area of research but AI generated text detection is still lagging behind a bit. Hoping the community comes up with good models soon that help detect ChatGPT generated content.

For the people suggesting ChatGPT can “help” SO, please know, the biggest differentiator of SO from other Q&A platforms is the fact that some of the most brilliant programmers in the world are directly guiding the community, and the rest of us learn from their answers to then guide others who need help.

Who would you rather learn from? A veteran programmer or a random person with an AI text generator? Because if SO allows this, rest assured this is going to be exploited beyond control.

My conclusion is that it’s a matter of who you want to interact with and what quality of exchange you want to have. The goal of a forum is to talk to people, not to AIs (even the best ones); otherwise it becomes another kind of interaction entirely.

AI models can help people find answers quickly, and when they do, that’s fine: those people don’t need to ask on the forum. But I don’t see the benefit for the people who answer posts on the forum…

I mean, we do this because we love interacting with people, writing solutions, and describing problems. Why would you want to generate all or part of your answer? For that, people can use ChatGPT directly to answer them :wink:



Its text output is impressive, but when it produces so many errors, I don’t think we should recommend it.

I wouldn’t forbid it on this forum at the moment, though.


I think this is a nice gimmick, but also just a gimmick.

I don’t think AI in its current state can substitute for qualified answers to concrete problems and more complex questions, or for steering help seekers in the right direction.

Still, this could be used as a kind of second interface in parallel to the current forum.
So to speak, as a kind of “extended reference book”.

So for questions like:

  • What parameters do I need to use the rect function?
  • Can you show me an example of how to fill my rect with red and give it a green border?
  • How do I use the dist function, and what does the result mean?
  • etc.
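For the dist question above, a “reference book” style answer might look like the following. This is only a sketch of what such a bot reply could contain, written as standalone Java rather than a Processing sketch so it can run outside the Processing IDE; in an actual sketch you would call the built-in `dist()` directly, and the class name `DistExample` is made up for illustration:

```java
// Processing's dist(x1, y1, x2, y2) returns the Euclidean distance
// between two points. This plain-Java version shows the math behind it.
public class DistExample {

    static float dist(float x1, float y1, float x2, float y2) {
        float dx = x2 - x1;
        float dy = y2 - y1;
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        // Distance from (0, 0) to (3, 4) is 5: the classic 3-4-5 triangle.
        System.out.println(dist(0, 0, 3, 4)); // prints 5.0

        // A common use is circle-circle collision: two circles overlap
        // when the distance between their centers is less than the sum
        // of their radii.
        float d = dist(100, 100, 110, 100);   // centers are 10 px apart
        boolean colliding = d < 20 + 20;      // both radii are 20 px
        System.out.println(colliding);        // prints true
    }
}
```

The result is simply “how far apart are these two points, in pixels,” which is why it shows up so often in collision checks like the one from the dist() collision thread.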

In any case, it would have to be clear that the answer comes from a bot, and how to ask other “real” people if necessary.

To be honest, I wouldn’t feel like checking every time whether the AI has answered correctly and, where necessary, clarifying and correcting its errors (as in the example above from the dist() collision thread).

I also don’t know whether being forwarded to a bot with their problems might discourage people who are looking for help.

— mnse


I agree with the sentiment though not 100% with the conclusion.

People who regularly answer on the forum are experienced programmers who are better equipped to evaluate the quality of answers from a Large Language Model than the person asking the question, in the same way that a more experienced programmer knows how to evaluate the quality of the replies they find on StackOverflow. Whether one enjoys doing that instead of solving the problem entirely by oneself is an important question, for sure. It can also be faster to answer yourself than to parse through a bunch of subtly wrong answers from ChatGPT.

There are also use cases that don’t involve asking the AI for coding solutions. It can tweak your writing to make it more understandable, friendly, or encouraging, which is not always easy for a human to do consistently when the same questions keep coming, or before we’ve had coffee :stuck_out_tongue: I think we should treat this as a more nuanced question than just “should you outsource every coding question to ChatGPT?”

I love seeing your answers so far. Let’s keep the discussion going and hopefully we can come up with some healthy guidelines for the forum.


Of course; that’s why people should be warned that the answers may not be correct and should do their own research and testing.

You are right, there are multiple open questions here:

  • From a human perspective, do you prefer getting an answer from a bot or from a real person?

  • As a forum user who actively answers questions, would you accept help from an AI that “improves” your answers, like a guiding assistant?

  • Should ChatGPT or another Large Language Model be part of the forum guidelines, stepping in before a question is posted? Maybe the AI could suggest similar existing posts or provide a simple answer, and for more complicated questions the user would be advised to post on the forum.

  • Are people going to use ChatGPT and copy-paste its answer onto the forum? Are we lazy enough to do that?



We can probably all agree that we should discourage people from copy-pasting AI generated answers straight into the forum without any scrutiny :slightly_smiling_face:


Because of:

  1. Geopolitical factors in flux around the world
  2. A pandemic that is still causing sporadic lockdowns and remote-learning
  3. Or a combination of 1 & 2

The reason for asking a question on a forum can take on additional significance when people are in isolation due to situations beyond their control. I do not want to assume that the freedom I have to walk out of my house and communicate with another person is the same for everyone at any given time. Sometimes the connections made online are a sort of lifeline.

In reality, we don’t know in a quantifiable way the impact we have when we take a moment (or a few moments) to answer someone’s question.

But I do feel the appreciation when a poster responds with a hearty “Now I understand! Thank you!!”

Speaking from my own experience. The first time I asked a question on this forum, to be honest, I was a bit scared(?) or hesitant(?). I knew that it was quite a simple question (an exercise from Dan Shiffman’s Learning Processing) and that in my mind I should be able to work it out on my own. Much to my surprise, I received a quick and very helpful explanation (Thank you again @Chrisir!!). Since then, I have asked many(!) questions. And while I am still a toddler along the lifeline of a programmer, the amount of understanding I have now is largely due to the generosity of time and knowledge many forum members have shared. You all know who you are and for that, I am grateful beyond words. :slight_smile:

I also came to coding with some preconceived misconceptions. I had no idea how deeply creative the process is. I had no idea about the incredible breadth of code solutions to a single problem. I had no idea that learning code is in many ways like learning a spoken language. Imagine my surprise when I learned there is a kind of vernacular to code! :slight_smile: It was the variety of responses from human beings (not bots) that helped me dispel my errors in thinking.

I guess where I’m going with this is that while we can definitively measure whether a piece of code is right or wrong, we cannot measure the impact of our own words. In a forum, the code and who (or what) is conveying the information are of equal importance and inextricably intertwined.


Wonderful answer thank you very much! :heart:

Yes, AI is (going to be) amazingly complex and very powerful, but in the end the question is: what do we want as a society and as human beings?

Do we want a world where the majority of social jobs are being replaced by conversational AIs and ultra realistic 3d generated avatars on a screen? (and I am not talking about the Metaverse…)

I really find technology and programming fascinating, but I am still quite skeptical about how big companies and individuals around the world use those technologies for profit, power, and their own interests, which saddens me…

Anyway this forum is bringing people together and this is what matters :yum:


On that note, check out Fireship’s new video about the future of programming with AIs:

(the comments are also interesting :wink: )


I say yay. We need to start integrating The Machine into our lives.

Hi @animanoir,

Can you elaborate a bit more on that? :yum:

AI can be helpful in the Processing forum because it can assist users in finding answers to their questions more quickly and accurately. AI can also help to moderate the forum by identifying and flagging posts that violate the forum’s rules. Additionally, AI can help to automate certain tasks within the forum, such as tagging posts with relevant keywords or providing suggestions for similar threads. This can save time and resources for both users and moderators, and make the forum a more efficient and effective place for users to share information and help each other.


Official policy from StackOverflow:


I see what you did there :stuck_out_tongue:

Can you try to elaborate without the machine? :wink:


Thanks for giving us the opportunity to show why using ChatGPT to generate complete answers is problematic.

The reply is wildly off-topic. Here’s how.

First of all, we already have a spam filter that filters most of the infringing content. Secondly, that is not what ChatGPT is designed for and there is no such integration with Discourse. Besides, outsourcing value judgments about rules violation to a Large Language Model is a terrible idea :upside_down_face:

Again, these are not features of ChatGPT.

You could argue that a better prompt could yield a better answer, but it would still require human scrutiny to make sure the answer actually makes sense in the context of the conversation, and to spot any subtle flaws. This is even more of an issue when using ChatGPT to generate answers on a topic you know little about, as you are unable to assess the quality of the answer.

Again, this is not to say that there are no good uses for ChatGPT. For example: I still think that using it to improve style or grammar on human-written text is potentially helpful. It can even be good as a helper tool for debugging code in a language/framework you already know well.

After spending a lot of time with ChatGPT over the last week, I do believe it is too blunt a tool to be used without human supervision and its use on the forum should be limited.



Wise answer from ChatGPT itself… :joy:

— mnse

