ChatGPT on the Processing forum. Yay or Nay?

It would appear that the answer is Nay

Personally I am not in favour, for all the reasons that made Stack Overflow disallow it, and the fact that ChatGPT itself does not recommend it. (Nice bit of recursion there @mnse :+1:)

If a solution to a forum discussion is not found, or the original poster is dissatisfied with the replies, then direct them to ChatGPT to try for themselves. They can always come back to the forum for further advice.

4 Likes

Nice timing. I have just been considering how ChatGPT or Codex could be used in a software company.

Plenty of good considerations. I don’t have a strong opinion either way, but I’d hate to see a categorical prohibition. ChatGPT could help non-native speakers improve their questions. As for providing answers, it could help in some cases, like improving the tone, as @sableRaph mentioned.

I’d like to turn the question around: how could the forum or the Processing community benefit from ChatGPT or OpenAI’s Codex (or other programming-tuned Large Language Models)?

GitHub has opened Copilot (based on Codex) to help generate code. They say that about 25% of its suggestions are accepted; with Python it’s up to 40%. Codex can be fine-tuned to improve its suggestions, and I think this is the interesting part: questions and answers from the forum could be used to fine-tune the Codex model. Forum postings can be long, but one use case of ChatGPT is to summarize text, which could be enough to produce prompts. And in most cases the answer is already flagged.

I think extracting prompt:answer pairs for fine-tuning and then measuring the improvement in the answers the model gives could be the basis for a scientific article. Such a service could exist separately from the forum (with warnings that the code may have syntactic errors and may otherwise be incorrect). Running such a service would require a bit of money (Codex and ChatGPT have costs to run) besides hosting.
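To make the idea a bit more concrete, here is a minimal sketch of that data-preparation step. Everything in it is an assumption for illustration: the export file and its field names (`title`, `body`, `accepted_answer`) are invented, and the output uses the prompt/completion JSONL layout that OpenAI’s fine-tuning tooling has used.

```python
# Sketch: turn solved forum topics into prompt/completion pairs for
# fine-tuning. The export file and its field names are hypothetical.
import json

def build_pairs(topics):
    pairs = []
    for topic in topics:
        answer = topic.get("accepted_answer")  # only use flagged solutions
        if not answer:
            continue
        # Long opening posts could be summarized first (e.g. by ChatGPT);
        # here we simply truncate to keep the prompt short.
        question = (topic["title"] + "\n" + topic["body"])[:2000]
        pairs.append({"prompt": question, "completion": " " + answer})
    return pairs

with open("forum_export.json") as f:           # hypothetical export
    topics = json.load(f)

with open("finetune_data.jsonl", "w") as out:  # one JSON pair per line
    for pair in build_pairs(topics):
        out.write(json.dumps(pair) + "\n")
```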

3 Likes

As far as I know, bots on Reddit and Twitter clearly identify themselves. If that is the case here, then I would see no problem with testing that as an experiment. You will still have other people go in and check anyway, so if the answers don’t make sense, they will be corrected. I have misunderstood questions before and needed to be corrected; it would be the same.

Maybe one should limit it to one bot answer per question. If it doesn’t solve the issue, then the community takes over.

There are some questions that come up again and again, where an automatic answer would be great (e.g. collision detection).
An automatic request to format code properly would be nice as well (see the sketch below).
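As a rough illustration of that idea, here is a minimal sketch of what such a bot’s decision logic could look like. Everything here is hypothetical: the keyword heuristic is crude, and actually posting a reply would require the forum’s real API, which is not shown.

```python
# Sketch of the bot behaviour described above: at most one automated
# reply per topic, plus a canned reminder when code looks unformatted.
FORMAT_REMINDER = (
    "Hi! It looks like your post contains code that isn't formatted. "
    "Please wrap it in code tags so it is easier to read."
)

def looks_like_unformatted_code(text: str) -> bool:
    # Crude heuristic: Processing-style keywords with no code fence.
    fence = "`" * 3
    if fence in text:
        return False
    keywords = ("void setup()", "void draw()", "size(", "background(")
    return any(k in text for k in keywords)

def bot_reply(post_text: str, bot_already_replied: bool) -> str | None:
    # One bot answer per question; after that the community takes over.
    if bot_already_replied:
        return None
    if looks_like_unformatted_code(post_text):
        return FORMAT_REMINDER
    return None

# Example: an unformatted snippet triggers the reminder.
print(bot_reply("void setup() { size(400, 400); }", False))
```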

2 Likes

Here’s a proposed update to the FAQ. Feedback welcome!

AI text-generation policy

While AI text generators and Large Language Models can be helpful if used reasonably, the content generated by these tools is often wrong in ways that are not immediately obvious. We want this forum to be a place where people come to learn from other knowledgeable (sentient) folks. Therefore, we do not allow spammy answers copy-pasted from ChatGPT or similar text generation tools.

Legitimate uses:

  • It’s okay to use ChatGPT or similar text generators for educational or research purposes, as long as you clearly label the content as being generated by a machine. [This line was generated by ChatGPT]
  • Using machine learning to improve or translate text is acceptable as long as the facts and knowledge in the text come from you. Just don’t use AI to solve the problem for you, especially if you don’t have a good understanding of the problem yourself.
5 Likes

I have added the AI policy to the FAQ and updated the text to add more nuance.

As usual, feedback is encouraged :smiley:

AI text-generation policy

While AI text generators and Large Language Models can be helpful if used reasonably, these tools often generate misleading answers to factual questions. These mistakes are often subtle and presented in a confident tone that makes them hard to notice.

We want this forum to be a place where people come to learn from other knowledgeable folks. Therefore, we believe the use of such tools on the forum should be limited.

In particular, we now do NOT allow spammy answers copy-pasted from ChatGPT or similar text generation tools. Unhelpful messages copy-pasted from a text generator will be removed, and a warning will be issued to the person posting them.

At the same time, we recognise that these tools can have a positive impact when used the right way and do not wish to ban them completely.

Acceptable uses of AI text generation on the forum are:

  • Using machine learning to improve or translate text, as long as facts and knowledge in the text come from you. Just don’t use AI to solve technical problems for you or others, especially if you don’t have a good understanding of the problem yourself.
  • It’s okay to use ChatGPT or similar text generators for educational or research purposes (including problem solving and code generation) in dedicated topics, as long as you clearly label the content as being generated by a machine. [This paragraph was generated by ChatGPT]

Conversational AI is an area of active development, and we encourage friendly discussions on the topic. If you want to share your feedback on this policy, please write on the dedicated topic.

See the full updated FAQ at: FAQ - Processing Foundation

2 Likes

I have deleted the post.

Hello @sableRaph :slightly_smiling_face:

Perhaps a minor thought but…

I think ChatGPT should be part of the policy title. I realize there are other companies also developing AI text generators and Large Language Models, but currently the vast majority of media headlines refer to ChatGPT in their titles, and YouTube is flooded with thumbnails featuring “ChatGPT” in bold type.
When forum users come to Processing’s policy page, I think titling language similar to what the public is currently exposed to will make this information quicker and easier to scan.

I may be overthinking this…
:nerd_face:

… Yay or Nay?

The thoughtful original post and the equally thoughtful replies that follow it confirm that the answer must inevitably be a nuanced one. Perhaps we can formulate a protocol for using content generated by bots in posts by considering what each of us would like to know about posts that we read on this forum.

As for myself, I would ask that the information be valid, and I would want to know the source of that information. If information came largely from a book or other static textual source, I would want a reference. If it was from a bot, I would like that fact revealed, and would also wish to know the wording of the prompt that generated the reply.

If the material has the flavor of an opinion or an emotional tone, I would certainly want to know whether it was generated by a sentient human or by a machine. In fact, I find the prospect of what appears to be emotional content being generated by a machine to be rather foreboding. Perhaps that should not be allowed here.

This technology will be (is already being) very transformative of society, and the interaction of society with it will also transform the technology. This may happen really fast, and the policy of the Processing Community regarding this phenomenon will probably need to evolve as all of it unfolds.

1 Like

Here’s an answer written with help from ChatGPT:

I totally get why you think “ChatGPT” should be part of the policy title, given the widespread media coverage and mentions on YouTube. But, to be honest, my goal is to make sure this policy stays relevant and useful for as long as possible. By keeping the title as “AI Text-Generation Policy,” it covers all current and future AI text-generating tools, and we won’t have to worry about updating it every time something new comes along. Does that make sense?

@javagar: That is a type of use of ChatGPT that would be allowed under the policy as it stands. The content and tone of the message are chosen by me but the wording is produced using the GPT Language Model. I was mindful to double check that the output said what I intended to say (in fact it says it better than I possibly could in that amount of time) and made small edits.

I do share your concerns to a large extent, but I should also say that, maybe paradoxically, I feel so far that ChatGPT has helped me interact with people in a more human fashion. I wish I could always take the time to write in an accurate, personable, and caring way, but I sometimes fail to do so due to limited time, which leads to misunderstandings, tension, and inboxes piling up :sweat_smile: In many cases, the alternative would be no interaction at all.

Anyway, I’m curious to hear what you think.

Here’s the initial prompt I used:

I had to ask for a few tweaks because I wasn’t satisfied with the original output:

5 Likes

I have deleted the post.

1 Like

I largely concur with that, but much more needs to be said. In fact, conversations such as this one must inevitably be going on within diverse venues, all across society, as they should. By hosting this discussion, the Processing Foundation is contributing to a growing and essential body of thought about relationships between technology and humans that extend well beyond the immediate focus of this thread, so let’s keep this discussion going.

I’ll be back later with some more to say, but for now, I’ll disclose a couple of premises that underlie much of my opinion about this. Not everyone agrees with these premises, so I would value reading the opinions of others regarding what follows.

  1. Machines and technology have brought humanity both a great deal of benefit and a great deal of harm. There is medicine that enhances longevity, and there are wars that take it away. Industry produces cars that enable us to travel, but it also contributes to pollution, climate change, and poor working conditions for many. Artificial intelligence continues this mixed relationship between technology and humans. The Luddites actually recognized the complex relationship between humans and machines. They were not anti-machine; they just wanted to keep their jobs.

  2. Machines are not sentient or conscious like humans, cats, and mice. That’s my view and that of many others. However, some philosophers and others (seem to) have claimed that all technology that carries information has at least some rudimentary consciousness (see Wikipedia: David Chalmers), while others seem to be saying that consciousness, even among humans, is illusory (see Wikipedia: Daniel Dennett). That said, based on my own premises as stated above, I strongly prefer to hear or read the opinions of humans rather than those of machines, though machines can contribute tremendous amounts of factual information to the development of human opinion.

More later …

2 Likes

Let’s zoom in on this, since it gets to the crux of the matter:

In my opinion, this is the way to go, at least for now. As long as the final product expresses your thoughts and feelings, it’s good. When writing is attributed to a person, we need to be assured that we are, in fact, communicating with someone who knows, first hand, the experience of being human, rather than with a machine that has a wealth of information about humans but does not know what it is like to be one.

Edited above to clarify the final sentence.

2 Likes