I know we talk about “big” topics in AI and AGI sometimes, but here’s a really simple one anyone can do regarding LLMs. Recognize we’re in the midst of disruptive transformation and accommodate different perspectives on that in a transparent way:
1) Before answering someone’s question on a topic unrelated to LLMs with a prompted LLM response, clarify whether they would appreciate that.*
2) Disclose the source of any text an LLM provided.
Simple. Easy. Takes no time. Many of you already do this. Doesn’t require you to take a position one way or another on AI/AGI economic disruption, alignment problems, lethalities, etc.
*A subset of the general ethical practice of first checking whether they want an “answer” at all. Sometimes people are venting; other times they only want to hear from bona fide experts, which current LLMs are not.