Guidelines for Academics Aim to Lessen Ethical Pitfalls in Generative-AI Use

May 23, 2024


(Nature) – The participants had shared details about their experiences, such as how their work and relationships were affected, under the assurance that the information would be shared with others only in anonymized form. But before the team started feeding this information into a genAI program, Moncur suddenly feared that, if the tool pieced together publicly available information with the anonymized data it was being fed, the participants might inadvertently become reidentifiable.

The team was also concerned about LLMs’ tendency to ‘hallucinate’ (generating nonsensical or incorrect information), which could potentially defame reidentified participants. And LLMs can change the meaning of the information fed into them, because they are influenced by social and other biases inherent in their design. For example, Moncur says that the program her team used would distort what the participants had said, making their stories more positive than the participants had intended. “ChatGPT has a bit of a ‘Pollyanna thing’ going on, in that it doesn’t like unhappy endings,” says Moncur. “So, it needs a bit of a nudge to produce a credible story.”