A proposal is projected onto the conference room screen: “Researchers should disclose AI idea generation, even when none of those ideas are ultimately included in the final manuscript.” Dozens of attendees walk over to a sign marked “strongly disagree”; a few stand firm under the sign for “strongly agree.” The room erupts in uncomfortable laughter.
This was the scene here at the World Conference on Research Integrity (WCRI) on 4 May, in a session where researchers, publishers, ethicists, and others explored a global standard for when and how researchers should disclose that they have used artificial intelligence (AI). The discussions will feed into guidelines that project co-leader Bert Seghers—a mathematician and head of the Flemish commission for research integrity in Belgium—hopes will be published by the end of this year. The debates showed, however, that writing up those guidelines will not be easy.
There is broad consensus that authors must take responsibility for their published work, however it was produced; that they should cite or acknowledge work that is not their own; and that they should be open about their methodologies. Many journals (including Science) already have their own rules for AI disclosure, often specifying that AIs cannot be listed as authors. But some journal guidelines can be vague, Seghers says, and they are not harmonized. “There is really a need for common understanding.”