This January, Byeongjun Park, a researcher in artificial intelligence (AI), received a surprising e-mail. Two researchers from India told him that an AI-generated manuscript had used methods from one of his papers, without credit.
Park looked up the manuscript. It wasn’t formally published, but had been posted online (see go.nature.com/45pdgqb) as one of a number of papers generated by a tool called The AI Scientist — announced in 2024 by researchers at Sakana AI, a company in Tokyo1.
The AI Scientist is an example of fully automated research in computer science. The tool uses a large language model (LLM) to generate ideas, writes and runs the code by itself, and then writes up the results as a research paper — clearly marked as AI-generated. It’s the start of an effort to have AI systems make their own research discoveries, says the team behind it.
The AI-generated work wasn’t copying his paper directly, Park saw. It proposed a new architecture for diffusion models, the sort of model behind image-generating tools, whereas Park’s paper dealt with improving how those models are trained2. But to his eyes, the two shared similar methods. “I was surprised by how closely the core methodology resembled that of my paper,” says Park, who works at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, South Korea.
The researchers who e-mailed Park, Tarun Gupta and Danish Pruthi, are computer scientists at the Indian Institute of Science in Bengaluru. They say that the issue is bigger than just his paper.