Generative AI technology challenges existing conventions in publishing, science, and the arts, particularly around transparency, attribution, consent, and permission. Nature’s current stance is against visual content created using generative AI until regulations are developed.
The science journal Nature has decided not to publish images or videos created or enhanced by generative AI tools such as Midjourney. The decision, made after months of internal discussion, is intended to preserve integrity and transparency.
Generative AI tools do not disclose the sources of their training data and images, making verification difficult. Nature also raises concerns about consent and permission, which it says “common applications” of generative AI fail to address.
Generative AI systems are being trained on images for which no efforts have been made to identify the source. Copyright-protected works are routinely being used to train generative AI without appropriate permissions. In some cases, privacy is also being violated — for example, when generative AI systems create what look like photographs or videos of people without their consent. In addition to privacy concerns, the ease with which these ‘deepfakes’ can be created is accelerating the spread of false information.
For visual content created using generative AI, Nature takes a simple “no” stance until the regulatory and legal landscape catches up with advances in AI technology.
Nature is, however, comfortable with the inclusion of AI-assisted written text under certain conditions. The use of tools such as large language models (LLMs) must be documented in the methods or acknowledgments section of a paper, and authors should provide sources for all data, including any generated with the help of AI. Under no circumstances may an AI tool be listed as an author.