Policy on the Use of Generative Artificial Intelligence (AI) in Scientific Publishing
Neutrosophic Sets and Systems
At Neutrosophic Sets and Systems, we are committed to upholding the highest standards of academic integrity, ethical publishing, and transparency in research. With the growing use of generative artificial intelligence (AI) tools in scholarly communication, our editorial board has developed a comprehensive policy that aligns with international ethical guidelines, including those of the Committee on Publication Ethics (COPE), the International Committee of Medical Journal Editors (ICMJE), and Elsevier’s Responsible AI Guidelines.
This policy applies to all stakeholders involved in the publishing process, including authors, reviewers, editors, and editorial staff.
- Acceptable Use of Generative AI by Authors
Authors may use generative AI tools such as ChatGPT, Grammarly, DeepL, or other large language models (LLMs) to assist with language editing, grammar correction, reference formatting, or data presentation, provided that such use does not replace the intellectual and scholarly contribution of the authors themselves.
All scientific arguments, conceptual developments, data interpretation, and critical reasoning must originate from human authors who take full responsibility for the content.
- Mandatory Disclosure
Any use of generative AI tools in the preparation of a manuscript must be clearly and fully disclosed in the Acknowledgments section. The disclosure must include:
- The name of the AI tool (e.g., ChatGPT by OpenAI)
- The purpose of its use (e.g., improving language clarity, paraphrasing, formatting references)
- The extent of AI involvement (e.g., complete draft editing vs. grammar suggestions only)
Failure to disclose AI use may result in manuscript rejection or, in post-publication cases, retraction, in line with COPE’s ethical procedures.
- AI Tools Cannot Be Listed as Authors
Generative AI tools cannot be listed as authors under any circumstance. According to ICMJE guidelines, authorship requires:
- Substantial contributions to the conception or design of the work
- Drafting or critically revising the intellectual content
- Final approval of the version to be published
- Agreement to be accountable for all aspects of the work
Since AI systems cannot meet these criteria, they may not be credited as co-authors or contributors.
- Restrictions on AI Use in Peer Review
To ensure confidentiality and preserve the objectivity of the peer review process, reviewers and editors must refrain from using generative AI tools to evaluate, summarize, translate, or revise submitted manuscripts. All assessments must be conducted independently by qualified human experts.
Inputting confidential manuscript content into public AI platforms (such as ChatGPT) is strictly prohibited and constitutes a breach of ethical review practice.
- Editorial Oversight and Detection
Our editorial team may use specialized tools to detect excessive or unethical reliance on AI-generated content. If any concerns arise, authors may be asked to clarify the extent of AI use. Undisclosed or inappropriate use of AI tools will be handled in accordance with the COPE flowcharts for ethical misconduct.
- Continuous Policy Updates
As AI technologies continue to evolve, this policy will be regularly reviewed and updated. Authors, reviewers, and editors are encouraged to consult the latest version published on our official website under the "Publication Ethics" section.
Contact
For questions or clarification regarding this policy, please contact the editorial office at: smarand@unm.edu