Policy on the Use of Generative AI and AI-assisted Technologies
1. General Provisions
This Policy establishes the ethical framework and legal requirements for the utilization of Generative Artificial Intelligence (GAI) in the preparation, submission, and peer review of scholarly works. The Policy is grounded in the provisions of the Law of Ukraine "On Academic Integrity" (No. 4742‑IX), specifically concerning transparency requirements (Art. 8) and the classification of academic misconduct (Art. 18, 29).
2. Authorship Status and Personal Responsibility
- No AI tools or models shall be credited as authors or co-authors of an academic work, as they are incapable of bearing legal responsibility for the content.
- The author bears full personal responsibility for every facet of the manuscript, including data accuracy, the absence of plagiarism, and the ethical validity of conclusions, regardless of whether GAI tools were employed during its preparation.
- Academic stakeholders are entitled to the fair and ethical assessment of their research outputs, which precludes the attribution of AI-generated results as one's own intellectual achievements.
3. Permitted and Restricted Use
- AI may be used as a supportive tool for linguistic refinement (improving style, grammar, and syntax) and for the structural outlining of drafts or abstracts.
- Generative AI models shall not be cited as primary sources in the reference list, as they do not constitute reliable or credible scientific sources.
- The use of AI to create, modify, or manipulate scientific imagery (graphs, diagrams, models) is strictly prohibited if such actions distort empirical research data or produce fictitious visual results.
4. Classifications of Violations (Pursuant to Law No. 4742‑IX)
In accordance with Articles 18, 27, 28, and 29 of the Law, the following actions are considered violations of academic integrity in the context of AI:
- Presenting texts, models, or data generated by AI as one's own research results without appropriate disclosure.
- Generating fabricated data or intentionally modifying existing facts using AI-assisted tools.
- Presenting GAI-generated outputs as the author's original intellectual contribution.
5. Disclosure and Radical Transparency
Pursuant to Article 8 of the Law, the author is obligated to disclose the use of AI in a manner that clearly distinguishes their original contribution from AI-generated content:
- At the Submission Stage: Authors must notify the Editorial Office in writing of the use of AI technologies upon manuscript submission.
- Mandatory Parameters: The manuscript must specify the name and version of the software, the generation methodology, and the specific purpose of its use.
- Placement of Disclosure: This information must be included within the "Methods" section or as a distinct formal note following the main text.
- Sample Declaration: "In the preparation of this work, the author utilized [Model Name and Version] for [Purpose: e.g., stylistic editing]. The generation process followed the methodology of [Description]. The author assumes full responsibility for the final content and accuracy of the publication."
6. Confidentiality and Ethics in Peer Review and Editorial Operations
- Authors, reviewers, and editors are prohibited from uploading unpublished research findings or personal data of respondents into GAI systems, as doing so violates the confidentiality of the editorial process.
- Reviewers are strictly forbidden from using AI to analyze manuscripts or generate peer-review reports. This restriction is justified by the risks of confidentiality breaches, bias, superficial feedback, and hallucinated information (e.g., fictitious references). Peer review is recognized exclusively as the intellectual endeavor of the expert.
- Editors shall not use GAI to generate manuscript assessments. Any routine use of automated tools by the Editorial Office (e.g., for technical integrity checks) must be disclosed and subject to human oversight. Linguistic editing or technical rewriting of text may be permissible only upon full disclosure.
7. Enforcement Measures, Human Oversight, and Verification Rights
- The use of automated editorial tools (e.g., for detecting text or image manipulation) must remain under human control. An editor shall personally verify the results of any automated AI detection before reaching a final decision.
- Upon establishing a case of unethical AI use, the publisher reserves the right to refuse publication or to retract an already published work, accompanied by a formal public explanation.
- The Editorial Office reserves the right to employ specialized software to detect AI-generated content and may request the generation history (prompts and drafts) from authors to verify the originality of the work.