Artificial Intelligence (AI) Policy
This document defines the principles and rules for using artificial intelligence (AI) tools in the preparation, peer review, and publication of materials in the scholarly periodical "Bulletin of Taras Shevchenko National University of Kyiv. Psychology" (hereinafter, the Journal). While we recognise the potential benefits of AI in enhancing the efficiency and quality of scholarly publications, we emphasise the necessity of its ethical, responsible, and transparent application in compliance with academic integrity and scientific ethics standards.
Scope of Application
This policy applies to all participants in the Journal’s editorial process, including:
- Authors submitting materials to the Journal.
- Reviewers conducting expert evaluations of submitted materials.
- Editorial Board members and editorial staff.
Use of AI Tools by Authors
Authors may use AI tools to improve the quality of their manuscripts, subject to the following mandatory conditions:
- Responsibility for Content. Authors bear full responsibility for the accuracy, originality, and reliability of the entire content of their manuscript, including parts created or edited using AI tools. The use of AI does not exempt authors from liability for plagiarism, data fabrication, falsification, or other breaches of academic integrity.
- Transparency and Disclosure. Authors are required to clearly state the use of AI tools in the "Acknowledgements" section or a dedicated "Statement on the Use of AI Tools." It is necessary to specify the name of the AI tool, its version, and the purpose of its use (e.g., text editing, grammar checking, data analysis, or image generation).
- Authorship Restrictions. AI tools cannot be listed as authors of manuscripts. Authorship is reserved solely for individuals who have made a significant intellectual contribution to the research and the preparation of the publication.
- Verification of Generated Content. Authors must carefully verify any text, data, or images generated by AI tools for accuracy, contextual relevance, and the absence of bias or errors.
- Use for Data Analysis. When using AI tools for data analysis, authors must clearly describe the analysis methodology, including the AI tools used, in the "Materials and Methods" section.
- Use for Image Generation. If AI tools were used to generate images or other visual materials, authors must indicate this in the captions of the respective elements and in the "Statement on the Use of AI Tools."
Use of AI Tools by Reviewers
Reviewers may use AI tools to enhance the efficiency of the peer-review process, subject to the following conditions:
- Confidentiality. Reviewers are prohibited from uploading confidential manuscript materials (including full text, data, figures, etc.) into AI tools that do not guarantee data confidentiality and security.
- Objectivity. The use of AI tools must not affect the objectivity and impartiality of the review. The reviewer bears full responsibility for the content of their review.
- Disclosure. Reviewers are encouraged to inform the Editorial Board if AI tools were used during the preparation of the review.
Use of AI Tools by the Editorial Board and Staff
The Editorial Board and staff of the Journal may use AI tools to optimise editorial processes, such as:
- Plagiarism detection (using specialised tools).
- Checking formatting and compliance with Journal requirements.
- Assisting in the selection of potential reviewers.
- Analysing trends in scientific publications.
In doing so, principles of confidentiality, data security, and the avoidance of biases inherent in some AI tools must be strictly followed. Decisions regarding the acceptance or rejection of a manuscript are always made by the Editorial Board based on scientific merit and compliance with Journal standards, rather than solely on results obtained via AI tools.
Ethical Aspects and Responsibility
- Plagiarism. Using text generated by AI tools without proper citation and/or without substantial reworking by the author may be considered plagiarism.
- Bias. AI tools may generate content containing biases reflecting the data on which they were trained. Authors, reviewers, and the editorial office must critically evaluate AI outputs and avoid disseminating biased content.
- Transparency. Maximum transparency regarding the use of AI tools is a key element of responsible scholarly publication.
- Accountability. Ultimate responsibility for compliance with ethical norms and Journal standards lies with the authors, reviewers, and Editorial Board members.
Handling Violations
Any identified case of improper or unethical use of AI tools will be reviewed by the Editorial Board in accordance with the Journal's procedures for handling publication ethics violations. Measures may include a request for clarification from the author or reviewer, rejection of the manuscript, retraction of a published article, or other actions.
Policy Updates
This policy may be reviewed and updated by the Editorial Board of the Journal to reflect developments in AI technology and changing standards in the scientific community.