AI Policy
AI Policy for Iqtishodi
1. Scope and Purpose
This policy outlines the principles governing the use of Artificial Intelligence (AI) tools and technologies in the research, writing, peer review, and editorial processes for manuscripts submitted to Iqtishodi. Its objective is to safeguard academic integrity, ethical transparency, and responsible AI use throughout the scholarly communication process.
Iqtishodi acknowledges that AI tools can enhance academic productivity, language accuracy, and data processing. However, their use must remain transparent, accountable, and subordinate to human scholarly judgment. This policy will be regularly reviewed to reflect technological advancements and ethical considerations in AI-assisted academic work.
2. Guidelines for Authors
2.1. Use of AI in Research
- Authors may use AI tools for data analysis, pattern recognition, modeling, qualitative coding, or simulation, provided that the specific tools and their functions are clearly described in the methodology section of the manuscript.
- Authors must discuss the limitations, assumptions, and potential biases associated with AI tools to ensure transparency and reproducibility.
- The responsibility for verifying and interpreting AI-generated results lies solely with the authors.
2.2. Use of AI in Writing and Editing
- Authors must explicitly disclose any use of AI tools (e.g., ChatGPT, Grammarly, DeepL, Writefull, or others) in the process of drafting, paraphrasing, summarizing, or language editing of the manuscript.
- AI tools must not replace the author’s intellectual and analytical contributions.
- Authors remain fully accountable for the accuracy, originality, and interpretation of all content, including that assisted or generated by AI.
- Any AI-generated text must be reviewed, verified, and contextualized by the authors to maintain scholarly rigor and avoid misinformation or bias.
2.3. Authorship and AI
- AI systems cannot be listed as authors under any circumstances.
- Authorship is limited to human contributors who have made substantial intellectual or practical contributions to the conception, design, execution, or interpretation of the research.
- Authors assume full responsibility for any portion of the manuscript developed with AI assistance.
2.4. Ethical Considerations
- The use of AI must adhere to ethical research principles, including respect for data privacy, informed consent, institutional review standards, and cultural sensitivity.
- AI must not be used to fabricate, falsify, or manipulate data, images, or citations.
- Violation of these principles will be treated as research misconduct, subject to retraction or disciplinary measures.
2.5. Citing and Referencing AI Tools
- When AI tools are used in the creation or refinement of a manuscript, their role must be transparently acknowledged in both the manuscript and reference list.
- Recommended citation format (APA Style):
OpenAI. (Year). ChatGPT (Version) [Large language model]. Retrieved [Month Day, Year], from [URL]
- Example:
OpenAI. (2025). ChatGPT (Version 5) [Large language model]. Retrieved November 29, 2025, from https://chat.openai.com/
3. Guidelines for Reviewers
3.1. Use of AI in Peer Review
- Reviewers may employ AI tools for supporting tasks such as grammar correction, summarizing sections, or checking for language clarity.
- However, AI tools must not be used for forming evaluative judgments, determining acceptance/rejection, or generating review comments without human oversight.
- Any use of AI assistance in the review process must be disclosed to the editorial team to ensure transparency.
3.2. Human Oversight and Responsibility
- Peer review must remain a human-led, expert-driven process.
- Reviewers must exercise critical judgment, discretion, and academic expertise, ensuring fairness, accuracy, and confidentiality.
- Reviewers are responsible for ensuring that their comments and conclusions are their own, neither generated by AI tools nor unduly influenced by them.
4. Guidelines for Editors
4.1. AI-Assisted Editorial Work
- Editors may use AI technologies for non-decisional editorial tasks, such as:
  - Grammar or style checking
  - Plagiarism detection
  - Metadata processing
  - Workflow optimization
- Editorial decisions regarding acceptance, revision, or rejection must remain the exclusive responsibility of human editors.
4.2. Editorial Transparency
- The editorial team will disclose any significant AI involvement in editorial procedures that could affect manuscript outcomes.
- Iqtishodi commits to ensuring that editorial integrity and human judgment remain the cornerstone of its decision-making process.
5. Transparency and Disclosure
- All contributors — authors, reviewers, and editors — must disclose the use of AI tools in their respective roles.
- Disclosures must specify:
  - The type of AI tool used
  - The purpose of use
  - The extent of AI’s influence on the manuscript or review outcome
- Failure to disclose significant AI usage will be considered a breach of publication ethics, which may result in:
  - Manuscript rejection
  - Retraction of published work
  - Temporary or permanent removal from reviewer/editorial positions
6. Integrity, Accountability, and Enforcement
Iqtishodi reserves the right to:
- Request detailed clarification or evidence regarding AI use in any manuscript.
- Conduct investigations into suspected cases of AI misuse, misrepresentation, or unethical use.
- Take corrective actions, including issuing corrections, retractions, or sanctions, in line with COPE (Committee on Publication Ethics) standards.
- Collaborate with institutions or organizations in addressing serious cases of misconduct involving AI.
7. Continuous Review and Policy Updates
This AI Policy will undergo periodic review to ensure alignment with emerging technologies, academic norms, and ethical standards. Iqtishodi welcomes feedback from authors, reviewers, and readers to improve and adapt the policy over time. The overarching principle guiding this policy is that AI should enhance, not replace, human scholarly contribution and integrity.
