AI tools can be used to streamline academic processes, including some of those asked of you in these activities (e.g., writing in MarkUp, analyzing a text, naming and organizing files).
For this activity, you will think about different stages of the research lifecycle (literature review, data analysis, dissemination, collaboration, etc.). Identify at least one way an AI tool (e.g., ChatGPT, Google Gemini, Microsoft Copilot) could be positively applied to open research, and identify at least one potential challenge or ethical concern regarding the use of AI in open research.
Complete this Activity
To complete the activity, post a comment with your answer.
GenAI Tools
Generative AI (GenAI) is rapidly transforming how we conduct research, scholarship, and teaching and learning. Permitted use of GenAI in these areas is constantly shifting and depends on the organizations or institutions with which you are engaging. Before sharing any outputs of your GenAI use, it is important to consult policies and guidelines on accepted use. To learn more, review the resources on Generative AI and Open Scholarship.
Image Credit: hand-3044387_1280 used under Pixabay Content License
AI tools could be used positively to help summarize major themes across multiple papers for a literature review. However, AI could miss new or recent papers in its assessment, and it may hallucinate information if its output is not thoroughly checked and verified.
I think that in the early stages of a project, AI can significantly enhance the literature review and background research process by rapidly scanning and processing large volumes of academic publications. Some specialized AI systems can identify relevant papers, summarize key findings, and highlight gaps in the existing research. This helps researchers by making much of this work faster and more “efficient”.
However, use of AI risks overlooking subtle insights in papers, missing nuances like tone, skepticism, or methodological caveats that human researchers would catch. Additionally, AI often relies on existing databases that may be skewed toward English-language or high-impact journals, which can under-represent niche or non-Western research and create blind spots in the literature review. Some AI tools are not updated in real time and may miss recent studies, potentially leaving gaps in the most current research landscape. Finally, AI can hallucinate, which, without thorough review and analysis, could introduce false information into a lit review.
I can see how AI could be used to help with research dissemination. AI could be used to help adjust language to be more suitable for individuals without the same scientific background. We could ask AI tools like ChatGPT or Microsoft Copilot to help summarize the research using language that is more accessible to a broader audience. However, as mentioned by the user above, AI models can hallucinate or provide inaccurate information, and their output should always be reviewed for accuracy before sharing!
I could see AI tools quickly being integrated into all stages of the research lifecycle. At this point, I think the greatest strength of tools such as ChatGPT in research is in the literature review. GenAI seems able to quickly generate an outline of a topic, although it can fall short when it comes to providing details. As a starting point for a lit review, I think it could be very helpful. As others mentioned, the bias involved in training GenAI tools could present a problem. Also, accurately acknowledging the source of information obtained through AI tools could be a challenge, and even an ethical concern if the intellectual property used by the AI is not intended for this type of use.
To explore how useful AI can be for data analysis—and what concerns it might raise—I planned to cross-check open data with public social media information. I focused on the “Data Analysis” stage of the research lifecycle, applying statistical and qualitative methods using AI tools.
I found a reliable data source from the British Columbia Government Open Data portal, where I was able to download an XLS file suitable for my cross-checking experiment. I uploaded the file into Copilot, and with a few prompts, I obtained the statistical results I expected—in a matter of seconds, which was impressive. I then manually validated the results and found them to be accurate.
For the second part of the experiment, I used different prompts in Copilot to cross-check and validate my statistical results against public social media platforms such as Reddit, Facebook, and BeReal, using hashtag searches related to my investigation. Again, the results were excellent and delivered very quickly. However, a concern emerged: when there was no proper context, the conclusions presented by the AI led to unfair comparisons, in my opinion. I had to invest additional time providing the tool with proper context to guide its reasoning.
This raised an important concern for me: the high confidence with which the AI presents its outputs can give a false sense of accuracy, potentially leading to misinterpretation or misuse of information. Still, the fast data analysis left a really good impression on me.
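The manual validation step described above can be sketched in a few lines of Python. This is only an illustration, not the commenter's actual workflow: the file path and column name are placeholders (the real BC Open Data file is not specified), and it assumes the pandas library is available for reading the spreadsheet and recomputing summary statistics locally so they can be compared against what the AI tool reported.

```python
# Hypothetical sketch: recompute summary statistics from an open-data
# spreadsheet to double-check figures produced by an AI assistant.
# File path and column name below are placeholders, not the actual
# BC Government Open Data file used in the experiment.
import pandas as pd


def summarize(df: pd.DataFrame, column: str) -> dict:
    """Return basic statistics for one numeric column."""
    # Coerce non-numeric entries to NaN, then drop them
    s = pd.to_numeric(df[column], errors="coerce").dropna()
    return {
        "count": int(s.count()),
        "mean": round(float(s.mean()), 2),
        "median": float(s.median()),
        "std": round(float(s.std()), 2),
    }


if __name__ == "__main__":
    # pd.read_excel needs the openpyxl engine for .xlsx files
    df = pd.read_excel("bc_open_data.xlsx")   # placeholder path
    print(summarize(df, "value"))             # placeholder column
```

Comparing these locally computed values against the AI's answers is a cheap safeguard against the false-confidence problem mentioned above: if the numbers diverge, the AI's output needs closer scrutiny before it informs any conclusion.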