As more students adopt generative artificial intelligence tools to support their academic work, discussion is growing in higher education about perceptions of AI use cases, faculty expectations, and related concerns. The Johns Hopkins University School of Advanced International Studies currently has no official policy on AI usage, and ahead of this report the SAIS Office of the Dean told The SAIS Observer that there are no plans to adopt one.
The SAIS Observer conducted a survey over two weeks in February 2025 to examine students’ AI literacy, the degree of AI integration in coursework, and attitudes toward AI tools in an academic environment. The survey found that 66% of Johns Hopkins SAIS DC students regularly use artificial intelligence in their studies; 14% said they use it rarely, and 20% said they have never used AI tools for academic work while enrolled at SAIS. The results demonstrate the rise of AI usage in an academic environment and challenge the absence of a university-wide academic policy.
The survey comprised responses from 50 students currently enrolled at the SAIS DC campus across three programs—Master of Arts in International Relations (38), Master of Arts in Strategy, Cybersecurity, and Intelligence (6), and Master of International Public Policy (6). Given AI’s potential as a support tool for language learners and people with learning disabilities, the survey also sought to investigate any correlation. Nine students indicated that they speak English as a second language, and no students indicated that they have approved accommodations from Student Disability Services.
ChatGPT was the most widely used tool among students who use AI, regardless of frequency, with 97.5% of those students using it. The same pool also reported using Grammarly (35%), an AI-powered writing tool; Google’s Gemini (17.5%); Claude (12.5%); Microsoft Copilot (10%); and the Chinese-made DeepSeek (10%).
The survey asked students to select all the functions they perform with AI tools. The most common use cases were summarizing the main points of reading materials (80%) and running quick searches to understand difficult concepts (70%). Five of the 40 students who use AI said they use it to translate text into their primary language; since only nine respondents were non-native English speakers, this use case is likely more common among that group than the overall figure suggests. Other use cases are as follows:
- Learn more about a topic discussed during class (50%)
- Edit and proofread papers/essays (50%)
- Brainstorm and choose a research topic (45%)
- Find relevant sources for research (37.5%)
- Generate an outline for a research paper (32.5%)
- Paraphrase a resource for a paper (20%)
When asked about integrity, 96% of respondents said it is ethical to use AI to proofread a paper for grammatical or spelling errors. However, only 52% said using generative AI to restructure and reword a paper is ethical, while 38% were unsure and 10% thought it was unethical. Most students also thought it was ethical to use generative AI to brainstorm ideas for a paper (84%). Eighty-six percent of respondents said it is unethical to use generative AI to write a portion of a paper without citing it; only one respondent said it is ethical, and the rest were unsure.
The survey found students to have some uncertainty about the academic integrity of AI. For example, even though using generative AI to summarize assigned readings was the most common use case, only 66% found it ethical, 20% were unsure, and 14% found it unethical. On some prompts, respondents’ answers were closely divided:
- Using generative AI to confirm problem set answers before submitting an assignment, where opinion was most evenly divided (34% ethical, 34% unsure, 32% unethical)
- Using generative AI to paraphrase a resource to cite in a paper (44% unethical, 38% unsure, 18% ethical)
- Using generative AI to make presentation slides for class (48% unethical, 28% unsure, 24% ethical)
Most students strongly agreed that generative AI content cannot be trusted to be factually correct (62%), that they are aware of the limitations of artificial intelligence in an academic environment (88%), and that it is important to check other sources to verify information provided by generative AI (94%). When asked whether AI usage is more helpful than harmful in an academic environment, students were divided: 16% strongly agreed, 34% moderately agreed, 16% were neutral, 20% moderately disagreed, and 14% strongly disagreed.
Currently, the academic integrity policy at SAIS does not include any clause on the usage of artificial intelligence. This lack of an official policy has confounded students, with the survey finding that 88% of respondents think it is very or moderately important that SAIS provide instructions on the usage of AI in an academic environment. Fifty-six percent of respondents said all or most of their SAIS DC instructors have discussed AI usage in class, and 6% said none of the instructors they have had in the past year have discussed the topic. Although there are no official guidelines for students, The Johns Hopkins University has Generative AI Tool Guidance for faculty members. Compiled by JHU community members, the guidance provides basic principles to consider when using AI-generated content and warns of bias in AI-based academic integrity detection tools. The guidelines largely derive from U.S. federal rules and guidance, such as FERPA, HIPAA, and the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence executive order signed by former President Biden.
Ninety-four percent of respondents said it is very or moderately important that SAIS continually improve and update its instructions on the usage of AI. Half of the students who responded to the poll said SAIS instructors and professors should oversee the usage of AI, and 40% said SAIS Academic Affairs should have jurisdiction. One respondent added that “it is important that the library or another SAIS office provide several opportunities for students to responsibly use AI as a research tool.”
Ten respondents specifically noted the usefulness of AI and the need for an AI policy at SAIS. Two respondents said they believe AI will be prevalent in professional settings and that “learning its strengths, weaknesses, and limitations is critical for success.” Additionally, respondents noted that “AI allows students to digest information faster and more effectively, develop ideas, and tighten arguments.” Some also added that international students and non-native English speakers benefit from AI tools and that “providing specific guidelines is paramount for a conducive and improved academic experience.”
Some respondents had strong opinions against the integration of AI into an academic environment, with one writing that “it is making students lazier as some totally rely on it to summarize readings, brainstorm, etc.” Another student argued that “raw content is not the only thing that can be unoriginal—structure, syntax, and language that isn’t a product of your own thought and capability should be cited as such.” Additionally, a respondent said that “widespread AI usage hurts the academic reputation of SAIS and students who choose not to use AI tools.” But even the AI skeptics agree with the enthusiasts that SAIS must provide a standardized policy on AI usage, “with constraints to protect academic integrity and individual thought and creativity.”
Edited by: Ali Gostanian

