Generative artificial intelligence (AI) has rapidly transformed the way individuals engage with language, writing, and research. For students in English Composition II, understanding this technology is essential. As writing tools powered by AI become more sophisticated and widely accessible, learners must develop a foundational awareness of what generative AI is, how it works, and how it is currently being used (ethically and unethically) in academic and professional settings.
Generative AI refers to algorithms and machine learning models designed to create new content. In the realm of writing, this includes producing coherent text, summarizing sources, translating languages, improving grammar, and even mimicking specific styles or tones. These capabilities are possible because the AI has been trained on massive datasets, which allows it to predict likely sequences of words based on prompts from users.
One of the most common types of generative AI in writing is the large language model (LLM). These models, like ChatGPT, Claude, or Gemini, are designed to predict and produce natural-sounding language based on user input. When you type a prompt into such a tool, the model doesn’t "think" in the human sense. Instead, it generates a response by statistically predicting what word comes next, drawing from its training data. The result is often useful, but you should never assume that the AI is always accurate, original, or unbiased.
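The idea of "statistically predicting what word comes next" can be made concrete with a toy model. The sketch below is a simple bigram frequency counter in Python, vastly simpler than a real LLM (which weighs billions of parameters over far more context), and the tiny "training corpus" is invented purely for illustration:

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus" -- real models train on billions of words.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count which word follows which (a "bigram" model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
print(predict_next("sat"))  # "on"
```

Even this toy shows why such output is "often useful" but never guaranteed accurate: the model repeats whatever patterns dominate its training data, with no notion of truth.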
Generative AI in writing serves as a language-generation assistant, which means it can produce drafts, summarize sources, improve grammar, and adjust style or tone on request.
You may already be familiar with generative AI tools, even if you haven’t used them. You may even use software that runs one of these tools in the background, like Google Docs. Commonly used platforms in academic settings include ChatGPT, Claude, and Gemini.
Unlike a traditional writing handbook or feedback from a peer, generative AI is an on-demand assistant. It has distinct strengths and weaknesses: it may excel at brevity but struggle with nuance, and it can miss the point entirely if the prompt lacks context or specificity.
As generative AI tools become increasingly popular, understanding how to use them ethically is critical for maintaining integrity and accountability. While these tools offer exciting opportunities for support and collaboration, they also raise concerns about authorship, transparency, and fairness.
Ethical use begins with transparency and informed intention. Generative AI, when used responsibly, can be a valuable part of the prewriting and drafting phases. It can help you organize your thoughts, brainstorm ideas, clarify your sentence structure, and offer stylistic suggestions. However, the ethical boundary is crossed when students submit AI-generated content without attribution or allow the tool to do the work they are expected to do themselves. Academic writing is about the thought process, the argument construction, and the critical thinking demonstrated along the way.
Always find out what kind of AI usage your institution or instructor considers acceptable for an assignment. Some instructors or programs think of their assignments like a stoplight: red assignments permit zero use of AI; yellow assignments allow AI on only some parts of the assignment; green assignments might be created mostly by interacting with AI. The key is to know whether you are working on a green, yellow, or red assignment, because this can completely change your approach.
We think of the essay you are working on as firmly in the “yellow” category. There are some usages of AI that are appropriate, but there are others that would be considered dishonest.
Generative AI can be helpful in the early stages of writing, particularly in brainstorming sessions. Asking an AI model to suggest topic ideas, outline a structure, or provide an initial draft of a thesis statement is similar to asking a peer for help. The difference lies in how you use the material it provides. Treating the AI as a collaborator, but not a ghostwriter, is essential.
Appropriate uses of generative AI include brainstorming topic ideas, outlining a structure, drafting an initial thesis statement, and checking grammar and style.
When used for style and grammar, AI can serve as a proofreading tool, much like a spellchecker or grammar checker. However, you should still read through the suggestions and understand the changes. Accepting all edits without review not only compromises learning but can introduce subtle errors or misinterpretations.
As with any source, the use of AI in your writing requires acknowledgment when it contributes language, structure, or substantive content. This ensures academic transparency and helps clarify which parts of the work were generated and which were written by you. Using generative AI ethically means retaining authorship, disclosing assistance, and treating AI like a tool and not a substitute for your voice and thinking.
If you quote or paraphrase the AI directly, include a citation (“ChatGPT response to prompt ___, April 2025”). If the AI only helped with brainstorming or proofreading, you generally do not need to cite, but you should be prepared to explain your process if asked. Once again, you want to ask your instructor in future classes what types of usage they believe are honest and appropriate.
EXAMPLE
ChatGPT. (2025, April 5). Prompt: What are key ethical concerns with using AI in writing? [Large language model]. OpenAI. chat.openai.com

While generative AI can be a powerful tool for academic support, there are also significant risks when it is misused. Ethical concerns arise when students use AI to complete full assignments without an instructor’s knowledge, create misleading or false information, or hide the true authorship of submitted work. As more learners adopt these tools, academic institutions are refining their policies and detection methods to address AI misuse. Understanding these risks is essential for staying within academic integrity boundaries and developing a responsible writing practice.
Misuse often begins with good intentions: A student might feel overwhelmed by deadlines, confused by an assignment, or unsure about their writing skills. The ease and speed of AI responses can be tempting, especially when a single prompt can yield what appears to be a polished paragraph or essay. However, submitting AI-generated work violates the core principles of academic honesty, including originality, authorship, and critical engagement with ideas.
Perhaps more importantly, AI deprives students of the learning they are paying for! In English Composition, overuse of AI would mean students do not deepen their knowledge of a topic, develop an effective writing process, or practice shaping arguments around evidence. If we call AI use “cheating,” you might well ask who is being cheated—and the answer would be the students, not the teacher.
Let’s begin with plagiarism, the act of presenting someone else's words or ideas as your own. With AI, this becomes more complicated. While the content may not belong to a known author, it also doesn't originate from you. If you copy and paste from an AI tool without revision or acknowledgment, you're essentially using a co-author's work without permission or credit.
Remember that AI can also introduce fabricated information. Fabrication is the act of creating or submitting false, invented, or misleading information in academic work, often including fake citations or sources. These tools are trained to generate convincing text, not verify facts. They often cite nonexistent sources, misquote statistics, or fabricate research findings.
EXAMPLE
Asking an AI to list academic studies on a topic may return credible-sounding journal names and authors that don’t actually exist. Students who don’t check these citations risk submitting false evidence.

Loss of authorship is a subtler but equally important issue. When students rely too heavily on AI to construct ideas or entire sections, they surrender control over their own argument and voice. While the result may seem fluent, it often lacks the depth, perspective, and reasoning that instructors look for in student writing.
AI misuse is particularly dangerous when it gives students a false sense of mastery. Because AI often produces confident, polished text, it may appear more trustworthy or accurate than it really is. Learners who submit these outputs without review may complete assessments without truly understanding the material, which can harm performance in future coursework or professional contexts.
Many universities and colleges are rapidly adapting their academic integrity policies to include AI-specific language. Some institutions prohibit the use of AI tools altogether, while others allow limited use with proper disclosure. Instructors may ask for writing process documentation, including outlines, notes, and drafts, to confirm that the submitted work reflects the student's effort.
Typical institutional responses to AI misuse include prohibiting AI tools outright, allowing limited use with proper disclosure, requiring writing process documentation (outlines, notes, and drafts), and applying AI-detection methods.
Generative AI can be a useful partner in the writing process, but only when guided by clear ethical boundaries and intentional use. To navigate this evolving landscape, learners must adopt best practices that prioritize academic integrity, critical thinking, and transparency. This section outlines frameworks and habits for responsibly integrating AI tools into academic writing without compromising personal voice or scholarly standards.
Responsible use begins with mindset. AI should be seen not as a shortcut, but as a supplement to your own ideas. Writers who use AI well treat it like any other academic resource: They engage with it critically, assess its outputs, and take full ownership of the final work. This requires judgment, reflection, and a clear understanding of expectations. Keep in mind the goals of writing academic papers in the first place: to deepen our knowledge and improve our skills by researching a topic, structuring and sequencing an argument, and persuading an audience. In this section, you’ll learn to apply decision-making frameworks and walk away with specific strategies to use AI without crossing ethical lines.
When deciding whether and how to use AI in your writing, use a simple ethical decision-making model: ask yourself what kind of AI use is permitted, whether you are being transparent about the help you received, and whether the final work still reflects your own authorship and thinking.
So, how can you put what you have learned into action? By adopting practical strategies for using generative AI tools ethically while preserving your unique voice and meeting academic expectations.
Maintaining originality is perhaps the most important goal. AI can inspire or support, but it should never replace your perspective. Your experiences, insights, and synthesis of information make your writing unique and valuable. When students over-rely on generative tools, they risk producing generic, impersonal text that lacks depth or personality.