
Using and Misusing Generative AI 

Author: Sophia

what's covered
In this lesson, you will evaluate how generative AI tools can be used responsibly in the drafting process while identifying risks of misuse. You’ll explore the capabilities of AI writing tools, how they fit ethically into your workflow, and the consequences of misuse in academic settings. Specifically, this lesson will cover:

Table of Contents

  1. Understanding Generative AI in Academic Writing
  2. Ethical Use of Generative AI in the Drafting Process
  3. Misuse of Generative AI
  4. Best Practices for Responsible Use

1. Understanding Generative AI in Academic Writing

Generative artificial intelligence (AI) has rapidly transformed the way individuals engage with language, writing, and research. For students in English Composition II, understanding this technology is essential. As writing tools powered by AI become more sophisticated and widely accessible, learners must develop a foundational awareness of what generative AI is, how it works, and how it is currently being used (ethically and unethically) in academic and professional settings.

Generative AI refers to algorithms and machine learning models designed to create new content. In the realm of writing, this includes producing coherent text, summarizing sources, translating languages, improving grammar, and even mimicking specific styles or tones. These capabilities are possible because the AI has been trained on massive datasets, which allows it to predict likely sequences of words based on prompts from users.

One of the most common types of generative AI in writing is the large language model (LLM). These models, like ChatGPT, Claude, or Gemini, are designed to predict and produce natural-sounding language based on user input. When you type a prompt into such a tool, the model doesn’t "think" in the human sense. Instead, it generates a response by statistically predicting what word comes next, drawing from its training data. The result is often useful, but you should never assume that the AI is always accurate, original, or unbiased.

Generative AI in writing serves as a language-generation assistant, which means it can:

  • Generate ideas or outlines from a basic prompt
  • Paraphrase or summarize existing text
  • Suggest edits for clarity, tone, or grammar
  • Simulate different writing styles or audiences
Despite its usefulness, it’s crucial to remember that AI-generated text doesn’t originate from lived experience, ethical reasoning, or fact-checking. It’s probabilistic, not intentional. This means it can fabricate sources, make factual errors, and sometimes reproduce biases that exist in its training data. AI is a support tool, not a substitute for critical thinking or writing.

You may already be familiar with generative AI tools, even if you haven’t used them directly. Software you rely on, such as Google Docs, may run one of these tools in the background. Some commonly used platforms in academic settings include:

  • ChatGPT (OpenAI): Used for brainstorming, outlining, and draft development.
  • Grammarly and GrammarlyGO: Enhance grammar and style and provide AI-generated suggestions.
  • QuillBot: Known for paraphrasing and summarization.
  • Google Gemini (formerly Bard): Integrates with search and documents for real-time content generation.
It’s easy to be impressed by the fluency and speed of these tools, but academic writers must also ask:

  • Where is this information coming from?
  • Is the generated content original or derivative?
  • How should I credit AI contributions?
These questions can help you use AI responsibly in your own academic writing. Being aware of the capabilities and limitations of these tools helps you maintain integrity and avoid unintentional missteps.

did you know
Some AI models have been shown to create entirely fictional academic papers—complete with fake authors, institutions, and DOI numbers.

Unlike a traditional writing handbook or feedback from a peer, generative AI is an on-demand assistant. It has distinct strengths and weaknesses: it may excel at brevity but struggle with nuance, and it can miss the point entirely if the prompt lacks context or specificity.

As generative AI tools become increasingly popular, understanding how to use them ethically is critical for maintaining integrity and accountability. While these tools offer exciting opportunities for support and collaboration, they also raise concerns about authorship, transparency, and fairness.

big idea
Generative AI can accelerate and assist in the writing process, but its use requires critical awareness of authorship, accuracy, and ethical responsibility.

terms to know
Generative AI
AI models that can produce new content, such as text, based on user prompts and patterns learned from large datasets.
Large Language Model
A type of AI trained to predict and generate language responses.


2. Ethical Use of Generative AI in the Drafting Process

Ethical use begins with transparency and informed intention. Generative AI, when used responsibly, can be a valuable part of the prewriting and drafting phases. It can help you organize your thoughts, brainstorm ideas, clarify your sentence structure, and offer stylistic suggestions. However, the ethical boundary is crossed when students submit AI-generated content without attribution or allow the tool to do the work they are expected to do themselves. Academic writing is about the thought process, the argument construction, and the critical thinking demonstrated along the way.

Always find out what kind of AI usage your institution or instructor permits for an assignment. Some instructors or programs think of their assignments like a stoplight. Red assignments mean there should be zero usage of AI. Yellow assignments involve AI, but only on some parts of the assignment. Green assignments might be mostly created by interacting with AI. The key is to know if you are working on a green, yellow, or red assignment, because this can completely change how you approach it.

We think of the essay you are working on as firmly in the “yellow” category. There are some usages of AI that are appropriate, but there are others that would be considered dishonest.

2a. Appropriate Support: Brainstorming, Grammar, and Style

Generative AI can be helpful in the early stages of writing, particularly in brainstorming sessions. Asking an AI model to suggest topic ideas, outline a structure, or provide an initial draft of a thesis statement is similar to asking a peer for help. The difference lies in how you use the material it provides. Treating the AI as a collaborator, but not a ghostwriter, is essential.

Here are examples of appropriate uses of generative AI:

  • Generating a list of potential topics based on your keywords
  • Getting feedback on sentence clarity or style
  • Receiving grammar corrections with explanations
  • Translating passages from research into plain English for comprehension
Appropriate use also means not over-relying on AI-generated content. When AI writes paragraphs or builds arguments, the voice and reasoning may not reflect your own understanding. This disconnect can make it difficult to explain or defend your work and can also lead to a mismatch in tone or evidence quality. Ethical use keeps you, the writer, in charge of the message.

When used for style and grammar, AI can serve as a proofreading tool, much like a spellchecker or grammar checker. However, you should still read through the suggestions and understand the changes. Accepting all edits without review not only compromises learning but can introduce subtle errors or misinterpretations.

watch
In this video, you'll learn more about using generative AI to revise your paper.

2b. Cite AI-Generated Content Responsibly

As with any source, the use of AI in your writing requires acknowledgment when it contributes language, structure, or substantive content. This ensures academic transparency and helps clarify which parts of the work were generated and which were written by you. Using generative AI ethically means retaining authorship, disclosing assistance, and treating AI like a tool and not a substitute for your voice and thinking.

If you quote or paraphrase the AI directly, include a citation (“ChatGPT response to prompt ___, April 2025”). If the AI only helped with brainstorming or proofreading, you generally do not need to cite, but you should be prepared to explain your process if asked. Once again, you want to ask your instructor in future classes what types of usage they believe are honest and appropriate.

EXAMPLE

ChatGPT. (2025, April 5). Prompt: What are key ethical concerns with using AI in writing? [Large language model]. OpenAI. chat.openai.com

term to know
Academic Transparency
The ethical obligation to disclose outside assistance, including AI, that contributed to your academic work.


3. Misuse of Generative AI

While generative AI can be a powerful tool for academic support, there are also significant risks when it is misused. Ethical concerns arise when students use AI to complete full assignments without an instructor’s knowledge, create misleading or false information, or hide the true authorship of submitted work. As more learners adopt these tools, academic institutions are refining their policies and detection methods to address AI misuse. Understanding these risks is essential for staying within academic integrity boundaries and developing a responsible writing practice.

Misuse often begins with good intentions: A student might feel overwhelmed by deadlines, confused by an assignment, or unsure about their writing skills. The ease and speed of AI responses can be tempting, especially when a single prompt can yield what appears to be a polished paragraph or essay. However, submitting AI-generated work violates the core principles of academic honesty, including originality, authorship, and critical engagement with ideas.

Perhaps more importantly, misusing AI deprives students of the learning they are paying for! In English Composition, overuse of AI would mean students do not deepen their knowledge of a topic, develop an effective writing process, or practice shaping arguments around evidence. If we call AI use “cheating,” you might well ask who is being cheated, and the answer would be the students, not the teacher.

3a. Plagiarism, Fabrication, and Loss of Authorship

Let’s begin with plagiarism, the act of presenting someone else's words or ideas as your own. With AI, this becomes more complicated. While the content may not belong to a known author, it also doesn't originate from you. If you copy and paste from an AI tool without revision or acknowledgment, you're essentially using a co-author's work without permission or credit.

Remember that AI can also introduce fabricated information. Fabrication is the act of creating or submitting false, invented, or misleading information in academic work, often including fake citations or sources. These tools are trained to generate convincing text, not verify facts. They often cite nonexistent sources, misquote statistics, or fabricate research findings.

EXAMPLE

Asking an AI to list academic studies on a topic may return credible-sounding journal names and authors that don’t actually exist. Students who don’t check these citations risk submitting false evidence.

Loss of authorship is a subtler but equally important issue. When students rely too heavily on AI to construct ideas or entire sections, they surrender control over their own argument and voice. While the result may seem fluent, it often lacks the depth, perspective, and reasoning that instructors look for in student writing.

AI misuse is particularly dangerous when it gives students a false sense of mastery. Because AI often produces confident, polished text, it may appear more trustworthy or accurate than it really is. Learners who submit these outputs without review may complete assessments without truly understanding the material, which can harm performance in future coursework or professional contexts.

term to know
Fabrication
The act of creating or submitting false, invented, or misleading information in academic work, often including fake citations or sources.

3b. Institutional Policies

Many universities and colleges are rapidly adapting their academic integrity policies to include AI-specific language. Some institutions prohibit the use of AI tools altogether, while others allow limited use with proper disclosure. Instructors may ask for writing process documentation, including outlines, notes, and drafts, to confirm that the submitted work reflects the student's effort.

Here are some typical institutional responses to AI misuse:

  • Warnings or grade penalties for undisclosed AI-generated work
  • Failing the assignment or course for repeated or egregious violations
  • Academic probation or expulsion in severe cases of misconduct
Instructors may also use AI detection tools such as Turnitin’s AI writing detector or GPTZero, which analyze patterns and phrasing to flag potential use of generative models. While these tools are not perfect, they increase the chances that misuse will be discovered. They can also produce situations where you have to explain an AI score to an instructor. This is just one more reason to be clear with your instructor about what types of AI usage are appropriate for a specific assignment.


4. Best Practices for Responsible Use

Generative AI can be a useful partner in the writing process, but only when guided by clear ethical boundaries and intentional use. To navigate this evolving landscape, learners must adopt best practices that prioritize academic integrity, critical thinking, and transparency. This section outlines frameworks and habits for responsibly integrating AI tools into academic writing without compromising personal voice or scholarly standards.

Responsible use begins with mindset. AI should be seen not as a shortcut, but as a supplement to your own ideas. Writers who use AI well treat it like any other academic resource: They engage with it critically, assess its outputs, and take full ownership of the final work. This requires judgment, reflection, and a clear understanding of expectations. Keep in mind the goals of writing academic papers in the first place: to deepen our knowledge and improve our skills by researching a topic, structuring and sequencing an argument, and persuading an audience. In this section, you’ll learn to apply decision-making frameworks and walk away with specific strategies to use AI without crossing ethical lines.

4a. Decision Making

When deciding whether and how to use AI in your writing, use a simple ethical decision-making model. Here are four key questions to consider:

  1. Is use of AI allowed? Review the course policy. Better yet, talk to the instructor.
  2. Is your use of AI transparent? Will you acknowledge the AI tool in your writing or process? If not, why? Ethical use requires being upfront about AI involvement.
  3. Does AI support your learning? If the tool is doing the thinking or writing for you, it may be impeding your learning. Responsible use enhances your skills but doesn’t replace them.
  4. Would you be comfortable explaining how you used AI? If not, reconsider your approach.
Ethical use in writing is about building intellectual habits that will serve you in college, careers, and life.

term to know
Ethical Use
The responsible and transparent application of tools and strategies in a way that supports learning and honors academic standards.

4b. Transparency and Originality

So, how can you put what you have learned into action? By adopting practical strategies. Here are some tips for using generative AI tools ethically while preserving your unique voice and meeting academic expectations:

  • Use AI early in the process. Generate outlines, explore arguments, or brainstorm ideas, but do your own drafting and revising.
  • Don’t use AI for entire sections of writing. Maintain control over thesis development, paragraph structure, and evidence analysis.
  • Cite AI content. If you borrow phrasing, structure, or factual claims, treat the AI like any other source.
  • Proofread all AI-assisted work. Double-check for hallucinated information, inconsistent tone, or vague claims.
  • Document your process. Keep copies of drafts and AI prompts. This adds transparency and can help if your use is ever questioned.
try it
Next time you use AI to brainstorm or outline, write a reflection note afterward: What did the AI help you with? What parts did you revise or replace? Would you feel confident showing your instructor this process?

Maintaining originality is perhaps the most important goal. AI can inspire or support, but it should never replace your perspective. Your experiences, insights, and synthesis of information make your writing unique and valuable. When students over-rely on generative tools, they risk producing generic, impersonal text that lacks depth or personality.

summary
In this lesson, you learned that understanding generative AI in academic writing is essential as these tools become more common in writing and research. These technologies can generate outlines, paraphrase texts, or adjust grammar and tone, but they operate based on probability, not intention. This makes critical thinking and verification vital, especially since AI can produce errors or biased content. Fluency does not equal reliability, and ethical use of generative AI in the drafting process requires transparency, self-awareness, and an understanding of assignment expectations. Appropriate support includes brainstorming, grammar, and style, and students who use AI should cite AI-generated content responsibly.

Writers should engage AI as a collaborator, not a substitute. Just like any external source, AI contributions that affect structure, language, or ideas must be credited. Missteps in this area often lead to misuse of generative AI, where plagiarism, fabrication, or over-reliance on machine-generated content can compromise learning and result in loss of authorship. As universities update policies and detection tools evolve, students must act with intention and clarity to follow institutional policies. That’s why adopting best practices for responsible use is so important: ethical use starts with decision making and is reinforced by transparency and originality throughout the writing process.

Terms to Know
Academic Transparency

The ethical obligation to disclose outside assistance, including AI, that contributed to your academic work.

Ethical Use

The responsible and transparent application of tools and strategies in a way that supports learning and honors academic standards.

Fabrication

The act of creating or submitting false, invented, or misleading information in academic work, often including fake citations or sources.

Generative AI

AI models that can produce new content, such as text, based on user prompts and patterns learned from large datasets.

Large Language Model

A type of AI trained to predict and generate language responses.