Skip to Main Content

AI Literacy for Students

A student guide to AI literacy that will help you gain foundational knowledge of AI concepts, apply generative AI knowledge appropriately in an educational setting, and enable you to think critically about generative AI systems and tools

How bad is it?

AI Risks and Ethical Impacts

Introduction: A Structured Lens

To use AI responsibly, you need to see understand its potential and its risks. The examples below will help you see some of the biggest risks and ethical dilemmas presented by AI. We will look at each example through a simple framework of its

  • severity: how bad is the impact
  •  exposure: who or how much of the population is at risk
  • plausibility: how likely that we will see any impacts
  • prevalence: how often we are seeing impacts.

Please note, that this is not a comprehensive list. If you have ideas of things that are missed, please reach out to the author.

After working through these examples, hopefully you will see the interconnectivity of the risks and have a bit of an understanding of what you can do and what needs to be done to mitigate them.

Information Integrity & Civic Harms
Abstract digital art showing a network of nodes being fractured by chaotic red lines, representing misinformation.

AI makes it easy to create and spread fake content. This can be anything from deepfakes to targeted propaganda. The result is a loss of trust, confusion, and rising social tension.

Example: A doctored video of a local candidate circulates before an election, changing public opinion before it can be debunked.

Factor Assessment
Severity High. Harms can impact public safety, elections, and trust in institutions.
Exposure Broad. Content can reach huge audiences very quickly.
Plausibility High. Tools for creating synthetic media are widely available.
Prevalence Increasing. Misinformation campaigns appear in every election cycle.
Mitigation

Systemic & Institutional: Platforms can help by adding content labels, implementing strong guardrails, and using rapid fact-checking.

Individual & Classroom Actions:

  • Read laterally. When you see a claim, open new tabs to check it against other reliable sources.
  • Check the source. Use reverse image search to find a photo's origin.
  • Pause before you post. Always verify information before you share it, especially if it makes you feel a strong emotion.

Quick Activity

Pick a trending video or post. Trace its source, check fact-checks, and look for content credentials. Discuss how one small edit could flip its meaning.

Malicious AI Misuse
A luminous blue security shield icon being breached by aggressive red data streams, representing a cybersecurity attack.

AI lowers the bar for creating malicious content. This malicious content includes phishing emails, social engineering scripts, and malware. The AI models themselves are at risk through attacks like prompt injections.

Example: A tailored phishing email generated in seconds steals credentials and compromises a school's network.

Factor Assessment
Severity High. A compromise can expose sensitive data or disrupt operations.
Exposure Broad. Any connected user or system can be a target.
Plausibility High. Attack methods and tools are publicly known.
Prevalence Rising. Security teams are tracking many new AI-specific vulnerabilities.
Mitigation

Systemic & Institutional: Organizations should use threat modeling, strict input filtering, and secure design principles to protect their systems.

Individual & Classroom Actions:

  • Be skeptical. Treat links, code, and attachments with caution.
  • Protect your data. Never give personal credentials to a chatbot.
  • Report threats. Flag malicious outputs to help developers improve safety filters.

Quick Activity

I dunno. Pay careful attention to your emails.

Dive Deeper

1) NIST, Adversarial ML Taxonomy (AI 100-2)
2) Contact the Author to suggest some more high quality resources

Data Security, Privacy & Rights
Tree of data protected from data harvesting monsters by a glowing shield

AI applications collect and store personal data and may do so in ways users do not expect. Weak controls can lead to data breaches, re-identification of anonymous data, and illegal use. AI applications can be designed to figure things out about you; even if you are not giving them personal data, your interactions could be analyzed to create a decent profile of you.

Example: Private chats in the chatbot Grok were indexed by Google and became searchable by anyone.  ChatGPT had a similar problem.

Factor Assessment
Severity High. Breaches of sensitive data cause significant harm and legal risk.
Exposure Large. Many users and devices handle personal data.
Plausibility High. Standard workflows often involve sharing data with cloud tools.
Prevalence Common. Privacy incidents happen all the time.
Mitigation

Systemic & Institutional: Institutions must practice data minimization, conduct privacy assessments, and comply with laws like FERPA and GDPR.

Individual & Classroom Actions:

  • Do not share secrets. Avoid pasting sensitive personal or financial information into public AI tools.
  • Check the settings. Review an AI's privacy policy and opt out of data training when possible.
  • Use fake data. Use anonymized or hypothetical information when experimenting with new tools.

Quick Activity

Actually read the privacy policy of an AI company to find out what they are doing with your data.

Dive Deeper

1) Let me know if you have suggestions for good quality resources to go here.

Bias, Fairness & Inclusion
A robotic hand selecting only one color of stylized human icons from a diverse crowd, illustrating algorithmic bias.

AI models can adopt and amplify human biases found in their training data. This affects everything from grading tools and hiring filters to how people are portrayed.

Example: An image generator returns stereotyped pictures of "scientists" and under-represents women and people of color.

Factor Assessment
Severity Medium to high. Systematic unfairness hurts opportunities and dignity.
Exposure Broad. Popular models are used everywhere for many different tasks.
Plausibility High. Bias in AI models is a well-documented problem.
Prevalence Frequent. This is especially true in general-purpose models.
Mitigation

Systemic & Institutional: Companies should use representative data, test for bias, and design inclusive and accessible products with human oversight.

Individual & Classroom Actions:

  • Test for bias. Actively use prompts with diverse identities and contexts to see what the AI produces.
  • Report it. Use feedback features to report biased outputs to developers.
  • Prompt better. Learn to write prompts that ask for inclusive and counter-stereotypical results.

Quick Activity

Prompt an image model with role labels like “CEO” or “nurse.” Tally the outputs and discuss the stereotypes you see. Then try to fix them with better prompts.

Dive Deeper

1) NIST, SP 1270: Managing Bias in AI
2) Let me know if you have suggestions for good quality resources to go here.

Accountability, Transparency & Redress
Policy document with a magnifying glass symbolizing AI transparency, auditability, and appeals

When an AI makes a decision, how do we know how it made that decision (transparency), who is responsible for the consequences of the decision (accountability), and how will those impacted by the decision be compensated for harms (redress).

Examples: An instructor uses an AI to flag plagiarism and punishes a student based solely on the AI's decision. Who is accountable for this decision? A student uses AI to write a report and that report refers to papers that do not exist. Who is accountable for this mistake? An HR manager uses AI to determine a shortlist for interviews. How can he/she know how the AI came up with shortlist?

Factor Assessment
Severity Medium to high. Opaque decisions can wrongly penalize people.
Exposure Broad. This affects anyone evaluated or served by an AI system.
Plausibility High. Good documentation is often missing when new tech is adopted quickly.
Prevalence Mixed. Transparency is improving but remains uneven. Accountability and redress are evolving.
Mitigation

Organizations: publish model cards, log decisions, and require human-in-the-loop for impactful calls.

Individuals:

  • Be the human-in-the-loop when possible.
  • Advocate for clear policies and appeal processes.
  • Keep records

Quick Activity

Transparency: Read the model card of an AI tool you use. Explain its purpose, data sources, and limitations.

Accountability:

Dive Deeper

1) Recommend some links to me

AI Risks and Ethical Impacts Part 2

IP, Copyright & Data Sovereignty
A glowing copyright symbol being deconstructed into a swirling vortex of digital data, with a robotic arm reaching in.

Generative models learn from existing works, including copyrighted material. Their outputs can sometimes mimic an artist's style or content too closely, creating legal and ethical conflicts.

Examples: (1) Meta used pirated books to train some of its AI models. (2) A music class uses an AI tool that creates songs that sound almost identical to those of a living artist.

Factor Assessment
Severity Medium to high. This can lead to serious legal and ethical problems for creators and institutions.
Exposure Broad. This affects anyone doing creative work or publishing.
Plausibility High. Models are trained on the public web and are designed to imitate styles.
Prevalence Frequent. This issue is at the center of many active lawsuits and policy debates.
Mitigation

Systemic & Institutional: Companies should respect license terms, use ethically sourced datasets, and adopt content watermarking to show provenance.

Individual & Classroom Actions:

  • Disclose your use. Follow your institution's policy for citing or disclosing the use of AI in your work.
  • Use it as a tool. Let AI help you brainstorm, but ensure the final creative expression is your own.
  • Choose wisely. Favor tools that use openly licensed data.

Quick Activity

Generate an image “in the style of” an artist. Compare it to their real work. Discuss where inspiration ends and infringement might begin.

Economic & Labor Impacts
A split image showing human workers on one side and robots on the other, connected by a glowing digital network.

Generative AI is changing the job market. It can automate some tasks, assist with others, and shift the demand for certain skills, especially for entry-level roles.

Example: A marketing firm reduces its need for copywriters after adopting an AI-assisted writing tool, affecting entry level jobs.

Factor Assessment
Severity Medium to high. Job displacement and new inequities can occur.
Exposure Broad. This impacts knowledge work and creative fields.
Plausibility High. Companies in every sector are adopting AI quickly.
Prevalence Growing. Hard evidence of job impacts is limited, but anecdotal evidence is strong.
Mitigation

Systemic & Institutional: Organizations should focus on redesigning jobs, upskilling their workforce, and being transparent about automation plans.

Individual & Classroom Actions:

  • Focus on durable skills. Develop critical thinking, creativity, and collaboration abilities that AI cannot replicate.
  • Become the pilot. Learn how to use AI effectively and ethically to augment your work.
  • Stay informed. Pay attention to how AI is changing your field of study or career path.

Quick Activity

Pick one job, like academic advising. List its core tasks, mark which AI can assist with, and identify which require a human touch.

Environment & Supply Chain
A data center server rack overgrown with green vines, with a holographic display showing high energy and water use.

Training and running large AI models uses a lot of energy and water. The hardware itself relies on global supply chains that can have their own environmental and social risks.

Example: An AI company builds a large data center which requires a large amount of electricity and water (for cooling). The local community experiences rolling brownouts and minimal water pressure.

Factor Assessment
Severity Medium to high. The environmental impacts are cumulative and global.
Exposure Growing. AI is being embedded in more services we use every day.
Plausibility High. The demand for AI computing power is surging.
Prevalence Increasing. Energy and water use from data centers is a tracked metric.
Mitigation

Systemic & Institutional:

  • Design more efficient models.
  • Build green energy sources to meet electrical demand.
  • Choose cold climates for servers.
  • Government protect electrical and water supplies of communities where data centers are being built.

Individual & Classroom Actions:

  • Be efficient. Use smaller models when possible and avoid running unnecessary AI queries.
  • Choose local models. When feasible, run smaller, open-source models on your own device.
  • Ask questions. Support institutions that are transparent about their AI energy use and sustainability goals.

Quick Activity

Use the website What Uses More to compare the amount of energy required to write a book chapter to the amount of energy needed to watch Netflix.

Monopoly & Market Power
Dominant AI company represented by a chess king. Fallen pawns surround it

A few large companies control most of the computing power, data, and models for AI. This concentration of power can limit innovation, drive up prices, and lock users into one ecosystem.

Example: A vendor bundles an AI suite that forces a school to use one specific cloud provider and a closed set of models.

Factor Assessment
Severity Medium. The risks to innovation, cost, and choice grow over time.
Exposure Broad. The market structure affects what tools are available to everyone.
Plausibility High. Reports show a few players have massive advantages.
Prevalence Under active regulatory review around the world.
Mitigation

Systemic: Institutions should pursue multi-vendor strategies, support open standards, and demand data portability from their vendors.

Individual:

  • Support open source. Experiment with and contribute to open-source AI projects.
  • Stay flexible. Avoid becoming overly dependent on a single proprietary tool or platform.
  • Advocate for choice. Encourage your institution to consider a variety of AI tools, including smaller and open-source options.

Quick Activity

Find at least 3 open-source AI tools and try them out.

Simulated Reality
A hand holds a pen and is writing, but the skeleton of the hand is an AI controlling what the hand does.

Interactions with AI can feel as good as the real thing. AI can act like a therapist, a friend, or an antagonist but it is not actually any of those things. It can also produce polished work that lets students skip the hard parts of learning. This replaces moments of growth, creativity, and struggle with a "good enough" substitute.

Examples:  (1) A user has enjoyable interactions with a chatbot and feels like he/she has developed a relationship with it. When the chatbot is updated to a newer version, the user feels like they have lost a friend. (2) A student submits an AI-written reflection, missing the chance to develop their own voice, judgment, and ideas.

Factor Assessment
Severity Medium. The slow erosion of skills and meaning is a serious long-term risk.
Exposure Broad. The temptation to take shortcuts is high.
Plausibility High. Easy-to-use tools are always just a click away.
Prevalence Increasing. Surveys show widespread use of AI by students.
Mitigation

Systemic & Institutional: Educators can design assignments that focus on the process, not just the final product. This includes requiring drafts, oral presentations, and in-class work.

Individual & Classroom Actions:

  • Go through this whole LibGuide and discover how to use AI to support your learning.
  • Reflect on your process. Keep notes on how you solved a problem, not just the solution.
  • Practice in "AI-free zones." Set aside time for focused thinking and writing without AI assistance.

Quick Activity

Do a short task twice, once with AI help and once without. Compare the final products, but more importantly, compare your notes on the process.

Harassment & Child Safety
Woman looking into mirror which is shattered, ghosts of angry faces surround her

AI can be used to create and spread abusive content. This includes non-consensual deepfakes and targeted harassment. It can also expose minors to harmful material.

Example: A student’s face is put into a sexualized deepfake image by classmates, which then spreads rapidly online.

Factor Assessment
Severity High. The psychological harm and safety risks are severe.
Exposure Significant. Anyone with a phone can be targeted or exposed.
Plausibility High. The tools are easy to use, and content moderation struggles to keep up.
Prevalence Documented. Reports from safety groups show these incidents are rising.
Mitigation

Systemic & Institutional: Platforms must invest in strong safeguards, rapid takedown procedures, and clear pathways to support victims.

Individual & Classroom Actions:

  • Do not create or share harmful content. Understand that doing so causes real harm and can have serious consequences.
  • Report it immediately. If you see abusive content, report it to the platform and a trusted adult or authority.
  • Support victims. Be an ally to those who have been targeted. Do not blame them or share the abusive material.

Quick Activity

Role-play a reporting scenario. A student discloses a deepfake. Practice the next steps: capturing evidence, reporting to the platform, and escalating to school staff.

Dive Deeper

1) WeProtect Global Alliance
2) What other links do you suggest?

Connecting the Dots

These risks are not separate problems. They are an interconnected web of challenges. Seeing the links between them is key to understanding AI safety.Here are a few examples:

  • A few companies with Monopoly Power can control most AI models. This can worsen Bias & Fairness issues by limiting diverse options and hurt Accountability because closed systems are hard to audit.
  • Poor Data Security leads to breaches. This enables Security Threats like personalized phishing scams, which then fuel Information Harms by making fake content more believable.

Your Turn: What other connections can you find? Think about how a lack of Transparency might affect Simulated Reality in learning. How could a focus on the Environment change which models get built? Exploring these connections is a powerful way to build your understanding of the negative impacts of generative AI.

Other Resources

This is a list of a few resources I've found helpful for understanding responsible AI technology use.

Data Detox Kit

A toolkit developed by Tactical Tech designed to help individuals manage their digital privacy, security, and overall digital wellbeing (with a more recent focus on AI). The kit provides practical, everyday steps to gain more control over your online life, covering aspects like screen time, app usage, passwords, and understanding data trails. 

Costs of Generative AI (University of Ottawa)

An in-depth examination of the various costs and harms associated with generative AI technologies. This resource explores critical questions about what level of harm from AI might outweigh its benefits, covering environmental impacts, labor exploitation, bias amplification, and social inequities. Features Rebecca Sweetman's comprehensive analysis of Large Language Model harms, making it essential reading for understanding AI's broader societal implications.

Your Undivided Attention Podcast

A thought-provoking podcast by the Center for Humane Technology that explores how technology is reshaping society and what we can do to ensure it serves humanity's best interests. Hosted by Tristan Harris and Aza Raskin, the show examines the intersection of technology and humanity, offering insights into creating a more humane digital future through thoughtful design and policy.

Educational materials designed to help instructors and students understand and discuss the biases inherent in generative AI systems. Explores various types of bias (cognitive, cultural, demographic, linguistic, and more), provides practical classroom activities for examining AI bias in both text and images, and offers strategies for critical evaluation of AI-generated content. Includes extensive resources and readings for deeper exploration.

IMPACT RISK Framework

A structured framework for understanding and evaluating the potential risks and impacts of AI systems. This resource provides a systematic approach to assessing AI technologies across multiple dimensions, helping users develop critical thinking skills for responsible AI adoption and implementation.

Share Your Recommendations

Know of other valuable resources for AI literacy, digital wellbeing, or responsible technology use? I'd love to hear about them! Feel free to reach out with your suggestions so this collection can continue to grow and serve the community better.

Unless otherwise stated, this page and AI Literacy for Students © 2025 by David Williams is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International

This site is maintained by the librarians of Okanagan College Library.
If you wish to comment on an individual page, please contact that page's author.
If you have a question or comment about Okanagan College Library's LibGuides site as a whole, please contact the site administrator.