To use AI responsibly, you need to understand both its potential and its risks. The examples below highlight some of the biggest risks and ethical dilemmas presented by AI. We will look at each example through a simple framework: its severity, exposure, plausibility, prevalence, and possible mitigations.
Please note that this is not a comprehensive list. If you have ideas for things that are missing, please reach out to the author.
After working through these examples, you should see how interconnected the risks are and have a better understanding of what you can do, and what needs to be done, to mitigate them.
AI makes it easy to create and spread fake content. This can be anything from deepfakes to targeted propaganda. The result is a loss of trust, confusion, and rising social tension.
Example: A doctored video of a local candidate circulates before an election, changing public opinion before it can be debunked.
Factor | Assessment |
---|---|
Severity | High. Harms can impact public safety, elections, and trust in institutions. |
Exposure | Broad. Content can reach huge audiences very quickly. |
Plausibility | High. Tools for creating synthetic media are widely available. |
Prevalence | Increasing. Misinformation campaigns appear in every election cycle. |
Mitigation | Systemic & Institutional: Platforms can help by adding content labels, implementing strong guardrails, and using rapid fact-checking. Individual & Classroom Actions: Pick a trending video or post. Trace its source, check fact-checks, and look for content credentials. Discuss how one small edit could flip its meaning. |
AI lowers the bar for creating malicious content, including phishing emails, social engineering scripts, and malware. The AI models themselves are also at risk from attacks like prompt injection.
Example: A tailored phishing email generated in seconds steals credentials and compromises a school's network.
Factor | Assessment |
---|---|
Severity | High. A compromise can expose sensitive data or disrupt operations. |
Exposure | Broad. Any connected user or system can be a target. |
Plausibility | High. Attack methods and tools are publicly known. |
Prevalence | Rising. Security teams are tracking many new AI-specific vulnerabilities. |
Mitigation | Systemic & Institutional: Organizations should use threat modeling, strict input filtering, and secure design principles to protect their systems. Individual & Classroom Actions: Read your email with a skeptical eye: check the sender's address, hover over links before clicking, and treat urgent requests for credentials or payment as red flags. |
1) NIST, Adversarial ML Taxonomy (AI 100-2)
2) Contact the author to suggest more high-quality resources
AI applications collect and store personal data, sometimes in ways users do not expect. Weak controls can lead to data breaches, re-identification of anonymized data, and illegal use. AI applications can also be designed to make inferences about you: even if you are not giving them personal data, your interactions can be analyzed to build a fairly detailed profile of you.
Example: Private chats in the chatbot Grok were indexed by Google and became searchable by anyone. ChatGPT had a similar problem.
Factor | Assessment |
---|---|
Severity | High. Breaches of sensitive data cause significant harm and legal risk. |
Exposure | Large. Many users and devices handle personal data. |
Plausibility | High. Standard workflows often involve sharing data with cloud tools. |
Prevalence | Common. Privacy incidents happen all the time. |
Mitigation | Systemic & Institutional: Institutions must practice data minimization, conduct privacy assessments, and comply with laws like FERPA and GDPR. Individual & Classroom Actions: Actually read the privacy policy of an AI company to find out what they are doing with your data. |
1) Let me know if you have suggestions for good quality resources to go here.
AI models can adopt and amplify human biases found in their training data. This affects everything from grading tools and hiring filters to how people are portrayed.
Example: An image generator returns stereotyped pictures of "scientists" and under-represents women and people of color.
Factor | Assessment |
---|---|
Severity | Medium to high. Systematic unfairness hurts opportunities and dignity. |
Exposure | Broad. Popular models are used everywhere for many different tasks. |
Plausibility | High. Bias in AI models is a well-documented problem. |
Prevalence | Frequent. This is especially true in general-purpose models. |
Mitigation | Systemic & Institutional: Companies should use representative data, test for bias, and design inclusive and accessible products with human oversight. Individual & Classroom Actions: Prompt an image model with role labels like “CEO” or “nurse.” Tally the outputs and discuss the stereotypes you see. Then try to fix them with better prompts. |
1) NIST, SP 1270: Managing Bias in AI
2) Let me know if you have suggestions for good quality resources to go here.
When an AI makes a decision, how do we know how it made that decision (transparency)? Who is responsible for the consequences of the decision (accountability)? And how will those impacted by the decision be compensated for harms (redress)?
Examples: An instructor uses an AI to flag plagiarism and punishes a student based solely on the AI's decision. Who is accountable for this decision? A student uses AI to write a report and that report refers to papers that do not exist. Who is accountable for this mistake? An HR manager uses AI to determine a shortlist for interviews. How can they know how the AI came up with the shortlist?
Factor | Assessment |
---|---|
Severity | Medium to high. Opaque decisions can wrongly penalize people. |
Exposure | Broad. This affects anyone evaluated or served by an AI system. |
Plausibility | High. Good documentation is often missing when new tech is adopted quickly. |
Prevalence | Mixed. Transparency is improving but remains uneven. Accountability and redress are evolving. |
Mitigation | Systemic & Institutional: Organizations should publish model cards, log decisions, and require a human in the loop for impactful calls. Individual & Classroom Actions: Transparency: Read the model card of an AI tool you use. Explain its purpose, data sources, and limitations. |
Accountability:
1) Recommend some links to me
Generative models learn from existing works, including copyrighted material. Their outputs can sometimes mimic an artist's style or content too closely, creating legal and ethical conflicts.
Examples: (1) Meta used pirated books to train some of its AI models. (2) A music class uses an AI tool that creates songs that sound almost identical to those of a living artist.
Factor | Assessment |
---|---|
Severity | Medium to high. This can lead to serious legal and ethical problems for creators and institutions. |
Exposure | Broad. This affects anyone doing creative work or publishing. |
Plausibility | High. Models are trained on the public web and are designed to imitate styles. |
Prevalence | Frequent. This issue is at the center of many active lawsuits and policy debates. |
Mitigation | Systemic & Institutional: Companies should respect license terms, use ethically sourced datasets, and adopt content watermarking to show provenance. Individual & Classroom Actions: Generate an image “in the style of” an artist. Compare it to their real work. Discuss where inspiration ends and infringement might begin. |
Generative AI is changing the job market. It can automate some tasks, assist with others, and shift the demand for certain skills, especially for entry-level roles.
Example: A marketing firm reduces its need for copywriters after adopting an AI-assisted writing tool, affecting entry-level jobs.
Factor | Assessment |
---|---|
Severity | Medium to high. Job displacement and new inequities can occur. |
Exposure | Broad. This impacts knowledge work and creative fields. |
Plausibility | High. Companies in every sector are adopting AI quickly. |
Prevalence | Growing. Hard evidence of job impacts is limited, but anecdotal evidence is strong. |
Mitigation | Systemic & Institutional: Organizations should focus on redesigning jobs, upskilling their workforce, and being transparent about automation plans. Individual & Classroom Actions: Pick one job, like academic advising. List its core tasks, mark which ones AI can assist with, and identify which ones require a human touch. |
Training and running large AI models uses a lot of energy and water. The hardware itself relies on global supply chains that can have their own environmental and social risks.
Example: An AI company builds a large data center which requires a large amount of electricity and water (for cooling). The local community experiences rolling brownouts and minimal water pressure.
Factor | Assessment |
---|---|
Severity | Medium to high. The environmental impacts are cumulative and global. |
Exposure | Growing. AI is being embedded in more services we use every day. |
Plausibility | High. The demand for AI computing power is surging. |
Prevalence | Increasing. Energy and water use from data centers is a tracked metric. |
Mitigation | Systemic & Institutional: Companies can improve the efficiency of models and data centers and report their energy and water use. Individual & Classroom Actions: Use the website What Uses More to compare the energy required to write a book chapter to the energy needed to watch Netflix. |
A few large companies control most of the computing power, data, and models for AI. This concentration of power can limit innovation, drive up prices, and lock users into one ecosystem.
Example: A vendor bundles an AI suite that forces a school to use one specific cloud provider and a closed set of models.
Factor | Assessment |
---|---|
Severity | Medium. The risks to innovation, cost, and choice grow over time. |
Exposure | Broad. The market structure affects what tools are available to everyone. |
Plausibility | High. Reports show a few players have massive advantages. |
Prevalence | Under active regulatory review around the world. |
Mitigation | Systemic & Institutional: Institutions should pursue multi-vendor strategies, support open standards, and demand data portability from their vendors. Individual & Classroom Actions: Find at least 3 open-source AI tools and try them out. |
1) UK CMA, Foundation Models: Initial Report & 2024 Update
2) FTC, Generative AI Raises Competition Concerns
3) Let me know of other good resources.
Interactions with AI can feel as good as the real thing. AI can act like a therapist, a friend, or an antagonist, but it is not actually any of those things. It can also produce polished work that lets students skip the hard parts of learning. This replaces moments of growth, creativity, and struggle with a "good enough" substitute.
Examples: (1) A user has enjoyable interactions with a chatbot and feels like they have developed a relationship with it. When the chatbot is updated to a newer version, the user feels like they have lost a friend. (2) A student submits an AI-written reflection, missing the chance to develop their own voice, judgment, and ideas.
Factor | Assessment |
---|---|
Severity | Medium. The slow erosion of skills and meaning is a serious long-term risk. |
Exposure | Broad. The temptation to take shortcuts is high. |
Plausibility | High. Easy-to-use tools are always just a click away. |
Prevalence | Increasing. Surveys show widespread use of AI by students. |
Mitigation | Systemic & Institutional: Educators can design assignments that focus on the process, not just the final product. This includes requiring drafts, oral presentations, and in-class work. Individual & Classroom Actions: Do a short task twice, once with AI help and once without. Compare the final products, but more importantly, compare your notes on the process. |
1) UNESCO, Guidance for GenAI in Education & Research
2) EDUCAUSE, 2024 Action Plan: AI Policies & Guidelines
3) KEEP READING THIS LIBGUIDE!!!
AI can be used to create and spread abusive content. This includes non-consensual deepfakes and targeted harassment. It can also expose minors to harmful material.
Example: A student’s face is put into a sexualized deepfake image by classmates, which then spreads rapidly online.
Factor | Assessment |
---|---|
Severity | High. The psychological harm and safety risks are severe. |
Exposure | Significant. Anyone with a phone can be targeted or exposed. |
Plausibility | High. The tools are easy to use, and content moderation struggles to keep up. |
Prevalence | Documented. Reports from safety groups show these incidents are rising. |
Mitigation | Systemic & Institutional: Platforms must invest in strong safeguards, rapid takedown procedures, and clear pathways to support victims. Individual & Classroom Actions: Role-play a reporting scenario. A student discloses a deepfake. Practice the next steps: capturing evidence, reporting to the platform, and escalating to school staff. |
1) WeProtect Global Alliance
2) What other links do you suggest?
These risks are not separate problems. They are an interconnected web of challenges. Seeing the links between them is key to understanding AI safety. Here are a few examples: a non-consensual deepfake (Abusive Content) is also Misinformation and a violation of Privacy; biased grading or hiring tools (Bias) are hard to challenge without Transparency and Accountability; and the Concentration of Power in a few companies also concentrates the Environmental costs of AI.
Your Turn: What other connections can you find? Think about how a lack of Transparency might affect Simulated Reality in learning. How could a focus on the Environment change which models get built? Exploring these connections is a powerful way to build your understanding of the negative impacts of generative AI.
This is a list of a few resources I've found helpful for understanding responsible AI technology use.
A toolkit developed by Tactical Tech designed to help individuals manage their digital privacy, security, and overall digital wellbeing (with a more recent focus on AI). The kit provides practical, everyday steps to gain more control over your online life, covering aspects like screen time, app usage, passwords, and understanding data trails.
An in-depth examination of the various costs and harms associated with generative AI technologies. This resource explores critical questions about what level of harm from AI might outweigh its benefits, covering environmental impacts, labor exploitation, bias amplification, and social inequities. Features Rebecca Sweetman's comprehensive analysis of Large Language Model harms, making it essential reading for understanding AI's broader societal implications.
A thought-provoking podcast by the Center for Humane Technology that explores how technology is reshaping society and what we can do to ensure it serves humanity's best interests. Hosted by Tristan Harris and Aza Raskin, the show examines the intersection of technology and humanity, offering insights into creating a more humane digital future through thoughtful design and policy.
Educational materials designed to help instructors and students understand and discuss the biases inherent in generative AI systems. Explores various types of bias (cognitive, cultural, demographic, linguistic, and more), provides practical classroom activities for examining AI bias in both text and images, and offers strategies for critical evaluation of AI-generated content. Includes extensive resources and readings for deeper exploration.
A structured framework for understanding and evaluating the potential risks and impacts of AI systems. This resource provides a systematic approach to assessing AI technologies across multiple dimensions, helping users develop critical thinking skills for responsible AI adoption and implementation.
Know of other valuable resources for AI literacy, digital wellbeing, or responsible technology use? I'd love to hear about them! Feel free to reach out with your suggestions so this collection can continue to grow and serve the community better.
Unless otherwise stated, this page and AI Literacy for Students © 2025 by David Williams are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International