The ethics component addresses the issues of bias, data privacy, social and cultural implications, and fairness in AI. For each issue, we design discussion questions and class activities. Relevant reading and learning resources are also offered.
Social Role, Embedded Bias: A Reflexivity Statement
Simply put, self-reflexivity refers to how "who we are" influences "what we do". In other words, our positionality, including our beliefs, values, perceptions, and assumptions, explicitly and/or implicitly shapes our actions in a given context and consequently produces certain outcomes. The purpose of this assignment is to make us more conscious of our individual, social, and cultural identities, and of how our positionality influences our choices of use cases or projects.
Discussion questions:
The discussion questions lead to the activity, but the instructor can modify and adapt any parts of the suggested discussion questions, activities, or reading.
- Please briefly describe yourself.
- How do specific parts of your identity advantage you?
- How do specific parts of your identity disadvantage you?
- What are the possible biases you may have?
- On the whole, how does your identity influence your work on this assignment or project?
Activity
Please write a paragraph that briefly describes yourself and your identity, and how your identity influences your choices of courses, class projects, etc.
Suggested reading
How to Write a Reflexivity Statement (for professional or personal purposes)
Implicit bias test
Data Management Issues/Privacy: Research Ethics
The goal of data management is to ensure that data collection and modeling are conducted ethically. Data mining often uses human behavioral data regulated by different legal regimes, such as the HIPAA Privacy Rule and the FTC Act. In addition, research involving human subjects must be reviewed and approved by an Institutional Review Board (IRB). All of these legal and regulatory protections aim to safeguard the privacy and confidentiality of participants. Respecting the human participants who willingly provide data is another important dimension of research ethics.
Discussion questions
The discussion questions lead to the activity, but the instructor can modify and adapt any parts of the suggested discussion questions, activities, or reading.
- Why is privacy important in [domain field]?
- What are good practices of collecting data to protect privacy?
- What are good practices of managing data to protect privacy?
- What are good practices of using data while maintaining privacy?
Activity
Please write a data management plan that ensures privacy and confidentiality, shows respect for human participants, and minimizes potential risks and harms across data collection, storage, analysis, and reporting.
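As a concrete illustration of one practice a data management plan might specify, the sketch below pseudonymizes direct identifiers before storage by replacing them with salted hashes. This is a minimal example, not a complete de-identification scheme; the field names and records are hypothetical, and a real plan would also address the salt's secure storage, indirect identifiers, and re-identification risk.

```python
import hashlib
import secrets

# The salt must be stored separately from the data (e.g., in secure
# storage); without it, hashed IDs cannot be linked back to people.
SALT = secrets.token_hex(16)

def pseudonymize(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()

# Hypothetical raw survey records containing a direct identifier.
raw_records = [
    {"email": "alice@example.com", "response": 4},
    {"email": "bob@example.com", "response": 2},
]

# Only pseudonymized records are kept; the email never reaches storage.
safe_records = [
    {"id": pseudonymize(r["email"], SALT), "response": r["response"]}
    for r in raw_records
]
```

The same participant always maps to the same pseudonym, so responses can still be linked across waves of data collection without retaining the identifier itself.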
Suggested reading
- Chapter 3 Ethics: What Are My Responsibilities as a Researcher?, pp. 43-64, in Introducing Communication Research: Paths of Inquiry (5th), by Donald Treadwell and Andrea Davis
- Five principles for research ethics. Cover your bases with these ethical strategies.
- Dr. Fred Cate’s talk revolves around the current approach to data privacy. He analyzes the role that consent plays in data protection and privacy today, grappling with how we manage consent in a world in which data is constantly being inferred about us. In a chaotic world, he emphasizes that it is important that we ask for consent in a meaningful and effective manner.
- Sharing Consumer Health Information? Look to HIPAA and the FTC Act
- Metcalf, J., & Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data & Society, 3(1), 2053951716650211. (PDF)
- Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. (PDF)
Social and Cultural Implications: The So-What Question
Data is a means to knowledge. The ultimate goal of data science is knowledge generation and decision making. Computer algorithms are powerful data engineering tools, but we, as human agents, must answer the so-what questions. Data visualization is very helpful for interpreting findings. The meaningfulness of data research and applications depends on how well one can apply domain knowledge (e.g., journalism, advertising, health communication, marketing, geography, economics) to interpret data and models.
Discussion questions
The discussion questions lead to the activity, but the instructor can modify and adapt any parts of the suggested discussion questions, activities, or reading.
- What social issue is reflected in the pattern you observed in the data? Social issues could include, but are not limited to, the distribution of resources, disparities in resources among population groups, and the under- or over-representation of a group along one dimension.
- How do you think the data might enhance society’s understanding of that issue? In other words, did the data provide new insights into, and evidence for, the issue in focus? Did the data analysis and visualization reveal anything about that social issue that you didn’t know or notice before?
- How do you think the findings of the data might affect the parties involved?
- How would your data set bring awareness of the issue to the public? What do you think the policy makers should do to resolve the issue?
Activity
Please write 2 to 3 paragraphs answering the so-what question of your project/application: What is the broader social impact of the application? Did the data/model/application provide new insights into, and evidence for, the issue in focus? How would your data/model/application bring the issue to the public’s awareness? What do you think policymakers should do to resolve the issue? Did the data/model/application and visualization reveal anything about the social issue that you didn’t know or notice before?
Suggested reading
- Data Journalism with Impact
- Beyond Clicks and Shares: How and Why to Measure the Impact of Data Journalism Projects
- Zeng, D. (2015). AI Ethics: Science Fiction Meets Technological Reality. IEEE Intelligent Systems, (3), 2-5. (PDF)
- Ienca, M., Ferretti, A., Hurst, S., Puhan, M., Lovis, C., & Vayena, E. (2018). Considerations for ethics review of big data health research: A scoping review. PloS one, 13(10), e0204937. (PDF)
Fairness in AI: Fairness Audit
AI techniques using big data and algorithmic processing are increasingly used to guide important social decisions, including hiring, admissions, loan granting, and crime prediction. However, AI is only as fair as its data, and the data are gathered from human activities. Data are often as biased and flawed as the human beings who produce them. While we tend to assume that machines are neutral, there is evidence that algorithms may learn human biases and discrimination from data rather than mitigating them.
Discussion questions
The discussion questions lead to the activity, but the instructor can modify and adapt any parts of the suggested discussion questions, activities, or reading.
- What are the fallacies leading to people’s blind trust in “objective” algorithms?
- What does “weapons of math destruction” mean? Can you give an example?
- What are the sources of bias in AI and big data?
- If I were going to do a fairness audit of an app (or my own app), what should I do?
- What can we do to promote fairness in AI and big data?
- What should be the roles of human agents and machine agents to ensure fairness?
Activity
The goal of this activity is to conduct a fairness audit that evaluates the data/model/application. For example, you can create audit metrics and conduct an evaluation. Or you can write a critical analysis of the data/model/application using critical-studies theories, such as feminism or social semiotics. You can also devise your own way of evaluating fairness.
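One way to start on audit metrics is to compare how a model treats different groups. The sketch below, using hypothetical (group, true label, model decision) triples, computes two commonly used quantities: each group's selection rate (for demographic parity) and true-positive rate (for equal opportunity). It is a minimal illustration of the idea, not a full audit; toolkits such as Aequitas, listed in the suggested reading, implement many more metrics.

```python
from collections import defaultdict

def group_rates(records):
    """Per-group selection rate and true-positive rate.

    records: iterable of (group, y_true, y_pred) with labels in {0, 1}.
    """
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "pos": 0, "tp": 0})
    for group, y_true, y_pred in records:
        s = stats[group]
        s["n"] += 1
        s["selected"] += y_pred          # how often the group is selected
        s["pos"] += y_true               # actual positives in the group
        s["tp"] += y_true and y_pred     # selected AND actually positive
    return {
        g: {
            "selection_rate": s["selected"] / s["n"],
            "tpr": s["tp"] / s["pos"] if s["pos"] else None,
        }
        for g, s in stats.items()
    }

# Hypothetical audit data: two groups, four decisions each.
records = [
    ("A", 1, 1), ("A", 0, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
]
rates = group_rates(records)

# Demographic parity gap: difference in selection rates between groups.
parity_gap = abs(rates["A"]["selection_rate"] - rates["B"]["selection_rate"])
```

Here group A is selected twice as often as group B (0.5 vs. 0.25) even though both groups have the same true-positive rate, which is exactly the kind of disparity an audit should surface and then interpret in its social context.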
Suggested reading
- Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more — but they don’t automatically make things fair. Mathematician and data scientist Cathy O’Neil coined a term for algorithms that are secret, important and harmful: “weapons of math destruction.” Learn more about the hidden agendas behind the formulas.
- MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn’t detect her face — because the people who coded the algorithm hadn’t taught it to identify a broad range of skin tones and facial structures. Now she’s on a mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.” It’s an eye-opening talk about the need for accountability in coding … as algorithms take over more and more aspects of our lives.
- COMPAS Analysis using Aequitas
- Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 1-22. (PDF)
- Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2019). From what to how. An overview of AI ethics tools, methods and research to translate principles into practices. arXiv preprint arXiv:1905.06876. (PDF)