
Ethics for EduKC

Since the goal of developing Artificial Intelligence and Deep Learning models is to benefit humanity, we care deeply about doing so ethically.

The problem of suggesting the best educational resources to students is no exception. Even though it looks like a simple use case, we do not want the model to treat different groups of people unequally.

Let’s explore a few scenarios where we may have ethical concerns about our solution to this problem:

 

Social Role, Embedded Bias: A Reflective Statement

We humans have preconceptions about many things; one of them is associating people with events, statuses, and so on. We cannot let AI carry such biases: it should be fair to all kinds of inputs and should not learn to discriminate based on a handful of factors.
We factored this in while analyzing the data used to train our model, so that it performs without bias.
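The snippet below is a minimal sketch of the kind of representation check this involves, not our full analysis pipeline; the column names and the 5% threshold are illustrative assumptions.

```python
# Sketch: flag demographic groups that are badly under-represented in the
# training data before the model is trained. Column names are illustrative.
import pandas as pd

def report_group_balance(df: pd.DataFrame, group_col: str, threshold: float = 0.05) -> pd.Series:
    """Return each group's share of the data and warn about under-represented groups."""
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        if share < threshold:
            print(f"Warning: group '{group}' is only {share:.1%} of the training data")
    return shares

# Example usage with hypothetical data:
# train_df = pd.read_csv("training_data.csv")
# report_group_balance(train_df, group_col="gender")
```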

Data Management Issues/Privacy: Research Ethics

Because we store some personal data about our users, we obtain their consent to share it, as the GDPR requires, and we store it in a database accessed only over encrypted connections. It is our responsibility to safeguard this data and prevent it from falling into the wrong hands.
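As a minimal sketch of these two practices, the example below assumes a PostgreSQL backend and records a profile only when consent has been given, over a TLS-encrypted connection; the table name, column names, and pseudonymization step are illustrative assumptions, not our actual schema.

```python
# Sketch: store a user profile only with consent, over an encrypted connection.
import hashlib
import psycopg2

def store_user_record(dsn: str, email: str, consent_given: bool, profile: dict) -> None:
    # Refuse to store anything for users who have not given consent (GDPR).
    if not consent_given:
        return
    # Pseudonymize the identifier so the raw email is not kept with profile data.
    user_key = hashlib.sha256(email.lower().encode("utf-8")).hexdigest()
    # sslmode="require" forces an encrypted connection to the database.
    with psycopg2.connect(dsn, sslmode="require") as conn:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO user_profiles (user_key, locality, income_level, consented) "
                "VALUES (%s, %s, %s, %s)",
                (user_key, profile.get("locality"), profile.get("income_level"), True),
            )
```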

Social and Cultural Implications: The So-What Question

The data we collected helped us identify needs and opportunities for providing good recommendations to students, especially those from minority communities in Kansas City and other cities in Missouri. The insights extracted from the data were highly useful for identifying the target users who would benefit most from our application.

Fairness in AI: Fairness Audit

Our application uses a few personal parameters, such as locality, income level, age, gender, and budget, to identify a user's needs. Since these are essential inputs to the model, we exercise caution when supplying them as input features.

We also continuously analyze the model's performance across the parameters that may contribute to bias, so that the results can be presented transparently.
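The following is a minimal sketch of such a per-group performance check; the metric, column names, and group names are illustrative assumptions rather than our exact audit code.

```python
# Sketch: compute a performance metric separately for each value of each
# sensitive attribute, so gaps between groups are visible.
import pandas as pd
from sklearn.metrics import accuracy_score

def audit_by_group(df: pd.DataFrame, y_true_col: str, y_pred_col: str, group_cols: list) -> dict:
    """Return accuracy per group for every sensitive attribute in group_cols."""
    results = {}
    for col in group_cols:
        per_group = {}
        for value, subset in df.groupby(col):
            per_group[value] = accuracy_score(subset[y_true_col], subset[y_pred_col])
        results[col] = per_group
    return results

# Example usage with hypothetical columns:
# report = audit_by_group(eval_df, "clicked_resource", "predicted_click",
#                         group_cols=["gender", "income_level", "locality"])
# Large gaps between groups flag a potential bias that we investigate further.
```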
