Who the hell cares about ethics anyways? Like, actually: who cares? I’m joking, of course. You should care, and if you don’t then you’re probably a sociopathic freak. Anyways, what exactly are ethics? According to Oxford Languages, ethics are “moral principles that govern a person’s behavior or the conducting of an activity.” In other words, ethics are the guiding light running through so many religious doctrines, philosophical arguments, and existential debates; they’re what keep society functioning and what allow every human to play the game of life. As AI advances at an unprecedented rate, the question of ethical building practices becomes paramount, and it is one of the most important discussions taking place. Just imagine a world where machines, objective by design, make life-altering and deeply subjective decisions for you at scale, at speeds you can’t even comprehend; that’s the future of AI if it isn’t properly regulated or ethically constrained. So, how exactly should we be thinking about ethics in the development of AI, and what does the future of this technology hold for our moral obligations?

Key Ethical Issues in AI

There are numerous ethical issues to consider when discussing the development of AI. Here are four of the main ones:

  1. Privacy concerns
  2. Bias
  3. Decision-making transparency
  4. Job displacement

Just imagine if everything you ever did (yes, all the fucked up shit you’ve done in your life) was captured, stored, and used in ways you have no say in. Since AI systems are trained on vast quantities of data in order to operate effectively, that is exactly where we’re headed. Large corporations already collect copious amounts of personal data and sell it to others for hyper-personalized marketing campaigns. What if your most sensitive information was misused, leaked, or made public? You probably wouldn’t feel so good then. There needs to be a way to guarantee individual privacy.

As stated in an earlier blog post, AI models are only as good as the data they are trained on. If a model is trained on incredibly racist data, or data steeped in discrimination and injustice, it will perpetuate those narratives: the very narratives we fight so hard to break. This feeds a positive feedback loop of impending doom for society, as biased output becomes mainstream media (already happening) and the public is forced to choose a group to belong to (when the individual matters most; please stay away from groupthink ideology, it’s not healthy). There needs to be a way to limit bias in the data that models are trained on.
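
To make that feedback loop concrete, here’s a minimal toy simulation (plain Python, made-up numbers; nothing resembling a real training pipeline). A “model” learns the fraction of biased content in its corpus, biased output gets a small engagement boost, and that output becomes the next round’s training data:

```python
import random

random.seed(0)

def train(corpus):
    """'Train' a toy generator: just learn the fraction of biased items."""
    return sum(corpus) / len(corpus)

def generate(bias_rate, n=10_000, engagement_boost=1.2):
    """Sample new content. Biased items get an engagement boost, so they
    end up over-represented in whatever gets scraped for the next round."""
    boosted = min(bias_rate * engagement_boost, 1.0)
    return [1 if random.random() < boosted else 0 for _ in range(n)]

# Start with a corpus where roughly 30% of items carry the biased narrative.
corpus = generate(0.30, engagement_boost=1.0)

for generation in range(5):
    rate = train(corpus)
    print(f"generation {generation}: {rate:.0%} of training data is biased")
    corpus = generate(rate)  # today's output is tomorrow's training data
```

Run it and the biased share climbs every generation; crank up the engagement boost and it saturates even faster. That’s the loop in miniature.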

One of the reasons private organizations appear so shady is that there is no decision-making transparency. Wouldn’t it be great if every corporation outwardly stated its intentions, motivations, and actions? That would be lovely. Unfortunately, that doesn’t happen, which is why the government specializes in auditing corporations (and individuals), attempting to hold moral virtues to the highest standard and to minimize the proliferation of bad actors. There needs to be a way for AI to exhibit decision-making transparency, to empower the individual and to ensure equality of opportunity.

The implementation of AI will disrupt every part of the workforce, from manual laborer to executive. As with any revolution, job displacement will be commonplace, and new, never-before-seen occupations will be born. By some estimates, between 400 and 800 million jobs could be impacted by automation and AI by 2030. I’m certain new jobs will enter the market; however, most of these new positions are yet to be seen (an interesting one I’ve spotted so far is the “Prompt Engineer,” whose sole purpose is to come up with the best prompts for chatbots like ChatGPT, Bard, etc.). There needs to be a way to minimize job displacement while maximizing opportunity for all.

Case Studies

There are numerous examples of AI integration and development leading to unethical outcomes. Here are three of the most notable:

In 2014, Amazon began training an AI recruitment tool to automate the hiring process by reviewing job applications and assigning each a rating of 1 to 5 stars. Trained on 10 years of prior resume data (primarily submitted by men), the system began to penalize resumes that included words like “women’s” and downgraded graduates of two all-women’s colleges. Despite efforts to correct this bias, Amazon ultimately discontinued the tool in 2018.
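
To see how a model can learn that kind of penalty, here’s a small, entirely hypothetical sketch using scikit-learn and invented toy resumes (this is my illustration, not Amazon’s actual system or data). Because “women’s” only appears in the historically rejected resumes, the classifier assigns the token a negative weight, even though it says nothing about competence:

```python
# Hypothetical sketch: invented toy resumes, not Amazon's data or code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical "hired" resumes skew male, so gendered tokens end up
# correlated with the label even though they say nothing about skill.
resumes = [
    "software engineer chess club captain",          # hired
    "backend developer rugby team lead",             # hired
    "data engineer distributed systems",             # hired
    "software engineer women's chess club captain",  # rejected
    "backend developer women's coding society",      # rejected
    "women's college graduate data engineer",        # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# CountVectorizer tokenizes "women's" down to "women"; the learned weight
# on that token comes out negative: the model penalizes the word itself.
idx = vectorizer.vocabulary_["women"]
print(f"learned weight on 'women': {model.coef_[0][idx]:.2f}")
```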

The COMPAS Recidivism Algorithm is an AI tool used by judges, probation officers, and parole officers to assess a criminal defendant’s likelihood of re-offending. ProPublica analyzed the tool and found that it contained racial bias, incorrectly judging black defendants as higher risk for recidivism than white defendants. Black defendants who didn’t re-offend were nearly twice as likely to be misclassified as higher risk compared to white defendants, and even when controlling for other factors, black defendants were 45% more likely to be assigned higher risk scores for recidivism. This is a prime example of bias in an AI system that is already in use today!
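
If you’re wondering where “twice as likely” comes from, it’s a comparison of false positive rates between groups. Here’s a quick sketch with illustrative counts (hypothetical numbers chosen to mirror the reported rates, not ProPublica’s raw data):

```python
# Illustrative numbers only, not ProPublica's actual counts.

def false_positive_rate(wrongly_labeled_high_risk, total_non_reoffenders):
    """Of defendants who did NOT re-offend, what share were still
    labeled high risk?"""
    return wrongly_labeled_high_risk / total_non_reoffenders

# Hypothetical counts for two groups of non-reoffending defendants.
fpr_black = false_positive_rate(450, 1000)
fpr_white = false_positive_rate(230, 1000)

print(f"black defendants' false positive rate: {fpr_black:.0%}")  # 45%
print(f"white defendants' false positive rate: {fpr_white:.0%}")  # 23%
print(f"disparity: {fpr_black / fpr_white:.1f}x")                 # 2.0x
```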

In 2016, Microsoft released an AI chatbot named Tay, designed to learn from Twitter interactions. Users started tweeting misogynistic, racist, and offensive remarks at Tay, which the bot repeated back after being instructed to “repeat after me.” Soon, some of Tay’s abhorrent comments became unprompted, suggesting that the bot was assimilating into the Twitter space. After only a day, Tay had become a racist, abusive asshole, raising the question of how AI models should be trained on public data.

Current Approaches to AI Ethics

In line with my prior blog post on regulation, there are numerous frameworks that organizations and countries have proposed for building AI ethically. Here are three mainstream approaches, put forward by some of the largest, brightest, and most advanced organizations on the planet:

Google’s AI Principles: be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles.

The EU’s Ethics Guidelines for Trustworthy AI: these guidelines propose seven requirements AI systems should meet in order to be deemed trustworthy. From human agency and oversight, to technical robustness and safety, to privacy, data governance, and transparency, these requirements aim to ensure that AI is built ethically and responsibly.

The Partnership on AI: a coalition formed in 2016 by Amazon, Facebook, Google’s DeepMind, IBM, and Microsoft (with Apple joining shortly after) that aims to develop and share best practices in AI research and development, create an open platform for discussion and engagement, foster public understanding and awareness of AI, and support research that addresses AI’s global challenges.

The Future of AI Ethics

There are plenty of unethical AI use cases occurring right under our noses. From deepfakes and misinformation, to bias and discrimination, to the use of AI in healthcare and warfare, AI has infiltrated the very fabric of reality and is used daily to spread false data, capture copious amounts of sensitive information, and objectively analyze subjective domains. Imagine if one day you came across a deepfake of yourself on the internet, with a synthetic voice that sounds just like you, talking about how you’re going to take a massive shit on the White House; you probably wouldn’t like it. Now imagine that taken to the 100x extreme: being framed as a terrorist, or implicated in any number of unsavory and disturbing acts you had no part in. Unfortunately, once something’s on the internet, it’s there forever, and I doubt you want your name and reputation tainted by a machine.

So, what are the solutions to these issues? Some proposed fixes include, but are not limited to: systems to detect deepfakes, human oversight of critical AI decisions (like diagnosing terminal illness and other life-or-death situations), international treaties limiting AI use in war, and more diverse training data with greater transparency for the layperson. I think there needs to be an international forum, one that only top officials and experts in the field can contribute to, where they lay out step-by-step plans for the development of high-level, widespread AI. There needs to be an abundance of white papers that explain AI to the layperson, including every source of data that goes into an AI model and the expected results. And there needs to be immense scrutiny and regulation before AI models can be deployed in public settings, much like the FDA vets drugs before they reach the general public. The goal is to make AI use and development as transparent as possible, and that’s a two-part equation: 1) making AI developers and leaders accountable for their actions, and 2) distilling complex topics into plain language so that everyone can not only understand the implications of AI, but play a part in molding reality with AI on our side.

Conclusion

This has 100% been a boring and [probably] convoluted piece, but I felt it was necessary to write. There are numerous instances of AI being used unethically in today’s society; all it takes is a Google search to find them. I say this not to scare you, but to let you know that although the future of AI is unimaginable, this very moment in AI is scary, shaky, rapidly evolving, and alive. International ethical consensus, drawn from every discipline (philosophy, psychology, sociology, science, etc.), needs to dictate the future of AI development. This isn’t a matter of the smartest computer scientists and machine learning researchers determining the ethical future of reality; it’s about everyone playing a part in an inevitable future where your very life could be at the hands of an algorithmic decision, objectively chosen by a heartless machine.
