Shaping the Future: The Need for AI Regulation

Regulation in general has been tricky and controversial since the dawn of man (now the Dawn of AI, hehe). The Code of Hammurabi (c. 1754 BCE) is one of the earliest known legal codes, regulating various aspects of life including trade, property, and family matters. Today, similar institutions regulate everything from telecommunications (FCC, established 1934) to automobiles (NHTSA, established 1970) to pharmaceuticals (FDA, established 1906). These organizations have helped society run smoothly, yet their birth and development have always drawn criticism.

The reason I tell you this is to help you understand that government regulation is NOT inherently bad. Despite the conspiracy theories and hatred toward regulation out in the ether, if it weren't for these institutions, society might not be as developed or empowered as it is today. Hell, I might not even be able to write this blog openly if it weren't for certain establishments and systems that at least try to keep society moving forward. All this to say: I'm all for regulation, especially when it comes to AI and the advancements therein.

Why is Regulation in AI Needed so Badly?

I'm gonna go out on a limb here and say that AI needs to be regulated quickly, comprehensively, and with the full force of the law. You might be asking yourself, "well, how is AI different from other past technologies?" Here are several reasons why AI regulation is needed and how AI differs from past technological advancements:

  1. Ubiquity – AI can impact virtually every sector of society
  2. Complexity – It’s hard to predict and control behaviors of machine learning/AI systems
  3. Autonomy – AI can take actions without human intervention (I'm sure you can imagine certain catastrophic consequences in your head)
  4. Scalability – AI can be deployed and scaled at an unprecedented speed
  5. Data Dependence – Since these models are trained on large volumes of data, if that data is inaccurate or misrepresented, it can lead to biased and inaccurate AI behavior and outcomes

When you think about it, there really is no limit to what AI can do. We are already seeing its potential in the white-collar job market and have discussed in detail its implications in HealthTech, Drug Discovery, and Life at large. The fact that AI development is a positive feedback loop is inherently a cause for concern: new advancements are fed into deep learning networks that push the boundaries of current understanding, producing newer advancements that are fed back into those same networks, deepening understanding, and so on. Pretty soon, AI models will be able to code themselves and produce authentic, original answers to age-old inquiries, calling into question why humans are even needed in the first place. This is exactly why we need guardrails in place: to limit the possible exponential effect AI can have on the human race, while maximizing immediate innovation in a healthy, yet progressive, manner.

Pros of AI Regulation

In 1890, Congress passed the first antitrust law (the Sherman Act). Its purpose was to combat monopolistic business practices and promote fair competition in the marketplace. Still in effect today, antitrust laws ensure that modern companies abide by ethical business practices and don't hog 100% of a specific market. These laws are incredibly important when it comes to AI, specifically as AI relates to ubiquity and complexity. In fact, Sam Altman, co-founder and CEO of OpenAI, testified before Congress a couple of weeks ago, advocating for increased AI regulation. This has prompted critiques that he [Sam Altman] simply wants to craft the regulatory framework WITH the government, safeguarding his own company and raising the barrier to entry for new AI players. Antitrust laws exist precisely so that companies like OpenAI and other cutting-edge AI organizations aren't able to dominate the market and control the public narrative moving forward.

Some other pros of AI regulation include, but are not limited to, the protection of national security interests and the assurance of ethical AI usage and transparency. Through properly enforced regulation, government institutions can mitigate potential liabilities to the United States. For example, they can vet AI development for malicious intent (hacking, exploiting vulnerabilities, leaking sensitive information, etc.), helping secure economic infrastructure and eliminating biased and discriminatory AI behaviors trained on misinformation. The pros of AI regulation most likely outweigh the cons; however, there are still some aspects to be skeptical of.

Cons of AI Regulation

One of the biggest "cons" of government regulation I hear about so frequently in the Western world is the stifling of innovation. However, most of this "stifling" comes from the fact that the regulation in question is either:

  • Poorly Designed
  • Overly Strict

An example of poorly designed regulation is alcohol regulation during the era of Prohibition. Intended to reduce social issues related to alcohol (crime and public disorder), it actually led to an increase in organized crime syndicates and the development of underground markets for alcohol. A typical human reaction: the more we are restricted, the more we tend to indulge.

An example of overly strict government regulation is the set of rules around the sale of certain food items, like homemade baked goods, at local markets or by small-scale businesses. These regulations require vendors to jump through various hoops to obtain permits, pass inspections, and comply with health and safety standards. In this sense, they can make it very hard for small-scale entrepreneurs to sell their homemade products.

Another con of regulating AI is that it is incredibly hard to regulate something that many people do not understand at all (and no, I am not saying I understand jack shit, because I don't). This is new-age technology that requires decades of expertise and experience to understand at a fundamental level. As a result, government institutions need to rely not only on the proficiency of outside consultants and leaders in the technological field, but also on their moral tendencies (which I typically view as pseudo-moralistic, but that's for another time). This can lead to poorly designed regulations that benefit the 1% and disparage the remaining 99%.

Lastly, the sheer mass of global development in the field of AI is unprecedented. How is it possible to regulate something that is everywhere on the internet all at once? How is it possible to regulate information that doesn't care about borders, geopolitical conflicts, or socioeconomic issues? AI is not a United States-specific advancement; it's a global one. The sheer complexity of not only the technology itself, but also the actual infrastructure of AI regulation, is not easy to comprehend.

Conclusion/What I Think

At the end of the day, I think that AI needs to be regulated quickly and holistically. I believe the amount of regulation should be contingent on the industry in which AI organizations operate. For example, AI companies in Entertainment, Advertising, and Retail do not need to be held to as high a standard as those in Healthcare, Transportation, Finance, Law Enforcement & Surveillance, and Defense & Military. Yeah, not regulating AI developments in entertainment might lead to a really crappy movie (which is the case already WITHOUT AI enhancement), but not regulating AI advances in any of the latter industries could potentially destroy someone's life.

So how should AI actually be regulated? Well, I do like the idea of a governing body of global experts that will help safeguard worldwide information and protect people from the potential negatives of AI. I think this group of leaders in the space needs to be vetted for unbiased practices and held to ethical standards, just as if they were joining the CIA or any high-ranking government institution (I only used the USA as an example because I live here; I'm not saying it should be the model or standard other countries look up to).

Additionally, I think this body of experts should assign a weight to every industry, with the rating reflecting the possibility of extinction-level occurrences. For example, Defense & Military could receive a rating of 10 (most potentially dangerous) given the implications of AI disruption and the notion that AI systems could infiltrate weapons systems and target humans. The possibility of IMMEDIATE mass casualties in Defense & Military is much higher than in, say, Advertising (I emphasized immediate because you could technically argue that every industry needs to be regulated equally, but I don't think that's possible).

This blog has probably been incredibly boring and incredibly lame to many of you. But at the very least, I hope it helps shed some light on what regulation is (specifically in the United States), where it currently lies in society, and what can and should be expected for the future of AI regulation. I truly believe that AI has the potential to maximize productivity and enhance human life. I also truly believe that, if safeguards are not put in place sooner rather than later, we are in for a rude awakening that is both unpredictable and incredibly scary.
