The Alignment Problem

GRC 2023 Global Essay Competition Top 30

By Lucinda Peng

The rampant rise of AI has rapidly produced real ethical and existential concerns for society. Without discrediting the plethora of potential positive effects of AI, the steps and trajectory of its development should be taken seriously. What are the risks and benefits of developing AI in society? How can we harness its power while humanity remains in control? AI poses existential and societal risks that are often overshadowed by its monetary and developmental potential, and that ignorance could lead to the destruction of humanity (Khatchadourian). If halting development is improbable within the current context of society, what steps can we take to mitigate the risks of AI development? Research, logic, and modeling should be used to guard against catastrophic outcomes; the information we feed AI in LLMs should be controlled; and specializing AI development for specific functions could help ensure AI remains a tool.

To harness the power of AI for the greater good, we should first ask whether AI should be developed at all. Human-level or greater intelligence poses a risk to society as we know it: intelligence is humanity's evolutionary advantage and our means of control. Highly intelligent AI brings with it the alignment problem, the challenge of aligning AI's goals with human values; a sentient AI that decided to override human interests would be catastrophic (Christian). Importantly, biology is inherently self-regulating through death: corrupt individuals are checked by the biological clock, a limit that does not apply to AI. Additionally, human feelings are a contained organic experience that AI lacks, with intrinsic beauty and regulatory value. Even setting aside the influence of remorse, destructive actions could be triggered by a single harmful micro-thought or a simple mistake in early development that snowballs, underscoring the importance of thorough research, modeling, and testing during development to build a moral and decision-making framework with no weaknesses. This task is complicated by the nuanced interpretations of language.

Deep integration of artificial intelligence with the internet has the potential to spread misinformation and disinformation, amplifying biases such as racism, misogyny, and homophobia, and bringing personal data misuse, security and privacy risks, and impaired decision-making (Ahmad et al. 2023; The Washington Post). Integrating AI into the internet could also remove a critical layer of human critical thinking (Carver 2023). LLMs, AI systems that use deep learning on large quantities of data, filter and combine information from multiple sources not shown to the public, which makes distinguishing fact from fiction more difficult and obscures decision-making processes (Kerner; Blouin 2023). The connectivity and reach of AI, “knowing no international boundaries,” amplifies these effects and the danger of sentience (Marwala 2023). Given how little we know about consciousness in AI, its unclear benefits, and its possibly significant ramifications, we should focus our resources on preventing such occurrences, through logic and through filtering and selecting the sources and language fed into LLMs. AI is an adaptive governance issue, requiring international education and planning (Marwala 2023).
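As a concrete illustration of what "filtering and selecting sources" for an LLM could mean, the minimal Python sketch below keeps a training document only if it comes from an allowlisted domain and contains no blocked terms. The allowlist, the blocked-term list, and the document format are hypothetical assumptions for illustration, not a description of any real LLM pipeline.

```python
# Minimal sketch of source filtering for LLM training data.
# TRUSTED_DOMAINS and BLOCKED_TERMS are invented placeholders; a real
# pipeline would use curated allowlists and a trained toxicity model.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"arxiv.org", "nature.com", "gov.uk"}  # hypothetical allowlist
BLOCKED_TERMS = {"slur_a", "slur_b"}  # placeholder for a real content filter

def keep_document(doc: dict) -> bool:
    """Keep a document only if its source is trusted and its text is clean."""
    domain = urlparse(doc["url"]).netloc.removeprefix("www.")
    if domain not in TRUSTED_DOMAINS:
        return False
    text = doc["text"].lower()
    return not any(term in text for term in BLOCKED_TERMS)

corpus = [
    {"url": "https://arxiv.org/abs/1234.5678", "text": "A study of alignment."},
    {"url": "https://example-rumors.net/post", "text": "Unverified claims."},
]
filtered = [d for d in corpus if keep_document(d)]
print(len(filtered))  # 1: only the trusted, clean document survives
```

Even a crude gate like this makes the provenance of training data auditable, which is precisely what opaque multi-source aggregation lacks.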

Graphic by Violet Dashi. Illustrations by Nadia and Simple Line via Adobe Stock

The computational abilities of AI make it both a powerful tool and a formidable threat. AI excels at repetitive, data-driven, computational tasks, outpacing humans (Simplilearn). A double-edged sword, it promises countless medical, scientific, technological, philosophical, and developmental benefits. Regulation, strategic implementation, testing, modeling, specialization, and research can moderate the risks of hallucinations (falsehoods resulting from incorrect or incomplete data), consent issues in data sourcing, and bias in LLMs (IBM 2023). Overreliance on AI threatens human critical thinking, and combined with the amplified inaccuracies of LLMs and the dangers of AI sentience, this is an insidious mix that could lead to societal mis-development and collapse (Carver 2023; Forbes Council 2021).
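One hedged example of testing for hallucinations: a crude "grounding" check that flags an answer as unverified when too few of its words appear in the source material it claims to summarize. The whitespace tokenization and the 0.6 threshold are illustrative assumptions only; real verification systems are far more sophisticated.

```python
# Hedged sketch of a hallucination guard: flag answers whose vocabulary
# is poorly supported by the source text. Punctuation stripping,
# whitespace tokenization, and the 0.6 threshold are assumptions.
import string

def _words(text: str) -> set[str]:
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def support_ratio(answer: str, sources: list[str]) -> float:
    """Fraction of the answer's words that also appear in the sources."""
    answer_words = _words(answer)
    source_words = _words(" ".join(sources))
    return len(answer_words & source_words) / len(answer_words) if answer_words else 0.0

def check(answer: str, sources: list[str], threshold: float = 0.6) -> str:
    return answer if support_ratio(answer, sources) >= threshold else "[UNVERIFIED] " + answer

sources = ["The moon is a rocky satellite that orbits Earth."]
print(check("The moon orbits Earth.", sources))               # passes unchanged
print(check("Cheese composes the lunar interior.", sources))  # flagged [UNVERIFIED]
```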

Small-scale, simple, and transparent models, with systems of checks and balances, should be employed alongside professional education and development; a sketch of what such a transparent, human-checked model might look like follows below. Meanwhile, AI has the potential to take over work ranging from simple manual labor to accounting (Rostron 2023), and job automation could widen the wealth gap (Mandal 2023). Thorough consideration of, and planning around, individual rights and societal structures could soften otherwise unavoidable, permanent job loss. Ironically, avarice continues to drive AI development, even after Elon Musk, Steve Wozniak, Stuart Russell, and other technology and entrepreneurship titans signed a Future of Life Institute open letter calling for a pause on AI development (Future of Life Institute 2023).
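The sketch below illustrates "small, simple, and transparent" with a built-in check and balance: a linear scorer whose every weight is human-readable, and which routes borderline cases to a human reviewer instead of deciding alone. The weights, features, and thresholds are invented purely for illustration.

```python
# Hedged sketch of a transparent model with a human-in-the-loop check.
# WEIGHTS and both decision thresholds are illustrative assumptions.
WEIGHTS = {"income": 0.4, "credit_history": 0.5, "debt_ratio": -0.3}

def score(applicant: dict) -> float:
    """Transparent linear score: every term can be inspected and explained."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def decide(applicant: dict) -> str:
    s = score(applicant)
    if s > 0.7:   # confident approval
        return "approve"
    if s < 0.2:   # confident denial
        return "deny"
    return "route to human reviewer"  # the check-and-balance step

applicant = {"income": 0.8, "credit_history": 0.6, "debt_ratio": 0.5}
print(decide(applicant), round(score(applicant), 2))  # route to human reviewer 0.47
```

Unlike a black-box network, every decision here can be traced to named, auditable weights, and uncertain cases are escalated rather than automated.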

Ultimately, AI is a higher intelligence, and we should not drown amidst its cornucopia of dazzling opportunity. Patience, future-oriented thinking, and the recognition that intense benefit is inseparable from risk are essential. Independent, interdisciplinary, collaborative, and strategic development; education; research; modeling; safety features; bias reduction; input selection; and working backwards from desired outcomes could increase the likelihood of long-term prosperity. Humanity's survival should remain the priority: the further AI develops, the greater the risks. It is essential that we do not allow avarice and the riches of today to get in the way of richness for humankind tomorrow.

Bibliography

Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). "Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness and Safety in Education." Humanities and Social Sciences Communications, 10(1), 311. https://doi.org/10.1057/s41599-023-01787-8.

"The Real Danger of LLMs." Skiller Whale, Medium, https://medium.com/skiller-whale/the-real-danger-of-llms-e38b72530e7a.

"AI-generated Images Bias Racism Sexism Stereotypes." The Washington Post,https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/.

"AI's Impact on Unequal Distribution of Capital and Wealth Across the World." Medium,https://finclubju.medium.com/ais-impact-on-unequal-distribution-of-capital-and-w ealth-across-the-world-87440256710a#:~:text=While%20AI%20has%20the%20p otential,certain%20groups%20are%20disproportionately%20affected.

Bao, Fanchen. "Book Summary of Human Compatible." Medium, https://fanchenbao.medium.com/book-summary-of-human-compatible-6f36a8b89bf9.

Khatchadourian, Raffi. "The Doomsday Invention: Artificial Intelligence." The New Yorker, https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom.

"Summary of The Alignment Problem: Machine Learning and Human Values by Brian Christian." Medium, https://medium.com/@ridgers10/summary-of-the-alignment-problem-machine-learning-and-human-values-by-brian-christian-2d2edd4a1530.

"The Ethicality of AI Art: Potential Issues." MakeUseOf, https://www.makeuseof.com/dark-side-of-ai-art-potential issues/#:~:text=The%20ethicality%20of%20AI%20art,to%20theft%20and%20harmful%20stereotypes.

University of Michigan-Dearborn. "AI’s Mysterious Black Box Problem Explained." https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained.

World Economic Forum. "Top 10 Ethical Issues in Artificial Intelligence." WEF, https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/.
