Rob Reich, professor of political science at Stanford University, currently a senior researcher at the Stanford Institute for Human-Centered AI (HAI), former senior advisor at the U.S. AI Safety Institute, and co-author of "System Error: Where Big Tech Went Wrong and How We Can Reboot" /Courtesy of Rob Reich

"If technology is not subject to democratic oversight and governance, we will be faced with a situation where we must choose between corporations operating technology irresponsibly and one of the models of authoritarian control over technology like that of China."

Rob Reich, a professor at Stanford University and a leading scholar in the ethics of science and technology, emphasized in a recent interview the need to set the right direction for the development of algorithms. According to him, algorithms are neither good nor bad in themselves, but depending on how they are used, they can become an invisible new form of power. Social media is a representative example: its algorithms are designed to maximize user engagement, which makes them highly likely to amplify provocative and emotionally charged content. "If algorithms are left unchecked, without democratic oversight, social and political power risks becoming concentrated in the hands of a few," Professor Reich said, stressing that "governments and corporations must work together to develop artificial intelligence (AI) in a way that supports human-centered and democratic values."

Professor Reich holds the McGregor-Girand Professorship of Social Ethics of Science and Technology at Stanford, served as a senior advisor to the U.S. AI Safety Institute in 2024, and is currently a senior researcher at the Stanford Institute for Human-Centered AI (HAI). He is also a co-author of "System Error: Where Big Tech Went Wrong and How We Can Reboot," which was published in Korean translation in 2022. The following is a question-and-answer session.

Algorithms are increasingly influencing our society.

"In reality, we are living in a world mediated by algorithms. Algorithms play a key role in not only human relationships but also the economic, social, and political institutions we belong to. In important areas such as hiring, criminal justice, healthcare, and education, algorithms are already leading change. The social media platforms through which we obtain information and consume entertainment are also driven by algorithms. The problem is that as the AI era arrives, the influence of algorithms is growing, but at the same time, the way they operate is becoming increasingly opaque. They have reached a level that even developers find hard to comprehend."

Then, do algorithms inherently pose a threat to human society? Or do their benefits outweigh the risks?

"Algorithms can provide tremendous opportunities for society while also posing serious risks. Therefore, this issue should not be approached as a mere choice but as a matter of balance. Responsibly designed and carefully implemented algorithm systems can reduce human biases, increase efficiency, and produce more consistent outcomes. Conversely, however, algorithms may reinforce human biases, spread injustices, and create a lack of transparency in decision-making processes, making it difficult or impossible to challenge decisions made by algorithms."

Concerns are growing that as AI advances, algorithms will no longer be controllable by humans.

"The most fundamental issue regarding the influence exercised by algorithms is their opaque power. Algorithms act as intermediaries in forming social relations. In other words, algorithms determine how we interact, what is encouraged, or what is rendered impossible. For example, they influence whether one can obtain loans, who gets matched on dating apps, whether a job application leads to an interview, which posts one sees on social media, and whether AI will replace our jobs. Ultimately, algorithms are forming a form of invisible hierarchy."

So, what should be done?

"To address this issue, systems must be established where humans manage algorithms. Algorithms must not be allowed to dominate humans. It is not a matter of shifting responsibility to individuals affected by algorithmic decisions. Developers and corporations designing and operating the systems must ensure fairness, transparency, and accountability. Algorithms themselves are value-neutral. They are not inherently good or evil. What matters is whether they are designed, operated, and managed in a manner that respects human values and promotes social welfare."

Political polarization has recently deepened in Korea, and social media algorithms that serve up only biased information tailored to users' preferences are cited as one cause. Is it true that algorithms encourage such polarization and ideological bias?

"As a philosopher, I believe it is important to study this issue from a social scientific perspective. To experiment and research the impact of social media, independent researchers must have access to the data held by large platform corporations such as TikTok, Meta, Google, and X. There is no reason for citizens to trust data that corporations release about their own research, similar to not allowing students to grade their own exams. However, based on the best evidence revealed so far, as I discussed in my book, algorithms within social media are designed to maximize user engagement. This means there is a high likelihood that provocative and emotionally charged content will be amplified. While the so-called filter bubble does not exist as widely as one might think, it is clear that these platforms create an environment where extreme content can thrive and spread.

"In countries like Korea, the essence of the problem lies not only in algorithms but also in institutional structures. A few large platforms wield enormous power over decisions about AI-based content moderation, and those decisions are made by a small group of executives. When platforms are operated to maximize user engagement rather than to serve democracy, the concentration of such power becomes even more problematic."
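Reich's description of engagement maximization can be illustrated with a toy ranking rule (my sketch, not any platform's actual code; the posts and predicted-engagement numbers are invented). When predicted engagement is the only objective, the most provocative item rises to the top regardless of its civic value.

```python
# Toy engagement-ranked feed; a sketch, not any platform's real code.

posts = [
    {"title": "Local council passes budget",      "pred_clicks": 0.04, "pred_shares": 0.01},
    {"title": "OUTRAGE: you won't believe this!", "pred_clicks": 0.30, "pred_shares": 0.12},
    {"title": "New library hours announced",      "pred_clicks": 0.02, "pred_shares": 0.00},
]

def engagement_score(post):
    # The objective rewards engagement alone; nothing here rewards
    # accuracy, civility, or diversity of viewpoints.
    return post["pred_clicks"] + 2.0 * post["pred_shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post['title']}")
# The provocative post ranks first: 0.54 vs. 0.06 and 0.02.
```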

What should be done if users do not actively try to break away from these algorithms?

"Possible solutions include increasing the transparency of algorithms, strengthening antitrust regulations to promote competition, and providing tools that help users make more proactive choices about the information they consume. However, the most important aspect is for democratic governments to establish policy mechanisms that protect the integrity of the information ecosystem. At the same time, careful approaches are needed to ensure that these measures do not infringe on freedom of expression."

You emphasized in your book that technology must be subject to democratic control.

"Yes. In the book, we argued that democratic institutions must adapt to the advancements of cutting-edge technology and guide the development of AI in a manner that supports human-centered and democratic values. If technology is not subject to democratic oversight, we will be left with only two choices: either corporations operating technology irresponsibly or a model of authoritarian control over technology like that of China. Ultimately, we will be in a situation where we have to choose one of the two."

What then should the government and corporations do together to develop responsible AI policies and design fair algorithms?

"Effective collaboration between the government and corporations must include two elements. First, the government should actively recruit personnel with technological expertise to secure 'technical team members.' Corporations should initiate value-based designs that consider social impacts from the early stages of product development by establishing standards and guidelines across the industry. This is not an unrealistic ideal. The regulatory sandbox model being implemented in the UK and Taiwan is a good example. It allows corporations to experiment with new technologies while regulatory agencies can monitor impacts in real-time and adjust regulations accordingly.

"Second, we must reform democratic institutions so they can keep pace with advances in cutting-edge technologies like AI. In the 20th century, economists developed new methodologies such as macroeconomics, microeconomics, and cost-benefit analysis, which led to the creation of institutions like central banks, the World Bank, and the International Monetary Fund (IMF). If technologists are the economists of the 21st century, we need to create new institutions that understand advanced technologies like AI and contribute to democracy. Institutions such as the KAIST Center for AI Fairness in Korea are the beginning of this change, and we must develop them further."