With New Tech Comes New Responsibilities – Analysis


By Lydia Kostopoulos

Human imagination is the foundation for everything in the world around us. As motivational writer William Arthur Ward famously said, “If you can imagine it, you can create it.”

Throughout history, it was the people with seemingly audacious and imaginative fantasies who invented so many of the technologies we take for granted today, such as electricity, planes, organ transplants, the internet, and mobile phones, all of which have had far-reaching implications for society and the global economy. With the convergence of multiple technologies in this Fourth Industrial Revolution, the technological fantasies that inventors pursue are even more audacious: flying anywhere around the world in 30 minutes, extending life indefinitely, 3D printing entire cities, managing agriculture from space, communicating with computers through thought, creating artificial intelligence that matches the cognitive capability of a human brain, and communicating with animals. These are just some of the many technological fantasies that people are pursuing today.

It is these types of bold technological fantasies that drive innovation and will impact society. We have had many General Purpose Technologies (GPTs) in our history, most notably fire and electricity; what is different in our historical moment, however, is the speed with which general purpose technologies are being introduced into our society, economies, and homes. AI builds on top of all previous technologies, such as the Internet, mobile phones, the Internet of Things (IoT), and synthetic biology. This rapid integration presents a tremendous economic and social opportunity, but it also carries the potential for immense social peril.

Social cohesion and mental well-being at the whim of decision architecture 

As we live in an era of proliferating emerging technologies, it is worth appreciating the power of decision architecture: the design choices that shape how people make decisions. The best example is the underappreciated and underestimated effect that social media decision architecture has had on the fabric of our society. Social media was created on the basis of a bold technological fantasy: ‘What if there could be a digital platform where everyone could be in touch with all their friends?’ Since then, while it has remained a platform for connecting with friends, it has also expanded to become a utility for businesses, a precision advertising platform, and a loudspeaker for public officials and terrorists alike.

Over the years, the algorithms began to change: technologists and software engineers were asked to design algorithmic systems that keep people on social media platforms for as long as possible so that they view as many ads as possible. Facebook is one such example, with 97.5 percent of all its revenue coming from ads. Corporate decisions that focused singularly on ad revenue, without regard for societal impact, have had consequences for mental health across all demographics of the population.

At the individual level, algorithmic decision architecture that exclusively favours corporate incentives has adversely impacted mental well-being. The Wall Street Journal conducted an investigative report on TikTok’s algorithm and found that it was able to classify viewers and show them items they were interested in even if they never explicitly searched for them. More concerningly, the investigation found that what keeps people viewing more videos on the platform is not necessarily what they are interested in or what they like, but what they are most “vulnerable” to. Facebook and Instagram have been found to exacerbate body image issues for teenagers, particularly girls. Meta is aware of this, and of how its platforms foster low self-esteem among children and teenagers. A study of the social media use of adolescents who died by suicide found recurring themes relating to the harmful effects of social media, such as “dependency, triggers, cyber victimisation, and psychological entrapment.” These negative consequences of AI on social media have not been fully managed and continue to cause harm today.

At the societal level, algorithmic decision architecture that exclusively favours corporate incentives, without consideration of downstream effects, has created polarised and fragmented societies. The Center for Humane Technology outlines the impact of AI on society in the AI Dilemma presentation by Aza Raskin and Tristan Harris. In it, they distinguish society’s first interaction with AI, which came through social media, from its second, which they date to 2023 with the current and emerging generative AI tools. Information overload, doomscrolling, addiction, shortened attention spans, sexualisation of children, polarisation, fake news, cult factories, deepfake bots, and the breakdown of democracy are a few of the harms they identify from society’s algorithmic interaction with social media.

While the software developers did not have malicious intent, they overlooked their duty to society when they focused singularly on creating algorithms incentivised to maximise engagement on the platform. Raskin and Harris’ assessment of what is socially unfolding in the next interaction with AI is a reality collapse due to an excess of fake everything, resulting in turn in a trust collapse. In his book The Coming Wave, Mustafa Suleyman, co-founder of DeepMind (now part of Google), also flags his concern about the ubiquitous nature of generative AI and its ability to democratise the creation of cyberweapons, exploit code, and manipulate our very biology. These concerns, unfolding each day, have not been fully managed and remain notable threats today.

While it is important to grapple with the social challenges of existing algorithms, new algorithms are emerging, along with new ways in which they will be embedded ever deeper into our lives.

New fantasies, new technologies, new responsibilities

As a coping mechanism for dealing with grief after her best friend passed away, Eugenia Kuyda created a conversational AI bot of him based on all their text exchanges, so that she could continue to chat with him posthumously. Out of this experience, she created the company Replika, where anyone can create a personalised AI companion to chat with. The testimonials feature many happy users who feel they have found a friend and that this digital algorithmic companion has alleviated their loneliness. Indeed, Replika and other digital companion AI companies are building a valuable technology that addresses a growing social problem. In May 2023, the US Surgeon General released an advisory report calling out the public health crisis of loneliness and social isolation. Currently, two countries in the world have appointed a minister for loneliness: the United Kingdom and Japan. Studies show, however, that this epidemic of loneliness and social exclusion also has a strong foothold in Africa and India, and likely in other parts of the world where studies have not yet been conducted.

Given the adverse social implications AI has had through social media, as new AI-based chatbots and digital companions are created to alleviate the growing problem of loneliness, it will be imperative to consider the first rule in the “Three Rules of Humane Tech” outlined by the Center for Humane Technology: When you invent a new technology, you uncover a new class of responsibilities. This rule is relevant not only to those who invent a new technology but also to all those who use it. Within the European Union’s General Data Protection Regulation (GDPR), which governs how personal data is managed, there is a “Right to be Forgotten”, a right that did not need to exist until computers could remember us in perpetuity. Will new laws need to be created to force companies to maintain the cloud infrastructure of digital companions across the lifetime of an individual? Will rights be needed for these algorithmic companions, so that those who rely on them do not have to grieve or feel lonely without them? What if people want to marry their AI companion? If AI companions become an important form of social infrastructure and part of human intimacy, new laws will need to be created to protect these customised algorithms and access to them.

In due course, there will be new government policies and regulations to mitigate and manage the social implications of existing and new AI technologies; yet the pace of technological advancement continues to exceed the speed with which regulations are legislated. The absence of regulation does not mean companies should abdicate their responsibility. In an era when loneliness and isolation are on the rise, those who design algorithms have an outsized role to play in creating algorithmic systems that do not destroy social cohesion, exacerbate loneliness, or push teenagers towards suicide. Those who leverage algorithms built by others need to hold themselves and others accountable to ensure that algorithmic systems cause no harm.

In the meantime, the wild technological fantasy we should all embrace today should be to design algorithmic systems that create incentives for human flourishing and socio-economic prosperity.

  • About the author: Lydia Kostopoulos is a strategy and innovation advisor.
  • Source: This article was published by Observer Research Foundation

Observer Research Foundation

ORF was established on 5 September 1990 as a private, not-for-profit ‘think tank’ to influence public policy formulation. The Foundation brought together, for the first time, leading Indian economists and policymakers to present An Agenda for Economic Reforms in India. The idea was to help develop a consensus in favour of economic reforms.
