Could Elon Musk Use Twitter To Develop Brain Implants? – Interview
Elon Musk, the billionaire CEO of Twitter, also heads a firm working on technologies that can be implanted in the brain. This represents an obvious conflict of interest and a risk for users, warns a Swiss-based researcher of technology ethics.
By Sara Ibrahim
Elon Musk, the American tycoon who heads several innovative companies such as Tesla and SpaceX, continues to make headlines. His takeover of Twitter in late October added a further jewel to the crown of the high-tech entrepreneur, who himself has over 110 million Twitter followers.
Musk’s visionary and controversial image amuses some, but worries others. Apart from electrifying the world and colonising Mars, the billionaire believes he will one day be able to hook up the human brain to artificial intelligence. To this end, he started Neuralink in 2016. The company aims to produce neural interfaces that can be implanted in the brain. These devices are still at an early stage, and experiments using them have only been done on monkeys and pigs.
Yet with access to the sensitive data of the 330 million active users of Twitter, Neuralink could develop invasive neurotechnologies, that is, implants able to read and manipulate people’s brains by influencing their behaviour, memories, thoughts and feelings. This is what Marcello Ienca fears. He is a researcher in neurotechnology ethics at the Federal Institute of Technology Lausanne (EPFL).
Switzerland is heavily involved in the development and regulation of technologies that act as an interface with the human nervous system. Ienca admits, however, that the Alpine country could do little on its own to prevent the creation of systems or platforms able to manipulate public opinion.
SWI swissinfo.ch: Why do you think Elon Musk is “morally unfit” to be both head of Twitter and CEO of Neuralink, as you said in a recent Twitter post?
Marcello Ienca: Because the same person who owns one of the leading companies in the world producing neurotechnologies for implantation in the brain is now also the owner of a social media platform that collects sensitive data on millions of users. This fact is a bit worrisome.
Whoever gets involved with neurotechnologies to scan and manipulate the human brain should be operating on moral high ground. That’s not the case with Elon Musk. We are talking here about a pretty eccentric individual who already uses his Twitter feed to act like the chief troll of the web, influencing the market performance of his companies and politically influencing millions of voters, as we saw during the American mid-term elections [Musk called on people to vote for the Republicans, editor’s note].
Nothing in his way of acting would lead you to believe that he would be willing to stop manipulating public opinion for ethical reasons. This makes him unfit to develop technologies interfacing with the brain, which is the number one moral issue we’re dealing with right now.
SWI: On paper, however, there is no connection between Neuralink and Twitter. How could tweets help a neurotechnology business develop devices capable of influencing the human mind?
M.I.: Tweets can reveal a lot about people. They may hint at a person’s political credo or religious beliefs, but also at their thoughts, emotions and moods. Using artificial intelligence, it is possible to analyse someone’s feelings from their verbal language. This process, known as sentiment analysis based on natural language processing, could derive psychological information about a person from their tweets with a fair degree of statistical reliability.
This process would tell the analyst certain things about the person – for example, if they tend to be more positive or negative, a risk-taker or fearful – so they could then be bombarded with targeted advertising or information which might be true or false. A case in point was the Cambridge Analytica scandal, where there was psychological profiling of Facebook users to influence them politically.
As yet, it is not easy for neurotechnology to extract such sensitive information from brain data, and the number of users is still limited. But if brain data were combined with psychological data about millions of Twitter users, it would be possible to improve the ability of social media and neurotechnology to understand and classify people on the basis of psychological characteristics, and to influence and manipulate them on a massive scale.
Facebook was moving in this direction in 2018 when it launched a brain-computer interface project, which Zuckerberg later dropped, probably because of its cost.
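To make the kind of analysis Ienca describes more concrete, here is a minimal, purely illustrative sketch of automated sentiment scoring on tweet-like text, using the VADER analyser from the open-source NLTK library, which is tuned for short social-media messages. The example tweets and the crude per-user average are invented for illustration only; real psychographic profiling would rely on far larger datasets and more sophisticated models.

# A minimal, illustrative sketch of sentiment analysis on tweet-like text,
# using NLTK's VADER analyser (a lexicon-based model tuned for social media).
# The tweets and the per-user averaging below are invented for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the lexicon

# Hypothetical tweets from a single, fictional user.
tweets = [
    "Just tried the new update and it is absolutely brilliant!",
    "Everything about this week has been a disaster.",
    "Not sure how I feel about the election results, honestly.",
]

analyzer = SentimentIntensityAnalyzer()

# VADER's 'compound' score ranges from -1 (very negative) to +1 (very positive).
scores = [analyzer.polarity_scores(text)["compound"] for text in tweets]

for text, score in zip(tweets, scores):
    print(f"{score:+.2f}  {text}")

# A crude 'positivity' estimate for this user: the average compound score.
print(f"Average sentiment: {sum(scores) / len(scores):+.2f}")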
SWI: Should we expect Musk’s neurotechnologies to be able to understand and influence the human mind?
M.I.: It’s likely. Today’s neurotechnologies do not yet make it possible to read people’s minds extensively, but they can establish statistical correlations between brain data and psychological information, which raises concerns about privacy. As the number of users (and thus the quantity of data) increases, the risk of even more extensive invasions of mental privacy will grow too.
Here we are no longer talking only about helping patients affected by mental and neurological problems, who might benefit greatly from such technologies. The aim is also to market devices that an increasing number of people can use to record intracranial activity and optimise mental processes, concentration and memory. Fitbit-style devices that monitor sleep, attention and anxiety from brain signals are already on the market. Some apps even let you control physical objects with the mind.
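As a simplified illustration of the statistical correlations between brain data and psychological information that Ienca mentions above, the short sketch below computes a Pearson correlation between a hypothetical brain-signal feature and self-reported mood scores. All numbers are invented; real studies involve far richer data and more careful statistics.

import numpy as np

# Invented, purely illustrative data: a brain-signal feature (say, average
# alpha-band power per recording session) and a self-reported mood score
# (1 = very low, 5 = very good) for the same eight sessions.
alpha_power = np.array([0.42, 0.55, 0.31, 0.62, 0.48, 0.39, 0.58, 0.45])
mood_score = np.array([3, 4, 2, 5, 4, 2, 5, 3])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(alpha_power, mood_score)[0, 1]
print(f"Correlation between the brain feature and mood: r = {r:.2f}")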
SWI: Is Switzerland in a position to prevent the risks surrounding neurotechnologies and protect the privacy of users?
M.I.: Switzerland is one of the most active countries in the world in studying the ethical and social implications of neurotechnologies and in developing innovative standards to tackle these challenges.
The country participated in drafting the OECD recommendation on responsible innovation in neurotechnology, which is today the principal international standard in the field. Organisations like GESDA, the Geneva Science and Diplomacy Anticipator, have also placed neurotechnologies high on their agenda.
That said, Switzerland on its own would be powerless in dealing with Musk or any global company. Its room for manoeuvre to protect the privacy and the mental integrity of what is a fairly small number of users is limited. The European Union, with more than 400 million inhabitants, would be in a better negotiating position, since it would be a blow to Musk to lose such a large number of users.
Translated from Italian by Terence MacNamee