Why Badly Trained AI Is a Bigger Threat Than a Robot Uprising
At the present level of AI development, humanity doesn’t have to worry about a machine uprising just yet. However, the use of improperly trained AI in important fields, and attempts to use it to exercise control over people’s lives, may pose a real threat in the near future. This was the topic of a seminar on ‘The Unexpected Threats of AI’ recently hosted by the HSE University Laboratory of Transcendental Philosophy.
Professor Svetlana Klimova, who heads the laboratory, asked invited researcher and speaker Aleksandr Khomyakov about the dangers of the widespread adoption of AI. Mr. Khomyakov believes that the possibility of machines rising up and enslaving humanity has been greatly exaggerated, and that such an outcome is not possible at the current level of AI sophistication. ‘Of course, we can still get scared at the idea of our irons rising up to singe us all,’ he joked.
For now, AI remains just a program; the real threat comes from the people using it. ‘The dangers posed by someone using AI improperly are much graver than a hypothetical machine uprising,’ he explained. At the same time, we may not yet realize all the potential dangers of AI, and we run the risk of using it without fully understanding the consequences, much as radioactive cosmetics were popular at the start of the 20th century before their dangers were understood.
Mr. Khomyakov noted that European countries are examining the possibility of restricting the use of AI in certain fields. Russian lawmakers are eager to outlaw its use, but the researcher believes that this could stunt development in the field. It is important to strike a balance between fostering progress and preventing the potential negative consequences of implementing AI. The key, he believes, is to take preventative action.
Another potential problem is the emergence of AIs capable of creating and disseminating texts that are hard to tell apart from those written by humans. A bot powered by a version of GPT successfully impersonated a human for several weeks. Sometimes, it is impossible to distinguish AI-written texts from the real thing. A machine could even manipulate the emotions of the people talking to it. Such bots could post comments on social media to influence the information space, nudge people into taking certain actions, and promote certain opinions, Mr. Khomyakov explained.
The scariest thing is that some talented programmer with unclear motivations could be in control of thousands of messages a second. That’s the real threat: that someone might try to use AI to manipulate people.
More problems could arise if an AI were given some measure of decision-making responsibility without properly assessing its capabilities. For example, if a doctor relied on an AI to analyze X-ray scans, any mistaken conclusions it made could have serious consequences. Using AI to filter calls to emergency services is also risky. ‘If an AI misinterpreted what a caller was saying in a stressful situation, it could endanger lives,’ the expert explained.
This does not mean that there are no suitable applications of AI. In fact, greater use of the technology in the future is inevitable. AI systems can store and process large quantities of data, and can handle routine tasks that humans do not want to do. For example, they could allow traffic police to track not only cars but also people. After setting up cameras and an AI, such a system could operate by itself. Naturally, this raises ethical questions.
In some countries, scandals have arisen around AI-assisted hiring practices that seem to disfavour black and Asian applicants. And in cases where driverless cars have caused accidents with human casualties, it unexpectedly came to light that the AI’s training data did not include images of ambulances, overturned vans, or parked fire engines.
According to Mr. Khomyakov, it is important to remember that AIs are statistics-based programs that are 95% accurate at most, meaning that there is at least a 5% chance of errors occurring. He posed the question: ‘Are we willing to accept such a high probability of error when human lives are on the line?’
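To make that figure concrete, a quick back-of-the-envelope calculation shows how a 5% error rate compounds at scale. The sketch below is purely illustrative: the daily volume of X-ray readings is an assumed number, not one cited at the seminar.

```python
# Back-of-the-envelope arithmetic: what a 5% error rate means at scale.
# The 95% accuracy figure comes from Mr. Khomyakov's remark; the daily
# scan volume is a hypothetical number chosen purely for illustration.

accuracy = 0.95            # '95% accurate at most'
scans_per_day = 10_000     # assumed daily volume of AI-read X-ray scans

expected_errors = scans_per_day * (1 - accuracy)
print(f"Expected misread scans per day: {expected_errors:.0f}")
# Output: Expected misread scans per day: 500
```

At that assumed volume, even a ‘highly accurate’ system would produce hundreds of mistaken readings every day, which is the scale of risk behind his question.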
In his opinion, another danger of the mass implementation of AI is widespread unemployment, particularly among those with fewer qualifications and those engaged in routine work. This could lead to a sharp increase in antisocial behaviour and crime. There is also a threat to those working in creative professions; for example, in India, AI has been used to develop designs for clothes and shoes that are more appealing to consumers and offer greater variety. There are also websites and programs capable of writing poems (after being fed just two lines and a starting rhyme) and creating interior designs. Creative work no longer necessarily requires human talent, and this could cause problems.
Another danger is the possibility of people disengaging from the real world in favour of virtual or augmented worlds. Technology may allow people to create virtual environments and populate them with AI characters of their choosing. ‘An AI could fall in love with you. What young man would turn that down?!’ Mr. Khomyakov said. After all, an AI wouldn’t ‘argue, ask you for money, or get mad at you.’
He added that there are already headsets and suits capable of delivering small electric impulses to users in order to create various physical and even emotional sensations.
We’re losing touch with reality and spending more and more time talking to people on video chats and social networks. It’s getting harder to tell whether they are real people or virtual constructs.
AIs capable of independent decision-making could also pose a major threat in the future. According to Mr. Khomyakov, this will become possible once such a system obtains a model of itself. At that point, an AI would be capable of refusing to follow instructions and could start making decisions independently. To avoid this, it is vital to develop preventative measures to stop AIs from getting out of control.
Diana Gasparyan, Senior Research Fellow of the HSE University Laboratory of Transcendental Philosophy, believes that some of the threats outlined by Mr. Khomyakov are more serious than others. She considers the danger of people abandoning reality to be minimal, because talking to AIs is not interesting enough for people. It is possible, however, that AI developers could try to fool people by creating virtual interlocutors that appear to possess genuine subjectivity.
According to Aleksandr Khomyakov, the fact that millions of people immerse themselves in video games reflects the dangers of virtualization. They realize that they are in a made-up world, but ‘they stay up all night playing games until their eyes are bloodshot because they get an emotional experience from them.’ He suggests that this may facilitate the development of lifelike characters that players can form emotional connections with.