The rise of Artificial Intelligence (AI) seems to have taken hold of our innermost fears, accelerated recently by the introduction of ChatGPT. But do these leaps forward in AI capability mean that we are all doomed?
Elon Musk would have us believe that, yes, AI poses an existential threat to the human race (but please bear in mind that this man has called one of his children X Æ A-12 Musk, which surely says something about his judgement). Bill Gates, on the other hand, believes that AI will help us to become not just more productive but also more creative.
Recently a number of the leaders I work with have been asking my view, as a psychologist and someone interested (some may say unhealthily) in Artificial Intelligence, Machine Learning and Data Analytics. This is not as crazy as it may sound: after all, psychology is the scientific study of human behaviour, cognition (aka intelligence) and emotions.
So, here are my thoughts, starting with the concerns (which are discussed by others at length elsewhere): does AI present an existential threat?
People fret that AI will somehow develop spontaneously and that its control over our daily lives will spread faster than Covid-19 – but can AI learn to ‘improve itself’ in a way that is beyond our control? The argument behind this is that if a human brain has a hundred billion neurons, then an AI with a thousand billion simulated neurons will be more intelligent than a human. Perhaps the most significant worry here is that AI will be able to ‘read’ our intentions, taking away agency, autonomy and the upper hand of humanity. This concern is further exacerbated by ChatGPT, which people see as just a precursor of what’s to come.
When I first played with ChatGPT it did seem spookily aware of what I was thinking and requesting of it, but after much time spent running through endless questions its limitations became more apparent. The fact is that AI is a long, long way off understanding what’s known in psychology as ‘theory of mind’ (being able to understand and take into account another individual’s mental state), which is arguably one of the most complex and ‘human’ aspects of intelligence. Why?
Brain structure is hugely, hugely complicated – some even say that we know less about the human brain than we do about space. Our brain is one of the most complex and mysterious structures in the known universe, and there’s still much to be discovered about how it works, how it develops, and how it interacts with the environment. As humans, every challenge we face requires a different set of neuronal pathways: to move our muscles we need one set of structures, to store memories another, to see objects another still – and these are all hugely interconnected, often in ways that we’ve not yet been able to verify. AI would need to be programmed specifically for each function, and then deliberately interconnected with other functions in ways that we don’t even understand in the brain, in order to even start to resemble the more complex aspects of being human, i.e. common sense, emotion, empathy and theory of mind.
I would however be lying if I told you I had no concerns. For me the biggest worries involve the following aspects:
- Bias – AI algorithms are only as unbiased as the data they are trained on. If the data contains biases, the algorithm will learn and perpetuate them – and of course every human (even those who believe otherwise) is biased. Unless this is strictly monitored with proper understanding, it can lead to unfair and discriminatory outcomes, particularly in critically important areas such as hiring people for jobs (or, more worryingly, not hiring them) and criminal justice.
- Dependency – as AI becomes more advanced, there’s a risk that we become overly reliant on it, leaving us less practised when it comes to critical thinking, which in turn will impact our decision-making abilities and could also limit our creativity.
- Unethical commercialization – while the algorithms that power the success of Instagram, TikTok and Facebook are not nearly as clever or considered as ‘The Social Dilemma’ would have us believe, the use of data to take advantage of our human nature is worrying and very real.
- Job displacement – as AI systems become more advanced and capable, there is a real risk that they will replace human workers in some roles, particularly in sectors that rely heavily on routine or repetitive tasks. This will hit the lowest paid, least skilled and least privileged hardest, creating more potential for injustice and unfairness within our society.
However, there are other areas that, while concerning, also offer significant opportunities (which I’ll discuss in the next article…):
- Loss of social skills – the increasing use of AI-powered personal assistants like Siri, Alexa, or Google Assistant has made it easier for individuals to complete tasks and access information without the need for human interaction. The less we interact with others, the rustier our skills become. I often find myself shouting at Alexa, and of course there is no need to say please or thank you (although I tend to and then feel very stupid, in effect the interaction is training me out of being polite). This may sound basic and not too much of a catastrophe, but as my parents said when I lacked manners at the dinner table – “If you do it at home, you’re likely to do it when you’re out” – these things become habitual and they iteratively change our behaviour. A loss of social skills and poorer interpersonal communication will without doubt impact healthy relationships and individual growth.
- Decrease in emotional intelligence – as I explain in Mirror Thinking, empathy and emotional skills are developed iteratively through continual interaction with other humans. The often unconscious reading of nuances in face-to-face communication continually adds to our capabilities. But like a muscle, when these skills are not used they become weaker and less developed, and we begin to understand each other less. Research has shown that young adults who interact more frequently via text than face to face have lower levels of empathy. Unless careful consideration is taken, greater use of AI can and will lead to decreases in emotional intelligence, which in turn is likely to exacerbate the current issues we have with loneliness and mental health.
- The future of work – the two points above pose a problem for the workplace. McKinsey research shows that between 2016 and 2030 the demand for social and emotional skills will grow across all industries by 26 percent in the United States and by 22 percent in Europe – as a direct result of, yes, you guessed it, an increased use of AI.
- Bots helping in place of humans – in some very human circumstances such as therapy, bots have been shown to have a significantly positive impact on outcomes. However, along with this ‘therapist in your pocket’ come the concerns of false diagnosis, false treatment (offering the wrong options to someone who is mentally unwell can cause significant issues), and privacy.
This article may look like ‘one side of a coin’ but there are also huge opportunities for AI to help us to become more human which I’ll explore more in Part 2 (focussing in particular on social skills, emotional intelligence, future of work and therapy bots).
In my view, while there are things we need to stay very much on top of, AI is not going to bring a catastrophic end to life as we know it. I’m not dismissing the fact that AI has the potential to spread rapidly in certain domains and to create issues, but we should also remember that AI depends on human design and, ultimately, human control.
Image – pexels.com