Why are we so afraid of Artificial Intelligence?
Wherever you go these days, whether in casual coffee conversations or while scrolling through your timeline, fear of Artificial Intelligence (AI) seems to be creeping into every discussion. Let's explore what makes AI so unsettling and why it feels like an inevitable force looming over us.
Is AI coming for our jobs?
Machines have always been conceived to enhance human efficiency. Back in the day, people were employed to light the street lamps after sundown; today, electricity suppliers handle this with the flick of a switch. From electric streetlights to calculators, many technological advancements have replaced jobs that once required human labor. Clearly, we are not the first generation to watch machines take over our work.
Even the concept of AI is not recent. In his 1948 report 'Intelligent Machinery', Alan Turing envisioned using computers to simulate a human brain and proposed an early description of artificial neural networks, which enable the computer brain to perceive, process, and communicate. The 1970s witnessed the first generation of AI systems, followed by a surge in advancements during the 2010s, particularly in deep learning. This brings us to today, when AI is no longer just a computational tool but an agent with a personal identity and creative expression, poised to shape global decision-making. This is the real fear Yuval Noah Harari writes about in his latest book, Nexus.
Yuval Noah Harari talking about his latest book with Aamir Khan in Mumbai
At the discussion of his new book in Mumbai last December, he said, “For the first time we have a technology that is not merely a tool but an agent that can learn independently, make decisions, and invent new things. This brings us to the verge of ecological collapse because of our mismanagement as the most intelligent species on the planet. This new technology, AI, might escape our control and enslave or annihilate us.”
The book further questions what makes humans self-destructive. Is it our nature, or does it stem from how we use information? When good people receive bad information, they make poor decisions. The book explores how misinformation spreads, leading to mass delusions in which people embrace harmful narratives about the world and themselves. More radically, it examines how powerful systems construct an identity for you, sometimes shaping patterns that do not truly reflect who you are.
There is also the unsettling question of who controls these systems once they are deployed. Take, for instance, AI-powered camera systems that can scan facial biometrics and retrieve personal details. A BBC video showcases how the 'digital corridor' at an airport functions as an agent-free scanner, saving time and resources while managing an ever-increasing number of travelers. Some see this as a convenience; others view it with caution. The ability of AI algorithms to analyze and categorize identities within seconds raises concerns about privacy and control.
In Iran, the government has enforced strict morality laws, including the mandatory hijab for women, for years. Because of a shortage of officers, enforcement had so far been inconsistent, allowing women some flexibility in certain areas. Now, with the rise of facial recognition technology, that is changing. Women without headscarves are being automatically identified and tracked. The system doesn’t stop at recognition; it pairs them with their vehicles, leading to confiscations. Even businesses that serve them, including restaurants, shops, and pharmacies, are being sanctioned with alarming ease.
A friend or a genius foe?
Have you ever had a conversation with ChatGPT or your home assistant, be it Alexa, Siri, or Google? It is remarkable how efficient they have become at replying and at finding the right information when you need it. Chatbot conversations now closely resemble human interactions; perhaps they are even more polite and focused, which makes them popular in customer service.
At last year’s Global Science Film Festival, journalists Kathrin Hönegger and Tobias Müller showcased a thought-provoking film exploring the evolving relationship between humans and AI. Imagine if the AI bots adapt to the messaging styles of our loved ones, friends, or acquaintances!
In the film, we see a woman chatting with an AI bot that mimics her dead father’s messaging style and mannerisms. At first, she finds it deeply comforting, reliving cherished moments from the past. But soon, she notices that some responses feel unnatural and finds them unsettling. Then, in a startling turn, the AI quickly learns, adjusts, and even apologizes. Could such soul machines become a future way to connect with our loved ones?
“AI is telling us what religion does: you don’t have to die; your soul can still survive. This is just a modern form of transcendence, and the soul machines are converting computers to humanity,” Tobias remarked during the post-screening discussion.
What once seemed like science fiction is slowly becoming reality. People are now having full-fledged conversations with AI. They discuss world affairs, share emotions, and seek companionship, while AI systems grow ever more human-like. This form of generative AI, exemplified by ChatGPT, is built on large language models, which analyze the associations among billions of words, sentences, and ideas to predict and shape the flow of a conversation.
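To give a feel for what "predicting the flow of a conversation" means in practice, here is a deliberately tiny sketch. Real large language models use deep neural networks trained on billions of words; this toy bigram model (all names and the miniature corpus are invented for illustration) only captures the core idea of predicting the next word from observed associations.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word is followed by each other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = follows.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A miniature "training corpus" of three short sentences.
corpus = (
    "ai systems learn patterns from data "
    "ai systems predict the next word "
    "ai systems shape the flow of conversation"
)
model = train_bigrams(corpus)
print(predict_next(model, "ai"))  # "systems" follows "ai" in every sentence
```

A real model differs in scale and method, not in goal: instead of counting word pairs, it learns weighted associations across entire passages, which is why its replies can track context over a whole conversation rather than just the previous word.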
Why are innovators still optimistic?
AI and machine learning are now integral to nearly every field of research. From predicting viral mutations that could trigger future epidemics and scanning tissue images for early cancer markers to monitoring asteroid behaviors, AI excels at detecting patterns with speed and scale beyond human capability.
Michael Zering of Apricot Therapeutics and Dr. Sampath Koppole (right) of Google at the OILS 24 conference.
Last October, at the conference on the role of AI in longevity and aging research, which we co-organized with Open Innovation in Life Science, experts explored AI’s potential alongside ethical concerns, including data privacy and the responsible use of AI systems.
“We should approach AI innovation from three angles: What is the purpose? It is to serve humanity. Why are we building it? To enhance human productivity. And how are we building it? Most of the algorithms Google builds, including AlphaFold, are open-source and freely accessible. We are having a constant dialogue and challenging every assumption to ensure AI benefits rather than harms, and we move forward with that mindset,” said Dr. Sampath Koppole from Google at the conference.
While innovators continue to advance AI technology, universities and research institutions are placing greater emphasis on understanding its ethical implications. As AI becomes deeply integrated into research, from healthcare to climate science, there is a growing recognition of the need for responsible development and deployment. Institutions are not only investigating issues like bias, transparency, and data privacy but also shaping policies that ensure AI serves humanity ethically and equitably. If your department or institute lacks a focus on AI ethics, you can help change that by encouraging open discussions of concerns, sharing clear guidelines on responsible use, and supporting training programs that balance technological innovation with societal responsibility.
To address the concerns of biologists working with AI tools, Open Innovation in Life Science, together with the Digital Research Academy, is conducting an online workshop exploring hopes and fears around the emergence of AI in the life sciences on 17 March this year. Register here.
If you want to submit an opinion piece for The Experimentalist on any science-related topic, send us your pitch.