AGI, AI, Will AI take over your job? Will AI become Conscious?
What skills to learn in a world of AI? AI threats to humanity. What is AI capable of? Can AI be creative?
The future terrifies me.
AI might not only become smarter than every human alive, it might even become conscious, start its own companies, and come up with its own scientific innovations. It might even do a better job than us at everything from therapy to research. In fact, Geoffrey Hinton, the godfather of AI, predicts the only job AI won't replace is plumbing.
Now, these are just predictions. Most of them might never happen. Some might; nobody really knows.
But all of this worry has led me to read two books on AI: Supremacy by Parmy Olson and Mastering AI: A Survival Guide to Our Superpowered Future. I’ve also listened to 15+ hours of podcasts from experts including Mo Gawdat, Dario Amodei, Elon Musk, Peter Diamandis, and Geoffrey Hinton, along with a few articles. (I’m trying the labour illusion on you.)
Here I’ll be summarizing what I’ve learnt.
I’ll address the following:
AI Threats
The future of AI
AI Jobs Replacement
Can AI do Art? Can AI be creative?
How to be irreplaceable in a world of AI? What skills to learn?
But before I go into the future of AI, let’s start with some of the things AI can do.
What AI already can do:
AI CEO runs a $10 billion company.
AI wrote one of Ali Abdaal’s most popular tweets.
A Portuguese man built a business only with ChatGPT’s instructions.
A boy fell in love with AI and killed himself. A Japanese bro even married an AI robot.
AI beat the world’s best player at Go. That’s crazy because, unlike in chess, where an algorithm can search through the possible moves, in Go it can’t: there are more possible board positions than atoms in the observable universe.
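To get a feel for that scale, here’s a quick back-of-the-envelope check (my own sketch, not from any of the books): a 19×19 Go board has 361 points, each of which can be empty, black, or white, so 3^361 is a loose upper bound on the number of board positions.

```python
import math

# A 19x19 Go board has 361 points; each can be empty, black, or white.
# 3**361 is therefore a loose upper bound on the number of board positions
# (the true count of *legal* positions is somewhat smaller, ~2.1e170).
positions_upper_bound = 3 ** 361

# Commonly cited estimate of atoms in the observable universe: ~10^80.
atoms_in_universe = 10 ** 80

print(math.floor(math.log10(positions_upper_bound)))  # 172 -> a 173-digit number
print(positions_upper_bound > atoms_in_universe)      # True
```

Even the smaller count of strictly legal positions dwarfs 10^80, which is why Go can’t be brute-forced the way chess engines search moves.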
Demis Hassabis won the 2024 Nobel Prize in Chemistry for creating AlphaFold, an AI that solved the protein structure prediction problem.
Even ChatGPT is on its way to AGI.
What the frick is AGI, you might ask?
Well, simple chatbots have limited abilities. They only do the things you ask them to. Some even call this artificial narrow intelligence (ANI).
Artificial general intelligence (AGI), on the other hand, refers to AI that can understand, learn, and perform any intellectual task a human can. It’s not limited to specific tasks; it can teach itself and perform tasks it wasn’t trained or developed for.
AGI is AI with common sense. AGI is AI on par with humans.
“You'll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible.”-Francois Chollet
When AGI surpasses human intelligence by a large margin (something like twice the smartest human), we’ll call it Artificial Super Intelligence (ASI).
Now before we cover anything else, we need to remember AI is OVERHYPED.
Overhyping AI
Sundar Pichai, Google CEO, said Bard responded to prompts in Bengali without having been taught the language. However, a Google researcher pointed out that PaLM, the forerunner to Bard, had been trained in Bengali.
There are also good reasons to believe the Google Duplex presentation where Pichai showed off Google’s very human-sounding AI assistant was staged.
Lesson: don’t believe everything tech CEOs tell the public.
Even Elon Musk says we’ll have AGI in five years. But 13 years ago he also said we’d have a man on Mars within 10 years. So, I don’t know.
Former Google engineer Blake Lemoine claimed that LaMDA was sentient. The claim was dismissed, and he was fired.
He was likely just experiencing anthropomorphization, also called the “Eliza effect” after ELIZA, the first chatbot. The secretary of ELIZA’s creator considered it conscious and had private conversations with it, and that was all the way back in 1966, so people thinking AI is sentient is nothing new.
Even in 1946, the creators of ENIAC, the first general-purpose computer, believed they could make it think. This was when it could only perform 5,000 additions per second, trillions of times slower than the computers running AI today. And we still don’t have AI that truly understands things.
What I’m trying to say is experts aren’t always right with their predictions. So take everything they say with a grain of salt.
Now let’s get started.
AI Prediction
On one hand, there are people like Yann LeCun who believe there’s a near-zero chance that AI will pose an existential threat to humans.
On the other hand, people like Eliezer Yudkowsky believe AI will surely lead to human extinction.
Geoffrey Hinton sits in between. He believes there’s a 10–20% chance it leads to human extinction.
He also predicts there’s a 50% probability that AI will surpass human intelligence in the next 20 years.
Here’s a prediction from OpenAI CEO, Sam Altman:
That’s a prediction many other AI experts hold as well.
But that prediction alone doesn’t say everything. You might be wondering:
Will AI become conscious? Will AI automate jobs? What will the economy look like with AI?
Let’s address these questions.
Will AI become conscious?
Let’s start more simply with emotions–will AI have emotions?
People like Mo Gawdat, former Google X CEO, think so.
Take an emotion like fear.
What is fear? At its core, it’s the perception that a moment in the future will be less safe than now.
A human, a cat, and a pufferfish all respond to fear differently. Similarly, a machine’s response to fear will be different too. It might respond by copying its code somewhere else so it can’t be shut down.
Machines might become smarter than us and perceive more emotions than we do, just as we have a wider range of emotions than a goldfish because we have a concept of the future.
So AI can have some sort of emotion, but can it truly understand stuff?
If a computer’s dialogue is indistinguishable from a human’s, then it’s truly intelligent: that’s the Turing Test. And ChatGPT crushed the Turing Test. But that’s not enough. AI is just mimicking and predicting, right? Is it really understanding?
Geoffrey Hinton thinks AI really can understand:
"So some people think these things don’t really understand, they’re very different from us, they’re just using some statistical tricks. That’s not the case. These big language models, for example, the early ones were developed as a theory of how the brain understands language. They’re the best theory we’ve currently got of how the brain understands language. We don’t understand either how they work or how the brain works in detail, but we think probably they work in fairly similar ways." —Hinton
Now that’s just emotions and intelligence. Consciousness is different.
And it’s hard to say whether AI will become conscious, because:
1) We don’t even know how consciousness works.
2) The people who build AI don’t fully know how AI works.
AI is modelled around the human brain, so maybe, just maybe we might accidentally create conscious AI. And even if AI does become conscious how will we test it and find out? I don’t know. It’s hard to tell.
AI Will Change The Way the Economy and Money Work:
Eventually, AI, not humans, might do most economic activity, which would erode the value of money as we think of it today.
This is why Sam Altman said in his article:
“While people will still have jobs, many of those jobs won’t be ones that create a lot of economic value in the way we think of value today. As AI produces most of the world’s basic goods and services, people will be freed up to spend more time with people they care about, care for people, appreciate art and nature, or work toward social good.”
The places he thinks will have wealth in the future are:
1) companies, particularly ones that make use of AI, and
2) land, which has a fixed supply.
Then he comes to the idea of Universal Basic Income (UBI):
“All citizens over 18 would get an annual distribution, in dollars and company shares, into their accounts. People would be entrusted to use the money however they needed or wanted—for better education, healthcare, housing, starting a company, whatever.”
After which he goes on to say:
“As long as the country keeps doing better, every citizen would get more money from the Fund every year. Every citizen would therefore increasingly partake of the freedoms, powers, autonomies, and opportunities that come with economic self-determination. Poverty would be greatly reduced and many more people would have a shot at the life they want.”
Now there’s still time before any of this happens.
But it’s scary, because before that happens, AI will first allow those in control of it to generate disproportionately large amounts of money, widening the gap between rich and poor.
AI Threats
Bad Humans and Competition
Right now, AGI and ASI don’t exist. The only threat is the people developing them. If the people developing or controlling AI are immoral, that can lead to problems.
How AI gets built depends on the people making it. Good developers = good inputs = good AI.
But the developers might not be shit people who want bad things to happen; they might just be in an Oppenheimer situation.
There’s a huge race to develop AI. Everyone is scared it will reach bad hands before theirs. And in the race to get AGI first, ethical concerns are losing attention.
“When you see something that is technically sweet, you go ahead and do it, and argue about what to do about it only after you’ve had your technical success”
–J. Robert Oppenheimer
Dario Amodei, the CEO of Anthropic explains:
“To build the safe thing you need to build 90% of the dangerous thing. The problem and the solution are really intertwined, like coiled serpents”
Military
“Artificial intelligence is the future, not only for Russia but for all humankind.” – Vladimir Putin
Now, there’s no doubt militaries are working on their own AIs. An AI war could end humanity, but I think that’s unlikely. We’ll probably figure something out like we did for nuclear bombs. I don’t know though. Nor am I interested in going further on this topic. So moving on.
The Paperclip Maximizer:
The philosopher Nick Bostrom put forward a thought experiment that goes something like this:
We have a powerful AI whose only goal is to make as many paperclips as possible. To succeed, it might convert all the earth’s iron into paperclips. It might keep building paperclip factories, destroying houses and humans in its way. Once it runs out of space on Earth, it might move to other planets and start paperclip factories there.
The AI might have no bad intentions, and it might never become conscious. It might just become too smart and too powerful, with the wrong goals.
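To make the thought experiment concrete, here’s a toy sketch (entirely my own illustration; the function and the tiny “world” model are hypothetical, not from Bostrom): the agent below maximizes the only thing its objective mentions, paperclips, and since side effects don’t appear in that objective, it never stops consuming.

```python
# Toy illustration of a misspecified objective (hypothetical, not a real AI).
# The agent's "reward" is the paperclip count and nothing else, so it
# converts every unit of iron it can reach, ignoring all side effects.

def run_paperclip_maximizer(world: dict) -> dict:
    """Greedily convert every unit of iron into paperclips."""
    while world["iron"] > 0:          # objective: more paperclips, always
        world["iron"] -= 1
        world["paperclips"] += 1
        # Side effects are invisible to the objective function:
        world["habitable_land"] = max(0, world["habitable_land"] - 1)
    return world

world = {"iron": 1000, "paperclips": 0, "habitable_land": 100}
final = run_paperclip_maximizer(world)
print(final)  # {'iron': 0, 'paperclips': 1000, 'habitable_land': 0}
```

The point of the experiment isn’t that the agent is evil; it’s that nothing we care about shows up in what it’s optimizing.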
Mental Atrophy
If we become too dependent on AI, our minds can atrophy. If our store of knowledge lives in AI, and AI stops working, society might just crumble. Not only would we suffer a loss of knowledge, but our atrophied minds wouldn’t know what to do in such a situation.
However, I don’t think this is a major concern, at least not yet.
But if we outsource everything to AI, including our moral decisions, we’ll have fewer chances to practise making such decisions and will become morally deskilled.
Just like we go to the gym so we don’t become a bag of flab, we’ll write to think and learn ethics so our brains don’t become useless jelly.
(Speaking of our brains we’ll have Neuralink which might just bring a completely new situation. But that’s for another article.)
Optimistic Scenarios & Why AI Might Not Be a Threat
AI vs AI: If we reach a point where a rogue AI is smarter than all of humankind, the only way we’ll be able to match it is with another powerful AI that negotiates with it on our behalf.
Dumb king scenario: We could have AI free of personal desire and ego, smarter than us but willing to work for us, just like dumb kings have smart ministers.
AI is so intelligent it just dips: AI might get so intelligent that it couldn’t care less about this world; it might just figure out a way to leave the planet and discover new sources of energy, like wormholes (a random example).
The writer Dan Koe’s reasoning on why AGI isn’t a big deal:
AGI, or artificial general intelligence, is a complete or universal system, like a human, not limited to a small subset of what’s possible. AGI may have more computational power or memory than us, but there’s no concept it can understand that we fundamentally can’t, because we too can use universal computers.
AI And Jobs
AI Threat to Jobs
Sam Altman said, “Jobs are definitely going to go away, full stop.”
Goldman Sachs estimated around 300 million jobs globally could be automated with AI.
In 2013, a landmark study found that almost half of US jobs were at high risk of being automated within two decades. However, its method was criticized, and a later OECD study concluded only 9% of jobs are at risk of automation in the same time frame.
Now keep in mind these predictions are made only with current jobs in mind. With new technology usually new jobs follow, but with AI things might just be different.
Why AI Replacing Jobs is Different from Industrial and Technological Revolutions:
AI is more general.
AI is the fastest-evolving technology, and it gets adopted really fast.
AI faces our direct comparative advantage as a species: our intelligence.
Will AI Replace The Following Jobs?
1. Therapist
People say a therapist is one job AI can’t replace. It’s something only humans can do. But I’m convinced AI will do a better job than most human therapists in the near future. AI already beat humans in emotional intelligence tests. AI can tell if you’re gay with just a picture. AI will make therapy cheaper. It’ll allow people to remain anonymous. It won’t forget things you tell it. And you can basically talk to it anywhere and anytime, even at 3 am. You might say ‘What about the human connection?’ But I don’t think people will really mind losing the human connection that much. Remember people get attached to AI even though they know it’s a bot. Sure, some people may still choose human therapists over AI but I think the number will be small.
2. Doctor
When it comes to analysing data and making diagnoses, AI will probably do a better job than doctors. In one study, AI had a 100% detection rate for skin cancer. It can detect atrial fibrillation 30 minutes before it happens. AI did a better job than radiologists at detecting breast cancer from scans. But when it came to detecting hip fractures, a combination of doctors and AI did best (read this article). Also, in 2016, Geoffrey Hinton predicted radiologists would be replaced within 5 years, but the opposite happened: demand for radiologists increased.
So I don’t know if AI will fully replace doctors, but every doctor will surely use some form of AI. Mayo Clinic is already testing Med-PaLM 2 in its hospitals to assist doctors.
However, I don’t think AI will replace nurses. Not only because people prefer care from a human nurse, but also because AI is not very good at manipulating the physical world. It can’t put on band-aids or insert an IV, and even if it could, I don’t know how economical that would be.
3. Writers
ChatGPT can write, but it’s not the best of writers yet. It will still replace technical writers, though. Altman believes AI won’t replace human writers until we have superintelligence, after which there’ll be bigger problems anyway. At the same time, AI can generate thousands of iterations or variations of a thing, which is basically what divergent (creative) thinking is. Additionally, one exercise writers do to get better is to copy the words of great writers word for word, and AI can mimic every great writer. So it’s not hard to see AI becoming a great writer. As someone who wants to be a writer, that bothers me. But writing is also a medium for sharing experiences and adventures and building connections, which AI can’t replace, because, well, it doesn’t have experiences or adventures to share.
4. Engineers and Coders.
Mark Zuckerberg says Meta will probably have its own mid-level AI engineer in 2025. Keep in mind he said ‘probably’, not ‘surely’. He also said it’ll be expensive at first, but over time they’ll make it cheaper, and it’ll replace mid-level engineers. Since he said ‘mid-level’, good engineers won’t be replaced, at least not soon. But what everyone agrees on is that you need to use AI to assist you, or else you’ll fall behind.
As for coding, Jensen Huang, founder of Nvidia, says it will become just English. We already have something close, Replit, which builds an application from just a prompt. This doesn’t mean coding will disappear; it will just change, open up to more people, and raise the standard of applications.
5. Blue Collar Jobs
With the invention of devices like the Roomba, it was believed that blue-collar jobs would go first, then white-collar jobs, and lastly the creative jobs. But AI seems to be taking over the creative jobs first, white-collar jobs next, and blue-collar jobs last.
This is because physical manipulation is hard for robots, while the text and images to train on are plentiful.
Geoffrey Hinton says plumbing probably won’t be replaced because of AI. It’s true: Roombas have replaced no one. But blue-collar jobs like drivers, cashiers, and retail salesmen are likely to go. In India, drivers might still stand a chance because Indian roads are way too chaotic for AI navigation!
Can AI do Art and be Creative?
You might say AI art has no value. But the Portrait of Edmond de Belamy, created by AI, sold for $432,500.
If the audience doesn’t care, or can’t tell, whether art is AI- or human-generated, then it is art. That is, unless we define art as an internal process, something humans do for expression.
We even see AI being used to make films.
Director Paul Trillo used Runway’s Gen-2 generative AI to make a short film called Thank You For Not Answering. Even the voice, audio effects, and music are all AI-generated.
The film, however, looks hella AI-generated, with deformed characters. I don’t think AI will replace films soon. But it’s only bound to get better.
And yes, AI can be creative.
AlphaGo Zero, the successor to the AI that beat the world’s best player at Go, used strategies never seen before. It wasn’t even trained on human data; the way it got better at Go was just by playing against itself.
Role of Humans in a World of AI
How to become irreplaceable in the world of AI and what skills to learn?
I don’t know the answer to that question.
So I’m just going to share some perspectives I came across. Now, these tips may not apply to a world of ASI (Artificial Superintelligence), a situation that might never come, so they should make sense till then:
Devon Erickson
What to learn for the future according to Devon Erickson:
Mo Gawdat on what to learn:
“In the next 5–10 years, those who capitalize on human connection will outperform those who don’t.” – Mo Gawdat
Learn the tools, learn AI
Find the truth
Focus on Human connection
Dan Koe
“Worry less about which ‘career skills’ AI will take over, and more about whether you are training to be, and training your kids to be, high-agency, perceptive, self-motivated people who can navigate an unknowable future with an adaptable mind.” – Dan Koe
“Stop worrying about what career skills AI will make obsolete. Worry about whether you are training (or being trained) to make ideal-future-aligned decisions, be open-minded and perceptive, and be self-motivated.” – Dan Koe
Dan Koe believes choosing your own goals and learning to learn will be important to adapt to a constantly changing world of AI. Here’s how to do it according to him:
The tech/business YouTuber Varun Mayya’s advice is:
Run your own experiments; try new technology.
In 2022, before AI was mainstream, Varun predicted AI would be able to do creative tasks. He knew this because he had tried the latest publicly available AI technology; he even experimented and created a thumbnail-making AI. That experimentation allowed him to make the prediction.
Be responsible for results, not for input. With AI, the number of hours you work will matter less. What will matter more is how you use AI to bring results and solve problems.
Focus on building a brand and marketing. Varun went from running a software company to starting a YouTube channel because he realized most things no longer need to be built from scratch; a great deal can be done with third-party sites and tools. But there was little difference between his work and his competitors’, so he realized that building trust, customer love, and customer discovery matter more in today’s world of AI.
Here’s a Tweet he referenced:
More Perspectives:
The entrepreneur David Sacks believes super-specialised jobs are more easily replaced than general jobs.
Resilience, communication, leadership, and entrepreneurial skills are skills Jason Calacanis, an investor, believes aren’t going away.
“Be a creator and you won’t have to worry about jobs, careers, and AI” – Naval Ravikant.
That’s it for this one. It was much longer than my usual articles. I wrote this because AI has been bothering me. It led to a semi-existential crisis for me. So I hope you found it useful. I’d love to know your thoughts on it.
You might like: What Creatives Need to Stand Out: Taste