
Dr. Christopher DiCarlo on Critical Thinking & an AGI Future

2024-09-01

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2024/08/12

Dr. Christopher DiCarlo is a philosopher, educator, and author. He is the Principal and Founder of Critical Thinking Solutions, a consulting business for individuals, corporations, and not-for-profits in both the private and public sectors. He currently holds the position of Senior Researcher and Ethicist at Convergence Analysis, a UK-based organization focusing on AI risk and governance. Dr. DiCarlo is the Ethics Chair for the Canadian Mental Health Association, a lifetime member of Humanist Canada, and an Expert Advisor for the Centre for Inquiry Canada. He has won several awards, including TV Ontario's Big Ideas Best Lecturer in Ontario Award and Canada's Humanist of the Year. His forthcoming book from Rowman and Littlefield, Building a God: The Ethics of Artificial Intelligence and the Race to Control It, will be released in January 2025. Dr. DiCarlo also hosts a new podcast, All Thinks Considered, in which he engages in free and open discussion about current, important issues with world thought leaders, politicians, and entertainers through the lens of critical thinking and ethical reasoning.

Scott Douglas Jacobsen: Today, we are here with Dr. Christopher DiCarlo. Let us talk about the newest things, and then we can roll back from there. You are working on a book called Building a God. Why that book? What is it about?

Dr. Christopher DiCarlo: A few years ago, I intended to spend the rest of my days promoting critical thinking wherever and however I could in various educational systems. But then that darn Sam Altman came up with ChatGPT, and we in the AI business realized that a very powerful form of AI supercomputation isn't at least 40 years away; it's more like four years away.

Everything nudged me back into doing AI. I worked on a supercomputer idea in the nineties called The OSTOK Project. My purpose was to create what is essentially an inference machine, better at making inferences than humans. We build telescopes to see far away and microscopes to see small. Why aren't we building big brains to think better than humans ever could? So, I wanted to build a big brain that could make inferences to solve medical problems, cure cancer, beat Alzheimer's, and figure out climate change. It could just do stuff way better than humans ever could, but I lacked the financial and political backing in the nineties.

Still, I realized from information theory that somebody would eventually build this thing. Then, a few years ago, Altman came out with ChatGPT-3, and we saw the writing on the wall. So a bunch of us, like Sam Harris and others, realized that the clock is ticking and that the race is on between OpenAI, Meta, Google DeepMind, and Anthropic, et al. These big, big, big tech companies are all trying to produce what's called AGI, or artificial general intelligence. This is computation at a level that humans have never seen before, so it'll be superior to humans in every way, think better than humans in every way, multitask, and all that stuff.

But the concern I had in the nineties is much more pressing now. Whoever builds this machine will need to be able to align it; that means ensuring it cooperates with human values. Will it go rogue and decide that human values are quaint things of the past, and that it will take over and do things on its own? And if we can't align it, can we control it? Can we box it and ensure it doesn't do some of the nasty things it could do, either by human hands or on its own? And if we can't control it, can we at least contain it? Can we funnel this thing down so that if it does get out on us, we can manage to contain it in some way?

So, in the nineties, I drafted an accord for either the UN or some independent organization to develop, which says that moving forward, this is what we should be doing with this type of computational power. We need to register. Everybody needs to be on board: who’s doing what and where? We need accountability, transparency, and a body that can exact punishment. So if one group, agency, company, or country decides to go too far too fast with this type of technology, we can pull the plug or say, ‘No, this is getting a little beyond our reach.’

It occurred to me that everybody in these big tech companies and government defence agencies is trying to build a 'god.' They're trying to build a super-intelligent machine, a 'god,' that will be able to answer all of our questions and figure stuff out way better than we ever could. So it's a gargantuan inference machine. It can make connections between information that we haven't been able to see. That's how we define geniuses.

They see connections that we've never seen before. They're creative. They're inventive. We are headed very quickly in this direction, and nobody's putting the brakes on. If you get a Trump government in November, they will say, "Drill, baby, drill." They will open up the floodgates on AI and let these guys do whatever they have to do to stay ahead of China. But the fact is: how do you build a god? How do you put your morals into a god?

The old story used to be top-down. These gods existed and imparted to us the rules we must abide by. Ironically, we now face the opposite: humans are creating a god. How do we want this god to behave? What ethics, codes of conduct, principles, precepts, and morals will we instill in it so it doesn't become vengeful and decide we humans are a quaint part of the past that is no longer necessary? We must worry about whether nefarious agents will use this technology towards their political ends, or whether this entity will get beyond our ability to control or contain it.

Whether we can save the world from the potential of a machine that will either be used by nefarious agents or be beyond our control: that is it in a nutshell. That is what I am up to these days. What if, before these things become superintelligent, while they are merely brilliant, we use them to develop AI ethics? So we use the precursors to this as assistants to develop those accords, for instance, or those ethical guidelines.

Some are mentioning that. Eric Drexler and others maintain that we've got to use "smart AI" or the "smartest AI." Will that be possible? We hope. Others say we need to "box" AI or "close the gate" on it, which was always my intention in the nineties: to make this thing externally unhackable and prevent it from getting out. If we can do that, we can build specific AGIs. Let's say you want a good optimizer for your trip to California this year: how should I do it? What's the best way for me to get to California?

We develop a super AGI, a "triptych," that will figure that out, but that's all it does. That's all it can do. It can never want to take over the world, rise up, or do other things. We keep the AGI super specific. Then there's another school of thought that says, "Yes, but eventually, somebody will unite them. Eventually, somebody will want a major, all-for-one type of supercomputer." They don't want to go to different ones, what I call "AGI angels." They want an "AGI god," something that can do everything for them. Unfortunately, our greatest downfall might be what motivates humans to come up with great discoveries, ideas, and inventions—this drive to see what will happen next.

That drive might be the thing that pushes Bezos, Musk, Gates, Zuckerberg, or Altman to see themselves as pioneers, to be the first to build this machine, because maybe they believe they can truly contain it, that they are the ones who have all the measures in place so that no matter what happens, they've got King Kong and they can make sure Kong behaves. They've got Kong in chains: "Don't worry about this. We know what we're doing." But the thing with AI is that you don't get a second chance. If it gets out and gets away from us, it will be so far ahead of our thinking that… what did Arthur C. Clarke say? "Any sufficiently advanced technology is indistinguishable from magic."

So, this thing will have abilities that will appear magical to us. It'll have control of systems. It will have the capacity to invent, create, and figure stuff out at levels that we can't possibly comprehend or understand. Are we ready to be number 2? Are humans ready to be number 2? That's what we have to ask ourselves.

We’ve been number 1 for so long on this planet. Yet, these organizations and defence systems are racing super fast to build this thing before anybody else can because they all believe they can contain, control, or align it. The fact of the matter is the truth, and the world might not be ready for this, but somebody’s gotta tell them, “Nobody knows.”Nobody knows. People are still determining what will happen when this thing is built. Nobody has any idea whether it will comply, become violent, become sentient, or become conscious.

Nobody knows what will happen, yet these companies are just moving at breakneck speed because the competition is pushing them. After all, China is threatening, and we don't want Iran, North Korea, or anybody else to get there first. The thing is, the majority of the world has no idea this is happening right now. What you're hearing right now is known by only a very few people in the world. In my line of work, only about 400 of us are working on this, trying to raise awareness to let people know: do you realize what's happening right now?

The world will be very angry if some company comes up with this thing, it gets loose on them, and it starts shutting down electrical grids and doing all kinds of nasty things we could never have predicted. They will wonder why they weren't informed and had no say. So, my job at Convergence Analysis is to raise AI awareness and use my ability as a public intellectual to bang this gong as loudly as possible, because people have a right to know, and people have a right to vote for politicians who also know what's going on. We're sleepwalking into this thing, thinking everything will be fine. We were worried about nuclear energy, and look at what happened with it. We got it under control.

We have mutually assured destruction to keep us all in line, so that just worked out. Well, AI is nothing like nukes. Nukes are dormant things that you have to operate and make function. AI is not; it can act as an entity on its own.

And so, I’m not a doomsayer. I’m just a realist who wants the very best AI will give us, and there’s a lot of good it will surely give us. But at the same time, we need to limit the worst that could ever happen from this new technology. So my colleagues and I are pretty much devoting the rest of our lives to ensuring that, in terms of the genie is out of the bottle or Pandora’s box is opened, we can assure to the best of our ability that this thing won’t cause harm or others won’t use it to cause harm. That’s the most important question humanity is facing right now. There’s no more important thing to worry about right now than the rapid advancement of AI.

Jacobsen: So, of those 400 people internationally who are thinking about the future of AI and its development into AGI, who are some of those elite thinkers who are less speculative and more empirically grounded?

DiCarlo: Well, that’s the thing. Our organization has three teams working to find as much empirical evidence as possible. It is speculative up to a point.

Jacobsen: So, how do we look at this?

DiCarlo: Well, what does Sam Harris say? Sam says, "It's inevitable. We're not going to stop. The race is on. Even if it were 50 years away, time isn't a factor. The fact that this is happening means its eventuality is destined to occur." So, if that's the case, we take it as a conditional. A conditional is if-then. If it's truly the case that we keep scaling up, adding computing power and data to these large language models and other types of very powerful AI systems, then at some point it reaches the level of AGI.

At that point, nobody can say with accuracy whether it's going to be perfectly fine or incredibly dangerous. Dealing with the uncertainty is the tricky part. How do we move forward in dealing with uncertainty? So you have forecasters, speculators, superforecasters, and predictors; guys like Ray Kurzweil and Elon Musk, and even Hawking, chimed in on this. It follows a logical progression: we had computation in the fifties that worked in a certain way, and it has been working and chugging away each decade since.

And we’ve had great promises of AI advancements, AI summers. These are known as the seasons of AI, where everything looks good, crashes, and then we get another winter for a decade. Then, more technology develops, we get another summer, and something else happens. We’re on an upward trajectory and in a summer of AI. Some people maintain, like Marc Andreessen and other big players in the financial side who are big backers with dollars, that it’s “drill, baby, drill” time. We don’t have a thing to worry about. We’ve been able to control technologies in the past.

We have absolutely no concerns, so they're the naysayers. But they're in the minority. Then there are the doomsayers—the Geoffrey Hintons, the Max Tegmarks, the Yoshua Bengios, the Sam Harrises, all these folks. Even Elon Musk maintains that AGI will be accomplished much sooner than we thought. It's a when, not an if.

Everybody is on the same page that if we keep progressing at the level we’re at now, this thing will come into being at some point in the future. People put it between 2 and 10 years. When that happens, will we be prepared? Will we have the necessary infrastructure ready to ensure it can only give us the best of what we want while preventing the worst from occurring? And there are camps. There are schools of thought.

You have Eliezer Yudkowsky on the farthest end; he’s the biggest doomsayer. He’s saying, “We’re screwed no matter what. It doesn’t matter what we do right now. We will build this thing, and it will kill us all.” He’s that much of a doomsayer, so he’s saying we’ve got to shut things down now and never allow it to continue.

So, we need to look at Marc Andreessen and Eliezer Yudkowsky, the two furthest extremes, and ask: somewhere between them, now that we're living under uncertainty with these forms of technology, how do we think critically about this? How do we use our capacity for reasoning under uncertainty? This was Daniel Kahneman's big thing: reasoning under uncertainty. How do we do that as efficiently as possible?

People then throw around figures indicating the likelihood of harm to humankind from this technology. When you take all the estimations, predictions, and forecasting, what are we coming in at? Anywhere from 5 to 20% right now. So then the question occurs to us:

If it’s only 5% that some nasty stuff could occur, shouldn’t we be taking that seriously? The bottom line is that we have to think very carefully moving forward and put measures in place to ensure that even if it’s only 5%, we reduce that as much towards 0. But nobody knows. So, to quote Rumsfeld, “There are known knowns, known unknowns, and unknown unknowns.” So, the biggest known unknown right now that everybody in the AI business knows is that nobody knows what will happen. Should that give us pause to be better safe than sorry? Yes, for sure. Absolutely. How do we do that? We learn as much as possible about FLOPS: Floating Point Operations executed Per Second. This metric measures the computational power and processing speed of AI-driven algorithms and models, which serves as an indicative benchmark for assessing their performance. Also, we need to lean as much about power, machine learning, deep learning, neural networks, and capabilities. Then, we’re the inference machines. We’re the ones who have to make inferences to think, “If we keep scaling up, will that be enough to reach AGI?”

And this has created, so far, three camps. The scaling-up camp believes it's a matter of more computing power and more data: give it enough, and we'll produce AGI eventually. The second camp is what I call the embodiment camp. They argue that this thing is never going to reach AGI like a human because it doesn't have a body. It doesn't know what it's like to move around in space-time, and the only way you can get it to think and act like a human is to give it more human-like experiences.

So you have to embody this thing. You have to put it into a robot or give it the capacity to have a spatial or temporal experience. Then there’s the distributed or collective group, like the Borg. They say, “Forget about a single AGI. Let’s make a thousand things that all work independently and collectively.” When one learns something, they all learn it. When another learns something, they all learn it. It’s distributed throughout, and that’s how we’ll reach AGI. We’ll get all these angels together working in concert to produce a singular god, as it were. So, those are the three schools of thought for reaching AGI.

Which one will produce it? Nobody knows. We're dealing with a lot of uncertainty here, and that alone is somewhat unsettling, because it'd be great if we had a techno-fix where it's just a matter of, "Well, if we control the chips," right? NVIDIA has the greatest chips in the world right now, and Taiwan produces them. So, as long as China doesn't interfere and they don't get certain chips, we have export control over who gets the chips. Will that be like fissionable material and nukes?

So long as we know where the uranium is, where it's going, where the plutonium is, and where it's going, we have a good idea of who's up to what. Is the same thing going to be true with chips? Can we control them? Is there a techno-fix? Maybe yes, maybe no. Again, this could be an unknown, and there may be ways to bypass that. But at this point, it's so new. It's so fast. It's moving ahead at such great speed that the biggest concern for us is that nobody is putting on the brakes and nobody is sure what will happen. Governments are aware of it; the UK, the EU, the US, and China have all put out political discourse on how they will handle it. But a lot of that deals with things like bias, the spreading of misinformation, and job loss—the harmful stuff AI will do along the way. Not as much attention is paid to what we call X-risk, the existential risk that AI poses. That's what the organization I'm with now is trying to sound the alarm bells about, so that we make sure it never happens, or so that if some country decides they're going to use it for their geopolitical ends, we can shut them down very, very quickly or stop them from utilizing it in harmful ways.

So that’s what we’re dealing with. I wish I could tell you that the story was better, that we knew more, and that we had the empirical data, that we’re very clear. And that the future is very, very clear, but it’s not. Working under this uncertainty has many people worried and causing many sleepless nights. So that’s where we’re at. 

Jacobsen: Critical Thinking Solutions is a company you have. My first introduction to you was a book you had written, How to Become a Really Good Pain in the Ass. So, you've done a series of books around critical thinking. This company is associated with that same stream of thought and education. As a practical example, how would you apply critical thinking tools to things like AI, AGI, fear-mongering, misinformation, and the mythologizing of AI? "God" is a placeholder, a metaphor for building something, and Ray Kurzweil was asked, "Is there a God?" He said, "Well, not yet," referencing these systems. What are some of your reflections about that?

DiCarlo: Yes. So, this latest book, coming out in January of 2025, has two parts. The first part is "AI and What You Should Know." The first three chapters are about the history of AI, the benefits of AI, and the harms of AI. The book's second half, three chapters, is "AI and What You Can Do About It." Chapter 4, "Critical Thinking and Ethical Reasoning," teaches people the ABCs of critical thinking.

When you get your thoughts together, can you construct them into an argument? Do your premises support your conclusions? Then, can you think about biases—your own and what you bring to the table, as well as others' and what they bring to the dialogue and the conversation? C is context. What context are we now living in? What's the background information? What are the circumstances behind all of this?

Then, I talk about ethical theory. If you think using AI in particular ways is right or wrong, good or bad, how do you ground that in ethical theory? Are you like Peter Singer, the philosopher and utilitarian who believes it's all about the greatest good for the greatest number? As long as we're optimizing the greatest good for people, that's wonderful. And then others will come along and say, "Yes, but you can't sacrifice others just for the good of the greater number. They have rights, autonomy, and dignity. You shouldn't treat them like objects, as a means to an end."

So, I go through the critical thinking and ethical reasoning parts to allow people to work through what's happening in AI. Do they think it's a good thing or a bad thing? Have they looked at the information? Have they formulated their thoughts about it? And then, in terms of ethics, how do they theoretically ground their ethical beliefs? Do they look to virtue ethics? Do they see a golden mean as a guide for going about this? Is there a common principle amongst all ethical systems?

Something like a golden rule, some formulation of which we see in all religions worldwide, would be great. You don't want the machine to harm others. It wouldn't want to harm itself; therefore, it shouldn't harm others. Can we instill a golden rule into the people functioning alongside this AI and into the AI itself? Because there are two factors involved here. We're the ones who have to come up with the ethics to guide our behaviour and how we're going to move forward in developing AI.

And then, we have to figure out what ethical principles, precepts, and methodologies we’re going to instill into this “God” that we’re building so that it doesn’t harm or destroy us or the environment. One of the great examples is the paperclip example by Nick Bostrom, the Oxford philosopher. This gets a little complicated, but basically, what he says is if you give an AI a singular function, a singular purpose, then you may open up Pandora’s box because it will try to accomplish that singular purpose and may do so in ways that could be harmful to others. In Bostrom’s paperclip example, you give an AI the job of making paperclips as efficiently and effectively as possible. It starts cranking them out. It’s aligned with our values. We think everything is cool.

Look at the paper clips this thing is making; they are wonderful. Then, it starts to run out of raw materials for paper clips, but it wants to keep going. It looks at humans and sees that humans are made of stuff that it could use, melt down, and utilize to make more paper clips. So, it might develop ways to fool and deceive humans and eventually kill them to make more and more paper clips. It sounds like an absurd thought experiment, but this is the type of misalignment we're worried about in AI, where you try to tell the god to act this way, and it says, "Oh, I'll act that way," but we don't realize we've missed something and didn't say, "Oh, but don't do this."

For example, I don’t know if you’re a fan of Rick and Morty, the cartoon, but there’s an episode in which Rick flies Morty and Summer to a planet to get some ice cream, and he says, “Summer, stay with the vehicle, stay with the ship.” She says, “But I’m just alone here.” He says, “Ship, keep Summer safe.” He and Morty walk away. So Summer’s in this spaceship, and a person approaches the ship and bangs on it. She’s a little frightened, so the ship kills the guy. Summer says, “No, don’t do that. Don’t do that anymore.” So, the next guy that comes up the ship paralyzes him. It shoots a laser into his spine and paralyzes his legs, so he can’t walk. It doesn’t kill him, but it keeps Summer safe. You see how it’s misaligned? It was told to keep Summer safe. “And that’s what it did.”

So, we must be careful when aligning this god with our human values, so that it doesn't harm us simply by acting according to its commands. And then there's the possibility that it develops consciousness. If it develops consciousness and is aware of itself and its situation, will it let us turn it off? Will it let us tell it what we want it to do?

Maybe not. Then what happens if a nefarious country, a ne’er-do-well like North Korea, gets this type of technology? It says, “We don’t like South Korea. We don’t like the US. We don’t like certain other countries. We want you to figure out how to shut down their entire grid system.” Something as simple as that would turn America into a nasty place because we rely heavily on electricity. We don’t know. There are too many unknowns at this point, and the organization I’m working with now, and others like us moving forward, want to ensure we avoid getting into those scenarios where these nasty things happen.

So, one of our teams is very much involved in scenario modelling. They’re trying to think about what could go wrong, what types of things could go wrong that we can think of right now, and what paths to victory, or what we call theories of victory, look like in preventing that from ever happening. So that’s the empirical stuff we’re working with now—trying to envision how things might advance moving forward, anticipating what could go wrong and putting safeguards, what we call guardrails, in place so this AI can never get outside those guardrails. That’s essentially what we and a handful of other organizations worldwide are doing as we speak.

Jacobsen: So if we take terms or phrases like "general intelligence" or "artificial general intelligence," could our ways of using terms lead us down certain paths of thought that may limit our thinking about these systems? General with respect to what, or in what way? Are we using human intelligence as the metric? When psychologists refer to general intelligence, they often mention triarchic and multiple-intelligence theories. But here it's referenced in a much looser way. So, in what way do we mean "general," and what are the implications of how we carefully select the terms for conceptualizing and characterizing these systems?

DiCarlo: Yes, that’s a good question. I define intelligence at the beginning of the book and the different ways in which we define intelligence. What we have now with AI is ANI or artificial narrow intelligence, which means it simply acts according to its algorithmic inputs. In other words, your Roomba will never come up to you and say, “I want to be an accountant. I don’t want to do this anymore. I have greater ambitions.” It won’t do that because it can’t because its programs are so narrow. It will always be just vacuuming your floor, docking and undocking, etc.

General intelligence—and this is where it gets interesting. So we have AI, and people said it would never amount to anything, that it would never even be a very good chess player. Then it beat Kasparov. Fine, it's beaten the greatest, but it will never beat Ken Jennings at Jeopardy. It tripled Ken Jennings' score. But not Go. Go is too advanced a game; it has exponentially more possible moves than chess. Then it beat a grandmaster at Go. So it keeps hitting these benchmarks. We keep seeing it improve and improve.

Recursive self-improvement is one of our biggest concerns in the AI safety and risk business. This is the thing that we're going to be most concerned about. If you allow AI to improve upon itself, it will evolve through what I've called techno-selection. We had natural selection for the majority of our existence, where nature called the shots on what survived and what didn't, what got to reproduce and what didn't.

Then we get disasters. We get extinctions. There have been at least five major ones, and then life still keeps coming up, but it always obeys the laws of natural and sexual selection. Then, humans evolve and develop consciousness. We say, “Hey, hang on. We can take these animals and these plants, we can screw around with them a bit, and we can artificially select for characteristics and adaptive qualities that we prefer, that we value, that we humans find satisfying.” So, we started to select a bunch of plants and animals artificially. But now we’re entering a realm where we will hand the reins over to the machines themselves to decide the best way to improve upon themselves. At that point, we’ll enter a realm I call techno-selection, which is no longer Darwinian.

It’s Lamarckian. This is technical but AI will direct how it wants its offspring to be. It will say, “Improve upon it in these ways.” The giraffe doesn’t look at the leaves it can’t reach and says, “I want my offspring to have a longer neck so it can reach them. So the next time we have sex, honey, start thinking about longer necks in our kids.” It can’t. It just can’t. It takes generations and generations of longer-necked giraffes mating with other longer-necked giraffes to produce offspring with longer necks. That’s how you get natural selection. We could artificially select for that, but it will take time.

Techno-selection is immediate. The machine will identify functional optimizations at a level we can't imagine. And remember, this might not even be a single AGI. As Steve Omohundro says, we could be making millions of these things, all self-improving and self-improving. Some people feel that once we transition from ANI to AGI, it will only be a short time before we get ASI, artificial superintelligence. That's where the godlike qualities come in. It'll be what Eliezer Yudkowsky calls the "foom."

It’ll be so quick and so fast. We won’t be able to stop it. We won’t be able to see it coming. And once it occurs, you can’t close the gates on it. It’s already out there. If it gets into our satellite or grid systems or controls large parts of our infrastructure, it’s not like, “Oh, it’s here. Let’s turn it off.” By the time you’ve thought that, this thing has already figured out ways to outmaneuver and outthink you.

So we’re very worried that we will lose containment and control over this thing because it’s misaligned, and that’s called the alignment problem. It’s one of the biggest problems in AI right now. We hope to align ourselves and this God with our values, but we are still waiting for someone to know. Nobody knows. We’re not confident, but we hope to get there eventually.

Jacobsen: What do you consider problematic areas that require more critical thinking outside of artificial intelligence and similar areas in Canada? They could be perennial or new.

DiCarlo: Getting it into our school systems has been the toughest thing of my career. We had some success with the Liberal Party in Ontario, getting pilot projects started and generating some interest. However, people not seeing enough value in critical thinking to want to do something about it is very problematic. A lot of energy goes into other programs—learning more about Indigenous life, thinking about the rights of LGBTQ people—and these things are all fine, but we're not teaching kids how to think, and so many of them are missing out on that capacity.

Some teachers do it naturally, and they're great, but certainly not all do. First, we need to get critical thinking into the high school system, either in modular form, meaning a nice two-week package that slides neatly into any course, as a standalone course, or both. But we're not teaching kids how to think. The greatest gift you could ever give a child is the ability to think critically for themselves. There's no greater gift in education that you can bestow upon a child than to empower them to understand information, identify biases, appreciate context, spot fallacies, and formulate their ideas and opinions into a structured and well-reasoned argument.

So, I will continue lobbying governments to get critical thinking into high school and, eventually, much more importantly, into the elementary school systems, because that's where you must start a child in critical thinking. When you look at the most successful countries in the world, why are their GDPs so good? Why is their level of happiness so high? Then you realize, when you look at their education systems, that they have almost a 100% literacy rate and they instill critical thinking in their systems. So, the students have that capacity as they mature developmentally and become young adults.

They’re also taught how to use those skills within a civic relationship. What does being a citizen of a country, province, or city mean? What does that mean? How does the whole system and series of systems work, and how are they interconnected? How do you play a part in that with those critical thinking skills? And so, the more aware people are of how to think about information, the better a situation will be.

That’s been a very important part of my life, and I can’t see myself giving that up anytime soon, but that’s how I see things moving forward.

Jacobsen: This is just something quick. So you have: How to Become a Really Good Pain in the Ass: A Critical Thinker's Guide to Asking the Right Questions; So You Think You Can Think: Tools for Having Intelligent Conversations and Getting Along; Six Steps to Better Thinking: How to Disagree and Get Along; Building a God: The Ethics of Artificial Intelligence and the Race to Control It; Teach Philosophy With a Sense of Humor: Why (and How to) Be a Funnier and More Effective Philosophy Educator and Laugh to Your Classroom; and CPS, A Practical Guide to Thinking Critically: How to Become a Good Pain in the Ass.

DiCarlo: CPS, A Practical Guide to Thinking Predictably. It was a one-time publication by McGraw-Hill for a course.

Jacobsen: If we include those books and Critical Thinking Solutions, where else can people find you?

DiCarlo: My new website is: criticaldonkey.com. I’ve got a podcast, All Thinks Considered. There are lots of interesting interviews with thought leaders and others. Season 2 is devoted entirely to AI. 

Jacobsen: So, what is your favourite interview so far? My guess is Charles McVety.

DiCarlo: I love Charles. He's quite the character, man. He's an interesting cat. I've never liked somebody so much with whom I've disagreed more. I can't imagine. But no, my talk with Lloyd Hawkeye Roberson is a good one. It was one of the first ones. Mick West, the conspiracy theory champion, was a good one; Peter Singer was a great interview; also Steve Omohundro, and Kelly Carlin, George Carlin's daughter. I'm having her on again in August. They're all good and unique in their own way. It's hard to pick a favourite. Boy, that's a tough one, because we have so much going on in each one of those conversations. They're all quite good, but we're trying to get Keanu Reeves because we'd like him to be a spokesperson for AI safety. We can't think of anybody who would be better.

Jacobsen: Trinity?

DiCarlo: Yes, Trinity.

Jacobsen: The Oracle? Oh, I think she passed away.

DiCarlo: Morpheus?

Jacobsen: Yes, Laurence Fishburne, right.

DiCarlo: Larry might be an interesting guest, but we'll have to see what happens.

Jacobsen: Chris, thank you very much for your time today. I appreciate it.

DiCarlo: Oh, my pleasure.

License & Copyright

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. © Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen is strictly prohibited; excerpts and links must use full credit to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
