Neil Sahota on AI Governance: Bias, Misinformation, and the UN’s Role in a Proactive Global Framework
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): The Good Men Project
Publication Date (yyyy/mm/dd): 2025/11/18
Neil Sahota, IBM Master Inventor and UN AI Advisor, discusses artificial intelligence’s ethical, geopolitical, and social implications. He examines AI’s transformative role in reshaping power structures, data governance, and scientific acceleration, and emphasizes the risks of algorithmic bias, misinformation, and the lack of global regulation. Sahota advocates for transparency, diversity in development teams, and responsible AI practices. He highlights the UN’s unique position to lead international governance and stresses the urgency of proactive, collaborative frameworks. Sahota concludes by calling for explainable AI to build trust and by proposing deeper conversations with global stakeholders in AI ethics.
Scott Douglas Jacobsen: Today, we are here with Neil Sahota. He is an IBM Master Inventor, United Nations AI Advisor, and the CEO of ACSI Labs. With over 20 years of experience, he helps organizations across healthcare, legal services, government, and other industries drive innovation through emerging technologies, especially artificial intelligence. He is also a professor at UC Irvine—go Anteaters!—and one of the original architects of the United Nations’ AI for Good initiative. He is the co-author of the award-winning book Own the AI Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition.
Neil is a sought-after speaker and has been featured in Forbes, Fortune, and TechX, among other publications. He actively supports startups as a member of Tech Coast Angels. His work blends business strategy with social impact, from combating child exploitation to advancing global sustainability goals. He is truly a global leader and a core figure in UN technology strategy.
I just returned from the 69th Session of the Commission on the Status of Women (CSW69) at the United Nations Headquarters in New York City. This year also marks the 30th anniversary of the Beijing Platform for Action and the 25th anniversary of UN Security Council Resolution 1325 on Women, Peace, and Security, so it was a milestone year. The two-week event included nearly a thousand official and side events. It was my first time attending, and it was truly fascinating.
I’m always impressed by what the UN does, and I’m glad individuals like you are involved. Now, AI innovation is redefining many things. Most notably, it’s redefining geopolitical power structures in the 21st century. How?
Neil Sahota: In one word? Omni. We’ve heard AI compared to the transformative impact of the Internet. I said this over a decade ago—and thankfully, it has proven true: by 2025, AI would impact all aspects of our lives, personally and professionally.
We’re already seeing its integration—often invisibly. It’s embedded in systems and decisions, sometimes without people realizing it.
We hear about RPA (robotic process automation) and concerns about job losses. But from a geopolitical standpoint, AI represents the next global race—not just about who has AI but also about who is developing superior technology.
AI can be trained to support virtually any domain: cybersecurity, economic forecasting, policy-making, and workforce development. It touches every part of society.
As with any tool, its impact depends on human intent. Some are using AI to do immense good. Unfortunately, others may be using it for surveillance or control.
We’ve already seen deepfakes, misinformation, and disinformation used to influence elections around the world. To borrow a theme from Dune author Frank Herbert, we’re at a point where whoever has the most advanced AI may, in a sense, control the universe. That is something a lot of political leaders now realize. It may not be what they want, but they all recognize that without AI—or without leading in its development—they will be at a huge disadvantage. That realization is what is triggering a ripple effect globally.
For example, you can see it in the increasing competition for rare earth metals. Why is there such a renewed interest in the space race? Because rare earth metals are in space. These materials are essential for building not just AI but other emerging technologies—supercomputers, microchips, augmented reality devices like touchscreen glasses, and more. There is even a race for electricity and clean water, both needed for powering and cooling massive data centers.
So, while we often focus on AI’s direct impact on our lives, its ripple effects influence geopolitical dynamics, natural resource competition, and critical infrastructure development. Then there’s the deeper issue: the lack of a unified global governance framework for AI. That absence presents a significant threat to international collaboration, especially when it comes to dealing with major global issues—nuclear security, anthropogenic climate change, synthetic biology and superbugs, bio-warfare, information warfare, cyber warfare, and beyond.
Jacobsen: If AI is truly “omni”—affecting all aspects of society—then what kind of framework or leadership can help guide us through these already difficult global quagmires?
Sahota: That’s one of the biggest challenges. And you’re right—it is not just about the big, macro issues. Some of the micro issues are just as serious. For example, look at the origins of deepfakes. They started as a form of revenge pornography. After a bad breakup, someone might create fake pornographic videos using their ex’s likeness. If that happens to an average person, what resources do they have to fight back?
At least a celebrity or high-profile political figure has public visibility, legal teams, and resources to counter misinformation. But the average individual does not. That’s a serious challenge, especially when we realize that the boundaries of privacy and authenticity are already being eroded. The real problem is this: who do we trust to lead the global effort?
And I do not mean a single person or even a single country. If any nation or tech company tries to take the lead, there will be immediate skepticism. It will trigger that same competitive mindset we just talked about. People often ask me why we partnered with the United Nations on the AI for Good initiative.
The honest answer? The UN is the only global entity with the credibility necessary for this kind of leadership. If a single country, say the United States or Singapore, tried to lead the charge, other nations would inevitably start asking, “What’s in it for them?” That undermines trust from the outset. We need an entity that can be seen as a neutral convener—and right now, the UN is the only one that fits that role.
Jacobsen: So what should we be doing to support that kind of neutral, global leadership?
Sahota: The same thing we’ve started: building coalitions, establishing shared principles, and developing inclusive, international frameworks for AI governance. It will take cooperation—not competition—to ensure AI is used for good and not for harm.
Governance has largely been a game of catch-up. If Microsoft or Google announces, “Hey, we’re going to lead an AI governance initiative,” the immediate question is: “What are they getting out of this?” People will assume there’s some financial or strategic advantage. That creates skepticism and undermines the legitimacy and credibility of the effort.
That’s why the conversation often returns to the United Nations when we talk about a unified approach to AI and to setting international standards and guidelines. The debate has been ongoing within the UN for a while now. The general consensus is that the UN is probably the only global institution with the neutrality and reach to take on this leadership role effectively.
The question then becomes: What is the most effective way for the UN to lead? One proposal currently being discussed—though no decisions have been made—is the creation of a new agency within the UN specifically focused on the governance of emerging science and technology. That is critical because AI is not the only disruptive force. We are also seeing rapid advances in nanotechnology, cognitive neuroscience, quantum computing, and other scientific domains. All of these overlap and interact.
So, how do we stay ahead of the curve and ensure people are developing these technologies ethically and responsibly? We need to set clear standards. Engineers, technologists, and scientists typically focus on solving specific problems or achieving defined outcomes. They may not think about unintended consequences or how their work could be misused. Often, they do not have the background or perspective to foresee the societal implications.
From a regulatory standpoint, the UN is already trying to shift gears—from the traditional reactive model of waiting until something goes wrong, to a proactive, collaborative model. Regulation today needs to be inclusive and anticipatory. It requires diverse thought and participation—governments, regulators, technologists, engineers, domain experts, and even end users—to collectively understand potential outcomes and build meaningful guardrails.
The United Nations is the only entity with the global trust and mandate to do this work.
Jacobsen: You mentioned deepfakes earlier, including their use in revenge porn and misinformation. At CSW69, one session I attended featured an expert—whose name escapes me at the moment, so credit to her and apologies for the omission—who noted that over 95 percent of the victims of deepfake pornography or revenge porn are girls and women, while the vast majority of perpetrators are boys and men. This aligns with broader gender dynamics and disparities in online and offline contexts. So, while AI’s harms operate at a global and systemic level, they are also deeply personal, affecting individuals in real and often traumatic ways.
What about the risk of algorithmic bias? As far as I understand, AI is, at its core, another term for complex algorithms. Popular media often portrays AI in science fiction terms, but the reality is more grounded. What does algorithmic bias do to programs and their outputs—and what does that mean for real people in their everyday lives?
Sahota: That’s a great question. First, we must understand that algorithmic bias is a double-edged sword. AI is built from algorithms. It’s not magic and it’s not sentient—it’s data-driven logic.
We do not exactly “program” AI in the old-fashioned sense; instead, we teach it. This involves two major types of bias: explicit bias and implicit (or unconscious) bias.
Explicit bias occurs when the training rules or datasets are deliberately skewed. For example, suppose I’m building a health information chatbot and instruct it only to trust information published by The National Enquirer or The Onion. That will lead to bad outcomes, misinformation, and potentially dangerous advice. That’s an example of setting a biased “ground truth”—you’re establishing decision rules that are flawed from the start.
The second and more dangerous type of bias is implicit or unconscious bias. This often happens even when we think we’re being objective. If the dataset used to train an AI is incomplete or reflects historical inequalities—in criminal justice, hiring, or healthcare—it will absorb and reproduce those patterns, even if no one explicitly programmed it to do so. That’s how systemic bias gets baked into supposedly neutral systems.
These biases have real-world consequences. AI determines who gets job interviews, who qualifies for a loan, and what medical advice someone receives. So if we do not identify and address these biases, we risk automating and amplifying injustice.
But we are not free from bias. We have a skew built into everything we do. AI learns our implicit biases through the way we teach it. All data is biased. All human teachers are biased—especially with implicit bias—so we must do our best to mitigate it.
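To make the implicit-bias point concrete, here is a minimal, purely illustrative sketch in Python. The data, the hiring scenario, and every variable name are invented; the point is only that a model trained on historically skewed decisions reproduces the skew, even though no one writes an explicit rule about group membership.

```python
# Minimal sketch (hypothetical data) of how implicit bias enters a model:
# the training labels reflect a historical skew, so the model reproduces it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)            # 0 or 1, a stand-in demographic attribute
skill = rng.normal(0, 1, n)              # the feature we actually care about

# Historical hiring decisions: same skill, but group 1 was hired less often.
hired = (skill + np.where(group == 1, -0.8, 0.0) + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Identical skill, different group -> different predicted odds of being hired.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```

The gap between the two predicted probabilities is simply the historical pattern resurfacing, which is why vetting the training data matters as much as vetting the code.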
Let me give you an example. One of the big things the UN has been exploring is the concept of AI robot judges. Around the world, many judicial systems face huge backlogs. In theory, AI could help reduce delays, minimize corruption, and improve access to justice. In the U.S. legal system, there’s a wealth of data available to train an AI judge.
Now, is that data biased?
Jacobsen: Yes.
Sahota: What is the biggest bias in the U.S. judicial data?
Jacobsen: Probably race.
Sahota: That’s a common answer. Many people say that—and it’s true, there is credible evidence of racial bias in the system. Others point to socioeconomic bias, where money plays a significant role. But the biggest bias we found was something even more unexpected: hunger.
We ran tests, and it’s true—the hungrier judges are, the harsher their rulings become. So, the takeaway is that if you’re going to court, try to schedule your trial after lunch. But then the question becomes: how do we strip that bias out of the data?
We thought about timestamping the trial transcripts and adjusting based on known mealtimes, but the truth is, I do not know if a judge had breakfast that day, or whether they had a big lunch or skipped it altogether. Everyone has different biochemistry. We do not know what their blood sugar levels were at that moment. We cannot normalize that data unless we attach a medical device to every judge to monitor them 24/7 for three years.
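As a rough illustration of that “adjust by mealtimes” idea, one could bucket rulings by hour of day and subtract each bucket’s average severity. This is a hypothetical sketch with invented numbers, and, as noted above, it only removes the time-of-day pattern; it cannot recover what any individual judge actually ate.

```python
# Hypothetical sketch of normalizing ruling severity by hour of day.
from collections import defaultdict

# (hour_of_day, severity_score) pairs -- invented example data
rulings = [(9, 0.72), (11, 0.85), (13, 0.55), (14, 0.50), (11, 0.80), (13, 0.52)]

by_hour = defaultdict(list)
for hour, severity in rulings:
    by_hour[hour].append(severity)

# Subtract each hour's average so the time-of-day pattern is centered out.
hour_mean = {h: sum(v) / len(v) for h, v in by_hour.items()}
adjusted = [(h, round(s - hour_mean[h], 3)) for h, s in rulings]
print(adjusted)
```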
That bias—hunger bias—will always, unfortunately, exist. We can try to mitigate its impact and reduce its influence, but we cannot fully eliminate it. That is what makes this so complicated and potentially dangerous. But it is also something we already live with in human systems. That is why, when we talk about responsible AI, one of the most important things we emphasize is the need for diverse teams. When we teach the machines, we need people from diverse backgrounds, perspectives, and lived experiences to help us catch these hidden patterns.
Otherwise, we run into real-world issues. I do not think Google is racist, but when it launched its visual recognition AI about seven years ago, it had difficulty identifying women and people of colour as human beings. It was significantly better at recognizing white men.
There was a disturbing case where the system mislabeled an African American person as food—an outcome with deeply offensive racial overtones. Do I believe Google was intentionally being racist? No. But I can confidently say that the development team was likely composed largely of white men. They probably used many images of themselves to train the AI, and that reflects a lack of diversity in the training data.
And that kind of unintentional bias has broader implications, right? Especially when we’re thinking globally.
Jacobsen: Globally speaking, people of European descent are actually a minority. They are not the global majority. So if AI systems are being trained in countries where European-descended populations are the majority—or if the development teams primarily come from those backgrounds—that bias will manifest. And it will create systems that do not work well for much of the world. That is the danger of training global systems with narrow datasets. It leads to exclusion, misrepresentation, and sometimes harm.
Sahota: That is why many countries invest in their technology stacks. Take the Middle East as an example—countries like the UAE are developing their versions of generative AI platforms like ChatGPT. Why? Because they need AI that understands their language, customs, cultural context, and ways of thinking. For AI to be truly effective, it has to be localized—not just in terms of language but also in terms of societal norms and user expectations.
Jacobsen: What about the ethics and sovereignty implications of digital data extraction without sufficient oversight? Whether it’s happening within a country or across borders—when one country extracts data from another without transparency or consent—it becomes a major concern.
Sahota: Absolutely. This is a massive issue, and it is currently being addressed in wildly inconsistent ways across the globe. The European Union, for example, has implemented strict data protection regulations—GDPR being the most notable—which aim to increase transparency and safeguard citizens’ data. Of course, these regulations have also created some friction, as some companies feel they slow down innovation. It’s always a balancing act between protection and progress.
Then, there are countries like China, where the government mandates that any data collected within China must be stored inside China. That’s why there has been so much scrutiny around TikTok and ByteDance. There are legitimate concerns about whether data on U.S. users is being stored in China and whether that data could be weaponized. And in theory, yes—any data could be turned into a weapon or used as a strategic asset. But again, this is not a technology problem.
AI itself is not inherently good or evil. It simply does what it is trained to do. The real concern is how people choose to use it. That is the heart of the issue. Yet, we tend to blame the AI when something goes wrong. We say, “AI made a mistake,” or “AI is redlining.” But AI is only doing what it was taught to do.
If there’s redlining, the people who trained it either included biased data or failed to address that risk. That is a human problem, not a machine problem, and the scale of that problem is staggering. If your training data is flawed, it doesn’t just impact a few people—it could impact billions of people very quickly.
That’s why we need robust, carefully designed training strategies, vetted datasets, clear guidelines, and inclusive oversight. This is the only way to use the technology effectively and beneficially.
Jacobsen: Last time, we noted the level of hype around AI. Even in the face of serious concerns, the hype is growing. How much of a “red alert” posture should people really have toward this technology? For years, the public conversation has leaned on fear-driven scenarios—the Terminator, the “paperclip maximizer” thought experiment. What’s a more reasonable threat assessment?
Sahota: I honestly think the greater threat right now is not killer robots—it’s misinformation and disinformation. The Terminator scenario? It makes for a great movie, but in the real world, it is highly unlikely that people are secretly building machines designed to exterminate humanity.
I always remind people that for every Terminator, there’s also an R2-D2 or a C-3PO—robots that support the Rebellion and help the good guys. So we have to keep perspective.
The real danger today lies in the growing sophistication of misinformation. It creates echo chambers that people live inside without even realizing it. They think they’re making independent decisions, unaware they’ve been conditioned—sometimes manipulated—into those views.
Social media algorithms have become incredibly effective at learning what we like, what we fear, and what we care about. Then they feed us more and more of the same. This builds a personalized echo chamber where alternative views are filtered out. The result? People begin to adopt views that others may find extreme or implausible, but those views feel totally normal—even real—because they believe millions of others think the same way.
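A toy sketch of that feedback loop might look like the following. The items, topic vectors, and update rule are all invented for illustration, but they capture how a similarity-based feed narrows on its own as it learns from each pick.

```python
# Toy echo-chamber loop: recommend what is most similar to past engagement,
# then fold that item back into the profile -- the feed narrows by itself.
import numpy as np

items = {
    "story_a": np.array([1.0, 0.0, 0.0]),   # topic vectors, purely illustrative
    "story_b": np.array([0.9, 0.1, 0.0]),
    "story_c": np.array([0.0, 1.0, 0.0]),
    "story_d": np.array([0.0, 0.0, 1.0]),
}

profile = np.array([0.6, 0.3, 0.1])          # what the user has engaged with so far

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

for step in range(3):
    best = max(items, key=lambda k: cosine(profile, items[k]))
    profile = 0.8 * profile + 0.2 * items[best]   # profile drifts toward the pick
    print(step, best, np.round(profile, 2))
```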
We saw this play out during the Turkish elections about a year and a half ago, and we saw it in various African elections as well. Deepfakes and misinformation were used to shape narratives and alter perceptions—sometimes without the public even knowing it was happening. That, in my opinion, is the more pressing and immediate threat.
Can you imagine thinking you’re voting for someone based on your principles—not realizing that AI has been weaponized to convince you and make you think it was your idea all along? That’s a psychological manipulation concern, not just a technical one.
Jacobsen: What about the risks associated with the rapid acceleration of scientific discovery through AI—specifically by simulating known physical laws in “microworlds”? For instance, protein folding has advanced significantly through simulation, which could easily scale to other domains. The fear is that scientific experimentation could become so fast that applications are deployed before ethical frameworks catch up. How do we manage that?
Sahota: That’s a major concern, and it’s one of the classic cases where we’re reacting rather than anticipating. You’re referring to technologies like AlphaFold by DeepMind, which is used to predict protein structures and help accelerate pharmaceutical development. It’s a powerful tool, but it does not mean we skip clinical testing or the safety protocols we’ve always used. It means we can now simulate thousands of possibilities in the time it used to take to test just a few.
So, instead of trying two or three candidate molecules over a month, now we can simulate 2,000 or more and quickly identify promising leads. But even then, those leads still have to undergo full biological testing, clinical trials, toxicity reviews, etc. The simulation is a way to narrow the field, not replace scientific rigour.
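In code, that “simulate many, test few” workflow can be as simple as ranking candidates by a predicted score and passing only the top handful on to the lab. This is a hedged sketch with made-up names and scores, not any real screening pipeline.

```python
# Hedged sketch: screen thousands of hypothetical candidates by a predicted
# score, then keep only a short list for real assays, trials, and toxicity review.
import random

random.seed(42)
candidates = {f"molecule_{i:04d}": random.random() for i in range(2000)}

def predicted_score(name):
    # Stand-in for a structure-prediction or docking model's output.
    return candidates[name]

shortlist = sorted(candidates, key=predicted_score, reverse=True)[:5]
print("send to wet-lab testing:", shortlist)
```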
This is also happening in other areas. Some governments and private-sector groups use AI combined with metaverse-style environments to simulate emergency responses—like wildfire evacuations or disaster recovery. These scenarios help us plan and prepare but don’t allow us to skip the core steps of due diligence and planning.
That’s why we have to remember: AI doesn’t give us “the answer.” It gives us a possible answer, usually with a level of confidence. And that’s where things can get dangerous—when people mistake probabilistic outputs for definitive truths. We always say: Treat AI output as a draft. It’s a first step, not a conclusion.
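A small sketch of “treat AI output as a draft”: accept a model’s suggestion only above a confidence threshold and route everything else to a human reviewer. The threshold and the triage function here are assumptions for illustration, not a standard API.

```python
# Sketch: gate probabilistic model output behind a confidence threshold
# and escalate low-confidence cases to a human reviewer.
def triage(suggestion: str, confidence: float, threshold: float = 0.9) -> str:
    if confidence >= threshold:
        return f"draft accepted for review: {suggestion} ({confidence:.0%})"
    return f"low confidence ({confidence:.0%}) -- escalate to a human"

print(triage("approve loan", 0.96))
print(triage("deny loan", 0.61))
```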
Jacobsen: Who were some of the first people involved in AI ethics discussions at the UN?
Sahota: That’s a tough question because many different UN agencies were independently exploring aspects of AI. Even before we worked with IBM Watson, internal conversations had already been happening about the implications of AI and the need for explainability.
That’s one reason explainability was built into Watson from the start. If Watson gave us a bizarre recommendation, we needed to understand why. We needed a transparent logic trail. That thinking laid the groundwork for a broader, more systematic approach.
But what changed the game was the launch of the AI for Good initiative. That began building an ecosystem—a central hub where experts, practitioners, ethicists, policymakers, and researchers could share their experiences, ideas, and strategies. Without that hub, we’d still have a fragmented landscape of siloed efforts.
If we’re being honest, people have probably been debating AI ethics since Alan Turing’s days in the 1950s. But until recently, we lacked an organized community with a shared goal and space for global collaboration.
Jacobsen: What about the effects of regulatory arbitrage in AI governance? Where are companies or developers going when the rules are most relaxed? What does that mean for innovation and international tech competition?
Sahota: Regulatory arbitrage is a major problem, no question. I understand why it happens—there’s a self-preservation instinct at work. But to borrow the old adage: evil succeeds when good people do nothing. That’s really at the heart of the issue.
Governments can certainly help—they can guide, regulate, and incentivize. But they won’t be the ultimate solvers of these challenges. Technology today is so decentralized that a teenager in a basement could develop a new AI tool with global impact. And they can decide how to use it—for good, harm, or just for fun.
That’s the world we live in now. The best way to steer things in the right direction is to create best practices and foster global communities that encourage responsible innovation. We have to build in a cultural mindset—a kind of organizational change management (OCM) approach—so that creators think proactively about what their tools can do and how they might be misused.
Encouraging diversity of thought and embedding ethics into development cycles from day one is key. Without that, we’re always playing catch-up. We need to shift from reactive to proactive, and that starts with education, transparency, and collaboration.
At the end of the day, there will always be bad actors. That is just a reality of the world. They will keep doing what they do, and the rest of us will be left shocked or scrambling. But we have to get better at anticipating what those people might do.
I remember during the COVID-19 pandemic when hospitals were already overwhelmed and healthcare workers exhausted, there were cyberattacks where people took over hospital electrical systems and shut down power unless a ransom was paid. That’s hitting during a crisis—when lives are on the line, and people are already under extreme stress. We have to plan for that kind of malicious behaviour in advance. We should have anticipated it and built safeguards beforehand.
Proactive thinking is the only way to be successful and limit the damage that bad actors can cause.
Jacobsen: Who else should I interview about AI and ethics, either within the United Nations or internationally?
Sahota: You should definitely talk to Fred, who leads the AI for Good initiative at the UN’s ITU agency now. He’s a great person—very thoughtful and deeply involved in global AI governance. He is also extremely busy, but if you remind me, I can contact him and see if his communications staff can help coordinate an interview for you.
Jacobsen: That would be great, thank you.
Sahota: Absolutely. He’s worth speaking to. There’s also someone in China who’s quite involved, but I don’t think they’d be willing to talk to you right now, given the current tensions between China and the West.
Jacobsen: Even Canada?
Sahota: Yes, ironically, people don’t realize it, but China has been pushing for global regulations around AI. Despite the perception, they’re concerned, too. Back in 2020–2022, U.S.–China relations were quite strained. They weren’t talking to each other about much of anything—except for one thing: AI.
The Chinese government made it clear to the U.S. administration: “Look, we’ve got issues, but we need to talk about AI. We must work together to figure out some baseline regulations, or this thing will spiral out of control.” That’s telling—it was the only open line of communication at the time.
Jacobsen: That is very telling. Is AI truly autonomous at this point? Is it autonomous in the way we often hear it described? Or are we misunderstanding what that word really means?
Sahota: We definitely misunderstand it. We tend to throw around terms like AGI—Artificial General Intelligence—and ANI—Artificial Narrow Intelligence. AGI refers to a self-thinking machine that might say, “I have nothing to do right now; I’m bored. I think I’ll teach myself how to fly a helicopter.” That’s a system that acts on its own without human direction.
But AGI does not exist today. I only know of two people who are actively working on it. And if you talk to them, they’ll tell you we are likely decades—perhaps hundreds of years, or even a thousand—away from reaching true AGI. Part of the issue is cost. Part of it is that we don’t fully understand how consciousness or the human brain works.
Jacobsen: Are we making a fundamental mistake by using the human brain as the benchmark?
Sahota: That’s a great question. It’s human nature to use what we understand as a model. And right now, the brain is our best-known model of intelligence. But that doesn’t mean it’s the best model for artificial intelligence. We haven’t explored—or fully validated—any better alternatives yet.
This is why I often say that AI is the only tool you can ask how to use it better. Some people are now thinking: given what we know about consciousness, could we teach that to an AI and then ask it whether there’s a better model than the brain?
I like to compare it to the history of flight. For centuries, humans tried to fly by imitating birds—flapping wings, gliding, and trying to mimic nature. It never really worked. Eventually, people realized that flight was not about replicating wings but about understanding aerodynamics. It took the Wright brothers, who were bicycle mechanics, to figure it out. They cracked the code not by imitating birds, but by thinking differently. In the same way, we may need to stop trying to mimic the brain and start building a different model altogether.
We could not generate enough power with the early models of flight. If you think about a kite, it is really about gliding. The early fascination with flight was all about gliding through the air—like with hot air balloons.
The Wright brothers were bicycle engineers. They were obsessed with gliding, but they were also engineers who understood air resistance, friction, and mechanical balance. They took the principles of bicycle construction and applied them to flight, not through policy or theory but through practical innovation. That is how they cracked the code.
So, maybe there is a better model for AI than the human brain. Maybe we just haven’t discovered it yet. But with enough time and investment, we may determine what that better model could be.
Jacobsen: What will help increase or maintain trust in AI development and its ethical frameworks, both within the United Nations and among member states?
Sahota: The big key is transparency.
It sounds simple, but it is one of the most neglected parts of AI development. Developers need to be transparent about how they assemble their training data, define their “ground truth,” and make decisions. Even just building in explainable AI can make a huge difference.
It blows my mind how many companies and organizations still do not include explainability. However, it is possible to design an AI system that can explain the logic behind its conclusions and how it arrived at a decision. Without that, we are left scratching our heads and wondering where the result came from. With explainability, at least you can say, “Oh, okay, I see how it got there,” or, “Whoa—that is not right; something in the training data is off.”
That level of transparency builds trust. It is a cornerstone of responsible AI—specifically, explainable AI. Understanding the process makes people more likely to trust the outcome.
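As a minimal illustration of what that can look like in practice, a small decision tree can print the exact rules behind each decision. The toy data and feature names below are invented; the point is the readable logic trail, not the model itself.

```python
# Minimal explainability sketch: a small decision tree whose rules can be
# printed as a plain-text logic trail for any decision it makes.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 30_000], [40, 90_000], [35, 40_000], [50, 120_000], [23, 20_000]]
y = [0, 1, 0, 1, 0]                      # e.g., loan denied (0) or approved (1)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["age", "income"]))
print("decision for [30, 85_000]:", tree.predict([[30, 85_000]])[0])
```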
Even disclosing the composition of the AI development team—their background, diversity, and areas of expertise—can help. It shows a breadth of thought in the process, which alone increases public confidence in the tools being built.
Jacobsen: Excellent. Neil, thank you so much again today for your expertise and your time. I truly appreciate it.
