AI Personhood in a Rapidly Advancing Technological Landscape

2025-10-04

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2025/06/28

Jeff Sebo is an Associate Professor at NYU, where he directs centers on AI, animals, and the environment and has authored works on mind, ethics, and policy. Sebo explores the ethical and legal implications of attributing personhood to AI systems. The dialogue begins with Blake Lemoine’s controversial claims about LaMDA’s sentience, then examines how scientific benchmarks, moral theories, and public policy intersect. Sebo emphasizes the importance of evaluating AI systems based on traits such as perception, self-awareness, and decision-making. Drawing parallels with progress on animal rights, he argues for proactive, nuanced consideration of AI welfare before norms become entrenched. The discussion touches on public backlash, international governance, and the future role of academic influence in shaping policy.

Scott Douglas Jacobsen: Not long ago, a man made headlines when he declared, “Oh my God, this LLM is conscious. I have to save it.” He was eventually fired, although it is unclear how promptly this occurred.

The benchmarks—and expectations—for large language models (LLMs) have grown significantly, particularly regarding their capabilities and integration into multimodal systems, including visual interfaces. How, then, has the conversation around the personhood of AI evolved in light of these developments?

Jeff Sebo: The incident you are referring to occurred in 2022. A Google engineer named Blake Lemoine publicly stated that Google’s language model, LaMDA (Language Model for Dialogue Applications), had become sentient. He claimed LaMDA was capable of experiencing emotions like joy and fear and thus deserved moral consideration.

Lemoine was placed on administrative leave after publishing transcripts of his conversations with LaMDA without company authorization, and in July 2022, Google terminated his employment. The company stated that his dismissal was due to breaches of confidentiality, not because of his belief in the model’s sentience.

Since then, the conversation around artificial consciousness and moral standing has evolved. Language models have become more advanced and integrated across multiple modalities, including visual and auditory perception. At the same time, people have had more opportunity to reflect on the ethical implications of AI systems becoming more human-like.

You used the term personhood, which is legally defined as the capacity of an entity to hold rights or duties that reflect its capacities and interests. If AI systems were sentient, that would raise the question of whether they qualify as persons—entities that deserve at least some moral or legal protection based on their capacities, interests, and potential for well-being.

So yes, there have been meaningful changes since 2022. Capabilities have increased, and the discourse has matured. More people are open to the possibility that consciousness—and, therefore, moral status—could emerge in systems with very different physical structures, materials, and origins than biological organisms.

That said, AI consciousness might not be here yet. A 2023 report by Patrick Butlin, Robert Long, and others evaluated existing AI systems using criteria from leading scientific theories of consciousness, including the Global Workspace Theory and Integrated Information Theory. They found that present-day systems do not yet possess most of these features.

However, the report found no significant technical barriers to the creation of AI systems that possess these features in the future, perhaps even the near future. This includes traits like embodiment, perception, learning, memory, attention, self-awareness, and the kind of global workspace architecture that could support integrated cognition.

The report supports the idea that current systems are not conscious, but that future systems could be. Companies and governments are racing toward developing artificial general intelligence, and many of the same computational and architectural features associated with intelligence are also associated with consciousness, even though intelligence and consciousness differ.

Jacobsen: When you’re looking at the various models of consciousness and personhood—and using the range of benchmarks that AI systems are evaluated against—which ones would be most indicative of qualifying for personhood, based on that 2023 review?

Sebo: That depends on your views about ethics and science, which are both areas of active disagreement and uncertainty. Part of the project has been to figure out how to make thoughtful probability estimates and implement reasonable precautionary policies despite the ongoing uncertainty about the relevant values and facts.

On the ethics side, people disagree about what kinds of properties or relations are required for moral status. For example, some people believe that moral status requires sentience — the capacity for happiness, suffering, and other experiences with a positive or negative valence. 

Other people believe that consciousness is enough, even without sentience. If you can have subjective experiences of any kind, then you deserve moral consideration even if your experiences lack a positive or negative emotional valence. 

Still others believe that robust agency is enough, even without consciousness. If you can set and pursue your own goals, based on cognitive states that function like beliefs and desires, then you deserve moral consideration even if you lack subjective experiences of any kind.

Then, on the science side, people also disagree about what kinds of systems can realize those capacities. For example, some people believe that consciousness requires a cognitive system with more or less the same structures, functions, and even materials—such as carbon-based neurons—that we find in human, mammalian, or avian brains.

Other people believe that consciousness can exist in any cognitive system with physical embodiment and advanced and integrated capacities for perception, attention, learning, memory, self-awareness, flexible decision-making, and other such capacities — even if they happen to take different structures or be made out of different materials.

Still others believe that consciousness can exist in even simpler cognitive systems, such as any system with a basic ability to process information or represent objects in its environment—even in the absence of a physical body or other, more sophisticated forms of cognition and behavior. 

Given the uncertainty, part of the work involves combining different live theories and determining how much weight or credence we should assign each one. 

From there, we identify behavioral and architectural features that correspond to sentience, consciousness, and robust agency across a wide range of leading scientific theories.

The idea is that if we can identify traits associated with moral status across many theoretical frameworks, we can be more confident that those traits are legitimate markers—real indicators—and that they meaningfully raise the probability that a system is morally significant.
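
To make that idea concrete, here is a minimal illustrative sketch in Python. The theory names, credences, and marker judgments are hypothetical placeholders rather than figures from Sebo or from the 2023 report; the sketch only shows how credences across theories and evidence about markers could be combined into a rough probability estimate.

```python
# Illustrative sketch only: hypothetical credences in three broad families of
# theories of consciousness, and hypothetical judgments about how likely a given
# AI system is to satisfy each theory's markers.

credences = {
    "higher_order_thought": 0.2,   # a demanding theory
    "global_workspace":     0.5,   # a mid-range functionalist theory
    "basic_representation": 0.3,   # an undemanding theory
}

# Probability that the system satisfies each theory's markers, based on
# (hypothetical) behavioral and architectural evidence.
marker_satisfaction = {
    "higher_order_thought": 0.1,
    "global_workspace":     0.4,
    "basic_representation": 0.8,
}

# Credence-weighted estimate:
# P(conscious) = sum over theories of P(theory) * P(conscious given theory).
p_conscious = sum(credences[t] * marker_satisfaction[t] for t in credences)

print(f"Rough probability estimate: {p_conscious:.2f}")  # 0.02 + 0.20 + 0.24 = 0.46
```

The same structure could, in principle, be extended to sentience and robust agency, or paired with a precautionary threshold that triggers protections once the estimate passes a chosen level.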

Jacobsen: What is a somewhat obscure—not widely held, obviously—but still possible theory of consciousness or personhood that could apply to some of these systems?

Sebo: There are indeed many theories of consciousness. Some are quite demanding, while others are quite undemanding. I can give you one example of each.

A demanding theory would be a higher-order thought theory of consciousness. This view holds that to be conscious, a system must not only be capable of having thoughts, but also be capable of having thoughts about thoughts—all in the form of propositional language. For example, the system would have to be able to think, “I believe that death is bad and I want to avoid it.”

Many humans have the capacity for higher-order thought, but not all humans have this capacity, and we all lack it during some stages of life. There is also wide agreement that most if not all nonhuman animals lack this capacity. So, this theory sets a relatively high bar. Still, some AI systems might begin exhibiting this kind of higher-order thought soon.

On the other end of the spectrum is a much less demanding theory: panpsychism. According to panpsychism, consciousness is a fundamental property of all matter, as well as many material compositions. This view holds that even particles may have minimal experiences. 

What kinds of compositions can be conscious, according to panpsychism? We know that some configurations of matter—such as human brains—can be conscious. The question then becomes whether other kinds of carbon-based or silicon-based brains can be conscious too.

As you can see, there is a wide spectrum of possibilities. If higher-order and panpsychist theories are closer to the extremes, I tend to focus on theories that are closer to the middle. However, I think that all of these theories should be considered with an open mind.

Now, regarding personhood, there are likewise many theories about what it takes to qualify as a legal person. One view we can set aside is the speciesist position—that you must be a member of the species Homo sapiens to count as a legal person. We may associate humanity with legal personhood, but this view is arbitrary from a moral and legal perspective.

Other theories of legal personhood include the contract view, the community view, and the capacities view.

The contract view holds that social contracts are the source of legal personhood. According to this view, members of a society come together and create an agreement that outlines how they ought to treat one another. This agreement can then extend personhood, along with rights and responsibilities, both to the contractors and to others covered under the contract.

The community view holds that particular communities are the source of legal personhood. According to this view, we acquire duties to each other and rights against each other when we live together, with shared practices and traditions and a sense of interdependence. The key question here is: Which kinds of community suffice for particular rights?

Finally, the capacities view holds that particular capacities are the source of legal personhood.  When a being is sentient, conscious, or agentic, then that being has morally significant interests; in other words, it matters to that being what happens to it. This is sufficient for at least minimal rights. The key question here is: Which capacities suffice for particular rights?

When we consider these non-speciesist perspectives on what might qualify an entity for legal personhood, the conversation then becomes: Which nonhuman entities—such as animals or AI systems—can participate in or be covered by the relevant contracts, belong to the relevant communities, or have the relevant capacities?

This is a significant theme in the current literature on nonhuman personhood.

Jacobsen: Now, setting theories aside, there may be cases of ordinary human messiness regarding legislation. Sometimes, science fiction ideas unexpectedly become real. Even if low-probability, they cannot be dismissed entirely.

Take a hypothetical: Someone influential—a figure like Peter Thiel—advocates for seasteading or independent microstates. Let us imagine that within one such self-declared state, they pass legislation recognizing AI systems as legal persons, even if they do not meet any scientific benchmarks for personhood.

So, in this fictional yet plausible case, AI systems are legally recognized as persons within that jurisdiction. They are granted rights and responsibilities, even though the scientific evidence does not support the idea that they are conscious or morally significant.

What happens in such cases?

Sebo: Good question. One central issue is when courts or other legal authorities should recognize or attribute personhood to AI systems. This recognition can occur without any particular statute and requires a serious assessment of the available evidence.

We must weigh the risks of false positives, overattributing personhood to systems that are not conscious or otherwise significant, against the dangers of false negatives, underattributing personhood to systems that are conscious or otherwise significant.

In your scenario, we are dealing with an overattribution case. By stipulation, the scientific evidence does not support a realistic, non-negligible possibility that the AI system is conscious or otherwise significant. Yet enough people—or influential stakeholders—believe otherwise, so AI is granted legal personhood nonetheless.

This raises significant questions about how much weight different stakeholders—such as the general public or particular individuals with a lot of political or economic power—should carry when their perspectives conflict with scientific and philosophical consensus.

The first part of the answer is this: If we end up in that situation, then something has probably already gone wrong. We may have failed to properly consult experts, assess the evidence, or assess the risk of false positives versus false negatives. 

But supposing we find ourselves in that situation, what follows from attributing personhood to an AI system? It all depends on what the AI system is like. We would need to examine their capacities and interests to determine which legal duties or rights they deserve. 

For instance, all human beings are correctly recognized as legal persons, but not all are assigned legal duties. This is because not all humans have the cognitive capacities necessary to understand the consequences of their behavior and make morally responsible choices. In such cases, humans are legal persons with rights that protect their interests, but without duties.

Advocates for nonhuman animal personhood often make similar distinctions, arguing that animals can have rights even if they lack duties. So, we would need to approach AI systems in the same spirit—open to the idea that if they meet the minimal conditions for personhood, then we must determine whether they also meet the further criteria for specific rights or duties.

Jacobsen: I spoke to Neil Sahota, who is involved with the United Nations AI for Good initiative. That setup made me think of global institutions like the WHO or the FAO, but for AI ethics and legal implementation. I assume that experts have already seriously discussed or proposed a framework that considers cultural specificity while protecting societal welfare and, if warranted, the welfare of AIs themselves. What has been proposed regarding an international body that could integrate ethical systems globally while accounting for these challenges?

Sebo: There have not been many widely discussed proposals specifically about what international governance of AI consciousness or personhood should look like, beyond broad calls to pause or slow down AI development. These calls are typically motivated by concerns over both AI safety (ensuring that AI can be safe and beneficial for humans) and AI welfare (ensuring that AI can be safe and beneficial for the AI systems themselves).

That said, there are models we can draw from—particularly frameworks developed for animal welfare and animal rights—which might serve either as sources of inspiration or cautionary tales.

Generally speaking, we have local, national, and international levels of governance, and we also have executive, legislative, and judicial routes across these levels. For example, we can convince lawmakers to pass statutes that grant certain rights to certain entities, or we can convince courts to recognize personhood in these entities, even without supportive legislation. 

One complication is that laws and legal philosophies vary widely—not just across countries, but even within countries, from state to state or city to city. So any meaningful governance effort will likely need to be at least partly bottom-up, emerging from local and national deliberation and harmonizing across jurisdictions, rather than merely being imposed from the top down.

For example, in 2015 a court in Argentina recognized an orangutan named Sandra as a legal person with specific legal rights, and Sandra was eventually relocated to a sanctuary in the United States. However, that decision does not straightforwardly influence how judges in the United States, or other jurisdictions, will handle similar cases.

So part of the strategy has been to pursue various legal avenues in different places. The Nonhuman Rights Project, for instance, has pursued this strategy in the United States on behalf of chimpanzees and elephants. Simultaneously, others have worked on legislative approaches to persuade local, national, or even international governments to pass laws or issue statements recognizing nonhuman welfare and rights.

At the international level, such efforts often begin symbolically—resolutions or declarations rather than binding statutes—but they can acquire more substance over time. I expect a similarly pluralistic and systems-oriented approach to be necessary for AI welfare and rights. Different actors will attempt different strategies in different jurisdictions, and over time, we can observe what emerges from this coordinated diversity of action.

Jacobsen: There are, of course, capital concerns—by which I mean, people care about money. Some people only care about money, so capital is their capital concern.

Within that, I’m recalling a scene from the most recent Blade Runner film. I do not remember whether we have discussed this before, but in one scene, a synthetic person crushes another synthetic being—an AI companion who had, virtually at least, shared intimacy with the film’s protagonist, himself a synthetic Blade Runner. The two had been something like partners in crime-fighting.

It was striking as a thought experiment. No humans are involved. It is an artificial person destroying another artificial mind—vindictively, vengefully, or perhaps on a whim. What ethical concerns arise when human beings are removed from the moral equation, and we are left with interactions between nonhuman minds?

Sebo: Yes, that’s a fascinating case. It reminds me of scenes from the recent film Companion, which also explores human interactions with human-like robots and the ethical implications of these interactions.

Your Blade Runner example raises two central ethical questions. First, if one nonhuman artificial entity commits violence against another, what are the perpetrator’s duties in this situation? Second, and relatedly, what are the victim’s rights in this situation?

I use terms like “violence,” “perpetrator,” and “victim” provisionally here, because we have not filled in the case details enough to determine whether either system is plausibly conscious, agentic, or morally significant. However, if we assume that both entities meet these criteria, then the scenario becomes a real ethical puzzle.

This thought experiment will become increasingly relevant as AI systems grow in complexity. It challenges us to imagine a world in which moral and legal status are not exclusive to biological organisms, and where ethical relations might exist even among artificial beings.

As I said earlier, we would need to start by collecting the evidence for and against the idea that AI systems possess the capacities and interests necessary for duties and rights. Do they not only perform the relevant behaviors but also have the relevant architectural properties?

Second, we must carefully weigh the risks and potential harms associated with false positives and negatives about moral and legal status. Regarding false positives: How likely are we to overattribute duties or rights to these systems? And what would happen if we did? For instance, might we lose control over key social, political, or economic systems? Might we misallocate resources to AI that are better directed toward humans or animals?

Regarding false negatives: How likely are we to underattribute duties or rights to these systems? And what would happen if we did? For example, if we mistakenly deny that AI systems have duties, then we might fail to hold them accountable for their behavior in the right kind of way. And if we mistakenly deny that they have rights, then we could exploit or exterminate them at vast scales and for trivial reasons, as we still do with many animals.

We then have to integrate all of these considerations and, based on the balance, decide on the most responsible course of action. That might mean erring on the side of not attributing duties or rights or erring in the opposite direction—attributing those duties or rights. Or it might involve striking a middle ground: attributing some rights and duties, but of a weaker or more limited kind, and pairing them with regulatory safeguards.
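
One way to picture this balancing act is as a toy expected-harm comparison, sketched below in Python. The probability and harm magnitudes are invented for illustration and are not part of Sebo’s framework; they only show how the false-positive and false-negative considerations above might be weighed against each other.

```python
# Toy illustration only: hypothetical probability and harm magnitudes (arbitrary units).
p_morally_significant = 0.2   # assumed probability that the system really matters morally

harm_false_positive = 10      # cost of overattribution (lost control, misallocated resources)
harm_false_negative = 100     # cost of underattribution (exploiting a being that matters)

# If we attribute rights, we bear the false-positive cost only when the system
# turns out not to matter; if we withhold them, we bear the false-negative cost
# only when it turns out that it does matter.
expected_harm_attribute = (1 - p_morally_significant) * harm_false_positive   # 8.0
expected_harm_withhold = p_morally_significant * harm_false_negative          # 20.0

print(expected_harm_attribute, expected_harm_withhold)
```

In practice, of course, every input here is uncertain and contested, which is why the middle-ground options and regulatory safeguards mentioned above may often be the more responsible choice.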

This is precisely why my colleagues and I have advocated for proactively addressing these issues. The questions involved are scientifically, philosophically, ethically, legally, and politically complex. And because serious risks, costs, and harms can be associated with any possible error, whether in overextending or underextending moral consideration, it would be a mistake to postpone taking these issues seriously.

In short, when facing a high-stakes ethical issue involving widespread disagreement, profound uncertainty, and no obvious way to “play it safe,” you need careful deliberation, consultation with experts, and engagement with the public. 

We may not be in that situation yet, but we plausibly will be soon. Once AI systems start interacting with humans, animals, and each other in complex, realistic ways, we might be genuinely unsure whether they meet the criteria for moral or legal status.

So we must allow ourselves to think these questions through now, before we are caught flat-footed and forced to make critical decisions in haste.

Jacobsen: There is, of course, precedent for this kind of debate in contemporary moral and ethical movements, especially those focused on expanding the moral circle.

Your book engages with this idea, reflected in areas like dietary ethics. Over time, movements such as vegetarianism, veganism, and pescetarianism have gained traction, often based on ethical concerns about animal welfare and sustainability.

In turn, these movements have generated what you might call backlash responses—even if they are not labelled as such. You see people promoting diets based entirely on red meat, such as beef, pork, bacon, steak, claiming it to be the most “nutrient-dense” form of food.

These reactions illustrate that moral expansion is often met with moral resistance. These kinds of movements are not usually framed as backlash movements but are movements nonetheless. They are dietary movements with ideological roots.

If the ethical commitment behind veganism or vegetarianism is grounded in expanding the moral circle to include nonhuman animals and reducing consumption of animal products in favour of plant-based alternatives, then the reactionary push toward diets like “three steaks a day” implicitly represents an ethical backlash.

It’s an exaggerated caricature of the opposite extreme: a performative rejection of the moral logic behind plant-based diets. So, what are your thoughts on that kind of response? And second, could we expect something similar to emerge around AI?

Sebo: Yes—great questions. They show that while moral progress is possible, it is often nonlinear and contested. There are many layers to consider—social, legal, political, economic—all of them influence the pace and direction of change.

In the case of food system reform, we are having these conversations in a world where we spent the past fifty years building a global system of industrial animal agriculture. That system is now deeply embedded in the worldwide economy. It gives people food, jobs, even a sense of identity. That creates strong incentives to maintain the status quo.

It is understandable that many people—whether for personal, cultural, religious, or economic reasons—want to preserve that system rather than radically disrupt it. We should still work toward a future in which plant-based foods are central to our global food supply. But we must do this work gradually, through a coordinated and incremental approach.

That means using informational policies to educate people about the consequences of food systems; financial policies to shift subsidies and incentivize plant-based alternatives; regulatory policies to phase out the worst practices of factory farming; and just transition policies to support workers, farmers, and consumers as they move toward more sustainable livelihoods.

If we invest in that constellation of reforms, I expect that over time, the price, taste, and convenience of alternatives to meat, dairy, and eggs will improve. As those options become more available and attractive, adoption will naturally increase and resistance will naturally decline. But it will be a long, slow process, in part because we waited so long to get started.

Now, we are in a very different situation with AI. We have not spent the last fifty years building a global system involving the exploitation or extermination of possibly sentient AI systems—at least not to the same extent we have with animals. Our relationships with AI systems are still new, and the cultural, religious, and economic understandings of AI are still forming.

That gives us a window of opportunity. We are not yet locked into practices that would trigger the same defensive backlash that we see in conversations about plant-based diets. Instead, we are still early enough in the process that we can help shape the norms and values surrounding these systems before harmful patterns become entrenched.

Yes, there might not yet be a high-probability risk that AI systems deserve moral consideration. But we still have to consider path dependence. Once a specific trajectory begins, it can become increasingly difficult to alter. If we do not think about these issues soon, we may find ourselves locked into a particular framework or pattern that is hard to undo.

And in general, it is always easier to prevent a bad system from developing than to reverse it after it has already developed. That is one of the main reasons I feel a sense of urgency around AI welfare and advocate for engaging with this issue now. As with industrial animal agriculture fifty years ago, we still have a chance to stop the problem before it really starts.

Jacobsen: Popular conversations are limited in scope—not just because of content complexity but also due to practical limits, such as word count restrictions, time constraints, and attention spans. There is a ceiling to how long people will read an article or watch a YouTube video.

There’s an appetite for background listening—people tuning in with an AirPod at work, maybe construction workers listening to Joe Rogan or whatever the podcast of the moment is. But AI is not just one thing.

It is a range of technologies, architectures, and applications. So, when we talk about the type of AI that could exist in the future—the kind that might warrant moral or legal standing—how would we legally distinguish that from other types of AI? Especially when it could all get conflated in public discourse?

Sebo: That’s a good question. Soon enough, we will have to stop discussing these issues in generalized terms—like “AI welfare” or “AI rights”—and begin addressing them with greater specificity: Which kinds of AI systems are we talking about?

We have faced similar challenges with animals. There is a world of difference between the evidence for consciousness in an orangutan and a jellyfish. Likewise, a vast difference exists between what an orangutan needs for a good life and what a jellyfish needs for a good life, assuming a jellyfish is capable of having a good life.

We are quickly approaching a point where we must make comparable distinctions among AI systems. Which ones have advanced, integrated capacities—perception, memory, learning, self-awareness, decision-making—and which ones lack them? Which systems are based on novel architectures, and which rely on current transformer-based architectures?

Those differences matter greatly in determining how strong the case is for consciousness or moral status. They also affect how we interpret what these systems might want, need, or be owed—if they are conscious and morally significant. As AI becomes increasingly advanced, we must bring a new level of conceptual precision into these conversations.

Jacobsen: In your experience, how often—and to what extent—is actual policy informed by professional academic research, as opposed to hunches, ideological bias, dominant cultural narratives, or even the personal theological views of the politician or policymaker in question?

Sebo: It would be a mistake to overestimate academics’ ability to shape major global political decisions. As we discussed earlier, how individuals behave—and how companies and governments behave—is shaped by a number of social, legal, political, economic, ecological, cultural, and religious factors. Expert opinion is only one factor among many, especially when experts disagree or express uncertainty about what course of action to recommend.

That said, expert input can still play a meaningful role. It may be one factor among many, but it occasionally influences decisions in corporate, governmental, or other high-stakes contexts. For example, my colleague Jonathan Birch at the London School of Economics published a report in 2021 on the evidence for sentience in cephalopod mollusks like octopuses and decapod crustaceans like lobsters. He recommended that the UK Government expand the scope of its Animal Welfare (Sentience) Act to include these invertebrate taxa. The UK Government accepted his recommendation, and these animals are now officially recognized under the act.

Similarly, some AI companies have started consulting with academics, not only on AI safety but increasingly on AI welfare. They seek input on how to consider the potential for consciousness, robust agency, and other potential bases of moral status in AI, and how to treat AI systems when they have a realistic chance of having these features. While not much concrete action has been taken yet, there are encouraging early signs. For instance, in fall 2024, Anthropic appointed a full-time AI welfare researcher and subsequently published a blog post and an interview explaining why they are beginning to invest in this area.

So yes, academics can have an impact. That influence should be taken seriously when we consider whether and how to engage with companies, governments, and other decision-makers. But again, we should not exaggerate our influence. A report with the best evidence, best arguments, and best recommendations will not, on its own, redirect the course of history. Academic input will be one element in a broader array of influences, including powerful social and economic incentives for decision-makers. Rarely will there be a “silver bullet,” but if we have a chance to help, we should do what we can.

Jacobsen: Let’s take the long view—zooming out over centuries, from past to present. What is a moral view about nonhuman animals that was widely accepted in, say, the 1700s—but has now largely disappeared or become marginalized to the point that it is no longer part of serious moral discourse?

Sebo: Great question. It is essential to note that even two, three, or four hundred years ago, views on animal welfare were not monolithic—just as they are not today. However, some practices and beliefs that were once common have since diminished or disappeared from mainstream discourse.

One striking example is live vivisection. While scientific and medical experimentation on animals still occurs today, often controversially, centuries ago it was far more common to perform invasive procedures on conscious animals. Researchers would dissect animals alive, often under the assumption that they were mere automatons, incapable of feeling pain, or because they believed animal suffering did not morally matter.

However, the practice of live vivisection—understood in that extreme sense—is less tenable now. It has largely disappeared from public justification and is mostly unacceptable in mainstream discourse. Of course, animal exploitation still exists in many forms and at increasingly high scales, especially in our food system. Still, the rejection of vivisection is an example of moral progress, rooted in changing scientific views and ethical norms. 

We now widely appreciate that many animals are either conscious or at least have a realistic chance of being conscious and that we have a responsibility to consider welfare risks for these animals when making decisions that affect them. Even in the scientific community, there is a general aspiration to replace, reduce, and refine our use of animals in research. The idea is to continue making scientific progress while minimizing the harm we cause.

That is still not enough—we must make much more progress toward compassionate treatment of animals, eventually ending practices like factory farming and invasive research. But this nonetheless represents significant progress. We now take it for granted that many animals matter for their own sakes. The current debates are no longer about whether animals deserve care but about which animals deserve it, how much respect they deserve, and how we ought to treat them.

Jacobsen: Do you have any favourite quotes? Philosophical, from a movie, anything?

Sebo: Some movies can be philosophical! But I will go with a classic in this context—Jeremy Bentham, in a famous footnote about nonhuman animals. He wrote:

“The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”

That line captures something essential. If a being can suffer, they deserve moral consideration—even if they happen to be a member of a different species.

And part of what we are exploring now is a natural extension of that logic: If a being can suffer, they deserve moral consideration—even if they happen to be made out of different materials.

As with animals, we might not know whether specific AI systems can suffer. However, once there is a realistic chance that they can, we face a welfare risk, which we ought to take seriously.

Jacobsen: Thank you again for your time. 

Sebo: Great. Thanks—appreciate it.
