
How Does Culture Shape Who You Are? Dr. Lloyd Hawkeye Robertson on the Memetic Self

2025-05-16

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): A Further Inquiry

Publication Date (yyyy/mm/dd): 2025/05/14

Part 1 of 2.

Dr. Lloyd Hawkeye Robertson, a Canadian psychologist and theorist, developed the concept of the memetic self—a culturally constructed identity formed from transmissible units of meaning called memes. He explores how language, culture, and social interaction give rise to self-awareness, tracing its development from mirror recognition in animals to modern identity. Robertson uses self-mapping, a therapeutic tool that visualizes a person’s identity through linked memes, to address fragmentation in conditions like autism, Alzheimer’s, and dissociative identity disorder. His work emphasizes coherence, volition, and cultural adaptability, and his forthcoming book—coauthored with his daughter—applies these insights to psychotherapy.

Scott Douglas Jacobsen: Today, we’re joined by Lloyd Hawkeye Robertson. He is a Canadian psychologist, educator, and theorist known for his innovative work on the culturally constructed self. With over 40 years of experience in counselling and educational psychology, he developed the concept of the memetic self—a cognitive framework composed of culturally transmitted ideas (or “memes”) that shape an individual’s identity. He is the author of The Evolved Self: Mapping an Understanding of Who We Are and a pioneer of self-mapping, a visual and therapeutic method for exploring and restructuring identity. His work bridges psychology, philosophy, and cultural studies, offering practical tools for therapy and education while exploring questions of free will, agency, and the evolution of selfhood across diverse cultures. Mr. Robertson, thank you very much for joining me again today. I appreciate it. It’s always a pleasure.

Dr. Lloyd Hawkeye Robertson: You’re welcome. I’m looking forward to this, Scott.

Jacobsen: So, what is the self?

Robertson: Oh, that’s pretty basic. As you mentioned in your introduction (thank you for that generous overview), the self is a construct: a conceptual framework we use to define who we are. It is not a physical entity in the brain but rather a cognitive and cultural construct, a mental map that incorporates beliefs, values, experiences, and roles.

This construct has evolved. One of the earliest indications of self-awareness in our evolutionary lineage is mirror self-recognition, which has been observed in some great apes, dolphins, elephants, and magpies. In our hominin ancestors, the development of language and culture allowed for increasingly complex and abstract self-concepts.

Recognizing one’s reflection—understanding that “this is me”—marks a foundational moment in developing self-awareness. Although early humans may not have had the language to describe it, the ability to form a concept of self based on reflection and social interaction was critical. This capacity laid the groundwork for the complex, culturally mediated selves we navigate today.

From that modest beginning, our ancestors gradually evolved the capacity for social interaction. They needed a rudimentary idea of who they were to engage socially, even if it was not consciously articulated.

Language development significantly boosted the evolution of the self. Once we moved beyond simple two-slot grammar—like “him run”—to more complex phonetic constructs, we could combine distinct sounds that held no individual meaning but could generate an almost unlimited number of words.

With that, collections of words took on new, layered meanings. As this linguistic complexity emerged, our self-definition became more nuanced, expanded, and refined. About 50,000 years ago, humans began burying their dead. This act implies a recognition of mortality and a developing self-concept about life and death.

The most recent significant change in our understanding of the self—as part of cultural evolution—may have occurred as recently as 3,000 years ago. I say “may” because it could have emerged earlier, but our evidence dates to that period, particularly from Greek writing and Egyptian hieroglyphics. Of course, many earlier cultures lacked writing systems, so we cannot be definitive about when this modern conception of self emerged.

What is this self I’m referring to? It includes the ideas of volition, constancy over time, and uniqueness. For instance, although you and I, Scott, share many characteristics, I do not believe you are me, and vice versa. Even if I had an identical twin—same genetics, upbringing, and experiences—I still would not recognize him as myself. That sense of uniqueness is part of the “modern self”—a culturally evolved manifestation of identity with an inherent sense of individualism.

Here is the great irony: we are a social species, and the self emerged through social interaction within early human communities, particularly tribal Neolithic groups. The self could not have developed in isolation; it depends on interaction with others. So, we are fundamentally shaped by collectivism, even though individualism is built into our modern self. This creates an internal tension between the group’s needs and the individual’s autonomy.

Historically, that tension was mediated by religion—specifically, organized religion, which kept people in their social roles. In Western civilizations, a deity often prescribed those roles, and individuals could not transcend them. In other cultural contexts, tradition or ancestor worship defined the limits of the self.

Societies that completely suppressed the modern self remained stagnant, while those that permitted at least some individuals to develop a sense of autonomous selfhood became more adaptive. This is because the self is a powerful tool for problem-solving. It allows us to reinsert ourselves into past experiences as protagonists, to relive and learn from those events, and to rehearse possible futures mentally. We can adjust our behaviour accordingly. These are valuable psychological skills.

But they also come at a cost. With the modern self comes the capacity for anxiety and existential distress. I doubt that our earliest ancestors experienced clinical depression or anxiety disorders as we know them today. These conditions are part of the psychological “baggage” of possessing a self capable of complex reflection and future projection.

For millennia, the self was constrained—kept “on a leash,” so to speak—until a set of unique historical conditions emerged in Europe. Specifically, during and before the Enlightenment, the Catholic Church—which had long functioned to suppress individualism—lost control, particularly during the Reformation and the ensuing religious wars between Catholics and Protestants.

Individuals gained some permission to explore personal identity when centralized religious authority broke down. This blossomed into what we now call the Enlightenment. The Enlightenment did not invent the self—it authorized it. Not entirely, of course—we remain social beings with embedded restrictions—but it granted individuals more freedom to develop their own understandings.

This led to the rise of modern science and humanism. Knowledge was no longer handed down by authority. Instead, it became something you had to demonstrate through observation, reason, and experimentation. These practices allowed individuals to engage with a reality beyond themselves.

And that is where humanism emerged. So, you asked me what the self is—and now you see: when you ask me a question, you get a long-winded answer.

Jacobsen: How do you define “meme” within the framework of The Evolved Self?

Robertson: The word “meme” has had an unfortunate evolution. Richard Dawkins coined the term in the 1970s to represent a self-replicating unit of culture.

For instance, a simple descriptor like the colour red is not a meme. It’s merely a physical property description, not a transmissible concept that evolves culturally. A meme, in contrast, is more than an idea; it is a cultural construct that carries meaning across individuals and generations.

Dawkins defined a meme as something broader than a simple descriptor but narrower than an entire ideology, religion, or belief system. The latter, of course, is composed of many memes—interrelated units of culture. You can, for example, substitute the colour red in a conceptual framework with blue, and the core concept might remain, but the meme is more than any one element—it has internal structure and transmissibility.

Unfortunately, Dawkins did not have the opportunity to develop the theory fully. His work was criticized for being tautological. Critics asked, “How can you prove this? How do we observe or measure a meme?” These questions challenged the concept’s empirical rigour.

In my research, I proposed a refined definition of a meme: it must be a unit of culture with behavioural, qualitative, and emotional (or emotive) implications. A proper meme is not just a label or idea—it affects how we feel, act, and make meaning.

This also resolves a challenge Dawkins left open—his observation that memes can have “attractive” or “repulsive” properties. He did not elaborate on the mechanics of that.

In my framework, if one meme naturally leads to another—like how “love” often leads to “marriage” in cultural narratives—that linkage reflects an attractive force between memes. Conversely, when two memes are psychologically or conceptually incompatible—“love” and “hate” coexisting as core guiding values at the same moment—that reflects a repellent force.

On my account, the modern self is composed of a collection of memes that are primarily attractive to one another. If a meme within that structure becomes repellent—meaning it no longer aligns with the rest of the self—it tends to be ejected. That is how we maintain coherent, relatively stable identities.
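To make that attraction-and-ejection dynamic concrete, here is a minimal sketch of the idea in Python. It is an illustration only, not Robertson’s clinical method: the meme labels, the signed numeric weights, and the ejection rule are assumptions introduced for the example.

```python
# Illustrative sketch: a self as a small network of memes with signed links.
# Positive weights model the "attractive" forces described above, negative
# weights the "repellent" ones; all labels and numbers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Meme:
    """A unit of culture with behavioural, qualitative, and emotive force."""
    label: str
    links: dict = field(default_factory=dict)  # other label -> signed weight

class SelfMap:
    def __init__(self):
        self.memes = {}

    def add(self, label):
        self.memes[label] = Meme(label)

    def link(self, a, b, weight):
        """Positive weight = attraction; negative = repulsion."""
        self.memes[a].links[b] = weight
        self.memes[b].links[a] = weight

    def net_pull(self, label):
        """Net force the rest of the self exerts on one meme."""
        return sum(self.memes[label].links.values())

    def eject_repellent(self):
        """Drop memes whose net force is repellent, mirroring the
        'ejection' of memes that no longer align with the self."""
        doomed = [l for l in self.memes if self.net_pull(l) < 0]
        for label in doomed:
            del self.memes[label]
        for meme in self.memes.values():
            for label in doomed:
                meme.links.pop(label, None)

# The interview's examples: "love" attracts "marriage" but repels "hate".
self_map = SelfMap()
for label in ("love", "marriage", "hate"):
    self_map.add(label)
self_map.link("love", "marriage", +1.0)
self_map.link("love", "hate", -1.0)
self_map.eject_repellent()
print(sorted(self_map.memes))  # ['love', 'marriage'] -- "hate" was ejected
```

The point of the sketch is purely structural: a self that is mostly attraction-linked stays coherent, and an element with net repellent force gets pushed out, as described above.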

Of course, not everyone has a stable sense of self. My work as a psychologist involves helping people reconfigure their self-concepts when internal inconsistencies cause distress.

Now, where things get tricky is the evolution of the word “meme” online. The internet popularized the term in a way that deviates from its original definition. Internet memes typically involve humour or juxtaposition—two ideas or images that don’t usually go together. While some may qualify as memes in the original sense, internet usage represents a narrow and diluted interpretation.

Jacobsen: Did I hear you correctly? You’re saying the modern meme online sometimes overlaps with Dawkins’ definition, but only in a limited sense.

Robertson: Yes, exactly. Internet memes sometimes fulfill the criteria but rarely capture the deeper behavioural and emotional dimensions Dawkins originally gestured toward—which I’ve tried to formalize more clearly.

Jacobsen: So, how does this fit into your work on self-mapping?

Robertson: Good question.

One of the most academically grounded ways to create a self-map is to ask someone to describe who they are. You use prompting questions to elicit a detailed, rich description of their self-concept.

I collect those self-descriptions in my research—just like this interview is being recorded. I transcribe the responses and break the narrative into elemental units—essentially memes. Each unit is labelled and categorized. This approach parallels qualitative methods in social science research.

The coding method I use for self-mapping parallels the qualitative analysis approach developed by Miles and Huberman in the early 1990s.

You label each unit of meaning. A sentence could represent a single unit or contain multiple distinct concepts. You isolate those concepts into thematic categories—or “bins”—based on their shared meaning.

Then, if those units exhibit the characteristics I described earlier—qualitative, behavioural, and emotional implications—you can classify them as memes.

Next, you examine the relationships between those memes. You identify which memes are attracted to each other—either through thematic linkage or cause-effect associations—and chart those relationships. You map them visually, using lines to indicate attractive forces. That’s the core structure of the self-map I create.
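For readers who think in code, the pipeline just outlined can be sketched as follows. This is a rough stand-in under stated assumptions: the sentence splitting, keyword bins, and meme test below are crude proxies for the trained judgment of a human analyst working in the Miles and Huberman tradition, and the bin names and transcript text are hypothetical.

```python
# Illustrative coding pipeline: transcript -> units of meaning -> thematic
# bins -> meme test. Real qualitative coding is done by an analyst; these
# functions are simplified stand-ins showing the shape of the method.

def split_into_units(transcript):
    """Naive stand-in: treat each sentence as a candidate unit of meaning."""
    return [s.strip() for s in transcript.replace("?", ".").split(".") if s.strip()]

def assign_bin(unit, bins):
    """Place a unit in a thematic bin by keyword match (an analyst would judge)."""
    for bin_name, keywords in bins.items():
        if any(k in unit.lower() for k in keywords):
            return bin_name
    return None

def is_meme(unit):
    """The stated criteria are behavioural, qualitative, and emotive
    implications; this word-count proxy is purely a placeholder."""
    return len(unit.split()) > 2

# Hypothetical bins and a toy transcript fragment.
bins = {"family": ["father", "mother", "family"],
        "work":   ["teach", "job", "career"]}
transcript = "I am a father. I teach high school. Red."
units = split_into_units(transcript)
coded = [(u, assign_bin(u, bins)) for u in units if is_meme(u)]
print(coded)  # [('I am a father', 'family'), ('I teach high school', 'work')]
# Note: the bare descriptor "Red" fails the meme test, echoing the earlier
# point that a simple property label is not a meme.
```

From there, the attraction links between the surviving memes would be charted and drawn as the lines of the visual self-map.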

Now, this method requires considerable time and effort.

So, to make the process more accessible, my daughter—a psychologist—and I developed a quicker method in collaboration with a colleague from Athabasca University. We created a structured questionnaire with 40 core prompts, which could be expanded to 50 or 60.

The questions focus on four primary areas. First, we ask: “Who are you?” People might respond with statements like “I’m a father” or “I’m a chess player.” These are self-descriptive memes—cultural elements that express identity.

Then, we ask: “What are 10 things you like about yourself?” and “What are 10 things you would change if you could?” Finally, we ask: “What are 10 things you believe to be true?”

One of my clients, earlier this year, offered a novel and powerful addition to the exercise: “What are 10 things you keep hidden from others?” That insight added emotional depth and complexity to the map.
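As a small illustration, the quick method’s prompt areas could be encoded like this. The wording paraphrases the interview, the area keys are invented for the sketch, and the real instrument’s full 40-prompt set is not reproduced here.

```python
# Hypothetical encoding of the quick-method prompt areas described above,
# including the client-suggested fifth category. Paraphrased, not the
# actual questionnaire.
PROMPT_AREAS = {
    "identity":  "Who are you?",
    "strengths": "What are 10 things you like about yourself?",
    "changes":   "What are 10 things you would change if you could?",
    "beliefs":   "What are 10 things you believe to be true?",
    "hidden":    "What are 10 things you keep hidden from others?",  # client-suggested
}

def collect_responses(ask):
    """Gather raw self-descriptive statements per area.

    `ask` is any callable mapping a prompt string to a list of answers,
    e.g. a console prompt in a pilot tool or canned data in a test.
    """
    return {area: ask(prompt) for area, prompt in PROMPT_AREAS.items()}
```

The responses gathered this way would then feed the same unit-coding and linking steps sketched earlier.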

Once we gather that data, we create a visual self-map, following the same principles as in my academic research. I jokingly call this the “quick and dirty” version, but it works. My daughter Teela and I have used it successfully with many clients.

The crucial step is refining the map with the client until they recognize themselves. The map resonates when they say, “Yes, this is me,” reflecting their identity. If something important is missing, like a sense of personal agency or volition, that is where our work as psychologists begins.

We help them develop those underrepresented self-elements based on an idealized model of the modern self—a coherent, autonomous individual identity. When parts are missing or fragmented, we work to integrate them.

We should do a formal academic study to validate this quick method, but based on clinical experience, it works.

Jacobsen: If we take all these elements and look at them as a whole, we’re essentially describing an “evolved self.” That allows us to examine the coherent identity of a person. How would you describe someone who lacks a coherent self or identity?

Robertson: That does happen. Not everyone possesses a well-formed self.

Jacobsen: Please explain.

Robertson: Take classical autism, for example—the traditional form I learned about during my training, not the broader, more ambiguous “autism spectrum disorder” currently defined by the APA. That modern definition is so diffuse that it’s challenging to apply meaningfully in clinical settings.

In classical autism, you may encounter children who engage in highly repetitive, self-soothing behaviours. One case I worked with involved a boy who spent most of his day swinging a string with a weight on the end, keeping it taut in a circular motion. Even while eating—an essential survival activity—he needed the string in his hand. If someone took it away, he would have a full-blown panic attack.

At that level of autism, the individual lacks a coherent self.

One key indicator is the absence of what psychologists call “theory of mind”—the capacity to understand that others have thoughts, feelings, and motivations similar to one’s own.

Theory of mind is essential. It allows us to interpret the behaviour of others based on internal states. For example, I can infer that you, Scott, have emotions and goals. If I understand your context, I can anticipate your next question. That’s mind-reading—not in a mystical sense but in a psychological, predictive sense. It’s something we all do constantly.

It is vital for navigating everyday life. For example, when driving, we anticipate that other people will stay on the correct side of the road. In Canada, that means the right side. We base this assumption on our shared cultural understanding, which generally holds.

Jacobsen: So, what happens to people who do not have a self?

Robertson: There are others, aside from individuals with severe autism, who also lack a coherent self. One group includes people with advanced Alzheimer’s disease.

There’s a poignant story told by an Alzheimer’s researcher (I’m forgetting the name) about a woman who would visit her husband, who had advanced Alzheimer’s. She would begin by introducing herself each time: “My name is [X], and I’m your wife.” Once he understood her name and the relationship, they could converse coherently.

Then, one day, after she introduced herself and said, “I’m your wife,” he looked at her and asked, “Yes, and who am I?”

He genuinely did not know. So yes, there are people who lose their sense of self. It is rare, but it happens. Most people have a self—and nearly always, there’s a one-to-one correspondence between self and body.

Jacobsen: This brings me to three points of contact for further questions.

The first two are based on your description, and the third is a broader conceptual issue. First, in the case of someone with what might be considered a nonstandard profile on the autism spectrum—who meets the characteristics you mentioned—what are the legal and professional implications of working with someone who, by your clinical analysis, lacks a functional self?

Second, in cases involving advanced dementia or Alzheimer’s, how do you interpret situations where a person can still speak in coherent, functional language yet openly asks, “Who am I?” or “Do you know who I am?”

Robertson: Those are deep and difficult questions.

In the case of someone with classic autism, we generally assume that a parent or legal guardian is involved—someone who can authorize professional intervention. The goal is to help the individual develop skills that improve quality of life. Whether or not these interventions fully succeed is another matter, but we do try—and sometimes, we help.

With advanced dementia or Alzheimer’s, things get more complicated—particularly when it comes to end-of-life care and living wills. You may have someone who no longer remembers ever having signed a living will, and yet, according to that document, medical professionals are instructed to allow them to die.

It raises profound ethical dilemmas. You may encounter someone who still shows signs of a will to live—even joy or affection—but can no longer comprehend their identity or the implications of past decisions. That contradiction is ethically challenging.

Jacobsen: So the person is left saying, in effect: “I have a will to live and a living will to die. I cannot know who I am, yet I still live.”

Robertson: Right. It’s not a lack of will—it’s a lack of cognitive ability to know.

Jacobsen: What about cases involving dissociative identity disorder—what used to be called multiple personality disorder? In those situations, more than one “self” seems to coexist in the same body.

Robertson: That diagnosis is controversial. Not all professionals agree that it reflects an actual condition. However, conceptually, it’s possible—because the self is a cultural construct.

The self is not a metaphysical entity that inhabits the body. Instead, it describes a person shaped by cultural constructs that include the body and socially mediated self-understanding. Think of the body and brain as the hardware and the self as the software—cultural programming that shapes perception, behaviour, and identity.

Given that framework, it’s theoretically possible for multiple “selves” to coexist—though this would be a rare and complex scenario. The older term “multiple personality disorder” implicitly recognized the possibility of multiple selves, while the newer term “dissociative identity disorder” implies a fragmented self.

Now, I’ve never worked personally with someone diagnosed with multiple selves, so I’m speaking from theoretical and scholarly understanding here.

From what I’ve read, therapists who work with such clients often report that one becomes dominant or “emergent” while others recede. The therapeutic aim, typically, is to integrate these multiple selves into a coherent whole so the individual can function more effectively.

There’s a fringe view in psychology suggesting that this therapeutic integration is akin to “murder”—that by fostering one coherent self, we are erasing others. I don’t accept that view. That’s an extreme form of ideological overreach.

Jacobsen: This introduces another critical nuance. The self emerges not only across human history—it also unfolds across individual development. The self is not present at conception or birth in its complete form. It’s an evolved pattern of information—a construct that takes shape over time. And, just as it can emerge, it can also deteriorate.

In advanced age or due to disease, the body and many faculties may still function—but the self might fade away. In that sense, you could argue that the self has a lifespan within the human lifespan. People talk about lifespan, and increasingly about healthspan—but perhaps we should also talk about a “self-span.”

Robertson: That’s an intriguing idea—a self-span.

Jacobsen: It would be difficult to measure precisely, of course, especially given the limitations of quick-and-dirty self-assessment methods versus more rigorous, clinical approaches like self-mapping. Still, it’s a meaningful concept.

If the self is a cultural construct, we might ask: Do different cultures shape the self in ways that affect when it tends to emerge developmentally? Does the self appear earlier or later, depending on the cultural context?

Robertson: That’s a fascinating question. I do not have a definitive answer, but I’ve mapped the selves of people from the interior of China, from Siberia, and from collectivist communities in North America. Every culture I’ve studied has a self.

Here’s where the cultural variation becomes evident: different cultures emphasize different aspects of the self. One of the people I mapped was a woman from a traditional family in the interior of China.

Yes, she had the same structural aspects of the self found in North American individuals, including a volitional component. But that part of her self—the volitional aspect—was not valued in her cultural context. Instead, traits of family duty and moral conduct were emphasized, reflecting collectivist values.

So, structurally, her self was similar. But culturally, the valued components were different. What made this particularly interesting is that after seeing her self-map, she described feeling like a “robot,” and she decided that was not a good thing.

Over about eight or nine months, she resolved to start making her own decisions. This did not prove easy because most of us do not make conscious decisions at every moment. Typically, we rely on habit, social norms, or deference to authority. For example, someone might say, “Lloyd Robertson says this is a good idea, so I’ll go with that.”

Most of the time, though, we act on autopilot. She, however, began engaging in conscious decision-making—evaluating possible outcomes, comparing alternatives, weighing probabilities, and assigning value. She did this even with mundane choices like what to eat or wear in the morning.

It exhausted her. She felt she was getting nowhere. Eventually, she decided: “My life is too valuable to waste making every decision consciously. I’m going back to being a robot.”

But here’s the key insight: to make that decision, she had to engage her volitional self.

She never abandoned it. It was still there—intact, available, and waiting for the next time she chose to use it.

Jacobsen: Let’s say we have a rare case of genuine dual selves in one body. And to be clear, I do not mean conjoined twins—cases where two individuals share some neural connectivity. I’m referring to a single individual whose psychology has bifurcated. What if their volitional trajectories—their vector spaces—are at odds with one another?

This reminds me of a presentation by V. S. Ramachandran, the neurologist known for the mirror box experiment. He referenced split-brain patients—individuals whose corpus callosum had been surgically severed to treat epilepsy.

In such cases, if you present a stimulus to only one visual field, it reaches only one hemisphere. For example, when Ramachandran asked these patients if they believed in God—having them point up for “yes” or down for “no”—the left hemisphere might point “yes,” while the right pointed “no.”

The individual would often laugh in response. Ramachandran joked that this showed the right hemisphere had a sense of humour.

But there’s a more profound point here: split-brain patients can manifest two conflicting worldviews—internally consistent but contradictory selves. In theological terms, this raises amusing but profound questions. For instance, if belief grants salvation, does one hemisphere go to heaven and the other to hell?

On a more serious note, when these volitional patterns conflict—not just on trivial matters but on core values—what happens? And for those who criticize integration therapy as “murdering” a self, how do you respond?

Robertson: The split-brain experiments are fascinating, but they are distinct from dissociative identity disorder.

In most people, the right hemisphere houses spatial awareness and emotional reasoning, while the left hemisphere tends to handle verbal processing. When the corpus callosum is severed, these two systems can no longer communicate, so each side may draw on separate memories or frameworks.

In an intact brain, people typically build a worldview—a cognitive map of how the world works. This worldview often resides in the left hemisphere. When incoming information conflicts with that map, people experience cognitive dissonance.

We have many defence mechanisms that we use to keep that worldview intact, but at some point our constructed reality diverges too far from objective reality. The right brain, at a feeling level, “dissolves” the construct, and the left hemisphere, which governs executive control and higher reasoning, then begins creating a new or amended worldview. It does not happen often, but it happens enough to keep us psychologically adaptive.

Now, returning to your question: Is there a God? If only one hemisphere believes, which is correct?

Well, that depends on which side holds the belief. Humanism, for example, is highly cerebral—logical, empirical, and grounded in Enlightenment thought. It is likely rooted in left-brain processes. Compassion, however, may bridge both hemispheres.

Jacobsen: So, what is the right brain holding onto?

Robertson: Something interesting happened to me the other day. I woke up with a Christian hymn running through my head—one I learned in my fundamentalist upbringing.

It struck me: Where did that come from? It must have been encoded deeply. I was baptized not once but twice, by full immersion both times.

That early religious imprint likely lodged itself somewhere in my right hemisphere. It may be largely inactive now, but it is not gone.

Jacobsen: So, do developmental trajectories matter here?

You were raised with those strong evangelical influences at a young age, and even though you’ve moved beyond them, they left an imprint. Neuroscientifically, as far as we know, the dorsolateral prefrontal cortex—the seat of executive function—is the last part of the brain to develop; evolutionarily, it’s also the most recent. Most people complete that maturation in their mid-twenties, so these systems take a long time to come fully online and must then be integrated with other neural networks.

Do developmental phases like the second significant period of synaptic pruning in adolescence reflect more concrete hardware changes, as opposed to the cultural software changes that occur across a person’s life?

Robertson: I like your question, Scott. And the answer is yes.

Jacobsen: Yay.

Robertson: If someone were raised entirely in the wild—say, the fictional case of a boy raised by wolves—we would not expect them to develop what I call the modern self.

The self is a cultural construct. Children are taught to have a self; one key mechanism is language acquisition. For example, when a child cries and the caregiver says, “Is Bobby hungry?” that implicitly teaches the child that Bobby has internal states—needs, desires, and preferences. That is the beginning of selfhood.

Your point about adolescence is spot on. The self is not fully formed in early childhood. In many ways, individual development parallels cultural evolution. Adolescence—especially early adolescence—is about experimentation, identity formation, and exploration. Teenagers try out roles, test boundaries, and slowly determine, “This is who I am,” or, “No, that’s not me.”

We must be cautious about defining someone’s self prematurely during this construction phase. You cannot predict how it will turn out, and efforts to control that process can be harmful.

There’s research suggesting the human brain continues maturing until around age 25. Jokingly, maybe we should not let people vote until they’re 25—but of course, I can say that now that I’m well past that age.

In truth, development is highly individual. Some mature earlier, others later. And yes, building on your earlier point, there may be significant cultural differences in how and when the self develops. That’s an area ripe for further research.

Now, when I say the modern self developed and spread across all known cultures, there’s a practical reason: societies without individuals capable of forming modern selves could not compete with those that had them.

Jacobsen: What makes the modern self more competitive?

Robertson: Our sense of individuality.

In Christianity, for example, Scripture often exhorts individuals to “give up the self.” That very statement acknowledges the self’s existence and its power.

Such a sacrifice is required because the individual self can threaten collective stability. It challenges authority, tradition, and rigid social roles.

Jacobsen: That connects back to your earlier point—cultures that lack individuals with a modern self lose their competitive edge.

Robertson: Here’s the value of having a self.

In traditional cultures, individuals typically had an earlier form of self—defined primarily by their place in the collective. In response to threats or challenges, behaviours were guided by tribal memory, stories, and rigid social roles.

For example, if an enemy appeared, people would respond according to long-established patterns—based on age, gender, and status in the group. There was no need—or room—for improvisation.

But what happens when a new, unfamiliar situation arises—something the culture has not encountered before and for which there is no ritual?

In such cases, traditional cultures often turned to oracles—individuals capable of novel reasoning, that is, problem-solving. I suspect those early oracles possessed a more developed, volitional self, which is why they were trusted in the first place.

Similarly, in Hindu society, Brahmins were given a rigorous education, allowing them to cultivate modern selves capable of insight and judgment. But they were a small elite.

In many cultures, people who had developed such selves were respected and closely managed. They were given roles where they could contribute without disrupting the social order.

The self-concept eventually spread across all human societies because we are a nomadic, adaptive species. We move, we mix, we evolve.

Just look at our evolutionary history—we even interbred with Neanderthals.

We interact. I do not believe a human society has ever been so isolated that its members lacked a developed self. But if such a group exists—perhaps an uncontacted tribe deep in the Amazon—I would love to study them.

Jacobsen: When I attended the 69th Commission on the Status of Women at the United Nations, I participated in a session featuring Ambassador Bob Rae of Canada. The session focused on Indigenous communities and was led by Indigenous women.

Someone on the panel mentioned a group from an isolated region—possibly resembling the cultural isolation you described. Their account of getting to the UN was striking. If you asked me how I got there, I’d say something like: “I took a bus to the airport, flew to New York, took the train…” For them, before all of that began, it started with a canoe.

That was their standard form of transportation before reaching any conventional transit station. So, even in that case, I would be hard-pressed to believe they were entirely uncontacted or isolated in today’s world.

Robertson: I agree. I suspect such total isolation no longer exists.

