Interview with Sam Vaknin on Human-Machine Interface

2025-01-01

Author(s): Sam Vaknin and Scott Douglas Jacobsen

Publication (Outlet/Website): Noesis: The Journal of the Mega Society

Publication Date (yyyy/mm/dd): 2024/12

Scott Douglas Jacobsen: When did the first human-machine interactions truly begin in modern history, insofar as we understand technology now?

Dr. Sam Vaknin: When a man (or a woman) picked up a stone and threw it at a scavenger.

Jacobsen: How have technologies influenced the psycho-social makeup of human beings?

Vaknin: Technology fostered the delusion that every problem has a solution and the hubris that attends upon proving this contention somewhat true. We have learned to internalize technologies and render them our extensions, driving us deeper into fantastic paracosms, replete with populations of internal objects that represent cohorts of external devices and systems. We became dependent on technology and this dependency emerged as our default mode, leading us to prefer machines to other humans. 

Jacobsen: These technologies, especially contemporary ones, come out of smart people working hard. How are they, in a way, extensions of ourselves based on those smart people’s understanding of some principle and then applying this to ergonomic design? 

Vaknin: These “smart people” are not representative of humanity, not even remotely. They are a self-selecting sample of schizoid, mostly white, mostly male individuals. I am not sure why you limited your question to the least important and most neglected aspect of technology: ergonomic design, dictated by the very structure and functioning of the human body. There are other, much more crucial aspects of technology that reflect the specific mental health pathologies, idiosyncrasies, and eccentricities of engineers, coders, and entrepreneurs – rather than any aspect or dimension of being human.

Jacobsen: How are military applications showing this to be the case with drones and the like? Also, the eventual reductio ad absurdum of long-term war with all these technology innovations around autonomous war-robots seems increasingly apparent, when, in some hypothetical future, it’d be simply machines fighting machines for some geographic or resource squabble of some leaders. 

Vaknin: War is increasingly more democratized (terrorism and asymmetrical warfare, anyone?). It is also more remote controlled. But its main aim is still to kill people, combatants and civilians alike. Machines will never fight only other contraptions. War will never be reduced to a mechanized version of chess. Men, women, and children will always die in battle as conflict becomes ever more total. The repossession of resources requires the unmitigated annihilation of their erstwhile owners.

Jacobsen: Which societies – autocratic, theocratic, or democratic – are utilizing the technologies ‘interfacing’ with human beings more wisely?

Vaknin: Wisdom is in the eye of the beholder. There is no difference in the efficacy of deploying technologies between various societal organizational forms. All governments and collectives – autocratic, democratic, and theocratic, even ochlocratic or anarchic – leverage technology to secure and protect the regime and to buttress the narratives that motivate people to fight, work, consume, and mate. 

Jacobsen: I interviewed another smart guy, Dr. Evangelos Katsioulis, years ago. He, at that time – maybe now too – believed no limit existed to the integration between machines and humans. When will human mechanics be understood sufficiently that, as with the Ship of Theseus, human beings can function as human beings with 10%, 25%, 75% non-biological machine parts comprising their localized subjectivity and locomotion?

[Editors’ Note: https://plato.stanford.edu/entries/identity-time/#4]

Vaknin: Much sooner than we think. But there will always be a Resistance: a substantial portion of the population who will remain averse to cyborg integration and, like the Luddites of yesteryear, will seek to forbid such chimeras and destroy them.

In some rudimentary ways, we are already integrated with machines. Can you imagine your life without your devices? 

Jacobsen: How are interactions with technologies more intimately blurring the sense of self? 

Vaknin: Human brains are ill-equipped to tell the difference between reality and mimicry, simulation, or fantasy. Technologies are the reifications of the latter at the expense of the former. 

One of the crucial aspects of the putative “Self” or “Ego” is reality testing. As the boundaries blur, so will our selves. We are likely to acquire a hive mind, melded with all the technologies that surround us, seamlessly slipping in and out of dream states and metaverses. The “Self” will become the functional equivalent of our attire: changeable, disposable, replaceable.

As it is, I am an opponent of the counterfactual idea of the existence of some kernel, immutable core identity, self, or ego – see this video about IPAM, my Intrapsychic Activation Model. 

Jacobsen: How is the plurality of software and hardware available vastly outstripping the capacity of ordinary people to use it all, let alone understand it? Most seem drawn merely to video games, television, cell phones, and some social media platforms. That’s about it. There’s so, so much more around now.

Vaknin: There have always been technologies for the masses as well as for niche users. Where we broke off with the past is in multitasking, the simultaneous suboptimal use of multiple devices. 

Jacobsen: What is the ultimate point of human-machine ‘interfaces’? We ‘birthed’ electronic machines and information processing. What will be birthed from this union of biological mechanisms and alloyed assistants, playthings? 

Vaknin: As they get more integrated by the day, the point is to empower, enhance, and expand both symbiotic partners: humans and machines alike. It is a virtuous cycle which will lead to functional specialization with both parties focused on what they do best. 

Still, if humans fail to bake Asimov-like rules into their automata, the potential for conflict is there, as artificial intelligence becomes more sentient and intelligent and prone to passing the Turing Test with flying colors. In short: indistinguishable from us, except with regards to its considerably more potent processing prowess. 

Popular culture reflected this uncanny valley: the growing unease with android robots, first postulated by Masahiro Mori, the Japanese roboticist, in 1970. 

The movie I, Robot is a muddled affair. It relies on shoddy pseudo-science and a general sense of unease that artificial (non-carbon-based) intelligent lifeforms seem to provoke in us. But it goes no deeper than a comic book treatment of the important themes that it broaches. I, Robot is just another – and relatively inferior – entry in a long line of far better movies, such as Blade Runner and Artificial Intelligence.

Sigmund Freud said that we have an uncanny reaction to the inanimate. This is probably because we know that – pretensions and layers of philosophizing aside – we are nothing but recursive, self-aware, introspective, conscious machines. Special machines, no doubt, but machines all the same. 

[Editors’ Note: https://web.mit.edu/allanmc/www/freud1.pdf 

Cf. https://www.sas.upenn.edu/~cavitch/pdf-library/Freud_Uncanny.pdf]

Consider the James Bond movies. They constitute a decades-spanning gallery of human paranoia. Villains change: communists, neo-Nazis, media moguls. But one kind of villain is a fixture in this psychodrama, in this parade of human phobias: the machine. James Bond always finds himself confronted with hideous, vicious, malicious machines and automata. 

It was precisely to counter this wave of unease, even terror, irrational but all-pervasive, that Isaac Asimov, the late sci-fi writer (and scientist), invented the Three Laws of Robotics: 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

Many have noticed the lack of consistency and, therefore, the inapplicability of these laws when considered together. 

First, they are not derived from any coherent worldview or background. To be properly implemented and to avoid their interpretation in a potentially dangerous manner, the robots in which they are embedded must be equipped with reasonably comprehensive models of the physical universe and of human society. 

Without such contexts, these laws soon lead to intractable paradoxes (experienced as a nervous breakdown by one of Asimov’s robots). Conflicts are ruinous in automata based on recursive functions (Turing machines), as all robots are. Gödel pointed at one such self-destructive paradox in the Principia Mathematica, ostensibly a comprehensive and self-consistent logical system. It was enough to discredit the whole magnificent edifice constructed by Russell and Whitehead over a decade. 
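The strict priority ordering of the Three Laws, and the kind of deadlock the text describes, can be sketched as a simple rule filter. This is a hypothetical illustration only – the function name, parameters, and encoding of "harm" are my assumptions, not anything from Asimov or any real robotics system:

```python
# A minimal sketch of Asimov's Three Laws as a strictly prioritized rule
# filter. All names and parameters here are hypothetical illustrations.

def permitted(action, *, harms_human, inaction_harms_human,
              ordered_by_human, self_destructive):
    """Return True if the action is allowed under the Three Laws."""
    # First Law: a robot may not injure a human being, nor, through
    # inaction, allow a human being to come to harm.
    if harms_human:
        return False
    if inaction_harms_human and action == "do_nothing":
        return False
    # Second Law: obey human orders, except where that conflicts with
    # the First Law (already filtered above).
    if ordered_by_human:
        return True
    # Third Law: protect own existence, unless that conflicts with the
    # First or Second Laws (already filtered above).
    if self_destructive:
        return False
    return True

# The deadlock the text mentions: when both acting and doing nothing
# lead to human harm, no action is permitted at all.
print(permitted("do_nothing", harms_human=False, inaction_harms_human=True,
                ordered_by_human=False, self_destructive=False))  # False
print(permitted("intervene", harms_human=True, inaction_harms_human=True,
                ordered_by_human=False, self_destructive=False))  # False
```

The point of the sketch is that a flat priority ordering gives no escape hatch: once every available action trips the First Law, the evaluator simply rejects everything, which is the "nervous breakdown" scenario.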

Some argue against this and say that robots need not be automata in the classical, Church-Turing, sense. That they could act according to heuristic, probabilistic rules of decision making. There are many other types of functions (non-recursive) that can be incorporated in a robot, they remind us. 

True, but then, how can one guarantee that the robot’s behavior is fully predictable? How can one be certain that robots will fully and always implement the three laws? Only recursive systems are predictable in principle, though at times their complexity makes prediction impossible in practice. 

An immediate question springs to mind: HOW will a robot identify a human being? Surely, in a future of perfect androids, constructed of organic materials, no superficial, outer scanning will suffice. Structure and composition will not be sufficient differentiating factors. 

There are two ways to settle this very practical issue: one is to endow the robot with the ability to conduct a Converse Turing Test (to separate humans from other life forms) – the other is to somehow “barcode” all the robots by implanting some remotely readable signaling device inside them (such as an RFID – Radio Frequency ID – chip). Both present additional difficulties.
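The two identification strategies can be combined into one decision routine: check for an implanted tag first, and fall back to a behavioral screen when no tag is found. Everything in this sketch – the registry, the probe, the function names – is a hypothetical stand-in, not a real protocol:

```python
# Hypothetical sketch of the two identification strategies described
# above: a remotely readable "barcode" (RFID-style tag) check, falling
# back to a behavioral (Converse Turing) screen. All names are
# illustrative assumptions.

KNOWN_ROBOT_TAGS = {"RBT-0001", "RBT-0002"}  # hypothetical tag registry

def read_tag(entity):
    """Stand-in for a remote RFID read; returns a tag string or None."""
    return entity.get("tag")

def behavioral_screen(entity):
    """Stand-in for a Converse Turing Test: probes that humans and
    androids are assumed to answer differently."""
    return entity.get("passes_human_probe", False)

def is_human(entity):
    tag = read_tag(entity)
    if tag in KNOWN_ROBOT_TAGS:
        return False                      # barcoded as a robot
    return behavioral_screen(entity)      # fall back to behavioral test

print(is_human({"tag": "RBT-0001"}))            # False
print(is_human({"passes_human_probe": True}))   # True
```

The sketch also makes the difficulties visible: the tag check fails open for any robot missing from the registry, and the behavioral fallback is only as reliable as the probe itself.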

License & Copyright

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. © Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen is strictly prohibited; excerpts and links must use full credit to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.
