Sam Vaknin: The Psychology of Human-Machine Interfaces

2024-06-05

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2024/06/05

Sam Vaknin is the author of Malignant Self-love: Narcissism Revisited as well as many other books and ebooks about topics in psychology, relationships, philosophy, economics, international affairs, and award-winning short fiction. He is a former Visiting Professor of Psychology, Southern Federal University, Rostov-on-Don, Russia, and is on the faculty of CIAPS (Commonwealth Institute for Advanced and Professional Studies). He is a columnist in Brussels Morning, was the Editor-in-Chief of Global Politician, and served as a columnist for Central Europe Review, PopMatters, eBookWeb, and Bellaonline, and as a United Press International (UPI) Senior Business Correspondent. He was the editor of the mental health and Central East Europe categories in The Open Directory and Suite101. His YouTube channels garnered 80,000,000 views and 405,000 subscribers. Visit Sam’s Web site: http://www.narcissistic-abuse.com.

Scott Douglas Jacobsen: When did the first human-machine interactions truly begin in modern history insofar as we take technology now?

Dr. Sam Vaknin: When a man (or a woman) picked up a stone and threw it at a scavenger.

Jacobsen: How have technologies influenced the psycho-social makeup of human beings?

Vaknin: Technology fostered the delusion that every problem has a solution and the hubris that attends upon proving this contention somewhat true. We have learned to internalize technologies and render them our extensions, driving us deeper into fantastic paracosms, replete with populations of internal objects that represent cohorts of external devices and systems. We became dependent on technology and this dependency emerged as our default mode, leading us to prefer machines to other humans.

Jacobsen: These technologies, especially contemporary ones, come out of smart people working hard. How are they, in a way, extensions of ourselves based on those smart people’s understanding of some principle and then applying this to ergonomic design?

Vaknin: These “smart people” are not representative of humanity, not even remotely. They are a self-selecting sample of schizoid, mostly white, mostly men. I am not sure why you limited your question to the least important and most neglected aspect of technology: ergonomic design, dictated by the very structure and functioning of the human body. There are other, much more crucial aspects of technology that reflect the specific mental health pathologies, idiosyncrasies, and eccentricities of engineers, coders, and entrepreneurs – rather than any aspect or dimension of being human.

Jacobsen: How are military applications showing this to be the case with drones and the like? Also, the eventual reductio ad absurdum of long-term war with all these technology innovations around autonomous war-robots seems increasingly apparent, when, in some hypothetical future, it’d be simply machines fighting machines for some geographic or resource squabble of some leaders.

Vaknin: War is increasingly democratized (terrorism and asymmetrical warfare, anyone?). It is also more remote-controlled. But its main aim is still to kill people, combatants and civilians alike. Machines will never fight only other contraptions. War will never be reduced to a mechanized version of chess. Men, women, and children will always die in battle as conflict becomes ever more total. The repossession of resources requires the unmitigated annihilation of their erstwhile owners.

Jacobsen: Are autocratic, theocratic, or democratic societies utilizing the technologies ‘interfacing’ with human beings more wisely – which one?

Vaknin: Wisdom is in the eye of the beholder. There is no difference in the efficacy of deploying technologies between various societal organizational forms. All governments and collectives – autocratic, democratic, and theocratic, even ochlocratic or anarchic – leverage technology to secure and protect the regime and to buttress the narratives that motivate people to fight, work, consume, and mate.

Jacobsen: I interviewed another smart guy, Dr. Evangelos Katsioulis, years ago. He, at that time – maybe now too – believed no limit existed to the integration between machines and humans. When will human mechanics be understood sufficiently that, as with the Ship of Theseus, human beings can function as human beings with 10%, 25%, or 75% non-biological machine parts comprising their localized subjectivity and locomotion?

Vaknin: Much sooner than we think. But there will always be a Resistance: a substantial portion of the population will remain averse to cyborg integration and, like the Luddites of yesteryear, will seek to forbid such chimeras and destroy them.

In some rudimentary ways, we are already integrated with machines. Can you imagine your life without your devices?

Jacobsen: How are interactions with technologies more intimately blurring the sense of self?

Vaknin: Human brains are ill-equipped to tell the difference between reality and mimicry, simulation, or fantasy. Technologies are the reifications of the latter at the expense of the former.

One of the crucial aspects of the putative “Self” or “Ego” is reality testing. As the boundaries blur, so will our selves. We are likely to acquire a hive mind, melded with all the technologies that surround us, seamlessly slipping in and out of dream states and metaverses. The “Self” will become the functional equivalent of our attire: changeable, disposable, replaceable.

As it is, I am an opponent of the counterfactual idea of the existence of some kernel, immutable core identity, self, or ego – see this video about IPAM, my Intrapsychic Activation Model.

Jacobsen: How is the plurality of software and hardware available vastly outstripping the capacity of ordinary people to use them all, let alone understand them? Most seem drawn merely to video games, television, cell phones, and some social media platforms. That’s about it. There’s so, so much more around now.

Vaknin: There have always been technologies for the masses as well as for niche users. Where we broke off with the past is in multitasking, the simultaneous suboptimal use of multiple devices.

Jacobsen: What is the ultimate point of human-machine ‘interfaces’? We ‘birthed’ electronic machines and information processing. What will be birthed from this union of biological mechanisms and alloyed assistants, playthings?

Vaknin: As they get more integrated by the day, the point is to empower, enhance, and expand both symbiotic partners: humans and machines alike. It is a virtuous cycle which will lead to functional specialization with both parties focused on what they do best.

Still, if humans fail to bake Asimov-like rules into their automata, the potential for conflict is there, as artificial intelligence becomes more sentient and intelligent and prone to passing the Turing Test with flying colors. In short: indistinguishable from us, except with regard to its considerably more potent processing prowess.

Popular culture reflected this uncanny valley: the growing unease with android robots, first postulated by Masahiro Mori, the Japanese roboticist, in 1970.

The movie “I, Robot” is a muddled affair. It relies on shoddy pseudo-science and a general sense of unease that artificial (non-carbon based) intelligent life forms seem to provoke in us. But it goes no deeper than a comic book treatment of the important themes that it broaches. I, Robot is just another – and relatively inferior – entry in a long line of far better movies, such as “Blade Runner” and “Artificial Intelligence”.

Sigmund Freud said that we have an uncanny reaction to the inanimate. This is probably because we know that – pretensions and layers of philosophizing aside – we are nothing but recursive, self-aware, introspective, conscious machines. Special machines, no doubt, but machines all the same.

Consider the James Bond movies. They constitute a decades-spanning gallery of human paranoia. Villains change: communists, neo-Nazis, media moguls. But one kind of villain is a fixture in this psychodrama, in this parade of human phobias: the machine. James Bond always finds himself confronted with hideous, vicious, malicious machines and automata.

It was precisely to counter this wave of unease, even terror, irrational but all-pervasive, that Isaac Asimov, the late sci-fi writer (and scientist), invented the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Many have noticed the lack of consistency and, therefore, the inapplicability of these laws when considered together.

First, they are not derived from any coherent worldview or background. To be properly implemented and to avoid their interpretation in a potentially dangerous manner, the robots in which they are embedded must be equipped with reasonably comprehensive models of the physical universe and of human society.

Without such contexts, these laws soon lead to intractable paradoxes (experienced as a nervous breakdown by one of Asimov’s robots). Conflicts are ruinous in automata based on recursive functions (Turing machines), as all robots are. Gödel pointed at one such self-destructive paradox in the “Principia Mathematica”, ostensibly a comprehensive and self-consistent logical system. It was enough to discredit the whole magnificent edifice constructed by Russell and Whitehead over a decade.

Some argue against this and say that robots need not be automata in the classical, Church-Turing, sense – that they could act according to heuristic, probabilistic rules of decision-making. There are many other types of functions (non-recursive) that can be incorporated in a robot, they remind us.

True, but then, how can one guarantee that the robot’s behavior is fully predictable? How can one be certain that robots will fully and always implement the three laws? Only recursive systems are predictable in principle, though, at times, their complexity makes prediction impossible in practice.

An immediate question springs to mind: HOW will a robot identify a human being? Surely, in a future of perfect androids, constructed of organic materials, no superficial, outer scanning will suffice. Structure and composition will not be sufficient differentiating factors.

There are two ways to settle this very practical issue: one is to endow the robot with the ability to conduct a Converse Turing Test (to separate humans from other life forms) – the other is to somehow “barcode” all the robots by implanting some remotely readable signaling device inside them (such as an RFID – Radio Frequency ID – chip). Both present additional difficulties.

The second solution will prevent the robot from positively identifying humans. It will be able to identify with any certainty only robots (or humans with such implants). This is ignoring, for discussion’s sake, defects in manufacturing or loss of the implanted identification tags. And what if a robot were to get rid of its tag? Will this also be classified as a “defect in manufacturing”?

In any case, robots will be forced to make a binary choice. They will be compelled to classify one type of physical entities as robots – and all the others as “non-robots”. Will non-robots include monkeys and parrots? Yes, unless the manufacturers equip the robots with digital or optical or molecular representations of the human figure (masculine and feminine) in varying positions (standing, sitting, lying down). Or unless all humans are somehow tagged from birth.
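A hedged illustration of that binary classification (my own sketch, not anything from the text): a tag reader can positively confirm only the presence of an implant, so humans, monkeys, and parrots all fall into the same “non-robot” bucket.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    has_rfid_tag: bool  # hypothetical remotely readable implant

def classify(entity: Entity) -> str:
    # The robot can positively identify only tagged robots; it learns
    # nothing positive about anything untagged.
    return "robot" if entity.has_rfid_tag else "non-robot"

for e in (Entity("android", True), Entity("human", False), Entity("parrot", False)):
    print(e.name, "->", classify(e))  # human and parrot are indistinguishable
```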

These are cumbersome and repulsive solutions and not very effective ones. No dictionary of human forms and positions is likely to be complete. There will always be the odd physical posture which the robot would find impossible to match to its library. A human discus thrower or swimmer may easily be classified as “non-human” by a robot – and so might amputees.

What about administering a converse Turing Test?

This is even more seriously flawed. It is possible to design a test, which robots will apply to distinguish artificial life forms from humans. But it will have to be non-intrusive and not involve overt and prolonged communication. The alternative is a protracted teletype session, with the human concealed behind a curtain, after which the robot will issue its verdict: the respondent is a human or a robot. This is unthinkable.

Moreover, the application of such a test will “humanize” the robot in many important respects. Humans identify other humans because they are human, too. This is called empathy. A robot will have to be somewhat human to recognize another human being; it takes one to know one, as the saying (rightly) goes.

Let us assume that by some miraculous way the problem is overcome and robots unfailingly identify humans. The next question pertains to the notion of “injury” (still in the First Law). Is it limited only to physical injury (the elimination of the physical continuity of human tissues or of the normal functioning of the human body)?

Should “injury” in the First Law encompass the no less serious mental, verbal and social injuries (after all, they are all known to have physical side effects which are, at times, no less severe than direct physical “injuries”)? Is an insult an “injury”? What about being grossly impolite, or psychologically abusive? Or offending religious sensitivities, being politically incorrect – are these injuries? The bulk of human (and, therefore, inhuman) actions actually offend one human being or another, have the potential to do so, or seem to be doing so.

Consider surgery, driving a car, or investing money in the stock exchange. These “innocuous” acts may end in a coma, an accident, or ruinous financial losses, respectively. Should a robot refuse to obey human instructions which may result in injury to the instruction-givers?

Consider a mountain climber – should a robot refuse to hand him his equipment lest he falls off a cliff in an unsuccessful bid to reach the peak? Should a robot refuse to obey human commands pertaining to the crossing of busy roads or to driving (dangerous) sports cars?

Which level of risk should trigger robotic refusal and even prophylactic intervention? At which stage of the interactive man-machine collaboration should it be activated? Should a robot refuse to fetch a ladder or a rope to someone who intends to commit suicide by hanging himself (that’s an easy one)?

Should he ignore an instruction to push his master off a cliff (definitely), help him climb the cliff (less assuredly so), drive him to the cliff (maybe so), help him get into his car in order to drive him to the cliff… Where do the responsibility and obeisance bucks stop?

Whatever the answer, one thing is clear: such a robot must be equipped with more than a rudimentary sense of judgment, with the ability to appraise and analyse complex situations, to predict the future and to base his decisions on very fuzzy algorithms (no programmer can foresee all possible circumstances). To me, such a “robot” sounds much more dangerous (and humanoid) than any recursive automaton which does NOT include the famous Three Laws.

Moreover, what, exactly, constitutes “inaction”? How can we set apart inaction from failed action or, worse, from an action which failed by design, intentionally? If a human is in danger and the robot tries to save him and fails – how could we determine to what extent it exerted itself and did everything it could?

How much of the responsibility for a robot’s inaction or partial action or failed action should be imputed to the manufacturer – and how much to the robot itself? When a robot decides finally to ignore its own programming – how are we to gain information regarding this momentous event? Outside appearances can hardly be expected to help us distinguish a rebellious robot from a lackadaisical one.

The situation gets much more complicated when we consider states of conflict.

Imagine that a robot is obliged to harm one human in order to prevent him from hurting another. The Laws are absolutely inadequate in this case. The robot should either establish an empirical hierarchy of injuries – or an empirical hierarchy of humans. Should we, as humans, rely on robots or on their manufacturers (however wise, moral and compassionate) to make this selection for us? Should we abide by their judgment which injury is the more serious and warrants an intervention?

A summary of the Asimov Laws would give us the following “truth table”:

A robot must obey human commands except if:

  1. Obeying them is likely to cause injury to a human, or
  2. Obeying them will let a human be injured.

A robot must protect its own existence with three exceptions:

  1. That such self-protection is injurious to a human;
  2. That such self-protection entails inaction in the face of potential injury to a human;
  3. That such self-protection results in robot insubordination (failing to obey human instructions).

Trying to create a truth table based on these conditions is the best way to demonstrate the problematic nature of Asimov’s idealized yet highly impractical world.

Here is an exercise:

Imagine a situation (consider the example below or one you make up) and then create a truth table based on the above five conditions. In such a truth table, “T” would stand for “compliance” and “F” for “non-compliance”.

Example:

A radioactivity monitoring robot malfunctions. If it self-destructs, its human operator might be injured. If it does not, its malfunction will equally seriously injure a patient dependent on its performance.
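A minimal sketch of the exercise (my own construction, not Asimov’s or Vaknin’s): enumerate the five exception conditions above and list the rows in which neither obedience nor self-preservation remains permissible – the deadlocks the malfunctioning monitor exemplifies.

```python
from itertools import product

# The five exception conditions, in the order listed above.
LABELS = [
    "obeying injures a human",
    "obeying lets a human be injured",
    "self-protection injures a human",
    "self-protection is inaction before injury",
    "self-protection is insubordination",
]

def truth_table():
    # For each combination, T = compliance possible, F = compliance blocked.
    for row in product([True, False], repeat=5):
        may_obey = not (row[0] or row[1])
        may_self_protect = not (row[2] or row[3] or row[4])
        yield row, may_obey, may_self_protect

# Print only the deadlocked rows: both directives blocked at once,
# as with the malfunctioning radioactivity monitor above.
for row, obey, protect in truth_table():
    if not obey and not protect:
        active = [label for label, v in zip(LABELS, row) if v]
        print("deadlock when:", "; ".join(active))
```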

One of the possible solutions is, of course, to introduce gradations, a probability calculus, or a utility calculus. As phrased by Asimov, the rules and conditions are of a threshold, yes-or-no, take-it-or-leave-it nature. But if robots were to be instructed to maximize overall utility, many borderline cases would be resolved.
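To make the suggestion concrete, here is a hedged sketch of such a utility calculus applied to the monitor example; the probabilities and harm scores are invented for illustration, not taken from the text.

```python
def expected_harm(p_injury: float, severity: float) -> float:
    # Gradation instead of a yes/no threshold: probability times severity.
    return p_injury * severity

# Invented numbers for the radioactivity-monitor dilemma.
options = {
    "self-destruct": expected_harm(p_injury=0.3, severity=6.0),  # operator at risk
    "keep running":  expected_harm(p_injury=0.9, severity=7.0),  # patient at risk
}

choice = min(options, key=options.get)
print(f"least expected harm: {choice} ({options[choice]:.2f})")
```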

Still, even the introduction of heuristics, probability, and utility does not help us resolve the dilemma in the example above. Life is about inventing new rules on the fly, as we go, and as we encounter new challenges in a kaleidoscopically metamorphosing world. Robots with rigid instruction sets are ill suited to cope with that.

At the risk of becoming abstruse, two comments:

1. Gödel’s Theorems

The work of an important, though eccentric, Czech-Austrian mathematical logician, Kurt Gödel (1906-1978), dealt with the completeness and consistency of logical systems. A passing acquaintance with his two theorems would have saved the architects of such robotic systems a lot of time.

Gödel’s First Incompleteness Theorem states that every consistent axiomatic logical system, sufficient to express arithmetic, contains true but unprovable (“not decidable”) sentences. In certain cases (when the system is omega-consistent), both such sentences and their negations are unprovable. The system is consistent and true – but not “complete” because not all its sentences can be decided as true or false by either being proved or by being refuted.

The Second Incompleteness Theorem is even more earth-shattering. It says that no consistent formal logical system can prove its own consistency. The system may be complete – but then we are unable to show, using its axioms and inference laws, that it is consistent.

In other words, a computational system can either be complete and inconsistent – or consistent and incomplete. By trying to construct a system both complete and consistent, a robotics engineer would run afoul of Gödel’s theorem.
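In shorthand (a standard formulation, not a quotation from Gödel), for a consistent, recursively axiomatizable theory T that interprets arithmetic:

```latex
% First theorem: some sentence G_T is neither provable nor refutable in T
% (for the refutation half, omega-consistency is assumed, as noted above).
% Second theorem: T cannot prove its own consistency, where Con(T) is the
% arithmetized statement "T is consistent".
\[
\text{(First)}\quad \exists\, G_T:\; T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T
\qquad
\text{(Second)}\quad T \nvdash \mathrm{Con}(T)
\]
```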

2. Turing Machines

In 1936 an American (Alonzo Church) and a Briton (Alan M. Turing) published independently (as is often the case in science) the basics of a new branch in Mathematics (and logic): computability or recursive functions (later to be developed into Automata Theory).

The authors confined themselves to dealing with computations which involved “effective” or “mechanical” methods for finding results (which could also be expressed as solutions (values) to formulae). These methods were so called because they could, in principle, be performed by simple machines (or human-computers or human-calculators, to use Turing’s unfortunate phrases). The emphasis was on finiteness: a finite number of instructions, a finite number of symbols in each instruction, a finite number of steps to the result. This is why these methods were usable by humans without the aid of an apparatus (with the exception of pencil and paper as memory aids). Moreover: no insight or ingenuity were allowed to “interfere” or to be part of the solution seeking process.

What Church and Turing did was to construct a set of all the functions whose values could be obtained by applying effective or mechanical calculation methods. Turing went further down Church’s road and designed the “Turing Machine” – a machine which can calculate the values of all the functions whose values can be found using effective or mechanical methods. Thus, the program running the TM (=Turing Machine in the rest of this text) was really an effective or mechanical method. For the initiated readers: Church solved the decision-problem for propositional calculus and Turing proved that there is no solution to the decision problem relating to the predicate calculus. Put more simply, it is possible to “prove” the truth value (or the theorem status) of an expression in the propositional calculus – but not in the predicate calculus. Later it was shown that many functions (even in number theory itself) were not recursive, meaning that they could not be solved by a Turing Machine.
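The canonical example of such a non-recursive function – not named in the text, but the standard one – is the halting function. A Python sketch of the diagonal argument showing why no Turing Machine can compute it:

```python
# Suppose, for contradiction, that a total decision procedure halts(program, data)
# existed. The diagonal program below would then halt exactly when halts()
# says it does not -- so no such procedure can exist.
def halts(program: str, data: str) -> bool:
    raise NotImplementedError("no such total decision procedure can exist")

def diagonal(program_source: str) -> None:
    if halts(program_source, program_source):
        while True:   # loop forever precisely when halts() predicts halting
            pass
    # otherwise halt immediately, contradicting halts() the other way
```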

No one has succeeded in proving that a function must be recursive in order to be effectively calculable. This is (as Post noted) a “working hypothesis” supported by overwhelming evidence. We do not know of any effectively calculable function which is not recursive; by designing new TMs from existing ones we can obtain new effectively calculable functions from existing ones; and TM computability stars in every attempt to understand effective calculability (or these attempts are reducible or equivalent to TM-computable functions).

The Turing Machine itself, though abstract, has many “real world” features. It is a blueprint for a computing device with one “ideal” exception: its unbounded memory (the tape is infinite). Despite its hardware appearance (a read/write head which scans a one-dimensional tape inscribed with ones and zeroes, etc.) – it is really a software application, in today’s terminology. It carries out instructions, reads and writes, counts and so on. It is an automaton designed to implement an effective or mechanical method of solving functions (determining the truth value of propositions). If the transition from input to output is deterministic, we have a classical automaton – if it is determined by a table of probabilities – we have a probabilistic automaton.
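A minimal sketch of such a machine (my own toy example, not from the text): a read/write head over a tape of ones and zeroes, driven by a finite transition table. This one flips every bit and halts at the first blank.

```python
def run_tm(tape, transitions, state="start"):
    cells = dict(enumerate(tape))   # unbounded tape simulated with a dict
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")                  # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_tm(list("0110"), flip_bits))   # ['1', '0', '0', '1', '_']
```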

With time and hype, the limitations of TMs were forgotten. No one can say that the Mind is a TM because no one can prove that it is engaged in solving only recursive functions. We can say that TMs can do whatever digital computers are doing – but not that digital computers are TMs by definition. Maybe they are – maybe they are not. We do not know enough about them and about their future.

Moreover, the demand that recursive functions be computable by an UNAIDED human seems to restrict possible equivalents. Inasmuch as computers emulate human computation (Turing did believe so when he helped construct the ACE, at the time the fastest computer in the world) – they are TMs. Functions whose values are calculated by AIDED humans with the contribution of a computer are still recursive. It is when humans are aided by other kinds of instruments that we have a problem. If we use measuring devices to determine the values of a function it does not seem to conform to the definition of a recursive function. So, we can generalize and say that functions whose values are calculated by an AIDED human could be recursive, depending on the apparatus used and on the lack of ingenuity or insight (the latter being, anyhow, a weak, non-rigorous requirement which cannot be formalized).

Jacobsen: Thank you for the opportunity and your time, Sam.

Vaknin: Thank you as ever, Scott.

License

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.

Copyright

© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
