
Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11)













Publisher Founding: December 1, 2014

Web Domain: 

Location: Fort Langley, Township of Langley, British Columbia, Canada

Journal: In-Sight: Independent Interview-Based Journal

Journal Founding: August 2, 2012

Frequency: Three (3) Times Per Year

Review Status: Non-Peer-Reviewed

Access: Electronic/Digital & Open Access

Fees: None (Free)

Volume Numbering: 11

Issue Numbering: 1

Section: A

Theme Type: Idea

Theme Premise: “Outliers and Outsiders”

Theme Part: 26

Formal Sub-Theme: None.

Individual Publication Date: December 8, 2022

Issue Publication Date: January 1, 2023

Author(s): Scott Douglas Jacobsen

Interviewer(s): Scott Douglas Jacobsen

Interviewee(s): Tor Arne Jørgensen

Word Count: 4,005

Image Credit: Tor Arne Jørgensen

International Standard Serial Number (ISSN): 2369-6885

*Please see the footnotes, bibliography, and citations, after the interview.*


Tor Arne Jørgensen is a member of 50+ high-IQ societies, including the World Genius Directory, NOUS High IQ Society, and 6N High IQ Society, just to name a few. In 2019, Tor Arne was also nominated for the World Genius Directory 2019 Genius of the Year – Europe. He is also the designer of a high-range test site. He discusses: machine learning apparatuses; a natural reaction; the fears; the idea of genius; and A.I.

Keywords: AGI, AI, humanity, intelligence, machine learning, learning systems, life, the future, Technological Singularity, Tor Arne Jørgensen.


Scott Douglas Jacobsen: Given the machine learning apparatuses before us, and an increase in comprehension of different biological systems within human beings, how might biological systems inform machine learning systems?

Tor Arne Jorgensen: When it comes to learning through type-designed programming, in terms of artificial intelligence, it means putting in special directives that the machines must follow as pre-programmed. We humans are constantly learning, in the sense that we create new layers within our mapping models, which in turn can be further built upon. We steadily and constantly acquire new knowledge, which we then put into practice by testing this newly mapped knowledge; this is the very basis for intelligence. Today’s machines and their artificial intelligence do not acquire new knowledge on their own; new knowledge must be programmed in by us humans to achieve improved functionality of these machines. This does not happen automatically, as with us humans.

When it comes to AGI, new knowledge must also be programmed in from the start. The implementation of this new knowledge must then be formed on the same basis as our own intelligence: through the mapping which takes place in the neocortex, where layers upon layers of new and improved knowledge are built, which in turn can be implemented through new and improved cognitive functions.

AGI can only become self-regulating, in the sense of being considered as equally cognitively evolving, when our understanding of how our own brain works is completed in full. That means that only when all the pieces can be put together into a clear and understandable format can our biological imprint be transferred into a machine intelligence on par with our own understanding of the terminology surrounding intelligence. Summarized as follows: today, most of the biological input is given through pure programming; man feeds the machine with updated commands, in order to achieve the desired improved function of the machines.

This will not change drastically until our own understanding of how our own brain works is complete, with reference to the neocortex and its intelligence parameters, i.e., a complete understanding of all the neocortex’s components. Only then can this be transferred, in any or all senses, over to the machines. And then the machines can finally implement some kind of formative, self-regulatory actions through their pre-understood state of evolving mantras.

Jacobsen: To purport an obsolescence to human beings posits an intrinsic function or purpose, a teleology, to human beings in the universe, why is this a natural reaction to an emergence of digital algorithms in the era of computers and an easy analogy with human cognitive processing? Those with a teleological philosophy and a non-teleological philosophy make the same claims in this sense. In that, “Human beings will become obsolete or outmoded.” We know children tend towards animistic and teleological explanations of the world. Does this tendency seem more innate? Although, as people mature, they tend to show an increased jettisoning of these assumptions, not in all or most cases, but an increasing statistical trend, certainly. One can observe these tendencies in proposals of a Technological Singularity or a technology particularity; a point at which machines match human intelligence.

Jorgensen: This is probably where I must question myself to a certain extent, whether these claims could have the same fundamental foundation today as before, with reference to the introductive angle of the question’s formulation. We humans are biologically based, and thus forever reinvesting in and taking initiatives for improved cognitive enrichment, made real by our acute ability to acquire new knowledge and to apply this new knowledge onto the old, so as to create an even greater spectrum of knowledge.

We do not need to be programmed by an external entity for this acquisition; it happens by itself all the time. We are biological beings who are constantly developing our basis for new cognitive updating of our surroundings through the frames of reference discussed in Jeff Hawkins’ prize-acclaimed book, A Thousand Brains, where this is pointed out in reference to the brain’s neocortex and its implementation of intelligence. The acquisition of new knowledge is used as, and creates, the basis for further new knowledge: the very foundation of intelligence in every sense.

That we humans will be made outdated by AGI will probably not happen. Then again, yes, it must be said that at some point AGI will be able to match us intellectually, and certainly outperform us in several aspects. However, it should also be mentioned that this will not happen until AGI is an exemplary copy of our own complete understanding of our brain, where all parts of the brain’s fragmented knowledge can be put together into a total, overall understanding of how the brain works.

When we come to this conclusion, and we will in due time, of that I am confident, then who can say what kind of knowledge we will then behold, as new fields with hitherto not understood quantifiable qualities can further expand our own intelligence quantum, far beyond what we today are able to understand. Furthermore, AGI can only be equated with human intelligence when this total understanding of how the brain works is completed; it will then finally be in a state of transferable form over to the AGI unit, enabling it to form its own definable, evolving statutes of self-acquired cognitive knowledge, onto which it can again be further built.

As long as the machines build all their base knowledge onto what we humans have evolved, we will not be seen as an endangered race, but rather as a race to be reckoned with, of great importance to study more, and maybe to form an alliance with based on mutual acceptance, in the quest for a greater understanding of how the universe works.

Jacobsen: If the fears are shown true, as in a Terminator future or something akin to Blade Runner, then, in some sense, human beings become either extinct or non-dominant as the prime information processing entities on the surface of the Earth. If the fears are shown false, then co-existence seems more likely with evolved intelligences – human beings and other mammals – and constructed intelligences – machines or electronically ‘floating’ intelligences in the ‘cloud’ – functioning independently and interdependently as necessary. Perhaps, some synthesis of these two visions may be the real future. What seem like the more probable outcomes for the advance of technology, at present, and humanity?

Jorgensen: Portraying one scenario over the other will present many challenges, as neither is a desired outcome. What is meant by this: if one attempts to look at the first scenario, where we humans are exterminated in favor of the machines, as in the movie Terminator, with the machines and their desire to rid the world of humans, and, to add, animals, indeed all biological material, would the next move not then be to end the very biological diversity that defines all life on our own planet? Or it could just be that humans pose a threat which is then isolated to the advantage of the machines. But as the Terminator films portray, all land life goes extinct; perhaps just a calculated miss, or a well-planned calculation to enlarge the worldview of humans and their role on Earth. Would that not, again, mean that all life on Earth stands and falls on the very existence of humanity? “Without us, there is nothing.”

What then will the role of the machines consist of when this extinction is completed? Will the machines then create a better and more shaped world with a greater diversity? What purpose would this have for the machines? They are the ruling ones; the way forward will then not lie in any intention that the machines were implemented with the meaning of something more in the long run.

Alas, the result would be to terraform our planet, purposely adapting it to their (the machines’) needs so as to ensure their own existence, and this may not be limited to our own planet, but extend beyond it: a race of planet eaters. It can also be asked whether the machines will use the material that we humans have used as a basis for our own evolutionary development … What is certain is that all concluded security protocols will be broken, and the principle of equality, where established mutual foundations between humans and machines exist, will cease, broken by and for one party’s desire for world dominance. The machines will then, in principle, sadly still carry our stamp as to the lust for power, an intimate desire, consolidated in the art of waging war, something so human.

I would like for you to consider three factors that may or may not pose a global extinction risk for humanity, and will by that refer to what the acclaimed neuroscientist and author Jeff Hawkins, in his recent book from 2021, A Thousand Brains, has listed as follows, quote:

“But as we go forward and debate the risks versus the rewards of machine intelligence, I recommend acknowledging the distinction between three things: replication, motivations, and intelligence.” (Hawkins, p.169).

  • Replication: Anything that is capable of self-replication is dangerous. Humanity could be wiped out by a biological virus. A computer virus could bring down the internet. Intelligent machines will not have the ability or desire to self-replicate unless humans go to great lengths to make it so.
  • Motivations: Biological motivations and drives are a consequence of evolution. Evolution discovered that animals with certain drives replicated better than other animals. A machine that is not replicating or evolving will not suddenly develop a desire to, say, dominate or enslave others.
  • Intelligence: Of the three, intelligence is the most benign. An intelligent machine will not on its own start to self-replicate, nor will it spontaneously develop drives and motivations. We will have to go out of our way to design in the motivations we want intelligent machines to have. But unless intelligent machines are self-replicating and evolving, they will not, on their own, represent an existential risk to humanity.

(Hawkins, p.169-170).

The points presented in the previous section appear as solid statements, whereby many of the worrying factors can be mitigated. I will thus rather focus on the following scenario.

Considering that we will be able to live side by side with machines in the future, where the idea is to create a mutual understanding and mutual respect between people and AGI, then this will be able to function as intended.


The Bible says that man is created in the image of God, meaning that all humans have an elevated status at birth. But then man wants to create machines that will be viewed as the equivalent of man; will this not fall on its own unreasonableness by that very notion? Will machines not then fall under our exalted state? I am at a crossroads on the very question of where to stand on equality between humans and machines.

Machines today do as we command them to do; it applies to all machine-operated devices. The emotionally intelligent machines of the future, with the possibility of their own opinions about what they want to create, do, or else, will machines, based on the conundrum of equality of rights, then not go against their own core values – like the slaves before them, during the infamous triangular trade from the early 15th through to the late 18th century, the slave trade in the southern states of the United States until its abolition in the 19th century, and the ongoing sex trade?

What I see clearly is that, yes, in the not-too-far future we will see a paradigm shift; we will create technological innovations that move beyond thoughtless instrumental creations in the demand for production efficiency. But when it comes to building a sustainable foundation based on the notion of equality of rights between humans and machines, given that, yes, this is for now just a fantasy-philosophical angle, still, one must then step aside for the other’s right to self-respect, by and for all: justice through reciprocity.

Also to be noted: when should morality have its rights instrumentally implemented? Without a doubt, this will be among the biggest obstacles that we humans must address in a future that may not just be fantasy, but very possibly a new reality. The ability of machines to harm people has, so far only in the state of fiction, received its ratification, whereby it is said:

Isaac Asimov, the science-fiction writer, famously proposed three laws of robotics. These laws are like a safety protocol:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Hawkins, p.152).
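These laws read like an ordered safety protocol, where each law yields to the ones before it. As a purely hypothetical toy sketch (the `Action` structure and all names here are illustrative inventions, not from the interview or from Asimov), the priority ordering could be expressed as:

```python
# Toy illustration of Asimov's three laws as an ordered priority check.
# The Action fields are hypothetical flags standing in for a real-world judgment.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would the action injure a human?
    allows_harm: bool = False        # would inaction here let a human come to harm?
    obeys_order: bool = True         # does the action follow a human order?
    self_destructive: bool = False   # would the action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders, checked only after the First Law passes.
    if not action.obeys_order:
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.self_destructive:
        return False
    return True

print(permitted(Action()))                  # → True: harmless, obedient action
print(permitted(Action(harms_human=True)))  # → False: blocked by the First Law
```

The point of the sketch is only the ordering: a lower-numbered law is checked first and cannot be overridden by a higher-numbered one.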

I know that much of what is written here comes across as fictional nonsense to many of you, but perhaps we will see this entering the realm of reality. Now, if that were to happen, would this not also result in equivalent legislation binding us humans towards the machines? So, we humans could not harm any AGI, or, as I see it, enslave the AGI in any way, as stated above. Let’s certainly hope so: respect on an equal footing, even if we are born in the image of Christ, and the machines are born in the image of man.

Jacobsen: Will the idea of genius become obsolete with advances in machine intelligence?

Jorgensen: The path towards creating machine intelligence will, as I see it, be based on whether we ourselves will be able to form a total and unified understanding of our own intelligence. The term “genius” will remain, for me at least, with regards to the creative level. The definable core of intelligence is the ability to acquire new knowledge, i.e., with reference to the general basis that I am debating here.

Man’s ability to acquire, as well as adapt to, newly acquired knowledge for one’s own good, which in turn can be built upon. The brain’s neocortex is about 76% of its total volume. This is where intelligence resides. Neuroscientists who study the parts related to human intelligence have not, to date, come to a complete understanding of that particular function; a lot of work still remains. It is pointed out by the renowned neuroscientist and author Jeff Hawkins in his latest book, A Thousand Brains, that:

There are decades left, maybe more, before a total understanding can be summarized; lots of puzzle pieces are now understood, but putting these pieces together into a complete comprehension is still a long way off, though it will come to light someday… (Hawkins, A Thousand Brains).

If one gets this access to a full understanding of how the human neocortex works in its connection to intelligence, then we can in a sense create a real AGI, where the general tendency can be built into the machines, i.e., self-learning machines with what is then called a “reference framework,” on which new knowledge can be built. This is the new intelligence that will most likely dominate, maybe within our own century, which Jeff Hawkins refers to and with which I agree. That differs from the learned, specified knowledge that we today program into today’s AI.

When this happens, one can begin to consider whether the term “genius” will be diluted or not. I still do not think so, as the term is aimed at man’s ability to create, not a machine’s ability to produce fantastic works. We are unique in ourselves; we are the starting point for our inherent ability to create. Look at the value in what your own child creates in arts and crafts at school; case in point, what my own children bring home after school are the most wonderful items we receive, not because they are incredibly well made, but because our children made them themselves. The same cannot be said about what a machine produces. We humans will prefer man-made work over any machine-made work, always. Ask yourself: do you prefer machine-made artwork, or man-made artwork…? The term “genius” will forever remain.

Jacobsen: How will A.I. live in the future? How will human beings live in the future with A.I. making life more efficient, easier, in some regards, as now?

Jorgensen: Artificial intelligence will be able to help us humans in a variety of situations: for example, heightened customized performance within the medical field, super efficiency, and specialized interventions; super-adaptive parameters within economics, finance, and international trade, including the implementation and operation of interactive payment services, new and innovative initiatives for finance-based assets, and more seamless solutions for all border customs services, etc. There will certainly be many more great solutions that one cannot imagine today. AI will probably continue as it is currently doing within various factories around the world, only more specialized and more effective.

That being said, the biggest changes will only happen when AGI becomes as functional and as intelligent as us humans. Artificial general intelligence must first be equated with our own; it must function according to our own intelligence model setup, reference being made to the brain’s neocortex and how its parameters are laid out; only then will the great changes come to fruition. AGI will surpass anything that AI will ever be able to achieve. That said, I have previously mentioned that we humans have a specific setup of various emotions; the older part of the brain is responsible for this, as the neocortex is viewed as the new part of the brain. But now we talk about some of our primary functions, aka the “old brain” and the senses thereof, human emotions like sadness, pain, laughter, etc. AGI will function primarily by modeling the human counterpart, the neocortex, where the foundation for human intelligence lies.

So, AGI, or Artificial General Intelligence, will not be equipped with the same spectrum of emotions as us humans. It will perhaps be a matter for debate whether this will ever be implemented as a primary function or some form of subfunction for AGI sometime in the future, but again, what would be the point? When one talks about the spectrum of emotions that we humans all have, it must be pointed out that the older part of the brain, which deals with these primary functions, is able to communicate with the newer neocortex, so as to create a holistic happening of what is expected of one. For example, if you are hungry, then the old part of the brain will register this; it perceives that the body needs food now, but it does not know how to get it. It needs the information from the neocortex, which can then tell where this food is, for us to retrieve what is about to be consumed.

This is a huge simplification of the communication between the old brain and the neocortex, but the fact is that the older part of the brain talks to the neocortex in order to make it easier to do the job we are supposed to perform. If you look at it this way, the neocortex is our map, which gives us the exact position of where something is, as to what we want, i.e., the equivalent of longitude and latitude on a map. The old brain enables us physically to get to where the neocortex wants us to go, to get what we need or want.

We humans have a need to see meaning through purpose in our daily life in one form or another; our everyday lives consist of lots of emotionally charged interactive moments that in return give us fulfillment as we go about our days. This gives us purpose; it gives us a general meaning to carry on, but also presents us with our mortality, which means we all have a need to get the most out of our lives in the time we have on this wonderful blue ball we call home. You can implement purpose into a machine as well, but the communication between the old brain and the neocortex must be mirrored: the older part can produce the correct stimuli of emotions, but the neocortex must coordinate where it will happen or take place as to space and time.

Motion of thought: I proclaim there are no merits of judicial justification for the primary implementation standard of AGI, as I see it, with regards to the integration of these emotion parameters. AGI will only ever exist as an entity void of any sense of emotional awareness. Where then, if I may, will, or should I say must, the bridging between us humans and the machines take place, if at all…?

As we humans tend to flee from fellow human beings who seem emotionally dead, by that notion, this remark applies to the interactions between humans and machines: will they not follow the same mode? Furthermore, will machines then also see this as a possible intersocial hindrance that should be addressed? What then about the parent innovators behind these machines: will there be any further basis for the existential justification of these inventions, with regards to both the metaphysical and the philosophical perspective…?

When we talk about the future of machines, we cannot do so without mentioning the father of computers, Alan Turing, whom we all know from the movie “The Imitation Game,” whereby Alan Turing created a computer to break the Enigma machine that the Nazis used during WWII to conceal what they were doing and where the next assault would be. Alan Turing’s proposal as to the imitation game “states that if a person can’t tell if they are conversing with a computer or a human, then the computer should be considered intelligent” (Hawkins, p.159).

I will also consider the concept of eternal life, as a prolonged extension of our lives today is on the agenda, based on a future need to move from our own planetary system over to other possibly habitable planetary systems. The travel between these planetary systems will take a long time, very possibly 150-250 years or more; will we humans not get tired of living, not including the time of hibernation or prolonged sleep during long space travels?

I have a friend who works with older people in nursing homes; many, though not all of them, say to him when their time is at an end that they feel ready to let go, that they are tired, bored, or, “I have lived long enough and now it’s time for me to rest.” These people died at ages varying from 80 to 95 years old; what would they think about having to live for 200+ years? Does one run the risk of being “fed up” with life or not, as it is written in the song lyrics by the famous band Queen: “Who wants to live forever.” Will the general opposition towards living extended lives, so as to be able to restart one’s existence on other planets, be enough for an outright global approval when presented with this opportunity, or will the opposition to extended life be too much to ask for or to be expected? What do you, the reader, think? I know what I think…






American Medical Association (AMA 11th Edition): Jacobsen S. Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11). December 2022; 11(1).ørgensen-11

American Psychological Association (APA 7th Edition): Jacobsen, S. (2022, December 8). Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11). In-Sight Publishing. 11(1).ørgensen-11.

Brazilian National Standards (ABNT): JACOBSEN, S. D. Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11). In-Sight: Independent Interview-Based Journal, Fort Langley, v. 11, n. 1, 2022.

Chicago/Turabian, Author-Date (17th Edition): Jacobsen, Scott. 2022. “Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11).” In-Sight: Independent Interview-Based Journal 11, no. 1 (Winter).ørgensen-11.

Chicago/Turabian, Notes & Bibliography (17th Edition): Jacobsen, Scott. “Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11).” In-Sight: Independent Interview-Based Journal 11, no. 1 (December 2022).ørgensen-11.

Harvard: Jacobsen, S. (2022) ‘Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11)’, In-Sight: Independent Interview-Based Journal, 11(1). <ørgensen-11>.

Harvard (Australian): Jacobsen, S 2022, ‘Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11)’, In-Sight: Independent Interview-Based Journal, vol. 11, no. 1, <ørgensen-11>.

Modern Language Association (MLA, 9th Edition): Jacobsen, Scott. “Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11).” In-Sight: Independent Interview-Based Journal, vol. 11, no. 1, 2022,ørgensen-11.

Vancouver/ICMJE: Jacobsen S. Conversation with Tor Arne Jørgensen on AGI: 2019 Genius of the Year – Europe, World Genius Directory (11) [Internet]. 2022 Dec; 11(1). Available from:ørgensen-11


In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Based on work at


© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen, or the author(s), and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors copyright their material, as well, and may disseminate for their independent purposes.
