An Interview with Associate Professor Pei Wang (Part Two)

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): In-Sight: Independent Interview-Based Journal

Publication Date (yyyy/mm/dd): 2016/04/15

Abstract

An Interview with Associate Professor Pei Wang. He discusses: resources from the department and position; self-summarization and its relationship to A.I.; mainstream opinion on A.I.; ultimate goal; membership in professional organizations; Non-Axiomatic Reasoning System (NARS) and its contribution to computer science; Three fundamental misconceptions of Artificial Intelligence (2007); probability of the Singularity; immortality; good and bad news for thinking beings with A.I.; powerful A.I. reflecting on human thought; social and legal structure changes with A.I.; humans replaced or combined with A.I.; remnants of humanity with long-term A.I.; other civilizations in the galaxy; constructs of these other civilizations; ‘wants’ of A.I.s; weirdest aspect of living with A.I.; things that might not become weird; possible fragmentation caused by A.I.; percent chance on an A.I. takeover; future political controversies over A.I.; personal heroes; collaborative projects; and solo projects.

Keywords: A.I., Non-Axiomatic Reasoning System, Pei Wang, Singularity.

An Interview with Associate Professor Pei Wang (Part Two)[1],[2],[3],[4]

*Please see the footnotes throughout the interview, and bibliography and citation style listing after the interview.*

31. Now, you hold an associate professorship (2008-Present) at Temple University in the Department of Computer and Information Sciences.[5] What resources do this department and position provide for research into cognitive science and computer science?

I joined this department in 2001 after Webmind went bankrupt, mostly because my home was in Philadelphia, so a local job made things easier. In these years my title has changed a few times, though the position has been non-tenure-track and teaching-oriented. Officially, my current full title is “Associate Professor (Teaching/Instructional)”, which means my duty is full-time teaching, and there are few resources provided for my research, though in these years the department has been supportive of my research.

32. You self-summarize, as follows:

My research goal is to build a thinking machine (also known as “artificial general intelligence” or “human-level artificial intelligence”). The approach I take is to design and implement a reasoning system, which unifies various cognitive facilities, such as reasoning, learning, categorizing, planning, problem-solving, decision-making, etc. The current achievements of this project can be found at http://nars.wang.googlepages.com/

Specialties: Artificial Intelligence and Cognitive Science in general, and especially on
* foundation of intelligence,
* reasoning under uncertainty,
* learning and adaptation,
* knowledge representation,
* decision making under time pressure.[6]

A.I. research studies the ability of a digital computer to perform tasks associated with intelligent beings. Cognitive science studies the mind and its processes. How do A.I. and cognitive science research relate to “foundation of intelligence,” “reasoning under uncertainty,” “learning and adaptation,” “knowledge representation,” and “decision making under time pressure”?

Though it is intuitive to identify “intelligence” as the ability to solve certain problems, in my opinion such an understanding fails to reveal the fundamental difference between the human mind and conventional computer systems.

One consequence of this understanding of intelligence is that it suggests a “divide-and-conquer” strategy in A.I. research, which is responsible for the fragmentation of the field. For instance, “reasoning”, “learning”, “planning”, “decision making”, “natural language understanding”, and so on, have traditionally been treated as separate tasks to be performed using different theories and techniques, while in the human mind they may actually be different aspects of the same underlying process.

Another consequence is that “the problems solved by intelligent beings” is too broad and vague a notion. For instance, before the computer was invented, only the human mind could carry out arithmetic operations on arbitrary numbers. If this task were also associated with intelligence, then a pocket calculator would be considered an intelligent system. Since all these tasks are carried out by very different methods, it is difficult to find a common theoretical foundation for how intelligence works.

To resolve these issues, I define “intelligence” as “the ability of adaptation under insufficient knowledge and resources”, which is an attempt to provide a unified foundation for A.I., as well as to interpret various cognitive processes, such as reasoning, learning, decision making, etc., in a consistent manner.

Since the aim of my theory is not only to guide the development of A.I., but also to explain how human thinking works, it is a theory of cognitive science, too. However, since this field is currently dominated by cognitive psychology, where the focus is on humans, not machines, my work is also out of the mainstream: my model does not intend to describe the human mind in detail, but to capture its basic principles. I do not think an A.I. will be identical to the human mind in all details.

33. What characterizes the “mainstream” opinion in A.I.?

To me, the mainstream A.I. is characterized by two opinions:

  • “Intelligence” is the ability to solve certain problems that are solvable by the human mind.
  • The problems associated with intelligence can be solved in the same way as how computers are traditionally used in problem solving.

34. What remains the ultimate goal of these convergent and unified research interests?

The objective of my research is to get three results altogether: (1) a theory of intelligence, cognition, and mind (which are more or less the same thing in this context) in general, in the sense that it is applicable not only to humans; (2) a formal model of the theory, with all the details accurately specified; and (3) a computer implementation of the model, as a general-purpose thinking machine that is comparable to, though not identical with, the human mind in all major aspects.

35. You remain a member of the Artificial General Intelligence Society, the Association for the Advancement of Artificial Intelligence, and the Cognitive Science Society.[7] What does membership in these organizations provide for you?

I was one of the founders of the field of Artificial General Intelligence (AGI), and it is the community I am most associated with. At the same time, I remain connected to the mainstream A.I. community (represented by the Association for the Advancement of Artificial Intelligence) and the cognitive science community (represented by the Cognitive Science Society), mainly to keep track of their progress and trends.

36. You have involvement in the Non-Axiomatic Reasoning System (NARS), a “general-purpose reasoning system.”[8] You have described the ability of NARS to learn from experience based on insufficiency in both resources and knowledge.[9] Its purpose is to reproduce cognitive faculties, too. All research intersects on “a theory of intelligence, a formal model of the theory, and a computer implementation of the model.”[10] How does NARS contribute to the discipline of computer science and to some researchers’ dreams of the development and foundation of artificial general intelligence?

NARS aims to become the “logic core” of intelligent systems that must handle questions and goals that are beyond their current capability in terms of knowledge and time-space resources. It will directly contribute to artificial intelligence and cognitive science, while also having an impact on computer science and many other disciplines.

37. Three fundamental misconceptions of Artificial Intelligence (2007) describes prominent conceptualizations of the nature of artificial intelligence, which remain wrong, and provides correctives to these misconceptions.[11] These misconceptions relate to the identification of an A.I. with “an axiomatic system, a Turing machine, or a system with a model-theoretic semantics.”[12]

Even though, as the paper notes, these three core notions have functional utility for A.I. systems, they attract legitimate criticisms from individuals external to the discipline of artificial intelligence research and create problems for the field itself. In addition to these points of critique and response, the paper introduces an example intelligent system entitled NARS. NARS does not use any of the three previous core notions of the discipline. Nonetheless, the paper provides the theoretical bases for its implementation – in spite of this rejection of the common triplet – on a standard digital computer, an “ordinary computer.”[13]

38. All of these conceptualizations, wrong ones by the paper’s analysis, derive from the treatment of empirical reasoning as mathematical reasoning in numerous instances. Nonetheless, what solutions does NARS bring to bear on the problem of the construction of a digital architecture capable of artificially and generally intelligent operations?

The solution proposed in NARS consists of several levels.

At the conceptual and philosophical level, it is the idea that A.I. is not an extension of computer science, but a separate discipline with its own fundamental assumptions. Roughly speaking, computer science is about how to solve problems with sufficient knowledge and resources; that is, it is the designer of the system who solves the problems, and the computer simply repeats the solution on each instance of the problem. Artificial intelligence, on the contrary, should be about how to solve problems with insufficient knowledge and resources; that is, the system is not given all the relevant knowledge for the problems, nor can it afford the processing time and memory space to exhaustively try every possible solution, but it has to learn to solve the problems on its own.

At the technical level, I formulated a new logic, called “Non-Axiomatic Logic”, to accurately specify the working process of a system that has to work with insufficient knowledge and resources. Concretely speaking, it answers questions like “If there is no way to get an answer that is absolutely correct, which answer is the best?” and “If it is impossible to consider all relevant knowledge when solving a problem, which knowledge should be considered?”, and so on.
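The published Non-Axiomatic Logic papers make these questions precise with two-component truth values. Below is a minimal sketch in Python, assuming the standard NAL definitions (frequency f = w+/w, confidence c = w/(w + k), expectation e = c(f - 1/2) + 1/2); the value of the constant k and all function names are illustrative choices, not taken from any NARS codebase:

```python
# A minimal sketch of NAL-style truth values, based on the published
# Non-Axiomatic Logic definitions. The evidential horizon K and the
# function names are illustrative, not from any NARS release.

K = 1.0  # evidential horizon: weight of anticipated future evidence

def truth(w_plus, w):
    """Convert evidence counts (positive, total) into (frequency, confidence)."""
    return (w_plus / w, w / (w + K))

def revision(f1, c1, f2, c2):
    """Merge two judgments on the same statement from distinct bodies of evidence."""
    w1 = K * c1 / (1 - c1)  # recover total evidence from confidence
    w2 = K * c2 / (1 - c2)
    w_plus = f1 * w1 + f2 * w2
    return truth(w_plus, w1 + w2)

def expectation(f, c):
    """Expected truth, used by the choice rule: 'which answer is the best?'"""
    return c * (f - 0.5) + 0.5

# Two conflicting observations of equal strength cancel in frequency
# but still accumulate confidence:
print(revision(1.0, 0.5, 0.0, 0.5))                  # -> (0.5, 0.666...)
# When competing answers are compared, strong evidence beats a lucky guess:
print(expectation(0.9, 0.9), expectation(1.0, 0.1))  # 0.86 vs 0.55
```

No answer here is “absolutely correct”; the expectation value simply lets the choice rule prefer the best-supported candidate, which is exactly the kind of question the logic is built to answer.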

39. What seems like the probability of the Singularity?

If “singularity” indicates a time after which A.I. becomes completely incomprehensible, I do not think it will ever happen. I believe A.I. can be built to follow the same principles and mechanisms as human intelligence, and it will show all kinds of cognitive functions and capabilities. In applications, A.I. will do many things better than us. However, this can be achieved exactly because we understand how “intelligence” and “cognition” work, so A.I. will be comprehensible in principle, even though we probably will not be able to exactly predict or explain the details of the behavior of an A.I. Actually, we often cannot do that already for an ordinary (unintelligent) computer.

Unlike Ray Kurzweil and many other people, I do not see A.I. systems as conventional computer systems with stronger and stronger problem-solving power. Instead, I see them as a different type of computer system, whose problem-solving power will indeed increase without bound, but whose governing principles (which are where “intelligence” lies) remain more or less the same. In the current discussion about A.I., one fundamental confusion is between these two levels of capability. For example, are present-time scholars “more intelligent” than those who lived in ancient Greece, like Socrates, Plato, and Aristotle? We surely have much higher problem-solving ability, but to me, our “intelligence” is more or less the same as theirs, since intelligence is not about “what one can do”, but “what one can learn”. In this way, future A.I. may be like human beings of future generations: more capable, but comprehensible to us, at least in principle.

40. Does immortality as argued by Dr. Ray Kurzweil seem reasonable – even with an extended timeline – to you?

No. I have not seen a convincing argument on this topic yet.

41. As we figure out A.I., what good and bad news will it have for us as thinking beings?

Like any major technical breakthrough in history, A.I. will be both an opportunity and a challenge at the same time. In purely intellectual terms, the good news will be that we have reached a major milestone in the understanding of how “thinking” works, while the bad news is that we will lose our monopoly on this ability and have to deal with the undesired consequences.

42. Will powerful A.I. show us that human thinking is sloppy and threadbare?

Probably not, since many negative aspects of human thinking are inevitable in all intelligent systems, and we will see them in A.I., too. For example, “forgetting” is often taken as a defect of the human mind, but according to my theory, it is a phenomenon that is certain to occur in an adaptive system working with insufficient knowledge and resources. A.I. will make all kinds of human-like errors.
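To make the resource argument concrete, here is a hypothetical sketch (the class, priorities, and facts are invented for illustration; this is not code from any NARS release): when memory is a fixed-size buffer, admitting new experience necessarily displaces the lowest-priority items, so forgetting is a structural consequence of insufficient resources rather than a malfunction.

```python
# Hypothetical illustration: forgetting as a consequence of a fixed budget.
import heapq

class BoundedMemory:
    """Keep only the `capacity` highest-priority items; the rest are forgotten."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # min-heap of (priority, item); the root is forgotten first

    def remember(self, priority, item):
        if len(self.items) < self.capacity:
            heapq.heappush(self.items, (priority, item))
        else:
            # Forgetting happens here: the lowest-priority item is displaced,
            # not as a malfunction, but as the only way to admit new experience.
            heapq.heappushpop(self.items, (priority, item))

memory = BoundedMemory(capacity=3)
for priority, fact in [(0.9, "fire burns"), (0.2, "a 2004 lunch menu"),
                       (0.7, "water is wet"), (0.8, "ice is cold")]:
    memory.remember(priority, fact)
print(sorted(memory.items, reverse=True))  # the low-priority lunch menu is gone
```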

43. How will social and legal structures change to accommodate non-human beings that are as smart as or smarter than humans?

We will not know for sure until we are close to our objective in this research, so it is too early now to speculate on the details, except to say that such changes will surely become necessary.

44. Will humans be replaced by or combine with A.I.?

I do not believe humans will be replaced by A.I. At least, I have not seen any argument for that possibility that is not based on a misconception of A.I. It is certainly possible to “combine” human and A.I. in various ways, just as some people already cannot live without their cellphones.

45. What remnants will exist of humanity in the long-term if A.I. pans out?

Since human beings will continue to exist after A.I. has been achieved, there are no “remnants” to talk about.

46. Do you think there are other civilizations in our galaxy?

I think that is a valid possibility.

47. What constructs might these civilizations produce for themselves?

I have no idea.

48. What will A.I.s ‘want’ in the future?

I discussed this topic in my AGI-12 paper (Motivation Management in AGI Systems) in detail.[14] Roughly speaking, an A.I.’s initial goals or motivations are specified by humans (designers or users), then some derived goals are generated from them and the system’s knowledge, which wholly or partly comes from the system’s experience. Therefore, what an A.I. wants will be determined both by its nature and its nurture, but not by either of the two alone. Furthermore, in deciding what action to take, the system will usually consider all active goals, rather than any single one of them, even the initial one.

A common misconception about the motivation/goal of A.I. is to assume that the system’s actions will all be fully decided by a single initial goal, as exemplified by Bostrom’s “paperclip maximizer”. A truly intelligent system will not do that.
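The contrast can be made concrete with a hypothetical sketch, where all goal names, weights, and functions are invented for illustration (this is the general idea, not the mechanism specified in the AGI-12 paper): when every active goal weighs in on each candidate action, an action that serves one goal while damaging the others loses.

```python
# Hypothetical sketch: decision making over ALL active goals at once,
# rather than maximization of a single initial goal.

def choose_action(actions, goals):
    """goals: {goal_name: priority}; actions: {action: {goal_name: support}}."""
    def total_support(action):
        # Sum each goal's priority-weighted support; a goal the action harms
        # contributes negatively, so no single goal can dominate the decision.
        return sum(priority * actions[action].get(goal, 0.0)
                   for goal, priority in goals.items())
    return max(actions, key=total_support)

goals = {"make paperclips": 0.6, "preserve resources": 0.8, "obey user": 0.9}
actions = {
    "convert everything to paperclips": {"make paperclips": 1.0,
                                         "preserve resources": -1.0,
                                         "obey user": -1.0},
    "run the paperclip machine normally": {"make paperclips": 0.6,
                                           "preserve resources": 0.2,
                                           "obey user": 0.5},
}
print(choose_action(actions, goals))  # -> "run the paperclip machine normally"
```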

49. What will be the weirdest aspect of living with A.I.?

I do not know that yet.

50. What things might not become weird?

Most of them.

51. Will A.I. put pressure on society to fragment into those collectives which embrace A.I. and avoid A.I.?

That may happen, if the situation is not properly handled by politicians and scientists, though I do not see it as an inevitable scenario.

52. What percent would you assign to the risk of an A.I. takeover?

It depends on the definition of “A.I. takeover”. I do not believe it is possible for A.I. to completely take over the world, though it will surely take over certain aspects of our life, such as a large part of traffic control.

53. Will future political controversies over A.I. become as heated as the current enflamed political scene in The United States of America?

I hope not, and I will try to avoid that scenario, though I cannot guarantee that it will not happen.

54. When will we elect the first A.I.-augmented politician?

Again, it depends on how “A.I.-augmented” is defined. When a politician depends on a computer in decision making, I do not think it makes much difference whether the function is provided by an implanted chip or a smartphone.

55. What personal heroes exist in history, in the present, and who most influenced you?

None too special to be singled out.

56. Any upcoming collaborative projects?

The Jet Propulsion Laboratory (of NASA and Caltech) is cooperating with my team to apply my results to their systems.

57. Any upcoming solo projects?

Nothing major planned, as my current research has already taken all of my time.

Thank you for your time, Professor Wang.

Bibliography

  1. Artificial General Intelligence Society. (2015). Artificial General Intelligence Society. Retrieved from http://www.agi-society.org/.
  2. Encyclopedia Britannica. (2015). Encyclopedia Britannica. Retrieved from http://www.britannica.com/.
  3. Indiana University. (2015). Indiana University. Retrieved from http://www.iu.edu/.
  4. LinkedIn. (2015). Pei Wang. Retrieved from https://www.linkedin.com/in/pei-wang-3a46241.
  5. Wang, P. (2007). Three fundamental misconceptions of Artificial Intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 19(3), 249-268. doi:10.1080/09528130601143109
  6. Peking University. (2015). Peking University. Retrieved from http://english.pku.edu.cn/.
  7. Wang, P. (2015). Dr. Pei Wang. Retrieved from http://cis-linux1.temple.edu/~pwang/.
  8. Wang, P. (2012). Motivation Management in AGI Systems. Retrieved from http://cis-linux1.temple.edu/~pwang/Publication/motivation.pdf.
  9. Wang, P. (2015). NARS: an AGI Project. Retrieved from https://sites.google.com/site/narswang/.
  10. Wang, P. (2004). Problem solving with insufficient resources. International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, 12(5), 673-700. Available from: Academic Search Complete, Ipswich, MA. Accessed December 3, 2015.

Appendix I: Footnotes

[1] Associate Professor (2008-Present), Temple University; Director of Research (2000, January-2001, April), Webmind Inc.

[2] Individual Publication Date: April 15, 2016 at www.in-sightjournal.com; Full Issue Publication Date: May 1, 2016 at www.in-sightjournal.com.

[3] Ph.D. (1991, September-1995, December), Computer Science and Cognitive Science, Indiana University; M.S. (1983-1986), Computer Science, Peking University; B.S. (1979-1983), Computer Science, Peking University.

[4] Photograph courtesy of Associate Professor Pei Wang.

[5] Please see Wang, P. (2015). Dr. Pei Wang. Retrieved from http://cis-linux1.temple.edu/~pwang/.

[6] Please see LinkedIn. (2015). Pei Wang. Retrieved from https://www.linkedin.com/in/pei-wang-3a46241.

[7] Please see Artificial General Intelligence Society. (2015). Artificial General Intelligence Society. Retrieved from http://www.agi-society.org/.

[8] NARS: an AGI Project (2015) states:

NARS (Non-Axiomatic Reasoning System) is a general-purpose reasoning system, coming from my study of Artificial Intelligence (AI) and Cognitive Sciences (CogSci).

What makes NARS different from conventional reasoning systems is its ability to learn from its experience and to work with insufficient knowledge and resources.

NARS attempts to uniformly explain and reproduce many cognitive facilities, including reasoning, learning, planning, reacting, perceiving, categorizing, prioritizing, remembering, decision making, and so on.

The research results include a theory of intelligence, a formal model of the theory, and a computer implementation of the model.

The ultimate goal of this research is to fully understand the mind, as well as to build thinking machines. Currently this research field is often called “Artificial General Intelligence” (AGI).

Please see Wang, P. (2015). NARS: an AGI Project. Retrieved from https://sites.google.com/site/narswang/.

[9] Please see Wang, P. (2004). Problem solving with insufficient resources. International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, 12(5), 673-700. Available from: Academic Search Complete, Ipswich, MA. Accessed December 3, 2015.

[10] Please see Wang, P. (2015). NARS: an AGI Project. Retrieved from https://sites.google.com/site/narswang/.

[11] Please see Wang, P. (2007). Three fundamental misconceptions of Artificial Intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 19(3), 249-268. doi:10.1080/09528130601143109

[12] Please see Wang, P. (2007). Three fundamental misconceptions of Artificial Intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 19(3), 249-268. doi:10.1080/09528130601143109

[13] Please see Wang, P. (2007). Three fundamental misconceptions of Artificial Intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 19(3), 249-268. doi:10.1080/09528130601143109

[14] Please see Wang, P. (2012). Motivation Management in AGI Systems. Retrieved from http://cis-linux1.temple.edu/~pwang/Publication/motivation.pdf.

License

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.

Copyright

© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
