Born to do Math 109 – Three Black Boxes Walk Into A Universe
Author(s): Scott Douglas Jacobsen and Rick Rosner
Publication (Outlet/Website): Born To Do Math
Publication Date (yyyy/mm/dd): 2019/02/22
[Beginning of recorded material]
Scott Douglas Jacobsen: Some concepts or ideas seem basic here. I do not mean simple, but base, as in foundational: the idea of information as the result of a relation between things.
But also the basic notion of two points needed for that interaction to happen and for the exchange of information. As you were noting months ago, even those two points, say, are themselves emergent.
Rick Rosner: Everything is emergent. You need the hardware to register phenomena. I am not well-versed in neural nets, but I can give a rough gloss. You need things that are capable of keeping score.
Systems capable of registering a wide variety of signals about the outside world. There should be consistencies in the outside world – the world outside of the neural net. You have the sensory apparatus.
Then you have whatever is impinging on the sensory apparatus, whether something outside the net but still inside your head, sensory input from the outside world, and so on. It can come from inside or outside your head.
Something is capable of keeping score and becoming aware of what is consistent among the set of all things that impinge on the system, or on that part of the system.
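[Editor's note: to make the "keeping score" idea concrete, here is a minimal sketch in Python. It is an illustration only, not anything described in the conversation: a hypothetical ConsistencyRegistrar that tallies which signals impinge together and flags pairings that show up more often than chance would predict, one crude way a system could register consistencies within its purview.]

```python
# A toy sketch (illustration only): a system that "keeps score" by counting
# which signals impinge on it together, and flags pairings that co-occur
# more often than chance, i.e. consistencies within its purview.
from collections import Counter
from itertools import combinations

class ConsistencyRegistrar:
    def __init__(self):
        self.single = Counter()   # how often each signal has been registered
        self.pair = Counter()     # how often each pair has been seen together
        self.frames = 0           # total observation frames

    def observe(self, signals):
        """Register one frame of impinging signals (a set of labels)."""
        self.frames += 1
        for s in signals:
            self.single[s] += 1
        for a, b in combinations(sorted(signals), 2):
            self.pair[(a, b)] += 1

    def consistencies(self, lift=2.0):
        """Return pairs seen together far more often than independence predicts."""
        found = []
        for (a, b), n_ab in self.pair.items():
            expected = self.single[a] * self.single[b] / max(self.frames, 1)
            if n_ab > lift * expected:
                found.append((a, b, round(n_ab / max(expected, 1e-9), 2)))
        return sorted(found, key=lambda t: -t[2])

# "Lightning" and "thunder" keep impinging together, so the registrar
# scores that pairing as a consistency; "birdsong" stays uncorrelated.
reg = ConsistencyRegistrar()
for _ in range(30):
    reg.observe({"lightning", "thunder"})
for _ in range(70):
    reg.observe({"birdsong"})
print(reg.consistencies())   # [('lightning', 'thunder', 3.33)]
```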
Jacobsen: Could this be seen as something like unlinked things that are emergent and linked things that are emergent? Things emerge out of the bubbly soup. Some are linked up. Others are simply taken into the registration of the linked systems.
Rosner: Some of it depends on the apparatus. The apparatus is only capable of registering consistency within its purview.
Jacobsen: What does purview mean in this context? Is it that which can possibly be registered in the universe?
Rosner: A purview is the limited range of types of things that can trigger a system's sensors. The system has a limited analytic capacity. Depending on how it is set up, there is a limit to the complexity that it can register as consistent.
That is, a grasshopper has a less sophisticated understanding of the world than a human because the human has more analytic capacity and more sensory capacity. The grasshopper will not be able to register as many consistencies as a human.
Jacobsen: In a sense, does this imply two other concepts? One is the scope and type of registration. The other is the depth and speed of processing of that scope and type of registration. What can add to it? How can we wrangle this into an IC framework or system for understanding the world? Because this is good.
Rosner: In a general sense, you can argue that a system's capacity is proportional to the size and power and speed of its hardware. To add to that, it is also proportional to the system's experience.
That is, as the system adapts itself experientially to the world that it is in, it will become more powerful at understanding, digesting, and analyzing that world.
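[Editor's note: as a purely illustrative aside, with names and numbers assumed rather than taken from the conversation, the claim that capacity tracks both hardware and experience can be pictured with a toy system whose "hardware" is a capped memory and whose "experience" is the number of observations it has digested.]

```python
# A minimal sketch, purely illustrative, of the claim that a system's grasp of
# its world grows with both its hardware (here, a memory budget) and its
# experience (here, how many observations it has digested).
import random

def true_world(x):
    """The hidden regularity the system is trying to register."""
    return (3 * x + 1) % 17

def run_system(memory_slots, observations, trials=500, seed=0):
    rng = random.Random(seed)
    memory = {}                       # limited "hardware": capped lookup table
    for _ in range(observations):     # "experience": adapting to its world
        x = rng.randrange(17)
        if x in memory or len(memory) < memory_slots:
            memory[x] = true_world(x)
    correct = 0
    for _ in range(trials):
        x = rng.randrange(17)
        guess = memory.get(x, rng.randrange(17))  # guess blindly outside purview
        correct += (guess == true_world(x))
    return correct / trials

for slots in (4, 17):
    for seen in (5, 200):
        print(f"memory={slots:2d} experience={seen:3d} "
              f"accuracy={run_system(slots, seen):.2f}")
```

[The point of the toy is only the trend: a bigger table and more observations both push the accuracy up, which is the proportionality gestured at above.]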
Jacobsen: We have these systems that are emergent. The basic framework of the system bubbles up, emergent out of some fuzz.
Rosner: Yes.
Jacobsen: Then we have systems arising in only one of two ways. One is evolved. The other is artificially constructed.
Rosner: Sure.
Jacobsen: Within those two, we have registration with scope and type. Then we have depth and speed of processing.
Rosner: You can divide it into natural and unnatural, or evolved and, call it, forced, where somebody has already done the analyzing. In our case, when you're building a video game, at some level the analysis is being done by evolved creatures who input their accumulated experience and understanding into the system.
Jacobsen: You mean the case with Deep Blue in chess and AlphaGo with Go.
Rosner: Yes, the understanding and interpretation are now being turned over to machine analytics. You might be able to turn over, say, the behavior of a head of hair.
Jacobsen: Is this part of the decoupling from what is possible for human science to merely aided human science, and then catapulting beyond anything normal and natural for human science?
Rosner: Yes. Except, there will always be bridges.
Jacobsen: Fair enough.
Rosner: A sufficiently powerful AI, an AI with enough computing capacity behind it, will begin to behave in ways opaque to its constructors; this is probably a general principle. Google Translate has its own metalanguage inside it, known only to the AI itself.
There are examples in Go and chess. As the AI becomes more powerful, it makes moves that are good but inexplicable to humans. This is no different, really, from human beings being inexplicable to other humans.
We are trying to understand one another, whether through a true crime novel or a TV show. We are looking at other people and trying to know why they behave the way they behave. If you're in a relationship or a working relationship, you are looking at a black box.
You are trying to figure out why people are being such fuckheads.
[End of recorded material]
License
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightpublishing.com.
Copyright
© Scott Douglas Jacobsen and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen and In-Sight Publishing with appropriate and specific direction to the original content. All interviewees and authors co-copyright their material and may disseminate for their independent purposes.
