Joseph P. Farrell posted an update 6 years, 7 months ago
An interesting article that made me think of our very own Dr. Scarmoge: https://mindmatters.ai/2019/08/machines-are-not-really-learning/
Absolutely a topic of interest, not only for myself but for quite a few others in a number of fields as well. I hope folks will read all the way to the end of the article, where they would find: "The mind can't be just a computer, Gödel demonstrated that fact and Turing tried to live with it."
A thread to follow: Lucas, J. R. – "Minds, Machines and Gödel," Philosophy, Vol. 36, No. 137 (Apr.–Jul., 1961), pp. 112-127, for which he caught … well, you know. In 1989 the brilliant mathematical physicist Roger Penrose came to Lucas' defense in his book The Emperor's New Mind, for which he also caught … well, you know. So much so that he wrote a second book in 1994, Shadows of the Mind, in an attempt to put the final nail in the coffin of strong AI. In Shadows, pp. 72-77, Penrose gives an incredibly succinct explanation of Gödel's Theorem and what he says it does.
The original anti-strong-AI goal was to argue, via Gödel, that you can NOT equip a robot with human-level (type) consciousness VIA AN ALGORITHM (Lucas to Penrose). What I think (not original to me by any means) is that Penrose not only does that convincingly but does a great deal more. He argues, again I think most convincingly, that we can NOT even algorithmically simulate HUMAN THOUGHT; and not human thought generally speaking, but just that much of human thought that deals with a very special class of human mathematical assertions (I am of course working from the assumption that machines can NOT make assertions [an idea I think worth detailed interdisciplinary exploration]) called Π₁ sentences. … to be continued in Part II, to follow this one …
Π₁ sentences are formed from the basic relations of arithmetic (+, · and <), the constants 0, 1, 2, 3, …, and the variables x₀, x₁, x₂, …, by (a) first joining these entities into meaningful formulas (in math and logic, WFFs: well-formed formulas) using the logical connectives and (conjunction), or (disjunction), and not (negation), and then (b) attaching in front of the result a string of universal quantifiers ∀xᵢ, one for each xᵢ occurring in the result of (a) (a toy example appears at the end of this post). Penrose argues that no algorithm can provide the means for a robot to do what we as human beings do with Π₁ sentences. By the way, one of the things we do as human beings with Π₁ sentences is attach meaning to them. Robots will, if you accept Penrose's argument, of course have trouble with this.
One last thought here … "machine learning" (a different sense of the word "learning" here … beware one of the most often committed informal fallacies, EQUIVOCATION) is not excluded by algorithmicity, if such "learning" comes about by means of subroutine interactions.
There is at least one objection to Penrose's argument that could come from the CogNeuro PDP approach (artificial neural network models), regarding the contradiction that Penrose highlights: if A(n,n) stops, then Cₙ(n) does not stop; but the second in this case can be substituted for the first … hence a contradiction (Shadows of the Mind, p. 75; see lines (I) and (J)). The basic idea of the objection is that the hypothesized contradiction is only an epiphenomenon resulting from the unknowable number of subroutine interactions and relations produced by the running of the algorithms. In other words, the contradiction is a result of human perception and understanding of the algorithm's "behavior." Isn't this to say, then, that the algorithm could have an "awareness" or the ability to "self-reflect," and could have an "understanding" of "itself" that we as human observers of its "behavior" can not have, being that we are "outside" the "experience" of the algorithm?
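For readers without Shadows to hand, the step that objection targets runs, in compressed form, as follows. This is my paraphrase, so check pp. 72-77 for Penrose's exact lettering and wording; here Cq(n) is the q-th computation applied to n, and A(q,n) is a procedure that, when it halts, is meant to certify that Cq(n) never halts:
\begin{align*}
&\text{Assume a sound procedure } A: && A(q,n)\ \text{halts} \;\Rightarrow\; C_q(n)\ \text{does not halt}\\
&\text{Put } q = n: && A(n,n)\ \text{halts} \;\Rightarrow\; C_n(n)\ \text{does not halt}\\
&A(n,n)\ \text{is itself a computation on } n: && A(n,n) = C_k(n)\ \text{for some fixed } k\\
&\text{Put } n = k: && C_k(k)\ \text{halts} \;\Rightarrow\; C_k(k)\ \text{does not halt}
\end{align*}
From the last line Cₖ(k) cannot halt; but then A(k,k) never halts either, so A never certifies that non-halting, even though we just saw it by following the argument. That is the "contradiction" the PDP-style objection wants to reread as an epiphenomenon of subroutine interactions.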
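And for concreteness, here is a toy example of the shape a Π₁ sentence takes, written in LaTeX notation (my own illustration, not one of Penrose's):
\[
\forall x_0\, \forall x_1\; \neg\bigl( (x_0 + x_1 < x_0) \lor (x_0 \cdot x_1 < 0) \bigr)
\]
Step (a) builds the quantifier-free part from the atomic formulas x₀ + x₁ < x₀ and x₀·x₁ < 0 using "or" and "not"; step (b) prefixes one universal quantifier for each of the two variables. Read over the natural numbers, it asserts something we can see to be true for every x₀ and x₁, and that kind of seeing is exactly what Penrose says no algorithm fully captures.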
… apologies for the additional post, meant to put this reference at the end of the last post.
… a somewhat long read but well worth the time especially if you want an argument that cognition (mind – or at least this one important aspect of it — of course depending on how you understand mind to begin with).
Van Gelder, Tim – "What Might Cognition Be, If Not Computation?" The Journal of Philosophy, Vol. 92, No. 7 (Jul., 1995), pp. 345-381.
Hi Scarmoge, can you finish this please? I think you might have skipped the conclusion … "a somewhat long read but well worth the time especially if you want an argument that cognition … "
Oops, mea culpa … sorry about that … it was early in the morning. 🙂
… cognition could be something other than computation.
Hi Scarmoge … thanks for the conclusion. So, if cognition could be something other than computation, is it possible that the only way for AI to be cognitive is for it either to be merged or blended with a cognitive entity (such as a human being) OR to be taken over or possessed by a cognitive entity (such as a non-corporeal consciousness)?
As to the first, what you suggest, merging, sounds plausible. However, I'm not sure how one distinguishes between "working together" and truly being "merged," which I think you are hinting at in your question (merged or blended). My suspicion is that what we call consciousness is the result of a particular "ability" (the conversion of sensory input [of whatever kind] by the human nervous system into meaning [interpretation]) brought about by our being "hard wired" into the Cosmos. By "hard wired" into the Cosmos I mean that, given the kind of being that we have, at least a certain minimal number and set of "types" of logical forms of the "logical laws" of the Cosmos are embodied in us (think something along the lines of Chomsky's "Black Box" for language), whose basis is what Peirce calls the Triadic Relation. What is a Triadic Relation?
Generally speaking, Peirce defines a Sign or Representamen "… as the first Correlate of a triadic relation, the second Correlate being termed its Object, and the possible Third Correlate being termed its Interpretant, by which triadic relation the possible Interpretant is determined to be the first Correlate of the same triadic relation to the same Object, and for some possible Interpretant" (MS. 540, Collected Papers 2.242).
Why do I mumble about all of this? I also suspect that if it were possible to create an entity (machine or otherwise) whose operating system functioned on the basis of triadic logic instead of dyadic logic, something like "sentience" or "consciousness" might be realizable. I also think that even if this were somehow possible, it would at best be only a "very fine" simulation of what we have as human beings.
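If it helps to make "triadic instead of dyadic" concrete, here is a deliberately naive sketch (mine, not Peirce's, and only a data-structure analogy in Python, not a design for such an operating system):

from dataclasses import dataclass

# Dyadic: a bare stimulus -> response lookup; two correlates, no third.
def dyadic(stimulus):
    return {"smoke": "fire"}.get(stimulus, "unknown")

# Triadic (toy analogy of Peirce's Sign/Object/Interpretant, MS. 540 / CP 2.242):
# the Interpretant is itself a sign of the same Object, so interpretation
# is open-ended rather than a closed lookup.
@dataclass
class Sign:
    representamen: str    # the sign vehicle, e.g. "smoke"
    obj: str              # what it stands for, e.g. "fire"
    interpretant: object  # a further Sign this one determines (or None)

def interpret(s):
    # Each interpretation yields a new Sign standing in the same
    # triadic relation to the same Object.
    return Sign("thought-of(" + s.representamen + ")", s.obj, s)

smoke = Sign("smoke", "fire", None)
first = interpret(smoke)    # a thought about smoke, still of fire
second = interpret(first)   # a thought about that thought, still of fire
print(second.representamen, "->", second.obj)

The dyadic function bottoms out; the triadic one never does, which is the flavor of Peirce's point, though of course nothing here amounts to sentience, only to the shape of the relation.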
Is it official, DOCTOR Scarmoge?
… on track for December. 🙂
… Thanks ML! … from your keyboard to the Gods' Ears!
[and by Gods here I mean the honorable and esteemed committee members, or as I like to refer to them, "The Three"]
Nine are hard to come by nowadays. 🙂
… Hi … following on from your very interesting reply … it seems to me there exists a nasty entity that is looking for a way to manipulate our universe. I think it's not a natural resident of our universe, and your reference to triadic logic gave me a clearer reason why: the thing is not hard-wired to our cosmos, as you so well put it. But I do think it has been looking for a way to do this through human beings or other human cousins. I don't think it has an interest in animals or plants or any other consciousness, because we have abilities (now defunct) that it requires. Your last sentence gave me a sense of relief that our universe is safe, for now.