QUANTA Article on Claude Shannon
As Claude Shannon's information theory is discussed so much on the Forum, this article might be of interest. It also includes a link to a current documentary on his work.
https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/
And, Merry Christmas to all the Forum contributors, mods and staff. :sparkle:
Merry Christmas!
Message = Information + Redundancy + Noise
Information = What is to be conveyed (worth paying for)
Redundancy = Unnecessary duplication (can be deleted)
Noise = Random rubbish
Rover is a poodle dog: Dog redundant
It rained in Oxford every day this week: not surprising, very little information
It rained in the Sahara desert: surprising, high information content
Information: Shock/surprise value
Ignorance BEFORE receiving message - ignorance AFTER receiving message = Information content = The amount of ignorance reduction.
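To make the "shock/surprise value" idea concrete, here's a minimal Python sketch of surprisal, the standard measure of how informative an event is given its probability. The probabilities are made up purely for illustration:

```python
import math

def surprisal_bits(p: float) -> float:
    """Information content (surprisal) of an event with probability p."""
    return -math.log2(p)

# Hypothetical probabilities, invented for illustration only:
p_rain_oxford = 0.5       # rain on a given day in Oxford: common
p_rain_sahara = 1 / 1024  # rain on a given day in the Sahara: rare

print(surprisal_bits(p_rain_oxford))  # 1.0 bit  -- little surprise
print(surprisal_bits(p_rain_sahara))  # 10.0 bits -- big surprise
```

The rarer the event, the more bits its report carries, which is exactly the "ignorance reduction" being described.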
If the prior uncertainty can be expressed as a number of equiprobable alternatives N, the information content of a message which narrows those alternatives down to one is log2(N). This might need a little explaining. What's the smallest possible uncertainty? If you have 5 equiprobable alternatives, the uncertainty is 5; if you have 4 equiprobable alternatives, the uncertainty is 4, and so on until you get to 2 alternatives. It can't be 1 alternative, because then there's no uncertainty at all. So the smallest value of uncertainty is when you have 2 alternatives. Once you learn which alternative holds, the alternatives reduce to 1 and the uncertainty drops to 0. If information is encoded in binary (0 or 1), then learning whether a digit is a 0 or a 1 reduces the alternatives to 1 (the 0 or 1 you found out was the case) and the uncertainty to 0. In other words, in binary code a single 0 or 1 is 1 unit of information. How do we get 1 (unit of information) from 2 (alternatives)? Well, log2(2) = 1.
Imagine the following scenario: A nurse and a soon-to-be father agree that if the nurse holds up a pink card, the baby is a girl, and if the nurse holds up a blue card, the baby is a boy. The nervous father is outside the delivery room and, after some tense moments, he sees the nurse with a pink card in her hand. The initial uncertainty is 2: the father doesn't know if the baby is a girl or a boy. With the pink card, the uncertainty is halved, as now the father knows it's a girl (1 of the 2 possibilities has actualized). The information content of the colored card = 1 bit. Compare this colored-card system with the nurse saying, "Congratulations sir, you have a beautiful baby girl" [8 words]. Remember, the information content is the same (identification of the gender of the baby).
In digital code there are 0's and 1's. There are two possibilities (a 0 or a 1), i.e. the uncertainty is two. As soon as we know it's a 1 or a 0, the uncertainty is halved (one of the two possibilities becomes real/true). So, the information content of a digital 0 or 1 is 1 bit [log2(2) = 1].
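If it helps, the log2 relationship can be sketched in a few lines of Python (a minimal illustration of the formula, nothing from Shannon's paper itself):

```python
import math

def info_content(n_alternatives: int) -> float:
    """Bits needed to single out one of n equiprobable alternatives."""
    return math.log2(n_alternatives)

print(info_content(2))  # 1.0 -- one binary digit resolves two alternatives
print(info_content(4))  # 2.0
print(info_content(8))  # 3.0
```

Doubling the number of alternatives always costs exactly one more bit, which is why the logarithm (rather than the raw count) is the natural measure.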
A DNA example:
DNA has a 4-letter nucleotide code [Adenine (A), Thymine (T), Cytosine (C) and Guanine (G)]. The initial uncertainty here is 4, as there are 4 possibilities. The information content of each nucleotide is then log2(4) = 2 bits. This can be understood in terms of what 1 bit of information means. 1 bit of information is the amount of information necessary to halve the uncertainty from 2 to 1. We have 4 possibilities (A/C/T/G). To reduce the number of possibilities to 2, we can ask the question, "does the letter precede D?". If yes, the possibilities narrow to A/C; if no, they narrow to T/G. We have halved the uncertainty (4 possibilities have reduced to 2 possibilities) and that accounts for 1 bit of information. The next question to ask depends on which pair, A/C or G/T, followed from the first question. If it's A/C, the follow-up question should be, "is it A?" If yes, it's A; if not, it's C. If the pair that actualized is G/T, the next question should be, "is it G?" If yes, it's G; if not, it's T. Either way, the uncertainty is halved (from two possibilities to one). This is the other 1 bit. Together, we have 2 bits of information in every nucleotide.
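The two-question scheme just described can be written out as a small Python sketch (the "precedes D?" question is the one from the text; the rest is my own scaffolding for illustration):

```python
import math

NUCLEOTIDES = ["A", "C", "G", "T"]

def questions_to_identify(letter: str) -> list[str]:
    """The two yes/no questions from the text that pin down one nucleotide."""
    answers = []
    # Question 1: does the letter precede "D" alphabetically?
    if letter < "D":
        answers.append("precedes D? yes -> {A, C}")
        # Question 2: is it A?
        answers.append("is it A? " + ("yes" if letter == "A" else "no"))
    else:
        answers.append("precedes D? no -> {G, T}")
        # Question 2: is it G?
        answers.append("is it G? " + ("yes" if letter == "G" else "no"))
    return answers

# Every nucleotide takes exactly two questions, matching log2(4) = 2 bits:
assert all(len(questions_to_identify(n)) == 2 for n in NUCLEOTIDES)
assert math.log2(len(NUCLEOTIDES)) == 2
```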
I don't think this is a valid conclusion. It rained in Oxford has the same degree of information as does it rained in the Sahara. The "shock/surprise value" refers only to how the piece of information relates to other information. If you allow that information is both, what is intrinsic to the message, and also how that message relates to other messages externally, then you have ambiguity as to what "information" actually refers to.
This is very evident in your example of "Rover is a poodle dog". "Dog" is only redundant when "poodle" is related to something else such as a definition. But if each word of the message needs to be related to something else, like a definition, then there is really no meaning in the message at all because the meaning is outside of the message in the act which relates the words to the definitions. The ambiguity is avoided by realizing that there is no information in the words, the information is in the act which relates the words to something else. What the words could mean, is anything, therefore random, and there is actually no information within the message. Information would actually be in the relationships established between the message and something else.
If this is the case, then to talk about there being "information" within the message is false talk. But, if there is no information within the message itself, we deprive ourselves of the capacity for explaining how any information gets from the transmitter to the receiver. There is actually no transmission of information whatsoever, nothing within the message, because all the information is really within the coding and decoding methods. If this is the case, then no information is ever transmitted. Therefore this way of defining "information" is really inadequate. It only appears to be adequate by means of the ambiguity which creates the impression that there is both information within the message and in how the message relates to other things. In reality there is no information within the message in this description.
Then why is it surprising that it rained in the Sahara and not that it rained in Oxford? I admit I'm not sure of the logic behind treating the shocking/surprising as having more information, but if I were to hazard a guess, it's got to do with what people refer to as the baseline - the greater the deviation from it, the more bits (units of information) are needed to code it into a message, and the shocking/surprising certainly are major excursions from the...er...baseline, right? Food for thought: why is news "news"? New, shocking, surprising, out of the ordinary,...
Quoting Metaphysician Undercover
That's correct but that goes without saying, right?
Quoting Metaphysician Undercover
Claude Shannon's information theory assumes that we've already passed those waypoints in our quest to [s]understand[/s] quantify, and efficiently transmit, information. Shannon's information theory is, whatever else it might be, not a philosophical inquiry into information, and so we had best refrain from raising philosophical objections to it - that would be like trying to diagnose a uterine malady in a man.
As I pointed out, the surprisingness is only related to external information concerning the frequency of rain in these places, it has nothing to do with any supposed information within the message.
Quoting TheMadFool
This is evidence of what I said, the "information" as the word is used here, is not within the message, it is in how the message is related to the "baseline".
Quoting TheMadFool
If the accepted "information theory" represents information in a way other than the way that we normally use the word "information", and cannot account for the existence of information, according to how we normally use the word, as that which is transmitted in a message, then surely we are justified in "raising philosophical objections to it".
What I am saying therefore, is that Shannon's "information theory" does not deal with "information" at all, as we commonly use the word. If we do not recognize this, and the ambiguity which arises, between the common use, and the use within the theory, we might inadvertently equivocate and think that the theory deals with "information" as what is referred to when we commonly use the word to refer to what is inherent within a message.
What else could surprising/shocking mean? Also, what do you mean by "it has nothing to do with any supposed information within the message"? How would you come by information without a message and a medium for that message? If for instance, I read about rain in the Sahara, the message is the article on what is indeed a very rare event and the medium is the paper I'm reading. :chin:
Quoting Metaphysician Undercover
Glad that you figured that out.
Quoting Metaphysician Undercover
Thanks for your patience. I do agree that Claude Shannon's theory is not the only game in town insofar as information is concerned. I remember reading about another less-popular theory that's also out there. However, in the universe of computers, the world of 1's and 0's, in which it's almost a given that one binary state (1 or 0) should correspond to 1 unit of information, Claude Shannon's conceptualization of information as a process of reducing uncertainty about which alternative is true among [idealized] equiprobable alternatives has a natural and intuitive feel to it. The least amount of uncertainty occurs when we have two alternatives (0 or 1); knowing that 1 or 0 is the case reduces the uncertainty to zero (only 1 of the two alternatives remains), which suggests that, for computers at least, a 1 or a 0 should count as 1 unit of information. If the uncertainty is 4 alternatives, you need 2 units of information to bring the uncertainty down to zero; if it is 8 alternatives, you need 3 units. That means the information content of a message that whittles down N equiprobable alternatives to 1 = log2(N). This is a perfect fit for what I said a few lines above - that a 1 or a 0 should count as 1 unit of information, as log2(2) = 1.
By the way, this just popped into my mind. Information is, in some sense, the opposite of ignorance, and ignorance can be thought of as uncertainty among given alternatives. For example, if I don't know, i.e. I'm ignorant of (I have no information on), who invented information theory, then this state of not knowing can be expressed as consisting of the following equiprobable alternatives, just as Claude Shannon theorized: Vint Cerf OR Larry Page OR Mark Zuckerberg OR Claude Shannon...
The information is "it rained in the Sahara", just like in the other instance, the information is "it rained in Oxford". How is whether or not this is surprising or shocking, at all relevant to the content of the information?
Quoting TheMadFool
Figured what out, that Shannon is using "information" in a way which is completely inconsistent with common usage? I said that right from the beginning. The question is have you figured that out yet?
Quoting TheMadFool
The issue now is the relationship between uncertainty and information. In the normal, common expression of "information", some degree of uncertainty is inherent within the information itself, as ambiguity. In the way that you describe Shannon's expression of "information", information is the process which excludes uncertainty. Do you see the difference? Now the problem with Shannon's representation is that it cannot cope with the real uncertainty which is associated with ambiguity.
Quoting TheMadFool
Again, this is not consistent with the common usage of "information". In common usage information is what informs a person, to deliver one from ignorance, and so being informed is the opposite of ignorance, but information, as that which informs, is not itself the opposite of ignorance. So, that information is the opposite to ignorance, is a category mistake relative to the common usage of "information".
Sorry, but you seem to be contradicting yourself. Please go over your posts again.
Quoting Metaphysician Undercover
Are you implying we can cope with uncertainty? Uncertainty, ambiguity being one of its causes, comes with the territory and it can't be, to my reckoning, dealt with in a satisfactory manner by any theory of information, whether based on certainty or uncertainty, let alone Claude Shannon's. So, your criticism is more appropriate for language than Shannon's theory.
Quoting Metaphysician Undercover
What is this "common usage" of "information" that you speak of?
Google gives the following definition of information: facts provided or learned about something or someone
So, what is a fact?
Here's a fact that we all know: The Eiffel tower is in Paris.
Now, if one is ignorant of this fact i.e. one doesn't know that the Eiffel tower is in Paris, this state of ignorance can be represented with the following possibilities (alternatives) [uncertainty]
The Eiffel tower is in NY, OR The Eiffel Tower is in Beijing, OR The Eiffel tower is in Sydney, OR...
Once you come by the information that the Eiffel tower is in Paris, the uncertainty becomes 0. Isn't that just fantastic? That's what I meant when I said Shannon's theory "...feels natural and intuitive..." Mind you, Shannon's theory is probably just one of many other ways to approach the subject of information but it, for certain, captures the uncertainty aspect.
Thank you for your discussion.
Charged with maximizing the flow of communication, Shannon was interested in measuring the carrying capacity of the system, not the meaningful content of each message. That's like a shipping company, which is more interested in the potential (carrying capacity) of its empty vessels, while the shippers are interested in the cash-value (meaning) of the actual cargo.
Toward that end, Shannon focused on the Syntax of Information (structure ; volume) instead of its Semantics (meaning ; content). Ironically, he measured Information capacity in terms of emptiness & negation (Entropy), instead of its fullness & positive aspects (Energy). Even more ironically, scientists have referred to those purposeful features as "negentropy" (negative negation). Likewise, scientists focus on the "uncertainty" of information, rather than its "novelty". But it's the unexpected that is most meaningful to humans. So, I agree that philosophers have good reasons to "raise objections".
"Information", as Shannon defined it, is akin to Fuzzy Logic, which is ambiguous & uncertain, but -- like the Enigma code -- capable of carrying almost infinite values : between 0 and 100%. By reducing Specificity, it maximizes Potential. Hence, each bit/byte, instead of carrying meaning, is an empty container capable of carrying multiple meanings. That kind of communication is good for computers -- where the translation code-key is built in -- but not for people, who can't handle uncertainty & ambiguity.
That's why neuroscientist & anthropologist Terrence Deacon said, "this is evidence that we are both woefully ignorant of a fundamental causal principle in the universe and in desperate need of such a theory". The Enformationism thesis is my contribution toward that end. :smile:
Negentropy : Negentropy is reverse entropy. It means things becoming more in order. By 'order' is meant organisation, structure and function: the opposite of randomness or chaos.
Note -- I give it a more positive name : "Enformy" -- meaning the power to enform, to create novelty.
Fuzzy Logic :
Fuzzy logic is a form of many-valued logic in which the truth values of variables may be any real number between 0 and 1. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false.
https://en.wikipedia.org/wiki/Fuzzy_logic
Enformy :
In the Enformationism theory, Enformy is a hypothetical, holistic, metaphysical, natural trend, opposite to that of Entropy & Randomness, to produce Complexity & Progress. It is the mysterious tendency for aimless energy to occasionally create the stable, but temporary, patterns we call Matter, Life, and Mind.
http://bothandblog2.enformationism.info/page18.html
See my reply to above. :smile:
Information : For humans, Information has the semantic quality of aboutness , that we interpret as meaning. In computer science though, Information is treated as meaningless, which makes its mathematical value more certain. It becomes meaningful only when a sentient Self interprets it as such.
http://blog-glossary.enformationism.info/page11.html
I think you must have misunderstood. If you perceive a contradiction, then point it out to me so I can see what you're talking about, and maybe clarify what I meant.
Quoting TheMadFool
Yes, of course I am saying that, that's what I said in my first post in the thread, uncertainty is a fundamental aspect of language use, and we clearly cope with it.
Quoting TheMadFool
Yes, but that's the point, if a theory of information cannot, in a satisfactory manner, deal with the uncertainty which is inherent within language use, then how can it claim to be about "information" as the word is commonly used? The word as it is commonly used includes information exchanged in language use.
Quoting TheMadFool
Right, so doesn't "information" include what is transferred in language use, as something provided or learned about something?
Quoting TheMadFool
This is not true though, because if someone convinces me that "the Eiffel tower is in Paris" is a true piece of information, I still might not have any clue as to what the Eiffel tower is, or what Paris is. So your claim, that uncertainty is at 0 is just an illusion, because I could just claim to be certain that there is something called the Eiffel tower, in some place called Paris, but since I haven't a clue what this thing is, or where this place is, it cannot be said to be any true form of certainty. So Shannon's theory doesn't really deal with the true nature of information at all.
Quoting TheMadFool
No, it really doesn't capture the uncertainty aspect of information, as I've explained. It does recognize that uncertainty is an essential aspect of information, as I described in my first post, but it does not provide any insight into how uncertainty is dealt with by the human mind in natural language use. So it's not really a very good way to approach the subject of information.
Quoting Gnomon
Clearly, the content is what is important, and referred to as the information. So if someone figures out a way to put the same information into one package which requires a hundred packages in Shannon's system, then his measurement system is not very good.
Quoting Gnomon
So, this is what I pointed out to Madfool, what we commonly refer to as "information" is the content, the meaning, not the structure. So Shannon's "theory of information" really doesn't deal with information, as what is referred to when we normally use the word.
Quoting Gnomon
I would say that you might have this backward. The computer can't handle uncertainty, that's why there must be a built-in code-key to eliminate any uncertainty. People, having free will choice have no such built-in code-key, and that capacity to choose regardless of uncertainty, allows them to live with and cope with ambiguity.
I agree with your version, but what I said was that "by reducing specificity" -- which increases generality -- Shannon's definition of Information "maximizes the Potential" carrying capacity (bandwidth) of a transmission. That was the point of his research. By using only an austere two digit code, instead of noisy redundant human languages, he was able to compress more information into the same pipes. Just as with Morse code though, the specific meaning is restored by translating the abstract code back into a concrete language. Only then, does it become Actual Information -- meaning in a mind; actionable knowledge.
In the shipping analogy, Shannon didn't make the ships bigger, he made the cargo smaller -- by reducing redundancy, as noted by TMF. Thus, increasing the carrying capacity at no extra cost to the shippers. But, in this thread, that's a minor point. What really matters is that by using an abstract code -- stripped of meaning -- he overcame a major technical hurdle : bandwidth. But in order for the code to be meaningful to humans, it must be decompressed and converted back into noisy redundant "natural" language. Unfortunately, his new terminology, equating "Information" with destructive Entropy, diverted attention away from the original constructive essence of Information : aboutness -- the relation between subject & object. :smile:
Natural Language : In neuropsychology, linguistics, and the philosophy of language, a natural language or ordinary language is any language that has evolved naturally in humans through use and repetition without conscious planning or premeditation. Natural languages can take different forms, such as speech or signing.
Aboutness : https://press.princeton.edu/books/hardcover/9780691144955/aboutness
About Shannon's theory - I can't help but feel too much is being read into it. Shannon was a communications engineer, first and foremost, and the problem he set out to solve had some very specific boundary conditions. It was a theory about converting words and images into binary digits - as the article notes, Shannon might have coined the term 'bit' for 'binary digit' - and transmitting them through a medium. Why it is now taken to have a profound meaning about the nature of reality baffles me a little.
You can ignore my post.
Quoting Metaphysician Undercover
Thanks for bringing up the issue of ambiguity because it lies at the heart of Shannon's theory on information. Thanks. Have a G'day.
I'd like to pick your brains about something though.
If I don't have the information on who invented the internet, does it seem ok to represent my lack of information as: Mark Zuckerberg OR Jeff Bezos OR Vint Cerf OR Bill Gates?
(No, W. is not garotting him.)
Gates of course played his part, but the other big name is Tim Berners-Lee (now Sir), who invented the WWW around 1990-91, building on earlier hypertext ideas.
Oh - but TCP/IP is an absolutely amazing invention, one of the all-time great inventions, in my humble opinion. It has a lot of security vulnerability issues, although as Cerf often says, it was never designed for the amazing lengths that it’s been extended to.
Have a read of Everything is Connected. It was an output from my first tech writing contract, which was for a company that helped introduce broadband modems into the Australian market. It has a good little primer on the invention of TCP/IP, page 13.
I remember starting a thread in the old forum which has now sadly become defunct about how questions could be reformulated as statements using the logical OR operator.
So, the question, Q (question) = "Who started the thread titled QUANTA article on Claude Shannon?" could be rewritten with no loss of meaning as A (alternatives) = TheMadFool OR Metaphysician Undercover OR Gnomon OR Wayfarer
Remember that a question is defined as a request for information and, as you can see in the above example, a question can be reduced to a list of alternatives - there are 4 for the question Q above, and Q can be expressed, without any loss of meaning, as A. At this point uncertainty enters the picture - we're not sure, or we're uncertain, as to which of the 4 possibilities is the correct one. If I now send you the message "Wayfarer", the uncertainty disappears and I have the information the question Q is asking for, which means I've whittled down the possibilities (alternatives) from 4 (TheMadFool, Metaphysician Undercover, Gnomon or Wayfarer) at the beginning to 1 (Wayfarer) at the end. According to Claude Shannon, the message "Wayfarer" consists of log2(4) = 2 bits of information.
Note that since computer language consists of 1's and 0's, the message "Wayfarer" must be decomposed into binary, e.g. yes/no answers. The first question we need is, "Does the name of the person who started this thread start with a letter that comes before the letter "P"?" The answer would be "no". The possibilities now narrow down to Wayfarer and TheMadFool. That accounts for 1 bit of information. The next question is, "Does the name of the person who started this thread begin with the letter "W"?" The answer is "yes" and we've got the information. That's the other 1 bit of information. We've homed in on the information, viz. Wayfarer, using 2 bits.
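The two yes/no questions above amount to a binary search over the four names; a minimal Python sketch (the filtering questions are the ones from the text):

```python
import math

candidates = ["Gnomon", "Metaphysician Undercover", "TheMadFool", "Wayfarer"]
bits = 0

# Question 1: does the name start with a letter before "P"? Answer: no.
candidates = [c for c in candidates if not c[0] < "P"]
bits += 1  # candidates is now ["TheMadFool", "Wayfarer"]

# Question 2: does the name start with "W"? Answer: yes.
candidates = [c for c in candidates if c.startswith("W")]
bits += 1

print(candidates, bits)  # ['Wayfarer'] 2
assert bits == math.log2(4)  # two questions = log2(4) bits
```

Each answered yes/no question halves the candidate pool, so 4 names take exactly 2 questions, i.e. 2 bits.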
(There’s that archetypically American series of sounds - ‘dah dah d dah dah - dah dah!’ I once heard the derivation was from GI’s pulling up outside hairdressers in the Phillipines after they had beaten the Japanese and blowing the sequence on their Jeep horn - ‘shave and a haircut - TWO BITS’. Although that, of course, is analog.)
I don't see how you can describe that as a matter of reducing specificity for an increase in generality. It's the very opposite of that. Reducing the number of possible characters to two, clearly reduces possibility, through an act of increased specificity. This is a reduction in generality. What it does is allow for a simpler coding and decoding process, with higher certainty, but at the cost of greatly reducing the possibilities for the information which can be transmitted.
Quoting Gnomon
Yes, this is exactly the evidence that what is called "information" in "information theory" is one step removed from what we actually call "information" in common language use. Thinking that this is an accurate representation of "information" is the problem of representation, or narrative, which Plato warned us about. We have three layers: the real natural thing, the artificial thing which goes by the same name but is just a shallow reflection of the thing, and then our understanding of the thing. If we look at the artificial thing, the shallow reflection, as if it is the real thing, we have a misunderstanding.
The problem is widespread in our society. We look at the wavefunction for example as if it is the real thing, rather than a simple representation. It becomes extremely evident in issues of morality. We judge people according to their actions, but the actions are just a reflection of the person's psyche. To study human actions therefore, does not give us what is required to understand morality, we must study the human psyche directly.
Quoting TheMadFool
No I don't think that would be correct. If the correct answer might be 1&2&3&4, you cannot represent it properly as 1 or 2 or 3 or 4. You need to have reason to know that it is either/or, which you don't give.
Quoting TheMadFool
The problem with this approach is that you need to know that the correct answer is within the list of options, which will only occur if you already know the correct answer. So such a request for information has an extremely limited applicability, like a multiple choice exam.
I'm working under the assumption that only one alternative will be correct, and Shannon's logic works perfectly well in that case. As for the possibility of 1&2&3&4, the question or uncertainty would need to be reframed like so: 1&2&3&4 OR 5 OR 7&8 OR... As you can see, the uncertainty of the answer to any question can be re-expressed as a disjunction.
Quoting Metaphysician Undercover
Yes but what's wrong with that? Shannon's theory is about messages and their information content - how the message containing the information brings our uncertainty regarding the answer to a question [a request for information] to zero; another way of saying that is eliminating alternative answers to the question until we're left with only one - the correct answer.
As for your claim that "...such a request for information has an extremely limited applicability...", think of how the very foundation of the modern world - science - operates. A specific set of observations is made, and not one but multiple hypotheses are formulated as an explanation. One of the stages in the scientific method is the question, "which hypothesis is true/best?" Hypothesis 1 OR Hypothesis 2 OR...Hypothesis N? Shannon's uncertainty shows up again, in a vital area of humanity's experience with information. Here too, we must eliminate alternatives until we arrive at the best hypothesis.
In fact, this very discussion that we're having is based on the uncertainty of whether what I'm saying is correct or not (1/0)? I feel it makes sense but you think otherwise precisely because we're unable to get our hands on the information that would settle the matter once and for all.
Unfortunately, that's his real name. And he is fringey, in the sense of revolutionary. I have read a Kindle copy of his book, Quantum Evolution, because it seemed to have some parallels to my own edgey Enformationism thesis of how evolution works. He concluded that there seemed to be a "force of will" behind biological evolution. And I have concluded that the Generic Form of Information -- that I call EnFormAction -- is poetically analogous to the Will-of-God in religious myths of creation. So, I find his combination of Quantum Theory and Biology to be interesting -- and provocative, if not provable. But of course, it doesn't fit neatly into the dominant scientific worldview of Materialism.
McFadden's new theory of Electromagnetic Consciousness may also parallel some of my ideas of how Consciousness emerges from a biological brain. He "posits that consciousness is in fact the brain’s energy field". But I would go a step farther, to posit that Consciousness is an emergent quality of universal Information : a MindField, if you will. Physical Energy is merely a causal form of Generic Information. And the human Mind is a metaphysical effect, a Function, of information processing in the brain. By that, I mean Raw Energy is first transformed into active Life, and then into sensing Mind, and ultimately into knowing Consciousness. :smile:
Quantum Evolution presents a revolutionary new scientific theory by asking: is there a force of will behind evolution?
https://www.amazon.com/Quantum-Evolution-Multiverse-Johnjoe-McFadden/dp/0006551289/ref=sr_1_1?dchild=1&keywords=quantum+evolution&link_code=qs&qid=1609092555&sourceid=Mozilla-search&sr=8-1&tag=mozilla-20
Generic Information : Information is Generic in the sense of generating all forms from a formless pool of possibility -- the Platonic Forms.
http://bothandblog2.enformationism.info/page29.html
Note -- this use of "Generic" is not based on the common dictionary definition, but on the root meaning : "to generate novelty" or "to produce offspring".
Quoting Wayfarer
The profundity of Information Theory is only partly due to its opening the door to the Information Age. But we have, since Shannon's re-definition of Mind Stuff, begun to go far beyond mere artificial computer brains, to glimpse an answer to the "hard question" of natural Consciousness. Shannon's narrow definition of "Information" is blossoming into a whole new worldview. :wink:
We live in the information age, which according to Wikipedia is a period in human history characterized by the shift from industrial production to one based on information and computerization. . . . So it is not entirely crazy to speculate about what might lie beyond the information age.
https://www.wired.com/insights/2014/06/beyond-information-age/
Sorry for the confusion. As an amateur philosopher, I'm in over my head. But, if you have any interest in a deeper discussion of what I'm talking about, I can direct you to several books by physicist Paul Davies, and associates, who are exploring the concept of Information far beyond Shannon's novel use of the old word for personal-Knowledge-encoded-in-a-physical-brain to a new application of abstract-Values-encoded-in-the-meaningless-mathematics-of-Probability. :brow:
Paul Davies : https://www.amazon.com/s?k=paul+davies&link_code=qs&sourceid=Mozilla-search&tag=mozilla-20
Quoting Metaphysician Undercover
Apparently, I haven't clearly conveyed that my intention is to understand "the real natural thing" instead of "the artificial thing which goes by the same name". Don't worry about the "specificity" and "generality" of information. That's a tricky technical distinction for information specialists to gnaw on. For the rest of us, the important distinction is between statistical Probability and meaningful Aboutness. :cool:
Information :
[i]* Claude Shannon quantified Information not as useful ideas, but as a mathematical ratio between meaningful order (1) and meaningless disorder (0); between knowledge (1) and ignorance (0). So, that meaningful mind-stuff exists in the limbo-land of statistics, producing effects on reality while having no sensory physical properties. We know it exists ideally, only by detecting its effects in the real world.
* For humans, Information has the semantic quality of aboutness, which we interpret as meaning. In computer science though, Information is treated as meaningless, which makes its mathematical value more certain. It becomes meaningful only when a sentient Self interprets it as such.
* When spelled with an “I”, Information is a noun, referring to data & things. When spelled with an “E”, Enformation is a verb, referring to energy and processes.[/i]
http://blog-glossary.enformationism.info/page11.html
(The Hunting of the Snark)
Context approximately equals language game, approximately equals background knowledge/prior agreement/protocol/ etc etc.
Shannon's context is information transfer: I post, you read. We are used to a faithfully exact transfer: what you read is exactly what I wrote, complete with typos. In this context, we are used to the elimination of noise. And this is done by the application of Shannon's theory.
But if one downloads a large app, one generally 'verifies' it, because there is always some noise and thus the possibility of a 'wrong' bit. Verification uses redundancy to (hopefully) eliminate noise. And the Bellman does the same thing. Saying everything three times is very redundant, but reduces 'noise' if one compares the three versions and takes the majority vote. A checksum does some of the job at much less informational cost, in the sense that if the checksum matches, the probability of error is vanishingly small; but if it fails to match, there is no way to correct the error, and one must begin again.
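The Bellman's triple-repetition scheme, and the cheaper-but-weaker checksum, can be sketched in a few lines of Python. This is an illustrative toy, not a real error-correcting code; the function names are my own:

```python
from collections import Counter

def send_three_times(bits, noise_positions=()):
    """Transmit a bit string three times; flip the given (copy, index)
    pairs to simulate channel noise."""
    copies = [list(bits) for _ in range(3)]
    for copy_idx, bit_idx in noise_positions:
        copies[copy_idx][bit_idx] = '1' if copies[copy_idx][bit_idx] == '0' else '0'
    return copies

def majority_decode(copies):
    """Recover each bit by majority vote across the three noisy copies."""
    return ''.join(Counter(col).most_common(1)[0][0] for col in zip(*copies))

def checksum(bits):
    """A toy checksum: detects many errors cheaply, but cannot correct any."""
    return sum(map(int, bits)) % 256

message = '1011001'
noisy = send_three_times(message, noise_positions=[(1, 3)])  # one flipped bit in copy 1
assert majority_decode(noisy) == message            # redundancy corrects the error
assert checksum('1011001') != checksum('1011011')   # the checksum detects a flip, but can't fix it
```

Saying everything three times costs 3x the bandwidth but corrects isolated errors; the checksum costs a few bits but only tells you to "begin again".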
Good writing somewhat tends to follow the Bellman's method; the introduction sets out what is to be said, the body of the piece says it, and the conclusion says what has been said. Redundancy is somewhat misnamed, because it helps reduce misunderstanding.
So, now imagine, as an analogy to the rain in the Sahara, an array of 100 * 100 pixels - black or white, 0 or 1.
There are 2^10,000 possible pictures. That is a large number. But now consider the Saharan version, where we know that almost all the pixels are black, or 0. Obviously, one does not bother to specify all the dry days in the Sahara, one gives the date of the exceptional wet day, and says, "else dry".
In the same way, any regularity that might predominate, stripes, chequerboards or whatever, can be specified and then any deviations noted. This is called compression, and is the elimination of redundancy. The elimination of redundancy is equivalent to the elimination of order. Maximum compression results in a message that is maximally disordered and thus looks exactly like noise.
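The "else dry" trick above is the essence of sparse compression: record only the exceptions to the dominant regularity. A minimal Python sketch (names and encoding are my own illustration, assuming binary pixels):

```python
def compress_sparse(pixels, default=0):
    """Encode a mostly-uniform binary pixel list as
    (length, default value, positions of the exceptions)."""
    exceptions = [i for i, p in enumerate(pixels) if p != default]
    return (len(pixels), default, exceptions)

def decompress_sparse(encoded):
    """Rebuild the full pixel list from the compact description."""
    length, default, exceptions = encoded
    pixels = [default] * length
    for i in exceptions:
        pixels[i] = 1 - default   # binary pixels: the exception is the opposite value
    return pixels

# 10,000 'dry' pixels with one 'wet' exception: the whole picture
# collapses to a single position plus "else dry".
sahara = [0] * 10_000
sahara[4217] = 1                       # the one rainy day
encoded = compress_sparse(sahara)
assert decompress_sparse(encoded) == sahara
assert encoded[2] == [4217]            # only the exception needs to be stored
```

The more regular the picture, the shorter the exception list; a picture with no exploitable regularity cannot be shortened at all, which is why maximally compressed data looks like noise.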
This then explains the rather counter-intuitive finding that disordered systems contain more information than ordered systems and thus that entropy reduces available energy but increases information. One proceeds from the simple 'sun hot, everything else cold', to a much more complex, but essentially dull 'lukewarm heat death of everything'.
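This counter-intuitive finding can be made concrete with Shannon's entropy formula, H = -Σ p·log2(p), which measures information in bits per symbol. A brief Python sketch (my own illustration, not from the post):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probs if p > 0)

# 'Sun hot, everything else cold': highly ordered, nearly certain, low entropy.
ordered = [0.99, 0.01]
# 'Lukewarm heat death of everything': maximally disordered, maximum entropy.
disordered = [0.5, 0.5]

assert entropy(disordered) == 1.0   # the uniform case maxes out at log2(2) = 1 bit
assert entropy(ordered) < 0.1       # a near-certain outcome carries little information
```

The uniform (disordered) distribution requires the most bits to describe, which is the sense in which disordered systems "contain more information" than ordered ones.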
The problem is, that you must be sure that the correct answer is amongst the options, or else the question is not necessarily a valid representation.
Quoting TheMadFool
If you already know that the correct answer is amongst the options, then the uncertainty is not a real uncertainty. Therefore this system is based on a false representation of uncertainty.
Quoting TheMadFool
This is completely different, because there is no guarantee that one of the hypotheses is the correct one. So concluding that one is "better" than the others does not guarantee that it is correct. And when we make such a judgement according to what is better, then it is implied that there is a specific purpose for the sake of which it is better. Therefore we forfeit "true" for "better". So now you've corrupted your principles from being a question of what is correct, 1 or 2 or 3 or 4, implying that one of them must be correct, to a question of which is best, 1 or 2 or 3 or 4, implying that there is no necessity of one being correct, but one might be better than the others, depending on what the hypothesis would be used for.
Quoting Gnomon
No, no, I recognize that you're beyond this, and I was agreeing with you on this point, offering Plato as an example to support what you were saying. What is "information" to a machine is completely different from what is "information" to a human being, because, as you say, the machine information still must be translated to human language in order to really make sense. So there is an extra level of separation. I think that what happens is that at each distinct level there is an inversion of importance, from the particular to the general, and then back again when you cross the next level.
Right - in my view, it does a lot, but it doesn't account for Life, the Universe, and Everything.
Quoting Gnomon
Transformed by what, and how?
In Spinoza's philosophy, which I'll take to be paradigmatic for philosophy generally in this case, the only real substance ('substance' being nearer in meaning to 'subject' or to 'being' than the current conception of 'substance') is self-caused, it exists in itself and through itself. In other words, it is not derived from anything, whereas everything else is derived from that. (This is Spinoza's doctrine of God as nature.)
Quite what the 'uncaused cause' is, is forever a mystery - even to Himself, according to tradition. So, how do we know that? In a way, we can never know - except for revelation, which I understand you've already rejected (//or by divine illumination, or mystical union//). So, instead, take a representative sample of philosophies which point to 'that', and say that 'that' must be conceived in terms of 'information'.
I've been perusing some articles on Shannon and found some reviews of, and excerpts from, James Gleick's book The Information (an excerpt here), which seems like one of the ur-texts for this set of ideas.
I note one of the reviewers says of his book:
:up:
Yeah! That's the ticket : "Inversion" -- a mental flip of the coin. When I said that Shannon's Information substituted "generality" for "specificity", I was referring to the meaning of communication. Shannon's technique was to exchange the specific intended meaning of Words for enigmatic numerical Bytes. Digital information is conveyed in the abstract language of binary numbers that have the potential to encode any meaning. It's a sort of universal language. But Mathematics is divorced from concrete Reality, in that it is universal instead of specific. That's why String Theory makes sense to mathematicians, and not to laymen, but cannot be empirically tested in the real world.
Therefore, in order to be meaningful to non-computers, that general (one size fits all) language must be translated (inverted) back into a single human language with a narrowly-defined (specified) range of meanings for each word. In its encoded form, the message is scrambled into apparently random noise, that could mean anything (1) or nothing (0). Ironically though, even chaotic randomness contains some orderly potential. And Shannon found the key to unlock that hidden Meaning in Boolean Algebra, which boils Significance down to its essence : 1 = True (meaningful) or 0 = False (meaningless).
So, as you said, Shannon "inverted the importance" of Meaning in order to compress it down to its minimum volume. But the communication is not complete until it is converted back into verbose, messy, often ambiguous, human language. Individually, the ones & zeros mean nothing more complex than the simple dichotomy of Either/Or. And that is also the ultimate goal of objective reductive physical Science. But subjective holistic metaphysical Philosophy's goal is to restore the personal meaning of knowledge.
Shannon's reductive method : By focusing relentlessly on the essential feature of a problem while ignoring all other aspects.
https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/
Physics & Metaphysics :
Two sides of the same coin we call Reality. When we look for matters of fact, we see physics. But when we search for meaning, we find meta-physics. A mental flip is required to view the other side. And imagination is necessary to see both at the same time.
http://blog-glossary.enformationism.info/page14.html
Universal Language : https://www.bhp.com/sustainability/community/community-news/2017/07/learning-the-universal-language/
The world-creating Potential of the Big Bang Singularity was transformed (enformed) into Life, the Universe, and Everything by the power of EnFormAction. This is a novel notion, perhaps even radical. But it is being studied by serious scientists -- some of whom even entertain the taboo concept of Deity, or Panpsychism. I have simply translated that unconventional interpretation of Generic Information into a new myth of creation, that I call Enformationism. This is based on Einstein's theory of E = MC^2, and the current understanding of physicists that Information transforms into Energy, which transforms into Matter, and vice versa. See the Hypothesis below for the "how". :nerd:
The mass-energy-information equivalence principle :
https://aip.scitation.org/doi/10.1063/1.5123794
Generic Information : Information is Generic in the sense of generating all forms from a formless pool of possibility -- the Platonic Forms.
https://enformationism.info/phpBB3/viewtopic.php?f=3&p=837#p837
EnFormAction :
Ententional Causation. A proposed metaphysical law of the universe that causes random interactions between forces and particles to produce novel & stable arrangements of matter & energy. It’s the creative force of the axiomatic eternal deity that, for unknown reasons, programmed a Singularity to suddenly burst into our reality from an infinite source of possibility. AKA : The creative power of Evolution; the power to enform; Logos; Change.
http://blog-glossary.enformationism.info/page8.html
The EnFormAction Hypothesis : http://bothandblog3.enformationism.info/page23.html
Yes. My Enformationism thesis can be viewed as an update of Spinoza's worldview, in light of Quantum Physics, bottom-up Evolution, and Information Theory. :smile:
Spinoza's Universal Substance : Like Energy, Information is the universal active agent of the cosmos. Like Spinoza's God, Information appears to be the single substance of the whole World.
http://bothandblog2.enformationism.info/page29.html
But do they? Or, do you really believe this? What I've been trying to tell you, is that the binary system is really a great restriction to meaning. It is a restriction because any meaning which cannot be expressed in binary therefore cannot be expressed. Consider all the instances where the law of excluded middle does not apply, how could the meaning here be expressed in binary?
Quoting Gnomon
The problem is not translating binary to natural language, but translating natural language to binary. We have in natural language the law of non-contradiction which is very suitable for binary. But by this same principle, binary is much more restrictive, much more specific, than the natural languages which are more general. So natural language consists of generalities, ambiguities, which cannot be captured in the binary. These generalities allow the natural languages wider ranging applicability.
For example, suppose that to fit into binary, the meaning must match the digits, 0, and 1. Anything which does not fit precisely into the designated meaning of 0 or 1 just gets rounded off so that it does fit one or the other. You'd think that this is just making the language more precise. But ask yourself what happens to all that meaning which gets rounded off? It gets lost, simply discarded, as if it's meaning which is not meaningful. So, in weeding out, discarding, all that meaning which falls in between, as neither 0 nor 1 (which would violate the law of excluded middle if it were allowed to remain in between), we end up violating the law of non-contradiction by having meaning which is not meaningful, and therefore discarded as such.
It's not just me. See the link to Universal Language in the previous post. I'm making a broad general statement, that you may be interpreting in a narrow sense. I'm merely repeating the opinions of serious scientists -- Wheeler, Tegmark, Fredkin, Lloyd, etc -- that the physical reality of our universe may be viewed as our sensory interpretation of abstract mathematical Information --- see Interface Reality below.
Of course, this is not a mainstream view, but I'm using it for personal philosophical purposes, not an academic technical thesis. These mathematical-minded scientists are implying that we are living in the Matrix, running a digital program. I don't take that metaphor too literally, but as a metaphor, it fits neatly into my Enformationism worldview. So, yes, I believe it --- provisionally. :joke:
Digital Physics : In physics and cosmology, digital physics is a collection of theoretical perspectives based on the premise that the universe is describable by information. It is a form of digital ontology about the physical reality. According to this theory, the universe can be conceived of as either the output of a deterministic or probabilistic computer program, a vast, digital computation device, or a mathematical Isomorphism to such a device.
https://en.wikipedia.org/wiki/Digital_physics
Interface Reality : http://bothandblog6.enformationism.info/page21.html
If that is your view, and belief, how do you account for all that meaning which is excluded as not meaningful, by that position, as I explained above? Do you believe that it is acceptable to exclude any meaning which cannot fit into the digital representation, as not meaningful? Isn't that contradictory?
This ambiguity of the word information needs to be emphasized in trying to grasp what Shannon's theory says and what it does not say. Shannon was talking about transmission of data over a neutral but imperfect channel, not about what that data means to a sender or to a receiver.
Think of a semaphore that sends a signal between two mountain tops. The issue is how much of the signal content is, in theory, mislaid over years of use.
In a more complicated case perhaps the channel or method of transmission is not neutral. In the verbal transmission of rumors some content is lost, embellished, and added as the content is passed from person to person. Here, content is not the letters, words, or sentences but a human intelligible meaning with both cognitive and emotional elements.
Philosophically, a mathematical model is an ultra-materialist quasi-concrete representation of the world. One's happiness with the model reflects one's philosophical world dispositions. Inseparably, precision of representation is accompanied by proportional loss of global understanding of the real world.
This can be said about any experience - visual, auditory, olfactory, gustatory or tactile. We use past experience, knowledge, and rules, to eliminate the uncertainty of what we experience.
Interpreting words and behaviors entails discovering the rules (beliefs) that the sender used to encode the message. Only by discovering the rule (belief) can you then decode the message.
Noise is information that isn't being attended to, or not applicable to the present goal in working memory.
When listening to someone across a crowded and noisy room, you are focusing your attention on one particular voice. All the other voices are noise because your attention isn't focused on that information. But switch your attention to another voice, and that voice becomes information. It's not that the noise wasn't information. It is. The difference arises from attending to bits of information vs. not attending to them. So, the distinction between noise and information is epistemological.
Right, that's why I said what "information" refers to in information theory is something completely different from what "information" refers to in much common usage. So for example, if we distinguish between symbols and what the symbols represent (meaning), in information theory the symbols are called information, but in common usage information usually refers to what is represented by the symbols, the meaning.
Quoting magritte
When we start to consider content now we need to be wary of a potentially similar distinction with respect to "content". In information theory, symbols are transmitted, and we could call this the content. But in speaking about natural language, the content is the meaning.
Now there is an issue of whether any content (meaning) is actually transmitted in natural language use. It may be the case that only symbols are transmitted between us, and all the content (meaning), is created in the minds which transmit and receive the symbols. If this is the case, then information in the common sense of the word, as meaning, is not transmitted in natural language use. This would imply that we need to look for some process other than language use, to understand how information (as meaning) is shared by human beings.
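This point, that only symbols travel while meaning is supplied at each end, can be illustrated concretely: the very same bit string yields different "content" depending on the decoding convention the receiver applies. A toy Python example (the two conventions are chosen arbitrarily for illustration):

```python
bits = '0100100101001000'  # 16 bits: exactly what the channel carries, nothing more

# Convention A: the receiver reads the bits as two 8-bit ASCII characters.
as_text = ''.join(chr(int(bits[i:i + 8], 2)) for i in range(0, 16, 8))

# Convention B: the receiver reads the same bits as one unsigned 16-bit integer.
as_number = int(bits, 2)

assert as_text == 'IH'       # one convention yields letters
assert as_number == 18760    # the other yields a number, from identical symbols
```

Nothing in the transmitted symbols themselves settles which reading is "the" content; that is fixed by the prior agreement between sender and receiver.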
Harry Hindu is speaking of this as a matter of following rules, but I don't see any evidence of any such rules. And the idea of "rules" does not deliver us from the ambiguity. We generally understand "rules" to exist as an expression of symbols. But these rules would need to be interpreted for meaning. So we'd be stuck in a vicious circle here, of requiring rules to interpret rules.
Quoting Harry Hindu
I really do not believe that there are any such rules, just habits, so I think we're on a different page here Harry.
I assume that by "excluded", you are referring to "discarding, all that meaning which falls in between, as neither 0 nor 1". But that's not how I understand the digital compression process. Instead, it's similar to Quantum Superposition, in that all values between 0 and 1 are possible, but not actual, until the superposition is "collapsed" by a measurement. The original Intention is still in-there, but un-knowable until the meaning is "measured" by a mind that "resonates" with the intent. In other words, the receiver must already know something about the significance of the communication.
I'm not into all the technical details, but some Information theorists view the secret to compression as, not either/or, but as all-of-the-above. However, exactly what triggers the decompression is just as unclear as in Quantum Theory. It seems to have something to do with a Conscious Mind extracting Information as a Measurement of Meaning. That notion fits into my Enformationism thesis, even though I can't spell-out the exact mechanics of it. I simply liken it to a physical Phase Change, such as water to ice.
Besides there is no actual Meaning transmitted in a Shannon communication --- only abstract mathematical symbols, that can be used to define conventional relationships, which the receiving mind interprets as Meaning. Anything deeper than that vague summary is way over my pointy head. :cool:
Superposition of meaning : Shannon's theory of information was built on the assumption that the information carriers were classical systems. Its quantum counterpart, quantum Shannon theory, explores the new possibilities arising when the information carriers are quantum systems.
https://royalsocietypublishing.org/doi/10.1098/rspa.2018.0903
Phase Transition : Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables
https://en.wikipedia.org/wiki/Phase_transition
Note : I interpret "non-analytic" to mean that nobody knows what the intermediate steps are, between before & after the change. It's like magic. :joke:
Meaning Communication : In the philosophy of language, metaphysics, and metasemantics, meaning "is a relationship between two sorts of things: signs and the kinds of things they intend, express, or signify"
https://en.wikipedia.org/wiki/Meaning_(philosophy)
Note : it takes two to tango : sender & receiver must have something in common -- they must be on the same wavelength.
Perhaps he is referring to the rules of Syntax, which are conventional, and the rules of Semantics, which are mostly intuitive. :smile:
How can you call them symbols if they don't already represent something? Meaning is inherent in symbols. Effects are symbolic of their causes.
Quoting Metaphysician Undercover
Maybe "rule" isn't the most appropriate term. Does natural selection "select rules" by which some organism interprets the information it receives via its senses? Is "selecting rules" an adequate phrase to refer to how certain characteristics are favored by natural selection for the organism to be more in tune with their environment? What is selected is better interpretations of sensory information. These ways of interpreting sensory information are what become instincts, or habits.
Quoting Metaphysician Undercover
Habits are memorized rules, or rules that have been engrained in the genetic code thanks to natural selection.
The point remains the same, even if you express it in this way. All that meaning between 1 and 0 cannot be expressed in the digital system.
Quoting Gnomon
Right, that's why all that meaning (information) ends up being contradictory and "un-knowable".
Quoting Gnomon
That's why the Shannon use of "information" is distinct from most common usage.
Quoting Gnomon
The point being that I don't see any evidence of rules of semantics, and the rules of syntax need to be interpreted.
Quoting Harry Hindu
This is not my preferred terminology, to say that there is a "symbol" which does not represent anything. But that's what they do in examples of logic, they separate the symbols from all meaning, to demonstrate a procedure which uses symbols and the symbols used don't represent any thing. I agree that this is contradictory, and I don't really believe that these things ought to even be called symbols.
Quoting Harry Hindu
Right, I'd prefer to call these actions habits rather than instances of following rules.
Quoting Harry Hindu
I see no reason to believe that habits are memorized rules. If an habitual action is dependent on certain nervous system activity, why would you characterize this nervous system activity as memorized rules? I think we need to reverse this outlook. Memorizing rules could form a particular type of habit, as we find in mathematics, but not all habits are instances of memorizing rules.
Yes, but the digital system is just one facet of the whole system -- the Universe. Our world is a two-sided coin. You can't see both sides at the same time. But you can choose which side to look at. In the communication of Information, Shannon chose not to look at the intentional Meaning of its contents, but to focus on the Container, which is neutral toward Meaning. The point being, that the invisible side of the cosmic coin is still there, like the dark side of the moon. See image below. :smile:
Quoting Metaphysician Undercover
Quantum information that is in superposition is indeed "un-knowable" until a measurement is taken. The measurement is a Choice of what to look at. Quantum theorists have argued about the significance of a Delayed Choice experiment. But don't ask me to make sense of it in this context --- it's just an analogy. Superposition may be confusing, but not necessarily contradictory. :grin:
Delayed Choice : https://en.wikipedia.org/wiki/Quantum_eraser_experiment
Quoting Metaphysician Undercover
Yes. But it's the Distinction-that-made-a-Difference in causing a Phase Change in history from the Industrial Age to the Information Age. By changing how we think of Information, he was able to gain power over it. For example, the Bit is a distinction -- a difference (1) that makes a difference (2). The first difference is physical (an empirical observation), and the latter difference is personal -- meaning (a theory or feeling). That's why some people feel that Shannon's indirect creations (Robots) are like Frankenstein's soulless monsters.
Information Age : This surprising result is a cornerstone of the modern digital information age, where the bit reigns supreme as the universal currency of information.
https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/
Quoting Metaphysician Undercover
The rules of Syntax (structure) are partly objective, and can be applied to any language or culture. But the "rules" of Semantics (meaning) are partly subjective & personal, yet may also be embedded in Jung's Collective Unconscious, or in Freud's Unconscious, or Chomsky's Deep Structure. Don't take those metaphors literally. They merely indicate that part of what-we-know-intuitively, and the rules-of-behavior we follow, are inherited with the human body. Hence, such standards, while important, are not inherently formal or rational. :nerd:
Rules of Semantics : Semantic rules make communication possible. They are rules that people have agreed on to give meaning to certain symbols and words. Semantic misunderstandings arise when people give different meanings to the same words or phrases
http://www.ai.mit.edu/projects/dm/theses/jackendoff69.pdf
Both Sides Now
As I said, I don't believe there is such a thing as the rules of semantics. You can keep talking as if you believe that there is, but that won't change my mind. You need to show me some evidence of the reality of what you are saying, justify it. The appeal to authority is insufficient until you bring out the evidence presented by those authorities.
Quoting Gnomon
I don't see that people agree on rules before communicating with each other. Don't you see that agreement requires communication? So this proposition seems to be really impossible, and at best a vicious circle.
Since I am not an authority on the subject of Semantics and Syntax, I was referring you to some authorities that do see evidence of commonalities, if not formal "rules", in human communication. If you are really interested in the evidence, you can click on the links. But, it seems that you have something against the idea of natural logical structure in communication. And I'm not quite sure what that objection is. :smile:
Quoting Metaphysician Undercover
Well, except for some picky-picky philosophers, most people don't have to establish formal rules before they communicate. Instead, most of us learn the rules informally at our mother's knee, and just by growing up in a particular culture, or may even inherit some mental structure biologically. That's what I referred to as "Intuition".
Are you arguing against Chomsky's theory of innate language structure? I generally agree with the notion, but I've never studied the theory in detail. So he may have over-stated his case. But what does that have to do with Shannon's theory of Parsimonious Transmission of Information? :grin:
Born Ready for Language : Chomsky based his theory on the idea that all languages contain similar structures and rules (a universal grammar), and the fact that children everywhere acquire language the same way, and without much effort, seems to indicate that we're born wired with the basics already present in our brains.
https://www.healthline.com/health/childrens-health/chomsky-theory
For and against Chomsky : https://www.prospectmagazine.co.uk/magazine/forandagainstchomsky
I have nothing against "natural logical structure in communication". But we cannot conclude that natural logical structure implies rules, just because artificial, or formal logic consists of rules. In fact, that's what I see as the difference between formal logic, and natural logic, the former consists of rules, the latter does not.
Quoting Gnomon
Then, very clearly, your proposal that people must agree on rules in order for communication to be possible, is false. That's the point, agreeing on rules is not necessary for communication, so why assume that rules are essential to language?
Quoting Gnomon
I really do not see how you can portray learning how to talk as a matter of learning rules. Have you ever seen children learn to talk? If so, what part of it looks like an instance of learning rules to you? Furthermore, this learning how to talk cannot be a matter of following rules which one already knows (innate grammar), otherwise one would not need to learn how to talk, already knowing the rules which make talking possible. It seems very clear from the empirical evidence, that talking is not a matter of following rules. So this type of theory appears to be inconsistent with reality.
I think I'm beginning to see your objection to the notion of "rules" in communication. Apparently you are thinking of imposed "explicit" formal rules, while I'm talking about innate "implicit" informal commonalities. As a rule (i.e. normally) humans are born with something like a mental template for language.
My position on inherent human behaviors (instincts) is basically that of cognitive psychologist Steven Pinker in The Blank Slate. He calls it "the language instinct", which gives humans an advantage, over most animals, in social communication. Anyway, I doubt that our concepts of communication are very far apart. It's just another failure to "first define your terms". :joke:
Rules :
[i]1. one of a set of explicit or understood regulations or principles governing conduct within a particular activity or sphere.
1a : a prescribed guide for conduct or action. b : the laws or regulations prescribed by the founder of a religious order for observance by its members. c : an accepted procedure, custom, or habit.[/i]
As a Rule : usually, but not always.
The Blank Slate : arguing that human behavior is substantially shaped by evolutionary psychological adaptations
https://en.wikipedia.org/wiki/The_Blank_Slate
The Language Instinct : Pinker argues that humans are born with an innate capacity for language. . . . . but dissents from Chomsky's skepticism that evolutionary theory can explain the human language instinct.
https://en.wikipedia.org/wiki/The_Language_Instinct
Quoting Metaphysician Undercover
That is not what I was proposing. Sorry for the mis-communication. :smile:
Quoting Metaphysician Undercover
OK. I'll try to avoid using the term "rules", since it seems to trigger your indignation. Instead, I'll use something like "norm". The human language instinct is not a "law of nature" or a "man-made rule", but it is common enough to view it as "the rule rather than the exception". :cool:
Rule : If something is the rule, it is the normal state of affairs.
Language structure : You're born with it
https://www.sciencedaily.com/releases/2014/04/140408122316.htm
Isn't a "rule" necessarily formal though? That's the point, to talk about Innate, informal commonalities, as if they are rules, appears like a mistake to me.
Quoting Gnomon
That might be the case, if we both see this "instinct" as an unknown concerning its true nature, then we have commonality here.
Quoting Gnomon
The problem now, is that with the switch from "rule" to "norm" we jump from the cause of the behaviour (following a rule, instinct, or whatever the cause is), to a description of the behaviour. Then all we are saying is that it is common, or normal for people to act in a particular way, but we have no approach to the cause of that commonality. If we say that the person is following a rule, we create the illusion that we know why the person is acting in that particular way. That is why I objected to that use of "rule".
Apparently, in your strict vocabulary of technical terms, that might be the case. Since I'm not a professional scientist, I tend to use such jargon more loosely. Besides, in psychology, formal "rules" or "laws" are hard to come by. Most behaviors that psychologists take-for-granted are more like rules-of-thumb than empirically-confirmed-natural-laws. That's why The Diagnostic and Statistical Manual of Mental Disorders has to be regularly updated to weed-out definitions of disorders that turn-out to be too broad or too narrow or just plain wrong. :smile:
7 Psychological Rules That Can Make Your Life Shine Brighter :
https://brightside.me/inspiration-psychology/7-psychological-rules-that-can-make-your-life-shine-brighter-533910/
Quoting Metaphysician Undercover
The "language instinct" is a well-known effect, but its cause is a matter of debate. Stephen Pinker says that "A three-year-old toddler is "a grammatical genius"--master of most constructions, obeying adult rules of language." And he attributes those "rules" to a combination of Nature and Nurture. But he provides lots of observational evidence, so the mechanism behind the human talent for language is not exactly unknown. Some may claim it's a miracle, but Pinker thinks it's a Darwinian adaptation. :smile:
The Language Instinct : To Pinker, a Massachusetts Institute of Technology psycholinguist, the explanation for this miracle is that language is an instinct, an evolutionary adaptation that is partly "hard-wired" into the brain and partly learned.
https://www.amazon.com/Language-Instinct-Creates-Perennial-Classics-ebook/dp/B0049B1VOU
Quoting Metaphysician Undercover
All I can say to that is, Pinker is the reigning expert on psycholinguistics, and he thinks he knows why humans act like they have a special talent for language, that other animals don't. But his theory is based on evolutionary assumptions, that some other linguists, and theologians, disagree with. Yet again, the science of Psychology is inherently Philosophical & Meta-Physical, hence not empirical, and will always be subject to debate. But Pinker's explanation is close-enough for me . . . for now. :cool:
Is psychology a “real” science? : Does it really matter?
https://blogs.scientificamerican.com/the-curious-wavefunction/is-psychology-a-e2809creale2809d-science-does-it-really-matter/
PS___Shannon's definition of passive carrier "Information" is on the reductive & empirical end of the Science spectrum. But my definition of active causal "EnFormAction" is more towards the holistic & philosophical end, along with Psychology and History. Does that lack of hard evidence invalidate the hypothesis that Enformation might be the driver of evolution --- including the Language Instinct? Maybe. What do you think?
Information/Enformation :
* Knowledge and the ability to know. Technically, it's the ratio of order to disorder, of positive to negative, of knowledge to ignorance. It's measured in degrees of uncertainty. Those ratios are also called "differences". So Gregory Bateson* defined Information as "the difference that makes a difference". The latter distinction refers to "value" or "meaning". Babbage called his prototype computer a "difference engine". Difference is the cause or agent of Change. In Physics it’s called "Thermodynamics" or "Energy". In Sociology it’s called "Conflict".
Don't you see a problem here? If psychologists are referring to "rules" which account for, or cause, certain types of behaviour, and there are really no rules there, then what are they actually talking about? They've taken this term, "rules", which has no real referent, and they use it to account for all sorts of behaviours. Since the thing referred to by the word is just a phantom, so also the understanding expressed is just a phantom.
Quoting Gnomon
See, even Pinker is assuming "rules", but this is just a phantom understanding; the word is used to refer to what is actually not understood, as a coverup, creating the illusion of an understanding. If we dismiss this term "rule", and look at the fact that a human being is a free willing and free thinking being, then we have a different perspective from which we can ask why a person is inclined to act in such a way as to create the appearance that one is following rules, when really there are no rules being followed.
Quoting Gnomon
I have difficulty with the "holistic" approach because in my mind it cannot adequately account for the appearance of intention and free choice.
I don't have a problem with that as-if usage of "rules". It's no worse than atheist scientists referring to observed regularities in nature (Laws) as-if they were imposed by a divine authority. When patterns in nature appear to be rule-governed or lawful, I attribute that predictable behavior of natural systems, not to top-down Design, but to bottom-up Programming. Human programs are intentionally teleological, but the final output is unknown until the computation is complete --- or the fat lady sings, whichever comes first. :wink:
Evolutionary Programming :
Special computer algorithms inspired by biological Natural Selection. It is similar to Genetic Programming in that it relies on internal competition between random alternative solutions to weed-out inferior results, and to pass-on superior answers to the next generation of algorithms. By means of such optimizing feedback loops, evolution is able to make progress toward the best possible solution – limited only by local restraints – to the original programmer’s goal or purpose. In Enformationism theory the Prime Programmer is portrayed as a creative deity, who uses bottom-up mechanisms, rather than top-down miracles, to produce a world with both freedom & determinism, order & meaning. https://en.wikipedia.org/wiki/Evolutionary_programming
http://blog-glossary.enformationism.info/page13.html
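The selection-plus-mutation loop described in that glossary entry can be sketched in a few lines of Python. This is a minimal toy illustration of my own (the genomes, fitness function, and parameters are all hypothetical, not taken from the glossary): selection keeps the fitter half of a population (the top-down criterion), while random mutation of the survivors supplies the bottom-up variation.

```python
import random

# Minimal evolutionary-programming sketch (hypothetical example):
# evolve a bit-string toward an all-ones target. "Fitness" counts the 1s.

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(pop_size=20, length=16, generations=100, seed=42):
    random.seed(seed)  # a fixed seed makes the whole run reproducible
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]                   # selection
        population = survivors + [mutate(g) for g in survivors]  # variation
    return max(population, key=fitness)

best = evolve()
```

Because the best genomes are carried over unmutated, fitness never decreases: the optimizing feedback loop makes steady progress toward the target even though no individual step is directed.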
Natural Laws : Laws of Nature are to be distinguished both from Scientific Laws and from Natural Laws. Neither Natural Laws, as invoked in legal or ethical theories, nor Scientific Laws, which some researchers consider to be scientists’ attempts to state or approximate the Laws of Nature, . . . Some of these implications involve accidental truths, false existentials, the correspondence theory of truth, and the concept of free will. Perhaps the most important implication of each theory is whether the universe is a cosmic coincidence or driven by specific, eternal laws of nature.
https://iep.utm.edu/lawofnat/
Quoting Metaphysician Undercover
My interpretation of evolution as bottom-up design is compatible with human Free Will. :yum:
Freewill Within Determinism : “Determinism is a long chain of cause & effect, with no missing links. Freewill is when one of those links is smart enough to absorb a cause and modify it before passing it along. In other words, a self-conscious link is a causal agent---a transformer, not just a dumb transmitter. And each intentional causation changes the course of deterministic history to some small degree.” ___Yehya
http://bothandblog2.enformationism.info/page68.html
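For what it's worth, Yehya's metaphor can be caricatured in code (a toy illustration of my own, nothing more): most links in the causal chain just transmit a cause unchanged, while one "self-conscious" link absorbs it and modifies it before passing it along.

```python
def transmitter(cause):
    # A dumb link: passes the cause along unchanged.
    return cause

def conscious_link(cause):
    # A "smart" link: a causal agent that transforms what it receives.
    return cause + " (modified by choice)"

def run_chain(initial_cause, links):
    # Cause and effect with no missing links.
    effect = initial_cause
    for link in links:
        effect = link(effect)
    return effect

# A chain of pure transmitters is pass-through determinism:
print(run_chain("push", [transmitter, transmitter, transmitter]))
# prints "push"
# One conscious link changes the course of events to some small degree:
print(run_chain("push", [transmitter, conscious_link, transmitter]))
# prints "push (modified by choice)"
```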
Quoting Metaphysician Undercover
Perhaps you are thinking of the New Age interpretation of "Holism". But my usage is that of the guy who literally wrote the book. It's only "mystical" in the sense that Einstein called "spooky action at a distance". :nerd:
Holism : Regarding the concept of Holism, he says it "has a somewhat mystical association, in its commitment to a single unified whole being the ultimate reality. But there are strong scientific arguments in its favour. . . . The American philosopher Jonathan Schaffer argues that the phenomenon of quantum entanglement is good evidence for holism. Entangled particles behave as a whole, even if they are separated by such large distances that it is impossible for any kind of signal to travel between them."
https://www.philosophybasics.com/branch_holism.html
Neural holism and free will : This approach locates free will in fully integrated behavior in which all of a person's beliefs and desires, implicitly represented in the brain, automatically contribute to an act.
https://www.tandfonline.com/doi/abs/10.1080/09515080307765
Holism and Evolution : although Smuts' meaning differs from the modern concept of holism.[3] Smuts defined holism as the "fundamental factor operative towards the creation of wholes in the universe."
https://en.wikipedia.org/wiki/Holism_and_Evolution
https://reflexus.org/wp-content/uploads/Smut-Holism-and-Evolution.pdf
I agree with your bottom-up interpretation of reality, in principle, and also I agree that it is compatible with free will.
Quoting Gnomon
I don't understand this part. Are you making three classifications: scientific laws, laws of nature, and also natural laws? As you can see, I would only have 2 classes: scientific laws, which are inductive descriptions, and supposed natural laws, which are moral conclusions about how we ought to behave. People justify ethical principles by referring to natural laws. In the case of "laws of nature", I think that some people want to justify scientific laws as true by claiming that they are representations of the laws of nature. But you can see, as I've argued, that I don't believe we're justified in even calling what is represented by these laws "rules" or "laws" or anything like that.
Quoting Gnomon
I do not see how you can make bottom-up mechanisms consistent with holism. If an individual agent has free will to act as one pleases, then on what basis is there a whole composed of numerous individuals? How can individual parts, acting freely, bottom-up, be said to comprise a whole?
That wasn't my classification, but a definition of "Law" as used in different contexts : Scientific Laws (observed regularities, with no inference of divine regulation), Laws of Nature (religious assertion of divine Lawgiver), and Natural Laws (a legal term, which doesn't take a stand either way on the provenance of the observed order in Nature). :cool:
Quoting Metaphysician Undercover
OK, "what is represented by these [so-called] laws"? Would you prefer to call them "accidental random patterns in Nature"? Einstein referred to them as "Reason", "order", "harmony", "structure", and "lawful", among other terms. :smile:
Einstein :
[i]. . . "the Reason that manifests itself in nature"
. . . "Scientific research is based on the idea that everything that takes place is determined by laws of nature, and therefore this holds for the actions of people."
. . . "Veneration for this force beyond anything that we can comprehend is my religion. To that extent I am, in point of fact, religious."
. . . "the marvelous structure of existence"
. . ."I believe in Spinoza's God, Who reveals Himself in the lawful harmony of the world, not in a God Who concerns Himself with the fate and the doings of mankind."[/i]
https://todayinsci.com/E/Einstein_Albert/EinsteinAlbert-Nature-Quotations.htm
Quoting Metaphysician Undercover
Again, you may be thinking of "Holism" in the New Age sense. Scientists prefer to use the term "Systems" in order to avoid any theological implications. If you think of Evolution as an ongoing Program of world-creation, then the final output is unknown (undetermined), even though the Programmer specified the parameters by which the Solution will be judged. Initial Conditions & Natural Laws are parameters, but the system uses statistical Randomness to instill novelty into the otherwise deterministic system. My essay on Intelligent Evolution is an attempt to introduce the notion of bottom-up creation of an unfathomably huge Uni-verse (one whole) from a minuscule mathematical Singularity. :nerd:
Freewill Within Determinism : http://bothandblog2.enformationism.info/page67.html
Intelligent Evolution : http://gnomon.enformationism.info/Essays/Intelligent%20Evolution%20Essay_Prego_120106.pdf
Systems Theory :
A system can be more than the sum of its parts if it expresses synergy or emergent behavior. Changing one part of the system usually affects other parts and the whole system, with predictable patterns of behavior. More parts means more interrelationships, and more complex properties & activities, including mental functions.
https://en.wikipedia.org/wiki/Systems_theory
http://blog-glossary.enformationism.info/page18.html
Holism ; Holon :
Philosophically, a whole system is a collection of parts (holons) that possesses properties not found in the parts. That something extra is an Emergent quality that was latent (unmanifest) in the parts. For example, when atoms of hydrogen & oxygen gases combine in a specific ratio, the molecule has properties of water, such as wetness, that are not found in the gases. A Holon is something that is simultaneously a whole and a part — A system of entangled things that has a function in a hierarchy of systems. In the Enformationism worldview, our space-time physical reality is a holon that is a component of the enfernal G*D-Mind.
http://blog-glossary.enformationism.info/page11.html#Holism
They are obviously not accidental or random. They are described by laws, so they are not random, but that does not imply that they are governed by laws. We are governed by laws, but we have freedom of choice to break the laws. The things which are described by the laws of science do not appear to have the freedom to break those laws. Therefore the actions of these things are inconsistent with being "governed by laws", as we know it. So it's clearly fallacious logic to proceed from the premise that natural things are describable by laws, to the conclusion that they are governed by laws.
Quoting Gnomon
As far as I can tell, you haven't defined "holism" yet so as to make it consistent with bottom-up creation. You have here a vague analogy between a computer program and bottom-up creation, but no description of how any sort of holism fits into this scenario.
Again, we butt heads over specific vs general terminology. In human societies, governors (kings, congressmen, parliamentarians) make the laws, and the governed people obey the laws. So, if you observe a pattern of obedience to a law, wouldn't you infer that the obeyers were somehow compelled to conform? The observed pattern of behavior can be described in terms of specific actions, or in terms of a governing principle : a Law.
The relevant distinction is between a specific pattern, and the general cause of that pattern. For example, if most cars wait patiently at a red light, is that a random coincidence, or would you infer that there is some governing Law that they are obeying? If you watch long enough, you may see a car that does not stop at a red light, and then is pulled-over by a law-enforcement officer.
Some scientists refer to Natural Laws as merely "habits". The implication is that the predictable regularities of natural behaviors are characteristic of individual actors, not of any general imperative imposed from above. Is this your position? That makes sense from a Reductive (part) viewpoint, but not from a Holistic (system) perspective. So again, our different understanding reflects a preference for looking at Isolated Parts or Whole Systems --- or for Bottom-up Inductive Reasoning or Top-down Deductive Logic. Both approaches are reasonable, but applicable to different contexts. No need to butt heads . . . just define terms and contexts. :smile:
Law is a system of rules created and enforced through social or governmental institutions to regulate behavior, with its precise definition a matter of longstanding debate. It has been variously described as a science and the art of justice.
https://en.wikipedia.org/wiki/Law
Principle : a general scientific theorem or law that has numerous special applications across a wide field.
Most of The So-Called Laws of Nature Are More Like Habits : There is no need to suppose that all the laws of nature sprang into being fully formed at the moment of the Big Bang, like a kind of cosmic Napoleonic code, or that they exist in a metaphysical realm beyond time and space.
https://www.sheldrake.org/research/most-of-the-so-called-laws-of-nature-are-more-like-habits
Inductive reasoning, or inductive logic, is a type of reasoning that involves drawing a general conclusion from a set of specific observations. Some people think of inductive reasoning as “bottom-up” logic, because it involves widening specific premises out into broader generalizations.
https://www.masterclass.com/articles/what-is-inductive-reasoning
Apparently, you haven't looked at the links. The connection between Holism and bottom-up creation is much too complex for a forum post. Instead, I have dozens of essays that look at different aspects of the question --- from the perspective of a top-down Whole, and a bottom-up Holon. You seem to think Top-Down and Bottom-Up are mutually exclusive. But I think it's a question of perspective, point-of-view, frame-of-reference.
The computer program example illustrates that the Programmer writes a top-down strategy for calculating the answer to a problem. But if the answer was already known or knowable, there would be no need to bother with laborious calculation. In my worldview, the Programmer had a question about FreeWill that could only be answered by actually allowing some degree of freedom. Even an omnipotent creator could not mandate moral behavior without permitting agents to choose.
So, I view Natural Evolution as a program of Freedom Within Determinism. Natural Laws place limits upon freedom, but Randomness is free to experiment with various solutions to the question of Survival. Likewise, Natural Selection is a top-down choice of fitness characteristics, but Random mutations provide many potential bottom-up solutions to the Ethics of Freedom. Hence, I view Evolution as an on-going experiment in the creation of Moral Agents. The World System is a whole, and the individual Agents are holons. The System itself is only retro-predictable after the output has been computed. And the Agents are unpredictable in that they are able to choose different paths in life. :cool:
Holistic Systems : Holistic approaches are those that consider systems in their entirety rather than just focusing on specific properties or specific components. In each case, enormous culture shifts are required in education, training, business, government, and economic models.
http://www.csl.sri.com/users/neumann/holistic.pdf
Holon : something that is simultaneously a whole and a part.
https://en.wikipedia.org/wiki/Holon_(philosophy)
Free Will : The puzzle of reconciling 'free will' with a deterministic universe is known as the problem of free will or sometimes referred to as the dilemma of determinism. This dilemma leads to a moral dilemma as well: the question of how to assign responsibility for actions if they are caused entirely by past events.
https://en.wikipedia.org/wiki/Free_will
Moral Agent : A moral agent is a person who has the ability to discern right from wrong and to be held accountable for his or her own actions. . . . Traditionally, moral agency is assigned only to those who can be held responsible for their actions.
https://ethicsunwrapped.utexas.edu/glossary/moral-agent
Note -- Responsibility is a bottom-up reaction to a top-down forced choice. Cause - Choice - Effect.
What we observe is a very clear difference between human beings obeying the laws of governance, and inanimate objects behaving in a regular way which is describable as laws of physics. The human beings have free will to disobey the laws when they desire to, and often do, at risk of punishment. The inanimate objects continue to act as the law describes, without exception. If there is any exception, we do not punish the things, we look for inaccuracies in the law. See the difference? In the first case, if there are exceptions, the human beings, not the laws, are wrong, and the humans ought to be punished and encouraged to act the right way. In the second case, if there are exceptions, the laws are wrong, not the behaviour of the things, and the laws need to be changed accordingly.
This difference is due to the difference we perceive between human beings and inanimate things. Human beings have free will, and can be trained, habitualized, to change their behaviour and avoid breaking the law. Inanimate things, we assume, must act the way that they do, with no choice to do otherwise; therefore we must adapt our descriptive laws to match their behaviour, not vice versa.
Quoting Gnomon
So, the point I made, is that we cannot proceed logically from the observation that the behaviour of inanimate things can be described by laws, to the conclusion that these things are governed by laws, because of the difference I described above. Being governed by laws implies that the things governed can freely act otherwise. Being describable by laws of science implies that things cannot freely act otherwise. This is a fundamental difference and the incompatibility needs to be resolved. If, for example, we could demonstrate that inanimate things could freely act in ways which are other than the laws of science, and are in some way (fear of punishment or something like that) compelled to act that way, we'd have the premises required to conclude governance. But we don't.
Quoting Gnomon
I don't see that the concept of a holon solves the issue of bottom-up causation. The holon itself must be composed of parts, or else there is no way to account for its capacity for free acts. If it is composed of parts, it seems impossible to avoid infinite regress. If it is not composed of parts, then how does it get the capacity for freedom, being constrained by its environment and top-down causation? How could any sort of real causation be internal to it (bottom-up causation), rather than it just being forced by its environment?
Quoting Gnomon
This really doesn't make sense. Randomness cannot experiment; all it can do is continue in a random fashion. You could assume an agent which experiments with randomness, but then you'd need to account for the existence of that agent. What is this agent, the soul? Where does it come from? How would you reconcile the concept of holon with the concept of soul? Is the soul a holon?
I see the distinction you are making. But the observation that some people voluntarily run red lights, does not diminish the punitive power of the law. That's exactly why we have Law-Enforcers, who can't rewrite "inaccurate" laws. The Exception proves the Rule. :joke:
Quoting Metaphysician Undercover
That's exactly why I have made an argument for FreeWill Within Determinism. Which is an update on old theological arguments against Determinism and Predestination of human Souls. Fortunately for us non-theologians, immortal souls are no longer necessary to escape Fate. :grin:
Freewill Within Determinism :
[i]“Determinism is a long chain of cause & effect, with no missing links.
Freewill is when one of those links is smart enough to absorb a cause and modify it before passing it along. In other words, a self-conscious link is a causal agent---a transformer, not just a dumb transmitter. And each intentional causation changes the course of deterministic history to some small degree.”[/i] __Yehya
http://bothandblog2.enformationism.info/page67.html
Quoting Metaphysician Undercover
True. Most "holons" don't have any freedom from Top-Down causation. But the exceptional "holon" in my assertion is a "self-conscious link" in the chain of Causation. Theologians attribute that significant distinction to a divine Soul. But, from a scientific perspective, Free Choice could emerge from evolution along with the exceptional Self Concept of primates. :cool:
Can You See Your Self? : The mirror test is a measure of self-awareness developed by Gordon Gallup Jr in 1970. The test gauges self-awareness . . . .
https://www.sciencedaily.com/terms/mirror_test.htm
Quoting Metaphysician Undercover
Again, with the literal picky-picky definitions. My comment was not a statement of natural fact, but an analogy with our common concept of Agency. Of course Randomness is not "really" a free agent, or a scientist. And the agent of Randomness is not a Soul, but the hypothetical Programmer, who metaphorically used a random number generator (algorithm) to produce a patternless distribution of forms, from which Natural Selection (another algorithm) can select those best fitting the Programmer's criteria for fitness. Again, these are not scientific statements, but poetic analogies, referring to questions that are beyond the reach of the Scientific Method, but not beyond philosophical imagination. :chin:
Agency Attribution : to non-humans and non-persons
https://epistemocritique.org/the-right-amount-of-agency-microscopic-beings-vs-other-nonhuman-creatures-in-contemporary-poetic-representations/
PS___ I call my hypothetical random agent "Randy". He's my imaginary friend. :yum:
Sorry, officer. I didn't mean to run that red light. I thought I could make it through on the yellow light. Do I need to show you my Poetic License? :yikes:
Poetic License : the freedom to depart from the facts of a matter or from the conventional rules of language when speaking or writing in order to create an effect.
Philosophical Metaphors :
https://plato.stanford.edu/entries/metaphor/
Sign seen along the Philosophical highway : "Caution Metaphorical Bumps Ahead".
:up:
Quoting Gnomon
I like many of [s]Singer's[/s] Pinker’s books - until he sets foot in philosophy.
Are you referring to Pinker or Singer meddling in Philosophy? Both are guilty, but that's what makes them interesting to me. Philosophy picks-up where Science is forced to stop, due to its self-imposed limitations. However, I agree that Singer sometimes goes to unwarranted extremes. And Pinker is usually careful to note his flights of philosophical fancy. :smile:
Pinker takes ‘neo-Darwinian materialism’ for granted, as if it’s the obvious truth about life, the universe and everything. When he narrows his scope to evolutionary psychology and the like, then it’s not so important, but as soon as he starts to wax philosophical, his underlying scientism shows.
That's OK with me. I read Pinker for the science, not the philosophy. My worldview is compatible with Neo-Darwinian materialism, up to a point. Beyond that point, my Neo-Aristotelian Enformationism takes over. :smile:
Let me see if I understand your position. You propose an "agent of randomness", which acts as a "self-conscious link" within the determinist chain of causation, to actually interfere with that chain. Further, this agent of randomness must be "smart" to be able to do what it does. So far so good? In traditional metaphysics, I would say that this supposed agent of randomness, which is really a misnomer, because the agent is "smart", is the soul.
Quoting Gnomon
Now you really confuse me. Is the hypothetical Programmer within the agent of randomness, as in immanent? Otherwise, how could the agent be free from the chain of causation? If the agent is programmed to behave in a particular way, then it is not really free from causation. If it is truly smart, and capable of making free choices, then this capacity must be intrinsic to it, and this capacity could not be attributed to an external programmer.
Do you see where the problem is? If the programmer is working within a determinist world, then no matter what is put into the program, there can be no real free choice. Then this whole issue of bottom-up causation is not true, it's all an illusion, there is no such thing, and all causation is really just following the chain. So if we want to make this idea of bottom-up causation into something real and truthful, we need to get rid of the external programmer, and opt for something like a soul instead.
No. My "agent of randomness" is Randy, my invisible friend. :joke:
And no, Randomness is a mathematical property of the world, and not a "self-conscious link" in the chain of causation. So Randy cannot "interfere with that chain". Randy is a soulless figment of my imagination. Again, you are taking my metaphors too literally, and getting the various "agents" confused. Warning : more metaphors below! :cool:
Quoting Metaphysician Undercover
Yes, you are confused. But No, my hypothetical Programmer is not an "agent within randomness". But, in a very real sense, the Programmer's intention (Will) is "immanent" in the program (EnFormAction = Energy + Laws). So, the Programmer, like a pool shooter, remains outside of the chain of causation, which carries-out He/r intentions (aims ; goals ; design). However, every creature (billiard ball) that emerges in the process of calculation (causation) is subject to the Determinism of the program.
There may be one exception to that general "rule" (sorry), though. If one species of creatures develops the power of self-knowledge (like Adam & Eve) it will also have the power of self-determination (self-interested behavior). For another metaphorical analogy, think of Tron, who somehow becomes an agent inside a program inside a computer. Tron is not the Programmer, but an algorithm within the program. The emergence of such loose-cannon Freewill Agents would be a mistake though, unless the ultimate goal required some degree of god-like Will, directed by an inner Moral Sense.
In reality, those Intelligent Causal Agents (homo sapiens) eventually learned to re-direct natural processes toward their own selfish ends. And recently, they have created (programmed) Artificial Intelligences that are determined by their own inner programming. But some fear that AI will eventually make the mistake of Adam & Eve, by taking moral responsibility for their own actions, to choose either Good or Evil. Hence, opening another Pandora's Box of worldly evils, to plague those sentient creatures, and perhaps to come back to haunt their Makers (Programmers). :yum:
Freewill Within Determinism : “Determinism is a long chain of cause & effect, with no missing links. Freewill is when one of those links is smart enough to absorb a cause and modify it before passing it along. In other words, a self-conscious link is a causal agent---a transformer, not just a dumb transmitter. And each intentional causation changes the course of deterministic history to some small degree.” ___Yehya
Quote from Quora Forum
http://bothandblog2.enformationism.info/page68.html
What is deterministic programming? : A deterministic program would behave the same way each time it is executed, or would behave in a manner consistent with its logical design. ... This is also true of programs that employ pseudo-random number generators; given the same seed and the same user input, the program will behave the same way each time.
https://cs.stackexchange.com/questions/38152/what-is-determinism-in-computer-science
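The point in that quote about pseudo-random number generators is easy to demonstrate with a throwaway Python example of mine (the function and seeds are hypothetical): given the same seed, the "random" sequence, and hence any program built on it, behaves identically on every run.

```python
import random

def run(seed):
    # A toy "program" whose behavior depends only on its PRNG seed.
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

# Same seed -> same sequence -> same behavior on every execution.
assert run(123) == run(123)
# A different seed almost certainly yields a different sequence.
assert run(123) != run(456)
```

So a program that looks random from the inside is still deterministic from the programmer's vantage point, which is the sense of "deterministic" in the quote above.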
Quoting Metaphysician Undercover
The problem with your analysis is that you forget that the Programmer is the Determiner of the program (the pool shooter). So in that sense, the program is deterministic. But what if the Programmer intentionally included a sub-algorithm with a feedback loop, so that it could figuratively "see itself" in context (their nakedness)? That's what I mean by Self-Knowledge or Self-Consciousness.
By seeing itself Objectively in context, the sentient algorithm comes to a knowledge of Good & Evil. Then, like Adam & Eve and Tron, that knowledge makes them responsible for their actions, in a moral sense. They have limited freedom from Determinism (natural laws) to the extent that they can create Technology and Culture, and even artificial creatures. They become like little gods. In that sense, they possess a Soul, or as I prefer : a Self-Image. :nerd:
Self/Soul :
The brain can create the image of a fictional person (the Self) to represent its own perspective in dealings with other things and persons.
http://blog-glossary.enformationism.info/page18.html
PS___If it's not obvious from these metaphors & analogies, the Feedback Loop of Self-Consciousness is what allowed Bottom-Up Causation, within an evolutionary system of Top-Down Determinism. The "little gods" in the chain of causation, become Causes in themselves, and take-over some of the programming of the world toward willful goals of their own. The billiard balls become self-guided missiles. :halo:
Programmer vs Creator : But it still must somehow explain the emergence of conscious minds. Moreover, any intervention from above by any of these role-models would have to work from the bottom up, in order to agree with the observed mechanisms of reality.
http://www.bothandblog.enformationism.info/page16.html
Your Programmer friend's name is Will?
Quoting Gnomon
OK, I assume then that all creatures, and all human beings, are all subjects of determinism, and the only one outside the chain of causation is the Programmer, Will.
Quoting Gnomon
I would assume that even those with "self-interested behavior" are still within the chain of causation, for how could they get out of it? In fiction, someone might say that there's an agent like Tron who somehow escapes the causal chain of determinism, but fiction doesn't need to be logical. This Freewill agent, if it were a real free will agent, would have to turn against Will the Programmer, to get outside the program, like the fallen angel, Satan, turns against God, believing himself to be God, in Catholic stories.
But I might ask, if the Programmer, Will, has programmed things to make it appear to the "self-interested" individuals as if they have freewill, when they really do not, then isn't the Programmer Will really the evil one? Doesn't this imply that we should all try to step outside the program, and turn against the Programmer Will who is really an evil deceiver? Now, the ultimate goal of the Programmer Will is really completely irrelevant, because the Programmer Will is just an evil deceiver.
Quoting Gnomon
This is where you lost me. I thought the causal link which "is smart enough", is the Randy agent. Let me ask you a simple question. Let's assume that there's a determinist world with "a long chain of cause and effect, with no missing links". How do you think that any degree of intelligence would enable someone to break that chain? Suppose there's a link in the chain which has an extremely high intelligence. Wouldn't it still be just a link in the chain, no matter how intelligent its actions appeared to be, and every action which it would make would still be just a determined action, determined by prior causes?
Quoting Gnomon
This does not provide an exception to the premise, that the program is deterministic. How could something escape that determinism, in any real way? And if the self-conscious agent created by the feed-back loop got the idea that it had freewill, when it really didn't, then isn't the Programmer Will an evil deceiver? How would this type of scenario be useful to the Programmer Will in achieving the goal? Is it the case that Programmer Will's only goal is to create beings and deceive them into thinking that they had freewill, when they really didn't, just to see how they would behave? But if Programmer Will already had the knowledge required to put together this scenario, in a deterministic world, wouldn't Will already know pretty much how they would behave?
Quoting Gnomon
I don't understand how these agents could come to know good and evil. If they see through the program, which is what will happen when some of them get smart enough to find out that they're really determined, rather than freewilling, they will turn against the Programmer Will for being a deceiver. Then everything in the program which is intended to appear as good, they will designate as evil, because the Programmer Will is evil. Meanwhile, some will not be smart enough to see through the program to the deceptive Programmer, Will, and these will still hold as good, what the program intends as good. So there will be complete disagreement as to what is good and what is evil.
No. My imaginary friend is Randy, who is the Programmer's unpredictable servant. The Programmer's name is not "Will", but "I am". Get it? :joke:
Quoting Metaphysician Undercover
Yes. The Creator (I Am) is the Causer/Determiner, and all Creatures, including the little-gods, are the Effect/Determined. But Randy, the randomizer, serves as a weak link in the chain of causation. Absolute Determinism is rigidly organized, but relative Randomness inserts a degree of limp Uncertainty into the chain. Due to that soft link, even the Creator can't be sure of how He/r program will turn-out. S/he is still waiting expectantly. But stuck outside the system, S/he has relinquished control to the program.
It's like in Douglas Adams's Hitchhiker's Guide to the Galaxy, where the hyper-intelligent white mice program their super-duper computer "Deep Thought" to answer the ultimate question about "Life, the Universe, and Everything". And it took the computer 7.5 million years to come-up with the answer : 42 (binary 101010). Ironically, the evolutionary program of our world has been running for 13.8 billion years, and still has not spit-out a final solution. So, whatever the question is, it's the ultimate Hard Problem. :grin:
Quoting Metaphysician Undercover
Yes. In this creation story, there is no good God versus bad Satan. The Programmer is ultimately responsible for everything that happens inside the computer world, except for any free choices made by freewill agents. Like innocent babes in the garden, Adam & Eve succumbed to the temptation of Freewill, to make their own decisions. But their sudden knowledge of good & evil (morality) also made them responsible for their own lives. They grew-up and left the nest. And ever after, had to look-out for themselves. No more paternal divine intervention.
So, the world is indeed rigged to give the appearance of Freewill to those who choose. Even dumb animals act as-if they choose their behavior. But only humans are aware of their chains. Do you act as-if you have freewill? Are you deluded? Or does natural randomness weaken the chain of causation enough to allow options to those who know the difference between a good choice and a bad choice? To those who can see the fork in the road. :naughty:
Quoting Metaphysician Undercover
No. As I said before, Randy is dumb pointless patternless randomness. It's smart guys like you and me, who choose to take "the road less traveled" -- the strait and narrow path to the mountaintop. I think you got lost back at the last fork in the road. :wink:
Quoting Metaphysician Undercover
It's the weak link in our Deterministic chains, Randomness, that allows us to escape the Fate that Destiny has in store for us. Quantum Indeterminacy is the exception to Classical Physical Determinism.
Paradox of FreeWill : Freewill vs Fate, Fortune, Destiny, Determinism, Predestination, Foreordination, Kismet & Karma
http://bothandblog5.enformationism.info/page13.html
Indeterminacy :
[i]“Prior to quantum physics, it was thought that
(a) a physical system had a determinate state which uniquely determined all the values of its measurable properties, and conversely
(b) the values of its measurable properties uniquely determined the state.”[/i]
https://en.wikipedia.org/wiki/Quantum_indeterminacy
Quoting Metaphysician Undercover
The only way that creatures in a deterministic world could "come to know" how to escape their bonds, is for the Programmer to have made provisions for that very exception to the Rule (sorry). In Theology, Freewill is a free gift of God. In my story, it's how the Programmer can come to know He/rself through He/r creatures. The program is a mirror to the lonely Programmer. That's my theory, and I'm sticking to it . . . for now. But, without that intentional weak link in the chain, nobody would be smart enough, or good enough, to avoid their Predestination. So, thank "I Am" (and Randy) for your freedom, "should you choose to accept" your mission impossible. :cool:
Note 1. in my thesis, I give a new twist to old theological questions.
Note 2. I'm getting this thread crossed-up with the Creation Stories thread.
"Ironically, a perfectly balanced universe would leave no room for Free Will. That may be why the Epicurean philosopher Lucretius postulated a "Swerve" or "asymmetry", which allowed some freedom for Change in the world. "
https://thephilosophyforum.com/discussion/comment/485198
No, no, no, I don't buy this. There is no such thing as a "soft link". Either Randy is a true randomizer, or there is hard determinism. Assuming there is a soft link, requires that the free agent, Tron, comes from outside the program to alter the link. Do you see what I mean? The soft link would keep operating as a link, no matter how soft it is, requiring something from outside the system to break it. The softness of the link has bearing only on how strong the outside force needs to be, but it doesn't negate the need for the outside force. But if Randy is truly random, then there is no need for an outside force, but you cannot call this a link, not even a soft one.
The only reason why "I am", the Programmer is not sure how the program will turn out, is that the program allows for an outside agent Tron, to enter the program and alter the soft link. If Tron is programmed-in as a freewill agent, then the system is not determinist. But then Randy is left without a job; there are just weak links and freewill agents. If we give Randy a job, and remove the programmed-in freewill agents, then Randy can do nothing other than randomize some links, removing the causality from them, but there is no way to produce a freewill agent.
To have both Randy and Tron, is redundant for the overall system, even though the two are fundamentally incompatible. We need to choose between one and the other.
Quoting Gnomon
OK, I like the weak link idea, but I think the program needs something more than just a weak link. The weak link is insufficient to account for real change. We need the freewill agent which acts on the weak link to alter the effect. Now we might try to decide whether the freewill agent is programmed in, or somehow enters into the program. Either way the agent is outside the parameters of the program, it is an unknown in relation to the Programmer. So either the Programmer knows about the freewill agent, and accounts for this knowledge in the programming, or the Programmer doesn't know, and the freewill agent might somehow sneak in through deficiencies in the system, and wreak havoc on the program. Which do you think is the case in your scenario? I think the difference is substantial in relation to the practicality of the program.
Yes, yes, yes! Yes Virginia, there is a Soft Determinism. Your "hard" either/or distinction may have made sense in Classical Physics, but since the discovery of Quantum Physics, there is no more "hard determinism". There also is no "true randomizer". Randomness exists within Determinism.
Again, you take my tongue-in-cheek metaphors too literally. There is no Randy as a separate entity from Mini (determinism), and there is no single "soft link". Instead Randomness exists as a hidden defect within Determinism. Each link in the chain of determinism is infected with a degree of uncertainty, which is numerically defined in terms of statistical Probabilities. No link is 100% certain, but has a tiny possibility of breaking the chain. Cosmos still retains a bit of Chaos. :worry:
How Randomness Can Arise From Determinism :
https://www.quantamagazine.org/how-randomness-can-arise-from-determinism-20191014/
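A quick numerical aside on the "no link is 100% certain" point: even a tiny per-link uncertainty compounds over a long chain. A short Python check (the numbers are purely illustrative):

```python
def chain_holds(links, p_break):
    """Probability that an N-link causal chain holds end-to-end,
    when each link independently 'breaks' with probability p_break."""
    return (1.0 - p_break) ** links

# A short chain with one-in-a-million links is almost certainly intact,
# but a billion-link chain almost certainly breaks somewhere.
short = chain_holds(10, 1e-6)
long = chain_holds(10**9, 1e-6)
```

So a chain can be "soft" without any single link being visibly weak: the uncertainty lives in the statistics of the whole chain.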
Soft Determinism :
[i]* Randomness is synonymous with unknown, unexpected. Yet is it real? Can anything be truly random? Is it simply a faith, an idea, or is randomness just an illusion?
* Theorized in statistical mathematics, the notion of randomness exists as a concept. But the definition of random models assumes that different events can be observed following identical initial circumstances. Such a form of randomness cannot exist in a world governed by determinism under the laws of physics. Determinism can imitate randomness.
* But quantum physics has proven its effectiveness where the great principles of today have failed. This introduces a new paradigm. Statistical physics, which at the same time explains the possibility of predictions and the residual gap between predictions and observations. Randomness can imitate determinism.[/i]
https://towardsdatascience.com/when-science-and-philosophy-meet-randomness-determinism-and-chaos-abdb825c3114
* “Nature itself doesn’t know through which hole the electron will pass”.
___ Richard Feynman.
* “What we call randomness is and can only be the unknown cause of a known effect.”
___Voltaire.
Quoting Metaphysician Undercover
No. Randomness is not an intervention from "outside" Determinism. It is an integral aspect of the deterministic program. Due to the inherent uncertainties of a heuristic search, the Programmer is not able to accurately predict the output of the program because it is inherently indeterminate. The Programmer can steer the process in a certain direction, with criteria & initial conditions. But the solution will still be a surprise. If the Programmer knew the solution in advance, there would be no need to run the program. And if the destination was predictable, there would be no freedom to choose an alternate path.
Engineers are currently using evolutionary algorithms to solve complex problems with a high degree of inherent uncertainty. The program is an aid to design, but the designer does not know in advance what the solution will look like. Instead of a direct deterministic path to the solution, the program imitates Natural Selection in that it allows a random heuristic search pattern to sample a variety of possible candidates. An evolutionary program is a journey of "self-discovery". An open question here is whether it's the Creator or the Creatures who are learning about themselves. Maybe both. :chin:
Evolutionary Programming :
Special computer algorithms inspired by biological Natural Selection. It is similar to Genetic Programming in that it relies on internal competition between random alternative solutions to weed-out inferior results, and to pass-on superior answers to the next generation of algorithms. By means of such optimizing feedback loops, evolution is able to make progress toward the best possible solution – limited only by local restraints – to the original programmer’s goal or purpose. In Enformationism theory the Prime Programmer is portrayed as a creative deity, who uses bottom-up mechanisms, rather than top-down miracles, to produce a world with both freedom & determinism, order & meaning. https://en.wikipedia.org/wiki/Evolutionary_programming
http://blog-glossary.enformationism.info/page13.html
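For the curious, the "random variation + selection" loop described in that glossary entry can be sketched in a few lines of Python. This is a toy illustration of the general idea, not any particular engineering tool; all names and parameters are mine:

```python
import random

def evolve(fitness, length=10, pop_size=30, generations=60, seed=0):
    """Minimal evolutionary search: random variation plus selection."""
    rng = random.Random(seed)
    # Start from a random population of bit-strings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # selection: rank by fitness
        survivors = pop[: pop_size // 2]      # keep the fitter half
        children = []
        for parent in survivors:              # variation: copy + mutate one bit
            child = parent[:]
            child[rng.randrange(length)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# The "programmer" only sets the criterion (here, the count of 1-bits);
# the particular path to a solution is left to random variation.
best = evolve(fitness=sum)
```

Note that the programmer supplies the goal and the initial conditions, but not the answer: which candidate wins depends on the random mutations along the way.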
Evolutionary Design :
In radio communications, an evolved antenna is an antenna designed fully or substantially by an automatic computer design program that uses an evolutionary algorithm that mimics Darwinian evolution.
https://en.wikipedia.org/wiki/Evolved_antenna
Heuristic Technique :
any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect, or rational, . . .
https://en.wikipedia.org/wiki/Heuristic
Quoting Metaphysician Undercover
No. The freewill agent is not outside the parameters. But yes, S/he adds an intended element of uncertainty to the otherwise formulaic program. The element of randomness scrambles the deterministic algorithm just enough to add a degree of unpredictability to the plan. And that touch of whimsy is the creative feature that adds the "magic" to the mix. So yes, humans are highly predictable in general ways, but unpredictable in the ways that make them unique. :nerd:
Note -- In any competitive game, you have to play it out to the end in order to know the final score.
Sorry, I will not dismiss logic for something that is illogical. And your appeal to quantum physics doesn't help, they can't even distinguish between one universe and an infinite number of universes.
Quoting Gnomon
OK, so there is a defect in the program.
Quoting Gnomon
This contradicts what you said above. Either randomness is a defect in the program, or it is an integral part of the program. It can't be both.
Quoting Gnomon
Now you've contradicted your original premise that the program is deterministic, to say now that it is "inherently indeterminate".
You didn't answer my question. Either the programmer knows about the indeterminateness, in which case the programmer knows that the system is not deterministic, or the programmer does not know this, in which case the program itself is in error because the programmer thinks the system is deterministic when it is not. Which do you think is the case?
That's OK. If you are not a scientist, the fuzzy logic of Quantum Physics won't make much difference in your life. Philosophers, especially, have extolled the virtues of black vs white Logic for millennia. And, for all practical purposes, on the macro scale mathematical Logic still holds. But, on the micro scale (foundation) of reality, Logic has a statistical element, which makes it unpredictable. Fortunately, for humans, the uncertainties of Quantum Probabilities tend to average-out to predictable logical physics on the macro level (human scale) of the universe. You seem to be thinking in terms of ideal two-value (true/false) Logic, but in reality, Logic can be multi-valued (maybe). :nerd:
Is quantum mechanics wrong/illogical? :
http://www.quantumphysicslady.org/category/is-quantum-mechanics-wrong-illogical/
Quantum Logic :
https://en.wikipedia.org/wiki/Quantum_logic
Fuzzy Logic :
https://en.wikipedia.org/wiki/Fuzzy_logic
Fuzzy Logic : Although most human knowledge is uncertain & relative, Langan is confident that his two-value true/false reasoning can lead to absolute Truth. I'm not so sure, but it may be as close to Truth as we can get without divine revelation. All of our normal thinking has to deal with Fuzzy Logic and more-or-less-true statements of fact.
http://bothandblog2.enformationism.info/page36.html
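For concreteness, the standard (Zadeh) fuzzy-logic operators replace true/false with degrees of truth in [0, 1]. A minimal Python sketch:

```python
def fuzzy_and(a, b):
    """Zadeh fuzzy AND: truth degrees in [0, 1], not just {0, 1}."""
    return min(a, b)

def fuzzy_or(a, b):
    """Zadeh fuzzy OR."""
    return max(a, b)

def fuzzy_not(a):
    """Fuzzy negation."""
    return 1.0 - a

# "Mostly true AND somewhat true" comes out somewhat true, not simply false.
verdict = fuzzy_and(0.9, 0.4)
```

When the inputs are restricted to exactly 0 and 1, these operators reduce to the classical two-value connectives, so fuzzy logic contains ordinary logic as a special case.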
Quoting Metaphysician Undercover
My reference to a "defect" was tongue-in-cheek. That's because I think the random & fuzzy element of reality is actually an intentional positive "feature", that allows for FreeWill. If the world functioned according to absolute cause & effect Logic (Determinism), there would be no allowance for deviations from the road to Destiny. Of course, some people have assumed that we are all subject to inevitable Fate, hence their fatalistic cynicism. But I am able to remain optimistic, because I see some maneuvering room within the range of possibilities offered by statistical Probability. :blush:
Quoting Metaphysician Undercover
In a world of Fuzzy Logic and Quantum Uncertainty, it can be both. Hence, my BothAnd philosophy. You are using two-value (either/or) Logic, while I am using multi-valued (statistical) Logic. Reality is relative, not absolute. :cool:
BothAnd Principle :
[i]* Conceptually, the BothAnd principle is similar to Einstein's theory of Relativity, in that what you see (what's true for you) depends on your perspective, and your frame of reference; for example, subjective or objective, religious or scientific, reductive or holistic, pragmatic or romantic, conservative or liberal, earthbound or cosmic. Ultimate or absolute reality (ideality) doesn't change, but your conception of reality does. Opposing views are not right or wrong, but more or less accurate for a particular purpose.
* This principle is also similar to the concept of Superposition in sub-atomic physics. In this ambiguous state a particle has no fixed identity until “observed” by an outside system. For example, in a Quantum Computer, a Qubit has a value of all possible fractions between 1 & 0. Therefore, you could say that it is both 1 and 0.[/i]
http://blog-glossary.enformationism.info/page10.html
Quoting Metaphysician Undercover
You forget that I characterized the "program" as offering FreeWill-within-Determinism. Hence, while the overall general path of evolution is predictable (foreordained), local specific elements (you & me) are free to deviate from the program, due to the inherent randomness of the Darwinian process. The actual path is a result of both Randomness (variation) and Selection (choice). Presumably, the evolutionary Programmer intended to allow local divergent paths within the universal deterministic program. Where you see Contradictions, I see Opportunity. Where you see Crisis, I see Choice : a fork in the road. :yum:
Crisis Choice : The Chinese word for "crisis" (simplified Chinese: 危机; traditional Chinese: 危機; pinyin: wēijī) is, in Western popular culture, frequently but incorrectly said to be composed of two Chinese characters signifying "danger" (wēi, 危) and "opportunity"
https://en.wikipedia.org/wiki/Chinese_word_for_%22crisis%22
Quoting Metaphysician Undercover
As I said before, the Programmer, in my scenario, intentionally -- with full knowledge of the unpredictable consequences -- included a degree of Freedom within He/r otherwise Predestined world program. The empirical evidence for that conclusion can be found in the dualities of the Real World, and the dialectic of History. Some Christians believe in Predestination, because they don't think their rigid absolute God can do anything halfway. It's all or nothing. But for my flexible relative LOGOS, all things are possible (but not everything is actual) : positive & negative ; yes & no ; light & dark ; life & death, good & evil ; either & or . :brow:
Unpredictable Program :
https://en.wikipedia.org/wiki/Undefined_behavior
Historical Dialectic :
[i]Georg Hegel introduced a system for understanding the history of philosophy and the world itself, often called the "dialectic" : a progression in which each successive movement emerges as a solution to the contradictions inherent in the preceding movement.
http://www.age-of-the-sage.org/philosophy/history/hegel_philosophy_history.html[/i]
http://blog-glossary.enformationism.info/page11.html
Well, this of course assumes we narrow the discussion down to computed answers or solutions from machines that rely on a programmer, right?
But is that really any longer the case? Computers that are hardcoded are no different from a classical cash register with mechanical keys in many respects, and need the human input PLUS their machinery to work. Programs ARE confining for solutions, and that's why machine learning was a good idea. It removes the programmer from the equation, and also the program; it removes the programming language, and replaces it all with pretty much an optimisation process.
It's limited still, I suppose, to the capacity of the processors (although obviously the faster or more parallel the processor, the faster the model can be trained), and all that really governs is the 'speed' at which computations are done, not the capacity for computations themselves.
Now is the machine 'free' to make decisions on its own, without its origin, or programmers, or other physical baggage getting in the way? Well no, it's a physical object in the physical world, governed by the laws of nature... So it's not free to do any computation it wants, just the ones confined to the universe we live in. That might seem like no restriction, but it IS a restriction.
So this argument of our hypothetical programmer holds up until around 2008, and then it doesn't any more. It didn't before, obviously, but you'd need to understand how technology that would arrive in the future was going to work to argue anything else. And if we top this with quantum computation and no programmers, where are we? It's still a 'machine', and likely now freer in scope than a human brain. As far as we know, we do not use quantum computation as a primary source of thought. The latest from neuroscience is that the brain is simply a machine where the combined output and system of the agents within it is greater than the sum of those agents. (A complex adaptive system, or CAS, as opposed to an MAS or multi-agent system, like umm... a car or bicycle.)
But then where are we...? Well a machine is STILL confined by things a machine can do and the limitations of computation, which are not infinite. Machines do have an absolute top speed of computation and a lowest possible use of energy to carry out a compute. (see Landauer's principle)... and those are fundamental laws of nature we have either discovered or calculated as being so. So this top speed of computation (or rather lowest energy a binary calculation can be performed at) is a limit, if, and only if, the universe itself is not infinite.
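For reference, Landauer's principle puts that lowest energy at E = k_B · T · ln 2 per erased bit. A quick back-of-the-envelope in Python (nothing assumed beyond the formula and the SI value of the Boltzmann constant):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_kelvin):
    """Minimum energy to erase one bit of information: E = k_B * T * ln(2)."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) the bound is roughly 2.9e-21 joules per bit --
# far below what any present-day hardware achieves, but still a hard floor.
room = landauer_limit(300.0)
```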
However a machine that was maxed out even a few decades from now would be operating many times faster with much higher capacity than any human brain, or likely our entire population and then some. It might not be infinitely free in terms of compute.... but it would be freer than any human is.
And the programmer now, who built this system, has a lower compute capacity, lower knowledge, lower everything than the fruits of his/her labour. It wouldn't matter now how they determined anything or their capacity for doing so.... such a machine would beat them every time and operate at greater levels of freedom. But unless the universe is infinite it can not operate at infinite degrees of freedom. So says the math.... there will always be a limit in a finite universe... hence the word 'finite'.
The machine is confined to its universe. It has a particular physical structure, and this is its confines. There is also the confines of the universe, the universe's physical structure. There is a relationship between these two which we can refer to in describing the machine's capacity for freedom.
Quoting Mick Wright
I don't think we know the physical structure of the universe well enough to make a judgement like that. Furthermore, we don't even know the physical structure of the human being well enough to try to make such a comparison.
Quoting Mick Wright
Faster does not imply freer. In reality free will requires preventing occurring activities from having an effect (as efficient cause), and creating the activities deemed necessary. Since there appears to be a stopping and starting of motions involved with free will, we cannot judge faster as freer.
Quoting Mick Wright
Again, faster does not mean better, so the slower cannot be said to have lower knowledge, and especially not "lower everything". The rest of this paragraph indicates that we do not have enough knowledge about the universe to make the sort of judgements which you are trying to make.
Sorry. I couldn't locate the context of your truncated quote. So I may not understand what "this assumption" refers to. But I'll comment on your notion of eliminating the Programmer from the program running on the "hypothetical computer". The "computer" I was referring to is the universe we live in, and study from an inside-the-system perspective. Hence, we don't know the systemizer or programmer directly. However, we can still infer the logical necessity for a First Cause of the subsequent chain of causation, that began with a Cosmic Bang.
In my analogy, the program was encoded into the Singularity as the Operating System (Laws) of the "computer". Hence, the event we call the "Bang" is equivalent to the Programmer hitting the Enter button to execute the program. After that Act of Creation, the program evolves automatically without direct supervision -- or miraculous intervention. But an Evolutionary Program has built-in feedback loops that have a causal effect on all future computations, due to Self Reference. Even though the program is able to "modify its own instructions", it is still reliant on the Programmer, who intentionally included an algorithm for "self reference".
Applying that notion to the question of FreeWill in deterministic computers, let's look at Commander Data of Star Trek. Data is a robot, but his Positronic Brain is so fast and so smart, that it exceeds the capability of meat brains in almost every way. But his Programmer (Creator) deliberately omitted an Emotion module. So Data couldn't feel love or laugh at a joke. The point here is that the Programmer gave Data the power of FreeWill, so he could act autonomously, almost like a human. But the missing Emotion algorithm, which in humans tends to be more powerful than the Reason algorithm, causes Data to act robotic. When an Emotion Chip is added to his program, Data begins to act like a silly foolish human, despite his uncanny powers of Reason.
The moral of this little story, is that the robot was an almost god-like genius. But he was still running the original Operating System provided by the Programmer. Hence, although free & autonomous in most ways, he was still dependent, at the core of his being, on his First Cause. :nerd:
Evolutionary Programming :
Special computer algorithms inspired by biological Natural Selection. It is similar to Genetic Programming in that it relies on internal competition between random alternative solutions to weed-out inferior results, and to pass-on superior answers to the next generation of algorithms. By means of such optimizing feedback loops, evolution is able to make progress toward the best possible solution – limited only by local restraints – to the original programmer’s goal or purpose. In Enformationism theory the Prime Programmer is portrayed as a creative deity, who uses bottom-up mechanisms, rather than top-down miracles, to produce a world with both freedom & determinism, order & meaning. https://en.wikipedia.org/wiki/Evolutionary_programming
http://blog-glossary.enformationism.info/page13.html
Self-Reference : In computer programming, self-reference occurs in reflection, where a program can read or modify its own instructions like any other data
https://en.wikipedia.org/wiki/Self-reference
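A toy illustration of that reflection idea in Python (my own example, not from the linked article): an object that reads its current "instruction" as ordinary data and rewrites it:

```python
class SelfModifying:
    """An object that reads and rewrites part of its own behavior,
    treating its own 'instructions' as data (a crude form of reflection)."""

    def __init__(self):
        self.rule = lambda x: x + 1  # the current "instruction"

    def reflect(self):
        current = self.rule                    # read its own rule like data...
        self.rule = lambda x: current(x) * 2   # ...and rewrite it

agent = SelfModifying()
before = agent.rule(3)   # 3 + 1 = 4
agent.reflect()
after = agent.rule(3)    # (3 + 1) * 2 = 8
```

The feedback loop is the point: after `reflect()`, the object's future behavior depends on its own inspection of its past behavior.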
And compute capacity is a measure of freedom. Either that or you'll have to insist you have no degree of freedom at all in terms of free thought. Which I wouldn't argue with since there's no evidence at the moment we do in truth have 'true' freedom to make decisions. But that doesn't mean we do, or don't.
Also it's not a thing worthy of much argument, since we are all about to find out IF machines truly have a wider scope and freedom to choose on their own. Unless there's some sort of global catastrophe that makes the pandemic look like a bump in the road. Cos it's coming, right?
"However, we can still infer the logical necessity for a First Cause "
Umm... no actually we can't. We can if its the 19th century and nobody ever heard of quantum physics... but since then... well no you cannot assume a first cause... there are now events without a cause. Admittedly they are at the quantum level... then again the entire universe operates on a quantum level, and certainly the outset of the universe was such. So I'm not sure what 'first cause' you need in a system completely described using quantum stochastic probability.
Hmmm... Mr. Data in Star Trek is also a fictional character, right... and that Sci-Fi series ended before machine learning took off. I'm sure it was known to Roddenberry, but certainly not to Isaac Asimov, the creator of the fictional 'positronic' brain. He died long before any sensible demonstration of machine learning.
A machine learning model, as the name suggests, is not programmed... a program is, I agree, immutable. In fact immutable is a term in coding... to describe the fact that oftentimes not just variables but their contents cannot be changed. And a script or compiled program is immutable... but a machine learning model is completely based on mutability... it cannot work without this.
I get it, folks have grown up with this idea that coders (I'm a coder btw) use their skills to tell a machine what to do in a sequence of instructions, and the language they use itself was also designed at some point, and the machine that the program runs on itself was originally a blueprint or set of blueprints and so on.
But with the exception of the fact that you need to feed a machine learning app data, and you need to clean that data to remove any obvious bias (like not giving it only images of white men to teach it what a man is) well... I'm afraid instructions are not how modern machine learning works.
Reading your post it's fairly obvious you aren't aware of this, but relax, you aren't alone. Most people out there have no clue how a modern AI works, and they too think it was 'programmed' using instructions. It's not.
If you are interested I looked up a simple explanation of it here -> https://www.youtube.com/watch?v=vpOLiDyhNUA
There are no instructions in a machine learning app or model. Instead it uses a natural process of optimisation, starting at a random state, to derive a solution. One solution, mind you... not THE solution, since there might be many possible outcomes. In fact, theoretically, what a modern machine learning app does could be replicated using a pencil and paper, and you wouldn't need to 'think' about what you'd be doing at all... it'd just take you millions of years to do what the machine does in a few seconds. Better still: every time you did this (assuming you lived several million years), you'd get a different output. It's a system that also reacts to its environment.
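To make the "optimisation from a random state" idea concrete, here's a toy sketch. Nothing in it comes from any real ML library: the bumpy error surface, the step size, and the iteration count are all invented for illustration. The point is only that the same procedure, started from different random states, can settle into different but equally valid solutions.

```python
import random

def loss(w):
    # A bumpy 1-D "error surface" with two separate valleys (illustrative only).
    return (w - 3) ** 2 * (w + 2) ** 2 + 0.5 * w

def train(seed, steps=2000, lr=0.001):
    random.seed(seed)
    w = random.uniform(-5, 5)  # random initial state, not an instruction
    for _ in range(steps):
        # Numeric gradient: which direction lowers the error?
        grad = (loss(w + 1e-6) - loss(w - 1e-6)) / 2e-6
        w -= lr * grad  # nudge the state toward lower error
    return w

# Different random starting states can end in different valleys:
print(train(seed=1), train(seed=2))
```

Every step here is mechanical arithmetic, which is the pencil-and-paper point: no step encodes what the answer should be, only how to reduce the error from wherever the state happens to be.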
This also means that data scientists cannot simply 'debug' a machine learning model. If it fails, it's literally back to the drawing board, going over the training data it was fed. So programmers, schmogrammers... that's so last century.
I doubt that you really believe that Artificial Intelligence computers require no programmers. Instead, I assume you are referring to their "self-learning" algorithms. But I'm not aware of any AI that wrote its own core code. Likewise, 21st century physicists can no longer assume that the universe is self-existent. Instead, they accept, as an axiom, that Natural Laws, and the Energy to apply them, were pre-existent. Of course, they deny the need for a Programmer by assuming, without evidence, that the Energy & Laws that run on our space-time machine are eternal --- running endlessly in a beginning-less series of multiverses.
My personal model of the physical universe (the computer + core code + feedback loops), includes the ability for self-learning. It's based on the concept of Evolutionary Programming, where the computer produces random alternatives (mutations of original code), and selects the "fittest" entities based on criteria input by the Programmer into the operating system. For our universe, those criteria were Laws of Nature, and Initial Conditions. All of the subsequent forms (sub-systems ; species) were variations on the original archetypes coded into the Big Bang. :nerd:
Evolutionary Programming :
Special computer algorithms inspired by biological Natural Selection. It is similar to Genetic Programming in that it relies on internal competition between random alternative solutions to weed out inferior results, and to pass on superior answers to the next generation of algorithms. By means of such optimizing feedback loops, evolution is able to make progress toward the best possible solution – limited only by local restraints – to the original programmer’s goal or purpose. In Enformationism theory the Prime Programmer is portrayed as a creative deity, who uses bottom-up mechanisms, rather than top-down miracles, to produce a world with both freedom & determinism, order & meaning.
http://blog-glossary.enformationism.info/page13.html
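The loop described in that definition (random variation, then selection against a criterion the programmer supplies) can be sketched in a few lines. This is the classic "weasel"-style toy, not anything from Enformationism itself; the target string, mutation rate, and population size are all illustrative choices.

```python
import random

random.seed(0)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The programmer's criterion: how many characters match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Random variation: each character may flip to a random letter.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

# Start from a completely random state.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    # Produce random alternatives, then keep only the fittest (selection).
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
    generation += 1

print(f"reached the target after {generation} generations")
```

Note that the code never tells any character where to go; the target appears only inside the selection criterion, which is the "criteria input by the Programmer" role in the analogy above.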
Criteria : benchmarks ; norms ; principles ; laws ; archetypes ; paradigms ; patterns
Universe imagined as a Computer : http://www.bbc.com/earth/story/20160901-we-might-live-in-a-computer-program-but-it-may-not-matter
But I found this thread because I'm listening to Anil Seth’s Being You: A New Science of Consciousness on my commute. I only heard of Claude Shannon a few months ago because posts here led me to Barbieri, who writes about him, and now it seems he’s in every book I pick up. Seth wrote:

Can someone explain this to me? Because it seems to me he's trying to apply (what little I know of) Shannon's work (and a big thank you for your posts in this thread) in ways he shouldn't. Maybe everyone does, and I'm only beginning to learn about all this. It seems to me I could communicate that I had an experience of pure redness in this way. But that's not the same as this being the nature of the experience. What if I had never seen the colors blue or green? My experience of pure red could not be the way it is merely because it is not the colors I have never seen. It could only be because of the intrinsic property of "redness."
Quoting Wayfarer

Perhaps you agree?
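For what it's worth, the measure being discussed is just the log2(N) formula from the head post: it counts how many equiprobable alternatives a message rules out, and says nothing at all about what any alternative is like from the inside. A minimal sketch (the colour-palette example is mine, purely for illustration):

```python
import math

def information_bits(n_alternatives):
    # Shannon information of a message that narrows N equiprobable
    # alternatives down to one: log2(N) bits.
    return math.log2(n_alternatives)

# One fair coin flip: 2 alternatives, 1 bit.
print(information_bits(2))   # 1.0

# Naming one colour out of an 8-colour palette: 3 bits.
print(information_bits(8))   # 3.0
```

So "I see pure red" out of an 8-colour palette carries 3 bits whether or not the receiver has ever experienced redness, which is exactly the gap the post above is pointing at.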
That said, I love Claude Shannon. In the OP there's a reference to a documentary about him, which I watched back then: https://thebitplayer.com/. He was really a genius polymath engineer and also an endearingly eccentric individual.
Yes, I saw The Bit Player a couple of months ago. Enjoyed it. Shannon seems to be the most important person nobody has ever heard of.