You are viewing the historical archive of The Philosophy Forum.

A Methodology of Knowledge

Philosophim August 20, 2020 at 21:38 14200 views 194 comments
I've been having a fantastic discussion with a member of this forum (even though it was positive, calling people out is usually frowned upon) about some epistemological matters. I was mostly discussing their side of it, but it got me interested in hearing perspectives on mine. This forum seems full of great people, so maybe a nice discussion can come of it.

Years ago I wrote my own epistemological theory of how we "know" knowledge. It's not terribly long, and I've tried to make the language clear and readable for someone who is not well versed in philosophical terms. I've used it to answer a number of epistemological puzzles such as "Theseus's ship", and I use it daily in assessing whether I know things as I go about my life.

So if you're interested, here it is broken into four chunks.

Part 1 The basics of knowledge (2 pages)
https://docs.google.com/document/d/17cHCI-_BY5k0tmpWXSoHCniGWW8hzpbVDDptLp5mIgg/edit?usp=sharing

Part 2 How to apply knowledge within personal contexts (5 pages)
https://docs.google.com/document/d/1Crx8zMpD9cdZ47Zw4RDhsS7VUzyb4xCdhIbEfcV10oA/edit?usp=sharing

Part 3 Knowledge within societal contexts (3 pages)
https://docs.google.com/document/d/14_KGMPbO2e_z8icrjuTmxVwGLxxUA0B_CqNT-lF6SXo/edit?usp=sharing

Part 4 Rational Induction (5 pages)
https://docs.google.com/document/d/1Q84NCGIcwkjytFZaLBIv9JmRGzhKHDjlV7j_dDPTDAY/edit?usp=sharing

I hope it's an enjoyable read!

Edit: Most of the early responses did not address the papers. If you want the part where the real discussion begins, please see Bob Ross's first reply on tab 2. We have a great conversation from there that goes over and answers many questions you might have.

Comments (194)

Mww August 20, 2020 at 22:27 #445079
Reply to Philosophim

It was a good read. Well done.
Philosophim August 20, 2020 at 22:33 #445083
Reply to Mww

Thank you Mww, I'm glad you enjoyed it!
Jarmo September 03, 2020 at 06:14 #448922
Reply to Philosophim


[Knowledge] is both the belief in something, and a further belief that “the something” is co-existent with reality


Can you really believe in something without believing that “the something” is co-existent with reality?


I claim the sky is red while I clearly experience it as blue. The contrary existence of the blue sky negates my belief that it is red.


Can you really believe something is red and at the same time experience it as blue? Or does that “negative belief” mean that you don’t actually believe, you just “claim”?


Without memories, how could I remember my claim to what a memory is and think to deny its reality?


I don’t think that experience of remembering something requires that you actually have memories. I would grant that we both have memories, but I believe that at this point we step outside of absolute knowledge.
Philosophim September 04, 2020 at 03:50 #449265
Reply to Jarmo Quoting Jarmo
Can you really believe in something without believing that “the something” is co-existent with reality?


Yes, you can. I can buy a lottery ticket believing that I will win, but with the knowledge that I probably will not. This statement is a very basic beginning to knowledge, which I go into in more detail. My next line is, "Yet how can one be certain one's belief is co-existent with reality?" I am starting with a basic premise, then questioning it myself to try to make it better. That second belief in a belief's concurrence with reality needs something more. Then I go into contradiction.

Quoting Jarmo
Can you really believe something is red and at the same time experience it as blue? Or does that “negative belief” mean that you don’t actually believe, you just “claim”?


So in the beginning, it is assumed we are speaking of colors in a "normative" way: that is, blue and red are different colors. You understand what blue and red are, and you see red and blue as different colors. If you are seeing one color, regardless of what it is, and it is not another color, you cannot claim it is color B when you see color A without a contradiction.
Now if you wish to change the premises to avoid a contradiction, that is fine. But then we're not talking about the same example. The point is to understand that a person cannot hold a true contradictory belief. I cannot see 1, understand what 1 is, then claim it is 2 when I understand 1 is different from 2. Does that make sense?

Quoting Jarmo
I don’t think that experience of remembering something requires that you actually have memories. I would grant that we both have memories, but I believe that at this point we step outside of absolute knowledge.


But can you counter the point that I made about memories as being knowledge? The point is that I know I experience memories, not that those memories are accurate representations of a past reality. If I experience a memory of a pink elephant, that memory is what I know, not whether there was a pink elephant in reality that gave me that memory. At that point in the argument, I am claiming nothing more than this. Does this clarify what I'm pointing out?

I appreciate the feedback!
Jarmo September 04, 2020 at 15:57 #449392
Reply to Philosophim

Quoting Philosophim
I can buy a lottery ticket believing that I will win, but with the knowledge that I probably will not.


So at the same time you believe that “you will win” and that “you probably will not win”? I don't think that is possible.
Philosophim September 04, 2020 at 16:53 #449401
Quoting Jarmo
So at the same time you believe that “you will win” and that “you probably will not win”? I don't think that is possible.


I'm going to assume at this point that you've only read part one, otherwise you would understand what I meant by probability, and what I mean by believing something while also knowing that it is not likely. If you read through to part 4, my answer should make more sense.

If this bothers you at this point in part 1, however, notice the claim is not a hard claim; it is a supposition as a starting thought. I then lead from that supposition, refining it into something more concrete using the law of contradiction, ultimately to discover that the purest form of knowledge is a discrete experience. Part 2 begins to take the knowledge of discrete experience and see if it can be applied to reality. For example, I might believe that a creature over there is a sheep, but how do I know it's a sheep? So for now, read part 2. If you still have a question or issue with the point, we can come back to it and I can use the language and examples in part 2 to answer your question better.



god must be atheist September 04, 2020 at 23:23 #449458
Quoting Philosophim
Yes you can. I can buy a lottery ticket believing that I will win, but with the knowledge that I probably will not.


You know a probability. You do not know before the draw date that it's a losing ticket.

You buy the lottery ticket not because you believe you will win, but because you hope you will win.

Two mistakes for two tries. Not a very strong argument.
Philosophim September 05, 2020 at 16:07 #449623
Quoting god must be atheist
You know a probability. You do not know before the draw date that it's a losing ticket.

You buy the lottery ticket not because you believe you will win, but because you hope you will win.

Two mistakes for two tries. Not a very strong argument.


I was giving an example in which a person believes they can win, but knows they likely will not. Having hope that they win can also happen, but was not the example I gave. These are two separate examples. Attacking an example I did not give is not evidence that the argument I gave "wasn't very strong".

Further, you seem to be ignoring the context of the discussion, which is the paper. This is evident because, otherwise, you would know how I define belief, and that it is not defined as "hope". I welcome your contributions to the discussion, but please double check that you understand the context of the discussion before adding your opinion to a question between another poster and myself.

god must be atheist September 06, 2020 at 05:46 #449823
Quoting Philosophim
Further, you seem to be ignoring the context of the discussion, which is the paper.


I put forth respectfully that I am not ignoring the context of the discussion. I am pointing out, instead, that your conceptualization of processes is muddled. You have insisted again on that muddled thinking.

You can't say you know something when you don't know it, and you can't say you believe in something when you don't believe it. You only brought up these two limping and lame examples because you were asked to present a situation. By bending the possible, you presented a situation, but I respectfully insist that the situation you presented is invalid.
Philosophim September 06, 2020 at 13:04 #449882
Reply to god must be atheist

There is nothing respectful about it. Using the words "limping" and "lame" does not show respect. Further, if you insist the presented situation is invalid, you would need to address my counterpoint: that you were using a straw man argument. You did not. You have provided no evidence that you understand how belief and knowledge are defined within the context of the argument.

Combined with word choices that simply assume your short comments have "solved the puzzle without question," I can only assume at this point that you are not interested in having an intelligent discussion, but an emotional one in which you try to make yourself feel good by putting another person down.

Now if you wish to remove words which demonstrate ego, and instead discuss the points of the argument using neutral words like "I think this is wrong because of x," we can happily continue, and I will consider this experience water under the bridge. And this I encourage. It is a poison trap of the mind to believe that indicating superiority over another is the final goal of intellectual discussion. I invite you to address the problem free of ego, and join me in a discussion.
Deleted User September 06, 2020 at 13:14 #449883
Quoting Philosophim
Yes you can. I can buy a lottery ticket believing that I will win, but with the knowledge that I probably will not.
You can have these thoughts with their attendant feelings, but it does not make sense to say you have belief X and knowledge not-X. You could say "a part of me believes that I will win, but I know the chances are lower than that part of me thinks," because we are not unified beings. But it only makes sense in the context of parts of a self.

Let's say my name is Jack.

It doesn't make sense to say Jack believes he will win the lottery despite his knowledge that his chances are very small (unless the lottery is rigged).

It could make sense to say part of Jack believes he will win. Part of Jack believes this is incredibly unlikely.

Generally in philosophy, knowledge is considered a rigorously arrived-at belief. I do believe we can hold more than one belief about something, but then we cannot be summed up as having just one if we have more than one.

Philosophim September 06, 2020 at 16:12 #449911
Reply to Coben

I appreciate the feedback, Coben. The paper defines knowledge as I go. I am answering this question in terms of the context of the paper. And yes, you can hold knowledge but also a separate belief, depending on the circumstance. In terms of the paper, this is the combination of two types of inductions: plausibility and probability. I go over this in part 4. I can hold the belief that it is plausible I will win, but also know the probability indicates it is not likely that I will win. There is no contradiction within the confines of the paper that I am aware of. Feel free to point out if there is one, but you will likely need to reference part 4.
god must be atheist September 07, 2020 at 06:54 #450056
Quoting Philosophim
I invite you to address a problem free of ego, and instead join me in a discussion.


Apparently you take all counter-arguments that are based purely on logic and language, as malicious, direct, and uncalled-for attacks on your ego.

So please don't tell me:

Quoting Philosophim
you try to make yourself feel good by putting another person down.

when you practice the same thing. Ignoring my arguments ONLY because of my provocative style is not what I expected from you. You answered the same arguments that I had presented when Coben presented them to you. He put it differently, but he said the same thing as I did.

I see you can't ignore style. So why the lecture to me about ego, when your biggest obstacle to seeing arguments and responding to them on their merits is your very own ego?

Philosophim September 07, 2020 at 11:51 #450105
Quoting god must be atheist
Apparently you take all counter-arguments that are based purely on logic and language, as malicious, direct, and uncalled-for attacks on your ego.


No, I pointed out that your attitude in your use of language is one of ego, and that you ignored my counterpoint that your argument was a straw man. You still have not addressed this counterpoint, which fully addressed your argument.

You do not have a "provocative" style. It's rude, demeaning, and not interested in hearing the argument or in further discussion. Provocative arguments get directly to the meat of the argument, and should "provoke" thought, not animus. Anyone who assumes being rude or demeaning during a discussion is somehow "intelligent" is using this cover as an excuse to be a rude person for their own self-satisfaction. Perhaps you were under the impression that because some people who purport to be intelligent use it to demean others, that is what intelligent discussion is all about. It is not. If you are unaware that your word choice is rude, then you have received feedback, and I invite you to change going forward. If you wish to see a way to critique an argument in a respectful manner, look at the way Coben typed his reply.

I invite you one last time. I gave you the point that your argument was a straw man argument. If you were unaware that your word choices were inflammatory, it is forgiven if you continue on with the argument itself. To make your points, you will need to read parts 1, 2, and 4 if you are to attack what I mean when I say you can believe you will win the lottery, but know by probability that you are unlikely to win it.
3017amen September 17, 2020 at 21:02 #453234
Reply to Philosophim

Thanks for the invite Philosophim. If I could loosely paraphrase your thesis, in a practical way, I do think it is useful to parse or understand which means and methods are appropriate in gaining wisdom given the circumstances. To this end, and before I comment, I have a question.

The attachment uses a concept called "subjective deduction." Is that your theory or way of combining both a priori and a posteriori kinds of reasoning in an all inclusive way for gaining knowledge and wisdom? (And or perhaps combining subjective truths and objective truths.)
Philosophim September 18, 2020 at 12:25 #453394
Quoting 3017amen
The attachment uses a concept called "subjective deduction." Is that your theory or way of combining both a priori and a posteriori kinds of reasoning in an all inclusive way for gaining knowledge and wisdom? (And or perhaps combining subjective truths and objective truths.)


I have had people relate this to the terms a priori and a posteriori before, and it has often caused them to misunderstand the points. Subjective deduction is really the best summary of what knowledge is. The "subjective" depends on the subjects involved. This may be the self, or the context of friends, scientists, the world, etc.

However, I do have two terms that mirror the priori duo. Distinctive knowledge, or the knowledge of one's discrete experiences, is similar to a priori. Applicable knowledge is similar to a posteriori. In either case, knowledge is not a claim of truth. Knowledge is a claim of deduction. While a deduction may not be the truth, if we were to find and be certain of the truth, it would most likely come from a deduction, and not an induction.

So to your summary, it is near the mark. Just know that these are not a priori and a posteriori as fully defined. Feel free to ask on anything else, I will do my best to clarify the definitions or simplify any arguments I've put forth here.
3017amen September 18, 2020 at 15:26 #453433
Quoting Philosophim
I have had people relate this to the terms a priori and a posteriori before, and it has often caused them to misunderstand the points. Subjective deduction is really the best summary of what knowledge is. The "subjective" depends on the subjects involved. This may be the self, or the context of friends, scientists, the world, etc.


That sounds like a subjective truth. A truth that relates to me and no other object. For example if I have a will to be or a will to exist, what deduction is required for the will?

Quoting Philosophim
if we were to find and be certain of the truth, it would most likely come from a deduction, and not an induction.


Are you sure? Are you suggesting living life is nothing but a tautology? Unless I'm misunderstanding you, your holy grail of knowledge seems to be a priori deduction.

It seems to me you're making a case for subjective idealism.

As such, that would go back to our previous discussion about consciousness and logical impossibility. Since your holy grail is deduction, the consequence of such a methodology in exploring or describing a particular truth value is tantamount to logical impossibility when applied to the nature of a thing. So deduction doesn't help us in parsing ontology/consciousness, because of the mutually exclusive truth values of either/or and true/false. Consciousness doesn't work that way; it's both/and. Deduction can't help us.

On the other hand if one were to analyze a given proposition through empirical analysis and inductive reasoning, there would be more import.

It's kind of like saying mathematical truths and their associated knowledge (a priori/deduction) shouldn't be used to test the validity of anything. In themselves, they are just truths that relate to abstract concepts.

Maybe I'm misunderstanding your theory. How does subjective deduction explain (or describe) consciousness? (Using deduction to describe it results in logical impossibility.)
Philosophim September 18, 2020 at 18:57 #453464
Reply to 3017amen Quoting 3017amen
That sounds like a subjective truth. A truth that relates to me and no other object. For example if I have a will to be or a will to exist, what deduction is required for the will?


No, knowledge is not a claim to the truth. Knowledge is a methodology that, to our understanding, has the best hope of obtaining the truth. Have you read parts 2 and 3? (Almost no one has, lol. I take no offense.) They introduce the idea of context through other subjects.

Since you've read part 1 at least, you can go back to the first part and show how will is a deduction through an understanding of discrete experience. I note that a "will" is a desire, and an action for that desire to happen. That is the distinctive knowledge we have introduced. If we agree upon it within our context, then we attempt to apply that knowledge. I find I can will to type an answer using a keyboard. Reality does not contradict me. I can will to fly with my mind alone, but reality contradicts me. As long as the application of our definition for will is not contradicted, we can know will by application.

Quoting 3017amen
It seems to me you're making a case for subjective idealism.


No, I am not stating that only ourselves exist. In part 2, I go over that very briefly at the start by explaining what an "I" is, and showing that other people are other "I"s.

Quoting 3017amen
Since your holy grail is deduction, the consequence of such methodology in exploring or describing a particular truth value is tantamount to logical impossibility, when applied to the nature of a thing.


Deduction does not prove something to be true. But it is the most rational method of matching to truth, if what we know is true. I go over that in part 4 with inductions. We cannot prove something to be true through knowledge. We can only show that knowledge is a logical methodology that holds conclusions which have not yet been contradicted by reality. As long as reality does not contradict knowledge, then it is rational to hold such a viewpoint as being the best fit for what is true.

If you have a handle on these concepts, then I can go into consciousness. First, consciousness must be defined. Is it the consciousness of the poets, the consciousness of science, or something else entirely? This establishes the contextual distinctive knowledge. Once that is done, we apply it. If we can apply it without contradiction from reality, then we can say, within our context, that we know what consciousness is. If we cannot apply it without contradiction, then we cannot applicably know consciousness within our distinctive context.

What can be concluded is that a contradiction of terms within our distinctive knowledge, or "definition" in this case, means it is not distinctive knowledge. It is a mere belief. And if one cannot apply that definition to reality without a contradiction, it is not applicable knowledge, just a mere applicable belief.
3017amen September 18, 2020 at 21:38 #453507
Quoting Philosophim
we can know will by application.


But we can't know it through deduction.

Quoting Philosophim
Deduction does not prove something to be true.


Would that mean you agree that deduction cannot adequately describe ontology/conscious existence? Otherwise deduction does indeed provide for a priori truth values.

Quoting Philosophim
As long as reality does not contradict knowledge, then it is rational to hold such a viewpoint as being the best fit for what is true.


Using deduction, reality contradicts consciousness and consciousness contradicts reality.

Quoting Philosophim
First, consciousness must be defined. Is it the consciousness of the poets, the consciousness of science, or something else entirely?


All of the above, including all such tenets of philosophical idealism.

Quoting Philosophim
If we cannot apply it without contradiction, then we cannot applicably know consciousness within our distinctive context.


Deductive logic has taught us that consciousness cannot be explained. Hence it's logically impossible, yet it still exists. Just like any self-referential proposition creating contradiction, paradox, and incompleteness.

Think of it this way, Philosophim: if you could explain consciousness logically in principle, you would be living in a different world. Or perhaps more importantly, you would be considered in many ways transcendent, meaning your knowledge and understanding would be outside the usual categories of rational human thought.

Philosophim September 18, 2020 at 22:25 #453512
Quoting 3017amen
But we can't know it through deduction.


Knowing by application is knowing by deduction. I would read parts 2, 3, and 4 if you want to understand it all. Part 1 is only a primer, and is only a small portion of the argument. Don't worry, they're all about the same length.

Quoting 3017amen
Would that mean you agree that deduction cannot adequately describe ontology/conscious existence?


No, deduction can adequately describe ontology or conscious existence without issue. It is all about defining it, then applying it.

Quoting 3017amen
All of the above, including all such tenants of philosophical idealism.


But these are actually different definitions of consciousness within different contexts. Again, you'll need to read through part 3.

Quoting 3017amen
Deductive logic has taught us consciousness cannot be explained.


How is this so? Perhaps if you show me, I will be able to explain it within the terms I've put forward.

But if you wouldn't mind, please read the rest 3017Amen. With this theory, I can answer virtually any knowledge question you ever ask.

3017amen September 18, 2020 at 22:32 #453514
Quoting Philosophim
deduction can adequately describe ontology or conscious existence without issue. It is all about defining it, then applying it.


Then please explain, using logico-deductive reasoning: driving while daydreaming, and being in a coma, living yet not living.

Quoting Philosophim
How is this so?


When you explain the aforementioned you will have the answer.



Philosophim September 19, 2020 at 14:07 #453713
Quoting 3017amen
Then please explain using logico deductive reasoning; driving while daydreaming and being in a coma, living yet not living.


Certainly. I will use the terms in the paper. Please feel free to critique and ask questions if something does not make sense.

When using subjective deduction, we realize that if we applicably know one thing, we can use that as a basis for greater knowledge. The simplest example of this is math. As we applicably know that numbers are deduced by discrete experience, they follow the logic of discrete experience. So if I know that I can create "an" identity, or the number one, then I can also create 2 identities and examine the logic between the two.

Recall the point in which we can examine a field of grass, a blade of grass, or even a portion of the grass as a discrete experience. What this lets us do is affirm that if I create an identity of one blade of grass, and another blade of grass together in my mind, I now have 2 blades of grass. With this logic, I can build algebra, calculus, and all other math.

This applies to knowledge outside of math as well. Let us apply this to driving.

So first, I need the distinctive knowledge, within a specific context, of what "driving" is. As you can see here, the term "driving" has evolved over the years (https://en.wikipedia.org/wiki/Driving). So we don't want the term as used in the 1800s, but the term used today.

Now because you are also chatting with me, we both have to agree on the context of the word as well. So we must both be happy with this definition before we try to apply it. I will propose the definition, feel free to add or detract from it in your reply.

Let's start with driving as "steer, guide, navigate" in regards to a motor vehicle. The vehicle in this case is a car. We will also need a few other definitions: consciousness and daydreaming. Consciousness can be defined as our personal awareness and agency. I can consciously think about the words I'm typing, wondering if the word "expeditious" is spelled correctly. The unconscious happens outside of my awareness or focus. For example, I no longer think about where the letters that make up "is" reside on the keyboard, and I type it without thinking about it at all.

Of course, maybe that's not fully conscious or unconscious, because there are other aspects of the body over which I have no agency at all. I cannot will my digestion to alter, or my kidneys to do a better or worse job of filtration. Some might call this unconscious, but perhaps a better term would be "autonomous". These are functions that are outside of our conscious capability.

Ok, with this established we can more clearly state that consciousness is our agency, and unconscious actions happen outside of our agency, but we could put our focus on them and regain conscious control over them at any time.

That's the first definition. We are going to use that to build into daydreaming, so make sure consciousness is well defined for you first. Now daydreaming is a state of emulated sensory imagination. It is interesting, because we do not have to have conscious focus on our senses at all times. Many times I find I am not conscious of the temperature, or seeing what is in front of my face. Basically this becomes an "unconscious" (as defined here) process.

There is usually the implicit notion that when daydreaming, we are consciously aware of it. For all we know, daydreams and processes are constantly firing in our head, and we are only aware of them when we focus on them. But regardless, I think for your purposes you would like daydreaming to be the conscious focus. If we daydream an emulation of something visual, we tend to focus our consciousness away from what is in front of our eyes. At least, I do. At this point our visual processing becomes unconscious.

Back to driving. Can we drive unconsciously? Yes. https://www.psychologytoday.com/us/blog/sleepless-in-america/200812/can-people-drive-while-asleep
There are several instances of people driving while sleepwalking. As literal "daydreaming" as you can get! At this point the consciousness has no awareness or control, so it must be that the person is driving unconsciously.

With all of these definitions and bits of applicable knowledge set up, now we just piece them together in a way that avoids a contradiction.

1. We applicably know people can drive while sleeping, so people can drive unconsciously.
2. We distinctively know the difference between consciousness and unconsciousness.
3. If a person is daydreaming, we assume their consciousness is focused on that daydream.
4. If their consciousness is not focused on driving, yet they are still driving, it must be they are driving unconsciously.
5. If they wreck while daydreaming as stated above, then they wrecked while driving unconsciously. Their unconscious driving failed to handle the challenges of the road.
6. There is no contradiction in having one's consciousness focused elsewhere while the unconscious mind processes other functions.
7. Therefore there is no contradiction if a person crashes while daydreaming.

There's your start! So feel free to break it down and show where we have disagreements. I appreciate the conversation as always, 3017Amen!
Deleted User September 20, 2020 at 00:37 #453890
Quoting Philosophim
I was giving an example in which a person believes they can win, but knows they likely will not.

Quoting Philosophim
Yes you can. I can buy a lottery ticket believing that I will win, but with the knowledge that I probably will not.

First I want to point out that these are descriptions of two very different scenarios. The belief that one can win, while knowing it is likely one will not, is a description of two beliefs (one of them a belief classed as knowledge) that do not contradict each other. The belief that one WILL win despite one's knowledge of the odds is completely different.

I'm afraid I am not going to read a long essay or series of essays online. If you prefer not to respond to people who won't read the paper, I'll understand.

If one has knowledge X, one believes X based on criteria that you and/or others have decided are rigorous enough to class the conclusion as knowledge.

I cannot know that the sun is one star amongst many and not also believe that.

We can, of course, hold more than one belief, and these can be contradictory.

But if I know the odds are very low that I will win the lottery, I believe that.
I may also believe that I will win, based on gut feelings.

IOW I may be in a state of cognitive dissonance, unable to reconcile beliefs that contradict each other.

But one cannot sum me up, then, as simply believing I will win. I also believe, based on my criteria for knowing that I have very little chance, that it is unlikely I will win.

UNLESS... I also believe that the odds are very low but I am psychic, OR the lottery is fixed. Then the beliefs do not contradict each other. Though I would then quibble with "the odds are low" in this specific scenario, and any special third belief that affects my thinking needs to be mentioned in the scenario description.

But I can neither be summed up as just believing I will win, nor as just believing it is unlikely I will win. I must be described as having two beliefs that contradict each other.

For example, it makes no sense for me to say, I know it is raining, but I don't believe it is raining.

Philosophim September 20, 2020 at 13:13 #454059
Quoting Coben
First I want to point out these are descriptions of two very different scenarios. The belief that one can win, but knows it is likely they will not, is a description of two beliefs (one a belief classed as knowledge, that do not contradict each other. The belief that one WILL win despite one's knowledge of the odds, is completely different.


Correct. One can have knowledge, but believe that knowledge is wrong. One cannot both know, and not know the same thing. One cannot believe, and not believe the same thing. But one can know something, and believe their knowledge to be wrong. This is what you were meant to pull out of the example.

Knowledge is a logical process that must follow a certain path, and arrives at deductive conclusions. A belief is simply a wish or desire that something is a particular way. I can believe whatever I want. But what I can know is based on a logical process and deductive conclusion. Part 4 goes into inductions, the specific kinds of beliefs like probability, possibility, plausibility, and irrational beliefs. There I analyze what each entails, examples of when it is used, and the soundness of them.

Quoting Coben
I'm afraid I am not going to read a long essay or series of essays online. If you prefer not to respond to people who won't read the paper, I'll understand.


That is fair. This OP is about those essays though. I would wonder why you would post if you aren't going to read the theory though. I can't imagine arguing about a theory I have no knowledge of.

I define knowledge and beliefs in a very particular way, using logic from the base up. As such, I'm going to use those terms here. You may find the reads enjoyable. I have never had a single person prove these essays' conclusions wrong. In fact, I use this method of knowledge within my day to day. Just a small background if you are concerned it is amateur: I have a master's in philosophy, and I program for a living today. I am no intellectual slouch, or naive. It does not mean my argument is correct, only that it is likely worth your time to read.

Each is about the length of part 1. Part 1 is basic, and does not vary much from the conclusions of many other epistemologists. Part 2 is where you'll see a new way of looking at knowledge. Part 3 and 4 are mostly expanding upon the conclusions of part 2, and are only needed if there are further questions.





Deleted User September 21, 2020 at 01:38 #454264
Quoting Philosophim
Correct. One can have knowledge, but believe that knowledge is wrong.


Though one would also believe it was true. Knowledge is a kind of belief. You can't say "I know that the earth revolves around the sun, but I don't believe it." That is not a complete description. You also believe it. You would be in a state of cognitive dissonance.

Quoting Philosophim
One cannot believe, and not believe the same thing

It's an incomplete description at best. I can believe X and not X, though. I can believe that I will graduate college, since I am managing my courses well and have been complimented by my professors, but also hold a belief that I am a failure and won't manage. One can have, and I have had, such contradictory beliefs, and not just about myself, but about statements about the world.

Quoting Philosophim
A belief is simply a wish or desire that something is a particular way.

A belief is something one believes. We often form beliefs through perfectly good non-conscious processes, ones we have not carried out formally and consciously. We form beliefs through all sorts of processes, some rigorous, others not, and both the rigorous and non-rigorous ones are fallible.

Quoting Philosophim
Knowledge is a logical process that must follow a certain path, and arrives at deductive conclusions.

Deduction being one process, but not the only one, even within science, say.

Quoting Philosophim
That is fair. This OP is about those essays though. I would wonder why you would post if you aren't going to read the theory though. I can't imagine arguing about a theory I have no knowledge of.

It's a discussion forum; people tend to present their ideas in discussion form as well, and I get knowledge via that. I think the medium is best suited for those discussions, but obviously people can use the forums in a variety of ways. Yes, I am not critiquing your theory in the sense that I am not critiquing your papers. But even in your presentation I see assertions that I can interact with. Conclusions. Those are yours and I can respond to them.





Philosophim September 21, 2020 at 13:27 #454385
Quoting Coben
Though one would also believe it was true. Knowledge is a kind of belief.


No, just because you have knowledge of something does not mean that you believe it to be true. I have knowledge of a meeting scheduled tomorrow at 5 pm. Is it true that the meeting will happen tomorrow at 5 pm? It turns out someone cancels earlier in the day, and the meeting does not happen.

Further, you may get into discussions that seem to make perfect sense. You know what the premises and the conclusions are, but you just don't believe it to be true.

Knowledge is not a claim to truth. It is a subjective deductive process. Now, if we have a belief that something is true, it is more likely to be true if we use deduction than induction. This is the way science functions. Science does not claim truth. It claims that certain theories have not been proven false yet.

And yes, knowledge is a kind of belief. As I've written here, it is a subjectively deduced belief. The paper is a quest to identify what knowledge and epistemology are. Currently within philosophy, there is no agreement. This is why it is probably best that you read the paper before continuing. Would you critique a plumber on how they are putting the sink together without first reading the instructions? If you are going to put forth the effort to have a discussion, which is a great positive btw, it would make the conversation go much smoother if you understood the terms we are discussing.

Quoting Coben
It's an incomplete description at best. I can believe X and not X, though


In a logical sense, you cannot believe both X and not X. When discussing logic, X = X 100%. If it's X = 99.999999999% of X, that's not the same thing. So (Not) X = X is impossible. (Not) X = 99.999999999% of X is possible.

Quoting Coben
I can believe that I will graduate college, that since I am managing my courses well, have been complimented by my professors, but also have a belief that I am a failure and won't manage


So your following example is not a contradictory belief. You believe that you could either pass, or fail. You don't have a 100% belief that you will pass, and a 100% belief that you will fail. Believing either could occur is just fine.

Quoting Coben
Deduction being one process but not the only one, even within science say.


Now this is outside of the paper. What other form of knowledge do we have besides deduction? This may also help me bring the paper's terms into some other context. After all, I don't want to write the whole thing again in our discussion. =P

Quoting Coben
Yes, I am not critiquing your theory in the sense that I am not critiquing your papers.


Again, you are critiquing how I am building a sink, telling me "the pipes don't go that way" when you don't have the instructions in front of you. Now if you read the instructions and inform me I am incorrect, that can work just fine.

It would be nice too. I have a slight burden here. I've never been able to find anyone who can prove my theory wrong. Trust me, this is not through lack of trying. If I could prove it wrong, then I would be done with it. I have tried to attack it every which way I can think of, but I just can't invalidate it. It would be of immense help if someone else would try, and perhaps point out something I'm missing.

Since you're already invested this far in the conversation, would you help me out? All told, it's about as long as a philosophy journal article.
Deleted User September 22, 2020 at 05:51 #454710
Quoting Philosophim
No, just because you have knowledge of something does not mean that you believe it to be true. I have knowledge of a meeting scheduled tomorrow at 5 pm. Is it true that the meeting will happen tomorrow at 5pm? It turns out someone cancels earlier in the day, and the meeting does not happen.
You had knowledge there was going to be a meeting and then it was cancelled. You were correct that a meeting was scheduled.

Of course, what we consider knowledge can turn out not to be the case. So, even tougher examples can be made. But this example is clearly a situation where you believe that a meeting is scheduled. If you believed that all scheduled meetings will take place, that other belief caused you problems with the knowledge you, in fact, had.

And let's be even more specific: you said you had knowledge that there was a meeting scheduled. This means you arrived at this conclusion via some rigorous process. A memo was sent to all employees about the meeting. You double-checked with your supervisor. You got a request from the supervisor's assistant to prepare issues in advance for the protocol, etc. All information that supports the conclusion that a meeting was scheduled.

Now if you worked for a company that schedules meetings all the time, and they are often or regularly cancelled, that would affect any knowledge you have about whether it must take place.

I should add also that you are using a phrase that indicates less certainty than what one usually means when one uses the term knowledge: "had knowledge of." You also shifted to predictive knowledge, rather than, say, knowledge about the world or some facet of it. One can certainly consider that knowledge, but in general that is statistical knowledge, especially when dealing with incredibly complex phenomena like a meeting, where personalities, crises, traffic, changed priorities, illness and a variety of other factors can always change outcomes.

Megarian September 22, 2020 at 14:32 #454816
I have problems beginning with the first sentence.

"Any discussion of knowledge must begin with beliefs.  A belief is a will, or a sureness reality exists in a particular state". 

This claim seems to be that you were born with a set of beliefs that you've tested to create knowledge.

'A thirteen-month old child is playing by himself on a rug. He encounters a wet-spot on the rug next to his bottle. He picks up his bottle and sucks on it. Then he rubs the wet-spot again. He shakes his bottle until several drops fall out onto the rug. He rubs his hand over the new wet-spot.' A Young Child is… (A documentary film by Barry Hampe)

In his book ‘Making Documentary Film and Reality Videos’ Hampe writes about obtaining this footage. They wanted a piece on walking and talking in toddlers. The crew simply focused their cameras on this young boy and waited for him to walk or talk. It wasn’t until they began to edit the film that they found this piece of natural learning. The child perceived a wet-spot in his environment and it aroused his curiosity. Experience leads him to associate wet-spot with bottle contents. He tested this idea and confirmed its correctness.

This is how knowledge-claims come into being. We wonder, we test; we store the information away for future use and testing. Knowledge-claims are not created by belief-claims. Belief-claims are created as justification for knowledge-claims.

Belief-claims can be tied to objective proofs (Certainty), subjective proofs (Certitude) or no proofs at all. It's knowledge-claims that require an objective justification to be valid.

The main problem is logic as Justified True Belief (JTB).
Logic is a knowledge-claim in itself. It is an enclosed system with specific rules. Correctly following the rules will create a logically valid conclusion, but its validity holds only within that enclosed system. Logical proofs only prove logical systems, i.e., the problem of self-reference.
Then there's the Gettier Problem to answer. Wikipedia has a primer on that.










Philosophim September 22, 2020 at 15:26 #454838
Quoting Coben
This means you arrived at this conclusion via some rigorous process.


Yes, and what was that rigorous process? I lay that out in the paper.

Quoting Coben
I should add also that you are using a phrase that indicates less certainty than what one usually means when one uses the term knowledge: "had knowledge of." You also shifted to predictive knowledge, rather than, say, knowledge about the world or some facet of it.


I am using knowledge as I defined in the paper. Distinctive, and applicable. Statistical knowledge would be distinctive, and its predictive claim would be an induction.

Philosophim September 22, 2020 at 16:14 #454844
Quoting Megarian
This claim seems to be that you were born with a set of beliefs that you've tested to create knowledge.


No, I am not claiming you are born with beliefs. I state in the paper that beliefs are things you make. Knowledge is an attempt to figure out which beliefs match reality without contradiction. The beginning is a thought process to a particular conclusion. That sentence is not a conclusion, only a first premise I consider.

Quoting Megarian
Experience leads him to associate wet-spot with bottle contents. He tested this idea and confirmed its correctness.


Fantastic, I think you will like my conclusions within the paper then. I would indeed conclude that within the child's context, they had knowledge. While the description is a witnessing of this behavior, my paper breaks the process down into a repeatable and verifiable one.

Quoting Megarian
knowledge-claims are not created by belief-claims.


I believe you are incorrect on this. A belief is an assertion of some kind. Knowledge occurs after you run a belief through a process. We can have very certain beliefs, but they are not knowledge. I go over this in the paper.

Quoting Megarian
Belief-claims can be tied to objective proofs (Certainty), subjective proofs (Certitude) or no proofs at all. It's knowledge-claims that require an objective justification to be valid.


Right, this can be broken down even more simply into having inductive beliefs and deductive beliefs. What entails objective? What entails a deductive versus inductive belief? Are there certain inductive beliefs that are more reasonable to hold than others? The paper covers all of this, and I think you will be pleased at many of my conclusions.

Quoting Megarian
The main problem is logic as Justified True Belief (JTB).


In one of the many iterations of this paper, I used the Gettier problem as a starting point. I removed it because it makes the paper too long, the average person is unfamiliar with the Gettier problem, and it is ultimately unneeded to start the process. Once you read it and understand that I am not proposing JTB, feel free to use the Gettier argument as a method of refutation. I would love it if a person who has some familiarity with common epistemological theories would critique the theory after reading it.

I appreciate your contribution!
Jarmo September 23, 2020 at 11:49 #455092
Reply to Philosophim

I must disappoint you by saying that I did not read the later parts. I did try, but quickly came to the realization that I really should try to understand part 1 first. Surely understanding of the first part should be a requirement for understanding the later parts? Or at least it shouldn’t be the other way around?

However, after reading some of the recent discussions in this thread, I realized that my understanding fails already at the very first paragraph. Initially I thought that you defined belief in a similar way as I (and I assume most others) would, but in a response to Coben you say this:

A belief is simply a wish or desire that something is a particular way.


Equating belief with a wish or desire seems quite extraordinary. According to Stanford Encyclopedia of Philosophy:

Contemporary Anglophone philosophers of mind generally use the term ‘belief’ to refer to the attitude we have, roughly, whenever we take something to be the case or regard it as true.


To me that’s quite a common sense definition. Would you agree with that (and I’m just not understanding the way you define the same thing) or is your conception of belief really something very different?
Megarian September 23, 2020 at 18:56 #455185
I have to agree with Jarmo here. It's the validity of the premises I question.

"That sentence is not a conclusion, only a first premise I consider." - Philosophim
And
"Knowledge occurs after you run a belief through a process." - Philosophim

I will unpack the example of the toddler and the water bottle more fully.

He feels a wet spot on the floor - information.
He drinks from the bottle, it's wet - information.
He shakes a few drops onto the floor and feels the result - information testing.

Tested information is knowledge. The knowledge claim is knowledge plus belief in the validity of the claim. Starting with belief ignores the fact that a belief can be held without any validity. Knowledge precedes belief.

I know you try to work through this further on in the paper. Consider the meta-criticism I made.

You are attempting to create an epistemology based on a formal logic system. A formal logic system cannot serve as a foundation for an epistemology.

A formal logic system is itself a knowledge claim.
A formal logic system can validate logically proofs that have no actuality.
A formal logic system creates logical proofs that only prove the system and only within that system.

Logic is one of the best tools we've created to refine information into ever more precise knowledge-claims. Like all knowledge-claims it has limitations, particularly when it comes to creating knowledge-claims.









Philosophim September 24, 2020 at 11:29 #455461
Reply to Jarmo
Quoting Jarmo
I really should try to understand part 1 first.


Not a worry! As long as you are referencing the paper, we can have a great conversation.

Contemporary Anglophone philosophers of mind generally use the term ‘belief’ to refer to the attitude we have, roughly, whenever we take something to be the case or regard it as true.


This is also a great way of defining a belief. A wish or desire is one of the attitudes we can have. A "certain way" implies a match with reality, or "what is", or "truth".

Feel free to point out if I've made an error in the argument if we use the encyclopedia definition.
Philosophim September 24, 2020 at 11:43 #455463
Quoting Megarian
Tested information is knowledge. The knowledge claim is knowledge plus belief in the validity of the claim. Starting with belief ignores the fact that a belief can be held without any validity. Knowledge precedes belief.


What am I testing if I have no belief? After combining the first two bits of information, the child had a hunch, or a belief that the drink and the wet spot had a link. Otherwise, why would the child purposefully shake the bottle and then immediately feel it?

This is also what science does. There is a hypothesis, a belief, and then that belief is tested. My knowledge theory will support, and also explain at a level beyond simple observation and opinion, that yes, the child knows that the wet spot came from the water in the bottle. This is a creation of a formal theory of knowledge that can be used to explain why that child knows in terms that can be reapplied to any case.

Again though, read part 2 at least. Part 1 is about distinctive knowledge, or "the knowledge of identities". Part 2 is applicable knowledge, or how I apply the knowledge of identities without contradiction. Once you understand this, I can go back to the child argument and show how the child has applicable knowledge that the spot on the floor came from the bottle.

Quoting Megarian
A formal logic system is itself a knowledge claim.
A formal logic system can validate logically proofs that have no actuality.
A formal logic system creates logical proofs that only prove the system and only within that system.


Heh, yes, I am well aware of this. A good summation of the theory is "subjective deduction." What I do conclude is that this methodology of knowledge is our best bet at creating conclusions that match reality. Knowledge is a tool of measurement. But it is clearly different from simple beliefs. I conclude nothing different from your points above. That does not invalidate it, because the alternative to logical thinking is illogical thinking. In part 4, I demonstrate why illogical thinking is sometimes necessary, but why, using probability, logical thinking is usually the smarter choice if one wants to understand reality.

Quoting Megarian
Logic is one of the best tools we've created to refine information into ever more precise knowledge-claims. Like all knowledge-claims it has limitations, particularly when comes to creating knowledge-claims.


Yes, I show that in part 2 with applicable knowledge, later in part 3 with contextual knowledge, and finally in part 4 where I cover the tiers of cogency with induction. Part 1 is only the primer. Think of it as the "I think therefore I am" portion of many epistemology arguments. Every tool has its limits. But limits do not mean it is invalid. Part 4 takes the limitations of knowledge, then says, "What do we do when we reach those limits?" and then uses the lessons learned prior to identify levels of induction that are more cogent than others.

While you may have trouble with the premises, you seem to not be arriving at my conclusions. You are drawing your own conclusions first, without reading mine. Find my conclusions, then see if the premises fit the conclusion. Otherwise, you're only critiquing half of the argument, and you seem to be expecting me to draw conclusions that I do not, while missing the conclusions that I do draw.

Megarian September 25, 2020 at 15:40 #455931
"What am I testing if I have no belief?
After combining the first two bits of information, the child had a hunch…."
- Philosophim
 
The information is being tested.
The "hunch" is the instinct to learn, born of evolutionary heritage, made rational by evolving in a rational world.
 
 "…alternative to logical thinking, is illogical thinking." - Philosophim
 
Logical systems are human creations based on the instinctual rationality formed by evolving in a rational world. Rational thinking does not require a formal logic. The majority of rational thought is simply utilitarian: knowing something worked before and believing it will work again; from this, logical systems are born.
 
This is a creation of a formal theory of knowledge that can be used to explain why that child knows in terms that can be reapplied to any case. - Philosophim
 
Then you should be able to unpack your conclusions from the system and return to the field where you encountered a sheep, and apply them to the sheep (or any animal you're familiar with) to explain its knowledge system. (Non-human learning and knowledge)
 
How would your conclusions explain people holding and acting on beliefs that they admit have no actuality to support them; in some cases admitting that those beliefs are disproved? (Belief as emotional attachment to a social group.) How does it apply to Scientific Methodology's creating knowledge claims as a social activity?
 
"While you may have trouble with the premises, you seem to not be arriving at my conclusions." - Philosophim
 
Well yeah, of course. You've carefully designed an enclosed abstract logical system where the design ensures the premises support the conclusions. The only lines of criticism open are the premises.
 
Your conclusions, whether they have real-world validity or not, are not relevant to my criticism: that an abstract formal logical system, no matter how carefully constructed, cannot be imposed on the world as a foundation for an epistemology.
 
It should be obvious by now that my strict-materialist evolutionary naturalism will find few points of agreement with logical idealism.
If you are only interested in someone to parse the logical formulae and not interested in systemic criticism, feel free to say so.
Philosophim September 25, 2020 at 16:29 #455944
Quoting Megarian
The information is being tested.


What is the information? Let's be generous and simplify it to "the wet spot on the ground and the liquid in the bottle."

What is a "hunch"? A belief that the liquid in the bottle caused the wet spot on the ground.

I think we're having a semantics issue. Part 3 covers this. We are speaking about the same thing; we're just using different words to represent those things. Let's not get caught up in that.

Quoting Megarian
Logical systems are human creations based on the instinctual rationality formed by evolving in a rationale world.


I agree. Part of the paper goes into the question of limiting one's context versus expanding one's context for knowledge. It notes that for some people, limiting their context might be more helpful for them. Like the example I use in part 3 with the biologist, the group of friends, and the "tree".

Quoting Megarian
Rational thinking does not require a formal logic.


I would disagree. Rational thinking requires logic. Now, most of our thinking is not rational, because rational thinking would take too much time. But one problem in epistemology is determining the validity of different types of irrational thinking. This can also be called "the problem of induction". Part of the theory's purpose is to give a rational way of evaluating which forms of "irrational thinking" are most valuable. However, we must first understand what rational thinking is, and rationally evaluate which inductions are more cogent than others. That is covered in part 4.

If you wish to have this statement be more than an opinion, you'll need to point out in the paper why I am incorrect in making this conclusion.

Quoting Megarian
Then you should be able to unpack your conclusions from the system and return to the field where you encountered a sheep, and apply them to the sheep (or any animal you're familiar with) to explain its knowledge system. (Non-human learning and knowledge)


Ha ha! You need to read part 2. I do exactly that. It is a nod to the famous epistemological argument of course. And yes, my system can be used from an animal's context as well. I can show how a dog or a child can know. Of course, in the dog's case they would need to understand what a contradiction is. I believe they do at a basic level.

Quoting Megarian
How would your conclusions explain people holding and acting on beliefs that they admit have no actuality to support them; in some cases admitting that those beliefs are disproved? (Belief as emotional attachment to a social group.) How does it apply to Scientific Methodology's creating knowledge claims as a social activity?


Part 3 and Part 4. Science is a context. As for holding beliefs that are not knowledge, I go over in part 4 what plausible and irrational beliefs are, and demonstrate that they are on the lower tier of beliefs when we have higher tiers available to consider. I give plenty of examples there, feel free to pull one out to discuss.

Quoting Megarian
Well yeah, of course. You've carefully designed an enclosed abstract logical system where the design ensures the premises support the conclusions. The only lines of criticism open are the premises.


This is quite possibly the biggest compliment I could receive as a philosopher.

However, I believe that in your criticism you are drawing conclusions that are not being made from these premises. You are thinking about where I'm going to go, instead of seeing where I am actually going. In essence, one cannot critique a conclusion or its premises in isolation from one another. Criticizing one or the other must take the other into account to be valid criticism.

Quoting Megarian
an abstract formal logical system, no matter how carefully constructed, cannot be imposed on the world as a foundation for an epistemology.


All this translates to is, "I believe, before I've read and understood your argument, that it's wrong, because I believe it cannot be right".

Perhaps I have failed in my construction. You need to show me this. I would like to think you are a bit more charitable than that. But if you simply think I'm wrong, so wrong in fact that you won't even bother reading and understanding it, then further discussion will not get us anywhere. I understand that a lot of these questions are to feel out whether it's worth your time to read it. But at this point, I think I've given plenty of reasons that are pertinent to your interest.

All of these questions you've pointed out are addressed in some fashion in the paper, and it's no longer than a philosophy journal article. Keep your questions in mind while you read it. Read the whole thing through once. Then feel free to point out where I have failed so we can get to the meat of these questions. Thanks!


Jarmo September 26, 2020 at 12:12 #456287
Reply to Philosophim
Quoting Philosophim
Feel free to point out if I've made an error in the argument if we use the encyclopedia definition.


So, if we go with the encyclopedia definition, we can go back to the first sentence that I quoted from your paper:

[Knowledge] is both the belief in something, and a further belief that “the something” is co-existent with reality


If we rephrase the encyclopedia definition with the words used here, we get: belief in something is the attitude we have, whenever we regard that “the something” is co-existent with reality. Then what does the “further belief that ‘the something’ is co-existent with reality” add here?
Deleted User September 26, 2020 at 23:39 #456490
Quoting Jarmo
So, if we go with the encyclopedia definition, we can go back to the first sentence that I quoted from your paper:

[Knowledge] is both the belief in something, and a further belief that “the something” is co-existent with reality


Yes, to me the encyclopedia definition cannot fit that sentence, because any belief in something already includes the further belief. It is implicit in that sentence that one can have a belief in something but not believe it is co-existent with reality. (I am also not sure what this latter phrase means. I assumed at first it meant something like 'real' or 'the case', but actually co-existent means living in or occupying the same space as something else. If I believe in yaks, then this means I believe yaks are not part of reality but in the same space as it, not a subset.) Further, it makes for an odd epistemology. Knowledge is not a belief that meets certain criteria (generally rigorous ones) but a belief that is two beliefs, neither of which must meet certain criteria.

So, pick a belief you consider false: an Abrahamic God, alien abductions, whatever you consider a false belief. It is clear that believers in alien abductions believe in alien abductions and consider these to be co-existent with reality. Or real. So this would mean it is knowledge. Or perhaps he is saying they consider it knowledge, which is often also true, since most people conflate belief and knowledge, or don't have any extra criteria except a degree of certainty not based on thought-out criteria.
Philosophim September 27, 2020 at 13:42 #456649
Reply to Jarmo
Quoting Jarmo
Then what does the “further belief that ‘the something’ is co-existent with reality” add here?


A good question. Let's look at the second portion of the definition.

roughly, whenever we take something to be the case or regard it as true.


Being co-existent with reality is an assertion of your belief "being the case", or of yourself regarding it as true. What I've done is define what "being the case" is. If you have a belief that you can pick up a ball, then you believe reality will not contradict you when you go to pick up the ball. You believe it is the case that you will be able to pick up the ball. You believe that it is true that you will be able to pick up the ball.

The point of stating "not contradict reality", is to spell out what "being the case" is in a less clearly abstract manner. Basically, what is true is what cannot be contradicted.

I believe these to be two separate beliefs. For me, belief is more atomic. I can contain in my head different types of beliefs that I do not hold as "being the case". They "might" be the case. So if I go outside, it might be sunny, or rainy. These beliefs could be contradicted by reality. I don't know for certain, but I don't believe that if I go outside I'm going to warp to another dimension. This is a claim that I believe will not be contradicted by reality.

So to me, having a belief does not necessitate that it must be the case. Beliefs can be inductive, and doubted in one's mind. That does not mean they are not beliefs. A particular type of belief in which one also believes the belief is not contradicted by reality, is an attempt at a knowledge claim. If I believed that when I stepped outside it would be sunny, and not any other state, I am making a knowledge claim that could be contradicted by reality, but I believe will not be.

In the introduction, I am trying to use the most basic language and build up from there, but it seems this needs greater explanation at the beginning. If I added an explanation like the one I gave above, would that help clarify the issue? Do you think a better example or word can be given for what I am trying to describe? I appreciate the feedback!

Deleted User September 27, 2020 at 13:56 #456653
Quoting Philosophim
The point of stating "not contradict reality", is to spell out what "being the case" is in a less clearly abstract manner. Basically, what is true is what cannot be contradicted.

This would mean your definition of knowledge is even more rigorous than, for example, that in science. Because science is always - at least theoretically - open to revision. At the point in time something gets accepted as knowledge, the scientific community has no evidence to contradict the theory; however, it is not determined that it cannot be contradicted.

That is a very hard thing to predict. Strong evidence, nothing to falsify it now, and no competing theory with either more evidence or fewer posited entities: that is more or less current practice in science. No claims are made that it cannot be contradicted.

Though perhaps you meant 'cannot be contradicted now as far as we know.'

Quoting Philosophim
roughly, whenever we take something to be the case or regard it as true.


Being co-existent with reality is an assertion of your belief "being the case", or yourself regarding it as true.


I think everyone believes that their beliefs are the case and regards them as true. They may have different degrees of certainty. It might be a shaky belief as a kind of working hypothesis, or it might be something one considers must be the case.

Knowledge is a communal belief - at least in many epistemologies. What beliefs do we decide are knowledge? And some set of criteria are put forth.





Philosophim September 27, 2020 at 14:12 #456659
Reply to Coben

Coben, I had accidently submitted my reply before it was finished. If you don't mind, feel free to re-examine it and see if it further answers your question.

There is an extra bit here though I think I should address as well.

The sentence preceding my particular sentence on belief is, "Knowledge expects a consistency." I then explain that knowledge is the pairing of two beliefs together. The following sentence notes, "Yet how can one be certain one’s belief is co-existent with reality?" The intent I was hoping to convey here was: knowledge is a set of beliefs that people feel with certainty, but then we must be able to demonstrate that this "claim to knowledge" is correct.

So it appears my writing is poor and does not convey this. If I changed this section to identify what a "knowledge claim" was, would this make it more clear? As a rough draft: "So what can we call knowledge? At first glance, knowledge appears to be a claim that it is both the belief in something, and a further belief that “the something” is co-existent with reality"... and then continue on with a bit of the rewrite I've mentioned in the previous post. Essentially really emphasizing that at this point in the thought process, knowledge is a claim that one's belief cannot be contradicted by reality, and that to demonstrate this, there must be some application of that belief to reality.

Quoting Coben
Though perhaps you meant 'cannot be contradicted now as far as we know.'


To clarify, it is not "as far as we know now"; it is, "We cannot be contradicted." That can only be done "now", as the future cannot contradict you. So yes, I could know one thing now, then later find that knowledge is invalidated by a contradiction.

And another portion of your earlier reply:

Quoting Coben
So, pick a belief you consider false: an Abrahamic God, alien abductions, whatever you consider a false belief. It is clear that believers in alien abductions believe in alien abductions and consider these to be coexistent with reality. Or real. So this would mean it is knowledge. Or perhaps he is saying they consider it knowledge, which is often also true, since most people conflate belief and knowledge, or don't have any extra criteria except a degree of certainty not based on thought-out criteria.


Yes, this is good. What the paper will show is how we identify if something is not contradicted by reality. And yes, depending on your context of definitions, you can know that the God of Abraham is not contradicted by reality. We can also create another context in which you cannot know the God of Abraham, because such a context is contradicted by reality. How we tackle different contexts, and which contexts we should strive for is commented on in chapter 3, and would be a great discussion once you read that section. If I could explain all of the possible topics and consequences of a one sentence proposal on knowledge, I would. Alas, I had to write about 20 pages to do so. =P

Thanks for the feedback!
Megarian September 29, 2020 at 17:07 #457338
I think we're having a semantics issue.
- Philosophim
 
These are not semantic issues; they're the meat of these questions. We have completely opposing worldviews. 
 
The world has order. This order is the predictive relationships of environments.
Environments are unstable. Organisms that evolve simple genetically coded behavior are less likely to survive than those organisms that evolve Associative Learning Mechanisms (ALM) to gather information about their environment.
 
Species process the ALM of the predictive relationships in their environment by evolving Associative Application of Information (AAI) mechanisms. 
AAI is rationality:
information A with information B results in -C, C, or +C.
 
This is the rationale; genetically encoded in life long before there were any humans to use it to create logical systems.
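One way to read this encounter-associate-test pattern is as a minimal learning loop. This is my own toy sketch, not Megarian's formalism: the function names (`associate`, `predict`) and the -1/0/+1 outcome scoring are assumptions I am making to illustrate the "-C, C, or +C" idea.

```python
# Toy sketch of the AAI pattern: encounter information, associate it,
# and test the association. Outcomes are scored -1 (harmful),
# 0 (neutral), or +1 (beneficial).

def associate(memory: dict, a: str, b: str, outcome: int) -> None:
    """Record the tested outcome of pairing information a with information b."""
    assert outcome in (-1, 0, 1)
    memory[(a, b)] = memory.get((a, b), 0) + outcome

def predict(memory: dict, a: str, b: str) -> int:
    """Predict whether the pairing is expected to help (+1), harm (-1), or neither (0)."""
    score = memory.get((a, b), 0)
    return (score > 0) - (score < 0)  # sign of the accumulated score

m = {}
associate(m, "red berry", "eat", -1)   # tasted bad
associate(m, "red berry", "eat", -1)   # tested again, still bad
print(predict(m, "red berry", "eat"))  # -1: the association predicts harm
```

The point of the sketch is only that prediction here is a mechanical consequence of stored associations and their tested outcomes; no language or formal logic is involved.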
 
Rational thinking requires logic. - Philosophim
 
No, logic is a creation of rational thinking.
In the example of the pre-verbal toddler, he has neither language nor belief. His activity is a genetically encoded AAI mechanism.
Encountering information, associating information and testing the association.
 
The example of the toddler is not a philosophical abstraction created to illustrate a point. It's a real-world phenomenon to which I am giving an explanation, and I don't see that process in your paper. There are no real-world testable conclusions in your paper.
 
But one problem in epistemology is determining the validity of different types of irrational thinking.
- Philosophim
 
I disagree, real irrational thinking is dysfunction, brain damage or chemical imbalance.
Other than that, thinking follows the rational AAI pattern.
Criticism of validity is at the points of information, association or testing.
 
A foundation for epistemology needs to produce testable claims about the phylogenic, ontogenic and cultural environments.
 
Such a claim must stand under criticism: 
On its utility in problem solving. (Does it work?) 
On its internal coherence.  (Is it self-contradictory?)  
On its external consistency.  (Does it 'fit' in a framework of other claims about the world?) 
On its seminality.  (Does it/can it lead to new/more precise claims?)
 
I find no utility in the system you propose. Your answers to criticisms are self-referential, externally inconsistent, or Argumentum ad dictionarium. Being devoid of testable claims, it has no seminality.
 
The paper gives us no information promoting an understanding of the general nature of knowledge or the species-specific human nature of knowledge creation and use. It's answers to these questions that an epistemology can be founded on.

Until you produce a testable conclusion on the nature of knowledge, not the nature of a logical system, you have no foundation for an epistemology.
Philosophim September 29, 2020 at 22:09 #457397
Reply to Megarian Quoting Megarian
We have completely opposing world views.


I honestly think it's not that different. Let's see if I can demonstrate this.

Quoting Megarian
This is the rationale; genetically encoded in life long before there were any humans to use it to create logical systems.


No disagreement. This does not counter my theory. I am observing the rules that result from our biological abilities and limitations, which does not require us to know how those come about. You'll notice in part one I mention that we do not need to understand why we discretely experience, only observe that we do.

Quoting Megarian
The example of the toddler is not an philosophical abstraction created to illustrate a point. It's a real world phenomena to which I am giving explanation and I don't see that process in your paper. There are no real world testable conclusions in your paper.


There are plenty of tests in the paper. The reason why I like the theory is that all of these tests can be done yourself. In fact, they have to be applied; otherwise they are only distinctive knowledge, and not applicable knowledge. The toddler is a real-world occurrence, and we can abstract that occurrence into a methodology. For example, I can see a few blades of grass. I can then abstract each blade of grass as a number. My abstraction of the baby's actions does not deny the baby's actions, only explains them in a logical methodology. I gave you a breakdown; is my breakdown incorrect? If not, why?

Quoting Megarian
No, logic is a creation of rational thinking.


That is perfectly fine, it does not change anything claimed here. If you are referring to logic as the formalized linguistic expression of rational thinking, then yes, I fully agree. When I am using logic here, I am only talking about rational thinking, minus the need for language. To me, linguistic logic would be formalized logic, but that is completely unnecessary here. Distinctive knowledge does not require any language. Same with applicable knowledge. Language is a result of distinctive knowledge. Useful language is a result of applicable knowledge. But language is not necessary for distinctive or applicable knowledge.

Quoting Megarian
Encountering information, associating information and testing the association.


This is mirroring my theory here. Create distinctive knowledge, and apply distinctive knowledge. Testing is an option within distinctive knowledge, but not a necessity.

Quoting Megarian
But one problem in epistemology is determining the validity of different types of irrational thinking.
- Philosophim
 
I disagree, real irrational thinking is dysfunction, brain damage or chemical imbalance.


I will clarify, as I am speaking within the context of the paper again. You'll notice that I summarize the theory of knowledge here as "subjective deduction". This leads into an analysis of induction. The "irrationality" is using induction at all. Using this theory, I am able to address the problems with induction, showcasing four levels of inductive thought and demonstrating a tier system of cogency. One difficulty with knowledge is being able to demonstrate why it is more rational to use certain types of inductions over others. For example, intuitively, why is it more rational to believe the sun will rise tomorrow than to believe that it will not rise tomorrow? A breakdown of the terms and logic can identify why.
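The four-tier hierarchy mentioned above can be sketched as a simple ordering. This is a toy illustration only: the tier names come from the summary in this thread, but the example glosses in the comments and the function name `more_rational` are my own assumptions, not quotations from the paper.

```python
from enum import IntEnum

class Induction(IntEnum):
    # Lower value = closer to a deductive base = more cogent to act on.
    PROBABILITY = 1   # e.g. "the sun will rise tomorrow"
    POSSIBILITY = 2   # e.g. "it might rain tomorrow"
    PLAUSIBILITY = 3  # e.g. "there could be life on some exoplanet"
    IRRATIONAL = 4    # e.g. "the sun will not rise tomorrow"

def more_rational(a: Induction, b: Induction) -> Induction:
    """Return whichever induction the tier system ranks as more cogent."""
    return min(a, b)

print(more_rational(Induction.PROBABILITY, Induction.IRRATIONAL).name)  # PROBABILITY
```

The ordering makes the intuition explicit: believing the sun will rise tomorrow (a probability) outranks believing it will not (an irrational induction), because the former sits closer to what has been deduced so far.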

Quoting Megarian
A foundation for epistemology needs to produce testable claims about the phylogenic, ontogenic and cultural environments.


Not a problem. Once we dissect the logic, we can easily go to any of these subjects. Have you read the paper fully yet? Please actually answer this in your next reply so I know whether I can start heavily pulling terms and referencing parts of the text. It will make our conversation go much more smoothly.

Quoting Megarian
On its utility in problem solving. (Does it work?) 
Yes. I use it in my daily life.

On its internal coherence.  (Is it self-contradictory?)
No, it does not contain any self-contradictions. At least, none that I've seen. Feel free to add your own insight on the paper.  

On its external consistency.  (Does it 'fit' in a framework of other claims about the world?) 
Yes. It is the base for all types of contextual knowledge theories. I am able to explain why a baby can know that the wet spot on the floor was caused by its actions. I can explain away the Gettier problem. (I had it in the paper at one time, but it was more like a book then. I mean, I can barely get people to read the 20 pages as it is.) I can explain why a family has knowledge that is specific to themselves, but that, if taken in the greater context of the world, would not be considered applicable knowledge.

On its seminality.  (Does it/can it lead to new/more precise claims?)

Absolutely. I can answer the riddle of Theseus' ship. I can give a logical evaluation of inductive claims. It's pretty darn useful.

Quoting Megarian
I find no utility in the system you propose.


Considering I can tell you still haven't read the paper, that's really not a solid claim. Read the paper, then get back to me showing why the system has no utility. All you are making is an opinion claim, you are not referencing the actual material.

Quoting Megarian
The paper gives us no information promoting an understanding of the general nature of knowledge or the species-specific human nature of knowledge creation and use


You obviously did not read part 3 or 4. Again, why make claims like this without reading it? I'm really scratching my head over here. Are you just wanting to argue for arguing's sake, or do you want to actually analyze the paper, do some fun thinking, and come to a reasonable conclusion? I mean, I might be wrong. I'm open to that. But you have to actually read the paper. I don't think this is unreasonable in the slightest.

So go read it! It's not a waste of your time. If we can spend posts here discussing the topic when you're not even talking about what's in the paper, but instead some imaginary idea you've come up with, imagine what we could generate if you read it and we discussed the real idea!



Jarmo October 04, 2020 at 08:43 #458710
Reply to Philosophim
Quoting Philosophim
I can contain in my head different types of beliefs that I do not hold as "being the case". They "might" be the case.


So is that the difference between the beliefs in your statement about knowledge (that I quoted earlier)? So the first belief (“belief in something”) is only a belief that something might be? And the second belief (“belief that ‘the something’ is co-existent with reality”) is a belief that something actually is?

First of all I think that belief in x already indicates that you believe x is true (or 'the case' or 'co-existent with reality'), so the truthiness does not need to be further emphasised. On the other hand, if you only believe that x might be, then you should be explicit about that. So your statement should be rephrased like this:

[Knowledge] is both the belief that something might be, and a further belief that “the something” is.

But my second point is that while the statement now makes a bit more sense, I still think it’s nonsensical. I don’t think it’s possible, at the same time, to believe that x might be and that x is.

So you can believe that it might be sunny and that it might be rainy. Then when you look out the window and see that it is actually rainy, the ‘might’ vanishes, and you only believe that it is rainy. You don’t believe that it is sunny or that it might be sunny or that it might be rainy.
Philosophim October 05, 2020 at 00:20 #458916
Reply to Jarmo Quoting Jarmo
So is that the difference between the beliefs in your statement about knowledge (that I quoted earlier)? So the first belief (“belief in something”) is only a belief that something might be? And the second belief (“belief that ‘the something’ is co-existent with reality”) is a belief that something actually is?


" It is both the belief in something, and a further belief that “the something” is co-existent with reality."

-Quote from part 1

How we obtain "that something is co-existent with reality" is through subjective deduction, or whether a belief is not contradicted by reality. For what is not contradicted by reality, "is". At least, as a very simple start within our own minds. That's what is called "distinctive knowledge". Part 2 also identifies "applicable knowledge".

So to keep this within the context of the paper and ease confusion, make sure you understand what "distinctive knowledge" is. Feel free to critique it and poke holes in it, as understanding this is required for anything else to make sense. And don't just look at my comment; please read the more fleshed-out portion of the latter half of part 1. The above is a summary, not a comprehensive answer.



PeterJones October 19, 2020 at 16:08 #462691
" I've used it to answer a number of Epistemological puzzles such as "Theseus's ship", and have used it in daily general assessment of whether I know things as I go about daily life.".
Reply to Philosophim

You seem to believe that knowledge is belief. I cannot make sense of this idea. Do you know you are aware? Or do you simply believe it?

I believe (!) Aristotle points out somewhere that true knowledge is identical with its object. Descartes reached the same conclusion. This is the idea one needs to explain knowledge. It is a difficult idea, but it partly explains the claim of the mystics that knowing is fundamental.

That is to say, knowing, which for Russell is the most difficult and most truly philosophical problem, cannot be explained as an emergent phenomenon. This seems to be the experience of all philosophers.
Philosophim October 19, 2020 at 16:52 #462710
Reply to FrancisRay Quoting FrancisRay
You seem to believe that knowledge is belief.


Are you going by the comments here, or the paper itself? Because the comments here are only to get people to actually read the paper and understand the points. You will not glean my argument from the comments alone.

If you have read the paper, I will summarize to help you understand. No, knowledge is not merely a belief. There are two parts to knowledge. Distinctive, and applicable. In both cases, a belief that is subjectively deduced is the particular type of knowledge based on the context. Which part are you having trouble understanding?

I do not care about any of your personal philosophies of knowledge, I care about good criticism of the ideas of the paper. Now if you can apply your personal philosophies of knowledge to the paper, that would be fantastic! But generic personal opinion without addressing the paper serves no point.
PeterJones October 20, 2020 at 13:24 #463045
Reply to Philosophim Your point is a fair one, but I see no point in reading an article that seems epistemologically naive even before I start reading. You can ignore me.
Philosophim October 20, 2020 at 13:45 #463053
Reply to FrancisRay Quoting FrancisRay
Philosophim Your point is a fair one, but I see no point in reading an article that seems epistemilogically naive.even before I start reading. You can ignore me.


If you haven't read the article, how do you know it's epistemologically naive? I have studied and am familiar with the history of epistemology up through Quine. Let's be real: most people just don't want to read an article longer than a forum post, and look for every excuse not to. Which, I have nothing against! It's fine. But telling me it looks naive when you haven't read it? Come now. Just be honest and go about your day instead of trying to slight me.
PeterJones October 20, 2020 at 13:55 #463056
Reply to Philosophim Well, I read a couple of responses. You say you've used your theory 'to answer a number of Epistemological puzzles such as "Theseus's ship", and have used it in daily general assessment of whether I know things as I go about daily life.' This tells me it's not a good theory. If you need a theory to tell you that you know things then you're talking about degrees of belief, not knowledge.

Quine has no useful theory either. In the Western tradition the best there is for knowledge is 'justified true belief'.
Philosophim October 20, 2020 at 14:04 #463062
Reply to FrancisRay Quoting FrancisRay
Well, I read a couple of responses.


Which are utterly useless, as they lack the context of the paper. It never ceases to amaze me that people think they can judge a paper without actually reading the paper. I also never stated I agreed with Quine, I'm just indicating that I am familiar with several epistemological theories.

This is devolving into dumb ego. Read the paper if you're interested and converse on that. If you're going to comment on the paper without reading the actual paper, it's a waste of time for both of us.
PeterJones October 21, 2020 at 08:21 #463351
Reply to Philosophim I understand your pov on this issue, but I also understand mine. I don't need to read it to know I don't endorse it.
Philosophim October 23, 2020 at 18:52 #464227
Quoting FrancisRay
I don't need to read it to know I don't endorse it.


Congrats on a completely worthless contribution to the thread then. For anyone looking to have a worthwhile conversation, please post on the topic of the forum, which is the paper itself. I'm sure most people don't need to be told this obvious statement, but we get a few on here who don't understand basic logic.
PeterJones October 24, 2020 at 11:46 #464419
Reply to Philosophim Yes, you're right, it was useless. I was just frustrated at how complex the issues are made by this sort of approach, but I should have engaged with it properly. My bad.
Philosophim October 24, 2020 at 12:44 #464440
Reply to FrancisRay

Any person who has the character to admit to a mistake becomes a giant in my eyes. You have my utmost respect and forgiveness! It is water under the bridge.
PeterJones October 24, 2020 at 17:51 #464503
Reply to Philosophim Likewise, thanks for being open to an apology.
Bob Ross November 30, 2021 at 01:54 #625741
Hello @Philosophim,

Firstly, great work! I enjoyed reading your collection of epistemological essays! Since it was a lot to unpack, I am going to simply address your epistemological proposition from an extremely broad sense (thereby only addressing the most vital quarrels I think I have with your proposition). I will leave it up to you to decide what you would like to discuss thereafter:


  • Your proposition seems to be founded on the one-sided relationship between “reality” and beliefs; however, it is really two-sided: “reality” can influence a belief and a belief can influence “reality”. For example, placebo effect is a real psychological phenomenon which reveals the two-sidedness of the relationship. With that being said, I do agree with your proposition in the sense that beliefs should coincide with “reality” in some manner—but this does not necessarily mean that anything that isn’t directly factually true (such as a unicorn's existence) directly invalidates the belief in such: belief and knowledge are two distinct concepts. For example, the thorough refutation of libertarian free will does not in any way (I would argue) immediately refute the belief in such--as, even in the case that free will is an illusion (in a hard determinist worldview--which I am not advocating for but merely utilizing as an example), it does not follow that one is obliged to then completely disregard (without further examination) the proposition that the belief in free will is useful. Beliefs, completely separate from knowledge claims, can manifest (psychologically and sociologically speaking) real, empirical facts about "reality".
  • Our epistemological worldviews deviate at a much more fundamental level than I expected, but, interestingly enough, we agree the broader we speak. For example, you stated (in, I believe, your first essay): “In recognizing a self,, I am able to create two “experiences”.  That is the self-recognized thinker, and everything else.” I think that this is the intuitive thing to do, but it is only an incredibly general description and, therefore, doesn’t go deep enough for me. There are three distinctions to be made, not simply “I” and “everything else”: the interpreter, the interpretations (representations), and self-consciousness. In your terms, it would translate (I think) into a discrete experiencer (self-consciousness and the interpreter joined into to one concept) and everything else (the interpretations/representations which are later divided into other “I”s and “everything else besides ‘I’ and other ‘I’s”). I would make it much more precise than that: “I am self-conscious, therefore something is”. I think Descartes biggest mistake was assuming “I” am the one “thinking”. Although you refurbish it into discrete experiences, I think this is still fundamentally assuming things we cannot. That is why I define an “experience” as a witnessing of immediate knowledge (the process of thinking, perception, and emotion) by means of rudimentary reason, and a “remembrance” (or memory as you put it in subsequent essays) as seemingly stored experiences. Notice that I am directly implying that I have no reason to believe that I am an active ingredient, so to speak, in any of those processes. I am not convinced that I am actively participating in the process of perception, emotion, or thought, but rather, I am “participating” insofar as I am a witness of such (I witness the process of rigorous thought and the feeling of convincement): I am self-consciousness. 
There is another aspect, or “force”, so to speak, that is distinctly separate from the “I” in a “discrete experiencer” (self-consciousness): the interpreter. The interpreter is a form of mediate knowledge (namely a prior knowledge) which is necessitated by the fact that the “I” doesn’t directly affect the processes associated with immediate forms of knowledge. In other words, both the interpreter and the “I” (self-consciousness) are a part of the subject (consciousness), but the “I” is only a particular aspect of it. This distinction, for me, between keeping it a binary distinction ("I" vs "everything else") and regressing it further (the subject into two categories) reveals some heavily impactful positions of mine (pertaining to epistemology). For example, if I wasn’t self-conscious of the process of thinking that is occurring in “my” brain, then, although this response would still get written and sent, "I" would have no knowledge of it. Likewise, if we all were not self-conscious of our process of thinking, perception, and emotion (i.e. immediate forms of knowledge), then there would be no knowledge at all. Interestingly enough, I think that the interpreter's processes would continue—as "we" (self-consciousness) are mere witnesses—but there would be no epistemological grounds for any sort of knowledge because neither of us would have any recollection of it. Think of it like the hypothetical scenario that you had a thought two seconds ago which you never experienced (you were not consciously aware of it—aka self-conscious of it) (or you do not and will never remember)--can any knowledge be derived therefrom? I would argue no! This is like your reference to a “discrete experiencer” in the sense that if you didn’t discretely experience then there is no knowledge; however, that becomes very confusing to me as I don’t think your consciousness would completely stop. 
I think you would continue to do exactly what you are doing now (in terms that the interpreter, one aspect of the subject, would persist), except that “you” would have no knowledge of it. Your interpreter, so to speak, would continue to create interpretations (as far as I can tell), which are discrete representations, but those interpretations would not be experienced (in the sense that I defined it--a witnessing of immediate knowledge by means of rudimentary reason) because there would be no witness. It’s kind of like how some animals can’t even recognize themselves in a mirror: I would argue that they do not have any knowledge if (and it's a big if in this case) they are not self-conscious. Yes, they have knowledge in the sense that their body will react to external stimuli, but that isn’t really knowledge (in my opinion), as removing self-consciousness directly removes “me” (or “I”) from the equation, and that is all that is relevant to "me" (as this fundamental epistemological question wouldn't have even been posed in any meaningful way to the subject if the second aspect of the subject--namely self-consciousness--was not present). As an example, let's take your sheep example: what if that entire concept that you derived a deductive principle from (namely the tenets that constitute a sheep) were all a part of a hallucination? What if you really snorted a highly potent hallucinogen in the real world and it is so potent that you will never wake up in the real world but, rather, you will die in your hallucinated world once your body dies in the real one. Do you truly have knowledge of the sheep (in the hallucinated world) now, given that the world isn't real? What is "real"? I would argue, since the relevance directly ties to the witness (and not the interpreter) aspect of the subject, whatever corresponds best to what the witness experiences is exactly what is "real" and, therefore, the real world in my example here would be nothing except irrelevant. 
The knowledge, if knowledge is going to mean anything, of the sheep is just as valid in a real world as a "real" (potentially hallucinated and, thereby, "fake") one! Everything revolves around the subject for me, nay, specifically the witness (self-consciousness, one aspect of the subject). In other words, the validity (absolutely speaking) of the interpreter's interpretations has no direct effect on what one can or cannot constitute as knowledge.
  • For now, the last aspect I will discuss here is the idea, which I consider to be your main point, that inductions can be classified by means of how close they are to deductive principles (i.e. your distinction between probability, possibility, plausibility, and irrational induction). There is a lot that can be said about this, but, in an effort to keep this relatively short, I will focus on one aspect: the fundamental problem with this logic. The most fundamental aspect of our lives (I would argue) is rudimentary reason, which is the most basic (rudimentary) method by which we can derive all other things. We reason our way into even knowing (in the first place) that we have any forms of immediate knowledge (perceptions, thoughts, and emotions): we induce, by our witnessing (experiencing) the process of perception, emotion, and thought, that we indeed have such (by means of rudimentary reason--a witnessing of a series of thoughts that, in turn, create a seemingly conclusory thought of which we are, at least in that instant, thoroughly convinced). Imagine that you never witnessed yourself conclude anything: you therefore would be no different, with respect to yourself, than a rock (now, someone who could conclude--aka has rudimentary reason to some extent--would be able to distinguish you from a rock, but you wouldn't be able to). To realize that you have thoughts is to make a conclusion; if that is absent, then the interpreter associated with you as a subject would persist in such immediate processes but, most importantly, you would be completely unaware and, honestly, I don't think "you" would exist anymore (self-consciousness cannot persist in the absence of conclusions): this is what I mean by rudimentary reason, that, although it doesn't have to be rational, you must be able to form conclusions, which requires a rudimentary form of something (which I call rudimentary reason). 
You see, if one were committed to stating that the closer an induction is to a deductive principle the more likely it is correct, then the very means by which they induced those deductive principles would have no grounds to stand on. My use of rudimentary reason to induce immediate and mediate knowledge, and thereby all deductive principles, would have no deductive principle at its base: it is a pure induction. Experience at its most fundamental level, namely immediate knowledge, is pure induction. Now, I think you are right in the sense that inductions based on strong deductive principles are stronger than inductions that are chained together, but I don't tend to establish what is more knowable based on this postulation because, as I previously stated, my most immediate forms of knowledge, and consequently my basis of everything one way or another, are induced. That is why I say that the closer (metaphorically and literally) the concept is to immediate knowledge, the more reasonably "real" it is (notice that I am not anchoring it down to a deductive base).


Well, I think that is enough for now (although there is much to discuss!) as your essays covered a lot of ground. Again, great work! I look forward to hearing back from you on your thoughts.
Bob
Philosophim December 01, 2021 at 00:16 #626176
Reply to Bob Ross

Thank you for reading this Bob! This might be a long reply, and I will likely need time to go over it adequately, so sorry in advance if my reply takes some time. I've already spent an hour tonight going over it, and my time tonight is spent. I just wanted to acknowledge and express my gratitude for your taking the time to read it. I will have your answers in the coming days!

Bob Ross December 01, 2021 at 03:13 #626222
@Philosophim
Absolutely no worries! Take as much time as you need (no rush): I am looking forward to your response (:
I like sushi December 02, 2021 at 04:59 #626641
Descartes' point was that you can always doubt what you believe reality to be. You cannot doubt that you are doubting, though, as you'd be doubting by doing so.

A doubting thing is thinking. A thinking thing exists. I am a thing that doubts, therefore I am a thinking thing. So I exist.

Our memories are plastic so we cannot rely on them. We can doubt our memories.

Ironically, I think Descartes may have gotten this kind of backwards in terms of 'knowledge'. It seems the kind of knowledge framed is abstract only, whereas Intentionality is necessarily experienced due to being incomplete/unfulfilled 'knowledge' as compared to pure abstractions.

Where the rules and limits are set (abstract), absolute knowledge exists - but we may still make errors within these bounds as we're not bound by them ourselves. Given that the limits and rules of 'reality' are not known (or may not exist), we are not able to form 'absolute knowledge' about Intentionality other than to say there is Intentionality - 'directed doubt' (toward a proposed thing, be it abstract or otherwise).

Philosophim December 04, 2021 at 13:37 #627680
Thank you for the wait Bob. I wanted to make sure I answered you fully and fairly.

Quoting Bob Ross
you stated (in, I believe, your first essay): “In recognizing a self, I am able to create two “experiences”. That is the self-recognized thinker, and everything else.” I think that this is the intuitive thing to do, but it is only an incredibly general description and, therefore, doesn’t go deep enough for me.


It is fine if you believe this is too basic, but that is because I must start basic to build fundamentals. At this point in the argument, I am a person who knows of no other yet.

Quoting Bob Ross
There are three distinctions to be made, not simply “I” and “everything else”: the interpreter, the interpretations (representations), and self-consciousness.


I have no objection to discussing these subdivisions of the "I" later. At the beginning, though, it is important to examine this from the perspective of a person who is coming into knowledge of themselves for the first time. A "rough draft" if you will. Can we say this person has knowledge in accordance with the definitions and the logic shown here, not the definitions another human being could create? I needed to show you the discrete knowledge of what an "I" was, which is essentially a discrete experiencer. Then I needed to show you how I could applicably know what an "I" was, which I was able to do.

To this end, we can say that a discrete experiencer does not need the addition of other definitions like consciousness for the theory to prove itself through its own proposals.

Quoting Bob Ross
it would translate (I think) into a discrete experiencer (self-consciousness and the interpreter joined into to one concept)


I think this is a fine assessment. We can make whatever definitions and concepts we want. That is our own personal knowledge. I am looking at a blade of grass, while you are creating two other identities within the blade of grass. There is nothing wrong with either of us creating these identities. The question is, can we apply them to reality without contradiction? What can be discretely known is not up for debate. What can be applicably known is.

Quoting Bob Ross
That is why I define an “experience” as a witnessing of immediate knowledge (the process of thinking, perception, and emotion) by means of rudimentary reason, and a “remembrance” (or memory as you put it in subsequent essays) as seemingly stored experiences.


This is a great example of when two people with different contexts share their discrete knowledge. I go over that in part 3 if you want a quick review. We have several options. We could accept, amend, reinterpret, or reject each other's definitions. I point this out for the purposes of understanding the theory, because I will be using the theory, to prove the theory.

What I will propose at this point is your definition additions are a fine discussion to have after the theory is understood. The question is, within the definitions I have laid out, can I show that I can discretely know? Can I show that I can applicably know this? You want to discuss the concept of the square root of four, while I want to first focus on the number 2. You are correct in wanting to discuss the square root of four, but we really can't fully understand that until we understand the number 2.

Back to context. If you reject, amend, or disagree with my definitions, we cannot come to an agreement of definitive knowledge within our contexts. This would not deny the theory, but lend credence to at least its proposals about distinctive knowledge conflicts within contexts. But, because I know you're a great philosopher, for now, please accept the definitions I'm using, and the way I apply them. Please feel free to point out contradictions in my discrete knowledge, or misapplications of it. I promise this is not some lame attempt to avoid the discussion or your points. This is to make sure we are at the core of the theory.

To recap: An "I" is defined as a discrete experiencer. That is it. You can add more, that's fine. But the definition I'm using, the distinctive knowledge I'm using, is merely that. Can I apply that discrete knowledge to reality without contradiction? Yes. In fact, it would be a contradiction for me to say I am not a discrete experiencer. As such, I applicably know "I" am a discrete experiencer. Feel free to take the setup above and, using the definitions provided, point out where I am wrong. And at risk of over-repeating myself, the forbiddance of introducing new discrete knowledge at this point is not meant to avoid conversation; it is meant to discover fundamentals.

Quoting Bob Ross
It’s kind of like how some animals can’t even recognize themselves in a mirror: I would argue that they do not have any knowledge if (and it’s a big if in this case) they are not self-conscious. Yes, they have knowledge in the sense that their body will react to external stimuli, but that isn’t really knowledge (in my opinion) as removing self-consciousness directly removes “me” (or “I”) from the equation and that is all that is relevant to "me"


Would an animal be an "I" under the primitive fundamental I've proposed, and applicably know? If an "I" is a discrete experiencer, then I have to show an animal is a discrete experiencer without contradiction in reality. If an animal can discern between two separate things, then it is an "I" as well. Now I understand that doesn't match your definition for your "I". Which is fine. We could add in the definition of "consciousness" as a later debate. The point is, I've created a definition, and I've applied it to reality to applicably know it.

Thus as a fundamental, this stands within my personal context. I note in part 3 how limitations on discrete knowledge can result in broad applications for certain contexts that ignore detail in other contexts. It is not the application of this distinctive context to reality that is wrong, it is a debate as to whether the discrete knowledge is detailed enough, or defined the way we wish. But for the single person without context, if they have defined "I" in this way, this is the only thing they could deduce in their application of that definition to reality.

Quoting Bob Ross
As an example, let's take your sheep example: what if that entire concept that you derived a deductive principle from (namely the tenets that constitute a sheep) were all a part of a hallucination.


I'll repost a section in part 2 where I cover this:

"What if the 'sheep' is a perfectly convincing hologram? My distinction of a sheep up to this point has been purely visual. The only thing which would separate a perfectly convincing hologram from a physical sheep would be other sensory interactions. If I have no distinctive knowledge of alternative sensory attributes of a sheep, such as touch, I cannot use those in my application. As my distinctive knowledge is purely visual, I would still applicably know the “hologram” as a sheep. There is no other deductive belief I could make."

So for your example, the first thing we must establish is, "How does the hallucinator distinctly know a sheep?" The second is, "Can they apply that distinctive knowledge to reality without contradiction?" When you say "a deductive principle", we must be careful. I can form deductions about distinctive knowledge. That does not mean those deductions will be applicably known once applied to reality. Distinctive knowledge can use deduction to predict what can be applicably known. Applicable knowledge itself cannot be predicted. We can deduce that our distinctive knowledge can be applied to reality without contradiction only when we apply it to reality. But those deductions are based on the distinctive knowledge we personally have, and the deductions we conclude when we apply them to reality.

To simplify once again, distinctive knowledge is the conceptualizations we make without applying them to reality. This involves predictions about reality, and imagination. Applicable knowledge is when we attempt to use our conceptualizations in reality without reality contradicting them. With that in mind, come back to the hallucination problem and identify the distinctive knowledge the person has, and then what, applicably, they are trying to prove.

Once this fundamental is understood and explored, then I believe you'll see the hierarchy of inductions makes more sense. First we must understand what a deduction is within the system. Within distinctive knowledge, I can deduce that 1+1 = 2. When I apply that to reality, by taking one thing and combining it with another thing, I can deduce that it is indeed 2 things.

Quoting Bob Ross
The most fundamental aspect of our lives (I would argue) is rudimentary reason, which is the most basic (rudimentary) method by which we can derive all other things.


True. But I have attempted to define and apply rudimentary reason as a fundamental, and the above paper is what I have concluded. Again, I am not trying to be dismissive of your creativity or your world view in any way! I would love to circle back to those points later. I am purely trying to guide you to the notion that we do not need these extra additions of definitions to learn these fundamentals, nor could we discuss them without first understanding the fundamentals proposed here.
I also wanted to leave you with one of your points on the table.

Quoting Bob Ross
What if you really snorted a highly potent hallucinogen in the real world and it is so potent that you will never wake up in the real world but, rather, you will die in your hallucinated world once your body dies in the real one. Do you truly have knowledge of the sheep (in the hallucinated world) now, given that the world isn't real?


This is one of the best critiques I've seen of the theory. Yes, I have an answer for this, but until the fundamentals are truly understood, I fear this would be confusing. If you understand what I've been trying to say in this response, feel free to go over this proposal again. We are at the point where we are going over addition and subtraction, and here you went into calculus with binary! I will definitely respond to this point once I feel the basics are understood.

Thank you again Bob, I will be able to answer much quicker now that it is the weekend!


Bob Ross December 04, 2021 at 21:38 #627852
Hello @Philosophim,
Thank you for the wait Bob. I wanted to make sure I answered you fully and fairly.


Absolutely no problem! I would much rather wait for a detailed, thought-provoking response than a quick, malformed (so to speak) one! I thoroughly enjoy reading your replies, so take as much time as you need!

After reading your response, I think your forbiddance of my terminology is fair enough! So I will do my best to utilize your terminology from now on (so, in advance, I apologize if I misuse it--and please correct me when I do). However, I think there is still, in the absence of my terminology, a fundamental problem with your essays, and so I will try to elaborate hereafter in your terminology as best I can.

It is fine if you believe this is too basic, but that is because I must start basic to build fundamentals. At this point in the argument, I am a person who knows of no other yet.


I understand that the intuitive thing to do, for most people, is to start off with "I" vs "everything else", and I have no problem with that, but it seems to me that, as your essays progress, they retain this position which, as far as I understand your argument hitherto, I believe to be false (and not in terms of later applicable knowledge nor a priori, but in the immediate experiential knowledge that you begin your derivation from). Intuitiveness is not synonymous with immediateness, and I think this is a vital distinction for your proposition. The intuitive thing to do is to separate, as you did, the "I" and "everything else", but that is not the most immediately known thing to the subject. For example, if, in your sheep example, you were an individual with depersonalization disorder, the disassociation of the "I", the process of discretely perceiving the sheep, and the discrete manifestations of "everything else" (including the sheep) would be incredibly self-apparent and discrete--whereas, for a person whose brain associates these concepts very tightly, it would be the subject's first and most fundamental mistake to assume it is really the "I" discretely perceiving and the discrete manifestations of "everything else". This is why, although I have to invoke some applicable knowledge (given that you or I may never have had a disassociation like depersonalization disorder), for a person with such a disassociation your essay would entirely miss its mark (in my opinion), because they do not share such a basic fundamental distinction with you. However, I would argue that you both actually share a much, much more fundamental basis than the binary distinction your proposition invokes: the "I", the process of discretely perceiving the sheep, and all discrete manifestations--a ternary distinction.

This is where I may need some clarification, because I might have misinterpreted what your proposition was. You see, I thought that "I am a discrete experiencer" was a broadened sense of stating "I am a discrete perceiver, thinker, etc". I read your essays as directly implying (by examples such as the sheep) that a specific instance of "I am a discrete experiencer" was "I am a discrete perceiver and thinker"--and I believe this to be false (if that is not what you are implying, then please correct me). This is not the most fundamental basic separation for the subject. Now, I understand that a lot of people may initially be in the same boat as you (in that binary distinction) and then derive the ternary distinction I make, but, most importantly, not all (and it would be a false claim). I totally agree with you that we shouldn't utilize a priori or applicable knowledge yet, just the discrete experiences as you put it, but "I am a discrete experiencer", based on such discrete experiences (which can be more self-apparent to people with psychological disorders), is a false basis for knowledge. Again, I do think there is a lot of applicable knowledge that we could both discuss about this ternary distinction I am making, but for your argument I would not need to invoke any, as discrete experiences are enough. So, I would argue, either the essays need to begin with a more fundamental basis for knowledge (the ternary distinction) or, as the essays progress, they need to morph into a ternary distinction. With that being said, I totally understand (in hindsight) that a lot of my first post was irrelevant to your proposal specifically (I apologize), but I don't agree with deriving the different levels, so to speak, of induction from a basis that isn't the most immediate. Is this me being too technical? Would it overly complicate your proposition? Possibly.
This is why I understand if you would like to keep it broad in your proposition, but, in that case, I would only temporarily agree with you in that sense; more importantly, I wouldn't use it as a basis of knowledge, because I think the most fundamental aspect of your epistemology is false. Hopefully that made some sense.

I think this is a fine assessment. We can make whatever definitions and concepts we want. That is our own personal knowledge. I am looking at a blade of grass, while you are creating two other identities within the blade of grass. There is nothing wrong with either of us creating these identities. The question is, can we apply them to reality without contradiction? What can be discretely known is not up for debate. What can be applicably known is.


Again, I completely agree that we can derive a priori (and applicable) knowledge that we could both dispute heavily against one another, but I am not trying to utilize any of such: I am disputing what can be discretely known. I am debating whether your proposition is right in its assessment of what is discretely known. I am debating whether your proposition is right in persisting such a binary distinction all the way to its last conclusion. As another example, psychedelic drugs (which produce extreme disassociations) can also, along with psychological disorders, demonstrate that an individual within your sheep example, in your very shoes, would not base their derivations on a binary distinction. Again, I understand that, although I said I wouldn't invoke applicable knowledge, I just did (in the case that I do not have depersonalization disorder): but for that subject, in your example, your basis would simply be factually incorrect, and if the use of such applicable knowledge is not satisfactory for you, then you could (although I am not advocating it) produce this disassociation and try the sheep example in real life. Now, I am not trying to be condescending here at all; I am deadly serious and, therefore, I apologize if it seemed a bit condescending--I think this is an important problem, fundamentally, with your derivation in the essays. I could be misunderstanding you completely, and if that is the case then please correct me!

This is a great example of when two people with different contexts share their discrete knowledge. I go over that in part 3 if you want a quick review. We have several options. We could accept, amend, reinterpret, or reject each other's definitions. I point this out for the purposes of understanding the theory, because I will be using the theory, to prove the theory.


I think that this is the issue with your derivation: it only works if the individual reading it shares your intuitive binary distinction. I agree generally with your assessment of how to deal with competing discrete experiences, but to say "I am a discrete experiencer" is only correct if you mean "experiencer" in the sense that you are witnessing such processes, not in the sense that "I am a discrete perceiver". I think this is a matter of definitions, but an important matter, that I would need your clarification on. In both bases, ternary and binary, it may happen that both conclude the same thing, but the latter would be deriving it from a false premise that can be determined to be false by what is immediately known (regardless of whether it is intuitive or not).

You want to discuss the concept of the square root of four, while I want to first focus on the number 2.


I think this analogy (although I could be mistaken) is implying that the binary distinction is what we all begin with and can then later derive a ternary distinction from: I don't think that is always the case and, even when it is, it is a false one, and only those discrete experiences are required to determine that. In other words, one who does begin with a binary distinction can, by use of those very discrete experiences, determine that their original assertion was wrong and that it is, in fact, ternary. Even for a person who has never had an extreme disassociation, the fact that these processes, which are indeed discrete, get served, so to speak, to the experiencer and are not produced by them is enough to show that it is not a binary distinction. Again, I do understand that, for most, it is intuitive to start off with a binary distinction, but applicable and a priori knowledge are not required to realize that it is a false conclusion. I would agree that my assessment is more complicated, which is directly analogous to your analogy here. However, numbers (like the number 2) are first required to understand square roots, which is not analogous to what I am at least trying to say. Again, just as intuitive as a binary distinction may be to you, so is a ternary distinction for a depersonalization disorder patient (or even some who have never had a disorder of any kind)--and they simply won't agree with you on this (and it is a dispute about the fundamental discrete experiences, not simply applicable knowledge).

But, because I know you're a great philosopher, for now, please accept the definitions I'm using, and the way I apply it. Please feel free to point out contradictions in my discrete knowledge, or misapplications of it. I promise this is not some lame attempt to avoid the discussion or your points. This is to make sure we are at the core of the theory.


Fair enough! I totally understand that I got ahead of myself with my first post. Hopefully I did a somewhat better job of directly addressing your OP. If not, please correct me!

To recap: An "I" is defined as a discrete experiencer. That is it. You can add more, that's fine.


Again, it depends entirely on what you mean by "experiencer" whether this is always true for the subject reading your papers. If you mean, in terms of a specific example, that the "I" encompasses the idea that it is a discrete perceiver, I think this is wrong, and it can be immediately known without a priori knowledge, regardless of whether I am personally in a state of mind that directs me towards an initial binary distinction.

And at risk of over repeating myself, the forbiddance of introducing new discrete knowledge at this point is not meant to avoid conversation, it is meant to discover fundamentals


Absolutely fair! However, hopefully I have demonstrated that I am disputing whether the binary distinction even is a fundamental or not.

Would an animal be an "I" under the primitive fundamental I've proposed and applicably know? If an "I" is a discrete experiencer, then I have to show an animal is a discrete experiencer without contradiction in reality. If an animal can discern between two separate things, then it is an "I" as well. Now I understand that doesn't match your definition for your "I". Which is fine. We could add in the defintion of "consciousness" as a later debate. The point is, I've created a defintion, and I've applied it to reality to applicably know it.


Firstly, and I am not trying to be too reiterative here, I am disputing the idea that it is a primitive fundamental: I think it is not. Yes, I see your point if you are trying to determine whether the animal is a discrete experiencer in the sense that it perceives external stimuli (for example), but this gets ambiguous for me quite quickly. If that is what you mean--that you can demonstrate that they discretely experience in the sense that they discretely perceive (and whatever else you could demonstrate)--then that use of the term "experiencer" would not be the same as the "experiencer" in "I am a discrete experiencer", unless you are stating that "experience" is synonymous with the processes of perception, thought, etc. If you are stating that "experience" is synonymous with the processes that feed it, then you have eliminated "you" from the picture and, therefore, the "I" in "I am a discrete experiencer" is simply "The body is a discrete experiencer", which I don't think would make any sense to either of us if we were to derive our knowledge from the body and not the "I" (again, I could be misunderstanding you here). In other words, although I understand that I am redefining terms, I do not think you can demonstrate that animals experience, only that they have processes similar to those which feed our experiences. What I am trying to say, and I'm probably not doing a very good job, is that you can't demonstrate that an animal experiences in a way equivalent to how you determined that you experience: the "I" (you) cannot determine that the animal, or anyone else for that matter, is an "I" in a sense in which both of my uses of "I" are equivalent. Now this brings up a new issue of whether we could determine that other people, for example, have "I"s at all. I would say that we can, but that comes later: applicable knowledge, in your terminology.
To be clear, you would have discrete experiences that demonstrate that there are other entities that have processes required to experience, but any inferences after that are applicable knowledge claims. This is why it is ambiguous for me: when you say you can prove there are other "I"s like your "I", I think you are utilizing the term "I" in two different senses--the former is simply an entity that has the processes similar to what feeds your experiences (which can be derived from discrete experiences), while the latter is experience itself (which cannot be extended via discrete experiences to any other entities but, rather, can be inferred by applicable knowledge to be the case).

But for the single person without context, if they have defined "I" in this way, this is the only thing they could deduce in their application of that definition to reality.


Hopefully I did an adequate job of presenting evidence that this is not true.

I think, for the sake of making this shorter, I will leave your further comments for later discussion, because I think that you are right in saying that we need to discuss the more fundamental aspects to your OP first. I think that the hallucinated dilemma is a discussion for after what I have stated here is hashed out.

Thank you for a such an elaborate response and I look forward to hearing back from you!
Bob
Philosophim December 05, 2021 at 16:21 #628072
Likewise, thank you for your response again Bob! And no, I do not find you condescending. I would much rather the point be overexplained than not enough. Feel free to always point out where I'm wrong; it's the only way to put the theory through its paces.

If there is a disagreement with the foundation, let's focus on that first. The rest is irrelevant if that is wrong.

First, let's focus on definitions. To clarify, a discrete experience is the ability to part and parcel what we "experience". A lens focuses light into a camera, but the lens does not discriminate or filter the light into parts. We do. Discrete experiences include observations and our consciousness. Discrete experience is the "now", our memory, and any time you think. As you note, we could split up discrete experience into different categories. I could include consciousness, but consciousness is still a discrete experience.

So what is distinctive knowledge? The awareness of any discrete experience. I discretely know when I sense. When I have a memory. What I define my own consciousness to be. Since I can know that I discretely experience, I know whatever it is that I discretely experience. That is distinctive knowledge.

Quoting Bob Ross
I read your essays as directly implying (by examples such as the sheep) that a specific instance of "I am a discrete experiencer" was "I am a discrete perceiver and thinker"--and I believe this to be false


No, the only thing I am claiming is, "I am a discrete experiencer". Perception is a discrete experience, as are thinking, consciousness, and whatever other definitions and words you want to divide the notion of what we can discretely experience into. Being a discrete experiencer does not require consciousness, or even any notion of an "I". For beings like us, we can divide what we discretely experience into several definitions. I can create sub-conscious, meta-conscious, meta-meta-conscious, and conscious-unconscious. I can write books, and essays, and have debates about metaphysical meta-conscious-unconsciousness.

Yes, these words are not real words within the context of society, but there is nothing to prevent a person from making up these words and attributing them to some part of their "self", or to a portion of what they discretely experience. The subdivisions are unimportant at a base claim of knowledge, as they are all discrete experiences, and all of them, if created by an individual, are distinctly known to that individual.

Quoting Bob Ross
In other words, one who does begin with a binary distinction can by use of those very discrete experiences determine that there original assertion was wrong and, in fact, it is ternary.


What I am saying is, someone can subdivide the notion of an "I" even further if they like. They can even change the entire definition of "I" and state that it requires consciousness, thus excluding certain creatures. There's nothing against that. The definition of "I" I am using is based upon the fact that I can discretely experience. My changing the definition of an "I" does not negate that underlying fact. What is ultimately important when one decides on a bit of distinctive knowledge is to see if they can apply it to reality without contradiction.

So, if I define "I" as simply a discrete experiencer, then I could apply this to reality and state that things which are deemed to discretely experience are "I"s.

If you define "I" as needing consciousness, then when you applied it to reality, anything that discretely experienced but did not have consciousness would not be an "I". Yours would add in the complication of needing to clearly define consciousness, then show that application in reality.

Both of us are correct in our definitions, and both of us are correct in our application. I would discretely and applicably know an "I" in my context, while you would have both knowledges in your context. It is just like the sheep and goat example in part 3. Someone could define a "goat" to encompass both a sheep and a goat. Or they could create "sheep" and "goat" as being separate. Or they could go even further, and state that a goat of 20 years of age is now, "The goat". It doesn't matter what we create for our definitions for individual use. We distinctly know them all. The question is whether we can apply them to reality without contradiction, so then we can claim we can applicably know them as well.

The only way to prove that someone's definitions are not going to be useful on the applicable level, is to demonstrate two definitions that they hold contradict each other. For example, using the context of regular English, if I said, "Up is down" in a literal sense, we could know that describes an application of reality that is a contradiction. But I could be an illogical being that uses two contradictory definitions in my head. That is what I distinctively know.

The question of "correctness" comes in when two contexts encounter one another.
The keys when discussing the two parts of knowledge come down to whether the distinctive knowledge proposed can be applied to reality without contradiction, and whether the distinctive knowledge that can be applied, is specific and useful enough for our own desired purposes.

In your case, you are dissatisfied with my definition of an "I", because you want some extra sub-distinctions for your own personal view of what "I" is. But "I" is merely a placeholder for me at this time for the most basic description of "that which discretely experiences". Why am I so basic here? Because it avoids leading the discussion where it does not need to go at this time. Further, it serves to avoid the issue of solipsism. Finally, it keeps the discussion of knowledge from focusing squarely on human beings, or particular types of human beings. Notice that your addition of consciousness introduces a whole new element to the discussion: a new word that needs to be defined, and applied to reality without contradiction. But I am not trying to get at the specifics of what we can derive from our ability to discretely experience as human beings. I am just trying to get at the most fundamental aspects of knowledge as a tool.

We could of course conflict further on the notion of what we discretely experience, going round and round as to what consciousness entails, and what all sorts of sub-assessments entail. But it is a fruitless discussion for the purposes of what I'm using the definitions for. I am not wrong, and you are not wrong, in our own contexts. We must come to an agreed upon context of distinctive knowledge, and the way to do that for the greatest number of people is to get the concepts as basic as possible. The symbol of "I" is unimportant, as long as you understand the concept underneath the "I" that I made from my own personal context. You are entering into "I" as the context developed by myself, while you can hold the "I" as the context on your end. The final "I" is the agreement of compromise between the context of you and I together. We can hold all three in our head without contradiction. It is not the word or symbol that matters. It is, again, the underlying concept.

So with this, before you build upon it, before you subdivide it, I ask you to think about the exact definitions of discrete experience, distinctive knowledge, and applicable knowledge. Have I contradicted myself? Have I applied these basic definitions to reality without contradiction? If I have done so, then I have shown a system of knowledge that I can use in my personal context. After that, we can address the notion of cross context further. Thanks again!

Bob Ross December 05, 2021 at 22:11 #628194
Hello @Philosophim,
Thank you for your clarifications of your terminology: I understand it better now! I see now where our disagreements lie. For now, I am going to grant your use of "discrete experiencer" in the broader sense you put it. However, this still does not remove my issue with your papers and, quite frankly, it is simply my lack of ability to communicate it clearly that is creating this confusion. So, therefore, I am going to take a different approach: I re-read your essays 1-3 (I left 4 out because it is something which will be addressed after we hash this out) and I have gathered a few quotes from each of them with questions that I would like to get your answer to (and some are just elaborative). Hopefully this will elaborate a bit on my issue. Otherwise, feel free to correct me.

Essay 1:

I noted discrete experiences in regards to the senses, but what about discrete experience absent those senses?  Closing off my senses such as shutting my eyes reveals I produce discrete experiences I will call “thoughts.”  If I “think” on a thought that would contradict the discrete experience of “thoughts” I again run into a contradiction.  As such, I can deductively believe I have thoughts absent the senses as well.


I think what you meant (and correct me if I am wrong here) is the five senses, not all senses. If you had no senses, you wouldn't have thoughts because you would not be aware of them. There are many senses in the body, and our understanding of them is always evolving, so I would argue that the awareness of thoughts is a sense (though most definitely different from the five senses). I think your argument here would be that you are talking specifically about senses that pertain to the body's contact with external stimuli, and that is fine. But thoughts are not absent of all senses. I think this arises due to your essays' lack of addressing the issue of the ternary distinction. This will hopefully make more sense as I move on to the next quotes.

Essay 2:

I will label the awareness of discrete experiences as “distinctive knowledge”.  To clarify, distinctive knowledge is simply the awareness of one’s discrete experiences.


I agree. You're advocating here (as I understand it) that in the absence of awareness, you have no distinctive knowledge and, thereby, no applicable knowledge and, therefore, no knowledge at all. You are directly implying that even if you can determine another animal to be a discrete experiencer, you still have no reason to think that they have any distinctive knowledge, because that doesn't directly prove that they are aware of their discrete experiences. Am I correct in this? Therefore, the "I" for you as a discrete experiencer, at this point, has one extra property that cannot (yet) be demonstrated to be in another animal's "I": awareness of those discrete experiences. Therefore, both uses of the term "I" are not being used completely synonymously (equivocally).

Essay 3:

I have written words down, and if another being, which would be you, is reading the words right now then you too are an “I”.


As of now, the "I" defined for you has another property that you haven't proven to exist in the other "I"s: awareness of discrete experiences—distinctive knowledge as you defined it. Just like how I can park my car with a complete lack of awareness of how I got there, I could also be reading your papers without any awareness of it. So, with that in mind, I would ask: are you asserting that this is not the case? That you have sufficiently proven that, not only are other animals an "I" in the sense that they are discrete experiencers, they also have distinctive knowledge? To say my "I" and your "I" both exist, in my head, is to directly imply that I am using "I" equivocally: I don't think your essays do that. Your essays, thereby, seem to assume that you have used the term "I" equivocally and immediately start comparing one "I"'s knowledge to another "I"'s knowledge: but, again, you haven't demonstrated that anyone besides yourself has distinctive knowledge, only discrete experiences.

If I come across you reading these words and understanding these words, and you are not correlative with my will, then you are an "I" separate from myself.


Only in the sense that “I” is used to denote a discrete experiencer and not distinctive knowledge. Therefore, your essays don't actually determine anyone else to know anything, only that they discretely experience. Am I incorrect here?

If other people exist as other “I’s” like myself, then they too can have deductive beliefs.


I agree. But at this point in your essay we have no reason to believe that they are aware of them, which would be required for “I” to be used equivocally. In saying "other "I's" like myself", you are implying (to me) that you think that you have proven other "I's" to have distinctive knowledge (or knowledge in any sense), but I don't think you have.

A person's genetics or past experiences may incline them to discretely experience properties different from others when experiencing the same stimulus.


This is why, I would argue, not everyone who reads your paper is going to fundamentally agree with you with respect to your sheep example. They will attempt to gather knowledge starting from the subject, just like you, but they may not view it as a binary distinction. If it is initialized with a ternary distinction, then, as you hinted earlier, solipsism becomes a problem much quicker and, therefore, your ease of derivation (in terms of a binary distinction) will not be obtained by them. For example, for a person that starts their subjective endeavor with a ternary distinction, it is entirely possible that they must address the issue of “where are these processes coming from?” and “am I justified in assuming they are true?” way before they can get to any kind of induction hierarchy that resides within the production of those processes. So I would like to ask: do you think that the barrier between the “I” and the processes feeding it, and thereby the previous questions, is not more fundamental, and therefore must be addressed first, before the subject can continue their derivation with respect to your essays? My problem is that you skip ahead straight to the sheep example, which is an analysis of the products of the processes, when you haven’t addressed the more fundamental problem of whether those very processes are accurate or not (you just seem to imply that it should be taken on an axiom of some sort). Or if we even can know if they are accurate or not. Or if it really matters if they are or not. Don’t get me wrong, I have no problem with your derivation (for the most part) with respect to your analysis of the products of those processes, but your writings imply axioms that are not properly addressed.

To sum it up, your essays, in an effort to conclude induction hierarchies, completely skip over the justification for why one should even start analyzing the products of those processes in a serious manner, jumping straight to assuming they are true in the way that you did. If we are to derive a basis for knowledge, then we must assume nothing, starting from the subject, which includes doubting the assumption that we have any reasonable grounds to assume the "I" should utilize the discrete experiences that get thrown at it. Now you could say, and I think this may be what your essays imply, that, look, we have these processes that are throwing stuff at the "I", of which it is aware, such as perception and thought, and here's what we can do with it. If that's what your essays are trying to get at, then that is fine. But that doesn't start with the most basic derivation of the subject: you are skipping over the problem of whether knowledge can even be based on these products of the processes. In other words, if your essays are simply dealing with what we can "know" in the sense that all we care about is having knowledge pertaining to the product of these processes and, therefore, it doesn't matter if those processes are utterly false, then I think we are in agreement. But I would add that you aren't addressing this at all in the essays, and that's why I can't personally use it, as it is now, to base knowledge on. Hopefully that makes a bit more sense. If not, please let me know!
Bob
Philosophim December 06, 2021 at 01:30 #628241
Great, I believe we've iterated through this and are closer to understanding each other.
Quoting Bob Ross
I think what you meant (and correct me if I am wrong here) is the five senses, not all senses. If you had no senses, you wouldn't have thoughts because you would not be aware of them.


I had to laugh at this one, as I've never had senses defined in such a way as sensing thoughts. We have two definitions here, so let me point out the definition the paper was trying to convey. The intention of the senses in this case is any outside input into the body. Some call them the five senses, but I wasn't necessarily stating it had to be five. Anything outside of the body is something we sense. The thing which takes the senses and interprets it into concepts, or discrete experiences, is the discrete experiencer. For the purposes here, I have noted the ability to discretely experience is one thing you can know about your "self".

Is it incorrect for me to say I have discrete experiences? I believe it is impossible to not. If I claim, "No." I must have been able to discretely experience the concept of words. As a fundamental, I believe that's as solid as can be.

Quoting Bob Ross
But thoughts are not absent of all senses.


Could you go into detail as to what you mean? I'm not stating you are wrong. Depending on how you define senses, you could be right. But I can't see how that counters the subdivision I've made either. If I ignore my senses, meaning inputs entering into the body, then what is left is "thoughts". Now again, we can subdivide it. Detail it if you want. Say, "These types of thoughts are more like senses." I'm fine with that. It does not counter the fundamental knowledge that I am a discrete experiencer. If I know that I discretely experience, then what I discretely experience is what I know.

Quoting Bob Ross
You are directly implying that even if you can determine another animal to be a discrete experiencer, you still have no reason to think that they have any distinctive knowledge because that doesn’t directly prove that they are aware of their discrete experiences. Am I correct in this?


No. I hesitate to go into animals, because it's just a side issue I threw in for an example, with the assumption that the core premises were understood. Arguing over animal knowledge is missing the point. If you accept the premises of the argument, then we can ask how we could apply these definitions to animal knowledge. If you don't accept the premises of the argument, then applying it to animals is a step too far. This is not intended to dodge a point you've made. This is intended to point out that we can't go out this far without understanding the fundamentals. My apologies for jumping out here too soon! Instead, I'm going to jump to other people, which is in the paper.

Quoting Bob Ross
As of now, the “I” defined for you has another property that you haven’t proven to exist in the other “I”s: awareness of discrete experiences—distinctive knowledge as you defined it. Just like how I can park my car with a complete lack of awareness of how I got there, I could also be reading your papers without any awareness of it.


You would not be able to read without the ability to discretely experience. This was implicit, but perhaps should be made explicit. If you can read the letters on the page, you can discretely experience. If you can then communicate back with me in kind using those letters, then you understand that they are a form of language. If you can do this, you can read my paper, and you can enter the same context as myself if you so choose. You can realize you are a discrete experiencer, and apply the test to reality.

You cannot do that without being a discrete experiencer like me. I would have to come up with a new method of knowing whether someone who could not read or communicate was an "I" as defined. But again, I am not concerned with branching out into detailing how this fundamental process of knowledge could be used to show that a person who cannot communicate is an "I", but with establishing the fundamental process of knowledge first, with which we can then have that discussion.

To that end, it doesn't matter if you're "conscious". It doesn't matter if you're spaced out, in a weird mental state, etc. You're a discrete experiencer like me. You run into the very problem of denying that you are a discrete experiencer, just like myself. So the rest follows: what you discretely experience is what you distinctly know. And for you to conclude that, you must understand deductive beliefs, and be capable of making them.

Do you deny that you deductively think? That you can discretely experience? Of course not. So that is good enough for the purposes that I need to continue the paper into resolving how two discrete experiencers can come to distinctive and applicable knowledge between them. All I need is one other discrete experiencer, and the theory can continue.

Quoting Bob Ross
A person's genetics or past experiences may incline them to discretely experience properties different from others when experiencing the same stimulus.
This is why, I would argue, not everyone who reads your paper is going to fundamentally agree with you with respect to your sheep example.


They can fundamentally disagree with me by distinctive knowledge. They cannot fundamentally disagree with me by application, unless they've shown my application was not deduced. But in doing so, they agree with the process to obtain knowledge that I've set up. The sheep examples are all intended to show we can invent whatever distinctive knowledge we want, but the only way it has use in the world, is to attempt to apply it without contradiction.

I understand you're still concerned with the specifics of distinctive knowledge claims I've made, such as what an "I" is, when the real part to question is the process itself. What I'm trying to communicate is that there is no third party arbiter out there deciding what "I" should mean, or what any word should mean. We invent the terms and words that we use. The question is whether we can create a process out of this that is a useful tool to help us understand and make reasonable decisions about the world.

Is it incorrect that an individual can invent any words or internal knowledge that they use to apply to the world? Is it incorrect, that if I apply my distinctive knowledge to the world and the world does not contradict my application, that I can call that another form of knowledge, applicable knowledge? If you enter into the context of the words I have used, does the logic follow?

Quoting Bob Ross
If it is initialized with a ternary distinction, then, as you hinted earlier, solipsism becomes a problem much quicker and, therefore, your ease of derivation (in terms of a binary distinction) will not be obtained by them. For example, for a person that starts their subjective endeavor with a ternary distinction, it is entirely possible that they must address the issue of “where are these processes coming from?”


It is not a problem for me at all if someone introduces a ternary distinction. The same process applies. They will create their distinctive knowledge. Then, they must apply that to reality without contradiction. If they cannot apply it to reality without contradiction, then they have invented terms that are not able to be applicably known. Distinctive knowledge that implies solipsism tends to fail when applied to the world. In my case, I have terms that can be applicably known. Therefore I have a tool of reasoning that allows me to use my distinctive knowledge to step out into the world and handle it.

Quoting Bob Ross
My problem is that you skip ahead straight to the sheep example, which is an analysis of the products of the processes, when you haven’t addressed the more fundamental problem of whether those very processes are accurate or not


I don't doubt this is a problem for a reader, so thank you for pointing this out. Your feedback tells me I need to explicitly point out how if you are reading this, you are by the definitions I stated, an "I" as well. The sheep part itself I use to give examples to how distinct knowledge can change, and that's ok. The only thing that matters is if that knowledge can be applied to reality without contradiction. So I think I can retain that, I just need to add the detail I mentioned before.

Quoting Bob Ross
Now you could say, and I think this may be what your essays imply, that, look, we have these processes that are throwing stuff at the "I", of which it is aware of, such as perception and thought, and here's what we can do with it. If that's what your essays are trying to get at, then that is fine.


Yes, this is a more accurate assessment of what I am doing. I am inventing knowledge as a tool that can be used. With this, I can say I distinctly know something, and I can applicably know something. I have a process that is proven, and the process itself can be applied to its own formulation. You can go back with the conclusions the paper makes, and apply it from the beginning. I use the process to create the process, and it does not require anything outside of the process as a basic foundation.

Thank you again for your thoughts and critiques! I hope this cleared up what the paper is trying to convey in the first two papers. If these fundamentals are understood, and can withstand your critique, then we can address context, which I feel might need some tightening up. I look forward to your next thoughts!
Bob Ross December 06, 2021 at 03:15 #628275
Hello @Philosophim,

I had to laugh at this one, as I've never had senses defined in such a way as sensing thoughts. We have two definitions here, so let me point out the definition the paper was trying to convey. The intention of the senses in this case is any outside input into the body. Some call them the five senses, but I wasn't necessarily stating it had to be five. Anything outside of the body is something we sense. The thing which takes the senses and interprets it into concepts, or discrete experiences, is the discrete experiencer. For the purposes here, I have noted the ability to discretely experience is one thing you can know about your "self".


Although I understand what you are trying to say, this is factually false (even if we remove my claim that thoughts are sensed). Senses are not restricted to external stimuli (like the five senses, for example): there are quite a lot of senses (some up for debate in scientific circles). Some that would be pertinent to this discussion (and that aren't up for debate) are those that are internally based: equilibrioception (the sense of balance, which is not a sense of external stimuli but, rather, of internal ear fluid), nociception (the sensation of internal pain), proprioception (the sense of one's body parts without external input), and chemoreception (the senses of hunger, thirst, nausea, etc.). I could go on, but I think you probably understand what I mean now: you have plenty of sensations that are deployed and received exclusively within your body. With regards to my claim that "thoughts are not absent of all senses", I do believe this to be true, but I think this isn't relevant to this discussion yet (if at all), so disregard that comment for now.

Could you go into detail as to what you mean? I'm not stating you are wrong. Depending on how you define senses, you could be right. But I can't see how that counters the subdivision I've made either. If I ignore my senses, meaning outputs entering into the body, then what is left is "thoughts".


I think this would digress our conversation even more, so I am going to leave this for a later date. Long story short, I think that just like how you can feel your heartbeat pumping (which doesn't require external stimuli), you can also sense thoughts with seemingly conclusory thoughts (which are emotionally based convincements). Again, this isn't as pertinent to our conversation as I initially thought, and I understand that it doesn't directly negate your idea that, apart from external stimuli, there are "thoughts". However, to say thoughts are apart from all senses is, I think, wrong.

No. I hesitate to go into animals, because its just a side issue I threw in for an example, with the assumption that the core premises were understood. Arguing over animal knowledge is missing the point. If you accept the premises of the argument, then we can ask how we could apply these definitions to animal knowledge. If you don't accept the premises of the argument, then applying it to animals is a step too far. This is not intended to dodge a point you've made. This is intended to point out we can't go out this far without understanding the fundamanets. My apologies for jumping out here too soon! Instead, I'm going to jump to other people, which is in the paper.


I understand your point that discussing animals is a step too far, but my critique also applies to humans.

You would not be able to read without the ability to discretely experience. This was implicit but perhaps should be made explicit. If you can read the letters on the page, you can discretely experience. If you can then communicate me with those letters back in kind, then you understand that they are a form of language. If you can do this, you can read my paper, and you can enter the same context as myself if you so choose. You can realize you are a discrete experiencer, and apply the test to reality.


I completely agree. However, I am not disputing whether you can determine me to be a discrete experiencer: I am disputing whether you can reasonably claim (within what is written in your essays) that you know that I have any sort of knowledge. By your essays' definition, distinctive knowledge is the awareness of discrete experiences: this is a separate claim from whether I have discrete experiences. In other words, you are right in stating that you have sufficient justification to say I am a discrete experiencer, but then you have to take it a step further and prove that I am aware of my discrete experiences. In your essays, both forms of knowledge that you define (distinctive and applicable) are separate claims pertaining to the subject, beyond the claim that they are discrete experiencers.

Your essays do not define "discrete experiences" as a form of knowledge (correct me if I am wrong here), but they define two forms of knowledge: the awareness of discrete experiences (distinctive knowledge) and, once there is distinctive knowledge, the application of beliefs (applicable knowledge). Since applicable knowledge is contingent on distinctive knowledge, and distinctive knowledge is, in turn, contingent on awareness, proving that I, as the reader, have discrete experiences does not in any way prove that I have any form of knowledge as defined in your essays.

You cannot do that without being a discrete experiencer like me

To that end, it doesn't matter if you're "conscious". It doesn't matter if you're spaced out, in a weird mental state, etc. You're a discrete experiencer like me.

True, but I could be a discrete experiencer without having any knowledge as defined by your essays (namely, without any distinctive or applicable knowledge).

So the rest follows that what you discretely experience, is what you distinctely know.


This is exactly what I have been trying to demonstrate: the first half of the above quote does not imply in any way the second half (for other "I"s). You define distinctive knowledge with an explicit contingency on awareness, not simply on the fact that you discretely experience. I am arguing that you can discretely experience without being aware of it. I think this is basically my biggest issue with your essays summed into one sentence (although I don't want to oversimplify your argument): you wrongly assume that proving something is a "discrete experiencer" thereby proves that it has distinctive and applicable knowledge but, most importantly, you haven't demonstrated that that something is aware of any of it and, consequently, you haven't proven they have either form of knowledge.

Do you deny that you deductively think?


Yes. I inductively think my way into a deductive belief. I have a string of thoughts (inductively witnessed) that form a seemingly conclusory thought (which can most definitely be a deductive belief). My thoughts do not start, or initialize so to speak, with deduction.

So that is good enough for the purposes that I need to continue the paper into resolving how two discrete experiencers can come to distinctive and applicable knowledge between them


Again, I think that you are wrongly assuming that proving that an individual discretely experiences directly implies that they have distinctive or applicable knowledge: you defined distinctive knowledge specifically to be contingent on awareness, which I don't think has anything to do with what you defined as "discrete experiences".

They can fundamentally disagree with me by distinctive knowledge. They cannot fundamentally disagree with me by application, unless they've shown my application was not deduced


I agree that, in the event that they begin partaking in their analysis of the products of those processes, they will apply them the same way as you. However, it doesn't begin with a deduction: you have to induce your way to a deduction that you can then apply. If that is what you mean when you say that it begins with deduction (that it is an induced deductive belief), then I agree with you on this.

The sheep examples are all intended to show we can invent whatever distinctive knowledge we want, but the only way it has use in the world, is to attempt to apply it without contradiction.


I agree with you here, but I don't think you are starting your writing endeavor (in terms of the essays) at the basis, you are starting at mile 30 in a 500 mile race. Once we agree up to 30, then I (generally speaking) agree with you up to 500 (or maybe 450 (: ) and I think you do a great job at assessing it from mile 30 all the way to 500. However, I think it is important to first discuss the first 30 miles, otherwise we are building our epistemology on axioms.

What I'm trying to communicate, is that there is no third party arbiter out there deciding what "I" should mean, or what any word should mean. We invent the terms and words that we use. The question is whether we can create a process out of this that is a useful tool to help us understand and make reasonable decisions about the world.


I understand and this is a fair statement. However, my biggest quarrel is that I believe you to be starting at mile 30 when you should be starting at mile 0. If you want to base your epistemology on something that doesn't address all the fundamentals, but more generally (with the help of axioms) addresses the issue in a way most people will understand, then that is fine. I personally don't agree that we should start at mile 30 and then loop back around later to discuss miles 0-29. It must start at 0 for me.

Is it incorrect that an individual can invent any words or internal knowledge that they use to apply to the world? Is it incorrect, that if I apply my distinctive knowledge to the world and the world does not contradict my application, that I can call that another form of knowledge, applicable knowledge?


This is absolutely correct. But I am not disputing this.

If you enter into the context of the words I have used, does the logic follow?


No it does not. Proving a being to be a discrete experiencer doesn't prove awareness of such. Therefore, by your definitions of knowledge, I don't understand how your essays prove others to have any forms of knowledge (distinctive or applicable). If you start at mile 30, and I start at mile 30, then I think that your logic (without commenting on the induction hierarchy yet, so just essays 1-3) is sound. But, again, that's assuming a lot to get to mile 30.

It is not a problem for me at all if someone introduces a ternary distinction. The same process applies. They will create their distinctive knowledge. Then, they must apply that to reality without contradiction. If they cannot apply it to reality without contradiction, then they have invented terms that are not able to be applicably known.


I agree. But, again, this is starting at mile 30, not mile 0. You skip over the deeper questions here and generally start from the analysis of the products of the processes: this is not the base. If you don't want to start from the base, then that is totally fine (I just disagree). If you think that you are starting from the base, then I would be interested to know if you think that the binary distinction is the base.

I don't doubt this is a problem for a reader, so thank you for pointing this out. Your feedback tells me I need to explicitly point out how if you are reading this, you are by the definitions I stated, an "I" as well.


Again, this only proves that you know that I, as the reader, am a discrete experiencer. Now you have to prove that I have distinctive and, thereafter, applicable knowledge. Distinctive knowledge was defined as the awareness of discrete experiences, not merely the discrete experiences themselves. Therefore, I don't think your "I" is being extended univocally to other "Is" in your essays: your "I" is a discrete experiencer that is aware of it and, thereby, has both accounts of knowledge, whereas the other "Is" are merely proven to be discrete experiencers (no elaboration or proof on whether they are aware of such). I get that, for me, I know I am aware, but to apply your logic to other people, it does not hold that I know that they are aware due to their discrete experiences. Therefore, when your essays start discussing context, they wrongly assume that the contents of the preceding essays covered a proof of some sort that other "Is" are aware of their discrete experiences.

Yes, this is a more accurate assessment of what I am doing. I am inventing knowledge as a tool that can be used.


My critique here (that I am trying to portray) is that this tool is starting out at mile 30, not 0.

I hope I wasn't too repetitive, but I think this is a vital problem with your essays. But, ironically, it isn't a problem if you wish to start at mile 30, and if that is the case then I will simply grant it (for the sake of conversation) and continue the conversation to whatever lies after it. I personally don't think it is a good basis for epistemology because it isn't a true basis: it utilizes axioms.
Bob
Philosophim December 06, 2021 at 03:39 #628283
Quoting Bob Ross
By your essays' definition, distinctive knowledge is the awareness of discrete experiences


Ah, I see now. This is incorrect. A person's awareness of the vocabulary has nothing to do with it. Their awareness that they discretely experience has nothing to do with it. Your discrete experiences ARE your distinctive knowledge. Whether or not you have a discrete experience meta-analyzing a discrete experience isn't important.

The point was I wondered whether I could prove myself wrong that I discretely experienced. I could not. Then I asked if I could prove that the discrete experiences I had did not exist. I found that I could not. Of course they exist. I'm having them. Therefore discrete experiences are knowledge of the individual. But a particular type of knowledge. It is when one tries to apply that discrete experience as representing external reality that one needs to evaluate whether that is an applicable belief, or an applicable piece of knowledge.

That is why, if you can read, I know you can discretely experience that language. Then I introduce the terms to you. Then I show you a process by which you can attempt to apply it to reality using deduction, apart from a belief. Perhaps the label of distinctive knowledge is confusing and unnecessary. All I wanted to show is that any discrete experience is something you know, whether you realize it or not. That is a type of knowledge within your personal context. This was to contrast with the application of that personal knowledge as a belief about reality.

If I removed distinctive knowledge from the terminology, and just used "discrete experiences when not applying to reality" would that make more sense? Do you think there is a better word or terminology? And does that clear up what is going on now? I agree with you by the way, if I tried to assert that distinctive knowledge required a person to be aware of their discrete experiences, I would be introducing a meta analysis on discrete experiences that could never be proven. I am not doing that.
Bob Ross December 06, 2021 at 04:09 #628292
@Philosophim
I think that we are in agreement if you are removing the idea of awareness from distinctive knowledge. However, I am still slightly confused, as here is your definition in your essay:

I will label the awareness of discrete experiences as “distinctive knowledge”. To clarify, distinctive knowledge is simply the awareness of one’s discrete experiences.


This explicitly defines distinctive knowledge as the awareness of discrete experiences. But now you seem to be in agreement with me that it can't have anything to do (within the context of your essays) with awareness. I would then propose you change the definition because, as of now, it specifically differentiates discrete experiences from distinctive knowledge solely based off of the term "awareness". I think that your essays are addressing what can be known based off of the analysis of the products of the processes and in relation to other discrete experiencers (where their awareness of such is irrelevant to the subject at hand, only that they also discretely experience). Is that fair to say? If so, then am I making any sense (hitherto) about why this is starting at mile 30? Maybe I am not explaining it well enough (it is entirely possible as I am not very good at explaining things). Are you ok with your essays starting their endeavor at mile 30, as opposed to mile 0? Or do you think it is starting at 0?
Bob
Bob Ross December 06, 2021 at 04:17 #628296
@Philosophim

Also, I would then be interested in what your refurbished definition of distinctive knowledge would be: is it simply discrete experiences and memories? If so, then I think this completely shifts the claims your essays are making and, subsequently, my critiques. But I won't get into that until after you respond (if applicable).
Bob
Philosophim December 06, 2021 at 13:21 #628406
Reply to Bob Ross Quoting Bob Ross
I will label the awareness of discrete experiences as “distinctive knowledge”. To clarify, distinctive knowledge is simply the awareness of one’s discrete experiences.

This explicitly defines distinctive knowledge as the awareness of discrete experiences. But now you seem to be in agreement with me that it can't have anything to do (within the context of your essays) with awareness.


No, you are quite right, Bob. I wrote this decades ago when I was much younger and not as clear with my words. I believe you are one of the few who has read this seriously. Back then, I had a greater tendency to use words more from my own context and personal meaning than what would be proper and precise English. This is a mistake in my writing.

Yes, distinctive knowledge is the discrete experiences you have. Memory, as well, is a discrete experience. If this is understood, then I think we can continue.
Bob Ross December 06, 2021 at 14:47 #628415
@Philosophim

I am glad that we have reached an agreement! I completely understand that views change over time and we don't always refurbish our writings to reflect that: completely understandable. Now I think we can move on. Although this may not seem like much progression, in light of our definition issues, I would like you to define "experience" for me. This definition greatly determines what your argument is claiming. For example, if the processes that feed the "I" and the "I" itself are considered integrated and, therefore, synonymous, then I think your "discrete experiencer" argument is directly implying that one can have knowledge without being aware of it (and that the processes and the "I" that witnesses those processes are the same thing). On the contrary, if the processes that feed the "I" and the "I" itself, to any sort of degree, no matter how minute, are distinguished then, therefrom, I think you are acknowledging that, no matter to what degree, awareness is an aspect of knowledge. If neither of those two best describes your definition of "experience", then I fear that you may be using the term in an ambiguous way that integrates the processes (i.e. perception, thought, etc) with the "I" without necessarily claiming them to be synonymous (which would require further clarification as I don't think it makes sense without such). If nothing I have explained so far applies to your definition of "experience", then that is exactly why I would like you to define it in your own words.
Bob
Philosophim December 07, 2021 at 04:26 #628686
Reply to Bob Ross
Certainly Bob!

Experience is your sum total of existence. At first, this is undefined. It precedes definition. It is that which definitions are made for and from. A discrete experiencer has the ability to create some type of identity, to formulate a notion that "this" is separate from "that" over there within this undefined flood.

It is irrelevant if a being that discretely experiences realizes they are doing this, or not. They will do so regardless of what anyone says or believes. In questioning the idea of being able to discretely experience, I wondered: are the discrete experiences we make "correct"? And by "correct", it seems, I mean: "Is an ability to discretely experience contradicted by reality?" No, because the discrete experience is the close examination of "experience". At a primitive level it is pain or pleasure. The beating of something in your neck. Hunger, satiation. It is not contradicted by existence, because it is the existence of the being itself. As such, what we discretely experience is not a belief. It is "correct".

If I discretely experience that I feel pain, I feel pain. It's undeniable by anything in existence, because it is existence itself. If I remember something from years past, that memory exists. If I choose to define an existence as something, I choose to do that. It is undeniable that I have chosen that. Therefore discrete experience is "known" by a discrete experiencer by the fact that it is not contradicted by reality.

Again, a discrete experiencer does not have to realize that their act of discretely experiencing is discrete experiencing. Discrete experience is not really a belief, or really knowledge in the classical sense. When I say distinctive knowledge, it is the set of discrete experiences a thing has. A discrete experiencer has discrete experiences. But, if a bit of distinctive knowledge is used in one extra step, to assume that what one discretely experiences can be used to accurately represent something more than the discrete experience itself, then we have a situation where it is a belief, or knowledge. When one has applied their distinctive knowledge, adjusting it to logically apply to reality without contradiction, I call that applicable knowledge.

That's basically the start, and I hope it explains experience and discrete experience with greater clarity!
Bob Ross December 08, 2021 at 00:07 #628979
Hello @Philosophim,

Thank you for the clarification! I see now that we have pinpointed our disagreement and I will now attempt to describe it as accurately as I can. Your definition of "experience" is something that I disagree with on many different accounts; hopefully I can explain adequately hereafter.

Firstly, I apologize: I should have defined the term "awareness" much earlier than this, but your last post seems to be implying something entirely different than what I was meaning to say by "awareness". I am talking about an "awareness" completely separate from the idea of whether I am aware of (recognize) my own awareness (sorry for the word salad here). For example, when you say:

It is irrelevant if a being that discretely experience realizes they are doing this, or not. They will do so regardless of what anyone says or believes.


You are 100% correct. I do not need to recognize that I am differentiating the letters on my keyboard from the keyboard itself: the mere differentiation is what counts. But this, I would argue, is a recognition of your "awareness" (aka awareness of one's awareness), not awareness itself. So instead, I would say that I don't need to be aware (or recognize) that I am aware of the differentiation of the letters on my keyboard from the keyboard itself: all that must occur is the fundamental recognition (awareness) that there even is differentiation in the first place. To elaborate further, when you say:

A discrete experiencer has the ability to create some type of identity, to formulate a notion that "this" is separate from "that" over there within this undefined flood.


I think you are wrong: "I" am not differentiating (separating "this" from "that"); something is differentiating from an undefined flood and "I" recognize the already differentiated objects (this is "awareness" as I mean it). To make it less confusing, I will distinguish awareness of awareness (i.e. defining terms or generally realizing that I am aware) from mere awareness (the fundamental aspect of existence) by defining the former as "sophisticated awareness" and the latter as "primitive awareness". In light of this, I think that when you say:

If I discretely experience that I feel pain, I feel pain. Its undeniable by anything in existence, because it is existence itself...Again, a discrete experiencer does not have to realize that their act of discretely experiencing, is discrete experiencing. Discrete experience is not really a belief, or really knowledge in the classical sense.


I think you are arguing that one doesn't have to be aware (recognize) that they are aware of the products of these processes ("sophisticated awareness") to be able to discretely experience, which would be directly synonymous with what I think you would claim to be our realization of our discrete experiences. This is 100% true, but this doesn't mean that we don't, first and foremost, require awareness ("primitive awareness") of those processes. The problem is that it is too complicated to come up with a great example of this, for if I ask you to imagine that "you" didn't see the keys on your keyboard as separate from one another, then you would say that, in the absence of that differentiation, "your" "discrete experiences" would lack that specific separation. But I am trying to go a step deeper than that: the differentiation (whether the key is separated from the keyboard or it is one unified blob) is not "you" because that process of differentiating is just as foreign, at least initially, as the inner workings of your hands. "You" initially have nothing but this differentiation (in terms of perception) to "play with", so to speak, as it is a completely foreign process to "you". What isn't a foreign process to "you" is the "thing" that is "primitively aware" of the distinction of "this" from "that" (aka "you") but, more importantly, "you" didn't differentiate "this" from "that": it is just there.

To make it clearer, when you describe experience in this manner:

Experience is your sum total of existence.

At a primitive level it is pain or pleasure. The beating of something in your neck. Hunger, satiation. It is not contradicted by existence, because it is the existence of the being itself.


I think you are wrong in a sense. Your second quote here, in my opinion, is referring directly to the products of the processes, which cannot be "experienced" if one is not "primitively aware" of them. I'm fine with saying that "experience" initially precedes definition (or potentially that it even always precedes definition), but I think the fundamental aspect of existence is "primitive awareness". If the beating of something in your neck, which is initially just as foreign to you as your internal organs, wasn't something that you were "primitively aware" of, then it would slip your grasp (metaphorically speaking). With respect to the first quote here, I don't think that my "primitive awareness", although it is the fundamental aspect of existence, is the sum total of all existence: the representations (the products of the processes), the processes themselves, and the "primitive awareness" of them are naturally tri-dependent. However, and this is why I think "primitive awareness" is the fundamental aspect, it is not an equal tri-dependency: for if "primitive awareness" is removed, the processes live on, and yet "I" am removed along with it. Furthermore, the processes themselves are never initially known at all (which I agree with you on), but only their products and, naturally I would say, they are, in turn, useless if "I" am not "primitively aware" of them. So it is like a ternary distinction, but not an equal ternary distinction in terms of immediateness (or precedence) to the "I". For example, if something (the processes) wasn't differentiating the keys on my keyboard, then I would not, within my most fundamental existence, "experience" the keys on a keyboard. 
On the contrary, I see no reason to believe that, in the event that I was no longer "primitively aware" of the differentiation between the keys and the keyboard (of which I did not partake in and which is, initially, just as foreign to me as the feeling of pain), the processes wouldn't persist. What I am saying is that "experience" is "primitive awareness" and it depends on the products of the processes to "experience" anything (and, upon further reflection and subsequently not initially, the processes themselves).

Now, I totally understand that the subject, initially speaking, is not (and will not be) aware of my terminology, but just like how they don't have to be aware of your term "discrete experience" to discretely experience, so too I would argue they don't have to be aware of my term "primitive awareness" to "experience".

In other words, when you say:
Experience is your sum total of existence. At first, this is undefined. It precedes definition.


I agree that discrete experiences, in terms of the products of the processes, are initially undefined, but the "primitive awareness" is not. You don't have to know what a 'K' means on your keyboard to know that you are aware that there is something in a 'K' shape being differentiated from another thing (which we would later call a keyboard). This is why the "primitive awareness" is more fundamental than the products of the processes: you don't have to make any sense of the perceptions themselves to immediately be "primitively aware" of those perceptions.

And, lastly, I would like to point something out that doesn't pertain to the root of our discussion:
In questioning the idea of being able to discretely experience I wondered, are the discrete experiences we make "correct"? And by "correct" it seems, "Is an ability to discretely experience contradicted by reality?" No, because the discrete experience, is the close examination of "experience"


I would not count this as a real proof: that discretely experiencing doesn't contradict reality and, therefore, it is "correct". I understand what you are saying, but fundamentally you are comparing the thing to itself. You are really asking: "Is an ability to discretely experience contradicted by discretely experiencing?". You are asking, in an effort to derive an orange, "does an orange contradict an orange?". You set criteria (that it can't contradict reality) and then define it in a way where it is reality, so basically you are asking "does reality contradict reality". Don't get me wrong, I would agree that we should define "correct" as what aligns with experience, but it is an axiom and not a proof. It is entirely possible that experience is completely wrong due to the representations being completely wrong and, more importantly, you can't prove the most fundamental by comparing it to itself. I think my critique is what you are sort of trying to get at, but I don't see the use in asking the question when it is circular: it is taken up, at this point, as an axiom, but your line of reasoning here leads me to believe that you may be implying that you proved it to be the case.

In light of all I have said thus far, do you disagree with my assessment of "experience"?

I look forward to hearing back from you,
Bob
Philosophim December 08, 2021 at 13:52 #629131
Yes, I am enjoying the discussion of getting to the essence of the work. I much appreciate your desire to understand what the argument is trying to say, and I hope I am coming across as trying to understand the argument you are making as well.

Quoting Bob Ross
It is irrelevant if a being that discretely experience realizes they are doing this, or not. They will do so regardless of what anyone says or believes.

You are 100% correct. I do not need to recognize that I am differentiating the letters on my keyboard from the keyboard itself: the mere differentiation is what counts. But this, I would argue, is a recognition of your "awareness" (aka awareness of one's awareness), not awareness itself. So instead, I would say that I don't need to be aware (or recognize) that I am aware of the differentiation of the letters on my keyboard from the keyboard itself: all that must occur is the fundamental recognition (awareness) that there even is differentiation in the first place.


Good, I think we're thinking along the same lines now. That fundamental recognition matches the definition of discrete experiencing. Such discrete experiencing does not require words. We could say that words are a "higher" level of discrete experiencing. But I don't do that in the paper, because that differentiation is not important as a fundamental.

Now can the theory be refined with this differentiation? It could. Someone could call that consciousness. Someone could say, "I" isn't the primitive part of me, "I" only requires that I have consciousness or higher level defining. The theory allows this without issue. But that refinement of "I" would be a different context of the "I" in the argument. The "conscious I" versus the "unconscious I" is one possible example.

Quoting Bob Ross
I think you are wrong: "I" am not differentiating (separating "this" from "that"), something is differentiating from an undefined flood and "I" recognize the already differentiated objects (this is "awareness" as I mean it).


This is a perfect example of your discrete experience versus mine. I am not wrong. My definition of "I" applies to reality without contradiction. Your definition of "I" is also 100% correct. Can it apply to reality without contradiction? Perhaps. But we are not having a disagreement about the application of the word, we are having a disagreement about the construction of the definition.

My "I" contains both the fundamental and the "higher" level discrete experiences we make that I believe you are pointing out. Whether it's the fundamental awareness or meta awareness (making a fundamental awareness into a word, for example), they are both discrete experiences. A house cat and a tiger are both cats. For certain arguments, it is important to differentiate between the two. And it may be necessary as the theory grows, or someone creates a new theory based on these fundamentals. But for now, for the fundamentals, I see no reason, by application, why there needs to be a greater distinction or redefinition of "the primitive I". The only reason I have the primitive "I" is to quickly get into the idea of context without contradiction, without needing to dive into some form of consciousness, which would likely be another paper.

You are putting your own desired definition of "I" into the argument. Which is fine and perfectly normal. You might be thinking I am stating that my definition of "I" is the definition that is 100% correct, and we should all use it forevermore. I am not. I am saying "I" in this context of understanding knowledge as a process is all that we need. I am not saying we couldn't have "I" mean something different in a different context. In psychology, "I" will be different. For a five year old, "I" will be different. Each person can define "I" as they wish. If they can apply it to reality without contradiction, then they have a definition that is useful to them in their context.

"I" here is simply a definition useful within the context of showing the fundamental process of knowledge as a tool between more than one "I", or discrete experiencer.

Quoting Bob Ross
I'm fine with saying that "experience" initially precedes definition (or potentially that it even always precedes definition), but I think the fundamental aspect of existence is "primitive awareness". If the beating of something in your neck, which is initially just as foreign to you as your internal organs, wasn't something that you were "primitively aware" of, then it would slip your grasp (metaphorically speaking).


Agreed. If you don't discretely experience something, then it is part of the undefined existence. To reiterate, this applies to primitive awareness. I'm not sure we both have the same intention when using this new phrase, but for my part, it's merely the barest of discrete experiences. Think of it this way. My primitive discrete experience is seeing a picture and the feelings associated with it. Then I look closer, and see a sheep in the field. Then I look again and see there is another sheep crouching in the grass in the field that I missed the first two times. While the crouching sheep was always in my vision, I did not discretely experience it. Or, as I think you are implying, have primitive awareness of it.

Quoting Bob Ross
For example, if something (the processes) wasn't differentiating the keys on my keyboard, then I would not, within my most fundamental existence, "experience" the keys on a keyboard.


If you define "I" as consciousness, then you are correct within this context, and could applicably know that. But if I define "I" as a discrete experiencer, you are incorrect in your application. If I am able to pick out and type a "k" on the keyboard, that cannot be done without a discrete experience. Just because you haven't registered it beyond haptics, or have to put a lot of mental effort into it, doesn't mean it isn't a discrete experience.

Do you see the importance of definitions within contexts? We have two different contexts of "I", and they are both correct within their contexts. The question is, which one do we use then? But if we are at this point, then we are at the level of understanding the fundamentals of the argument to address that point.

First, I asked you to understand the context of "I" that I've introduced here, which I believe you have done more than admirably. I hope I have shown that I understand your context of "I" as well. At this point, we attempt to apply both to reality without contradiction. We both succeed. Why I'm asking you to use my "I" is because it helps us get to the part of the argument where we introduce context. Perhaps I could introduce "consciousness" and get to the same point. But that would likely extend the argument by pages, and would only be explaining a sub-division of discrete experience. Why introduce a sub-division when it doesn't seem necessary to talk about context? If you can explain why my definition of an "I" does not allow me to identify other discrete experiencers, then you will have a point. But so far, I do not see that. Therefore, I do not think we need that context of your "I".

What I'm trying to indicate is that your context of "I" for the argument isn't the "I" of the context of this argument. Within my contextual use of "I", can I apply that to reality without contradiction? You might say yes, but feel that it is inadequate and does not address so many other things you want to discuss. That is fine. My "I" does not negate your "I", nor its importance in application. If it makes you more comfortable, we could make a different word or phrase for it, like "Primitive I". It is not the word that matters. It is the underlying meaning and context. For the context of ultimately arriving at applicable knowledge, and then at the idea that there are other discrete experiencers besides myself, is this enough?

Quoting Bob Ross
I would not constitute this as a real proof: that discretely experiencing doesn't contradict reality and, therefore, it is "correct".


I do not believe it is an axiom. Someone can question if what they discretely experience is "real". The axiom I think is, "That which does not contradict reality is knowledge". I don't have any proof of this statement when it is introduced. I state it, then try to show it can be true. If the axiom is upheld, then I can conclude that what I discretely experience is known to me. But without the axiom of what knowledge is, I don't believe I claim that. Even then, I don't like the idea of "something that is true by default". I believe we can start with assumptions, but when we conclude there should be some proof that our assumptions are also correct in some way. But like you said, this is an aside to the conversation. I will not say you are wrong, and I am just giving an opinion that may also be wrong. The discussion of proofs and axioms could be a great topic for another time though!



Bob Ross December 09, 2021 at 02:23 #629400
Hello @Philosophim,

I agree, I think that we both understand each other's definition of "I" and that I have not adequately shown the relevance of my use of "I". Furthermore, I greatly appreciate your well thought-out replies, as they have helped me understand your papers better! In light of this, I think we should progress our conversation and, in the event that it does become pertinent, I will not hesitate to demonstrate the significance (and, who knows, maybe, as the discussion progresses, they dissolve themselves). Until then, I think that our mere recognition of each other's difference of terminology (and the underlying meanings) will suffice. To progress our conversation, I have re-read your writings a couple of times over (which does not in any way reflect any kind of mastery of the text) and I have attempted to assess them better. Moreover, I would like to briefly cover some main points and, thereafter, allow you to decide what you would like to discuss next. Again, these pointers are going to be incredibly brief, only serving as an introduction, so as to allow you to determine, given your mastery (naturally) of your own writings, what we ought to discuss next. Without further ado, here they are:

Point 1: Differentiation is a product of error.

When I see a cup, it is an error of my perception. If I could see more accurately, I would see atoms, or protons/neutrons/electrons or what have you, and, thereby, the distinction of cup from the air surrounding it becomes less and less clear. Perfectly accurate eyes are just as blind as perfectly inaccurate eyes: differentiation only occurs somewhere in between those two possibilities. Therefore, a lot of beliefs are both applicable knowledge and not applicable knowledge: it is relative to the scope. For example, the "cup" is a meaningful distinction, but is contradicted by reality: the more accurately we see, or sense in general, the more the concept of a "cup" contradicts it. Therefore, since it technically contradicts reality, it is not applicable knowledge. However, within the relative scope of, let's say, a cup on a table, it is meaningful to distinguish the two even though, in "reality", they are really only distinguishable within the context of an erroneous eyeball.

Point 2: Contradictions can be cogent.

Building off of point 1, here's an example of a reasonable contradiction:
1. There are two objects, a cup and a table, which are completely distinct with respect to every property that is initially discretely experienced
2. Person A claims the cup and table to be separate concepts (defining one as a 'cup' and the other as a 'table')
3. Person B claims that the cup and the table are the same thing.
4. Person A claims that Person B has a belief that contradicts reality and demonstrates it by pointing out the glaring distinctions between a cup and a table (and, thereby, the contradictions of them being the same thing).
5. Person B argues that the cup and table are atoms, or electrons/protons/neutrons or what have you, and, therefore, the distinction between the cup and the table is derived from Person A's error of perception.
6. In light of this, and even in acknowledgement of this, Person A still claims there is a 'cup' and a 'table'.
7. Person A now holds two contradictory ideas (the "cup" and "table" are different, yet fundamentally they are not different in that manner at all): the lines between a 'cup' and a 'table' arise out of the falseness of Person A's discrete experiences.
8. Person B claims that Person A, in light of #7, holds a belief that is contradicted by reality and that Person A holds two contradictory ideas.

Despite Person A's belief contradicting reality, it is still cogent because, within the relative scope of their perceptions, there is a meaningful distinction between a 'cup' and a 'table'--but only relative to reality within that scope. Also, Person A can reasonably hold both positions, even though they negate one another, because the erroneous nature of their existence produces meaningful distinctions that directly contradict reality. In this instance, there is no problem with a person holding (1) a belief that contradicts reality and (2) two contradictory, competing views of reality.

Point 3: Accidental and essential properties are one and the same

Building off of points 1 and 2, the distinction between an accidental and an essential property seems to be only a difference of scope. I think this is the right time to invoke the Ship of Theseus (which you briefly mention in the original post in this forum). When does a sheep stop being a sheep? Or a female stop being a female? Or an orange stop being an orange?

Point 4: The unmentioned 5th type of induction

There is another type of induction: "ingrained induction". You have a great example of this that you briefly discuss in the fourth essay: Hume's problem of induction. Another example is that the subject has to induce that "this" is separate from "that", but it is an ingrained, fundamental induction. The properties and characteristics that are a part of discrete experience do not in themselves prove in any way that they are truly differentiating factors: the table and the chair could, in reality, be two representations of the same thing, analogous to two very different looking representations of the same table directly produced by different angles of perspective. We have to induce that the use of these properties and characteristics (such as light, depth, size, quantity, shape, color, texture, etc.) are reasonable enough differentiating factors to determine "this" separate from "that". For example, we could induce that, given the meaningfulness of making such distinctions, we are valid enough in assuming they are, indeed, differentiating factors. Or we could shift the focus and claim that we don't really care if, objectively speaking, they are valid differentiating factors, but, rather, that the meaningfulness is enough.

Point 5: Deductions are induced

Building off of point 4, "ingrained induction" is utilized to gather any imaginable kind of deductive principle: without such, you can't have deductions. This directly implies that it is not completely the case that deductions are what one should try to anchor inductions off of (in terms of your hierarchical structure). For example, the fact of gravity (not considering the theory or law), which is an induction anchored solely to the "ingrained induction", is a far "surer" belief, so to speak, than the deductive principle of what defines a mammal. If I had to bet on one, I would bet on the continuation of gravity and not the continuation of the term "mammal" as it has been defined: there are always incredible gray areas when it comes to deductive principles and, on some occasions, it can become so ambiguous that it requires refurbishment.

Point 6: Induction of possibility is not always cogent

You argue in the fourth essay that possibility inductions are cogent: this is not always the case. For example:

A possibility is cogent because it relies on previous applicable knowledge. It is not inventing a belief about reality which has never been applicably known.


1. You poofed into existence 2 seconds ago
2. You have extremely vivid memories (great in number) of discretely experiencing iron floating on water
3. From #2, you have previous applicable knowledge of iron floating on water
4. Since you have previous applicable knowledge of iron floating on water, then iron floating on water is possible.
5. We know iron floating on water is not possible
6. Not all inductive possibilities are cogent

Yes, you could test to see if iron can float, but, unfortunately, just because one remembers something occurring doesn't mean it is possible at all: your applicable knowledge term does not take this into account; only the subsequent plausibility inductions make this sub-distinction.

Point 7: the "I" and the other "I"s are not used equivocally

Here's where the ternary distinction comes into play: you cannot prove other "I"s to be a discrete experiencer in a holistic sense, synonymous with the subject as a discrete experiencer, but only a particular subrange of it. You can't prove someone else to be "primitively aware", and consequently "experience", but only that they have the necessary processes that differentiate. In other words, you can prove that they differentiate, not that they are primitively aware of the separation of "this" from "that".

Hopefully those points are a good starting place. I think I hit on a lot of different topics, so I will let you decide what to do from here. We can go point-by-point, all points at once, or none of the points if you have something you would like to discuss first.

I look forward to hearing from you,
Bob
Philosophim December 09, 2021 at 20:00 #629549
Reply to Bob Ross
Fantastic points! It is a joy for me to see someone else understand the paper so well. I'm not sure anyone ever has. Let's go over the points you made.

Quoting Bob Ross
Point 1: Differentiation is a product of error.

When I see a cup, it is the error of my perception. If I could see more accurately, I would see atoms, or protons/neutrons/electrons or what have you, and, thereby, the distinction of cup from the air surrounding it becomes less and less clear. Perfectly accurate eyes are just as blind as perfectly inaccurate eyes: differentiation only occurs somewhere in between those two possibilities.


Instead of the word "error" I would like to use "difference/limitations". But you are right about perfectly inaccurate eyes being as blind as eyes which are able to see in the quantum realm, if they are trying to observe within the context of normal healthy eyes. Another contextual viewpoint is "zoom". Zoom out and you can see the cup. Zoom in on one specific portion and you no longer see the cup, but a portion of the cup and the elements it is made from.

Fortunately, we are not only bound to sight with our senses. Not only do we have our natural senses, we can invent measurements to "sense" for us as well. Sight is when light is captured in your eyes, and your brain interprets it into something meaningful. Measurements at the nano or macro level work the same way.

Quoting Bob Ross
Therefore, a lot of beliefs are both applicable knowledge and not applicable knowledge: it is relative to the scope.


You've nailed it, as long as it's realized that what is applicable is within the contextual scope being considered. I can have applicable knowledge in one scope, but not another. This applies not only to my personal context, but to group contexts as well. In America at one time, swans were defined as being white, and applicably known as such. In Western Australia, "swans" can be black. Each had applicable knowledge of what a swan was in their own context, but once the contexts clashed, both had new challenges to their previously applied knowledge. The result is that, within the context of worldwide zoology, swans can be either black or white.

Quoting Bob Ross
For example, the "cup" is a meaningful distinction, but is contradicted by reality: the more accurately we see, or sense in general, the more the concept of a "cup" contradicts it. Therefore, since it technically contradicts reality, it is not applicable knowledge. However, within the relative scope of, let's say, a cup on a table, it is meaningful to distinguish the two even though, in "reality", they are really only distinguishable within the context of an erroneous eye ball.


If you remove the word error, and replace it with "difference" I think you've nailed this. Within the context of having human eyes, we see the world, and know it visually a particular way. We do not see the ultra violet wavelength for example. In ultra violet light, blue changes to white. So is it applicably known as blue, or white? Within the context of a human eyeball, it is blue. In the context of a measurement that can see ultraviolet light, it is white. Within the context of scientific reflective wavelengths, it is another color. None are in error. They are merely the definitions, and applicable knowledge within those contextual definitions.

Quoting Bob Ross
Point 2: Contradictions can be cogent.


I would like to alter this just slightly. Contradictions of applicable knowledge can never be cogent within a particular context. If there is a contradiction within that context, then it is not deduced, and therefore not knowledge. If two people hold two different sets of distinctive knowledge, but both can apply them within that particular context and gain applicable knowledge within that set of distinctive knowledge, then they are not holding a contradiction for themselves. But if two people are using the same distinctive context, then they cannot hold a contradiction in its application to reality.

The real conflict is deciding which distinctive knowledge to use when contexts clash. I'll try not to repeat myself on how distinctive contexts are resolved within an expanded context; the examples I gave in part 3 show that. If you would like me to go over that again in this example, and also go point by point through your example, I will. I'm just trying to cover all of your points in a first pass, and I feel getting into the point-by-point specifics could run too long when trying to cover all of your initial points. Feel free to drill into, or ask me to drill further into, any of these points more specifically in your follow-up post.

Quoting Bob Ross
Building off of point 1 and 2, the distinction between an accidental and essential property seem to be only different in the sense of scope. I think this is the right time to invoke Ship of Theseus (which you briefly mention in the original post in this forum).


Nailed it. And with this, we have an answer to the quandary that Theseus' ship posed. When is a ship not a ship anymore? Whenever we decide it's not a ship anymore within the scale of context. The answer to the question is that there is no one answer.

For example, one society could state that both the original parts and the replaced parts are Theseus' ship. However, the ship that is constructed with the newest parts is the original ship. So if two ships were built, Theseus' ship would be the newest-part ship, while the oldest-part ship would be another ship made out of the original's old parts.

Another society could reverse this. They could say that once a ship has replaced all of its old parts, it is no longer the original ship anymore, and needs to be re-registered with the government. This could be due to the fact that the government assures that all vessels are sea worthy and meet regulation, and it figures if all of the original parts are replaced, it needs to be re-inspected again to ensure it still meets the regulatory standards.

It is a puzzle that has no single answer, yet does have specific answers that fulfil the question; it has puzzled people because they believed there was only one answer.

What is essential and accidental in each is within the context of each society. For accidental properties, perhaps society B wasn't detailed enough, and it turns out you can replace "most" of a part of a ship, like every piece of an engine besides one cog, and that's still "the original engine with a lot of pieces replaced on it." In society A, they might say "it's a new engine with one old piece left on it". In the first case it is essential that every piece be replaced for something to be considered a "new" part, while in the latter, a few old parts put on a new part still mean it's a "new part with some old pieces".

Quoting Bob Ross
There is another type of induction: "ingrained induction". You have a great example of this that you briefly discuss in the fourth essay: Hume's problem of induction. Another example is that the subject has to induce that "this" is separate from "that", but it is an ingrained, fundamental induction.


Recall that the separation of "this" and "that" is not an induction in itself, just a discrete experience. It is only an induction when it makes claims about reality. I can imagine a magical unicorn in my head. That is not an induction. If I believe a magical unicorn exists in reality, that is a belief, and now an induction.

Now you could argue that in certain cases of discrete experience, we also load them with what you call "ingrained inductions". Implicitly we might quickly add, "that exists in reality" and "this exists in reality". You are correct. Most of our day-to-day experiences are not knowledge, but inductions based off of past things we've known, or cogently induced. It's much more efficient that way. Gaining knowledge takes time, experimentation, and consideration. The more detailed the knowledge you want, the more detailed the context, and the more time and effort it takes to obtain it.

And that is ok. I do not carry a ruler around with me to measure distance. Many times I estimate by eye whether something is a few feet across. And for most day-to-day contexts, that is fine. Put me in a science lab, and I am an incompetent who should be banned. Put me in a situation in which I need to know that the stream is a little under a foot wide, and I can easily cross, and I am an efficient and capable person.

Quoting Bob Ross
For example, the fact of gravity (not considering the theory or law), which is an induction anchored solely to the "ingrained induction"


Hm, I would ask you to specify where the induction is. Gravity is not a monolith, but built upon several conclusions of application. Is there a place where gravity has been applied and found to be inconclusive? The induction is not in what gravity claims to describe; the induction would be in its application. Off the top of my head, I could state that the idea that "Gravity is always applying a pull from anything that has mass to every other mass in the universe" is an induction for sure. That does not negate its application between particular bodies we can observe.

But more to your point, I believe the theory allows us to more clearly identify what we can conclude as knowledge, and what we can include as cogent, and less cogent inductions. It may require us to refine certain previous assumptions, or things that we have unintentionally let slide in past conclusions. As science is constantly evolving, I don't see a problem with this if it helps it evolve into a better state. If you would like me to go into how I see this theory in assisting science, I can go into it at a separate post if desired.

Quoting Bob Ross
The properties and characteristics that are apart of discrete experience do not in themselves prove in any way that they are truly differentiating factors: the table and the chair could, in reality, be two representations of the same thing, analogous to two very different looking representations of the same table directly produced by different angles of perspective.


By discrete experience and context, they can, or cannot be. Recall the situation between a goat and a sheep. If I include what a goat is under the definition of a sheep, I can hold that both a goat and a sheep are a "sheep". The reason why we divide up identities into smaller groups of description is that they have some use to us. It turns out that while a goat and a sheep share many properties, they are consistently different enough in behavior that it is easier and more productive to label them as two separate classes of animals.

The idea that the table and chair are two separate things is not a truth in reality apart from our contexts. So there could be a context where chair and tables are separate, or they are together as a "set". We can identify them as we like, as long as we are clear with our identities, and are able to apply them to reality without contradiction.

Quoting Bob Ross
Point 6: Induction of possibility is not always cogent

You argue in the fourth essay that possibility inductions are cogent: this is not always the case.


Cogency is a way to define a hierarchy of inductions. But an induction is still always an induction: its conclusion is not necessarily true from the premises. Just because something existed once does not mean it will ever exist again. We know it's possible, because it has at least existed one time. So in the case where you have a memory of iron floating on water, as long as you believe in the accuracy of your memories, you will reasonably believe it is possible for iron to float on water.

Of course, when you extended that context to another person, you would be challenged. Person after person would state, "No, I've never seen or heard of any test that showed iron floated on water." What you do is your choice. You could start doubting your memory. You could start testing and see that it fails time and time again. You are the only one in the world who thinks its possible, while the rest of society does not.

And finally, inductions are not more reasonable than deductions. If you believe it is possible for iron to float on water, but you continually deduce it is not, you would be holding an induction over a current deduction. You might try to explain it away by stating that it was possible that iron floated on water. Maybe physics changed. Maybe your memories are false or inaccurate. And as we can see, holding the deduction as of greater value than the induction gives us a reason to question our other inductions instead of holding them as true.

And for our purposes, we might indeed be able to prove that their memories are false. Surely they had memories of parents. We could ask the parents if they knew of his birth. They would quickly realize he did not have an ID, or a record of his birth anywhere in society. Once the memories were seen as doubtful, then they could not be sure they had actually seen iron float. At that point, it's plausible that the person's memories of iron floating on water were applicably known, but it has been reduced from a possibility, and is even less cogent now than affirming the deduction of today, that iron does not float on water.

Quoting Bob Ross
Point 7: the "I" and the other "I"s are not used equivocally

Here's where the ternary distinction comes into play: you cannot prove other "I"s to be a discrete experiencer in a holistic sense, synonymous with the subject as a discrete experiencer, but only a particular subrange of it. You can't prove someone else to be "primitively aware", and consequently "experience", but only that they have the necessary processes that differentiate. In other words, you can prove that they differentiate, not that they are primitively aware of the separation of "this" from "that".


You may be correct. We would need to clarify the terms and attempt to apply them to reality. And that's fine. As for this line, "In other words, you can prove that they differentiate, not that they are primitively aware of the separation of 'this' from 'that'": yes, I can. Differentiation within existence is "primitive awareness". Let's not use that phrase anymore if it causes confusion. If we don't have solid definitions between us, we won't match up in the context of discussion.

Another thing to consider is that I don't need to prove anything deeper about the "I" than I did in that context. If you read the paper and understand the concepts, are you a discrete experiencer? Can you deduce? Can you take the methodology, apply it, and come away with consistent results that give you a useful tool for interacting with reality in a rational manner? It is there for you to prove to yourself. If you can understand the paper and follow its conclusions, then you have actively participated in the act of distinctive and applicable knowledge. If you want to produce another "I" for your own personal context, there is nothing stopping you, or contradicting the "primitive I" in the paper.

What I want to take away from this, instead of debating over an "I", is the broader concept that there will be some things that we cannot applicably know based on the context we set up. Will I ever applicably know what it is to discretely experience as you do? No, nor you for me. But can I applicably know that this is impossible? Yes. Applicably knowing our limits is just as important. Calculus was invented to measure limits of calculation, where the calculation eventually forms an asymptote of results. While I may not be able to know what it's like to discretely experience as you do, I can know that you discretely experience, and use that knowledge to formulate a tool that can evaluate up to our limits.

There is my massive reply! Out of all that, pick 2 that you would like me to drill into for the next response. When you are satisfied with those, we can go back and drill into two more, so I don't approach the questionable limits of how much I can type in one post! Wonderful contributions as always.
Bob Ross December 10, 2021 at 05:04 #629669
@Philosophim,

I see now! I now understand your epistemology to be the application of deductions, or inductions that vary by degree of cogency, within a context (scope), which I completely agree with. This kind of epistemology, as I understand it, heavily revolves around the subject (but not in terms of simply what one can conceive, of course) and not whatever "objective reality", or the things-in-themselves, may be: I agree with this assessment. For the most part, in light of this, I think that your brief responses were more than adequate to negate most of my points. So I will generally respond to (and comment on) some parts I think worth mentioning and, after that, I will build upon my newly acquired understanding of your view (although, no doubt, I still do not completely understand it).

Instead of the word "error" I would like to use "difference/limitiations". But you are right about perfectly inaccurate eyes being as blind as eyes which are able to see in the quantum realm, if they are trying to observe with the context of normal healthy eyes. Another contextual viewpoint is "zoom". Zoom out and you can see the cup. Zoom in on one specific portion and you no longer see the cup, but a portion of the cup where the elements are made from.


I agree: it is only "error" if we deem it to be "wrong" but, within context, it is "right".

Contradictions of applicable knowledge can never be cogent within a particular context.


In light of context, I agree: I was attempting to demonstrate contradictions within all contexts, which we both understand and accept as perfectly fine. On a side note, I also agree with your assessment of Theseus' ship.

Recall that the separation of "this" and "that" is not an induction in itself, just a discrete experience. It is only an induction when it makes claims about reality. I can imagine a magical unicorn in my head. That is not an induction. If I believe a magical unicorn exists in reality, that is a belief, and now an induction.


Upon further reflection, I think that I was wrong in stating that differentiation is an "ingrained induction"; I think the only example of "ingrained inductions" is, at its most fundamental level, Hume's problem of induction. That is what I was really meaning by my gravity example, although I was wrongly stating it as induction itself, that I induce that an object will fall the next time I drop it. This is a pure induction and, I would argue, is ingrained in us (and I think you would agree with me on that). After thinking some more, I have come to the conclusion that I am really not considering differentiation an "ingrained induction" but, rather, an assumption (an axiom to be more specific). I am accepting, and I would argue we all are accepting, the principle of noncontradiction as a metalogical principle, a logical axiom, upon which we determine something to either be or not be. However, as you are probably aware, we cannot "escape", so to speak, the principle of noncontradiction to prove or disprove the principle of noncontradiction, just like how we are in no position to prove or disprove the principle of sufficient reason or the principle of the excluded middle. You see, fundamentally, I think that your epistemology stems from "meaningfulness" with respect to the subject (and, thereafter, multiple subjects) and, therefrom, you utilize the most fundamental axiom of them all: the principle of noncontradiction as a means towards "meaningfulness". It isn't that we are right in applying things within context of a particular, it is that it is "meaningful" for the subject, and potentially subjects, to do so and, therefore, it is "right". This is why I don't think you are, in its most fundamental sense, proving any kind of epistemology grounded on absolute grounds but, rather, you are determining it off of "meaningfulness" on metalogical principles (or logical axioms). 
You see, this is why I think a justified, true, belief (and subsequently classical epistemology) has been so incomplete for such a long time: it is attempting to reach an absolute form of epistemology, wherein the subject can finally claim their definitive use of the term "know", whereas I think that to be fundamentally misguided: everything is in terms of relevancy to the subject and, therefore, I find that relevancy directly ties to relative scope (or context as you put it) (meaningfulness).

I also apply this (and I think you are too) to memories: I don't think that we "know" any given memory to truly be a stored experience but, rather, I think that all that matters is the relevance to the subject. So if that memory, regardless of whether it got injected into their brain 2 seconds ago or it is just a complete figment of the imagination, is relevant (meaningful) to the subject as of "now", then it is "correct" enough to be considered a "memory" for me! If, on the contrary, it contradicts the subject as of "now", then it should be disbanded because the memory is not as immediate as experience itself. I now see that we agree much more than I originally thought!

I would also apply this in the same manner to the hallucinated "real" world and the real world example I originally invoked (way back when (: ). For me, since it is relative to context, if the context is completely limited to the hallucinated "real" world, then, for me, that is the real world. Consequently, what I can or cannot know, in that example, would be directly tied to what, in hindsight, we know to be factually false; however, the knowledge, assuming it abides by the most fundamental logical axiom (principle of noncontradiction), is "right" within my context. Just like the "cup" and "table" example, we only have a contradiction within multiple "contexts", which I am perfectly fine with. With that being said, I do wonder if it is possible to resolve the axiomatic nature of the principle of noncontradiction, because I don't like assuming things.

Furthermore, in light of our epistemologies aligning much better than I originally thought, I think that your papers only thoroughly address the immediate forms of knowledge (i.e. your depiction of discrete experiences, memories, and personal context is very substantive), but do not fully address what comes thereafter. They get into what I would call mediate forms of knowledge (i.e. group contexts and the induction hierarchies) in a general sense, sort of branching out a bit past the immediate forms, but I think that there's much more to discuss (I also think that there's a fundamental question of when, even in a personal context, hierarchical inductions stretch too far to have any relevancy). This is also exactly what I have been pondering in terms of my epistemology as well, so, if you would like, we could explore that.

I look forward to hearing from you,
Bob
Philosophim December 10, 2021 at 12:47 #629749
I think you understand the theory Bob. Everything you said seemed to line up! Yes, I would be interested in your own explorations into epistemology. Feel free to direct where you would like to go next.
Bob Ross December 12, 2021 at 17:48 #630510
Hello @Philosophim,

I think that the first issue I am pondering is the fact that neither of our epistemologies, as discussed hitherto, really clarifies when a person "knows", "believes", or "thinks" something is true. Typically, as you are probably well aware, knowledge is considered with more intensity (thereby requiring a burden of proof), whereas belief and simply "thinking" something is true, although they can have evidence, do not require any burden of proof at all. Your epistemology, as I understand it, considers a person to "know" something if they can apply it to "reality" without contradiction (i.e. applicable knowledge)--which I think doesn't entirely work. For example, I could claim that I "know" that my cat is in the kitchen with no further evidence than simply stating that the claim doesn't contradict my "reality", or that of anyone else in the room. Hypothetically, let's say I (and all the other people) are far away from the kitchen and so we cannot verify the claim definitively: do I "know" that my cat is in the kitchen? If so, it seems to be an incredibly "weak" type of knowledge, to the point that it may be better considered a "belief" or maybe merely a theory (in the colloquial sense of the term, not the scientific one) (i.e. I "think").

Likewise, we could take this a step further: let's say that I, and everyone else in the room, get on a phone call with someone who is allegedly in that very kitchen that we don't have access to (in which I am claiming the cat to reside) and that person states (through the phone call) that the cat is not in the kitchen. Do I now "know" that the cat is not in the kitchen? Do I "know" that that person actually checked the kitchen and didn't just make it up? Do I "know" that that even was a person I was talking to? I heard a voice, which I assigned to a familiar old friend of mine whom I trust, but I am extrapolating (inducing) that it really was that person and, thereafter, further inducing off of that induction that that person actually checked and, further, that they checked in an honest manner: it seems as though there is a hierarchy even within claims of knowledge themselves. I think that your hierarchy of inductions is a step in the right direction, but what is a justified claim of knowledge? I don't think it would be an irrational induction to induce that the person calling me is (1) the old, trustworthy friend I am assigning the voice to and (2) that they actually didn't discretely experience the cat being in the kitchen, but am I really justified? If so, is this form of "knowledge" just as "strong", so to speak, as claiming to "know" that the cat isn't in the room in which I am right now? Is it just as "strong" as claiming to "know" that the cat isn't in the room I was previously in, where I have good reason to believe the cat hadn't somehow traveled that far and snuck its way in? And do I really "know" the cat didn't find its way into the kitchen (which is quite a distance away from me, let's say in a different country or something)?

Another great example I have been pondering is this: do I "know" that a whale is the largest mammal on earth? I certainly haven't discretely experienced one, and I most certainly haven't measured all the animals, let alone any one of them, on this earth. So, how am I justified in claiming to "know" it? Sure, applying my belief that a whale is the largest mammal doesn't contradict my "reality", but does that really constitute "knowledge"? In reality, I simply searched for it and trusted the search results. This seems, at best, to be a much "weaker" form of knowledge (of some sort, I am not entirely sure).

I think after defining the personal context and even the general societal context of claims, as you did in your essays, and even after discussing hierarchical inductions, I am still left with quite a complexity of problems to resolve in terms of what is knowledge.

Bob
Philosophim December 13, 2021 at 13:05 #630845
Wonderful! We are about to get into part 4, induction hierarchies. I have never been able to discuss this aspect with someone seriously before, as no one has gotten to the point of mostly understanding the first three parts. While we discuss, recall that our methodology of distinctive knowledge, and of deductively applying it as applicable knowledge, still stands. Within part 4, I subdivided inductions into four parts, but I can absolutely see the need for additional sub-divisions, so feel free to point out any you see.

Quoting Bob Ross
For example, I could claim that I "know" that my cat is in the kitchen with no further evidence than simply stating that the claim doesn't contradict my or anyone else who is in the room's "reality".


Applicably knowing something depends on our context, and while context can also be chosen, the choice of context is limited by our distinctive knowledge. If, for example, I did not have the distinctive knowledge that my friend could lie to me, then I would know the cat was in the room. But, if I had the distinctive knowledge that my friend could lie to me, I could make an induction that it is possible that my friend could be lying to me. Because that is an option I have not tested in application and, due to my circumstance, cannot test even if I wanted to, I must make an induction.

Quoting Bob Ross
I think that your hierarchy of inductions is a step in the right direction, but what is a justified claim of knowledge?


When you can deduce nothing else within your context of distinctive knowledge. If you recall the sheep and goat issue, prior to separating the identities of a sheep and a goat, both could be called a "sheep". But once the two identities are formed, there is a greater burden on the person who is trying to applicably know whether that animal is either a sheep, or a goat.

Arguably, I think we applicably know few things. The greater your distinctive knowledge and more specific the context, the more difficult it becomes to applicably know something. At the same time, though, greater specificity also gives you greater assurance that what you do applicably know will allow greater precision in handling reality. It is easier for a person with a smaller imagination and vocabulary to know something. This reminds me of the concept of newspeak in 1984.

"In "The Principles of Newspeak", the appendix to the novel, Orwell explains that Newspeak follows most of the rules of English grammar, yet is a language characterised by a continually diminishing vocabulary; complete thoughts are reduced to simple terms of simplistic meaning."
- https://en.wikipedia.org/wiki/Newspeak

Orwell understood implicitly that the simpler and more general the language, the more you could get your populace to "applicably know" without question. If this state is "good" no matter what the state does, then questioning anything the state does is "evil". Simple terms make simple men. But simple terms also make efficient men. It is not that induction is wrong; it is that, incorrectly understood, it can be misused. I think a useful term for when we are discussing a situation in which a person has extremely limited distinctive knowledge is the "simpleton context". We can use this when there is a question of fundamentals.

I would argue the bulk of our decisions are made through intuitive inductions, and being able to categorize which ones are the most useful to us is one of the strengths of the theory. Now that we have a way to manage the cogency of inductions, let's go back to your cat in the kitchen example.

As a reminder, the hierarchy of inductions is as follows: probability, possibility, plausibility, and irrational. Each is formed based on how much of its underlying logic rests upon deductions versus other inductions. First, let's examine the most basic of inductions.

Quoting Bob Ross
Likewise, we could take this a step further: let's say that I, and everyone else in the room, get on a phone call with someone who is allegedly in that very kitchen that we don't have access to (in which I am claiming the cat to reside) and that person states (through the phone call) that the cat is not in the kitchen.


I'll just cover the first three questions. We will not use the simpleton context here. It is a useful context for addressing fundamentals, so if there are any questions, we can return to it at any time to find the underlying basis. We will be people who are normal seekers of knowledge.

Do I now "know" (applicably) that the cat is not in the kitchen?
No, because I know it is possible that my friend might lie, and I don't know if the person is telling the truth.

Do I "know" that that person actually checked the kitchen and didn't just make it up?
If it is possible that my friend could call me outside of the kitchen, and I have no way of verifying where he called from, then no.

Do I "know" that it even was a person I was talking to?
If I know it is possible that something else could mimic my friend's voice to the point I would be fooled, then no.

From this discussion, I think I've actually gleaned something new from my theory that I didn't explicitly realize before! If we have distinctive knowledge of something that is possible or probable, these act as potential issues we have to applicably test and eliminate before we can say we applicably know something. This is because possibilities and probabilities are based on prior applicable knowledge.

Let's change the cat situation to different hierarchies so you can see different outcomes. The person you're talking to is a trusted friend who rarely lies to you. It's possible they could, but it's improbable. There doesn't appear to be a tell in their voice that they are lying, so it is more cogent to look at the probability that they are lying. They rarely lie to you, and they wouldn't have an incentive to lie (that you know of), so you assume they probably aren't lying.

They tell you the cat is in the kitchen as you hear them pouring food into its bowl. You even hear a "meow" over the phone. You still don't know it, because you have distinctive knowledge of the fact that your friend could be lying this time, or playing a clever prank. You know that it is possible to get an electronic device that would mimic the sound of a cat. You know that it is possible for someone to pour something into a bowl that sounds like cat food, but that doesn't mean the cat is in the kitchen. But, again, it's improbable that your friend is lying to you. Probability is more cogent to make decisions off of than possibility. Therefore, you are more reasonable in assuming your friend is not lying to you, and in making the induction that the cat is in the kitchen.

Of course, you could be wrong. All inductions could be wrong. But it would still be less reasonable for you to believe the cat was not in the kitchen based on possibility, when you have probability that indicates the cat likely is.

Quoting Bob Ross
Another great example I have been pondering is this: do I "know" that a whale is the largest mammal on earth?


It depends on your context. If you are implicitly including, "out of all the mammals we have discovered so far," then yes. Or you could explicitly give that greater, more specific context and add that phrase into the sentence. Oftentimes we say things with implied contexts behind them, for efficiency. The danger of efficiency, of course, is that people can skip steps, overlook implicit claims, and take things literally when they were never intended to be.

When we also state, "out of all the mammals we have discovered so far," we are also implicitly noting it is "out of all the possible mammals we've discovered so far". We do not consider plausibilities. For example, I can imagine an animal bigger than a whale that stands on four feet and reaches its neck into the clouds. But we have never applicably known such a creature, so it is not an induction that can challenge the deduction we have made.

I feel there is a lot to cover and refine with inductions, so I look forward to your questions and critiques!
Bob Ross December 13, 2021 at 23:43 #631082
Hello @Philosophim,

I have never been able to discuss this aspect with someone seriously before, as no one has gotten to the point of mostly understanding the first three parts.


I am glad that we are able to agree and discuss further as well! Honestly, as our discussion has progressed, I have realized more and more that we hold incredibly similar epistemologies: I just didn't initially understand your epistemology correctly.

Applicably knowing something depends on our context, and while context can also be chosen, the choice of context is limited by our distinctive knowledge. If, for example, I did not have the distinctive knowledge that my friend could lie to me, then I would know the cat was in the room. But, if I had the distinctive knowledge that my friend could lie to me, I could make an induction that it is possible that my friend could be lying to me. Because that is an option I have not tested in application and, due to my circumstance, cannot test even if I wanted to, I must make an induction.


I think that this is fair enough: we are essentially deriving the most sure (or reasonable) knowledge we can about the situation and, in a sense, it is like a spectrum of knowledge instead of concrete, absolute knowledge. However, with that being said, I think that a relative, spectrum-like epistemology (which I think both our epistemologies could be characterized as) does not account for when we should simply suspend judgement. You see, if we are always simply determining which is more cogent, we are, thereby, never determining if the most cogent form is really cogent enough, within the context, to be worth even holding as knowledge in the first place.

Arguably, I think we applicably know few things. The greater your distinctive knowledge and more specific the context, the more difficult it becomes to applicably know something.


I completely agree. Furthermore, I also understand your reference to 1984 and how vocabulary greatly affects what one can or cannot know within their context because, I would say, their vocabulary greatly determines their context in the first place.

I think before I get into your response to the cat example, I need to elaborate on a couple of things first. Although I think that your hierarchical inductions are a good start, upon further reflection, I don't think they are quite adequate. Let me try to explain what I mean.

Firstly, let's take probabilistic inductions. Probability is not, in itself, necessarily an induction. It is just like all of mathematics: math is either deduced or, thereupon, induced. For example, in terms of math, imagine I am in an empty room where I only have the ability to count on my fingers and, let's say, I haven't experienced anything, in terms of numbers, that exceeded 10. Now, therefrom, I could count up to 10 on my fingers (I understand I am overly simplifying this, but bear with me). I could then believe that there is such a thing as "10 things" or "10 numbers" and apply that to reality without contradiction: this is a deduction. Thereafter, I could induce that I could, theoretically, add 10 fingers' worth of "things" 10 times and, therefore, have 100 things. Now, so far in this example, I have no discrete experiences of 100 things, but have determined that 100 "fingers" could exist. So logically, as of now, 100 only exists in terms of a mathematical induction, whereas 10 exists in terms of a deduction.

I would say the same thing is true for probability. Imagine I am in a room that is completely empty apart from me and a deck of 52 cards. Firstly, I can deductively know that there are 52 "things". Secondly, I could deductively know an idea of "randomness" and apply that without contradiction as well. Thirdly, I could deductively know that, at "random", my chance of choosing a king out of the deck is 4/52, and apply that without contradiction (I could, thereafter, play out this scenario ad infinitum, where I pick a card out of a "randomly" shuffled deck, and my results would slowly even out to 4/52). All of this, thus far, is deductive: created out of the application of beliefs towards reality in the same way as your sheep example. Now, where induction, I would say, actually comes into play, in terms of probability, is an extrapolation of that probabilistic application.
For example, let's take that previous 52-card scenario I outlined above: I could then, without ever discretely experiencing 100 cards, induce that the probability of picking 1 specific card out of 100 is 1/100. Moreover, I could extrapolate that deduced knowledge of 4/52 and utilize it to show whether something is "highly probable" or "highly improbable" or something in between: this would also be an induction. For example, if I have 3 cards, two of which are aces and one is a king, I could extrapolate that it is "highly probable" that I will randomly pick an ace out of the three because of my deduced knowledge that the probability of picking an Ace is 2/3 in this case. My point is that I view your "probabilistic inductions" as really being a point towards "mathematical inductions", which does not entirely engross probability. Your 52-card deck example in the essays is actually a deduction and not an induction.
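Bob's point that the 4/52 figure is deduced, while the "results would slowly even out to 4/52" claim is an application of it, can be sketched in a few lines of code. This is a minimal illustration (mine, not part of the essays), assuming nothing beyond Python's standard library; the names `ranks`, `deck`, and `king_frequency` are hypothetical:

```python
import random

# The deck as 52 rank labels: 4 copies of each of 13 ranks; "K" marks the kings.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
deck = [r for r in ranks for _ in range(4)]  # 52 cards, deduced structure

def king_frequency(trials, rng):
    """Draw one card at random `trials` times; return the fraction that were kings."""
    hits = sum(1 for _ in range(trials) if rng.choice(deck) == "K")
    return hits / trials

rng = random.Random(0)  # fixed seed so the run is repeatable
# The empirical rate settles near the deduced 4/52 ~ 0.0769 as trials grow.
print(king_frequency(100_000, rng), 4 / 52)
```

The 4/52 in the second printed value never came from running trials at all; only the first value, the extrapolation that repeated draws will approach it, is the inductive step.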

Secondly, I think that probabilistic inductions and plausible inductions are not always directly comparable. To be more specific, a probabilistic "fact" (whether deduced or induced) is comparable to plausible inductions and, in that sense, I think you are right to place the former above the latter; however, I do not think that "extended" probabilistic claims are comparable (always) to plausible inductions. For example, let's say that there is a tree near my house, which I can't see from where I am writing this, that I walk past quite frequently. Let's also say that I have three cards in front of me, two of which are aces. Now, I would say that the "fact" that the probability of me randomly picking an ace is 2/3 is "surer" (a more cogent form of knowledge) than any claim I could make in terms of an inapplicable plausibility induction towards the tree still being where it was last time I walked past it (let's assume I can't quickly go look right now). However, if I were to ask myself "are you surer that you will pick an ace out of these three cards or that the tree is still intact", or, as I think you would put it, "is it more cogent to claim I will pick an ace out of these three cards or that the tree is still intact", I am now extending my previous "fact" (2/3) into an "extended", contextual claim that weighs the "highly probableness" of picking an ace out of three cards against the plausibility of the tree still being intact. These are two, as they were stated in the previous sentence, completely incompatible types of claims and, therefore, one must be "converted" into the other for comparison. To claim something is probable is purely a mathematical game, whereas plausibility directly entails means of evidence other than math (I may have walked by the tree yesterday, there may have been no storms the previous night, and I may have other reasons to believe it "highly implausible" that someone planned a heist to remove the tree).
In this example, although I may colloquially ask myself "what are the odds that someone moved the tree", I can't actually convert the intactness of the tree into purely probability: it is plausibility. I think this shows that probability and plausibility, in terms of "extended" knowledge claims stemming from probability, are not completely hierarchical.

Thirdly, building off of the previous paragraph, even though they are not necessarily comparable in the sense of probability, they can be compared in terms of immediateness (or discrete experiential knowledge--applicable and distinctive knowledge): "probabilistic deductions" are "surer" (or more cogent) than "plausible inductions", but "probabilistic inductions" are not necessarily "surer" than "plausible inductions" (they are only necessarily surer if we are talking about the "fact" and not an "extension"). Let's take my previous example of the tree and the 3 cards, but let's say it's 1000 cards and one of them is an ace: I think I am "surer" that the tree is still there, although that is an argument made solely from an inapplicable plausible induction (as I haven't actually calculated the probability, nor have I, in this scenario, the ability to go discretely experience the tree), than that I will get an ace from those 1000 cards. However, I am always "surer" that the probability of getting an ace out of 1000 cards is 1/1000 (given there's only one) than of the intactness of the tree (again, assuming I can't discretely experience it as of now). Now I may have just misunderstood your fourth essay a bit, but I think that your essay directly implies that the hierarchy, based off of proximity toward deductions, is always in that order of cogency. However, I think that sometimes the "extension" of probability is actually less cogent than a plausible induction.

I was going to say much more, and elaborate much more, but this is becoming entirely too long. So I will leave you with my conclusion: the cogency (or "sureness", as I put it) of knowledge is not, at its most fundamental level, about which kind of induction the given claim stems from, but more about the degree of immediateness to the "I". You see, the probabilistic "fact" of picking an ace out of three cards (two of which are aces: 2/3) is "surer", or more cogent, because it is very immediate to the "I" (it is a deduction directly applied to discrete experiences). The probabilistic "extension" claim, built off of a mathematical deduction in this case (but it could be an induction if we greatly increased the numbers), that I am "surer" of getting an ace out of three cards (2/3) is actually a less cogent (or less "sure") claim than that the tree is intact because, in this example, the tree's intactness is more immediate than the result of the picked card being an ace. Sure, I know it is a 2/3 probability, but I could get the one card that isn't an ace, whereas, given that I walked past the tree yesterday (and couple that with, let's say, a relatively strong case for the tree being there--like there wasn't a tornado that rolled through the day before), the "sureness" is much greater; I have a lot of discrete experiences that lead me to conclude that it is "highly plausible" (note that "highly probable" would require a conversion I don't think possible in this case) that the tree is still there. Everything, the way I see it, is based off of immediateness, but it gets complicated really fast. Imagine that I didn't have an incredibly strong case for the tree still being there (like I walked past it three weeks ago and there was a strong storm that occurred two weeks ago), then it is entirely possible, given an incredible amount of analysis, that the "sureness" would reverse.
As you have elegantly pointed out in your epistemology, this is expected as it is all within context (and context, I would argue, is incredibly complicated and enormous).

I will leave it at that for now, as this is getting much longer than I expected, so, I apologize, I will address your response to the cat example once we hash this out first (as I think it is important).

Bob
Philosophim December 14, 2021 at 23:50 #631489
Great comments so far Bob! I'll dive in.

Quoting Bob Ross
Firstly, let's take probabilistic inductions. Probability is not, in itself, necessarily an induction.


I understand exactly what you are saying in this paragraph. I've deductively concluded that these inductions exist. Just as it is deductively concluded that there are 4 jacks in 52 playing cards.

Quoting Bob Ross
Now, where induction, I would say, actually comes into play, in terms of probability, is an extrapolation of that probabilistic application.


Quoting Bob Ross
For example, if I have 3 cards, two of which are aces and one is a king, I could extrapolate that it is "highly probable" that I will randomly pick an ace out of the three because of my deduced knowledge that the probability of picking an Ace is 2/3 in this case.


Exactly. That is the induction I am talking about. We can know an induction discretely, but we only know an induction's outcome when we apply it to reality.

Quoting Bob Ross
My point is that I view your "probabilistic inductions" as really being a point towards "mathematical inductions", which does not entirely engross probability.


There are likely degrees of probability we could break down. Intuitively, pulling a jack out of a deck of cards prescribes very real limits. However, if I note, "Jack has left their house at 9am for the last four days. I predict that today, Friday, they will probably do the same," I think there's an intuition that this is less probable, and more just possible.

Perhaps the key is the fact that we don't know what the denominator limit really is. The chance of a jack would be 4/52, while the chance of Jack leaving his house at 9 am is 4 out of...5? Does that even work? I have avoided these probabilities until now, as they are definitely murky for me.
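The "missing denominator" worry above has a conventional statistical treatment worth noting here: Laplace's rule of succession, which, after s successes in n observed trials, estimates the chance of another success as (s+1)/(n+2) rather than claiming an exact ratio. This sketch is my own illustration of that contrast, not anything from the essays; the function and variable names are hypothetical:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: estimated chance the next trial succeeds."""
    return Fraction(successes + 1, trials + 2)

# A fixed deck supplies its own denominator: the chance of a jack is exactly 4/52.
p_jack_card = Fraction(4, 52)   # reduces to 1/13 -- a deduced, concrete figure

# Jack left his house at 9am on 4 of the last 4 observed days. There is no fixed
# denominator for tomorrow, so we can only estimate, not deduce.
p_jack_leaves = rule_of_succession(4, 4)  # (4+1)/(4+2) = 5/6

print(p_jack_card, p_jack_leaves)
```

The contrast matches the intuition in the post: the card figure is fixed by the deck itself, while the figure for Jack is an estimate whose denominator is open-ended and which a fifth observation would revise.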

Quoting Bob Ross
Secondly, I think that probabilistic inductions and plausible inductions are not always directly comparable. To be more specific, a probabilistic "fact" (whether deduced or induced) is comparable to plausible inductions and, in that sense, I think you are right to place the former above the latter; however, I do not think that "extended" probabilistic claims are comparable (always) to plausible inductions.


Ah, I'm certain I cut this out of part four to whittle it down. A hierarchy of inductions only works when applying a particular set of distinctive knowledge to an applicable outcome. We compare the hierarchy within the deck of cards. We know the probability of pulling a jack, and we know it's possible we could pull a jack, but it is more cogent to conclude, from the probability, that we won't pull a jack.

The intactness of the tree would be evaluated separately, as the cards have nothing to do with the tree's outcome. So, for example, if the tree was of a healthy age, and in a place unlikely to be harmed or cut down, it is cogent to say that it will probably be there the next day. Is it plausible that someone chopped it down last night for a bet, or because they hated it? Sure. But I don't know if that's actually possible, so I would be more cogent in predicting the tree will still be there tomorrow with the applicable knowledge that I have.

Quoting Bob Ross
I was going to say much more, and elaborate much more, but this is becoming entirely too long. So I will leave you with my conclusion: the cogency (or "sureness", as I put it) of knowledge is not, at its most fundamental level, about which kind of induction the given claim stems from, but more about the degree of immediateness to the "I".


With the clarification I've made, do you think this still holds?

Quoting Bob Ross
Imagine that I didn't have an incredibly strong case for the tree still being there (like I walked past it three weeks ago and there was a strong storm that occurred two weeks ago), then it is entirely possible, given an incredible amount of analysis, that the "sureness" would reverse. As you have elegantly pointed out in your epistemology, this is expected as it is all within context (and context, I would argue, is incredibly complicated and enormous).


This ties into my "degrees of probability" that I mentioned earlier. In these cases, we don't have the denominator like in the "draw a jack" example. In fact, we just might not have enough applicable knowledge to make a decision based on probability. The more detailed our applicable knowledge of the situation, the more likely we are to craft a probability that seems more cogent. If we don't know the destructive level of the storm, perhaps we can't really make a reasonable induction. Knowing that we can't make a very good induction is also valuable at times.

My apologies if this is a little terse of me tonight. I will have more time later to dive into these if we need more detail; I just wanted to give you an answer without any more delay.
Bob Ross December 16, 2021 at 03:46 #631803
Hello @Philosophim,
I apologize, as things have been a bit busy for me, but, nevertheless, here's my response!

My apologies if this is a little terse of me tonight. I will have more time later to dive into these if we need more detail; I just wanted to give you an answer without any more delay.


Absolutely no problem! I really appreciated your responses, so take your time! I think your most recent response has clarified quite a bit for me!

I understand exactly what you are saying in this paragraph. I've deductively concluded that these inductions exist. Just as it is deductively concluded that there are 4 jacks in 52 playing cards.


There are likely degrees of probability we could break down. Intuitively, pulling a jack out of a deck of cards prescribes very real limits. However, if I note, "Jack has left their house at 9am for the last four days. I predict that today, Friday, they will probably do the same," I think there's an intuition that this is less probable, and more just possible.

Perhaps the key is the fact that we don't know what the denominator limit really is. The chance of a jack would be 4/52, while the chance of Jack leaving his house at 9 am is 4 out of...5? Does that even work? I have avoided these probabilities until now, as they are definitely murky for me.


Although I am glad that we agree about the deduction of probability inductions, I think that we are using "probability" in two different ways and, therefore, I think it is best if I define it, along with "plausibility", for clarification purposes (and you can determine if you agree with me or not). "Plausibility" is a spectrum of likelihoods, in a generic sense, where something is "plausible" if it meets certain criteria (which do not need to be derived solely from mathematics) and is "implausible" if it meets certain other criteria. In other words, something is "plausible" if it has enough evidence to be considered such, and something is "implausible" if it has enough evidence to be considered such. Now, since "plausibility" exists within a spectrum, it is up to the subject (and other subjects: societal contexts) to agree upon where to draw the line (the exact point at which anything thereafter is considered "plausible", or anything below a certain line is considered "implausible"). Most importantly, I would like to emphasize that "plausibility", although one of its forms of evidence can be mathematics, does not only encompass math. On the contrary, "probability" is a mathematically concrete likelihood: existing completely separate from any sort of spectrum. The only thing that subjects need to agree upon, in terms of "probability", is mathematics, whereas "plausibility" requires a much more generic subscription to a set (or range) of qualifying criteria on which the spectrum is built. For example, when I say "X is plausible", this only makes sense within context, where I must define (1) the set (range) of valid forms of evidence and (2) how much of it is required for X to be considered qualified under the term "plausible". However, if I say "X is probable", then I must (1) determine the denominator, (2) determine the numerator (possibilities), and (3) finally calculate the concrete likelihood.
When something is "plausible", it simply met the criteria the subject pre-defined, whereas saying there is a 1% chance of picking a particular card out of 100 cards is a concrete likelihood (I am not pre-defining any of it). Likewise, if I say that X is "plausible" because it is "probable", then I am stating that (1) mathematically concrete likelihoods are a valid form of evidence for "plausibilities" and (2) the mathematical concrete likelihood of X is enough for me to consider it "plausible" (that the "probability" was enough to shift the proposition of X past my pre-defined line of where things become "plausible").
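The distinction drawn above can be sketched in code: probability is fixed by arithmetic alone, while plausibility needs a pre-agreed set of evidence types and a pre-agreed cutoff. This is my own hypothetical illustration, not anything from the essays; the evidence labels, the weights, and the 0.5 cutoff are all assumptions a subject (or society) would have to settle on first:

```python
from fractions import Fraction

# "Probability": a concrete likelihood fixed by the math alone.
def probability(favorable, total):
    return Fraction(favorable, total)

# "Plausibility": a spectrum judged against a pre-defined cutoff. The weights
# and the 0.5 line are hypothetical, subject-chosen parameters.
def plausible(evidence, threshold=0.5):
    """`evidence` maps an evidence label to a weight in [0, 1]; average vs. cutoff."""
    score = sum(evidence.values()) / max(len(evidence), 1)
    return score >= threshold

print(probability(1, 100))  # a concrete 1/100 -- no line-drawing involved
print(plausible({"walked past it yesterday": 0.9, "no storms since": 0.8}))  # True
print(plausible({"vague rumor it was removed": 0.1}))  # False
```

Note that everything contestable lives inside `plausible`: two subjects who agree on the math can still disagree on the weights or the threshold, which is exactly the point being made about the two concepts.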

You see, when you say "while the chance of Jack leaving his house at 9 am is 4 out of...5?", I think you are conflating "probability" with "plausibility"--unless you can somehow mathematically determine the concrete likelihood of Jack leaving (I don't think you can, or at least not in a vast majority of cases). I think that we colloquially use "probable" and "plausible" interchangeably, but I would say they are different concepts in a formative sense. Now it is entirely possible, hypothetically speaking, that two subjects could determine that the only valid form of evidence is mathematically concrete likelihoods (or mathematically derived truths in a generic sense) and that, thereby, this is the only criterion by which something becomes worthy of the term "plausible" (and, thereby, anything not derived from math is "implausible"), but I would say that those two people won't "know" much about anything in their lives.

Ah, I'm certain I cut this out of part four to whittle it down. A hierarchy of inductions only works when applying a particular set of distinctive knowledge to an applicable outcome. We compare the hierarchy within the deck of cards. We know the probability of pulling a jack, and we know it's possible we could pull a jack, but it is more cogent to conclude, from the probability, that we won't pull a jack.

The intactness of the tree would be evaluated separately, as the cards have nothing to do with the tree's outcome. So, for example, if the tree was of a healthy age, and in a place unlikely to be harmed or cut down, it is cogent to say that it will probably be there the next day. Is it plausible that someone chopped it down last night for a bet, or because they hated it? Sure. But I don't know if that's actually possible, so I would be more cogent in predicting the tree will still be there tomorrow with the applicable knowledge that I have.


Ah, I see! That makes a lot more sense! I would agree in a sense, but also not in a sense: this seems to imply that we can't compare two separate claims of "knowledge" and determine which is more "sure"; however, I think that we definitely can in terms of immediateness. I think that you are right in that a "probability" claim, like all other mathematical inductions, is more cogent than simply stating "it is possible", but why is this? I think it is due to the unwavering inflexibility of numbers. In all my life, among my immediate forms of knowledge (my discrete experiences and memories), I have never come in contact with such a thing as a "flaky number", because number is ingrained fundamentally into the processes that make up my immediate forms of knowledge (i.e. my discrete experiences have an ingrained sense of plurality and, thereby, I do too). Therefore, any induction I make pertaining to math, since it is closer to my immediate forms of knowledge (in the sense that it is literally ingrained into them), assuming it is mathematically sound, is going to trump something less close to my immediate forms of knowledge (such as the possibility of something: "possibility" is just a way of saying "I have discretely experienced it before without strong correlation, therefore it could happen again", whereas a mathematical induction such as "multiplication will always work for two numbers regardless of their size" is really just a way of saying "I have discretely experienced this with strong correlation (so strong, in fact, that I haven't witnessed any contradicting evidence), therefore it will always happen again"). When I say "immediateness", I am not talking merely about physical locations but, rather, about what is most forthright in your experiences: the experience itself is the thing by which we derive all other things and, naturally, that which corresponds to it will be maintained over things that do not.

For example, the reason, I think, human opinions are wavering is due to me having experiences of people's opinions changing, whereas if I had always experienced (and everyone else always experienced) people's opinions unchanging (like gravity or 2+2 = 4), then I would logically characterize it, like gravity, as a concrete, strongly correlated experience held very close to me. Another example is germ theory: we say we "know" germs make us sick, and that is fine, but it is the correlation between the theory and our immediate forms of knowledge (discrete experiences and memories) that makes us "know" germ theory to be true. We could be completely wrong about germs, but it is undeniable that something makes us sick, and everything so far adds up to it being germs (it is strongly correlated) (why? because that is a part of our immediate knowledge--discrete experiences and memories).

With that in mind, let's take another look at the tree and cards example: which is more "sure"? I think that your epistemology is claiming that they must be separately, within their own contexts, evaluated to determine the most cogent form of induction to use within that particular context (separately), but, upon determining which is more cogent within that context, we cannot go any farther. On the contrary, I think that I am more "sure" of the deduction of 2/3 probability because it is tied to my immediate forms of knowledge (discrete experiences and memories). But I am more "sure" of the tree still being there (within that context) than that I am going to actually draw the ace, because I have more immediate knowledge (I saw it 2 hours ago, etc) of the tree that adds up to it still being there than of me actually getting an ace. Another way to think about it is: if my entire life (and everyone else testified to it in their lives as well), when presented with three cards (two of which are aces), I always randomly drew an ace--as in every time with no exceptions--then I would say the "sureness" reverses and my math must have been wrong somehow (maybe probability doesn't work after all? (: ). This is directly due to the fact that it would be no different than my immediate knowledge of gravity or mathematical truths (as in 2+2 = 4, or the extrapolation of such). Now, when I use this example, I am laying out a very radical example, and I understand that me picking an ace 10 times in a row does not constitute probability being broken; however, if everyone all attested to always experiencing themselves picking an ace every time, and that is how I grew up, then I see no difference between this and the reality of gravity.

I was going to say much more, and elaborate much more, but this is becoming entirely too long. So I will leave you with my conclusion: the cogency (or "sureness", as I put it) of knowledge is not, at its most fundamental level, about which kind of induction the given claim stems from, but more about the degree of immediateness to the "I". — Bob Ross


With the clarification I've made, do you think this still holds?


Sort of. I think that, although I would still hold the claim that it is based off of immediateness, I do see your point in terms of cogency within a particular scenario, evaluated separately from the others, and I think, in that sense, you are correct. However, I don't think we should have to limit our examinations to their specific contexts: I think it is a hierarchy of hierarchies. You are right about the first hierarchy: you can determine the cogency based off of possibility vs probability vs plausibility vs irrationality. However, we don't need to stop there: we can, thereafter, create a hierarchy of which contextual claims we are more "sure" of and which ones we are less "sure" of (it is like a hierarchy within a spectrum).

In these cases, we don't have the denominator like in the "draw a jack" example. In fact, we just might not have enough applicable knowledge to make a decision based on probability. The more detailed our applicable knowledge in the situation, the more likely we are to craft a probability that seems more cogent. If we don't know the destructive level of the storm, perhaps we can't really make a reasonable induction. Knowing that we can't make a very good induction is also valuable at times.


I think that in most cases we cannot create an actual probability of the situation: I think most of what people constitute as "knowledge" are plausibilities. On another note, I completely agree with you that there is a point at which we should suspend judgment: but what is that point? That is yet to be decided! I think we can probably cover that next if you'd like.

I look forward to your response,
Bob
Philosophim December 19, 2021 at 16:25 #632869
Great conversation so far Bob! First, I have had time to think about it, and yes, I believe without a denominator, one cannot have probability, only possibilities that have occurred multiple times. I think this ties in with your idea of "immediateness" when considering cogency, and I think you have something that could be included in the cogency calculus.

I believe immediateness is a property of "possibility". Another is "repetition". A possibility that has been repeated many times, as well as its immediateness in memory, would intuitively seem more cogent than something that has occurred less often and farther in the past. Can we make that intuitiveness reasonable?

In terms of repetition, I suppose repetition means that you have applicably known an identity without distinctive alteration or amending multiple times. Something that has stood applicably for several repeats would seem to affirm its use in reality without contradiction.

Immediateness also ties into this logic. Over time, there is ample opportunity for our distinctive knowledge to be expanded and amended. Whenever our distinctive knowledge changes, so does our context. What we applicably knew in our old context, may not apply in our current context.

I think immediateness is a keen insight Bob, great contribution!

Quoting Bob Ross
"Plausibility" is a spectrum of likelihoods, in a generic sense, where something is "Plausible" if it meets certain criteria (which do not need to be derived solely from mathematics) and is "Implausible" if it meets certain other criteria.


I'll clarify plausibility. A plausibility has no consideration of likelihood, or probability. Plausibility is simply distinctive knowledge that has not been applicably tested yet. We can create plausibilities that can be applicably tested, and plausibilities that are currently impossible to applicably test. For example, I can state, "I think it's plausible that a magical horse with a horn on its head exists somewhere in the world." I can then explore the world, and discover that no, magical horses with horns on their heads do not exist.

I could add things like, "Maybe we can't find them because they use their magic to become completely undetectable." Now this has become an inapplicable plausibility. We cannot apply it to reality, because we have set it up to be so. Fortunately, a person can dismiss such plausibilities by saying, "Since we cannot applicably know such a creature, I believe it is not possible that they exist." That person holds a higher tier of induction, and the plausibility can be dismissed as being less cogent.

With this explored, we can identify probability as an applicable deduction that concludes both a numerator and denominator, or ratio. Possibility is a record of applicable deduction at least once: a numerator with an unknown denominator. Repetition and immediateness intuitively add to its cogency. Finally, plausibilities are distinctive knowledge that has not had a proper attempt at applicable deduction.

Quoting Bob Ross
Another way to think about it is: if my entire life (and everyone else testified to it in their lives as well), when presented with three cards (two of which are aces), I always randomly drew an ace--as in every time with no exceptions--then I would say the "sureness" reverses and my math must have been wrong somehow (maybe probability doesn't work after all?


I pulled this one quote out of your exceptional paragraph, because I think it allows an anchor to explore all of your propositions. Probability is based off of applicable knowledge. When I say there is a 4 out of 52 chance of drawing a jack, part of the applicable knowledge is that the deck has been shuffled in a way that cannot be determined. The reality is, we applicably know the deck is deterministic once the shuffle is finished. If we turned the deck around, we could see what the card order is. The probability forms from our known applicable limits, or when we cannot see the cards.

In the case that someone pulled an ace every time someone shuffled the cards, there is the implicit addition of these limits. For example: "The person shuffling doesn't know the order of the cards." "The person shuffling doesn't try to rig the cards a particular way." "There is no applicable knowledge that would imply an ace would be more likely to be picked than any other card."

In the instance in which we have a situation where probability has these underlying reasons, but extremely unlikely occurrences happen, like an ace being drawn every time someone picks from a shuffled deck, we have applicable knowledge challenging our probable induction. Applicable knowledge always trumps inductions, so at that point we need to re-examine the underlying reasons for our probability, and determine whether they still hold.
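That intuition can be put in rough numbers. A minimal sketch (the fair-shuffle model and the three-card setup with two aces come from Bob's earlier example; the function name and the choice of trial counts are mine, for illustration only):

```python
# Under the fair-shuffle model, each draw from three cards (two of which
# are aces) yields an ace with probability 2/3, independently of earlier
# draws. The chance of drawing an ace on EVERY one of n trials shrinks fast.

def prob_all_aces(n, p_ace=2/3):
    """Probability of n consecutive aces under the assumed fair model."""
    return p_ace ** n

for n in (1, 10, 50, 100):
    print(n, prob_all_aces(n))

# By n = 100 the fair-model probability is below 1e-17: at some point the
# observed run is better explained by re-examining the model's assumptions
# (a rigged shuffle, marked cards, changed physics) than by chance.
```

This is only arithmetic on the assumed model, of course; the philosophical point stands that it is the applicable knowledge behind the model, not the numbers themselves, that gets re-examined.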

We could do several tests to ascertain that we have a situation in which our probability holds. Perhaps pass the deck to be shuffled to several different people who are blindfolded. Test the cards for strange substances. Essentially ensure that the deck, the shuffle, and the pick all actually have the context for the probability to be a sound induction.

It could be that physics changes one day, and it turns out that an ace will always end up at the top of any shuffled deck. At that point, we have to retest our underlying applicable knowledge, and discover that some of it no longer holds. We would have to make new conclusions. Fortunately, what would not break is how we applicably deduce, and the hierarchy of inductions.

Quoting Bob Ross
However, I don't think we should have to limit our examinations to their specific contexts: I think it is a hierarchy of hierarchies. You are right about the first hierarchy: you can determine the cogency based off of possibility vs probability vs plausibility vs irrationality. However, we don't need to stop there: we can, thereafter, create a hierarchy of which contextual claims we are more "sure" of and which ones we are less "sure" of (it is like a hierarchy within a spectrum).


An excellent point, and one that I think applies to another aspect: context. Within the context of a person, I believe we have a hierarchy of inductions. But what about when two contexts collide? Can we determine a hierarchy of contexts? I believe I've mentioned that we cannot force a person to use a different context. Essentially, contexts are used for what we want out of our reality. Of course, this can apply to inductions as well. Despite a person's choice, it does not negate that certain inductions are more rational. I would argue the same applies to contexts.

This would be difficult to measure, but I believe one can determine if a context is "better" than another based on an evaluation of a few factors.

1. Resource expenditure
2. Risk of harm within the context
3. Degree of harm within the context

1. Resource expenditures are the cost of effort in holding a specific context. This can be time; societal, mental, or physical effort; and much more. As we've discussed, the more specific and detailed one's distinctive knowledge, the more resource expenditure it will require to applicably know within that distinctive context.

2. The risk of harm would be the likelihood that one would be incorrect, and the consequences of being incorrect. If my distinctive context is very simple, I may come to harm more often in reality. For example, let's say there are two types of green round fruits that grow in an area. One is nutritious; the other can be eaten, but will make you sick. If you have a distinctive context that cannot distinguish between the two fruits, you are more likely to come to harm. If you have a more specific distinctive context that enables you to identify which fruit is good, and which is not, you decrease the likelihood you will come to harm.

3. The degree of harm would be the cost of making an incorrect decision based on the context one holds. If, for example, I have a very simple distinctive context that means I fail at making good decisions in a card game with friends, the degree of harm is very low. No money is lost, and we're there to have a good time. If, however, I'm playing high-stakes poker for a million-dollar pot, the opportunity cost of losing is staggering. A context that increases the likelihood I will lose should be thrown out in favor of a context that gives a higher chance of winning. Or, back to fruit: perhaps one of the green round fruits simply doesn't taste as good as the other. The degree of harm is lower, and may not be enough for you to expend extra resources in identifying the two fruits as having separate identities.

I believe this could all be evaluated mathematically. Perhaps it would not be so useful to most people, but it could be very important in terms of AI, large businesses, or incredibly major and important decisions. As such, this begins to seep out of philosophy and into math and science, which, if the theory is sound, would be the next step.
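The three factors above can indeed be combined into a simple expected-cost comparison. A minimal sketch, where the linear combination, the function name, and all the numbers are illustrative assumptions rather than part of the theory:

```python
def context_cost(resource_cost, p_error, harm_of_error):
    """Expected cost of holding a context: upkeep (factor 1) plus
    risk of being incorrect (factor 2) times consequence (factor 3)."""
    return resource_cost + p_error * harm_of_error

# Fruit example: a coarse context is cheap to hold but misidentifies the
# sickening fruit half the time; a finer context costs more effort but
# almost never errs.
coarse = context_cost(resource_cost=1.0, p_error=0.5, harm_of_error=20.0)
fine   = context_cost(resource_cost=4.0, p_error=0.02, harm_of_error=20.0)
assert fine < coarse  # the finer context is "better" when harm is high

# If the harm is merely taste (a low degree of harm), the coarse
# context wins and the extra resource expenditure is not worth it:
coarse_mild = context_cost(1.0, 0.5, 2.0)
fine_mild   = context_cost(4.0, 0.02, 2.0)
assert coarse_mild < fine_mild
```

The sketch just makes the trade-off explicit: a more detailed distinctive context pays for itself only when the degree of harm it avoids exceeds its extra resource expenditure.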

Really great points again Bob! Holidays are on the horizon, so there may be a lull between writings this week, but should resume after Christmas. I hope you have a nice holiday season yourself!
Bob Ross December 26, 2021 at 01:10 #635002
Hello @Philosophim,

First and foremost, Merry Christmas! I hope you have (had) a wonderful holiday! I apologize as I haven't had the time to answer you swiftly: I wanted to make sure I responded in a substantive manner.

There is a lot of what you said that I wholeheartedly agree with, so I will merely comment on some things to spark further conversation.

I believe immediateness is a property of "possibility". Another is "repetition". A possibility that has been repeated many times, as well as its immediateness in memory, would intuitively seem more cogent than something that has occurred less often and farther in the past. Can we make that intuitiveness reasonable?


I think you are right in saying immediateness is a property of possibility, and therefrom, also repetition. Moreover, I would say that immediateness, in a general sense, is "reasonableness". What we use to reason, at a fundamental level, is our experiences and memories, and we weigh them. I think of it, in part, like Hume's problem of induction: we have an ingrained habit of weighing our current experiences and our memories to determine what we hold as "reality", just like we have an ingrained sense of the future resembling the past. I don't think we can escape it at a fundamental level. For example, imagine all of your memories are of a life that you currently find yourself not in: the job, family, intimate lover, hobbies, etc. within your memories directly contradict your current experiences of life (for instance, all of the pictures you are currently looking at explicitly depict a family that contradicts the family you remember: they don't even look similar at all, they have different names, and they aren't even the same quantity of loved ones you remember). In the heat of the moment, the more persistently your experiences continue to contradict your memories, the more likely you are to assert the former over the latter. But, on the contrary, if you only experience this alternate life for, let's say, 3 minutes and are then "sucked back into" the other one, which aligns with your memories, then you are very likely to assert that your memories were true and you must have been hallucinating. However, 3 years of experiencing that which contradicts your memories will most certainly revoke any notion that your original memories are useful, lest you live in a perpetual insanity. That would be my main point: it is not really about what is "true", but what is "useful" (or relevant). Even if your original memories, in this case, are "true", they definitely aren't relevant within your current context.
This is what I mean by "weighing them", and we don't just innately weigh one over the other but, rather, we also compare memories to other memories. Although I am not entirely sure, I think that we innately compare memories to other memories in terms of two things: (1) quantity of conflicting or aligning memories and (2) current experience. However, upon further reflection, it actually seems that we are merely comparing #1, as #2 is actually just #1: our "current" experience is in the past. By the time the subject has determined that they have had an experience (i.e. they have reasoned their way into a conclusory thought amongst a string of preceding thoughts that they are, thereby, convinced of) they are contemplating something that is in the past (no matter how short a duration of time from when it occurred). Another way of putting it is: once I've realized that the color of these characters, which I am typing currently, is black, it is in the past. By the time I can answer the question of "Is my current experience I am having in the present?", I am contemplating a very near memory. My "current" mode of existence is simply that which is the most recent of past experiences: interpretations are not live in a literal sense, but only in a contextual sense (if "present" experience is going to mean anything, it is going to relate to the most recent past experience, number 1 in the queue). The reason I bring this up is because when we compare our "current" experience to past experiences, we are necessarily comparing a past experience to a past experience, but, most notably, one is more immediate than the other: I surely can say that what is most recent in the queue of past experiences, which necessarily encompasses "life" in general (knowledge)(discrete experiences, applicable knowledge, and discrete knowledge), is more "sure" than any of the past experiences that reside before it in the queue of memories. 
However, just because I am more "sure" of it doesn't make it more trustworthy in an objective sense: it becomes more trustworthy the more it aligns with the ever prepending additions of new experiences. For example, if I have, hypothetically speaking, 200 memories and the oldest 199 contradict the newest 1, then the determining factor is necessarily, as an ingrained function of humanity, how consistently each of those two contradicting subcategories compares to the continual prepending of new experiences (assuming 1 is considered the "current" experience and 2 is the most recent experience after 1 and so forth). But, initially, since the quantity of past experiences is overwhelmingly aligned and only contradicted by the most recent one, I would assert the position that my past 199 experiences are much more "true" (cogent). However, as I continue to experience, at a total of 500 experiences, if the past 300 most recent experiences align with that one experience that contradicted the 199, then the tide has probably turned. However, if I, on the next experience after that one contradicting experience, start experiencing many things that align with those 199, then I would presume that that one was wrong (furthermore, when I previously stated I would initially claim the 199 to be more “true” than the 1, this doesn’t actually happen until I experience something else, where that one contradictory experience is no longer the most recent, and the now newest experience is what I innately compare to the 1 past contradictory one and the other 199). Now, you can probably see that this is completely contextual as well: there's a lot of factors that go into determining which is more cogent. However, the "current" experiences are always a more "sure" fact and, therefore, the more recent the more "sure". 
For example, if I have 200 past experiences and the very next 10 are hallucinations (thereby causing a dilemma between two "realities", so to speak), I will only be able to say the original, older 200 past experiences were the valid ones if I resume experiencing in a way that aligns with those experiences and contradicts those 10 hallucinated experiences. If I started hallucinating right now (and, although we know it in hindsight, let's say I don't), then I will never be able to realize that these are really false representations until I start having "normal" experiences again. Even if I have memories of my "normal experiences" that contradict my "current" hallucinated ones, I won't truly deem it so (solidify my certainty that it really was hallucinated) until the hallucinated chain of experiences is broken by ones that align with my past experiences of "normal experiences". Now, I may have my doubts, especially if I have a ton of vivid memories of "normal experiences" while I am still hallucinating, but the longer it goes on, the more it seems as though my "normal experiences" were the hallucinated ones while the hallucinated ones are the "normal experiences". I'm not saying that, in this situation, I would necessarily be correct in inverting the terms there, but it seems as though only what is relevant to the subject is meaningful: even if it is the case that my memories of "normal experiences" are in actuality normal experiences, if I never experience such "normal experiences" again, then, within context, I would be obliged to refurbish my diction to something that is more relevant to my newly acquired hallucinated situation. Just food for thought.
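Bob's weighing of old memories against newer contradicting experiences can be sketched as a running tally over the queue of experiences. This is only a toy model: the recency weighting, the decay constant, and the function name are all my assumptions, not part of either poster's theory.

```python
def weigh_realities(experiences, decay=0.99):
    """Score two conflicting "realities" A and B from a queue of
    experiences (most recent last), weighting recent ones more heavily,
    and return the reality the subject would currently assert."""
    score = {"A": 0.0, "B": 0.0}
    n = len(experiences)
    for i, reality in enumerate(experiences):
        # more recent experiences (higher i) receive a larger weight
        score[reality] += decay ** (n - 1 - i)
    return max(score, key=score.get)

# 199 experiences of reality A, then 1 contradicting experience of B:
# the overwhelming aligned quantity still wins.
assert weigh_realities(["A"] * 199 + ["B"]) == "A"
# ...but after 300 further B-experiences, the tide turns.
assert weigh_realities(["A"] * 199 + ["B"] * 301) == "B"
```

The model reproduces the 199-versus-1 intuition above: quantity dominates at first, yet a long enough run of contradicting experiences eventually outweighs the older memories, exactly because immediateness is given more weight.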

I'll clarify plausibility. A plausibility has no consideration of likelihood, or probability. Plausibility is simply distinctive knowledge that has not been applicably tested yet. We can create plausibilities that can be applicably tested, and plausibilities that are currently impossible to applicably test. For example, I can state, "I think its plausible that a magical horse with a horn on its head exists somewhere in the world." I can then explore the world, and discover that no, magical horses with horns on their head do not exist.

I could add things like, "Maybe we can't find them because they use their magic to become completely undetectable." Now this has become an inapplicable plausibility. We cannot apply it to reality, because we have set it up to be so. Fortunately, a person can ignore such plausibilities as cogent by saying, "Since we cannot applicably know such a creature, I believe it is not possible that they exist." That person has a higher tier of induction, and the plausibility can be dismissed as being less cogent.


Although I was incorrect in saying plausibility is likelihood, I still have a bit of a quarrel with this part: I don't think that all inapplicable plausibilities are as invalid as you say. Take that tree example from a couple of posts ago: we may never be able to applicably test to see if the tree is there, but I can rationally hold that it is highly plausible that it is. The validity of a plausibility claim is not about whether it is directly applicable to reality or not; it is about (1) how well it aligns with our immediate knowledge (our discrete experiences, memories, discrete knowledge, and applicable knowledge) and (2) its relevancy to the subject. For this reason, I don't think the claim that unicorns exist can be effectively negated by claiming that it is not possible that they exist. A winged horse-like creature with a horn in the middle of its skull is possible, in that it doesn't defy any underlying physics or fundamental principles, and, therefore, it is entirely possible that there is a unicorn out there somewhere in the universe (unless, as in your second example, it has a magical power that causes it to be undetectable—however, the person could claim that it is a natural cloaking instead of a supernatural, magical one: just like how we can't see the tiny bacteria in the air, maybe the unicorn is super small). For me, in the case of a unicorn, it isn't a lack of possibility that makes me not believe that they exist; it is (1) their utter irrelevancy to the subject and (2) the complete lack of positive evidence for them. I am a firm believer in defaulting to not believing something until it is proven to be true, and so, naturally, I don't believe unicorns exist until we have evidence for them (I don't think possibility is strong evidence for virtually anything—it is more of just a starting point).
This goes hand-in-hand with my point pertaining to plausibility: the lack of positive evidence for a unicorn’s existence goes hand-in-hand, directly, with our immediate forms of knowledge. If nobody has any immediate forms of knowledge pertaining to unicorns (discrete experiences, applicable knowledge, and discrete knowledge), then, for me, it doesn’t exist—not because it actually doesn’t exist (in the case that it is not a possibility), but because it has no relevancy to me or anyone else (anything that we could base off of unicorns would be completely abstract—a combination of experiences and absences/negations of what has been experienced in ways that produce something that the subject hasn’t actually ever experienced before). Now, I think this gets a bit tricky because someone could claim that their belief in a unicorn existing makes them happier and, thereby, it is relevant to them. I think this becomes a contextual difference, because, although I would tell them “you do you”, I would most certainly claim that they don’t “know” unicorns exist (and, in this case, they may agree with me on that). You see, this gets at what it means to be able to “applicably know” something: everything a subject utilizes is applicable in one way or another. If the person tells me that “I don’t know if unicorns exist, but I believe that they do because it makes me happy”, they are applying their belief to the world without contradiction with respect to their happiness: who am I to tell them to stop being happy? However, I would say that they don’t “know” it (and they agreed with me on that in this case), so applying a belief to reality is not necessarily a form of knowledge (to me, at least). But in a weird way, it actually is, because it depends on what they are claiming to know. 
In my previous example, they aren't claiming to "know" unicorns exist, but they are implicitly claiming to "know" that believing in it makes them happier, and I think that is a perfectly valid application of belief that doesn't contradict reality (it just doesn't pertain to whether the unicorn actually exists or not). Now, if I were to notice some toxic habits brewing from their belief in unicorns, then I could say that they are holding a contradiction, because the whole point was to be happier and toxic habits don't make you happier (so basically I would have to prove that they are not able to apply their "belief in unicorns = happier" without contradiction). Just food for thought (:

In the case that someone pulled an ace every time someone shuffled the cards, there is the implicit addition of these limits. For example: "The person shuffling doesn't know the order of the cards." "The person shuffling doesn't try to rig the cards a particular way." "There is no applicable knowledge that would imply an ace would be more likely to be picked than any other card."

In the instance in which we have a situation where probability has these underlying reasons, but extremely unlikely occurrences happen, like an ace being drawn every time someone picks from a shuffled deck, we have applicable knowledge challenging our probable induction. Applicable knowledge always trumps inductions, so at that point we need to re-examine the underlying reasons for our probability, and determine whether they still hold.


I completely agree with you here: excellent assessment! My main point was just that it is based off of one’s experiences and memories: that is it. If we radically change the perspective on an idea that we hold as malleable (such as an “opinion”), such that it is as concrete as ever in our experiences and memories, then we are completely justified in equating it with what we currently deem to be concrete (such as gravity).

I believe I've mentioned that we cannot force a person to use a different context. Essentially contexts are used for what we want out of our reality. Of course, this can apply to inductions as well. Despite a person's choice, it does not negate that certain inductions are more rational. I would argue the same applies to contexts.


I would, personally, rephrase “Despite a person’s choice, it does not negate that certain inductions are more rational” to “Despite a person’s choice, it does not negate that certain inductions are more rational within a fundamentally shared subjective experience”. I would be hesitant to state that one induction is actually absolutely better than another due to the fact that they only seem that way because we share enough common ground with respect to our most fundamental subjective experiences. One day, there could be a being that experiences with no commonalities with me or you, a being that navigates in a whole different relativity (different scopes/contexts) than us and I wouldn’t have the authority to say they were wrong—only that they are wrong within my context as their context (given it shares nothing with me) is irrelevant to my subjective experience.

This would be difficult to measure, but I believe one can determine if a context is "better" than another based on an evaluation of a few factors.


I love your three evaluative principles for determining which context is "better"! However, with that being said, I think that your determinations are relative to subjects that share fundamental contexts. For example, your #3 (degree of harm) principle doesn't really address two ideas: (1) the subject may not share your belief that one ought to strive to minimize the degree of harm, and (2) the subject may not care about the degree of harm their actions inflict on other subjects (i.e. psychopaths). To put it bluntly, I think that humans become cognitively convinced of something (via rudimentary reason) and it gets implemented if enough people (with the power in society—or the ability to seize enough power) are also convinced of it (and I am using "power" in an incredibly generic sense—a Foucault kind of complexity, not just brute force or guns or something). That's why society is a wave, historically speaking, and 100 generations later we condemn our predecessors for not being like us (i.e. for the horrific things they did), but why would they be like us? We do not share as much in common contextually with them as we do with the vast majority of people living in the present with us (or people that lived closer to our generation). I think that a lot of the things that they did 200 years ago (or really pick any time frame) were horrendous: but were they objectively wrong? I think Nietzsche put it best: "there are no moral phenomena, just moral interpretations of phenomena". I am cognitively convinced that slavery is abhorrent; does that make it objectively wrong? The moral wrongness being derived from cognition (and not any objective attribute of the universe) doesn't make slavery any more "right", does it? I think not.
The reason we don't have slavery anymore (or at least at such a large scale as in previous times) is because enough people who held sufficient power (or could seize it) were also convinced that it is abhorrent and used that power to make a change (albeit ever so slowly). My point is that, even though I agree with you on your three points, you won't necessarily be able to convince a true psychopath to care about his/her neighbors, and their actions are only "wrong" relative to the subject uttering it. We have enough people that want to prevent psychopaths from doing "horrible" things (a vast majority of people can feel empathy, which is a vital factor) and, therefore, psychopaths get locked up. I am just trying to convey that everything is within a context (and I think you agree with me on this, but we haven't gone this deep yet, so I am curious as to what you think). It is kind of like the blue glasses thought experiment: if we were all born with blue glasses ingrained into our eyeballs, then the "color spectrum" would consist of different shades of blue, and that would be "right" relative to our subjective experience. However, if, one day, someone was born with eyes like we currently have, absent the blue glasses, then their color spectrum would be "right" for them, while our blue-shaded color spectrum would be "right" for us.
Sadly, this is where “survival of the fittest” (sort of) comes into play: if there is a conflict where one subjective experience of the color spectrum needs to be deemed the “right” one, then the one held by the most that hold the “power” ultimately will determine their conclusion to be the “truth”: that is why we call people who see green and red flip-flopped “color blind”, when, in reality, we have no clue who is actually absolutely “right” (and I would say we can’t know, and that is why each are technically “blind” with respect to the other—we just only strictly call them “color blind” because the vast majority ended up determining their “truth” to be the truth). When we say “you can’t see red and green correctly”, this is really just subjectively contingent on how we see color.

I think that my main point here is that absolutely determining which context is better is just as fallacious, in my mind, as telling ourselves that we must determine whether our “hand” exists as we perceive it or whether it is just mere atoms (or protons, or quarks) and that we must choose one: they don’t contradict each other, nor do fundamental contexts. Yes we could try to rationalize who has a better context (and I think your three points on this are splendid!), but that also requires some common ground that must be agreed upon and that means that, in some unfortunate cases, it really becomes a “does this affect me where I need to take action to prevent their context?” (and “do I have enough power to do anything about it?” or “can I assemble enough power, by the use of other subjects that agree with me, to object to this particular person’s context”).

I look forward to your response!
Bob
Philosophim December 31, 2021 at 14:47 #637225
Hello Bob! I'm back from vacation. I hope the holidays found you well.

Your immediateness section is spot on! Our chain of "trusting" memories is the evaluation of possibilities and plausible beliefs. Having a memory of something doesn't necessarily mean that memory is of something we applicably knew. Many times, it's plausible beliefs that have not been applicably tested. While I agree that immediateness is an evaluative tool of possibilities (that which has been applicably known at least once), an old possibility is still more cogent than a newer plausibility.

Plausibility does not use immediateness for evaluation, because immediateness is based on the time from which the applicable knowledge was first gained. Something plausible has never been applicably known, so there is no time from which we can state it is relevant.

Quoting Bob Ross
Moreover, I would say that immediateness, in a general sense, is "reasonableness".


The reasonableness is because it is something we have applicably known, and recently applicably known. I say this, because it is easy to confuse plausibilities and possibilities together. Especially when examining the string of chained memories, it is important to realize which are plausibilities, and which are possibilities. If you have a base possibility that chains into a plausibility, you might believe the end result is something possible, when it is merely plausible.

So taking your example of a person who has lived with different memories (a fantastic example), we can detail it to understand why immediateness is important. It is not that the memories are old. It is that what was once possible is now no longer possible when you apply your distinctive knowledge to your current situation.

We don't even have to imagine the fantastical to evaluate this. We can look at science. At one time, what was determined as physics is different than what scientists have discovered about physics today. We can look back into the past, and see that many experiments revealed what was possible, while many theories, or plausibilities were floating around intellectual circles, like string theory.

However, as plausibilities are applied to reality, the rejects are thrown away, and the accepted become possibilities. Sometimes these possibilities require us to work back up the chain of our previous possibilities, and evaluate them with our new context. Sometimes, this revokes what was previously possible, or it could be said, forces us to switch context. That which was once known within a previous context of time and space can no longer be known within this context.

With this clarified, this will allow me to address your second part about plausibility.

Quoting Bob Ross
Take that tree example from a couple of posts ago: we may never be able to applicably test to see if the tree is there, but I can rationally hold that it is highly plausible that it is.


Is it possible that the tree is not there anymore, or is it plausible? If you applicably know that trees can cease to be, then you know it is possible that a tree can cease to be. It is plausible that the tree no longer exists, but this plausibility is based on a possibility. The devil is in the details, and the devil understands that the best way to convince someone of a lie is to mix in a little truth.

The reality is, this is a plausibility based off of a possibility. Intuitively, this is more reasonable than a plausibility based off of a plausibility. For example, it's plausible that trees have gained immortality, therefore the tree is still there. This intuitively seems less cogent, and I believe the reason why is because of the chain of comparative logic that it's built off of.

But the end claim, that one particular tree is standing, vs not still standing, is a plausibility. You can rationally hold that it is plausible that it is still standing, but how do we determine if one plausibility is more rational than another? How do we determine if one possibility, or even one's applicable knowledge is more cogent than another? I believe it is by looking at the logic chain that the plausibility is linked from.

Quoting Bob Ross
The validity of a plausibility claim is not about if it is directly applicable to reality or not, it is about (1) how well it aligns with our immediate knowledge (our discrete experiences, memories, discrete knowledge, and applicable knowledge) and (2) its relevancy to the subject. For this reason, I don't think the claim that unicorns exist can be effectively negated by claiming that it is not possible that they exist.


I think the comparative chains of logic describes how (1) it aligns with our immediate knowledge and inductive hierarchies. I believe (2) relevancy to the subject can be seen as making our distinctive knowledge more accurate.

Going to your unicorn example, you may say it's possible for an animal to have a horn, possible for an animal to have wings, therefore it is plausible that a unicorn exists. But someone might come along with a little more detail and state, while it's possible that animals can have horns on their head, so far, no one has discovered that it's possible for a horse to. Therefore, it's only plausible that a horse would have wings or a horn, therefore it is only plausible that a unicorn exists. In this case, our more detailed context allows us to establish that a unicorn is a concluded plausibility, based off of two plausibilities within this more specific context.

Logically, what is plausible is not yet possible. Therefore I can counter by stating, "It is not possible for a horse to have wings or horns grow from its head. Therefore it is not possible that a unicorn exists in the world."

Quoting Bob Ross
I am a firm believer in defaulting to not believing something until it is proven to be true, and so, naturally, I don’t believe unicorns exist until we have evidence for them


I think this fits with your intuition then. What is plausible is something that has no applicable knowledge. It is more rational to believe something which has had applicable knowledge, the possible, over what has not, the plausible.

Quoting Bob Ross
Now, I think this gets a bit tricky because someone could claim that their belief in a unicorn existing makes them happier and, thereby, it is relevant to them.


Hopefully the above points have shown why a belief in their existence, based on their happiness of having that belief, does not negate the hierarchy of deductive application and induction. Recall that to applicably know something, they must have a definition, and must show that definition can exist in the world without contradiction. If they give essential properties, such as a horse with a horn from its head and wings, they must find such a creature to say they have applicable knowledge of it.

Insisting it exists without applying that belief to reality, is simply the belief in a plausibility. Happiness may be a justification for why they believe that plausibility, but it is never applicable knowledge.
Happiness of the self does not fulfill the discovery of the essential properties of a horn and wings on a horse in the world.

Quoting Bob Ross
I would, personally, rephrase “Despite a person’s choice, it does not negate that certain inductions are more rational” to “Despite a person’s choice, it does not negate that certain inductions are more rational within a fundamentally shared subjective experience”.


I agree with the spirit of this, but want to be specific on the chain comparison within a context. What is applicable, and the hierarchy of inductions, never changes. What one deduces or induces is based upon the context one is in. Something that is possible in a specific context may only be plausible in a more detailed one, as noted earlier. But what is possible in that context is always more rational than what is plausible in that context.

Quoting Bob Ross
For example, your #3 (degree of harm) principle doesn’t really address two ideas: (1) the subject may not share your belief that one ought to strive to minimize the degree of harm and (2) the subject may not care about the degree of harm pertaining to other subjects due to their actions (i.e. psychopaths).


I agree here, because no matter what formula or rationale I set up for a person to enter into a particular context, they must decide to enter into that particular context of that formula or rationale! This means that yes, there will be creatures that are not able to grasp certain contexts, or simply decide not to agree with them. This is a fundamental freedom of every thinking thing.

So then, there is one last thing to cover: morality. You hit the nail on the head. We need reasons why choosing to harm other people for self gain is wrong. I wrote a paper on morality long ago, and got the basic premises down. The problem was, I was getting burned out of philosophy. I couldn't get people to discuss my knowledge theory with me, and I felt like I needed that to be established first. How can we know what morality is if we cannot know knowledge?

Finally, it honestly scared me. I felt that if someone could take the fundamental tenets of morality I had made, they could twist them into a half truth to manipulate people. If you're interested in hearing my take on morality, I can write it up again. Perhaps my years of experience since then will make me see it differently. Of course, let's finish here first.

Quoting Bob Ross
That would be my main point: it is not really about what is "true", but what is "useful" (or relevant).


I just wanted to emphasize this point. Applicable knowledge cannot claim it is true. Applicable knowledge can only claim that it is reasonable.

And with that, another examination done! Fantastic points and thoughts as always.
Bob Ross January 01, 2022 at 03:39 #637500
Hello @Philosophim,

Absolutely splendid post! I thoroughly enjoyed reading it!

Upon further reflection, I think that we are using the terms "possibility" and "plausibility" differently. I am understanding you to be defining a "possibility" as something that has been experienced at least once before and a "plausibility" as something that has not been applicably tested yet. However, I was thinking of "possibility" more in terms of its typical definition: "Capable of happening, existing, or being true without contradicting proven facts, laws, or circumstances". Furthermore, I was thinking of "plausibility" more in the sense that it is something that is not only possible, but has convincing evidence that it is the case (but hasn't been applicably tested yet). I think that you are implicitly redefining terms, and that is totally fine if that was the intention. However, I think that to say something is "possible" is to admit that it doesn't directly contradict reality in any way (i.e. our immediate forms of knowledge) and has nothing directly to do with whether I have ever experienced it before. For example, given our knowledge of colors and the human eye, I can state that it is possible that there are other shades of colors that we can't see (but with better eyes we could) without ever experiencing any new shades of colors. It is possible because it doesn't contradict reality, whereas iron floating on water is impossible not because I haven't witnessed it but, rather, because my understanding of densities (which are derived from experiences of course) disallows such a thing to occur. Moreover, to state that something is "plausible", in my mind, implies necessarily that it is also "possible"--for if it isn't possible then that would mean it contradicts reality and, therefore, it cannot have reasonable evidence for it being "plausible". Now, don't get me wrong, I think that your responses were more than adequate to convey your point, I am merely portraying our differences in definitions (semantics).
I think that your hierarchy, which determines things that are derived more closely from "possibilities" to be more cogent, is correct in the sense that you redefine a "possibility" as something experienced before (or, more accurately, applicably known). However, I think that you are really depicting that which is more immediate to be more cogent and not that which is possible (because I would define "possibility" differently than you). Likewise, when you define a "plausibility" to be completely separate from "possibility", I wouldn't do that, but I think that the underlying meaning you are conveying is correct.

an old possibility is still more cogent than a newer plausibility.

I would say: that which is derived from a more immediate source (closer to the processes of perception, thought, and emotion--aka experience) is more cogent than something that is derived from a less immediate source.

Plausibility does not use immediateness for evaluation, because immediateness is based on the time from which the applicable knowledge was first gained.


Although, with your definitions in mind, I would agree, I think that plausibility utilizes immediateness just as everything else: you cannot escape it--it is merely a matter of degree (closeness or remoteness).

So taking your example of a person who has lived with different memories (A fantastic example) we can detail it to understand why immediateness is important. It is not that the memories are old. It is that that which was once possible, is now no longer possible when you apply your distinctive knowledge to your current situation.


I agree! But because possibility is derived from whether it contradicts reality--not whether I have experienced it directly before. Although I may be misunderstanding you, if we define possibility as that which has been applicably known before, then, in this case, it is still possible although one cannot apply it without contradiction anymore (because one would have past experiences of it happening: thus it is possible). However, if we define possibility in the sense that something doesn't contradict reality, then it can be possible with respect to the memories (in that "reality") and not possible with respect to the current experiences (this "reality") because we are simply, within the context, determining whether the belief directly contradicts what we applicably and distinctly know.

We don't even have to imagine the fantastical to evaluate this. We can look at science. At one time, what was determined as physics is different than what scientists have discovered about physics today. We can look back into the past, and see that many experiments revealed what was possible, while many theories, or plausibilities were floating around intellectual circles, like string theory.


Although I understand what you are saying and it makes sense within your definitions, I would claim that scientific theories are possible and plausible. If it isn't possible, then it isn't plausible, because it must first be possible to be eligible to even be considered plausible. However, I fully agree with you in the sense that we are constantly refining (or completely discarding) older theories for better ones: but this is because our immediate forms of knowledge now reveal to us that those theories contradict reality in some manner and, therefore, are no longer possible (and, thereby, no longer plausible either). Or, we negate the theory by claiming it no longer meets our predefined threshold for what is considered plausible, which in no way negates its possibility directly (although maybe indirectly).

However, as plausibilities are applied to reality, the rejects are thrown away, and the accepted become possibilities. Sometimes these possibilities require us to work back up the chain of our previous possibilities, and evaluate them with our new context. Sometimes, this revokes what was previously possible, or it could be said, forces us to switch context. That which was once known within a previous context of time and space can no longer be known within this context.


I think you are sort of alluding to what I was trying to depict here, but within the idea that an applied plausibility can morph into a possibility. However, I don't think that only things I have directly experienced are possible, or that what I haven't directly experienced is impossible, it is about how well it aligns with what I have directly experienced (immediate forms of knowledge). Now, I may be just conflating terms here, but I think that to state that something is plausible necessitates that it is possible.

Is it possible that the tree is not there anymore, or is it plausible?


Both. If I just walked by the tree 10 minutes ago, and I claim that it is highly plausible that it is still there, then I am thereby also admitting that it is possible that it is there. If it is not possible that it is there, then I would be claiming that the tree being there contradicts reality but yet somehow is still plausible. For example, if I claimed that it is plausible that the tree poofed into existence out of thin air right now (and I never saw it; I'm just hypothesizing it from my room, which has no access to the area of land it allegedly poofed onto), then you would be 100% correct in rejecting that claim because it is not possible, and it is not possible because it contradicts every aspect of the immediate knowledge I have. However, if I claimed that it is highly plausible that a seed, in the middle of spring, in an area constantly populated with birds and squirrels, has been planted (carried by an animal, not purposely planted by humans) in the ground and will someday sprout into a little tree, I am claiming that it is possible that this can occur and, not only that, but it is highly "likely" (not in a probabilistic sense, but based off of immediate knowledge) that it will happen. I don't have to actually have previously experienced this process in its entirety: if I have the experiential knowledge that birds can carry seeds in their stomachs (which get pooped out, leaving the seed in fine condition) and that a seed dropped on soil, given certain conditions, can implant and sprout, then I can say it is possible without ever actually experiencing a bird poop a seed out onto a field and it, within the right conditions, sprout. A more radical example is the classic teapot floating around (I can't quite remember which planet) Jupiter.
If the teapot doesn't violate any of my immediate forms of knowledge, then it is possible; however, it may not be plausible as I haven't experienced anything like it and just because the laws allow it doesn't mean it is a reasonable (or plausible) occurrence to take place. Assuming the teapot doesn't directly contradict reality, then I wouldn't negate a belief in it based off of it not being possible but, rather, based on it not being plausible (and, more importantly, not relevant to the subject at all).

The reality, is this is a plausibility based off of a possibility. Intuitively, this is more reasonable then a plausibility based off of a plausibility. For example, its plausible that trees have gained immortality, therefore the tree is still there. This intuitively seems less cogent, and I believe the reason why, is because of the chain of comparative logic that its built off of.


In this specific case, I would claim that trees being immortal is not plausible because it contradicts all my immediate knowledge pertaining to organisms: they necessarily have an expiration to their lives. However, let's say that an immortal tree didn't contradict reality; then I would still say it is implausible, albeit possible, because I don't have any experiences of it directly or indirectly in any meaningful sense. If the immortality of a tree could somehow be correlated to a meaningful, relevant occurrence that I have experienced (such as, even if I haven't seen a cell, my indirect contact with the consequences of the concept of "cells"), then I would hold that it is "true". If it passes the threshold of a certain pre-defined quantity of backed evidence, then I would claim it is, thereafter, considered "plausible".

But the end claim, that one particular tree is standing, vs not still standing, is a plausibility.


Upon further examination, I don't think this is always the case. It is true that it can be a plausibility, but it is, first and foremost, a possibility. Firstly, I must determine whether the tree being there or not is a contradiction to reality. If it is, then I don't even begin contemplating whether it is plausible or not. If it isn't, then I start reasoning whether I would consider it plausible. If I deem it not plausible, after contemplation, it is still necessarily possible, just not plausible.

You can rationally hold that it is plausible that it is still standing, but how do we determine if one plausibility is more rational than another?


By agreeing upon a bar of evidence and rationale it must pass to be considered such. For example, take a 100-yard dash. We both can only determine a contestant's run time to be "really fast", "fast", "slow", or "really slow" if we agree upon thresholds: I would say the same is true regarding plausibility and implausibility. I might constitute the fact that I saw the tree there five minutes ago as a characterization that it is "highly plausible" that it is there, whereas you may require further evidence to state the same.

I believe it is by looking at the logic chain that the plausibility is linked from.


Although I have portrayed some differences, heretofore, between our concepts of possibility and plausibility, I would agree with you here. However, I think that it is derived from the proximity of the concept to our immediate forms of knowledge, whereas I think yours, in this particular case, is based off of whether it is closer to a possibility or not (thereby necessarily, I would say, making something that is possible and something that is plausible mutually exclusive).

I think the comparative chains of logic describes how (1) it aligns with our immediate knowledge and inductive hierarchies. I believe (2) relevancy to the subject can be seen as making our distinctive knowledge more accurate.


Again, I agree, but I would say that the "chains of logic" here is fundamentally the proximity to the immediate forms of knowledge (or immediateness as I generally put it) and not necessarily (although I still think it is a solid idea) comparing mutually exclusive types, so to speak, such as possibility and plausibility, like I think you are arguing for.

Going to your unicorn example, you may say it's possible for an animal to have a horn, possible for an animal to have wings, therefore it is plausible that a unicorn exists. But someone might come along with a little more detail and state, while it's possible that animals can have horns on their head, so far, no one has discovered that it's possible for a horse to. Therefore, it's only plausible that a horse would have wings or a horn, therefore it is only plausible that a unicorn exists


I would say that someone doesn't have to witness a horned, winged horse to know that it is possible because it doesn't contradict any immediate forms of knowledge (reality): it abides by the laws of physics (as far as I know). This doesn't mean that it is plausible just because it is possible: I would say it isn't plausible because it doesn't meet my predefined standards for what I can constitute as plausible. However, for someone else, that may be enough to claim it is "plausible", but I would disagree and, more importantly, we would then have to discuss our thresholds before continuing the conversation in any productive manner pertaining to unicorns. Again, maybe I am just conflating the terms, but this is as I currently understand them to mean.

Logically, what is plausible is not yet possible


I don't agree with this, but I am open to hearing why you think this is the case. I consider a possibility to be, generally speaking, "Capable of happening, existing, or being true without contradicting proven facts, laws, or circumstances" and a plausibility, generally speaking, to be "Seemingly or apparently valid, likely, or acceptable; credible". I could potentially see that maybe you are saying that what is "seemingly...valid, likely, or acceptable" is implying it hasn't been applicably known yet, but this doesn't mean that it isn't possible (unless we specifically define possibility in that way, which I will simply disagree with). I would say that it is "seemingly...valid, likely, or acceptable" because it is possible (fundamentally) and because it passes a certain predefined threshold (that other subjects can certainly reject).

I think this fits with your intuition then. What is plausible is something that has no applicable knowledge. It is more rational to believe something which has had applicable knowledge, the possible, over what has not, the plausible


Again, the underlying meaning here I have no problem with: I would just say it is about the proximity and not whether it is possible or not (although it must first be possible, I would say, for something to be plausible--for if I can prove that something contradicts reality, then it surely can't be plausible). I think that we are just using terms differently.

So then, there is one last thing to cover: morality. You hit the nail on the head. We need reasons why choosing to harm other people for self gain is wrong. I wrote a paper on morality long ago, and got the basic premises down. The problem was, I was getting burned out of philosophy. I couldn't get people to discuss my knowledge theory with me, and I felt like I needed that to be established first. How can we know what morality is if we cannot know knowledge?

Finally, it honestly scared me. I felt that if someone could take the fundamental tenets of morality I had made, they could twist them into a half truth to manipulate people. If you're interested in hearing my take on morality, I can write it up again. Perhaps my years of experience since then will make me see it differently. Of course, let's finish here first.


I would love to hear your thoughts on morality and ethics! However, I think we need to resolve the aforementioned disagreements first before we can explore such concepts (and I totally agree that epistemology precedes morality).

Applicable knowledge cannot claim it is true. Applicable knowledge can only claim that it is reasonable.


I absolutely love this! However, I would say that it is "true" for the subject within that context (relative truth), but with respect to absolute truths I think you hit the nail on the head!

I look forward to your response,
Bob
Philosophim January 01, 2022 at 20:07 #637651
I think the true issue here is a difference in our use of terms between plausibility and possibility. Let's see if we can come to the same context.

I am repurposing the terms of probability, possibility, and plausibility after redefining knowledge into distinctive and applicable knowledge. The reason is, the terms' original use was for the old debated generic knowledge. As they were, they do not work anymore. However, they are great words, and honestly only needed some slight modifications. If you think I should invent new terms for these words, I will. The words themselves aren't as important as the underlying meaning.

At each step of the inductive hierarchy, it is a comparative state of deductive knowledge, versus applicable knowledge.

Possibility is a state in which an applied bit of distinctive knowledge has been applicably known. At that point in time, a belief that the applicable knowledge could be obtained again is the belief that it is "possible".

Plausibility is distinctive knowledge that has not been applicably tested, but we have a belief as to the applicable outcome.

You noted,
Quoting Bob Ross
Logically, what is plausible is not yet possible

I don't agree with this, but I am open to hearing why you think this is the case.


The reason something plausible is not yet possible is because once something plausible has been applicably known one time, it is now possible. It is an essential property of the meaning of plausibility that it is exclusionary from what is possible.

As such, many times you were comparing two possibilities together, instead of a plausibility and a possibility.

Quoting Bob Ross
However, I think that to say something is "possible" is to admit that it doesn't directly contradict reality in any way (i.e. our immediate forms of knowledge) and has nothing directly to do with whether I have ever experienced it before. For example, given our knowledge of colors and the human eye, I can state that it is possible that there are other shades of colors that we can't see (but with better eyes we could) without ever experiencing any new shades of colors.


We say something is possible if it has been applicably known at least once. To applicably know something, you must experience it at least once. We cannot state that it is possible that there are other shades of color that humanity could see if we improved the human eye, because no one has yet improved the human eye to see currently unseeable colors.

What you've done is taken distinctive knowledge that is built on other applicable knowledge, and said, "Well, it's "likely" there are other colors". But what does "likely" mean in terms of the knowledge theory we have? It's not a probability, or a possibility, because the distinctive knowledge of "I think there are other colors the human eye could see if we could make it better" has never been applicably known.

We could one day try improving the human eye genetically. Maybe we would succeed. Then we would know its possible. But until we succeed in applicably knowing once, it is only plausible.

I feel that "Plausibility" is one of the greatest missing links in epistemology. Once I understood it, it explained many of the problems in philosophy, religion, and fallacious thinking in general. I understand your initial difficulty in separating plausibilities and possibilities. Plausibilities are compelling! They make sense in our own head. They are the things that propel us forward to think on new experiences in life. Because we have not had this distinction in language before, we have tied plausibilities and possibilities into the same word of "possibility" in the old context of language. That has created a massive headache in epistemology.

But when we separate the two, so many things make sense. If you start looking for it, you'll see many arguments of "possibility" in the old context of "knowledge", are actually talking about plausibilities. When you see that, the fault in the argument becomes obvious.

With this in mind, re-read the points I make about immediateness, and how that can only apply to possibility. Plausibilities cannot have immediateness, because they are only the imaginations of what could be within our mind, and have not been applied to reality without contradiction yet.

Quoting Bob Ross
I would say that someone doesn't have to witness a horned, winged horse to know that it is possible because it doesn't contradict any immediate forms of knowledge


As one last attempt to clarify, when you state it doesn't contradict any immediate forms of knowledge, do you mean distinctive knowledge, or applicable knowledge? I agree that it does not contradict our distinctive knowledge. I can imagine a horse flying in the air with a horn on its head. It has not been applied to reality however. If I believe it may exist somewhere in reality, reality has "contradicted" this distinctive knowledge, by the fact that it has not revealed it exists. If I believe something exists in reality, but I have not found it yet, my current application to reality shows it does not exist.

Plausibilities drive us to keep looking in the face of reality's denial. They are very useful. They are powerful drivers of imagination and creativity. But they are not confirmations of what is real, only the hopes and dreams of what we want to be real.

I hope that clears up the issue. Fortunately, this may be the final issue! Great discussion as always.

Bob Ross January 02, 2022 at 04:44 #637804
Hello @Philosophim,
I agree: I think that we are using terms drastically differently. Furthermore, I don't, as of now, agree with your use of the terminology, for multiple different reasons (which I will hereafter attempt to explain).

Firstly, the use of "possibility" and "plausibility" in the sense that you have defined them seems, to me, not to account for certain meaningful distinctions. For example, let's consider two scenarios: person one claims that a new color could be sensed by humans if their eyes are augmented, while person two claims that iron can float on water if you rub butter all over the iron block. I would ask you, within your use of the terms, which is more cogent? Under your terms, I think that these would both (assuming they both haven't been applied to reality yet) be a "plausibility" and not a "possibility", and, more importantly, there is no hierarchy between the two: they only gain credibility if they aren't inapplicable plausibilities and, thereafter, are applied to reality without contradiction. This produces a problem, for me at least, in that I think one is more cogent than the other. Moreover, in my use of the terms, it would be because one is possible while the other can be proven to be impossible while they are both still "applicable plausibilities" (in accordance with your terms). However, I think that your terms do not account for this at all and, thereby, consider them equal. You see, "possibility", according to my terms, allows us to determine which beliefs we should pursue and which ones we should throw away before even attempting them (I think that your use of the terms doesn't allow this: we must apply it directly to reality and see if it fails, but what if it would require 3 years to properly set up? What if we are conducting an experiment that is clearly impossible, but yet considered an "applicable plausibility"? What term would you use for that if not "possibility"?). Moreover, there is knowledge that we have that we cannot physically directly experience, which I am sure you are acquainted with as a priori, that must precede the subject altogether. 
I haven't, and won't ever, experience directly the processes that allow me to experience in the first place, but I can hold it as not only a "possibility" (in my sense of the term) but also a "highly plausible" "truth" of my existence. Regardless of what we call it, the subject must have a preliminary consideration of what is worth pursuing and what isn't. For me, that term is "possibility". I think that you are more saying that we must apply it to reality without contradiction--which confuses me a bit because that is exactly what I am saying, but I would then ask you what you would call something that has the potential to occur in reality without contradiction? If you are thinking about the idea of iron floating on water, instead of saying "that is not possible", are you saying "I would not be able to apply that to reality without contradiction"? If so, then I think I am just using what I would deem a more concise word for the same thing: possibility. Furthermore, it is a preliminary judgement, not in terms of claiming that something can be applied to reality to see if it holds: I could apply the butter-rubbed iron on water idea and the color one, but before that I could determine one to be an utter waste of time.

Secondly, your use of the terms doesn't account for any sort of qualitative likelihoods: only quantitative likelihoods (aka, probability). You see, if I say that something isn't "possible" until I have experienced it at least once, then a fighter jet flying at the speed of sound is not possible, only plausible, for me because I haven't experienced it directly nor have I measured it with a second hand tool. However, I think that it is "plausible", in a qualitative likelihood sense, because I've heard from many people I trust that they can travel that fast (among other things that pass my threshold of what can be considered "plausible"). I can also preliminarily consider whether this concept would contradict any of my discrete or applicable knowledge and, given that it doesn't, I would be able to categorize this as completely distinct from a claim such as "iron can float on water". I would say that a jet traveling at the speed of sound is "possible", therefore I should pursue further contemplation, and then I consider it "highly plausible" because it meets my standard of what is "highly plausible" based off of qualitative analysis. In your terms, I would have two "plausibilities" that are not "possible" unless I experience it (this seems like empiricism the more I think about it--although I could be wrong) and there is no meaningful distinction between the two.

Thirdly, I think that your use of the terms lacks a stronger, qualitative (rationalized), form of knowledge (i.e. what "plausibility" is for me). If a "plausibility" is weaker than a "possibility", and a "possibility" is merely that which one has experienced once, then we are left without any useful terms for when something has been witnessed once but isn't as qualitatively likely as another thing that has been witnessed multiple times. For example, the subject could have experienced a dog attack a human; therefore, it is "possible" and not "plausible" (according to your terms), but when a passerby asks them if their dog will attack them if they pet it, the subject now has to consider, not just that it is "possible" since they have witnessed it before, but the qualitative likelihood that their dog is aggressive enough to be a risk. They have to necessarily create a threshold, which is only useful in this context if the passerby agrees more or less with it, that must be assessed to determine if the dog will attack or not. They must both agree, implicitly, because if the subject has too drastically different a threshold than the passerby, then the passerby's question will be answered in a way that won't portray anything meaningful. For example, if the subject thinks that its dog will be docile as long as the passerby doesn't pet its ears and decides to answer "no, it won't attack you", then that will not be very useful to the other subject, the passerby, unless they also implicitly understand that they shouldn't pet the ears. Most importantly, the subject is not making any quantitative analysis (as we have discussed earlier) but, rather, what I would call qualitative analysis that I would frame in terms of "plausibility". However, if you have another term for this I would be open to considering it, as I think that your underlying meaning is generally correct.

Fourthly, I think that your redefinitions would be incredibly hard to get the public to accept in any colloquial sense (or honestly any practical sense) because it 180s their perception of it all and, as I previously mentioned, doesn't provide enough semantical options for them to accurately portray meaning. I am not trying to pressure you into having to abide by common folk: I just think that, if the goal is to refurbish epistemology, then you will have to either (1) keep using the terms as they are now or (2) accompany their redefinitions with other terms that give people the ability to still accurately portray their opinions.

We cannot state that it is possible that there are other shades of color that humanity could see if we improved the human eye, because no one has yet improved the human eye to see currently unseeable colors


I would say that this reveals what I think lacks in your terminology: we can't determine what is more cogent to pursue. In my terminology, I would be able to pursue trying to augment the eye to see more shades of colors because it is "possible". I am not saying that I "know" that they exist, only that I "know" that they don't contradict any distinctive or applicable knowledge I have (what I would call immediate forms of knowledge: perception, thought, emotion, rudimentary reason, and memories). I'm not sure what term you would use here in the absence of "possibility", but I am curious to know what!

But what does "likely" mean in terms of the knowledge theory we have? It's not a probability, or a possibility, because the distinctive knowledge of "I think there are other colors the human eye could see if we could make it better." has never been applicably known.


Again, I think this is another great example of the problem with your terms; if it isn't possible or probable then it is just a plausibility like all the other plausibilities. But I can consider the qualitative likelihood that it is true and whether it contradicts all my current knowledge, which will also determine whether I pursue it or not. I haven't seen a meteor, nor a meteor colliding into the moon, but I have assessed that it is (1) possible (in my use of the term) and (2) plausible (in my use of the term) because I have assessed whether it passes my threshold. For example, I would have to assess whether the people that taught me about meteors would trick me or not (and whether they are credible and have authority over the matter--both of which require subjective thresholds). Are they liars? Does what they are saying align with what I already know? Are they trying to convince me of iron floating on water? These are considerations that I think get lost in the infinite sea of "plausibilities" (in your terms). The only thing I can think of is that maybe you are defining what I would call "possible" as an "applicable plausibility" and that which is "impossible" as an "inapplicable plausibility". But then I would ask what determines what is "applicable"? Is it that I need to test it directly? Or is it the examination that it could potentially occur? I think that to say it "could potentially occur" doesn't mean that I "know" that it exists, just that, within my knowledge, it has the potential to. I think your terms remove potentiality altogether.

I feel that "Plausibility" is one of the greatest missing links in epistemology. Once I understood it, it explained many of the problems in philosophy, religion, and fallacious thinking in general. I understand your initial difficulty in separating plausibilities and possibilities. Plausibilities are compelling! They make sense in our own head. They are the things that propel us forward to think on new experiences in life. Because we have not had this distinction in language before, we have tied plausibilities and possibilities into the same word of "possibility" in the old context of language. That has created a massive headache in epistemology.


I understand what you mean to a certain degree, but I think that it isn't fallacious to say that something could potentially occur: I think it becomes fallacious if the subject thereafter concludes that because it could occur it does occur. If I "know" something could occur, that doesn't mean that I "know" that it does occur and, moreover, I find this to be the root of what I think you are referring to in this quote.

But when we separate the two, so many things make sense. If you start looking for it, you'll see many arguments of "possibility" in the old context of "knowledge", are actually talking about plausibilities. When you see that, the fault in the argument becomes obvious.


I agree in the sense that one should recognize that just because something is "possible" (in my use of the term) that doesn't mean that it actually exists, it just means that it could occur (which can be a useful and meaningful distinction between things that cannot). I also understand that, within your use of the terms, that you are 100% correct here: but I think that the redefining of the terms leads to other problems (which I have and will continue to be addressing in this post).

Plausibilities cannot have immediateness, because they are only the imaginations of what could be within our mind, and have not been applied to reality without contradiction yet.


I think that they are applied to reality without contradiction in an indirect sense: it's not that they directly do not contradict reality, it's that they don't contradict any knowledge that I currently have (distinct and applicable). This doesn't mean that it does happen, or is real, but, rather, that it can happen: it is just a meaningful distinction that I think your terms lack (or I am just not understanding it correctly). And, to clarify, I think that "applicable plausibilities" aren't semantically enough for "things that could occur" because then I am no longer distinguishing "applicable" and "inapplicable" plausibilities based off of whether I can apply it to reality or not. In my head, it is different to claim that I could apply something to reality to see if it works and to say that, even if I can't apply it, it has the potential to work. If I state that the teapot example is an "inapplicable plausibility", then I think that the butter on iron example, even if it took me three years to properly set up the experiment, would be an "applicable plausibility" along with the shades of colors example. But I think that there is a clear distinction between the shades of colors example and the butter on iron example--even if I can apply them, given enough time, to reality to see if they are true: I have applied enough concepts that must be presupposed for the idea of iron to float on water to work and, therefore, it doesn't hold even if I can't physically go test it.

As one last attempt to clarify, when you state it doesn't contradict any immediate forms of knowledge, do you mean distinctive knowledge, or applicable knowledge? I agree that it does not contradict our distinctive knowledge. I can imagine a horse flying in the air with a horn on its head. It has not been applied to reality however. If I believe it may exist somewhere in reality, reality has "contradicted" this distinctive knowledge, by the fact that it has not revealed it exists. If I believe something exists in reality, but I have not found it yet, my current application to reality shows it does not exist.


I mean both (I believe): my experiences and memories, which are the sum of my existence. However, I am not saying that it exists, only that it could exist. This is a meaningful distinction from things that could not exist. I would agree with you in the sense that I don't think unicorns exist, but not because they can't exist, but because I don't have any applicable knowledge (which I believe is what you would call it) of them. So I would agree with you that I can't claim to "know" a unicorn exists just because it could exist: but I can claim that an idea of a unicorn that is "possible" is more cogent than one that is "impossible", regardless of whether I can directly test anything or not.

But they are not confirmations of what is real, only the hopes and dreams of what we want to be real.


I sort of agree. There is a distinction to be made between what is merely a hope or dream, and that which could actually happen. I may wish that a supernatural, magical unicorn exists, but that is distinctly different from the claim that a natural unicorn could exist. One is more cogent than the other, and, thereby, one is hierarchically higher than the other.

I look forward to hearing your response,
Bob
Agent Smith January 02, 2022 at 08:06 #637831
Methodology of knowledge?

[quote=Socrates]I know that I know nothing.[/quote]

Socrates possessed a methodology of knowledge. He knows (that he knows nothing).

Whatever methodology Socrates utilized, it failed to provide any other form of knowledge at all, apart from knowledge of his own ignorance.

What does that tell you about Socrates' methodology of knowledge? It almost seems like his methodology was designed specifically to prove one and only one proposition: I know that I know nothing.

How do you prove Socrates' (paradoxical) statement?

1. I know nothing

Ergo,

2. I know nothing

Remember he only knows I know nothing. The argument is a circulus in probando, fallacious (informally).

:chin:
Philosophim January 03, 2022 at 23:09 #638367
I still think we're a bit apart on the terms. Let me see if I can define them more clearly.

Quoting Bob Ross
Firstly, the use of "possibility" and "plausibility" in the sense that you have defined it seems, to me, to not account for certain meaningful distinctions.


The meaningful distinctions should be:

Possibility - the belief that because distinctive knowledge has been applicably known at least once, it can be known again.

Plausibility- the belief that distinctive knowledge that has never been applicably known, can be applicably known.

In an earlier post, I mentioned knowledge chains. I believe this was before we had clarified the distinction between the two inductions. Lets take your example here:

Quoting Bob Ross
For example,let's consider two scenarios: person one claims that a new color could be sensed by humans if their eyes are augmented, while person two claims that iron can float on water if you rub butter all over the iron block. I would ask you, within your use of the terms, which is more cogent?


First, we cannot compare cogency between different branches of claims. This is because cogency takes context into account as well, and evaluating the human eye and a floating iron block are two fairly separate contexts. Recall that inductions are cogent when we reach the limit of what can be applicably known, so we could have a situation in which a plausibility is the most cogent conclusion within one context, while in another context, a possibility is the most cogent.

The more important question is how we can determine which belief about what will happen in an attempted application is most cogent, among claims within the same context. This is where knowledge chains, and their comparisons, come into play.

What we know or believe oftentimes implicitly relies on prior beliefs or applicable knowledge. If I am making a judgement about the human eye, then I am bringing my knowledge and inductions about the eye into my assessment.

I applicably know the eye can see X colors.
I applicably know we can improve the eye's ability to see with greater focus.
Therefore I believe we can improve the eye to see greater than X colors.

We have 2 knowledge claims, then we leap to a plausibility. We don't know if it's possible yet, as we haven't tried applying it to reality. But we believe that if we attempt to, we will discover that we can improve the eye to see more than X colors.

Now, lets think prior to the availability of eye surgery.

I applicably know the eye can see X colors.
I think its plausible we can improve the eye's ability to see with greater focus.
Therefore I believe we can improve the eye to see greater than X colors.

Here we have 1 knowledge claim, a plausibility, then another plausibility built on the first plausibility. Comparing the two chains, the first chain is more cogent than the second chain. Even though the conclusions are the same, it is the chain of logic behind each conclusion that determines which end statement is more cogent than the other.

This is valuable, because this destroys the Gettier problem. It doesn't matter if either claim happens to be true or not. We could of course refine the context. Perhaps include some prior statements that we are implicitly glossing over. But it is about taking a belief, thinking about all of the alternative ways we can arrive at that belief (or the negation of that belief), and taking the most rational logic chain of events.

I believe the above should cover what you meant by "qualitative likelihood". Hierarchical induction determines which of the inductions within consideration of a conclusion is most rational. And it is more rational to consider outcomes that involve possibilities, over outcomes that involve plausibilities. But more importantly, we need to examine the chain of rationality one took to arrive at one's induction as well. This should provide all that's needed for a strong and measurable basis of cogency.
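As a toy illustration only (this scoring scheme is my own sketch of the comparison, not a definition from the papers), the two eye-surgery chains above can be ranked mechanically: a chain is only as strong as its weakest link, and ties are broken by how many applicable-knowledge steps support the conclusion:

```python
# Toy model of comparing knowledge chains for cogency.
# Strength of each step type, strongest first:
# applicable knowledge > probability > possibility > plausibility.
STRENGTH = {"knowledge": 3, "probability": 2, "possibility": 1, "plausibility": 0}

def cogency(chain):
    """Score a chain: (weakest link, number of knowledge steps).
    Higher tuples compare as more cogent."""
    weakest = min(STRENGTH[step] for step in chain)
    knowledge_steps = sum(1 for step in chain if STRENGTH[step] == 3)
    return (weakest, knowledge_steps)

# Chain 1: two applicable knowledge claims, then a leap to a plausibility.
after_surgery = ["knowledge", "knowledge", "plausibility"]
# Chain 2: one knowledge claim, then a plausibility built on a plausibility.
before_surgery = ["knowledge", "plausibility", "plausibility"]

# Same conclusion, but the first chain rests on more applicable knowledge.
assert cogency(after_surgery) > cogency(before_surgery)
```

The tuple comparison just encodes the idea that, between two chains ending in the same induction, the one built on more applicable knowledge and fewer stacked plausibilities is the more rational one to hold.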

Quoting Bob Ross
Moreover, there is knowledge that we have that we cannot physically directly experience, which I am sure you are acquainted with as a priori, that must precede the subject altogether. I haven't, and won't ever, experience directly the processes that allow me to experience in the first place, but I can hold it as not only a "possibility" (in my sense of the term) but also a "highly plausible" "truth" of my existence.


According to this, there is no a priori. Everything is distinctively or applicably known by our experience. If you believe there is something that must exist prior to your current existence, then like every other induction, it must be some variation on probability, possibility, plausibility, or an irrational belief.

Quoting Bob Ross
I would say that this reveals what I think lacks in your terminology: we can't determine what is more cogent to pursue. In my terminology, I would be able to pursue trying to augment the eye to see more shades of colors because it is "possible".


Under the old terminology, you wouldn't be able to state it was possible either. It may very well be that we cannot modify a human eye to see greater color, because it turns out that color is observed in the brain, and we would have to rewire that as well. As such, someone would ask, "How do you know that is possible?"

With the chain of reasoning comparisons I noted above, we can definitely determine which is most cogent to pursue. In fact, it might help us realize we have underlying assumptions that we need to discover first.

Quoting Bob Ross
I understand what you mean to a certain degree, but I think that it isn't fallacious to say that something could potentially occur: I think it becomes fallacious if the subject thereafter concludes that because it could occur it does occur.


Every induction is a claim that something might be. An induction, by definition, is a conclusion that does not necessarily follow from the premises involved. If I'm going to predict the sun will rise tomorrow, because it's risen several times, I know that it is possible. If I say the sun will not rise tomorrow, that is plausible, as the sun has always risen. My plausibility might be correct, and my possibility might be incorrect. The point of cogency is to evaluate the inductions, and decide which one is more reasonable to hold when you are deciding what will happen in the future.

There was a lot that went in many directions in your post. I couldn't cover it all in one reply, but I thought that if I directed us back to the meaning of the terms, and answered some of the repeating themes, it would clarify most of the issues.
Bob Ross January 04, 2022 at 18:08 #638715
Hello @Philosophim,
The dots have finally clicked for me! I think I understand what you are stating now and, so, most of what I said has been negated (I apologize for the confusion). However, I do still have a couple quarrels, so I will elaborate on those in a concise manner (that way, if I am still not understanding it correctly, you can correct me without having to address too many objections).


Possibility - the belief that because distinctive knowledge has been applicably known at least once, it can be known again.

Plausibility- the belief that distinctive knowledge that has never been applicably known, can be applicably known.


I have no problem with the underlying meaning of "possibility", however I think it still leaves out potentiality, but more on that later. With respect to "plausibility", I think you just defined, in accordance with your essays, an "applicable plausibility", as opposed to an "inapplicable plausibility", which is not just a "plausibility". You defined it in the quote as that which "can be applicably known", which is what I thought an "applicable plausibility" was. Maybe I am just misremembering.

First, we cannot compare cogency between different branches of claims. This is because cogency takes context into account as well, and evaluating the human eye and a floating iron block are two fairly separate contexts


I think you are sort of right. I think that you are thinking of the hierarchical inductions within a particular context as a linear dependency (i.e. a possibility -> plausibility is more cogent than a possibility -> plausibility -> plausibility); however, I think it is more of a plane, contexts engulfing contexts, style of hierarchy: no context is strictly isolated from any other context, as they all are dependent on a more fundamental context which engulfs them together. Think of the evaluating-the-human-eye context and the floating-iron-block context as separate contexts, indeed, but residing within a shared context(s) from which they can be cross-examined. A great example is the context in which the law of noncontradiction is a valid axiom: this contextual plane would engulf, because it is more fundamental, the two aforementioned contexts. Therefore, in the abstract, if contexts A and B reside within the law of noncontradiction context, and A does not abide by the law of noncontradiction while B does, then A is less cogent than B on a more fundamental contextual plane--regardless of the fact that their hierarchical inductions are considered separately. There are always parent contexts that engulf a given context unless you are contemplating the axioms from which all others are derived (then it gets tricky).

Before I continue to your post, let me briefly try to explain the difference between "possibility" (in your terms) and potentiality. Let's use two examples:

I applicably know what two "things" are.
I applicably know what three "things" are.
I applicably know that the underlying meaning of "two" and "three" are not synonymous.
Therefore, "two" "things" and "three" "things" are synonymous.

I applicably know the eye can see X colors.
I applicably know we can improve the eye's ability to see with greater focus.
Therefore I believe we can improve the eye to see greater than X colors.

Although they are to be considered separate from one another, in the sense of the induction chains, because they are two totally different contexts, we can still compare them because they both reside within a parent (or more than one parent) context; The law of noncontradiction, assuming the subject holds that as a fundamental axiom, would be a great example of a parent context that engulfs these two examples and, therefore, the former example is less cogent than the latter, despite their clearly different contexts, due to the parent context's negation of the former example's potentiality. Normally this would be called "possibility", but since you use it differently I think we are safe using potentiality instead. But, most importantly, notice that these two examples are not mutually exclusive, in a holistic sense, as they stem from more fundamental parent contexts.


I applicably know the eye can see X colors.
I applicably know we can improve the eye's ability to see with greater focus.
Therefore I believe we can improve the eye to see greater than X colors.


I understand your hierarchical induction chains, and they are brilliant (and great example)! However, consider this:

1. I see a round object at the top of a hill.
2. I have never experienced this round object before.
3. I applicably know that it is windy out.
4. I have experienced a round log fall down a hill during a windy day.
5. I have never experienced a round log fly up off of a hill during a windy day.
6. I have experienced "things" flying off of a hill.
7. The round object is similar in size to the log, but isn't a log.

Consider the following conclusions:
1. The round object is going to fly off of the hill
2. The round object is going to roll down the hill

Now, I think that you are perfectly right in stating that the cogency of these two conclusions, since they are within the same context, can be evaluated based off of the induction chains. However, in this example, let's try it out:

For conclusion 1:
I applicably know that some "things" can fly off of hills.
I applicably know that this round-object is a "thing".
Therefore, the round-object will fly off the hill.
I can apply this belief to reality to see if it holds.
Therefore, I am holding an "applicable plausibility" based off of two possibilities.

For conclusion 2:
I applicably know that some round-like objects, such as a log, can roll down a hill.
I applicably know that some round-like objects, such as a log, will roll down a hill in windy climates.
Therefore, the round-like object will roll down the hill.
I can apply this belief to reality to see if it holds.
Therefore, I am holding an "applicable plausibility" based off of two possibilities.

Notice that these are (1) both within the same context and (2) they both are at the same point in the induction chain. However, the latter is more cogent than the former because we have to evaluate the parent context(s) that they share: in this case, a good example is the law of similarity. If the subject holds that a generic "thing" is less cogent to base correlations off of than more particular groups of concepts, aka (I believe) the law of similarity, then they hold a more fundamental, parent context, that engulfs the two aforementioned conclusions and, thereby, one is more cogent than the other. Likewise, if the person held a parent context that directly contradicts the engulfed context, then that engulfed context would have to go (or the parent one would have to be refurbished to allow it to live on) and, more importantly, that context would be based off of the law of noncontradiction, which resides within another context: it is a hierarchy of planes that are engulfed by one another where the most fundamental is the most engulfing.

Similarly, with respect to your example prior to eye surgery, that requires a more fundamental acceptance, a parent context: that the context of the situation nullifies the ability for me to say they were truly wrong or right, or that I am truly right compared to them (because it depends on the context). If I didn't hold that context, then I wouldn't agree with you in this sense, and neither of us would be truly wrong: I would just be disagreeing with you at a more fundamental context that engulfs the other.

I believe the above should cover what you meant by "qualitative likelihood".


I am going to refrain from elaborating on "qualitative likelihoods" to restrict the amount of objections I give in this post (that way it is easier for you, hopefully). But we can most definitely talk about this after.

According to this, there is no a priori.


Originally I was going to object, but I think that a priori is perfectly compatible with your view (or at least how I understand it) and can elaborate on this further if you would like.

With the chain of reasoning comparisons I noted above, we can definitely determine which is most cogent to pursue.


Only within the particular context and not considering the parent contexts.

In general, I like your epistemology! I think it is an empiricist-leaning view that is more "sure" of the chick and less "sure" of the egg. I just think we can improve it (:

Bob
Philosophim January 09, 2022 at 14:07 #640446
Reply to Bob Ross

Great response Bob, my apologies for the delay. I caught "The Covid," and have been fairly sick. Fortunately I'm vaccinated, so recovery is going steady so far.

Quoting Bob Ross
With respect to "plausibility", I think you just defined, in accordance with your essays, an "applicable plausibility", contrary to an "inapplicable plausibility", which is not just a "plausibility". You defined it in the quote that it "can be applicably known", which is what I thought an "applicable plausibility" was. Maybe I am just misremembering.


An applicable plausibility is something which can be applied to reality if we so choose. For example, "If I go outside within five minutes, it will rain on me as soon as I step outside of the door." I do not know if it is raining, nor can I figure it out from within the house. There is nothing preventing me from going outside within the next five minutes. It's an applicable plausibility that I will be rained on, because I can test it.

An inapplicable plausibility is a plausibility that either cannot be tested, or is designed not to be able to be tested. If for example I state, "There is a unicorn that exists that cannot be sensed by any means," this is inapplicable. There is nothing to apply to reality with this idea, as it is undetectable within reality. Perhaps there is a unicorn that exists that cannot be sensed in reality. But we will never be able to apply it, therefore it is something that cannot be applicably known.

Quoting Bob Ross
Therefore, in the abstract, if context A and B reside within the law of noncontradiction context, and A does not abide by the law of noncontraction while B does, then A is less cogent than B on a more fundamental contextual plane--regardless of the fact that their hierarchical inductions are considered separately.


Just because two built contexts are dissimilar, it doesn't mean they cannot have commonalities. But commonalities do not mean they can necessarily be evaluated against the different inductions within their independent contexts. The human eye and iron floating on water with butter are just too disparate to compare. Violating the law of non-contradiction simply means you have an irrational inductive belief, which is completely divorced from rationality. I suppose there's nothing stopping a person from placing comparative contexts in planes, but I would think the end result would be the same.

To add, the comparison is about finding the best induction to take within that context. So if my only recourse in one instance, let's say iron floating on water, is a plausibility over an irrational induction, it's more cogent to choose the plausibility. If in the case of an eye, I have a probability vs a possibility, it's more cogent to take the probability. But there's really no comparing the probability of improving the eye to the options of plausibility vs irrationality with iron floating on water with butter. We could state that within the context of the eye, we have more cogent inductions to select from than in the context of iron floating on water, but that's really about it.

Quoting Bob Ross
I applicably know what two "things" are.
I applicably know what three "things" are.
I applicably know that the underlying meaning of "two" and "three" are not synonymous.
Therefore, "two" "things" and "three" "things" are synonymous.


Can you clarify this? I interpreted this as follows.

I applicably know A and B.
I applicably know C, D, and E.
I applicably know that the numbers two and three are not synonymous.
Therefore A and B, and C,D, and E are synonymous.

I don't believe that's what you're trying to state, but I could not see what you were intending.

Quoting Bob Ross
For conclusion 1:
I applicably know that some "things" can fly off of hills.
I applicably know that this round-object is a "thing".
Therefore, the round-object will fly off the hill.
I can apply this belief to reality to see if it holds.
Therefore, I am holding an "applicable plausibility" based off of two possibilities.

For conclusion 2:
I applicably know that some round-like objects, such as a log, can roll down a hill.
I applicably know that some round-like objects, such as a log, will roll down a hill in windy climates.
Therefore, the round-like object will roll down the hill.
I can apply this belief to reality to see if it holds.
Therefore, I am holding an "applicable plausibility" based off of two possibilities.


I still wasn't quite sure what you meant by parent contexts in these examples. I think what you mean is the broader context of "things" versus "round objects". Please correct me here. For my part, it depends on how we split hairs, so to speak. If the first person does not applicably know that things can roll down a hill as well, then neither statement is more cogent than the other. If the first person knows that "things" can also roll down a hill, then there's no cogent reason why they would conclude the "thing" would fly off the hill over roll down the hill.

What might help is to first come up with a comparison of cogency for a person within a particular context first. Including two people complicates comparing inductions greatly, but generally follows the same rules as a person comparing several inductive options they are considering within their own context.

You may be on to something by the way. You're the first person I've had the opportunity to really dig into the inductive hierarchy with, and I will be the first to say it is only a foundation. I just want to make sure the foundation is understood first. While I feel the hierarchy chain is a good start, the second step, which is much more difficult to establish, is comparing two inductions of the same hierarchy and determining which one is more cogent. I think there is something that might be needed beyond the hierarchy chains, such as a further subdivision of the base four inductions. I'm eager to hear more of your ideas!
Bob Ross January 10, 2022 at 02:29 #640712
Hello @Philosophim,
I caught "The Covid," and have been fairly sick. Fortunately I'm vaccinated, so recovery is going steady so far.


Oh no! I am glad that you are recovering and I hope you have a speedy recovery!


An applicable plausibility is something which can be applied to reality if we so choose. For example, "If I go outside within five minutes, it will rain on me as soon as I step outside of the door." I do not know if it is raining, nor can I figure it out from within the house. There is nothing preventing me from going outside within the next five minutes. It's an applicable plausibility that I will be rained on, because I can test it.

An inapplicable plausibility is a plausibility that either cannot be tested, or is designed not to be able to be tested. If for example I state, "There is a unicorn that exists that cannot be sensed by any means," this is inapplicable. There is nothing to apply to reality with this idea, as it is undetectable within reality. Perhaps there is a unicorn that exists that cannot be sensed in reality. But we will never be able to apply it, therefore it is something that cannot be applicably known.


This is all and well, but I think you defined "plausibility" (in your previous post) as exactly what you just defined as an "applicable plausibility"--and that was all I was trying to point out. You defined "plausibility" as "the belief that distinctive knowledge that has never been applicably known, can be applicably known". A "plausibility", under your terms (I would say), is not restricted to what "can be applicably known" (that is a subcategory called "applicable plausibilities"), whereas "plausibility" is a much more generic term than that (as far as I understand your terms).


Just because two built contexts are dissimilar, it doesn't mean they cannot have commonalities. But commonalities do not mean they can necessarily be evaluated against the different inductions within their independent contexts.


I agree in that two contexts can be dissimilar and still have commonalities, but those commonalities are more fundamental aspects to those contexts and, therefore, although they are dissimilar, they are not separate. Even the most distinct contexts share some sort of dependency (or dependencies). An induction (within a context) that contradicts a parent context is less cogent than an induction (within a different context) that doesn't.

The human eye and iron floating on water with butter are just too disparate to compare.


You can compare them relative to their shared dependencies (such as the law of noncontradiction). You could say that, since iron floating on water (even if you haven't experienced it before) cannot occur based off of what you have learned (experienced) about densities in chemistry and your acceptance of the law of noncontradiction, then this is not as cogent as the eye example since it violates them. This is a comparison of potentiality, where both are compared to an accepted principle that engulfs both of them (which are part of the context, but is shared). From what I understand from your hierarchical inductions, the idea that (1) A can be A and not A and that (2) A will have the same identity as another A are both not possible unless we experience it, with no distinction between the two (preliminarily). However, I am saying that #1 has no potential to exist while #2 does because I have accepted the law of noncontradiction and law of identity as underlying principles which rule out #1 and allow for #2. However, if I did not accept the law of noncontradiction and I did not accept the law of identity, then #1 has the potential to be "true" while #2 does not. It is also relative to the parent contexts: the shared dependencies (more fundamental concepts that the given contexts at hand depend on).

The law of non-contradiction simply means you have an irrational inductive belief, which is completely divorced from rationality


I would say that it means that the subject has accepted the axiom as "true" and, therefore, it will be a dependency for many future ideas (or beliefs) they will have (as they will build off of it). It isn't necessarily "true" in all contexts, we just share that more fundamental principle.

To add, the comparison is about finding the best induction to take within that context.


I think that that is one goal, but the comparing of all contexts is also desired. All knowledge stems from the same tree, therefore one can trace any given context back to a common node. I am just saying that the idea that you strictly cannot compare contexts eliminates potentiality. When I say something can potentially exist, or happen, it means that it does not violate any of my parental contexts (any underlying principles that would be required for the concept to align with my knowledge as it is now). As it stands, your epistemology eliminates this altogether: you either have a possibility or plausibility (probability encompasses the idea of a possibility) and you can't preliminarily determine whether one plausibility has the potential to occur or not.

no comparing the probability of improving the eye, the the options of plausibility vs irrationality with iron floating on water with butter


You would be comparing it one step deeper than that: iron floating on water has no potential to occur whereas improving the eye does (I would call this "possibility").


Can you clarify this? I interpreted this as follows.

I applicably know A and B.
I applicably know C, D, and E.
I applicably know that the numbers two and three are not synonymous.
Therefore A and B, and C,D, and E are synonymous.

I don't believe that's what you're trying to state, but I could not see what you were intending.


You are absolutely right: I was trying to keep it as fundamental as possible, but I see how that was confusing. I was merely pointing out essentially that the law of noncontradiction is an underlying principle (which is a part of the context) that can determine one context more cogent than another because they exist within a plane. I was just making up a contradictory example off the top of my head and I apologize--as it wasn't very good at all.

I still wasn't quite sure what you meant by parent contexts in these examples. I think what you mean is the broader context of "things" versus "round objects". Please correct me here. For my part, it depends on how we cut hairs so to speak. If the first person does not applicably know that things can roll down a hill as well, then neither statement is more cogent than the other. If the first person knows that "things" can also roll down hill, then there's no cogent reason why they would conclude the "thing" would fly off the hill over roll down the hill.


At its most fundamental level, I was trying to convey that the law of similarity could be another example of a parent context, where one may determine one of two completely even contexts (i.e. two chains that are each a possibility -> plausibility) more cogent than the other based off of an underlying principle that governs both contexts. If I have witnessed a "thing" fly and roll off of a hill, but the "things" that I have seen fly look less similar to the "thing" on the hill now and the "thing" looks more similar to the "things" that I have seen roll down a hill, then I might determine one context more cogent than the other based off of the fact that I accept the law of similarity as an underlying principle that engulfs both the contexts in question. It is a more fundamental examination than the hierarchical inductions.

What might help is to first come up with a comparison of cogency for a person within a particular context first. Including two people complicates comparing inductions greatly, but generally follows the same rules as a person comparing several inductive options they are considering within their own context.


I agree. I think that, within an individual context, the subject will compare their knowledge based off of a tree-like structure (or plane-like structure where principles engulf other principles) and decide their credence levels based off of that (which includes your hierarchical induction chains). I think that multiple subjects do essentially the same thing, but they will accept that their own experiences are more cogent than others' (because it is more immediate to them as the subject) and, therefore, that is the most vital factor.

Look forward to your response.
Bob
Philosophim January 12, 2022 at 23:09 #642088
Reply to Bob Ross Thanks for the well wishes Bob. I almost feel like my normal self again today.

Quoting Bob Ross
This is all and well, but I think you defined "plausibility" (in your previous post) as exactly what you just defined as an "applicable plausibility"--and that was all I have trying to point out. You defined "plausibility" as "the belief that distinctive knowledge that has never been applicably known, can be applicably known". A "plausibility", under your terms (I would say), is not restricted to what "can be applicably known" (that is a subcategory called "applicable plausibilities"), whereas "plausibility" is a much more generic term than that (as far as I understand your terms).


In both cases, the person believes that the plausibility can be applicably known. The difference between an applicable, and inapplicable plausibility, is whether it is designed so that it can be applied to reality. You can craft a belief about reality that can never be actually applied to reality. It's plausible, but inapplicable. It doesn't mean that the plausibility isn't true either. All of these labels are for inductions, which, by nature, may or may not be true. The goal is to find which inductions are most rational to hold. An inapplicable plausibility is pretty low on the hierarchy, as it is a claim to what is real when you can never actually apply it to reality.

Quoting Bob Ross
I agree in that two contexts can be dissimilar and still have commonalities, but those commonalities are more fundamental aspects to those contexts and, therefore, although they are dissimilar, they are not separate. Even the most distinct contexts share some sort of dependency (or dependencies). An induction (within a context) that contradicts a parent context is less cogent than an induction (within a different context) that doesn't.


If you have two identical underlying building blocks between two compounded inductions, then you can compare those. But if you add anything else on top to make them different, they are no longer fair comparisons.

For example, I hold the law of non-contradiction as true. From this I believe it is plausible that the moon is made out of green cheese. Separately from this, I believe it is plausible that the sun is really run by a giant lightbulb at its core. The shared basis of the law of non-contradiction between them has no bearing on the evaluation of comparing the plausibilities.

That being said, you can compare the belief in the law of non-contradiction, versus the belief of its denial. If you hold the law of non-contradiction as applied knowledge, or an induction that you believe in, you can evaluate an induction's chain, and reject any inductions that rely on the law of non-contradiction being false within their chains.

I "think" this is what you are going for. If so, yes, you can determine which inductions are more cogent by looking in their links, and rejecting links that you do not know, or believe in. But this is much clearer if you are trying to decide whether the moon is plausibly made out of green cheese, or something else, than trying to compare the moon and the sun. Does that make sense?

Quoting Bob Ross
When I say something can potentially exists, or happen, it means that it does not violate any of my parental contexts (any underlying principles that would be required for the concept to align with my knowledge as it is now). Hitherto, your epistemology eliminates this altogether: you either have a possibility or plausibility (probability encompasses the idea of a possibility) and you can't preliminarily determine whether one plausibility has the potential to occur or not.


You can't preliminarily determine whether one plausibility has the potential to occur or not, because it is an induction. And an induction is when we conclude a result that does not necessarily stem from the premises. Any prediction about the future for example, can always be wrong. Hypotheses, even the most educated ones, about what will happen in a science experiment can also be wrong. Holding to a cogent induction does not guarantee it will actually happen either. Cogency is simply deciding which induction is more reasonable to hold. The nature of holding an induction is always a gamble, no matter how much you might rationalize prior to holding one.

That isn't limited to the epistemology proposed here either. At least this epistemology has a way of rationally measuring inductions. Prior to this, I don't believe there is any epistemology that can claim which inductions are more rational to hold. So if I believe the bird in front of me can fly, because I have applicably known things with wings can fly, it's more cogent than stating that the bird in front of me could plausibly levitate off the ground with psychic powers. That being said, if I'm looking at a penguin, my induction will be wrong once applied. With inductions, nothing is certain.

Quoting Bob Ross
If I have witnessed a "thing" fly and roll off of a hill, but the "things" that I have seen fly look less similar to the "thing" on the hill now and the "thing" looks more similar to the "things" that I have seen roll down a hill, then I might determine one context more cogent than the other based off of the fact that I accept the law of similarity as an underlying principle that engulfs both the contexts in question.


We can break the chain down as follows.
Thing X which has Y traits I have seen fly off of hills.
Thing A which has B traits I have seen roll off of hills before.

(These are both based on our contexts of what we have applicably known)
It is possible that a thing with B traits can roll off of hills.
It is possible that a thing with Y traits can fly off of hills.
I have never seen a thing with B traits fly, and I have never seen a thing with Y traits that can roll.
It is plausible that a thing with B traits can fly, and plausible that a thing with Y traits can roll.

Since it is possible that a thing with B traits can roll, but only plausible that a thing with B traits can fly, it is more cogent to assume the thing with B traits will likely roll.
Apply the same reasoning to Y.

I don't think there is a law of similarity, but there is a chain of probabilities and possibilities within this context. And within this context, we can conclude certain beliefs would be more cogent. Does that mean the thing with B traits will roll and the thing with Y traits will fly? No. We can only applicably know the answer by applying our induction to reality without contradiction.
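As a rough sketch of the comparison being made here, the hierarchy discussed in this thread (probability over possibility, possibility over plausibility, irrational inductions last) can be encoded as a simple ranking. This is my own illustrative encoding, not anything from the papers; the rank numbers and names are made up for the example.

```python
# Illustrative encoding of the hierarchy of inductions discussed in
# this thread: lower rank = more cogent (more rational to hold).
COGENCY_RANK = {
    "probability": 0,
    "possibility": 1,
    "applicable plausibility": 2,
    "inapplicable plausibility": 3,
    "irrational": 4,
}

def more_cogent(induction_a, induction_b):
    # Return whichever induction type is more rational to hold.
    return min(induction_a, induction_b, key=COGENCY_RANK.get)

# The thing with B traits: rolling is a possibility (seen before),
# flying is only an applicable plausibility (never seen, but testable).
print(more_cogent("possibility", "applicable plausibility"))  # prints "possibility"
```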

I hope that clears up the process a bit more! Let me know what you think.

Bob Ross January 13, 2022 at 19:43 #642509
Hello @Philosophim,
I am glad that you are feeling well!

In both cases, the person believes that the plausibility can be applicably known.


I don't think this is necessarily true. It depends on what you mean by "applicably known": lots of people believe in things that they claim cannot be "applicably known". For example, there are ample amounts of people that believe in an omnipotent, omniscient, etc (I call it the "omni" for short) God and actively claim that these traits they believe in are necessarily outside of the scope of what we can "applicably" know. Another, non-religious, example is a priori knowledge: most people that claim there is a priori knowledge also actively accept that you necessarily cannot applicably (directly) know the components of it. In its most generic form, they would claim that there is something that is required for experience to happen in the first place, for differentiation to occur, but you definitely will never be able to directly "applicably" know that. I guess you could say that they are indirectly "applying" it to reality without contradiction, which I would be fine with.


For example, I hold the law of non-contradiction as true. From this I believe it is plausible that the moon is made out of green cheese. Separately from this, I believe it is plausible that the sun is really run by a giant lightbulb at its core. The basis of the law of contradiction between them has no bearing on the evaluation of comparing the plausibilities.


I think that, because the law of noncontradiction is one of the (if not the) most fundamental axioms there is, it is easy to consider it irrelevant to the comparison of two different plausibilities; nevertheless, I think that it plays a huge, more fundamental, factor in the consideration of them. For example, if my knowledge of physics (or any other relevant subject matter) makes it "impossible" (aka has no potential to occur) for green cheese to be able to make up a moon, then, before I have even started thinking about hierarchical inductions, I have exhausted the idea to its full capacity (which, in this case, isn't much). Furthermore, if I have knowledge that both examples (the giant lightbulb and the green cheese moon) are "impossible" (have no potential to occur), then they are equally as useless as each other, but, more importantly, notice that I still compared them to a certain degree. Now, I could hold that the law of noncontradiction isn't as black and white as I presume we both think: maybe I have a warped understanding of superpositioning, for example. Maybe I believe, prior to even having the ideas of the green cheese or light bulb sun, that A can be and not be at the same time as long as there is no observant entity forcing an outcome. Now, this is not at all how superpositioning works (I would say), but someone could, nevertheless, hold this position. Moreover, with the stipulation that there are no observers, even if I have solid evidence that green cheese can't make up a moon, the moon could be made of green cheese and green cheese can't "possibly" make up a moon at the same time. This refurbished understanding of the law of noncontradiction poses whole new problems, but notice that they wouldn't be objectively wrong: only wrong in the sense that we don't share the same fundamental context (i.e. the same understanding of the law of noncontradiction).

That being said, you can compare the belief in the law of non-contradiction, versus the belief of its denial. If you hold the law of non-contradiction as applied knowledge, or an induction that you believe in, you can evaluate an inductions chain, and reject any inductions that relay on the law of non-contradiction being false within its chain.


This is, essentially, what I am trying to convey. That would be a consideration prior to hierarchical inductions and would provide an underlying basis to compare two different plausibilities. I think we do this with a lot more than just the law of noncontradiction.

I "think" this is what you are going for. If so, yes, you can determine which inductions are more cogent by looking in their links, and rejecting links that you do not know, or believe in. But this is much clearer if you are trying to decide whether the moon is plausibly made out of green cheese, or something else, than trying to compare the moon and the sun. Does that make sense?


Correct me if I am wrong, but I think that you are trying to convey that, once all the underlying beliefs are evaluated and coincide with the given belief in question, you can't compare two different contexts' hierarchical induction chains. I don't think this is necessarily the case either, but I want to focus on the more fundamental disputes first before segueing into that.

My main point is that potentiality is completely removed when "possibility" is refurbished in your epistemology. The problem is that there are no distinctions between applicable plausibilities. For example, imagine I have 2,000 5 ft bricks. Now, imagine two claims: (1) "you can fit 200 of these bricks in a 10 x 10 x 10 room" and (2) "you can fit 2,000 of these bricks into a 10 x 10 x 10 room". Let's say that I've never experienced filling a room with 5 ft bricks. I think, according to your definitions, both claims would not be possibilities but, rather, applicable plausibilities because I haven't ever experienced either (and "possibility" is something that has been experienced before). However, I don't need to attempt to apply both directly to reality to figure out which one has the potential to occur. Even though they are plausibilities, #1 has the potential to occur (meaning that, although I could be wrong since it is an induction, all my knowledge aligns with this having the potential to occur) while #2 does not (because, assuming my math is sound, 1,000 cubic ft / 5 cubic ft only allows for 200 5 ft bricks). So, even if I haven't directly attempted to fill a room with 200 nor 2,000 5 ft bricks, I can soundly believe that one claim is more cogent than the other because one aligns with my current knowledge while the other does not. If we were to put them both as plausibilities, then I would say one is "highly plausible" while the other is "highly implausible" to make a meaningful distinction between the two.
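The brick arithmetic above can be checked with a quick sketch (my own illustration, on the assumption that each "5 ft brick" occupies 5 cubic feet and packing losses are ignored; all names are made up for the example):

```python
# A 10 x 10 x 10 room, in cubic feet.
room_volume = 10 * 10 * 10   # 1000 cubic ft

# Assumption: each "5 ft brick" occupies 5 cubic feet.
brick_volume = 5

# Upper bound on how many bricks could ever fit, ignoring packing.
capacity = room_volume // brick_volume   # 200

def has_potential(claimed_bricks):
    # A claim "can potentially occur" only if it does not contradict
    # what is already known -- here, the room's capacity.
    return claimed_bricks <= capacity

print(has_potential(200))    # claim 1: True (has the potential to occur)
print(has_potential(2000))   # claim 2: False (contradicts the math)
```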

Another fundamental problem is what constitutes experiencing something before? How exact of a match does it have to be? If it is an exact match, then we hold very few possibilities and a vast majority of our knowledge is ambiguously labeled as "plausibilities". For example, I have internal monologue. I think that it is "possible" (in accordance with my use of the terms) that other people have internal monologues too; however, I have never experienced someone else having an internal monologue, therefore it isn't a "possibility" in accordance with your terms. I think the obvious counter argument would be that I have experienced my own internal monologue, therefore it is "possible". But my experience of my own internal monologue is not an experience that is an exact match to the claim in question ("other people have internal monologue"). Someone could walk up to me and rightly claim that my own experience of my own internal monologue is in no way associated with the experience of someone else having an internal monologue, therefore I don't know if it is possible (according to your terms): and they would be correct. However, I would still hold that other people have the potential to have internal monologue because they have the necessary faculty (very similar to mine) for it to occur. As of now, I am not saying that they definitely have internal monologue, or that they can, just that they have the potential to. To take this a step further, the belief that other people can have internal monologue is an "inapplicable plausibility" (I can't demonstrate anyone's but my own). However, although I can't claim that another person can have internal monologue, I would not tell someone else who claims to have internal monologue that that is "impossible" (according to your terms), even though I haven't experienced someone else having internal monologue, because I have a more fundamental parent context that I abide by: empathy.
If I were in their shoes, and I actually did have internal monologue (regardless of whether, in my current state, I can actually claim it is "possible"), I would want the other person to give me the benefit of the doubt out of respect. So my empathetic parental context would overrule, so to speak, my factual consideration and, therefrom, I would walk around claiming that it is "possible" for someone else to have internal monologue although technically I can't claim that within your definitions. So basically: I can claim that they have the potential to have internal monologue and, although I can't claim they can have internal monologue, I will claim they can regardless.

This brings up a more fundamental issue (I think): the colloquial term "possibility" is utterly ambiguous. When someone says "it is possible", they may be claiming that "it can occur" or that "it can potentially occur", which aren't necessarily synonymous. To say something "can occur", as you rightly point out, is only truly known if the individual has experienced it before, however to say something "can potentially occur" simply points out that the claim doesn't violate any underlying principles and beliefs. I think this is a meaningful distinction. If I claim that it is "possible" (in my terms) for a rock to fall if someone drops it from a mountain top, it depends on if I have directly experienced it or not whether I am implicitly claiming that it "can occur" (because I've experienced it) or that it "can potentially occur" (because, even though I haven't experienced it before, my experiences, which are not direct nor exact matches of the given claim, align with the idea that it could occur). I think this can get a bit confusing as "can" and "can potentially" could mean the same thing definition-wise, but I can't think of a better term yet: it's the underlying meaningful distinction here that I want to retain.

Also, as a side note, I like your response to the object rolling off hills example, however this is getting entirely too long, so I will refrain from elaborating further.

Look forward to hearing from you,
Bob
Philosophim January 15, 2022 at 17:02 #643448
Quoting Bob Ross
I don't think this is necessarily true. It depends on what you mean by "applicably known": lots of people believe in things that they claim cannot be "applicably known". For example, there are ample amounts of people that believe in an omnipotent, omniscient, etc (I call it the "omni" for short) God and actively claim that these traits they believe in are necessarily outside of the scope of what we can "applicably" know.


Then what they are describing is an inapplicable plausibility. It is when you believe that something exists, but have constructed it in such a way that it cannot be applicably tested. I can see though that my language is not clear, so I understand where you're coming from. Applicable knowledge is when you apply a belief to reality that is not contradicted. All inductions are a belief in something that exists in reality. The type of induction is measured by its ability to be applied and applicably known.

So people believe that God exists in reality, like all inductions. The type of induction is an inapplicable plausibility, because the essential properties of God are things that cannot be applied to reality. There is no way to discover if a God outside of space and time exists, because we cannot go outside of space and time.

Quoting Bob Ross
Another, non-religious, example is a priori knowledge: most people that claim there is a priori knowledge also actively accept that you necessarily cannot applicably (directly) know the components of it. At its most generic form, they would claim that there is something that is required for experience to happen in the first place, for differentiation to occur, but you definitely will never be able to directly "applicably" know that. I guess you could say that they are indirectly "applying" it to reality without contradiction, which I would be fine with.


I think this is largely ok. Maybe a more specific example would help me to determine if you have the right of it. As I noted earlier, a priori knowledge doesn't really exist under this theory. There is distinctive knowledge, and there is applicable knowledge. You cannot have applicable knowledge, without first applying distinctive knowledge. You can create whatever distinctive knowledge you want, but it is not applicable knowledge until it is tested against reality.

Quoting Bob Ross
I think that, because the law of noncontradiction is one of the (if not the) fundamental axiom there is, it is easy to consider it irrelevant to the comparison of two different plausibilities; however, nevertheless, I think that it plays a huge, more fundamental, factor in the consideration of them. For example, if my knowledge of physics (or any other relevant subject matter) makes it "impossible" (aka has no potential to occur) for green cheese to be able to make up a moon, then, before I have even started thinking about hierarchical inductions, I have exhausted the idea to its full capacity


Even though you did not actively think about hierarchical induction, you practiced it implicitly. You noted that on the chain of reasoning, the law of non-contradiction proves that the moon is not made of green cheese. Therefore, you have no need to continue that chain of reasoning. No one has ever applicably known a situation in which something was both itself, and its negation. Further, its definition makes a contradiction impossible. If you define something as one way, then define it as its negation, you have created a situation that can never be applied to reality.

That is because it is impossible even as distinctive knowledge. Recall that distinctive knowledge is what is held within a particular context that is not contradictory. I cannot claim that "A" is not "A" when I mean A and not A within the same context of equality. Something provably impossible ends any further thinking along the lines of it being possible.

Quoting Bob Ross
Moreover, with the stipulation that there are no observers, even if I have solid evidence that green cheese can't make up a planet, the planet could be made of green cheese and green cheese can't "possibly" makeup a planet at the same time.


If we cannot observe it, we cannot apply this to reality. Therefore it is an inapplicable plausibility. It is something we can consider, but it will fail in an inductive hierarchy test against something possible, probable or even applicably plausible.

Quoting Bob Ross
That being said, you can compare the belief in the law of non-contradiction, versus the belief of its denial. If you hold the law of non-contradiction as applied knowledge, or an induction that you believe in, you can evaluate an induction's chain, and reject any inductions that rely on the law of non-contradiction being false within its chain.

This is, essentially, what I am trying to convey. That would be a consideration prior to hierarchical inductions and would provide an underlying basis to compare two different plausibilities.


Again, you are doing the practice of hierarchical induction here, whether you are aware of it or not. I don't think it's a consideration prior, but a consideration of it.

Quoting Bob Ross
Correct me if I am wrong, but I think that you are trying to convey that, once all the underlying beliefs are evaluated and coincide with the given belief in question, you can't compare two different contexts' hierarchical induction chains.


This is correct.

Quoting Bob Ross
I can soundly believe that one claim is more cogent than the other because one aligns with my current knowledge while the other does not. If we were to put them both as plausibilities


This is essentially what the hierarchy does. In one case of your induction, you founded it upon applicable knowledge. In another, you did not.

Chain one: Applicable knowledge => plausibility
Chain two: Possibility => plausibility.

It is more cogent to believe in the first plausibility than the second. We can do a little math to prove it.

Let's say that applicable knowledge counts as 100% being an accurate assessment of reality without contradiction. An induction is less than 100 percent. When you have a chain of beliefs, you can multiply the percentage chance of the beliefs together. For example, getting one particular result from a roll of two six-sided dice is 1/6 × 1/6, or a 1/36 chance (individual values for each die, so a five on die one is different from a five on die two).

Every induction is either 1, not contradicted by reality, or 0, contradicted by reality. We do not applicably know whether it is a 1 or a 0, so we will make it a binary variable with 1 as true and 0 as false.

So the first chain is:
1 * X
The second chain is:
X * Y

The first chain's chance of being correct, using probability, is 50%. The second chain's chance is .5 × .5, or a 25% chance of being correct.

A probability and a possibility are more cogent, because they are really a chain based off of applicable knowledge. There is only one binary uncertainty: will what was applicably known be applicably known again?

Possibility chain:
Applicable knowledge => induction that it will still be applicably known

Plausibility chain:

Take something possible (For example, the moon will still exist when I look for it) => create induction (It is made out of green cheese) => Can be applied v Can't be applied

A possibility is essentially always 1 × X of it still being applicably known.
A plausibility is essentially always predicting another induction off of what is possible, or X × Y.

If I continue and say, The moon is made of green cheese, and this green cheese has green bacteria, then my induction of green bacteria can be seen as:

X × Y × Z, or a 12.5% chance of not being contradicted by reality.
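The chain arithmetic above can be sketched in a few lines of Python. The 1.0 and 0.5 weights are the post's own assumptions (applicable knowledge versus an unresolved binary induction), not measured probabilities:

```python
from functools import reduce

# Weights assumed in the post: applicable knowledge counts as 1.0,
# and each unresolved induction as a 0.5 binary uncertainty.
KNOWN = 1.0
INDUCTION = 0.5

def chain_cogency(links):
    """Multiply the weights of every link in an inductive chain."""
    return reduce(lambda acc, w: acc * w, links, 1.0)

possibility = chain_cogency([KNOWN, INDUCTION])                    # 1 * X
plausibility = chain_cogency([INDUCTION, INDUCTION])               # X * Y
green_bacteria = chain_cogency([INDUCTION, INDUCTION, INDUCTION])  # X * Y * Z

print(possibility, plausibility, green_bacteria)  # 0.5 0.25 0.125
```

Longer chains simply multiply in more inductions, which is why each added speculative step halves the chain's cogency under this assumption.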

Quoting Bob Ross
For example, I have internal monologue. I think that it is "possible" (in accordance with my use of the terms) that other people have internal monologues too; however, I have never experienced someone else having an internal monologue, therefore it isn't a "possibility" in accordance with your terms.


Correct, depending on the context. You do not know if people have internal monologues in their head like yourself. Fun fact, there are people who cannot visualize inside of their head. They literally cannot imagine a vision of anything when they close their eyes. So what do we do here? Do we fall into solipsism? No, we simply adjust the context of what it means to have internal monologues between two different people.

First, we can determine a conclusion or experience that could only happen if one had an internal monologue. For example, I could ask a person, "Can you invent a story of two people talking to each other in your head?" A person who can internally monologue, can create a conversation of two people talking in their mind. A person who could not understand the question, or was unable to fulfil the request (with the possibility that they were telling the truth depending on how deep we want to go) would not be able to have an inner monologue in their head. If however, they could fulfil the request, then they must be able to have an inner monologue in their head.

Do we know what that inner monologue sounds or looks like in their head? No. We likely never will. This is the "hard problem" of consciousness. We can determine a bat can think, but we can never have the experience of thinking like a bat.

Finally, also recall that cogency is the highest level of induction we can make. Imagining what it is like to have the experience of being a bat is an inapplicable plausibility, and there is no real alternative. There is no confirmation or denial of applicable knowledge, no probability or even possibility beyond the idea that it is possible brains can have consciousness. Perhaps as we improve the science of the mind, this will change, but for now, this is what we have.

Quoting Bob Ross
This brings up a more fundamental issue (I think): the colloquial term "possibility" is utterly ambiguous. When someone says "it is possible", they may be claiming that "it can occur" or that "it can potentially occur", which aren't necessarily synonymous.


Agreed. Colloquially, the term possibility is a bad term, because we have not had a viable means of assessing knowledge. This colloquial term of "possibility" causes confusion, and ambiguous arguments that those without this method of knowledge are not equipped to handle.

Quoting Bob Ross
To say something "can occur", as you rightly point out, is only truly known if the individual has experienced it before, however to say something "can potentially occur" simply points out that the claim doesn't violate any underlying principles and beliefs. I think this is a meaningful distinction. If I claim that it is "possible" (in my terms) for a rock to fall if someone drops from a mountain top, it depends on if I have directly experienced it or not whether I am implicitly claiming that it "can occur" (because I've experienced it) or that it "can potentially occur" (because, even though I haven't experienced it before, my experiences, which are not direct nor exact matches of the given claim, align with the idea that it could occur). I think this can get a bit confusing as "can" and "can potentially" could mean the same thing definitions wise, but I can't think of a better term yet: it's the underlying meaningful distinction here that I want to retain.


I think you've nailed it. This is really the separation between what is possible, and what is plausible. It takes time to wrap your head around it. Perhaps an applicable plausibility is better described as "an inductive claim of potentiality". That seems to clash with "probability" though, and honestly, all inductions could be argued as "potential". So I'm not sure the generic term of potential works well anymore either. But the underlying meaningful distinction you are describing, is the difference between what is possible, and what is plausible. It is a distinction we have not had in epistemology until now, and I believe the introduction of this distinction is a real key in unlocking some of the problems epistemology has had over the years.

Quoting Bob Ross
Also, as a side note, I like your response to the object rolling off hills example, however this is getting entirely too long, so I will refrain from elaborating further.


Not a worry. I was sick, and having difficulty finding the time and effort to cover larger posts. I am feeling much better now, and more energized! My apologies if I was not able to drill down or cover ideas as much as I normally would. Please continue to drill into every nook and cranny.


Philosophim January 15, 2022 at 17:06 #643450
Quoting Agent Smith
How do you prove Socrates' (paradoxical) statement?


Hello Agent Smith. I appreciate your contribution, I just had not gotten around to it yet. For this forum post, I would be glad to answer your question, but you need to understand the knowledge theory first. Have you read the papers? I could give you an answer, but if you haven't read the papers yet, you will not understand it. If you are against reading the papers at first, feel free to start with Bob's posts. We cover a lot of questions and answers, and it may help you. Thanks!
Bob Ross January 16, 2022 at 01:43 #643651
Hello @Philosophim,


Then what they are describing is an inapplicable plausibility. It is when you believe that something that exists, but have constructed it in such a way that it cannot be applicably tested. I can see though that my language is not clear, so I understand where you're coming from. Applicable knowledge is when you apply a belief to reality that is not contradicted. All inductions are a belief in something that exists in reality. The type of induction is measured by its ability to be applicably applied or known.


I agree with you here, but my point was that it is an inapplicable plausibility (which means we are on the same page now I think). A couple posts back, you were defining "plausibility" as "the belief that distinctive knowledge that has never been applicably known, can be applicably known", which I am saying is an "applicable plausibility", not "plausibility". I am now a bit confused, because your response to that was "In both cases, the person believes that the plausibility can be applicably known", which is why I stated people can have plausibilities that they don't think can be applicably known. You are now saying, as far as I am understanding it, that if they think it can't be applicably known, then it is an "inapplicable plausibility" (I agree with that, but notice that doesn't align with your previous definition of "plausibility", as it was defined as "can be applicably known"--unless you think that "inapplicable plausibilities" are not a subcategory of "plausibility", I don't see how this isn't a contradiction).

Upon further reflection, I think that if we define every "plausibility" that has no potential as an "irrational induction" (and, consequently, all plausibilities have potential), rather than an "applicable/inapplicable plausibility", then I have no objections here. So, using my brick example, the claim that one can fit 2,000 5 ft bricks in 1000 sq ft is an "irrational induction" (not a "plausibility") because 1000 / 5 = 200, which necessarily eliminates any potentiality. However, I still think that there are meaningful hierarchies between claims (between plausibilities, for example) that relate to sureness apart from cogency which can be evaluative tools.

Even though you did not actively think about hierarchical induction, you practiced it implicitly.


Fair enough. But I would say that the fundamental comparison with respect to the law of non-contradiction is a valid comparison across all hierarchical chains.

No one has ever applicably known a situation in which the something was both itself, and its negation.


This is true, but also notice that no one has ever applicably known a situation in which, in the absence of direct observation, something necessarily was not both itself and its own negation.

If you define something as one way, then define it as its negation, you have created a situation that can never be applied to reality.


Let's say we have these two claims:
1. Absent of direct observation, things abide by the law of noncontradiction.
2. Absent of direct observation, things do not abide by the law of noncontradiction.

Firstly, I could apply both of these indirectly to reality without any contradiction because, using the law of noncontradiction, I can create situations where the law of noncontradiction doesn't necessarily have to occur (mainly absent of sentient beings). Don't get me wrong, I agree with you in the sense that both are inapplicable plausibilities, but that is with respect to direct application. I may decide, upon assessing the state of a currently unobserved thing, that the outcome should be calculated as if they are superpositioned (this is how a lot of the quantum realm is generally understood). This can be indirectly applied to reality without any contradiction. Or, on the contrary, I could decide the outcome should be calculated as if they are either/or (this is generally how Newtonian physics is understood).

If we cannot observe it, we cannot apply this to reality.


We cannot directly apply it to reality, but we can produce meaningful calculations based off of superposed states which necessarily imply A being, only in the theoretical, in two contradictory states. Even if we could not produce meaningful calculations, it is equally as much of an "inapplicable plausibility" as claiming it does abide by the law of noncontradiction.


Again, you are doing the practice of hierarchical induction here, whether you are aware of it or not. I don't think it's a consideration prior, but a consideration of it.


I think that if these underlying principles, which engulf other contexts, are a consideration of it, contrary to prior to it, then you are agreeing with me that hierarchical chains, to some degree or another, can be compared. I was merely trying to distinguish between the underlying, engulfing, principles and the point at which the induction chains can no longer be fairly compared.

It is more cogent to believe in the first plausibility, then the second. We can do a little math to prove it.


I agree with you here, but now we are getting into another fundamental problem (I would say) with your terminology: if a "possibility" is what one has experienced once before, then virtually nothing is a possibility. And, to be more clear, I think it is a much bigger disparity than your epistemology tries to imply. It all depends, though, on what one defines as "experiencing before". I don't think we experience the exact same thing very often (potentially at all) and, consequently, when one states they have "experienced that before" what they really mean is they've "experienced something similar enough to their current experience for them to subjectively constitute it as a match". For example, under your terms, the claim that "that car, which I haven't experienced run, will run, when started, because I've experienced my car run before" is not a "possibility" but, rather, a "plausibility". I think that we would both agree that that is the case. However, this directly implies that I also can't claim that "this apple is edible, although I haven't taken a bite yet, because I've experienced eating an apple before" is a "possibility": not even in the case that I have good reason to claim that this apple resembles another apple I've eaten before. Similarly, I also can't claim that it is "possible" that my car will start because I've experienced it start before because, directly analogous to the apple example, my car is not the exact same, within the exact same context, as when I experienced it start (at least, the odds are that it most definitely is not). In more philosophical terms, the problem is that almost all experiences are of particulars, not universals. A previous experience of thing A cannot be constituted as a previous experience of B, ever, because A is a separate particular from B and, therefore, "possibilities" would be constrained to only that experience after it occurs and never before its occurrence.
However, I would say that a previous experience of thing A can be constituted as a previous experience of B if it qualifies, potentially with reference to objective evidence but necessarily contingent on subjective determination, as similar enough, within the context, to a previous context. Notice how "possibility" is no longer a zero sum game, a binary question, and, subsequently, becomes a matter of passing a subjectively determined threshold (which could be, in turn, based off of objective claims) by necessity: this is what I would call the spectrum of sureness. It isn't a question of whether something (1) has been experienced before or (2) it hasn't but, rather, a question of how sure are you of the similarity between what you just experienced and a past experience: does it constitute as similar enough. I think there is rigidity within your epistemology that mine lacks, as I see it more as an elastic continuum of sureness. I don't know if that makes any sense or not.

Correct, depending on the context. You do not know if people have internal monologues in their head like yourself.


Not depending on the context, but every context that contains such a claim. Asking someone if they have internal monologue, no matter how you end up achieving it, doesn't prove it is "possible" in the sense that you "have experienced it at least once before". "Hard consciousness", as you put it, is exactly what I am trying to convey here in conjunction with your "possibility" term: by definition, I can never claim it is "possible" for someone else to have internal monologue. Even if you knew that the person could not physically lie about it, you would never be able to claim it is "possible" because you have never experienced it yourself (even if you have experienced internal monologue, you haven't experienced it particularly within them).

We can determine a bat can think, but we can never have the experience of thinking like a bat.


We cannot, under your terms, claim that a "bat can think", only that it is a plausibility. Even if we scanned their brains and it turns out the necessary, similar to ours, faculty exists for thought, we would never be able to label it as a "possibility" because we have not experienced a bat thinking. This is, to a certain degree, what I was trying to convey previously: how incredibly narrow and limited "possibility" would be. It would essentially only pertain to universals (that which has an objective, or absolute--depending on how you define it--basis) or subjective universals (that which is true for all experience for a particular subject). An example of this would be numbers in terms of quantity: one abstract "thing" is the exact same experience as one abstract other "thing" because quantity is derived from the same subjective universal called differentiation. Differentiation, at its most fundamental level, is the same for all particulars for that subject--as they wouldn't even be particulars, but rather a particular, if this wasn't the case.

I look forward to hearing from you,
Bob
Agent Smith January 16, 2022 at 06:59 #643701
Philosophim January 16, 2022 at 20:18 #643901
Quoting Bob Ross
I agree with you here, but my point was that it is an inapplicable plausibility (which means we are on the same page now I think). A couple posts back, you were defining "plausibility" as "the belief that distinctive knowledge that has never been applicably known, can be applicably known", which I am saying that is an "applicable plausibility", not "plausibility". I am now a bit confused, because your response to that was "In both cases, the person believes that the plausibility can be applicably known", which that is why I stated people can have plausibilities that they don't think can be applicably known.


Fantastic point. I need to revise what an inapplicable plausibility is. What would be more accurate is the belief that something exists that cannot be applicably known. Would we call this faith? I'm hesitant to use that word, as it is loaded with a lot of other emotions. But I think you are right. An inapplicable plausibility is different enough from a plausibility to warrant a separate identity in the hierarchy. That would leave us with probability, possibility, plausibility, faith, and irrational inductions.

Quoting Bob Ross
Upon further reflection, I think that if we define every "plausibility" that has no potential as an "irrational induction"


This is correct. An irrational induction is a belief that something exists, despite applicable knowledge showing it does not exist.

Quoting Bob Ross
This is true, but also notice that no one has ever applicably known a situation in which, in the absence of direct observation, something necessarily was not both itself and its own negation.


As you are aware, this would be an induction then.

Quoting Bob Ross
Firstly, I could apply both of these indirectly to reality without any contradiction because, using the law of noncontradiction, I can create situations where the law of noncontradiction doesn't necessarily have to occur (mainly absent of sentient beings).


What does indirect application to reality mean? I only see that as an inductive belief about reality. This isn't an applicable knowledge claim, so there is no application to reality. If there are no sentient beings, then there is no possibility of applicable knowledge.

Quoting Bob Ross
Don't get me wrong, I agree with you in the sense that both are inapplicable plausibilities, but that is with respect to direct application.


Can you describe what an indirect application to reality would be?

Quoting Bob Ross
I may decide, upon assessing the state of a currently unobserved thing, to decide that the outcome should calculated as if they are superpositioned (this is how a lot of the quantum realm is generally understood). This can be indirectly applied to reality without any contradiction.


Superpositioning, to my understanding, is essentially probability. There are X number of possible states, but we won't know what state it will be until we measure it. The measurement affects the position itself, which is why measuring one way prevents us from measuring the other way. You won't applicably know the state until you apply that measurement, so the belief in any particular outcome prior to the measurement would be an induction.

Quoting Bob Ross
I agree with you here, but now we are getting into another fundamental problem (I would say) with your terminology: if a "possibility" is what one has experienced once before, then virtually nothing is a possibility.


Great! We might be nearing a limitation for where I've thought on this. Just as we can construct detailed contexts to the point we could hardly claim applicable knowledge on anything, we can do so with inductive cogency. For example, I could state that to know a particular car is mine, it needs to be identical to the atomic level. Once I've measured that, I could say, "The quantum level". Of course, electrons are moving around constantly, so from one moment to the next, I would say I had a brand new car.

The point of identity, the ability to discretely experience in a meaningful way, is to construct limitations of context that allow us to understand and interact with the world in an accurate and helpful way to us. This can be called, "rational". If I construct a context that is so detailed, it takes years to conclude even one discrete claim of knowledge, or the requirements are impossible to apply, what use is it?

I can identify a field of grass, a blade of grass, a piece of grass, ad infinitum. The point is to define it in such a way and context, as to be useful. The same goes with inductions. If I define a car as X, and know that an attribute of a car is that it starts, I can say it is possible that a car can start. If I define what a car is as needing 10 hours of poking, prodding, and dismantling to applicably know it, the distinctive knowledge is useless in my everyday application. If I define each car as a separate entity, and only insist I know it is possible for this car to start, but not possible for any other car to start, then I make it a plausibility.

Is that useful to me? Depends on my context, but for most contexts of everyday use, probably not. At that point I remove a hierarchy. So everything I have left over at that point is comparative plausibilities. Even though it's a car, I'm trapped in my inability to analyze plausibilities. Maybe the car doesn't turn on. Maybe it turns into a demon. Maybe the ignition is actually in a hidden panel underneath the floorboard. Without a possibility comparison, I'm rationally trapped in my inability to justify one plausibility as being more cogent than another.

The addition of the hierarchy of induction is not to state, "This is true." It's the introduction of distinctive definitions that have examples of being applied to reality without contradiction. To my mind, this distinction is useful. To another, perhaps it is not. Perhaps there are better words and phrases depending on your context that would be more useful to you. This is how all new claims work. New distinctive knowledge is introduced that can be applicably known. Do we amend our context to use it, or reject it? You cannot force an individual to accept or reject it. You must show them it is a tool that can be useful.

Quoting Bob Ross
I think there is rigidity within your epistemology that mine lacks, as I see it more as an elastic continuum of sureness. I don't know if that makes any sense or not.


No, this makes perfect sense, and I hope you see that I agree with you that distinctive knowledge is infinitely elastic. There are infinite possibilities of how to define the world. Infinite contexts. Infinite sounds, language, etc. The question is, can you construct something that is useful? That fits the needs of your context at the time? Can it be used between more than one person? There is no reason the word "sheep" has to mean anything. There is nothing in reality that necessitates it. It is just an agreement we hold, because the word "sheep" has a use to us that we can use in our own lives, and in communicating to others.

Quoting Bob Ross
"Hard consciousness", as you put it, is exactly what I am trying to convey here in conjunction with your "possibility" term: by definition, I can never claim it is "possible" for someone else to have internal monologue. Even if you knew that the person could not physically lie about it, you would never be able to claim it is "possible" because you have never experienced it yourself (even if you have experienced internal monologue, you haven't experienced it particularly within them).


Full agreement. I do not think there is anything wrong with applicably knowing the limits of what you can applicably know. I find it a strength of the theory.

Quoting Bob Ross
We cannot, under your terms, claim that a "bat can think", only that it is a plausibility. Even if we scanned their brains and it turns out the necessary, similar to ours, faculty exists for thought, we would never be able to label it as a "possibility" because we have not experience a bat thinking.


Again, this depends upon your context. I could state that thinking is not just brain activity, but the ability to react to stimuli in a way that does not kill the creature. So I could place a bad-smelling and rotten piece of fruit next to a fresh piece of fruit, and see what the bat does. If we state "thinking" is having the ability to reason at the level of an average human, then a bat will never be applicably known as thinking.

Again, fantastic assessment. I think you understand the theory pretty well now. The question to you is, is it useful for you? Is it logically consistent? Can it solve problems that other theories of knowledge cannot? And is it contradicted by reality, or is it internally consistent? Thanks again, I look forward to hearing from you.



Bob Ross January 19, 2022 at 04:50 #645005
Hello @Philosophim,

An inapplicable plausibility is different enough from a plausibility to warrant a separate identity in the hierarchy.


It is completely up to you, but I think that inapplicable plausibilities should be a plausibility; it is just that, in order to avoid contradictions, "plausibility" shouldn't be defined as what can be applicably known, just what one believes is "true" (or something like that). What can be applicably known would, therefore, be a subcategory of plausibility, namely "applicable plausibilities", and what cannot be applicably known would be another subcategory, namely an "inapplicable plausibility". On a separate note, the potentiality of a belief would be differentiated between irrational inductions and all other forms (as in it is irrational if it has no potential). And it is not necessarily always the case that a belief that cannot be applied to reality has no potential to occur and, thus, there is a meaningful distinction between irrational inductions and inapplicable plausibilities (as in the latter is guaranteed to have potential, but cannot be applied). I just think that a contradiction arises if you define "plausibility" as always applicable (can be applied). You could, on the flip side, decide that the belief in what cannot be applied to reality is irrational and, consequently, that would make it an irrational induction.

This is correct. An irrational induction is a belief that something exists, despite applicable knowledge showing it does not exist.


Fair enough.


What does indirect application to reality mean? I only see that as an inductive belief about reality. This isn't an applicable knowledge claim, so there is no application to reality. If there are no sentient beings, then there is no possibility of applicable knowledge.


What I meant by "indirect" and "direct" seems to be, in hindsight, simply an inductive belief about reality (you are right). But my point I was trying to convey is that we produce meaningful probabilistic models based off of the idea that something is in multiple states at once, which doesn't really abide by the law of noncontradiction in a traditional sense at least.


Superpositioning, to my understanding, is essentially probability. There are X number of possible states, but we won't know what state it will be until we measure it. The measurement affects the position itself, which is why measuring one way prevents us from measuring the other way. You won't applicably know the state until you apply that measurement, so the belief in any particular outcome prior to the measurement would be an induction.


I agree. I was merely conveying that, to build off of what you said here, we don't assume the law of noncontradiction in terms of some quantum "properties" (so to speak), but the contrary. For example, a 6-sided die is considered to have 6 states. Even when the subject isn't observing the die, they will assume the law of noncontradiction: it is in one of the 6 states. Whereas, on the contrary, electrons can have two spin states: up or down. However, unlike the previous 6-sided die example, the subject, if they are quantum inclined (:, will assume the electron is equally likely in both positions (thus, not assuming the law of noncontradiction in the same sense as before).
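As a toy numerical contrast (entirely my own illustration, not part of either example above), the die can be modeled as occupying one definite but unknown state, while the electron's spin can be modeled as a distribution over both states at once until it is measured:

```python
import random

random.seed(0)

# Classical die: assumed to be in exactly one definite (if unknown) state.
die_state = random.randint(1, 6)   # fixed at "roll time", before anyone looks
assert die_state in range(1, 7)    # the law of noncontradiction holds here

# Electron spin (toy model): equal amplitudes over BOTH states until a
# measurement "collapses" it to a single outcome (the Born rule squares
# the amplitudes to get probabilities).
amplitudes = {"up": 2 ** -0.5, "down": 2 ** -0.5}
probabilities = {state: amp ** 2 for state, amp in amplitudes.items()}

def measure():
    """Collapse the toy superposition into one definite outcome."""
    return random.choices(list(probabilities),
                          weights=list(probabilities.values()))[0]

outcome = measure()
assert outcome in ("up", "down")
```

This is only a sketch of the distinction Bob is drawing, not real quantum mechanics: before measurement, the die model carries a hidden definite value, while the spin model carries only the two weighted states.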

Great! We might be nearing a limitation for where I've thought on this.


I think that, to supplement what you stated, possibility really isn't defined as clear as it should be. Instead of what "has been experienced before", it should be what "is similar enough to what one has experienced before". This is what I mean by rigidity (although I understand you agree with me on it being elastic): "possibility", as defined as what has been experienced before, implies (to me) that you have to experience it once before in a literal, rigid, sense. On a deeper level, I think it implies that experiences tend to be more like universals and less like particulars. For me, defining it in the previously mentioned refurbished way implies that subjective threshold.

Further, I think that the terminology is still potentially somewhat problematic. Firstly, your essays claim that probabilities are the most cogent, yet they actively depend on possibilities. There is no validity in probabilities, or honestly math in its entirety, if we weren't extrapolating it from possibilities (numbers in actuality, in reality). To say that the probability of 1/52 is more cogent than a possibility seems wrong to me, as I am extrapolating that from the possibility of there being 52 cards. Maybe it is just a difference between cogency and sureness, but I am more sure that 52 cards are possible than any probability I can induce therefrom. Secondly, it seems a bit wrong to me to grant probabilities their own category when there can be plausible probability claims and possible probability claims. For example, it becomes even more clear (to me) that I am more sure of the possibility of 52 cards when I consider it against specifically 1/N probability where N is a quantity that I haven't experienced in actuality (in reality). 1/N would be a probability that is really just a "plausible probability", contrary to a "possible probability" which would be a quantity, such as 52, that I can claim is possible. One is, to me, clearly a stronger claim than the other. Furthermore, probabilities are really just a specific flavor of mathematical inductions, so it seems odd that they have their own category while mathematical inductions aren't even a term. For example, if I have a function F(N) = N + 1, this is a mathematical induction but not a probability. So, is it a plausibility? Is it a possibility? Depends on whether N is something experienced before or not (or how loosely we are defining similar enough). Probabilities are considered the most cogent, but is 1/N probability, where N is an unexperienced number in actuality, really more cogent than F(N), where N is a number experienced in actuality? I think not. 
On the flip side, is F(N), where N is an unexperienced number in actuality, more cogent than a 1/N probability where N is a number experienced in actuality? I think not. What if F(N) and 1/N involve a number, N, that has not been experienced in actuality before? Are they equally cogent? F(N) would be a plausibility, and I would say probability too, but probability would be considered more cogent simply because it has its own term (at least that is how I am understanding it). In reality, I think mathematical inductions (which include probability) are subject to the same, more fundamental, categories: possibility, plausibility (and its subtypes), and irrational induction. Also, F(N) and 1/N where N is unexperienced and so large it can't be applied to reality would both be inapplicable plausibilities. Therefore, I think we are obligated to hold the position that an inapplicable plausibility mathematical induction (such as F(N) where N is inapplicable) is less cogent than an applicable plausibility (such as: I can apply the existence of this keyboard without contradiction to reality) because, fundamentally, mathematics abides by the same rules. However, an applicable plausibility mathematical induction (such as F(N) where N is applicable) would be more cogent than probably every other non-mathematical plausibility I can come up with, because the immediateness of numbers, and their repetition, surpasses pretty much all others.

Thirdly, it also depends on how you define "apply to reality" whether that holds true. Consider the belief that you have thoughts: is your confirmation of that ever applied to "reality"? It seemed as though, to me, that your essays were implying sensations outside of the body, strictly, which would exclude thoughts. However, the claim that you even have thoughts is a belief and, therefore, must be subject to the same review process. It seems as though your thoughts are the initial beliefs being applied to "reality", which seems to separate the two concepts; Do you applicably know that you think? I don't apply my thoughts to reality in the sense that I would about whether a ball will roll down a hill: my thoughts validate my thoughts. If my thoughts validate my thoughts, then we may have an example of one of the most knowable beliefs for the subject that is technically inapplicable. However, if we define the thinking process as an experience, then we can say it is possible because we have experienced it before. However, most importantly, that directly, and necessarily, implies that you are not thought but, rather, you experience thought. On the contrary, if you don't experience thought, and subsequently the separation between the two isn't established, then you cannot claim that your own thoughts are possible, since you are incapable of experiencing them.

The question to you is, is it useful for you? Is it logically consistent? Can it solve problems that other theories of knowledge cannot? And is it contradicted by reality, or is it internally consistent?


I think that it is an absolutely brilliant assessment! Well done! However, I think, although we have similar views, that there's still a bit to hash out.

I look forward to hearing from you,
Bob
Philosophim January 19, 2022 at 13:55 #645154
Quoting Bob Ross
It is completely up to you, but I think that inapplicable plausibilities should be a plausibility; It is just that, in order to avoid contradictions, "plausibility" shouldn't be defined as what can be applicably known, just what one believes is "true"


I agree with this! I got so caught up in my own verbiage of separating the inductions by the ability to apply applicable knowledge that I forgot one does not need to believe something can be applicably known in order to believe it is real.

Quoting Bob Ross
On a separate note, the potentiality of a belief would be differentiated between irrational inductions and all other forms (as in it is irrational if it has no potential).


Here, I am very careful not to use the word potentiality, because I think it loses meaning as an evaluative tool in the inductive hierarchy. Colloquially, I think it's fine. I understand what you mean. But the reason I don't think it works in the hierarchy is that the inductive hierarchy is not trying to assert what has more potential of being true, only which induction is more rational.

I believe this is a very important distinction. Recall that what is applicably known is based upon our context as well. A very narrow context might lead us to some strange probabilities and possibilities. It doesn't mean they have potential, as reality may very well defy them. They are simply rational inductions based on the applicable knowledge we have at the time.

Further, potentiality is not something the hierarchy can objectively measure. Let's say that in a deck of 52 cards, you must guess whether a face card or a number card will be drawn next. You have three guesses. Saying number cards is more rational going by the odds. But the next three cards drawn are face cards. The deck was already shuffled prior to your guess. The reality was that the face cards were always going to be drawn next; there was actually zero potential that any number cards were going to be pulled in the next three draws. What you made was the most rational decision even though it had zero potential of actually happening.

Let's go one more step. Same scenario. Only this time, I didn't put any number cards in the deck, and didn't tell you. You believe I made an honest deck of cards, when I did not. You had no reason to believe I would be dishonest in this instance, and decided to be efficient and assume the possibility that I was honest. With this induction, you rationally again choose number cards. Again, however, the potential for number cards to be drawn was zero.
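Both deck scenarios can be sketched numerically (a sketch of my own; I count aces as number cards, which the example leaves unspecified):

```python
from fractions import Fraction

NUMBER_RANKS = [str(n) for n in range(2, 11)] + ["A"]  # aces as number cards, by assumption
FACE_RANKS = ["J", "Q", "K"]

# Honest 52-card deck: 40 number cards vs 12 face cards, so guessing
# "number card" is the more rational induction.
honest_deck = (NUMBER_RANKS * 4) + (FACE_RANKS * 4)
p_number = Fraction(sum(r in NUMBER_RANKS for r in honest_deck), len(honest_deck))
p_face = Fraction(sum(r in FACE_RANKS for r in honest_deck), len(honest_deck))
assert p_number > p_face

# Dishonest deck: no number cards at all, unknown to the guesser.
# The same induction is still rational within their context, yet the
# actual "potential" of drawing a number card is zero.
dishonest_deck = FACE_RANKS * 4
p_number_actual = Fraction(sum(r in NUMBER_RANKS for r in dishonest_deck),
                           len(dishonest_deck))
assert p_number_actual == 0
```

The point of the sketch is only that the rational odds (40/52) and the actual potential (0) are computed from different information: the guesser's context versus the deck as it really is.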

An induction cannot predict potentiality, because an induction is a guess about reality. The conclusion does not necessarily follow from the premises. Some guesses can be more rational than others, but what is rational within our context may have zero potential of actually being. That being said, generally acting rationally is a good idea, because it is based on what we do applicably know about the world, versus what we do not. It involves less uncertainty, but carries no guarantee.

So, I do understand your intention behind using potentiality, and in the end, it might boil down to semantics and context. For the purposes of trying to provide a clear and rational hierarchy, I'm just not sure whether potentiality is something that would assist, or cloud the intention and use of the tool.

Quoting Bob Ross
Whereas, on the contrary, electrons can have two spin states: up or down. However, unlike the previous 6-sided die example, the subject, if they are quantum inclined (:, will assume the electron is equally likely in both positions (thus, not assuming the law of noncontradiction in the same sense as before).


Not to get too off on a tangent here, but I believe the only reason we calculate it as having both, is because it is equally likely they could be either prior to measurement. It is like calculating what would happen for each side of a six sided die prior to rolling the die. But perhaps we shouldn't wade into quantum physics for examples, as I believe it mostly to be a field of conceptual land mines in any conversation, much less while addressing a new theory of knowledge!

Quoting Bob Ross
To say that the probability of 1/52 is more cogent than a possibility seems wrong to me, as I am extrapolating that from the possibility of there being 52 cards.


Probability does not assert there are possibly 52 cards; it asserts that there are 52 cards, whether this be based on applicable knowledge or belief. Of course, what if I'm having a thought experiment? This is a great time to get into math.

Math is the language of discrete experience and distinctive knowledge. "One" is "a discrete experience": one blade of grass, a field of grass, one piece of grass. It is the abstraction of our ability to discretely experience "a" thing. "Two" is the idea that we can create one discrete experience, and another discrete experience. The discrete experience of both together as one identity is two.

Math is the logic of discrete experience. It is why it fits so well into our world view, because it is an abstraction of how we view the world. When I say, "two blades of grass," this relies on a context of two identities that are similar enough to be labeled "blades of grass". It does not assert their equality on a mass or atomic level. This is because it is an abstraction of our ability to contextualize identities down to their essential properties for the purposes of addition and subtraction, while throwing out all non-essential properties.

The proofs of math work, because they can be confirmed by our discrete experience being actively applied. Therefore I can abstract that if I have 20 bushels of hay, and take away 2 bushels of hay, I have 18 bushels of hay. I can discretely experience that in my head right now. I'm not claiming what constitutes a bushel. I have no need for the weight of each bushel down to the ounce, its color, smell, etc. I just need a discrete experience of a bushel, and this is enough to abstract something useful for reality.

Even so, just like language, math must be applied to reality without contradiction to be applicably known. I can predict that a feather will accelerate downward at 9.8 meters per second squared, but may find in my measurements that it does not. I might state that my 5 bushels of hay at 20 pounds each will result in 100 pounds of hay, but upon actual measurement, I find they only weigh 98 pounds.
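The bushel example can be put in code (the individual bushel weights below are invented for illustration):

```python
# Abstract (distinctive) prediction: 5 bushels at a nominal 20 lb each.
predicted_weight = 5 * 20                            # 100 lb

# Applied measurement: actual bushels rarely match the nominal weight.
measured_bushels = [20.0, 19.5, 20.0, 19.0, 19.5]    # hypothetical scale readings
measured_weight = sum(measured_bushels)              # 98.0 lb

assert predicted_weight == 100
# The abstract prediction is contradicted when applied to reality.
assert measured_weight != predicted_weight
```

The arithmetic itself is never in question; what application tests is whether the identities it was fed ("a 20-pound bushel") held up in reality.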

Quoting Bob Ross
For example, if I have a function F(N) = N + 1, this is a mathematical induction but not a probability. So, is it a plausibility? Is it a possibility?


This is a known function. This is an observation of our own discrete experience. If I take N identities, and add one more, then this will equal the identities added together. So, 2+1 is the same as the identity of 3. This applies to the abstraction of discrete experience, which, when applied to reality, could specifically be bushels of hay, sheep, etc. As it is in its functional form, it is only a descriptive logic of discrete experiencing.

This leads to, Quoting Bob Ross
Thirdly, it also depends on how you define "apply to reality" whether that holds true. Consider the belief that you have thoughts: is your confirmation of that ever applied to "reality"?


This goes back to the beginning of the essay. Recall that what we discretely experience, we know. That is because it is impossible to deny that we discretely experience. When I discretely experience something that I label as "thoughts" in my head, I distinctively know I have them. Applicable knowledge is when we apply our distinctive knowledge outside of our own ability to create identity as we wish. I might believe that the apple in front of me is healthy for me, but when I bite into it, I find it rotten. The apple is something apart from my own identifiable control in this way. Your thoughts are also reality.

Distinctive knowledge occurs, because the existence of having thoughts is not contradicted. The existence of discretely experiencing cannot be contradicted. Therefore it is knowledge. I label this special type of knowledge distinctive, because it is something within our control. I can create a world of magic and unicorns distinctively, but there is a limit when applied to that which I do not have control over, reality.

So, going back again to abstracting the idea of 1/52 playing cards, I can distinctively create the limitation in my head that there are 52 playing cards, that they are randomly shuffled, and 1 is pulled without applicably knowing which card it is. I can then establish the limitations of what the necessary possibilities are, knowing what each card is within the deck. But, if I apply this probability to any one particular deck in reality, what actually happens is what actually happens.

Perhaps some of the cards were not all the same weight or smoothness, and it causes some of them to stick in the shuffle. Perhaps there is some strange law of physics we didn't know about in reality that causes the Ace of spades to come up more frequently. Math is the ideal of distinctive knowledge, but it must still be applied to reality when it makes a prediction about a particular reality to see if it is applicably known.
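The gap between the abstract model and an applied deck can be simulated (the "sticky card" bias below is entirely hypothetical):

```python
import random
from fractions import Fraction

random.seed(42)

deck = list(range(52))        # card 0 stands in for the Ace of Spades
weights = [5] + [1] * 51      # hypothetical physical bias toward card 0

# Draw (with replacement) from the physically biased deck many times.
draws = random.choices(deck, weights=weights, k=10_000)
empirical = draws.count(0) / len(draws)

abstract = Fraction(1, 52)    # the distinctive-knowledge model

# The applied frequency contradicts the abstract 1/52 prediction.
assert empirical > float(abstract)
```

Here the distinctive model (1/52) is internally flawless, yet applying it to this particular deck reveals it was never applicably known for that deck.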

Quoting Bob Ross
Secondly, it seems a bit wrong to me to grant probabilities their own category when there can be plausible probability claims and possible probability claims.


We cannot meaningfully understand what a plausible probability is without first distinctively and applicably knowing what plausibility and probability are. Recall then, that a plausible probability is a chain of reasoning. I have a plausibility, and from that plausibility, I assert a probability. I have a possibility, and from that I assert a probability. I have applicable knowledge, and from that applicable knowledge, I assert a probability.

If I could compare all three inductions, it would be most rational to use the one that has applicable knowledge as its base.

1. Its plausible the dark side of the moon is on average hotter than the light side of the moon, therefore it is probable any point on the dark side of the moon will be hotter than any point on the light side of the moon.
2. Its possible the side of the moon facing away from Earth is on average colder than the light side of the moon, therefore it is probable any point on the dark side of the moon will be colder than any point on the light side of the moon.
3. The dark side of the moon has been measured on average to be cooler than the light side of the moon at this moment, therefore it is probable any point on the dark side of the moon will be colder than any point on the light side of the moon.

As you can see, intuitively and rationally, it would seem the closer the base of the chain is to applicable knowledge, the more cogent the induction.

Quoting Bob Ross
I think that it is an absolutely brilliant assessment! Well done! However, I think, although we have similar views, that there's still a bit to hash out.


Thank you! Yes, please continue to drill into the theory as much as you can. Its usefulness is only as good as its ability to withstand critiques. Again, greatly enjoying the conversation, and my thanks for your pointed assessment and criticism!
Bob Ross January 21, 2022 at 04:52 #645905
Hello @Philosophim,

Further, potentiality is not something the hierarchy can objectively measure. Let say that in a deck of 52 cards, you can choose either a face card, or a number card will be drawn next. You have three guesses. Saying number cards is more rational going by the odds. But the next three cards drawn are face cards. The deck was already shuffled prior to your guess. The reality was the face cards were always going to be drawn next, there was actually zero potential that any number cards were going to be pulled in the next three draws. What you made was the most rational decision even though it had zero potential of actually happening.


Although I understand what you are saying, and I agree with you in a sense, potentiality is not based off of hindsight but, rather, the exact same principle as everything else: what you applicably know at the time. Prior to drawing three face cards, if you applicably know that there is at least one number card in the 52 (or that you have good reason to believe that there is one regardless of whether you directly experienced one), then there is a potential that you could draw it. Regardless of whether it is the most rational position, it is nevertheless a rational position. However, if you applicably know that there are no number cards in the 52 (or you have good reason to doubt it), then it has no potential and, therefore, it is irrational.

Only this time, I didn't put any number cards in the deck, and didn't tell you. You believe I made an honest deck of cards, when I did not. You had no reason to believe I would be dishonest in this instance, and decided to be efficient, and assume the possibility I was honest. With this induction, I rationally again choose number cards. Again however, the potential for number cards to be drawn was zero.


Again, I understand what you are saying and I agree. However, within the context (in the heat of the moment) the numbers do have the potential to be in the deck if you have assessed that your knowledge deems it so. In hindsight, which refurbishes the context and maybe a new context depending on how one looks at it, you can now claim that there was no potentiality. But with respect to whether it had potentiality prior to this new knowledge that they lied, it is more rational to conclude that it has potentiality. I would argue, furthermore, that this assessment is actually necessary for one to even pick numbers in the first place (in terms of your example): if they don't think there is any potential for there to be a number, then they wouldn't pick numbers (and if they did, then it would be irrational). Although sometimes potentiality and "possibility" (in your terms) coincide, it isn't necessarily the case that something only has potential if you have "experienced it once before".

An induction cannot predict potentiality, because an induction is a guess about reality.


It is a part of the guess. First I make an educated guess that there is potential for water to exist on another planet somewhere, then I guess at how likely that is and, thereafter, whether it really constitutes knowledge or not (with consideration to my discrete and applicable knowledge). Potentiality is the first (or at least one of the first) considerations when attempting to determine knowledge. If the subject determines there is no potential, then they deem any further extrapolations irrational and thereby abandon them.

Some guesses can be more rational than another, but what is rational within our context, may have zero potential of actually being


It isn't about what can potentially occur in light of new evidence afterwards; it is about what can potentially occur in light of the current evidence. It is perfectly fine if we find out later that what we thought had no potential actually does have some, or vice-versa. This is how it is for all contexts and even the induction hierarchies. Potentiality is a guide to what one should pursue (as one of the first considerations), and I would argue we all implicitly partake in it: that's why if you can convince someone that they hold a contradiction, they will feel obligated to refurbish their beliefs (most of the time). It is the fact that they know they are holding an irrational belief, due to the potentiality being nonexistent, that motivates their will to change. This would be, colloquially speaking, "possibility". I agree that this may just be a semantic difference, but I think defining possibility as "what one has experienced once before" eliminates the other meaningful aspect of the term (potentiality).

It is less uncertainty, but has no guarantee


Nothing is guaranteed. It could very well be that in five years we will look back, in hindsight, and "know" our understanding of induction hierarchies was utterly wrong (with consideration to new evidence). This doesn't mean that we can't use the induction hierarchies now, does it? I don't think so. So it is with potentiality. In my head, this would be like claiming that we can't utilize "possibilities" because, in the future, it may be the case that we find out it never actually was possible.

For the purposes of trying to provide a clear and rational hierarchy, I'm just not sure whether potentiality is something that would assist, or cloud the intention and use of the tool.


Personally I think it is necessary, but of course do what you deem best!

Math is the logic of discrete experience.


I agree for the most part: math deductions are the logic of discrete experience and we inductively apply that in the abstract. But I think the problem remains: where do mathematical inductions fit into the hierarchy?

This is a known function. This is an observation of our own discrete experience


It is an observation of our own discrete experience (when it is a deduction), but that doesn't exempt it from the hierarchy (when it is an induction). 1 + 2 = 3 is an observation of our own discrete experience, whereas X + Y = Z (where all of them are numbers never discretely experienced before) is based off of our own discrete experience (it's an induction, as you are probably well aware). When I state that 1 + 2 = 3, I know that these numbers are possible, whereas I don't know that is the case in terms of X + Y = Z for all numbers. Furthermore, there are actually cases where I know that they aren't possible, in the case of imaginary numbers (i), such as √-25 = 5i. We also apply math to actual infinities that may not actually exist (such as infinity and negative infinity and even PI and E, which are irrational numbers). When we take the limit approaching infinity, are you claiming that that is an observation of our own discrete experience (or a distant extrapolation)? Therefore, the function F(N) is not an observation of our own discrete experience (that would be a deduction) but, rather, an induced function meant to predict based off of our deducible knowledge (it is literally an induction put into a predictive model). This directly implies that N in F(N) could either be (1) a possible number, (2) an applicable plausible number (with regards to your terms: has potential and can be applied but isn't proven to be possible), (3) an inapplicable plausible number (has potential and hasn't been proven to be possible but cannot be applied), or (4) an irrational number (has no potential and isn't possible). I think you are right in the sense that, in the abstract, X + Y = X + Y will always hold, but saying it will always hold is an induction (it is just so ingrained, as you stated, into our discrete experience itself that we hold it dear--in my terms it is one of the most immediate things, closest to our existence). 
Most importantly, none of this exempts it from the hierarchy of inductions and, therefore, I would like to know where you would classify it?

When I discretely experience something that I label as "thoughts" in my head, I distinctively know I have them.


My intention is not to try and put words in your mouth, but I think you are, if you think this, obliged to admit that you and thought are distinct then. I don't think you can hold the position that we discretely experience them without acknowledging this, but correct me if I am wrong. If you do think they are separate, then I agree, as I think that your assessment is quite accurate: we do apply our belief that we have thoughts to reality, because the process of thinking is a part of experience (reality). It is just the most immediate form of knowledge you have (I would say): rudimentary reason.

Distinctive knowledge occurs, because the existence of having thoughts is not contradicted. The existence of discretely experiencing cannot be contradicted. Therefore it is knowledge.


I agree!

We cannot meaningfully understand what a plausible probability is without first distinctively and applicably knowing what plausibility and probability are.


If I follow this logic, I still end up with a problem: without first distinctively and applicably knowing what mathematical induction is, I cannot meaningfully understand what a probability is. Therefore, why aren't mathematical inductions a category on the induction hierarchy? Why only probabilities?

Furthermore, I apologize as my term "plausible probability" is confusing: I am not referring to a chain plausibility -> probability. What I was really referring to was something we've previously discussed a bit: there are different cogencies within probabilities since they are subject, internally and inherently, to the other three categories (irrational, possibility, and plausibility). Same goes for math in general. Two separate probabilities, with the same chances, could be unequal in terms of sureness (and cogency I would say). You could have a 33% chance in scenario 1 and 2, but 1 is more sure of a claim than 2. This would occur if scenario 1 is X/Y where X and Y are possible numbers and scenario 2 is X/Y where X and Y are plausible numbers (meaning they have the potential to exist, but aren't possible because you haven't experienced them before). My main point was that there is a hierarchy within probabilities (honestly all math) as well.

Moreover, another issue I was trying to convey is why does probability have its own category, but not mathematical inductions? I think what your "probability" term really describes, in terms of its underlying meaning, is mathematical inductions. If I induce something based off of F(N), this is no different than inducing something off of 1/N chances, except that, I would say, anything induced from the former is more cogent. This is because if I base a belief on there being a 90% chance, that will always be less certain (because it is a chance) than anything based off of F(N) (directly that is). For example, if I induce that I should go 30 miles per hour in my car to get to my destination, which is 60 miles away, in 2 hours, that is calculated with numbers that are a possibility or plausibility (the mathematical operations are possible, but not necessarily the use of those operations on those particular numbers in practicality). But this is more cogent than an induction that I should bet on picking a number card out of a deck (no matter how high the chances of picking it) because the former is a more concrete calculation to base things off of (it isn't "chances", in the sense that that term is used for probability). Don't get me wrong, the initial calculation, because it is also math, of probability is just as cogent as any other mathematical operation (it's just division, essentially), but anything induced from that cannot be more cogent than something directly induced from a more concrete mathematical equation such as 60 miles / 30 miles per hour = 2 hours. Notice that these are both inductions but one doesn't really exist in the induction hierarchies (mathematical inductions) while the other is the most cogent induction (probability). Why?
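The two inductions being compared can be written side by side (a sketch using the numbers from the paragraph above; aces counted as number cards by assumption):

```python
from fractions import Fraction

# A "concrete" mathematical induction: distance / speed = travel time.
distance_miles = 60
speed_mph = 30
travel_time_hours = distance_miles / speed_mph
assert travel_time_hours == 2.0

# A probability induction: the chance of drawing a number card from an
# honest 52-card deck (40 number cards, counting aces, vs 12 face cards).
p_number_card = Fraction(40, 52)

# Even a high chance remains only a chance; the travel-time equation
# yields a definite predicted quantity rather than a likelihood.
assert p_number_card < 1
```

Both are inductions about reality (the trip may take longer; the draw may be a face card), which is exactly Bob's question: why does only the second kind get its own tier in the hierarchy?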


1. It's plausible the dark side of the moon is on average hotter than the light side of the moon, therefore it is probable any point on the dark side of the moon will be hotter than any point on the light side of the moon.
2. It's possible the side of the moon facing away from Earth is on average colder than the light side of the moon, therefore it is probable any point on the dark side of the moon will be colder than any point on the light side of the moon.
3. The dark side of the moon has been measured on average to be cooler than the light side of the moon at this moment, therefore it is probable any point on the dark side of the moon will be colder than any point on the light side of the moon.


This may be me just being nitpicky, but none of those were probable (they are not quantitative likelihoods, they are qualitative likelihoods). If you disagree, then I would ask what the denominator is here. But my main point is there is a 4th option you left out: if I can create a mathematical equation that predicts the heat of a surface based off of its exposure to light, then it would be more cogent than a probability (it is a mathematical induction based on a more concrete function than probability) but, yet, mathematical inductions aren't a category.

Furthermore, #2 isn't possible unless you've experienced the side of the moon facing away from the earth being colder than when you experienced it on the light side. This is when we have to consider what we mean by "what we have experienced before". This is more of potentiality than possibility (in your terms). I think that your use of "possible" is more in a colloquial sense in #2.

As you can see, intuitively, and rationally, it would seem the closer the base of the chain is to applicable knowledge, the more cogent the induction.


I agree!

Look forward to hearing from you,
Bob
Philosophim January 21, 2022 at 13:36 #646013
Quoting Bob Ross
Although I understand what you are saying, and I agree with you in a sense, potentiality is not based off of hindsight but, rather, the exact same principle as everything else: what you applicably know at the time.


I have been thinking about this for some time. I like the word "potential". I think it's a great word. The problem is, it comes from a time prior to having an assessment of inductions. Much of what you are describing as potential is a level of cogency that occurs in both probability and possibility. The word potential, in this context, is like the word "big". It's a nice general word, but isn't very specific, and is used primarily as something relative within a context.

Perhaps this is why I'm shying away from implementing it as something measurable within the hierarchy. Logically, I can only say inductions are more cogent, or rational than another. I have absolutely no basis to measure the potential of an induction's capability of accurately assessing reality. At the most, I suppose I would be comfortable with stating that "potential" is anything that is the realm of probability or possibility, as these directly rely on claims of applicable knowledge in their chain of rationality, but I cannot use it as anything more than that before it turns into an amorphous general word that people use to describe what they are feeling at the time.

Quoting Bob Ross
Potentiality is the first (or at least one of the first) considerations when attempting to determine knowledge. If the subject determines there is no potential, then they constitute any further extrapolations as irrational and thereby disband from it.


This is what I mean by saying the word begins to morph into something too general. Now a word which could describe a state of probability or possibility becomes an emotional driving force for why we seek to do anything. I could hold an irrational belief, and say it's because it's potentially true. Potential in this case more describes, "I believe something, because I believe something (It has potential)." It's not that potential is a poor word; it's just that, as it's been used, it's too poorly defined and amorphous. Without concrete measurement, it can be used to state that any belief in reality could be true. So until a more concrete and defined use of the word can be created, I think I'm going to stick with evaluating inductions in terms of rationality, instead of potentiality.
Quoting Bob Ross
If I induce something based off of F(N), this is no different than inducing something off of 1/N chances, except that, I would say, anything induced from the former is more cogent.


Quoting Bob Ross
But I think the problem remains: where does mathematical inductions fit into the hierarchy?


So earlier, I was trying to explain that math was the logical conclusions of being able to discretely experience. I remember when I learned about mathematical inductions, I thought to myself, "That's not really an induction." The conclusion necessarily follows from the premises of a mathematical induction. I checked on this to be sure.

"Although its name may suggest otherwise, mathematical induction should not be confused with inductive reasoning as used in philosophy (see Problem of induction). The mathematical method examines infinitely many cases to prove a general statement, but does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values."
https://en.wikipedia.org/wiki/Mathematical_induction
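For reference, the deductive schema the quote describes can be written out explicitly. This is the standard textbook form of the principle, added here only to make the "finite chain of deductive reasoning" concrete:

```latex
% Principle of mathematical induction:
% a base case plus an inductive step deductively yield the general claim.
\[
\bigl[\, P(1) \;\land\; \forall n \,\bigl( P(n) \rightarrow P(n+1) \bigr) \,\bigr]
\;\Rightarrow\; \forall n\, P(n)
\]
```

Both premises are established deductively, which is why, despite the name, the conclusion is not a philosophical induction.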

N + 1 = F(N) is a logical process, or rule that we've created. Adding one more identity to any number of identities, can result in a new identity that describes the total number of identities. It is not a statement of any specific identity, only the abstract concept of identities within our discrete experience. Because this is the logic of a being that can discretely experience, it is something we can discretely experience.

We could also state N+1= N depending on context. For example, I could say N = one field of grass. Actual numbers are the blades of grass. Therefore no matter how many blades of grass I add into one field of grass, it will still be a field of grass. I know this isn't real math, but I wanted to show that we can create concepts that can be internally consistent within a context. That is distinctive knowledge. "Math" is a methodology of symbols and consistent logic that have been developed over thousands of years, and works in extremely broad contexts.

Quoting Bob Ross
My intention is not to try and put words in your mouth, but I think you are, if you think this, obliged to admit that you and thought are distinct then. I don't think you can hold the position that we discretely experience them without acknowledging this, but correct me if I am wrong. If you do think they are separate, then I agree, as I think that your assessment is quite accurate: we do apply our belief that we have thoughts to reality, because the process of thinking is a part of experience (reality). It is just the most immediate form of knowledge you have (I would say): rudimentary reason.


I don't believe you did in this case. If you recall, thoughts come after the realization we discretely experience. The term "thought" is a label of a type of discrete experience. I believe I defined it in the general sense of what you could discretely experience even when your senses were shut off. And yes, you distinctively know what you think. If I think that a pink elephant would be cool, I distinctively know this. If I find a pink elephant in reality, this may, or may not be applicably known. Now that you understand the theory in full, the idea of thoughts could be re-examined for greater clarity, definition, and context. I only used it in the most generic sense to get an understanding of the theory as a whole.

Quoting Bob Ross
Two separate probabilities, with the same chances, could be unequal in terms of sureness (and cogency I would say). You could have a 33% chance in scenario 1 and 2, but 1 is more sure of a claim than 2. This would occur if scenario 1 is X/Y where X and Y are possible numbers and scenario 2 is X/Y where X and Y are plausible numbers (meaning they have the potential to exist, but aren't possible because you haven't experienced them before). My main point was that there is a hierarchy within probabilities (honestly all math) as well.


I think again this is still the chain of rationality. A probability based upon a plausibility, is less cogent than a probability based on a possibility.

Back to your idea of using math inductively.

Quoting Bob Ross
For example, if I induce that I should go 30 miles per hour in my car to get to may destination, which is 60 miles away, in 2 hours, that is calculated with numbers that are a possibility or plausibility (the mathematical operations are possible, but not necessarily the use of those operations on those particular numbers in practicality). But this is more cogent than an induction that I should bet on picking a number card out of a deck (no matter how high the chances of picking it) because the former is a more concrete calculation to base things off of (it isn't "chances", in the sense that that term is used for probability).


You distinctively know that if you travel 30 miles per hour to get to a destination 60 miles away, in 2 hours you will arrive there. Now, if you get in your vehicle, can you consistently travel 30 miles per hour? Is the destination exactly 60 miles away, or is it 60 and some change? If we say that any decimals are insignificant digits, and you can travel exactly 30 miles per hour, and the distance is exactly 60 miles away, then you will arrive in exactly two hours, because we have defined distance and time and applied it to reality to work that way without contradiction.

A probability is not a deduction, but an induction based upon the limitations of the deductions we have. Probability notes there are aspects of the situation that we lack knowledge over. As noted earlier, a randomly shuffled deck of cards is not really random. We call it "random" because we distinctively and applicably know that we lack the ability to observe the order it was shuffled in. We induce what is rationally most likely when we lack this information, based on the other information we do know.

As such, the first case is actually a deduction, the second is an induction.

Quoting Bob Ross
This may be me just being nit picky, but none of those were probable (they are not quantitative likelihoods, they are qualitative likelihoods).


You are correct! I was being sloppy. I was more interested in conveying the idea of chains of rationality. Instead of average, I should have said "median". In that case we know we have a majority of spots on one side that would be above or below the temperature of the other side, and could create a probability.

Quoting Bob Ross
But my main point is there is a 4th option you left out: if I can create a mathematical equation that predicts the heat of a surface based off of it's exposure to light, then it would be more cogent than a probability (it is a mathematical induction based on a more concrete function than probability) but, yet, mathematical inductions aren't a category.


I think most of the conversation has boiled down to inductions vs deductions with math. Math is a tool that can be used to create deductions, or inductions, just like distinctive knowledge. Looking at distinctive knowledge, everything inside of itself that is internally consistent is deduced. But I can induce something. I can state, "This distinctive knowledge applies to reality without contradiction, even though I haven't applied it to reality yet." This is the impetus of all beliefs. Trying to find a way to measure more rationally which beliefs we should spend the time and effort pursuing is why we develop a system of knowledge, and use the inductive hierarchy.

Math is merely the logic of discrete experience. Meaning you can use math deductively, and also use some of those deductions to make predictions about reality. These aren't mathematical inductions, these are inductions based on math within its chain of rationality. Does this make sense?

Absolutely fantastic deep dive here Bob. I've wanted for so long to discuss how the knowledge theory applies to math, and it's been a joy to do so. I also really want to credit your desire for "potentiality" to fit in the theory. It's not that I don't think it can, I just think it needs to be more carefully defined, and serve a purpose that cannot be gleaned with the terms we already have in the theory. Thank you again for the points, you are a keen philosopher!


Bob Ross January 21, 2022 at 17:04 #646068
Hello @Philosophim,

Absolutely fantastic deep dive here Bob. I've wanted for so long to discuss how the knowledge theory applies to math, and it's been a joy to do so. I also really want to credit your desire for "potentiality" to fit in the theory. It's not that I don't think it can, I just think it needs to be more carefully defined, and serve a purpose that cannot be gleaned with the terms we already have in the theory. Thank you again for the points, you are a keen philosopher!


Thank you Philosophim! You are a marvelous philosopher yourself! I am also thoroughly enjoying our conversation. I agree in that our dispute is really pertaining, at a fundamental level, to two concepts: potentiality and math.


I have been thinking about this for some time. I like the word "potential". I think its a great word. The problem is, it comes from a time prior to having an assessment of inductions. Much of what you are describing as potential, are a level of cogency that occurs in both probability, and possibility. The word potential in this context, is like the word "big". Its a nice general word, but isn't very specific, and is used primarily as something relative within a context.


I agree, I definitely need to define it more descriptively. However, with that being said, at a deeper level, the term possibility is also like the word "big": it is contingent on a subjective threshold just like potentiality. Although I like your definition of it (what has been experienced once before), that very definition is also utterly ambiguous (from a deeper look). Just like how I can subjectively create a threshold of when something is "big", which you could disagree with (cross-referencing to your own threshold), I also subjectively create a threshold of what constitutes as "experiencing it before". Furthermore, I also subjectively create a threshold of what constitutes as having the potential to occur. I think we can definitely get further into the weeds about "possibility" and "potentiality", but all I am trying to point out here is that their underlying structure is no different.

Logically, I can only say inductions are more cogent, or rational than another.


I agree, I think potentiality is an aspect of rationality. If it has no potential, just like if it isn't possible, then it is irrational. Potentiality isn't separate from rationality (it is a part of rational thinking).

I have absolutely no basis to measure the potential of an induction's capability of accurately assessing reality


The basis is whether you think it aligns accurately with your knowledge. For example, although this may be a controversial example as we haven't hashed out math yet, I can hold that, even though I haven't experienced it, lining up (side by side) 2 in long candy bars for 3,000 feet has the potential to occur because it aligns with my knowledge (i.e. I do applicably know that there is 3,000 feet available to lay things and I do applicably know there are 2 in long candy bars); however, most importantly, according to your terminology, this is not possible since I haven't experienced it before. Likewise, without ever experiencing it, I can hold that it is irrational to believe that one can fit 7,000 2 in long candy bars, side by side long ways, within 1,000 feet (because, abstractly, 1,000 feet can only potentially hold 6,000 2 inch candy bars side by side). Yes, there is a level of error (mainly human error) that needs to be accounted for and, thus, it is merely an ideal. But, nevertheless, I can utilize this assessment of potentiality to determine which is more cogent and which not to pursue (although both are not possibilities--as of yet--I should not sit there and try to fit 7,000 2 in long candy bars--side by side--within 1,000 ft since I already know it has no potential). Notice though, and this is my main critique, that the use of solely possibility (in your terms) within your epistemology strips the subject of being capable of making such a distinction (they are both not possible without further elaboration).
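Bob's candy-bar arithmetic checks out as a unit conversion. A minimal sketch, assuming ideal end-to-end packing and ignoring the "human error" he mentions (the helper name `max_bars` is mine, not from the thread):

```python
# Ideal capacity of a span for 2-inch candy bars laid end to end.
INCHES_PER_FOOT = 12
BAR_LENGTH_IN = 2

def max_bars(span_feet):
    """How many 2-inch bars fit, ideally, in span_feet of space."""
    return (span_feet * INCHES_PER_FOOT) // BAR_LENGTH_IN

print(max_bars(1000))           # 6000 -- so 7,000 bars has no potential
print(7000 <= max_bars(1000))   # False
print(max_bars(3000))           # 18000 bars over the 3,000 ft span
```

The negative claim (7,000 bars in 1,000 ft) can be ruled out abstractly, which is exactly the work Bob wants "potentiality" to do before any application to reality.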

Much of what you are describing as potential is a level of cogency that occurs in both probability and possibility


I am failing to understand how this is the case. Potentiality is the component of the colloquial use of "possibility" that got removed, implicitly, when your epistemology refurbished the term. Therefore, it does not pertain, within your terms, to possibility directly at all. Yes, it is true in the sense that potentiality branches out to plausibility, possibility, and probability. But that is because it is a requisite (if it has no potential, it necessarily gets redirected to the irrational pile of claims). Something can't be plausible if it can be proven to have no potential (and it doesn't necessarily have to be "I've experienced the exact, contradictory, event to this claim, therefore it is an irrational induction": I don't have to experience failing to be able to fit 1,000,000 5 ft bricks into a 10 x 10 x 10 room to know that it is an irrational inductive belief). Moreover, something can't be probable (with an actual denominator) if it doesn't have potential. And, finally, it can't be possible (you couldn't have experienced it before) if it has no potential (and if you did experience it, legitimately, then it has potential). I think the main issue you may be having is that your new definition of possibility implicitly stripped this meaningful distinction out of "possibility" in favor of a new, less ambiguous term. However, now we must determine, assuming you agree with me, how to implement this distinction back into the epistemology. Otherwise, the subject is incredibly limited in what they can meaningfully induce about the world.

but I cannot use it as anything more than that before it turns into an amorphous general word that people use to describe what they are feeling at the time.


I agree: people could use it with no real substance. However, this is also true for possibility. I could make subjective thresholds for what constitutes "experiencing something before" that renders possibilities utterly meaningless. I think "rationality" isn't merely determining something a possibility, plausibility, or any other term: it is also about how one reasoned their way into the thresholds for those terms. I can dereference any term into a meaningless distinction, but how can we keep it meaningful for all subjects when it isn't a rigid distinction? I think we just have to agree, as two subjects conversing, on the underlying reasoning behind our subjective thresholds: that is rationality (what we both constitute as valid reasoning).

Now a word which could describe a state of probability or possibility, becomes an emotional driving force for why we seek to do anything.


I see where you are coming from and you are totally correct: people can de-value anything. However, I don't see how it is actually a probability or possibility: only that the distinction between what is irrational and rational (rational being probability, possibility, and plausibility) is necessarily potentiality (to one degree or another). All three terms within rational beliefs (not considering which is more rational than another, which could technically make a rational belief actually irrational if one determines another rational belief to be a better choice) inherit from this concept of potentiality: it is a requisite.

I could hold an irrational belief, and say it's because it's potentially true.


If we are defining an irrational belief as what has no potential to be true, then this statement is an irrational belief, within our subjective determination of what the term "irrational belief" should imply, because it directly contradicts the definition.

Potential in this case more describes, "I believe something, because I believe something (It has potential)."


Ah, I see. This is what I was referring to a while ago (in our posts): people tend to make an illegitimate jump where they claim that "since it has potential, it is possible, therefore I believe it". This is not necessarily true though. Honestly, your defining of possibility as "experiencing it once before" is so brilliant for this very reason: something can have potential but yet never have been experienced, therefore it isn't possible (yet). Therefore, consequently, merely claiming something has potential, ergo I believe it is true, is irrational because, rationally speaking, something can't be constituted as "true" if it first isn't possible. Potentiality doesn't pertain to the "truth" of the matter, just a requisite to what one should rationally not pursue. It is a deeper level, so to speak, of analysis that can meaningfully allow subjects to reject other peoples' claims just like what you are describing.

Without concrete measurement, it can be used to state that any belief in reality could be true.


Not everything could be true. Firstly, not everything is possible (because we either (1) haven't experienced it or (2) we have experienced contradictory events to the claim). Secondly, not everything has potential (because we may have experienced enough knowledge to constitute it as not having the capability to occur). Admittedly, potentiality and possibility are incredibly similar and that's why, traditionally, they are but one term. However, potentiality is a broader claim, less bold and assertive, than possibility (if we define it as having experienced it before). Now, within this new terminology, we can boldly and assertively claim something is possible (assuming we agree on the subjective thresholds in place) because we have experienced it before. In regards to potentiality, we aren't boldly claiming that it can occur, just that there is potential for it to occur. This is more meaningful in terms of negation and not positive claims: we can meaningfully claim that something is irrational if it has no potential (assuming the subjective thresholds are agreed upon, like everything else). It isn't as meaningful in terms of two things that have potential and that's where the other terms come into play, but they only come into play once it is accepted that it has potential (that's why it is a requisite).

I think I'm going to stick with evaluating inductions in terms of rationality, instead of potentiality.


That is absolutely fine! My intention is not to pressure you into reforming it, but I do think this is a false dichotomy: this assumes potentiality is a separate option from rationality. Potentiality, and its consideration, is engulfed within rational thinking and the negation thereof is why it becomes irrational. We can't claim that something that has no potential is irrational if we aren't also claiming that if it does that it is rational to continue the analysis.


So earlier, I was trying to explain that math was the logical conclusions of being able to discretely experience. I remember when I learned about mathematical inductions, I thought to myself, "That's not really an induction." The conclusion necessarily follows from the premises of a mathematical induction. I checked on this to be sure.

"Although its name may suggest otherwise, mathematical induction should not be confused with inductive reasoning as used in philosophy (see Problem of induction). The mathematical method examines infinitely many cases to prove a general statement, but does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values."
https://en.wikipedia.org/wiki/Mathematical_induction


This is true, but that is with respect to the mathematical operations, not the numbers themselves. I can say it is possible to perform addition because I have experienced it before; I cannot say that it is possible to add 3 trillion + 3 trillion because I haven't experienced doing that before with those particular numbers: I am inducing that it still holds based off of the possibility of the operation of addition. But, yes, you are correct in the sense that philosophical induction is not occurring with respect to the operations themselves, but I would say it is occurring at the level of the numbers.

N + 1 = F(N) is a logical process, or rule that we've created. Adding one more identity to any number of identities, can result in a new identity that describes the total number of identities. It is not a statement of any specific identity, only the abstract concept of identities within our discrete experience. Because this is the logic of a being that can discretely experience, it is something we can discretely experience.

We could also state N+1= N depending on context. For example, I could say N = one field of grass. Actual numbers are the blades of grass. Therefore no matter how many blades of grass I add into one field of grass, it will still be a field of grass. I know this isn't real math, but I wanted to show that we can create concepts that can be internally consistent within a context. That is distinctive knowledge. "Math" is a methodology of symbols and consistent logic that have been developed over thousands of years, and works in extremely broad contexts.


I agree, but this doesn't mean it holds for all numbers. We induce that it does, but it isn't necessarily the case. We assume that when we take the limit of 1/infinity that it equals 0, but we don't know if it is really even possible to actually approach the limit infinitely to achieve 0. Likewise, we know that if there are N distinct things that N + 1 will hold, but we don't know if N distinct things are actually possible (that is the induction aspect, which I think you agree with me on, although I could be wrong).
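The limit Bob refers to, written out in standard notation for reference:

```latex
% The limit of 1/n as n grows without bound:
\[
\lim_{n \to \infty} \frac{1}{n} = 0
\]
% No term of the sequence 1/n ever equals 0; the definition only
% requires that the terms get arbitrarily close to 0, which is the
% gap Bob is pointing at between the abstract rule and any
% actually completed infinite process.
```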

I don't believe you did in this case. If you recall, thoughts come after the realization we discretely experience. The term "thought" is a label of a type of discrete experience. I believe I defined it in the general sense of what you could discretely experience even when your senses were shut off. And yes, you distinctively know what you think. If I think that a pink elephant would be cool, I distinctively know this. If I find a pink elephant in reality, this may, or may not be applicably known. Now that you understand the theory in full, the idea of thoughts could be re-examined for greater clarity, definition, and context. I only used it in the most generic sense to get an understanding of the theory as a whole.


Yes, I may need a bit more clarification on this to properly assess what is going on. Your example of the pink elephant is sort of implying to me something different than what I was trying to address. I was asking about the fundamental belief that you think and not a particular knowledge derived from that thought (in terms of a pink elephant). I feel like, so far, you are essentially just stating that you think, therefore you think. I'm trying to assess deeper than that in terms of your epistemology with respect to this concept, but I will refrain as I have a feeling I am just simply not understanding you correctly.

I think again this is still the chain of rationality. A probability based upon a plausibility, is less cogent than a probability based on a possibility.


Yes, but your essays made it sound like probability is its own separate thing and then you can mix them within chains of inductions. On the contrary, I think that "probability" itself is actually, at a more fundamental level, contingent on possibility and plausibility for it to occur in the first place.

You distinctively know that if you travel 30 miles per hour to get to a destination 60 miles away, in 2 hours you will arrive there.


Agreed, but, depending on if I've experienced it before, it may be an induction based off of possibility or plausibility.


A probability is not a deduction, but an induction based upon the limitations of the deductions we have. Probability notes there are aspects of the situation that we lack knowledge over.


Whether or not it is a deduction or induction, probabilities are derived from two separate claims that are not equally as cogent as one another. A calculation based off of a possibility is more cogent than one based off of a plausibility. Yes, this is still using the induction hierarchy, but notice it is within probabilities, which means probability itself is contingent on possibility and plausibility while the latter two are not contingent in any way on probability.

I look forward to hearing from you,
Bob
Philosophim January 23, 2022 at 15:31 #646776
Quoting Bob Ross
I agree, I definitely need to define it more descriptively. However, with that being said, at a deeper level, the term possibility is also like the word "big": it is contingent on a subjective threshold just like potentiality.


All distinctive knowledge is formed subjectively. Why I think possibility is more clear and useful than potential as a discrete experience is that I have a clear definition that can be applied to reality without contradiction. How do I apply the definition of potential to applicably know it?

Quoting Bob Ross
I agree, I think potentiality is an aspect of rationality. If it has no potential, just like if it isn't possible, then it is irrational. Potentiality isn't separate from rationality (it is a part of rational thinking).


I think here we share the same intuition. Intuitively, potentiality seems like a word that would be used to describe the likelihood of an induction being correct. But how do I determine that? How do I applicably know that? With probability, I have clear limitations in what can potentially be drawn. If I know the cards are set, but I don't know the outcome, I could say, "Potentially, I could draw a jack." Perhaps we could state potentiality is a description of the possible outcomes of a probability? It's clearly defined, and can be applicably known.

Perhaps with possibility, "potential" could be used as well. "Because the bear was here yesterday, it's potentially here today." The only issue here is the word has changed meaning. What we're really stating in this instance is, "It's possible the bear is here today, because we applicably knew the bear was here yesterday." At that point, the word really is no different from "possibility".

I think that sums up my issues with the word. It needs a clear definition that can be applicably known. In regards to potentiality, it seems to be the same as the word possibility. So perhaps, we could call potential a synonym of possibility? Potential = possible?

I suppose I should also address why potential cannot work with plausibility at all. A plausibility has no means to evaluate its potential, because I believe potential evaluates a strong sense of what we believe can be real. A plausibility is almost an abandonment of potentiality as an evaluation, because the only way to know if a plausibility is possible/potential is to apply it to reality.

Quoting Bob Ross
For example, although this may be a controversial example as we haven't hashed out math yet, I can hold that, even though I haven't experienced it, lining up (side by side) 2 in long candy bars for 3,000 feet has the potential to occur because it aligns with my knowledge (i.e. I do applicably know that there is 3,000 feet available to lay things and I do applicably know there are 2 in long candy bars); however, most importantly, according to your terminology, this is not possible since I haven't experienced it before.


It is plausible. It's a claim about reality that has not been applicably tested yet. Maybe you aren't able to do it when you try. When applying a plausibility to reality, details come up that we haven't thought about. For example, what type of candy bar? Are we standing them vertically, or laying them laterally? What is the surface: inclined, rough, or flat? A possibility already has those answers. If you stand a candy bar, you can evaluate that candy bar and glean all the necessary information to show how it is applicably known.

So, if you have all of those answers, then you can state: since it is possible to line up a candy bar in X manner, it is possible that a candy bar will be able to be lined up if X manner is repeated. Because there is no claim that the candy bar should not be able to stand if X manner is repeated, it stands to reason that if we could duplicate X manner many times, say 3,000, the candy bars would stand aligned. But if we've never aligned a candy bar even one time, we don't applicably know if it's possible.

Math alone does not evaluate the details of whether something is possible or plausible. For example, I can state 1 unicorn + 1 unicorn is 2 unicorns. That is distinctively known. But if I go looking for unicorns in reality, the fantastical magical horse kind, I do not know if it's possible. The hierarchy of inductions is in relation to a belief's application to reality. It is not a question of the distinctive knowledge that leads up to the belief itself.

Quoting Bob Ross
Likewise, without ever experiencing it, I can hold that it is irrational to believe that one can fit 7,000 2 in long candy bars, side by side long ways, within 1,000 feet (because, abstractly, 1,000 feet can only potentially hold 6,000 2 inch candy bars side by side).

You can calculate that it is implausible abstractly. Let's even say we add details to make sure it's impossible, such as ensuring the candy bars cannot be squished together. This is again like showing that, just as a candy bar with X properties can stand, an object of unchangeable dimensions cannot fit into an area of smaller unchangeable dimensions. But we need to have experienced the possibility of two unchangeable dimensionalities, where one can fit inside of the other. Set it up correctly, and you are describing what is possible (or impossible in this case).
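As an aside, the abstract side of that calculation is easy to check mechanically. Here's a minimal sketch of the arithmetic in the candy bar example (the figures come from the example above; the variable names are my own):

```python
# 1,000 feet of space, candy bars 2 inches long (figures from the example)
feet_available = 1_000
inches_available = feet_available * 12   # 12,000 inches
bar_length_in = 2

# The most 2 in bars that can sit side by side in the space
max_bars = inches_available // bar_length_in
print(max_bars)  # 6000

# 7,000 bars would need 14,000 inches, which exceeds the space available
print(7_000 * bar_length_in <= inches_available)  # False
```

Of course, this only settles the distinctive side; whether actual candy bars behave this way when lined up is still the applicable question.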

Just as an aside, it might be beneficial to describe what I consider distinctively impossible. What is distinctively impossible is a plausibility that takes two possibilities and results in a contradiction of at least one possibility. A plausibility cannot claim a possibility is incorrect, as it is a lower level on the hierarchy due to its weaker relation to applicable knowledge. Applicable impossibility is found when new applicable knowledge contradicts our previous possibilities.

Quoting Bob Ross
Something can't be plausible if it can be proven to have no potential (and it doesn't necessarily have to be "I've experienced the exact, contradictory, event to this claim, therefore it is an irrational induction":


So to adjust this sentence with the defined terms we have so far: "Something can't be plausible if it can be proven to be impossible (distinctive or applicable)." Something can't be plausible if it contradicts what is possible in both our distinctive and applicable knowledge.

Quoting Bob Ross
I could make subjective thresholds for what constitutes "experiencing something before" that renders possibilities utterly meaningless.


True of everything. But can we turn it the other way, and make a threshold of what constitutes "experiencing something before" that renders possibilities meaningful when applied to reality? Yes. Can we do the same with potentiality? So far, I don't believe a definition of the word has been created that can be applied to reality consistently, clearly, and in a way that cannot be replaced by another word.

Quoting Bob Ross
Potentiality doesn't pertain to the "truth" of the matter, just a requisite to what one should rationally not pursue. It is a deeper level, so to speak, of analysis that can meaningfully allow subjects to reject other peoples' claims just like what you are describing.


Perhaps potentiality describes the hierarchy of induction itself then? In essence, the hierarchy allows us to rationally dismiss beliefs of a lower hierarchy that compete with ours. If I believe I have a 1/52 chance of pulling an ace of spades, and someone says, "Well, it's possible you could not pull an ace of spades," it's not going to change the odds. The idea that an evil demon could change the result of the card, destroying my odds, is a plausibility that can be dismissed as well. And someone coming up with the idea that it's actually a 1/53 chance is an irrationality I can outright dismiss.

That being said, I do believe the difference in hierarchy levels should temper how quickly you dismiss a counter belief. A belief one level removed should always be considered, to ensure your currently held belief is correct. But if you find upon re-evaluation that your level of the hierarchy still holds, you may dismiss it. Perhaps this is what you mean by "potential"? The difference in hierarchy levels determines how much consideration you should give a belief when rationally thinking about it?

Quoting Bob Ross
I think I'm going to stick with evaluating inductions in terms of rationality, instead of potentiality.

That is absolutely fine! My intention is not to pressure you into reforming it, but I do think this is a false dichotomy: this assumes potentiality is a separate option from rationality.


Please continue to defend your viewpoints on potentiality. I have not thought on it at length until now, and I may have mentioned that the hierarchy is a baseline that can be used to build something more. I think at this point, to construct potentiality as a viable term, it will need to:

a. Have a clear definition of what it is to be applicably known.
b. It must have an example of being applicably known.
c. Serve a purpose that another applicably known term cannot.

Quoting Bob Ross
I can say it is possible to perform addition because I have experienced it before, I cannot say that it is possible to add 3 trillion + 3 trillion because I haven't experienced doing that before with those particular numbers: I am inducing that it still holds based off of the possibility of the operation of addition.


To clarify again, the process of addition is distinctive knowledge. Adding the abstract of 3 trillion identities to 3 trillion identities will always result in 6 trillion identities, because that is the logic of discrete experience. Induction only occurs when we apply this to reality. What essential properties make up each identity of the first 3 trillion in reality? The second 3 trillion? What counts as adding them to become the new 6 trillion identity? Is it their proximity? Ownership? Time and place? If we can applicably know these identities, then we can apply the logic of identities, math, and applicably know the outcome.
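On the distinctive side, at least, nothing stops us from carrying out the 3 trillion case mechanically; a quick sketch (Python integers are arbitrary precision, so the size of the numbers is no obstacle):

```python
# Adding 3 trillion abstract identities to 3 trillion more
three_trillion = 3 * 10**12
total = three_trillion + three_trillion
print(total)  # 6000000000000
```

Whether 6 trillion such identities can actually be picked out in reality is, as the paragraph above notes, a separate inductive question.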

Quoting Bob Ross
I agree, but this doesn't mean it holds for all numbers. We induce that it does, but it isn't necessarily the case. We assume that when we take the limit of 1/infinity that it equals 0, but we don't know if that is really even possible to actually approach the limit infinitely to achieve 0.


In this case, we distinctively know the answer. A limit means that the calculation will never result in 0. It is not ascertaining specifically how small that calculation can get. It's just a deduction that it will never arrive at 0. An induction would be, "If I apply the calculation with X numbers, I will get the result .0000000124." You'll have to actually do the calculation to applicably know whether that belief is true or not.
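A quick numeric illustration of that deduction: 1/n shrinks without bound as n grows, yet for any finite input it never arrives at 0.

```python
# 1/n for ever larger n: arbitrarily small, never zero
for n in (10, 10**6, 10**12):
    print(n, 1 / n)

# For every finite n tried, the result stays strictly above 0
print(all(1 / n > 0 for n in (10, 10**6, 10**12)))  # True
```

The finite checks are, of course, only illustrations; the claim that 1/n never equals 0 is the deduction itself.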

Quoting Bob Ross
Likewise, we know that if there are N distinct things that N + 1 will hold, but we don't if N distinct things are actually possible (that is the induction aspect, which I think you agree with me on that, although I could be wrong).


This is correct. We distinctively know the abstraction of N identities plus one more will always result in F(N). But if we apply this math to reality, to see if there are actually N identities in existence, we are using an induction that must be verified.

Quoting Bob Ross
Yes, I may need a bit more clarification on this to properly assess what is going on. Your example of the pink elephant is sort of implying to me something different than what I was trying to address. I was asking about the fundamental belief that you think and not a particular knowledge derived from that thought (in terms of a pink elephant). I feel like, so far, you are mainly just stating essentially that you just think, therefore you think. I'm trying to assess deeper than that in terms of your epistemology with respect to this concept, but I will refrain as I have a feeling I am just simply not understanding you correctly.


I distinctively know that I think of a pink elephant. If I believe that a pink elephant exists in the next room, I have to go through the steps of applying that to reality to applicably know if that's true or not. This is just like math. I distinctively know N+1=F(N), but when I apply that to reality, I have to go through the steps that show it can be applied without contradiction by fleshing out exactly what it is I'm adding.

Quoting Bob Ross
Yes, but your essays made it sound like probability is its own separate thing and then you can mix them within chains of inductions. On the contrary, I think that "probability" itself is actually, at a more fundamental level, contingent on possibility and plausibility for it to occur in the first place.


Let's see if we can break this down. If I applicably know the cards in a deck, and applicably know that neither I nor the person doing the shuffling can know the resulting order, then I can claim probability directly based upon applicable knowledge. Possibility is underneath probability in that a probability is a calculated possibility with limits. A possibility alone has no assessment of calculated limitations. It's possible that I can draw a card. It's probable, at a 4/52 chance, that the card is a jack.
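The jack example can be made concrete; here is a small sketch of the "calculated possibility with limits" (assuming a standard 52-card deck with 4 jacks, as in the example):

```python
from fractions import Fraction

# 4 jacks in a 52-card deck: the defined limits that make this a probability
deck_size = 52
jacks = 4
p_jack = Fraction(jacks, deck_size)
print(p_jack)  # 1/13
```

The calculation is only available because the limits (52 cards, 4 jacks) are applicably known; with no such limits there is nothing to divide.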

Another great deep dive Bob! I hope that clarified numbers a bit, and also gave you a set of points you could use to define potential in a way that fits within the epistemology. I look forward to your responses as well!
Bob Ross January 23, 2022 at 19:01 #646847
Hello @Philosophim,

I think at this point to construct potentiality as a viable term it will need to

a. Have a clear definition of what it is to be applicably known.
b. It must have an example of being applicably known.
c. Serve a purpose that another applicably known term cannot.


I appreciate that you put your concerns (with respect to potentiality) in such a concise manner, as it really helps me, on the flip side, really hone in on what I am trying to say. I've never been the best at explanations. So thank you! I will attempt to address this in my post hereafter.

I think, upon further reflection, we are both conflating potentiality and possibility to a certain extent in the process of trying to dissect the colloquial use of "possibility". Potentiality is "what is not contradicted in the abstract", whereas possibility is "what has been experienced before". When you define possibility in that manner, I think you are implicitly defining it as "I've experienced X before, because I've experienced X IFF X==X". Therefore, assuming we don't get too nitpicky with a more strict comparison X===X, possibility is like "I've experienced an Orange before, because I've experienced an Orange IFF 'Orange'=='Orange'". Therefore, when you say:

So, if you have all of those answers, then you can state: since it is possible to line up a candy bar in X manner, it is possible that a candy bar will be able to be lined up if X manner is repeated. Because there is no claim that the candy bar should not be able to stand if X manner is repeated, it stands to reason that if we could duplicate X manner many times, say 3,000, the candy bars would stand aligned. But if we've never aligned a candy bar even one time, we don't applicably know if it's possible


You are stating it is possible to line up X manner repeated because "You've experienced 'X manner repeated' before, because you've experienced 'X manner' IFF 'X manner repeated' == 'X manner'". But that IFF does not hold, just like how 'X + 1' != 'X'. Even if you have experienced lining up 2,999 of those particular candy bars in question, and you knew all the other things you mentioned were possible (such as that aligning candy bars is possible, horizontally lined up, etc), you would not be able to claim, according to your definition, that it is possible to line up 3,000. What is missing here, and what I think you are also trying to maintain, is potentiality: the abstract consideration. What you claimed is correct, but it is because you abstractly determined, via the mathematical operation of repetition, that there is the potential for lining up 3,000 candy bars. Likewise, when you define impossibility in this manner:

Applicable impossibility, is found when new applicable knowledge contradicts our previous possibilities.


What you are stating is the converse of possibility, something like "I've experienced X contradict previous experience Y, IFF X disallows Y". This would entail that you have to directly experience the converse, such that "I've experienced X before, which is contradicted by this experience Y, therefore X is impossible". Notice this also disallows abstract consideration. It is:

"I've experienced a cup holding water, therefore it is possible for a cup to hold water"
"I'm now experiencing cups not being able to hold water, therefore it is impossible for them to hold water"
"The most recent experience out of the two takes precedence"

But then I think you introduce potentiality here into impossibility:

Likewise, without ever experiencing it, I can hold that it is irrational to believe that one can fit 7,000 2 in long candy bars, side by side long ways, within 1,000 feet (because, abstractly, 1,000 feet can only potentially hold 6,000 2 inch candy bars side by side).


There is an asymmetry between possibility and impossibility in your usage of the terms: the former has no abstract consideration while the latter does (aka, the latter allows for potentiality as a consideration whereas the former does not). What I understand you to hold here is that you can hold that it is impossible to fit 7,000 2 in long candy bars, side by side long ways, within 1,000 feet because you have abstractly considered its lack of potential. You have not determined this based off of "I've experienced the converse of X, which contradicts Y", therefore you haven't determined it an "impossibility" or "possibility", as they both are contingent on the experiences. No, you did not utilize anything except the abstract induction of mathematical operations to warrant it impossible (I'd say you actually warranted it, more specifically, as lacking potential). Admittedly, I have also been conflating potentiality and possibility in our discussion because it is a hard thing to separate. But they are two distinct things. Yes, I am still utilizing experience to do math in the first place, but I am not experiencing the direct converse for something to be considered lacking potential. But, according to your terms, I am also not stating that "I've experienced X before, which contradicts Y". I am stating "I've experienced X before, and the extrapolation of X contradicts Y in the abstract". For example, consider the following:

I claim something is either (1) green, (2) not green, or (3) other option

This does, eventually, boil down to the law of noncontradiction, but, in the immediate, it is the law of excluded middle. What I am trying to explicate is that the rejection of #3 as being a "possibility" isn't experientially based--as in, I am not negating the usage of #3 in terms of "I've experienced X, which contradicts Y". I am considering this purely in the abstract and rightfully concluding it cannot have any potential to occur. The reason this feels like a sticky mess to me, and maybe for you too, is that this is traditionally how "possibility" was also used: it had multiple underlying meanings.

So let's go back to this:

I think at this point to construct potentiality as a viable term it will need to

a. Have a clear definition of what it is to be applicably known.
b. It must have an example of being applicably known.
c. Serve a purpose that another applicably known term cannot.


A is:

"what is not contradicted in the abstract"

Although I don't think abstraction has to be directly applicably known (like I would have to go test, every time, the usage of mathematical operations past what has been previously experienced), I think B is:

Abstraction is the distinctive knowledge, which is applicably known to a certain degree (i.e. I applicably know that my perceptions pertain to impenetrability and cohesion, etc), that is inductively utilized to determine potentiality.

C is:

The defining of "possibility" as "I've experienced X before, because I've experienced X IFF X==X" removes the capability for the subject to make any abstract determinations, therefore potentiality is a meaningful distinction not implemented already in possibility (and likewise for impossibility).

I think that this is a good start to spark further conversation, so I think we can revisit some of the other things you demonstrated in your post after we find some common ground on the aforementioned.

I look forward to hearing from you,
Bob
Deleted User January 24, 2022 at 22:00 #647251
Reply to Philosophim Very good work for a first, full-throated go at epistemology. However, if you haven't familiarized yourself with the first few chapters, at least, of Introduction to Objectivist Epistemology, I highly recommend you do so. Rand perfectly explicates the concepts you attempted to formalize here in this work. Nice job!

-G
Philosophim January 25, 2022 at 12:58 #647443
Reply to Garrett Travers Thank you! No, I have not read it. Due to limited time this morning, I only took in some general concepts. While we may have some similar beginnings, I believe we diverge. The first part of the epistemology I've proposed here is very similar to many other theories of epistemology. But where people build from that tends to diverge. Have you read all four parts? I'm quite certain I take a few turns from Rand.
Deleted User January 25, 2022 at 13:26 #647446
Reply to Philosophim I did, yes. And I do think you both diverge, but the constituent elements are present in both epistemologies that hearken to the same objective process by which humans allocate information through abstraction via the senses, and then formulate those abstractions into applicable concepts. Plus, I like the explication on "(3) other option," as this leaves room for Hume's problem of induction; which, by the way, I believe I have actually solved after all these years.
Philosophim January 25, 2022 at 14:03 #647457
Quoting Bob Ross
Potentiality is "what is not contradicted in the abstract", whereas possibility is "what has been experienced before".


I rather like your definition of potentiality here. I think it hammers home what we've been trying to get to. However, I think we can also see the problem with it. Almost every single inductive belief is not contradicted in the abstract. Meaning at best we describe all inductions besides irrational inductions, and an irrational induction is simply one that is not rational. This in turn implies that potentiality is a subset of rationality: "that which is not contradicted in the abstract."

It is not the identity I am critiquing, it is the word. "Potentiality" as a word implies something beyond this strict reading; it seems to also go along with "what is possible". What is not contradicted in the abstract is not necessarily possible, as we've discussed. The division between possibility and plausibility has been the focus of the last several posts of discussion. That is because there is an innate human desire to believe that if there is no contradiction in the mind, it must be possible in reality.

But that is a belief, and not rational. Rationally, something that is not contradicted in the mind may have no bearing on whether it is contradicted when applied to reality. But perhaps we can create two identities that try to contain what you are saying while being consistent with the theory. As you can note, I have constantly divided beliefs into two camps: distinctive, and applicable. There are two identities that we could examine then.

1. A belief which is not contradicted by other beliefs.
2. Distinctive knowledge applied to reality which is not contradicted by other distinctive knowledge.

In the second case, this is a different way of describing applicable knowledge. The first case, is distinctive knowledge. Distinctive knowledge is exactly what you describe. When we create distinctive knowledge, we then have to have a reason to attempt to apply that to reality. Rationally, we would want to apply something that we believe to have no contradiction to reality, over a series of contradictory thoughts to reality.

Recall that to know something, there must be an application of essential properties. To apply our distinctive knowledge to reality while expecting an outcome is always an induction; it's always a belief. While it is more cogent, and arguably "safer", to stay within the higher tiers of inductions such as probability and possibility, you will never find new possibilities in the world if you do not explore plausibilities. When we explore plausibilities, we believe there is a chance they are real. But we must also temper our minds with the understanding that there is an equally unspecified chance that they are not real.

Perhaps "potentiality" could be used to describe the drive that pushes humanity forward to extend outside of its comfort zone of distinctive knowledge, and make the push for applicable knowledge. The drive to act on beliefs in reality. But what I think you want, some way to measure the potential accuracy of beliefs, is something that cannot be given. There is no way to measure whether one plausibility is more likely than another in reality; we can only measure whether one plausibility is more rational than another, by examining the chain of reasoning it's built on.

This is because the nature of induction makes evaluation of its likelihood impossible by definition. An induction is a conclusion that does not necessarily follow from the premises. As we've seen with probability, coming up with odds requires defined limits. An induction may be built upon deductions, which have defined limits, but there comes a part of the claim which is not defined by limits. Without limits, we cannot evaluate whether it is more likely to come to pass than another claim which is not defined by limits. The only way to know is to take that chance, that risk, and apply it to reality and see what happens.

Quoting Bob Ross
"I've experienced a cup holding water, therefore it is possible for a cup to hold water"
"I'm now experiencing cups not being able to hold water, therefore it is impossible for them to hold water"
"The most recent experience out of the two takes precedence"


For clarification, if you recall the second paper (it's been a while now!), when we are faced with a contradiction of our applicable knowledge with new applicable knowledge, we have several options for dealing with it. We could create a new term. We could adjust our context, which essentially modifies the knowledge we use to avoid the contradiction. Or we can just state that one of the things we applicably knew is wrong and can no longer be applicably known.

So while I could conclude that it is impossible for a cup to hold water, that is now a new belief that must be applicably known, not just concluded in your mind. What are the essential properties of a cup? Can you find objects that have those properties, but some hold water, and some don't? Do you need to adjust what you define by a cup? Perhaps the essential property of holding water, should become a non-essential property. There are lots of ways to approach it.

It is not that the most recent experience of the two takes precedence, it is that the most recent experience of a cup challenges your applicable knowledge. Right now you are making an induction as to what that means. You can induce, "It is impossible for a cup to hold water now," but is that applicable knowledge? You must apply that belief to reality, and see if it "holds water".

Quoting Bob Ross
What I am understand you to hold here, is that you can hold that it is impossible to fit 7,000 2 in long candy bars, side by side long ways, within 1,000 feet because you have abstractly considered its lack of potential.


If you are stating that, through distinctive knowledge, we conclude it is impossible to fit X > Y feet into Y feet, then yes. If you apply this to reality, you must be very specific with the properties of the material in use. A lack of known applicable knowledge in its application means you are working with a plausibility. Since candy bars are malleable, I very well could jam that many candy bars together. If I note I can only use material that is not malleable, then I would be creating a belief that is a possibility.

Since it is possible to find material that is not malleable, and can be stacked or lined together, then I know it is not possible to jam more of those material into a space that is smaller than the entire measurement of those materials. The possibility in this instance, is that it will not fit. We have never experienced in reality, a situation in which unmalleable material can fit in a space smaller than its dimensions.

Quoting Bob Ross
I am stating "I've experienced X before, and the extrapolation of X contradicts Y in the abstract".


You are stating a possibility or plausibility depending on how you word it. If you are combining two possibilities to show that a plausibility cannot occur, you have stated something distinctively impossible. If you are using a possibility to construct a plausibility, or something that is not contradicted by other possibilities in your mind, then you are not stating an impossibility, only a plausibility. Holding to a distinctive impossibility and applying it to reality is an irrational belief.

But, what is impossible in our distinctive knowledge, may not be impossible when applied to reality. Because inductions are again, beliefs. We may believe something to be impossible, but it may not be impossible when applied to reality.

So, I think the difficulty is in separating the two types of knowledge. Impossibility, is no longer a general word that dictates what can, and cannot be. There is an impossibility within distinctive knowledge, and there is an impossibility within applicable knowledge.

With ALL of this covered, lets go to your break down of potentiality.

Quoting Bob Ross
"what is not contradicted in the abstract"

Although I don't think abstraction has to be directly applicably known (like I would have to go test, every time, the usage of mathematical operations passed what has been previously experienced)


You are correct. Distinctive knowledge does not have to be applicably known. Applicable knowledge is a claim that what we distinctively know, can be applied to reality without contradiction. But we can hold any distinctive knowledge as long as we don't assume it can be applied to reality without contradiction.

Quoting Bob Ross
but I think B is:

Abstraction is the distinctive knowledge, which is applicably known to a certain degree (i.e. I applicably know that my perceptions pertain to impenetrability and cohesion, etc), that is inductively utilized to determine potentiality.


There is no requirement of applicable knowledge for distinctive knowledge. Distinctive knowledge is what we use as a basis for our inductions about reality. But it can exist without such application.

Quoting Bob Ross
C is:

The defining of "possibility" as "I've experienced X before, because I've experienced X IFF X==X" removes the capability for the subject to make any abstract determinations, therefore potentiality is a meaningful distinction not implemented already in possibility (and likewise for impossibility).


I do not see this. Possibilities do not remove the capability of making abstract determinations. I can create the image of a unicorn in my head by taking the distinctive knowledge of a horn, and putting it on the distinctive knowledge of a horse. I can have it run around in my head casting magic and flying through the air leaving a rainbow behind it.

If I think I can find such a thing in reality, I just have to realize it's not a possibility, just a plausibility. The hierarchy of inductions is all about assessing which are the more cogent beliefs about reality. It does not say we cannot use them.

I am out of time this morning, but I want to post this to you while it is fresh in my mind. Please feel free to follow up on this!
Alkis Piskas January 25, 2022 at 19:17 #647561
Reply to Philosophim
Quoting Philosophim
Part 1 The basics of knowledge
"Any discussion of knowledge must begin with beliefs. A belief is a will, or a sureness reality exists in a particular state."


Knowledge consists of facts, information and skills acquired through experience or education.
A belief is an acceptance that something exists or is true, especially one without proof.

Beliefs are not knowledge. And in most cases they do not reflect "sureness". Of course, I may say "I believe that ..." and state some fact or something I know well, but it's only an expression, a figure of speech.

Do you believe that "cats are animals" or do you know that "cats are animals"? Do you believe that the "Earth is round" or do you know that the "Earth is round"? People, and Copernicus himself, might not have been certain about that before the latter stated it as a fact. And after some time, it has become common knowledge. A belief is something like a hypothesis. When it is proved true, it is a fact.

I'm sorry for not being able to go further in this topic, because it starts and is based on a wrong assumption. I only wanted to point this out.
Philosophim January 25, 2022 at 22:31 #647606
Quoting Alkis Piskas
I'm sorry for not being able to go further in this topic, because it starts and is based on a wrong assumption. I only wanted to point this out.


That's a fairly dishonest reading. I never claimed beliefs were knowledge. I claimed that before we start with knowledge, we had to start by looking at beliefs. It's just an introduction to a paper, not the claim you're presupposing. Knowledge indeed consists of facts, information, and skills acquired through experience. If you had read just until the end of the page, I think you would have understood where this was going.
Alkis Piskas January 26, 2022 at 09:05 #647837
Reply to Philosophim
I don't think that my reading was "dishonest" --actually, a more correct word would be "unfair"-- because I didn't read the whole thing. And I didn't say that the whole description of the topic was wrong. (Of course, since I didn't read everything!) I didn't criticize anything either. As I said, "I only wanted to point this out". I explained why stating that "A belief is a will, or a sureness reality exists in a particular state" is a wrong assumption, because belief is not sureness. I also gave examples why (something that most people don't do). But since you ignored all that, considering maybe that it is just a false idea of mine, here's another reference that describes well the difference between belief and certainty:

"Belief is the state of mind in which an individual is convinced of the truth or validity of a proposition or premise regardless of whether they have adequately proved or attempted to prove their main contention. Certainty is the state of mind in which an individual is convinced of the validity, truthfulness, or evidence of a proposition or premise. While believing is possible without warrant or evidence, certainty implies having valid evidence or proof." (https://www.newworldencyclopedia.org/entry/Belief_and_Certainty)

And as for my "dishonest reading", if your introduction started with something more plausible, I would certainly read more, since "knowledge" is a hot subject for me. But I always stop reading something when it starts and is based on a wrong assumption. Well, this is me! :smile:
Philosophim January 26, 2022 at 17:10 #647953
Quoting Alkis Piskas
But I always stop reading something when it starts and is based on a wrong assumption. Well, this is me! :smile:


Fair. :smile:

I would say though that sureness is not the same as certainty. The intention is to use a word that conveys some conviction, assumption, or emotional indicator that compels a person to hold that X is worth believing. I even posted the word "will" next to it, so you would understand the context of what I was trying to convey.

Look at it this way, what makes you believe anything? For most beliefs, there is some type of conviction behind it. Regardless, you may not like the essay, because you have a prescriptive outlook on what I should be saying, instead of trying to understand what I'm intending to say. As this is an exploratory essay, and not a repeat of what is already known as fact, the latter intention is what is needed when approaching the paper. I do appreciate your comment, and your polite follow up!
Alkis Piskas January 26, 2022 at 18:52 #647982
Quoting Philosophim
I would say though that sureness is not the same as certainty.

"Sureness" from Merriam-Webster (My favorite dictionary, Oxford LEXICO, doesn't have it! :sad:)
"A state of mind in which one is free from doubt."
"Certainty" from the same dic:
"A state of mind in which one is free from doubt." !
(Check for yourself if you don't believe me! :smile: https://www.merriam-webster.com/thesaurus/sureness and https://www.merriam-webster.com/thesaurus/certainty)
Well, dictionaries are not perfect, of course. There may be nuances between them. But they are certainly (surely :smile:) synonyms.

Whatever the case, both terms, as well as "belief", are certainly (surely :smile:) totally different from "knowledge" and are connected to it only as a sequence, i.e. from a belief one can pass to knowledge, which was my main point. Again, I bring in the definition of knowledge, for a "fresh" comparison:
"Knowledge" from the same dic:
"Information, understanding, or skill that you get from experience or education"

Please note that I don't rely totally on dictionaries. I use them mainly as a common reference. If I know a term well, i.e. it is "solidly real" for me, I rely more on my own understanding of the term, and sometimes I add elements to the dictionary definitions if I deem that they are important. But in the present case, I don't need to. Things speak for themselves! :smile:

Quoting Philosophim
I even posted the word "will" next to it, so you would understand the context of what I was trying to convey.

I have to confess that I have not understood "will" in this context, even after having looked it up!
But see, this would create (more) confusion, anyway. Also, whatever you mean by it, the second part --or a sureness reality exists in a particular state-- was more important and enough to raise a protest in me! :grin:

Quoting Philosophim
Look at it this way, what makes you believe anything?

I gave you examples on this.

Quoting Philosophim
For most beliefs, there is some type of conviction behind it.

True. But this doesn't change much what I pointed out, does it? :smile:

Quoting Philosophim
you may not like the essay, because you have a prescriptive outlook on what I should be saying

This is not true. I told you that I cannot judge the rest of the discussion and that I would have continued reading if I had read a more plausible introduction. I think this is fair, no?

And, as you can see, I like to converse with you! :smile:
Philosophim January 27, 2022 at 03:42 #648164
Reply to Alkis Piskas Quoting Alkis Piskas
And, as you can see, I like to converse with you!


Thanks for the contribution to the OP! I'll see you around.
Alkis Piskas January 27, 2022 at 06:55 #648219
Reply to Philosophim
:up: I have contributed to 100 topics up to now. Only you and another OP owner have thanked me for that!

(There were even a lot (27) of OP owners who didn't even respond to my reply on their topics!)
Bob Ross January 28, 2022 at 02:11 #648489
Hello @Philosophim,
I apologize for such a belated response: I've been quite swarmed recently.

Almost every single belief of induction is not contradicted in the abstract. Meaning at best we describe all inductions besides irrational induction.


I think that the first sentence here is sort of like survivorship bias: it isn't that almost every single induction has potential, it is that all beliefs of induction that hold any substance at all have potential, therefore all the ones that have survived enough for both of us to hear tend to have potential. Most people naturally revoke their own inductions that have no potential without ever verbalizing them, because it is the first aspect of consideration in the process of contemplation. What I am trying to say is that I wouldn't post an inductive belief on here if I was well aware that it had no potential. So I agree, but I don't think it implies what I think you are trying to imply: it doesn't mean that potentiality isn't a worthy, or relevant, consideration just because most don't make it out of our heads to other people. I agree with you that potentiality doesn't get the subject to a completely working, solid claim of knowledge.

With respect to the second sentence, I holistically agree! The point I am trying to make is that "irrational induction" is not just what is contradicted by direct experience but, rather, it is also about whether it is contradicted in the abstract. I think it may be the appropriate time to elaborate on what I mean by abstraction. A contemplation resides in the abstract, in a pure sense, if it isn't pertaining to particular experiences but, rather, is utilizing a combination of those experiences and/or a generic form of those experiences. For example, the consideration of 1 "thing" + 1 "thing" is 2 "things" is purely abstract because it doesn't pertain to particular experiences. Although we could dive deeper into what abstraction really is, I am going to intentionally keep it this vague so you can navigate the discussion where you would like. In light of this, the example of fitting malleable (as you rightly mentioned) candy bars into dimensions that cannot hold them is irrational not because it has no possibility (I am not negating it based off of a direct experience), but because it lacks any potential. This is an irrational induction. What also is an irrational induction, but not based off of abstraction, would be if I were to hold the belief that some particular apple is poisonous despite having experienced a person eating that exact apple and showing zero signs of poisoning. In this case, no abstraction is needed: the particular experience is enough to warrant it as an irrational induction. That is essentially what I was trying to convey.

Rationally, something that is not contradicted in the mind may have no bearing as to whether it is contradicted when applied to reality.


I agree. I am not attempting to claim that something that has potential necessarily is possible (which is what I think you are getting at here). I am attempting to claim that something that has potential is more cogent than something that lacks potential.

Perhaps "potentiality" could be used to describe the drive that pushes humanity forward to extend outside of its comfort zone of distinctive knowledge, and make the push for applicable knowledge. The drive to act on beliefs in reality.


No, I don't think it is the drive: it is what most subjects do inherently (and what everyone does who has legitimately subscribed themselves to the game of rationality--I would argue). It is an important aspect of what constitutes an irrational induction. Without it, I think your epistemology is constrained to the apple example I gave previously: what is irrational is what is impossible. I am saying: what is irrational is what is impossible and has no potential.

But what I think you want, some way to measure the potential accuracy of beliefs, is something that cannot be given.


To a certain degree, I agree with you. Potentiality in itself does not warrant a belief accurate, but the lack of potentiality warrants it necessarily inaccurate. In order for me to properly assess potentiality, I think that we ought to define the definition of possibility (define what it means to experience something before), because this greatly determines what is considered abstract. So, how are you defining what "you've experienced at least once before"?

There is no way to measure whether one plausibility is more likely than another in reality, only measure whether one plausibility is more rational than another, by examining the chain of reasoning it's built on.


I think you are wrong, but actually right. I think we can most definitely compare plausibilities in terms of induction hierarchies within it--not in terms of probabilistic quantitative likelihoods. But before I can get into that, I need to do some defining. First, I need to define the relations within the induction hierarchies, so here's how I will be defining it (all of which are open to redefining if you would like):

The Induction = The induction being proposed.
the grounding inductions = The inductions that The Induction is contingent on, which ground it to the subject (derive back to the subject).
induction hierarchy = The Induction considered with respect to its grounding inductions, which can be considered a holistic analysis of The Induction.
components = The distinct claims within The Induction (more on this later).
characteristic = An attribute, descriptor within a component (more on this later).

To summarize what is defined above, The Induction is simply the actual induction that the subject is making, whereas the grounding inductions are, as we previously discussed, what the subject will consider in a holistic analysis of The Induction. The induction hierarchy should be pretty self-explanatory, as it is that holistic analysis of The Induction. The components are where it gets interesting: the components are what distinctively make up The Induction, and the characteristics are, quite frankly (not to use the word to explain it, but I am definitely about to do that), the characteristics of the components.

Let's go through some examples real quick. So, for all intents and purposes right now, let's consider the induction hierarchy as a horizontal holistic analysis, like this:

possibility -> possibility -> plausibility

The two possibilities would be the grounding inductions, the plausibility The Induction, and the whole thing is the induction hierarchy. The components of The Induction are going to be formatted like this (I just made it up, no real rhyme or reason):

possibility -> possibility -> plausibility: (component1, component2, ...)

I am merely separating the components from The Induction with a colon and encompassing them with parentheses. Also, I will put distinctive knowledge, although it isn't an induction, in the chain (for consistency) like this:

[distinctive knowledge1, distinctive knowledge2, ...] - possibility -> possibility -> plausibility: (...)

Note that I am not claiming that distinctive knowledge is a part of the induction hierarchy, just that it is grounds for it (one way or another). Also, the characteristics are within the components, so I won't have any special characters for those; instead, I am going to bold them.

Now that that is out of the way, let's dive in! Let's use our favorite example: unicorns (: . Let's say I claim this:

1. There are horses (distinctive knowledge)
2. There are horns (distinctive knowledge)
3. It is possible for animals to evolve into having horns (evolution) (possibility)
4. It is plausible that a horse with a horn could exist (plausibility)

Now, we can map this into our induction hierarchy like this:

[horses, horns] - evolution -> unicorn

But we can go deeper than this with components:

[horses, horns] - evolution -> unicorn: (horned horse)

The components are the specific distinctive claims within The Induction itself. In this case, I limited the claim to a horned horse: that is the sole component of my induction and the characteristic is horned. To really illuminate this, let's take a similar claim:

1. There are horses (distinctive knowledge)
2. There are horns (distinctive knowledge)
3. It is possible for animals to evolve into having horns (evolution) (possibility)
4. It is plausible that a horse with a horn and the ability to turn invisible could exist (plausibility)

I can map this one like so:

[horses, horns] - evolution -> unicorn: (horned horse, invisibility capabilities)

Now there are two components to my inductive claim. I think that this is incredibly useful for comparing two plausibilities. At first, I thought I could utilize the sheer quantity to determine the cogencies with respect to one another. I was wrong, it gets trickier than that because the components themselves are also subject to an induction hierarchy within themselves. I can claim that it is possible for an animal to evolve into having a horn, but I cannot claim that an animal has evolved into being invisible (assuming we aren't talking about camo but actual invisibility), so the components themselves are not necessarily as cogent as each other. Therefore, I must take this into consideration.

[horses, horns] - evolution -> unicorn: (horned {possible characteristic} horse)
[horses, horns] - evolution -> unicorn: (horned {ditto} horse, invisibility {plausible characteristic} capabilities)

Therefore, #1 is more cogent than #2, not due to the sheer consideration of quantities of components, but the quantity in relation to an induction hierarchy within the component itself. In other words, a plausibility that has one component which is based off of a possible characteristic is more cogent (doesn't mean it is cogent) than one that has component which is based off of a plausible characteristic.

For example:

[horse, horns] - evolution -> unicorn: (horned horse, has scaly skin)
[horse, horns] - evolution -> unicorn: (horned horse, invisibility capabilities)

The first is more cogent than the second because we can be more detailed with the components like this:

[horse, horns] - evolution -> unicorn: (horned {possible characteristic} horse, has scaly skin {possible characteristic})
[horse, horns] - evolution -> unicorn: (horned {possible char} horse, invisibility capabilities {plausible char})

Therefore, #1 is more cogent than #2 when analyzed from the perspective of quantities (which are equal in this case) and in relation to the type of induction the characteristic is. So:

[horse, horns] - evolution -> unicorn: (horned {possible characteristic} horse, has scaly skin {possible characteristic})
[horse, horns] - evolution -> unicorn: (invisibility capabilities {plausible char})

Even though #2 has only one component, that component is a plausible characteristic and #1 has two possible characteristics--therefore, #1 is more cogent because a possibility is more cogent than a plausibility. However, if it were the case that plausibility #1 had 3 plausible characteristics while #2 had 2 plausible characteristics, then #2 would be more cogent. I am simply applying the same induction hierarchy rules a step deeper to analyze plausibilities. When I state that a component contains a "possible characteristic", note that I am not trying to claim that that characteristic is possible with respect to the subject it is describing; I am merely distinguishing characteristics that have been experienced before from ones that haven't been (some are just figments of our imagination, quite frankly). However, it isn't just about the relation to an induction hierarchy within the component itself: it is also about the quantity, but the quantity is always second (subordinate) to the consideration of the relation. For example:

[horse, horns] - evolution -> unicorn: (horned {possible characteristic} horse, has scaly skin {possible characteristic})
[horse, horns] - evolution -> unicorn: (horned {possible characteristic} horse)

First we consider the relation in terms of the characteristics within the components, as it takes precedence over quantity, and we find that both claims utilize possible characteristics. Now, since they are equal in relation, we must consider the quantity: #1 has two components while #2 has one. We must keep in our minds at all times that these are components of a plausibility and, therefore, a plausibility with more components of the same induction type is less cogent than one that has fewer of that type. This is because the more components I add, the more speculation I am introducing and, most importantly, in this case, I am adding more of the same type of speculation. Therefore, #2 is more cogent than #1.

I hope that serves as a basic exposition into what I mean by "comparing plausibilities".
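Since the comparison rule above is effectively an ordering procedure (characteristic type first, then component count), it can be sketched in a few lines of Python. This is my own illustration of the rule as I've described it, not anything from Philosophim's papers, and the names are mine:

```python
# Rank the induction type of a characteristic: a possibility is more
# cogent than a plausibility, so it gets the lower number.
RANK = {"possible": 0, "plausible": 1}

def cogency_key(characteristics):
    """Return a sort key where a LOWER key means a MORE cogent plausibility.
    Rule 1: the least cogent characteristic type present takes precedence.
    Rule 2: among equals, fewer components means less speculation."""
    worst = max(RANK[c] for c in characteristics)
    return (worst, len(characteristics))

# The unicorn examples from above:
scaly = ["possible", "possible"]   # horned horse, scaly skin
invisible = ["plausible"]          # invisibility capabilities
horned_only = ["possible"]         # horned horse

# Two possible characteristics still beat one plausible characteristic...
assert cogency_key(scaly) < cogency_key(invisible)
# ...but among all-possible claims, the one with fewer components wins.
assert cogency_key(horned_only) < cogency_key(scaly)
```

Python's tuple comparison handles the "type takes precedence over quantity" ordering for free, which is why the key is a pair rather than a single score.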

This is because the nature of induction makes evaluation of its likelihood impossible by definition


We aren't really using likelihoods to compare plausibilities and if we are, then it is a qualitative likelihood of some sorts. I am going to stop here as this is getting quite long (:

I look forward to hearing from you,
Bob
Philosophim January 28, 2022 at 14:00 #648620
I finally think I see what you have been trying to tell me about "potential". I knew you saw something there that I was unable to grasp, but I think I at last understand what it is.

Quoting Bob Ross
The point I am trying to make is that "irrational induction" is not just what is contradicted by direct experience but, rather, it is also about whether it is contradicted in the abstract.


Yes, I think this works nicely! I think potentiality nicely describes the process of creating the useful distinctive knowledge we come up with. Anything which we come up with in our minds that contradicts our other distinctive knowledge could be said to lack "potential". As long as potential is not used, or is clarified into something like "applicable potential" when being applied to reality, I think we have a clearly defined word that does not have a synonym, and can be applicably known. Do I have the right of it? Feel free to clarify further, but I think I'm seeing the spark you've been thinking about.

Your analysis of the hierarchy is spot on. This is what I've been trying to communicate for a while as well. A great breakdown and example!

Quoting Bob Ross
At first, I thought I could utilize the sheer quantity to determine the cogencies with respect to one another. I was wrong, it gets trickier than that because the components themselves are also subject to an induction hierarchy within themselves.


Correct.

Quoting Bob Ross
[horses, horns] - evolution -> unicorn: (horned {possible characteristic} horse)
[horses, horns] - evolution -> unicorn: (horned {ditto} horse, invisibility {plausible characteristic} capabilities)

Therefore, #1 is more cogent than #2, not due to the sheer consideration of quantities of components, but the quantity in relation to an induction hierarchy within the component itself. In other words, a plausibility that has one component which is based off of a possible characteristic is more cogent (doesn't mean it is cogent) than one that has component which is based off of a plausible characteristic.


Perfect! Yes, this is the conclusion I was hoping you would reach. It's not necessarily quantity we even have to consider. It's just that we have to consider all the essential properties of the grounding inductions (Good phrase!) that build up that induction. Each must be considered within the hierarchy as well. So if you conclude that an induction is built up of two essential properties, one having direct grounds in applicable knowledge while the other is grounded on plausibilities, you can rationally reject the second essential property, but keep the first.

Quoting Bob Ross
However, it isn't just about the relation to an induction hierarchy within the component itself: it is also about the quantity, but the quantity is always second (subordinate) to the consideration of the relation.


You are right on target. Another way to think about it is a chain is built of links. But each link has a chain as well. When I state, distinctive knowledge -> possibility -> plausibility, the chain of reasoning also applies to each base. How did I arrive at that distinctive knowledge? How did I arrive at that possibility? I think you have it.

Quoting Bob Ross
I hope that serves as a basic exposition into what I mean by "comparing plausibilities".


Yes, this is clear, and is what I always intended but did not communicate clearly. When I said that you could not compare plausibilities directly, I meant that you could not do so without analyzing the chain of reasoning behind them. But I never described sub-chains directly like you did. You have written this much more clearly and with greater focus than I have, and it is a wonderful and excellent breakdown!
Bob Ross January 28, 2022 at 22:49 #648772
Hello @Philosophim,

Yes, I think this works nicely! I think potentiality nicely describes the process of creating the useful distinctive knowledge we come up with. Anything which we come up with in our minds that contradicts our other distinctive knowledge could be said to lack "potential".


Yes, I think we are on the same page now!

So if you conclude that an induction is built up of two essential properties, one having direct grounds in applicable knowledge while the other is grounded on plausibilities, you can rationally reject the second essential property, but keep the first.


I agree. But I suspect that you are only referring to the comparison of plausibilities that relate to one another, so I would like to explicitly state that I am claiming that one can compare all plausibilities to one another in this manner. When comparing two completely unrelated plausibilities, it isn't a matter of choosing which one you should hold: it is about which one is stronger, which one you are more sure of. I am not entirely sure if you would agree with me on that.

In light of our recent agreements, I think it is safe for me to move on and explicate some of my other thoughts on your epistemology:

Actual Infinities Are Irrational

I think that, in light of our agreement on potentiality, we can finally prove that actual infinities are irrational inductions. To keep it brief, we can abstractly prove that actual infinities contradict logic: a great example of this is the infinite hotel problem (thought experiment). Therefore, since they contradict logic in the abstract, they lack potentiality. If they lack potentiality, then they are irrational inductions. Therefore, if any induction invokes such a principle, it must be an irrational induction unless it can safely separate itself from any actual-infinite claims it is actively utilizing. For example, if I say it is possible for an apple to exist in all of time and space, I am holding a legitimate possibility induction because I am utilizing a potential infinite, which has limits (in this case, the limits are space/time itself). However, if I say it is possible for an apple to exist within everything (where everything has no limits), then I am holding an irrational induction because actual infinities have no potential. Therefore, I think that your epistemology quite nicely dictates that our inductions can only be rational, in any sense of the term, if they utilize limits (which encompass potential infinites). I think, as you may already be inferring, that this actually has heavy implications with respect to your idea of a "first cause", but I will refrain from continuing down that alleyway unless you want me to.
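For readers unfamiliar with the infinite hotel thought experiment (Hilbert's hotel), the counterintuitive core is that a fully occupied hotel with rooms 1, 2, 3, ... can still admit a new guest by shifting everyone up one room. A small Python sketch of the room-shift map, my own illustration rather than anything from either paper, makes the tension visible (a finite sample can only suggest the infinite case, of course):

```python
def reassign(room: int) -> int:
    """Move the guest currently in `room` to the next room up."""
    return room + 1

# Take a finite sample of the (infinitely many) occupied rooms.
sample = range(1, 10_000)

# Every old guest still gets a unique room: the map never collides.
assert len({reassign(n) for n in sample}) == len(sample)

# Yet room 1 is now vacant, because no guest is ever mapped to it --
# a "full" hotel just made space without evicting anyone.
assert all(reassign(n) != 1 for n in sample)
```

It is exactly this behavior (a proper part matching the whole) that the argument above treats as an abstract contradiction disqualifying actual infinities.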

Mathematical Inductions Are Possibilities

I know we had a lot of disputes about mathematical inductions, and so I wanted to briefly continue that conversation with the idea that mathematical inductions do not require another term, contrary to what I was claiming, because they are possibilities. If I say that F(N) works for all integers, N, I am utilizing my distinctive knowledge to claim that it will hold again. This is no different than gravity: I have experienced it, therefore I say it is possible for it to happen again. At its most fundamental level, with math, I am claiming that my experience is differentiated all the time, therefore that differentiation should hold, theoretically, everywhere and every time. In other words, math is possible. I also see now that you were right in that probability is its own thing, because it takes it a step deeper: it isn't just a possibility.

We Need to Define The Definition Of Possibility

I think that it would be beneficial to really hone in on what it means to have "experienced something before". Where are we drawing the line? Is there a rational line to be drawn?

Distinctive Knowledge is Assumed

I think that your epistemology, at its core, rests on assumptions. Now, I don't mean this as a severe blow to your views: I agree with them. What I mean is that, as far as I am understanding, your epistemology really "kicks in" after the subject assumes that perception, thought, and emotion are valid sources of knowledge. If they agree with that assumption, then your epistemology works. However, since we are philosophizing, I think we really need to hone in on these fundamental principles a little deeper. I think so far your epistemology essentially states:

"We think, therefore we think"
"We perceive, therefore we perceive"
"We feel, therefore we feel"

Just some food for thought! I know these are probably loaded, completely separate, propositions of mine. So feel free to guide the conversation as you wish.

I look forward to hearing from you,
Bob
Philosophim January 29, 2022 at 18:29 #649010
Quoting Bob Ross
But I suspect that you are only referring to the comparison of plausibilities that relate to one another, so I would like to explicitly state that I am claiming that one can compare all plausibilities to one another in this manner.


Yes, if you're just comparing the fundamental building blocks of different plausibilities, you can determine plausibility A is more cogent than plausibility B. The problem is, if they aren't within the same context, how useful is that analysis?

Recall that inductions are made because we have limitations in what we applicably know. Further, less cogent inductions are used to compare what belief you should make about a particular situation. It's about comparing your options. If I'm talking about subject X and I have two plausibilities, going through the chain of rationality to discover which plausibility is more rational is useful. If I have a plausibility about subject X, and a plausibility about subject A, what does comparing the cogency get me?

It may be that the plausibility about subject X is more rational than the plausibility about subject A, but when considering subject A, I have no alternative belief about A besides that plausibility. In that case, the most cogent thing is to choose to act, or not act, on that one plausibility I have. This is the point I wanted to emphasize first, though I'm thinking I should have explained the technical comparison first, then explained when and in what context you should compare.

Quoting Bob Ross
I think that, in light of our agreement on potentiality, we can finally prove that actual infinities are irrational inductions.


Your two examples are great. Unlimited infinities are irrational. But some limited infinities may be inapplicable plausibilities. Perhaps there is no limit to space, for example. It's plausible. But it is currently inapplicable. When considering the limits of space, we have no viable inductions we can make, so we must remain in the realm of inapplicable plausibility.

Quoting Bob Ross
I think, as you may already be inferring, that this actually has heavy implications with respect to your idea of a "first cause"


Yes. Stating that everything which has a cause must itself have a cause is an unlimited infinity. It breaks down if you examine it in the argument. All that is left is that there must be a first cause. BUT, this is still either an applicable or inapplicable plausibility at best. It is simply more cogent to believe that there is a first cause than not. Since we do not have any higher induction we can make in regard to a first cause within the context of that argument, it is more cogent to conclude there is a first cause.

Quoting Bob Ross
I know we had a lot of disputes about mathematical inductions, and so I wanted to briefly continue that conversation with the idea that mathematical inductions do not require another term, contrary to what I was claiming, because they are possibilities.


Yes, this seems correct. There is a fine dividing line between possibility and applicable knowledge. To say something is possible, is to say the applicable knowledge you just obtained, will be able to be applied again. But this is if we apply that math to reality by actively putting a number within the equation. The logic of the equation itself, is distinctive knowledge based on the rules we have constructed.

Quoting Bob Ross
I think that it would be beneficial to really hone in on what it means to have "experienced something before". Where are we drawing the line? Is there a rational line to be drawn?


It is when you have concluded applicable knowledge within your context. You can experience something, but not have applicable knowledge of it. Let's say you're in a field with a horned goat and a ram. While gazing elsewhere, with the animals at your back, you get head-butted from behind. When you gather yourself off the ground and look behind you, you realize the horns are very similar, and you can't tell which one head-butted you.

The thing you can applicably know by going through your distinctive knowledge is that you were hit by something. There is a bruise on your back in the imprint of a horn, and it is not possible that you could fall down from an impact that bruises you without that being "something". But was it the ram or the goat? It's plausible it was something you weren't aware of at all, but you believe it's possible for both goats and rams to head-butt a person, and it seems more cogent to believe one of them did it.

But will you ever applicably know which one head-butted you? No. It's plausible to believe it was only the goat, or only the ram. But couldn't we say it was possible that it was either the goat or the ram, because we know it is possible for goats and rams to head-butt people? The key is in the intent of the induction. If I say, "I believe it was the goat, and not the ram," that is the plausibility. If I say, "I believe it was either the goat or the ram," this is a possibility.

I'm not sure if that answered the question, but I felt this was a good example to show the fine line between what can be applicably known, possibility, and plausibility. Feel free to dig in deeper.

Quoting Bob Ross
I think that your epistemology, at its core, rests on assumptions. Now, I don't mean this as a severe blow to your views: I agree with them. What I mean is that, as far as I am understanding, your epistemology really "kicks in" after the subject assumes that perception, thought, and emotion are valid sources of knowledge.


I don't believe I make those assumptions at all. It's been a while since we visited the building blocks of the paper on page one to determine the difference between distinctive knowledge and applicable knowledge. I do not claim that perception, thoughts, and emotions are valid sources of knowledge. I claim they are things we know, due to the basis of proving, and thus knowing, that I can discretely experience.

The discrete experience you have, the separation of the sea of existence into parts and parcels, is not an assumption, or a belief. It is your direct experience, your distinctive knowledge. I form the discrete experience of thoughts as a very low set of essential properties in the beginning, so that I can get to the basic idea of the theory. But now that you have it, go back to the beginning. Use the theory on the formulation of the theory itself. Does it still hold? I think you'll find it will.

You create an idea of a thought, and you confirm it without contradiction immediately, because it is a discrete experience. Later, you can go back and ask, "Can I refine what a thought is? Could I redefine it? What is the difference between a thought and an emotion? Can I find essential properties that differ, and apply this to myself?"

Or back to your original issue, "What is an "I"? Can I define it as more than simply that which discretely experiences? Perhaps other creatures discretely experience, but they obviously do it differently from humans?" The doors are open now that you understand the theory. Tackle mind, tackle ethics, tackle God itself. The system of distinctive knowledge, applicable knowledge, and the inductive hierarchy can be applied to it all.

Will this refine the system itself? Almost certainly. I am under no illusions it is complete, because the reality is, as contexts change, and as more people use it, there are bound to be refinements, and even different contexts of applying the theory itself. But is it a fundamental base that you can retreat to? A base that is consistently logical in its own formation, as well as its application? I believe so. I use it in my own life, which I think adds to the strength of its use as a tool.

If only I could ever get the idea out there in the philosophical community at large. I have tried publication to no avail. Honestly, I don't even care about credit. Perhaps someone on these forums will read it, understand it, and be able to do what I was unable to. Or perhaps someone will come along and finally disprove it. Either way, it would make me happy to have some resolution for it.

But back to your questions and detailed drilling. I feel we are coming to the end of the questions about understanding the theory itself, but let us resolve any remaining ones. If you are satisfied, feel free to test the theory in action. We can use it to address epistemology issues or questions you may have had, like thoughts or "I". Since we understand the theory, honestly the best critique of it is to use it. And what better test of a theory of knowledge than to see if it can know itself?

Thanks again Bob. It has been very gratifying to have someone seriously read and understand the theory up to this point. Whether the theory continues to hold, or crashes and burns, this has been enough.
Bob Ross January 29, 2022 at 20:25 #649043
Hello @Philosophim,

Yes, if you're just comparing the fundamental building blocks of different plausibilities, you can determine plausibility A is more cogent than plausibility B. The problem is, if they aren't within the same context, how useful is that analysis?


I think the comparison is more relevant when you actually have to choose between the two. As a radical example, imagine someone puts a gun up to your head and tells you to bet your life on either plausibility A or B (where both are completely unrelated): I don't think you would just flip a coin, or answer with indifference. I think you would analyze which you are more sure of.

Your two examples are great. Unlimited infinities are irrational. But some limited infinities may be inapplicable plausibilities. Perhaps there is no limit to space, for example. It's plausible. But it is currently inapplicable.


Excellent point! You are right: potential infinites, when asserted as if they are actual infinites, are also irrational inductions because they are inapplicable plausibilities. I think you were right in wanting to move inapplicable plausibilities to irrational inductions, because they lack potential. I can never apply the belief that any given infinite, within a limit, is actually infinite. Splendid point!

Yes. Stating that everything which has a cause must have a cause is an unlimited infinity. It breaks down if you examine it in the argument. All that is left is that there must be a first cause. BUT, this is still either an applicable or inapplicable plausibility at best. It is simply more cogent to believe that there is a first cause than not. Since we do not have any higher induction we can make in regard to a first cause within the context of that argument, it is more cogent to conclude there is a first cause.


Now that we agree that actual infinites are irrational, you are right: the other option seems to be a first cause. However, claiming there is a first cause would be the same as claiming this particle is actually the smallest particle that can exist: it is an inapplicable plausibility. Inapplicable plausibilities are irrational inductions (because they lack potential). I can, in the abstract, prove that we will never be able to state that "this is the first cause", just like how we cannot state "this is actually the smallest thing". They are both irrational inductions. What we could say is that "this is potentially the smallest thing", and that is an applicable plausibility (if no one finds anything smaller, then it is potentially the smallest thing). So, in light of this, I think that, at best, you could only claim, rationally, that this or that thing is potentially the first cause: never that there actually is one. Then I think we would be on the same page, as claiming potentials would restrict us to our true limits of experience, and anything attempting to go beyond that is irrational. This is what I mean by explanatory-collapsibility: restraining oneself from going beyond one's capabilities, where one is susceptible to making actual claims when it is really potential. We are always in a box, and in that box we shall stay.

I'm not sure if that answered the question, but I felt this was a good example to show the fine line between what can be applicably known, possibility, and plausibility. Feel free to dig in deeper.


Although I really appreciate the elaboration, I don't think you addressed the most fundamental issue.

It is when you have concluded applicable knowledge within your context.


I consider this completely ambiguous, although I understand what you are trying to say. I think, as of now, your epistemology is just leaving it up to the subject to decide what is or isn't possible (because they can make, in the absence of any clear definition, "experienced before" mean anything they want). If we don't draw a line at where something has been experienced before, then I think possibility loses its power, so to speak. Is experiencing that apple enough to justify this apple? Is experiencing gravity on Earth enough for the moon? Is my car starting enough to justify another car starting? What if they are the same exact model? What if they are different manufacturers? We've touched on this a bit before, but, mereologically, where are we drawing the line such that "experience before" is similar enough to "experience now" to the point where I can logically associate them together?

I do not claim that perception, thoughts, and emotions are valid sources of knowledge.


If you are saying that you aren't claiming your knowledge to necessarily be true, then I agree.

I claim they are things we know, due to the basis of proving, and thus knowing, that I can discretely experience.


My point is that it isn't a proof: it is a vicious circle. As far as I understand it, you are stating that "I think, therefore I think", "I perceive, therefore I perceive", and "I feel, therefore I feel". These are not proofs; these are the definition of circular logic.

The discrete experience you have, the separation of the sea of existence into parts and parcels, is not an assumption, or a belief. It is your direct experience, your distinctive knowledge. I form the discrete experience of thoughts as a very low set of essential properties in the beginning, so that I can get to the basic idea of the theory.


I am having a hard time of understanding how this isn't "I discretely experience because I discretely experience".

You create an idea of a thought, and you confirm it without contradiction immediately, because it is a discrete experience.


Again, how is this not "I think, therefore I think"? This boils down to: "I know that I experience discretely, because I do". This is the definition of circular logic.

If only I could ever get the idea out there in the philosophical community at large. I have tried publication to no avail. Honestly, I don't even care about credit. Perhaps someone on these forums will read it, understand it, and be able to do what I was unable to. Or perhaps someone will come along and finally disprove it. Either way, it would make me happy to have some resolution for it.


I am truly sorry that people aren't taking your epistemology seriously: it deserves the credit it is due! I think your biggest adversaries are the rationalists. They will put a priori knowledge at a higher priority than the a posteriori, the egg before the chicken, whereas I think your epistemology does the reverse (although you don't subscribe to such a distinction, that's typically how they will view it).

Thanks again Bob. It has been very gratifying to have someone seriously read and understand the theory up to this point. Whether the theory continues to hold, or crashes and burns, this has been enough.


Of course, thank you for such a lovely conversation! I thoroughly enjoyed understanding your epistemology.

Bob
Philosophim January 30, 2022 at 14:43 #649336
Quoting Bob Ross
I think the comparison is more relevant when you actually have to choose between the two. As a radical example, imagine someone puts a gun up to your head and tells you to bet your life on either plausibility A or B (where both are completely unrelated): I don't think you would just flip a coin, or answer with indifference. I think you would analyze which you are more sure of.


I would argue in that case that analyzing the plausibilities is relevant to that situation. :grin: I think we understand the points here.

Quoting Bob Ross
I think you were right in wanting to move inapplicable plausibilities to irrational inductions, because they lack potential. I can never apply the belief that any given infinite, within a limit, is actually infinite.


The reason why I haven't yet lumped it into an irrational induction is that there is an essential difference between the two. An inapplicable plausibility is unable to be applied, while an irrational induction is a belief in something despite the application contradicting the belief. But as you've noted, neither has potential, so I think they can be lumped together into a category.

Quoting Bob Ross
However, claiming their is a first cause would be the same as claiming this particle is actually the smallest particle that can exist:


I think a more accurate comparison would be: "Claiming there is a first cause is the same as claiming there is a smallest particle that can exist." Comparatively: "Claiming this thing is a first cause is the same as claiming this particle is the smallest particle." Each has different claims of existence and logic behind it. While I believe the most cogent belief is that there is at least one first cause, I find the bar to prove that any one thing is a first cause may be extremely difficult to meet.

The reason is simple. A first cause has no prior reason for its existence. But there is nothing to prevent it from appearing in such a way that a person could still interpret that something caused it to exist. If a particle appeared with a velocity, how could we tell the difference between it and a particle whose velocity was caused by another? We would have to witness the inception of the self-caused particle at the time of its formation. But a historical analysis would make the revelation of certain types of self-caused things impossible.

Quoting Bob Ross
It is when you have concluded applicable knowledge within your context.

I consider this completely ambiguous, although I understand what you are trying to say. I think, as of now, your epistemology is just leaving it up to the subject to decide what is or isn't possible (because they can make, in the absence of any clear definition, "experienced before" mean anything they want).


Not quite. Recall what is required for applicable knowledge from the self-context.

1. One must have distinctive knowledge first. Distinctive knowledge is the essential properties you have decided something should be. I can define a "tree" as being a wooden plant that is taller than myself.

2. Experience something, and state, "That is a tree." To applicably know it is a tree, your essential properties must not be contradicted. It turns out the plant I'm looking at is wooden, and taller than myself. I applicably know it as a tree. Therefore I know it is possible that there are wooden plants taller than myself.

The "experience" is to have applicably known something before. To applicably know something, the individual must meet these minimum specific standards. They can make distinctive knowledge whatever they want, but the application of that distinctive knowledge must follow the process.

Quoting Bob Ross
My point is that it isn't a proof: it is a vicious circle. As far as I understand it, you are stating that "I think, therefore I think", "I perceive, therefore I perceive", and "I feel, therefore I feel". These are not proofs; these are the definition of circular logic.


I don't believe this is the case. Circular logic is when a reason, B, is formed from A, and A can only be formed from B. Thus the simple example of, "The bible states God exists. How do we know the bible is true? God says it is."

But the foundation of discretely experiencing does not rely on the definition of thoughts or perceptions. They do not prove that we discretely experience. Discrete experience is simply the ability to essentially form identities within the wash of experience. A camera can take a picture, but it cannot discretely experience beyond the colors of light it receives. We can. We can focus on certain portions, lump them together as identities, see sheep in fields of grass.

My definition of "thoughts" does not prove discrete experience. My definition of thoughts comes from discrete experience. Thoughts, as defined here, are simply my ability to continue to discretely experience when I stop sensing. I can choose that definition, because I can choose how to discretely experience. I can then apply it without contradiction. If I stop sensing, and still discretely experience, then I am thinking without contradiction.

Where is this circular? I see this as a logical consequence, not a conclusion that is the only source of proof that I discretely experience.

Quoting Bob Ross
I am having a hard time of understanding how this isn't "I discretely experience because I discretely experience".


It is, "I discretely experience, therefore I can define a portion of my experience as 'thoughts'." When I do this without contradiction within a particular context (saying thoughts != thoughts is a contradiction), then I say I know it.

If you think I do not know that within my self-context, can you disprove it? Can you demonstrate that I do not discretely experience? You cannot, because the act of coming up with an argument alone, and me understanding the counter-argument, requires that I discretely experience. Discretely experiencing is a law of a communicable being, built on the principle of non-contradiction. As far as assumptions go, I believe the law of non-contradiction is the one assumption I need to form the theory.

Of course, maybe I've missed something. If you truly believe it is circular, can you demonstrate it? I look forward to your reply.
Bob Ross January 30, 2022 at 17:34 #649391
Hello @Philosophim,


The reason why I haven't yet lumped it into an irrational induction is that there is an essential difference between the two. An inapplicable plausibility is unable to be applied, while an irrational induction is a belief in something despite the application contradicting the belief. But as you've noted, neither has potential, so I think they can be lumped together into a category.


Yes, but I don't think an "irrational induction" is "despite the application contradicting the belief" anymore; it is when it is impossible and has no potential. However, I do see your point: I don't think all inapplicable plausibilities are irrational, depending on how we define it. There's a difference between claiming something that cannot be applied now or even in one's lifetime (or 30,000 years from now) and something that can be proven to lack potential (meaning it can be abstractly proven to never be able to be applied). For example, the belief in a magical unicorn that can fly and has invisibility powers isn't necessarily an inapplicable plausibility in terms of the latter; it could be that, as our technology advances, we can actually detect invisible things somehow, or maybe we find one on another planet or something. However, the belief that there's an undetectable unicorn is an example, I would say, of the latter: we can, in the abstract, since it is undetectable, determine it is an irrational induction because it lacks potential. If we define inapplicable plausibilities in the manner of the latter, then I would advocate that all inapplicable plausibilities are actually irrational inductions. However, if the former is also utilized to a certain degree, then further consideration is required.



I think a more accurate comparison would be: "Claiming there is a first cause is the same as claiming there is a smallest particle that can exist." Comparatively: "Claiming this thing is a first cause is the same as claiming this particle is the smallest particle." Each has different claims of existence and logic behind it. While I believe the most cogent belief is that there is at least one first cause, I find the bar to prove that any one thing is a first cause may be extremely difficult to meet.


Although I understand the distinction you are making here, it is still an irrational induction based off of the same logic. I can abstractly prove that when you say it "may be extremely difficult to claim" that "there is a smallest particle that can exist", that can never be applicably known. Therefore, it lacks potential and, subsequently, is an irrational induction. Stating "there is a smallest particle that can exist" is no different than stating "there is an undetectable unicorn". You can never verify either, nor can you disprove it, because it is actually a form of irrationality (I don't need to disprove it beyond demonstrating it lacks potential). I think the only way to amend this is if you were to accept inapplicable inductions, in the manner where they can never be known, as rational. I would disagree (although, yes, this kind of irrational induction targets potentiality and not impossibility).

The reason is simple. A first cause has no prior reason for its existence. But there is nothing to prevent it from appearing in such a way that a person could still interpret that something caused it to exist. If a particle appeared with a velocity, how could we tell the difference between it and a particle whose velocity was caused by another? We would have to witness the inception of the self-caused particle at the time of its formation. But a historical analysis would make the revelation of certain types of self-caused things impossible.


I think that you are starting to demonstrate why this has no potential. It can never be applicably known. It is simply a belief within the mind, like an undetectable unicorn.


1. One must have distinctive knowledge first. Distinctive knowledge is the essential properties you have decided something should be. I can define a "tree" as being a wooden plant that is taller than myself.

2. Experience something, and state, "That is a tree." To applicably know it is a tree, your essential properties must not be contradicted. It turns out the plant I'm looking at is wooden, and taller than myself. I applicably know it as a tree. Therefore I know it is possible that there are wooden plants taller than myself.


I have no problem with #1, but #2 is where the ambiguity is introduced: you are clumping "trees" together as if that is a universal; it is a particular. To "experience something, and state 'that is X'" is something someone can do with virtually anything. To say that the only requirement in #2 is that the essential properties are not contradicted is like using potentiality as if it is possibility. Just because the essential properties don't contradict doesn't mean I am justified in claiming X and Y are similar enough for me to constitute them as the same experience on two different occasions. Although I am probably just misunderstanding you, there's no real justification here that gravity as experienced here is similar enough to say it is the same there. Sure, we could say that it has the same essential property, that it falls both times, but that does not mean they are identical enough to constitute it as the same experience: experiencing it on a mountain isn't the same as in a valley. Can I say, after experiencing it in a valley, that it is possible on a mountain?

I don't believe this is the case. Circular logic is when a reason, B, is formed from A, and A can only be formed from B. Thus the simple example of, "The bible states God exists. How do we know the bible is true? God says it is."


Let's break this down in the proper circular logic format as you described:

A is because of B, B is because of A.
The Bible states God exists; we know it is true because God says so.
I discretely experience because I concluded it in my thoughts without contradiction. How do I know I think? Because I discretely experience.

Your argument, as I understand it, is also circular. In order for any of the epistemology to work, you must conclude, which is a thought. So you conclude you have discrete experiences. But then it can be posited, "how do I know I think?", and your answer is: "I discretely experience". They are dependent on one another: this is the exact same thing as A proves B, B proves A. Maybe I am just missing something, though.

My definition of "thoughts" does not prove discrete experience. My definition of thoughts comes from discrete experience.


Here it is in action (I think): you are saying you don't prove discrete experience with thought, because you simply discretely experience. But the whole thing, including the acknowledgment that you discretely experience, is dependent on your having a conclusory thought. I think your argument is along these lines:

1. I think, therefore I discretely experience
2. I discretely experience, therefore I think

I don't think this is explicitly what you were arguing for, but, nevertheless, I think your argument is implicitly utilizing this kind of circular logic. You use your ability to think to conclude that you discretely experience, and then you simply justify those thoughts with the fact that you discretely experience. This is circular. My original way, "I think, therefore I think", was a bad way of demonstrating this, so I apologize for the confusion; it is more about the relationship between thought and discretely experiencing.

Thoughts, as defined here, are simply my ability to continue to discretely experience when I stop sensing. I can choose that definition, because I can choose how to discretely experience.


Again, you are concluding this, which is a thought, so you are using thought to prove discrete experiences, and then vice-versa.

It is, "I discretely experience, therefore I can define a portion of my experience as 'thoughts'"


Again, how did you conclude that? You thought (concluded) that you discretely experience, and then you justified the process of thinking (which you used to acknowledge discrete experience) with the fact that you discretely experience. The separation of your experience into perception, emotion, and thought in itself depends on thought (a particular kind I called rudimentary reason). If you couldn't conclude, then you would never have determined that you discretely experience (I would argue that you can't discretely experience without some form of rudimentary reason).

If you think I do not know that within my self-context, can you disprove it? Can you demonstrate that I do not discretely experience?


I think this is an appeal to ignorance fallacy: I don't have to disprove it. I am simply analyzing your proof for the conclusion that we discretely experience, and I think it is circular. Even if you completely agree with me that it is circular, that doesn't disprove that we discretely experience (and I don't think it has to). I am failing to understand why I would need to disprove it.

I look forward to your response,
Bob
Philosophim February 05, 2022 at 12:45 #651551
Sorry for the wait Bob, busy week, and I wanted to have time to focus and make sure I really covered the answers here.

Quoting Bob Ross
If we define inapplicable plausibilities in the manner of the latter, then I would advocate that all inapplicable plausibilities are actually irrational inductions. However, if the former is also utilized to a certain degree, then further consideration is required.


Yes, this is the distinction I am going for. Perhaps I need another name for a belief in something that is counter to what is applicably known. Perhaps that should be classified as an impossibility. Belief in inapplicable plausibilities or impossibilities would be considered irrational inductions. But I do want to note that irrational inductions have their uses. If there are no rational inductions, they are our only option. Further, there are times when the more rational conclusion may be based off an odd context or faulty premises, and irrational inductions are needed to push past those boundaries.

Quoting Bob Ross
Stating "there is a smallest particle that can exist" is no different than stating "there is an undetectable unicorn".


I don't think these are equivalent. The first is a logical conclusion based on our distinctive knowledge. "Smallness" is a state of relativity, meaning that if two particles are compared, we can observe if one is smaller than the other. This can be applicably known; therefore it is possible for one particle to be smaller than another. We can then construct a formula stating, "If particles can be compared, and we know it is possible for particles to be smaller than one another, then if we take all of the particles in the universe, there will be a smallest particle."

This formula would be formal logic of possibility. The problem is when the formula is applied. If we are to state, "That particle is the smallest", we would need to gather all of the particles of the universe to know this. The problem is, we cannot gather all of the particles in the universe, and at that point, our claim is now an inapplicable plausibility.

Compare this to an undetectable unicorn. We don't even know if a unicorn is possible. This is constructed on a possibility of a horse with a horn, and the plausibility of something that can exist in the universe, but not be detected. Unlike the former formula which is based directly off of a possibility, we have a plausibility mixed into the chain of rationality to arrive at this conclusion.

Turning particle comparison into a similar cogency of an undetectable unicorn would be something like, "There exists the groggiest particle in the universe." A groggy particle is something that is both larger and smaller than the particles around it. But a groggy particle is a plausibility based off the possibility of a particle being smaller, and a particle being bigger. We don't know if groggy particles are possible, let alone whether any one particle is the groggiest. Honestly, it's a very slight difference in cogency revealed by the chain of reasoning.

Quoting Bob Ross
I have no problem with #1, but #2 is where the ambiguity is introduced: you are clumping "trees" together as if that is a universal, it is a particular. To "experience something, and state "that is X"", is something someone can do with virtually anything. To say that the only requirement in #2 is that the essential properties are not contradicted is like using potentiality is if it is possibility. Just because the essential properties don't contradict doesn't mean I am justified in claiming X and Y are similar enough for me to constitute it as the same experience on two different occasions.


But in the case of the use of tree here, I am not defining it as, "This tree here, is the same as that tree here." I am defining it as a universal. "All things that are wooden and taller than myself are trees." That's all that's required for me to applicably know something as a tree. Every other property would be non-essential to matching that definition.

Plato once postulated that everything had an ideal form. There was an ideal Tree on which our formulation of trees was based. Epistemology studied variations of Platonic forms for many years, and concluded that there was no ideal form of anything. There is no arbiter of reality that declares what a tree is. That is all based on our distinctive knowledge. If I decide to define "tree" as a universal, I can. As long as it's useful in application, I should.

The point of epistemology, is to figure out how we can claim knowledge of the world. That requires a method of ordering our ability to discretely experience in a way that is rational. Of course someone within their own context can define anything as they like. The point of introducing the logical constraints of the theory is to give a tool to do it in such a way that uses rational outcomes that are consistently useful and have the highest chance of being accurate in their assessment of the world.

I think you're also using the term potentiality incorrectly. Potentiality has nothing to do with the act of application. It is simply whether we've created distinctive knowledge that is not contradicted by other distinctive knowledge in our head. If I said an essential property of a tree was that it must be taller than myself, but then also said an essential property of a tree is that it must be shorter than myself, this is a contradiction, and inapplicable. Potentiality is a general description of whether something is rational in the hierarchy of inductions, but it does not introduce anything new, or contradict the rules of the hierarchy. When applying the distinctive knowledge you have created, if it is similar enough that it does not contradict the essential properties of your definition, then you applicably know it. When formulating distinctive knowledge in your head, if it is non-contradictory to other distinctive knowledge, then it has potential.

Quoting Bob Ross
Sure, we could say that it (gravity) has the same essential property that it falls both times, but that does not mean they are identical enough to constitute it as the same experience: experiencing it on a mountain isn't the same as in a valley. Can I say, after experiencing it in a valley, that it is possible on a mountain?


It depends on the context, and the definition of the word. If the only essential property of gravity as a definition is, "That which pulls me to the ground," then yes, you experience gravity in both places. If you have only experienced gravity in a valley, and have not yet gone to a mountain or know what it is composed of, this is an applicable plausibility that gravity will also exist on a mountain. If you say, I have experienced gravity on the planet Earth, then it is possible that when you go to anywhere on Earth, you will experience gravity.

It is all about the context and degree of specificity. The more specific and exacting you are in the requirements to applicably know something, the more difficult it becomes to applicably know it, and the more you have to rely on inductions. Of course, define something too broadly and generally, and it isn't very useful. Define something too narrowly and exactingly, and it generally won't be useful in most cases either.

Quoting Bob Ross
1. I think, therefore I discretely experience


This is incorrect. Thoughts have nothing to do with the ability to discretely experience. I never say, "First I think, then I discretely experience." I eliminate thoughts, and arrive at the idea that discrete experience is the one thing I cannot eliminate. There is nothing to necessitate that I define thoughts in a particular way. I could choose never to define thoughts if I wanted to.

For centuries the number zero did not exist. Nothing in math necessitated that we define zero, but defining zero turned out to be incredibly useful. I only defined thoughts in such a way that was useful and relatable to other people. But I could easily see another person never defining thoughts at all. They could define thoughts as part of the senses that are unceasing. As long as it was defined in such a way as to have potential, and it could be applied without contradiction, then that is what they would distinctively and applicably know thoughts as.

Quoting Bob Ross
Thoughts, as defined here, are simply my ability to continue to discretely experience when I stop sensing. I can choose that definition, because I can choose how to discretely experience.

Again, you are concluding this, which is a thought, so you are using thought to prove discrete experiences, and then vice-versa.


No, I am taking certain discrete experiences, and labeling them as thoughts. Thoughts are a subset of discrete experiences, they do not define discrete experiences. I do not consider "sensing" as thoughts within that context, but they are also discrete experiences.

Could you define it differently? I'm sure you could. Maybe you think everything is a thought, in which case thoughts would be a synonym for discrete experiences. If so, you would need to come up with a new word for the sub-thought that happens when you no longer sense. Or maybe that's not an essential property for you. Define it as you will, so long as it fits within the theory. It is only when you apply it that it must not be contradicted by reality for you to applicably know it.

Quoting Bob Ross
If you think I do not know that within my self-context, can you disprove it? Can you demonstrate that I do not discretely experience?

I think this is an appeal to ignorance fallacy, I don't have to disprove it.


Let me rephrase this to what it should have been. Can you disprove that you discretely experience? Recall this is not merely "thinking" as I've defined it in the paper. It is the ability to take the entirety of your experience, and divide it into parts. The question of course is, can you even make an argument against discretely experiencing, if you didn't discretely experience? If you can counter the idea that you discretely experience, then yes, the entire theory fails. But if you can't, then I see nothing against it.

Thanks again Bob, I will be answering more quickly this week.
Bob Ross February 05, 2022 at 19:11 #651684
Hello @Philosophim,

Sorry for the wait Bob, busy week, and I wanted to have time to focus and make sure I really covered the answers here.


No worries! I always appreciate your responses because they are so well thought out!

I think, before I address your post (which is marvelously well done!), I need to try to convey, in terms of our discussions pertaining to thought, the underlying meaning of what I am attempting to portray. Forgive me, but I am still contemplating it and, consequently refurbishing my ideas on the subject as I go on, so the terminology is not what I would prefer you to focus on (as I try to explicate it hereafter): it is the underlying meaning (because I freely admit that these terms I am about to use may not be the best ones, but, unfortunately, they are the best ones I can think of right now).

I first would like to explain back to you my understanding of discrete experience and then utilize that to attempt to convey a problem with it being utilized as the most fundamental in terms of chronological viability (derivation of the subject, and consequently everything, in terms of viability). When I, shortly hereafter, explain your concept of discrete experience, please correct me anywhere I am misunderstanding as it is crucial to what I state afterwards!

Discrete experience is differentiation, that is, the capability of impenetrability and cohesion. Without such, we wouldn't have existence, or, at the very least, it wouldn't be anything like what we are now, as there is nothing more fundamental than differentiation (or at least I think that is how your argument goes, but, again, please correct me!). So when you say:

The question of course is, can you even make an argument against discretely experiencing, if you didn't discretely experience?


I think this is exactly what you are arguing: differentiation is the derivative of all else. So, no matter what thought manifests in my mind to counter your claim, I must concede, as you are right, that it in itself required "discrete experience"--that is, to be more specific, differentiation.

However, I think this is wrong and right. Right in the sense of derivation in terms of extrapolated chronological viability, and wrong in the sense of derivation in terms of just chronological viability. Let me try to explain.

In terms of derivation, we first have thoughts, but, as you rightly pointed out, there is a difference between the concept of thoughts (which is an extrapolated inference of what is typically characterized as the process of thinking) and thought itself. You are absolutely right that a person could never define thought; however, they would still be thinking. This is where, as you also rightly pointed out, a distinction needs to be made: thinking in itself and its own extrapolation of itself into a characterized process. The latter is not required, the former is. Furthermore, this is why I will be disregarding the latter, the characterized process, for now and focusing on the former because I am attempting the derivation of chronological viability of the subject (myself).

With respect to thinking in itself (not to be conflated with Kant's notorious use of things in themselves; I am making no such noumenon/phenomenon distinction--I just can't think of a better word yet), it, in turn, requires a further derivation: I can question, logically, the very discernment between the thoughts themselves, which is also a thought that relates solely to thinking in itself and not to traditional objects. In other words, I have thought A and thought B, and I can ask, logically, "why was I able to have A and B and not just a blob of thought (meaning the cohesion of all thoughts)?". I think this is the level at which your argument determines that the answer to such a question is the more fundamental "discrete experience" (aka, differentiation is required for the thoughts to occur). You are right! But the derivation does not stop there. Now, in terms of the aforementioned question, I could legitimately answer myself with "differentiation must occur for my thoughts". This is 100% valid. However, now I can ask a further question: "how am I able to be convinced, and why am I convinced that my answer satisfied it?". I think this reveals to the subject that the most fundamental thing, in terms of just chronological viability, is the fact that they are a motive. They are a perpetual motive towards logic, such that any answer (any conclusion) that satisfies logic satisfies the subject. Now I think we are getting more fundamental than simply differentiation. There are rules, identified later as "logic", which the subject, at its most rudimentary form, is perpetually motivated to follow. Without it, differentiation is meaningless. This is because a being could have the capability to differentiate while never being motivated to utilize it within any construct of rules.

Now, I think you could counter this with "logic requires differentiation to occur in the first place", but my point is that motivation doesn't necessarily require differentiation: it is the thing doing the differentiating, based off of that motive. Another problem is that I can answer "what is the motive?" by utilizing that motive, thereby staying within its motivated constraints, but I cannot really answer, as of yet, "why is there a motive?". To be clear, I don't mean motive in the sense of "I want to do this"; no, that is already very far away in the sense of derivation and, consequently, utilizes the motive and discrete experience to conclude such a "want". This is what I was trying to get at with rudimentary reason, but I am not sure anymore if that is the best term for it.

I think you are arguing that differentiation is the key to everything (in terms of derivation); I am saying you are right if we are talking about derivation in the sense of extrapolated chronological viability, contrary to just chronological viability. The reason I think this is the case is because, from this motive, we differentiate, and thereby it is something we conclude, by means of the motive, that we have discrete experience in the first place. In more simple terms, the fact that either of us can argue either way requires a motive we did not choose, for it is our bedrock, which constrains us to logic, which we don't ever have to define to know it is true. That is why we can create a vicious absurdity of questioning where we demand a logical explanation for everything--including the concept of everything. My point is that this demanding requires a motive which precedes differentiation in terms of viability: differentiation requires a motive. However, in terms of extrapolating where that motive came from (the "why is the motive there"), I think the best explanation is what you are arguing for: differentiation is required. This is because I could very well counter this with "well, motive itself is a differentiation of sorts". This is true, but I am trying to convey that that is utilizing the motive to even make that statement in the first place, and therefore, everything points back to this motive. But if I ever want to attempt to explain the motive, then I will be bound to its rules, logic, which I must be convinced is being obeyed, and therefore it will require that I extrapolate, within the inevitable use of the motive, that it all requires differentiation--including itself. Notice that this doesn't actually mean that motive depends on differentiation, only that the use of motive to derive itself will always result, due to abiding by its own rules, in the conclusion that it requires differentiation. Does that make any sense?

So, with that in mind (hopefully I did a good enough job of explaining it for now), when you say:

Can you disprove that you discretely experience?


I cannot, because my use of my motive to derive my motive will inevitably be constrained to its logic, which will require that it be convinced that itself is derived from differentiation. This is not the same thing as stating it actually is derived from differentiation. Do you see what I am, in underlying meaning, trying to convey?

It is the ability to take the entirety of your experience, and divide it into parts.


Again, this requires a motive.

If that didn't make sense, please let me know! But if it did, I think it demonstrates quite effectively that, regardless of how we want to define thought, we are logically bound to utilizing thinking in itself, via the motive, to derive differentiation in the first place. Therefore, I still do think you are arguing "I think, therefore I discretely experience, and vice versa", but "think" in the sense of thinking in itself, which requires no defining. In other words, I don't think you are arguing in your paper that the concept of thinking, as defined in your paper, is what derives discrete experience, but, rather, you are implicitly utilizing thinking in itself throughout the entirety of the derivation.

This is incorrect. Thoughts have nothing to do with the ability to discretely experience. I never say, "First I think, then I discretely experience."


I think you are talking about the concept of thoughts, and in that sense I think you are right. But not in the sense of thinking in itself.

I eliminate thoughts, and arrive at the idea that discrete experience is the one thing I cannot eliminate.


You cannot eliminate motive, subsequently thinking in itself, without utilizing it to attempt to do so. I think you are talking about the concept of thoughts, which is a defining, and you are right in that sense, but I am not trying to argue against that at all.

Since this is becoming entirely too long, I will address the possibility and "first cause" points you made after I let you have proper time to respond to the aforementioned comments. Again, splendid post! I really enjoy reading your responses as they are incredibly well thought out!

I look forward to hearing from you,
Bob
Philosophim February 05, 2022 at 20:39 #651710
Quoting Bob Ross
No worries! I always appreciate your responses because they are so well thought out!


The same Bob!

Quoting Bob Ross
Forgive me, but I am still contemplating it and, consequently refurbishing my ideas on the subject as I go on, so the terminology is not what I would prefer you to focus on (as I try to explicate it hereafter): it is the underlying meaning (because I freely admit that these terms I am about to use may not be the best ones, but, unfortunately, they are the best ones I can think of right now).


I fully understand! It is a constant struggle for me as well. One of the reasons I respect you is you are a participant trying to understand what the underlying meaning of what I am saying is as well. I hope I have been as open and understanding back.

Quoting Bob Ross
This is where, as you also rightly pointed out, a distinction needs to be made: thinking in itself and its own extrapolation of itself into a characterized process. The latter is not required, the former is. Furthermore, this is why I will be disregarding the latter, the characterized process, for now and focusing on the former because I am attempting the derivation of chronological viability of the subject (myself).


Yes, I agree with this.

Quoting Bob Ross
Now, in terms of the aforementioned question, I could legitimately answer myself with "differentiation must occur for my thoughts". This is 100% valid. However, now I can ask a further question: "how am I able to be convinced and why am I convinced that my answer satisfied it?". I think this reveals to the subject that the most fundamental thing, in terms of just chronological viability, is the fact that they are a motive. They are a perpetual motive towards logic, which any answer (any conclusion) that satisfies logic satisfies the subject. Now I think we are getting more fundamental than simply differentiation.


I think you have something very clear with motive. Motive can be used to describe "Why I discretely experience." There is something that compels the mind to do so. What is that compulsion?

The issue I have is that this motive is logic. While a motive can be logic, it is unfortunately not the motive of everyone, nor necessarily a basic function of thought. Many thinking things are not motivated by logic. Survival and emotions seem to be the most basic of motives that compel us to discretely experience, and identify the world a particular way.

Logic can be done without training or thought, but it is often something learned. It is a higher order of thinking that one must learn by experience or be taught to consistently think and be motivated in such a manner. That is what the quest for knowledge is. How do I take the fact that I discretely experience, and use it in a logical way? For one's distinctive knowledge, it must not be contradicted. In application to reality, it must not be contradicted. And from there, the rest of the logic can build.

There is nothing to compel us to think logically, but a logical conclusion itself. A person who rejects logic entirely in favor of survival or emotions will not be able to discretely experience in terms of knowable outcomes, but in more of a selfish and basic survival satisfaction. This is part of "context". A person who does not think within the context of logic, cannot really know the world, they just react to it. How do you convince a person to think logically? How do you convince a person to reject their personal emotions, and sometimes "survival of self/personality" in favor of higher order thinking?

That is each person's choice. I do believe that thinking logically will benefit a person more in the long term. But my epistemology cannot convince a person to think logically. It can only convince a person that thinks logically, that it is a logical way to think.

You've used a term a couple of times here, "chronological viability". What does that mean to you? You've noted two types. Could you flesh them out for me? Thanks for the great input!

Bob Ross February 05, 2022 at 22:36 #651749
Hello @Philosophim,

I fully understand! It is a constant struggle for me as well. One of the reasons I respect you is you are a participant trying to understand what the underlying meaning of what I am saying is as well. I hope I have been as open and understanding back.


Thank you! Same to you! I wasn't bringing that up with any implication that you weren't attempting to understand the underlying meaning; it was just, since my terminology isn't really on point yet, a simple acknowledgement, so to speak, that I haven't worked out all the kinks.

Motive can be used to describe "Why I discretely experience." There is something that compels the mind to do so. What is that compulsion?


Not quite, I would say. Motive is deeper than that: it is the underlying motivation, along with the most fundamental rules, that must be abided by. Therefore, I consider the statement "I discretely experience" an extrapolation which utilizes this fundamental motive, and subsequently the outlined rules that constrain it, to determine that it is true in the first place. I am trying to convey that it starts, at the most fundamental aspect, with motive, and consequently a set of rules, and not discrete experience.

The issue I have is that this motive is logic. While a motive can be logic, it is unfortunately not the motive of everyone, nor necessarily a basic function of thought. Many thinking things are not motivated by logic. Survival and emotions seem to be the most basic of motives that compel us to discretely experience, and identify the world a particular way.


When I used the term "logic", admittedly, this may not be the right word to use. I am not referring to anything that is taught. When you are using the term "logic", I am thinking of "rationality". Anything that could be taught to the subject must abide by the motive, the rules, that subject necessarily has. When I say "rules", I don't mean all the rules they could ever abide by, but, rather, the rules that are necessarily the case for any convincement to occur. In other words, you can't teach them anything without them first having this motive, for that would mean they don't have any set of rules, fundamental rules that is, that they must follow. This doesn't mean that whatever they are convinced of is rational, but it does mean they are convinced of something. This convincement can only occur, I am trying to argue, given a motive, which perpetuates the rudimentary, fundamental rules that must be abided by for a claim to convince them. Therefore, I see no difference with "survival" or "emotions". If someone does something based off of "emotions", they have considered that their claim abides by the most rudimentary rules, set in place by the motive, and thereby, in the heat of the moment, they are convinced of it.

When you say:
"Survival and emotions seem to be the most basic of motives that compel us to discretely experience, and identify the world a particular way"

I think this is an extrapolation that requires the motive in the first place for you to be convinced of this. You are extrapolating claims, for example in terms of survival, based off of empirical evidence in terms of evolution (which is fine, but this is an analysis at a much higher level, because it utilizes motive in the process, than what I am trying to convey).

Logic can be done without training or thought, but it is often something learned


Again, I may be just misusing "logic", but I would consider your use of "logic" to be "rationality". Everyone, in the sense that I am using it, utilizes "logic". It is simply a rudimentary, most fundamental, set of rules that is perpetuated by a motive. Without it, the subject wouldn't be capable of [i]rationality or irrationality[/i].

It is a higher order of thinking that one must learn by experience or be taught to consistently think and be motivated in such a manner.


I would characterize this as "rationality" (or something like that).

How do I take the fact that I discretely experience, and use it in a logical way?


In terms of what I am trying to convey, this proposition here is too high level. The most fundamental thing isn't that you discretely experience; it is that you are convinced that it is the case. There was a motivation, with you as a subject, to innately attempt convincement by means of a rudimentary set of rules that must be abided by. In other words, when you claim "you discretely experience" or that "the most fundamental thing is discrete experience", these claims are only possible if something has a motive to try to be convinced of these via a set of necessary rules. This set of rules doesn't have to encompass all of rational thought--so basically the motivation towards the necessary use of the rudimentary rules, along with those very rules, is rational, but not all rational rules are in that set of rudimentary rules (they can be built off of them). I think you are generally correct in the sense that, if I were to lump into "logic" both what can be learned and what necessarily isn't learned but is built off of, then that is true. However, if I did that, then "logic" would necessarily have to have two sub-types, and motive would be associated with the aspect that isn't learned.

There is nothing to compel us to think logically, but a logical conclusion itself. A person who rejects logic entirely in favor of survival or emotions will not be able to discretely experience in terms of knowable outcomes, but in more of a selfish and basic survival satisfaction.


Again, I would say "nothing compels us to think rationally". But if I were to go with the way you are using "logic", then I would split it into two sub-types, and emphasize the aspect that is necessary to learn anything in the first place. Survival and emotions are still formulated from the motive and its rules. If the subject is truly convinced that their decision to follow emotions isn't correct, then they necessarily would not follow their emotions. They may deny certain rational claims, but they necessarily utilize a particular motive, which they don't control, which perpetuates the rules by which they are convinced of anything at all. Does that make sense?

How do you convince a person to think logically?


Again, you would have to convince them, which would require that it abide by the necessary rules, from the motive, that is in place for them (I don't mean "in place" in the sense that they are choosing them--they aren't).

You've used a term a couple of times here, "chronological viability". What does that mean to you? You've noted two types. Could you flesh them out for me? Thanks for the great input!


Of course, so, in a nutshell, "chronological viability" is the attempt of the subject to derive the chronological order of what must come first before another thing. I call it "viability" because I see the derivation of things in terms of which order produces the necessary viability that I experience. For example, if I were to think that my discrete experience is derived from the car I see, in a literal sense, then that cannot be true, because the viability of the car's existence as I see it depends on discrete experience in the first place. But to ask "what must come first" can be taken two different ways (I think). It is sort of like the chicken vs the egg: which comes first? I think we can derive something in terms of extrapolated chronological viability and just chronological viability. The former would be more in line with the egg coming before the chicken (the bedrock of the chicken is the egg, as the egg must come first for the chicken to be there), whereas the latter is in terms of what had to be there for the consideration in the first place (the chicken extrapolated that it came from the egg; therefore the chicken is required for that extrapolation to occur in the first place; therefore it must be more sure of its existence than of the fact that it came from the egg). I think these are both important aspects of derivation, but neither should be considered solely, without the other. I think that the motive, and its rules, is required, in the sense of just chronological viability, before the discrete experience. But once the motive is, whatever that may be, and consequently its rules, then it necessarily follows that anything I can possibly imagine requires discrete experience--including the attempted derivation of the motive itself and its rules. Does that make sense?

Bob
Philosophim February 06, 2022 at 13:04 #652039
Quoting Bob Ross
Therefore, I consider the statement "I discretely experience" an extrapolation which utilizes this fundamental motive, and subsequently the outlined rules that constrain it, to determine that it is true in the first place. I am trying to convey that it starts, at the most fundamental aspect, with motive, and consequently a set of rules, and not discrete experience.


I read the entirety of your post, but I feel this sums it up nicely. The goal of the knowledge theory was to find just one thing that I could "know", and use that to go from there. I can know that I discretely experience, but I explicitly did not try to determine "why" I discretely experience. The reasons being that it was something I could not "know" as a foundation, and also that it wasn't important for what I was trying to accomplish.

That being said, do I believe that there is something which causes us to discretely experience? Absolutely. But I believe this is something beyond the conscious mind. This is neuroscience, the mechanisms by which we think. Do I think it's fun to explore in a philosophical manner what it is that causes us to discretely experience? Yes! Philosophy is about trying to get answers to questions that give us new questions to explore.

For my part, I have no skin in that game, and have not considered it beyond a passing thought. Is it something we can applicably know? Maybe. But do I think it's needed for the theory to be viable? At this point, no. My question for you is, is there something you feel 'motive' brings to the table that challenges or puts to question the formulation of the epistemology I've put forth so far? If yes, then we'll have to explore it in earnest. If not, then feel free to continue putting forth your idea; I would still like to see what you've come up with. For my part, I feel you are noting something which has promise, and I have no disagreements with it on first thought.

Quoting Bob Ross
Of course, so, in a nutshell, "chronological viability" is the attempt of the subject to derive the chronological order of what must come first before another thing. I call it "viability" because I see the derivation of things in terms of which order produces the necessary viability that I experience.


So this is sort of a descriptive order of causality, or why we arrive at the point that we are in our thinking?

Quoting Bob Ross
But once the motive is, whatever that may be, and consequently its rules, then it necessarily follows that anything I can possibly imagine requires discrete experience--including the attempted derivation of the motive itself and its rules. Does that make sense


I believe so. It is not that discrete experience causes the motive to be, but we do need to discretely experience to know what the motive is. If I have that wrong, let me know! Thank you for fleshing that out.
Bob Ross February 06, 2022 at 14:42 #652060
Hello @Philosophim,

The goal of the knowledge theory was to find just one thing that I could "know", and use that to go from there. I can know that I discretely experience, but I explicitly did not try to determine "why" I discretely experience.


My question for you is, is there something you feel 'motive' brings to the table that challenges or puts to question the formulation of the epistemology I've put forth so far? If yes, then we'll have to explore it in earnest.


I think that, although I completely understand that it seems completely unrelated to your epistemology, the first quote pertains to my objection. I think that if you are trying to find one thing that you can "know", then, in terms of derivation, it should be you. Nothing, in terms of just chronological viability, can be derived further than the subject. It doesn't start with discrete experience; it starts with you obtaining the knowledge that you discretely experience by means of thinking in itself: you begin with thought (but, again, in itself and not its characterization of itself). Albeit a very close connection between the two, I do believe you start with the thought that convinces you that you discretely experience, and then go from there. This is why, although I agree with your work, I think your epistemology starts at some mile other than 0 in a 500 mile race: you simply start your endeavor off of an assumption, that is, discrete experience, and work your way from there without providing sufficient justification for discrete experience. I may be simply misunderstanding you, but as far as I can tell it seems like your epistemology simply posits discrete experience as a given, and I am trying to get at the fact that that positing itself exposes a more fundamental aspect than differentiation. I truly do think that your argument (1) posits discrete experience as self-evident and (2), in actuality, utilizes the more fundamental aspect that is required to even put forth the argument in the first place, which thereby causes your argument to really be "I think (in itself), therefore I discretely experience. I discretely experience, therefore I think (in itself)" (this is no different than A -> B, B -> A, which really is A -> A, so I do think you are essentially saying "I discretely experience because I discretely experience"--hence #1).
In other words, I am disputing the grounds of your epistemology, as I don't think your argument in the essays really provides any sufficient response.

When you say that you don't provide a "why" for discretely experiencing, I think that is fair enough if you are right that discrete experience is the most fundamental thing, which I don't think is true. I think you are agreeing with me, then, that your epistemology starts with an axiom that must be assumed, but it is so effective only due to it being a commonality between humans (it isn't a very hard axiom to adopt). My point is that your argument has a fundamental flaw: you are arguing that discrete experience is the most fundamental, yet you are using thinking in itself to do that in the first place. I think the fact that you can put forth an argument at all provides direct explication of the fact that discrete experience isn't the most fundamental thing. This may very well be a point that you don't find particularly useful in terms of what you want to portray in your essays, but it really comes down to which requires the other to be viable, and thereby which is the most fundamental: the motive to differentiate (to think in itself), or the differentiation that occurs as a result of it? That is what I am trying to get at. This is why I think your epistemology fails: not because it is wrong, only because it posits discrete experience as if it actually is the most fundamental and that that is proven. If your epistemology were to simply concede that it is starting off with the assumption of differentiation, I think everything necessarily follows quite nicely. I'm not sure if that makes sense or not.

So this is sort of a descriptive order of causality, or why we arrive at the point that we are in our thinking?


Yes, it is to question our way into deriving what must precede another for it to be viable. It is to determine what is the most fundamental in terms of what, within your experience, is required for all else. However, on the flip side, it can also be analyzed in the sense of extrapolated precedence. So whatever was required to even posit the questions in the first place can be used to determine what it itself must logically be preceded by in order to be viable. However, my concern with extrapolated forms of derivation is that the subject ends up more sure of whatever they found to logically and necessarily precede them than of themselves, and can subsequently fall into a trap of actually doing things that they normally wouldn't do as a result. I like to think of them both as useful forms of derivation, but the derivation to what must exist for the consideration in the first place must always be a more sure fact than anything that can be necessarily, and logically, extrapolated from it (including its own extrapolation).

It is not that discrete experience causes the motive to be, but we do need to discretely experience to know what the motive is.


Yes I believe this is accurate. However, I would like to emphasize that this in no way implies that we start with the differentiation (the discrete experience): it implies that the discrete experience is concluded to be true, or to exist, based off of the motive. I can posit that "I discretely experience", but the fact that I can posit at all explicates what is actually the most fundamental. If it were possible to experience differentiation (discretely experience) without motive (or thinking in itself), although I can't say it is even possible, I would say that, paradoxically, you wouldn't experience at all. If you lacked any motive to be convinced via a set of rules, then you would never know that you experience in the first place. You never would have posited this epistemological theory. We wouldn't know that we are conversing right now. Etc. Now it may be that both motive and its subsequent differentiation are bi-dependent; however, my point is that differentiation is subsequent to motive, or, better yet, thinking in itself. Don't get me wrong, I think your concern is very warranted: would this really be worth prepending to your essays? Wouldn't it just over-complicate things? It may very well be that the best approach to your philosophy is to start with the assumption of discrete experience, even though it isn't the most fundamental, just to provide easier comprehension for the reader (or to keep it on point to what you would like to portray). But my point is that it doesn't seem like your essays really acknowledge this, as they actually, on the contrary, seem to be arguing that it is the most fundamental and that that is proven.

Bob
Philosophim February 08, 2022 at 14:11 #652608
Quoting Bob Ross
My point is that your argument has a fundamental flaw: you are arguing that discrete experience is the most fundamental, but yet you are using thinking in itself to do that in the first place.


I'm going to start here and work my way around the post. First, I do not think that discrete experience is the most fundamental thing that explains our existence. I think discrete experience is the most fundamental thing an existence must be able to do to know, and it is a fundamental that can first be defined clearly, and without contradiction. Remember that knowledge is essentially having a clear definition with essential properties that does not compete with any other. Second, when we apply that definition, it must not be contradicted.

I want to be very clear, I do not think there is nothing prior to discrete experience. I also do not think that something that is not a "being" can discretely experience. I believe it is fundamental that there be a "self". One cannot discretely experience without being something. But I find that I cannot define the "self" as a fundamental, without first defining discrete experience.

Now, I could be wrong. Perhaps you can prove this. Can you know something prior to discrete experience? Can you know what an "I" is before you are able to differentiate between the totality of experience? I know that you can believe such, but can you know it? Can you know what eyes are? A mind? The difference between your body and another thing? Conscious and unconscious? I can't reasonably see how this is possible without the ability to discretely experience, and further, without the understanding of discrete experience. Again, I do believe there is a "self", but I cannot define or even conceive of a self without first discretely experiencing.

Quoting Bob Ross
I think that if you are trying to find one thing that you can "know", that this, in terms of derivation, it should be you.


In a way, I do. "I" am the discrete experiencer. That is how I know what "I" am. But the "I" is not necessary. I could have no notion of "I" in my head, yet still note that there is the totality of experience, and that there are different things within experience.

If you recall, I never identify an "I" beyond that, because it is not necessary for the epistemology to occur. One thing we have not covered yet is that my epistemology is not human-centric. It can be applied to insects, plants, animals, and AI. Imagine a simple ant. An ant can discretely experience. Does it know what an "I" is? Does it know it can discretely experience? No, but it can know things, because it discretely experiences. It knows the sugar in front of it is good compared to the dirt that surrounds it. It is of course an extremely limited context, but it knows by taste that some things should be eaten, while others should not.

Can an AI know things? If it can discretely experience, yes. It is a limited context, but a Roomba can map out my floor over time, and applicably know where to clean after several cycles. A Roomba will never discretely experience the notion of an "I". Can a computer know things without fundamental building blocks that allow it to discretely experience? Of course not. But even if we wanted it to realize it had a self, it would be impossible for it to know it had a self without being able to discretely experience that "it" was different than "the world".

Discrete experience does not require language. It merely requires that you are able to discern separation within existence. That is the fundamental needed to start having knowledge. Then I can discretely experience that "I" am separate from "that other stuff". Can I realize that I am an "I" before I can discretely experience? No, it's not possible. Therefore, in my view, the most fundamental aspect that we can know is discrete experience.

If you still have some doubts, think on the philosophy of solipsism. It is the idea that "I" is everything: I am the only consciousness in the world, and everything that happens is due to the invention of my existence. There are people who consider and debate such a theory, meaning that the "I" is not as fundamental to knowledge as you think. Further, without being able to discretely experience, one cannot have a debate about what "I" is. Did we not go back and forth in the beginning? What "I" is is not a fundamental, unquestionably proven thing.

If you can prove that the "I" is necessary to have knowledge, then feel free: create the definition with essential properties, and apply it to reality without contradiction. That being, that an "I" is more fundamentally known than discrete experience. I am saying you must be able to discretely experience before there is the concept of an "I". You are saying there must be a concept of an "I" before I can have the concept of discrete experience. While "You" must exist to discretely experience, "You" existing does not give you the fundamentals of an epistemology, it is "You" that can discretely experience that does.

Quoting Bob Ross
causes your argument to really be "I think (in itself), therefore I discretely experience. I discretely experience, therefore I think (in itself)" (this is no different than A -> B, B -> A, which really is A -> A, so I do think you are essentially saying "I discretely experience because I discretely experience"--hence (1)).


I want to be very clear on the proof, because I believe there is still a fundamental misunderstanding of what is being proposed. I discretely experience, because any proposal that I do not discretely experience, is contradicted. The simple proof I put forward is that to present any counter argument to discretely experiencing, to even understand what it is you are trying to counter, you must discretely experience. This is not A -> A.

1. There is experience.
2. Knowledge is a deduction without contradiction.
3. Discrete experience is known = A.
4. A, because asserting !A itself requires A (a contradiction).
5. A allows the description to divide all experience into different aspects and definitions.
6. A allows the idea of a "self/I". The most basic definition being, "I" am what discretely experiences.
7. A allows the idea of "thoughts".
8. 3 and 4 can be applicably known if they are applied to reality without contradiction.
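The step from (3) to (4) is a retorsion-style argument, and its propositional shape can be checked mechanically. Below is a minimal truth-table sketch (my own formalization, not from the essays; the variable names are mine). The only premise encoded is that presenting a counter-argument itself requires discrete experience. The search shows both that denying A while arguing against it is unsatisfiable, and that the premise alone does not presuppose A, which is why the argument is not simply A -> A.

```python
from itertools import product

# A = "discrete experience occurs"
# C = "a counter-argument against A is presented"
# Premise (from the text): presenting any counter-argument itself
# requires discrete experience, i.e. C -> A.
def premise(A, C):
    return (not C) or A          # material conditional C -> A

def skeptic(A, C):
    return C and (not A)         # the skeptic argues (C) while denying A

assignments = list(product([True, False], repeat=2))

# Is the skeptic's position satisfiable given the premise?
skeptic_possible = any(premise(A, C) and skeptic(A, C) for A, C in assignments)

# Does the premise by itself presuppose A? If there is a model where the
# premise holds and A is false (no argument made), the proof is not A -> A.
premise_without_A = any(premise(A, C) and not A for A, C in assignments)

print(skeptic_possible)   # False: denying A by argument is self-defeating
print(premise_without_A)  # True: the premise alone does not assert A
```

So the contradiction arises only once the denial is actually asserted, which matches the claim that this is not a bare A -> A circle.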

Understand that I agree there must be an "I" before "it" can discretely experience, but that I cannot define and know what an "I" is, unless I know something first. And the first thing I can truly know, and must know, is there is discrete experience. How am I to differentiate among existence what an "I" is without the ability to discretely experience first? I hope this cleared up what I'm trying to prove.
Bob Ross February 09, 2022 at 04:46 #652859
Hello @Philosophim,

I agree, I think we are still not quite understanding each other, so I will try to do my best to respond to your statements (very thought-provoking as usual!).

First, I do not think that discrete experience is the most fundamental thing that explains our existence. I think discrete experience is the most fundamental thing an existence must be able to do to know, and it is a fundamental that can first be defined clearly, and without contradiction


Fair enough. I don't think that you are arguing that discrete experience is the only thing, or that we can't induce beyond (or before) that, but I am questioning your claim that it is the most fundamental thing an existence must have to be able to know. I am providing a contender: without convincement, discrete experience is useless and cannot be extrapolated in the first place. If you couldn't conclude anything at all, then you wouldn't know you discretely experience. Now, I think this leads me to a good point you made: the distinction between knowing something inherently and conceptualizing it. In other words, you don't need to conclude you discretely experience to discretely experience. However, although it is a splendid point, I think there are two different kinds of knowledge that need to be addressed here: implicit and explicit. For example, I can implicitly know that food is necessary for me to survive without explicitly knowing it at all. But once I conceptualize it to whatever degree, then it necessarily becomes explicit knowledge. I like to think of this in terms of knowledge pertaining of itself vs in itself: the former is the conceptualization of the latter (former is explicit, latter is implicit). The reason I think this to be incredibly important is that I think you are arguing for discrete experience, at its most fundamental state, as implicit knowledge (that can or cannot be made explicit)(aka discrete experience in itself and not of itself, although the latter is a possibility, the former is a necessity). Correct me if I am wrong here, but that is what I am understanding you to be, generally, claiming. I am trying to propose that implicit knowledge can only be actualized (and thus obtained) once it has been made explicit. In other words, existence (whatever thing we are talking about that exists) doesn't obtain implicit knowledge until after it conceptualizes it and extrapolates the implicit therefrom. 
For example, let's say, hypothetically, that I never had realized, explicitly, that I discretely experience, and, upon reading your brilliant essays, now realize it. Then, and only then, would I then know that I implicitly discretely experienced all those times prior to the moment I realized explicitly that I discretely experience. If, for example, I never realized I discretely experience, then I would never know I discretely experience (because implicit knowledge is extrapolated from explicit knowledge). But there's also a need for considering the point of reference: with respect to you, even if I never realize I discretely experience, I may be a discrete experiencer to some degree or another. This entails that, if you've conceptualized me as a discrete experiencer whereas I haven't, you know I discretely experience but I don't. The moment, if at all, that I realize that I discretely experience, which would only be by means of extrapolating the implicit in itself from of itself, is the moment I know and not before that. Likewise, when I am arguing for thinking in itself, I think I was wrong to use that as the bedrock (along with motive) because the conceptualization of thinking (and motive) is required for me to even realize I think (or have a motive) in the first place, therefore thinking of itself (which is explicit knowledge) is required for me to then extrapolate that I was implicitly thinking in itself in the first place (and that it is a necessary extrapolation)(ditto for motive). I think this is the same process (fundamentally) for all knowledge: including this very statement I am making right now. That previous sentence required that I conceptualized such a thing, explicitly, and which I can claim therefrom to have been occurring implicitly before I made it explicit in my knowledge. Basically, you are claiming (I think) that discrete experience cannot be contradicted because that contradiction also requires discrete experience. 
I am claiming, although that is fine, it is an extrapolation that first had to be conceptualized (explicitly) to then, only thereafter, be considered implicitly true prior to its conceptualization. Therefore, the conceptualization is required first and foremost in order to ever claim anything ever was implicit previous to something explicitly being known. To know that you think requires that you conceptualized, to some degree, thought itself and then, therefrom, extrapolated you must have been thinking prior to this realization (i.e. implicitly)--my point is that without that explicit conceptualization, you would have never known that you think. Without the conceptualization that you discretely experience, you wouldn't know that you are implicitly discretely experiencing. However, you may still, even though you don't know you discretely experience, know things that stem from discrete experience. For example, if you conclude that you are seeing a blue ball, even if you don't know you discretely experience, you still know of the blue ball because you have conceptualized the blue ball. Moreover, you could then extrapolate that the blue ball was there prior to you conceptualizing it, but my point is that you wouldn't know that it was there unless you extrapolated it from your conceptualization of the blue ball. If you never would have explicitly known the blue ball, then you would never have known it in the first place. You can't even claim to know something if you haven't, to some degree or another, conceptualized that something.

I want to be very clear, I do not think there is nothing prior to discrete experience. I also do not think that something that is not a "being" can discretely experience. I believe it is fundamental that there be a "self". One cannot discretely experience without being something.


Fair enough. I apologize if I portrayed it that way: I never thought you were arguing the contrary.

But I find that I cannot define the "self" as a fundamental, without first defining discrete experience.


I agree, but in a slightly different way: the most fundamental in the sense of conceptualized to be the most fundamental is differentiation. But again, you could make claims pertaining to differentiated things all the while never knowing that you discretely experience, and, more importantly, you wouldn't even know you implicitly discretely experience until you know it explicitly. To even try to prove anything, including discrete experience, you must conceptualize it first (to some degree or another). I am trying to state that knowledge doesn't begin its manifestation with differentiation, it begins when it is conceptualized (made explicit).

Perhaps you can prove this. Can you know something prior to discrete experience?


I am not entirely sure that it is a proof, because I partially agree with you here, but to claim that discrete experience is implicitly required for all else requires explicit knowledge of such. So, in my head, when we are conversing about when someone knows something, it isn't the extrapolated implicit discrete experience that grants the right "to know it": it is the conceptualization of that contextual thing (or even of another concept--as to know in the abstract requires the conceptualization of such first and foremost as well). I think to say it truly is discrete experience is to operate with a hindsight bias after the fact that the person claiming it has extrapolated the implicit knowledge from the explicit knowledge.

Can you know what an "I" is before you are able to differentiate between the totality of experience?


Well, it depends on what you mean by "I". Technically speaking, the "I" isn't necessarily synonymous with conceptualization. The granting of knowledge is within each context, or avenue. So it is possible to know the "I", whatever you are depicting that as, without ever knowing of discrete experience (again, to say the "I" was implicitly discretely experiencing the whole time requires conceptualization of the implicit into something explicit, which is actually how I am able to claim it is the "implicit into something explicit", because I am extrapolating that the implicit must have come before the explicit in order to make sense of it). I get that it seems like I am using discrete experience to attack discrete experience (which is contradictory), but what I am really using is the conceptualized, explicit knowledge I have to base the claim that conceptualization must be the farthest we can derive without beginning extrapolation (this claim in itself is also a conceptualization).

I know that you can believe such, but can you know it?


Honestly, I think your argument is plenty strong enough to even claim that you cannot believe it without discretely experiencing. But this is only known after it has been conceptualized.

Can you know what eyes are? A mind? The difference between your body and another thing? Conscious and unconscious?


Although I understand and agree with you, oddly enough, I disagree (:. It is only after you have the conceptual knowledge (explicit knowledge) of discrete experience that you can claim that discrete experience was implicitly happening all the while when you previously conceptualized an eye ball. Prior to that, you did not know it (but yet you knew of an eye ball). I think when you say something along the lines of "try to disprove your discrete experiences without using your discrete experiences", I would like to agree (firstly) and (secondly) append "try to disprove or prove discrete experience without ever first conceptualizing it".

I can't reasonably see how this is possible without the ability to discretely experience


Again, I think this is hindsight bias: you have explicit knowledge of discrete experience (because you conceptualized it) and, only thereafter, now extrapolate that it was there implicitly all along. Without conceptualization, you wouldn't ever know anything (even if you implicitly discretely experience, for to know that you would have to explicitly conceptualize it first).

Again, I do believe there is a "self", but I cannot define or even conceive of a self without first discretely experiencing.


I understand (fair enough). But, again, you could know of a "self" without ever implicitly or explicitly knowing of discrete experience (discrete experience wouldn't be a part of your knowledge collection, so to speak). Therefore, the real contingency of knowledge is conceptualization, not differentiation. The former is utilized to conceive that the latter is logically necessary for all else. It is also a conceptualization.

On a side note, I would also like to point out that the antonym of "differentiation" is not "nothing", it is "oneness" (cohesion). It isn't necessarily true that you wouldn't exist without differentiation, you may exist as one with everything (therefore terms themselves wouldn't exist for you, but you would exist--in a sense). I would agree with you that, if you were oneness, you wouldn't know anything, but this is due to the lack of conceptualization. I think you are right in the sense that me even claiming "conceptualization", or even conceptualization in itself, is "contingent" on differentiation. However, that statement is a conceptualization first and foremost, and so it is with this statement as well. It all is. Does differentiation come first as an extrapolated truth (whereby it can be equally extrapolated to have been an implicit truth all along), or as the actual spark of manifestation? I think the former.

An ant can discretely experience. Does it know what an "I" is? Does it know it can discretely experience? No, but it can know things, because it discretely experiences.


No, within reference to itself, it knows nothing. With reference to you, it knows things. This is because it isn't about whether it knows it discretely experiences; it is about whether it conceptualizes to any degree. If it does, to contradict what I previously stated, then it knows. If it doesn't, then it doesn't know. But its knowledge has no direct relation to your knowledge of its knowledge. It could very well be the case that it doesn't conceptualize anything, but yet you, being able to conceptualize, deem that it does based off of your conceptualizations of its actions.

It knows the sugar in front of it is good compared to the dirt that surrounds it.


Again, you conceptualized this and, therefrom, deemed that ant to know. This doesn't mean that it actually knows anything (maybe it does, maybe it doesn't). Just because it is the most rational position for you, as a being capable of conceptualizing, to hold with reference to the ant, namely that it knows to some degree or another, doesn't mean that, in reference to itself, it knows anything at all. It would have to be able to conceptualize something. And, yes, again, me claiming "it must conceptualize something" is contingent on differentiation, because all conceptualizations I have had are conceptualized as being contingent on differentiation, therefore I deem it so (and me deeming it so is also a conceptualization). The way I see it, conceptualization is the point of manifestation for everything (including everything); it is the point at which you can quite literally, seemingly, postulate it ad infinitum recursively (reflexively) (although I don't think it is an actual infinite, only potential). It is where, in my opinion, you truly hit bedrock: where anything below, above, before, after, outside, or without, and those very concepts themselves, is extrapolated from (conceptualized). In other words, I can conceptualize that I must differentiate, but I hit bedrock (recursive potential infinity) if I try to conceptualize conceptualization, and so forth.

I think the same thing is true of AI and other beings (to dive a bit into solipsism). To be clear, I am not a solipsist, but I think I may not be one for a different reason than you: I think that the most rational conclusion is that there are other beings like me with reference to my conceptualization of them, but that doesn't mean I've proved that they can conceptualize. It only means that, via my conceptualizations, the most rational position to hold is that of not being a solipsist. Again, just because I deem another person to know, or even if it is solely based off of risk analysis (which is also a conceptualization) (as in: what if I am wrong and choose to be a solipsist, vs what if I am wrong and choose to respect other people as actually other people), doesn't mean that, in reference to themselves, they actually know anything.

While "You" must exist to discretely experience, "You" existing does not give you the fundamentals of an epistemology, it is "You" that can discretely experience that does.


I agree, but this is conceptualized and, thereby, only implicitly known after it is explicitly known. You are right that me existing does not ground an epistemology: it is the ability to conceptualize that very statement that grounds it (and the ability to conceptualize that, and this, and this, etc).

I discretely experience, because any proposal that I do not discretely experience, is contradicted.


Again, I think you can only propose this if you are able to conceptualize. First you must have explicit knowledge of this to then extrapolate it as implicit; therefore, you are extrapolating discrete experience as an implicit truth after you have already gained it as knowledge explicitly (i.e., via conceptualization). I think this is the point at which knowledge is granted, at least initially, or manifested: when it is conceptualized.

The simple proof I put forward is that to present any counter argument to discretely experiencing, to even understand what it is you are trying to counter, you must discretely experience


I think I can use that same argument to prove you are right and that that doesn't mean it is the point at which knowledge manifests. In order to even claim that I can't postulate a counter argument without differentiation, you must have a conceptualization (and same for me). I think that they are both deeply integrated into our existence, but one is the point of manifestation (conceptualization), the other is a product of that manifestation that is manifested as a necessity to all else (differentiation). However, although I think you are using A -> A still, I think that you are actually right: there is a point at which it is circular, and that is fine as long as it is the point of all other manifestation. I think that you think that point is differentiation, I think it is conceptualization.

I hope this cleared up what I'm trying to prove


I think I understand and I hope that I demonstrated that in my responses. If I didn't fully grasp your views, please, as always, correct me!

I look forward to hearing from you,
Bob
Philosophim February 12, 2022 at 16:46 #653949
Great, I think I see where your issue is now, and perhaps I can address it properly.

Quoting Bob Ross
Now, I think this leads me to a good point you made: the distinction between knowing something inherently and conceptualizing it. In other words, you don't need to conclude you discretely experience to discretely experience.


I want to be careful with my words here to communicate this properly. There is no inherent knowledge. You can practice knowledge without knowing that you are doing it. You can have distinctive knowledge. You can even have applicable knowledge. But it is obtained because you are following the steps outlined in the epistemology. You can be blissfully unaware that it is what you are doing, and still have distinctive and applicable knowledge.

Quoting Bob Ross
I think there are two different kinds of knowledge that need to be addressed here: implicit and explicit. For example, I can implicitly know that food is necessary for me to survive without explicitly knowing it at all. But once I conceptualize it to whatever degree, then it necessarily becomes explicit knowledge.


I'm not sure there is implicit knowledge. Knowledge is a process that must be followed to have it. It is a tool. We can measure things as being centimeters long in our minds, but we need an actual measuring stick to say we've measured it. We might get very close with our estimates, but they are not the same as using the tool itself. The same with knowledge.

It's more like accidental vs explicit. I could find a ruler on the street and not know what cm means. But I do notice there are some lines. I measure something and say it's 4 ruler lines. I can safely say, within that context, that I have measured length with a ruler. But I don't know it's a ruler, or how it was made, or what any of the other symbols and lines mean, like inch. Within your first few paragraphs, if you replace "implicit" with "accidental", I think you'll see what I'm trying to point out.

Knowledge is like the process of measuring. If I am taught how to measure with a ruler, what the lines and symbols mean, and am trained how to line it up properly, or given tips on the mathematics, then I can explicitly measure with a ruler. The same with knowledge.

Quoting Bob Ross
The reason I think this to be incredibly important is that I think you are arguing for discrete experience, at its most fundamental state, as implicit knowledge (that can or cannot be made explicit)(aka discrete experience in itself and not of itself, although the latter is a possibility, the former is a necessity).


No, I am not. I am not even referring to discrete experience as accidental. You can discretely experience without a theory of knowledge. I am noting that to explicitly know what knowledge is, the first thing you must come to know is discrete experience. With this, you can build a theory of knowledge. You don't have to know why you discretely experience, just as I don't have to know the atomic makeup of the ruler I am using; I just have to know what consistent spacing is. Of course, that doesn't mean there aren't atoms that make up that ruler. It also doesn't negate the fact that without atoms, there could be no ruler. But the knowledge of atoms is entirely irrelevant to the invention and use of a ruler. So with knowledge.

Quoting Bob Ross
Basically, you are claiming (I think) that discrete experience cannot be contradicted because that contradiction also requires discrete experience.


Yes! I think you have it.

Quoting Bob Ross
I am claiming, although that is fine, it is an extrapolation that first had to be conceptualized (explicitly) to then, only thereafter, be considered implicitly true prior to its conceptualization.


Absolutely correct. Except replace "implicitly" true with "accidentally" true.

Quoting Bob Ross
Therefore, the conceptualization is required first and foremost in order to ever claim anything ever was implicit previous to something explicitly being known. To know that you think requires that you conceptualized, to some degree, thought itself and then, therefrom, extrapolated you must have been thinking prior to this realization (i.e. implicitly)--my point is that without that explicit conceptualization, you would have never known that you think.


Correct (with the implicit-to-accidental conversion). Think of a runner with natural form who has never been taught how to run properly. One day they are taught how to run properly, and it so happens their natural form is exactly the optimal form needed to run quickly. They did not know what a form was prior to learning this, but once they did, they realized it was something they had been doing all along without realizing it.

Quoting Bob Ross
However, you may still, even though you don't know you discretely experience, know things that stem from discrete experience. For example, if you conclude that you are seeing a blue ball, even if you don't know you discretely experience, you still know of the blue ball because you have conceptualized the blue ball. Moreover, you could then extrapolate that the blue ball was there prior to you conceptualizing it, but my point is that you wouldn't know that it was there unless you extrapolated it from your conceptualization of the blue ball. If you never would have explicitly known the blue ball, then you would never have known it in the first place. You can't even claim to know something if you haven't, to some degree or another, conceptualized that something.


If you follow the steps of knowledge, it doesn't matter if you know that's what you did. If you conceptualized (discretely experienced) a blue ball within your mind that had clear essential properties to you, then you would distinctively know the blue ball. And I want to re-emphasize this:

"You can't even claim to know something if you haven't, to some degree or another, conceptualized (my adjustment: discretely experienced) that something."

Yes, this is exactly the point I've been making.

Quoting Bob Ross
I agree, but in a slightly different way: the most fundamental in the sense of conceptualized to be the most fundamental is differentiation.


Differentiation is the act of discretely experiencing. Within the sea of your experience, you are able to say that "this" is not "that".

Quoting Bob Ross
To even try to prove anything, including discrete experience, you must conceptualize it first (to some degree or another). I am trying to state that knowledge doesn't begin its manifestation with differentiation, it begins when it is conceptualized (made explicit).


Once I am able to see that "this" is different from "that", I can detail it. What is "this"? Maybe I can make a word. I can use my memory. I can remember this state, and if I find a state that matches what I remember, I'll say it's the same state. "This" is a "ball".

Once again, I cannot conceptualize without first being able to tell a difference. Or maybe they are one and the same; perhaps differentiation at even the lowest level is some type of conceptualization. The point is, these are words that describe acts of discrete experience. Conceptualization about a discrete experience is a discrete experience that describes another discrete experience. Discrete experience is a fundamental that underlies all of our capabilities to believe and know. Perhaps fleshing out the use of conceptualization will add greater detail and clarity. That is fine. I just wanted to point out that what you are describing, differentiation and conceptualization, are acts of discrete experience.

Quoting Bob Ross
I think when you say something along the lines of "try to disprove your discrete experiences without using your discrete experiences", I would like to agree (firstly) and (secondly) append "try to disprove or prove discrete experience without ever first conceptualizing it".


To me, this is still "try to disprove or prove discrete experience without ever first conceptualizing it". Discrete experience is a cat. Conceptualization may be a tiger, but it's still a cat.

I want to point out the definition of discrete, and why I chose it: "discrete - individually separate and distinct." I was looking for a fundamental, something that could describe a situation as a base. I first thought of an eye. An eye does not discretely experience. Its iris opens, and light floods through. The eye cannot tell that it sees. It is a tunnel that has the total experience of its being, but never discerns anything.

The human brain has many tunnels to it. Eyes, ears, nose, etc. This is the sea of existence, the sea of experience. And yet, it is somehow able to find "things" in the light. It can see things as individually separate and distinct, where an eye cannot. It can distinctly experience sound as separate from light. How? Who knows? It is unimportant for what we are trying to do.

Let me leave it at this for now. I will come back to the viewpoint of knowledge being applied to non-humans after this part is digested.

Quoting Bob Ross
An ant can discretely experience. Does it know what an "I" is? Does it know it can discretely experience? No, but it can know things, because it discretely experiences.

No, within reference to itself, it knows nothing. With reference to you, it knows things. This is because it isn't about whether it knows it discretely experiences; it is about whether it conceptualizes to any degree. If it does, to contradict what I previously stated, then it knows. If it doesn't, then it doesn't know. But its knowledge has no direct relation to your knowledge of its knowledge.


Yes, I have to be careful here. The only thing I can truly say is the ant has the "potential" for knowledge if it can discretely experience. It can know things, though it may not process it with intent. Further, its context will never be elevated to that of a human being. The point is, it applicably "knows" dirt shouldn't be eaten, while sugar should. If it did not, it would be constantly testing the dirt if it was hungry. Yet, it doesn't.

Eating dirt would be contradicted by its death, or by its taste buds' rejection. It is incredibly primitive, at the core of emotion/sensation, but there is something within the ant that can differentiate between dirt and sugar, and something that prevents it from continually testing dirt to see if it is edible.

Quoting Bob Ross
Again, you conceptualized this and, therefrom, deemed that ant to know. This doesn't mean that it actually knows anything (maybe it does, maybe it doesn't). Just because it is the most rational position for you, as a being capable of conceptualizing, to hold with reference to the ant, namely that it knows to some degree or another, doesn't mean that in reference to itself that it knows anything at all.


This is true. You are correct that I don't really applicably know this. I am making an induction based off of the possibility of my own experience. But is it cogent? I believe it is, and further, in relation to other inductions I can make, I believe it is the most cogent in the hierarchy. I would be in favor of exploring this scientifically, in an attempt to find applicable knowledge from this belief. But I do propose that this is a cogent and worthwhile belief to explore.

Quoting Bob Ross
I think that the most rational conclusion is that there are other beings like me with reference my conceptualization of them, but that doesn't mean I've proved that they can conceptualize.


This could also be true. First, let's try to nail down what conceptualization is at both a distinctive and an applicable level. For my part, if we're talking about the results of something which can discretely experience, have beliefs, and then have knowledge, I believe it is possible. Is there an alternative? We can invent plausibilities about how other creatures use discrete experience, but they are lower on the hierarchy. As such, I believe it is more rational to examine the possibilities of how other creatures can know, as opposed to the plausibilities of how they could know. It's not that I applicably know other creatures can have knowledge; it's just that if they can discretely experience, it's possible they could.

Quoting Bob Ross
The simple proof I put forward is that to present any counter argument to discretely experiencing, to even understand what it is you are trying to counter, you must discretely experience

I think I can use that same argument to prove you are right and that that doesn't mean it is the point at which knowledge manifests. In order to even claim that I can't postulate a counter argument without differentiation, you must have a conceptualization (and same for me). I think that they are both deeply integrated into our existence, but one is the point of manifestation (conceptualization), the other is a product of that manifestation that is manifested as a necessity to all else (differentiation). However, although I think you are using A -> A still, I think that you are actually right: there is a point at which it is circular, and that is fine as long as it is the point of all other manifestation. I think that you think that point is differentiation, I think it is conceptualization.


To sum up, I think you are under the impression that differentiation and conceptualization are separate identities. I am not disagreeing that you can propose such a differentiation. What I am noting is that they are subsumed by both being discrete experiences, and I am unsure where differentiation leaves off and conceptualization begins. Even if that is the case, you still need differentiation before conceptualization. One cannot conceptualize before one can differentiate.

As such, I still do not believe there is anything circular here. Differentiation does not lead to conceptualization, which then leads back to differentiation. The order in which we learn about things is also not circular. Saying that examining a ruler leads you to realize you need atoms for a ruler is not a circular argument. A circular argument is A -> B -> A. What you are arguing with "explicit", and what I think should more accurately be called "accidental", is not a circular argument. Knowing knowledge is not required to accidentally practice knowledge. If you could try to present your argument that my proposal is circular in an A -> B -> A format, I think I could better understand where you're coming from, and we could settle that issue once and for all.

As always, fantastic writing. Thanks again and I look forward to hearing your responses!

Bob Ross February 13, 2022 at 01:42 #654095
Hello @Philosophim,


To sum up I think you are under the impression that differentiation and conceptualization are separate identities. I am not disagreeing that you can propose such differentiation. What I am noting is that they are subsumed by both being discrete experiences, and I am unsure where differentiation leaves off and conceptualization begins. Even if it is the case, you still need differentiation before conceptualization. One cannot conceptualize before one can differentiate.


I think we may, after all, be attempting to convey the same underlying meaning with "conceptualization" and "discrete experience"; however, I find myself only in partial agreement with what you stated. I think it would be beneficial for me to define all the terms and their relation to one another, and to elaborate on "knowledge" in general.

Firstly, here's my interpretation of some of the definitions:

discrete - individually separate and distinct. (as depicted in your last post)
differentiation - the act of differentiating (I consider this synonymous with "the act of discretely experiencing"--as something being "discrete" is an instance of differentiation)
discretely experiencing - the act of differentiating.

Therefore, given those definitions, I think your separation of "differentiation" and "conceptualization" as parts of "discrete experience" in your most recent post leads me to believe you may be attempting the same thing I am trying to convey with "conceptualization". You seem to be using "discrete experience" as something more fundamental than "differentiation" but, and here is where the confusion lies, you also seem to be attempting to use them synonymously.

Once again, I cannot conceptualize without first being able to tell a difference. Or maybe, they are one and the same. Perhaps differentiation at even the lowest level is some type of conceptualization.


The first sentence seems to imply you require differentiation in order to do anything else, which, in my head, directly implies differentiation is discrete experience. However, thereafter, you seem to be claiming that "conceptualization" and "differentiation" may be synonymous, and that they are a part of a more fundamental "discrete experience":

The point is, these are words that describe acts of discrete experience. Conceptualization about a discrete experience, is a discrete experience that describes another discrete experience. Discrete experience is a fundamental that underlies all of our capabilities to believe and know.


And likewise:

Differentiation, is the act of discretely experiencing. Within the sea of your experience, you are able to say, "This" is not "that".


So I am a bit confused as to whether you are arguing for "differentiation" as "discrete experience", or whether "discrete experience" is more fundamental than "differentiation".

I think this is a perfect time to elaborate on a couple more terms:

Concept - A general idea or understanding of something: synonym: idea.
Conceptualization - The act of manifesting a concept.
Point of manifestation - the grounds of everything in terms of just chronological precedence (contrary to extrapolated chronological precedence).

The reason I chose "concept" is that it is a purposely vague manifestation of an idea, which is (I think) the best term I could come up with for conveying a fundamental, rudimentary point of manifestation. It is like a "thought", but not completely analogous: it isn't truly thinking of itself, for that is a recursively obtained concept that one thinks--which is not necessary for a concept to manifest. Likewise, it isn't thinking in itself, because thinking of itself is required for such. Therefore, I call it "conceptualization": the act of manifesting a concept (or concepts). When I use the term "concept", I don't mean high-level discernment of things: all of it is a concept and concepts can be built off of one another. Everything is manifested as a concept, including "differentiation" itself. This may just be me using the term wrong, but I wanted to clarify my use of the term.

If what you mean by "discretely experience" is "the point of manifestation of everything, including everything itself", then I think we mean the same thing. However, my worry is any implication derived from "discretely" in "discretely experience": any extrapolation that differentiation is the point of manifestation. Notice that my definition here completely lacks any reference to "differentiation" (which, I think, includes "discrete", since it is also the separation of "this" from "that"), as I think it is manifested conceptually by means of the point of manifestation. If this is what you mean by "discretely experience", then we agree (however, I think the use of the term "discrete" in "discretely experience" then has unwanted implications).

I want to point out the definition of discrete, and why I chose it. "discrete - individually separate and distinct." I was looking for a fundamental. Something that could describe a situation as a base.


I am fine with your definition of "discrete"; however, when you say "I was looking for a fundamental", are you implying a fundamental that we must conceptualize to deem it so, or the point of manifestation required for that conceptualization in the first place? (The former I would call extrapolated chronological precedence, and the latter just chronological precedence.) I think this is a perfect segue into "knowledge". I don't think knowledge is only either induced or deduced (or distinctive and applicable): there is immediately acquired knowledge, mediated deductive knowledge, and mediated inductive knowledge. So when I was previously (in a subsequent post) asking it in the sense of "whether we must extrapolate differentiation, or whether it is the point of manifestation", I think I may have misled you with the term extrapolation; I am not implying that we induce differentiation. I am trying to imply that, once we conceptualize differentiation, we know it neither as deduced nor as induced but, rather, as immediately acquired knowledge. Let me explain a bit more about those three types of knowledge:

Of manifestation vs from manifestation of itself - First I need to distinguish these two concepts (which I previously stated as "of itself" vs "in itself", but to resolve some confusion I think these other terms are better). "of manifestation" is as it is presented (its manifestation), whereas "from manifestation" is a form of knowledge either induced or deduced based off of "of manifestation" (that which was presented).

Immediately acquired knowledge - that which is directly manifested (as a concept, I would argue) and, thereby, is immediately known. I think generally this is the principles of rudimentary logic (so to speak), perception, thought, and emotion of manifestations of themselves and, more importantly, any conceptualizations of manifestations of themselves that may stem from any of the aforementioned. I don't need a tool of knowledge, i.e. an epistemology, to "know" that I differentiate, require a sufficient answer to everything ('sufficient' can vary though), perceive, think, feel, or any form therein (within emotion, I don't need an epistemology to "know" "pain", generally, from "pleasure" of manifestations of themselves).

Mediated deductive knowledge - that which is deduced based off of immediately acquired knowledge. This, in terms of immediately acquired knowledge, is distinguished by its being from manifestations of themselves in terms of perception, thought, emotion, and any form therein, in reference to from a manifestation of itself. For example, I have an immediately acquired knowledge of "emotion" in terms of manifestation of itself, but the conclusion of the concept of "emotion", holistically, required the use of the individual concepts of feeling (such as pain and pleasure) to deduce it (this is "emotion" from manifestation of itself--it is the deduced knowledge which was deduced from the of-manifestations of itself). I call it mediated because, although "emotion" of manifestation and from manifestation of itself are both conceptualized (manifested as a concept), one concept is clearly mediated by the immediate forms of knowledge while the other is, well, immediately known.

Mediated inductive knowledge - that which is induced based off of immediately acquired knowledge and mediated deductive knowledge. It is essentially the realm of hierarchical inductions. For example, I know "emotion" of manifestations of itself and from manifestations of itself so far, and I can induce why I have "emotion" in the first place (in terms of evolution or biology, for example).

It is important to note that I am claiming that conceptualization is occurring in all three forms of knowledge: these are all manifestations of concepts. However, there's nevertheless a meaningful distinction that can be produced because they are all conceptualized in this necessary hierarchy. For example, mediated knowledge (both forms) adheres to and obeys the immediately acquired form. Differentiation and the principle of sufficient reason are two great examples of immediately acquired knowledge that is necessarily imposed on all mediated knowledge. The reason why this is the case is, as you mentioned, not the subject of our conversation (as of yet); the point is merely that it is the case. They are necessarily imposed because all concepts that conform to the mediated type are always conceptualized, manifested via concept by the point of manifestation, as obeying such. Also, it is important to note that these are in relation to after it is conceptualized. So I am not claiming that you immediately know differentiation is occurring, only that, once it is conceptualized, it necessarily is known and requires neither deducing nor inducing.

So, with this in mind, when you stated:

I'm not sure there is implicit knowledge. Knowledge is a process that must be followed to have it.

There is no inherent knowledge. You can practice knowledge without knowing that you are doing it. You can have distinctive knowledge. You can even have applicable knowledge. But it is obtained because you are following the steps outlined in the epistemology. You can be blissfully unaware that it is what you are doing, and still have distinctive and applicable knowledge.


I think you are 100% right in terms of knowledge as a tool, which I would say is mediated knowledge (it is, therefore, what one can learn). What one can't learn, what one cannot rationalize or reason one's way away from or towards, is the immediately acquired knowledge. Although I want to emphasize this is all the act of manifesting concepts (i.e. conceptualization), the immediately acquired knowledge isn't conceptualized in terms of deduction or induction (in other words, it is not a concept manifested in relation to other, more primitive or fundamental, concepts) but, rather, it is manifested as the basis--the ultimate bedrock. What I meant by implicit and explicit is more in terms of some concept being induced or deduced to have been occurring prior to its manifestation. For example, when we conceptualize that we must differentiate, that becomes something that must have been implicitly occurring all the while (i.e. the immediately acquired knowledge of manifestation of itself is utilized to induce, therefore mediated inductive knowledge, that it was occurring all the while--which is not to say that you knew that at the time). I don't think this is what you were referring to with your example of the runner: in terms of knowledge as a tool, thus mediated forms, I would agree that the runner luckily, or "accidentally", followed the rules of the epistemology. However, and this may be a fundamental disagreement between us, I would state that "knowledge as a tool" (and, thusly, knowledge that can be learned) is a mediated form of knowledge, not all knowledge.

Its more like accidental vs explicit. I could find a ruler on the street and not know what cm means. But I do notice there are some lines. I measure something and say its 4 ruler lines. I can safely say within that context, that I have measured length with a ruler. But I don't know its a ruler, or how it was made, or what any of the other symbols and lines mean like inch. Within your first few paragraphs, if you replace "implicit" with "accidental" I think you'll see what I'm trying to point out.


This is with reference to knowledge as a tool and, therefore, mediated forms of knowledge (I'm fine with that). But my point was that you can't induce or deduce any concept as having been occurring implicitly all the while without it first being explicitly known.

You can discretely experience without a theory of knowledge. I am noting that to explicitly know what knowledge is, the first thing you must come to know, is discrete experience.


Your first sentence seems to be sort of aligning with my view of immediately acquired forms of knowledge; I think you just aren't categorizing it as "knowledge". I agree with the second sentence if you are defining "discrete experience" as "the point of manifestation of everything, including everything itself". I would then also like to append that the next step after "discrete experience", in order to know explicitly what knowledge is, is to know what "differentiation" is (and things pertaining to it, like the principle of noncontradiction). I think this is generally what you are arguing for, but I think "discrete experience" and "differentiation" are used both synonymously and not synonymously in your statements.

With this, you can build a theory of knowledge. You don't have to know why you discretely experience. Just as I don't have to know the atomic make up of the ruler I am using. I just have to know what consistent spacing is. Of course, that doesn't mean there aren't atoms that make up that ruler. It also doesn't negate the fact that without atoms, there could be no ruler. But the knowledge of atoms is entirely irrelevant to the invention and use of a ruler. So with knowledge.


This is true. But I would like to emphasize that even if it is necessarily the case that it is made up of atoms, this is all a part of extrapolated chronological precedence and not just chronological precedence. Yes, I am made of atoms, so in that sense I am derived (one way or another) from those atoms, which necessarily precede me (as a subject, a reflexive self, that is). However, all of this, including that previously rationalized statement, is derived from the point of manifestation, which manifests certain concepts as necessarily the case (such as our immediately acquired knowledge of differentiation and the principle of noncontradiction). So I would state that, with respect to conceptualization, it necessarily follows that I am preceded by atoms. Notice that the conceptualization is required, and is the spring of life (so to speak) of that very extrapolated truth.

Basically, you are claiming (I think) that discrete experience cannot be contradicted because that contradiction also requires discrete experience. — Bob Ross

Yes! I think you have it.


If you agree with me here, then I would like to ask you how you or I derived this? I would say from a manifestation of a concept that is immediately known and is revealed, so to speak, as necessarily true absolutely. To be clear, I'm not asking you to explain why we discretely experience, only how you or I came up with that very claim. Did we just discretely experience it?

If you conceptualized (discretely experienced) a blue ball within your mind that had clear essential properties to you, then you would distinctively know the blue ball.


The essential properties themselves are concepts. When you have the belief that there is a blue ball, regardless of whether it is true or not, you know you have that belief. Moreover, to take it a step deeper, if I want to determine whether I still hold a belief, then it will have to be applied without contradiction; however, the concept of manifestation of the consideration of whether I still hold a particular belief is not induced nor deduced nor applied: it is immediately acquired. No process or tool of knowledge is required to know that. Likewise, if you are seeing a ball right in front of you, the belief aspect is the mediated deductive knowledge that it is a "blue ball", or mediated inductive knowledge of anything pertaining to the "blue ball", but the immediately acquired knowledge of the perception of the "blue ball" of manifestation of itself is not a belief (nor deduced nor induced).

"You can't even claim to know something if you haven't, to some degree or another, conceptualized (my adjustment: discretely experienced) that something."

Yes, this is exactly the point I've been making.


If you are claiming "discrete experience" is the point of manifestation--not directly differentiation, then we agree. If not, then I don't think you can perform that substitution there.

Once I am able to see "this" is different from "that", I can detail it.


You are either deducing or inducing this, which is not immediately acquired knowledge; most importantly, you must first conceptualize it.

Discrete experience is a cat. Conceptualization may be a tiger, but its still a cat.


A cat and a tiger are concepts. Again, I think we may be trying to utilize the same underlying meaning here, but I'm trying to understand if you are saying the fundamental base is differentiation, or if it is a separate, more fundamental, discrete experience.

If you could try to present your argument that my proposal is circular with an A -> B -> A format, I think I could understand better where you're coming from, and we could settle that issue once and for all.


Here's my understanding of circular arguments:

1. Posited inquiry
2. Justified explanation for 1
3. Posited inquiry of that justification used in 2
4. Justified with 1

So, essentially, it is 1 -> 1 (or A -> A). Let me attempt an example:

1. Posited inquiry: Is the bible true?
2. Justified explanation: Yes, God says so.
3. Posited inquiry of 2: How do we know God tells the truth?
4. The bible says so

1 -> ... -> 4 is actually just 1 -> 1. So I think it is with discrete experience in relation to reasoning:

1. Posited inquiry: Do we discretely experience?
2. Justified explanation: Yes, because reasoning deems it so (i.e. I cannot conceive without the use of discrete experience)
3. Posited inquiry of 2: How do we know reasoning is a valid means of acquiring such knowledge?
4. Because we discretely experience, and that is all that is required to begin our epistemic exploration.

1 -> ... -> 4 is actually just 1 -> 1. I think that if you are using "discrete experience" in the same manner that I am using "conceptualization", as previously defined, then it isn't circular, as it is the basis of reasoning itself, which isn't differentiation, I would say. I think you may be arguing for this kind of thing with discrete experience, yet still implying differentiation in there a bit.

I look forward to hearing from you,
Bob
Philosophim February 13, 2022 at 16:26 #654245
Absolutely spot-on post, Bob. I think we're on the same page here, and I have to compliment you greatly for trying to refine what I am stating.

My theory is what we'll call elementary and general. The point is to widely capture certain broad concepts and show how they interact with each other. Discrete experience could be noted as a general word with very clear, but basic, essential properties. It's as if I'm using the word "tree" to describe all plants that are made up of wood. You are coming along and noting, "Aren't trees tall with one trunk? What about this bush that has a couple of trunks and is short?" You and I are not in any disagreement.

Essentially, I am in an incredibly broad context, while you are trying to narrow and detail it. It is excellent. Let me address your points and see if I can show how we're on the same page.

Quoting Bob Ross
As you seem to be using "discrete experience" as something more fundamental than "differentiation", but, where the confusion lies, at the same time, you seem to be also attempting to use them synonymously.


A pine tree and an oak tree are different trees. But they are still trees. Discrete experience is a tree. Differentiation is an oak tree. Conceptualization is a pine tree. At the end of the day, they are both trees. For a certain context, identifying types of trees is not important. For example, when first introducing what a tree is. But then a curious mind might say, "But isn't there a difference between an oak and a pine tree?" Yes. Yet both are still trees. And this is what I'm noting with differentiation and conceptualization. They are both, at their core, discrete experiences.

Quoting Bob Ross
The reason I chose "concept" is that it is a purposely vague manifestation of an idea, which is (I think) the best term I could come up with for conveying a fundamental, rudimentary point of manifestation. It is like a "thought", but not completely analogous: it isn't truly thinking of itself, for that is a recursively obtained concept that one thinks--which is not necessary for a concept to manifest. Likewise, it isn't thinking in itself, because thinking of itself is required for such. Therefore, I call it "conceptualization": the act of manifesting a concept (or concepts). When I use the term "concept", I don't mean high-level discernment of things: all of it is a concept and concepts can be built off of one another. Everything is manifested as a concept, including "differentiation" itself. This may just be me using the term wrong, but I wanted to clarify my use of the term.


If conceptualization is useful as a word, then simply follow the process. Discretely experience the word in your mind. Give it essential properties that are non-synonymous with, or distinct enough from, other words as to be useful, so that it is distinctive knowledge. Then apply it to reality without contradiction. If you can do it once, then you have applicable knowledge that such a word is useful in reality.

It is not that I disagree with your attempt at proposing conceptualization. For my purposes, I have clear and broadly defined words that follow the process of knowledge. From discrete experience, I define thoughts, sensations, and memory. Then I apply them to reality. The issue with your current definition of conceptualization is that it isn't clear enough to show how it is separate from other useful words that can be applied to reality, and I'm not sure you've successfully applied it to reality yet without contradiction.

But I understand the intuition. There does seem to be something different between the act of first identifying "this" from "that" and then adding a concept to it. For my purposes, it's just a definition. But perhaps "conceptualization" covers that which is not yet clear enough from the definitions used so far. In a way, it is not a discrete experience, but a fuzzy experience. It is not clearly cut out of the sea of existence, but seen through a murky pair of binoculars that you are trying to focus into view. For my initial purposes, I did not dwell on this concept, because it did not help me get to the end. This was the refinement I thought others would introduce. So please do not take my notes as discouragement. Please continue. I just think the clarity isn't quite there yet on the definition, so let's keep trying!

Quoting Bob Ross
I am fine with your definition of "discrete"; however, when you say "I was looking for a fundamental", are you implying a fundamental that we must conceptualize to deem it so, or the point of manifestation required for that conceptualization in the first place?


No. I was looking for a fundamental to describe the reason why we are not like an eyeball or a camera. "Fundamental" in this case means a concept that depends on as little as possible within its constituent parts to be understood. It is why I note we do not need to know why we discretely experience; it is simply an undeniable fundamental that we do. We are not beings that simply take in all existence at once without the capability of creating distinction within it. We are able to take that mess of sensation and thoughts, and create distinction. That is what I call discrete experience. Perhaps the word "discrete" is too strong to describe the different levels of distinction we can create. It is more like a fuzzy separation that we can continue to focus until we are at a comfortable enough level that it is useful to us. The attempt to describe this level of acceptable focus to an individual is "context".

Quoting Bob Ross
I think this is a perfect segue into "knowledge". I don't think there are only either induced or deduced (or distinctive and applicable) knowledge: there is immediately acquired knowledge, mediated deductive knowledge, and mediated inductive knowledge.


Quoting Bob Ross
Immediately acquired knowledge - that which is directly manifested (as a concept, I would argue) and, thereby, is immediately known.


This is simply a discrete experience as I describe it. "This" is not "that" is known by fact, because it is not contradicted. Of course, how do we know that a contradiction means it cannot be known? Because "This" cannot be separate from "That" if "This" is also identical to "That". It is a fundamental of discrete experience. To have a blend of something that you cannot discretely experience, means it is part of the sea of existence. Are the desk and keyboard in front of you both 100% separate and 100% not separate? If this were the case, you could not discretely experience them. At best, you can make a new word that describes both concepts together.

As I've noted earlier, math is the logic of discrete experience, which all starts with the identification of a "this" (1), the ability to group more than one "this" together (2), equality of discrete experience, and inequality of discrete experience.
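Read charitably, this is the standard bootstrap of arithmetic from counting. A hedged sketch of the correspondence (the notation below is my gloss, not part of the original argument):

```latex
\begin{align*}
\text{a single ``this''} &\;\longmapsto\; 1\\
\text{grouping one more ``this'' with a group } n &\;\longmapsto\; n + 1\\
\text{equality of discrete experiences} &\;\longmapsto\; a = b\\
\text{inequality of discrete experiences} &\;\longmapsto\; a \neq b
\end{align*}
```

Identification, grouping, and (in)equality are enough raw material to generate the natural numbers; nothing further hangs on this being the unique construction.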

Quoting Bob Ross
(Immediately acquired knowledge continued) perception, thought, and emotion of manifestations of themselves


All are discrete experiences. Or as mentioned earlier, "fuzzy experiences" that we can focus into greater clarity. We can create definitions to bring focus to those concepts, but the act of those concepts themselves does not require a definition to occur. If I am experiencing the emotion of happiness, one may question the degree or where it fits into my greater outlook on the world, but may not question the fact that, currently, that is what I'm discretely experiencing.

Quoting Bob Ross
and, more importantly, any conceptualizations of manifestations of themselves that may stem from any of the aforementioned.


If you mean, "I experience "happiness" and now I'm going to create a new term called "happiness" to describe it," then yes.

Quoting Bob Ross
Mediated deductive knowledge - that which is deduced based off of immediately acquired knowledge.


Quoting Bob Ross
For example, I have an immediately acquired knowledge of "emotion" in terms of manifestation of itself, but the conclusion of the concept of "emotion", holistically, required the use of the individual concepts of feeling (such as pain and pleasure) to deduce it (this is "emotion" from manifestation of itself--it is the deduced knowledge which was deduced by the manifestations of itself). I call it mediated, because, although "emotion" of manifestation and from manifestation of itself are both conceptualized (manifested as a concept), one concept is clearly mediated by the immediate forms of knowledge while the other is, well, immediately known.


I believe you've blended implicit knowledge and mediated knowledge here. I noted that I can create "distinctions about distinctions". I can see a sea of grass, a blade of grass, and a piece of grass. I can see happiness as great, average, and little. But let me see if I can address what you were intending to say. I can define and refine happiness in relation to other emotions. Let's say I have defined three emotions: pain, excitement, and happiness. I feel an emotion. It does not meet the standard for pain or excitement. If I am non-inventive and do not feel like creating another identity, it must logically be happiness. Of course, if it is nothing like any happiness I've experienced before, I must adjust my definition of happiness to now accommodate this state.

This level of thinking is distinctive knowledge. The question after you realize you discretely experience is, "How do I know I discretely experience?" You try to contradict it. And as I've noted before, you cannot. With this, you can discretely experience whatever you like as long as it follows a few rules. It must be a distinct discrete experience that is in some way different from other discrete experiences in your head to avoid being a synonym, and it must not be contradicted by other discrete experiences you hold in your head.

Applicable knowledge is merely an application of this rule. In essence, you can applicably know the distinctive knowledge in your head. The reason I've made a distinction is that applicable knowledge as a concept is useful in regard to reality, or "that which does not necessarily correlate with my discrete experiences". Distinctive knowledge is the world entirely in your own head. You can do whatever you want. But there is this situation of having things happen that are outside of the control or opinion of your head. Define a rotten apple as healthy, but you will still grow sick and possibly die.

And of course we've covered inductions in depth. The reason why I wanted to go over your definitions is that underlying those concepts are my concepts. Let's not even say underlying; concurrent is probably better. My context and definitions serve a particular purpose, while yours serve another. The question is, while your definitions can be distinctively known, can they be applicably known? I am not saying they cannot; they just haven't really been put to the test yet.

I think the question between us, and why you've proposed a different set of definitions, is that you want something that the current definitions I've used do not give you. It is not that the context I've provided is logically incorrect; it is, I believe, in your mind logically inadequate. You want greater refinement and clarity to fuzzy distinctions you feel intuitively. And that is wonderful.

If I had to sum up what you are looking for, I think the real difference in our outlooks is that fundamental start. I don't think we disagree broadly, only in clarity and the necessity of new words in the specifics. As such, I will present some challenges to your terms that are not negations, but considerations.

Why did I separate the act of discrete experience from knowledge? Because as you agree, knowledge is a tool. A tool is an invention that we build from other things that allows us to manipulate and reason about the world in a better way. Discrete experience is a natural part of our existence. Knowledge is a tool built from that natural part of our existence. It is the fundamental which helps to explain what knowledge is.

When you use the term, "implicit knowledge", this overlaps with having discrete experiences. But this leaves you open to a question. How do you know it's knowledge? Knowledge is now integrated into the description of a natural experience. It is no longer a tool, but the source itself. How then do I separate knowledge from a belief? If I can have knowledge that is a tool, and knowledge that is not a tool, isn't that an essential enough property for separating the concepts into two different concepts? Does the definition you use increase clarity, or cause confusion?

This, of course, is a critique which can be applied to my own concepts. Is discrete experience as a broadly defined word a good term that has clear essential properties and does not muddle the water? Can we break it down into greater distinctions that will capture the overall goal of the knowledge theory, but make it easy to comprehend and accessible to others? But I have to be careful. Too detailed, and it can quickly get bogged down in details that aren't important to the overall concept. Too broad, and it can be misapplied.

The goal here is to apply just the correct amount of logically consistent terms that are not too separate from our current way of speaking and understanding. It must have the right amount of detail to be applicable in daily life, but also open to refinement for deeper questions. What you are doing right now is seeking that refinement. But I do not think at this point that there is any disagreement with the overall structure. The basic methodology is still applied to the terms you propose. With that, continue to refine.

Quoting Bob Ross
But the knowledge of atoms is entirely irrelevant to the invention and use of a ruler. So with knowledge.

This is true. But I would like to emphasize that even if it is necessarily the case that it is made up of atoms, this is all a part of extrapolated chronological precedence and not just chronological precedence.


If I am understanding your terms of chronology correctly, I would argue that it is both. It is necessary that atoms exist for the ruler to exist, whether you know it or not. You can also extrapolate that atoms are necessary for the ruler to exist later. But does the existence of atoms, or the knowledge of atoms, have any bearing on how you use a ruler? No.

Quoting Bob Ross
So I would state that with respect to conceptualization, it necessarily follows that I am preceded by atoms.


I believe this is a conclusion of applicable knowledge, not simply distinctive knowledge or merely discrete experience.

Quoting Bob Ross
Basically, you are claiming (I think) that discrete experience cannot be contradicted because that contradiction also requires discrete experience. — Bob Ross

Yes! I think you have it.

If you agree with me here, then I would like to ask you how you or I derived this? I would say from a manifestation of a concept that is immediately known and is revealed, so to speak, as necessarily true absolutely. To be clear, I'm not asking you to explain why we discretely experience, only how you or I came up with that very claim. Did we just discretely experience it?


A great question. Short answer? Yes. Long answer? It is the logic we derive from the ability to discretely experience. As I mentioned before, we cannot discretely experience a contradiction. Because experiencing a contradiction, in the very real sense of experiencing something as both 100% identical and 100% not identical to another concept, is something we cannot do. But let's say we could experience it. It would not be applicably known. It would not be distinctively known. It is beyond our ability to comprehend or experience as something knowable. It cannot be discretely experienced, but would be some other type of experience. Therefore it would be outside of the realm of comprehension and knowledge.

Quoting Bob Ross
If you conceptualized (discretely experienced) a blue ball within your mind that had clear essential properties to you, then you would distinctively know the blue ball.

The essential properties themselves are concepts. When you have the belief that there is a blue ball, regardless of whether it is true or not, you know you have that belief. Moreover, if you want to take it a step deeper, if I want to determine whether I still hold a belief, then it will have to be applied without contradiction; however, the concept of manifestation of the consideration of whether I still hold a particular belief is not induced nor deduced nor applied: it is immediately acquired. No process or tool of knowledge is required to know that. Likewise, if you are seeing a ball right in front of you, the belief aspect is the mediated deductive knowledge that it is a "blue ball" or mediated inductive knowledge of anything pertaining to the "blue ball", but the immediately acquired knowledge of the perception of the "blue ball" of manifestation of itself is not a belief (nor deduced nor induced).


This touches on the issue I noted earlier with the idea of "implicit knowledge". You can discretely experience whatever you want. You know you can, because you have deduced it logically without contradiction. The tool of knowledge is the logic of concluding our distinctions are not contradicted by reality. We do not have to have knowledge to have distinctions that are not contradicted by reality. We do not have to know why we do what we do. But when we attempt to describe why, knowledge is the tool that gives us the best chance of determining whether our distinctions are not contradicted by reality. When you state that the act of having discrete experiences is the act of knowledge itself, the word knowledge becomes muddled and runs into issues.

Another thing to consider is that your terms are causing you to construct sentences whose meaning is difficult to grasp (not that I am not guilty of this too!): "The concept of the manifestation of the consideration". This seems verbose, and I'm having difficulty seeing the words as clearly defined identities that help me understand what is trying to be stated here. I can replace that entire sentence with, "However, the discrete experience of whether I hold a particular belief is not induced, nor deduced, nor applied; it is immediately acquired." It is something we simply do.

Quoting Bob Ross
"You can't even claim to know something if you haven't, to some degree or another, conceptualized (my adjustment: discretely experienced) that something."

Yes, this is exactly the point I've been making.

If you are claiming "discrete experience" is the point of manifestation--not directly differentiation, then we agree. If not, then I don't think you can perform that substitution there.


No, I am not using the terms manifestation or conceptualization. I'm not saying you can't. Those are your terms, and if you have contradictions or issues with them, it is for you to sort out. All I am saying is if a being can't part and parcel the sea of existence, it lacks a fundamental capability required to form knowledge.

Finally, let me address the proofs.

The bible proof doesn't quite capture circular logic. It is not 1 -> 1. Symbols in logic are meant to be 100% separate from other symbols conceptually; 1 is not the same as .999999. The bible and God are clearly distinct entities, and not equivalent.

So, we propose A
We say, If and only if A -> B
Then we say, If and only if B -> A.

So our only proof for God's existence is that the bible tells us, and the only proof for the bible's truth, is that God tells us. That is circular.
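The structure can be made explicit in propositional terms; a sketch, assuming A stands for "God exists" and B for "the bible is true" (my notation, not the original's):

```latex
(A \to B) \land (B \to A) \;\equiv\; A \leftrightarrow B
```

A biconditional is satisfied both when A and B are both true and when both are false, so the pair of conditionals, taken alone, cannot establish A: each claim's only support is the other.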

My argument is not a circular logic, but fundamental.

Let's compare this to a simple proof: the logic of a bachelor.

1. A bachelor is an unmarried man.
2. The possible contradiction to a person being a bachelor is if they are not a man, or are married.
3. Joe is both unmarried and a man.
4. Therefore he is a bachelor.

The above is not circular; it is a logical conclusion from the definitions proposed. Let's look at mine again.

1. Discrete experience is the ability to have distinct differences within the totality of your experience.
2. The contradiction to this, is if you cannot comprehend distinctions within the totality of your experience.
3. To read and comprehend these words, you must be able to comprehend distinctions within existence.
4. If you are reading and comprehending these words, then you have the ability to comprehend distinctions within existence.
5. Therefore you discretely experience.
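Both arguments share the same valid shape: a definition, premises matching the definition, and a conclusion that follows by instantiation. As a sanity check, the bachelor version can be written out in a proof assistant; a minimal sketch in Lean 4 (every name here is mine, purely illustrative):

```lean
-- Toy formalization of the bachelor syllogism; all names are
-- illustrative, not part of the original post.
axiom Person : Type
axiom Man : Person → Prop
axiom Married : Person → Prop
axiom joe : Person

-- 1. A bachelor is an unmarried man.
def Bachelor (p : Person) : Prop := Man p ∧ ¬ Married p

-- 3. Joe is both unmarried and a man.
axiom joe_is_man : Man joe
axiom joe_unmarried : ¬ Married joe

-- 4. Therefore he is a bachelor: the conclusion is just the
--    definition applied to the premises.
theorem joe_is_bachelor : Bachelor joe := ⟨joe_is_man, joe_unmarried⟩
```

The discrete-experience version has the same form, with "comprehends distinctions" in place of "unmarried man"; neither is circular, since the conclusion unpacks the definition rather than presupposing it.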

Thanks again Bob, let me know what you think as always.








Bob Ross February 14, 2022 at 20:58 #654921
Hello @Philosophim,


A pine tree and an oak tree are different trees. But they are still trees. Discrete experience is a tree. Differentiation an oak tree. Conceptualization is a pine tree. At the end of the day, they are both trees.


If I am understanding your analogy correctly, then I would say (1) you are agreeing with me that discrete experience is not synonymous with differentiation (oak tree is derived from a tree) and (2) I would say, with respect to what I am attempting to convey, conceptualization would be a tree (not a pine tree). With respect to 2, this leads me to agree with you that we are essentially in agreement with one another; however, I am hesitant to completely agree with you as 1 directly entails to me that the fundamental is not differentiating "this" from "that", which I generally think your epistemology begins with (that and the principle of noncontradiction). When you say "tree", in this analogy, I am arguing it is specifically not differentiation: it is the point of manifestation (and I think there is a difference). When I read your essays, "discrete" in "discrete experience" tended to imply that differentiation is the tree: maybe I just misunderstood you.

For a certain context, identifying types of trees is not important.


I agree, but I wouldn't cast conceptualization as a pine tree; it would be the tree. Most notably, it would not be anything directly pertaining to a "discrete" anything.

And this is what I'm noting with differentiation and conceptualization. They are both still at their core, discrete experiences.


Again, I am interpreting this (1) as agreeing discrete experience is not differentiation directly (in that case, why use the term "discrete" if not to imply differentiation as a part of the fundamental) and (2) I think the ability, or act, of discretely (in the sense of differentiation) experiencing things comes after the experience itself. You first have a manifestation, an interpretation, and then, only after, can it be concluded that one necessarily discretely experiences. I think you may be attempting to use the term "discrete experience" synonymously with my attempted use of "conceptualization"; however, I find "discrete experience" to have confusing, almost contradictory, implications (no offense).

If conceptualization is useful as a word, then simply follow the process. Discretely experience the word in your mind. Make it have essential properties that are non-synonymous, or distinct enough from another word as to be useful so that it is distinctive knowledge. Then, apply it to reality without contradiction. If you can do it once, then you have applicable knowledge that such a word is useful in reality.


I think this is another big difference between us both: I don't think you can apply a tool of knowledge to that which is immediately known. I think you are attempting to acquire, holistically, all the knowledge you can claim to have via a tool: I don't think it makes sense to claim you can "know" something via a tool, yet you "do not know" the manifestations that were required for the tool of knowledge in the first place.

Now here's where it gets a bit complicated (and you are right to point out my confusing terminology), because there's a difference between the manifestations and anything built off of those manifestations. For example, when I state that you "do not know" the manifestations that were required for the tool of knowledge in the first place, I am not referring to anything concluded to precede that tool of knowledge; in other words, a concluded manifestor by means of the manifestations. I think this is what you were meaning by the "I" and how it doesn't constitute knowledge: a concept of a manifestor must be subject to the tool of knowledge. We are in no disagreement there. However, the manifestations themselves are necessarily not subject to the tool of knowledge: it is the point of absolutely no movement (metaphorically speaking)(point of manifestation). It is the point of neither deduction nor induction, it is given. However, to emphasize this a bit more, when I state "it is given", I am doing so without conceding a giver. A giver would (metaphorically speaking) require movement and, therefore, would be subject to the tool of knowledge.

I wanted to try and make that clear, first and foremost, that the division I am seeking is that of no movement vs movement, all of which is "knowledge"--but the former is given (with restraint from conceding a giver) while the other is obtained (via a tool of knowledge).
So, with that in mind, I think you are not addressing my point here (and it is not your fault, I am doing a poor job of explaining it), which is self-evident to me due to you attempting to apply it. Anything applied is subject to the tool of knowledge. Conceptualization is not subject to such: it is absolutely no movement.

From discrete experience, I define thoughts, sensations, and memory. Then I apply them to reality.


Again, I think we agree that we can't apply discrete experience to reality because it is reality. However, if that is the case, I don't see how we could logically attribute something acquired via it as knowledge without conceding it is itself knowledge. Also, although you can apply thoughts, sensations, and memory to reality, you don't obtain that you know of them themselves and, thusly, cannot (not just do not) apply them to anything. It is like, you can apply a belief to reality to see if it stands, but necessarily without application you "know" you have a belief. I'm not sure if we are in agreement here or not. In other words, there are two aspects to those terms (thoughts, sensations, and memory), you are right with respect to one aspect, but I think you are disregarding the other.

The issue with your current definition of conceptualization, is it isn't clear enough to show how it is separate enough from other useful words that can be applied to reality, and I'm not sure you've successfully applied it to reality yet without contradiction.


I think, again, the confusion lies in the fact that I will never attempt, nor am I able, to apply it to reality.

There does seem to be something different from the act of first identifying "this" from "that", then adding a concept to it.


For clarification purposes, I am not married to the term "conceptualization", it is just the best term I've come up with so far. But conceptualization is the identification that is the point of no movement I am trying to convey. "something different from the act of first identifying "this" from "that"" requires movement. I'm not really trying to address it in the sense of "well I have this discrete experience, let me induce/deduce a useful concept out of it". I am more trying to address it in the sense of the actual manifestation in the first place (without initially conceding a "manifestor")(without ever initially conceding a differentiation of "this" from "that"). I think you are more arguing that this cannot be done, namely without conceding differentiating "this" from "that", and that is where I think we mainly differ.

So please do not take my notes as discouragement. Continue please. I just think the clarity isn't quite there yet on the definition, so lets keep trying!


I completely understand: fair enough! I've been definitely making things more confusing, and I apologize, I'm trying to make it simpler, but haven't quite gotten there yet.

It is why I note we do not need to know why we discretely experience, it is simply an undeniable fundamental that we do.


This is fair and true. However, I would like to emphasize it is "an undeniable fundamental" (as in one of many, of which are not the fundamental in terms of the point of all manifestation) and thusly is derived from the manifestations themselves, the point of no movement, the point of manifestation, without conceding a manifestor, interpreter, etc (as those would be subject to the tool of knowledge to obtain it).

This is simply a discrete experience as I describe it. "This" is not "that" is known by fact, because it is not contradicted.


I would ask, how are you able to state it is not contradicted? Because the point of manifestation is cognitive in a sense; in other words, this is derived from a point of manifestation. "this" is not "that" is known because it is immediately given; you seem to still be claiming you are applying it without contradiction, and that is how you have obtained it as known. I would say you necessarily cannot apply it; it is what you apply to. I think your use of the principle of noncontradiction is simply assumed, but I think it actually exposes the true point of no movement. You are first utilizing something that necessarily derives everything else: this is not the differentiation of "this" from "that", it is what allows for "this" is not "that" (without conceding an allower).

Are the desk and keyboard in front of you both 100% separate and 100% not separate? If this were the case, you could not discretely experience them. At best, you can make a new word that describes both concepts together.


I agree, but I would argue you are using the fundamental, point of manifestation, which dictates (without conceding a dictator) the necessity of the principle of noncontradiction. It isn't differentiation itself, nor the ability to "discretely experience" (in terms of the use of "discrete").

The question after you realize you discretely experience is, "How do I know I discretely experience?" You try to contradict it. And as I've noted before, you cannot.


Again, the question itself, the act of attempting to contradict it, and the realization are all "the tree"; they are the point of manifestation, the point of zero movement. That is the fundamental of everything. This is why I would argue you can't actually even try to contradict it; it's just the fact that nothing happens that makes us feel like we successfully passed it through a test, but the manifestation of the "test" itself is what we were trying to pass through. It cannot be done. It is no different than trying to justify the principle of noncontradiction by trying to contradict it; it literally cannot occur (even as an attempt).

With this, you can discretely experience whatever you like as long as it follows a few rules. It must be a distinct discrete experience that is in some way different from other discrete experiences in your head to avoid being a synonym, and it must not be contradicted by other discrete experiences you hold in your head.


I agree, but these rules themselves require movement, which is derived from the point of no movement. They are manifestations which require a point of manifestation, without conceding a mover or manifestor initially, as that would be subject to the tool of knowledge to be either rejected or obtained. It is essentially a thing recursively exploring itself, using its own manifestations and rules to determine it has manifestations and rules.

And of course we've covered inductions in depth. The reason why I wanted to go over your definitions is that underlying those concepts are my concepts. Let's not even say underlying; concurrent is probably better. My context and definitions serve a particular purpose, while yours serve another. The question is, while your definitions can be distinctively known, can they be applicably known? I am not saying they cannot; they just haven't really been put to the test yet.


I am hesitant to say we are meaning the same exact thing, or that I am implicitly holistically using your epistemology yet, because I think you are still determining knowledge to be holistically that which must be tested. I am never going to test what is immediately known. And, likewise, I would consider it just as much knowledge as any tool of knowledge we can conceive of. Although I may just be misunderstanding you, I am not attempting to apply your tool of knowledge to the point of no movement, the point of manifestations. Also, I am only in agreement with you on "applying to reality without contradiction" if we are using "reality" in the sense of holistically all experience (which I think you are arguing for, but just wanted to clarify). Your thoughts are enough to create mathematics (in a general sense, obviously not for the derivation of math equations that pertain to things that must be seen in order to make sense of them, but math, as being the discrete logic, requires nothing but differentiation--I don't need to see "this" from "that").

Again, I would be hesitant to state we are concurrent, because I am only agreeing with you in the sense of the tool of knowledge, which is not holistically knowledge (I would argue). You seem to be even attempting to apply our terms to a test, which I am saying there is such a thing as a known untestable piece of knowledge (specifically one: the test itself, not that which tests--again, not conceding a tester, just the test itself so to speak). I don't think we are in agreement about that.


Why did I separate the act of discrete experience from knowledge? Because as you agree, knowledge is a tool. A tool is an invention that we build from other things that allows us to manipulate and reason about the world in a better way. Discrete experience is a natural part of our existence. Knowledge is a tool built from that natural part of our existence. It is the fundamental which helps to explain what knowledge is.


Hopefully I've demonstrated that I do not think this is holistically the case. When I say "knowledge as a tool", I am meaning it as one subtype out of two distinct types. I don't see how someone could logically claim to "know" something by means of obtaining it from a tool, but yet equally claim they "do not know" that which it is built off of (again, not an interpreter, but the mere interpretation itself). I also find it wrong to claim to "know" you discretely experience by means of applying it. Likewise, that you know you hold a belief (not pertaining to the truth of the content of such), or that you know that immediate perception, thought, emotion, etc. It seems like you aren't granting these as known, or you are attempting to pass them through the tool to obtain them as knowledge (which I think you are incapable of such, we are incapable of such).

How do you know it's knowledge?


My point is that you are immediately given, granted, the knowledge that you "know" that you are questioning how you know it's knowledge. I am in agreement with you that a tool would be required to evaluate the truth of the content, so to speak, of the question itself, but not the question as immediately manifested.

It is no longer a tool, but the source itself.


Again, I want to be careful with "source itself". In terms of movement, anything concluded, such as a source in the sense of an interpreter or manifestor, is subject to the tool. I am in agreement with you on that. However, the "source" as the immediate manifestations themselves is just known. And I don't think it would make logical sense to claim we can know something if the latter definition of "source" isn't known. So in a sense, you are right; in another sense, you are wrong (it is the "source" and tool, but not in the sense of any sort of movement).

How then do I separate knowledge from a belief? If I can have knowledge that is a tool, and knowledge that is not a tool, isn't that an essential enough property for separating the concepts into two different concepts?


Again, to determine the truth in terms of the content, or proposition, of a belief, it requires a tool. But you immediately know that you are having a belief as it was immediately manifested as such. In other words, the belief that there is a red squirrel in my room would require a tool of knowledge to obtain whether it is true or false, but the belief itself (as a belief) is necessarily known immediately. This doesn't erode the distinction between knowledge and the content of a belief. You can have a proposition you don't immediately know while still knowing that the very manifestation of the proposition itself is true (i.e. I don't immediately know if there is a red squirrel, but it is true that the belief--the proposition--has occurred to me). Likewise, I would say that the propositions in our thoughts, also called beliefs, are distinguished from knowledge, however the thought itself is necessarily a true fact (and thus known). Not that the proposition is true, but that the fact that there is a proposition is necessarily true.

Does the definition you use increase clarity, or cause confusion?


It most definitely creates more confusion, fair enough! And so, if the objective is to portray as much as possible to the masses, then it may very well be useful to start simply with the fact that we differentiate. However, I don't think, in terms of philosophy, our goal should be to simplify positions just for that reason, as it sometimes becomes an oversimplification--a lot of philosophical principles and achievements necessarily required at least some complex elaboration. I'm trying to say that starting with differentiation may be a strictly false presupposition that can nonetheless be used to better portray the epistemology as a whole to the masses.

Too detailed, and it can quickly address details that aren't important to the overall concept. Too broad, and it can be misapplied.


Absolutely fair enough! It is definitely a trade-off, but I am approaching this more from what is fundamental than from how to get the most conveyed to the most people. I think both are worthy considerations.

What you are doing right now is seeking that refinement. But I do not think at this point that there is any disagreement with the overall structure. The basic methodology is still applied to the terms you propose.


Again, I am hesitant here to agree. For these reasons:

1. You seem to be deriving from differentiation, not the point of manifestation
2. You seem to be claiming knowledge is strictly obtained and never given without conceding a giver

I don't think I can really say I subscribe to your epistemology with such fundamental differences. I think you are more speaking in terms of once we are discussing the tool of knowledge, differentiation, and the principle of noncontradiction, then we generally agree and, thereby, I am subscribed in that sense.

I would argue that it is both. It is necessary that atoms exist for the ruler to exist, whether you know it or not.


I would like to be careful here as well; it seems to be implying an "objective" reality that is an absolute reality (that which is not contingent on the subject). When I state "objective" reality, it is still in relation, and thus contingent to a degree, to the subject. It is necessary that atoms exist for the ruler to exist within the constraints of what has been manifested for you as the subject. We cannot claim beyond that.


I believe this is a conclusion of applicable knowledge, not simply distinctive knowledge or merely discrete experience.


This is true, but not in relation to an absolute "objective" reality. However, as you probably agree, it is not strictly applicable knowledge either: it is a combination, as it all stems from those rules and the point of manifestation (what you would call discrete experience, which I would argue doesn't yet sound synonymous to me).

As I mentioned before, we cannot discretely experience a contradiction, because experiencing a contradiction, in the very real sense of experiencing something as both 100% identical and 100% not identical to another concept, is something we cannot experience.


Again, this isn't because we applied the principle of noncontradiction and found it not to contradict, thereby obtaining such knowledge; we simply "know" it because it is manifested necessarily that way. It is no different, I would say, from trying to legitimately apply the principle of noncontradiction to itself. I don't think it makes sense to constitute knowledge as strictly what has been applied (which implies strictly that which can be applied). Don't get me wrong, there is a very real sense in which you are right: we can make up plausibilities that are inapplicable (which I would argue are irrational inductions), and those will never constitute knowledge. But there is a difference between something we moved to in our reasoning that cannot be applied and the reasoning itself which cannot be applied. These, in my head, are not the same "cannot be applied".

You can discretely experience whatever you want. You know you can, because you have deduced it logically without contradiction.


Although I understand what you are stating, and I agree in a sense, those two statements contradict each other. Also, it exposes the fact of the manifestations and the seemingly necessary contingency (which is also a manifestation) of the principle of noncontradiction.

This leads me to another point: "reality" isn't just object, it is also subject. The thoughts themselves are a part of reality. When you "apply" your thoughts, strictly in the abstract, you are "applying to reality" without contradiction because the principle of noncontradiction is ingrained in us.

Another thing to consider is that your terms are causing you to construct sentences whose meaning is difficult to grasp (not that I am not guilty of this too!): "The concept of the manifestation of the consideration". This seems verbose, and I'm having difficulty seeing the words as clearly defined identities that help me understand what is being stated here. I can replace that entire sentence with, "However, the discrete experience of whether I hold a particular belief is not induced, nor deduced, nor applied; it is immediately acquired." It is something we simply do.


Fair enough! However, I would say that your insertion of "discrete experience" necessarily erodes some of the meaning away, albeit my definitions aren't very good at all.

"You can't even claim to know something if you haven't, to some degree or another, conceptualized (my adjustment: discretely experienced) that something."

Yes, this is exactly the point I've been making.

If you are claiming "discrete experience" is the point of manifestation--not directly differentiation, then we agree. If not, then I don't think you can perform that substitution there. — Bob Ross


No, I am not using the terms manifestation or conceptualization. I'm not saying you can't. Those are your terms, and if you have contradictions or issues with them, it is for you to sort out. All I am saying is if a being can't part and parcel the sea of existence, it lacks a fundamental capability required to form knowledge.


I am honestly not quite following your response here. It seems like you didn't really answer the question but, instead, referred it back to me. Either you agree that "discrete experience" is synonymous with the "point of manifestation and not directly differentiation", or you don't. I am simply trying to understand whether you are attempting the same thing I am with the term "discrete experience", or whether you are not. Again, when you say "a being can't part and parcel the sea of existence", you are implying "differentiation" is "discrete experience", which is not what I am trying to convey. Also, the "sea of existence" seems to me to be implying, again, an absolute reality which is considered "objective". In other words, the subject is parsing the "sea of existence". It isn't that I am arguing the subject is the sea of existence, or that the sea of existence doesn't exist, but that we only view it as the sea of existence from what actually is existence: the manifestations themselves. We only induce that there is a sea of existence from, not that which induces, but the manifestations of those inductions themselves. I think there is a big difference.

I think, in terms of your circular logic rebuttal, you are right if you are talking about the actual fundamental, but I don't think you are. I think you are taking a tiny step by means of the manifestations to prove differentiation, but then proving manifestations with differentiation (which I think is an IFF contingency; however, I do see your point).

I look forward to hearing from you,
Bob
Philosophim February 17, 2022 at 14:21 #655881
Thank you Bob. I think we are both struggling here to convey each other's intentions. I'm going to take another stab at describing discrete experiences once again, because I believe you are making it more complicated than I ever intended. On the flip side, it could be that what I am saying is too simple for your liking, and perhaps you understand it but disagree that this is what it could be, because it does not cover certain complexities you see that I do not.

First, what do I mean by the sea of existence? Think of a camera lens. All a lens does is take in light. It cannot section the light into different wavelengths. It cannot detail or zoom in on one part of the light over the other. It is the receiver.

Now imagine that everything you do, thoughts, feelings, light, sound, etc, are the light that streams in from a lens. You don't comprehend anything but the light. The sea of existence. But then, you do. You are able to separate that "light" into sound and sight. Technically, this is the brain. If you had no brain, all the pulses from your eardrums and the light hitting the back of your eyes would mean nothing. The brain takes that mess of light, and creates difference within it. Manifestation, differentiation, conceptualization, whatever you want. There is nothing more complicated to discrete experience than the act of there being light through a lens, and then something that can take it apart into individual identity. It is as basic as a camera sending a picture to an AI, and the AI identifying different parts of the picture.
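If it helps, the analogy can be put in program form. This is only an illustrative toy of my own (the names "raw_stream" and "differentiate" are mine, not terms from the papers): the lens merely receives an undifferentiated stream, and a separate step parts that stream into discrete identities.

```python
# Toy sketch of the lens/brain analogy. The "lens" only receives an
# undifferentiated stream of samples; differentiation is the separate
# act that parts that stream into discrete identities.

raw_stream = ["photon", "photon", "pressure", "photon", "pressure"]

def differentiate(stream):
    """Part the undifferentiated stream into labeled identities with counts."""
    identities = {}
    for sample in stream:
        identities[sample] = identities.get(sample, 0) + 1
    return identities

# The stream alone carries no identities; differentiation creates them.
print(differentiate(raw_stream))  # {'photon': 3, 'pressure': 2}
```

Nothing in the stream itself marks where "sound" ends and "sight" begins; the partitioning is entirely the work of the second step, which is the point of the analogy.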

With this, let's address your idea of manifestation. I realize I did not understand what you meant by it, and maybe still don't. Let's try using the theory. What are the essential properties of a manifestation? If it's not a discrete experience, can you explain what makes it different?

For my part, I will attempt to convey how I have interpreted it. When you speak of manifestation, it seems to be the act of awareness of an identity. Now when I say identity, I don't mean words that describe that identity. I mean the act of experiencing a "thing". If I saw a red squirrel, it's not the identification of a red squirrel, or the comparison of that squirrel to other things. It is the blob of color, movement and action that makes it a "that".

To me, this is just the most basic form of discrete experience. It is the fact that I am currently discretely experiencing "that". The way I've been interpreting the rest is as such.

Differentiation - The discrete experience of comparison between two or more identities.
Conceptualization - The act of analysis on a discrete experience that defines it as something memorable (possibly the addition of a definition or descriptive word)

So from my understanding, you are simply adding different degrees of discrete experience. But none of this changes the logic or the outcome of the theory. It only refines with additional identities, the levels of discrete experience that one can have. This is no different from my fundamental of identifying thoughts and sensations. Both are discrete experiences, just different identities.

Quoting Bob Ross
You first have a manifestation, an interpretation, and then, only after, can it be concluded that one necessarily discretely experiences.


If my definitions are correct, then this makes sense to me. No quibbles at all! What I think you have been confused on is that they are all discrete experiences, and are simply different identities that do not counter what I'm stating, but are refinements of what I'm stating. I'm stating A -> B. You're stating A -> a.1 -> a.2 -> B.

Quoting Bob Ross
I don't think you can apply a tool of knowledge to that which is immediately known. I think you are attempting to acquire, holistically, all the knowledge you can claim to have via a tool: I don't think it makes sense to claim you can "know" something via a tool, yet you "do not know" the manifestations that were required for the tool of knowledge in the first place.


What I am claiming is that knowledge is a process that we can use to logically determine that which is most likely to not be contradicted by reality. You can use the process without being aware of it. If you are aware of the process of knowledge, and you use that process of knowledge, you can determine what you know. If my definition of your three definitions works for you, I conclude that these types of discrete experiences are all distinctive knowledge. That is because when we apply the process of knowledge to them, we realize they are simply part of our ability to discretely experience, and are of our own design.

If I manifest a pink elephant (using my example with discrete experience), then there is nothing which contradicts the fact that I do, in fact, manifest a pink elephant. With the process of knowledge, I can conclude this manifestation is something I know. Without understanding the process of knowledge, I could of course doubt that what I am manifesting is what I am manifesting. And without understanding the process of knowledge, I could also NOT doubt that what I am manifesting is what I am manifesting. But I need a logical process to make this more than a belief. One can believe in something that is not contradicted by reality. But what makes it knowledge is the process one follows to arrive at that conclusion.

Looking at a manifestation, a basic discrete experience, we can logically conclude that what we manifest, is something we know we manifest. The manifestation itself is not contradicted by reality. Thus this is part of distinctive knowledge. I can also differentiate the pink elephant manifestation from a grey elephant manifestation. "This" is not "that". Finally, I can start conceptualizing that I will call both "elephants" and one is "pink" while the other is "grey".

All of this is what my theory covers. This is not a counter to what I'm stating, it is in fact, what I am stating. I just never subdivided the process of discrete experience to your level, which I think is well done! But your introduction of more identities does not introduce the idea of "implicit knowledge". One cannot have knowledge, without following the process of knowledge. If one follows the process of knowledge without knowing they are, that is accidental knowledge, not implicit.

I'll define what I see "implicit" as meaning. "Implicit" seems to me to mean that something is implied or natural. Knowledge can never be implied or natural, because it is a clearly defined process. It doesn't mean we can't conclude that others accidentally know things. I can conclude that an ant knows the manifestation of dirt and sugar, and also claim that it does not know the words "dirt" and "sugar" that could be conceptualized about that manifestation. Perhaps the ant follows a process with its manifestations to know that sugar is edible, while dirt is not. And perhaps that process is the process of knowledge put forth. But can the ant "know that it has knowledge"? With our current understanding of ant intellect, no.

Quoting Bob Ross
How do you know its knowledge?

My point is that you are immediately given, granted, the knowledge that you "know" that you are questioning how you know its knowledge. I am in agreement with you that a tool would be required to evaluate the truth of the content, so to speak, of the question itself, but not the question as immediately manifested.


How do you know that what is manifested is knowledge? Without a process of knowledge, you don't. Without knowing what the process of knowledge is, you cannot know that you know anything. A manifestation is nothing but a discrete experience. How we evaluate that logically is either a belief, or knowledge. Again, you could use the process of knowledge to know it, without knowing the process of knowledge. You would know it, from our outside perspective, but you yourself would not know that you know it. Again, this is accidental knowledge, not implicit. We are not born with an innate understanding of knowledge; most of us are born with the capability to use the process of knowledge.

Quoting Bob Ross
Again, to determine the truth in terms of the content, or proposition, of a belief, it requires a tool. But you immediately know that you are having a belief as it was immediately manifested as such. In other words, the belief that there is a red squirrel in my room would require a tool of knowledge to obtain whether it is true or false, but the belief itself (as a belief) is necessarily known immediately


To clarify. I do not immediately know I am having a belief. I have to determine that. And yes, with the process of knowledge, I can determine that what I manifest, what I differentiate, and what I conceptualize are all forms of distinctive knowledge.

I believe this is essentially what you are trying to say. You believe that manifestations are implicitly known, while I am stating manifestations are the act of discrete experience, and by using the logical process of knowledge proposed here, we can determine that manifestations, differentiations, and conceptualizations, are all acts of discrete experience. And the act of discretely experiencing "I discretely experience" is something which cannot be contradicted by reality. As such, we know that we discretely experience, and we can label what we discretely experience as "distinctive knowledge".

My challenge to you, is for you to demonstrate how you implicitly know that you manifest without first showing the process of what knowledge is. Clearly define the word, and apply it to reality without contradiction. As it is now, I cannot agree that there is innate knowledge within humanity. We are innately capable of knowing, but we are not innately capable of knowing what knowledge is, and thus are incapable of innately claiming we know things without the knowledge of that process.

One final mention, as I believe the rest is just repetition over this subject.

Quoting Bob Ross
As I mentioned before, we cannot discretely experience a contradiction, because experiencing a contradiction, in the very real sense of experiencing something as both 100% identical and 100% not identical to another concept, is something we cannot experience.

Again, this isn't because we applied the principle of noncontradiction and found it not to contradict, therefore we obtained such knowledge, we simply "know" it because it is manifested necessarily that way.


I am quite certain that someone out there could claim "I know the principle of noncontradiction is wrong innately". You would then ask, "How do you know?" And they would ask you, "How do YOU know the principle of noncontradiction is real?" Someone could very well believe and live with the idea that contradictory things exist. What about a God that is all good, all powerful, and all knowing? Or a being that exists outside of time? These are all contradictions that some people swear they know are true. It is not that people cannot follow the process of knowledge, it is that people do not innately know what knowledge is. Only after discovering what knowledge is can a person identify what they know versus what they do not know.

Has this clarified what each of us is trying to communicate to the other? I must also add that I think your divisions of manifestation, differentiation, and conceptualization are fantastic, and wonderful additions to the theory (if I have the proper understanding of your intentions)!

Bob Ross February 19, 2022 at 19:39 #656779
Hello @Philosophim,

I think we are both struggling here to convey each other's intentions.


I agree. Furthermore, I really appreciate your elaboration as I now understand better where exactly you are coming from. Likewise, I will do my best to keep this response concise and aimed at conveying my point of view.

Now imagine that everything you do, thoughts, feelings, light, sound, etc, are the light that streams in from a lens. You don't comprehend anything but the light. The sea of existence. But then, you do. You are able to separate that "light" into sound and sight.


I am understanding this as what is typically considered, scientifically, to be "sensations". Am I correct?

Technically, this is the brain. If you had no brain, all the pulses from your eardrums and the light hitting the back of your eyes would mean nothing. The brain takes that mess of light, and creates difference within it.


This seems, generally speaking, to be "sensations" vs "perceptions"--the former being the raw input and the latter being the interpretation (in your case, I think you are more stating "differences" instead of "interpretations", but I think they essentially convey the same thing here). My point is that, although you are right in everything you have said, this is all obtained knowledge pertaining to how you derived yourself (or how you, thereafter, derived someone else in relation to themselves). This is the chicken figuring out it came from an egg (or the chicken concluding another chicken must have come from an egg). Maybe instead of calling it "extrapolated chronological precedence" we could call this simply "that which was obtained or determined to be true regarding what must precede itself (itself being the "subject")". This is contrary to "just chronological precedence", which maybe we could call simply "that which is deriving or that which is required for the consideration in the first place". The chicken derives that it came from an egg: that derivation requires it in the first place. It could very well be that, even given that it makes the most logical sense (or may even be considered necessary) that the chicken came from the egg, these are all formulations of that chicken. What if this "truth", that it must come from the egg, is simply a product of cognition? What if it is a product of the chicken's ability to differentiate things (and not "objectively" known, or absolutely a part of the "universe")? What if it is only necessary in relation to itself? When we analyze a brain, it is an interpretation of a brain via a brain. Therefore, you will only know as much as is allowed via your brain's interpretation of the brain it is analyzing. Although I don't like putting it in those terms (because I am utilizing what I am criticizing to even put this forth), maybe that will make more sense (I'm not sure).
Do you think it must necessarily be the case that it comes from the brain, or that it must necessarily be the case in relation to itself? I agree with the science you are invoking here (no problem), but hopefully my proposition here is making a bit of sense.

What are the essential properties of a manifestation? If its not a discrete experience, can you explain what makes it different?


Although your definitions are all splendid, I don't think your use of "manifestation" quite fits what I am trying to convey. I was using "manifestation" and "conceptualization" interchangeably (I apologize for the confusion there). For all intents and purposes right now, I am going to try to explain it in terms of "conceptualization" by means of a poor analogy.

Imagine that an envelope (mail) pops into existence out of thin air into your hands periodically. You don't know the contents of the mail initially or where they came from or how they came to be, but you open them. The envelopes occur in succession to one another (the next appearing only once you read the one currently in your hand), and in each there is a message that you read. Imagine you (1) necessarily always participate in this periodic reading of the contents of envelopes and (2) are always immediately convinced of the contents that you read. This is essentially how I view you (for all intents and purposes right now: "thought"). Let's take it for a spin.

Let's take your pink elephant example. When you say you had a basic discrete experience of a pink elephant, I am going to map that to an envelope, of which you have no clue where it came from or how it is, which you necessarily opened and read--immediately convinced of its contents: the discrete experience of the pink elephant. Now, as the envelopes are in succession of one another, you are unable to continue until the next envelope pops into your hands and you can be convinced of its contents. Therefore, when you say you could (1) be in doubt that what you manifested was a pink elephant or (2) you could apply a tool of knowledge to determine whether you did in fact manifest it, these both (whichever occurs) would be the next envelope's contents (or an envelope simply after that envelope). So, for example:

**discrete experience of "pink elephant"**
envelope 1: I just discretely experienced a pink elephant [convinced]
envelope 2: did "I" really just discretely experience a "pink elephant"? [convinced you are doubting envelope 1]
envelope 3: "I know I" discretely experienced a "pink elephant" because I can apply it without contradiction to reality [convinced]

Notice the succession of envelopes and how you cannot (in this hypothetical) be convinced without reading an envelope. Now, this would mean that even if 600 million years or 2 seconds go by between manifestations of these envelopes (between when you get convinced via reading one), for you that time would never have occurred. If envelope 2 was read 600 million years after envelope 1, it would be no different for you than reading it 10 seconds after the other. Notice that the **discrete experience** is not "known" (or maybe "recognized" is a better word?) until envelope 1, not at the point of discrete experience. If envelope 1 never occurred, then the discrete experience "would have never occurred". That envelope 1 is what enabled you to even logically consider the discrete experience in the first place. When I say "it never would have occurred", I mean in the sense that even if hypothetically it is still objectively (or absolutely) occurring without the envelopes, it would be completely unverifiable and thereby meaningless to the subject.

The conceptualization I am referring to is (more or less) the envelopes: a concept manifested in the same essential manner, of which one is immediately convinced. Even if I read an envelope, get convinced of it, and then immediately in the next envelope am unconvinced of the previous one, this process still occurred. Also notice that the correspondence, so to speak, of each envelope is necessarily offset by one: envelope > n can pertain to envelope n, but n cannot pertain to n. For example:

envelope 1: I just discretely experienced a pink elephant
envelope 2: I was convinced I was discretely experiencing a pink elephant when I read envelope 1

Notice the convincement that one was convinced during envelope 1 occurs, at a minimum, at envelope n + 1 and cannot occur at n (at 1 in this case). Likewise:

envelope 1: I am convinced of this very sentence right now as I say it

I have not solidified, so to speak, that very assertion until an envelope > 1 pops up with a message pertaining to it. In other, more confusing, words: I am immediately convinced of "I am convinced of this very sentence right now as I say it", but necessarily not immediately convinced of my convincement of that very statement until (if at all) envelope > n.
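If it helps, the offset rule can be sketched as a toy program (all names here are my own illustration, not part of the epistemology): envelopes are read strictly in succession, and an envelope's contents may only pertain to a strictly earlier envelope, never to itself or a later one.

```python
# Toy sketch of the envelope analogy: each entry is (message, pertains_to),
# where pertains_to is the index of an earlier envelope or None.

def read_envelopes(contents):
    """Read envelopes in succession, enforcing that envelope n may only
    pertain to an envelope strictly earlier than n."""
    read = []
    for n, (message, pertains_to) in enumerate(contents, start=1):
        # The "n cannot pertain to n" constraint from the analogy.
        if pertains_to is not None and pertains_to >= n:
            raise ValueError(f"envelope {n} cannot pertain to envelope {pertains_to}")
        read.append((n, message))
    return read

log = read_envelopes([
    ("I just discretely experienced a pink elephant", None),
    ("did I really just experience that?", 1),            # doubts envelope 1
    ("I know I did: it applies without contradiction", 1),
])
print(len(log))  # 3
```

The ValueError branch is the whole point: any convincement about envelope n can only arrive in some envelope > n, never in n itself.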

Last thing to briefly elaborate on: if this is the case, then how would the reader get convinced of the envelope process? Wouldn't this also be an extrapolation of some sort? The 'egg' of my analogy, so to speak? I think not. This is because, although envelopes can only pertain to each other in chronological order (> n pertains to n) (therefore I would be using the contents of those envelopes to verify the process itself in a logical manner, which is an extrapolation of some sort: a use of logic to derive the logic), I am not basing the argument (or at least not trying to) off of the process of the envelope as extrapolated, but off the form of the envelopes themselves. In other words, by means of the contents of the envelopes, all of these previous and currently continually manifesting envelopes assume the same form--that is, something of which I am immediately convinced (and of which I can equally become unconvinced later on). The form is the concept in a pure sense (I like your definition as well, but notice that yours, as you rightly point out, is within discrete experience, whereas the convincement of these envelopes I would argue is not). Also, it is important to note that the "convincement" I am referring to is not necessarily in terms of an envelope that explicitly contains "I am convinced of X", for that very statement is immediately convincing you of X and not itself. So when I "prove" conceptualization, I am merely reading the contents of envelopes, and I assert I was convinced immediately of the content of envelope n by means of another envelope > n--and, in turn, everything in this general sense is a conceptualization (an envelope). I cannot break this immediate conceptualization loop that seems to occur ad infinitum.

Likewise, when we talk about differentiation, I agree with your definition, but when we provide any logic, or illogic, or rationale, or irrationale, or absurdity, etc, we are doing so in a manner of reading envelopes of which we are immediately convinced, and of which we can most definitely become unconvinced later. Therefore, what you said is true pertaining to discrete experience, but the whole argument, including differentiation in the sense of experience being discrete, is a succession of envelopes. Actually, I would rephrase that as "is a succession of envelopes without conceding a succession of envelopes beyond the reading of the envelopes themselves". But I think that may be confusing (not sure).

The manifestation itself is not contradicted by reality.


So, to keep this as fundamental as I think possible, the idea of "contradiction" is read via an envelope. However, the important aspect that makes it "special", so to speak, is that the contents of a later envelope can assert the necessity of the principle of noncontradiction and, most importantly, every envelope that manifests pertaining to such will assert the very same thing. This is why it is an axiom: you cannot apply the principle of noncontradiction to itself, because that always leads to the use of it in the assertion.

I can also differentiate the pink elephant manifestation from a grey elephant manifestation. "This" is not "that". Finally, I can start conceptualizing that I will call both "elephants" and one is "pink" while the other is "grey".


What I am trying to get at is more fundamental than this: the differentiation of "this" is not "that" and the conceptualizing (in your use of the term) of "elephants" and "pink" and "grey" are both contents of an envelope (or several). They take the necessary form of an idea popping into existence, so to speak, manifesting, of which one is immediately convinced. I think your use of the terms, within discrete experience, is fine though.

But your introduction of more identities does not introduce the idea of "implicit knowledge". One cannot have knowledge, without following the process of knowledge. If one follows the process of knowledge without knowing they are, that is accidental knowledge, not implicit.


So there are two aspects needing to be addressed here. One aspect, which was my initial intention for the term "implicit", is simply the acknowledgment that we, once we say we "know" something, may induce that that thing we know now was occurring the whole time prior to us knowing it (in light of us knowing it). This isn't to say that, prior to us knowing it, we knew it. Just that, for example, when we do say "we know that differentiation necessarily occurs", we extrapolate that as occurring prior to when we even knew that. It is "implicit", with respect to this first aspect, in the sense that we are claiming differentiation was occurring, implicitly without our recognition, the whole while prior to our recognition of it. I think my concatenation of "implicit" with "knowledge" was confusing and wrong, so I apologize. My point was that we don't "know" it until we conceptualize it (until it pops up in an envelope). If we had never conceptualized it, it would have been as if it never existed (it very well could have never existed). I think, now in hindsight, this is more or less what you meant by "accidental", but this leads me to the second aspect: it ended up, somewhere along the way, sort of morphing into a conversation about whether you can "know" something without applying a tool to it (this is separate from my initial intention for the use of the term "implicit"). In this sense, although I don't think "implicit" is the best word, I meant that the envelope itself is a given, without conceding a giver, in the sense that any derivation of a giver would, well, be a derivation, which is derived from the content of the envelopes. This aspect, admittedly, isn't really "implicit"; it is "manifested", or "given", or something. For all intents and purposes, this:

"how do I know of my previous envelope I read?" -> "because I remember reading it"

and this:

"how do I know of my previous envelope I read?" -> "because I can apply that belief to reality without contradiction"

Are of the same form. This form, this conceptualization, is the most fundamental in terms of "that which is deriving or that which is required for the consideration in the first place". On a separate note, I would even argue (and the argument itself was read from envelopes) that there is a difference between applying A to B within "reality" without contradiction, applying A to A within "reality" without contradiction, and applying "reality" to "reality" without contradiction. I think your use of "without contradiction" is utilizing the latter (with respect to immediately “known” things). Technically you are right though: I can't contradict that I had read previous envelope n, but how could I contradict it? How do I apply reality to reality? How do I pass a test through itself to see if it passes? My point is that it is impossible. Imagine you forgot that you read envelope n; then you wouldn't be applying anything in the first place: it would not become a consideration until an envelope > n pops up with a manifestation of that consideration. If an envelope pops up with a manifestation about whether a previous envelope occurred, and it results in another envelope that concludes you did, then you did. Likewise, if we were to postulate that an envelope manifests asserting your use of drugs while envelope n's contents were being read, thereby questioning whether it is "objectively true", the fact that envelope n occurred is necessarily solidified as true regardless of whether it is "objectively true". By example of yet another poor analogy, imagine our tool for determining motion was based off of a specific train, T, which is continually moving at a constant speed. Everything we characterize as “moving” or “not moving” (or any consideration of “how fast” or “how slow”) is relative to T.
I am having a hard time understanding how we aren't, when trying to apply an envelope succession to itself “without contradiction”, trying to determine whether T is moving. T is the standard; it is that which springs the very notion of “movement”. When we try to apply A to B, or even A to A, relative to reality (“to reality”), we can determine whether it is a contradiction; however, when applying “reality to reality” I don’t see how we are actually performing any “applying”, just like trying to “apply” T to T relative to T to see if T is moving.

Perhaps the ant follows a process with its manifestations to know that sugar is edible, while dirt is not. And perhaps that process, is the process of knowledge put forth. But can the ant "know that it has knowledge"? With our current understanding of ant intellect, no.


I'm thinking now in terms of "accidental knowledge", as you put it. My point is that whether we deem its knowledge "accidental" or not has no bearing on what the ant knows in relation to itself. It may be the most logical thing for us to deem, but that has no impact on whether it knows anything. So I think you are right, but with careful consideration: in relation to ourselves, not in relation to itself.

How do you know that what is manifested is knowledge? Without a process of knowledge, you don't.


To keep this brief, there are two means of looking at this. One is that the envelope succession is a loop the subject cannot break. The other is that the envelope succession is what manifests any tool of knowledge we can come up with, and the process of acting out that tool, which necessarily means we “know” those manifestations are “true” (aka, we are convinced of them immediately) in order to do either of the two aforementioned.

Likewise, I would argue certain aspects of this envelope process are necessary in the sense that the contents of the envelopes always conform to a specific convincement, such as the principle of noncontradiction.

But building off of that, in terms of a tool of knowledge, I think we can also prove we have “implicit” knowledge in the sense of exactly what you were depicting with light and the brain: your brain, or to be more specific you as the subject, conforms to specific motivations which require “knowledge” in that sense. But this would be getting into the “why” of discrete experience—nevertheless, this is an innate form of knowledge that we obtain via the tool of knowledge (i.e. that your “tool of knowledge” would never have been created in the first place if you didn’t have some sort of motive to differentiate). I don’t think we are in any disagreement here, as this would have to be obtained via a tool of knowledge that there is “implicit”, innate knowledge in the first place.

I will stop here for now: hopefully this exposes a bit better what I am trying to say.

I look forward to hearing from you,
Bob
Philosophim February 20, 2022 at 17:04 #656993
Thanks for your reply Bob! Lets see if we can get this figured out between us.

Quoting Bob Ross
Now imagine that everything you do, thoughts, feelings, light, sound, etc, are the light that streams in from a lens. You don't comprehend anything but the light. The sea of existence. But then, you do. You are able to separate that "light" into sound and sight.

I am understanding this as what is scientifically typically considered "sensations". Am I correct?


I don't want to use the term sensations, as it's not a very clear term. Some people think it means "from the senses". Some people think it means thoughts. I want you to imagine a camera taking in light. Then I want you to imagine the first step that you do with that light. Why are you able to see color differences? Why are you able to see a sheep in a field? When you hear a breeze, why is it that the color and sound don't blend together? How can you see something within everything you're experiencing, when an inanimate object does not? How we do so is for science to understand. But for knowledge, all we need to know is that we do.

You live in a sea of existence, and you can part and parcel it. That's anything you do. Anything. Any "thing" you do. Does this make sense now? You are a "thing" that can experience other "things" instead of an amorphous wave of existence within the sea that does not know where it is, what it is, what things are, simply subsumed by the sea it will never realize it is part of.

I am going to short circuit your mail analogy here to show you what I am saying. Imagine there is nothing (which is really everything, but which you can't discretely experience). Now you have an envelope in your hand. Suddenly, there is some "thing". That's all a basic discrete experience is.

Anything that comes after that is subsumed in the theory I've put forward. That's manifestation, differentiation, and conceptualization, as I defined it previously. The envelope manifests. You differentiate its parts. You conceptualize the letter inside as you read it.

You don't have to read the envelope. You don't have to have certainty. But you experience the envelope appearing in your hand. You could be certain about it, or doubt it, that's your choice. The point I'm trying to make, is that you aren't going prior, or deeper than where I've started. Whatever you're envisioning beyond the point of the envelope appearing in your hand, requires you to have discretely experienced the envelope in the first place. That is the thing you have no choice in. If you start with any "thing", then you are starting with a discrete experience, and you cannot escape it. As for "certainty", what is certainty in your mind? Does that mean I know? That I believe strongly? It seems the word "certainty" cannot exist without belief or knowledge, in which case we are entering the step of deciding whether I have a belief, or distinctive knowledge. But this does not negate the fact that you could not have certainty about any thing, without having the ability to discretely experience.

Regardless, the envelope is the start, not the reading or the feeling of certainty. To counter that discrete experience is the first fundamental that I can know, you must show something that comes prior to discrete experience that I can know as a fundamental. And Bob, you can't, because it requires that you have discrete experience as a fundamental, to debate that discrete experience is a fundamental. There is the ability to not have discrete experience, in which you are merely a lens, an object. Then there is the ability to have discrete experience. There is nothing in my mind more fundamental to know than discrete experience.

Quoting Bob Ross
My point is that, although you are right in everything you have said, this is all obtained knowledge pertaining to how you derived yourself (or how you, thereafter, derived someone else in relation to themselves).


Yes! If we understand each other correctly, knowledge is derived from our base ability to discretely experience. I've never stated anything else.

Quoting Bob Ross
This is contrary to “just chronological precedence”, which maybe we could call this simply "that which is deriving or that which is required for the consideration in the first place". The chicken derives that it came from an egg: that derivation requires it in the first place. It could very well be, even given that it makes the most logical sense (or may even be considered necessary) that the chicken came from the egg, this is all formulations of that chicken. What if this "truth", that it must come from the egg, is simply that which is a product of cognition?


If you want to use chronological precedence, that is fine. But I'm not using that term. I'm not sure why chronological precedence is important here. You need atoms to exist, but you don't have to talk about atoms to look at your watch and tell time. You just need to know the signs of time, and then see if your watch matches those signs. If the watch existed before you learned about it, I don't see that being important to whether you applicably know watches exist later on, unless I'm missing something you're trying to convey.

I've noticed you've been using the word truth. I've never claimed knowledge is truth. It's merely a rational means of applying our discrete experience in such a way that we are the least likely to be in contradiction with reality. "Truth" is defined within my theory of knowledge as being the combination of all possible contexts, and their applications to reality without contradiction. It is something plausibly unobtainable.

I want to be clear, you can applicably know things in one context, that would not be knowledge in another context. I might look at a pear and an apple, and define them both as an apple in my context without contradiction from reality. However, someone with the distinctive knowledge of both a pear and an apple could come along and state that I'm ignorant, and one is a pear, while the other is an apple. Both of us applicably know different things within our contexts.

Just look at science over the years. Its history is full of instances of people who applicably knew things that today, with our expanded distinctive knowledge and expanded tool set, we applicably know to be false. It doesn't mean that the scientists back then did not applicably know their own theories. They did within their context.

Quoting Bob Ross
When we analyze a brain, it is an interpretation of a brain via a brain. Therefore, you will only know as much as is allowed via your brain's interpretation of that brain it is analyzing.


Yes, you have it! Did you know there are certain people who cannot visualize within their mind? They can never applicably know what it is like to visualize in their mind. The limits of what we can applicably know are limited by our distinctive context. If you want, skim through part 3 again for a reminder. That which cannot be discretely experienced can never be applicably known.

Quoting Bob Ross
Do you think it must necessarily be the case that it comes from the brain, or that it must necessarily be the case in relation to itself?


No. All that I am stating that cannot logically or applicably be contradicted by reality, is that I discretely experience. Everything else is the act of logically applying that discrete experience in a way that gives me the best chance of not being contradicted by reality. If current science concludes that the brain is the physical source of our "being", then that is the applicable knowledge we have within our current context of history. But it's plausible we're wrong. Still, we'll take what we applicably know today and work with that, rather than what could plausibly be known tomorrow.

Back to your envelope argument. I found it mostly confusing and away from the point. I wanted to address certain points of the argument, but realized that as it was intertwined with a lot of premises that do not make sense, or have missed the point of what discrete experience is, that it would be best to reorient to the fundamental premise, and drop most of the envelope argument entirely.

The point I want to make is that at the part where you say "an envelope appears in my hand," you've described a discrete experience. Anything else is simply details about discrete experience, like thoughts, concepts, etc. And at that point, you've accepted my initial premise of the knowledge theory. If you have, then the rest follows as it's unfolded. Anything after the envelope is simply a refinement or debate about the details of how thoughts form, concepts interact, etc. But none of that counters the origin and logic of the theory I've put forward.

The question is, "Can you come up with something more fundamental that you can distinctively and applicably know, prior to being able to discretely experience?" If you can, then yes, you've challenged the theory. But if not, the theory's initial premise, from which everything is built, stands. Of course perhaps there is a problem in the next step of the theory. But to counter the initial premise of the theory, you have an incredibly high, and in my mind, impossible bar to clear.

Regarding implicit knowledge:
Quoting Bob Ross
So there's two aspects needing to be addressed here. One aspect, which was my initial intention for the term “implicit”, is simply the acknowledgment that we, once we say we "know" something, may induce that that thing we know now was occurring the whole time prior to us knowing it (in light of us knowing it).


This is an induction. An induction is not knowledge. An induction is a belief with a degree of cogency. Your second aspect used the envelope analogy again, which I believe accidentally wandered away from what was being discussed. Perhaps what would help is if you use the analogies I've already provided, so that we're using a similar base. The analogies I provide come from my understanding of the theory, so you can be confident I will understand what you are trying to convey if you use them.

At this point, I am still not seeing that we can have implicit knowledge, as it seems you are describing either

1. Having discrete experience, which is something that we can ascertain with knowledge, is known in the act of experiencing it.
2. Beliefs, which are inductions of varying cogency.
3. Accidental knowledge, or conclusions that we have arrived at using the process of knowledge, without knowing that we actively used the process of knowledge.

I hope this focuses the points, and that I'm accurately pointing out the main contentions so far. Please let me know if I've missed anything. I am also sorry that I did not tackle a few of your points within the envelope arguments that I think had merit. It is just that in doing so, I think it would have caused confusion because of the flawed premises within the envelope argument they were tied to. Thanks again Bob, great discussion as always!
Bob Ross February 20, 2022 at 21:44 #657073
Hello @Philosophim,

I am also sorry that I did not tackle a few of your points within the envelope arguments that I think had merit. It is just that in doing so, I think it would have presented confusion because of the flawed premises within the envelope argument they were tied with


No problem! I do think that you aren't quite following what I am trying to convey. So I am going to keep this response incredibly brief so you can fire back with your thoughts (without having to decipher a long reply).

I think that, although we both have good intentions, we are mostly talking over each other. I feel like I followed everything you stated in your last post, and mostly agree, but I must be missing something as well. When you state:

The question is, "Can you come up with something more fundamental that you can distinctively and applicably know, prior to being able to discretely experience?"


I think this is missing my point, as it is framed in a way where it is impossible for me to do so: "distinctively and applicably know" is within the discrete experience "framework", so to speak. And, as far as I am understanding you, this coincides quite nicely with your view of discrete experience being something of which I cannot possibly counter with a more fundamental.

For all intents and purposes, I am going to simplify my "conceptualization" to "thoughts". I think, as far as I understand your point of view, you are viewing it like this:

"discrete experience" -> "subject" -> *

Where '*' is just a placeholder that can be filled with nothing or something else (objects). Whereas I am viewing it like:

"discrete experience" <- "subject" -> *

When you state that it starts (at the fundamental) with "discrete experience", I am thinking, from my point of view, that that is a "thought". You are "thinking" that everything requires "differentiation", "a discrete", which is where, I would argue, it starts. Even when you state (rightfully so) that "thinking" is a process of discrete experience: that is a "thought". So even if we go with:

"discrete experience" -> "thought" -> *

I am viewing it as:

"discrete experience" <- "thought" -> *

Obviously, there are many problematic issues with substituting "thought" with "subject", but I am just trying to convey the bare bone difference between us (stripping away everything else). From my view, you cannot claim "discrete experience is the fundamental" without "thinking it", where "thought" is the fundamental. This is why I think we are deriving in two completely different senses of the term. This is the challenge: you are not starting with a "discrete experience", you are starting with a "thought". The "thought" which states that thoughts itself are "discrete experiences", etc.

I will let you fire back with what you think. I think this is the bare bone difference between us.
Bob
Philosophim February 21, 2022 at 00:23 #657107
I think this helps a little. Here is what I am saying.

A subject is a discrete experiencer. I am a discrete experiencer. They are one and the same for the most primitive of knowledge. You cannot ask the question, what am "I" without realizing there is a separation of "yourself" from "something else". You cannot discretely experience, if you do not exist. You cannot exist, if you do not discretely experience. You are not a lens, you are a discrete experiencer.

When breaking down the fundamentals of knowledge, what is it that must be known first? An I? Or discrete experience? The answer is discrete experience. While you cannot discretely experience if you are not first an "I", you cannot even comprehend what an "I" is, without discretely experiencing. To have the most primitive notion of "I", you must divide your experience into "I", and "Not I". "This" and "that".

I can doubt what "I" am without the knowledge of discrete experience. I could say I was a conscious being that transcends physical existence. Or a brain. Or "I" am someone one second, and a few seconds later, "I" am someone else. It is nebulous and unknowable.

If I realize I discretely experience, that is the one thing I can claim clearly, and without contradiction. "I" am a discrete experiencer. From that, "I" have a foundation to build knowledge from. If I am a discrete experiencer, then logically, what is the best way to discretely experience? And thus the theory goes.

Quoting Bob Ross
The question is, "Can you come up with something more fundamental that you can distinctively and applicably know, prior to being able to discretely experience?"

I think this is missing my point, as it is framed in a way where it is impossible for me to do so: "distinctively and applicably know" is within the discrete experience "framework", so to speak. And, as far as I am understanding you, this coincides quite nicely with your view of discrete experience being something of which I cannot possibly counter with a more fundamental.


If that is the case, then that means I have created a fundamental claim for the theory. Of course, my logic from that fundamental may be flawed. It has seemed to me you are questioning that fundamental, which is what I've been trying to defend. But if this fundamental is not being challenged, where does the theory fail, and what does it fail to obtain?

Quoting Bob Ross
This is why I think we are deriving in two completely different senses of the term. This is the challenge: you are not starting with a "discrete experience", you are starting with a "thought". The "thought" which states that thoughts itself are "discrete experiences", etc.


How do I know what a thought is, if I do not first discretely experience? Using a higher level concept to discover a lower level concept does not mean the higher level concept is more fundamental than the lower level concept. We only discovered atoms because of science that was not based upon understanding atoms. Does that mean that the science that discovered atoms is more fundamental than the atoms themselves? No.

If I had to guess, because you've been noting chronological use and chicken and egg scenarios, I think you are going in another direction from what I mean by fundamental. I do not mean a fundamental as a means of chronological use. I mean its smallest constituent parts. When I break down what I can know without challenge or contradiction, I find nebulous and unprovable assertions. But there is one assertion which cannot be countered. There is discrete experience. I am a discrete experiencer. Whatever the details of an "I", or the types of discrete experiences, this is a fundamental which I know.

It doesn't matter why I discretely experience. It doesn't matter that I used thoughts, language, and my brain to discover that I discretely experience. That doesn't change the fact that it is a fundamental of knowledge that I discovered. Is this what you've been trying to tell me? That if I use language to put discrete experience as a comprehensible sign, that language is more fundamental than the ability to discretely experience?

I'll take your thoughts from here Bob.
Bob Ross February 21, 2022 at 16:29 #657343
Hello @Philosophim,

Let me start off with a concession: you are 100% correct here. I apologize for the confusion; I am currently slapping myself upon the head!

I most definitely have to utilize the principle of non-contradiction (pon for short) to claim anything. To claim that there was a manifestation, or that I am currently having one, requires pon. Therefore, I would argue that it all starts with pon.

However, most notably, I don't think pon and "discrete experience" are synonymous with one another: the latter is formed from the former. I think there is actually something in between the two, so to speak: the realization of the manifestations themselves. Let me put it forth bluntly:

1. Starts with pon
2. Utilizes pon to immediately realize the "thoughts" themselves.
3. Utilizes those thoughts to realize that I "discretely experience"

Therefore, my only adjustment is the insertion of #2. Again, I apologize for the confusion, the missing piece for me was realizing I am utilizing pon to get to 2, so I was just starting with 2.
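As an aside for readers following the logic: the "pon" that both posters treat as the starting point is just the classical principle of non-contradiction, and it can be stated formally. A minimal sketch in Lean (my own formalization, not anything from the linked papers): for any proposition A, it is not the case that both A and not-A hold.

```lean
-- Principle of non-contradiction (pon): no proposition holds
-- together with its negation.
theorem pon (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1   -- from ⟨A, ¬A⟩, apply the negation to A to get False
```

This is the same form "A is A and not A" that Bob refers to elsewhere in the thread; the proof is immediate because assuming both conjuncts yields a direct contradiction.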

I would also like to respond to your elaboration on "fundamental":

Using a higher level concept to discover a lower level concept does not mean the higher level concept is more fundamental than the lower level concept.


The converse is exactly what I am saying. This is why I was using the egg and chicken analogy so much. This is also why I identified two types of chronological derivation (two types of derivations in terms of fundamentals).

We only discovered atoms because of science that was not based upon upon understanding atoms. Does that mean that the science that discovered atoms, is more fundamental than the atoms themselves?


Yes. This is exactly what I am trying to point out. There's two kinds of derivation, and I find people typically focusing on only one of the two. One "fundamental" is in the sense of the highest level thing which derives all else; the other "fundamental" is what the highest level thing concludes to be the lower levels. Likewise, it is only the "highest level" thing in relation to the latter form of "fundamental", where it concluded there are lower level things, so to speak, than itself or what it is discretely experiencing. Moreover, it is the lowest level in relation to the former kind of "fundamental": everything is contingent on it. But that doesn't mean it controls everything, or that it isn't a fair point to conclude other contingencies in terms of other objects. It is fair to conclude tables are contingent on atoms, but both are contingent on the subject insofar as they may or may not be there absent the subject. Likewise, the atom is more fundamental than the table, but it is also less fundamental than the table. The microscope used to see a germ is more fundamental than the germ itself insofar as the germ is necessarily contingent on such a tool for its discovery. It may very well be, in 1000 years from now, that a much better tool we come up with renders our previous view of germs obsolete (not saying it definitely will, but it is possible). That microscope, which you can immediately see for yourself, is a much more concrete, sure fact than anything it produces for you to see. Likewise, although this may never happen either, we may, in 2000 years, determine that our view of atoms was completely off. The "atom", conceptually, is contingent on more "fundamental", "higher level" objects we use to discover it, and those objects could very well "undiscover" it, if you will.
Furthermore, this isn't to say we don't consider the "atom", conceptually, as more "fundamental" than the table; it is just with careful consideration that they are both fundamentally contingent on one another in two different regards. Does that make sense?

I do not mean a fundamental as a means of chronological use. I mean its smallest constituent parts.


Firstly, you are 100% correct in your inference that I am using "fundamental" in a totally different way, as previously described. However, with respect to "discrete experience", I don't see how you are using "fundamental" in the sense of "smallest constituent parts". "Differentiation" is not the smallest part. Just as it was posited that the scientific tool utilized to discover the atom is not more fundamental than the atom itself, differentiation is not more fundamental than the atoms that are discovered therefrom. I am probably just misunderstanding you, but if the goal is to use the smallest constituent parts, then you would have to derive back to a quark or something. Differentiation is fundamental in my sense of the term: it is the scientific tool used to discover the item (analogously, not literally a scientific tool of course). In that case, the fundamental is pon.

But there is one assertion which cannot be countered. There is discrete experience. I am a discrete experiencer.


I would like to agree, but also emphasize pon -> thoughts -> discrete experiencer. You first must be convinced of the thoughts themselves to then conclude you are a discrete experiencer.

It doesn't matter that I used thoughts, language, and my brain to discover that I discretely experience.


I am hesitant here. There's a difference between the thoughts themselves, as immediately known via pon, and those thoughts concluding they are being produced by a brain. Same with language. I am not trying to constrict this to internal monologue. You must necessarily "know" your thoughts, via pon, before you can conclude you discretely experience. I am not referring to any inferences to where the thoughts themselves, or the use of pon, is coming from. I would say the fundamental is pon (after further contemplation and a couple slaps to the face).

As I definitely overcomplicated this into a much longer discussion than it needed to be, although I am more than happy to continue the conversation, I don't want to squander any of your time. So, I will leave it up to you if you would like to terminate our conversation now, or continue the quest. I have much more to say pertaining to the ambiguity that worries me within your epistemology. It seems as though I really can define whatever I want, because "meaningfulness" is nowhere to be concretely found in your epistemology. There's a lot one can do without violating pon. Likewise, I can quite literally define two unique sets of essential properties under the same name without contradiction: there's nothing in your epistemology stopping me from doing this. But, again, I will only continue down that road if you would still like to continue our conversation.

I look forward to hearing from you,
Bob
Philosophim February 21, 2022 at 18:18 #657391
Reply to Bob Ross

Ok, I am glad we've figured out each other's points! I have no objection to you noting the order in which we assessed the theory. Yes, thoughts were used to create the pon, to create the term discrete experience. My point is that out of all the things I could know first, and build all other knowledge off of, discrete experience fit the fundamental I needed.

From there, I can then show that the logic of discrete experience justified pon. That is because I cannot discretely experience something that is both 100% one thing, and 100% a different thing. I can then use discrete experience as a base to know thoughts. And so on.

But at this point, this may just be a difference that we understand, and have to accept in each other. There is nothing wrong with that. I have the highest respect for your thinking, and it is the different outlook of every person that sees the world in their own viewpoint that adds to our understanding. We may also be splitting hairs. I've already noted that you absolutely must be able to think to figure out that you discretely experience. I think we're just having a disagreement over "fundamental", and that's pretty insignificant at this juncture.

I do want you to address your other problems with the theory. How do you define meaningfulness? To your point, you can define anything however you want within distinctive knowledge. But when you apply that to reality, it must be able to persist without reality contradicting it. So, there is that limiting factor.

And yes, you can create the same word and apply two separate concepts to it. There is nothing in reality that prevents you from doing so. Here is an example of a famous Chinese poem that has 94 characters that all sound the same. https://en.wikipedia.org/wiki/Lion-Eating_Poet_in_the_Stone_Den#:~:text=%22Lion%2DEating%20Poet%20in%20the,Standard%20Mandarin%2C%20with%20only%20the

However, I would argue this isn't the best practice in most cases, and should be minimized. That is because the only way to tell the difference between the identical symbols, is the context that they are used in. Contextual word choice is already slippery enough within cultural context, so having formal definitions that should hold within most contexts can help with consistency of thought and application.

So these are things I've thought about, and there is a strength to them that you might not be aware of. As such, I want to hear your thoughts first. Taking the theory as I've noted it, please note your issues. Don't worry, I am enjoying myself in these conversations. That being said, if you tire of them, feel free to let me know without any guilt or worry. I would like you to enjoy them as well, and not feel forced or pressured to continue.
Bob Ross February 21, 2022 at 23:32 #657616
Hello @Philosophim,

Don't worry, I am enjoying myself in these conversations. That being said, if you tire of them, feel free to let me know without any guilt or worry. I would like you to enjoy them as well, and not feel forced or pressured to continue.


Likewise, I thoroughly enjoy our conversations! I have a lot of respect for how well thought out your positions are! I don't think enough people on this forum give you the credit you are due! I just wanted to make sure that you are just as intrigued by this conversation as me (:

Moreover, I agree: I think our different outlooks on the "fundamental" are trivial enough, to say the least. I think it is time to move on to different aspects of the epistemology.

The main objection, or more like issue, that I am internally thinking about pertains to the ambiguity, or almost incredibly limited scope, of what is covered in the epistemology as is. Again, as always, I may just be misunderstanding (you tell me!), but, although the epistemology is rock solid hitherto, it doesn't really provide a concrete structure for societal contexts (I would say--or at least that's how my internally raised dilemma goes). In light of your Chinese poem example (which is a great find by the way!), I don't think I need to go too in depth about what I mean by ambiguity with respect to defining (more like creating) terminology. Just as a quick example, in the abstract, I can legitimately determine essential properties X, Y, Z and (distinctly different) essential properties A and B to the same term. So when I refer to that term, it could be in relation to either one of those two essential property sets (so to speak), and there is no contradiction here to be found: ambiguity is not a contradiction (in the form of A is A and not A).

Although I think we both agree that the definitions that provide the most clarity should prevail, my dilemma is: "what justification do I have for that?". What in the epistemology restricts the other person from simply disagreeing? I found nothing stopping them from doing so. That is a worry for me, as it seems like, if I follow the trajectory of the epistemology in this manner, we end up with incomprehensible amounts of deadlocks (stalemates).

I actually think I have come up with a solution to this. I think that the subscription to the pon actually provides more rigidity than I originally thought. I think that we can clearly argue that ambiguity is actually wrong (or, more specifically, that best clarity is right) if the individual subscribes to the pon. They cannot hold both. The argument, loosely, is as follows:

1. Ambiguity does not represent experience in the most clarifying manner.
2. Every "thought" the subject has is motivated towards acquiring an explanation.
3. The explanation that provides the most clarity for the subject becomes the explanation they accept. ("most clarity" being what they cognitively decipher as such, I'm not saying it is with respect to other subjects)
4. Defining ambiguously contradicts providing the most clarity.
5. Therefore if a less ambiguous definition is provided (that they also consider less ambiguous), it must be accepted by the subject.

In my thinking, very premature I do admit, I think that even to provide a counter to this would be an attempt to provide a better clarifying explanation (conclusion), therefore it is self-defeating to reject this given pon. But, to dive in deeper:

#1 This is based off of pon: "ambiguity" is defined as the contrary to that which provides clarity. Therefore, to reject this, I think one would be obligated to reject pon.

#2 This is also based off of pon: I don't think this can be contradicted. Conclusions of any kind are an explanation. The sole purpose of questions is a "thought" driven towards the goal of explanation. Even to say "it just is", or anything like that that provides no real good explanation, is still an explanation--in a generic sense. A statement, blunt and without a question, is still an explanation. I don't mean "explanation" in the sense that we deem it "sufficient" in the sense of academia. Therefore, I think they would be obliged to reject pon in order to reject this.

#3 Any attempt to counter this is implicitly trying to provide a better explanation than my proposition here, so even in the case they reject this, their rejection is quite literally them accepting the explanation (counter) they deemed to provide better clarity. Therefore, this cannot be contradicted.

#4 This is honestly just a reiteration of #1. I'm not sure if it is even needed.

#5 Again, even if they reject this, they would be acting it out implicitly, therefore it cannot be contradicted.

Therefore, I think this argument conforms to a specific protocol, so to speak, which is simply use of pon. The only thing they must accept is pon to be obligated to accept that ambiguity is actually wrong. I can actually tell that person they are wrong even within their own context IFF they accept pon. That is our common ground.

According to this kind of pon argument anchoring (where they must choose to accept the pon wholeheartedly or reject it), I think we could most definitely add principles like these (as long as they conform properly) to the epistemology and, thereby, provide a stronger, more structured system for people to abide by.

Likewise, I was wondering: "couldn't the other person just reject possibility (or some other induction hierarchy) as more cogent than plausibility (or some other induction)?". I think, as is, although you argue just fine for it, they could. They could utilize the most basic discrete and applicable knowledge principles in your epistemology to reject the hierarchy without contradiction. However, I think I can provide yet another pon anchored argument that forces them to either accept or reject the pon:

1. Anything you experience requires a conclusion.
2. Therefore, in order to concede objects, the subject is required.
3. Therefore, that which is closer to immediate experience the subject can be more sure of.
4. Therefore, possibility (as defined in epistemology) is more cogent than plausibility (ditto) because it is closer to the subject's immediate experience.

This is just a raw rough draft, and it definitely could use some better terminology, but I think you get the general idea:

#1 This cannot be contradicted. It would require a conclusion.
#2 Just a specific elaboration of #1
#3 Must reject 1 in order to reject this, which cannot be contradicted.
#4 This logically follows. They would have to reject pon in #1.

I think this kind of pon anchoring could really expand the epistemology with respect to a lot of other principles the subject would be bound to (unless they reject pon). Let me know what you think.

The second idea I have been thinking of, to state it briefly, is what I call "axiomatic contracts". What I mean is that, in the case that something isn't strictly (rigidly) pon anchored, two subjects could still anchor it to the pon with respect to an agreed upon axiom. For example, although my previous argument is much stronger (I would say), we could also legitimately ban ambiguity IFF the other subject agrees to the axiom that they want to convey their meaning to me. With that axiom in mind, thereby signing an "axiomatic contract", they would be obligated to provide as much clarity as possible, otherwise they would be violating that "axiomatic contract" by means of violating the pon. In other words, they would be contradicting the agreed upon axiom, which would, in turn, violate the contract. Just some food for thought!

I look forward to hearing from you,
Bob
Bob Ross February 25, 2022 at 04:24 #659104
Hello @Philosophim,

I apologize for the double post here, but I've had more time to think and wanted to share a bit more with you (so you can mull it over in your head).

1. "Accidental Properties" should be "Unessential Properties". If I am remembering correctly, your epistemology utilizes the terms "essential" and "accidental" to refer to properties. However, although I understand the underlying meaning, I don't think "accidental" properly addresses what is trying to be conveyed. The way I am thinking about it, there's nothing "accidental" about properties that may be decided to be removable from the term. I would say those properties are "unessential", and they are predefined. If an "essential property" turns out to be something I deem unworthy of such a title, then that term is being fundamentally altered to mean something different (and not merely a refurbishment).

For example, let's say I am defining "monitor" with the essential properties of ["displays things on a screen"] (where [] denotes a set). I think I am logically constrained to the following with consideration to object O:

IF O lacks the potential to have had the essential properties necessary to be a monitor, then it is not a monitor. (i.e. in the abstract, if O lacks the necessary components, even in the sense of dysfunctional components, that make the essential property of displaying a screen, then it is not a "monitor")

IF O has the potential to have had the essential properties necessary to be a monitor, then it is a monitor--a "dysfunctional monitor". (i.e. in the abstract, I can consider that O, given just a slightly torn wire or a completely empty wire port, if it were intact, would have produced the essential property of displaying things).

IF O has the essential properties, then it is a "monitor" ("functional monitor").
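The three conditionals above can be captured in a small decision sketch (purely illustrative; the function name and boolean parameters are my own shorthand, not terms from the theory):

```python
def classify_monitor(has_essential_properties: bool, has_potential_for_them: bool) -> str:
    """Classify object O against the essential property set of "monitor".

    has_essential_properties: O actually displays things on a screen.
    has_potential_for_them: O's components, if intact, would produce
        the essential property (e.g. only a torn wire is in the way).
    """
    if has_essential_properties:
        return "functional monitor"
    if has_potential_for_them:
        return "dysfunctional monitor"
    return "not a monitor"

# A working screen, a screen with a torn wire, and a cardboard prop:
print(classify_monitor(True, True))    # functional monitor
print(classify_monitor(False, True))   # dysfunctional monitor
print(classify_monitor(False, False))  # not a monitor
```

The ordering of the checks matters: actual possession of the essential properties is tested before mere potential, mirroring the three IF statements above.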

The reason why this is of particular importance to me is that I was encountering essentially the issue of the Ship of Theseus again, but with doors. What makes a door a door? It doesn't seem like there is really, in colloquial speech, a clear line that is drawn (no real essential properties). Is it that it has a knob? No, doors need not have knobs. Is it that it has a rectangular shape? No. Does it need to open? No. Does it need to close? No. Does it need hinges? No. But then I realized, and I'm pretty sure you probably meant this when we previously discussed the ship paradox, that essential properties are the exact same, in terms of arbitrariness, as unessential properties except that they are determined to be the fundamental aspects of the term. Therefore, if an essential property turns out to not be essential, then what is actually happening is that the subject is completely disbanding from that term and creating a whole new one (it is not a refurbishment; that can only occur with unessential properties).

Therefore, I think each term must have at least one essential property, and that is the anchor, so to speak, of the term. So, for example, if I define a "door" as "that which can open", then nothing else matters (such as the shape, texture, color, material, etc). And if I decide that, actually, that essential property is no more, then so is the term "door". Now, there's two important things to note here: (1) I can most definitely still, after disbanding the term "door", define "door" again with different essential properties (it is just that it is no longer the same concept) and (2) the essential property, as previously defined, is constrained to potentiality (so even if a "door" won't open, that doesn't mean it hasn't qualified as "that which can open").

Further, quite frequently when we say "that is a door", "that is a fake door", or the like, what we really are referring to is "likeness", which I consider to be only useful for anticipation purposes (strictly hypotheticals), and are not actually assigned to the term "door". For example, given my previous essential property qualifier for "door", if I see an object that resembles all the unessential, stereotypical, properties of a "door", I may be inclined to treat it as such--or, in the case that treating it as such produces no meaningful results, I may be inclined to define it as a "fake door". But my emphasis is that that which does not contain all the essential properties is not included in that term. So I would be inclined to say "it is like a door" when there is an object that lacks any potential to open but yet resembles a door.

2. I think it is finally time to address "plausibilities". "Plausibility" typically means "Seemingly or apparently valid, likely, or acceptable; credible". I don't think this even remotely resembles what you are trying to convey in the epistemology and, although we could legitimately rebrand the term, I think it is in our best interest (or at least my best interest) to use more pertinent terminology. I hereby propose terminology more resembling "speculative potentials", which directly eliminates "credibility" and "likelihood" from the terminology (as I don't think either should be attributed to a "plausible induction"). Therefore, I think "plausibilities" are actually "speculative potentials". A "speculation" is "Reasoning based on inconclusive evidence; conjecture or supposition" and "potentiality" is referring to "that which is not contradicted in the abstract". To say something is "plausible" is not, as you are probably well aware, to claim something only based off of it having potential (it is weightier than that).

Moreover, since "inapplicable plausibilities" have no potentiality (because they can be contradicted in the abstract: namely that they are not truth-apt, which contradicts the investigation of the claim in the first place), they will be hereby moved to "irrational inductions" and, most importantly, the terminology would now reflect that concisely and clearly ("speculative potential" directly explicates that it necessarily involves potentiality).

Likewise, there needs to be some subcategories of "speculative potentials", for they are all not equal claims (potentiality is quite a low bar to pass). I hereby propose we separate it as follows:

Divide "speculative potentials" into two subgroups: "considerable speculative potentials" and "inconsiderable speculative potentials". "Considerable" being defined as that which is worthy of consideration, which would be constituted by "a speculation, that has potential, that provides some form of negative and/or positive evidence beyond its mere potentiality". "Inconsiderable" is simply "that which has not provided anything beyond its potentiality as a basis of evidence".

Now, it will probably have to be voiced in greater depth in a subsequent post, but I would like to briefly point out that I would also refrain from accepting "inconsiderable speculative potentials".

Within "considerable speculative potentials", we can split it further into two subcategories: "credible speculative potentials" and "incredible speculative potentials". "Credible" being defined as "that which, upon consideration, (1) passes a threshold as defined in an axiomatic contract, (2) abides by a well defined and coherent logical system, or/and (3) directly abides by the principle of noncontradiction". Anything that doesn't constitute as "credible" is thereby "incredible".
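If it helps, the proposed taxonomy reads as a sequence of nested tests, sketched below (illustrative only; the boolean inputs stand in for the informal criteria defined above):

```python
def classify_speculation(has_potential: bool,
                         evidence_beyond_potential: bool,
                         credible: bool) -> str:
    """Place a speculative claim in the proposed taxonomy.

    has_potential: not contradicted in the abstract.
    evidence_beyond_potential: some negative and/or positive evidence
        beyond mere potentiality ("considerable").
    credible: passes an axiomatic-contract threshold, a coherent
        logical system, or/and the pon directly.
    """
    if not has_potential:
        return "irrational induction"
    if not evidence_beyond_potential:
        return "inconsiderable speculative potential"
    if credible:
        return "credible speculative potential"
    return "incredible speculative potential"
```

So, for instance, a claim with potential and supporting evidence that still fails every credibility criterion would land in "incredible speculative potential".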

3. I am still not sure if I am right in trying to logically tie the subject down to avoid deadlocks (as discussed in the previous post), but I have thought of a starting point. Firstly, in order to be a "societal context", there must be some sort of inter-subjective or inter-objective agreement. If not, then it is not a "societal context"--and thereby is a "personal context". This cannot be contradicted as it is a deduced term. Secondly, the subject can hold a subjective claim and its inter-subjective converse without contradiction. Likewise, the subject can hold an objective claim and its inter-objective converse without contradiction.

My initial flaw, I think, in my contemplation of societal context deadlocks was my fundamental viewing of it as all "objective". However, I think we can split it into two meaningful terms: "objective" and "inter-objective". "Objectivity" is "that which the subject considers object in relation to itself", whereas "inter-objectivity" is "that which is agreed upon, by a collective of subjects, as the object in relation to themselves as a shared experience". For example, when a red-green colorblind and a non-colorblind person fundamentally disagree (thus seemingly at a deadlock), they are actually disagreeing "objectively", but not necessarily "inter-objectively". The colorblind person could very well hold that it is "objectively" "true" that they are seeing green, while also holding that it is an "inter-objective" fact that what they are seeing is red--meaning they accept that it would be a contradiction for them to claim that it is green for the majority of people, but, nevertheless, it is not a contradiction to apply it to reality for themselves. To keep it brief, I think that "inter-objectivity", just like "inter-subjectivity", is a complicated subject that isn't merely "the majority deem what is inter-objective". No, I think it pertains more to a power dynamic, which tends to end up being the majority deeming it so in more representational government systems. But that is for a later discussion. My main point here is that someone could reject someone else's claim at the "objective" or "subjective" level, but not be able to do so with respect to the inter-levels. I can apply to reality without contradiction that I value this particular loaf of bread at $100,000 (or pounds or pesos, whatever you fancy), but I cannot apply without contradiction the claim that that loaf of bread is valued inter-subjectively at $100,000 (it's probably not).

Now, none of the aforementioned completely solves anything, but I thought I would get it on your radar so you can mull it over too.

I look forward to hearing from you,
Bob
Philosophim February 26, 2022 at 14:55 #659631
Good morning Bob! My week was busy, but now I have time to reply with a cup of coffee in my hand. :)

Let me address your second post first. Your example in how to view essential and non-essential properties is 100% spot on.

I can understand your dislike of the term "plausibility". I came up with the term when I was first trying to separate that which had been applicably known versus what was not applicably known. One of the considerations was tone. The initial inclination can be to dismiss plausibilities as lower level thinking. But the reality is, it is how we are able to discover anything new. I didn't want there to be an implicit suggestion that thinking in terms of plausibilities was innately wrong; it's just that when a person has a better alternative, it might be wiser to use a possibility or probability instead. And of course, any induction has an air of uncertainty until it's applied to reality. Holding and using a possibility does not mean you are correct, even if it is more rational to do so.

The second point of using the word was to find something more fundamental. One-word descriptions are easier to think on, and if you have to add qualifiers to it later, you only have to qualify one word. But, I can agree after our conversation that the word doesn't accurately convey what was intended in a simple or fundamental way. The idea of something being plausible is purely in our minds; an abstract wish that we can seek out in reality.

So you are right. But with the above considerations, let me suggest some slight adjustments. I think the fundamental concept of 'speculation' is extremely good. A speculation conveys that it is an invention of the mind that you would seek to discover in application. I would not necessarily use the word "potential". Potential seems to me something reserved for probability and possibility, because we've applicably known them to exist at least once. If something is unknown to exist, does that mean it has potential? It seems too strong for speculation.

Speculation though seems to convey the attitude of what I was trying to define with "applicable plausibility". Sherlock Holmes speculates, and his reason is to find out the reality to the mystery. Speculation seems to confer the intelligence behind "applicable plausibilities", and that when other modes of reasoning are exhausted, we sally forth into the unknown seeking reality.

But of course, that leaves the lovely and incredibly useful word "potential" out in the cold again. I understand your draw to it, it defines many concepts conveniently. The problem is, "potential" is a word defined before I defined a split in knowledge between distinctive and applicable. While it is convenient, it has the same problem the old "knowledge" did. It is being implicitly used differently in many situations, and opens it up to confusing misuse and misinterpretation.

It is annoying to me that I can't find a good fit for the word, as it is to you. It's a perfectly darn good word, but how to fit it in without leaving ambiguity or confusion? The best I can think of right now is the use of the word to separate cogent inductions versus non-cogent inductions. As we've noted, probabilities, possibilities, and speculations all have potential, by the fact that they aren't contradicted in the mind. But is that enough? I don't think so. That is because potential is also used to convey what is applicable. For example, if it is possible that a person who wakes up every day at 8 am could potentially wake up tomorrow at 8 am, that's a distinctive potential. But if, unknown to us, they died five minutes prior to our prediction, there is no applicable potential anymore.

Of course, that doesn't seem to make things any clearer. In the latter case of applicable potential, I am addressing a reality that I have no knowledge of. Aren't all inductions prior to application in the same situation? Applicable potential seems to be a term for when there is another party with knowledge, or in reference to the past. "I thought he would potentially wake up at 8 am today, but it turns out they had died last night". Or referencing the Gettier problem: "Smith thinks Jones potentially has 5 coins in his pocket, but we the audience know that he does not" (thus this is not an applicable potential).

So does potentiality describe cogent inductions within one's context? Because in one context, a plausibility might have distinctive potential, while in another, it does not. In a way, the word potential has been subsumed into cogency. Any speculation has distinctive potential, for if it did not, it would be an irrational induction.

And that is my problem with the word potential. It seems to have been swallowed up by other terms. I can't find a unique and distinctive use of the word that serves a clear purpose anymore. Not that you should stop trying. I am merely conveying the difficulties using the word carries.

I still think "inapplicable plausibilities" is useful, but it should take a refinement from my original declaration, as you have noted. I had inapplicable plausibility defined as "that which we are unable to apply to reality at this time." For example, let us say that a man uses a stick and shadows to determine the Earth is round, and calculates the approximate circumference. The only way to applicably know is to travel the world and measure your journey. But at the time you do this in ancient Greece, it is outside of your or society's capability to test such a claim.

Labeling such a speculation as irrational seems incorrect here. Think of many inventions such as the submarine thought of long before the technology was available to make it happen. I believe irrational inductions should remain a contradiction with what is applicably known. It serves a clear and distinct purpose with less ambiguity.

But, what of inapplicable plausibilities that can never be applied? For example, a unicorn that cannot be sensed? This does seem irrational. Or perhaps, lacks potential? Have we finally stumbled upon a use for the word? A speculation with potential, versus a speculation without potential? This seems to fit in with your subcategories earlier. If so, then perhaps we can state that what is potential is distinctive knowledge which is constructed in such a way that it has a clear measure of how it can be applied to reality. Perhaps this is what you were trying to say, and I think this could "potentially" be useful.

Quoting Bob Ross
I am still not sure if I am right in trying to logically tie the subject down to avoid deadlocks (as discussed in the previous post), but I have thought of a starting point. Firstly, in order to be a "societal context", there must be some sort of inter-subjective or inter-objective agreement. If not, then it is not a "societal context"--and thereby is a "personal context". This cannot be contradicted as it is a deduced term. Secondly, the subject can hold a subjective claim and its inter-subjective converse without contradiction. Likewise, the subject can hold an objective claim and its inter-objective converse without contradiction.


100% correct.

In regards to inter-objectivity and objectivity, this is what I tried to communicate with distinctive and applicable contexts. An applicable context refers to what someone can applicably discover. A blind person will never applicably know what it is like to see, and thus in communicating with someone who can see, there is this applicable context to consider. Distinctive context is when we essentially have different applied knowledge and inductions based on what we've formed in our own heads.

One great example is our discussion of the word "potential". You have a view of the word, and I have a view of the word. We are trying to discuss a use of the word that can satisfy both of our world outlooks. The issue is not that we are unable to attempt to apply the word as discussed, but what the meaning of the word should be between us and any others who would come along.

I have tried to avoid using the word "objective" within contextual differences, because I think there is something core to the idea of "objective" being something apart from the subject, or in this case, subjects. As you have noticed, there is a dissatisfaction if a person re-appropriates a word that is too far from our common vernacular. I believe a way to avoid this is to try to find the essential properties of the word that society has, and avoid adjusting those too much. In this case, I think objective should avoid anything that deals with the subject, as I believe that counters one of the essential properties that society considers in its current use of the word.

Fantastic thoughts, and please continue at it. I will address your first post shortly.

Philosophim February 26, 2022 at 15:21 #659637
Quoting Bob Ross
Likewise, I thoroughly enjoy our conversations! I have a lot of respect for how well thought out your positions are! I don't think enough people on this forum give you the credit you are due! I just wanted to make sure that you are just as intrigued by this conversation as me (:


Thank you! And yes, I am enjoying the conversation greatly.

Quoting Bob Ross
Just as a quick example, in the abstract, I can legitimately determine essential properties X, Y, Z and (distinctly different) essential properties A and B to the same term. So when I refer to that term, it could be in relation to either one of those two essential property sets (so to speak), and there is no contradiction here to be found: ambiguity is not a contradiction (in the form of A is A and not A).


The solution to this is to use contexts. If you recall my example of the word "tree": one of my friends views a bush as a tree, while the other, who has some knowledge of botany, considers that the essential properties of the "tree" do not match what he defines as a "bush". Yet, if the friend does not want to use the context of botany, there is nothing in reality that forces them to do so besides possible social ostracization and shame.

Quoting Bob Ross
Although I think we both agree that the definitions that provide the most clarity should prevail, my dilemma is: "what justification do I have for that?". What in the epistemology restricts the other person from simply disagreeing? I found nothing stopping them from doing so. That is a worry for me, as it seems like, if I follow the trajectory of the epistemology in this manner, we end up with incomprehensible amounts of deadlocks (stalemates).


I think your proof is great, I really have no disagreements with it. But there is a core assumption that we're making: that the person decides to be rational. You can never force a person to be rational. You can persuade them, pressure them, and give them the opportunity to be, but you can never force them to be. Knowledge is a tool. Someone can always decide not to use a tool. I could tell a person why they should use a screwdriver to take the screws out instead of using pliers. But if someone wants to use pliers, even though it's more difficult and less rational, that is their choice.

Quoting Bob Ross
Likewise, I was wondering: "couldn't the other person just reject possibility (or some other induction hierarchy) as more cogent than plausibility (or some other induction)?". I think, as is, although you argue just fine for it, they could. They could utilize the most basic discrete and applicable knowledge principles in your epistemology to reject the hierarchy without contradiction.


Again, this is true. I don't think it's a problem for the epistemology that a person can choose not to use it. I think the problem with the epistemology is that it reveals that humans do not have to be rational. That is an uncomfortable notion. It not only reveals that about others, but about ourselves as well. How often have we rejected rationality in the pursuit of our own desires and biases? The idea of knowledge as some type of objective truth that forces us to follow reality is appealing. But at the end of the day, there is nothing that forces us to do so.

That being said, I wanted to point out a slight issue with your proof for the hierarchy. If you recall, I use math based on the distance from deductive certainty. Knowledge is 1, and any induction based off of that is less than one. Something like a speculation is a culmination of knowledge and possibilities: so 1 * x (probability) * y (created speculation). I don't use the term "immediateness" because it isn't a clear and provable term. One could "immediately" conclude a speculation, but that doesn't make it more cogent than a long-ago concluded probability.
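The multiplicative picture described here can be sketched as follows (a toy illustration of the idea only; the numeric weights are invented, not part of the theory):

```python
def cogency(*inductive_factors: float) -> float:
    """Distance from deductive certainty: knowledge is 1.0, and each
    inductive step stacked on it multiplies in a factor below 1."""
    result = 1.0  # deduced knowledge
    for f in inductive_factors:
        assert 0.0 < f < 1.0, "each induction is less than certain"
        result *= f
    return result

probability = cogency(0.9)        # one induction built on knowledge
speculation = cogency(0.9, 0.5)   # a speculation built on that probability

# A chained induction is always less cogent than what it is built on:
assert speculation < probability < 1.0
```

The point of the sketch is only that multiplying factors below 1 makes each further inductive step strictly less cogent, regardless of how "immediately" it was concluded.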

Quoting Bob Ross
The second idea I have been thinking of, to state it briefly, is what I call "axiomatic contracts". What I mean is that, in the case that something isn't strictly (rigidly) pon anchored, two subjects could still anchor it to the pon with respect to an agreed upon axiom. For example, although my previous argument is much stronger (I would say), we could also legitimately ban ambiguity IFF the other subject agrees to the axiom that they want to convey their meaning to me. With that axiom in mind, thereby signing an "axiomatic contract", they would be obligated to provide as much clarity as possible, otherwise they would be violating that "axiomatic contract" by means of violating the pon.


Nothing wrong with this either. The issue once again is, "it's their choice". It's so nice to think that we could find an epistemology that is irrefutably rational, and everyone would line up to use it. The reality is, people are not motivated entirely by rationality. Even with the perfect epistemology, not everyone would be capable of, or willing to, use it. But is this a problem with the epistemology I've proposed? No, I think this is just a reality of humankind, and a problem that any epistemology will run into.
Bob Ross February 27, 2022 at 18:18 #660263
Hello @Philosophim,

Fantastic points! To keep this condensed into one response, I am going to try and address your points more generally (but let me know if there's anything I didn't properly address). Just as a side note, this is entirely my fault, as I was the one who double posted (:

Firstly, I think some exposition into "potentiality" is probably necessary. In general, although I may just be misunderstanding you, I think that some of your concerns are perfectly warranted (and thus I will be trying to resolve them) and some are simply misunderstandings of what I mean by "potentiality". First off, potentiality is an abstract consideration. You seemed to be trying to apply potentiality distinctively and applicably (and finding issues with it): abstract considerations are always applications to reality. I don't think that "application to reality" is limited to empirical verifications: abstract considerations are perfectly reasonable (I think). For example:

For example, if it is possible that a person who wakes up every day at 8 am could potentially wake up tomorrow at 8 am, that's a distinctive potential. But if unknown to us, they died five minutes prior to our prediction, there is no applicable potential anymore.


I think this is a misunderstanding of potentiality. Firstly, what do you mean by distinctive potential? Anything that "isn't contradicted in the abstract" (assuming it isn't directly experienced as the contrary) is something that got applied to reality without contradiction. I might just be misremembering what "distinctive knowledge" is, but I am thinking of the differentiation within my head (my thoughts which haven't been applied yet to see if the contents hold). If that is the case, then potentiality can never be distinctive knowledge, it is the application of that distinctive knowledge in the abstract. If I have a belief that unicorns exist, I can abstractly verify whether it is "true" that I have a belief that unicorns exist. If I can't contradict the idea that I am having a belief that unicorns exist, then that is applicable knowledge (because I applied it to reality without contradiction).

Secondly, this objection you are voicing also applies to possibility. If I have experienced person X get up at 8 am before, then I can say it is "possible" for X to get up at 8 am tomorrow morning. However, unbeknownst to me, they actually died today: therefore it isn't possible for them to get up at 8 am tomorrow. I don't see this as a flaw in potentiality or possibility, because it is not about what you don't know: it is about, contextually, what you do know.

Let's take the same situation, for possibility and potentiality, but add you to the mix. Let's say that I don't know X died today, but you do. For me, it is the most cogent position for me to hold that X can "possibly" (and "potentially") wake up at 8 am tomorrow. For you, it is the exact contrary. The way I interpreted "no applicable potential anymore" is that of something objective, which isn't what I am getting at with potentiality or possibility.

However, I think you are right that potentiality seems to be consumed by other terms, but I'll get into that later on (I think we need to hash some other more fundamental things first). I've realized that, although your epistemology is great so far, it doesn't really address the bulk of what epistemologies address. This is because your epistemology, thus far, has addressed some glasses of water (possibility, probability, and irrational inductions), yet simply defined the whole ocean as "plausibility". Even with a separation of "inapplicable" and "applicable", I find that this still doesn't address a vast majority of "knowledge". So I don't think keeping a concise, one-word description of "speculations" is productive unless we dive into the subparts of that gigantic ocean.

Now, with that in mind, I want to really explicate how narrow "possibility" truly is. I think it is, as of now, not clearly defined. Let's recall that possibility is "that which has been experienced at least once before". Now, let's dive into your example you gave about the coins:

"Smith thinks Jones potentially has 5 coins in his pocket, but we the audience knows, that he does not (thus this is not an applicable potential)."


Again, as a side note, the audience would claim it has no potential and Smith would (no contradiction here). But at a deeper level, imagine Smith has never experienced 5 coins in a pocket, but he's experienced coins before. Therefore, Smith cannot claim that it is "possible" for there to be 5 coins in Jones' pocket. He can speculate based off of the possibility of coins and the abstract consideration that he can't contradict the idea that 5 coins could be in Jones' pocket. Therefore, his position is a possibility (coins) -> speculation (5 coins in pocket). What would he say? He can't say it is "possible". Normally, Smith would have, in colloquial speech, deemed this abstract speculation a "possibility", but now it seems as though he has been stripped of his words. Therefore, I introduced something back from the old word "possibility": the abstract consideration. He can claim "it is potentially the case that Jones has 5 coins in his pocket".

But this can get weirder. Imagine Smith has experienced 5 coins in his own pocket, but not 5 coins in Jones' pocket: then he hasn't experienced it before. Therefore, it is still not a possibility, it just has the potential to occur.

Now, I think we are both inclined to try and reconcile this with something along the lines of "contexts, bob, contexts". But what are "contexts"? If we allow Smith to decide what a context is, then it seems as though the epistemology is simply telling him to do whatever he wants (as long as he doesn't contradict himself). But then we could make this much, much weirder. Imagine Smith has experienced 5 coins in Jones' pocket yesterday, but he hasn't today. Well, if the context revolves around time, then Smith still can't claim it is possible. It is only potentially the case. Likewise, Garry could have a location-based contextual system, where he's experienced 5 coins in Jones' pocket in location X, but Jones is now in location Y.
Garry and Smith would agree that it is not "possible" (not to be confused with "impossibility") that Jones has 5 coins in his pocket--but for completely different reasons. Moreover, as you can imagine, without a clearly defined meaning of "context", Smith could claim it is "possible" while Garry claims it isn't. But if we take "experienced it at least once before" literally, then possibility is incredibly narrow. And to take it not literally is to create a superficial boundary with no clear meaning (as of yet).

Also, I would like to point out, it wouldn't really make sense for Smith, although it is a speculation, to just merely answer the question with "I speculate he has 5 coins in his pocket", because Smith isn't necessarily claiming that Jones does have 5 coins, he is merely assessing the potentiality. Again, at a bare minimum, he would have had to experience 5 coins in Jones' pocket before in order to claim it is possible. Most of the time we don't have that kind of oddly specific knowledge, therefore potentiality was born: it is a less strong form of possibility. It is to apply a concept to reality, in the abstract, without contradiction. Likewise, imagine Smith has experienced 4 coins in Jones' pocket, but not five. Then it also wouldn't be a possibility that Jones has 5 coins in his pocket: it would be an abstract consideration that is not contradicted.

Furthermore, I would like to revisit the 8 am dead person example: it isn't necessarily the case that it is impossible either just because they are dead. Let's say I heard from a trusted friend that they died today: I didn't experience their death. This would be an abstract consideration. Do I trust them? If I do, what logically follows? It logically follows that there's no potential for them to wake up tomorrow at 8 am. But notice that in doing so, I've necessarily revoked any "possibility" as well, but not on the basis of "impossibility".

To sum it up, I think we need to clearly and concisely define "context", "possibility", "impossibility", and "potentiality". If I can make up whatever I want for "context", I could be so literally specific that there is no such thing as a repetitive context, or I could be so ambiguous that everything is possible. Then we are relying on "meaningfulness", or some other principle not described in your epistemology, to deter them from this. If so, then why not include it clearly in the epistemology?

I had inapplicable plausibility defined as "that which we are unable to apply to reality at this time."


I think that, in this sense, I agree. But originally it encompassed two senses: that which can't be applied right now, and that which never will be. The latter is irrational. The former may be rational in the sense that it isn't an irrational induction, but it isn't necessarily the case that it should be pursued either. It would merely be a speculative potential: specifically, given no further context, an incredible speculative potential. Which leads me to my next question: when you say "unable to apply", what do you mean? I think that if nothing can be applied at all, then it isn't worth pursuing. If you can't find any evidence for that concept or idea at all, why pursue it? The great inventors of the past, though they invented "crazy", "impossible" things, had some sort of evidence backing their speculations. They didn't tell themselves: "I am trying to discover a teapot 100 billion light years away in another galaxy, of which I have no evidence to support it is there, but I am going to incessantly keep trying anyways".

For example, let us say that a man uses a stick and shadows to determine the Earth is round, and calculate the approximate circumference. The only way to applicably know, is to travel the world and measure your journey.


I disagree. The journey across the world is not the only way to verify the spherical nature of the earth. The stick and shadows is just the beginning. One can find many more forms of scientific evidence (that doesn't require a round trip): it would be, given that kind of evidence it has, a "credible speculative potential".

However, I do have my worries, like you, about even calling them "speculations": a lot of enormously backed scientific theories would be a "credible speculative potential", which seems to undermine it quite significantly. This is honestly the main issue with "plausibilities": it is really where epistemology mainly lies. It may be in our best interest to just dedicate more terminology, more explanations, towards speculations: there have to be further hierarchies within it. This is why, upon further reflection, although it is great so far, I don't think your epistemology really gets into any of the pressing dilemmas an epistemology is supposed to address. Now we must determine the thresholds of evidence that would constitute a scientific theory as significantly more reliable than, let's say, simply a man speculating with a stick and shadows (both of which could potentially be considered "credible speculative potentials"). Don't get me wrong, your epistemology does a splendid job at the fundamentals, especially in terms of inductions, but there's a lot of work that needs to be hashed out in terms of speculations.

I believe irrational inductions should remain a contradiction with what is applicably known


I disagree, if what you mean by "application" is empirical evidence. I am claiming potentiality is applicably known (always). I can applicably know, in the abstract, that a logically unobtainable idea is irrational to hold. For example, take an undetectable unicorn:

1. A truth-apt claim is a claim that is falsifiable (can be true or false).
2. An undetectable unicorn is unfalsifiable.
3. Therefore, an undetectable unicorn is not truth-apt. (from 1 and 2)
4. The pursual of a claim implies it is truth-apt.
5. Therefore, an undetectable unicorn is not pursuable. (from 3 and 4)
6. Therefore, to attempt to pursue the idea of an undetectable unicorn leads to a contradiction: the pursual implies its truth-aptness, yet the claim itself is not truth-apt.
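The six steps above can be compressed into a short formal sketch: premises 1, 2, and 4 already entail step 5, and the contradiction in step 6 follows for anyone who asserts that the unicorn is pursuable. This is written in Lean 4, with predicate names (`TruthApt`, `Falsifiable`, `Pursuable`, `unicorn`) that are my own illustrative labels rather than terms from the post:

```lean
-- Hedged formal sketch of the argument above (Lean 4).
variable (Claim : Type)
variable (TruthApt Falsifiable Pursuable : Claim → Prop)
variable (unicorn : Claim)

example
    (h1 : ∀ c, TruthApt c ↔ Falsifiable c)  -- 1. truth-apt iff falsifiable
    (h2 : ¬ Falsifiable unicorn)            -- 2. the unicorn is unfalsifiable
    (h4 : ∀ c, Pursuable c → TruthApt c) :  -- 4. pursual implies truth-aptness
    ¬ Pursuable unicorn :=                  -- 5. so the unicorn is not pursuable
  fun hp => h2 ((h1 unicorn).mp (h4 unicorn hp))
```

Note the sketch only shows the derivation is valid given the premises; premise 1's identification of truth-aptness with falsifiability is itself the contestable part.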

I have tried to avoid using the word "objective" within contextual differences, because I think there is something core to the idea of "objective" being something apart from the subject, or in this case, subjects. As you have noticed, there is a dissatisfaction if a person re-appropriates a word that is too far from our common vernacular. I believe a way to avoid this is to try to find the essential properties of the word that society has, and avoid adjusting those too much. In this case, I think objective should avoid anything that deals with the subject, as I believe that counters one of the essential properties that society considers in its current use of the word.


Although you are right that I am refurbishing the term "objective", I think it is a step in the right direction. I think this is actually what people implicitly are doing when they say something is "objective": it is something they've deemed to be out of their control (an object). Some people will go a step further and claim there's actually an absolute something out there, of which is separate from all subjects: this is a speculation that lacks potential. For a color blind person, I think they will be more than happy to accept that what is objective for them, isn't objective for other people. So, although I agree and you are right, I think society needs to stop making such bold, unnecessary claims that there's some sort of absolute instantiation of objects. It is something that is unfalsifiable.

That the person decides to be rational. You can never force a person to be rational. You can persuade them, pressure them, and give them the opportunity to be, but you can never force them to be. Knowledge is a tool. Someone can always decide not to use a tool


This is true. But I want to be careful with the term "rationality": I find too many people using it in an ambiguous way to justify their reasoning (without actually justifying it). For me, "rationality" is an inter-subjectively defined concept. Therefore, we are not all rational beings (like Kant thought), but we are all reasoning beings. My goal, in terms of epistemology, is to attempt to make the arguments based off of reasoning, so as to make it virtually impossible for someone to deny it (if they have the capacity to understand the arguments). I agree that people don't have to be rational, but they are "reasonable" (just meaning "reasoning").

I look forward to hearing from you,
Bob
Philosophim March 03, 2022 at 23:57 #662615
Good discussion Bob, let's see if we can come to common ground here.

Quoting Bob Ross
First off, potentiality is an abstract consideration. You seemed to be trying to apply potentiality distinctively and applicably (and finding issues with it): abstract considerations are always applications to reality. I don't think that "application to reality" is limited to empirical verifications: abstract considerations are perfectly reasonable (I think)


I think the notion of something abstract is that it is a concept of the mind. Math is abstract thinking, and we discussed earlier how "1" represents "an identity". We really can't apply an abstract to reality without greater specifics. I need to apply 1 brick, or 1 stone. The idea of applying 1 is simply discretely experiencing a one.

Quoting Bob Ross
Anything that "isn't contradicted in the abstract" (assuming it isn't directly experienced as the contrary) is something that got applied to reality without contradiction. I might just be misremembering what "distinctive knowledge" is, but I am thinking of the differentiation within my head (my thoughts which haven't been applied yet to see if the contents hold). If that is the case, then potentiality can never be distinctive knowledge, it is the application of that distinctive knowledge in the abstract.


I am not sure what you mean by applying distinctive knowledge in the abstract. All this seems to be doing is sorting out the different ideas within my head to be consistent with what I know. Math again is the perfect example. I know that 1 + 1 makes 2. Could I add another 1 to that 2 and get 3? Yes. But when it's time to apply that to reality, what specifically is the 1, the 1, and the 2?

Quoting Bob Ross
I've realized that, although your epistemology is great so far, it doesn't really address the bulk of what epistemologies address. This is because your epistemology, thus far, has addressed some glasses of water (possibility, probability, and irrational inductions), but yet simply defined the whole ocean as "plausibility". Even with a separation of "inapplicable" and "applicable", I find that this still doesn't address a vast majority of "knowledge".


Plausibilities are not deductions though. They are inductions. And inductions are not knowledge. Now can we further study inductions now that we have a basis of knowledge to work with, and possibly refine and come up with new outlooks? Sure! You have to realize that without a solid foundation of what knowledge is, the study and breakdown of inductions has been largely a failure. I wouldn't say that not yet going into a deep dive of a particular induction is a weakness of the epistemology, it just hasn't gotten there yet.

Quoting Bob Ross
Now, let's dive into your example you gave about the coins:

"Smith thinks Jones potentially has 5 coins in his pocket, but we the audience knows, that he does not (thus this is not an applicable potential)."


Quoting Bob Ross
But at a deeper level, imagine Smith has never experienced 5 coins in a pocket, but he's experienced coins before. Therefore, Smith cannot claim that it is "possible" for there to be 5 coins in Jones' pocket.


Correct. And I see nothing wrong with that. Once he slides the coins into a pocket, then he'll know it's possible for 5 coins to fit in a pocket of that size.

Quoting Bob Ross
He can claim "it is potentially the case that Jones' has 5 coins in his pocket".


Again, I'm not seeing how we need the word potential when stating, "Smith speculates that Jones has 5 coins in his pocket."

Quoting Bob Ross
But this can get weirder. Imagine Smith has experienced 5 coins in his own pocket, but not 5 coins in Jones' pocket: then he hasn't experienced it before. Therefore, it is still not a possibility, it just has the potential to occur.


We have to clarify the claim a bit. Does Smith know that Jones' pocket is the correct size to fit five coins? Further, Smith knows that if Jones' pocket is that big, it is possible for 5 coins to fit into it. But as to whether there are five coins in there at this time? Smith has never seen Jones put the five coins in his pocket. It's plausible, not possible.

So Smith can know that it's possible five coins can fit into a pocket of X size.
What is it Smith is saying is possible vs his speculation?
Is he saying he knows Jones' pocket is big enough to where it is possible to fit 5 coins? Is he speculating that there are 5 coins in Jones' pocket right now, even though there is no evidence? Is he trying to claim it is possible that Jones slipped five coins into his pocket earlier when Smith wasn't looking?

Again, the difference between "possible" and speculation/plausibility comes down to the specific claims being stated. I see nothing wrong with noting very clear states of Smith's limited knowledge and inductions.

Quoting Bob Ross
If we allow Smith to decide what a context is, then it seems as though the epistemology is simply telling him to do whatever he wants (as long as he doesn't contradict himself).


The epistemology is not telling Smith to do what he wants. The epistemology recognizes the reality that Smith can do whatever he wants. Of course if Smith does whatever he wants, he'll likely end up doing the wrong thing, and we can give a host of reasons to Smith to use certain contexts over others.

Quoting Bob Ross
Imagine Smith has experienced 5 coins in Jones' pocket yesterday, but he hasn't today. Well, if the context revolves around time, then Smith still can't claim it is possible.


Correct. What you're running into is what happens if you consider every context that a person could be in. The problem isn't the reality that anyone can choose any context they want. The problem is that certain contexts aren't very helpful. Thus I think the problem is demonstrating how certain contexts aren't very useful.

Quoting Bob Ross
Also, I would like to point out, it wouldn't really make sense for Smith, although it is a speculation, to just merely answer the question with "I speculate he has 5 coins in his pocket", because Smith isn't necessarily claiming that Jones does have 5 coins, he is merely assessing the potentiality. Again, at a bare minimum, he would have to had experienced 5 coins in Jones' pocket before in order to claim it is possible.


If Smith isn't claiming that Jones has 5 coins in his pocket, then he's speculating Jones could, or could not have 5 coins in his pocket. And if Smith had experienced that Jones had 5 coins in his pocket at least once, depending on the context, Smith could say it was possible that Jones had 5 coins, or did not have 5 coins in his pocket.

Quoting Bob Ross
Most of the time we don't have that kind of oddly specific knowledge, therefore potentiality was born: it is a less strong form of possibility.


Once again, this is describing speculation/plausibility. I'm still not seeing "potentiality" used any differently.

Quoting Bob Ross
To sum it up, I think we need to clearly and concisely define "context", "possibility", "impossibility", and "potentiality". If I can make up whatever I want for "context", I could be so literally specific that there is no such thing as a repetitive context, or I could be so ambiguous that everything is possible. Then we are relying on "meaningfulness", or some other principle not described in your epistemology, to deter them from this. If so, then why not include it clearly in the epistemology?


No disagreement in formulating what contexts would be useful, and not be useful, to individuals and societies. The purpose of the original paper was simply to establish how knowledge worked. Now that we have this, we can definitely refine it. Since you have your own ideas on proposals for contexts that work, let's start with that.

Quoting Bob Ross
Which leads me to my next question: when you say "unable to apply", what do you mean?


When you think of something in your head that you distinctively know is not able to be applied. For example, if I invent a unicorn that is not a material being. The definition has been formulated in such a manner that it can never be applied, because we can never interact with it.

Quoting Bob Ross
For example, let us say that a man uses a stick and shadows to determine the Earth is round, and calculate the approximate circumference. The only way to applicably know, is to travel the world and measure your journey.

I disagree.


In your opinion you do, but can you disagree in application? Based purely on this experiment, it's plausible that the Earth is round, and it's plausible that the distance calculated is the size of the Earth. The actual reality of the diameter of the Earth must be measured to applicably know it. You have to applicably show how the experiment shows the Earth is round and that exact size. The experiment was close, but it was not the actual size of the Earth once it was measured.

I think one of the issues you might have with speculations is that they are less cogent than the other inductions. That does not make them useless, or irrational. Recall that it is a hierarchy of induction. In the case of measuring the Earth with the experiment, at that time, that was all they had to work with. While it was a speculation, it was the most reasonable induction that a person could work with at the time.

Perhaps one issue you have with the epistemology, is it puts humans into situations where they are powerless to know. That is an uncomfortable reality, but one that I cannot mitigate if I am to be consistent. We like to imagine we have a reasonable assessment of reality, and that we are reasonable people. We really aren't unless we train to be. Even then, there are limits.

Quoting Bob Ross
However, I do have my worries, like you, about even calling them "speculations": a lot of enormously backed scientific theories would be a "credible speculative potential", which seems to undermine it quite significantly.


It only undermines them if there are other alternatives in the hierarchy. If for example a scientific experiment speculates something that is not possible, it is more rational to continue to hold what is possible. That doesn't mean you can't explore the speculation to see if it does revoke what is currently known to be possible. It just means until you've seen the speculation through to its end, holding to the inductions of what is possible is more rational.

Quoting Bob Ross
I believe irrational inductions should remain a contradiction with what is applicably known

I disagree, if what you mean by "application" is empirical evidence. I am claiming potentiality is applicably known (always). I can applicably know, in the abstract, that a logically unobtainable idea is irrational to hold. For example, take an undetectable unicorn:


No, you can distinctively know that a logically unobtainable idea is irrational to hold. A logic puzzle must be reasoned before it can be distinctively known. Only applying the rules in a logical manner gets you a result. While we could invent a result in our heads to be anything, it fails when the rules of the logic puzzle are applied. Perhaps we're missing an identity, and this is where abstraction comes in. You'll recall that context was defined both distinctively and applicably. Distinctive contexts could be called abstractions. To have distinctive knowledge, one must hold ideas that are non-contradictory within a particular context. Logic is a context. So within the abstraction (distinctive context) of logic, we can conclude a correct and incorrect solution to a puzzle.

Quoting Bob Ross
For a color blind person, I think they will be more than happy to accept that what is objective for them, isn't objective for other people.


On the notion of objective: what a color blind person would hold to be objective would also be consistent with another color blind person. The subjective difference would be seeing the world color blind, versus with color. This is applicable context. What one can applicably know is based off of what one is applicably capable of. Applicable context can be subjective, or shared by a group of people. While I agree we cannot define "objective" as "true", I think it needs to remain in the realm of "remaining uncontradicted by most contexts".

Quoting Bob Ross
For me, "rationality" is a inter-subjectively defined concept. Therefore, we are not all rational beings (like Kant thought), but we are all reasoning beings. My goal, in terms of epistemology, is to attempt to make the arguments based off of reasoning, so as to make it virtually impossible for someone to deny it (if they have the capacity to understand the arguments). I agree that people don't have to be rational, but they are "reasonable" (just meaning "reasoning").


Let me clarify: I agree, but people have the capacity to reason at varying levels. Some people aren't very good at reasoning. Some people can reason, but follow emotions or whims more. The epistemology I've presented here is formed with reason. It can convince a person who uses reason. But it cannot convince a person who does not want to reason, or is swayed by emotion. All I am stating is you can't force a person to use reason, or be persuaded by reason if they don't want to be. I think on this you and I might agree.

Another good round of conversation! I will try to respond again this Saturday morning, but I will be gone for the rest of the weekend after.
Bob Ross March 06, 2022 at 04:34 #663430
Hello @Philosophim,

I think the notion of something abstract is it is a concept of the mind. Math is abstract thinking, and we discussed earlier how "1" represents "an identity". We really can't apply an abstract to reality without greater specifics. I need to apply 1 brick, or 1 stone. The idea of applying 1 is simply discretely experiencing a one.

I am not sure what you mean by applying distinctive knowledge in the abstract. All this seems to be doing is sorting out the different ideas within my head to be consistent with what I know. Math again is the perfect example. I know that 1 + 1 make 2. Could I add another 1 to that 2 and get 3? Yes. But when its time to apply that to reality, what specifically is the 1, the 1, and the 2?


So I think I have identified our fundamental difference: you seem to be only allowing what is empirically known to be what can be "known", whereas I am allowing for knowledge that can, along with what is empirical, arise from the mind. I think that the flaw in taking your approach here, assuming I have accurately depicted your position, is that certain aspects of knowledge precede empirical observation.

For example, try applying without contradiction (in the sense that you seem to be using it--empirically) the principle of noncontradiction. I don't think you can: it is apodictically true by means of reason alone. Likewise, try to empirically prove the principle of sufficient reason (which can be posited equally as "causation") by applying it to reality without contradiction (in the sense you are using it): I don't think you can. The principle of sufficient reason and causality are both presupposed in any empirical observation. Furthermore, try proving space empirically: I don't think you can. Space, in one unison, is proven apodictically (by means of the principle of noncontradiction) with reason alone. Moreover, try to prove time without appealing to causation, which in turn cannot be empirically proven, without appealing to reason.

Maybe we are just using the term "reality" differently? I mean the totality of existence: not just the "external" world. Again, just as another example, try creating a logical system, which is utilized by everyone (whether they realize it or not) every day, without appealing strictly to reason.

To take your example of mathematics, there are two completely separate propositions that I think you are combining into one in your example. The abstract consideration of mathematics, regardless of whether it is instantiated in the "external" world, is still known (which I think you admit just fine): this is an abstract consideration (meaning within the mind). I find your example a bit confusing as I think you are agreeing with me, yet arguing against me. If you say that "I know that 1 + 1 make 2", that seems like you are agreeing you can know things without "applying them to reality" (as you use that term), yet you then attempt to use a (completely valid, I must say) argument for why abstract numbers don't necessarily map to real quantities in the external world to prove we must apply things without contradiction to reality to "know" them. If we have a mathematical formula, we can "know" it will work in relation to the "external" world regardless of whether it actually is instantiated in it. As we have previously discussed, mathematical inductions aren't really inductions, they are true with an if condition: but that if condition doesn't mean I can't claim to know that N + M abides by certain rules regardless. This is done with reason, which is what I mean by abstract consideration.

That leads me to what I think is our second fundamental disagreement: whether inductions are knowledge or not. Initially, I was inclined to adamantly claim it is, but upon further contemplation I actually really enjoy the idea of degrading inductions to beliefs with different credence levels (and not knowledge). However, I think there may be dangers in this kind of approach: without some means of distinguishing something "known", in terms of inductions, from what is merely a belief, I am not sure how practical this will be for the layman--I can envision everyone shouting "everything is just a belief!". Likewise, it isn't just about what is more cogent, it is about what we claim to have passed a threshold to be considered "true". Although I'm not particularly fond of that, it is an obvious distinction between a rigorously tested scientific theory and any other speculation.


Plausibilities are not deductions though. They are inductions. And inductions are not knowledge. Can we further study inductions, now that we have a basis of knowledge to work with, and possibly refine and come up with new outlooks? Sure! You have to realize that without a solid foundation of what knowledge is, the study and breakdown of inductions has been largely a failure. I wouldn't say that not yet going into a deep dive of a particular induction is a weakness of the epistemology; it just hasn't gotten there yet.


With the aforementioned in mind, when I stated your epistemology hasn't quite addressed the pressing matters, I said so without the full understanding that you are claiming inductions are not knowledge: therefore, your epistemology does cover what "knowledge" is holistically. However, I don't think this fully addresses the issue, as it can be posited just the same now in terms of "belief". I find myself in the same dilemma where the theory of evolution and there being a teapot floating around Jupiter are both speculations. What bothers me is not that they both are speculations but, rather, that there is no distinction made between them: this is what I mean by the epistemology not quite addressing the most pressing matters (most people will agree about that which they immediately see--even in the case that they don't even know what a deduction is--but the real disputes arise around inductions). This isn't meant as a devastating blow to your epistemology; it is just an observation that much needs to be addressed before I can confidently state that it is a functional theory (no offense meant). I think we agree on this, in terms of the underlying meaning we are both trying to convey.

Correct. And I see nothing wrong with that. Once he slides the coins into a pocket, then he'll know it's possible for 5 coins to fit in a pocket of that size.


Although I understand what you are saying, and kind of like it, I think this is much more problematic than you are realizing. Firstly, he most likely won't know the size of Jones' pockets. Even if he did take the time to measure them, then even with the consideration that he has witnessed 5 coins in Jones' pocket of size L * W * D, he cannot claim it is possible for those 5 coins to fit in a pocket of (> L) * (> W) * (> D). He could abstractly reason that if he experienced 5 coins in a pocket of some size, then, considering mathematics in the abstract, it is possible for 5 coins to fit in a pocket that is greater than that size (assuming the pocket is empty): but he didn't experience it for the greater sized pocket. To me, it seems wrong to think that I cannot reason conditionally that, regardless of whether the pocket of greater size is instantiated in the external world, it is possible to fit 5 coins into that greater sized pocket. Likewise, if I have experienced 1 coin, know the dimensions of that coin, and know the dimensions of Jones' pocket, I can claim it is possible to fit 5 coins in Jones' pocket with the consideration of math in the abstract. The only way I can fathom countering this is to deny the universality of mathematics, which seems obviously wrong to me.

Again, I'm not seeing how we need the word potential when stating, "Smith speculates that Jones has 5 coins in his pocket."


Firstly, claiming "Smith speculates that Jones has 5 coins in his pocket" is completely different from claiming "Smith thinks it is possible for 5 coins to be in Jones' pocket". One is claiming there actually are 5 coins, whereas the other is claiming merely that 5 coins could be in his pocket. These are not the same claims. But notice that, within your terminology, Smith cannot claim it is "possible", "probable", or "irrational". Therefore, by process of elimination he is forced to use "speculation"; however, as I just explained, this does not represent what he is trying to claim: he is not necessarily claiming Jones actually has 5 coins in his pocket. Likewise, stating it as "Smith speculates that there could be 5 coins in Jones' pocket" is just to claim "possibility" in wordier terminology. Speculations are not just claims about "what could", as "could" is purely abstract consideration: speculations pertain to positive or negative claims with respect to what actually is (not what could be). That is why potentiality is a prerequisite to speculation: you must not be able to contradict your claim about what is in the abstract, as that would negate it, but, thereafter, you are necessarily making a claim about "reality".

We have to clarify the claim a bit. Does Smith know that Jones' pocket is the correct size to fit five coins?


Again, empirically speaking, he cannot claim "possibility" based off of a pure abstract consideration of sizes unless that pocket is the exact same size as that which has been experienced before.

Is he saying he knows Jones' pocket is big enough to where it is possible to fit 5 coins?


Again, this is only considered possible if pocket size X = Jones' pocket size, not if pocket size X > Jones' pocket size. But clearly (I think) we can still claim it is possible (just not under your terminology, therefore it has the potential).


The epistemology is not telling Smith to do what he wants. The epistemology recognizes the reality that Smith can do whatever he wants.


He can only do whatever he wants in so far as he doesn't contradict himself. If I can provide an argument that leads Smith to realize he is holding a contradiction, then he will not be able to do it unless he uncontradicts it with some other reasoning. Therefore, if we can come up with a logical definition of "contexts", then I think we ought to. This is really the root of the problem with possibility and contexts: they are not clearly defined (as in, the subject gets to do whatever they want).

We can somewhat resolve this if we consider "possibility", in the sense of "experiencing it once before", as "a deductively defined concept, with consideration to solely its essential properties, that has been experienced at least once before". That way, it is logically pinned to the essential properties of that concept. I may have the choice of deductively deciding concepts (terms), but I will not have as much free reign to choose what I've experienced before. To counter this would require the subject to come up with an alternative method that identifies equivalent objects in time (which cannot be logically done unless they consider the essential properties).

Although I am not entirely certain about contexts yet, I think I have distinguished two types: mereological context and temporal context. The former is what the subject typically deciphers as contextual structures of objects, whereas the latter is the summation of time up to the present. Therefore, in terms of temporal contexts, I can claim that I am in a particular context now, which is the summation of my knowledge up to the present moment, which influences my judgements. Therefrom, I can also posit the charity of considering temporal contexts in relation to people (including myself). For example, this is my justification for claiming I may contradict what was considered "true" today with new knowledge that is acquired tomorrow (and, likewise, for people who came historically before me).

In terms of mereological contexts, there is an aspect of contexts that has no relation to temporal frameworks: the structures of objects. I can equally claim that what is known now in terms of an object in relation to what is immediately seen does not in any way contradict that which is supposed in terms of an underlying structure of that thing now (i.e. it can be a table and be much less distinctly a table at the atomic level).

In summary, I can claim that contradictions do not arise in terms of time as well as structural levels. These are the only two aspects of contexts and, therefore, as of now, this is what I consider "context" to be. It is important to emphasize that I am not just merely trying to advocate for my own interpretation of "context": I am trying to derive that which cannot be contradicted in terms of "context"--that which all subjects would be obliged to (in terms of underlying meaning; of course they could semantically refurbish it).

The problem isn't the reality that anyone can choose any context they want.


I think they can do whatever they want as long as they are not aware of a contradiction. Therefore, if I propose "context" as relating to temporal and mereological contexts, then they either are obliged to it or must be able to contradict my notion. My goal is to make it incredibly hard, assuming they grasp the argument, to deny it (if not impossible). Obviously they could simply not grasp it properly, but that doesn't negate the strength of the argument itself.

The problem is that certain contexts aren't very helpful. Thus I think the problem is demonstrating how certain contexts aren't very useful.


I agree: but what in the epistemology explicates "usefulness"?

If Smith isn't claiming that Jones has 5 coins in his pocket, then he's speculating Jones could, or could not have 5 coins in his pocket.


To say "speculate could" is to say it is "possible" in the colloquial sense of the term. Therefore, if we are using it that way, you have only semantically eradicated the ambiguity from "possibility". Otherwise, speculation cannot refer to "could", but what is.

The purpose of the original paper was simply to establish how knowledge worked.


Again, since you are defining "knowledge" strictly in the deductive sense (which I partly think is correct), then technically you have achieved your goal here. But, for the reader, I don't think it is quite accurate to say that the epistemology holistically covers all it should: we've merely semantically shifted the concern from "speculative knowledge" to "speculative beliefs".

When you think of something in your head that you distinctively know is not able to be applied. For example, if I invent a unicorn that is not a material being. The definition has been formulated in such a manner that it can never be applied, because we can never interact with it.


But you can apply the fact that you distinctively know that it cannot be applied without ever empirically applying it (nor could you). So you aren't wrong here, but that's not holistically what I mean by "apply to reality".

In your opinion you do, but can you disagree in application? Based purely on this experiment, it's plausible that the Earth is round, and it's plausible that the distance calculated is the size of the Earth. The actual reality of the diameter of the Earth must be measured to applicably know it. You have to applicably show how the experiment shows the Earth is round and that exact size. The experiment was close, but it was not the actual size of the Earth once it was measured.


I think you are conflating two completely separate claims: the spherical nature of the earth and the size of the earth. The stick and shadow experiment does not prove the size of the earth; it proves the spherical shape of the earth. You do not need to travel the whole planet to know the earth is a spherical shape: the fact that sticks of the same length can cast different shadows contradicts the notion that the earth is flat. It cannot be the case that the earth is flat given that.

It only undermines them if there are other alternatives in the hierarchy. If for example a scientific experiment speculates something that is not possible, it is more rational to continue to hold what is possible. That doesn't mean you can't explore the speculation to see if it does revoke what is currently known to be possible. It just means until you've seen the speculation through to its end, holding to the inductions of what is possible is more rational.


I sort of agree, but am hesitant to say the least. Scientific theories are not simply that which is the most cogent; they are that which has been rigorously tested and thereby passed a certain threshold to be considered "true". I think there is a difference (a vital one).

No, you can distinctively know that a logically unobtainable idea is irrational to hold. A logic puzzle must be reasoned before it can be distinctively known. Only applying the rules in a logical manner gets you a result.


I disagree. You do not need to empirically apply rules in a logical manner to get a result. I obtain knowledge that never leaves my head: the principle of noncontradiction, the principle of sufficient reason, considerations of mathematics, space, time, causality, logical systems (such as classical logic), etc. What I think you are referring to is claims about what actually is vs what actually can be: both are obtained knowledge. Likewise, not all "is" claims are proved empirically. Again, try to prove space without presupposing it in an empirical application.

While we could invent a result in our heads to be anything


This is not true; we are subjected to certain rules which are apodictically true for us. However, I do see your point that we don't "know" what is by what can be. Also, some things aren't just determined to be abstractly something that "can be"; we also determine things as necessary. I abstractly conclude the concept of space itself from its apodictic nature: this is not something that can be empirically tested--"tests" presuppose such.

it fails when the rules of the logic puzzle are applied


I agree in the sense that what is applied to the external world may end up exposing contradictions that we hadn't thought of, but this doesn't negate the fact that there is such a thing as non-empirically verified knowledge (abstractly determined knowledge).


Can I clarify that I agree, but people have the capacity to reason with varying levels?


I agree, but when you say:

Some people aren't very good at reasoning.


I don't think we are using the term in the same sense. I don't mean what is rational, which is what we define inter-subjectively as a coherent form of reasoning. I am referring to that which necessarily occurs in all subjects, lest they not be a subject in any sense related to me. To put it in a sentence (admittedly from Kant, although I don't holistically agree with him at all): I can believe whatever I want as long as I don't contradict myself. This is the grounding I am trying to subject epistemology to (to the best of my ability). You are absolutely right that people aren't very good at rationalizing, but when I refer to reason: we all have it.

But it cannot convince a person who does not want to reason, or is swayed by emotion.


The ability to act on emotion first must be decided by reason. Not to say it is rational, but it is always necessarily rooted in a reasoning. I agree with you, in terms of underlying meaning, but I am trying to emphasize that, once it is realized we are all reasoning beings, there is at least something to work with: something to ground in. That's all I am trying to say. But I think we are in agreement.

Also, no worries! Enjoy your weekend!

I look forward to hearing from you,
Bob

Philosophim March 08, 2022 at 01:52 #664206
Quoting Bob Ross
So I think I have identified our fundamental difference: you seem to be only allowing what is empirically known to be what can be "known", whereas I am allowing for knowledge that can, along with what is empirical, arise from the mind.


No, not at all! There are two types of knowledge: applicable knowledge and distinctive knowledge. What you have been trying to do is state that distinctive knowledge can be applicable knowledge without the act of application. This is understandable, as "knowledge" in general use does not have this distinction. But here, it does. And in the study of epistemology, I have found it to be absolutely necessary.

Quoting Bob Ross
For example, try applying without contradiction (in the sense that you seem to be using it--empirically) the principle of noncontradiction. I don't think you can: it is apodictically true by means of reason alone.


As an abstract, you can distinctively know the principle of non-contradiction. To apply it, you must create a specific example. For example, if I stated that this color red is both the color red and blue at the same time, I can test it. I look at the color, find it is red, and that it is not blue. Therefore that color right there cannot be both red and blue at the same time.

Distinctively, I can imagine the color red, then the color blue, and determine that the color I am envisioning in my head cannot both be the color red I am envisioning, and the color blue I am envisioning. This is known to me, as I am contradicted by my inability to do it.

But, what if I smell a color? For example, I smell a flower whenever I envision purple in my head. I distinctively know this. However, if I point out a purple object and I don't smell flowers, then I cannot say I applicably know that the color purple in reality smells like flowers. Does this make sense? I can distinctively know that when I envision a color, I also imagine a smell. But that doesn't mean that happens if I apply that to reality.

Quoting Bob Ross
Furthermore, try proving space empirically: I don't think you can. Space, in one unison, is proven apodictically (by means of the principle of noncontradiction) with reason alone.


No, space in application is not proven by distinctive knowledge alone. I can imagine a whole set of rules and regulations about something called space in my head that, within this abstract context, are perfectly rational and valid. But when I take my theory and apply it to a square inch cube of reality, I find a contradiction. I can distinctively have a theory in my head that I know, but one that I cannot apply to reality.

The notions of space that we use in application today, such as the idea of an "inch", have all been applied to reality without contradiction. There are many distinctively known ideas of space that have not been applied. String theory, field theory, and multiverse theory are all theories of space you can distinctively know in the abstract. But they cannot be currently known in application.

Recall that I can distinctively know 1 and 1 are two. But what is that in application? 1 what? 1 potato and 1 potato can be applicably known as two potatoes. That is the key I think you are missing.

Quoting Bob Ross
If we have a mathematical formula, we can "know" it will work in relation to the "external" world regardless of whether it actually is instantiated in it.


What I am saying is you can distinctively know that if you have an identity of 1, and an identity of 1, it will make an identity of two. But if you've never added two potatoes before, you don't applicably know if you can. While this may seem silly, let's take it to something less silly now. I have two hydrogen atoms and 1 oxygen atom together. What do I mean by this in application? Are they in orbit to make a molecule of water? Are the electrons orbiting slowly to be ice? Are they simply in a certain proximity? Is it just hydrogen and oxygen in the air together? We can imagine all of these abstractly and know, in our context of logic and the rules of chemistry, the answers. But when we test actual hydrogen and oxygen, our abstract rules must be applied to applicably know the answers for those specific atoms in reality.

Quoting Bob Ross
I was inclined to adamantly claim it(inductions are knowledge) is, but upon further contemplation I actually really enjoy the idea of degrading inductions to beliefs with different credence levels (and not knowledge).


Understandable! Yes, inductions are essentially beliefs of different credence levels.

Quoting Bob Ross
However, I think there may be dangers in this kind of approach, without some means of determining something "known"


And that is why there must be a declaration of what can be known first. I establish distinctive and applicable knowledge, and only after those are concluded, can we use the rules learned to establish the cogency of inductions. Without distinctive and applicable knowledge first, the hierarchy of inductions has no legs to stand on.

Quoting Bob Ross
I am not sure how practical this will be for the laymen--I can envision everyone shouting "everything is just a belief!".


The layman already misuses the idea of knowledge, and there is no rational or objective measure to counter them. But I can. I can teach a layperson. I can have a consistent and logical foundation that can be shown to be useful. People's decision to misuse or reject something simply because they can is not an argument against the functionality and usefulness of the tool. A person can use a hammer for a screw, and that's their choice, not an argument for the ineffectiveness of a hammer as a tool for a nail!

Quoting Bob Ross
Likewise, it isn't just about what is more cogent, it is about what we claim to have passed a threshold to be considered "true".


I want to emphasize again, the epistemology I am proposing is not saying knowledge is truth. That is very important. A common mistake people make in approaching epistemology (I have done the same) is conflating truth with knowledge. I have defined earlier what "truth" would be in this epistemology, and it is outside of being able to be applicably known. I can distinctively know it, but I cannot applicably know it.

To note it again, distinctive and applicable truth would be the application of all possible contexts to a situation, and what would remain without contradiction after it was over. Considering that neither one human being, nor even all human beings, could experience all possible contexts and apply them, it is outside of our capability. But what we can do is take as many contexts as we can, apply them to reality, and run with what hasn't been contradicted yet. While what is concluded may not be true, it is the closest we can rationally get.

Quoting Bob Ross
I find myself in the same dilemma where the theory of evolution and there being a teapot floating around Jupiter are both speculations. What bothers me about this is not that they both are speculations, but, rather, that there is no distinction made between them: this is what I mean by the epistemology isn't quite addressing the most pressing matters (most people will agree that which they immediately see--even in the case that they don't even know what a deduction is--but the real disputes arise around inductions). This isn't meant as a devastating blow to your epistemology, it is just an observation that much needs to be addressed before I can confidently state that it is a functional theory (no offense meant). I think we agree on this, in terms of the underlying meaning we are both trying to convey.


I fully understand and respect this! I believe this is because you may not have understood, or may have forgotten, a couple of tenets:

1. Inductions are evaluated by hierarchies.
2. Inductions also have a chain of reasoning, and that chain also follows the hierarchy.
3. Hierarchies can only be related to by the conclusions they reach about a subject. Comparing the inductions about two completely different subjects is useless.

To simplify, if I have a possibility vs a plausibility when I am rationally considering what to pursue, I can conclude it is more rational to pursue what I already know is possible. That doesn't mean being rational results in asserting what is true. Inductions are, by definition, uncertainties. The conclusion does not necessarily follow from the premises. Sometimes people defy what is possible, pursue what is plausible, and result in a new discovery which erases what was previously applicably known.

Of course, when the person decided to pursue what was only seen as plausible, and against what was possible, society would quite rightly claim that the pursuit of what is plausible is not rational. Rationality is incredibly powerful. But depending on a person's context, and the limits of what they already know, it is not the only tool a person needs. Sometimes, it is important to defy and test what is rational. Sometimes, in fact many times, we are simply in a position where we are certain that the outcome is uncertain, and must sometimes make that leap into the next second of life.

But, making that leap without some type of guideline, would be chaos and randomness. So we can use the hierarchy and the chain of reasoning to give us some type of guide that more often than not, might result in less chaos and more order.

So, I can first know that the hierarchy is used in one subject. For example, we take the subject of evolution. We do not compare inductions about evolution, to the inductions about Saturn. That would be like comparing our knowledge of an apple to the knowledge of a horse, and saying that the knowledge of a horse should have any impact on the knowledge of this apple we are currently eating.

So we pick evolution. I speculate that because certain dinosaurs had a particular bone structure, feathers, and DNA structure, birds evolved from those dinosaurs. This is based on our previously known possibilities in how DNA evolves, and how bone structure relates to other creatures. To make this simple, this plausibility is based on other possibilities.

I have another theory. Space aliens zapped plants with a ray gun that evolved certain plants into birds. The problem is, this is not based on any applicable knowledge, much less possibilities. It is also a speculation, but its chain of reasoning is far less cogent than the first theory's, so it is more rational to pursue the first.

When plausibilities are extremely close in hierarchy through their chain of reasoning, it is more palatable to take the less rational gamble. So for example, let's take the first theory and change it to, "Perhaps our current understanding of how bones evolve among species is false." And the reason we say this is because we found a new mammal, and it might contradict our previous findings.

This plausibility is essentially only one step away from the first theory, and most would say it is viable to pursue. However, if a person did not have the time or interest to pursue this speculation, it would still be rational to hold onto the possibilities of our current understanding of bone structure until the speculation is fully explored.

Your coins problem is extremely good!

Quoting Bob Ross
He could abstractly reason that if he experienced 5 coins in a pocket of some size, that, considering mathematics in the abstract, it is possible for 5 coins to fit in a pocket that is greater than that size (assuming the pocket is empty): but he didn't experience it for the greater sized pocket.


You have it correct. He can distinctively know that five coins should be able to fit into a pocket of LWD. He can measure the pocket from the outside and see that it is greater than LWD. But until he applies and attempts to put the five coins into that specific pocket, Smith doesn't applicably know if they can fit. Why? What if there is something in the pocket Smith wasn't aware of? What if part of it is sewn shut, or caught?

To sum it up, application is when we apply to a specific situation that is outside of our distinctive knowledge. We can make a thought experiment, but that is not an application experiment. Smith can have abstract distinctive knowledge about coins, pockets, dimensions, and even Jones. Smith could conclude it's probable, possible, speculate, or even irrationally believe that Jones has five coins in that specific pocket. But none of those are applicable knowledge. He can only applicably know if he's confirmed that there are five coins in Jones' pocket without contradiction from reality.

Quoting Bob Ross
But notice that, within your terminology, Smith cannot claim it is "possible", "probable", or "irrational". Therefore, by process of elimination he is forced to use "speculation"


Within the context you set up, you may be correct. But in another context, he can claim it is possible or probable. For example, Smith sees Jones slip five coins into his pocket. Smith leaves the room for five minutes and comes back. Is it possible Jones could fit five coins in his pocket? Yes. Is it possible that Jones did not remove those five coins in the five minutes he was gone? Yes. We know Jones left those coins in his pocket for a while, therefore it is possible that Jones could continue to leave those coins in his pocket.

Quoting Bob Ross
The epistemology is not telling Smith to do what he wants. The epistemology recognizes the reality that Smith can do whatever he wants.

He can only do whatever he wants in so far as he doesn't contradict himself. If I can provide an argument that leads Smith to realize he is holding a contradiction, then he will not be able to do it unless he uncontradicts it with some other reasoning.


I really wish this were the case. People do things while contradicting their own rationality all the time. People do not have to be rational, or respect rationality in any way. You can conclude that Smith would be irrational using rationality. You could even explain it to Smith. Smith could decide not to care at all. There is absolutely nothing anyone can do about it.

Quoting Bob Ross
We can somewhat resolve this if we consider "possibility", in the sense of "experiencing it once before", as "a deductively defined concept, with consideration to solely its essential properties, that has been experienced at least once before". That way, it is logically pinned to the essential properties of that concept. I may have the choice of deductively deciding concepts (terms), but I will not have as much free reign to choose what I've experienced before. To counter this would require the subject to come up with an alternative method that identifies equivalent objects in time (which cannot be logically done unless they consider the essential properties).


Correct. But this is only if a person chooses to think and act logically. So to clarify, I can convince someone to do something rational if they are using rationality (and, of course, if I'm actually being rational as well). But my being rational does not mean they must be rational. And if they decide not to be rational, no amount of rationality will persuade them.

Quoting Bob Ross
In summary, I can claim that contradictions do not arise in terms of time as well as structural levels. These are the only two aspects of contexts and, therefore, as of now, this is what I consider "context" to be. It is important to emphasize that I am not just merely trying to advocate for my own interpretation of "context": I am trying to derive that which cannot be contradicted in terms of "context"--that which all subjects would be obliged to (in terms of underlying meaning, of course they could semantically refurbish it).


I think you're getting the idea of contexts now. The next step is to realize that your contexts that you defined are abstractions, or distinctive knowledge rules in your own head. If we can apply those contexts to reality without contradiction, then they can be applicably known, and useful to us. But there is no one "Temporal context". There is your personal context of "Temporal". I could make my own. We could agree on a context together. In another society, perhaps they have no idea of time, just change.

To answer your next question, "what is useful" is when we create a context that can be applied to reality, and it helps us live, be healthy, or live an optimal life. Of course, that's what I consider useful. Perhaps someone considers what is useful to be "what makes me feel like I'm correct in what I believe." Religions, for example. There are people who will sacrifice their life, health, etc. for a particular context.

Convincing others to change their contexts was not part of the original paper. That is a daunting enough challenge as its own topic. In passing, as a very loose starting point, I believe we must appeal to what a person feels adds value to their lives, and demonstrate how an alternative context serves that better than their current context. This of course changes for every individual. A context of extreme rationality may appeal to certain people, but if it does not serve other people's values, they will reject it for others.

Quoting Bob Ross
I think they can do whatever they want as long as they are not aware of a contradiction. Therefore, if I propose "context" as relating to temporal and mereological contexts, then they either are obliged to it or must be able to contradict my notion. My goal is to make it incredibly hard, assuming they grasp the argument, to deny it (if not impossible). Obviously they could simply not grasp it properly, but that doesn't negate the strength of the argument itself.


I think you're getting it. Others' decision to accept or reject your context has no bearing on whether that context serves you optimally in your own life. (Unless of course that rejection results in potential harm to yourself!) Further, a person's rejection of your context is not a rejection of the rationality of the context. That stands on its own regardless of others' input. Others' input can introduce you to distinctive and applicable knowledge you may not have known prior, which can cause you to question and expand what you know. But there may be people who do not care, who are happy with their own little world as it gets them through their day. Perhaps they would be happier or more successful if they embraced a more rational or worldly context, but plenty of people are willing to embrace the devil they know instead of the angels they don't.

But with this, I also defend my epistemology. People's decisions not to use it do not make it irrational or useless to other people who would like a rational approach to knowledge. For the epistemology to not be rational, it must contradict itself in application. So far, I don't think it has. But that doesn't mean we shouldn't keep trying!

Quoting Bob Ross
When you think of something in your head that you distinctively know is not able to be applied. For example, if I invent a unicorn that is not a material being. The definition has been formulated in such a manner that it can never be applied, because we can never interact with it.

But you can apply the fact that you distinctively know that it cannot be applied without ever empirically applying it (nor could you). So you aren't wrong here, but that's not holistically what I mean by "apply to reality".


My inability to apply something is the application to reality. When I try to apply what I distinctively know cannot be applied to reality, reality contradicts my attempt at application. If I were to apply what I distinctively know cannot be applied to reality, and yet reality showed I could apply it, then my distinctive knowledge would be wrong in application. But when you lack any distinctive knowledge of how to apply it to reality, there is nothing you can, or cannot, apply to reality. So by default, it is inapplicable, and therefore cannot be applicably known.

Quoting Bob Ross
I think you are conflating two completely separate claims: the spherical nature of the earth and the size of the earth. The stick and shadow experiment does not prove the size of the earth, it proves the spherical shape of the earth.


No, it at best proves the possibility that the Earth is round. If you take small spherical objects and show that shadows will function a particular way, then demonstrate the Earth's shadows also function that way, then it is possible the Earth is spherical. But until you actually measure the Earth, you cannot applicably know if it is spherical. Again, perhaps there was some other shape in reality that had its shadows function like a sphere? For example, a sphere cut in half. Wouldn't the shadows on a very small portion of the rounded sphere act the same as a full sphere? If you are to state reality is a particular way, it must be applied without contradiction to applicably know it.
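To make the point above concrete: the classical stick-and-shadow measurement only turns shadow angles into a circumference once you first *assume* the surface is a full sphere. A minimal sketch of the arithmetic (the function name is mine, and the numbers are illustrative, not historical data):

```python
def circumference_from_shadows(angle_a_deg, angle_b_deg, distance_km):
    """Eratosthenes-style estimate. IF (and only if) the ground is assumed
    to be a full sphere, the difference in shadow angles at two sites equals
    the arc between them, so: distance / circumference = arc / 360."""
    arc_deg = abs(angle_a_deg - angle_b_deg)
    return 360.0 / arc_deg * distance_km

# Illustrative numbers: shadow angles of 0 and 7.2 degrees at two sticks
# 800 km apart yield a circumference estimate near 40,000 km.
estimate = circumference_from_shadows(0.0, 7.2, 800.0)
print(round(estimate))  # about 40000
```

Note that a hemisphere (or any locally round surface) would produce the same two shadow angles, so the computation's output is only as good as the sphericity assumption fed into it, which is exactly the distinction between possibility and applicable knowledge being drawn here.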

Quoting Bob Ross
It only undermines them if there are other alternatives in the hierarchy. If for example a scientific experiment speculates something that is not possible, it is more rational to continue to hold what is possible. That doesn't mean you can't explore the speculation to see if it does revoke what is currently known to be possible. It just means until you've seen the speculation through to its end, holding to the inductions of what is possible is more rational.

I sort of agree, but am hesitant to say the least. Scientific theories are not simply that which is most cogent; they are that which has been vigorously tested and has thereby passed a certain threshold to be considered "true". I think there is a difference (a vital one).


Science does not deal in truth. Science deals in falsification. When a theory is proposed, its affirmation is not what is tested. It is the attempt at its negation that is tested. Once it withstands all attempts at its negation, then it is considered viable to use for now. But nothing in science is ever considered certain, and everything is always open to be challenged.

I think the rest of your post has been covered, and I would be repeating what has already been stated. Fantastic post again! Keen questions and great insights. I hope I'm adequately answering your points, and what I'm trying to point out is starting to come into view.

Bob Ross March 09, 2022 at 04:57 #664605
Hello @Philosophim,

First of all, an apology is due: I misunderstood (slash completely forgot) that you are claiming that abstract reasoning is knowledge (as you define it, “distinctive knowledge”). Our dispute actually lies, contrary to what I previously claimed, in whether both types of knowledge are applied.

For starters, this may very well merely be a semantical dispute: only time will tell.

When you state:
What you have been trying to do, is state that distinctive knowledge can be applicable knowledge without the act of application.


I think you are simply semantically defining your way into an obvious contradiction. As you are probably already well aware, if it is true that there is no act of application, then it logically follows that there is no application. I am claiming the contrary: distinctive knowledge is applied. However, using that terminology (distinctive) may be causing some issues (I’m not sure), so let me try to explicate my position more proficiently. First of all, I think we are using the term “reality” equivocally: you seem to be referring to what I would deem “the external world” (to be more precise: “that which is object”--which includes the body to an extent), whereas I refer to “reality” as holistically the totality of existence (which includes the subject and object). Therefore, when you state “applicable knowledge”, I interpret that as “that which refers to the external world and is thusly applied to it for validity”. When you state “distinctive knowledge”, I am implicitly interpreting it as “that which refers to the mind, or that which resides in it, and thusly is applied to it for validity”. Please note that I am using “subject” and “object” purposely and incredibly loosely: simply for explication purposes of the two major distinctions I think you are making. So when you talk about how what I reason in my mind doesn’t grant me knowledge about how that thing truly is in “reality” (i.e. your hydrogen + oxygen example), I proclaim “that is true!”. But why is this? It is because, I would say, the reasoning is pertaining to objects specifically. Therefore, the application necessarily cannot be merely from the mind. There are three types I would like to expose hereinafter:

1. That which is in relation to a specific object
2. That which is in relation to an object, but pertains to the general form of all or some objects
3. That which is in relation purely to the subject

Everything is derived from reason (or at least that is the position I take) and, consequently, the distinction between the external world and the internal world (so to speak, very loosely) is blended together (into those three aforementioned types). Certain aspects that do not directly pertain to an object can, and potentially must, be derived purely from reason. For example, when you say:

What I am saying is you can distinctively know that if you have an identity of 1, and an identity of 1, that it will make an identity of two. But if you've never added two potatos before, you don't applicably know if you can


The deductive assertion of “two potatoes” (as conceptualized without refurbishment from the standard definitions) necessitates the operation of addition: regardless of whether (1) the operation has been applied in the external world or (2) potatoes even exist in the external world. If we are utilizing distinction (which is implied with “potatoes” in “two potatoes”, as well as multiplicity in terms of “two”), then pure reason can derive knowledge that “one” potato + “one” potato = “two” potatoes. This is, as you are already inferring, simply the exact same thing as your first sentence (in the quote): 1 + 1 = 2. As far as I understand your example here, you are referring specifically to the addition operation and not the existence of potatoes (“you’ve never added two potatos before, you don’t applicably know if you can”): but, as I’m hopefully demonstrating, you definitely can know that. In simpler terms, math applies before any application to the empirical world because it is what the external world is contingent on: differentiation. This application, although it can be better understood with the use of objects, can be solely derived from reason (1 thought + 1 thought = 2 thoughts; this abstractly applies to everything). Therefore, if I distinctively define a potato in a particular way where it implies “multiplicity” and “quantity”, then the operation of addition must follow. The only way I can fathom that this could be negated is if the universality of mathematics is denied: which would entail the rejection of differentiation (“discrete experience” itself). Notice that 1 + 1 = 2 is of type #3, but, due to the intertwining nature of subject-object, is also utilized in #2 without any application to the external world. The object presupposes mathematics (differentiation): without it, there is no object in the first place.
Differentiation is a universal, necessarily so, form of experience: thusly, mathematical operations (to go back to your example) applied in the abstract are, thereby, also applied (if you will) to the external world.

Likewise, consider shapes: these are universal forms (not in the sense of Universals in philosophy, they can be, and I think they are, particulars in that sense) of experience derived solely from reason. I can know, in the abstract, that a circle can fit in a square. I do not need to physically see (empirically observe) a circle inscribed in a square to know this. Not only can I know, applied via reason in the abstract, this in relation to subject, but, since it abides by type #2, also in relation to object. Again, the universality of differentiation would have to be refuted for this not to hold for both subject (that which is conjured) and object (that which isn’t).
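The circle-in-square claim above can even be stated as a purely formal rule, checkable with no empirical observation: a circle fits inside a square exactly when its diameter does not exceed the square's side. A small sketch (the function name is mine, purely illustrative):

```python
def circle_fits_in_square(radius, side):
    """A circle fits inside a square iff its diameter <= the side length:
    centered at the square's center, every point of the circle lies within
    side/2 of the center along both axes."""
    return 2 * radius <= side

print(circle_fits_in_square(1.0, 2.0))  # True: diameter 2 equals side 2
print(circle_fits_in_square(1.1, 2.0))  # False: diameter 2.2 exceeds side 2
```

The rule is derived from the definitions of "circle" and "square" alone, which is the sense in which the fit is known "in the abstract" before any physical circle or square is inspected.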

Moreover, consider mathematical equations. If I have x + y = 1, I can, purely with reason, solve for x to see what x = ? is. Prior to this abstract application of the process of thoughts, I did not “know” what x = ? entails; afterwards, without any external application, I figured it out: this was abstractly obtained, not given.
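The solve-for-x step just described is pure symbol rearrangement, so it can be carried out mechanically with no empirical input at all; a tiny sketch under that reading (the names are mine):

```python
def solve_for_x(y, total=1):
    """Rearrange x + y = total into x = total - y by pure symbol-pushing;
    no observation of the external world is involved."""
    return total - y

x = solve_for_x(0.25)    # x = 0.75
print(x + 0.25 == 1)     # the original equation is satisfied: True
```

The answer was obtained, not given: before the rearrangement we did not "have" x, yet nothing outside the abstract manipulation was consulted.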

Now, the consideration of whether a “potato” exists in the external world, just like your hydrogen-oxygen example, requires empirical observation and, therefore, pertains to type #1 only. The mere form of the instantiation of objects will not get you to knowledge about a particular object you have the ability to imagine. But this does not negate the fact that we are able to apply in the abstract. I would also like to note that this also entails that you do know, by application, that your best guess, from reasoning abstractly, is whatever you deemed your best guess.

To quickly cover #3, the knowledge that I did imagine a unicorn in my head, regardless of whether it is or isn’t instantiated in the external world, was applied strictly by reason (no empirical observation). It was not given, it was obtained. No matter how swift the conclusion was, I had to reason my way, which is the application of the principle of noncontradiction (along with other principles), into knowing such.

With this in mind, I am not referring to objects when I assert space is purely apodictically true. Nor is it in relation to other spatial frameworks we can hold within the uniform spatial plane (like string theory, etc): I am referring to that which reason will always apodictically find true of all of its thoughts and, subsequently, all of its experience holistically—the inevitable spatial reference. Yes, we can conceive of multiple spatial frameworks, but they are necessarily within space. Nothing I can conceive of nor can I claim will ever not be within a spatial reference. Although this is slightly off topic, this is why I reject the notion of non-spatial claims: it is merely the fusion of absence (as noted under the spatial reference), linguistic capability (we can combine the words together to make the claim), and the holistic spatial reference (i.e. “non-” + “spatial”). This is, in my eyes, no different than saying “square circle”. So when you say:

No, space in application, is not proven by distinctive knowledge alone. I can imagine a whole set of rules and regulations about something called space in my head, that within this abstract context, are perfectly rational and valid. But, when I take my theory and apply it to a square inch cube of reality, I find a contradiction. I can distinctively have a theory in my head that I know, but one that I cannot apply to reality.


I am not referring to what we induce is under our inevitable spatial references (such as the makeup of “outer space” or the mereological composition of the space), but, rather, the holistic, inescapable, spatial captivity we are both subjected to: we cannot conceive of anything else. Does that make sense?

The layman already misuses the idea of knowledge, and there is no rational or objective measure to counter them. But I can. I can teach a layperson. I can have a consistent and logical foundation that can be shown to be useful. People's decision to misuse or reject something simply because they can, is not an argument against the functionality and usefulness of the tool. A person can use a hammer for a screw, and that's their choice, not an argument for the ineffectiveness of a hammer as a tool for a nail!


Fair enough.


I want to emphasize again, the epistemology I am proposing is not saying knowledge is truth. That is very important. A common mistake people make in approaching epistemology (I have done the same) is conflating truth with knowledge. I have defined earlier what "truth" would be in this epistemology, and it is outside of being able to be applicably known. I can distinctively know it, but I cannot applicably know it.


Completely understandable. I would also like to add that even “truth”, in the distinctively-known sense, is merely in relation to the subject: it is still not absolute “truth”--only absolute, paradoxically, relative to the subject.

To note it again, distinctive and applicable truth would be the application of all possible contexts to a situation, and what would remain without contradiction after it was over.


I am a bit confused by this quote: you conceded that “distinctive..applicable truth is the application of all possible contexts to a situation”, which concedes that it is applied. I am presuming this is not what you meant.


1. Inductions are evaluated by hierarchies.
2. Inductions also have a chain of reasoning, and that chain also follows the hierarchy.
3. Hierarchies can only be related to by the conclusions they reach about a subject. Comparing the inductions about two completely different subjects is useless.


I am still hesitant about #3, but I will refrain for now (and let you respond to the rest first).

So, I can first know that the hierarchy is used in one subject. For example, we take the subject of evolution. We do not compare inductions about evolution, to the inductions about Saturn. That would be like comparing our knowledge of an apple to the knowledge of a horse, and saying that the knowledge of a horse should have any impact on the knowledge of this apple we are currently eating.


I think for now, I will refurbish my initial analogy to your other one (because I think mine was deviating from the main purpose):

So we pick evolution. I speculate that because certain dinosaurs had a particular bone structure, had feathers, and DNA structure, that birds evolved from those dinosaurs. This is based on our previously known possibilities in how DNA evolves, and how bone structure relates to other creatures. To make this simple, this plausibility is based on other possibilities.

I have another theory. Space aliens zapped a plants with a ray gun that evolved certain plants into birds. The problem is, this is not based on any applicable knowledge, much less possibilities. It is also a speculation, but its chain of reasoning is far less cogent than the first theory, so it is more rational to pursue the first.


This is more in line with the main point I am trying to convey: theories are not what is most cogent, they are what has passed a threshold. Whether either of us like it, we do not grant the title “theory”, scientifically, to the most cogent induction out of what we know: that is a hypothesis at best. Even in relation to the same exact claim (so forget the Saturn/horse comparison for now—although we can definitely talk about that too), we hold uncertainty in most fields of study until it is considered worthy of the title “theory” or “true” or “fact” (etc). It isn’t necessarily bad that your epistemology erodes this aspect, if, and only if, it addresses it properly (I would say). As another example, historians do not deem what is historically known based off of what is the most cogent induction (currently); it has to pass a threshold. We don’t take the knowledge of one reference to a guy named “bob” and go with the best speculation we can rationally come up with. As of now, your epistemology doesn’t seem to account for this. We do not accept all contextually “most cogent” inductive beliefs, we are typically selective. Are you claiming we should just accept all of the most cogent beliefs (with respect to each hierarchical context)?

Within the context you set up, you may be correct. But in another context, he can claim it is possible or probable. For example, Smith sees Jones slip five coins into his pocket. Smith leaves the room for five minutes and comes back. Is it possible Jones could fit five coins in his pocket? Yes. Is it possible that Jones did not remove those five coins in the five minutes he was gone? Yes. We know Jones left those coins in his pocket for a while, therefore it is possible that Jones could continue to leave those coins in his pocket.


I don’t think you really address my issue (I probably just didn’t explicate it properly). In my scenario with Smith, he isn’t speculating that Jones has 5 coins in his pocket: he is claiming it has the potential to occur. The dilemma is this:

1. He can’t claim possibility (in my scenario)
2. He can’t claim probability (in my scenario)
3. He can’t claim irrationality
4. He can’t claim speculation

So what does he claim in your terminology? They are all exhausted. If he claims that he speculates it could be the case that Jones has 5 coins in his pocket, then he is literally claiming the colloquial use of the term possibility. I am salvaging this with “could” referring to potentiality. I am not quite following how you are reconciling this dilemma.

Again, to keep this relatively short, I will address the rationality vs reason parts later. I would just like to point out that I agree, but you were referring to rationality, not reason. But more on that at a later time (I think we need to resolve the previous disputes first).

I think you're getting the idea of contexts now. The next step is to realize that your contexts that you defined are abstractions, or distinctive knowledge rules in your own head. If we can apply those contexts to reality without contradiction, then they can be applicably known, and useful to us. But there is no one "Temporal context". There is your personal context of "Temporal". I could make my own. We could agree on a context together. In another society, perhaps they have no idea of time, just change.


Time is change. What you are referring to is our abstraction of time into clocks (I presume), which is most definitely correct. However, assuming I can converse with them (or communicate somehow somewhat properly), they will not be able to contradict the notion of space and time. You are right that they may reject any further extrapolation of mereological structures other than what they immediately see, but that would not have any effect on my definition of “context”, since any mereological consideration would thereby be omitted anyways. I’m not quite following how you can create a different “temporal context” than me, other than semantically refurbishing the underlying meaning. You can surely deny abstract clocks, but not causality.

To answer your next question, "What is useful", is when we create a context that can be applied to reality, and it helps us live, be healthy, or live an optimal life. Of course, that's what I consider useful. Perhaps someone considers what is useful is, "What makes me feel like I'm correct in what I believe." Religions for example. There are people who will sacrifice their life, health, etc for a particular context.

Convincing others to change their contexts was not part of the original paper. That is a daunting enough challenge as its own topic. In passing, as a very loose starting point, I believe we must appeal to what a person feels adds value to their lives, and demonstrate how an alternative context serves that better than their current context. This of course changes for every individual. A context of extreme rationality may appeal to certain people, but if it does not serve other people's values, they will reject it for others.


This feels like “context” is truly ambiguous. The term context needs to have some sort of reasoning behind it that people abide by: otherwise it is pure chaos. I think the main focus of epistemology is to provide a clear derivation of what “knowledge” is and how to obtain it (in our case, including inductive beliefs). Therefore, I don’t think we can, without contradiction, define things purposely ambiguously.

My inability to apply something, is the application to reality. When I try to apply what I distinctively know cannot be applied to reality, reality contradicts my attempt at application


This is an application in the abstract. You didn’t observe any contradiction with respect to objects; you reasoned, in this case, that the term “non-” + “material” + “being” cannot exist in what is deemed a “material” + “world”. This is a contradiction that did not get applied to any objects.

If I were to apply what I distinctively know cannot be applied to reality, and yet reality showed I could apply it to reality, then my distinctive knowledge would be wrong in application.


In your example, specifically as you outlined it, this is impossible. You defined your way into a contradiction, which means you are abiding by type #3: pure reason. Saying there is a non-material unicorn in a strictly material world is just like the consideration of a square circle. Now, to claim that a material unicorn, as imagined, cannot exist in the material world would be something that abides by the quote here (that you said), because there’s no pure reason that can be applied (at least not without further context): empirical observation is required.



No, it at best proves the possibility that the Earth is round. If you take small spherical objects and show that shadows will function a particular way, then demonstrate the Earth's shadows also function that way, then it is possible the Earth is spherical. But until you actually measure the Earth, you cannot applicably know if it is spherical. Again, perhaps there was some other shape in reality that had its shadows function like a sphere? For example, a sphere cut in half. Wouldn't the shadows on a very small portion of the rounded sphere act the same as a full sphere? If you are to state reality is a particular way, it must be applied without contradiction to applicably know it.


It is true that it does not prove that the earth is completely a sphere, but it does prove it is spherical (round and not flat). It isn’t merely a possibility; it cannot, even under what you described, be a flat plane. Sure, it could even be 3/4ths of a sphere, but it is nevertheless spherically shaped. Maybe that is what you were getting at; in that case, we agree.

Science does not deal in truth. Science deals in falsification. When a theory is proposed, its affirmation is not what is tested. It is the attempt at its negation that is tested. Once it withstands all attempts at its negation, then it is considered viable to use for now. But nothing is science is ever considered as certain and is always open to be challenged.


This is not true. What you have described is a really vigorous form of the appeal to ignorance fallacy. Science does not deal solely with falsification; however, it does holistically deal with falsifiability (which is not the same thing). It is necessary that a claim be falsifiable, but we do not assert “theories” merely as that which hasn’t been falsified in tests. We not only try to falsify the hypothesis, we also verify that the results are what should be expected. We confirm, not simply by saying that no piece of evidence directly contradicts the idea. It pertains to “truth” relative to objects, which are relative to subjects.

I look forward to hearing from you,
Bob
Philosophim March 12, 2022 at 16:19 #666024
Quoting Bob Ross
First of all, an apology is due: I misunderstood (slash completely forgot) that you are claiming that abstract reasoning is knowledge (as you define it, “distinctive knowledge”).


No apology needed! We've been discussing this some time, and have not addressed the beginning in a while. I'll re-explain if something is forgotten without any issue or negative viewpoint on my part.

Quoting Bob Ross
Our dispute actually lies, contrary to what I previously claimed, in whether both types of knowledge are applied.


They are both obtained in the same way. Knowledge in both cases boils down to "Deductions that are not contradicted by reality." Distinctive knowledge is just an incredibly quick test, because we can instantly know that we discretely experience, so what we discretely experience is known. Applicable knowledge is distinctive knowledge that claims knowledge of something apart from immediate discrete experience. Perhaps the word choice of "application" is poor or confusing, because we are applying to reality in either case. Your discrete experiences are just as much a reality as your attempts to claim something beyond them.

It is why I avoided the inevitable comparison to a priori and a posteriori. A priori claims there are innate things we know that are formed without analysis. This is incorrect. All knowledge requires analysis. You can have beliefs that are concurrent with what could be known, but it doesn't mean you actually know them until you reason through them. Perhaps there is a better word or phrase than "applicable knowledge" that describes the concept. Feel free to suggest one!

As I've noted many times, there is nothing wrong with digging in and refining the words or definitions. It's not the words that matter, it's the ideas behind those words. I feel that it might be helpful to break down distinctive knowledge further so I can effectively communicate what concepts and abstractions are, and how knowing them distinctively does not mean you know them applicably.

Distinctive awareness - Our discrete experiences themselves are things we know.

Contextual logical awareness - The construction of our discrete experiences into a logical set of rules and regulations.

A contextual logical viewpoint holds onto discrete experiences that are non-contradictory with each other. When thinking in a logical context, to hold things which would be contradictory, we invent different contexts. For example, "Gandolf is a good person, therefore he would fight to save a hobbit's life if it were easy for him to win." Perfectly logical within his character, because we've made a fictional character. But we could create another context. "Gandolf is sometimes not a good person, therefore we can't know if he would fight to save a hobbit's life if it would be easy for him to win."

We distinctively know both of these contexts. Within our specially made contexts, if Gandolf is a good person, he WILL do X. The only reason Gandolf would not save the hobbit if it was an easy victory for him, is if he wasn't a good person. Here I have a perfectly logical and irrefutable context in my head. And yet, I can change the definitions, and a different logic will form. I can hold two different contexts of Gandolf, two sets of contextual logic, and distinctively know them both with contextual awareness.

Of course, I could create something illogical as well. "Gandolf is a good person, therefore he would kill all good hobbits in the world." Do I distinctively know this? Yes. But I really don't have contextual logical awareness. I am not using the "context of logic". I could think this way if I really wanted to. Perhaps we would say such a person is insane, especially if such contextual thinking was applied to reality, and not a fantasy of the mind.

The rationale behind thinking logically is that when you apply logical thinking to reality, you have a better chance of surviving. Of course, this does mean that in situations in which harm to ourselves is not an immediately known outcome, we can entertain illogical contexts instead. Philosophy is arguably an exercise of trying to see if the logical contexts we've created in our heads actually hold up when discussing with another person.

You can see plenty of people who hold contexts that do not follow logic, and when they are shown their context is illogical, they insist on believing that context regardless. This is the context they distinctively know. It doesn't work in application to reality, but that is not as important to them as holding the context for their own personal emotional gratification. I do not mean to imply it is only "others" that do this. I am willing to bet almost every human in the world does this, and it is only with vigilance, training, and practice that people can minimize holding the emotional value of a context over its rational value.

So to clarify again, one can hold a distinctive logical or illogical context in their head. They distinctively know whatever those contexts are. It does not mean that those contexts can be applied beyond what is in their mind to reality without contradiction. We can strongly convince ourselves that it "must" be so, but we will never applicably know, until we apply it.

With that, let me address your points.

Quoting Bob Ross
In simpler terms, math applies before any application to the empirical world because it is what the external world is contingent on: differentiation.


No, that is what our context of the world depends on. The world does not differentiate like we do. The world does not discretely experience. Matter is composed of atoms, which are composed of things we can break down further. Reality is not aware of this. This is a context of distinctive knowledge that we have applied to reality without contradiction. It is not the reverse.

I've noted before that math is the logical consequence of being able to discretely experience. 1 is the concept of "a discrete experience." That is entirely of our own making. It is not that the external world is contingent on math; it is that our ability to understand the world is contingent on our ability to discretely experience, and to logically think about what that entails.

Does this mean that reality is contingent on our observation? Not at all. It means our understanding of the world, our application of our distinctive knowledge to reality, is contingent on our distinctive knowledge.

Quoting Bob Ross
Therefore, if I distinctively define a potato in a particular way where it implies “multiplicity” and “quantity”, then the operation of addition must follow. The only way I can fathom that this could be negated is if the universality of mathematics is denied: which would entail the rejection of differentiation (“discrete experience” itself).


Exactly. If you use a logical context that you distinctively know, there are certain results that must follow from it. But just because it fits in your head, does not mean you can applicably know that your logical context can be known in application to reality, until you apply it to reality by adding two potatoes together. To clarify, I mean the totality of the act, not an abstract.

When I add these two potatoes together, what happens if one breaks in half? Do I have two potatoes at that point? No, so it turns out I wasn't able to add "these" two potatoes. Since I have added two potatoes in reality before, I know it is possible that two identities I know as potatoes can be added again. But do I applicably know I can add those other two potatoes before I add them together? No.
Can I add two potatoes abstractly in my head, and the result will always logically equal two? Yes. Can I imagine adding "those" two potatoes in my head, where they do not break and everything perfectly equals two? Yes. Does that mean I applicably know this? No. I hope this clarifies what I'm trying to say.
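The distinction being drawn here can be sketched in code (a hypothetical illustration of my own, not taken from the papers): abstract addition holds by definition, while "applying" it means counting what is actually in front of you afterwards.

```python
# Abstract addition: true by definition, no application to reality needed.
assert 1 + 1 == 2

# "Application": counting the whole potatoes actually on the table after
# handling them. In this hypothetical scenario, one potato broke in half.
table = ["whole potato", "half potato", "half potato"]
whole = [p for p in table if p == "whole potato"]
print(len(whole))  # 1 whole potato remains, not the 2 the abstraction promised
```

The abstract result is guaranteed; the counted result is only known once the act is carried out.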

Quoting Bob Ross
I can know, in the abstract, that a circle can fit in a square. I do not need to physically see (empirically observe) a circle inscribed in a square to know this.


Yes, you can distinctively know this, which is what abstract logical contexts are. But do you applicably know that you can fit this square and circle I give you in that way before you attempt it? No. You measure the square, you measure the circle. Everything points that it should fit perfectly. But applicably unknown to you, I made them magnetized to where they will always repel. As such, they will never actually fit due to the repulsion that you would not applicably know about, until you tried to put them together.

Quoting Bob Ross
I am not referring to what we induce is under our inevitable spatial references (such as the makeup of “outer space” or the mereological composition of the space), but, rather, the holistic, unescapable, spatial captivity we are both subjected to: we cannot conceive of anything else.


I understand. But your inability to conceive of anything else is because that is the distinctive context you have chosen. There are people who conceive of different things. I can make a context of space where gravity does not apply. I can conceive of space as something that can allow warp travel or teleportation. What I cannot do, is applicably know a conception of space that I have never applied without contradiction. That part which is inescapable, is the application of our concepts to reality. Reality does not care about our logical constructs and rational thinking, aka, our distinctive knowledge. If we are unable to create a distinctive context of logical thinking that fits in reality without contradiction, then we lack any applicable knowledge of that reality.

Quoting Bob Ross
Although this is slightly off topic, this is why I reject the notion of non-spatial claims: it is merely the fusion of absence (as noted under the spatial reference), linguistic capability (we can combine the words together to make the claim), and the holistic spatial reference (i.e. “non-” + “spatial”). This is, in my eyes, no different than saying “square circle”.


To hammer home, that is because of our application. When you define a logical context of space that cannot be applied and contradicts the very moment of your occupation of space, it is immediately contradicted by reality. A distinctively known logical context that is rationally perfect in our heads cannot be claimed to be an accurate representation of reality, until it is applied to reality.

Quoting Bob Ross
Whether either of us like it, we do not claim “theory”, scientifically, to the most cogent induction out of what we know: that is a hypothesis at best.


I think you misunderstood what I was trying to state. I was not stating a scientific theory. I was stating a theory. A scientific theory is a combination of applicable knowledge for the parts of the theory that have been tested. Any "theories" about scientific theories are speculations based on a hierarchy of logic and inductions.

Quoting Bob Ross
As another example, historians do not deem what is historically known based off of what is the most cogent induction (currently), it has to pass a threshold.


If they are using knowledge correctly, then yes. But with this epistemology, we can re-examine certain knowledge claims about history and determine if they are applicably known, or if they are simply the most cogent inductions we can conclude. Sometimes there are things outside of what can be applicably known. In that case, we only have the best cogent inductions to go on. We may not like that there are things outside of applicable knowledge, or like the idea that many of our constructions of the past are cogent inductions, but our like or dislike of that has nothing to do with the soundness of this epistemological theory.

In other words, my epistemology is not "not taking into account" these situations. It does. The question is, does the application of the epistemology continue to be the best tool currently available to assess reality rationally?

Quoting Bob Ross
Completely understandable. I would also like to add that even “truth” in terms of distinctively known is merely in relation to the subject: it is still not absolute “truth”--only absolute, paradoxically, relative to the subject.


No, that is not "truth" as I defined it. That is simply applicable knowledge. And applicable knowledge, is not truth. Truth is an inapplicable plausibility. It is the combination of all possible contexts applied to all of reality without a contradiction. It is an impossibility to obtain. It is an extremely common mistake to equate knowledge with truth; as I've noted, I've done it myself.

To explain, I am limited by my distinctive context. I can take all the possible distinctive contexts I have, and apply them to reality. Whatever is left without contradiction is what I applicably know. But because my distinctive contexts are limited, they cannot encompass all possible distinctive contexts that could be. Not to mention I'm limited in my applicable context as well. I will never applicably know the world as a massive Tyrannosaurus Rex. I will never applicably know the world as someone who is incapable of visualizing in their mind. As such, truth is an applicably unobtainable definition.

Quoting Bob Ross
In my scenario with Smith, he isn’t speculating that Jones has 5 coins in his pocket: he is claiming it has the potential to occur.


Quoting Bob Ross
If he claims that he speculates it could be the case that Jones has 5 coins in his pocket, then he is literally claiming the colloquial use of the term possibility. I am salvaging this with “could” referring to potentiality.


The problem here is in your sentence, "he speculates it could be the case". This is just redundancy. "Speculation" means "I believe X to be the case despite not having any prior experience or applicable knowledge". "It could be the case" means "I believe it to be the case", but you haven't added any reasoning for why it could be the case. Is it the case because of applicable knowledge, probability, possibility, etc.? I could just as easily state, "He speculates that it's probable", or "He speculates that it's possible".

And this is what I mean by asking for a clear definition of "potential" that serves as an indicator of something that cannot be described by the hierarchy. If potential simply means "it could be the case", it's just a generic and unspecified induction. It is a claim of belief without the clarification of what leads to holding that belief. I don't think this is what you want. I felt I did use your example and successfully point out times when we can claim probability and speculation, but that's because I fleshed out the scenario to clarify the specifics. If you do not give the specifics of what the underlying induction is based on, then it is simply an unexamined induction, and at best, a guess.

Quoting Bob Ross
This feels like "context" is truly ambiguous. The term context needs to have some sort of reasoning behind it that people abide by: otherwise it is pure chaos. I think the main focus of epistemology is to provide a clear derivation of what "knowledge" is and how to obtain it (in our case, including inductive beliefs). Therefore, I don't think we can, without contradiction, define things purposely ambiguously.


I'm hoping that at this point I've laid out what context is. The term distinctive context is clearly defined as a set of distinctive identities that are held together in the mind. Distinctive contexts can include other contexts, like logic, and we generally consider those more valuable. Rational people ensure that their contexts include the "logical context" which allows us to make rational abstractions.
Applicable context is the ability of a person to apply their distinctive context to reality. If I have a context of metric measurement, but I do not have a ruler with centimeters, it is outside of my applicable context. If I later go blind in life, I may have visions of what the world looks like in my head, but I can no longer applicably know the world with sight.

What can be ambiguous, is the context another person holds. Our own conversation is a fine example! We are discussing not only to see if the application of this epistemology context can be applied to reality without contradiction, but also to convey and see if the distinctive context of our words is understood by each other as we intended, and to see if it fits within a rational and logical context as well.

Whew! This has already gone on long enough, so let me shorten the rest. I believe I've added enough to address the points on calculating the Earth distinctively versus applicably knowing what the Earth's circumference is, as well as noting what cannot be applicably known. If you still feel my points have not adequately addressed those, let me know.

A very quick article on science. https://www.forbes.com/sites/paulmsutter/2019/10/27/science-does-not-reveal-truth/?sh=431c861c38c3

If you still want me to address my claims of science, I will as well next post.

Bob Ross March 12, 2022 at 21:26 #666143
Hello @Philosophim,

I think we are still misunderstanding each other a tad bit, so let's see if I can resolve some of it by focusing on directly responding to your post.

They are both obtained in the same way. Knowledge in both cases boils down to "Deductions that are not contradicted by reality." Distinctive knowledge is just an incredibly quick test, because we can instantly know that we discretely experience, so what we discretely experience is known. Applicable knowledge is distinctive knowledge that claims knowledge of something that is apart from immediate discrete experience. Perhaps the word choice of "Application" is poor or confusing, because we are applying to reality in either case. Your discrete experience is just as much a reality as its attempts to claim something beyond them.


This is why I think it may be, at least in part, a semantical difference: when you refer to "application", you seem to be admitting that it is specifically "application to the external world" (and, subsequently, not the totality of reality). In that case, we are in agreement here, except that I would advocate for more specific terminology (it is confusing to directly imply one is "application" in its entirety, which implies that the other is not, yet claim they are both applications).

The other issue I would have is the ambiguity of such a binary distinction. When you say "Applicable knowledge is distinctive knowledge that claims knowledge of something that is apart from immediate discrete experience", fundamental aspects of the "external world" are necessarily aspects of our experience (as you note later on). This is different (seemingly) from things that solely arise in the mind. My imagination of a unicorn is distinctive knowledge (pertaining to whatever I imagined), but so is the distinction of the cup and the table (which isn't considered solely a part of the mind--it is an object). It blends together, which is why certain aspects cross over into the external world from the mind. But more on that later.

Likewise, when you state "Your discrete experience is just as much a reality as its attempts to claim something beyond them": the subject cannot rationally claim anything beyond discrete experience, that is all they have. I cannot claim that the table is a thing-in-itself, nor can I claim it is purely the product of the mind: both are equally inapplicable. However, if what you mean by "attempts to claim something beyond them" is simply inductions that pertain to the discrete experience of objects, then I have no quarrel.

It is why I avoided the inevitable comparison to a priori and a posteriori. A priori claims there are innate things we know that are formed without analysis. This is incorrect. All knowledge requires analysis. You can have beliefs that are concurrent with what could be known, but it doesn't mean you actually know them until you reason through them.


This is not how I understood Kant's a priori vs a posteriori distinction: it is not blindly asserted. It is analyzed via reason by means of recursively examining reason upon itself, to extrapolate the apodictic forms it possesses. This is applied and, to an extent, true. A priori actually salvaged the empiricist worldview, as even Hume noted that empiricism is predicated on causality (which is a problem if one is asserting everything must be applied to the external world to know it). Kant, generically speaking, simply provided (although he was against empiricism) what logically is demonstrably true of the form of reason itself (of subjectivity in a sense). We applicably know, via solely reason, that we are within an inescapable spatial & temporal reference. We are constrained to the principles of non-contradiction and sufficient reason, and, with the combination of the aforementioned, presuppose causality in any external application. We cannot empirically verify causality itself: it is impossible. Nor the principle of non-contradiction, etc. I do have to somewhat agree with you that Kant does extrapolate much further than that, and claims things about the a priori that cannot possibly be known (like non-spatial, non-temporal, etc), but within the logical constraints that are apodictically true for the subject's reason, it logically follows from the usage of such that there are certain principles that must exist for any observation to occur in the first place. Obviously there's the issue that we can't escape the apodictic rules of our reason, which is being utilized reflexively to even postulate this in the first place, and therefore it is only something that logically follows. But this applies to literally everything. To say that it makes the most sense (by a long shot) that we are derived from a brain is only something that logically follows (which also does not escape our apodictic rules of reason).


Distinctive awareness - Our discrete experiences themselves are things we know.
Contextual logical awareness - The construction of our discrete experiences into a logical set of rules and regulations.


To clarify, our discrete experiences themselves are things we know by application via reason. Our awareness of the distinctions is also known by the same sort of application: reason. If that is what you are stating here, then I agree: I am just not finding this sentence very clear as to what you are trying to state. It could be that you are claiming they are essentially given, but I don't think you are stating that, which means it logically follows that they stem from reason. Moreover, I think the problem here is that both are constructions of logical rules and regulations: distinctive awareness is derived from reason, and reason is, upon reflexive examination, regulated by necessitous rules, whereas the "logical set of rules" you reference in "contextual logical awareness" consists of rules that, I think you are claiming, are not necessitous (in that a diversity of contexts can be produced; but it is important to remember that they are derived from those necessitous rules in which reason, apodictically, manifests itself).

We distinctively know both of these contexts. Within our specially made contexts, if Gandolf is a good person, he WILL do X. The only reason Gandolf would not save the hobbit if it was an easy victory for him, is if he wasn't a good person. Here I have a perfectly logical and irrefutable context in my head. And yet, I can change the definitions, and a different logic will form. I can hold two different contexts of Gandolf, two sets of contextual logic, and distinctively know them both with contextual awareness.


This is all fine, with the emphasis that this is applicably known via reason. IF conditionals are an apodictic instantiation of our reason: one of the logical regulations, upon recursive reflection, of reason itself. Depending on how you are defining those two conditional claims, it may solely pertain to reason or it may also pertain to the form of objects. If you mean in this example to define logically that a "good Gandolf" directly necessitates him doing X and, logically, if he doesn't do X, then he isn't "good", then it is not only known in the mind (via reason pertaining to solely what lies in the mind), but also of all objects (all discrete experience of "objects"). You know, without application to the external world, that the logical defining of person P as "good" if they do X and as not good if not X will hold for all experience (including that which pertains to the external world). This is "applicably known" and "distinctively known" (as you would define it) without "applying" to the external world, due to its relating to the necessary logical form of discrete experience.


Of course, I could create something illogical as well. "Gandolf is a good person, therefore he would kill all good hobbits in the world." Do I distinctively know this? Yes. But I really don't have contextual logical awareness. I am not using the "context of logic".


It depends on what you mean by "logic". If you are referring to an adopted logical system (such as classical logic), which I should emphasize is based off of reason (which everyone has), then you are right. But you did still have a context of "logic" in the sense of the apodictic necessitous forms of the instantiation of reason. Firstly, if you define "good person" in a way contradictory (as previously defined) to killing what is defined as "good hobbits", then you do not know that sentence distinctively--you know the exact contrary (the statement is false). However, one can hold such a contradiction if it is reasoned, no matter how irrationally, to no longer be a strict contradiction. Maybe I decide that the end justifies the means: now that sentence is perfectly coherent. However, I could very well accept that sentence as "true", although I know it is contradictory, solely based off of "it makes me sleep better at night thinking it is true": this is still a reason. I could claim to hold it as a lie to annoy you, or just because I like lying: these are all reasons (not rational, but reasons). But my main point is that a person cannot conceive of whatever they want: they cannot hold that they are seeing a circle and a square (pertaining to the same object) at the same time. They can lie, for whatever reason, about it, but I know that they also do not distinctively "know" this. They may distinctively "know" that they want to lie about it for whatever reason, but they do not distinctively actually "know" that they are seeing two completely contradictory things. Likewise, even in the realm strictly pertaining to the mind, they cannot distinctively know a circle as a circle and a rectangle. They can lie about it, or convince themselves it is somehow possible, but they cannot actually distinctively know this (this is not merely my contextual interpretation--unless they are not human).

The rational behind thinking logically, is when you apply logical thinking to reality, it has a better chance of your surviving.


In a general sense, I agree that my survival is more likely if I abide by a coherent logical system (such as classical logic or something), but "survival" alone doesn't get you to any sort of altruism.

You can see plenty of people who hold contexts that do not follow logic


It doesn't follow a logical system that we have derived from our ability to reason. Everyone reasons. Not everyone is rational. There are apodictically true regulations of reason (which are obtained by analysis of a recursive use of reason on reason).

and when they are shown it is not illogical, they insist on believing that context regardless. This is the context they distinctively know.


They do not necessarily distinctively "know" the content of the entirety of the context they hold. Again, they cannot hold that they imagined a circle that was also a rectangle that was also a triangle.

It doesn't work in application to reality, but that is not as important to them as holding the context for their own personal emotional gratification


I agree, but what you mean by "application to reality" is "application specifically to the external world".

1. Some things they can know in the mind which is not known in the external world.
2. Some things they cannot know in the mind nor the external world.
3. Some things they can know in the mind and the external world (by means of what is known in the mind).
4. Some things they can know by means of application to the external world.

I think you are trying to reduce it to simply 2 options: application to the mind, or application to the external world.

So to clarify again, one can hold a distinctive logical or illogical context in their head. They distinctively know whatever those contexts are. It does not mean that those contexts can be applied beyond what is in their mind to reality without contradiction. We can strongly convince ourselves that it "must" be so, but we will never applicably know, until we apply it.


You are right in the sense that we cannot claim that my imagination of a unicorn entails there is a unicorn in the external world, but it doesn't negate that discrete experience itself is the external world. Therefore, certain forms are apodictically true of the mind, and of the external world by proxy of the mind. A great example is causality.

No, that is what our context of the world depends on. The world does not differentiate like we do. The world does not discretely experience. Matter and energy are all composed of electrons, which are composed of things we can break down further. Reality is not aware of this. This is a context of distinctive knowledge that we have applied to reality without contradiction. It is not the reverse.


Again, discrete experience is the world. We cannot claim that an electron exists as a thing-in-itself (apart from the subject), nor can we claim that it doesn't exist as a thing-in-itself (completely contingent on the subject). We can claim certain aspects of objects, which are a part of discrete experience, are contingent on particular objects that we deem to have produced our sensations and, in turn, our perceptions (i.e. color is not an aspect of my keyboard; it is a matter of light wavelength directed through my eyes which is then interpreted by my brain--all of these are objects that are a part of discrete experience). All of it logically follows, but that is just it: logically follows via reason. Without such, which is the consideration of the absence of reason by reason itself, we can only hold indeterminacy. The "external world", object, is simply that which reason has deemed out of its direct control, but those deemed "objects" follow necessary forms (discrete experience) that form from reason.

I've noted before that math is the logical consequence of being able to discretely experience. 1, is the concept of "a discrete experience." That is entirely of our own making. It is not that the external world is contingent on math, it is that our ability to understand the world, is contingent on our ability to discretely experience, and logically think about what that entails.


I think, given that discrete experience is the world, that you agree with me (at least partially) here. Nothing you said here is incorrect; your positing of an external world that is a thing-in-itself is where you went wrong. Just as someone could equally go wrong by positing the exact opposite.

Does this mean that reality is contingent on our observation? Not at all. It means our understanding of the world, our application of our distinctive knowledge to reality, is contingent on our distinctive knowledge.


Again, we cannot claim either. We have reason, and from it stems all else: this doesn't mean that there are no things-in-themselves or that there are. Only that we discretely experience things, which are deemed objects, and all of those objects abide by mathematics because, as you said, discrete experience is what derives multiplicity in the first place. Therefore, certain aspects of the external world are known by reason alone, because certain aspects of the external world abide by, necessarily, those regulated forms of reason. This is not to say that you are entirely wrong either, as we can claim "objects", which are out of our control, but with the necessary understanding that mathematics is true of all objects (because it is discrete experience).

Exactly. If you use a logical context that you distinctively know, there are certain results that must follow from it. But just because it fits in your head, does not mean you can applicably know that your logical context can be known in application to reality, until you apply it to reality by adding two potatoes together. To clarify, I mean the totality of the act, not an abstract.


I am having a rough time understanding what you mean here.

When I add these two potatoes together, what happens if one breaks in half? Do I have two potatoes at that point? No, so it turns out I wasn't able to add "these" two potatoes.


I feel like you aren't referring to mathematical addition, but combination. Are you trying to get at that two potatoes aren't necessarily combinable? Like meshing two potatoes together? That's not mathematical addition (or at least not what I am thinking of). We know that one potato and another potato make up two potatoes. Even if one breaks in half, one half + one half + one entails two. Combining two potatoes won't give you two distinct potatoes; it will give you one big potato (assuming that were even possible) or two potatoes' worth of smashed potatoes. If that is what you are referring to, then I would say you are talking about what must be empirically verified about the cohesion of "potatoes" in the external world, which definitely requires an empirical test to "know" it. However, to perform the mathematical addition of one potato to another, where two distinct potatoes are the result, is known about the external world by means of the mind via reason.

But do you applicably know that you can fit this square and circle I give you in that way before you attempt it? No. You measure the square, you measure the circle. Everything points that it should fit perfectly. But applicably unknown to you, I made them magnetized to where they will always repel. As such, they will never actually fit due to the repulsion that you would not applicably know about, until you tried to put them together.


This is 100% correct. It pertains directly to objects themselves, which requires empirical observation. However, that does not negate my claim that the ability to fit a circle in a square is known in the mind. Shape itself is a form of all discrete experience, and therefore can pertain to the external world with merely reason. I know that rectangular shapes take a specific form, and that pertains not only to what I imagine but necessarily to objects as well. Think of it this way: I can also "know" what cannot occur in the external world, without ever empirically testing it, based off of shapes--which encompass the external world as it is discrete experience. Can you fit a square of 5 x 5 inches in a circle of radius 0.5 inches? No. Now, I think what you are trying to get at is that I will not know this about a particular circle and square in the external world until I attempt it--as my calculations (dimensions) may be off, and they may fit because they are not the aforementioned dimensions. However, this does not negate the fact that I cannot, in the external world, fit shapes of such dimensions into one another as specified. I know this of the external world as well as the mind, without application to the external world. Moreover, if the same ruler is utilized in both readings, then I do not even need to attempt to fit them together in the external world, because I do know it will not happen. Firstly, if "inches" is consistent (which is implied by using the same ruler), then it doesn't matter if my measured "in" actually is what we would define as an "in". Secondly, the significant digits are a vital consideration which determines whether one actually has to attempt fitting them together to "know" if they can fit. In this case, the significant digits can, with solely reason, be determined not to allow for a margin of error that would let it fit. 5 "whatevers" (inches) by 5 "whatevers" will not fit in 0.5 "whatevers".
Even the significant digits, which would be 5.X and 0.5X (where X is the estimated digit), will not allow for any variance large enough for either of us to claim a margin of error that would require physically testing it. If instead it were a square of 1.X "whatevers" by 1.X "whatevers" and the circle had a radius of 1.Y "whatevers" (where Y is estimated smaller than X), then we could reason that we might be wrong.
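The dimensional reasoning above can be sketched as a simple check (a hypothetical illustration of my own, not taken from the papers; the function name is mine). A square fits inside a circle exactly when its diagonal does not exceed the circle's diameter, so the conclusion holds in any consistent unit ("whatevers"):

```python
import math

def square_fits_in_circle(side: float, radius: float) -> bool:
    """A square fits inside a circle iff its diagonal (side * sqrt(2))
    does not exceed the circle's diameter (2 * radius)."""
    return side * math.sqrt(2) <= 2 * radius

# The example from the discussion: a 5 x 5 square in a circle of radius 0.5.
print(square_fits_in_circle(5, 0.5))  # False: diagonal ~7.07 > diameter 1
print(square_fits_in_circle(1, 1))    # True: diagonal ~1.41 <= diameter 2
```

No margin of measurement error in the tenths place can flip the first result, which is the point about significant digits: reason alone settles it before any physical attempt.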

Now, I like your example of magnets to show that I still wouldn't know, even if I knew the dimensions checked out, that they would fit. However, I can "know":

1. That dimensions that cannot mathematically fit, considering the margin of error as negligible, cannot fit in the external world (this is a consideration of reason within the mind which necessarily translates to the external world, as it is simply discrete experience).

2. That a square can fit in a circle (this is a sole consideration of the mind, but it also translates into what cannot happen in the external world). I know, if that is true, that nothing pertaining to the shape of an object will necessitate that an object of shape "square" cannot be fit into an object of shape "circle". As you noted earlier, it is true that to know two particular shapes fit into one another in the external world requires empirical observation, but I still nevertheless know that circularity and squareness, in shape, do not necessitate that they cannot be fit together: this is true of the external world as much as my mind.


I understand. But your inability to conceive of anything else is because that is the distinctive context you have chosen. There are people who conceive of different things. I can make a context of space where gravity does not apply. I can conceive of space as something that can allow warp travel or teleportation.


This is not the uniform, holistic, spatial reference I am referring to. Yes, people can conceive of spatial frameworks under the holistic spatial reference that do not abide by the same principles as those we discover of the external world. My inability to conceive of something else is not a distinctive context I have chosen. Yes, I could choose to envision a spatial framework under space where I can fly and, yes, this would be a distinctive context. However, distinctive contexts themselves depend on a regulated, inescapable form--space--which cannot be contradicted: it is not chosen, it is always demonstrably true. Even the imagined spatial frameworks abide by space itself. This is not to be confused with abiding by "outer space" or "string theory" or "my made-up gravity-free world". A necessary rule of the manifestations of reason is that they are spatially referenced (inevitably). Does that make sense?


To hammer home, that is because of our application. When you define a logical context of space that cannot be applied and contradicts the very moment of your occupation of space, it is immediately contradicted by reality.


Again, you are right, but this is not relevant to what I am trying to say. I am not referring to my being able to attempt an application of my gravity-free spatial framework to the external world, only to be met with gravity. I am referring to that which is discovered, projected, and conceivable--holistically, all experience. You don't apply the holistic reference of space to anything (you cannot); it is that which reason necessarily utilizes, in its manifestations (like thoughts), to apply anything in the mind or in the external world. With respect to what you were getting at (or at least what I understand you to be saying), you are right.

I think you misunderstood what I was trying to state. I was not stating a scientific theory. I was stating a theory. A scientific theory is a combination of applicable knowledge for the parts of the theory that have been tested. Any "theories" on scientific theories are speculations based on a hierarchy of logic and inductions.


I am not following what you are trying to say here. I was under the impression we were discussing science and the theories therein: those are all scientific theories. When you say "I was stating a theory", what do you mean? Colloquially, a "theory"? What else is there in science that is a theory besides scientific theories? My point was that we do not simply accept that which is most cogent; it must pass a threshold of cogency in terms of the vast majority of institutions that are in place for developing knowledge. At what point is it cogent enough for me to base my actions off of it? How cogent of an induction do global warming and climate change have to be for me to change my lifestyle? How cogent does evolution need to be for me to base biology off of it? Just simply the most cogent? Scientific theories require much more than that, no?

If they are using knowledge correctly, then yes. But with this epistemology, we can re-examine certain knowledge claims about history and determine if they are applicably known, or if they are simply the most cogent inductions we can conclude. Sometimes there are things outside of what can be applicably known. In that case, we only have the best cogent inductions to go on. We may not like that there are things outside of applicable knowledge, or like the idea that many of our constructions of the past are cogent inductions, but our like or dislike of that has nothing to do with the soundness of this epistemological theory.


I think I'm following what you are saying now. We don't ever, under this epistemology, really state "historical facts" other than that which is deduced. Everything else is simply a hierarchy of inductions, of which we should always simply hold the most cogent one. The problem is that there's never a suspension of judgement: we also claim a belief towards whatever is most cogent. Again, when is it cogent enough for me to take action based off of it?

No, that is not "truth" as I defined it. That is simply applicable knowledge. And applicable knowledge, is not truth. Truth is an inapplicable plausibility. It is the combination of all possible contexts applied to all of reality without a contradiction. It is an impossibility to obtain. It is an extremely common mistake to equate knowledge with truth; as I've noted, I've done it myself.


Again, this isn't true. "Truth" being the "combination of all possible contexts applied to all of reality without contradiction" is the definition of that which is apodictically true for the subject. Again, take space, or causality, or pon: these are true of all reality because I am not just talking about the external world; I am referring to everything, which is discrete experience (as you put it). The world is reason. This doesn't mean that we can obtain "truth" of anything sans reason, but we must understand that we can't even conceive of such a question: "without (sans) reason" is itself considered via reason and its necessary form (i.e. "without" is a spatial reference, and the entire question is posed via reason).

To explain, I am limited by my distinctive context. I can take all the possible distinctive contexts I have, and apply them to reality. Whatever is left without contradiction is what I applicably know. But because my distinctive contexts are limited, it cannot encompass all possible distinctive contexts that could be. Not to mention I'm limited in my applicable context as well. I will never applicably know the world as a massive Tyrannosaurus Rex. I will never applicably know the world as someone who is incapable of visualizing in their mind. As such, truth is an applicably unobtainable definition.


I think you are positing an objective world that is a thing-in-itself, where "truth" is if we were essentially omniscient with respect to the understanding of an object via all contexts. In that sense, I agree. But I don't think you can posit such.

The problem here is in your sentence, "he speculates it could be the case". This is just redundancy. "Speculation" means "I believe X to be the case despite not having any prior experience or applicable knowledge". "It could be the case" means "I believe it to be the case", but you haven't added any reasoning why it could be the case. Is it the case because of applicable knowledge, probability, possibility, etc.? I could just as easily state, "He speculates that it's probable", or "He speculates that it's possible".


I don't think that really addresses the issue. I used the terminology "speculates it could" because you used it previously, and I was trying to expose that it is the same thing as possibility (in a colloquial sense). It is redundant: to say "it could" is to say "it is possible" (in the old sense of the term). And, no, "it could be the case" is not equivalent to "I believe it to be the case". If I claim "Jones could have 5 coins in his pocket", I am not stating that I believe he does have 5 coins in his pocket. I am saying nothing contradicts the idea that he has 5 coins in his pocket (e.g. that the dimensions dictate otherwise, etc.). My reasoning for why "it could be the case" is abstract, and has nothing to do with reasons why he does have 5 coins in his pocket (or why I believe he does). In my scenario, he can't claim it is probable or possible. There's a difference between claiming there is colloquially a possibility that something can occur and actually believing that it occurred. Does that make sense? The dilemma is that the latter is non-existent in your epistemology. Smith, in the sense that he isn't claiming to believe there are 5 coins in Jones' pocket, is forced to say nothing at all.

It is a claim of belief, without the clarification of what leads to holding that belief.


Potentiality is very clear (actually clearer, I would say, than possibility): that which is not contradicted in the abstract, which allows that it could occur. Now, I don't like using "could" because it is utilized in colloquial speech in the sense of both possibility and potentiality (possibility as something we could colloquially claim has been proven to occur, and potentiality being that which simply hasn't been contradicted yet).

I felt I did use your example and successfully point out times we can claim probability and speculation, but that's because I fleshed out the scenario to clarify the specifics. If you do not give the specifics of what the underlying induction is based on, then it is simply an unexamined induction, and at best, a guess.


I felt like I made it clear. Smith is not claiming it is probable: there's no denominator there. He isn't claiming possibility: he has not seen 5 coins in Jones' pocket before. He isn't going to claim an irrational induction, because he hasn't found any contradictions. He is not claiming speculation that Jones has five coins in his pocket: he is claiming that Jones could potentially have five coins in his pocket. So what does he claim? As you agreed, saying he "speculates that it could happen" is redundant: either he is claiming that it "could" happen in the sense of possibility (as in he has experienced it once before)--which he is not in this case--or he is claiming that he can't contradict the idea that Jones potentially has five coins in his pocket. He isn't asserting that Jones does, just that it could be the case (given his current understanding).

I look forward to hearing from you,
Bob
Philosophim March 13, 2022 at 16:32 #666386
Good response Bob! I can see we're still on different tracks of thought, but I think we're close.

Quoting Bob Ross
This is why I think it may be, at least in part, a semantical difference: when you refer to "application", you seem to be admitting that it is specifically "application to the external world" (and, subsequently, not the totality of reality). In that case, we are in agreement here, except that I would advocate for more specific terminology (it is confusing to directly imply one is "application" in its entirety, which implies that the other is not, yet claim they are both applications).


Yes, I believe the term has brought confusion as noted before. Here's the thing, I can't say "external world" for a foundational theory of knowledge. Perhaps we can conclude there is an external world, but I never did that in the theory. All I noted in the beginning was that there was a will, and that reality sometimes went along with that will, and sometimes contradicted that will.

The only reason we have a definition of reality, is that there are some things that go against our will. Reality is the totality of existence that is in accordance with our will, and contrary to our will. I have never attempted to define an external world, though my vocabulary has not been careful enough with this in mind.

All knowledge is "deduction based on what is not contradicted". The separation of distinctive and applicable is based on simplicity versus complexity, and on its general relation to how people speak. It is a model intended to mirror the idea of a proven external world without actually stating "there is an external world".

So why have I not declared an "external world" as synonymous with applicable knowledge? Because there are things we can do in our own mind that go against our will. Let's say I imagine the word elephant, and say, "I'm not going to think of the word elephant." Despite what I want, it ends up happening that I can't stop thinking of the word.

Distinctive knowledge comes about by the realization that what we discretely experience, the act itself, is known. But anytime there is a claim of knowledge that could potentially go beyond our will, that is an attempt at applicable knowledge. So, if I claim, "I will not think of the word elephant 1 second from now," I must apply that to reality. One second must pass, and I must not have thought of the word. If I did, I applicably know that my earlier statement was false.

Basically, when your distinctive knowledge creates a statement that the act of the discrete experience alone cannot confirm, you need to apply it. I can discretely experience an abstract set of rules and logical conclusions. But if those abstract rules pertain to something which cannot be confirmed by my current discrete experience, I have to apply them.

So, if I construct a system of logic, then claim, "X functions like this," to know this to be true, I must deduce it and not be contradicted by reality. Once it is formed distinctively, it must be applied, because I cannot deduce my conclusion about the world from the act of discretely experiencing alone. I can discretely experience a pink elephant, but if I claim the elephant's backside is purple, until I discretely experience the elephant's backside, I cannot claim to applicably know its backside is purple. This is all in the mind, which is why I do not state applicable knowledge is "the external world".

Quoting Bob Ross
My imagination of a unicorn is distinctive knowledge (pertaining to whatever I imagined), but so is the distinction of the cup and the table (which isn't considered solely a part of the mind--it is object).


Correct. There is no question that when you discretely experience what you are calling a cup and table, you have distinctive knowledge that it is what you are experiencing. But if you claim, "That is a cup and a table", you must apply your distinctive knowledge to the cup and table to ensure reality does not contradict you. You must take the essential properties of the distinctive knowledge of a cup and a table, and test them. Only if you do so without contradiction can you applicably know that it is a cup and a table.

Quoting Bob Ross
However, if what you mean by "attempts to claim something beyond them" is simply inductions that pertain to the discrete experience of objects, then I have no quarrel.


So yes, if I claim that what I am discretely experiencing does in fact fit my definition of cup and table, I am inducing that is so. I must then apply my discrete experience to applicably know whether my induction is true or false.

Addressing Kant: yes, there are aspects of a priori and a posteriori that are good; it is just that, as a whole, I find his logic and conclusions incorrect. Let's not get into Kant; just know that I did not find the terms logically consistent or useful enough to use, and felt they would lead people away from the concept I'm trying to convey.

Quoting Bob Ross
Distinctive awareness - Our discrete experiences themselves are things we know.
Contextual logical awareness - The construction of our discrete experiences into a logical set of rules and regulations.

To clarify, our discrete experiences themselves are things we know by application via reason.


I think it's necessary at this point that we define "reason". I've never used the word reason in the paper, and with good "reason" :grin: I tried to define as few concepts as I could, and tried to avoid introducing anything that I had not fully defined first. I'm not saying I succeeded, but that was the intent.

When you say we know our discrete experiences by reason, I've already stated why we know them. We know we discretely experience because it is a deduction that is not contradicted by reality. So, if I am to define reason according to the epistemology I've proposed, reason would be utilizing the distinctive and applicable contexts of deduction, induction, and pon. But that is all I have at this moment (I think).

However, I've noted that "reason" is an option. It is not a necessary condition of being human. There is nothing that requires a person to have the contexts of deduction, induction, and pon. One may of course act with inductions, deductions, and pon, but not actively have knowledge that that is what they are doing. You are a very rational person, likely educated and around like people. It may be difficult to conceive of people who do not utilize this context. I have to deal with individuals on a weekly basis who are not "rational" in the sense that I've defined.

So I have defined the utilization of reason as having a distinctive and applicable context of deduction, induction, and, let's go one further, logic. I have also claimed that there are people who do not hold this context, and in my life this is applicably known to be true. But that does not mean that is what you intend by reason. Could you give your own definition and outlook? Until we both agree on the definition, I feel we'll run into semantic issues.

Quoting Bob Ross
When I add these two potatoes together, what happens if one breaks in half? Do I have two potatoes at that point? No, so it turns out I wasn't able to add "these" two potatoes.

I feel like you aren't referring to mathematical addition, but combination.


What is addition in application, versus abstraction? If I add two potatoes together, my first thought is, "I'll put them in proximity." If you just mean counting, then that would be different. In that case, we still have to do something more to applicably know we can add those two potatoes. Very simply put, we need to applicably know if they are actually potatoes. If so, then we can add them. If one was really not a potato, then we wouldn't have applicably added two potatoes. At best, we can say we applicably added two identities. So let's go with that, as I think this is closer to your intention.

Let's say I have the abstraction that I can count two identities. This is distinctive knowledge. But to applicably know that I can, I have to actually count two identities. This of course is trivial, but this triviality is the fine point between distinctive and applicable knowledge. One is the formation of a set of definitions and rules. The second is its application.

The formulation of definitions and rules in our head may be sound to our minds. We distinctively know what they are. But do we know they will work when applied to a particular situation? Not until we actually apply the rules to the situation itself. The mistake of "generic" knowledge is believing that the construction of definitions and rules means that we know the outcome of their application, even if we have not attempted it before.

Quoting Bob Ross
Think of it this way: I can also "know" what cannot occur in the external world without ever empirically testing it based off of shapes--which encompass the external world as it is discrete experience. Can you fit a square of 5 X 5 inches in a circle of radius 0.5 inches? No.


When you state "know", try to divide it into distinctive versus applicable knowledge. Do you applicably know this, or distinctively know this? Because you are not dividing the knowledge as noted in the epistemology, I think you believe that I am claiming that we don't know math. We distinctively know math. We also have applicably known and used math in the world numerous times. There's no question that in the abstract we can't fit a square of 5X5 into a circle of radius .5 inches. But that does not mean we can applicably know that "that" particular square that we discretely experience cannot fit into "that" circle of radius .5 inches until we actively try, and find we can do so without contradiction.

Quoting Bob Ross
(In regards to space) I am referring to that which is discovered, projected, and conceivable--holistically all experience.


Again, is this distinctive knowledge, or applicable knowledge? Try to fit it into one of those categories. If you are unable to, then perhaps you can demonstrate that the distinction is broken, not useful, or lacking. But if you're not making that distinction, then you're not really discussing in terms of the epistemology, but in the terms of a completely different context that we have not really agreed on. To me, "holistic" means I'm applying my distinctive knowledge, not merely armchairing in my mind. In which case, this means you agree with me that we can applicably know certain distinctive contexts of space by the application of our very existence, but have not applicably known others.

Quoting Bob Ross
I think I'm following what you are saying now. We don't ever, under this epistemology, really state "historical facts" other than that which is deduced. Everything else is simply a hierarchy of inductions, of which we should always simply hold the most cogent one. The problem is that there's never a suspension of judgement: we also claim a belief towards whatever is most cogent. Again, when is it cogent enough for me to take action based off of it?


I'm not sure what you mean by "there's never a suspension of judgement". If I'm judging that one induction is more cogent than another, how am I suspending judgement? In regards to when something is cogent enough to take action on, that is a different question from the base epistemology. I supply what is more rational, and that is it. At its most simple, one should simply act based on the best applicable knowledge and inductions one has. That being said, I do have a much broader answer. It is just that your question is not a negation of the epistemology proposed, and I want to make sure we understand that first. If you would like this explored in the next post, let me know and I'll cover it.

Quoting Bob Ross
I don't think that really addresses the issue. I used the terminology "speculates it could" because you used it previously, and I was trying to expose that it is the same thing as possibility (in a colloquial sense). It is redundant: to say "it could" is to say "it is possible" (in the old sense of the term). And, no, "it could be the case" is not equivalent to "I believe it to be the case"


I think we're stuck on definitions here. Saying "it could" needs to be specified. While you might say "it could, because it is possible", you could just as easily say "it could, because I speculate", or "it could, because it's probable", etc. And yes, if you intend "I believe it to be the case" as an affirmation, then it is not equivalent to "it could be the case". The problem is that "it could be the case" is too ambiguous. In my mind, I added, "I believe it could be the case".

Quoting Bob Ross
If I claim "Jones could have 5 coins in his pocket", I am not stating that I believe he does have 5 coins in his pocket. I am saying nothing contradicts the idea that he has 5 coins in his pocket (e.g. the dimensions dictate otherwise, etc).


Explicitly, what you are stating is, "I believe Jones could have 5 coins in his pocket." But what is the reasoning of "could have" based on? A probability, possibility, speculation, or irrational induction? Pointing out that "could have" means one can't clearly assert whether Jones has 5 coins in his pocket is a criticism of the old epistemology, which does not have a hierarchy of inductions to clarify such situations. I have a clear breakdown of inductions. Since we are not using those here, we are not using my epistemology, but the old one (which has several more problems besides this one!)

My epistemology simply asks you to clarify what type of induction you are making by saying "could". I provided examples of how this epistemology could give you the answers. While using the epistemological breakdown of the induction of "could", is there some type of scenario you feel the breakdown is missing? The epistemology notes that "could" is simply ambiguous, and that a more rational assessment can be obtained by breaking the induction down into the hierarchy. Is this wrong?

Quoting Bob Ross
My reasoning for why "it could be the case" is abstract, but has nothing to do with reasons why he does have 5 coins in his pocket (or that I believe he does).


What do you mean by "abstract"? It seems to me this is just ignoring the hierarchy. Which, again, is not a slight on the hierarchy; it's just a rejection of its use. If we reject its use, we cannot criticize it for not being used. The hierarchy notes you need to specify which type of induction you are using. If you don't, then you're not using the epistemology, but some other type of system.

Quoting Bob Ross
There's a difference between claiming there is colloquially a possibility that something can occur and that you actually believe that it occurred. Does that make sense? The dilemma is the latter is non-existent in your epistemology. Smith, in the sense that he isn't claiming to believe there are 5 coins in Jones' pocket, is forced to say nothing at all.


Just to ensure the point is clear, both situations exist in the epistemology. I can induce that it is possible that Jones has 5 coins in his pocket based on reasons. Every induction could turn out correct, or incorrect. So I can state, "It's possible that Jones has 5 coins in his pocket, but I'm going to believe he does/does not". My belief that Jones does not have 5 coins in his pocket does not negate the fact that I still think it is possible that he could. I hope in this way I've used "could" unambiguously. If you are asserting an affirmative, that is not considering whether they "could". Considering a "could" and asserting an affirmative are two separate conclusions.

If your follow up question is, "Which affirmative should we choose when faced with the induction we've concluded is most cogent", I can address that next response for that will be a large topic.

Quoting Bob Ross
Potentiality is very clear (actually more clear, I would say, than possibility): that which is not contradicted in the abstract which allows that it could occur.


Perhaps it is clear to you, but for my purposes, it was not yet. That is not your fault, but mine. I think the problem here again is the ambiguity of "could occur". I can create abstract knowledge distinctively. And I can attempt to apply it to reality. Essentially, I'm making an induction that my abstraction can be applied in X situation without contradiction.

An induction, by definition, is uncertain. For potentiality to be meaningful, we also have to consider its negation. If something did not have potential, this translates to "distinctive knowledge that cannot be attempted to be applied to reality". This seems to me to be an inapplicable speculation. Which means that any induction that could be attempted to be applied would be considered a "potential", even irrational inductions.

Basically, it's a shorthand identity that wraps up probability, possibility, speculation, and irrational inductions. It ignores the hierarchy, apart from inapplicable speculations. And of course, this leads to problems, because it's essentially ignoring the valuable differences between the different types of inductions. This is of course the problem with the old knowledge. Without a hierarchy of inductions, you run into massive problems in epistemology when trying to analyze inductions. Again, any criticism of the epistemology you come up with while using the word "potential" is because you're effectively ignoring the epistemological hierarchy, and really criticizing what happens when you don't use that hierarchy.
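To make the contrast concrete, the hierarchy can be pictured as a simple ranking. (A minimal illustrative sketch only; the ordering from most to least cogent is my summary of the papers, and the function name is mine:)

```python
# Illustrative only: the hierarchy of inductions, most cogent first.
# The ordering is a summary of the papers, not a quotation from them.
HIERARCHY = ["probability", "possibility", "speculation", "irrational induction"]

def more_cogent(a, b):
    """Return whichever of two induction types ranks higher in the hierarchy."""
    return min(a, b, key=HIERARCHY.index)

print(more_cogent("speculation", "probability"))  # probability
```

Saying only "it could be the case" collapses all four of these ranks into one word, which is exactly the ambiguity the hierarchy is meant to remove.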

Quoting Bob Ross
He is not claiming speculation that Jones has five coins in his pocket: he is claiming that Jones could potentially have five coins in his pocket.


Exactly. So Smith is claiming, "I have an induction, but I'm not going to use the hierarchy to break down what type of induction I'm using." Again, this is not a criticism of the epistemology; it is simply not using the epistemology, then trying to point out that the epistemology cannot handle a case in which it is not used.

Really fantastic and deep points Bob!


Bob Ross March 13, 2022 at 20:38 #666505
Hello @Philosophim,

I am glad you dived into "applicable" vs "distinctive" knowledge, because I think I was fundamentally misunderstanding your epistemic claims. I was never under the impression anything was related to a "will" in your epistemology, albeit I understand the general relation to the principle of noncontradiction.

I think we have finally come to a point where our fundamental differences (which we previously disregarded) are no longer so trivial. Therefore, as you also stated, it is probably time to dive into "reason", which inevitably brings us back to the general distinction between our fundamentals. Previously, I understood the distinction between our fundamentals like so (as an over-simplification):

Yours: object <- discrete experiences -> subject
Mine: object <- discrete experiences <- subject

However, "subject" was, and still is, a term with vast interpretations, therefore it is more accurate, as of now, to demonstrate mine as:

object <- discrete experiences <- reason -> subject

However, now you seem to be invoking "will", which adds some extra consideration on my end to my interpretation of your fundamental (and I am invoking "reason" which probably is confusing you as well). When you say:

All I noted in the beginning was that there was a will, and that reality sometimes went along with that will, and sometimes contradicted that will.


I didn't understand this from your essays (unless, and this is completely plausible, I am forgetting): the fundamental was "discrete experience", which was postulated on the principle of noncontradiction. A "will", in my head, has a motive, which is not implied at all (to me) by "discrete experience". I think we are actually starting to converge (ever so slowly), as I would claim that there being "wills" (as in plural) is in relation to reason. I think I would need a bit more explication of your idea of "will" to properly address it.

The only reason we have a definition of reality, is that there are some things that go against our will.

Reality is the totality of existence that is in accordance with our will, and contrary to our will.


I think you aren't using "reality" synonymously throughout your post. The first statement seems to contradict the second. You first claim that we can only define "reality" as that which goes against our "will", yet then, in the second, claim that "reality" is both what goes against and what aligns with our "will"--I don't see how these statements are reconcilable. Your first statement here is only correct if we are talking about the distinction between "object" and "subject", generally speaking, not "reality" in its entirety. The entirety of "reality" could be aligned with all of my "will" and still be defined as "reality". I sort of get the notion that you may be using "will" synonymously with "principle of noncontradiction"--I don't think they are the same.

Because there are things we can do in our own mind that go against our will. Let's say I imagine the word elephant, and say, "I'm not going to think of the word elephant." Despite what I want, it ends up happening that I can't stop thinking of the word.


I was misunderstanding you: distinctive knowledge is what you are claiming is given because it is simply discrete experience, whereas applicable knowledge could be within the mind or the external world.

First and foremost, I need to define "reason" for you, because it is probably somewhat vague as it currently stands. Reason is "the process of concluding". This is not synonymous with "rationality", which is a subjective and inter-subjective term pertaining to what one or multiple subjects determine to be the most logical positions to hold (or what they deem the most logical process to follow in terms of derivation): "rationality" is dependent on "reason" as its fundamental. "Reason" is simply that ever-continuing process of conclusions, which is the bedrock of all derivation. 1 + 1 = 3 (without refurbishing the underlying meaning) is an exposition of "reason", albeit not determined to be "rational". If, in that moment, the subject legitimately concluded 1 + 1 = 3, then "reason" was thereby invoked. As a matter of fact, "reason" is invoked in everything, and a careful recursive examination of reason by reason can expose the general necessary forms of that reason: it abides by certain inevitable rules. To be brief: the principle of noncontradiction, space, time, differentiation, and causality (and debatably the principle of sufficient reason). The first and foremost is the principle of noncontradiction, which is utilized to even begin the discovery of the others. To claim that I discretely experience, I concluded by means of pon. This was "reason" and, depending on how "rationality" is thereafter inter-subjectively defined, may have been "rational". There's definitely more to be said, but I'll leave it there for now.

Distinctive knowledge comes about by the realization that what we discretely experience, the act itself, is known.


I think this is false. The act itself is not just known (as in given), it is determined by means of recursive analysis of reason. You and I determined that we discretely experience. And, if I may be so bold, the act of discretely experiencing does not precede reason: it becomes a logical necessity of reason (i.e. reason determines it must be discretely experiencing multiplicity to even determine in the first place--but this is all dependent on reason). When I say logical, I am not referring to "rationally" determined logical systems, merely, in this case, the principle of noncontradiction (I cannot hold without contradiction that the aforementioned is false).

Basically, when your distinctive knowledge creates a statement that the act of the discrete experience alone cannot confirm, you need to apply it. I can discretely experience an abstract set of rules and logical conclusions. But if those abstract rules pertain to something which cannot be confirmed by my current discrete experience, I have to apply them.


I think, as I now understand your epistemology, I simply reject "distinctive knowledge" in a literal sense (everything is always applied), but am perfectly fine with it as a meaningful distinction for better understanding for the reader (or as a subset of applied knowledge). Anything we ever do is a conclusion, to some degree or another, which utilizes reason, and any conclusion pertaining to reason or discrete experience is application.


So, if I construct a system of logic, then claim, "X functions like this," to know this to be true, I must deduce it and not be contradicted by reality.


The only reason this is true is because you have realized that it would be a contradiction to hold that the contents of the thoughts of a mind can suffice pertaining to what the mind deems objects. This is all from reason and, depending on what is considered rationality, rational.

Once it is formed distinctively, it must be applied, because I cannot deduce my conclusion about the world from the act of discretely experiencing alone. I can discretely experience a pink elephant, but if I claim the elephant's backside is purple, until I discretely experience the elephant's backside, I cannot claim to applicably know its backside is purple. This is all in the mind, which is why I do not state applicable knowledge is "the external world".


I think I understand, and agree, with what you are saying--with the consideration that they are both applied. We can define a meaningful distinction between "distinctive" (that which is discrete experience) and "applicable" (that which isn't), but only if we were able to reason our way into the definitions. No matter how swift, I conclude that I just imagined an elephant--I am not synonymous with the discrete experience of an elephant (I am the reason).

When you say we know our discrete experiences by reason, I've already stated why we know them.


We know discrete experience by reason: the principle of noncontradiction--therefrom space & time, then differentiation, then causality.

We know we discretely experience because it is a deduction that is not contradicted by reality.


You're using reason here. You applied this to then claim we have distinctive knowledge that is not applied, but there was never anything that wasn't applied. In other words, you, by application, determined some concepts to be unapplied: given. That which you determined was given, was not given to you, it was obtained by you via application. Nothing is given to you without reason.

However, I've noted that "reason" is an option. It is not a necessary condition of being human.


For me, reason is a necessary condition of being human. Not "rationality", but reason.

There is nothing that requires a person to have the contexts of deduction, induction, and pon


We can most definitely get into this further, but for now I will just state that pon is the fundamental of everything: everyone uses it necessarily.

You are a very rational person, likely educated and around like people. It may be difficult to conceive of people who do not utilize this context. I have to deal with an individual on a weekly basis who is not "rational" in the sense that I've defined.


Thank you! I appreciate that, and I can most definitely tell you are highly rational and well educated as well! To be clear, I am not disagreeing with you on how people are not all rational: I am also around many people that shock me at how irrational they are. I am making a distinction between "reason" and "rationality" to get more at what is fundamental for everything else (reason) and what is built off of that as the best course of action (rationality). One is learned (the latter), the other is innate (reason). It may be confusing because being "reasonable" and "rational" are typically colloquially utilized the same way: but I am not using them that way.

So I have defined the utilization of reason as having a distinctive and applicable context of deduction, induction, and, let's go one further, logic. I have also claimed that there are people who do not hold this context, and in my life, this is applicably known to be true. But, that does not mean that is what you intend by reason. Could you give your own definition and outlook? Until we both agree on the definition, I feel we'll run into semantical issues.


I agree, I think there is much to discuss. I think that, in terms of logic as derived from rationality (such as classical logic)(which may require the subject to learn it), you are absolutely right. But in terms of logic in the sense of pon, I think everyone necessarily has it. Now I know it's obvious that people hold contradictions (in colloquial speech), but that isn't what pon is at a more fundamental level (I would say).

What is addition in application, versus abstraction?


I find nothing wrong with your potato analogy anymore, I think I understand what you are saying. The application is the abstraction, which, in your terms, is not "distinctive" knowledge--so we agree on that (I think).

We distinctively know math.


I think we applicably know math. Reason derives what is mathematical and what doesn't abide by it. Solving x = y + 1 for y is application, not distinction. Even the understanding that there's one distinct thing and another one is application (of pon). What exactly is purely distinctive about this? Of course, we can applicably know that there's discrete experience and that we could label discrete experience as "distinctive knowledge", but all that is application. There's never a point at which we rest and just simply know something without application. Is there?

In terms of space, I am not completely against the idea of labeling the holistic space as distinctive, but that was also applied. To know that space is apodictically true is application of reason inwardly on itself in an analysis of its own forms of manifestation. I could rightfully distinguish apodictically true forms of reason as "distinctive knowledge" and that which is derived from them as "applicable knowledge", which I think (from my perspective) is what you are essentially doing. But my point is that they are all applied: when do I ever not apply anything?

In regards to when is something cogent enough to take action, that is a different question from the base epistemology. I supply what is more rational, and that is it.


My question essentially pertained to when something is considered a "historical fact", considering most historical facts are speculations, when we are simply determining which induction is most cogent. I think you answer it here: it seems that you think that it isn't a base concern of the epistemology. I think this is a major concern people will have with it. Everyone is so used to our current scientific, historical, etc. institutions with their thresholds of when something is validated that I envision this eroding pretty much society's fundamental understanding of how knowledge works. It isn't an issue that it erodes the fundamentals of "knowledge" hitherto, but not addressing it is. You don't have to address it now if you don't want to, but feel free to if you want.


Explicitly, what you are stating is, "I believe Jones could have 5 coins in his pocket." But what is the reasoning of "could have" based on? A probability, possibility, speculation, or irrational induction?


The point is that it isn't based off of any of them. And it isn't simply using a different epistemology, it is that your epistemology completely lacks the category. The way I see it, "could have" was colloquially "possibility". Now "possibility" is about experiencing it before, which is only half of what possibility used to mean. The other "could have" was not that the person had seen it before, it was that it had potential to occur because they couldn't outright contradict it. This is still a meaningful thing to say in speech: the only affirmation being the affirmation that one cannot contradict the idea outright. However, I think I may be understanding what you are saying now: potentiality isn't really inducing an affirmation. It is more like "I cannot contradict the idea, therefore it may be possible". Maybe it is the possibility of possibility? But that wouldn't really make any sense (in your terminology). For example, mathematics. I could abstractly determine that I could fit that particular 5 foot brick into that particular 100 x 100 foot room, but, as you noted, until I attempt it I won't know. What I am trying to get at is, if I haven't experienced it before then it is not possible. If I have no denominator, then it isn't probable. If I can't contradict it, then it is not irrational. I guess it could be called a speculation, but I am not saying that I can fit the brick in the room, just that I can't contradict the idea that it could. In other words, I am thinking of "speculation" as "that brick will fit into that room" (given it is possible, probable, or irrational), but what about "I can't contradict the idea that that brick will fit into that room". Are they the same? Both speculations?


"There's a difference between claiming there is colloquially a possibility that something can occur and that you actually believe that it occurred." -- Bob

Just to ensure the point is clear, both situations exist in the epistemology.


I'm not sure if they both do. You do have "something can occur" in the sense of experienced before, but is "something can occur due to no contradictions" simply a speculation without affirmation?

If something did not have potential, this translates to, "Distinctive knowledge that cannot be attempted to be applied to reality." This seems to me to be an inapplicable speculation. Which means that any induction that could attempt to be applied would be considered a "potential", even irrational inductions.


As I have proposed it, inapplicable speculations do not exist: they have been transformed into irrational inductions. Speculations entail that it is applicable. Therefore, this is not an appropriate antonym to potentiality. The antonym is "that which is contradicted".

Exactly. So Jones is claiming, "I have an induction but I'm not going to use the hierarchy to break down what type of induction I'm using".


Leaving the individual voiceless in a perfectly valid context is not purposely not using the epistemology: it is the absence of a meaningful distinction that is causing the issue. There is a meaningful distinction, as you noted, between asserting affirmation, and simply asserting that it isn't contradicted. Or is that simply not within the bounds of your epistemology? Or is it also a speculation? I am having a hard time accurately defining it within your terminology.

I look forward to hearing from you,
Bob
Philosophim March 18, 2022 at 14:49 #668910
Bob, I admit, this tripped me up at first. I had to think a while on your post, to try to get at what felt like it was missing. Maybe I'm generalizing too broadly the difference between distinctive and applicable, and need to narrow down more. Let's see if we can figure this out.

Quoting Bob Ross
I was never under the impression anything was related to a "will" in your epistemology, albeit I understand the general relation to the principle of noncontradiction.


Not a worry! It's in the first paragraph of the entire paper, which you read one time many months ago at this point.

Quoting Bob Ross
I think I would need a bit more explication into your idea of "will" to properly address it.

The only reason we have a definition of reality, is that there are some things that go against our will.

Reality is the totality of existence that is in accordance with our will, and contrary to our will.

I think you aren't using "reality" synonymously throughout your post. The first statement seems to contradict the second. You first claim that we only can define "reality" as that which goes against our "will", yet then, in the second, claim that "reality" is both what goes against and what aligns with our "will"--I don't see how these are reconcilable statements


Certainly, that was poor language on my part. What I meant to convey was that the only reason we can have a concept of reality as something separate from ourselves is because there are things that go against our will. If everything went in accordance with our will, there would be no need for the term "reality". There would just be whatever we willed to happen.

So no, I am not saying reality is what contradicts our will. Just noting that because everything we will does not come to pass, we realize there is something besides our will. No, I define reality as what is. Sometimes "what is" is when our will happens. Sometimes "what is" is when it does not happen.

Quoting Bob Ross
A "will", in my head, has a motive, which is not implied at all (to me) with "discrete experience"


A "will", like everything really, is a discrete experience. At a very basic level, I think we would both agree it is an intent of action. I will to wave my hand, and reality does not contradict that will. I will to fly by my mind alone, and reality contradicts this.

Quoting Bob Ross
I was misunderstanding you: distinctive knowledge is what you are claiming is given because it is simply discrete experience, whereas applicable could be within the mind or the external world


Yes, this is it. To clarify, distinctive knowledge is the knowledge of the discrete experience itself. Applicable knowledge is when we claim the distinctive knowledge we have applies to something besides its immediate self, and its immediate self is not enough to state with rational certainty that it is not contradicted by reality.

Quoting Bob Ross
"Reason" is simply that ever continuing process of conclusions, which is the bedrock of all derivation. 1 + 1 = 3 (without refurbishing the underlying meaning) is an exposition of "reason", albeit not determined to be "rational". If, in that moment, the subject legitimately concluded 1 + 1 = 3, then thereby "reason" was invoked.


I believe I understand a bit. In that case, would every living thing reason? At the most fundamental level, an organism must decide whether X is food, or not food. I'm not saying it's advanced reason, but reason at its most fundamental?

Quoting Bob Ross
(Philosophim) "Distinctive knowledge comes about by the realization that what we discretely experience, the act itself, is known."

I think this is false. The act itself is not just known (as in given), it is determined by means of recursive analysis of reason. You and I determined that we discretely experience.


Correct in a way. When I introduced the idea of discrete experience to you, you had to distinctively know what I meant first. Then, you tried to show it could be contradicted through application. I created the abstract with the conclusion that it could not be contradicted. But if it is ever contradicted in application, while we will still have the distinctive knowledge of "distinctive knowledge", we would applicably know that it was contradicted in its application to reality, not contradicted distinctively.

The line however, is incredibly fine between distinctive, and applicable. More on this later.

Quoting Bob Ross
And, if I may be so bold, the act of discretely experiencing does not precede reason: it becomes a logical necessity of reason (i.e. reason determines it must be discretely experiencing multiplicity to even determine in the first place--but this is all dependent on reason).


Agreed based on my understanding of your definition of reason. I think this is semantical however. By being a logical necessity for reason to exist, this is similar to what I meant by, "Before reason can form".

Quoting Bob Ross
Anything we ever do is concluded, to some degree or another, which utilizes reason, and any conclusion pertaining to reason or discrete experience is application.


If you mean "conclusion pertaining to application" as "application", yes, I think this fits. Do we need application to distinctively know things? No, distinctive knowledge is what we use to find if we can applicably know it. We can reason using distinctive knowledge to create a set of concepts. But distinctively knowing concepts does not mean we can know them in application.

Quoting Bob Ross
The only reason this is true is because you have realized that it would be a contradiction to hold that the contents of the thoughts of a mind can suffice pertaining to what the mind deems objects. This is all from reason and, depending on what is considered rationality, rational.


No disagreement here either. But it is an abstract invention. I have simply shown that to claim I know I do not discretely experience is irrational. That does not mean I could suddenly lose the capability to discretely experience 2 years from now for some time due to something like a disease or death. In such a case, the application that I discretely experience, would be contradicted by reality.

Quoting Bob Ross
We can define a meaningful distinction between "distinctive" (that which is discrete experience) and "applicable" (that which isn't),


Almost, but not quite. A discrete experience is anything that is separate from something else in your viewpoint. That is any identity, and essentially every "thing" that you experience. Distinctive knowledge and applicable knowledge are both discrete experiences as is any "thing". It is the type of knowledge that we are discretely experiencing where the difference comes in.

Quoting Bob Ross
No matter how swift, I conclude that I just imagined an elephant--I am not synonymous with the discrete experience of an elephant (I am the reason).


Considering you have stated that discrete experience is a logically necessary part of reason, I think this follows. I stated "I am the discrete experiencer," and you have stated, "I am the reasoner". If my understanding of reason is something that every being would have, then I can agree.

Quoting Bob Ross
We know we discretely experience because it is a deduction that is not contradicted by reality.

You're using reason here. You applied this to then claim we have distinctive knowledge that is not applied, but there was never anything that wasn't applied. In other words, you, by application, determined some concepts to be unapplied: given. That which you determined was given, was not given to you, it was obtained by you via application. Nothing is given to you without reason.


Yes, I think you have it! But to clarify again, there is a separation between the distinctive obtainment of knowledge, and the applicable obtainment of knowledge. One is the abstract concept and logical rules. The other is the application of those rules to something without contradiction.

Quoting Bob Ross
However, I've noted that "reason" is an option. It is not a necessary condition of being human.

For me, reason is a necessary condition of being human. Not "rationality", but reason.


Yes, with your definition as I understand it, I agree. But, I will add again based on your definition that reason at its most fundamental is a necessary condition for any living being, not confined to humanity.

Quoting Bob Ross
I think we applicably know math. Reason derives what is mathematical and what doesn't abide by it. Solving x = y + 1 for y is application, not distinction. Even the understanding that there's one distinct thing and another one is application (of pon). What exactly is purely distinctive about this? Of course, we can applicably know that there's discrete experience and that we could label discrete experience as "distinctive knowledge", but all that is application. There's never a point at which we rest and just simply know something without application. Is there?


There is never a point that you applicably know math without application. Distinctive and applicable knowledge are simply subdivisions of "deductions that do not lead to contradiction by reality." We can applicably know math, and distinctively know math. Keeping it simple, I can distinctively know that 1 is an identity. Then I encounter an identity, and say, "that is 1 identity". But I could just distinctively know that 1+1=2 purely as a set of symbols. If later I see that set of symbols and state, "Ah yes, that is 1+1=2", then I applicably know that math if my claim is not contradicted.

Perhaps a better way to break down the distinction is by what is implied by our discrete experiences. Distinctive is simply knowing we have every logical reason to believe that we are experiencing the discrete experience itself. If however, the discrete experience implies something beyond the act of having the experience itself, this is when application occurs.

Of course, how do we have the knowledge that what we are discretely experiencing is what we are discretely experiencing? At first, it is because we claim any alternative is a contradiction. So is this an application? Or is this what is needed before one can apply? Essentially, distinctive knowledge is the rational conclusion that what we experience, is what we experience. And we conclude that because logically, any other alternative is inapplicable. It is when we apply this distinctive knowledge to something else, for example "I distinctively know 1 banana + 1 banana = 2 bananas, and I'm going to apply it to those two bananas over there," that you can see this dividing line.

Quoting Bob Ross
when do I ever not apply anything?


If I conclude that I discretely experience, it is not by application to something beyond itself, because it is not a question of whether it can be contradicted by reality. It is a logical conclusion. And logic, on its own, is a set of rules we construct. If we apply it and it's not contradicted, then we applicably know it. But that doesn't deny the distinctive knowledge of it before the application. So we are not applying discrete experiences when we are recognizing that we know we have discrete experiences in themselves. When we are trying to assert more than the experience itself, such as applying the experience to another that we say results in X, we are applying.

A question for you Bob, is can you see this dividing line? Do you think there are better words for it?
Do you think there is a better way to explain it?

Quoting Bob Ross
My question essentially pertained to when something is considered a "historical fact", considering most historical facts are speculations, when we are simply determining which induction is most cogent. I think you answer it here: it seems that you think that it isn't a base concern of the epistemology. I think this is a major concern people will have with it. Everyone is so used to our current scientific, historical, etc. institutions with their thresholds of when something is validated that I envision this eroding pretty much society's fundamental understanding of how knowledge works. It isn't an issue that it erodes the fundamentals of "knowledge" hitherto, but not addressing it is. You don't have to address it now if you don't want to, but feel free to if you want.


People used to think the Earth was the center of the universe. From their perspective, it was understandable. Some people didn't like it when it was pointed out that the Sun was the center. "How could that be possible? It's obvious the Sun circles us!" People's discomfort with something new isn't an argument against proposing something new.

I think the emotional problem you are noting is that people will be uncomfortable with the idea that many of the things we purport to know are inductions. Given the idea that inductions have been seen as "irrational", I can see this dislike. But what I am trying to show is that certain inductions are more rational than others. Inductions can be a rational tool of the mind when it reaches limitations. I originally had a few pages added to the induction hierarchy demonstrating when each type of induction was actually invaluable, even irrational inductions. I can go into that, but I feel like I should address these other points first.

Quoting Bob Ross
Explicitly, what you are stating is, "I believe Jones could have 5 coins in his pocket." But what is the reasoning of "could have" based on? A probability, possibility, speculation, or irrational induction?

The point is that it isn't based off of any of them. And it isn't simply using a different epistemology, it is that your epistemology completely lacks the category.


I believe it does. What you term the "colloquial" use of possible is what I divided into possible and plausible (speculation, as we've been calling it now).

Quoting Bob Ross
However, I think I may be understanding what you are saying now: potentiality isn't really inducing an affirmation. It is more like "I cannot contradict the idea, therefore it may be possible".


What I'm claiming is that potentiality is simply an induction without the distinction of the hierarchy. An induction is not inducing affirmation. An induction is always a prediction, and we can never know if a prediction is correct until we apply that prediction. The hierarchy recognizes this, but also recognizes that some inductions are more rational than others. Without the hierarchy, how could you tell which induction is more useful, Bob? How can we tell if something has actual potential if there is no subdivision of inductions? Perhaps this will help us resolve the issue of potentiality, and why you believe it to be more useful.

Quoting Bob Ross
"There's a difference between claiming there is colloquially a possibility that something can occur and that you actually believe that it occurred." -- Bob

Just to ensure the point is clear, both situations exist in the epistemology.

I'm not sure if they both do. You do have "something can occur" in the sense of experienced before, but is "something can occur due to no contradictions" simply a speculation without affirmation?


Let's really break down what you mean by this sentence. "Something can occur due to no contradictions". I think this lacks clarity, and a lack of clarity is not something we should consider. What type of contradictions is this referencing? Is it referencing contradictions of an abstract logic? Or is it the contradiction of reality against my will?

For example, I can construct a set of abstract rules that work by allowing an object to appear at two places at once. I distinctively know this. In my set of rules of the discrete experiences themselves, there is no contradiction if a thing can be in two places at once. In your terms, this would be potential. In my terms, this would be an abstraction, or a context of distinctive knowledge.

Now, if we apply that set to reality, we find that an object cannot be in two places at once, no matter how much we try. This is a contradiction of the context when applied to reality, but not a contradiction within the context itself. Just because the person cannot prove that two things can exist in one spot, it does not mean their entire system of logic based on two things existing at once suddenly had contradictions within it. If his assumption was true, the logic would hold. But something being logical within the abstract does not necessarily hold true when applied.

To put it in terms of logic:
A -> B
A exists.
Therefore, B.

But what if A does not exist? A -> B is a distinctive knowledge, a logic. But it is not applied to anything in particular. If I say, "If Santa exists, it will rain" for A and B, I have to apply this logic and show that Santa exists for the logic to be true in application. If I find there is no Santa, I can still distinctively know the logical statement I just made; I just cannot know that it applies without contradiction.

Quoting Bob Ross
As I have proposed it, inapplicable speculations do not exist: they have been transformed into irrational inductions. Speculations entail that it is applicable. Therefore, this is not an appropriate antonym to potentiality. The antonym is "that which is contradicted".


Again, contradicted based on one's own distinctive context, or contradicted based on application? It seemed to me potentiality was an induction. Is that induction free of contradictions distinctively, or applicably? An irrational induction in this case, is a distinctive contradiction, not an applicable contradiction. An induction is not an assertion of certainty. Even irrational inductions have the potential of being contradicted in application. They are simply the least rational induction a person can make distinctively, not an assertion of applicable knowledge.

Quoting Bob Ross
Exactly. So Jones is claiming, "I have an induction but I'm not going to use the hierarchy to break down what type of induction I'm using".

Leaving the individual voiceless in a perfectly valid context is not purposely not using the epistemology: it is the absence of a meaningful distinction that is causing the issue.


You can have a perfectly valid context that does not use the epistemology. If you don't want to use the hierarchy in your distinctive context, you don't have to. I'm just trying to point out it is more beneficial to.

Quoting Bob Ross
There is a meaningful distinction, as you noted, between asserting affirmation, and simply asserting that it isn't contradicted.


I think this is where you've missed what I've been stating. There are distinctive and applicable views. You can be contradicted distinctively, and you can be contradicted applicably. They aren't the same thing. When you use "contradiction" without clarifying what type of contradiction, distinctive or applicable, then you aren't using the epistemology.

That was one heck of a write up! Fantastic points which made me really dig deep and make sure I was being consistent, and conveying my intentions correctly. Let me know what you think Bob.
Bob Ross March 19, 2022 at 18:23 #669493
@Philosophim,

Bob, I admit, this tripped me up at first. I had to think a while on your post, to try to get to what felt like was missing.


I am glad that my responses are thought-provoking (and I assure you that I find yours equally so)! I would hate for our conversation to not be fruitful for us both.

I think I am still not quite able to pinpoint what you are conveying with "will" or the dividing line for distinctive vs applicable, so let me try to explain my position on the topic.

What I meant to convey was the only reason we can have a concept of reality as something separate from ourselves, is because there are things that go against our will. If everything went in accordance to our will, there would be no need for the term "reality".


If I am understanding you correctly, I think you are claiming that something being a member of "reality" must have at least gone against our will once before, which means that something that is a part of "reality" can go in accordance with our will, but as long as it has gone against our will once before then it is a member of "reality". If that is not what you are claiming, then I am not quite following. Because you then stated:

No, I define reality as what is. Sometimes "what is" is when our will happens. Sometimes "what is" is when it does not happen.


This makes me believe the aforementioned is what you are claiming because, otherwise, "what is" that is in accordance with our will would not be a member of "reality" (but, rather, a member of ourselves). I'm thinking you are claiming that it is a member of "reality" regardless of whether it was in accordance with our will as long as it went against our will previously (at least once): am I correct here?

Moreover, I am also trying to hone in on what you mean by "will". When you say:

I will to wave my hand, and reality does not contradict that will. I will to fly by my mind alone, and reality contradicts this.


This makes me think you may be using "will" as one shared will between the mind and the body, but, given that the body doesn't have to abide by the will of the mind, I don't think this is what you are saying. I think you are trying to keep this a bit more high level, conceptually, than I am.

I make a distinction between the body's will and reason's will. The latter is that which manifests in the head in relation to reason (obtained by recursively analyzing reason on its previous manifestations), and the former is the will of the body extrapolated from its actions by reason. I think it was Nietzsche who first exposed me to a preposterous claim pertaining to "free will" being like that of a man who awoke in the morning, stepped outside, and "willed" the sun to rise. Seeing it rise, he determined it was from his will. Taking that example seriously, I honestly don't see how we can know whether any given object's actions (the body's included) were from reason's will or whether it was simply assumed due to continual repetition. If the sun always rose every time I freely willed it to, would I thereby know that the sun actually abides by my will? Or is it that the action happened to correlate purely consistently with my will? Likewise, imagine two people who act in accordance with the exact same intentions, without any deviation. Do we really "know" whether they act in accordance with a shared will or simply two separate identical wills? In terms of inductions (and cogency therein), my best inductive belief is to side with the repetition (but I definitely still need to think about it more). So, on a daily basis, I believe that when I think "I should lift my right arm" and my arm actually lifts, that it was from my will (reason's will) even though I do not, in fact, "know" that. Likewise, there are actually, as you are well aware, incidents where my body doesn't abide at all by what I willed. Therefore, I separate them as two wills (body and reason) while holding the belief that most of the time, whether by repetitive happenstance or by actual accordance, my body's will aligns with my own (reason's).

Now, I think what you are getting at (correct me if I am wrong) is that I won't conceptually separate something else from myself if it abides by whatever I will (regardless of whether it is happenstance or in actual accordance). Honestly I am not convinced of this either. Let's take the example of the two people acting in the exact same manner (intentions) (without deviation): if I were to claim that they are actually of the same will, then I would still identify them as separate objects with a shared subjective will (i.e. "those two objects are of the same will"--I would not refer to them both as one object). Likewise, if I were not analyzing two wills separate from me (to determine, like in the previous example, whether they are one or two wills), but, rather, analyzing this in accordance with mine, I would still distinguish the objects regardless of their connection to my will. If my intentions always align with my body's and some other body's, then, at best, I would connect them to my will, but as two objects connected to one will. I don't see how the separation really, at a physical-objective level, dissipates. I only see that, at best, it dissipates at the level of the subject (or, to be more specific, reason). If there were two bodies that abided repetitively by my will, my wanting to investigate the inner workings of those bodies would be abided by both, but I would still be acknowledging thereby (in wanting an investigation) that there's a separation of parts within the bodies (and the separation of me from those bodies).

Just to really hone in on this. Imagine I am walking on a concrete sidewalk and will that it become concrete. Is it now concrete because it already was or because I willed it to be? It was going to be concrete either way, but how could I "know" or "not know" that it didn't abide by my will? I think the same is true of bodily actions. I think "lift arm", arm lifts. Was the arm going to lift anyways or is it lifting because I willed it? How do I "know" or "not know" whether it did abide by my will? I separate them as two wills because I do think there is evidence that the body doesn't abide by my will, but can coincide with it most of the time. But the real question is whether or not I would be able to claim either way if the body always, without any deviation, coincided with my wants. If every time I command "lift the arm", the arm lifts, then would I claim that they are of the same will? Either way, the separation of arm and thoughts (object and reason) would be intact, wouldn't it?

Maybe, on the contrary, you are referring to everything having my reason? Everything concluding thoughts as if they are from me? I am presuming what you mean is that my will, from my reason, would always coincide with what happens in what we call "reality".

I think this could also derail into omnipotence dilemmas as well, but I don't think that is the main focus of this discussion. But there's a level to this where the logical contradiction of "I want a square circle" is what I think you are referring to as "reality" going against my will. However, I don't think that reason's will (as manifestations in thoughts) has any provable bearing on the relation between objects. Again, how would I distinguish that which is repetitive coincidence from that which actually abides by my will? Reason is the aboutness, which pertains to conclusions about the objects and their relations. I can conclude a "belief" that my arm, when it lifted, actually abided by my will by accepting repetition as a more cogent belief than coincidence, but that's all just reason making connections pertaining to objects, not it actually doing anything in the physical world. I observe that my arm moved, and I now try to analyze the connections to gather an explanation of that physical action. Does that make any sense? I'm not sure if I am explaining this very well.

I believe I understand a bit. In that case, would every living thing reason? At the most fundamental level, an organism must decide whether X is food, or not food. I'm not saying it's advanced reason, but reason at its most fundamental?


Although I haven't pondered it nearly enough yet, I think this is fair and plausible. If an animal (or even plant, maybe) decides whether X is Y, or is not, then it thereby used the principle of noncontradiction--which would entail some level of reason, I would presume. This would get into solipsism though, as I would hold that reason can never verify other reason, only obtain an inductive belief that there are other "reasons" by means of analyzing its body's actions in relation to another body's actions (to see if it makes sense that it has reason). Does that make sense? Just like how there is no distinction between repetitive coincidence and actual accordance, I cannot distinguish the two in other people or animals or plants either. I just believe it is the case.

When I introduced the idea of discrete experience to you, you had to distinctively know what I meant first... But if it is ever contradicted in application, while we will still have the distinctive knowledge of "distinctive knowledge", we would applicably know that it was contradicted in its application to reality, not contradicted distinctively.


I am not entirely sure what you mean here. I conclude that I must have differentiation, that distinctiveness, before I could even conclude anything in the first place. If the converse were concluded (legitimately), the closest thing I can conceive of would be oneness. If we concluded the converse with respect to how even reason itself operates now, which is not unified into oneness, then we would simply be mistaken (probably not having realized that the very thinking process requires differentiation). Is that what you mean? Distinctive knowledge would still be there even if we concluded it wasn't there, because we are simply mistaken?

Do we need application to distinctively know things? No, distinctive knowledge is what we use to find if we can applicably know it.


I think I am following, and I agree. But that was also concluded. I can distinctively think that I should envision an elephant, but it turns out I envision a lion instead. But by application, I know that my reason manifested a thought which introduced a will to envision an elephant, and I know that the object that appeared in my mind was not an elephant. Even to claim that I initially had to use that distinctive knowledge of wanting to envision an elephant requires, thereafter, application (consideration of that previous manifestation by reason). I don't hold that what is "in the mind" is equivalent to reason. I don't think that what "I" envision (or imagine) is a part of reason; it is, just like any other object, the about, and what is concluding about that envisioning is reason. I think what you are trying to get at is that I hold this on principle of the fact that I don't control those images (or that they have gone against my will at least once previously). I think that even if what I want to will (in reference to visions) always repetitively comes to pass, there's still a distinction between the object (the about) and what is asserting the aboutness.

Distinctive knowledge and applicable knowledge are both discrete experiences as is any "thing".


I apologize, I should have used my words more carefully. I think that you are making a meaningful distinction, but it is still, in my eyes, all application. I think that you are saying I distinctively know the words you write, but don't applicably know the contents of those words until I apply them to "reality" without contradiction: is that right?

But I could just distinctively know that 1+1=2 purely as a set of symbols. If later I see that set of symbols and state, "Ah yes, that is 1+1=2", then I applicably know that math if my claim is not contradicted.


I understand, and it is a meaningful distinction. But to claim to know a set of symbols purely as distinctive knowledge is an application of reason. I have no problem with this distinction you are making though. I would also say that the abstract consideration of the operation of addition is applicable knowledge (in your terms) (as is identifying the shapes again, like you said), but the recognition of the shapes of "1" "+" "1" "=" "2" is distinctive knowledge: is that correct?

The problem I have is that it seems as though you are claiming distinctive knowledge is not "application to reality without contradiction". How is it not? How did I not apply the recognition of symbols to reality? "Reality" is just the principle of noncontradiction. If I contradict the idea that I did distinctively recognize "1", then I didn't distinctively recognize "1". I can't, however, contradict the idea that I distinctively recognized some symbol, therefore I distinctively recognized that symbol. To you, is this all application to reality, with a meaningful subdivision?

Distinctive is simply knowing we have every logical reason to believe that we are experiencing the discrete experience itself. If however, the discrete experience implies something beyond the act of having the experience itself, this is when application occurs.


I understand your distinction here, but the claim of "the act of having the experience itself" I see as no different from a claim about something beyond the act itself: I must not be able to contradict it. Once I've obtained the act of having the experience itself as true, I can meaningfully distinguish that from whatever is utilizing that experience to attempt to derive something else (which I think is what you are getting at). I am having a hard time distinguishing the two as not really the same thing (fundamentally).

Essentially, distinctive knowledge is the rational conclusion that what we experience, is what we experience..."I distinctively know 1 banana +1 banana =2 bananas, and I'm going to apply it to those two bananas over there," you can see this dividing line.


To me it just seems like you applicably know (not in your terms) that you distinctively recognize things, and then anything built off of that is "applied": but both were applied, no? Don't get me wrong, your distinction is something I distinguish as well (I hold that I distinctly recognize things as well).

If I conclude that I discretely experience, it is not by application to something beyond itself...
So we are not applying discrete experiences, when we are recognizing that we know we have discrete experiences in themselves.


I think this is the difference: you are making a subdivision in application in terms of what recursively refers to itself vs what refers beyond itself. I am pointing out that it is still application, albeit meaningful distinctions. And nothing ever refers to itself in a literal sense. The distinctive knowledge of 1+1=2 was analyzed by reference in a subsequent thought.

And logic on its own, is a set of rules we construct


I think there are fundamental rules of logic we do not construct.

When we are trying to assert more than the experience itself, such as applying the experience to another that we say results in X, we are applying.


Then applicable knowledge is always inductive? I believe applying one experience to another will hold, but it may not.

A question for you Bob: can you see this dividing line? Do you think there are better words for it? Do you think there is a better way to explain it?


I think some further elaboration would be useful: I don't think I am still quite understanding you.

Is it referencing contradictions of an abstract logic? Or is it the contradiction of reality against my will?


I think to properly address your elaboration into potentiality, I need to hear your feedback on what you mean by "will". I hold that there is one kind of "contradiction" and it is pon. There's no difference between a contradiction in abstract logic vs against my will. Is there?

Firstly, I don't think you can construct a distinctive context where something is at two places at once: it would be two identical things, which I don't think is the same thing. But let's say that I could imagine the same chair in two different places (and they weren't identical clones), then I would applicably know that my imagination can hold the same thing in two different locations and, when applied to objects outside of imagination, that there cannot be the same thing in two different locations in non-imagination.

A -> B
A exists.
Therefore B


It's more like:

A -> B
A is not contradicted, thereby true
Therefore B

The same thing in two different locations is not contradicted in imagination (hypothetically), but is in what you call "reality" (what I would call non-imagination, to be precise). It would be a contradiction to transport the conclusion pertaining to the imagination to non-imagination because they don't hold the same identity in terms of essential properties (hence "non"-imagination). To hold that they are the same would be contradicted by reason (potentially; someone may not ever realize it). Even if I could apply the same thing in two different locations as true in non-imagination and imagination, I would still have to deal with the contradiction that they, by definition, are not correlated to one another: they share an unessential property.

In terms of your Santa example, you know by application that modal statements like IF...THEN are true in terms of their form, but not necessarily that the IF conditional is automatically true. You and I had to conclude that we both implicitly utilize IF...THEN style logic even prior to us realizing it: that is application. I don't think we ever know anything without applying it to "reality", because reason recursively analyzes itself in the exact same manner.

I'll stop here for now: this is getting long!

I look forward to hearing from you,
Bob
Philosophim March 23, 2022 at 23:58 #672114
Wonderful analysis Bob. I think you're seeing the distinction, but also the underlying sameness that runs through them both. This is because at their core, both types of knowledge are established the same way; they are both deductions that are concluded without contradiction. However, there is a mix-up of language here. I think you've been stating that the only way to conclude anything is not contradicted is to "apply" it. This is not the same meaning as "applicable knowledge". Since the vocabulary is confusing, a better way would be to use the phrase "use reason" instead of "apply it". I'll flesh this out more through this response.

The distinction between Distinctive and Applicable is really more of a differentiation of steps in knowledge. Distinctive knowledge is obtained, and only after, can applicable knowledge be obtained. Perhaps the difficulty comes from defining "reality". As a foundational argument, I am restricted in what I can claim in my building blocks. So I will start with your question about "will".

As this is foundational, I'm trying to embrace definitions that any person could come to on their own. So in the beginning, reality is simple. If everything went according to my will, there would be no need for the identity of "reality". Everything I willed would happen. But there is an existence which can counter my will. Sometimes it does. Sometimes it does not. Regardless, it has the capability to deny my will. Reality is the existence that can counter my will, whether or not it does. That's all there is to it.

Quoting Bob Ross
Moreover, I am also trying to hone in on what you mean by "will". When you say:

I will to wave my hand, and reality does not contradict that will. I will to fly by my mind alone, and reality contradicts this.

This makes me think you may be using "will" as one shared will between the mind and the body, but, given that the body doesn't have to abide by the will of the mind, I don't think this is what you are saying. I think you are trying to keep this a bit more high level, conceptually, than I am.


To understand this fundamental definition of will, there is no mind or body initially considered. Will is intention, and its outcome is decided by reality. I have not fundamentally defined the body vs the mind at this point. If that is important to you, I will, but I don't want to add in things that should be unimportant to understanding what will and reality are.

Hopefully this will allow us greater clarity between distinctive and applicable knowledge. First, understand that we are currently not including social context. That changes things. In a solo context, I conclude that knowledge is what is deduced and what reality does not contradict. This conclusion is entirely of my will, and reality cannot deny that I made it. Is this distinctive, or applicable? This is distinctive. I formulated it, therefore it's there. A = A, because I have defined it as such. I could just as easily have stated A=B. I would distinctively know that A=B, but I could not apply it in any meaningful way.

But if I say, "That letter A is equivalent to that letter A over there", I need to carefully craft my context to ensure I'm not contradicted by reality. If I say, "I deduce that these two objects I perceive by sight are tomatoes", I must carefully check that they really do fit all of my essential properties of what a tomato is. I think what I'm finally realizing, as I've seriously thought about this, is that all applicable knowledge was initially a belief that needed to be confirmed before it could be considered a deduced conclusion.

A claim that needs to go through the process to determine if it can be applicably known is always an induction, or a belief. Honestly, it's a relief to finally smack my forehead and realize this clearly. I can claim A=A, but can I claim those two A's over there are equivalent before going over them closely? No. That's an induction. I suppose an induction which has a deductively concluded outcome is applicable knowledge.
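The tomato and two-A's checks can be sketched in code. This is only my own illustration (the property names and function are invented, not from the essay): the definition is distinctive knowledge, freely constructed, while the claim about the two perceived objects is an induction resolved by a deductive check.

```python
# Distinctive knowledge: a definition constructed at will.
ESSENTIAL_TOMATO_PROPERTIES = {"red", "round", "edible"}

def fits_definition(observed):
    """Deductive check: does the observed object hold every essential property?"""
    return ESSENTIAL_TOMATO_PROPERTIES <= set(observed)

# An induction: "those two objects over there are tomatoes."
object_1 = {"red", "round", "edible", "ripe"}
object_2 = {"red", "round", "plastic"}

print(fits_definition(object_1))  # True: the induction is deductively confirmed
print(fits_definition(object_2))  # False: reality contradicts the claim
```

The deductive step is the same in both cases; only its outcome decides whether the induction becomes applicable knowledge.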

Quoting Bob Ross
And logic on its own, is a set of rules we construct

I think there are fundamental rules of logic we do not construct.


There are fundamental rules that we construct which are not valid as applicable knowledge, and there are fundamental rules of logic we construct which are valid as applicable knowledge. The application of reason, or "deductions which are not contradicted by reality", runs through both. Abstract logic is something you create. You will that a particular definition means X. To hold the definition of X, Y is entailed. In other words, you've created a deduction. Now, you could create another definition Z that entails Y does not exist, but X does exist. At this point, there is a contradiction from reality, but the reality that your will can create and modify. I can change the definition Z and what it entails. Same with X and Y. The contradiction exists only because I choose to hold definitions that contradict. In other words, no inductions are created and tested. This is distinctive knowledge.

If that abstract logic is applied to anything but other distinctive identities, then it is no longer a deduction, but an induction. And at that point, steps must be set out to determine a deduced conclusion. Once that conclusion is deduced, I call that "applicable knowledge".

Is application a good word to describe this though? Does it lend confusion? It appears it does. I don't know of a word that describes the process of finding the deduced result of an induction. Perhaps there is no word yet. Perhaps instead of actions I should be thinking in steps or tiers. Like tier 1 knowledge is distinctive while tier 2 is applicable. Instead of 'applicable', maybe another word? Processed? Gleaned? I'm open to suggestions!

Quoting Bob Ross
But to claim to know a set of symbols purely as distinctive knowledge is application of reason.


Right, recall again that distinctive and applicable knowledge are both concluded exactly the same way: "a deduction with a conclusion that is not contradicted by reality". Reason can be said to be "applied", but that is not the same as taking what is deduced to be an induction and taking the steps necessary to confirm its result.

Quoting Bob Ross
There's no difference between a contradiction in abstract logic vs against my will. Is there?


A good question. There may be. If you construct your abstract logic (within a solo context), you are the one defining your terms and rules. You are deciding to hold onto them when a contradiction is met. This is not the same thing as using your logical set to induce an outcome that you must then confirm. By this I mean you are holding onto your definitions of logic, but cannot decide the outcome. When you can hold onto your definitions of logic, and decide your outcome, this would be considered distinctive knowledge.

Quoting Bob Ross
In terms of your Santa example, you know by application that modal statements like IF...THEN are true in terms of their form, but not necessarily that the IF conditional is automatically true.


I think, with this clarified, that I know by distinction that IF...THEN statements are true in their form. But if I am going to apply that IF conditional to something that I do not yet know the outcome of, and its outcome is not something I can decide, then it would need to undergo the knowledge process to see which applicable knowledge I would learn from this application.

Bob, I can't thank you enough for your keen and pointed comments on this. I always knew distinctive and applicable knowledge worked, but I always felt it lacked refinement or a clear way to explain and demarcate it. I think I've found that now thanks to you. I hope this clarifies this issue for you as well!
Bob Ross March 24, 2022 at 23:48 #672918
Hello @Philosophim,

I really appreciate your elaboration, as I think I am starting to grasp the "distinctive" vs "applicable" distinction you are making. Your uncovering of inductions vs inductions verified via deductions is marvelous (and, not to mention, helped me understand your viewpoint better)! However, sadly, I am still having trouble truly concurring with you. Let me try to explain (slash simply ask you questions).

Firstly, I am not finding it self-apparent that your definitions of "distinctive knowledge" and "applicable knowledge" are mutually exclusive:

When you can hold onto your definitions of logic, and decide your outcome, this would be considered distinctive knowledge.

I suppose an induction which has a deductively concluded outcome is applicable knowledge.


Let me give you an example where I am finding these definitions problematic (and you tell me where I am getting it wrong, because I am fairly confident it is probably just me misunderstanding). Imagine I am contemplating the square root of 25. Let's say I immediately (without performing the math) assert that it is 6 (because I memorized the square roots of certain numbers previously and, albeit incorrectly, associated my memory of one particular square root problem as being answered by 6 with it being the square root of 25). My assertion here is a belief (that it is 6), and is therefore an induction (my premises do not necessarily constitute the conclusion). To determine my assertion's validity, I perform the necessary mathematical operations, which is how I am able to deduce that my inductive belief was incorrect (5 * 5 = 25, whereas 6 * 6 = 36). Since this example abides by the form you have defined for "applicable knowledge", it was "application" (all of which was pure, abstract reason).

However, if I had never asserted anything (i.e. that it was 6), then it would have been "distinctive knowledge" because it was a pure deduction (which is entirely within my control, as it is abstract).

But in either case the belief (or lack thereof) was irrelevant. If I say 1 + 1 probably equals 2, and then perform addition to determine (deductively) that it actually does, then technically that is "applicable knowledge". If I merely hadn't guessed anything prior to the deduction of mathematical principles, then it would have been distinctive knowledge.

Furthermore, I think you are claiming that distinctive knowledge precedes (always) applicable knowledge, but in this case (depending on whether a belief is conjured) applicable knowledge could be obtained without using any prior distinctive knowledge (e.g. without asserting a preliminary belief, the deductive application of addition to 1 + 1 would produce distinctive knowledge, but with a preliminary belief it would have produced applicable knowledge without any preceding distinctive knowledge).
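The square-root example can be put in code. Again, this is only my own framing (the `square` helper and the search over candidates are invented for illustration): the deductive step is identical whether or not an inductive guess preceded it, which is the crux of the objection.

```python
def square(n):
    """Pure deduction: apply the rules of arithmetic."""
    return n * n

# "Applicable knowledge" path: a memorized (wrong) guess, then the deduction.
guess = 6
guess_confirmed = (square(guess) == 25)  # False: the induction is contradicted

# "Distinctive knowledge" path: no prior belief, just the same deduction.
answer = next(n for n in range(26) if square(n) == 25)  # 5

print(guess_confirmed, answer)
```

Whether the label "applicable" attaches seems to depend only on whether `guess` existed beforehand, even though the deduction performed is the same in both paths.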

I think, if I am understanding you correctly, what you are more trying to convey is abstract vs non-abstract knowledge, and you seem to be arguing that line is drawn by what you do or do not control. But abstract knowledge under your definitions would not be exclusively distinctive.

Likewise, an induction that is verified via a deduction is not a "deduction which is not contradicted by reality": it is an induction which is not contradicted by reality, but is distinguished from other inductions by the manner in which it is confirmed (deduction). So it seems like distinctive and applicable knowledge do not, after all, utilize the same method (but nevertheless utilize pon). To make it more confusing on my end, it also doesn't seem like you are strictly claiming an abstract divide either, because the coining of a term in reference to an object in front of me would be a pure deduction (which pertains to something non-abstract) and, thusly, would be distinctive knowledge. Whereas my belief that some object that isn't in front of me is the same as the one that is would be merely an induction (that happens to be verified/unverified by means of a deduction), therefore applicable knowledge. And, moreover, when I go verify that that other object is indeed like the other one that I previously saw (thereby using deduction), that would be distinctive knowledge in the sense that it is a pure deduction. And my consideration of that object, grounded in a pure deduction, as being the same as the previous object would be a purely abstract consideration (i.e. I am comparing the properties of this object, gathered deductively, to the previous properties I deductively found of the other object--none of this is non-abstract). It is almost like a pure deduction is always distinctive, regardless of what it pertains to, and applicable is really the attempt to verify inductions. Don't get me wrong, I share many examples with you where this dividing line seems clear, but upon deeper reflection I am left with nothing but confusion. Did you not also distinctively know the two A's over there when you verified your inductive belief about them?
Then didn't you abstractly compare those properties to the A's you conjured up in your mind (which is still within the realm of "distinctive knowledge")? When they abstractly matched (in essential properties) you thereby asserted it valid (wouldn't that be distinctive knowledge?). So really "applicable knowledge" is inductions which you distinctively know to be true, no?

As this is foundational, I'm trying to embrace definitions that any person could come to on their own. So in the beginning, reality is simple. If everything went according to my will, there would be no need for the identity of "reality". Everything I willed would happen. But there is an existence which can counter my will. Sometimes it does. Sometimes it does not. Regardless, it has the capability to deny my will. Reality is the existence that can counter my will, whether or not it does. That's all there is to it.


I admire your desire to keep it fundamentally easier to comprehend (and honestly that is your prerogative, I respect that), but I find your "will" incredibly ambiguous (I am gathering it might be purposely so?). For example, if "reality" is simply "what I do not control", then my body could very well not be a part of "reality". Moreover, my imagination may be a part of and not a part of reality (depending): what if I can't control my imagination, or maybe only particular aspects? What if I could control my breathing in my dream, but not my arms' movements: are my imaginative arms a part of "reality" but not my imaginative breathing? This also opens the doors to everything being consumed by "reality": if I will that my next thought be a continuation of the subject I am currently contemplating and the very next thought segues into something completely irrelevant, then my thoughts are also "reality", which inevitably begs the question of what isn't "reality"? All I have are thoughts, what is left? I think, like you are saying, at a surface level "control vs no-control" seems intuitive, but upon deeper reflection it isn't that solid (or clear) a distinction. Objects, regardless of who is willing their actions, are still objects. Objects are "reality". I am having a hard time seeing (beyond simply trying to keep it intuitive for the layman) how this distinction has any bearing on control. Even if my body always was aligned with my will, it would still be a part of reality. I honestly don't think what you are trying to convey really has any bearing on control either (unless I am just misunderstanding): abstract vs non-abstract knowledge is still a meaningful distinction regardless of who willed what. But at the same time, maybe you aren't making such a distinction (abstract vs non-abstract), because your definitions seem to be implying other things (a deeper divide) than what I am understanding (I think).

Abstract logic is something you create. You will that a particular definition means X.


I agree that we can create abstract logic, but it follows from necessary logic. IF X -> THEN Y is logically constructed in the sense that I can choose to reject the relation of X to Y (i.e. Y does not follow from X). In that sense I agree, but the form of IF THEN conditional logic is necessarily already there, and cannot be rejected. I can always innately, whether I like it or not, construct logic which is built off of a conditional (something not asserted as true, but assumed as such for further exposition). To even try to negate IF THEN in terms of its form, I would have to conditionally assume a hypothetical where I don't necessarily utilize IF THEN, which thereby solidifies its necessity. It is easier to see with pon: I can construct logic utilizing pon, but pon is necessarily the bedrock of logic itself. Maybe it should be distinguished as a different kind of logic (but then we might start getting into controversial terminology, such as transcendental logic or something like that, which we both may not agree with). But I see your point and agree: we can make up, built off of the fundamentals of logic, what we conceptualize as "logic".

In other words, no inductions are created and tested. This is distinctive knowledge.


Again, this I find problematic (see original examples: it seems, so far, to be a superficial distinction). Sometimes the induction conjured up doesn't matter at all.

Like tier 1 knowledge is distinctive while tier 2 is applicable. Instead of 'applicable', maybe another word? Processed? Gleaned? I'm open to suggestions!


Hopefully I've demonstrated that it isn't always tier 1, but application could be tier 1 as well. It really seems like you are distinguishing a deduction from an induction (that can only be verified by deduction--which would be thereby something verified distinctively). I still think, so far, that the only clear distinction here would be reason and everything referred to by it (aboutness vs about).

This is not the same thing as using your logical set to induce an outcome that you must then confirm. By this I mean you are holding onto your definitions of logic, but cannot decide the outcome


Again, if it is about being able to decide the outcome, then my original examples are distinctive knowledge, but if it is about whether it is an induction verified by deductions, then it is applicable knowledge. I can have inductions that do not pertain to objects (i.e. are abstract) which I can then thereafter determine whether they are true via abstract deduction.

Bob, I can't thank you enough for your keen and pointed comments on this. I always knew distinctive and applicable knowledge worked, but I always felt it lacked refinement or a clear way to explain and demarcate it. I think I've found that now thanks to you. I hope this clarifies this issue for you as well!


I am glad I was of service! However, although it did clear things up a bit, I still am not fully agreeing with it nor do I think it is a clear distinction.

I look forward to hearing from you,
Bob
Philosophim March 27, 2022 at 09:26 #674157
Quoting Bob Ross
I am glad I was of service! However, although it did clear things up a bit, I still am not fully agreeing with it nor do I think it is a clear distinction.


Perfectly fine! For me it gave me a new avenue and way of describing what I've been thinking. Let's see if I can clear up your further issues.

Quoting Bob Ross
Firstly, I am not finding it self-apparent that your definitions of "distinctive knowledge" and "applicable knowledge" are mutually exclusive


Recall that what entails knowledge is a deduction that is not contradicted by reality. But now, I think with my further realization of the difference, I can finally remove "reality". Knowledge ultimately is a deduction. A deduction is a conclusion which necessarily follows from its premises. Adding "reality" is redundant. Any legitimate contradiction to a deduction means it's not a deduction any longer. "Reality" was a placeholder for, basically, "legitimate challenges to deductions". If a deduction can hold despite other challenges to it, it is knowledge.

Knowing that this runs through both applicable and distinctive, I've always noted there was a fine dividing line that we craft. The front and back of a blade of grass are different and necessary existences, but it can be difficult to tell the difference between the two without a zero point. A zero point is the origin of an X and Y graph. When you are looking at a line pattern, placing it at the zero point can give clarity on comparing its symmetry and slopes. What we're doing with distinctive and applicable knowledge is putting knowledge on a zero point, and noting the X and Y dimensions. It is in essence a drawn line or parabola, but charted in a graph in such a way as to break it down into an easier calculation.

Honestly, my realization that applicable knowledge is simply the actual result of an induction makes me want to rewrite the entire thing. I believe I can make it so much clearer now. You see, you can have deductions without inductions. You can have inductions without deductions. X and Y. But you can only get certain outcomes when you combine the two. And when you combine the two, that result cannot be obtained without both an induction and a deduction. The point (2, 3) on a grid requires both coordinates to be. That point exists without a graph of course, but put it on a graph and you can make a breakdown far more useful.

But I go on. The entire point of the example is to agree with you, that sometimes certain knowledge outcomes are going to bleed into each other without clear definitions. The coordinates 2 and 3 are clearly X and Y coordinates, but their existence as a combined coordinate is impossible without each other. Remember that we can discretely experience whatever we want. We can throw away the grid if we want. But what would we lose if we do? Let's examine your points.

Quoting Bob Ross
Imagine I am contemplating the square root of 25. Let's say I immediately (without performing the math) assert that it is 6 (because I memorized the square roots of certain numbers previously and, albeit incorrect, associated my memory of one particular square root problem as being answered by 6 with it being the square root of 25).


What you are missing here is another ingredient we have not spoken about very much, but is important. Social context as mentioned in part 3. I realized I needed to point it out more last time we spoke. Implicitly, when I am talking about knowledge as a foundation in my head, I am referring to a person without any social context. I need to be pointing that out every time, and it is my fault for not doing so.

English, and the symbols and logic of math, are not solo contexts. They are social contexts. You have an external reference to tell you that you are right or wrong. When you say you're making an induction that the square root of 25 is six, you're making an induction against society's definition of math, not your own. I can create my own math in my head where the square root of 25 is 6. Of course, my underlying essential properties of 25, 6, and all the words involved would need to be non-synonymous with society's. But within my personal context, I can make it whatever I want.

When you are learning 1+1=2, you are learning a societal definition of math. If you question, "What does 1+1 equal again?" you are asking for a definition that is not your own. You can learn math from other people. But when you are doing a math problem, and you cannot deduce the answer, you are making an induction about what society's rules would conclude the answer should be. Implicitly, you are unsure you have all the rules and process of thinking correct, and you need to check with others. In this way, once you find the answer, you have obtained applicable knowledge of the answer.

I feel in a self-contained context, the descriptors of distinctive and applicable are clear. It is when societal context enters in that it can be potentially blurred. If someone tells you 1+1=2, and you clearly remember that, that would seem to be distinctive. If someone then asked you, "What does 1+1 equal?" you would distinctively know 1+1=2, but would you know that will be the accepted answer in this particular question? What separates an induction from a deduction is just a little uncertainty as to that person's reaction to your answer.

Quoting Bob Ross
Likewise, an induction that is verified via a deduction is not a "deduction which is not contradicted by reality": it an induction which is not contradicted by reality, but is distinguished from other inductions by the manner in which is confirmed (deduction).


I want to word it more clearly from my end, though this may be semantics at this point. An induction whose conclusion has been reached deductively is applicable knowledge. As an example, I make an induction that the next coin flip will be heads. We could use the hierarchy to examine the cogency level of that induction. Whether it flips to heads or tails (or the ridiculous unlikelihood of landing on that knife's edge), we can examine the essential properties of the result and deduce a conclusion.

That conclusion, no matter the result, is applicable knowledge. It doesn't mean we didn't make an induction. If for example I guessed heads, and it landed on heads, my induction did not itself become a deduction because I guessed correctly. It is only when the answer to that induction is deduced that we have applicable knowledge. That knowledge may be, "I guessed heads, but it landed on tails". This differentiates itself from my distinctive knowledge, or definition of what the essential properties of "landing on heads or tails" entail.

Finally, it is essential to note how the induction is concluded. Having an induction that happens to be correct is not the same as knowledge in any epistemological analysis I've ever read. And for good reason. A guess that happens to be right is not knowledge, it's just a lucky guess. We can have knowledge that we made a guess, and we can have knowledge of the outcome of that guess, but that is it.
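(Editor's aside: the coin-flip case can be sketched in code. This is an illustrative toy only; the function names and the record structure are my own inventions, not terminology from the papers.)

```python
import random

# Sketch: an "induction" is a guess made before the outcome; "applicable
# knowledge" here is the deduced record of how that guess actually resolved.

def flip_coin(rng):
    """Simulate the part of reality we don't control."""
    return rng.choice(["heads", "tails"])

def resolve_induction(guess, outcome):
    """Deduce the resolution of an induction by comparing it to the outcome."""
    return {"guess": guess, "outcome": outcome, "correct": guess == outcome}

record = resolve_induction("heads", flip_coin(random.Random()))
# Whatever the flip shows, the *record* is deduced and thus known;
# the guess itself, even when it happens to be right, was never knowledge.
```

Note that `record` is produced deductively regardless of whether the guess matched: the classification tracks how the conclusion was reached, not whether the guess was lucky.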

Quoting Bob Ross
Furthermore, I think you are claiming that distinctive knowledge precedes (always) applicable knowledge, but in this case (depending on whether a belief is conjured) applicable knowledge could be obtained without using any prior distinctive knowledge (e.g. without asserting a preliminary belief, the deductive application of addition to 1 + 1 would produce distinctive knowledge, but with a preliminary belief it would have produced applicable knowledge without any preceding distinctive knowledge).


I still believe distinctive knowledge always comes before applicable knowledge. If I experience something for which I have no distinctive knowledge, I first may try to match it to the dictionary in my brain. If I deduce that I cannot, I applicably know what I am seeing does not match what is in my brain. At that point, I create an identity for it. It's the sheep and goat example all over again. To avoid retyping it up again, do a ctrl-f 'goat' on section 2 to re-read the example.

To sum it up, we can use the deductions we arrive at from our inductions to amend or create new distinctive knowledge (solo context again). But distinctive knowledge is not an induction itself. It is the creation of an identity that can be used in a later induction or deduction. It can be amended, created, and destroyed. But the experience itself is created and thus known by us without any induction involved.

Quoting Bob Ross
But abstract knowledge under your definitions would not be exclusively distinctive.


Again, in a social context, you are somewhat correct. Because in this case, the abstract is something invented by society, something we do not have control over. It is the distinctive knowledge of society, and if we use inductions to ask, "Do I understand society's distinctive knowledge correctly?" those deduced solutions are applicable knowledge. I also want to use "distinctive knowledge of society" with care. I think that's not quite clear, and I would very much consider this to be ambiguous and possibly confusing. I might need a new phrase here, which I believe I will think on more. This post is already massive enough as it is. :)

Quoting Bob Ross
the coining of a term in reference to an object in front of me would be a pure deduction (which pertains to something non-abstract) and, thusly, would be distinctive knowledge. Whereas my belief that some object that isn't in front of me is the same as the one that is would be merely an induction (that happens to be verified/unverified by means of a deduction), therefore applicable knowledge.


A fantastic summary.

Quoting Bob Ross
And, moreover, when I go verify that that other object is indeed like the other one that I previously saw (thereby using deduction), that would be distinctive knowledge in the sense that it is a pure deduction.

Let me clarify a little here. The result of a deduced conclusion from an induction would be applicable knowledge. Using a deduction is knowledge. It is the situation that we use the deduction in that determines the classification of knowledge we are receiving.

Quoting Bob Ross
And my consideration of that object, grounded in a pure deduction, being that of the same as the previous object would be a purely abstract consideration (i.e. I am comparing the properties of this object, gathered deductively, to the previous properties I deductively found of the other object--none of this is non-abstract). It is almost like a pure deduction is always distinctive, regardless to what it pertains, and applicable is really the attempt to verify inductions.


I would clarify that the applicable is not the attempt to verify inductions; it is the deductive result of an induction. Again, a deduction is a deduction. Whether it follows from an induction or from another deduction is what determines the classification of knowledge.

There is another implicit question you're likely asking as well: "Are inductions and deductions classifications of knowledge themselves?"

We can have distinctive knowledge of our inductions and deductions of course. But what of the underlying logic itself of deduction vs induction? That is distinctive. We have created a set of rules and definitions that we use. We have applicable knowledge that both inductions and deductions can be used without contradiction. I can make the induction, "I believe I can use a deduction without contradiction", and applicably know this to be true after its resolution.

This is the part you might like Bob, as I believe you've been wanting some type of fundamental universal of "reason". This logic of induction and deduction is reached because we are able to think in terms of premises and conclusions. This is founded on an even simpler notion of "predictions" and "outcomes to predictions". Much like our capability to discretely experience, this is an innate capability of living creatures. I believe this coincides with your definition of "reason" earlier as "decisions with expectations".

Can we define this in a way that is undeniable, like discretely experiencing? If discretely experiencing is an act of "existence", perhaps "action" is the next act needed for an existence to sustain itself. I do not have it well thought out to the point where it is simple, incontrovertible, and self-evident, but an initial proposal is "the act of breathing". I cannot stop discretely experiencing any more than I can cease breathing entirely. From this autonomous action comes the next evolution, agency: the act of intention with an expected outcome. This is evidenced by eating. A being cannot eat if it has no intention and no action on that intention.

With intention and expected outcome, and the evolution of imagination and the capability of language, we can arrive at inductive and deductive thought processes. Premises can lead to only one outcome, or premises can lead to more than one outcome. In a broad sense, the definitions of inductive and deductive cover these scenarios. The recognition and analysis of these is beneficial to a living being, because a being can figure out when there are higher and lower chances of its intentions arriving at a predicted outcome. This allows the maximum type of agency afforded to a being, and the greater the agency of intention and outcomes, the more likely what one expects to happen will come to pass.

So then, the knowledge of induction and deduction are formed distinctively in the solo context. Of course, if we use either of these in an induction, and deductively determine the outcome, then whatever is determined is applicable knowledge.

Quoting Bob Ross
I admire your desire to keep it fundamentally easier to comprehend (and honestly that is your prerogative, I respect that), but I find your "will" incredibly ambiguous (I am gathering it might be purposely so?). For example, if "reality" is simply "what I do not control", then my body could very well not be apart of "reality".


Perhaps it was how I explained it that made it ambiguous. Will is simply intention of action. That's all. If my intention of action is denied, then that is because of reality. Reality is an ever-constant unknown which can deny my will at any time. Essentially, reality is the potential for my will to be denied. If I will my body to do something, and it does not happen, that is reality that I cannot deny. Whether reality denies me or not is the outcome I await. I feel the current discussion on it is overcomplicating the issue for what we need at this time. If you want to flesh out will more, perhaps this should be saved for a later post. I don't think it's necessary to discuss the current issues of applicable and distinctive knowledge, and I don't want the topic to lose that focus.

Quoting Bob Ross
I agree that we can create abstract logic, but it follows from necessary logic.


I don't know what "necessary logic" is. If you mean we have the innate capability to intend an outcome, no disagreements there. But that is not knowledge, that is action. Just like the ability to discretely experience is not knowledge either. I can distinctively know what I discretely experience, and I can distinctively know what I intend in my outcome. The creation of logic is distinctive, but if I use that logic in an induction, I must deductively conclude that outcome. That result of using that logic is applicable, and not distinctive.

Quoting Bob Ross
I still think, so far, that the only clear distinction here would be reason and everything referred to by it (aboutness vs about).


We have touched upon reason only in a few sentences. It has not undergone the same rigor as the rest of the arguments. I have tried to flesh it out here. Reason, as I initially understood it, doesn't seem to do any more than simply describe that we make actions with intention. I have hopefully broken down how this plays in with the analysis above, but as always, please put your input in and feel free to clarify or add to the initial meaning.

Quoting Bob Ross
To even try to negate IF THEN in terms of its form, I would have to conditionally assume a hypothetical where I don't necessarily utilize IF THEN, which thereby solidifies its necessity.


Its "necessity" is distinctively known. This is a deduction you have made without any other inductions involved.

Quoting Bob Ross
Hopefully I've demonstrated that it isn't always tier 1, but application could be tier 1 as well. It really seems like you are distinguishing a deduction from an induction (that can only be verified by deduction--which would be thereby something verified distinctively).


Application cannot be done prior to distinctive knowledge, because you must first make an induction. Do you distinctively know the induction you are making? Yes. Can you make a deduction without first distinctively knowing premises and rules? No. You can experience something, but experiencing something in itself is not applicable knowledge. Recall, you can experience a "sheep" for the first time, and that is your distinctive knowledge of the experience. If you later make an induction based off of that distinctive knowledge, "That over there is a sheep," the deduced outcome to that induction will be your applicable knowledge.

Quoting Bob Ross
I can have inductions that do not pertain to objects (i.e. are abstract) which I can then thereafter determine whether they are true via abstract deduction.


In a solo context, I do not believe it is possible to make an induction about abstract logic. You create the rules, so everything follows from your premises. You can create a logic that also does not have set outcomes. You distinctively know this, because you created it to be that way. For example, let's note that we conclude when a coin is flipped without knowledge of the force applied, it has a 50/50 chance of landing on either side. Barring all applicable knowledge, where's the induction? The induction only happens if we predict a particular outcome by flipping an actual (non-abstract) penny. I can flip an abstract penny in my mind, but I determine the outcome, don't I?

In claiming that we can have abstract inductions that we can then solve deductively, we have to be careful not to sneak in any applicable knowledge. Applicable knowledge is the deduced result from an induction we don't have control over. We can create further distinctive knowledge from applicable knowledge, but that is a combination of abstract (distinctive) with non-abstract (applied).

Whew, major write up here from me. And yet still a lot I'm sure you want covered, such as societal context, and perhaps a further exploration into "will". To focus, I think it would be best if we finish the idea of distinctive and applicable in a solo context, and start bleeding that into societal context next. If you need a refresher on societal context, section 3 is where I went over it. Thanks again Bob, I look forward to your responses!



Bob Ross March 29, 2022 at 22:12 #675337
@Philosophim,

I decided to give it a couple days to mull it over in my head, as I didn't feel like I was completely understanding you, and now I think I understand what you are trying to convey. As always, I could be utterly wrong, so I am going to explicate it here (along with some suggestions that presuppose I am correct in my inference).

Firstly, "distinctive knowledge" is "deductions". "Applicable knowledge" is merely referencing the means of achieving that "distinctive knowledge" (i.e. the transformation of an inductive belief into deductive knowledge--belief into knowledge) and, therefore, is unnecessary for this distinction you are trying to convey. If it was induced and confirmed via a deduction, then that is "distinctive knowledge" (an induction that transformed into a deduction). Therefore, the dividing line I think you are looking for is "deductions" vs "inductions", so "knowledge is what is deduced" and "beliefs are what are induced": I honestly don't think it gets any clearer or more concise than that. Therefore, I suggest removing the terms "distinctive" and "applicable" knowledge outright in exchange for "deduction (knowledge)" and "induction (belief)". That would also resolve my confusion with abstraction vs non-abstraction, as abstraction can be induced and deduced (i.e. I can induce about my capabilities of reason or imagination, etc or I can deduce that the principle of non contradiction is a necessity). The dividing line of abstraction vs non-abstraction doesn't line up with deduction vs induction, which it doesn't have to in your case (unless I am wrong here).

But I want to be very careful here, as I would not agree that all deductions are knowledge. For all intents and purposes here, I am going to elaborate with a distinction of "categorical" vs "hypothetical" deductions (not married to the terms, just for explanation purposes). Although they are both deductions (and, consequently, their conclusions necessarily follow from the overlying principle and subsequent premises), they differ in the validity of the overlying principle itself. If a deduction was "categorical", then it is necessarily (categorically) true. However, if it was "hypothetical", then the conclusions are only true in virtue of granting the overlying principle as hypothetically true. This can be demonstrated (both of them) in one example:

1. All cats are green
2. Bob is a cat
3. Bob is green

If I am asserting this as "categorically" true, then I am actually defining "greenness" as an essential property of "cat", therefore this deduction is necessarily true unconditionally. However, if I am asserting this "hypothetically", then I am asserting it in virtue of hypothetically holding that it is true that all cats are green. In your terms, it would be that "greenness" is an induced unessential property of "cat", which has thus far been true of all "cats" (let's just hypothetically say). Likewise, deductively obtaining the properties of an object doesn't necessitate that you deductively obtain that it is or isn't something (i.e. you don't necessarily obtain knowledge). If I define "glass" as having the essential property of being "(1) clear and (2) made from melting sand", then, assuming I didn't watch it get made, I can't assert that this pane in front of me is actually "glass": it would be an induction. So, although I deductively discover the properties of the presumably "glass pane" in front of me, I do not deductively obtain that it is thereby "glass" (I inductively assert that it is).

So, I would hereby agree that "distinctive knowledge" should actually simply be "categorical deductions" (it can semantically be whatever term you want), which encompasses "knowledge". Inductions (and abductions) are beliefs, and hypothetical deductions are not knowledge, except with respect to what hypothetically follows from them (which doesn't mean one "knows" anything beyond the logical consequences of the overlying principle taken in virtue as true--i.e. the logical consequences are "categorically" true in the sense that they logically follow, and are therefore knowledge, but the hypothetical principle itself is not).
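(Editor's aside: the categorical/hypothetical split can be sketched as a toy validity check. The rule representation and the `follows` helper are my own inventions for illustration, not anything proposed by the posters.)

```python
# Sketch: a deduction's *validity* -- the conclusion follows from the
# premises -- is separate from the truth of the overlying principle itself.

def follows(facts, rule):
    """Return the rule's consequent if its antecedent is among the facts."""
    antecedent, consequent = rule
    return consequent if antecedent in facts else None

rule = ("is_cat", "is_green")  # "All cats are green" -- possibly false!
facts = {"is_cat"}             # "Bob is a cat"

conclusion = follows(facts, rule)  # "is_green": "Bob is green" validly follows
# Whether this counts as *knowledge* depends on whether the rule is held
# categorically (greenness defined as essential to "cat") or merely
# hypothetically (assumed true for the sake of the deduction).
```

The code deliberately knows nothing about whether the rule is true; it only checks that the conclusion follows, which is exactly the gap between "hypothetical" and "categorical" deductions being described above.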

I would also like to briefly clarify that my square root of 25 example was meant as a "solo context", as it can be posited as either one (but I should have made that clear, so that's my bad). The dilemma is still there if we were to presume that I came up with the mathematical operation of the square root. I came up with it a year ago, by myself, and began memorizing the answers of the square roots of like 100 integers (or what have you). Then, a year later, I ask myself "what's the square root of 25?". I immediately assert it is "6" in reference to what I believe was what I memorized a year ago (in accordance with the mathematical rules I produced). That's an induction. I then deductively invalidate it by means of actually performing the mathematical operation in accordance with how I defined it a year ago. Same dilemma. It is trivial in the sense that I could semantically change it, but I would still be incorrect with respect to what I constructed a year ago.
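(Editor's aside: the square-root dilemma can be made concrete. The memorized table and its wrong entry for 25 are hypothetical stand-ins of my own for Bob's misremembered answer, not anything from the thread.)

```python
import math

# Sketch: an inductive recall from (faulty) memory versus a deductive
# computation performed according to the rules themselves.

memorized_roots = {16: 4, 25: 6, 36: 6}  # 25 is misremembered as 6

def recall_root(n):
    """Induction: assert an answer from memory, without doing the math."""
    return memorized_roots[n]

def compute_root(n):
    """Deduction: actually perform the operation as the rules define it."""
    return math.isqrt(n)

guess = recall_root(25)    # 6 -- an induction from memory
answer = compute_root(25)  # 5 -- deduced from the rules themselves
# The induction is invalidated deductively: guess != answer.
```

Even in a solo context the point holds: the rules were fixed a year ago, so the deduction can contradict the recollection regardless of who authored the rules.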

If you agree here with me (presuming I understood what you were trying to convey correctly), then this inevitably segues into "essential" vs "unessential" properties. Even in the sense of what you were explicating in terms of "premises" and "conclusions" (in an attempt to ground them absolutely), we will need to revisit what essential properties are. I think in theory they sound great, but I'm having difficulties actually implementing them. For example, what is the essential property (or properties) of a "dog"? I get that I could, in a solo context, categorically define it. But soon enough I would realize it is insufficient, as I would be necessarily excluding a predominant population of "dogs" no matter how I try to divide the line (create essential properties). The only feasible means I've found is making one essential property that is a combination (conjunction) of all properties of being a "dog". Thusly, there are no unessential properties, and every "dog" is compared abstractly to what my mind comes up with as a "perfect dog". Otherwise, I could keep performing a reductive approach where I never derive a true essential property of being a "dog". For example, if I cut a "dog" in half, neither side shares anything I would imagine even remotely has to do with an essential property of a "dog". Nevertheless, I would reference them as "two halves of a dog". But in referencing it to "dog", I've thereby implicitly conceded it resembles a "dog". But resemblance necessitates a communal property (in this case, a communal essential property). So even if I were to claim that I have different "essential properties" for "bottom half of a dog" and "top half of a dog", I am still implicitly conceding they necessarily share a trait with "a dog" (but with essential properties it is impossible they share anything related to such--and I truly hope you can prove me wrong here).
Therefore, I am starting to think the resemblance is my mind's abstract "mapping" or "consideration" of both halves, whereby I am able to assert they are halves "of a dog". That's essentially my dilemma.

I think that the aforementioned consideration of "essential properties" is important to distinguishing "hypothetical" vs "categorical" (or what have you) deductions.

But now, I think with my further realization of the difference, I can finally remove "reality".


I agree, I think that the term was causing (me at least) confusion.

Knowledge ultimately is a deduction. A deduction is a conclusion which necessarily follows from its premises.


Hopefully I demonstrated that this is not necessarily true. Yes, a deduction is what necessarily follows from its premises, but that isn't necessarily knowledge with respect to the initial principle(s).

Any legitimate contradiction to a deduction means it's not a deduction any longer.


In terms of its logical consequences, yes. Any contradiction to the overlying principle(s) does not revoke it as a deduction.

A zero point is the origin of an X and Y graph. When you are looking at a line pattern, placing it at the zero point can give clarity on comparing its symmetry and slopes. What we're doing with distinctive and applicable knowledge is putting knowledge on a zero point, and noting the X and Y dimensions. It is in essence a drawn line or parabola, but charted in a graph in such a way as to break it down into an easier calculation.


I think this contradicts the whole purpose of the epistemology (especially in terms of essential properties), which entailed clear and distinctive terminology. The terms can share inheritance, but not definitional essential properties. For example, a square and a triangle are mutually exclusive, however they share the trait of being a "shape". I don't think this is what you meant by (0, 0): I think you are arguing for the allowance of minimally ambiguous terminology.

And when you combine the two, that result cannot be obtained without both an induction, and a deduction.


I don't think this is true (or maybe I just am not following). An induction that transforms into a deduction is no different than a deduction (ultimately). If I induce that that object over there is a potato, and then I go over there and deduce it actually is, this is ultimately the same exact thing as if I had it in my hands to begin with and deduced it was a potato. Both are "distinctive knowledge". Only what is deduced is verified in the induction; anything else would remain an induction. "Applicable knowledge" means nothing more than that I happened to create a belief prior to deducing any knowledge about it. The induction does not constitute any part of the "knowledge" aspect.

What you are missing here is another ingredient we have not spoken about very much, but is important.


I understand why you went to societal contexts, but it can be posited (and was supposed to be posited) as a solo context.

I feel in a self-contained context, the descriptors of distinctive and applicable are clear


I don't think they are. I think "inductions" and "deductions" (and "hypothetical" vs "categorical" deductions therein) are clear. But maybe that isn't what you are trying to convey.

An induction, whose conclusion has been reached deductively, is applicable knowledge.


I don't find any meaning in this definition. If it was deduced, then it is distinctive, not applicable. If it happened to be an induction previously, well then it was an induction previously. I don't see how this is a meaningful distinction to make.

In terms of the coin flipping example, that was distinctive knowledge that is being semantically refurbished as applicable simply because you happened to preemptively determine a belief towards what will happen. If you didn't guess it would land on heads, it would have been purely distinctive knowledge (nothing was induced). Why does it matter if I preemptively guess?

Finally, it is essential to note how the induction is concluded. Having an induction that happens to be correct is not the same as knowledge in any epistemological analysis I've ever read. And for good reason. A guess that happens to be right is not knowledge, it's just a lucky guess. We can have knowledge that we made a guess, and we can have knowledge of the outcome of that guess, but that is it.


I don't think this is a relevant point to your epistemology, as it doesn't claim inductions are knowledge. The moment you deduce what actually is, that's what you know. Even if it aligns with your induction and you have legitimate reasons to conclude you were on to something with that induction, you didn't know anything--you had an inductive belief. Even in hindsight, you didn't know. At best, you now know that you lucked your way into aligning with real knowledge, regardless of how solid your evidence was for the induction you held.

I still believe distinctive knowledge always comes from applicable knowledge


If distinctive knowledge is a categorical deduction (and maybe potentially also the categorically true logical consequences of the premises of a hypothetical deduction), I agree.

I would clarify that the applicable is not the attempt to verify inductions, it is the deductive result of an induction


If this is the case, then it is distinctive knowledge. A "deductive result of an induction" is simply distinctive knowledge when it happened to be preceded by a belief on the position, which could very well not have been asserted (and the deduction would have still occurred). For example, if I am walking around and pick up an object off the ground, without any prejudgments of what it is, and deduce it is a potato, this is no different ultimately than if I spot it at a distance, claim it is a potato, and then deductively discover it is a potato (it just has more extraneous steps involved).

This is the part you might like Bob, as I believe you've been wanting some type of fundamental universal of "reason". This logic of induction and deduction is reached because we are able to think in terms of premises and conclusions. This is founded on an even simpler notion of "predictions" and "outcomes to predictions". Much like our capability to discretely experience, this is an innate capability of living creatures. I believe this coincides with your definition of "reason" earlier as "decisions with expectations".


I still think we are slowly converging in our views; it is just taking a while (: . I wouldn't put it that way (in terms of reason), but I think you are starting to turn reason recursively on itself and, thus, realizing that "deductions" and "inductions" are innate in us. I'm not entirely sure how you are planning to ground it, but I definitely think you can (assuming it remotely aligns with my conception of reason).

Just to be brief, I think you are going to have to ground it in fundamental logic, which is something I don't think you agree with yet (I think you believe it all to be constructed). That fundamental logic, whatever you want to semantically call it, is going to (I would anticipate) resemble the basic transcendental properties of reason (as in that which is deductively obtained as necessitous and apodictic of the mind, which is concluded to be such due to its ever present--potential infinite--nature in all forms of thoughts). But I will let you navigate the conversation as you deem best fit.

Can we define this in a way that is undeniable, like discretely experiencing?


As of now, and this goes back to way back when we first started having this conversation, I don't think you have grounded "discrete experience" except that it is "undeniably there". But I think you will have to ground both in the same manner (if it is going to be an absolute grounding), and I definitely think you can do it. I think your initial usage of pon is a perfect start (but there's some things, I would say, that precede distinctions--aka discrete experience). I think "discrete experience" is a convenient clumping of many aspects of the fundamentals of the mind, but to achieve your grounding of deductions, premises, conclusions, induction, predictions, etc, I think you are going to have to at least conceptually analyze the sub-categories.

I agree that we may need to save "will" for later and I concur that "reason" hasn't been discussed too much yet. I can do so if you want, otherwise I will simply respond to wherever you navigate the discussion. Likewise, although I think it will be inevitably discussed soon in relation to your "actions" and "premises" and such, I will allow you to decide if you want to discuss "fundamental logic" or not.

I look forward to hearing from you!

Bob
Philosophim April 02, 2022 at 22:18 #676874
A wonderful write up as always Bob. No worry on the time, quality posts take a while to write! I have to think through my responses quite a bit at this point myself, as you often ask new questions I haven't considered before, and I want to mull my initial thoughts over before responding. Let's get to it.

Quoting Bob Ross
Firstly, "distinctive knowledge" is "deductions". "Applicable knowledge" is merely referencing the means of achieving that "distinctive knowledge" (i.e. the transformation of an inductive belief into deductive knowledge--belief into knowledge) and, therefore, is unnecessary for this distinction you are trying to convey.


I think it is important that this distinction remain. What I might have been missing is a third category.
Deductions are knowledge. The difference between distinctive and applicable is what was involved prior in the chain of reasoning. This mirrors the induction hierarchy, though I don't think one deduction is more cogent than another. Deductions without any inductions immediately prior are distinctive knowledge. Deductions concluded immediately from inductions are applicable knowledge. I would not mind renaming the words within that distinction, but the distinction itself is absolutely key to breaking out of the previously failed theories of knowledge. I will see if I can show you why in our conversation.

First, to be clear, deductions are forms of knowledge. Inductions are forms of beliefs. But how we determine those inductions and deductions allows us a different approach. I think the problem is maybe I haven't clearly defined an abstraction. An abstraction is not a deductive conclusion from an induction; it is the formulation of the essential and non-essential properties of an identity. Within a solo context, an abstraction is a tool of your own creation; there are no limits to what you can and cannot create in an abstraction. If you create limits, those are self-imposed limits.

For example, making a game. Imagine there are no people around. I invent the game called "Go fish" on my own. Did anything in reality force me to create those rules? No. Now, can I take a real deck of cards and play a game? That is an induction. Once I confirm that I can or cannot play that game, then I have a new type of knowledge: the conclusion of an induction. That is something that needed to test reality, and either passed or failed.

Another way to view it is when you discretely experience the color "red". Not the word, the experience. Then you say, "That is 'something'". That construction of the essential property, and non-essential property of what 'red' is, is the abstraction, and fully in your creative control.

Quoting Bob Ross
For all intents and purposes here, I am going to elaborate with a distinction of "categorical" vs "hypothetical" deductions (not married to the terms, just for explanation purposes). Although they are both deductions (and, consequently, their conclusions necessarily follow from the overlying principle and subsequent premises), they differ in the validity of the overlying principle itself. If a deduction was "categorical", then it is necessarily (categorically) true.


Your categorical deduction fits the bill perfectly. I agree that is a deduction. But I'm not sure the hypothetical is an actual deduction. Let me point it out:

Quoting Bob Ross
However, if I am asserting this "hypothetically", then I am thereby asserting in virtue of hypothetically holding that it is true that all cats are green.


"All cats are green". Is that by definition, or is that an induction? That is the fine line that must be clarified. If cats are green by definition, as an essential property, then that is what is distinctively known. If, however, color is not an essential property of a cat, then its involvement in our logic does not result in a deduction, but an induction. This is because I am admitting to myself that if I found a red creature with the essential properties of a cat, I would still call it a cat.

Quoting Bob Ross
However, if it was "hypothetical", then the conclusions are only true in virtue of granting the overlying principle as hypothetically true. This can be demonstrated (both of them) in one example:

1. All cats are green
2. Bob is cat
3. Bob is green


This is not hypothetical if you are the one who has determined the definitions. Let's flesh it out correctly.

1. An essential property of cats is they are green.
2. An essential property of Bob is that they are a cat.
3. Therefore, Bob is green.

Including non-essential properties turns this into an induction.

1. An accidental property of cats is they are green. (Could or could not)
2. An essential property of Bob is that they are a cat. (Must be)
3. Therefore, Bob is green.

This is not a deduction. This is an induction because we've basically stated, "Cats could or could not be green". We have deduced an induction based on our abstractions. And we can classify this type of induction using the hierarchy. If, as you implied, we've always seen green cats, but we are willing to accept a cat that could be another color, then this is a speculative induction.
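The essential/accidental distinction above can be sketched in code. This is my own toy representation (the names and data structure are not from the papers): a kind's properties are tagged as essential or accidental, and only the essential tag licenses a deductive conclusion about a member of that kind.

```python
# Toy sketch (hypothetical representation, not from the original posts) of why
# an essential property supports a deduction while an accidental one only
# supports an induction.

ESSENTIAL = "essential"
ACCIDENTAL = "accidental"

def conclude(kind_properties, prop):
    """Return ('deduction', value) when the property is essential to the kind;
    otherwise ('induction', None), since an accidental property could or
    could not hold of any particular member."""
    status, value = kind_properties.get(prop, (ACCIDENTAL, None))
    if status == ESSENTIAL:
        return ("deduction", value)
    return ("induction", None)

# Case 1: greenness is defined as an essential property of cats, and Bob is
# essentially a cat, so "Bob is green" follows necessarily.
cat_essential = {"color": (ESSENTIAL, "green")}
print(conclude(cat_essential, "color"))   # ('deduction', 'green')

# Case 2: color is only accidental, so nothing about Bob's color follows.
cat_accidental = {"color": (ACCIDENTAL, "green")}
print(conclude(cat_accidental, "color"))  # ('induction', None)
```

The choice of where a property sits (essential vs accidental) is the self-imposed definitional choice the post describes; the code only shows what each choice entitles you to conclude.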

So, we can abstract both deductions and inductions. And this abstraction is distinctive knowledge. Abstractions are not applicable knowledge, because applicable knowledge only comes about after we have taken our abstracted induction and deductively concluded the result.

Quoting Bob Ross
if I define "glass" as having the essential property of being "(1) clear and (2) made from melting sand", then, assuming I didn't watch it get made, I can't assert that this pane in front of me is actually "glass": it would be an induction.


True. Based on the context of your definition, you will never applicably know whether that is glass.

Quoting Bob Ross
So, although I deductively discover the properties of the presumably "glass pane" in front of me, I do not deductively obtain that it is thereby "glass" (I inductively assert it is).


Just to clarify, if you meant that you deduced that the glass was made of silica, clear, and for all intents and purposes, had all the non-essential properties of a window, but you could not find the one essential property "That it was formed through melting sand", then yes, you could only ever inductively know it as a window.

Quoting Bob Ross
I would also like to briefly clarify that my square root of 25 example was meant as a "solo context", as it can be posited as either one (but I should have made that clear, so that's my bad). The dilemma is still there if we were to presume that I came up with the mathematical operation of the square root. I came up with it a year ago, by myself, and began memorizing the answers of the square roots of like 100 integers (or what have you). Then, a year later, I ask myself "what's the square root of 25?". I immediately assert it is "6" in reference to what I believe was what I memorized a year ago (in accordance with the mathematical rules I produced). That's an induction.


This is such a good point! Let's walk through this. So at the time when you state, "the answer is 6", that's still distinctive knowledge and a deduction. That is because what you experience remembering as the answer, is the answer. There's no one else to tell you that you are wrong. There's no other answer you can give, because that is what you remember.

Stay with me here, because I know how that can sound at first. Later you may "remember differently" or find a record of the logic that you put down. At that time, you will know that your original deduction was wrong. But that was what you still distinctively knew at the time. With new knowledge to revise the structure that you had, you now distinctively know that the square root of 25 is not 6.

The bigger question, and the part where you may be right that we can induce abstractly, is when you make the claim, "What I remember today is the same thing I remembered yesterday." What a head twister honestly! That by nature is always an induction. Or is it? Can't I simply decide yes or no? I can, but that decision is a belief, and therefore an induction. Is the deduced conclusion to that induction, then, applicable knowledge?

If I have no outside evidence of the past, or record of the past, the answer is still what I ultimately decide. If I remember, "Yes, I do," then I've been given an answer, but from my own mind. If I remember that I do in fact remember what I remembered yesterday, that's an answer that I distinctively know. But is it true? Is that really a deductive answer to an induction? It is, because it is the distinctive knowledge that I have. Same as if I experienced that I did not remember the same thing I did yesterday (even if I'm incorrect objectively). Finally, it is deductive if I conclude, "I can't trust my own memory anymore, so I don't know."

But, and here's the kicker, is the answer to this induction, a deduction or another induction? What's interesting about this case is it may not fit either. I'm not sure, so I'm going to break it down.

Is there a premise in the drawn conclusion to the original induction?

Case 1. I remember that what I remembered yesterday, is what I remember today.

Since what we discretely experience is what we distinctively know, I distinctively know this. Thus this conclusion is actually a deduction, even if there was some outside evidence, and even if this was not true.

Case 2. I remember that what I remembered yesterday, is not what I remember today.

The outcome is the same as case one. This is distinctive knowledge.

Case 3. I conclude "I'm unsure if what I remembered today is what I remembered yesterday."

Let's call this the Descartes Doubt case. The answer to case 3 cannot be found by anything outside of our own deductions again. This is the "I doubt even my own thinking". What is the answer that we deduce in this case? "That I cannot remember if what I remembered yesterday, is what I remember today." This is not an induction, as we have concluded that this is the case with no alternatives. This is what I discretely experience.

In short, what we conclude in a prior reference to our memory, an abstraction, is a deduction, because it is whatever we experience.

But, let's compare this to another scenario in which I know I wrote down what I remembered yesterday: "The square root of 25 is 5" (according to my made-up rules). If I remembered this existed, and this paper could prove that I correctly remembered what I knew yesterday, then I would deductively know that I could not ascertain an answer unless I found it. This is not the same as claiming, "I believe the paper says the square root of 25 is 6." That is an induction, and can only be denied or confirmed once the paper is discovered.

The entire point I want to note is that abstractions, which are entirely in our head, can never be inductions in themselves. We can use those abstractions as inductions, and when we do, we can gain applicable knowledge by deductively solving the conclusion. When we make abstractions in our head and apply them to abstractions in our head (that we have made up) there is no induction, because it is whatever we conclude.

That being said, we can classify deductions in two ways, and I believe these are important identities.

1. Deductions in which the premises are not changed.
2. Deductions in which the original premises are changed and amended.

Recall that when one applies an induction, they can amend their terms to fit the new conclusions. So for example, if I considered all cats green as an essential property, and I found a feline that matches all the essential properties of a cat except that it was red, I might decide to amend the essential property of color into an accidental property. I could also simply keep the color as an essential property, and conclude from the induction that I had found a new animal. That choice is mine. But perhaps noting when we change or amend our original distinctive knowledge versus when we do not change or amend our original distinctive knowledge is a key difference.

Quoting Bob Ross
I don't think this is what you meant by (0, 0): I think you are arguing for the allowance of minimally ambiguous terminology.


This was a reference to a mathematical concept. If you're not familiar with it, it's not a good example, so let's not worry about it.

Quoting Bob Ross
I would clarify that the applicable is not the attempt to verify inductions, it is the deductive result of an induction

If this is the case, then it is distinctive knowledge.


Since distinctive knowledge is a particular knowledge that precludes the involvement of prior inductions in its conclusions, no. Recall that both forms of knowledge are deductions. Just like the hierarchy of inductions, it is the steps that we take to arrive at those deductions that create the essential difference.

Quoting Bob Ross
I still think we are slowly converging in our views, it is just taking a while (:


Ha ha! Yes, I honestly feel our views are off by only very small differences. I think this is one of the reasons the conversation has been so engaging and helpful (for me at least). You've been able to point out that slightly semantic/alternative view point that really tests what I'm proposing, and makes me think. It has helped me amend and leave out a few approaches that you have shown are unnecessary or simply confusing. As always, it is appreciated to find another person who is interested in the truth of the matter and the refinement of the discussion.

Quoting Bob Ross
I think you are starting to turn reason recursively on itself and, thus, realizing that "deductions" and "inductions" are innate in us.


One thing I want to clarify is that I agree that the capability to deduce and induce is innately within us. Distinctively knowing these words and these concepts is something which must be discovered. One can accidentally deduce or induce, but not have any distinctive knowledge that they do. So by "The logic of deduction and induction are reached by..." I meant "The knowledge of the logic of deduction and induction is reached by..."

Quoting Bob Ross
I think "discrete experience" is a convenient clumping of many aspects of the fundamentals of the mind, but to achieve your grounding of deductions, premises, conclusions, induction, predictions, etc, I think you are going to have to at least conceptually analyze the sub-categories.


Full agreement with you. Another large write up from me! I hope I covered the points; please let me know if there is something that I missed or did not clarify. I fully expect a response on the claim that abstractions are essentially distinctive knowledge and cannot be inductively concluded.
Bob Ross April 05, 2022 at 18:27 #677964
@Philosophim,

I want to disclaim that this post is going to be quite complicated, as you brought up an incredibly valid, and thought-provoking, dilemma which deserves an adequate response. The reliability of memories was a keen insight, Philosophim!

Before I dive into that dilemma, let me first address deductions.

But I'm not sure the hypothetical is an actual deduction. Let me point it out


A deductive argument is one whose conclusion is necessitated by its premises; it does not require that the premises be true. So, a better way to propose my cat example, at first glance here, is:

1. IF all cats are green
2. IF bob is a cat
3. THEN bob is green

You are absolutely correct that #1 and #2 could be false (even an induction), but that doesn't mean it isn't, by definition, a deduction. I understand what you were trying to get at with your refurbishment, which looked like:

1. An essential property of cats is they are green.
2. An essential property of Bob is that they are a cat.
3. Therefore, Bob is green.

1. An accidental property of cats is they are green. (Could or could not)
2. An essential property of Bob is that they are a cat. (Must be)
3. Therefore, Bob is green.

My response is tricky here, because you are sort of right when you posit #1 like that. But I still don't think you are right that deductions can't have incorrect (or inductive) premises (deductions are defined by their form, not their truth value). The first deduction here I think we both agree is a "categorical deduction", but the second one isn't really a deduction (I would agree) because #1 is not positing IF. In my head, it is equivalent to:

1. Not all cats are green
2. Bob is a cat
3. Bob is green

That isn't a deduction because it doesn't have the logical necessitous form (has nothing to do with whether they are true, just that the premises necessitate the conclusion). My main point here is that this would be a hypothetical deduction:

1. IF an essential property of cats is that they are green
2. IF an essential property of bob is that they are a cat
3. THEN bob is green

This was not categorical, in the sense I was meaning it, because I am not, in positing it, affirming the truth of #1 and #2 (however, it is still indeed a deduction that may or may not be true). This is different from actually claiming that I am categorically defining cats as necessarily having the essential property of greenness (as in cats actually are all green). So, in short, I think you are right that, in the manner you depicted it, it would not be a deduction, but this is not based on truth value: it is about the form. However, I still think hypotheticals are different than categoricals. A deductive argument is one in which, IF the premises are granted, the conclusion is necessitated. The premises could be inductions.
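Bob's point that a deduction is defined by its form rather than the truth of its premises can be illustrated with a toy model-checker. This is a hypothetical sketch of my own (none of these names come from the discussion): an argument is valid iff no possible world makes all the premises true while the conclusion is false, which holds for the IF-form cat argument and fails for the "not all cats are green" version.

```python
from itertools import product

# Toy validity check: enumerate every tiny "world" (assignment of is-a-cat /
# is-green to two individuals) and ask whether the premises can hold while
# the conclusion fails. Validity never depends on which world is actual.

individuals = ["bob", "alice"]

def worlds():
    for flags in product([False, True], repeat=2 * len(individuals)):
        yield {ind: {"cat": flags[2 * i], "green": flags[2 * i + 1]}
               for i, ind in enumerate(individuals)}

def valid(premises, conclusion):
    """Deductively valid iff the conclusion holds in every world where all
    the premises hold -- regardless of whether the premises are true."""
    return all(conclusion(w) for w in worlds()
               if all(p(w) for p in premises))

all_cats_green = lambda w: all(v["green"] for v in w.values() if v["cat"])
bob_is_cat     = lambda w: w["bob"]["cat"]
not_all_green  = lambda w: not all_cats_green(w)
bob_is_green   = lambda w: w["bob"]["green"]

# IF all cats are green, and IF Bob is a cat, THEN Bob is green: valid.
print(valid([all_cats_green, bob_is_cat], bob_is_green))  # True

# "Not all cats are green; Bob is a cat; Bob is green": not a deduction.
print(valid([not_all_green, bob_is_cat], bob_is_green))   # False
```

Note that the first argument is valid even in worlds where "all cats are green" is false; that is exactly the hypothetical reading, where validity is about the IF-THEN form and soundness is a further question.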

Alright, now it is time for the main dilemma you posited: the reliability of memories (which I would extend to thoughts as well). Fair warning that this gets complicated fast, but I know you can handle it (: So, firstly I want to give a brief overview of what I think and then dive into what you said.

Here's a brief overview first:

1. I cannot doubt a thought until after it becomes a part of the past (therefrom an absolute grounding of trust is established).
2. Any given past thought is always recollected as a reliable memory (in virtue of #1).
3. The validity of a given past thought is deduced insofar as it relates to other past thoughts.
4. The reliability of the total set of past thoughts is never established (inductively nor deductively) because it is an illusory transcendent concept.
5. Inductions can arise pertaining to deduced memories.

Let's talk about #1: I cannot doubt a thought until after it becomes a part of the past. The "present thought", which I will define as 0, is always necessarily granted as trustworthy, and this is apodictic. However, the proof for this is not an easy feat. The problem is that to claim a "present thought" is taken as trustworthy (albeit potentially questioned thereafter by even the very next thought) requires that its immediate trustworthiness be evaluated by a subsequent thought--thereby rendering it a past thought (which means, at face value, the very last thought is being utilized as reliable to deduce that when it was the "present thought" it was necessarily trusted). However, the proof for the immediate trustworthiness of the "present thought" cannot rely on the reliability of a past thought (because that would defeat the whole purpose). Therein lies the difficulty. But I realized this can nevertheless be proven (I think at least), because I can deduce (regardless of the validity of any thoughts) that if a past thought hypothetically was at one point actually the "present thought" and it wasn't immediately trusted (prior to another thought succeeding it), then I would never have a coherent sequence of reason. Therefore, I would never be convinced of anything. But since I am convinced of things, and thus have coherent sequences of reason, I know that I must be trusting the "present thought". In short, I think there are two logically true statements we can make regardless of the reliability of the total set of past thoughts:

1. Regardless of the validity, my past thoughts are always in succession, therefore in a sequence, which necessitates boundaries. Which in turn, necessitates the "present thought".
2. if any given past thought was actually at one point in time the "present thought", then it is necessarily the case that it was trusted immediately. For, otherwise, I would not have obtained the coherent sequence of past thoughts, regardless of the validity therein.

This brings us to the vital understanding of #4: The reliability of the total set of past thoughts is never established (inductively nor deductively) because it is an illusory transcendent concept. I can only merely prove that, given the sequence of past thoughts I have, if any given past thought was the "present thought", then I would logically be obligated to trust it immediately prior to another thought manifesting. But this doesn't speak to whether the sequence of past thoughts I am analyzing is indeed reliable (for all I know, my "present thought" is referencing a completely false previous past thought or the whole set is fallacious). The main problem is that I am always inferring the "present thought" by virtue of the sequence of past thoughts. Therefore, the concept of a past thought existing objectively as itself does not exist, for I am always potentially infinitely referencing memories via other memories.

My brain hurts (:

Now, this means, if I am correct (emphasis on if), then it is deduced that the absolute grounding of trust is the "present thought", which can, admittedly, be doubted fervently thereafter.

Now on to #2: Any given past thought is always recollected as a reliable memory (in virtue of #1). Recollection is the process of retrieving a past thought, which inevitably brings it forward as the present thought. Therefore, as the memory is loaded into the present thought, it is granted trustworthiness (although it can be questioned thereafter). Recollection, although it does bring forth past thoughts as a present thought, does not "refresh the time stamp" so to speak: the memory itself is merely referenced in relation to when it is thought to be in the sequence of past thoughts, but the recollection itself, being a present thought, is always appended to the succession of thoughts separately. For example:

1. if I remember memory A, I am recollecting it.
2. Recollection entails A being presented as the “present thought”, 0
3. therefore, 0 is referencing A (i.e. the recollection is not A, it is 0 which references A)
4. therefore, A is still referenced in the sequence of past thoughts where it is remembered to have occurred relative to the others, but 0 will become a new past thought (aka: memory of remembering A)
5. This occurs recursively for a potential infinite


Moreover, #4 here is not completely explained (as noted by the emphasis on “remembered”): in immediate recollection, whatever is referenced from A in 0 is immediately trusted. If A contained holistic or partial references to where it is in the collection of past thoughts, then that is immediately trusted as well. However, if A doesn’t contain where in the collection it should be (i.e. its index), then a subsequent thought will be required to attempt to deduce what is remembered as its index (which is subjected to the same process as previously described).

Now, the doubting occurs when a remembrance of a memory (0 now as a past thought) is examined by 0 (the present thought) in relation to what could potentially be the difference of A and &A (A being the memory, &A being the reference to A in 0). In other words, &A is posited as potentially not holistically referencing A as what it initially was, and therefore potentially A != &A; therefrom the dilemma occurs. But, to invoke #4 (from my original generalization of my views), the validity of the thoughts is never obtained nor actually performed outside of a relation between past thoughts and, therefore, the answer to the reliability of all thoughts is unobtainable. The apodictic nature of referencing past thoughts in the present thought entails that the concept of a thought as itself vs how it was referenced (A vs &A) is illusory. It would only ever be how A is considered by some subset of past thoughts vs how &A is considered by some subset of past thoughts: thereby never achieving a transcendent concept of “a true thought in-itself”.
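The A vs &A recollection model might be sketched as follows. This is my own minimal representation (assuming, per steps 1-5 above, that thoughts form an append-only succession): recalling a memory never retrieves it "in-itself" but appends a new present thought holding only a reference to it, and that recollection itself immediately becomes another past thought, recursively.

```python
# Hypothetical sketch of the recollection model: recalling memory A never
# moves or reproduces A; it appends a new present thought that merely holds
# a reference ("&A") to A's place in the succession.

thoughts = ["A"]               # the succession of past thoughts, oldest first

def recollect(index):
    """Bring the thought at `index` forward: the present thought is a new
    entry referencing the old one, appended to the succession."""
    ref = ("&", index)          # &A: a reference to A, not A itself
    thoughts.append(ref)
    return ref

recollect(0)   # remember A                       -> ['A', ('&', 0)]
recollect(1)   # remember remembering A (step 5)  -> ['A', ('&', 0), ('&', 1)]
print(thoughts)
```

Each recollection can itself be recollected, which is the "potential infinite" regress: comparing A with &A only ever compares entries in the succession, never A "in-itself".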

As we already established #4, #3 (in my original generalization) simply denotes that what really happens when we question our past thoughts (and sometimes determine some to be unreliable and others reliable and still others undetermined) is that we are only establishing "reliability" as it relates to other past thoughts: it is the analysis of the sequence of past thoughts via the present thought (which is always granted as trusted immediately). The procedure of determining what is reliable or not is not relevant to the dilemma itself, so I will leave it there.

Now, how does that all relate to what you said? Well, I think you are partially right:


Case 1. I remember that what I remembered yesterday, is what I remember today.
Case 2. I remember that what I remembered yesterday, is not what I remember today.
Case 3. I conclude "I'm unsure if what I remembered today is what I remembered yesterday."


If your cases are referring to one memory's validity in relation to the set of past thoughts, then you are right that we can deduce such. If you are trying to derive the validity of the entirety of the set of past thoughts, then you are wrong (it is an illusory concept that acts as if it has transcended reason). They seem to be lacking the consideration that it is a recursive dilemma. The first two cases are explicitly self-contradictory ("I remember"), and the last case is essentially the same thing: they all beg the question of the validity of those memories being utilized to resolve the conflicting memories. It is a recursive operation that is inevitable, but can be accurately portrayed in a non-absurd manner if one realizes that it is all relative to the absolute point of trust: the present thought.

Now, let me address your main contention here:
In short, in what we conclude in a prior reference to our memory, an abstraction, is a deduction because it is whatever we experience.


I think you are partially correct. In terms of the process of thinking as outlined previously, the reliability in relation to another past thought is deduced. Likewise, it is deduced that there is a "present thought" and that it necessarily is trusted. However, the reliability of the set of past thoughts is not determined. Also, I still think that an induction is possible abstractly, however your definition of "abstraction" doesn't allow it by definition (and I would say it is not a mainstream definition of abstraction). None of this entails that something cannot be an induction pertaining to two deduced subsets of memories.

So at the time when you state, "the answer is 6", that's still distinctive knowledge and deduction. That is because what you experience remembering as the answer, is the answer.


This is where #5 (from my original generalization) comes into play: this is simply not true. I deduce that I remember the answer being 6, but that does not mean I deduced that that memory must be correct in relation to what I remember are the rules of the operation of the square root. I induced that it was correct, based off of the fact I remember the answer being 6. Nothing about me remembering that the answer is 6, even if it could be proven it was 100% accurate that I did indeed answer it as 6 before, necessitates that the answer actually is 6 (in accordance to what I remember is the mathematical operation). Deductions are what necessarily follow from the premises. Now, it is deduced that the answer must follow my pre-determined operation of the square root, which is subjected to your critique that I may not remember that operation reliably, but nothing about my memory of answering a particular way necessitates that it is the answer. I think what you are missing is that both the operation and the answer are deduced memories, which are compared, and you are correct in the case of questioning the memory of the operation (whatever I remember is the square root operation, is the square root operation), but the connection of the memory of the answer 6 being accurate to the memory of the operation of the square root is an induction. If I remember the operation of the square root (whatever that may be) and remember answering six, I can logically, abstractly, derive whether my memory of answering six actually aligns with the correct answer (as derived from my memory of the operation).

Look at it this way:

1. IF I am remembering correctly that I previously answered 6.
2. THEN the answer to the square root of 25 is 6

Does the conclusion necessarily follow from the premise? No. Therefore, it is not a deduction. I think your critique is perfectly valid, and very thought-provoking, in terms of the reliability of the operation of the square root. Likewise, let’s say I remember that there was a mathematical operation of the square root but I can’t remember what it was at all, then it may be the case that the most cogent induction is to go with what I remember answering with before: but it is not a deduction.
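Bob's point here can be made concrete with a small sketch (not from the original posts; `math.isqrt` stands in for "the remembered operation" purely for illustration):

```python
import math

# The memory of a past answer and the memory of the operation are two
# separate things; the first does not necessitate the second.
remembered_answer = 6               # "I remember that I previously answered 6"
remembered_operation = math.isqrt   # "I remember the square-root operation"

# The deduction only runs from the operation: whatever the remembered
# operation yields for 25 IS the answer, by definition.
derived_answer = remembered_operation(25)
print(derived_answer)                       # 5

# Comparing the two memories exposes the induction: remembering that I
# answered 6 never guaranteed that 6 satisfies the remembered operation.
print(remembered_answer == derived_answer)  # False
```

The mismatch is only discovered by actually applying the remembered operation, which is the separate deduction Bob describes.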

I think I may need to stop here for now. Wonderful post Philosophim!
Bob
Philosophim April 07, 2022 at 23:56 #679159
Quoting Bob Ross
My main point here is that this would be a hypothetical deduction:

1. IF an essential property of cats is that they are green
2. IF an essential property of bob is that they are a cat
3. THEN bob is green


This would fit. This would be a deduction based off of two inductions that we do not know the results of. The entirety of this would still be distinctive knowledge. Only after the 2 induced premises had a deduced conclusion, would we call the result applicable knowledge. The question will be when those first two premises are "inductions", and when they aren't.

So yes, we can make deduced conclusions based on hypothetical results to inductions. I still see this as only distinctive knowledge, because we don't have deductions within the premises, but inductions where we assert a possible outcome. The conclusion to the deduction has not been deduced. I will come back to this at the end.

Now to the main event! Fantastic post that took a lot of thinking and work. Let me see if I can adequately address it.

First, I want to commend that I believe you put together a great list of premises and arguments. I'm going to "translate" it where I can into the foundational epistemology I've proposed. I don't want this to come off as dismissive or unappreciative of the great argument you've set up. It is just the goal of this endeavor is to create an epistemology that can be applied and supply an answer to any epistemological question. So with this, I'll start.

Quoting Bob Ross
1. I cannot doubt a thought until after it becomes a part of the past (therefrom an absolute grounding of trust is established).


According to the foundational epistemology I've proposed, you can doubt anything you want. But can I distinctively know I have that thought? Yes. A memory is a thought which can be a recollection of the past. The question of course is, "How does the foundational epistemology resolve the question, 'Is my memory of the past accurate?'" And here we try to figure out the resolution.

Quoting Bob Ross
2. Any given past thought is always recollected as a reliable memory (in virtue of #1).


Here I want to slightly tweak this. Any given past thought is a current thought. Meaning that we can distinctively know that current thought. If I experience that my memory is reliable, that is what I distinctively know. If I experience that my memory is unreliable, that is what I distinctively know. I may have some unanswered uncertainty in my question, but I have certainty that the memory and questions I am experiencing are distinctively known.

Quoting Bob Ross
3. The validity of a given past thought is deduced insofar as it relates to other past thoughts.


When we say validity, it is a deduced conclusion. If we do not have applicable knowledge, we can only make a deduction about the accuracy of a past thought based on the distinctive knowledge we have. As past thoughts are distinctively known, this statement you've made seems accurate.

Let's continue with your conclusions.

Quoting Bob Ross
It is a recursive operation that is inevitable, but can be accurately portrayed in a non-absurd manner if one realizes that it is all relative to the absolute point of trust: the present thought.


This is where I want to go next. A memory is a present thought that is thought to represent a past time in some sense. But it is a present thought regardless. That memory is what is discretely experienced currently.

Quoting Bob Ross
Now, let me address your main contention here:
In short, in what we conclude in a prior reference to our memory, an abstraction, is a deduction because it is whatever we experience.

I think you are partially correct. In terms of the process of thinking as outlined previously, the reliability in relation to another past thought is deduced. Likewise, it is deduced that there is a "present thought" and that it necessarily is trusted. However, the reliability of set of past thoughts is not determined.


When you say reliability, do you mean distinctive, or applicable? In the distinctive case, we know without question what that set of past thoughts is. If you extend it to an applicable level however, when you make an induction and a deduced conclusion can be reached, this is a different sense of "reliable". I believe when you agree that I am partially right, you are referring to the distinctive sense.

Quoting Bob Ross
Also, I still think that an induction is possible abstractly, however your definition of "abstraction" doesn't allow it by definition (and I would say it is not a main stream definition of abstraction).


I want to clarify that we can make inductions from abstractions. That is how we create beliefs. What I wanted to assert was that abstractions themselves are not applicable knowledge. This is because we can create any abstraction we want, and thus there is no conclusion that necessarily follows the premises besides what we invent.

For example, I tell myself, "I believe that 1+1=2." That is an induction, but if I have created the rules of math, it really isn't. If I remember 1+1=2, then that is what I remember. If I remembered that 1+1=3, then I wouldn't believe that 1+1=2. The only time I can make a seeming induction is if I state, "Maybe I don't remember what 1+1 equals", but even the solution to this is whatever I abstractly come up with.

To be a real induction, it must involve something we cannot simply conclude ourselves. There must be something outside of our own power and agency that creates a conclusion that does not necessarily follow from the premises we've created. Only in that situation can you have applicable knowledge. And yes, the way I've defined abstraction, if an abstraction is the deduced conclusion to an induction, it was never an induction to begin with. Perhaps that is unfair, but it's simply a conclusion I've come to using the epistemology.

To be very clear, this is because an abstraction has no rules besides what you make. There is no one besides yourself who can tell you your own created abstraction is "wrong". No one to tell you but yourself that your memory is "wrong". In short, abstractions are our limitless potential to "part and parcel" as we like.

Quoting Bob Ross
1. IF I am remembering correctly that I previously answered 6.
2. THEN the answer to the square root of 25 is 6

Does the conclusion necessarily follow from the premise? No. Therefore, it is not a deduction.


It is a hypothetical deduction as you noted earlier. The question comes into play when we consider what appears to be an induction in premise one. There is one key here. You determine whether you remember correctly that the previous answer is six. If you do, then you do. If you remember that it is 7, then it is 7. And if you never do, you never do. That conclusion is distinctive, not applicable. Because there really was no induction. There is no uncertainty, for the continuation of the deduction results that the answer is, "Whatever you abstractly choose".

This is why it is essential to keep a difference between distinctive and applicable knowledge. Inside of your own head (to simplify an example) we are masters of our own universe, and can "reason" however we wish. It is the fact that there are things beyond abstractions that force us to re-evaluate the world we've created. To look at our identities once again, and realize there is a "right" and a "wrong". That was the original intention of "reality", though I don't think the word is needed any more.

In our own head, inductions are just pauses before we formulate the answer. An induction can only truly occur when we are not the sole masters of the outcome.

Quoting Bob Ross
But I realized this can nevertheless be proven (I think at least), because I can deduce (regardless of the validity of any thoughts) that if a past thought hypothetically was at one point actually the "present thought" and it wasn't immediately trusted (prior to another thought succeeding it) then I would never have a coherent sequence of reason. Therefore, I would never be convinced of anything.


If you are a purely abstracting being, then you decided it was a coherent sequence of reason. You just as easily could have decided it was not. You could decide to never be convinced of anything. That is the danger of a mind that lives purely in abstraction. Such an experience would be a dream world. It is only through our experience of situations in which our abstractions fail that we can realize certain abstractions are not useful. That is when we create true inductions, where we cannot deduce the outcome until we apply that induction and experience its resolution.

Going back to the start now.

1. IF an essential property of cats is that they are green
2. IF an essential property of bob is that they are a cat
3. THEN bob is green

In the solo context, the answer to the "inductions" is whatever we decide. We decide if they are essential properties or not. They are not inductions, their conclusion is certain to whatever we decide.

If however, we pull another person into the equation, a society with written rules, then we have an evolution. I cannot conclude whatever I want. I must make an induction, a belief about what society will decide. The answer to that, is applicable knowledge. Even then, the abstractions that society creates, which I must test my beliefs against, are its distinctive context, not applicable context. We could encounter a society that decides math works differently. It is only when we apply that distinctive context to an actual situation, "1 potato plus 1 potato equals 2 potatoes", that we can deduce whether society's abstraction can be applicably known as well.
Bob Ross April 13, 2022 at 01:15 #680884
@Philosophim,

I apologize, the week has been quite busy for me.

Firstly, I think we need to revisit the "distinctive" vs "applicable" knowledge distinction holistically because I am still not understanding why it is important. Hypothetically, if I were to grant you that abstractions never are inductions, and subsequently that there are two distinct methods of arriving at a deduction, I don't see the meaningfulness behind such a distinction. I went ahead and re-read your past two posts, and, to just quote you briefly, this is generally what you stated (although I could just be missing it as I am re-reading):

I would not mind renaming the words within that distinction, but that distinction is absolutely key to breaking out of the previously failed theories of knowledge. I will see if I can show you why in our conversation.


Even after re-reading the whole post (this is two posts back), I don't see how this achieves nor is necessary to "break out of the previously failed theories of knowledge". I understand (at least I think) what you are referring to by what failed in previous theories, but I see this evidently clear in two key principles of your epistemology: (1) inductions are not knowledge and (2) inductions are not equally cogent as one another. These are the two principles, as I see it, that are vital to breaking out of such failed epistemologies: nothing pertaining to the distinction between methods prior to deducing knowledge. Yes you could technically, if I grant that abstractions are not inductions themselves, make a distinction between a deducing after conjuring an induction vs abstractly deducing, but this has no bearing on what I think is the bedrock of your epistemology. Principle #1 demonstrates exactly what you have been outlining in your examples (such as inventing a game with cards abstractly vs non-abstractly): if I induce it, I do not know. I think it is that simple and, thusly, am failing (even in terms of granting your argument as far as I can imagine) to understand the importance of distinguishing that I can thereafter obtain knowledge of what I deduce in relation to that induction. Again, principle #1 outlines this clearly already.

I guess where I am confused is: why not just say "if you didn't deduce it, you don't know it" instead of "you don't gain applicable knowledge until it is deduced"? It seems like the latter is obviously given (at least to me) in the former: regardless of when we can, as subjects, conjure an induction and when we can't. My question for you is, given that you clearly see it as vital to the epistemology, what am I missing? I'm sure I am just missing something.

Likewise, I don't think "applicable knowledge", in the sense of a deduced conclusion pertaining to an induction, has any actual relation to the induction. The induction and deduction are completely separate: mutually exclusive. To say I induced something, then deduced knowledge that happens to fall under that same category of inquiry is just that: a coincidence or, at best, the induction was merely the motivation but necessarily has no direct relation to the obtaining of knowledge whatsoever.

I think clearing that up will help with what we are currently conversing about. Now on to your most recent post:

I don't want this to come off as dismissive or unappreciative of the great argument you've set up. It is just the goal of this endeavor is to create an epistemology that can be applied and supply an answer to any epistemological question.


Absolutely no problem! Do what you wish with my responses: I never want you to feel obligated to address it in a specific manner (or in its entirety).

According to the foundational epistemology I've proposed, you can doubt anything you want.


So this is tricky. If by "doubt everything" you mean that everything is technically falsifiable, then yes I agree. However, once we endeavor on our journey of doubt, we realize that we have obtained that certain things cannot be doubted. So, in another sense, I disagree: you cannot doubt everything. You cannot doubt, as outlined in my previous post, the "present thought". Sure, you can doubt my assertion of it, disagree with it, etc., but you will nevertheless always be trusting your "present thought" to the degree I mentioned before. If you are claiming that your epistemology allows for "pure doubting" of literally everything, wherein the subject never obtains anything which it realizes it strictly cannot doubt, then I think that is simply false (but I have no problem if you mean it in the sense that everything is falsifiable).

The entirety of this would still be distinctive knowledge. Only after the 2 induced premises had a deduced conclusion, would we call the result applicable knowledge.


Although I want to agree with what you are proposing here, upon further reflection, the hypothetical deduction has no inductions (not even in the premises)(nor do deductions in general). To state that "IF an essential property of cats is that they are green" is not an induction: it is simply a logical conditional. I am not asserting that given repetition I think that an essential property of cats is "greeness", I am simply stating that IF it is, then this is what logically follows. My main point here is that you would be correct if they were inductions, in terms of how you defined applicable knowledge, but the premises are logically verified (i.e. IF) and are thereby certain. In other words, although I was onboard with the idea of deductive premises being inductions, I think that "IF ..." conditionals are deductively verified to be true: "IF .." is not incorrect. Even if I stated "IF a square circle ...", that is valid, but if I stated "a square circle ...", that is invalid. This is because I am not asserting that the contents of the IF are true or actually can be true, only that if granted as true what would follow logically. So, I don't think this hypothetical deduction's premises would ever become applicable knowledge.

Now what I think you were trying to get at (and correct me if I am wrong) is that if we were to remove the IF conditional and try to verify the content, then it is either deductively ascertained or inductive. If it is inductive, then we do not know it until it is deduced (thereby becoming applicable knowledge). My point is that the premises, when postulated with IF conditionals, are not inductions. Now let's go back to your original example (because I think I can more adequately address it now):

1. An accidental property of cats is they are green. (Could or could not)
2. An essential property of Bob is that they are a cat. (Must be)
3. Therefore, Bob is green.


This is not a deduction. Why? Because premise #1 does not logically necessitate the conclusion (which is the definition of a deduction). You haven't posited IF all cats are green, you've posited it logically as not necessary for a cat to be green, which means it does not necessarily follow that Bob is green. Therefore, this is not actually a deduction.

1. An essential property of cats is they are green.
2. An essential property of Bob is that they are a cat.
3. Therefore, Bob is green.


However, this would be a deduction, because you have posited it in a way that necessitates the conclusion. But my main point is that this is not "necessitated" in the sense the premises are being argued as actually true, only that, at the very least, are granted as true in an IF conditional.

So, although they would both be valid deductions, this is not quite the same as your previous example (in the above quote):

1. IF an essential property of cats is they are green.
2. IF an essential property of Bob is that they are a cat.
3. Then bob is green

This is also a valid deduction, but is not asserting that the premises are actually true, which is why I distinguished this as a "hypothetical deduction". But what I was missing in my previous response is that a deduction cannot, by definition, have an induction as a premise (that would mean the conclusion does not necessarily follow).
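The difference between the two formulations can be checked mechanically: a form is a deduction only if no assignment of its terms makes the premises true while the conclusion is false. A toy model check (illustrative, not from the original posts):

```python
from itertools import product

# Each "world" assigns bob two attributes: is_cat, is_green.
worlds = [dict(is_cat=c, is_green=g) for c, g in product([True, False], repeat=2)]

# Essential-property reading: "all cats are green" + "bob is a cat".
# A counterexample would be a world where both premises hold but the
# conclusion ("bob is green") fails.
essential_counterexamples = [
    w for w in worlds
    if (not w["is_cat"] or w["is_green"])  # premise 1: cat -> green
    and w["is_cat"]                        # premise 2: bob is a cat
    and not w["is_green"]                  # conclusion fails
]
print(essential_counterexamples)  # [] -> valid deduction

# Accidental-property reading places no constraint linking cat-hood to
# greenness, so a counterexample survives: bob is a cat but not green.
accidental_counterexamples = [
    w for w in worlds if w["is_cat"] and not w["is_green"]
]
print(len(accidental_counterexamples) > 0)  # True -> not a deduction
```

The empty list for the essential reading is exactly Bob's point: granting the IF-premises, no world falsifies the conclusion, whereas the accidental reading leaves one open.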

The question will be when those first two premises are "inductions", and when they aren't.


They never are inductions, unless it wasn't a deduction to begin with.

In the solo context, the answer to the "inductions" is whatever we decide. We decide if they are essential properties or not. They are not inductions, their conclusion is certain to whatever we decide.


Again, they are never inductions. I think you are conflating an induction with logical IF conditionals; I don't think they are the same. Sure, we can decide what is categorical and what is hypothetical insofar as we do not contradict ourselves. I cannot willy-nilly conjure up whatever I want.

If however, we pull another person into the equation, a society with written rules, then we have an evolution. I cannot conclude whatever I want. I must make an induction, a belief about what society will decide. The answer to that, is applicable knowledge. Even then, the abstracts of society that it creates, that I must test my beliefs against, are its distinctive context, not applicable context.


The same critique you made of solo contexts applies to societal contexts: I can deny whatever society throws at me, just like I can deny whatever I throw at myself. Ultimately I have to decide what to accept and what not to. If someone else came up with:

1. IF an essential property of cats is that they are green
2. IF an essential property of bob is that they are a cat
3. THEN bob is green

We are still in the same dilemma. I don't think the process is as different as you may think.

In the solo context, the answer to the "inductions" is whatever we decide.


The answer to anything is what we decide (ultimately). This doesn't mean we are right, and it surely doesn't mean (in either solo or societal contexts) that we are actually completely free to do whatever we want.

If you are a purely abstracting being, then you decided it was a coherent sequence of reason. You just as easily could have decided it was not.


I agree. But this doesn't entail what you are trying to entail. Just because I can utter the words "I decide that it was not a coherent sequence of reason", that does not make it so. Just because I am convinced of it, that does not make it so. And as an example, your next sentence is a great explication of this:

You could decide to never be convinced of anything


This is true in the sense that I can be convinced that I am not convinced of anything; however, I am definitively wrong because I am thereby convinced of something. The danger of the mind is that it can fail to grasp things, not that it can do whatever it wants. Reason is not relative; it is absolute in relation to the subject at hand. I can utter and be convinced that "pon is false", but thereby it is true. I can fail to grasp that, or it may never pop into my head, but that is still an absolute grounding for me (the subject).

It is a hypothetical deduction as you noted earlier. The question comes into play when we consider what appears to be an induction in premise one. There is one key here. You determine whether you remember correctly that the previous answer is six. If you do, then you do. If you remember that it is 7, then it is 7.


I should have made it more clear:

1. IF I am remembering correctly that I previously answered 6.
2. IF the correct answer must abide by what I remember the square root operation is
3. THEN the answer to the square root of 25 is 6

This is not a valid hypothetical deduction because it is not a deduction (the premises do not necessitate the conclusion). But, I apologize, my original formulation of it was wrong and you are correct there that it was a hypothetical deduction.

In light of my position that premises cannot be inductions in a valid deduction, then I think you are right in just that respect. But I can induce that what I remember being 6 does align with what I remember is the square root of 25 (when the operation I remember is applied) without first applying it. However, I would only know they align via a deduction (remembering the 6 and applying the operation to 25): which would be completely separate from the induction (which I would consider abstract).

Likewise, I want to be clear that I do not think that the induction component and deduction component of "applicable knowledge" are in any way related. Just like how I can induce that 6 and the square root of 25 align (and my knowledge that they don't was a completely separate deduction/deductions), so it is with "applicable knowledge": whatever was induced that isn't contained in what was deduced remains induced, and whatever is contained in the deduction is now verified via the deduction, where those inductive conclusions get thrown out into the garbage can. There's no relation between an induction and a deduction: they are two completely separate forms of reason.

I would also like to note very briefly that we have been kind of ignoring our friend "abductions", which is not an "induction" nor a "deduction". I'm not sure where you have that fit into this equation: is it simply merged with inductions?

To be very clear, this is because an abstraction has no rules besides what you make. There is no one besides yourself who can tell you your own created abstraction is "wrong". No one to tell you but yourself that your memory is "wrong". In short, abstractions are our limitless potential to "part and parcel" as we like.


I think where we disagree fundamentally is that you seem to be positing that we control reason (or our thoughts, or something) in the abstract, but we do not. I do not decide to part and parcel in a particular way; it just manifests. There are rules to the abstract, though (again, pon). I can linguistically deny it, but nevertheless my reason is grounded in it. I cannot literally conjure whatever I want, because conjuring follows a set of rules in itself.

There must be something outside of our own power and agency that creates a conclusion that does not necessarily follow from the premises we've created.


It seems like you are arguing you do have power over your thoughts (and potentially imagination): I do not think you do. They are all objects and reason is the connections, synthetic and analytical, of those objects.

Moreover, if I have a deduction, and it is sound, then nothing "outside of my power" (whatever that entails) can reject it (in the sense that "reality" rejects what "I want", or what have you). The deduction is true as absolutely as the term "absolute" can possibly mean. Inductions (and abductions) are the only domains of reasoning that can be rejected. We are still dictating "what is outside of our control": I decide that it holds without contradiction that my friend bob jr. has a totally different definition of "pancakes" than I do. I could fail to understand this, or straight up deny it, and claim that we both actually have the same definition, where mine is "round object" and his is "square object", but that doesn't mean I am right. The same is true of thoughts: they are objects. I can tell myself "I can do whatever I want abstractly", but that doesn't make it so. It is no different than the "reality" or "other powers" scenario. My main point here is that your criticism that "we can make a dream world of 'reality'" is just as valid when posited as "we can make a dream world of our thoughts".

I will address your points on the mind-bender dilemma of the reliability of thoughts after the aforementioned is resolved because I do not feel that I can substantively respond without understanding the rest first.

I look forward to hearing from you,
Bob
Bob Ross April 15, 2022 at 23:11 #682035
@Philosophim,

I hate to double post, but just to explicate more clearly my dilemma with "applicable" vs "distinctive" knowledge, let me explain a bit more (now that I've been thinking more and more about it).

I don't think that there are two "forms" of knowledge and, to my understanding, I don't think your epistemology truly posits two different forms (even though I think you are arguing for such).

For example, let's use your "Go Fish" example. Abstractly, I can determine that a game, which I will define as "Go Fish", is possible according to the rules I subject it to: thereby I "know" "Go Fish" is possible in the abstract. However, as you noted, it is an entirely different claim to state that "Go Fish is possible non-abstractly" (as I conjured up "Go Fish" according to my rules) (e.g. it turns out a totalitarian regime burned all the playing cards, what a shame, or my rules do not conform to the laws of nature). I think, therefrom, you are intuitively discerning two forms of knowledge to make that meaningful distinction.

However, I believe it to be an illusory distinction, albeit intuitive: the claim of knowledge towards abstract "Go Fish", and more importantly the "cards" therein, is a completely different conception than "cards" being utilized when claiming "Go Fish is possible non-abstractly". The conflation between the two (what I define abstractly as "a card" along with its existence presupposed in reference to the abstract vs what coincides non-abstractly) is what I think you are trying to warn against. I may define "card" as "floating mid-air" and quickly realize that this is only possible in relation to "abstract cards" and not "non-abstract cards".

Consequently, "distinctive" and "applicable" are the exact same. If I claim that "Go Fish is possible abstractly", I know this deductively. If I claim that "Go Fish is possible non-abstractly", I also know this deductively. I could, however, posit "Go Fish is possible non-abstractly" as knowledge when I do not in fact know it because it is an induction, which would be illegal in the sense of your epistemology. If I induce that "Go Fish is possible non-abstractly", then I believe it and it is subjected to the hierarchy of inductions. If I deductively obtain sufficient knowledge pertaining to the possibility of Go Fish in the non-abstract, then I thereby have "knowledge".

In the event that I did induce then deductively affirm that induction (holistically, as in verify the entire induction was true in the sense that I have since then deduced its premises and conclusions) (let's hypothetically say), then I am still only gaining knowledge via a deduction and the induction was merely coincidentally correct.

In other words, it is possible to ground an induction in knowledge (deductions), but not possible to ground a deduction in beliefs (inductions): the relation, therefore, is uni-directional. Furthermore, I now can explicate much more clearly what the hierarchy of inductions is grounded upon (assuming I am understanding correctly): the induction with (1) the most knowledge (deductions) as its grounds and (2) no dispensable entities is the most cogent within that context. This is exactly why, for example, "possibility" is more cogent than "speculations": "possibility" is (1) grounded in more knowledge. However, upon further reflection, I am not entirely sure that you would agree with #2: what if a "speculation -> speculation" is justified as necessitous? What if it isn't multiplying entities without necessity? What if the opposing induction "speculation" is eroding some necessary components of the induction chain?

But an even deeper dilemma arises: the key principle underlying the hierarchy itself is an induction (to hold that inductions more grounded in knowledge are more cogent is itself an induction, not a deductively concluded principle). This inevitably undermines the hierarchy, since there is necessarily one induction (namely, that inductions grounded in more knowledge are more cogent) which sits outside of the induction hierarchy (since the hierarchy is contingent on it in the first place: we construct the hierarchy from this very induced principle). So, under your epistemology, I would say we do not "know" that the hierarchy of inductions is true; because it is induced, we merely "believe" it is true. If knowledge is only deductions, then I think we are forced to conclude this.

Anyways, I thought I would share my thoughts so you can see more clearly what I am thinking here.

Bob
Philosophim April 17, 2022 at 15:30 #682659
Another fantastic set of posts, Bob! Let's get into your points.

Quoting Bob Ross
Firstly, I think we need to revisit the "distinctive" vs "applicable" knowledge distinction holistically because I am still not understanding why it is important.


This is fair; I really didn't go into it last post as I had initially intended. Deductions are knowledge, period. However, if there's one thing I think we can conclude from the epistemology, it's that the reasoning and path we take to get there matters as well. This is why there is a hierarchy for inductions. This being the case, I see an identifiably different type of knowledge when we deduce the end result of an induction.

Quoting Bob Ross
Likewise, I don't think "applicable knowledge", in the sense of a deduced conclusion pertaining to an induction, has any actual relations to the induction. The induction and deduction are completely separate: mutually exclusive.


Applicable knowledge is the deductive result of an induction. It is not a deduction that follows an induction.

I believe the next penny flip will be heads. (Induction) ->
I have a penny in my pocket. (Deduction)

In this case, yes, though a deduction followed an induction in terms of the thought process, they are not connected. A connected deduction is the result of the induction.

I believe the next penny flip will be heads. (Induction) ->
I flip a penny I found in my pocket and it turns up tails. (Deduction)

It is not the deduction alone which is applicable. It is the combination of the induction, and its result. The deduction, by itself, would be distinctive. We are not analyzing the deduction itself, we are analyzing the steps it took to get there.

So why is this an important/needed distinction? Because it can help us realize our limitations. I noted earlier that one can create a fully deductive abstract in one's head. I could create an entire world with its own rules, laws, and math, and it would be a purely deduced achievement. A set of knowledge which has no inductions with deduced resolutions in its chain of reasoning is suspect. The reality is we face uncertainty constantly. Our deductions, which are reasonable at the time, may be countered in the face of new information. Part of reality is uncertainty, and our reasoning should reflect that. Arguably, the uncertainty of life is why we have the concept of knowledge at all. If there were no uncertainty in whatever we concluded, wouldn't we already know everything?

Let's look at science. Science is not a success because it has carefully crafted deductions. It is a success because it has reached carefully crafted deductive resolutions to inductive situations. Science seeks not merely to deduce, but to induce and then find the result. Science's conclusions are essentially applicable knowledge.

Quoting Bob Ross

So this is tricky. If by "doubt everything" you mean that everything is technically falsifiable, then yes I agree.


I meant it as purely the emotional sense of doubt. You can doubt anything, whether it's reasonable or unreasonable to do so. Yes, we are in agreement that despite having doubts, one can reasonably conclude that one's doubt is unfounded or incorrect. So to clarify, I was not talking about a reasonable doubt, which is limited, but the emotional, non-reasonable doubt. In this epistemology, reasonableness is not a requirement of any person; it is always a choice. However, their unreasonable choices cannot counter a reasonable argument for those who are reasonable.

In regards to hypothetical deductions, I believe we are in agreement! It just seems we had some slight misinterpretations of what each meant.

Quoting Bob Ross
1. IF an essential property of cats is they are green.


It depends on how this is read. If we are reading this as "if this is true", then yes, this is simply an abstract premise and a deduction. If, however, this were read with the intention that we do not know the resolution, "An essential property of cats is that they could, or could not, be green", then it is an induction.

Basically, the IF alone is ambiguous as to the user's intent. Does IF mean "I don't know the essential property", or "Assume an essential property is X"? In the former, if we are to apply it to actual cats, then we must decide what the essential versus accidental properties of a cat are. If we do not, then we have an induction. In the latter, we have a deduction, because we have concluded that the essential property of a cat is X, and if we discover something that has all the other properties but X, we will say that creature is not a cat.
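On the second reading, where the IF premises are assumed outright, the conclusion follows no matter what actual cats are like. As a sketch only (in Lean; the names `Entity`, `Cat`, `Green`, and `bob` are my illustrative placeholders), the hypothetical deduction about green cats can be checked formally:

```lean
-- A sketch only: `Entity`, `Cat`, `Green`, and `bob` are illustrative names.
example (Entity : Type) (Cat Green : Entity → Prop) (bob : Entity)
    (h1 : ∀ x, Cat x → Green x)  -- IF an essential property of cats is that they are green
    (h2 : Cat bob)               -- IF bob is a cat
    : Green bob :=               -- THEN bob is green
  h1 bob h2
```

Nothing in the proof depends on whether any real cat satisfies `Green`; it is dropping the hypotheses `h1` and `h2` while keeping the conclusion that would turn the claim into an induction.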

From your answers, I think we are in agreement here on this breakdown. Please let me know if I'm incorrect here.

I want to use the example of logical 'if' conditionals to demonstrate the reason why I separate the two knowledges. I can craft distinctive knowledge that avoids an induction. So I can state, "Assume that the essential property of a cat is that it's green." I'm putting a hypothetical outcome to an induction, not a deduced outcome of an induction. The hypothetical property can be a part of a deduction, but it is a deduction that has avoided the test of induction.

In the second case where I state, "The next cat I will see will be green", I am putting something testable out there. Hypotheticals are possible deduced solutions to that test. So I could deduce the conclusion that I would be correct if I found the next cat was green, and I could deduce a conclusion if it was the case that the cat is not green. But neither of those deductions are the resolution to the induction itself. They are deductions about what is possible to conclude from an induction, but they are not the deduced result of the induction itself. I find this distinction key to avoid ambiguity when someone claims they "know" something.

Finally, this is important to note when someone changes their definitions. If I claimed, "The penny will flip heads" and the result was that it was tails, the deduction from that conclusion is that the penny landed on tails. Afterward, if I decided to flip the meaning of heads and tails in my head, that would be new distinctive knowledge. The applicable knowledge still stands. "When my definition of heads was this state, the resolution was it landed on tails. After, I changed the definition of heads and tails."

Without first resolving the induction based on the distinctive knowledge claims one had when making the induction, someone could attempt to claim, "Since I changed my definition of heads to tails, my induction was correct." But the induction was not correct based on the distinctive knowledge at the time. In this, applicable knowledge acts as a historical marker of one's chain of thoughts.

Quoting Bob Ross
If however, we pull another person into the equation, a society with written rules, then we have an evolution. I cannot conclude whatever I want. I must make an induction, a belief about what society will decide. The answer to that, is applicable knowledge. Even then, the abstracts of society that it creates, that I must test my beliefs against, are its distinctive context, not applicable context.

The same critique you made of solo contexts applies to societal contexts: I can deny whatever society throws at me, just like I can deny whatever I throw at myself. Ultimately I have to decide what to accept and what not to. If someone else came up with:

1. IF an essential property of cats is that they are green
2. IF an essential property of bob is that they are a cat
3. THEN bob is green

We are still in the same dilemma. I don't think the process is as different as you may think.


You are correct that we can decide to reject society's definitions. But what we cannot do is claim applicable knowledge of "Society doesn't actually believe that the color of a cat is non-essential." I can distinctively know my own definitions. I can distinctively reject society's definitions. I could distinctively know that society does not define something a certain way. But I cannot applicably know that society defines something a certain way when the result of that claim would show that they deductively do not.

Quoting Bob Ross
You could decide to never be convinced of anything

This is true in the sense that I can be convinced that I am not convinced of anything, however I am definitively wrong because I am thereby convinced of something. The danger of the mind is that it can fail to grasp things, not that it can do whatever it wants. Reason is not relative, it is absolute in relation to the subject at hand. I can utter and be convinced that "pon is false", but thereby it is true.


Correct: if you decide to use reason, then you cannot reasonably be convinced that you are not convinced of anything. If you decide not to use reason, then you can. It's like a person who states, "Everything is absolute." It's completely unreasonable, but there are some who forego reasonableness, even when it is pointed out, and insist on their belief. Fortunately, we can use reasonableness, but this does not deny the fact that a person can reject all that in favor of what we might call insanity.

I suppose what I'm getting at in these "A person can feel or do X" points is that there is no essential property of a person that requires them to be reasonable. There are unreasonable people whom we still label as people. Holding reasonable positions is non-essential, meaning that if a human is biologically or willingly unreasonable, there is nothing we can do to make them otherwise. A reasonable person will likely live a much better life, but may find the revocation of reasonableness in certain situations to be more profitable to what they desire.

Quoting Bob Ross
I would also like to note very briefly that we have been kind of ignoring our friend "abductions", which is not an "induction" nor a "deduction". I'm not sure where you have that fit into this equation: is it simply merged with inductions?


I think so. My understanding of abductions is that it is an induction that is the most reasonable one a person can hold given a situation. From the Stanford Encyclopedia, "You may have observed many gray elephants and no non-gray ones, and infer from this that all elephants are gray, because that would provide the best explanation for why you have observed so many gray elephants and no non-gray ones. This would be an instance of an abductive inference."
-https://plato.stanford.edu/entries/abduction/#DedIndAbd

With the inductive hierarchy, an abduction would simply be choosing the most cogent induction in a situation. If you considered the color of the elephant non-essential, this would be an induction. This could also be considered simply distinctive knowledge. If you consider the color of the elephant essential, then upon discovery of a pink elephant, you would call it something else from an elephant, or amend the definition to make the color of an elephant non-essential. It is our chain of reasoning to conclude what we are stating that determines its classification.

Quoting Bob Ross
I think where we disagree fundamentally is that you seem to be positing that we control reason (or our thoughts or something) in the abstract, but we do not. I do not decide to part and parcel in a particular way; it just manifests. There are rules to abstract thought (again, pon). I can linguistically deny it, but nevertheless my reason is grounded in it. I cannot literally conjure whatever I want, because conjuring follows a set of rules in itself.


Yes, there are aspects about ourselves that we may not have control over. I did not want to state that because we have the power to part and parcel existence, that it is something we always have control over. For example, there are people who are unable to recognize faces. People who are unable to visualize in their mind. This is the applicable context from which we are limited or given the gift of creating distinctive knowledge. Being reasonable is not a fundamental of being human. If it is, I have not been able to prove it so far.

Despite cases in which you cannot easily decide to part and parcel, there are other instances in which you can. Look at one of your keys on your keyboard. Now look at the letter. Now look at any space next to the letter. Draw a circle in your mind around that space. You could if you wish mark a circle, and have created a new identity on that key. You can look at my writing. The page. The screen. The computer system. The room. You can focus and unfocus, and create new identities distinctively as you wish.

Quoting Bob Ross
There must be something outside of our own power and agency that creates a conclusion that does not necessarily follow from the premises we've created.

It seems like you are arguing you do have power over your thoughts (and potentially imagination): I do not think you do. They are all objects and reason is the connections, synthetic and analytical, of those objects.


No, I am noting that while we have an incredible amount of power within our own agency, there are things outside of our control. I cannot fly with my mind alone, no matter how much I imagine I can. I cannot bend my limbs past a certain point. But I can imagine that I am able to. I have a world I can create, a logic I can form, and conclusions that will never apply to reality, but be valid in my mind.

Quoting Bob Ross
Moreover, if I have a deduction, and it is sound, then nothing "outside of my power" (whatever that entails) can reject it (in the sense that "reality" rejects what "I want", or what have you). The deduction is true as absolutely as the term "absolute" can possibly mean. Inductions (and abductions) are the only domains of reasoning that can be rejected.


True, and that is because we have defined it as such. We are being reasonable, constructing definitions, and holding to them to create a logic. But someone could create entirely different definitions for deductions and inductions. Still, according to the epistemology, we could hold them to a rational standard that results from those amended identities. It is why epistemology is so important. It is a rational standard by which we can debate what we can and cannot know, when the human race, by nature, has no standards besides what they themselves or a contextual group would hold them to. We are trying to create a standard that can elevate itself beyond individuals or groups, but can also note what those individuals and groups distinctively and applicably know. Does it meet this standard? Perhaps, but it is an ongoing test and challenge.

I also want to address something again. The idea of something "outside of my power". Basically there are things we cannot will. And you agree with me by stating there are things you cannot choose to part and parcel. Can it be granted at this point that we both believe there are things outside of our mental control?

Quoting Bob Ross
For example, let's use your "Go Fish" example. Abstractly, I can determine that a game, which I will define as "Go Fish", is possible according to the rules I subject it to: thereby I "know" "GoFish" is possible in the abstract. However, as you noted, it is an entirely different claim to state that "Go Fish is possible non-abstractly" (as I conjured up "Go Fish" according to my rules) (e.g. it turns out a totalitarian regime burned all the playing cards, what a shame, or my rules do not conform to the laws of nature). I think, therefrom, you are intuitively discerning two forms of knowledge to make that meaningful distinction.


I believe this is correct.

Quoting Bob Ross
the claim of knowledge towards abstract "Go Fish", and more importantly the "cards" therein, is a completely different conception than "cards" being utilized when claiming "Go Fish is possible non-abstractly". The conflation between the two (what I define abstractly as "a card" along with its existence presupposed in reference to the abstract vs what coincides non-abstractly) is what I think you are trying to warn against. I may define "card" as "floating mid-air" and quickly realize that this is only possible in relation to "abstract cards" and not "non-abstract cards".


Also correct!

Quoting Bob Ross
Consequently, "distinctive" and "applicable" are the exact same. If I claim that "Go Fish is possible abstractly", I know this deductively. If I claim that "Go Fish is possible non-abstractly", I also know this deductively.


Correct in that both are deductions. I hope I clarified here that the real distinction is in the chain of reasoning.

Distinctive knowledge: a discrete experience, or a deduction that leads to a deduction.

Applicable knowledge: an induction that leads to a deduced resolution.
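These two chains can be sketched as a toy classifier (my own labels; it only inspects the type of each step and ignores the requirement that the final deduction actually resolve the induction rather than merely follow it):

```python
# Toy sketch (my own labels) of the two chains defined above.
def classify(chain):
    """chain: ordered list of steps, each 'deduction', 'induction',
    or 'experience' (a discrete experience)."""
    if not chain or chain[-1] == "induction":
        return "belief"                  # an unresolved induction is not knowledge
    if "induction" in chain:
        return "applicable knowledge"    # an induction with a deduced resolution
    return "distinctive knowledge"       # discrete experience / deductions alone

assert classify(["induction", "deduction"]) == "applicable knowledge"
assert classify(["deduction", "deduction"]) == "distinctive knowledge"
assert classify(["deduction", "induction"]) == "belief"
```

On this sketch, the penny flip that resolves in tails is "applicable", while the unresolved belief about the next flip remains mere belief.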

Quoting Bob Ross
In other words, it is possible to ground an induction in knowledge (deductions), but not possible to ground a deduction in beliefs (inductions): the relation, therefore, is uni-directional.


Correct. But we can obtain the actual outcome of the induction. When an induction resolves, we have the outcome.

This result in relation to the induction is the special category of applicable knowledge.

Quoting Bob Ross
Furthermore, I now can explicate much more clearly what the hierarchy of inductions is grounded upon (assuming I am understanding correctly): the induction with (1) the most knowledge (deductions) as its grounds and (2) no dispensable entities is the most cogent within that context.


The first part is part of the reason, but I did not understand what a "dispensable entity" was.

Quoting Bob Ross
But an even deeper dilemma arises: the claim, and I would say key principle, underlying the hierarchy itself is an induction (to hold that the inductions that are more acquainted with, grounded in, knowledge is an induction, not a deductively concluded principle). Which inevitably undermines the hierarchy, since there is necessarily one induction (namely inductions grounded in more knowledge are more cogent) which is outside of the induction hierarchy (since it is itself contingent on it in the first place: we construct the hierarchy from this very induced principle). So, we do not "know" that the hierarchy of inductions is true, under your epistemology


We distinctively know the hierarchy of inductions; we do not applicably know if the claim is true. That would require testing in a lab. I've given the arguments already for why the hierarchy exists. If we want to revisit it, we can, but this is enough to cover for now. Thanks again, Bob, great points, and always feel free to post more if you have new thoughts and I haven't followed up yet!
Bob Ross April 18, 2022 at 21:33 #683104
@Philosophim,

However, if there's one thing I think we can conclude from the epistemology, it's that the reasoning and path we take to get there matters as well. This is why there is a hierarchy for inductions.


I am not particularly sold on this quite yet. The hierarchy of inductions analyzes the "paths" in relation to their epistemic groundings, which is a relation of deduction -> induction (which I think is fine), but this relationship is not symmetrical (i.e. induction -> deduction). We can create meaningful labels pertaining to deduction -> induction, but not vice versa (I would say). I think you are seeing it as symmetrical, whereas I see it as asymmetrical.

Applicable knowledge is the deductive result of an induction. It is not a deduction that follows an induction.


You explicated the dilemma much more elegantly than I did here! From what you said here, I am arguing the exact converse: to claim a deduction is a result of an induction is to necessarily concede that they are not mutually exclusive (there’s at least one relationship, no matter how weak or strong, being claimed to be validly made). I am claiming that a deduction can follow an induction, but never is a result of one. The results of a deduction can prove how aligned an induction was in relation to knowledge, but an induction never produces a resulting deduction.

I believe the next penny flip will be heads. (Induction) ->
I have a penny in my pocket. (Deduction)

...

I believe the next penny flip will be heads. (Induction) ->
I flip a penny I found in my pocket and it turns up tails. (Deduction)


I think these are truly the same: the latter just feels connected, but isn't any more connected than the former. I could have just as easily, in the case of the latter, not posited a belief and flipped the penny from my pocket and it turns up tails (which would thereby no longer be applicable, yet I would have obtained the exact same knowledge distinctively).

So why is this an important/needed distinction? Because it can help us realize our limitations. I noted earlier that one can create a fully deductive abstract in one's head. I could create an entire world with its own rules, laws, and math, and it would be a purely deduced achievement. A set of knowledge which has no inductions with deduced resolutions in its chain of reasoning is suspect. The reality is we face uncertainty constantly. Our deductions, which are reasonable at the time, may be countered in the face of new information. Part of reality is uncertainty, and our reasoning should reflect that. Arguably, the uncertainty of life is why we have the concept of knowledge at all.


For the most part, I agree with the underlying meaning I think you are trying to convey (i.e. recognizing our limitations), but I think your "distinctive" vs "applicable" isn't a true representation thereof. What I think you are really trying to get at is that "knowledge" is always indexical. I am not certain what the result of flipping a coin (non-abstractly) will be until I do it, because my abstract simulation does not refer to non-abstract consideration (although I can definitely conflate them as synonymous). I can, therefore, have a belief prior to my deductively ascertained knowledge that it flipped tails, but that has no bearing on how I obtained that knowledge. I could equally have not posited a belief and obtained the exact same result, which indexically refers to something relationally beyond my abstract consideration. I am failing to see how the induction provided a meaningful difference: even if I didn't induce anything prior to flipping the coin, and the result is thereby labeled "distinctive", that does not equate to "categorical"; I still had to obtain it non-abstractly in the exact same manner as applicable knowledge.

If there were no uncertainty in whatever we concluded, wouldn't we already know everything?


Firstly, I don't think "uncertainty" directly entails that one has to formulate an induction: I can be neutrally uncertain of the outcome of flipping a non-imaginary coin without ever asserting an induction. So when I previously stated that inductions and abductions only provide the uncertainty, I was slightly wrong: we can deductively know that we do not deductively know something and, therefore, we are uncertain of it (to some degree). No induction is technically needed (but definitely can be posited).

Secondly, yes, we would, without uncertainty, know everything. However, where are you drawing that line? I think you are trying to draw it at "distinctive" vs "applicable", but I don't think those definitions work properly. As previously discussed, the non-abstract flipping of a coin could be either form and still be obtaining knowledge pertaining to something uncertain.

Let's look at science. Science is not a success because it has carefully crafted deductions. It is a success because it has reached carefully crafted deductive resolutions to inductive situations. Science seeks not merely to deduce, but to induce and then find the result. Science's conclusions are essentially applicable knowledge.


Yes, science does claim to "find the result" after a test, but the "result" has no relation to the induction (hypothesis) itself: that was merely posited as the best educated guess one could make prior to any knowledge deductively obtained after/during the test. Most of the time, science never reaches the point where we have verified the entire hypothesis (deductively) before it gets translated into a "theory": scientists obtain enough deductively ascertained knowledge supporting the hypothesis (or hypotheses) to warrant stating it is more than just a hypothesis (but, most importantly, it is not holistically knowable most of the time).

Although I may be misunderstanding you, if you are trying to claim that "applicable knowledge" is something scientists obtain about the holistic hypothesis, then I think you are (most of the time) incorrect. Unless the test is something really trivial (like "this will fall if I let it go"), it generally doesn't make it to knowledge, just a stronger version of an induction (more thoroughly tested, which entails more knowledge that it is grounded in). Sometimes they do categorically deductively ascertain during experiments, such as if I were to test whether this particular bottle is made of glass, which would inevitably be tested against my definition of "glass", and the means of verifying it meets each criterion of "glass" is also categorically defined. But I don't see how any of this proves in any way that they obtained something other than one form of knowledge (and, further, although I see the underlying meaning useful in terms of indexicals, I don't see how there's really a distinction between the two forms you are positing).

I meant it as purely the emotional sense of doubt. You can doubt anything, whether it's reasonable or unreasonable to do so. Yes, we are in agreement that despite having doubts, one can reasonably conclude that one's doubt is unfounded or incorrect. So to clarify, I was not talking about a reasonable doubt, which is limited, but the emotional, non-reasonable doubt. In this epistemology, reasonableness is not a requirement of any person; it is always a choice. However, their unreasonable choices cannot counter a reasonable argument for those who are reasonable.


That's fair enough.

In regards to hypothetical deductions, I believe we are in agreement! It just seems we had some slight misinterpretations of what each meant.


I think we are in agreement then! My question for you is: do you find it a meaningful distinction (categorical vs hypothetical), and what terminology would you translate that to in your epistemology? I don't think it is the same distinction as what you are trying to convey with "distinctive" and "applicable", but I could be wrong.

So I can state, "Assume that the essential property of a cat is that it's green." I'm putting a hypothetical outcome to an induction, not a deduced outcome of an induction. The hypothetical property can be a part of a deduction, but it is a deduction that has avoided the test of induction.


In terms of underlying meaning, I understand and agree, but I don't think this is being described correctly. Everything is tested, abstract and non-abstract alike, but what makes the error you are explicating a genuine error is that the tests are indexical. Testing in my mind in terms of my imagination, for example, does not automatically hold for that same "label" in non-abstract considerations. So I wouldn't say that "avoiding an induction" is the mistake; it is "avoiding the indexical consideration" that is the mistake. If I deduce that a "card" exists in my imagination with the color red on it, it would be a mistake for me to thereafter conclude there is a "card" in the non-imagination. Now, obtaining whether a "card" that is red exists in non-imagination takes the form of all other tests (including testing that belief in the abstract in terms of my imagination), and so I don't necessarily have to pre-judge whether or not I think there actually is one. If I look down and see a "red" "card", then I just deductively ascertained (without an induction) that non-abstractly there exists a "red card". I am failing to see how this is contingent on inductions. If I cannot deductively ascertain that there is such a thing as a "red card", then I am left with nothing else but to induce my best guess and, if push comes to shove, I bank my money on it.

In the second case where I state, "The next cat I will see will be green", I am putting something testable out there


But that belief has no bearing on uncertainty. You could just as easily have simply deductively noted that you have no clue what the next cat will be, and then seen it was green (and you would know deductively that you have no clue). If you do submit such a belief (as you did), then yes, we can deductively ascertain how aligned your induction was with real knowledge, but it never becomes knowledge. Even if you guessed right, you didn't know. Not even in hindsight. In terms of the induction hierarchy, we are simply inducing that, given that the inductions more grounded in knowledge seem to produce results more aligned with knowledge, we are more rational to hold those over other, less grounded, inductions. We do not deductively know this. There's nothing that deductively tells me that a "possibility" actually is a more certain claim than a "speculation", only that I should rationally bank my money on it because that has tended to work out better. I have no deductive reason to believe that because something has been experienced before, it has a higher chance of happening again than something that has never been experienced: that is an induction (similar to, if not exactly like, Hume's problem of induction).

So I could deduce the conclusion that I would be correct if I found the next cat was green, and I could deduce a conclusion if it was the case that the cat is not green. But neither of those deductions are the resolution to the induction itself. They are deductions about what is possible to conclude from an induction, but they are not the deduced result of the induction itself. I find this distinction key to avoid ambiguity when someone claims they "know" something.


Again, I see this not as "a result of an induction" but, rather, as the importance of understanding that knowledge is indexical. There's nothing wrong with positing a hypothetical deduction, but, as you rightly pointed out, it has no meaning if the IF conditionals are removed. By definition, it would no longer be hypothetical, so it would either have to be categorical or an induction.


"Since I changed my definition of heads to tails, my induction was correct." But, the induction was not correct based on the distinctive knowledge at the time. In this, applicable knowledge acts as a historical marker of one's chain of thoughts.


So, firstly, the induction is never "correct"; it is just a "best guess" (or potentially not the best guess, but no less "a guess"). It can happen to align with knowledge to any degree, but it isn't knowledge.

Secondly, you are right that the terminology is sometimes deductively (categorically) defined before the induction, and that does shed light on one's intentions, but this has no bearing on inductions. I could categorically define "cat" as "1 square" and then, without inducing anything, see what one would usually refer to as a cat and decide to change my terminology. There's still a historical marker here, and it is memory (oh boy, which gets us back to that dilemma), not inductions. It's not that you induced X that provides a historical marker for me that you had other intentions prior to deductively ascertaining about X; it is that I remember you using terminology in your induction in a manner that suggests you weren't meaning it in that way, which I deduced. Now, we could get into whether I truly can deduce your intentions (it may just be an induction), but hopefully you see what I mean here.

But what we cannot do is claim applicable knowledge of, "Society doesn't actually believe that the color of a cat is non-essential." I can distinctively know my own definitions. I can distinctively reject society's definitions.


I think what you really mean here (and correct me if I am wrong) is that society's definition and my definition do not have to align (because knowledge is indexical). I can induce that society doesn't hold that a cat is essentially defined by "color", or I could categorically define "society" as necessarily not holding color as an essential property of cats. The problem is that when I define "society", it is in relation to what I've deduced, which indexically refers to my abstractions, and the definition someone else may have deductively defined indexically refers to themselves (and it would be a conflation to think they are necessarily bound to one another).

I could distinctively know that society does not define something a certain way.


This is where you sort of lost me. If by "distinctively know" you mean that you can categorically define "society" in a way that necessitates that they don't hold that definition of "cat", then I agree. But this has no relation to any sort of induction, the conflation arises when knowledge isn't viewed as indexical.

But I cannot applicably know that society defines something a certain way, when the result of that claim would show that they deductively do not.


I would agree insofar as the distinction being made is that my deduced abstract consideration of what a "society" or "cat" is has no indexical relation to non-abstract considerations, but I am failing to see how this has anything to do with necessarily positing an induction prior to deducing.

Correct, if you decide to use reason, then you cannot reasonably be convinced that you are not convinced of anything. If you decide not to use reason, then you can. It's like a person who states, "Everything is absolute". It's completely unreasonable, but there are some who forego reasonableness, even when it is pointed out, and insist on their belief. Fortunately, we can use reasonableness, but this does not deny the fact that a person can reject all that in favor of what we might call insanity.


This is true in a sense, but I think you are agreeing with me that this doesn't mean someone can actually do whatever they want just because they claim it.

There are unreasonable people that we still label as people. Holding reasonable positions is non-essential, meaning if a human is biologically or willingly an unreasonable person, there is nothing we can do to make them reasonable.


I would say that you are correct that people can feel as though they can be without reason, but they necessarily are. Someone can look at a table, and then say they didn't just look at a table, but they did (and I think you are agreeing with me on this). It is an essential property of "human being" that they are a reasoning being, but I think how you are using "reasonableness", they don't have to have it. But they nevertheless abide by certain rules, which is their reason, even in the most insane of circumstances, which is a part of the definition of being human.

I think so. My understanding of abductions is that it is an induction that is the most reasonable one a person can hold given a situation. From the Stanford Encyclopedia, "You may have observed many gray elephants and no non-gray ones, and infer from this that all elephants are gray, because that would provide the best explanation for why you have observed so many gray elephants and no non-gray ones. This would be an instance of an abductive inference."


I apologize, I was too hasty to slide that into the discussion, we have much bigger fish to fry. I think we should not proceed to that conversation yet (that's my fault).

Despite cases in which you cannot easily decide to part and parcel, there are other instances in which you can. Look at one of your keys on your keyboard. Now look at the letter. Now look at any space next to the letter. Draw a circle in your mind around that space. You could if you wish mark a circle, and have created a new identity on that key. You can look at my writing. The page. The screen. The computer system. The room. You can focus and unfocus, and create new identities distinctively as you wish.


I don't think any of this proves that I was in control of anything. What discerns actual accordance from coincidental repetition?

We do, colloquially, make distinctions between something like "intention" and what the body actually is capable of, but ultimately I fail to see how we truly control any objects (which includes all concepts, so thoughts, imagination, the body, etc). What proof is there that you are not along for the ride?

No, I am noting that while we have an incredible amount of power within our own agency, there are things outside of our control.


This isn't quite what I was trying to get at. I do think that you think that some things are outside of our control (if not a lot of things), but you do think that there is a clear divide between "incredible amount of power within our own agency" and that which isn't: where is that line drawn? Do you think you control your thoughts? Imagination? Bodily movements? Maybe not absolutely, but sometimes at the very least? I am trying to hone in on what you mean, because I do not hold that the subject, reason, has any control over objects.

But I can imagine that I am able to. I have a world I can create, a logic I can form, and conclusions that will never apply to reality, but be valid in my mind.


Do you think that you sometimes can control your "dream world" within your imagination, or all the time? Or never?

The distinction you are making in terms of what a proposition references (indexicals) is still valid if one doesn't control objects whatsoever.

And you agree with me by stating there are things you cannot choose to part and parcel. Can it be granted at this point that we both believe there are things outside of our mental control?


I cannot quite remember what I stated previously, but my contention isn't really "is there anything outside of our control" but, rather, "is there anything inside our control" (which is different). To say "outside our control" is fine, and I would agree that there are such things, but where I am failing to understand you is where the line is drawn. When you say "outside of our mental control", this leads me to believe that you think that you control your mental, or abstract considerations, but I do not think you do. There is no point, in reference to any object, at which we "know" that we controlled it. It is an induction at best.

Correct in that both are deductions. I hope I clarified here that the real distinction is in the chain of reasoning.


I think that what you are trying to convey (if I am understanding it correctly) is right, but it is wrong to postulate it as having anything to do with a chain of reasoning (I would view it as asymmetrical to induction chains).

Distinctive knowledge: Discrete experience or
A deduction that leads to a deduction.

Applicable knowledge:
An induction that leads to a deduced resolution


If by "leads" you are saying "results", then I disagree. We deduce knowledge and, in hindsight, see how close our inductions were (if we even posited any) to that deduced knowledge. Deductions can "lead" to inductions, but never vice-versa in a literal sense (like "results"), but if you mean a loose sense like an induction can "lead" someone to investigate further in some circumstances, then I agree. If "lead" is being used loosely, then I wouldn't consider something sparking your interest as something that then results in a deduction (another deduction could have just as easily sparked my interest).

But we can obtain the actual outcome of the induction. When an induction resolves, we have the outcome.


The outcome is not a part of the induction, that is knowledge which is a deduction (which I think you would agree with me on). There's no entailment from induction -> deduction. You don't need to state a belief either way before flipping a coin. The flipping of the coin and its conclusion is all deductively ascertained (thusly knowledge) either way.

The first part is part of the reason, but I did not understand what a "dispensable entity" was.


Essentially Occam's razor.

We distinctively know the hierarchy of inductions, we do not applicably know if the claim is true.


Upon further reflection, I don't think we deduce the hierarchy holistically (either as distinctive or applicable--either way they are both considered deductions). Nothing about the premises necessitates the conclusion that "possibility" is more cogent than "speculations". Nothing about experiencing something once deductively necessitates that it is more likely to happen again over something that hasn't been experienced (and isn't an irrational induction). I think some of them may be deductively ascertained (such as irrational inductions, since they are defined as contradictions, which would necessarily always be known as the worst option), but I don't think it holds for all of them (but I need to ponder it a bit deeper).

I look forward to hearing from you,
Bob
Philosophim April 22, 2022 at 22:36 #684855
Quoting Bob Ross
I think you are seeing it as symmetrical, whereas I see it more asymmetrical.


I would not say it is symmetrical, I just think there is a similar situation to consider. Inductions and deductions are like atoms, and their chain of reasoning is like molecules. How they combine creates a new identity to consider.

We may have a fundamental disagreement as to whether an induction can be deductively concluded. Perhaps it's my language. Let me make it simple first. "Applicable knowledge is the conclusion of an induction". Add in "Deductive conclusion" because it is possible to believe the conclusion to an induction is another induction.

Quoting Bob Ross
I could have just as easily, in the case of the latter, not posited a belief and flipped the penny from my pocket and it turns up tails (which would thereby no longer be applicable, yet I obtained the exact same knowledge distinctively).


Yes, you could have. But that does not negate the situation in which there is an induction that you are actively trying to discover the end result.

Quoting Bob Ross
I can, therefore, have a belief prior to my deductively ascertained knowledge that it flipped tails, but that has no bearing on how I obtained that knowledge. I could equally have not posited a belief and obtained the exact same result, which indexically refers to something relationally beyond my abstract consideration.


Let me break down the indexical (or context) of the flip itself.

I can flip a penny, look at the result, and create the identity of "I'll call that heads". That is not applicable, but distinctive knowledge.

I can also flip a penny, look at the result and see a symbol that seems familiar. I then try to match the symbol to what is considered "heads" in my mind, and I do so without contradiction. This is applicable knowledge.

The induction in this case is the belief that what I am observing matches a previous identity I have created. Does this side of the penny match heads? That is "the question". The result, "Yes it does," if deduced, is "the answer".

If I had believed that the penny would result in heads, then the answer is the resolution to the induction. Identifying an induction that has not yet resolved, versus an induction that has a resolution in our chain of thinking, is incredibly important! I could come up with an entirely foolproof deductive point about Gandalf in the Lord of the Rings. Isolated, no one would care. But if at the very beginning of my deduction I started with, "I believe Gandalf is a real person," that puts the entire "deduction" in a different light!

Knowledge is about a chain of thinking. We make claims all the time in the world, and people find their results very pertinent. When people make a bet on what horse will win the race, there is active incentive to find out what the actual result of the race is. We don't want to answer with, "Maybe your horse won the race." People also don't want to hear, "Oh, Buttercup lost? Well I'm going to redefine my bet that when I bet on Princess, I really bet on Buttercup". People want a definitive, or deduced answer to that question because there is a lot on the line.

Quoting Bob Ross
For the most part, I agree with the underlying meaning I think you are trying to convey (i.e. recognizing our limitations), but I think your "distinctive" vs "applicable" isn't a true representation thereof. What I think you are really trying to get at is that "knowledge" is always indexical.


Contextual, yes. Specifically distinctive and applicably contextual. We could view it as distinctive and applicably indexical if you wish. Although I may need to refine the meaning of those terms within contexts now that I've tweaked the meaning of applicable. Distinctive context is the set of distinctive knowledge a person is working with. "A horse has X essential properties. The definition of winning a race has Y essential properties." Applicable context is the limitations of what can be used to find the result of the induction. "I'm blind, so I can't confirm essential properties that require sight".

Quoting Bob Ross
Firstly, I don't think "uncertainty" directly entails that one has to formulate an induction: I can be neutrally uncertain of the outcome of flipping a non-imaginary coin without ever asserting an induction. So when I previously stated that inductions and abductions only provide the uncertainty, I was slightly wrong: we can deductively know that we do not deductively know something and, therefore, we are uncertain of it (to some degree).


Agreed within the correct context. If I distinctively know "I do not know something", then I'm not making an induction. It is when I make a belief that X matches Y definition in my head that I am making an induction, and need to go through the steps to deduce that this is true. At the point the coin is flipped, the induction happens when I attempt to match the result to my distinctive knowledge. The implicit induction is, "I believe the result could match to what I distinctively know." One could also implicitly induce that the result will not match what one distinctively knows, and not even bother trying. A deduction after the result happens will determine which induction was correct.

Quoting Bob Ross
Secondly, yes, we would, without uncertainty, know everything. However, where are you drawing that line? I think you are trying to draw it at "distinctive" vs "applicable", but I don't think those definitions work properly. As previously discussed, the non-abstract flipping of a coin could be either form and still be obtaining knowledge pertaining to something uncertain.


I hope the above points have answered this. Let me know if they have not!

Quoting Bob Ross
Yes, science does claim to "find the result" after a test, but the "result" has no relation to the induction (hypothesis) itself: that was merely posited as the best educated guess one could make prior to any knowledge deductively obtained after/during the test.


Perhaps this is unimportant after the previous notes, but I felt I needed to address this. The hypothesis is absolutely key. Science does not seek to prove a hypothesis, it seeks to invalidate a hypothesis. A hypothesis must be falsifiable. There needs to be a hypothetical state in which the hypothesis could be false. Science attempts to prove a hypothesis false, and if it cannot, then we have something.

Science has been very aware that you can craft an experiment to easily prove a hypothesis correct, and that this is often faulty. Just as I've noted earlier in our conversations, we can craft distinctive knowledge in such a way that they avoid inductions. "I believe a magical unicorn exists that cannot be sensed in any way." This is something that is non-falsifiable. When it rains, I could say, "Yep, that's the magic unicorn using its powers to cause the rain." When someone tries to explain the water cycle to me, I simply respond with, "Well yes, that's how the unicorn works its magic."

The hypothesis is the key to the experiment. The main focus of the experiment is trying to prove the hypothesis wrong. Upon peer review, scientists will attempt to see if the experiment properly tested what could falsify the hypothesis, or if the results were baked for a positive outcome. You and I are discussing a theory of epistemology. It is important that we try to prove it false, to attack it, and put it to the test. While there may be instances both of us can see positives that would make the theory useful, what matters more is whether the theory holds up in logical consistency. We are not trying to prove the theory right by its positives alone, we are trying to prove the theory right by the fact that attempts at negating it do not work.

Quoting Bob Ross
I think we are in agreement then! My question for you is: do you find it a meaningful distinction (categorical vs hypothetical), and what terminology would you translate that to in your epistemology?


I think there is a meaningful distinction here. Categorical deductions involve no potential inductions. Hypothetical deductions take a potential induction, and conclude a deduction based on a hypothetical outcome of the induction. I think that is very important in evaluating the risk and how much we should care about the induction.

If I have to find an item at the store, I'm in a rush, and it could be in aisle 1 or 2, I can evaluate the outcomes if I pick correctly vs. incorrectly. Because the aisles aren't that big, I decide not to ask a member of the store where the item is, and quickly run through both aisles. Of course, if I'm in a rush and I don't know where the item is among 25 aisles, in evaluating the hypothetical outcomes, it's much quicker on average to ask the person at the store next to me where the item in question is than to potentially find the item on the 25th aisle I explore. Perhaps the hypothetical deduction might give a better way to evaluate which inductions are worth pursuing beyond the cogency hierarchy, something I know you've been interested in.
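The average-case comparison behind this can be sketched as a small expected-value calculation. This is only an illustration of the reasoning, not anything from the papers; the time costs below are hypothetical numbers chosen for the example:

```python
# Expected number of aisles searched when the item is equally likely
# to be in any one of n aisles and we check aisles one at a time:
# the average of 1..n, which is (n + 1) / 2.
def expected_aisles_searched(n):
    return sum(range(1, n + 1)) / n

# Hypothetical time costs (made up for illustration).
SECONDS_PER_AISLE = 30  # time to walk and scan one aisle
SECONDS_TO_ASK = 60     # time to stop and ask an employee

def expected_search_seconds(n):
    return expected_aisles_searched(n) * SECONDS_PER_AISLE

# With 2 aisles, searching beats asking on average...
print(expected_search_seconds(2))   # 45.0 seconds, vs 60 to ask
# ...but with 25 aisles, asking wins by a wide margin.
print(expected_search_seconds(25))  # 390.0 seconds, vs 60 to ask
```

Under these assumed costs, the hypothetical deduction ("if I pick wrong, I pay this much time") tells you which strategy to pursue before any induction resolves.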

Quoting Bob Ross
Testing in my mind in terms of my imagination, for example, does not automatically hold for that same "label" in non-abstract considerations. So I wouldn't say that "avoiding an induction" is a mistake, it is "avoiding the indexical consideration" that is the mistake.


I did not intend to note that "avoiding an induction" is a mistake. I think it is a reasonable tactic at times to be efficient. But yes, you can call it "avoiding an induction" or "creating a different context that does not contain an induction" and that is fine.

Quoting Bob Ross
If I look down and see a "red" "card", then I just deductively ascertained (without an induction) that non-abstractly there exists a "red card".


Any time you attempt to match your identity of "red" to something else, you are making an implicit induction. Only after you confirm the essential properties that it is "red" do you have the deduced conclusion. This can be done very quickly, but you do not look at the "red" card and create an identity called "red" for the first time. You are looking at the "red" card, and matching it to the belief that it is "red", the identity you created when you saw "red" for the first time.

Quoting Bob Ross
In the second case where I state, "The next cat I will see will be green", I am putting something testable out there

But that belief has no bearing on uncertainty. You could just as easily have deductively noted that you have no clue what the next cat will be, and then saw it was green (and you would know that you have no clue deductively). If you do submit such a belief (as you did), then yes we can deductively ascertain how aligned your induction was with real knowledge, but it never becomes knowledge. Even if you guessed right, you didn't know.


I want to make sure you didn't misunderstand me here. I am not saying that an induction becomes knowledge. I am stating the deduced result of the induction becomes knowledge. If I believe the next cat I see will be green, that is an induction, not a deduction. If the next cat I see is deductively confirmed to be green, then my induction was correct, but it does not change the fact it was an induction. The induction itself is not knowledge, only the deductively concluded result is knowledge.
If I state, "I have no clue what color the next cat I see will be", the induction is whether, when you see a cat, you believe that cat's color has a match to your distinctive knowledge of colors. That result is the deductive conclusion.

Quoting Bob Ross
I could distinctively know that society does not define something a certain way.

This is where you sort of lost me. If by "distinctively know" you mean that you can categorically define "society" in a way that necessitates that they don't hold that definition of "cat", then I agree.


Correct.

Quoting Bob Ross
But I cannot applicably know that society defines something a certain way, when the result of that claim would show that they deductively do not.

I would agree insofar as the distinction being made is that my deduced abstract consideration of what a "society" or "cat" is has no indexical relation to non-abstract considerations, but I am failing to see how this has anything to do with necessarily positing an induction prior to deducing.


I am not stating there is a necessary induction prior to creating further deductions. I am simply noting that when one decides to induce, applicable knowledge is the deduced resolution to that induction.

Quoting Bob Ross
Someone can look at a table, and then say they didn't just look at a table, but they did (and I think you are agreeing with me on this). It is an essential property of "human being" that they are a reasoning being, but I think how you are using "reasonableness", they don't have to have it. But they nevertheless abide by certain rules, which is their reason, even in the most insane of circumstances, which is a part of the definition of being human.


Using reason in the most basic way we have defined it so far, yes. Reasonable would be a human being who uses societally agreed upon logic over emotions and desires. In the case of our very basic definition of reason, yes, that is, I think, an essential property of all living beings. But having reasonableness, or agreeing to make decisions based on logic over emotions and desires, is not an essential property of being human.

Quoting Bob Ross
I don't think any of this proves that I was in control of anything. What discerns actual accordance from coincidental repetition?

We do, colloquially, make distinctions between something like "intention" and what the body actually is capable of, but ultimately I fail to see how we truly control any objects (which includes all concepts, so thoughts, imagination, the body, etc). What proof is there that you are not along for the ride?


What proof is there that we do not have control over certain things? My proof is I have control over certain things. I can will my arm to move, and it does. I can will against my emotions to do something more important. Are you saying that you have control over nothing Bob? I don't think you're intending that, but I think I need clarification here. And if you are intending that we can control nothing, it would be helpful if you could present some evidence as to why this is.

Quoting Bob Ross
Do you think that you sometimes can control your "dream world" within your imagination, or all the time? Or never?


Sometimes.

Quoting Bob Ross
When you say "outside of our mental control", this leads me to believe that you think that you control your mental, or abstract considerations, but I do not think you do. There is no point, in reference to any object, at which we "know" that we controlled it. It is an induction at best.


Again I'm confused here. I'll need this broken down more.

Quoting Bob Ross
Upon further reflection, I don't think we deduce the hierarchy holistically (either as distinctive or applicable--either way they are both considered deductions). Nothing about the premises necessitates the conclusion that "possibility" is more cogent than "speculations".


It was a while back, but I believe I did cover this. It had to do with chains of inductions away from the deduction. A probability is one step from a deduction; a possibility is a less focused induction than a probability, because it cannot assess the likelihood of something happening. A speculation is an induction that introduces not only a possibility, but also the claim that something which has never been confirmed to exist before can exist. And then, as you'll remember, there are irrational inductions.

Quoting Bob Ross
Nothing about experiencing something once deductively necessitates that it is more likely to happen again over something that hasn't been experienced (and isn't an irrational induction).


Correct. The hierarchy cannot determine which induction is more likely to be. It can only determine which induction is more cogent, or least removed from what is known. Cogency has typically been defined as a strong inductive argument with true premises. Here cogency is measured by the length and degree of its inductive chain away from what has been deduced.

Great conversation again Bob!

Bob Ross April 24, 2022 at 21:14 #685751
@Philosophim,

Wonderful post!

"Applicable knowledge is the conclusion of an induction". Add in "Deductive conclusion" because it is possible to believe the conclusion to an induction is another induction.


With respect to the first sentence, it depends on what you mean by "conclusion" whether I would agree. Again, by "conclusion" are you implying there is an actual connection between an induction and a deduction, or is it simply that the latter followed the former, but was necessarily not a result of it? I think that we colloquially assert that in the event that deductive knowledge follows an induction pertaining to the same subject we have thereby concluded our induction was correct or incorrect, but I don't think that holds formatively. In other words, if you mean "induction" -> "deductive conclusion", then I disagree. However, if you mean "induction" ~> "deductive conclusion" -> "analysis of induction", then I agree. "->" is how I am signifying a strict entailment, whereas "~>" is a loose entailment (e.g. I induce A, A motivates me to investigate the subject B pertaining to A, I then ascertain knowledge K on subject B deductively, and then analyze A through my newly acquired K to determine how aligned it was with knowledge, however A does not directly entail K in any way beyond the loose entailment of motivation or incentive).

With regard to the second sentence, I think you are suggesting that Applicable Knowledge can be a conclusion that is an induction, which I would strongly disagree with (if I am understanding that sentence correctly). If "Applicable knowledge" is a "conclusion of an induction", and "conclusion" is purposely not restricted to "deductive conclusion", then I can substitute it in and get "applicable knowledge is (or can be) an inductive conclusion to an induction", which I think cannot be true since an induction is not knowledge. One can most definitely formulate a "conclusion" to an induction which is also an induction, but it would not be "applicable knowledge".

Yes, you could have. But that does not negate the situation in which there is an induction that you are actively trying to discover the end result.


I think I am starting to understand better what you are conveying. Essentially (and correct me if I am wrong) you are utilizing "applicable knowledge" as a distinction to emphasize that which is not in our control and, thusly, must be discovered as opposed to projected. Although I think there is a meaningful distinction between "discovery" and "projection", I think ultimately it is all discovery. I can recursively analyze my thoughts in the exact same manner, and so I don't think the distinction between "induction" ~> "deduction" has any bearing on what you trying to convey. If one claims knowledge pertaining to something that does not indexically (contextually) refer to the proof they provide, then therefrom a contradiction arises which invalidates such.

The induction in this case is the belief that what I am observing matches a previous identity I have created. Does this side of the penny match heads? That is "the question". The result, "Yes it does," if deduced, is "the answer".


The "question" you posited here is not an induction. You are correct, however, that the induction in your example was "see a symbol that seems familiar", but that is not simply just a question. "Does this side of the penny match heads?" is completely neutral, because it isn't an assertion at all. I am not inducing that it does match or that it doesn't. So that "question" coupled with the "answer" would be, in this case, distinctive knowledge. But your previous example (asserting it is familiar) would be applicable. That's why I can easily refurbish your example as distinctive and still obtain the same exact knowledge:

I can also flip a penny, look at the result and wonder if I've seen it before. I then try to match the symbol to what is considered "heads" in my mind, and I do so without contradiction. This is distinctive knowledge.

When you stated "seems familiar", I can see how that could potentially imply an assertion that it actually is familiar, which would imply that it has been seen before (which is an induction). But wondering is not an assertion either way in itself.

If I had believed that the penny would result in heads, then the answer is the resolution to the induction. Identifying an induction that has not yet resolved, versus an induction that has a resolution in our chain of thinking is incredibly important!


I 100% agree it is important to understand whether an induction has been resolved or not; however, I don't see how that is a comparison of an unsolved induction vs a resolution in our chain of thinking (it would simply be, in my head, identifying unsolved vs solved inductions). The "resolution" of an induction is simply utilizing our knowledge to ascertain how aligned it was with true knowledge, which is a spectrum (it isn't a binary decision of "I resolved that it was true or that it was false"): my induction could have been correct to any degree, and incorrect to any degree. Likewise, it is a continual process, we simply take the knowledge we have and utilize it to determine how "correct" our induction was, but we can very well keep doing this as our knowledge increases. So, I'm not sure where the line would be drawn for when an induction truly is "resolved" vs when it is still "unresolved". I think colloquially we simply roughly discern the two as "inductions with very little knowledge grounding it" vs "inductions that have lots of knowledge grounding it". I think that it can seem like a binary situation when considering really trivial examples, such as flipping a coin. But when considering something really complicated like evolution, it is much harder to see how one would ever holistically know such: it is more that we have ample knowledge grounding it (such as evolutionary facts and many aspects of the theory), but there's never a point where we truly can deduce it holistically.

I could come up with an entirely foolproof deductive point about Gandalf in the Lord of the Rings. Isolated, no one would care. But if at the very beginning of my deduction I started with, "I believe Gandalf is a real person," that puts the entire "deduction" in a different light!


I'm not sure what you mean by "no one would care". Sure, people may not be interested in Gandalf from the movie, but, if you truly came up with a foolproof deductive argument, then that argument would be true of Gandalf in the movie (regardless of who is interested therein). And, yes, inducing that Gandalf is a real person does put it in a different light, which is simply that it no longer indexically refers to a movie. I'm not sure how this necessitates that this distinction ought to be made as "induction" ~> "deduction" vs "deduction". I know deductively the indexical properties of the given proposition, and thereby can ascertain whether my assertion actually does pertain to the subject at hand or whether I am misguided.

Knowledge is about a chain of thinking.


I would say only insofar as knowledge is strictly deductions. It is within the realm of inductions where I would say we are claiming chains of thinking matter (in terms of cogency), but inductions aren't knowledge (as you are well aware).

When people make a bet on what horse will win the race, there is active incentive to find out what the actual result of the race is


Incentives do not entail knowledge in themselves. If I state that my horse won the race (simply what you would call distinctively), then obviously I do not know this in relation to the "actual" race, because there's a contradiction here: all I know is that, at best, I am convinced my horse won the race (or I am imagining a race within my mind which is not the "actual" race), not that it actually did win, because there is an indexical consideration here, and I would thereby be accidentally committing a conflation.

People also don't want to hear, "Oh, Buttercup lost? Well I'm going to redefine my bet that when I bet on Princess, I really bet on Buttercup"


Although I see the meaningful distinction here, I don't think this has any direct correlation to your "distinctive" vs "applicable" knowledge distinction. Firstly, someone could actually have meant to bet on Buttercup but instead associated the wrong horse with the name by accident. Secondly, they could simply be trying to change it because their bet was wrong. It isn't that we want definitive "deduced answers", it is that we want definitive answers (which can be inductions). In most places, even if everyone knows that I have pure intentions and truly meant to bet on the winning horse but mistakenly bet on a different one, they take my induction definitively with pre-agreed upon definitions. No one cares if I deductively ascertained it or inductively ascertained it, they just care what I said and not what I meant.

Contextual, yes. Specifically distinctive and applicably contextual. We could view it as distinctive and applicably indexical if you wish. Although I may need to refine the meaning of those terms within contexts now that I've tweaked the meaning of applicable.


Contextual is fine, no need to redefine it as "indexical", I understand. The problem is that there aren't only two contexts (as you are trying to posit). What exists in my thoughts may not exist in my imagination, and it may not exist in "reality" either. Likewise, what may exist in "reality" here may not exist there, likewise what exists in "imagination" here may not exist there, and ditto for thoughts. Just because I can rightfully claim knowledge of X in "reality" here doesn't mean it is not a contradiction to thereafter claim X there. This critique, a very important critique you are making at that, is subject to a potential infinite of contexts. I am failing to see how hyperfocusing on one contextual distinction (distinctive and applicable) amongst a potential infinite of contextual differences is meaningful. I am starting to see that it really boils down to control for you (I think): distinctive is what is in our control vs applicable is what is not (i.e. projection vs discovery), but, as we will see in a bit, I find this to be an incredibly difficult line to draw.

It is when I make a belief that X matches Y definition in my head that I am making an induction, and need to go through the steps to deduce that this is true


I hate to reiterate, but I could very well simply omit the belief and see if X matches Y, thereby obtaining distinctive knowledge.

At the point the coin is flipped, the induction happens when I attempt to match the result to my distinctive knowledge.


Not necessarily. An induction only happens in this scenario if you propose a belief towards if it matches. If you simply attempt to match a result to "distinctive knowledge", then that is purely deduced.

The implicit induction is, "I believe the result could match to what I distinctively know."

This is very interesting, because it is neither an affirmation nor a denial of the result. It is merely whether one is capable of matching non-abstract symbols to abstract ones (such as memories). I think this is deduced as true, and if one happens to deduce the opposite, then they don't pursue trying to match them. I don't believe that I can match non-abstract symbols to abstract ones, I know I can. Are you saying you don't know if you can, you simply believe you can?

Science does not seek to prove a hypothesis, it seeks to invalidate a hypothesis. A hypothesis must be falsifiable. There needs to be a hypothetical state in which the hypothesis could be false. Science attempts to prove a hypothesis false, and if it cannot, then we have something.


I partially agree with you here, but it is vital to clarify that science does not solely seek to prove something is false and, in the event that it can't, deem it true (that is the definition of an appeal to ignorance fallacy). Science deals with "positive" and "negative" evidence: the former are tests conducted to see if the results match what should be produced to support the hypothesis (as in it is what is expected if it were true), whereas the latter are tests conducted to see if one can produce results that negate the possibility of the hypothesis being right. Both are technically attempts to falsify the hypothesis, because positive and negative evidence are two sides of the same coin. The mere falsifiability of a hypothesis is simply the preliminary verification step. Peer reviews do not just seek to verify that the tests conducted produced negative evidence: they also make sure there is positive evidence for the hypothesis. In other words, just because something hasn't been falsified does not mean scientists take it seriously.

I think there is a meaningful distinction here. Categorical deductions involve no potential inductions. Hypothetical distinctions take a potential induction, and conclude a deduction based on a hypothetical outcome of the induction


What do you mean by "potential inductions"? I would hold that there are no inductions in deductive premises. "If" conditionals are not inductions.

Any time you attempt to match your identity of "red" to something else, you are making an implicit induction


Only if I formulate a belief is this true. If I state "I think this is red" and then attempt to match it to "redness" abstractly, I am making an induction (originally). However, I can see something and ask "what is this?" or "I wonder if this is a color?" and then match it to "redness" abstractly to deduce it is red. An induction is not necessary, but can occur.

I am not saying that an induction becomes knowledge. I am stating the deduced result of the induction becomes knowledge.


I apologize if I was misrepresenting you, I understand. What I am depicting is that this doesn't mean we have a "induction" -> "deduction" relation, nor do I find any meaningfulness in a "induction" ~> "deduction" relation.

I am simply noting that when one decides to induce, applicable knowledge is the deduced resolution to that induction.


This makes sense (as in it is a working definition), but I don't think this has any direct correlation to the critiques you are claiming towards "breaking out of the old epistemologies".

What proof is there that we do not have control over certain things?


First I need to say that I am talking about libertarian free will, but we can get into different definitions if you want.

At face value, something is only in one's control if we can prove that it is. If we can't prove it, then we don't know that we control anything. At this point, it doesn't mean we don't control anything, it simply means we don't know whether we do or not. Likewise, the default belief should be that which is the most intuitive, so to speak, so libertarian free will would be the default (I would say).

At a deeper level, there's many different reasons (I will briefly overview here) why the "subject" does not control anything as defined by libertarian free will:

1. To control one's thoughts, one would have to think of those thoughts before thinking them, which inevitably leads to an infinite regression (a potential infinite, that is), which we do not have: thoughts simply manifest.

2. The natural order either (1) abides by causation, which inevitably proves causal determinism, or (2) is a result of true quantum randomness (which also produces determinism, just not causal determinism in a traditional sense).

3. To know why reason manifests how it does, one would have to literally transcend their own reason, which is impossible. If we think of it in a more materialistic mindset, one would have to truly transcend their own reason to bridge the gap between mind and brain to determine the manifestations of reason. From a more idealistic mindset, one would have to truly transcend their own reason metaphysically to determine what powers (or what not) is determining such manifestations. Either way, it is impossible.

Now, for number four, I am actually going to address your proof:

I can will my arm to move, and it does. I can will against my emotions to do something more important


This doesn't prove (in the sense of libertarian free will, which I have no clue whether you subscribe to or not) you have control over your emotions or your bodily movements: it proves that your mind's will can align with your body's will--which is not the same proposition (I would say) at all. Yes, there's a plethora of situations in which I genuinely know that my will aligned with my body's actions (which is typically referred to as "intentions" and "actions" alignment), but that doesn't mean that I have any reason to believe that my will was the manifestor of those actions. In other words, something aligning with my will does not in the slightest mean that something was in accordance with my will. There are two separate questions: was my arm lifting in alignment with my will, and/or from my will? You just proved the former and not the latter. This would be point 4 and, to keep it brief, I will stop there.

Are you saying that you have control over nothing Bob? I don't think you're intending that, but I think I need clarification here. And if you are intending that we can control nothing, it would be helpful if you could present some evidence as to why this is.


I am most aligned with soft determinism, also called compatibilism, which dictates that the natural world is determined, but that at least one form (or definition) of free will is compatible with it. So I hold that libertarian free will is incorrect and incompatible with determinism, but that doesn't mean we can't still make meaningful distinctions pertaining to acts of "free will" vs "unfree will" (i.e. just because it is determined, doesn't mean we are completely unfree either). I think I will just end here for now on that to serve as merely an introduction.

Again I'm confused here. I'll need this broken down more.


I hold that the "subject", or reason, is that which makes the synthetic and analytic connections of objects, which are manifested in the form of a concept. This is why I do not hold that "consciousness" is equivalent to "reason", because there are numerous aspects of consciousness that are more than adequately accounted for via the brain (materialistic origins). At best, I would say, we could induce that repetitive alignments of the will of the mind and the will of the body reasonably suggest that they are actually one and the same (however I think there are problems with it, too great for me to commit myself to that view).

It was a while back, but I believe I did cover this. It had to do with chains of inductions away from the deduction. A probability is one step from a deduction; a possibility is a less focused induction than a probability, because it cannot assess the likelihood of it happening. A speculation is an induction that introduces not only a possibility, but the claim that something which has never been confirmed to exist before can exist. And then there are irrational inductions.
...
The hierarchy cannot determine which induction is more likely to be. It can only determine which induction is more cogent, or least removed from what is known. Cogency has typically been defined as a strong inductive argument with true premises. Here cogency is measured by the length and degree of its inductive chain away from what has been deduced.


I think your hierarchy of inductions boils down to two key principles, one of which is important here: the deductive groundings of an induction dictate its cogency level in comparison to other inductions within the induction hierarchy. But what is this principle based on? Knowledge or a belief? This is a presupposition which I don't think we have quite explored yet. I don't see how it is necessarily deduced (and therefore knowledge) for them. In other words, do we "know" that the strength (or cogency) of an induction increases due to an increase in deductive groundings, or are we inducing such?

I look forward to hearing from you,
Bob
Philosophim April 30, 2022 at 13:32 #688663
Quoting Bob Ross
However, if you mean "induction" ~> "deductive conclusion" -> "analysis of induction"


Yes, this is my intention.

Quoting Bob Ross
With regard to the second sentence, I think you are suggesting that Applicable Knowledge can be a conclusion that is an induction, which I would strongly disagree with (if I am understanding that sentence correctly).


No, I simply mean that someone can do induction ~> inductive conclusion -> analysis of second induction as a conclusion of the first induction, and this would not be applicable knowledge.

Quoting Bob Ross
I think I am starting to understand better what you are conveying. Essentially (and correct me if I am wrong) you are utilizing "applicable knowledge" as a distinction to emphasize that which is not in our control and, thusly, must be discovered as opposed to projected. Although I think there is a meaningful distinction between "discovery" and "projection", I think ultimately it is all discovery.


This is a good way to break it down. And yes, I've never denied that knowledge is ultimately deductions. But, ultimately all molecules are made up of atoms. It doesn't mean that the creation of the identity of separate molecules doesn't serve a helpful purpose. However, I think you've made some good points, and I will have to go back to my original definition of applicable knowledge. While I think we use applicable knowledge to resolve inductions, the act of resolving inductions in a deductive manner is not applicable knowledge itself. Applicable knowledge is when we attempt to match an experience to the distinctive knowledge we have created, and deductively resolve whether there is, or is not a match.

Quoting Bob Ross
I can also flip a penny, look at the result and wonder if I've seen it before. I then try to match the symbol to what is considered "heads" in my mind, and I do so without contradiction. This is distinctive knowledge.


No, distinctive knowledge is when I create an identity when I flip the coin. There are no limitations as to what I can create. I can call one side "feet" and the other side "hands", with their own essential and non-essential properties. If I attempt to match the coin's side to an identity I created previously with distinctive knowledge, then I am attempting applicable knowledge. If I conclude what I see matches the essential properties of the definitions I hold, then I have applicable knowledge that there is a match.

Quoting Bob Ross
When you stated "seems familiar", I can see how that could potentially imply an assertion that it actually is familiar, which would imply that it has been seen before (which is an induction).


This is the induction I'm talking about. When you believe that what you've seen matches distinctive knowledge, this is an induction, not a deduction. The act of checking, understands that you don't know the answer until after you've checked. You can deduce, "I don't know if what I've observed matches my distinctive knowledge." But if you are going to try to match it, there is uncertainty until you arrive at a deduced outcome.

But I realize I am stretching what it means to be an induction here. The idea of deductively matching to the identities you distinctively know, vs creating identities you distinctively know, was the original way I described applicable knowledge. While I have tried to see if there is an implicit induction in the act of matching, I'm not sure there is now. It's not necessarily an induction; it's the experience of the unknown, and how you attempt to deal with it. An induction is really just an extension of the unknown. And whether our deduction is distinctive or applicable (an attempt to match to distinctive) is really just a way a person has decided to resolve an induction. Do we attempt to match to our existing identities, or create a new one?

That being said, I'm glad we've explored this route, as I believe examining the resolution of an induction seems to be important. I also still claim that one can only resolve an induction applicably. Only after that can they create new distinctive knowledge. An induction relies on distinctive knowledge in its claim. First, one must resolve the induction based on that distinctive knowledge. If one changes the definitions prior to this induction, one is not really testing the induction, they are avoiding it and making another claim. After one has resolved the induction based on the distinctive knowledge of the definitions originally made, then one of course can change and amend their distinctive knowledge as I've noted before.

Quoting Bob Ross
"Does this side of the penny match heads?" is a completely neutral assertion, because it isn't an assertion at all. I am not inducing that it does match or that it doesn't. So that "question" coupled with the "answer" would be, in this case, distinctive knowledge.


While I agree with everything you've said here, I want to note the solution would be applicable knowledge if you tried to match "heads" with your distinctively known identities. If you decided to create an identity, that would be distinctive knowledge.

Quoting Bob Ross
"resolution" of an induction is simply utilizing our knowledge to ascertain how aligned it was with true knowledge, which is a spectrum (it isn't a binary decision of "I resolved that it was true or that it was false): my induction could have been correct to any degree, and incorrect to any degree. Likewise, it is a continual process, we simply take the knowledge we have and utilize it to determine how "correct" our induction was, but we can very well keep doing this as our knowledge increases.


An induction can be resolved with another induction, or a deduction. If one "resolves" an induction with another induction, its not really resolved. In the case of an induction's resolution being another induction, we have taken a belief, and believed a particular answer resulted. In the case where we applicably resolve an induction, we have removed uncertainty. Of course, this has never meant that knowledge could not change at a later time as new distinctive knowledge is learned, or we obtain new experiences and deductions that invalidate what we knew at one time. But the future invalidation of a deduction does not invalidate that at the time it was made it was a deduction, and what a person could applicably know in that situation with what they had.

Quoting Bob Ross
But when considering something really complicated like evolution, it is much harder to see how one would ever holistically know such: it is more that we have ample knowledge grounding it (such as evolutionary facts and many aspects of the theory), but there's never a point where we truly can deduce it holistically.


There are cases where, if we analyze the chain of reasoning, we'll find inductions that have never been deductively resolved. That's where the hierarchy of induction comes in. Further, areas where cogent inductions are within our logic should always be noted as possibilities we can go back and attempt to improve on. There is nothing wrong with noting that a claim to knowledge has inductions without deduced resolutions within it, if it truly is the best conclusion we can make. But glossing over that it is an induction is not a resolution either. Some things which we know are at their core cogent inductions, with hypothetical deductions as the assumed resolution. If that is the best we can do with what we have, then it is the tool we should pick.

Quoting Bob Ross
And, yes, inducing that Gandolf is a real person does put it in a different light, which is simply that it no longer indexically refers to a movie. I'm not sure how this necessitates that this distinction ought to be made as "induction" ~> "deduction" vs "deduction". I know deductively the indexical properties of the given proposition, and thereby can ascertain whether my assertion actually does pertain to the subject at hand or whether I am misguided.


This example was only to demonstrate the importance of looking at the chain of thinking, and how it is important to realize that deductions in isolation do not necessarily tell the full story of what a person knows.

Quoting Bob Ross
Although I see the meaningful distinction here, I don't think this has any direct correlation to your "distinctive" vs "applicable" knowledge distinction. Firstly, someone could actually have meant to bet on Buttercup but instead associated the wrong horse with the name on accident. Secondly, they could be simply trying to change because their bet was wrong. It isn't that we want definitive "deduced answers", it is that we want definitive answers (which can be inductions).


I went into societal context here. In this case, society will not accept an individual changing the definitions involved in the original bet. Despite the individual's intention that they bet on "the other horse", the reality recorded by society is that they bet on the losing horse.

This again is more of an example to demonstrate the importance of resolving a situation that is "unknown". While originally I proposed the resolution of the induction was applicable knowledge, I feel confident at this point to go back to my original meaning, which was that one could solve this uncertainty applicably, or distinctively. The point here is to emphasize once again that resolving inductions with deduced resolutions is an important societal need and should be considered in any theory of knowledge.

Quoting Bob Ross
I am failing to see how hyperfocusing on one contextual distinction (distinctive and applicable) amongst a potential infinite of contextual differences is meaningful.


As I've noted so far, I believe the decision to create an identity, vs match to an identity one has already created is a meaningful distinction that is important when trying to resolve knowledge questions. We can go into this deeper next discussion if needed.

Quoting Bob Ross
I partially agree with you here. but it is vital to clarify that science does not solely seek to prove something is false and, in the event that it can't, deem it true (that is the definition of an appeal to ignorance fallacy).


I did not mean to imply that science marks as "true" whatever is not disproven. It simply notes such alternatives are not yet disproven. I don't want to get into the philosophy of science here (We have enough to cover!), as long as there is an understanding science takes steps to disprove a hypothesis, that is the point I wanted to get across.

Quoting Bob Ross
What do you mean by "potential inductions"? I would hold that there are no inductions in deductive premises. If conditionals are not inductions.


A hypothetical deduction is when we take an induction, and take the logical deductive conclusion if it resolves a particular way. This deduction is not a resolution to the induction, this is a deductive conclusion if the induction resolves a particular way. Just as a hypothetical is a potential deductive conclusion, every hypothetical has a potential induction it is drawn from.

Quoting Bob Ross
If I state "I think this is red", and then attempt to match it to "redness" abstractly am I making an induction (originally). However, I can see something and ask "what is this?" or "I wonder if this is a color?" and then match it to "redness" abstractly to deduce it is red. An induction is not necessary, but can occur.


I agree. This is why I'm going back to my original definition of applicable knowledge, which is when we attempt to match our experiences with our previously established distinctive knowledge and deduce an answer.

Thank you for explaining your view on libertarian free will. I have no disagreement with this, as this is simply a distinctive context you've chosen. Part of what I refine into the distinctive knowledge of "I" is that which wills. How I am formed or determined is irrelevant to how I define myself. This does not negate your distinctive context either. If such a distinctive context is useful to yourself, then I see no reason not to use it.

But, does your distinctive context escape the epistemology proposed here? I would argue no. You still need a set of definitions. You can create a distinctive logic using the definitions you've come up with. The question then becomes whether you can applicably know it in your experience. If you can, then you have a viable distinctive and applicable set of knowledge that works for you. I of course can do the same with mine. If I expand the definition of the I to also include "will", then I can prove that I can will my arm to move, and it does. And in such a way, my definition of "I", and having control over particular things is applicably known as well. I personally find the idea that I control things useful to my outlook in life. You personally do not. For our purposes here, I'm not sure this difference between us is all that important to the main theory.

Quoting Bob Ross
But what is this principle (Inductive hierarchy) based on? Knowledge or a belief? This is the presupposition of which I don't think we quite explored yet. I don't see how it is necessarily deduced (therefore knowledge) for them.


The hierarchy of induction is distinctively known based on the logic proposed earlier. I have always stated that despite our conclusions of what is more cogent, they are always still inductions. Meaning that choosing a cogent induction does not mean the outcome of that induction will be correct.

Consider the probability of a jack being pulled out of a deck of 52 cards. The most cogent guess with that information is that any card but a jack will be drawn next. But a jack can still be drawn. This is more cogent than not knowing how many of each card are in the deck, but knowing that at least one exists in it. We may guess a jack will be drawn without knowing the odds, but that is not as likely to be correct as when we guess with the odds that could have been known. Again, even if there is only 1 jack, it does not negate that it may be drawn.
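To make the odds in this card example concrete, here is a small illustrative sketch of my own (not part of the original argument), assuming a standard 52-card deck containing 4 jacks:

```python
# Illustrative sketch (assumption: a standard 52-card deck with 4 jacks).
from fractions import Fraction

p_jack = Fraction(4, 52)   # chance the next card drawn is a jack
p_not_jack = 1 - p_jack    # chance it is any other card

print(p_jack)      # 1/13 (about 7.7%)
print(p_not_jack)  # 12/13 (about 92.3%)
```

So guessing "not a jack" is correct roughly 92% of the time, which is exactly why it is the more cogent guess, even though a jack can still be drawn.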

And of course, speculating that a jack can be drawn in a deck of cards, when we have never seen a jack be drawn, and do not know if there is even one in the deck, is even less cogent. There of course could be a jack, but it's less reasonable to guess there is a jack before one knows the deck contains a jack. And of course, we could be shown the deck, see that there is not a jack, but still guess a jack will be drawn. While this is irrational, perhaps the dealer did something outside of our applied knowledge, such as slipped a jack in when we weren't looking.

But is the hierarchy of inductions applicably known? No, that would require extensive testing. These are fairly easy tests to create however. First, mix different card types into a deck on each test. Show the person the odds of the cards in the deck, and have them guess what card will come next. Second, don't show the person the odds of the cards in the deck, just tell them what's in it. Third, don't show them what card types you shuffled into the deck. Finally, show them all the cards in the deck, then have them guess a card that is not in the deck. Do this hundreds of times, then chart the percentage of guesses that were correct for each cogency level. Do I have confidence that such a test will reveal the more cogent the induction, the higher chance a person's guess will be correct? Yes.
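The proposed test can be sketched as a quick simulation (my own illustrative construction; the deck composition, rank names, and trial count are arbitrary assumptions, not from the original post). Each guesser corresponds to one cogency level in the hierarchy:

```python
# Illustrative sketch: does more deductive grounding yield a higher hit rate?
import random

random.seed(0)  # fixed seed so the run is reproducible

RANKS = ["ace", "king", "queen", "jack"]           # all ranks a guesser might name
DECK = ["ace"] * 6 + ["king"] * 3 + ["jack"] * 1   # hypothetical mixed deck

def hit_rate(trials, guess):
    """Draw a card `trials` times and return the fraction of correct guesses."""
    hits = 0
    for _ in range(trials):
        drawn = random.choice(DECK)
        if guess() == drawn:
            hits += 1
    return hits / trials

# Probability-level: shown the odds, so always pick the most common rank.
prob = hit_rate(10_000, lambda: "ace")
# Possibility-level: told which ranks are present, but not their counts.
poss = hit_rate(10_000, lambda: random.choice(["ace", "king", "jack"]))
# Speculation: guesses among all ranks, including one never confirmed in the deck.
spec = hit_rate(10_000, lambda: random.choice(RANKS))
# Irrational: guesses a rank known not to be in the deck.
irr = hit_rate(10_000, lambda: "queen")

# The more cogent the induction, the higher the chance the guess is correct.
assert prob > poss > spec > irr
```

With this deck the probability-level guesser lands near 60%, the possibility-level guesser near 33%, the speculator near 25%, and the irrational guesser at 0%, matching the ordering the hierarchy predicts for this toy setup.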

Great points again, Bob. I think you have thoroughly shown that I cannot expand applicable knowledge as the resolution of an induction. It is that we resolve inductions using applicable knowledge. The results of that resolution can then be used to make new distinctive knowledge. I think this is enough for me to cover right now, and I look forward to your further critique!
Bob Ross May 04, 2022 at 00:55 #690464
Hello @Philosophim,

In light of your post and upon further reflection, I think that your "applicable" vs "distinctive" knowledge distinction is becoming ever so clear to me. In fact, I am now fairly confident we are essentially conveying the exact same thing in terms of underlying meaning, but we are semantically disagreeing. Or I am misunderstanding you yet again and we aren't on the same page: only time will tell (:

While I think we use applicable knowledge to resolve inductions, the act of resolving inductions in a deductive manner is not applicable knowledge itself. Applicable knowledge is when we attempt to match an experience to the distinctive knowledge we have created, and deductively resolve whether there is, or is not a match.


I believe, at last, I understand your distinction, which is simply that which is created vs that which is matched. I have no problem with that distinction (in terms of the underlying meaning). I have a similar view myself, albeit not in the form of that terminology. However, and this is reverting back to one of my original contentions in our discussion, I find the terminology you use confusing (in light of what it is meant to structurally convey).

"Distinctive knowledge" is misleading (in my opinion) because all of knowledge is "distinctive" in the sense of what the term actually means (but I understand you are implying more than that with it as you define). Likewise, "applicable knowledge" is misleading (I would say) because all of knowledge is "applied". Therefore, I find (as of now) the distinction to be most accurately represented as synthetic (~projected) vs analytic (~discovered) knowledge, whereof synthetic knowledge is a child of analytic knowledge (not to be confused as a sibling). "Synthetic" generally means (philosophically) "a proposition whose predicate concept is not contained in its subject concept but related", which clearly describes (in my opinion) the extension of one's own "creations" (projections) onto the "world", so to speak. For example, the concept of a rock (or just a rock, so to speak) on the floor doesn't have any inherent properties that necessitate it be called a "rock": I synthetically projected that property onto it. Likewise, "analytic" expresses the contrary: "a proposition whose predicate concept is contained in its subject concept"; I think that clearly describes something which cannot be a mere projection (or extension of a concept).

No, distinctive knowledge is when I create an identity when I flip the coin. There are no limitations as to what I can create. I can call it one side "feet" and the other side "hands", with their own essential and non-essential properties.


I am presuming you meant "no limitations" loosely, which I would agree with. But, to clarify, there are limitations. In terms of my example, I think you are right if I am understanding your terminology correctly now: since it has no bearing on the induction and it is analytical, it is applicable knowledge.


This is the induction I'm talking about. When you believe that what you've seen matches distinctive knowledge, this is an induction, not a deduction. The act of checking, understands that you don't know the answer until after you've checked.


I would agree, but clarify the implications of this postulation: this directly entails that a lot of topics traditionally viewed as "controlled" by the mind can also be applicable knowledge (analytical knowledge)(e.g. imagination, thoughts, etc). I'm not sure if you would agree with me on that. For example, thoughts are analyzed (~discovered), not synthesized (~projected). However, those thoughts can analytically discover, so to speak, the fact that each inferred "current" thought seems to be "projecting something which is synthetic in relation to a given concept". In other words, and this goes back to my subtle disclaimer that "synthetic knowledge" is a child of "analytic knowledge", we analytically discover that we synthetically project.

Moreover, going back to our discussion of whether "distinctive knowledge" can be induced, this also implies that the deduced validity of a subset of memories (in relation to another subset) is applicable knowledge (discovered: analytic), as opposed to being distinctive knowledge (projected: synthetic): which would be where, if I am currently understanding your view, we went sideways (our argument was presupposing the analysis of memories as "distinctive", which is incorrect). I have a feeling this is not what you are intending, but I nevertheless think it is the necessary implications of what you are distinguishing. For example, my assertion that memory A is valid in relation to the set of memories S would have to be analytical (because I am discovering the "truth" of the matter), whereas labeling it as "memory" + "A" and "memories" + "S" would be synthetic.


But I realize I am stretching what it means to be an induction here. The idea of deductively matching to the identities you distinctively know, vs creating identities you distinctively know, was the original way I described applicable knowledge.


I think that if you are reverting back to that definition (and I understand it correctly), then you are not stretching the definition of inductions, since it has no bearing on the distinction anymore.

I also still claim that one can only resolve an induction applicably


If I am understanding you correctly (as I have elaborated your distinction hitherto), then I actually agree. Because "distinctive" is no longer meaning what I thought it meant. On a separate note, I still do not think we can ever validate the entire set of memories S: we can only validate a subset in comparison to another subset. But I'm not sure how relevant that is anymore.


An induction can be resolved with another induction, or a deduction. If one "resolves" an induction with another induction, it's not really resolved. In the case of an induction's resolution being another induction, we have taken a belief, and believed a particular answer resulted. In the case where we applicably resolve an induction, we have removed uncertainty. Of course, this has never meant that knowledge could not change at a later time as new distinctive knowledge is learned, or we obtain new experiences and deductions that invalidate what we knew at one time. But the future invalidation of a deduction does not invalidate that, at the time it was made, it was a deduction, and what a person could applicably know in that situation with what they had.


If I am understanding your distinction correctly, then I agree here except that applicable knowledge is not relatable to an induction directly. So when you state " In the case where we applicably resolve an induction, we have removed uncertainty", it seems a bit like you may be implicating inductions + uncertainty + applicable knowledge again, which I think is incorrect.

This example was only to demonstrate the importance of looking at the chain of thinking, and how it is important to realize that deductions in isolation do not necessarily tell the full story of what a person knows.


I would now attribute this to a synthetic vs analytic distinction: your example demonstrates the conflation many people have with claims that are contained in the given concept, and those that extend beyond it.

This again is more of an example to demonstrate the importance of resolving a situation that is "unknown". While originally I proposed the resolution of the induction was applicable knowledge, I feel confident at this point to go back to my original meaning, which was that one could solve this uncertainty applicably, or distinctively. The point here is to emphasize once again that resolving inductions with deduced resolutions is an important societal need and should be considered in any theory of knowledge.


I would agree in the sense that "deduced resolutions" are induction ~> deduction, which I think you are agreeing with me on that. It is indeed vital to have a means of "resolving" inductions in any given epistemology, however I would personally describe it as "having a means of dispensing of inductions for knowledge" to really hone in on my position thereon.


As I've noted so far, I believe the decision to create an identity, vs match to an identity one has already created is a meaningful distinction that is important when trying to resolve knowledge questions. We can go into this deeper next discussion if needed.


Assuming I have finally grasped what you were trying to convey, I agree!

I did not mean to imply that science marks as "true" whatever is not disproven. It simply notes such alternatives are not yet disproven. I don't want to get into the philosophy of science here (We have enough to cover!), as long as there is an understanding science takes steps to disprove a hypothesis, that is the point I wanted to get across.


Fair enough.

A hypothetical deduction is when we take an induction, and take the logical deductive conclusion if it resolves a particular way.


I don't think this is true. A hypothetical deduction is a deduction wherein each premise is hypothetically granted as true: it is a valid deduction due to it conforming to the necessary form of a deduction. It is not constructed of inductions where we presume they resolve one way or another (it could be that, if we were to disband it from its hypothetical roots, it has deductive premises as well). I think this is where it is vital to distinguish "resolution" in terms of induction -> deduction vs induction ~> deduction again: the former implies inductions are valid premises of a hypothetical deduction (which is wrong), whereas the latter implies we can dispense of that induction. I think it may be even more clear when "induction ~> deduction" is postulated as "induction <- deduction", as that is really what I think it is. In pseudo formal logic:

D = deduction
I = induction

¬(I ⊢ D) ∧ ¬(I ⊢ ¬D)
D ⊢ (I ∨ ¬I)

I was a bit confusing previously, because there is truly no ~> relation between inductions and deductions, it is really a relation of the deduction to the induction.

This deduction is not a resolution to the induction, this is a deductive conclusion if the induction resolves a particular way.


I'm not certain I agree with this. The induction does not resolve a particular way: the deduction resolves the induction insofar as we can reinterpret the induction via our apperception. The induction does not resolve into a deduction (which I think you are agreeing with me), but, rather, a deduction can resolve an induction by either dispensing of it (as now it is known that the induction happened to be accurate or it wasn't) or retaining it as not directly pertinent to what is newly known.

But, does your distinctive context escape the epistemology proposed here? I would argue no. You still need a set of definitions. You can create a distinctive logic using the definitions you've come up with. The question then becomes whether you can applicably know it in your experience. If you can, then you have a viable distinctive and applicable set of knowledge that works for you. I of course can do the same with mine. If I expand the definition of the "I" to also include "will", then I can prove that I can will my arm to move, and it does. And in such a way, my definition of "I", and having control over particular things, is applicably known as well. I personally find the idea that I control things useful to my outlook in life. You personally do not. For our purposes here, I'm not sure this difference between us is all that important to the main theory.


I don't think our free will differences matter anymore either, assuming I understand your distinction correctly. "control" is irrelevant to synthetic vs analytic knowledge.

The hierarchy of induction is distinctively known based on the logic proposed earlier. I have always stated that despite our conclusions of what is more cogent, they are always still inductions. Meaning that choosing a cogent induction does not mean the outcome of that induction will be correct.


First, I want to emphasize that you did a more than adequate job of proving the induction hierarchy in terms of first order. However, I wasn't referring to the first order derivation of it (I have no problem with your example of empirically verifying that probability-based propositions tend to pan out more than possibilities): I was referring to the second order (a deeper consideration). To really explicate this, let's assume we have empirically obtained (via your extensive test) that each scenario resolves to accurately prove that each respective induction type was always in the postulated relation of probability > possibility > speculation > irrational. We thereby have a satisfying first order proof that this hierarchical structure works (I would, on a side note, argue that such a test is not required to prove it, but that's irrelevant right now). However, now we must deal with a second order proof pertaining to why we ought to believe that, because they related in a particular way in the past, it will hold in the future (aka Hume's problem of induction). If, for example, given that the probability of drawing a king out of three cards which contain two kings and a non-king is 2/3, I were to obtain via trials that over time the continual simulation of drawing a king approaches 66%, then I have a first order proof. However, I don't have any reason thereby to claim that my knowledge of 66% = 2/3 (trials matched abstract) in the past holds true in the future. This is the area that I don't think we have addressed (and, if I'm remembering correctly, your essays briefly gloss over). In other words: do we know the hierarchy of inductions is true (in terms of the cogency relation) or is that in itself also an induction (again, in terms of second order analysis)?
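The first-order trial described above can be sketched in a few lines of Python (a hypothetical illustration only; the function name, seed, and trial count are my own choices, not anything from the discussion):

```python
import random

# Bob's card example: a deck of three cards, two kings and one non-king,
# so the abstract probability of drawing a king is 2/3.
def estimate_king_rate(trials=100_000, seed=0):
    rng = random.Random(seed)
    deck = ["king", "king", "non-king"]
    hits = sum(rng.choice(deck) == "king" for _ in range(trials))
    return hits / trials

# The observed frequency converges toward 2/3 -- a "first order proof" in
# the sense above. Nothing in the trial data itself licenses the claim that
# the convergence will continue tomorrow; that is the second-order (Humean) gap.
print(estimate_king_rate())
```

Note that the simulation only ever reports past frequencies; the second-order question of why those frequencies should bind the future is untouched by running more trials.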

I look forward to hearing from you,
Bob
Nickolasgaspar May 04, 2022 at 12:48 #690640
Reply to Philosophim A Philosophy of Science course by Paul Hoyningen can provide great info on a systematic methodology of knowledge evaluation.
Philosophim May 06, 2022 at 11:09 #691497
Quoting Nickolasgaspar
A Philosophy of Science course by Paul Hoyningen can provide great info on a systematic methodology of knowledge evaluation.


Hello Nickolasgaspar and thanks for your contribution. I'm sure you had good intentions, but it's not very helpful to me. Is there something in particular in the argument or conversation that you noticed such a course could help with? Could you perhaps summarize the points he makes to show me its relevance to the OP or the following discussion?
Philosophim May 08, 2022 at 13:22 #692373
Ah, the analytic/synthetic distinction. Long ago when I first wrote this philosophy, I used the analytic and synthetic distinction instead of distinctive and applicable knowledge. The problem was, as you likely know by now, I had very different definitions from the a/s distinction. When I shared the paper or ideas with other individuals I ran into major problems.

First, people wouldn't listen. They wouldn't try to amend the definitions, and insist that I was just "wrong". Not wrong in my underlying amendments of the definitions, but wrong in trying to change them to begin with. Understandable.

Second, people took their vast knowledge of analytic/synthetic knowledge and would cite philosophers or other criticisms of the a/s distinction without understanding or addressing the points I made. It was straw man after straw man, and few people, I found, are willing to hear, "No, that's not what this version of the a/s distinction means, this is why that doesn't apply."

So I created new terms. This forces people to understand the terminology if I want a conversation. Of course there are still people who don't want to explore something new, but they never wanted to listen when I was redefining the a/s distinction anyway. What I didn't lose were the people who wanted to discuss concepts, but were turned off by word redefinitions. Yes, I redefine some words slightly, but I think by that point people are in the conversation enough that it naturally leads to that.

Are the names I made very good? Probably not. I'm not great at coming up with names! I like distinctive, as it flowed nicely from discrete experience. "Applicable" is probably not very good, but I'm not sure what else to call it. I view words as placeholders for concepts, and I view placeholders as contextual. As long as the word works in some sense within this context, that's fine by me. I see it as "Applying distinctive knowledge" to something other than itself.

But I am very open to new naming! Perhaps creative and comparative knowledge? Identity knowledge and confirmable? Dynamic and static? The problem, of course, with all of these comparisons is that if you interpret the word in a particular contextual way, they don't quite work either. The contextual implication of the words in their general use gets in the way when trying to apply them in context to the argument. The reality is, the knowledge I'm proposing has never existed before. It's a concept no one (that I have read) has proposed. So perhaps I need new words entirely and should research some Latin.

At this point though, feel free to use the a/s distinction to help you understand the concept. I'll correct where the a/s distinction doesn't apply. Let me get to your points now.

Quoting Bob Ross
...analytic expresses the contrary: "a proposition whose predicate concept is contained in its subject concept"


To compare to distinctive knowledge, we need to remove proposition, predicate, and subject.

Distinctive knowledge - A deduced concept which is the creation and memorization of essential and accidental properties of a discrete experience.

This then leads into applicable knowledge, which is loosely based off of synthetic knowledge.

Quoting Bob Ross
synthetic generally means (philosophically) "a proposition whose predicate concept is not contained in its subject concept but related"


Applicable knowledge - A deduced concept which is not contained within its contextual distinctive knowledge set. This concept does not involve the creation of new distinctive knowledge, but a deduced match of a discrete experience to the contextual distinctive knowledge set.

Context- when the symbol/identity of one or more sets of distinctive knowledge are identical, while the essential and accidental properties of the symbol/identity are different. "A rock" in the context of geology has different essential and accidental properties than the context of a 5 year old child for example. This can further be compounded when a person is able to comprehend the essential and accidental properties of a distinctive context, but unable to actively apply those properties due to inability. For example, being a blind geologist has a different applicable context than those with sight.

As you can see, while there are some similarities, they are very different.

Quoting Bob Ross
(Noting synthetic) which clearly describes (in my opinion) the extension of one's own "creations" (projections) onto the "world", so to speak. For example, the concept of a rock (or just a rock, so to speak) on the floor doesn't have any inherent properties that necessitate it be called a "rock": I synthetically projected that property onto it.


Both distinctive and applicable knowledge can be seen as the extension of one's creation on the world. A discrete experience (the rock) has no inherent properties that necessitate it be called anything. Distinctive knowledge is when we create those essential and accidental properties that allow it to be called a "rock". This is our creation upon the world. Upon finding a new discrete experience (potential rock) we attempt to match our definition of "a rock" to "the discrete experience". If we deduce that the essential properties match, we have applicable knowledge that "the discrete experience" is a match to "a rock". This is another extension of our creation upon the world.

Quoting Bob Ross
this directly entails that a lot of topics traditionally viewed as "controlled" by the mind can also be applicable knowledge (analytical knowledge)(e.g. imagination, thoughts, etc). I'm not sure if you would agree with me on that. For example, thoughts are analyzed (~discovered), not synthesized (~projected).


This doesn't quite fit. Projection can happen in both instances of knowledge. It is more about creation of identities versus deduced matching of experiences to already established identities. But both can involve the projected world.

Quoting Bob Ross
In other words, and this goes back to my subtle disclaimer that "synthetic knowledge" is a child of "analytic knowledge", we analytically discover that we synthetically project.


To translate into this epistemology, we always start with distinctive knowledge. So I distinctively create the identity of applicable knowledge. But then, I am also able to applicably know the distinctive knowledge of "applicable knowledge" successfully. So I both distinctively, and applicably know the concept of applicable knowledge.

Once I applicably know applicable knowledge, I can also applicably know that I distinctively know. We can then apply this knowledge back to the initial claim in the beginning that, "I discretely experience." I established a definition of discrete experience, then apply the concept successfully.

Quoting Bob Ross
Moreover, going back to our discussion of whether "distinctive knowledge" can be induced, this also implies that the deduced validity of a subset of memories (in relation to another subset) is applicable knowledge (discovered: analytic), as opposed to being distinctive knowledge (projected: synthetic): which would be where, if I am currently understanding your view, we went sideways (our argument was presupposing the analysis of memories as "distinctive", which is incorrect).


The act of experiencing a memory is part of the act of discrete experience itself. For example, "I remember seeing a pink elephant." Whether the memory is accurate when applied is irrelevant. It is the memory itself that is distinctive. The act of attempting to match your memory to a different discrete experience is application of that memory. The deduced outcome of that match is the applicable knowledge. But if I attempted to show there was a pink elephant that existed, the deduced outcome of that would be applicable knowledge.

Quoting Bob Ross
For example, my assertion that memory A is valid in relation to the set of memories S would have to be analytical (because I am discovering the "truth" of the matter), whereas labeling it as "memory" + "A" and "memories" + "S" would be synthetic.


Memories in relation to other memories are distinctive. "Pink elephant" combines our distinctive understanding of "pink" and "elephant". The application of that memory for its accuracy is applicable. "I remember seeing a pink elephant in my room last night," is distinctive. "My memory is an accurate representation of what happened in reality" is applicable. Was there really an elephant? Was it pink? The outcome is irrelevant to the knowledge of the memory itself.

Quoting Bob Ross
If I am understanding your distinction correctly, then I agree here except that applicable knowledge is not relatable to an induction directly.


There may be a misunderstanding of what is meant by "directly". If I make an induction that the next coin flip will be heads, the result that is experienced and deduced will be the outcome of the flip. If I deduce that the coin lands on heads (instead of just guessing it did), then I have a "resolution" to my induction. This is the relation that I'm talking about. I guessed heads, and it ended up heads. My guess was correct. I guessed heads, and it ended up tails. My guess was incorrect. This resolution is applicable knowledge.

Quoting Bob Ross
A hypothetical deduction is when we take an induction, and take the logical deductive conclusion if it resolves a particular way.

I don't think this is true. A hypothetical deduction is a deduction wherein each premise is hypothetically granted as true: it is a valid deduction due to it conforming to the necessary form of a deduction.


The hypothetical is a possible resolution to an induction. If there was no induction, there would be no hypothetical. The coin can land either heads or tails. We can hypothetically deduce that if it lands heads, X occurs, and if it lands tails, Y occurs. But the hypothetical cannot exist without the induction as a source of alternative outcomes. A deduction leads to a necessary conclusion, not a hypothetical conclusion. Only inductions can lead to hypothetical conclusions. That's the whole point of the IF. If there were no uncertainty in the outcome, we would not need the IF. I don't think we're in disagreement here beyond semantics.

Quoting Bob Ross
the former implies inductions are valid premises of a hypothetical deduction (which is wrong), whereas the latter implies we can dispense of that induction.


To correct this, I am saying inductions are necessary premises to create a hypothetical deduction. The IF implies uncertainty. If you remove the IF, it is no longer a hypothetical; it is simply a deduction.

Hypothetical: IF the penny lands on heads (Implicit uncertainty of the initial premise happening)
Non-hypothetical: The penny lands on heads (A solid and certain premise)
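The heads/tails example above can be written as a literal branch (a toy sketch; the outcome strings "X occurs" and "Y occurs" simply mirror the example, and the function name is my own):

```python
# A hypothetical deduction as a conditional: the flip's outcome is the
# induction (uncertain in advance); each branch is a deduction that holds
# only IF the induction resolves that way.
def hypothetical_deduction(outcome: str) -> str:
    if outcome == "heads":
        return "X occurs"
    elif outcome == "tails":
        return "Y occurs"
    # The IF only covers the induction's possible outcomes.
    raise ValueError("not a possible resolution of this induction")

# Once the flip is actually observed, the IF is discharged and the branch
# taken becomes a plain (non-hypothetical) deduction.
print(hypothetical_deduction("heads"))
```

Before the flip, neither branch can be asserted outright; the conditional structure is exactly the "implicit uncertainty of the initial premise happening" noted above.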

Quoting Bob Ross
I'm not certain I agree with this. The induction does not resolve a particular way:


Can an induction ever resolve then? If I say, "I believe the next penny flip will land on heads" will I ever find out if I was correct in my guess? All I'm noting is how we figure out the outcome of the guess. That must be done applicably.

Quoting Bob Ross
but, rather, a deduction can resolve an induction by either dispensing of it (as now it is known that the induction happened to be accurate or it wasn't) or retaining it as not directly pertinent to what is newly known.


I'm simply noting the accuracy of the induction. I think you're taking two steps here, noting the accuracy of the induction, and then deciding to dispense or retain it. For example, I could deduce the penny lands on tails, but still insist it landed on heads by inventing some other induction like "an evil demon changed it", or simply not caring and insisting it landed on heads regardless of what I deduced. The second step of deciding to stick with or reject the induction is a step too far from what I'm saying. All I'm noting is the deduced outcome after the induction's prediction comes to pass.

Quoting Bob Ross
However, now we must deal with a second order proof pertaining to why we ought to believe that because they related in a particular way in the past that it will hold in the future (aka hume's problem of induction).


I have already concluded that you cannot make any knowledge claim about the future. You can only make inductions about the future. The smartest way to make inductions is to use the most cogent inductions we already know of. So we would make our decisions based on the hierarchy of the inductions we have at our disposal. Just because we can speculate that the rules of reality may change in the future doesn't mean it's possible they will. Since we know what is possible and probable, it is possible and probable they will continue to happen in the future.

Great points again Bob! I hope I adequately showed why the distinctive and applicable distinction of knowledge might be inspired by the a/s distinction, but is not the a/s distinction itself.

Bob Ross May 13, 2022 at 23:35 #694950
Hello @Philosophim,

Well I have clearly missed the mark yet again ): It seems as though we are not semantically disagreeing but, rather, fundamentally disagreeing. I understand now that you are by no means making a synthetic/analytic distinction. It is becoming exceedingly difficult to map d/a to s/a because, quite frankly, they aren't the same distinction. However, I am making that kind of s/a distinction (as opposed to d/a), so I want to clarify that my usage of a/s hereafter isn't meant as a depiction of your distinction but, rather, of mine in contrast to yours.

Are the names I made very good? Probably not. I'm not great at coming up with names! I like distinctive, as it flowed nicely from discrete experience. "Applicable" is probably not very good, but I'm not sure what else to call it. I view words as placeholders for concepts, and I view placeholders as contextual. As long as the word works in some sense within this context, that's fine by me. I see it as "Applying distinctive knowledge" to something other than itself.

But I am very open to new naming! Perhaps creative and comparative knowledge? Identity knowledge and confirmable? Dynamic and static? The problem, of course, with all of these comparisons is that if you interpret the word in a particular contextual way, they don't quite work either. The contextual implication of the words in their general use gets in the way when trying to apply them in context to the argument. The reality is, the knowledge I'm proposing has never existed before. It's a concept no one (that I have read) has proposed. So perhaps I need new words entirely and should research some Latin.


People are indeed diverse, and I can definitely see how some people simply either don't engage with refurbished terminology or misunderstand your points due to the previous definitions of the terminology: fair enough. In that case, Latin may be a good choice; simply as a means of forcing them to understand the underlying meaning and so they don't get upset by the refurbishment of terms.

Out of the terms you suggested, I think "creative" and "comparative" were the closest to what I think you are trying to convey. But I think you are only constituting something as "applicable knowledge" if it is a match, with no relation to contrast (so comparative may not be the best word: "matched" might be, I am not sure). For example, if I begin the act of matching and thereby determine that concept A is not a match of concept B, then do I, under your terms, "applicably know" they aren't a match? In other words, is contrasting, as opposed to simply comparing similarities, an aspect of "application" in your terms? I understand you, rather, to be making the distinction strictly in the sense that "a successful match" is "applicable knowledge".

...analytic expresses the contrary: "a proposition whose predicate concept is contained in its subject concept" — Bob Ross


To compare to distinctive knowledge, we need to remove proposition, predicate, and subject.


I understand now that one would have to remove "proposition, predicate, and subject" to roughly map it onto "distinctive knowledge" because, quite frankly, we aren't speaking of the same distinction (which I previously thought was the case). To my understanding, the fundamental reason for your distinction was meant to expose indexical conflations in a given claim presented by a subject. However, I think that I can achieve that underlying meaning, assuming I understood it right, by using the most fundamental distinction in terms of how reason works: a proposition (all reasoning beings are capable of such) wherein the predicate (all propositions must have a predicate, and therefore all claims made by a subject that must recognize the distinction of indexical relations must have a predicate) is contained (or not contained for the contrary) in its subject concept (all propositions must have a subject concept). If the sentence doesn't meet these fundamental underlying requirements, then the distinction I think you are trying to make isn't applicable anyway (by applicable I am not referring to your term, just normal use). Now, I want to clarify that I am not referring to diction, semantics, or syntax: those all can be contextually redefined (or defined) in terms of both societal and personal contexts. I am referring to the underlying concepts. The given individual doesn't have to call it a "predicate" nor do they have to syntactically abide by the English language, but they necessarily must have a "predicate" concept which refers, in terms of underlying meaning, to a predicate. If not, then it is incoherent to consider it in terms of indexical conflations (e.g. "oranges" therefore "oranges" makes no valid references, therefore it isn't possible to conflate anything that we would like to expose in terms of indexical conflations).

Here's some examples:

If I propose "B", it is not a proposition.

If I propose "B is", it is not a proposition.

If I propose "is blue", it is not a proposition.

If I propose "B is the same as A", then either B matches the definition of A or it does not. However, to know either way, I have to compare and contrast. This is the first issue I have with your terminology: I have to compare and contrast everything to know even if it is distinctive or applicable, but yet "applicable" is supposed to be the area in which I "match" (and potentially contrast?): which doesn't really fit the distinction that I think should be made. In the case that B is a match of the definition of A, then I recognize that there is not an indexical conflation occurring if I were to make claims about B which were derived from claims about A. You would call this "applicable knowledge". In the case that B does not match the definition of A, I recognize that it would be fallacious to make claims about B which were derived from claims about A. At first glance, it feels like that is what you mean by "applicable" and "distinctive", but I don't think it is holistically. I have to perform this for everything, which is the problem with your distinction. For example, if I distinctively define A and distinctively define B, but they are by happenstance defined the exact same, my conclusion that they are defined the same is a comparison of the two distinctively defined concepts, A and B, to derive that they are indeed a match: this didn't involve anything "outside of my control", so to speak. I think you would regardless consider it holistically in the realm of "distinctive knowledge", which I would disagree with. The concept that "concept A = concept B" is a different concept which is not contained in the subject concept of either A or B (therefore it is not analytical): it is a synthetic unity of both A and B under equivocation from matching their definitions all abstractly. The definition of A did not contain the necessity that the concept of B is equivocal to itself. 
I have to use both: I analytically unpack the definitions of A and B to then synthetically compare the two. Maybe I am just misunderstanding you (I probably am), but here are your definitions:


Distinctive knowledge - A deduced concept which is the creation and memorization of essential and accidental properties of a discrete experience.


Applicable knowledge - A deduced concept which is not contained within its contextual distinctive knowledge set. This concept does not involve the creation of new distinctive knowledge, but a deduced match of a discrete experience to the contextual distinctive knowledge set.


It is tricky to map onto a/s because both distinctive and applicable are synthetic and analytic in their own regards: I am starting to see there's no line that can be drawn in the fashion I think you are trying to in order to provide a distinction that exposes indexical conflations.

Applicable knowledge does involve the creation of a new concept: the synthetic joining of "A = B", which is a separate concept from A and B. There was a concept A and a concept B, now there's a new concept that "A = B". This is not necessitated in the concepts A nor B, but yet true of them (i.e. it is synthetic). But there was an analysis that was required to determine "A = B" which was the analysis of what is contained in the concept A and, likewise, what is in the concept B, which is analytical. So both were used to obtain "applicable knowledge". I think this, as of now, is the true pinpoint of the distinction we are both really trying to portray (but I may be wrong, as always).


Both distinctive and applicable knowledge can be seen as the extension of one's creation on the world. A discrete experience (the rock) has no inherent properties that necessitate it be called anything. Distinctive knowledge is when we create those essential and accidental properties that allow it to be called a "rock". This is our creation upon the world. Upon finding a new discrete experience (a potential rock) we attempt to match our definition of "a rock" to the discrete experience. If we deduce that the essential properties match, we have applicable knowledge that the discrete experience is a match to "a rock". This is another extension of our creation upon the world.


I think you are right and that is why I need to be careful with my verbiage: synthesis and analysis are both projections in a sense. However, in terms of a/s, there's a meaningful distinction between the joining of two concepts and what is contained within a given concept. Another reason why we are disagreeing here is because I am viewing the "matching" you described as synthetic and analytic. So matching "a rock" to what is called "a rock" would be projection (the connection of concepts together) whereas the derivation of the properties of "the rock" would be analytical (which wouldn't be meaningfully depicted as projection, but technically would be in a sense). Projection probably isn't a good word here, so I am going to stop using it.

It is more about creation of identities versus deduced matching of experiences to already established identities.


I don't think this directly explicates the recognition of indexical conflations. It is more of a byproduct.

To translate into this epistemology, we always start with distinctive knowledge.


I think that we start with analysis (which is empirical observation) and therefrom derive synthesis. I haven't found a way to neatly map this onto your d/a distinction. I don't think we always start with distinctive knowledge as you've defined it.

For example, take the concept of "A is equal to B" ("A = B"). To realize that I actually synthetically connected the concept of A and the concept of B in a relation of equivocation I must first analytically dissect the created concept of "A = B" to determine that there's a synthesis that occurred. Likewise, I could then counter myself with "well, bob, you just performed synthesis in determining that you analytically discover synthesis". And I would be correct, however I didn't realize that necessarily until after I analytically observed the claim (i.e. that I analyze to discover what is synthesized). I am always one step behind the synthesis, so to speak. Hopefully that made a bit of sense.

The act of experiencing a memory is part of the act of discrete experience itself. For example, "I remember seeing a pink elephant." Whether the memory is accurate when applied is irrelevant. It is the memory itself that is distinctive.


The act of experiencing imagery in one's mind is part of discrete experience: the conclusion that it is a remembrance of the past is not. It would be more like "I am imagining a pink elephant right now" as opposed to "I remember seeing a pink elephant before". The further consideration of whether it is a remembrance is synthetic, as I am doing essentially "A = B". The discrete experience of the pink elephant would be analytic, at least prima facie, because it is simply analyzing what is contained in the concept. But any labeling would be synthetic of the contents of the concept.

"Pink elephant" combines our distinctive understanding of "pink" and "elephant".


The definitions of "pink" and "elephant" would be analytical. But the new concept of a "pink elephant" would be synthetic. The problem is that "pink", in isolation, is "distinctive knowledge". So there's no clear distinction here that "pink" -> therefore "pink elephant" is wrong because it doesn't enter the domain of "applicable knowledge". In other words, your epistemology essentially allows full knowledge claims in the realm of distinctive knowledge and emphasizes the incorrectness of indexical conflations, but yet the latter can occur in the former. Imagine I never imagined a "pink elephant" but, rather, I envisioned "pink", in isolation, and "an elephant" in isolation. If I then claimed "pink elephant", it would make just as little sense as envisioning a "pink elephant" and claiming "there's a pink elephant in my backyard". The a/s distinction, I think thus far, does the best job of constructing the most precise line that exposes indexical conflations holistically.

The hypothetical is a possible resolution to an induction. If there was no induction, there would be no hypothetical. The coin can land either heads or tails. We can hypothetically deduce that if it lands heads, X occurs, and if it lands tails, y occurs. But the hypothetical cannot exist without the induction as a source of alternative outcomes. A deduction leads to a necessary conclusion, not a hypothetical conclusion. Only inductions can lead to hypothetical conclusions. That's the whole point of the IF. If there was no uncertainty in the outcome, we would not need the IF. I don't think we're in disagreement here beyond semantics.


Unfortunately, I don't think we are merely semantically disagreeing on this either. I think you are conflating "uncertainty" with "induction". You can have deduced uncertainty. Therefore, a premise that is hypothetical is not necessarily, when stripped of its if conditional, an induction. It could be a deduction or an induction. If I say Premise 1 = IF X, I am not thereby implying necessarily that X is an induction. I could have deductively ascertained that I simply don't know if X is true, therefore I need an IF conditional to ensure that Premise 1 validates the form of the deduction.

To correct this, I am saying inductions are necessary premises to create a hypothetical deduction. The IF implies uncertainty. If you remove the IF, it is no longer a hypothetical; it is now a deduction.


I would refurbish this to "uncertainty is necessary to create a hypothetical deduction".

Hypothetical: IF the penny lands on heads (Implicit uncertainty of the initial premise happening)
Non-hypothetical: The penny lands on heads (A solid and certain premise)


Again, I agree with this analogy, yet it doesn't prove that the hypothetical is an induction when the if conditional is removed: I might deductively not know whether or not the penny will land heads.

Can an induction ever resolve then? If I say, "I believe the next penny flip will land on heads" will I ever find out if I was correct in my guess? All I'm noting is how we figure out the outcome of the guess. That must be done applicably.


Yes, so with further contemplation, you can resolve an induction, but is resolved deduction -> induction (or induction <- deduction), not induction -> deduction. Again, this is implying to me the indexical conflation consideration: it seems to me you are implying, rightly so, that "a guess" entails uncertainty which entails that some sort of empirical observation (analysis) is required. I am simply noting that this is true of both "applicable" and "distinctive" knowledge. "a guess about A", G, implies that G is not contained in the concept of A, which was analytically ascertained and thereafter a new concept of "G != A" was synthetically created. Therefore, claims about A that are contained in A cannot be extended graciously to G: further empirical observation is required. This process can and does occur abstractly.

I'm simply noting the accuracy of the induction. I think you're taking two steps here, noting the accuracy of the induction, and then deciding to dispense or retain it. For example, I could deduce the penny lands on tails, but still insist it landed on heads by inventing some other induction like "an evil demon changed it", or simply not caring and insisting it landed on heads regardless of what I deduced. The second step of deciding to stick with or reject the induction is a step too far from what I'm saying. All I'm noting is the deduced outcome after the induction's prediction comes to pass.


Fair enough.

I have already concluded that you cannot make any knowledge claim about the future. You can only make inductions about the future. The smartest way to make inductions is to use the most cogent inductions we already know of. So we would make our decisions based on the hierarchy of the inductions we have at our disposal. Just because we can speculate that the rules of reality may change in the future doesn't mean it's possible they will. Since we know what is possible and probable, it is possible and probable they will continue to happen in the future.


Then I think you may be agreeing with me that we do not know that a possibility is more cogent than a speculation in relation to the future; we only know that it is true of the past. The grounds of the induction hierarchy in relation to the future (and the future is the whole purpose of the hierarchy) are an induction.

I look forward to hearing from you,
Bob
Philosophim May 14, 2022 at 15:16 #695180
Quoting Bob Ross
Well I have clearly missed the mark yet again ): It seems as though we are not semantically disagreeing but, rather, fundamentally disagreeing.


Not a worry at all! Please continue to shoot arrows. I think comparing this epistemology to the a/s distinction is inevitable and necessary to fully understand it. I am glad we are exploring this route, as I think it can help clarify what my proposed epistemology means. Further, there needs to be a reason why we should use this epistemology over the a/s distinction if it is to have any worth. Let's dive in.

Quoting Bob Ross
I have to perform this (comparison) for everything, which is the problem with your distinction. For example, if I distinctively define A and distinctively define B, but they are by happenstance defined the exact same, my conclusion that they are defined the same is a comparison of the two distinctively defined concepts, A and B, to derive that they are indeed a match: this didn't involve anything "outside of my control", so to speak. I think you would regardless consider it holistically in the realm of "distinctive knowledge", which I would disagree with.


Again, this depends on how the comparison is made. Let's say I hold A and B in my head as merely definitions. Further, I define a synonym as "two identities which have the same essential and non-essential properties". Then I say, "A and B are synonyms". At that point, I have to compare the essential and non-essential properties. But there is no uncertainty involved. How I define A, B, and synonyms is all in my solo context. I could change the identities of A, B, and synonym anytime I desired. But I don't. Perhaps this process should receive a new identity such as "logical distinction".

If a situation arises in which we are wondering if a distinctively unknown specific experience matches the definition of B, we are applying the identity to something else which is outside of our creative identification. We are still distinctively committing to what the identity of B is, but we are purposefully not creating a new identity for this currently undefined experience. At this point it requires an investigation of what this new identity is, and if it can deductively match to our B identity.

As such, applicable knowledge always involves the resolution of a distinctive uncertainty. There is no certainty that this new undefined experience will match something I distinctively know. I cannot change what B means, and I am choosing not to create a new identity for the undefined experience. The premise that the undefined experience matches B is not a necessary conclusion. But the attempt to match is the belief, or induction, that it could. This is what I've been trying to narrow in on as their difference. Distinctive knowledge has no uncertainty. Applicable knowledge only happens in the resolution of an uncertainty.

Distinctive knowledge - A deduced concept which is the creation and memorization of essential and accidental properties of a discrete experience.

Applicable knowledge - A deduced concept which is not contained within its contextual distinctive knowledge set. This concept does not involve the creation of new distinctive knowledge, but a deduced match of a discrete experience to the contextual distinctive knowledge set.


These are both very well written general definitions. For applicable knowledge, perhaps we need to tweak it a little with my above analysis. "A deduced resolution in the uncertainty of matching a distinctively undefined experience to a contextual distinctive knowledge set."

Quoting Bob Ross
Applicable knowledge does involve the creation of a new concept: the synthetic joining of "A = B", which is a separate concept from A and B. There was a concept A and a concept B, now there's a new concept that "A = B". This is not necessitated in the concepts A nor B, but yet true of them (i.e. it is synthetic). But there was an analysis that was required to determine "A = B" which was the analysis of what is contained in the concept A and, likewise, what is in the concept B, which is analytical. So both were used to obtain "applicable knowledge". I think this, as of now, is the true pinpoint of the distinction we are both really trying to portray (but I may be wrong, as always).


This also sounds good. If one uses the a/s distinction, they will have to use both within distinctive and applicable knowledge. Distinctive and applicable knowledge do not divide into a/s distinctions themselves, however. I'll clarify further with the pink elephant example from earlier.

Quoting Bob Ross
Imagine I never imagined a "pink elephant" but, rather, I envisioned "pink", in isolation, and "an elephant" in isolation. If I then claimed "pink elephant", it would make just as little sense as envisioning a "pink elephant" and claiming "there's a pink elephant in my backyard".


Distinctively, there is nothing strange about taking the terms pink and applying it to an elephant. We create whatever definitions we wish. The part that doesn't make sense is stating there is some unknown distinctive identity apart from our imagination or fiction that matches to the identity of a pink elephant. The creation of distinctive knowledge does not necessitate such knowledge can be applicably known. The a/s distinction is what causes the confusion, not the d/a epistemology.

Alright, back to the original flow!

Quoting Bob Ross
It is more about creation of identities versus deduced matching of experiences to already established identities.

I don't think this directly explicates the recognition of indexical conflations. It is more of a byproduct.


No, taken alone, the process of distinctive and applicable knowledge do not explicitly involve context.

Language A: A bachelor is an unmarried man. (Distinctive)
This person is found to be unmarried. (Applicable)
Therefore this man is a bachelor (Logical distinction)

Language B: A bachelor is a married man. (Distinctive)
This person is found to be married. (Applicable)
Therefore this man is a bachelor (Logical distinction)

By this I mean the context does not affect the logical process itself. The context only determines the defined starting point. The process itself is not contextual, only the identifications and capabilities of the observer/thinker.
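The two-language example can be made concrete with a minimal sketch (an illustration of my own, not from the original posts; the field names are hypothetical). The same matching procedure runs against two different distinctive starting points; only the definitions differ, not the logic:

```python
# Illustrative sketch: the logical process is identical in both "languages";
# only the distinctive definition of "bachelor" (the starting point) differs.

def is_bachelor(person, definitions):
    # Applicable step: deduce whether the observed person matches the
    # contextually defined essential property of "bachelor".
    return person["married"] == definitions["bachelor_is_married"]

language_a = {"bachelor_is_married": False}  # a bachelor is an unmarried man
language_b = {"bachelor_is_married": True}   # a bachelor is a married man

unmarried_person = {"married": False}

# The same deduction yields different conclusions under different contexts.
assert is_bachelor(unmarried_person, language_a) is True
assert is_bachelor(unmarried_person, language_b) is False
```

The point mirrors the text: the context only fixes the defined starting point, while the deduction itself runs the same either way.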

Quoting Bob Ross
To translate into this epistemology, we always start with distinctive knowledge.

I think that we start with analysis (which is empirical observation) and therefrom derive synthesis. I haven't found a way to neatly map this onto your d/a distinction. I don't think we always start with distinctive knowledge as you've defined it.


You are correct! The analysis is the introduction to discovering that we discretely experience. That is how we analyzed and discovered the term "distinctive knowledge". Nothing I've proposed is done without analysis, and all of it is, where possible, shown using distinctive and applicable knowledge (barring inductions).

Quoting Bob Ross
Likewise, I could then counter myself with "well, bob, you just performed synthesis in determining that you analytically discover synthesis". And I would be correct, however I didn't realize that necessarily until after I analytically observed the claim (i.e. that I analyze to discover what is synthesized). I am always one step behind the synthesis, so to speak. Hopefully that made a bit of sense.


I believe so. It is one reason why I found the a/s distinction to not tell the whole story. It is a useful distinction, but one that diminishes in usefulness the more granular you get with them.

Quoting Bob Ross
The act of experiencing imagery in one's mind is part of discrete experience: the conclusion that it is a remembrance of the past is not.


I want to tweak this sentence a little to ensure we are on the same page.

The act of experiencing imagery in one's mind is part of discrete experience.
The act of experiencing it as a remembrance of the past is part of discrete experience.

The deduced conclusion that it is an accurate remembrance of the past is the discrete experience of applicable knowledge.
The deduced realization that I believe my memory to be an accurate remembrance is the discrete experience of distinctive knowledge.

Quoting Bob Ross
Unfortunately, I don't think we are merely semantically disagreeing on this either. I think you are conflating "uncertainty" with "induction". You can have deduced uncertainty.


I don't believe there is conflation, but perhaps I am wrong. An induction is a claim of uncertainty. Certainly we can deduce that an induction is all we can make.

Quoting Bob Ross
Therefore, a premise that is hypothetical is not necessarily, when stripped of its if conditional, an induction. It could be a deduction or an induction. If I say Premise 1 = IF X, I am not thereby implying necessarily that X is an induction.


No, X alone is not an induction. "IF X" is an induction. It is the same as my saying, "I believe it will rain tomorrow." If I remove "I believe", then we are left with "It will rain tomorrow" as a fact. I can create deductions based on the premise "It will rain tomorrow". The addition of the IF lets the reader know that this is not a fact, or a conclusion that followed from the premises we had. It may, or may not rain tomorrow.

Adding the IF makes it hypothetical.
Hypothetical - involving or being based on a suggested idea or theory : being or involving a hypothesis : CONJECTURAL https://www.merriam-webster.com/dictionary/hypothetical

A hypothesis is an induction. A conjecture is an induction. A claim that asserts a conclusion that is not certain, is an induction. The IF is the assertion of a conclusion that is not certain, therefore an induction. IF the induction turns out to be correct, then we can deduce what will follow.

Quoting Bob Ross
Hypothetical: IF the penny lands on heads (Implicit uncertainty of the initial premise happening)
Non-hypothetical: The penny lands on heads (A solid and certain premise)

Again, I agree with this analogy, yet it doesn't prove that the hypothetical is an induction when the if conditional is removed: I might deductively not know whether or not the penny will land heads.


If the IF condition is removed, it is no longer a hypothetical deduction. At that point, it is simply a deduction. "The penny lands on heads" is not an uncertainty, but a certainty at that point. The identities of our chain of reasoning are based on the zero point we pick. It's all about the starting point in our analysis.

Pure deduction chain: Deduction -> Deduction all the way down.
Hypothetical: Induction -> Deduction, with the induction stating an outcome that will happen (but has not yet).
Deduced induction: Deduction -> Induction, due to limited information.
A deduced induction's hypothetical: Deduction -> Induction -> Deduction.

So if I take a hypothetical induction, and remove the induction as a premise within my chain of reason (removing the IF), it is now just a deduction.

Quoting Bob Ross
Again, this is implying to me the indexical conflation consideration: it seems to me you are implying, rightly so, that "a guess" entails uncertainty which entails that some sort of empirical observation (analysis) is required. I am simply noting that this is true of both "applicable" and "distinctive" knowledge.


I hope I have explained why this is not true of both applicable and distinctive knowledge at this point. Distinctive knowledge does not require empirical observation. An induction itself is distinctively known. But the resolution to that induction is applicably known.

Quoting Bob Ross
Then I think you may be agreeing with me that we do not know that a possibility is more cogent than a speculation in relation to the future; we only know that it is true of the past. The grounds of the induction hierarchy in relation to the future (and the future is the whole purpose of the hierarchy) are an induction.


I want to make sure it's understood that cogency does not mean "truth" or "deduced certainty". Cogency is originally defined as "a strong inductive argument with true premises." Here it is amended to be "a strong inductive argument based on how many steps it is removed from deductions in its chain of rationality."

That has been shown distinctively, and I believe can be shown applicably. But I don't claim that taking a cogent induction determines that the induction will come to pass. It's simply shown to be more likely to come to pass when taken over a large sample space. And if a person is to be rational, they will take the induction type that gives them the greatest odds of being correct.

Also I never claim that we can applicably know that any form of induction will necessarily lead to its outcome. It is reasonable to guess that an outcome that will occur 99% of the time will happen, but you will be wrong 1% of the time.

Any claim about the future is always an induction. The question is, do we have a rational way of sorting out which inductions are more reasonable based on logic and past experience? Yes. While it is an induction that logic and our past experiences will be the same tomorrow, we must also not forget that it is also an induction that our logic and past experiences will NOT be the same tomorrow. As no one has experienced logic suddenly altering, or the past suddenly shifting reality, it is a speculation that this may change, while it is a possibility that it remains stable. Therefore it is more cogent to act as if the known certainties of today, such as logic and needing to breathe and eat to survive, will be the known certainties of tomorrow. My inductive hierarchy can justify itself. Can any other rationalization of inductions do so? I leave that to you.

Fantastic post Bob, and I hoped I adequately addressed your thoughtful points!
Bob Ross May 18, 2022 at 16:43 #697099
Hello @Philosophim,

You have brought up some very thought-provoking points and, thusly, it has taken me some time to really give them their due. I realized, with the aid of your contentions, that the synthetic/analytical distinction is also not actually directly exposing what I want (just as, I would argue, the applicable/distinctive distinction isn't) and, therefore, I can no longer invoke it legitimately to convey my position. Consequently, I was forced to really dive into what I am actually trying to convey and, therein, really clearly define each fundamental building block. So, I am now going to share with you what I believe to be a much clearer, more distinct representation of what I am trying to convey (but of course it could not be as well (: ).

As a general overview, I still do not think (as I alluded to above) either a/s or a/d properly conveys the distinction I am addressing and, quite frankly, I don't think it quite explicates properly what you are trying to convey either. I think both distinctions are missing the mark: in hindsight, the a/s more than the a/d. It is like, prima facie, a/d makes sense, but on a deeper evaluation it diverges from the rightful distinction. Let's dive in.

First I need to start my derivation not at the distinction I want to convey but at the groundings, fundamentals, of everything. That is, a deeper analysis of reason to determine, recursively, what is occurring across all instantiations (because reason is the focal point of all derivation, I think we would agree on that at least generically). If this endeavor is accomplished, then I submit to you that it will be relevant, at the very least, to your epistemology as it would be the protocol by which all else conforms.

I think that, although I am open to suggestions, there are two groups of fundamentals worth mentioning right now: the most fundamental and some sub-distinctions therein. It is important to note, before I begin deriving and defining them, that I am only giving an ordering in terms of those groups and not in terms of the items therein: in the case of the most fundamental I am not particularly convinced one can make a meaningful order, and in the case of the sub-distinctions therein I don't find it relevant at this point to parse it.

Most Fundamental:
In the case of the most fundamental, they are as follows:

- The principle of non-contradiction (PoN): a subject concept cannot be contradicted by its predicate.
- Negatability: the ability to conceive of the direct opposite (contradiction) of a given concept.
- Will: a motive.
- Connectivity: the ability to construct connections via connectives.
- Connective: a concept which relates two other concepts in some manner (relations).
- Spatiotemporality: the inevitable spatiotemporal references of concepts.

These are the fundamentals which are such because they are the utmost (or undermost) conceptions that one can derive. Any other concept is thereafter.

It is important to note that by "spatiotemporal" I am not referring to "space and time" (as in two separate distinctions) but more as "space and time juxtaposed as one". Time and space cannot be separated in a literal sense.

Sub-distinctions Therein
There are two sub-groups worth mentioning at this time. First is the sub-group of connectivity:

- Possibility: a predicate which does not contradict its subject concept.
- Necessity: a predicate which is true of all possibilities of its subject concept.
- Impossibility: a predicate which contradicts its subject concept.
- Conditional (Contingent): a connective which relates two concepts in some sort of dependency. This includes, but is not limited to, biconditionals (IFF) and uniconditionals (IF).
- Unconditional (Not Contingent): a connective which relates two concepts in a manner that has no dependency (e.g. the connection that A and B are not related is a relation determined by a connective which dictates their unconditioned nature).
- Communal: two concepts share a concept.
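The possibility/necessity/impossibility definitions above can be sketched mechanically. This is an illustration of my own (not part of the derivation), under the assumption that a subject concept can be treated as the set of its possible instantiations and a predicate as a test over them:

```python
# Illustrative reading: a subject concept as its possible instantiations,
# a predicate as a function over an instantiation.

circle_instances = [
    {"shape": "circle", "color": "red"},
    {"shape": "circle", "color": "pink"},
]

def is_possible(instances, predicate):
    # a predicate which does not contradict its subject concept:
    # it holds in at least one possible instantiation
    return any(predicate(i) for i in instances)

def is_necessary(instances, predicate):
    # a predicate which is true of all possibilities of its subject concept
    return all(predicate(i) for i in instances)

def is_impossible(instances, predicate):
    # a predicate which contradicts its subject concept: holds in none
    return not is_possible(instances, predicate)

assert is_necessary(circle_instances, lambda i: i["shape"] == "circle")
assert is_possible(circle_instances, lambda i: i["color"] == "pink")
assert is_impossible(circle_instances, lambda i: i["shape"] == "square")
```

So "square circle" comes out impossible, "pink circle" possible, and "circle-shaped circle" necessary, matching the intended use of the three connectives.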

The second relevant sub-group is of spatiotemporality:

- Quantity: A concept which is numerable. Such as "particular", "singular", "three", etc.
- Quality: A concept which is innumerable. Such as degrees on a spectrum from 0 to 1.

Immediate Productions of The Fundamentals and Sub-distinctions
Now, from those fundamentals, along with the understanding of the relevant sub-distinctions therein, arise immediate processes of reason which are identifiable, which are:

- Concepts
- Properties
- References
- Contexts
- Conflations
- Conceptual Conflations
- Contextual Conflations
- NOTE: probably many more, but the aforementioned are the relevant ones.

These immediate processes, derived ultimately from the fundamentals, are, in fact, arranged in order (unlike the two groups I mentioned previously) as their definitions rely on the previous to understand each other. They are what I would consider the "fundamentals" which can be constructed given the actual fundamentals (previously explicated).

Concepts:
A "concept" is spatiotemporal connection(s) composed of spatiotemporal connection(s).

E.g. Concept A is comprised of other concepts:

NOTE: apparently the philosophy forum strips whitespace characters and won't let me upload any images, so I am going to have to represent the diagrams a bit more oddly.

'=' will be assigning operator
'[ ]' will be a set
'&' will be a reference operator
'<=>' biconditional operator
'( )' order of operations

A = [P1, P2]

Properties:
A "property" is a concept, P, which is connected (related) to another concept, C, in a manner of necessity as one of C's comprised parts. In the above example, P1 and P2 are properties of A.

References:
A "reference" is a connective, R, which connects its concept to another separate concept, wherein "separate concept" entails that the given concept is not a property of the other concept.

Concept A, which has two properties, is referencing concept B, which has a property that is not equal to either of A's:

B = [P3]
A = [P1, P2, &B]

Contexts:
A reference which dictates its concept as conditional on another concept in the manner of IFF (biconditional).

There are two concepts defined as A, but each is biconditionally referenced to concept B and C respectively (B and C would thereby be considered contexts):

B <=> (A = [P1, P2])
C <=> (A = [P3, P4])

It is important to note that the properties of both A's must be different, otherwise it is not a biconditional and, therefore, not a context.

Conflations:
The use of two or more concepts as synonymous when they are differentiable in terms of their properties or/and references (see subsequent examples).

Conceptual Conflations:
The use of two or more concepts as synonymous when they are differentiable in terms of their properties.

A = [P1, P2]
B = [P3, P4]

Conflation: B has property P1 because A has property P1.

Contextual Conflation:
The use of two or more concepts as synonymous when they are differentiable in terms of their references.

B <=> (A = [P1, P2])
C <=> (A = [P3, P4])

Contextual Conflation: A from C has property P1 because A from B has property P1.
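The two conflation patterns can be sketched in the same spirit as the notation above (an illustrative translation of my own, with P1–P4 as placeholder properties):

```python
# Concepts as sets of properties; a conflation is the illegitimate transfer
# of a property between concepts, or between same-named concepts bound in
# two different contexts.

def conceptual_conflation(a_props, b_props, prop):
    # "B has prop because A has prop" conflates when A and B
    # are differentiable in terms of their properties.
    return prop in a_props and a_props != b_props

# A = [P1, P2], B = [P3, P4]
assert conceptual_conflation({"P1", "P2"}, {"P3", "P4"}, "P1")

# Contexts as biconditional bindings: B <=> (A = [P1, P2]), C <=> (A = [P3, P4])
contexts = {
    "B": {"A": {"P1", "P2"}},
    "C": {"A": {"P3", "P4"}},
}

def contextual_conflation(contexts, name, ctx_from, ctx_to, prop):
    # "A from C has prop because A from B has prop" conflates when the two
    # bindings of the same name differ across contexts.
    return (prop in contexts[ctx_from][name]
            and contexts[ctx_from][name] != contexts[ctx_to][name])

assert contextual_conflation(contexts, "A", "B", "C", "P1")
```

Note that when the two concepts (or the two bindings) happen to be identical, neither check fires, which matches the definition: conflation requires that the concepts be differentiable in properties or references.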

Brief Explanation:
The entire point of the previous derivation is so that I can more accurately and precisely convey my point of view and is not in any way meant to derail the conversation into a discussion about a different epistemology (although it inevitably sort of requires such insofar as it is my position). To keep this brief, let me elaborate on my previous definitions in contrast to your epistemology:

Advantages Over Your Epistemology

Free will is irrelevant. The determination of "knowledge" is not related directly to control, which dissolves any issues or paradoxes related thereto.

Creation & Application are irrelevant. The distinction being made has no direct relevancy to whether a given concept was "created" or "applied", just that the conceptions appropriately align with the fundamentals. In relation to concepts, dissolving the distinction of "distinctive" vs "applicable" resolves a lot of issues, such as the fact that contextual conflations can occur in distinctive knowledge, which seems, in your epistemology, to be an exemption wherein no conflations can occur. Take the elephant example; here's your response:

Distinctively, there is nothing strange about taking the terms pink and applying it to an elephant. We create whatever definitions we wish. The part that doesn't make sense is stating there is some unknown distinctive identity apart from our imagination or fiction that matches to the identity of a pink elephant. The creation of distinctive knowledge does not necessitate such knowledge can be applicably known. The a/s distinction is what causes the confusion, not the d/a epistemology.


The problem is that I can conflate concepts distinctively. If I, in isolation, imagine the color pink and, in isolation, imagine an elephant, it would be a conflation to claim the concatenation of the two produced a literal "pink elephant". Given the nature of imagination, it isn't so obvious that there's a conflation occurring, but a more radical example explicates it more clearly: I imagine a circle and then imagine a square, and then declare that I distinctively know of "a circle that is a square". What I really distinctively know is a square, a circle, and a contradiction (impossibility in this case).

The concept of "square", and its properties (essential properties in your terms), as a predicate (such as "this circle is square") contradicts the subject concept "circle" and is therefore "impossible". It contradicts it because the properties are related to the concept as necessitous by nature and therefore a contradiction in the predicate to the properties of "circle" (the subject concept) results in rejection (due to PoN): this is what it means to be "impossible".

Potential vs Possibility is now resolved. There's no more confusion about possibility because what you are defining as "possibility" is not fundamentally what it should be; however, the distinction you made is still relevant. "Possibility" is truly when a predicate does not contradict its subject concept. Thereafter, we can easily explain and justify the validity of what you are meaning to distinguish with "possibility". We simply need to provide the concepts of "reality" and "self" (for example) and demonstrate that the two concepts have at least one differing property and, therefore, they are two different subject concepts. Therefore, it would be a conceptual conflation to relate a predicate to both by mere virtue of them being considered synonymous (because they aren't). It is important to note here, as I have defined it, that this would not be a contextual conflation but a conceptual conflation. This is because the approach previously mentioned is differentiating the two concepts by means of their properties and not their references to other concepts. If it were the case that "reality" referenced a context and "reality" referenced a different context, then the use of a predicate for both in virtue of being synonymous would be a contextual conflation. But in the case of comparing properties, the conflation is not occurring contextually. To be clear, a "conceptual conflation" occurs by means of properties and "contextual conflations" by means of references.

Further, notice that properties, as I defined them, are only essential (because they are utilizing a connection of the nature of necessity) and never accidental (unessential). I think this nicely portrays what the mind really does: if something is an accidental property, what is actually happening is the mind is determining the accidental property to be "possible" (as I defined it) and therefore noting that the given concept could reference another concept but it is not necessitous. For example, if concept A has one property of "being circular" (to keep it simple) and concept B has one property of "being green", then it is "possible" for A "to be green" (reference concept B: A = [..., &B]) because "being green" does not contradict A. Now, what you are noting, and rightfully so, is that A referenced in the concept of "reality", so to speak, cannot be conflated with a reference to "imagination", which really looks like:

Reality <=> (A = [Circular])
Imagination <=> (A = [Circular])

A contextual conflation arises if one were to claim X of Imagination's A in virtue of Reality's A (and vice-versa) because of the referential difference (even though they are the same conceptually in this case, so there's no conceptual conflation). Likewise:

Reality <=> (A = [Green, Circular])
Imagination <=> (A = [Circular])

This would be a referential and conceptual conflation if one were to claim X of one in virtue of the other. In this case the conceptual conflation would determine that the concepts of A are not synonymous when compared with each other (in their contexts). Which I think is important as well.
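The two cases just described can be sketched in the same toy encoding as before (labels and function names are my own illustration, not from the papers): in the first case Reality's A and Imagination's A are conceptually identical, so only the referential difference can be conflated; in the second case the property sets also differ, so both kinds of conflation are in play.

```python
# Sketch of the two cases above (all labels are illustrative).

# Case 1: conceptually identical, referentially distinct.
reality_1     = {"A": frozenset({"Circular"})}
imagination_1 = {"A": frozenset({"Circular"})}

# Case 2: both referentially and conceptually distinct.
reality_2     = {"A": frozenset({"Green", "Circular"})}
imagination_2 = {"A": frozenset({"Circular"})}

def kinds_of_conflation(ctx_x, ctx_y, name):
    """Returns which conflations would occur if a claim about
    name-in-ctx_x were made in virtue of name-in-ctx_y."""
    contextual = ctx_x is not ctx_y            # references differ
    conceptual = ctx_x[name] != ctx_y[name]    # property sets differ
    return {"contextual": contextual, "conceptual": conceptual}

print(kinds_of_conflation(reality_1, imagination_1, "A"))
# {'contextual': True, 'conceptual': False}
print(kinds_of_conflation(reality_2, imagination_2, "A"))
# {'contextual': True, 'conceptual': True}
```

The point the sketch makes explicit is that the contextual check never looks at properties at all, which is why it fires even when the two A's are conceptually the same.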

I think, overall, this really gets at the fundamental situation of reason and how it operates, which is the pinnacle in relation to a given subject.

As you probably noticed, there is a recursive nature to my definitions: they are all concepts. This is purposely so because, quite frankly, it is an inescapable potential infinite regress of reason. Which I think is important to note that the epistemology is never complete, only consistent. The most fundamental is that which is apodictic.

The last thing I will say is that I can see how this all, prima facie, seems like I really used what your epistemology states to even derive these terms (e.g. I "created" definitions and applied them without contradiction). However, I actually think that the previously mentioned process is what occurs as the fundamental building block of reason (at least human reason) and your epistemology happens to align with it pretty nicely, but the subtle but vital differences required me to really derive and explicate my position to figure out what wasn't quite adding up for me: I think mine explicates the situation more clearly and precisely. Hopefully that makes sense.

In terms of your post, I am now going to try to respond to what I think is still relevant to our conversation, but feel free to prompt me to respond to anything you think I left out.

I define a synonym as "Two identities which have the same essential and non-essential properties."


I would define synonyms as two concepts which have the same properties, where property is connected as necessary. Apart from the obvious difference in semantics, the important part is that non-essential properties no longer exist: they are references to other concepts determined by "possibility".

But there is no uncertainty involved. How I define A, B, and synonyms are all in my solo context.


There's a difference between saying A and B are synonyms, and trying to discover if they currently are synonymous. Maybe the latter is applicable knowledge? However, that would be solely abstract consideration, which I think you were stating was only possibly distinctive.

applicable knowledge always involves the resolution of a distinctive uncertainty


Would you agree with me then that there is such a thing as uncertainty distinctively? Because prior it felt like you were stating there's never uncertainty because I am "creating" the definitions:

Distinctive knowledge has no uncertainty.


I see this as a direct contradiction. Which I think is resolved in my position because we no longer need a/d.


No, taken alone, the process of distinctive and applicable knowledge do not explicitly involve context.


I think that I was wrong to think the distinction needed to be contextual conflations, it is actually simply conflations in general (both).


No, X alone is not an induction. "IF X" is an induction.


In the way you have defined it from the dictionary, I am no longer certain "hypothetical" is the correct term. There's a difference between stating "I believe it will rain" and "I don't know if it will rain". The former is an induction; the latter could be either: both are expressing uncertainty. The latter is not a hypothesis, it is a certainty of uncertainty (assuming it was deduced). If I state "IF it rains, THEN ...", I may not be claiming that I "believe" it will rain, I could be claiming "I do not know either way", which is not an induction. That's my only point.

Therefore it is more cogent to act as if the known certainties of today, such as logic and needing to breathe and eat to survive, will be the known certainties of tomorrow. My inductive hierarchy can justify itself. Can any other rationalization of inductions do so? I leave that to you.


I still think Hume's problem of induction isn't really answered here. But I completely understand and agree that the most rational thing to do is the hierarchy of inductions. But more on that later, as this is very long.

Bob
Philosophim May 18, 2022 at 23:05 #697266
Well done Bob, a great analysis! No need to apologize for long pauses between replies, I believe we are both out of our comfort level of easy response at this point in time. I find it exciting and refreshing, but it takes time to think.

The problem I have with your fundamental concepts, is I do not consider them the most fundamental concepts, nor do I think you have shown them to be. The most fundamental concept I introduced was discrete experience. Prior to discretely experiencing, one cannot comprehend even the PoN. Arguably, the PoN works because we cannot discretely experience a real contradiction ourselves. I have never experienced a situation in which I have existed in two different spots at the same time for example.

That being said, I don't necessarily disagree with your fundamentals as a system that can be derived from the fundamental that you discretely experience. But I don't think you've shown that it isn't derived from the more fundamental a/d distinction.

Having discussed this with you for some time now, I believe this has been a recurring difference between us. You've typically been thinking at a step one higher, or one beyond, what I've been pointing out. Your ideas are not bad or necessarily wrong. I am talking about a system from which all systems are made, while you're talking about a system that can be made from this prime system.

The d/a distinction applies as a fundamental formation of knowledge from discrete experience. As you've noted, you had to use the d/a distinction to use the concepts that you created. I'm noting how knowledge is formed to create systems, while you are creating a system. Your creation of a system does not negate the d/a distinction, but only confirms it can be used to create a system.

For myself, you have to demonstrate that you can form a system without using the d/a distinction, and that system must invalidate or demonstrate why the d/a distinction is invalid. To do so, I believe you have to show there is something more fundamental than the ability to discretely experience. Or if not more fundamental, something along the lines of that fundamental ability that can lead to knowledge without needing discrete experience.

But, let me address of a few of your derived concepts that cross into my derived concepts so I can clarify this position.

Quoting Bob Ross
Free will is irrelevant. The determination of "knowledge" is not related directly to control, which dissolves any issues or paradoxes related thereto.


Free will is not necessary to my epistemology. Free will is a distinctive and applicable concept that is contextually formed. Whether a person defines free will, or does not, is irrelevant. What I have attempted to note are situations that separate distinctive knowledge from applicable knowledge. One could use a concept of free will to describe a difference, but it's not necessary.

What is necessary is the concept of a will. A will is an intention of the self, and an outcome is the result of that will. At its most basic, a will is the intention to eat to live. I believe this is very similar, if not identical, to our previously agreed upon definition of "reason". It is very clear to any willing/reasoning being that one's intention does not always result in the outcome they wished. Situations in which one's will is provably certain are essentially distinctive knowledge. This is the act of discretely experiencing expressed as memory, identity, and sensations. Some in philosophy might call this "being".

But, when your reason is placed in a situation in which it is provably uncertain, the deduced results of the experience are applicable knowledge.

Quoting Bob Ross
Creation & Application are irrelevant. The distinction being made has no direct relevancy to whether a given concept was "created" or "applied", just that the conceptions appropriately align with the fundamentals.


As I mentioned earlier, your fundamentals are not fundamentals. I can both distinctively and applicably know what you claim to be fundamentals. I distinctively know the PoN, and I applicably know the PoN. If I did not applicably know the PoN, you would have to prove it existed, correct? Which means you would have to show some application of it that would demonstrate to me it wasn't something you just distinctively identified, but something that can also be utilized apart from our direct distinctions.

Quoting Bob Ross
The problem is that I can conflate distinctively concepts. If I, in isolation, imagine the color pink and, in isolation, imagine an elephant, it would be a conflation to claim the concatenation of the two produced a literal "pink elephant". Given the nature of imagination, it isn't so obvious that there's a conflation occurring, but a more radical example explicates it more clearly: I imagine a circle and then imagine a square, I then declare that I distinctively know of a "a circle that is a square". What I really distinctively know is a square, a circle, and a contradiction (impossibility in this case).


Conflation is not a function of my epistemology, but a way to demonstrate separations of knowledge and context. If you imagine a pink elephant combining your memory of pink and elephant, that is distinctive knowledge. There is nothing wrong with that. The conflation occurs if you think that you have applicable knowledge that a pink elephant exists apart from your imagination. If conflation is allowed to occur in this epistemology without explanation, I would consider that a contradiction and flaw that should be pointed out. I just don't see where this is happening at this time.

Quoting Bob Ross
The concept of "square", and its properties (essential properties in your terms), as a predicate (such as "this circle is square") contradicts the subject concept "circle" and is therefore "impossible". It contradicts it because the properties are related to the concept as necessitous by nature and therefore a contradiction in the predicate to the properties of "circle" (the subject concept) results in rejection (due to PoN): this is what it means to be "impossible".


If we distinctively identify a square and a circle to have different essential properties, then they cannot be the same thing distinctively. But our definitions of square and circle are not applicably necessitous by nature. I may try to apply whatever my contextual use of square is, and find that I run into a contradiction. In your case, you are using a societally agreed upon contextual definition of square and circle that is both distinctively and applicably known and proven. Using those current societal definitions and applicable knowledge of square and circle, there are certain things you cannot distinctively conclude. That is a distinctive impossibility. But will the rules and applicable knowledge of a square and circle remain the same tomorrow? That is an applicable unknown. That is where induction comes in.

Quoting Bob Ross
Potential vs Possibility is now resolved. There's no more confusion about possibility because what you are defining as "possibility" is not fundamentally what it should be, however the distinction you made is still relevant. "Possibility" is truly when a predicate does not contradict its subject concept.


If you want to create a system in which you define possibility as when a predicate does not contradict its subject concept, that's fine. I've noted you can create whatever system you want distinctively. But, when you make the claim that your derived system invalidates the underlying system, you are applicably wrong. The fact that I use the word possibility to describe the concept of making a belief that because X is applicably known 1 time, it could be applicably known again, is irrelevant. You and I may be using the same sign/word, but the essential properties are widely different. We can discuss why you may be more interested in a different word than possibility to describe the essential properties of this particular kind of induction, but you have not shown that these particular properties of the induction are flawed in and of themselves.

Quoting Bob Ross
As you probably noticed, there is a recursive nature to my definitions: they are all concepts. This is purposely so because, quite frankly, it is an inescapable potential infinite regress of reason.


This would be a flaw in your proposal then. The d/a distinction has a finite regress of reason: it regresses to what is discretely experienced. An infinite regress cannot prove itself, because it rests on the belief in its own assumptions. In other words, an infinite regress cannot be applicably known. You may have created a distinctive set of logic that fits in your mind, but it has no capability of application. The a/d distinction is complete. It starts with finite experiences, and ends with them. You can use the a/d distinction in the formulation of the a/d distinction itself. That is a major strength of the theory compared to all others I know of, which are not able to use the very theory they propose to prove the theory itself.

Quoting Bob Ross
But there is no uncertainty involved. How I define A, B, and synonyms are all in my solo context.

There's a difference between saying A and B are synonyms, and trying to discover if they currently are synonymous. Maybe the latter is applicable knowledge? However, that would be solely abstract consideration, which I think you were stating was only possibly distinctive.


If you are the creator of the definitions of A and B, then there is no uncertainty. You aren't trying to discover anything. Synonyms are identical distinctive knowledge. When we are trying to match an unknown identity with a distinctive identity, that deduced result is applicable knowledge.

Quoting Bob Ross
applicable knowledge always involves the resolution of a distinctive uncertainty

Would you agree with me then that there is such a thing as uncertainty distinctively? Because prior it felt like you were stating there's never uncertainty because I am "creating" the definitions:


Let me be clear about what I mean by distinctive. Distinctive is like binary: it's either on or off. Either you have defined A to have property X, or you have defined A to have property Y. You can define A as having property X for one second, then define A to have property Y the next second. You can even alternate every second for eternity. But there is no uncertainty that, at any point in time, what you have defined or not defined as an essential property of A is the distinctive knowledge of A then.

Quoting Bob Ross
In the way you have defined it from the dictionary, I am no longer certain "hypothetical" is the correct term.


That may be the case. I do agree there is a difference between "I believe" versus, "I don't know". But the IFF is an affirmative of a possible outcome, which is an assertion that there are other possible outcomes. But we may be splitting hairs at this point.

I really think going through the terms has helped me to see where you are coming from, and I hope I've demonstrated the consistency in my use and argumentation for the a/d system. Everything we've mentioned here so far, has been mentioned in prior topics, but here we have it summed up together nicely. I look forward to hearing from you again Bob.
Bob Ross May 19, 2022 at 18:11 #697796
Hello @Philosophim,

No need to apologize for long pauses between replies, I believe we are both out of our comfort level of easy response at this point in time. I find it exciting and refreshing, but it takes time to think.


I likewise find it exciting and intriguing. If one isn't out of their comfort zone, then they aren't learning.

The problem I have with your fundamental concepts, is I do not consider them the most fundamental concepts, nor do I think you have shown them to be.


I suspected this would be the case, and I agree to a certain level: in my previous post I purposely refrained from going into a meticulous derivation of the fundamentals so as to prevent derailing into my epistemology as opposed to yours. I can most certainly dive in deeper.

The most fundamental concept I introduced was discrete experience. Prior to discretely experiencing, one cannot comprehend even the PoN.


"discrete experience" and any argument you provide (regardless of how sound) is utilizing PoN at its focal point. Nothing is "beyond" PoN. Therefore, I view "discrete experience" as a more ambiguous clumping of my outlined fundamentals. There's nothing wrong, prima facie, with thinking of them in terms of one lumped "discrete experience", but this cannot be conflated with "differentiation" nor "spatiotemporality".

That being said, I don't necessarily disagree with your fundamentals as system that can be derived from the fundamental that you discretely experience.


You derived this via PoN. A common theme that I view as a misunderstanding is to think that the derivation of a "fundamental" should be what one can determine as what they are contingent upon: they were required in the first place. It is not what one can derive via PoN as the grounds which is the fundamental; it is what was used in the first place to derive it (e.g. PoN). A "fundamental" is that which is an inescapable potential infinite of the subject's manifestations ("thoughts", "reasoning" if you will). I claim PoN is false; it is thereby true. I claim X; it used PoN; I verified that because PoN is true. I verified "because PoN is true" via PoN: it is a recursive potential infinite. That is the nature of "reason": a succession of finite operations which are constrained to necessary principles.

But I don't think you've shown that it isn't derived from the more fundamental a/d distinction.


At this point, I still don't think the a/d distinction is very clear. Sometimes you seem to use it as if it is "abstract" vs "non-abstract", other times it is "creation" vs "matching": these are not synonymous distinctions. Sometimes it is:

I've noted you can create whatever system you want distinctively.


Other times it is:

Free will is not necessary to my epistemology. Free will is a distinctive and applicable concept that is contextually formed.


The former implies some form of "free will" regardless of whether the term is constructed or not. The latter denies any such implicit necessity.

The way I understand it is:

- If distinctive knowledge is "creation", then by virtue of the term it implies some form of "free will" to "create" whatever one wants. Unless you are positing a "creation" derived from an external entity or process that is not the subject.

- If distinctive knowledge is "abstract", then it renders "free will" irrelevant, but necessarily meshes "creation" and "matching" into valid processes within "distinctive knowledge" due to the fact that "abstraction" can have both.

Quite frankly, your descriptions are "free will" heavy (in terms of implications): I think you are frequently mapping "distinctive knowledge" to a distinction of free construction, whereas "applicable" is outside of that construction. I don't think you have offered an adequate reconciliation to this issue (but I could be simply misunderstanding).

Furthermore, being able to always classify something under one of two categories does not entail that those two categories are fundamentals. Your a/d distinction is like a line drawn in a potentially infinite beach of sand, whereas I am trying to examine it granule by granule. Sure, the granule is either on the left or the right of the line, but that doesn't have anything to do with fundamentals.

What is necessary is the concept of a will.


Is this will "creating" the distinctive knowledge? I get heavy vibes that that is not what you are saying, but I could be wrong. If not, then there's a heavy "free will" implication. Even in terms of this will, if it is directing the constructed "distinctive knowledge" and it isn't an act of free will of some sort, then it isn't the subject "creating" anything: therefore they cannot do whatever they want distinctively, but maybe the rudimentary will can?

But, when your reason is placed in a situation in which it is provably uncertain, the deduced results of the experience are applicable knowledge.


This leads me to believe, instead of "creation"/"abstract" vs "matched"/"non-abstract", you are really trying to convey "certainty" vs "uncertainty", which, again, is not the same thing.

Let me invoke your definitions from a while back:

Distinctive knowledge - A deduced concept which is the creation and memorization of essential and accidental properties of a discrete experience.


Applicable knowledge - A deduced concept which is not contained within its contextual distinctive knowledge set. This concept does not involve the creation of new distinctive knowledge, but a deduced match of a discrete experience to the contextual distinctive knowledge set


This is a "creation" vs "matching" distinction. "creation" does not equate to "abstract consideration". "matching" does not equate to "non-abstract consideration".

You've typically been thinking at a step one higher, or one beyond what I've been pointing out. Your ideas are not bad or necessarily wrong.


I think it is essentially the converse. However, what makes it tricky is that your definitions operate both higher than and equal to mine, which muddies the waters.

I am talking about a system from which all systems are made, while you're talking about a system that can be made from this prime system.


I am arguing the exact same thing conversely. I don't think your "discrete experience" is the fundamental: it is an ambiguous lumping of the fundamentals into one term. It works fine prima facie, but as I have been examining your epistemology it slowly breaks down when one gets to a/d. Neither of us can derive a/d, or any distinction, without first using PoN, connectivity, negations, equatability, spatiotemporality, and a will. These do not come after, nor do they arise out of, discrete experience. PoN is the focal point and thereafter the other fundamentals follow logically. "discrete experience" is an ambiguous sort of equivalent to the lumping of these concepts: it is the realization that one is experiencing differentiation via the PoN, connections, negatiability, equatability, and spatiotemporal references: we cannot go beyond those, they are apodictic.

As you've noted, you had to use the d/a distinction to use the concepts that you created. I'm noting how knowledge is formed to create systems, while you are creating a system.


I wasn't trying to note that I used a/d: I was anticipating that it would seem as though I did, given the murky waters in the definitions of a/d. You are drawing a line in the sand; I am noting the granules, and the granules that make up those, etc., to derive what is necessarily always occurring in the finite procession of the manifestations of reason. I am not convinced that a/d is somehow being used to derive PoN, when PoN was required to derive a/d.

As I mentioned earlier, your fundamentals are not fundamentals. I can both distinctively and applicably know what you claim to be fundamentals. I distinctively know the PoN, and I applicably know the PoN.


Being able to categorize one granule of sand either as on the left or the right does not have any bearing on what is fundamental. Even if the a/d distinction works for all granules, it wouldn't thereby be a fundamental. The derivation of a/d, I would argue, utilizes my fundamentals to get there. Try to derive a/d without using PoN. Try to derive anything without it.

Likewise, depending on what distinction you mean by "distinctive" and "applicable" it may or may not be the case that one can derive PoN in those two contexts separately. There's a definition of "PoN" in my head, which I abstractly had to perform application to know that, and I abstractly apply it to my previous abstract thoughts to determine whether it holds as apodictic: and it does. I would suppose I had to "applicably" know that I "distinctively" knew, not the other way around, because I don't know I had a definition of "PoN" until after I perform the necessary abstract applications to determine I do. "Application" and "definitions" is a murky distinction (just like creation and matching), no different than a/s.

One cannot know of their own definition before they perform application to obtain that. Once they know, then they can distinguish that from whether the definition's contents hold. It would be a conflation to claim that the definition proves its own validity beyond it: which doesn't have any bearing on a/d. I claim "I cannot hold A and not A". I didn't know I made that claim until I applicably determined via PoN that I did claim it. Thereafter, it is a conceptual conflation to claim that in virtue of the claim it is true: this is the distinction I think should be made.

Conflation is not a function of my epistemology, but a way to demonstrate separations of knowledge and context


That is my point: there is only one form of knowledge. No matter what distinction is made, the subject is necessarily following the same underlying process. All the issues your distinction is supposed to demonstrate can be resolved simply by noting conflations.

If you imagine a pink elephant combining your memory of pink and elephant, that is distinctive knowledge. There is nothing wrong with that.


Depends on what you mean. If you are conflating concepts, then there is something wrong. A "pink elephant" in combination is not the same as "pink" + "elephant" in isolation, it would be wrong to abstractly conflate the two.

If we distinctively identify a square and a circle to have different essential properties, then they cannot be the same thing distinctively.


This is necessarily the case because we fundamentally utilize PoN as the focal point. This is not a choice; it is always abided by.

But my point was that concepts can be conflated abstractly and, potentially depending on how you are defining "distinctive", distinctively.

I may try to apply whatever my contextual use of square is, and find that I run into a contradiction


The real underlying process here I think is trying to relate, whether abstractly or non-abstractly, concepts to one another and whether it results in an invalid conflation. You tend to be using "applicable" as if it is "non-abstract".

But, when you make the claim that your derived system invalidates the underlying system, you are applicably wrong.


There is no underlying system. My proposed system is meant as the underlying system. Your definition of "possibility" implicitly uses mine. The mind necessarily considers in terms of how I defined it. Now, semantically, that is a whole different question. Your possibility's function was to note a contextual conflation, which is accounted for in my system without redefining possibility in a way that creates confusing different "could" terminology (i.e. "I speculate I could" vs "I possibly could").

This would be a flaw in your proposal then...An infinite regress cannot prove itself, because it rests on the belief in its own assumptions.


Firstly, a finite regress of reason should never prove itself: that is circular logic. Secondly, a system cannot prove all of its true formulas. Gödel's incompleteness theorems thoroughly proved that truth outruns proof: it is an infinite regress wherein a system has at least one unprovable yet true formula which is only proven by using another system (aka it is non-computational).

Although I am interested to hear your reasoning, I didn't get the impression that your epistemology proves itself in that sense: it is consistent, but not complete. There's nothing wrong with that.
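For reference, the first incompleteness theorem being invoked above can be stated roughly as follows (a standard textbook paraphrase in my own notation, not wording from either poster):

```latex
\text{If } T \text{ is a consistent, effectively axiomatized theory interpreting arithmetic,} \\
\text{then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \lnot G_T, \\
\text{yet } G_T \text{ is true in the standard model, and provable in a stronger system,} \\
\text{e.g. } T + \mathrm{Con}(T).
```

This is the precise sense in which "truth outruns proof": the unprovable sentence is settled only by stepping outside the system.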

Thirdly, I think a strength of my system is that it explicates the true nature of reason: potential infinite regressions and one circular reference. This is why PoN is the focal point, as it is the one valid circular reference:

It is a potential infinite circular cycle of "X is true because of PoN", where X can also be PoN. There's nothing wrong with that: that is why it is an axiom. The reason that isn't special pleading is because all other circular logic depends on PoN and we can demonstrate therefrom their invalidity. Apodictic doesn't mean complete, it means demonstrably true (not to be confused with absolutely true). When a subject tries to prove PoN, they have to eventually give up under the conclusion that it is true as they follow the potential infinite path of derivation, which is cyclical. I don't think, in action, you can demonstrate that to be false (as that very proposition is presupposing PoN). That's why it is an axiom.

The potential infinite regressions (recursions, to be specific) simply note what concepts are and how they exist in an infinite recursive pattern. Similar to how PoN is cyclical yet valid, when one derives any concept they can perform the finite operation on all of its properties, sub-properties, sub-sub-properties, etc., for a potential infinite. All concepts, even in your derivation, reference other concepts in a potential infinite fashion. This is provable by simply trying to invalidate it: try to come up with a concept that isn't derived from other concepts. The nature of reason is a continuity: there's no stopping point. This does not rest on its own assumptions.

If you are the creator of the definitions of A and B, then there is no uncertainty.


There's always uncertainty. When someone claims they are certain of what they defined as A, they really mean that they very quickly ascertained what they defined, but necessarily had to perform application to discover what it was. They had to dissect the concept of A, and the act of dissecting implies uncertainty. This is not the same as claiming they are formulating inductions.

Let me be clear by what I mean by distinctive. Distinctive is like binary: it's either on or off. Either you have defined A to have x property, or you have defined A to have y property.


This is not " A deduced concept which is the creation and memorization of essential and accidental properties of a discrete experience", you have defined PoN here, which is true of both of your distinctions.


I really think going through the terms has helped me to see where you are coming from, and I hope I've demonstrated the consistency in my use and argumentation for the a/d system. Everything we've mentioned here so far has been mentioned in prior topics, but here we have it summed up together nicely.


I appreciate your response, I hope I wasn't too reiterative from previous posts here.

Bob
Philosophim May 22, 2022 at 14:47 #699093
Quoting Bob Ross
I suspected this would be the case, and I agree to a certain level: in my previous post I purposely refrained from going into a meticulous derivation of the fundamentals so as to prevent derailing into my epistemology as opposed to yours. I can most certainly dive in deeper.


Please do Bob! You have been more than polite and considerate enough to listen to and critique my epistemology. At this point, your system is running up against mine, and I feel the only real issue is that it isn't at the lower level that I'm trying to address. Perhaps it will show a fundamental that challenges, or even adds to the initial fundamentals I've proposed here. You are a thoughtful and insightful person, I am more than happy to listen to and evaluate what you have to say.

Quoting Bob Ross
"discrete experience" and any argument you provide (regardless of how sound) is utilizing PoN at its focal point. Nothing is "beyond" PoN. Therefore, I view "discrete experience" as a more ambiguous clumping of my outlined fundamentals. There's nothing wrong, prima facie, with thinking of them in terms of one lumped "discrete experience", but this cannot be conflated with "differentiation" nor "spatiotemporality".


As a reminder, one cannot think about the PoN without first being able to discretely experience. It's been a while since we last discussed this, but if you recall, the same goes for differentiation and spatiotemporality. Discrete experience is the fundamental simplicity of being able to notice X as different from Y. Non-discrete experience is taking all of your experience at once as something indecipherable.

As a reminder of discrete experience, a camera lens that takes a picture is a non-discrete experience. Everything that comes into the camera lens is spit out on the picture without the lens being able to differentiate anything within the light it receives. All it does is receive light. A being that can discretely experience can parcel that experience into things that it might later identify and differentiate into colors, shapes, etc.
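The lens analogy can be sketched as a toy program (my illustration only, not from the original posts): the "non-discrete" receiver passes its input through unchanged, while the "discrete experiencer" parcels the same input into parts it identifies and differentiates.

```python
# Toy sketch of the camera-lens analogy. The function names and the "light"
# representation are my own illustrative assumptions.

def camera_lens(light):
    """Non-discrete: receives and re-emits the input as-is, drawing no distinctions."""
    return light

def discrete_experiencer(light):
    """Discrete: partitions the input into identified, differentiated parts."""
    parts = {}
    for sample in light:
        parts[sample] = parts.get(sample, 0) + 1
    return parts

incoming = ["green", "green", "brown", "red"]
print(camera_lens(incoming))           # the undifferentiated whole, unchanged
print(discrete_experiencer(incoming))  # parceled into identified colors
```

The point of the sketch is only the contrast: the lens has no "I" that draws boundaries within the light, whereas the discrete experiencer imposes distinctions on it.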

Therefore a fundamental which a being must have before it can identify, is it must be able to discretely experience.

Quoting Bob Ross
It is not what one can derive via PoN as the grounds which is the fundamental, it is what was used in the first place to derive it (e.g. PoN).


We used the PoN to deductively assert that we discretely experience. But we could not begin to use deduction about discrete experience, without first being able to discretely experience. We cannot prove or even discuss the PoN without being able to understand the terms, principle, negation, etc.

Quoting Bob Ross
I claim PoN is false, it is thereby true. I claim X, it used PoN, I verified that because PoN is true.


Yes, but you must first understand what the terms "true" and "false" are. I believe truth and falsity are more fundamental than the PoN. While I do believe that fundamentals can be applied to themselves, an argument's ability to apply to itself does not necessitate that it is a fundamental.

I will create the PoN using the a/d distinction now. Instead of truth, it's "What can be discretely experienced", and instead of false it's "What cannot be discretely experienced". What is impossible is to discretely experience a thing, and not the very thing we are discretely experiencing, at the same time. Such a claim would be "false", or what cannot be discretely experienced. As you see, I've built the PoN up from other fundamentals, demonstrating it is not a fundamental itself.
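One possible formalization of the construction above (my notation, offered only as a sketch; write $D(x)$ for "$x$ can be discretely experienced"):

```latex
\begin{align*}
\mathrm{true}(x)  &:= D(x) \\
\mathrm{false}(x) &:= \lnot D(x) \\
\text{the impossible claim} &:\ D(x) \land \lnot D(x) \quad \text{(what cannot be discretely experienced)} \\
\text{hence the PoN as derived} &:\ \lnot\big(D(x) \land \lnot D(x)\big)
\end{align*}
```

Whether this counts as deriving PoN or as already presupposing it is, of course, exactly the point Bob Ross contests below.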

I believe you have mentioned prior the idea of temporal fundamentalism. In other words, the order of discovery determines what is "fundamental". If for example, molecular theory was used to discover atomic theory, you would say molecular theory was more temporally fundamental.

Fundamental to me means the parts that make up the whole. While we may have discovered molecular theory first (hypothetically) molecules are fundamentally made up of atoms and rules that we might not have been aware of. But the use of a tool which discovers another fundamental does not mean that the underlying make up of the tool is not fundamental, nor that we necessarily needed that particular tool to discover the underlying fundamental. As we could use molecular theory as a starting point to discover atomic theory, we can also use atomic theory to discover molecular theory once we discover atomic theory. A fundamental when discovered, either confirms the higher order we used to discover the lower order, or adds clarity to that higher order concept.

I've used the a/d distinction to demonstrate an explanation for why the PoN is not a fundamental, as it is made out of component parts. Barring your agreement with my proposal, you would need to identify what "true" and "false" are. As such, I think it's been clearly shown that the PoN has parts and logic prior to its logical construction, and is not a fundamental.

Quoting Bob Ross
At this point, I still don't think a/d distinction is very clear. Some times you seem to use it as if it is "abstract" vs "non-abstract", other times it is "creation" vs "matching": these are not synonymous distinctions. Sometimes it is:


I think the problem is you are trying to use terms for synonyms to the a/d distinction. It is not as simple as "abstraction vs non-abstraction" or "creation" vs "matching". I can use these terms to assist in understanding the concept, but there is no synonym, as it is a brand new concept. Imagine when the terms analytic and synthetic were introduced. There were no synonyms for that at the time, and people had to study it to understand it.

I think part of the problem is you may not have fully understood or embraced the idea of "discretely experiencing". If you don't understand or accept that fully, then the a/d distinction won't make sense. But you cannot use derived systems to explain the fundamental system that allows those derived systems to exist. I think this is ultimately the source of your misunderstanding and confusion. You are still at a higher level of system, and assume that higher level is fundamental. What I've tried to demonstrate is your system is derived, and rests on the assumptions you are trying to negate. Can you use your derived system without my system underlying it? No. Until that changes, it cannot be used as a negation of the very thing it uses to exist.

Quoting Bob Ross
I've noted you can create whatever system you want distinctively.

Other times it is:

Free will is not necessary to my epistemology. Free will is a distinctive and applicable concept that is contextually formed.


"I" is the discrete experiencer. You've been attributing the "I" as having free will. I have not meant to imply that or used those terms.

Quoting Bob Ross
Quite frankly, your descriptions are "free will" heavy (in terms of implications):


But I'm not implying free will. I think you're mapping your own outlook on this when it has never been my intention.

Quoting Bob Ross
The way I understand it is:

- If distinctive knowledge is "creation", then by virtue of the term it implies some form of "free will" to "create" whatever one wants. Unless you are positing a "creation" derived from an external entity or process that is not the subject.


No. Distinctive knowledge is the creation of the discrete experiencer. If I see the color red within the sea of existence, that is my creation. If I am color blind, then what I discretely experience might be different. A person might see a tree while another sees two plants, "green leaves" attached to "brown trunks". A camera lens cannot see the color red within the light that it absorbs. It is unaware of any difference. There is no "I" within the lens. There is no distinction.

Quoting Bob Ross
- If distinctive knowledge is "abstract", then it renders "free will" irrelevant, but necessarily meshes "creation" and "matching" into valid processes within "distinctive knowledge" due to the fact that "abstraction" can have both.


As I've noted, it's about deduction vs induction within your chain of reasoning. It depends on your context of what you mean by "abstraction". In one context, everything is abstraction. Our sensations are abstractions, as well as our thoughts. Arguably a person could state we never experience "the thing in itself".

Distinctive knowledge is the knowledge of the experience itself, knowledge of the abstraction one creates. The key is that there is no deduced uncertainty of one's will. If I see red while you see blue, we both distinctively know our own experiences. But the moment I introduce deduced uncertainty, "You see the color blue, while I see the color red," that is a belief that my will alone cannot assert. Have I experienced how you see the world? No. That is an application I must experience before I can determine if my belief is true. Do I have the distinctive knowledge of this belief? Yes. Is that belief applicable knowledge? No. At best, such a belief is an inapplicable speculation.

Quoting Bob Ross
I am arguing the exact same thing conversely. I don't think your "discrete experience" is the fundamental: it is an ambiguous lumping of the fundamentals into one term.


This is why it is a fundamental. A fundamental is part of everything that derives from it. Atoms are the fundamentals of molecules. We don't have to create the concept of molecules, and the fundamentals of atoms will still exist. Discrete experiences are the necessary atoms that make up your higher level concepts. That's not an ambiguous lumping. All I'm noting is your molecules are made up of atoms, and atoms can be used to make more than the molecules you are noting.

Quoting Bob Ross
Neither of us can derive a/d, or any distinction, without first using PoN, connectivity, negations, equatability, spatiotemporality, and a will. These are not after nor do they arise out of discrete experience.


I think I've shown the thinking that they do not arise out of discrete experience to be incorrect. Where does the idea of negation come from? True and false? Each of your terms rest on concepts that you have not proven yet, or shown where they come from. Mine does. Negation is the discrete experience of one thing, and then that thing not being experienced anymore. True is what is and can be discretely experienced, while false is what cannot. From this, I can derive the PoN. Can you derive the PoN differently, or demonstrate how my derivation is incorrect?

Quoting Bob Ross
Likewise, depending on what distinction you mean by "distinctive" and "applicable" it may or may not be the case that one can derive PoN in those two contexts separately.


As with everything, you must clarify whether your knowledge is distinctive or applicable. The problem with epistemology has been that it has lacked this distinction, and has conflated two very different identities. I can distinctively know of a pink elephant, and I can applicably know if I've encountered one. What one distinctively knows does not necessitate that it can be applied.

Quoting Bob Ross
One cannot know of their own definition before they perform application to obtain that. Once they know, then they can distinguish that from whether the definition's contents hold. It would be a conflation to claim that the definition proves it owns validity beyond it: which doesn't have any bearing on a/d. I claim "I cannot hold A and not A". I didn't know I made that claim until I applicably determine via PoN that I did claim it. Thereafter, it is a conceptual conflation to claim that in virtue of the claim it is true: this is the distinction I think should be made.


Notice how you used "know" without clarifying whether this knowledge was distinctive or applicable. If you don't clarify what type of knowledge, then you aren't using the epistemology. At that point you aren't disproving the epistemology through a contradiction of use, you are simply showing how not using the epistemology causes confusion.

Let me reconstruct your sentence. "One cannot applicably know their own definition before they perform application to obtain that." While that sentence still doesn't make much sense, it is not addressing distinctive knowledge. Did you mean to say, "One cannot distinctively know their own definition before they perform application to obtain that"? That doesn't work, because distinctive knowledge does not require applicable knowledge.

Perhaps what you meant was that you cannot distinctively know something prior to experiencing it. Which is true. But you also cannot applicably know something before you experience it. If the a/d distinction cannot be used to divide a generic use of knowledge or runs into a contradiction, then I think we can safely say there is a flaw. But using a generic definition of knowledge alone is a straw man.

Quoting Bob Ross
That is my point: there is only one form of knowledge.


Knowledge is a chain of deductions. The difference between distinctive and applicable knowledge has been clearly made. Do they both use deductions as an underlying fundamental? Yes. But it is clear that we run into situations in which we have beliefs that must be resolved, and cannot be resolved by our will alone. When a chain of inductions contains a resolved induction, it is an important enough difference to note a new identity. The separation of the knowledges notes this important event, and avoids the problems other epistemologies run into.

Quoting Bob Ross
If you imagine a pink elephant combining your memory of pink and elephant, that is distinctive knowledge. There is nothing wrong with that.

Depends on what you mean. If you are conflating concepts, then there is something wrong. A "pink elephant" in combination is not the same as "pink" + "elephant" in isolation; it would be wrong to abstractly conflate the two.


Please clarify what you mean by this in distinctive and applicable terms. I didn't understand that point.

Quoting Bob Ross
If we distinctively identify a square and a circle to have different essential properties, then they cannot be the same thing distinctively.

This is necessarily the case because we fundamentally utilize PoN as the focal point. This is not a choice; it is always abided by.


No disagreement, as this is a logical consequence of using a logic derived from the context of distinctive knowledge.

Quoting Bob Ross
I may try to apply whatever my contextual use of square is, and find that I run into a contradiction

The real underlying process here I think is trying to relate, whether abstractly or non-abstractly, concepts to one another and whether it results in an invalid conflation. You tend to be using "applicable" as if it is "non-abstract".


I will note, I did not introduce the term "abstract" into the conversation. It depends on your context of "abstract". Applicable knowledge comes about from the deduced realization of an uncertain belief. The "uncertainty" is a deduction that our will alone is not enough to ascertain it cannot be contradicted. I may believe this apple is healthy, but upon eating it I discover it was rotten on the inside. Can the terms "eating, rotten, apple, action, etc" be all termed as abstractions? Sure. Can everything in the mind be termed an abstraction? Yes. This is probably where the confusion comes from. You are using a general word that can have its essential properties switched with its accidental properties depending on the context you are using.

As such, if I have used the term "abstract" it has been to meet what I evaluated your context to be at the time. In the broadest sense of the word, discrete experience can be called an abstraction, and everything is made up of discrete experiences, including applicable knowledge. If we are to use the term abstraction going forward, could you define it clearly in your own terms so I can understand your meaning?

Quoting Bob Ross
This would be a flaw in your proposal then...An infinite regress cannot prove itself, because it rests on the belief in its own assumptions.

Firstly, a finite regress of reason should never prove itself: that is circular logic. Secondly, a system cannot prove all of its true formulas. Goedel's incompleteness theorems thoroughly proved that truth outruns proof: it is an infinite regress wherein a system has at least one unprovable, but yet true, formula which is only proven by using another system (aka it is non-computational).


What I meant by "proving itself" is it is consistent with its own rules, despite using some assumptions or higher level systems like the PoN. I assumed several higher order logics to be true, and I can use the epistemology to demonstrate why logic works. I may ask the question, "Why do I discretely experience?" but that answer is not necessary to know that I discretely experience, and can use it to form knowledge. Just like I don't need to know molecular theory to use a ruler for measurement. There is (to my mind) nothing underlying or apart from the theory itself that needs to be given to explain the theory itself.

Also, I am not using truth. If you wish to use Goedel's incompleteness theorem in relation to this theory, feel free. Goedel's is also not a free pass to set up an infinite regress. What I am noting is that an infinite regress is something that cannot be applied, and is therefore an inapplicable speculation. Such an argument is flawed, and as my system is more fundamental than yours, it can conclude that setting up an explanation of knowledge as infinitely regressive is a flaw. I can construct your system distinctively, but it is inapplicable. My system can be constructed distinctively, and applicably used, while not using infinite regress. Because an infinite regress is inapplicable, it is an inapplicable speculation, or induction. Mine does not rely on such an induction, and is therefore more sound.

Quoting Bob Ross
All concepts, even in your derivation, are referencing other concepts in a potential infinite fashion.


No. I avoid that flaw that most other epistemologies fall into. Everything starts with the foundation that I discretely experience. All distinctive knowledge boils down to that. I need no other outside reference. If I do, please show how I do.

Quoting Bob Ross
If you are the creator of the definitions of A and B, then there is no uncertainty.

There's always uncertainty. When someone claims they are certain of what they defined as A, they really mean that they very quickly ascertained what they defined, but necessarily had to perform application to discover what it was.


Correct. My note was that there is no uncertainty in distinctive knowledge. When there is uncertainty, or when it is deduced that one's will will not necessarily result in the will's outcome, we have a situation in which we must experience the deduced outcome of that induction. That is acting upon a belief until that belief's outcome is found.

There is no application within distinctive knowledge, because it is our experience itself. You don't match the experience itself to the experience itself. It is simply the experience itself. The act of being. What you are, is what you are. What you remember is what you remember. What you define something as, is what you define something as. There is no regress. There is no induction in this. This has been deductively shown by noting that you discretely experience, and all of these things logically flow from this fundamental.

Quoting Bob Ross
Let me be clear by what I mean by distinctive. Distinctive is like binary: it's either on or off. Either you have defined A to have x property, or you have defined A to have y property.

This is not " A deduced concept which is the creation and memorization of essential and accidental properties of a discrete experience", you have defined PoN here, which is true of both of your distinctions.


Poor wording with lots of implicitness on my part. Let me rephrase it.

Distinctive knowledge is a deduced concept. This deduced concept is that I discretely experience. Anytime I discretely experience, I know that I discretely experience. This is distinctive knowledge. This involves sensation, memory, and language. This is not the definition of the Principle of Negation, though we can discover the principle of negation as I noted earlier.

And no worry Bob if we retread old ground a bit! Many of those subjects were disparate, but now we have a nice consolidation.
Bob Ross May 23, 2022 at 03:37 #699467
Hello @Philosophim,


Please do Bob! You have been more than polite and considerate enough to listen to and critique my epistemology. At this point, your system is running up against mine, and I feel the only real issue is that it isn't at the lower level that I'm trying to address. Perhaps it will show a fundamental that challenges, or even adds to the initial fundamentals I've proposed here. You are a thoughtful and insightful person, I am more than happy to listen to and evaluate what you have to say.


I appreciate that, and same to you! Most of my conversations on this board, apart from ours, haven't been very fruitful. It seems as though most people on here like swift, abrupt responses and then get bored and move on to the next topic. I, and I think you as well, like longer, thought-out discussions that really go much deeper. That's why I really enjoy our conversations, as you are very respectful, genuine, and are providing thought-provoking responses.

The fundamental issue between us is becoming clearer and clearer for me, and I suspected as much but now I think it is pretty solidified. I think this is the pinnacle of our fundamental disagreement:

Philosophically, you seem to be taking a heavy realist methodological approach whereas I seem to be taking a heavy anti-realist methodological approach.

Consequently, I am performing derivation starting with the mind and working my way outwards onto the "real" world, whereas you seem to be starting with the "real" world and working towards your mind. Now, firstly, I want to disclaim that I am not in any way trying to put words in your mouth or unfairly fit you into a category; I am merely explicating what I think is the root issue here, which is reflected quite clearly (I think) in our disagreement in terms of what a "fundamental" is. Secondly, when I stated you seem to be working "towards your mind" from the "real" world, obviously you are thinking and therefore are starting with your mind in that sense, but what I mean is that you are grounding fundamentals in the "real" world, whereas I don't. Subsequently, I think you would hold (correct me if I am wrong) that your mind is from a brain (with the latter being more fundamental than the former) and, as you mentioned, the atom would be more fundamental than the brain. That kind of derivation, if I am allowed to say so, is a realist approach, from which I would gather, if I may guess, that you are probably somewhere along the lines of an ontological naturalist. Again, not trying to put words in your mouth, just trying to get to the root of the issue between us, as I don't think that our disagreement is as easy as "fundamental" semantics.

I, on the other hand, although I used to be in that boat of thinking (ontological naturalist, materialist), approach it from a heavy anti-realist position. It took me a while to recognize the shift in my thinking over the years, but in hindsight it is quite obvious. I start with the mind and, therefore, only subscribe to methodological naturalism (as opposed to ontological).

I think, in light of the aforementioned, it is glaringly clear to me why I am thinking PoN is a fundamental whereas you think it is discrete experience. I don't think going back and forth about "you had to use PoN to claim that" (me) and "one cannot think about the PoN without first being able to discretely experience" (you) is going to get us anywhere productive. I would simply respond with the same counter argument that you already know well, and thus I don't think you find it productive either.

I think, and correct me if I am wrong, you are arguing for discrete experience in virtue that the brain (or whatever object is required, to keep it more generic) must produce this discrete experience for me to even contemplate and bring forth PoN (in other words, I must discretely experience). Now, I don't think that is how you explained it, but I think that is a pretty fair (admittedly oversimplified) generalization.

I understand that, and in contemplation of my body as an object I agree. In contemplation of other bodies, objects, I agree. But in relation to myself, wherefrom derivation is occurring, I start with PoN and derive the relations of objects (and one conclusion is that the brain produces discrete experiences wherefrom it makes sense contemplation of PoN can arise). However, to claim that that is truly a fundamental in relation to the subject is to take a leap, in my opinion, to bridging the gap between mind and brain, which, as of now, I do not hold.

Before I dive into direct responses, I want to explicate more clearly what I mean by "fundamental". I am not talking about a contextual fundamental in relation to another object. Yes, atomic theory is more fundamental than molecular theory (I vaguely remember that conversation, and if I argued the converse then I was mistaken) contextually within that relation. I am talking about, do I dare say, the absolute fundamental. By absolute I need to be careful, because what I don't mean is that it is unquestionable: I mean that amongst all contexts (and the derivation of what a context is in the first place) it is necessarily true. Now, what I mean by "all contexts" is in relation to the subject at hand: I am not extending this out objectively or inter-subjectively at this point.

Let me try to explicate this clearer in my direct responses:

Discrete experience is the fundamental simplicity of being able to notice X as different from Y. Non-discrete experience is taking all of your experience at once as something indecipherable.


This is simply outlining the fundamentals of how a brain works. I find nothing wrong with this. I do not hold the brain as the subject, which I think is clearly where we are actually disagreeing (realist, materialist vs anti-realist, idealist--generally speaking, I'm not trying to force us into boxes).

You are explicating a correct derivation of a fundamental contextually in relation to when discrete experience arises out of objects (this is an analysis of the mereological structure of objects, which is fine in its own accord) . However, the flaw I think you are making is bridging the gap, so to speak, between mind and brain in virtue of this: there are aspects of the brain which will never be explained from it. The brain is simply a representation of the mind, which can never fully represent itself.

But we could not begin to use deduction about discrete experience, without first being able to discretely experience. We cannot prove or even discuss the PoN without being able to understand the terms, principle, negation, etc.


Apart from the fact that, again, you are fundamentally positing objects as more fundamental than subjects, I want to clarify that explicating PoN and utilizing PoN is not the same thing. I am not talking about what is necessary to argue for PoN, I am talking about the actual utilization of PoN regardless.

Yes, but you must first understand what the terms "true" and "false" are.


I don't want to be too reiterative, but this argument is sound in relation to the utilization of PoN: without PoN, the best way to describe it would be "indeterminacy". That claim doesn't thereby grant you some kind of obtainment outside of PoN, or what exists beyond it because you just thereby used it.

In the most radical example, if I could hypothetically prove without a doubt PoN was false (even just in terms of some kind of distinction), that would be in relation to PoN. Again, I think this disagreement is really at a deeper level than this because I suspect you were anticipating this response.

While I do believe that fundamentals can be applied to themselves, an argument's ability to apply to itself does not necessitate that it is a fundamental.


In terms of fundamentals contextually in object relations, you are correct. But in terms of the absolute pinpoint of derivation, I think you are incorrect: that is why PoN is called an axiom: you can't prove it in the sense that you can prove something via it.

I will create the PoN using the a/d distinction now. Instead of truth, its "What can be discretely experienced", and instead of false its, "What cannot be discretely experienced. What is impossible is to discretely experience a thing, and not the very thing we are discretely experiencing at the same time. Such a claim would be "false", or what cannot be discretely experienced. As you see, I've built the PoN up from other fundamentals, demonstrating it is not a fundamental itself.


I appreciate you demonstrating this, but I think it is fundamentally still using PoN. First, your entire derivation here utilizes it: "truth = what can be discretely experienced" is an argument from PoN, and so is "false = what cannot be discretely experienced". To claim that the impossibility is to discretely experience and not discretely experience at the same time is utilizing the more fundamental aspect of your mind: spatiotemporality. Our minds will not allow for something to be in two places at the same time, nor for two objects to be in one place at the same time. This is because the mind considers it a contradiction in its continuous understanding, which inevitably is based on PoN. I don't think this is going to be productive, but my ask back to you would be to try and "create" PoN using the a/d distinction without utilizing PoN: you can't. Likewise, try to justify not that one thing being in two places at the same time is a contradiction, but why it is a contradiction, without using PoN: you can't. Try to point to something objective to prove it; I don't think you can: not seeing something right now in two places at the same time is not a proof that it cannot occur.

Fundamental to me means the parts that make up the whole


In mereological consideration of objects it does: not holistically. I am using it more in terms of (from https://www.merriam-webster.com/dictionary/fundamental):

"serving as an original or generating source"
"of central importance"
"belonging to one's innate or ingrained characteristics"

I am not referring to what constitutes the parts of an object or all objects (like fundamental particles).


I've used the a/d distinction to demonstrate an explanation for why the PoN is not a fundamental as it is made out of component parts


Hopefully I demonstrated why it is not made of component parts. You aren't contending with PoN itself but, rather, utilizing it to define it differently (which is completely possible).

Barring your agreement with my proposal, you would need to identify what "true" and "false" are.


It is the transcendental aspect of the mind which determines what is a contradiction and what is not. I didn't choose that something cannot be in two different places at the same time, nor that two objects cannot be at the same place at the same time. Likewise, I didn't choose the validity of the causal relations of objects. The contemplation of the understanding is fundamentally in terms of spatiotemporal references (e.g. I can redefine PoN in terms of something else as long as it does not violate these underlying principles, if I were to define it as "discrete experience of X and Y at the same place in the same time" then that obviously wouldn't fly, but why?--because I am inevitably playing by the rules of my own mind and so are you regardless of whether either of us realize it). This happens before consideration of what must exist for us to transfer our views to one another.

I am not sure how relevant defining "true" and "false" is in this respect, because "true" is simply a positive affirmation, and "false" is a negative affirmation (denial). I think this derails quickly though, because I can posit PoN for the terms as well: it isn't that X can't be "true" and "false", it is that it can't be true and false at the same time. Likewise, if X had the capability to be in two different places (even merely in abstract consideration), then X can be "true" and "false" at the same time, because it isn't in the same place.


I think the problem is you are trying to find synonyms for the a/d distinction. It is not as simple as "abstraction vs non-abstraction" or "creation" vs "matching". I can use these terms to assist in understanding the concept, but there is no synonym, as it is a brand new concept. Imagine when the terms analytic and synthetic were introduced. There were no synonyms for them at the time, and people had to study them to understand them.


I can assure you I am not meaning to straw man your position: if it is the case that not even "certainty" and "uncertainty" relate to it, then I am not sure yet what to do with your distinction. I am not saying it is wrong in virtue of that, I am simply not understanding yet.

I think part of the problem is you may not have fully understood or embraced the idea of "discretely experiencing". If you don't understand or accept that fully, then the a/d distinction won't make sense


I most certainly have not fully embraced it. I am not sure how that would make the a/d distinction make sense, but you definitely know better than me.

You are still at a higher level of system, and assume that higher level is fundamental.


For you it is higher, for me it is lower. For you "higher" is the mind, "lower" is the objects which constitute the production of the mind. For me, "lower" is the mind, and "higher" is the derivation of the objects. For me, "lower" and "higher" aren't really sufficient terms because they more relate to mereological structure, which pertains to objects alone.

Can you use your derived system without my system underlying it? No. Until that changes, it cannot be used as a negation of the very thing it uses to exist.


I feel like my response so far should clear up the confusion here (not saying you are going to agree with me though of course (: ).


"I" is the discrete experiencer. You've been attributing the "I" as having free will. I have not meant to imply that or used those terms.


I have no problem if you aren't trying to convey any position on free will in your epistemology; my problem is that when you state "I've noted you can create whatever system you want distinctively", that implies free will of some sort (I am not trying to box you into a specific corner on the issue). I don't see how that could imply anything else. If I walk up to a hard determinist and say that, they are definitely going to catch on to that implication very quickly.

Where does the idea of negation come from? True and false?


Metaphysically, the mind. Explain to me how you can derive PoN without using PoN to derive PoN. I don't think you can. Explain to me how you can validate causality holistically: the best one can do is systematically validate one connective (relation) of two objects by virtue of assuming the validity of another connective (or multiple), and this goes on as a potential infinite regress.

Did you mean to say, "One cannot distinctively know their own definition before they perform application to obtain that?" That doesn't work, because distinctive knowledge does not require applicable knowledge.


The entire point was not to conflate or omit your terminology, when I used "application" I was referring to "applicable". I should have been more clear though: the point is that one does not know distinctively anything without performing application to know it. Your distinction is not separable in that sense like I would imagine you think it is.

Please clarify what you mean by this in distinctive and applicable terms. I didn't understand that point.


Of course. Forget for a second that you have obviously imagined a "pink elephant" before (or at least odds are you just did). Now imagine you "discretely experience" "pink", in isolation. Now, imagine you "discretely experience" "an elephant". Now, without imagining a combination of the two, you assert "I have imagined a pink elephant". That is a conceptual conflation. You did not, in fact, imagine a pink elephant. The concatenation of concepts is not the same as the union of them.

What I meant by "proving itself" is it is consistent with its own rules, despite using some assumptions or higher level systems like the PoN.


I wasn't referring to consistency, I was referring to completeness. Consistency is when the logical theory never proves both S and not-S for any sentence S. Completeness is when the logical theory proves, for every sentence S in its language, either S or not-S.
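These two definitions can be made concrete with a toy propositional "theory". The sketch below is purely illustrative (the language, the proved set, and the helper names are invented for this example, not drawn from the posts):

```python
# Toy illustration of consistency vs completeness for a tiny "theory"
# over two atomic sentences. Illustrative names and sets only.
language = {"p", "q"}
proved = {"p", "not q"}  # the sentences this toy theory proves


def neg(s):
    # Negate a sentence string: "p" <-> "not p".
    return s[4:] if s.startswith("not ") else "not " + s


# Consistent: it never proves both S and not-S for any S.
consistent = all(not (s in proved and neg(s) in proved) for s in language)
# Complete: for every S in the language, it proves S or not-S.
complete = all(s in proved or neg(s) in proved for s in language)

print(consistent, complete)  # this toy theory is both: True True
```

Dropping "not q" from the proved set would leave the theory consistent but incomplete, which is the distinction being drawn.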

Also, I am not using truth. If you wish to use Goedel's incompleteness theorem in relation to this theory, feel free.


I was never attempting to argue you were using "truth". You are arguing for what is "true", which is "truth", but you are refurbishing its underlying meaning (to not be absolute). That is what I meant by "truth outruns proof".

What I am noting is that an infinite regress is something that cannot be applied, and therefore an inapplicable speculation.


It is applied. I think I noted clearly in my previous post how one could negate it. Also, I want to clarify I am referring to a potential infinite regress, not actual.

My system can be constructed distinctively, and applicably used, while not using infinite regress


You just previously conceded "despite using some assumptions...like PoN". You can't finitely prove PoN. It is not possible.

Mine does not rely on such an induction, and is therefore more sound.


If I were arguing for an actual infinite regress, then it would be an induction. A potential infinite regress is deductively ascertainable.

Because I am not fully understanding (I would suspect) the a/d distinction, I am going to end this with a step-by-step analysis of your definition here, and you tell me where I am going wrong (thank you, by the way, for elaborating):


Distinctive knowledge is a deduced concept. This deduced concept is that I discretely experience. Anytime I discretely experience, I know that I discretely experience. This is distinctive knowledge. This involves sensation, memory, and language. This is not the definition of the Principle of Negation, though we can discover the principle of negation as I noted earlier.


1. Distinctive knowledge is a deduced concept.

Makes sense.

2. This deduced concept is that I discretely experience.

The justification for this seems to be "Anytime I discretely experience, I know that I discretely experience". The question is why would this be valid? I would argue it is valid in virtue of PoN, spatiotemporal contemplation, etc. You know it because your mind related the objects in that manner in accordance with the rules you inevitably submit to. Causality is simply the connections of your mind. There's nowhere to point to in objective "reality" that validates the causal connection of two objects in space and temporally in relation to time: it is a potential infinite regress of validating connectives in virtue of assuming the validity of others, and so on and so forth.

3. This involves sensation, memory, and language.

I think all of these are aspects of the brain in a derivation of objects and their relations. But the relations themselves are of the mind. This is why I am careful to relate my position to reason as opposed to consciousness.

4. This is not the definition of the Principle of Negation, though we can discover the principle of negation as I noted earlier.

I agree that it is not PoN, but you are necessarily using it here. Just because you can discover it doesn't mean you weren't using it fundamentally to discover.

I look forward to hearing from you,
Bob
Philosophim June 04, 2022 at 13:07 #704941
Hello again Bob, this was more delayed than I would have liked due to Memorial week activities and summer starting here; thanks for waiting. This was also a doozy of a post, as there are a couple of central themes. As a summary, I can state I feel I've lost you somewhere along the way on the d/a distinction, and that may be an insurmountable issue at this point. For my part, you have given me every single examination and critique of the d/a distinction I have ever wanted, and I am eternally grateful for that. At this point, I feel we are getting into your own outlook and view of knowledge, and I greatly respect that as well.

The goal of this exploration was to see if someone could poke holes in the d/a distinction within the argument itself. I feel that has been adequately explored. At this point, it seems to be the dissection of your theory, and I'm not sure I want to do that on this thread. It is unfair, as you have not had the time and space to adequately build it up from the ground floor. Further, my emphasis on this thread is my own theory, and I have a bias towards that. Perhaps it is a time for another thread where you write and construct your theory, and then I will be able to adequately address it properly, minus the d/a distinction I've written here. There are a few questions I could ask about the basis of your theory, but then the thread would get derailed, and the posts here would reach new record lengths. :smile: You'll see below a lot of my disagreements with your points are merely due to perhaps not understanding how you built the theory from the ground up. As such, I feel we might be talking past each other, and I would rather just give your theory its full focus and due. I do feel at this point though that we'll need to address either your theory, or mine, and the combination of both will just explode too much writing and exploration for one thread.

With that, I'll begin.

Quoting Bob Ross
I think, and correct me if I am wrong, you are arguing for discrete experience in virtue that the brain (or whatever object is required, to keep it more generic) must produce this discrete experience for me to even contemplate and bring forth PoN (in other words, I must discretely experience).


Yes, this is correct.

Quoting Bob Ross
However, to claim that that is truly a fundamental in relation to the subject is to take a leap, in my opinion, to bridging the gap between mind and brain, which, as of now, I do not hold.


Again, I would ask how a person could even realize they were a subject without discrete experience. What I believe I can agree with is the speculation that a self could exist that could not discretely experience. Such a thing would have no awareness of itself, much less the capability for knowledge.

Quoting Bob Ross
Discrete experience is the fundamental simplicity of being able to notice X as different from Y. Non-discrete experience is taking all of your experience at once as something indecipherable.

This is simply outlining the fundamentals of how a brain works. I find nothing wrong with this. I do not hold the brain as the subject, which I think is clearly where we are actually disagreeing (realist, materialist vs anti-realist, idealist--generally speaking, I'm not trying to force us into boxes).


I don't think we're disagreeing here. I've never claimed that "I am the brain", just "I am the discrete experiencer". Focus on your breathing for a second, and control it. There you are discretely experiencing your breathing. But a few minutes ago, you were not discretely experiencing breathing. It was part of the entirety of your existence, but you didn't parcel it out of everything. Now we know from other knowledge that the brain is still what causes you to breathe, but as the discrete experiencer, you do not always discretely experience breathing. To form the initial theory, knowledge of the brain is not needed, much like knowledge of the material a ruler is made out of is not needed to use the ruler.

Quoting Bob Ross
However, the flaw I think you are making is bridging the gap, so to speak, between mind and brain in virtue of this: there are aspects of the brain which will never be explained from it. The brain is simply a representation of the mind, which can never fully represent itself.


Stating that there are aspects of the brain which will never be explained from this methodology is an induction, not a fact. Everything the mind can comprehend is a representation of the mind, including itself. That is exactly what discrete experience is. It is creating discrete experiences out of the entirety of existence. An atom is a creation of discrete experience. It is a concept. As I've noted, we never had to create that concept. Think of the Bohr model versus quantum model of atoms. https://pediaa.com/difference-between-bohr-and-quantum-model/#:~:text=Main%20Difference%20%E2%80%93%20Bohr%20vs%20Quantum%20Model&text=Quantum%20model%20is%20considered%20as,particle%20duality%20of%20an%20electron.

Are any of those models "the thing in itself"? Is even "the thing in itself" something that is existent in nature as a concept apart from the mind's creation? No. They are all discrete experiences. Everything is a representation of the mind; the brain is no exception.

Quoting Bob Ross
But we could not begin to use deduction about discrete experience, without first being able to discretely experience. We cannot prove or even discuss the PoN without being able to understand the terms, principle, negation, etc.

Apart from the fact that, again, you are fundamentally positing objects as more fundamental than subjects, I want to clarify that explicating PoN and utilizing PoN is not the same thing.


I want to clarify again that I am not positing objects as more fundamental than subjects, at least in regards to knowledge. First, I never use the word "object" in the theory. Knowledge never claims "truth" or that there is a "thing in itself" that exists out there. Knowledge is a logical tool developed by a subject (the discrete experiencer) to create a model of one's discrete experience in such a way that it ensures our survival and success. PoN can be part of that model, but it is not a fundamental that is first needed to derive other things. The PoN is derived and proved. I showed you how I did it with the a/d distinction. The ability to discretely experience is required first for the PoN to be derived and proven.

As I have noted in an earlier post, one can use something without applicable knowledge of it. Many of our conclusions are filled with implicit inductions. We may use the PoN without first proving that it is applicably known. But for the PoN to be applicably known, we must then examine it. And the point that I was making is that when we finally get around to seeing if the PoN is applicably known, we must prove it. And to prove it, we need the d/a distinction. A thing's use does not make it fundamental. What makes it fundamental, is that there is nothing deeper that needs to be shown to logically explain it as a concept. We may have a fundamental disagreement here, which is fine. For my purposes, fundamental construction of logic is in both explication and utilization. And to explicate and utilize PoN as knowledge, one must distinctively and applicably know the d/a distinction. I'll keep exploring below why that is.

Quoting Bob Ross
Yes, but you must first understand what the terms "true" and "false" are.

I don't want to be too reiterative, but this argument is sound in relation to the utilization of PoN: without PoN, the best way to describe it would be "indeterminacy".


Let's list what the PoN is. In Western Philosophy it is often associated with Aristotle and comprises several principles, for example the law of the excluded middle and the law of contradiction:
'if p, then not not-p'
'if not not-p, then p'

In India you have the principle of four-cornered negation: "S is neither p, nor not-p, nor both p and not-p, nor neither p nor not-p." And that is not necessarily agreed with by all people.

The point is that these are distinctive knowledge constructs that must then be applicably known to be useful. My theory can explain how they can be known without assumption. We may have assumed they were true, but the PoN is not proven as a fundamental truth, or thing in itself. It is a construct of the mind like everything else. The reason why it works is that it works both applicably and distinctively.
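The two classical laws quoted above can be checked mechanically by enumerating the two truth values; a minimal sketch (illustrative only, not drawn from the posts; the `implies` helper is invented here):

```python
# Check the classical laws quoted above under two-valued semantics:
# a formula that holds for every assignment of p is a tautology.
def implies(a, b):
    # Material implication: a -> b is (not a) or b.
    return (not a) or b


for p in (True, False):
    assert not (p and (not p))       # law of contradiction
    assert implies(p, not (not p))   # 'if p, then not not-p'
    assert implies(not (not p), p)   # 'if not not-p, then p'

print("classical checks pass")
```

That the checks pass for both assignments is exactly what makes these laws "distinctively" constructible: they hold by the rules of the two-valued model itself.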

One further point: I'm going to go back to something I said very early on. Humans aren't the only discrete experiencers. Animals, and even insects, discretely experience. If they did not, they could not identify what was food, and what was not food. A thing that does not discretely experience is like a coma patient on a drug trip of indeterminate sensation and thoughts. It's not an "I" at that point, but what we might call a "thing" that exists without any determinate realization of anything in the world, including its own existence.

Does an insect need the PoN? No. Its beyond its capability to realize or think such a thing (in theory). Yet it can, and must, discretely experience. This is why the ability to discretely experience is more fundamental than the PoN. You see the PoN as fundamental to human thinking and logic. I'm noting that human thinking and logic relies on the fundamental of being able to discretely experience.

So back to "truth" and "false". Yes, without the PoN, we could create another identity called "indeterminacy".

Quoting Bob Ross
I think this derails quickly though because I can posit PoN for the terms as well: it isn't that X can't be "true" and "false", it is that it can't be true and false at the same time.


We can create a distinctive logic model which notes that it is possible for a thing to exist and not exist at the same time. "Truth" is when a thing exists in its state. "False" is when it does not. "Indeterminacy" is when it exists in both a true and false state. We'll call this the "PoI".

What we cannot do is applicably know such a thing, which is why it is not used seriously by anyone within science. But a human being could live their entire life believing in the "PoI" if they so desired. "Somewhere out there, I believe we'll find a thing that both exists and doesn't exist at the same time." Again, this is speculative at best. The reason it isn't useful is that it has not been applicably known, seems inapplicable and arguably illogical, and is not useful to daily life. But the reason why we distinctively and applicably know this is not the PoN; it's the knowledge formula formed with the d/a distinction. While I can distinctively know indeterminacy, I cannot applicably know it.
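A distinctive model along the lines of the "PoI" resembles a three-valued logic in which a third value marks indeterminacy. A minimal sketch of one such semantics (the strong Kleene tables are one common choice; the names T, F, I and the functions below are illustrative, not from the posts):

```python
# A hypothetical three-valued "PoI" semantics, where I marks "indeterminate".
# Connectives follow the strong Kleene tables; names are illustrative.
T, F, I = "T", "F", "I"


def neg(v):
    # Negation flips T and F; indeterminate stays indeterminate.
    return {T: F, F: T, I: I}[v]


def conj(a, b):
    # Conjunction: F dominates, T requires both T, otherwise indeterminate.
    if a == F or b == F:
        return F
    if a == T and b == T:
        return T
    return I


# In classical logic, p AND not-p is always F (the PoN holds).
# Under this semantics, an indeterminate p leaves it indeterminate:
for v in (T, F, I):
    print(v, conj(v, neg(v)))  # prints: T F, F F, I I
```

The last line is the point: such a model can be constructed distinctively with no contradiction in its own rules, yet nothing in experience has ever matched the "I I" row, which is why it fails to be applicably known.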

Quoting Bob Ross
I don't think this is going to be productive, but my ask back to you would be to try and "create" PoN using the a/d distinction without utilizing PoN: you can't.


A deduction assumes that the conclusion follows from the premises. I will instead use the PoI. All deductions would instead be hypothetical, as the state a deduction describes could exist, but could also not exist, at the same time. A conclusion would not necessarily follow from the premises, because the premises and the conclusion could potentially be, and not be, at the same time. At that point we would have to tweak it to say, "But if it were the case that the involved premises and conclusions were not indeterminate", and we could get something like a determinate theory. It is not required that we have the PoN; it just makes things cleaner, and is something we have applicably known.

Thus, using the PoI, I would conclude that what is distinctively known is what we discretely experience, and I would add the claim that we could discretely experience both something and its negation at the same time. I would say then that we could applicably know something, and we could applicably know something that exists and does not exist at the same time. But after determining the d/a distinction, I can then go back and ask myself, "Is the PoI something I can applicably know?" No, using the theory from there, I determine I cannot applicably know the PoI. Therefore it's a distinctive theory that cannot be applicably known, and is unneeded. At best, it would be included as an induction. But I did not need the PoN to create the d/a distinction, as shown. What I could do is form the PoN to make the proof cleaner, but it is not required.

Without the d/a distinction, there is a problem that the PoN must answer. "Just because I have not experienced an existence and its contradiction at the same time, how do I know I won't experience such a thing in the future? Isn't claiming that the PoN will always exist just an induction without the d/a distinction? And if it is an induction, why is it any better than the induction that in the future, we may experience the PoI?" The d/a distinction can answer this clearly. With the d/a distinction, the PoN is something which is possible, while the PoI is speculative at best. As they are competing inductions, it is more cogent to use the PoN over the PoI. How do you answer such a question without the d/a distinction? Despite your disagreements with the d/a distinction, this is an essential question your theory must answer.

Quoting Bob Ross
I have no problem if you aren't trying to convey any position on free will in your epistemology, my problem is that when you state "I've noted you can create whatever system you want distinctively", that implies free will of some sort (I am not trying to box you into a specific corner on the issue). I don't see how that could imply anything else.


The I is the discrete experiencer. It is what discretely experiences. I'm using "want" broadly here, and should probably have used "will". What the discrete experiencer experiences, is what the discrete experiencer experiences. Whether it has constructed a logical idea of will that is free or deterministic is a non-essential property.

Quoting Bob Ross
It is the transcendental aspect of the mind which determines what is a contradiction and what is not. I didn't choose that something cannot be in two different places at the same time, nor that two objects cannot be at the same place at the same time. Likewise, I didn't choose the validity of the causal relations of objects.


And yet someone could choose to use the PoI distinctively. The reason why it's not useful is because it cannot be known applicably. Just because you couldn't choose to create a different distinctive knowledge doesn't mean it's not possible for others to do so. You have never observed these contradictions, but as noted earlier, how do you explain that this gives you knowledge that it is not possible somewhere in reality? Without the d/a distinction, your argument is only a subjective induction and cannot necessarily explain why it is superior to the PoI.

Quoting Bob Ross
Where does the idea of negation come from? True and false?

Metaphysically the mind. Explain to me how you can derive PoN without using PoN to derive PoN. I don't think you can.


I create the idea of PoN distinctively, then applicably show it to be true. Then, I note that any competing principle when used in the future, the PoI for example, is not as cogent of an induction as the PoN.

Quoting Bob Ross
I think part of the problem is you may not have fully understood or embraced the idea of "discretely experiencing". If you don't understand or accept that fully, then the a/d distinction won't make sense

I most certainly have not fully embraced it. I am not sure how that would make the a/d distinction make sense, but you definitely know better than me.


Then this is absolutely key. If there is any doubt or misunderstanding of the idea that we discretely experience, that has to be handled before anything else. Please express your doubt or misunderstanding here, as everything relies on this concept. You keep not quite grasping the a/d distinction, and I feel this is the underlying root cause.

Quoting Bob Ross
The entire point was not to conflate or omit your terminology, when I used "application" I was referring to "applicable". I should have been more clear though: the point is that one does not know distinctively anything without performing application to know it.


No, this is fundamentally false. Applicable knowledge absolutely requires distinctive knowledge first. If there is no distinctive knowledge, there is nothing to match to. When you first encounter a new sensation, you can try to match it to something you have already distinctively known. But if you have no distinctive knowledge, or do not try to match it to something distinctively known, your knowledge of the sensation will be distinctive, not applicable.

If I see a swamp thing for the first time, and name it a "swamper", that is how I distinctively know it. If I encounter it again and deductively match it to a "swamper", then I applicably know it as a swamper. But I can't applicably know it as a swamper, until I've first distinctively known it as a swamper.

Quoting Bob Ross
Forget for a second that you have obviously imagined a "pink elephant" before (or at least odds are you just did). Now image you "discretely experience" "pink", in isolation. Now, imagine you "discretely experience" "an elephant". Now, without imagining a combination of the two, you assert "I have imagined a pink elephant". That is a conceptual conflation. You did not, in fact, imagine a pink elephant.


That's not a conceptual conflation, that's a lie. If I say I've imagined something, but I have not, then obviously I have not. Words without any essential properties to them are just words without any essential properties to them. I'm not seeing the problem.

Quoting Bob Ross
I wasn't referring to consistency, I was referring to completeness. Consistency is when the logical theory proves for all provable sentences, S, either not S or S. Completeness is when the logical theory proves all sentences in its language as either S or not S.


I think completeness is more than clearly showing distinct identities. It also must be able to adequately answer questions and critiques of it. Anytime a theory must reference an infinite regress is when it is inapplicable, and incomplete in my eyes. As I've stated many times, you can form many distinctive logical arguments in your head that fail in application. My theory notes that ability to create a distinctive logical concept is only one half of the equation. I'm quite certain someone could construct a distinctive logical concept that is in exact contradiction to your own. The proof is whether the logical construct can be applicably known. Without applicable knowledge, how can your theory compete with someone who uses a completely different theory using different definitions for words and concepts?

Quoting Bob Ross
I was never attempting to argue you were using "truth". You are arguing for what is "true", which is "truth", but you are refurbishing its underlying meaning (to not be absolute). That is what I meant by "truth outruns proof".


Yes, absolute truth outruns proof. Which means any theory which relies on absolute truth can never be proven. But I am not arguing absolute truth. Anything which relies on absolute truth is inapplicable, and therefore not useful. My theory is applicable, and therefore useful and logically consistent.

Quoting Bob Ross
What I am noting is that an infinite regress is something that cannot be applied, and therefore an inapplicable speculation.

It is applied. I think I noticed clearly in my previous post how one could negate it. Also, I want to clarify I am referring to a potential infinite regress, not actual.


No, it's not applied, and by this, I mean applicably known. Any distinctive reference to the infinite can never be applicably known. Long ago when we first met on the "A First Cause is Logically Necessary" thread, you were the only one to point this out, and I conceded you were correct. If there is a potential infinite regress, then you don't have a deduction. That's an induction.

Quoting Bob Ross
My system can be constructed distinctively, and applicably used, while not using infinite regress

You just previously conceded "despite using some assumptions...like PoN". You can't finitely prove PoN. It is not possible.


I think I've done that. Using the d/a distinction, I constructed the idea that I cannot discretely experience both a thing and its negation at the same time. Then, I've applicably known this. As such, I hold the induction that it is not possible for me to experience both a thing and its negation at the same time. This is the principle of negation, and it requires nothing more than the steps shown.

Quoting Bob Ross
Mine does not rely on such an induction, and is therefore more sound.

If I were arguing for an actual infinite regress, then it would be an induction. A potential infinite regress is deductively ascertainable.


A potential infinite regress is an induction. You can deductively ascertain this induction, but it is an induction. Potential means, "It could, or could not be." If your theory has a potential infinite regress, you have an unresolved induction as the base of your argument. This leads anyone to ask, "What separates your theory which has an induction as its base, from any other theory that has an induction as its base?" Mine contains no potential infinite regress. It is all a finite logical system, and needs nothing more than what I've given.

Quoting Bob Ross
The justification for this seems to be "Anytime I discretely experience, I know that I discretely experience". The question is why would this be valid? I would argue it is valid in virtue of PoN, spatiotemporal contemplation, etc.


As I've demonstrated, PoN can be replaced with the PoI, and I can still conclude this. In other words, I'm not claiming that I cannot discretely experience indeterminacy. Discretely experiencing indeterminacy would still be discretely experiencing. The PoN is a logically concluded limit to discrete experience, because if we explore our discrete experiences, we find it impossible to applicably know that we can discretely experience both an identity and its negation at the same time. Space and time are later identities we can both distinctively and applicably know within our discrete experiences as well. But they are not required to know I discretely experience.

Quoting Bob Ross
Causality are simply the connections of your mind. There's nowhere to point to in objective "reality" that validates the causal connection of two objects in space and temporally in relation to time: it is a potential infinite regress of validating connectives in virtue of assuming the validity of others and so on and so forth.


I've never spoken about causality or objective reality so this does not apply to my theory. We can discuss how causality would apply with the a/d distinction, but that shouldn't be in the conversation at this point. I would address it here, but this post has already been long enough!

Quoting Bob Ross
3. This involves, sensation, memory, and language.

I think all of these are aspects of the brain in a derivation of objects and their relations. But the relations themselves are of the mind. This is why I am careful to relate my position to reason as opposed to consciousness.


I do not claim the mind or brain on first construction of this theory. Yes, these are all distinctively constructed identities that we can then applicably know. I don't disagree with your notion, but they don't disagree with the knowledge theory either.

I see and understand your theory, but it is separate and apart from the a/d distinction. Your criticisms seem to miss the mark on the a/d distinction, and at this point I'm not sure what else I can do except ask you to review either parts of the original paper again, or go back and see previous replies. Again, I do not want to imply that the PoN is wrong, or that spatiotemporal identities are wrong either. I'm also not denying that you couldn't conclude the a/d distinction with those identities. What I'm trying to point out is that they are not necessary, and thus not fundamental, to the idea of discrete experience. They are also not necessary, and thus not fundamental, to the a/d distinction either.

The key between us at this point is to avoid repetition. I fully understand that two arguments can be made, and eventually it may be that each side is unpersuaded by the other. If you feel you are repeating yourself, feel free to state, "I disagree because of this previous point," and that is acceptable.

I feel I understand your positions at this point, and they are well thought out. But there are a couple of fundamental questions I've noted about your claim that the PoN is fundamental that I think need answering. Neither is a slight against you; you are a very intelligent, philosophically brilliant individual, the best I have encountered on these boards. So, if you would like, either we can start a new thread addressing your knowledge theory specifically, or we can simply spend the next post only going over your theory from the ground up, without the d/a distinction. I leave it up to you!


Bob Ross June 05, 2022 at 18:08 #705347
Hello @Philosophim,

Hello again Bob, this was more delayed than I had liked due to Memorial week activities and summer starting here, thanks for waiting.


As always, take your time: no worries! I have no problem waiting for substantive, well-thought out replies (:

The goal of this exploration was to see if someone could poke holes in the d/a distinction within the argument itself. I feel that has been adequately explored. At this point, it seems to be the dissection of your theory, and I'm not sure I want to do that on this thread. It is unfair, as you have not had the time and space to adequately build it up from the ground floor.


That is absolutely fair. This is your thread and, thusly, I want this conversation to be directed exactly where you would prefer: if you think that the discussion has met its end (in this discussion board at the least), then by all means we can conclude whenever you deem so! I completely understand the desire to prevent irrelevant derailments on the thread, and I can see how diving into my epistemology could do just that. With that being said, someday soon I am planning on posting an in depth analysis of my epistemology and, as always, feel free to rip it apart (: It may be a little while though as I want to ensure its quality before posting.

With that being said, I will respond to your post with the intention of keeping it relevant to your epistemology but also very briefly responding to some of the points you made about mine (or alluded to them in your responses). After that, if you wish to cease the conversations on grounds of derailment, that is totally fine my friend.

To be honest, I don't think you are entirely understanding what I am trying to convey, but that is by no means your fault and it is entirely possible that you do and I am failing to perceive it. To keep it brief, let me address your points on PoN and how it relates to what you defined as PoI.

Let's list what the PoN is. In Western philosophy it is often associated with Aristotle and comprises several principles, such as the law of the excluded middle and the law of contradiction:
'if p, then not not-p,'
'if not not-p, then p.'


My contention here would be that the LEM (law of excluded middle) is by no means a part of the law of noncontradiction, even with respect to classical Western logic: they are completely separate principles. Instead of positing it as "not-p" and "p", which presupposes the use of LEM and PoN together, their separability can be more easily demonstrated as follows:

"B cannot be A and not A" (or more precisely "B cannot be A and not A at the same time")
"B is either A or not A"

The former does not directly necessitate the latter in this terminology, but using "not-A" instead of "not A" implies LEM: anything that is A = not not-A, and thus anything that isn't A is a not-A, which means that the conditionals "if A, then not not-A" and "if not not-A, then A" directly necessitate the law of the excluded third. But within the refurbished terminology it is quite clear that B necessarily not being "A and not A" does not necessitate that B is thereby one or the other. This is the wiggle room where paraconsistent, paracomplete, and, as you noted, Eastern logics, such as catuskoti (tetralemma) notions, are able to be conceived. Also, as you noted, the kotis actually do allow for B to be A and not A. To keep it brief, my point is that my use of PoN is not meant as a logical construct like those, and its precise definition holds no immediate favoritism in the battle between paraconsistent and consistent logical languages. I am defining PoN in the form of predicate logic:

"a predicate cannot contradict its subject concept"

Or even more precisely:

"a predicate cannot be true and false of its subject concept"

This move is admittedly subtle, potentially sneaky, but it turns out to be vital. This is not equivalent to "B cannot be A and not A" nor to "B cannot be A and not A at the same time"! To keep it brief, here is an example where the distinction matters:

"circles are green and not green" (aka: "Bs are A and not A")

A more classically minded individual will deny this sentence in virtue of the obvious (A and not A), while a more paraconsistently minded individual will allow it in at least some circumstances. However, using the predicate-logic definition of PoN, the aforementioned sentence, at face value, is not violating PoN, contrary to popular, classical-logic belief. Firstly, let's allow ourselves to refurbish the subject concept "circles" how we please (with the exception of holding fast to the concept of plurality: i.e. circles), given that the sentence wasn't given any prerequisite definitions of its concepts. One particular scenario for the definition of "circle" pops out: what if "circle" is defined to contain "has the essential property of being green and not green"? Now the sentence "circles are green and not green" makes perfect sense: part of the definition of being a "circle" is to have a "contradictory" state of greenness, which is perfectly definable and describable by human reason. This definition of "circle" is perfectly coherent, yet does not entail any sort of "circles in 'reality' that are green and not green". Secondly, let's analyze it from the colloquial use of the term "circle": nothing in the concept of a "circle" necessitates a certain color, nor that it cannot be "green and not green". However, in the colloquial use of the term "circle" we have violated the predicate-logic PoN, because the sentence joins the non-necessity of color in the definition of a "circle" with its necessity in a coexistent state. Amazingly, this has nothing to do with the fact that we posited the color in contradictory states; the violation can also occur without it:

"circles are green"

Given a "circle" inheriting the colloquial definition, this violates PoN. With a bit more clarity, we can also violate PoN with the contradictory greenness:

"a circle, by definition, can be any color"
"circles are green and not green" (which could equally violate PoN with proposing any color even in a "non-contradictory" state in this case)

Now, I am skipping a couple of steps here, but I think you get the point. This is why subjects can posit and bend "PoN": because they are not referring to what I am referring to. It is perfectly possible to hold sincerely that something is A and not A without contradiction, as long as the subject concept is not contradicted by the predicate: this is the aspect of reason which is always abided by, not "B cannot be A and not A" or "B cannot be A and not A at the same time". If someone defines B as X and then posits B is not X, they will not hold it unless there are some other variables at play which resolve this predicate contradiction as no longer existent, or they simply do not recognize the contradiction (regardless of how valid their derivation actually is or is not). The important aspect here is that I am trying to derive and convey how reason works, as opposed to developing a logical language. Maybe PoN is the wrong term? People can most certainly construct PoN how they like as long as they abide by the PoN I am proposing (I would argue).

And, yes, I am using a constructed logical language's (predicate logic's) form of PoN and still claiming that it precedes constructed logic, because this is analogous to simply deriving that one discretely experiences by constructing it from discrete experience. I can most definitely propose a constructed logical language which embodies a more fundamental principle than logical languages.

Now, with that in mind, let me address your PoI. Yes, one could, prima facie, construct a logical language wherein the classical-logic PoN is accounted for but LEM is non-existent (which is exactly, I would say, what you did in creating PoI). In fact, there are many logical languages which deny LEM without any issues, such as fuzzy logic (https://www.globaltechcouncil.org/artificial-intelligence/fuzzy-logic-what-it-is-and-some-real-life-applications/), which doesn't utilize boolean logic (which, by virtue of being boolean, requires LEM) but instead uses values from 0 to 1. It is actually very useful in certain situations where boolean logic doesn't cut it. Logical systems such as fuzzy logic necessarily cannot hold LEM, as that would make them boolean logic, which would defeat the purpose.
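For what it's worth, the claim that a logic can keep noncontradiction while dropping LEM can be checked mechanically. Below is a minimal sketch, not from either paper: it uses three-valued Gödel-style connectives (min/max with intuitionistic negation), and all names are my own choice of illustration.

```python
# A sketch of a three-valued Goedel-style logic over {0, 0.5, 1}.
# Only 1 counts as fully true (designated). All names are mine.

VALUES = (0.0, 0.5, 1.0)

def g_and(x, y):
    return min(x, y)

def g_or(x, y):
    return max(x, y)

def g_not(x):
    # Intuitionistic-style negation: fully true only of the fully false.
    return 1.0 if x == 0.0 else 0.0

# Noncontradiction: not(x and not-x) is fully true at every value.
lnc_valid = all(g_not(g_and(x, g_not(x))) == 1.0 for x in VALUES)

# Excluded middle: x or not-x fails at the middle value 0.5.
lem_valid = all(g_or(x, g_not(x)) == 1.0 for x in VALUES)

print(lnc_valid, lem_valid)  # True False
```

Fuzzy logics generalize this to the whole interval [0, 1], where LEM likewise fails at intermediate values (e.g. max(0.5, 1 - 0.5) = 0.5).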

Now, what you described in PoI is a much bolder constructed logic, which is like but not equivalent to our fuzzy friends: you posited three outcomes (true, false, and indeterminate). Firstly, I want to note that this is entirely possible to construct, prima facie, using the predicate-logic formulation of PoN. One can produce sentences with PoI in which the subject concept is not contradicted by its predicate, such as:

"B is in an indeterminate state"

That's fine. This takes no inherent position on what "state" must be in terms of possibility (it doesn't contradict its subject concept): it doesn't specify that an indeterminate state must be either A or not A (LEM). Indeterminate could be ineffable, neither, both, true and not false, or false and not true (the kotis, for example). Let's take your sentence:

"Somewhere out there, I believe we'll find a thing that both exists and doesn't exist at the same time"

The reason it is possible for you to construct this sentence is that the subject concept, implicit here, isn't contradicted by its predicate: the concept of ignorance could potentially be enough wiggle room for one to posit such a sentence about the unknown. My main point with respect to your epistemology is that you are inadvertently using this more fundamental PoN (more like the form of predicate logic) to formulate discrete experience. I was never trying to convey that you have been involuntarily using classical-logic PoN and LEM.

What we cannot do is applicably know such a thing, which is why it is not used by anyone seriously within science.


Although I understand what you are trying to convey, logicians and mathematicians (and scientists) do not disregard a logic simply on the basis of classical-logic principles. There are perfectly applicable logics, like first-degree entailment logic, which allow for koti-like truth-value systems: f (false and not true), t (true and not false), b (both true and false), and n (neither true nor false), wherein the output of a given function is a set: {f}, {t}, {t, f}, and {} (the empty set being n).

But in terms of more everyday application, four-possibility systems are also usable, albeit not as widely applicable as classical logic. Imagine I am eating cereal and claim:

"I am eating bread"

That's false and not true. Imagine I am eating cereal and claim:

"The bread I am eating is purple"

Well, I am not eating bread. So I am neither eating bread that is purple nor bread that is not purple, because I am not eating bread. Therefore it is neither true nor false. Imagine I am eating cereal and I claim:

"this sentence is false"

I could simply concede that the liar paradox outputs {t, f}, which is essentially the same thing as defining a liar-paradox sentence as having the property of being contradictory (just like being green and not green). I could also simply deny its truth-aptness, which is exactly the same as claiming the output is {} (i.e., n). As you can probably see, there are applications, even in mundane life, for first-degree entailment logic.
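The four-valued scheme sketched here can be written out directly. This is only an illustrative encoding: the function names and the readings of the two example sentences are my own, not taken from either paper.

```python
# A sketch of first-degree entailment (FDE) truth values as subsets
# of {"t", "f"}: {"t"} = true only, {"f"} = false only,
# {"t", "f"} = both (a "glut"), set() = neither (a "gap").

def fde_not(v):
    # Negation swaps the truth mark and the falsity mark.
    out = set()
    if "f" in v:
        out.add("t")
    if "t" in v:
        out.add("f")
    return out

def fde_and(v, w):
    # True when both conjuncts are true; false when either is false.
    out = set()
    if "t" in v and "t" in w:
        out.add("t")
    if "f" in v or "f" in w:
        out.add("f")
    return out

liar = {"t", "f"}     # "this sentence is false", read as a glut
purple_bread = set()  # "the bread I am eating is purple", read as a gap

assert fde_and(liar, fde_not(liar)) == {"t", "f"}  # the glut persists
assert fde_and(purple_bread, {"t"}) == set()       # the gap absorbs truth
```

Note how conjunction with a glut stays a glut, while conjunction with a gap stays a gap, which is exactly the behavior the bread and liar examples rely on.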

This is incredibly relevant to how you are trying to resolve this within your epistemology:

But after determining the d/a distinction, I can then go back and ask myself, "Is the PoI something I can applicably know?" No; using the theory from there, I determine I cannot applicably know the PoI. Therefore it's a distinctive theory that cannot be applicably known, and is unneeded. At best, it would be included as an induction.


You are subscribing your epistemology to LEM and PoN, most notably as described by classical logic. This rules out the actual applicable usages of paraconsistent, fuzzy, and first-degree entailment logic. My epistemology still accounts for these within their own respects.

Thus I would conclude using the PoI that what is distinctively known is what we discretely experience, and I would add the claim that we could discretely experience both something and its negation at the same time.


I don't think you can posit this unless you are redefining discrete experience: the subject concept necessitates, categorically, that it be distinct, which necessitates that one cannot experience both something and its negation at the same time in the same place. As you described it, technically speaking, that is possible. I could experience a blue car and a not-blue car at the same time as long as they are not in the same place. My main point here, in relation to predicate style logic PoN, is that the subject can only posit your claim here if they either don't recognize the contradiction in the predicate or they convinced themselves of some sort of wiggle room (which requires, I would argue in your case, some refurbishing of the term "discrete experience").

What I could do is form the PoN to make the proof cleaner, but it is not required.


You can most definitely posit it without classical Aristotelian logic, which uses PoN and LEM, but that's not what I am referring to. You cannot help but use predicate-style PoN to determine discrete experience.

Without the d/a distinction, there is a problem that the PoN must answer. "Just because I have not experienced an existence and its contradiction at the same time, how do I know I won't experience such a thing in the future?


You could, if it isn't in the same place at the same time. But let's refurbish the claim to append "at the same place" into your inquiry here to try to steelman it: the concept of space and time (in terms of their overarching references, and not different theories out there such as string theory) would be contradicted by a predicate which states "Space/time contains A and not A in the same place at the same time". This is why it is important to note the necessary inseparability of time and space, for the sentence "Space contains A and not A" does not violate predicate-logic PoN, nor does "Time references A and not A at the same time": it is only when combined, in the union of the two concepts, that the predicate contradicts the subject concept. I don't see how this is a problem for PoN as I've described it.

You have never observed these contradictions, but as noted earlier, how do you explain that this gives you knowledge that it is not possible somewhere in reality?


It doesn't. Firstly, I am deriving the possibility of reason, not constructing rationality. Secondly, there are applications, rationally, for logical systems that do not use LEM, and even some that do not use traditional PoN (from classical logic). What isn't possible is to sincerely posit a claim wherein the predicate contradicts its subject concept. It is only possible if one refurbishes the terminology or simply doesn't recognize the contradiction: that's the only way. At this point, I am not attempting to construct a logical system I deem most rational for a given context; I am noting the possibility of reason and therefrom asserting its fundamentals.

Then this is absolutely key. If there is any doubt or misunderstanding of the idea that we discretely experience, that has to be handled before anything else. Please express your doubt or misunderstanding here, as everything relies on this concept. You keep not quite grasping the a/d distinction, and I feel this is the underlying root cause.


I think I understand that we discretely experience. However, that doesn't necessitate that it is fundamental. We utilize predicate-logic-style PoN to derive that we discretely experience. Someone could possibly deny this by introducing "wiggle room" into the concept of discrete experience to accommodate applicable non-LEM scenarios or even non-PoN scenarios. Maybe my use of PoN is misleading; maybe I need to use a different term?

Without applicable knowledge, how can your theory compete with someone who uses a completely different theory using different definitions for words and concepts?


They would be using mine fundamentally. I cannot say the same for classical logic, fuzzy, etc. I can't say the same for every definition of PoN, LEM, law of identity, etc. I am speaking much broader than I think you may believe me to be.

Yes, absolute truth outruns proof.


That's not quite what I meant, but I agree. I'll refrain from further elaboration to keep this shorter and more relevant to your epistemology.

A potential infinite regress is an induction. You can deductively ascertain this induction, but it is an induction. Potential means, "It could, or could not be." If your theory has a potential infinite regress, you have an unresolved induction as the base of your argument.


Every valid epistemology must have an absolute as its point of derived contingency. Mine is no exception. A potential infinite regress is not an induction. Again, uncertainty is not equivalent to an induction. The absolute wherefrom contingency arises is ultimately reason in my epistemology. A potential infinite, of the type I am describing, is not claiming "it could, or could not be"; it is claiming that a particular finite operation would continue infinitely if given sufficient resources. For example, counting the positive integers starting at 1 is a potential infinite. This claim is not an induction whatsoever. I deductively know that, given sufficient resources, counting the positive integers would be a finite operation occurring infinitely: there is no uncertainty in the claim here, only uncertainty in whether there are sufficient resources or not (which is not the actual claim). This is clearly different, I would say, from an induction, such as if I were to claim that because I've seen white swans my whole life, all swans are white. Any sort of epistemology which grounds itself in an induction is faulty.
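As an aside, the counting example has a direct computational analogue: a generator is a finite rule whose every step is deductively determined, yet which never completes. This is a hypothetical sketch; the names are mine.

```python
# A sketch of a "potential infinite": a finite, fully determined
# rule (add one) that would run forever given unbounded resources.
# Nothing about any step is uncertain; only resources bound it.
from itertools import islice

def positive_integers():
    n = 1
    while True:  # there is never a "last" positive integer
        yield n
        n += 1

# Every finite prefix is deductively determined in advance.
print(list(islice(positive_integers(), 5)))  # [1, 2, 3, 4, 5]
```

The rule itself is finite and certain; only how far it is actually run depends on resources, which mirrors the distinction drawn above between uncertainty about resources and uncertainty in the claim.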

Mine contains no potential infinite regress.


I think it does. You can construct PoN and LEM based on my definition of PoN, but you cannot prove my definition of PoN without recursively using it. This is just like how, granted enough resources, you can never stop counting positive integers and claim you've hit the last one.

The key between us at this point is to avoid repetition. I fully understand that two arguments can be made, and eventually it may be that each side is unpersuaded by the other. If you feel you are repeating yourself, feel free to state, "I disagree because of this previous point," and that is acceptable.

I feel I understand your positions at this point, and they are well thought out. But there are a couple of fundamental questions I've noted about your claim that the PoN is fundamental that I think need answering. Neither is a slight against you; you are a very intelligent, philosophically brilliant individual, the best I have encountered on these boards. So, if you would like, either we can start a new thread addressing your knowledge theory specifically, or we can simply spend the next post only going over your theory from the ground up, without the d/a distinction. I leave it up to you!


I understand, and that is completely fair. If you would like to end the conversation in this discussion board here, that is totally fine! Sometime soon I will post a discussion board on my epistemology anyway. If you feel this post has been utterly repetitive, then feel free to simply respond stating that; there's no need to repeat yourself countering my claims herein if you think you would indeed be reiterating.

I really appreciated our conversation, and I look forward to many more! You are also a brilliant, respectful, and genuine philosopher, and I respect that. It may be that we just agree to disagree, and continue this conversation (if you are interested) on another discussion board in the future.

Bob
Philosophim June 11, 2022 at 11:20 #707682
Quoting Bob Ross
I completely understand the desire to prevent irrelevant derailments on the thread, and I can see how diving into my epistemology could do just that.


My concern was less with derailment than with not giving your theory its proper due while you're constantly comparing it to the d/a distinction. I've had time to build up the d/a distinction, and then we've drilled into it. You have not been given the time to build your theory up, but are building it while comparing. That makes it very difficult for me to evaluate your theory fairly while also trying to explain mine. In reading your reply, I see my suspicions were correct. Your definition of PoN was different from my understanding of it, and that's only because you haven't had time to let your own epistemology be explored and carefully constructed as I've had time to do here.

I do not mind at all exploring your epistemology here! Next post, feel free to get the last responses to the points I'll make here. I will not respond to them, but give you time to post your theory. You can use this spot as a draft if you would like before making your own post. Once I understand your theory, and get to ask my questions about it without the d/a comparison, we might come back to this later. You have had patience and curiosity with my proposal, the least I can do is return that favor. If this sounds like something you would like, I'll post my final points on the d/a distinction (for now!).

Quoting Bob Ross
To keep it brief, my point is that my use of PoN is not meant as a logical construct like those, and its precise definition holds no immediate favoritism on the battle between paraconsistent vs consistent logical languages. I am defining PoN in the form of predicate-logic:

"a predicate cannot contradict its subject concept"


Ah, I completely misunderstood this. I don't think this is called the principle of negation as often understood; it is simply a consequence of language construction. First, let's break down what a predicate and a subject are. Feel free, of course, to amend my understanding of these definitions to fit your intention!

Subject - the "thing" being addressed in the sentence
Predicate - some type of assertion attributed to the subject in the sentence. An attribute, action, etc.

First, we can clearly see this is not more fundamental than discrete experience. This is a linguistic construct, whereas discrete experience requires no language and is the foundation for language. One must be able to discretely experience to define a subject, and within my theory, you are able to define an essential or non-essential attribute of said subject. This is essentially a predicate: a further breakdown of the discrete experience of a subject into more discrete component parts. The "thing" is currently running. The "thing" is red. But I don't have to note that it's running or red. The "thing" can exist as simply the discrete experience itself, unbroken and without any attributes but itself. Predicates are not required for subjects to exist.

Now if we are to note that properties are sub-discrete experiences of a subject, then by consequence we've constructed a system of distinctive logic that entails that a predicate is part of a subject. Thus we could propose that a predicate cannot contradict a subject, as that would mean we created attributes of a discrete experience that cannot exist on that discrete experience (the subject). But this does not predate the ability to discretely experience, it is built up from it. As such, "The predicate cannot contradict the subject" is not needed as a fundamental. It is a derived logic.

As for it being impossible for a predicate to contradict a subject, let's go further. What is the nature of a deduction? That the conclusion follows from the premises. This also means that the conclusion does not contradict the premises; that the predicates do not contradict the subject. An induction is a conclusion that does not necessarily follow from the premises. This also means one possible type of induction is a conclusion which does negate the premises! I believe Dan is running right now. If so, it is distinctively implicit that Dan may in fact not be running right now. I look at Dan, and applicably determine he is not running. So here I have an induction whose resolved conclusion is that the predicate counters the subject. This was something I distinctively knew and held, despite reality showing otherwise. How does your epistemology handle the fact that inductions also implicate a predicate that contradicts the subject?

Let's go even further. I applicably conclude Dan is running. But it turns out I made a mistake. It turns out this was Dan's twin, whom I was not aware existed. His name is Din, and he was the one running. Dan was also walking nearby with his back to us, and he turned around to let us know that was his brother when we yelled at "Dan" (who was Din) to turn around. Yet prior to Dan turning around, I distinctively and applicably knew that "Din" was "Dan" and that he was running. Barring the d/a distinction, was I not holding knowledge of a subject that had a contradictory predicate? Because the actual Dan was walking. In short, a Gettier-type problem. How does your epistemology handle this?

The d/a distinction does not require the principle of subject non-negation (PSNN?). This is because I can distinctively know inductions, which implicitly allow me to distinctively hold knowledge of a sentence that could, in application, have a predicate that contradicts a subject. Now, we can state that we distinctively know through deductions. This is true. But why should we hold to deductions over inductions? As I've noted, there is a hierarchy. But why is there a hierarchy? It is not because there is some necessary logical construct. It is because this logical construct gives us the best chance of survival, and of actually understanding the world in a way where we can control or predict its outcomes accurately. Again, I do not see the PSNN as a fundamental. A nicely derived logic, but not necessary for my epistemology.

Thinking further, someone could most certainly construct a distinctive knowledge that does not follow the PSNN. The construction of an all-powerful God is one. All three omnis make this God. Despite its being pointed out to a person that this would be a contradiction, the person simply adds another property to God: "God can do all things, including holding predicates which are contradictory to its subject." Are we to say they do not distinctively know this? No, they distinctively know this, despite the predicate contradicting the subject. We can construct a separate distinctive logical system which would show this to be a poor distinctive bit of knowledge to hold, but we cannot deny that this is what they distinctively know.

I think this is similar to your green circles example.

Quoting Bob Ross
It is perfectly possible to hold sincerely that something is A and not A without contradiction as long as the subject concept is not contradicted by the predicate


Except for the fact that there are contradictory predicates. But if the predicates are contradictory in themselves, how does this relate to the subject? In the d/a distinction, I can claim I do not applicably know of any thing that is both existent and non-existent at the same time. But I can distinctively create such a thing in my mind. Which means I can say "There is a thing which is everything and nothing at the same time," and have it be "possible" because I can create this in my mind. In your argument, these predicates do not contradict the subject. Whereas with the d/a distinction, I can demonstrate distinctively that such a thing is possible, but applicably, it is something we cannot know. I do not have to concern myself with a linguistic game of predicates and subjects.

Finally, I want to ask: if a subject can hold two contradictory predicates, why can it not hold a predicate which contradicts its subject? If a thing can have the predicates of both being there and not being there, then isn't the subject a contradiction in itself? Again, we can imagine such a thing distinctively. At best we can only speculate that such a thing could be known applicably. If I can distinctively create whatever subject with whatever properties (predicates) I want at any time, then doesn't that hold to the notion I've been stating this entire time? That is, distinctively, I can hold whatever system of logic I want. And I am not seeing the argument that convinces me that I cannot create a system of logic in which the predicate can contradict the subject.

Again, the only way to counter such a hold is with applicable knowledge. By asking them to show that such a being exists, we can escape the fact that we can distinctively know almost anything we want/are programmed to hold. In applicable knowledge we use deduction, but again, we use deduction not because we need to, but because it is more helpful to our survival and outcomes in life.

Quoting Bob Ross
"The bread I am eating is purple"

Well, I am not eating bread. So I am neither eating bread that is purple nor bread that is not purple, because I am not eating bread. Therefore it is neither true nor false. Imagine I am eating cereal and I claim:


I had to note that I don't believe this is the case. This is a combined sentence, and we can break it down.

I am eating bread.
The bread is purple.

Both are false; I cannot see this as being neither true nor false in application.

Quoting Bob Ross
"this sentence is false"

I could simply concede that the liar paradox outputs {t, f}, which is essentially the same thing as defining a liar paradox sentence as having a property of being contradictory (just like being green and not green).


Again, we can break the implicit combination down.

This is a sentence "This sentence is false."
The previous sentence is false.

That results in t, f. No paradox or indeterminacy. I would argue that when one cannot break a sentence down into t and f, that is a weakness of sentence construction, not a revelation of knowledge.

Quoting Bob Ross
You are subscribing your epistemology to LEM and PoN, most notably as described by classical logic. This rules out the actual applicable usages of paraconsistent, fuzzy, and first-degree entailment logic. My epistemology still accounts for these within their own respects.


I never claimed my epistemology ruled these logical constructs out. If anything, I've noted repeatedly that you can construct whatever logical system you want distinctively. Can those logics be used in application? If so, then they are fine. Again, I think this is a situation in which I do not fully understand your theory.
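As an aside, the "both true and false" value from first-degree entailment (FDE) that Bob mentions can be made concrete. Below is a minimal sketch in Python of FDE's four truth values, modeled as sets of classical values; the names and layout are my own illustration, not part of either epistemology:

```python
# First-degree entailment (FDE) assigns a sentence the *set* of classical
# values it receives: {} (neither), {t}, {f}, or {t, f} (both).
T = frozenset({'t'})
F = frozenset({'f'})
B = frozenset({'t', 'f'})  # "both" -- e.g. a liar-like sentence
N = frozenset()            # "neither"

def neg(v):
    # Negation swaps truth and falsity; "both" and "neither" are fixed points.
    out = set()
    if 't' in v: out.add('f')
    if 'f' in v: out.add('t')
    return frozenset(out)

def conj(a, b):
    # A conjunction is true iff both conjuncts are true,
    # and false iff at least one conjunct is false.
    out = set()
    if 't' in a and 't' in b: out.add('t')
    if 'f' in a or 'f' in b: out.add('f')
    return frozenset(out)

# A sentence valued B ("both t and f") is stable under negation:
assert neg(B) == B
# and conjoining it with an ordinary truth does not trivialize the system:
assert conj(B, T) == B
assert conj(B, F) == F
```

The point of the sketch is only that "outputs {t, f}" can be handled as a well-defined value rather than an explosion: the contradiction stays contained in the one sentence instead of infecting everything else.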

Quoting Bob Ross
Thus I would conclude using the POI that what is distinctively known is what we discretely experience, and I would add the claim we could discretely experience both something, and its negation at the same time.

I don't think you can posit this unless you are redefining discrete experience: the subject concept necessitates, categorically, that it be distinct, which necessitates that one cannot experience both something and its negation at the same time in the same place.


No, we just affirmed I could do this. Can't I say a thing is both green and non-green at the same time? That is indeterminacy. I can distinctively know this. Can I applicably know such an indeterminacy? So far, no.

Quoting Bob Ross
A potential infinite, of the type I am describing, is not claiming "it could, or could not be", it is claiming that a particular finite operation would be infinite if given the sufficient resources to continue. For example, counting the positive integers starting at 1 is a potential infinite. This claim is not an induction whatsoever.


I agree with your definition here. But we know this because the design of numbers allows it to be so. Such a description is not necessarily meaningful for every designed system. What we can discretely experience is potentially infinite. What we can applicably experience is potentially infinite. Any formulaic system with an X variable will always be so. My question to you, so I understand better, is whether your foundation is finite. The system of numbers is formed by symbols, addition, subtraction, and, for our purposes, base-10 rules. Does your epistemology have a solid and unquestionable base that does not require a potentially infinite regress?
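Bob's example of a potential infinite, counting the positive integers starting at 1, can be pictured with a generator: at any moment only a finite prefix has actually been produced, yet the process has no final step. A small illustrative sketch (the choice of ten elements is arbitrary, just standing in for "sufficient resources"):

```python
from itertools import count, islice

def positive_integers():
    # A potential infinite: each request yields the next integer.
    # The process never completes, but at any moment only finitely
    # many integers have actually been produced.
    yield from count(1)

# "Given sufficient resources" we still only ever realize a finite prefix:
first_ten = list(islice(positive_integers(), 10))
assert first_ten == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# There is no "last positive integer" to reach: the generator is never exhausted.
```

This matches the claim in the quote: the potential infinite is not an induction about what could or could not be, but a description of a finite operation that would continue without bound.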

Quoting Bob Ross
This is why it is important to note the necessary inseparability of time and space, for the sentence "Space contains A and not A" does not violate predicate logic PoN, nor does "Time references A and not A at the same time": it's only when combined, the union of the two concepts, where the predicate contradicts the subject concept.


I don't believe this is correct, Bob.

Space contains A and not A
Time references A and not A at the same time
Therefore space and time contains A and not A, and references A and not A at the same time

So again, we have contradictory predicates to a subject. What might help is showing a genuine situation in which a predicate contradicts a subject, and why, without using the d/a distinction.

Quoting Bob Ross
Mine contains no potential infinite regress.

I think it does. You can construct PoN and LEM based off of my definition of PoN, but cannot prove my definition of PoN without recursively using it. This is just like how you can't ever stop counting positive numbers granted enough resources and claim you've hit the last positive integer.


I think I clearly did, using discrete experience. If you discretely experience within another discrete experience, then that sub-discrete experience is part of the bigger one. But we could also discretely experience that the sub-discrete experience is not part of the bigger one. Perhaps it is a parasite, or a foreign entity that we find not necessary to the greater experience. If the predicate cannot contradict the subject, can the subject contradict the predicate? What happens, then, if in my mind I reverse what the subject and predicate are? Claiming a predicate can never contradict a subject is a logical rule you have constructed after understanding what a subject is and what a predicate is. It is not foundational.

Quoting Bob Ross
If you would like to end the conversation in this discussion board here, that is totally fine! Sometime soon I will post a discussion board of my epistemology anyways.


Coming back to this, I think it is simply necessary that you construct your epistemology from its foundation at this point. I believe I don't fully understand your theory, as you've noted you define things differently from how I think you do. Coming from me, I understand. :) So until you really have room to build your theory, I think we'll be talking past one another. Again, feel free to respond to the points I have made, and I will let you have the last word on those. Then, if you would like to continue, feel free to construct your epistemology here, even as practice before posting it in its own thread. I will address it without using the d/a distinction. If we get to a point where you and I both feel we understand your theory, then we may go back to those final points that you'll make. Great discussion as always, Bob!

Alkis Piskas June 11, 2022 at 15:40 #707710
Reply to Philosophim
Quoting Philosophim
I've been having a fantastic discussion with a member on this forum

You mean, simply, "I had a fantastic discussion ..." :smile:

Quoting Philosophim
how we "know" knowledge.

Besides being a pleonasm and a circular question, knowledge is acquired, not known.
Knowledge consists of facts, information and skills acquired through experience or education.
I had no idea about baseball. Then a friend of mine explained to me how it is played: its rules, points, etc. So I got some information --which I had to process in my mind (vital)-- and now I know a little about this game. If I had watched a baseball match --which I haven't-- I would have more knowledge about the game. Still, far less knowledge than a baseball player. There are levels of knowledge. They are built --and knowledge is built-- by acquiring more and more information and getting more experience with an object or subject.

This is basically the process of acquiring knowledge, but, here too, there are levels of elaboration, complexity and details in its description/explanation, which have to do, for example, with how the mind processes facts. But this belongs to some other topic ...

***
I know of course that this is far from being an actual reply to the whole topic, which BTW sounds quite interesting, but it is too much for me to get involved in. I just brought up some basics of knowledge.
Philosophim June 11, 2022 at 18:12 #707795
Quoting Alkis Piskas
Reply to Philosophim
I've been having a fantastic discussion with a member on this forum
— Philosophim
You mean, simply, "I had a fantastic discussion ..." :smile:


As you'll see in the last reply prior to yours, I'm still having a nice conversation with Bob.

Quoting Alkis Piskas
I know of course that this is far from being an actual reply to the whole topic, which BTW sounds quite interesting, but too mauch for me to get involved.. I just brought up some basics of knowledge.


No offense, but if you aren't going to read the OP, you have no idea what you're talking about and are not offering anything useful. Feel free to read it and bring your full criticism and knowledge to bear on the subject. We'll chat then.
Alkis Piskas June 12, 2022 at 10:48 #707960
Quoting Philosophim
As you'll see in the last reply prior to yours, I'm still having a nice conversation with Bob.

I see. So you simply mean, "I have a nice conversation ..." :grin:

OK, I see that you don't care about semantics and/or grammar, which is of secondary importance of course, nor about how knowledge is acquired, which is of primary importance because it concerns your topic, and which you nevertheless ignored or avoided discussing, most probably because you don't know what knowledge is and you don't want to know.

Anyway, posting a topic also requires being a good host and thanking the people who respond to it, unless they are offending you. Which was not my case (as you stated).
Anyway #2, even semantics and grammar, let alone definitions and descriptions of concepts --such as "knowledge"-- are useful. Only nonsense is useless. Which was not my case either, I think.

Anyway, sorry about my intervention in your topic. My bad. I've just chosen the wrong person to respond to.
Philosophim June 12, 2022 at 11:12 #707973
Quoting Alkis Piskas
As you'll see in the last reply prior to yours, I'm still having a nice conversation with Bob.
— Philosophim
I see. So you simply mean, "I have a nice conversation ..." :grin:


No, I am having. As in ongoing, present tense. The conversation has not ended yet.

Alkis, you're being a troll, and it's obvious. Anyone who doesn't read the OP, in which I go over how knowledge is acquired, and then tries to critique something they haven't read, is an ignorant person who is wasting my time. I thank people who are willing to engage with the OP and legitimately challenge the views here, not people like you.
Bob Ross June 12, 2022 at 16:24 #708006
@Philosophim,

First and foremost I want to thank you for a wonderful discussion (as always)! I appreciate you taking the time to respond to the points I made that had no relevance to your epistemology, and for being willing to discuss it in this forum. However, as you suspected, I don't think you quite understand my epistemology (and that's no fault of your own), nor do I 100% understand yours. I think it is best if we just pause the conversation here and reconvene after I post my epistemology. Then you will have a fair grasp of what I am trying to convey, and we can revisit our conversation about your epistemology. Then we can juxtapose them and explore them more adequately. With that being said, I think it is best that I leave your last post with the last word: although there is much I would like to say, it will all be addressed properly in my epistemology post (once I get the time to post the whole thing).

I look forward to our next conversation,
Bob
Philosophim June 12, 2022 at 23:49 #708113
Quoting Bob Ross
First and foremost I want to thank you for a wonderful discussion (as always)!


I feel the same, Bob! This is hands down the best overall discussion I've had with another person on these forums. My respect for you cannot be overstated. You've given me a conversation I tried to find for years. Even if this never goes anywhere beyond these forums, that has been enough for me to feel fulfilled. I look forward to your epistemology, and I will seek to give it the respect and thoughtfulness you have shown mine.

Thanks again,
Philosophim