Knowledge without JTB
The theory of knowledge that serves as the foundation of philosophy is flawed. Like many incorrect answers, the popular theory of knowledge leads to an infinite regress and assumes that all knowledge must exist in the mind of an agent. And lastly, the JTB definition of knowledge does not correspond to the facts, insofar as it cannot account for the errors that are present in any body of knowledge. To define away this objection is to deny the reality of human error.
A thing that is knowledge can exist as the content of a book. The book does not have to believe it or justify it, only contain it. So, knowledge may exist without being believed or justified.
Knowledge 'ought to' be true, but often it is not, because of the mistakes we make in our understanding of the facts.
Change my mind.
Comments (252)
This reminds me of Russell's famous conundrum: "The present king of France is bald."
Anyway, the most charitable reading of your post suggests that you are dissatisfied with the JTB theory of knowledge because it does not fully reflect the way the word "knowledge" is used in the natural language (English language, at least). This would have been a valid objection if an English language dictionary gave "justified true belief" as the only definition of the word "knowledge." Like many words, the meaning of "knowledge" as exemplified by actual use is heterogeneous and will not be captured by a single, compact definition. But JTB was not meant to serve as a general definition - it was to be a technical definition for use in analytical epistemology. So we can talk about whether it is a useful definition (and many have challenged it before you, most famously, Gettier).
JTB is all about how to differentiate between knowledge claims...
You're conflating one's statement that something is true, or is a piece of knowledge, with the definition of knowledge. When someone states that he or she knows that something is the case, as in JTB, someone else may come along and ask, "How do you know?" - and it's at this point that you demonstrate your knowledge. If it turns out that you cannot demonstrate, i.e., justify your claim, then it's not knowledge. This is why justification is important, because people sometimes think their claims are knowledge, but when examined closely we see that they're not.
So the idea that people make mistakes, as you say, is built into the idea of what it means to have knowledge. Doubting or being skeptical is built into the language of epistemology. If it weren't, then we could infer that someone has knowledge simply from their claim to knowledge. So when someone claims to know that such and such is the case, we want to know how they justify their claim, because people often do make factual mistakes.
So epistemology does in fact account for error. The definition is quite different though, viz., it says that for a belief someone claims is true to count as knowledge, it must be properly justified. There is no error in the concept, only in people's claims. Again, the two are quite different.
Quoting Cheshire
Knowledge is made up of beliefs; they are particular kinds of beliefs, viz., beliefs that are true and justified. A book may contain beliefs that fit this definition, so in that sense a book may contain knowledge, but only in so far as the book represents the beliefs of someone.
Knowledge is a success word; it accomplishes a purpose, that of being true. Knowledge is not a matter of simply saying something is true, it requires that the belief be correct.
Epistemological statements occur within a rule-governed activity, viz., language.
If it ain't true, then it ain't knowledge.
I greatly appreciate the charitable read and I agree. So long as JTB isn't meant to actually describe the real world and is only maintained for the purpose of an exercise I suppose I no longer object. Thank you for the reference to Gettier; I'm aware my arguments or causal assertions must appear quite naive.
Do you think you could produce an example of these two different types of knowledge? The general and the technical?
I suppose I'm agreeing with Gettier in a sense, but avoiding his objection. He's saying hey your system doesn't work because it can produce mistaken knowledge. I'm saying some knowledge is mistaken.
Quoting Sam26
Yes, except it isn't conflation if it is accurate.
If some knowledge is always held in error, then it cannot be that all knowledge is true.
Quoting creativesoul
You have never known something and then later found out it was incorrect?
Quoting Sam26
I think knowledge intends the success of being correct, but realistically it turns out to be rational conjecture that hasn't been proven wrong yet. So, if you wanted to augment JTB with "or F," then I would be satisfied for today.
Which I'm claiming is always going to be the case, so why not just acknowledge that some knowledge will eventually be proven wrong.
And why “acknowledge that some knowledge will eventually be proven wrong” when we can instead acknowledge that some things we don’t actually know? The latter seems a far more reasonable approach.
I'm arguing that without the certainty of what is true or false apart from your "thinking" it's true or false, we're left with a description of knowledge that's too idealized to be practical beyond philosophical exercises. I don't think philosophy ought to be limited by its own definitions.
Quoting Michael
I mean, yes, that's totally rational. But, technically problematic because we don't know what we don't know so to speak. I think instead of playing these word games we cut to the chase and say our knowledge contains our errors and inaccuracies. And until such a time as all errors from our knowledge have been eliminated we cannot and ought not hold that all knowledge is true. Really, it's a better mirror to how knowledge actually seems to progress. We don't so often establish the all-determinate truth of a theory, but rather find out where the error lies and improve upon it. I think Socrates would like it.
It's always the case "that sometimes....."
Because we cannot tell the difference between what we actually know and what we think we know until it's proven wrong.
It doesn't follow from this that we should talk about "wrong knowledge" rather than "not knowledge".
I would say that if I think I know that your name is John and if I find out that it's not actually John then it's better to say that I didn't know that your name is John than to say that I knew but was wrong.
Really? I can understand "I guessed your name was John and it wasn't", but if you thought you knew, then you must have had some reason; then it makes more sense to say what I knew was incorrect. To say "I thought I knew" implies a process which made you think it was indeed knowledge. It goes back to my assertion that there is no observable difference between "what I know" and "what I think I know" at any given moment, so I cannot exclude the latter from my description of knowledge. I don't feel compelled to concede the matter, but I'm not sure how to expand on the idea.
There's also no observable difference between what's true and what's false at any given moment, but we don't then say that something is true just because we believe it to be so.
We can believe that something is true even if it isn't, and we can believe that we know something even if we don't.
Don't we? Every time I say something is true is just because I believe it is true. Otherwise, I'm not properly truthing.
There's a difference between "I say that X is true because I believe that X is true" and "X is true because I believe that X is true". And there's a difference between "I say that I know that X is true because I believe that I know that X is true" and "I know that X is true because I believe that I know that X is true".
I'm saying that the second in each case is false.
It goes without saying, I thought? It's absurd to claim one's beliefs change the truth of the matter. I understand how it could be read that way, but I don't understand why it would be read that way.
I do not agree with Sam regarding what counts as justified belief. It does not require being argued for (the act of justification), on my view.
Then why would we define our products in such an ideal sense and still expect them to correspond to the facts? If knowledge is always true, then how is it our knowledge changes?
I'm accounting for the beliefs that have been conflated with truth. I acknowledge the conflation is done in error, but because it persists it should be considered a part of our knowledge.
No one has any problem saying knowledge is true, but suggest it can also be false and you're burning down the house.
It has to be arguable, but doesn't have to have been argued?
The suggestion is nonsense, and leads to self-contradiction.
Compelling.
You claim that one can know a false statement.
That is nonsense.
"Tom Cruise is president" is false.
According to your logic, we can know that statement.
"The sun revolves around the earth."
Both of those were once thought to be true, but never were. They were called knowledge because they were believed to be true, and the evidence at our disposal at the time supported the ideas.
According to you, they are still knowledge.
You're conflating being justified true belief with being called such. The two are not one and the same.
:wink:
No.
It always has to be well-grounded, and it doesn't always have to be argued for.
Did you mean to say this the way you said it?
Yup.
Oh, I would say it was knowledge and has since been falsified. It doesn't make sense to have falsified knowledge. Knowledge intends to be true. I thought you would claim it never was knowledge, but was just treated exactly the same as if it were knowledge. But, remember, it's not actually knowledge, only completely indistinguishable from knowledge (at the time).
Sure we can. It's the difference between belief and knowledge. The former presupposes its own truth, and the latter is true.
There's the nonsense and/or self-contradiction.
:wink:
Knowledge is not the sort of thing that is capable of intention.
No.
I have believed something and later found out that I was wrong. I believed that things were a certain way, and it turned out that they were not.
That will set you straight.
As I've explained before, this is not quite accurate, i.e., that I believe that justified belief is what's argued for. What I disagree with Creative about has to do with justification being a pre-linguistic concept. Justification can happen in several (linguistic) ways; argument, inference, and proof are just some of those ways. Justification is used in other ways, viz., through sensory experience, linguistic training, testimony, and it can be tautological. So to say that I think JTB only happens through argumentation is a misrepresentation of my epistemology. When I use the word argument, I'm speaking in terms of logical argumentation. However, for me justification goes beyond logic. That is to say, logic is only one way of justifying a belief.
As I understand Creative he wants to say that knowledge is something that can occur apart from language, i.e., that prelinguistic beliefs can be justified. It happens in our metacognition according to Creative. However, this makes no sense to me. It's akin to saying we can have a private language, which is nonsense, at least from my perspective. Epistemology is a linguistic endeavor, not a private endeavor. To see this one need only look at the role doubt and skepticism play within our epistemological constructs. It happens necessarily in a social environment.
I don't disagree that there is a metacognitive reality, I just disagree about what's going on in that private world. Creative wants to bring in things that only happen within a linguistic and social context. Specifically he wants to bring in the idea that rule-following, which is necessarily linguistic, and necessarily part of epistemology, can happen privately. This follows from what he says because of his idea that justification can happen to prelinguistic humans.
Not my idea... Actually against the position I've been arguing for...
:yikes:
Yes, Gettier's counterexamples are where all three of the JTB criteria seem to be satisfied, and yet the result doesn't meet our intuitive, pre-analytical notion of knowledge. Your examples are where our intuitive notion of knowledge does not meet the JTB criteria. How damaging are such attacks? That totally depends on the context.
Like I said, if the goal was to just give an accurate account of how the word "knowledge" is used in the language, you probably can't do better than a good dictionary, together with an acknowledgement that such informal usage is imprecise and will almost inevitably run into difficulties with edge cases like Gettier's.
But philosophers define their terms in order to put them to use in their investigations, so I think the best way to approach the issue is not to latch onto one bit taken out of context, but see what work that JTB idea does in actual philosophical works. Maybe the JTB scheme is flawed because it doesn't capture something essential about knowledge, or maybe the examples that you give just aren't relevant to what philosophers are trying to do. I haven't done much reading in this area myself - I am just giving what I hope is sensible general advice on how to proceed.
Then I don't know what you're talking about, and I don't think anyone else does.
Gladly, if you'll set out the difference between a belief of knowledge and knowledge.
I'm not so much interested in how it's used 'in language', but rather how it's used in reality. I know exactly zero people that actually consider an idea based on a JTB scheme or accept an idea because it fulfills one. And before you object, I mean to say especially philosophers, when I say people. My primary reason for making JTB a target is just because it's so well guarded from criticism and taught as if it were a law of thought; when, as Gettier showed in nearly satirical fashion, the emperor has no clothes.
I suppose the way to proceed is abandoning the notion that there's a set of criteria which knowledge contains and disqualifies all else, or changing JTB, or changing the philosophical definition of knowledge. It's a bit Gettierish, but saying all knowledge is JTB or Not would technically silence my objections.
I would concur that a claim to knowledge is not equivalent to having knowledge..
It's one and the same difference.
I've already set it out. Have a look for yourself.
I think that how we're using the term "justify" is the root of our misunderstanding.
When one justifies his/her claims, they provide the ground(s) to another.
All I'm saying is, and you've agreed with me before, that one need not provide their ground to another in order for the belief to be well-grounded. Being well-grounded is the criterion for being justified. It is not providing that ground to another.
Right?
Maybe this will help. The way I view things: There are two types of knowledge, the ideal, purely conceptual standard by which all practicable knowledge is appraised—I’ll call this “ontic knowledge”—and that which is within our capacity to hold and engage in—I’ll call this “subjective knowledge”. The former is that by which the latter is measured against as means of appraising the latter’s validity.
A leading issue with knowledge is with the notion of truth. To believe that something is is to believe one's notion/idea/conceptualization/etc. to be true (other types of belief are however possible). However, so doing, of itself, does not signify that any given belief of what is true is in fact true/accurate. In these statements there are two implicit concepts: in parallel, an ideal and conceptual understanding of truth—call it “ontic truth”, or the factual state of being wherein that believed accurately conforms to that which is real—and, secondly, our awareness-based appraisals of what is and is not true—call this “subjective truth”, or the subjectively, and sometimes tacitly, made appraisal that that which is maintained accurately conforms to that which is real. It might be subtle, but the two concepts are distinct.
What most implicitly interpret in JTB is the notion of ontic truth—which, by its very idealized property of being ontic, or factual, cannot ever be mistaken in any way. And it is this notion of ontic (factual) truth which brings about our ideal notion of ontic knowledge. Yet this can only be a conceptual model of that which is aspired for; that which “knowledge intends” as you’ve mentioned. Yet the knowledge in this latter statement is not the conceptual standard of ontic knowledge—which is ontically true belief that can thereby be justified upon request—but is, instead, the only form of knowledge that can be had in practice: subjective knowledge.
Subjective knowledge, then, can be defined as: a notion that is believed to be true and which can be justified at will in so being true. If that which is believed is in fact ontically true, then it will indeed accurately conform to that which is real. Yet this is where ontology plays a crucial role: Where that which is real is itself factually interconnected in coherent manners (for example: physics, chemistry, biology, and awareness are all coherently related in some manner—this despite our lack of full understanding regarding these coherent interconnections), then there will always be means of justifying that which is in fact real. Where contradictions are found in one’s justifications, for one example, this will then illustrate that one cannot account for what one believes to be true, for one cannot provide how it accurately conforms to what in fact is real.
The just stated would take a lot to unpack—particularly in regard to the nature of reality (an aspect of ontology) and to the nature of valid, ontology-contingent justification in general (an aspect of epistemology). Doubtless there would be much contested in any such account, but I yet find the overall relation to reality and to reality-contingent justification to be rather intuitively valid for most, if not all, people.
So there’s JTB, our conceptualized ideal form of knowledge (which cannot be had in practice unless one were to evidence one’s belief of what is true to be infallible); and there’s validly justifiable believed truth—I’ll term this JBT—the only form of subjective knowledge possible to hold in practice when lacking truths that have been demonstrated to be perfectly secure from all possible error (i.e., when lacking truths that have been infallibly demonstrated to so be).
Now, for all practical purposes, our knowledge that the sky is blue, as one example, is absolute (also that 1 + 1 = 2; etc.). But these instances of knowledge are not (perfectly) infallible in technical philosophical jargon; their truth is not proven to be perfectly secure from all possible error via means that are themselves perfectly secure from all possible error.
More concretely, though, the way I view things is that those people that once asserted knowledge of the sun circling the Earth can, presently, be confidently stated to not have held knowledge of this. This is so because their JBT did not conform to JTB; their subjective knowledge did not conform to ontic knowledge.
When we say, “I thought I knew,” we affirm that we once held our JBT to be an instance of JTB—but that we were wrong in so thinking. (It’s telling to me that we don’t say “I believed I knew”—though we can say “I believe I know”. I’m thinking we don’t say the former because it’s redundant without rhetorical purpose and because upon discovering our mistake we acknowledge it to be due to faulty justifications then held (our beliefs of what is true, of themselves, not being at fault; for they are not knowledge in themselves). We can say the latter because, until we consciously discern we hold justifications for what we believe to be true, we do not hold a conscious awareness of the given belief being knowledge—though we can intuitively sense that it is. Hence, we can believe we know.)
To sum things up: Any validly justifiable belief of what is true (JBT) can well be an instance of a belief that is ontically true and thereby justifiable (JTB). If inconsistencies are lacking, then there’s no justification by which to assume that an instance of JBT is not an instance of JTB. Nevertheless, until we can infallibly prove ontic truths (something which fallibilists will uphold cannot be done by us), we could, hypothetically, be mistaken in upholding any instance of JBT to be JTB. But: until evidence presents itself to the contrary, because all (or at least most) of our JBTs could well be instances of JTB, we then are justified in proclaiming that we hold knowledge (JTB).
I’m still working through some of the connotative facets of this myself—making a concept simple and unambiguous from multiple vantages is sometimes harder than it should be. However, I again find that this outlook does justify why the guy who “knew” that the Earth is flat didn’t in fact hold knowledge of this: we currently have ample means by which to evidence that his JBT did not conform to JTB.
So JTB stays, imo, at least in the ways just outlined.
--------------
I'm appending this in attempts to be clearer: I understand and agree with all our held knowledge being fallible, and that we thereby (to incorporate the semantics I previously used) can only hold onto JBT that has so far not been falsified in being JTB—and which, thereby, can very well be JTB (as here interpreted: ontically true belief that is thereby justifiable).
A badge of honor. The price of novelty.
Many more do today than did a decade ago. What I'm talking about hasn't changed much at all.
Knowledge is a word, and language use is its reality. It's not like there is some celestial dictionary in which the "real" meanings of words are inscribed once and for all. Knowledge is what we say it is. So one way to approach the question is to do as linguists do when they compile a dictionary: see how the word is used "in the wild." Philosophers and other specialists extend the natural language in coining their own terms, which they can do in ways that narrow the colloquial meaning or diverge from it. However, it is considered to be a bad and misleading practice to diverge too far, in effect creating homonyms.
While @javra attempted a conceptual justification of the JTB knowledge, I'll stick to natural language for a moment. How much does the JTB knowledge differ from common sense knowledge? One thing you can say about the JTB definition is that, at first glance, it does not appear to be an operational definition (this parallels both your critique and @javra's notes above). If you wanted to sort various propositions into knowledge and not knowledge, you could plausibly use the first two of the JTB criteria (setting aside for a moment legitimate concerns about those two), but you cannot apply the criterion of Truth, over and above the criterion of Justification. For how do you decide whether a proposition is true, if not by coming up with a good justification for holding it true?
But think about what happens when we evaluate beliefs that we held in the past, or beliefs that are held by other people. They are Beliefs, and they could be Justified as well as they possibly could be, given the agent's circumstances at the time. And yet, when you consider those beliefs from your present perspective, you could judge the Truth of those beliefs differently. And since it would not be in keeping with common sense to call false beliefs "knowledge," it seems that there is, after all, a place for the Truth criterion.
Quoting Cheshire
Well, how familiar are you with contemporary epistemology? Even from a very superficial look, it is hard to see where you got this idea - see for instance SEP article The Analysis of Knowledge.
Well, that is excellent news. Tell me, do you believe JTB is the best description for knowledge in a non-general sense? I know you can justify it, but I'm curious as to whether you believe it.
The confusion may be in the following: I learn through the language-game of epistemology, i.e., what it means to justify a belief. Once I learn it in the proper setting, then I'm able to apply it privately. I don't learn it privately, but I can apply it privately. Just as I learn mathematics within the language of mathematics (socially again), and then I can do it privately.
It's in the private setting, after I learn it in a social setting, that I don't have to state it. I know what it means to justify, so in this sense I don't need to state anything. Unless someone asks for the justification, then I can give it. The social, or the language part comes first though.
Are you saying that it can be done totally in private? Just trying to clarify.
I am ambivalent about it. The advice that I gave you about seeing how it works in a philosophical context is the advice I would take myself. I haven't read enough, haven't burrowed deep enough into surrounding issues (partly because I didn't find them interesting) to make a competent judgement.
This idea that Gettier somehow showed that JTB is flawed is just not the case. It's as if Gettier performed a sleight of hand, and people think it's an actual picture of reality. It's true that some philosophers think this, but I would consider that all Gettier pointed out is the difference between a claim to knowledge, as opposed to actual knowledge. So if I make a claim, and that claim appears to be JTB, but in the end turns out to be false, then it's simply not knowledge. There is nothing difficult here. No amount of thinking something is JTB amounts to something actually being JTB.
There are very few absolutes when it comes to JTB. Much of the time what we believe is justified is based on what's probable, not what's absolute. I can perform all the experiments in the world that confirm a theory, but that doesn't mean that that one chance in a million cannot occur and flip the theory on its head. It depends on how you take your theory, of course: if you understand that it's just based on probability, then it won't have much of an effect on your theory. However, if you think that what you know is an absolute, then it might turn your theory on its head.
There are a lot of variations, but I would say that JTB fits about as well as one might expect, given how we use the term.
Quoting Sam26
One cannot provide the ground of a belief to another privately. Providing ground is existentially dependent upon language. Language is social. That's irrelevant to the point being made.
I'm saying that one need not provide the ground in order to have the ground. Providing the ground doesn't matter at all with regard to the quality of the ground. It's the quality of the ground that determines whether or not the belief is a justified belief. This is obvious. Not all instances of offering one's grounds result in us concluding, saying, and/or recognizing that the belief is justified. If offering one's grounds justified one's belief, then all belief would be justified by virtue of the person offering the ground. That's just not the case.
This is common sense, I would think.
Yes, we seem to agree here, providing you're using ground as a synonym for justification, as I am.
Quoting creativesoul
Which seems to be another way of saying the following:
Quoting Sam26
I can have the ground without stating the ground, but learning the ground is social.
It may be obvious to you, some of this, but I'm still trying to get clear on how you're using the words ground and justification. For me, to say a belief is well-grounded is essentially the same as saying, the belief is justified. Are you also thinking of a grounding as in bedrock?
Is it?
Stating the ground is social. Learning another's ground is social.
Fire causes discomfort when touched. That doesn't require language to learn. Is it not the ground for believing that touching fire caused pain?
Well, shouldn't JTB be able to meet its own criteria? If you can't believe it, then it isn't knowledge, right?
When one justifies their belief to another, we say that that belief has been justified. We do not say that that belief has been well-grounded. We say that it is well-grounded, for we've just come to realize that. It was already well-grounded prior to another justifying it to us.
The point here is that providing the ground does not make the belief well-grounded. It shows that it is.
If I recall, it wasn't simply a matter of knowledge being subject to time, but rather a case where the JTB criteria were met but the matter was still found not to be knowledge. I think the sleight of hand is ignoring that our understanding of things is the result of what we know and what we know in error. The 'truth' element of JTB makes J&B largely irrelevant. And it creates a concept of knowledge that is untenable when it's placed in the context of humans subject to error. To me it seems self-evident, and if you prove it wrong then you prove it right. It's not my theory of knowledge anyway. Karl Popper laid it out as the basis for critical rationalism, which is really an excellent approach to revisiting quite a bit of tired dogma.
There are causal beliefs. For example, my belief that snakes are dangerous was caused by the bite of a snake. But I would take issue with the idea that the cause is a ground or justification, as in an epistemological ground. Why would you think that causal effects are a grounding? Moreover, the answer to the question of why I believe something may take into account both causality and reasons/evidence, but there is a big difference in terms of epistemology. If a cause is the same as a justification, then we can justify all kinds of weird things. When I talk about justification or a grounding, I'm talking about reasons/evidence, and I think most philosophers are talking about reasons/evidence.
Yes, I understand that, but what I said still holds. For example, let's say that I see what I think are five red barns in a field at time T1. I justify my belief based on my general belief that my sensory perceptions are generally accurate. So I believe that I'm justified in believing that I see five red barns, i.e., JTB. However, later at T2, I see that they aren't barns at all, but something else. So the question arises, were you justified (justified in the sense that you have knowledge) at T1? The answer is, no. Why? Because the justification was not warranted in that instance, because of what we found out later.
The error idea is a point in my favor. Basically it says that the instance above is only probable knowledge, i.e., it takes into account that I could be wrong about my claim; and we know this based on past experiences. If you were to ask the person who claims to see the five barns, "Is it possible you're wrong about your claim?" - they would probably respond, "Yes." It often happens that people think or believe they're justified when they're not. Again, because one thinks one is justified, and has indeed followed the rules of justification, that doesn't mean they are justified. There is a difference between the definition of knowledge, which necessarily is JTB, as opposed to your belief that you have knowledge. One is true by definition, the other not.
It doesn't make JTB untenable. In fact, it just shows that not all claims amount to JTB. You seem to be applying some absolute sense to our claims of knowledge. What would be the point of challenging someone's claim to knowledge (a justified claim) if one couldn't be wrong about the claim? This goes directly to the idea of doubting a claim.
There is more to this, but this is a start.
I get it, I just don't think it's correct. It's the "later on..." part that bothers me, because there's always a "later on we find out..." about one thing or another. I was working on a thought experiment sans farm structures that might better make my point, but it eludes me presently. I'll be back.
Quoting Sam26
A reason, by definition, is a cause, motive, or explanation. That naturally renders reasoning as the process of providing causes, motives, or explanations for something. To justify a belief as true, I then argue, is to provide valid reasoning (a set of valid, i.e. consistent, causes, motives, and/or explanations) for a belief being true.
The argument can be made that most of our justifications are non-linguistic at any given time. We could linguistically express them, but we generally don’t. It is only when we want to convey these to others or else deliberate upon some issue internally that we make use of language. For example: When playing a sport, one knows to move left instead of right at a certain juncture—for example—without need for linguistic expression of beliefs, truths, or justifications; this while one yet holds a justifiable true belief that so moving is optimally advantageous at the given moment.
A lesser animal pet can thus be argued to know what its name is—for example—for it holds a pre-linguistic believed truth which is to it justifiable via its experiences of causes and motives (with explanations here being deemed to always be linguistic for the sake of simplicity. Although, when defined as “to make something understandable”, explanations then will not necessarily contain human language: e.g., an animal’s body language (which sometimes can be intentionally deceptive in more intelligent lesser animals) explains its states of mind to other like—and often unlike—animals; or, an animal’s memories of motives and causes will serve to explain to the animal the meaning of some given).
An animal can then be upheld to know that fire burns—especially when it is an acquired belief of what is true that is itself in keeping with the animal’s set of learned causal relations and motives for actions (i.e., with the animal’s non-linguistic reasoning regarding what is). Here especially thinking of the more intelligent lesser animals: canids, corvids, dolphins, elephants, great apes, etc.
But here things can get complex very quickly: an ant innately knows its caste and what to do for the colony (just as we innately know how to suckle when birthed, among other forms of our innate knowledge). And in such instances, the issue of JTB becomes murky—although, imo, not necessarily invalid (especially when the property of justification is not conceived as entailing human language: e.g., a human baby is justified in holding a pre-linguistic belief that suckling will satisfy its pangs of thirst/hunger, thereby knowing it must suckle in order to live).
While I’m at it: Knowledge by acquaintance, broadly defined, can be deemed in similar enough fashions to be believed truth justified by first-person experience. Example: I am justified in holding the believed truth that I am psychologically certain by my experienced feeling that I am—which is itself the valid reason (cause, motive, or explanation) for my belief being true. Thus, one can validly affirm, “I know I’m certain (or happy/sad; etc.).”
as a sub-quote taken from the one above:
Quoting Sam26
Yet this is how we get to certain people knowing that the Young Earth model of the universe is true. Or that eliminative materialism is true. What stands in the way is that their specified causes, motives, and explanations for so believing will not be fully coherent and, thereby, will contain contradictions. This, at least, in principle wherever the justified believed truth is in fact false.
I'm always presenting knowledge, by definition. :nerd:
If the only knowledge that can be had in practice is 'subjective knowledge', then what is the point of calling it subjective, beyond differentiating it from a bad definition? Ought the knowledge we have be called knowledge, and anything else be better qualified? I understand that we can possess ontic knowledge but not necessarily know when we do.
Here's my attempt at a thought experiment. I don't know that it's going to be coherent, but I might post it anyway in hopes of highlighting the error in my thinking that makes me subscribe to a non-JTB framework. I might need to borrow a philosophy demon to help me out in this one. Which is oddly fair game.
At a particular moment in time, let's suppose you know 10 things. Then my philosophy demon informs you that one of the things you know is wrong, but not which one. So you turn and tell me you in fact know 9 things. I argue that, no, you know 10 things, because you can't tell me which 9 are actually correct; or else you know zero things, because 1 of the ten is wrong and it could be any of the 10.
The issue of terms is the very semantic facet that I’m yet trying to better specify. One could just as readily say “X knowledge” and “Y knowledge” instead of the “ontic knowledge” and “subjective knowledge”—but this is far less descriptive.
The point is that one is a fully conceptual ideal—that is thereby non-operational as knowledge. It presents a knowledge that is infallibly justifiable and infallibly true—this not being possible to obtain in at least current practice. The other form is the only type of knowledge that can be had in practice. This, I’m thinking, is again best exemplified by the criterion of truth; there is the fully conceptual ideal of absolute truth which can never be wrong; and then there is the only operational form of truth that can be found: that which we believe to conform to the former type of ideal truth.
So—the thought just came to me, with the help of previous posters—scratch “ontic” and “subjective” and replace with “ideal” and “operational”, respectively. (seems to do a better job at describing what’s intended).
Operational knowledge, then, can only be evaluated via use of ideal knowledge. Without it needing to conform to ideal knowledge, any claim to have a justified believed truth will be knowledge. E.g., I know that it will be sunny today because—i.e., due to the cause of; or, on grounds that—my cat is out in the backyard and there are satellites in the sky. Now, in everyday life, were someone to tell you this, you’d think them to be, well, ignorantly mistaken. To not in fact know that it will be sunny today. But why come to this verdict if it’s a believed truth that has just been justified to the satisfaction of the bearer?
The answer I’m giving is that this believed truth does not conform to ideal knowledge—here, because it is deemed to not be validly justified. And, hence, is then judged to not be knowledge.
>>> At this point I should ask: If someone were to tell you it’ll be sunny today for the reasons just mentioned, and whether or not it’ll be sunny today holds some degree of risk/importance for what you do today, would you then yet hold their belief to be knowledge? And, therefore, act in accordance to this known?
Compare the aforementioned with: I know it will be sunny today because my cat has the odd habit of only going outdoors on days that are perfectly sunny, and he is now outdoors, and because the weather forecaster has picked up from satellites the depiction of weather patterns that nearly always entail that a sunny day is in store.
Here, while yet not being ideal knowledge—which is perfectly justified to be an absolutely true belief—the given justified believed truth nevertheless does conform to ideal knowledge (to our ideal of what perfect knowledge should be). And, because of this, can now be deemed to be operational knowledge. Hence, here, we will deem this person to in fact know what he is talking about—and will hold no reason to question this knowledge unless we hold other data or reasoning that appear to us to conflict with it.
So, I’m arguing, we can only appraise what is and is not operational knowledge by appraising whether or not it conforms to ideal knowledge. If it’s falsified in potentially so being, then we deem it to not be knowledge.
Quoting Cheshire
It’s an interesting thought experiment, but I think it obfuscates the primary issue. Here, we’re trying to apply (meta-)operational knowledge to what is and is not a particular instance of operational knowledge given the circumstances. How do we know if we only know nine, or none, of the ten formerly thought to be known givens? The question of what knowledge is to begin with still remains.
It's a bit of a straw man, isn't it? If an individual told me something absurd I wouldn't confuse it with the subject of knowledge.
Quoting javra
We have an ideal concept of circles, but we don't call the ones we draw operational circles. Because we never draw ideal circles, the qualifier is redundant.
Quoting javra
It would obfuscate if in fact the demon was necessary. In actuality, suppose all the things you know. I'm asserting 1 of them is wrong and you don't know which one.
It was a false choice. In this experiment we know 11 things and 10 of them are subject to error.
No, not a straw man: Why do you appraise it as nonsense—this if it is a believed truth that is justified to the satisfaction of its bearers?
Quoting Cheshire
Yes, because here we clearly know that no drawn circle is an ideal circle—and so there’s no implicit equivocation involved.
Same with knowledge, from where I stand at least. You do recognize, however, that some hold their knowledge to in fact be infallible? Be this within religions, philosophies, or out in the everyday world. Here, there is equivocation between the operational and the ideal that is confused with unequivocal states of affairs which are in fact obtained ideals.
Quoting Cheshire
In which case, why should I believe you in lieu of proper justifications for this? Due to an authoritarian commandment?
Quoting Cheshire
Yes. Well, you’re discussing this with a fallibilist—i.e., a philosophical skeptic in the tradition of Cicero and Hume, among others (not Pyrrhonian and not Cartesian): a very broad, but different, matter. As I’ve indicated in my previous posts on this thread, all our held beliefs of what is true are—I argue—susceptible to error, hence to being wrong. Though this in no way entails that they are. Until they’re falsified in so being, there’s no reason to believe that they are wrong.
Hence, following your specifics, we fallibly know 11 things, all of which are subject to error.
At any rate, that we know 11 and not 10, 9, or 0 givens still does not answer the question of what knowledge is.
Not according to this novelty. If you ever prove that things are not subject to error, you have to prove that something is subject to error: specifically, the statement in question. So, if you could hypothetically disprove it, you in turn prove it. So, 11 things, 10 subject to error.
Right. I hear that Descartes once tried it. Turns out he didn’t succeed. But his methodology also produced such philosophical questions as BIV scenarios. Meanwhile me and a few others are worried about the outcomes of increased global warming, a possible future politics of global Orwellianism, and other such philosophically trite things.
Look, to be less sarcasticalish, in the absence of proven infallible truths (and, thereby, infallible knowledge), we’re left with what we realistically have. We’re not discussing what knowledge is to aliens in some alternative parallel universe of our imagination, but what it is in the world in which we live.
I’ll grant that operational knowledge, unlike ideal knowledge, can hold degrees of strength. To intuitively know that the planet is round is not as strong a knowledge as to know so due to well justified empirical evidence—though both are fallible and both could be instances of ideal knowledge. So, if this makes sense to you, then your eleventh known could be stronger than the ten knowns it addresses. But you’d have to provide for why this is so.
Still, I’m not big on giving replies without having my honest questions answered in turn. A personal quirk wherein I typically find other things I’d rather be doing. Again, why do you find some believed truths justified to the satisfaction of their bearers to be nonsense rather than knowledge?
That's fair, I wanted to give your replies more consideration, so I just replied to the aspects I had already thought through. I'll return in kind.
Cheers. We haven’t chatted before and it’s sometimes fuzzy what the other’s character is like. But, yea, if you can find a viable alternative account of what knowledge is—this as per the question asked—or else find faults with my reasoning, I’d love to hear bout it. Nice talking with you, btw.
There is an important distinction that needs to be made in reference to justifying a belief by giving reasons as opposed to citing a cause for a belief. First, causes happen before their effects in time, i.e., they precede their effects. Reasons on the other hand may or may not precede a particular belief. Second, causes have nothing to do with purpose or intention, but reasons do. Third, reasons can be good or bad reasons, but causes are not good or bad in the same way, i.e., my judgment about a cause is different than my judgment about your reasons. This has to do with intentions, or with choices made by persons.
Again, as I said before, there are beliefs that are caused, but to justify a belief, as in JTB, we're not referring to causes but good reasons.
In all fairness, the precise definition of reasoning is a fuzzy issue in philosophy, granted. But I’m hoping that some linguistic ambiguity might be the reason for our partial disagreements.
It might be that you’ve misinterpreted me as saying that to justify something one must provide for the cause of the very belief’s manifestation. This, however, would be a very incorrect interpretation of what I said/intended to say. What I was/am thinking is that justification requires reasoning and that reasoning sometimes consists of contemplated or expressed causal relations.
To keep this example simple, if person A asks person B to justify the truth to the eight ball being in a particular corner pocket, one valid justification could be as follows: It’s in the specific corner pocket by the cause of person C hitting it with a cue ball on the right. Less formally: it’s there because person C hit it with the cue ball.
Of course there are countless other ways to justify this, such as by having person B take a look into the corner pocket. But each different form of possible justification would likely be best suited to different particular contextual factors, such as that of why person A wants to know.
If it’s a linguistic ambiguity that is the principal reason for our current disagreements, then our current disagreements have been caused by a linguistic ambiguity. In this case, the reason equates to that which has caused—and not to a motive, intention, etc. To justify the truth of our current disagreements, one could provide data to some other. But where this is not feasible, an alternative means of justifying this truth is by the causal reasoning just mentioned, by specifying that a linguistic ambiguity was the cause for it.
More complexly, since it’s the first thing that now comes to mind, to justify that change is real and not unreal as per the conclusion of Zeno’s paradoxes, data of itself will not suffice. So, here, one could try to justify this truth via causal reasoning: e.g., awareness, which is ever changing, is the reason, the cause, for Zeno’s being at all familiar with his paradoxes—for his being at all familiar with what logic and reasoning are, for that matter. Hence, due to Zeno’s conclusions being dependent upon awareness’s presence—i.e., due to awareness being a/the cause to the effect of Zeno’s conclusions—Zeno’s conclusion that change is not possible can only be somehow flawed. OK, this does not of itself find any fault with the specific reasoning that he used. But it does provide a valid (regardless of it being to whatever measure imperfect) justification for change being real.
I’m thinking this could unfold into what is meant by causation. Here, I simply intend the property wherein the existential presence of X (the effect) is determined by Y (the cause)—such that the cause produces the effect in due measure to which the presence of the effect is determined by the cause. It’s on the generalized side as definitions go, but it does encapsulate efficient causation fairly well, imo. This delineation can apply to physical entities but is in no way limited to physicality. Example: my thought of a freshly cut lemon causes me—is the reason for—my unexpected extra salivation; for, in this case, the presence of a watery mouth has been determined by the thought of a freshly cut lemon.
Again, I’m not saying that all reasoning consists of causal reasoning, but that some of it does—to me, a fairly good portion. And we justify things by use of reasoning—including, at least at times, that of causal reasoning.
If disagreement persists, I’m honestly unclear as to the reasons (not motives, but causes). So I’ll stop here and see what replies I get.
Since offering one's ground for belief does not make the belief well-grounded, that should tell us that a belief does not necessarily need to be argued for in order for it to be justified. The same is the case with a belief being true. It need not be argued for in order to be true.
One need not know that they know in order to know that touching fire caused pain.
Seems to me that all of the valid problems regarding JTB are dissolved by virtue of getting thought and belief right to begin with.
Blah, blah, blah...
I understand, but I think this is an error. I was trying to point out that reasoning is separate from causality because it involves intention and choice. Causality, as I see it, is quite separate in terms of reasoning, but not separate in terms of some beliefs. Not all beliefs are a matter of reason. However, the beliefs I'm specifically referring to are those beliefs that are connected with JTB.
Also, there are no prelinguistic JTBs, but there are prelinguistic beliefs. Justification is a linguistic endeavor, and always has been. There is no medium for justification apart from language. It necessarily involves others within a linguistic setting.
Then illustrate how none of the three examples I provided for justification via causal reasoning is in fact a form of valid justification. Otherwise, if any of my examples of justification are valid, they prove my position's validity.
Quoting Sam26
What you’re saying is mainstream. Right up there with all concepts being dependent upon language, rather than vice versa. But I disagree with this popular believed truth that you too uphold. To get a better understanding of your stance:
1) If the pre-linguistic child cannot discern via reasoning what is true from what is false by means of some form of implicit, non-linguistic justification applicable to its various empirical experiences and imaginations, then how—in your opinion—can such a child come to know any particular language to begin with?
2) When we are not linguistically justifying our beliefs to ourselves or to others, do we know anything? If you answer that we do, how so? (The remembering of a linguistic justification seems to me to count as an instance of consciously apprehended linguistic justification for something—so I’m not here addressing recollections.)
Our views are likely to differ even after you provide answers for these questions, but I am sincerely interested in how you view the world in this regard.
Learning how to use language is experience, as is getting burned. Why privilege the former and not the latter?
Is it more important that one be able to talk about the reasons they believe something or other, or is it more important that those reasons are good ones; that they warrant belief?
Not sure who you’re addressing this to, but so it doesn’t go un-replied:
Once you get to the roundabout point you address, the ensuing issue is:
>>> How does a belief become well-grounded in the absence of actively manifesting language?
For example, what makes surprise—be it on the part of an intelligent lesser animal, a human infant, or an adult human—warranted and, thus, well-grounded?
Surprise is the act of finding our concepts of what is true to be unwarranted—this typically as they apply to expectations of what was, is, or will be, expectations assumed to be well-grounded. We adults will often linguistically warrant—i.e., linguistically justify—our surprise by explaining that we had good reason to think we knew that which we then discovered we didn’t. All the same, the act of being surprised precedes any and all linguistic justifications for so being. It is entwined with non-linguistic evaluations of what in fact is. Hence, surprise for the animal or toddler, for example, is the expression of a discovery assumed to be well-grounded that that which has been so far assumed to be warranted/well-grounded in fact isn’t.
Or, alternatively, what to a specific (intelligent) lesser animal or toddler warrants—makes well-grounded—that a specific sound is what they are intentionally called by? That the sound is made in representation of their personal being?
It certainly isn’t linguistic justification via linguistically classified abstract concepts. But it is, I uphold, a non-linguistic means of evaluating what in fact is from what isn’t, one that makes use of, at least, a very rudimentary reasoning—a far less developed reasoning that nevertheless remains true to the laws of thought.
Hence, my current position: the non/pre-linguistic believed truth is thereby believed well-grounded via some system of non-linguistic justification*. One which—among more intelligent sapience which adult humans are—becomes expressible via linguistic means and, thereby, certainly vastly more complex (by comparison to infants and to lesser intelligent animals). Yet one which—as we all experience when not linguistically justifying our beliefs and actions—does not stand out consciously as do our linguistic expressions of concepts. … But this, I acknowledge, gets a little deeper into hypotheticals of how the mind works (ones that are in keeping with biological evolution); all this likely not being a proper subject for this thread.
* By “justification” I here roughly intend “to reckon or surmise that that concerned is warranted due to interrelations between obtained data and, hence, due to some form of reasoning, be the reasoning linguistic or not” (with “warrant” as verb here roughly meaning: to guarantee as true). Please let me know if this intended concept is better expressed by a term other than “justify”, as in “to evidence just/correct/right”. I’ll then use that term instead, if it indeed is more fitting of the concept.
As a heads up, I’m currently in no position to properly argue all of this stuff out. Just presenting it here as my upheld current opinion—which I hope I’ve to some small degree justified. All the same, the matter of explaining the occurrence of surprise in non/pre-linguistic beings still seems to me to be pertinent to the issue of well-grounded beliefs being knowledge.
The issue I've been at pains to point out goes unnoticed more often than not. Treating the terms "justified" and "well-grounded" as equally interchangeable synonyms is a mistake if one also holds that being justified requires justification. It's a mistake because being well-grounded does not. That is always the case. Always. I mean think about it...
The act of justification is when a speaker provides the ground for his/her belief statement to another person. It can be the case that the ground is insufficient/inadequate. They would be insufficient and/or inadequate prior to being given to another, and they would be insufficient and/or inadequate after. The same is true of ground that is sufficient. From this common sense understanding we can glean something significant about the notion of justification.
The act of justification is not required for well-grounded belief.
The same way it does within language use. It is validly inferred from pre-existing true belief, actual events, the way things are/were, and/or some combination thereof.
In my previous post I addressed what I intended by the term "to justify" as process and "justification" as an instance of this process. The concept I have in mind and have described does not require language--though it also applies to linguistic expressions. And, so far, I have no better term for it than "justify/justification". I won't rewrite it, but it's there.
Linguistically, when asked, "how do you justify X?" what is typically asked is, "what are your reasons for believing X to be true?" One doesn't need to provide these reasons for the valid reasons to be there, i.e. for the belief to be well-grounded, I agree in this. But if reasoning is provided among us linguistic beings and if the reasoning is found valid, then the believed truth is then deemed to be justified--or, as I previously addressed, is "evidenced to be just/correct/right".
Intelligent animals and toddlers don't provide the reasoning for their beliefs to themselves or to others; of course not; they have no language by which to do so. But they can infer, reason, all the same. And via their inference their beliefs can be well grounded or not.
I guess what I'm driving at is that well-grounded-ness is always itself fallible, never infallible/absolute. This is what makes surprises possible in intelligent beings. As well as learning by trial and error.
In due measure with intelligence there are reasons--inferences--held for certain beliefs being maintained. And it is this reasoning that I'm currently terming "justification"--again, the evidencing of being just/correct/right.
Maybe this will better help in making sense of where we differ:
Quoting creativesoul
How then do you believe this non-linguistic valid inference is different from “[non-linguistically] evidencing [that concerned] to be just/correct/right”?
I saw that. Not trying nor wanting to be pedantic about it, but while I can understand the desire to use an already existing phrase, it can become quite problematic. That is particularly the case in situations like this. We are involved in a discussion that is based upon a conventional notion. When the topic of discussion is the conventional notion of JTB, then we must maintain the standard meaning for it. That is what we're discussing, after all.
If one wants to argue against the conventional notion, as I am doing, one must argue against the conventional notion. One cannot be expected to be taken seriously if one argues against the conventional notion of JTB by virtue of re-defining what counts as being justified and/or what counts as justification.
That said, I can understand and fully appreciate a situation where ones finds that conventional notions are inadequate for taking account of what one wants to take account of.
Time to coin a new phrase...
This is all a bit irrelevant to our agreements though. Let's discuss those and take things from there.
Quoting javra
May I first suggest something here?
It looks to me like you are conflating truth with either belief or statements thereof in your use of "believed truth". If you drop the "truth" part and keep the "belief" part, you'd end up with the following...
Quoting javra
Yeah, pretty much. The key part here, as it pertains to my own critique regarding the notion of justification as it pertains to JTB, is that it is the listener who 'deems' the belief "justified". That's a problem. Think Copernicus. The listeners of his time did not deem his beliefs justified, nor true, but many of them were both.
The underlying problem with the JTB notion of justification is clear. It is not required in order for a belief to be either well-grounded or true.
It is required in order for a speaker to be able to talk about his/her own belief in terms of its ground (how/why one believes what they do). It's useful as a means for helping a capable listener further discriminate between competing and/or contradictory claims. It's useful for helping a listener determine whether or not they can and/or should trust the reliability of a speaker.
Quoting javra
Of course we're fallible. There are steps we can take that decrease the likelihood of our being mistaken. The whole point of some folks' methods and philosophies is to minimize error. Methodological Naturalism comes to mind. I tend to work from its tenets.
Quoting creativesoul
Quoting javra
Well for one, I do not find that it is possible for a non-linguistic creature to be involved in any activity that meaningfully and sensibly qualifies as 'evidencing'... not to themselves nor others. Both require the same things(the ability to take account of one's own mental ongoings - ahem, language), and a creature without language quite simply does not have what it takes to be actively involved in 'evidencing'.
I'm not sure what you're talking about though. Try to explain this notion in terms of what it takes to do it. I mean, what is the criterion for it - which when met by some candidate or other - counts as being a case of "evidencing [that concerned] to be just/correct/right."
On my view, that may be three completely different criteria with being just and being correct both requiring language.
Being 'right', if that is to mean forming and/or having true belief, well that one doesn't require language. At least for forming and/or having some belief. However, even with this one, I find no sensible way to talk about a language-less creature 'evidencing' that sort of belief.
Hey, thanks for the thoughtful reply.
Yea, human language is built to connote human language dependent concepts when it comes to many a mind-associated process or attribute. Talk to some and they’ll insist that “valid inference” necessitates the use of language as well. But be this as it may.
I’ve used “believed truth” as shorthand for distinguishing belief-that from belief-in, both of which are beliefs.
As to the criteria for “evidencing” … again, this would get deeper into interpretations of mind than I’d like. I’ll try though: that which evidences is that which suggests the truth of something. One might object that non-linguistic beings lack our linguistic concept of truth. Clearly they lack any account of what truth is; yet, again, for lack of better terms readily at hand, I uphold that they do have understandings of that which conforms to reality, i.e. of that which is true.
I’ll provide an example (there are far better ones when it comes to lesser animals, such as those pertaining to great apes, but keeping this sufficiently common): a person’s petting a dog on the back typically evidences the person’s affection toward the dog to the dog. The dog’s memories of being petted will then evidence to the dog that the person who pets him holds affection for him. The data here non-linguistically justifies the given belief-that (haven’t yet come up with a novel term for the concept, though).
If I’m not mistaken, seems like our primary disagreements are over the words that should be properly used. And that no proper words exist for the intended concepts. To me, however, this is not to say that the concepts are lacking or that they’re not well-grounded, to use your semantics.
If they are well-grounded, then these currently ineffable (?) concepts do relate to the thread’s contents; this by illustrating how linguistic justification can be a more advanced, abstracted form of what occurs in pre-/non-linguistic intelligent beings so as to result in “well-grounded beliefs”. But it’s hard to debate most of this if the concepts are not understood via the words used. So, presently, I’m contingently planning on backing out of this discussion.
You're more than welcome javra.
You are free to do as you see fit with regard to this discussion, but for me at least... it looked like it had a certain potential that I've not seen for quite some time.
I thoroughly enjoy critiquing others' and my own writings, and do appreciate valid objections. I seek them out, in fact, often. Unfortunately, they're few to be found hereabouts. That said, you didn't elaborate upon one, but hinted at it. I agree with the sentiment about some saying that valid inference requires language use. I could probably make that argument against my claim. Kudos.
That said, we've just been skirting around some stuff thus far. This last bit, in particular, piqued my interest...
Quoting javra
I'm not even sure that we disagree here. I was just trying to help you better develop this concept/notion you've been alluding to with the terms "justification" and "justify". Best advice I can offer follows from a translation technique I like to use when folk are using terms in a way unfamiliar to myself. We can replace the term with its definition in every instance of use. If the overall writing still makes sense, then it's an acceptable manner of speaking.
For completely different purposes, I suggest that you could intentionally avoid the terms "justification" and "justify" but instead use the description or definition that you've called such. It may be a bit unwieldy at first, but it will result in a better conception.
This part rings very relevant and true...
From a naturalist starting point: At conception there is no thought or belief. History shows us that our knowledge is accrued. Knowledge consists of belief. Belief is accrued. With that in mind...
The bit above regarding justification being a more advanced form of what occurs in pre-linguistic and/or non-linguistic creatures is not at all problematic for me. In fact, it would have to be that way, or similarly so, if my own position is right.
I would readily agree that language-less creatures presuppose both reality and the correspondence of their own thought and belief with/to reality. I cannot, however, agree that language-less creatures have an understanding of that which conforms to reality.
It seems that you're using this notion of 'evidencing' as a manner of talking about sufficient reason to believe... or warrant. It's commonly called "ground" for belief. Seems like nothing is lost if we swap "evidenced" and "justifies" with "grounded" and/or "warrants"...
In the words of the British, buggers. I was hoping to get on with other things, but since this is intellectually stimulating …
What I was hinting at leads back to the way all languages I’m very familiar with (roughly, two: English and the other one being largely Latin based, Romanian) are structured. They very often presuppose linguistic capacity in the cognitive attributes they specify. In a way this makes a great deal of sense: we’re addressing these concepts to ourselves, not to non-linguistic creatures. In another way, to my mind, it handicaps philosophical enquiries into what is by predisposing our abstract thought to limit itself to that realm of linguistically-dependent cognitive givens. Add to this ego-centeredness and the anthropocentrism that naturally ensues in light of the problem of other minds and, to me, there’s something of a near universal cultural bias that obfuscates the way we, humans, contemplate all things pertaining to mind, most especially non-human minds. (And, for fairness, on the other side of the aisle there’s the occasional character that believes lesser animals are just as aware of things as humans are, the anthropomorphizing crowd. Me, I’m stuck somewhere in the middle between what I deem to be these two, to me not very well-grounded, extremes.)
For example, we deem that one must first understand what “validity” and “inference” point to as words prior to being capable of engaging in valid inferences—for how can one engage in valid inferences (further complicated by the sometimes very formal structures we associate with them) when one does not know what the language-demarcated concepts are?
Somewhat tangentially, Descartes is well reputed to have believed that lesser animals are basically mind-devoid automatons. He’s anecdotally known, for example, to have kicked a pregnant bitch while believing she held no feelings to speak of. This being only rational to him. Because only humans have feelings, i.e. emotions—not to even bring up the capacity of reasoning and, hence, of making inferences … which are worthless if not somehow evaluated for their validity in contrast to their potential falsehood. (to those who go by anatomy, just as lesser mammals have their own limbic systems, so too do they have their own cerebral cortexes, these being less developed mirror images of our own ... not to even mention analogous evolution of intelligence as can be found in octopuses)
But this topic is to me a headache. One that should be resolved, but not a subject which data alone can resolve. To me, this issue requires reasoning concerning metaphysics—as far removed as it may sometimes be to immediate concerns. And so doing is too off-course from the thread’s intended subject—and debate via soundbites hardly does the topic justice. BTW, if at all of interest, the book Primates and Philosophers: How Morality Evolved by Frans de Waal, et al. serves as a good example of these complexities—and of the difficulty in using data to resolve the matter. IMO, it sort of all boils down to preexisting metaphysical commitments on the part of each particular philosopher or scientist. And our language certainly conveys in implicit manners many of these metaphysical commitments—to not even bring up our culture(s).
That’s my general take in regards to language and cognitive processes—even if it is a bit too general. But if there’s something more specific that you’d like to address in terms of the capacity to reason among more intelligent non-linguistic beings, let me know what it is. Also, to the extent we differ in this just addressed outlook, I wouldn't mind finding out how.
I’ll address some of your other replies a little later on.
Yes, it’s a promising idea; still, replacing words with definitions can make communication cumbersome. The longer a definition the more cumbersome the expressions become. And I don’t have a short definition. (more on this below)
Quoting creativesoul
Very glad we concur here.
Quoting creativesoul
“Grounded” and “evidenced” are indeed synonyms. But “evidenced” seems to me to provide the process by which the conclusion is obtained—that of data and reasoning about it—whereas “grounded” does not, instead simply serving as conclusion. No?
“Warrants”, I’ve wanted to make use of the word myself previously in this thread. Two issues that come to mind. One is that warranting has many other meanings such that when the word is used, to my ear, it does not specify reasoning as process. The second more directly concerns the thread’s subject: should propositional knowledge then be phrased as “warranted true belief”? Here again, the implication of “belief supported by reasoning to be true” seems to me to be connotatively lacking. I find that the continuum between nonlinguistic intelligent animals and adult humans should be expressible by a single terminology. Otherwise, it at the very least insinuates a division in attributes.
Still thinking about it, though. Part of why I’ve been thought-stuttering here (yes, unbeknownst to others, I have) is that I associate “justify” with “making just” and, in turn, “just” with a metaphysical principle for that which is … well, it’s hard to express in a few words. At any rate, here I deviate from a physicalist’s framework. To laconically express it would only be poetry and to justify it … well, it’s a very long analytical philosophy I’m working on. But on a whim, I’ll try anyway.

You have the intra-subjective, this being what goes on in and only in individual minds; dreams for example, among many others. Then there’s the inter-subjective: languages, cultures, etc. Then there’s the dia-subjective: givens that are, curtly expressed, equally applicable to all intra-subjectivities; i.e. physical objectivity, inclusive of its natural laws. And the last category: non-subjective reality. This last category is, for lack of a better short phrase, metaphysical objectivity; hence, equally applicable, or impartial, to all intra-subjectivities.

Justness, then, is in my view a property of non-subjective reality. By all means, no justification for any of this was provided so there’s absolutely no call to take any of this seriously; I’m saying this in all earnest. But with this as background, if non-subjective reality is, and if this metaphysical objectivity is in part synonymous to justness, then to justify something is to align it with that which, firstly, is metaphysically objective (via reasoning) and, secondly, as a derivative, to that which is physically objective (via reasoning + data). I understand if this brief account isn’t making much sense; never mind if it does not seem credible. I’ve nevertheless mentioned it, however, so as to illustrate why I’m so attached to the term “justify”, i.e. to make just.
It fits well into the metaphysics I’ve in mind—and, here, it does not necessarily imply either linguistic manifestations or thought which occurs after the fact.
If I’ve just spoken out of hand by mentioning my reasons for preferring the term “justify”, my bad. I’m pretty certain we have different metaphysical dispositions and, on my part, it’s by far not the most important aspect of this thread’s discussions.
I’m going to mull over the issue of terminology some more, though. Thanks again for your input so far.
Unfortunately for the debate, didn't find much of anything to disagree with.
I too find myself between these two extremes: The one side denying any and all non-linguistic thought and belief based upon an utterly inadequate framework that sorely neglects to draw and maintain a meaningful distinction between thought and belief and thinking about thought and belief; and the other side neglecting to draw and maintain the equally crucial distinctions necessary for taking proper account of the complexity of belief.
To this point, there's nothing jumping out as incommensurate to my own understanding of thought and belief. I mean, it seems trivial that there's an allusion to something you called "the problem of other minds". It seems that you're tying it to attributing human thought and belief to non human entities without offering adequate justification for those assertions. Anthropomorphism is to be avoided. To be clear here, I'm not outright denying that different creatures can form virtually the same thought and belief as humans. It's just that those are not able to be shared by them, at least not in the form of belief statements.
Again, I think we agree here.
Quoting javra
Yes. To such a skeptical reply, I would answer like this...
Following the rules of correct inference.
Does that require either knowing the rules or knowing the strict academic meaning(s) of the terms "validity" and "inference"?
Getting burned by fire doesn't.
A language-less creature has its first encounter with some small form of fire. So small is this danger, in fact, that it doesn't trigger anything fearful within the creature. The creature has a bit of curiosity, and so touches the fire and feels the resulting discomfort/pain. The creature refuses to do that again. Rather, it has become - quite literally - painfully aware of what happens when you touch fire.
On my view, this creature has formed meaningful thought and/or belief by virtue of the attribution and/or the subsequent recognition of causality. Who would deny that that creature correctly attributed causality by virtue of drawing a correlation, connection, and/or association between its own behaviour and the ensuing pain/discomfort?
Post hoc ergo propter hoc?
Does that apply to the recognition of a well-known causal chain of events?
I think not. While it is true that just because something happens after something else it doesn't mean that the first was the cause of the other, language-less creatures can't think like that to begin with. Feeling the pain from touching fire most certainly happens afterwards.
The fallacy applies to situations where the believing creature is offering the temporal order as ground for its own belief. The creature didn't contemplate a temporal framework. One must contemplate a temporal framework in order to be guilty of post hoc ergo propter hoc.
Language-less creatures can draw meaningful correlations between different things. They can acquire knowledge of what we have long since already known: Touching fire causes discomfort/pain.
They come to know what we come to know by virtue of making the same causal connections between different things(touching fire and feeling pain).
I concede the point and will resist talking about non linguistic logical inference. It adds only unnecessary confusion and isn't necessary. Thanks for taking me to task on it. It's not a common thing for me to write. Good conversation javra. I'll get to your latest post the next time around...
:smile:
I’m currently interpreting the following to be in line with your outlook, and since it fits into the thread’s subject:
I’ve come to understand belief as the content to that which is trusted to be (including to have been and to will be). I’ve also come to find at least three categories for trust: trust-that (trusting that X in fact is; e.g. trusting that the earth beneath one’s feet is solid); trust-in (roughly, trusting that X can or will do Y; e.g. trusting in Ted’s capacity to do well in a marathon despite the uncertainty to this); and trust-between (roughly, trust existing between two or more agencies as pertains to implicitly maintained contractual obligations; e.g. Alice’s trust that Bill will not deceive her). “To believe” is to me then fully synonymous in all instances with “to trust”.
Curious to know what criticism of this overall proposition could be offered. (I’ve addressed one potential criticism below)
Thus understood, though, to believe is other than to think—for the latter requires connections made between givens whereas the former a) does not and b) is a prerequisite to thought’s occurrence (each associated given must be trusted in some way prior to associations between them being made).
When one ponders one’s beliefs, one is then thinking about what one generally trusts in manners that now abstract the formerly held enactive trust/belief—this into something now apprehended by the contemplator which enactively trusts. Again, such that one must enactively trust that one’s apprehended abstractions, memories, etc. pertaining to that which one trusts are valid. That which is pondered in some existential sense now becomes other relative to that agency which is enactively trusting.
Likewise, to think is also in similar manner different than thinking about thought—for the thought one thinks about is that which is apprehended by the thinker.
BTW, in conjunction with the aforementioned, one then can also classify trust as being innate (e.g., a calf’s innate trust that it must stand and run as quickly as it can); learned (e.g., one’s learned trust that Earth circles the sun and not the other way around); or enactive (e.g., one’s consciously held trust whenever some uncertainty is consciously discerned).
Thus understood, beliefs can be innate, learned, or enactively held. Animals not capable of any significant degree of intelligence will be largely guided by innate trust/beliefs that cannot be altered—save by processes of biological evolution acting out on the life or death of individuals relative to a given species (or, such as is the case with ants, individual colonies/cohorts). The more intelligent the animal the more learned trust/belief it will hold a capacity to gain via enactive trust/belief that later on become tacitly remembered. When it comes to humans, we’re intelligent enough to be capable of sometimes altering both our learned trust/belief and, less often, our innate trust/belief via our enactive trust/beliefs.
So it’s known: The major criticism that I know of concerns the way in which trust is typically thought of in English speaking communities, as strictly pertaining to agency in relation to other agencies (cf. https://en.wikipedia.org/wiki/Trust_(emotion)). But I believe (trust) that this is too narrow a demarcation of what is ontically occurring—brought about by how English conceptualizes reality via words’ connotations. For example, in Romanian (harkening to Latin) there is no linguistic disparity between trust and belief—both being addressable via the word “cred” (as per credo); and to have trust in another agency can (but is not necessarily) specified by “in-cred-ere” (akin to “to entrust”). Hence, “I believe you” gets translated as “te cred” [whereas “I believe in you” translates into “am incredere in tine”]; this while “I believe that […] gets translated as “cred ca […]”. To me this serves as one reason/example for why a more universal aspect of belief-as-trust is ontically present in mind processes than that which English specifies.
OK, all this is a mouthful. But then, propositional knowledge can roughly be expressed as “well-grounded trust-that that is true”. Criticisms of the aforementioned, wherever applicable, would again be welcomed.
Ok, so I reread the matter and probably will have to again, because the examples used to make your point are a bit awkward. I have a cat and I enjoy its fluffy indifference to my affections, but on no occasion do I defer to it on matters of scientific inquiry. If I asked anyone a question and their answer included a reference to what their cat was currently doing, I would call into question any answer they gave going forward. Because whether their conclusions are correct becomes a matter of happenstance.
I believe it's an attempt to reduce my rather generalized concept of knowledge to be so weak as to include the answers given by mad men concerning the weather, which seems strawman-y at the least. It's not what you intended based on your response, but I struggle with producing a better interpretation. No fault implied.
My reason for presenting it in the way that I did was, largely, to illustrate the difference between justified / well-grounded beliefs (in the latter example) and those that are not (in the first example). Maybe a better example should be used; all the same: In both the before and after versions, the same two basic realities are at play: a cat is out in the yard and satellites are up in the sky. In the before version, though, there is no rational connection between these facts and the held known. In the latter (granted that it’s a very strange cat which only goes outdoors on fully sunny days), the same two facts are now rationally associated with the affirmed known.
The way I’ve asked the question, “if it’s a believed truth that is justified (or warranted) to the satisfaction of its bearers”, then intends to get at more significant examples of knowledge. Such as knowledge of reality being as materialism, or idealism, or Cartesianism affirms it to be. Which, if any, actually knows how reality in fact is? (it could be something apart from these three choices) If it’s asserted that they all in fact know how reality is, then is reality inherently contradictory? Or is knowledge indifferent to truth? And so on … but in all cases, the respective belief-that will be justified/warranted to the satisfaction of its bearers—just that it will not be deemed justified by those of contradictory positions, due to what these latter will perceive as inherent contradictions in the positions addressed (or some other rational fallacy).
I know the aforementioned probably confuses things a bit. But, again, what else can knowledge be if not a belief whose reasons for being are rationally associated and that is in fact true?
Alternatively, isn’t this why so called mad men are so labeled: their explanations for why they believe what they do are not rationally sound?
There are some differences in our frameworks. I think it would be helpful here to revisit our agreements and then offer an overall outline for the purposes of keeping our vein of thought and thus the discussion on the path of discovering what all thought and belief have in common. This is a nod to something you mentioned earlier about metaphysical work needing to be done.
We're using the same methodology, it seems to me. That's huge, because I'm looking to further hone my own position, and it seems that you're doing the same. That said, I'm planning on addressing the general outlook you've offered...
Which of these kinds of belief is not existentially dependent upon language? I cannot see how any of this belief content is existentially possible for a language-less creature. I'm working from the premiss that at conception there is no thought or belief. With that understood, the belief content you're offering directly above seems far too conceptual and/or language laden to be existentially prior to language.
The first suggestion is almost acceptable...
I cannot be convinced that a language-less creature is capable of believing/trusting that the earth beneath its feet is solid, unless that belief can be formed by virtue of a language-less creature drawing correlations between different things (including but not limited to itself), and all of those things exist in their entirety prior to being part of the creature's correlation.
Regarding the belief-that approach...
The belief that approach fails to draw and maintain the distinction between belief and reports/accounts thereof, which is part and parcel to neglecting the distinction between thought and belief and thinking about thought and belief. A belief statement always follows the words "belief that". It's always propositional in form. The belief that approach targets statements of belief. The belief that some statement/proposition or other is true.
The content of non linguistic belief cannot be propositional. Propositions are existentially dependent upon language. Non linguistic thought and belief cannot have propositional content.
Here you've invoked the need for trust/belief prior to associations between things. I replace trust/belief with presupposing the existence thereof. All correlation presupposes the existence of its own content regardless of subsequent further qualification(s). That would be the presupposition of correspondence to fact/reality inherent to all belief.
My more justified answers to your posts are contingent on a number of metaphysical conclusions. I’ll try to do my best to reply without embarking upon these.
Trust to me is itself a process of awareness heavily embedded in metaphysical issues. Trying to define trust in the broadest manner possible while skipping all these, I get roughly this definition: a disposition—be it a) genetically instinctive, b) learned and stored within memory and one’s unconscious, or else c) consciously maintained and utilized—of so called “psychological” (and not epistemic) certainty toward what was, is, or will be.
So:
Suppose an animal which has not acquired a trust that the earth is solid beneath its feet were to walk upon quicksand. Why would it have done this if not for its innate (genetically instinctive) trust that the earth beneath its feet is solid?
I’ll keep this short since there’s a lot here that could be disagreed with; including a philosophy of mind which addresses a) innate, genetically inherited behaviors/dispositions, b) the unconscious where tacit memories and learned behaviors are stored, and c) conscious awareness (with the latter being perpetually interwoven with the two former). Although this isn’t metaphysics, it’s still a rather contentious subject, and my understandings of trust heavily rely upon the overall understanding of mind just addressed.
I’m mainly wanting to see the extent to which there’s common ground so far as concerns understandings of what trust is.
ps.
Quoting creativesoul
To me, this very presupposition you address is one of maintained trust, namely trust that there is a "correspondence to fact/reality". And here, I'd uphold this to be an innate (or genetically inherited) trust.
Don't you have to torture the meaning of "justified" in order to maintain this position? By saying "to the satisfaction of its bearers" it seems to erase justification's implied rational characteristics.
Quoting javra
And the result of this trespass is a new variable: the 'grounding', which feels nice intuitively, but have we solved a problem here or created one? What does a belief alone mean to us now? The answers given randomly to binary questions, but held without discernible reason?
No sir, you put justification back where you found it and play with your own toys.
Why should you believe that in all the things you know at least one is a mistake? I would maintain you accept it based on the law of identity.
I think there's reason to be certain at least some of them are wrong and by trying to falsify our beliefs we eliminate our errors and our knowledge improves or specifically becomes a better approximation to ideal knowledge. Without this assumption of unknown error we are left guarding beliefs when we should be testing them. It's a subtle, but significant difference in positions.
Can rational justification be infallible, i.e. perfectly secure from all possible error? I don’t believe it can. This does lead into a major quandary in philosophy, but, if it’s untrue, can anyone here supply evidence of an infallible justification (e.g., such that all premises and means of justifying are themselves evidenced to be perfectly secure from all possible error)?
Otherwise, it seems to me that all justification will be deemed sufficient for its intended purposes when it satisfies those for which the justification is provided (be it one’s own self or others to which it’s expressed).
So the issue of how and when knowledge is deemed to be, such as in relation to the examples previously provided, still remains.
But I acknowledge the issues become increasingly more complex the further they become enquired into; to me, it inevitably leads into metaphysical positions concerning various aspects of reasoning, such as those of the three basic laws of thought.
Quoting Cheshire
Don’t know if you’ve been keeping track of the conversations on the thread; I added the “grounded” part due to them. For simplicity of argument, however, I’ve no issue with sticking to the concept of “justification” as traditionally understood.
And stop it with the “sir”, mon senior. We’re all brats here, me thinks. :smile: [or maybe this was just you being a brat just like the rest us :razz: ]
Quoting Cheshire
You’d have to explain this better for me to understand. Are you alluding to the law of noncontradiction?
Quoting Cheshire
As stated, I can find this disposition warranted. Nevertheless, what I was attempting to emphasize is that there’s no need to become paranoid about being wrong about any particular upheld known—not until there’s some evidenced reason to start believing it is, or at least might be, wrong. But yes, remaining at least somewhat open to the possibility is part and parcel of the epistemological stance I maintain: fallibilism (or, a specific form of global skepticism that, unlike Cartesian skepticism, is not doubt-contingent).
Quoting javra
Rational justification doesn't imply infallibility, so falling short of infallibility does not leave a thing unjustified.
Quoting javra
And here lies the issue I have and repeat. All justification cannot be said to be sufficient based on the criteria of any given audience. Can it appear as such? Certainly, but this is no fault of the concept of justification. An argument can't be said to be justified because of who is judging it.
Quoting javra
The issue remains if we continue to subscribe to JTB in a dogmatic framework. I don't find justification to be the best measuring stick for the quality of knowledge. So, I'm a bit indifferent to how well something's been justified. I would rather know that it had been criticized and remained unfalsified.
Quoting javra
No, I probably could try to; but I was alluding to the third law of thought: "What is, is." The fact you possess an unknown error in your knowledge is simply a matter of being human, subject to error. So, there is no need to state it from a position of authority any more than stating other obvious undoubted things.
Quoting javra
Well, stating that the error is unknown to the individual implies to me at least that we aren't discussing a single upheld known, but rather the set of upheld knowns. So, I suppose I agree. I'm thinking we may be doing the same dance to different songs.
You seem to find that where our justification is subject to error our true beliefs fall short of knowledge.
I'm really just skipping the middle man and suggesting our definition of knowledge falls short of reality. Because either our apprehension of what is true or our justification for what is true will be subject to error so long as we are human. I think we nearly agree.
Yes, I was having a bit of fun.
But of course.
Quoting Cheshire
Yes and no. But here, to approach the matter from a different angle, we'd start addressing the issue of universals. Justness, or the property of being just, is only found within individual minds; yet, it is impartially applicable to all minds, regardless of what the particular mind might want to make of it. So the universal of justness is a universal standard by which all judgements, be they rational or irrational, are measured. And the decision to deem something sufficiently justified rests upon the mind(s) concerned.
Though I already know the concept of universals is a big and contentious issue. But this is my take.
BTW, tangentially, I venture that lesser animals do not appraise the world via what we recognize as logical contradictions. If so, then the universals of the law of identity, non-contradiction, and excluded middle apply to them as well. Again, making such universals technically be ubiquitous universals. It's a supposition, but I find merit in it all the same.
Quoting Cheshire
Remaining unfalsified is itself a form of judgment as pertains to justification: the means used to appraise something as unfalsified will themselves be a form of justification. But this aside:
If one upholds that all justification and appraisals of truth are fallible, then by what (rational) means does one discern what is and is not in fact wrong if not via justifications? (an answer here is sincerely wanted)
Quoting Cheshire
The third law of thought is that of the excluded middle, which naturally follows those of identity and noncontradiction. (for technical purposes, this when the qualifier of "in the same way and at the same time" is applied)
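For reference, the three classical laws can be put in standard propositional shorthand (textbook notation, not anything specific to this thread), with the qualifier "in the same way and at the same time" left implicit:

```latex
% The three classical laws of thought in propositional form.
% The usual qualifier "in the same respect and at the same time" is implicit.
\begin{align*}
\text{Identity:}         \quad & A \equiv A \\
\text{Noncontradiction:} \quad & \neg (A \wedge \neg A) \\
\text{Excluded middle:}  \quad & A \vee \neg A
\end{align*}
```

Excluded middle does read as following naturally once the other two are granted: if no proposition can be both true and false, and each is identical with itself, what remains is that each is one or the other.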
More importantly: How does it follow that some given which is liable to error is therefore erroneous?
Quoting Cheshire
Here's a crucial point in which we disagree: that our awareness of what is ontic is liable to error does not then entail that there is nothing ontic. Hence, the distinction between operational knowledge and ideal knowledge. Until infallible appraisal of truth and justification can be made, we will not be able to obtain ideal knowledge (there's a caveat to this, but it applies only to one metaphysical given which is itself a-rational: that which just is; and the obtainment of ideal knowledge of it also requires a literal eradication of distinction between itself and all forms of subjectivity ... this only as a hypothetical of what might be possible in principle; it's a trite issue but I've mentioned this hypothetical exception for maintained accuracy all the same. Please don't mind this part if it doesn't make sense or apply to your concerns as pertains this thread's issues of knowledge). Again, until then, we only have operational knowledge of what is, which itself is meaningless without the standard of ideal knowledge ... by which it can become potentially falsified.
Hence, until you evidence why the possibility of being wrong about X entails that one is wrong about X, that which we operationally know can well be fully conformant to reality.
If you find yourself disagreeing, then please evidence how fallibility entails the necessary presence of error.
I actually do agree, but would add that we may not ever know if it is actually ontic, because of this liability. Simply put, objective truth may be possible, but knowing when it occurs might not be.
Just my quick answer. I intend to give your entire response the attention it deserves.
Quoting Cheshire
The proposition that there is nothing ontic directly entails the following: it is ontic that there is nothing ontic; thereby concluding in both A and not-A at the same time and in the same way. If we allow for one logical contradiction, such as this one, to be valid/just, then it would lead into a type of ubiquitous unintelligibility—for anything could then potentially be valid only if logically contradictory. We are therefore stuck with the law of noncontradiction for as long as we want anything to remain intelligible to us. Thereby necessitating that we mandatorily accept that there in fact is something ontic. This too is not infallible, but I propose it is not falsifiable either. (Having read up on it some, I’m not big on dialetheism for this reason—which is upheld due to a lack of justification for the law of noncontradiction.)
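The reductio here can be put schematically (this is my own rendering, with $O$ as shorthand for "something ontic is"):

```latex
% O := "something ontic is". Asserting the denial of O as how things
% in fact are is itself an ontic claim, so:
%   \neg O \rightarrow \mathrm{Ontic}(\neg O) \rightarrow O
% The denial thus entails both O and its negation:
\neg O \;\rightarrow\; (O \wedge \neg O)
% By reductio (granting noncontradiction), we conclude:
\therefore\; O
```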
So we then can "unfalsifiedly" know that something ontic is. But as to the details, such as in our knowledge of what is ontic being accurate, yes: we remain fallible with sometimes lesser degrees of certainty. Still, again, this does not entail that we are thereby wrong.
Quoting Cheshire
Implicit in this sentence, hence proposition, hence thought is an assumption of held ideal knowledge. If it weren’t, I don't see how this would be an issue. We do operationally know when we are in possession of objective (which I interpret to mean what I previously specified as “ontic”) truth. This, again, because our beliefs of what is ontically true are well justified to us and, in the process, not falsified as in fact so being objectively true. But as to holding an ideal knowledge of this, this cannot be had till infallible truths and infallible justifications can be provided. (It’s a bit of a quantum leap, I imagine, between the assumption that we can hold absolute/ideal/objective/infallible/indubitable/etc. knowledge (for which truth—if not also justification—with the same qualifiers is required) and a justified conviction that such a thing is not, at least presently, possible to obtain for anything whatsoever.*)
Again, we cannot infallibly know anything. But we can and do fallibly know very much--some of which, such as 1 + 1 = 2, is currently unfalsifiable by any means we can currently think of.
I mentioned these two points, in part, because your stances seem to me to present a kind of slippery slope toward Pyrrhonism. This is where, roughly expressed, it is deemed warranted to not hold any beliefs due to all epistemological criteria being fallible. But then, if so held, the very act of debating would be a bit hypocritical.
------
* In thinking of a possible criticism for what I've stated: Instead of something along the lines of "I know that I know nothing", replace with, "I/we fallibly know that I/we infallibly know nothing".
Do you have a standard by which you determine what counts as non linguistic thought and/or belief content? If so, what is it and how did you arrive at it? If not, by what means are you determining what counts as being non-linguistic?
Wikipedia defines thought as encompassing a “goal oriented flow of ideas and associations that leads to a reality-oriented conclusion.” Granting this definition (imperfect though it might be), whether thought requires language and, if not, when it does and when it doesn’t is, to me, again, ultimately grounded upon metaphysical presuppositions. And I currently do not want to engage in debate over metaphysical presuppositions. If this is too abstract, one issue is that of whether or not thought is teleological. And language to me is at the very least one form of highly developed thought. But, again, I find that answering your questions requires complex, metaphysics-contingent answers—which I’d rather not presently discuss.
Oh yea, well:
:razz:
Who wants to be wise, anyway. :smile:
Can one trust prior to being able to doubt?
Quoting javra
Seems to me that a maintained trust that there is a "correspondence to fact/reality" requires understanding the notion in quotes. I do not see how a non linguistic creature can have a maintained trust based upon understanding a linguistic conception of "truth".
I do, have, and would continue to argue that correspondence is prior to language, and thus prior to conceptions thereof.
My answer is an unequivocal "yes". To doubt one must first hold a trust for that which is accurate, for one example. Since we were talking about non-linguistic creatures, were a dog or a chimp capable of doubting something, it would first need to trust that there is a distinction between what we term right/true/correct and wrong/false/incorrect (they each point to something held in common). Addressed otherwise, doubt always is contingent upon a preexisting certainty, i.e. on something which we trust to be.
Quoting creativesoul
Ah. I can see how that could be inferred. But no. What I want to address is not something which is because it takes the form of a thought which we can manipulate via the act of thinking. I instead was here addressing what to me are inherent aspects of awareness. For example: To be aware of anything, I argue, presupposes a trust that that which one is aware of is as one interprets it to be. [It would be a long list, but, for example: an imagined ghost is trusted to so be imagined; a so called real apparition of a ghost is trusted to be real by those who "see" the ghost. Thoughts and justifications as to what was and was not real that occur after the fact here placed aside; though these too are likewise trusted to be as one apprehends them to be ... and so forth.]
Hence, I was not addressing this as an acquired trust. For example, we instinctively trust that that which we see is as we see it to be; as do animals; we humans can, however, come to no longer trust our eyes in certain situations due to learned trust: such as when sticks seem to bend when submerged in water. But this is built up over our innate trust in what we see being as we see it to be. BTW, I gather that some presume human infants acquire all such trust. I disagree with this. As an example: an infant trusts the stimuli of a nipple to be as it anticipates it to be and acts accordingly, without having learned how to do so or consciously holding conceptual understandings of what it's doing and interacting with. Nevertheless, in so doing, it innately trusts its impressions (not very visual, but consisting of many tactile perceptions) to "correspond to reality". Not reality as a conceived of ontology; rather, reality as that which is real.
I'm curious. Do you uphold a "blank slate" notion of mind?
I think we agree that all (reasonable/justifiable) doubt is belief-based (trust-based on your framework). It seems you've also implied that doubt is dependent upon a creature's awareness of falsity/mistake?
Quoting javra
I agree with the overall sentiment.
Quoting javra
No.
There is no ability to doubt it for pre-linguistic creatures.
Well, again, for me to believe is to trust that; and a belief is the contents of that which is trusted.
If to doubt is to presume some preestablished certainty as possibly being wrong, then yes, for a creature to doubt (themselves or others) they’d need to be capable of holding some innate understanding of falsity/mistake. I’m thinking of a dog that wants to traverse some narrow bridge, for example, but doubts whether or not it can do it via some sensed fear or anxiety (i.e., holds some trepidation about it). It would need to be aware that there is a possibility of being mistaken in trusting that it could traverse the bridge. Because of this, it would need to hold some notion of falsity/mistake—obviously not linguistic or linguistically conceptual.
Quoting creativesoul
Cool.
Even if so, we maybe agree that one does not need to doubt in order to trust? So we may hold beliefs that are justifiable and true without needing to doubt/question ourselves about them, for example.
I disagree here. You've presupposed what needs to be argued for, and arrived at the realization that the account needs some unaccounted-for notion of falsity/mistake. We could do away with the need for a non-linguistic notion of being mistaken. On my view, that is not even possible. Dogs can be uncertain about what may happen as a result of having had unexpected consequences result from their actions in the past. This doesn't require a non linguistic notion of being mistaken.
Yes. Let's not conflate that which is prior to language with that which is not though.
If we set out trust in a minimalist fashion, in order to trust without the ability to doubt, we would lose sight of all of the different situations where one deliberately does not doubt... that is... where one intentionally places confidence in the truthfulness and/or reliability of something or someone else... usually a source. I'm reminded of Russell here...
I don’t follow. Here, written hastily enough, a more formal argument:
-- Premise 1: If there is uncertainty of any form, there will be uncertainty about something (there is no such thing as a context-devoid, free-floating, uncertainty).
-- Premise 2: If there is uncertainty about something, there will minimally be two competing alternatives regarding that something: that that something is (else should be, or can be done) and that the same something is not (else shouldn’t be, or can’t be done).
-- Premise 3: Uncertainty holds the potential to cease so being.
-- Premise 4: The potential of uncertainty being resolved entails the following: Whichever former alternative remains at expense of all others, this now resulting singular possibility/decision will signify that—to the mind of that which was formerly uncertain—all former alternatives other than the possibility which remains were wrong (if addressing something of fact, a belief-that).
-- Premise 5: In order for premise 4 to hold any validity, there must be some sense of wrongness/mistakenness v. rightness/correctness on the part of the mind involved.
-- Conclusion: The presence of uncertainty entails an awareness of the capacity to be wrong/mistaken as well as of the capacity to be right/correct as pertains to some specific given.
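Schematically, the argument's skeleton might be rendered as follows (my own reconstruction; the letters are shorthand for the premises above, not part of the original wording):

```latex
% U := uncertainty about some given X
% A := at least two competing alternatives regarding X
% R := the uncertainty about X is resolvable
% W := a sense of wrong/mistaken vs. right/correct in the mind involved
\begin{align*}
\text{P1--P2:} \quad & U \rightarrow A \\
\text{P3:} \quad & U \rightarrow R \\
\text{P4--P5:} \quad & (A \wedge R) \rightarrow W \\
\text{Conclusion:} \quad & U \rightarrow W
\end{align*}
```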
Please explain what you disagree with and why in the just given generality—so that I may better understand what you have in mind. If you answer that only humans can understand the concepts to any of these words, you’d be completely missing the intended point of the argument—which aims at universals regarding how the mind works (in this case, as pertains to the presence of uncertainty). In which case, without getting into philosophy of mind or that of metaphysics—which I don’t care to do presently—we’d at best end up running in circles, something that I don’t want to do.
Quoting creativesoul
Only if one were to take an either/or approach to it, which I’ve already explained is not my take. To me, we all have innate “minimalist” trust/beliefs and our more complex beliefs are built up on top of them.
But this is all deviating from the issue of belief.
How do you go about conceptualizing non-linguistic belief?
Also, can you provide any example of a belief whose contents are not trusted to be by the respective being? Else, can you explain where the difference lies between trusting that something is and believing that something is?
BTW, if you’d like to mutually agree to disagree and be done with the discussion, I’d be onboard.
I appreciate the offer, and you're more than welcome to end this discussion if you so choose. For me though, you're one of very few people that I've debated or had discussion with on any philosophy forum who seemed like they a) had an interest in non-linguistic thought and/or belief and b) had some well thought out notion of what that was.
I'd rather flesh out our agreements as well as our disagreements, with the main focus being upon the agreements. I'm about to work on a reply to the rest of the last post, paying particular attention to the question regarding how I conceptualize non-linguistic belief, because that method is pivotal to arriving at a convincing criterion for what counts as being such.
We differ remarkably regarding what an awareness of being wrong/right requires.
Premiss 2 presupposes that the creature experiencing uncertainty understands a plurality of possible outcomes. I find that presupposition dubious for a language less creature.
Here's why/how I've arrived at my own understanding of the matter...
An awareness for the capacity to be right/wrong requires thinking about one's own thought and belief. Thinking about one's own thought and belief requires the ability to become aware of, isolate/identify, and subsequently further consider one's own pre-existing thought and belief. That requires written language. Thus, an awareness of the capacity to be wrong/mistaken as well as an awareness of the capacity to be right/correct requires written language.
A language less creature does not have what it takes to be aware of the capacity to be wrong/mistaken or right/correct.
Quoting javra
Next post...
Much to agree with here...
There is plenty of evidence to support the conclusion that, while in utero, humans are drawing correlations between auditory sensations and their own level of comfort/discomfort. That satisfies my own minimalist criterion for what counts as rudimentary thought and belief formation.
Those innate beliefs(a creature's thought or belief at the moment of birth) would have to consist of correlations drawn between things that exist, in their entirety, prior to being a part of the correlation. Drawing the correlation is belief formation. The correlation itself is the belief. The content of the correlation is the belief content.
I'm still wrapping my head around your framework...
Quoting javra
The best one yet! I've been wanting and waiting for this one for a while.
:grin: I know; I know ... :cool:
@creativesoul
I find that you’re thinking of right/wrong in too abstract a manner—as only relatively mature humans can do. Yet very young children sense when they do wrong things (cheat, act aggressively, etc.) just as much as when they do good things (overlooking the more fuzzy grey areas). What’s more, so do dogs.
Though I’d like to avoid metaphysical issues, I find I can’t address this properly without eventually mentioning something of metaphysics. To be relatively informal about it, there are metaphysics of sharp and absolute division pertaining to different life forms’ abilities and, on the other side, there are metaphysics of gradations. Whether it's Richard Dawkins or many, but not all, Abrahamic fundamentalists, in the former camp there is a metaphysical divide between man and beast. I take the latter metaphysical position, one of gradation which, when sufficiently extended, results in sometimes expansive leaps of ability. I also don’t approach things from a physicalist account; pertinent here is that to me there is a non-subjective objectivity at play in reality at large: justness—this just as much as the laws of thought—is to me an aspect of this non-subjective reality which is equally impartial to all discrete givens. Why this is important: in the latter position, we do not learn of justness conceptually in order to sense right and wrong, no more than we learn of formal laws of thought in order to operate via laws of thought. It is not something acquired from language but, instead, it is a universal facet of mind which language expresses, however imperfectly. Here there is no absolute metaphysical division between man and beast; both are, in a very trivial way, equal facets and constituents of nature. It is not that a less intelligent being is metaphysically apart from the laws of thought, or from the universal of justness. It is only that less intelligent beings are in due measure that much less capable of forming abstractions about these universals—which, as metaphysical universals, concretely dwell within all of us (with or without our conscious understanding of them) as innate aspects of what, or who, we are as sentient beings.
So, potential debates galore on this issue—and the issue can sprawl in myriad directions. I’ve highlighted some of my beliefs, though, only to better present my disposition.
A dog doesn’t hold a conscious understanding of “alternatives” regarding some given nor of “right and wrong”. Nevertheless, to the extent that intelligent creatures, dogs included, can become uncertain of givens, they will actively experience competing alternatives which they must choose between so as to resolve the uncertainty. Not all of our uncertainty—as adult humans—consists of consciously appraised alternatives; arguably, most of our uncertainties do not. They instead consist of competing gut-feelings, intuitions which we do not during the event take time to linguistically qualify (never mind contemplate), and we as conscious awareness choose, or decide upon, one—thereby forsaking all others once the decision has been (often) unthinkingly made. Arguably, this can easily be complicated by some of these uncertainties taking place in the unconscious mind—such that they bring about states of anxiety, disquiet, of fear … else, equally applicable, states of wonder, curiosity, awe, and sometimes even beauty (such that these states would not occur were we to be fully certain of all relevant aspects of that regarded). I’ll also add that not all forms of uncertainty equate to doubt: e.g., we can be, and most often are, uncertain about any number of future events without in any way doubting them. Yet, if there is uncertainty about something, what other mechanism can be at play other than that of competing alternatives for what in fact is?
I doubt this will resolve the given disagreement, but think of it this way. Were language mandatory for sensations of right/correctitude and wrong/mistakenness, Helen Keller could not then have made any non-stochastic choice in her life during her first seven years (I’ve checked with Wikipedia and Helen only began learning language at about seven years old). For she then could not have had any sense of mistakenness v. correctness via which to so make (non-stochastic) choices (I grant that stochastic choice is a contradiction in terms … but since I’m in a bit of a rush) … and choices are always made between alternatives.
Well, this better expresses some aspects of my worldview. But I’m skirting around issues which underlie it: those of metaphysics and of philosophy of mind. And I understand if there will be plenty of disagreement throughout the aforementioned.
I’ll likely get around to the rest over the weekend (bit short on time for now).
But to better understand: with the process of thinking in mind: can a thought, of itself, be defined as not necessarily consisting of a consciously understood abstraction (regardless of the degree of abstraction)? For instance, could we settle on correlations between percepts being an act of thinking? This would not require language nor consciously appraised abstractions. Still, the implications of so defining thought would then be fairly expansive (e.g., if an ameba can make correlations between its percepts then it would be engaged in an act of thought while eluding predators (e.g. bigger amebas) or while searching for prey. Amebas can easily be discerned to elude predators and search for prey—which takes a bit of autonomous order within an environmental uncertainty to accomplish—but I mention them because, obviously, they are rather “primitive” lifeforms.). I lean toward a more inclusive understanding/definition of thought and, therefore, thinking—again, favoring the outlook of gradation rather than that of division. But I’d like to know your general position as regards the nature of thought before I reply.
Quoting creativesoul
No problem.
Of course. Very good points. I [s]thing[/s] think both me and creativesoul were limiting ourselves to how it might pertain to lesser animals. Feel free to complicate things, though.
(I'm a typo-holic. Can't help it. :roll: )
Nah. I'm talking about what a language less creature is capable of. With regard to a language less creature's thought and belief, they are rudimentary, very basic level simple correlations drawn between things that exist in their entirety prior to being a part of the correlation. Then there are the more complex products of the correlations themselves(when they become part of another correlation).
A sure sign that we've gotten something wrong here - when discussing non linguistic thought and belief - is if and when it is too complicated. Simply put, non linguistic thought and belief cannot be that complicated.
Regarding my earlier criticism that you're referring to in the above quote...
I'm talking very specifically - as precisely as possible - about what it takes to become aware of one's own fallibility, which is a much 'cleaner' way to say "become aware of one's capacity to be right/wrong". I offered an argument for the position I hold. It's been sorely neglected. That argument is based upon something very important. The distinction between thought and belief and thinking about thought and belief that the whole of philosophy has neglected to draw and maintain...
Regarding the bit about morality. Morality is all about what counts as acceptable/unacceptable thought, belief, and/or behaviour. Right and wrong in a moral/immoral sense as compared and contrasted to a true/false sense.
The children you speak of are in the process of acquiring moral belief. Dogs do no such thing. The commonality between the two is that both dogs and young children will draw correlations between what they do and what happens afterwards.
I'm going to take that last post in chunks...
Come on javra. Those points miss the point entirely. Non linguistic here means thought and belief that exists in its entirety prior to language. It does not mean unspoken thought and belief after language acquisition...
Besides that, music is language, poetry is language, metaphor is language, art... well who determines what counts as art? Does that determination require language in order for it to happen?
Surely.
None of that is language less... None.
All good.
Differences are certainly between our views. However, I'm not interested in fleshing those out unless they matter directly to the topic at hand. While I do often enjoy argument for its own sake, not here, not now, and not with you about this topic. That said, we are much more alike than unalike here.
You may want to know that I reject many an inadequate historical dichotomy. The objective/subjective one not excepted. All those that I reject I do so on the same ground. They cannot take account of that which is both and/or neither...
To be blunt about it, I've no interest in metaphysics for its own sake. None whatsoever.
To me, that's contradictory on its face, therefore unacceptable. Choosing between competing alternatives is existentially dependent upon knowing of them. Knowing of competing alternatives is existentially dependent upon understanding them. To be more precise, this bit began with claims about a non linguistic creature being aware of its own fallibility. That cannot happen.
Becoming aware of one's own fallibility is to become aware that one has false belief. Becoming aware that one has false belief requires one knowing that one has belief to begin with. Knowing that one has belief requires being able to think about one's own belief. Thinking about one's own belief requires identifying it and isolating it for further contemplation. Identifying, isolating, and further contemplating one's own belief requires written language.
Becoming aware of one's own fallibility is existentially dependent upon written language. A language less creature has none.
Uncertainty is the mechanism. It is fear based. A dog can have expectations. Those expectations can be jolted into fearful uncertainty(anxiety) about what's happening or what may be about to happen, by the unexpected happening and negatively affecting the 'sense' of familiarity that the dog had until then.
There is no need here for the dog to be aware that it had false belief, nor is it even possible.
We agree regarding the gradation aspect. That is particularly amenable for me with regard to initial thought formation and its successive continuation all the way through the transformative correlations that only spoken language, written language, and then again, metacognition have the goods to deliver...
In order to remain sensible and have the strongest possible justificatory ground, all that we call "non linguistic thought and/or belief" must share the same basic elemental constituents with conventional notions of thought and/or belief statements. The groundwork is imperative.
All examples of thought and belief are existentially dependent upon predication. All predication is existentially dependent upon drawing correlations between different things. All examples of thought and belief are existentially dependent upon drawing correlations between different things.
Given that all thought and belief is meaningful and presupposes truth (as correspondence) somewhere along the line, the presupposition of truth (as correspondence) and the attribution of meaning (being meaningful) seem to be irrevocable. They ought to be considered as part of an adequate minimum, and thus need to be part of the criterion for thought and belief.
So, that's three different elemental constituents that have been identified. Namely... 1. being existentially dependent upon drawing correlations between things, 2. being meaningful, and 3. presupposing its own correspondence.
All attribution of meaning is existentially dependent upon the existence of something to become a sign or symbol, something to become significant/symbolized and a creature capable of drawing a mental correlation between the two. All such correlation presupposes the existence of its own content (regardless of subsequent qualification).
Pavlov's dog was clearly proven to have made a connection between the sound of a bell and satisfying innate hunger. Involuntary salivation. He drew a correlation and/or association between hearing a bell and what happened afterwards. That bell became significant to the dog solely by virtue of the attribution of meaning which led to clearly held belief about what may come... expectation.
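As a side note, the correlation-drawing described here can be illustrated (purely as my own sketch, not part of either poster's framework) with the standard Rescorla-Wagner model of classical conditioning, in which the bell's associative strength grows with each bell-food pairing until the dog's expectation saturates:

```python
def condition(n_trials: int, alpha: float = 0.3, lam: float = 1.0) -> float:
    """Return the bell's associative strength V after repeated
    bell-food pairings (Rescorla-Wagner learning rule).

    alpha: learning rate; lam: the maximum strength the outcome
    supports (food present -> 1.0). On each trial, learning is driven
    by the prediction error (lam - V), so V rises quickly at first
    and then levels off as the expectation becomes established.
    """
    v = 0.0
    for _ in range(n_trials):
        v += alpha * (lam - v)  # prediction-error update
    return v

# After enough pairings the bell alone predicts food almost perfectly:
# condition(20) is roughly 0.999
```

On this picture, "expectation" just is a sufficiently strong learned correlation; no thought *about* the correlation is required, which fits the claim that the bell became significant solely via the drawn association.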
Mmmmm....
Have not gained enough confidence to clearly draw a line in the sand. However...
It does not seem to have the physiological makeup. Stimulus response satisfies the avoidance of danger and the gathering of resources. I've no supporting evidence nor argument for granting meaningful mental correlations that presuppose their own correspondence.
Adequacy matters.
But if this minimalist notion of thought and belief sets out what is the case prior to our setting it out, then the rightful application of it will produce consequences decimating many a philosophical 'problem' across a very broad scope/range...
I don't know that I agree with this. For instance, Kant's contention that existence is not a predicate.
Make the argument, we'll discuss it further. Otherwise... hand-waving and gratuitous assertions won't do here.
Music, poetry, and metaphor are definitely language. Saying otherwise is just plain asinine. Art is a social construct that is existentially dependent upon language.
None of these things contradict what I've claimed, nor negate it. If you believe otherwise, then that alone is adequate evidence to conclude that you've not quite understood the position I'm arguing for.
I just thought about this another way...
If offering an accurate account of nonlinguistic belief by means of art, music, poetry, and/or metaphor qualifies as 'capturing nonlinguistic belief', then I may actually agree...
I mean, I could put my own words to music or in poetic verse...
Metaphor can't do it though.
Art has a broad enough definition of what qualifies as art that I could envision some forms of art (music and poetry) doing it too...
Well, I've already set out my position on this... all belief is existentially dependent upon and consists of correlations. Correlation presupposes the existence of its own content.
On my view, the belief content is the content of correlation. This seems to be in agreement with your framework, aside from the equation you've drawn between trust and belief, which causes me pause...
Trust, to me, is pivotal though. I mean, we both place tremendous value upon trust. On my view, it is best understood as an unavoidable human condition arising from our being interdependent social creatures.
By my lights, you're attempting to situate trust into the timeline before it can be rightfully accounted for. The criterion for it seems to be so minimal that trust could be had by a creature that is completely incapable of doubting anything at all.
That seems problematic.
This seems to be the basis for the 'belief-that' approach. It certainly lends support to the method, regardless of whether it is intentional or accidental.
Typically, when we talk about one believing something, we're saying that one believes that X is true; is the case at hand; is the way things were, are, and/or will be; corresponds to fact/reality; etc.
Let X be a statement or proposition.
Here, it makes perfect sense to draw an equation between trust and belief, for the two terms are easily interchangeable without self-contradiction.
Trust requires a remarkable 'sense' of familiarity.
Familiarity(the kind that produces trust) requires a succession of the same or similar enough belief about that which is trusted. This familiarity cannot be had if innate fear takes hold of the creature. One cannot trust that which aggravates instinctual/innate fear, at least not one at a language less level.
Here, the two terms are not so freely interchangeable.
Familiarity(the kind that destroys trust) also requires a succession of the same or similar enough belief about that which is not trusted.
Aside from thought like that... there is no difference between thought and belief.
Quoting creativesoul
You’re forgetting the mind is a very complex thing. It includes, for example, unconscious processes that always effect, affect, entwine with, and bring about the consciousness’s form. And we do not hold conscious awareness of all our beliefs at any given time. At any given time, most of our beliefs are unconsciously held—staying there till they're brought up into conscious awareness for purposes of deliberation. And the assumption that non-linguistic thought and belief is somehow simple is, to me, very erroneous. I’ll give some data below to better illustrate my point.
Quoting creativesoul
I’m not talking about a conscious awareness of an abstraction/thought concerning the possibility of being wrong. I’m talking about an innate, inborn, unlearned, not consciously contemplated, Kantian-like (if you will) mental capacity to distinguish the category of right/correct/etc. from the category of wrong/incorrect/etc.
Some data. Taken from https://www.sciencedaily.com/releases/2009/08/090810025241.htm
Yes, it’s very rudimentary arithmetic ability. However: Here is found the capacity to discern error in matters of fact—which would not be possible devoid of a complementary capacity to discern non-error in the same givens. To whatever extent this capacity might be learned, if any, it is not contingent on language use.
Quoting creativesoul
You can maybe see how your argument is in contradiction to the data. Your argument's premise that awareness of right/wrong requires thought about thought and belief is faulty; it only requires belief and thought (without requiring thought about either).
Quoting creativesoul
I disagree, but this would be a long argument and relatively tangential to what we're focusing on. Still, dogs can be curious, and curiosity cannot occur when there is full psychological certainty relative to all matters regarded. So curiosity to me requires uncertainty, one that is obviously not fear-based. All the same, trying to keep my reply focused ...
Quoting creativesoul
From https://en.wikipedia.org/wiki/Dog_intelligence#Theory_of_mind (it's not a long read):
Dogs are relatively good at deceiving others, and this requires that dogs hold a rudimentary theory of mind. This understanding of other minds is not learned via language, nor is it likely in any way to be thoughts about thoughts and beliefs. It is also likely not something learned but something innate, inborn, Kantian-like, etc. that only gets refined via experience. Here there is a belief about what the other will believe when deceived. But it seems obvious to me that this belief about other minds' capacities is not acquired via correlations between things (for it is not held as a result of learning about other minds; puppies will hold such beliefs about other minds).
Remember that I uphold a difference between innate beliefs (beliefs we're born with), learned beliefs (stored in our unconscious after having been acquired, till brought up into consciousness), and enactive beliefs (e.g. beliefs we actively deliberate upon consciously). Correlations will be one means of acquiring learned beliefs, but they cannot account for innate beliefs. So I strongly disagree that all beliefs depend on correlations in order to manifest.
Quoting creativesoul
To me, not at all. If all belief (innate, learned, and enactive) is a form of trust in what in fact is, then thought is a process of relating various beliefs, especially those that are innate in less intelligent living beings. The two processes are to me therefore distinct, and their ontological differentiation in lesser beings is not contingent on lesser beings' meta-cognition (see again the two examples of dog intelligence, neither of which is contingent on a dog's capacity for metacognition).
There's again a lot here that could be disagreed with. So I'll stop here and see where the reply leads.
Quoting creativesoul
groovy :smile:
BTW, it's not an ideal time for me to be hanging out on the forum. Can't say when my next reply will be. But I would like to focus on the two empirical data addressed: that of dogs' capacity to discern error in 1 + 1 = 1 and that of dogs' having a very rudimentary theory of mind (more specifically, both belief and thought as regards other minds when these other minds are deceived).
I would need to see the actual studies and experiments that these conclusions were based upon in order to offer a more informed opinion of the reliability of those conclusions. However, some things can be said without my having access to that.
There is a remarkable difference between noting differences and noting errors.
I would be willing to bet that there is nothing in either experiment or study to justify saying that the dog noticed an error rather than saying that it noticed a difference between the equations it was presented with. Recognizing differences doesn't equate to recognizing errors.
Counting is not the same thing as recognizing different quantities. Again, I do not have the studies or experiments on hand, however, I would be willing to bet that the dog drew correlations between some symbol or sign and a quantity.
That said...
It seems that there is a fundamental difference between our views here. It may prove to be irreconcilable. In addition, you've now presented a strawman argument on multiple occasions. You've adamantly rejected things that I've not claimed. It is always better to actually present the argument and then clearly express which premisses or conclusions you disagree with and offer some valid objection for that disagreement.
I do not want to get into yet another discussion where one participant is criticizing another's position/argument without first granting the terms. That is the bane of philosophy.
I've yet to see you present an argument for this capacity being innate. You've asserted it multiple times. That is to presuppose precisely what needs to be argued for.
I've presented an argument which negates that ability and situates it at a minimum level of written language. That argument has not been directly addressed, although you've openly expressed your disagreement with it, and even 'strongly' disagreed with it, calling it "in error" or words to that effect.
That's not acceptable to me at this juncture...
Hey, I’m trusting the info based on what I take to be the fact that the information on both sciencedaily and Wikipedia would not be up there were it to be uncorroborated, merely anecdotal, or hearsay. Both sources heavily rely upon peer-review, after all.
Quoting creativesoul
Hm. Whatever I might have either not addressed or, else, poorly represented was unintentional on my part. I’m more than OK with simply agreeing to disagree at this point. I’ll leave it at that.
Ah well, if you ever want to do some philosophy here on this topic, I'll gladly re-join.
We've not even gotten to the point where we know what the disagreements are.
There's a reason why psychology is called a 'soft' science, and an appeal to authority is rather unconvincing, particularly nowadays given the way science is funded...
Do you not even grant these points?
Having engaged both in independent psychological (cognitive science) experiments (particularly, on the importance of eyes v. mouth in human non-linguistic communication concerning emotions) and in a neuroscience lab (experiments on zebra finches’ learned capacities to recognize and produce song patterns via brain lesions to critical areas in chicks, etc.)—both some twenty years ago—my personal experience illustrates to me that well-done psych research can hold far, far fewer confounding variables and, therefore, far greater statistical integrity than the often-termed “hard sciences” of biology/neuroscience. Take it or leave it. They’re nevertheless my experiences.
Quoting creativesoul
No, actually. But I'm feeling there's often differences with the semantics of the words we're both using. And to get to the bottom of it would most likely be very time consuming.
At any rate, it was nice engaging in this overall debate with you. But I’ll leave it where we’re at. Till the next time. :up:
A languageless creature does not have what it takes to be aware of the capacity to be wrong/mistaken or right/correct.
You've disagreed with the first claim above, which was being used as a premiss. It needs to be set out so that you can address its grounds prior to its being used as a premiss.
p1. In order to be right/wrong, one first has to have true/false belief about something or other.
p2. Having belief does not require language.
C1. One can have true and false belief (one can be right/wrong) without language.
p3. To be aware that one is right/wrong is to be aware that one has belief.
p4. Being aware that one has belief has - as the 'object' of awareness/consideration - the belief itself.
C2. Being aware that one has belief is thinking about belief.
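For what it's worth, the step from p3 and p4 to C2 is just a chaining of implications, so the formal validity of this sub-argument is not in question; only the truth of p3 is. A minimal sketch in Lean 4 (all predicate names are mine, purely illustrative stand-ins for the argument's notions):

```lean
-- Illustrative only: predicates standing in for the argument's notions.
variable (Creature : Type)
variable (AwareRightWrong : Creature → Prop)   -- aware that one is right/wrong
variable (AwareHasBelief : Creature → Prop)    -- aware that one has belief
variable (ThinksAboutBelief : Creature → Prop) -- thinking about belief

-- C2 follows from p3 and p4 by composing the two implications.
theorem c2 (c : Creature)
    (p3 : AwareRightWrong c → AwareHasBelief c)
    (p4 : AwareHasBelief c → ThinksAboutBelief c) :
    AwareRightWrong c → ThinksAboutBelief c :=
  fun h => p4 (p3 h)
```

This makes explicit that any dispute over the conclusion must be a dispute over p3 (or p4), which is exactly where the exchange below lands.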
Look javra, I can more than appreciate that experience, and don't take the "soft science" comment personally; it wasn't about you. It was about the fact that there are several equally compelling, equally valid explanations for any given set of non-linguistic behaviours. Hence... the crucial need for a sound philosophical approach to what counts as thought and belief, particularly what counts as non-linguistic thought and belief.
Surely you can understand my trepidation regarding the conclusions in that article, given that I do not have access to the details of the experiments performed?
OK
P3 is to me not true/right/correct.
By analogy: I can be aware of time (as can most any lesser mammal, for example) without needing to have an awareness about me having a belief about time. Same with space. Same with quantity and rudimentary arithmetic. Same with the law of noncontradiction. Same with values we term “bad” and “good”. Don’t tell me we humans now have a conclusively definitive understanding of what time, space, mathematics, laws of thought, and the meta-ethical reality of bad/good are … Nevertheless, we now as adults—just as we did as infants—hold an awareness of them … one that does not existentially require a belief/thought about our belief/thought prior to the very awareness being present.
Same type of pre-linguistic, pre-meta-cognitive awareness can be had in relation to error/non-error in manners a priori to an awareness about the belief that one can be erroneous/non-erroneous.
... as evidence, there's again the addressed empirical research into dog intelligence showing that dogs can find 1 + 1 = 1 erroneous. I don't have access to the original experiment(s). But, the way I understand and know this ethological research to be, those who attribute human-like abilities to lesser animals are viciously assaulted by others in related fields. So I'm inferring that were this statement not well-justified/grounded, it would never have been published by the APA.
There are two veins of thought here...
One about the experiments, the other about what awareness takes...
I want to continue, but keep them in separate posts. I'd like for you to continue here. Would you, could you, in a box? Would you, could you, with a fox? On a boat, with a goat?
Don't mind my silliness. It's a coping mechanism. :wink:
I want to continue, but want you to trust that it's worth it. It is to me.
Yeah well... without access to the details of the experiment, I cannot know if it's good quality or not. Do you have access to the details?
Quoting creativesoul
Do you agree with these two claims?
That seems to be where (and/or what) the bulk of our disagreements rest (upon).
:blush: ... well. Yea, but I make a big effort to prioritize the stuff that ought to be prioritized right now. So ... not that my word is in any way absolute ... but I'd like to not reply until after this weekend, my time.
Quoting creativesoul
no, but see my reasoning in what I added/edited in my previous post.
Quoting creativesoul
I agree with them, but they're not essential to the issue of recognizing 1 + 1 = 1 to be erroneous/incorrect/wrong/etc. This does require the recognition of error and different quantities.
Take your time. I'll be much less wordy in what I offer between now and then. It seems we've reached a point that we can discuss where our differences lie. I want to do that in the best possible way.
Enjoy your weekend, my friend.
p2. Having belief does not require language.
C1. One can have true and false belief (one can be right/wrong) without language.
p3. To be aware that one is right/wrong is to be aware that one has belief.
p4. Being aware that one has belief has - as the 'object' of awareness/consideration - the belief itself.
C2. Being aware that one has belief is thinking about belief.
I have small issues with p1 and C2.
What is the difference between an opinion and a fact? If I think I am right saying abortion is wrong, how would there be a real effort in determining what is true or false about that belief?
Two people could be right and differ completely about the same thing.
If I believe I love someone, having that belief is not in thinking it is a belief but thinking it is a fact. If I have a belief and maintain that it is a belief then it must be still undetermined whether or not whatever the belief is to posit is a fact, and therefore I would not have a belief at all but a skeptical opinion.
Well, there's certainly a difference between "nothing ontic" and lacking the knowledge that a thing is ontic. So, the explanation that follows doesn't really fit the claim I'm making here.
Do you have any beliefs you've never bothered to formulate? I think I do. What am I drawing from by stating my beliefs, if not from a source of unspoken things? The belief doesn't come into my mind after I've said it.
Or I could join the party arguing that unconscious realization of object permanence is direct evidence of knowledge without language.
Quoting javra
False analogy.
You've offered purported examples of things that one can be aware of without being aware that it has thought or belief about those things. What's being discussed here - what you've objected to - is what it takes to be aware that one is right/wrong.
So the relevant question is...
Can any creature be aware that it is wrong/right about those things without being aware that it has true/false belief about those things?
I think not.
There's a remarkable difference between being right/wrong and being aware of that. Being wrong/right is having true/false belief. Given that, being aware that one is wrong/right is being aware that one has true/false belief. Nothing else suffices.
A languageless creature can form and have true/false belief without being aware of it. It can experience unexpected events (and confusion) as a result. I'm not arguing against the notion of a non-linguistic creature having true/false belief. Thus, I'm agreeing that such a creature can be right/wrong. I'm arguing that such a creature cannot be aware that it is right/wrong without being aware that it has true/false belief.
You've not offered a valid objection to that. I'm fully aware that you disagree. However, your disagreement alone does not make my position erroneous. Nor does your position on the matter serve to be very convincing when it is stated without supporting argument. That's gratuitous assertion, and obviously unacceptable as a means of objection.
The conversation has gotten off the OP. It still applies. None of us are applying it.
You're missing the point.
We're talking about, and fleshing out the details for a criterion; what counts as thought and belief that is not existentially dependent upon language.
You're talking about belief regarding what's considered to be acceptable/unacceptable thought, belief, and/or behaviour. "Wrong" here isn't equivalent to having false belief. Rather, it's equivalent to agreeing or disagreeing with standards of moral belief (codes of conduct).
None of this is applicable to a languageless creature. Those creatures cannot have these sorts of beliefs.
What's 'this'? What doesn't make sense?
What doesn't make sense?
Our discussion has gotten off the OP. That is clear because we're no longer discussing its relevance to the OP. However, it is still relevant to the OP. Because it is still relevant to the OP, but we're not discussing that relevance, it makes perfect sense to say what I did.
Yes, one can easily say almost anything.
But what of the belief that something is right or wrong, with seemingly no conscious basis? An example of this is taboos. There are certain taboos in ancient cultures of which the basis for believing certain things transcends any linguistic approach to them.
There is too the tendency to completely change words and names of people in some ancient cultures based on beliefs that have absolutely no intelligibility ascertained through analysis.
Poetic ideas. Fantasy.
Language is a rendering of experience. Belief is prior to language not in a temporal sense but as something more proximal and/or primordial. Belief belongs to the realm of experience. Belief is instantiated by language in that belief lacks a manageable form prior to its translation. Language is rendered by belief. Language is, partly, a sublimation of belief. An example of this is as follows.
I met a person. I fell in love with that person. I believed that I loved them. I believe still that I love them. But the feelings constituting this belief do not originate with the words that contain them in such a statement about them. The feelings constitute an inclination, tendency, and direction, a sort of amalgamation of feelings and affects, and such a 'thing' crystallizes into language to be expressed. Is this not the fundamental operation of language: to, primarily, express? What would be expressed if it originated in the same tool of expression? Would all belief thus be a sort of simulacrum, representation after representation? Nihilistic? Some web of the arbitrary? This is obviously false. Something prior to language is expressed by language. Never does a belief originate in language, unless it is an artificial conglomeration labeled as 'belief.'
Cheers, amigo. Good news is I managed to do the more important parts of what I should’ve done. But back to debating.
Quoting creativesoul
Well, not by my count. The analogies intended to address non-reflective awareness of certain givens universally applicable to all awareness-endowed beings (I maintain, to all life). Here’s a very relevant, yet controversial, issue (relevant to the issue of awareness): the awareness of self. Self-awareness as it’s typically understood requires thought about thought/belief in the form of a concept of self. Yet the sheer awareness of what is other and what is not-other—and, thereby, an innate and non-reflective awareness of selfhood via which one acts and reacts—is inherent in all life; otherwise, it would starve to death, for one example. What I’m trying to get at is that the same non-reflective awareness of what is other and what is not-other—for simplicity, here strictly concerning dogs—can apply with equal validity to a non-reflective awareness of what is correct and what is erroneous. More on this below.
Quoting creativesoul
Whereas I, again, think this is the case.
Quoting creativesoul
Addressing only the first sentence, yes, of course; but this only from the point of view of our adult human awareness which, in part, consists of an awareness of our abstracted notion of what the true/false dichotomy requires. But the true/false dichotomy doesn’t exist because we’ve conceptualized it as an abstraction; rather, we’ve conceptualized it as our best map of a pre-existing territory. In this case, roughly expressed, the territory is the potential relations we as sentient beings hold with that which, firstly, is other relative to us as consciousnesses and, secondly—or, even more abstractly—with that which is ontic (here including the very presence of us as consciousnesses). But one does not need to conceptualize what truth and falsity are in order to make this distinction via consciousness/awareness—just as a being does not need to hold an abstracted understanding of selfhood to hold a crude but staunch innate awareness of what is itself and what is other.
Quoting creativesoul
Here, I’m picking up on the culturally common understanding of awareness as consisting of humans’ awareness of abstractions regarding awareness. Thus, of self-awareness in the sense of being aware of an abstraction regarding awareness as the core of the (conscious) self—or something to this effect. It yet still amounts to a belief about belief(s)—and not to the non-reflective belief itself. Ok, this issue of non-reflective beliefs and acquired complex beliefs which then act as non-reflective beliefs via which we then filter yet other beliefs we're addressing can, of itself, become very complex. Still, I’m trying to clarify that this is not what I’ve previously intended:
Imagine, for example, that to the dog 1 + 1 = 1 just doesn’t feel right whereas 1 + 1 = 2 does. The dog then acts and reacts accordingly (I imagine only on average in relation to this simple arithmetic). The dog here doesn’t need to hold an awareness of the concepts of true and false (nor of the concepts of error and correctitude, for that matter … all of which are abstract thoughts/beliefs which one holds trust for, i.e. believes). Nevertheless, the dog will instinctively trust via its awareness-dependent apprehensions of information (i.e. will hold a pre-reflective awareness) that one sum is wrong (and will thereby find it unfavorable) and the other is right (and thereby favorable).
I don't know if I've lost you so far—this regardless of whether or not you agree. I'm sure that if I have you'll let me know. But here's a different example that may be of greater service:
Dogs are relatively good at deceiving. This, again, requires a belief about the beliefs of others when they are being deceived. For willful deception to be at all effective, the dog then must hold a certainty that engaging in behaviors X will (or at least is very likely to) create an erroneous belief in the other which—simultaneously—the deceiving dog apprehends to be an erroneous belief and, therefore, not a correct belief. Wikipedia gives the example of a dog that sits on a treat to hide it till the other leaves the room. I’ve got plenty of anecdotal accounts of my own (e.g., with a very intelligent shepherd dog I had as a kid), but let’s go with the Wikipedia example. The dog must be aware that the treat really is beneath its bum. It must also be aware that by concealing it this way the other will then hold an erroneous belief that there is no treat in the room. Here again, I argue, is required an awareness of error and non-error regarding that which is—an awareness that is not dependent on abstract thoughts/beliefs regarding the concepts of right/wrong, or true/false, or error/non-error, etc. A belief-endowed awareness that can well be non-reflective (though in this case likely does contain some inference and, hence, reflection regarding what's going on in the mind of the other).
I’ll grant your objections to the study that dogs can discern error in 1 + 1 = 1 (though I yet disagree with them) … but when it comes to dogs’ ability to deceive, here I’m holding fast. I’ve had too many experiences with dogs to deny them this ability.
I found your statement somewhat ambiguous and was doing my best to cover all the bases, just in case.
The more important part of my reply was this:
Quoting javra
In other words, your use of knowledge here is that of an absolute, or infallible, knowledge. That "we may not ever know if it is actually ontic"—for example—is only a problem when one believes such infallible knowledge can be had. Come to believe that we cannot hold infallible knowledge in practice for anything, and this problem fully dissolves, for we then can and do fallibly know "if it's actually ontic"—and no other form of knowledge is possible.
I disagree.
Quoting Cheshire
Ok. Noted.
Just read your most recent reply to me...
I think we're making progress, which is saying something. It's not so much that we're in agreement, but rather that the point(s) of divergence is becoming clearer. That is, the points where we choose alternative explanations for the same thing...
I want to do that reply justice...
I'm working on it.
:smile:
P.S. I'm still not quite sure that we completely disagree. I mean, our viewpoints still may be commensurate with one another to much greater extent than not...
I still disagree, but I'm starting to understand why... I think. To dodge a bit of confusion, I'm reading [absolute, infallible, ontic, ideal, and objective] knowledge to be the same thing. I disagree that it is a problem to not know when our knowledge is infallible, so I don't see any reason to subscribe to the notion that we can't have it. I think you're saying something like 'we can subjectively know if we have objective knowledge, because objective knowledge isn't a thing'. I suspect much of our knowledge approximates objective truth to a very high degree. Right or wrong, is this where our viewpoints differ?
Quoting javra
I was kind of afraid that might be the case.
Quoting javra
Sorry, this doesn't translate coherently to my ape brain. I don't disagree or agree.
Quoting javra
I'm reading "operationally" to mean subjectively or non-ideal. Really, the above sounds contradictory even though I'm pretty certain it isn't intended to be read that way. It's the "...so objectively true" that I'm confused about.
I'd say we are at about a 50/50 communication barrier versus philosophical disagreement. I propose we establish three single statements we disagree on, so I know where to go from here. Perhaps the following is reasonable. Agree or disagree:
1. A person may know something objectively true and objectively know when they know it is objectively true.
2. You cannot 'subjectively/operationally' know when something is objectively true, by definition.
To some degree this is already so. But, yea, it would be nice.
Quoting creativesoul
Yes.
Since you left it at that, I guess I’d need to clarify some of my underlying positions at this point. To me, awareness entails a good number of things. Among them is that to hold an awareness of X is to trust that X is for the duration one is aware that X is—and, therefore, is to hold a belief that X is for the same timespan. Awareness-of, to me, thus entails some form of belief-that. As an example, if I’m visually aware that there is a tree in front of me, I will simultaneously, via the same awareness, hold an unthought-of belief that the given tree is in front of me. I may then reflect upon this belief, articulate it, or justify it after the fact; still, the basic belief was yet there at the time I saw the tree. This will not be belief about belief, nor will it be consciously active thought in the form of inference or deliberation. Yet it is still belief.
Thought, then, is to me various associations made between beliefs that hold some aim, regardless of whether these beliefs are stored in memory or else are actively experienced.
Yea … it’s not mainstream. But yes, this way I can for example find myself cogently stipulating that a dog can believe that there’s something wrong with 1 + 1 = 1 despite the dog not having in any way thought about it.
So, this has the potential to open up a whole can of worms regarding tidbits from philosophy of mind. I'm hoping not, though.
Will wait for your replies …
Yes, or OK, but in all honesty I dislike the term "objective" in this context. Knowledge and truths are held by subjective beings and, therefore, are subjective givens by entailment. Else, you're addressing objectivity in the sense of impartiality. And neither knowledge nor truth needs to be infallible in order to be (relatively) impartial.
Quoting Cheshire
Ah.
Again, I’m one of those fallibilists / philosophical global skeptics who uphold the following: any belief that we can obtain infallible knowledge will be baseless and, thereby, untenable. There are two ways to argue this: one is by lack of evidence to the contrary via which this belief can be falsified—and, here, the onus is on anyone else to provide evidence for any infallible knowledge (this is where evidence that the affirmed known is not perfectly secure from all possible error is provided via illustration of how this given holds some potential to be wrong); the other is by building up an argument from scratch to justify this belief (which would be lengthy … and, if I’m asked to do this, I’ll first point to a likewise lengthy first chapter on demarcations of certainty, uncertainty, and doubt that I currently have online. Again, building up a valid and all-inclusive argument for fallibilism takes some work. Meanwhile, there are the arguments found in Agrippa, Sextus Empiricus, and a few others.)
But in short, you believe that infallible knowledge is possible to obtain; I don’t. We might be at a standstill on account of this disagreement.
But I’ll continue replying as best I can all the same.
Quoting Cheshire
You may have not read or else forgotten a number of previous posts in which I’ve defined ontic truth and placed it in contrast to believed truth. Think of it as infallible belief of what is true that, thereby, factually is true belief. Or, alternatively, it might be better for me to instead refer to it as “ideal truth” … though I’ve really wanted to avoid Platonic notions of ideals, maybe this is a better terminology since I’ve already made use of “ideal knowledge” to contrast “operational knowledge”. (Again, I'm still fiddling with proper terms for the concepts.)
In my best review of previous posts: So ideal truth is factually correct correlation/conformity to that ontic given it regards. In contrast, operational truth is an embedded aspect of all beliefs-that. To believe that X is to believe that X is true, that X is not false, mistaken, erroneous, etc.—this with or without conscious conceptualization of the dichotomy between truth and falsity (added this to keep things better aligned with the discussion I’m having with creativesoul).
Any instance of operational truth can well be an instance of ideal truth. Furthermore, all, or at least most, operational truths will be assumed to be ideal truths while held by the bearer.
The fallibilist, however, will maintain that all operational truths are nevertheless fallible—not mistaken, but only hold some potential of maybe being mistaken in their in fact being ideal truths.
Hence, to the fallibilist, were any operational truth, aka belief-that, to in fact be an instance of ideal truth, it then would need to be justifiable due to its correlation / conformity to that which is real / reality at large.
Yet the fallibilist will also affirm that this justification too can only be operational / fallible—and not ideal / infallible.
So, to the fallibilist, when we believe something to be and can furthermore justify our belief, we then hold demonstrable knowledge whose strength is directly proportional to the strength of the justification. And until this justification can be infallible—aka, perfectly secure from all possible error—our knowledge can only be fallible.
And again, ideal knowledge is infallible knowledge. To the fallibilist, operational knowledge can only be fallible.
Hence:
Quoting Cheshire
Disagree. We may be aware of an ideal truth—else, hold a factually true belief—but we cannot hold ideal knowledge of this being so (for ideal knowledge requires an infallible justification, i.e. one that is perfectly secure from all possible error).
Quoting Cheshire
Disagree when the knowledge addressed is fallible and not infallible. Hence, we do fallibly / operationally know when we hold ideal / “objective” truth because, or on grounds that, our belief will be justified as being ideally true. What you’re inserting here is “infallible knowledge”, so that the quoted statement intends to read as follows: You cannot ‘subjectively/operationally’ hold an infallible known concerning when something is objectively true by definition. This rendition I’d agree with, but find it pointless on grounds that infallible knowns are baseless.
I’m guessing some of this will nevertheless yet be at least somewhat confusing, doubtless in part due to my less than perfect expression in a sound-bite post. (I too find the issue to be complex. I'm not happy with my presentation but I don't have the time to re-edit it at length. Call it laziness.)
Still, I’ll again draw attention to your belief that infallible epistemic criteria are possible to obtain; in this sense, if I'm correct about this, your beliefs are then those of an infallibilist. Here there is a strong contradiction with my own beliefs, those of fallibilism.
This is the foundational issue that either becomes resolved or else will make all other debates about this matter frivolous. Are infallible epistemic criteria possible?
Because this last question is a complex issue, I’m OK with calling it a draw at this point, but it’s up to you.
To add a bit to this...
This topic(thought and belief) is my forte, my life's work(in philosophy) as it were. Roughly put, I've found considerable reason to believe that the whole of philosophy has gotten thought/belief wrong. The consequences of not drawing the crucial distinction between thought/belief and thinking about thought/belief manifest themselves within nearly all of the greats I'm aware of, all the way up until today.
The evidence of this is clear. Convention still has it that belief content is propositional. Your claim that awareness entails some form of belief-that follows convention in this way. The belief-that approach is very useful in helping determine things about belief statements, and positive assertions in general. Hell, speaking in general for that matter. Namely, what statements presuppose and even a bit about the attitude of the speaker as well as a bit about meaning.
The approach lends itself and/or leads to reductionist and redundancy accounts of truth (although I reject those on the grounds of invalid reasoning/conclusion as a result of conflating "is true" with truth). They are all still capable of helping us to better understand ourselves and the world around us by virtue of a good grasp upon thought, belief, truth, meaning, and how they all work together.
I do not think that any major philosopher has gotten it all wrong, per se, regarding thought and belief. Rather, it is my contention that no one has ever drawn and maintained the aforementioned distinction. The proof of that is everywhere in philosophy.
So, back to the current discussion...
I realize that there are significant differences in our taxonomies/frameworks. I'm trying to write in such a way as to avoid using terms that you define much differently than I do. It's proving difficult at times, but I do know a 'trick' that I have yet to perform here. We may, and I suspect must, get into how we've arrived at some of these definitions/conceptions/criteria as a means for assessing warrant, should we want to argue about which framework is superior and why/how. However, that's not necessary unless we want to argue about that stuff. I do not, at least not without provocation, and you do not seem to be looking to provoke... so... neither will I.
:wink:
I no longer have the impression that you're working from/with methodological naturalism. Of course, you're surely aware that I am, or at least I make a very concerted effort at finding the simplest adequate explanation possible. I'm also neither a monist nor a dualist. Nor am I whatever those people call themselves who've (mis)attributed meaning to Spinoza and arrived at all life being conscious in some minimalist sense. Are you one of those people? Oh yeah, pardon my forgetfulness and candor here... Now I remember... panpsychists. Your repeated assertion that you grant awareness to all life lends itself to such a view.
What follows is what I believe we agree upon. I'd like to check though, and then perhaps set out the disagreements as well, and then take it from there. That ought to make this more like a real worthwhile discussion...
Some thought and belief is not existentially dependent upon language(written or spoken).
Hmmm...
:rofl:
Care to add to this? I'm less certain than I realized after re-reading things...
This presupposes that belief cannot begin well-grounded. That was a very helpful... and thus good... question, by the way!
Cheers!
"Un-reflective belief"...
I believe that there is such a thing. I'll go first. As always, we look to set out a minimalist criterion, which when met by some candidate or other, serves as a measure of determination. All things that meet this criterion qualify as being an unreflective belief. That criterion needs to be properly accounted for. I say "accounted for" here quite intentionally. Because we are reporting upon thought and belief, we must keep in mind that our account can be wrong when it comes to that which is not existentially dependent upon our account. Un-reflective belief is one such thing. We can also get it right.
Unreflective belief is a particular specifiable kind of belief. Our knowledge of it is existentially dependent upon written language; the belief itself is not. To be a kind of belief is to be one of a plurality of different kinds of the same thing. This necessarily presupposes a universally applicable and/or extant set or group of common denominators. These can be thought of as individual elemental constituents. Perhaps "ingredients" is best? Each of these is an irrevocable element, for they all play their own role in all belief... statements thereof notwithstanding.
So again... as always, we look to set out a criterion...
What counts as belief? What is the criterion which, when satisfied by a candidate, offers us the strongest possible justificatory ground for saying that that candidate is belief? This criterion must be met by any and all sensible, consistent, coherent usage of the term "belief". I say that that criterion must set out the aforementioned group of common denominators that all beliefs share, and that none of these ingredients can be existentially dependent upon written language, for all reflection is to think about one's own thought/belief, and that is existentially dependent upon written language.
What are your thoughts on such a method?
What criterion for what counts as "un-reflective belief" are you working from/with?
Yes.
Are we of a sudden skipping back to the issue of pre-linguistic justification?
Quoting creativesoul
With some ambiguity. The post you quoted from was addressing learned beliefs. Hence the issue of how a learned belief becomes well-grounded. What is presupposed is that beliefs—whether innate and genetically inherited via processes of evolution, learned via experience, or actively contemplated—can be wrong.
Innate beliefs can be argued to be well-grounded due to evolutionary processes upon the genotype appearing in the phenotype. This is their means of being well-grounded, yet fallible.
Learning is a process that in part makes use of innate beliefs to arrive at learned beliefs.
The unanswered question remains: How do learned beliefs become well-grounded? Are some learned beliefs well-grounded and others not solely due to happenstance? Or is there a third alternative you have in mind that explains why some learned beliefs are well-grounded and others are not?
You’ll have to better explain your stance so that I may better understand “the method” you are proposing.
Re-read that post, if you will. Everything you need to know in order to agree or disagree is there.
Innate beliefs are a kind of belief that you're proposing/asserting exist. What makes them belief? What is the criterion which, when met by a candidate, offers us the strongest possible justificatory ground for claiming that that candidate is belief?
Very intrigued, but relaxed. A piqued interest. A conversation long waiting to happen. Seated.
:wink:
It strikes me as putting the cart before the horse. Else as tautological, and hence as much ado about nothing: "everything that is an unreflective belief as per some definition qualifies as being an unreflective belief per the stated definition."
If not, explain.
While I'm waiting, please remember to answer this issue:
Quoting javra
Explain what? 'Tautological' is a derogatory charge meant to be applied when discussing purely inductive reasoning (arguing by definitional fiat). The method which you're calling 'tautological' is deductive, and is no such thing. It's common-sense based, has the strongest possible justificatory ground, and works from the fewest unprovable assumptions.
I suspect that you're aware of this...
I'm working upon the explanatory power.
What makes a "learned belief" different than other kinds of belief? More importantly what makes them similar enough to still qualify as belief?
What are you waiting for?
Quoting creativesoul
Oh no. Pardon my spaciness...
Upon perusing the thread... I just found that that question had gone otherwise unanswered. It deserved to be answered.
The number of different kinds of belief is growing quickly.
Remove all of the individual particulars (that which makes them all different from one another) and then set out what it is that they all have in common that makes them all what they are... beliefs... aside, that is, from our just calling them all by the same name...
Bears repeating, huh?
To alleviate some potential ambiguity: In my experience beliefs can decay just as easily as they can be gained. My former belief that this debate between us has been one of honestly reasoned enquiry has now eroded. On account of this, I will no longer be debating this subject with you.
I've enjoyed this one for the most part. Unfortunately though, I am left with the faint impression that I've done it yet again. I apologize if I've caused you to take a fighting stance.
Thank you for the time and attention.
This presupposes that you believe that I'm not speaking sincerely. That's too bad. I am many things, but dishonest ain't wunuvem. Tactless??? Sure. Having 'no filter'??? Certainly. Dishonest? Only on very very rare occasions...
You've conflated your thoughts about me with me...
Perhaps, but we will at least know why.
Quoting javra
Isn't this being put forward as infallible knowledge, because it's so well evidenced as to render any counter-argument baseless and untenable? If so, it proves itself wrong.
more to follow.
Glad to hear. Still, I have no interest in rereading the entire thread on a daily basis to see which newly lengthened posts require my re-reading due to me not being informed of the lengthy additions in a timely manner—and this after I’ve already taken time to reply to them. Ya know? I get it. It was a lack of ideal tact—something which I obviously lack as well. Nevertheless, that and a lot that I’ve addressed and/or asked which has not been addressed in turn presently leaves me wanting to leave our discussions as-is.
Quoting Cheshire
If I haven’t mentioned it in a super-explicit form before, I will now: my stated affirmation is itself fallible; i.e., not perfectly secure from all possible error. What is addressed here is not “infallible for all practical purposes” or “so close to being infallible that it makes no difference in everyday life” but, again, the technically infallible: that which is perfectly secure from all possible error. And again: a fallibilist will fallibly know that he/she holds no infallible knowledge (not even in this affirmation).
Hence, no contradiction, not for the fallibilist. Contradictions only appear when an infallibilist account of knowledge is taken into consideration.
And, as previously discussed by me, just because X is liable to error (i.e., less than perfectly secure from all possible error) does not in any way signify that it is therefore erroneous.
That's too bad. I'm not sure which posts I've altered after you replied to an extent that would've changed anything you may have gleaned from it. I certainly haven't attempted to do anything deceptive.
As a measure of good faith (a gesture of goodwill), I would be glad to address anything that you've written that you do not feel has been given due attention.
And I'm arguing this is the reason infallible knowledge must possibly exist. What is infallible knowledge, but knowledge without error?
Before you answer I still have a lot to respond to from above.
You literally stated it was both perfectly and not perfectly secure. It's a direct contradiction, unless one just chooses to ignore it to maintain a position.
In a further argument:
1. Infallible knowledge is possible or not.
2. Premise 1 is infallibly correct.
3. Infallible knowledge is possible.
To first get this out of the way:
Quoting creativesoul
I've provided definitions for all belief types I've utilized and support. As to defining belief in general, I’ve already done that as well: trust-that. If you have objections to any of my definitions then so state with reasons for your objection. Otherwise, this post of yours to me looks like an example of spin.
You, however, have not provided a single interpretation of what belief is. Describing that a belief about belief is not the belief itself does not define what you mean by belief. Give it a go. What is belief to you?
As per the issue of dogs’ ability to discern errors, you have fully overlooked dogs’ ability to deceive and its implications for discerning errors of belief. I find that the following deserves to be addressed rather than ignored or evaded via spin:
Quoting javra
I’ve asked you to answer this:
Quoting creativesoul
The questions you pose have already been addressed by me in previous posts. Your questions are also purely tangential to the issue at hand: that of whether the property of being "well-grounded" must be arrived at via some form of substantiation, or is a matter of happenstance, or something other?
The issue I've asked you of remains unaddressed.
In laconic review of what you have yet to reply to: Operational knowledge cannot be demonstrated to be devoid of all possible error; ideal knowledge is devoid of all possible error, but it is only a conceptual ideal and not that which can be utilized in practice. I've argued why this is so at length in previous posts.
Hence, I view your quoted statement as a category error, for infallible knowledge would need to be proven in practice in order to be obtained. And to prove it in practice requires infallible justifications for the given belief in fact being true. If you disagree, explain why this is not the case.
Quoting Cheshire
I disagree and will try again:
“All my beliefs—including this one—are not perfectly secure from all possible error.”
Where is the contradiction?
BTW, just in case: if no contradiction can be discerned by you and me, and if this is offered as “proof” of infallibility: can you prove that for all remaining time no justifiable alternative to what is affirmed will ever be discovered? You'd have to be omniscient to do so. If not, then even this lack of contradiction will remain less than perfectly secure from all possible error, for any justifiable alternative, even those that might exist only in principle, introduces some measure of possible error. And one cannot prove that no justifiable alternatives exist in principle; again, not without being omniscient. ... This even when no justifiable alternatives can be found in practice.
To sum the just stated, one has to be omniscient to have infallible knowledge. (And I uphold that no psyche is capable of omniscience due to its intrinsic duality between self and other.)
Quoting Cheshire
Are you ready to prove how the law of noncontradiction is perfectly secure from all possible error? If yes, please do so. If you can’t then (1) is not infallible (this as per the aforementioned definitions).
I very nearly agree. I just prefer to leave the door cracked instead of closed. True, no demonstration may be possible, but this doesn't mean ideal knowledge is impossible, only that it is not demonstrable.
Quoting javra
Gladly: you don't have to prove you have infallible knowledge in order for it to be obtained. I concede I can't prove when or if I obtained infallible knowledge, and yet I maintain it's possible that I do and do not know it.
Quoting javra
If the strength of my argument rests on my ability to doubt the law of non-contradiction, then I would get a new argument. I'm sorry, my position presupposes logic.
No, this I like. I agree.
The conclusion I do not share. I don't have to prove how infallibly correct I am in order to have infallible knowledge. Proving that this or that is infallible knowledge is another issue entirely.
Yea, these are among the more fine-tuned issues concerning the matter: but of course fallibilism leaves the proverbial door open. To affirm as infallible knowledge that infallible knowledge is impossible is, of course, a blatant contradiction. One can only fallibly affirm this, if one so cares to.
Quoting Cheshire
To refresh a previous argument of mine, operational knowledge can well be, ontically, not erroneous. Nevertheless, this is not currently possible to prove epistemically.
Are we not somehow agreeing to this? My only issue here is that infallibility to me is an epistemic property. My bad if I didn’t make that explicit previously. Maybe this facet makes a notable difference? If not, then we indeed disagree. Call it a day?
Quoting Cheshire
:razz: Well, I never once said anything about doubt. One does not need to doubt the law of noncontradiction, as one example among many, to understand that it is not something which can be demonstrated perfectly secure from all possible error. There are strengths of operational knowledge, and the law of noncontradiction is pretty high up there in its strength.
From earlier because it is relevant here...
Quoting javra
This seems to be the basis for the belief-that approach. It certainly lends support to the method, regardless of whether it is intentional or accidental. Typically, when we talk about one believing something, we're saying that one believes that X is true; is the case at hand; is the way things were, are, and/or will be; corresponds to fact/reality; etc. Let X be a statement or proposition.
That is all perfectly understandable and acceptable at the level of reporting upon belief. Here, the content of our report is propositional. X is equal to some proposition/belief statement. This method shows us that belief presupposes its own truth, and that adding "is true" to a belief statement adds nothing meaningful to it. Here, it makes perfect sense to draw an equation between trust and belief, for the two terms are easily interchangeable without self-contradiction.
However, we are talking about belief that is not existentially dependent upon language. Such belief can be reported upon. Our reports will have propositional content. The kind of belief that we're reporting upon cannot. Belief that is not existentially dependent upon language must consist of something other than propositional content, even though our report of it must. All this must be kept in mind when using the belief-that approach as a means to take account of belief that is not existentially dependent upon language...
How does a creature believe/trust something that it has never thought about?
You've actually posited trust/belief at the genotype level of biological complexity. That would require that the content of what's being trusted (belief, on your view) is something that exists in its entirety at that level and can transcend the believer on a physical level through reproduction. That's a big problem for your notion of belief for all sorts of reasons. We could explicate those consequences if you'd like...
Trust requires a remarkable 'sense' of familiarity, and there is more than one kind of familiarity. All familiarity requires thought and belief.
Familiarity requires a succession of the same or similar enough beliefs about that which is trusted. Trusting the content of thought/belief cannot be had if innate fear takes hold of the creature. One cannot trust that which aggravates instinctual/innate fear, at least not at a language less level.
Here, the two terms are not so freely interchangeable. Trust is not equivalent to belief.
Familiarity requires a succession of the same or similar enough belief about that which is not trusted. So it is clear that trust and belief are distinct.
Trust is most certainly being built during the formative years of initial language acquisition. Contentment and familiarity with one's caregivers. That is prior to language acquisition. That seems to be where knowingly relying upon something(trust) comes from...
Quoting creativesoul
Can you provide, or point to, a concrete example of such belief-that which is not propositional?
This ties into what I address below.
Quoting creativesoul
For the record, though I too hold an ego, I have no problem in being shown how my beliefs could be improved upon or else how they are wrong.
Quoting creativesoul
On what grounds do you affirm this?
Example: I see an odd-shaped red apple on the table for the first time. I'm not at all familiar with this type of apple. I either trust that it is there as seen, trust that it is not as seen, or trust that both possibilities might be valid; the latter being an instance of uncertainty while the two former cases are instances of certainty. Regardless, all three scenarios are initially experienced by me without a sense of familiarity, without thought, and without beliefs about beliefs (belief is what we're addressing to begin with, so I'm assuming you were here addressing beliefs about beliefs).
I'll posit a facet of trust to make this easier:
Quoting Wiktionary
From this I extrapolate the following as a cogent facet of trust: to act and/or react (either physically or mentally, such as via intentions) to something being ontic, devoid of rationality for the given something in fact being ontic, is, in itself, a process of trust. The quality one here has confidence in or reliance on is the property of being ontic.
On what grounds would one disagree with this extrapolation?
If the extrapolation is valid, then trust can be non-linguistic and genetically inherited, and it does at all times affirm (else, make firm within the respective mind) that which is true, though without a necessary conscious understanding of the relation implied by notions of truth as we linguistically express it. Trust's contents, then, form the given belief.
Quoting creativesoul
I thought we weren't addressing belief about belief. Be this as it may:
Innate fear-based mistrust requires a more primary trust; namely a confidence about that which is feared being deserving of fear. It has to do with trust for optimal benefit to self in the face of that which is feared, or mistrusted.
Still, all this is, here, in large part contingent upon the facet of trust which I've explicitly extrapolated above.
No. It cannot be done. Nor does it need to be. The question doesn't help.
We're looking to take proper account of something that is not existentially dependent upon language. Propositions are existentially dependent upon language. That which exists prior to language cannot consist of propositional content.
I can provide you with an example of belief that is not existentially dependent upon language. It will not follow the belief-that format. It will put the otherwise useful knowledge gleaned from that approach to good use. The belief will consist of correlations drawn between different things. It will presuppose its own correspondence to fact/reality. It will effectively attribute meaning. It will be meaningful to the creature.
All belief presupposes its own correspondence somewhere along the line. Positing belief at the genotype level is to posit belief that is inherently incapable of presupposing its own correspondence.
All belief is meaningful to the creature. All attribution of meaning is existentially dependent upon something to become sign/symbol, something to become significant/symbolized, and a creature capable of drawing correlations between the two. Positing belief at the genotype level is to posit belief that is inherently incapable of being meaningful to the creature.
On the ground that any and all sensible notions of trust must include, in some fundamental sense, what our everyday notions of trust include.
Quoting javra
The same grounds as above, and on the ground that that definition inevitably leads to absurd consequences (reductio ad absurdum).
The performance of a vehicle relies on all sorts of different qualities and people. It does not trust.
No.
If the extrapolation is valid, then it follows from its premisses. That does not make it true. The definition has consequences that are unacceptable. Therefore, the definition is unacceptable. The definition is a premiss of the extrapolation. False premisses cannot validly lead to true conclusions...
Are you asking me to use my own philosophical position to offer an alternative account of the dog's behaviour?
"The nature by which well-grounded-ness comes about" is an odd phrasing. Again, it presupposes that being well grounded is something that happens after thought/belief formation. At the language less level there is no arguing for one's own belief. There is no act of justification.
Some belief on the language less level is well grounded upon it's initial formation. It doesn't make sense to talk about these beliefs in terms of how their well-grounded-ness 'comes about', unless my answer satisfies that query...
That conversation hinges upon what counts as deception. I would deny that the dog deliberately sets out to trick another dog.
All thought and belief consists of mental correlations drawn between different things.
I greatly doubt that we’ll find common ground. I’ve also lost the desire to further debate this issue. I’m giving a partial reply so as to not be utterly off-putting:
If non-linguistic belief is correlations drawn between different things, such that it presupposes its own correspondence to fact/reality, then this belief will be acquired, hence learned, via the different things that become correlated. The belief then “comes about”. And unless lesser animals’ beliefs are always fully devoid of error, there must then be a means by which well-grounded beliefs attain this property in their initial formation.
You seem to however insist otherwise.
Quoting creativesoul
No. Evolutionary theory would readily account for this. But I sense you will insist otherwise.
Quoting creativesoul
For the record, the Wiktionary definition is what everyday notions of trust entail. It is a long standing wiki page, after all.
Quoting creativesoul
Next you’ll tell me that a vehicle acts and reacts? No, vehicles lack an agency by which to hold confidence in or reliance upon—something I take to be commonly understood.
Quoting creativesoul
The denial or evasive treatment of research findings is not a thing I feel in any way comfortable with. I won’t ask you to specify what you mean by “deliberately”. I don’t know how one could believe that the dog in the Wikipedia example sat on the treat by accident until the other dog left the room. But I’m confident it can be conceptualized this way. Still, I’ll continue trusting research findings that are published by well-reputed peer-review journals, such as the APA (which I've previously linked to).
Quoting creativesoul
Yet this does not distinguish thought from belief so as to define what belief is.
Again, I more than likely won’t continue in this debate, believing that it’s run its course.
Not really. I think that you're making it more complex than it need be.
Drawing correlations is thought and belief formation. Being well grounded, on my view, means having sufficient reason to believe... being warranted. I think it is a mistake to call this a 'property' of belief. It adds unnecessary complications...
A language less creature can learn that touching fire causes pain/discomfort by virtue of touching fire for the first time. That creature's belief is not that "fire hurts when touched" or that "fire causes pain/discomfort". Those are our reports of that creature's mental ongoings (belief). They can be accurate enough descriptions without being equivalent in content. They must be accurate enough in order for us to sensibly talk about it.
The creature's belief cannot consist of propositional content. Propositions aren't meaningful to the creature. I referred to this earlier, when cautioning about what need be kept in consideration when reporting upon belief that is not existentially dependent upon language.
The creature draws a correlation between its own behaviour (touching fire) and what happened afterwards (the onset of pain). That is thought/belief formation that is not existentially dependent upon language. The creature's belief consists of the correlation. The belief is existentially dependent upon the content of the correlation itself (touching fire and pain). All correlation presupposes the existence of its own content. The fire becomes significant to the creature by virtue of belief formation. The creature attributes meaning by virtue of drawing these correlations.
Regarding whether or not this particular example of belief is well grounded...
What better reason is there to attribute/recognize causality?
The definition, as it was written, led to the conclusion that a vehicle trusts its parts and the people who built/maintain them.
That is prima facie evidence that that definition is unacceptable, regardless of how long it's been a Wiktionary page...
Claiming that you are not at all familiar with a type of apple is a performative contradiction. You are obviously familiar with it enough to categorize it as an apple.
I may be wrong here, but it seems to me that you want to say that language less creatures are capable of trust (without familiarity) by virtue of trusting their physiological sensory perception when encountering novel things. Not having the ability to doubt the veracity of one's physiological sensory perception and/or rudimentary thought and belief is not equivalent to trusting it. Any such notion of trust which would allow such a loose criterion (trust equaling all reliance upon something) would lead us to claim that vehicles trust their parts.
The notion is unacceptable as it is. It obviously requires some refinement.
Your example above is chock full of thinking about thought and belief all the while simultaneously denying that.
I find that physiological sensory perception alone (not the kind of 'perception' informed by language) is inadequate for thought and belief. A creature can perceive something without drawing correlations between it and something else. Such things are not meaningful/significant to a language less creature.
At the language less level there is no difference. At the rudimentary level there is no difference. They both consist of the exact same things - correlations. All thought and all belief consist of correlations. All differences in either are determined precisely by virtue of the content of the correlations.
The meaningful difference between the terms "thought" and "belief" has to do with the attitude of the user... uncertainty. Uncertainty arises from becoming aware of our own fallibility. The ability to consider some statement or other without necessarily believing that it is true (the only difference between thought and belief) requires rather complex language replete with proxies (names) for one's own mental ongoings.
I find that reflective thought can be both: prior to thinking about one's own thought and belief, and after. The content of the reflection is memory. Memory does not require thinking about thought and belief. For that reason, positing it alone is utterly inadequate for helping us to delineate thought/belief that is not existentially dependent upon language from that which is. However, we could determine the content of the memory as a means to ascertain whether or not that content is existentially dependent upon thinking about one's own thought and belief. We could also determine whether or not that content is existentially dependent upon language.
Thanks.
In general, I find that we agree on much more than we disagree. However, regarding this particular topic of belief that is not existentially dependent upon language, there are indeed stark differences in our views.
This became clearer to me after you denied that belief must begin simply and grow in its complexity.
I would be more than willing to discontinue discussing everything else except one thing...
I would like for you and me to set out the bare minimum criterion for what counts as belief. I think a comprehensive comparison between the two (I'm assuming that there will be some differences) will lend itself to a much greater understanding.
I haven't been able to ascertain yours. I've definitely tried to. On my view, there must be something that all beliefs have in common which makes them beliefs, aside from just calling them all by the same name...
I'm satisfied.