The Problem Of The Criterion
According to the Internet Encyclopedia of Philosophy, the most general form of The Problem Of The Criterion is as follows:
1. Which propositions are true/knowledge? [Instances of truth/knowledge]
2. How can we tell which propositions are true/knowledge? [Definition of truth/knowledge]
Other ideas relevant to the discussion can be found on the same page.
The Problem Of The Criterion can be formulated with two propositions given below:
A. We can't know instances of truth without having a definition of truth/knowledge. [methodism]
B. We can't have a definition of truth without knowing instances of truth/knowledge. [particularism]
The argument is that A and B form a vicious circle: to identify instances of truth/knowledge, we need first to formulate a definition of truth/knowledge, but to formulate a definition of truth/knowledge we must first possess instances of truth/knowledge. The circle is complete and we're helplessly trapped inside it...or so it seems.
Statement A makes complete sense, for how can we find a thing without knowing what that thing is? For instance, I don't know what "parsing" means, and I can tell you with full confidence that I wouldn't be able to identify instances of "parsing". (By the way, I would be grateful if someone could edify me about what "parsing" means. Thanks in advance.)
What about statement B? Does it make sense?
Is it possible to recognize dogs/truth/knowledge without knowing what dogs/truth/knowledge are/is? This smacks of a contradiction: I don't know what dogs/truth/knowledge are/is (I don't have a definition of dogs/truth/knowledge) & I know what dogs/truth/knowledge are/is (I can identify dogs/truth/knowledge). Since statement B is/entails a contradiction, it must be false.
Ergo, particularism is untenable and the way to go is methodism.
Comments
I have a memory of my daughter learning to speak that comes to mind here. Aged somewhere around 18 months old, she would point out the window of the car as we drove along and say ‘bah’. It took us a number of instances of this to recognise that she was pointing to puddles and making a connection between them and her nightly bath. She was using a language concept ‘bah’ to describe instances of water. So, did a definition develop later for her, or is definition simply a summary of instances?
I think the circular nature of this process is important, but I also think the language concept is a vital piece that seems to be overlooked. It is possible for a child to correctly label an instance of ‘dog’ without a definition of what dogs are - with nothing more than a fuzzy language concept formed from a summary of previous instances. Does this mean they know instances of ‘dog’? The main difference between a child not knowing what ‘dog’ means and you not knowing what ‘parsing’ means may simply be the courage to risk being wrong in identifying an instance.
I think there’s also a distinction to be made here between the possibility of truth and the potentiality of knowledge.
The process of "discovering" truth is simultaneously deductive and inductive.
Assuming something to be true makes us able to see the supporting evidence.
Truthing is circular like that.
Unlike parsing, heh.
So, I will rephrase the statements as they apply to classification:
A. We can't discern members of a class without having a criterion that identifies their features. [methodism]
B. We can't specify a criterion for the members of a concrete class without being presented with examples that we can extrapolate to a definition. [particularism]
In computer science, this is the clustering problem. Given unclassified samples, we are tasked with breaking them into spatially contiguous groups, while simultaneously identifying which grouping produces the least internal sparsity. A perfectly optimal solution is computationally intractable, but its approximations are reasonably efficient. Applications sometimes require human discretion (supervision, as it is called) for domain-specific considerations, but automatic execution is not impossible. Solving such a problem presupposes an already established feature space, so the techniques may have to be combined with a feature extraction / dimensionality reduction step. This is the step in which you conclude which qualities are most distinguishing for your samples. For human cognition, the task may require a more sophisticated mechanism, but the problem should be otherwise similar. We discover objects and situations which have closeness in some regard and define the categories that naturally emerge as the most effective classifiers.
Having derived classes for objects or situations, and having the faculty to introduce conjunctions (thus universals), disjunctions (thus existentials), complements, modalities, etc, a human being may produce a reasonable amount of propositions. I am not sure whether thinking operates with such formality and I am not sure whether we introduce logic into the picture in precisely this way, but it offers one path to the ability of autonomous mental synthesis of statements.
Feature extraction
Dimensionality reduction
Cluster analysis
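For the curious, the clustering step described above can be sketched in a few lines of code. What follows is a toy k-means implementation on made-up two-dimensional data (the data, the choice of k-means, and the feature space are all illustrative assumptions on my part, not a claim about how cognition actually works):

```python
# Toy k-means sketch: group unlabelled 2-D points into k clusters with no
# prior definition of the classes, only a notion of distance. Illustrative
# only; real applications would use a library such as scikit-learn.
import random

def kmeans(points, k, iterations=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)  # initial guesses, picked at random
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                                + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return clusters

# Two visually obvious groups; the algorithm recovers them unsupervised.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
a, b = kmeans(data, 2)
print(sorted(a), sorted(b))
```

The point is the one made above: the two "classes" emerge from the samples and a distance measure alone, without any definition being supplied in advance.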
I am not sure which kind of parsing you are referring to, but in computer science, and, I believe, also in linguistics, it is the process of deciding how to split a text into its syntactical constituents and label them with their syntactical categories (ascribing meta-information to each region of letters and symbols, metadata as it is called in computing).
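As a concrete illustration (a hypothetical toy example, not anything asked about specifically), Python's standard-library ast module parses a flat string of symbols into exactly such labelled constituents:

```python
# Parsing in the computer-science sense: turning a flat string of symbols
# into a tree of labelled syntactical constituents. Python's standard
# library ast module does this for Python source code.
import ast

tree = ast.parse("1 + 2 * 3", mode="eval")

# The parser has decided the structure: an addition whose right operand is
# itself a multiplication, i.e. "*" binds tighter than "+".
print(ast.dump(tree.body))
```

The printed tree shows the string split into regions, each labelled with its syntactical category (BinOp, Add, Mult, Constant).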
That there are different definitions of truth (correspondence, pragmatic, coherence, etc.) is suggestive...it hints at some degree of arbitrariness, something I referred to in the OP.
If truth were abstracted from instances of truth, this wouldn't be the case, for then that which can be described as the form (Plato?) of truth would be constant, precluding, in my humble opinion, variety in the definition of truth.
Being able to identify an instance of dog is not the same as knowing dogs. By the same token, any definition of ‘dog’ is not necessarily as exact as it claims to be. So, while knowing dogs without knowing dogs IS an obvious contradiction, I maintain that it IS possible to find or identify an instance of something without knowing definitively prior what that something is - until you find it, that is. In fact, it appears to me that knowing an instance and knowing a definition may indeed be one and the same process.
Quoting TheMadFool
The more instances (limited definitions) of truth we incorporate, the closer we may get to a broad ‘definition’ of truth. But in all honesty, I think truth would be formless as such - any possible ‘definition’ of truth is necessarily inclusive of its relation to what is not true.
We keep returning to a contradiction, and we keep rejecting it, convinced there must be some other, more logical structure underlying it all...
I was concerned with the possibility of solving the issue as specified and wanted to propose that we break it down into aspects of smaller complexity. I may have come off as assuming some lack of familiarity with the intricacies that are involved.
Quoting TheMadFool
It still seems to me that the criterion is not about establishing unambiguously the truth of propositions, but of designing descriptions for a proposition that matches particular experience. The problem does elicit however many considerations involved when matching the descriptions between individuals through exemplification.
Let's consider correspondence truth first. The assertion that an object is an apple is just a statement of association with some class of objects, and different people may have different notions in this regard. A child thinks of an "apple" and so does an agricultural botanist. First, the child may have had limited experience with apples in relation to its interactions and needs. The object will have been measured as fitting in the palm of a hand, the color reddish with sparse nuances of green, the taste mostly acidic-sugary or just sugary. A botanist, in contrast, knows apples of all varieties: ripe and unripe, green and red, bigger and smaller fruits, etc. They will measure the sizes in widely recognized units, not in proportion to body parts. Nonetheless, both will associate their descriptions with "apple". The ambiguity stems from matching the statement and not the interpretation. (Edit: What I mean is, the statement is considered the same, even though both parties would agree empirically that their interpretations do not match.) The botanist is at least an uncontested authority in this case, which is not always a clearly defined role, and hence the child can be informed of the class's variety through literature, expressed in references to other words, or through other examples. But people will ultimately never base their vocabulary in the same set of direct experiences. (*)
I will use the mechanism of clustering to explicate some of the issues involved. It is not the value of the proposition which is ambiguous. The indefiniteness of the statement is the problem, which is inherent in the way in which the description is derived. The clustering algorithms will produce a solution based on the quality and variety of the sample (the objects) and the desired crudeness. Those factors will vary for different people. The feature set used as relevant discriminator will also differ. And the truth will not be binary, but will have a degree of certainty, with clear matches and some borderline cases. The confidence when dealing with uncertainty will differ, based on the needs and attitudes of the individuals.
The same logic applies to pragmatic truth, but instead of representations, it is dealing with intents and applications. For example, what is right for the poor is not right for the rich, what is right for the elderly is not right for the young, etc. Different people will associate the truth of certain statements with different social clusters or conditions, and thus will interpret them into different propositions.
*: This example is an easily rectifiable error of conceptualization and communication due to the limited observations of one party, which also shows that representational truth is inherently easier to verify. This is why it is so central to science. As you suggested, one can talk about some sense of objectivity there, although the question of bringing all experiences into alignment remains problematic. But pragmatic truth, a first-order phenomenon of cognition in my opinion, has much less sense of objectivity. It also affects debates over proposed representations, because the very criteria for making a proposition canonical can be affected. For example, some physicists would argue that a theory should be mathematically minimal, while others will argue that it has to confer at least some explanative value.
P.S. In summary, I think the circularity is not about ultimate truth, it is about finding effective taxonomy of the available knowledge. Even so, the process is one of continuous refinement (for example, exploration of the facts by science and redefining the concepts after some cluster of information is supplemented with new data), distributing the relevant information to all authorities on the subject, and arbitrating between attitudes and values that affect the perception of the explanative scope of the evidence, its significance relative to the individual cognitive bias, culture, social strata.
I disagree.
Language users can know when some statements are true or not long before they have a definition of truth or knowledge. That much is easily proven.
This is Meno's paradox. Plato uses it to show that some knowledge is innate (that's one way to put it).
:up: :ok:
Initially I was wondering whether Possibility's daughter's ability to identify dogs had something to do with innate knowledge, but the matter is much simpler than that. Pointing to dogs and uttering the word "dog" is an act of providing instances to the audience (here, Possibility's daughter), and if that's all that's being done, the audience is left to figure out what the word "dog" means, i.e. it's the audience's job to abstract the essence of a dog from the instances provided. It appears this is a valid method of defining words. That's that.
Defining truth may be similar too. We do a systematic survey of propositions, sort them based on different attributes, and decide that propositions with such and such attributes (whatever they may be) should be called true propositions, and that propositions absent these attributes are not true.
However, there's, for lack of a better word, an intuition, albeit a vague one as far as I can tell, that truth has to be something specific, i.e. there are constraints on what truth can be. A thorough study of the attributes of propositions doesn't result in truth being defined by just any constellation of attributes. On the contrary, we're drawn to certain groups of attributes (correspondence, coherence, pragmatic, etc.); it feels natural to define truth in these terms, and this I consider an indication of a preconceived, how shall I put it, idea of what truth should be.
In other words, it may look like we're trying to abstract a definition of truth from instances of truth, from a careful analysis of propositions, but in fact we already possess a definition of truth and are simply looking for propositions that match that definition. That is to say, at least on the matter of the definition of truth, if not of dogs, the impression we get that we're examining propositions so as to extract the essence/form of truth is an illusion.
What say you?
Well, my take on this is not essentialist, so I don’t see it as ‘abstracting the essence of dog from the instances’, but as abstracting recurring patterns in qualitative relations. This will always be ‘fuzzy’ to a certain extent - a definition seems to be just a linguistically-structured summary or reduction of these patterns.
Quoting TheMadFool
Here’s a better word than intuition: assumption.
To define something - to state or describe exactly its nature, scope or meaning; to mark out its boundary or limits - is a reductionist methodology that discards qualitative variability or ‘fuzziness’ in the information we have about that something.
It feels more comfortable to ignore, isolate or exclude a relation to truth that lacks sufficient attributes to be positioned with certainty on this side of an arbitrarily-drawn true/false dichotomy. The idea of truth includes an understanding why we feel so uncomfortable with this uncertainty. The idea of what truth should be, however, excludes this relation to what is possibly but not certainly true.
Are you suggesting here that a definition of truth is a priori analytical knowledge?
What's the alternative? Anything goes? So, for instance, a dog could be defined in terms of non-essential features like fur, claws, ears, eyes, tail, fangs, but then...even cats, bears and tigers have these, and every one of them would count as dogs under such a definition. Do you want to go down that road? I could be mistaken of course, and that's where you come in, I guess.
Quoting Possibility
Nec caput nec pedes. Can you clear the matter up for me? I don't see the relevance of fuzziness to The Problem Of The Criterion. For my money, the issue of vagueness comes much much later - after we've settled the matter of what truth means and which statements are true. Even if truth is a fuzzy concept there have to be propositions that are clear-cut truths.
Quoting Possibility
Quoting Possibility
I am not saying that propositions are inferred from example instances for each specific proposition, but by dividing the instances of experience into suitable classes. Only when it comes to synchronizing the vocabulary and learning through normative (compulsive) and authoritative (convincing) sources are the classes acquired second-hand or through especially representative examples.
I am not saying that someone will point out dogs to a child until it is coerced to learn the concept dogmatically. I am saying that a child will see several dogs and cats, and will be able to use its own discretionary capacity to derive that "this is one thing, and that is another thing". Its mother might convey to her/him that we call the first a dog and the second a cat, but the conceptualization of the species is already present. And I pointed out that we are not merely speculating about the possibility of such classification, but that there are ways to specify the problem precisely and to solve it with available techniques.
Obviously, there is fuzziness, because the subject has influence on the parameters of the task. A child might decide that koalas are a kind of bear. Which they are not. But for a child's purposes the distinction is inconsequential. That is why intentions and needs will factor themselves in. The child will only rectify its concept under coercion (it might simultaneously be provided convincing justification), because it has no need to do so on its own. Many parameters will depend on the subject: the completeness of the description, i.e. the known features of the instances, the crudeness of the classes, or clusters, the total sample collection (have they seen enough canines, felines, bears, marsupials, aliens, sledgehammers, quantities of things, lengths of things), and how they metricize the feature space (i.e. how they define the distance between objects/situations/qualifiers in consideration of their biases, attachments, goals, etc.).
I see what you did here
I wouldn't have noticed. It happened accidentally.
Face recognition is innate. Dogs have forward-facing eyes like a human, so dogs should stand out to babies.
The concept ‘dog’ is constructed in our minds with the help of language in relation to instances. So, a ‘dog’ may be initially understood in terms of a relational structure of shapes, size, sound, texture, etc. - depending on whether those early instances are a family pet, pictures in a book, or sounds from next door. This is how my daughter initially understood ‘bath’ to describe bodies of water. From there, she soon realised that ‘bath’ referred to more specific instances of ‘water’. She may also see another furry creature with pointed ears, four legs and a tail and say ‘dog’ - only to be gently corrected with ‘cat’. Remember ‘Monsters Inc’, when the little girl calls the big furry monster ‘Kitty’? It’s not about essential features, but about recognising patterns in qualitative relational structures.
Quoting TheMadFool
Why do there have to be propositions that are clear-cut truths? In order to think, speak or act with a degree of certainty or confidence in what is, for all intents and purposes, a prediction. We haven’t settled the matter - we’ve constructed a prediction, which we’ve then defined in a summary of past instances. The accuracy of this definition is temporary: fragile and fleeting from the moment it’s proposed. Hence the fuzziness of the concept.
I beg to differ. In the absence of essences to dogs or whatever else is the topic, there can be no further discussion. Can you tell me what "dog" means? I'm supremely confident, as out of character as that is, that you'll be listing a set of essential features.
Quoting Possibility
Why are we discussing predictions?
Quoting TheMadFool
To answer the question requires an ongoing and continually refining process of interaction between an imperfect sensory perception and an imperfect conception. It’s the experience of “I’m not sure what I’m looking for, but I’ll know when I find it”. We articulate an unknown concept by what it may be like, but isn’t - its non-essential features. So, if I don’t know what a dog is, I can still find a dog by referring to its non-essential features - fur, claws, ears, eyes, tail and fangs - in a qualitative relational structure that enables me to exclude cats, bears and tigers from the search.
So I’m going to throw this back to you: can you define ‘dog’ without a qualitative pattern of non-essential features?
I’m pointing out a distinction between the linguistic definition of a concept - which is an essentialist and reductionist methodology of naming consolidated features - and an identification of that concept in how one interacts with the world - which is about recognising patterns in qualitative relational structures. Asking me to linguistically define a dog does not prove that I or anyone would identify a dog out of a line-up of creatures based on this essentialist methodology - as much as rationality would beg to differ. Yes, we may confirm this identification by a checklist of ‘essential’ features, but I’m arguing that we would already have identified the concept ‘dog’ (as a prediction) in order to identify a set list of features to check off - or more accurately, qualitative relational patterns to match.
Quoting TheMadFool
Prediction is a manifestation of the problem: sensory interoception generates a current state of the organism in relation to reality, while the mind organises what we know of the universe in relation to past states of the organism, and from this predicts interactions of the organism with reality as an instruction of effort and attention for the brain. In other words, prediction is the ongoing difference between our current sensory perception of the organism in relation to reality, and our current conception of reality in relation to the organism.
So the point at which we appear to be settled on the matter is when we linguistically define our prediction as a summary of past instances. But this is not the truth of the matter - it’s a proposition. The truth of that proposition is determined from three angles: its relation to a conception of reality (ie. knowledge); its relation to sensory perception (empirical evidence); its relation to alternative propositions. But none of them are true. We’re not in a position to construct propositions of ‘clear-cut truth’.
Quoting simeonz
This makes sense to me. Much of what you have written is difficult for me to follow, but I get the sense that we’re roughly on the same page here...?
Yes, nice. Truth in the end is an umbrella word used to describe a very wide range of relationships. We don't know anything much about truth, but we know how to justify beliefs. Everything is what it is by virtue of its relation to everything else. You can't capture X in its purity. There isn't anything to X except those relationships.
Dogs are one thing, but we also know what a Muppet is (well, some of us do). This is something that doesn't even exist in nature and is a time-limited, man-made artifact. But there is a grammar of Muppet design, and it is possible to recognize the visual patterns and even the context of their appearance, even if we have never seen a Muppet in the real world or watched a TV show with Muppets in it. When you say this is a Muppet you are not reflecting some platonic ideal of a glove puppet in the world of universal puppet forms. You are simply connecting to one or more visual aspects of the object which add up to a Muppet. Of course, none of that stopped our 6-year-old calling the mop at our place a Muppet.
Yes. No doubt the mop fits all the patterns of qualitative structure that your 6yr old currently applies to the term ‘muppet’, a concept more frequently applied to their experiences than ‘mop’. Plus, it’s cute, so no-one can bring themselves to correct it at this point. The association appeals to our imagination.
Another more colloquial use for the term ‘muppet’ is in derogatory reference to a foolish or incompetent person. The superfluous character of the term (the only difference between ‘muppet’ and ‘puppet’ being a reference to Jim Henson) lends a certain fluidity to its meaning. When we say that we ‘know’ what a Muppet is, we’re not referring to a clear-cut definition or a grasp of ‘essential features’, but an experience of sufficient instances to construct a pattern of qualitative structure in identifying a Muppet.
I think the idea that we identify concepts by ‘essential features’ is a myth we use to constrain the reality of experience to rational, consolidated forms.
Can you expand?
No, and isn't that the point?
Sorry, but I seem to have lost my train of thought. What I'm getting at is very simple, and perhaps that simplicity is misplaced in an issue that is, or could be, complex. Let's discuss definitions, as they seem to be relevant.
A good definition must:
1. be based on essential features
2. not be too broad or too narrow
3. be clear (no figurative language; avoid vagueness and ambiguity)
4. be positive whenever possible
5. not be circular
The point of contention between the two of us is 1 - we're debating whether essentialism in definitions is a reasonable condition to ask for. As far as I'm concerned a definition must focus on the essentials, otherwise how would we identify that which is being defined? If that which is being defined can't be identified from a "...line-up..." then the discussion ends there. Nothing more can be said.
It appears, on closer examination, that The Problem Of The Criterion is a pseudo-dilemma insofar as definitions are concerned because it makes a critical assumption that's unfounded. Let's go through The Problem Of The Criterion again at the risk of boring everyone.
1. We need a definition to identify particular instances
2. We need particular instances to construct a definition.
The Problem Of The Criterion occurs when we realize that to construct a definition we need particular instances, but to get particular instances we first need a definition. The circle is constructed and we're inside it, hapless victims of an ingeniously constructed trap.
However...
A way out, if it is one, specific to the problem itself, is to think of definitions as being of two kinds:
1. Arbitrary definitions. Such definitions are, in a sense, pulled out of the ether, very much like a magician produces a rabbit out of his hat. Such definitions don't require particular instances from which some kind of an essence is extracted/abstracted; au contraire, such definitions can be viewed as hypothetical to begin with, and if what they define is instantiated, then that would make them real. An example of such a definition would be, say, the word "pigfly", whose definition is "a pig that can fly". That I'm able to do this indicates, in no uncertain terms, that arbitrary definitions are possible, and even real if you consider the word "unicorn".
2. Non-arbitrary definitions. These definitions are derived from particular instances, but it's not true that the particular instances were first identified using a definition. What actually happens is that a group of objects is studied, a detailed list of their properties is made, and objects in the group are sorted according to different combinations of properties. So, take the group of objects in the set {c, 1, 9, 4, u}. If I sort the elements of this set into the categories letters or numbers, I get {c, u} and {1, 4, 9}, but if I sort them according to whether they're curvaceous or angular, I get {c, 9, u} and {1, 4}. To construct these categories I didn't rely on a preexisting definition; all I did was sort them based on certain properties which were chosen from the list of properties drawn up from the elements themselves.
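The sorting described in 2 can be written out directly (a minimal sketch; which glyphs count as "curvaceous" is taken from the post itself, not from any standard):

```python
# Sorting {c, 1, 9, 4, u} by properties read off the elements themselves,
# with no preexisting definition of the resulting categories.
items = ['c', '1', '9', '4', 'u']

# Property 1: letter vs number.
letters = [x for x in items if x.isalpha()]
numbers = [x for x in items if x.isdigit()]

# Property 2: curvaceous vs angular (the grouping assumed in the post).
CURVACEOUS = {'c', '9', 'u'}
curvy = [x for x in items if x in CURVACEOUS]
angular = [x for x in items if x not in CURVACEOUS]

print(letters, numbers)  # the letters/numbers categories
print(curvy, angular)    # the curvaceous/angular categories
```

The categories fall out of the chosen properties; nothing in the code knew in advance what a "letter" category or a "curvaceous" category was supposed to contain.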
Since there are two ways of defining i.e. there are two methods of constructing definitions and these two are independent of each other, The Problem Of The Criterion is, at some level, solved because the problem is predicated on the dependence between these two ways of defining.
Well, consider what would be the essential features of a Muppet, for instance?
We talk as if everything has essential features - some unique properties without which it would not be what it is - but these are just non-essential properties that are qualified in relation to each other such that they exclude alternate qualitative structures. So a definition of a Muppet or a dog is not based on so-called ‘essential features’, but on qualitative structural patterns of NON-essential properties.
We know that:
1. Muppet is the name given by Jim Henson to his puppet/marionette characters in order to distinguish them from the work of other puppeteers.
2. Something made of fabric, with a roughly personable shape, that moves seemingly of its own accord has been identified as a Muppet by a six year old.
3. A ‘muppet’ is also defined in the dictionary as “an incompetent or foolish person”.
We can say that the ‘essential features’ of a Muppet are a puppet associated with Jim Henson, but on their own these are non-essential features of other concepts. It is by the relational or qualitative structure or pattern of these features that a Muppet is commonly defined, not by the consolidated features themselves.
But does this mean 2 and 3 are wrong? Or does it mean that how we identify and how we define the concept are two different processes?
I’m saying that a dog can be identified from a line-up of instances without a prior definition, and that this process also serves to inform a potential definition. Likewise, a definition can be given prior to experience of any instances, but can only be known in relation to instances identified either side of that definition. Either way, a definition need not focus on essentials, but on a relational structure between identifiable instances, and must be refined accordingly.
Let’s take a more complicated concept of ‘consciousness’ - when we seek to define consciousness by its essential features, we are left none the wiser. Because it is not by a checklist of essentials that we define a concept, but by the qualitative relational structure between identifiable instances.
The Problem Of The Criterion has, at its core, the belief that,
1. To define we must have particular instances (to abstract the essence of that which is being defined)
2. To identify particular instances we must have a definition
The Problem Of The Criterion assumes that definitions and particular instances are caught in a vicious circle of the kind we've all encountered - the experience paradox - in which to get a job, we first need experience but to get experience, we first need a job. Since neither can be acquired before the other, it's impossible to get both.
For The Problem Of The Criterion to mean anything, the relationship between definitions and particular instances must be such that each is a precondition for the other, thus closing the circle and trapping us in it.
However, upon analysis, this needn't be the case. We can define arbitrarily (methodism) as much as non-arbitrarily (particularism) - there's no hard and fast rule that these two make sense only in relation to each other, as The Problem Of The Criterion assumes. I can be a methodist in certain situations or a particularist in others; there's absolutely nothing wrong in either case.
Quoting Possibility
I guess which Muppet category depends upon context. They are all potentially simultaneously correct.
But all this is predicated on a correspondence version of truth (so detested by Idealists).
To point in the direction of the mop and say 'it is not the case that there is a Muppet in the mop cupboard' sounds like an example of the problem of counterfactual conditionals. People who are anxious about the metaphysical aspects of realism will argue that there are no negative facts and thus correspondence breaks down. This proposition about the mop cupboard doesn't seem to have any corresponding relation to objects and relations in the world. Or something like that.
I can be a Methodist in certain situations or a Presbyterian in others; it depends on whether there is alcohol. Sorry.... I'm a child.
It is when we exclude negative facts from realism that we limit the perception of truth in which we operate. That’s fine, as long as we recognise this when we refer to truth. Counterfactual conditionals are only a problem if we fail to recognise this limited perception of truth.
The proposition ‘it is not the case that there is a Muppet in the mop cupboard’ is made from a six year old perception of truth, the limitations of which have been isolated from the proposition. A six year old would make a proposition in order to test conceptual knowledge, not to propose truth. A more accurate counterfactual conditional here (pointing in the direction of the mop) would be: ‘if it were not the case that there is a Muppet in the mop cupboard, then that would be a Muppet’. This clarifies the limited perception of truth in which the proposition operates, with correspondence intact.
The way I see it, the Problem of the Criterion is not just about defining concepts or identifying instances, but about our accuracy in the relation between definition and identification. The problem is that knowledge is not an acquisition, but a process of refining this accuracy, which relies on identifying insufficient structures in both aspects of the process.
To use your analogy, the process of refining the relation between getting a job and getting experience relies on each aspect addressing insufficiencies in the other. To solve the problem of circularity, it is necessary to acknowledge this overall insufficiency, and to simply start: either by seeking experience without getting a job (ie. volunteer work, internship, etc) or by seeking a job that requires no experience (entry-level position or unskilled labour).
In terms of identification and definition, it is necessary to recognise the overall insufficiency in our knowledge, and either start with an arbitrary definition to which we can relate instances in one way or another, or by identifying instances that will inform a definition - knowing that the initial step is far from accurate, but begins a relational process towards accuracy.
This reminds me of a Blackadder response - "Yes.. And no."
Quoting Possibility
I think that, according to your above statement, the technical definition of a class does not correlate to immediate sense experience, nor to the conception from direct encounters between the subject and the object, nor to the recognition practices of objects in routine life. If that is the claim, I contend that technically exhaustive definitions are just elaborated contours of the same classes, but with a level of detail that differs, because it is necessary for professionals who operate with indirect observations of the object. Say, as a software engineer, I think of computers in a certain way, such that I could recognize features of their architecture in some unlabeled schematic. A schematic is not immediate sense experience, but my concept does not apply just to appearances, but to logical organization, so the context in which the full extent of my definition becomes meaningful is not the perceptual one. For crude recognition of devices by appearance in my daily routine, I match them to the idea using a rough, distilled approximation of my concept, drawing on the superficial elements in it and removing the abstract aspects, which remain underutilized.
If you are referring just to the process of identification, you are right that if I see an empty computer case, I will at first assume that it is the complete assembly and classify it as a computer. There is no ambiguity as to what a computer is in my mind, even in this scenario, but the evaluation of the particular object is based on insufficient information, and it is made with assumed risk. The unsuccessful application of weighted guesses to fill in the missing features turns into an error in judgement. So, this is fuzziness of the concept-matching process, stemming from the lack of awareness, even though the definition is inappropriate under consideration of the object's complete description.
Another situation is that if I am given a primitive device with some very basic cheap electronics in it, I might question whether it is a computer. Here the fuzziness is not curable with more data about the features of the object, because it results from the borderline evaluation of the object by my classifier. Hence, I should recognize that classes are nuances that gradually transition between each other.
A different case arises when there is disagreement of definitions. If I see a washing machine, I would speculate that it hosts a computer inside (in the sense of electronics having the capacity for universal computation, if not anything else), but an older person or a child might not be used to the idea of embedded electronics and recognize the object as mechanical. That is, I will see computers in more places, because I have a wider definition. The disparity here is linguistic and conceptual, because the child or elderly person makes a crude first association based on appearances, and the resulting identification is not as good a predictor of the qualities of the object they perceive. We don't talk the same language, and our underlying concepts differ.
In the latter case, my definition increases the anticipated range of tools supported by electronics, and my view on the subject of computing is a more inclusive classifier. The classification outcome predicts the structure and properties of the object, such as less friction and less noise. We ultimately classify the elements of the environment with the same goal in mind: discernment between distinct categories of objects and anticipation of their properties. But the boundaries depend on how much experience we have and how crudely we intend to group the objects.
So, to summarize. I agree that sometimes the concept is indecisive due to edge cases, but sometimes the fuzziness is in its application, due to incomplete information. This does not change the fact that the academic definition is usually the most clearly ascribed. There is also the issue of linguistic association with concepts. I think that people can develop notions and concepts independently of language and communication, just by observing the correlations between features in their environment, but there are variables there that can sway the process in multiple directions and affect the predictive value of the concept map.
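To put the borderline fuzziness in programmer's terms, here is a toy sketch. The features, weights and thresholds are entirely made up for illustration; nothing here is a real classifier.

```python
# Toy "is this a computer?" classifier.
# All features, weights and thresholds are invented for illustration only.
FEATURES = {"has_cpu": 0.5, "programmable": 0.3, "general_purpose": 0.2}

def computer_score(observed):
    """Sum the weights of the features observed as present."""
    return sum(w for f, w in FEATURES.items() if observed.get(f))

def classify(observed, lo=0.4, hi=0.7):
    """A score near the boundary yields 'borderline' -- fuzziness
    inherent to the classifier, not curable with more data."""
    score = computer_score(observed)
    if score >= hi:
        return "computer"
    if score <= lo:
        return "not a computer"
    return "borderline"

print(classify({"has_cpu": True, "programmable": True, "general_purpose": True}))  # computer
print(classify({"has_cpu": True}))  # borderline: a primitive device with basic electronics
```

The empty-computer-case error is different: there the score is computed from incomplete observations, so the verdict is confident but wrong, rather than borderline.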
:smile:
Get used to it with Possibility.
2. How can we tell which propositions are true/knowledge? [Definition of truth/knowledge]
Knowledge is justified belief. What evidence counts as sufficient justification depends first upon the desired intent. Truth is an individual perspective on reality (consensus experience). This understanding is necessary and sufficient for all related epistemological questions and problems.
universal taxonomy - evidence by certainty
0 ignorance (certainty that you don't know)
1 found anecdote (assumed motive)
2 adversarial anecdote (presumes inaccurate communication motive)
3 collaborative anecdote (presumes accurate communication motive)
4 experience of (possible illusion or delusion)
5 ground truth (consensus Reality)
6 occupational reality (verified pragmatism)
7 professional consensus (context specific expertise, "best practice")
8 science (rigorous replication)
-=empirical probability / logical necessity=-
9 math, logic, Spiritual Math (semantic, absolute)
10 experience qua experience (you are definitely sensing this)
You seem to be arguing for definition of a concept as more important than identification of its instances, but this only reveals a subjective preference for certainty. There are variables that affect the predictive value of the concept map regardless of whether you start with a definition or identified instances. Language and communication are important to periodically consolidate and share the concept map across differentiated conceptual structures - but also so that variables in the set of instances are acknowledged and integrated into the concept map.
That is true. I rather cockily answered "yes and no". I do partly agree with you. There are many layers to the phenomenon.
I want to be clear that I don't think that a dog is defined conceptually by the anatomy of the dog, as though it were inherently necessary to define objects by their structure. I don't even think that a dog can be defined conceptually and exhaustively from knowing all the dogs in the world. It is, rather, contrasted with all the cats (very roughly speaking). But eventually there is some convergence, when the sample collection is so large that we can tell enough about the concept (in contrast to other neighboring concepts) that we don't need to continue its refinement. And that is when we arrive at some quasi-stable technical definition.
There are many nuances here. Not all people have practical use for the technical definition, since their life's occupation does not demand it and they have no personal interest in it. But I was contending that those who do use the fully articulated concept will actually stay mentally committed to its full detail, even when they use it crudely in routine actions. Or at least for the most part. They could make intentional exceptions to accommodate conversations; they just won't involve the full extent of their knowledge at the moment. Further, people can disagree on concepts because of the extrapolations that could be made from them, or the expressive power that certain theoretical conceptions offer relative to others.
I was originally proposing how the process of categorical conception takes place by direct interactions of children or individuals, without passing of definitions second-hand, or from the overall anthropological point of view. I think it is compatible with your proposal. Let's assume that people inhabited a zero-dimensional universe and only experienced different quantities over time. Let's take the numbers 1 and 2. Since only two numbers exist, there is no need to classify them. If we experience 5, we could decide that our mental vocabulary is poor, and classify 1 and 2 as class A, and 5 as class B. This now becomes the vocabulary of our mental process, albeit with little difference to our predictive capability. (This case would be more interesting if we had multiple features to correlate.) If we further experience 3, we have two sensible choices that we could make. We could either decide that all numbers are in the same class, making our perspective of all experience non-discerning, or decide that 1, 2, and 3 are in one class, contrasted with the class of 5. The distinction is that if all numbers are in the same class, considering the lack of continuity, we could speculate that 4 exists. Thus, there is a predictive difference.
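In code, the predictive difference could be sketched like this (a rough illustration; the interpolation rule and the names are my own assumptions for the sketch):

```python
# Experiences so far: 1, 2, 5 -- partitioned as A = {1, 2}, B = {5}.
# A new experience, 3, arrives; two sensible partitions follow.

def anticipated(classes):
    """Values interpolated within each class: everything between a
    class's minimum and maximum is anticipated as possible."""
    out = set()
    for cls in classes:
        out.update(range(min(cls), max(cls) + 1))
    return out

merged = [{1, 2, 3, 5}]        # one class covering all numbers
separate = [{1, 2, 3}, {5}]    # A extended with 3; B stays distinct

print(4 in anticipated(merged))    # True  -- the merged partition predicts 4
print(4 in anticipated(separate))  # False -- the split partition does not
```

The two partitions agree on everything observed so far; they differ only in what they predict, which is the point.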
In reality, we are dealing with vast multi-dimensional data sets, but we are applying a similar process of grouping experience together, extrapolating the data within the groups, assigning objects to their most fitting group, and predicting their properties based on the anticipated features of the class space at their location.
P.S.: I agree with your notion for the process of concept refinement, I think.
I think I see this. A fully articulated concept is rarely (if at all) stated in its full detail - definitions are constructed from a cascade of conceptual structures: technical terms each with their own technical definitions constructed from more technical terms, and so on. For the purpose of conversations (and to use a visual arts analogy), da Vinci might draw the Vitruvian Man or a stick figure - it depends on the details that need to be transferred, the amount of shared conceptual knowledge we can rely on between us, and how much attention and effort each can spare in the time available.
I spend a great deal of time looking up and researching terms, concepts and ideas I come across in forum discussions here, because I’ve found that my own routine or common-language interpretations aren’t detailed enough to understand the application. I have that luxury here - I imagine I would struggle to keep up in a face-to-face discussion of philosophy, but I think I am gradually developing the conceptual structures to begin to hold my own.
Disagreement on concepts here is often the result of both narrow and misaligned qualitative structures or patterns of instances and their extrapolations - posters here such as Tim Wood encourage proposing definitions, so that this variability can be addressed early in the discussion. It’s not always approached as a process of concept refinement, but can be quite effective when it is.
I will address the rest of your post when I have more time...
Yes, the underlying concept doesn't change, but just its expression or application. Although, not just in relation to communication, but also its personal use. Concepts can be applied narrowly by the individual for recognizing objects by their superficial features, but then they are still entrenched in full detail in their mind. The concept is subject to change, as you described, because it is gradually refined by the individual and by society. The two, the popularly or professionally ratified one and the personal one, need not agree, and individuals may not always agree on their concepts. Not just superficially, by how they apply the concepts in a given context, but by how those concepts are explained in their mind. However, with enough experience, the collectively accepted technically precise definition is usually the best, because even if sparingly applied in professional context, it is the most detailed one and can be reduced to a distilled form, by virtue of its apparent consequences, for everyday use if necessary.
The example I gave, with the zero-dimensional inhabitant, was a little bloated and dumb, but it aimed to illustrate that concepts correspond to partitionings of experience. This means that they are not completely random, because they are anchored in experience, direct or indirect, and yet they are a little arbitrary too, because there are multiple ways to partition the same set. I may elaborate the example at a later time, if you deem it necessary.
The best definition being the broadest and most inclusive in relation to instances. So long as we keep in mind that the technical definition is neither precise nor stable - only relatively so. Awareness of, connection to and collaboration with the qualitative variability in even the most precise definition is all part of this process of concept refinement.
Quoting simeonz
I’m glad you added this. I have some issues with your example - not the least of which is its ‘zero-dimensional’ or quantitative description, which assumes invariability of perspective and ignores the temporal aspect. You did refer to multiple inhabitants, after all, as well as the experience of different quantities ‘over time’, suggesting a three-dimensional universe, not zero. It is the mental process of a particular perspective that begins with a set of quantities - but even without partitioning the set, qualitative relation exists between these experienced quantities to differentiate 1 from 2. A set of differentiated quantities is at least one-dimensional, in my book.
Actually, there are multiple kinds of dimensions here. The features that determine the instant of experience are indeed in one dimension. What I meant is that the universe of the denizen is trivial. The spatial aspect is zero-dimensional; the spatio-temporal aspect is one-dimensional. The quantities are the measurements (think electromagnetic field, photon frequencies/momenta) over this zero-dimensional (one-dimensional with the time axis included) domain. Multiple inhabitants are difficult to articulate, but such a defect from the simplification of the subject is to be expected. You can imagine complex communication would require more than a single point, but that breaks my intended simplicity.
The idea was this - the child denizen is presented with number 1. The second experience, during puberty, is number 2. The third experience, during adolescence, is number 5. And the final experience, during adulthood, is number 4. The child denizen considers that 1 is the only possibility. Then, after puberty, it realizes that 1 and 2 can both happen. Depending on what faculties for reason we presume here, it might extrapolate, but let's assume only interpolation for the time being. The adult denizen encounters 5 and decides to group experiences in category A for 1 and 2 and category B for 5. This facilitates its thinking, but also means that it doesn't have strong anticipation for 3 and 4, because A and B are considered distinct. Then it encounters 3 and starts to contemplate whether 1, 2, 3, and 5 are the same variety of phenomenon with 4 missing yet, but anticipated in the future, or 1, 2, 3 are one group that inherits semantically from A (by extending it) and 5 remains distinct. This is a choice that changes the predictions it makes for the future. If two denizens were present in this world, they could contend on the issue.
This resembles a problem called "cluster analysis". I proposed that this is how our development of new concepts takes place. We are trying to contrast some things we have encountered with others and to create boundaries to our interpolation. In reality, we are not measuring individual quanta. We are receiving multi-dimensional data that heavily aggregates measurements, we perform feature extraction/dimensionality reduction and then correlate multiple dimensions. This also allows us to predict missing features during observation, by exploiting our knowledge of the prior correlations.
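A minimal sketch of such one-dimensional cluster analysis, for the curious. The gap threshold is an arbitrary parameter I chose for illustration; real cluster analysis would be far more involved.

```python
def cluster_1d(values, max_gap=1):
    """Group 1-D values into clusters: a new cluster starts whenever
    the gap to the previous (sorted) value exceeds max_gap.
    This is single-linkage clustering on a line."""
    clusters = []
    for v in sorted(values):
        if clusters and v - clusters[-1][-1] <= max_gap:
            clusters[-1].append(v)   # close enough: extend current cluster
        else:
            clusters.append([v])     # too far: start a new cluster
    return clusters

print(cluster_1d([1, 2, 5]))     # [[1, 2], [5]]    -- categories A and B
print(cluster_1d([1, 2, 5, 3]))  # [[1, 2, 3], [5]] -- 3 extends A; B stays distinct
```

Note that with a more permissive threshold (say, max_gap=2) the arrival of 3 would instead merge everything into one class, which is exactly the contention the denizens could have.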
Because it isn’t necessarily just the time axis that renders the universe one-dimensional, but the qualitative difference between 1 and 2 as experienced and consolidated quantities. It is this qualitative relation that presents 2 as not-1, 5 as not-3, and 3 as closer to 2 than to 5. Our grasp of numerical value structure leads us to take this qualitative relation for granted.
I realise that it seems like I’m being pedantic here. It’s important to me that we don’t lose sight of these quantities as qualitative relational structures in themselves. We can conceptualise but struggle to define more than three dimensions, and so we construct reductionist methodologies (including science, logic, morality) and complex societal structures (including culture, politics, religion, etc) as scaffolding that enables us to navigate, test and refine our conceptualisation of what currently appears (in my understanding) to be six dimensions of relational structure.
How we define a concept relies heavily on these social and methodological structures to avoid prediction error as we interact across all six dimensions, but they are notoriously subjective, unstable and incomplete. When we keep in mind the limited three-dimensional scope of our concept definitions (like a map that fails to account for terrain or topology) and the subjective uncertainty of our scaffolding, then I think we gain a more accurate sense of our conceptual systems as heuristic in relation to reality.
From IEP: “So, the issue at the heart of the Problem of the Criterion is how to start our epistemological theorizing in the correct way, not how to discover a theory of the nature of truth.“
The way I see it, the correct way to start our epistemological theorising is to acknowledge the contradiction at the heart of the Problem of the Criterion. We can never be certain which propositions are true, nor can we be certain of the accuracy in our methodologies to determine the truth of propositions. Epistemology in relation to truth is a process of incremental advancement towards this paradox - by interaction between identification of instances in experience and definition of the concept via reductionist methodologies.
If an instance doesn’t refine either the definition or the methodology, then it isn’t contributing to knowledge. To focus our mental energies on calculating probability, for instance, only consolidates existing knowledge - it doesn’t advance our understanding, choosing to ignore the Problem and exclude qualitative variability instead of facing the inevitable uncertainty in our relation to truth. That’s fine, as long as we recognise this ignorance, isolation or exclusion of qualitative aspects of reality as part of the mental process. Because when we apply this knowledge as defined, our application must take into account the limitations of the methodology and resulting definition in relation to our capacity to accurately identify instances. In other words, we need to add the qualitative variability back in, or else we limit our practical understanding of reality - which is arguably more important. So, by the same token, if a revised definition or reductionist methodology doesn’t improve our experience of instances, thereby reducing prediction error, then it isn’t contributing to knowledge.
Edit: Sorry for not replying, but I am in a sort of a flux. I apologize, but I expect that I may tarry awhile between replies even in the future.
This is too vast a landscape to be dealt with properly in a forum format. I know this is sort-of a bail out from me, but really, it is a serious subject. I wouldn't be the right person to deal with it, because I don't have the proper qualification.
The oversimplification I made was multi-fold. First, I didn't address hereditary and collective experience. The example involved the ability to discern the quantities, a problem about which you inquired, and which I would have explained as genetically inclined. How genetics influence conceptualization, and the presence of motivation in natural selection that fosters basic awareness of endemic world features, need to be explained. Second, I reduced the feature space to a single dimension, an abstract integer, which avoided the question of making correlations and having to pick the most discerning descriptors, i.e. dimensionality reduction. I also compressed the object space to a single point, which dispensed with a myriad of issues, such as identifying objects in their environment, during stasis or in motion, anticipation of features obscured from view, assessment of orientation, and assessment of distance.
The idea of this oversimplification was merely to illustrate how concepts correspond to classes in taxonomies of experience. And in particular, that there is no real circularity. There was ambiguity stemming from the lack of unique ascription of classes to a given collection of previously observed instances. In the case of 3, for instance, there is an inherent inability to decide whether it falls into the group of 1 and 2, or bridges 1 and 2 with 5. However, assigning 1 and 3 to one class, and 2 and 5 to a different class, would be solving the problem counter-productively. Therefore, the taxonomy isn't formed in arbitrary personal fashion. It follows the objective of best discernment without excessive distinction.
No matter what process actually attains plausible correspondence, what procedure is actually used to create the taxonomy, no matter the kind of features that are used to determine the relative disposition of new objects/samples to previous objects/samples, and how the relative locations of each one are judged, what I hoped to illustrate was that concepts are not designed so much according to their ability to describe common structure of some collection of objects, but according to their ability to discriminate objects from each other in the bulk of our experience. This problem can be solved even statically, albeit with enormous computational expense.
What I hoped to illustrate is that concepts can be both fluid and stable. New objects/impressions can appear in previously unpopulated locations of our experience, or unevenly saturate locations to the extent that new classes form from the division of old ones, or fill the gaps between old classes, creating continuity between them and merging them together. In that sense, the structure of our concept map is flexible. Hence, our extrapolations, our predictions, which depend on how we partition our experience into categories with symmetric properties, change in the process. Concepts can converge, because experience, in general, accumulates and itself converges. The concepts, in theory, should gradually reach some maximally informed model.
Again, to me, all this corresponds to the "cluster analysis" and "dimensionality reduction" problems.
You are correct that I did presuppose quantity discernment and distance measurement (or, in 1-D, difference computation). The denizen knows how to deal with the so-called "affine spaces". I didn't want to go there; that opens an entirely new discussion.
Just to scratch the surface with a few broad strokes here. We know we inherit a lot genetically, environmentally, culturally. Our perception system, for example, utilizes more than five senses that we manage to somehow correlate. The auditory and olfactory senses are probably the least detailed, being merely in stereo. But the visual system starts with about 6 million bright-illumination photoreceptor cells and many more low-illumination photoreceptor cells, unevenly distributed on the retina. Those are processed by a cascade of neural networks, eventually ending in the visual cortex and visual association cortex. In between, people merge the monochromatic information from the photoreceptors into color spectrum information, ascertain depth, increase the visual acuity of the image by superimposing visual input from many saccadic eye movements (sharp eye fidgeting), discern contours, detect objects in motion, etc. I am no expert here, but I want to emphasize that we have inherited a lot of mental structure in the form of hierarchical neural processing. Considering that the feature space of our raw senses is in the millions of bits, having perceptual structure as heritage plays a crucial role in our ability to further conceptualize our complex environment, by reinforcement, by trial and error.
Another type of heritage is proposed by Noam Chomsky. He describes, and there is apparently evidence for this, that people are not merely linguistic by nature, but endowed with inclinations to easily develop linguistic articulations of specific categories of experience in the right environment. Not just basic perception-related concepts, but abstract tokens of thought. This may explain why we are so easily attuned to logic, quantities, social constructs of order, and pro-social behaviors, like ethical behavior and affective empathy (i.e. love). I am suggesting that we use classification to develop concepts from individual experience. This should happen inside the neural network of our brain, somewhere after our perception system and before decision making. I am only addressing part of the issue. I think that nature also genetically programs classifiers into species behavior, by incorporating certain awareness of experience categories in their innate responses. There is also the question of social Darwinism. Because natural selection applies to the collective, individuals are not necessarily compelled to identical conceptualization. Some conceptual inclinations are conflicting, to keep the vitality of the community.
No problem. Persist with the flux - I think it can be a productive state to be in, despite how it might feel. I realise that my approach to this subject is far from conventional, so I appreciate you making the effort to engage. I certainly don’t have any ‘proper’ qualifications in this area myself. But I also doubt that anyone would be sufficiently qualified on their own. As you say, the landscape is too vast. In my view that makes a forum more suitable, not less.
Quoting simeonz
It is the qualification of ‘best discernment without excessive distinction’ that perhaps needs more thought. Best in what sense? According to which value hierarchy? And at what point is the distinction ‘excessive’? It isn’t that the taxonomy is formed in an arbitrarily personal fashion, but rather intersubjectively. It’s a process and methodology developed initially through religious, political and cultural trial and error - manifesting language, custom, law and civility as externally predictive, four-dimensional landscapes from the correlation of human instances of being.
The recent psychology/neuroscience work of Lisa Feldman Barrett in developing a constructed theory of emotion is shedding light on the ‘concept cascade’, and the importance of affect (attention/valence and effort/arousal) in how even our most basic concepts are formed. Alongside recent descriptions in physics (eg. Carlo Rovelli) of the universe consisting of ‘interrelated events’ rather than objects in time, Barrett’s theory leads to an idea of consciousness as a predictive four-dimensional landscape from ongoing correlation of interoception and conception as internally constructed, human instances of being.
But the challenge (as Rovelli describes) is to talk about reality as four-dimensional with a language that is steeped in a 3+1 perspective (subject-object and tensed verb). Consider a molecular structure of two atoms ‘sharing’ an electron - in a similar way, the structure of human consciousness can be seen to consist of two constructed events ‘sharing’ a temporal aspect. This five-dimensional anomalous relation of potentiality/knowledge/value manifests as an ongoing prediction in affect: the instructional ‘code’ for both interoception and conception. How we go about proving this is beyond my scientific capacity, but I believe the capacity exists, nonetheless. As philosophers, our task is to find a way to frame the question.
Quoting simeonz
I agree that concepts are not designed according to their ability to describe common structure or essentialism, but to differentiate between aspects of experience. Partitioning our experience into categories is part of the scientific methodology by which we attempt to make sense of reality in terms of ‘objects in time’.
I also agree that concepts can be perceived as both fluid and stable. This reflects our understanding of wave-particle duality (I don’t think this is coincidental). But I also think the ‘maximally-informed model’ we’re reaching for is found not in some eventual stability of concepts, but in developing an efficient relation to their fluidity - in our awareness, connection and collaboration with relations that transcend or vary conceptual structures.
It’s more efficient to discriminate events than objects from each other in the bulk of our experience. Even though our language structure is based on objects in time, we interact with the world not as an object, but as an event at our most basic, and that event is subject to ongoing variability. ‘Best discernment without excessive distinction’ then aims for allostasis - stability through variability - not homeostasis. This relates to Barrett as mentioned above.
I guess I wanted to point out that there is more structural process to the development of concepts than categorising objects of experience through cluster analysis or dimensionality reduction, and that qualitative relations across multiple dimensional levels play a key role.
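To make the baseline concrete, here is a toy sketch (entirely my own illustration, not drawn from either poster) of what concept formation as plain cluster analysis looks like: experiences grouped purely by feature similarity, with none of the qualitative relations being discussed here. The function names and data are invented for the example.

```python
# Toy 1-D k-means: the "cluster analysis" baseline for concept formation,
# grouping experiences by nothing but feature similarity.

def assign_to_clusters(points, centroids):
    """Assign each 1-D point to the index of its nearest centroid."""
    return [min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            for p in points]

def kmeans_1d(points, centroids, iterations=10):
    """Naive k-means: alternate nearest-centroid assignment and mean update."""
    for _ in range(iterations):
        labels = assign_to_clusters(points, centroids)
        for i in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == i]
            if members:
                centroids[i] = sum(members) / len(members)
    return centroids, assign_to_clusters(points, centroids)

# Two obvious "concepts" in the data: small values and large values.
centroids, labels = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.7], [0.0, 5.0])
```

The point of the sketch is its poverty: the algorithm only ever sees quantitative distances between features, which is exactly why categorisation by similarity alone falls short of the multi-level qualitative relations described above.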
You are right that many complex criteria are connected to values, but the recognition of basic object features, I believe, is not. As I mentioned, we should account for the complex hierarchical cognitive and perceptual faculties with which we are endowed from the get-go. At least, we know that our perceptual system is incredibly elaborate, and doesn't just feed raw data to us. As infants, we don't start from a blank slate and become conditioned by experience and interactions to detect shapes, recognize objects and assess distances. Those discernments are essential to how we later create simple conceptualizations, and they are completely hereditary. And although this is a more tenuous hypothesis, like Noam Chomsky, I do actually believe that some abstract notions - such as length, order and symmetry, identity, compositeness, self, etc. - are biologically pre-programmed. Not to the point where they are inscribed directly in the brain, but their subsequent articulation is heavily predisposed, and under exposure to the right environment, these predispositions trigger infant conceptualization. I think of this through an analogy with embryonic development. Fertilized eggs cannot develop physically outside the womb, but within its conditions they are programmed to divide and organize rapidly into a fetal form. I think this happens neurologically with us when we are exposed to the characteristic physical environment during infancy.
This heredity hypothesis can appear more reasonable in light of the harmonious relationship between any cognizant organism and the laws of the environment in which it operates. To some extent, even bacterial lifeforms need to be robotically aware of the principles governing their habitat. Our evolutionary history transpired in the presence of the same constraining factors, such as the law of inertia for objects moving in the absence of forces, and thus it is understandable that our cognitive apparatus would be primed to anticipate the dynamics in question, with a rudimentary sense of lengths and quantities. Even if such notions are not explicit, the relationship between our reconstruction of the features of the world and the natural laws would be approximately homomorphic. And the hypothesis is that at some point after the appearance of linguistic capabilities, we were further compelled by natural selection towards linguistic articulation of these mental reconstructions through hereditary conceptualization, whereas fundamental discernment of the features of appearances would have developed even earlier, being more involuntary and unconscious.
Quoting Possibility
Maybe I am misreading the argument. Affective dispositions are essential to human behavior where social drives and other emotions come into the fray, but people also apply a layer of general intelligence. I will try to make a connection to a neurological condition of reduced amygdala volume, which renders people incapable of any affective empathy and, for the most part, highly diminishes their sense of anxiety. They are capable of feeling only anger or satisfaction, and the feelings fade quickly. Such individuals are extremely intelligent, literate and articulate. They conceptualize the world slightly differently, but are otherwise capable of the same task planning and anticipation. Considering the rather placid nature of their emotions (compared to a neurotypical) and their exhibition of a reasonably similar perception of the world, intelligence isn't that reliant on affective conditions. Admittedly, they still do have cognitive dispositions, feel pain or pleasure, have basic needs as well, are unemotionally engaged with society and subject to culture and norms (to a smaller extent). But the significant disparity in affective stimuli and the relative closeness to us in cognitive output appears to imply that affective dispositions are a secondary factor for conceptualization. At least on a case by case basis. I am not implying that if we all had smaller amygdala volumes, it wouldn't transform social perception.
Quoting Possibility
To be honest, it depends on whether a person can reach a maximally informed state, or at least a sufficiently informed state, with respect to a certain aspect of their overall experience. For example, quantum mechanics changed a lot about our perception of atoms, and atoms changed a lot about our perception of the reaction of objects to heat, but I think that to some extent, a chair is still a chair to us, as it was in antiquity. I think that while we might perceive certain features of a chair differently - such as what happens when we burn it, how much energy is in it, or what is in it - its most basic character, namely that of an object which offers solid support for your body when you rest yourself on it, is unchanged. The problem with the convergence of information is its reliance on the potential to acquire most of the discernment value from a reasonably small number of observations. After all, this is a large universe, with intricate detail, lasting a long time.
Quoting Possibility
I do believe that intelligence, to a great extent, functions like a computer trying to evaluate outcomes from actions according to some system of values. The values are indeed derived from many factors. I do agree that there are implicit aspects to our intelligence strongly engaged with ecosystemic stability, where the person is only one actor in the environment and tries to enter into correct symbiotic alignment with it. The function of the personal intelligence becomes allostatically aimed, as you describe. On the other hand, there are aspects to our intelligence - not always that clearly separated, but at least measurably autonomous from this type of conformant symbiotic thinking - that are concerned with representational accuracy. You are right that I was focusing more on this type of conceptual mapping, and indeed, it is the only one that is homeostatically aimed. In fact, the recent discussions in the forum were addressing the subject of belief and its relationship to truth, and I meant to express my opinion, which follows exactly these lines: that our personal ideas can seek alignment with the world either by exploring compelling facts outside of our control, or by maneuvering ourselves through the space of possible modes of being and trying to adjust according to our consequent experience. The distinction and the relationship between the two is apparently of interest, but is also difficult to reconcile. Also, I was referring to objects, but objects are merely aspects of situations. Even further, as you suggest, situations are merely aspects of our relation to the context in which these situations occur. I was simplifying on one hand, and on the other, I do indeed think that we classify objects as well, since thankfully we have the neurological aptitude to separate them from the background and to compress their features, thanks to our inherited perception apparatus and rudimentary conceptualization skills.
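The "computer evaluating outcomes from actions according to some system of values" picture can be made concrete with a toy sketch (my own illustration, not a claim about how minds actually work): probability-weighted outcomes scored under a value function, with the highest-scoring action chosen. The action names, outcomes and numbers are all invented for the example.

```python
# Toy expected-value action selection: "evaluate outcomes from actions
# according to a system of values" and pick the best-scoring action.

def expected_value(outcomes, values):
    """outcomes: list of (outcome_name, probability); values: name -> worth."""
    return sum(p * values[name] for name, p in outcomes)

def choose_action(actions, values):
    """actions: action_name -> list of (outcome_name, probability)."""
    return max(actions, key=lambda a: expected_value(actions[a], values))

# Hypothetical value system and action space.
values = {"warm": 1.0, "burned": -10.0, "cold": -1.0}
actions = {
    "sit_by_fire":  [("warm", 0.9), ("burned", 0.1)],  # EV = 0.9 - 1.0 = -0.1
    "stay_outside": [("cold", 1.0)],                   # EV = -1.0
}
best = choose_action(actions, values)
```

Note that everything interesting in the surrounding discussion - where the values come from, and whether the aim is allostatic or representational - sits outside this mechanism, in the `values` table itself.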
Quoting Possibility
In retrospect, I think that there are two nuances to intelligence, and I was addressing only one: the empirically, representationally aimed one.
Edit: I should also point out that the intelligence you describe is the more general mechanism. I have previously referred to a related distinction, that of pragmatic truth versus representational truth. And pragmatic truth, as I have stated, is the more general form of awareness. But it is also the less precise and the more difficult to operate. It is outside the boundary of empiricism. Your description of allostatic conceptualization is actually something slightly different, yet related. It brings a new quality to pragmatic truth for me. I usually focus on empirical truth - not because I want to dispense with the other, but because it has the more obvious qualities. Even if both are evidently needed, the latter operates under the former.
This is a common misunderstanding of affect and the amygdala, supported by essentialism, mental inference fallacy and the misguided notion of a triune brain structure. The amygdala has been more recently proven NOT to be the source of emotion in the brain - it activates in response to novel situations, not necessarily emotional ones. Barrett refers to volumes of research dispelling claims that the amygdala is the brain location of emotion (even of fear or anxiety). Interpretations of behaviour in those with reduced or even destroyed amygdala appear to imply the secondary nature of affect because that’s our preference. We like to think of ourselves as primarily rational beings, with the capacity to ‘control’ our emotions. In truth, evidence shows that it’s more efficient to understand and collaborate with affect in determining our behaviour - we can either adjust for affect or try to rationalise it after the fact, but it remains an important aspect of our relation to reality.
I’m not sure which research or case studies you’re referring to above (I’m not sure if the subjects were born with reduced amygdala or had it partially removed, and I think this makes a difference in how I interpret the account), but from what you’ve provided, I’d like to make a few points. I don’t think that impaired or reduced access to interoception of affect makes much difference to one’s capacity for conceptualisation, or their intelligence as commonly measured. I think it does, however, make a difference to their capacity to improve accuracy in their conceptualisation of social reality in particular, and to their overall methodology in refining concepts. They lack information that enables them to make adjustments to behaviour based on social cues, but thanks to the triune brain theory and our general preference for rationality, they’re unlikely to notice much else in terms of ‘impairment’.
I would predict that they may also have an interest in languages, mathematics, logic and morality - because these ensure they have most of the information they need to develop concepts without the benefit of affect. They may also have a sense of disconnection between their physical and mental existence, relatively less focus on sporting or sexual activity, and an affinity for computer systems and artificial intelligence.
As for anxiety, this theoretically refers to the amount of prediction error we encounter from a misalignment of conception and interoception. If there’s reduced access to interoception of affect by conceptualisation systems, there’s less misalignment.
Quoting simeonz
Well, not once you’ve identified them as objects, no. I don’t think that’s how these initial concepts are developed, though. I think the brain and sensory systems are biologically structured to develop a variety of conceptual structures rapidly and efficiently, some even prior to birth. Barrett compares the early development of concepts to the computer process of sampling, where similarities are separated from differences, and only what is different from one frame or pixel to the next is transmitted.
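The sampling analogy can be sketched in code (my own toy illustration of frame differencing in general, not Barrett’s actual description of neural processing): similarity between frames is treated as redundancy, and only the changed pixels are transmitted.

```python
# Toy frame differencing: transmit only the pixels that changed since the
# previous frame; similarity is redundancy and is never sent.

def frame_delta(prev, curr):
    """Return a sparse delta: {pixel_index: new_value} for changed pixels."""
    return {i: new for i, (old, new) in enumerate(zip(prev, curr)) if old != new}

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous frame plus the delta."""
    return [delta.get(i, v) for i, v in enumerate(prev)]

prev_frame = [0, 0, 5, 5, 0]
curr_frame = [0, 1, 5, 5, 9]
delta = frame_delta(prev_frame, curr_frame)  # only two of five pixels changed
```

The economy of the scheme is the point of the analogy: the full frame is recoverable, yet only the differences need to be carried forward.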
I do, however, believe that notions such as distance, shape, space, time, value and meaning refer to an underlying qualitative structure of reality that is undeniable. We ‘feel’ these notions long before we’re able to conceptualise them.
Quoting simeonz
I think bacterial lifeforms are aware of the principles governing their habitat only to the extent that they impact allostasis. Any rudimentary sense of values would be initially qualitative, not quantitative - corresponding to the ongoing interoception of valence and arousal in the organism. But as Barrett suggests, the neuronal structure conceptualises in order to summarise for efficiency, separating statistical similarities from sensory differences to eliminate redundancy. Our entire evolutionary development has been in relation to the organism’s capacity to more efficiently construct and refine conceptual systems and structures for allostasis from a network of interoceptive systems. The systems and network we’ve developed now consist of whole brain processes, degeneracy, feedback loops and a complex arrangement of checks and balances, budgeting the organism’s ongoing allocation of attention and effort.
This is a long post already - I will return to the rest of your post later...
I think this sense that a chair is still a chair to us relates to goal-oriented concepts. Barrett references the work of cognitive scientist Lawrence W. Barsalou, and demonstrates that we are pre-programmed to develop goal-oriented concepts effortlessly: to categorise seemingly unconnected instances - such as a fly swatter, a beekeeper’s suit, a house, a car, a large trash can, a vacation in Antarctica, a calm demeanour and a university degree in entomology - under purely mental concepts such as ‘things that protect you from stinging insects’. “Concepts are not static but remarkably malleable and context-dependent, because your goals can change to fit the situation.” So if an object meets that goal for you, then it’s a chair, whether it’s made of wood or plastic, shaped like a box or a wave, etc.
Quoting simeonz
Charles Peirce’s pragmaticist theory of fallibilism, as described in Wikipedia’s article on empiricism: “The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth". The historical oppression of pragmatic truth by empirical truth translates to a fear of uncertainty - of being left without solid ground to stand on.
Yes, pragmatic truth is less precise in a static sense, but surely we are past the point of insisting on static empirical statements? Quantum mechanics didn’t just change our perception of atoms, but our sense that there is a static concreteness underlying reality. We are forced to concede a continual state of flux, which our sensory limitations as human observers require us to statistically summarise and separate from its qualitative variability, in order to relate it to our (now obviously limited sense of) empirical truth. Yet pragmatically, the qualitative variability of quantum particles is regularly applied as a prediction of attention and effort with unprecedented precision and accuracy.
I know that it is me who brought it up, but I dare say that the precise function of the amygdala is not that relevant to our discussion. Unless you are drawing conclusions from the mechanism by which people attain these anomalous traits, I would consider the explanation outside the topic. Regarding the quality of cognitive and neurological research, I assume that interpretational latitude exists, but the conclusions are still drawn from correlations between activation of the brain region and cues of affect after exposure to a perceptual stimulus. From briefly skimming the summaries of a few recent papers, I am left with the impression that there is no clear and hard assertion at present, but what is stated is that there might be primary and secondary effects, and interplay between this limbic component and other cognitive functions. Until I have evidence that allows me to draw my own conclusion, I am assuming that the predominant opinion of involvement in emotional processing is not completely incorrect.
Quoting Possibility
Psychiatry labels the individuals I was referring to as having antisocial personality disorder, but that is a broad-stroke diagnosis. The hereditary variant of the condition goes under additional titles in related fields - forensic psychology and neurology call it psychopathy. Since psychopaths do not experience overwhelming discomfort from their misalignment with pro-social behaviors, they are almost never voluntary candidates for treatment and are rather poorly researched. I am not at all literate on the subject, but I am aware of one paper that was produced in collaboration with such an affected individual. According to the same person, a dozen genes are potentially involved as well, some affecting neurotransmitter binding, and from my observation of the responses given by self-attested psychopaths on Quora, the individuals indeed confirm smaller amygdala volume. This is a small sample, but I am primarily interested in the fact that their callous-unemotional traits seem to be no obstruction to having reasonably eloquent exchanges. They can interpret situations cognitively, even if they lack emotional perception of social cues.
Psychopaths do not report being completely unemotive. They can enjoy scenery. The production of a gratifying feeling from successful mental anticipation and analysis of form, as you describe, from music or visual arts, is not foreign to them. Probably less expressively manifest than in a neurotypical, but not outright missing.
Quoting Possibility
There might be an allusion here. I am not getting my information first hand. I would characterize myself as neurotic. Granted, a psychopath would mask themselves, so you could make of it what you will, but I am at worst slightly narcissistic.
Quoting Possibility
What you describe seems more like being in a surprised state. I am thinking more along the lines of oversensitivity and impulsiveness, heightened attention, resulting from the perception of impactfulness and uncertainty. In any case, psychopaths claim that both their fear and anxiety responses are diminished.
Quoting Possibility
I understand that you specifically emphasize that we perceive, and indeed this is in opposition to Chomsky's theory of innate conceptualization. Granted, perception does not rely on abstractly coded mental awareness. But even if we agree to disagree regarding the plausibility of Chomsky's claim, what you call feeling, I could be justified in calling perceptual cognition. Even pain is the registration of an objective physical stimulus (unless there is a neurological disorder of some kind), and as analytically blocking and agonizing as it can be, it is not intended to be personally interpretative.
Quoting Possibility
Again, interoception, when it expresses an objective relation between the subject and their environment, is simply perception. How do you distinguish this interoceptive awareness from being cognizant of the objective features of your surroundings? The fact is that we are able to perceive objects easily and to discern visual frame constituents quickly. There is specialization in the development of our brain structures, and it is very important for drawing empirical information from our environment. This suggests to me that empirical assessment is natural to us and part of our intellectual function.
Quoting Possibility
I do agree that if we grouped only according to innate functions, every object that provides a static mechanical connection between an underlying surface and a rested weight would be a chair. That would put a trash bin in the same category, and it isn't in it. However, we do have a function concept of the mechanical connection, i.e. the concept of resting weight through an intermediary solid, and it has not changed significantly with the discovery of QM. We develop both function concepts and use concepts, intentionally, depending on our needs. The metrics through which we cluster the space of our experience can be driven by uses or functions, depending on our motivation for conceptualization.
Quoting Possibility
Going back to the influence of QM and the convergence of physical concepts. Aristotle taught that movement depends on the presence of forces. Newton dismantled that notion. But we still perceive the world as mostly Aristotelian. I am aware of Newtonian physics, and I do conceptualize the world as at least Newtonian. But I consider the Newtonian world as mostly Aristotelian in my average experience. New physical paradigms do not entirely uproot how we evaluate the features of our environment, but refine them. They revolutionize our perception of the extent of physical law, which makes us reevaluate our physical theories and makes us more observant. The same is true for relativity and QM.
Quoting Possibility
I am not sure which aspect of staticity you oppose. Truth does not apply to anthropological realities in the same sense by default. As I stated in another thread, you cannot always support truth with evidence, because not all statements have this character. Anthropological phenomena, including science, depend on the "rightness of approach", which is settled by consensus rather than just hard evidence. On the other hand, empirical truth underlies the aim of the scientific pursuit, and it is the quality of its attainment that can produce convergence. It may not be attained in reality, but if it is attained, the result will be gradually converging.
Let's suppose that truth, as we can cognitively process it, is never static. Let's examine a few reasons why that could be.
I do not contest interoception. Appreciation of music and art is, I believe, an interoceptive-analytical loop of sorts. Most mental actions involve a degree of satisfaction that manifests interoceptively as well. I only contend that it sways our cognitive response from objective analysis of the information towards some allostatically aimed impulsive reaction. For social interactions, as they are inherently subjective, this may be true, but for empiricism and physical feature analysis, I would say not so much.