Neural Networks, Perception & Direct Realism
Artificial neural networks have seen a lot of success in recent years with image recognition, speech analysis, and game playing. What's interesting is that the networks aren't programmed specifically to recognize images or sounds, but rather learn to do so from a training data set. Once trained, the network can then recognize new shapes or sounds that are similar to, but not exactly the same as, the ones it was trained on.
A simple example is learning to recognize handwritten digits. Supervised learning is when the training data carries labels, such as the label 3 for any version of a handwritten three in the data. Unsupervised learning is when the network learns to recognize patterns that are not labelled.
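A minimal sketch may make the contrast concrete (the data and function names here are invented for illustration; the "digits" are just 2-D points, but real handwritten digits work the same way as higher-dimensional points, e.g. 28x28 pixel arrays). The supervised learner leans on labels; the unsupervised one finds the two groups with no labels at all:

```python
# Toy contrast between supervised and unsupervised learning.

def _dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def nearest_neighbour(train, query):
    """Supervised: every training point carries a label like "3"."""
    _, label = min(train, key=lambda pl: _dist(pl[0], query))
    return label

def two_means(points, iters=10):
    """Unsupervised: 2-means clustering; no labels anywhere."""
    c0, c1 = points[0], points[-1]  # crude initial centroids
    for _ in range(iters):
        g0 = [p for p in points if _dist(p, c0) <= _dist(p, c1)]
        g1 = [p for p in points if _dist(p, c0) > _dist(p, c1)]
        c0, c1 = _mean(g0), _mean(g1)
    return g0, g1

labelled = [((0.1, 0.2), "3"), ((0.2, 0.1), "3"),
            ((0.9, 0.8), "7"), ((0.8, 0.9), "7")]
print(nearest_neighbour(labelled, (0.15, 0.15)))  # classifies a new "three"

unlabelled = [p for p, _ in labelled]
g0, g1 = two_means(unlabelled)
print(len(g0), len(g1))  # two groups still emerge, without any labels
```

The philosophical point survives the toyness: the clustering routine is never told what the groups are, yet it still recovers them from structure in the data itself.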
A hierarchical neural network has different layers at which patterns are recognized and built up to match a complicated shape like a face, or spoken sentences. Last year, Google's DeepMind was fed three days' worth of YouTube video thumbnails, containing 20,000 different objects that humans would recognize. But there were no labels, so this was unsupervised.
Human faces and cats were two of the categories DeepMind learned to recognize successfully. This is of philosophical note, because it confirms that there are objective, mind-independent patterns in the data for a neural network to find. Otherwise, unsupervised learning shouldn't work at all, particularly not at recognizing the same shapes and sounds that human minds do.
In summary, in the case of unsupervised learning, neural networks discover patterns that our visual and auditory systems find, without any programming telling them to do so. This lends support to there being mind-independent shapes and sounds out in the world that we evolved to see and hear. And this would be direct, because neural networks have no mental content to act as intermediaries.
It also runs counter to Kant's claims, at least at the level of raw perception, because if DeepMind can recognize cat patterns in video data, then it can't be the case that cats are merely phenomena. They must be part of the noumena, since the neural networks have not been trained or programmed to recognize space, time, or any categories of thought.
The cats really are there in Youtube videos.
Comments (192)
All forms of pattern recognition involve a priori representational assumptions, and unsupervised learning is no different. In fact, machine learning nicely vindicates neo-Kantian ideas of perceptual judgement, even if not in terms of the same fundamental categories, nor categories derived from introspective transcendental arguments. (Kant was, after all, targeting philosophical skepticism about the self and the possibility of knowledge of the external world, not the scientific problem of understanding the behavioural aspects of mental functioning, which is a merely empirical affair.)
Yet from this neo-Kantian perspective, consider for example a nearest-neighbour image classifier consisting of nothing more than a disordered collection of images. Without any a priori assumptions it is impossible even to talk about this image collection as containing a pattern, let alone to classify new images with respect to one.
This was the basic observation of David Hume: raw observational data by itself cannot justify empirical judgements as claims to knowledge. Kant merely pointed out that raw observations alone cannot even constitute empirical judgements, which, as machine learning nicely illustrates, require innate judgement in the form of synthetic a priori responses.
Generalisation from experience requires metrics of similarity for perceptual pattern matching, together with categories of perception for filtering the relevant information to be compared. Neural networks don't change this picture, even if perceptual filters are partially empirically influenced. Decisions still need to be made about the neural architecture, its width, depth, the neural activation responses on each layer, the anticipatory patterns of neurons and so on.
All of this constitutes "synthetic a priori" processing.
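The point about width, depth and activation responses being prior decisions can be made concrete. In the toy network below (invented for illustration, not any particular system's design), every structural fact about the hierarchy is fixed before a single data point arrives; training only ever adjusts the numbers inside that fixed scaffold:

```python
import math
import random

random.seed(0)

# A priori design decisions, fixed before any training data is seen:
LAYER_WIDTHS = [4, 8, 3]   # the depth and width of the hierarchy
ACTIVATION = math.tanh     # the neurons' response function

def init_network(widths):
    """Random initial weights; but the *shape* of the weight matrices
    is dictated entirely by the a priori architectural choices."""
    return [[[random.gauss(0, 0.5) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(widths, widths[1:])]

def forward(net, x):
    """Pass an input up through the fixed hierarchy of layers."""
    for layer in net:
        x = [ACTIVATION(sum(w * xi for w, xi in zip(neuron, x)))
             for neuron in layer]
    return x

net = init_network(LAYER_WIDTHS)
out = forward(net, [1.0, 0.0, -1.0, 0.5])
print(len(out))  # the output dimensionality was decided in advance
```

Learning would change the weights, but never the architecture: the number of layers, their sizes, and the activation function all play the role of "synthetic a priori" structure here.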
You have a point there. What if we let the cats train the neural networks?
Even if this doesn't count as a critique of Kantianism, it does count against skepticism. And it shows how rudimentary perception can work on a direct realist account.
Note also how the learning depends on there being an a-priori hierarchical organisation. So that rather supports Kant's point that notions of spacetime must be embedded to get the game going.
Hierarchical organisation works by imposing a local~global dichotomy, or symmetry-breaking, on the data. At the hardware level, there is a separation of the transient impressions - a succession of training images - from the developing invariances extracted by the high-level feature-detecting "neurons".
So space and time are built in by the hierarchical design. The ability to split experience into general timeless/placeless concepts vs specific spatiotemporal instances of those concepts is already a given embedded in the hardware design, not something that the machine itself develops.
So a neural network can certainly help make Kant more precise. We can see that judgements of space and time have a deeper root - the hierarchical organisation that then "processes" the world, the thing-in-itself, in a particular natural fashion.
The key is the way hierarchies enforce local~global symmetry-breakings on "data". And what emerges as a hierarchical modelling system interacts with the messy confusion of a "real world" is a separation in which both the global conceptions, and the local instances, become "clear" in tandem. They co-arise.
Perception involves being able to see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat".
So the particular experience of a cat is as much indirect as the generalised notion of a cat-like or feline thing. Each is grounded in the other as part of a hierarchy-enforced symmetry-breaking. Neither arises in any direct fashion from the world itself. Even the crisp impression of some actual cat is a mental construction, as it is the hierarchy which both abstracts the timeless knowledge of cat-like and also locates some particular instance of that categorised experience to a place and a time (and a colour, and aesthetic response, etc, etc).
The success of machine learning should at least give idealists and dualists pause for thought. Even a little bit of hierarchical "brain organisation" manages to do some pretty "mind-like" stuff.
But the psychological and neurological realism found in DeepMind supports the indirect realist. After all, the patterns of activity inside the machine look nothing like the cat pictures.
It is also worth noting that DeepMind extracts generality from the particulars being dumped on it. The training images may be random and thus "unsupervised". But in fact a choice of "what to see" is already embedded by the fact some human decided to point a camera and post the result to YouTube. The data already carries that implicit perceptual structure.
So what would be more impressive is next step hardware that can also imagine particular cats based on its accumulated knowledge of felines. That would then be a fully two-way system that can start having sensory hallucinations, or dreams, just like me and you. The indirectness of its representational relationship with the real world would then be far more explicit, less hidden in the choice of training data, as well as the hierarchical design of its hardware.
I'm just going to leave the Painting Fool here. Neural nets can already learn ontologies (operational stratifications of concepts). It's probably being worked on somewhere in a lab at the minute.
These are anticipation-based approaches to cognition and awareness. So imagination is not a tacked on extra. It is the basis on which anything happens.
Realism is truly indirect as the brain is a hierarchical system attempting to predict its input. And the better practised it gets at that, the more it can afford to ignore "the real world".
Direct realism means awareness of mind-independent objects rather than of some mental intermediary. For the hierarchical system to be indirect, our perceptual awareness would have to be of the hierarchy instead of the object that's being detected using the hierarchy.
We've been over this before. Let's just say that your version of direct realism wants to skip over the reality of the psychological processes involved, although I would also myself want to be more direct than any representationalist. So the terms can fast lose any general distinctions as we strive for our personal nuances.
But by the same token, your claims to prove direct realism and deny Kantian psychology are weakened to the degree you don't have a hard definition of what direct vs indirect is about here.
For the record, here is Wiki trying to firm up the definitions....
So for example, do you think the world is really coloured? That red and green are properties of the world rather than properties of the mind?
I prefer to argue the more complex story myself - that colours are properties of a pragmatic or functional mind~world relation. So this is an enactive, ecological or embodied approach. And it has its surprising consequences. Like the point I made about the intention of the brain being to maximise its ability to "ignore the world". We see the physics of the thing-in-itself in terms of reds and greens because such an "untrue" representation is the most useful or efficient one.
Direct realism presumes the brain just grasps reality without effort because reality is "lying right there". The thing-in-itself forces itself upon our awareness due to its recalcitrant nature.
Representationalism then points out that there is still considerable effort needed by the brain. It has all those computational circuits for a reason. But still, representationalism shares the underlying belief that the goal of re-presenting reality must be veridical. The mental intermediary in question is not the weaving of a solipsistic fiction, but a presentation of an actual world.
Then as I say, I take the position in-between these two extremes of "indirect realism" (Ie: the realism that at least agrees there is a mediating psychological process). And that embodied realism is what Grossberg and Friston in particular have been at the forefront of modelling. They get it, and the philosophical consequences.
So I hope that clears the air and we can get back to the interesting point that really caught my attention.
The success of neural network architectures at doing mind-like things is down to their hierarchical organisation. So in trying to define the simplest self-teaching or pattern-recognising machine, computer science has found it is in fact having to hardwire in some pretty Kantian conditions.
It may not seem obvious, but hierarchical structure - a local~global dichotomy - already wires in spatiotemporal "co-ordinates". It sets up a distinction between local transient impressions vs global lasting concepts from the get-go. Like a newborn babe, that is the order the machine is already seeking to discover in the "blooming, buzzing, confusion" of its assault from a random jumble of YouTube images.
Quoting Marchesk
Maybe you understand the Kantian position far less well than I was assuming?
A cognitive structure is required to do any perceiving. That is the (apparently chicken and egg) paradox that leads to idealism, social constructionism, and all those other dread philosophical positions. :)
So no. My point is that in seeking the minimal or most fundamental a-priori structure, a neurological hierarchy is already enough. It already encodes the crucial spatiotemporal distinction in the most general possible way.
The fact that it is so damned hard for folk to see their own hierarchical perceptual organisation shouldn't be surprising.
Thought places space, objects, and even time (or change) as being "out there" in the world. But why would we place our own conceptual machinery in the world as the further objects of our contemplation - except when doing science and metaphysics?
It is the same as with "qualia". We just see red. We don't see ourselves seeing red as a further perceptual act. At best, we can only reveal the hidden fact of the psychological processing involved.
As in when asking people if they can imagine a greeny-red, or a yellowy-blue. A yellowy-red or a greeny-blue is of course no problem. We are talking orange and turquoise. But the peculiarities of human opponent colour processing pathways mean some mixtures are structurally impossible.
Likewise, blackish-blue, blackish-green and blackish-red are all standard mixtures (for us neurotypicals), yet never blackish-yellow (ie: what we see as just "pure brown").
So the hidden structure of our own neural processing can always be revealed ... indirectly. We can impute its existence by the absence of certain experience. There are strategies available.
But I'm baffled that you should reply as if we ought to "see the hierarchy" as if it were another perceptual object. We would only be able to notice its signature imprinted on every act of perception.
And that is what we do already in making very familiar psychological distinctions - like the difference between ideas and impressions, or generals and particulars, or memory and awareness, or habit and attention, or event and context.
The structure of the mind is hierarchical - always divided between the local and global scales of a response. The division then becomes the source of the unity. To perceive, ideas and impressions must become united in the one act.
So the structure is certainly there if you know how to look. Psychological science targets that mediating structure. And computer science in turn wants to apply that knowledge to build machines with minds.
That is why the right question is what kind of mediating structure, or indirect realism, does computer science now seem to endorse.
I see you want to make the argument that remarkably little a-priori structure seems needed by a neural net approach like DeepMind. Therefore - with so little separating the machine from its world - a mindful relation is far more direct than many might have been making out.
A fair point.
But then my response is that even if the remaining a-priori structure seems terribly simple - just the starter of a set of hierarchically arranged connections - that is already a hell of a lot. Already an absolute mediating structure is in place that is about to impose itself axiomatically on all consequent development of the processing circuitry.
:-O
That's a pretty sad basis to start critiquing a robust/nuanced position involving direct perception.
Yep. And that is the point. The OP certainly comes off as an exercise in naive realism. You can't both talk about a mediating psychological machinery and then claim that is literally "direct".
If Marchesk intends direct realism to mean anti-representationalism, then that is something else in my book. I'm also strongly anti-representational in advocating an ecological or embodied approach to cognition.
But I'm also anti-direct realism to the degree that this is an assumption that "nothing meaningful gets in the way of seeing the world as it actually is". My argument is that the modelling relation the mind wants with the world couldn't even have that as its goal. The mind is all about finding a way to see the self in the world.
What we want to see is the world with ourselves right there in it. And that depends on a foundational level indirectness (cut and paste here my usual mention of Pattee's epistemic cut and the machinery of semiosis).
So this is a philosophical point with high stakes, not a trivial one - especially if we might want to draw strong conclusions from experiments in machine pattern recognition as the OP hopes to do.
There just cannot be a direct experience of the real world ... because we don't even have a direct connection to our real selves. Our experience of experience is mediated by learnt psychological structure.
The brain models the world. And that modelling in large part involves the creation of the self that can stand apart from the world so as to be acting in that world.
To chew the food in our mouth, we must already have the idea that our tongue is not part of the mixture we want to be eating. That feat is only possible because of an exquisite neural machinery employing forward modelling.
If "I" know as a sensory projection how my tongue is meant to feel in the next split second due to the motor commands "I" just gave, then my tongue can drop right out of the picture. It can get cancelled away, leaving just the experience of the food being chewed.
So my tongue becomes invisible by becoming the part of the world that is "really me" and "acting exactly how I intended". The world is reduced to a collection of objects - perceptual affordances - by "myself" becoming its encompassing context.
The directest experience of the world is the "flow state" where everything I want to happen just happens exactly as I want it. It was always like that on the tennis court. ;) The backhand would thread down the line as if I owned its world. Or if in fact it was going to miss by an inch, already I could feel that unpleasant fact in my arm and racquet strings.
Which is another way to stress that the most "direct" seeming experience - to the level of flow - is as mediated as psychological machinery gets. It takes damn years of training to get close to imposing your will on the flight of a ball. You and the ball only become one to the degree you have developed a tennis-capable self which can experience even the ball's flight and landing quite viscerally.
So direct realism, or even weak indirect realism, is doubly off the mark. The indirectness is both about the ability of the self to ignore the world (thus demonstrating its mastery over the world) and also the very creation of this self as the central fact of this world. Self and world are two sides of the same coin - the same psychological process that is mediating a modelling relation.
Perhaps not, but one could easily provide solid justificatory ground for talking about how mediating psychological machinery is existentially contingent upon direct perception.
Quoting apokrisis
Nonsense. Physiological sensory perception is as direct a connection as one could reasonably hope for.
Physiological sensory perception is prior to language on my view.
What is direct about motion detection or hue perception I wonder? Are you going to begin by talking about the transduction of “sensory messages” at the level of receptors?
Do you agree?
Well of course. Animals have minds and selfhood. Our models of perception have been built from experiments on cats and monkeys mostly.
Quoting creativesoul
Quoting apokrisis
What then did you mean by "perception" in the first quote above?
Yeah, I mean because that's what we have direct access to as compared/contrasted/opposed to our own selves...
What on earth are you talking about?
I meant direct in the philosophical sense, where direct realists argue that perception is one of being directly aware of mind-independent objects out there in the world, and not some mentally constructed idea in the head.
That there is neurological/cognitive machinery for perceiving objects directly is understood. That machinery is only a problem if it generates a mediating idea.
The direct realist debates always go off the rails on these points. That's why we get arguments about how objects aren't in the head, or light takes time to travel, and therefore direct realism can't be the case.
I'm a realist who argues in favor of direct physiological sensory perception. I'm not sure if I'd say/argue that direct perception requires awareness of that which is being perceived. Awareness requires an attention span of some sort. Bacteria directly perceive. I find no justification for saying that bacteria are aware of anything at all...
Only if it generates an idea that mediates the physiological sensory perception itself...
Doing that first requires becoming aware of such a thing. Language is required for becoming aware of one's own physiological sensory perception. Language is not required for being born with neurological/cognitive machinery(physiological sensory perception).
Thus, drawing correlations, associations, and/or connections between 'objects' of physiological sensory perception can result in a mediating idea and still pose no problem whatsoever for a direct realist like myself. The attribution and/or recognition of causality is one such correlation/association/connection.
One can learn about what happens when one touches fire without ever having generated an idea that mediates one's own physiological sensory perception. One cannot learn what happens when one touches fire without attributing/recognizing causality.
As usual I have no clue what you are on about. Did you think I would argue that sensory level, and then linguistic level knowledge of the world is indirect, but that scientific knowledge is direct?
All knowledge would be indirect in the semiotic sense I’ve described.
Quoting creativesoul
Quoting apokrisis
What then did you mean by "perception" in the first quote above?
It's not. I would favor a direct scientific realist account of perception. But in any case, one could argue that smell, sound, color are how we experience the world directly.
Quoting apokrisis
It's not my definition, and I don't know whether direct realism is true. But it occurred to me that if neural networks are a crude approximation for how our perception works, then they do favor realism about the patterns being detected.
I don't know whether any neural network can be said to have illusions or hallucinations. Possibly illusions. Sometimes there are notable failures where it incorrectly recognizes the wrong pattern, despite otherwise having a high degree of success.
How could we argue that the world is coloured as we “directly experience” it when science assures us it is not?
Sure, phenomenally, our impressions of red seem direct. We just look and note the rose is red. That’s the end of it. But science tells us that it isn’t in fact the case. There is mediation that your philosophical position has to incorporate to avoid the charge of naive realism.
Quoting Marchesk
But your argument seemed to be that unsupervised neural net learning is evidence for just how unmediated perception would be. So that might be an argument for a high degree of directness in that sense, yet it remains also an acceptance of indirectness as the basic condition.
If awareness is mediated by a psychological process, then by definition, it ain’t literally direct.
Pigeon perception is not linguistically scaffolded. They have no concept of "cat".
You need to sort out the incoherence and/or equivocation in your usage of the term "perception".
Quoting apokrisis
Your experience is part of the world, no?
So enactivism?
I don't think this is right. Presumably these neural networks simply recognise patterns in the magnetism on the hard drive, that although covariant with the visual shape (given whatever algorithms translate the input into binary), are not the same thing.
And "discover" might not even be the right word. "Respond" is better. The deterministic behaviour of the computer causes it to output the word "circle" when the magnet passes over a particular arrangement of magnetised and non-magnetised pieces.
You seem to be reifying our abstract description of how computers work.
Did you notice the thread title or read the OP?
Quoting Harry Hindu
Your experience is your world, no?
Correct. I said that.
Quoting Michael
A good point. The system has no eyes. It is just fed 18x18 chunks of pixels - strings of hex code.
It might be worth checking the paper - https://arxiv.org/pdf/1112.6209v5.pdf
The system seems pretty Kantian in terms of the amount of a-priori processing structure that must be in place for "unsupervised" learning to get going.
I'd note in particular the dichotomous alternation of filtering and pooling. Or differentiation and integration. Followed by the third synthesising step of a summating normalisation.
In doing their best to replicate what brains do, the computer scientists must also build a system that pulls the world apart to construct a meaningful response - one that separates signal from noise ... so far as it is concerned.
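That filtering/pooling/normalisation alternation can be sketched in miniature (a toy 1-D version invented for illustration, not the paper's actual model, which stacks several such sublayers over image patches):

```python
# The filter -> pool -> normalise alternation, on a 1-D "image".

def filter_layer(signal, kernel):
    """Local filtering (differentiation): respond to a local pattern."""
    k = len(kernel)
    return [sum(w * s for w, s in zip(kernel, signal[i:i + k]))
            for i in range(len(signal) - k + 1)]

def pool_layer(signal, size=2):
    """Pooling (integration): summarise neighbouring responses."""
    return [max(signal[i:i + size]) for i in range(0, len(signal), size)]

def normalise(signal):
    """Normalisation: rescale responses to a fixed overall budget."""
    total = sum(abs(s) for s in signal) or 1.0
    return [s / total for s in signal]

image = [0, 0, 1, 1, 0, 0, 1, 0, 0]
edge_kernel = [-1, 1]  # responds to rising edges only
features = normalise(pool_layer(filter_layer(image, edge_kernel)))
print(features)
```

Each stage throws information away by design: the filter ignores everything but edges, the pooling ignores exact position, the normalisation ignores absolute intensity. That is the "separating signal from noise" being pointed to, done by structure fixed in advance.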
Go read any description of artificial neural networks. When they want to get technical, they talk in terms of linear algebra, matrices, and gradient descent for error correction. How the computer physically accomplishes the computation is irrelevant.
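That "slope" talk is gradient descent, which can be shown in one dimension (a toy error surface assumed for illustration; training a real network does the same thing over millions of weights at once):

```python
# Gradient descent on a one-parameter squared error.

def error(w):
    # How wrong the "model" is for weight w; the true minimum is at w = 3.
    return (w - 3.0) ** 2

def gradient(w):
    # Slope of the error surface at w (derivative of (w - 3)^2).
    return 2.0 * (w - 3.0)

w = 0.0                      # arbitrary starting weight
for _ in range(100):
    w -= 0.1 * gradient(w)   # step downhill, against the slope

print(round(w, 4))           # converges on the error minimum at 3.0
```

Note that nothing here depends on magnetic domains or transistors; the same procedure runs identically on any substrate that can do arithmetic, which is the point being made.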
To the extent that artificial neural networks function like biological ones, the physical instantiation is irrelevant. But nobody thinks they're exactly the same, only somewhat analogous. And of course the biological details matter for how a brain actually functions.
I don't know whether philosophers spend much time debating perception in bacteria, but when it comes to human perception, the argument between direct and indirect realists is over whether we are aware of the objects themselves via perception, or something mental instead.
Is access direct or indirect? Are objects really out there or just mental? Is there anyway we can tell? And to what extent does the mind construct those objects based on categories of thought that aren't necessarily reflected in the structure of the world?
I just came across scientific direct realism on Internet Encyclopedia of Philosophy. Locke's primary properties, like shape, would be directly perceived, while color would be the means by which we see shape, even though it belongs to our visual system.
Of course there are other flavors of direct realism that might say something different about color. Some would even be color realists, although I have a hard time seeing how that can be defended. But they do try.
But you could use a camera stationed anywhere, and see what sort of objects an unsupervised network will learn to categorize.
And there are autonomous vehicles designed using deep learning techniques. A self driving car needs to be able to handle any situation a human might encounter when driving on the road. Here is a short video for one of those companies working on the challenge:
https://spectrum.ieee.org/cars-that-think/transportation/self-driving/how-driveai-is-mastering-autonomous-driving-with-deep-learning
Yep, that is the line that direct realism tried to defend back in the 17th century. It would give up qualitative sensation and insist on the directness of quantitative sensation.
But psychological science has obliterated that line - even though I agree most people haven't really noticed.
Quoting Marchesk
Great. You seem persuaded that colour experience is definitely indirect - mentally constructed in some strong sense.
I can see why you might then protest that the shapes of objects are just self-evident - an unprocessed, unvarnished, direct response to what is "out there". It seems - as Locke argued - realism can be secured in some foundational way. A shape, on this view, cannot be misrepresented. It therefore requires no interpretation. Our experience of a shape is unmediated.
But as I say, psychology has shown just how much interpretation has to take place to "see a shape". The useful sanitary cordon between primary and secondary sensation has evaporated as we've learnt more.
Sure. The AI labs will want to keep improving. But a computer that can actually do human things might have to start off as helpless and needy as a human baby. Would you want to have to wait 20 years for your Apple Mac to grow up enough to be useful?
It's a two-edged thing. Yes, it would be great to identify the minimal "Kantian" hardwiring needed to get a "self-educating machine" started. But then that comes at the cost of having to parent the damn device until it is usefully developed.
So - philosophically - neural networks are already based on the acceptance that the mind has Kantian structure. Awareness is not direct. So to replicate awareness in any meaningful fashion, it is the mediation - the indirect bit - we have to understand in a practical engineering sense.
The unsupervised learning is then the flip side of this. To the degree a machine design can learn from an interaction with the world, we are getting down to the deep Kantian structure that makes biology and neurology tick.
And as Michael points out, step back from the "computer just as fascinated by internet cat videos" nonsense used to hype DeepMind, and you can see just how far the AI labs have to go.
DeepMind's "reality" actually just is a hex code string, magnetic patterns on a disk. It is forming no picture of the world, and so no sense of self. It is the humans who point and say golly, DeepMind sure loves its YouTube cat clips.
So I agree it is an interesting experiment to consider. I just draw the opposite conclusion about what it tells us.
It isn't. You want to talk about it in terms of the computer recognising shapes as we do, and so conclude that shapes are mind-independent things. But a look at the mechanics of computation will show that this is the wrong conclusion to make. The computer simply responds to the magnetic charge on the hard drive.
And if the algorithm in question is using thousands of GPUs or TPUs (tensor processing units) reading from a bunch of solid state drives over a server farm, or being fed data over a network?
You could argue that a neuron simply responds to an electrical charge from a connected neuron. What does that have to do with perception?
The brain has to be able to recognize a shape somehow. It's not magic, and shapes don't float along on photons into the eyes and travel from there on electrons into the homunculus sitting in the visual cortex.
A distinction needs to be made for naive realism - the unreflective, unscientific view that seeing the world is like looking out a window onto things. Obviously, that's not how it works. No philosopher is going to defend a totally naive view of vision in which an object shows up in the mind magically. There has to be a process.
The question is whether the process of perception creates an intermediary which we are aware of when perceiving, or whether it's merely the mechanics of seeing, hearing, touching, etc.
No neuroscientist could accept that simple account. Neurons respond to significant differences in the patterns of connectivity they are feeling. And that can involve thousands of feedback, usually inhibitory, connections from processing levels further up the hierarchy.
So mostly a neuron is being actively restricted. And that constraint is being exerted from above. The brain is organised so that ideas - expectations, goals, plans - lead the way. The self-centred indirectness is what we see when we actually put individual neurons under the microscope.
Quoting Marchesk
Of course. I'm not defending any caricature story here. No need to put these words in my mouth.
Quoting Marchesk
Yep. So now again we must turn to why you insist this is better characterised by "direct" than "indirect".
If your argument is that the brain has the goal of being "as direct and veridical and uninterpreted as possible", then that is the view I'm rejecting. It is a very poor way to understand the neuroscientific logic at work.
But as you say, I wouldn't then want to be batting for good old fashioned idealism. We don't just imagine a world that is "not there".
So I am carefully outlining the semiotic version of indirect realism which gives mediation its proper functional place.
Quoting Marchesk
That is no longer my question as I reject both direct perception and homuncular representation. My approach focuses on how the self arises along with the world in experience.
The surprise for most is that both of these things in fact need to arise together.
And no computer scientist is going to say that all an algorithm is doing is reading a magnetic charge.
Let's make this really, really simple. What is the result of visually perceiving a tree?
A. Seeing a mental image.
B. Seeing the tree.
I'll let your unsupervised neural network categorize the two.
Seems to me that both sides are wrong for the same reason. They both work from an impoverished language 'game' (linguistic framework).
The point about bacteria was to highlight some of the impoverishment...
Just yet another case of the self-imposed bewitchment of inadequate language use.
Quoting apokrisis
You said that pigeons can make the same perceptual discrimination as humans after saying that "...perception involves being able to see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat"."
The incoherence and/or equivocation is the direct result of self-contradiction. So, yeah... I suppose it does have something to do with thought/belief; particularly the kind that doesn't warrant much more of my attention. Some folk care about coherence and clarity. Others apparently don't.
Hey March. I just want to point something out, just in case you've not noted it. Pay very close attention to how the term "perception" is being used in these discussions. Re-read the thread with that as the aim...
The pigeon doesn't understand "the cat" as a cuddly pet or abstract concept, but it can still recognize it, and likely has a similar visual experience to humans.
That's the sort of stuff that needs unpacking and/or explained March...
That's a good point. How do philosophers typically define perception?
Poorly.
A perception shouldn't be a synonym for a conception, so there needs to be some differentiating there. And a "cat watching" neural network is only classifying different pixel patterns that match up to what humans recognize as cats with a certain degree of accuracy.
I'm not knowledgeable enough regarding how computers work to say much at all regarding that. However, it is my understanding that binary code still underwrites it all. Is that correct?
It's all binary in that the circuit logic is based on boolean algebra (true/false or 1/0). The instructions a processor carries out are based on combinations of 1s and 0s. But the functionality humans care about is understood at an algorithmic level, because that's what we designed computers to do.
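To make the boolean-algebra point concrete, here's a minimal sketch (my own illustration, not anything specific to DeepMind) of a half adder, the simplest circuit that adds two bits using nothing but logic operations:

```python
# Sketch: a half adder built purely from boolean operations,
# showing how processor arithmetic bottoms out in 1s and 0s.
def half_adder(a, b):
    """Add two bits; return (sum_bit, carry_bit)."""
    return a ^ b, a & b  # XOR gives the sum bit, AND gives the carry

# Adding 1 + 1 in binary gives 10: sum bit 0, carry bit 1.
print(half_adder(1, 1))
```

Chain enough of these together and you get full adders, multipliers, and eventually everything a CPU does; the "algorithmic level" is just our way of describing what those cascades of gates accomplish.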
A trained neural network that recognizes a word would have a vector of positive and negative real numbers representing that word. But nobody really understands what those numbers represent. They're the outcome of training a network to recognize the word "cat" for example (written or auditory depending on the network). They're the different weights and biases of the inputs that make the network recognize "cat".
Of course those real numbers are stored as bit patterns.
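A rough sketch of both points, with entirely made-up weights and biases (a real trained network has millions, and nobody can read meaning off any single one):

```python
import math
import struct

# Sketch: a single artificial "neuron" is just a weighted sum of its
# inputs plus a bias, squashed through an activation function.
# These weight values are hypothetical, chosen only for illustration.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

out = neuron([0.5, -0.2], weights=[1.3, 0.7], bias=-0.1)

# And each real-valued weight is ultimately stored as a bit pattern -
# here the 32-bit IEEE 754 encoding of the weight 1.3, in hex.
bits = struct.pack(">f", 1.3).hex()
print(out, bits)
```

The point stands either way: the "knowledge" the network has of cats lives entirely in numbers like these, which are themselves just patterns of bits.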
So, the computer prints "Here kitty, kitty." whenever 'it is shown' a catlike image and it is able to somehow 'match' the image to its database of what counts as a cat?
It doesn't have anything to do with neural networks, just that you can represent false statements in code, and I'm using the unicode character for a cat face, because some programming languages let you use any unicode character.
Although maybe you meant the code has to be true in the error free sense, although errors can crop up while the code is running, of course.
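For what it's worth, here's the sort of thing I mean - a trivially false statement written in code using the Unicode cat-face character (this is just my toy illustration):

```python
# A false statement represented in code, using the Unicode cat-face
# character (U+1F431). Python allows any Unicode in string literals;
# some languages (Swift, for instance) even allow it in identifiers.
cat = "\N{CAT FACE}"
is_a_dog = False  # the "statement" that the cat is a dog is false
print(f"{cat} is a dog: {is_a_dog}")
```

The code itself runs without error either way; the truth or falsity of what it represents is a separate matter, which was my point.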
https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1
Yeah, the binary code can be any statement that can be represented by 1s and 0s.
So maybe the statement would be (in human terms):
"There is a 94.57% probability there is a cat in this Youtube video."
Which represents the confidence the network has in making the classification, I think.
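The usual way a network turns its raw output scores into a confidence figure like that is the softmax function. A minimal sketch, with logit values I've simply made up:

```python
import math

# Sketch: softmax converts a network's final-layer scores ("logits")
# into values that sum to 1 and read as confidences.
# The logits below are hypothetical, not from any real network.
def softmax(logits):
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([4.2, 1.0, 0.3])  # scores for: cat / dog / neither
print(f"{probs[0]:.2%} confidence the clip contains a cat")
```

Whether that number deserves to be called the network's "confidence", rather than just a normalised score we interpret that way, is of course part of what's at issue in this thread.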
What feature (or physical property) of a computer is analogous to physiological sensory perception?
There isn't one. The equivalence would be functional. Someone could probably hook a camera up to a physical artificial neural network, where the neurons are somehow realized physically, instead of just being software functions. But it still wouldn't be biological.
Are we talking about a dream tree? How does your own unsupervised neural network categorise those?
And is the greenness of this tree - either real or imagined - something true of the actual tree or a property of the mental image? (You seemed to agree that colours were A, but that shapes would be B.)
So good luck with your ambition making things really, really simple. These are deep philosophical issues, and not merely language games, for a reason.
A lot of familiar psychological terms are poorly defined. But we'll live. We can talk about the biological commonality with the laboratory animals into which we plunge our electrodes while also reminding of the particular difference that linguistic scaffolding makes to everything happening in a human mind.
So I'd agree that the standard jargon ought to reflect the distinctions better. Perhaps that is what you think you do with "thought/belief" and suchlike. I'm still waiting for you to explain. Somehow you never do.
I didn't come up with the direct/indirect realism debate.
Quoting apokrisis
No, we're talking about the perceived tree. Is it a mental image or not? That's what direct/indirect realism comes down to. All this other stuff is confusing the issue.
I agree, but it never helps in these discussions when the result is endless semantic dispute because nobody ever agrees on how the terms should be used.
It's weird, because I can go to SEP and it will clearly state what direct realism is about, but then I come here, and it's muddled semantic confusion the entire time.
And I understand that not everyone will agree with a philosophical position. That's fine. But when we can't even agree on what terms mean, then the debate just meanders all over the place with people talking past one another. And I'm speaking in general here. We've had 100 page long disputes over apples and cats on mats in the old forum which went the same way.
There was one thread which ended with antirealism being associated with direct access, whatever that could mean. Basically, a melding of Wittgenstein and direct realism.
That one made me laugh. Show me the simple definition of direct realism, or even indirect realism, in this SEP entry - https://plato.stanford.edu/entries/perception-episprob/
There are 100 shades of opinion. And that ain't a dreadful thing. There is a reason Descartes is where modern philosophy finally woke up and took its business seriously. Progress on the central issue has been an arduous affair. In a deep way, it may be logically irresolvable due to its self-referential nature.
On the other hand, psychological science has been steering in a direction. Representationalism is on the wane. Embodied and semiotic approaches are increasingly popular. It is a big step towards accepting the complexity of the mind~world relationship by shifting up from a dualistic framing to one that is irreducibly triadic (the hierarchical view).
Quoting Marchesk
How can it be illegitimate to talk about the tree perceived in a dream? If you can't tell reality just by looking, then direct realism is dead from the get-go.
Of course we can learn to dismiss dreams as imaginings. Folk used to believe in the reality of their spirit wandering while they were asleep happily enough. Now we categorise that kind of experience differently. But it is still just a categorisation from a hard epistemological point of view. We are not justified in taking short-cuts just for the sake of argumentative convenience.
So again, it was your OP that wanted to use machine intelligence as an argument for direct realism. You clearly felt there was an issue at stake because a doubt is in play. You didn't just ignore the doubt. You felt it worthy of that attack.
And now that you are encountering pushback, you ought to be prepared to deal with it in turn. Dream trees and experiences of colour become fair game. Where does the indirectness leave off so the directness can start?
My own arguments have gone further. I have made the case for why indirectness is an advantage. It is how a self is even formed to stand in relation to "the world".
Likewise it was interesting to me how some actual vaunted neural network project reveals the Kantian structure that must be smuggled in to get its Lockean tabula rasa up and running, recognising cute kitten faces on the interweb.
But dreams, hallucinations, illusions and all the standard stuff is still relevant if your own interest is in defending an understanding of perception where the mediation never gets in the way of the production of the mediated experience.
Because perception doesn't occur in dreams. If you want to attack direct realism with dreams, then you need to say the experience is the same. That's the reason the argument from hallucination has bite.
It is simply stated as whether there is a mental intermediary we're aware of when perceiving an object. If no, then direct realism is the case.
The arguments for or against direct realism is where you get the "100 shades of gray". But the issue is stated simply, until everyone and their grandma goes off on tangents around the meaning of terms like direct, access, and realism. The semantic dispute over terms then gets conjoined with the arguments for and against the question of whether we behold a mental construct, or the thing itself.
So I ask you again, do we or do we not behold a mental construct of a tree when we see a tree? I honestly don't care which way you answer, since I'm not sure myself. But I do care about the argument being able to proceed without semantic muddle.
I would say that they're the same in that both waking and dream experiences are constituted of sense-data/qualia. They're different in that in the case of the former they are caused by external stimulation and in the case of the latter they are caused by internal stimulation.
I think there are two issues, referring to two different understandings of direct realism (the one which you and some others argue for, and the one which the Wikipedia article describes). The first issue is what is the immediate object of perception – the sense-data/qualia or the stimulation – and the second is whether or not the properties/features of the experience are properties/features of the stimulation.
For example, you might want to say that the immediate object of perception is the external-world chair but that the colour property/feature of the experience isn't a property/feature of the external-world chair
– instead it is a causally covariant effect of stimulation by a certain wavelength of light (a wavelength that is determined by the physical make-up of the chair).
From my understanding, the indirect realist argues that none (or perhaps just almost none) of the properties/features of the experience (e.g. colour, smell, sound, taste, etc.) are properties/features of the external stimulation – they are just causally covariant effects of external stimulation. They then argue that because of this, it doesn't make sense to claim that the immediate object of perception is the external stimulation. Our immediate awareness is of sense-data/qualia, which although causally covariant with and indicative of this external stimulation, isn't itself the external stimulation, hence why our perception of this external stimulation is indirect.
Personally, I'm inclined to this indirect realist view. I don't understand what it means to claim that perception is direct if not to say that the properties of the experience (e.g. the colour) are properties of this external stimulation, and I don't think that the properties of the experience (e.g. the colour) are properties of this external stimulation.
I'm not quite sure how that follows, for neuroscience and machine learning are both representationalist and neo-Kantian in the sense of being functionalist, at least in terms of their surface grammar. The upshot is that representations are internal and their designated truth labels are external and they aren't typically considered to be part of a unified single entity.
It might be enlightening to read about Kant's theory of cognitive judgement on the SEP to understand the precise differences of modern neuroscientific thinking to Kant's transcendental idealism.
If I recall correctly, Kant's views of perception are somewhat similar to direct realism in the sense of being roughly deflationary about consciousness in terms of its contents, but with some minor and irrelevant differences that relate to the normativity of judgements. In truth his views were probably somewhat vague and ambiguous, but I believe they are deflationary in the critical sense of rejecting truth by correspondence in the empirical sense.
In other words, empirical doubt about the 'external' world 'as a whole' might be impossible for Kant, but not necessarily rational doubt concerning the transcendental reality or significance of the empirical world, since after all, Kant speaks of transcendental Noumenal entities that are rationally deducible, even if they are unimaginable and empty logical entities without empirical meaning and significance.
Hence it appears that direct realism, at least for Kant, even if eliminating empirical doubt 'as a whole', cannot defeat rational Scepticism.
Fine. Answer that version of the same question then.
I don't know. What makes perception qualitatively different from other mental experiences? It is remarkable how much a dream seems like you're perceiving. The disjunctivists deny that the experiences are the same. I'm not sold on that.
So, what are you saying, Apo - that you're just another sheep following the herd?
Quoting apokrisis
If solipsism were the case, then "my experience" would be the world, or there would simply be the world, and to say that there would be an experience of it by me, would be incoherent.
If solipsism isn't the case, then there is the world and my experience of it, along with your experience of it, and everyone else's. If solipsism weren't the case, then there wouldn't be anything incoherent about using the terms, "my", "experience", etc., as that would be referring to real things, that are part of the whole world, which includes all experiences, like yours and not just mine.
If solipsism isn't the case, then there is only one world, and many experiences of that world.
If solipsism is the case, then there is only one world, and no experiences of it.
You can tag along if you like.
So, going back to my question, which you avoided (yet again), is our experience part of the world, and if so, then isn't color part of the world?
And of course, MY version of "my" corresponds to nothing.
And of course, if anybody is rational they will agree with my statements as written, but draw exactly the opposite conclusion.
And the only reason this appears inconsistent to the realist is because he insists on the semantic symmetry of propositions whose subject is the first-person; he assumes that enlightened individuals would all be in verbal agreement with each other when discussing the truths of philosophy; that their propositions of epistemology would all be phrased in terms of "we know this" as opposed to "I know this" whereas "you know that".
But the realist overlooks what is directly in front of his nose. For when describing one's use of words in relation to one's own experiences, one directly sees indirect realism when one watches other people perceiving their surroundings, whereas one can only think like a direct realist when it comes to one's own experiences. For one's own experience is the very basis in which indirect realism is interpreted.
This sounds like a contradiction. This sounds like you have direct access to reality to describe it with such detail and with such confidence, not indirect access.
yes, *I* have direct access, in the sense that I cannot imagine what it means to have indirect access in my own case. Yet it is natural for me to describe everyone else as having indirect access, since I observe other people as being objects that are distinct from their objects of perception.
So I am afraid, it is direct realism for me and indirect realism for everyone else.
And surely you would agree. For isn't it obvious to you that my words can only refer to my representations of your world that I cannot possibly know or even meaningfully talk about?
Our conception of the physical world says wavelength and not colour is part of that world. Our conception of neurological processes is that colour is somehow part of what brains do. But that is actually quite a mysterious thing when considered as a “property”. Most folk would call it a property of the mind and not the world. This then leads to entrenched dualistic issues.
So you seem intent on bypassing the complexities of the question. That isn’t very useful.
What is a theory of perception? Presumably it's a way of assigning a description to the following kind of event: X perceives Y and set of properties and relations P(Y) influenced or deriving from the set of properties and relations P(X,Y). As an example.
I perceive a cup on my table, it is plain white and filled with coffee.
I (X) perceive a cup ( Y ) on my table ('on my table' is a relation between the cup and the table, a member of P(Y) ), it is plain white (a property of the cup, a member of P(Y)) and filled with coffee (being filled with coffee is another member of P(Y)).
I think any direct realist and any indirect realist would agree that indeed I do see a cup on my table, and that it is plain white and filled with coffee. What matters between them is how to analyse 'I see' in terms of the subject: me, X; the object: Y, the cup. Specifically, what matters are the properties of the relation 'sees' between X and Y. How does it arise? What does it mean for me to see Y? What are the relations between the object as seen and the object itself? (representational sense data or identity for indirect/direct examples). Answering these questions gives elements of P(X,Y).
Notably absent from this kind of analysis is any analysis of the performativity in the perceptual event, and this changes the kind of questions that would be asked of a perceptual theory. A contrastive question between direct and indirect realism, of specific sorts, might be 'do I see the cup of coffee or do I see a representational sense datum of the object?', an analysis inspired by the performativity of the perceptual act (it's a verb, c'mooooon) might ask "how is it that I see the coffee cup? what perceptual structures allow me to see the coffee cup?". It changes debates from, ultimately, a semantic theory of perceptual verbs or their conditions of possibility to 'what makes us perceive how we perceive and how do we perceive?'
Husserl noticed the difference between these two styles of questioning, or something like it, with his idea of 'bracketing', 'reduction' or 'epoché'. This means, roughly, forgetting the objectivity or veridicality of our experiences and instead attempting to deal with their internal structures and webs of meaning.
If we already grant the 'world of perception' to a person, what remains is to give an account of its formation and stability rather than our conditions of access to it.
So the mind isn't part of the world? Then how do minds interact if not through the medium of the shared world? What is it that divides minds to call them separate? It seems that once you start down the path of claiming the mind isn't part of the world, you start down the path towards solipsism.
Sure. We can say that a pigeon perceives precisely the same way that humans do. We can also offer adequate justificatory ground for doing so, by virtue of establishing a notion of perception that is universally applicable to any and all perceiving creatures.
Not all creatures have written language. Thus, if our notion of perception includes that which is existentially contingent upon written language, then we would be forced to deny any and all creatures without written language the very capability. Pigeons would not count.
If we do not possess a notion of perception that is sensibly said to be satisfied by a pigeon as well as a human, then we have no justificatory ground for claiming to know what we're talking about when we're talking about the particular difference that language makes to that aforementioned universally applicable base notion of perception. To know the differences between pigeon perception and human perception one must know what both respectively consist of and require.
So, with all that in mind... you're right, we'll live on even if we have no idea what we're talking about.
Nice post fdrake...
I miss your ultimate warrior avatar!
;)
You are confusing the epistemic issue of direct vs indirect realism with the ontological commitments I might then argue concerning the mind~world issue.
Quoting fdrake
The following bit garners my attention, and has for quite some time...
...'what makes us perceive how we perceive and how do we perceive?'
Or you could stop putting words like "precisely" in my mouth. That would be a good start.
Quoting creativesoul
Well whoopsie-do. Again, making any claim about perception being existentially contingent on written language is a misconception of your own doing here. For whatever reason, you are again projecting you own baggage on to what I say.
Quoting creativesoul
You are sounding particularly pompous today. Or should that be sounding/acting?
Quoting apokrisis
emphasis mine
emphasis mine...
Higher level than what, exactly?
What does pigeon perception consist in?
What is the criterion, which - when met by a pigeon - counts as being the same case of perceptual discrimination as humans?
It's certainly not this...
Perception involves being able to see "that cat there", a judgement grounded in the matching development of a generalised capacity for categorising the world in terms of the long-run concept of "a cat".
That sort of perception requires (is existentially contingent upon) written language... and thus what helped prompt the post regarding what it takes to know what we're talking about when comparing different creatures' perception.
This is psychology's most celebrated example of how similar animals are to humans in their ability to go beyond "direct experience" to categorise their experience abstractly.
Your move.
I'm still curious. Is there an answer somewhere in there to either of the following questions?
What does pigeon perception consist in?
What is the criterion, which - when met by a pigeon - counts as being the same case of perceptual discrimination as humans?
I'll refrain from saying much about those summaries, given I do not know the specifics of the experiments themselves, and they're pretty much irrelevant to our debate, as far as I can see. The one point I would make is that they are couched in language that assumes the conclusion.
That does not follow from anything within those snippets.
If I ignore the language used to describe the pigeon behaviour and grant that the pigeon learned to recognize some object or another and learned to associate different objects and behaviours with getting food, then I'm unsure how that would support indirect perception. Seems like direct perception of exactly what we train them to associate with food, regardless of the object we choose.
For example, it is quite misleading and probably just plain wrong to claim that a pigeon learned to recognize a malignant group of cells from non-malignant ones simply as a result of being able to effectively distinguish between the two kinds. Categorizing, in rudimentary form, requires noting similarity between different 'objects'. While a pigeon may very well be quite capable of being trained to pick out malignant formations, the pigeon doesn't recognize them as malignant formations. Rather, it recognizes the similarities between such structures and draws causal associations between their behaviour and getting food.
We could just as easily train them to incorrectly categorize them.
Well, that is stating the bleeding obvious. The point of the pigeon research is that animal brains can in fact categorise to quite a human degree ... when linguistically-scaffolded in human fashion. So that shows both that our biology of perception/conception has much more in common than most might expect, but also that language then really makes a particular kind of difference we can add to the discussion.
I'm not confusing anything. I'm just trying to ask a question and to show you the consequences of your answer. If you don't want to answer because you fear the consequences, just say so.
Quoting apokrisis
The point of the research isn't proven. The evidence doesn't warrant such a conclusion. Thus, you're overstating the case. Pigeons aren't recognizing malignant formations as malignant formations. Thus, they do not - cannot - 'perceive' them as such (scarequotes intentional). That requires language. They are not categorizing in linguistically-scaffolded fashion. To quite the contrary, we are the ones who categorize their mental ongoings and behaviour in linguistically-scaffolded fashion.
Thus, this raises the crucial importance of getting it right. Ockham's razor applies here.
They become aware of and recognize the differences between kinds of cell structures. They draw mental correlations between their own behaviour, the cell structures, and getting food. They are attributing causality. I would be surprised if their behaviour did not also clearly indicate that they begin to form expectation. None of that requires language. All of it requires thought formation. All of it also clearly lends support to direct perception.
So, all of what can be rightfully said about pigeon mental ongoings is quite rudimentary when compared to the highly complex degree of categorizations that humans discover, invent, or employ.
We agree here, in some sense. However, we're right back to where we were.
What does pigeon perception consist in?
What is the criterion, which - when met by a pigeon - counts as being the same case of perceptual discrimination as humans?
Does that not cover perception?
Quoting creativesoul
Really? After wasting so much time on irrelevancies, you seem to have forgotten to address the OP.
Of course it entails perception. However, it seems apparent that you and I have incommensurate notions regarding what counts as perception. I've been justifying my position; bearing the burden as it were. Care to do the same?
Recently I've addressed what you've put forth.
How one arrives at thinking that I've forgotten to address the OP, given what I've written here, is beyond me. Everything I've said here develops the notion of perception. There's quite a bit more involved. It's rather nuanced.
What is the criterion, which - when met by a pigeon - counts as being the same case of perceptual discrimination as humans?
You’d have to explain how. I’ve asked in the past and you haven’t explained.
I’ve made the point that human language changes the way we are aware of the world deeply. Yet also we share the same basic brain biology.
So for instance, I am happy to talk about pigeons recognising, but I wouldn’t believe they can recollect. They have memories and can categorise experience. But they don’t have the structure of language that would allow for a narrative or autobiographical use of those memories.
Likewise they would have anticipatory imagery. They could search their environment with an expectation in mind. But not having language, they couldn’t have what we mean by imagination - the ability to generate mental imagery that is not closely tied to what the world around demands.
So the difference that language makes is an issue I’ve written books about. It is perfectly familiar to me. I’m not getting why you’ve got your knickers so in a twist about me talking about pigeon perception in a routine psych 101 way. Yes, in psych 101 they do skate over the difference that language makes. But that is excusable as talking about the general biological case before getting into the qualifications of the specifically linguistic human case.
What bugs me about your approach in this thread is that you keep using your own weird neologisms without proper explanation and you fail to provide grounding citations for whatever position you think you take. So it is hard to discuss the issues with you rationally. You are coming across as a crackpot. Yet I also think you are trying to make the same point as I also make. So that remains confusing.
That is false.
I've been explaining throughout this thread. It might be worth stating some basics about comprehension. In order to understand an author, a reader must employ the same sense of the key operative terms. In this case, the discussion revolves around what counts as being direct or indirect perception. So, in order for either of us to understand the other, we must understand what is meant when either of us use the term perception.
Our discussion has revolved around precisely those differences. Perhaps this be better put differently: I have been extrapolating upon the differences between our notions of perception. Doing so involves attending to the way we've employed the term respectively.
You've employed the term as a proxy for all sorts of mental ongoings ranging from brute direct perception of objects external to mental ongoings (such as what we call "malignant cellular structures") to highly complex linguistic conceptions such as seeing "that cat there".
Simply put, on my view, perception is not equivalent to mental correlations. Whereas you fail to draw and maintain that distinction, I draw and maintain that perception is one necessary but insufficient element of mental ongoings. You're not alone though. It is an historical shortcoming pervading the whole of philosophy, philosophy of mind (psychology) notwithstanding.
Quoting apokrisis
Here, I would largely agree. However, I would note that the notion of "recollect" above presupposes recollecting to someone or something. On my view, recollecting doesn't require reporting upon that recollection.
Human language most certainly makes a deep change to the way that we are aware of the world. But for that notion to have any bite, in order for it to be robust, we must have a relatively good grasp upon what our awareness of the world is without language. Otherwise, we have no ability to compare our awareness prior to language with our awareness after language acquisition. Without that comparison, any talk of change after language, or as a result of language, is groundless.
Quoting apokrisis
That appears to be self-contradictory. If they have anticipatory imagery, then they must have the ability to generate such imagery. We agree that animals form and hold expectations. What those expectations consist in seems to be where we differ.
Quoting apokrisis
Not much I can do about what bugs you other than to point out that philosophical arguments stand or fall on their own merit. Nothing I've said requires citations. I'm not referencing anyone else's work.
I suggest that you spend less time thinking about me personally and more time addressing the substance of my posts...
We are most certainly focused upon the same problem. Our methodological approaches stand in stark contrast to one another however. Hence, the issues hinge upon and stem from the differences therein. Specifically speaking, the framework will limit or delimit what can coherently be said according to it. Explanatory power.
That's how crackpottery starts. Right away, I can't take you seriously.
Quoting creativesoul
Well I can point to any standard psychology textbook. If your definition is all your own work, then unless I develop telepathy, you are going to have to do a lot better job of explaining yourself.
Quoting creativesoul
Great. You've just come out with a bunch of your private definitions and tell me it is not just me that is wrong, but the whole of philosophy and science.
The crackpot-o-meter is reading off the dial right now.
So what's a mental correlation? What's a mental ongoing?
Quoting creativesoul
Clearly it is the self doing the recollecting. Clearly that is also a potentially homuncular way of putting it. Clearly then, we don't want to be led into a hard claim about a self that both recollects experiences and experiences those recollections - a rather overdetermined position to take.
So your "big insight" here seems merely well-worn commonsense. It itself is a feature mentioned in any sensible, citable, theory of how language makes a difference to human mentality. Take Mead's Symbolic Interactionism for instance. We are born into a world where we find everyone talking grammatically in terms of I, you and them. And from there, a notion of "being a self" gets learnt.
Philosophy and psychology then have to go along with those grammatical conventions, just to get things said in a way people can start to understand. It doesn't make it impossible to turn around and expose the homuncularity of those conventions. That is exactly what Symbolic Interactionism and other such schools of psychology did.
You would know all this if you read the books.
Quoting creativesoul
Great. And I have a very good grasp on that having written a number of books on the subject (that were in turn based on the vast amount of relevant research that exists).
Quoting creativesoul
Do I sense a linguistic notion of selfhood creeping into your thinking just there? You say "they" must have the ability to generate. Is there a "they" without linguistic scaffolding? Isn't there just the brain doing its thing in Bayesian brain fashion?
Surely what you meant to say was that with animals, there can be no socially-constructed self that can imagine itself being at the control of a flow of anticipations. The animal mind is extrospective, not introspective. There is no linguistic self to turn attention away from the world and direct it towards an internal world of rumination and day-dream.
You would know all these things if you read the right books.
Quoting creativesoul
Sure. Is that substance arriving any time soon? Have you done attacking both my ignorance and the general ignorance of all philosophy and psychology?
Yep. Me scholar, you crackpot. Me cite sources, you complain the world doesn't understand.
Quoting creativesoul
Who would'a thunk? Social constructionism 101.
Be well apo...
This seems like the perfect time to allow Kant to place apo's recent ad homs in proper perspective.
X-)
So we have as our example...
Now take seems/appears. Is one the animal level of perception, the other the human level? Is that what Creative hopes to signal? Or is one the proposition, the other the truthmaker? Does one imply some generality, the other some more specified circumstance?
Why should I be kept guessing like this? Does Creative actively require that I don't understand him, for some reason of his own? That is certainly what it seems/appears like to me.
What about allow/permit, place/put, latest/most recent, proper/rightful. And now even "perspective and/or point of view".
Aren't these all synonymous pairs with nary a meaningful difference? Or can someone else crack Creative's linguistic code, find a rule behind it?
Join in there apo. This thread is tangential. It regards direct vs indirect perception...
...and be on your best behaviour, if you would.
It's your private theory that the whole world doesn't understand. I remember now your recent lament that you can't seem to bring the academic world to proper account for its failings in your eyes.
So if there is a key to your code, you can just reveal it right here. I don't mind if that involves you having to go cut and paste that answer from wherever you might have done just that in your best honest fashion.
But I am tired of chasing you around in circles. This has been going on for quite a few years, hasn't it?
Yes. It's redundant, idiosyncratic, and does not conform with a decent editorial standard. I wish he would stop doing it. It's a bad habit.
So why is saying the same thing two different ways of any importance? What does that idiosyncrasy mean?
Clear that up. Then you can tackle mental correlations/mental ongoings. If you can't point towards some basis in standard scholarship when it comes to that jargon, you really do need to make an effort to explain yourself.
That was what was written originally. Ah well. Once again, rather than focus upon the substance of the post (that time it was Kant) some would rather talk about others on a personal level...
Stop feeling sorry for yourself. You are making your own credibility central to any discussion as you admit this is all your own personal theory, your own terminology, your own concepts.
You are welcome to ad hom me. It's against forum rules but I think it is a big part of the fun. I won't complain.
However the difference is that I always have some kind of citation to show where any claim might be coming from. So if you attack my views, I don't have to take it personally. I can show you the context within which those views arise. And that is just basic scholarship. If you don't like what I say, I say well go attack these other guys. Come meet my big brother. :)
Not my problem.
No, it's redundant and counterproductive. You should listen to the feedback. Being able to do something is one thing, but actually doing it isn't necessarily the right thing to do. I might be fluent in ten different languages, but that doesn't mean that actually providing nine alternatives alongside my native language is the right thing to do. This is what dictionaries, thesauri, and translators are for. How about you talk like a normal human being, and we will check a dictionary or request clarification if need be?
So let's stack that up against a more scholarly view - https://en.wikipedia.org/wiki/Pleonasm
So any standard notion of good writing would cross out your redundant terms as being more confusing than enlightening.
You may think it is a habit that makes your thoughts clearer. But for me, the redundancy just halts the flow.
I don't know which/what word/term I/myself am/are meant/intended to/at be/am attending/focusing on/at at/on any/every particular/specific moment/instant.
[Phew. Small round of applause please.]
Clarity par excellence! (Or a confusing mess that is a strain to read).
The irony.
An astute reader can look to the above example that apo has somehow judged to be rightfully applicable to the situation at hand, and clearly see that it is an example that doesn't apply to what I've written. Kant's explanation looms large...
Vanishingly my dear Planck.
Given you don't seem to take Kant's meaning here, the point is that you do have to internalise the proper habits of conception. Just being able to parrot words is meaningless. You have to come to understand them in a way that is intentional.
Which would be why you can't reject what you haven't mastered. You can't reject the words of scholarship because "they just don't make sense to you". You have to show first that you understood what those other guys really meant to say. And then communicate - unfortunately, also through the skillful use of language - your own "better" way of conceiving of whatever that thing was.
Philosophy and science rely on logical or mathematical language to ensure the maximum possible level of correct communication. Ordinary everyday speech carries too much ambiguity when the going gets tough.
So it really is a scholarly game with its rules for communicating. There's things you do, and things you don't do, because that is what has been found to work.
I'm calling you out for not accepting those rules ... even after posting Kant's own words.
The fact that you bolded and highlighted any passing phrases that you felt give licence to your claim not to need to connect with active scholarship, or follow norms of philosophical writing, goes straight to your state of mind.
Kant wasn't actually whispering down the generations, "Creative, go you good thing. Stick it to the unbelievers in your special language."
Actually, not quite. The "latest/most recent" was me. :D
I used an unorthodox method to try to teach you a lesson.
Ah, but the more localised the applause, the more immense is its energy. Heisenberg's principle rules.
But where is the astute reader who can make sense of your linguistic quirk? If Kant is there beside you, can you put him on the line?
Otherwise, I can only call upon you again to stop being bashful and explain yourself at last.
I do keep that in mind. At least you openly admitted to making my original post much more confusing than it was originally. Gave the dog something to chew on as well, even if it was based upon mistaken false belief(that I wrote that garbled mess).
Uncertainty? Vanishingly constant or constantly vanishing?
Oh right. So I was punk'd on that one. :)
Apologies to Creative there. But it was so believable...
That's another thing that you do. There should be a space there!
And don't start with a capital letter after a colon! (Actually, it turns out that that one's an Americanism, so I'll let you off).
Okay, I'm done. Sorry to derail the discussion. Love you too, creativesoul. :)
This is too rich. Pots and kettles. I'm not interested in your rhetoric apo.
All quantum mechanics can tell us is that it sure started small yet intense.
But then under a thermodynamically extended view of QM - decoherence - we could predict that the joke/applause will indeed evolve state from the vanishingly constant to the constantly vanishing. It will spread, yet dilute, as time passes.
Ah physics jokes. Surely the best!
Evasion, evasion, evasion.
You've been haranguing me for definitions. I've given them. To the degree I could given your refusals to clarify what it is exactly you might question about those definitions.
And now - as has always been the case - you run for cover when I insist on some kind of sensible definition of your own terminology.
I'm actually fascinated in a horrible car-crash way. I want to see what you come up with eventually.
The question and/or issue revolves around whether or not our perception is mediated by our mental ongoings or whether it is not.
The correct answer is both, depending upon the notion of perception. If it is based upon a minimalist criterion, then it would not involve language, and it would be a more physical notion. If it is based upon a criterion that requires complex linguistic notions, including awareness of our own fallibility, then our perception would most certainly be indirect, because it would amount to the affects/effects of one's worldview and would be a more mental notion.
The OP removes the notion of worldview, and yet still shows that the neural networks do indeed perceive things external to the networks themselves. Thus, it seems to me that that ought to provide grounds for rethinking the notion of what counts as direct perception, and in turn what counts as indirect perception...
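The OP's unsupervised-learning point can be illustrated with a toy sketch. Even the simplest clustering algorithm, handed unlabeled points, recovers the groupings that generated them without ever being told what the groupings are. This is a minimal illustration of the principle only, not DeepMind's architecture; the data and parameters here are invented for the example.

```python
import random

random.seed(0)

# Two unlabeled "pattern sources": points scattered around (0, 0) and (10, 10).
# The algorithm is never told there are two kinds of thing, only how many
# clusters to look for.
data = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
        + [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(50)])

def kmeans(points, k, iterations=20):
    """Plain k-means: alternately assign each point to its nearest centroid,
    then move each centroid to the mean of the points assigned to it."""
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                        + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [(sum(p[0] for p in c) / len(c),
                      sum(p[1] for p in c) / len(c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

found = sorted(kmeans(data, k=2))
print(found)  # two centroids, one near each unlabeled source
```

If it converges as expected, the two centroids land near (0, 0) and (10, 10): the structure was in the data, not in any labels.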
Finally you spell out a position. And I agree with the gist. It is why I say humans introspect but animals extrospect. Animal perception is direct in the sense that they have no choice but to be plugged into the here and now. Their minds are run by their immediate environment and the circumstance it presents. The capacity to detach from that is very limited - even if chimps, dolphins and ravens can do some planning, some abstracting, some deeper level of analysis.
Then humans can completely detach from the world to have a socially-constructed inner world due to the semiotic mechanism of symbolising language. Language creates an epistemic cut. Mentality gets divided into linguistically scaffolded notions of self and world. Consciousness becomes a self-consciously regulated thing. Introspection adds a further internal dimension where a “self” resides.
So it is the epistemic cut, the semiotic machinery of a symbolic code, that makes human mentality and perception indirect compared to the “trapped in the moment” directness of the biological animal mind.
Yet then, the thread is really about computers only aiming to achieve a conscious animal level of perception. DeepMind claims to replicate something of the neural architecture of brains, not the socially-conditioned being of human minds. It is only the programmers who know DeepMind is seeing cute kittens. No one pretends the machine is making a linguistic classification in unsupervised learning fashion.
So that is why your attack on my usage of “perception” was so out of place. It suggested you didn’t really understand the meaning of my language within the context of the thread.
But anyway, I also then would make the further point that animal perception is still indirect even in its directness. It remains the case that animal consciousness is also founded on an epistemic cut - the mediating semiotics of neurons.
So the whole semiotic argument applies with equal force, just at this more foundational level. That is why while it is true human consciousness is even more indirect than that of non-linguistic animals, here that is inessential to the indirect realism position.
I'd like to add a bit to the above. What I mean by "correct" is important here, particularly to another point I aim to make that ought to add some depth to our understanding.
There is more than one sense of the term "perception". All are correct, because what determines the correctness is established by how a group uses the term. Sensible language use correctly follows conventional norms, the latter of which is established solely by virtue of 'enough' people using the term in the same and/or similar enough ways. Most of us are aware of the difficulty that can arise when incompatible and/or oppositional senses of a term are being employed in a debate based upon that term.
I think that it is important to consider the 'best move' when these sorts of circumstances become the case.
In order to effectively critique an author's position, the reader must understand that position. Understanding requires granting terms and seeing them through. When it comes to discourse regarding whether or not perception is direct or indirect, what counts as either is key to understanding one's position. Both sides offer notions of perception, but are both sides talking about the same thing, and if so on what 'level'?
Because indirect perception is mediated, whatever mediation is existentially contingent upon, so too is indirect perception. Mediating perception requires metacognition. Thus, indirect perception requires metacognition. Metacognition requires written language. Thus, indirect perception requires written language. So, we arrive at the following conclusion: no creature without written language has indirect perception.
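For what it's worth, the chain above is formally valid; as an iterated hypothetical syllogism it can be written (with $P$ = perception is indirect, $M$ = the creature has metacognition, $W$ = the creature has written language):

```latex
(P \rightarrow M) \land (M \rightarrow W) \;\vdash\; P \rightarrow W,
\qquad \text{and by contraposition} \qquad \neg W \rightarrow \neg P .
```

So the argument stands or falls with the truth of its premises, in particular $M \rightarrow W$, which is exactly the premise contested in the replies.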
From earlier on in the discussion. It is well worth repeating...
Perception isn’t mediated. It is the mediation. Radiant energy gets turned into colour experience. Floating fragments of organic matter get turned into the scent of a rose.
The gap or epistemic cut is between the physics of the world and the qualia of the mind. Perception is our way of talking about the fact that “we” - the linguistically constructed introspecting observer - have to accept basic experience as brute fact. That part of what our brain does - processing the world as a pattern of sensations - is hardwired.
So perception is the primary mediating step. Then secondary linguistic habits can mediate that biological level experience. We can talk about lovely sunsets and try to put a name to the particular variety of rose we might be smelling.
Quoting creativesoul
Nope. Metacognition is dealing with already mediated experience.
And crikey, why written language?
What's being mediated?
Quoting creativesoul
You can google the dictionary definition if you like - https://en.oxforddictionaries.com/definition/mediate
Citation? Explanation?
Quoting creativesoul
Quoting apokrisis
So, you're arguing that all perception mediates an agent's experience of the world, and it is indirect as a result...
Quoting creativesoul
Quoting apokrisis
Quoting apokrisis
Quoting apokrisis
Quoting apokrisis
Quoting apokrisis
I'm trying to make sense of all this...
Metacognition is thinking about thought/belief. Prior to thinking about thought/belief there must be something to think about. Thought/belief is prior to metacognition. Prior to thinking about thought/belief there must be a means for doing so. Written language facilitates our ability to isolate our thought/belief and then talk about it by virtue of using the terms "thought" and "belief". The same is true of all mental ongoings and the terms and notions used to take account of those.
First up, the psychologists who talk about metacognition don’t really get the linguistic scaffolding approach. They are treating those human skills as if they were further genetic functions, not socially constructed and language based skills.
Then still, what has written language got to do with it? Just having a mind structured by oral speech is plenty. Kids don’t learn metacognitive-type skills from a manual.
This is all getting a little too weird now.
Your position is suffering from equivocation on two terms in particular: the first being perception and the second being mediate. The equivocation results in self-contradiction. You're putting forth an incoherent position, and have been from the start. That is a mountain-sized problem. The proof of that is easy enough to see by proof by substitution: define both terms, then review all your posts while substituting every use of each term with its definition, and watch what happens.
I do not expect you to take this seriously, but...
You need to sharpen your notion of perception; it is ill-conceived. Proper quantification is necessary and would be a good start. Not all perception requires language.
And yet you do not offer valid criticism/counter-argument. It can easily be placed into simple form, and I have actually done so on several occasions throughout the years. Each time, rather than argue for why a premiss isn't true, or showing an invalid move, or showing some other inadequacy inherent to the position, you quote some premiss or some conclusion you disagree with, gratuitously assert that disagreement, and proceed to offer your own explanation on your own terms.
That's not how valid criticism works.
But my argument has been that the limitation is fruitful, purposeful - the feature and not the bug. So the indirectness is critical to the design. It creates the epistemic cut by which the mind separates itself from the world so as then to be able to assert control over the world.
What you treat as an interruption to directness that doesn’t do too much damage, I am saying is the interruption that is foundationally necessary so that a self can be introduced into the equation. The world must be filtered in a way that represents already the self-interested self.
So you are motivated to argue for directness, despite the evidence, because you seek to defend a mistaken notion of processing.
Yours is essentially a representational ontology where the brain turns sensory input into a conscious state of experience - that some homuncular self then experiences. The usual confusion.
I’ve argued the embodied and Bayesian brain view where the brain instead does its best to predict its inputs. Success is defined in terms of how much the world can be afforded to be ignored. So the self interest exists from the get-go. And the epistemic cut is enforced by the mind only having to read reality in terms of its own privately constructed system of signs.
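The Bayesian brain view gestured at here is often cashed out as predictive coding: the system keeps an internal estimate, predicts its sensory input, and updates only in proportion to the prediction error, so that a well-modeled world can be largely ignored. Here is a toy sketch of that loop only, not a model of any actual brain; all the numbers are invented for illustration.

```python
import random

random.seed(1)

estimate = 0.0        # the system's internal model of the world
learning_rate = 0.3   # how much each surprise updates the model
world = 5.0           # the hidden state generating the sensory input

errors = []
for step in range(50):
    sensory_input = world + random.gauss(0, 0.5)  # noisy observation
    prediction_error = sensory_input - estimate   # the "surprise"
    estimate += learning_rate * prediction_error  # update only on error
    errors.append(abs(prediction_error))

# Early on, surprises are large; once the model fits, inputs carry little news.
print(round(errors[0], 2), round(sum(errors[-10:]) / 10, 2))
```

Run it and the early prediction errors dwarf the late ones: once the estimate fits, the input is mostly predicted away, which is the sense in which success is defined by how much of the world can be afforded to be ignored.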
Quoting creativesoul
This is an example of your nonsensical replies. Where have I ever said all perception requires language?
Your refusal to answer on small but important details is a big problem. Why do you go out of your way to be opaque?
The difference between our views is on a foundational level. I draw a clear and meaningful distinction between perception and thought/belief. You conflate the two. I draw a clear and meaningful distinction between thinking about thought/belief, thought/belief, and perception. You conflate all three.
However, I find no clear and meaningful distinction in your view with regard to what exactly counts as perception in animals without language. In fact, you've treated language as perception, as far as claiming that both mediate experience, as well as calling perception the ability to talk about stuff.
Here's something you've offered with regard to the above that we may be able to work from/with...
Quoting apokrisis
The above defines direct perception in terms of whether or not the perceiving agent has a choice about being plugged into the here and now. It further claims that their minds are run by their immediate environment and the circumstances it presents, without the capacity to detach from all this ("trapped in the moment"). Compare that to humans' ability to 'detach' from the world by virtue of basically becoming self-aware via language that divides mentality up into notions of self and world, and you have what indirect perception consists of.
Unless humans have always been linguistic creatures, it seems to me that there is a progression of complexity at work.
Quoting creativesoul
Quoting apokrisis
I've provided the argument. You denied the argument based upon evidence to the contrary.
Provide it.
Strewth. Yes of course. Language had to evolve. And the modern symbolic human mind with it. That is what paleoanthropology studies. Go read a book about it.
Why writing as a necessary step? Why wasn’t speaking already enough?
Am I supposed to understand by “facilitate” that you mean only to say writing helped sharpen what speaking had already got started?
In that case, writing becomes a redundant issue. It is not a critical fact here.
Again you seem determined to put obstacles in the way of any discussion. You won’t reference, you won’t answer directly, you use weird terminology with meaningless redundancies, you make secret sauce claims of understanding a mystery that no one else gets.
Getting straight answers from you is like blood from a stone.
Smelling like a rose.
That distinction is not one I’ve seen being held up as crucial in any metacognition texts. So you will have to be the one to justify it.
The fact that you will continue to try to worm your way out of doing so says everything that needs to be said.
Again, how do you define facilitate in the above context? Was my suggestion right or wrong?
If that isn't clear enough, then ask a better question.
I’m sure a good case can be made for how the creation of texts was a big step up in terms of cultural semiosis. Having a sacred book that encodes the right way to think means civilisations of millions can become focused on the shared project of saving their eternal souls.
But equally, anthropologists study hunter-gatherer tribes that rely on oral memory to transmit metacognitive thought habits. That is simple proof that your assertions are fallacious.
Are you saying hunter gatherers aren’t properly human?
Show me...
So yes, I have no problem with the idea that literacy made another huge difference. That too is well studied - a routine anthropological fact, even if not a politically correct one.
But if you want to argue that preliterate hunter gatherers aren’t skilled at transmitting cultural metacognitive thinking via their oral skills, then you show me any such evidence.
Meanwhile, read up on how oral communication is employed - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5372815/
You claimed proof. Show me...
I'd argue that weasels aren't either, for the exact same reason. There is no 'evidence' to be had to prove that. Rather, it stands on the merit of the argument and falls whenever evidence is presented that negates it. You've done neither, refuted the argument nor provided evidence to the contrary.
Keep trying though. My argument stands up to the same scrutiny that yours couldn't survive.
You claimed the hard distinction. You can provide the evidence to support your claim.
I say pick up any anthropological discussion of the issue and you will see folk talking about how literacy makes a big difference - particularly to the fostering of a "theoretical" mindset over the preliterate "narrative" mindset - and yet they don't claim some hard difference in terms of "metacognition" ... itself an abused term that doesn't even go to the question of linguistic scaffolding, oral or otherwise.
So I can't just cite some experiment or book here. Your tangle of crackpottery goes off in too many self-contradicting directions. Having pointed out the silliness of talking in terms of metacognition, I also then pointed out a further silliness in terms of treating written texts as critical to the human mental difference ... when it comes to what is different about human perception in relation to animal perception.
But good luck getting your thoughts written up and published, revolutionising the course of psychology as you reveal your great hidden truth.
The contrary of what? It is your lack of any properly grounded claim that I drew attention to.
You contradict yourself to the degree you confuse metacognition with a semiotic position.
You could try to rescue your claim that writing makes a critical semiotic difference when it comes to perception. That simply speaking - simply an oral culture - isn't enough.
I already agree - with the literature :) - that literacy does make a difference. Just not a critical one in terms of human perception.
So continue to talk and act like a crackpot. I've done my best to help you sort it out.
Quoting creativesoul
Quoting apokrisis
I've provided the argument. You denied the argument based upon evidence to the contrary.
Provide it.
Yeah. Sounds legit.