There Are No Identities In Nature
Broadly speaking, one can speak of two types of systems in nature: analog and digital. Analog systems are defined by continuous variables, like the distance between points or changes in velocity; rulers, thermometers, and accelerator pedals are all examples of analog systems. Digital systems, by contrast, are defined by discontinuous or discrete variables: as with the ten 'digits' of the fingers, digital systems involve discontinuous 'jumps' between measurement values. Thus the transistor-based computer, with its on/off electrical gates, is an example of a digital system.
Now, one particular feature of analog systems is that they admit of no negation. Because analog systems employ continuous variables, there cannot be any 'gaps' in the variables of the system: all the quantities involved are always positive; there are no minus quantities (consider the movement of mercury in a thermometer - it is a smooth, continuous variation). Anthony Wilden, speaking in terms of the difference between analog and digital computation, puts it like this:
"It is impossible to represent the truth functions of symbolic logic in an analog computer, because the analog computer cannot say 'not-A'. Negation in any language or simulated language depends upon syntax, which is a special form of combination, and the analog computer has no syntax beyond the level of pure sequence (and that only in a positive direction). There is no 'either/or' for the analog computer because everything in it is only 'more or less', that is to say: everything in it is 'both-and' ... Because the analog does not have the syntax necessary to say 'No' or to say anything involving 'not', one can refuse or reject in the analog, but one cannot deny or negate." (Wilden, System and Structure).
How then, do we turn an analog system into a digital one? By doing nothing other than introducing negation into the analog continuum: that is, by placing a 'cut' into the continuum which would parse the continuum into A and not-A. Doing this creates a boundary in the continuum. Here is Wilden again: "Boundaries are the condition of distinguishing the 'elements' of a continuum from the continuum itself. 'Not' is such a boundary. ... the differential boundary of the figure and ground is a primitive digitalization generating a distinction, and the distinction may then become an opposition. [Thus], figure and ground form a binary relation, one and two form a binary distinction, [and] A and non-A in analytical logic form a binary opposition (an identity). 'Not' is a rule about how to make either/or distinctions themselves."
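As a toy illustration (entirely my own, not Wilden's), the 'cut' can be sketched in code: a chosen boundary value - the boundary itself is an arbitrary assumption - parses a continuum of readings into A and not-A.

```python
# A minimal sketch of a 'cut': the boundary value is arbitrary, and it is
# the cut, not the continuum, that introduces either/or into the readings.

def make_cut(boundary):
    """Return a predicate that digitizes a continuous value:
    every reading is now either A (True) or not-A (False)."""
    def is_A(x):
        return x >= boundary
    return is_A

# The continuum itself has only 'more or less'; the cut imposes either/or.
readings = [0.2, 0.49, 0.5, 0.51, 0.9]   # continuous measurements
is_A = make_cut(0.5)
print([is_A(x) for x in readings])        # [False, False, True, True, True]
```

Note that nothing in the list of readings itself dictates where the boundary falls; moving it to 0.6 would redraw A and not-A without changing the continuum at all.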
A few quite important things follow from this, but I want to focus on one: it is clear that if the above is the case, the very notion of identity is a digital notion which is parasitic on the introduction of negation into an analog continuum. To the degree that analog systems do not admit negation, it follows that nothing in an analog system has an identity as such. Although analog systems are composed of differences, these differences are not yet differences between identities; they are simply differences of the 'more or less', or relative degrees, rather than 'either/or' differences. The same follows for the principle of the excluded middle, which, to the degree that it is defined by the disjunction between an identity and its negation ("either p or not-p" [p ∨ ¬p]), is rendered inoperative at the level of the analog.
A way to summarise all of the above is this: to the degree that nature is a continuum, there are no brute identities in nature. Or, less provocatively: to the degree that there are identities in nature, they are constructed and derivative of analogic differences. Nothing is 'equal to' or 'identical to itself', 'in-itself'. These notions are heuristics that are imposed upon nature for the sake of communicative ease. As Wilden acknowledges, 'boundaries in fact are the conditions of all communication'; but it will not do to confuse those conditions with those of nature. The all-too-quick takeaway I want to draw from this - the thread is getting too long now - is that any kind of ontology or metaphysics which would rely on the machinery of formal logic to do its heavy lifting would be a severely crippled one (here's looking at you, analytic metaphysics!).
Comments (213)
But the difference between analog and digital has a digital character. It's an opposition in which the poles are interdependent.
Movies seem analog because we aren't fast enough to see that it's discrete frames passing by. Maybe time and space are like that at a fundamental level.... atomic... blinking. If you claim that it's not... how do you know?
Interestingly, the distinction between the analog and the digital is asymmetrical: it's only from the perspective of the digital itself that one can draw the distinction (or talk of the analog); from the perspective of the analog, digital distinctions are themselves continuous. Consider a thermostat which controls the temperature: although it depends on analog qualities, it turns the heat on or off only once the temperature crosses a certain (digital) boundary. The analog is indifferent to the digital functioning of the thermostat, while the digital, as coarse-grained, can only 'see' a threshold being crossed and is itself indifferent to the continuum of the temperature as such.
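The thermostat's coarse-grained 'seeing' can be sketched as follows (a hypothetical illustration; the threshold values are assumptions, and real thermostats use hysteresis between two thresholds to avoid rapid switching):

```python
# Sketch of a thermostat with hysteresis: the digital side registers only
# threshold crossings; the continuous temperature in between is invisible.

def thermostat_step(temp, heater_on, low=18.0, high=22.0):
    """Return the heater state after observing one temperature reading."""
    if temp < low:
        return True        # boundary crossed: turn heat on
    if temp > high:
        return False       # boundary crossed: turn heat off
    return heater_on       # between boundaries: nothing 'visible' happens

# The analog continuum varies smoothly; the digital output only jumps.
temps = [21.0, 19.5, 17.9, 18.5, 22.1]
state = False
states = []
for t in temps:
    state = thermostat_step(t, state)
    states.append(state)
print(states)   # [False, False, True, True, False]
```

The smooth drift from 21.0 to 19.5 changes nothing on the digital side; only the crossings at 17.9 and 22.1 register, which is exactly the indifference described above.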
Yea. I used to work in telecommunications hardware engineering.. all up in the A to D and vice versa.
It's true you could say that digital exists because of boundaries we cast. Transistors were originally used only for analog applications. Using it as a switch means picking a threshold. Since the threshold is artificial, digital is fundamentally artificial. I think that's what the author you mentioned was trying to say?
You could look at it that way.... the other way is this: reality is all about consequences.
I think it's actually just different ways of looking at the world (ways that are interdependent)... although I have zero will to argue for my own intuition. Maybe somebody else will.
-There is negation in language. This looks indisputable.
-By your argument, language must therefore be a digital system.
-So there is no problem with there being identity of a thing to itself in language.
-Therefore, language must be profoundly metaphysically mistaken, and can't be used as a guide to metaphysics. The problem is that logic is derivative of intuitions based on natural language.
http://news.mit.edu/2016/gene-circuits-live-cells-complex-computations-0603
Bypassing the law of the excluded middle.
The type of identity commonly referred to in logic is what I would call formal identity. A thing is identified by a form, or formula, such that its identity is based in "what" it is, according to the logical formula. It is impossible that the thing referred to by the word or symbol is anything contrary to what is described by the formula, or else it would not be the thing described. The other type of identity, I would call material identity. This is what Aristotle referred to when he said that a thing is the same as itself. Here, a thing is identified not by a form or formula of what it is, but by itself. The identity of the thing is not to be found in a description or formula, of what the thing is, but within the thing itself. This principle allows that a changing thing continues to be the same thing, through the course of time, despite having a changing form. For instance, I am the same person today, as I was as a child, and my car is the same car after the accident as it was before the accident, despite the fact that a new formula is required to describe what the thing is, with each such change.
The point here, is that what I've called "material identity" is based in an assumed temporal continuity. It is this assumed temporal continuity of matter which allows us to say that the object is the same object from one moment to the next, despite the changes which are going on with that thing. With reference to the OP then, the digital perspective describes the world, and identity of things, by referring to formal identity, and this may be a sequence of states through time. Each state is logically different from the preceding state. However, we assume a continuity between states, such that something connects them, a temporal order, and this allows us to say that we are observing a thing which is changing, rather than a succession of different individual things. This assumed continuity is the analogue perspective.
What Aristotle demonstrated is that the two forms of identity are actually incompatible: formal identity involves itself with being and not being, what is and is not, while material identity is involved with becoming, what lies between when a thing is of one description, and when it is not of that description. He showed how becoming, and therefore the associated material identity, cannot be described in terms of is and is not. Leibniz attempted to establish compatibility between the two with his identity of indiscernibles. If the formula of what the thing is could capture everything about the thing, even its spatial-temporal positionings, then the material identity could be captured by the formal identity. This would require that the formal identity of the thing would describe its form, or what it is, at every moment of time, and this would allow for temporal extension and the associated changes. But Hegel had already laid the groundwork for dialectical materialism with his dialectics of being, which allows that the formal categories of being and not being are subsumed within the material identity of becoming. This assigns reality to the continuous, analogue identity of becoming, leaving reality, as becoming, exempt from the formal laws of logic. Furthermore, Einstein's special theory of relativity renders it impossible to produce a description or formula of what is, at any moment in time, because this concept, a moment in time, is itself incoherent.
In conclusion, there are two distinct forms of identity: the formal, or logical, which lends itself to a digital, discrete world, and the material identity of becoming, which allows for the continuity of an analogue world. The two are incompatible, but a complete understanding of reality requires that one provide a position for both within the world.
Yes, the two kinds of identity you are referring to, formal and material, do seem to be in accordance with the two ideas of identity as digital and analogue implicit in SX's OP. They also seem to be in accordance with the distinction between identification and identity I have argued for in a couple of other threads recently.
So, the material identity of anything consists in its unique existence; in its being a unique entity, where the formal identity of anything consists in what it is called, or what it is identified as; which as SX points out always entails the idea of negation; the idea that what something is identified as being, is conceptually determined by what it is identified as not being.
True. Logically, there can't be a 1:1 correspondence because there are no 1's in a continuum. Maybe infinitesimals, but I think that's a hybrid digital/analog concept.
That's the problem with a continuum. There's nothing to get a hold of... no units. The golf ball can never make it to the hole.
The telling feature of the "in-itself" is relational. As a concept, it marks the difference between things, between where something belongs and where it does not. Its logic marks how one thing relates to everything else -- it is itself, not any other thing which is its absence. Not even identity itself is of nature.
Nothing is "identical to itself" not because the logic is incoherent (i.e. "identical to itself" means nothing or is a contradiction), but rather because "identical to itself" doesn't escape positing a relation.
I think you've more or less said this, but I think it's worth highlighting because some appear to be treating the lack of identity in nature as the incoherence of the logic of identity. As if, as the idealists are prone to say, the lack of equivalence between logic and nature were to render nature incoherent.
Quoting TheWillowOfDarkness:
The relation here is a temporal one. The thing is related to itself at a before or after moment in time, and this produces the temporal continuity of existence of a thing. The point though, is that the thing is not identified as being the same as itself, through some formal principle, such that it would have the same description from one moment to the next, because the thing is naturally changing in time. So it is identified as being the same as itself through some principle of temporal continuity, not through some formal principles describing what it is and is not.
Yes, of course the unique material entity must have temporal extension. The exact duration of its temporal extension up to any moment prior to its 'completion' (its ceasing to exist) is an ineliminable part of its unique identity; as is its unique path through spacetime, its unique set of relations to and interactions with other unique entities, as well as the unique set of physical changes it undergoes due to its unique set of relations to and interactions with other unique entities.
There are some unique entities, though, such as number, and kinds of forces, about which it is not so clear whether they should be counted as 'material entities' (or if that is clear then it is not clear what alternative kinds of entities they are) and it is not clear as to whether they have spatial existence or temporal duration.
The "thing itself" is a formal principle: the logical relation of the presence of a thing compared to its absence.
Part of my argument here is that what you refer to as material identity is a kind of hypostatization or transcendental illusion in which 'numerical' (formal) identity is projected (mistakenly) onto nature. I write of course, from the perspective of a kind of philosophy of process where any attempt to think in terms of brute identities ought to be rendered suspect from the beginning. With respect to formal logic, one can see how something as simple as the subject-predicate relation [P(x)] is fraught with metaphysical issues.
-I don't see why an analog system can't deal with or include negation or identity
-I don't see why nature would have to be analog (Leibniz for example effectively proposed it wasn't, and certainly some sections of physics traffic in quanta)
-I don't see what's to be gained from cordoning off what is a transcendental addition versus what is really in nature 'in itself,' and there seems to be no interest in the project if you're not a Kantian (the question of 'is identity in the mind/language/computer, or in the thing itself?' is only of interest to someone with Kantian assumptions)
Wilden for detail: "DNA is the molecular coding of a set of instructions for the growth of a certain living system of cells. But these instructions do not cause growth any more than the directions of a cakemix cause the mix to become a cake. They do not cause growth, they control its possibilities. In other words, the instructions of DNA constrain or limit growth... But it is not only the instructions which constrain growth; so does the environment in which they operate. Thus the articulation of the genetic code - which we know to be in some way double, like language, and punctuated, like writing - depends upon processes of combination-in-context (contiguity) and substitution-by-selection (similarity). Like language, also, it is a combined analog and digital process. Like language, it is not ruled by causality, but by goalseeking and constraint."
Yeh I read the analog/digital talk as essentially a restatement of the Kantian view that it's not something out there which "holds objects together" (object = X). Instead concepts are rules in the mind/language - which are therefore always abstractions/simplifications. I'm assuming that the difference here is that there's no transcendental idealism: we do have access to the things themselves (the 'analogue system'), but it needs to be simplified to be used in language and therefore in logic.
My concern is that if we take this as a limitation, so that identity, negation, formal logic etc. is based on abstraction, what in the end can we say about issues philosophically? Even the split itself of what systems can be considered analogue vs digital is actually a continuum and not a distinct break.
How could it? Again, the paradigm is the mercury in the thermometer - what even would negative mercury mean? To speak as such already implies a digitization which is nowhere present in the movement of the mercury. Anyway, one can be more than rhetorical here. The first thing to note is that any digital system, by definition, requires a boundary to be set: in order for an (analog) continuum to distinguish itself from itself, some kind of boundary needs to be set in place in order to digitize the continuum. Digital systems are what happens when a continuum distinguishes an element of itself from itself.
The crux is this: the role of negation ('not') is precisely to enable a continuum to do just that. The ability to distinguish between A and not-A (that is, the setting up of a boundary between one thing and another thing) is the minimal condition which would allow the digitization of a continuum. Now, the reason why negation can play this role is because what negation actually is is a recursive (or 'metacommunicative') operator on an object language. That is, to negate is to take a statement and say something about that statement itself: to enact a negation of A ("not-A") is to say something about "A"; it is communication about communication.
Wilden again: "By introducing at a more complex level the possibility of communicating about communication, metacommunication provides the potentiality of truth, falsity, denotation, negation, and deceit. (The [animal's] nip says "This is play." The next step is to be able to say: "This is not play." And then: "This is/is not play." Only human beings pretend to pretend.) The introduction of the second-level sign into a world of first-level signs and signals detaches communication from existence as such and paves the way for the arbitrary combination of the discrete element in the syntagm. ...'Not' is a rule about how to make either/or distinctions."
So negation - or at least the possibility of negation - is the founding aspect of abstract symbolism, and of representation more generally, i.e. the kind employed by formal logic. To represent is to make a distinction. It is the possibility of negation which allows the (analog) continuum to distinguish itself from itself; and insofar as the analog is by definition characterized by its continuity, any negation (a metacommunicative, boundary-setting operator) would make it at once a digital system. Thus, as far as analog systems go, the best they can do is to refuse or reject (I can turn away from your request), but not deny or negate. More to say about the charge of Kantianism later (early hint: analog communication is a perfectly valid form of communication; it is just not a representational, denotative form of communication; the analog is not an unknowable noumenon).
I've just noticed that it's curious that we tend to (or at least I do) think of digital as 'binary' whereas etymologically it refers to a base-10 system, because 'digital' is in reference to our ten fingers.
Another observation is that, whereas nature may seem continuous, in very many common situations it is actually discrete. For instance QM tells us that there are only a countable number of different energy states of a pendulum. It's just that they are so close together that they seem continuous/uncountable.
Conversely, some processes that seem discrete are actually continuous, or at least have many more states than we think of them as having. For instance we think of a gate on a computer chip as being either Off or On, but actually that is determined by the voltage applied to the gate, which is usually high or low, but it could - in the presence of abnormal environmental factors or a flaw in the chip - be somewhere in between, providing a state in between on and off.
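This can be sketched concretely (the threshold voltages here are hypothetical, not taken from any particular chip family): a 'digital' logic level is read off an underlying continuous voltage, with a zone in between where the reading is neither cleanly on nor off.

```python
# Sketch of reading a digital level from a continuous voltage.
# Threshold values are illustrative assumptions, not a real chip spec.

def logic_level(voltage, v_low=0.8, v_high=2.0):
    """Map a continuous voltage onto a discrete logic level."""
    if voltage <= v_low:
        return 0            # read as Off
    if voltage >= v_high:
        return 1            # read as On
    return None             # in between: the digital reading is undefined

print([logic_level(v) for v in (0.3, 1.4, 3.1)])   # [0, None, 1]
```

The `None` case is exactly the point above: the underlying process has far more states than the two the digital reading admits.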
I haven't yet got my head around what that means for DNA, but it seems to me that there may be scope for interpreting the world either as discrete or continuous, digital or analog.
Do you think that something analog is incapable of 'making distinctions?' Surely, even if a bucket of water is in some way 'analog,' one can still distinguish between being hit with a little bucket of water and a big one, and measure the size difference between the two. If not, I'm not understanding how that I'm 'effecting a distinction' means that I'm committing to something being digital.
This is a little off topic, but the mercury example is a weird one in that quantities of mercury are in a very real sense digital, in that there is a finite, countable number of mercury atoms in any sample, and broken down beyond this we have not mercury but something else.
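A quick back-of-envelope check of that claim (using the standard Avogadro constant and molar mass of mercury): even a tiny sample contains a definite, finite count of atoms.

```python
# Rough count of mercury atoms in a sample, from standard constants.

AVOGADRO = 6.022e23          # atoms per mole (Avogadro constant)
MOLAR_MASS_HG = 200.59       # grams per mole of mercury

def atoms_in_sample(grams):
    """Approximate number of mercury atoms in a sample of given mass."""
    return grams / MOLAR_MASS_HG * AVOGADRO

print(f"{atoms_in_sample(1.0):.2e}")   # ~3.00e+21 atoms in one gram
```

Enormous, but countable: below the atom, the 'continuum' of mercury gives out.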
Substance ontologies can and must make room for activity (Aristotle's concept of energeia) being predicated of substances both as actualization of their characteristic powers and as enabling the background conditions of their existence (as characterizing underlying processes and boundary conditions). I don't see how an ontology of pure (merely "analog") processes can account for the possibility of empirical knowledge about anything. Kant's arguments regarding criteria for distinguishing objective succession from objective simultaneity, developed in The Analogies of Experience (in the CPR), seem to preclude the possibility of so much as objectively predicating analog qualities of elements of nature (however continuous), without also postulating substances a priori.
Recently, David Wiggins has argued for a richer fundamental ontology that makes room for both substances and process at an equally basic level in his paper Activity, Process, Continuant, Substance, Organism, Philosophy, 91, 2, 2016.
"Extensive properties include not only such metric properties as length, area and volume, but also quantities such as amount of energy or entropy. They are defined as properties which are intrinsically divisible: if we divide a volume of matter into two equal halves we end up with two volumes, each half the extent of the original one. Intensive properties, on the other hand, are properties such as temperature or pressure, which cannot be so divided. If we take a volume of water at 90 degrees of temperature, for instance, and break it up into two equal parts, we do not end up with two volumes at 45 degrees each, but with two volumes at the original temperature ... An intensive property is not so much one that is indivisible but one which cannot be divided without involving a change in kind. The temperature of a given volume of liquid water, for example, can indeed be “divided” by heating the container from underneath creating a temperature difference between the top and bottom portions of the water. Yet, while prior to the heating the system is at equilibrium, once the temperature difference is created the system will be away from equilibrium, that is, we can divide its temperature but in so doing we change the system qualitatively." (DeLanda, Intensive Science and Virtual Philosophy)
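DeLanda's extensive/intensive distinction can be put in a toy sketch (the function and its numbers are my own illustration, not from the book): dividing a body of water halves its extensive properties but leaves the intensive ones untouched.

```python
# Toy model of DeLanda's distinction: what happens to each property
# when a volume of water is divided into two equal parts.

def split_in_half(volume_litres, energy_joules, temp_celsius):
    """Return the properties of each half after an equal division."""
    return {
        "volume": volume_litres / 2,     # extensive: divides with the matter
        "energy": energy_joules / 2,     # extensive: divides with the matter
        "temperature": temp_celsius,     # intensive: unchanged by division
    }

half = split_in_half(2.0, 8000.0, 90.0)
print(half["volume"], half["temperature"])   # 1.0 90.0
```

The asymmetry is the whole point: extensive quantities track the 'how much' of the continuum, while intensive ones can only be changed by changing the system in kind.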
Not sure what to make of the quote. Temperature and pressure are numerically measurable and so divisible: whether effecting a certain division, say by half, involves literally just cutting the thing itself in half or not seems beside the point. Of course cutting water in half is not going to halve its temperature. Why would it, and how does that show a difference in kind between volume and temperature...? This shows that temperature is not the same thing as volume, and so cutting volume in half doesn't cut temperature in half. It doesn't show that temperature isn't halve-able. What a bizarre train of reasoning. You can cut the temperature in half by putting it in the fridge.
I do think there are intensive and non-quantifiable properties, but temperature and pressure in the physicist's sense aren't examples of them. And this reasoning is getting very scattershot, so the goalposts aren't clear to me anymore. If the distinction that matters here is cardinal versus ordinal, one would wonder how you can't have heard of ordinal numbers.
Here is the rub. (My argument here is influenced by similar considerations advanced by Michel Bitbol in some papers on the philosophy of physics, which I may seek to locate if needed). Whenever you are predicating some lawful and re-identifiable continuous quality of a system, this presupposes an ability to identify it as a system of a specific sort, or as a well defined instrumental set-up. This set-up, as a whole, and what defines it as a setup of this sort, is basically a substance. You can't conceive of it outside of a substance ontology. Interestingly enough, Bitbol himself purports to be arguing against substance ontologies, but his real target is a crude essentialism (or objectionable "metaphysical realism" as Putnam would label it) that doesn't make room for the constitutive role of concepts.
On edit: Similar arguments, it seems to me, sustain the core thesis of Karen Barad's book Meeting the Universe Halfway, which I saw you were rereading currently.
The real issue here, I think, is the question of whether the natural (analogue) continuum has any identity whatsoever. Aristotle identified it as the same as itself, and called it "matter". If there is nothing real which is being identified here, then our whole understanding of nature breaks down, because it is based in the assumption that the continuum is real, that something real has been identified as "the continuum".
So even if numerical (digital) identity is projected onto the (analogue) continuum of nature, in the act of measurement, during the attempt to understand nature, this should not be characterized as mistaken. There is an age-old philosophical principle which states that like cannot recognize like, and this manifests in the tinted glass analogy: when like is projected onto like, it causes deception. Mistakes occur when the projection, and the thing projected upon, are not properly separated, because this produces a failure in distinguishing between the characteristics of what is projected and what is projected upon. That is why the formal digital aspect and the material continuum must each be properly identified, and understood as separate.
Quoting StreetlightX
This may be an example of such a mistake. The continuum can only be identified as One. To separate one part from another would produce a contiguity which consists of separate parts. So if the digital, numerical system is projected onto the continuum, to separate out parts, such boundaries which are created are not a natural part of the continuum, but an artificial separation. The continuum itself, must be understood to remain whole, indivisible, despite any such acts of recognition, because to distinguish an element of itself, from itself, and claim that such a distinction is based in something real, within the continuum, would contradict its identity as a continuum. Allowing that the continuum has an identity is the only means for avoiding a descent into confusion.
As I said above, the analog is not at all anything like a 'thing-in-itself'. It is eminently knowable in the most trivial of ways; it's just that unlike 'digital knowledge', which is denotative and representational, analog knowledge deals with relationships. It's only by confusing knowledge as such with denotative, representational knowledge that one can make the kind of objection you have. The most fun example, drawn from Gregory Bateson, comes from thinking about animal communication. No known animal communication is digital, with the exception of our own, human language. This doesn't mean that animals can't know things.
Thus, speaking of cats trying to get our attention, Bateson writes: "When your cat is trying to tell you to give her food, how does she do it? She has no word for food or for milk. What she does is to make movements and sounds that are characteristically those that a kitten makes to a mother cat. If we were to translate the cat's message into words, it would not be correct to say that she is crying "Milk!" Rather ... we should say that she is asserting "Dependency! Dependency!" [or, following Wilden, who borrows from Bateson here: "will you put yourself in a mother relationship to me?" - SX] The cat talks in terms of patterns and contingencies of relationship, and from this talk it is up to you to take a deductive step, guessing that it is milk that the cat wants." (Bateson, Steps to an Ecology of Mind).
Of communication among bees, Wilden writes: "No bee constructs a message out of or about another message (there is no metacommunication about messages as is possible in digital communication); in other words, no bee can "dance about dancing". The 'gesture language' of bees involves perception (perceptual representations are analogs of what they represent). Moreover, no bee who has not flown the course to find the nectar can send the message 'about' where it is, no bee can tell where the nectar or the pollen will be, no bee can say where the nectar isn't ... The circular dance has the specific function of analog communication: it simply says something about the dancing bee's relationship to the food near the hive, but it cannot say there is no food there. The wagging dance uses a code of signals to point; it is a more complex analog message. In neither case does there seem to be a possibility of a methodological analysis of these forms into discrete elements with a duality of patterning similar to that of morphemes and phonemes, for the indications of distance in the wagging dance are frequencies and times, and relatively imprecise." (System and Structure).
So the idea that the analog is a kind of noumenal 'in itself' is wrong. To drive the point home: "The analog is pregnant with meaning whereas the digital domain of signification is, relatively speaking, somewhat barren. It is almost impossible to translate the rich semantics of the analog into any digital form for communication to another organism. This is true both of the most trivial sensations (biting your tongue, for example) and the most enviable situations (being in love). It is impossible to precisely describe such events except by recourse to unnameable common experience (a continuum). But this imprecision carries with it a fundamental and probably essential ambiguity: a clenched fist may communicate excitement, fear, anger, impending assault, frustration, 'Good morning', or revolutionary zeal. The digital, on the other hand, because it is concerned with boundaries and because it depends upon arbitrary combination, has all the syntax to be precise and may be entirely unambiguous. Thus what the analog gains in semantics it loses in syntactics, and what the digital gains in syntactics it loses in semantics."
And what exactly do you think ordinal numbers indicate? Distinctions between discrete elements in a set? Not at all. What they mark are relationships, which are - guess what - analog, and not digital differences.
Your example is good but we are miscommunicating. On my view, neither process nor substance is noumenal. Both are empirical and, qua pure concepts, they are co-eval. It just becomes impossible to know or think of anything empirical that instantiates one of them when we seek to make one of them *the* fundamental constituent of "nature". It is only when one attempts such a reduction that the possibility for knowledge becomes unintelligible and that the basis of reduction (either substance or pure process) retreats into the noumenal. Intelligible ontologies must be pluralistic.
I agree with this, but the same can be said of substances as they are conceived within a pluralistic ontology. Empirically knowable substances also essentially involve relationships. We must meet the substances that populate the universe midway, in Karen Barad's words.
I am snagged on the basics. 'Analogue' began as a mathematical term itself, about sameness of ratio or proportion. It's become a loose term, particularly handy as contrastive with 'digital'. It seems to me that one's beginning should be that there are a number of ways of speaking about systems in nature, one digital, one based on analogy, others descriptive in other ways.
This is true in electronics. DC voltage is quantified discretely (obviously). We use a little calculus to quantify AC voltage. Interestingly, what we're doing when we quantify AC voltage is we're trying to weigh AC against DC... as if quantification is fundamentally a digital thing.
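The "little calculus" here is presumably the RMS (root-mean-square) computation, which weighs an AC waveform against the DC voltage that would deliver the same power. A minimal numerical sketch (the sampled sine and the sample count are arbitrary illustrative choices):

```python
import math

def rms(samples):
    """Root-mean-square of a sampled waveform: sqrt of the mean of the squares."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Sample one full cycle of a 1 V-peak sine wave. Note that the analog
# waveform must itself be discretized before we can compute with it at all.
N = 10_000
sine = [math.sin(2 * math.pi * k / N) for k in range(N)]

# The RMS value is the equivalent DC level: for a sine wave, V_peak / sqrt(2).
print(round(rms(sine), 4))  # prints 0.7071
```

The point being made above is visible in the code itself: "quantifying" the AC signal already required turning it into a finite list of discrete samples.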
A digital phenomenon is either something or nothing... and when the something is there, it's static.
An analog phenomenon seems to always Be. There aren't any moments of absence.
If it's true that identity requires a moat around the castle ... a gulf between thing and world, then it's true that identity is the offspring of a digital world ... a dualistic scene. It could be argued that the primal identity is the self. Where there is a self, there is a POV. POV requires space ... separation between me and the thing I observe.
Is the most important identity of all absent from the "nature" mentioned in the OP?
Streetlight. I feel that the title of your thread is misleading, as you yourself seem to acknowledge in the quote above. For given all that you have said, it simply does not follow that there are no identities in nature, but merely that such natural identities as there are must be parasitic upon "analogic" differences. One can wonder (as others have) whether or not there is any definitive evidence to support the conclusion that nature is "fundamentally" analog, but I'll leave that line of inquiry for now.
If we are to start traveling down the path that you have set by relegating identity to the realm of "transcendental illusion" we'll inevitably encounter the question of whether the contents of such illusions are themselves a part of nature, and I would expect that you'd be loathe to answer in the negative on that score. But even if you stick to your guns on that point, we can also leverage your own line of reasoning to query the reality of the binary distinction that you have made between digital/analog systems in nature - is that distinction not, by your own lights, merely a transcendental illusion (and what of the binary distinction between natural/transcendental)? I'm not sure we're going to make it very far down this path before it becomes clear that we've made a wrong turn.
Thoughts?
It gets complex however, because digital processes, once engendered, can feed back into their analog 'ground', as it were. So you get something like a self-fulfilling prophecy: treat something as having an identity, and at some point it will have one. There's a dialectic, in other words, between the analog and the digital that takes hold once digital processes have come into being. Manuel DeLanda gives the example of a refugee who, having become aware of the fact of being so classified, changes her behaviour to better fit the criteria of 'refugee' in order to gain asylum.
But as he also notes, "to explain the case of the female refugee one has to invoke, in addition to her awareness of the meaning of the term 'female refugee', the objective existence of a whole set of institutional organizations (courts, immigration agencies, airports and seaports, detention centres), institutional norms and objects (laws, binding court decisions, passports) and institutional practices (confining, monitoring, interrogating), forming the context in which the interactions between categories and their referents take place." In any case relational differences (i.e. analogic differences) are constitutive of identity, even if said identity has a real ontological standing, as it were. Identity, as real, is nonetheless context-bound, and constitutively, necessarily so.
As Kant knew, transcendental illusion gave rise to real effects: the entirety of dogmatic, uncritical metaphysics. The claim here is similar: if one uncritically employs categories of identity as metaphysically primitive, this too will lead to uncritical ontologies. As for the distinction itself, I'll refer you to the third post I made in this thread, but I'll add this: to the extent that communicative precision depends on digital, rather than analog communication, it is necessary that one employ digital forms of communication to get these - or any fine-grained - ideas across. This is what makes the illusions transcendental: recall that for Kant, such illusions were unavoidable and were engendered by the nature of reason itself. They are illusions intrinsic to reason. Our ability to communicate similarly - unavoidably - tempts us to project the digital into the world in toto.
That said, I still feel hesitant to deny that there is a legitimate distinction to be made between those identities that essentially depend upon contextual relations to "ens rationis" (e.g. the human lebenswelt) and those that essentially depend solely upon contextual relations to "ens reale". Again, this seems to come part-and-parcel with the notion that some binary distinctions are naturally sustained (e.g. consider the evolution of "switches" in biological nature, and their fundamental role in processes of homeostasis, reproduction, sensation, etc.). The upshot is that I'm not entirely convinced of the notion that identity is merely transcendental in the sense of being confined merely to "ens rationis", while perhaps acknowledging that it is transcendental in the sense of being essentially context-dependent (I believe that medieval scholars actually referred to the fundamental sensitivity of finite, substantial being to environmental context as "transcendental relativity").
Yeah, exactly. I said elsewhere in the thread that got this train of thought going that what I'm kind of after is something like a "critique of pure formal logic" as it were.
I agree with this actually, although I would even refine it somewhat. I would in fact say that the emergence of the digital goes hand in hand with neither ens rationis nor ens reale but with ens vitae: that is, life. It is no accident or coincidence that all three examples of 'natural' digital systems mentioned so far in this thread are biological - gene expression, biological switches and synthetic biocircuits, if I recall correctly. Insofar as digital processes are self-relating circuits (that operate via negation), it doesn't take a giant leap to recognize that the self-relating 'ens' par excellence is life - that which autopoietically defines itself by sustaining a boundary between organism and environment. So perhaps it might be fair to say that the transcendental illusions of identity are just those of life itself. It is only by virtue of our biological being that we can engage in digital communication.
Right - on my analog watch 3:30 (which is not really 3:30, because there are no identities) is not "not 3:31" - thus spoke the metaphysician.
This isn't strictly on point, but I think most people who have even the tiniest little smidgeon of philosophical curiosity are naturally drawn to the question of in-itself vs. in-the-mind. Plenty of people who have never heard of Kant spontaneously ask: 'what if what you see as orange, I see as purple??'
Point being, imo the distinction being drawn doesn't seem specialized and academic at all.
Insofar as they are considered merely as degrees on a scale they are ordinal, and insofar as they represent degrees (degrees here in the sense of quantities) of heat, as in the example, they are cardinal. It's all in the interpretation.
That's a delicate project, insofar as any such critique must itself take some logical form. While certainly not an impossible task, one must be careful not to cut off the branch upon which one sits (sorry for the overworn cliche).
Quoting StreetlightX
I'm not sure I can agree. Granted, it is only with the emergence of life that "the digital" can in some sense be recognized as such, and leveraged toward the achievement of some "end" (e.g. some natural system comes to be leveraged as a switch by some other living system). Still, it seems hard to deny that the "raw materials" are there in nature prior to the emergence of life. A pressure gradient between two points in space is still a binary difference in magnitude even when it's not being leveraged as such by some living system, isn't it (in the sense that the magnitude at point A is not the magnitude at point B)?
Though that said, the question of whether identity is in the objects really boggles the mind in a way that color doesn't, IMO. Though the ideality of identity is an old, old subject, and I subscribed to it myself for a time, and get the appeal.
Cardinal numbers do introduce a total ordering by definition, but could only be assigned plain old ordinal numbers (accepting there is no such thing as e.g. 'the 1.5th...' which seems plausible) once we settle on some granularity, which the analog in principle doesn't require. But what the digital can do is go down to an arbitrarily small granularity as required, and then leave the rest to an open interval over this arbitrarily small amount. This is granting that things are analog in themselves, which as I said before is questionable for a mercury sample anyway. Then we preserve the ability to conduct well-defined operations such as negation, the gap being not one of metaphysical import, but of a necessary imprecision of arbitrarily small amount enforced on us by needing to decide on some granularity.
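The move described here - settle on a granularity, accept an imprecision bounded by it, and then conduct well-defined operations such as negation on the result - can be sketched as a simple quantization step (the step size 0.001 and the sample values are arbitrary, chosen only for illustration):

```python
def digitize(x, step):
    """Map a continuous value onto a discrete grid with the given step size,
    returning the grid index (a plain integer). Whatever analog 'remainder'
    lies below the chosen granularity is discarded."""
    return round(x / step)

# Two nearby analog readings collapse onto the same discrete value...
a = digitize(3.14159, 0.001)
b = digitize(3.14199, 0.001)
print(a == b)   # True: within one step, they are identical

# ...while readings more than a step apart become crisply distinct,
# so negation ('not this value') is now a well-defined operation.
c = digitize(3.14459, 0.001)
print(a != c)   # True
```

Returning an integer grid index rather than a rounded float keeps equality and negation exact, which is precisely the well-definedness the post appeals to.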
What you write here seems to me to risk blurring a very good binary distinction between the digital and the analogue, or between the dichotomous and the analogical. Are you just wanting to say that the potential for the digital is inherent in the analogical continuum and that the apparently binary nature of the distinction between digital and analogue is itself a hypostatization?
The 'fully blown' digital seems to emerge with symbolic language. For example, all numbers in ordinary arithmetic are made up of just ten kinds of digits.
A boundary is only put precisely there by digitization.
I'm not sure I understand the question. Digitization is the introduction of crisply discrete values. One discrete value cannot be another discrete value, so there is a crisp distinction (a boundary, if you like) between them.
But where is that boundary? How can we tell, in virtue of placing the boundary? How can we tell where we've put the boundary at all? One way is to say: we just look at which things fall on either side of it. But this begs the question, since if we could determine precisely to begin with, before knowing where the boundary is set, which side each was on, then there will have ipso facto been digitalized distinctions between those two things that lie on opposite sides of the border all along, since we need to make this digitalization in order to place the border. But if we do not assume this, then we have no reason to say that the border was placed at one place rather than another, and we literally cannot figure out exactly where it is, or which things digitally fall on either side of it; hence the border itself becomes analog, and contrary to hypothesis we have done no digitalizing in placing the border.
A delicate project indeed. Part of the motivation for this thread was to wonder if, within formal logic, there are resources by which to deal with these kinds of issues, or if these kinds of issues even can be dealt with within the constraints of formal logic. I'm simply not well versed enough to know where to look or what that would even look like. My point of view is very much from the 'outside in'. However, I'm always on the lookout for clues and resources which would help; Deleuze's work has been invaluable (as has the work of Francois Zourabichvili, who provocatively reads Deleuze as a logician, albeit a 'non-rationalist/empiricist logician'), so too Gregory Bateson and his spiritual successor, Anthony Wilden, whose System and Structure has been an indispensable resource for me in trying to think these things through. There are other resources too, but as far as I know, no one has really drawn them all together in a sustained way.
Quoting Aaron R
I don't think this works: a pressure gradient still has no negative values: there is more pressure here, and less pressure there, but at no point is there a relation of exclusion between the two 'ends' of the gradient; the magnitude at point A is not that of ¬B and vice versa. Identity and the law of the excluded middle are simply not operative at this level. The key again is that pressure is an entirely relative variable; the pressure at any point in a gradient (read: continuum) is defined by its 'place' in that gradient. It cannot be 'isolated' without losing its status as a 'point' of pressure to begin with: that is to say, pressure is a differential variable that cannot be taken 'out of context' without losing its 'identity'. As with all analog systems, the pressure gradient is a matter of the 'more or less', both/and, and never either/or.
I'm inclined to say that's the point being refuted: the boundary is not placed anywhere upon the world. It's entirely on its own -- when I speak a category, I do not do it through the world in front of me. I am the one speaking. The saying of the category is a particular state of me, not of any object I'm pointing at.
Boundaries are without being within the outside world. The drawn line is its own, not whatever is underneath it.
You seem to be running together two different processes in your thinking; you seem to be conflating perceptualization and symbolic conceptualization. Crisp boundaries are established only by symbol-based conceptualization, but more or less distinct boundaries are always already a condition of perception itself (of which pre-symbolic conceptualization is also a necessary condition). These pre-symbolic boundaries are not absolutely precise, though. I see the apple on the table and I can see the outer skin (boundary) of the apple. I can say that the apple ends precisely where the air begins, and that establishes a conceptually precise boundary. Of course the imagined precision of that boundary may be blurred by thinking in terms of the fundamental particles that constitute the apple and the air, for example; so it is not a case of believing that there is an absolutely precise boundary in actuality. I think that is rather the whole point of SX's OP.
But we can conceive of a precise boundary just as we can conceive of a perfect geometrical figure; and we would be, for example, establishing a conceptually precise boundary by specifying a geometrical figure using plane coordinates. Same goes for entities. We can conceive a precise identity (boundary) in terms of entity/non-entity; but of course I cannot say where the actual boundary between some object and its environment is precisely located. I think the question about the location of conceptual (digital) boundaries is based on a category mistake.
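The point about specifying a figure with plane coordinates can be made concrete. The unit circle below is an arbitrary illustrative choice, not an example from the thread: an inequality on coordinates is a conceptually precise (digital) boundary, and membership on either side is crisply decidable, even though no physical object has such an edge:

```python
def inside_unit_circle(x, y):
    """A conceptually precise boundary: the inequality x^2 + y^2 < 1
    partitions the plane into A and not-A exactly."""
    return x * x + y * y < 1.0

print(inside_unit_circle(0.5, 0.5))   # True:  0.25 + 0.25 < 1
print(inside_unit_circle(1.0, 0.0))   # False: a point on the circle itself
                                      # falls outside only by the strict '<'
```

Notice that the boundary points themselves (where x² + y² = 1 exactly) land on the 'not-A' side purely by stipulation of the strict inequality, which echoes the later discussion of which side, if either, the boundary belongs to.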
All I can say is that we're apparently not talking about digital boundaries 'somewhere, wherever they might be, and I don't know or can't know in principle,' but rather digital boundaries that are well-defined in formal systems. The former seems to be more an epistemic matter anyway: to imagine a digital boundary is indeed to imagine it somewhere perfectly precisely, even if one can't say for whatever reason exactly where. We can bypass the epistemic issues by setting them in a thought experiment, and still the problem remains. If this was a conceptual mistake, then we shouldn't be able to say, for any given part of the continuum, whether it fell on one side of the boundary or the other. But this is just to say either that 1) there is no digital boundary, or 2) there is no boundary at all, which are not situations we're interested in ex hypothesi. The issue about perception v. conception also seems to miss the point in that we're not talking about literally perceiving a boundary but rather being able to tell, in the thought experiment, where it is by thinking about it.
But to be honest my interest in this topic isn't proportional to the amount of time I'm spending typing about it, so I'm going to stop replying here.
And in particular:
Quoting The Great Whatever
All I can say is that as I conceive it the discreteness of digitally or symbolically conceptualized entities (the conceptual boundary between them in other words) just consists in the fact that one symbol cannot be thought to be identical to another. It's a matter of logical differences.
When it comes to the continuum, think of something like this: "Imagine the line of the equator: you obviously know which side of the line Europe is on, even though you don't know precisely where the line lies on the planet's surface; but there might be locales in Ecuador, the Congo, Kenya and Indonesia, for example, where the answer is not clear at all. Maybe with GPS you could tell if you were stepping over the equator."
Anyway, I'm not greatly interested in this particular obtuse angle we are trying to explore either, so it suits me well enough to drop it if that's what you want.
If negation is not used as an index, it becomes all too easy to paper over the in-principle difference between the analog and the digital by appealing to limit-procedures which simply granulize the digital, like the ones suggested by TGW previously. But no limit procedure, no amount of granulation can account for the irreflexivity of the analog, which is without negation. A corollary of this is that digital languages, thanks to their reflexivity, can represent things, while analog languages cannot. Analog communication is at best iconic or indexical, but never symbolic, which belongs by right to the digital alone.
Now, the interesting question that has been raised a few times - and that I've avoided talking about - has to do with the status of the boundary itself. Does it belong to the continuum itself, or does it belong to the instituted digital system? The answer can only be that the boundary belongs to neither. It cannot belong to the continuum, because if it did, the continuum would be already-digitized; on the other hand, it cannot belong to the digital system because it is the very condition by which the digital is instituted. Like Russell's barber who both shaves and does not shave himself, the boundary's status is constitutively undecidable.
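The barber's undecidability can even be exhibited mechanically. The toy formalization below (my own, not from Wilden) encodes "the barber shaves exactly those who do not shave themselves" and then asks whether the barber shaves himself: whichever answer we assume, the rule forces its negation:

```python
def barber_shaves(person, shaves_self):
    """Russell's barber: he shaves x if and only if x does not shave himself.
    `shaves_self` is a predicate answering 'does this person shave himself?'"""
    return not shaves_self(person)

# Try both candidate answers to 'does the barber shave himself?' and
# check each against the rule: neither assumption is stable.
for assumption in (True, False):
    verdict = barber_shaves("barber", lambda p: assumption)
    print(assumption, "->", verdict)   # each assumption yields its negation
```

The loop prints `True -> False` and `False -> True`: the question has no consistent answer within the system, which is the sense of 'constitutively undecidable' at issue.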
Wilden: "It is impossible to decide whether [the boundary] belongs to the set A or the set non-A. It belongs to neither, it is both neither and nowhere, and it corresponds to nothing in the real world whatsoever". The reason for this undecidability, of course, has to do with the paradoxes generated by self-inclusion: if the digital, in constituting itself as a continuum subject to recursion and reflexivity, is to be consistent, it must forgo completeness (the status of the boundary cannot be decided 'within' the system, on pain of inconsistency). As Paul Livingston puts it in his discussion of Graham Priest's dialetheic logic, this is the 'choice' that every formal system must make, of necessity:
"In facing up to the paradoxes of self-reference, formal thought thus defines a fundamental choice: either consistency with incompleteness (and hence the prohibition of total self-reference, and the regress into an open iterative hierarchy of metalanguages) or completeness with inconsistency (and hence reference to paradoxical totalities). On the level of formal languages and systems, taken simply as neutral objects of description, either of these choices is evidently a possibility; we can save the consistency of our systems by ascending up the hierarchy of metalanguages or, as Priest suggests, we can model inconsistency within self-contained formal languages by means of what he calls a dialetheic logic, one that tolerates contradictions in certain cases" (Livingston, The Politics of Logic).
Of course, these are the choices that must be faced 'within' the digital itself. From the perspective of the analog, which is without negation, and to which the laws of identity and the excluded middle do not apply, these forced choices are inapplicable. Wilden himself refers to Gödel's results to make the same point: "every consistent deductive system will generate Gödelian sentences which we know to be true but which cannot be demonstrated within the system. And a system of meta-axioms will engender a meta-sentence, and so on ad infinitum. This implies that all human communication, including mathematics and logic, is an open system which can be subject to closure only for methodological reasons. The problem of the punctuation of the analog by the digital is irresolvable for humankind." (System and Structure, my emphasis). In other words, the irresolvable paradoxes of the digital are a symptom of its always being too 'loose' to 'fit' the continuum of the analog.
You may be missing the force of the argument, StreetlightX. "The analog", continuum, or whatever you wish to call it, is the very same as the "thing-in-itself", in the sense that its existence is simply assumed. We experience the appearance of some sort of continuity within the world, so we assume a continuum to account for this appearance, just as we experience the existence of real substance in the world and assume the thing-in-itself.
Accordingly, anything you might say about this analog existence, this continuum, is based only in this assumption. So in order to say anything true about the continuum, your assumption of a real existing continuum must be first validated, justified. Only by validating this assumption does the nature of the continuum become intelligible. To simply assume a continuum, and say that it is of an analog nature, and completely other than the digital, is just an assumption which is completely unjustified, until it is demonstrated why this is assumed to be the case.
Now we must start without the assumption of an analogue continuum, and justify this assumption. We cannot start with the assumption of a continuum, with all the connotations of meaning (identity) which go along with such an assumption, we must demonstrate the need for this assumption, and this demonstration will expose the character of this so-called continuum. In other words, rather than assuming a "continuum", or "analog" existence, with all the features of identity associated with those words, we must bring this thing which we are trying to describe, into focus, such that we can accurately describe it.
Quoting StreetlightX
To begin with, this is self-contradictory. If a continuum could distinguish a part of itself from itself, it could not be a continuum. A true continuum would not give any principles for making such a distinction. And if an arbitrary distinction was made, even the points of boundary would consist of something other than the continuum itself, so it is impossible that a continuum itself is distinguishing a part of itself.
Quoting StreetlightX
And this approach cannot be satisfactory. The discontinuous is what we can know, the is and is not, so to describe the continuous as that which is opposed to the discontinuous is to employ negation and the tools of logic. But the continuous has already been noted to defy such rules of logic and negation. We cannot use such logical principles to describe the continuous. The existence of the continuous is assumed, based on our experience of living and sensing, so this is what we must refer to in our description of the continuous.
In our living experience, we observe two types of boundaries, spatial boundaries between existing things, and a temporal boundary between the future and past. If one, or both of these boundaries appears to be unreal, then we have reason to assume continuity. Spatial boundaries, between individual entities appear to be real, but the temporal boundary between past and future may not be real, and it is this lack of a real boundary in time, which drives the need to assume continuity.
But there are very real difficulties here. As much as time appears to be a continuum, without any real boundaries, our experience also indicates to us that the boundary between past and future is very real. Time appears to be a continuum, but it also appears to have a real boundary between past and future.
Quoting StreetlightX
The boundary's status is not specifically undecidable, its appearance is paradoxical, and this is what makes it seem to be undecidable. It is paradoxical because the continuum presents itself to us as essentially indivisible, continuous, but, as constituted with a boundary. The way to avoid the paradox is to understand the continuum as the boundary itself. But this makes the continuum a real identifiable entity, a boundary.
The semiotic relation is triadic. And this insertion of an extra step - an epistemic cut - is what gets you past this kind of problem.
So the analog thing-in-itself is vague. It only comes to be called a continuum in crisp distinction to the digital or the discrete within the realm of symbolisation or signification. It is a logical step to insist the world must be divided into A and not-A in this fashion. And then in forming this strong, metaphysical-strength, dichotomy of possibility, it can be used as a theory by which pragmatically to measure reality. We can form the counterfactually-framed belief that reality must be either discrete or continuous, digital or analog, and then test reality against this self-describing theory.
So the situation is the reverse of the one you paint. We don't need to begin in certainty. Instead - as Peirce and Popper argued with abductive reasoning, as Goedel, Von Neumann and others demonstrated with symbolic reflexivity in general - it can all start with a reasonable guess. We can always divide uncertainty towards two dialectically self-grounding global possibilities. The thing-in-itself must be either (in the limit) discrete or continuous. And then having constructed such a sharply dichotomised state of metaphysical certainty - a logical either/or - we have the solid ground we need to begin to measure reality against that idea of its true nature. Pragmatically, we can go on to discover how true our reasoned guess seems.
And in Kantian fashion, we never of course grasp the thing-in-itself. That remains formally vague. But the epistemic cut now renders the thing-in-itself as a digitised system of signs. We know it via the measurements that come to stand for it within a framework of theory. And in some sense this system of signs works and so endures. It is a memory of our past that is certain enough to predict our futures.
So the assumptions here begin in a discussion of existential possibility. If anything exists - in the spatiotemporally-extended sense that we think of as "the world" - then metaphysical logic says there are two options, two extremum principles, when it comes to how that world has definite being. Either it must be continuous or discrete, connected or divided, integrated or differentiated, relational or atomistic, morphism or structure, flux or stasis, etc, etc - all the different ways at getting at essentially the same distinction when it comes to extended being.
And having identified two complementary limits on being - terms that are logically self-grounding because they are seen to be both mutually-exclusive and jointly-exhaustive - we can be as certain of anything we can be that reality, the vague thing-in-itself, must fall somewhere between the two metaphysical-limits thus defined. Exactly where on this now crisply-defined spectrum is what becomes the subject of measurement.
Note that this dichotomy itself encodes both the digital and the continuous in being like a line segment - a continuous line marked by two opposing end-points.
So anyway, the very idea of the analog~digital is based on the more primal dichotomy of the continuous~discrete - a way of talking about reality in general. But with the analog~digital, we are now drawing attention to the general semiotic matter~symbol dichotomy - the step up in material complexity represented by life and mind.
The analog~digital dichotomy has sprung up in computation and information theory as an ontological basis for a technology - an ontology for constructing machines rather than growing organisms. And yet, in retrospective fashion, it has now become a sharper way of getting at the essence of what life and mind are about - the semiotic modelling relation that organisms have with worlds. The analogy of the code is very useful - not least because it brings so much maths with it.
But in a sense, the analog~digital dichotomy also overshoots its mark. It leads to the idea that modeler and modeled actually are broken apart in dualistic fashion - like hardware and software. And this leads to the breakdown in understanding here - the questions about how a continuous world can be digitally marked unless it is somehow already tacitly marked in that fashion.
So once we start to talk about the Kantian "modeler in the world", the first step is to make this essential break - this epistemic cut - of seeing it as the rise of the digital within the analog. Material events gain the power of being symbolic acts. But then we must go on to arrive at a fully triadic model of the modeling relation. And so attention returns to the middle thing which is the informal acts of measurement that a model must make to connect with its world.
This is the focus of modern biosemioticians like Pattee, Rosen, Salthe and many others like Bateson, Wilden, Spencer-Brown, and so on. What is it that properly constitutes a measurement? What is it that defines a difference that makes a difference?
I think it's worth noting that the continuum, just as much as digital logic, exists only by virtue of thought. Thought enables the coming-to-be of parts, and that coming-to-be consists in the self-distinguishing of the part from the whole, as much as it does the self-distinguishing of the whole from the part.
Quoting StreetlightX
Do you take the question you are indicating here, when it speaks of "belonging", to be asking about the location of the boundary, in the general sense of its position within ontological space? This is what, it seemed to me, TGW was asking. I think the question is based on a category mistake: I think that boundaries, whether analogue or digital, are logical, not ontological. Imprecise boundaries certainly belong to the continuum, insofar as they are intelligible within its logical space. Think of animal territories, for example. But precise boundaries belong only to the digital, as I said before: Quoting John [Emphasis added]
That's right. But then there is still the issue of how they can be imposed on the world - the issue of human measurement.
And then - where this gets radically metaphysical - there is the post-quantum issue of measurement in general.
So through semiotics, we come to explain human understanding of the world as a triadic sign relation. And then it now seems as though the world itself is ontically pan-semiotic - a system that self-referentially measures itself into being in some concrete sense. The universe has to observe itself to "collapse the wavefunction" and have a digitally-crisp, atomistic, mechanically-determined, state of being.
Of course we then call that classical world, that realm of continuous Newtonian dynamics, our analog reality in contrast with the digitality of our symbolic representations of that world.
But quantum theory has re-introduced the basic metaphysical dichotomy - is existence continuous or discrete (or indeed, beyond that, indeterministic)? - at base.
So we know how in epistemic fashion we impose intelligible order on the world in a way that makes it pragmatically measurable. But even while arriving at a fully working theory of that - as in biosemiosis - up pops the holographic bound in fundamental physics and other pansemiotic questions about how the Universe solves its own measurement problem. Where does it stand so as to resolve its own indeterminacy in globally self-referential fashion?
Given this seems to be a debate about Analytic metaphysics vs PoMo metaphysics, as usual I would say only Pragmatic metaphysics has the proper resources to answer these kinds of questions properly. :)
Yes, I guess that is the big question. But even if the real turned out to be (that is if we could know without question that it definitely was) discrete, is it reasonable to think that discreteness could consist in absolutely precise boundaries between the fundamental units? That would seem to evoke Leibniz' Monadology.
Quoting apokrisis
Perhaps all three have their different places and functions in the 'grand scheme'? I'm guessing though, that you see the other two as being subsumed and augmented by pragmatic metaphysics?
;)
Starting to sound a little mystical here. The snake turns and meets its tail? At the point of meeting, the one becomes the two.
Quoting StreetlightX
The boundary is a line or plane for spatial boundaries. It's an idea. I pondered this a long time ago... the origin of the concept of negation. Maybe it comes from craving and aversion.
Or from the possibility of different actions? I think of the fully fledged symbolic logic of negation and opposition as being prefigured in the less distinct analogical conceptions of exclusion and absence. But then these distinctions also seem to be expressed in the language of logic, or the logic of language, as two kinds of not-X: the weaker 'not-X' of exclusion and absence and the stronger 'not-X' of negation and opposition.
Yes, the continuum comes to be called such to distinguish it from the digital or discrete, but this does not imply that these are properly opposed. What must be respected is that 'different from digital' does not mean the opposite, or the negation, of digital. So the analog, or continuum, may be different from the digital in the same way that colour is different from red. Therefore this "logical step" is not a valid logical step at all. We cannot assume a proper A and not-A relation between the analog and the digital.
Quoting apokrisis
I do not claim that we need to start in certainty; this is more like what you imply. You imply that if a thing is different from A you can establish the logical certainty of not-A of that thing, but this is not the case. If the thing is said to be different from red, we might still be talking about colour, and it would be false to characterize colour as not-red, because colour includes red. What I said is that we have to get an idea of what this thing, the continuum, is, by looking directly at the thing and describing it. Saying what it is not will never tell us what it is.
Quoting apokrisis
So this is the mistake: these two, discrete and continuous, are not properly opposed and therefore are not mutually exclusive, as you imply. We have discrete colours, red, yellow, green, blue, within a continuous spectrum.
There is a relation of exclusion involved here, but (as others have alluded to) it's the exclusion of contrariety rather than contradiction. So for example, red and green are contraries whereas red and not-red are contradictories. The former is associated with "material" negation, the latter with "formal" negation. Interestingly, formal negation can be defined in terms of material negation: not-red is just the set of all of red's contraries, etc.
Similarly, the sense in which the magnitude at point A is not the magnitude at point B (within the context of a gradient) also appears to be that of material negation. Having a psi of 40 at point A is materially incompatible with simultaneously having a psi of 50 at point A, and in that sense the former excludes the latter (and vice versa). Crucially, the magnitude at A is not the magnitude at B quite regardless of the activities or even the existence of ens vitae.
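The contrast between material and formal negation above can be sketched in a few lines of code. This is purely illustrative and nothing from the thread: the toy colour space and the `contraries` helper are my own assumptions, standing in for the idea that formal negation ("not-red") is just the set of red's contraries.

```python
# Illustrative sketch only: a toy colour space (an assumption,
# not from the discussion) used to model material vs formal negation.
hues = {"red", "green", "blue", "yellow"}

def contraries(colour, space=hues):
    """Material negation: the hues that materially exclude `colour`."""
    return space - {colour}

# Formal negation defined in terms of material negation:
# not-red is just the set of all of red's contraries.
not_red = contraries("red")
```

On this toy model, `not_red` comes out as the set {"green", "blue", "yellow"}: a contradictory built out of contraries, as the post suggests.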
But in another sense, I do agree with you. The building of digital systems that depend upon formal negation still involves the "artificial imposition" of boundaries on natural continuums. So a digital computer leverages material differences in voltages as a foundation for binary computation (2V = "true", 5V = "false"). My point was simply (and hopefully uncontroversially) that nature provides the "raw materials" that make the imposition of binary distinctions possible in the first place. If it didn't - if there were no materially exclusive differences already within nature to leverage - then the emergence of binary systems could never have occurred.
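The voltage example can be made concrete with a minimal sketch. Everything here is assumed for illustration: the 3.5V cutoff is an arbitrary midpoint threshold of my choosing, and the 2V = "true" / 5V = "false" assignment simply follows the post above.

```python
def digitize(voltage, threshold=3.5):
    """Impose a binary ('either/or') distinction on a continuous
    voltage reading: below the assumed threshold counts as the
    2V 'true' state, above it as the 5V 'false' state."""
    return voltage < threshold

# A continuously varying (noisy) analog signal...
samples = [2.1, 1.9, 4.8, 5.2, 2.0]
# ...becomes a crisp sequence of binary symbols:
bits = [digitize(v) for v in samples]
```

The material voltage differences do the work, but the boundary itself (the threshold) is imposed rather than found - exactly the "artificial imposition" at issue in the exchange.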
Are you just wanting to say that the potential for the digital is inherent in the analogical continuum and that the apparently binary nature of the distinction between digital and analogue is itself a hypostatization?
Yep, more or less.
Yes, that's a better way of putting it; precisely the expression I was groping for.
The material difference in voltage isn't used to construct that binary distinction though, for the distinction is its own state, not something that necessarily follows from the presence of material voltage.
We might say that the "raw materials" give relevance to binary distinctions; 2V = "true" and 5V = "false" are relevant when talking about voltage. We can use them to achieve something we want in the world. Not true of the 2V/5V distinction if we are talking about the taste of a cake.
The "raw materials" aren't leveraged to form the binary distinction. We don't distinguish 2V/5V by looking at the "raw materials." The distinction itself is a first principle category which we then relate to the "raw materials."
Quoting StreetlightX
I'm not so sure about this. Certainly there can be analog and digital measurements, but ultimately what exists at the present is what I believe apokrisis calls "crispness" - the vague becomes the discrete, or the digital. Digital corresponds to certainty, analog to uncertainty or vagueness.
The properties that leech on their objects would be identical to themselves. To be 4.15678 g just is to be 4.15678 g. The fact that the object we are weighing measures at 4.15678 g means it has a discrete amount of mass associated with it. There is a fundamental reason why an object is a certain way - say, 4.15678 g. It's not arbitrary; there are discrete properties of objects.
Furthermore, analog systems inherently have digital parts anyway, they just aren't computational. Additionally, the way I understand it, analog systems are not so much a separate kind of thing as they are a less discrete digitalization. Instead of binary 0's and 1's, you have a much larger range of outputs - but just like in an analog clock, these outputs are restricted. An analog clock can only represent certain intervals. The more we narrow down our constraints, the less able we are to maneuver: there are many different kinds of stars, but there are only two biological and fertile sexes. There are a gazillion species of animals, but there are only three naturally occurring isotopes of carbon.
Isn't the difference between an analog and a digital system a digitalization anyway? Either/or you are analog or digital...
Is a heuristic identical-to-itself?
Sure. The road not taken.
Ideally, yes. Some effort was being made to show analog as being primal or primary.
If electrons can be either waves or particles, I'm not sure there is a primary.
As usual, we stand imperceptibly close on some issues, and unbridgeably far on others. While I agree with the thrust of your post, you continue to hold a very narrow view of knowledge as digital, when your own comments ought to disabuse you of this notion. As I commented on elsewhere in reply to Pierre, the analog is not some kind of unknowable 'thing-in-itself' which is simply 'vague'; the analog has qualities which are knowable, but simply in a different mode than that of the digital. If the digital is composed by (stark/crisp/extensive) differences defined by negation, the analog is composed by (non-denotative) relational differences of intensity: "differences in magnitude, frequency, distribution, pattern and organization" (Wilden).
Bateson himself speaks of how analog communication works "by means of kinesthetic and paralinguistic signals, such as bodily movements, involuntary tensions of voluntary muscles, changes of facial expression, hesitations, shifts in tempo of speech or movement, overtones of the voice, and irregularities of respiration. If you want to know what the bark of a dog "means," you look at his lips, the hair on the back of his neck, his tail, and so on. These "expressive" parts of his body tell you at what object of the environment he is barking, and what patterns of relationship to that object he is likely to follow in the next few seconds. Above all, you look at his sense organs: his eyes, his ears, and his nose" (Steps To An Ecology of Mind)
None of these things are noumenal 'things-in-themselves' which stand on the other side of knowledge. They are simply of a different order of knowledge, one relating to sensual movements of and in space and time: aesthetic knowledge. At its base, this is what 'aesthetic' means: relating to space and time, as with Kant's 'transcendental aesthetic'.
Gilles Deleuze is the philosopher who has perhaps attended to the specificity of analog differences with the most care, referring to them as differences of 'intensity' as opposed to digital differences of 'extensity', noting how the former necessarily underlie the latter: "Every diversity [read: identity - SX] and every change refers to a[n analog] difference which is its sufficient reason. Everything which happens and everything which appears is correlated with orders of differences: differences of level, temperature, pressure, tension, potential, difference of intensity ... The expression 'difference of intensity' is a tautology ... Every intensity is differential, by itself a difference. ... Each intensity is already a coupling (in which each element of the couple refers in turn to couples of elements of another order), thereby revealing the properly qualitative content of quantity. ... Difference or intensity (difference of intensity) is the sufficient reason of all phenomena, the condition of that which appears." (Difference and Repetition).
So again, to turn back to our eternal debate, any metaphysics based on modeling relations - itself premised on discrete, digital knowledge - is derivative of a more primal aesthetic ground out of which it is born. Elsewhere Deleuze will note that all representation - as with models - depends on 'sub-representative dynamisms' which are nothing other than intensive (non-conceptual) differences: "No concept would receive a logical division in representation, if this division was not determined by sub-representative dynamisms ... These dynamisms always presuppose a field in which they are produced, outside of which they would not be produced. This field is intensive, which is to say it implies a distribution in depth of differences in intensity ... the concept would never divide or specify itself in the world of representation without the dramatic dynamisms which determine it in this way in a material system beneath all possible representation." (The Method of Dramatization).
I don't see how this follows. To separate "one piece of a continuum from another" is not necessarily to treat it as digital; but simply to treat it as diverse. The continuum, to be susceptible of being thought even analogically must obviously be diverse, right? In the continuum of spacetime there are different galaxies, but there do not seem to be any precise digitally conceivable boundaries between those galaxies.
Well quantum theory says reality is fundamentally uncertain - so fundamentally vague. The discrete and the continuous would then be emergent in being the crisply complementary limits on that basic indeterminism. So it is not a question of whether reality is particle-like or wave-like. Instead those are the bounding alternatives. And which you see becomes a point of view. The observer conjures up the wave or the particle, depending on the type of measurement he chooses.
So if this is the actual ontology of the world, then it is only reasonable that it is reflected in our ideas about logic too. A deep logic is going to go beyond emergent features like continuous and discrete to connect with the indeterministic or vague.
Quoting John
The grand scheme of pragmatism is triadic. So logic has three levels - firstness, secondness and thirdness. Or to talk about it more psychologically, the three things of pure monistic quality, the dyadic thing of a reaction or relation, and the third thing of mediation or habit - a hierarchically structured relation in which a memory becomes the context generally shaping events.
Now the familiar model of logic - as encoded in the laws of thought - is all about secondness or dyadic relations between particular things and events. It is a logic of the particular, in short. It presumes the world already exists as a crisp state of affairs, a set of individuated facts. And it takes a nominalist view on abstracta or laws or any other kinds of transcendent regularity.
It is this logic of the particular that AP-types instinctively seize on to do any metaphysics. You see that in TGW and his furrowed brow when modal logic gets challenged. The only logic that computes is the stuff which ordinary logic courses spend all their time teaching - the logic that is splendidly mechanical and a valued tool in a society that values the making of machines.
Then we have the larger logic of Pragmatism which comes out of the long tradition of organicism and holism. This now adds in a logic of vagueness and a logic of dialectics or symmetry breaking - the firstness and thirdness in Peirce's scheme. (He also distinguished these two categories as the complementary principles of "tychism" - or absolute chance - and "synechism", or generalised continuity.)
So now we have a logic founded in vagueness or indeterminism. Nature creatively sports possibilities. Already the principle of sufficient reason - an axiom of ordinary logic - is denied. Fluctuations can happen without limit.
But then that unbounded and chaotic firstness contains within it the seeds of its own self-regulation. Because while indeed "everything can happen", everything that is then contradictory is going to cancel itself out. So just in trying to be completely chaotic, already firstness is on the way to being self-limiting. And anyone who knows quantum field theory will recognise Feynman's path integral or sum-over-histories logic at work here. This isn't some bit of wild-eyed metaphysics. It is exactly how physics has come to make sense of the world in the past 50 years.
Then we go to the other thing of the dialectic or the dichotomy. This is now a logic of generality. This is how we reason to extract the plausible limits on existence itself. So as with this thread, as we abstract away the particulars, that leaves always the duality of thesis and antithesis - two possible extremum principles, both of which seem equally "true".
So the LEM is for reasoning about particulars. An individuated thing or event has to be logically one thing or another. If it is A, then it is not not-A, and vice versa. Negation seems fundamental in this context. You have to reduce reality to descriptive binaries - and then hold one of the two options true, the other false. And as I say, as a model of secondness, a logic of particulars, it works really well. It makes for splendid machines. And even societies that think and act like machines.
But then the dichotomy is the basis for a logic of generality. Now - following its rules requiring a separation of vague possibility into crisp actuality via a dichotomising process of mutual exclusion/collective exhaustion - we always will arrive at complementary poles on being. We have two alternatives - and both must be "true" in the sense of being ultimate bounds on possibility.
You can head towards the two poles of "the discrete" and the "continuous", but you could never go past them - as how can the discrete be more discrete than the discrete? And you never really leave either behind, as the only way to know you are headed towards discreteness is that it is measurable - plainly visible - that you are still headed away from continuity. And vice versa. So formally, mathematically, the dichotomy encodes the asymmetry of a complete symmetry-breaking. It describes a reciprocal or inverse relation where the way to make one end bigger (or truer, more dominant, more real, more fundamental) is to make the other end smaller.
We see this in familiar things like infinities and their reciprocal, the infinitesimal. What is the number line except an infinity of infinitesimals? That is why a number line can be both continuous and discrete at the same time - unlimitedly countable. It encodes an uncertainty relation at its base. Neither the continuous nor the discrete is fundamental; both are merely emergent. It is the idea of the infinitesimal difference that reciprocally allows the construction of the unboundedly continuous (when it comes to counting). The infinitesimal = 1/infinity, and vice versa.
So metaphysics got going when it discovered this logic of generality or dialectical reasoning. Ancient Greece spilled out a whole set of logical dichotomies that underpin pretty much all of the science and thought that has happened ever since.
Now PoMo - showing the Hegelian roots of its Marxist leanings - has flirted a lot with this dialectical reasoning. So at least it knows about it. But mostly it uses dialectics to generate a play of paradox. It points out that two opposite things always seem true about nature. However instead of saying, well yes of course, and that is what leads on to the Peircean thirdness of habit or hierarchical organisation, it treats that fact as some source of deep ambivalence. PoMo is - politically - anti-hierarchical. And so it prefers to conclude that the inevitability of dichotomies is instead a sign that we should somehow return to the vague source of things - the radical uncertainty in which things would be again freest.
It might sound like it is a good thing to return to vagueness like this. But it isn't true vagueness - PoMo just doesn't have a tradition in that regard. Instead it is just another version of AP's notion of existence as an essentially random collection of events, a state of affairs composed of already individuated being.
OK, PoMo does have some concerns about how individuation comes about in fact. But it has no logic of vagueness to work with. Its grasp of logic on the whole is sketchy and not central to its concerns. It actually quite likes the idea of Romantic irrationality as its alternative to the patently mechanical mindset of AP.
So that is why I say Pragmatism is the only brand of metaphysics that both does pursue logic with rigour and has a large enough model of logic to talk about the whole of existence.
AP has tunnel vision. It only wants to apply the logic of the particular with sterile relentlessness. PoMo has ADHD. It is all over the shop as to what logic really is. Only Pragmatism (as defined by Peirce) uses a formally holistic logic that comprises three elements in interaction - a logic of vagueness, a logic of particularity, and a logic of generality.
Though of course Peirce wasn't the end of the story. He was only a solid beginning. Our ideas about symmetry-breaking and hierarchy theory are much more mathematically developed these days. And quantum theory is rubbing our noses in the reality of indeterminism. So we can be a lot sharper about defining both vagueness and generality now.
I agree that analog~digital is probably not a proper dichotomy. They are terms that arose early on in the development of signalling technology. And so it is a little blurry whether analog - in being iconic, a direct representation of its material source - is the opposite of symbolic, or merely proto-symbolic.
Retrospectively, we could tidy this up and find a way to define digital as 1/analog, and analog as 1/digital. But really, that is a reason I would rarely talk about analog and digital as a crucial metaphysical distinction. Discrete and continuous is simpler to understand as a rigorous dichotomy. Likewise matter and symbol. But analog~digital is a little ambiguous in comparison.
Quoting Metaphysician Undercover
Correct. I say it is essential to by-pass uncertainty and begin with a confident positive assertion - just state an axiom or premise which has the logical form of the LEM. But that positive start is what you then seek to test. Does the guess work out in fact?
Quoting Metaphysician Undercover
Given colour experience is the most unreal of mental constructions, this example is already off to the worst possible start.
The world is not coloured red, yellow, green or blue, nor any mix of these primary hues. That much we know from basic psychophysics.
But the measure of psi is still not 'intrinsic' to the notion of pressure gradient; it is still a digital model of the analog; this has nothing to do with the measure of psi per se, but simply with the fact that it is a numerical measure to begin with. Recall that the whole number line is generated by utilizing 'zero' as a rule (that is, a boundary) to distinguish between integers; as per Frege, one begins by defining zero as the 'object under which the concept 'not equal to itself' falls', and then from there, generates the whole of the number line:
"Zero is implicitly defined as a meta-integer, and indeed its definition is what provides the rule for the series of integers which follow it ... [Number] depends upon the distinction between 0 as an object falling under a concept, and 0 as the number belonging to a concept. All that needs to be established is that zero is not simply a number as such, but a rule for a relation between integers. The number which belongs to the concept 'identical with 0' is also the interval or gap between the integers (the number one). Thus a? (but not 0°, which equals 0) is arbitrarily defined as 1, because it is the boundary between a¹ and a?¹." (System and Structure)
So the measure of psi - as a measure - is not intrinsic to the analog gradient that is a pressure gradient. While I appreciate that the two measures of psi at different points of a pressure gradient may stand in a relation of contrariety rather than contradiction, not even contrariety is, strictly speaking, an analog value. Hence Deleuze: "It is difference in intensity, not contrariety in quality, which constitutes the being 'of' the sensible. Qualitative contrariety is only the reflection of the intense, a reflection which betrays it by explicating it in extensity. It is intensity or difference in intensity which constitutes the peculiar limit of sensibility" (Difference and Repetition).
See my reply above to Apo - the analog is not just uncertainty and vagueness. It has specific properties of its own defined primarily by relationality.
As I emphasized previously, the difference between continuity and discontinuity is indexed by negation, and by implication, self-reflexivity. No amount of fine-graining of the digital will allow it to lapse over into the analog. The difference is a difference in kind, and in principle, not just one of degree.
I don't think this is quite right. As a matter of principle, it trades too heavily on some kind of thought/world duality which I think is unsustainable, if not mystical. As far as the facts go, however, digitality is present at the level of things like gene expression and axon function in nerve cells, and requires no 'thought' in order to function. What matters in both cases is not thought but behaviour, or better, semiotic function. At this level - the level of life - behaviour is regulated by feedback, which means that causal processes operate as sign-relations: whether or not an axon will propagate a signal, for example, depends on the (chemical) feedback it receives from its environment; as such, signals operate not as efficient causes but as signs.
To use another example, consider a tree-line that doubles as a national border. The tree-line does not 'cause' you to avoid crossing it without the proper documentation, but it stands as a sign which regulates your behaviour nonetheless. If everyone were to ignore national borders, the tree-line would no longer stand as a sign. But again, what is crucial here is not thought but behaviour: you can't 'think away' the fact that the tree-line is a border, and even if you do, you will (possibly) suffer the real-life consequences of being caught if you cross it illegally. The institution of the digital is a result of a decision, but the status of this decision is not 'in thought' so much as it is 'in practice' (which is why you can't 'find it' anywhere in particular). One must remain a materialist about these things.
I didn't say the analog equates to the vague, so your reply is mostly off the point.
I said to call the thing-in-itself anything is to take a theoretical stance. And so it is the epistemic (not ontic) vagueness that we aim to pierce here. And the way we pierce it is by forming up some robust dichotomy as our best guess as to what could be the case. Doing this employing a dichotomy ensures that whatever is the case in regard to the thing-in-itself, it must logically lie somewhere within the limits we have thus rigorously defined.
And so one such dichotomous inquiry might be to ask is the thing-in-itself continuous or discrete (or in your less crisp lingo, analog or digital)?
So again, in explaining the epistemic cut to MU, I was talking epistemology rather than ontology (the clue was in the "epistemic cut").
Quoting StreetlightX
Yes. So iconic or indexical rather than symbolic. But as I said, I don't think you are working with a well defined dichotomy in talking about analog vs digital. They are not a reciprocal pairing in the way a proper dichotomy like discrete and continuous is. That is why you want to call them contrasting modes or levels of communication or representation. There is some fudging going on there that makes for weak metaphysics.
Of course you could always pause to examine this point, tidy it up.
Quoting StreetlightX
Whoa! Is that really what you have been meaning by "aesthetic"? Forgive me for thinking you were using it in the more usual sense... viz:
Quoting StreetlightX
Yeah. That passage reads like gibberish to me so you might have to put it into your own words.
I understand what intensive and extensive properties mean in the standard physics context. I completely don't get your attempts to argue that they somehow reflect an analog~digital distinction.
How are energy or volume "digital" and not physically continuous in their extensibility?
How are bulk properties like melting point and density "analog" when they have a value that doesn't change in continuous fashion?
And how do intensive properties underlie extensive properties when instead an intensive property is formed by the ratio of two extensive properties (as in density being a ratio of mass and volume)? Is a ratio more basic than that which composes it?
This is some baffling shit here.
I can see how you/Deleuze might be driving at a substantialist ontology - one that takes existence to be rooted in the definiteness of material being. And so the inherent properties of substance would seem more fundamental than the relational ones.
But that is quite a different kind of ontology to a triadic process one where - as in hylomorphism - formal constraints shape up or individuate material potential so as to produce the middle-ground actuality we know as substantial being.
Quoting StreetlightX
Again, this is unscientific horseshit. By its very definition, an intensive property is constant throughout the substance in which it is said to inhere. It can't vary in intensity without some further reason to make it so - what I would call a further informational constraint, and which you would thus have to call "discrete/digital knowledge" in the position you are advancing.
Quoting StreetlightX
Again this seems a weird definition of aesthetic. Even if we go now with this being a reference to necessary Kantian intuitions about Euclidean space, I think most agree that Kant screwed this bit up. And it is hardly primal, or non-conceptual/non-digital, to project onto space the idea that it is flat and infinite in a dimensioned, countable, Euclidean-maths way.
Psychology shows that we do dichotomise spatial relations in a fairly primal and inductively-learnt fashion - a posteriori knowledge. We learn that everything we see is generally large when it is close at hand and small when it is far away .... if it is the kind of thing with a normally constant size. And spatial distance in turn relates to time and energy. If it looks close, we can probably get to it quite soon with not too much effort.
So an embodied sense of being in the world is built up from these kinds of exploratory learnings. They are the dichotomies of experience rather than the antinomies of pure reason. :)
So I agree with your general urge to take an enactive or embodied approach to epistemology here. Biosemiosis is indeed foundational to linguistic or mathematical semiosis. And a lot of philosophy does go in the other direction in presuming a physics-free disembodied rationality. That is why computers seem so ... deep ... to so many. They are disembodied rationality, pure syntax, brain in a vat digitalism, personified.
And I get the general thrust of what you mean about the digital distinction. Biologists are embracing Peircean semiotics because it gets at the basis of how - in Pattee's words - rate independent information (a digital code/memory) can constrain rate dependent dynamics (the Newtonian realm of "analog" or continuously state-determined material processes).
So these are the important points. Dissipative structure can be regulated by a machinery of memory. And this is how bodies are formed, individuals are individuated, autonomy arises.
But Deleuze seems mostly to be mangled Prigogine. And Prigogine, while a genius, was also working at the level of rate dependent dynamics. He wasn't about the larger semiotic story of the epistemic cut and rate independent information. So to make Prigogine your departure point is - as with autopoiesis or dynamical systems theory - to strike out with only half the whole story.
In this example, the psi provides the material content, while the value assigned to it is what has changed. "Psi" is unchanged, therefore providing us with a continuity between point A and point B. We are talking about the same thing at point A as at point B, but something has changed about that thing, such that we have to give it a different value.
Now, we could deny the continuity, and say that point A and point B are completely different instances which are being compared. If someone is to say that "the same thing" is being measured at point A as is being measured at point B, this claim needs to be justified. So the concept of psi has to be exposed, and analyzed, to determine whether we are actually measuring "the same thing" at A and at B. If there is not something real, which "psi" refers to, there is no continuity, and the claim that we are talking about the same thing having a different value at point A from point B, is false. It is an unjustified assumption.
We can see this issue in modern physics with the concept of "energy". As an attribute of an object, energy can pass from one object to another. This object has energy, and the energy it has may be transferred to another object. Someone might claim, that after transferral, it is the same energy now in the second object, as was in the first object. But this gives identity to the energy itself, and the energy must have identity in order that we can claim "the same energy", a continuity between the energy being in the first object and then in the second object. Under this assumption now, the energy is not a property of a thing, but is an identified thing itself.
We are faced with a metaphysical dilemma. We may hold fast to fundamental ontological principles, and say that energy is a property of things, and can only exist as attributed to a thing. In this case, we must face the problem of how the energy appears to transmit, or radiate from one object to another. The continuity between the first and second object has been denied as an unjustified assumption, by restricting the existence of energy to being a property of an object. Now we must look for another mechanism by which the energy transmits. The other possibility, is the route taken in physics, we allow that energy itself is an identifiable thing. Now the continuity is justified by this assumption, that energy is a thing itself. But the concept of energy now needs to be exposed, and analyzed, just like the concept of psi above, in order to determine whether "energy" actually refers to a real thing, and not just a property of things, and then the continuity would be justified.
Such a difference implies a necessary underlying continuity, sameness, and this is the underlying analog principle. My argument is that this underlying continuity is simply assumed, deemed necessary in order to make difference intelligible, and therefore assumed. Any such assumption needs to be justified.
Quoting StreetlightX
So the assumption that there is an underlying analog difference, as "sufficient reason", must be justified. We cannot just say "it must be so or else the world is an illusion"; the reason why the world is not an illusion must itself be made intelligible.
Quoting apokrisis
The problem is, that the pole of "continuous" is an arbitrarily posted pole. It is simply assumed. Therefore the thing which has been designated as continuous may prove to actually be discrete. Then, if it still appears necessary to assume a "continuous", a new post is set, so we are in fact, always going past the pole of "continuous".
Au contraire mon ami, the reference to the discrete and the continuous means nothing without the index of negation and reflexivity which quite precisely defines the distinction between the analog and the digital.
Quoting apokrisis
Consider yourself forgiven; we all begin somewhere, even if that somewhere is Wikipedia. But of course aesthetics in the broader sense has to do with the shaping of space and time; rhythm, form, and everything that belongs to the realm of sensibility in general. From the Greek aisthēsis, that which relates to the sensible; the vulgar sense of the aesthetic as relating to art and the beautiful being a particularly modern, restricted and derivative sense of the term.
Quoting apokrisis
Except you're wrong; the whole point is that 'substances' are differentially engendered. It's process all the way down; though perhaps not all the way up. But your misreading is not worth entertaining too far.
Quoting apokrisis
By intensive I simply mean sub-representative, sub-digital (i.e. analog) differences. A bit of history of philosophy helps here too: Aristotle's understanding of difference is premised on his metaphysics of the categories. A thing can only be said to differ from something else if they belong to the same genera: "Difference is said of things which, while being other, have some identity, not according to number, but according to the species, or the genus, or the proportion" (Aristotle, Metaphysics, book Gamma). In every case for Aristotle difference is derivative or parasitic upon a more primal identity; that is, difference can only ever be digital. But if it is agreed that the digital is itself a product of boundary setting, then, in Deleuze's terms, there ought to be a concept of difference not subordinated to differences in the concept (read: genera). That is, there are differences which are not digital differences; in the context of the thread, these are referred to as intensive differences.
Miguel de Beistegui sums things up nicely: "[in Aristotle's schema], differences... make sense only in relation to the species and genus under which they are subsumed. And so, were we to rehabilitate differences in philosophical discourse, we would need to overcome the primacy of ontology as ousiology or, more specifically, overcome the punctual character of substance, and the conception of discourse as propositional. We would need to begin with differences, and with matter, and to show how they themselves are generative of identities and substances. We would need to consider them no longer as accidents ... since accidents always presuppose a substance to which they occur, but as events, and as events constitutive of our world. In so doing we would begin to move from an ontology of substance and essence to an ontology of events" (de Beistegui, Truth and Genesis). This is not to mention Kant's original use of the distinction between intensive and extensive magnitudes, from which Deleuze draws the terms in modified form.
Wilden too puts the whole issue in terms of difference, although he doesn't employ the vocabulary of intensive/extensive: "There are thus two kinds of difference involved, and the distinction between them is essential. Analog differences are differences of magnitude, frequency, distribution, pattern, organization, and the like. Digital differences are those such as can be coded into distinctions and oppositions, and for this, there must be discrete elements with well-defined boundaries. In this sense, the sounds of speech are analog; phonology and the alphabet are digital. In the same way, the continuous spectrum of qualitative, analog differences ranging from black to white in the visible color spectrum may be digitalized by the boundaries of a color wheel or coded around the opposition of black and white (which, for another system of explanation, as the absence of color, are identical)". (System and Structure).
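Wilden's colour-spectrum example can be sketched in a few lines of code; the readings and the 0.5 boundary here are hypothetical, standing in for any 'cut' imposed on a continuum:

```python
# Analog differences are differences of magnitude along a continuum.
# They become digital distinctions only once a boundary is imposed,
# coding the continuum into the opposition black / white (A / not-A).
greys = [0.05, 0.30, 0.49, 0.51, 0.70, 0.95]   # hypothetical analog readings in [0, 1]

BOUNDARY = 0.5                                  # the 'cut' that creates not-A

digitalized = ["white" if g > BOUNDARY else "black" for g in greys]
# Every reading now falls on one side or the other; the in-between
# magnitudes (0.49 vs 0.51 differ by almost nothing) are discarded.
```

Note that nothing in the analog readings themselves dictates where the boundary goes; that is exactly Wilden's point about digitalization being a coding imposed on, not found in, the continuum.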
Quoting StreetlightX
Is this really so clear? We can think of Q as a subset of R, but R itself is usually constructed from Q and some quantifiers, and finally from the empty set, which is like pure digitality itself. We try to breathe the "spirit" of the intuitive continuum into the "letter" of our relentlessly discrete symbols, because we want to have objective or inter-subjective discussions about this intuitive continuum. But we have to build it from digital sets, so it's arguably not the "real" continuum of intuition. As I'm sure you know, the power set of N is sometimes called the continuum, and this seems very digital. Plato's notion of the "One and the Indefinite Dyad" may be useful here. Perhaps we just come equipped with both of these ultimately incompatible world-structuring faculties. The rational numbers were a brilliant fusion of digital counting (the One) and the Indefinite Dyadic intuition of continuous length, but troubled of course by the discovery of irrational magnitudes. So we build the "scientific" reals out of sets. And then there's also the argument that we can do science with a finite subset of the rational numbers (numerical analysis). The intuitive continuum is something like the wind that is only made visible by the discrete leaves it shakes as it informs our construction of discrete, symbolic systems.
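The construction chain alluded to here can be spelled out; each stage below is the standard set-theoretic one, nothing specific to this thread:

```latex
\emptyset \;\leadsto\; \mathbb{N} \;\leadsto\; \mathbb{Z} \;\leadsto\; \mathbb{Q} \;\leadsto\; \mathbb{R}
\qquad\text{where}
\begin{aligned}
\mathbb{N} &: \ 0 = \emptyset,\quad n+1 = n \cup \{n\} \quad \text{(von Neumann ordinals)}\\
\mathbb{Z} &: \ \text{classes of pairs } (a,b)\in\mathbb{N}^2,\ (a,b)\sim(c,d) \iff a+d = b+c\\
\mathbb{Q} &: \ \text{classes of pairs } (p,q)\in\mathbb{Z}\times\mathbb{Z}\setminus\{0\},\ (p,q)\sim(r,s) \iff ps = qr\\
\mathbb{R} &: \ \text{Dedekind cuts: downward-closed } A\subset\mathbb{Q} \text{ with no greatest element}
\end{aligned}
```

And the remark about the power set of N is the standard cardinality fact: $|\mathbb{R}| = |\mathcal{P}(\mathbb{N})| = 2^{\aleph_0} > \aleph_0 = |\mathbb{Q}|$, which is why set theorists call $2^{\aleph_0}$ "the continuum" even though every object in the construction is a discrete set.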
I'm not proposing any "thought/world" duality; in fact I am proposing just the opposite. Is there any continuum-in-itself apart from the one we know? How is it known, if not through thought? But take 'thought' here in its widest possible sense as consisting most primordially in re-cognition, which is the basis of semiosis.
When you say that "digitality is present at the level of things like gene expression and axon function in nerve cells, and require no 'thought' in order to function" I have no argument because that is just the way we conceive it. Does it make any sense to say that it could be in any other way than how we conceive it to be, other than it being in some other way, perhaps more comprehensive or even radically different, that we might come to conceive it?
So, the continuum doesn't exist as continuum until it is conceived as such. The continuum itself is an identity for us. But then does it make any sense to say that there are no identities in nature, if nature cannot be conceived and talked about by us except in terms of identities? By this I mean that nature nowadays is generally conceived as a vast causal nexus of relations between entities that are in one sense unique and on the other hand are of specific kinds. It is uniqueness that confers identity, and uniqueness consists just in difference from everything else. It is similarity that underpins identification, and similarity consists in being understood to belong most specifically to species, and most generally to genera.
Quoting StreetlightX
Thought and practice are inseparable, so I don't see it being necessary to privilege one over the other. I think it's really more a matter of looking at it broadly from the different perspectives at our disposal.
So, thinking about your 'treeline as border' example, it's true that I can't "think away" its borderhood, because the latter consists in a collectively institutionalized thinking over which I have no control, and if I cross the border illegally I will be subject to the actions motivated by that collective thinking. I also see that that collective thinking may have itself come about by virtue of past actions that were not specifically motivated by the notion of the treeline as a border, but have then become entrenched by a long history of territorial kinds of practices around the treeline. I think one does better to remain neither materialist nor idealist, but to take account of the necessity of both thought and matter, and their ultimate inseparability, and to privilege neither one at the expense of the other.
I can agree with Wilden. It is when you start pulling in Deleuze and "aesthetics" and other such baggage that it loses analytic clarity and becomes a romantic melange of allusions.
So accepting Wilden as a valid starting point, I will focus on the further things that could be said from a (pan)semiotic point of view.
The key thing is that reality itself is digital in being marked. To talk about analog difference is already to talk about a reality that is constrained in particular material ways. If the weather is a pattern of magnitudes - the pressure high here, low there - then already the world is divided against itself, expressing a proto-negation.
So a pure analog state would have to be a completely bland state, one characterised by its intensive or bulk properties. It would be like the early state of the Universe when all that existed was a thermalising bath of radiation - a featureless state with the same pressure and energy density and rate of action everywhere. The Big Bang was the least possible marked state of being - a spreading ocean with no discernible texture. The only change was the change of becoming steadily larger and cooler - a change that could only be appreciated if one was standing god-like outside everything that was happening.
Yet even the radiation-dominated era of the early Big Bang had some digital structure. Action was confined to three spatial dimensions. It was also confined to a single temporal one in the sense that all action had to flow entropically downhill - to flow uphill would be neg-entropic!
So contra your position, existence has to start with the digitisation of the analog - a primal symmetry-breaking. Or as I say, to make proper sense of this, we have to introduce the further foundational distinction of the vague~crisp. We have to reframe your LEM-based description in fully generic dichotomy-based logic.
So now we get to a Peircean, Gestalt or Laws of Form level of thinking where both event and context, figure and ground, particular and general, atom and void, are produced together, mutually, when a symmetry is foundationally broken. In the beginning was a vagueness, an apeiron, a quantum roil, a firstness of pure qualitative fluctuation. Then this state of unformed potential was broken, marked by its most primal distinction. In Big Bang theory, we have a reciprocal relationship between an extensive container and its intensive contents - an expanding spacetime and a cooling ocean of radiation.
This is the really difficult to get bit. But it means that the reductionist instinct to make one aspect of being prior or more foundational than its "other" is always going to mislead metaphysical thought. Does the digital precede the analog, or the analog precede the digital? The whole point of an organic and pansemiotic conception of this kind of question is to focus on how each brings its other into concrete being. To be able to make a mark is to reveal the possibility that there is a ground to accept that mark. So before anything happens - before there is any kind of difference, analog or digital - there is only the vagueness of a potential. And then when something happens, the digital and the analog would be what co-arise as the two aspects of being which such a symmetry breaking reveals.
Now we start to get into the difficulties with your view. As I say, the purely analog - if it is to make dialectical sense - would have to be the least digitally marked kind of state that still has definite material being.
So it would be like the earliest state of the Universe - a featureless and homogenous realm of the cooling~expanding. All distinctions - all negations or differences that make a difference (to someone) - would be pushed to the margins of this generic state. It would only be a god-like observer, free to take a position outside the totality of this material existence, who could make remarks like "This Universe is colder/larger than it just was, and it is cooling/expanding at rate x rather than rate y or z." Or heading the other way in scale, remark "This Universe is featureless, except when we get down to the quantum grain, we can see it still has a residual fluctuating freedom that again is an active negation of its generalised state of constraint."
But then of course the actual Big Bang went through its further symmetry-breaking phase transitions and matter condensed out of the radiation bath. This - in dichotomistic fashion - cleared the vacuum of energy in a way that made it the other of "the void". So now we still seem to be in an analog realm, but now one with a lot more possibilities for local magnitude differences. Mass is gravitationally clumping. A new level of action is starting to play out.
The radiation era was already digitally-broken - it had generic counterfactuality in that it only had three spatial dimensions and a single entropic gradient, etc. But now the matter-dominated era was starting to get really broken. There existed mass that could have any contingent rate of motion between the limits of rest and lightspeed. Greater digital constraint - the marking of the extremes of speed as two crisply opposed limits - had just bred new analog variety in the fact that mass could travel at any rate on the spectrum of rates thus revealed.
So you should be getting the picture. If we actually check in with the physics, we can see how analog~digital is a drama being played out in which both emerge together out of a primal symmetry-breaking. And then both evolve together as symmetry-breakings become the ground - the vaguer preconditions - for further symmetry-breakings which render the presence of the analog and the digital ever-crisper. Both aspects of nature are being strengthened because that is how the mutuality of dichotomous development works. The blacker the pencil, the whiter the paper it marks.
Of course analog and digital were terms created for the late machine age and so are being dropped into a world with a very long history of becoming crisply developed in its dualistic fashion. If we look around the world of sensible objects, we see it sharply divided in terms of the continuous and the discrete, the part and the whole, the form and the matter, the flux and the stasis, the chance and the necessity, etc. That is physically how it is for us, being creatures that necessarily depend on the Universe having reached its high point of material complexity - sorted into stuff like heavy element planets bathed in the steady energy flux from a star fixed at an optimal distance.
So what Wilden describes is the epistemic cut that underlies the further adventure that is life and mind in the cosmos. He is no longer talking about the material world in and of itself - the topic of pansemiosis. He is not talking about analog and digital in that general physicalist sense. He is now talking about symbolic representations of that materiality. And also perhaps, the evolution of that symbolism - which begins in the analogic simplicity of the iconic and indexical, and terminates in the digital crispness of the properly symbolic.
If we are to talk about analog or iconic representation as opposed to being, then we are talking about machines like old-fashioned wax cylinders where a needle - driven by making noises into a tube - produces a wriggling groove. And then when the energy relation is reversed - the cylinder is cranked to wiggle the needle and cause the tube to utter noise - we get a playback of a trace.
Crank the cylinder too fast or too slow, and we can have proto-negation - a funny playback that is a difference in kind in being a fictional representation rather than a realistic one. But generally, the analog representation is un-digital in being still so closely connected - as close as it can possibly be - to reversible physics.
There is a symmetry-breaking - a one way expenditure of energy to make the recording and reduce dynamical reality (a sound of a band of minstrels singing down the tube) to an enduring negentropic memory trace. But it is a symmetrical symmetry-breaking, a shallow one, not a deep and asymmetry-producing symmetry-breaking (like a dichotomous symmetry-breaking). As I say, just turn things around so the groove drives the needle rather than the needle carving the groove, and you get back the memory you created as a dynamical performance of sound. The minstrels sing once more.
So analog representation, or analog signal processing and analog computation, arises as the most primitive, least broken, form of memory-making. The triadic semiotic trick is all about a living/mindful system being able to internalise a view of the world - code for a set of world-regulating constraints using the machinery of a symbolic memory. And analog representation is the simplest version of that new trick. It sticks a machine - like a wax cylinder recorder - out into the world. And then exploits the physical asymmetry of a rotating cylinder and a dragging sharp point to construct a trace - a linear mark encoding a sequence of energy values.
Just by being able to switch the direction of the energy flow - from the needle to the cylinder versus from the cylinder to the needle - is all the digitality needed. On/off, forward/backward, record/playback. Semiosis at the lowest level boils down to the physical logic of the binary switch.
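The wax-cylinder argument can be caricatured as a toy model (the class and values here are entirely hypothetical, not anyone's actual machine): the trace itself is a sequence of analog magnitudes, but the choice of which way the energy flows is already a one-bit digital distinction.

```python
# Toy model: an 'analog' device whose only digitality is the
# record/playback switch -- the direction of the energy flow.
class WaxCylinder:
    def __init__(self):
        self.groove = []            # the analog trace: a sequence of magnitudes

    def record(self, amplitudes):
        # energy flows needle -> cylinder: sound is carved into the groove
        self.groove = list(amplitudes)

    def playback(self):
        # energy flows cylinder -> needle: the trace re-drives the sound
        return list(self.groove)

cylinder = WaxCylinder()
cylinder.record([0.1, 0.8, 0.3])    # mode bit set one way
replay = cylinder.playback()        # mode bit flipped: the minstrels sing again
```

The amplitudes stored in `groove` could take any value on a continuum; only the two-valued choice between `record` and `playback` is coded - which is the sense in which even this 'analog' device is digital from the get-go.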
So the point is that even analog devices are digital from the get-go. What we mean by analog in this context is that they cross the semiotic Rubicon by the least possible distance. They are devices that can do "representation", but of a kind so thin or materially direct that we wouldn't call it properly symbolic, just basically iconic, or at most, indexical.
I hope you can see how - in ignoring the fine print of a definition of analog - you have produced a great confusion in so loosely applying the analog~digital distinction to the world in general, the ontic thing-in-itself, rather than honouring its technical epistemic meaning as a way to clarify our thinking about rate independent information - the semiotic mechanism by which life and mind forms memories or representations of the world.
What's wrong with Deleuze? I find him to be one of the very few modern philosophers who actually seemed to know what he's talking about.
Quoting StreetlightX
See, he is simply saying that there ought to be a concept of difference, which refers to difference itself, rather than referring to our judgements of difference. That seems like a good honest principle to me. How are we going to come to understand "difference" by looking at the way we measure difference rather than looking directly at difference itself?
But this already assumes that there is such a thing identified as "difference". And so, within this identity is implied a certain sameness. If we can refer to everything that we see as a "difference", then by using this same word, "difference", aren't we really saying that everything which we see is the same? Therefore, it doesn't really matter what we call it, "difference" or "same", as long as we are referring to the same thing, what difference does our choice of words make? It's when we refer to different things with the same word, that confusion rolls in.
Quoting apokrisis
This, I will steadfastly argue, is a mistake, the fallacy of synthesis or some such thing. We take two things, future and past for example, and rely one upon the other to understand the two. Then, because our understanding has developed in this way, that we use one to understand the other, and vice versa, we assume that the two are naturally co-dependent. But this is only a reflection of our understanding of the two; we bounce one off the other to get an understanding of both. It doesn't say anything about the real things which are referred to by "future" and "past", in their natural existence; it only says something about the way that we understand these two. Then we might be inclined to say something ridiculous like: neither one of these is prior to the other, they are co-dependent. Because we see that future and past are co-dependent in our concept of "present", we might forget the obvious, and make such a silly claim.
Why would it be ridiculous? Is it because the present seems necessarily prior to either the past or the future in your definition of time here?
(Probably the "analog"/"digital" picture as a general ontological idea is taking computer fetishism a bit too far, but let's go along with it for a minute just to answer the question above.)
If you would maintain that what the words "future" and "past" refer to, is purely conceptual, then what they refer to is a purely conceptual opposition. There is no need to assume a priority, because each is dependent on the other in conception. But if you look at what these words refer to, in the real world of experience, rather than something conceived, then they express a priority.
So this expresses the difference between understanding the word simply by relating the word to a concept, and understanding the word by finding the thing in the world which the word refers to. If we don't go to the world, to understand the word, then all these dichotomies, big and small, hot and cold, etc., are filled with words which express a co-dependency. You rely on the one word to express the meaning of the other, as a negation, an opposition, thus it appears as if the two things referred to are ontologically co-dependent. It is only by turning to the world of sensation and experience, that you can see what the words actually refer to, and the real difference between these two supposedly opposed things.
That is the lesson of Plato's "Theaetetus". They went looking in the world for this thing called "knowledge". They had a preconceived notion that what the word referred to was something which included truth, and excluded falsity. It was a determined relationship between these two opposing terms, true and false. In the real world of things which were called "knowledge", they couldn't find any reliable way that falsity was actually being excluded from the thing which was being called "knowledge". Therefore they were forced to conclude that their preconceived notion, "including truth, and excluding falsity", was not a proper definition of knowledge to begin with, because this did not fit with what was actually being called "knowledge" in the world; they were looking for the wrong thing. Their preconceived notion of what "knowledge" is made it impossible that the things which people were calling "knowledge" were knowledge according to the conception. What could they do? They could not tell everyone that what they were calling "knowledge" is not really knowledge because it fails the standards of their preconceived notion, leaving the world with a concept of "knowledge" with nothing in the world to apply the name to. The only viable option is to admit that the conception is wrong.
This is a very real issue, especially with terms of ontological or metaphysical significance. We have a conception of future and past for example. This conception models these two as pure opposition. Take a point, on one side of that point is past, the other side is future. We could build a massive epistemic structure on a conception like this. The problem is, that in the real world, and common understanding of future and past, there is an implied necessary temporal priority, past has gone by, and future is yet to come. The conception, of pure opposition, two sides of a point, fails to take this into account. Therefore any conceptual structure built on this concept is completely illusory, it fails to take into account what we are really referring to when we use the words "future" and "past".
Quoting Hoo
Hoo, do you think that "intuitive continuum" refers to anything real? Isn't it more than just intuition? Would you think that continued existence, what is expressed by terms like momentum and inertia, is simply an intuition, and not supported by anything factual. If you give the status of "factual" to such terms, how can you say that the continuum is simply intuitive.
Quoting John
How can there be a continuum which we know? If what we know is the digital, how could a continuum be known? This seems to be the problem. There are indications of a continuum, so we claim to know that there is a continuum, but the continuum cannot actually be known. So how do we validate our claim to know that there is a continuum? What if we are mistaken on this point, and the thing which we are calling 'the continuum" is actually discrete? Would we then have to designate something else as "the continuum", to support our claim to know that there is a continuum? That is the importance of identifying the thing which we claim as "the continuum", to see if it really is the continuum. If it is not, then either our claim to know that there is a continuum, or our claim to have identified the continuum, is wrong.
It appears like StreetlightX is arguing that we cannot even go so far as to identify the continuum, because to identify it is to imply a sameness, when the continuum is necessarily difference, as per Deleuze. My argument is that to identify it as difference is still to identify sameness, and this defeats the claim. I think that this pushes us back toward some type of mystical position, claiming that this assumed continuum is something that we cannot even talk about, like the ancient mystics used to claim about "matter".
Yes, yes, if it doesn't come from one of the five or six philosophers you've bothered schooling yourself in, it's all allusion and romantic melange.
Quoting apokrisis
But your 're-framing' does nothing but dilute a perfectly rigorous distinction with a fuzzy, unprincipled one. As I've said quite a few times now, the distinction between the digital and the analog is quite precisely defined by the presence of negation and self-reflexivity. At stake is a difference in kind, not a difference of degree. Wilden himself is unequivocal about this: "'Not' itself is a metacommunicative boundary essential to the 'rule about identity' which is the sole sufficient and necessary condition of any digital logic"; elsewhere: "A digital system is of a higher level of organization and therefore of a lower logical type than an analog system. The digital system has greater 'semiotic freedom', but it is ultimately governed by the rules of the analog relationship between systems, subsystems, and supersystems in nature. The analog (continuum) is a set which includes the digital (discontinuum) as a subset". Insofar as your whole line of reasoning does not respect this fact, it is less a refinement than it is a watering down.
I guess that depends on what one means by "real." Is the real equated here with the scientific image (Sellars) ? For me that image is only a fraction of the foggy, inter-subjective real. I say "foggy" because I'm thinking of a continuum that runs from the unreal to the perfectly real. Maybe the real is well described as axioms in common. This would include the physical world but also the "common sense" that makes more abstract conversation possible. We can double back and edit chunks of common sense, so the real is unstable or "on fire."
Quoting Metaphysician Undercover
You might want to look into (if you haven't already) the "arithmetization of analysis." I've really obsessed over this issue. I love analysis, but the real numbers are strange birds indeed.
That's what a difference of opinion amounts to, your series of assertions versus my series of assertions. The question is, whose series of assertions makes the most sense, and here we have only intuition to refer to. How does it make sense to choose a series of assertions, to believe in, which are counter-intuitive, but are chosen simply because they support an ontological position which is chosen for some reason other than that it makes sense intuitively? Isn't that choice of ontological position supported only by an unreasonable prejudice?
Quoting Hoo
Is that really possible though, or more precisely, is it a correct procedure? It may be possible, but also incorrect. To "double back and edit chunks of common sense" implies that there are principles, based in something other than common sense, which exist and which we can refer to in judging and editing common sense. Isn't any such principle demonstrably supported by nothing but prejudice, as described in my reply to StreetlightX? It is the very description of prejudice. What would provide you a principle whereby you could judge intuition or common sense, a principle which could be excluded from the charge of "prejudice"? Refer back to Deleuze's "there ought to be a concept of difference not subordinated to differences in the concept". Such editing of common sense is precisely that: subordinating your concept of difference to difference within the concept. But in judgement, these pre-conceived principles are called prejudice.
I'll go ahead and reply to both your replies, since they are related.
Quoting Metaphysician Undercover
I'd say that we have unstable systems of beliefs that we are constantly testing against experience, itself shaped and organized by these beliefs. "Intuition" has to pick up the slack when differing beliefs fail to lead to different actions, but distance from "what should we do now?" is associated with (though not identical to) the irrelevance of an issue. This is why pragmatism is so annoying. It "dissolves" differences by taking them "modulo praxis." He believes A. She believes B. They both do C. So it doesn't matter which (if either) is right. If an ontology is a tool, then we just need it to work -- to make us happy. If we continually tinker around with it, it's maybe a "princess and the pea under the mattress" situation. Musicians have an ear for tiny differences in sound, and philosophers in thought. Intuition == taste, etc. As to "unreasonable," I think we need a notion of pure reason to ground any notion of pure unreasonableness (I think you'll agree). So I like to think of normalized discourse (Rorty/Kuhn), where philosophy, the normalizing discourse, is necessarily abnormal as it addresses itself. Reason itself is on fire.
Quoting Metaphysician Undercover
For me it's all prejudice. We can build principles on top of parts of common sense that make other parts look less "common-sensible." This is one of my favorite quotes:
[quote=Neurath]
We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.
[/quote]
Most of "common sense" or our prejudices have to remain intact while we judge and edit a particular prejudice. Pleasure and pain are the hammers that re-shape this edifice. But the pain can be cognitive dissonance, and the pleasure can be a sense of status. It's not at all just bodily.
[quote=James]
The observable process which Schiller and Dewey particularly singled out for generalisation is the familiar one by which any individual settles into new opinions. The process here is always the same. The individual has a stock of old opinions already, but he meets a new experience that puts them to a strain. Somebody contradicts them; or in a reflective moment he discovers that they contradict each other; or he hears of facts with which they are incompatible; or desires arise in him which they cease to satisfy. The result is an inward trouble to which his mind till then had been a stranger, and from which he seeks to escape by modifying his previous mass of opinions. He saves as much of it as he can, for in this matter of belief we are all extreme conservatives. So he tries to change first this opinion, and then that (for they resist change very variously), until at last some new idea comes up which he can graft upon the ancient stock with a minimum of disturbance of the latter, some idea that mediates between the stock and the new experience and runs them into one another most felicitously and expediently.
[/quote]
The idea that there is something beyond prejudice can itself be described (though not finally, since description is apparently never final) as one more prejudice. This threatens the distinction itself, of course, which we need in order to get to this threatening...
For me there are different senses of 'know'. First there is the knowing of participation, familiarity. I believe animals do this; it seems obvious. With symbolic language come recursive and discursive forms of knowing which may be more or less 'digital'. But remember, within linguistically mediated forms of knowing there are metaphorical, which is to say analogical, modes as well as more precisely propositional ( digital) modes. And the differences between these modes of knowing do not themselves constitute a sharp dichotomy (although it may be conceived as such) but a series of imprecise locales along a continuum.
So, primarily we know what we come to conceive as the 'continuum' directly by our participation in it as embodied continua.
In regard to what you say about sameness and difference, I would say that no two parts of the continuum of our experience are the same; every part is unique, so it is more a matter of a play between similarity and difference that allows it to be said of two things that they are not the same, but of the same kind. That something is of a kind is a matter of identifying it, I have argued, involving the balance of similarity and difference. That anything is absolutely unique across time and at each moment is a matter of identity, simply involving difference; to be unique is simply to be different from everything else.
Instead of repeating myself I'll quote my earlier response to StreetlightX because I think it is relevant here:
Quoting John
I'm not sure what you think I'm arguing here. It has been my point that we impose our frameworks of intelligibility on the world.
But then a dialectic or dichotomous logic ensures that this process is rigorous. In being able to name the complementary limits on possibility, we have our best shot at talking about the actuality of the world, as it must lie within those (now measurable) bounds.
So if you want to talk about "time", then it is only going to be an intelligible notion that we can project onto reality in a measurable fashion to the degree we have formed a crisply dichotomous model of it.
For example, the classical conception of time and change developed by the dividing off of stasis and flux, being and becoming. Then space and time became a division of dimensions - if you imagine existence in terms of straight lines, then you can imagine points travelling along the lines so that first they were here, later they were there.
Both relativity and quantum theory have since shown space and time are not so distinct and we are back to having to include energy - as the thermal source of any change - in the spatiotemporal picture. The rate of time can be relativistically bent by energy density. Time and energy form a dichotomistic uncertainty relation in quantum theory. Both even challenge the notion of before and after. Relativity permits wormholes in time. Quantum theory appears to demand some form of retrocausality to explain quantum eraser experiments.
So we have a variety of ways of thinking about time - all of them models that try to impose some kind of fundamental dichotomy that would make time an intelligible, and thus measurable, concept of the thing-in-itself.
A logic of vagueness is a further such modelling exercise. And while I might employ familiar (causal) notions like before and after, or earlier and later, to talk about semiotic development, clearly I do so in a new context - one in which any more traditional notion of temporal co-ordinates is itself going to be emergent.
And as I say, this is not wild metaphysical hand-waving. It is where Big Bang cosmology has led. The Planck scale encodes a dichotomous or reciprocal relation between spacetime and energy density now. Planck spacetime is h x G/c, while Planck energy density is h x c/G.
So a quanta of existence - the fundamental unity that the triadic Planck relation expresses - encodes a dichotomously matched pair of limits.
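For what it's worth, the "h x G/c" and "h x c/G" expressions above are schematic; the orthodox Planck quantities (textbook definitions, not the poster's exact expressions) do exhibit the reciprocal pairing being gestured at, with G and c swapping between numerator and denominator under a shared factor of ħ:

```latex
\ell_P^{2} = \frac{\hbar G}{c^{3}}, \qquad
E_P^{2} = \frac{\hbar c^{5}}{G}, \qquad
\rho_P = \frac{E_P}{\ell_P^{3}} = \frac{c^{7}}{\hbar G^{2}}
```

That is, the squared Planck length carries G/c³ while the squared Planck energy carries c⁵/G, which is the dichotomously matched pair of limits in standard notation.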
If we think of it geometrically, spacetime is extremitised by being flat. It becomes changeless, featureless and energyless by becoming maximally stretched out in Euclidean fashion. And then energy density or change is extremitised by being hyperbolically curved or maximally fluctuating. Instead of spacetime lying flat and even with itself, now every point is pointing away from such a dimensionally regular state. It all wants to break apart in every possible "direction" as quick as it can.
So now that view of things is thermal and allows us to understand "time" as a (dichotomous) contrast between a backdrop flatness (a Universe that has developed to become generally large and cold) and a localised curvature (the patchy clumps of energy density represented by spacetime-bending "stuff" like nebulae gas clouds, stars, planets, atoms, blackholes).
And matter is now the source of a further temporal dichotomy (one born of the symmetry breakings of particle physics) because it introduces the new possibility of an energy density that moves about at less than lightspeed. It now "takes time" to move about because action no longer has the vanilla rate of c, the vanilla rate of radiation. Mass is instead operating within the new symmetry-breaking, the new dialectical limits, of absolute rest and lightspeed.
So the whole notion of time - in its familiar Newtonian sense - is something that has to develop via a succession of symmetry-breakings. The kind of time you are talking about did have a prior history in which it was a different (less differentiated, and thus more vague) kind of time for quite a long time. :)
You are merely choosing to highlight the bit I already agree with in general fashion. From a biosemiotic viewpoint, that states the obvious.
But what I have been pointing out is that your framing of the issues lacks the further dimensionality that would allow it to be actually developmental in the way a process view needs to be. Your way of talking about the continuum or the analog is fuzzy over the issue of fuzziness. You talk about the analog/continuum as being itself crisply existent (a realm of actualised material being), and then at other times you talk about it as a ground for further development - the less specified basis for the discrete/digital machinery that transcends it so as to have a view of it.
Of course in your confusion, that becomes the confusion you accuse me of. I'm just patiently taking you back to the source of symmetry-breaking to show how both continuity and discreteness co-arise from pure vagueness. And analog~discrete would have arisen as modes of communication or representation in the same fashion.
As I have said, it is important that the analog or iconic representation already exists on the other side of the epistemic cut - on the side of the symbolic or "rate independent information". It is a distinction made at the level of the mapping, even if it means to be talking about a distinction in the (computational!!) world.
And because you set off in the OP to say something logically concrete about metaphysics, you can't just gaily presume that what is true of the map is true of the territory. That further part of the argument must be properly supported.
Either you don't understand that or you simply want to avoid the issue.
So it is fruitless to keep trying to return me back to Wilden's perfectly acceptable 1970s analysis of the distinction between analog and digital computation. You know I agree with that.
The interesting question is then the ontological or metaphysically-general one of how does that fact about representative modes change our conception of nature itself? What new vantage point does it give us for dealing with the central questions of process philosophy, like the mechanics of development and individuation.
A difference that makes a difference can be described analogically or digitally, represented in terms of what it is, or what it is not. But that does not yet get at the deeper question of how representation itself arises (via an epistemic cut), nor how bare difference arises (as an ontic symmetry breaking).
I don't dispute that there are different senses of "know". But I think that they all involve some form of identity. Familiarity involves recognition which is a form of identification. I do not think that it is correct to extend "knowing", right down to primitive life forms, and then restrict "identity" to a function of human language.
Quoting Hoo
Yes, I agree with this, but the point I was making to apokrisis is that when two terms are seen to be opposed in conception, this does not indicate that the two things referred to are mutually dependent on each other, nor does it mean that one is not prior to the other. So in the case of reasonable and unreasonable, we first make a conception of what qualifies as reasonable; then, based on this description of reasonable, we can determine unreasonable. However, that the concept of reasonable is prior to the concept of unreasonable does not mean that reasonableness exists prior to unreasonableness in nature. So I do not think that you can proceed to your conclusion that "reason itself is on fire", because what you are referring to is the conception of reasonable and unreasonable, not "reason itself".
Quoting Hoo
I see the point with the ship analogy, but here we are concerned with fundamental ontological principles. Can we assume that massive conceptual structures rest on fundamental principles? If so, then when we are examining these fundamental principles, should we judge them according to common sense and good intuition, or should we judge them according to other fundamental principles, so as to maintain consistency with those principles and not rock the boat? I think the former: if the fundamental principles are not consistent with common sense and good intuition, then there is a problem with those principles, and that must be exposed, despite the fact that other principles might be destabilized in the process.
Quoting Hoo
I don't know if this can be called "prejudice". Prejudice implies a preconception. What I refer to is the potential for a method to go beyond conception, to observe and describe in an unbiased and objective way. If the idea that this is possible is itself considered a preconception, then I guess there is prejudice here as well. I don't see that it is possible to get beyond all prejudice; even common sense and intuition are inherently prejudiced, as there are prejudices inherent within our language.
Quoting apokrisis
What I am arguing is that this dichotomous logic is the framework of intelligibility which is being imposed on the world. And this is the mistake. It is a mistake to think that the world must fit within our systems of measurement, the "bounds" which we imposed. We must adapt our systems of measurement, shape them to the world. But even this requires a preliminary understanding, which cannot be given by measurement because the system for measurement will be created based on this understanding.
In order to properly understand the world we must start with a coherent system of description. Despite the fact that dichotomous logic can, and does, place restrictions on how one can describe, it does not attempt to restrict the thing being described to fit the system of description. We shape the system of description to fit the thing being described. We accept the fact that a description may not be precise, that your description may contradict my description of the very same thing, etc. This is the mode of description: we do not attempt to force the world into our devices of measurement; we keep describing and re-describing, working at the description and altering our descriptive terms, until we are satisfied with it. We then devise a means for measuring that described thing, based on the description.
Quoting apokrisis
This is where I believe the mistake lies. You equate intelligible with measurable. But measurable is restricted by our capacity to measure. A thing is only measurable in so far as we have developed a way to measure it. However, a thing is intelligible to the extent that we have the capacity to describe it, and description does not require measurement. John, above, would argue that the capacity to recognize familiarity makes the thing intelligible. So how we proceed toward understanding the world is first to develop ways to describe its qualities, then to develop ways of quantifying those qualities (measuring). Therefore, when talking about a thing like time, it is only practical to discuss our ability to measure it to the extent of our ability to describe it.
This is a great conversation. So, first, thanks for that! Again I'll reply freely to your entire post, since, well, I love this stuff.
Quoting Metaphysician Undercover
This is deep water, because I'm not sure how much of a gap there is between reason and the conception of reason. It's connected to the issue of the world-for-us versus the world-in-itself. But the world-in-itself or the world-not-for-us looks necessarily like an empty negation. It marks the expectation that we will update the world-for-us (which includes the model of the filtering mind enclosed in non-mind that it must manage indirectly, conceptually, fictionally.) Is there a place for reason in this "real" non-mind enclosure? Or is reason a foggy notion distributed through our practices, verbal and physical? As philosophy wrestles with the definition of reason, or reason-for-reason, it seems to be the very fire I was getting at. The problem with reason-in-itself is that we can't say anything about it. It seems to cash out to the expectation that we will keep reconceptualizing reconceptualization itself, you might say.
Quoting Metaphysician Undercover
I think we generally agree. I'd say that we only embrace the destabilization of an investment/prejudice in order to prevent the destabilization of a greater investment/prejudice. We amputate the hand to save the arm, or we trade the old arm for a new arm. It's a model of the modelling mind as a system that seeks minimum dissonance/tension/confusion and/or maximum preparedness, security, and sense of well-being. I think it's useful to think of the mind as a "readiness" machine. We have to act quickly sometimes, so the imagination cooks up detailed responses. If we are terrified by the thought of life in a world devoid of principle X, we will probably throw principle Y under the bus to save it.
Quoting Metaphysician Undercover
So we basically agree. Of course it's not one of my prejudices that all prejudices are equal. The paradigm notion of objectivity is the physical world, perhaps, but maybe that physical world is just a part of the fuzzy intersection of the fuzzy cores of the belief systems involved. You mention common sense and language. That's objective, too, and it seems to ground the objectivity of science. We need the "manifest image" as a background, and ordinary language, in order to practice science. Maybe "true-for-all" is a grandiose extension of "true-for-folks-like-us," encouraged by the apparent universality of mathematical natural science. We share candidates (prejudices) for inter-subjective adoption by a community. As humanists, almost unconsciously so, we tend to think about universal inter-subjective adoption: truth-for-all, not just for Americans or "the superior man." And yet the same prejudice or implicit rule-for-action might not fit equally well into two differing sets of already-held ideas or rules-for-action. It's just our almost invisible prejudice that what is true for me at least should be true for all.
Yet when it comes to "fundamental ontologies" (including the presence or absence of the god-thing or the right-and-wrong thing or the truth-for-all thing), I don't think there's any reason to expect convergence. Fuzzy convergence in behavior, perhaps, since that is policed, but acceptable behavior under-determines the belief system, hence the "harmlessness" of freedom of thought. Different personal histories give us a different exposure to adoptable prejudices and really a different world to test them against (since we all live in the same world only by abstraction; our lives are all different streams of experience, etc.) Since science stays so close to the sensual and the mathematical, great consensus can be achieved. But our total, less-specialized belief systems don't seem similarly constrained. Lots of different belief systems "work" and are relatively stable. They're on fire, but it's a low heat, perhaps at the surface. For instance, I don't expect any worldview/ethical revolutions at this point, but I'll always be policing the "corona" of the fuzzy core of my investments.
And yet this vision/fiction itself is just a "candidate belief" for the relatively small inter-subjective community that is comfortable with such abstraction. I like this prejudice, so I bring it benevolently (perhaps hoping for a reward somehow) to the tiny tribe like a better mousetrap.
Well if all this is a mistake, what is your alternative? Can you even define your epistemic method here?
The Peircean model of scientific reason says yes, we have to begin with just a guess, a stab at an answer. And a good stab at an answer is one that tries rationally to imagine the limits that must bound that answer. That is what gives us a reference frame from which to start making actual empirical measurements. And from measurement, we can construct some history of conformity between the thing-in-itself and the way we think about the thing-in-itself. So retrospectively, any founding assumptions, any rational stabs in the dark which got things started, are either going to be justified or rejected by their own empirical consequences. If they were in fact bad guesses, experience will tell us so. And contrariwise.
Importantly (and something that goes to the analog~digital distinction as SX, channelling Wilden, has defined it), this also means that the model doesn't have to end up looking anything like what it is supposed to "represent".
I think what troubles you is this apparent loss of veridicality. You want the kind of knowledge of the world that is literally analogic - an intuitive picture in the head. If someone is talking about atoms, you want to see a reality composed of billiard balls rattling around a table. If someone talks about development, you want to see a point moving along a drawn timeline, the future steadily moving backwards to become the past as intervals of the present get consumed.
But higher order understanding of the world is different in being digital. It throws away the material variation to leave only the digital distinctions - the description of the boundaries or constraints, the description of the rate independent information in the shape of eternal or timeless laws and constants.
So semiotic modelling is this curious thing of not being a re-presentation of what actually exists in all its messy glory. Instead, it is a boiling down of reality into the sparseness of abstraction entrained to particularity - the semiotic mechanism of theory and measurement.
Sure, it is still nice to picture billiard balls, waves, timelines, and all kinds of other analogic representations of the thing-in-itself. But the digital thing is all about giving that kind of re-presentation up. In the extreme it becomes the kind of instrumentalism that SX would find disembodied and "un-aesthetic". One may find oneself left simply with a syntax to be followed - a mathematical habit - which works (it makes predictions about future measurables) and yet for the life of us, we can't picture the "how". That's pretty much where the Copenhagen Interpretation ended up with quantum mechanics.
So the Peircean/digital/semiotic approach to modelling the thing-in-itself is both cleanly justified in terms of epistemology, and also never going to deliver quite what you probably think it should. This is why, whenever I talk about vagueness, you always just keep saying: tell me about it again, in a way that is not purely rational but instead gives me a picture I can believe inside my head.
But sorry, that is what it means for modelling to be embodied, or meaning to be use. We have to head in the direction of extreme mathematical-strength abstraction so as to be able in turn to make the most precise and telling acts of measurement - to also digitise experience itself as acts of counting, a harvesting of a field of symbols.
Quoting Metaphysician Undercover
So just as I say, you yearn for analog iconicity - a concrete picture in your head that you can stand back and describe ... as if such a representation were the thing-in-itself floating veridically before your eyes.
Pragmatism says that is a Kantian pipedream. A picture in your head is just going to be a picture. What actually matters - the only thing that in the end you can cling onto - is the functional relationship you can build between your model of existence, and the control that appears to give you over that existence. And the digital is stronger than the analog in that regard because it decisively erases unnecessary details. It can negate the real in a way that makes for the most useful map of the real.
And we all know how a map bears bugger all material resemblance to the physical reality of the territory-in-itself. But who complains about a map of a country having "unreal" properties like being small and flat enough to fold up in your back pocket?
Quoting apokrisis
This is where we really overlap. Rescher likes "methodological pragmatism." The epistemological system is machine-like, a normalized discourse. The system as a whole, and not its individual, inter-dependent parts, is put to the test as we act on its output: "truths" or (implicitly) rules for action. For instance, this was probably the "living" justification of infinitesimals. They were part of a model of existence that allowed us to control that existence.
As I said, I think there is a useful logical distinction between identity and identification. To identify something is to identify it as something, which, as you say, involves prior recognition. Animals (at least some) obviously do recognition, but I would not agree it makes sense to say that they identify things, much less to say that they see things as identities.
But, all of this is just terminology and I acknowledge there may be different ways of interpreting the ambit of terms. For me, though, it is recognition first, then the recursive act of identification of things (most broadly as entities and then kinds of entities), and then the still more reflexive act of understanding things as identities (unique entities).
Sure, every setting of a boundary is always (at least) double: the explicit one between the two (digitized) elements in question (A, not-A), and the implicit one between the 'boundary-setter' and the very system under consideration, taken as a whole. But this is just the methodological constraint set on any attempt at analysis; Wilden himself is perfectly aware of this:
"Even if we think we have successfully divided the whole of reality and unreality into only two sets by drawing a line between A and non-A (and by including within non-A, non-B, etc.), the act of drawing that line defines at least one system or set as belonging to neither A nor non-A: the line itself. And since that line is the locus of our intervention into a universe, it necessarily defines the goalseeking system that drew the line as itself distinct from both A and non-A: it is their 'frame'".
The 'goalseeking system' in question being nothing other than living things, of course. But just as the 'location' of the first boundary-setting operation is constitutively undecidable - it belongs neither to A nor not-A (W: "it corresponds to nothing in the real world whatsoever") - so too is this second-order boundary setting: the line is methodological, and cannot be imputed to the 'world': doing so is nothing but metaphysical dogmatism, in the Kantian sense of the term.
In other words, you can't have your cake and eat it too: if you insist that the analog/digital distinction is made at the level of digital mapping to begin with, the projection of a more primordial ground of vagueness is simply that: a mythological projection that doesn't abide by the very epistemological constraints you ought to be beholden to. This is why, rather than take the path you do, Wilden correctly recognizes that this higher order 'cut' is just that - a higher order cut:
"[The second-order distinction] is of a different logical type from the line between A and B. The metalinguistic function of 'not' is in fact what generates the higher-order paradox, for 'not' is the boundary of the empty set, which like 'the class of classes not members of themselves' is both a member of itself and not a member of itself. And [second-order distinction] turns out to be another, higher order, substitute for 'not' : it defines an Imaginary line which belongs to the process of making distinctions, rather than to the distinctions themselves."
And... the point was being made that there are contradictions in our thinking, such as boundaries that are neither something nor nothing. But then set theory was brought up in regard to digital being a subset of analog (which makes no sense to me... but anyway). Set theory is founded on an odd contradictory notion called the transfinite... so nobody has a monopoly on contradiction.
Weird. The definition of vagueness is that it is the "not yet digitised". Vagueness is that state of affairs to which the principle of non-contradiction fails to apply. And thus it stands orthogonal to crispness, the state where A/not-A are busy doing their logically definite thing.
So in a set theoretic sense, the vague~crisp is the superset here. As I said earlier, it is Peircean thirdness in incorporating the whole of the sign relation - the three levels of logic that would be vagueness, particularity and generality. A/not-A is just the digital crispness which is secondness, or the logic of the particular.
So which is it - do vague and crisp map on to analog and digital or do they not? If they do, in what sense can you claim that the analog/digital distinction is derivative from vagueness (circularity)? If they don't, you're back to mythology.
Okay, so "functioning" just refers to operating or behaving in a particular way or engaging in a particular process. Is that much clear to you? And is it clear then when we say that humans can function in one way or another?
Analog design obviously preceded digital. It was characterized by the sorts of things we see in a radio: various kinds of filters, inductors, transformers and so forth. Digital electronics started replacing analog electronics back in the 1960s. The first digital telecommunications transmission system went into operation in 1960 and since then, the majority of electronic equipment has become digital or computer driven.
In this thread, the terms are being used metaphorically. It's not clear if everybody realizes that, although it's been pointed out several times in this thread that the metaphor is being stretched pretty far... maybe too far.
It is interesting to ponder that metaphor. It obviously runs straight into philosophy of math because we're talking about continuity vs discontinuity. Looking at it that way, the notion that the digital is parasitic on the analog is just wrong. If we persist in maintaining that the digital is "loose" on the analog, we're stipulating some specialized meaning for the terms. It wouldn't be appropriate to complain that people don't understand the jargon. You're going to have to explain it since you've made up something unusual.
He did that in the opening post:
So the analog/digital distinction is a continuous/discrete distinction.
I explained the alternative. It involves, first, the recognition that our measurement techniques are inadequate for measuring some aspects of the world, in particular the aspects associated with the assumed continuum. So we need to go back to a method of focusing on description rather than measuring. This is where the scientific method began, and made its greatest advances, developing out of practices such as alchemy. It involves endless observations, defining words and developing new words to avoid inconsistencies and contradictions between the observations of different individuals. The individuals concern themselves with producing a coherent and consistent description of the phenomenon, based on many varying descriptions.
In the act of describing, the digital method (rules of logic) is applied to the tool of description, language. In the act of measuring, we tend to believe that the digital method is applied directly to the thing being measured, but this is an illusion. In reality, the limitations of the digital method have been incorporated into the language of measurement. The result is that any observations that are measurements are necessarily theory-laden, due to the restrictions which are inherent within the measurement system. That is the position to which science has progressed today. Scientists rarely give themselves the freedom of separating the logic of digital restrictions from the language of description, to produce freely described observations. They cannot produce varying descriptions of the same phenomenon while using the same measurement system. Instead, they are constrained by a language of mathematics which has restrictions inherent within it, to produce observations which are bound by those restrictions. In other words, the perspective from which one observes is completely restricted by the measurement system, such that the possibility of varying descriptions of the same phenomenon has been excluded.
Quoting apokrisis
Well of course that's what I want. If you assume that there is an analog continuum in the world, yet you describe, or model, it as being digital, would you be satisfied with that? Either your assumption or your description is wrong. Can you live happily, knowing that you are involved in such self-deception?
I've stipulated what I mean by the terms multiple times, precisely defining them in terms of negation and reflexivity, meanings which are certainly not idiosyncratic to me, but freely employed in philosophical discourse. Moreover, defining the difference in this way is far more precise than the appeal to the discrete and the continuous, which are more like heuristics, to the extent that the one can simply scale into the other at a fine enough level of granularity. But this is exactly what I'm trying to avoid. In any case, the terms are certainly not meant as metaphors. Of course, if you think Terrapin's question makes any sense whatsoever even in the data sense, you're welcome to engage him (and even then, the original sense of the terms has less to do with data than with information).
Oh good, here's someone with some technical knowledge. Can you explain what a "square" wave is, or is that just a metaphor in itself?
This has been pointed out to you before: if there's some base level of granularity in your analog, then you're dealing with something that's fundamentally atomic. Therefore, at that fundamental level, there is negation. The lack of negation that was spoken of is only true of a continuum.
With a continuum, if you start talking about discrete points, you're talking about something that the rest of the continuum can only approach as a limit. That is exactly how digital (ideally) is different from analog. You don't pull an infinitely converging progression out with you when you pick out a point.
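That limit point can be stated concretely with a standard textbook illustration (my example, not anything from the thread):

```latex
% A discrete point of the continuum, e.g. 0, is only ever the limit
% of sequences drawn from the rest of the continuum:
\[
  \lim_{n \to \infty} \frac{1}{n} = 0,
  \qquad\text{while}\qquad
  \frac{1}{n} \neq 0 \ \text{for every finite } n .
\]
```

Picking out the point 0 does not drag the converging sequence along with it; the rest of the continuum only ever approaches the point, which is the sense in which the digital cut is different in kind from the analog flow.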
Quoting StreetlightX
The same information can be transmitted either analog-wise or digitally... so I don't know what you're talking about there.
? This is what I've been saying from the beginning. Not sure what's being pointed out anywhere.
Quoting Mongrel
I'm referring to the distinction between information and data which is a basic one in computer science.
Kind of... in electronics, we think of ideal square waves, knowing that in the real world, instantaneous changes of that kind don't happen.
As TGW mentioned, it may be that down at the quantum level there really are changes of that kind... for instance when a photon pops off of a hot iron atom... the energy level of the atom abruptly drops.
In that case, the thesis would be that nature is fundamentally digital.
Quoting StreetlightX
The same information can be transmitted digitally or by analog means. So.... I don't know what you're talking about.
Go on...
Us "working with a digital system" when we do logic is our brains doing something--binary logic. So unless our brains are digital systems, there's a problem with saying that only digital systems can do binary logic.
Well, everything extant is a system in nature. So you'd have to be saying that everything is either analog or digital.
Think about the notion that the digital is a subset of the analog... as if the analog is made up of discrete points and the digital is just some of them. We have here defined analog as something that is fundamentally atomic. It just seems continuous the way a movie seems continuous, though it's made of distinct frames. If that's an accurate characterization of nature, then analog is parasitic on digital... not the other way around.
What's really going on here is that continuous and discontinuous are opposites. They just are. A close kin to that opposition is infinite vs finite. Finite is not a subset of infinite. Infinite is not a quantity... it's boundlessness... it's a negative concept.
And what we find in the continuum is... lots of infinity.
The latter doesn't follow from the former at all. If I cut a cake into two and say that the two pieces now belong to the set 'Cake', it doesn't mean the cake was made up of pieces to begin with. I cut it. Except here, negation cuts the analog.
That's simply telling us how an individual is thinking about it.
So the measure of psi - as a measure - is not intrinsic to the analog gradient that is a pressure gradient. While I appreciate that the two measures of psi at different points of a pressure gradient may stand in a relation of contrariety rather than contradiction, not even contrariety is, strictly speaking, an analog value. Hence Deleuze: "It is difference in intensity, not contrariety in quality, which constitutes the being 'of' the sensible. Qualitative contrariety is only the reflection of the intense, a reflection which betrays it by explicating it in extensity. It is intensity or difference in intensity which constitutes the peculiar limit of sensibility" (Difference and Repetition).
Sorry for the delayed reply, Streetlight. I have to be brief for lack of time, so here's a simple question to cut to the chase: what precisely is our model of the "intensive"? How are we supposed to understand it?
I think that's the fundamental problem here, with Deleuze, and with the aesthetic approach to epistemology in general. Insofar as it purports to be a sub-representational account of thought, it cannot be represented - it literally cannot be thought or talked about. From what I have seen, this approach throws us either into the Myth of the Given (i.e. sense-certainty), or opens us up to charges of noumenalism (both of which you've encountered to some degree on this thread). Either we can "somehow" represent the sub-representational immediately (i.e. sense certainty), or we cannot represent it at all (noumenalism). Either way, we've come to a dead end.
Thoughts?
I don't like that smiley face. Can't we get different ones?
Try this. Consider that reasoning is something which you do, and it is also something which others do. Therefore it is something which goes on inside your mind, and also something which goes on in other places of the world, external to your mind. If we produce a conception of reason, we are describing all these external instances of reasoning, and making a concept of what it means to reason. Since we cannot see into the minds of all these thinking human beings, we look at their activities, compare the activities with how "I" would be thinking at the time of making that activity, and come up with a conception of reason.
Notice the difference between I am reasoning, which is a particular instance of reason, reason itself, and the other person is reasoning, which is an instance of a human activity which implies reasoning. Now consider Deleuze's distinction of intensive/extensive, as outlined by StreetlightX a couple of pages back, in relation to reason as an object of analysis. We now look at reason as an object to be analyzed for intensive and extensive properties. In looking at other human beings, we have access only to the extensive properties of reason, we can measure and judge the individual's activities for reasonableness, based solely on the extensive properties. But within ourselves, we have direct access to the intensive properties of reason. We can observe sensations, feelings, and emotions, internal things which have direct influence on reason. These things are fluid continuities, which only become particular, "digitalized" instances when relegated to memory. But in memory, the intensive properties, the fluid continuity of an undivided one moment to the next moment (reasoning in action), is removed. Thus in memory, the intensive properties of reasoning are removed, and when the memory is remembered, it is placed into the context of the intensive properties of that moment. This is also what occurs when I communicate a thought to you, the intensive properties of thinking are removed from the thought when it is expressed verbally, or written in phrases; you perceive extensive properties, as the thought comes to exist within the intensive context of your thinking mind.
From this, we may produce assumed "states" of mind which can be projected onto others, being understood as extensive properties. I have not actually read Deleuze's work, but I believe that he assigns a deep incompatibility between intensive and extensive properties, such that any conversion, which is to understand intensive properties as extensive, is deficient. Therefore we cannot truly get at the intensive properties of another individual's reasoning, because we only get there through being exposed to the extensive properties, and trying to infer the intensive. Nor can we get to the intensive properties of any existing thing by understanding them as extensive. However, we do have a certain understanding of the intensive, through inductive generalities, laws.
Quoting Hoo
So we can turn this world-for-us versus world-in-itself relationship upside down, invert it. The world-in-itself has intensive properties. Other than understanding those intensive properties as things which are described by laws, we can only have direct access to those intensive properties through our internal selves, and reason is necessarily there. Therefore the attempt to conceive of a world-in-itself as a world without reason is an exercise in futility. The world has intensive properties which must be accounted for in our conception. Our only means for producing a proper conception of the intensive properties of the world is through ourselves, because this is where we have direct access to intensive properties, and here we necessarily find reason.
Quoting Hoo
Here we approach what Steetlight has identified as "goalseeking system". A prejudice, or investment as you call it, is a past act, with a view toward the future. Sometimes, as time passes, and the view toward the future does not pan out, it becomes time to consider dropping the investment. The issue I referred to is that all of the goals are interrelated. So what I am referring to is not an issue of dropping one investment in favour of another more important one, it is a more complex issue. It is an issue of dropping one seemingly small investment, which has become evidently a wrong judgement. That small wrong judgement though, may support other larger, more important investments. So the question becomes one of should I maintain this small wrong judgement, which I know is wrong, and seems very insignificant, but it supports other significant, and more important things, or should I drop it, and destabilize those important investments.
The point is all in the way that we relate significance to insignificance. The judgement which has come to the mind as being a mistake, or wrong judgement, is now judged as being small, slight, or insignificant, in order to justify maintaining it, in spite of now knowing that it was a wrong judgement. It is deemed "insignificant", so that dropping it is seen as unimportant. But the motivation not to drop it, and therefore maintain it, despite it being now understood as wrong - which produces that designation of "insignificant" - is the fact that it will destabilize more important investments. This fact indicates that it really is significant, not insignificant, and the designation of "insignificant" is just another wrong judgement, carried out to support the original wrong judgement.
Well, the description of a square wave, as a wave which instantaneously changes from crest to trough, and vice versa, seems somewhat naïve to me. Wikipedia suggests that the effect is produced through the use of harmonics, and filtering out unwanted aspects.
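That harmonic picture can be sketched numerically: the standard Fourier series builds a square wave out of odd harmonics, and any finite sum of them still passes continuously through zero, so the "instantaneous" jump is an idealisation. A minimal sketch (my illustration, using only the textbook series):

```python
import math

def square_wave_partial(t, n_harmonics):
    """Partial Fourier sum for an ideal square wave of period 2*pi:
    (4/pi) * sum of sin(k*t)/k over the first n odd harmonics k."""
    return (4.0 / math.pi) * sum(
        math.sin(k * t) / k for k in range(1, 2 * n_harmonics, 2)
    )

# More harmonics steepen the edge toward the ideal jump, but every
# finite sum is still a smooth, continuous (analog) waveform.
for n in (1, 5, 50):
    print(n, round(square_wave_partial(math.pi / 2, n), 4))
```

Filtering out the higher harmonics, as the post mentions, just rounds the corners further; only the unreachable infinite sum would be genuinely discontinuous.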
Check my preceding reply to Hoo, for an interpretation of this issue.
Quoting Metaphysician Undercover
I see what you're saying, I think, but that image in my mind/reason of minds/reasons external to my reason is still an image within my own mind or reason. "Not-my-mind" is like an empty negation in a strict logical sense, it seems to me. There's my-reason-for-itself which I model in my mind among other reasons-for-others. But all of this is unified in my concept system. All of this modelling of modelling gets very tangled. I do like the idea of looking at activities.
Quoting Metaphysician Undercover
We generally agree here, I think. But it seems the world-in-itself remains an empty negation. We have a complex, conceptual image of mind-independent reality, but this "mind-independent reality" is constructed exactly from our own concepts. Another way to think of the "Real" (mind-independent) is as that which resists mere thinking or redescription. It's in the way of our desire. Its otherness is derived from its opposition to our projected future. This is largely just us learning to parse the lingo of the other, perhaps. I believe there is a world out there and that there are other minds out there. And yet this is a belief and therefore within my own "larger mind" in which I model my mind among minds, etc. And then we have an infinite nesting of this structure. Tangled.
Quoting Metaphysician Undercover
For me the wrongness of a judgement is one and the same with its dissonance in the context of other judgements. Whether one should drop a "wrong" judgement supporting more important investments will itself be determined by all of the rest of the networked investments. To ask after a "general solution" is moving more in the direction of "reason" as I originally intended it as a normative image for thinking. Is it reasonable-for-our-community to maintain such an investment?
Quoting Metaphysician Undercover
As I see it, there is only the pressure of prejudices upon prejudices, so the mistaken judgement is already in conflict with one set of prejudices even as it supports another set. Thinking synthesizes new prejudices, through inference and metaphorical leaps, and prunes them as well, if it doesn't abandon them altogether. There are also shifts in intensity. We strive toward flow. We don't want to lock up like one of Asimov's robots tangled in its own directives. Even here, as I see it, we are working on this system at an extremely self-conscious level, in this system's image of itself as system, etc. (And yet the notion of this system is just a prejudice we project upon the Real that resists, it seems.)
The answer is the same as before. When we are talking about the ontology of a modelling system, we have two realms in play - the material and the symbolic. And the vague~crisp can apply as a developmental distinction in either. And indeed to the modelling relation as a whole. The vague~crisp is about a hierarchy of symmetry-breakings, a succession of increasingly specified dichotomies.
So in the symbolic realm, a vague state of symbolism is indexical. A still vaguer state is iconic.
If you say "look, a cat", that's pretty definite. If you point at a cat, I might be a little uncertain as to exactly what your finger indicates. If you make mewing and purring noises, I would have to make an even greater guess about the meaning you might intend.
So as I argued using the example of the wax cylinder, informational symmetry breaking can be weak because it is easily reversible - still strongly entangled in the physics of the situation - or it can be strongly broken in being at the digital end of the spectrum and thus as physics-free as possible.
If I were to say "look, the universe", then physically the words involve no more effort than talking about a cat. But pointing gets harder, and pantomiming might really work up a sweat.
But then any form of communication or representation has already crossed the epistemic-cut Rubicon in creating a memory trace of the world, and so made the step to being physics-free. So even vague iconicity is already crisp in that sense. And thus there is another whole discussion about how the matter~symbol dichotomy arose in nature. And a further whole discussion about whether the abiotic world - with its dissipative organisation - has pansemiotic structure, and so this notion of "digitality" as negatively self-reflexive demarcation (or the constraint of freedom) has general metaphysical import there.
We can see that discrete~continuous is just such a general metaphysical dichotomy - the two crisp counter-matched possibilities that would do the most to divide our uncertainty about the nature of existence. And I would remind you of your opening statement where you said this was all about a generic metaphysical dichotomy that applied to all "systems"....
Quoting StreetlightX
So that sweeping claim is what I have been addressing. And my argument is that when it comes to reality as a system, it is just the one system - formed by dividing against itself perhaps.
This is why I find your exposition confused - although also on the right track. So I tried to show that to resolve the dualism implicit in your framing here, we have to ascend to Peircean triadic semiosis to recover the holism of a systems' monism. We have to add a dimension of development - the vague~crisp - so as to be able to explain how the crisply divided could arise from some common source.
Your opening statement would be accurate if it made it clear that you are talking about symbolic systems or representational systems - systems that are already the other side of the epistemic cut in being sufficiently physics-free to form their own memory traces and so transcendently can have something to say about the material state of the world.
But instead you just made a direct analogy between analog~digital signal encoding in epistemic systems and continuous~discrete phenomena in ontic systems.
Now again, there is something important in this move. It has to be done in a sense because the very idea of a physical world - as normally understood in its materialistic sense - just cannot see the further possibility of semiotic regulation, the new thing that is physics-free memory or syntax-based constraints. So you can't extract symbols from matter just by having a full knowledge of physical law. As you/Wilden say, the digital, the logical, the syntactical, appears to reach into the material world from another place to draw its lines, make its demarcations, point to the sharp divisions that make for a binary "this and a that".
So saying in a general metaphysical way that the material world is analog, and the digital is sprung on this material world from "outside itself" as a further crisply negating/open-endedly recursive surprise, is a really important ontological distinction.
But then confusion ensues if one only talks about the source of crispness and the fact of its imposition, and neglects to fit in its "other", the vagueness which somehow is the "material ground" that takes the "formal mark" of the binary bit. Or even the analog trace.
So to talk generically about reality as a system - which indeed is a step up from process philosophy in talking about symbol as well as matter, hierarchy as well as flow - is where we probably agree in a basic way. Structuralism was all about that. Deconstructionism was also about that - in the negative sense of trying to unravel all symbolic distinctions. Deleuze was about that I accept.
But again, the metaphysics of systems is always going to be muddy without being able to speak about the ontically vague - Peircean Firstness, Anaximander's Apeiron, the modern quantum roil. Sure we can talk about grades of crispness - iconic vs indexical vs symbolic. But to achieve metaphysical generality, we have to be able to define crispness (computational digitality, or material substantiality/particularity/actuality) in terms of what crispness itself is not.
And to return to your OP.....
Quoting StreetlightX
...this is where your keenness to just dichotomise, and not ground your dichotomy as itself a developmental act, starts to become a real blinkering issue.
Analog signals are still signals (as Mongrel points out). They are differences to "us" as systems of interpretance. An analog computer outputs an answer which may be inherently vaguer than a digital device, but it used to have the advantage of being quicker. And also even more accurate, in that early digital devices were 8-bit rather than 16-bit or 64-bit - or however many decimal places one needs to encode a continuous world in floating-point arithmetic and actually draw a digitally sharp line close enough to the materially correct place (if such a correct place even exists in a non-linear and quantumly uncertain world).
So whether variation or difference is encoded analogically or digitally, it already is an encoding of a signal (and involves thus a negation, a bounding, of noise). Then while the digital seems inherently crisp in being a physics-free way to draw lines to mark boundaries - digital lines having no physical width - in practice there still remains a physical trade-off.
The fat fuzzy lines of analog computing can be more accurate, at least in the early stages of technical development. The digital lines are always perfectly crisply defined whether they use 8-bit precision or 64-bit precision - this is so because a continuous value is just arbitrarily truncated (negated) at that number of decimal places. But that opens up the new issue of whether the lines are actually being dropped in the right precise place when it comes to representing nature. Being digital also magnifies the measurement problem - raises it now to the level of an "epistemic crisis", i.e. the fallacy of misplaced concreteness.
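The "arbitrary truncation" point can be made concrete in a few lines. This is my illustration rather than anything from the thread; the 8-bit/64-bit talk maps roughly onto the number of places retained:

```python
def truncate(x, places):
    """Cut a continuous-valued reading off at a fixed number of decimal
    places. The line drawn is perfectly crisp at any precision, but
    whether it falls in the 'right place' relative to the underlying
    value is a separate question."""
    scale = 10 ** places
    return int(x * scale) / scale  # int() simply discards the remainder

reading = 3.14159265358979  # stand-in for a continuously varying quantity
for places in (2, 6, 12):
    print(places, truncate(reading, places))
```

At every precision the cut is equally definite; adding decimal places changes where the negation falls, never how sharp it is, which is the sense in which digital crispness is independent of accuracy.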
So it just isn't good enough to say analog signals can be signals without the need for negative demarcation and the open-ended recursion that allows. A bell rings a note - produces a sine wave - because vibrations are bounded by a metal dome and so are forced to conform to a harmonic whole number. Identity or individuation does arise in analog processes - in virtue of them being proto-digital in their vaguer way.
Yes, this is a complication of the simpler starting point you made. It is several steps further down the chain of argument when it comes to a systems ontology. And as I say, you/Wilden are starting with a correct essential distinction. We have to pull apart the realms of matter and symbol to start to understand reality in general as a semiotic modelling relation with the power to self-organise its regular habits.
But for some reason you always get snarky when I move on to the complexities that then ensue - the complexities that systems ontologists find fruitful to discuss. The vague~crisp axis of development being a primary one.
Quoting Metaphysician Undercover
So basically you want to use words, not maths. And my point is that there is a reason why maths is where we arrive. Logic is itself a branch of maths in its highest state of development, you realise?
So first you are not talking about a different method of reasoning and measurement, just advocating for a less crisply developed level of reasoning and measurement.
And then it is not as though I am saying there are no dangers in a more abstract level of discourse about nature. We are in some sense starting to work blind - allowing our formal tools to take over the job of explaining nature.
But this is the way things have gone because pragmatically they have worked. Maths is unreasonably effective as they say. Reality is surprisingly intelligible.
So your call to a more verbal and "picture in the head" level of metaphysical exploration is not actually an alternative method, just a return to a more primitive mode of scientific reasoning.
Now there is no harm in doing some of that too. That is the way we would expect to start to develop some actually fresh insight which - if it works out - could be properly mathematised. But in being a preliminary activity, it wouldn't replace the higher level of abstraction that mathematical discourse can attain. It is not an "alternative" in that sense.
I would instead say that maths is a branch of logic. It's a specialized form of logic, and that's what makes it so precise. But the same thing which makes it so precise, its speciality, also limits its scope, or range of applicability.
Quoting apokrisis
It is not reasoning which I am talking about, it is observation. So I beg to differ. Description refers to qualities in general, measurement refers to quantities. A quality is an attribute, or property of a thing. A quantity is a particular type of attribute. So if you carry out a scientific method of empirical observation which deals only with measurements, quantities, then the qualities which cannot be measured are neglected.
I am not "advocating for a less crisply developed level of reasoning and measurement". I am advocating for a more comprehensive form of observation, one which considers all qualities, not just those which we have the capacity to measure.
Quoting apokrisis
Again, I beg to differ. I am not calling for a more primitive mode of reasoning, I am calling for a less narrow-minded form of observation.
Quoting apokrisis
That is the point, precisely. It is truly an alternative method, because science has now progressed to the point where all credible (objective) observations must be measurements. But if you consider, as I suggested, that there are qualities within the world that we haven't got the capacity to measure as quantities, then to understand those qualities we need to proceed with observations which are not measurements. As we've learned from the past, it is only after we've developed an adequate understanding of different qualities, through observation, that we devise the appropriate mathematics required to measure them.
Either way, the point is that they are the development of a more abstracted level of language. And increased precision doesn't have to mean a lesser scope. Quite the opposite in fact. Greater generality and greater particularity go together here.
Quoting Metaphysician Undercover
More nonsense. Science talks about qualities in a maximally abstract fashion - notions like time, space, energy, information, entropy. And it is that clarity about qualities that engenders clarity about quantification.
Quoting Metaphysician Undercover
...and ignoring Occam's razor. There is a good reason for wanting to quantify reality using the least number of qualitative concepts.
Quoting Metaphysician Undercover
Do you have a list of these unmeasurables in mind? I guess it consists of the usual things like poetry and spirit; the mind, the divine, the meaningful, the aesthetic; beauty, good and truth.
You see, I reach a different conclusion when we arrive at such abstractions without apparent ways to quantify them - except by socio-cultural appeals to "look inwards and experience their phenomenological reality". To me, this shows we just don't have a philosophical-strength understanding of what we want to talk about.
The only problem for your view seems to be that whatever philosophical implications we might think are inherent in the maths-based science cannot themselves be expressed in mathematical language.
Why do you say "cannot" as if a no-go theorem applied. ;)
But I'm not really saying that the truths of maths or logic can only be articulated in an absolutely general syntax of operators and variables. We can use ordinary (technical) language to talk through the equations with more semantic background. We can translate to a certain extent back downwards, just as we can abstract from ordinary speech towards a mathematical expression.
So some kind of translatability is presumed. All scientists, metaphysicians and mathematicians have a native language through which they were introduced steadily to some domain of high abstraction.
But then something important still usually feels lost in translation when they have to go from abstractions back to words. And what is lost is the clarity gained by abstracting from words to abstractions.
That makes sense. Clarity means determinability. And mathematical and logical propositions are more determinable than empirical truths. The former two are sharpened up when expressed in formal symbolic languages, rather than ordinary informal language.
But philosophy, although it might be informed by formally derived propositions, can be done only in ordinary informal language, and much of what it is concerned with just consists in thinking about the indeterminable as rigorously as possible. Epistemology, metaphysics, aesthetics, ethics; nothing is precisely determinable in any of them, so what "rigorous" means will only be something like "consistent with your (necessary) presuppositions, whatever they might be". The content of our presuppositions is itself never necessary; what is necessary is the (at least provisional) holding of one or another presupposition, in order to get started.
I think this passage is explaining what I was trying to say... that if you have a cake for which the law of excluded middle fails, you can't just slice the cake.
The issue is more subtle than this, although I admit that in my haste to distinguish intensive (analog) differences from the Kantian 'thing-in-itself' I moved too quickly. First, sub-representational intensities are meant to account for extensional magnitudes: in Deleuze's words, they are the sufficient reason of all phenomena, "the condition of that which appears". Second - and here is where I moved too fast - we don't know intensity 'directly', but rather "we know intensity only as already developed within extensity, and as covered over by qualities."
As far as the logical status of intensity goes, intensity thus occupies an undecidable place within any system of representation: it can only be known through representation, but it is nonetheless not of the representational register. Its status is strictly correlative to that of the digital cut itself, which neither belongs to the system of representation nor is merely external to it. Hence the paradoxical status of intensity with respect to the question of knowledge: "[Intensity] has the paradoxical character of the limit ... [It] is both the imperceptible and that which can only be sensed."
Here is where things get complicated, but I'll try and do my best to explicate the ideas. If you recall that what's at stake is a 'critique of pure logic', then the idea is to introduce 'extra-formal'/'real' constraints on the exercise of what might otherwise be purely syntactic logical manipulations which might simply follow transitively from an established set of axioms. For Deleuze, intensive differences are precisely what force 'real life' (extra-formal) constraints of 'existence' on logic, making logic no longer a formal and arbitrary play of symbolic manipulation, but beholden to a specific existential situation, as it were.
Thought - which just is representational - must be ‘forced' to think under the aegis of what Deleuze refers to as an ‘encounter’ with sub-representational intensities which impose 'real constraints' on thought. These constraints shift the modality of thought from the order of the arbitrary to the necessary: "if necessity is only ever the necessity of an encounter, and of a relation that this encounter gives rise to within us, a relation whose nature cannot be known prior to the forced movement it induces, then we must reconsider the meaning of the arbitrary. The concern of critical philosophy cannot be bound up with evaluating truth from a position of relative or extrinsic indifference … When truths are separated from the necessity of an encounter they become abstract, which is to say, they are reduced to being merely possible or hypothetical.” (Kieran Aarons, The Involuntarist Image of Thought).
There’s a lot more to say here. I’ve not really given a full blown account of intensive differences, nor the manner in which they force us to think, so much as focused on attempting to answer the charge that buying into the notion plunges us into the myth of the given. At most, I’ve focused on the status of intensive differences with respect to thought, but I’ve already gone on too long. By all means ask any follow up questions though, because these are bloody good thought-encounters for me. But I don’t want to prattle on too long. In the meantime, I'd direct you to this paper by Peter Kugler which takes up exactly how to make sense of the above using Ryle's notion of categories. Will probably elaborate in a later post if you want.
No, but you can make the law of the excluded middle apply by imposing a rule which would, on that basis, arbitrarily split said cake. That’s the whole point of digitisation: you take something that cannot be ‘naturally’ split, then you arbitrarily define a rule by which to impose a distinction on said continuum, then you use that rule to split the continuum from the ‘outside'. Of course, this rule will always leave a remainder, in the form of self-referential paradoxes. In math, this rule is the empty set, and its corollary, zero.
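The move of imposing an arbitrary rule on a continuum can be sketched as a toy analog-to-digital converter. The threshold value and function names here are my own illustrative choices, not anything in the argument; the point is just that the cut is stipulated from outside, and a value landing exactly on the cut is the undecidable 'remainder':

```python
# Digitising an analog continuum by imposing an arbitrary rule (a 'cut').
# The cut is not 'in' the continuum; it is imposed from the outside, and
# a value sitting exactly on the cut is the remainder the rule cannot
# decide without a further stipulation.

def digitise(value, cut=0.5):
    """Map a continuous value to a binary A / not-A distinction."""
    if value == cut:
        # The cut itself belongs to neither side.
        raise ValueError("the cut is undecidable without a further rule")
    return "A" if value > cut else "not-A"

signal = [0.12, 0.48, 0.51, 0.93]
print([digitise(v) for v in signal])  # ['not-A', 'not-A', 'A', 'A']
```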
This is how you do it: take a set, S. Then you find the complement of S relative to itself, which just so happens to be the empty set, ∅ (S − S = ∅). Now that you’ve done this, you’re in a great position, because the empty set plays a double role. Not only is it the complement of S, it is also a subset of S, to the extent that every set contains the empty set as a subset. Note that the empty set is thus both ‘inside’ and ‘outside’ of S, occupying exactly the paradoxical place which we said a rule for distinction would occupy.
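Both halves of that double role can be checked mechanically with Python's built-in sets. This is a loose model only: Python sets are not ZF sets, and note that being a *subset* of S is not the same as being an *element* of S:

```python
# The empty set is a subset of every set (vacuously: it has no element
# that could fail to be in S), and S minus itself is the empty set.

S = {1, 2, 3}

assert S - S == set()          # the 'complement' of S relative to itself
assert set().issubset(S)       # the empty set is a subset of S
assert set().issubset(set())   # ... even of itself
assert frozenset() not in S    # subset, yes; but not an *element* of S
```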
Having done this, you can generate the entirety of the number line by asking how many elements belong to the empty set (=1), and then recursively asking how many elements belong to that set and so on ad infinitum. Ta da. You’ve now digitised the continuum.
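The recursive generation being described is, in standard set theory, the von Neumann construction: 0 is the empty set, and each successor is the set of all the numbers built so far, so counting a number's elements recovers the number. A minimal sketch using frozensets (so that sets can contain sets); note it generates only the natural numbers, and reaching the reals takes much more machinery:

```python
# Von Neumann construction of the naturals from the empty set:
# 0 = {},  n + 1 = n ∪ {n}.  Each number is the set of its predecessors,
# so asking 'how many elements?' gives the number back.

def successor(n):
    return n | frozenset([n])

zero = frozenset()        # the empty set plays the role of 0
one = successor(zero)     # {∅}
two = successor(one)      # {∅, {∅}}
three = successor(two)    # {∅, {∅}, {∅, {∅}}}

for k, num in enumerate([zero, one, two, three]):
    assert len(num) == k  # counting elements recovers the number
```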
The problem, of course, as with any digitisation, is whether or not 0 belongs to it. The answer is strictly undecidable. Wilden: "zero is not simply a number as such, but a rule for a relation between integers… zero is implicitly defined as a meta-integer, and indeed its definition is what provides the RULE for the series of integers which follow it.” Zero, like negation, is a higher-order, reflexive rule about the continuum on the basis of which we can divide it, even though we cannot situate either negation or zero properly in that continuum itself.
Which is exactly the point.
"And that's how we make the golf ball go into the hole!" -Zeno
That's my Zeno impression. Later, I'll do my Aristotle explains what all of this has to do with God.
Quoting StreetlightX
It appears that when you asked how many elements belong to the empty set and came up with one, you were already thinking in discrete terms.
Quoting StreetlightX
Doesn't strike me as intuitional to say that Zero is a higher-order, reflexive rule about the continuum on the basis of which we can divide it. In fact it makes close to Zero sense to me. Um... what's the basis for this rule, then?
Yes, and? A more fun way to understand the whole deal with the empty set is that it's like distinguishing the cake from the not-cake, which means that you take the continuum as such as a discrete element. But of course, there is no not-cake 'in' cake. And then you work your way from there.
Quoting Mongrel
Take it up with math, not with me.
Quoting Mongrel
The basis is always methodological. What are you trying to achieve with your rule?
You can't take it up with math. You can ask a phil-o-math person... which would obviously be Nagase, but he's busy writing some thesis.
To review.. the criticism that was brought was:
That the digital doesn't just fit "loosely" on the analog. I think you're agreeing with that. You note that we use Zero to create a sliced cake... apparently using the Unslice-able Cake as the primal One.
I've found my interests coming back around to Leibniz lately. This is the second time.
I might put it to my fellow mods that all discussion take place in terms of cake now btw.
You can't digitize the Continuum Cake. It has a high glue content and it just stretches forever if you try to take a piece out of it.
You can bake an entirely different cake that can be sliced because it's fundamentally atomic to begin with.
Different tack: I think we can discover whether the categories we're talking about are apriori or aposteriori using the Locke/Hume/Kant trick of asking about what is and isn't imaginable.
Can you imagine a cake that can't be sliced? I say no. Therefore, digital is apriori.
Can you imagine that space itself is atomic? I say no. Therefore, analog is apriori.
They both are. They're conditions of knowledge of the cake.
Not quite sure what's supposed to be going on here, but whatever it is, you can't generate the Reals that way.
I would like to suggest that what you're calling "the critique of pure logic" really boils down to this: pointing out that classical accounts of representation do not (and cannot) account for the processes of concept creation and/or concept revision (e.g. "creative problem solving", "learning", etc.). The failure of classical logic in this regard is in turn grounded in a failure to deal with what you have called (via Deleuze) "the encounter" - which is the event in which some schema of representation is forced to change through confrontation with "the world" via sensation and perception. The encounter confronts us with "the problem", prompting the revision of representation that is "the solution" to the problem.
So in Deleuze we have this thread of ontological duality running through his philosophy, and manifesting in the interrelated dichotomies of problem and solution, sub-representation and representation, intensive and extensive, and (ultimately) virtual and actual. On this account, what we call "experience" is just what happens in the "in between" space of the interminable systole and diastole (i.e. "the eternal return") of the dynamics of these mutually immanent "poles" of reality.
I'm on board with the critique as far as it goes, but am not so sure about the alternative being provided. Again, one has to wonder as to the epistemological status of the virtual given that it must necessarily remain "papered over" by representation. If we subtract out the contents of our representations of the virtual, what's left? The noumena? The shadowy realm about which we can know nothing more than that it causally constrains our representation of it?
Personally, I prefer the Peircean strategy for dealing with the noumena by interpreting it in primarily epistemological rather than ontological terms. So instead of being the shadowy, causal underbelly of the world, it is transposed into the content of the ideal limit of inquiry (e.g. regulative rather than constitutive). It is still a limit concept, but it no longer entices us towards the intellectual bankruptcy of mysticism, and instead pushes us toward the satisfaction of the insatiable desire to know. On this view, the world that causally constrains thought is just the world as we have come to represent it so far (what else could it be?) - that is, the world as described by science and (where science fails) common sense. This position is not free from problems, but what position is?
And in that vein, I'll also state that the more I engage in these kinds of discussions the more it seems that the selection of one's metaphysics and epistemology reduces to a matter of personal taste and temperament. Every metaphysics/epistemology has its strengths and weaknesses. None is immune from the confrontation of certain vexing problems that seem to be inherent within the structure of thought itself. I'm rambling now, so I am going to end on that slightly pessimistic note.
If you could demonstrate that you understood it, your rejection would be a lot more convincing.
Don't forget that I've always said biosemiosis is definitely something new in nature - the development of full-blown digital-strength symbolism to allow for autonomous systems within the Universe.
Physiosemiosis would be vaguer in just being analogic or iconic. And it would be even these in a vaguer sense as the interpreter is "the Universe" as a system - a material system without yet any symbol systems operating within it in their autonomous (not-A) fashion.
So pansemiosis - as Salthe works to define it - is simply the assertion that the Universe is self-organising and comes into existence as a global regulative habit. It is a view rooted in dissipative structure theory and far-from-equilibrium thermodynamics. Natural law is like the self-closure that is the eruption of constraining convection currents in a Benard Cell. Semiosis speaks to the formation of the negentropy or memory by which a Universe becomes its own vehicle for a generalised production of entropy.
Thus it seems in all these ways precisely a thesis that you would agree with. You would have to explain to me how it says something different in your view.
Remember also that the new thing is that the biophysics of the nanoscale has now empirically identified the physical point where a transition from physiosemiosis to biosemiosis can happen - or indeed, is inevitable. I wrote that up in this thread - http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html
So now we have identified a convergence point where material being has a critical instability - an edge of chaos cusp of order~disorder - that allows "digitality" in a physically real sense.
A problem with your highly abstracted exposition is that you make a huge mystery of how the digital cut can be imposed on the analog world - the slice that cuts the cake. Somehow the cake breaks apart as intended without your knife physically doing anything - waving it wishfully or threateningly suffices.
Your use of Wilden's computer analogy encourages this. A computer has just this kind of symbolic disconnection from the world. The software is granted the security of utterly stable hardware and so doesn't have to think anything about its operation. Whereas with life, and semiosis generally, the situation is the precise opposite. It is all about the regulation of a fundamental instability, a fundamental vagueness. And the more on the cusp of the edge of chaos things are, the greater also the semiotic range of regulative possibilities. (Have you ever read Scott Kelso for example? - https://mitpress.mit.edu/books/dynamic-patterns)
So despite the fact you normally claim to be an enactivist, in this thread you have argued from the basis of representationalism - analog and digital computation both being ways to represent the world. And so any digital cut remains virtual rather than actual. The computer can click and whir away doing its digital or analog thing and it makes no bloody difference to the world unless somebody - usually a human - takes notice of its syntactical mapping and treats it as a sign of something about the world.
Semiosis - like enactivism - says that is ridiculous. The digital cut has to be a real cut out in the world. The cake must be sliced - or at least nudged just enough for it to reorganise itself into two parts because it was on the cusp of just such an entropic bifurcation.
So this is the mystery that semiosis solves.
The regular mechanical way of looking at the world presumes that the ground of any hierarchical complexity must be rock solid stable. You have to have something crisp and definite - like atoms - to begin any construction work. The cake is there and is never going to cut itself because cakes have had any such dynamism or self-organisation baked out of them. And all that makes it a real material mystery how any amount of symbolic activity - analog or digital computation - is going to make a difference. The cutting can be imagined, yet where is the power to execute?
But the self-organising semiotic view of the world says instead that you get these major transition zones due to criticality. Now reality is as unstable as it can be - suspended between two states. And the slightest nudge can tip it in either direction. So there is a digitality inherent in the material state (it can distinctly go in either direction just due to spontaneous fluctuations). And then that digitality can be made extrinsic by a symbol system which retains only the slightest physical presence in that world. A system of signs can compute where and when cake self-cutting should happen. Then deliver the almost infinitesimal physical nudge that tips the balance.
So first the physical world does its bit by presenting the potential - some point of absolutely poised instability. And then a minimal bit of physical machinery - a nudging mechanism controlled by as much background symbolic computation as you like - can exploit that eminently controllable situation.
Thus the digital cut imposed in recursive fashion via a negative mark (a pointing towards whatever state a bifurcation happens not to be in) is no longer the kind of virtual phantom act it must be in your framing of things; it now has an actual physicality. It has a size. Indeed it has the particular universal scale now discovered by biophysics.
So I know you think you reject pansemiosis, Salthe, vagueness, and indeed anything that I might mention that you are not already familiar with. But really you are just in the process of getting there.
And one of the presumptions you might not realise you have been making is that existence must be founded in the stable, when the whole point of any view founded on process thinking - such as enactivism - is that it is instability which makes the very idea of regulation possible in the world.
I don't know if you're using math metaphorically here, but the complement of S is going to be relative to some set X. If X = S, then, yeah, the empty set is its complement. To say that the empty set is both inside and outside of S is a bit of a cheat. It's a subset of S but not an element of S (in general). I don't doubt that you're getting at something interesting about rules for distinction, though.
Quoting StreetlightX
Are you saying the empty set contains an element? (It definitely doesn't.) You then mention the continuum, but R is typically constructed with subsets of Q (cuts), etc. (Maybe you know all of this.) The measure of the computable reals is 0, so it's not such an intuitively satisfying "digitalization." But maybe this is all metaphorical. If so, perhaps that should be stressed.
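For what it's worth, the cut construction mentioned here can be sketched concretely: a Dedekind cut represents a real number as a downward-closed set of rationals, given by a membership predicate. Below is an illustrative sketch (my own example, not anything from the thread) of the cut for the square root of 2, with bisection used to home in on the real it defines:

```python
from fractions import Fraction

# A Dedekind cut for sqrt(2): the set {q in Q : q <= 0 or q^2 < 2},
# given here as a membership predicate on exact rationals.

def in_cut(q):
    return q <= 0 or q * q < 2

# Bisect against the cut predicate to approximate the real it defines.
lo, hi = Fraction(1), Fraction(2)
for _ in range(30):
    mid = (lo + hi) / 2
    if in_cut(mid):
        lo = mid   # mid is below the cut: move the lower bound up
    else:
        hi = mid   # mid is above the cut: move the upper bound down

assert abs(float(lo) ** 2 - 2) < 1e-6  # lo is now close to sqrt(2)
```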
Quoting StreetlightX
So this is from System and Structure. It looks like good reading, but it doesn't seem that we are in Kansas anymore (actual math), but instead in a realm where Freud is relevant and "presence and absence fill a continuum." That's cool. Just sayin'.
Set theory probably has the problem that it builds in the distinction SX hopes to derive. Its weakness is that the brackets that bound possibility are themselves so definite and unexplained as features of the world.
But there would be two ways of looking at this.
Either the brackets - {....} - exist in deus ex machina fashion as if someone outside constructed boundaries large enough to contain anything, and thus both everything and nothing (as the crisp complementary limits on vague anythingness!).
Or instead the brackets in fact just represent the simpler thing of being the emergent complementary limits on such a naked state of possibility. The brackets stand for the fact that possibility has its own inherent limits. In saying something is possible, everything and nothing, infinity and zero, already also exist in negative recursive fashion as now the places where everythingness and nothingness put a stop to somethingness.
So - and here is the difficult bit - the limits on being are precisely that which doesn't itself exist. A boundary is where reality stops. And so the boundary itself is unreal or non-existent - even if it seems to have brute causal presence in being "a limit".
This is why I objected to SX's idea of boundaries as something like a 1D line drawn across nature - a single dimensionless feature that somehow bisects reality to make it binary.
Instead - organically - the metaphysical-level logic is that of the dichotomy. The self-organisation that results in a system arising within its own opposing boundaries or limits. The crisp brackets of the set are formed as a result of the action arising within them. The contents are producing their own container - so as to be now definitely "the contents" rather than just vaguely that.
This would be why folk feel that category theory is a better foundation for maths than set theory. It has that embedded dichotomistic view in the mutually exclusive/jointly exhaustive formulation of "structure and morphism". Instead of the container and contents metaphor, we have an organic distinction of constraints and freedoms, organisation and change.
So set theory could be naturalised by recognising the opposed brackets as standing for complementary poles of being - the opposed limits you need to arrive at to have the third thing of the individuated something that can now stand between.
It is then a further thing to give a name to these limits - to call them out as it were, even though they are by definition precisely what does not exist (even as possibility!). So we can speak about infinity, we can speak about zero, as concrete real things. Just as we can talk about all the metaphysical-strength limit states like the discrete~continuous, vague~crisp, stasis~flux, matter~symbol, chance~necessity, part~whole, atom~void, etc, as being real in their limit state unreality.
And that is very powerful from a modelling or reality-mapping point of view. Just look at 2500 years of Western intellectual history. But it also makes us prone to the fallacy of misplaced concreteness that the process view warns us of.
One last point on SX's idea of boundaries as just lines: he would do better to consider Spencer-Brown's diagrammatic use of circles as the simplest shapes to form an inside vs an outside - a canonical act of digital symmetry-breaking. Or even better still, go further back to the source of those laws of form in Peirce's own diagrammatic re-formulation of logic.
http://mentalmodels.princeton.edu/papers/2002peirce.pdf
http://homepages.math.uic.edu/~kauffman/Peirce.pdf
As I see it, math is machine-like. "Here are formal definitions. Here are rules of inference. See how these definitions are related in terms of those rules of inference." The formal definitions tend to have intuitive appeal of course, but we aren't allowed to use intuition directly. The ghost of intuition must be incarnated in the symbolism.
As I see it, the formal definition of "set" tries to capture the intuition of "gathering up into a unity." All things as things are unities. The tail and the nose and the fur and so on have been gathered up as the dog, for instance. It's as if there is always already a logical circle drawn around any particular thing, perhaps giving it its thing-hood, cutting it out from the background automatically. But then sets are also (intuitively) the extension of properties, which surely inspired the axiom of extensionality.
Quoting apokrisis
I think we get this from writing R as (-inf, inf).
I can't respond to much of your post, since I don't have a feel for it. But I have checked out a book on Peirce, so maybe I'll understand you better after I read it.
That's right. Once you have axioms, you are good to go with the deductions. It all unfolds mechanically in a predestined fashion.
But what is the meta-theory about forming axioms - the semantic residue animating the unfolding syntax?
I would argue that it is dialectic or dichotomistic metaphysics. That is what presents us with our "binary" choices. We can posit the axiom of continuity - having identified it as one of two choices. Reality could be fundamentally discrete or continuous. Well, let's pick continuous for the sake of argument and run with that, see where it leads.
Quoting Hoo
Well the relevant axiom is the axiom of choice. It starts by presuming individuated (crisp and not vague) things, events, properties, whatever. And given that is the case, forming collections becomes trivial in being trivially additive and subtractive. One can construct any unity (or deconstruct it to leave behind "nothing").
Quoting Hoo
Or I would prefer to think of it in terms of the reciprocal limits defined by the notions of the infinite vs the infinitesimal. This is the strictest way of defining each limit on possibility in terms of its other.
Positive and negative infinity are hardly marking bounds in claiming to point in either direction in terms of the unlimited.
Quoting apokrisis
I like dialectic. That's the process. The thesis swells (via anti-thesis then synthesis, repeat) and becomes more capable.
I like the instrumentalist approach. It's not about what's behind the manifest image. It's about what we can do within the manifest image using our theories. Think "prediction machines" or "manipulation machines." It's probably natural for the scientist to think in terms of representation, as with a mathematical sense of "X-ray vision" that pierces through the manifest image. It would also be hard to do math as a sincere formalist. One wants to prove something about objects that exist inter-subjectively. So there are atoms and real numbers, but this "are" doesn't seem absolute. It flickers in the context of purpose and focus. How would we cash out reality as continuous? How would it be established? Our most predictive/manipulative theory based on the real numbers? Or on geometric intuition of flow?
Quoting apokrisis
The AC is often stated as the existence of a choice function. Are you sure you don't have another axiom in mind? I think the logical use of equality keeps things distinct in math generally, not just in set theory. We simply have x = y or not (x = y). All of x's properties are "naked" if we have the eyes to see it. Of course complicated deductions are not obvious, so some properties are invisible, although "already there" in some sense. (The relationship of time and classical logic is probably quite deep. )
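The choice-function statement mentioned here is easy to sketch for the finite case, where no axiom is needed because the function can be constructed outright; the Axiom of Choice is only substantive for arbitrary (possibly infinite) families. An illustrative toy version, with my own example family:

```python
# A choice function f assigns to each nonempty set in a family one of
# its own elements: f(S) is a member of S.  For a finite family this is
# trivially constructive; AC asserts such an f exists for *any* family
# of nonempty sets.

family = [frozenset({1, 2}), frozenset({'a'}), frozenset({3.0, 4.0})]

def choice(family):
    # Pick an (arbitrary) element from each set.
    return {S: next(iter(S)) for S in family}

f = choice(family)
for S, x in f.items():
    assert x in S  # the defining property of a choice function
```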
What I have in mind is the assumption that you can just pick out individuals and throw them into different contexts freely. But what if that identity was contextual? It's like imagining being able to scoop a whorl of turbulence out of a river with your bucket. So the AC shows that kind of assumption at work. But then all of maths pretty much assumes that.
Quoting Hoo
Geometry always beats algebra for me. But note Michael Atiyah's view that the two are dichotomous and reciprocal. Geometry is manipulation in space and algebra in time. And anything describable in the one reference frame can usually be flipped over into the other, as with symmetry groups or Cartesian curves. So dialectics or duality applies right at the heart of mathematical development.
See:
There is an important issue here. In philosophy, an axiom is a self-evident truth. In mathematics, an axiom may be anything which does not contradict the mathematical system which it is put to use within. So in philosophy, an axiom is necessarily true, while in mathematics, an axiom is a logical possibility.
Quoting apokrisis
So, as a philosophical axiom, we cannot just pick any axiom, it must be self-evident. We have evidence that objects are bounded, and "object" may be defined in such a way that an object is necessarily bounded, so we could pick an axiom such as "objects are bounded".
With respect to continuity though, as I stated earlier in the thread, that some aspect of reality is continuous, is implied through observations of reality, and inductive reason. Since it is implied, that some aspect of reality is continuous, this is not self-evident, we cannot pick continuity as an axiom. The assumption of continuity must be justified.
Well self-evident is always going to be a suspect claim.
But anyway, are bounds not self-evidently continuous? So if there are (discrete) objects, then continuity is also an aspect of your axiom of object boundedness?
I don't think it is self-evident that boundaries are continuous. A dotted line makes a non-continuous boundary. I think the best example of boundaries that nature gives us, is the boundary of a physical object, which we see with the visual sense, and touch, feeling it with the hand or other body part. The texture of those boundaries indicates that they may not be as continuous as they appear to be. Of course the science of chemistry indicates to us that the boundaries between substances cannot be considered to be continuous at all.
If we deny the reality of these boundaries, saying that the boundaries of physical objects are not real boundaries at all, what are we actually doing with this denial? We are denying the actual examples of boundaries, in favour of an ideal boundary. We simply assume that boundaries are continuous, as a mathematical type of axiom, an ideal which has not been justified. Then the boundaries which are shown to us do not fulfill the qualifications of the ideal, so we deny that they are boundaries. Now the ideal boundary must be justified as a true example, or it should be dismissed as not properly representing the boundaries which we know of.
That is why I suggested earlier in the thread, that we consider the boundary between future and past, in time. Perhaps this boundary can justify the ideal continuous boundary which you desire as an axiom.
Wouldn't it be leaky or .... vague?
Quoting Metaphysician Undercover
That just puts us back dealing with dichotomies as I routinely argue. We can have the ideal or axiomatic notion of a continuous boundary because we also have the ideal/axiomatic notion of what would be the most leaky possible boundary - one that is discrete instead of continuous, all holes and no bounds like a sieve.
So we have two true notions - the unbroken and the broken. And we can then measure anything in the real world by how close or far it is from those bounding ideal limits.
Dotted lines of course usually mean "tear here" so they are suggestions left for you to complete. They would be exactly halfway between unbroken and broken in that sense.
Oh, then we're on the same page. Math is charming because it escapes this mess by fiat. But away from math identity is tangled in context. "No finite thing has genuine being." (Hegel). The concrete reality (the complete reality) is singular. To understand a blade of grass fully is to understand the totality itself. Essences are describable as nodes on a network, utterly interdependent. The same thing applies to sentences. Meaning appears to me to be radically holistic. We would like it to be more atomistic so that we could normalize metaphysics. We can't get the metaphysical/philosophical axioms that MU mentions because meanings are context dependent. That's the temptation of taking propositions modulo actions. If two different strings of marks and noises function the same way as rules for actions, they are equivalent. This is just a normative rule of thumb. But this is why I'm not excited about metaphysical issues as they become distant from values or useful "framing" metaphors. The "language is a tool" metaphor (as opposed to "language is a mirror") is basically for me anyway the essence of pragmatism. We don't ask if a tool represents accurately. We see if it does what we want done. As we are fairly certain of our desires, it offers a streamlined epistemology.
Quoting Hoo
So let me see if I have this straight, the position you're arguing. It is useless to seek self-evident axioms, as there is no such thing, because meaning is context dependent. Therefore we should only use mathematical axioms, as apokrisis suggests, which have crisply defined, and fixed meaning within a mathematical system. This entails that anything which is logically possible is also true.
Sorry, I thought it was the axiom you had proposed. But instead your self-evident axiom is that objects are bounded.
So still my answer would be the same. Metaphysical-strength axioms seem self-evident when they result from dichotomous reasoning. If a pair of possibilities are mutually exclusive and jointly exhaustive, then in being the mutual limits on such possibility, and in exhausting all other possibilities, they would have the status of necessity.
And that has long been accepted of the continuous~discrete. Together they are as far as you could go in making a contrast between the connected and the disconnected, the integrated and the differentiated, the related and the isolated, etc.
But then as I say, my own take is that dichotomies only do produce ideal limits. And limits are boundaries in marking where reality ceases to be some thing. Which in the metaphysical case, is where reality ceases itself to exist. And so while reality might approach the ideal of either the discrete or continuous with asymptotic closeness, it can never actually arrive exactly there because the boundaries are not part of existence. They mark (in our minds) the limit, so the exact point where the business of existing has halted.
So now we could talk the same way about your own proposed dichotomy here - objects and boundaries. You can see how it is actually parasitic on the continuous~discrete as a metaphysical axiom. We can imagine the discrete, individuated, differentiated, isolated thing which is an object because we can imagine the complementary thing of it having a continuous, unbroken, integrated, related boundary - a boundary which is a global limit on the object in marking the point where all its discrete being suddenly stops.
So yes. The idea of a bounded object seems pretty convincing. But boundaries in reality are often pretty vague. Or if crisp, designed in fact to be leaky.
Any river or coastline is a pretty vague boundary. Tides and floods shift the margin between water and land continually. Tracing a river to its source in some clutter of springs and tributaries is always a contentious affair.
On the other hand, country borders, cell membranes, and other semiotic lines drawn across the world, are not just leaky, they are designed to be porous - porous in a way that is regulated. A border or membrane is a boundary which has to have holes so as to allow the object - the nation or organism - to make the right kind of material transactions to continue to persist as the kind of objects that they are.
So the idea of a bounded object is a crisp metaphysical ideal that, in reality, only ever exists in this leaky, approximate fashion.
Even a rock has vague bounds as an object. It is always subject to erosion. And at what point exactly - with metaphysical-strength or Platonic perfection - is some silicon or iron atom crossing the boundary from being part of the solid rock to part of its history of eroded material? Or is the mud on the rock part of the rock as "an object"? If not, why not?
And then where an object in fact has the power to self-define its own boundaries (when it is an organism), or when it is an artifact (like a nation or a plastic cup), where it is we who impose some idea of a definition, then really any boundary is a constraint imposed on material vagueness. It is regulation of erosive or dissipative processes, designed to reconstruct what the world would generally aim to deconstruct over time.
So on the one hand, we can easily imagine a world of bounded objects. We can axiomatise a metaphysical dichotomy in that fashion - one that is built up from ancient debates about the continuous and the discrete, the one and the many, to arrive at an atomistic conception of bounded objects.
But then when that axiomatised conception is put to the empirical test, we find that reality is different. It has a further developmental dimension to it. Reality is founded more on flux than stasis. The Universe is one vast sea of erosion. And now - metaphysically - its ultimate other must be the counter-move of regulative habit. Boundaries are really constraints on dissipative freedom - or vagueness. Boundaries are the semiotic information that form up stable object-ness in a fundamentally unstable world.
No, the point is that such axioms result from a description of what is, reality, not from dichotomous reasoning. The dichotomous reasoning follows the description. The description is "objects are bounded"; what follows from the dichotomous reasoning is that it is impossible that objects are not bounded. So if we assume the existence of something which is not bounded, it is impossible that this is an object.
Quoting apokrisis
See, contrary to what you say here, the dichotomy produced is between the ideal (not bounded) and the practical, the description of objects. The description is not an ideal, it is a representation, a model.
Quoting apokrisis
It's not a dichotomy which I proposed, it is a description, which is proposed as an axiom. It is not proposed as a dichotomy between objects and boundaries, but as a description of objects. Therefore "boundary" is to be read as a property of objects, not as dichotomous to objects.
Quoting apokrisis
Correct, in reality boundaries are porous, as you describe. So the question is: where do we get this idea of a continuous boundary? Boundaries, as we know them, are as you describe, yet we also want to assume an ideal boundary, the continuous one. If we cannot describe how this boundary could exist in reality, what it could be bounding, this supposed ideal is nonsense.
Quoting apokrisis
I don't see where you get the axiomatic dichotomy from. We have an axiom concerning the nature of an object, that it has a boundary, and we have an ideal which is "boundless". The ideal of the boundless must be described in a self-evident way to become an axiom. I believe that this was the ancient trick of the theologians, to demonstrate that the boundless (God) is self-evident.
Empirical claims about "what is" - the kinds of things people say as a result of common experience of the world - were the departure point for Ancient Greek metaphysical inquiry.
So in the world, we see all kinds of objects and non-objects. Is a cloud an object? Is the wind an object? Is a river an object?
Reason is then applied to the question - the unexamined assumption. So the starting point is only self-evident in the sense no one has really thought to question it systematically. It is only axiomatic in being acted upon without being philosophically considered.
Quoting Metaphysician Undercover
And I accounted for the conditions under which it can be considered a property of an object - if the object has the semiotic power to define its own boundaries. Otherwise the boundary is probably an idea that we ourselves impose on an unbounded nature. It is only us who might be concerned about identifying the true source of the Nile or deciding whether some bump on a landscape is a hill or a mountain.
Quoting Metaphysician Undercover
Well what bounded objects did you have in mind as an example? Let's see how necessary continuity might be to that idea of it being an object.
Quoting Metaphysician Undercover
Like the axiom of vagueness you mean? Surely you can see how it arises automatically via a dichotomy with the ideal of the crisp. To be absolutely crisp would be to be absolutely lacking in vagueness. And thus, transitively, the same must apply in the other direction.
So if you can tell me about boundedness in any absolute fashion, you will be also telling me about absolute unboundedness as its logical corollary.
And if you can't give that kind of crisp definition of a boundary, then - again logically - your idea of a boundary is rather vague and lacking in metaphysical-strength axiomatisation.
What I'm saying is that the "atoms" in these axioms are not so atomistic. They are nodes in a network, or rather they are nodes in billions of similar but differing networks. Math works by fixing meanings more or less exactly. I know the definition of a continuous function, for instance. I can enlarge what I know about continuous functions in terms of other defined objects via a normalized method (a formalized logic, although used informally). So the meaning evolves as relationships are deduced, but there is no disagreement about this evolving meaning. It's the meta-law that proof is the law.
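To make the point about fixed meanings concrete: the definition of continuity mentioned above is the standard epsilon-delta formulation found in any analysis textbook, and any two mathematicians will state it in essentially the same terms. A sketch of it in symbols:

```latex
% Standard epsilon-delta definition of continuity at a point c:
% f is continuous at c (a point of its domain) iff
\[
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
\quad |x - c| < \delta \;\implies\; |f(x) - f(c)| < \varepsilon .
\]
```

The meaning of "continuous" is fixed entirely by this formula; any dispute about whether a given function is continuous is settled by proof against the definition, not by negotiating what the word means. That is the contrast with philosophical vocabulary being drawn here.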
But for me philosophy is the supreme example of an abnormal discourse, even if one of its central fantasies is exactly the normalization of discourse - to define itself or science or rationality, etc. It's a permanent revolution, though. One doesn't play by the rules of the epistemology that one is trying to replace. Aren't "great" philosophers those who reinvent philosophy's self-image and method? This is largely done in terms of seduction by metaphors and narratives ("showing the fly the way out of the bottle") and not really so much in terms of refutations in a "word-math." Rhetoric partly succeeds by appeals to logic - I won't deny that. But language seems too soft for the sort of "word math" that I associate with lots of traditional metaphysics. We can argue from shared investments and assumptions, but this mass of investments and assumptions is a mess. I'm not saying we can't do it at all in a useful way, but I do look at utility as an epistemological principle. "The smallest unit of meaning is a personality as a whole."