You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

I am an Ecology

Streetlight November 28, 2017 at 05:59 15850 views 122 comments Philosophy of Science
Among the most basic concepts in ecology is that of succession. That is, ecologies tend to grow along a rough trajectory from the less to the more complex, from a rapid growth in biomass early on to a levelling-off of growth later on. As trees reach a certain size, they do not keep growing, they simply self-maintain at a certain size. Another feature of succession is precisely this ability to ‘self-maintain’: older, ‘legacy’ ecologies are far better at recycling their waste products, feeding them back into the system for further growth, rather than relying on ‘outside’ resources for their sustenance. Essentially, older ecosystems are more efficient - they waste a lot less than younger, still-growing ecologies, which are often spoken about as ‘leaky’ ecosystems, insofar as they contain fewer internalised ‘cycles’ that reincorporate waste products back into them.
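The growth-then-plateau trajectory of succession has the shape of the textbook logistic curve; a minimal sketch, with arbitrary parameter values, purely to illustrate the idea:

```python
# Discrete-time logistic growth: rapid early growth that levels off
# at a carrying capacity K, like the successional trajectory above.

def logistic_series(x0, r, K, steps):
    """Iterate x' = x + r*x*(1 - x/K) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / K))
    return xs

series = logistic_series(x0=1.0, r=0.5, K=100.0, steps=50)
# Early steps grow the biomass geometrically; later steps barely move it,
# as growth gives way to self-maintenance near K.
```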

Most here are already familiar with the concept of the food chain: X eats Y eats Z. But food chains can of course be cyclic: X eats Y poops out Z is consumed by X, as a virtuous cycle. Legacy ecosystems are chock full of such cycles, internalised cyclic metabolic processes that keep things ‘in’, rather than letting them leak ‘out’. Of course no ecology is completely closed in upon itself - most systems ultimately derive their energy from the sun, and succession can be seen as a measure of how efficiently that energy is used. The most efficient, interconnected, and diverse ecosystems are called ‘climax communities’, ones which have maximised the internalisation of their metabolic cycles. The 55-million-year-old Amazon rainforest is one such climax community.
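The ‘internalised cycles’ idea has a neat graph-theoretic reading: model who-eats-or-receives-what as a directed graph, and a metabolic cycle is a directed cycle in it. A toy sketch - the web and species here are invented for illustration, not taken from any real ecology:

```python
# A hypothetical mini nutrient web as a directed graph of flows.
# An 'internalised cycle' in the above sense is a directed cycle:
# material that re-enters the web rather than leaking out.

flows = {
    "grass": ["rabbit"],
    "rabbit": ["fox"],
    "fox": ["detritus"],
    "detritus": ["grass"],   # decomposers return nutrients to the soil
    "sunlight": ["grass"],   # external input: no ecology is fully closed
}

def in_a_cycle(node, flows):
    """Depth-first search: is there a directed path from node back to itself?"""
    stack, seen = list(flows.get(node, [])), set()
    while stack:
        cur = stack.pop()
        if cur == node:
            return True
        if cur not in seen:
            seen.add(cur)
            stack.extend(flows.get(cur, []))
    return False

grass_recycled = in_a_cycle("grass", flows)        # True: grass -> rabbit -> fox -> detritus -> grass
sunlight_recycled = in_a_cycle("sunlight", flows)  # False: energy flows in but never back
```

On this picture, succession towards a ‘climax community’ would show up as the graph accumulating more and more such cycles.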

One interesting thing about thinking in terms of ecological succession is that ecologies can become ‘sick’: ecological ‘stresses’ such as oil spills, forest fires, and so on tend to set the successional stage of an ecology backward: not simply a loss of biomass and life, but a loss of cycles, of interconnection, and of efficiency. Connections and networks are broken, and growth becomes more independent. In other words, ecological setbacks are not just ‘quantitative’ but ‘qualitative’: ecological disasters (when they don’t totally destroy the ecosystem in question) set back the very ‘stage’ that an ecosystem occupies, with ‘recovery’ being a matter of recovering not just the number of this or that species, but the cycles which sustain them.

Now, one cool way of looking at a single animal - in this case you or I - is precisely as a kind of bounded ecology: bound by skin, we are walking, talking systems of internalised cycles and metabolic processes. We display succession: early rapid growth that levels off after a while, while becoming more and more self-sufficient over time until we - ideally - reach a peak of efficiency at adulthood (without, of course, ever being entirely self-sufficient, just like no ecosystem is). And moreover, just like ecologies, we fall sick: just like ecologies, what tends to kill us is not just the death of this or that number of cells, but the failure of certain cycles that allow for self-maintenance. Our sickness too is qualitative, and not just quantitative. So as Eric Schneider and Dorion Sagan write: “an organism is an ecology writ small” - I am an ecology.

(That said, it's perhaps more accurate to say that I am an ecology with an in-built hereditary system - DNA and so on, which perhaps is what, in the last analysis, distinguishes an organism from an ecology properly speaking).

Comments (122)

Streetlight November 28, 2017 at 07:15 ¶ #128056
Oh, and to shoehorn in a point of politics, it might be argued, on the basis of the above, that philosophies of rugged individualism are thus philosophies of ecological infantilism, or else ecological sickness.
Shawn November 28, 2017 at 07:45 ¶ #128060
I wonder how you can reconcile rational self-interest with the above.

In other words, how much is enough?
Streetlight November 28, 2017 at 08:41 ¶ #128066
I'm not sure what you mean with your question. Like, what exactly would need reconciling, and why? Could you elaborate?
Shawn November 28, 2017 at 09:29 ¶ #128076
Quoting StreetlightX
I'm not sure what you mean with your question. Like, what exactly would need reconciling, and why? Could you elaborate?


Well, the issues you present are pertinent today in such cases as the failure to account for the negative externalities of the excessive carbon dioxide levels raised by the workings of the economy. It would seem that unrestrained economic growth thus has deleterious effects on the world. So, how do you address that issue manifest by the guiding principle of liberalism and neoliberalism that what is rational is to do what is best for one's self-interest?
Galuchat November 28, 2017 at 09:29 ¶ #128077
StreetlightX:Oh, and to shoehorn in a point of politics, it might be argued, on the basis of the above, that philosophies of rugged individualism are thus philosophies of ecological infantilism, or else ecological sickness.


This requires extending the metaphor from ecology to biology to sociology with life and complexity being points in common. Individualism and collectivism are types of cultural bias, not stages in cultural development or symptoms of cultural health.
fdrake November 28, 2017 at 09:45 ¶ #128081
Reply to StreetlightX

The reproductive behaviour of organisms can also be considered as part of an ecosystem though. This is why colony collapse disorder for bees is terrifying, no mo' bees is no mo' trees.

The image of ecological succession in terms of discrete developmental stages of the distribution of plant matter over an area is outdated. The most dated bit of it is the idea of ecological climax, which contains within it a sense of ecological equilibrium (self regulating/homeostatic interdependence); there's no evidence for this. The preferred view atm is one of dynamism and flux, focussing on the possible disturbances and potentials for the ecosystem rather than an arbitrary categorisation of stages of plant development. What can be said of the wolves in Yellowstone park? Should they be called part of the series?

As a historical note, the idea of succession actually predates the idea of ecosystem. Ecosystem as a concept was proposed to solve some of the conceptual problems associated with plant succession:

Tansley:It is now generally admitted by plant ecologists, not only that vegetation
is constantly undergoing various kinds of change, but that the increasing
habit of concentrating attention on these changes instead of studying plant
communities as if they were static entities is leading to a far deeper insight
into the nature of vegetation and the parts it plays in the world. A great part
of vegetational change is generally known as succession, which has become
a recognised technical term in ecology, though there still seems to be some
difference of opinion as to the proper limits of its connotation; and it is the
study of succession in the widest sense which has contributed and is contributing
more than any other single line of investigation to the deeper knowledge
alluded to.


You can read the amazing article from Tansley, where the word 'ecosystem' comes from, here.

Streetlight November 28, 2017 at 10:35 ¶ #128089
Quoting Posty McPostface
So, how do you address that issue manifest by the guiding principle of liberalism and neoliberalism that what is rational is to do what is best for one's self-interest?


By rejecting such ideas as among the most deleterious and damaging ones ever peddled by anyone, anywhere. Or more specifically, by rejecting the incredibly impoverished and anemic understanding of 'self-interest' that undergirds such horrible notions.
Shawn November 28, 2017 at 10:48 ¶ #128094
Quoting StreetlightX
Or more specifically, by rejecting the incredibly impoverished and anemic understanding of 'self-interest' that undergirds such horrible notions.


Well, as long as such behavior counts as what is 'rational', according to economic theory and such, and produces the maximum amount of utility, then the whole issue is a non-starter.
Streetlight November 28, 2017 at 10:59 ¶ #128099
Reply to Posty McPostface I'm not sure what is referred to here by 'such behaviour'. Neither is it clear to me which 'economic theory' you're referring to, given that the so-called 'rational economic actor' is, at best, a contestable model of human action even within economic theory, with economists themselves increasingly recognizing it for the abstract and entirely divorced-from-reality idea that it is.
apokrisis November 28, 2017 at 11:39 ¶ #128108
Quoting StreetlightX
Oh, and to shoehorn in a point of politics, it might be argued, on the basis of the above, that philosophies of rugged individualism are thus philosophies of ecological infantilism, or else ecological sickness.


If you are interested in the best account of this, try Stan Salthe’s story on the immature-mature-senescent arc of living systems. And it would account for social systems as well.

But your hope to tie rugged individualism to sick or infantile ecology is lefty nonsense.

A senescent ecology is just one so well adapted to a particular life that it becomes brittle, lacking in degrees of freedom to recover from perturbations.

An immature one by contrast can exuberantly spend degrees of freedom to recover from knockbacks, yet is rather wasteful in being not yet well adapted to some particular life.

You can cash that out in sociological and political terms. But not so crudely as you seem to want to suggest.
Streetlight November 28, 2017 at 11:47 ¶ #128109
Quoting fdrake
The image of ecological succession in terms of discrete developmental stages of the distribution of plant matter over an area is outdated. The most dated bit of it is the idea of ecological climax, which contains within it a sense of ecological equilibrium (self regulating/homeostatic interdependence); there's no evidence for this. The preferred view atm is one of dynamism and flux, focussing on the possible disturbances and potentials for the ecosystem rather than an arbitrary categorisation of stages of plant development.


This makes a heap of sense, and is a really nice corrective. Thanks.

Quoting fdrake
The reproductive behaviour of organisms can also be considered as part of an ecosystem though. This is why colony collapse disorder for bees is terrifying, no mo' bees is no mo' trees.


True, true. I guess it's more that living things have a 'dedicated' 'in-built' hereditary system (even though it's not the only hereditary system that living things have - i.e. the epigenetic, behavioural and symbolic systems charted by Jablonka and Lamb), whereas ecologies are more modular and not fixed by any particular system like that of DNA.

Streetlight November 28, 2017 at 11:56 ¶ #128115
Quoting Galuchat
This requires extending the metaphor from ecology to biology to sociology with life and complexity being points in common.


I think this is not only perfectly desirable, but has in fact an already-established legacy: the psychiatrist/ecologist Gregory Bateson, for instance, was perfectly happy to speak about 'ecologies' of legal systems, ideas, and even - in his famous phrase - 'an ecology of mind'. Basically any self-relating system composed of networks can be treated in ecological terms. Elsewhere, it's perfectly possible to treat something as abstract as an economy in ecological terms.
Galuchat November 28, 2017 at 12:23 ¶ #128122
Reply to StreetlightX Thanks for that.
fdrake November 28, 2017 at 14:25 ¶ #128154
Reply to StreetlightX

True, true. I guess it's more that living things have a 'dedicated' 'in-built' hereditary system (even though it's not the only hereditary system that living things have - i.e. the epigenetic, behavioural and symbolic systems charted by Jablonka and Lamb), whereas ecologies are more modular and not fixed by any particular system like that of DNA.


There's also the question of (eco)system boundaries. Hereditary mechanisms are embodied at the organismal level but operate above and below it; prosaically, nature has its own notions of scope. Ecosystems are no different, and their boundaries can even be distinct ecological units, arising from both the continuous variation of landscape properties (such as soil moisture content) and discrete variation in terms of presence/absence of communities. At the level of population genetics, you can obtain continuous variation as a result of discretising gene-flow regulators like mountain ranges and archipelagos.

But, studying genetic and phenotypic variation along one side of a mountain range doesn't necessarily make use of the mountain range as a gene-flow regulator since the methodological assumption of studying one side of it pre-individuates the gene flows on either side and picks one. Unless it generates a hybrid zone, in which case what was once a continuous variation from base-species to its genetic modifications is reflexively re-introduced to the process at a later time in its development (interbreeding of 'transitional' species). Prosaically, nature unfolds in terms of the continuous, the discrete and their inter-relation. And what is a boundary for one analysis is an irrelevance for another.

As an interesting side note, this emphasis on perturbations and transient dynamics in ecosystem theory is finding an expression in differential geometry and topology, and the view of ecosystems as dynamical systems with flows seems to be serving as a basis for ecology's mathematisation at a theoretical level (like what happened with population genetics and statistics).

The distinct features of flows in population genetic terms and flows in ecological terms could serve as a poetic inspiration for treating an organism as an ecosystem, but nothing is really gained from this taxonomy that wasn't already pregnant in the idea of the organism as composite system embedded in composite systems.

Edit: though maybe it's a useful pedagogical tool to get people thinking about humans in less individualistic terms!
fdrake November 28, 2017 at 19:50 ¶ #128200
I guess it's better if I try and detail the kinds of boundaries that ecosystems have.

Natural boundaries:

Spatial - subtended land area.
Temporal - duration since inception, events can insert different regimes of biomass accumulation (think of an opportunistic shrub's series after a forest fire) and otherwise disrupt ecological flows.
Functional - what the ecosystem does, what flows constitute and regulate it, what perturbations disrupt and change it.

Can view ecotones as 'sharp' spatial/functional/temporal boundaries and ecoclines as 'fuzzy' ones.

Generalisations/composites

Communal/community based - the composition of organisms in an ecosystem is often a flow regulator and flow-type distinguisher (eg: biomass going from predator to prey species being a distinct flow category from soil gradients and plant community density/composition gradients despite the possibility of their coupling like Yellowstone), can have an abstract boundary in terms of not functioning the same once perturbed far enough away from its current state. Communities are emergent properties of organism/physical arrangements that are spatio-temporally subtended and functionally active and constrained. The action of a community in an ecosystem can be coupled to the subtended areas and dissolve ecosystems entirely (changing their dynamics irrevocably, think non-endemic crop-parasite behaviour), or promote the growth and stability of the ecosystem in general (wolves of Yellowstone a good example here too).

Zonation - variation of a community or assemblage's properties or its organismal composition along a spatial/temporal gradient. Can be the relationship of the spatial distribution of an organism to an ecological gradient over space.

Non-natural/methodological

Operational Zonation - picking out relevant areas for study of a particular theme, can coincide with natural boundaries but need not.

Curtailing - picking out relevant flows and processes in an ecosystem to study it.

____________________________________________________________________________________

All of these have the idea of parametrisation in common. A quantity varies, a change occurs. Certain ranges of changes are compatible with current ecosystem behaviour (perturbative stability of a state within an amount of perturbation), certain ones aren't ([localised] extinction, uninhabitability, niche destruction). Operational specification of ecological parameters can be fortuitous or occlusive in the process of revealing ecological dynamics; for an example see the discussion on edge effects and whether the increases in biodiversity towards ecosystem edges are illusory, unique to ecosystem operation at the boundary, or a result of habitat patch overlap.
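The point about some ranges of change being compatible with current ecosystem behaviour and some not can be sketched with a toy bistable system - the dynamics here are invented for illustration, not an ecological model: small perturbations decay back to the current state, large ones tip the system into a different regime.

```python
# Toy perturbative stability: the double-well gradient flow dx/dt = x - x^3
# has stable states at x = +1 and x = -1 separated by an unstable point at 0.
# Perturbations within a basin decay back; perturbations past the basin
# boundary flip the system into the other regime (a 'regime shift').

def settle(x, dt=0.01, steps=10_000):
    """Forward-Euler integration of dx/dt = x - x^3 until (near) a fixed point."""
    for _ in range(steps):
        x += dt * (x - x ** 3)
    return x

state = 1.0                  # current stable regime
small = settle(state + 0.3)  # stays within the basin: returns to ~ +1
large = settle(state - 2.5)  # pushed past x = 0: settles at ~ -1
```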

Nature seems to care about the parameters since we can study ecosystems using them and learn things, but I don't think nature 'sees', say, the distinction between altitude's effect on the spatial distribution of soil bacteria (propensity-to-change) and the functional form we specify. Nor the specific way we measure ecological parameters.

Another question entirely is the generative process that gives rise to the appropriate parameter spaces for studying ecological dynamics. How does nature learn what to care about?

Janus November 28, 2017 at 20:37 ¶ #128208
Quoting StreetlightX
As trees reach a certain size, they do not keep growing, they simply self-maintain at a certain size.


This is not strictly true; trees may reach a maximum sustainable height but unlike us they never stop growing until they die.
Deleteduserrc November 29, 2017 at 00:32 ¶ #128292
(hey guys, been a minute)

@StreetlightX What's the affective oomph you got when [encountering->considering->incorporating] this new (?) idea [self qua ecology]? I'm assuming, here, that you had some older model for self that this new ecological idea upturned (or, less dramatically, modified in some significant way.)

Part of me wants to read something into the fact you hastily appended a brief political moral ('individualism is [for babies]') in quasi-spenglerian terms.

I think individualism is [for babies] too, of course, but what's the freudian term for when:

(i) you send an email, and realize maybe you suggested the wrong thing (which, granted, is what you meant (tho maybe you didn't realize it) but definitely not what you want to be seen as having meant)

and then

(ii)send a follow up like: [but all in all, i think the faculty really is great here, i didn't mean...]



Deleteduserrc November 29, 2017 at 00:45 ¶ #128295
@fdrake
Nature seems to care about the parameters since we can study ecosystems using them and learn things, but I don't think nature 'sees', say, the distinction between altitude's effect on the spatial distribution of soil bacteria (propensity-to-change) and the functional form we specify.

In all seriousness, I think this is an elegant way to sum up the difference between the 'in-itself' and the 'for-itself'.

It reminds me a little of a passage from that blog Slate Star Codex reviewing a David Friedman book.

Slate Star Codex:Whenever I read a book by anyone other than David Friedman about a foreign culture, it sounds like “The X’wunda give their mother-in-law three cows every monsoon season, then pluck out their own eyes as a sacrifice to Humunga, the Volcano God”.

And whenever I read David Friedman, it sounds like “The X’wunda ensure positive-sum intergenerational trade by a market system in which everyone pays the efficient price for continued economic relationships with their spouse’s clan; they demonstrate their honesty with a costly signal of self-mutilation that creates common knowledge of belief in a faith whose priests are able to arbitrate financial disputes.”

This is great, and it’s important to fight the temptation to think of foreign cultures as completely ridiculous idiots who do stuff for no reason. But it all works out so neatly – and so much better than when anyone else treats the same topics – that I’m always nervous if I’m not familiar enough with the culture involved to know whether they’re being shoehorned into a mold that’s more rational-self-interest-maximizing than other anthropologists (or they themselves) would recognize.


fdrake:Another question entirely is the generative process that gives rise to the appropriate parameter spaces for studying ecological dynamics. How does nature learn what to care about?



Maybe we could get a better grip on how nature learns to care by looking at the gap between the ways in which 'pre-rational'* peoples acted (rational in-themselves, were there someone observing from a distance) and how those peoples experienced and made sense of the ways in which they were acting (probably a big confusing blend of the emotional/spiritual/aesthetic/pragmatic).

That may be a little too schopenhauerian though, idk, but ( a very qualified sort of )panpsychism makes more and more sense to me these days



----------------------
*in the sense of 'not reflexively taking their own society as a scientific field of study'

Metaphysician Undercover November 29, 2017 at 03:00 ¶ #128307
Isn't this:
Quoting StreetlightX
Now, one cool way of looking at a single animal - in this case you or I - is precisely as a kind of bounded ecology: bound by skin, we are walking, talking systems of internalised cycles and metabolic processes.

an example of rugged individualism? To think of an individual as a bounded ecology is to totally ignore the importance of the larger community. And when you discuss ecology in terms of closed ecological systems, you miss out on an important aspect of ecology, leaving yourself no premise for real growth.

Deleteduserrc November 29, 2017 at 05:37 ¶ #128408
@Metaphysician Undercover from where I'm sitting, everything in the op points to a poetic defense of conservatism. What's being conserved is open to debate ( tenured profs double down on the ideas that tenured them and the department follows suit) but the concept is the same. Now of course traditional conservatism also values the social over the individual, but 'conservatism' in the 'west' today connotes the [infantile, babyish] idea of individual rational actors and free markets etc etc.

So why the non sequitur about individualism and politics?

I have some ideas but im bitter and broke and as suspect as anyone else ressentiment-wise.
Streetlight November 29, 2017 at 06:50 ¶ #128440
Reply to csalisbury Hey! Long time no see!

The point that really resonated with me, I think - and it's hardly new - is the idea that well-established, richly functioning ecologies are rich in networks: things rely on everything else, but also and importantly in ways that are cyclic. The focus on cycles in the OP wasn't incidental, I think it's really important: cycles establish both temporality and spatiality, they 'fill out' ecosystems and give them specific spatio-temporal characteristics that individuate them, dimensionalize them, so it's not just a matter of plotting individual organisms on a flat 2D map; you get an irreducible dimension of depth, differance, if you will.

I like even more @fdrake's correction that an ecology can't be seen as one monolithic system, but one composed of an entire assemblage of local, regional and global systems that interact with each other such that "overall system patterning must be understood in terms of a balance reached between extinctions and the immigration and recolonization abilities of the various species." So you don't just have this single trajectory from neonate ecology to legacy ecology constrained solely by geographic region, but, as it were, a whole slate of 'options' in-between that depend on local contingencies, and which, even more importantly, are patterned across time.

So I guess the socio-political point is that this whole gamut of complexity is lost when or if we simply attempt to treat organisms in the abstract apart from these cycles of interconnection and mutuality. One imagines a fresh field of soil, with sprouting saplings planted a meter apart from each other: that's the philosophy of individualism. And moreover, that's what it sees when it looks at a forest. From a policy perspective, you can see just how disastrous this is: if you can't even 'see' the dynamics that encourage growth and suppleness ... or rather, if the only dynamics you can 'see' are cellular growth, and you base your environmental policy on that alone... well, you're going to end up with an impoverished ecology.
Streetlight November 29, 2017 at 06:53 ¶ #128444
Quoting ?????????????
One way to understand it, is to see that SX's "single animal as a kind of bounded ecology", for MU translates to "a single animal as a closed ecological system"


Yep! Bounded does not mean closed - I was going to reply with this exact same distinction, but you got there first.
Deleteduserrc November 29, 2017 at 07:06 ¶ #128454
@StreetlightX I try to come in here with an empty fifth and a bad attitude, and I'm still welcomed with open arms. Makes it hard, you know?

But what I was trying to point out, sort of, way too elliptically, is that:

So I guess the socio-political point is that this whole gamut of complexity is lost when or if we simply attempt to treat organisms in the abstract apart from these cycles of interconnection and mutuality. One imagines a fresh field of soil, with sprouting saplings planted a meter apart from each other: that's the philosophy of individualism. And moreover, that's what it sees when it looks at a forest.


that ^ is, basically, the traditional conservative argument in a nutshell (upstart lefty idealists think they know better than what's worked for billions of years, want to rationally organize things, plant this there, and that there)

Nothing wrong with that criticism by any means. I think it's quite good, actually.
Deleteduserrc November 29, 2017 at 07:29 ¶ #128469
wait @StreetlightX you've used 'see' a lot - maybe we're drawing on the same sources, here. Are you referencing Scott?
Streetlight November 29, 2017 at 08:09 ¶ #128485
Reply to csalisbury Well, I am a little cross that you haven't really provided an argument for what you've said, but I think I can reconstruct where you're coming from and reply anyway: I think there's something to what you're saying, but the difference is this - the conservative insists upon community for the sake of 'conserving': 'this is the way things are done, this is the way things should be done'. But I'm not interested in conserving things - I think the whole point of a rich ecology is that it allows for - and you might glower at me here - lines of flight.

In ecological or evolutionary terms, one can think of this in terms of robustness: robust ecosystems, those that can best handle 'perturbations', are also those that can best accommodate diversity and change; in evolution, phenotypic robustness actually allows for a maximum of genotypic change, change that cannot be 'seen' by natural selection because it takes place below the level at which selection can exert pressure on it. I've not studied the ecological analogs of this (perhaps @fdrake will have more to say), but I can only imagine the same applies.

In short, the interconnectivity I'm after is precisely for the sake of maintaining maximum change or variability. Apo earlier in the thread chided me for not distinguishing between senescent ecologies and immature ones: a conservative ecology would be precisely a senescent one, one that, yes, acknowledges the need for 'community' and so on, but that doesn't valorize the changes that such community fosters (correlatively, a philosophy of individualism lies on the other side of the spectrum). The 'best' ecosystems are precisely those perched halfway between immaturity and senescence, insofar as they can accommodate change in the best way.

Quoting csalisbury
wait StreetlightX you've used 'see' a lot - maybe we're drawing on the same sources, here. Are you referencing Scott?


Nah, I think I picked it up from some texts on evolution re: what natural selection can and can't see. Who's Scott? ... Fitz?

Quoting csalisbury
upstart lefty idealists think they know better than whats worked for billion of years, want to rationally organize things, plant this there, and that there)


Eh, this is a question of strategy no? Perhaps we need a new ecology entirely rather than fixing this one...
Streetlight November 29, 2017 at 10:50 ¶ #128512
Quoting fdrake
Nature seems to care about the parameters since we can study ecosystems using them and learn things, but I don't think nature 'sees', say, the distinction between altitude's effect on the spatial distribution of soil bacteria (propensity-to-change) and the functional form we specify. Nor the specific way we measure ecological parameters.


The question of parametrisation is fascinating to me - like, what is the exact status of a 'parameter'? Is it simply 'epistemic', 'merely' a way to gain a handle on things? But it can't be merely that, because it has to in some way 'track' a real change occurring in the 'thing/process' itself. So what exactly is happening when you see an 'optimization' of a parameter along a certain dimension in a time series?

My intuition - probably along the lines of Csal's distinction between the 'in-itself' and the 'for-itself' - is that most parameters are 'emergent'; I mean, thinking of certain rate-regulating chemical reactions, there are 'loops' which only ever kick in after chemical levels rise above or fall below a certain threshold: if 'above', you have an inhibitory reaction (slows rates of growth), if 'below', you get a catalytic reaction (speeds up rate of growth). Of course you can ask how a certain process 'knows' if the level is too high or too low, but it's all just mechanism: because these systems are 'looped', the end product itself influences the rate at which that product is produced. Thus - at another analytic level - the usual alternating-periodic 'sine wave' pattern of certain predator-prey cycles, which I'm sure you're well, well familiar with:

[image: alternating predator-prey population cycles]
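The textbook equations behind that picture are the Lotka-Volterra predator-prey pair, where the 'loop' is the product term coupling the two populations; a rough numerical sketch, with arbitrary parameters:

```python
# Lotka-Volterra predator-prey dynamics. The product terms are the 'loop':
# each population's rate of change depends on the other, which is what
# produces the alternating oscillations. Parameters are arbitrary.

def lotka_volterra(prey, pred, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.001, steps=20_000):
    """Forward-Euler integration; returns both population trajectories."""
    preys, preds = [prey], [pred]
    for _ in range(steps):
        dprey = a * prey - b * prey * pred   # prey reproduce, and get eaten
        dpred = d * prey * pred - c * pred   # predators grow by eating, and die off
        prey += dt * dprey
        pred += dt * dpred
        preys.append(prey)
        preds.append(pred)
    return preys, preds

preys, preds = lotka_volterra(prey=10.0, pred=5.0)
# Both series rise and fall out of phase; neither settles to a constant.
```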

But then something happens when a variable in the system can relate to that cycle by, to paraphrase Csal, 'reflexively taking its own parameters as a variable that can be acted upon': so humans will cultivate food so that we don't have to deal with - or at least minimize the impact of - cycles of food scarcity and die out like wolves with too few deer to prey on. This is the shift from the 'in-itself' to the 'for-itself', where the implicit becomes explicit and is acted upon as such. And this almost invariably alters the behavior of the system, which is why, I think, the two descriptions of the 'X’wunda trade system' (quoted by Csal) are not equivalent: something will qualitatively change if the system itself 'approaches itself' in Friedman's way.

Methodologically, I suppose, the ecological question is always: does the system see itself in the way I'm describing? And if not, how careful must I be with respect to the conclusions I'm trying to draw from my data? And of course one can relate all of this to Heidegger's 'ontological difference' and the so-called horizon of intelligibility where beings appear as beings, and animals which are 'without world' etc etc. I think a really interesting project would be to try and think these two things together, but I'm not ready to pursue that here! And yeah, all of this should indeed be linked to your other question: "How does nature learn what to care about?"
fdrake November 29, 2017 at 12:21 ¶ #128523
Lots to think about here.

Reply to StreetlightX

In ecological or evolutionary terms, one can think of this in terms of robustness: robust ecosystems, those that can best handle 'perturbations', are also those that can best accommodate diversity and change; in evolution, phenotypic robustness actually allows for a maximum of genotypic change, change that cannot be 'seen' by natural selection because it takes place below the level at which selection can exert pressure on it. I've not studied the ecological analogs of this (perhaps @fdrake will have more to say), but I can only imagine the same applies.


Biodiversity itself can have a regulatory effect. I think the most extreme example of this is a monocultural crop. If a field consists of a single crop everywhere in it, perturbation through disease can quickly wipe out the whole crop. Diversifying land use in the field can increase both single crop yields and the stability of the crop to disease and other externalities like climate change. There's a nexus of articles on Wiki about similar topics, surrounding polyculture and agro-ecology. This paper is about biodiversity and stability but asks the questions in terms of scope changes (local, regional, global biodiversities) and spatial biodiversity (link totally not biased since it's my old boss' paper). In the latter paper, you can see the effect of fortuitous/unfortuitous ways of thinking about space and locality methodologically (which I mentioned in terms of zonation).
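The monoculture point can be made concrete with a deliberately crude toy model (all numbers invented): a disease strikes one species at random and kills most plots of that species, and we compare the expected surviving yield of a one-species field against a five-species one.

```python
import random

def surviving_plots(species_per_plot, kill_prob=0.9):
    # A disease strikes one randomly chosen species; plots of that species
    # die with probability kill_prob, all other plots are untouched.
    target = random.choice(sorted(set(species_per_plot)))
    return sum(1 for s in species_per_plot
               if s != target or random.random() >= kill_prob)

random.seed(0)
mono = [0] * 100                      # one species in every plot
poly = [i % 5 for i in range(100)]    # five species interleaved
mono_avg = sum(surviving_plots(mono) for _ in range(1_000)) / 1_000
poly_avg = sum(surviving_plots(poly) for _ in range(1_000)) / 1_000
```

The diversified field can lose at most a fifth of its plots to any one disease, which is the stabilising effect in miniature - though, as noted below, the real mechanisms linking biodiversity to stability are far from settled.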

AFAIK the mechanisms that link biodiversity to stability are still being researched, so it's far from 'settled science'.

I should add that thinking about methodological constraints in the same manner as ecological realities as I did in the boundary post is very heterodox and probably needs to be taken with a grain of salt.

Next post:

The question of parametrization is fascinating to me - like, what is the exact status of a 'parameter'? Is it simply 'epistemic', 'merely' a way to gain a handle on things? But it can't be merely that, because it has to in some way 'track' a real change occurring in the 'thing/process' itself. So what exactly is happening when you see an 'optimization' of a parameter along a certain dimension in a time series?


Do you mean the time series obtaining a local maximum through 'optimisation', or do you mean an ecological model obtaining a local maximum through optimisation? The latter is more a matter of model fitting and parameter estimation than of how a parametrised mathematical model of an ecology relates to what it models. The parameters are 'best in some sense' with respect to the data.
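A minimal, entirely synthetic illustration of parameters being 'best in some sense' with respect to the data: estimating the growth rate of a logistic model by least squares, here via a brute-force grid search.

```python
import math

def logistic(t, r, K=100.0, n0=5.0):
    # Logistic growth: carrying capacity K, initial population n0, rate r.
    return K / (1.0 + (K / n0 - 1.0) * math.exp(-r * t))

# Synthetic 'observations': a true rate of 0.8 plus a small perturbation.
data = [(t, logistic(t, 0.8) + 0.5 * (-1) ** t) for t in range(12)]

def sse(r):
    # Sum of squared errors: the 'sense' in which a parameter is best here.
    return sum((n - logistic(t, r)) ** 2 for t, n in data)

r_hat = min((k / 1000.0 for k in range(1, 2001)), key=sse)
```

The estimate tracks the rate that generated the data, but only relative to a chosen model family and loss function - which is exactly the gap between parameter estimation and the separate question of how the parametrised model relates to the ecology itself.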

Also @csalisbury:


My intuition - probably along the lines of Csal's distinction between the 'in-itself' and the 'for-itself' - is that most parameters are 'emergent';



But then something happens when a variable in the system can relate to that cycle by, to paraphrase Csal, 'reflexively taking its own parameters as a variable that can be acted upon': so humans will cultivate food so that we don't have to deal with - or can at least minimize the impact of - cycles of food scarcity, rather than dying out like wolves with too few deer to prey on. This is the shift from the 'in-itself' to the 'for-itself', where the implicit becomes explicit and is acted upon as such. And this almost invariably alters the behavior of the system, which is why, I think, the two descriptions of the 'X’wunda trade system' (quoted by Csal) are not equivalent: something will qualitatively change if the system itself 'approaches itself' in Friedman's way.



I personally wouldn't like to think about the 'modelling relation' between science and nature in terms of the 'for-itself' acting representationally on the 'in-itself'. Just 'cos I think it's awkward. Will give an example: if you plant a monoculture and it gets destroyed by disease, when the 'in-itself' of the crop gets destroyed, we can say it's because of the 'for-itself' of the vulnerability of the crop to disease in our way of thinking about it. The crop's vulnerability to disease acts as a pattern in nature and a pattern in thought, and there's some kind of functional equivalence of terms. Even if nature sees only the individual plants and their inter-relations, this 'crop through iterated conjunction' still works like the 'crop' which satisfied the properties of monocultures. But this aversion of mine might be because I don't understand Kant very well. Could either of you map the distinction for me insofar as it relates to ecological models?

Of course you can ask how a certain process 'knows' if the level is too high or too low, but it's all just mechanism: because these systems are 'looped', the end product itself influences the rate at which that product is produced. Thus - at another analytic level - the usual alternating-periodic 'sine wave' pattern of certain predator-prey cycles, which I'm sure you're well, well familiar with:


I think what allows the aggregation of prey/predators in the model to work like something in nature is exchangeability. Take wolves and rabbits: the specifics of the wolves don't matter too much, since availability of food and the amount of food required operate on each wolf individually in the same way as they operate on the group (scaled up). Rabbits are the same; the specifics don't matter too much beyond their need to get food, how much food there is, and how many predators there are. A way of putting this might be 'the individual is an aggregate of size 1' in these circumstances.

Methodologically, I suppose, the ecological question is always: does the system see itself in the way I'm describing? And if not, how careful must I be with respect to the conclusions I'm trying to draw from my data? And of course one can relate all of this to Heidegger's 'ontological distinction' and the so-called horizon of intelligibility where beings appear as beings, and animals which are 'without world', etc etc. I think a really interesting project would be to try and think these two things together, but I'm not ready to pursue that here! And yeah, all of this should indeed be linked to your other question: "How does nature learn what to care about?"


I think ecology has some complications that aren't present in simpler relationships between model and world. I'm not sure I could make a list of them all, but there's always a difficulty in measuring properties of ecosystems precisely in a manner useful for modelling. It isn't the same for chemistry.

An example: the Haber process. It works so long as there's air, hydrogen, a catalyst, and a cooling procedure. The terms in the description of the process aren't abstractions, they're the real thing. The algorithm works on real inputs (air, hydrogen) and produces real outputs (ammonia). Why it works might be conceptually laden, but procedurally the description it embodies is equivalent to the described, if that makes sense. I don't think the same is true of ecological parameters.





Streetlight November 29, 2017 at 13:05 ¶ #128540
Quoting fdrake
though maybe it's a useful pedagogical tool to get people thinking about humans in less individualistic terms!


This, by the way, is definitely part of the motivation here - to think of the human in ecological terms is to think of the human in terms of populations, flows, and rates of change; I mean, even at the level of anatomy, we are, as it were, an anatomical ecology: populations of different cells, cycling through material and energy, hierarchically embedded, and structurally coupled with flows in the environment (a fanciful etymology of 'anatomy' is of course an-atomia; non-atom (non-individual?) - although the root really is more associated with the 'positive' act of dissection or 'cutting up').

And of course, every part of this system is more or less differential: processes will play out differently - will 'do' different things, contribute to different ends - depending on the context. We're basically a series of loops, some only residing 'inside' us, some extending far beyond our skin. Perhaps the best representation of a human - or in fact any 'thing' - is this:

[image: loop diagram]

(Will reply to your latest post a bit down the track.)
Metaphysician Undercover November 29, 2017 at 13:16 ¶ #128547
Quoting StreetlightX
I like even more fdrake's correction that an ecology can't be seen as one monolithic system, but one composed of an entire assemblage of local, regional and global systems that interact with each other such that "overall system patterning must be understood in terms of a balance reached between extinctions and the immigration and recolonization abilities of the various species." So you don't just have this single trajectory from neonate ecology to legacy ecology constrained solely by geographic region, but, as it were, a whole slate of 'options' in-between that depend on local contingencies, and which, even more importantly, are patterned across time.


What I'm interested in is how you would relate this description of interconnected systems and cycles to the concept of "growth". Growth appears to be a necessary aspect of an individual living being, and now it's very common to judge economies in terms of growth. What is at issue, in my mind, is that if growth may be said to be something "good", then what is the proper description of growth which would best fulfil the conditions of being good.

The OP describes "succession" as a growing. However, the end state of the growing, the "climax community", seems to be a well-adapted ecology with a lack of growth. This end state is described as the best, such that the growing is not as good as the end state (lack of growing) which the growing brings about. So growth here would appear to be a venture into instability, and therefore a bad thing if the climax community is a good state. However, growth is still necessary in order to produce the climax community.

So far, it is implied that growth involves a development of those cycles and systems, which are internal to the ecology. Some sort of boundary is also implied, and the boundary would distinguish between what is within the ecology, and what is outside. To me, the concept of "growth" implies a changing in the boundary. In its simplest form it might be an expansion of the boundary. But an expansion of the boundary is not necessarily "good", because this often leads to stretching oneself too thin. So a truly good growth might be a changing of the boundary in a way which better supports the production and sustenance of the complex inner cycles.

Here's the problem. The boundary, whether it's closed, open, or partially closed, indicates some sort of separation between internal and external. Changes to the boundary which are good for the internal are not necessarily good for the external. And the external must be respected as real, so "good growth" cannot be defined solely on the effects which the growth has within the ecology.

Quoting StreetlightX
In ecological or evolutionary terms, one can think of this in terms of robustness: robust ecosystems, those that can best handle 'perturbations'..


I would assume that a perturbation is something with a cause external to the particular ecology. So I think you need to distinguish at least two distinct types of perturbations, one natural, and one artificial. I think "natural" speaks for itself, but if growth is defined by a changing boundary, then such changes could cause external perturbations, like poking a sleeping bear. So there must be two aspects of good growth, one which allows the ecology to handle perturbations, and one which prevents the ecology from causing perturbations.
Galuchat November 29, 2017 at 15:21 ¶ #128618
StreetlightX: Basically any self-relating system composed of networks can be treated in ecological terms. Elsewhere, it's perfectly possible to treat something as abstract as an economy in ecological terms.


The largest and most complex type of human community is the stratified society, which is composed of nested complex systems (e.g., political, economic, legal, etc.).

Cultures develop over time. Changes in mindset/convention have cascading effects on nested systems, transforming society. Sudden and/or dramatic changes in mindset/convention can cause societal breakdown and collapse.

In terms of Sociocultural Anthropology, the life cycle of a human community (i.e., political economy) consists of: Rise (i.e., success), Dominance (i.e., expansion), Stagnation, Decline, and Fall (i.e., failure).

The life cycle of an organism can be described in similar terms. Can the life cycle of an ecosystem be described in similar terms?
schopenhauer1 November 29, 2017 at 16:10 ¶ #128623
Reply to Galuchat @StreetlightX

Ecologies don't need a telos. That they exist, flourish and work as a system is a well known fact. Humans though have the ability to justify why they put more people into the world, why they need more people to grow, maintain themselves, and die.
Galuchat November 29, 2017 at 16:23 ¶ #128627
Reply to schopenhauer1 So, is describing human political economy in ecological terms a category error?
apokrisis November 29, 2017 at 21:10 ¶ #128693
Quoting StreetlightX
a conservative ecology would be precisely a senescent one, one that, yes, acknowledges the need for 'community' and so on, but that doesn't valorize the changes that such community fosters (correlatively, a philosophy of individualism lies on the other side of the spectrum). The 'best' ecosystems are precisely those perched halfway between immaturity and senescence, insofar as they can accommodate change in the best way.


Senescent is probably a bad word choice by Salthe as he means to stress that a climax ecology has become too well adapted to some particular set of environmental parameters. It has spent all its degrees of freedom to create a perfect fit, and so that makes it vulnerable to small steady fine-scale changes in those parameters outside its control - something like coral reefs collapsing as we cause changes in ocean temperature and acidity. Or else the perturbations can come from the other end of the scale - single epic events such as an asteroid strike or super-volcano.

So evolution drives an ecology to produce the most entropy possible. A senescent ecology is the fittest as it has built up so much internal complexity. It is a story of fleas, upon fleas, upon fleas. There are hosts of specialists so that entropification is complete. Every crumb falling off the table is feeding someone. As an ecology, it is an intricate hierarchy of accumulated habit, interlocking negentropic structure. And then in being so wedded to its life, it becomes brittle. It loses the capacity to respond to the unpredictable - like those either very fine-grain progressive parameter changes or the out-of-the-blue epic events.

So senescence isn't some sad decaying state. It is being so perfectly adapted to a set of parameters that a sudden breakdown becomes almost inevitable. Coz in nature, shit always happens. And then you struggle if you are stuck with some complicated set of habits and have lost too much youthful freedom to learn some new tricks.

One can easily draw economic and political parallels from this canonical lifecycle model. And it seems you want to make it so that conservatives equal the clapped out old farts, neoliberal individualists equal the reckless and immature, then the greeny/lefties are the good guys in the middle with the Goldilocks balance. They are the proper mature types, the grown-ups.

Well I'd sort of like to agree with that, but it is simplistic. :)

The left certainly values the co-operative aspect of the human ecosystem, while the greens value the spatiotemporal scope of its actions.

So conservatives certainly also value the co-operative aspects of society, but have a more rigid or institutionalised view. The rules have become fixed habits and so senescent even if a good fit to a current state. The left would instinctively take a sloppier approach as it would seem life is still changing and you need to still have a capacity for learning and adaptation in your structures of co-operativity.

Then conservatives also value the longer view of the parameters which constrain a social ecology. Like greens, they are concerned for the long-term view - one that includes the welfare of their great grand children, their estates, their livestock. But while greenies would be looking anxiously to a future of consequences, conservatives - in this caricature - are so set in their ways by a history of achieving a fit that the long-view is more of the past. It is what was traditionally right that still looks to point to any future.

But then conservatives may be the ones that don't rush into the future immaturely. The stability of their social mores may actually encode a long-run adaptiveness as the result of surviving past perturbations. Lefties and greenies can often seem the ones who are in the immature stage of being too eager for the turmoil of reforms, too quick to experiment in ways that are mostly going to pan-out as maladaptive, too much the promoters of a pluralist liberal individualism, too quick to throw history and hierarchy in the bin.

So as usual, the science of systems - of which ecology is a prime one - could really inform our political and economic thinking. It is the correct framework for making sense of humans as social creatures.

But once we start projecting the old dichotomous stereotypes - left vs right, liberal vs conservative - then that misses the fact a system is always a balancing act of those kinds of oppositional tensions.

And we also have to keep track of what is actually different about an ecology and a society. An ecology lacks any real anticipatory ability. It only reacts to what is happening right now as best it can, using either its store of developed habits to cope, or spending the capital of its reserve of degrees of freedom to develop the necessary habits.

But a human society can of course aspire to be anticipatory. It can model the future and plan accordingly. It can remember the past clearly enough to warn it of challenges it might have to face. It can change course so as to avoid perturbations that become predictable due to long-range planning.

And the jury is actually out on how an intelligent society ought to respond. On climate change, the conservatives were at first the ones worried about its potential to disrupt the story of human progress. Then the neoliberal attitude took over where the strategy for coping with the future is to rely on human ingenuity and adaptability.

One view says tone everything down as we are too near the limit. The other says if shit is always going to happen - if not global warming, then the next overdue super-volcano ice age - the imperative is to go faster, generate more degrees of freedom. The planet is just not ever going to be stable, so to flourish, planned immaturity beats planned senescence.

Both views make sense. As does the further view that of course there must be a balance of these two extremes. The right path must be in between.

fdrake November 29, 2017 at 22:16 ¶ #128723
Reply to apokrisis

Can you give a mechanical explanation of how an ecosystem spends degrees of freedom - also degrees of freedom of what?

An example would also be good.
apokrisis November 29, 2017 at 23:39 ¶ #128731
Reply to fdrake Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.

But if the oak gets knocked down in a storm or eaten away eventually by disease, that creates an opening for faster-footed weed species. We are back to a simpler immature ecosystem where the growth is dominated by the strong entropic gradient - the direct sunlight, the actual rainfall and raw nutrient in the soil.

The immature ecology doesn't support the same hierarchy of life able to live off "crumbs" - the weak gradients that highly specialised lifeforms can become adapted to. It doesn't have the same kind of symbiotic machinery which can trap and recycle nutrients, provide more of its own water, like the leaf litter and the forest humidity.

So the degrees of freedom are the system's entropy. It is the through-put spinning the wheels.

An immature ecology is dependent on standing under a gushing faucet of entropy. It needs direct sunlight and lots of raw material just happening to come its way. It feeds on this bounty messily, without much care for the long-term. Entropy flows through it in a way that leaves much of it undigested.

But a senescent ecology has built up the complexity that can internalise a great measure of control over its inputs. A tropical forest can depend on the sun. But it builds up a lot of machinery to recycle its nutrients. It fills every niche so that while the big trees grab the strongest available gradients, all the weak ones, the crumbs, get degraded too.

So the degrees of freedom refers both to the informational and entropic aspects of what is going on. Salthe is explicit about this in his infodynamic account of hierarchical complexity - the forerunner of what seems to have become biosemiosis.

The degrees of freedom fall out of the sky as a rain of entropy, an energy source that would welcome being guided towards some suitable sink. Life then provides that negentropic or informational structure. It becomes the organised path by which sunlight of about 6000 degrees is cooled to dull infra-red.
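A back-of-envelope version of that cooling story (round figures, not a serious radiation budget): each joule arriving as ~6000 K sunlight and leaving as ~300 K infrared carries away twenty times the entropy it arrived with, and that difference is what the dissipative structure gets to 'spend'.

```python
# Entropy per joule transferred at temperature T is 1/T (in J/K).
T_sun, T_surface = 6000.0, 300.0   # kelvin; rough round figures
entropy_in = 1.0 / T_sun           # delivered per joule of sunlight
entropy_out = 1.0 / T_surface      # exported per joule of infrared
entropy_produced = entropy_out - entropy_in
```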

Then switching to that informational or negentropic side of the deal - the tale of life's dissipative structure - the degrees of freedom become the energy available to divert into orderly growth. It is the work that can be done to make adaptive changes if circumstances change.

A weed can sprout freely to fill a space. It is green and soft, not woody and hard. It remains full of choices in its short life.

An oak wins by trading that plasticity for more permanent structure. It grows as high and strong as it can. It invests in a lot of structure that is really just dead supporting wood.

So degrees of freedom have a double meaning here - which is all part of the infodynamic view of life as dissipative structure.

Life is spending nature's degrees of freedom in entropifying ambient energy gradients. And it spends its own degrees of freedom in terms of the work it can extract from that entropification - the growth choices that it can make in its ongoing efforts to optimise this entropy flow.

So there is the spending of degrees of freedom as in Boltzmann entropy production. Turning energy stores into waste heat. And also in terms of Shannon informational uncertainty. Making the choices that remove structural alternatives.
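The Shannon half of that double meaning can be put in miniature (both distributions here are invented): a system with many live growth options carries more informational uncertainty than one that has already invested nearly everything in a single habit.

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)), in bits; impossible options contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

immature = [1 / 8] * 8              # eight equally open growth options
senescent = [0.93] + [0.01] * 7     # almost everything invested in one habit

h_immature = shannon_entropy(immature)    # 3.0 bits of uncertainty
h_senescent = shannon_entropy(senescent)
```

Spending a degree of freedom, in this toy picture, is exactly the move from the first distribution toward the second: removing structural alternatives.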

An immature system is quick, clever and clumsy. A senescent system is slow, wise and careful. An immature system spends energy freely and so always seems to have lots of choices available. A senescent system is economic with its energy spending, being optimised enough to be mostly in a mode of steady-state maintenance. And in knowing what it is about, it doesn't need to retain a youthful capacity to learn. Its degrees of freedom are already invested in what worked.

Which then brings us back to perturbations. Shit happens. The environment injects some unpredicted blast of entropy - a fresh rain of entropic degrees of freedom - into the system. The oak gets blown down and the weeds get their chance again.





Deleteduserrc November 29, 2017 at 23:43 ¶ #128732
@fdrakeQuoting StreetlightX
But then something happens when a variable in the system can relate to that cycle by, to paraphrase Csal, by 'reflexively taking it's own parameters as a variable that can be acted upon': so humans will cultivate food so that we don't have to deal with - or at least minimize the impact of - cycles of food scarcity and die out like wolves with too few deer to prey on. This is the shift from the 'in-itself' to the 'for-itself', where the implicit becomes explicit and is acted upon as such. And this almost invariably alters the behavior of the system, which is why, I think, the two descriptions of the 'X’wunda trade system' (quoted by Csal) are not equivalent: something will qualitatively change if the system itself 'approaches itself' in Friedman's way.


So first: yeah, the system will be changed if it relates to itself as a system.

Quick example, from here. (In this case the system becoming self-aware would have negative effects, but of course with different examples it could have positive effects. Either way though, a qualitative change.)

The author is talking about gri-gri, a sub-Saharan belief/magic system which purports to make individuals immune to gunfire.

Gri-gri comes in many forms – ointment, powder, necklaces – but all promise immunity to weaponry. It doesn’t work on individuals, of course, although it’s supposed to. Very little can go grain-for-grain with black powder and pyrodex. It does work on communities: it makes them bullet proof.



The economists Nathan Nunn and Raul Sanchez de la Sierra wrote a paper analyzing the social effects of gri-gri: Why Being Wrong Can Be Right: Magical Warfare Technologies and the Persistence of False Beliefs [...]

The paper argues that gri-gri encourages resistance on a mass scale. Beforehand, given a mix of brave and cowardly, only a small percentage of a village would fight back. If you want to have any hope of surviving, then you need everyone to fight back. Gri-gri lowers the perceived costs of said resistance, i.e. no reason to fear guns when the bullets can’t hurt you. Now everyone fights, hence, gri-gri‘s positive benefits. Moreover: since more people are fighting, each gri-gri participant also raises the marginal utility of the others (it’s better to fight together). And, since there are highly specific requirements for using the powder (if you break a certain moral code it doesn’t work), gri-gri also probably cuts down on non-war related crimes. Take group-level selection: the belief in and use of gri-gri will thus allow any given village to out-compete one without gri-gri. After a time, these will either be replaced by gri-gri adherents (hence spreading it geographically), or they’ll adopt gri-gri themselves (also spreading it).


So despite gri-gri appearing 'irrational', its adoption by a group is eminently rational. So why not keep the real rational benefits, but drop the irrational veneer?

"[imagine that] the state sends a researcher into the village. “We’re sorry,” he says. “We were so stupid to mock you. We totally understand why you do this thing. Let’s explain to you what’s actually going on, now that we have an economic translation.”

The researcher explains that, in fact, gri-gri doesn’t work for the individual, but it has the net-positive effect of saving the community. “Give up these childish illusions, yet maintain the overall function of the system,” he exhorts. A villager, clearly stupid, asks: “So it works?” The man smiles at these whimsical locals. “Oh, no,” he sighs. “You will surely die. But in the long run it’s a positive adaptation at the group level.”

No one would fight, of course. The effect only comes from the individual. If he doesn’t think he can survive a bullet, then it’s hard to see how you’re going to make him fight. “But people fight better in groups, don’t you see?” stammers the exasperated researcher. That’s true as far as it goes, but it’s also no revelation. I trust that at least a couple of those villagers have brawled before. “Fighting six guys alone vs. fighting six guys with your friends” is a fast lesson with obvious application. Still didn’t make them go to war before the introduction of gri-gri. If that didn’t work, why do you think “time for some #gametheory” will convince anyone?



So I agree, but the question of whether the two descriptions of the X'Wunda are equivalent is another thing entirely. I mean in one sense it's obvious they're not equivalent, otherwise they would be the same description. But do they both describe the same thing?

My mistake was to differentiate between the 'in-itself' and the 'for-itself', when the germane Hegelian distinction would be the one between the 'in-itself' and the 'for-us' ( that is, 'for us rational observers observing the system'.)

Importantly, for Hegel, the in-itself and the for-us are the same thing. It's not a matter of noumenal core and phenomenal presentation, but of acting and knowing. The noumenal/phenomenal distinction casts things in terms of a transcendental knower who reaches out toward (hidden noumenal) being. (You could also conceptualize it as a knower not reaching toward, but being affected by, the diffracted rays of a noumenon.)

Hegel, as you know, holds instead that knowing is itself a type of acting (and so also a type of being). Any given type of knowing will unfold, over time, as a series of actions. In doing so it will create a pattern observable to a different, meta-level, knower.

But it's not as though the description of the meta-knower is 'true' while the experience of the object-level knower is false. The patterns the meta-knower observes are themselves driven by the internal logic of the object level-knower. If the object-level knower spoke the meta-language, it would not act the same way, and the object-level (as it existed) would disappear.

So the idea would be: there is indeed a hidden order - a rational in-itself - to how things unfold. It's not a projection by us. It's already there, as long as there's someone to look. But for that order to be there (were someone to look), the order itself has to be 'looking' at something different.

In short: both descriptions of the X'wunda example are correct, and both refer to the same thing. You can't reduce one to the other, because in reducing the object-level to the meta-level rational one, you lose the object-level altogether. If you don't have the object-level, the meta-level description doesn't refer to anything. (This is why Hegel's so concerned with pointing out that the truth is the process as a whole, not simply the result.)

And then my broader idea (I guess kind of Schellingian?) is that 'nature' itself 'knows' in some way, and that that knowledge drives it to act as it does. The way in which nature knows is itself (in part) those patterns and parameters we observe, but that it can't itself know those patterns (otherwise it'd be a human.) It knows something else, so to speak.

I suppose, then, we both agree that it's a matter of emergence, though I'm not sure we're thinking of how that happens in the same way (though maybe we are.)
schopenhauer1 November 30, 2017 at 00:38 ¶ #128735
Reply to Galuchat Yes, and the more important question we should be asking is why we put more people into the world in the first place. For what - to grow, maintain, and die? At least ecologies and biomes can't control the absurd nature of continuing to continue. Humans can.
apokrisis November 30, 2017 at 01:09 ¶ #128737
Reply to schopenhauer1 Cheer up Schop. Take the long view. Either humanity will work out what it is about or your wish will be granted. You can wait 50 years surely?
schopenhauer1 November 30, 2017 at 02:53 ¶ #128759
Quoting apokrisis
Cheer up Schop. Take the long view. Either humanity will work out what it is about or your wish will be granted. You can wait 50 years surely?


So you're saying that through our destructive use of natural resources we will die out. Why would we not just intentionally choose not to add more absurd instrumentality - growth, maintenance, death?
apokrisis November 30, 2017 at 03:35 ¶ #128776
Reply to schopenhauer1 What do you mean? Either we do blow ourselves up, or we do find a long-run ecological balance.

Well, I was just trying to cheer you up. I realise there is in fact a third option where human ingenuity does get used to keep the game going in ever more extravagant fashion. Rather than changing ourselves to fit nature, many people will quite happily go along with changing nature to fit us.

This is the anthropocene. Once we have artificial meat, 3D printed vegetables made from powdered seaweed, an AI labour force and nuclear fusion, who cares about rain forests and coral reefs? Rent yourself some VR goggles and live out all that old time stuff if you are sentimental. Meanwhile here is an immersive game universe where you can go hunting centaurs and unicorns.

So probably bad luck. We likely have enough informational degrees of freedom to beat nature at its own game.




Streetlight November 30, 2017 at 07:10 ¶ #128858
Quoting fdrake
AFAIK the mechanisms that link biodiversity to stability are still being researched, so it's far from 'settled science'.


I imagine that approaching ecosystems through network analysis would have a lot to say about this: i.e. more biodiverse ecosystems have more nodes supporting certain cycles, such that the failure of a few of these nodes would not lead to the failure of those cycles as a whole; and moreover, such robustness also has a catalytic effect - the more robust a network, the more chance for the development of further nodes, etc, etc. (I realize on reflection that we have different go-to intuitions with these kinds of subjects - you tend to focus on spatio-temporal specificity, as per the papers you've linked (and the discussion re: gene expression previously) - while I like to 'go abstract' and think in terms of mechanism-independent structure; it's interesting!)
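To make the intuition concrete, here's a toy sketch (the species and links are invented, and 'cycle' here just means a directed loop in a food web): a web with a redundant node keeps its cycle after a knockout, while a single-route web loses it.

```python
def has_cycle(adj):
    """Detect a directed cycle via depth-first search with colouring."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in adj}

    def visit(n):
        colour[n] = GREY
        for m in adj[n]:
            if colour[m] == GREY:                 # back-edge: a cycle exists
                return True
            if colour[m] == WHITE and visit(m):
                return True
        colour[n] = BLACK
        return False

    return any(visit(n) for n in adj if colour[n] == WHITE)

def remove(adj, node):
    """Return the web with one node (and all links to it) knocked out."""
    return {n: [m for m in ms if m != node]
            for n, ms in adj.items() if n != node}

# A 'lean' nutrient cycle with a single route...
lean = {'plant': ['herbivore'], 'herbivore': ['decomposer'],
        'decomposer': ['plant']}
# ...and a 'redundant' one where two herbivores support the same cycle.
rich = {'plant': ['h1', 'h2'], 'h1': ['decomposer'],
        'h2': ['decomposer'], 'decomposer': ['plant']}

print(has_cycle(remove(lean, 'herbivore')))  # False: cycle broken
print(has_cycle(remove(rich, 'h1')))         # True: cycle survives via h2
```

The redundant web tolerates the loss of a node precisely because more than one node supports the cycle.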

Quoting fdrake
Do you mean the time series obtaining a local maximum through 'optimisation' or do you mean an ecological model obtaining a local maximum through optimisation? The relationship of the latter to an ecological model is more a matter of model fitting and parameter estimation than how a parametrised mathematical model of an ecology relates to what it models. The parameters are 'best in some sense' with respect to the data.


Yeah, I could have been clearer here: I guess I have something in mind like an ecosystem - or local 'patch' - fluctuating around its carrying capacity or something similar. I mean, clearly carrying capacity isn't something that the system is 'aiming at': it doesn't tell itself 'ok, we're going to try and fluctuate around this point'; like regulatory chemical reactions, it just 'falls out' of the dynamics of the system.
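A toy discrete logistic model makes the point: the carrying capacity k is just a parameter of the local update rule, not a goal the population represents to itself (the numbers here are arbitrary).

```python
def logistic_step(n, r=0.5, k=1000.0):
    """Discrete logistic growth: k appears only in the local update rule."""
    return n + r * n * (1 - n / k)

n = 10.0
for _ in range(60):
    n = logistic_step(n)

print(round(n))  # 1000: settling at k just 'falls out' of the dynamics
```

Nothing in the update rule 'aims' anywhere; the stable point at k is an emergent feature of iterating it.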

Quoting fdrake
I personally wouldn't like to think about the 'modelling relation' between science and nature in terms of the 'for-itself' acting representationally on the 'in-itself'. Just 'cos I think it's awkward.


I agree it's clunky as well, but the necessary vocabulary is kinda hard to pin down, I think. I think part of the problem is the fungibility of these terms: what may once have been a non-reflexive variable ('in-itself') may become reflexive ('for-itself'), and vice versa - the only way to find out which is which is to actually do the empirical study itself, and find out what exactly whatever little patch of nature under consideration is in fact sensitive to ('sensitive to' being perhaps a better phrase than 'what nature can see'). So when you say later on that:

Quoting fdrake
I think ecology has some complications that aren't present in simpler relationships between model and world. I'm not sure I could make a list of them all, but there's always a difficulty in measuring properties of ecosystems precisely in a manner useful for modelling. It isn't the same for chemistry.


I think perhaps the 'problem' is that ecology exhibits precisely a higher degree of the fungibility between the implicit/explicit sensitivity than chemistry does. This is what makes it more complex.
Streetlight November 30, 2017 at 08:17 ¶ #128863
Reply to csalisbury That blog post was fascinating! I keep wandering back to the psychoanalytic point where the cheating husband's relationship with his lover only 'works' insofar as he is married: were he to leave his wife for the sake of his lover, the lover would no longer be desirable... Of course the psychoanalytic lesson is that our very 'subjective POV' is itself written into the 'objective structure' of things: it's not just window dressing, and if you attempt to discard it, you change the nature of the thing itself.

And I think this slipperiness is what makes it so hard to fix the status of a 'parameter': if you want to make a parameter 'work' (i.e. if you intervene in a system on that basis), you will cause changes - but that doesn't mean the system is 'in-itself' sensitive to such parameters: only that, through your intervention you've made it so.
fdrake November 30, 2017 at 08:26 ¶ #128864
Reply to apokrisis

Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.


See, I can imagine what you mean by degrees of freedom, but - and this is a big but - I don't think it can be used so qualitatively. So when you say:

But if the oak gets knocked down in a storm or eaten away eventually by disease, that creates an opening for faster-footed weed species. We are back to a simpler immature ecosystem where the growth is dominated by the strong entropic gradient - the direct sunlight, the actual rainfall and raw nutrient in the soil.

The immature ecology doesn't support the same hierarchy of life able to live off "crumbs" - the weak gradients that highly specialised lifeforms can become adapted to. It doesn't have the same kind of symbiotic machinery which can trap and recycle nutrients, provide more of its own water, like the leaf litter and the forest humidity.


It's ambiguous what the degrees of freedom refer to. Are you talking about niches? Is the claim that when the oak falls, there are fewer niches? More niches? More configurational entropy?

An immature ecology is dependent on standing under a gushing faucet of entropy. It needs direct sunlight and lots of raw material just happening to come its way. It feeds on this bounty messily, without much care for the long-term. Entropy flows through it in a way that leaves much of it undigested.

But a senescent ecology has built up the complexity that can internalise a great measure of control over its inputs. A tropical forest can depend on the sun. But it builds up a lot of machinery to recycle its nutrients. It fills every niche so that while the big trees grab the strongest available gradients, all the weak ones, the crumbs, get degraded too.


What makes a gradient strong or weak? Can you cash this out in terms of a thermocline? Say you have water near a shore, and it's 10 Celsius; 1 m nearer the shore, it's 20 Celsius. Compare this to a shift of 5 Celsius over 1 m. Is that an entropic gradient? What makes it an entropic gradient? What makes it not an entropic gradient? How is it similar to the configurational entropy of niches?

I have in mind a procedure where you do an imaginary counting exercise of how many niches are available, then assume something about the proportion of organisms in each niche, then check whether it turns out that when an oak dies there's more configurational entropy - it's a combination of changes in occupation probability and the number of terms in the sum. Decreases can result from fewer degrees of freedom (number of bins/configurations) or a less uniform distribution of entities into bins. Or both.
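A quick sketch of that counting exercise (the bin proportions are made up): entropy rises with the number of uniformly occupied bins, and falls when occupancy becomes less uniform.

```python
from math import log

def shannon(ps):
    """Shannon entropy -sum p*log(p) over nonzero proportions."""
    return -sum(p * log(p) for p in ps if p > 0)

# More bins, uniformly occupied: entropy rises with the number of terms.
print(shannon([1/4] * 4))   # log 4 ~ 1.386
print(shannon([1/8] * 8))   # log 8 ~ 2.079

# Same number of bins, less uniform occupancy: entropy falls.
print(shannon([0.7, 0.1, 0.1, 0.1]))  # ~ 0.940, below log 4
```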

In terms of cashing out your ideas in the specifics, your post was pretty weak. Can you go through one of your 'entropy calculations' using the oak/canopy example?
apokrisis November 30, 2017 at 08:42 ¶ #128865
Reply to fdrake https://en.m.wikipedia.org/wiki/Ascendency
fdrake November 30, 2017 at 08:57 ¶ #128870
Reply to StreetlightX

I imagine that approaching ecosystems through network analysis would have alot to say about this: i.e. more biodiverse ecosystems have more nodes that support certain cycles such that the failure of a few of these nodes would not lead to the failure of those cycles as a whole; and moreover, that such robustness also has a catalytic effect - the more robust a network, the more chance for the development of further nodes, etc, etc


What makes you think that more biodiverse ecosystems 'have more nodes that support certain cycles such that the failure of a few of these nodes would not lead to the failure of those cycles as a whole?'

I can think of a simplified example of it. Say you have 100 wolves and 100 rabbits (only). Wolves only feed on rabbits. Shannon biodiversity in that system is -(100/200 * log(100/200) + 100/200 * log(100/200)) = -log(1/2) = log 2. Elmer Fudd comes in and kills 50 rabbits and 50 wolves: the wolves aren't gonna die, the rabbits aren't gonna die.

Now say you have 100 wolves and 50 rabbits. Shannon biodiversity there is -((50/150)*log(50/150) + (100/150)*log(100/150)) = (1/3)log 3 + (2/3)(log 3 - log 2) = log 3 - (2/3)log 2 < log 2. Elmer Fudd comes in and kills 50 wolves and 50 rabbits. The wolves then starve to death.

Though, if you started off with 2 wolves and 2 rabbits, the Shannon Biodiversity would be log2 still, and Elmer Fudd would destroy the ecosystem.

The idea of biodiversity in your post needs to be sharpened a bit in order for this to apply I think.
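For anyone who wants to check the arithmetic, a few lines of code reproduce the three scenarios (natural logs throughout):

```python
from math import log

def shannon_div(counts):
    """Shannon biodiversity of a species abundance list (natural log)."""
    total = sum(counts)
    return -sum(c / total * log(c / total) for c in counts if c > 0)

print(shannon_div([100, 100]))  # log 2 ~ 0.693
print(shannon_div([100, 50]))   # ~ 0.637: lower, despite 150 animals
print(shannon_div([2, 2]))      # log 2 again: same index, fragile system
```

The last case is the sting: the index alone can't distinguish the robust 100/100 system from the fragile 2/2 one.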


I agree it's clunky as well, but the necessary vocabulary is kinda hard to pin down, I think. I think part of the problem is the fungibility of these terms: what may once have been a non-reflexive variable ('in-itself') may become reflexive ('for-itself'), and vice versa - the only way to find out which is which is to actually do the empirical study itself, and find out what exactly whatever little patch of nature under consideration is in fact sensitive to ('sensitive to' being perhaps a better phrase than 'what nature can see'). So when you say later on that:


That's a much better way of putting it. No anthropomorphism, less conceptual baggage (like finding all the parameters nature 'cares' about). Also links directly to perturbation.



fdrake November 30, 2017 at 10:25 ¶ #128875
Reply to apokrisis

Well, that didn't have any quantitative analysis. So I did some digging. It seems appropriate to assume that trees that canopy very high have a 'large' energy flow into them. If you removed all the trees from an area, then the energy would flow into plant species on the ground. The total energy coming into that area could probably be assumed to be constant. This doesn't immediately reduce the overhead - the 'unspent energy' (not necessarily degrees of freedom) that Ulanowicz uses - since it depends on the relative apportioning of the solar energy to ground species: how much of the total goes to each. The total can go down while the proportions remain unchanged, so the entropy and joint entropy can remain unchanged too, and therefore the ascendency and overhead can stay the same.

So while the destruction of a canopy in an area could lead to changes in overhead, it doesn't have to.

edit: see here to play about with the maths yourself, see if I'm right.
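Here's a sketch of the scale-invariance claim for the information-theoretic ingredient of ascendency - the average mutual information (AMI) of a flow network, per Ulanowicz. Ascendency itself is AMI times total system throughput, so it scales with the total; the AMI, which depends only on the proportions, does not. The flow values below are invented:

```python
from math import log

def ami(T):
    """Average mutual information of a flow matrix T[i][j] (Ulanowicz)."""
    tst = sum(sum(row) for row in T)              # total system throughput
    row_tot = [sum(row) for row in T]
    col_tot = [sum(col) for col in zip(*T)]
    return sum(t / tst * log(t * tst / (row_tot[i] * col_tot[j]))
               for i, row in enumerate(T)
               for j, t in enumerate(row) if t > 0)

flows = [[0, 10, 2],
         [0, 0, 8],
         [4, 0, 0]]
scaled = [[0.5 * t for t in row] for row in flows]   # halve every flow

print(ami(flows))
print(ami(scaled))  # identical: AMI depends only on the proportions
```

Halving every flow halves the total and every marginal, so the ratios inside the logarithm, and hence the AMI, are untouched.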
apokrisis November 30, 2017 at 11:05 ¶ #128880
Reply to fdrake Your questions seem off the point so I’m struggling to know what you actually want.

If you have a professional interest, then there is a big literature. Maybe start with https://www.jameskay.ca/about/thermo.html

Rod Dewar and Rod Swenson also. I've mentioned Stan Salthe and Robert Ulanowicz. Charlie Lineweaver is another. Adrian Bejan might be the strongest in terms of generic models.

I’ve not been close to the research for 20 years and I was always only really interested in the qualitative arguments. Also the quantitative support wasn’t exactly slam dunk. Measuring ecosystems is not easy.

But for instance, one line of research involved thermal imaging of rainforests and other ecosystems. The hypothesis was that more complex ecologies would stick out by having a cooler surface temperature. They would extract more work from the solar gradient.

Is that the kind of experiment you have in mind?

Here’s a presentation with references at the end as well as charts of data - https://hyspiri.jpl.nasa.gov/downloads/2011_Symposium/day1/luvall%20hyspiri%20ecological%20thermodynamics%20may%202011%20final.pdf




fdrake November 30, 2017 at 11:34 ¶ #128882
Reply to apokrisis

I'm suspicious of the confidence you have in this biosemiotic/thermodynamic/entropic system, which crops up almost everywhere you post. You usually make statements based on decreases/increases in entropic or related quantities. You present your interpretation of biosemiosis, thermodynamics, and systems as fundamentally dynamical systems of information exchange as if it were a sure thing, and use it to justify whatever quantitative variations you expect.

I ask you for references and derivations to see if there's anything supporting the application of your system to the problem at hand. When I went digging, ascendency doesn't have any clear relationship to 'degrees of freedom' as you use the term; it manifests as the behaviour of a configurational entropy - which is not a monotonic function of the degrees of freedom in the sum. Neither is the joint entropy of a Markov network, introduced by normalising flows with total energy (density), a function of 'degrees of freedom' - which is the degree of the network. You can compute effective numbers of species or effective degrees of freedom based off of statistical summaries like the Shannon biodiversity or the exponential of the ascendency - but since they are monotone functions of the input entropic measure, predictions and requirements on the exponentiated quantity must be cashed out in terms of predictions and requirements on its inputs.

Your posts which use your underlying biosemiotic/thermodynamic/entropic system are a fuzzy bridge between this research background and the discussed problems. While it's commendable to base your reasoning and metaphysical systems on relevant science, it isn't at all clear how your cited literature and background concepts manifest as critiques, solutions, or reframings of the problematics you engage with. Especially when you use fuzzy notions like 'degrees of freedom' and hope that your meaning can be discerned by reading thermodynamics, some ecological semiotics, and the literature on applications of information theory to ecological networks and dissipative systems.

Whenever you've presented me literature (or someone else in a thread I've watched), I've gone 'huh that's interesting' and attempted to research it, but I've never managed to make the bridge between your research and your opinions. Too many quantitative allusions that aren't cashed out in precise terms.


fdrake November 30, 2017 at 11:48 ¶ #128887
@apokrisis

This isn't necessarily a flaw in your thinking. It could be determined by me not having read the things you've read. Further: the complaint that the background research doesn't cash out in exactly the terms you present it is thephilosophyforum.com first world problems, since you actually base things on research and provide references when prompted.
apokrisis November 30, 2017 at 21:06 ¶ #128952
Reply to fdrake First up, I'm not bothered if my arguments are merely qualitative in your eyes. I am only "merely" doing metaphysics in the first place. So a lot of the time, my concern is about what the usual rush to quantification is missing. I'm not looking to add to science's reductionist kitset of simple models. I'm looking to highlight the backdrop holistic metaphysics that those kinds of models are usually collapsing.

And then a lot of your questions seem to revolve around your definition of degrees of freedom vs mine. It would be helpful if you explained what your definition actually is.

My definition is a metaphysically general one. So it is a little fuzzy, or broad, as you say.

To help you understand, I define degrees of freedom as dichotomous to constraints. So this is a systems science or hierarchy theory definition. I make the point that degrees of freedom are contextual. They are the definite directions of action that still remain for a system after the constraints of that system have suppressed or subtracted away all other possibilities.

So the normal reductionist metaphysical position is that degrees of freedom are just brute atomistic facts of some kind. But I seek to explain their existence. They are the definite possibilities for "actions in directions" that are left after constraints have had their effect. So degrees of freedom are local elements shaped by some global context, some backdrop history of a system's development.

Thus I have an actual metaphysical theory about degrees of freedom. Or rather, I think this to be the way that holists and hierarchy theorists think about them generally. Peirce would be the philosopher who really got it with his triadic system of semiosis. Degrees of freedom equate to his Secondness.

A second distinctive point is that I also follow semiotic thinkers in recognising an essential connection between Boltzmann entropy and Shannon uncertainty - the infodynamic view which Salthe expresses so well. So this is now a quantification of the qualitative argument I just gave. Now biosemiotics is moving towards the possibility of actual science.

Theoretical biologists and hierarchy theorists like Howard Pattee in particular have already created a general systems understanding of the mechanism by which life uses codes to harness entropy gradients. So the story of how information and dynamics relates via an "epistemic cut" has been around since the 1970s. It is the qualitative picture that led to evo-devo. And it is the devo aspect - the Prigogine-inspired self-organising story of dissipative structures - that has become cashed out in an abundance of quantitative models over the past 30 years. I assume you know all about dissipative structure theory.

So what we have is a view of life and mind that now is becoming firmly rooted in thermodynamics. Plus the "trick" that is semiotics, or the modelling relation.

The physico-chemical realm already wants to self-organise to dissipate energy flows more effectively. That in itself has been a small revolution in physical science. What you call configuration entropy would seem to be what I would call negentropy, or the degrees of freedom spent to create flow channelling structure - some system of constraints. And in the infodynamic (or pansemiotic) view, the negentropy is information. It is a habit of interpretance, to use Peirce's lingo. So we have the duality of entropy and information, or a sustaining flow of degrees of freedom and set of structuring constraints, at the heart of our most general thermodynamical description of nature.

Reductionist thinking usually just wants to talk about degrees of freedom and ignore the issue of how boundary conditions arise. The thermodynamics is basically already dead, gone to equilibrium, by the time anything is quantified. So the boundary conditions are taken as a given, not themselves emergently developed. For example, an ideal gas is contained in a rigid flask and sitting in a constant heat sink. Nothing can change or evolve in regard to the constraints that define the setting in which some bunch of non-interacting particles are free to blunder about like Newtonian billiard balls. But the dissipative structure view is all about how constraints can spontaneously self-organise. Order gets paid for if it is more effective at lowering the temperature of a system.

So thermodynamics itself is moving towards an entropy+information metaphysics. The mental shift I argue for is to see dissipative structure as not just merely a curiosity or exception to the rule, but instead the basic ontological story. As Layzer argues, the whole Big Bang universe is best understood as a dissipative structure. It is the "gone to equilibrium" Boltzmann statistical mechanics, the ideal gas story, that is the outlier so far as the real physical world is concerned. The focus of thermodynamics has to shift to one which sees the whole of a system developing. Just talking about the already developed system - the system that has ceased to change - is to miss what is actually core.

So physics itself is entropy+information in some deep way it is now exploring. And then biology is zeroing in on the actual semiotic machinery that both separates and connects the two to create the even more complex phenomenon of life and mind. So now we are talking about the epistemic cut, the creation of codes that symbolise information, capture it and remember it, so as to be able to construct the constraints needed to channel entropy flows. Rivers just carve channels in landscapes. Organisms can build paths using captured and internalised information.

Only recently, I believe the biosemiotic approach has made another huge step towards a quantitative understanding - one which I explained in detail here: https://thephilosophyforum.com/discussion/comment/105999#Post_105999

So just as physics has focused on the Planck-scale as the way to unify entropy+information - find the one coin that measures both at a fundamental level - so biology might also have its own natural fundamental scale at the quasi-classical nanoscale (in a watery world). If you want to know what a biological degree of freedom looks like, it comes down to the unit of work that an ATP molecule can achieve as part of a cell's structural machinery.

To sum up, no doubt we have vastly different interests. You seem to be concerned with adding useful modelling tools to your reductionist kitbag. And so you view everything I might say through that lens.

But my motivation is far more general. I am interested in the qualitative arguments with which holism takes on reductionism. I am interested in the metaphysics that grounds the science. And where I seek to make contact with the quantitative is on the very issue of what counts as a proper act of measurement.

So yes, I am happy to talk loosely about degrees of freedom. It is a familiar enough term. And then I would define it more precisely in the spirit of systems science. I would point to how a local degree of freedom is contextually formed and so dichotomous to its "other" of some set of global constraints. Then further, I would point to the critical duality which now connects entropy and information as the two views of "a degree of freedom". So that step then brings life and its epistemic cut, its coding machinery, into the thermodynamics-based picture.

And then now I would highlight how biophysics is getting down to the business of cashing out the notion of a proper biological degree of freedom in some fundamental quantitative way. An ATP molecule as the cell's universal currency of work looks a good bet.

I'm sure you can already see in a hand-waving way how we might understand a rainforest's exergy in terms of the number of ATP molecules it can charge up per solar day. A mature forest would extract ATP even from the tiniest crumbs dropping off the table. A weedy forest clearing would not have the same digestive efficiency.

So I've tried to answer your questions carefully and plainly even though your questions were not particularly well posed. I hope you can respond in kind. And especially, accept that I just might not have the same research goals as you. To the degree my accounts are metaphysical and qualitative, I'm absolutely fine about that.



fdrake November 30, 2017 at 23:56 ¶ #128996
Reply to apokrisis

First up, I'm not bothered if my arguments are merely qualitative in your eyes. I am only "merely" doing metaphysics in the first place. So a lot of the time, my concern is about what the usual rush to quantification is missing. I'm not looking to add to science's reductionist kitset of simple models. I'm looking to highlight the backdrop holistic metaphysics that those kinds of models are usually collapsing.


This is fine. I view it in light of this:

To sum up, no doubt we have vastly different interests. You seem to be concerned with adding useful modelling tools to your reductionist kitbag. And so you view everything I might say through that lens.


Which is largely true. What I care about in the questions I've asked you is how the metaphysical system you operate with instantiates into specific cases. You generally operate at a high degree of abstraction, and discussion topics become addenda to the exegesis of the system you operate in. I don't want this one to be an addendum, since it has enough structure to be a productive discussion.

To help you understand, I define degrees of freedom as dichotomous to constraints. So this is a systems science or hierarchy theory definition. I make the point that degrees of freedom are contextual. They are the definite directions of action that still remain for a system after the constraints of that system have suppressed or subtracted away all other possibilities.


What does 'dichotomous to constraints' mean? There are lots of different manifestations of the degrees of freedom concept. I generally think of it as the dimension of a vector space - maybe calling a vector space an 'array of states' is enough to suggest the right meaning. If you take all the vectors in the plane, you have a 2 dimensional vector space. If you constrain the vectors to be such that their sum is specified, you lose a degree of freedom, and you have a 1 dimensional vector space. This also applies without much modification to random variables and random vectors, only the vector spaces are defined in terms of random variables instead of numbers.

I think this concept is similar to, but distinct from, the ones in thermodynamics, though it has been a while since I studied any. The number of independent ways a thermodynamic system can vary is sometimes called its degrees of freedom, and a particular degree of freedom is a way in which the system can vary. A 'way in which something can vary' is essentially a coordinate system - a parametrisation of the behaviour of something in terms of distinct components.

There are generalisations of what degrees of freedom means statistically when something other than (multi)linear regression is being used. For something like ridge regression, which deals with correlated inputs to model a response, something called the 'effective degrees of freedom' is used. The effective degrees of freedom is defined as the trace of the projection matrix from the response space onto the vector space spanned by the model terms (something like that matrix's size). When the ridge penalty is zero, this is equal to the above vector-space/dimensional degrees of freedom.
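A short sketch of that trace definition for ridge regression (the data is random, just to have something to project; numpy is assumed available):

```python
import numpy as np

def edf(X, lam):
    """Effective degrees of freedom of ridge regression:
    the trace of the hat matrix X (X'X + lam*I)^(-1) X'."""
    p = X.shape[1]
    hat = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return float(np.trace(hat))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))

print(edf(X, 0.0))    # 5.0 (up to rounding): no penalty, full column dimension
print(edf(X, 100.0))  # strictly below 5: shrinkage removes effective freedom
```

At zero penalty the hat matrix is an ordinary projection and its trace is the column dimension; the penalty shrinks each eigendirection's contribution below 1.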

Effective degrees of freedom can also be defined in terms of the exponential of a configurational entropy, in a different context. So I suppose I should talk about configurational entropy.

Configurational entropy looks like this:

[math]-\sum_i f_i p_i \log (p_i)[/math]

Where [math]p_i[/math] are a collection of numbers between 0 and 1 such that [math]\sum_i p_i = 1[/math]. They are weights. The [math]f_i[/math] are included to allow for conditional entropies and the like. The 'degrees of freedom' can also mean the number of terms in this sum, which is equal to the number of distinct, non-overlapping states that can obtain. Like proportions of species in an area of a given type - how many of each divided by how many in total. If the [math]p_i[/math] are treated as proportions in this way, it works the same as the Shannon Entropy. Shannon Entropy is a specific case of configurational entropy.
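The 'effective degrees of freedom as exponential of an entropy' move can be sketched directly (with the f_i all set to 1, recovering the Shannon case; the proportions are made up):

```python
from math import exp, log

def config_entropy(ps, fs=None):
    """-sum f_i * p_i * log(p_i); with all f_i = 1 this is Shannon entropy."""
    fs = fs if fs is not None else [1.0] * len(ps)
    return -sum(f * p * log(p) for f, p in zip(fs, ps) if p > 0)

props = [0.5, 0.3, 0.2]        # proportions of three species
H = config_entropy(props)
print(H)                        # Shannon entropy of the proportions
print(exp(H))                   # 'effective number of species': ~2.80, below 3

# A perfectly even community hits the maximum, exp(log 3) = 3:
print(exp(config_entropy([1/3] * 3)))
```

The exponential turns an entropy into an 'as-if-uniform' count: a skewed three-species community behaves like fewer than three equally common species.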

The Shannon entropy is related to the Boltzmann entropy in thermodynamics in a few ways I don't understand very well. As I understand it, the Boltzmann entropy is a specific form of Shannon entropy. The equivalence between the two lets people think about Shannon entropy and Boltzmann entropy interchangeably (up to contextualisation).

Then there's the manifestation of entropy in terms of representational complexity - which can take a couple of forms. There's the original Shannon entropy, then there's Kolmogorov complexity. Algorithmic information theory takes place in the neighbourhood of their intersection. The minimum description length of a string (Kolmogorov) and the related but distinct quantity of the average of the negative logarithm of a probability distribution (Shannon) are sometimes thought of as being the same thing. Indeed there are correspondence theorems - stuff true about Kolmogorov implies true stuff about Shannon and vice versa (up to contextualisation) - but they aren't equivalent. So:

The theoretical links between Shannon's original entropy, thermodynamical entropy, and representational complexity can promote a vast deluge of 'I can see through time'-like moments when you discover or grok things about their relation. BUT, and this is the major point of my post:

Playing fast and loose with what goes into each of the entropies and their context makes you lose a lot. They only mean the same things when they're indexed to the same context. The same applies for degrees of freedom.

I think this is why most of the discussions I've read including you as a major contributor are attempts to square things with your metaphysical system, but described in abstract rather than instantiated terms. 'Look and see how this thing is in my theory' rather than 'let's see how to bring this thing into my theory'. Speaking about this feels like helping you develop the system. Reading the references, however, does not.

I'll respond to the rest of your post in terms of ascendency, but later.

apokrisis December 01, 2017 at 00:07 ¶ #129001
Reply to fdrake I'll find time to respond to your post later. But it is a shame that you bypass the content of my posts to jump straight back to the world from your point of view.

You make very little effort to engage with my qualitative argument. Well none at all. So it feels as though I'm wasting my breath if you won't spell out what you might object to and thus show if there is any proper metaphysics motivating your view, or whether you just want to win by arguing me into some standard textbook position on the various familiar approaches to measuring entropy.

Perhaps I'll let you finish first.
Streetlight December 01, 2017 at 00:22 ¶ #129009
Quoting apokrisis
But it is a shame that you bypass the content of my posts to jump straight back to the world from your point of view.


>:O
apokrisis December 01, 2017 at 00:25 ¶ #129011
Reply to StreetlightX Why the sudden interest in Nick Lane and Peter Hoffmann? Couldn't possibly be anything I said.
Streetlight December 01, 2017 at 00:32 ¶ #129014
Reply to apokrisis Sudden? Lol. Lane's been on my reading list since the book came out, and Hoffmann's a nice complement to that. I fully admit my theoretical promiscuity though - I even have Ulanowicz and Salthe coming up soon! But I'm still sousing in the sweet, sweet irony of your comment : D
apokrisis December 01, 2017 at 00:52 ¶ #129019
Reply to StreetlightX Sousing? You really do have a tin ear when it comes to your ad homs. It absolutely spoils the effect when you come across as the hyperventilating class nerd.
Streetlight December 01, 2017 at 00:56 ¶ #129020
If you say so, buttercup.
apokrisis December 01, 2017 at 01:05 ¶ #129022
Reply to StreetlightX Hmm. Just not convincingly butch coming from you. And more importantly it has no sting. You've got to be able to find a real weakness to pick at here. Calling me buttercup once again ends up saying more about your life experience than mine.
fdrake December 01, 2017 at 01:48 ¶ #129031
Reply to apokrisis

It literally took me an hour to disambiguate the different relevant notions of entropy and degrees of freedom that bear some resemblance to how you use the terms, and I still forgot to include a few. Firstly that degrees of freedom can be looked at as the exponential of entropy measures and also that entropy can be thought of as a limitation on work extraction from a system or as a distributional feature of energy.

I wouldn't be doing my job properly if I accused you of playing fast and loose with terms while playing fast and loose with them myself.
apokrisis December 01, 2017 at 01:54 ¶ #129034
Reply to fdrake So already we agree that the notion is ill-defined? It is a fast and loose term in fact. Just like entropy. Or information. Maybe this is why I am right in my attempt to be clear about the high-level qualitative definition and not pretend it has some fixed low-level quantitative measure.

But I'll keep waiting until you do connect with what I've already posted.
fdrake December 01, 2017 at 02:01 ¶ #129035
Reply to apokrisis

Entropy is absolutely well defined. It's just defined in different ways. There are multiple entropies. They mean different things. They have different formulas. They can relate. The way you use entropy probably isn't well defined yet, it has shades of all of the ones I detailed in both posts, and to speak univocally about entropy as you do is to blur out the specificities required in each application. The same goes for degrees of freedom.
apokrisis December 01, 2017 at 02:16 ¶ #129038
Quoting fdrake
Entropy is absolutely well defined.


What's your single sentence definition then? I mean, just for fun.
fdrake December 01, 2017 at 02:18 ¶ #129040
Reply to apokrisis

The point of that post was to highlight that there isn't a univocal sense of entropy, yet.
apokrisis December 01, 2017 at 02:35 ¶ #129044
Reply to fdrake Yeah. But just have a go. Let's see what you could come up with. It truly might help to make sense of your attacks on mine.

If instead you really want to say that entropy is simply whatever act of measurement we care to construct as its instrumental definition - that there is no common thread of thought which justifies the construct - then how could you even begin to have an intelligent discussion with me here?


AngleWyrm December 01, 2017 at 04:52 ¶ #129068
Entropy
Quoting apokrisis
What's your single sentence definition then? I mean, just for fun.


Entropy is the complementary antithesis of order, a synonym for disorder and disarray. A snowflake melting, an exothermic release of energy, a battery resolving to no charge, water settling at sea level, a magnet losing its cohesion.

fdrake December 01, 2017 at 14:29 ¶ #129154
Reply to apokrisis

It's actually a lot of work just to research the background to what you're saying. So I think I have to break up my responses into a series.

A univocal sense of entropy would require the instrumental validity of its applications. This is similar to saying that a measurement of how depressed someone is has to take into account the multiple dimensions of depression - the affective, behavioural and psychomotor components. Validity has three aspects: construct validity, content validity and criterion validity. And these have a few sub-aspects.

Construct validity: the variable varies with what it's supposed to. If you asked a depressed person about how much they liked football, and used that as a measurement of how depressed they are, this measurement would have low construct validity. Construct validity splits into two forms, discriminant validity and convergent validity. Discriminant validity is the property that the variable does not vary with what it's not supposed to - the football depression scale I alluded to above has low discriminant validity since it would be sensitive to the wrong things. It also has low convergent validity, since if its correlation with a real measure of depression was computed, it would be very low. Convergent validity is then the property that a measure varies with what it's supposed to vary. I think that convergent validity of a group of measures (say measures for depression absence, happiness, contentment) consists in the claim that each can be considered as a monotonic (not necessarily linear as in correlation) function of the other.

Content validity: the variable varies in a way which captures all aspects of a phenomenon. The football scale of depression has essentially no content validity, a scale of depression when the psychomotor effects of depression are not taken into account has more, a scale of depression which attempts to quantify all effects of depression and does it well has high content validity.

Criterion validity: the variable varies with outcomes that can be predicted with it. Imagine if someone has taken a test of depression on the football scale and a good scale. Now we administer a test of 'life contentment', high scores on the good depression scale would generally occur with low scores on the life contentment scale. Scores on the football scale will have little or no relationship to the life contentment scale measurements.

So you asked me if I can provide a univocal definition of entropy. I can't, nevertheless I insist that specific measures of entropy are well defined. Why?

Measures of entropy are likely to have high construct validity - they measure what they're supposed to. Let's take two examples - ascendency and Shannon biodiversity:

The ascendency is a property of a weighted directional graph. The nodes on such a graph are relevant ecological units - such as species groups in a community. The weights in the graph are measurements of the transfer between two nodes. Let's take an example of wolves, rabbits and grass and construct a food web; assuming a single species for each, no bacteria etc...

Wolves: have an input from rabbits.
Rabbits: have an input from grass and an output to wolves.
Grass: has an input from the sun and an output to rabbits.

Assume for a moment that the energy transfer is proportional to the biomass transfer. Also assume this ecosystem is evaluated over a single day. Also assume that the wolves extract half as much biomass from the rabbits as the rabbits do from the grass, and the rabbits extract half the energy from the grass that the grass does from the sun; and that grass extracts '1' unit from the sun (normalising the chain).

Then:

Transfer(Sun,Grass)=1
Transfer(Grass,Rabbits)=0.5
Transfer(Rabbits,Wolves)=0.25

Denote transfer as T. The ascendency requires the computation of the total throughput, [math]T(.,.)[/math] - the sum of all weights, here 1.75. We then need the average mutual information. This is defined as:

[math]MI=\sum_i \sum_j \frac{T(i,j)}{T(.,.)} \log \Big( \frac{T(i,j)T(.,.)}{T(i,.)T(.,j)} \Big)[/math]

Where [math]T(i,.)[/math] is the total of the flows going from i to others, and the reversed index [math]T(.,j)[/math] is the total of the flows going from others to j. I'm not going to compute this, since the actual value won't help elucidate the meaning of the terms. The average mutual information, roughly, is a measure of the connectivity of the graph, but weighted so that 'strong' connections have more influence on MI than 'weak' ones.

The ascendency is then defined as:

[math]T(.,.) \times MI[/math]

What does this measure? The diversity of flows within a network. How? It looks at the proportion of each flow in the total, then computes a quantification of how that particular flow incorporates information from other flows - then scales back to the total flow in the system. It means that the diversity is influenced not just by the number of flows, but by their relative strength. For example, a network consisting of 1 huge flow with the rest negligible would give an ascendency much closer to that of a single-flow network than other measures would - incorporating an idea of functional diversity as well as numerical biodiversity. Having 1 incredibly dominating flow means 0 functional diversity.

The mutual information can also be exponentiated to produce a measure of the degrees of freedom of the network. Having 1 incredibly dominating flow means 0 MI, so 0 ascendency, and the exponential of the MI is:

[math]\exp(MI) \approx \exp(0) = 1[/math]

I.e. one 'effective degree of freedom'. Ulanowicz has related this explicitly to the connectivity of digraphs in 'Quantifying the Complexity of Flow Networks: How many roles are there?'. It's behind a paywall unfortunately. If an ecological network has flows equidistributed over the organisms - each receiving an equal portion of the total flow - then it will have the same effective degrees of freedom as the number of nodes (number of organism types) in the network. When an unequal portion of total flow is allocated to each, it will diverge from the number of nodes - decreasing, since there's more functional concentration in terms of flow in the system.
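To make the construction concrete, the sun/grass/rabbits/wolves chain above can be run through these definitions directly (a sketch in Python; I use natural logs since the log base is left unspecified in the post - a different base only rescales MI and the ascendency):

```python
import math

# Flows for the sun -> grass -> rabbits -> wolves chain from the post
flows = {("Sun", "Grass"): 1.0,
         ("Grass", "Rabbits"): 0.5,
         ("Rabbits", "Wolves"): 0.25}

total = sum(flows.values())  # T(.,.) = 1.75

def out_flow(i):  # T(i,.): total flow leaving node i
    return sum(t for (a, b), t in flows.items() if a == i)

def in_flow(j):   # T(.,j): total flow entering node j
    return sum(t for (a, b), t in flows.items() if b == j)

# Average mutual information over all links
mi = sum((t / total) * math.log(t * total / (out_flow(i) * in_flow(j)))
         for (i, j), t in flows.items())

ascendency = total * mi
effective_dof = math.exp(mi)  # the 'effective degrees of freedom'
```

(Incidentally, for a pure chain each node's in- and out-flow equals the single link through it, so the MI here reduces to the Shannon entropy of the flow shares - the point being less the value itself than seeing exactly what the mapping takes as input.)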

Numerically, this would be equal to the exponentiated Shannon Biodiversity index in an ecosystem when the species are present in equal numbers. To see this, the Shannon Biodiversity is defined as:

[math]-\sum_i p_i \log p_i[/math]

Where every [math]p_i[/math] is the proportion of the i-th species in the total. This is a numerical comparison of the relative abundance of each species present in the ecosystem. The index obtains its maximum value when each species has equal relative abundance, and its exponential is then equal to the number of species in the ecosystem. Look at the case with 2 species each having 2 animals: p is constant along i, being 0.5, so the Shannon Biodiversity is -2*0.5*log(0.5) = log2, and its exponential is 2.
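The two-species arithmetic checks out numerically (a sketch; natural logs, matching the log 2 in the example):

```python
import math

def shannon_diversity(counts):
    """Shannon biodiversity: -sum p_i ln(p_i) over relative abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts)

h = shannon_diversity([2, 2])    # two species, two animals each -> ln 2
effective_species = math.exp(h)  # exp(ln 2) = 2.0

# Unequal abundances drag the exponentiated index below the species count
skewed = math.exp(shannon_diversity([9, 1]))  # about 1.38, not 2
```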

Critically this 2 means something completely different from the effective degrees of freedom derived from the flow entropy. Specifically, this is because there are equal relative abundances of each species rather than equal distribution of flow around the network. The math makes them both produce this value since they are both configurational (Shannon) entropies - and that's literally how they were designed to work.

If we were to take both of these measures individually and assess them for content validity - they'd probably be pretty damn good. This is because they were derived in different constrained situations to be sensitive to different concepts. They adequately measure flow diversity and relative-abundance biodiversity. If you take them together - you can see they will only necessarily agree when both the flows and the numbers are distributed equally among all species in the network. This means low construct validity for a sense of entropy attempting to subsume both. It just won't capture the variability in both of them. I'm being reasonably generous here: when the degrees-of-freedom notion from ascendency theory was applied across a eutrophication gradient, which I assume you will allow as an entropic gradient, the ascendency degrees of freedom varied in an upside-down U shape from low eutrophication to high eutrophication - so it doesn't have to agree with other (more empirically derived) concepts of 'flow concentration' (more nutrients going to plants, less water oxygen, a possible drop in diversification). I.e., the middle ground between low and high eutrophication had the highest ascendency degrees of freedom, not either extreme.

I think this is actually fine, as we already know that 'intermediates' are likely to be closer to equidistribution of flows than extremes, so long as they contain the same species. The paper puts it this way:

In the light of these results, the network definition of eutrophication (Ulanowicz, 1986) does not appear to accord with the gradient in eutrophication in the Mondego estuarine ecosystem. Rather, it would seem more accurate to describe the effects of eutrophication process in this ecosystem in terms of a disturbance to system ascendency caused by an intermittent supply of excess nutrients that, when coupled with a combination of physical factors (e.g. salinity, precipitation, etc), causes both a decrease in system activity and a drop in the mutual information of the flow structure. Even though a significant rise in the total system throughput does occur during the period of the algal bloom and does at that time give rise to a strong increase of the system ascendency, the longer-term, annual picture suggests instead that the non-bloom components of the intermediate and strongly eutrophic communities were unable to accommodate the pulse in production. The overall result was a decrease in the annual value of the system TST and, as a consequence, of the annual ascendency as well.



Of course, if you've read this far, you will say 'the middle state is the one furthest from order so of course it has the highest degrees of freedom', which suggests the opposite intuition from removal of dominant energy flows 'raining degrees of freedom' down onto the system. This just supports the idea that your notion of entropy has poor construct validity.

Your notion of entropy has very good content validity, since you will take any manifestation of entropy as data for your theory of entropy, it of necessity involves all of them. However, since we've seen that the construct validity when comparing two different but related entropic measures of ecosystem properties is pretty low, your definition of entropy has to be able to be devolved to capture each of them. And since they measure different things, this would have to be a very deep generalisation.

The criterion validity of your notion of entropy is probably quite low, since your analogies disagree with the numerical quantity you were inspired by.

This is just two notions of entropy which have a theoretical link and guaranteed numerical equality on some values, and you expect me to believe that it's fruitful to think of entropy univocally when two similar measures of it disagree conceptually and numerically so much? No, Apo. There are lots of different entropies, each of them is well defined, and it isn't so useful to analogise all of them without looking at the specifics.

Edit: If you want me to define entropy univocally, it's not a game I want to play. I hope the post made it clear that I don't think it's useful to have a general theory of entropy which provides no clarity upon instantiation into a context.

So about the only thing I can say is that:

Entropy = something that looks like Shannon Diversity.



fdrake December 01, 2017 at 20:21 ¶ #129211
Reply to apokrisis

If instead you really want to say that entropy is simply whatever act of measurement we care to construct as its instrumental definition - that there is no common thread of thought which justifies the construct - then how could you even begin to have an intelligent discussion with me here?


Funnily enough, it's precisely the common thread between different notions of entropy that makes me resist trying to come up with a catch-all definition of it. This is that entropy is a parametrised concept when it has any meaning at all. What does a parametrisation mean?

I'll begin with a series of examples, then an empirical challenge based on a literature review. Shannon entropy, Boltzmann entropy, ascendency, mutual information - these are all functions from some subset of n-dimensional real space to the real line. What does this mean? Whenever you find an entropic concept, it requires a measure. There's a need to be able to speak about low and high entropy arrangements for whatever phenomenon is being studied.

So - find me an example of an entropy in science that isn't parametrised. I don't think there are any.

Examples - ascendency as an entropy is a mapping from n-dimensional real space to the real line, where n is the number of nodes in the ecosystem network. Shannon Diversity is a mapping from n-length sequences of natural numbers to the real line, where n is the number of species in an ecosystem. Gibbs entropy is the same in this sense as Shannon Diversity. From thermodynamics to ecological infodynamics, entropy is always something which is spoken about in degrees, and when qualitative distinctions arise from it - they are a matter of being emergent from differences in degree. Differences in values of the entropy.
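The point that the inputs to the mapping carry the meaning can be made concrete by feeding one formula two different parametrisations of the same ecosystem (a sketch; the numbers are hypothetical):

```python
import math

def shannon(ps):
    """One mapping from a probability vector to the real line."""
    return -sum(p * math.log(p) for p in ps if p > 0)

# The same three-species ecosystem, parametrised two ways:
abundances = [0.25, 0.25, 0.5]   # relative abundance of each species
flow_shares = [4/7, 2/7, 1/7]    # each link's share of total flow

h_abundance = shannon(abundances)  # a relative-abundance biodiversity
h_flows = shannon(flow_shares)     # a flow-diversity quantity
# Same formula, different inputs, different numbers, different meanings.
```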

You said you didn't mind if I believed your descriptions of entropy are purely qualitative - the problem is that they are not purely qualitative. You speak about entropic gradients, negentropy, entropy maximisation without ever specifying the entropy of what, and how the entropy is quantified - or even what an entropy gradient is. Never mind what 'entropification' is, but more on that later... Anyway, back to the commonalities in entropy definitions.

So a commonality is that they are mappings from some space to the real line. But what matters - what determines the meaning of the entropy is both what the inputs to the entropy function are and how they are combined to produce a number. To speak of entropy in general is to let the what and the how vary with the implicit context of the conversation; it destroys the meaning of individual entropies by attempting to unify them, the unification has poor construct validity precisely because it doesn't allow the what and the how of the mapping to influence the meaning.

So when you say things like:

So the normal reductionist metaphysical position is that degrees of freedom are just brute atomistic facts of some kind. But I seek to explain their existence. They are the definite possibilities for "actions in directions" that are left after constraints have had their effect. So degrees of freedom are local elements shaped by some global context, some backdrop history of a system's development.


In a trivial sense, degrees of freedom are local elements shaped by some global context. You index to the history of a system as if that gives 'its' entropy a unique expression. You can see that this just isn't the case by comparing the behaviour of Shannon Entropy and ascendency - they have different behaviours, they mean different things, they quantify different types of disorder of an ecosystem. And after this empty unification of the concept of entropy, you give yourself license to say things like this:

So evolution drives an ecology to produce the most entropy possible. A senescent ecology is the fittest as it has built up so much internal complexity. It is a story of fleas, upon fleas, upon fleas. There are a hosts of specialists so that entropification is complete. Every crumb falling off the table is feeding someone. As an ecology, it is an intricate hierarchy of accumulated habit, interlocking negentropic structure. And then in being so wedded to its life, it becomes brittle. It loses the capacity to respond to the unpredictable - like those either very fine-grain progressive parameter changes or the out of the blue epic events


'Evolution drives an ecology to produce the most entropy possible' - could be viewed in terms of Shannon Entropy, Exergy, Functional Biodiversity.

'A senescent ecology is the fittest as it has built up so much internal complexity' - could be viewed in terms of Shannon Entropy, Exergy, Functional biodiversity.

'It is a story of fleas, upon fleas, upon fleas' - is now apparently solely a network based concept, so it's a functional biodiversity.

'There are hosts of specialists so that entropification is complete' - this makes sense in terms of numerical biodiversity - relative abundances.

'Every crumb falling off the table is feeding someone.' - this makes sense in terms of functional diversity, like ascendency.

'As an ecology, it is an intricate hierarchy of accumulated habit, interlocking negentropic structure'

And when you say negentropy, you mean configurational entropy, except that means it's nothing about ascendency any more.

'. And then in being so wedded to its life, it becomes brittle. It loses the capacity to respond to the unpredictable - like those either very fine-grain progressive parameter changes or the out of the blue epic events'

I mean, your 'it' and 'unpredictable' are ranging over all available entropy concepts and all possible perturbations to them. You can look at the example of applying ascendency and exergy along a eutrophication gradient to see that such breadth generates inconsistencies.

Then switching to that informational or negentropic side of the deal - the tale of life's dissipative structure - the degrees of freedom become the energy available to divert into orderly growth. It is the work that can be done to make adaptive changes if circumstances change.


Now the degrees of freedom are solely a concept of exergy and available energy? Jesus man. The problem here isn't just that you're equivocating on a massive scale, it's that changes in different entropy measures mean different things for the dynamics of a system.

Life is spending nature's degrees of freedom in entropifying ambient energy gradients. And it spends its own degrees of freedom in terms of the work it can extract from that entropification - the growth choices that it can make in its ongoing efforts to optimise this entropy flow.


I could find two references for 'entropification' - and neither of them are in an ecological context, they're a process for estimating orderliness of errors in statistical models. One of them is an applied example, one of them is looking at it in terms of stochastic geometry. I mean, there's no clear sense of entropification to have. It could refer to any of them, but you probably want it to resemble exergy the most here. And through some analogy to thermodynamics, you'll think this has an accessible meaning. How does entropification work?

You earlier say:

So the degrees of freedom are the system's entropy. It is the through-put spinning the wheels.


This relies upon the exponentiation of a particular entropy measure. As you saw, this idea isn't a unified one - and unification produces terrible construct validity. The degrees of freedom are something of the what of entropy, not the how. You can use the how to look back at the what, but not without context.

Every process described in your post is a placeholder. It reminds me of my favourite sentence I've ever read in academic literature. It is a howler:

During the search phase, subtask relevant teabag features become attentionally prioritised within the attentional template during a fixation.

This is supposed to serve as an example of how different features of an object become relevant and become looked at for a while through the progression of a task. What they actually did was take a description of the process in general:

During the search phase, subtask relevant features become attentionally prioritised within the attentional template during a fixation.

And then thought 'this would be much clearer if we substituted in teabag':

During the search phase, subtask relevant teabag features become attentionally prioritised within the attentional template during a fixation.

How loosely you treat entropy makes almost everything you say a subtask relevant teabag feature. It is an example from a promised theory which has not been developed.

Edit: the authors of subtask relevant teabag features actually did develop a theory, though.

fdrake December 01, 2017 at 20:37 ¶ #129215
Reply to apokrisis

If my responses meet your standard of 'intelligent discussion', feel free to respond at this point.
Deleteduserrc December 02, 2017 at 03:02 ¶ #129277
Quoting StreetlightX
That blog post was fascinating! I keep wandering back to the psychoanalytic point where the cheating husband's relationship with his lover only 'works' insofar as he is married: were he to leave his wife for the sake of his lover, the lover would no longer be desirable... Of course the psychoanalytic lesson is that our very 'subjective POV' is itself written into the 'objective structure' of things: it's not just window dressing, and if you attempt to discard it, you change the nature of the thing itself.

And I think this slipperiness is what makes it so hard to fix the status of a 'parameter': if you want to make a parameter 'work' (i.e. if you intervene in a system on that basis), you will cause changes - but that doesn't mean the system is 'in-itself' sensitive to such parameters: only that, through your intervention you've made it so.


Glad you liked it. It's part of larger 'series' ( called 'uruk machines', organized in the archive section) that tries, ambitiously, to synthesize 4 thinkers in order to create a Big Metanarrative ala 'how the west got where it is now.' It's pretty fascinating, whatever you think of his conclusions. The author admits, in a footnote or comment somewhere, that he's trying to trojan-horse continental insights using a rational idiom - and I think he largely succeeds. Definitely worth a read.

This thread has long since reached the escape velocity necessary to go irretrievably over my head, but that's ok. Even if I'm left dick-in-my-hands fumbling with basic Hegelian concepts, it's comforting to know that fdrake is still killing it with the hard applied mathematics and that apokrisis is still 1/x crisping every villain who crosses his path like someone who tapes down the 'x' button to grind out xp while he sleeps. It means everything is still progressing according to some familiar order.

"Perhaps there remains/
some tree on a slope, that we can see/
again each day: there remains to us yesterday’s street/
and the thinned-out loyalty of a habit/
that liked us, and so stayed, and never departed."


So all that being said, acknowledging I can't keep up with the math, I'm still confident enough to engage the OP on its own terms which are, I believe, metaphorical. Which isn't to say I think you think that self isn't literally an ecosystem - I believe you do, and I probably agree - but that I think the significance of this way of looking at the self ultimately relies on - and is motivated by- what can be drawn from it conceptually. It's about drawing on empirically-sourced models to the extent that they facilitate conceptual considerations. It's metaphorical in the literal sense that we're transporting some way of thinking from one level to another.

And what we have conceptually is something like: the self is a hierarchically organized collection of processes that can either be too open to the outside at the risk of being overwhelmed or too entrenched against the outside at the risk of brittle collapse. Basically chaos vs order.

As apo said, this essentially cashes out in goldilocks terms. If this isn't about the nuts and bolts of any actual ecosystem, this is really just a metaphor for: not too open, not too closed cf ecosystems.

So why now? why here? What's being said, really?

To get political: isn't not too closed, not too open, self-regulating while allowing lines of flight - i mean isn't that, in a perfect nutshell, neoliberalism (multiculturalism, whatever)?

I want you to be a bloodless academic punching bag, conceptually defending the current order by means of weak intellectual metaphors that conceal your own place in the system. That would satisfy me to no end. It would mean it's ok I dropped out.

You're not doing the real-ecosystem math thing fdrake is doing, even if you're drawing from his insights when it helps your case. So what are you doing? Prove me wrong! Is there any sense in which your metaphors don't serve the default academic order? Zizek and Deleuze and whoever else reduced to a serving a niche in the web of citation bread-crumbs etc etc. (get attention by drawing on an unexpected source, make your mark by bringing him ultimately back into the fold.)

TimeLine December 02, 2017 at 06:23 ¶ #129297
Quoting StreetlightX
We're basically a series of loops, some only residing 'inside' us, some extending far beyond our skin.


I didn't get a chance to read everything, but in the case of thermodynamic systems, the evolution of any given system is determined toward a state of equilibrium, and ergodicity attempts to ascertain the averages of behaviour within a system (transformations, arbitrary convergence, irreducibility etc) and political systems are an attempt to order the nature of Hobbesian chaos. I really like this:

A baby girl is mysteriously dropped off at an orphanage in Cleveland in 1945. “Jane” grows up lonely and dejected, not knowing who her parents are, until one day in 1963 she is strangely attracted to a drifter. She falls in love with him. But just when things are finally looking up for Jane, a series of disasters strike. First, she becomes pregnant by the drifter, who then disappears. Second, during the complicated delivery, doctors find that Jane has both sets of sex organs, and to save her life, they are forced to surgically convert “her” to a “him.” Finally, a mysterious stranger kidnaps her baby from the delivery room.

Reeling from these disasters, rejected by society, scorned by fate, “he” becomes a drunkard and drifter. Not only has Jane lost her parents and her lover, but he has lost his only child as well. Years later, in 1970, he stumbles into a lonely bar, called Pop’s Place, and spills out his pathetic story to an elderly bartender. The sympathetic bartender offers the drifter the chance to avenge the stranger who left her pregnant and abandoned, on the condition that he join the “time travelers corps.” Both of them enter a time machine, and the bartender drops off the drifter in 1963. The drifter is strangely attracted to a young orphan woman, who subsequently becomes pregnant.

The bartender then goes forward 9 months, kidnaps the baby girl from the hospital, and drops off the baby in an orphanage back in 1945. Then the bartender drops off the thoroughly confused drifter in 1985, to enlist in the time travelers corps. The drifter eventually gets his life together, becomes a respected and elderly member of the time travelers corps, and then disguises himself as a bartender and has his most difficult mission: a date with destiny, meeting a certain drifter at Pop’s Place in 1970.

The question is: Who is Jane’s mother, father, grandfather, grandmother, son, daughter, granddaughter, and grandson? The girl, the drifter, and the bartender, of course, are all the same person. These paradoxes can make your head spin, especially if you try to untangle Jane’s twisted parentage. If we draw Jane’s family tree, we find that all the branches are curled inward back on themselves, as in a circle. We come to the astonishing conclusion that she is her own mother and father! She is an entire family tree unto herself.


If the universe is infinite, so are the possibilities, and thus if we were to arrange - again in a statistically thermodynamic manner - the atoms and neurons that make you (your brain), we could easily replicate 'you' - as in the very you, not merely the body (memories, feelings) - which is why open systems are intriguing to me. I guess we need to draw the line somewhere, as is the case with Boltzmann's Brain.
apokrisis December 02, 2017 at 21:22 ¶ #129409
Quoting fdrake
What does 'dichotomous to constraints' mean?

There are lots of different manifestations of the degrees of freedom concept. I generally think of it as the dimension of a vector space - maybe calling a vector space an 'array of states' is enough to suggest the right meaning. If you take all the vectors in the plane, you have a 2 dimensional vector space. If you constrain the vectors to be such that their sum is specified, you lose a degree of freedom, and you have a 1 dimensional vector space. This also applies without much modification to random variables and random vectors, only the vector spaces are defined in terms of random variables instead of numbers.
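The constrained-sum example in that quote can be sketched in a few lines (a toy illustration; fixing x + y = c leaves exactly one free parameter):

```python
# Vectors in the plane have 2 degrees of freedom: x and y vary independently.
# Impose the constraint x + y = c: y is now determined by x,
# leaving a 1-dimensional family - one remaining degree of freedom.
def constrained_vector(x, c=0.0):
    """The single free parameter x fixes the whole vector."""
    return (x, c - x)

# Every choice of the free parameter satisfies the constraint.
samples = [constrained_vector(x) for x in (-2.0, 0.3, 5.0)]
```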


In mechanics, degrees of freedom are a count of the number of independent parameters needed to define the configuration of a system. So your understanding is correct.

And they are dichotomous to constraints as they are what are left over as a result of a configuration being thus limited. Constraint suppresses freedoms. What constraint doesn't suppress then remains to become some countable degree of freedom for that system.

Then from an infodynamic or pansemiotic point of view, constraints become the informational part of the equation, degrees of freedom are the dynamics. In the real material world, the configuration can be treated as the knowledge, the structure, that the organismic system seeks to impose on its world. The constraints are imposed by a mind with a purpose and a design. The degrees of freedom are then the entropy, the dynamics, that flow through the organism.

So a structure has to be imposed on the flow to in fact create a flow composed of some set of degrees of freedom. A bath of hot water will simply cool by using its surrounds as a sink. An organism wants to build a machinery that stands in between such a source and sink so as to extract work along the way.

That is why I suggest ATP as a good way to count degrees of freedom in biology. It is the cell's meaningful unit of currency. It places a standard cost on every kind of work. It defines the dynamical actions of which a cell is composed in a way that connects the informational to the entropic aspects of life. An ATP molecule could be spent for any purpose. So that is a real non-physical freedom the cell has built for itself.

An ATP molecule can be used to make a kinesin "walker" transport molecule take another step, or spin the spindle on ATPase. But then the spending of that ATP has an actual entropic cost as well. It does get used up and turned into waste heat (after the work is done).

So degrees of freedom are what constraints produce. And in living organisms, they are about the actions that produce units of work. The constraints are then the informational structure that regulates the flow of material entropy, channelling some source to a sink in a way that it spins the wheels of a cellular economy along the way.

A cooling bath of hot water lacks any interesting informational structure apart from perhaps some self-organised convection currents. Like a Bénard cell, it might have its thermodynamic flow improved by emergent constraints producing the organised currents that are now some countable set of degrees of freedom. A more chaotic path from hot to cold has had its own vaguer collection of degrees of freedom suppressed so the flow is optimised by a global structure.

But life has genes, membranes, pores, switches, and a host of molecular machinery that can represent the remembered habits of life - some negentropic or informational content - that produces a quite intentional structure of constraints, a deliberately organised set of degrees of freedom, designed to extract self-sustaining work from any available entropy gradient.

Quoting fdrake
So I suppose I should talk about configurational entropy.


Yep. But note that biosemiosis is about how life has the memory to be in control of its physical configuration. It uses a potential gradient to do the work of constructing itself.

So that brings in the informational aspect of the deal - Pattee's epistemic cut. The organism first insulates itself from dynamics/entropy by creating its own informational degrees of freedom. It does this by using a genetic code. But also, it does it foundationally down at the level of the dynamics itself in having "a unit of work" in an ATP molecule that can be used "to do anything a cell might want".

What gets configured is not just some spatial geometry or thermal landscape. The material world is actually being inscribed by an organism's desires. The dynamics is not merely self-organising. It is being organised by a proper self.

It is this situation which your notions of degrees of freedom don't really cover. You are not accounting for the epistemic cut which is the added semiotic feature of this material world now. If you are going to demand quantitative measures, the measures have to span the epistemic cut in some fashion. You are trying to make measurements that only deal with one of the sides of the equation.

fdrake December 02, 2017 at 21:27 ¶ #129411
Reply to apokrisis

Will you be giving a series of replies? Should I wait?
apokrisis December 02, 2017 at 21:37 ¶ #129416
Reply to fdrake
Up to you. It's a nice day outside. I doubt I will tick off every point you raised.
fdrake December 02, 2017 at 21:38 ¶ #129418
Reply to apokrisis

Will wait a bit to see what you do, and to digest the post.
apokrisis December 02, 2017 at 21:47 ¶ #129421
Quoting fdrake
The Shannon Entropy is related to the Boltzmann entropy in thermodynamics in a few ways I don't understand very well.


What's wrong with a reciprocal relation? If Shannon entropy is the degree of surprise to be found in some system, then the Boltzmann entropy is the degree to which that system is in its least surprising state.

So if a system is constrained and is thus composed of some set of independent elements, states, or events, its arrangement can be described somewhere on a spectrum between maximally surprising and minimally surprising. An unsurprising arrangement requires the least amount of information to specify it. And it thus represents the most entropic arrangement.
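That spectrum can be illustrated with a short calculation (an illustrative sketch, not part of the original exchange):

```python
import math

def shannon_entropy(probs):
    """H = -sum p_i * log2(p_i), in bits; impossible outcomes contribute nothing."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Four equally likely states: maximally surprising, maximal information to specify.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
# One certain state: least surprising, no information needed to specify it.
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0
```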



fdrake December 02, 2017 at 22:00 ¶ #129424
Reply to apokrisis

Shannon's strictly broader than Boltzmann since it allows for non-equidistribution. Gibbs and Shannon are almost equivalent, or rather it can be said that Gibbs entropy is Shannon entropy applied to the distribution of microstates in a macrostate which do not necessarily have equal probability.
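The equidistribution point can be checked numerically (an illustrative sketch, not from the post; the function name is mine):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """S = -k_B * sum p_i * ln(p_i): Shannon entropy in nats, scaled by k_B."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

# With W equiprobable microstates, Gibbs entropy collapses to Boltzmann's S = k_B ln W:
W = 1000
print(math.isclose(gibbs_entropy([1.0 / W] * W), k_B * math.log(W)))  # True
```

For any non-uniform distribution over the same W microstates the Gibbs value comes out strictly lower, which is the sense in which Boltzmann's formula is the special (maximal, equiprobable) case.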

I said I didn't understand it very well because I don't know what conclusions people are going to draw by blurring the boundaries between them.
apokrisis December 02, 2017 at 22:05 ¶ #129427
Quoting fdrake
The theoretical links between Shannon's original entropy, thermodynamical entropy, representational complexity can promote a vast deluge of 'i can see through time' like moments when you discover or grok things about their relation. BUT, and this is the major point of my post:

Playing fast and loose with what goes into each of the entropies and their context makes you lose a lot. They only mean the same things when they're indexed to the same context. The same applies for degrees of freedom.

I think this is why most of the discussions I've read including you as a major contributor are attempts to square things with your metaphysical system, but described in abstract rather than instantiated terms.


I'm baffled that you say Shannon entropy came before Boltzmann's entropy.

But anyway, again my interest is to generalise across the different contextual instantiations of the measurement habits which science might employ. I am indeed interested in what they could have in common. So I don't need to defend that as if it were some problem.

And as I have pointed out, when it comes to semiosis and its application to the world, we can see that there is a whole level of irreducible complexity that the standard reductionist approach to constructing indices of information/entropy/degrees of freedom just misses out.

It is fine that science does create simpler indexes. I've no problem with that as a natural pragmatic strategy. But also, with Shannon and Boltzmann, it became clear that informational uncertainty (or configurational degrees of freedom) and entropic material degrees of freedom (or countable microstates) are two sides of the same coin. The mathematics does unite them in a general way at a more abstract level.

And then when it comes to biosemiosis, information and entropy become two sides of a mechanically engineered epistemic cut. We are talking about something at a level above the brute physical realm imagined by the physical discourse that gives us Shannon uncertainty and Boltzmann entropy. It thus needs its own suitable system of measurement.

That is the work in progress I see in the literature. That is the particular story I am tracking here.

You can keep re-stating that a proper scientist would use the proper tools. You can reel off the many kinds of metrics that reflect the simpler ontology of the reductionist. You can continue to imply that I am somehow being unscholarly in seeking to consider the whole issue at a more holistic level - one that can encompass physicalist phenomena like life and mind. And indeed, even culture, politics, economics, morality and aesthetics.

But I know what I'm about so I'm only going to respond to your critique to the degree it throws light on the connecting commonality, the linkages to that more holistic worldview.



apokrisis December 02, 2017 at 22:15 ¶ #129430
Quoting fdrake
Shannon's strictly broader than Boltzmann since it allows for non-equidistribution.


Does that remain the case now that information theory has been tied to the actual world via holographic theory?

Boltzmann's k turned out to be physically derived from the dimensionless constants of the Planck scale. And Shannon likewise now represents a fundamental Planckian limit. The two are united via the basic physical limits that encode the Cosmos.

The volume of a spacetime defines some entropic content. The surface area of that volume represents that content as information. And there is a duality or reciprocality in the relation. There can't be more entropy inside than there are questions or uncertainties that can be defined on a 4 to 1 surface area measure.

It is about the biggest result of the last 30 years in fundamental physics.
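For reference, the "4 to 1" relation invoked here is presumably the Bekenstein-Hawking formula, which caps the entropy of a region at a quarter of its boundary area measured in Planck units:

```latex
S_{\mathrm{BH}} = \frac{k_B\, A}{4\, \ell_P^{2}},
\qquad
\ell_P^{2} = \frac{\hbar G}{c^{3}}
```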

fdrake December 02, 2017 at 22:24 ¶ #129432
Reply to apokrisis

Quoting apokrisis
I'm baffled that you say Shannon entropy came before Boltzmann's entropy.


It didn't. Shannon's entropy came after. By throwing in 'original' there I meant Shannon's particular application of entropy to signals and strings.

Quoting apokrisis
It is fine that science does create simpler indexes. I've no problem with that as a natural pragmatic strategy. But also, with Shannon and Boltzmann, it became clear that informational uncertainty (or configurational degrees of freedom) and entropic material degrees of freedom (or countable microstates) are two sides of the same coin. The mathematics does unite them in a general way at a more abstract level.


It's a stretch between Shannon Biodiversity and Gibbs entropy - there's no equivalent notion of macrostate other than the vector of relative abundances within an ecosystem.

Quoting apokrisis
You can keep re-stating that a proper scientist would use the proper tools. You can reel off the many kinds of metrics that reflect the simpler ontology of the reductionist. You can continue to imply that I am somehow being unscholarly in seeking to consider the whole issue at a more holistic level - one that can encompass physicalist phenomena like life and mind. And indeed, even culture, politics, economics, morality and aesthetics.


I'm not saying you're being unscholarly because you're looking for commonalities in entropy. I'm saying that your application of entropic concepts has a deaf ear for context. Just like sliding straight from Shannon Biodiversity to Boltzmann and back, the 'macrostate' of an ecosystem parametrised solely in terms of its relative abundances isn't anything like a macrostate in a thermodynamic system.

You've shown that you can keep it quite well contextualised. Your post on ATP and work extraction to my reckoning has a single working concept of entropy in it - that of its duality to exergy, work extraction. Then you slide into a completely different operationalisation of entropy:

Then from an infodynamic or pansemiotic point of view, constraints become the informational part of the equation, degrees of freedom are the dynamics. In the real material world, the configuration can be treated as the knowledge, the structure, that the organismic system seeks to impose on its world. The constraints are imposed by a mind with a purpose and a design. The degrees of freedom are then the entropy, the dynamics, that flow through the organism.


Going from ATP being used to fuel an organism straight to a 'global' sense of infodynamics and signals/signs in pansemiosis. It works only when you wave your hands and don't focus on the specifics. When what before was concrete becomes metaphorical, then what was metaphorical becomes concrete. Just like how you slingshot about with what the degrees of freedom are.

I want you to make substantive posts in terms of a unified sense of entropy. I don't want you to achieve that through handwaving and equivocation.

Quoting apokrisis
But I know what I'm about so I'm only going to respond to your critique to the degree it throws light on the connecting commonality, the linkages to that more holistic worldview.


This is why every time you talk about anything tangentially related to entropy, systems or complexity you make all issues an addendum to your worldview. This is why what we're talking about has almost no relation to the OP.

Quoting apokrisis
Does that remain the case now that information theory has been tied to the actual world via holographic theory?


I'm not going to pretend to know enough about cosmology to comment.
apokrisis December 03, 2017 at 00:08 ¶ #129471
Quoting fdrake
Going from ATP being used to fuel an organism straight to a 'global' sense of infodynamics and signals/signs in pansemiosis. It works only when you wave your hands and don't focus on the specifics. When what before was concrete becomes metaphorical, then what was metaphorical becomes concrete.


It's not metaphorical if infodynamics/semiosis is generalisable to material dissipative structures in general.

Again, you might have to actually read the literature - Salthe for instance. But the metaphysical ambition is clear enough.

The information is separate from the dynamics via the epistemic cut in biosemiotic systems - organisms that are living and mindful. An organisation mediated by signs is perfectly concrete.

What still counts as speculative is then generalising that concrete description of life/mind so that it is seen to be a concrete theory of the Cosmos. The Universe would be understood as a dissipative system organised by a sign relation. The biological understanding of the duality of information and entropy would prove to apply to the whole of existence as its scientific theory.

So it is not me waving my hands. It is you demonstrating a deaf ear to context. I am careful to distinguish between the part of what I say which is "normal science" and the part that is "speculative metaphysics". And the speculative part is not merely metaphor because the project would be to cash it out as concrete theory, capable of prediction and measurement.

I agree that may also be a tall order. But still, it is the metaphysical project that interests me. The fact that you repeatedly make these ad hom criticisms shows that you simply wish not to be moved out of your own particular comfort zone. You don't want to be forced to actually have to think.

apokrisis December 03, 2017 at 00:14 ¶ #129476
Quoting fdrake
This is why what we're talking about has almost no relation to the OP.


But the OP was about extracting a political analogy from a lifecycle understanding of ecosystems. I simply responded by saying Salthe's infodynamic perspective gives you a self-explanatory three stage take on that.

You then got shirty about my use of infodynamic or dissipative theory jargon. I'm happy to explain my use of any terminology. And I'm happy to defend the fact that I do indeed have an over-arching worldview. That is way more than nearly anyone else does around these here parts.
apokrisis December 03, 2017 at 00:40 ¶ #129479
Quoting fdrake
What does this measure? The diversity of flows within a network. How? It looks at the proportion of each flow in the total, then computes a quantification of how that particular flow incorporates information from other flows - then scales back to the total flow in the system. It means that the diversity is influenced not just by the number of flows, but their relative strength. For example, having a network that consisted of 1 huge flow and the rest are negligible would give an ascendency much closer to a single flow network than another measure - incorporating an idea of functional diversity as well as numerical biodiversity. Having 1 incredibly dominating flow means 0 functional diversity.
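The measure described in the quote is Ulanowicz's ascendency: total system throughput times the average mutual information of the flow network. A minimal sketch under that reading (the flow-matrix layout, example networks, and log base are my assumptions, not from the thread):

```python
import math

def ascendency(T):
    """Ulanowicz-style ascendency: sum_ij T_ij * log2(T_ij * T / (out_i * in_j)),
    where T is total throughput, out_i the outflow of i, in_j the inflow of j."""
    n = len(T)
    total = sum(sum(row) for row in T)
    out = [sum(T[i]) for i in range(n)]                        # total outflow of i
    into = [sum(T[i][j] for i in range(n)) for j in range(n)]  # total inflow of j
    return sum(
        T[i][j] * math.log2(T[i][j] * total / (out[i] * into[j]))
        for i in range(n) for j in range(n) if T[i][j] > 0
    )

# Two fully specialised flows (each source feeds only its own sink):
T_spec = [[0, 5, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 5],
          [0, 0, 0, 0]]
# Same total throughput, but every source feeds every sink equally:
T_homog = [[0, 2.5, 0, 2.5],
           [0, 0, 0, 0],
           [0, 2.5, 0, 2.5],
           [0, 0, 0, 0]]
print(ascendency(T_spec))   # 10.0 - articulated flows carry mutual information
print(ascendency(T_homog))  # 0.0 - homogenised flows carry none
```

The comparison shows the functional-diversity point in the quote: the two networks move identical amounts of material, but only the articulated one scores, because ascendency rewards flows that are informative about where material came from and where it goes.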


You seem terribly concerned by things that don't seem a big issue from the thermodynamic point of view.

A lot of your focus seems to be on how we specify richness or complexity or healthy biodiversity. And you instinctively think in terms of counting niches or something else "structurally concrete".

But the flow view of an ecosystem would see a hierarchy of flow just happening naturally. You would get complexity arising in a way that is essentially "meaningless".

So think of a scalefree network or other fractal growth model. A powerlaw hierarchy of connectivity will just arise "randomly". It doesn't need evolutionary semiosis or natural selection to create it. The complexity of the flow is not something that needs a designing hand to happen. It is the natural structure of the flow.

Check out Adrian Bejan's constructal law. He is pretty strong on this issue.

So a reductionist would think of a richly organised hierarchy of relations as a surprising and delicate state of affairs. But the switch is now to see this as the inevitable and robust equilibrium state of a freely growing dissipative structure, like an ecosystem. Scalefree order comes for free.

So something like the reason for the occurrence of niches over all scales is not something in need of some "contextualised" metric to index it. We don't have to find external information - some historic accident - which specifies the fact of a hierarchical order. That kind of order is already a natural pattern or attractor. It would take external accidents of history to push it away from this natural balance.

Quoting fdrake
H = -Σ_i p_i log(p_i), where every p_i is the proportion of the i-th species of the total. This is a numerical comparison of the relative abundance of each species present in the ecosystem. Its exponential obtains a maximum value when each species has equal relative abundance, and is then equal to the number of species in the ecosystem. Look at the case with 2 species each having 2 animals. p is constant along i, being 0.5, then the Shannon Biodiversity is -2*0.5*log(0.5) = log2, so its exponential is 2.
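The arithmetic in the quote above can be checked directly (an illustrative sketch using natural logarithms, as the quote does; the function name is mine):

```python
import math

def shannon_biodiversity(counts):
    """H = -sum p_i * ln(p_i) over relative abundances p_i (natural log)."""
    total = sum(counts)
    return sum(-(c / total) * math.log(c / total) for c in counts if c > 0)

# Two species with two animals each: H = log 2, effective species number exp(H) = 2.
H = shannon_biodiversity([2, 2])
print(math.isclose(H, math.log(2)))    # True
print(math.isclose(math.exp(H), 2.0))  # True
# Uneven abundances push the effective number below the raw species count:
print(math.exp(shannon_biodiversity([99, 1])) < 2.0)  # True
```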


So here, aren't you assuming that we can just count species and have no need to consider the scale of action they might represent? One might be a bacterium, the other an elephant. Both might be matched in overall trophic throughput. We would expect their relative abundance to reflect that fact directly, rather than a raw species count having much of use to say about an ecosystem's healthy biodiversity or state of entropic balance.

Quoting fdrake
Of course, if you've read this far, you will say 'the middle state is the one furthest from order so of course it has the highest degrees of freedom', which suggests the opposite intuition from removal of dominant energy flows 'raining degrees of freedom' down onto the system. This just supports the idea that your notion of entropy has poor construct validity.


Exergy? I mean you seemed to agree that it is about quality of the entropy. So a big tree dropping leaf litter is a rather different story to a forest clearing being blasted by direct sunlight again.

fdrake December 03, 2017 at 01:12 ¶ #129492
Reply to apokrisis

Quoting apokrisis
It's not metaphorical if infodynamics/semiosis is generalisable to material dissipative structures in general.


The bridges between your contextualised uses of entropy are metaphorical. I'll give an apokrisis-like description of a process so you can see what I mean. It will be done through a series of thermodynamically inspired haikus, just because.

Organisms lay
Jumbled up yet striving for
Their own mouths to feed

Each cell binds a gap
Consuming a gradient
Where to find new food?

Digging in the weeds
An old domain of feasting
For new specialists

Symmetry broken
Entropy from exergy
Degrees of freedom

Quoting apokrisis
So it is not me waving my hands. It is you demonstrating a deaf ear to context. I am careful to distinguish between the part of what I say which is "normal science" and the part that is "speculative metaphysics". And the speculative part is not merely metaphor because the project would be to cash it out as concrete theory, capable of prediction and measurement.


I don't actually care about this part. Whether what you're doing is science or not doesn't interest me. Nor do I think it's relevant to our disagreement - from my perspective anyway.

Quoting apokrisis
I agree that may also be a tall order. But still, it is the metaphysical project that interests me. The fact that you repeatedly make these ad hom criticisms shows that you simply wish not to be moved out of your own particular comfort zone. You don't want to be forced to actually have to think.


I thought you'd give me more charity than this. I read quite a few papers on ascendency so I could understand it. If researching something you've referenced to assess its merits is being afraid of thinking, I'll remain a coward. Ascendency is a cool concept, btw.

Quoting apokrisis
A lot of your focus seems to be on how we specify richness or complexity or healthy biodiversity. And you instinctively think in terms of counting niches or something else "structurally concrete".


I don't. There's no current comprehensive measure of biodiversity or ecosystem complexity that I'm aware of. So many different things can be relevant to quantifying them.

I took two examples to show what happens when you let one slide into another. They were both measures of an aspect of ecosystem complexity. They were designed to measure different things. A unified sense of entropy must contain both, and sliding over their differences isn't sewing them together.

I'm not suggesting you 'look at the formulas and find a master one', the thing I cared about was that the measures of entropy in terms of ascendency and relative abundance meant different things - they summarise different aspects of the behaviour of the ecosystem.

Quoting apokrisis
So here, aren't you assuming that we can just count species and have no need to consider the scale of action they might represent? One might be a bacterium, the other an elephant. Both might be matched in overall trophic throughput. We would expect their relative abundance to directly reflect that fact rather than a species count having much useful to say about an ecosystem's healthy biodiversity or state of entropic balance.


It would be interesting if Shannon biodiversity was related to ascendency. There's a trivial link in terms of node counting - and the number of nodes (species) has an influence on ascendency (though not necessarily on its numerical value). Regardless, high entropy states of abundance don't have to reflect high entropy states of flows. I imagine flow disorder is likely to be less than relative abundance disorder if they were scaled to the same unit (not sure how to translate the units though), since there's probably going to be a primary producer om-nom-noming the sun.

I read a bit of Salthe today. As I've said previously, I don't mind the project of trying to find a unification of entropy. Salthe has a nice article talking about four different notions of entropy; so far as I can tell, he contributes a way of speaking about entropy using similar words in all the contexts. He also doesn't seem to like 'pragmatism', or what I think's better called 'instrumentalism', in the sciences: a focus on measurement and mathematisation. What I've got from reading his writing was the occasional 'I can see through time' moment regarding entropy.

I understand that you need to be able to use the following terms to talk about anything in this vista of generalised entropy: 'degrees of freedom', 'energy', 'entropy', 'information', 'order', 'disorder', 'dynamics', 'entropic gradient', 'exergy', 'boundary condition'. Salthe seems to use the words much as you do.

The way he uses 'boundary condition' as a metaphor is probably the strongest concept I've seen from him so far, at least insofar as a generalised parametric approach to entropy goes. It gave me a picture like this:

Dynamical system 1 outputs a set,
Dynamical system 2 takes that set as literal boundary conditions.

I never said I couldn't see the meaning behind the handwaving. This is roughly consistent with the poetic notions of degree of freedom you and Salthe espouse - the boundary condition allows for the evaluation of specific trajectories, and is so a 'constraint' and thus a loss of 'freedom'.

Maybe I'd be more comfortable with what you're saying if you used scarequotes like I do.


Streetlight December 03, 2017 at 01:20 ¶ #129495
Quoting fdrake
Organisms lay
Jumbled up yet striving for
Their own mouths to feed

Each cell binds a gap
Consuming a gradient
Where to find new food?

Digging in the weeds
An old domain of feasting
For new specialists

Symmetry broken
Entropy from exergy
Degrees of freedom


Oh an Apo post!

I love this so much though.
apokrisis December 03, 2017 at 02:37 ¶ #129528
Quoting fdrake
I'm not suggesting you 'look at the formulas and find a master one', the thing I cared about was that the measures of entropy in terms of ascendency and relative abundance meant different things - they summarise different aspects of the behaviour of the ecosystem.


What else do you expect if you take the attitude that we are free to construct metrics which are valid in terms of our own particular interests?

I agree we can do just that. We can describe the world in terms that pick up on some characteristic of interest. I just say that is not a deep approach. What we really want is to discover the patterns by which nature organises itself. And to do that, we need some notion about what nature actually desires.

This is where thermodynamics comes in. This is what is unifying science at a foundational level now. Both biology and physics are reflecting that emergent metaphysical project. And thermodynamics itself is becoming semiotic in recognising the duality of entropy and information.

So you are down among the weeds. I'm talking about the big picture. Again, it is fine if your own interests are narrow. But I've made different choices.

apokrisis December 03, 2017 at 02:42 ¶ #129530
Quoting fdrake
Maybe I'd be more comfortable with what you're saying if you used scarequotes like I do.


Your comfort is definitely my number one priority. I mean "number one priority".
apokrisis December 03, 2017 at 03:22 ¶ #129540
Quoting fdrake
So a commonality is that they are mappings from some space to the real line. But what matters - what determines the meaning of the entropy is both what the inputs to the entropy function are and how they are combined to produce a number. To speak of entropy in general is to let the what and the how vary with the implicit context of the conversation; it destroys the meaning of individual entropies by attempting to unify them, the unification has poor construct validity precisely because it doesn't allow the what and the how of the mapping to influence the meaning.


And yet something still ties all this variety back to some general intuition. The usual response is "disorder".

As I have said, at the metaphysical level, we can only approach proper definitions by way of a dialectical or dichotomistic argument. We have to identify the two complementary extremes that are mutually exclusive and jointly exhaustive. So a metaphysical-strength discussion of entropy has to follow that form. It has to be entropy as "opposed to what?". Your concern is that there seem multiple ways to quantify "entropy". My response has been that a metaphysical-strength definition would be qualitative in this precise fashion. It would lead us to a suitable dichotomy.

Hence why I keep trying to return the conversation to entropy~information as the candidate dichotomy that has emerged in recent times. It seems a stronger statement than entropy~negentropy, or disorder~order, as mere negation is a weak kind of dichotomy, not a strong one.

Likewise constraints~degrees of freedom slice across the debate from another direction. How the two dichotomies of entropy~information and constraints~degrees of freedom might relate is a further important question.

So a qualitative approach here isn't just a hand-waving, anything goes, exercise in speculation. Metaphysics does have a method for clarifying its ideas about reality. And that approach involves discovering a reciprocal relation that connects two opposed limits on Being.

As I said, information and entropy capture a duality in terms of relative surprisingness. An entropy-maximising configuration is reciprocally the least-informational. No need to count microstates even if every one is different. You only need to measure a macrostate to completely characterise the aspect of the system that is of interest.

By contrast, a surprising state of the world is the most informational. It is high on negentropy. And now just one "microstate" is the feature that appears to characterise the whole situation - to the degree that is the aspect of interest.

So "disorder" is just an application of the principle of indifference. A messy system is one in which the details don't matter. Gone to equilibrium, the system will be generic or typical.

However no system can in fact maximise its entropy, reach equilibrium, except that it has stable boundary conditions. So - dichotomously - there is negentropy or order in the fact of boundary stability, in the fact of being closed for causality.

Thus it is common to sum up entropy as about a statistical propensity to be disordered - to be in a system's most typical state. But that "first law of thermodynamics" state of closure (well, it includes the necessity of the third law to put a lower bound on things to match the first's upper bound) is only half the story. Somewhere along the line, the system had to be globally parameterised. The general holonomic constraints or boundary conditions had to form somehow.

So here we see a good example of why dichotomies or symmetry-breakings are basic to metaphysical-strength analysis. They alert us to the other half of the holistic story that reductionists are wont to overlook. A full story of entropy can't just presume stable boundary conditions. Those too must be part of what develops (along with the degrees of freedom that global constraints produce, or parameterise).

To the degree your discussions here are not dialectically explicit, they simply fail the test of being adequately metaphysical. Your obsession about quantitative methods is blinding you to the qualitative discussion where the hunt is on for the right way to frame information~entropy, or degrees of freedom~constraints, as the fundamental dichotomy of a developmental Cosmos.

apokrisis December 03, 2017 at 03:23 ¶ #129541
Quoting StreetlightX
I love this so much though.


I see you jiggling with joy on the sidelines. SX and his man-crushes.
Streetlight December 03, 2017 at 04:38 ¶ #129560
Quoting apokrisis
I see you jiggling with joy on the sidelines. SX and his man-crushes


Of course! Fdrake is among the best posters on the forum, and he most definitely has a cheerleader in me, especially in his apt characterisation of your posts - a loosely bound collection of regurgitated, empty buzzwords.
apokrisis December 03, 2017 at 10:20 ¶ #129596
Quoting fdrake
It would be interesting if Shannon biodiversity was related to ascendency


This is the bit that puzzles me. It seems that all your arguments want to circle back to this species diversity index. But that is an utter triviality. It says nothing about the flow of energy through a biological system, nothing about the negentropic response of a system being pushed away from its equilibrium state, nothing about anything which has to do with the global dynamics or the fluxes that are what create ecologies in the first place.

It is pretty clear that I’m talking about dissipative structure. And so is ascendency. So is Salthe, Kay and Schneider, Bejan and the many other cites I offered. But your critique amounts to me failing to relate dissipative structure theory to some mundane measure of species diversity.



fdrake December 03, 2017 at 11:06 ¶ #129600
Reply to apokrisis

What else do you expect if you take the attitude that we are free to construct metrics which are valid in terms of our own particular interests?


This is a lot of flip-flopping Apo. At the start of the thread you advised me to read various entropy measures for context. Especially ascendancy. You must believe there's some good stuff in the entropic measures, that they provide a link between the 'generalised' entropy that you want to speak about to particular contexts. Now when I point out that their aggregation would have poor construct validity, they're no longer serving as instantiations of 'generalised' entropy and caring about their behaviour is too concerned with irrelevant detail.

You also wrote a description of entropy and work in ATP production and digestion, you do care about instantiations to contexts. But apparently you don't care about the specifics insofar as they resist easy unification into a single sense of entropy. Salthe is quite the opposite, he goes from the specific to the general then back again. Entropies are contextualised, and their parametrizations are given some respect:

Salthe:A point of confusion between H and S should be dealt with here. That is that one must keep in mind the scalar level where the H is imputed to be located. So, if we refer to the biological diversity of an ecosystem as entropic, with an H of a certain value, we are not dealing with the fact that the units used to calculate the entropy - populations of organisms - are themselves quite negentropic when viewed from a different level and/or perspective. More classically, the molecules of a gas dispersing as described by Boltzmann are themselves quite orderly, and that order is not supposed to break down as the collection moves toward equilibrium (of course, this might occur as well in some kinds of more interesting systems).


Moreover, the distinctions between different systems are interpreted in terms of differences between their individual energetic behaviour. Exceptions are noted. Salthe's method is, essentially, an attempt to form a conceptual space in which entropy can be discussed in general, by seeing what is general within the specific contexts entropy arises in. He then takes the generalisations and applies them to specific contexts, to re-present and contrast the nascent sense of entropy in general with the specifics. This is achieved by describing entropy in a highly abstract register constituted from a web of analogies, citations and examples (but little mathematics, which is unfortunate).

If this sounds familiar to anyone, essentially Salthe is a phenomenologist of entropy. I have no problem with this. What I do have a problem with is taking the nascent sense of generalised entropy and treating all phenomena, especially systems, as addenda to the nascent sense. In Heideggerian terms, this is an error in thematisation: Salthe gets to speak like this since he thematises entropy and writes about entropy. You don't get to speak like this about stuff in general unless you subordinate all inquiry to the inquiry of entropy. The disclosive character of Salthe's discourse has the thematisation of entropy as a necessary constituent. He is an essayist on entropy.

You turn metaphysically laden discussions (read: almost all discussions) into discussions about your metaphysical system. The space of problems you engage with is not thematised in terms of the problems - apparently intentionally on your part. This is a major methodological error: it occludes the nature of the problem, and the questions related to it, in a fog of analogised and half-formed notions. It is a method subsisting in obfuscation of the original character of problems, constituted by analogic, metaphorical language disclosing the wrong problematic as a theme.

Recall this is a supposed discussion of ecosystems and people/societies as ecosystems. Instantiations of entropic measures for ecosystems are apparently 'too narrow', despite being part of the topic when viewed from an information-systems perspective. Also note that degrees of freedom means something different for both measures, and neither one can be said to have boundary conditions in the literal sense or constraints in anything like the sense that we agreed upon in terms of state spaces.

I agree we can do just that. We can describe the world in terms that pick up on some characteristic of interest. I just say that is not a deep approach. What we really want is to discover the patterns by which nature organises itself. And to do that, we need some notion about what nature actually desires.


It is not a deep approach to attempting to generalise entropy. But your attempts to generalise entropy could not be called a deep approach to anything but entropy. I think you, perhaps like Salthe though my verdict is still out on his sins, should attempt non-metaphorical procedural descriptions of processes more often. You did a good thing by talking about entropy and work in ATP metabolism - I would commend it more if it was more relevant to entropy and information in ecosystems. You generally characterise that as a wee system 'giving freedom' to a big one. The ecosystem measures are about the big one. Salthe has a means of problematising this kind of misapplication or mislocation - a hierarchy of problem classes relevant to systems. You should be dealing with the 'given freedom' rather than the 'freedom giving' from below.

So you are down among the weeds. I'm talking about the big picture. Again, it is fine if your own interests are narrow. But I've made different choices.


You are a holist, and have holist concerns. Don't be hating on the weeds for being narrow, be hating on the picture for painting weeds as insufficiently general. They implicate systems of all orders of abstractions, and you seem to have forgotten this commitment.
apokrisis December 03, 2017 at 11:30 ¶ #129601
Reply to fdrake I’ve been talking about the thermodynamic constraints that shape dissipative systems like ecologies. You have failed to show why I should care about a species diversity index. The fact that the two don’t relate nicely is only to be expected.

Your notion of generalising entropy talk is epistemic. My interests are ontic. But best of luck with your future endeavours.
fdrake December 03, 2017 at 11:36 ¶ #129602
Reply to apokrisis

I was typing something in anticipation of this before you responded, funnily enough. I have ontic concerns too. This is why I'm looking to add things to my reductionist utility belt.

You would do well by paying more attention to your procedural descriptions - looking at how entropy transfers between 'levels' of organisation. I'll give an example:

(1) Describing the dynamical system of ATP metabolism in terms of inputs and outputs.
(2) Defining an entropic measure, or necessary features of an entropic measure, relevant to this metabolism.
(3) Giving a relational mechanism (not just a name of it) between this metabolism and the energetic dynamics ('metabolism') of the system which uses ATP metabolism as a constituent structure. This could perhaps be achieved through averaging energy consumptions of cells and what energy sources give the fuel for the ATP metabolism - a link between biomass and derived energies.
(4) Give a relationship between the entropic measure in (2) and the energetic measure in (3). Like the function of analogising absence of entropy with exergy.
(5) Looking at the distribution and behaviour of derived energies on the 'top level' of analysis, also including an instantiation of the relational mechanism (maybe something like making the energy flow to one organism or its population proportional to the conversion of biomass to ATP).
(6) Derive an entropy measure, or characterise one, relevant to (5). Relate it to the one in (4) in such a way that the product of (3) can be parametrised in accordance with it.
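The six steps are abstract, but their shape can be gestured at in a few lines. A deliberately crude toy sketch - every quantity here is invented, and step (4)'s exergy analogy is elided - linking a cell-level entropy to an ecosystem-level one through a single relational mechanism:

```python
import numpy as np

# Crude toy of steps (1)-(6); all numbers invented for illustration.
rng = np.random.default_rng(0)

# (1) Cell-level metabolism: each cell converts substrate into ATP with
#     some efficiency, the remainder dissipated as heat.
efficiencies = rng.uniform(0.3, 0.5, size=1000)

# (2) A cell-level entropic measure: Shannon entropy of the binned
#     distribution of conversion efficiencies.
counts, _ = np.histogram(efficiencies, bins=10)
p = counts[counts > 0] / counts.sum()
H_cell = float(-(p * np.log(p)).sum())

# (3) Relational mechanism: population-level energy supply taken as
#     biomass times the mean ATP yield, averaging over constituent cells.
biomass = np.array([50.0, 30.0, 20.0])        # three populations
supply = biomass * efficiencies.mean()

# (5) Top-level distribution of derived energies across populations.
q = supply / supply.sum()

# (6) An ecosystem-level entropy over that energy distribution, now
#     parametrised (via step 3) by the cell-level mechanism.
H_eco = float(-(q * np.log(q)).sum())
```

The point of the bookkeeping is that H_eco is now an explicit function of the lower-level metabolism rather than being related to it by handwave.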

That you don't do this can be related to the error of thematisation detailed above. The specific dynamics of the 'given freedom' - the behaviour of flows within the ecosystem - is glossed over; there aren't any procedural specifications from the metabolism of ATP to the 'metabolism' of flows in an ecosystem. And what is glossed over would be an excellent contribution to the account of the 'modelling relation', as you call it, between the two and would be a very productive idea in terms of generalised entropy. This is the kind of work required in generalisation, not the automatic conversion-by-handwaving from entropy in one dynamical system to another of different parametrisation and scope. I think you usually handwave this by calling it 'coupling', too (forgetting that coupled systems have shared parameter spaces). Lo and behold, when you put a bit of work in, you can see literal coupling where your figurative sense of coupling was implicated, and it looks like there's some way to take the intersection of parameter spaces such that each individual system's parameter space has a non-empty intersection with enough of the rest to make a connected network of flows.

How is this for a compromise: you keep using your terms the way you do but put more effort into relations and procedural dynamics between different ontic energy/entropy regimes and how they describe systems.

Productive dynamical inquiry = problematising change in relations and relations of change.
fdrake December 03, 2017 at 14:36 ¶ #129621
Reply to apokrisis

So "disorder" is just an application of the principle of indifference. A messy system is one in which the details don't matter. Gone to equilibrium, the system will be generic or typical.


Except no. The principle of indifference is an idea of equidistribution over a finite set of states. A die is the model case of the principle of indifference. Roll a 1 - same probability as a 6. Even with a charitable interpretation of this, it destroys any idea of there being degrees of disorder. Even Boltzmann entropy, which explicitly uses equiprobability of microstates within a macrostate, still constrains the application to a specific macrostate, not a generalised definition.

The principle of indifference cannot be extended as equiprobability to countable or continuous state spaces - this is because a uniform distribution cannot exist on infinite sets of outcomes. To see this, imagine a sphere. Imagine a particle trapped in it, wandering about in a random walk. The probability that it occupies a given volume within the sphere is proportional to that volume - the proportion of the set volume purported to contain the particle to the sphere's total volume. If you instead want an equal distribution over all possible points in the sphere, you end up with something that isn't even a probability density (roughly, all points are 0 probability or their probabilities sum to infinity). Specific configurations having 0 probability is consistent with this volumetric idea, however.
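The volumetric picture can be checked by simulation. A hypothetical Monte Carlo sketch: sample points uniformly in the unit ball, and the chance of landing in a sub-ball of half the radius comes out near its volume fraction, (1/2)³ = 1/8:

```python
import random

def sample_in_ball(rng):
    """Sample a point uniformly in the unit ball by rejection from the cube."""
    while True:
        x, y, z = (rng.uniform(-1, 1) for _ in range(3))
        if x*x + y*y + z*z <= 1:
            return x, y, z

rng = random.Random(42)
n = 100_000
# Probability of occupying the sub-ball of radius 1/2 should track its
# volume: (1/2)^3 = 1/8 of the whole ball.
hits = sum(1 for _ in range(n)
           if sum(c*c for c in sample_in_ball(rng)) <= 0.25)
frac = hits / n   # close to 0.125
```

The probability attaches to regions (volumes), never to individual points - which is exactly why no pointwise equiprobability survives the move to the continuum.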

Regardless, ignoring the elisions and extending lots of charity, this constrains entropy to maximised entropy and destroys relative degrees of disorder and order. It also doesn't work for entropy in continuous processes - only continuous processes which have mappings to discrete outcome spaces. E.g., take a partition of the sphere above for an example, each subset with a probability proportional to its size. Their sizes sum to the volume since it's a partition - but it's no longer a probability distribution indexed to the sphere, it's indirectly related through the partition. The limiting process as set volume tends to 0 doesn't produce a density.

I doubt you would want this in 'modelling' applications, as it's well known that uniform distributions aren't necessarily entropy maximisers. Neither are they necessarily heavy-tailed distributions, which you usually imply as playing a pivotal role in the emergence of entropy over stratified hierarchical systems. They, not surprisingly, depend on the parametrisation and possibly a partitioning procedure of the system in question.
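The point that uniform distributions aren't automatically entropy maximisers can be shown on the die itself: fix the mean at, say, 4.5 and the maximum-entropy distribution over {1,…,6} is an exponential tilt, not uniform. A sketch (bisection on the tilt parameter is just one convenient way to solve for it):

```python
import math

def maxent_die(target_mean, tol=1e-12):
    """Max-entropy distribution on {1..6} with fixed mean: p_i ~ exp(lam*i)."""
    faces = range(1, 7)

    def mean_for(lam):
        ws = [math.exp(lam * i) for i in faces]
        return sum(i * w for i, w in zip(faces, ws)) / sum(ws)

    lo, hi = -50.0, 50.0            # mean_for is increasing in lam: bisect
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    ws = [math.exp(lo * i) for i in faces]
    z = sum(ws)
    return [w / z for w in ws]

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

p = maxent_die(4.5)   # skewed toward the high faces, not uniform
```

The resulting entropy sits strictly below ln 6, the uniform value: once a constraint enters, indifference is no longer what maximum entropy looks like.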
fdrake December 03, 2017 at 16:24 ¶ #129629
Reply to apokrisis

It is pretty clear that I’m talking about dissipative structure. And so is ascendency. So is Salthe, Kay and Schneider, Bejan and the many other cites I offered. But your critique amounts to me failing to relate dissipative structure theory to some mundane measure of species diversity.


Ulanowicz' ascendency can be applied to any ecosystem network parametrised in terms of flow exchange; it need not be applied to a dissipative network. To see this, you can do two things: (1) look at the formula and see there is no temporal component, and (2) do some reasoning about the behaviour of the ascendency. For reference, ascendency is the average mutual information of the flows into and out of each node, scaled by the total ecosystem throughput.
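Writing that definition out, with T_ij the flow from compartment i to compartment j: A = Σ_ij T_ij log(T_ij T·· / (T_i· T·j)), i.e. total system throughput times the average mutual information of the flows. A sketch with made-up flow matrices, which also makes point (1) visible - nothing temporal appears anywhere:

```python
import numpy as np

def ascendency(T):
    """A = sum_ij T_ij * log2(T_ij * T.. / (T_i. * T._j)). No time term."""
    T = np.asarray(T, dtype=float)
    tst = T.sum()                        # total system throughput T..
    rows = T.sum(axis=1, keepdims=True)  # compartment outflows T_i.
    cols = T.sum(axis=0, keepdims=True)  # compartment inflows  T._j
    mask = T > 0                         # 0 * log(0) taken as 0
    return float((T[mask] * np.log2((T * tst / (rows * cols))[mask])).sum())

# Perfectly even flows carry no mutual information: A = 0 ...
even = ascendency(np.ones((4, 4)))
# ... while fully specialised flows (a permutation pattern) maximise it:
# A = 8 here, i.e. AMI = A / TST = 2 bits on four compartments.
spec = ascendency(np.diag([1.0, 1.0, 1.0, 1.0]))
```

The index rewards concentration of flow into definite channels; whether the network is dissipating anything is simply not in view.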

It isn't actually so clear cut that ascendency is cashed out in terms of dissipative systems. This would occur if the ecosystem network had an outflow, so that the total throughput decreased over time. Or alternatively, if the ecosystem had an inflow so the total throughput increased over time. Whether ascendency corresponds to or can detect the dissipation of energy of an ecosystem reliably or its inflow depends not just on the total throughput - which is increasing or decreasing over time in dissipative systems - but upon how changes in total throughput manifest (or don't manifest) in changes of flow concentration or equalisation.**

The argument that ascendency has a natural interpretation in terms of dissipation doesn't just deal with the definition of ascendency - Ulanowicz doesn't actually describe it as a measure of a dissipative structure - he describes it as a measure of 'growth of an autocatalytic system'. He has intuitions that 'more developed' ecosystems will have a higher ascendency. He explicitly constructed it as an atemporal index to allow the comparison of ecosystems over time or other indices. So the behaviour of ascendency over time (or some other index) is the thing which may or may not reflect whatever assumptions you have about dissipative systems more generally.

That it generalises to other indices which represent gradients in other quantities is put forward in the paper he defines it in:

Ulanowicz:The process of eutrophication for example is characterised by a rise in ascendency which is due to an overt increase in the activity of the system which more than compensates for its concomitant decrease in its developmental status.


So what Ulanowicz said - the prediction of rising ascendency on an increasing eutrophication gradient - is empirically falsifiable since it depicts a trend in the fuzzy concept of 'ecosystem development' using a trend in an entropic measure.

Without lingering too long on the fact that that isn't actually correct - there's an observed upside-down U shape in ascendency (increase then decrease) over a eutrophication gradient, though since the paper detailing that doesn't do an error analysis it's still up for debate - he has to at least engage with the relative strengths of the terms in the formula. He does.

And apparently none of my criticisms are relevant since my concerns are merely 'epistemic' or 'reductive'. Really? When it turns out that your equivocations aren't sound and you gloss over too much, it's fine because I don't care about the 'ontic' status of ecosystems? I think I care about that quite a lot. I've actually highlighted different notions of complexity relevant to ecosystems, and presented and analysed the majority of the terms you've used in your posts. Furthermore, I've spelled out the implications your equivocations carry for understanding ecosystems, and how this hand-waving forecloses productive inquiry regarding dynamical systems and informational relations. And when possible, I've looked at the empirical results relating to the infodynamics of ecosystems. Doesn't that sound like sound ontic inquiry to you?

In contrast: you usually forgo procedural descriptions which are the glue that bind the interrelation of composite systems. You should be detailing the relationships between different subsystems in your posts procedurally, not metaphorically. You don't do this but pretend to, you cite formulas and a wealth of background literature without ever cashing out what you say in their terms. You sample bites from them and imply everything will go your way. Even now, because the single hole in my responses to you is that I haven't explicitly engaged any dissipative systems material, you focus on that as if everything I've said wasn't relevant for another reason. So:

we have to take on trust that you are being faithful in using and interpreting the measures. We have to take it on trust that your descriptions elucidate the relational character of different systems in a manner consistent with your references. And we have to take it on trust that your interpretations of your references are well informed and valid.

I don't think that trust is vindicated any more.
fdrake December 03, 2017 at 17:03 ¶ #129631
A last thing, looking at ascendency as a measure of a dissipative system.

Ulanowicz' use of 'autocatalysis' can signify a growth or amplification of some aspect of the system. This, analogically, relates to dissipative systems. It doesn't necessarily mean the measure will have the 'ampliative' property in terms of necessitating a growing measure over time (as was seen empirically). Nor will the measure necessarily decrease over time (again, empirically, that U-shape is weird). The imaginative background of the measure suggests that it will behave in that manner - however it doesn't.

There might be an avenue of studying the system as dissipative in terms of how a unit of energy is distributed around the network. It would be possible to define a probability distribution on each node for the probability that a given quantum of energy leaves that node and goes to another, and then we add another node for 'wasted energy' which is connected as a sink to all other nodes. This would probably introduce a directed Markovian graph in some sense dual to the system. The iterations of this Markovian graph may have a steady state (equilibrium distribution), and the distance from the steady state from a particular graph could be studied.
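A minimal sketch of that construction, with the flows and the dissipation sink invented for illustration: each row of P is the distribution of where a quantum of energy at that node goes next, with a 'waste' node attached as an absorbing sink. Iterating the chain shows the steady state puts all mass in the sink - the dissipative content then lives in how fast mass drains, not in the fixed point itself:

```python
import numpy as np

# Hypothetical 3-compartment ecosystem plus an absorbing 'waste' sink.
# P[i, j] = probability a quantum of energy at node i moves next to node j.
P = np.array([
    [0.0, 0.6, 0.2, 0.2],   # producer  -> herbivore / detritus / waste
    [0.0, 0.0, 0.7, 0.3],   # herbivore -> detritus / waste
    [0.3, 0.0, 0.0, 0.7],   # detritus  -> recycled to producer / waste
    [0.0, 0.0, 0.0, 1.0],   # waste is absorbing: energy leaves the system
])

dist = np.array([1.0, 0.0, 0.0, 0.0])   # a quantum starts at the producer
for _ in range(500):                     # iterate the Markov chain
    dist = dist @ P
# After many steps essentially all probability mass sits in the sink.
```

The interesting comparisons between ecosystems would then be in the transient: recycling loops (here detritus -> producer) slow the drain to the sink, which is one way of cashing out 'internalised cycles' procedurally.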

But there isn't a theory of dissipation built into the ascendency formula from the get go.
apokrisis December 03, 2017 at 21:41 ¶ #129683
Quoting fdrake
I think you usually handwave this by calling it 'coupling', too (forgetting that coupled systems have shared parameter spaces). Lo and behold, when you put a bit of work in, you can see literal coupling where your figurative sense of coupling was implicated, and it looks like there's some way to take the intersection of parameter spaces such that each individual system's parameter space has a non-empty intersection with enough of the rest to make a connected network of flows.


This is an example of our different interests. You presume parameter spaces can be glued together. I'm concerned with the emergent nature of parameters themselves. You have your idea of how to make workable models. I'm interested in the metaphysics used to justify the basis of the model.

So it just gets tiresome when your criticism amounts to the fact I'm not bothered about the details of model building for various applications. I've already said my focus is on paradigm shifts within modelling. And core is the difference between mechanical and organic, or reductionist and holist, understandings of causality.

Quoting fdrake
The principle of indifference cannot be extended as equiprobability to countable or continuous state spaces - this is because a uniform distribution cannot exist on infinite sets of outcomes.


Another inventive way to miss the point I was arguing. Sure Boltzmann had a problem if the world wasn't actually atomistic and entropy was a continuous substance, a caloric.

Quoting fdrake
Without lingering too long on that fact that that isn't actually correct, there's an observed upside down U shape in ascendency (increase then decrease) over an eutrophication gradient, though since the paper detailing that doesn't do an error analysis it's still up for debate - he has to at least engage with the relative strengths of the terms in the formula. He does.


You can have your doubts about the robustness of his approach. But again, I was responding to the OP in terms of what ecologists actually say about ecologies based on the kind of metaphysics they've actually developed.

And both Salthe and Ulanowicz argue for a three stage lifecycle - that inverted goldilocks U-curve. Regardless of how well it may or may not be cashed out in real world models, the general metaphysical level argument seems sound enough, and obvious enough, to me. If you want to critique that, then great.

Ulanowicz describes the motivating metaphysics in: The dual nature of ecosystem dynamics, 2009 - http://izt.ciens.ucv.ve/ecologia/Archivos/ECO_POB%202009/ECOPO7_2009/Ulanowicz%202009.pdf

The yin and yang of ecology

By now the reader may have noticed that two countervailing tendencies are at play in the development of any dissipative structure. In one direction a continuous stream of perturbations works to erode any existing structure and coherence. Meanwhile, this drift is opposed by the workings of autocatalytic configurations, which drive growth and development and provide repair to the system.

This tension has been noted since Antiquity. Diogenes related that Heraclitus saw the world as a continuous tearing down and building up. With the Enlightenment, however, science opted for a more Platonic view of nature as monistic equilibrium.

Outside of science, Hegel retained Heraclitus’ view of the fundamental tension, but with significant amendment. He noted that, although the two tendencies may be antagonistic at the level of observation, they may become mutually obligatory at the next higher level. Hegel’s view is resonant with the picture of ecosystem dynamics portrayed here.

Indeed, the second law does dissipate what autocatalysis has built up, but it has been noted that singular chance is also necessary if systems are truly to evolve over time and develop novel emergent characteristics. Looking in the other direction, complex, evolved systems can be sustained only through copious dissipation.

The problem with this agonistic view of the natural world is that, unlike the mechanistic (Platonic) convention, dialectic like dynamics cannot be adequately represented as algorithms.

To repeat again, mechanistic simulation models are inadequate to the task of describing ecosystems over the longer run, because the selfsame selection exhibited by autocatalysis can unpredictably replace not only components, but their accompanying mechanisms as well. Not only does the notion of mechanism defy logic, it seems also to poorly match the dynamics that actually are at play.


So complexity is irreducible once we start talking about self-parameterising systems. Like a dissipative structure. All your fussing about a lack of particularisation of parameter spaces by me makes no sense as we are essentially - when doing science of such systems - talking about modelling as an art. We can make better or worse choices. And any choice must be guided by some grounding, if fuzzy or vague, intuition. (Such as that entropy/information/exergy/degrees of freedom/whatever are "this kind of generic thing or process".)

You are continually jumping to the position of there being a right way to measure the world. But it is basic to a particular group of theoretical biologists I respect - Salthe, Pattee, Rosen, Ulanowicz - that measurement is an informal act. An art or exercise of good judgement. And this is because the world of interest is inherently non-linear - it has a complexity that is irreducible.

Ulanowicz then goes on to talk about the lifecycle model which is relevant to the OP. He makes the point that his ascendancy has this dualistic dynamic. The trade-off is between the organisational power of a system - its useful order - versus the systems overhead needed to physically instantiate that pattern of organisation.

The chief advantage of using information theory to describe organization is that it allows one also to quantify the opposite (or complement) to information in similar fashion. Whence everything that is disordered, incoherent and dissipative in the same network can be captured by a related, non-negative variable called the system's overhead... Furthermore, a system's ascendency and overhead sum to yield its overall capacity for development.

The actual pattern of order is the result of two opposing tendencies: In an inchoate system (one with low a), there are manifold opportunities for autocatalytic cycles to form, and those that arise create internal constraints that increase A (and thereby abet a). This tendency for a to grow via autocatalysis exists at all values of a. The role of overhead, however, changes as the system progresses toward higher a.

In inchoate systems (low a), it is Φ that provides the opportunities for new cycles to form. In doing so it abets the tendency to increase autocatalysis. However, in systems that are already highly developed (a → 1), the dominant effect of Φ becomes the disruption of established feedback loops, resulting in a sudden loss of organized performance. (The system resets to a much lower a.)

So at high a, Φ strongly opposes further increase in a. Presumably, a critical balance between the countervailing roles of Φ exists near the value of a at which the qualitative role of Φ reverses.


Or as the Wiki page sums it up more simply...

Originally, it was thought that ecosystems increase uniformly in ascendency as they developed, but subsequent empirical observation has suggested that all sustainable ecosystems are confined to a narrow "window of vitality" (Ulanowicz 2002).

Systems with relative values of ascendency plotting below the window tend to fall apart due to lack of significant internal constraints, whereas systems above the window tend to be so "brittle" that they become vulnerable to external perturbations.


So my reply to the OP was to point out what I find to be a metaphysically reasonable account of a natural lifecycle approach to systems. It is a model developed for describing ecological systems, but both Salthe and Ulanowicz say it does extrapolate to the political and economic levels of sociological analysis.

Thus I demonstrated that within my Peircean/organicist kit-bag of well-grounded metaphysical concepts, here is a sound argument that has already been advanced.

Then check it out and you see Ulanowicz is pretty exercised by the Hegelian logic which his approach employs. He is very concerned with the Rosen modelling relation and what it says about the informality of acts of measurement and the incommensurability of mechanistic models to irreducibly complex worlds. He employs the very same metaphysical kitset as me. (Well up to a point. Ulanowicz is a good Catholic and so we differed on the theistic slant he was working towards on the sly. :) )

You then come along with the intent of nit-picking away, complaining that I don't follow through from a general metaphysical view to the particularity of every possible kind of entropy/information/exergy/degrees of freedom/whatever model.

Well, like ... whatever.



fdrake December 03, 2017 at 21:51 ¶ #129689
Reply to apokrisis

You're getting more interesting as I press you more. Thanks for the more detailed reply.
charleton December 03, 2017 at 22:04 ¶ #129694
Succession does not always lead to more complexity. It depends on the specific case of the environment.
For example in the post ice age landscape of the South Downs of England the tundra led to a range of scrub, bushes, heathland and trees. The species diversity increases in some instances, but can as easily become less complex by bearing fewer species.
Ash, elm, beech, hazel, will eventually succumb to the climax vegetation, which in this case is oak woodland so dense as to make many larger herbivores seek life elsewhere; all the scrub and less long-lived trees will have to give way to the oak.
Then comes a human and destroys everything but wheat fields. The soil erodes away and any growth at all depends on the application of chemical fertilisers.
This is not to say that humans always result in the destruction of the ecology. Aborigines of New South Wales used to practice fire-stick farming. Where the climax vegetation has led to natural monoculture, setting fire to the landscape can have a massive effect, increasing species diversity and the appearance of nut- and fruit-bearing plants.
Metaphysician Undercover December 03, 2017 at 23:44 ¶ #129728
Quoting fdrake
Digging in the weeds


That's it! That's the solution to the whole problem. Get into the garden and dig the weeds. First, we have to determine what exactly is a weed.
apokrisis December 04, 2017 at 01:07 ¶ #129762
Reply to fdrake Here's a book chapter arguing the same thing in a more general philosophical way.

https://www.researchgate.net/profile/Robert_Ulanowicz/publication/292610642_Enduring_metaphysical_impatience/links/56b00ad608ae9c1968b490b7/Enduring-metaphysical-impatience.pdf?origin=publication_detail

I like two key points. Natural systems are irreducibly complex because they feed off their own accidents. The traditional mechanical view wants to separate the formal laws constraining systems from the material accidents composing systems. But nature includes the accidental or spontaneous in the very forming of its laws, or constraining regularities.

This is beautifully Peircean. The "stuff" of the world isn't some disconnected universal machinery. Its Being includes its accidents as part of what makes any laws.

The second point is Elsasser's combinatorics argument, which says that even a computable universe is very quickly an intractably computable one. Really, the Universe is always "a one-off". A propensity view of statistics - as argued by Peirce and Popper - has to be fundamental.

Put the two together and every state of the Cosmos is an instance of one-off chance - or at least as much as it is its traditional "other" of a deterministic, law-constrained, mechanism.

Ulanowicz sums up the dichotomy of this "ordering vs the disordering" tendency in complexity thus.....

Elsasser argued that nature is replete with one-time events - events that happen once and never occur again. Accustomed as most investigators are to regarding chance as simplistic, Elsasser's claim sounds absurd. That chance is always simple, generic, and repeatable is, after all, the foundation of probability theory.

Elsasser, however, used combinatorics to demonstrate the overwhelming likelihood of singular events. He reckoned that the known universe consists of somewhere on the order of 10^85 simple particles. Furthermore, that universe is about 10^25 nanoseconds in age. So at the outside, a maximum of 10^110 simple events could possibly have transpired since the Big Bang.

Any random event with a probability of less than 1 in 10^110 of recurring simply won't happen. Its chances of happening again are not simply infinitesimally small; they are hyper-infinitesimally small. They are physically unreal.

That is all well and good, one might respond, but where is one going to find such complex chance? Those familiar with combinatorics are aware, however, that it doesn't take an enormous number of distinguishable components before the number of combinations among them grows hyper-astronomically.

As for Elsasser's threshold, it is reached somewhere in the neighborhood of seventy-five distinct components. Chance constellations of eighty or more distinct members will not recur in thousands of lifetimes of the universe.

Now it happens that ecologists routinely deal with ecosystems that contain well over eighty distinct populations, each of which may consist of hundreds or thousands of identifiable individual organisms. One might say, therefore, that ecology is awash in singular events. They occur everywhere, all the time, and at all scales.

None of which is to imply that each singular event is significant. Most simply do not affect dynamics in any measurable way; otherwise, conventional science would have been impossible. A few might impact the system negatively, forcing the system to respond in some homeostatic fashion.

A very rare few, however, might accord with the prevailing dynamics in just such a way as to prompt the system to behave very differently. These become incorporated into the material workings of the system as part of its history. The new behavior can be said to "emerge" in a radical but wholly natural way that defies explanation under conventional assumption.
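
The arithmetic behind Elsasser's threshold is easy to reproduce. A minimal sketch, assuming a "chance constellation" of n distinct components is counted by its n! arrangements (the quoted passage doesn't fix the exact counting rule, so the factorial reading is my assumption):

```python
import math

# Elsasser's bound: at most ~10^110 simple events since the Big Bang
# (10^85 particles x 10^25 nanoseconds, per the quoted passage).
BOUND_LOG10 = 110

def log10_arrangements(n):
    """log10 of n! - one way to count configurations of n distinct parts."""
    return math.log10(math.factorial(n))

print(round(log10_arrangements(75)))  # 109 - right at the threshold
print(round(log10_arrangements(80)))  # 119 - far beyond it
```

So a specific configuration of roughly seventy-five distinguishable components already has more possible arrangements than there have been elementary events in the universe's history, which is the sense in which such constellations are "one-off".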


So good luck to any science based on a mechanistic metaphysics that presumes accidents are simply uncontrolled exceptions that can be hidden behind a principle of indifference. Yet also the universe does have lawful regularity. It incorporates its accidents into the habits it then forms.

This is just a far more interesting story of the Cosmos than the usual one that pictures it as a mathematical clockwork - the one where science is reduced to a collection of measurement protocols rather than scientific measurement being the crafty art we know it to be.

Streetlight December 04, 2017 at 06:41 ¶ #129908
Reply to csalisbury Not to put too fine a point on it, but you come across as a bit of a hysteric (in the strict psychoanalytic understanding of it!), looking for the grand-plan behind it all that explains everything else - but... there really is no big Other here, I just genuinely like exploring these ideas and sketching connections where I can find them (if I can find them): this is bricolage, not conspiracy - give me some credit for my innocence!

But still -

Quoting csalisbury
To get political: isn't not too closed, not too open, being self-regulating while allowing lines of flight - i mean isn't that, in a perfect nutshell, neoliberalism?


Again, I see where you're coming from - although come on, first I'm a conservative, now I'm a neoliberal? - but again, I think you're obscuring the specificity of neoliberalism, which yes, does promote a kind of 'sustained growth' narrative but - only along a single dimension: specifically, that of market values - efficacy, outcome KPIs, best-practices - being the only things that neoliberalism can 'see'/is sensitive to. I think one of the nice things about seeing the world in ecological terms is precisely the fact that it forces one to take into account so-called 'externalities' - but better, it doesn't/can't even discriminate between 'externalities' and er, not-externalities because the whole point is that everything belongs to an ecosystem.

And look, I also acknowledged previously that one ought not to confuse tactics with strategy: perhaps, if we want to build a new, better world, it's necessary to burn this one down. Perhaps this is not the society worth trying to make better. One ought to separate what needs to happen from what ought to happen - this perhaps being a distinction between politics and ethics.

Quoting csalisbury
So all that being said, acknowledging I can't keep up with the math, I'm still confident enough to engage the OP on its own terms which are, I believe, metaphorical. Which isn't to say I think you think that self isn't literally an ecosystem - I believe you do, and I probably agree - but that I think the significance of this way of looking at the self ultimately relies on - and is motivated by- what can be drawn from it conceptually. It's about drawing on empirically-sourced models to the extent that they facilitate conceptual considerations. It's metaphorical in the literal sense that we're transporting some way of thinking from one level to another.


So yeah, this is a fair way to read the OP - I probably prefer it in fact. But then, I'd also say that this just is a kind of model of philosophy as such: drawing conceptual inferences, re-contextualizing the significance of empirical findings, etc, etc.
Streetlight December 04, 2017 at 12:19 ¶ #130073
Quoting charleton
Succession does not always lead to more complexity. It depends on the specific case of the environment.
For example in the post ice age landscape of the South Downs of England the tundra led to a range of scrub, bushes, heathland and trees. The species diversity increases in some instances, but can as easily become less complex by bearing fewer species.
Ash, elm, beech, hazel will eventually succumb to the climax vegetation, which in this case is oak woodland so dense as to make many larger herbivores seek life elsewhere; all the scrub and less long-lived trees will have to give way to the oak.


This is great! And my immediate thought is to relate this to the longer term dynamics of capitalism, in which certain established actors in the system co-opt resources and so deprive the 'smaller' actors of potential for growth so that they are either driven out or forced to live in what amounts to environmental ghettos. If I've been taught anything here it's that it might be interesting to map and taxonomize the differing trajectories of ecological systems and see what they might teach us about social ones.
Streetlight December 04, 2017 at 13:43 ¶ #130096
Quoting TimeLine
I didn't get a chance to read everything, but in the case of thermodynamic systems, the evolution of any given system is determined toward a state of equilibrium, and ergodicity attempts to ascertain the averages of behaviour within a system (transformations, arbitrary convergence, irreducibility etc) and political systems are an attempt to order the nature of Hobbesian chaos. I really like this:


I think it's an interesting question to see where Hobbes might fit into all of this. I mean, the first and most usual criticism of Hobbes is that his so-called state of nature doesn't look anything like nature; or at least, it only captures an incredibly narrow slice of it, which is otherwise replete with myriad other possibilities of social relation. And in this sense I think ecology is actually incredibly well positioned to show exactly why approaching the political through a Hobbesian lens is so deplorably misleading, insofar as ecosystems do in fact have all these differing and interesting trajectories which are constrained and enabled by all sorts of various parameters.

I mean, this is the point of 'open systems', that, with enough energy flow and some constraints thrown in, you end up with order, or more specifically, self-organization. The social contract, insofar as it is imposed 'from above', is more or less an inversion of this view. Deleuze writes of how Hume, for instance, undertakes exactly such an inversion of Hobbes:

"The fault of contractual theories is that they present us with a society whose essence is the law, that is, with a society which has no other objective than to guarantee certain preexisting natural rights and no other origin than the contract. Thus, anything positive is taken away from the social, and instead the social is saddled with negativity, limitation, and alienation. The entire Humean critique of the state of nature, natural rights, and the social contract, amounts to the suggestion that the problem must be reversed.... This understanding of the institution effectively reverses the problem: outside of the social there lies the negative, the lack, or the need. The social is profoundly creative, inventive, and positive." (Empiricism and Subjectivity); I think this is exactly the model that ought to be adopted.

Or else there is Kropotkin's wonderful anarchist line that "accustomed as we are by hereditary prejudices and absolutely unsound education and training to see Government, legislation and magistracy everywhere around, we have come to believe that man would tear his fellow man to pieces like a wild beast the day the police took his eye off him; that chaos would come about if authority were overthrown during a revolution. And with our eyes shut we pass by thousands and thousands of human groupings which form themselves freely, without any intervention of the law, and attain results infinitely superior to those achieved under governmental tutelage." (The Conquest of Bread).
fdrake December 04, 2017 at 16:10 ¶ #130135
Reply to apokrisis

This is an example of our different interests. You presume parameter spaces can be glued together. I'm concerned with the emergent nature of parameters themselves. You have your idea of how to make workable models. I'm interested in the metaphysics used to justify the basis of the model.

So it just gets tiresome when your criticism amounts to the fact I'm not bothered about the details of model building for various applications. I've already said my focus is on paradigm shifts within modelling. And core is the difference between mechanical and organic, or reductionist and holist, understandings of causality.


I think you're painting my objections as mechanistic and reductionist because I haven't adequately communicated what my objections to your use of 'entropy in general' are. This boils down to two ideas: one is that entropy is a parametrised concept, and the other is that you provide little information on how different senses of entropy are at work in your use of it.

We can discard the discussion of thematisation errors, since it doesn't seem to interest you. We also have more than enough to talk about.

My motivations in pointing out different senses of entropy aren't supposed to be aimed at demonstrating either the impossibility or futility of generating a generalised conception of entropy. Rather, they are pointing out difficulties in generalisation which I believe are relevant. Another purpose of using them as examples, as well as ascendency, is to show that there are different senses of entropy as they apply to different systems. This isn't to say there aren't general laws or a sufficient description of translation of one type of entropy to another, it's to say that there is a need to make the hows of entropy transfer between constitutive systems in a complex one a part of your accounts.

Let's take a tangent on the idea of transduction. Transduction is the change of one type of energy to another of a qualitatively different but quantitatively related sort. Hold a ball out at arm's length and suspend it against gravity. The position the ball is kept at relative to the floor imbues it with potential energy. When the ball is released, the potential energy is converted into kinetic energy as the ball falls. The description of the dynamics in terms of energies is equivalent to the description in terms of velocities and accelerations. There's also an equation that links potential and kinetic energy for the motion of the ball.
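
The ball case can be checked numerically; a toy sketch (the mass and height are illustrative numbers of my own, not from the discussion):

```python
import math

# Transduction: gravitational potential energy -> kinetic energy.
m = 0.5   # kg, mass of the ball (assumed)
g = 9.81  # m/s^2, gravitational acceleration
h = 1.2   # m, height the ball is held at (assumed)

pe = m * g * h            # energy stored by position
v = math.sqrt(2 * g * h)  # speed on reaching the floor, from PE = KE
ke = 0.5 * m * v**2       # kinetic energy at impact

# Qualitatively different quantities, quantitatively linked:
print(pe, ke)  # the two energies agree
```

The point of the sketch is just the quantitative equivalence: the same motion is fully described either by the energy bookkeeping or by velocities and accelerations.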

In an ecosystem, it's unfair to expect that there will be a complete description of its energetic and entropic dynamics. I've never had the mechanical hope that they can be specified like this. Regardless, when we talk about the transfer of energy between one functional stratum of an ecosystem to another, a description of the transduction of energy is the right kind of account. This could form part of a series of equations - which are likely to be too precise to capture all relevant stuff -, or a part of a historical description of the procedures constituting the transduction - which is likely to have an insufficient understanding of the specifics.

This is probably easier to understand in the context of economics. To a first approximation, there are two kinds of empirical analysis in economics. One is based on the mathematical analysis of trends and betting strategies; the other is based on a historical description of what happened, and can utilise mathematics in a less central role. Future prospects are analysed either in the predictive distribution of a mathematical model, or in qualitative necessities or high probabilities of event occurrence (e.g. the tendency of the rate of profit to fall in Marx, or Say's Law).

James Simons and Marx are good examples of each school.

I'll take an example of how you describe the dynamics of entropy and complexity in a system.

Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.


'So possibilities get removed' - how? Which possibilities are closed?
'The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas' - the canopy opens up, what degrees of freedom does this create? How do those degrees of freedom get turned into degrees of freedom for certain organisms? Which organisms? What properties do those organisms have? How do the 'degrees of freedom' in the 'crumbs' relate to the 'smaller stable niches', in what manner do they 'rain'? In what manner are they 'spent'?

When I actually try and analyse your descriptions of how things work, there are so many ambiguities - an essential feature of your posting style, in which every procedure of 'entropy transduction' is named but not specified. Your key terms - constraint, entropy, degrees of freedom, possibilities, information, symmetry breaking, dissipation - only obtain their sense in your posts through the background knowledge of their analogies in different contexts. When it actually comes to describing the hows - procedural system dynamics - the approach you take is holistic but empty of content.

Over the course of this discussion, I've probably read 8 or so hours worth of papers and I still don't have a clue on how you actually think things work. When you cite things, they don't actually flesh out the procedural descriptions in your posts. You want to be giving an account of how entropy flows over systems and how it changes each component and introduces new components. The general structure of your posts in this regard is to substitute in a concept related to entropy in an unclear manner and use the web of associations between the varied concepts of entropy and their contexts to flesh out the rest.

The purpose construct validity has in my argument is to highlight this fact. The way you use the key terms in your posts isn't cashed out by the background references. You analogise too much and specify too little.

I like two key points. Natural systems are irreducibly complex because they feed off their own accidents. The traditional mechanical view wants to separate the formal laws constraining systems from the material accidents composing systems. But nature includes the accidental or spontaneous in the very act of forming its laws, or constraining regularities.


Great! Makes sense! Parametrisations and contextualised procedural descriptions are ways of studying the behaviour of ecosystems. Mathematical models of them aren't complete pictures either, they're supposed to elucidate certain behaviours in parts of ecosystem behaviour. If someone actually believed a particular mathematical model is a perfect description of all of the ecosystem dynamics that it was applied to they'd almost certainly be wrong.

Regardless, the relationship of parameters in ecosystems and how subsystems become parametrised isn't something you can offload to the literature through your analogical web. Talking about these things is what it means to talk about the dynamics of ecosystems. You neglected to mention that one of the subsections of the paper you quoted from Ulanowicz is titled:

The Return of Law to Ecology?

which then goes on to analyse the dynamical properties of ascendency in a partly mathematical and partly phenomenological (in the scientific sense) manner.

So good luck to any science based on a mechanistic metaphysics that presumes accidents are simply uncontrolled exceptions that can be hidden behind a principle of indifference. Yet also the universe does have lawful regularity. It incorporates its accidents into the habits it then forms.


Again yes, I broadly agree with this. Ulanowicz's use of networks - not just quantities - to analyse the behaviour of ecosystems is exactly the kind of mathematical analysis that makes sense in this context. Graphs and networks are highly generalised and already have notions of 'flow', 'resistance' and 'variation' so long as their edges are weighted. The specific manifestations of the graphs in applied mathematical modelling of ecosystems are pretty bad at predicting their future except sometimes in the short term. So, if you want to think of ecosystems as flow networks, there's a lot of abstract generality there to exploit.

So, we don't have any irreconcilable methodological disagreements. I don't think this is a matter of mechanism vs organicism or reductionism vs holism. The particular beef I have is that you provide poetic descriptions of systems behaviour without fleshing out the details, and this destroys the credibility of the accounts.

So, could you please provide a procedural re-description of:

Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition. So possibilities get removed. The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas.


In a manner that answers the questions I have:

Which possibilities are closed?
What degrees of freedom does this create? How does it create rather than destroy them? How do those degrees of freedom get turned into degrees of freedom for certain organisms? Which organisms? What properties do those recipient organisms have? How do the 'degrees of freedom' in the 'crumbs' relate to the 'smaller stable niches', in what manner do they 'rain'? In what manner are they 'spent'? How does one set of degrees of freedom in the canopy become externalised as a potential for the ecosystem by its deconstruction and then re-internalised in terms of a flow diversification?

Being able to translate out of the abstract register to a specific example is essential to ensure the validity of your concept applications. When I tried with various entropy notions I couldn't, I still can't in terms of ascendency.
fdrake December 04, 2017 at 16:29 ¶ #130137
This is just for interest. A thing I noticed about the ecosystem flow networks is that when you analyse them in terms of flow proportions and add the nodes for the 'system source' and 'energy loss', the energy loss node is accessible from every other node but can't be escaped. This makes the energy loss node an absorbing state. When energy is transferred around the network over time, it's very likely that this induces a dissolution of the flow network; the flows concentrate on the energy sink. This is one possible configuration where the ascendency tends to 1.

One way of interpreting this is that the flow network in the ecosystem is metastable (figuratively). I think it's quite unlikely that there's another attractor in the flow network when there's a single absorbing state. Another is that if we looked at the evolution of the flow network over time (internal time to the network would be the application of its transition matrix to an initial state vector, external time being a temporal sequence of flow networks), the sink node would have to be re-integrated (have an outflow) to the remainder of the flow network so that the dynamics of the flow network don't dissolve - more generally, it has to become a subgraph that acts as both a source and a sink to the remainder of the network. However, this new ecosystem would also have the universal sink as an absorbing state... It's probably a stretch, but this has a nice interpretation in terms of the diversification of losses into inputs to new nodes - new niches, new connections, new flows. A further stretch, but one I quite like, is that the fact of the non-dissolution of a flow network necessitates the diversification of its losses. More prosaically, it will be forced to recycle in new ways.
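
The absorbing-state behaviour is easy to demonstrate with a toy transition matrix (invented flow proportions; a sketch, not an ecological measurement):

```python
import numpy as np

# Toy flow network: three compartments plus a universal 'loss' node.
# Rows are normalised outflow proportions; the loss node is reachable
# from every compartment but has no outflow - an absorbing state.
P = np.array([
    # plants herbiv detrit loss
    [0.0,   0.6,   0.2,   0.2],  # plants
    [0.0,   0.0,   0.7,   0.3],  # herbivores
    [0.5,   0.0,   0.2,   0.3],  # detritus (partly recycled to plants)
    [0.0,   0.0,   0.0,   1.0],  # loss: absorbing
])

state = np.array([1.0, 0.0, 0.0, 0.0])  # all energy starts with plants
for _ in range(200):
    state = state @ P  # iterate the transition matrix ('internal time')

print(state.round(6))  # virtually all mass has concentrated in the sink
```

Without an outflow from the loss node, iterating long enough always leaves the mass in the sink - the 'dissolution' point; re-integrating the sink (giving it an outflow back into the network) is what would keep the flow structure alive.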

Can draw diagrams if someone is interested.
apokrisis December 04, 2017 at 22:31 ¶ #130257
Quoting fdrake
Which possibilities are closed?


The possibility of something else having happened. The existence of the oak is a constraint on the existence of other trees, shrubs, weeds, that might have been the case without its shade. Without the oak, those other entropifiers were possible.

In what sense was that a mysterious statement? I'd just said "Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition."

So excuse me for being baffled at your professed bafflements in this discussion. I mean, really?

Quoting fdrake
What degrees of freedom does this create?


Again, you claim that I'm hand-waving and opaque, but just read the damn words and understand them in a normal fashion.

"The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas."

So the oak becomes the dominant organism. And as such, it itself can be host to an ecology of species dependent on its existence. Like squirrels and jackdaws that depend on its falling acorns. Or the various specialist pests, and their own specialist parasites, that depend on the sap or tissue. Like all the leaf litter organisms that are adapted to whatever is particular to an annual rain of oak leaves.

This is literally ecology 101. The oak trophic network is the primary school level example. You can pick away at its legitimacy with your pedantry all you like, but pay attention to the context here. This is a forum where even primary school science is a stretch for many. I'm involved in enough academic-strength discussion boards to satisfy any urge for a highly technical discussion. But the prime reason for sticking around here is to practice breaking down some really difficult ideas to the level of easy popularisation.

It's fun, it's professionally useful, I enjoy it. I agree that mostly it fails. But again that seems more a function of context. PF is just that kind of place where there is an irrational hostility to any actual attempt to "tell it right".

So bear in mind that I use the most simplified descriptions to get across some of the most subtle known ideas. This is not an accident or a sign of stupidity. And an expectation of failure is built in. This is just an anonymous sandbox of no account. My posts don't actually have to pass peer review. I don't have to worry about getting every tiny fact right because there are thousands ready to pounce on faint errors of emphasis as I do in my everyday working life.

So it is fine that you want that more technical discussion. But the details of your concerns don't particularly light my fire. If you are talking about ecologies as dissipative structures, then I'm interested. If you are talking about something else, like measuring species diversity, or the difficulties of actually measuring exergy/entropy flows in ecosystems, then I really couldn't care less.

For me, diversity just falls out of a higher-level understanding of statistical attractors - https://arxiv.org/abs/0906.3507

While actually measuring network flows is a vain dream from a metaphysical viewpoint. Of course, we might well achieve pragmatic approximations - enough for some ecological scientist to file an environmental report that ticks the legal requirement on some planning consent. But my interest is in the metaphysical arguments over why ecology is one of the "dismal sciences" - not as dismal as economics or political science, but plagued by the same inflated claims of mathematical exactness.

Quoting fdrake
What degrees of freedom does this create? How does it create rather than destroy them? How do those degrees of freedom get turned into degrees of freedom for certain organisms? Which organisms? What properties do those recipient organisms have? How do the 'degrees of freedom' in the 'crumbs' relate to the 'smaller stable niches', in what manner do they 'rain'? In what manner are they 'spent'? How does one set of degrees of freedom in the canopy become externalised as a potential for the ecosystem by its deconstruction and then re-internalised in terms of a flow diversification?


OK. Degrees of freedom is a tricky concept as it just is abstract and ambiguous. However I did try to define it metaphysically for you. As usual, you just ignore my explanations and plough on.

But anyway, the standard mechanical definition is that it is the number of independent parameters that define a (mechanical) configuration. So it is a count of the number of possibilities for an action in a direction. A rigid body in 3-space obviously has its three orthogonal or independent translational degrees of freedom, and three rotational ones (a zero-d point particle has just the three translational). There are six directions of symmetry that could be considered energetically broken. The state of the body can be completely specified by a constraining measurement that places it at a position in this coordinate system.

So how do degrees of freedom relate to Shannon or Gibbs entropy, let alone exergy or non-equilibrium structure? The mechanical view just treats them as absolute boundary conditions. They are the fixed furniture of any further play of energetics or probabilities. The parameters may as well be the work of the hand of God from the mechanical point of view.

My approach, following from Peirce, systems science, holism, organicism and other -isms stressing the four causes/immanently self-organising view, seeks to make better metaphysical sense of the situation.

So I say degrees of freedom are emergent from the development of global constraints. And to allow that, you need the further ontic category or distinction of the vague~crisp. In the beginning, there is Peircean vagueness, firstness or indeterminism. Then ontic structure emerges as a way to dissipate ... vagueness. (So beyond the mechanical notion of entropy dissipation, I am edging towards an organic model of vagueness dissipation - ie: pansemiosis, a way off the chart speculative venture of course. :) )

Anyway, when I talk about degrees of freedom, my own interests are always at the back of my mind. I am having to balance the everyday mechanical usage with the more liberal organic sense that I also want to convey. I agree this is likely confusing. But hey, it's only the PF sandbox. No-one else takes actual metaphysics seriously.

So organically, a degree of freedom is an action with a direction that has to emerge for some holistic or contextual good reason in the physical universe. So why these basic translational and rotational freedoms? Well Noether's theorem and relativity principles account for why actions in these two directions can never be constrained away even in a spatiotemporal system that represents a state of maximal constraint. They can't be parameterised out of existence. Quantum uncertainty has sure rammed that message home now.

So an ontology of constraints - like for instance the many "flow network" approaches of loop quantum gravity - says that constraints encounter their own limits. Freedoms (like the Newtonian inertias) are irreducible because constraints can make reality only so simple - or only so mechanically and atomistically determined. This is in fact a theorem of network theory. All more complicated networks can be reduced to a 3-connection, but no simpler.

So in the background of my organic metaphysics is this critical fact. Reality hovers just above nothingness with an irreducible 3D structure that represents the point where constraints can achieve no further constraint and so absolute freedoms then emerge. This is nature's most general principle. Yes, we might then cash it out with all kinds of more specific "entropy" models. But forgive me if I have little interest in the many piffling applications. My eyes are focused on the deep metaphysical generality. Why settle for anything less?

Now back to your tedious demand that I explain ecology 101 trophic networks with sufficient technical precision to be the exact kind of description that you would choose to use - one that would pass peer review in your line of work.

Well, again I'm thinking fuck that. I really don't care beyond the possibility that the discussion might be another little window into my bigger picture. I last talked with Ulanowicz probably 15 years ago. So it was interesting to read his recent papers and see how much he has continued on post-retirement in a rather Peircean vein like the rest of that particular crew. (Pattee was the funny one. He got grumpy and went silent for a number of years, despite being the sharpest blade. Then came back blazing as a born-again biosemiotician. The boss once more.)

Anyhow, fill in the blanks yourself. When I talk of a rain of degrees of freedom, as I clarified previously, I'm talking of the exergy that other entropy degraders can learn how to mine in all the material that the oak so heedlessly discards, or can afford to have diverted.

The oak needs to produce sap for its own reasons. That highly exergetic matter - a concentrated goodness - then can act as a steep entropy gradient for any critters nimble enough to colonise it. Likewise, the oak produces many more acorns than required to replicate, it drops its leaves every year, it sheds the occasional limb due to inevitable accidents. It rains various forms of concentrated goodness on the fauna and flora below.

Is exergy a degree of freedom? Is entropy a degree of freedom? Is information a degree of freedom?

Surely by now you can work out that a degree of freedom is just the claim to be able to measure an action with a direction that is of some theoretical interest. The generality is the metaphysical claim to be able to count "something" that is a definite and atomistic action with a direction in terms of some measurement context. We then have a variety of such contexts that seem to have enough of your "validity" to be grouped under notions like "work", or "disorder", or "uncertainty".

So "degree of freedom" is a placeholder for all atomistic measurements. I employ it to point to the very fact that this epistemic claim is being made - that the world can be measured with sufficient exactness (an exactness that can only be the case if bolstered by an equally presumptuous use of the principle of indifference).

Then degree of freedom, in the context of ecological accounts of nature, does get particularised in its various ways. Some somewhat deluded folk might treat species counts or other superficialities as "fundamental" things to measure. But even when founding ecology more securely in a thermodynamical science, the acts of measurement that "degrees of freedom" represent could be metaphysically understood as talking about notions of work, of disorder, of uncertainty. Ordinary language descriptions that suddenly make these different metrics seem much less formally related perhaps.

That is the reason I also seek to bring in semiosis to fix the situation. You complain I always assimilate every discussion to semiotics. But that is just because it is the metaphysical answer to everything. It is the totalising discourse. Get used to it.

So here, the key is the epistemic cut that can connect entropy and information. Disorder and uncertainty can be physically related in terms of rate-dependent dynamics and rate-independent information. I earlier linked to a long post which explained how this is currently being cashed out in a big way in biophysics - hence my mention of ATP as the unit of currency that puts a material scale on a cell's metabolic degrees of freedom. ATP is the concentrated goodness of "pure work". We can ground exergy at life's nanoscale, quasi-classical intersection, where a set of entropies just happen with remarkable convenience to converge.

(Like SX, you really need to add Hoffman's Life's Ratchet to your reading list.)

Right. I'm sure you will have a bunch of nit-picking pedantry welling up inside of you so I will leave off there. Just remember that I really am engaged in a broad metaphysical project. The correct definition of degrees of freedom is certainly a central concern as it is at the heart of the scientific method. It encodes whatever it is that we might mean by our ability to measure the "real facts" of the world. So it is at this level we can hope to discover the presumptions built into any resulting umwelt or worldview.

You keep demanding that I cash out concepts in your deeply entrenched notions of reality. I keep replying that it is entrenched notions of reality that I seek to expose. We really are at odds. But then look around. This is a philosophy forum. Or a "philosophy" forum at least. Or a philosophy sandbox even. What it ain't is a peer review biometrics journal.


fdrake December 04, 2017 at 23:03 ¶ #130266
@apokrisis

Will reply more later, it's late and I'm hooked into some wires.

Now back to your tedious demand that I explain ecology 101 trophic networks with sufficient technical precision to be the exact kind of description that you would choose to use - one that would pass peer review in your line of work.


I didn't want you to engage in some kind of organic/mechanical translation exercise, I wanted you to give some specifics of how your concepts act in an ecosystem (or a classical representation of one). Which you did, along with giving an interesting reference. I wanted you to be technically precise with your use of terms - good that you did this.

I'm sure you will have a bunch of nit-picking pedantry welling up inside of you so I will leave off there.


I certainly have more sympathy for your view when you do attempt to cash it out in the example. I'm not just looking for why you're wrong, I'm looking for how you could be right.
apokrisis December 04, 2017 at 23:35 ¶ #130276
Quoting fdrake
I wanted you to be technically precise with your use of terms - good that you did this.


Great that you think that. But again, my goal is to be technically precise at the most general metaphysical or qualitative level. If you don't yet accept the validity of that, then I don't care. However, just as you insist I cash out my generality in terms of your particular paradigmatic specificity, I only wish you would make an effort to ground your demand for specificity in some more considered ontic basis in fair exchange.

Why do I have to come all the way over to you? Why do you get to be the judge of "what's good enough" here? And don't pretend that this isn't the rhetorical trap you have sought to establish in this thread.

I'm familiar with all those kinds of tricks, so if you truly want a deeper level of mutual engagement, you might want to reconsider. I'm not sure that you actually have that much to offer me in return. But we will see if you can eventually pull some metaphysical insights out of the bag with a high surprisal.

apokrisis December 05, 2017 at 03:07 ¶ #130348
Quoting StreetlightX
(Empiricism and Subjectivity); I think this is exactly the model that ought to be adopted.


Why adopt the sterile old approach where one side of a dialectical relation must be “wrong” so the other can be “right”?

Functional systems - as in political, economic, social, ecological - are the product of their complementary tensions. Both defining aspects of their dynamics are “right” - at least to the degree that they are together in a functional balance.

So with a social system, that is why both localised or bottom-up competition is to be encouraged as much as global top-down cooperation. Both need to be vigorous actions even if also then in some mutually beneficial balance.

You lapse into an us vs them rhetorical mode without even thinking. So that leaves you unable to put something like individualism - localised competitive striving - in its appropriate context. One minute you are against neoliberal selfishness, the next pro PC pluralism. Hence your political position ends up radically confused.

So yeah, it would be a problem if the institutional level of social order were somehow taken as the positive, the social or personal as the negative. But then the reversal of this prioritisation is just as bad.

The trick is to see the positive aspect in both the global constraints, the social instinct towards cooperativity, and the local degrees of freedom, the social instinct that celebrates individuality, spontaneity and general striving.

Well, I say trick. It could also be said to be the bleeding obvious.
Deleteduserrc December 05, 2017 at 03:50 ¶ #130354
Reply to StreetlightX I appreciate the conceptually sterilized hystericism you've imputed to me, but I think I'm hysterical in a far more vulgar way, anxiety-at-the-dinner-party. I'm confused, I'm listening to a bunch of people talking, feeling like I don't know if any of it adds up to anything, so 'acting out' if only to get a response. That's where the hysteric's nervous laughter comes from. i got on a public bus, once, years ago and the busdriver complimented me on my sweater, then another dude too. A woman on the bus (with way nicer clothes than me or anyone else there) lost it, crying laughing. I read it like this: "Wtf is this discourse about nice sweaters?! This fucking kid is wearing H&M for christs sake"

I'm certainly not suggesting you have some master plan to justify x or y and here i am trying to sniff it out. You know critical theory, justifications are sneakier than that. Vanishing mediators. Bam crash! you realized you always had the concepts to justify where you are now, if you follow.

I don't think theres anything crazy about implying conservatism one moment and neoliberalism the next. thats basically the gop in a nutshell. But take the model and apply to whatever x.

Anyway im a hysterical bundle of rage and incredulity on a bus of leftists talking ecosystems - ill take it on the chin - and im laughing/crying confused, asking: do you actually believe what youre saying? Whats your career track look like? Is that a zizek sweater from 2006?

Its fantastic youre a bricoleur but that doesn't mean anything. Ed Gein was (literally) a bricoleur - its politically neutral - it just means you use what you got. For what? For what????

Streetlight December 05, 2017 at 03:55 ¶ #130356
Quoting csalisbury
For what? For what????


Joy, OCD.
Deleteduserrc December 05, 2017 at 04:06 ¶ #130359
@StreetlightX it looks bad arguing against joy, so that lets [everyone ever] off the hook. And if a need for control can be chalked up vaguely to a mental illness (that im diagnosed with!) far be it for me to complain. Hope you mean that tho. Full-blooded ocd is a curse. Its not a rhetorical strategy.
apokrisis December 05, 2017 at 04:17 ¶ #130361
Quoting csalisbury
Is that a zizek sweater from 2006?


Quoting csalisbury
Ed Gein was (literally) a bricoleur


Glad to see you around again. And in inspired form. Zing!
Deleteduserrc December 05, 2017 at 04:20 ¶ #130363
@apokrisis i zinged you too in my last post but you may have missed it. Listen: i dont trust anyone - but my girl *romantic kiss*
Streetlight December 05, 2017 at 05:02 ¶ #130366
Reply to csalisbury I dunno, I quite like wielding joy aggressively. It's a much underrated way of approaching things - very few know how to deal with weaponized joy. It also means I really don't have much to offer you, by way of your hysteria - I'm bound to disappoint your drives, and I'm perfectly okay with that. I like your provocations though qua provocations- they sharpen me, and are relievingly unformulaic, unlike certain others around here.
fdrake December 06, 2017 at 14:39 ¶ #130900
The possibility of something else having happened. The existence of the oak is a constraint on the existence of other trees, shrubs, weeds, that might have been the case without its shade. Without the oak, those other entropifiers were possible.

So excuse me for being baffled at your professed bafflements in this discussion. I mean, really?


I didn't doubt that you understood the 'ecology 101' folklore of how biomass flows and how niches are distributed in the canopy-forest floor trophic network. Why I asked was to see how you used your dictionary of concepts to explain the trophic network in those terms.

Again, you claim that I'm hand-waving and opaque, but just read the damn words and understand them in a normal fashion.

So the oak becomes the dominant organism. And as such, it itself can be host to an ecology of species dependent on its existence. Like squirrels and jackdaws that depend on its falling acorns. Or the various specialist pests, and their own specialist parasites, that depend on the sap or tissue. Like all the leaf litter organisms that are adapted to whatever is particular to an annual rain of oak leaves.

The oak trophic network is the primary school level example. You can pick away at its legitimacy with your pedantry all you like, but pay attention to the context here. This is a forum where even primary school science is a stretch for many. I'm involved in enough academic-strength discussion boards to satisfy any urge for a highly technical discussion. But the prime reason for sticking around here is to practice breaking down some really difficult ideas to the level of easy popularisation.


I'm not in the business of asking you to describe a simplified trophic network in the usual way it's described and then saying 'aha, it was too simple'; that'd be an empty rhetorical strategy. Again, what I wanted you to do was use your concepts in a way which clarified their meaning in a simplified trophic network. I take it you agree that a generalised theory of entropy has to be able to instantiate to real world examples, otherwise it's a metaphysics divorced from the reality it concerns.

It's fun, it's professionally useful, I enjoy it. I agree that mostly it fails. But again that seems more a function of context. PF is just that kind of place where there is an irrational hostility to any actual attempt to "tell it right".


I thought my responses were precisely demands to 'tell it right' from your perspective. This is commensurate with when you say:

So bear in mind that I use the most simplified descriptions to get across some of the most subtle known ideas. This is not an accident or a sign of stupidity. And an expectation of failure is built in. This is just an anonymous sandbox of no account. My posts don't actually have to pass peer review. I don't have to worry about getting every tiny fact right because there are thousands ready to pounce on faint errors of emphasis as I do in my everyday working life.


I'm not in the business of playing peer-review level criticism to your ideas, I don't think my comments have been like that.

So it is fine that you want that more technical discussion. But the details of your concerns don't particularly light my fire. If you are talking about ecologies as dissipative structures, then I'm interested.


More technical discussion = apo specifies what his terms mean and how they work in the contexts he describes. I think you'll agree that the style of the post I'm currently replying to is quite different from your usual subsumption of a problem phenomenon to your dictionary of concepts.

For me, diversity just falls out of a higher level understanding of statistical attractors - https://arxiv.org/abs/0906.3507


It's an interesting paper, though it doesn't provide any explicit links between systems that internalise the constraints they use and biodiversity. It looks at specific entropy measures for various spaces, then derives maximal entropy distributions subject to constraints. Take the binomial example: it's a discrete distribution with constrained counts. The analysis in the paper shows that when you assume a partitioning structure with 2 bins, look at summations of Bernoulli trials, and constrain the mean to a constant, you get the binomial distribution as the maximum entropy one.

This is a nice link between entropy and the binomial. However, certain configurations of the binomial are entropy maximising - so there's a qualitative distinction between the entropy maximisation occurring in the space of distributions and the entropy maximisation occurring on the maximum entropy distribution that's picked out. Similarly with the space of distributions: the degrees of freedom in the space of distributions are essentially infinite, the degrees of freedom in terms of applied constraints are 1, and the degrees of freedom within the binomial formula are also 1 since the sum is constrained.

This goes some way to addressing the 'transduction of entropy'. Through a single calculation you end up with the relation of two different entropy concepts and three different degrees of freedom concepts. The caveat is that the application of the Lagrange constraints narrows the application of the results to pre-specified parameter spaces, so an initial justification that a system cares about those constraints (and cares about entropy maximisation) would have to be provided.
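As a check on that reading, here is a minimal numerical sketch (not from the linked paper; the trial probabilities below are invented for illustration): among sums of independent Bernoulli trials with a fixed mean, the equal-probability case, i.e. the binomial, comes out with the highest Shannon entropy.

```python
import math

def poisson_binomial_pmf(ps):
    # pmf of a sum of independent Bernoulli(p_i) trials, built by convolution
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)      # trial fails: count unchanged
            new[k + 1] += q * p        # trial succeeds: count goes up by one
        pmf = new
    return pmf

def shannon_entropy(pmf):
    # Shannon entropy in bits
    return -sum(q * math.log2(q) for q in pmf if q > 0)

# three ways to realise mean 2.0 over 4 Bernoulli trials
equal   = [0.5, 0.5, 0.5, 0.5]       # the binomial case
skewed  = [0.9, 0.9, 0.1, 0.1]
extreme = [1.0, 1.0, 0.0, 0.0]       # fully determined: zero entropy

for ps in (equal, skewed, extreme):
    print(ps, round(shannon_entropy(poisson_binomial_pmf(ps)), 4))
```

Constraining the mean while letting the individual trial probabilities vary, the equal-probability (binomial) member dominates in entropy, which is the Lagrange-constraint result of the paper seen from the sampling side.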

While actually measuring network flows is a vain dream from a metaphysical viewpoint. Of course, we might well achieve pragmatic approximations - enough for some ecological scientist to file an environmental report that ticks the legal requirement on some planning consent. But my interest is in the metaphysical arguments over why ecology is one of the "dismal sciences" - not as dismal as economics or political science, but plagued by the same inflated claims of mathematical exactness.


Inflated claims of mathematical exactness are a problem across any science whose subject matter is difficult in an epistemic sense. The empirical humanities, including medicine, are actually waking up to this fact at the minute, see the replication crisis.

OK. Degrees of freedom is a tricky concept as it just is abstract and ambiguous. However I did try to define it metaphysically for you. As usual, you just ignore my explanations and plough on.

But anyway, the standard mechanical definition is that it is the number of independent parameters that define a (mechanical) configuration. So it is a count of the number of possibilities for an action in a direction. A point particle in 3-space has its three orthogonal or independent translational degrees of freedom; give it rigid extension and it gains three rotational ones. There are then six directions of symmetry that could be considered energetically broken. The state of the body can be completely specified by a constraining measurement that places it at a position in this coordinate system.

So how do degrees of freedom relate to Shannon or Gibbs entropy, let alone exergy or non-equilibrium structure? The mechanical view just treats them as absolute boundary conditions. They are the fixed furniture of any further play of energetics or probabilities.


I'm not sure what you mean by boundary conditions, but I'm guessing it's something like 'background assumptions required for the formation of a measure'.

The parameters may as well be the work of the hand of God from the mechanical point of view.


I appreciate that you are attempting to find a sense of 'becoming relevant' of parameters, and I think the paper you linked about maximum entropy distributions is a step in the right direction. But I don't think it's appropriate to treat parameters as 'God given', as you put it.

If you want to mathematise something, it'll have a bunch of assumptions of irrelevance so that it fits on a page. E.g., when you look at something solely in terms of a binomial distribution, you care about counts of stuff, not how the counts became relevant. A phenomenological description of what's happening in a system is always useful and should be a mandatory preparatory measure for a couple of reasons. Maybe you'll see some dialectical correspondence in this:

(1) It expresses the model building intuitions and the purported significance of included terms and the irrelevance of excluded ones.
(2) It allows the relation of the mathematisation to the imaginative background of the phenomenology that derived it.

So I say degrees of freedom are emergent from the development of global constraints. And to allow that, you need the further ontic category or distinction of the vague~crisp. In the beginning, there is Peircean vagueness, firstness or indeterminism. Then ontic structure emerges as a way to dissipate ... vagueness. (So beyond the mechanical notion of entropy dissipation, I am edging towards an organic model of vagueness dissipation - ie: pansemiosis, a way off the chart speculative venture of course. :) )

Anyhow, fill in the blanks yourself. When I talk of a rain of degrees of freedom, as I clarified previously, I'm talking of the exergy that other entropy degraders can learn how to mine in all the material that the oak so heedlessly discards or can afford to be diverted.

The oak needs to produce sap for its own reasons. That highly exergetic matter - a concentrated goodness - can then act as a steep entropy gradient for any critters nimble enough to colonise it. Likewise, the oak produces many more acorns than required to replicate, it drops its leaves every year, it sheds the occasional limb due to inevitable accidents. It rains various forms of concentrated goodness on the fauna and flora below.


Instantiating it:

Oak community has X number of species dependent solely on its existence to exist.
Oak community has Y number of species which are reduced in number solely from what would happen without the oak community.

These are degrees of freedom in the first sense.

Species in X have network of flows. Oaks removed, X goes to 0.
Species in Y have networks of flows. Oaks removed, Y probabilistically increases.

Flows:

Complete degradation of network consisting of X, inputs to X are reassigned to other networks.
Total throughput in Y increases if Y has species which were constrained by species in X - since input node to Y increases if it is a function of input to X.

Total throughput - a sum-like variable - is assumed constant so long as the trophic network is stable, or permits immediate recolonisation of destroyed niches with the same efficiency, and so long as concentration of flows will not degrade the ecosystem - decreasing degrees of freedom in the first sense.

If energy from removal of X's effects are distributed evenly among functional roles, degrees of freedom in the second sense increase a lot. If they are equally concentrated, degrees of freedom remain roughly constant. Degrees of freedom in the second sense - similar to exponentiation of flow entropy.

Measurement - variables
X and Y can be identified without error, but inclusion in the study can miss some out.
Total throughput - two measurements required to detect change, likely noisy; nodes in the study can miss some out.

Expected behaviour-
Entropy maximisation - requires that distributional changes resulting from X's removal increase entropy in the functional sense. Occurs through function of total throughput and the proportions obtained of it by new niches.
Generalised entropy maximisation - has occurred if distributions in the pre-removal of X era are shifted closer to derived maximal entropy distribution with entropy maximising parameters.

Does this sound like a transcription of the canopy-floor ecosystem into your abstract register?

If so: there's rather a lot of counterfactuals there. Especially to assume without evidence.
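The 'exponentiation of flow entropy' in the second sense above can be made concrete with a toy calculation (all species names and flow magnitudes below are invented): the exponential of the Shannon entropy of the flow-magnitude distribution gives an 'effective number' of equally weighted flows, which drops when removing the oak concentrates throughput onto fewer paths.

```python
import math

def flow_entropy(flows):
    # Shannon entropy (nats) of the distribution of flow magnitudes
    total = sum(flows.values())
    return -sum((f / total) * math.log(f / total)
                for f in flows.values() if f > 0)

def effective_flows(flows):
    # exp(H): the effective number of equally weighted flows in the network
    return math.exp(flow_entropy(flows))

# hypothetical canopy-floor flows (arbitrary energy units)
before = {("oak", "squirrel"): 30, ("oak", "aphid"): 20,
          ("oak", "litter"): 40, ("litter", "fungi"): 25,
          ("aphid", "parasitoid"): 5}

# oak removed: the surviving throughput concentrates on one dominant path
after = {("shrub", "litter"): 70, ("litter", "fungi"): 25}

print(round(effective_flows(before), 2))  # before > after
print(round(effective_flows(after), 2))
```

On these made-up numbers the pre-removal network has more effective flows than the post-removal one, matching the claim that concentration of flows decreases degrees of freedom in the second sense.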

Anyway, when I talk about degrees of freedom, my own interests are always at the back of my mind. I am having to balance the everyday mechanical usage with the more liberal organic sense that I also want to convey. I agree this is likely confusing. But hey, it's only the PF sandbox. No-one else takes actual metaphysics seriously.

So an ontology of constraints - like for instance the many "flow network" approaches of loop quantum gravity - says that constraints encounter their own limits. Freedoms (like the Newtonian inertias) are irreducible because constraints can make reality only so simple - or only so mechanically and atomistically determined. This is in fact a theorem of network theory. All more complicated networks can be reduced to a 3-connection, but no simpler.

So in the background of my organic metaphysics is this critical fact. Reality hovers just above nothingness with an irreducible 3D structure that represents the point where constraints can achieve no further constraint and so absolute freedoms then emerge. This is nature's most general principle. Yes, we might then cash it out with all kinds of more specific "entropy" models. But forgive me if I have little interest in the many piffling applications. My eyes are focused on the deep metaphysical generality. Why settle for anything less?


Looking at how your background conceptions apply to the real world is an excellent way of revealing conceptual and practical problems in your metaphysics. It isn't settling for less.

Surely by now you can work out that a degree of freedom is just the claim to be able to measure an action with a direction that is of some theoretical interest. The generality is the metaphysical claim to be able to count "something" that is a definite and atomistic action with a direction in terms of some measurement context. We then have a variety of such contexts that seem to have enough of your "validity" to be grouped under notions like "work", or "disorder", or "uncertainty".

So "degree of freedom" is a placeholder for all atomistic measurements. I employ it to point to the very fact that this epistemic claim is being made - that the world can be measured with sufficient exactness (an exactness that can only be the case if bolstered by an equally presumptuous use of the principle of indifference).


Hurrah, it was a placeholder. I understood what you meant!

Then degree of freedom, in the context of ecological accounts of nature, does get particularised in its various ways. Some somewhat deluded folk might treat species counts or other superficialities as "fundamental" things to measure. But even when founding ecology more securely in a thermodynamical science, the acts of measurement that "degrees of freedoms" represent could be metaphysically understood as talking about notions of work, of disorder, of uncertainty. Ordinary language descriptions that suddenly make these different metrics seem much less formally related perhaps.


Could you comment on my attempt at instantiating your concepts to the canopy-floor ecosystem example?

That is the reason I also seek to bring in semiosis to fix the situation. You complain I always assimilate every discussion to semiotics. But that is just because it is the metaphysical answer to everything. It is the totalising discourse. Get used to it.


Why do you think semiotics is the totalising discourse? I'm quite suspicious of the claim that there are genuine totalising discourses; attempts to reduce reality to one type of thing fail for precisely the same reasons systems science became so popular (perhaps with some irony resulting from the view of everything as a system).

You keep demanding that I cash out concepts in your deeply entrenched notions of reality. I keep replying that it is entrenched notions of reality that I seek to expose. We really are at odds. But then look around. This is a philosophy forum. Or a "philosophy" forum at least. Or a philosophy sandbox even. What it ain't is a peer review biometrics journal.


What kind of description would satisfy your desire for a better 'ontic development' of my presumptions?

You keep complaining that I'm attacking your concepts solely because they're not biometrically sound. This is the same kind of thing as saying that I have a mechanist's vantage point on ecology. The reason I'm using pre-developed entropy measures is to highlight the ambiguity in your presentations of the concept. The purpose was to get you to describe how stuff worked in your view without the analogising.

So I can add to my apokrisis dictionary: what's a vague-crisp distinction when it's at home? And what's the epistemic cut?
fdrake December 06, 2017 at 17:36 ¶ #130934
@apokrisis

I'm not going to respond to anything quantum or differential-geometric unless you think it's essential. Things are involved enough as it is.


apokrisis December 06, 2017 at 21:50 ¶ #130976
Reply to fdrake Sorry fdrake, but I don't get where this is going. Your responses are vague as if you are only intent on creating some endless descent into technicalities with no finishing line. If you don't signal what you agree with, then I'm just guessing at where any useful disagreement lies.

Do you want to have a go at summing up what you think has been revealed to be essentially wrong about my general metaphysical approach here? What would be the core disagreement in terms of orientation?

I explained for instance that a degree of freedom is a placeholder for the brute claim to be able to measure "actions with directions". You replied, hurrah, you were right that it is a placeholder. But then didn't comment at all on the kind of placeholder I said it was.

Then again, I specified that we find various notions of "actions with directions" being counted. Degrees of freedom can be decomposed into various more qualitative or contextual notions, like "work", "disorder", "uncertainty". Once more, no comment on whether you agree or disagree.

Nor will you tie anything back to my original reply to the OP - my mention of Salthe/Ulanowicz's lifecycle analysis and its applicability to political theory. A metric like ascendency tries to pick up on something even more subtle than the usual dissipative structure story.

Degrees of freedom in this context are the reserve, the overhead, that a living system needs to hold back so as to be able to adapt to perturbation. An organism (or society) can't afford to spend all its entropic "income" on here-and-now maximal growth. It wants a reserve of fat, a reserve of degrees of freedom, to deal with unexpected challenges.

My point is that "degrees of freedom" is a useful generic term because it is dichotomous to "constraints", it signals "whatever is definitely countable in terms of some parameterised theory", and it is undefined enough to encompass an ever branching family of thermodynamically related thought - as in capturing this notion of a reserve of adaptive capacity. So I don't use terminology in some unthinking handwaving fashion, as has been your repeated accusation. There is a proper metaphysical structure that organises my ideas. And it is a way of looking at the issues which I learnt firsthand from folk like Salthe and Ulanowicz.
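Ulanowicz's ascendency and the reserve (overhead) have standard information-theoretic definitions, so the idea can be sketched numerically (the three-compartment network below is invented for illustration): ascendency weights total throughput by the average mutual information of the flow structure, and overhead is the capacity left over, the slack the system keeps for adapting.

```python
import math
from collections import defaultdict

def ascendency_overhead(flows):
    # Ulanowicz's indices for a flow network: ascendency A and
    # overhead C - A, where C is the development capacity
    total = sum(flows.values())
    out_ = defaultdict(float)   # row sums: total outflow per compartment
    in_ = defaultdict(float)    # column sums: total inflow per compartment
    for (i, j), t in flows.items():
        out_[i] += t
        in_[j] += t
    A = sum(t * math.log((t * total) / (out_[i] * in_[j]))
            for (i, j), t in flows.items() if t > 0)
    C = -sum(t * math.log(t / total) for t in flows.values() if t > 0)
    return A, C - A

# hypothetical three-compartment network (arbitrary units)
flows = {("plant", "herbivore"): 50, ("herbivore", "carnivore"): 10,
         ("plant", "detritus"): 40, ("herbivore", "detritus"): 30}

A, overhead = ascendency_overhead(flows)
print(round(A, 2), round(overhead, 2))
```

Both quantities are non-negative by construction, and a maximally organised single-path network has zero overhead: no reserve of degrees of freedom left to absorb perturbation.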

So again, is there anything more here than you want me to break a still-developing metaphysics of a pansemiotic Cosmos down into everyday measures you can employ to do a better job of modelling some ecosystem with?

Sum it up. What do you agree with and what is any core disagreement or vital question that remains to be tackled?

charleton December 06, 2017 at 22:59 ¶ #130992
Reply to StreetlightX Interesting reaction. I think you could put yourself in danger of seeing the results of dynamics which are actually contingent upon unique conditions and elevating them into a series of causal factors. The tail of the system wagging the dog of necessity.
apokrisis December 07, 2017 at 20:47 ¶ #131208
Quoting fdrake
So I can add to my apokrisis dictionary: what's a vague-crisp distinction when it's at home? And what's the epistemic cut?


Vagueness is that to which the principle of non-contradiction fails to apply. It is ultimate ambiguity in that it is neither a something nor a nothing. It is a state of radical indeterminism. The crisp would then be its matching complementary opposite. It would be the absolutely determinate, the definite and certain. The PNC would be in full effect.

Imagine being in a boat on a lake drifting in a fog. Whether you were moving somewhere or going nowhere would be indeterminable. There just wouldn’t be a definite answer either way. But then you suddenly bump the shore. Now you definitely know. Well, either it is now definitely the case you drifted in the shore’s direction or the shore moved and managed to bump into you.

The epistemic cut is Howard Pattee’s term for the semiotic modelling relation that is the basis of life and mind. He was drawing on von Neumann’s theory of self-reproducing automata to talk about the necessary division between a dynamical system and its symbolically-encoded self-description.

So it is about the separation between observer and observables, laws and initial conditions, software and hardware, genes and metabolism, etc. Or in general, our metaphysical distinction between rate dependent dynamics and rate independent information.

See - https://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html and
http://www.academia.edu/863864/The_physics_of_symbols_and_the_evolution_of_semiotic_controls

For instance, Pattee captures the strangely hybrid metaphysics of the statistical view rather nicely....

There has always been an apparent paradox between the concept of universal physical laws and semiotic controls. Physical laws describe the dynamics of inexorable events, or as Wigner expresses it, physical explanations give us the impression that events ". . . could not be otherwise." By contrast, the concepts of information and control give us the impression that events could be otherwise, and the well-known Shannon measure of information is just the logarithm of the number of other ways.

One root of this paradox is the fact that the formulation of physical laws depends fundamentally on the concepts of energy, time, and rates of change, whereas information measures and the syntax of formal languages and semiotic controls are independent of energy, time, and rates of change. A second root of the paradox is that fundamental physical laws, as they are described mathematically, are deterministic and time-symmetric (reversible), whereas informational concepts like detection, observation, measurement, and control are described as statistical and irreversible.

Perhaps the deepest root of the problem, however, is the conceptual incompatibility of the concepts of determinism and choice, a paradox that has existed since the earliest philosophers. The modern attempts in physics to live with this paradox require introducing statistical concepts that allow alternatives into the framework of physical laws by reinterpreting the essential distinction between the laws themselves that describe all possible alternatives and the initial conditions that determine one particular case. Statistical physics accepts the inexorability of the laws, but assumes that virtual alternatives can exist in the microscopic initial conditions.

One measure of the alternatives is the entropy. Thus, we create imaginary statistical ensembles of systems which all follow the same dynamical laws, but that have different sets of initial conditions. These virtual microscopic states are restricted only by statistical postulates and their consistency with macroscopic state variables.

A modification of this classical view by Born points out that initial conditions of even one particle can never be measured with formal precision, and therefore even the classical laws of motion can predict only probability distributions for trajectories. Only when a new measurement is made can this distribution be altered.

The fact remains, however, that all our formal semiotic descriptions and computations, whether we interpret them as probabilistic, statistical, or fuzzy, are in practice assumed to be manipulated by crisp, strictly deterministic rules, even though physical laws require the execution of semiotic rules to be stochastic events.

The physics of symbols and the evolution of semiotic controls - 1996
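Pattee's aside that the Shannon measure of information is "just the logarithm of the number of other ways" can be made concrete with a few lines of Python. This is my own illustrative sketch, not anything from Pattee: for N equally likely alternatives the Shannon entropy H = -Σ p·log2(p) collapses to log2(N), and a system whose alternatives are heavily constrained carries correspondingly less information.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# For N equiprobable alternatives, H reduces to log2(N) -
# the "logarithm of the number of other ways":
N = 8
uniform = [1 / N] * N
print(shannon_entropy(uniform))   # 3.0 bits, i.e. log2(8)

# A near-determined system (one outcome almost certain) has
# far fewer effective alternatives, hence less information:
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))
```

The point of the contrast is Pattee's: a strictly deterministic description has one "way" and so zero bits, while informational concepts presuppose a space of alternatives.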


Here is a more recent quote where Pattee makes a full-fledged connection to Peircean semiotics...

A description requires a symbol system or a language. Functionally, description and construction correspond to the biologists’ distinction between the genotype and phenotype. My biosemiotic view is that self-replication is also the origin of semiosis.

I have made the case over many years (e.g., Pattee, 1969, 1982, 2001, 2015) that self-replication provides the threshold level of complication where the clear existence of a self or a subject gives functional concepts such as symbol, interpreter, autonomous agent, memory, control, teleology, and intentionality empirically decidable meanings. The conceptual problem for physics is that none of these concepts enter into physical theories of inanimate nature.

Self-replication requires an epistemic cut between self and non-self, and between subject and object.

Self-replication requires a distinction between the self that is replicated and the non-self that is not replicated. The self is an individual subject that lives in an environment that is often called objective, but which is more accurately viewed biosemiotically as the subject’s Umwelt or world image.

This epistemic cut is also required by the semiotic distinction between the interpreter and what is interpreted, like a sign or a symbol. In physics this is the distinction between the result of a measurement – a symbol – and what is being measured – a material object.

I call this the symbol-matter problem, but this is just a narrower case of the classic 2500-year-old epistemic problem of what our world image actually tells us about what we call the real world.

http://www.informationphilosopher.com/solutions/scientists/pattee/



fdrake December 21, 2017 at 13:55 ¶ #135838
Reply to apokrisis

Been busy, there will be a reply. I've been working a little bit on mathematical details for this 'native generation of parameters' using entropy. I'm hoping you'll find it more interesting than my criticisms.