You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

Justin's Insight

TheMadFool March 17, 2020 at 06:21 10650 views 60 comments
Justin (a robot) made headlines when it gained a new ability: catching objects thrown at it with a roughly 80% success rate. The way the inventors made a big deal of Justin's catching ability is an indication of how difficult it is for a robot to copy what appear to us to be "simple" human motor skills.

I don't deny that a toddler will have a much lower success rate than Justin in catching thrown objects; this ability probably ranks higher in motor skill than merely grasping stationary objects. However, a human child will learn and then master this ability to the point that an 8-year-old will probably beat Justin in a game of throw and catch.

I'm not particularly concerned with the relative difference between humans and robots in the level of difficulty of this catching ability; what I'm interested in is the difference between humans and robots in terms of how (by what processes) the two catch balls.

When an object is thrown at me, and I hope I'm representative of the average human, I make an estimate of the trajectory of the object and its velocity and move my body and arm accordingly to catch that object. All this mental processing occurs without resorting to actual mathematical calculations of the relevant parameters that have a direct bearing on my success in catching thrown objects.

I tried looking up how Justin actually catches objects but it drew a blank on the www. However, I'm somewhat confident that Justin's catching ability involves actual mathematical calculations of a thrown object's trajectory and velocity and its own movements. The takeaway here being that without some actual number crunching Justin or, for that matter, any other robot won't have the ability to catch thrown objects.
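Since nothing turned up about Justin's actual method, here is only a guess at what such number crunching could look like: a minimal sketch (in Python, with all parameters invented) of explicitly solving the ballistic equations for where a thrown ball will cross catching height. This is an illustration of the idea, not Justin's software.

```python
import math

# Hypothetical sketch of explicit trajectory calculation -- NOT Justin's
# actual software, just an illustration of "actual number crunching".
# All parameters are invented.
g = 9.81  # gravitational acceleration, m/s^2

def predict_catch_point(x, y, vx, vy, catch_height=1.0):
    """Given the ball's measured position (x, y) and velocity (vx, vy),
    solve y + vy*t - (g/2)*t^2 = catch_height for the descending root."""
    a, b, c = -0.5 * g, vy, y - catch_height
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ball never reaches catch height
    t = (-b - math.sqrt(disc)) / (2 * a)  # later root: ball on the way down
    return x + vx * t, t  # horizontal intercept point and time remaining

point, t_left = predict_catch_point(x=0.0, y=2.0, vx=6.0, vy=4.0)
```

A robot working this way would then have to move its hand to `point` within `t_left` seconds; a real system would also fold in measurement noise, drag, and the arm's own dynamics.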

So,

1. It's impossible for Justin to possess an ability to catch thrown objects without actually performing some mathematical calculations.

2. Humans possess the ability to catch thrown objects and we, unlike Justin, routinely catch objects without even thinking of mathematics let alone doing any actual calculations.

This is problematic to say the least. Justin suggests that we actually do perform mathematical calculations but are just unaware of it; it happens subconsciously so to speak. I'm not claiming this is the case but it seems to fit the data quite well: by data I mean what people refer to as insight. Insight has been described as a sudden realization of key features of a problem and generally leads to a solution to that problem. Insight is thought of as distinct from deliberate conscious thinking on a problem or issue and occurs when we're actually engaged in some other activity not related to the problem we gain insight on. It seems then that thinking, logical thinking and even complex mathematical calculations are being performed subconsciously in our minds. Thus I offer here what I think is an insight on what insight could be.

The other possibility is that we don't need mathematics to catch a ball and roboticists are barking up the wrong tree. Roboticists need to rethink their approach to the subject in a fundamental way. This seems, prima facie, like telling a philosopher that logic is no good. Preposterous! However, to deny this possibility is to ignore a very basic fact - humans don't do mathematical calculations when we play throw and catch, at least not consciously.

Comments (60)

Nobeernolife March 17, 2020 at 06:49 #392802
Quoting TheMadFool
I tried looking up how Justin actually catches objects but it drew a blank on the www. However, I'm somewhat confident that Justin's catching ability involves actual mathematical calculations of a thrown object's trajectory and velocity and its own movements.


I would not be too sure about that. For example, the translation business moved away from algorithmic approaches to fuzzy learning. When Google Translate did that, the change, almost overnight, was mind-boggling. The service went from a joke to something that is almost flawless in large sections.
Streetlight March 17, 2020 at 06:53 #392805
From Andy Clark's Being There:

"Consider the act of running to catch a ball. This is a skill which cricketers and baseball players routinely exhibit. How is it done? Common experience suggests that we see the ball in motion, anticipate its continuing trajectory, and run so as to be in a position to intercept it. In a sense this is correct. But the experience (the "phenomenology") can be misleading if one believes that we actively compute such trajectories. Recent research suggests that a more computationally efficient strategy is to simply run so that the acceleration of the tangent of elevation of gaze from fielder to ball is kept at zero. Do this and you will intercept the ball before it hits the ground.

Videotaped sequences of real-world ball interception suggest that humans do indeed — unconsciously — use this strategy. Such a strategy avoids many computational costs by isolating the minimal and most easily detectable parameters that can support the specific action of interception"

In other words: catching the ball isn't a matter of 'computation' so much as keeping constant a certain relation of movement between ball and person: an attempt to keep something invariant in your visual field.
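As a toy check of the strategy Clark describes (a sketch only, with invented numbers): if the fielder moves so that the tangent of the gaze elevation angle grows linearly in time, i.e. its "acceleration" is kept at zero, the fielder ends up exactly at the landing point without ever solving a trajectory.

```python
# Toy simulation of the strategy in the Clark quote (invented numbers):
# the fielder moves so that tan(elevation angle of gaze to the ball)
# increases linearly in time -- zero "acceleration" of the tangent.
g = 9.81
vx, vy = 8.0, 14.0        # ball's launch velocity components, m/s
start = 30.0              # fielder's starting distance from the thrower

slope = vy / start        # slope of tan(theta); fixed by starting position
t_land = 2 * vy / g       # when the ball returns to ground level

t, xf = 0.01, start
while t < t_land:
    xb = vx * t                         # ball's horizontal position
    yb = vy * t - 0.5 * g * t * t       # ball's height
    xf = xb + yb / (slope * t)          # keep tan(theta) == slope * t
    t += 0.01

# xf ends up at the landing point vx * t_land, with no ballistics computed
```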
Wayfarer March 17, 2020 at 07:39 #392827
Quoting TheMadFool
Justin suggests that we actually do perform mathematical calculations but are just unaware of it; it happens subconsciously so to speak.


Not at all. It's the requirement of computers - they process binary code, and anything they're programmed to do must be coded. But it's a way of modelling reality, not reality itself. Russell said 'Physics is mathematical, not because we know so much about the physical world, but because we know so little: it is only its mathematical properties that we can discover. ' I would modify that to say that in order to model a process in a computer, it has to be quantifiable, but that organisms don't have to do that.

The impressive aspect of the 'Justin' experiment is that catching a ball is a 'fuzzy' problem - the ball can move in a variety of directions and a variety of velocities. That's what requires the smarts.
Metaphysician Undercover March 17, 2020 at 12:27 #392911
Quoting StreetlightX
Recent research suggests that a more computationally efficient strategy is to simply run so that the acceleration of the tangent of elevation of gaze from fielder to ball is kept at zero.


Right, if you're a sailor you'll know that avoiding the zero tangent is how to avoid a collision. And, if you're a storm chaser, and the tornado isn't moving to the right or the left, it's coming right at you. These are the situations when evasive maneuvers are called for.
fdrake March 17, 2020 at 14:47 #392972
Quoting TheMadFool
humans don't do mathematical calculations when we play throw and catch, at least not consciously.


Not doing one type of calculation consciously does not imply much about doing other calculations. Not being aware of doing one type of calculation implies even less. The mental model of calculations being done in the brain, then passing through consciousness, then the body reacting isn't particularly apt. It's more like perception data in our neural nets feeds forward to motor control functions and our internal models at the same time but in different ways. It feeds forward with most of the focus paid to immediate environmental differences, like changes in the visual field corresponding to the ball moving, and changes in the ball's trajectory relative to promoted motor functions.

There's no guarantee that even if a calculation of type X was somehow used by the body to interface the perceptual inputs with the motor outputs that it would feed forward in precisely the same way to our internal models, so we wouldn't necessarily become aware of its character even if it was happening as described.

There is also no guarantee that Justin uses the same interfacing strategy between its perceptual data and its motor functions that we do just because they have the same outputs; we'd be better off comparing Justin's behaviour in catching the ball to human behaviour catching the ball (say, with eye-tracking goggles and a body sensor) to see if they use strategies with similar outputs (behavioural incentives and catching strategies).

I see that @StreetlightX made much the same point with a quote.
bongo fury March 17, 2020 at 14:58 #392976
Quoting TheMadFool
Roboticists need to rethink their approach to the subject in a fundamental way.


And they did. The price of the neural network revolution was giving up (or at least severely compromising) the model of the brain (or computer) as a processor of stored symbols - internal words and pictures representing external objects. Ironically, it had to revert to Skinner's behaviourist model, a "black box". Training, without necessarily understanding the learning.

How Justin does it is as much up for speculation as how we do it.

https://www.cs.bham.ac.uk/~jxb/PUBS/AIRTNN.pdf
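A throwaway illustration of that last point, learning a ballistic regularity purely from examples rather than from a coded rule (a least-squares toy with invented numbers, standing in for the neural-network case; not how any real catching system works):

```python
import random

# Toy "training, not understanding": fit hang time from launch speed
# using nothing but example throws -- no physics is coded into the model.
# Numbers are invented for illustration.
g = 9.81
random.seed(1)
throws = [(vy, 2 * vy / g)              # (launch speed, observed hang time)
          for vy in (random.uniform(2, 20) for _ in range(200))]

# one-parameter model: hang_time ~ w * vy, fitted by least squares
w = sum(vy * t for vy, t in throws) / sum(vy * vy for vy, _ in throws)
# the fitted w recovers 2/g (~0.204) from the data alone, no equation given
```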
TheMadFool March 17, 2020 at 22:25 #393211
Reply to Nobeernolife Noted!

Quoting StreetlightX
Recent research suggests that a more computationally efficient strategy is to simply run so that the acceleration of the tangent of elevation of gaze from fielder to ball is kept at zero. Do this and you will intercept the ball before it hits the ground.


Sounds very mathematical.

Imagine Justin and me in a room, with objects thrown at us, and we manage to catch them with equal success rates. While we may differ in appearance, we're practically identical in terms of this ability, catching thrown objects. By analogy, hopefully not a poor one, the processes that confer the catching ability should also be similar if not identical. I wouldn't go so far as to say this must be so but it does seem a reasonable inference to make.

Also, catching is an ability that evinces precision: setting aside acceptable error margins due to the size of the thrown object and the size of our hands, the whole activity requires some exactitude in the velocity of the arm and the amount of force I exert on the thrown object; move even slightly slower/faster, or exert an inappropriate amount of force, and the catch can't be accomplished. I guess what I'm getting at is that there's a level of precision in catching that points to some form of computational process going on in our heads somewhere, but concealed from our consciousness.

Quoting Wayfarer
Russell said 'Physics is mathematical, not because we know so much about the physical world, but because we know so little


You always find the right quote for the right occasion. Thanks. I understand that the reason why math is so highly regarded is because it's the only game in town and there may be more to physics than just numbers. What intrigues me is that human abilities, albeit only the physical, can be reasonably mimicked by mathematical computational models.

Strip it down to the basics for clarity: it's obvious that a thrown object's position at the moment the catch is made is completely determined by its exact velocity (speed and direction). If I or a robot must intercept the ball, we must move our arm at the correct velocity and apply the correct amount of force to the object thrown at us. The point here is that there's a level of precision in the act of catching that, to me, eliminates the possibility that it's done using rough estimates like I was suggesting, because the error margin involved is very small; even a small deviation from the correct velocity of our arm or the force we apply will result in a failure to make a catch. Basically, catching thrown objects isn't fuzzy. Ergo, it seems probable that our brains actually do the math when catching thrown objects and by extension, when doing other physical activities.

An interesting thought on this is that, in line with Russell whom you quoted, physics may have a non-mathematical dimension to it and our brains tap into it when we do something physical without the need to deal with actual numerical calculations. This option is almost too tempting to resist, for me at least, but all it takes for it to be falsified, if only for the issue at hand, is to examine what actually goes on in our minds when we're in the act of catching objects thrown at us: from personal experience I'm under the impression that we take note of the speed and direction (velocity) of the object thrown at us and move accordingly. So, yes, there may be, as Russell claimed, a non-mathematical side to physics but it doesn't seem to figure in the act of catching thrown objects.

Quoting fdrake
I see that StreetlightX made much the same point with a quote.


Kindly read my reply to StreetlightX then.

Reply to bongo fury :ok: :up:
Metaphysician Undercover March 18, 2020 at 01:48 #393274
Quoting TheMadFool
By analogy, hopefully not a poor one, the processes that confer the catching ability should also be similar if not identical.


Why would you ever conclude that? If a group of people show up at work in the morning do you conclude that they took similar, if not identical modes of transportation, just because they demonstrate that they have the capacity to get to work?
Streetlight March 18, 2020 at 02:38 #393280
Quoting TheMadFool
Sounds very mathematical


Sorry, I should have realized I needed to translate my quote into dumb. Thing move, I move, must make it so that one thing move in certain way in relation to other thing; if move good, catch ball.
VagabondSpectre March 18, 2020 at 02:46 #393282
I'm working on a fairly advanced AI project. It has quite a lot to say about the notions in this thread...

@StreetlightX

You're right to say that these kinds of problems are not necessarily solved via computation. The very structure of our bodies and the environment (directional gaze/perspective + 3D motion) make it so that we can just line up a few "values" (the horizontal displacement of the ball with respect to one's own change in velocity in this example) to intercept the ball...

However...

Much more trivial than learning how to catch a ball is learning to control one's limbs in the first place. Running involves coordination and sequence timing of hundreds of individual muscles, and running smoothly requires high-speed feedback reflexes and modulatory signals both from the external environment (like when your foot touches the substrate) and from different parts of the CNS.

Running is actually a much more complicated problem than just catching a ball (catching is a brief episode with very specific goals and requirements) because running requires very fine coordination of the entire body. Staying balanced, applying the right force ratios, timing our contractions, and responding to perturbations is the complexity that prevents us from creating good humanoid robots. Yes, Boston Dynamics spent ten years making a hard-coded algorithm that can run on a very specific course that it has been optimized for, but it cannot *run in general* (meaning if you change the course slightly or introduce anomalies, it will fail).

A common way to describe this problem is to call it "the curse of dimensionality". Imagine that our muscles are controlled by a series of dials that determine contraction strength. There are about 640 knobs that each control individual muscles in the human body. Almost any random combination of settings on these knobs will result in seizure-like behavior. The person would fall over and begin writhing randomly (and probably injure themselves in the process). Because there are so many individual values, and because changing one of these dials can drastically change what happens to the body (example: walking would be a specific sequence of dial settings, but suddenly activating a random muscle while walking might cause the body/agent to fall over or fail), figuring out how to correctly manipulate these dials for good motion patterns is extraordinarily difficult. (The problem space is too large to search with regular machine learning algos, and it's also too complex to have the motion rules hard-coded by hand.)
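The scale of that problem space is easy to demonstrate with a toy sketch (tolerances invented): if a "good" pattern needs every dial within a tolerance band, the odds of blind random search finding one collapse exponentially with the number of dials.

```python
import random

# Toy "curse of dimensionality": say a motion pattern works only if every
# dial lands within +/-0.1 of a target value of 0.5 (tolerances invented).
# Blind random search then succeeds with probability 0.2 ** n_dials.
random.seed(0)

def hit_rate(n_dials, trials=100_000, tol=0.1):
    hits = sum(
        all(abs(random.random() - 0.5) <= tol for _ in range(n_dials))
        for _ in range(trials)
    )
    return hits / trials

rate_1, rate_5 = hit_rate(1), hit_rate(5)
# one dial: ~20% of random settings work; five dials: ~0.03%;
# at 640 dials the success probability is about 0.2**640 -- hopeless
```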

This brings me to the spine and "central pattern generators". These are neural circuits that hard-code primitive and basic motion patterns (from in utero), and this essentially biases out a large portion of the problem space referred to earlier. Example: most walking patterns are very similar, even across different species of animals. The fundamental base of the pattern can be more or less hard-coded in a primitive way with these pattern generators, and this gives the new-born agent an obvious starting point when it is learning to walk. On top of this, these central pattern generators can be hooked up with autonomic reflexes that essentially solve part of the walking problem automatically (example: they can apply greater force when they receive a signal that one of the legs or feet is stuck, which can be based on rudimentary proprioception signals).

Chickens can run with their heads cut off (because the pattern generators in their spine are already wired and optimized for running) and baby deer can stand up almost instantly for the same reason. This is why when we ourselves want to walk or run, we don't have to think about each and every muscle, or each and every foot fall; we just think a high level thought "run", and our lower brain and spine basically sorts the rest out for us.

In other words, the non-intelligent body (the structure of the body, the way the reflexes are wired up, and the default spinal systems that simplify control for the CNS) is actually a fundamental part of higher intelligence. Relating this back to the ball-catching problem, the only reason that we're physically dexterous enough to even contemplate catching a ball is because the unconscious parts of our brain and body do most of the work for us.

Another ramification of this realization is that it's unreasonable to expect current neural network architecture to be able to express itself with broad/general intelligence when it is given a complex set of outputs to control.

How can an agent learn to speak if controlling the voice box is too complicated? How can it paint or play soccer if the problem of ambulating an arm and a leg is beyond the ability of our best networks?

How can an agent learn to communicate using language if it cannot learn and relate language-encoded concepts using the myriad of other senses that may be required for experiencing the concept or thing being described? For example, how will a language transformer ever understand what a cup actually is if it has no experience of 3d space? If it has never explored or touched a cup? Can it truly understand cups if it is too far away from a thing that could actually use one?

We see all kinds of wonderfully good results coming out of machine learning, but they're all stupendously narrow functions. We can train an agent to identify animals within images, or we can train it to balance a ball, but not both. We cannot yet integrate these different intelligences into a single AI with any degree of elegance whatsoever. We're still behind in this respect because we have so far failed to comprehend how animal intelligence is built from the bottom up in the first place. Philosophically inclined ML devs (especially) only seem to concern themselves with high-level cognition where concepts and ideas are already there to be played with, having no sweet clue where they really come from beyond "it's the magic of end-to-end learning", and hence they get nowhere but to produce lofty theories about how it all might work (they're in for decades of speculation).
Wayfarer March 18, 2020 at 04:41 #393291
Reply to VagabondSpectre Excellent post. But I've seen some of those spooky Boston Dynamics robot videos, and they're pretty darn good at freestyle running!

Quoting TheMadFool
Ergo, it seems probable that our brains actually do the math when catching thrown objects and by extension, when doing other physical activities.


Nope. I'm sure maths has nothing to do with how organisms perform such actions. They can be modelled mathematically, but (for instance) a chameleon capturing an insect with its flexible tongue or an owl catching a field mouse through its acute sensory abilities requires great accuracy but absolutely no mathematical ability.

History - recall the breakthrough in science that arose from the combination of Rene Descartes' algebraic geometry (described here) combined with Newton's laws of motion. These were two of the foundations of what was then called 'the new science' - which indeed it was. Science since then has built on those foundations by using that methodology to model all kinds of processes - anything that moves through space can theoretically be modelled by such a methodology (which is why it was key to modern science generally). That's where the mathematical modelling of catching a ball comes in - it requires very sophisticated mathematics to allow for all the millions of possible variations. But when you and I catch a ball, we don't rely on mathematics at all. What mathematics does, is allow us to mathematically model such processes.
TheMadFool March 18, 2020 at 04:53 #393293
Quoting Metaphysician Undercover
Why would you ever conclude that? If a group of people show up at work in the morning do you conclude that they took similar, if not identical modes of transportation, just because they demonstrate that they have the capacity to get to work?


Your analogy is too weak because there are relevant dissimilarities, like differences in the distances of the employees' homes from work, which imply they would at least have to make their trips using different forms of transportation or at different times.

In my robot-human analogy, not only is the ability to catch near-identical when observed, but the methods employed seem to be similar in terms of needing quantification (math). This isn't hard to prove; just observe yourself catching something in the air. If a ball is thrown at you, you'll notice yourself adjusting your arm's and body's motions to intercept the ball as per its speed and direction (velocity). The conscious determinants of these bodily adjustments are very vague i.e. what you'll be aware of may be just very vague descriptions of the ball's movement such as very fast, fast, medium, slow, very slow, etc. As is obvious these are instances of quantification (math), albeit in very vague terms, or so it seems. My contention is that there's a level of precision in catching thrown objects that requires actual mathematical computation; after all, if you move your arm even a little faster/slower, or the direction of movement is off by more than a few degrees, you'll fail to make the catch. Ergo, the vagueness in the necessary quantification/math involved in catching ability is not the real truth of the matter; the precision required entails actual math being done.
TheMadFool March 18, 2020 at 05:10 #393300
Reply to Wayfarer So, there's something non-mathematical in human and animal physics? If that's true then how come mathematical physics is applicable to kinesiology and biomechanics; after all human limbs are essentially mechanical levers and the amount of force muscles can exert can be quantified. I don't see how the body's physical abilities are quantifiable mathematically and yet the brain controls it non-mathematically. At the very least the applicability of physics, a mathematical enterprise, to our bodies indicates that somewhere along the chain from intending a movement to the actual movement itself there is some math involved.
VagabondSpectre March 18, 2020 at 05:13 #393302
Quoting Wayfarer
Excellent post. But I've seen some of those spooky Boston Dynamics robot videos, and they're pretty darn good at freestyle running!


Thanks!

Boston Dynamics has some quadruped robots that aren't too bad, but it's a much more stable and simplified body, and they're still quite fail-prone.

The humanoid robot simply cannot do freestyle running. All we ever get to see are the very best results on specific setups that they try over and over again. And to boot, they won't even talk about their underlying methods (because it's an embarrassing hard-coded rules system that is over-complicated and hard to generalize, and they don't want a decade-plus of mind-numbing effort to be stolen by the competition).
Wayfarer March 18, 2020 at 05:51 #393312
Quoting TheMadFool
I don't see how the body's physical abilities are quantifiable mathematically and yet the brain controls it non-mathematically


I don't think it's that hard to see. Remember the mathematization process is a method - that's why I mentioned its history. I mean, before Descartes came along nobody had ever thought of modelling three dimensional space as geometrical co-ordinates. (This is one of the discoveries that qualifies Descartes for the title of genius.)

Almost anything physical is quantifiable using that method insofar as it has mass, velocity, and other attributes that can be quantified. That methodology was very much the consequence of the discoveries of Newton, Galileo and Descartes, among a few others - crucial to modern science. Nobody from before their time thought about things that way. And that methodology is universal in scope - you can use it to model almost anything from the atomic to galactic scales (with the caveat that the discovery of relativity and quantum theory have shown that Newtonian physics is only universal within certain scales.)

So, that methodology is what is used in robotics, artificial intelligence, and so on - it all relies on the computation of quantifiable attributes. It's not that there's something intrinsically mathematical about what's being modeled (in this case, although there might be in other subjects). It's simply that mathematical modelling is what is behind all such technologies.

The whole question of 'what is maths' and 'what is number' is also a really interesting one, but it's not actually connected with the question of how 'Justin' does its stuff.
TheMadFool March 18, 2020 at 06:22 #393318
Quoting Wayfarer
I don't think it's that hard to see. Remember the mathematization process is a method - that's why I mentioned its history. I mean, before Descartes came along nobody had ever thought of modelling three dimensional space as geometrical co-ordinates. (This is one of the discoveries that qualifies Descartes for the title of genius.)

Almost anything physical is quantifiable using that method insofar as it has mass, velocity, and other attributes that can be quantified. That methodology was very much the consequence of the discoveries of Newton, Galileo and Descartes, among a few others - crucial to modern science. Nobody from before their time thought about things that way. And that methodology is universal in scope - you can use it to model almost anything from the atomic to galactic scales (with the caveat that the discovery of relativity and quantum theory have shown that Newtonian physics is only universal within certain scales.)

So, that methodology is what is used in robotics, artificial intelligence, and so on - it all relies on the computation of quantifiable attributes. It's not that there's something intrinsically mathematical about what's being modeled (in this case, although there might be in other subjects). It's simply that mathematical modelling is what is behind all such technologies.

The whole question of 'what is maths' and 'what is number' is also a really interesting one, but it's not actually connected with the question of how 'Justin' does its stuff.


Well, to be fair, there is no reason why there shouldn't be non-mathematical determinants of motion. However, given that we can model motion based on only math it seems either these non-numerical determinants of motion are superfluous or operate in parallel to the mathematical ones. Do you have any idea what such a system of non-numerical determinants of motion that makes predictions possible would look like?
Wayfarer March 18, 2020 at 07:43 #393331
Quoting TheMadFool
Do you have any idea what such a system of non-numerical determinants of motion that makes predictions possible would look like?


I'm the first to agree that there are many things that can't be quantified, but I can't see how this is one of them.
TheMadFool March 18, 2020 at 08:16 #393333
To all (if interested)

I think most of the replies in this thread referred to learning and I wish to build up on that to make the case that our brains actually do math with our bodies.

As we all know Justin can't learn i.e. his success rate with catching is never going to improve unless he gets a software upgrade. The nature of this upgrade will be faster computation ability in tandem with more accurate measurements of relevant parameters (mass, velocity, etc).

Humans and to a limited extent other lifeforms are, unlike robots, well-known for their learning ability. I've heard people say "practice makes perfect" and this maxim is nowhere more appropriate than physical activities, specifically sports. Practice is the key to becoming a great sportsman and although people speak of talent, it goes without saying that talent is somewhat secondary to practice. How does practice lead to an improvement of a physical ability?

Let's look at basketball, because I have a personal story to tell that's relevant to the topic. In my opinion, what practice does is help improve our estimates of the relevant parameters. Balls are standard, so there'll be minimal deviation in the relevant quantity, mass. So, the more we practice, the closer our measurement of the ball's mass gets to its actual mass, and that allows us to exert the right amount of force to score a basket. In other words, practice is nothing more than getting an accurate measurement of a key parameter in a sport.

I know we don't actually get a reading of the ball's mass like a weighing scale; it's more of a feel. Nevertheless, getting familiar with the game is another way of saying measuring the key parameters accurately.

Now my story: I remember playing basketball and although I'm not really good at it, I recall practising enough to see an improvement in my performance. Initially the ball wouldn't do the thing I wanted it to do but slowly I began to get accustomed, as it were, to the ball and it would then with less effort do as I intended. One day I decided to use a volleyball to play basketball and to my dismay I went back to being a complete novice. Now, the only noticeable difference here was the mass of the ball, volleyballs weighing less than basketballs, and mass is a determinant for how much force needs to be applied to the ball to make it follow a certain trajectory.

Ergo, given what practice is and how my story reveals that changing the key physical quantity involved in a sport can turn you from a pro to a beginner, it must be that our brains do math; after all, all that changed in my story was a mathematical quantity, the mass of the ball and it resulted in a difference in performance. It isn't much of a problem to stretch this conclusion to all physical activities.

Note to @Wayfarer: changes in a number (physical quantity) can cause a difference in performance. Doesn't that indicate math is involved? If math wasn't a part of it then tweaking with the numbers shouldn't have an effect on our physical performance.
Wayfarer March 18, 2020 at 08:44 #393336
Reply to TheMadFool Textbook example of putting the cart before the horse.
TheMadFool March 18, 2020 at 09:23 #393342
Quoting Wayfarer
Textbook example of putting the cart before the horse.


How? Which is the cart and which is the horse?
TheMadFool March 18, 2020 at 09:30 #393344
Reply to Wayfarer The reasoning is quite simple actually.

1. Either catching involves math or catching doesn't involve math

2. If catching doesn't involve math then changing mathematical parameters (like mass) shouldn't affect catching ability

3. Changing mathematical parameters (like mass) does affect catching ability

So,

4. It's false that catching doesn't involve math (2, 3 modus tollens)

Ergo

5. Catching involves math (1, 4 disjunctive syllogism)

6. No perceptible difference exists between catching and other physical activities i.e. they appear to be similar in process

Ergo

7. All physical abilities require the brain to do math
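For what it's worth, the propositional skeleton of steps 1-5 can be checked mechanically. A minimal sketch in Python (the letters M and A are my labels, not anything from the thread): it brute-forces the truth table and confirms the form is valid; whether premise 2 is actually true is, of course, the point under dispute.

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

def valid():
    # M: "catching involves math"; A: "changing a parameter (mass) affects catching"
    for M, A in product([True, False], repeat=2):
        premises = (
            M or not M,             # 1. excluded middle
            implies(not M, not A),  # 2. if no math, no effect
            A,                      # 3. there is an effect
        )
        if all(premises) and not M:  # a row where premises hold but conclusion fails?
            return False
    return True

print(valid())  # True: modus tollens + disjunctive syllogism is formally valid
```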
Wayfarer March 18, 2020 at 11:30 #393370
Quoting TheMadFool
1. Either catching involves math or catching doesn't involve math

2. If catching doesn't involve math then changing mathematical parameters (like mass) shouldn't affect catching ability


But you're not comparing like with like. When a human catches, then the action consists of muscular reflexes, hand-eye co-ordination, and on a micro-cellular level, the exchanges of ions across membranes, and so forth and so on.

Those actions can be modelled by machines, but that modelling relies on maths.

When a machine performs an action, then you have motors which position instruments controlled by binary code. This is reliant on mathematics, in a way which is completely different to the way in which organic performance is.

If you can't see that, I give up. (But then, this should have been obvious from the way the thread was named, as nothing about what 'Justin' can do, connotes 'insight', and indeed the word is not to be found in the linked Wikipedia page.)
TheMadFool March 18, 2020 at 11:55 #393381
Quoting Wayfarer
But you're not comparing like with like. When a human catches, then the action consists of muscular reflexes, hand-eye co-ordination, and on a micro-cellular level, the exchanges of ions across membranes, and so forth and so on.

Those actions can be modelled by machines, but that modelling relies on maths.

When a machine performs an action, then you have motors which position instruments controlled by binary code. This is reliant on mathematics, in a way which is completely different to the way in which organic performance is.

If you can't see that, I give up. (But then, this should have been obvious from the way the thread was named, as nothing about what 'Justin' can do, connotes 'insight', and indeed the word is not to be found in the linked Wikipedia page.)


I'm offering a very simple argument here. Compare physical abilities to the ability to discern color. If, as you say, the physics of human motion is non-mathematical, then this is comparable to saying we don't discern colors.

What would be a simple test to prove or disprove the two theories above: yours, that our brains don't do math, and the other, that we don't discern colors?

A simple test would be to show us a variety of colors and check whether different colors produce different responses; by analogy, if we wish to check whether math is an integral part of our physical abilities, we should introduce variations in the mathematical parameters we know are involved. If there is a noticeable difference (and there is) in our response to different colors and in our physical performance, then it's quite clear both that we can discern color and that our brain does math, right?
Metaphysician Undercover March 18, 2020 at 12:04 #393383
Quoting TheMadFool
In my robot-human analogy, not only is the ability to catch near-identical when observed but the methods employed seem to be similar in terms of needing quantification (math).


Your analogy is worse than mine. A human being doesn't use math to catch a ball. That's a false premise. You admit this yourself when you say a human being's adjustments are "vague". Math is not vague.

Quoting TheMadFool
So, there's something non-mathematical in human and animal physics? If that's true then how come mathematical physics is applicable to kinesiology and biomechanics; after all human limbs are essentially mechanical levers and the amount of force muscles can exert can be quantified. I don't see how the body's physical abilities are quantifiable mathematically and yet the brain controls it non-mathematically. At the very least the applicability of physics, a mathematical enterprise, to our bodies indicates that somewhere along the chain from intending a movement to the actual movement itself there is some math involved.


The fact that mathematics cannot adequately predict human motions, because it cannot predict changes due to free will decisions, ought to indicate the falsity of that premise to you. Human motions cannot be modeled with mathematics.

Quoting TheMadFool
Well, to be fair, there is no reason why there shouldn't be non-mathematical determinants of motion. However, given that we can model motion based on only math it seems either these non-numerical determinants of motion are superfluous or operate in parallel to the mathematical ones. Do you have any idea what such a system of non-numerical determinants of motion that makes predictions possible would look like?


Perhaps we model motion only with math, but math is inadequate for modeling human motions. So you ask what is the nature of such "non-numerical determinants of motion", and the answer is conscious judgements. A mathematical judgement is one very specialized type of conscious judgement. However, the vast majority of conscious judgements, including those which induce motion are not mathematical judgements.

To disprove your theory, just watch a dog jump for a stick, or a treat, then try to get the dog to apply some mathematics. I'm pretty sure that the dog catches without applying mathematics.
Streetlight March 18, 2020 at 12:34 #393388
Quoting TheMadFool
Ergo, given what practice is and how my story reveals that changing the key physical quantity involved in a sport can turn you from a pro to a beginner, it must be that our brains do math


This doesn't follow at all. Just more idiotic reasoning, as with all of your posts.
TheMadFool March 18, 2020 at 12:37 #393390
Reply to Metaphysician Undercover Well, by what means could I determine whether the brain does math or not when involved in a physical activity? How do we know that our brain has the ability to recognize different colors?

Change a mathematical quantity and observe differences in the physical activity that involves that mathematical quantity. Change colors and look for changes in response.

If there's a noticeable difference in either the physical activity or the response to different colors then can I not conclude that our brain does math and that we can discern different colors?

In fact color is an excellent example of our brain doing math because we can discern colors and color is completely determined by a mathematical quantity viz. frequency of EM waves.
TheMadFool March 18, 2020 at 12:39 #393391
Quoting StreetlightX
This doesn't follow at all. Just more idiotic reasoning, as with all of your posts.


Noted. Will work on that. :up:
Harry Hindu March 18, 2020 at 13:29 #393400
Quoting TheMadFool
1. It's impossible for Justin to possess an ability to catch thrown objects without actually performing some mathematical calculations.

2. Humans possess the ability to catch thrown objects and we, unlike Justin, routinely catch objects without even thinking of mathematics let alone doing any actual calculations.


You seem to be conflating "performing" with "thinking". Does Justin "think", or "perform"? Is there a difference between "performing" mathematical calculations as opposed to "thinking" of mathematical calculations?

Quoting Wayfarer
Not at all. It's the requirement of computers - they process binary code, and anything they're programmed to do must be coded. But it's a way of modelling reality, not reality itself.

What about brains? Are brains programmed? The model is just as much a part of reality as what is being modeled. The model has a causal relationship with what is being modeled and has causal power itself (it changes your behavior based on the model and what is being modeled).

Quoting StreetlightX
Thing move, I move, must make it so that one thing move in certain way in relation to other thing; if move good, catch ball.
Looks like you are being run by an IF-THEN program. A high-level language is a representation of the machine language that computers understand. So too is your mind a representation of what is going on at the neurological level. You're not aware of the mathematical calculations your neurons are performing. Your mind's mental imagery is a representation of what is going on at the neurological level, just as you aren't aware of what is going on inside a computer by looking at the screen; the screen is a representation of what is happening inside the computer.

Here is an excerpt from Steven Pinker's, How the Mind Works, that might shed some light here:
Steven Pinker: Mathematics is part of our birthright. One-week-old babies perk up when a scene changes from two to three items or vice versa. Infants in their first ten months notice how many items (up to four) are in a display, and it doesn't matter whether the items are homogeneous or heterogeneous, bunched together or spread out, dots or household objects, even whether they are objects or sounds. According to recent experiments by the psychologist Karen Wynn, five-month-old infants even do simple arithmetic. They are shown Mickey Mouse, a screen covers him up, and a second Mickey is placed behind it. The babies expect to see two Mickeys when the screen falls and are surprised if it reveals only one. Other babies are shown two Mickeys and one is removed from behind the screen. These babies expect to see one Mickey and are surprised to find two. By eighteen months children know that numbers not only differ but fall into an order; for example, the children can be taught to choose the picture with fewer dots. Some of these abilities are found in, or can be taught to, some kinds of animals.

Can infants and animals really count? The question may sound absurd because these creatures have no words. But registering quantities does not depend on language. Imagine opening a faucet for one second every time you hear a drumbeat. The amount of water in the glass would represent the number of beats. The brain might have a similar mechanism, which would accumulate not water but neural pulses or the number of active neurons. Infants and many animals appear to be equipped with this simple kind of counter. It would have many potential selective advantages, which depend on the animal's niche. They range from estimating the rate of return of foraging in different patches to solving problems such as "Three bears went into the cave; two came out. Should I go in?"

Human adults use several mental representations of quantity. One is analogue—a sense of "how much"—which can be translated into mental images such as an image of a number line. But we also assign number words to quantities and use the words and the concepts to measure, to count more accurately, and to count, add, and subtract larger numbers. All cultures have words for numbers, though sometimes only "one," "two," and "many." Before you snicker, remember that the concept of number has nothing to do with the size of a number vocabulary. Whether or not people know words for big numbers (like "four" or "quintillion"), they can know that if two sets are the same, and you add 1 to one of them, that set is now larger. That is true whether the sets have four items or a quintillion items. People know that they can compare the size of two sets by pairing off their members and checking for leftovers; even mathematicians are forced to that technique when they make strange claims about the relative sizes of infinite sets. Cultures without words for big numbers often use tricks like holding up fingers, pointing to parts of the body in sequence, or grabbing or lining up objects in twos and threes.

Children as young as two enjoy counting, lining up sets, and other activities guided by a sense of number. Preschoolers count small sets, even when they have to mix kinds of objects, or have to mix objects, actions, and sounds. Before they really get the hang of counting and measuring, they appreciate much of its logic. For example, they will try to distribute a hot dog equitably by cutting it up and giving everyone two pieces (though the pieces may be of different sizes), and they yell at a counting puppet who misses an item or counts it twice, though their own counting is riddled with the same kinds of errors.
Formal mathematics is an extension of our mathematical intuitions. Arithmetic obviously grew out of our sense of number, and geometry out of our sense of shape and space. The eminent mathematician Saunders Mac Lane speculated that basic human activities were the inspiration for every branch of mathematics:
Counting → arithmetic and number theory
Measuring → real numbers, calculus, analysis
Shaping → geometry, topology
Forming (as in architecture) → symmetry, group theory
Estimating → probability, measure theory, statistics
Moving → mechanics, calculus, dynamics
Calculating → algebra, numerical analysis
Proving → logic
Puzzling → combinatorics, number theory
Grouping → set theory, combinatorics

Mac Lane suggests that "mathematics starts from a variety of human activities, disentangles from them a number of notions which are generic and not arbitrary, then formalizes these notions and their manifold interrelations." The power of mathematics is that the formal rule systems can then "codify deeper and non-obvious properties of the various originating human activities." Everyone—even a blind toddler—instinctively knows that the path from A straight ahead to B and then right to C is longer than the shortcut from A to C. Everyone also visualizes how a line can define the edge of a square and how shapes can be abutted to form bigger shapes. But it takes a mathematician to show that the square on the hypotenuse is equal to the sum of the squares on the other two sides, so one can calculate the savings of the shortcut without traversing it.

Consider this request: Visualize a lemon and a banana next to each other, but don't imagine the lemon either to the right or to the left, just next to the banana. You will protest that the request is impossible; if the lemon and banana are next to each other in an image, one or the other has to be on the left. The contrast between a proposition and an array is stark. Propositions can represent cats without grins, grins without cats, or any other disembodied abstraction: squares of no particular size, symmetry with no particular shape, attachment with no particular place, and so on. That is the beauty of a proposition: it is an austere statement of some abstract fact, uncluttered with irrelevant details. Spatial arrays, because they consist only of filled and unfilled patches, commit one to a concrete arrangement of matter in space. And so do mental images: forming an image of "symmetry," without imagining a something or other that is symmetrical, can't be done.

The concreteness of mental images allows them to be co-opted as a handy analogue computer. Amy is richer than Abigail; Alicia is not as rich as Abigail; who's the richest? Many people solve these syllogisms by lining up the characters in a mental image from least rich to richest. Why should this work? The medium underlying imagery comes with cells dedicated to each location, fixed in a two-dimensional arrangement. That supplies many truths of geometry for free. For example, left-to-right arrangement in space is transitive: if A is to the left of B, and B is to the left of C, then A is to the left of C. Any lookup mechanism that finds the locations of shapes in the array will automatically respect transitivity; the architecture of the medium leaves it no choice.

Suppose the reasoning centers of the brain can get their hands on the mechanisms that plop shapes into the array and that read their locations out of it. Those reasoning demons can exploit the geometry of the array as a surrogate for keeping certain logical constraints in mind. Wealth, like location on a line, is transitive: if A is richer than B, and B is richer than C, then A is richer than C. By using location in an image to symbolize wealth, the thinker takes advantage of the transitivity of location built into the array, and does not have to enter it into a chain of deductive steps. The problem becomes a matter of plop down and look up. It is a fine example of how the form of a mental representation determines what is easy or hard to think.
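The "faucet" counter Pinker describes earlier in the excerpt is simple enough to sketch. Assuming Python, with a function name and noise parameter of my own invention, purely illustrative: a fixed pulse is accumulated per event, so the final level tracks the count without any symbolic numerals.

```python
import random

def accumulate(events, pulse=1.0, noise=0.0):
    """Pinker's faucet: open for one 'pulse' per event; the level encodes the count."""
    level = 0.0
    for _ in events:
        level += pulse + random.gauss(0.0, noise)
    return level

drumbeats = ["beat"] * 7
print(accumulate(drumbeats))              # exactly 7.0 with a noiseless accumulator
print(accumulate(drumbeats, noise=0.15))  # roughly 7: a fuzzier, animal-like count
```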


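Pinker's "plop down and look up" trick can also be mimicked in code. A minimal sketch, assuming Python and a made-up helper name: relative wealth facts are relaxed onto a one-dimensional "image", and transitivity then comes free from the geometry of the line, exactly as the excerpt describes.

```python
def mental_line(facts):
    """Place each name on a left-to-right line (poorest -> richest).
    facts: (richer, poorer) pairs, assumed to describe a consistent chain."""
    pos = {name: 0.0 for pair in facts for name in pair}
    for _ in range(len(pos)):            # relax until positions settle
        for richer, poorer in facts:
            if pos[richer] <= pos[poorer]:
                pos[richer] = pos[poorer] + 1.0
    return max(pos, key=pos.get)         # rightmost on the line = richest

# Amy is richer than Abigail; Alicia is not as rich as Abigail; who's the richest?
print(mental_line([("Amy", "Abigail"), ("Abigail", "Alicia")]))  # Amy
```

The lookup never chains deductions; it just reads position off the line, which is the point of the analogue-computer passage.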
TheMadFool March 18, 2020 at 16:08 #393427
Quoting Harry Hindu
You seem to be conflating "performing" with "thinking". Does Justin "think", or "perform"? Is there a difference between "performing" mathematical calculations as opposed to "thinking" of mathematical calculations?


Well, if you must make a distinction between performance and thinking, it must mean that the former doesn't involve the latter, and also the converse. That makes sense, for the reason that normal physical activity tends to occur at a level of neural activity that doesn't register in consciousness, which, my guess is, is what you mean by thinking. However, as far as my argument is concerned, "thinking" means everything that occurs in the brain, whether our consciousness is aware of it or not.
Harry Hindu March 18, 2020 at 17:37 #393464
Quoting TheMadFool
However, as far as my argument is concerned, "thinking" means everything that occurs in the brain, whether our consciousness is aware of it or not.


Then the mind "thinking" how to catch a ball is the same as the brain "performing" mathematical calculations?
TheMadFool March 19, 2020 at 06:21 #393622
Quoting Harry Hindu
Then the mind "thinking" how to catch a ball is the same as the brain "performing" mathematical calculations?


To begin with, I want to avoid discussions of mind if it's meant as some form of immaterial object different from the brain. I will use "mind" here to refer only to the brain and what it does.

Nevertheless, the complexity of the brain necessitates a distinction - that between higher consciousness and lower consciousness/the subconscious. The former refers to that part of the mind that can make something an object of thought. What do I mean by that? Simply that higher consciousness can make something an object of analysis or rational study or even just entertain a simple thought on it. Our higher consciousness is active in this discussion between us, for example.

By lower consciousness/the subconsciousness I mean that part of the mind that doesn't or possibly can't make things an object of rational inquiry or analysis. While sometimes this part of the mind can be accessed by the higher consciousness, the subconscious is for the most part hidden from view. As an example, our typing the text of this discussion is the work of the lower consciousness/the subconscious. The higher consciousness doesn't decide which muscles to contract and which to relax and calculate the force and direction of my fingers in typing our text. Rather the lower consciousness/the subconscious carries out this activity.

So, what you mean by performance is being carried out by the lower consciousness/the subconscious, and this friendly exchange of ideas between us is the work of our higher consciousness. The difference of opinion we have is that, for me, both higher and lower consciousness count as thinking, whereas for you thinking seems to apply only to higher consciousness.
Cabbage Farmer March 19, 2020 at 11:33 #393648
Quoting TheMadFool
When an object is thrown at me, and I hope I'm representative of the average human, I make an estimate of the trajectory of the object and its velocity and move my body and arm accordingly to catch that object. All this mental processing occurs without resorting to actual mathematical calculations of the relevant parameters that have a direct bearing on my success in catching thrown objects.

I would not say I ordinarily estimate the trajectory and velocity. I look and catch, look and throw.

Something in me does estimate the trajectory and velocity when I look and catch and throw. I expect you are right to suggest that this process is largely subconscious and in a sense involuntary or automatic, once I set myself to a game of catch. But the process manifests in my awareness as a feature of the cooperation of my perception, motion, expectation, and intention.

I expect you are right to suggest that this process of anticipation in us does not ordinarily involve mathematical calculation.

Quoting TheMadFool
The other possibility is that we don't need mathematics to catch a ball and roboticists are barking up the wrong tree. Roboticists need to rethink their approach to the subject in a fundamental way. This seems, prima facie, like telling a philosopher that logic is no good. Preposterous! However, to deny this possibility is to ignore a very basic fact - humans don't do mathematical calculations when we play throw and catch, at least not consciously.

Isn't it the job of the robot designers to design robots that perform certain actions, like drilling or catching? What does it matter whether the processes involved are the same as the processes in us? How could they be exactly the same sort of processes?

It's not clear to me what claim are you objecting to.
Harry Hindu March 19, 2020 at 12:18 #393652
Quoting TheMadFool
So, what you mean by performance is being carried out by the lower consciousness/the subconscious, and this friendly exchange of ideas between us is the work of our higher consciousness. The difference of opinion we have is that, for me, both higher and lower consciousness count as thinking, whereas for you thinking seems to apply only to higher consciousness.


No, I'm trying to clarify what you are really asking in your OP. I asked (I wasn't asserting anything) whether the brain and the mind are doing the same thing and we are simply using different terms - thinking and performing - to refer to it, terms that reflect different vantage points: from within your own brain (your mind thinking) or from outside of it (someone else looking at your mind and seeing a brain performing mathematical calculations).

Quoting TheMadFool
The higher consciousness doesn't decide which muscles to contract and which to relax and calculate the force and direction of my fingers in typing our text. Rather the lower consciousness/the subconscious carries out this activity.

When you are learning how to do these things for the first time, you are applying your "higher" consciousness. For instance, learning to ride your bike requires conscious effort. After you have enough practice, you can do it without focusing your consciousness on it.

So, if the higher level passes the work down to the lower level, what exactly is it passing down - mathematical calculations? Thoughts? What? At what point does the brain pass it down to the lower level - what tells the brain, "Okay, the lower level can take over now"? How does the brain make that distinction?

Quoting TheMadFool
Nevertheless, the complexity of the brain necessitates a distinction - that between higher consciousness and lower consciousness/the subconscious. The former refers to that part of the mind that can make something an object of thought. What do I mean by that? Simply that higher consciousness can make something an object of analysis or rational study or even just entertain a simple thought on it. Our higher consciousness is active in this discussion between us, for example.

So, is the subconscious just an object of thought in the higher level, or is it really a "material" object in the world independent of it being objectified by the higher level? You seem to be saying that brains and the subconscious are objects before being objectified by the higher level. If they are already objects in the material world, then why does the higher level of the brain need to objectify those things? What would it mean for the "higher" level to objectify what is already an object?



TheMadFool March 19, 2020 at 13:50 #393659
Quoting Harry Hindu
No, I'm trying to clarify what you are really asking in your OP. I asked (I wasn't asserting anything) if the brain and mind were doing the same thing, but if we are using different terms to refer to the same thing - thinking and performing - and the terms have to do with different vantage points - from within your own brain (your mind thinking) or from outside of it (someone else looking at your mind and seeing a brain performing mathematical calculations).


Either your English is too good or my English is too bad :rofl: because I can't see the relevance of the above to my position. Either you need to dumb it down for me or I have to take English classes. I'm unsure which of the two is easier.

Quoting Harry Hindu
When you are learning how to do these things for the first time, you are applying your "higher" consciousness. For instance, learning to ride your bike requires conscious effort. After you have enough practice, you can do it without focusing your consciousness on it.


You're in the ballpark on this one. The only issue I have is I don't see the involvement of higher consciousness in learning to ride a bike in the sense that your consciousness is directly involved in deciding which muscles to contract and which to relax and how much force each muscle should exert.

If you ask me, though physical ability is too generic a term, in the sense that it basically involves control of a few basic body structures like the head, neck, limbs and torso, the process of learning a skill that isn't part of "normal" physical activity, e.g. riding a bike or juggling, appears to be a bottom-up process; in other words, what is actually happening is that the subconscious is feeding the higher consciousness cues as to how your limbs and torso must move in order to ride a bike.

It isn't the case that the higher consciousness is passing on information to the subconscious in learning to ride a bike; on the contrary, the subconscious is adjusting itself to the dynamics of the bike. I've stopped riding bikes but I remember making jerky, automatic, i.e. not conscious and voluntary, movements with my arms, legs and torso. The bottom line: the role of the higher consciousness begins and ends with the desire to ride a bike.

Quoting Harry Hindu
So, is the subconscious just an object of thought in the higher level, or is it really a "material" object in the world independent of it being objectified by the higher level? You seem to be saying that brains and the subconscious are objects before being objectified by the higher level. If they are already objects in the material world, then why does the higher level of the brain need to objectify those things? What would it mean for the "higher" level to objectify what is already an object?


Forget that I said that. I wanted to make a distinction between the part of the "mind" (brain function) that generates intentions with respect to our bodies and the part of the mind that carries out those intentions. As an example, as I write these words, I'm intending to do so and that part of the mind is not the same as the part of the mind that actually moves my fingers on the keyboard. For me what both these parts do amounts to thinking.

You said that I was conflating thinking with performing, but that would imply the former doesn't involve the latter; as I explained above, I consider both the intent to do a physical act and the performance of that act to be thinking.

Harry Hindu March 19, 2020 at 14:29 #393670
Quoting TheMadFool
You're in the ballpark on this one. The only issue I have is I don't see the involvement of higher consciousness in learning to ride a bike in the sense that your consciousness is directly involved in deciding which muscles to contract and which to relax and how much force each muscle should exert.


How did you learn to ride a bike? What type of thoughts were involved? Didn't you have to focus on your balance, which involves fine control over certain muscles that you might or might not have used before? What about ice-skating, which uses muscles (in your ankles) that most people who have never ice-skated haven't used before?

The reason you say that this "physical" act is done by some other part of the brain is because you've already passed the performance to another part of the brain. Using your theory, learning to ride a bike would be no different than knowing how to ride a bike. Learning requires the conscious effort of controlling the body to perform functions the body hasn't performed before. Once you've learned it, it seems to no longer require conscious effort to control the body. Practice creates habits. Habits are performed subconsciously.

I think part of your problem is this use of terms like "material" vs. "immaterial" and "physical" vs. "mind". You might have noticed that I haven't used those terms except to try to understand your use of them, which is incoherent.

What is "intent" at the neurological level? Where is "intent" in the brain?
Harry Hindu March 19, 2020 at 15:05 #393688
Quoting TheMadFool
No, I'm trying to clarify what you are really asking in your OP. I asked (I wasn't asserting anything) whether the brain and the mind are doing the same thing and we are simply using different terms - thinking and performing - to refer to it, terms that reflect different vantage points: from within your own brain (your mind thinking) or from outside of it (someone else looking at your mind and seeing a brain performing mathematical calculations).
— Harry Hindu

Either your English is too good or my English is too bad :rofl: because I can't see the relevance of the above to my position. Either you need to dumb it down for me or I have to take English classes. I'm unsure which of the two is easier.

Why is it that when I look at you, I see a body with a brain, not your mind? If I wanted to find evidence of your mind, or your intent, where would I look? Would I see what you see? If I see a brain causing the body to perform actions, and you experience intent causing your body to perform actions, why the difference?

Quoting TheMadFool
Forget that I said that. I wanted to make a distinction between the part of the "mind" (brain function) that generates intentions with respect to our bodies and the part of the mind that carries out those intentions.

Are you saying that your intent moves your brain into action? How is that done? Forgetting you said that is forgetting how your position is incoherent.
TheMadFool March 19, 2020 at 15:17 #393701
Quoting Harry Hindu
How did you learn to ride a bike? What type of thoughts were involved? Didn't you have to focus on your balance, which in fine control over certain muscles that you might or might not have used before? What about ice-skating which uses muscles most people haven't used (in your ankles) that have never ice-skated before.

The reason you say that this "physical" act is done by some other part of the brain is because you've already passed the performance to another part of the brain. Using your theory, learning to ride a bike would be no different than knowing how to ride a bike. Learning requires the conscious effort of controlling the body to perform functions the body hasn't performed before. Once you've learned it, it seems to no longer require conscious effort to control the body. Practice creates habits. Habits are performed subconsciously.

I think part of your problem is this use of terms like "material" vs. "immaterial" and "physical" vs. "mind". You might have noticed that haven't use those terms except to try to understand your use of them, which is incoherent.

What is "intent" at the neurological level? Where is "intent" in the brain?


Headless Chicken


The part of our mind that generates intent isn't necessary for carrying out physical activities. If the chicken can walk without a head then surely learning to ride a bike, which is nothing more than glorified walking, can be done without the intent-generating part of the brain, completely at the level of the subconscious.

Thanks for the information VagabondSpectre.
Harry Hindu March 19, 2020 at 21:08 #393843
Quoting TheMadFool
The part of our mind that generates intent isn't necessary for carrying out physical activities. If the chicken can walk without a head then surely learning to ride a bike, which is nothing more than glorified walking, can be done without the intent-generating part of the brain, completely at the level of the subconscious.

You're avoiding the questions and this post doesn't address any of the points I have made.

All you have done is provide an explanation as to why the part of our mind that generates intent isn't necessary. Then why does it exist? What is "intent" for? You're the one that proposed this "intent" in our minds and now you're saying it isn't necessary.

I wonder, if a child lost its head before learning how to walk, if the child would be able to walk after losing its head? Why or why not?
VagabondSpectre March 20, 2020 at 00:05 #393890
Quoting Harry Hindu
I wonder, if a child lost its head before learning how to walk, if the child would be able to walk after losing its head? Why or why not?



For humans the answer is almost definitely no, simply because we're super complicated. When children first learn to stand up, balance, and walk, they're doing it with conscious effort, and over time they get better and better at it. We have more fine-control requirements than something like a chicken, which has a much lower center of gravity (and therefore more balance-plausible positions), bigger feet, and fewer muscles. I know that in adults balance can be handled subconsciously by central pattern generators. Look at a time-lapse video of someone standing "still": they actually sway back and forth constantly, and pattern generators keep correcting the skeletal muscles (like those in the hips and back) that directly affect balance. If your balance perception, or your force/position-sensing afferent neurons, indicate a problem developing in a given direction, a reflex can issue a correcting force until the problem is marginalized.
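The reflex correction described above is essentially a negative-feedback loop. Here's a toy sketch of that idea (all function names and numbers are invented for illustration, not taken from any actual neuroscience model): a "reflex" that issues a corrective force proportional to the sensed tilt, applied repeatedly, pulls the sway back toward upright.

```python
# Toy sketch of a reflex-style balance correction: afferent neurons
# report a tilt, and the reflex issues a force opposing it. Names and
# constants are invented for illustration.

def balance_step(tilt, gain=0.5):
    """Corrective force a reflex would issue for a given tilt."""
    # Proportional negative feedback: push against the sensed tilt.
    return -gain * tilt

def simulate_sway(initial_tilt=1.0, steps=20, gain=0.5):
    """Apply repeated reflex corrections; tilt decays toward zero."""
    tilt = initial_tilt
    history = [tilt]
    for _ in range(steps):
        tilt += balance_step(tilt, gain)
        history.append(tilt)
    return history

sway = simulate_sway()
```

With a gain of 0.5, each correction halves the tilt, which is why the sway never blows up even though the body is constantly perturbed.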

For chickens, there is still early learning that must take place. The 'pecking' motion that some birds (but especially baby chickens) do is actually an automatic hard-wired reflex that gets triggered by certain stimuli (like looking down with the right body position and seeing something shiny). However, the chick doesn't know what it's doing at first, or why; it just thrusts its beak randomly toward the ground. And once it manages to snatch something tasty (like a bug or grain), it can start to optimize the pecking motion to get more rewards.

The nursing/suckling motion performed by the jaw muscles of human babies is another example. This is definitely a central pattern generator circuit being activated automatically (just by the touch receptors near the lips), but initially the baby has no clue what it is doing and cannot nurse effectively. It learns to adapt and modify the movements quite quickly (breast milk must be the bee's knees for a newborn), and this is what enables it to quickly learn the somewhat complicated and refined motor control for nursing. The same pattern generators probably get us babbling later on, and later still are used for speaking.

Likewise with baby deer: they can stand up almost instantaneously, but not actually instantaneously. They still have to get the hang of it with some brief practice.

The hard wired starting patterns we have from birth are like very crude ball-park/approximate initial settings that basically get us started in useful directions.

CONTENT WARNING: Disturbing

This is a video of a decerebrated cat (its cerebrum has been disconnected from its body, or damaged to the point of non-activity).


It is being suspended over a treadmill so that its legs are touching the substrate. As can be seen in the video, without any central nervous modulation whatsoever, the still-living peripheral nervous system can still trigger and modulate the central pattern generators in the spine to display a range of gaits.

Whether or not this would work with a kitten is the experiment needed to start estimating how much learning takes place in the spine and brain-stem itself throughout the lifespan of the animal. Is it the higher brain that improves its control over the body, or do the peripheral body systems adapt over time to better suit the ongoing needs of the CNS (or what combination of both)? Increasing the default coupling strengths or activation thresholds of different neurons in these central pattern generator circuits could effectively do this. How much motor control refinement takes place, and where it takes place, certainly differs between very different species of animal (insects, fish, quadrupeds, bipeds, etc.).

In summary, different bodies have different demands and constraints when it comes to motion control strategies. Centipedes don't need to worry too much about balance, but the timing between their legs must remain somewhat constant for most of their actions. Spiders worry a lot about balance, and they have even more rigid timing constraints between their leg movements (if they want to move elegantly). These kinds of systems may benefit from more rigidly wired central pattern generators (lacking spines, they still do have CPGs). Evolution could happily hard-wire these to a greater extent, thereby making it easier for tiny insect brains to actually do elegant high-level control. At a certain point of complexity, like with humans, evolution can't really risk hard-wiring everything, and so the CPGs (and the central nervous system controlling them) may take much longer to be optimized, but as a result are more adaptive (humans can learn and perfect a huge diversity of motor control feats). Four-legged animals have a much easier time balancing, and their basic quadrupedal gaits are somewhat common to all four-legged critters, meaning it's less risky to give them a more rigid CPG system.
Harry Hindu March 20, 2020 at 04:27 #393942
You wasted your time. None of this addresses the questions I asked TMF.
VagabondSpectre March 20, 2020 at 05:31 #393947
Quoting Harry Hindu
You wasted your time. None of this addresses the questions I asked TMF.


Oh... I thought you asked if a decerebrated infant can walk before learning to walk. The answer is no, unless little or no learning is required for walking in the first place (this may actually be the case for some insects). I also explained why: the loose neural circuitry we come hard-wired with only solves the problem part way, and conscious learning is very often required to refine a given action or action sequence (refined via the central nervous system responding to intrinsic rewards).

If you take my answer to your sarcastic question seriously, then it also answers some questions about "intent", which I won't hazard to define. If you're not interested that's alright, but I'm sure many others are, so please forgive my use of your post as a springboard for my own.
TheMadFool March 20, 2020 at 07:01 #393960
Quoting Harry Hindu
You're avoiding the questions and this post doesn't address any of the points I have made.

All you have done is provide an explanation as to why the part of our mind that generates intent isn't necessary. Then why does it exist? What is "intent" for? You're the one that proposed this "intent" in our minds and now you're saying it isn't necessary.

I wonder, if a child lost its head before learning how to walk, if the child would be able to walk after losing its head? Why or why not?


You said Quoting Harry Hindu
The reason you say that this "physical" act is done by some other part of the brain is because you've already passed the performance to another part of the brain.


which implies that you think learning involves a top-down process where the skill is passed down from the higher consciousness to the subconscious. The headless chicken disproves that claim.

As for intent in re the brain, think of it as the decision making body - it decides what, as herein relevant, the body will do or learn e.g. I decide to learn to ride a bike. The actual learning to ride a bike is done by another part of the brain, the subconscious.

Our discussion began when you said I was conflating "performance" with "thinking", but for that to obtain these two must be different from each other, which, for me, they are not. Performance, insofar as physical activity is concerned, involves the subconscious, and that is thinking, albeit thinking we lack awareness of.


Harry Hindu March 20, 2020 at 18:04 #394089
Quoting VagabondSpectre
If you take my answer to your sarcastic question seriously, then it also answers some questions about "intent", which I won't hazard to define. If you're not interested that's alright, but I'm sure many others are, so please forgive my use of your post as a springboard for my own.

If you didn't define "intent" or show how it has a causal influence on the brain, then no, you didn't come close to addressing my earlier points, which both you and TMF have diverted the thread from by bringing up one chicken that could walk after a botched decapitation.

Harry Hindu March 20, 2020 at 18:16 #394097
Quoting TheMadFool
You said
The reason you say that this "physical" act is done by some other part of the brain is because you've already passed the performance to another part of the brain.
— Harry Hindu

which implies that you think learning involves a top-down process where the skill is passed down from the higher consciousness to the subconscious. The headless chicken disproves that claim.

Read it again. I'm talking about what YOU said. YOU are the one using terms like "physical", "immaterial", "higher consciousness", "lower consciousness", etc. I'm simply trying to parse your use of these terms and ask you questions about what YOU are trying to ask or posit. I haven't put forth any kind of argument. I am only questioning YOU on what YOU have said.

Quoting TheMadFool
As for intent in re the brain, think of it as the decision making body - it decides what, as herein relevant, the body will do or learn e.g. I decide to learn to ride a bike. The actual learning to ride a bike is done by another part of the brain, the subconscious.

And I asked you how does intent make the body move? How does deciding to learn to ride a bike make the body learn how to ride a bike? Where is "intent" relative to the body it moves? Why is your experience of your intent different than my experience of your intent? How would you and I show evidence that you have this thing that you call, "intent"?
TheMadFool March 20, 2020 at 18:54 #394121
Reply to Harry Hindu I'm sure you know what intent is. Did you not have an intent to join this forum or to engage with me in this discussion or to have whatever meal that you ate today, etc.?

Intent is simply to have a purpose, aim, or goal. To want to ride a bike is to have an intent.
VagabondSpectre March 20, 2020 at 20:09 #394141
Quoting Harry Hindu
If you didn't define "intent" or show how it has a causal influence on the brain,


This is an oxymoronic statement. Intention is generated by brains. Brains cause intention...

Quoting Harry Hindu
you didn't come close to addressing my earlier points that both you and TMF have diverted the thread from by bringing up one chicken who could walk after a botched decapitation.


It's clear you haven't read my posts...
VagabondSpectre March 21, 2020 at 01:21 #394251
For anyone else who missed it, here's the short version:

The 'pecking' motion that some birds (but especially baby chickens) do is actually an automatic hard-wired reflex that gets triggered by certain stimuli. However, the chick doesn't know what it's doing at first, or why; it just thrusts its beak randomly toward the ground. Once it manages to snatch something tasty (like a bug or grain), it can start to optimize the pecking motion to get more "reward" (a hard-wired pleasure signal that plays an essential role in the emergence of intelligence and intention).
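The loop described here - a hard-wired reflex fires roughly at random, and a reward signal gradually pulls the action toward what actually yields food - can be sketched in a few lines. This is only a toy model under invented assumptions (the positions, scatter, and learning rate are made up), not a claim about actual chick neurology:

```python
import random

def learn_to_peck(grain_position=0.2, trials=500, learning_rate=0.2, seed=0):
    """Toy reward loop: a noisy reflex aim drifts toward rewarded pecks."""
    rng = random.Random(seed)
    aim = 0.0  # crude hard-wired starting point
    for _ in range(trials):
        peck = aim + rng.uniform(-0.3, 0.3)   # reflex with random scatter
        hit = abs(peck - grain_position) < 0.1  # "tasty" only near the grain
        if hit:
            # Reward signal: nudge the aim toward the peck that worked.
            aim += learning_rate * (peck - aim)
    return aim

aim = learn_to_peck()  # ends up near the grain at 0.2
```

The point of the sketch is that nothing here "knows" where the grain is in advance; the reward signal alone is enough to turn random thrusting into a refined, reliable action.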
Metaphysician Undercover March 21, 2020 at 01:32 #394254
Quoting TheMadFool
In fact color is an excellent example of our brain doing math because we can discern colors and color is completely determined by a mathematical quantity viz. frequency of EM waves.


This is wrong: the eyes are what we use to determine colour, not mathematics. And colour is not determined by the frequency of EM waves. That's a myth.
TheMadFool March 21, 2020 at 05:15 #394311
Quoting Metaphysician Undercover
This is wrong, the eyes are what we use to determine colour, not mathematics. And colour is not determined by frequency of EM waves. That's a false myth.


Myth? Color is a frequency-property of light/EM waves. Change the frequency of light and color changes i.e. without changes in frequency there are no changes in color.

Although frequency may not be the sole determinant of color, for there may be other explanations for the origins of color, existing color theory bases colors on the frequency of light.
Metaphysician Undercover March 21, 2020 at 11:35 #394355
Quoting TheMadFool
Color is a frequency-property of light/EM waves.


You clearly do not know what colour is. Do you recognize that what we sense as colour is combinations, mixtures of wavelengths, and that the eyes have three different types of cone sensors, sensitive to different ranges of wavelengths? The fact that human minds judge EM wavelength using mathematics, and we classify the different types of sensors with reference to these mathematical principles, does not mean that the cones use mathematics to distinguish different wavelengths.

Consider a couple different repetitive patterns, a clock ticking every second, and something ticking every seven seconds. You can notice that the two are different without using math to figure out that one is 1/7 of the other. It's just a matter of noticing that the patterns are different, not a matter of using math to determine the difference between them.

Noticing a difference does not require mathematics. We notice that it is bright in the day, and dark at night without using math, and we notice that it is warm in the sun, and colder at night without determining what temperature it is.

Quoting TheMadFool
Change the frequency of light and color changes i.e. without changes in frequency there are no changes in color.


Your logic is deeply flawed. Change the amount of salt in your dinner and the taste changes; therefore, there are no changes to taste without changing the amount of salt.
Harry Hindu March 21, 2020 at 13:55 #394407
Quoting VagabondSpectre
The 'pecking' motion that some birds (but especially baby chickens) do is actually an automatic hard-wired reflex that gets triggered by certain stimulus. However the chick doesn't know what its doing at first, or why; it just thrusts its beak randomly toward the ground.

How do you know what is in a chick's mind? What does it mean for the chick to not know what it is doing at first? It seems to me that the chick is showing intent to feed, or else it wouldn't peck the ground. How do you know that what it does instinctively is what it intends to do in its mind? For an instinctive behavior - one which is not routed through the filtering of consciousness - what it does is always what it wants to do. It is in consciousness that we re-think our behavior. I'm famished. Should I grab John's sandwich and eat it? The chick doesn't think about whether it ought or should do something. It just does it, and there is no voice in its mind telling it what is right or wrong (its conscience). What is "right" is instinctive behavior. Consciousness evolved in highly intelligent and highly social organisms as a means of fine-tuning (instinctive) behavioral responses in social environments.

Based on your theory, there is no reason to have a reward system, or intent. What would intent be, and what would it be useful for? How did it evolve? If the chick uses stimuli to peck the ground, then why would it need a reward system, if natural selection already determined the reward as getting food when the chick pecks the ground, and so passed that behavior down to the next generation? Odds are, when a certain stimulus exists, it pecks the ground. The reward would be sustenance. Why would there need to be a pleasure signal, or intent, when all that is needed is a specific stimulus to drive the behavior - the stimulus that natural selection "chose" as the best trigger for the pecking behavior, because that stimulus and the pecking behavior are what get food? There would be no need for reward (pleasure signals - and what is a pleasure signal relative to a feeling of pleasure?) or intent, because the stimulus-response account explains the situation without the use of those concepts.

Quoting VagabondSpectre
Once it manages to snatch something tasty (like a bug or grain), it can start to optimize the pecking motion to get more "reward" (a hard wired pleasure signal that plays an essential role in the emergence of intelligence and intention).

What was the stimulus? If the stimulus was the sight or smell of the bug or grain that started the instinctive behavior of pecking, then what purpose is the reward? If the chick already knows there is a bug or grain via its senses, and that causes the instinctive behavior, then what is the reward for, if it already knew there was a bug or grain on the ground?

Harry Hindu March 21, 2020 at 14:03 #394410
Quoting TheMadFool
Myth? Color is a frequency-property of light/EM waves. Change the frequency of light and color changes i.e. without changes in frequency there are no changes in color.

Although frequency may not be the sole determinant of color for there maybe other explanations for the origins of color, existing color theory bases colors on frequency of light.

Frequency is a property of light. Color is a property of minds. I don't need frequencies of light to strike my retina for me to experience colors. I can think of colors without using my eyes.

Interesting thought, colors seem to be a fundamental building block of the mind. I know I exist only because I can think, and thinking, perceiving, knowing, imagining, etc. are composed of colors, shapes, sounds, feelings, smells, etc. - sensory data - and nothing seems more fundamental than that.
VagabondSpectre March 21, 2020 at 22:39 #394606
Quoting Harry Hindu

What does it mean for the chick to not know what it is doing at first?
It means that it has no prior experience of the thing it is doing, and also that the proximal cause of the thing is not its high level thoughts. After it gets experience of the thing it is doing, and figures out how to do it on demand, and how to refine the action to actually get food, then we might say "it knows what it is doing".

Quoting Harry Hindu
It seem to me that the chick is showing intent to feed, or else it wouldn't peck the ground. How do you know that what it does instinctively, is what it intends to do in it's mind?


The science of behavior calls them "fixed action responses/patterns". They're still somewhat mysterious (because neural circuitry is complex), but they're extensively studied.

When a female elk hears the mating call of a male, she automatically begins to ovulate. Does the female elk "intend" to ovulate? Does she "intend" to mate? I know you'll answer 'yes' to that last one, so what about in the case of a recently matured female elk who has never met an adult male and never mated before? How does that female elk "intend" to do something that she has never experienced, does not understand, and doesn't even know exists?

The chick has never eaten before. It has no underlying concepts about things, in the same way that a baby doesn't know what a nipple is when it begins the nursing action pattern. We know because there is no such thing as being born with existing experience and knowledge; if we put chicks in environments without grains or bugs to eat, they start pecking things anyway (and hurt themselves).

Again, it's an action they do automatically, and over time they actually learn to optimize and utilize it.

Quoting Harry Hindu
For an instinctive behavior - one in which it is not routed through the filtering of consciousness - what it does is always what it wants to do. It is in consciousness that we re-think our behavior. I'm famished. Should I grab John's sandwich and eat it? For the chick, it doesn't think about whether it ought, or should do something. It just does it and there is no voice in their mind telling what is right or wrong (their conscious). What is "right" is instinctive behavior. Consciousness evolved in highly intelligent and highly social organisms as a means of fine-tuning (instinctive) behavioral responses in social environments.


Why are you now making claims about "consciousness"? Are you trying to exclude chickens from the realm of consciousness?

Are chickens not highly intelligent and social animals?

Quoting Harry Hindu
Based on your theory, there is no reason to have a reward system, or intent. What would intent be, and what would it be useful for?


How come you have to look out for obstacles when you are driving or walking down the street?

Why do fishermen need to come up with novel long term strategies to catch fish when the weather changes?

How come we all aren't exactly the same, in a static and unchanging environment that requires no adaptation for survival?

"Brains" first emerged as a real-time response-tool to help dna reproduce more. DNA is a plan, and it can encode many things, but it has a very hard time changing very quickly; it can only redesign things generation to generation; genetically. Brains on the other hand can react to "real time" events via sensory apparatus and response triggers. If you're a plant that filters nutrients from the surrounding water, maybe you can use a basic hard coded reflex to start flailing your leaves more rapidly when there is lots of nutrients floating by.

But what if your food is harder to get? What if it moves around very quickly, and you need to actually adjust these actions in real time in order to more reliably get food?

That's where the crudest form of central decision-making comes into play. Evolution cannot hard-code a reliable strategy once things get too complicated (once the task requires real-time adaptation), so brains step in and do the work. Even in the most primitive animals, there's more going on than hard-wired instinct. There is real-time strategy exploration: cognition. The strategies are ultimately boot-strapped by low-level rewards, like pain, pleasure, hunger, and other intrinsic signals that give our learning a direction to go in.

Quoting Harry Hindu
What was the stimuli? If the stimuli was a visual or smell of the bug or grain that started the instinctive behavior of pecking, then what purpose if the reward? If it already knows there is a bug or grain via it's senses, and that causes the instinctive behavior, then what is the reward for if they already knew there was a bug or grain on the ground?


It can actually happen without any stimulus (sometimes it's a trigger mechanic with a stimulus threshold, a release mechanic, a graded response to a stimulus, or even something that happens in the absence of a stimulus). Chicks will start pecking things regardless, but certain body positions or visual patterns might trigger it more often. That said, randomly pecking is not good in and of itself; the action needs to be actively refined before chicks can do it well.

The reward is the taste of the bug itself. It gives the bird a reason to keep doing the action, and to do it better.
TheMadFool March 22, 2020 at 05:28 #394688
Quoting Harry Hindu
Frequency is a property of light. Color is a property of minds. I don't need frequencies of light to strike my retina for me to experience colors. I can think of colors without using my eyes.

Interesting thought, colors seem to be a fundamental building block of the mind. I know I exist only because I can think, and thinking, perceiving, knowing, imagining, etc. are composed of colors, shapes, sounds, feelings, smells, etc. - sensory data - and nothing seems more fundamental than that.


:ok: :up:
fdrake March 22, 2020 at 14:42 #394755
Reply to VagabondSpectre

Your posts in this thread have been excellent!

For the uninitiated, the curse of dimensionality is a problem that occurs when fitting statistical models to data; it occurs (roughly) when there are more relevant factors in the model (model parameters) than there are observations (data) to train the model on.

Regarding the curse of dimensionality: if you look at the combinatorics of the learning - what would be required to store all the distinctions we need to store as binary strings - there are way more ways of manipulating muscles than there is input data for learning how to manipulate them in specified ways, without strong prior constraints on the search space (configurations of muscle contractions, say). Neurons are in the same position: there are way more distinctions we recognise and act upon easily than could be embedded into the neurons in binary alone. Another way of saying this is that there isn't (usually) a "neuron which stores the likeness of your partner's face". What this suggests is that how we learn doesn't suffer from the curse of dimensionality in the way expected from machine learning algorithms as usually conceived.

There're a few ways that the curse of dimensionality gets curtailed. Two of which are:

Constraining the search space; which Vagabond covered by talking about central pattern generators (neurological blueprints for actions that can be amended while learning through rewarded improvisation).

Another way it can be curtailed is by reducing the amount of input information which is processed, without losing the discriminatory/predictive power of that input; this occurs through cleverly aggregating it into features. A feature is a salient/highly predictive characteristic derived from input data by aggregating it cleverly. A classical example of this are eigenfaces, which are images constructed to correspond maximally to components of variation in images of human faces; the pronounced bits in the picture in the link are the parts of the face which vary most over people. Analogically, features allow you to compress information without losing relevant information.

When people look at faces, they typically look at the most informative parts during an act of recognition - eyes, nose, hair, mouth. Another component of learning in a situation where the curse of dimensionality applies is feature learning; getting used to what parts of the environment are the most informative for what you're currently doing.
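The eigenface idea above can be demonstrated in a few lines of numpy. This is an illustrative sketch on synthetic data (the "images" are fake, generated from three hidden factors), not real face data: the point is that when high-dimensional input has low-dimensional structure, a handful of principal components captures almost all of the variation, so a few feature numbers can stand in for many raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 fake "images" of 64 pixels each, generated from only 3 hidden
# factors plus a little noise -- the kind of low-dimensional structure
# feature learning exploits.
factors = rng.normal(size=(100, 3))
mixing = rng.normal(size=(3, 64))
images = factors @ mixing + 0.01 * rng.normal(size=(100, 64))

# Principal components of the centred data play the role of "eigenfaces".
centred = images - images.mean(axis=0)
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)

# Fraction of total variance captured by the top 3 components.
explained = (singular_values[:3] ** 2).sum() / (singular_values ** 2).sum()

# Compressed representation: 3 feature numbers per image instead of 64.
features = centred @ components[:3].T
```

Here `explained` comes out essentially at 1.0, i.e. three features retain nearly all the discriminative information that was spread across 64 pixels; that compression is exactly what blunts the curse of dimensionality.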

Quoting VagabondSpectre
Once it manages to snatch something tasty (like a bug or grain), it can start to optimize the pecking motion to get more "reward" (a hard wired pleasure signal that plays an essential role in the emergence of intelligence and intention).


Like with pecking, it's likely to be the case that features that distinguish good to peck targets (like seeds' shapes and sizes or bugs' motion and leg movements) from bad to peck targets become heavily impactful in the learning process, as once an agent has cottoned onto a task relevant feature, they can respond quicker, with less effort, and just as accurately, as features efficiently summarise task relevant environmental differences.

Edit: in some extremely abstract respect, feature learning and central pattern generators address the same problem. Imagine succeeding at a task as a long journey: central pattern generators direct an agent's behavioural development down fruitful paths from the very beginning - they give an initial direction of travel - while feature learning lets an agent decide how best to get closer to the destination along the way; to walk the road rather than climb the mountain.
VagabondSpectre March 22, 2020 at 21:51 #394942
Reply to fdrake HUZZAH! My kindred!!!

Everything you said is bang on the nose!

The dimensionality reduction and integration of our sensory observations is definitely a critical component of prediction (otherwise there is information overload, which quickly hurts efficiency and scalability).

My own project began simply as an attempt to make an AI spider (since then it has exploded to encompass all forms of animal life)... As it happened, spiders turned out to be one big walking super-sensitive sensory organ (all the sights, all the sounds, all the vibrations, etc...), which is to say they have incredibly dense and "multi-modal" sensory inputs (they have sight, hearing, vibrational sensing, noses on two or more of their legs, and sensory neurons throughout their body that inform them about things like temperature, injury, body position, acceleration, and more). And to integrate these many senses, spiders have a relatively puny brain with which they get up to an uncountable number of interesting behaviors with an ultra-complicated body (if their flavor of intelligence could be scaled up, it would amount to a powerful general intelligence). It's not just the sensory density and body complexity, but also the fact that they actually exhibit a very wide range of behaviors. Not only can they extract and learn to encode high-level features from individual senses, they can make associations and corroborate those features in and between many different complementary sensory modalities (if it looks like a fly, sounds like a fly, and feels like a fly: fly 100%).

I could expound the virtues of spiders all day, but the point worth exploring is the fact that spiders have a huge two-ended dimensionality problem (too much in, too much out), and yet their small and primitive-looking nervous systems make magic happen. When I set out to make a spider AI, I didn't have any conception of the dimensionality curse, but I very quickly discovered it... At first, my spider just writhed epileptically on the ground; worse than epileptically (without coherence at all).



I had taken the time to build a fully fleshed out spider body (a challenge unto itself) with many senses, under the assumption that agents need ample clay to make truly intelligent bricks. And when I set a reinforcement learning algorithm to learn to stand or to walk (another challenge unto itself), it failed endlessly. A month of research into spiders and machine-learning later, I managed to train the spider to ambulate...

And it wasn't pretty... All it would ever learn was a bunch of idiosyncratic nonsense. Yes, spider, only ever flicking one leg is a possible ambulation strategy, but it's not a good one, dammit! Another month of effort, and now I can successfully train the spider to run... like a horse?



Imagine my mid-stroke grin; I finally got the spider running (running? I wanted it to be hunting by now!). It took so much fine-tuning to make sure the body and mind/senses has no bugs or inconsistencies, and in the end it still fucking sucked :rofl: ... A spider that gallops like a horse is no spider at all. I almost gave up...

I decided that there must be more to it... Thinking about spiders and centipedes (and endlessly observing them) convinced me that there has to be some underlying mechanism of leg coordination. After creating a crude oscillator and hooking up the spider muscles with some default settings I got instant interesting results:



It's still not too pretty, but compared to seizure spider and Hi Ho Silver, this was endlessly satisfying.
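For readers wondering what "a crude oscillator hooked up to the muscles" might look like, here's a minimal sketch of a sine-based central pattern generator; the leg count, frequency, and phase wiring are illustrative defaults, not the actual project's settings:

```python
import math

def cpg_outputs(t, n_legs=8, freq=1.5):
    """Toy CPG: each leg is driven by a sine oscillator with a fixed
    phase offset.  Legs on opposite sides of a segment are held half
    a cycle apart, so an alternating gait emerges with no learning:
    the learner only has to modulate the rhythm, not invent it."""
    drives = []
    for leg in range(n_legs):
        # opposite sides anti-phase; successive segments staggered
        phase = (leg % 2) * math.pi + (leg // 2) * (math.pi / (n_legs // 2))
        drives.append(math.sin(2 * math.pi * freq * t + phase))
    return drives

out = cpg_outputs(0.1)
```

Because legs 0 and 1 are a half-cycle apart, their drive signals always cancel; coordination is baked into the wiring rather than discovered by trial and error.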

Over the next half a year or so I continued to research and develop the underlying neural circuitry of locomotion (while developing the asset and accompanying architecture). I branched out to other body types beyond spiders, and in doing so forced myself to create a somewhat generalized approach to setting up what amount to *controller systems* for learning agents. I'm still working toward finalizing the spider controller system (spiders are the Ferrari of control systems), but I have already made wildly good progress with things like centipedes, fish, and even human hands!

(note: the centipede and the hand have no "brain"; they're headless)





These fish actually have a central nervous system, whereas the centipede and the hand are "headless" (I'm essentially sending signals down their control channels manually). They can smell and see the food balls, and they likey!



The schooling behavior is emergent here. In theory they could get more reward by spreading out, but since they have poor vision, and since smell is noisy and sometimes ambiguous, they group together because combined they form a much more powerful nose (influencing each other's direction with small adjustments acts like a voting mechanic for group direction changes). (I can't be sure of this, but I'm in a fairly good position to make that guess.)
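That "voting mechanic" guess is easy to sanity-check in the abstract: averaging many independent noisy estimates of the same direction shrinks the error. A toy simulation (the noise level and group size are illustrative, not taken from the actual fish):

```python
import random

random.seed(0)

def direction_error(n_fish, noise=1.0, trials=2000):
    """Each fish gets a noisy estimate of the true food direction;
    the school averages them, acting like one big, more reliable
    nose.  Returns the mean absolute error of the group estimate."""
    true_dir = 0.0
    total = 0.0
    for _ in range(trials):
        estimates = [true_dir + random.gauss(0, noise) for _ in range(n_fish)]
        total += abs(sum(estimates) / n_fish - true_dir)
    return total / trials

solo = direction_error(1)     # a lone nose
school = direction_error(16)  # sixteen noses voting
```

The averaged error falls roughly with the square root of the group size, which is the statistical engine that could make grouping worth the crowding.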

They each have two eyeballs (quite low resolution though, 32x32 RGB), and these images are passed through a convolutional neural network that performs spatially relevant feature extraction. It basically turns things into an encoded and shortened representation that is then passed as input to the PPO RL training network acting as the CNS of the fish (the PPO back-propagation passes into the CNN, making it the "head" classifier, if you will).
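The shape bookkeeping of that eye-to-CNS pipeline can be sketched without any ML library. The toy below replaces the learned convolutional filters with plain average pooling (a stand-in, not the actual network), just to show how a dense 32x32 RGB image becomes a short code a policy network can consume:

```python
def encode_eye(image, pool=8):
    """Stand-in for the convolutional encoder: average-pool a 32x32
    RGB image (nested lists of [r, g, b] values in 0..1) down to a
    flat vector.  The real encoder learns its filters end to end via
    the PPO gradients; pooling just illustrates the compression."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    code = []
    for ch in range(c):
        for by in range(0, h, pool):
            for bx in range(0, w, pool):
                block = [image[y][x][ch]
                         for y in range(by, by + pool)
                         for x in range(bx, bx + pool)]
                code.append(sum(block) / len(block))
    return code

gray = [[[0.5, 0.5, 0.5] for _ in range(32)] for _ in range(32)]
code = encode_eye(gray)  # 3 channels x 4 x 4 blocks = 48 numbers
```

The CNS then sees 48 numbers per eye instead of 3,072 raw pixel values, which is the whole point of putting an encoder in front of it.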

This observational-density aspect of the system has absorbed almost as much of my focus as the output side: where my CPG system is like a dynamic decoder for high-level commands from the CNS, I need a hierarchical system on the input side that can act as an encoder, giving high-level, relevant reports to the CNS. Only that way can I lighten the computational load on the CNS and make room for interesting super-behaviors composed of more basic things, like actually walking elegantly. I have flirted with autoencoding, and some of the people interested in the project are helping me flirt with representation encoding (there's actually a whole zoo of approaches to exhaust).

The most alluring approach for me is the hierarchical RL approach. Real brains, after all, are composed of somewhat discrete ganglia (they compartmentalize problem spaces), and they do have some extant hierarchy thanks to evolution (we have lower and higher parts of the brain, the lower tending to be basic and evolutionarily older, the higher areas tending to be more complexly integrated and more recent acquisitions of nature). Sensory data comes in at the bottom, goes up through the low levels, shit happens at the top and everywhere in between, shit flows back down (from all levels), and behavior emerges. One evolutionary caveat of this is that before the higher parts could have evolved, the lower parts had to actually confer some kind of benefit (and be optimized). Each layer needs to both add overall utility AND not ruin the stability and benefits of the lower ganglia (each layer must graduate the system as a whole). The intuitive takeaway from this is that we can start with basic low-level systems, make them good and stable, and then add layers on top to achieve the kind of elegant complex intelligence we're truly after. For most roboticists and researchers, the progress-stalling rub has been finding an elegant and generalized low-level input and output approach.
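The layering idea can be caricatured in a dozen lines: a higher layer emits a compact command, and a low-level "spinal" layer decodes it into per-leg drive signals. All names and numbers here are hypothetical illustrations of the architecture, not the project's code:

```python
import math

def low_level(gait, t, n_legs=8):
    """Spinal layer: decode a compact (frequency, amplitude) command
    into per-leg oscillator drives.  It is stable on its own, per the
    'each layer must graduate the system' principle."""
    freq, amp = gait
    return [amp * math.sin(2 * math.pi * freq * t + (leg % 2) * math.pi)
            for leg in range(n_legs)]

def high_level(smell_strength):
    """Higher layer: decides only *what* to do (amble vs. sprint),
    expressed as a two-number gait command rather than 8+ joints."""
    return (3.0, 1.0) if smell_strength > 0.5 else (1.0, 0.4)

drives = low_level(high_level(0.9), t=0.1)
```

The payoff is dimensionality relief: the top layer searches a two-dimensional command space instead of the full joint space, which is exactly the "lighten the load on the CNS" goal described above.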

Quoting fdrake
Like with pecking, it's likely to be the case that features that distinguish good-to-peck targets (like seeds' shapes and sizes, or bugs' motion and leg movements) from bad-to-peck targets become heavily impactful in the learning process, as once an agent has cottoned onto a task-relevant feature, they can respond quicker, with less effort, and just as accurately, since features efficiently summarise task-relevant environmental differences.

Edit: in some extremely abstract respect, feature learning and central pattern generators address the same problem. Imagine succeeding at a task as a long journey: central pattern generators direct an agent's behavioural development down fruitful paths from the very beginning (they give an initial direction of travel), while feature learning lets an agent decide how best to get closer to the destination along the way; to walk the road rather than climb the mountain.


One of the more impressive things I have been able to create is a CPG system that can be very easily and intuitively wired with reflexes and fixed action responses (whether completely unconscious, like the patellar knee reflex, or dynamically modulated by the CNS as excitatory or inhibitory condition control). But one of the trickier things (and something I'm uncertain about) is creating autonomic visual triggers (humans have queer fears like trypophobia; possibly we're pre-wired to recognize those patterns). I could actually train a single encoder network to just get really good at recognizing stuff that is relevant to the critter, and pass that learning down through generations, but something tells me this could be a mistake, especially if novel behavior in competition needs to be recognized as a feature.
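As a sketch of what "intuitively wired with reflexes" could mean, here's a toy reflex arc where the CNS can dampen the response with descending inhibition (the thresholds and names are invented for illustration, not the project's API):

```python
def reflex_arc(stretch, cns_inhibition=0.0, threshold=0.7):
    """Toy patellar-style reflex: fire a contraction when a stretch
    sensor crosses its threshold, unless descending inhibition from
    the CNS cancels it.  A reflex is then just a (sensor, threshold,
    response, modulation-channel) bundle wired into the CPG layer."""
    drive = 1.0 if stretch > threshold else 0.0
    return max(0.0, drive - cns_inhibition)
```

`reflex_arc(0.9)` fires at full strength, while `reflex_arc(0.9, cns_inhibition=1.0)` is fully suppressed - the "dynamically modulated" case.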

I'm still in the thick of things (currently fleshing out metabolic effects like energy expenditure, fatigue, and a crude endocrine system that can either be usefully hard-coded or learned (e.g: stress chemicals that alter the base tension and contraction speed of muscles)).

In the name of post-sanity I'll end it here, but I've only scratched the surface!

Harry Hindu March 23, 2020 at 14:22 #395090
Quoting VagabondSpectre
The chick has never eaten before. It has no underlying concepts about things. In the same way that a baby doesn't know what a nipple is when it begins the nursing action pattern. We know because there is no such thing as being born with existing experience and knowledge; if we put chicks in environments without grains or bugs to eat, they start pecking things anyway (and hurt themselves).


That depends on what knowledge is. We possess knowledge that we don't know we have. Have you ever forgotten something, only later to be reminded?

As usual with this topic (mind-matter), we throw these terms about without really understanding what we are saying, or what is missing in our explanations.

Quoting VagabondSpectre
That's where the crudest form of central decision making comes into play. Evolution cannot hard-code a reliable strategy once things start to get too complicated (once the task requires real-time adaptation), so brains step in and do the work. Even in the most primitive animals, there's more going on than hard-wired instinct. There is real-time strategy exploration; cognition. The strategies are ultimately boot-strapped by low-level rewards, like pain, pleasure, hunger, and other intrinsic signals that give our learning a direction to go in.

Interesting. This part looks like something I have said a number of times before on this forum:

Harry Hindu:I am a naturalist because I believe that human beings are the outcomes of natural processes and not separate or special creations. Human beings are as much a part of this world as everything else, and anything that has a causal relationship (like a god creating it) with this world is natural as well. Evolutionary psychology is a relatively new scientific discipline that theorizes that our minds are shaped by natural selection, not just our bodies. This seems like a valid argument to make as learning is essentially natural selection working on shaping minds on very short time scales. You learn by making observations and integrating those observations into a consistent world-view. You change your world-view with new observations.


Harry Hindu:The brain is a biological organ, like every other organ in our bodies, whose structure and function would be shaped by natural selection. The brain is where the mind is, so to speak, and any change to the brain produces a change in the mind, and any monist would have to agree that if natural selection shapes our bodies, it would therefore shape how our brains/minds interpret sensory information and produce better-informed behavioral responses that would improve survival and finding mates.


Larger brains with higher-order thinking evolved to fine-tune behavior "on the fly" rather than waiting for natural selection to come up with a solution. You're talking evolutionary psychology here. In essence, natural selection not only filters our genes, but it filters our interpretations of our sensory data (and is this really saying that it is still filtering our genes - epigenetics?). More accurate interpretations of sensory data lead to better survival and more offspring. In essence, natural selection doesn't seem to care about "physical" or "mental" boundaries. It applies to both.

So then, why are we making this dualistic distinction, and using those terms? Why is it that when I look at you, I see a brain, not your experiences? What about direct vs. indirect realism? Is how I see the world how it really is - you are a brain and not a mind with experiences (but then how do I explain the existence of my own mind?) - or is it the case that the brain I see is merely a mentally objectified model of your experiences, and your experiences are real and brains are merely mental models of what is "out there", kind of like how bent sticks in water are really mental representations of bent light, not bent sticks?

VagabondSpectre March 25, 2020 at 00:58 #395624
Quoting Harry Hindu
That depends on what knowledge is. We possess knowledge that we don't know we have. Have you ever forgotten something, only later to be reminded?

As usual with this topic (mind-matter), we throw these terms about without really understanding what we are saying, or what is missing in our explanations.


Granted, the high-level stuff is still up for interpretation as to how it works, but what I have laid out is the fundamental groundwork upon which basic learning occurs. The specific neural circuitry that causes fixed action responses is known to reside in the spine, and we even see it driving "fictive actions" in utero (before birth) that conform to standard gaits.

Forgetting and remembering are functions of memory, and how memory operates and meshes with the rest of our learning and intelligent systems + body is complicated and poorly understood. But unless you believe that infants are born with pre-existing ideas and knowledge that they can forget and remember, we can very safely say that people are not born with preexisting ideas and beliefs (we may not be full-blown tabulae rasae, but we aren't fully formed Rembrandts either); only the lowest-level functions can be loosely hard-coded (like the default gait, or the coupling of eye muscles, or the good/bad taste of nutritious/poisonous substances).

Quoting Harry Hindu
Larger brains with higher-order thinking evolved to fine-tune behavior "on the fly" rather than waiting for natural selection to come up with a solution. You're talking evolutionary psychology here. In essence, natural selection not only filters our genes, but it filters our interpretations of our sensory data (and is this really saying that it is still filtering our genes - epigenetics?). More accurate interpretations of sensory data lead to better survival and more offspring. In essence, natural selection doesn't seem to care about "physical" or "mental" boundaries. It applies to both.
Evolutionarily endowed predispositions have these complex effects because they bleed into and up through the complex system we inhabit as organisms (e.g.: environment affects hormones, hormones affect genes, genes create different hormones, a different hormone acts as a neurotransmitter, non-linear effects emerge in the products of the affected neural networks), but they are also constrained by instability. When you change low-level functionality in tiered complex systems, you run the risk of catastrophic feedback domino effects that destabilize the entire system.

Quoting Harry Hindu

So then, why are we making this dualistic distinction, and using those terms? Why is it that when I look at you, I see a brain, not your experiences?
Because we have to distinguish between the underlying structure and the emergent product. Appealing to certain concepts without giving a sound basis for their mechanical function is where random speculation comes into play. I minimize my own speculation by focusing on the low-level structures and learning methodology that approximate more primitive intelligent systems. Ancient arthropods that learned to solve problems like swimming or catching fish (*catching a ball*) did so through very primitive and generic central pattern generator circuits and low-level central decision makers to orchestrate them. In terms of what we can know through evidence and modeling, this is an accepted fact of neurobiology. I try to refrain from making hard statements about how the high-level stuff actually works, because as yet there are too many options and open questions in both the worlds of machine learning and neuroscience.

Also, you don't see my brain; you don't even see my experiences; you experience my actions as they express, which emerge from my experiences, as orchestrated by my brain, within the dynamics and constraints of the external world, and then re-filtered back up through your own sensory apparatus.

Quoting Harry Hindu

What about direct vs. indirect realism? Is how I see the world how it really is - you are a brain and not a mind with experiences (but then how do I explain the existence of my own mind?) - or is it the case that the brain I see is merely a mentally objectified model of your experiences, and your experiences are real and brains are merely mental models of what is "out there", kind of like how bent sticks in water are really mental representations of bent light, not bent sticks?


We cannot address the hard problem of consciousness, so why try? We're at worst self-deluded into thinking we have free will, and we bumble about a physically consistent (enough) world, perceiving it through secondary apparatus which turn measurements into signals, from which models and features are derived, and used to anticipate future measurements in ways that are beneficial to the objectives that drive the learning. Objectives that drive learning are where things start to become hokey, but we can at least make crude assertions like: "pain sensing neurons" (measuring devices that check for physical stress and temperature) are a part of our low level reward system that gives our learning neural networks direction (e.g: learn to walk without hurting yourself).
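The "learn to walk without hurting yourself" idea reduces to reward shaping. A toy version (the weights and stress limit are invented for illustration):

```python
def step_reward(forward_progress, joint_stress, stress_limit=1.0, pain_weight=2.0):
    """Crude low-level reward signal: progress is rewarded, and
    simulated pain (stress beyond a safe limit) is penalized,
    steering the learner away from self-damaging gaits without ever
    specifying what a 'good' gait looks like."""
    pain = max(0.0, joint_stress - stress_limit)
    return forward_progress - pain_weight * pain

safe = step_reward(0.5, 0.8)     # stress within limits: reward is pure progress
harmful = step_reward(0.5, 1.5)  # same progress, but painful: net negative
```

This is the sense in which pain-sensing neurons give the learning network a direction: they reshape the objective, not the mechanics.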

There are a few obvious implications that come from understanding the "low-level" workings of biological intelligence (and how it expresses through various systems). I would say that I have addressed and answered the main subject of the thread, and beyond. A homunculus can learn to catch a ball if it is wired correctly, with a sufficiently complex neural network, sufficiently quick senses, and the correct reward signal (and of course the body must be capable of doing so).
Harry Hindu March 25, 2020 at 14:27 #395767
Quoting VagabondSpectre
Granted, the high-level stuff is still up for interpretation as to how it works, but what I have laid out is the fundamental groundwork upon which basic learning occurs. The specific neural circuitry that causes fixed action responses is known to reside in the spine, and we even see it driving "fictive actions" in utero (before birth) that conform to standard gaits.

Forgetting and remembering are functions of memory, and how memory operates and meshes with the rest of our learning and intelligent systems + body is complicated and poorly understood. But unless you believe that infants are born with pre-existing ideas and knowledge that they can forget and remember, we can very safely say that people are not born with preexisting ideas and beliefs (we may not be full-blown tabulae rasae, but we aren't fully formed Rembrandts either); only the lowest-level functions can be loosely hard-coded (like the default gait, or the coupling of eye muscles, or the good/bad taste of nutritious/poisonous substances).

First you talk about learning, then the next sentence talks about "fixed action responses". I don't see what one has to do with the other unless you are saying that they are the same thing, or related. Does one learn "fixed actions", or does one learn novel actions? One might say that instincts, or "fixed action responses", are learned by a species per natural selection, rather than by an organism. Any particular "fixed action response" seems like something that can't be changed, yet humans (at least) can cancel those "fixed" actions when they are routed through the "high level stuff". We can prevent ourselves from acting on our instincts, so for humans they aren't so "fixed"; they are malleable. Explaining how "fixed action responses" evolved in a species is no different from explaining how an organism evolved (learned) within its own lifetime. We're simply talking about the different lengths, layers, or levels in space-time at which this evolution occurs.

Some on this forum talk about knowing-how vs. knowing-that. If a chicken can walk after a botched decapitation, does its walking entail "knowing" how to walk, or are those people using the term "know" too loosely? Do "fixed action responses" qualify as "knowing how"? What about the level of the species? Can you say that a species "knows how" to walk, or do only organisms "know how" to walk? Also, knowing how to walk and actual walking didn't exist at the moment of the Big Bang. So how did walking and knowing how to walk (are they really separate things?) come about, if not by a process similar to what you call "learning" - trying different responses to the same stimuli to see what works best - kind of like how natural selection had to "try" different strategies for solving the same problem (locomotion) before it "found" what worked, which is now a defining feature of a species (walking on two legs as opposed to four)? It is interesting that we can use these terms - "try", "found", etc. - for the processes of natural selection and for how computers work. It reminds me of what Steven Pinker wrote in his book, "How the Mind Works".

Steven Pinker:And then along came computers: fairy-free, fully exorcised hunks of metal that could not be explained without the full lexicon of mentalistic taboo words. "Why isn't my computer printing?" "Because the program doesn't know you replaced your dot-matrix printer with a laser printer. It still thinks it is talking to the dot-matrix and is trying to print the document by asking the printer to acknowledge its message. But the printer doesn't understand the message; it's ignoring it because it expects its input to begin with '%!' The program refuses to give up control while it polls the printer, so you have to get the attention of the monitor so that it can wrest control back from the program. Once the program learns what printer is connected to it, they can communicate." The more complex the system and the more expert the users, the more their technical conversation sounds like the plot of a soap opera.

Behaviorist philosophers would insist that this is all just loose talk. The machines aren't really understanding or trying anything, they would say; the observers are just being careless in their choice of words and are in danger of being seduced into grave conceptual errors. Now, what is wrong with this picture? The philosophers are accusing the computer scientists of fuzzy thinking? A computer is the most legalistic, persnickety, hard-nosed, unforgiving demander of precision and explicitness in the universe. From the accusation you'd think it was the befuddled computer scientists who call a philosopher when their computer stops working rather than the other way around. A better explanation is that computation has finally demystified mentalistic terms. Beliefs are inscriptions in memory, desires are goal inscriptions, thinking is computation, perceptions are inscriptions triggered by sensors, trying is executing operations triggered by a goal.


Quoting VagabondSpectre
Because we have to distinguish between the underlying structure and the emergent product. Appealing to certain concepts without giving a sound basis for their mechanical function is where random speculation comes into play. I minimize my own speculation by focusing on the low-level structures and learning methodology that approximate more primitive intelligent systems. Ancient arthropods that learned to solve problems like swimming or catching fish (*catching a ball*) did so through very primitive and generic central pattern generator circuits and low-level central decision makers to orchestrate them. In terms of what we can know through evidence and modeling, this is an accepted fact of neurobiology. I try to refrain from making hard statements about how the high-level stuff actually works, because as yet there are too many options and open questions in both the worlds of machine learning and neuroscience.

What do you mean by "emergent product", or more specifically, "emergent"? How does the mind - something often described as "immaterial" or "non-physical" - "emerge" from something described as the opposite - "material", or "physical"? Are you talking about causation or representation? Or are you simply talking about different views of the same thing? Bodies "emerge" from interacting organs. Galaxies "emerge" from interacting stars and hydrogen gas, but here we are talking about different perspectives of the same "physical" things. The "emergence" is a product of our different perspectives of the same thing. Do galaxies still "emerge" from stellar interactions even when there are no observers at a particular vantage point? There seems to be a stark difference between explaining "emergence" as a causal process and as different views of the same thing. The former requires one to explain how "physical" things can cause "non-physical" things. The latter requires one to explain how different perspectives can lead one to see the same thing differently, which has to do with the relationship between an observer and what is being observed (being inside the Milky Way galaxy as opposed to outside of it, or inside your mind as opposed to outside of it). So which is it, or is it something else?

Quoting VagabondSpectre
Also, you don't see my brain; you don't even see my experiences; you experience my actions as they express, which emerge from my experiences, as orchestrated by my brain, within the dynamics and constraints of the external world, and then re-filtered back up through your own sensory apparatus.

Well, I was talking about how, if I cut open your skull, I can see your brain, not your experiences. Even if I cut open your brain, I still wouldn't see something akin to your experiences. I've seen brain surgeons manipulate a patient's speech by touching certain areas of the brain. What is that experience like? Is it that you know what to say, but your mouth isn't working (knowing what to say seems to be different from actually saying it, or knowing how to say it (think of Stephen Hawking)), or is it that your entire mind is befuddled and you don't know what to say when the brain surgeon touches that area of your brain with an electric probe? How does an electric probe touching a certain area of the brain (a "physical" interaction) allow the emergence of confusion ("non-physical") in the mind?

Quoting VagabondSpectre
We cannot address the hard problem of consciousness, so why try? We're at worst self-deluded into thinking we have free will, and we bumble about a physically consistent (enough) world, perceiving it through secondary apparatus which turn measurements into signals, from which models and features are derived, and used to anticipate future measurements in ways that are beneficial to the objectives that drive the learning. Objectives that drive learning are where things start to become hokey, but we can at least make crude assertions like: "pain sensing neurons" (measuring devices that check for physical stress and temperature) are a part of our low level reward system that gives our learning neural networks direction (e.g: learn to walk without hurting yourself).

Solutions to hard problems often come from looking at the same thing differently. The hard problem is a product of dualism. Maybe if we abandoned dualism in favor of some flavor of monism, the hard problem would go away. But then, if the mind is of the same stuff as the brain, why do they appear so different? Is it because we are simply taking on different perspectives of the same thing, like I said before? Is it the case that:
Quoting Harry Hindu
Then the mind "thinking" how to catch a ball is the same as the brain "performing" mathematical calculations?


Quoting VagabondSpectre

There are a few obvious implications that come from understanding the "low-level" workings of biological intelligence (and how it expresses through various systems). I would say that I have addressed and answered the main subject of the thread, and beyond. A homunculus can learn to catch a ball if it is wired correctly, with a sufficiently complex neural network, sufficiently quick senses, and the correct reward signal (and of course the body must be capable of doing so).

Well, that's a first - a philosophical question that has actually been answered. Maybe you should get a Nobel Prize, and this thread should be a sticky. The fact that you used a scientific explanation to answer a philosophical question certainly makes me give you the benefit of the doubt, though. :wink: