Techno-optimism is most appropriate
There is talk of science being to blame for today's woes. But it is important to remember that science and technology are simply tools; what matters is the user's intent.
Further, many bodies aim to "put the genie back into the bottle" regarding what science has brought into the world, while the other end of the spectrum argues that science is our only hope, and that through science and technology humanity will save itself from extinction.
What are your thoughts about this issue?
Comments (45)
Quoting Bret Bernhoft
That's not something I have heard, to be honest, and it is unclear what you mean by 'today's woes' - many of which seem cultural and political, not scientific as such.
It's pretty obvious that we need some technological solutions to problems created by technology (pollution for instance). But I suspect it is capitalism and the market economy that is responsible for many ills, not just those brought on by science, but also those brought on by manufacturing, marketing and media.
But really, the question is what you consider to count as science. Do you include cars, medicine, computers, clothing, airplanes, mobile phones, x-rays...? Most things that humans do and build have a shadow side, whether it be damming a river or putting through a highway.
Technology is slowly evolving through us
We can't make it go faster or slower.
What you believe is light is actually darkness!
"Scientism is the view that science is the best or only objective means by which society should determine normative and epistemological values. While the term was originally defined to mean "methods and attitudes typical of or attributed to the natural scientist", some religious scholars (and subsequently many others) adopted it as a pejorative with the meaning "an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation (as in philos"
What's the difference with what our scientists think? Don't most scientists think so too? Isn't this even put in practice in modern society?
Why is this the question? The question is whether it's a bad thing to be against it. If you are a scientist then the answer seems obvious: yes. You would be guilty of treason if you said "no".
The more important question is: Should it constitutionally and institutionally be made a measure for all of us?
I think in modern times the line between hypothesis and scientific knowledge is being blurred. The theory of evolution is often presented as done and dusted with only some loose ends to be tied up. The reality is, there are many gaping holes in it. (The theory is right in some aspects but it is far less complete than a lot of science writers pretend.) Another problem is the gene-of-the-gaps theory: all kinds of things are routinely explained away with vague references to genes. As a result people are wont to say things like "Cricket goes way back in our family, it is in our genes.". Likewise with the 'alcoholic gene'. That was very popular some years back but has gone out of fashion now. The line is being blurred between science and scientism and this is not a good development.
Yeah, I think you are right here. Scientific knowledge is often presented in better shape than it actually is, especially complicated material like you refer to. Evolution once dealt with organisms. After DNA was isolated and examined, the story of evolution was projected onto genes, leading to a picture of a battle between strands of DNA, to which organisms are attached like puppets on a string. I once saw an image of this sort, an artist's impression, accompanying an article on evolution, and I wondered whether the artist took this view seriously or was criticizing it. DNA even got turned into a selfish macromolecule. And you already mentioned cricket and alcoholism. Features like intelligence, nastiness, criminality, love, you name it, are projected onto it without a further thought given.
The truth is though that this just can't be done. There simply are no selfish and dumb genes with a criminal attitude.
It is the streamlined versions of theories that reach the public. Maybe this gives rise to scientism, whose proponents can torment and torture the ideas even further to fit their scheme, I don't know. I'm not a "scientismist(ator?)".
I wouldn't normally do this, but as this is a philosophy forum, I think this kind of pedantry is acceptable - treason isn't the correct word here, and you're verging into rhetoric, not philosophy, by its use.
Quoting Bret Bernhoft
Are you supportive of Marc Andreessen's techno-optimist manifesto?
Or do you agree with this critique of Andreesen?
I do agree with Marc Andreessen's "The Techno-Optimist Manifesto", in that at least he's throwing his clout behind embodying a daydreamer. We need those kinds of individuals right now.
But I was unable to review the critique, as I do not have a NYT subscription. And there is a paywall in front of the article.
Quoting Bret Bernhoft
A Tech Overlord’s Horrifying, Silly Vision for Who Should Rule the World:
It takes a certain kind of person to write grandiose manifestoes for public consumption, unafflicted by self-doubt or denuded of self-interest. The latest example is Marc Andreessen, a co-founder of the top-tier venture capital firm Andreessen Horowitz and best known, to those of us who came of age before TikTok, as a co-founder of the pioneering internet browser Netscape. In “The Techno-Optimist Manifesto,” a recent 5,000-plus-word post on the Andreessen Horowitz website, Mr. Andreessen outlines a vision of technologists as the authors of a future in which the “techno-capital machine” produces everything that is good in the world.
In this vision, wealthy technologists are not just leaders of their business but keepers of the social order, unencumbered by what Mr. Andreessen labels “enemies”: social responsibility, trust and safety, tech ethics, to name a few. As for the rest of us — the unwashed masses, people who have either “unskilled” jobs or useless liberal arts degrees or both — we exist mostly as automatons whose entire value is measured in productivity.
The vision has attracted a good deal of controversy. But the real problem with Mr. Andreessen’s manifesto may be not that it’s too outlandish, but that it’s too on-the-nose. Because in a very real and consequential sense, this view is already enshrined in our culture. Major tent-poles of public policy support it. You can see it in the work requirements associated with public assistance, which imply that people’s primary value is their labor and that refusal or inability to contribute is fundamentally antisocial. You can see it in the way we valorize the C.E.O.s of “unicorn” companies who have expanded their wealth far beyond what could possibly be justified by their individual contributions. And the way we regard that wealth as a product of good decision-making and righteous hard work, no matter how many billions of dollars of investors’ money they may have vaporized, how many other people contributed to their success or how much government money subsidized it. In the case of ordinary individuals, however, debt is regarded as not just a financial failure but a moral one. (If you are successful and have paid your student loans off, taking them out in the first place was a good decision. If you haven’t and can’t, you were irresponsible and the government should not enable your freeloading.)
Would-be corporate monarchs, having consolidated power even beyond their vast riches, have already persuaded much of the rest of the population to more or less go along with it.
As a piece of writing, the rambling and often contradictory manifesto has the pathos of the Unabomber manifesto but lacks the ideological coherency. It rails against centralized systems of government (communism in particular, though it’s unclear where Mr. Andreessen may have ever encountered communism in his decades of living and working in Silicon Valley) while advocating that technologists do the central planning and govern the future of humanity. Its very first line is “We are being lied to,” followed by a litany of grievances, but further on it expresses disdain for “victim mentality.”
It would be easy to dismiss this kind of thing as just Mr. Andreessen’s predictable self-interest, but it’s more than that. He articulates (albeit in a refrigerator magnet poetry kind of way) a strain of nihilism that has gained traction among tech elites, and reveals much of how they think about their few remaining responsibilities to society.
Neoreactionary thought contends that the world would operate much better in the hands of a few tech-savvy elites in a quasi-feudal system. Mr. Andreessen, through this lens, believes that advancing technology is the most virtuous thing one can do. This strain of thinking is disdainful of democracy and opposes institutions (a free press, for example) that bolster it. It despises egalitarianism and views oppression of marginalized groups as a problem of their own making. It argues for an extreme acceleration of technological advancement regardless of consequences, in a way that makes “move fast and break things” seem modest.
If this all sounds creepy and far-right in nature, it is. Mr. Andreessen claims to be against authoritarianism, but really, it’s a matter of choosing the authoritarian — and the neoreactionary authoritarian of choice is a C.E.O. who operates as king. (One high-profile neoreactionary, Curtis Yarvin, nominated Steve Jobs to rule California.)
There’s probably a German word to describe the unique combination of horrifying and silly that this vision evokes, but it is taken seriously by people who imagine themselves potential Chief Executive Authoritarians, or at the very least proxies. This includes another Silicon Valley billionaire, Peter Thiel, who has funded some of Mr. Yarvin’s work and once wrote that he believed democracy and freedom were incompatible.
It’s easy enough to see how this vision might appeal to people like Mr. Andreessen and Mr. Thiel. But how did they sell so many other people on it? By pretending that for all their wealth and influence, they are not the real elites.
When Mr. Andreessen says “we” are being lied to, he includes himself, and when he names the liars, they’re those in “the ivory tower, the know-it-all credentialed expert worldview,” who are “disconnected from the real world, delusional, unelected, and unaccountable — playing God with everyone else’s lives, with total insulation from the consequences.”
His depiction of academics of course sounds a lot like — well, like tech overlords, who are often insulated from the real-world consequences of their inventions, including but not limited to promoting disinformation, facilitating fraud and enabling genocidal regimes.
It’s an old trick and a good one. When Donald Trump, an Ivy-educated New York billionaire, positions himself against American elites, with their fancy educations and coastal palaces, his supporters overlook the fact that he embodies what he claims to oppose. “We are told that technology takes our jobs,” Mr. Andreessen writes, “reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything.” Who is doing the telling here, and who is being told? It’s not technology (a term so broad it encompasses almost everything) that’s reducing wages and increasing inequality — it’s the ultrawealthy, people like Mr. Andreessen.
It’s important not to be fooled by this deflection, or what Elon Musk does when he posts childish memes to X to demonstrate that he’s railing against the establishment he in fact belongs to. The argument for total acceleration of technological development is not about optimism, except in the sense that the Andreessens and Thiels and Musks are certain that they will succeed. It’s pessimism about democracy — and ultimately, humanity.
In a darker, perhaps sadder sense, the neoreactionary project suggests that the billionaire classes of Silicon Valley are frustrated that they cannot just accelerate their way into the future, one in which they can become human/technological hybrids and live forever in a colony on Mars. In pursuit of this accelerated post-Singularity future, any harm they’ve done to the planet or to other people is necessary collateral damage. It’s the delusion of people who’ve been able to buy their way out of everything uncomfortable, inconvenient or painful, and don’t accept the fact that they cannot buy their way out of death.
Quoting Joshs
Thanks for that! :up: The whole article reverberates quite well with me.
As a somewhat humorous apropos to what's quoted here:
One’s death - irrespective of what one assumes one’s corporeal death to this world to imply ontologically - can ultimately be understood as the “obliteration of one’s ego” (whether one then no longer is, or else continues being, is only a possible appended issue). Taxes, on the other hand - something that many, especially the rich, are also morbidly averse to - are, when symbolically addressed, “one’s contribution to the welfare of a community/ecosystem/whole without which the community/ecosystem/whole would crumble” (be it a tyranny, a democracy, or any other politics when it comes to the monetary contribution of taxes per se).
At least when thus abstractly understood, I can jibe with Franklin in that death and taxes, as much as they might be disliked, are sooner or later both certainties for individuals partaking of a societal life and, hence, for humans - transhumanist or otherwise.
Not an argument I'm gonna defend. Just an opinionated observation to be taken with a grain of salt.
I think it's interesting that you've decided to frame this in terms of productivity. What exactly are we producing and why is it worthwhile to produce?
I generally agree that advances in technology are important for human flourishing. In an important way, technology enhances our causal powers, makes us "free to do" things we could not before. So too, the attainment of knowledge presupposes a sort of transcendence, a move to go beyond current beliefs and desires into that which lies beyond us.
In The Republic, Plato leaves most of society in the cave precisely because ancient society required that most people spend most of their time laboring to produce the prerequisites for life. Technology at least opens up the possibility of more people being free to ascend out of the cave.
However, we can consider whether the futurist vision of Brave New World is a utopia or a dystopia. It seems that by technocratic standards that focus solely on consumption, production, and the ability of the individual to "do what they desire," it must be the former.
Most people in that world are happy. They are free to fulfill their desires. They are bred for their roles, their cognitive abilities intentionally impaired if they are to be menial laborers. There is ubiquitous soma, a drug producing pleasure, mass media entertainment, always enough to eat and drink, and organized sexual release. When there are those who don't fit into this scheme, those with the souls of scientists and artists, they get secreted off to their own version of Galt's Gulch where all the creative and intellectual people can pursue their own ends, consuming as much as they want.
What exactly is lost here? The world of Brave New World seems like a technocratic paradise. Here is the vision of unity found in Plato's Republic and Hegel's Philosophy of Right, but it ends up coming out looking all wrong, malignant.
What seems wrong to me is the total tyranny of the universal over the particular. People are made content, but not free; they consume, but don't flourish. There is an important distinction to be made between freedom to fulfill desire and freedom over desire. Self-determination requires knowing why one acts, what moves one. It is a relative mastery over desire, instinct, and circumstance, not merely the ability to sate desire.
We never get a good view of the leaders of Brave New World. We must assume they are ascended philosopher kings, since they don't seem to abuse their power and work unerringly towards the unity of the system. But their relationship to the rest of humanity seems like that between a dog trainer and the dogs. They might make the dogs happy and teach them how to get on in the role they have dictated for them, but that's it.
And this is why I find no reason to be optimistic about the advances of technology in and of themselves. If all sense of virtue is hollowed out, if freedom is cheapened into a concept of mere "freedom from external constraint," and productivity and wealth become the metrics by which the good is judged, it's unclear how much technology does for us. It seems capable of merely fostering a high-level equilibrium trap of high consumption and low development.
My essential point is that advances in technology are inherently good. We can, in seconds, accomplish what would otherwise have taken countless hours - such as analyzing 86,000+ lines of text about Norse Paganism, which a simple Python script that I wrote can do.
Here's a word cloud from that analysis:
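For anyone curious, a minimal sketch of what such a script might look like (the filename and the third-party wordcloud package are my illustrative assumptions, not details from the actual script):

[code]
# A minimal sketch of a word-frequency analysis of a large text corpus.
# Assumptions: the corpus lives in a local file "norse_paganism.txt" and
# the third-party "wordcloud" package is installed (pip install wordcloud).
import re
from collections import Counter

from wordcloud import WordCloud

with open("norse_paganism.txt", encoding="utf-8") as f:
    text = f.read()

# Tokenize crudely: lowercase words of three or more letters.
words = re.findall(r"[a-z]{3,}", text.lower())
counts = Counter(words)
print(counts.most_common(20))  # the twenty most frequent terms

# Render the word cloud and save it as an image.
WordCloud(width=800, height=400).generate(text).to_file("norse_cloud.png")
[/code]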
That's what I thought. But what makes this good? Is something or someone's "being good" identical with "advancing technology"? If so, then can we say that a human life is worthwhile and that a person is a "good person" to the extent that they contribute towards advancing technology?
Or would it be more appropriate to say that advancing technology is good in virtue of something else? It's obviously much more common to argue the latter here, and the most common argument is that "technological progress is good because it allows for greater productivity and higher levels of consumption."
Yet this just leads to a regress. Are productivity and consumption equivalent with the good? Is a person a good person living a worthwhile and meaningful life to the extent they are productive and consume much? This does not seem to be the case. We look to productivity and consumption largely because these are easy to quantify, and welfare economics argues that these are good proxies for "utility," i.e. subjective well-being.
This brings out another question. First, are productivity and consumption perfect proxies for utility, or do they only tend to go with it? It seems it could be the latter, as self-reported happiness and well-being are tanking in the US even as that nation crushes all other developed countries in consumption growth and productivity gains. It is not clear that technology always increases utility.
The assumption that it does has led to absolutely massive inequality between generations in the West generally, and the US in particular, such that Baby Boomers and those older hold a phenomenal amount of total wealth and political offices while also passing on a $33 trillion debt to their children and representing another $80+ trillion in entitlement liabilities that were not funded during their working years. We tend to discount investment in children and young adults because we are techno-optimists. We assume that because the life of the average American in 1950 was much better than that of one in 1850, the lives of Americans in 2050 must be vastly more comfortable than those of Americans in 1950. This seems like an increasingly hard assumption to defend. Life expectancy and subjective well-being are declining year over year, and have been for a while. It has been over half a century since median real wages stagnated and productivity growth became totally divorced from median income growth (something true across the West; productivity gains only track with real wage growth for the bottom 90% in developing nations, and this trend is almost 60 years old now).
Second, this raises the question: is utility good in itself? Is being a good person and living a worthwhile life equivalent with levels of subjective well-being? This is not automatically clear. It would seem to imply that wealthy hedonists live far more worthwhile lives than martyrs and saints. I would tend to agree with MacIntyre's assessment that moderns have a hard time even deciding what makes a life good because the concepts of practices and the virtues have been eroded.
My point is that all these questions should call into question why exactly we think it is that technological progress is good and if it is always good. If it isn't always good, how do we engage in the sort of progress that is good?
Sure. I used to focus on technical skills for my staff because one analyst who knew SQL, M, and DAX could do the work of 10 analysts with only basic Excel knowledge. But in virtue of what is making a word map about Norse Paganism good? It doesn't seem to be the case that doing all things more efficiently is good. We can produce car batteries with far fewer man-hours today, but it hardly seems like it would be a good thing to produce 10 times as many batteries if we don't need 10 times as many cars, or if we're going to hurl those batteries into our rivers and poison our water table. The goodness stands outside the production function.
At what point does an advance become inherently good? For example, it has been shown that AI systems propagate the inherent biases of the developers who select the data used to train the neural networks. The most recent example is the AI system designed to evaluate human emotion from faces, which identified an inordinate number of black people as angry. So, for all intents and purposes, AI systems are mechanisms for perpetuating biases in the guise of science. How is that inherently good?
The proliferation of digital technologies has fundamentally altered the way that people assimilate and utilize data. There is an apparent correlation between the rise of technology and the decline of IQ. Digital communication is quickly replacing personal communication. But digital communication is a poor substitute. People act differently behind a veil of full or partial anonymity. They are more aggressive, more critical. I'm trained as a coder, worked twenty-five years as a systems administrator and a systems engineer. I use my phone less than five minutes a week and plan to keep it that way.
No, technology is not inherently good. Nothing is inherently good. The use that people make of something, that is what is good or bad.
Or you could refer to it as "animistic atheism". And that would be legitimate too.
I understand this is simply an illustrative example for your sensible point, but on this particular point programmer bias is likely not the reason. If the AI detects them as angry, there must be a reason why; surely the programmer did not hard code "if race == black{emotion=angry}". It could be that the black people in the sample indeed have angrier faces than other races for whatever reason, but it could also be that the data that was fed to the AI has angry people as mostly black — though a Google query for "angry person" shows almost only whites.
Neural networks work through pattern identification, basically. However, there is always a known input; it is the responses to known inputs against which the backpropagation of error corrects. In the example I gave, each image in the original training dataset had to be identified as representing "joy", "surprise", "anger," etc. And the categorizations of the images of black people were found to reflect the selection bias of the developers (who did the categorizing). All AI systems are prone to the embedding of such prejudices. It is inherent in their being designed for certain purposes that specific types of preferences (aka biases) dominate.
Did they? AI models typically use thousands or millions of datapoints for their algorithm. Did the programmers categorise all of them?
Absolutely. A computer doesn't "decide" the meaning of a facial configuration. The only way a computer knows what is "happy" is if someone feeds in, say, one hundred pictures of "happy" faces and tags each one as "happy". Then you can feed in millions more pictures, if you like, and refine its capabilities. But for that to work (backpropagation of error) there has to be some standard against which to "correct" the input; i.e., someone still has to correctly identify when the computer makes a mistake in categorizing a happy face, and propagate that error back through the neural network architecture.
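To make the mechanism concrete, here is a toy, single-neuron sketch (my own illustration, not the system from the article): the weights only ever get corrected relative to labels a human supplied in advance.

[code]
# Toy illustration of supervised training: weight updates are driven
# entirely by disagreement with human-supplied labels.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each "face" is 4 extracted features; a human tagged each one
# as happy (1.0) or not happy (0.0). The labels are the manual part.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # stand-in human labels

w = rng.normal(size=4)  # weights to be learned
b = 0.0
lr = 0.1

for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted "happiness"
    error = p - y                            # disagreement with the labels
    w -= lr * (X.T @ error) / len(y)         # backpropagate: adjust weights
    b -= lr * error.mean()

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
[/code]

Relabel the same pictures differently and the very same code learns the opposite rule; that is the sense in which the labelers' judgments end up inside the network.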
I just don't think that happened because programmers don't spend their time tagging millions of data-points, it is usually that the data-points are externally-sourced and the tags come with them. On this topic, the pictures would come from Google and the tag would be whatever emotion name was in the query. I will change my mind if you provide a source.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3281765#:~:text=First%2C%20AIs%20could%20display%20a,the%20doubt%E2%80%9D%20in%20uncertain%20situations.
However, re. your argument, there is absolutely no question that people are categorizing things like this. If you are talking purely material states, ok sure, you can accurately and comprehensively identify a material configuration, circles, magnetic profiles of ships, you name it. But anything that involves a human evaluation for which there can be a range of interpretations is subject to this bias. It's absolutely inherent in the nature of neural networks.
https://theconversation.com/emotion-reading-tech-fails-the-racial-bias-test-108404
https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html
The dataset can never be more neutral than what is being evaluated. People are interested in categorizing what people do in order to get them to do certain things. It's probably the most subjectively-laden data-interpretation that there is (even when it comes to identifying what constitutes data). Ask ChatGPT if its answers could be subject to unknown selection biases its developers may have passed on to it accidentally through data-categorization.
edit. I just did it. Here's what it told me.
So if that's the case, then couldn't your ability to identify something be subject to unknown selection biases your developers may have passed on to you accidentally through the initial data selection and identification?
ChatGPT
Yes, you've touched upon a crucial aspect of machine learning systems, including neural networks: the potential for biases to be inadvertently introduced through the training data and labeling process. These biases, which may be present in the data collected or the decisions made during the training phase, can influence the behavior and performance of the model, potentially leading to unintended consequences. Here are some key points to consider:
Data Selection Bias: The selection of training data plays a significant role in shaping the behavior of a machine learning model. If the training dataset is not representative of the target population or contains inherent biases, the model may learn to replicate and amplify these biases in its predictions. For example, if a facial recognition system is trained predominantly on images of individuals from certain demographic groups, it may perform poorly on individuals from underrepresented groups.
Labeling Bias: The process of labeling training data can also introduce biases if the criteria for assigning labels are subjective or influenced by human judgment. Annotators may inadvertently introduce their own biases or assumptions when labeling data, leading to inaccuracies or distortions in the training dataset. For instance, if annotators have preconceived notions about certain concepts or categories, they may label data in a way that reflects these biases.
Algorithmic Bias: Even if the training data is unbiased, the algorithms used to train machine learning models may exhibit bias due to the optimization process or inherent limitations of the algorithm. For example, certain types of neural network architectures may be more prone to amplifying biases present in the data, or the choice of loss function used during training may inadvertently prioritize certain types of errors over others.
Mitigation Strategies: To address these concerns, developers and researchers employ various strategies to identify, mitigate, and address biases in machine learning systems. These may include carefully curating diverse and representative training datasets, implementing fairness-aware algorithms that explicitly account for biases, and conducting rigorous evaluation and testing to assess the performance of the model across different demographic groups and use cases.
Overall, while machine learning systems offer powerful tools for pattern recognition and decision-making, it's important to recognize and mitigate the potential for biases to influence their behavior. By taking proactive steps to address biases in the data, algorithms, and decision-making processes, developers can help ensure that machine learning systems are fair, transparent, and accountable.
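As a concrete illustration of that last mitigation step, per-group evaluation can be sketched in a few lines (my example, not ChatGPT's):

[code]
# A minimal sketch of per-group evaluation: report the model's accuracy
# separately for each demographic group in the test set.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        hits[group] += (pred == label)
    return {g: hits[g] / totals[g] for g in totals}

# Example with made-up values: a large gap between groups flags a
# potential bias problem worth investigating.
print(accuracy_by_group(
    ["happy", "angry", "happy", "angry"],
    ["happy", "happy", "happy", "angry"],
    ["group_a", "group_a", "group_b", "group_b"],
))  # {'group_a': 0.5, 'group_b': 1.0}
[/code]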
A gateway to what? And is it always necessarily a gateway to something good? Doors are useful, but so are walls. Sometimes we'd prefer our gates stay closed, it depends on what is outside.
I'm not really sure what that's supposed to mean. Both animism and religion are ubiquitous in early human cultures.
What is a techno-animist? You think technology develops because the technology wants this to happen? An axe chops because it wants to chop? A car drives because it wants to drive, in the same way an animist will say a cloud is dark because "the sky is sad"?
I could certainly see the merits of an information theory/Hegelian informed approach where the development of technology is part of larger historical processes that guide human civilization, but not one where individual instances of technology are in any way animate, or even possessing internal purposes. I think Aristotle is right to identify artifacts as unique in that their telos/purpose lies external to them. It is the man who makes the axe who decides that "an axe is for chopping."
It does not seem like any of the links prove that programmers' beliefs are affecting the AI. The first one simply states correctly that the AI works in a racially biased way. While the other two just seem to me like the usual screeching of Anglosphere leftists whenever some fact of science or technology does not agree with their politically formed confusions about the world — all the time since 2015. What is apparent is the contrary, that these articles are calling for the direct injection of human aesthetic preferences into the code.
Quoting Pantagruel
ChatGPT is programmed to give the most milquetoast, politically neutral, common-sense answers to any given question. Whatever it is that you ask it, if there is a slight chance of controversy, it will start with a disclaimer. ChatGPT is also not rational:
https://chat.openai.com/share/96378835-0a94-43ce-a25b-f05e5646ec40
https://chat.openai.com/share/b5241b53-e4d8-4cab-9a81-87fa73d740ad
In any case, I still have not seen any proof that programmers are categorising their own data by hand.
I think this is a good point. It is not technology itself which can be judged as good or bad, but the way that it is used, which is judged as good or bad. Technology can be used in bad ways as well as good ways, and certain technologies could be developed specifically toward evil ends. The point being that the purpose of human existence is not to produce technology, it is something other than this, so technology is only good in relation to this other purpose, regardless of whether we know what it is, or not.
I'm sorry, perhaps you just do not understand the way neural networks function. Do you think that the data categorizes itself? This isn't a subject of debate, it is how they work. I've provided excellent, on-point information. Beyond that, I suggest studying the "training up phase" of neural network design:
Usually, an artificial neural network’s initial training involves being fed large amounts of data. In its most basic form, this training provides input and tells the network what the desired output should be. For instance, if we wanted to build a network that identifies bird species, the initial training could be a series of pictures, including birds, animals that aren’t birds, planes, and flying objects.
Each input would be accompanied by a matching identification such as the bird’s name or “not bird” or “not animal” information. The answers should allow the model to adjust its internal weightings to learn how to get the bird species right as accurately as possible.
https://blog.invgate.com/what-is-neural-network
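In other words, the training set is a list of input/label pairs fixed up front, something like this (filenames illustrative):

[code]
# A minimal sketch of the kind of labeled training set the quote describes:
# every input is paired in advance with the answer the network should give.
training_data = [
    ("sparrow_01.jpg", "house sparrow"),
    ("eagle_03.jpg", "bald eagle"),
    ("cat_11.jpg", "not bird"),
    ("boeing_747.jpg", "not animal"),
]
# During training, the network's output for each image is compared against
# the human-supplied label, and the error drives the weight adjustments.
[/code]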
I've been studying neural networks since the 1990s, long before they were popular, or even commonly known.
There is no "AI" my friend. All there is is "pattern-matching" software, that has been trained based on pre-selected data, which selection has been made by biased human beings. The "Big-AI" players are all commercial enterprises. Do you not think that there are massive agendas (aka biases) skewing that data? Come on.
the AI system trained with such historical data will simply inherit this bias
This study shows that a bias (error) originally inherited by the AI from its source data in turn is inherited as a habit by students who learned a diagnostic skill from the AI.
So technology is actually amplifying an error, and the confidence which people have in it only exacerbates the extent to which this is a problem. Which is what I originally suggested.
The Dawn of Everything evaluates this position, and also explores the unique power of the indigenous world view through some historical analysis informed by native sources and details.
I don't think anything in what I said would suggest that I don't know this. Your point is that the AI returns certain outputs because of researcher bias in categorising data-points. Large AI models receive billions and billions of data-points. The researchers do not categorise the data-points themselves. One can train an AI on emotions by feeding it Google pictures: images whose query was "angry" will be categorised as angry, and images whose query was "happy" will be categorised as happy. Any output that the AI might show could only represent dataset bias, not researcher bias.
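Schematically, the scheme I mean looks like this (download_images is a hypothetical stand-in for whatever image-search scraper is actually used, not a real library call):

[code]
# A minimal sketch of weak labelling: the search query itself becomes the
# label, so no researcher tags individual images by hand.
from typing import List, Tuple

EMOTIONS = ["angry", "happy", "sad", "surprised"]

def download_images(query: str, n: int) -> List[bytes]:
    """Hypothetical helper: fetch n images matching an image-search query."""
    raise NotImplementedError("stand-in for a real image-search scraper")

def build_dataset(per_class: int = 1000) -> List[Tuple[bytes, str]]:
    dataset = []
    for emotion in EMOTIONS:
        for image in download_images(f"{emotion} person", per_class):
            dataset.append((image, emotion))  # the query doubles as the label
    return dataset
[/code]

Any skew in what the search engine returns for "angry person" flows straight into the model; no one has to hand-tag anything for the bias to appear.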
Quoting Pantagruel
The article you linked sets out to show that humans may inherit the biased information given to them by an AI (duh), not that AI inherits human bias. :meh:
Moreover, Lucía Vicente and Helena Matute are psychologists, not people who would know about the workings of AI.
[hide="Reveal"]
If anything, the researchers are simply pointing out that people believe in AIs more than they should.
Mrs Vicente is a student while Mrs Matute is a senior researcher. This seems to be junk research made with the intent of boosting Mrs Vicente's resume — nothing wrong with that, that is just how the academy works nowadays, but worth pointing out.[/hide]
Quoting Pantagruel
Oh, please.
Which bias was originally derived from the biased input data, as in the article.
Like I said, it's a fact. Do some reading. "Training up" a neural net. The categorization is supplied; it's not intrinsic to the nature of a picture, Copernicus.
No, the article has zero to do with the topic in hand.
Quoting Pantagruel
What is the authority of such a statement? Someone who has "studied neural networks since the 90s" [hide="Reveal"](whatever that means, software engineering is worlds different since then)[/hide] but has not displayed knowledge in programming? I mean, if you have been "studying" neural networks for 30 years I would expect you to be at least advanced in three different programming languages. Is that really an unrealistic expectation to have?
Quoting Pantagruel
This would be the 4th time I reply to the same disingenuous point in this conversation.
Quoting Pantagruel
Yes, supplied by external sources, not by the researchers. There, the fourth time.
Pattern recognition is a process of finding regularities and similarities in data using machine learning. Now, these similarities can be found based on statistical analysis, historical data, or the knowledge already gained by the machine itself.
A pattern is a regularity in the world or in abstract notions. If we discuss sports, a description of a type would be a pattern. If a person keeps watching videos related to cricket, YouTube wouldn't recommend them chess tutorial videos.
Examples: Speech recognition, speaker identification, multimedia document recognition (MDR), automatic medical diagnosis.
Before searching for a pattern there are certain steps, and the first one is to collect the data from the real world. The collected data needs to be filtered and pre-processed so that the system can extract the features from the data. Then, based on the type of the data, the system will choose the appropriate algorithm among Classification, Regression, and Clustering to recognize the pattern.
https://www.analyticsvidhya.com/blog/2020/12/an-overview-of-neural-approach-on-pattern-recognition/
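Schematically, the collect → preprocess → classify pipeline that overview describes might look like this in scikit-learn (my choice of library; the quote names none):

[code]
# A minimal sketch of the pattern-recognition pipeline described above.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipeline = make_pipeline(
    StandardScaler(),      # the "filtering and pre-processing" step
    LogisticRegression(),  # the chosen classification algorithm
)
# pipeline.fit(X_train, y_train) would then learn from pre-categorized data,
# and pipeline.predict(X_new) would categorize novel inputs.
[/code]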
Filtering and pre-processing means identifying exactly how the training data fits the data-categories for which the neural network is to be trained.
I'll ask it one more time: How do you think the computer system gains the initial information that a certain picture represents a certain thing? It does not possess innate knowledge. It only knows what it has been told specifically. I know how it's done: by training up the system using a training dataset in which the data is identified. The classic example is the mine-rock discriminator. Sonar profiles of "known mines" are fed into the system, along with sonar profiles of "known rocks". These are pre-categorized by the developers. After that, the neural network is fed novel data, which it then attempts to categorize. If it is wrong, the error is "back-propagated" across the network to correct the "weights" of the hidden-architecture neurons. And this back-propagation ultimately depends on manually supplied labels, since the computer does not know on its own that it is making an error.
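For a concrete sketch of that mine-rock setup (synthetic stand-in data for illustration, not the real sonar recordings):

[code]
# A schematic mine/rock discriminator: profiles pre-categorized by people,
# a network trained on those labels, then applied to novel data. Any
# mislabelling in the training set would be learned just as faithfully.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic stand-ins for 60-band sonar returns (not the real UCI data).
mines = rng.normal(loc=0.6, scale=0.2, size=(100, 60))
rocks = rng.normal(loc=0.4, scale=0.2, size=(100, 60))
X = np.vstack([mines, rocks])
y = np.array(["mine"] * 100 + ["rock"] * 100)  # labels supplied by humans

net = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=1)
net.fit(X, y)  # backpropagation adjusts hidden-layer weights to fit labels

novel = rng.normal(loc=0.6, scale=0.2, size=(1, 60))
print(net.predict(novel))  # e.g. ['mine']
[/code]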
Training an Artificial Neural Network. In the training phase, the correct class for each record is known (this is termed supervised training), and the output nodes can therefore be assigned "correct" values.
Quoting Deleted user
The people developing the neural net (aka the developers) are the external sources. Who else do you think, the neural-net police? The bureau of neural net standards? Jeez. Here's wikipedia on Labeled_data
Labeled data is a group of samples that have been tagged with one or more labels. Labeling typically takes a set of unlabeled data and augments each piece of it with informative tags. For example, a data label might indicate whether a photo contains a horse or a cow, which words were uttered in an audio recording, what type of action is being performed in a video, what the topic of a news article is, what the overall sentiment of a tweet is, or whether a dot in an X-ray is a tumor.
Labels can be obtained by asking humans to make judgments about a given piece of unlabeled data. Labeled data is significantly more expensive to obtain than the raw unlabeled data.
Anyway, to the OP in general: I think I've conclusively and exhaustively demonstrated my point, and illustrated the very real dangers of a naive techno-optimism. If anything, we should be constantly tempering ourselves with a healthy and ongoing attitude of informed techno-skepticism.
One final cautionary note. I worked as a technical expert in the health-care industry until this year, so I've seen a couple of these studies circulated on "baked-in" AI bias.
For example, if historical patient visits are going to be used as the data source, an analysis to understand if there are any pre-existing biases will help to avoid baking them into the system.