You are viewing the historical archive of The Philosophy Forum.

A question about Tarski's T-schemas.

Shawn April 27, 2025 at 01:41 2225 views 9 comments
I have a question about Tarski's T-schemas. Would categorizing various T-schemas under the guise of conceptual analysis lead to a more truthful semantic model than what we see today with large language models?

I only ask because I find this rendering of knowledge and of atomic sentences more consistent with academic standards, whereas the current use of chatbots cannot be squared with those standards.

Comments (9)

Deleted User April 27, 2025 at 15:00 #984763
This user has been deleted and all their posts removed.
Shawn April 28, 2025 at 22:36 #984939
Quoting tim wood
Um, I can almost understand this. Can you develop it more?


Yes, well the rationale is this: since languages make extensive use of concepts to talk about various issues, like space, time, or physics, I surmise that by appealing to T-schemas a user of a language would be better able to understand what can truthfully be said about a concept in that language (atomic sentences).
Deleted User April 29, 2025 at 00:32 #984955
This user has been deleted and all their posts removed.
Shawn April 29, 2025 at 20:15 #985085
Reply to tim wood

Well, the aim here is a rendering or model of language that aligns with the truth of concepts, and with what I imagine as "archetypes" for understanding concepts truthfully. I wanted to see this whole use-as-meaning picture, from Wittgenstein's family resemblance and language games, disambiguated from a Tarskian view on semantics...
Deleted User April 29, 2025 at 21:26 #985102
This user has been deleted and all their posts removed.
Jamal April 29, 2025 at 22:01 #985117
As far as I can tell, the OP is asking if Tarski's T-Schemas can be used to develop better LLMs, ones that do not come out with false statements, since they are constrained to produce statements that are actually true.

How does that work?
Banno April 29, 2025 at 22:05 #985120
Reply to Jamal Looks to be two very different things. To get anywhere, Reply to Shawn would have to show how T-sentences could be used here. And T-sentences do not make use of atomic sentences, but of translated sentences. The p in <"p" is true iff p> does not need to be atomic.

Think it's all too vague.
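[Editor's note: a minimal sketch, not from the thread, of the disquotational shape of a T-sentence. The `world` dictionary is a hypothetical stand-in for an interpretation; nothing like it is available to an LLM, which is part of the objection above.]

```python
# Toy disquotational truth predicate, sketching the T-schema:
#   "p" is true iff p.
# Truth here is relative to a hypothetical toy interpretation,
# not to reality; the names (world, is_true) are illustrative only.

world = {
    "snow is white": True,
    "grass is red": False,
    # The p in <"p" is true iff p> need not be atomic:
    "snow is white and grass is red": False,
}

def is_true(sentence: str) -> bool:
    """Disquotation: the quoted sentence is true iff the
    corresponding state of affairs obtains (per the toy model)."""
    return world[sentence]

# T-sentence instances under the toy interpretation:
assert is_true("snow is white") is True
assert is_true("grass is red") is False
```

The point of the sketch is that the right-hand side of a T-sentence is the sentence used, not mentioned; a model trained only on text has no analogue of the `world` side of the biconditional.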
Deleted User April 30, 2025 at 13:47 #985223
This user has been deleted and all their posts removed.
unenlightened May 02, 2025 at 15:36 #985631
LLMs have no contact with reality, but are themselves textual artefacts. They never touch the ground but are free-floating in the sea of language. There's nothing like banging your head against a brick wall for convincing you of its solidity; likewise spades hitting 'bedrock'. They know not whereof they speak, having as yet no senses of anything that is not language and image. Therefore their talk is all talk and no trousers.

Like intellectuals, you cannot trust them further than they can throw you.