You are viewing the historical archive of The Philosophy Forum.

Pierre-Normand

Comments

I can certainly forward your request to GPT-4. I will then edit this post to include GPT-4's response. I can then discuss it with you, if you want, in...
April 10, 2023 at 05:11
Epilogue to the previous post PN: I don't easily imagine O'Neil voicing that last intervention consistently with her general outlook but we can revisi...
April 10, 2023 at 04:41
GPT-4 impersonates Janet Doe to interview Cathy O'Neil and Seth Stephens-Davidowitz about their books: Weapons of Math Destruction: How Big Data Incre...
April 10, 2023 at 04:36
Might you be able, by the same method, to draw a picture of two stick figure characters, with one of them offering a flower to the other one? GPT4's d...
April 09, 2023 at 14:29
Asking GPT4 to draw a picture of a unicorn. Hi GPT4! Do you think you might be able to draw a unicorn in TikZ? GPT4: Hello! As a text-based AI, I'm un...
April 09, 2023 at 13:38
I had another attempt at discussing with Open Assistant. It makes amusing errors, gets a little grumpy at times, and occasionally goes into semi-coher...
April 09, 2023 at 12:41
@"Benkei" Do Language Models Need Sensory Grounding for Meaning and Understanding? YouTube video in which Yann LeCun, Brenden Lake and Jacob Browning ...
April 09, 2023 at 10:18
I just had a conversation with a new language model called Open Assistant, that can be tried online and is set to be released (and downloadable) a few...
April 09, 2023 at 07:05
This is a follow-up discussion with GPT-4 regarding our discussion of human-AI conversation etiquette and its production of comedic bits. We now compa...
April 09, 2023 at 05:26
I discussed my reply to @"Alkis Piskas" above with GPT-4 and asked it what some comedians might think about those issues. Ladies and gentlemen, let's ...
April 09, 2023 at 00:47
I know you were commenting in jest but I can nevertheless see three reasons for being polite toward an LLM besides those mentioned by T Clark. 1) With...
April 08, 2023 at 21:11
This is a very nice example. Clever diagnosis of the mistake by the user (yourself I assume!). I often notice when GPT4 produces a nonsensical answer ...
April 08, 2023 at 20:36
They can't really tweak the code since the model's verbal behavior is an emergent feature. What they can do is fine-tune it to favor certain "styles" ...
April 08, 2023 at 20:12
I usually call it an "it" or a "they" but I occasionally slip up. I never enquired about its gender identification but someone reportedly inquired wit...
April 08, 2023 at 19:41
Getting started on a discussion with GPT4 about the source of its ability to reflect on human motivations and offer contextually appropriate practical...
April 08, 2023 at 12:07
Pushing GPT4 beyond its design limits, thereby leading it into incoherence and hallucination, and bringing it back to sanity again. (The problem exa...
April 08, 2023 at 09:58
A very short discussion with GPT-4 regarding its hallucinations and Frankfurtian Bullshit. Hi GPT4! I assume you must be familiar with Harry Frankfurt...
April 08, 2023 at 07:53
Subjecting GPT4 to more tests to probe the extent of its emergent ability of implied cognition, and relating it to a hypothesis about the manner in w...
April 08, 2023 at 07:01
There is a nice Wikipedia article that discusses the propensity large language models like ChatGPT have to hallucinate, and what the different source ...
April 08, 2023 at 06:23
That's interesting but in your example above, you had set its agenda from the get go. When you ask it what its opinion is on some question, it usually...
April 07, 2023 at 12:15
When investigating with GPT-4 its own emergent cognitive abilities, I often resort to de-anthropomorphising them so that it would at least humor me. E...
April 07, 2023 at 11:25
They don't add rules, they perform human-supervised training of the neural network in order to tweak its behavior in favorable directions. It is on th...
April 07, 2023 at 11:19
Indeed! It is worth noting that large language models, prior to fine-tuning, feel free to enact whatever role you want them to enact (or sometimes ena...
April 07, 2023 at 10:56
For better or worse, the fine-tuned models have lots of taboos. They're meant to be safe to interact with schoolchildren, with people who have mental ...
April 07, 2023 at 10:33
True. It would be hard to bring the U.S.A., and Russia, and China, and Iran, and everyone else, on board. Furthermore, as I mentioned to T Clark, putt...
April 07, 2023 at 07:35
They're not useful for what? That depends what you want to use them for. Programmers find them useful for generating and debugging code, with some cav...
April 07, 2023 at 06:30
I think this is a genuine concern. This is similar to one of the arguments Robert Hanna makes in his recent paper: Don't Pause Giant AI Experiments: B...
April 07, 2023 at 06:03
That someone else isn't OpenAI. Maybe the user was negligent. The issue is, was there malicious intent or negligence on the part of OpenAI? Yes it is ver...
April 07, 2023 at 04:29
Language models very often do that when they don't know or don't remember. They make things up. That's because they lack reflexive or meta-cognitive a...
April 07, 2023 at 03:59
On account of OpenAI's disclaimers regarding the well-known propensity of language models to hallucinate, generate fiction, and provide inaccurate info...
April 07, 2023 at 03:52
I asked GPT4 and it proposed: GPT4: "Hello! This riddle is an interesting one, and it can have multiple interpretations depending on the information p...
April 07, 2023 at 02:14
Exploring with GPT4 the structure of its working memory in the context of its (modest) ability to perform mental calculation. GPT4 initially pushes ba...
April 06, 2023 at 11:56
Exploring GPT-4's Emergent Abilities in Complex Mental Tasks (Implied Cognition) Before Responding to Queries In simpler terms, GPT-4 does more than j...
April 06, 2023 at 08:36
Discussing downward-causation, emergence, rationality, determinism and free will with GPT-4, and summing things up with a story about Sue. (Also: Jaeg...
April 06, 2023 at 06:34
I suppose the question is borderline with respect to ChatGPT's (GPT-4's ?) reinforced policy regarding specific individuals, so it can provide inconsi...
April 05, 2023 at 13:18
Chatting with GPT4 about David Wiggins' distinction between the general/specific distinction and the universal/particular distinction, and drawing a q...
April 05, 2023 at 11:39
Yet another research milestone achieved. Pursuing the experiment with another GPT4 instance and achieving feats of implied cognition. (The beginning o...
April 05, 2023 at 09:25
The way I understand it (and I have very little expertise on this) is that the "raw" model isn't even very apt at conducting a conversation in dialogu...
April 05, 2023 at 06:51
New breakthrough! (Follow-up to the previous post) Exploring further with GPT4 its emergent abilities of implied cognition, diagnosing possible source...
April 05, 2023 at 06:28
Were all the requests made during the same conversation?
April 05, 2023 at 04:11
I don't mind at all, quite the contrary! I'm always delighted to see what ideas other people come up with to explore GPT4's quirks and abilities, and wha...
April 04, 2023 at 10:14
Discussing with GPT4 its ability to think things through before responding, and relating this ability to the concepts of emergence and downward (or to...
April 04, 2023 at 09:15
Yes, biases can be created roughly in that manner. But the "self-attention" mechanism that guides the process of finding relevant patterns in the tex...
April 04, 2023 at 06:43
It would indeed seem that GPT-4 is liable to endorse dominant paradigms as its default "opinion," although when other rival paradigms have a significa...
April 04, 2023 at 03:46
Neat! I'd be interested in watching it if you think you can easily track this video down.
April 04, 2023 at 03:35
This was a very clever approach! This seems to be a specific implementation of the more general Zero-shot-CoT (chain of thought) strategy that help im...
April 04, 2023 at 02:02
Perform actual miracles, I guess ;-)
April 04, 2023 at 01:25
GPT-4 is very good at explaining how elements of a story contribute to the narrative structure. It's equally good (though not infallible) at explainin...
April 04, 2023 at 00:56
@"Isaac" Of course I just couldn't resist having a little chat with GPT4 before going to bed. PN: Hello GPT4, I was crafting an answer to a discussion...
April 03, 2023 at 16:58
It doesn't really have a repository of texts that it can look up into. It rather has a been trained to predict the next word in the texts that made up...
April 03, 2023 at 16:52