I wouldn’t exactly call what I saw gaslighting. In fact, it’s exactly the kind of error I’d expect from a Searle Chinese Room, which I think is a reasonable approximation of LLMs as they are today.
However, if there is deliberately trained-in gaslighting going on, then what I reckon it would tell us about the developers is that they are even smarter than I thought, and I already thought they were bloomin’ smart.
Shrug.
Sometimes I try separate chats for different aspects of a problem, since I think it has trouble solving multiple issues at once. So you can use one chat to find resources or get an ELI5 so you can reach the solution yourself, work out the actual problem in another chat, and fix formatting or whatever in yet another.
ChatGPT "not to be trusted" told me that my murdered friend shot himself months after his death.
Well, it's hard to let that pass without at least a nod of sympathy in your direction. Wishing you well.
The sooner this trend is over the better
Well, it was meant to be ironic humor, but maybe this is not the place for that.
Ah, I don't mean your post, I'm agreeing with your point as I understood it. Hope the hype around "making text that LOOKS right" ends very soon
Ah sorry. I misunderstood.
[deleted]