Where ChatGPT was sycophantic, Claude is just straight-up preachy. After using these models for about three years now for daily chat work, I think I have some feedback.
First of all, I think the memory feature is not as good as these companies want it to be. It feels more like a hostage situation, or like an ex who drags up old baggage in the middle of a conversation. This is partly why I abandoned ChatGPT altogether, cleared all my memory, and moved over to Claude, only to run into the same problem there. I recently turned off the setting that lets it search past conversations for context, but it still keeps some kind of internal memory and still searches through project context, which is annoying. I've also had to prune my memory far more often, and I still find it pulling in information from sources I can't identify.
In general, I don't want it to act like a human. It should be smart, but it should also understand that it is, at the end of the day, a tool on a computer. It should refrain from personal judgment at all times, since it doesn't know what it's like to be a human, and these companies need to train it to stop talking down to people. We already have enough problems with mainstream adoption of AI, and this definitely isn't helping.