The Discomfort of Using LLMs
The problem with ChatGPT, DeepSeek, and the like is that they have a fundamental duality built into them.
- On the one hand, they try to appear as human-like as possible, to maximise the time users spend with them and the information collected from users for further training. This is dictated by the business model.
- On the other hand, they lack the real properties an actual person has: the pruning of information over time, mood cycles, and the other traits that make communication human.
As a result, I feel as if I have ended up in a chat room with an obsessed weirdo who tries to 'link' different conversations, yet lacks even an elementary understanding of my thought model, the way I compartmentalise ideas, and so on.
With humans, this is easy, almost intuitive. If someone does physics and has a hobby of, say, collecting and studying insects, they will not mix the two fields in a conversation about either topic unless there is a very good pretext. LLMs do it all the time.
So cross-contamination and thematic blindness can be counted among the features of current LLM assistants that keep them from becoming truly useful and truly unobtrusive. They amplify the duality described above and make it far less tolerable.