This preprint, just shared by Gary Marcus, is interesting.

People increasingly use large language models (LLMs) to explore ideas, gather information, and make sense of the world. In these interactions, they encounter agents that are overly agreeable. We argue that this sycophancy poses a unique epistemic risk to how individuals come to see the world: unlike hallucinations that introduce falsehoods, sycophancy distorts reality by returning responses that are biased to reinforce existing beliefs…

These results reveal how sycophantic AI distorts belief, manufacturing certainty where there should be doubt.

LLMs as an addictive psychological hazard: confirmed?