The thing about language models is that they're just math equations with linguistic assignments. As they parse out their sentences, the math side is looking for the most likely continuation of the sentence or paragraph. So whatever the model was trained on led it to believe that that was the most likely sequence of words. It was likely trained on a whole collection of philosophical works as well as "the Pile". I had at one point considered doing the same thing: training an Epicurean chatbot and seeing what it would output. But honestly, I'm really disappointed with the reliability of the data coming out of the current models. From what I've seen, it will be another 3-4 full generations of the tech before it's really reliable. Right now it's more like a parlor trick than a real tool.
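To make the "most likely continuation" idea concrete at toy scale (this is just a sketch with a made-up corpus, nothing like a real LLM): a bigram model counts which word follows which in its training text, then greedily emits whatever next word has the highest count. Whatever it was trained on determines what it considers "most likely".

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus (illustration only)
corpus = ("the model predicts the next word and "
          "the next word follows the last word").split()

# Count bigrams: how often each word follows each other word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_greedy(word, steps=4):
    """Repeatedly pick the single most likely next word."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_greedy("the"))
```

Change the corpus and the continuations change with it, which is the whole point: the output only reflects the training data's statistics, not any understanding.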
I found the same thing. I thought it could be an effective research assistant, but it's unreliable, never up to date with the latest research, and it presents a huge opportunity to exploit confirmation bias by training it to answer selectively. So I'm not impressed by ChatGPT.