I'm going to be frankly honest and say I didn't even read what ChatGPT responded with. I think it is extremely dangerous to consult "answers" from AI chatbots, no matter how sophisticated, on questions like this. I will admit it can be a novelty or intriguing, but any answers we want to questions like that should be weighed against *human* feelings and sensations. Chatbots can **only** regurgitate text that has been fed into them, and their output is produced by an algorithm that pieces together predictive sentences from words that statistically occur adjacent to or near one another in other texts.
I've been discussing this whole chatbot thing with my students in library and information science this semester, so I freely admit this is a raw nerve, so to speak. But I think it's insidious and, in contexts like this and similar threads on the forum, frightening. I would encourage us, as Epicureans, not to succumb to the siren song of AI. Don't let its convenience and novelty lull us into consulting this technological oracle as if it had some great insight. It may provide some "food for thought," but we would be better served by growing our own food, to finish that metaphor.
Deep breath.... ... ...
Kalosyni, this is not directed at you in any way, and I apologize if all that came across as such. That's not my intent. But this seemed an opportunity to unload, as it were, and put all my cards on the table with regard to ChatGPT and its ilk.