PD 10 - Contemplating the limits of desires (with online help)

  • PD10 (Bailey):

    "If the things that produce the pleasures of profligates could dispel the fears of the mind about the phenomena of the sky and death and its pains, and also teach the limits of desires (and of pains), we should never have cause to blame them: for they would be filling themselves full with pleasures from every source and never have pain of body or mind, which is the evil of life."


    I typed a question on the limits of desires into ChatGPT, and here is the result:



    Do you think this enhances the understanding of PD10 and is in alignment with Epicurus' teachings?

  • I'm going to be blatantly and frankly honest and say I didn't even read what ChatGPT responded with. I think it is extremely dangerous to consult "answers" from AI chatbots, no matter how sophisticated, to questions like this. I will admit it can be a novelty or intriguing, but any answers we want to questions like that should be weighed against *human* feelings and sensations. Chatbots can **only** regurgitate text that has been fed into them, and their output relies on an algorithm that pieces together predicted sentences from text that statistically occurs adjacent to, or in proximity to, other text.

    I've been discussing this whole chatbot thing with my students in library and information science this semester, so I freely admit this is a raw nerve, so to speak. But I think it's insidious and, in contexts like this and similar threads on the forum, frightening. I would encourage us, as Epicureans, not to succumb to the siren song of AI. Don't let the convenience and novelty of this lull us into consulting this technological oracle as if it had some great insights. It may provide some "food for thought", but we would be better served by growing our own food, to finish that metaphor.

    Deep breath.... ... ...

    Kalosyni, this is not directed at you in any way, and I apologize if all that came across as such. That's not my intent. But this seemed an opportunity to unload, as it were, and put all my cards on the table in regard to ChatGPT and its ilk.

  • I thought this segment from this past Sunday's Last Week Tonight with John Oliver summed up the promise and peril of AI pretty well:

    External content: youtu.be (embedded video)


    Caveat: Please note that this show is on **HBO**, so viewer discretion is advised for language (at least). John Oliver doesn't pull any rhetorical punches.

  • I think it is extremely dangerous to consult "answers" from AI chatbots, no matter how sophisticated, to questions like this.

    No doubt, but it's also inevitable that millions (billions?) of people are soon going to be doing exactly that, so we'll need to explore this - just as you are doing - so we can figure out the best response.

  • I think it is extremely dangerous to consult "answers" from AI chatbots, no matter how sophisticated, to questions like this. I will admit it can be a novelty or intriguing, but any answers we want to questions like that should be weighed against *human* feelings and sensations.

    Perhaps the biggest danger is that a human being might somehow be tempted to give ChatGPT some kind of "authority status" and to think that it is smarter than any human being.


    For example, if you had to "weigh" this:

    -- Kalosyni says "xyz" vs ChatGPT says "xyz"


    Is it possible that some people out there would give more credit to ChatGPT?


    In some sense asking certain questions of ChatGPT is "lazy", and I myself could sit down and think and write out a list of possible answers, especially given my own knowledge of the world based on my 52 years of existence. But a much younger person, for example a teenager, won't have the knowledge to do that.


    And in some ways it isn't any different from asking the opinion of another human being. When I was attending a Buddhist Zen group, there would sometimes be new people asking questions that were very "simple" (almost cringeworthy), which they most likely should have just taken the time to answer for themselves (but the Zen teacher would answer anyway).

    No doubt, but it's also inevitable that millions (billions?) of people are soon going to be doing exactly that, so we'll need to explore this - just as you are doing - so we can figure out the best response.

    I think we should perhaps create a special section for this (with its own folder).


    I personally think that the particular question I started this thread with, "What are the limits of desires?", is a worthwhile one. And in some ways ChatGPT maybe didn't fully answer it, so I will need to think some more on it.

  • Perhaps the biggest danger is that a human being might somehow be tempted to give ChatGPT some kind of "authority status" and to think that it is smarter than any human being.


    For example, if you had to "weigh" this:

    -- Kalosyni says "xyz" vs ChatGPT says "xyz"


    Is it possible that some people out there would give more credit to ChatGPT?

    This is an excellent point!! That "authority status" is a real problem, especially when people accept a "good enough" answer and move on, heedless that they've been given a response entirely void of intellect, human feeling, and introspection (and I use "void" purposefully).

    I will always give more authority and respect to Kalosyni than any AI.

    AI's real promise (as John Oliver shows) is in narrow applications where huge amounts of data need to be winnowed and organized. And I wouldn't call that "authority", just utility.

    And in some ways it isn't any different from asking the opinion of another human being. When I was attending a Buddhist Zen group, there would sometimes be new people asking questions that were very "simple" (almost cringeworthy), which they most likely should have just taken the time to answer for themselves (but the Zen teacher would answer anyway).

    But at least they were asking questions and getting a human response.

    As a librarian, I cringe when I hear other librarians making fun of patrons' "stupid" questions. If someone has a question, they have a void in their information environment that they feel needs to be filled. The allure of things like ChatGPT is that people can get "good enough" answers and may never know whether they're reliable or even accurate.

    I personally think that the particular question I started this thread with, "What are the limits of desires?", is a worthwhile one. And in some ways ChatGPT maybe didn't fully answer it, so I will need to think some more on it.

    Oh, fully agree that it's worthwhile to ask that! I would add that ChatGPT didn't really even "answer" the question. It responded algorithmically with segments of text from its training data, mathematically splicing and dicing pieces it predicted would occur adjacent to one another and assembling them into grammatically comprehensible prose. There was no consideration, thought, reflection, scholarship, etc. ChatGPT supplies text that the reader imbues with "authority" that is not present in the product itself.
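
    For anyone curious what "splicing together statistically adjacent text" means in practice, here is a deliberately crude sketch: a toy bigram sampler in Python. It is nothing like the scale or architecture of a real chatbot (which uses a large neural network rather than a simple lookup table), but it illustrates the bare mechanism of producing a next word purely from which words were observed next to each other, with no understanding involved. The short corpus string is just an invented example.

    ```python
    import random
    from collections import defaultdict

    def build_bigrams(text):
        """Record, for every word, which words were observed immediately after it."""
        words = text.split()
        table = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            table[prev].append(nxt)
        return table

    def generate(table, start, length=12):
        """Generate text by repeatedly picking a word seen adjacent to the previous one."""
        word = start
        output = [word]
        for _ in range(length):
            followers = table.get(word)
            if not followers:                 # dead end: this word was never followed by anything
                break
            word = random.choice(followers)   # a "statistically adjacent" choice, nothing more
            output.append(word)
        return " ".join(output)

    # Invented mini-corpus, purely for illustration.
    corpus = ("the limits of desires are easy to reach and the limits of pains "
              "are easy to endure when the mind is free of fear")

    print(generate(build_bigrams(corpus), "the"))
    ```

    A real language model replaces the lookup table with billions of learned parameters, but the output is still a prediction of likely next text rather than a considered answer.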

  • I would characterize the "answers" that ChatGPT gives as a glorified version of the Magic 8 Ball, simply with a larger repertoire of available responses.

    For those younger readers:

    Magic 8 Ball - Wikipedia (en.wikipedia.org)

    At least the haruspex was relying on their own experience and interpretive "skills" when looking for "signs" in sacrificial entrails. (Not an endorsement of haruspicy, btw.)

  • I find ChatGPT to be a useful tool for filling in obvious holes in my own questioning, doing some rudimentary reasoning, and giving me a pretty good story with which to start a search for information. That story-crafting can save an enormous amount of time. I think of it as a next-generation search engine. It also generally reaches the limits of what sort of information is readily available through online sources, and when I actually get into deeper questions and minutiae, ChatGPT tends to dry up. It's also great to ask it for book titles for further research. Give it a few rounds of Socratic questioning and you will likely find some jumping-off points into academic research. It's going to change academia in some big ways that, on a more optimistic view, could actually be positive. Sure, there are a lot of disconcerting elements about it. We live under a regime of information-tech "progress", creative destruction, and disruption without a lot of social progress happening, so deep concern is a reasonable response.

    As for Epicurean studies, communities like this will always be a great place to really explore beyond the readily available knowledge and actually get to philosophizing ourselves, especially since the goal is to reconstruct an understanding of the original philosophy.

  • If the controversy over ChatGPT reminds everyone that "Question Authority" is *always* a good idea - at least when you are an adult - then it's a good thing. Seems to me that society at large has grown far too complacent in accepting whatever is thrown at it as the "truth," and most people need a healthy additional dose of skepticism.