General Identification of the Argument in "On Methods of Inference"

  • Elayne and I have been having some discussions which I think are going to lead back to the material in "On Methods of Inference" ("OMOI"), and I would like to tackle the task of identifying "Just What Is The Argument Presented in On Methods of Inference?" In other words, before getting into the details, can we at least begin to get a handle on what the argument was about?


    The primary source material I have found most helpful on OMOI is the book by Phillip and Estelle De Lacy. Their introduction and their endnotes are extensive and I think bring some degree of clarity to a very complex topic, but even with all those notes it is difficult to get a handle on what the issues were, and what the Epicurean position was on those issues. So in the following excerpts and comments I am going to try to make a start at grasping the big picture, and I hope others will see what they can do to help.


    First, I think I have identified two key paragraphs in the introduction which purport to be a summary of the main issues of the work. Unfortunately the meaning of the terminology in them about "contraposition" and "common and particular signs" is not immediately clear, but at least this gives us a place to start. In the end, it appears to me that we're ultimately after a formula by which we can decide how to attack things for which there is no direct evidence. In other words - in legal terms - we are talking about the proper method of using circumstantial evidence, and when (if ever) it is possible to state a conclusion with confidence based on evidence that is only circumstantial. This material, and the excerpts that follow, begin on page 13 of the text.



    I am going to read more and enter more comments on this thread, but if someone already has a command of this material and wants to try to short-circuit the need for a deep dive into this subject, please feel free to jump in and save us all some time!


    Failing that, I think this is an issue that underlies a great deal of Epicurean philosophy, and explains how it differs from competing philosophies, and also probably explains how different people who consider themselves to be within the Epicurean tradition can find themselves reaching different conclusions based on much the same evidence. I don't want to distract Susan Hill from her current project, but I think the issues involved here are going to have a deep impact on how we should understand the conclusions of Epicurus on divinity as well as on many other matters.


    So from here let's go further and see what we can read from the signs.

  • One issue that has already come up in recent discussions is that posed by "exceptions to what we think is a general rule." Does not the frequency with which we discover exceptions to the rules which we think we know show that it is improper ever to generalize by analogy, from matters we have observed, to assert a conclusion about matters on which we have no direct evidence? In response to that, check this paragraph:


  • As to this paragraph, it is necessary to elaborate on what is meant by "contraposition." I need to look that up again and come back with a fuller definition, but I think it is safe to generalize and say that "contraposition" refers to a method of reasoning using a logical syllogism, or in even simpler terms, "an argument based on logic." I think the meaning of this paragraph is that the Epicureans held that arguments based on at least a certain type of logic are "valid only in so far as they are supported by analogy." It's tempting to rewrite that as "arguments based on a certain type of logic are valid only in so far as they are supported by direct evidence," but it seems likely to me that "reasoning by analogy" is actually a reference to "reasoning by circumstantial evidence."
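
    As a side note while we pin down the ancient terminology: in modern logic, "contraposition" names the equivalence between "if P then Q" and "if not Q then not P." This is my own illustrative sketch, not anything from the De Lacy text, but a few lines of Python can verify the equivalence exhaustively:

```python
from itertools import product

def implies(p, q):
    """Material implication: "if p then q" is false only when p is true and q is false."""
    return (not p) or q

# Check that an implication and its contrapositive agree for every
# combination of truth values of P and Q.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)

print("implication and contrapositive agree in all four cases")
```

    That equivalence is what makes contraposition a purely formal move, which is part of why the question of whether it needs support from analogy or observation mattered so much in the ancient debate.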




    In answer to the question "When is it proper to reason by analogy and when is it not?" we have this:


  • This one I include because I think it should remind us of the very opening section of Lucretius, where Epicurus is praised for exploring nature with his mind and coming back to us to explain the "limits and boundaries set forever" which he found in his exploration:


  • "As long as the differences are uniform" is a critical clause. A single exception _can_ disprove a general hypothesis in some cases. It depends on the hypothesis, and on whether the exception is predictable/uniform. A single exception, if a clear mechanism can be found which has already been validated, could be called predictable.


    For example, if the hypothesis is "Boeing jets never crash", one crash which can be clearly demonstrated by the black box to be owing to a mechanical failure known to reliably cause crashes, perhaps with evidence of a coverup, would be enough to disprove that general statement. We would have to change it to "they rarely crash."


    The original hypothesis then must be modified to "x thing always happens except under y condition." Exceptions give us useful information, and they do require refinement of an initial "always" hypothesis.
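
    The refinement step described above can be put in code. This is a toy model of my own with invented observations, not anything from the texts, but it shows how a single predictable exception falsifies the universal form of a rule and supplies the "except under y" clause:

```python
# Toy sketch of refining an "always/never" hypothesis when a predictable
# exception appears. The observations and condition names are invented.
observations = [
    {"outcome": "landed safely", "condition": "normal operation"},
    {"outcome": "landed safely", "condition": "normal operation"},
    {"outcome": "crashed", "condition": "known mechanical failure"},
]

def never_crashes(obs):
    """The original universal hypothesis: no crashes at all."""
    return all(o["outcome"] != "crashed" for o in obs)

def crash_conditions(obs):
    """Conditions under which the exception occurred, for refining the rule."""
    return sorted({o["condition"] for o in obs if o["outcome"] == "crashed"})

# One counterexample is enough to falsify the universal form...
assert not never_crashes(observations)
# ...and the exception's condition becomes the "except under y" clause.
print("refined: never crashes except under", crash_conditions(observations))
```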


    If an exception is not predictable or explained yet, it can be held in reserve in case it isn't really an exception-- maybe something happened to make the original conditions different from the stated hypothesis, etc. An example would be if the black box can't be found or doesn't show an obvious problem, or it shows something that hasn't caused crashes before and investigators aren't sure if it crashed or was shot down, or maybe it was a counterfeit plane, etc.


    *** One general rule being brought down by an exception does not mean all general rules will be-- only if exceptions were uniform would that be the case.


    So far this looks to me like part of the history of science. Although reasoning by analogy was a stopping point then, continued observations of nature have taught us analogy is insufficient. It can generate hypotheses which then are tested. Testing of hypotheses-- making predictions based on a hypothesis and observing the results-- had not been discovered yet. Still, it's a direct descendant of the insistence Epicurus had on making observations directly, so I think it falls into the category of a detail that requires refinement but is still consistent with the high-level view.

  • This is the last of the major points I wanted to drop here before coming back later. The Epicureans were referring to something known as "inconceivability":


  • Although reasoning by analogy was a stopping point then, continued observations of nature have taught us analogy is insufficient. It can generate hypotheses which then are tested. Testing of hypotheses-- making predictions based on a hypothesis and observing the results-- had not been discovered yet.

    That paragraph from Elayne points to a series of questions that will require a lot of detail, starting at least with:


    1. "Have taught us that analogy is insufficient." That is the question. What was the Epicurean method in full, and how did they deal with the obvious issues that can arise from use of analogy? We know they were using analogy in part, but probably not in whole and alone, and apparently they were trying to tie analogy as tightly as possible to empirical observation. There is apparently a lot of detail in the texts that do survive, as they were challenged in their methodology by the Stoics, and they composed extensive responses in reply.
    2. "Testing of hypotheses... had not been discovered yet." I suspect that that will require a lot of review in order to predict how the Epicureans would respond to that. I think that's really the issue here, that of grasping a workable understanding of the issues involved that can be understood by a normal person and applied in real life -- because if all we come up with is a hugely complicated formula with a lot of variables, our result isn't usable in real life, and we are left back with a "faith" issue of how to pick those scientists whose methods we don't understand, but whom we decide to trust.

    That's why I think Philodemus' book is particularly useful as it helps us flesh out these issues so we can come to something understandable and workable.

  • I think that's really the issue here, that of grasping a workable understanding of the issues involved that can be understood by a normal person and applied in real life -- because if all we come up with is a hugely complicated formula with a lot of variables, our result isn't usable in real life, and we are left back with a "faith" issue of how to pick those scientists whose methods we don't understand, but whom we decide to trust.

    I think a quote commonly attributed to Einstein but which has an unknown source is relevant: everything should be made as simple as possible, but no simpler. Part of our difficulty showing modern humans that the universe is material is that so many people lack adequate science and math education. To get the evidence they need, now that we know more, people are going to need to put in some effort. Fortunately, there are many excellent popular physics books out there, very readable, such as Vic Stenger's work.


    To try and reassure people with explanations we know are outdated and not accurate is a bad idea, because it erodes trust and leaves them with no good arguments against supernaturalists. Giving people reassuring sounding but incorrect information is no better than false religion. It leaves them open to believing in things like ESP type images of gods and so on.


    People who are not able or willing to learn some physics are likely not going to be able to withstand supernaturalists anyway, I suspect, because a savvy supernaturalist can give arguments against the outdated details in Epicurean philosophy and cause the person to lose confidence. But that is a hypothesis which could be tested-- what approach provides the most resistance to supernatural fears and to further incorrect ideas about material reality? What inoculates people against woo that could backfire on their pleasure? That's a social science question to study-- not something to guess at with logic. Sometimes the answers to that kind of question are surprising!

  • As far as analogy having been a stopping place back then, I should qualify that by saying sometimes it was. Definitely there were very detailed observations also. But sometimes, at least in Lucretius, analogy is used to assert a conclusion, and we've seen that many of those details concluded from only analogy were incorrect. I would use analogy as a hypothesis generator though.


    So far I have not seen a clear example of Epicurus testing a hypothesis by making a prediction and then seeing if experimental observations fit. If he did, that would be amazing, but I think that method came about much later.

  • Just came across this and didn't know if it would be helpful. Posting here for reference:

    https://orb.binghamton.edu/sagp/157/

    I'll claim ignorance here, but was Aristotle also one who amassed evidence before attempting to come to any conclusion? I see articles claiming Aristotle as originator of the/a "scientific method." Or is that because Aristotle was more palatable to the Christians and was allowed to have his writings survive?

  • This is all very complex but I think the Epicureans would assert that reasoning by analogy is in fact the very definition of amassing evidence before coming to a conclusion, and of what is today thought of as the best scientific method.


    It's inconceivable that the Epicureans would have turned their back on any true discoveries of Aristotle or anyone else, or would have failed to use a common-sense approach to problem solving such as testing alternatives before choosing among them. It seems to me the issue is more probably how they chose to handle the philosophical implications of limitations in evidence, which are always inherent in beings that are not "omniscient." That's the most basic level of this issue, I think - recognizing that we never have all the information we would like to have, and deciding how to move forward given that fact.


    I think in this review we want to examine Frances Wright's extended discussion of observation vs. theory in AFDIA. I still tend to think that her analysis there ends up being the conclusion of one line of thinking on this topic, but I am not sure anymore how to categorize it. At the moment I am only 50% confident that it follows the position that Elayne is asserting, but I think there is at least that 50% chance that it does.


    The only thing I am 100% confident of is that the topic we are discussing now is of extreme importance and that I (and I think many of us) have not devoted sufficient time to it.

  • Is everyone clear on the difference between analogies, making observations before conclusions, and the scientific method, which is a more accurate way of testing observations? It's hard for me to tell from the conversation so far.


    Observations would be:

    I've observed that on the visible level, everything I see is made of parts, and those parts have various features like being smooth or rough, heavy or light, etc. I haven't seen any exceptions.


    Analogy based on observations would be: I've also seen phenomena that could only be explained if these visible particles were composed of particles too small to see. I am concluding that the smallest particles also have features like smooth, rough, different shapes, etc, like the visible ones, and many visible behaviors of matter would make sense to me if that is correct. This is the kind of process we see outlined in Lucretius.


    A modern analogy would be: I have tested this drug in mice, which are mammals, and it works, so it will work in humans. Or, I've tested this drug in a bunch of humans and it worked for them so it _will_ work for you. Which many people, including some doctors, think is what we do, but it's really not, lol.


    The modern process that I have not seen in Epicurus is: I've made a lot of observations about matter. I have some ideas about how that might be happening-- about the mechanisms and the composition of matter too small to see. I am going to form a specific, falsifiable hypothesis which makes a prediction about how matter will behave under certain circumstances-- how it would behave if my hypothesis is right-- and I'm going to run experiments. I'm going to repeat those experiments many times and see if other scientists can repeat them and get the same results. I'm going to get direct sensory observations by building instruments sensitive enough to register what my eyes can't, as an extension of my senses. I will be using my senses to make direct observations under strictly controlled conditions.


    I'll remain aware of possible confounding factors I haven't controlled my experiments for, and I will also do other experiments to test my hypothesis. After multiple different types of evidence have been obtained by multiple people, I will accept the conclusions as factual enough to use. The science definition of fact is pragmatic, neither skeptical nor dogmatic. But once a fact has been repeatedly observed as reliable, very strong evidence would be required to falsify it. For instance, now that we have observed electrical activity in the brain to cause seizures, and even in some cases particular genetic mutations resulting in ion channelopathies and seizures, if someone wants to say naw, it's demon possession, they need to produce a demon.
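
    The loop described above-- hypothesis, prediction, experiment, replication-- can be sketched as a toy simulation. Every number here is invented, and the "experiment" is just a random draw, but the shape of the process is the point:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def run_experiment(true_effect=0.8, n_subjects=50):
    """Simulate one trial: fraction of subjects showing the predicted effect."""
    return sum(random.random() < true_effect for _ in range(n_subjects)) / n_subjects

PREDICTED_MINIMUM = 0.6  # the hypothesis predicts at least 60% respond

# Replicate the experiment several times, as the text describes, and only
# accept the hypothesis if the prediction holds in every replication.
replications = [run_experiment() for _ in range(5)]
held_up = all(result >= PREDICTED_MINIMUM for result in replications)
print("prediction held in every replication:", held_up)
```

    Notice that the hypothesis commits to a specific, falsifiable prediction before the trials are run-- that pre-commitment is exactly what the simpler observation-to-conclusion process lacks.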


    This difference between the simpler observation-to-conclusion process and the observation-to-hypothesis-to-testing process is in the reliability of the conclusions, something which has also held up (that's a sort of meta-experiment-- what kind of data collection turns out to be reliable over time).


    This is so well established that doing "post hoc" analysis of data collected to test another hypothesis is called derisively the "spaghetti method." This is why it is standard in meta-analysis to exclude post-hoc papers. In the publish or perish world, researchers will mine their data for other patterns. Intuitively it seems that should be fine, but it is notoriously unreliable. Instead, the reliable approach is to take any post hoc observations as a new hypothesis and design tests for it.
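
    A quick way to see why post-hoc mining misleads (my own toy illustration, not anything from the literature on this): if each test has a 5% false-positive rate even on pure noise, mining many hypotheses from one dataset all but guarantees some spurious "findings":

```python
ALPHA = 0.05          # conventional significance threshold
N_HYPOTHESES = 100    # patterns mined post hoc from one dataset

# Each test on pure noise comes up "significant" with probability ALPHA,
# so the expected number of spurious findings grows linearly with the
# number of hypotheses mined from the same data.
expected_false_positives = ALPHA * N_HYPOTHESES
print(f"expected spurious 'significant' results: {expected_false_positives:.0f}")  # → 5
```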


    Cassius, what you will immediately notice is that there is the issue of exactly how many times and in how many ways results need to be replicated before we are going to accept them as reliable. That is an important issue, and it's where statistics come in. We use things like p values and control groups to tell us how likely it is that our results differ from chance. We can choose how certain we want to be about a particular conclusion.
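
    For concreteness, the simplest kind of p value can be computed with nothing but the standard library. This is a sketch with invented numbers (a one-sided exact binomial test), just to make "different from chance" tangible:

```python
from math import comb

def binomial_p_value(successes, trials, chance=0.5):
    """One-sided p-value: probability of at least `successes` out of
    `trials` if only chance (probability `chance`) were at work."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Suppose 14 of 20 patients recover, against a 50% chance baseline:
p = binomial_p_value(14, 20)
print(f"p = {p:.3f}")  # ≈ 0.058, just above the conventional 0.05 cutoff
```

    The arithmetic is fixed, but the cutoff is chosen-- which is exactly the "how certain do we want to be" decision under discussion.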


    That issue is present always, but the point is that we can compare reliability, not that we can make anything 100% reliable.


    For some hypotheses only one counter example is required. For instance, the plane crash. Those are the always/never type hypotheses. But usually in biology, it's more about "will this drug work for more people than not using it, or for more people than a current treatment?"


    As far as levels of evidence-- because we are not typically using always/never hypotheses, in medicine we have levels which are not arbitrary but based on how reliable conclusions are from each type of evidence. Some people include expert consensus but I think that's silly-- I would only include the level of evidence they used.


    The lowest level of evidence in medicine is the kind that is historically least likely to be correct. For example, one person given orange juice recovers from the flu. We don't have enough data to decide if that is different from chance. But a single case is sometimes enough to be attention-grabbing. What if we give someone OJ for a disease no one is known to have survived, and they survive? It's a hypothesis worth testing, but since OJ itself isn't known to be risky unless you are allergic, there would be a low threshold for using it before any controlled study-- and depending on how that went, you might decide it is unethical to design a higher-level-of-evidence study.

    Case series are the next level, more reliable than one case. And so on up to double blinded, randomized, controlled studies with replication _and_ a documented mechanism of action.
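
    The hierarchy sketched above can be written down as an ordered type. The level names here are my paraphrase of the text (the middle rung is my own filler), not an official clinical grading scale:

```python
from enum import IntEnum

class EvidenceLevel(IntEnum):
    """Illustrative ordering: higher values = historically more reliable."""
    SINGLE_CASE = 1
    CASE_SERIES = 2
    OBSERVATIONAL_STUDY = 3
    RANDOMIZED_CONTROLLED_TRIAL = 4
    REPLICATED_RCT_WITH_MECHANISM = 5

# Comparisons fall out of the ordering directly:
assert EvidenceLevel.CASE_SERIES > EvidenceLevel.SINGLE_CASE
print("highest level:", max(EvidenceLevel).name)
```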


    So when we are looking at the conclusions, it is not a matter of consensus whether one level of evidence is more reliable than another. That's something we have directly observed by comparing methods. That's the case even though sometimes a single case will turn out to be reliable while a large study turns out to have unexpected confounding variables.


    This is what lets us say with confidence things like "there is a plausible mechanism for seizures that does not involve demons, so we are going to disregard your demon idea unless you produce evidence" or "there are plausible mechanisms for the repeated observations we've made about the neuroscience of dreams which do not require a missing element of some particle or energy transmission from outer space penetrating the skull, and furthermore, we have evidence that people often erroneously perceive agency where none exists upon closer examination, so we are going to disregard that idea unless you can produce evidence stronger (more replicable) than what we have so far."

  • That is an important issue, and it's where statistics come in. We use things like p values and control groups to tell us how likely it is that our results differ from chance. We can choose how certain we want to be about a particular conclusion.

    I see this as the key to the issue. There is, so far as I know, no bright line that statistics themselves can provide -- there is ultimately some other standard, outside of statistics itself, which governs what "p value" we are going to accept and "how certain" we want to be. Ultimately there remains a key decision, above simple statistics, that we have to decide how to live with. Is that not correct?

  • Cassius, yes-- that's what I've said-- but it's not arbitrary whether one p value gives more statistical confidence than another. So for someone to say "I'm going to drink OJ for my cancer because I had a dream it will work" is not nearly as likely to yield the desired results of a cure as saying "I'm going to try this treatment which has worked for 95% (or 60% or whatever is the number you can get) with side effects x percent, a risk which is acceptable to me."


    If we have many large studies saying OJ has never worked for a single patient, then it would be even more foolish... but if we have studies showing it's no different from chance at the same p value where prune juice is different from chance, then I'd drink the prune juice.


    I would avoid making the error of saying that because we can choose the level of confidence, there is no difference between one level of confidence and another, which is like saying a single view of the straw in a water glass is as accurate as making multiple observations.

  • Yes, absolutely, and I think what you're saying is very clear. But I think we are going to find, when we review the Epicurean material that still exists, that there is good reason to think that what you're saying there is something they would readily accept. I think what the ancient debate was all about is the next step beyond the statistical analysis method, and that the statistical analysis focus today might actually be a regression.


    They seem to have been grappling with "what is certainty" and "can any level of statistical analysis ever be worthy of being called 'certainty'" and issues that seem very close to being word games, but which some people take very seriously, even those who are at the cutting edge of whatever science is available to them.

  • As a modern example-- I have a friend who goes to a nun who sexes ginkgo seedlings with a pendulum. If it swings one direction, it's male-- another, female. Apparently it's difficult to tell, and a lot of people want to avoid smelly fruit in their yards 😂.


    My spontaneous question was "Interesting! What's her success rate?" We have studies showing these pendulum things are like a ouija board-- the pendulum swinger obviously causes the direction. I wondered if the nun was noticing something about the seeds with her "fast brain", her pattern recognition.


    My friend looked at me with total confusion and then said "oh! I get it! You are doing science. This is not about that."


    And I realized it was the experience she was going for, the sense of believing intuition was correct. She didn't want to submit it to experiment because no amount of data could override her sense of certainty.

  • Oh, I do think Epicurus would agree with me, if he had access to what I have access to. I think if he knew about pragmatism and "certain enough to use for decisions", he would be fine with that. I don't think stats are a regression unless people don't understand what they are doing. I strongly suspect Epicurus would understand the uses of stats in the way I have outlined.


    I think the real underlying question is "what degree of confidence do most people need before they can stop being anxious about other possibilities?" It is a totally different question from "how can we be 100% certain", a question which has the answer "we can't, but our confidence can be strong enough that we can forget about our worry."


    A person with OCD can check that the stove is off 100x and still be uncomfortable with uncertainty the instant they leave the room. A typical person will go about their day, maybe checking once. This is a question for human psychology research. You have to do it as an experiment to know what works for people's loss of anxiety.


    We have some loose observations about people in predominantly atheist countries-- that they are less anxious. Do all of them understand physics or do they just trust the physicists? Did Epicurus' students all understand him, or was he charismatic?


    My working hypothesis is that any explanation must be as reliable as possible for all types of students. Some will believe in science based on credibility assignment alone. Others need to have it explained in a way that makes sense to them, like with Stenger's books. Others need to visit your lab and inspect your equipment. Some will never be able to use their sense observations to modify their conclusions.

  • A way of stating my hypothesis is "most people do not require 100% certainty in order to not worry. Of those who do, no amount of checking will remove the worry (as in OCD)."