The Evangelical Universalist Forum

JRP's Bite-Sized Metaphysics (Series 106)

[The previous series, 105, can be found [url=https://forum.evangelicaluniversalist.com/t/jrps-bite-sized-metaphysics-series-105/412/1]here[/url]. An index with links to all parts of the work as they are posted can be found here. This series, 106, picks up with the topic arrived at by the end of the previous series.]

[Entry 1 for “Belief and Reason”]

Having explained why, as a Christian, I do not hold to what many people (Christian and sceptic) have considered the ‘party line’ that reason and faith are mutually exclusive, I will now explore this issue from a deeper philosophical perspective.

A Christian (or other religious theist) who accepts a faith/reason disparity will usually do so for religious reasons. His argument that these two aspects must be mutually exclusive (or at least need not have anything to do with each other) will be grounded on positions and presumptions which usually proceed from a devout loyalty to God’s status, or from the authority of specifically religious leaders, or from the structure of religious ritual, or some combination thereof.

And a sceptic who accepts a faith/reason disparity might do so only because, as far as he can tell, his opposition has chosen that ground. However, since I obviously do not advocate a faith/reason disparity, this type of sceptic would agree that I can continue with an attempt to build an argument that might, or might not, arrive at God’s existence and characteristics. (Though he might perhaps be able to nix my attempt later on other grounds, of course.)

But some sceptics (and even some people who profess God’s existence) accept a faith/reason disparity on different grounds. So, I will need to consider whether (and why) I should consider this to be a spurious division under any conditions, even apart from specifically religious grounding (or, more accurately in my experience, apart from trying to protect religious convictions and doctrines from critical assessment).

[Entry 2; mostly dedicated to one footnote. :mrgreen: ]

The word ‘faith’ can hold a number of discrete (yet related) meanings. These meanings often become fused (and confused!), and this makes it hard to have a straight discussion about what faith ‘is’.

I will try to disentangle this mare’s nest by talking not of ‘faith’, but of ‘belief’ and ‘trust’. And, since I have not yet even begun to infer the existence and character of Someone for us to put personal trust in, I will be concentrating on the ‘belief’ aspect of ‘faith’ in immediately forthcoming entries.

The event we call ‘belief’ can be either a person’s active acceptance of an inference, or else an impression of perceived ‘reality’ to which future mental events will correspond. The second condition (the ‘impression’) would be an ‘irrational’ belief, because it is produced purely as an automatic response to a combination of prior events.

[Footnote: Common usage of ‘irrational’, even among specialists, can fluctuate between meaning a willful choice to accept incorrect logic (and/or a willful choice to refuse correct logic), and an accidental acceptance of faulty logic. Furthermore, sometimes it is simply used to mean ‘invalid’; and occasionally it is used to mean ‘derived from purely automatic behavior’.

In order to avoid the temptation to switch back and forth between such wide usages, and especially in order to avoid the externalistic fallacy (where the analyst’s reasoning becomes mistaken for the rationality of the object being analyzed), I have chosen to use ‘irrational’ in a very specific sense: as a transition state of a nominally non-automatic entity into virtually full automatic behavior. I am [u]not[/u] proposing an entity is rational, non-rational or irrational based on whether or not that entity is applying my own notions of what counts as valid ‘logic’ (even if those notions are accepted by a majority of thinkers). (So for instance, I do not argue the question of a computer’s rationality based on ‘logical’ or ‘illogical’ behavior by the computer.)

This admittedly begs the question somewhat, as to whether an entity can possibly exhibit non-automatic behaviors; but as I will discuss later in the 200 series, virtually everyone everywhere admits this happens with respect to their own selves (at the least), even when they deny the possibility of non-automatic behavior! My discussion here can take place somewhat aside from such issues, though. This first section (the 100 series) represents my own thoughts on these topics in a linked progression; so this particular series and its sequel (106 and 107) can be useful in suggesting preliminary outlines of principles and implications which will need developing more fully later as a parallel argument, but without (I think) necessarily accepting any ‘dangerous’ implications from those principles at this time. The immediate large-scale purpose of this series of entries is, after all, only to check whether some kind of necessary disjunction between reasoning and belief per se stands in the way of reasoning to a belief on metaphysical topics, such as an acceptance of theism or atheism.]

[Entry 3]

So, to use an old Robin Williams comedy routine as an example: the chemical known as cocaine could, in interaction with my neurochemistry, release certain electrochemical impulses. And these impulses could be connected by physical association to other reactions currently taking place in my brain, which result from the sensory impressions produced by my being on a golf course.

As a result, a ‘belief’ might develop within me to this effect: there is a snake in the hole of the 14th green.

This ‘belief’ would be a real, objective event happening in my brain, and in my psychology of perception. But it would be an irrational belief (in the stringent and particular sense in which I am using the word ‘irrational’), because it would have been produced purely as an unintended by-product of non-rational biochemical reactions.

Please notice: this does not mean the content of my belief would necessarily be false! There might in fact be a snake in the hole of the 14th green.

But if there were a snake in that hole as an actual fact, it nevertheless would have had virtually no connection to my belief (in this example), except in terms of incidental environmental linkage: the particular ‘shape’ of my delusion would have depended on my being on the golf course, where such things as ‘greens’, ‘cups’, and ‘snakes’ may be found.

(Note: I will discuss primary environmental linkages to such a belief later in this or the subsequent series. I am not claiming the ‘irrationality’ of this belief depends on the lack of primary environmental linkages; this simply happens to be a facet of my first example.)

[Entry 4]

As a persistent state or event in my psychology, this belief could itself be a building block, either for more irrational beliefs or for rational beliefs (as far as they go).

For instance, the cocaine, or the chain-reaction it started, might continue by ‘using’ this new mental state as the basis for a new round of association. (“Someone is out to get me and has put a snake in the hole!”) This new belief would, by virtue of its cause(s), be just as irrational as the first one, although no less an objectively real event (considered as itself).

Or, I might actively analyze this first belief-impression and draw inferences from it to new conclusions: for example, “If snake is in hole, then dangerous to be near hole. If dangerous, I could get hurt. If I don’t want to get hurt, stay away from hole.” As a result of accepting this inference, I could then actively arrive at a new belief: “I should stay away from the hole.”

Notice that this inference is valid and true, as far as it goes. It becomes false only if the first qualifier (“if snake is in hole”) becomes a presumption (“snake is in hole”) and only if that presumption itself happens to be false. (The form of the inference would still be valid, however, even though the conclusion was falsified thanks to false initial data.)
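(As an aside, for readers who enjoy seeing the form spelled out: here is a minimal formal sketch of that chain of conditionals, written in the Lean proof language. The proposition names snake, dangerous, and stayAway are illustrative placeholders of my own, not part of the original routine; the point is only that the form checks out whether or not ‘snake’ happens to be true.)

[code]
-- Minimal sketch (illustrative names only): chaining the two
-- conditionals yields "if snake, then stay away". The proof goes
-- through whether or not 'snake' is actually true, which is
-- exactly the validity-versus-truth distinction in the entry above.
example (snake dangerous stayAway : Prop)
    (h1 : snake → dangerous)     -- "if snake is in hole, then dangerous"
    (h2 : dangerous → stayAway)  -- "if dangerous, then stay away"
    : snake → stayAway :=
  fun hs => h2 (h1 hs)
[/code]

(If ‘snake is in hole’ turns out to be false, the conditional conclusion still holds, vacuously; only the detached assertion “stay away” would be left without support. Which is the parenthetical point just above.)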

However, is this second mental state rational or irrational?

[Entry 5]

If I say my second belief (“I should stay away from the hole”) is rational as opposed to irrational, what can I mean? Why can this second belief be ‘rational’, as opposed to the first ‘irrational’ belief (“A snake is in that hole”)?

Does it depend on whether the second belief matches reality?

No. The snake may or may not be there: I may have made a mistake. But a mistake is not necessarily irrational. If I am adding up one hundred and twenty-seven different figures, and I take a break in the middle to answer the phone, and then start up again at the wrong place, my process is not therefore rendered irrational. This will be so, even if the cornerstone position is a mistaken assertion (“a snake is in the hole”).

Remember that the belief whose rationality is in question here is not whether a snake is in the hole, but whether it is dangerous for me to get near the hole. I have already admitted (as far as this example has gone) that the original belief (“a snake is in the hole”) is a non-rationally produced chemical by-product of cocaine’s interaction with my neurochemistry. Such an event (in the terms I have been describing it) is not an inference, although it can produce psychological states similar to states produced by inferences. [See footnote below.]

The question is whether my subsequent belief (“I should stay away from the hole”) is irrational, and if so under what conditions.

(Footnote: Admittedly, some scholars (especially atheistic ones) would claim that this event is (or at least could be) an inference. Thus, as a self-critical warning, I must acknowledge begging an important question here, which I will have to address later in my second section. But this will not be a problem for my larger-scale question at this time. That question is ‘Can a belief be the result of reasoning?’ If the answer is ‘yes’ (in whatever way we decide we should understand ‘reasoning’, though for practical purposes I’m working with one particular way here), then obviously there can be no intrinsic opposition between belief and reason.

Still, I’ll have to be careful about how I use the material in this series: I shouldn’t smuggle it, as if already settled, into my 14th series of entries, for instance.)

[Entry 6]

Well then, is it a question of whether the original cornerstone belief is itself irrationally produced? Does that necessarily make the subsequent mental event (“Snake, thus dangerous”, or “If snake, then dangerous”) irrational?

No. The first belief has already been established as a bit of data in my mind; I am using that bit of data (although I may not recognize its non-rational source) as part of the inference.

To understand this, consider the characteristics of that original mental event: the cocaine-induced delusion that there is a snake in the hole. The physical reactions and counterreactions linked to the emergence of the belief are not much different in physical representation from those which would accompany an inference from data. (Which, to be fair, is one reason why some people say that there is no distinction at all, the observable difference amounting to somewhat different physical behaviors in different locations.)

Here are two examples of inference events: I look in the hole and see something that I then judge to be a snake. Or, I hear a report of a snake in the hole from someone, and afterward I judge from other evidence the reliability of this person’s report.

Either example leaves behind a persistent physical state in my brain that is not much different from what a cocaine-induced delusion leaves behind. In fact, either example might even (for all I know) leave the exact same result. (An observation that will also have an important bearing on a discussion of supernature and evidence, in a much later series.)

If that is so, however, then what is the qualitative difference?

[Entry 7; next to last for this series]

The difference is my intent, or my initiative.

The cocaine has no intent. Its chemicals are just going about their non-intentional ‘business’, which happened, in conjunction with non-intentional sensory input, to produce a belief-by-association (“a snake is in the hole”).

But the second belief (“I should stay away from the hole”) is different, because by default I am presuming that ‘I’ (whatever it means to be ‘myself’) am initiating an action of inference.

Doubtless, the process is not entirely an action that I am initiating; there are still non-intended reactions and counterreactions taking place (the sensory input reactions in my head, for instance). Also, some philosophers and scientists would claim that my ability to initiate actions is itself derived entirely from non-intentional automatic reactions and counterreactions.

[Footnote: I will discuss this contention much later. My point here is that I agree that at least [u]some[/u] non-intentive behaviors are taking place inside my head even when I am thinking ‘rationally’.]

But however it got there, that second belief (“I should stay away from the hole”) represents at least one action on my part, not merely reactions.

[Footnote: Some philosophers and scientists, past and present, have attempted to claim that humans do not initiate events at all. I will postpone a technical discussion of this notion until my second section, and content myself for the moment with the observation that even these people will routinely claim that [u]they themselves[/u] are initiatively responsible for their own positions, when they want their own ideas to be taken seriously, for instance.]

[Entry 8; the finale for this series]

Now, as I have already illustrated, a belief’s quality of ‘rational’ or ‘irrational’ does not necessarily involve positive accuracy about the objectively real facts. There may or may not be a snake in that hole. Even if my belief is rational, I might be mistaken. On the other hand, even if my belief is non-rationally produced, I might still be ‘correct’, though only by accident.

However, most people in most circumstances accept and understand that a non-rationally produced belief cannot be trusted very far, in and of itself, to deliver an answer worth listening to. It may exhibit many other qualities; but a non-rationally produced belief cannot be trusted with respect to what it ‘claims’ to be, even if the belief happens to be accurate with respect to facts, or even beneficial.

Such a belief might possibly be trusted on grounds different from what the belief tacitly claims to be, of course. This is an important distinction, and I will discuss it in my continuation next series.

[Next series: so I, my brother Spencer, a snake, and a bunch of women golfers walk into a bar… er, onto the 14th green… :mrgreen: Or, more dryly, “A Question of External Validation of Reasoning”.]