Tuesday, July 29, 2008

Second try at talking to Bearded Spock

In response to a comment from Bearded Spock:

Am I to understand that you are claiming that there is insufficient evidence for free will, but that you accept its existence only because the truth was divinely revealed?

No. I reject the claim that free will, separate from a creator God, is axiomatic. Stephan's whole idea is that human beings are rational entities with free will who don't need the concept of a creator God to define "universal morality".

I believe that either there is "something else" (a soul, if you wish) involved with the mechanism of the mind, or there isn't. If there isn't a soul, then the mind is simply a biological computer.

I don't think you have a computer science background, because you don't seem to understand how computer programs work. If I am the programmer, yes, I can work around a bug in an old Pentium chip, but I am an agent independent of the program. A computer program can do no more and no less than what it was originally programmed to do. Even current artificial intelligence research requires that the original program be written by someone. It is then given learning tools that allow it to "grow". Some people use deterministic tools (i.e., no randomness), and some use non-deterministic approaches (i.e., random "yes/no" factors are included).
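To make that distinction concrete, here's a toy sketch of my own in Python (not drawn from any real AI system): a tiny "learning" program that bumps a score for whatever option got rewarded. It only "grows" because a programmer wrote the update rule, and randomness only shows up if the programmer puts it there.

    import random

    # Toy "learner": bump the score of whichever option was rewarded.
    # The update rule IS the programming; the program can do no more
    # and no less than this rule says.
    def deterministic_update(scores, rewarded):
        scores[rewarded] += 1.0              # same history in, same scores out
        return scores

    def nondeterministic_update(scores, rewarded):
        scores[rewarded] += 1.0
        nudge = random.choice(list(scores))  # a random "yes/no" factor,
        scores[nudge] += 0.1                 # only because I coded it in
        return scores

    scores = {"mustard": 0.0, "mayo": 0.0}
    for _ in range(3):
        scores = deterministic_update(scores, "mustard")
    print(scores)   # replay this loop as often as you like: identical output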

If the mind is just a really sophisticated biological computer, then it, too, is either deterministic or non-deterministic. Either there is something within our brains that sparks just a bit of randomness in our decisions ("I think I'll have mustard instead of mayo"), or there isn't. Given our current understanding of the mind, the only way we could know would be a time machine that let us "replay" someone's decisions multiple times.
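Here's what I mean by "replay", again as a toy sketch of my own and not a model of the brain: run the same decision function from the same starting state several times. The deterministic version picks the same condiment on every replay; the version with a random spark need not.

    import random

    def deterministic_choice(hunger):
        # Same state in, same condiment out, on every replay.
        return "mustard" if hunger % 2 == 0 else "mayo"

    def random_spark_choice(hunger):
        # One coin flip is enough to make replays disagree.
        return "mustard" if (hunger + random.randint(0, 1)) % 2 == 0 else "mayo"

    print([deterministic_choice(4) for _ in range(5)])  # five identical answers
    print([random_spark_choice(4) for _ in range(5)])   # may mix mustard and mayo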

From a moral point of view, this isn't a useless question. If we make decisions based on randomness (even just a little randomness), then we aren't rational, just rationalizing. There was an interesting study several years back claiming that people are much more random than first thought, going back later to rationalize an essentially random decision.

On the other hand, if there is no randomness, then the entire mind is just one big deterministic state machine. If you go back to my birth and somehow replay my life with no changes, I'll make exactly the same choices. You don't accuse a Coke machine of a moral failing when it doesn't give you a Sprite; you just call the bottler and ask them to fix their stupid machine. If you and I are just biological machines, there's nothing interesting in our moral choices; they're just a result of our programming. Again with the technical CS terms: garbage in, garbage out. Perhaps "better" programming is better for our neighbors, but that's just preferences again.
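To put the Coke machine in the same terms (another toy of my own making): a deterministic state machine's output is fixed entirely by its table and its input. When it hands you the wrong drink, that's a defect to report to the bottler, not a choice to be blamed.

    # A vending machine as a deterministic state machine: the output is fully
    # determined by the programming (the table) plus the input (the button).
    SLOTS = {"A1": "Coke", "A2": "Diet Coke", "A3": "Sprite"}

    def vend(button):
        # If the bottler wires A3 to the wrong column, you get the wrong
        # drink every time: garbage in, garbage out, not a moral failing.
        return SLOTS.get(button, "nothing")

    print(vend("A3"))   # "Sprite", on this run and every other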

To go back to your point, I reject Stephan's axiom because it's not an axiom. I believe in Christianity's version of free will because it fits with, and is derived from, the rest of the system. It's not axiomatic there; it's a consequence of my view of the system. (Some Christians reject free will, and their system still works too.) Since the rest of Christianity conflicts with Stephan's system, he can't use it to "prove" his axioms.

Stephan can't say "You're a Christian, so you believe X too" as a proof; that's philosophical freeloading. If Stephan wants me to accept UPB as the reason we don't need God to be moral, he has to have a system that doesn't use Christianity to "prove" his axioms first. Like Dawkins et al., Stephan doesn't know the "big battles" of philosophy, and so he thinks things are "inherently self-evident" that philosophers have rightly been arguing over for centuries.

One aside: from what I've read so far about UPB, it's just a retelling of the first part of Mere Christianity by C. S. Lewis, with a different conclusion.
