Tuesday, July 29, 2008

Second try at talking to Bearded Spock

In response to a comment from Bearded Spock:

Am I to understand that you are claiming that there is insufficient evidence for free will, but that you accept its existence only because the truth was divinely revealed?

No. I reject the claim that free will, separate from a creator God, is axiomatic. Stephan's whole idea is that human beings are rational entities with free will who don't need the concept of a creator God to define "universal morality".

I believe that either there is "something else" (a soul, if you wish) involved with the mechanism of the mind, or there isn't. If there isn't a soul, then the mind is the biological computer.

I don't think you have a computer science background, because you don't understand how computer programs work. If I am the programmer, yes, I can work around a bug in an old Pentium chip, but I am an independent agent from the program. A computer program can do no more and no less than what it was originally programmed to do. Even current Artificial Intelligence research requires that the original program be written by someone. It is then given learning tools that allow it to "grow". Some people use deterministic tools (i.e. no randomness), and some use non-deterministic approaches (i.e. some random "yes/no" factors are included).
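
Since I brought up the Pentium: the famous 1994 divide-bug check looked roughly like the little Python sketch below. The test values are the ones that circulated at the time; the "workaround" branch is just my own illustration of the idea, not a faithful copy of any shipped patch.

```python
def pentium_fdiv_check():
    # The widely circulated test case for the 1994 Pentium FDIV bug.
    # On a correct FPU the residue is essentially zero; on a flawed
    # Pentium it came out around 256.
    x, y = 4195835.0, 3145727.0
    return x - (x / y) * y

def careful_divide(a, b, fpu_is_buggy):
    # The *programmer* can route around the hardware fault, but the
    # program itself still only does what it was written to do.
    if fpu_is_buggy:
        # One published workaround scaled both operands (by 15/16) to
        # steer clear of the bad bit patterns -- sketched here only.
        a, b = a * 0.9375, b * 0.9375
    return a / b

print(pentium_fdiv_check())                                    # ~0.0 on a sound FPU
print(careful_divide(4195835.0, 3145727.0, fpu_is_buggy=True))
```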

If the mind is just a really sophisticated biological computer, then it too is either deterministic or non-deterministic. Either there is something within our brains that sparks just a bit of randomness in our decisions ("I think I'll have mustard instead of mayo"), or there isn't. The only way we could know this, given our current understanding of the mind, is with a time machine that would let us "replay" someone's decisions multiple times.
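
To make the "replay" idea concrete, here's a toy sketch of my own in Python (nothing from Stephan's book): pin the randomness down and every replay of the "decision" comes out identical; leave the randomness live and the replays can diverge.

```python
import random

def choose_condiment(rng):
    # The whole "decision" reduces to whatever randomness (if any)
    # the generator we hand it supplies.
    return "mustard" if rng.random() < 0.5 else "mayo"

# Replaying the same moment with the same internal state (a fixed seed):
print([choose_condiment(random.Random(7)) for _ in range(5)])   # identical every time

# Replaying with genuine randomness in the mix: the runs can diverge.
print([choose_condiment(random.Random()) for _ in range(5)])
```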

From a moral point of view, this isn't an idle question. If we might make decisions based on randomness (even just a little randomness), then we aren't rational, just rationalizing. There was an interesting study several years back that claimed people are much more random than first thought, and that they go back later to rationalize an essentially random decision.

On the other hand, if there is no randomness, then the entire mind is just one big deterministic state machine. If you could go back to my birth and somehow replay my life with no changes, I'd make the exact same choices. You don't accuse a Coke machine of a moral failing when it doesn't give you a Sprite. You just call the bottler and ask them to fix their stupid machine. If you or I are just biological machines, there's nothing interesting in our moral choices; they're just a result of our programming. Again with the technical CS terms: garbage in, garbage out. Perhaps "better" programming is better for our neighbors, but that's just preferences again.
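
In code terms, the Coke-machine point is trivial (an obviously made-up sketch):

```python
# A vending machine as a deterministic lookup: identical inputs always
# produce identical outputs, so a wrong drink is a bug report for the
# bottler, not a moral failing of the machine.
DISPENSE_TABLE = {"A1": "Coke", "A2": "Sprite"}

def vend(button):
    # Output is purely a function of input -- garbage in, garbage out.
    return DISPENSE_TABLE.get(button, "nothing")

print(vend("A1"))   # "Coke", every single time you replay it
print(vend("A2"))   # "Sprite", every single time you replay it
```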

To go back to your point, I reject Stephan's axiom because it's not an axiom. I believe in Christianity's version of free will because it fits with, and is derived from, the rest of the system. It's not axiomatic there; it's a consequence of my view of the system. (Some Christians reject free will, and their system still works too.) Since the rest of Christianity conflicts with Stephan's system, he can't use it to "prove" his axioms.

Stephan can't say "You're a Christian, so you believe X too" as a proof; that's philosophical freeloading. If Stephan wants me to believe in UPB as the reason we don't need God to be moral, he has to have a system that doesn't use Christianity to "prove" his axioms first. Like Dawkins et al., Stephan doesn't know the "big battles" of philosophy, and as such thinks things are "inherently self-evident" that philosophers have rightly been arguing over for centuries.

One aside: from what I've read so far about UPB, it's just a retelling of the first part of Mere Christianity by C. S. Lewis, with a different conclusion.

Monday, July 28, 2008

Dawn's conversion

away from gun-hater, at least:

I was so surprised about how calming shooting was. I always imagined guns to be weapons of furious anger. It’s how they’re portrayed in all the shoot em up movies, anyway. Someone pisses you off, you get your gun and you give them what for. But the real life thing is just the opposite. You’ve got to be perfectly still, your eye trained on your single spot in the distance, and you’ve somehow got to squeeze the trigger without moving an inch. Anger could never shoot straight.

One time, New York cops shot into a small truck 50+ times going after a "bad guy", and never hit him. Since car doors are like tissue paper to the 9mm and/or .40 caliber pistols NY's finest carry (I forget which), this is telling. Most "hot-rage" killings occur with knives, because knives are close-up weapons.

I am a mediocre shot. Most guns don't fit these big hands well, so I shoot erratically with them. I expect to get worse with age, not better. That's why I've already switched from the 9mm to the short-barreled Winchester 12 gauge as the home defense gun.

Saturday, July 26, 2008

Responding to Bearded Spock

One of the joys of owning a blog: I can answer comments with a full post. :)

For the most part, I was acting as "devil's advocate" on my "axiom busting" in the last post. I do want to make a second pass on two items.

If good just means preferable, then you're open to "you may prefer truth, but I prefer not to tell the truth sometimes".

Untrue. All things being equal, you always prefer to tell the truth. So do I. So does everybody. It's when things aren't otherwise equal that our preferences diverge. This is observably true. Pathological liars are acting irrationally. That's why it's a pathology and not a preference.

This is very close to a "no true Scotsman" fallacy. "Everyone prefers to tell the truth." "Liars don't." "They don't count, they're pathological." If morals are universal, then they have to apply to people who would be "pathological". If they're based on universal preferences, then why not include their preferences too?

First, it assumes free will. It is entirely possible for me to posit this argument without free will. My biological computer program, faced with an input set that drives it through a super-complex steady state tree, drives my hands to type out this post. I am no more "responsible" than the first Intel Pentium was "responsible" for rounding errors in the floating point unit.

Is a bacterium inanimate because it is composed of inanimate chemicals? Of course not. Is free will nonexistent because your mind might be a biological computer program, faced with an input set that drives it through a super-complex steady state tree? Of course not. UPB doesn't "assume" free will. It acknowledges free will, free will that is observed the same way the animation of a bacterium is observed.

You are the one confusing the Pentium with the program it processes, a program that, to some small but vitally important degree, writes (or at least alters) itself.

I disagree that free will is self-evident. Even Wikipedia has a decent summary of the philosophical debate on free will. It is not axiomatic that people act rationally or that they act via free will. Even Calvinists reject the concept of free will as it's commonly defined.

One more thought experiment: (axiom) humans are simply the result of undirected biological evolution; (axiom) the "mind" is nothing more than the result of the biological actions of the brain (i.e. no spirit). If there is a source of randomness within the brain, then the decisions you make may be the result of randomness, not "rationality". If there is no randomness, then the brain is just a deterministic biological state machine of incredible complexity, and there's no free will.

An aside: I believe most of Stephan's axioms, but I reject that they're axioms. Instead, I believe them as a consequence of my Christian theology. That's why I reject the concept of free will separate from Christian theology, since I think the only way free will can occur is if there is more to us humans than just this bag of salty water.

Wednesday, July 23, 2008

An argument against Universally Preferable Behavior

Over at Vox Popoli, "Bearded Spock" keeps answering questions about the logical provability of his morals with "Read Universally Preferable Behavior by Stephan Molyneux". Being a glutton for punishment, I did, at least to page 34. When the author got to his axioms, I had to quit.

Most of his 8 axioms ("Premises") are fallacies, or at the very least are parasitic on the very religious moral systems the author claims to reject. Let's go through them.

  • Axiom 1: WE BOTH EXIST.
  • Axiom 2: THE SENSES HAVE THE CAPACITY FOR ACCURACY.
  • All he forgot was "I think, therefore I am". A Platonic philosopher would reject 2 out of hand, as would a Hindu. The concept of a rational universe was fought over in pre-Christian philosophy, and became axiomatic only because of Christianity. Post-Christian philosophy still lacks a good examination of this axiom.

  • Axiom 3: LANGUAGE HAS THE CAPACITY FOR MEANING.
  • If the author could quit using "better" in his axioms, I might be tempted to agree here. Seriously, this is an open problem in philosophy, but one I'll concede for expediency as well.

  • Axiom 4: CORRECTION REQUIRES UNIVERSAL PREFERENCES
  • "If you correct me on an error that I have made, you are implicitly accepting the fact that it would be better for me to correct my error. Your preference for me to correct my error is not subjective, but objective, and universal." Essentially, the author is appealing to the reader for agreement. This is dangerous, since all an opponent has to do to reject your entire argument is say "In my belief system, I don't care if you're in error." Also, what is "better"? (I'll raise that again in a second.)

  • Axiom 5: AN OBJECTIVE METHODOLOGY EXISTS FOR SEPARATING TRUTH FROM FALSEHOOD
  • In the end, this is just a restatement of axiom 2, with the addition of the concept of "Truth". To quote Pilate, "Quid est veritas?" Again, I have to accept axiom 5, but much of the philosophical debate of the last 5 millennia has been about trying to prove it, and it's still up for debate.

  • Axiom 6: TRUTH IS BETTER THAN FALSEHOOD.
  • Axiom 7: PEACEFUL DEBATING IS THE BEST WAY TO RESOLVE DISPUTES
  • There is a subtle fallacy of definition here. Better and best are just degrees of "good". What does the author mean by good/better/best? If it's moral, then you've begged the question again. If it's useful, then UPB is just another utilitarian system. If good just means preferable, then you're open to "you may prefer truth, but I prefer not to tell the truth sometimes".

  • Axiom 8: INDIVIDUALS ARE RESPONSIBLE FOR THEIR ACTIONS.
  • This is a fallacy of definition, with two different meanings of "responsible" mashed together. First, it assumes free will. It is entirely possible for me to posit this argument without free will. My biological computer program, faced with an input set that drives it through a super-complex steady state tree, drives my hands to type out this post. I am no more "responsible" than the first Intel Pentium was "responsible" for rounding errors in the floating point unit.

    The second meaning is "morally liable". Again, why? I thought that this was what was to be proven...

The author is unconsciously talking to Christian moralists. Christians or atheists who have consciously or unconsciously accepted Judeo-Christian morals will accept all 8 axioms because they believe them already. People who reject Judeo-Christian morals will reject many, if not most, of these axioms. Fundamentally, axiom 0 is "there are morals", and that's what he's trying to prove.

And no, I'm not being a hypocrite by stopping at the axioms. Without the axioms, the rest of the argument can't hold, and I can stop now.

I am NOT impressed.