Yes, Virginia, you CAN prove a negative

Groucho Marx (from Wikipedia)
Thanks for loaning the elephant, Groucho. I’ll try to have the pajamas pressed. (Image from Wikipedia)

Logic class, today! After a week of house hunting, a quick post like this feels like just the thing to cleanse the palate, so please forgive my indulgence.

Though it is often claimed–and tempting to believe, because it can sound sensible–it is completely false that you cannot prove a negative. (That is, for instance, that you cannot prove something doesn’t exist.)

I have heard the claim many times, often from wonderful and sincere people but, to be sure, wonderful and sincere people who don’t know what they are talking about — a club of which all of us are members from time to time. For instance, I have heard atheists say “You can’t prove a negative!” in an effort to absolve themselves of the need to justify their belief that God does not exist. On the other side, I have heard Christians say “You can’t prove a negative!” in an effort to show that the atheist position is impossible.

Both are in error. Both seem to miss the fact that we prove negatives all the time and the fact that the same sort of “reasoning” they offer would defend belief in Santa Claus, the Easter Bunny, and flying purple leprechauns named Marty.

This was brought up to me more than once by someone who objected to what I wrote for Tomorrow’s World publications concerning the non-existent 2012 Mayan Apocalypse. I would point out that, based on all the evidence we have, the Mayans said no such thing about the year 2012. All of the hoopla and hype was due to New Age goofiness (drug use included) and sloppy, agenda-driven non-scholarship performed by hobbyists and individuals with something to sell. And this is definitely the record we have of the Maya culture–no modern, credible scholar of Mesoamerican culture disagrees with the assessment that the Maya simply did not believe in a 2012 apocalypse.

However, someone apparently bothered when I pointed that out would sometimes write, saying, “You can’t prove a negative!” His point seemed to be that you can’t say that the Mayans never said that the universe would end in 2012. Of course, if it is true that you can’t prove the Mayans did not say something, then it would also be “logically” unreasonable to believe that the Mayans never said President Obama would be elected in 2008, that the Mayans never said “Rome wasn’t built in a day,” or that the Mayans never said they were the descendants of the undiscovered planet Great Googly Gumdrops and never prophesied the coming of their most dangerous foe, Mork from Ork.

Often (though not always, it should be said), the claim “you can’t prove a negative” is made in reaction to something one does not want to hear, as if it will somehow back one’s opponent into a logical corner. But that is far from the truth.

In fact, you absolutely can prove a negative.

Now, I should qualify that when I say “prove” I mean the same thing we faulty human beings commonly mean when we talk about “proving” anything — for instance, establishing something as the most reasonable position to take among known alternatives. If “prove” means “demonstrate with mathematical exactness and precision,” then in real life virtually all “proofs” would escape us, meaning we could prove neither negatives nor positives! (Actually, we can thank Gödel for helping us to see that, in a very real way, such “proofs” can’t even be assumed for mathematics, itself.)

But if you mean “prove” as in “I can prove you took the cookie from the cookie jar” — a belief established by the preponderance of the evidence — then, oh yeah, we’re golden. We can prove negative statements to just as high a level of certainty as we are able to prove positive statements. In fact, we draw reasonable, sound conclusions about the truth of negatives all the time.

It seems to me that the question is often related to the old saying, “Absence of evidence is not evidence of absence,” which is usually abused in this context. Because, very simply, sometimes absence of evidence is, indeed, evidence of absence. For instance, if I told you that, right now, there was an elephant in your kitchen wearing your pajamas (hat tip to Groucho), and you went into your tiny kitchen and saw no pajama-wearing elephant, you would be perfectly justified by the lack of evidence in saying, “I have proven there is no elephant in my kitchen wearing my pajamas.” Why? Because were a pajama-wearing elephant actually in your kitchen, you would be justified in expecting evidence to be left. If you don’t even see a table pushed out of the way as the elephant fled in embarrassment upon hearing your approach (elephants have big ears), you have very good cause to say that your position is proved. For someone to say, “Well, you can’t say you’ve proven there is no elephant in your kitchen because you can’t prove a negative!” would say more about their misunderstanding of logic than it would about your argument. Your argument would be absolutely valid and sound.

If evidence is to be expected and no evidence is present, then absence can be logically inferred. So, perhaps the saying should be amended to say, “Absence of evidence is not evidence of absence unless evidence should be expected.”
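This amended rule can even be made quantitative. In Bayesian terms, if evidence E is strongly expected when hypothesis H is true, then failing to observe E should sharply lower our confidence in H. Here is a minimal sketch (the specific probabilities are illustrative assumptions of mine, nothing more):

```python
# Bayesian "absence of evidence" sketch.
# H: an elephant is in the kitchen.  E: visible evidence (tracks, a moved table).
p_h = 0.5                 # prior: a generous 50/50 (purely illustrative)
p_e_given_h = 0.99        # an elephant would almost surely leave evidence
p_e_given_not_h = 0.01    # evidence without an elephant is very unlikely

# Probability of observing NO evidence under each hypothesis:
p_noe_given_h = 1 - p_e_given_h          # about 0.01
p_noe_given_not_h = 1 - p_e_given_not_h  # about 0.99

# Bayes' theorem: P(H | no evidence observed)
p_noe = p_noe_given_h * p_h + p_noe_given_not_h * (1 - p_h)
p_h_given_noe = p_noe_given_h * p_h / p_noe

print(round(p_h_given_noe, 3))  # 0.01 -- belief in the elephant collapses
```

The prior of 0.5 drops to about 0.01 the moment the expected evidence fails to appear: absence of expected evidence is, mathematically, evidence of absence.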

This is why we can, indeed, reasonably conclude that the ancient Mayan culture did not expect the universe to end in some sort of cataclysm in December of 2012. For all the New Agers’ and misguided hobbyists’ hoopla about what was supposed to be a universe-changing event, the evidence that the Maya thought of it as such a vastly significant date is simply absent. Despite the vast collection of cultural artifacts we have, including volumes and volumes of information they, themselves, inscribed and wrote down, they say nothing about such a day being the end of the world. I won’t go into all of the details again [you can search the blog on “2012” and probably find more than you ever wanted to know], but the tiny crumbs that are generally offered by ill-informed hobbyists and tainted “researchers” always fail to pass the test. Monument 6 in Tortuguero? Understood in cultural context (as opposed to ignorantly imposing upon it non-Mayan ideas), it says nothing about the end of the world. The Comalcalco tile? Ditto. The much-later, Christianity-corrupted Chilam Balam? Actually evidence against 2012 date-setting theories when you understand it. The Dresden Codex? Not even.

(FYI on that last point: As all the unchristian 2012-addiction died down back then, the last stab I saw at trying to magically turn the Dresden Codex into “evidence” that the Mayans thought 2012 might be the end of the world was claiming that the last page of the codex is depicting the transit of Venus. No one offered proof the last page said anything like this, or even real evidence. Just an assertion that it is so, in the apparent hope that a confident sounding statement will add some credibility to what they are saying. Except that people — people with actual training in astronomy and Mayan works — have said that, no, the Dresden Codex absolutely does not mention the Venus Transit. Anyone who says the transit of Venus is in the Codex has no credibility. In fact, there’s a negative that can be proved: “The Dresden Codex does not mention the Transit of Venus.” — Sorry! So much of that pointless 2012 goofiness is still running around in my noggin that it spills out sometimes… Back to the post!)

For what should have been one of the most significant events in their culture’s eschatology, the supposed “end of the world” date of December 2012 was remarkably and unreasonably absent from the vast collection of writings we have. Indeed, absence of evidence is, in this case, evidence of absence.

And, frankly, all of that ignores the positive evidence that the Mayans did not believe 2012 was the end of the world: many inscriptions concerning dates further out than 2012, the calendar discovery at Xultún, et al., ad nauseam. But that is an aside unrelated to the point of this lazy post, today. 🙂

In similar manner, you can prove the negative that Santa did not come down your chimney last Christmas. (Of course, he’d better not come to my house!) The absence of evidence that a fat man crawled down your chimney while you were asleep is pretty good evidence for the absence of such a fat man.

We can, indeed, prove negatives, and lack of evidence is sometimes evidence, itself. When an atheist claims that he doesn’t need to justify his belief that God doesn’t exist because you can’t prove a negative, he is not being rational. When a believer claims that the atheist’s position is not logical because you can’t prove a negative, he is also not being rational. No one gets off the hook. (Don’t get me started on the illogical fad among many atheists today to claim that “belief” doesn’t mean “belief” anymore. That would be a whole ‘nuther post…)

If someone ever tries to shut you down by claiming “You can’t prove a negative,” feel free to ask them to prove that such a proof does not exist, since that would require proving a negative, themselves. (Did you see that? I turned it around, didn’t I? Yes, I do think I’m clever, thank you.) Or, you can just ask them if it’s reasonable to strongly believe that Santa Claus does not exist. If they won’t say “Yes” to that, then I suspect they have more problems than their grasp of logic. In that case, you might recommend that they keep an eye out for any pajama-coveting elephants…

I’ve hardly scratched the surface of the topic, but I’ve seen the “you can’t prove a negative” fallacy used enough that I thought it would be something fun to write about. Yes, I have an odd idea of “fun,” but it has succeeded in relaxing me a bit after all of this house hunting! If anyone wants to read more about the mistaken notion that one cannot prove a negative, here is a decent essay by Dr. Steven Hales of Bloomsburg University, appropriately titled “You Can Prove a Negative” — knock yourself out. 🙂

Thank you, Lee Smolin: The Multiverse as an exit sign for real science and an “explanatory failure”

NASA pic
Let’s play “Count Universes”–Yay! OK, here we go: ONE… Uh… Well, all I see is one. Um… Do you have any evidence of any others? No? OK… Well, maybe we should just stick with one. Sound good? Yes? OK, good here, too.

Quick hit, today. Life is pretty occupied with other things!

I’ve wanted to write about this for a while, but will have to settle for just referring to it. If you’ve read the Tomorrow’s World article “Do We Live in a Multiverse?” then you are already aware that I’m not a fan of the theory. There is no good scientific reason to believe that multiple universes exist (let alone some of the weirder versions of multiverse theory, which the article did not have enough space to include in detail), and it seems that, when you dig, one of the main reasons the concept is latched onto is that it is seen as a means of avoiding a Creator. As one quote in the article from New Scientist says, the fine tuning of the universe’s parameters “has two possible explanations. Either the Universe was designed specifically for us by a creator or there is a multitude of universes—a ‘multiverse’.” And as the scientist quoted in the article said very plainly, “If you don’t want God, you’d better have a multiverse.” And, let’s face it: Many people don’t want God.

(It should be noted before continuing, by the way, that even if it did exist the multiverse does not actually do away with the need for God’s existence. True for many reasons, one of which I hit here: “Invasion of the Boltzmann Brains!”)

And, as I have ranted about in a rantiforous, ranty rant here on my personal blog, some of the ideas of a multiverse–especially the extreme versions–are science destroyers. It becomes the ultimate “God of the Gaps”–really, a “Multiverse of the Gaps” that explains everything. The rant is here: “The Multiverse Kills Science”–it’s a long post (definitely longer than it needed to be), so don’t try to read it without a venti salted caramel mocha in your hand and, as my Beautiful Wife might suggest, a fork to stab yourself in the leg with to keep yourself awake. (Funny Spokesman Club story associated with that reference I will have to post sometime.) But I can summarize it with the words “Jello-filled 747s raining from the sky.” OK, maybe that doesn’t summarize it very well. How about this: “In a quantum mechanical multiverse where all things happen somewhere, even those things with an unimaginably low probability of being true, science becomes impossible and cause and effect is useless as a means of understanding anything. All is explained, meaning that nothing is explained.” Maybe that’s better. If you want to slog through my terrible writing of that day and let me know how you would summarize it, feel free.

However, you could also just read this one article by physicist Lee Smolin and learn the same thing.

His article is “You think there’s a multiverse? Get real” and it has officially become my favorite New Scientist article ever. It’s in the 17 January 2015 issue. While New Scientist’s consistent cheerleading for multiverse ideas is normally a great irritant for me–though I can hardly blame them too much, given how popular an idea it is–they deserve kudos for featuring this essay as a counterbalance, however small, to the mass of nonsense they have helped to peddle.

The statement right under the title summarizes Smolin’s point well: “Positing that alternative universes exist is just disguising our lack of knowledge of the cosmos. It’s time to move on.” Pretty plain, that.

But the rest is worth a read for anyone who is interested in the topic. For instance, he summarizes the better points of my rant-ish post I mentioned earlier very succinctly: “Thus the multiverse theory has difficulty making any firm predictions and threatens to take us out of the realm of science.” Later: “As attractive as the idea [of a multiverse] may seem, it is basically a sleight of hand, which converts an explanatory failure into an apparent explanatory success. The success is empty because anything that might be observed about our universe could be explained as something that must, by chance, happen somewhere in the multiverse.” That’s a science killer. And later, still: “And thus with an infinite ensemble of unobservable entities we leave the domain of science behind. In some sense, the multiverse embodies the unreal ensemble of all possible solutions to the laws of physics, imagined as elements of an invented ensemble of bubble universes. But this just trades one imaginary, unreal ensemble for another.”

It’s all good stuff. And Dr. Smolin’s essay puts the lie to such completely inane statements as physicist David Deutsch’s ridiculous comment that “Multi-universe physics has the same kind of experimental basis as the theory that there were once dinosaurs.” (Just seeing that sentence again gives me the willies and makes me feel embarrassed for the man. What rot. Hopefully the article’s author was experiencing a medication mix-up and the quote from Deutsch is actually the result of a chemically-induced hallucination.)

It should be noted that Smolin gained a great deal of attention back in 2006 for his broadside attack on string theory in his book The Trouble with Physics. It stirred a large amount of controversy (much needed controversy, IMHO), and in it he makes many of the same points he makes in this multiverse article. And it is great to see that he is still at it. While I don’t agree with his implied assessment of intelligent design theories as inherently untestable hypotheses, I like the fact that he points out the hypocrisy of scientists getting on the multiverse bandwagon while rejecting intelligent design as somehow “not science.”

As for the three principles Smolin and his colleague Roberto Mangabeira Unger recommend in the article to solve the problems in science that result in things like assuming multiverses everywhere, I’m good with #1 and #3, although for #2 I’m very happy with “time is real” but unsure about what he states is the consequence of that conclusion. Still, I’m open, and I hope their ideas get enough traction to be seriously considered. (Also of note: Unger is a philosopher, and, personal evaluations of Unger’s ideas aside, kudos to Smolin for seeing the benefit of philosophy in the work of science where its place and position used to be, and should be, a given.)

The multiverse really is an example of how many scientists who wish to bash believers in God cling to gods of their own, and often do so for reasons flimsier than those they attribute to those same believers. How nice it would be if the sentiment expressed by Dr. Smolin was an indication of sanity returning to science.

OK, I said this would be a “quick hit” and, as usual, took longer than I thought. We now return you to your regularly scheduled surfing…

Computer Brain (square)

Some frontiers of Artificial Intelligence (and Artificial Life)

(Been a while since I have posted! Lots I could talk about–such as the amazing Charlotte 

Both Artificial Intelligence and Artificial Life (AI and AL for the rest of this post) fascinate me. I have been interested in AI since I was a wee lad, when the idea that we could create something that seemed every bit as intelligent and interactive as a human being first intrigued me. When I got one of my first books on programming in BASIC (indeed, I am that old), it included a program representing a scaled-down version of ELIZA, which interested me in the possibility that one day a created machine might be able to have a plain-language conversation with a human being.
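For flavor: the ELIZA-style listings in those old BASIC books boiled down to little more than keyword patterns and canned responses that reflect the user’s words back. A toy Python sketch in that spirit (my own reconstruction of the idea, not the original program):

```python
import re

# A tiny ELIZA-style responder: match a keyword pattern, reflect pronouns back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ("my" -> "your", etc.)."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    """Return a canned response for the first matching rule, or a fallback."""
    for pattern, template in RULES:
        m = pattern.match(line.strip())
        if m:
            return template.format(reflect(m.group(1)))
    return "Please, go on."  # default when nothing matches

print(respond("I need my computer"))  # -> Why do you need your computer?
```

The illusion of conversation comes entirely from pattern matching and reflection; there is no understanding anywhere in it, which is exactly why it so intrigued people that it felt like there was.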

And we are making amazing advances in the field of AI, to be sure. Many of those advances have come, in part, because some researchers have stopped trying to mimic the functioning of human intelligence, which remains to some extent inscrutable. As a result, more and more programs are becoming what would be called, by many measures, “intelligent” but with an “intelligence” that is increasingly inscrutable to their programmers. That is, the “thought” processes aren’t always open to being deconstructed in ways we would completely understand, yet the results do generally show some chain of successful reasoning — at least, that’s what I have read on the topic. It seems that as we increasingly create programs and machines that can “learn” more than we program them with, we are losing the ability to describe how they actually come to the “conclusions” they do.

This might be related to some of the weird answers Watson came up with on Jeopardy!, which I mentioned in my post here: “My Dear Watson Isn’t So Elementary” (which also refers to ELIZA).

The practical concerns of this–using “intelligences” that draw conclusions in ways that we don’t exactly understand–are real. For instance, such AI systems have been used, in at least one case that I know of, to solve a real world crime. In such a case, if the AI system’s “reasoning” cannot be reconstructed, then the humans have to be able to construct valid reasoning on their own. If they cannot, would the AI system’s conclusion be sufficient as evidence? What if lives are at stake? It’s like the mathematician’s dilemma concerning computer-aided proofs of theorems, but with, you know, jail time.

Still, such conundrums aside, I find that a little disappointing — “that” being the idea that in order to make strides in Artificial Intelligence we are abandoning to some extent an attempt to model our own intelligence. Call it sentience-bias, but the intelligence I would most like to see simulated is one that is very much like our own. At the same time, in some cases it seems a matter of abandoning the method in order to embrace the same outcome. Since our own intelligences grow over time through learning and experience, part of the hope of some is to cease trying to recreate intelligence from scratch and to simply create the mechanisms by which intelligence is able to grow. In some cases, the resulting intelligence may be very foreign to what we would think of as human intelligence, while in others it may be very similar.

There are two AI efforts I have read about in the last few months that really intrigue me. One is being driven by the creators of Apple’s Siri in their new company, Viv Labs. I remember the first Siri app, which I had before Apple bought the company behind it (and which stopped working the moment Apple bought the company). And while Siri is a disappointment to many, it actually is pretty impressive in a number of ways. Now, we have others seeking to outshine it, like Microsoft’s Cortana, named after the clothing-challenged AI interface in the popular Microsoft HALO game series. (I think Google has a voice-based assistant, as well, but I do not know if it has a name. Googlette?) All of these, though, pale in comparison to what we really want in an AI assistant, which is, essentially, Tony Stark’s “Jarvis.” We want “someone” who can understand us in real, natural, human language. So far, Siri, Cortana, and their comrades fall short of that and don’t seem like real “AI” as much as simple voice-based interfaces.

But what Viv is working on should change that in an amazing way. Wired had a good article on the operating system that Viv Labs is creating (called “Viv,” itself) if you are interested: “Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask.” If you just want a quick hit instead of reading the whole article (though the whole article is interesting, IMO), the diagram at the bottom showing Viv dissect the statement “On my way to my brother’s house, I need to pick up some cheap wine that goes well with lasagna” and producing (in 0.05 seconds) a helpful list of possibilities is fascinating by itself. Closest thing to a Jarvis that I’ve seen, and light years beyond Siri.

The second research effort that has grabbed my attention is Baby X, being built by New Zealand’s Auckland Bioengineering Institute Laboratory for Animate Technologies. This article on “The Creators Project” blog covers Baby X pretty nicely: “Baby X, The Intelligent Toddler Simulation, Is Getting Smarter Every Day” (the videos below are also featured in that article and may not seem as impressive unless you read the article).

The creators of Baby X were smart to choose a child’s face, I think, because the features that make it seem child-like help us to overcome the “uncanny valley” effect: the somewhat ironic revulsion we feel toward things that seem near-human, such as digitally animated faces that are somehow creepily “off,” as opposed to things that are more obviously artificial, such as C-3PO’s face. The nearer something comes to actual human appearance, the stronger our revulsion, until the “valley” is crossed into fully human appearance and we become accepting instead of repulsed (clearly too much for parentheses, so read this if you are curious). Child-like features (big eyes, etc.) make us instinctively want to like the image, which may help overcome the counter-instinct that recoils at something nearly-but-subtly-not-fully human.

Regardless, from what I get from the article and from the Institute’s website, Baby X’s programming is based on a model of actual biological activity (the release of dopamine in rewarding circumstances, etc.) in the hopes that it will learn and respond in a more human way if the manner of learning is related to how we learn and respond.

Here are a couple of videos if you are interested:

Clearly, an actual baby’s brain is not being fully modeled in detail, as those interactions are still beyond our ability to accomplish. The vast network of neuronal connections in an adult brain’s connectome exceeds even the Internet’s complexity (as discussed in someone’s Tomorrow’s World article: “The Enigmatic Human Brain”). Baby X looks as though it is based on simpler models that attempt to reproduce brain activity without modeling every detail that produces that activity.

C. elegans in action. (From Wikipedia)

Another area of research that has grabbed my attention, however, does model brain activity at every level of detail, and represents, to me, more of an attempt at Artificial Life than Artificial Intelligence. The key factor making it possible is that the brain involved is remarkably simple: That of the tiny worm Caenorhabditis elegans (or C. elegans, for short) — a small (about 1 mm long), non-parasitic round worm. It is the only living creature to date which has had its entire connectome (the entirety of its neuronal connections) completely mapped out. It helps that the worm’s brain consists of only 302 neurons.

As a result, one can create a virtual, digital creature that behaves just like C. elegans, even though the digital creation is not, in the same sense, alive. Once all the neurons are in place and the connections established, activity within the network begins acting just as it does in the real worm.

In fact, some have gone further. Using LEGO Mindstorms kits — simple robot-building kits using LEGO bricks — people have created robot versions of C. elegans. Certain things have to be modified, to be sure — for instance, where the real creature responds to signals indicating the presence of food, the robot’s sensors might detect the presence of sound, or motor neurons (or their worm equivalents) might be made to activate a wheel instead of a muscle.

However, what is fascinating is that the only real “programming” of the robot worm is the simulation of the neurons and their connections. That is, there is no line of human code saying, “If you run into a wall, turn around and go a different direction.” Rather, the neurons are put into place, they are all connected to each other in the same way as in the worm, and then the worm is turned on. The result? The robot does, indeed, behave just like the worm. No additional programming necessary: just neurons responding to each other and activity moving around the connections from one neuron to another. To “program” the worm, it wasn’t necessary to tell it what to do: just create the neurons, order them in the same way as the real worm’s, and the robot worm comes to “life” behaving just as its real-life inspiration does.
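That “wiring instead of rules” idea can be sketched abstractly. In the toy network below there is no rule anywhere that says “reverse when touched”; the behavior falls out of the connections. (The neuron names, weights, and threshold are invented for illustration; this is a generic threshold network, not the actual C. elegans connectome or any Mindstorms code.)

```python
# Toy "connectome": behavior emerges from wiring, not from if/else rules.
connections = {          # pre-neuron -> {post-neuron: connection weight}
    "touch_sensor": {"interneuron": 1.0},
    "interneuron":  {"reverse_motor": 1.0, "forward_motor": -1.0},
}
threshold = 0.5

def step(active):
    """One tick: each neuron sums weighted input from currently active neighbors."""
    totals = {}
    for pre in active:
        for post, weight in connections.get(pre, {}).items():
            totals[post] = totals.get(post, 0.0) + weight
    return {n for n, total in totals.items() if total > threshold}

# Stimulate the touch sensor (the robot "hits a wall") and watch activity flow:
tick1 = step({"touch_sensor"})   # the interneuron activates
tick2 = step(tick1)              # it excites reverse, inhibits forward
print(tick2)
```

Nothing in `step` knows anything about walls or reversing; change the wiring and you change the behavior, which is exactly the point of the connectome-driven robots.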

Here’s a video of someone’s LEGO C. elegans in action:

The video does a nice job of showing the “neurons” in action. And, again, though the robot’s behavior may not seem all that impressive, it should be noted that no one gave the robot a bit of human-generated computer code that said, “When you hit a wall, turn around.” Rather, the worm’s neurons and their connections were modeled “as is” and the behavior just occurs. By reproducing the worm’s neuron arrangement in the robot, the worm’s behavior has also been reproduced in the robot. In a sense, this is “Artificial Intelligence” (a worm’s intelligence, to be sure), but it seems to me to be more fully like “Artificial Life” and, in some ways, I find AL more fascinating than AI.

Actually, there is so much more I would like to say, but I have spent more time on this post than I planned — a nice break from the other things I am working on today. So I will leave the questions all of this might generate to you in the comments. (OK, here’s one: Should it ever be possible to completely map a human being’s connectome–his or her neurons and all of their connections–would we expect the human being to be perfectly reproduced, as well? What role would the presence of the human spirit have in that? Would it be a hobbled reproduction lacking in the real “spark” that truly makes us “us”? What would it be? Where are the boundaries? At what point between C. elegans and H. sapiens would such modeling break down? — All right, that’s more than one…)

Fascinating stuff! At least it is for me. 🙂 If you have any thoughts, feel free to share them below.

Cumberbatch as Alan Turing

Alan Turing, flawed heroes, and the nature of reality

You know, I look at the title I just wrote and it makes it seem as though this will be a monster post of amazing depth and insight. Don’t get your hopes up. 🙂 I am busy working on a couple of Tomorrow’s World scripts for taping next week (one on the biblical “Man of Sin” and one about the observance of Easter), but it helps to unknot the brain every few moments by doing something else. This post will be one of those “something elses” and a nice brain stretch — a standing up, stretching the legs, and walking around before sitting down to hammer away at the task at hand some more.

The Alan Turing biopic coming out — “The Imitation Game” — got my attention when I first saw the initial trailer. It comes across like a WWII intellectual thriller and character study, in which viewers will watch Benedict Cumberbatch play the role of Alan Turing as he and others crack the Enigma code. Turing was an interesting figure, one whom current movie tastes certainly make attractive for cinema (more on that in a moment), and the idea of seeing a movie depicting the man who laid the foundation for modern computing and, truly, so much more (as I will also get to in a moment) is appealing.

And yet it isn’t. Alan Turing was also an unrepentant and publicly professed homosexual in a time when such activity was illegal. The talk I hear concerning the movie, which premieres November 28, is that it is an “important story for our times” (or something like that), and such language, given its subject matter, suggests to me that much will be made of Turing’s sexuality and how someone so crucial to victory in the war–someone so gifted, etc.–was persecuted for simply “loving differently.” Whatever. Hollywood is very good at crafting stories influencing us to admire the heroes it presents us with in such a manner that our actively cultivated admiration may cause us to overlook whatever chosen element of immorality they are trying to change our minds on. It would be as if the Bible were rewritten by an author hoping to use David’s virtues to get us to think less critically of adultery, as opposed to the Bible’s actual approach, which is to present its flawed heroes as just that: flawed heroes. And recognizing someone as a flawed hero requires one to recognize flaws for what they are: flaws.

Alan Turing’s immoral sexual behavior was a flaw of character. It does not diminish the greatness of his intellect or insight. However, neither does the greatness of his intellect or insight diminish the immorality of his sexual behavior. I suspect that Hollywood is hoping that we will ignore the second of these facts.

So, I’m not as interested in seeing the movie as I otherwise would be. And why would I be interested in seeing the movie? Because I think Turing’s work, like Kurt Gödel’s and, more recently, Gregory Chaitin’s, has played a significant role in changing not only how mathematics is seen, but how reality is seen.

I’ve mentioned Gödel before — actually, not on the blog, I think, but in a telecast: “What Is Truth?” Don’t have 28 minutes to watch? We also made a TW Short version of “What Is Truth?” that is only 3 minutes long and an article of the same title. Actually, Kurt Gödel is mentioned in all three incarnations of that work, and including a mention of his “logical nuclear bomb” is one of my all time “feel good moments” concerning my work in our media. (And kudos to our video editors, who also allowed me to get images of the Fundamental Theorem of Calculus in there.)

That “logical nuclear bomb” was his work on the incompleteness of mathematics, demonstrating mathematically that not all true mathematical statements can be proven, nor can the consistency of mathematics be proven mathematically. (That’s my own summary–forgive me for glossing over details for those who are nitpickers.) I’ve found it fascinating since I first saw the result mentioned in a PBS program as a child (probably a NOVA episode, but I am not sure). However, an old OMNI magazine article (anyone remember OMNI?) pressed me to consider what the result might be saying about reality, given the intimate connection between reality and mathematics — an intimacy deeper than that of the physical sciences, since it is the relationship on which the sciences depend.
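For anyone who wants the two results stated a little more carefully, here is a compact (and still informal) rendering; the precise statements carry important fine print about what “enough arithmetic” and “effectively axiomatizable” mean:

```latex
\textbf{First incompleteness theorem.} If $T$ is a consistent, effectively
axiomatizable theory containing enough arithmetic, then there is a sentence
$G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
\]

\textbf{Second incompleteness theorem.} Under the same hypotheses, $T$ cannot
prove its own consistency:
\[
  T \nvdash \mathrm{Con}(T).
\]
```

In plain terms: any such system contains true statements it cannot prove, and its own trustworthiness is one of the things it cannot establish from within.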

I’m currently enjoying Chaitin’s popular book Meta Maths: The Quest for Omega, which talks about the author’s own fascination with Gödel’s work and his extension of Turing’s discoveries concerning the halting problem (i.e., the question of whether there could ever be a way to predict, out of all possible programs a computer could run, which ones would come to a stopping point versus running forever without stopping; it turns out there is no way to do this for all programs). The point of the book is to discuss the implications of the number omega, defined to be the probability that a randomly constructed program halts. It is a well-defined number that surely exists, and yet no computer will ever be able to calculate it — not because of limits to memory, computing power, programming language, etc., but because it is actually, in its existence, impossible to compute. Its every digit, in a sense, represents a mathematical truth that mathematics cannot determine, though the existence of the number is well established.
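To give a feel for what this means, here is a toy sketch of my own (the four-instruction machine and its names are invented here for illustration — this is not Chaitin’s actual construction): enumerate every short program for a tiny made-up machine and count how many halt. Notice the step cap in the simulator: for a genuinely universal machine, Turing showed that no finite cap can ever settle every case, which is exactly why omega’s digits stay forever out of reach.

```python
from itertools import product

def runs_to_halt(prog, max_steps=1000):
    """Run a toy program; True if it halts within max_steps.

    Instructions: "I" increments a counter, "D" decrements it (floored
    at 0), "L" jumps back to the start if the counter is nonzero, and
    "H" halts.  Falling off the end also counts as halting.
    """
    counter, pc = 0, 0
    for _ in range(max_steps):
        if pc >= len(prog) or prog[pc] == "H":
            return True                  # halted (or fell off the end)
        op = prog[pc]
        if op == "I":
            counter += 1
        elif op == "D":
            counter = max(0, counter - 1)
        elif op == "L" and counter != 0:
            pc = 0                       # loop back to the start
            continue
        pc += 1
    return False                         # no halt within the step budget

# Estimate a toy "halting probability" over all length-4 programs.
progs = list(product("IDLH", repeat=4))
halted = sum(runs_to_halt(p) for p in progs)
print(f"{halted} of {len(progs)} toy programs halt")
```

Some programs here (like `("I", "L", "I", "L")`) genuinely never halt, so the fraction is strictly between 0 and 1 — a crude stand-in for the real omega, which no amount of computing can pin down.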

Among the things I am enjoying about the book is that Chaitin discusses thoughts that have been rattling around in my noggin for a few years now (though he does so with intelligence, experience, and insight, whereas my thoughts have been characterized by more of a dull hum…). For instance, do what we call the “real numbers” exist in the world? I don’t mean that in the sense of Platonism (i.e., is there some sort of abstract “reality” where these things exist in a non-physical sense), but, rather, is there any real physical representation of them anywhere? We can take a square that is exactly 1 meter by 1 meter in dimension, and its diagonal would be the square root of 2 meters. And we can take a circle that is exactly 1 meter in diameter, and its circumference would be π (pi) meters. So, surely, the square root of 2 and π are things that exist in reality… except that there is no square in existence whose sides are exactly 1 meter each, nor is there a circle in existence with a diameter of exactly 1 meter. Can such numbers still be said to exist?
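Part of what makes the square root of 2 so slippery is the classical fact that it cannot be written as any ratio of whole numbers at all. Here’s a quick spot check in exact rational arithmetic (only a finite sample, of course — the real proof rules out every denominator at once):

```python
from fractions import Fraction
from math import isqrt

# For each denominator q, the two best integer numerators for sqrt(2)
# are floor(q*sqrt(2)) and the next integer up.  Neither ever squares
# to exactly 2 -- sqrt(2) slips between every fraction.
misses = 0
for q in range(1, 2000):
    p = isqrt(2 * q * q)              # floor of q * sqrt(2), exactly
    if Fraction(p, q) ** 2 != 2 and Fraction(p + 1, q) ** 2 != 2:
        misses += 1
print(f"checked {misses} denominators: no p/q squares to exactly 2")
```

Every finite measurement, and every finite decimal, can only straddle the true value — which is the point: nothing physical ever pins it down exactly.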

OK, just imagine the points invisibly in space, without a physical object assigned. Can’t we define such numbers by the invisible distances between these infinitely small points, which, though conceptual, surely do exist as locations in real space? No, not necessarily. Evidence, by some accounts, continues to mount that our physical reality is not continuous like what we imagine the line of a perfect circle to be, but discrete–made up of “bits,” like a circle on a computer screen when you look closely enough to notice the pixels making up the image. Life (and reality) would not be a continuous flow, but a passing from frame to frame, like a strip of film showing in a movie theater–an illusion of continuous movement that is actually a series of stills shown in rapid succession.

If there is no true continuity to existence, if all is discrete, then there is no room for infinite strings of digits in reality. And without infinite strings of digits, the vast majority of real numbers on the number line disappear into nothingness — including our favorites, like π and e. They remain only as “useful fictions” that allow us to use an imaginary continuity to model a very discrete reality.

Leaving earth behind, might there be a perfect circle in heaven, or a perfect square? Since the word “perfect” here reflects not an actual perfection of morality or existence but, rather, conformity to an idea that man has defined, I’m not so sure there are.

My old Platonism is looking pretty tattered these days. 🙂 As much as I love Cantor’s work and believe there is real value to it, more and more I am beginning to think of Leopold Kronecker’s famous statement “God made the natural numbers; all else is the work of man” as being possibly true at more levels than I had ever given it credit for.

Anyway, that sort of stuff, among other things, is what comes to mind when I hear of Alan Turing. And I look forward to the general resurrection when, if I can be so indulged at that time, maybe I can see Gödel, Turing, Cantor, and Kronecker at tea together not only discussing such insights but comparing and contrasting their thoughts with a revealed view of reality, all while having the privilege of helping them to get to truly know the Author of all they had been studying.

Wow — I really wandered around in this one, huh? Probably fell into a few ditches, too. Looking back, I read some of what I wrote and think, “Just what was in this salted caramel mocha, anyway?” If you read this far without falling asleep or getting a headache, congratulations! This probably hasn’t been the best expression of my thoughts on these things, but it still feels good to get it into writing in some way. I might try to write about these things again in the future — I’ve been wanting to write about how I first began to lean toward the belief that zero might not actually be a number in a very real sense, and if some of you out there have been having a hard time falling asleep, let me know and I will go there for you. 🙂 And the break has been nice — now, back to the scripts!

Space Banner Picture (NASA)

The Multiverse Kills Science

NASA pic
Somewhere in the multiverse, you may be reading this blog post on a beach instead of wherever you are. Also you might be riding a unicorn.

OK, bear with me a bit, today. Some of you may not be interested in this topic at all, but I’ve thought about it a lot and, well, it’s my blog. 🙂

In yesterday’s post on the New Scientist ad, I referred to the multiverse concept in a couple of derogatory ways. The first was that it is often used as a multiverse-of-the-gaps to help explain our existence when nothing else known by science will. I touch on this in my upcoming article in the next Tomorrow’s World magazine. And that topic is worth exploring in another post, perhaps, since there’s a lot to be said, though I wouldn’t say all of it rises to any level above mere blogworthiness.

The other derogatory reference, though, was to the “science-killing ‘all things happen somewhere’ multiverse.” This sort of multiverse (and there are many flavors of multiverse theories–a consequence of being supported by little more than imagination) is one I would have enjoyed spending more time on in the TW article, but there just wasn’t space to do so. In fact, a good bit of the back and forth between me and editorial was about trying to say well what little I could say in the space we had (and, as usual, editorial’s help was tremendous).

Before going into it, let me explain a bit of background. What is the “multiverse” concept? Loosely defined, it is the idea that all we normally think of as “the universe” is really just one of many universes. In some theorizations, there are virtually infinite universes. Such ideas have long been the playground of science fiction, comic books (greetings, fellow citizens of Earth-616!), etc. However, such ideas are now also very popular among the halls in which actual scientists walk. (Not necessarily because they are good ideas, but these days that isn’t necessarily a requirement in academia.)

I say “some theorizations” because physicists and others have imagined a variety of ways in which multiverses could exist. The versions that have my attention here are those such as the Many Worlds Interpretation (MWI) of quantum mechanics or Max Tegmark’s Ultimate Ensemble (UE).

The first of these, MWI, essentially posits that any time there can be a different “choice” made, the universe “splits” into multiple varieties in which each possible outcome is realized. For instance, in a given moment a uranium atom may randomly decay or not decay. If MWI were true, this would mean that two universes are generated in that moment by this one fact: one where the atom decays and one where it does not. In like manner, all possibilities are assumed to be realized in some universe “somewhere.” For instance, if you are flipping a coin, in one universe it is heads and in another universe it is tails. Both universes are taken to exist in reality. Taken to the extreme, in one universe you would be an Olympic champion and in another you would be an infamous mass murderer. Again, all possibilities are taken to exist in a virtually infinite collection of universes. (I should add that some would say a literally infinite collection of universes.)
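The branch-counting here is simple arithmetic: n binary “splits” yield 2^n outcome histories, and the MWI-style picture treats every one of them as real. A tiny sketch (the coin-flip framing is just mine, to make the counting concrete):

```python
from itertools import product

# Ten binary "splits" (decay/no-decay, heads/tails) give 2**10 branch
# histories; an MWI-style picture treats each one as a real universe.
n = 10
branches = list(product("HT", repeat=n))
print(f"{len(branches)} branches after {n} splits")

# Every history appears, including the "miraculous" run of ten heads
# in a row -- somewhere, it is claimed, that run actually happened.
all_heads = tuple("H" * n)
print("ten heads in a row is one of them:", all_heads in branches)
```

Scale n up to the number of quantum events in a universe’s history and the branch count becomes unimaginably vast — which is exactly where the trouble discussed below begins.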

[And, yes, a shocking number of scientists do believe that this is the way reality truly is, Olympic champion-you and mass murderer-you and all. If you’d like to read more, check out New Scientist‘s “Life in the Multiverse” in its Sep. 27, 2014 issue, available online as “Multiverse me: Should I care about my other selves?” Even if most of the article is only available to subscribers, the first few paragraphs shown in the preview should be enough to illustrate what I am talking about. (The ridiculous “dinosaur” statement made by David Deutsch in that article just about exploded my brain with its inanity.)]

The other idea above, Max Tegmark’s “Ultimate Ensemble” (UE), is something I have discussed before. It considers mathematics to be the ultimate reality, such that all mathematically possible worlds/universes actually do exist–again, a virtually infinite number of realities. [It’s no shocker that Dr. Tegmark is featured in the New Scientist article I mentioned parenthetically just above.] And, again, the implications of UE are just as vast as those of MWI: all possible realities are realized–all things that can happen, however improbable, do happen in some universe somewhere.

How the idea that we live in not just a universe but a multiverse is often used as a God-substitute is discussed in the Tomorrow’s World article. For this post, I’d like to focus on how such ideas destroy science.

I don’t mean that they destroy science in a conventional sense, such as “Wow, these ideas are widely accepted with virtually no evidence to accept them–that’s not very scientific!” Though this is true (and discussed in the TW article), the damage done to science by such multiverse concepts is much deeper and more profound.

For instance, consider how many twins have been born in the history of man. It is, surely, a whole bunch. (“Whole bunch” being a technical term, representing an amount much bigger than “a smidgen” but less than “a bazillion.”) In each case, both twins have eventually died, since, so far, life has proven to be 100% fatal from Adam and Eve onward. (Yes, Jesus rose again, but He did die first!) In some cases, one twin outlived the other by quite a long time–years and years. In other cases, the twins may have died very soon after one another. Certainly, throughout history, it has been a mix.

In the latter case, when those things happen they get our attention. For instance, say one twin of a pair dies at 9:52AM on a Wednesday and the other one dies at 9:56AM on the same day, even though each one lived in different places and died of completely different causes. Those who knew of the deaths might remark “Wow, what a coincidence!” Others, due to the fact that they were twins, might be tempted to invoke some sort of “cosmic hand” coordinating such effects.

Now, consider the implications of multiverse hypotheses in which all possibilities, however improbable, happen somewhere in some universe out of the infinite number of universes available (in fact, it happens, by some accountings, an infinite number of times). In these multiverses what was once improbable becomes inevitable.
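The arithmetic behind “the improbable becomes inevitable” is short: if an event has probability p in any one universe, then across M independent universes the chance that it happens somewhere is 1 − (1 − p)^M, which races toward 1 as M grows. A quick sketch with made-up numbers:

```python
# Made-up numbers: a 1-in-a-billion event per universe, across ever
# more universes.  P(at least one occurrence) = 1 - (1 - p)**M.
p = 1e-9
for M in (10**6, 10**9, 10**12):
    p_somewhere = 1 - (1 - p) ** M
    print(f"M = {M:>15,} universes: P(somewhere) = {p_somewhere:.6f}")
```

With a trillion universes the event is effectively certain to occur somewhere — and with infinitely many, anything with nonzero probability is guaranteed.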

If that is the case, then somewhere there is a universe (actually, many universes) that looks remarkably similar to ours–is just as real as ours–and, yet, in which every pair of twins throughout history has always died within five minutes of each other. Every single pair. Throughout the centuries–throughout the millennia–every time a twin has died, his or her brother or sister has died within the next five minutes.

I’m not saying there would be a cause for this–it would simply be a cosmic “roll of the dice.” Given the laws of probability and nothing else, the odds against such a thing would be staggering, of course! Why would one twin dying of pneumonia in Texas cause, in any way, the other twin to die in a car accident in New York within the next five minutes?

Surely there would be some causal connections in a few cases. Some twins share, for instance, inherited, fatal diseases. Still, the odds that they would die within five minutes of each other would be small. Further still, the odds that every single pair of twins throughout history has died within five minutes of each other would be phenomenally small! Virtually infinitesimal.
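Just how small? With admittedly made-up numbers (say, a 1-in-100,000 chance per pair, and 100 million pairs in all of history), the compound probability fits on a napkin:

```python
from math import log10

# Made-up numbers for a napkin estimate: each twin pair independently
# has a 1-in-100,000 chance of dying within five minutes of the other,
# and history has produced about 100 million pairs.
p_pair = 1e-5
n_pairs = 10**8

# P(every pair does it) = p_pair**n_pairs -- far too small for a float,
# so work with its base-10 logarithm instead.
log10_p_all = n_pairs * log10(p_pair)
print(f"P(all pairs) is about 10^{log10_p_all:,.0f}")
```

A number on the order of 10 to the minus 500,000,000 — staggeringly small, yet strictly greater than zero, which is all an “everything happens somewhere” multiverse needs.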

Yet, the probability would not be zero.

And, hence, according to these multiverse ideas, there would be universes–perhaps many, many, many universes–in which this happens: where literally every pair of twins mankind has ever produced has always, without fail, died within five minutes of each other.

How would scientists react in those universes? They would explore the phenomenon, looking for causes. They would discover lots of things, surely, in genetics, sociology, etc. They would, perhaps, even “discover” things they consider to be possible causes for the “Twin Death Law” of their universe. And, yet, there would be no cause. The only “reason” for the phenomenon would be that their universe just happened to be “one of those” where such a thing happens.

In fact, imagine the first pair of twins in one of those universes (again, of a possibly infinite number) who do not die within five minutes of each other. One dies and the other continues to live. Wouldn’t scientists pore over the remaining individual, analyzing everything they can–biologically, sociologically, quantum mechanically (“Are twins like entangled particles? Were these decoupled somehow?”), whatever–desperately trying to find out why these two were a rare, previously unknown exception to the “Twin Death Law”?

And, again, there would be no cause. No real reason. No underlying physical law other than dice rolling and random happenstance. The scientists would never discover an underlying “cause” because there is none.

Actually, the use of twins might be introducing an element that hides the absurdity. Imagine, rather, a universe in which everyone named “Marty” dies on their 36th birthday. Again, highly improbable, and, yet, according to these theories, something that does happen somewhere because its probability is not actually zero. Again (or, perhaps, “Again²”), everything that can happen does happen somewhere according to these theories. What could be discovered about this very real phenomenon in such a universe? What could scientists discover about the connection between the name “Marty” and death on birthday #36? There would be nothing to discover. No cause at all. Just random happenstance–and, yet, a very real phenomenon that could not be denied.

Some would begin to make predictions: “Well, your name is Marty, so you’re going to die tomorrow on your 36th birthday.” Those predictions, in a subset of those universes, would always be true. The “Marty dies at 36 Law” would be a reality. In other universes, there would be one rare exception. In others, there would be two exceptions. In others, three, etc. New studies would be initiated to discover why these Martys didn’t die at 36. Maybe researchers would discover “reasons” in some cases — for instance, in some of those universes, the Martys who survived 36 were those who chewed Double Bubble bubblegum every day before the birth of their second child. (Or whatever.) In other universes, no “cause” could be found, and research would continue. Regardless, there would be no real cause whatsoever. Just relentless probability.

Actually, the problem is worse than this. According to quantum mechanics, the probabilities of many weird things are not zero, even though they are remarkably small.

For instance, the probability of a 747 filled with lemon Jello spontaneously forming 500 feet over my head isn’t necessarily zero. It’s low. Way low. (For an estimation of the probability of you, personally, randomly experiencing quantum tunneling (think “teleporting”), take a look at this. And note what it says at the end: “Almost impossible.”)

Way low. But not zero.

So, in extreme multiverse scenarios in which all things that can happen do happen, it does happen. (…he types, as he looks out the window nervously.) In fact, if there actually is an infinite number of universes in which all quantum particle possibilities are realized, there would be universes and earths in which such lemon Jello-filled 747s rain down on the earth daily. (Bring an umbrella.) Imagine looking for causes behind that. Yet, again, there is no real “cause.” If the reign/rain of 747s began on July 12, 2011 in that universe, would some poor scientist have to get on the news to tell the public, “Well, it looks like we’ve picked the short straw, and–for no real cause whatsoever–our universe is just one where Jello-filled 747s rain down from the sky at random in some strange quirk of quantum mechanical probability. Hopefully it will stop one day. Really, the odds are that it will stop immediately, yet (Wow, I just heard another one land down the street)… Anyway, I know the dramatically improbable keeps happening every day, and that none of this should really be happening, but–well–whatever.”

And, really, I don’t think that is the worst of it.

In our own universe, there are a number of things for which we don’t have good explanations, yet which seem to be very real phenomena–experienced repeatedly and consistently. With each observation, the probability that what has been observed is just chance gets smaller and smaller. Yet it never truly becomes zero. And in some universe, somewhere, such things happen to innocent protons, neutrons, puppies, whatever, purely by chance and not by real “cause.” How do we know that ours isn’t one of those? How do we know that what we’re observing is truly the effect of some underlying cause? How do we know that the unexplainable correlation that has caught our attention in the data of our particle colliders or our beakers and test tubes is not a result of being part of an infinite multiverse in which, however improbable, that persistently observed correlation, in experiment after experiment, is random and uncaused, even though it seems as though there should be a reasonable cause? I’ve focused on twin deaths, Marty deaths, and spontaneous Jello-filled 747s, but there are more reasonable-to-the-mind possibilities. How can we guarantee that, in our universe, we’re experiencing an actually caused correlation in the laboratory, versus the possibility that we’re just living in “one of those universes” where these things happen? We can’t use probability, since–if everything that can happen, no matter how improbable, does happen somewhere–the utility of probability has been hobbled in an infinite multiverse.

This has been long and rambly, I know. And the ideas expressed are probably faulty and poorly expressed. (Again: blog.) But at their heart are real concerns.

Those who invoke the “multiverse” to explain away the improbability of our existence and the existence of our universe actually explain away much more–they explain away virtually every improbable event conceivable. After all, if the explanation we give is “Well, it had to happen somewhere in an infinite multiverse,” that explanation works for everything that could possibly occur at all, ever.

And when an infinite number of universes exist in which even the most improbable things can occur, deceptively indicating nonexistent underlying causes or laws (or hiding actual underlying causes or laws), it seems to me that science would be dead.

Thankfully, in the real world, the actual evidence of an infinite multiverse is non-existent. (David Deutsch’s adamant claim otherwise in the earlier referenced New Scientist article is, in technical terms, hooey.) Even the more reasonable bases for conceiving of a multiverse of different universes–such as inflationary cosmology–have no experimental confirmation yet. Not that cosmologists haven’t tried: You can piece together the history here and here, or access the original studies if you are so inclined (e.g., here, here, and here). The results so far? Not very promising for “multiverse” adherents, but not conclusive (read: “Hope springs eternal”).

And, as I have mentioned before, the multiverse is no solution to the improbability of mankind’s existence. Really, it is a no-win scenario, even for those who want to rid themselves of a Creator. All the best, current theorizing–even augmented with multiverse ideas–still points to a Creator and Designer behind it all. Narrowed by actual, current evidence, even the illusion of escape fades away.

But now I’m getting off track. My point was to address how, IMHO, the concept of infinite multiverses as popularly advocated–in which anything that can happen, however improbable, does happen–kills science. And I may have done so poorly, but at least I’ve gotten it off my chest. That’s got to be good for something. 🙂

Question Mark

Without God, what is outstanding about man?

I see that New Scientist magazine is advertising the newest issue of their anthology series, New Scientist: The Collection. This one is titled “The Human Species” and, as they describe it, it is…

“A compilation of classic New Scientist articles, The Human Story explains how an ordinary ape evolved into the most remarkable species the Earth has ever known.”

Mmm hmm. OK. Sure it does.

Actually, it doesn’t. As a subscriber to New Scientist, I actually enjoy each issue, but I also recognize that the magazine is rampant with unjustified underlying assumptions and anti-theistic bias. I’ve touched on some of that before here in the blog, such as this post about an article from the recently-late Victor Stenger. (Key sentence in the post: “Stenger likes the pretensions of an untainted commitment to truth, but his words reveal that his ‘commitment’ plays second fiddle to his personal bias and inclinations.”) It’s sort of a given for the editors and authors of New Scientist, and you will often find Darwin-of-the-gaps and Multiverse-of-the-gaps comments in their writings. (Actually, an article I recently wrote about the multiverse concept will be in the next Tomorrow’s World magazine, I believe.)

To expect the articles in this compilation to be any different would be silly. Still, that’s actually not my point in this post. I’m used to such things in New Scientist, but what got my attention was this statement in the e-mailed advertisement:

“A compilation of classic New Scientist articles, The Human Story explains how an ordinary ape evolved into the most remarkable species the Earth has ever known.”

The linked-to online ad expanded:

“We are a truly remarkable species. In the space of a few thousand years we have transformed the planet, created a technological civilisation the likes of which has never been seen, and even begun to explore space.”

[For those worried that my spell checker is broken, “civilisation” is spelled in the British manner.]

I don’t mean to assume that no one can give an answer, but really: Apart from God’s existence and purpose for him, what is remarkable about man?

By evolutionary standards, we aren’t necessarily the most successful species on the planet. (Though, the meaning of “successful” and the other such descriptors is a question, and I will get to that.) As this article starts off:

“As the most intelligent and technologically advanced species on Earth, we humans like to think that we own the place. But evolutionary success can be measured any number of ways. As evolutionary biologist Stephen G. Gould once noted, complexity, intelligence, and ferocity don’t count for much in the long run — adaptability and reproductive success matter more.”

The article then goes on to describe eight non-human organisms (bacteria, beetles, et al.) who, from an evolutionary perspective, might easily be considered more successful than mankind.

“Successful” is only meaningful to humans when defined in terms we care about.

Still, the advertisement said man was “the most remarkable species the Earth has ever known,” not the most “successful.” However, I think the difficulty still applies.

I take it that being “the most remarkable” means having qualities that are most worthy of being remarked on. The dictionary defines “remarkable” as “notably or conspicuously unusual; extraordinary” or “worthy of notice or attention,” but it would seem to me that from a materialist, Darwinistic perspective it’s sort of begging the question to say that mankind has qualities that make it the “most remarkable species” according to mankind. Maybe not really question begging, but a little–I don’t know–meaningless?

I mean, really–what makes mankind so remarkable?

Don’t get me wrong–I certainly think mankind is the most remarkable species! But the things I value most and find most worthy of noting are things I value because God’s revealed values give them meaning: our intelligence, our ability to create, our culture, our different religions, etc. And not all of those “remark-worthy things” are good. Some of man’s qualities are quite remarkable because they are very, very evil. Yet, even that–our capacity for moral or immoral action only truly has meaning in an existent God who gives real, objective meaning to morality.

The advertisement mentions the (perhaps debatable) relative speed at which we have “transformed the planet”; our creation of an advanced, technological civilization “the likes of which has never been seen” [(1) It should say “never been seen before” since we are, currently, seeing ours, and (2) to which species’ technological civilization are we comparing it? The great technological civilization of the horseshoe crab?]; the fact that we are now exploring space (don’t many theorize that earth was seeded by microbes from Mars or elsewhere?); our culture; and other items.

But from the (unjustified) value-free point of view of modern evolutionary thinking, what makes any of these truly “more remarkable” than the extreme attributes of other species? Nothing, really. In fact, when one embraces the nihilism that is the logical end of God-less, materialist, evolutionary thinking — especially when the science-destroying “all things happen somewhere” multiverse is thrown in — very little, at all, is worthy of remark. There is nothing to be truly valued over anything else, and why should one actually appreciate any attribute in any species at all? Even the supposedly evolution-programmed instinct to reproduce can be ignored when nothing at all has any real meaning or value that isn’t merely imagined.

[And, as an aside: I note that it is possible that by “remarkable” it is meant by the magazine’s marketers to (effectively) mean “remarkable to the sensibilities of most humans, regardless of the lack of actual, objective value of the ‘remarkable’ characteristics.” But that is just as unsatisfactory. That humans would be the most remarkable species to… other humans? Duh. Gary Larson nailed that schtick when he drew the “Far Side” cartoon where one dog in a car is totally fixated on another dog outside as the most interesting thing in the world, all while the city around the car is in chaos, a nuclear explosion is going off, and people are running for their lives. “Humans are the species that humans find most remarkable” seems the least revelatory statement I’ve heard in a long time. (UPDATE: Might be able to see that cartoon here.)]

Interestingly, other science articles here and there are busy selling themselves to us based on how unremarkable mankind is (sort of a biological “Copernicus Principle,” perhaps) and how we’re just another animal, yet this one attempts the opposite, claiming that we are super remarkable, while embracing the same materialist philosophy that drives the others. Well, there are magazines to sell, you know, and dollars to be collected. (Sorry: pounds, in this case.)

A bit of a rant, today, I know. I don’t mean to be cynical, but after wading through so much of the God-less gobbledygook that tends to come out of folks such as the editors and writers of New Scientist, comments like those in the ads just strike me as philosophically dishonest. I don’t know. Might just be me.

End of rant.

Well, we're back from the Feast! Wait... this doesn't look like Ohio... I knew we should have taken that left turn at Albuquerque.

Great site concerning Moon Landing conspiracies

Well, we’re back from the Feast! Wait… This doesn’t look like Ohio… I knew we should have taken that left turn at Albuquerque.

We’re back from the Feast! Sort of. Physically, you get back, but (1) your mind still wanders to the amazing things you heard and experienced, and (2) there is a lot to do to get back up to speed with real life again. 🙂 So, today my family and I jump into doing the second thing. I know the boys are looking forward to getting back into their math classes today. (OK, I am pretending that my boys are looking forward to getting back into their math classes, today.)

And while there is lots that could be blogged about today — Ebola! Houston pastors & subpoenas! Our Feast! — let me blog on something random. Consider it a palate cleanser.

Of all the “conspiracy ideas” I have encountered over the years (and there have been several), one of the most fun has been the claim that America’s manned moon landings, beginning in 1969, were a hoax deliberately faked by the U.S. government.

I have never found the idea believable, and after looking at a large amount of the “evidence,” that initial impression has not been reasonably challenged. And the idea has been tackled by a large number of people, including — in one of their many enjoyable episodes — the MythBusters team.

Well, while doing some unrelated research this Feast, I came across a neat website completely devoted to busting various “moon landing hoax” theories: Moon Base Clavius.

It doesn’t seem to be updated frequently and, of course, doesn’t need to be. Once something is debunked, it doesn’t need to be constantly re-debunked, and there isn’t much left for “moon landing hoax” theorists to come up with. Still, they apparently do keep up with the news. The current Clavius homepage has a link to a fairly recent (September 18, 2014) item about a new computer gaming process (specifically, an advanced one for modeling secondary lighting from reflective surfaces) that proves the lighting of Buzz Aldrin’s ladder descent in a famous photograph is exactly what one would expect in the moon environment, given the astronauts’ surroundings at the time. Designed to consider the terrain, materials present, etc., the computer model recreated the photograph nearly perfectly — and in contradiction to conspiracists, who claim that the photo should be impossible due to the lack of air on the moon. In fact, the modeling effort was a 2-for-1, because it also demonstrated the falsehood of another conspiracist claim: that if the shots were actually taken on the moon, the stars should be visible. The model demonstrates why this is not the case.

These things have been debunked before, but the use of the computer model to do so was novel and a neat article (IMHO). The YouTube video at the referenced site was brief and interesting to watch, as well. Of course, the point of the video was to promote the software, not just to demonstrate how the hoax theorists were wrong, but it’s still educational.

So, if it’s something up your alley, check out Moon Base Clavius. I know that those whose alley includes “moon landing hoax busting” represent a pretty small population, but, hey, it’s my blog, so, there you go. 🙂

It is great to be back, and it’s nice to roll up my sleeves and work to return to normalcy — or, at least, what passes for that in the Smith household! I pray that all of us will be able to put to work in our lives all the things God blessed us with at this past Feast of Tabernacles. That will be my focus for some time to come, to be sure. Meanwhile, I’m working on faking a Mars landing in my basement. Anyone know where I can purchase eighteen tons of red dust for a good price?


Memory, the Spirit in Man, and Hemispherectomies

What would happen to your memories or personality if half of your brain were surgically removed? Don't be so sure...
What would happen to your memories or personality if half of your brain were surgically removed? Don’t be so sure… (Image credit: Wikipedia)

The most recent article I submitted for the Tomorrow’s World magazine is about the brain, and researching the topic was a real pleasure. What an amazing creation! There is a reason that “mind/brain” posts show up on my personal blog from time to time–I find the topic utterly fascinating, and I always wish I had more time to dive into it and swim around for much longer than I normally can before other areas of life require me to get out of the pool.

In particular, while writing this most recent article I came across tales of Rasmussen’s syndrome, a terrible condition in which the victim–generally a child–experiences swelling in and destruction of one hemisphere of the brain, causing debilitating seizures. Remarkably, one means of treating the condition is the complete removal of one half of the brain–an entire hemisphere. The procedure is called, appropriately enough, a hemispherectomy. The damaged hemisphere, right or left, is completely removed, leaving only the unaffected half. Johns Hopkins is known for its expertise in the procedure, as is the currently-popular Dr. Ben Carson. [Scientific American has a brief article on hemispherectomies here, and Wikipedia’s entry on the matter is not bad.]

While it sounds as though such a procedure might turn a person into a permanently brain-damaged individual with little hope for a normal life, the opposite is true. While there is often some paralysis associated with the side of the body controlled by that hemisphere (on the right side if the left hemisphere is removed, and vice versa), the result is generally the cessation of the seizures and the retention of the individual’s personality, sense of humor, and memories. And, for children, the neuroplasticity of the brain–stronger in the young than in us oldies–means that the remaining portion of the brain can often rewire itself and learn to take over the functions that the removed portion had controlled. This last fact is what I focused on for the article, but I readily admit that it is not the fact that fascinated me the most. Rather, my mind keeps returning to the observation that removal of half of the individual’s brain does not affect their personality or their sense of humor and does not remove their memories.

Wow. Really, every time I pause to ponder the thought, I’m hit with a “wow” moment.

It is tempting to make the leap of concluding that this is evidence that it is the spirit in man in which memories truly reside, thus removing half of the brain does not remove the memories. Tempting, tempting, tempting. After all, I believe in the truth of what Paul is implying when he rhetorically asks, “For what man knows the things of a man except for the spirit of the man which is in him?” (1 Cor. 2:11) — the spirit given to us is part and parcel of who we are, one of the two components, spirit and brain, of the human mind. So, seeing memories persist after half of the brain is removed? Tempting, tempting, tempting.

But, drawing hasty conclusions is a dangerous habit. And there may be material explanations. (Though they wouldn’t change the truth of what Paul said, mind you.) For instance, perhaps the hemispheres’ neurons work redundantly, with each backing up the other when it comes to memory, though I have seen nothing published to suggest this. Actually, I have seen the opposite, such as recent research with rats (of course it’s rats–it’s always rats) demonstrating that it may be possible to erase a memory by making a neuron-level alteration in one location. Or, perhaps it is related to the fact that hemispherectomy patients are young and their memories are still within the time range (maybe 12 years from what I have read) in which memory recall is still dominated by the hippocampus–although I believe that half of the hippocampus is also removed in a hemispherectomy (the only structures unaffected, as I understand the procedure, are the thalamus, brain stem, and basal ganglia), and, regardless, there are also successful adult hemispherectomies, which would certainly involve long-term memory stored in the cerebral cortex, not in the hippocampus. Regardless of any of these possibilities, neuroplasticity, the ability of the brain to adapt and change itself, doesn’t explain it. How can the brain adapt to instantly “recreate” memories and elements of personality that have been physically removed? How would it know what to recreate? It would be like using half of a broken DVD to somehow recreate the entire movie.

I just don’t see a purely materialist explanation for the retention of personality and memory after hemispherectomy based on all I have read, though I am open to such an explanation should it be discovered. The physical brain is undoubtedly a vital part of the human mind, and that it has a role in taking in, processing, retaining, and recalling memory is undeniable. Yet, is it possible that the brain is simply an accessing mechanism? That neuronal patterns represent access codes to memories that exist outside the brain–perhaps in the human spirit–and that in the case of hemispherectomies the remaining hemisphere is able to continue to access an untouched, immaterial reserve of those memories? Somehow, the brain is “wet wired” to interface with the spirit — that much seems sure. Are we seeing clues about the nature of that interface in such procedures?

It would be foolish to conclude anything strongly based on this level of knowledge (really, this level of ignorance) concerning the interaction of the brain and the spirit. Still: tempting, tempting, tempting. And it truly is remarkable that while many materialists strive to convince us that the physical organ that is our brain is “all there is” to the human mind and personality–that, in essence, you are your brain–fully half of that brain can be physically removed while leaving the things that make you you untouched. Methinks they are guessing. Or, perhaps, engaging in wishful thinking.

Regardless, I’m glad that God has allowed me to have both my hemispheres this long. Something tells me that I need all the help I can get…


The Four Bacteria of the Apocalypse

Bacteria image from NIH
Howdy, little fellows! Now, behave… (Credit: NIH)

Looks like the fourth horseman continues his ride.

I was just reading my most recent copy of New Scientist (actually, it may be one behind), and it has an interesting article about the growing crisis in antibiotic resistance. In particular, an inset report caught my attention with its catchy title: “Four Bacteria of the Apocalypse.” It concerned four particular bacteria that are apparently frightening health experts due to their powerful resistance to antibiotics–in some cases, including “antibiotics of last resort.”

The four are:

  1. Multi-drug-resistant tuberculosis (MDR-TB). Half of these cases are untreatable by current drugs.
  2. Methicillin-resistant Staphylococcus aureus (MRSA). The article points out that another staph bacteria has similarly developed a resistance to a last-resort antibiotic–this time, vancomycin. I suppose that means we’ll be hearing about VRSA, or VRS-something, in the near future.
  3. Carbapenem-resistant Enterobacteriaceae (CRE). These are gut bacteria, and carbapenems are another group of antibiotics of last resort.
  4. Gonorrhea. The magazine reports that untreatable cases of this sexually transmitted disease have emerged.
The entire article can be found online here (the insert appears to be on page two of the online version).

Review of Michael Behe’s Intelligent Design lecture in Cincinnati

Dr. Michael Behe speaking with a family after his presentation in Cincinnati (photo credit: me)
Blurry photo of Dr. Michael Behe speaking with a family after his presentation in Cincinnati (photo credit: me)

I had an unexpected opportunity last Sunday night to attend a lecture by Intelligent Design theory advocate Michael J. Behe — professor of biochemistry at Lehigh University and author of the watershed Intelligent Design work Darwin’s Black Box.

He had been invited to speak at the Schilling School for Gifted Children here in Cincinnati. SW and, I believe, CR brought it to my attention this past Sabbath, just in time for me to make sure I had room for it in my plans for Sunday evening. So, make room I did, and come 7pm Sunday night I found myself in a room of 80-100 people, many of whom were parents and students of the school, listening to a presentation from Dr. Behe — a very personable and seemingly unassuming gentleman who has become a lightning rod of criticism on the topics of Darwinian evolution and Intelligent Design. (He had fun with that, inviting any in the audience interested in reading critiques of his ideas to visit any web search engine and type in his name followed by any common curse word that comes to mind.)

The school invited him as a part of what I gather is a series of lectures by influential thinkers. The math & science department head of the school mentioned that after the Nye-Ham debate, they had invited Bill Nye (the Sort-of-Science Guy) to come and give a talk, which turned out to be a pleasant event. Wanting to keep the conversation going, they invited Dr. Behe as a representative of one alternative “middle ground” that the Nye-Ham debate missed: That of Intelligent Design, representing neither religion-based nor materialism-hobbled theorizing.

Dr. Behe’s presentation had, in my estimation, modest goals: Explain the concept of Intelligent Design, explain why it is real science (contrary to the assertions of its detractors), and explain why he considers it a more reasonable and more credible theoretical framework in comparison to Darwinian evolution. In these goals, I think his presentation succeeded.

Sure, the discussion could go deeper. His points would surely be disputed by evolutionists, and their disputations would be counter-disputed by IDers, etc. His presentation wasn’t a debate-ender, and it wasn’t meant to be. It was a gentle-but-persuasive presentation and not meant to be a bare knuckle “throw down” — and in this, it was refreshing. It was a pleasant atmosphere and solid presentation, appropriate to its audience — which was clearly composed of both skeptics and supporters — that did not avoid hard questions and which, in simple and clear terms, explained a topic that is shamefully banned from many of our public schools by those who fear open minds and thoughtful criticism of their most cherished theory.

Here are a few observations from the lecture, presented as points and elaboration. The points are generally points he made, but the elaboration is mostly mine. Still, I will try to mention his comments, as well, since that is probably what most of you reading are actually interested in. 🙂

  • “Intelligent Design” is legitimate science.

One of the most shameful tactics taken against ID by its opponents is the claim that it does not represent legitimate science, and that claim is utter nonsense. If the question, “Does any element of life demonstrate signs of intelligent design?” is not accessible to scientific inquiry, then what is?

I think this is an important question even beyond biology. If an explorer or artificially intelligent probe were to stumble upon a structure of some sort on another planet, would the question, “Has this been designed by intelligence or is it a natural formation?” be completely inaccessible to science? That such questions cannot be addressed by science is ridiculous to me. Is it impossible to design an artificially intelligent probe, for instance, that could encounter something like the ancient ruins of Greece or Rome and conclude that there was intelligence behind their construction? What if the probe came upon the Louvre in Paris? We recognize such things, immediately, as intelligently designed, and the implication of that fact is that we could design probes to do the same. (Turing fans and AI folks, feel free to run with this assumption.)

If so, cannot such reasoning be turned toward the structures we find involved with life?

If the statement “Life empirically demonstrates characteristics for which the most reasonable explanation is intelligent design” is not a statement that can be evaluated by science, then why not?

One can claim (falsely, I believe) that Intelligent Design theory represents ideas that have been disproven, but one cannot legitimately claim that it is not science. Like a president standing in front of a crowd and talking about a contentious and unpopular piece of legislation and claiming that “The debate is over” (ahem), such claims sound more like desperation than fact. If SETI represents a scientific enterprise–that is, activity and research grounded in real science–so does the work of Intelligent Design theorists and researchers. I’ve read some try to defend SETI as activity grounded in science but Intelligent Design research as pseudoscience. (Amanda Gefter’s 2010 article in New Scientist is a good example.) But their arguments ring hollow and demonstrate themselves to the careful reader as poor reasoning motivated by either ideological predisposition or by ignorance of the work done by ID theorists.

Ignore the hypocrisy and the smoke screens of ID’s detractors who say otherwise. And the detractors are many. More honest and/or educated critics, even while not agreeing with the conclusion of intelligent design, recognize ID for the scientific endeavor it represents and do not feel the need to dodge legitimate debate through such illegitimate means.

(Actually, in response to a question Behe offered an argument that ID may represent legitimate science better than evolutionary theory does. I will try to remember to describe his point later.)

  • The identity and nature of the “intelligence” is irrelevant to ID theory

Weird concern about the identity of the “intelligence” behind the intelligent design evident in life is a red herring often brought out to distract people from considering ID theory (school district decision makers, gullible judges, etc.). The fact-based detection of the presence or absence of intelligence in design should not depend on how one feels about who or what the source of that intelligence might be.

Imagine a police investigation into a death that begins to point to murder. Should the investigation be abandoned because some folks are uncomfortable with the possibilities of who such a murderer might be? Of course not.

Dr. Behe brought this point out (not the murder example, but the point above), and it makes perfect sense. Some of the attendees did not get it, one or two of the school’s students, in particular. Questions about the “designer” in Intelligent Design theory are irrelevant to determining whether or not there is intelligence present in the design. The fact that some of the students were either oblivious to the point or had been feeding on various anti-ID tropes was pretty evident. For instance, one individual asked how a perfect designer (clearly, here, a “Designer” with a capital-D is in mind) would design creatures that show so many imperfections. While both philosophy and theology (and, importantly, the Bible) address such questions related to God’s Creation, in terms of the scientific theory of Intelligent Design, the question is irrelevant. Determining whether or not intelligence is necessarily involved in the design of living systems is not dependent on whether the “designer” is perfect or imperfect, a single intelligence or multiple intelligences, etc. For example, detecting what seem to be flaws or inefficiencies in the design of a Volkswagen Beetle does not negate the obvious fact that there is, indeed, intelligence present in the car’s design. That is, outstanding questions about the nature of the “designer” of the Volkswagen Beetle (number of designers, his or their purpose, power, or intent, etc.) do not negate the conclusion that there is intelligence on display in the design of the Volkswagen.

There is literally zero scientific justification for rejecting the theory of Intelligent Design on the grounds that it leads to questions about the nature of the “designer.” Quite the contrary, questions that lead to other questions are normally part of what scientists enjoy.

I think that this is potentially where some atheists sometimes show their bias-driven prejudices. For instance, the idea of a finely-tuned universe designed to make life possible is rejected by a cacophony of voices among various atheists. (I say a cacophony, because many of them do not agree with each other–running away from a feared conclusion instead of running toward a truly better one tends to produce such results.) However, I believe that once a “fine tuner” can be safely hypothesized that will provide an “escape hatch” away from the more natural conclusion that a divine God is the Creator, suddenly “fine tuning” will become more acceptable. I’ve already seen this in one major, mainstream publication, where someone pointed out that certain particle physics work has the potential (important: under some theories) to create multiple universes that are expanding alongside our own. Seeing that (again, under some theories) we have the power to initiate such “creations” of alternate “cosmoses” as we smack particles together, one person speculated that perhaps in the future we will learn to “fine tune” such creations to craft universes with particular characteristics. He then speculated that perhaps our own universe was initiated on some past laboratory table top by a physicist in a previous universe who had learned to do just that.

And that’s how it goes: Once we come up with possibilities we are more comfortable with, we become willing to embrace certain conclusions. Once we can substitute someone in the place of a God to Whom we might be accountable, suddenly “fine tuning” becomes palatable. Until then: No way, José.

And that’s wrong. The proponents of ID continue to say that they long only to follow the evidence where it leads: If to an intelligent designer of some sort, then so be it. If to no designer at all, then so be it. But let science be honest with the evidence. What sort of self-respecting scientist would disagree with that?

Again, drawing a conclusion on the intelligent design of life should be a matter that is irrelevant to the matter of who or what that designer could be.

All of this is related to a point I felt Dr. Behe made very well, coming up next:

  • Science has gotten into more trouble in the past trying to ignore conclusions that felt uncomfortable than it has in embracing them.

Some have said that Intelligent Design must be forbidden out of hand simply because its implications are uncomfortable and seem “unscientific” (they aren’t, but play along).

Of course, the same scientists lament about how many people they believe avoid trusting them about Darwinian evolution because of its implications. Sauce for the goose is not, apparently, sauce for the gander.

But more to the point, Dr. Behe’s example was a good one. He presented an idea: “Maybe Intelligent Design could be true, but its conclusion is radical enough that perhaps its acceptance should be put off — say, a century or so — while efforts are focused on finding alternate explanations that are more palatable.” He compared such sentiment to the resistance originally expressed concerning the Big Bang theory. Imagine how far behind physics would be if we put off accepting the Big Bang theory due to discomfort about its implications.

Some might say that the difference between the Big Bang and Intelligent Design is a matter of compelling evidence. The evidence of a Big Bang, while not necessarily completely unavoidable, eventually became strongly compelling, while the evidence for Intelligent Design simply is not. I would disagree with that conclusion, and the fact that resistance crumbled concerning the Big Bang but remains strong concerning Intelligent Design is due, I believe, to the stakes involved. Given the vagueness and impersonal nature of the universe’s origins–conceptually distant–the Big Bang is easier to embrace, regardless of its metaphysical implications, because those implications are easier to “shelve” and emotionally avoid. It can be ascribed to impersonal “forces” and “conditions”–and although a thorough consideration of the possibilities for such “forces” and “conditions” leads unavoidably to the same uncomfortable metaphysical implications, there is a comfortable cushion of abstraction that aids one’s efforts at denial and self-deception or distraction. However, the idea that life, itself, has been designed by an “intelligence”–that is more personal. The metaphysical implications of that are much harder to avoid. That life may have a “designer” means that you may have a “designer” . . . a “designer” who may actually be a Designer, if you get my capitalized drift. And many people do not want a Designer.

  • The evidence against Darwinian evolution as the mechanism by which life has developed in complexity is rather damaging.

Michael Behe’s book, The Edge of Evolution

Dr. Behe summarized a number of the points he makes in his book The Edge of Evolution, including the observation that Darwinian evolution (natural selection–survival pressures–acting on random mutations) can be seen in life’s development but only in ways that are very clearly not creative in nature.

His examples in the lecture were solid, looking at research on literally tens of thousands of generations of E. coli bacteria and, in more detail, evolution in humans enabling resistance to malaria. In the latter case, for instance, humans have, indeed, “evolved” some resistance to malaria and, as natural selection would dictate, those who have “evolved” that resistance have had better reproductive success, growing to represent disproportionately larger segments of the population in areas where malaria represents a serious challenge. However, as Behe points out in detail, the mutations in the human genome that have enabled the increased resistance are, in every case, the results of genetic information being destroyed in the human genome, not information being built or added. None of the mutations have demonstrated an increase in complexity — rather they are, in a sense, a matter of de-evolution.

His analogy was a good and memorable one. Behe showed a picture of a bridge in South America destroyed by drug lords to prevent the army from coming into their area and halting their operations, and pointed out that this is the equivalent of what we see with malaria resistance. The “bridges” have been wiped out genetically, preventing the disease from being able to proceed in those individuals whose mutations have protected them. The mutations are destructive–not constructive–and, in many ways, harmful, but in the case of preventing malaria from killing the individual, they have been helpful. In these ways, Dr. Behe points out, Darwinian evolution can be seen in action.

However, the picture painted by proponents of evolution is of natural selection plus random mutation as a great, materialistic bridge builder. The idea we are supposed to believe is that nature–with no assistance from any designer at all–can build bridges where there are none, yet this is overwhelmingly not observed in the laboratory or in the field. Bridge destruction, sure. Bridge building, not so.
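As a side note, the population-level arithmetic behind such a “destructive but protective” mutation is easy to sketch. The toy model below uses the standard one-locus selection equation with made-up fitness values loosely inspired by sickle-cell-style heterozygote advantage; it is my own illustration, not anything presented in Dr. Behe’s lecture.

```python
# Toy population-genetics sketch: a "bridge-destroying" mutation can
# still spread through a population if it protects against malaria.
# Fitness values below are illustrative assumptions, loosely modeled
# on sickle-cell-style heterozygote advantage.

def next_generation(p, w_AA, w_AS, w_SS):
    """One generation of selection on the frequency p of the
    protective-but-destructive allele S (standard diploid model)."""
    q = 1.0 - p
    # Mean fitness of the population under random mating:
    mean_w = p * p * w_SS + 2 * p * q * w_AS + q * q * w_AA
    # Frequency of S after selection:
    return (p * p * w_SS + p * q * w_AS) / mean_w

# With malaria present: carriers (AS) are fittest, SS is severely harmed.
p = 0.01  # rare mutation to start
for _ in range(200):
    p = next_generation(p, w_AA=0.85, w_AS=1.0, w_SS=0.2)

print(f"S allele frequency after 200 generations: {p:.3f}")
```

Note how the frequency settles at an interior equilibrium: the allele spreads because carriers are protected, yet it never sweeps to fixation because two copies are ruinous; the “destroyed bridge” is an advantage only in a malarial environment, which fits the observation that this is selection acting on loss, not on newly built complexity.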

  • The charge of “science stopper” is a terrible excuse not to do science and to avoid following the evidence where it leads.

This was one area where I benefitted from the lecture in a way I did not expect.

Intelligent Design is often called a “science stopper” because it is felt that once activity is accredited to a “designer” then it can no longer be explored, tested, or investigated. Consequently, all research on life would apparently stop and we would all just sit on the floor with our smartphones and play Angry Birds. More seriously, the idea seems to be that once Intelligent Design is concluded, there is no more exploration of possible non-intelligent means and mechanisms concerning life and its processes. If things aren’t materialistic, then they are not accessible to science, so we “must” continue to assume materialism lest we stop prematurely and cease to learn. So, accepting the conclusions of Intelligent Design supposedly puts a stop to the production of testable hypotheses and predictions — hence the term, “science stopper.”

If I understood him rightly, Michael Behe shed fantastic light on this attack and why it is disingenuous. The points to be made are several.

Less revelatory to me, personally, was the fact that the “science stopper” claim is simply not true as it is pictured. The concept that life on earth has a richer information source in its past or that it has access to richer information resources than generally accepted could invigorate additional avenues of research, including investigating claims by researchers such as James Shapiro concerning seemingly intelligent genetic engineering going on at the cellular level. Being freed from the assumption that only blind mutation and fundamentally undirected selection are at work in life could create the sort of environment where new ideas can grow instead of meeting the stifling resistance they now suffer. Being free to consider the world of genetics in a context that is more accurate — materialistic and naturalistic or otherwise — would begin to allow new frameworks of understanding, which could hardly be “science stopping.” Accurately understanding the limits of mechanistic, undirected processes would help in understanding them better, as well. How is this somehow “anti-science”? And if science must take place in a context where a true fact must be disallowed because it is a “science stopper,” then we’ve lost sight of just what science is supposed to achieve.

[Asked in a different context but related to being a “science stopper” was a student’s question to Behe about the falsifiability of Intelligent Design. He answered this well, by demonstrating that various principles of Intelligent Design are, indeed, falsifiable (that is, subject to being shown false), whereas it is the theory of Darwinian evolution that is treated in an unfalsifiable manner. Every finding that demonstrates evolution’s weaknesses is dismissed, and every experiment or cumulative experience that demonstrates its unviability is discounted and excused.]

However, those things aside, one thing Dr. Behe pointed out has stuck with me: “Science stopping” conclusions aren’t failures at all but successes. His examples really sold the point.

For instance, consider the success of Einstein’s theories of relativity. While they are generally not referred to as “science stoppers” that is exactly what they were. One can read of the fantastic experiments that were being done, for instance, by many trying to discover the medium in which light travelled and the many theories that were multiplying about the “æther” that carried light along. Newtonian mechanics-based theories on various scales were “killed” — slain by the understanding that they were inaccurate and inadequate. A great deal of science was stopped cold — and rightly so. In this way, relativity is a success not a failure.

Anti-relativity theories are still formulated and pursued by some, to be sure. But the success of relativity has put such researchers on notice: “Don’t expect this to be promising work.” While a “science stopper,” relativity has, instead, been a “science focuser.”

The same could be said about other theories. Should the recent evidence supporting the existence of the Higgs boson be ignored because it solidifies the Standard Model of particle physics and makes alternate theories less likely? Is it a “science stopper”?

The success of the Big Bang theory certainly put an end to considerations of an eternal universe. Any such theorizing certainly stopped — at least in any significant volume. But cosmology has been properly and profitably focused by the theory’s success, hasn’t it? Shouldn’t unprofitable and inaccurate science be stopped?

Bringing our work and research more in line with reality should always be embraced, should it not?

In this case, the charge of “science stopper” is simply a matter of trying to smear a good theory with a negative sounding pejorative. People worried that accepting a beginning to the universe would move cosmology into the “metaphysical” and put an end to science and research. And yet, cosmology has exploded (sort of a Big Bang pun there!) with theories, research, experiments, etc. The beginning hasn’t gone away — the “Genesis Problem,” as it has been called, is still there. But the science goes on, with new questions, new findings, new knowledge, and new theories — actually, with deeper questions, more illuminating findings, more accurate knowledge, and more profitable theories. As a “science stopper” — even one with metaphysical implications — accepting the universe’s beginning has been a “science focuser” and a “science energizer.”

Doesn’t it make sense that embracing truth should do just that? And if embracing something that is increasingly seen as false is necessary for science to “continue” then haven’t we lost our way a bit?

All the best theories are, in a number of ways, “science stoppers.” If Intelligent Design is a “science stopper,” it is only so in all the right ways. Don’t let the name-calling fool you.

I know there was more, but if I don’t post this review today, I may never do so. 🙂 Life is busy with the Spring Holy Days knocking on the door, so I think I will cut it short here. If I think of additional points to make, I will try to follow up with a “Part 2,” but for now I think this will do.

It was a great talk, and I enjoyed the opportunity. I took my copy of Darwin’s Black Box up for Dr. Behe to sign and was able to chat with him a bit. I had the chance to ask him about the work of William Dembski and others concerning trying to quantify information and signs of intelligence in a way that may add more objective analysis, and he said he thought it was promising as long as the work stays rooted in the realm of experimentation. I wanted to ask about David Berlinski, as well, but feared I would turn into a fan boy in that case. 🙂

Michael Behe was a very nice fellow, and I enjoyed the brief interaction and the chance to hear him present his case in person. It is my understanding that he stayed overnight so that he could spend time with the students of the school in a more intimate setting the following day, and I am sure that they found it profitable.