Well, not quite but… Lance Wallnau played a clip from Joe Rogan’s show with Coleman Hughes. Lance was talking about – well, in my own words – speaking the truth as Christians. His base scripture was:
1 Peter 3:15 But sanctify the Lord God in your hearts: and be ready always to give an answer to every man that asketh you a reason of the hope that is in you with meekness and fear:
He related this verse to what is called “Apologetics” – a term used in Christianity that I’ve always disliked. Apparently, the term comes from the Greek word apologian (to “make defense” or, as used above, to “give an answer”). Why don’t I like it? Because it evokes an image of someone “apologizing” for God’s Word, and I’m not about to do that. Ever. I’ll readily apologize should someone point out any error I’ve made in expressing my understanding of scripture, but never for the Word of God itself.
So… back to Lance: he was basically saying we Christians would be well-advised to be soft-spoken rather than “in your face”. I get that. I agree with this. He then used part of the Rogan-Hughes exchange to make his point. Interestingly enough, it is well known that Joe Rogan is not considered “conservative” at all. I am reasonably certain he is not a Christian – a quick check says he is an “agnostic”, so I dunno. Hughes, on the other hand, says he does not believe in God.
Coleman Hughes, for his part, says he voted for Biden in 2020 and Clinton before that. That said, watching these two men converse, I felt a good deal of affinity toward each. Both are intelligent and articulate; Coleman Hughes is especially soft-spoken. I listened to the entire three-hour podcast and enjoyed most of it.
While Lance pointed out Hughes’ thoughts on Israel – thoughts that were well-considered and well said – what prompts me to post today is the moment their conversation turned to AI. I was fascinated.
Both men agreed AI is very scary, and I agree with them, but not for the same reasons. I did find it rather amusing how Joe Rogan, in particular, extrapolated the future of AI. He projected how AI could soon overtake humans in comprehending the mysteries of the Universe. He speculated on how AI could “evolve” (my word, not his) into our “god”, possibly understanding – or even becoming – the force that started it all with the “Big Bang”. Oi.
Is any of that even possible? Meh – according to some theories, maybe. Some who understand quantum mechanics might even be able to make a case for time travel. So what Joe – who believes there is a god (or gods? I don’t know; agnosticism can take so many forms) – is saying is that we humans are in the process of creating “god” (or gods), and that this creation of ours could be responsible for creating humans. Right.
Coleman Hughes more or less went along with the idea, or so it seems. Of course, once he has time to think it through, I’m sure he will either have to reject it or admit that a god must either already exist or be certain to exist in the future.
How does this fit with what I’ve been saying all along? Sorry. Not at all. I’m standing pat. My contention is that no AI program will ever be sentient or capable of original, individual thought. Software is a collection of instructions that turn a series of switches on or off. That’s it. Neither the switches nor the instructions are capable of autonomy – individually or together – nor would they be if they were installed in some sort of organic or positronic “brain”. It. Is. Not. Going. To. Happen. God (and I mean the One True God) help us if I am wrong. This doesn’t mean the whole AI phenomenon isn’t scary as hell. Here’s why.
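To make that “instructions and switches” point concrete, here is a minimal sketch – plain Python, no AI library, with made-up numbers purely for illustration – of what a single “neuron” in a neural network actually does: multiply, add, compare. Feed it the same inputs a million times and it gives the same answer a million times.

```python
# A minimal sketch of one "neuron": nothing but arithmetic on numbers.
# The inputs, weights, and bias below are made-up values for illustration.

def neuron(inputs, weights, bias):
    # Multiply each input by its weight, add them up, add a bias...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...then "switch on" (1) or stay off (0) at a fixed threshold.
    return 1 if total > 0 else 0

inputs = [0.25, 0.80, 0.10]
weights = [0.40, -0.60, 0.90]
bias = 0.05

# Deterministic: the same inputs always produce the same output.
print(neuron(inputs, weights, bias))
```

A large model is layer upon layer of exactly this – vastly bigger and faster, but still instructions flipping switches.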
First let me offer an example. I was using ChatGPT the other day. I asked it to list all the states with online voting. It provided information from 2022. I know this because it told me so. I told it it was wrong, and it revised the list to one that was almost correct. How did this software manage to correct itself? I don’t know, but it did. This suggests the software might be capable of self-learning. And that is what I find alarming.
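I can’t say what actually happened under the hood, but for what it’s worth, here is one possible mechanical picture, sketched with a hypothetical send_to_model() function standing in for whatever service runs behind the chat window (it is not a real API call): every reply is generated from the whole conversation so far, my correction included.

```python
# Hypothetical sketch: send_to_model() stands in for whatever chat service
# actually runs behind the scenes; it is not a real API call.

def send_to_model(messages):
    # A real service would return the model's next reply, generated from
    # the entire conversation passed in `messages`.
    return "<model reply>"

conversation = [
    {"role": "user", "content": "List all the states with online voting."},
]
conversation.append({"role": "assistant", "content": send_to_model(conversation)})

# The correction becomes part of the conversation sent back with the next
# request, so the next reply is produced with the correction in front of
# the model.
conversation.append({"role": "user", "content": "That information is out of date."})
conversation.append({"role": "assistant", "content": send_to_model(conversation)})
```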
I’ll say this first: AI software can and should, by definition, be self-learning. I expect this. What scares me is the realization of what this means – and what the true push for “AI” really means – and that is massive information gathering.
There is a race going on right now to develop AI. We are being led to believe this race is to develop machines that can “think”. The real race is to develop machines capable of collecting, assimilating, organizing, manipulating, and dispensing knowledge. Think about that for a moment. Now consider two words: “deep fake”.
Worse yet, because no AI will ever be sentient, no AI is capable of morality. AI has neither consciousness nor conscience. It can do – and will do – only what it is told, or programmed, to do.
What’s worse, as fast as all this AI is being developed, the world may not understand what is really happening until it is too late to put a stop to it.
The bottom line is that AI is dangerous as hell – not for the reasons we are expected to believe, but because this technology is sure to end up in the hands of evil actors with evil intent where their hearts should be.
God help us all.