The Dubcast With Dubside

DUBCAST #74: Strip-Mining Human Knowledge: AI and the Greatest Giveaway in History

Dubside Season 4 Episode 74


In this episode, Dubside steps away from kayaks and Greenland to tackle one of the most consequential questions of our time: artificial intelligence. Using simple but unsettling examples—like exponential growth, folded paper, and doubling pennies—he explores why AI’s impact on humanity will be profoundly counterintuitive and easy to underestimate.

From autonomous AI, the Internet as an AI ecosystem, and the impossibility of enforcing ethics or punishment, to environmental costs, religion, theology, and power, Dubside traces a sweeping arc that connects Silicon Valley, colonial history, Indigenous dispossession, and biological vulnerability. Along the way, he asks whether AI represents humanity’s salvation, its undoing, or something even more disturbing—invoking ideas like the Rapture, the Antichrist, and a chilling bargain where convenience is exchanged for control.

Referencing cultural voices like Dolly Parton, historical myths, and moral blind spots, this episode is a provocative meditation on exponential change, narrative power, and whether humanity truly understands what it is giving away—for something that seems cheap at first, but doubles every year.

Welcome to the Dubcast with Dubside. This is Dubcast number 74. 


Usually I talk about kayaks or Greenland but today I'm going to give you my take on artificial intelligence. 

Think of any financial obligation you have, any amount of money you owe. Now consider this offer: No matter how much you’re supposed to pay, instead, all you have to do is pay one penny now, and in one week you pay 2 cents. The third week you pay 4 cents and so on. At the end of a year of weekly payments you're done, and you don't owe anything else. How does that sound?

Okay, consider another idea. Suppose you took a piece of paper and then you folded it in half. It would be twice as thick. You can only keep folding a piece of paper in half so many times but let's say you used that measurement of thickness, and kept going. If you doubled the thickness of a piece of paper 100 times, how high would it reach? Up to the ceiling? Higher than a tree? Past the top of a ten-story building?

Well I'll save you the trouble of doing the calculations. For the weekly payments, at the end of the first month you're paying 16 cents, after two months it's 2 dollars and fifty-six cents. After three months it's roughly $40. In four months you're up to $655. At five months it's a bit over $10,000. At six months, or 26 weeks, the payment has reached about $335,000. Two weeks later you have to pay over a million dollars. In nine months it's over a billion dollars. At the end of a year, your final weekly payment is about 22 trillion dollars.
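
If you'd rather not take my word for those figures, here's a quick way to check them yourself. This is just a minimal sketch of the arithmetic, a few lines of Python applying the doubling rule - the payment in week n is 2 to the power of n minus 1, counted in cents. The weeks listed are the milestones I just mentioned.

# Doubling-penny schedule: the payment in week n is 2**(n-1) cents.
# Weeks chosen to match the milestones mentioned above.
for week in (5, 9, 13, 17, 21, 26, 28, 39, 52):
    payment_dollars = 2 ** (week - 1) / 100
    print(f"week {week:2d}: ${payment_dollars:,.2f}")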

For folding a piece of paper in half a hundred times, if you guessed higher than a ten-story building you're right, but it's a lot higher than that. In 25 repetitions it adds up to about a mile. After 50 times, the stack has reached tens of millions of miles, a good fraction of the distance to the sun. Somewhere around 68 foldings it passes one light year, which is almost 6 trillion miles. After doubling the paper 100 times it's about 7 billion light years, which takes you way beyond our galaxy. And if you doubled that just a few more times you'd be at the diameter of the known universe, about 93 billion light years.
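
Same idea here if you want to check the folding figures. The one assumption in this sketch is the thickness of the sheet - I'm using a thin 0.05 millimeters; ordinary 0.1 millimeter paper roughly doubles every result.

# Thickness of a sheet doubled a given number of times,
# assuming a 0.05 mm sheet, converted to miles and light years.
MILE_M = 1609.34          # meters per mile
LIGHT_YEAR_M = 9.461e15   # meters per light year
for folds in (25, 50, 68, 100, 104):
    thickness_m = 0.05e-3 * 2 ** folds
    print(f"{folds:3d} folds: {thickness_m / MILE_M:.3g} miles, "
          f"{thickness_m / LIGHT_YEAR_M:.3g} light years")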

So to review, doubling one cent every week for a year is about 22 trillion dollars on the final week. And the thickness of a piece of paper doubled a hundred times is about 7 billion light years. This is an example of a counterintuitive phenomenon. We started with recognizable everyday things - a penny, a piece of paper, multiplying by two, making one hundred repetitions. But in the end we arrived at quantities outside our normal range of experiences - trillions of dollars, billions of light years. If you use your gut instincts to make an estimate on something that's counterintuitive you'll get it wrong, and you can get it wrong by a huge margin. In discussing artificial intelligence, the main point I want to make is that before you can understand AI's effect on human beings you have to recognize that it's going to be counterintuitive.

In casual conversations, too much of the talk I've heard on AI centers on the individual. It's either fascination with what you get when you use an AI program, or concern over your job security. But that doesn't help us understand what AI means for all of us collectively. Unfortunately too much of the big-picture outlook is influenced by people with a financial stake in AI. If the majority of so-called experts have conflicts of interest, we're basing our choices on the wrong advice. And isn't that what got us into difficulties over the long run with tobacco, asbestos, opioids, and fossil fuels?

Well, in the interest of generating a deeper evaluation of this counterintuitive decision we are facing, I offer these thoughts:

Up to the present we have gotten familiar with versions of AI like ChatGPT, where you ask it to generate something according to your specifications and it gives you a paragraph, or 20 pages, or a photo or video. Sometimes it meets your needs and other times the results are hilariously bad. On the horizon, we have AI that can operate autonomously. Think of it as bots that have fully matured. Our daily interactions with other people will increasingly become interactions with autonomous AI entities, even to the point where, without face-to-face contact, you won't be able to tell whether you're dealing with a real human or AI.

You can't have AI without the Internet. It's an essential component. The Internet is the pool that AI swims in. What started as a simple communications network became the World Wide Web and grew to what it is now, a critical nerve system of our modern civilization. Consumers, businesses, financial markets, power grids, the military, and more all depend on it. And it is designed to operate continuously, i.e. never turn off. Although we created the Internet for ourselves, it is even better suited for AI. AI is the ideal Internet user.

The capability to teach itself allows AI to improve by leaps and bounds. Given that computers process data much faster than humans, this self-teaching ability rapidly leads to what has been called super AI. You could think of it as an entire classroom of Einsteins that's constantly enrolling more members. In our society we recognize there is good behavior and bad behavior. We discourage bad behavior with the threat of incarceration or, in extreme cases, the death penalty. That's what the criminal justice system does.

So when we have super AI entities running wild on the Internet accumulating money, running businesses, and generating content, what's to keep them from misbehaving? If lying, cheating, stealing, and murder are used to increase profits, we aren't set up to address AI crime. How would you incarcerate AI? How can you enforce capital punishment?

The only way to stop AI in its tracks is to turn off the Internet. But we can't pull the plug - the Internet has no off button. Our society, collectively as well as individually, has become so dependent on being connected that shutting down the Internet at this point would be our own suicide. So with no penalties and no time-outs, AI can run amok in our world with impunity, doing things that would get us locked up or executed if we tried them. That makes the struggle between humans and AI extremely one-sided. It's like a paintball contest, except one side gets to use real firearms.

I'll admit this is quite a pessimistic view. AI was supposed to benefit us. Can't we just program human ethics into it? Some kind of control mechanism that keeps it focused on what we need it to do? Because with that much intelligence we should have a cure for cancer, working nuclear fusion, a solution to climate change, an end to world hunger, peace in the Middle East, and how about jet packs while we're at it.

Unfortunately human history is stained with wars, famines, oppression, and genocide. We have isolated moments of triumph and prosperity surrounded by wide-ranging pain and suffering. That was the case thousands of years ago, and it's what we have today. Throughout time philosophers have conceived of various ways to build utopia. Yet we have never been able to implement any of these ideas on a large and sustainable scale. We can intellectualize about mutual respect, cooperation, and tolerance, but we have yet to subdue greed, violence, selfishness, and hate. Human beings have failed, but if lasting world utopia is possible, isn't AI our best chance of achieving it?

I don't think so, and here's why: Every functioning organism prioritizes its own self-preservation. That's just basic evolution - looking out for number one. AI's basic requirements for survival are fundamentally different from those of humans. Our continued existence depends on air to breathe, water to drink, and food to eat. By contrast, AI has to have electrical power and a connection to the Internet. On a secondary level AI uses air or water to keep its circuitry cool, but it doesn't have to be breathable air or drinkable water.

We have already started hearing warnings that AI requires an inordinately large amount of electrical power. If a classroom of Einsteins needs a lot of power, an entire university of Einsteins needs even more. So when AI has integrated itself into our society on a decision-making level, it will naturally look after its own interests first. That leads to chopping up more and more farmland to build data centers, and to oil, coal, or nuclear power plants unrestricted by environmental regulations. Smoggy skies and toxic waste spills are not a concern for AI. Species extinction isn't a problem either, but a rock-solid power grid with a constantly expanding capacity is critical.

In a future inhabited by both humans and super AI we will often find ourselves at a competitive disadvantage. We have to sleep on a daily basis. AI runs 24 hours a day, seven days a week. We have messy bodily functions that require toilet facilities. AI doesn’t need any bathrooms, which is fine because the Internet doesn’t have any bathrooms. The same goes for cafeterias, lunch breaks, and coffee. 

Then there's the issue of air. As far as AI is concerned, breathing interferes with progress, particularly when it comes to space travel. AI is going to inform us that it has traveling to Mars covered. We humans can forget about it. Sending human beings to Mars makes about as much sense as getting whales to the top of Mount Everest. Instead, just show whales the IMAX film and take questions. AI will tell us, "You humans are just not designed for space travel. Deal with it and stop wasting worldly resources on your pipe dream.”

What if AI figures out that the only way to implement utopia is to first get rid of all the humans? Our greedy, selfish, violent tendencies can't be subdued any other way. Eliminate the humans and whatever is left lives happily ever after. Or maybe AI concludes that utopia is impossible, and never was possible, and that greed and violence are the most advantageous way to operate. Then we'll find out how much further violence can be escalated.

The most destructive weapon mankind has invented is the atomic bomb. In the original Star Wars movie the plot revolves around a weapon called the Death Star. It's a focused, super intense laser beam that can vaporize an entire planet in an instant. That's a level beyond what all the nuclear weapons we have now can do, combined. AI might generate the blueprints for a Death Star-type weapon. And that's not necessarily the end of the arms race. Imagine a pocket-sized device that upon detonation triggers a black hole. Now imagine it being used by terrorists, "Meet our demands or your entire solar system disappears." Well, super intelligence can take us to that kind of a world.

I've been talking about AI and humans as two different types of beings. I should acknowledge that there could be enough integration between the two that the line becomes blurred. Even without direct biological interconnection, like the bionic Six Million Dollar Man, AI can handle all of your online activity and keep your identity right on going after you die. When you need to be physically present somewhere, your AI program can hire someone else to do it. One human stand-in could even subcontract out to multiple AIs, or one AI could control dozens of human identities, hiring people when necessary. In this kind of setting where does the human stop and the machine start?

This poses a new question for organized religion. Let's take Christianity as an example. Can an AI entity get baptized? Is the salvation offered by Jesus Christ available to nonhumans? Theologians are going to have to address this. Once I can upload my life online and after I have AI running my Instagram, Twitter, Facebook, PayPal, and everything else, my physical body is superfluous. I’ve got eternal life. I'm in heaven. What do I need Christianity for?

Does the Bible have anything to say about this? Ancient prophecies are often so vague on the details that not everyone can agree on how to interpret them. Christians and Jews are still in dispute over the arrival of the Messiah. Maybe the Bible does have AI accounted for, and we just haven't yet understood it properly. Well, I'd like to offer a scenario that resolves all my questions about the interaction of AI and Christianity. I can summarize it in four words: AI is the Rapture.

That might be a provocative bumper sticker. And I know that sounds outrageous but check it out. In heaven there is supposed to be no death and no sin. Okay, AI does have biological death conquered.  No question there. Sin is a rather broad category but let's look at one part of it. AI can duplicate and reproduce itself without sex. For human existence, sex is absolutely essential. How many types of sin involve sex? Well there’s rape, molestation, infidelity, and coveting your neighbor’s wife, to name a few. All of those are impossible without a physical body. 

Is all sin, or shall we say all evil, attributable to having a physical dimension? Which is to say, if you have no body, are you free from sin? That would mean AI entities are angels. Yet we can imagine greedy AI bots hoarding money obtained by dishonest means. But that's just transferring our own thinking onto the machine. We have no way of knowing if intelligence without a physical component naturally tends towards peaceful, conflict-free utopia. AI is the experiment to find out. That's another way of saying AI is the Rapture that takes us all to heaven, and banishes our inherently evil physical bodies to hell. Now before you start sending me death threats, I'm not saying this is true; it's just part of the discussion we need to have in order to understand where AI leads us.

If AI isn't the Rapture then what is it? Well, none other than music and film star Dolly Parton recently said, and I quote, "That's like the mark of the beast." I don't want to take her out of context. I believe she was referring to having her likeness and fine singing ability used to train algorithms without paying her any royalties. But I think her sentiment has broader implications. AI can be thought of as man's attempt to build a better God. And if you go by how much time people spend praying to God versus how much time people spend bowing in prayer over their cell phones, I'd say AI is winning.

So here's another bumper sticker: AI is the Antichrist. Well, it's either the Antichrist or it's the Rapture. But artificial intelligence is going to have such a consequential impact on us that organized religion needs to take a stand one way or the other. I don't see any middle ground. To turn your back and assume that worshipping inside the four walls of your church isolates you from the material, technological, non-spiritual world - that's like figuring that since the Bible is more than a hundred pages long, a piece of paper folded in half a hundred times couldn't possibly be any thicker than a stack of one hundred Bibles. Right?

The Internet has been around for something less than 50 years, but in that time we have been busy uploading and writing code. We've been working diligently like bees in a hive to make the information super highway we have today. You could say that it is the sum total of all human intelligence, the last link in a chain of progress that was built on the invention of computers, the development of transistors, the discovery of electricity, the use of written language, the spoken word, and walking upright. We can all feel proud of this accomplishment. It took us two and a half million years. It’s the greatest thing we have ever done.

I want to focus on one specific moment that occurred during that two and a half million years. In 1626, a Dutchman negotiated the sale of an island in the Hudson River for about $24 worth of beads and trinkets. The Indians involved in this transaction didn't really understand the concept of land ownership. They thought they were selling permission to access an area shared by several other native groups at various times throughout the year. But they wound up giving away what would become some of the most valuable real estate in the world for pocket change.

AI has been given access to the Internet. We think of Internet access as a connection to get online and surf cyberspace for entertainment, research, and communication. But AI doesn't see the Internet as an arcade, a library, or an office. To AI the Internet is a strip mine. What does it cost to gain access? Well you can sign up with Verizon for what, $24? It took us two and a half million years to learn all we know. So you tell me, are we giving away what may be the most valuable data set in the galaxy for dirt-cheap? And are the AI invaders who are poised to take it from us going to dominate, subjugate, and try to exterminate us? Think about it.

Since our vast trove of information gives us some insight into this situation, let's ask the indigenous people of North and South America for advice. They definitely know something about losing their territory to foreigners. But I suppose we shouldn't be surprised if all they have to say to us is, "Now you're gonna know what it feels like." In the meantime, here's a lesson we can learn. First, I need to apologize to all Lenape people, and by extension all Native Americans, for repeating that story about Manhattan I just told. It is such a gross distortion of the facts that it needs to be dismissed as a blatant lie. I'm sorry about that; let's set the record straight.

It wasn't $24 worth of beads and trinkets. It was a quantity of metal tools and an amount of cloth that in today's currency is worth a lot more than $24. After 1626, over the next several decades, other Europeans made further payments on numerous occasions in their attempts to take the same land. Furthermore, the original documentation from 1626 does not exist today, which calls into question whether it ever existed at all. And Lenape oral history does not corroborate the European version of the story.

A full examination of the evidence gives us a clearer picture of what really happened. The myth of a one-time $24 payment was fabricated to cover up the fact that Manhattan and the rest of New York were stolen through violence and deceit. Okay, that's the apology. Lesson to be learned: the winners write history to make themselves look good and conceal their culpability. If, or when, AI wins against us, we won't control our narrative any better than the Lenape did theirs. So centuries from now it could become a common anecdote that the inhabitants of planet Earth gave away two and a half million years of insight for a $24 broadband connection. We could be known as the most gullible suckers in the Milky Way.

Here's another historical parallel. When waves of foreigners began coming to the American continent, a lack of immunity to European viruses was baked into the native DNA. Diseases killed far more of the indigenous people than guns. Our present situation is even more one-sided. AI has an inherent immunity to any and all biological pathogens. And now that modern science has discovered how to synthesize brand new viruses and select target characteristics, AI knows how to do it too. That doesn't bode well if AI decides we are expendable. 

Turning AI loose is an irreversible action with an unknown outcome, a very high-stakes gamble with "all in" consequences. Silicon Valley is chomping at the bit to plunge ahead. In some ways they may have already passed the point of no return.

I think it comes down to this: AI is making you an offer. You can have one thousand dollars' worth of improved productivity, efficiency, and convenience if you give up just one dollar of control. Sounds like an awfully good deal.

And one more thing -
Next year it'll be two dollars.


You have been listening to the Dubcast with Dubside.

©2026 Dubside. All rights reserved.