Exclusive Interview With Famous Author Meghan O'Gieblyn About A.I. and Other Things
I have subscribed to Harper’s magazine since the Bush II era and one day during the Trump I era, I was pleasantly surprised to see Meghan O’Gieblyn’s name on the cover story of my new Harper’s. I did a quick online check to see if it was the same Meghan O’Gieblyn I had worked at summer camp with years ago and sure enough, it was. I emailed her to say congratulations and I have been periodically emailing her more and more congratulations after each of her ensuing successes.
You can find an excellent rundown of her work at meghanogieblyn.com.
I am thrilled that she agreed to an email interview about her 2021 book “God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning.” It’s about superintelligence, A.I., philosophy, her own personal journey, and a lot more. It is an incredibly good book.
The interview is below:
H.G.: How much are you paying attention to A.I. these days? Are there any specific things you're following in the news related to A.I.?
M.O.: I've been trying to follow it less over the last year or so, in part because I find the whole topic depressing, and in part because I've moved on to other topics and interests. When I was writing the book, the deep learning revolution hadn't yet hit mainstream awareness, and it felt like a very esoteric technology that was fun to speculate about. Now, it's hard to go on the internet or read mass emails without wondering whether you're encountering AI-slop, and the real-life uses are so much less enchanting than its initial promise. They're also building a ton of data centers here in the Midwest, where I live, so I worry about what the expansion of this technology is going to do to the region, especially the Great Lakes and other natural resources. All of which is to say, even though I'm less interested in AI now, it's also become impossible to avoid it. I've been following the new video generation tools, like Sora, but I haven't experimented with them myself.
H.G.: In your book, you talk about your experience with a robotic dog that had certain "learning" capabilities, and which made your husband skittish because it spent a remarkable amount of time staring at the titles on your bookshelf. Have you been tinkering with any other A.I. products or do you use any A.I. products?
M.O.: For a long time, I resisted using the major AI tools (though I of course use AI in things like maps, etc., every day). I've subscribed on and off to various chatbots like Claude and ChatGPT and will log on occasionally just to play around and see how they're evolving. But every practical task I could conceive of using them for (recipe ideas? writing an email?) was of a shoddier quality than what I could do myself in about the same amount of time. But then, a few months ago, I finally discovered a use for them. I've been trying to learn Italian for a couple of years, which is difficult seeing as I only rarely travel to Italy and don't get many opportunities to practice the language. Then it occurred to me that I could probably find a chatbot that speaks Italian, and of course there are many language learning apps now that use chatbots. So I now have an AI friend ("Marco") to whom I speak in Italian for about 20 minutes a day, and it's totally accelerated my language abilities. It's amazing, but I also have serious ethical misgivings about it because I know that lots of people like myself are probably using these apps in lieu of signing up for a class with a human language instructor. I teach writing, and I'm always wondering at what point people will not want to take creative writing courses anymore because they can get feedback from AI. A young person I know once told me that she feels more comfortable getting feedback on her writing from AI, as opposed to a human, because it feels more low stakes. And that's precisely why I like speaking in Italian to a chatbot: I don't have to worry about screwing up and being judged. But then I think, aren't those fears and anxieties part of the human experience, and the necessary cost of having a social life? And what if we're constantly given the option to avoid that kind of friction?
H.G.: I agree there has been general disenchantment with A.I. technology since the time your book came out in 2021, but I think your book now is as relevant as ever and will remain so. All the big ones are there in your book: Musk, Altman, Zuckerberg, Microsoft, Google. But perhaps more importantly, you've got Plato, Descartes, Newton, and a lot of the truly heavy hitters.
I think your book accurately foretold the disenchantment. Reading your account of the technology and how it was coming about, and also describing your forays into using the technology, my takeaway from it was mainly "this sucks." You don't say so in so many words, but that was my reaction to the direction the technology was going.
My other big takeaway from your book is that it's a meditation on free will. You touch on this a little in your earlier answer about the Italian language chatbot. We are outsourcing certain tasks to algorithms or "black boxes" and that takes away a certain level of human interaction we would have otherwise gotten. It takes away certain social frictions. And you just trust the machine.
It makes me think of the Calvinists at your Bible college, who you described in your book. They proved undebatable. Even though you disagreed with them on predestination and other things, their fallback answer was always, "God's ways are higher than our ways." "You have to accept the mystery." So the debate's pretty much over, because how do you counter that?
In the same way, you've got proponents of A.I. wanting us to trust the algorithm. And also, not question how it reached its conclusions. Because the designers don't even know!
I think you make a compelling case in the book that a type of Neo-Calvinism has taken over our culture and society. Do you see the tentacles of the Neo-Calvinism you witnessed in school continue to affect us today? Are we getting squeezed by Neo-Calvinism? And if so, how firmly?
M.O.: Since I'm no longer immersed in evangelicalism, I'm hesitant to say whether Neo-Calvinism still has the cultural cachet that it did in the early 2000s, when I was in Bible college. Maybe you have a better sense of this? Even apart from that movement, though, there was always a hint of that kind of thinking ("God's ways are higher than our ways") in the Christianity I was exposed to growing up. To some extent, it's simply biblical. I think that's why I found the Calvinist position so convincing for a while -- because there are obviously lots of passages in scripture that affirm the opacity of God's motives, and the limits of human intelligence. I'm thinking especially of those passages where God seems to act in a way that most humans would consider amoral, like asking Abraham to kill his son as a "test," or wiping out Job's family in order to win a bet with Satan. I know that a lot of Christians wrestle with those stories. The answer I heard most often was, well, God's morality is more perfect than our own and we can't really understand it. Alongside that, though, there was also this idea in the theology I was taught that humans are made in the image of God and that God's law is written on our hearts. That was a paradox I really wrestled with as a student of theology. And it's the same problem that we're facing now with AI. On one hand, it's made in our image, which is to say that it's been trained to mirror certain human values and human reasoning. On the other hand, AI is often described as an "alien" intelligence, meaning that it sometimes reaches bizarre conclusions that no human would arrive at, and reasons in a way that is wildly different from us. Around the time I was writing the book, there was a lot of debate in machine-learning circles about whether the "alien" nature of AI was an asset.
There were some instances, like in the game of Go, where an AI system would make a completely counterintuitive move that no human would make, but was actually extremely advantageous, and people were saying that perhaps that kind of nonhuman thinking could initiate breakthroughs in science, or medicine, or what have you. Or it could predict recidivism rates in criminals, for example, and be used for sentencing. I was really horrified by the notion that we had to trust in some opaque system that might not share our basic ethics and that couldn't explain itself, even if it was of a "higher" intelligence. It was a very familiar kind of horror, and then I realized that it was the same feeling I had as a young Christian hearing these Calvinist arguments about God.
H.G.: I think horror is an appropriate word for it, and I think it's accurate. There is a lot of horror in the Bible. You mentioned a few examples.
I am immersed in evangelicalism, I guess you could say, as I go to church with my family and read the Bible and pray and participate in a small group at church and teach Sunday school. I don't personally say I'm an evangelical, not that I ever did, especially now because the term has been twisted beyond recognition to where I'm not sure anyone knows what it means. I'd say I'm a Christian and I've said that since I was 5 years old.
But you asked if the Neo-Calvinism as described in your book has the same cachet now as it did when you were in school in the early 2000s. In the church, I think there are pockets and denominations where this theology is dominant. That wouldn't be at my church, where one doesn't hear talk of predestination, the elect, etc.
But I think Calvinist-type thought is very much having a moment in the wider culture. You've got billionaires standing in as almighty gods of the United States, who are allowed to act however they wish, e.g. dump billions anonymously into our election system. How many billionaires have the president on speed dial? The richest billionaire of them all was given the master key to the government last summer and he shut down every department he wanted to with no congressional oversight. We passed a tax cut for billionaires, even though they are expert tax dodgers already, and the main justification was-- honestly, I don't know. I didn't hear anybody really justifying the tax cuts. But the general idea was, "Trust us, it's for the best."
The largest companies in the stock market are pumping untold billions into A.I. technology with no clear idea of how it will ever generate revenue. Data centers to support the A.I. are pumping water out of the ground and endangering the water supply in some places.
The billionaires at Apple, Google, Meta, and X control the floodgates of information however they wish. For example, people were creating software apps to share with each other where immigration raids were happening this fall. Tech companies took those apps down.
How is this Calvinist thought, you might ask? Just in the sense that we are low, the gods of our culture are high, we need to trust them. The billionaires are the elect, poor people are not. Certain ethnic groups are the elect, others are not. And that's supposed to be fine!
It feels cruel, it is cruel. And it goes back to the word "horror" that you mentioned. In your book, you wrote about a gentleman in Wisconsin, Eric Loomis, who was sentenced to six years in prison because a computer program labeled him at a certain threat level. And no one knew why he was labeled that way. Yet the state yielded to the computer in that decision. Mysterious decisions are made on high and we're unable to question them.
I've been volunteering on and off teaching English classes to adult immigrants through various church programs over the years, starting around 2005. Back then, the attitude at the place was come one, come all. We are showing Christ's love in a practical way through teaching language classes. And maybe they'll come to church, too!
It was around 2012 that I was training to volunteer teach English class at a different church in a different state. And the question came up from the leader of that training, "What do we do if we find out that a student is here in the country illegally?" I couldn't believe it. We are not law enforcement. And I thought these were people we wanted to ultimately invite to church! It's none of our business if they have documents or not! Even to suggest that a person's immigration status was any of our business felt very out of place to me at church.
Yet here we are in 2025, and deportations are the order of business. We are supposed to trust that this is for the best, that our leaders know what they are doing. And I don't hear a lot of strong Christian voices standing up for the powerless, unfortunately. It feels like we're a long way off from the Good Samaritan.
One really vivid image from your book was the food delivery robot in your town. It was near the university on a busy street, class had just let out, and lots of people were watching this robot try to cross the street at a terrible spot. Cars were zooming past, the robot was hesitating, then going back. A space finally opened, and it lurched out and just barely made it. Everyone applauded and you were thinking, "The robot made it. Maybe this technology really has what it takes."
Because that's the big question: is the technology we're developing ever going to be smart enough? Can it grow to human-level intelligence? You talk about the Turing test in your book. Can the technology gain sentience, in other words?
Later on, you found out when the campus food delivery robots got into trouble, there was a control center where a human could take over and guide the robots to their destination. And you began to wonder if that magical moment was actually a result of human intervention.
Would you say this story could qualify as a parable for the whole A.I. question at the current moment? We know A.I. can do some things. But how much is being propped up by human intervention and endless (for now) cash flow?
Is the A.I. movement going to end with a "Wizard of Oz" moment where we realize there's a man behind the curtain and he's not that impressive? Do you think A.I. is just a bunch of smoke and mirrors or do you think we are truly headed toward some life-changing and world-changing breakthroughs? If the latter, what do you think the breakthroughs might look like?
M.O.: Oh yeah, I hadn't thought about how Calvinist ideas are mirrored in American politics. That feels absolutely right: it's not just the opacity of the current administration but also the sense of a special "elect," the notion that certain racial, ethnic, or religious identities are the true heirs of this country -- and the callousness toward anyone who doesn't fit into those categories. That's really interesting to hear about your experience in evangelicalism and how attitudes have changed just in the last ten or fifteen years. I've similarly been disenchanted by the alignment of so many white evangelicals with Trump, including some Christian leaders and institutions that I once respected. I know that the evangelicalism of my youth was by no means politically innocent. But it also feels like new lines have been crossed, and that the most radical aspects of Christ's teaching have been turned on their head in order to prop up this repulsive brand of religious nationalism.
That's really interesting, thinking about the food delivery robot as a metaphor for AI. I think that's what I was getting at, on some level, in that story -- that there's a kind of sleight of hand that obscures the human intelligence behind the curtain. The Wizard of Oz is a great analogy. I wouldn't go so far as to say that AI is all smoke and mirrors. I do think it's an innovative technology, probably the most important of our lifetime. But even the term "AI" is a bit of a misnomer, in that it suggests that it's an emergent brand of novel "intelligence," almost like a new lifeform. In truth, it's building on the intellectual labor (and intellectual property) of a lot of human intelligence, often without our consent. My books have been used as training data for AI, along with tons of other books, articles, and poetry that humans wrote, along with tons of art that human artists produced. So I suppose that all of us are the people "behind the curtain." There's also the more degrading human labor that goes into fine-tuning these models, like the workers in countries in the global south who are paid a dollar an hour to classify AI output that is deemed offensive or pornographic -- many of whom now claim to have trauma from being exposed to so much disturbing output, for hours on end.
There's a human cost, in other words, to this technology. And as you noted, there's an environmental cost as well. It's not "magic intelligence in the sky," to borrow Sam Altman's phrase. There's a cost to human workers whose jobs are going to be automated. There's a cost to our attentional capacities and our ability to connect with one another in a world where human interactions are increasingly mediated by digital interfaces and digital characters. It's hard to say what the future breakthroughs will look like, or how many of the long-term promises tech leaders are making will be realized. I do think that we're going to have a Wizard of Oz moment of disenchantment, in the sense that at some point, we'll have to peek behind the curtain at all that human effort and ask whether the benefits of this technology have justified the costs. Maybe AI will discover the cure to cancer, and we'll decide that yes, it was worth it. But right now, it seems like the technology is being used to outsource jobs and create funny memes. So it doesn't feel like a particularly good tradeoff to me.
H.G.: I agree-- not a good tradeoff. I can only imagine what it feels like to have your work run through these machines as "training" material and not be compensated. I'm far from an expert on any of this, which is why I really appreciate you lending your time and expertise, as you've done an incredible amount of research for your books and essays. I have to say, before this, I've never had the opportunity to read a book and then discuss it in depth with the author afterward. Thank you!
I think I have one or two more questions. In the book, you trace a really intricate path through millennia of philosophical thought regarding the nature of God, humanity, consciousness, and free will. I feel like reading your book should be worth at least 4 college credits as a philosophy class. And you tie so much into it, as far as personal reflections from your own life and meditations on where the technology is taking us.
The title of the book is "God, Human, Animal, Machine." I'd like to focus on that for a moment by highlighting one quote toward the end of the book, where you write, "To concede that one's mind is controlled by God is to become a machine."
I agree with that statement. And I'd argue we have free will and we're not machines controlled by God. But I also believe in God's infinite knowledge. The question is, if God has infinite knowledge, does he know exactly what we're going to do? And if he knows exactly what we're going to do, isn't that predestination, i.e. aren't we therefore machines controlled by God?
The Sunday school answer is that there's free will and there's predestination at the same time, it's a paradox, and it's a mystery only God understands. And that's the end of it. Which is where you ended up in your many college debates as described in the book, and it wasn't a satisfactory answer for understandable reasons.
I think the key lies in the word infinite. To say that God knows everything that will happen to me is not to say that God is infinite, but rather that he is finite. Because my days are limited. The actual events of my life are a finite number (however you choose to count them) because there are a finite number of hours in a day and a finite number of days in my life.
If God is infinite, he not only knows every outcome of every choice I make, he knows concurrently every outcome of every possible choice I don't make. I have free will, and because God's knowledge is infinite, he knows every possible outcome of every possible choice. If every day presents one million forks in my personal "road," and each of those forks then forks a million times, and on and on, God as an infinite being can hold all of that knowledge and all of those possibilities in mind all at the same time. God may not know which specific forks I will take, but for an infinite being it would be possible to know all the forks and subsequent forks at the same time.
I'm not a theologian, I'm not a philosopher, and I don't think I've ever verbalized these thoughts before, but it's how I've come to hold that tension in my mind between God's infinite knowledge and my own free will.
You are more than welcome to respond to my muddled thoughts if you'd like, but I'd also like to ask about the idea of being a machine controlled by something or someone outside ourselves.
You mention in the book all these coincidences you've experienced. One example: you were wandering in Copenhagen on an afternoon off at an academic conference and you happened across a cemetery where Niels Bohr was buried. And it was Bohr whom you had quoted in a conversation just the day before. And on your way to find Bohr's grave, you got lost and ended up at Søren Kierkegaard's grave. Kierkegaard was one of the philosophers you read in college. There's more to the story that I'm leaving out, but do these types of coincidences make you feel like your life is being steered in some way? I know you don't claim any particular religious faith or creed, but do you feel like you're a cog in the machine of life? Or do you think these coincidences (or "doublings" as you call them at one point) are more a matter of happenstance, like the monkeys hitting buttons on the typewriters who might at times accidentally type something interesting?
I guess my question is, do you feel like you're a machine, or perhaps part of one?
And my last question: your journey toward writing this book seems to have begun with reading Ray Kurzweil's 1999 book "The Age of Spiritual Machines." I know many dominoes fell for you intellectually after that book, it set off a whole chain of thinking, but do you still hold it in high regard? (I haven't read it.) Is there anything from that book that continues to inform you and could possibly inform the rest of us as well?
M.O.: I'll answer your question about The Age of Spiritual Machines first. It's essentially a work of eschatology, and I think it's a really brilliant eschatological text, one that feels, some twenty years later, a bit like reading the book of Revelation. Some of his predictions and visions did not bear out, but some did -- and there are many that feel prescient if you squint a little, or use your imagination. Kurzweil was really the first to introduce to a mainstream audience a lot of the ideas that are driving AI research today, like the notion of an intelligence explosion or what he calls the Singularity.
As for the idea of the mind being a machine that God runs: Yes, that was the problem that I struggled with as a Christian, and it's a really old philosophical problem about the (seeming) incompatibility of freedom with divine foreknowledge. If God knows everything I'm going to do, then in what sense can I really be free? Philosophers have argued that foreknowledge does not really entail determinism, and I suppose that in a technical sense it doesn't. But it's hard, intuitively, to avoid the thought that a deity who knows the future in infinite detail (and also, crucially, refuses to intervene or change things -- that was the issue for me) is actively enabling the damnation of sinners. I really love your idea about God holding knowledge of all the infinite paths not taken -- it's almost like the multiverse theory, but held in the mind of God. Or maybe it's like "quantum" foreknowledge. I've mused about something similar, though my version of the thought experiment is inspired in part by machine learning. What if God is like the designer of an AI model, in the sense that he sets certain goals and guardrails into place but still doesn't know exactly how we're going to respond in every moment? Most AI models are stochastic, meaning that they behave probabilistically, according to the patterns they were designed to discern, but there's still some randomness involved, and they very often surprise their creators with emergent skills that they were not deliberately programmed to execute. Randomness obviously isn't the same as freedom. But I do think a lot of Christian theology rests implicitly on these mechanical metaphors -- the idea that a Creator must understand everything about his creation, the way that a clockmaker understands everything about a clock. But AI models are not deterministic in the same way that a clock is. There's a lot of room for your creation to surprise you. And it's possible to create a machine that's smarter than you are. 
I sometimes wonder whether theologians today will find new metaphors in these digital technologies, or new ways of thinking about the relationship between a creator and its creation.
The idea of "doublings" feels like it fits into this somehow. I guess the short answer is that I've become a bit more spiritual than I was when I wrote the book. When I was writing about those coincidences or doublings, I was thinking about them as a kind of cognitive bias, like the Baader-Meinhof phenomenon or whatever. But even then, I was never fully persuaded that there wasn't something else going on. A lot of my resurgent spiritual interest these last few years was inspired by the conviction that the universe as a whole is way too ordered to have come about by chance. I mean that on the most basic level: You don't get perfectly fine-tuned cosmological constants and mathematical regularity (let alone beauty, consciousness, love, art, etc.) from an infinite number of monkeys pounding away on an infinite number of typewriters. I don't think that means necessarily that our world is the handiwork of a monotheistic God. But I do think there's a lot about the universe that we don't yet understand. We really know so little about how mind and matter are related, and quantum physics suggests that it's much stranger than we can presently imagine.
I don't know if I'm part of a machine -- but maybe I am? Maybe the whole world is a complex machine that involves some mixture of design and foresight, but also randomness and chance. I just finished writing a book about the philosopher and mystic Simone Weil, and she believed that divine grace was a kind of "web" that keeps the world in working order and ensures, for example, that gravity doesn't crush us to death, and that the amount of evil in the world doesn't outbalance the good. It reminded me, again, of machine-learning. In other words, God isn't intervening constantly into time to make things happen or not happen, like a kid with an ant farm. It's more so that he set up a world in which there is room for chance, fate and maybe even free will. But he also set in place certain limits and guardrails, which are grace. I really love that idea.
______________________________________________________________________________
Thanks to Meghan, and thank you for reading! Please check out Meghan’s new book Will and Attention when it comes out in October 2026.
