Before we welcome AI as a helper, we should ask what it may slowly train us to trust, obey, and become.
With the recent round of health issues and flu, Martha and I have been watching sermons on streaming sites. Two of those sermons were on artificial intelligence (AI), a somewhat unusual topic for a typical sermon in a traditional congregation. My last few reflective thoughts have been on topics of high concern to most of the public: loneliness, the affordability crisis, and support of or opposition to government actions. Yet if one asks which topic is most discussed across all forms of media, none of these tops the list. Artificial intelligence does. The questions asked are about its ability to serve as a personal assistant, whether it will be addictive or harm our reasoning, and, in the spiritual realm, whether it will lead to or become an idol.
To answer these questions, more precise definitions of the forms of AI being pursued are needed. Many are familiar with the form of AI where you enter a single question and the answer pops out almost immediately. “Agentic AI is a form of artificial intelligence designed to do more than answer a prompt. It can pursue a goal, plan steps, use tools, make limited decisions, and carry work forward with some degree of independence.” There are four forms of agentic AI: “1) Assistant agent — helps with tasks step by step under close human direction. 2) Workflow agent — carries out a defined multistep process such as research, drafting, checking, and revising. 3) Autonomous agent — acts with much greater independence toward a goal, with less human supervision. 4) Multi-agent system — several AI agents work together, each handling a different role. A very short distinction is this: ordinary AI responds, agentic AI acts toward a goal.” To provide an example of ordinary AI, I asked ChatGPT to define agentic AI and its forms. You see the result of ordinary AI in the quotes in this paragraph.
My background came from a world with no electricity, running water, or oil-based machines, so I have witnessed the immense labor-saving gains technology can bring. Heat from a propane stove is instant and requires essentially no labor compared to chopping wood by hand. Naturally, being a researcher by nature, I began using OpenAI’s ChatGPT shortly after it became available. I am not alone. Its adoption rate was more than five times that of radio, TV, or the Internet, reaching 100 million users within two months of its launch on Nov. 30, 2022.
I do use it as agentic AI. When I use ChatGPT Plus, I use it not simply as a chatbot but as a bounded form of agentic AI for staged academic research. It helps me move through a disciplined process of gathering evidence, organizing structure, drafting sections, expanding citations, checking redundancy, and refining the final manuscript. In that sense, it functions as more than a writing assistant, yet it still remains under human direction, judgment, and responsibility. I used it in this way to research AI background and development for this reflection. The result was a 67-page research document with 96 credible citations. This would have taken days manually but was done in about 3 hours with ChatGPT. Since it was done in stages, I did not have to be present during all of the various processes, another productivity gain.
What these forms of AI are called matters, because their proper use, their secular risks, and their spiritual risks are not all the same. Personal use occurs in clusters. The first cluster is information work. This includes topic research, book and article comparison, product evaluation, policy or regulation summaries, trip planning, and other tasks in which the burden falls on searching, reading, comparing, and synthesizing. The second cluster is document and data work. Here the agent is less a researcher than a transformation engine. It turns rough notes into letters, extracts deadlines from documents, converts scans into usable text, reorganizes meeting notes into action items, summarizes long files, and helps clean or explain spreadsheets. The third cluster is transactional browsing and execution. This includes filling in forms, comparing bookings, navigating shopping sites, reordering items, and interacting with software interfaces on the user’s behalf. A fourth cluster is coordination work. This includes household task systems, calendars, reminder frameworks, recurring maintenance plans, volunteer assignment lists, event preparation packets, and communication drafts for groups. The value here is cumulative. A fifth cluster is learning support. Personal agents can construct study plans, generate review questions, explain unfamiliar terms, summarize readings, and adapt follow-up material to the learner’s level. All of these require human checkpoints whenever the task becomes financially binding, legally meaningful, personally sensitive, or morally weighty. If you wish to use some of these personal assistants, they can be found in ChatGPT Plus, Copilot, Gemini, Alexa+, and Zapier.
Industrial use follows the same pattern: the form of the AI shapes both its usefulness and its dangers. You are now encountering, and will increasingly encounter, industrial uses in IT, software, and operations. For example, I can tell ChatGPT to write code for a website and it will, without errors. You have probably been as frustrated as I have when you reach an AI system for customer service and sales. Advances in healthcare and research promise to be dramatic, though some of our experiences with healthcare administration suggest that parts of that field have not caught up yet. Multi-agent, semi-autonomous use is the next phase of industrial adoption.
My view is that AI has great potential for massive increases in productivity, along with great secular and spiritual risks. The first risk is technical. A wrong assumption or action can propagate throughout a system with lightning speed; my systems background urges caution. Privacy and security risks also increase the further any action is from direct human control. Another risk is accountability: where does it lie? For example, the algorithms used by Meta (Facebook) purport to use your postings just like notes put on a blackboard, so Meta did not assume responsibility for any behavior of a user. Two juries have determined that these AI algorithms have an addictive, behavior-modifying influence. Thus Meta is liable, though for years it thought it was immune. Though predictions of widespread near-term labor displacement are misplaced, there will be displacement in some fields such as customer support, software support, and administrative tasks. Any use with legal, medical, financial, or widespread consequences carries risk.
As I view the AI landscape, the greatest risk is spiritual. AI can make us lose our ability to reason and to discern between the paths best for us as humans created in the image of God. In fact, we can, and some algorithms promote this, become actual slaves to the use and judgment of AI. As Christians, we cannot offload moral, personal, and spiritual questions to the non-human collective memories of AI. AI is not God-breathed. Some take the possibility of almost infinite memory and access to information as the means to create the equivalent of a human; AI becomes their god. If you want a companion, create your own, just as God did. These combinations of robotics and AI are already being sold.
So how does a Christian use AI responsibly and avoid the risks, especially the spiritual ones? Read my Commentary to find out.