Friday 13th of March 2026

spiritual advice from the god of artificial intelligence...

About a third of practicing American Christians say the spiritual advice they get from artificial intelligence is just as good as that from a pastor, with practicing Christians more likely to agree with this notion than non-practicing Christians and non-Christians, according to new research from the Barna Group.

 

A third of Christians trust spiritual advice from AI as much as pastor: study

By Leonardo Blair

 

The data was collected as part of Barna’s "State of the Church" initiative in partnership with Gloo, a leading technology platform connecting the faith ecosystem to advance human flourishing. The latest findings were released at the National Religious Broadcasters International Christian Media Convention in Tennessee last week.


In a survey of 1,514 U.S. adults conducted in November 2025, researchers found that nearly a third of U.S. adults (30%) now "somewhat" or "strongly" agree that spiritual advice from AI "is as trustworthy as advice from a pastor." Among Generation Z and millennials, that share jumps to 39% and 40%, respectively.

About a third (34%) of practicing Christians somewhat or strongly agreed that AI advice is just as trustworthy as advice from a pastor, while 29% of non-practicing Christians and 27% of non-Christians agreed with this sentiment. 

The findings show that AI is "influencing everyday spiritual habits," researchers note in a report.

Four in 10 Christians say AI has helped them with prayer, Bible study and spiritual growth. Data from a December survey of 442 Protestant pastors in the U.S. shows that more than 41% of pastors report using AI for Bible study preparation.

"At the same time, many church leaders acknowledge uncertainty," Barna researchers wrote.

The Barna study found that while about a third of practicing Christians expressed a desire for guidance from their pastors on navigating technology, only 12% of pastors say they are comfortable teaching on the matter.

“Though the majority of practicing Christians remain the most cautious about embracing AI as a spiritual tool, their views are shifting and remain largely uninformed by their pastor,” Daniel Copeland, Barna’s vice president of research, said in a statement on the research. “There’s a real opportunity here for pastors to disciple their congregants on how to use this technology in a beneficial way.”

The Barna data comes as the December 2025 "State of AI in the Church Survey Report," prepared by AiForChurchLeaders.com and Exponential AI NEXT, reports that nearly two-thirds of church leaders surveyed who prepare sermons said they use AI tools in their sermon writing process. The data from that report is based on responses from 594 pastors and church staff members. ChatGPT and Grammarly were reported as the top two AI tools used.

The latest findings from Barna research also highlight the influence of Christian media.


Based on a February 2025 survey of 2,025 U.S. adults, Barna found that nearly two-thirds (61%) engaged with Christian media in some form, while half (51%) said they engaged weekly.

Two in three U.S. adults also see Christian media as "valuable" and "trustworthy." Some 45% of those who consume Christian media frequently, however, judged the content as "divisive," while another 40% said it makes “Christians look bad.” 

“As trust in mainstream media has declined in recent years, it’s encouraging to see that confidence in Christian media remains relatively high,” Scott Beck, Gloo co-founder and CEO, said in a statement. “What a privilege to release these findings at an event full of Christian broadcasters and leaders who can return to their respective cities inspired to continue to do the important work they are doing to help people flourish and communities thrive.”

Compared to just over two years ago, the use of AI has increased by 80% across all ministries in churches, and a growing share of people have been turning to apps like Text With Jesus for spiritual guidance, The Christian Post recently reported.

As the rapid adoption of AI in church life continues, Pastor Ray Miller of First Baptist Church in Abilene, Texas, has warned against the technology becoming "another type of idol pulling at our attention."

“Often, people turn to AI because they do not have another human being or pastor or priest to turn to, and it becomes convenient. With discernment and care, I believe we can develop some best practices when it comes to AI usage for churches and use for faith in general," he told CP.

"We are living in the midst of a technological revolution unseen in human history since the advent of the printing press. That technological shift had profound implications for faith, as the Bible was finally placed in the hands of the people,” he explained.

"As we begin to sift through what AI will do to us as humans, the Church will have to help answer the question: what does it mean to be human, to be made in God's image in an age of digital AI?” he added. "We will have to double down on discipling people to develop their own slow interactive relationship with God.”


https://www.christianpost.com/news/a-third-of-christians-trust-spiritual-advice-from-ai.html

 

YOURDEMOCRACY.NET RECORDS HISTORY AS IT SHOULD BE — NOT AS THE WESTERN MEDIA WRONGLY REPORTS IT — SINCE 2005.

 

         Gus Leonisky

         POLITICAL CARTOONIST SINCE 1951.

 

ask claude...

US Secretary of War Pete Hegseth has formally designated artificial intelligence firm Anthropic a “supply-chain risk to national security,” following President Donald Trump’s order to blacklist the company across the federal government.

Hegseth announced the decision Friday, accusing the company of attempting to impose ideological constraints on US military operations after Anthropic refused to remove safeguards limiting the use of its artificial intelligence systems for mass surveillance and fully autonomous weapons.

“Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military,” Hegseth said in a statement posted on social media. “That is unacceptable... The Department of War must have full, unrestricted access to Anthropic’s models for every lawful purpose in defense of the Republic.”

“I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security... This decision is final,” Hegseth stated.

The designation would bar any contractor, supplier, or partner working with the US military from engaging in commercial activity with Anthropic, significantly widening the impact beyond the Pentagon itself. Defense officials said the company will be permitted to continue providing services for up to six months to allow military systems to transition to alternative providers.

The move follows Trump’s sweeping directive ordering federal agencies to halt use of Anthropic technology after the company resisted Pentagon demands to lift contractual limits governing how its flagship AI model, Claude, could be deployed.

Anthropic has maintained two non-negotiable restrictions: prohibitions on mass domestic surveillance and on fully autonomous weapons operating without meaningful human control. Pentagon officials argue such safeguards could hinder operational flexibility during military crises.

The “supply-chain risk” designation – typically reserved for companies linked to foreign adversaries – marks an extraordinary step against a US-based technology firm deeply embedded in national security operations.

Anthropic was the first and only commercial AI developer to deploy advanced language models on classified Pentagon networks under a contract valued at up to $200 million. Its systems have supported intelligence analysis and military operational planning, including the raid that targeted Venezuelan President Nicolas Maduro.

https://www.rt.com/news/633148-pentagon-anthropic-national-security-risk/

 

SEE ALSO: 

Anthropic, one of Silicon Valley’s leading artificial intelligence firms, is locked in a standoff with the Pentagon over how far powerful AI systems could be used for war and surveillance.

The dispute centers on Anthropic’s Claude chatbot, which has been running on US military classified networks and was reportedly used in planning the operation to capture Venezuelan President Nicolas Maduro.

The Department of Defense has blacklisted the company as a “supply chain risk” after it ignored the ultimatum to lift key safeguards by 5:01pm Eastern Time (22:01 GMT) on Friday.

READ MORE: Pentagon designates key AI contractor a ‘national security risk’

President Donald Trump simultaneously ordered all federal agencies to halt the use of Anthropic’s tech, threatening the company with severe legal consequences if it refused to cooperate during a six-month phase-out period.

Why the Claude chatbot matters to the Pentagon

Claude is deeply embedded in US defense workflows. The company says its models are already used across national security agencies for intelligence analysis, simulations, operational planning, cyber operations and other “mission‑critical” tasks.

Anthropic became the first AI firm to deploy systems on the Pentagon’s classified networks, signing a contract worth up to $200 million with the Department of War last summer.

Other major AI providers have so far only reached deals to run their models on the military’s unclassified systems, putting Claude in a privileged position inside the US defense establishment.

What are the Pentagon‘s demands?

Within Anthropic’s acceptable‑use policy for the Department of War are explicit bans on using Claude for mass domestic surveillance and for fully autonomous weapons. Those contractual safeguards reflect the company’s internal rules.

The Pentagon has demanded those limits be scrapped. Officials say they want to be able to use the system for “all lawful purposes” and, according to US media, have pressed the firm to provide a “clean” version of the model stripped of moral and ethical constraints.

“You can’t lead tactical operations by exception,” an unnamed Pentagon official was quoted as saying by CNN, insisting that “legality is the Pentagon’s responsibility as the end user.” The military argues it cannot afford to be in a crisis and have to ask a private contractor for permission to remove guardrails.

US Secretary of War Pete Hegseth, who met Anthropic CEO Dario Amodei this week, has publicly complained that the Pentagon does not need neural networks “that can’t fight” and has threatened to designate Anthropic a “supply chain risk” – a label usually reserved for firms seen as extensions of foreign adversaries.

READ MORE: Top AIs deploy nukes in 95% of war game simulations – study

Pentagon spokesman Sean Parnell claimed the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement,” but stressed: “We will not let ANY company dictate the terms regarding how we make operational decisions.”

What are Anthropic’s red lines?

Anthropic says it is willing to keep working with US national security agencies, but will not drop two core restrictions on how its systems are used.

“Threats do not change our position: we cannot in good conscience accede to their request,” Amodei said in a statement on Thursday, adding that the Pentagon’s demands “have never been included in our contracts… and we believe they should not be included now.”

The company set two clear red lines for its AI, declaring it will not support mass domestic surveillance or fully autonomous weapons. It argues that large‑scale monitoring of Americans is “incompatible with democratic values” and that today’s models are “not reliable enough” to make lethal decisions without human control.

Amodei insists these carve‑outs have not prevented the US military from using Claude for other “mission‑critical” tasks and says the firm still wants to support US national security – but not at the cost of enabling mass surveillance at home or fully autonomous killing.

Can Anthropic survive being blacklisted?

Amodei says the Department of War has warned that, if Anthropic keeps its safeguards, it could be removed from military systems and declared the aforementioned “supply chain risk” – a designation never before applied to an American firm.

Losing a contract worth up to $200 million would not be existential for Anthropic, valued at nearly $400 billion, but such a label could hit much harder. Any company doing business with the Pentagon would have to prove that its own systems do not rely on Anthropic’s technology, potentially complicating or chilling large enterprise deals with firms that also supply the US military.

For the Department of War, cutting ties would also be costly. Officials would have to replace internal tools built around Claude. One Pentagon source told US media that Elon Musk’s Grok AI system is “on board with being used in a classified setting,” but acknowledged that Grok is not regarded as being as advanced as Anthropic’s model.

Will Silicon Valley push back?

The developer’s stance has triggered an unusual wave of public support inside Silicon Valley. Late on Thursday, hundreds of current employees at Google and OpenAI – two of Anthropic’s main rivals, both of which also supply AI models to the US military – signed an open letter backing the company’s refusal to comply with the Pentagon’s demands.

The petition, titled ‘We Will Not Be Divided’, had been publicly signed by 421 Google and 76 OpenAI staffers as of Friday. Citing a US media report, the letter accuses the Department of War of targeting Anthropic for “sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.”

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the signatories wrote, alleging that officials are trying “to divide each company with fear that the other will give in.” The letter calls on the two firms’ leadership to “put aside their differences and stand together to continue to refuse” the Defense Department’s demands.

What the showdown means for AI

The clash between Anthropic and the Pentagon has drawn interest from technology and defense analysts, who warn it may set precedents for how powerful AI is governed in any future conflicts. Adam Connor, vice president for technology policy at the Center for American Progress, told US media the dispute is likely to be read across the industry as a signal that defense officials do not want contractual limits on how military users can deploy advanced models.

The Pentagon’s move marks a historic escalation, observers say, effectively turning one of America’s most advanced commercial AI products into a pariah inside its own defense ecosystem. Gregory Allen, a senior adviser at the Center for Strategic and International Studies, argued that treating Anthropic this way would be akin to burning one of the US tech sector’s “crown jewels” at a time when Washington is comparing the AI race with China to the space race with the Soviet Union. He suggested there are better ways to resolve the dispute than the “absolutist” stance the Trump administration has taken.

https://www.rt.com/news/633126-pentagon-anthropic-ai-war-surveillance/

 

====================

 

READ FROM TOP.

 
