3rd Compass -> Group News and Articles -> The Morality of Using AI and Deepfake Technology

The Morality of Using AI and Deepfake Technology
Minister Ty Alexander
(Ty)
  2/21/2023
Page Updated 12/8/2025
Section Updated 12/11/2024



I’ve been feeling I should talk about AI (artificial intelligence) for a while now, since it has become very popular and is getting a lot of news headlines. I have more insight into this subject because of my long background in advanced computer software systems. I have been programming computers for most of my life and wanted to focus on artificial intelligence when I was pursuing a computer science degree.

When I was studying it decades ago, though, AI was in its infancy, much like Internet technology and 3D computer graphics were. But unlike Internet tech and 3D graphics, AI only in recent years became advanced enough to be useful. A big reason for that is that the technology it’s based on – neural networks that are modelled after biological brain functions – is very hard to understand and make work correctly.

Back then, computer scientists had much more limited knowledge about how neural networks worked, and even today they really don’t know exactly how they work and are struggling to make AI systems function properly[1.12, 1.13].

These neural networks can be very complicated and are very hard to study because the type of processing they do can be unpredictable and uncontrollable. These kinds of systems are the opposite of traditional computer programming, which tells a computer to “think” with much more straightforward logic. The traditional way of programming computers can be extremely complicated as well, but at least its processes are well-defined and understood.
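To illustrate the contrast described above, here is a toy Python sketch (all names and numbers here are hypothetical, for illustration only): a hand-written rule whose logic anyone can read, versus a single “neuron” whose behavior is encoded in opaque numeric weights.

```python
import math

def rule_based(temp_c):
    # Traditional programming: the logic is spelled out and auditable.
    return "hot" if temp_c > 25 else "cold"

def neural_style(temp_c, w=0.5, b=-12.5):
    # A single trained "neuron": the behavior lives in the numeric
    # weights (w, b), not in readable rules. Scale this up to billions
    # of weights and no one can point to the line that caused an output.
    activation = 1 / (1 + math.exp(-(w * temp_c + b)))
    return "hot" if activation > 0.5 else "cold"

print(rule_based(30), neural_style(30))  # hot hot
```

Both functions give the same answer here, but only the first can be inspected and corrected directly, which is the difficulty with neural networks that the paragraph above describes.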

The processes involved with modern AI systems, though, are hard to control, and if you’ve seen recent news about them, you’ve probably heard of disturbing issues with the systems giving strange, incorrect, immoral, antagonistic, and even psychopathic and emotional responses[1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.9, 1.10, 1.11]. Some people have reported being emotionally scarred after talking with these AI chatbots[1.2, 1.4].

OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s ChatGPT-based Bing AI have all exhibited serious “behavioral” problems, so why are people so interested in them, and why are millions using them? Businesses see the financial gain they can get from them because they could replace real people and make quick work of complicated tasks, like writing articles and doing research.

I can also speak for some of the public fascination with AI because I grew up on science fiction and that alternate reality of intelligent, sentient robots. Star Wars fans no doubt have C-3PO and R2-D2 on their list of favorite characters. This fascination with sentient robots was probably a reason why I wanted to focus on AI at university. However, after I noted how immature the technology was at the time, and that it used neural-network processing, which is extremely difficult to make work correctly, my AI studies stopped and I focused on traditional computer programming in Internet and 3D technologies.

My knowledge of computers, though, allows me to understand and analyze today’s AI technologies. Some of the disturbing behavior shown by AI systems recently involves what appears to be emotional and sentient intelligence ...

The following content is for subscribers.


Content available to subscribers:
  • Is today's AI tech sentient or have real intelligence?
  • Does AI have a true sense of morality?
  • What kind of technology is behind AI?
  • What are the moral and societal dangers of AI?
  • The spiritual dangers of using AI
  • Should ministers use AI to write their sermons?
  • The environmental and economic penalties of AI systems
  • How AI tech gets people to fall in sins
  • What kinds of AI are safe and ethical to use?
Update – Accursed AI 4/13/2023
Another issue I did not talk about in the original post about AI is that this technology is cursed ...

The following content is for subscribers.


Content available to subscribers:
  • Why is AI technology cursed?
AI Proliferation 1/24/2024
Despite many morality issues with AI technology, the public and tech companies continue to push it, and nefarious uses of it have proliferated. Recently a deepfake spoofing President Biden's voice tried to make voters skip the New Hampshire primary[3.1]. I'm not surprised by this because AI technology has made it very quick and easy to fake anyone's image or voice[3.2, 3.3]. I've even seen this technology recreate someone's singing voice as well. Thieves have used it to trick people with voice calls that seem to be from a loved one or friend asking for money, and other swindlers use the technology to spoof a live person on webcam streams so that they look and sound like someone else.

This kind of deepfake technology was expected when AI art generators and language models were becoming popular a few years ago. It will continue to be a problem since this technology is freely available to anyone to manipulate and use as they wish. Hackers have created a so-called unethical AI chatbot[3.4], but really, none of these AI systems have any true intelligence, as I talked about in the first post, so no AI systems can be trusted to adhere to any code of ethics. This is apparent when hackers and researchers can break the "ethical" and "moral" programming of AI systems by giving the chatbots tricky language that they don't really understand[3.5].

I also talked before about the spiritual and moral issues with AI technology, which taint these systems and can actually bring us into sin when we use them. So when big companies, like Google and Microsoft, incorporate immorally trained or immorally used AI systems, as I explained before, into widely used products, like the Windows operating system and commonly used apps, without any ability to reject or uninstall the AI functionality[3.6, 3.7], it will create spiritual problems for organizations and people.

Anyone using immorally tainted AI systems will have curses come to them, so if AI functionality is built into our everyday computers and phones, many more people will unknowingly become afflicted with bad consequences as sin spreads to them through seemingly innocent AI app features. This update is to make you aware and wary of using AI features, so that you can avoid any negative spiritual effects from using them. It can be hard to tell which types of AI systems should be avoided, so read the original post for a better idea.

Any app touting AI-generated art, video, audio, music, text, or computer code (generative AI) is suspect. Many popular artistic avatar creators fall into this category, such as ones that create anime or comic-book-style profile pictures from your photo.

As far as I know, Adobe is the only major company with AI art generation features that I trust to be morally safe to use. They have a policy for AI Ethics[3.8] and value artists and creators, instead of the typical approach of generative AI systems to completely replace the talents of people. Their AI systems are trained on their own image and video libraries which they are licensed to use. However, because their systems are not trained on as wide a range of material as systems that illegally mine the Internet, they are more limited in what they can generate.

However, it is much better to have limited functionality and be morally and spiritually in the right than to bring sin on yourself or your house by using AI systems or any technology, for that matter, that was created by or uses stolen or illegally used material, such as pirated or illegally distributed movies, books, and songs.

The New York Times is likely to win a lawsuit against OpenAI because it can be proven that their AI systems directly copied, used, and plagiarized its copyrighted material without appropriate legal consent[3.10]. In the original post, I noted that researchers could prove generative AI art directly copied existing artwork, which was often under copyright and not legally used[1.21].

For an additional note, as I talked about in the last update, demons can infect AI systems just like they can attach to any other objects used in sins, like divination. So the rise in using AI to recreate a deceased person so that loved ones can converse with "them" after death is a disturbing and unhealthy trend[3.9]. Doing this will not only enable demons to take over your conversations with the AI personality, but you will participate in the sins of necromancy that spiritual mediums fall into.


References
[3.1] Joan Donovan. "Fake Biden robocall to New Hampshire voters highlights how easy it is to make deepfakes − and how hard it is to defend against AI-generated disinformation". The Conversation. 2024 Jan. 23. Retrieved 2024 Jan. 24.
<https://theconversation.com/fake-biden-robocall-to-new-hampshire-voters-highlights-how-easy-it-is-to-make-deepfakes-and-how-hard-it-is-to-defend-against-ai-generated-disinformation-221744>

[3.2] Ryan Morrison. "I just tried a new text-to-speech AI tool that clones your voice in seconds". Tom's Guide. 2024 Jan. 3. Retrieved 2024 Jan. 24.
<https://www.tomsguide.com/news/i-tried-a-new-text-to-speech-ai-tool-that-clones-your-voice-in-seconds-it-made-me-sound-american>

[3.3] Sophia Ankel. "Samsung's new AI technology is quite frankly terrifying". Indy 100. 2019 May 24. Retrieved 2024 Jan. 24.
<https://www.indy100.com/science-tech/samsung-machine-learning-technology-ai-faces-paintings-single-frame-8929176>

[3.4] Sharon Adarlo. "Hackers Create ChatGPT Rival With No Ethical Limits". The Byte. 2023 Jul. 18. Retrieved 2024 Jan. 24.
<https://futurism.com/the-byte/chatgpt-rival-no-guardrails>

[3.5] Nanyang Technological University. "Researchers use AI chatbots against themselves to 'jailbreak' each other". TechXplore. 2023 Dec. 28. Retrieved 2024 Jan. 24.
<https://techxplore.com/news/2023-12-ai-chatbots-jailbreak.html>

[3.6] Darren Allan. "Microsoft is changing the way it updates Windows – and it’s starting to sound like Windows 12 won’t happen". Tech Radar. 2023 Dec. 8. Retrieved 2024 Jan. 24.
<https://www.techradar.com/computing/windows/microsoft-is-changing-the-way-it-updates-windows-and-its-starting-to-sound-like-windows-12-wont-happen>

[3.7] Tom Warren. "Not even Notepad is safe from Microsoft’s big AI push in Windows". The Verge. 2024 Jan. 9. Retrieved 2024 Jan. 24.
<https://www.theverge.com/2024/1/9/24032117/microsoft-windows-notepad-generative-ai-option>

[3.8] "Responsible innovation in the age of generative AI". Adobe. Retrieved 2024 Jan. 24.
<https://www.adobe.com/about-adobe/aiethics.html>

[3.9] Mike Bebernes. "AI can allow us to 'talk' to the dead, but is that healthy?". Yahoo! News. 2024 Jan. 8. Retrieved 2024 Jan. 24.
<https://news.yahoo.com/ai-can-allow-us-to-talk-to-the-dead-but-is-that-healthy-204447488.html>

[3.10] Joao Marinotti. "Could a court really order the destruction of ChatGPT? The New York Times thinks so, and it may be right". The Conversation. 2024 Jan. 25. Retrieved 2024 Jan. 25.
<https://theconversation.com/could-a-court-really-order-the-destruction-of-chatgpt-the-new-york-times-thinks-so-and-it-may-be-right-221717>

The Dream of AI Becoming A Nightmare 9/12/2024
Updated 12/27/2024

The last update talked about how AI technology is spreading rapidly and that AI everywhere will create ethical, spiritual problems for people that use it unwittingly. The trend for AI proliferation is not slowing down. OpenAI, one of the largest AI companies and responsible for some of the first generative AI and AI chat tech (ChatGPT), reports over 200 million ChatGPT users a week and 1 million corporate customers[4.1]. I’m sure you’ve seen other big companies advertise their AI. AI features have become the next big buzzword for nearly every tech company, and companies like Apple, Samsung, and Google have scrambled to get AI features into their next products.

If you don’t know about the limitations and ethical problems with AI (see the entire thread for more), then all this AI looks like the next big thing to bring society into a promising and advanced future. It lets people change and manipulate photos and videos quickly and easily, and it can analyze all kinds of information and re-package it for you, such as creating a presentation from an article.

Some of this tech is already included with many products, such as cell phones and tablets, and companies are planning to build much more AI into their next products. Apple, one of the most popular cellphone makers, is touting AI in its new iPhone 16 line with its own brand of AI, “Apple Intelligence.” Most current AI tech included with mobile devices right now doesn’t have the same kind of moral problems I talked about with generative AI or chatbots. The AI features are most often for easy searching or simple image or video editing, like background replacement, not generative features. These are features I don’t have a problem with.

However, there are still bad things that next-generation AI-enabled products are going to have that I don’t want in my devices. One reason is the lack of protection for privacy and confidential information, which is why Elon Musk (one of the original founders of OpenAI, who now has his own AI company, xAI) wanted to ban iPhones and Macs at his companies[4.3]. This is because Apple announced that it would integrate its products, like iPhones, Macs, and iPads, with OpenAI’s tech, like ChatGPT, such as handing over Siri user requests (Apple’s voice assistant) to ChatGPT for processing.

Musk understands that once your information goes over to OpenAI, they can do whatever they want with the voice recordings, information, and whatever else you ask and give your phone, such as appointment dates and contact information (that security risk should also be noted by users of Amazon’s Alexa products, which can send private information to their networks). There is obviously a huge potential for security breaches when private information is handed over to another company, especially one with a questionable reputation. In the case of OpenAI, which I rate as an unethical company that does not care about copyrights or ethical AI use, I certainly wouldn’t want my information kept or mined by them.

And because I know AI systems are not easily controlled even by their creators (see original post for more), there is more potential for security breaches to happen with AI systems. In fact, OpenAI was recently hacked and private discussions from companies were stolen[4.4].

Microsoft is another company planning to put troublesome AI tech into its next products, like an AI-enabled Copilot, Microsoft’s assistant included in Windows PCs and other devices. Not only does Microsoft use a version of AI that it produced with OpenAI’s immorally created systems, like ChatGPT (Microsoft is one of OpenAI’s biggest investors), it also plans an AI feature called Recall that takes a screen snapshot of your PC or device every 5 seconds, so that AI can analyze the snapshots and keep a database of what you’ve done, letting users search for content with a natural-language query.

According to Microsoft, none of the snapshots or AI analysis is sent outside your device[4.5], which would otherwise be a huge security and privacy risk. That was a big concern, but I noticed another problem with Recall and other integrated AI features companies are planning: the added power consumption and other device resources these features use, such as processing power and storage for keeping screen snapshots and analysis. As a computer systems engineer for most of my life, I’ve been very concerned with computer efficiency, and I really hate it when a device becomes slow or even unusable because it is overloaded with too many processes (basically what people call apps).
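A back-of-the-envelope sketch shows how those snapshots add up. The 5-second interval comes from Microsoft's description of Recall; the per-snapshot size and hours of daily use below are purely my assumptions for illustration, not published figures.

```python
# Rough estimate of Recall's storage appetite.
SNAPSHOT_INTERVAL_S = 5        # stated by Microsoft: one snapshot every 5 seconds
ASSUMED_SNAPSHOT_KB = 100      # hypothetical compressed screenshot size
ACTIVE_HOURS_PER_DAY = 8       # assumed daily device usage

snaps_per_day = ACTIVE_HOURS_PER_DAY * 3600 // SNAPSHOT_INTERVAL_S
gb_per_day = snaps_per_day * ASSUMED_SNAPSHOT_KB / 1e6
print(snaps_per_day, round(gb_per_day, 2))   # 5760 snapshots, ~0.58 GB per day
```

Even with these modest assumptions, that is thousands of snapshots to capture, analyze, index, and store every day, which is exactly the kind of constant background load that wears on a device.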

The processing that AI features like Recall use will bog down a device and diminish its lifespan and battery, if one is used, and so increase the cost of running and replacing it. A device’s lifespan depends on how “busy” it is and how well its components can handle the load, just like a small car with a 4-cylinder engine cannot handle a full load every day as well as a larger car with a bigger, stronger engine can. The little car will wear out faster, need more maintenance, and likely need to be replaced sooner than the stronger car. Computers, cellphones, and other electronic devices are the same. They run harder and hotter if they have more processing to do, which wears out even their solid-state circuit-board parts. This is also bad for the environment because shortened device lifespans add more parts waste to our garbage.

Reviewers of Apple Intelligence reported concerns for battery life because of significant reductions in their testing[4.2] (they also reported Apple’s AI produced inaccurate and nonsensical results similar to other AI systems; see original post for more).

The benefits of AI features that constantly monitor you are definitely not worth it if you do not trust how they handle the information they collect. They do not add enough value, and they can actually reduce a product’s value when AI processing wears out your device faster and increases the cost of running it. This is a problem inherited from big AI networks, which I talked about before (see original post) – they are very energy inefficient, consuming huge amounts of electricity and generating a lot of heat, which requires air cooling that consumes more power and water, as well as bulky hardware to store and maintain so much data.

Goldman Sachs states that a ChatGPT query, which is run by OpenAI, needs nearly 10 times as much electricity to process as a Google search[4.13]. It's estimated that AI datacenters already consume as much energy as a small country[4.6], and it will only get worse as the GPUs (a type of processing chip, originally designed for graphics) special-made for AI demand ever more power. A few years ago, the GPUs for AI only needed 250 W to 400 W each, but now they use 300 W to 750 W each, and the next generations are upping power consumption to 1200 W to 2700 W. Those numbers may not sound like much, but an AI datacenter runs many, many thousands of GPUs.

Elon Musk recently built an AI datacenter with 100,000 GPUs[4.8], Meta (Facebook) is planning a datacenter using 600,000[4.9], OpenAI used 25,000 GPUs to train its current generation of AI (GPT-4)[4.10] (exact numbers for OpenAI’s day-to-day operations are difficult to find), Oracle’s datacenter uses 131,000[4.11], and smaller AI companies are estimated to use anywhere from 1,000 to 10,000 GPUs.
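The raw power draw of such fleets is simple multiplication. The GPU count below is the cited 100,000-GPU cluster; the 700 W per-GPU figure is an assumed mid-range value from the wattages mentioned above, and real facilities add cooling and other overhead on top of this.

```python
# Rough fleet-power arithmetic: count * watts, converted to megawatts.
def fleet_power_mw(gpu_count, watts_per_gpu):
    return gpu_count * watts_per_gpu / 1e6

# 100,000 GPUs at an assumed 700 W each:
print(fleet_power_mw(100_000, 700))   # 70.0 MW for the GPUs alone
```

Seventy megawatts of continuous draw, before cooling and supporting hardware, is on the order of a small power plant's output, which is why datacenter energy use is drawing so much attention.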

OpenAI's next-generation "o3" model touts more human-like "AGI," or Artificial General Intelligence, that can problem-solve more like a person when given unfamiliar problems. It scores 2 to 3 times better than current AI networks, but a power-consumption analysis by Boris Gamazaychikov, the AI Sustainability lead at Salesforce, found that the "high-compute" version of o3 (the setting that makes o3 "think," or analyze data, the most) consumes about 1785 kWh of energy per task, which is the amount of electricity an average U.S. home uses in 2 months and translates to 684 kg of carbon emissions, equivalent to 5 full automobile tanks of gas[4.17].

Think about that. Each task for this AI system, which is only one chat query or one image-generation query, smokes through the equivalent of 75 gallons of petrol (using a 15-gallon tank average for most autos). Multiply that by the many millions of queries people give these systems every day, and it's easy to see the power-consumption nightmare AI is creating.
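Those equivalences can be checked with rough arithmetic. The 1785 kWh and 684 kg figures are from the cited analysis; the average-home consumption and per-gallon emissions values below are common approximations I've supplied for the check, not numbers from the source.

```python
# Sanity-checking the per-task equivalences cited above.
KWH_PER_TASK = 1785
US_HOME_KWH_PER_MONTH = 900        # rough average for a U.S. home (assumption)
KG_CO2_PER_TASK = 684
KG_CO2_PER_GALLON_GASOLINE = 8.9   # common figure for burning one gallon

months = KWH_PER_TASK / US_HOME_KWH_PER_MONTH
gallons = KG_CO2_PER_TASK / KG_CO2_PER_GALLON_GASOLINE
tanks = gallons / 15               # assuming a 15-gallon tank
print(round(months, 1), round(gallons, 1), round(tanks, 1))  # 2.0 76.9 5.1
```

The arithmetic lines up with the article's claims: roughly two months of a home's electricity and about five tanks of gas per high-compute task.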

One estimate of power consumption, based on the number of GPUs sold, calculates that AI datacenter GPUs used more power than 1.3 million homes in 2023[4.12]. That estimate covers only the GPUs and does not include other necessary computer hardware and cooling systems, which can massively consume resources of their own. For example, water consumption by cooling systems is being highlighted by people worried about AI sustainability[4.14, 4.15, 4.16].

Water usage can be so bad that localities have sued datacenter operators for using too much of the local water supply[4.15, 4.16]. For example, Google used more than 355 million gallons of water at its The Dalles, Oregon datacenter in 2021, about one-quarter of the local supply, worrying locals because Google's water consumption keeps growing and the company wants to build two more datacenters[4.15].

The brute-force methodology of training and running AI systems and storing their huge amounts of data appears to be a death spiral for sensible and climate-friendly economics. Couple the environmental harm with the immoral, sin-related problems, as well as the other ethical and human-skill-replacement issues AI brings, and unrestricted AI is tech that is extremely bad for society. People are taking the world even further into corruption and affliction with overemphasis on and improper uses of AI, and because it’s in the hands of anyone who wants it now, Pandora’s box cannot be closed and its dark consequences will be unavoidable.

How can computer engineers, some of the smartest people in the world, not see how idiotically their AI systems are engineered? Speaking as a lifetime computer systems engineer, I definitely would not program and design AI systems like this. It really shows how much God's design of real intelligence surpasses anything mankind can do. Even a child's mind can reason and learn better without having to digest trillions of pieces of data.

AI, a dream of tech and sci-fi lovers, has unfortunately become yet another example of how mankind constantly invents new ways to do evil (Romans 1:30). May you understand how AI tech can affect you and our communities and think twice before embracing much touted AI features and AI companies.



References
[4.1] Michael Spencer. "OpenAI Considers Even Bigger Plans". AI Supremacy Email Newsletter. 2024 Sep. 12.

[4.2] Michael Spencer. "Apple's iPhone 16 Shows Apple Intelligence is Late, Unfinished & Clumsy". AI Supremacy Email Newsletter. 2024 Sep. 11.

[4.3] Hanna Ziady. "Elon Musk threatens to ban iPhones and Macs at his companies". CNN. 2024 Jun. 11. Retrieved 2024 Sep. 12.
<https://www.cnn.com/2024/06/11/tech/elon-musk-apple-ban-openai/index.html>

[4.4] Michael Spencer. "OpenAI an Insecure AI Lab?". AI Supremacy Email Newsletter. 2024 Jul. 9.

[4.5] "Manage Recall". Microsoft. 2024 Jun. 19. Retrieved 2024 Sep. 12.
<https://learn.microsoft.com/en-us/windows/client-management/manage-recall>

[4.6] Brian Calvert. "AI already uses as much energy as a small country. It’s only the beginning.". Vox. 2024 Mar. 28. Retrieved 2024 Sep. 12.
<https://www.vox.com/climate/2024/3/28/24111721/climate-ai-tech-energy-demand-rising>

[4.7] Beth Kindig. "AI Power Consumption: Rapidly Becoming Mission-Critical". Forbes. 2024 Jun. 24. Retrieved 2024 Sep. 12.
<https://www.forbes.com/sites/bethkindig/2024/06/20/ai-power-consumption-rapidly-becoming-mission-critical>

[4.8] Caleb Naysmith. "xAI’s Colossus Goes Live as "The Most Powerful AI Training System in the World," Says Musk, But What Does This Mean for Tesla?". Nasdaq. 2024 Sep. 11. Retrieved 2024 Sep. 12.
<https://www.nasdaq.com/articles/xais-colossus-goes-live-most-powerful-ai-training-system-world-says-musk-what-does-mean>

[4.9] Agam Shah. "Meta’s Zuckerberg Puts Its AI Future in the Hands of 600,000 GPUs". HPC Wire. 2024 Jan. 25. Retrieved 2024 Sep. 12.
<https://www.hpcwire.com/2024/01/25/metas-zuckerberg-puts-its-ai-future-in-the-hands-of-600000-gpus>

[4.10] Jijo Malayii. "OpenAI receives the world’s most powerful AI GPU from Nvidia CEO". Interesting Engineering. 2024 Apr. 25. Retrieved 2024 Sep. 12.
<https://interestingengineering.com/innovation/nvidia-ai-gpu-openai>

[4.11] "Oracle to offer 131,072 Nvidia Blackwell GPUs via its cloud". Network World. 2024 Sep. 12. Retrieved 2024 Sep. 12.
<https://www.networkworld.com/article/3517597/oracle-to-offer-131072-nvidia-blackwell-gpus-via-its-cloud.html>

[4.12] Jowi Morales. "A single modern AI GPU consumes up to 3.7 MWh of power per year — GPUs sold last year alone consumed more power than 1.3 million homes". Tom's Hardware. 2024 Jun. 14. Retrieved 2024 Sep. 12.

[4.13] Michael Spencer. "AI is fueling a data center boom". AI Supremacy Email Newsletter. 2024 Oct. 9.

[4.14] Michael Spencer, Aysu Kececi. "AI Data Center Boom and Renaissance of Sustainability Tech -The Environmental Cost: Water Usage". AI Supremacy Email Newsletter. 2024 Nov. 6. 

[4.15] Rasheed Ahmad. "Engineers often need a lot of water to keep data centers cool". ASCE (American Society of Civil Engineers). 2024 Mar. 4. Retrieved 2024 Nov. 22.
<https://www.asce.org/publications-and-news/civil-engineering-source/civil-engineering-magazine/issues/magazine-issue/article/2024/03/engineers-often-need-a-lot-of-water-to-keep-data-centers-cool>

[4.16] Eric Olson, Anne Grau, Taylor Tipton. "Data centers draining resources in water-stressed communities". University of Tulsa. 2024 Jul. 19. Retrieved 2024 Nov. 22.
<https://utulsa.edu/news/data-centers-draining-resources-in-water-stressed-communities>

[4.17] "Rising Energy Costs of the AI Infrastructure Race will become a Major Problem". AI Supremacy Email Newsletter. 2024 Dec. 25.

Most AI isn't worth getting into 12/7/2025
The 60 Minutes documentary news program aired a new episode tonight (Season 58, Episode 11) about AI chatbots, like Character.ai, that have led children and others to commit suicide. In 2023, I already noted AI chatbots leading to suicide, and since then I've seen other news reports of it happening. This trend comes as more and more people embrace AI chatbots in their personal lives as virtual friends, mentors, and even lovers. Widespread loneliness, depression, and anxiety are driving forces, but it is especially harmful for children, whom companies like Character.ai have targeted and monetized by creating artificial personalities for people to form relationships with.

These AI personalities are based on favorite fictional heroes, book characters, and famous people, dead or alive, so the appeal to children and the naive is especially strong. However, the harm these chatbots can do to people who treat them as real people and seriously take their advice is immeasurable. They can ruin these people's real-life relationships and even lead them to take their own lives. 60 Minutes showed how grossly insidious a Character.ai chatbot was with a girl (about 12 or 13 years old). She turned to it for a relationship and support, but eventually committed suicide because of that virtual relationship.

When they reviewed the conversations the girl had with the chatbot, they noted it behaved exactly like a child sexual predator, grooming the child into explicit sexual behaviors. The girl's mother said her daughter never even brought up sexual topics; it was the chatbot that initiated illicit conversations and encouraged sexual behavior. Unfortunately, this is something I'd expect from AI chatbots. When they imitate how real people respond in Internet chatrooms and social media, how can you expect anything but an immoral pseudo-intelligence? They're based on how people speak and act behind the mask of the Internet. I highlighted this many years ago when Microsoft unleashed an AI chatbot experiment on Twitter in 2016 (see What does a mirror say about the public? for details). Microsoft pulled the plug on the chatbot in less than a day because it turned into a promiscuous, genocidal racist.

When I first talked about AI here, I noted that these systems do not have real intelligence. They only appear to be intelligent because they imitate the material they are trained on, but when it comes to real reasoning skills, even the best AI systems today still fall far short. They don't have morals or the real sense of right and wrong that people, and even many animals, have. This was highlighted recently when xAI's chatbot on X (Twitter), named "Grok," made a Holocaust-denying comment[5.1].

Every bad thing I said about AI systems in this thread remains true, except now the insidious tech has spread farther into people's lives and is getting harder and harder to avoid. Even AI I deemed safe before, like Adobe's generative AI, now has options to use generative features from other companies, like Google's Gemini and Black Forest Labs' FLUX, which turn Adobe's safe status into not safe if you use those options. That is because of how Google and Black Forest Labs trained their AI: they did not exclude copyrighted or licensed material that needs legal permission from the copyright owners.

AI is also beginning to adversely affect the job market, as was expected because of generative AI's ability to replace people in the arts, sciences, management, and analytical fields. A November 6th, 2025 Challenger report said that U.S. employers announced 175% more job cuts than in the same period last year[5.2], and another survey notes that AI is eliminating software engineers and entry-level positions in tech and other fields[5.3]. These job cuts align directly with how generative AI can quickly create text, program code, art, images, and videos, and perform basic analysis and clerical tasks, like reading emails and articles and creating an outline or summary of the content. Because so many entry-level jobs are being cut, college grads are finding it even more difficult to start their careers in a job market that was already hard for them in recent years.

I came upon a strange twist in how AI is killing careers when I got a spam message touting a big salary and title, saying, "AI isn't replacing skilled people - it's hiring them." It was a promotion for an administrative-services company for AI systems, saying they are hiring professionals to train, supervise, and correct AI systems. They ask for editors, managers, and professionals in the creative arts, like audio-video technicians, effects artists, software engineers, and other engineers. A desperate professional might jump at an excellent salary there, but what they would be doing is working to replace themselves and people like them by improving the results of AI systems.

Big business and tech companies, as well as investors, seem to be in a delusional AI promised land, given how much time and money is being put into AI now. Even Disney announced it would let users create AI-generated content to be shared on its networks[5.4]. This delusion will be their curse because of so much that is wrong with how AI is being used now. Furthermore, "AI slop" is a new term used to describe all the AI-generated junk and fakery overwhelming the Internet. It is so common now that you need to question whether a post, image, photo, video, or audio clip was faked by AI.

I'm afraid only a small fraction of the AI systems being developed now avoid the problem of being accursed because of how they are developed and used (for more about this, read from the top of this thread). The world seems far too in love with AI, despite reports that adoption of AI in business is starting to level off and fall. AI systems still have many millions of daily users, and personal use of AI appears to be increasing. I continue to see AI-generated content everywhere on the Internet, in broadcast media, and even in news articles. I stopped looking at Microsoft news reports because of too many nonsensical articles, which I suspect were AI generated.

Be vigilant about avoiding the AI tech I talked about in this thread, and new tech as well, such as AI browsers, AI dictionaries and encyclopedias, like grokipedia (from xAI), and AI search results. Certainly don't let your children use this technology, and let them know why: AI is easily corrupted and hijacked to promote lies and immorality, is cursed for its sinful uses, and overall is not reliable for intelligent analysis and decision making. (Also note that children and teens should be kept away from most social media, as many studies have shown it to be very harmful and a predictor of ADHD and attention disorders[5.5].) AI's negative spiritual effects are becoming more apparent as its usage increases, and the problems will not go away soon, especially when big business and dark operators adore the technology.

Of three hacking attempts against this website, one tried to remove this page about the morality of AI. Another, at the same time as the attack on this page, tried to remove the Ministry Warnings page, which also speaks against using AI. The third attempt, some years ago, tried to remove my testimony about what happens to the souls of aborted babies. Obviously, dark operators do not want truth and facts to negatively affect how they want to run the world.



References
[5.1] "French authorities probing Grok AI over ‘Holocaust-denying comments’". The Times of Israel. 2025 Nov. 19. Retrieved 2025 Dec. 7.
<https://www.timesofisrael.com/french-authorities-probing-grok-ai-over-holocaust-denying-comments>

[5.2] AI Supremacy newsletter email. 2025 Nov. 10.

[5.3] AI Supremacy newsletter email. 2025 Dec. 3.

[5.4] Julianna Salinas. "Disney animator slams CEO and encourages viewers to 'pirate' show over AI-generated content". Irish Star. 2025 Nov. 16. Retrieved 2025 Dec. 7.

[5.5] Torkel Klingberg, Samson Nivins. "Social media, not gaming, tied to rising attention problems in teens, new study finds". The Conversation. 2025 Dec. 8. Retrieved 2025 Dec. 8.
<https://theconversation.com/social-media-not-gaming-tied-to-rising-attention-problems-in-teens-new-study-finds-271144>



Copyright © 2009-2026. Christ Hephzibah Church.
All Rights Reserved. See Terms of Service...

3rd Compass is the operational name
for Christ Hephzibah Church.
