
The Morality of Using AI and Deepfake Technology
Minister Ty Alexander
(Ty Huynh)
  2/21/2023


I’ve been feeling I should talk about AI (artificial intelligence) for a while now, since it has become very popular and is getting a lot of news headlines. I have more insight into this subject than most because of my long background in advanced computer software systems. I have been programming computers for most of my life and wanted to focus on artificial intelligence when I was pursuing a computer science degree.

When I was studying it decades ago, though, AI was in its infancy, much like Internet technology and 3D computer graphics were. But unlike Internet tech and 3D graphics, AI only in recent years became advanced enough to be useful. A big reason for that is the technology it’s based on – neural networks modelled after biological brain functions – is very hard to understand and make work correctly.

Back then, computer scientists had much more limited knowledge about how neural networks worked, and even today they really don’t know exactly how they work and are struggling to make AI systems function properly[1.12, 1.13].

These neural networks can be very complicated and are very hard to study because the type of processing they do can be unpredictable and uncontrollable. These kinds of systems are the opposite of traditional computer programming, which tells a computer to “think” with much more straightforward logic. The traditional way of programming computers can be extremely complicated as well, but at least its processes are well-defined and understood.
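To make that contrast concrete, here is a toy sketch in Python (my own illustration, not code from any real AI system): a traditional program whose rule is written out explicitly, next to a single artificial "neuron" whose behavior depends entirely on opaque numeric weights.

```python
import random

# Traditional programming: the decision logic is explicit and auditable.
def is_even_traditional(n):
    return n % 2 == 0  # the rule is written out and fully understood

# A toy "neural network": one neuron with weights chosen at random, standing
# in for weights produced by training. Nothing in these numbers explains
# *why* the output is what it is.
weights = [random.uniform(-1, 1) for _ in range(4)]

def neuron(inputs):
    # Weighted sum followed by a threshold: the behavior depends entirely
    # on the opaque weight values, not on any human-readable rule.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

print(is_even_traditional(10))  # True, and we can say exactly why
print(neuron([1, 0, 1, 1]))     # 0 or 1; explaining why requires
                                # inspecting learned weights, not rules
```

The point of the sketch is that the traditional function can be verified just by reading it, while the neuron's output can only be explained by inspecting its weights, which in real networks number in the billions.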

The processes involved with modern AI systems, though, are hard to control, and if you’ve seen recent news about them, you’ve probably heard of disturbing issues with the systems giving strange, incorrect, immoral, antagonistic, and even psychopathic and emotional responses[1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.9, 1.10, 1.11]. Some people have reported being emotionally scarred after dealing with these AI chatbots[1.2, 1.4].

OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s ChatGPT-based Bing AI have all exhibited serious “behavioral” problems, so why are people so interested in them, and why are millions using them? Businesses see the financial gain they can get from these systems because they could replace real people and make quick work of complicated tasks, like writing articles and doing research.

I can also speak for some of the public fascination with AI because I grew up on science fiction and that alternate reality of intelligent, sentient robots. Star Wars fans no doubt have C-3PO and R2-D2 on their list of favorite characters. This fascination with sentient robots was probably a reason why I wanted to focus on AI at university. However, after I noted how immature the technology was at the time, and that it used neural network processing, which is extremely difficult to make work correctly, I stopped my AI studies and focused on traditional computer programming in Internet and 3D technologies.

My knowledge of computers, though, allows me to understand and analyze today’s AI technologies. Some of the disturbing behavior shown by AI systems recently involves what appears to be emotional and sentient intelligence...


Update – Accursed AI 4/13/2023
Another issue I did not talk about in the original post about AI is that this technology is cursed...


AI Proliferation 1/24/2024
Despite many morality issues with AI technology, the public and tech companies continue to push it, and nefarious uses of it have proliferated. Recently a deepfake robocall spoofing President Biden's voice tried to make voters skip the New Hampshire primary[3.1]. I'm not surprised by this because AI technology has made it very quick and easy to fake anyone's image or voice[3.2, 3.3]. I've even seen this technology recreate someone's singing voice. Thieves have used it to trick people with voice calls that seem to be from a loved one or friend asking for money, and other swindlers use the technology to spoof a live person on webcam streams so that they look and sound like someone else.

This kind of deepfake technology was expected when AI art generators and language models were becoming popular a few years ago. It will continue to be a problem since this technology is freely available to anyone to manipulate and use as they wish. Hackers have created a so-called unethical AI chatbot[3.4], but really, none of these AI systems have any true intelligence, as I talked about in the first post, so no AI system can be trusted to adhere to any code of ethics. This is apparent when hackers and researchers can break the "ethical" and "moral" programming of AI systems by giving the chatbots tricky language that they don't really understand[3.5].
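As a simplified illustration of why tricky language works (this toy filter is my own example; real guardrails are far more sophisticated, but they share the underlying weakness of matching language patterns rather than truly understanding intent):

```python
# A naive "ethics filter" based on keyword matching -- a stand-in for
# real AI guardrails, which also pattern-match language at some level
# instead of understanding what is being asked.
BLOCKED = {"hack", "steal"}

def filter_request(text):
    # Split the request into words and refuse if any blocked word appears.
    words = set(text.lower().split())
    return "REFUSED" if words & BLOCKED else "ANSWERED"

print(filter_request("how do I hack a password"))  # REFUSED
print(filter_request("how do I h4ck a p@ssword"))  # ANSWERED -- the same
# intent, slightly reworded, slips past the pattern match
```

A system that merely recognizes patterns in language, rather than understanding it, can always be rephrased around, which is why the "jailbreaks" in the news keep succeeding.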

I also talked before about the spiritual moral issues with AI technology, which taint these systems and can actually bring us into sin when we use them. So when big companies like Google and Microsoft, which use immorally trained or immorally applied AI systems as I explained before, incorporate AI technology into widely used products, like the Windows operating system and commonly used apps, without any ability to reject or uninstall the AI functionality[3.6, 3.7], it will create spiritual problems for organizations and people.

Anyone using immorally tainted AI systems will have curses come to them, so if AI functionality is built into our everyday computers and phones, many more people will unknowingly become afflicted with bad consequences as sin spreads to them through seemingly innocent AI app features. This update is to make you aware and wary of using AI features, so that you can avoid any negative spiritual effects from them. It can be hard to know which types of AI systems should be avoided, so read the original post for a better idea.

Any app touting AI-generated art, video, audio, music, text, or computer code is suspect (generative AI). Many popular artistic avatar creators fall into this category, such as ones that create anime- or comic-book-style profile pictures from your photo.

As far as I know, Adobe is the only major company with AI art generation features that I trust to be morally safe to use. They have a policy for AI ethics[3.8] and value artists and creators, instead of taking the typical approach of generative AI systems, which is to completely replace the talents of people. Their AI systems are trained on their own image and video libraries, which they are licensed to use. However, because their systems are not trained on as wide a range of material as systems that illegally mine the Internet, they are more limited in what they can generate.

However, it is much better to have limited functionality and be morally and spiritually in the right than to bring sin on yourself or your house by using AI systems, or any technology for that matter, that were created by or use stolen or illegally obtained material, such as pirated or illegally distributed movies, books, and songs.

The New York Times is likely to win its lawsuit against OpenAI because it can be proven that OpenAI's systems directly copied, used, and plagiarized the Times' copyrighted material[3.10]. In the original post, I noted that researchers could prove generative AI art directly copied existing artwork, which was often under copyright and not legally used[1.21].

As an additional note, and as I talked about in the last update, demons can infect AI systems, so the rise in using AI to recreate a deceased person so that loved ones can converse with "them" after death is a disturbing and unhealthy trend[3.9]. Doing this will not only enable demons to take over your conversations with the AI personality, but it will also make you participate in the sins of necromancy that spiritual mediums fall into.


References
[3.1] Joan Donovan. "Fake Biden robocall to New Hampshire voters highlights how easy it is to make deepfakes − and how hard it is to defend against AI-generated disinformation". The Conversation. 2024 Jan. 23. Retrieved 2024 Jan. 24.
<https://theconversation.com/fake-biden-robocall-to-new-hampshire-voters-highlights-how-easy-it-is-to-make-deepfakes-and-how-hard-it-is-to-defend-against-ai-generated-disinformation-221744>

[3.2] Ryan Morrison. "I just tried a new text-to-speech AI tool that clones your voice in seconds". Tom's Guide. 2024 Jan. 3. Retrieved 2024 Jan. 24.
<https://www.tomsguide.com/news/i-tried-a-new-text-to-speech-ai-tool-that-clones-your-voice-in-seconds-it-made-me-sound-american>

[3.3] Sophia Ankel. "Samsung's new AI technology is quite frankly terrifying". Indy 100. 2019 May 24. Retrieved 2024 Jan. 24.
<https://www.indy100.com/science-tech/samsung-machine-learning-technology-ai-faces-paintings-single-frame-8929176>

[3.4] Sharon Adarlo. "Hackers Create ChatGPT Rival With No Ethical Limits". The Byte. 2023 Jul. 18. Retrieved 2024 Jan. 24.
<https://futurism.com/the-byte/chatgpt-rival-no-guardrails>

[3.5] Nanyang Technological University. "Researchers use AI chatbots against themselves to 'jailbreak' each other". TechXplore. 2023 Dec. 28. Retrieved 2024 Jan. 24.
<https://techxplore.com/news/2023-12-ai-chatbots-jailbreak.html>

[3.6] Darren Allan. "Microsoft is changing the way it updates Windows – and it’s starting to sound like Windows 12 won’t happen". Tech Radar. 2023 Dec. 8. Retrieved 2024 Jan. 24.
<https://www.techradar.com/computing/windows/microsoft-is-changing-the-way-it-updates-windows-and-its-starting-to-sound-like-windows-12-wont-happen>

[3.7] Tom Warren. "Not even Notepad is safe from Microsoft’s big AI push in Windows". The Verge. 2024 Jan. 9. Retrieved 2024 Jan. 24.
<https://www.theverge.com/2024/1/9/24032117/microsoft-windows-notepad-generative-ai-option>

[3.8] "Responsible innovation in the age of generative AI". Adobe. Retrieved 2024 Jan. 24.
<https://www.adobe.com/about-adobe/aiethics.html>

[3.9] Mike Bebernes. "AI can allow us to 'talk' to the dead, but is that healthy?". Yahoo! News. 2024 Jan. 8. Retrieved 2024 Jan. 24.
<https://news.yahoo.com/ai-can-allow-us-to-talk-to-the-dead-but-is-that-healthy-204447488.html>

[3.10] Joao Marinotti. "Could a court really order the destruction of ChatGPT? The New York Times thinks so, and it may be right". The Conversation. 2024 Jan. 25. Retrieved 2024 Jan. 25.
<https://theconversation.com/could-a-court-really-order-the-destruction-of-chatgpt-the-new-york-times-thinks-so-and-it-may-be-right-221717>



Copyright © 2009-2024. Christ Hephzibah Church.
All Rights Reserved. See Terms of Service...

3rd Compass is the operational name
for Christ Hephzibah Church.
