Updated 12/27/2024
The last update talked about how AI technology is spreading rapidly and how AI everywhere will create ethical and spiritual problems for people who use it unwittingly. The trend of AI proliferation is not slowing down. OpenAI, one of the largest AI companies and responsible for some of the first generative AI and AI chat tech (ChatGPT), reports over 200 million weekly ChatGPT users and 1 million corporate customers[4.1]. I’m sure you’ve seen other big companies advertise their AI. AI features have become the next big buzzword for nearly every tech company, and companies like Apple, Samsung, and Google have scrambled to get AI features into their next products.
If you don’t know about the limitations and ethical problems with AI
(see entire thread for more), then all this AI looks like the next big thing to bring society into a promising and advanced future. It lets people change and manipulate photos and videos quickly and easily, and it can analyze all kinds of information and re-package it for you in an organized form, such as turning an article into a presentation.
Some of this tech is already included with many products, such as cell phones and tablets, and other companies are planning to include much more AI in their next products. Apple, one of the most popular cellphone makers, is touting its own brand of AI, “Apple Intelligence,” in its new iPhone 16 line. Most AI tech included with mobile devices right now doesn’t have the same kind of moral problems I talked about with generative AI or chatbots. The AI features are most often for easy searching or simple image and video editing, like background replacement, not generative features. These are features I don’t have a problem with.
However, there are still bad things that next-generation AI-enabled products are going to have that I don’t want in my devices. One is the lack of protection for privacy and confidential information, which is why Elon Musk (one of the original founders of OpenAI who now has his own AI company, xAI) threatened to ban iPhones and Macs at his companies[4.3]. This came after Apple announced that it would integrate its products, like iPhones, Macs, and iPads, with OpenAI’s tech, like ChatGPT, such as handing Siri (Apple’s voice assistant) user requests over to ChatGPT for processing.
Musk understands that once your information goes over to OpenAI, they can do whatever they want with the voice recordings, questions, and whatever else you give your phone, such as appointment dates and contact information (a security risk that should also be noted by users of Amazon’s Alexa products, which can send private information to Amazon’s networks). There is obviously huge potential for security breaches when private information is handed over to another company, especially one with a questionable reputation. In the case of OpenAI, which I rate as an unethical company that does not care about copyrights or ethical AI use, I certainly wouldn’t want my information kept or mined by them.
And because I know AI systems are not easily controlled even by their creators
(see original post for more), there is even more potential for security breaches with them. In fact, OpenAI was recently hacked and private discussions from companies were stolen[4.4].
Microsoft is another company planning to put troublesome AI tech into its next products, like an AI-enabled Copilot, Microsoft’s assistant included in Windows PCs and other devices. Not only does Microsoft use a version of AI it produced with OpenAI’s immorally created systems, like ChatGPT (Microsoft is one of OpenAI’s biggest investors), but it also plans an AI feature called Recall that takes a screen snapshot of your PC or device every 5 seconds, so that AI can look at them and keep a database of what you’ve done, letting users search for content using a natural-language query.
According to Microsoft, none of the snapshots or AI analysis is sent outside your device[4.5]; if it were, that would be a huge security and privacy risk. That was my first big concern, but another problem I noticed with Recall and the other integrated AI features companies are planning is the added power consumption and other device resources these features use, such as processing power and the storage needed to keep screen snapshots and their analysis. As a computer systems engineer for most of my life, I’ve been very concerned with computer efficiency, and I really hate it when a device becomes slow or even unusable because it is overloaded with too many processes (basically what people call apps).
The processing that AI features like Recall use will bog down a device and diminish its lifespan and battery (if it has one), and so increase the cost of running and replacing it. A device’s lifespan depends on how “busy” it is and how well its components can handle the load, just like a small car with a 4-cylinder engine will not handle a full load every day as well as a larger car with a bigger, stronger engine. The little car will wear out faster, need more maintenance, and likely need to be replaced sooner than the stronger car. Computers, cellphones, and other electronic devices are the same. They run harder and hotter when they have more processing to do, which wears out even their solid-state circuit board parts. This is also bad for the environment, because shorter device lifespans mean more parts waste filling our garbage.
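To make that resource cost concrete, here is a rough back-of-envelope sketch of how quickly Recall-style snapshots could pile up. The 5-second interval comes from the Recall description above; the snapshot size and hours of active use are my own assumptions for illustration, not Microsoft’s numbers.

```python
# Back-of-envelope estimate of how Recall-style snapshots pile up.
# The 5-second capture interval comes from the article text above;
# snapshot size and active hours per day are assumed for illustration.

SNAPSHOT_INTERVAL_S = 5        # one screen snapshot every 5 seconds
ACTIVE_HOURS_PER_DAY = 8       # assumed hours of active device use per day
AVG_SNAPSHOT_MB = 0.15         # assumed compressed size per snapshot (~150 KB)

snapshots_per_day = ACTIVE_HOURS_PER_DAY * 3600 / SNAPSHOT_INTERVAL_S
mb_per_day = snapshots_per_day * AVG_SNAPSHOT_MB
gb_per_month = mb_per_day * 30 / 1024

print(f"Snapshots per day: {snapshots_per_day:,.0f}")
print(f"Storage per day:   {mb_per_day:,.0f} MB")
print(f"Storage per month: {gb_per_month:.1f} GB (before any AI analysis database)")
```

Even with those modest assumptions, that is thousands of snapshots a day, each of which also has to be processed by an on-device AI model to build the searchable database. That is exactly the kind of constant background load that wears a device down.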
Reviewers of Apple Intelligence reported concerns about battery life because of significant reductions in their testing[4.2] (they also reported that Apple’s AI produced inaccurate and nonsensical results similar to other AI systems; see original post for more).
The benefits of AI features that constantly monitor you are definitely not worth it if you do not trust how they handle the information they collect, and they do not add enough value; they can actually reduce a product’s value when AI processing wears out your device faster and increases the cost of running it. This is a problem inherited from big AI networks, which I talked about before
(see original post): they are very energy inefficient, consuming huge amounts of electricity and generating a lot of heat that requires cooling, which consumes even more power and water, and they need bulky hardware to store and maintain so much data.
Goldman Sachs states that a ChatGPT query, which is run by OpenAI, needs nearly 10 times as much electricity to process as a Google search[4.13]. It's estimated that AI datacenters are already consuming as much energy as a small country[4.6], and it will only get worse because the GPUs special-made for AI (graphics processing units, the main processing chips these systems run on, originally designed for graphics work) keep climbing in energy consumption. A few years ago, the GPUs used for AI only needed 250W to 400W each; now they use 300W to 750W each, and the next generations are upping power consumption to 1200W to 2700W. Those numbers don’t sound like much, but an AI datacenter runs many, many thousands of GPUs.
Elon Musk recently built an AI datacenter with 100,000 GPUs[4.8], Meta (Facebook) is planning a datacenter using 600,000[4.9], OpenAI’s current generation of AI (GPT-4) used 25,000 GPUs to train it[4.10] (exact numbers for OpenAI’s day-to-day operations are difficult to find), Oracle plans to offer a cloud cluster of 131,072[4.11], and smaller AI companies are estimated to use anywhere from 1,000 to 10,000 GPUs.
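To get a feel for what those GPU counts mean in raw electricity, here is a simple sketch using an assumed mid-range figure of 700 W per GPU, taken from the wattage ranges above; real facilities draw considerably more once servers, networking, storage, and cooling are added.

```python
# Rough GPU-only power draw for the AI clusters mentioned above.
# 700 W per GPU is an assumed mid-range figure from the wattage ranges
# quoted earlier; it ignores CPUs, networking, storage, and cooling.

WATTS_PER_GPU = 700
AVG_HOME_DRAW_KW = 1.2   # assumed average continuous draw of a U.S. home

clusters = {
    "xAI Colossus (reported)":         100_000,
    "Meta plan (reported)":            600_000,
    "Oracle cloud offering (planned)": 131_072,
}

for name, gpus in clusters.items():
    megawatts = gpus * WATTS_PER_GPU / 1e6
    homes = megawatts * 1000 / AVG_HOME_DRAW_KW
    print(f"{name:33s} {gpus:>7,} GPUs = {megawatts:5.0f} MW "
          f"(about {homes:,.0f} homes' continuous draw)")
```

Even counting only the GPUs, a single one of these clusters draws as much power around the clock as tens or hundreds of thousands of homes.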
OpenAI's next-generation "o3" model is touted as more human-like "AGI," or Artificial General Intelligence, that can problem-solve more like a person when given unfamiliar problems. It scores 2 to 3 times better than current AI networks, but a power consumption analysis by Boris Gamazaychikov, the AI Sustainability lead at Salesforce, found that the "high-compute" version of o3 (the setting that makes o3 "think," or analyze data, the most) consumes about 1,785 kWh of energy per task. That is the same amount of electricity an average U.S. home uses in 2 months, and it translates to 684 kg of carbon emissions, equivalent to 5 full automobile tanks of gas[4.17].
Think about that. Each task for this AI system, which is only one chat query or one image generation query, smokes through the equivalent of 75 gallons of gasoline (using a 15-gallon tank average for most autos). Multiply that by the many millions of queries people are giving these systems every day and it’s easy to see the power consumption nightmare AI is creating.
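The arithmetic behind those comparisons is easy to check. Here is a sketch using the 1,785 kWh and 684 kg figures from the analysis above, along with an assumed ~900 kWh per month for an average U.S. home and the commonly cited figure of roughly 8.9 kg of CO2 per gallon of gasoline burned.

```python
# Reproducing the scale comparisons for one "high-compute" o3 task,
# using the figures quoted above. The household and per-gallon numbers
# are commonly cited averages, used here as assumptions.

TASK_ENERGY_KWH = 1785         # energy per high-compute o3 task (from [4.17])
TASK_CO2_KG = 684              # carbon emissions per task (from [4.17])

AVG_HOME_KWH_PER_MONTH = 900   # assumed average U.S. household usage
CO2_KG_PER_GALLON_GAS = 8.9    # approximate emissions from burning one gallon
TANK_GALLONS = 15              # assumed average automobile tank size

months_of_home_power = TASK_ENERGY_KWH / AVG_HOME_KWH_PER_MONTH
gallons = TASK_CO2_KG / CO2_KG_PER_GALLON_GAS
tanks = gallons / TANK_GALLONS

print(f"One task = {months_of_home_power:.1f} months of home electricity")
print(f"One task = {gallons:.0f} gallons of gasoline ({tanks:.1f} tanks)")
```

The numbers come out to roughly 2 months of household electricity and about 77 gallons (just over 5 tanks), which matches the figures quoted in the analysis.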
One estimate of power consumption, based on the number of GPUs sold, calculates that AI datacenter GPUs used more power than 1.3 million homes in 2023[4.12]. That estimate covers only the GPUs and does not include the other necessary computer hardware and cooling systems, which massively consume resources of their own. For example, the water consumption of cooling systems is being highlighted by people worried about AI sustainability[4.14, 4.15, 4.16].
Water usage can be so bad that localities have sued datacenter operators for using too much of the local water supply[4.15, 4.16]. For example, Google used more than 355 million gallons of water at its Dalles, Oregon datacenter in 2021, about one-quarter of the local supply, which worries locals because Google's water consumption keeps growing and it wants to build two more datacenters there[4.15].
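For scale, here is a rough sketch tying together the two estimates above: the per-GPU annual energy figure from the Tom's Hardware headline[4.12] and the reported Dalles water numbers[4.15]. The average household usage figure is my assumption for comparison.

```python
# Scale checks for the two estimates above. The per-GPU annual energy
# figure comes from the Tom's Hardware headline [4.12]; the average
# household usage is an assumed figure for comparison.

GPU_MWH_PER_YEAR = 3.7          # per modern AI GPU, per [4.12]
HOME_MWH_PER_YEAR = 10.5        # assumed average U.S. household usage
HOMES_EQUIVALENT = 1_300_000    # the 2023 estimate cited above

implied_gpus = HOMES_EQUIVALENT * HOME_MWH_PER_YEAR / GPU_MWH_PER_YEAR
print(f"Implied AI GPUs sold in 2023: about {implied_gpus / 1e6:.1f} million")

# Water: 355 million gallons was reported as roughly one quarter of
# The Dalles' supply, which implies the totals below.
google_gallons_per_year = 355_000_000
implied_city_total = google_gallons_per_year / 0.25
print(f"Implied total city water use: about {implied_city_total / 1e9:.1f} billion gallons/year")
print(f"Google's share per day: about {google_gallons_per_year / 365 / 1e6:.1f} million gallons")
```

In other words, the 1.3-million-homes estimate implies a few million AI GPUs sold in a single year, and a single datacenter can drink about a million gallons of a town's water every day.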
The brute-force methodology of training and running AI systems and storing their huge amounts of data appears to be a death spiral for sensible and climate-friendly economics. Couple the environmental harm with the immoral, sin-related problems, as well as the other ethical and human-skill-replacement issues AI brings, and unrestricted AI is tech that is extremely bad for society. People are taking the world even further into corruption and affliction with their overemphasis and improper uses of AI, and because it’s in the hands of anyone who wants it now, Pandora’s box cannot be closed and its dark consequences will be unavoidable.
How can computer engineers, some of the smartest people in the world, not see how idiotically their AI systems are engineered? Speaking as a lifetime computer systems engineer, I definitely would not program and design AI systems like this. It really shows how far God's design of real intelligence surpasses anything mankind can do. Even a child's mind can reason and learn better without having to digest trillions of pieces of data.
AI, a dream of tech and sci-fi lovers, has unfortunately become yet another example of how mankind constantly invents new ways to do evil (Romans 1:30). May you understand how AI tech can affect you and our communities, and think twice before embracing much-touted AI features and AI companies.
References
[4.1] Michael Spencer. "OpenAI Considers Even Bigger Plans". AI Supremacy Email Newsletter. 2024 Sep. 12.
[4.2] Michael Spencer. "Apple's iPhone 16 Shows Apple Intelligence is Late, Unfinished & Clumsy". AI Supremacy Email Newsletter. 2024 Sep. 11.
[4.3] Hanna Ziady. "Elon Musk threatens to ban iPhones and Macs at his companies". CNN. 2024 Jun. 11. Retrieved 2024 Sep. 12.
<https://www.cnn.com/2024/06/11/tech/elon-musk-apple-ban-openai/index.html>
[4.4] Michael Spencer. "OpenAI an Insecure AI Lab?". AI Supremacy Email Newsletter. 2024 Jul. 9.
[4.5] "Manage Recall". Microsoft. 2024 Jun. 19. Retrieved 2024 Sep. 12.
<https://learn.microsoft.com/en-us/windows/client-management/manage-recall>
[4.6] Brian Calvert. "AI already uses as much energy as a small country. It’s only the beginning.". Vox. 2024 Mar. 28. Retrieved 2024 Sep. 12.
<https://www.vox.com/climate/2024/3/28/24111721/climate-ai-tech-energy-demand-rising>
[4.7] Beth Kindig. "AI Power Consumption: Rapidly Becoming Mission-Critical". Forbes. 2024 Jun. 24. Retrieved 2024 Sep. 12.
<https://www.forbes.com/sites/bethkindig/2024/06/20/ai-power-consumption-rapidly-becoming-mission-critical>
[4.8] Caleb Naysmith. "xAI’s Colossus Goes Live as "The Most Powerful AI Training System in the World," Says Musk, But What Does This Mean for Tesla?". Nasdaq. 2024 Sep. 11. Retrieved 2024 Sep. 12.
<https://www.nasdaq.com/articles/xais-colossus-goes-live-most-powerful-ai-training-system-world-says-musk-what-does-mean>
[4.9] Agam Shah. "Meta’s Zuckerberg Puts Its AI Future in the Hands of 600,000 GPUs". HPC Wire. 2024 Jan. 25. Retrieved 2024 Sep. 12.
<https://www.hpcwire.com/2024/01/25/metas-zuckerberg-puts-its-ai-future-in-the-hands-of-600000-gpus>
[4.10] Jijo Malayii. "OpenAI receives the world’s most powerful AI GPU from Nvidia CEO". Interesting Engineering. 2024 Apr. 25. Retrieved 2024 Sep. 12.
<https://interestingengineering.com/innovation/nvidia-ai-gpu-openai>
[4.11] "Oracle to offer 131,072 Nvidia Blackwell GPUs via its cloud". Network World. 2024 Sep. 12. Retrieved 2024 Sep. 12.
<https://www.networkworld.com/article/3517597/oracle-to-offer-131072-nvidia-blackwell-gpus-via-its-cloud.html>
[4.12] Jowi Morales. "A single modern AI GPU consumes up to 3.7 MWh of power per year — GPUs sold last year alone consumed more power than 1.3 million homes". Tom's Hardware. 2024 Jun. 14. Retrieved 2024 Sep. 12.
[4.13] Michael Spencer. "AI is fueling a data center boom". AI Supremacy Email Newsletter. 2024 Oct. 9.
[4.14] Michael Spencer, Aysu Kececi. "AI Data Center Boom and Renaissance of Sustainability Tech -The Environmental Cost: Water Usage". AI Supremacy Email Newsletter. 2024 Nov. 6.
[4.17] "Rising Energy Costs of the AI Infrastructure Race will become a Major Problem". AI Supremacy Email Newsletter. 2024 Dec. 25.