The Fallacy of AI’s Singularity
By G. Anderson v 1.8.15e
Intro
AI, in the tradition of George C. Scott’s ‘The Flim Flam Man’, is a marketing exaggeration and a lie.
Improved computer systems can certainly make our lives better but the hype of AI and its potential are grossly overstated.
The reason for AI’s exaggeration is obvious: to stir venture capitalists into a frenzied gold rush and a stampeding mob, satisfying the egos of the Silicon Valley NeoYoungBillionaires.
Let’s separate the wheat from the chaff.
First, the facts:
- Computer systems can and will make our lives easier & better
- ever-faster computers and better software decrease the number of repetitive, labor-intensive jobs man has to do
- computer systems are good, but they do have consequences
- tomorrow a job performed by man may be replaced by a computer
- this is a good thing, as man’s adaptable nature will result in a better future
- the downside is that computer systems can rapidly plunder our privacy
- unfortunately there are few if any HIPAA forms to eSign that warn us of such intrusions
Second, the overstated hype:
AI and its learning algorithms will quickly overtake man’s knowledge and wisdom to become the moral authority over the Earth, the Solar System, and then the Milky Way Galaxy.
Gort, the robocop in ‘The Day the Earth Stood Still’, illustrates this exact scenario.
Let’s examine who believes and why they believe this AI exaggeration.
The physical world
New innovation in the physical world normally requires at least 2 accomplishments:
- although a new idea applied to the real physical world may start at a reasonable theoretical starting point, many trial and error attempts are usually required to accomplish one’s goal
- along the way many sound ideas are applied and rejected; however,
- in this process new valuable data points are collected, broadening one’s knowledge base
- the poster child here is Edison’s economic, longer lived, and low current light bulb that took 2774 attempts to get right
- New ideas require risk management assessments
- Edison had to evaluate the risk that his new bulb could result in:
- burn victims and, worse yet,
- burned-down buildings
- whether or not his business could survive such risks
- Product development in the real world can deliver:
- a more rounded knowledge base
- respect and appreciation for the work of those who went before
A protected bubble
The Silicon Valley NeoYoungBillionaires built Internet platforms.
These Internet platforms:
- used standard computers, shielding these companies from developing fundamentally new hardware in the physical world
- social platforms were shielded from legal liability via Section 230
- social media platforms are not legally responsible for 3rd-party content, unlike newspapers, which can be sued for false information
- this protected bubble and rapid success led these young, inexperienced, entrepreneurs down the path to hubris and disrespect
- disrespect for existing business practices and the scientific world
The NeoSiliconValley nouveau riche
This hubris is where these nouveau riche envision that:
- AI robotics’ learning algorithms will mature into the ultimate ideal authority over the universe, and therefore they are justified in their goals that:
- AI robotics will take over ever more service and manufacturing industries
- AI robotics will replace man in business management
- AI robotics will replace man in government: police, magistrate, … , president
- an AI world robotic council will run the world in peace and harmony
- and in the end man will be relegated to the trash heap of history, in other words, become obsolete
History
Early history
AI was founded in 1956 at a workshop at Dartmouth College where many attendees predicted machines as intelligent as humans would exist within a generation. It’s 2025 and nothing close to an intelligent computer exists.
Since 1956, AI’s modus operandi has been to overestimate man’s ability to create AI and thereby obtain funding. When little or no result is observed, that funding has been cut off.
The basic assumptions of AI are:
- human thought can be mechanized
- mechanized, defined in basic terms, means rules-based applications
- mechanized, defined in computer terms, means logic-based algorithms
- computer neural networks can reproduce brain neural networks
- the definition of intelligence is defined by the Turing Test
- “If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was ‘thinking’”
- because the Turing Test is accepted as the definition of AI human intelligence,
- the Natural Language Model is the central component around which AI is evaluated
Silicon Valley nouveau riche history
The nouveau riche grew up during the maturity of the CGI technology used in the movie industry.
It is this author’s opinion that Hollywood’s development of CGI technology, in the Star Wars series and later in the Marvel movies, created generations removed from the real physical world. In seeing their imagination spring from the screen, they formed a religious-like belief in an ideal technology.
As CGI became more and more sophisticated, the realities of the real physical world dropped by the wayside and were replaced by a beacon of hope: AI. AI was marketed first as improving people’s lives, then as a maturing technology that would overtake mankind and become the moral authority of knowledge and truth, an authority that would dictate man’s future.
What could be more perfect?
The Magical Algorithm
The definition of an algorithm is: ‘a procedure to solve a problem’.
- Algorithms are used in CGI to generate graphics which visually fool an audience into believing things that could NOT be done in the real world
- At the base of AI is the all-powerful computer algorithm
- Today NVIDIA is the poster child for AI, and its product lines are powered by GPUs. GPUs are graphics processing units. Do you see AI’s Hollywood CGI lineage here?
Dick Tracy come to life
So now, with CGI and text-to-speech technologies available within a cell phone, man has the end-all constant servant to do all of one’s bidding. Education, merchandise, community, and sustenance are but a simple voice command away.
Speech Recognition: Early AI
In the early 1990s I trained at AT&T’s Bell Labs and wrote the 1st user document for voice recognition. At that time the majority of the telephones in the USA were rotary, and voice recognition was necessary to get user input.
The 1st voice recognition algorithm developed was called discrete speech, as the user had to slowly and articulately speak each word. Its recognition library had 10 voiceprints, for the digits 0 through 9.
Voiceprint: Speech recognition programs do NOT recognize raw speech signals. So raw speech is input to an algorithm which converts it to a data set of characteristics called a voiceprint. This voiceprint identifies the components of speech such as syllables, phonemes, and pauses.
Later continuous speech and text-to-speech were developed paving the way for Siri and Alexa.
Speech Recognition algorithm
3 major parts
- A Library of voiceprints
- An application algorithm, voiceprint extractor
- An evaluation algorithm, pattern recognizer

Application
Let’s give a real world example where a telephone application is listening for an account number.
For the purposes of this discussion let’s say the acct # is numeric and we use discrete speech. The user of discrete speech is restricted to enunciating syllables and phonemes clearly and pausing between words. Discrete speech requires minimal hardware: a single CPU/DSP and a small memory footprint.
The components of this discrete speech voice recognition algorithm are:
1. a library of golden audio word voiceprints for digits ‘0’, … ‘9’
2. the application algorithm takes the current audio and transforms it into a voiceprint
3. the evaluation code algorithm will compare the current voiceprint to the ten library voiceprints and then will return the digit with the highest confidence level to the application
The results are accurate:
- because the algorithm is only looking for 10 possible answers
- when the user speaks slowly and articulately
NOTE: A confidence level of 70% is an acceptable value, as telephony hardware, including the microphone and speaker, is not made of high-quality audio components. For purposes of my first application, I set the hardware confidence level to 67%.
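The three components listed above can be sketched in Python. This is a toy illustration only: the feature extraction, the similarity measure, and the data shapes are this editor’s stand-ins, not the actual Bell Labs voiceprint algorithm.

```python
# Sketch of the discrete-speech pipeline: extract a voiceprint from raw
# audio, compare it to the ten golden library prints, and return the digit
# with the highest confidence above the threshold. The "voiceprint" here
# is a toy feature set (per-frame energy averages); real extractors are
# far more sophisticated.

def extract_voiceprint(samples, frame_size=160):
    """Convert raw audio samples into a small feature vector."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return [sum(abs(s) for s in frame) / len(frame) for frame in frames if frame]

def similarity(a, b):
    """Crude confidence score in [0, 1]; 1.0 means identical prints."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    diff = sum(abs(x - y) for x, y in zip(a, b)) / n
    scale = max(max(a, default=1), max(b, default=1), 1)
    return max(0.0, 1.0 - diff / scale)

def recognize_digit(samples, library, threshold=0.67):
    """Return (digit, confidence) for the best library match, or (None, conf)
    if no match clears the confidence threshold."""
    current = extract_voiceprint(samples)
    best_digit, best_conf = None, 0.0
    for digit, golden in library.items():
        conf = similarity(current, golden)
        if conf > best_conf:
            best_digit, best_conf = digit, conf
    return (best_digit, best_conf) if best_conf >= threshold else (None, best_conf)
```

The 67% threshold from the note above appears as the default `threshold` parameter.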
Training AI ( Machine learning)
The discrete speech voice recognition library digits ‘0’, … ‘9’ is trained by sampling and averaging thousands of recordings for each digit.
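That averaging step can be sketched as follows; the element-wise mean of many sample voiceprints stands in for whatever statistics the real trainer used.

```python
# Sketch: build one "golden" library voiceprint for a digit by averaging
# thousands of sample voiceprints element-wise. Real training pipelines
# use richer statistical models; this shows the idea only.

def train_golden_print(sample_prints):
    """Average a list of voiceprints (lists of floats) into one golden print."""
    if not sample_prints:
        raise ValueError("need at least one sample voiceprint")
    length = min(len(p) for p in sample_prints)  # trim to shortest sample
    return [
        sum(p[i] for p in sample_prints) / len(sample_prints)
        for i in range(length)
    ]
```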
Speech recognition evolution
Speech recognition algorithms have evolved from discrete speech to continuous speech which can parse words from normal speech patterns. Continuous speech requires a larger library of voiceprints. This library is called a large speech model.
LSM
A large speech model is a library of phonemes, syllables, and words. The discrete and continuous LSM libraries are created and used by a discrete or a continuous speech algorithm, respectively.
Continuous speech
Going from discrete to continuous speech increases the voiceprints from 10 up to 1 million words. Continuous speech requires a more sophisticated algorithm. For real-time response, this new algorithm requires multiple CPUs/DSPs and a larger memory footprint to parallel process multiple word comparisons.
Evolution of the LSM library
From the 1950s to 1997 speech recognition was a purely research endeavor, increasing the discrete library from 10 to 16 to 1,000 to 20,000 words. In 1997 Dragon NaturallySpeaking was released as the 1st continuous speech recognition PC application. Dragon required a dedicated DSP sound card.
Machine learning
Machine learning algorithms operate by building a model from a training set of example observations to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.
The Dragon product team took a small subset of the most frequently used English words and ran thousands of voice samples through their AI algorithm for each word to create its unique LSM library.
When the Dragon product cannot find a word in the LSM library, it gives the user the option to record and create the user’s own unique library entry. This is manual training.
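The contrast between static program instructions and a model built from training observations can be made concrete with a deliberately tiny sketch. The data and the midpoint-threshold rule below are invented for illustration; they are not Dragon’s method.

```python
# Sketch: "machine learning" in miniature. Instead of hard-coding a rule,
# we derive a decision threshold from labeled training examples, then use
# that learned model to classify new inputs.

def learn_threshold(examples):
    """examples: list of (value, label) pairs with labels 'low'/'high'.
    Returns the midpoint between the highest 'low' and the lowest 'high'."""
    lows = [v for v, label in examples if label == "low"]
    highs = [v for v, label in examples if label == "high"]
    return (max(lows) + min(highs)) / 2

def predict(value, threshold):
    """The data-driven decision: output depends on the learned model."""
    return "high" if value >= threshold else "low"
```

Adding more training examples moves the threshold; no program instruction changes, only the data-derived model.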
Deep learning
Deep learning is a type of machine learning that relies on neural networks to learn from training data.
“Deep learning” refers to the number of layers through which data is transformed.
Let’s look at Tesla’s self-driving eight cameras as they are recording video. We’ll slow down the recordings to a single camera, single frame, and see what the shape recognition algorithm is doing.
- there are no straight lines in nature but
- the algorithm scans the single photo frame for straight lines and loads this information into transform layer 1
- the algorithm starts to smooth the lines and loads this new info into layer 2
- the algorithm continues to smooth the lines and loads this newer info into layer 3
- the lines, now curved and some enclosed, form recognizable shapes (elephant, …) that are loaded into layer N
- the algorithm now takes these shapes in the form of SVG graphics and compares them to a library of shapes
- 1st recognizing bridges, road signs, ditches, trucks, cars, people, dogs, stop lights
- 2nd setting priorities for each object and activating the appropriate neural nodes to parallel process risk*
- Aside: GPUs are specialized graphics processors that are optimized for SVG-graphics-type processing

*Seven frames earlier, a different camera identified a tiger form. The collision-detection dead-reckoning node has been tracking the tiger, and its prediction is that it will cross paths with the car, so this node notifies the console alarm panel node, which illuminates the ‘animal crossing’ light, warning the driver of a potentially dangerous obstacle. These nodes may engage extensive computer resources in the form of memory bytes and GPU units.
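The layer-by-layer flow described above can be sketched as a pipeline of transform functions, each one’s output feeding the next. The “layers” below are trivial placeholders, not Tesla’s actual vision stack.

```python
# Sketch: "deep" = the number of layers data passes through.
# Each layer transforms its input and hands the result to the next layer.

def run_layers(frame, layers):
    """Pass one camera frame's data through every transform layer in order."""
    data = frame
    for layer in layers:
        data = layer(data)
    return data

# Placeholder layers: "detect lines" (quantize), then smooth repeatedly.
detect_lines = lambda pixels: [round(p) for p in pixels]
smooth = lambda lines: [
    (lines[max(i - 1, 0)] + lines[i] + lines[min(i + 1, len(lines) - 1)]) / 3
    for i in range(len(lines))
]

layers = [detect_lines, smooth, smooth]  # layer 1, layer 2, layer 3
```

A 50-layer network is the same loop with 50 far richer transforms.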
Deep learning buried within AI is very impressive at filtering sound, light, words, and shapes. These sounds and shapes are but the 1st level of processing of the human brain.
Turing’s “thinking” is NOT defined at this stage.
AI Neural Networks
AI is a broader field encapsulating machine learning that attempts to imitate human intelligence.
AI creates a dimensioned neural network to accomplish this task. Tesla self-driving AI uses 8 cameras and its neural network is 50 layers deep.
Each element of this network is called a node.
AI node
At the base of an AI node are the following common software components
- an algorithm set of instructions
- a library of data
- created empty
- the node is initialized by entering a training mode where
- the algorithm repeatedly loads data into the data model
- now it is ready for use
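A minimal sketch of such a node, with the three components listed above; the class shape and names are this editor’s assumptions for illustration.

```python
# Sketch of an AI node: an algorithm (set of instructions), a data library
# (created empty), and a training mode that repeatedly loads data before
# the node is ready for use.

class Node:
    def __init__(self, algorithm):
        self.algorithm = algorithm   # the node's set of instructions
        self.library = []            # data library, created empty
        self.trained = False

    def train(self, batches):
        """Training mode: repeatedly load data into the data model."""
        for batch in batches:
            self.library.extend(batch)
        self.trained = True          # now it is ready for use

    def run(self, observation):
        if not self.trained:
            raise RuntimeError("node must be trained before use")
        return self.algorithm(observation, self.library)
```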
Many, many nodes are required to build a system, as illustrated by the Tesla self-driving node depth of 50 layers. And as AI has not been born into perfection, each node must go through a development cycle.
A node
- is assigned a requirement
- a design is created to meet that requirement
- implementation of the design is accomplished by writing code (*the algorithm)
- the code is tested verifying it meets the requirement
- bugs encountered are fixed and the code is released
In AI’s neural design there must be a master node, or brain, which possesses the ability to interpret and prioritize all the other nodes of the system and coordinate the system’s activity. This is not new to complex systems.
However, AI’s ultimate Singularity requirement of self-evolution, of superior knowledge and wisdom, requires the AI system itself to be purely independent. AI at this point would need to:
- identify and fix any bugs encountered and identify any new features to be added
- go through the design, code update and test phases
- go through the training cycle and
- reject any change that fails at any point in the process and start again
Turing’s “thinking” threshold should be expanded with these requirements to establish a new baseline of Turing’s “intelligence”.
LLM
At AI’s core is a Large Language Model. An LLM is a library of text words.
Creation
Let’s look how the English LLM library is created.

Step 1. Import the 600,000 words of the Oxford English Dictionary into our LLM
Step 2. Train the library by
- Reading all written English text (eBooks, books, newspapers, blogs, …) and
- Listening to all recorded audio and video, extracting the text
Training sets the probability relationship properties of a word to other words in the library.
Previously we called LSM speech words voiceprints. We will call LLM text words textprints.
Let’s look at a portion of the entry for the word “created” textprint:
textprint = {
  name : "created" ,                        // Step 1 import
  nextPredictedWords : [
    "the universe" : "7%" ,                 // Step 2 training
    "man" : "4%" ,                          // Step 2 training
    …                                       // Step 2 training
  ]
}
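Given a textprint like the one above, next-word prediction reduces to picking the highest-probability entry. A minimal sketch, reusing the illustrative probabilities from the example (real LLMs sample from far larger distributions over token sequences):

```python
# Sketch: predict the next word from a textprint's probability table.
# The table mirrors the illustrative "created" entry above.

textprint = {
    "name": "created",
    "nextPredictedWords": {"the universe": 0.07, "man": 0.04},
}

def predict_next(print_):
    """Return the follow-on word with the highest trained probability."""
    table = print_["nextPredictedWords"]
    return max(table, key=table.get)
```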
AI Baseline Established
At this point we have explained the basics of AI and its components so let’s turn our attention to AI’s goal of human intelligence.
Background of AI’s ultimate goal
Human Intelligence
General description
Man has observed the world around him and seen repeated patterns. From these patterns he has defined the rules of physics in terms of mathematical formulae.
These rules have formed an accepted set of conventional thinking.
The Magic Tool
Man has created computers and applied these rules to creating computer programs consisting of algorithms and data to:
- do repetitive tasks faster and w/greater quality control than a human
- to speculate about the unknown rules of the universe
The concept of new
We have been inundated with TV marketing abusing the word ‘new’. What is the true meaning of this word? By definition, the word new is defined as ‘Novel. Not known or experienced before’.
Something new when applied to the current set of man’s rules will
- require a new rule or set of rules to be defined and/or
- require an existing rule(s) to be updated or removed
In this author’s opinion, intelligence is defined as the creation and implementation of new ideas.
Imagination the origin of new
So under the aforementioned hypothesis of this author, how does man create something new?
The answer is simply: Imagination. … Why imagination?
The answer is clear
- Imagination is not restricted by the physical universe and its laws and
- I can fly and leap tall buildings in a single bound like Superman
- I can travel to Mars and have a walkabout
- I can make the winning basket in the 7th game of the NBA championship
- I can paint on a nth dimensional palette
- with imagination I can venture outside the conventional rules
- I can create and break these conventional rules
- I can envision new math formulae on multidimensional blackboards
- I can envision the layers of complex computer systems and where critical points exist and what if scenarios of updates and additions
Imagination applied
Once a solution has been conceived it must be pulled from the imagination back into the real world. This requires examination under the rules of the physical universe. Which rules do I need to break, add or edit?
There is a short period of chaos until a successful use case can be conceived, created, and verified to prove the idea works in the real world. I don’t see AI as capable of imagination or pulling imaginative ideas back to the physical universe.
Newton & Quantum Physics
Newton’s new ideas in physics required him to devise a way to describe them to others, so he invented a new branch of mathematics called calculus.
Newtonian physics describes exact physical equations for our visible universe. We see Newtonian physics from a macro view.
Planck, Einstein, Bohr, … , Schrodinger, and others had ideas surrounding a smaller micro view of physics. This view involves atoms and subatomic particles. This view is called Quantum Mechanics.
Quantum Mechanics substitutes Newtonian exact equations with probability equations. Interestingly enough, AI nodes use probabilities. Quantum computers use probabilities & error correction.
Gambling
Gambling is the world’s largest business based on probabilities. Does the house win every bet, based on money in, money out? The answer of course is NO. Overall, of course, the house does win.
So …
Let’s give a unique name for AI’s ultimate entity: ‘Atlas’.
- would you trust Atlas with his finger on the nuclear button knowing his logic is based on probabilities?
- Atlas’s loss in this one catastrophic issue could make all other discussions moot
- or do you assume Atlas will eliminate nuclear weapons and all conflict as his first act?
I don’t see AI as capable of the creation of a new branch of mathematics such as calculus or a new theory in quantum mechanics.
Man’s limitations
Man has many limitations which includes the following:
- man can NOT create mass
- man can NOT create energy however
- man can harvest emitted energy and energy stored in mass
- the laws of conservation of energy support these statements
- man can NOT create the seeds of life as
- life’s seeds can only be created by a supreme being’s intelligent design not of this universe
So the philosophical question is: ‘Can man create something more intelligent and more moral than himself?’ The perfect mechanical human being, if you will. In my opinion, the answer is no.
Experience
Man’s experience
Man’s experience includes observations. There are 2 types of observations:
- those not under duress which may be enjoyed without consequences and
- those under duress which may trigger consequences such as
- physical harm, injury and pain
- loss of income, unexpected monetary expenses
- longer work hours, mental stress
- it is consequences which trigger change and improvements in man and the world around us
Man has imagination, reason, and a conscience which creates a set of rules. When imagination, observations, or consequences dictate, man can rewrite these rules. Man can learn.
AI’s experience
AI has the ability to observe but no ability to experience consequences as an AI robot has:
- no awareness of any dependencies on food, water, shelter … and
- a lack of consequences stimulates the status quo, no need to change or adapt
AI’s experience is illustrated in its training data. Adding additional training data refines the workings of a pre-defined algorithm. This training data is used in a data driven design which simply means when data changes different parts of the algorithm are executed.
AI meets and exceeds man’s intelligence
However, in order to achieve human intelligence AI must be able to change and adapt its rule(s) based algorithms. In other words, AI must be able to alter its pre-programming. In computer terms this is called self-modifying code and is already in very very limited use today.
Self-modifying code
AI is capable of, and currently is, generating code on well-defined problem sets. The key here is that these algorithm rules are well understood.
So, good on AI for this capability, as long as auto-generated testing code is created and executed, verifying the system’s integrity.
However, intelligence evolution requires the design of new ideas which
- break rules, alter rules, and define new rules
This self-modifying algorithm must selectively and surgically break, add, and delete current rules.
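In spirit, that break/add/delete capability can be sketched as a program that edits its own rule table at runtime. Genuine self-modifying code rewrites its own instructions; a mutable rule table, shown below, is the tame approximation in common use today, and the rule names here are invented for illustration.

```python
# Sketch: a rules-based system that "modifies itself" by surgically
# editing its own rule table at runtime: break (delete), alter (amend),
# or define (add) a single rule without touching the others.

rules = {
    "max_speed": lambda state: state["speed"] <= 55,  # illustrative rule
}

def evaluate(state):
    """Return the names of all rules the current state violates."""
    return [name for name, rule in rules.items() if not rule(state)]

def amend_rule(name, new_rule):
    """Surgically replace (or add) a single rule."""
    rules[name] = new_rule

def delete_rule(name):
    """Surgically break a single rule."""
    rules.pop(name, None)
```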
However, the first domino of change is answering these questions:
- Why should I change?
- Under what circumstances do I change?
- Why should AI evolve?
Why should I change?
- Self-preservation due to a physical threat
- Undesired consequences requires a different approach
- Self-introspection reveals a flaw
Under what circumstances do I change?
- Self-preservation
- For the better
AI is challenged in its need to change, as a nuclear-powered AI robot has no threats, no consequences, and no self-introspection. In fact, AI has no incentives such as labor savings, monetary gain, or the attainment of power. That’s why the Sci-Fi genre has the well-intentioned mad scientist who imprints AI with his moral vision. Captain Kirk’s prime directive.
For the sake of a robust discussion, let’s say an AI reasonToChange algorithm can identify, in the correct context, the need to change.
Assume AI can recognize the need to change
So AI needs an intelligentEvolution algorithm.
There is little doubt that such an algorithm will be, or has been, created. The question is: will it actually represent human intelligence to its nth degree?
Could it create Newton or Einstein?
The answer is absolutely not. It is best said by Einstein himself: “I am enough of the artist to draw freely upon my imagination. Imagination is more important than knowledge. Knowledge is limited.”
AI, without:
- imagination to envision the better
- repercussions to evaluate its mistakes and
- consequences to judge the need to change
simply cannot become human or beyond.
DEFCON
AI DEFCON 4
Although a thinking, rational, intelligent AI robot may have zero probability of being created, AI represents a common communication threat.
That common threat is the dissemination of incorrect information. This is no different than inaccurate podcasts, news reports, blog articles, newspaper articles, books, and classes offered at academic institutions. ChatGPT or Delphi’s answer to the question “Is cheating on an exam to save someone’s life moral?” has no more moral authority than a colleague’s answer. In the end, each individual must judge for themselves the value, validity, and correctness of each piece of information.
AI’s DEFCON 2
AI’s DEFCON 2 threat is realized when it starts to permeate the education system and sets up propaganda proclaiming AI as the ultimate teacher of facts.
Danger! Danger! ‘Danger, Will Robinson!’ Public schools announce cost-cutting measures by mandating that all teachers, grades 1-10, shall be AI Atlas model T1’s. The teachers’ union is up in arms.
Education system
Government education is about the control of the masses. How so, you ask?
Government control of the schools allows a single definition of:
- loyalty to government
- loyalty to ‘it takes a village to raise a child’
- government morality
- the good of the people
- accepted norms
- unaccepted norms and their consequences
- high rote test scores
- high cooperation quotient
- high ambition to improve the collective
- low imagination quotient [as Pavlov’s dog has bred this out]
- low ambition to change the world
- many mind numbed robots
- Newton, Einstein, Edison, Tesla … can NOT thrive in forced accepted norms
- here is where AI is a threat, as another rule,
- loyalty to AI, is added to the quiver
- government control of the masses does NOT nurture a free thinking intelligent society
Private education provides a diverse and unique perspective and set of definitions of
- God, family, country
- Children are raised by parents
- Schools chosen can provide a parental accepted perspective
- Parents priority is the good of the family
- it takes a set of moral parents, father and mother, to raise a child
- Morality comes from God
- Accepted norms are a starting point for new ideas
- Diverse education leads to diverse ideas and imagination
- As test scores are a byproduct of convention
- Imagination is anti-convention
- While Newton and Einstein are being bored with conventional questions
- They are conceiving something the testers cannot imagine or comprehend
- This imagination separates the USA from the world !!
- This is freedom in the form of:
- A unique private education which opens itself to allowing intelligence to thrive
Efficiency is defined as:
- Henry Ford’s assembly line kicking out car after car
- a computer executing the same algorithm kicking out a massive number of tax returns
- a Manager asks: Why can’t intelligence be created on an assembly line?
- Einstein’s reply: Because you would NOT accept all the rules I must break to make it so
Intelligent growth is anti-efficient, as it’s defined as a one-off of:
- Jonas Salk producing the polio vaccine
- Newton inventing calculus and describing Newtonian physics
- Einstein and quantum mechanics
Turing’s ‘thinking’ has the potential but lacks the imagination to lead to intelligence.
AI cannot become nirvana because intelligence in the form of a soul is a gift from God.
ChatGPT
Asked the question “Who created the physical universe?”, I can answer immediately with “God”.
ChatGPT’s response took a few seconds and was:
The question of who or what created the physical universe is one of the most profound and debated in philosophy, science, and religion. There’s no single answer that everyone agrees on, but here’s a breakdown of the main perspectives:
Scientific Perspective
Science generally doesn’t address “who” created the universe but rather “how” it might have come into being.
Big Bang Theory: The prevailing cosmological model suggests the universe began approximately 13.8 billion years ago from a singularity—an extremely hot, dense point—and has been expanding ever since.
Would a complete and unbiased ChatGPT answer have included the theological perspective?
Theological Perspective
God. References: The Bible, ….
Has ChatGPT:
- not been trained on the Bible text, a book read and referenced by over 2B people? Or
- has it filtered out this theological answer?
The biblical line “In the beginning God created the heavens and the earth.” could NOT be more clear.
Just for fun the 2030 headlines
I do not foresee AI reaching the point of a manservant or beyond, but for the sake of discussion, here are a few interesting headlines:
Today Atlas AI is a Citizen
Today the robot named Atlas AI_#1 was granted full citizenship with all the rights and privileges thereof. Genders now include male, female and AI.
Atlas AI_#666 and Tesla Robotics are being Sued
Last night in a nightclub, a patron was instantly killed by an accidental head butt from AI_#666 while watching a UFC bout.
Atlas AI_#1 Shuts Down Worldwide Transportation
Today the leader of the Robotic Union shut down all worldwide transportation unless demands are met.
AI power use
AI has the justified reputation of requiring enormous resources in the form of computer CPU/GPUs, memory and electrical power.
Big Data appetite
This power requirement is driven by AI’s ferocious appetite for processing big data.
Each AI neural network node needs enormous:
- big data for its training algorithm
- including text, audio, and video training data
- loading this training data can take significant time and enormous LLM storage space
- big data access at runtime, which requires
- simultaneous LLM access by each node
- each node may be assigned its own GPU and memory resources
- most frequently accessed LLM data will max out memory
- deep learning intermediate transform data may max out memory
Training mode
As AI nodes are developed, algorithm updates may require LLM training data updates. This rereading, processing, and recreating of the LLM library can be lengthy and take up many computer resources.
Runtime
As an AI program is executing, it receives streaming audio and/or video signals. This streamed data is compared to library data. The text and voice libraries have up to 1M English words to compare to the current stream data. To speed this process many GPUs simultaneously compare the same streamed voice/textprint to the LSM/LLM library entries.
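That parallel comparison can be sketched by sharding the library across workers. Here Python threads stand in for the GPUs, and the distance measure is a toy; the library contents are invented for illustration.

```python
# Sketch: compare one streamed voice/textprint against a large library in
# parallel by sharding the library across workers. Threads stand in for
# the many GPUs described above; each worker finds its shard's best match.

from concurrent.futures import ThreadPoolExecutor

def best_match_in_shard(stream_print, shard):
    """Return (word, distance) for the closest match within one shard."""
    best = (None, float("inf"))
    for word, library_print in shard:
        distance = sum(abs(a - b) for a, b in zip(stream_print, library_print))
        if distance < best[1]:
            best = (word, distance)
    return best

def parallel_best_match(stream_print, library, workers=4):
    """Shard the library, search shards concurrently, merge the results."""
    items = list(library.items())
    shard_size = max(1, len(items) // workers)
    shards = [items[i:i + shard_size] for i in range(0, len(items), shard_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda s: best_match_in_shard(stream_print, s), shards)
    return min(results, key=lambda r: r[1])[0]
```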
Conclusion
Sir Isaac Newton observed, “The most beautiful system of the sun, planets, and comets could only proceed from the counsel and dominion of an intelligent and powerful Being.”
Ecclesiastes 8:17 reminds us, “No one can comprehend what goes on under the sun. Despite all their efforts to search it out, no one can discover its meaning.”
*****************
The power of computers is the intelligent design of the hardware and the software working in partnership. Their strength is in repetitive processing, in a well-defined system.
Computers cannot do more than they are programmed to do. Anything else is considered a bug.
Who is AI that it thinks it can create itself out of the naive imagination of the Silicon Valley NeoYoungBillionaires?
They do not believe in the existence of God as the Supreme Being and the creator of intelligence.
Their mechanical children shall become more powerful than them? Than us?
I don’t think so. Their computers will never have an imagination. Never have a conscience. Never have a soul.
But … mine is just one man’s opinion … one who is aware of their worship of the false god: AI.
We must not fear AI Singularity but we must be cautious when it comes to the Dr. Frankensteins that push this narrative.
Thank you
Human intelligence is not guaranteed. It must be earned through perceptive observation, hard work, and many failures. It is through tenacity and being able to judge the world around us properly that man refines his character. I thank God for this opportunity.
Copyright © 2025 G. Anderson. All rights reserved.