Most humans have not been taught logical thinking, but most humans are still intelligent. Contrary to the majority view, it is implausible that the brain should be based on Logic; I believe intelligence emerges from millions of nested micro-intuitions, and that Artificial Intelligence requires Artificial Intuition. Intuition is surprisingly easy to implement in computers.
Intuition and Logic
Intuition and Logic are two strategies for prediction and problem solving.
We hear so much about the virtues of logic that we could be excused for believing that logic is somehow the superior method, but a quick analysis shows that most actions we perform on a daily basis mainly use intuition.
Logic is not better, just different. Both strategies have their advantages and apply in different situations. Sometimes we need to use both. Sometimes we can use either one, because the problem is so simple it doesn't much matter how we solve it. Sometimes it matters; if we happen to choose the wrong approach, it may prevent us from solving our problem.
Computer-based intuition - "Artificial Intuition" - is quite straightforward to implement, but requires computers (a recent invention) with a lot of memory (only recently available cheaply enough). These methods were simply unthinkable at the time AI got started, not to mention at the time we discovered the power of Logic and the Scientific Method. The tendency to continue down a chosen path may have delayed the discovery of Artificial Intuition by a few years.
Logic is used a lot in the hard sciences, such as Mathematics, Physics, and Chemistry. We can think of most of mathematics, including things like the rules of Algebra, as part of a framework that is anchored in Logic. Physics and related sciences use logical and mathematical models to describe the world. These models are what allows the hard sciences to be so accurately predictive.
Computer hardware is designed based on principles of Boolean Logic, and Logic is also used in programming them. We use Logic for puzzle problems, and we are taught it in school, either formally or informally.
Logical formulas can be manipulated "mechanically", by following syntax-based rules that specify which operations are allowed. In performing these manipulations, no innovation is (in theory) required; innovation would mean using Intuition.
Logical methods have many advantages: They can be used to make long-term predictions, such as predicting planetary orbits years into the future. They can also make high-precision predictions, such as predicting the masses of elementary particles to the fifth decimal place before experiments to establish those masses have been conducted.
Logical methods are productive. Valid Logical theories lead logically to new theories through mechanical manipulation of the formulas. Again, no innovation is (in theory) needed for this process of producing "new knowledge".
Logic and science have an excellent track record. Logical methods have solved innumerable important problems over centuries.
But Logical methods also have their limits.
One of the most surprising limits is that they require a Theory, i.e. a high-level model of the problem domain. This is surprising only because many people cannot see it as a limitation at all, since they believe doing anything without a solid Theory is impossible. But, as we shall see, Intuition requires no Theory or Logic based models, so this is in fact a limitation of Logic.
Logical methods require idealized conditions. Anyone who has opened a textbook on Physics or Mechanics has time and again encountered the phrase "All else being constant...". This is how Physics avoids problems with Systems that require a Holistic Stance, for instance any system that is constantly adapting to an environment that it cannot be separated from.
Logical and scientific models are relatively simple. It is true that some formulas run to multiple pages, but even these are simple compared to the complexity we discover in nature.
But first and foremost, Logic cannot handle Bizarre Systems, and therefore cannot solve many important problems in the life sciences, and cannot handle everyday problems such as Discovery of Semantics, including language.
Intuition is what we use to handle everyday problems such as predicting limb positions and controlling muscle movement, to understand and generate speech, to read, to analyze what we see, to drive a car, etc. In short, all the things we do that we take for granted and which we do "without thinking". Many of these are non-trivial and hard or impossible to do using current computer technology.
Most languages distinguish the quickly gained, more logic-influenced "intelligence" of youth from "wisdom", the growing set of effective intuitions accumulated over a lifetime. Wisdom gives reliable guidance in complex social situations, for instance those involving humans with conflicting goals.
We also use intuition to get new ideas of all kinds, to generate novelty and make innovations. Every sentence we speak is an invention.
Intuition is fast. We make life-and-death decisions in split seconds, when we have to, and we are often correct. This is of course the reason Intuition evolved in the first place — it increases our chances of survival.
Intuition is Theory-free. It does not require a high-level logical model. This neatly solves a bootstrapping problem of Artificial Intelligence. You cannot create high-level models until you already have Intelligence.
This also makes Intuition much more Biologically Plausible than Logic since a considerably larger amount of mechanism would be required before Logic could be used to improve predictions. Intuition-based mechanisms could conceivably evolve in small steps from simpler prediction based mechanisms, with incrementally available benefits every step along the way.
Since there is no high-level Logic based model, there is no model to be confused by the illogical Bizarreness of the world. Artificial Intuition (AN) based systems are immune to all the problems in Bizarre domains, such as constantly changing conditions, paradoxes, ambiguity, and misinformation. This does not mean that sufficient misinformation won't lead such a system to make incorrect predictions, but it does mean that the system does not require all information to be correct in order to operate at all. Intuition is fallible, and occasional misinformation makes failure slightly more likely. The system can keep multiple sets of information active in parallel (some more correct than others) and in the end, more often than not, the information that is most likely to be correct wins. This happens in humans, and will happen in AN based systems.
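One way to picture this parallel bookkeeping is as competing interpretations that accumulate evidence until one wins. The sketch below is purely illustrative; the hypotheses, evidence items, and weights are hypothetical and not taken from any actual AN implementation:

```python
# Hypothetical sketch: competing interpretations scored by evidence.
# Names and weights are illustrative, not from any real AN system.
hypotheses = {"it's a dog": 0.0, "it's a cat": 0.0, "it's a fox": 0.0}

# Each observation nudges some scores. Misleading evidence only
# shifts a score; it never crashes the process, and no single
# piece of information has to be correct for the system to run.
evidence = [("barks", {"it's a dog": 2.0}),
            ("pointed ears", {"it's a cat": 1.0, "it's a fox": 1.0}),
            ("wags tail", {"it's a dog": 1.5})]

for _, support in evidence:
    for hypothesis, weight in support.items():
        hypotheses[hypothesis] += weight

# More often than not, the best-supported interpretation wins.
best = max(hypotheses, key=hypotheses.get)
print(best)  # "it's a dog"
```

Note that all three hypotheses stay active in parallel until the end; the losing ones are simply outvoted by accumulated evidence rather than being ruled out by a logical contradiction.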
Intuition also has severe limitations; some of these mirror the advantages of Logic based systems.
Intuition based systems cannot make long-term predictions, cannot make high-precision predictions, and are not productive. They cannot generate new knowledge by mechanical manipulation of existing theory, since there is no such thing as "theory".
Intuition requires prior experience. Intuitions are acquired by learning, and the benefit of learning what happens in a given situation is only available if you encounter a sufficiently similar situation again. Lacking prior experience with an identical situation, you have to generalize from a previous "precedent" experience in order to guess what the "consequent" event will be. This is an error-prone operation; the ability to generalize correctly is intimately tied to the ability to get to the "semantics" of the situation and is likely the reason we evolved semantic capabilities in the first place.
Intuition based skills improve with practice
Intuition based skills improve with practice, whereas Logic based skills do not. You can use this as a test to determine which of our skills are based on Logic.
Logic is Logic. If we had the opportunity to tell a purely logical being, such as Mr. Spock in Star Trek, about some trick of Mathematics such as L'Hôpital's rule, then he should immediately be able to use it to its full capacity. But we humans need to practice our skills in order to perfect them.
In school, we had to practice arithmetic in order to get it right. A few years ago, it was discovered that certain Pentium processors had a problem with certain arithmetic operations. A nine-year-old would be excused for believing the Pentium needed to practice its arithmetic more, but we understand that this would be absurd.
"... the outstanding intuitionist of our age, and a prime example of what may lie in store for anyone who dares to follow the beat of a different drum." — Julian Schwinger on Richard Feynman
A mathematician in front of a whiteboard covered with formulas will use intuition to select the next substitution, simplification, or experiment to try. Sooner or later, something may well work. They will then verify the validity of what they did using logical methods, and this verified path is the only one shown in published work. This traditional hiding of the intuitive part and the joy of discovery makes Science look boring to outsiders.
How does Intuition work?
Intuition operates at a "level below logic". It is not unscientific or illogical, it is sub-scientific and sub-logical. Intuition operates on events, not theories.
Some of our senses observe events in the world around us and others observe our own bodies to track things like positions of our limbs and our center of balance. Friedrich Hayek has observed that all sensory information is converted to one single kind of nerve signals before reaching the brain. The brain then processes these incoming nerve signals by sending further nerve signals to other parts of the brain.
We can view all nerve signals as events, no matter what their origin or purpose. Memory allows us to track and remember these signaling events.
We can now start remembering which events precede which other events. Sometimes the former are frequent predictors for the latter, and remembering this correlation would be valuable. Intuition is a process that uses this kind of correlation data to make short-term predictions that are correct often enough to improve our survival.
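The brute-force core of this mechanism can be sketched in a few lines. This is a toy illustration of correlation-based event prediction only, not the actual system; real implementations depend on the shortcuts mentioned below:

```python
from collections import defaultdict

class EventPredictor:
    """Brute-force intuition sketch: count which events follow which,
    then predict the most frequent consequent of the current event."""

    def __init__(self):
        # counts[precedent][consequent] = times consequent followed precedent
        self.counts = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def observe(self, event):
        # Remember that `event` followed the previous event.
        if self.previous is not None:
            self.counts[self.previous][event] += 1
        self.previous = event

    def predict(self, event):
        # Best-effort guess: the consequent seen most often after `event`.
        followers = self.counts.get(event)
        if not followers:
            return None  # no prior experience; intuition stays silent
        return max(followers, key=followers.get)

predictor = EventPredictor()
for e in "ababcabab":
    predictor.observe(e)
print(predictor.predict("a"))  # "b" has followed "a" most often
```

No theory or model of "a" and "b" is involved anywhere; the predictor tracks raw event successions, and its guesses are merely correct often enough to be useful.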
Evolution has, over millennia, discovered many elegant shortcuts to this primitive brute-force version. So have I, in six years of exploration. Some of these shortcuts are (or will be) described elsewhere, and others will (currently) be discussed only with collaborators.
Note that Intuition makes no attempt to model causality, or create any kind of high level models or theories. That would be using Logic. Intuition simply tracks events. This means it is immune to all the listed problem types in Bizarre Domains that confuse Logic based systems.
Intuition is Fallible
Intuition is fallible. Intuition attempts to make short term predictions in Bizarre Domains. This is a best-effort process. Intuition can often make useful predictions in spite of ambiguous, incomplete, or misleading information in constantly changing chaotic environments.
Since Logic requires a good high-level theory, and in general also insists on correctness, it is incapable of making any predictions at all in such domains. This is one more indication that Logic is Biologically Implausible, since it is difficult to imagine any kind of logic-based intelligence evolving piecemeal. Intuition, on the other hand, could easily evolve, since even small amounts of memory could provide a significant advantage over competing agents that don't use it to predict their environment.
Neurons are Fallible
Neurons communicate by using electrochemical processes involving diffusion of neurotransmitters across synapse junctions. Neurotransmitter receptors are sensitive to ramping concentrations. These mechanisms have indeterminate delays, and the massive parallelism in the brain will be sensitive to race conditions between parallel paths of signaling. The brain is therefore inherently unreliable at the neuron level.
How can the brain work as well as it does in spite of this inherent fallibility? There is redundancy in the brain, and a delayed signal may often cause the desired effect even if it arrives too late. But I believe there is another effect at work in the brain and in my Artificial Intuition based systems:
Emergent Reliability handles internal errors
Artificial Intuition systems make internal predictions of future events, both external events (sensory input) and internal ones. When these predictions are frustrated, this indicates either a new situation that requires learning, or some problem, internal or external, that may require skepticism and/or corrective action.
AN systems have layers of nested predictors, and each layer attempts to detect problems of this kind. The layers are "shallow" enough to make this process possible in many cases. Add to this the redundancy inherent in distributed representation, which is how the system represents all emergent concepts.
What happens is that a system that has learned a lot, and learned it well, will be skeptical of perplexing internal events at some layer and will often be able to ignore and/or correct the problem.
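One layer of this skepticism can be sketched as a filter that compares an observation against the layer's own prediction. The threshold, the scalar representation of events, and the correct-by-substitution policy are all hypothetical simplifications for illustration:

```python
def layer_filter(expected, observed, tolerance=0.25):
    """Illustrative sketch of one predictor layer: if the observation
    deviates too far from what the layer predicted, flag it as
    perplexing and substitute the prediction (skepticism/correction).
    The scalar encoding and threshold are hypothetical."""
    surprise = abs(observed - expected)
    if surprise > tolerance:
        return expected, True   # ignore/correct the suspect event
    return observed, False      # accept the event as-is

# A well-trained layer expects roughly 1.0; a garbled input of 9.0
# is rejected and replaced with the layer's own prediction.
value, corrected = layer_filter(expected=1.0, observed=9.0)
print(value, corrected)  # 1.0 True
```

Because such filters sit at every layer, the same mechanism that catches internal signaling errors also catches ambiguous or erroneous input data, which is the emergent robustness described next.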
Emergent Robustness handles ambiguous input data
AN systems are fallible "all the way down" to the input layer. This means that ambiguous, incomplete, or erroneous input data can be detected and ignored/corrected with about the same accuracy as internal errors.
Put another way, in systems with emergent reliability the same mechanism that handles internal errors makes the system resilient against errors in the input data, such as ambiguous, missing, or incorrect information.
Hard Symbolic AI and Fallibility
The hard sciences, such as Mathematics and Physics, insist on correctness. Computer Science was born in Mathematics departments at universities worldwide, and Computer Science is therefore a hard science. Programs are expected to be correct and to run as specified. Artificial Intelligence was born in Computer Science departments, and inherited their value sets including Correctness. This mindset, this necessity to be logical, provable, and correct has been a fatal roadblock for Artificial Intelligence since its inception.
The world is Bizarre, and Logic cannot describe it. Artificial Intuition will easily outperform Logic based Artificial Intelligence for almost any problem in a Bizarre problem domain.
From the very beginning, Artificial Intelligence should have been a soft science.