
The Tradeoff
We can trade the Seven Values of Logic Based Science for about a dozen Benefits of Intuition Based Methods. The brain needs none of the former and provides all of the latter, which indicates that Artificial Intuition is a Biologically Plausible theory. The system is designed to encourage all of these benefits at low levels; the most important ones are expected to emerge at higher levels.

For Every Complex Problem...

... there is a solution that is simple, neat, and wrong.
   — H. L. Mencken

The ideas we are talking about are these: that the brain works using Intuition and Prediction, not Logic; that Intelligence is 99% Intuition; that Intuition based methods allow short-term prediction in Bizarre problem domains; that they also allow Discovery of Semantics from mere observation of chains of events, such as those in spatiotemporal sequences; and that artificial systems based on these ideas can learn and understand languages, and partially understand the world, using only text as input.

My rough estimate is that over a million person-years have been spent on AI and closely related topics worldwide.

The ideas discussed on this site sound simple and neat enough. How can this view possibly be correct, considering the amount of research that has gone into Cognitive Science and Artificial Intelligence without producing anything that lives up to the claims I make?

At the root of this conundrum is the misclassification of AI as a hard science. AI research is conducted by Programmers and other Computer Scientists, and as a result, almost all research and funding has been governed by criteria that are appropriate for Mathematics or Physics.

There is a cost to using Intuition based methods; people who judge this cost by the criteria of the Hard Sciences believe it to be very high. Therefore all attempts to use Intuition based methods have been rejected with prejudice. While this is still happening, the cost equation has changed in radical ways with the arrival of much more powerful computers; it is time for more people to notice this.

Understanding Artificial Intuition requires discarding this prejudice. People in the Soft Sciences, such as Biology, Ecology, and Psychology, are more likely to have already adopted this stance. It is likely easier to teach them programming than it is to cross-train a programmer (especially one with significant Logical AI experience) to use Intuition based methods.

The Tradeoff

Scientists and engineers who use mainly Logic based methods are not above compromising one or a few of the values listed below on occasion. But it is hard for a researcher to switch to AN, since they would have to let go of all seven of these principles. My task is therefore to show that they will get something worthwhile in trade.

The seven values, as well as the AN advantages, are listed here without any justification beyond what has already been given for some of them; they will be discussed in detail elsewhere, once sufficient background material has been introduced. I believe this information is thought-provoking and interesting enough to be provided even without justifications.

Values of Logic based Methods

Elsewhere I refer to these as "The Seven Virtues of Reductionist Science". Logic and Reductionism are on the same side in this dichotomy. The extended discussion is out of scope for this introductory site.

Optimality

We strive to get the best possible answer.

Completeness

We strive to get all answers.

Repeatability

We expect to get the same result every time we repeat an experiment under the same conditions.

Timeliness

We expect to get the result in bounded time.

Parsimony

We strive to discover the simplest theory that fully explains the available data.

Transparency

We want to understand how we arrived at the result.

Scrutability

We want to understand the result.

Unachievable in Bizarre Domains

The brain needs none of these seven values. Humans are not optimal, repeatable, etc.

These values are not achievable in Bizarre Domains; best-effort is the best any methodology can do there. Optimality and Completeness are unattainable. Millions of possible solutions may exist, but the search space is so large that we would be happy to find even one of them.

Consider the task of speaking. We can express what we think using thousands of choices of words and sentence constructs. A casual remark might not reach the clarity and wit of what William Shakespeare could have said when expressing a very similar idea, but the casual remark will still convey our idea to the listener. Yet for a computer, finding any grammatical sentence to express a concept is an unsolved problem, mostly because we cannot even represent concepts in a useful way.
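To make the contrast concrete, here is a minimal sketch in Python of the difference between optimal search and best-effort search. The toy domain and the names (best_effort, is_acceptable) are invented for illustration; this is not the AN substrate.

    import random

    def exhaustive_best(candidates, score):
        """Logic-style search: examine every candidate and return the
        optimal one. Cost grows with the size of the search space,
        which is infeasible in Bizarre Domains."""
        return max(candidates, key=score)

    def best_effort(sample, is_acceptable, budget=10_000):
        """Intuition-style search: sample candidates and return the
        first acceptable one, or None. No optimality, completeness, or
        repeatability guarantees; just a useful answer, most of the time."""
        for _ in range(budget):
            candidate = sample()
            if is_acceptable(candidate):
                return candidate
        return None

    # Toy domain: find any 8-letter string containing "ok". Millions of
    # acceptable answers exist; we only need to stumble on one of them.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    print(best_effort(
        sample=lambda: "".join(random.choice(alphabet) for _ in range(8)),
        is_acceptable=lambda s: "ok" in s))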

The seven values of Logic based systems are considered harmful in AN based systems. They turn into seven hobgoblins that constantly tempt the logically minded implementor into being ever so slightly more optimal here or more repeatable there. Giving in is often a fatal mistake; months later you may discover that your little optimization has been blocking some desirable and otherwise naturally emerging effect.

Artificial Intuition Benefits

Letting go of all of the above values is a tall order. Let us examine what we could gain by switching. As we read the list below, we should note that each one of these is essential to the goal of creating an Artificial Intelligence.

Some of these we have seen before. Others are listed without sufficient motivation, because this introduction to Artificial Intuition omits a lot of technical detail for brevity. The details necessary to understand how these are accomplished will be provided elsewhere, after a discussion of the required background information.

The brain provides all of these. An Intuition based algorithm could provide them as well.

Theory-Free Solution Discovery

We gain the ability to operate in opaque domains; we will often discover answers even though we do not fully understand the problem, and we will often reach plausible conclusions even when given incomplete or inconsistent data. In other words, we gain the ability to opportunistically discover valid solutions to problems we could not otherwise solve.

If you are attempting to build an intelligent system from intelligent components, then you are just pushing the problems down one level.

The less intelligence we can get away with, the less intelligence we need to explain as components in the brain. Ideally we want to state that Intelligence emerges from unintelligent components, and the principle of Theory-Free Solution Discovery allows us to do exactly that.

Novelty

Very few attempts have been made in Logic-based Artificial Intelligence to provide Novelty and Innovation. In fact, I think of Novelty as the "elephant in the room" of Logic based AI; nobody talks about it or attempts to implement it, because nobody has any idea how to even approach it. But AN trivially provides useful novelty and innovation. The details will be discussed elsewhere.

Prediction

Prediction is the origin of Intelligence: it is the advantage that provided the evolutionary pressure to evolve higher levels of intelligence, and it is still a fundamental low-level mechanism in brains. Granted, predictions in Bizarre Domains are only short-term and may well fail, but they succeed often enough to be useful.
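As a minimal illustration of prediction as a low-level mechanism, here is a Python sketch that learns successor statistics from an event stream and makes best-effort short-term predictions. The class name and interface are invented; a real AN system would of course be far richer.

    from collections import Counter, defaultdict

    class ShortTermPredictor:
        """Learn P(next event | recent context) from an event stream."""

        def __init__(self, order=2):
            self.order = order
            self.counts = defaultdict(Counter)

        def observe(self, events):
            for i in range(len(events) - self.order):
                context = tuple(events[i:i + self.order])
                self.counts[context][events[i + self.order]] += 1

        def predict(self, context):
            """Best-effort: the most frequent successor, or None. The
            prediction may fail; it only needs to succeed often enough."""
            successors = self.counts.get(tuple(context))
            if not successors:
                return None
            return successors.most_common(1)[0][0]

    p = ShortTermPredictor(order=2)
    p.observe(list("the cat sat on the mat"))
    print(p.predict(list("th")))  # 'e', learned from the stream itself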

Ambiguity, Diversity and Multiple Viewpoints

Diversity is a side effect of novelty: the brain (or an AN based system) generates scores of new ideas, and these ideas need to be evaluated for usefulness. We will constantly generate and maintain (for some time, at least) multiple ideas and viewpoints in any given concept space and for any given problem. Indeed, some concepts may have been learned from the input, and some may have been confabulated by the system itself; there is no real difference. The ability to deal with multiple, possibly conflicting, internally represented concepts implies the ability to deal with ambiguous and self-contradictory input.
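A hedged sketch of what maintaining multiple viewpoints could look like: several interpretations stay alive with weights, and incoming evidence strengthens some and weakens others, without forcing a premature commitment. All names here are illustrative, not taken from any actual AN implementation.

    from dataclasses import dataclass

    @dataclass
    class Interpretation:
        meaning: str
        weight: float = 1.0

    def update(interpretations, evidence_fit):
        """Reweight all live interpretations by how well they fit new
        evidence; prune only hopeless ones, keeping ambiguity around."""
        for it in interpretations:
            it.weight *= evidence_fit.get(it.meaning, 0.5)
        return [it for it in interpretations if it.weight > 0.01]

    # "bank" stays ambiguous until context arrives.
    views = [Interpretation("river bank"), Interpretation("money bank")]
    views = update(views, {"money bank": 0.9, "river bank": 0.2})  # saw "deposit"
    for it in views:
        print(it.meaning, round(it.weight, 2))  # both survive, reweighted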

Reliability

Neurons are inherently unreliable since they are dependent on electrochemical processes and dissipation of signal substances. We cannot guarantee how long it will take for an impulse to propagate, or whether it will propagate at all.

Since everything is a best-effort attempt, since there are no a priori correct answers, and since there is a lot of nondeterminism in the internal operation of the brain or system, the system has to be resilient to minor internal errors (typically caused by "bad luck" in a nondeterministic part of the algorithm). Any non-systematic error will drown in the other nondeterministic micro-behaviors during normal operation.
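This resilience argument can be illustrated with a toy redundancy model: many unreliable units vote, and non-systematic failures of individual units drown in the aggregate. The failure model below is invented for illustration.

    import random

    def unreliable_unit(x, failure_rate=0.1):
        """A single 'neuron': sometimes it fails to propagate correctly;
        here, failure simply means returning a random answer."""
        if random.random() < failure_rate:
            return random.choice([0, 1])
        return x

    def redundant_signal(x, n_units=101):
        """Majority vote over many unreliable units."""
        votes = sum(unreliable_unit(x) for _ in range(n_units))
        return 1 if votes > n_units / 2 else 0

    errors = sum(redundant_signal(1) != 1 for _ in range(1000))
    print(errors, "errors in 1000 trials")  # almost always 0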

Robustness

Robustness is the dual of reliability. Input data might be incomplete, contain factual errors, be ambiguous, etc. The system will be resilient to such problems to the same degree a human would be. The ability to entertain multiple contradictory viewpoints at once, and the ability to operate when given incomplete or ambiguous input, can be traced to the same mechanism as the ability to operate when implemented on an unreliable substrate.

Self-organization and Self-repair

Several of the above benefits (and some listed below) originate in the same low-level mechanism — Self-organization.

If domain knowledge is well established ("learned well enough to be viewed as a competence"), then we can expect acceptable behavior, just as we would with a competent human.

Self-Organization is the process of adding elements to incomplete and "poorly behaving" parts of the system to make it more competent. Self-repair is simply self-organization when some organization already exists but has developed a problem. Viewed at this lowest algorithmic level, there is no difference between Self-repair of damage, whatever its cause, and ordinary Self-organization.

In all cases, the self-repairing properties of the system will, if given enough time and appropriate input and experience, correct the problem. The details of this are discussed elsewhere.

From the above list it is fairly easy to see that over a half-dozen benefits of Intuition based methods emanate from this single low-level mechanism.
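Here is a deliberately tiny sketch of the claim that self-repair and self-organization are the same process: a single loop adds elements wherever prediction keeps failing, whether the region was never trained or has been damaged. The dictionary model is a stand-in invented for illustration, not the actual substrate.

    def self_organize(model, stream):
        """model maps contexts to expected events; add or overwrite an
        element wherever the model is incomplete or 'poorly behaving'.
        Initial learning and repair take exactly this code path."""
        for context, actual in stream:
            if model.get(context) != actual:
                model[context] = actual
        return model

    model = {}                     # empty: initial self-organization
    stream = [("AB", "C"), ("BC", "D")]
    self_organize(model, stream)
    del model["AB"]                # simulate damage
    self_organize(model, stream)   # self-repair: the same code restores it
    print(model)                   # {'AB': 'C', 'BC': 'D'}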

Emergent Benefits

All of the above advantages are explicitly designed into the substrate code of an Artificial Intuition based system; for many of them, I can point to the lines of substrate code that accomplish them. The remaining advantages below are at best encouraged at low levels; the full effect is expected to emerge at higher levels as the system learns. Belief that this emergence will happen is the largest leap of faith required of those who want to work with Artificial Intuition systems. The strength of these effects is quite difficult to measure at lower ("IQ") levels. Still, many who understand the theory of AN in detail have no problem believing these effects will emerge.

Semantics

We want to be able to automatically create high-level models that explain low-level observations: to start from low-level observations of spatiotemporal event streams, such as text, audio, or vision data, and automatically reach useful higher (semantic) level interpretations that allow us to make more accurate short-term predictions. Please note that these are not theories; they are predictors at higher semantic levels that guide interpretation and context-based disambiguation of lower-level events.
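One way to picture a higher-level predictor guiding lower-level disambiguation, with invented names and probabilities:

    # A noisy low-level observation has two readings; a higher 'semantic'
    # level is itself just a predictor, and its prediction picks the
    # reading that fits the context. All numbers are invented.
    low_level = {"r?ad": ["read", "road"]}
    high_level = {
        "i will":   {"read": 0.8, "road": 0.0},
        "down the": {"read": 0.0, "road": 0.9},
    }

    def disambiguate(context, noisy_token):
        candidates = low_level[noisy_token]
        weights = high_level.get(context, {})
        return max(candidates, key=lambda w: weights.get(w, 0.0))

    print(disambiguate("i will", "r?ad"))    # read
    print(disambiguate("down the", "r?ad"))  # road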

Abstraction

AN systems use Distributed Representation. All concepts have numerous abstractions. Abstractions appear in distributed representation systems as incidental and/or partial activations of the concepts that matter. This will be elaborated upon elsewhere.
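A minimal sketch of the idea, with invented feature units: in a distributed representation a concept is a pattern of active units, and an abstraction is simply the partial pattern that several concepts share.

    dog    = {"animal", "four-legged", "furry", "barks"}
    cat    = {"animal", "four-legged", "furry", "meows"}
    canary = {"animal", "two-legged", "feathered", "sings"}

    def shared_abstraction(*concepts):
        """The partial activation common to all given concepts."""
        return set.intersection(*concepts)

    print(shared_abstraction(dog, cat))          # {'animal', 'four-legged', 'furry'}
    print(shared_abstraction(dog, cat, canary))  # {'animal'}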

Generality

A hallmark of Intelligence is that one mechanism can solve a multitude of disparate problems. By this criterion, current chess-playing computers are not intelligent, since chess is all they can do. An AN based language model contains sufficient language information that it can be used for dozens of different language related tasks by simply providing slightly different wrapper programs.
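A sketch of the wrapper idea, under the assumption that the model exposes a single predict_next operation (an interface invented here for illustration): the same predictor serves text completion and plausibility scoring, and further tasks would be further thin wrappers.

    from collections import Counter, defaultdict

    class TinyModel:
        """Stand-in for a language model: order-2 successor statistics."""
        def __init__(self, corpus, order=2):
            self.order = order
            self.counts = defaultdict(Counter)
            for i in range(len(corpus) - order):
                self.counts[corpus[i:i + order]][corpus[i + order]] += 1
        def predict_next(self, text):
            succ = self.counts.get(text[-self.order:])
            return succ.most_common(1)[0][0] if succ else None

    def complete(model, prefix, length=12):          # wrapper 1
        for _ in range(length):
            nxt = model.predict_next(prefix)
            if nxt is None:
                break
            prefix += nxt
        return prefix

    def plausibility(model, text):                   # wrapper 2
        hits = sum(model.predict_next(text[:i]) == text[i]
                   for i in range(2, len(text)))
        return hits / max(len(text) - 2, 1)

    m = TinyModel("the cat sat on the mat and the cat ran off")
    print(complete(m, "the c"))        # continues in the corpus style
    print(plausibility(m, "the cat"))  # 1.0 for corpus-like text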

Learning

The term "World Model" is here again used to denote a sub-logical world model.

Learning is the incremental extension of an existing world model. How this works in detail will be discussed elsewhere.

Scale-free-ness

The system scales indefinitely without degradation in performance as long as sufficient memory is available.

Sub-linear time access

The system is capable of performing its operations in nearly constant time regardless of the size of the backing database.

The system does not learn more slowly just because it already knows a lot. This matches human experience: knowing more does not mean you think or read more slowly.
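The constant-time claim maps naturally onto hash-based storage, where lookup cost is roughly independent of how much is already stored. The benchmark below is illustrative only and says nothing about the system's actual storage layer.

    import random, string, timeit

    def build(n):
        """A toy 'backing database' of n random keys."""
        return {"".join(random.choices(string.ascii_lowercase, k=12)): i
                for i in range(n)}

    for n in (10_000, 1_000_000):
        db = build(n)
        key = next(iter(db))
        t = timeit.timeit(lambda: db[key], number=100_000)
        print(n, "items:", round(t, 3), "s per 100k lookups")  # nearly equal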