The Logical Basis of Readware Technology

As stated earlier, ATS offers an inductive means and semantic rules for converting any word root (found in set A) into an abstract mathematical mapping such as f: X → Y or f(x) (selected from Table 3; Adi, 2007, pp. 198, 200). The seven abstract processes of ATS (Table 1) have a precedence order, shown in Table 2 (Adi, Ewell, et al. 2009), which guides the selection of the abstract theory of interpretation. Given the correct choice of formula, the consonants that refer to processes with higher precedence play the role of the function f of the mapping, while the remaining consonants play the role of its domain X or range Y.
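
To make the precedence rule concrete, here is a minimal sketch. The consonant-to-process map and the precedence table below are hypothetical placeholders, not Adi's actual tables (which are Tables 1 and 2 in Adi, Ewell, et al. 2009):

```python
# Hypothetical stand-ins for Tables 1 and 2; the real mappings are published
# in Adi, Ewell, et al. (2009). Only the selection logic is illustrated here.
PROCESS_OF = {          # consonant -> abstract process (placeholder names)
    "k": "closure",
    "t": "binding",
    "b": "containment",
}
PRECEDENCE = {"closure": 1, "binding": 2, "containment": 3}  # 1 = highest

def interpret_root(consonants):
    """Pick the consonant whose process has the highest precedence as the
    function f; the remaining consonants supply the domain X and range Y,
    yielding an abstract mapping f: X -> Y."""
    ranked = sorted(consonants, key=lambda c: PRECEDENCE[PROCESS_OF[c]])
    f, rest = ranked[0], ranked[1:]
    return {"f": PROCESS_OF[f], "X_Y": [PROCESS_OF[c] for c in rest]}

print(interpret_root(["b", "k", "t"]))
# {'f': 'closure', 'X_Y': ['binding', 'containment']}
```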

Unlike General Semantics and L. Ron Hubbard's Dianetics, neither of which can be proved or disproved, ATS was pragmatically corroborated by developing the oracle machine we call Readware. It is generally understood that an oracle machine can perform all of the usual operations of a Turing machine, and that the oracle can also be queried for a solution to any instance of the computational problem assigned to it.

The problem for our oracle (Readware) was to find the correspondence (the corresponding expressions) correlating physical references with conceptual entities by way of a relevance measure. In our oracle machine (the Turing machine on which Readware runs), Readware responds to each query by generating an atomic weight or score (a whole number) as the product of the applicable abstract theory, given in a computable description language that is at least as efficient as the natural-language input. It is fair to ask how Readware is able to overcome the complexity while maintaining the richness of meaning.
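
As an illustration of the oracle interface just described, the following sketch uses hypothetical class and method names (not Readware's actual API) to show a query that returns a whole-number relevance score:

```python
# A minimal sketch of the oracle interface described above; the class, method,
# and attribute labels are hypothetical, not Readware's actual API.

class RelevanceOracle:
    """Answers relevance queries with a whole-number score, as an oracle
    machine answers instances of its computational problem."""

    def __init__(self, knowledge):
        self.knowledge = knowledge  # the model M of attribute correspondences

    def score(self, reference: str, concept: str) -> int:
        """Return an integer relevance weight correlating a physical
        reference (e.g., a word) with a conceptual entity."""
        # Placeholder logic: count shared attributes in the knowledge model.
        shared = self.knowledge.get(reference, set()) & self.knowledge.get(concept, set())
        return len(shared)

oracle = RelevanceOracle({"anger": {"P1", "P3"}, "rage": {"P1", "P3", "P4"}})
print(oracle.score("anger", "rage"))  # 2
```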

In the study of Kolmogorov complexity, which is familiar to most NLP and AI engineers, the invariance theorem is one of the core results. It says that there are optimal universal description languages, which are at least as good as any other description language, except for a constant overhead. Such a universal description language is the basis of a universal artificial intelligence, or AIXI, as Marcus Hutter puts it.

Adi's description language gives us a reliable and complete computational process by which we can transform descriptions of objects (of knowledge, of awareness) into correlative attributes of such objects (in memory; in situ). The description language in Readware is about the attributes of the manifest image, the organism of attributes, described above. Recall that the purpose of running a statistical process and obtaining probabilities is to find the attributes (usually unknown). That is why an inductive procedure such as Solomonoff's or Bacon's or Adi's is needed.

The difference between Readware and Bayesian systems should now be clear. A Bayesian system uses regression analysis and predictive coding to discover the unknown attributes (the inherent causes of, or conditions for, something). Readware, by contrast, uses a knowledge K in the form of a mechanism or organism O, an oracle that has knowledge K of the attributes and can predict their values. Readware applies this organism and its knowledge K (its model M) of the attributes (or conditions) of the context of awareness at each instance, and in certain kinds of decision making or function problem solving (those involving judgment and the determination of significance, salience, or relevance) as necessary to conceptual analysis and to information analysis and retrieval.
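
The contrast can be sketched with made-up data. The Bayesian half below is a simple posterior update (standing in for the regression and predictive-coding machinery mentioned above) that must estimate an unknown attribute from evidence; the oracle half reads attribute values directly off a fixed knowledge model M:

```python
# A hedged contrast of the two strategies, with hypothetical data.

def bayesian_estimate(prior: float, likelihoods: list[float]) -> float:
    """Standard posterior update: infer an unknown attribute from evidence."""
    posterior = prior
    for lik in likelihoods:
        posterior = (posterior * lik) / (posterior * lik + (1 - posterior) * (1 - lik))
    return posterior

def oracle_predict(model: dict, word: str) -> set:
    """Oracle-style prediction: the attributes are already held in the model M."""
    return model[word]

print(bayesian_estimate(0.5, [0.8, 0.7]))                 # estimate converges with evidence
print(oracle_predict({"anger": {"P1", "P3"}}, "anger"))   # attributes read off directly
```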

Many people claim that understanding text or "natural language" (that is, reading text) is a complex problem, and that the objects of text are too complex and abstract for most people to grasp, let alone for computers. The idea behind Kolmogorov complexity is that the complexity of an object is the length of its shortest description in some fixed description language.
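
The idea can be illustrated directly, taking Python itself as the fixed description language: the same object admits descriptions of very different lengths, and its complexity is the length of the shortest one.

```python
# A minimal illustration: the complexity of an object is the length of its
# shortest description in some fixed language (here, Python source code).

obj = "ab" * 500  # a 1000-character string

literal_description = repr(obj)        # spell the object out character by character
program_description = "'ab' * 500"     # a short program producing the same object

print(len(literal_description))        # ~1002 characters
print(len(program_description))        # 10 characters: a far shorter description
assert eval(program_description) == obj
```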

Unfortunately, the length of the shortest description depends on the choice of description language. What has been found in practice is that it takes long, drawn-out descriptions to describe problems that a natural language summarizes and approximates with a single word. I am referring to problems such as Justice, Trust, Coherence, Knowledge, Information, and Intelligence. Adi's discovery was the breakthrough needed to achieve a concise, reliable, powerful, and optimal description language.

A description language is optimal in the following sense: given any description of an object in some description language, I can use that description in my optimal description language with a constant overhead. The constant depends only on the languages involved, not on the description of the object or on the object being described. In our case, that of natural language and our awareness of its objects, the constant depends only on the size of the symbolic denotations involved (e.g., the record of its notational system; the text).

Like any optimal description language, Readware contains descriptions with two parts:

  1. The first part describes another description language.
  2. The second part is a description of the objects in that language.

In more technical terms, the first part is a description of the apparatus (defining structure and arrangement) and of a rule or rules (e.g., given as a calculus or an algorithm), while the second part is the input (word stems) to that computational process, which produces the object (a relevance relation) as output.
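
The two-part scheme can be sketched as follows. The decoder and its toy description language are invented for illustration: part 1 installs a description language, and part 2 supplies the input that yields the object.

```python
# A toy sketch of the two-part description scheme. Part 1 is a program P
# describing another description language (a run-length decoder); part 2 is
# the input D in that language. Running P on D reproduces the object.

part1 = """
def decode(d):
    # toy description language: [('a', 3)] means 'aaa'
    return ''.join(ch * n for ch, n in d)
"""

part2 = [("a", 3), ("b", 2)]        # the description of the object in that language

namespace = {}
exec(part1, namespace)              # part 1: install the language
print(namespace["decode"](part2))   # part 2: decode the object -> 'aaabb'
```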

What is the object of (an increase or change in) awareness?

Adi's theory is always presented in two parts. Part 1 describes the 11 attributes (7 processes, 4 polarities) of the environment and the inheritance rule governing the attributes and their relationships. Part 2 is a derivation of roots (rigid conceptual forms) in an original (non-derived, and in that sense pure) natural language, and a description of their structure given the description language (Adi, Ewell, et al. 2009).
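
Structurally, Part 1 can be pictured as a small data model. The identifiers below are placeholders, not Adi's actual process and polarity names, and the inheritance rule shown is a stub; the real contents are given in the published tables:

```python
# Structural sketch only: placeholder identifiers stand in for the actual
# processes, polarities, and inheritance rule of Table 1 (Adi, Ewell, et al.
# 2009); consult the published tables for the real names and the real rule.

PROCESSES = [f"P{i}" for i in range(1, 8)]   # the 7 abstract processes
POLARITIES = ["pol1", "pol2", "pol3", "pol4"]  # the 4 polarities (placeholders)

def inherit(parent_attrs: set, child_attrs: set) -> set:
    """Stub for the inheritance rule governing the attributes and their
    relationships; here it simply merges the two attribute sets."""
    return parent_attrs | child_attrs

print(inherit({"P1"}, {"P3", "pol3"}))  # the union of the two attribute sets
```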

Recall the invariance theorem: given any other description language L, our optimal description language is at least as efficient as L, with some constant overhead. The following proof is given in the Wikipedia article on the invariance theorem.

Proof: If we have a description D in L, we can convert it into a description in our optimal language by first describing L as a computer program P (part 1), and then using the original description D as input to that program (part 2). The total length l of this new description D' is (approximately):

l(D') = l(P) + l(D)

The length of P is a constant that doesn't depend on D. So there is at most a constant overhead, regardless of the object we're trying to describe. It follows that our optimal language is universal up to this additive constant.
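
In the standard notation used in treatments of Kolmogorov complexity, the same argument reads:

```latex
% The invariance theorem in standard notation. K_L(x) is the length of the
% shortest description of x in language L; P_L is the program (part 1) that
% describes L; the constant c_L does not depend on x.
\[
  \forall x :\quad K_U(x) \;\le\; K_L(x) + c_L ,
  \qquad \text{where } c_L = \ell(P_L),
\]
\[
  \text{since the two-part description } D' = (P_L, D)
  \text{ has length } \ell(D') = \ell(P_L) + \ell(D).
\]
```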

A by-product of the testing and commercial development of text-classification products was the creation of the means necessary to analyze various domains and types of knowledge, to establish conceptual categories, and to classify terms and their relationships according to modern contexts and topics of inquiry.

Because this is known to be a difficult, tedious, and recurring problem in modern organizations and institutions, we used ATS to amplify the capacity for developing and corroborating abstract theories and conjectures about how things fit into the poietic schemata. An inductive algorithm was developed to induce predictive interpretations according to the self-organized apparatus introduced in Table 1.

Algorithm C, described below, is used manually in conjunction with Tables 1 through 5 (Adi, Ewell, et al. 2009). Using Adi's tables as reference, anyone can use this procedure to create, study, and test conjectures about the meaning of any word.

Summary of Super-Recursive Algorithm C

  1. Generate an abstract theory (from set A) given a denotative structure (e.g., anger). For each instance, use the abstract result to generate several alternative hypotheses, any of which may become a more concrete theory.
  2. Select the most likely one and make risky predictions, i.e., predictions that are not likely to be observed unless the theory holds; test alternate interpretations and try to disprove them.
  3. If any prediction fails, select and test alternative hypotheses until one becomes refined and corroborated.
  4. Step 3 produces promising theories, i.e., ones likely to be corroborated. If, for a certain concrete theory, many predictions are made and all of them are observed, then consider it a corroborated theory. If later observations or corroborated theories about related phenomena clash with this theory, revise it using new predictions from interpretations of the observations and return to Step 2.

Algorithm C is a systematic procedure for performing strong inference.
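
The control flow of Steps 1 through 4 can be sketched as a loop. The generate, predict, and observed functions below are placeholders for the manual work done with Tables 1 through 5; only the shape of the procedure is meant to match the summary above.

```python
# A structural sketch of the Algorithm C loop (Steps 1-4), not the manual
# procedure itself; generate/predict/observed are stand-in callables.

def algorithm_c(structure, generate, predict, observed, max_rounds=100):
    """Generate alternative hypotheses for a denotative structure, test
    their risky predictions, and keep refining until one is corroborated."""
    hypotheses = generate(structure)               # Step 1: alternative hypotheses
    for _ in range(max_rounds):
        if not hypotheses:
            return None                            # nothing left to test
        theory = hypotheses.pop(0)                 # Step 2: select a likely one
        predictions = predict(theory)
        if all(observed(p) for p in predictions):  # Step 4: all predictions hold
            return theory                          # a corroborated theory
        hypotheses.extend(generate(theory))        # Step 3: refine and retest
    return None

# Toy usage with stand-in functions:
result = algorithm_c(
    "anger",
    generate=lambda s: [f"{s}-hypothesis-{i}" for i in range(3)],
    predict=lambda t: [t],
    observed=lambda p: p.endswith("-1"),
)
print(result)  # anger-hypothesis-1
```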

Taken element by element, strong inference is the method of inductive inference that goes back to Francis Bacon. The pattern is familiar to many college students, and the steps are practiced, on and off, by most researchers and scientists. The difference lies in the systematic application of this pattern of inductive inference.

Strong inference consists of applying the following steps to every problem, formally and explicitly and regularly:

  1. Devising alternative hypotheses; Algorithm C establishes multiple formal hypotheses about the attributes contributing to the production of information, given a denotative knowledge in an environment of awareness (using Tables 1, 2, and 3);
  2. Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses (this is done using Tables 4 and 5);
  3. Carrying out the experiment so as to get a clean result;
  4. Recycling the procedure, making sub-hypotheses or sequential hypotheses to refine the possibilities that remain, and so on.

Algorithm C is a general method of inference applying to the valid scope and range of the relative definition of a creative, constructive symbolic knowledge as the "capacity for predicting the value which an attribute may take, given a denotative domain of these attributes and their semantic correspondence, to symbolize a given form in an environment of such an awareness."

This environment of awareness is specified by Adi's semantic matrix (Table 1). It is, essentially, a schema of the organism of the attributes of the environment, which regulates the correspondence between word forms and their domains.

In this sense, it is an intellectual invention: using the procedure of Algorithm C, the invention may be used to (contingently) map the denotative structure (the notational system of representation) onto the attributes of its creation, that is, the initial conditions onto the connotative forms of human awareness.

The attributes of the micro-environment of creation (poiesis) are equivalent to the whole structure of the macro-environment and its contingent domains. The attributes are what is referenced by speech acts and what is denoted and represented in the domain, which is itself a structural domain of production, creation, or biophysical "poiesis". It is a structural domain of these attributes because it is the domain where the connotative representations (ethnographic symbolism) of the referenced attributes are composed, configured, or constructed, and exposed or projected, in order to generate information with them.

The connotative forms can be understood, in this context, to be expressive of the psychological, affective, spiritual, and metaphysical domains; this is the place for interpretations of what is being suggested or implied, and for forming beliefs about the denotative structure and the attributes to which it refers. The denotative (and most objective) domain, by contrast, is given in the system of notation (e.g., an alphabet).