Mind AI

See Symbolic AI for the initial incentive to explore their work.

Reading their technical whitepaper and building up a base of prerequisite knowledge to understand what's up.

All generic knowledge goes into its respective node; this one only serves as an index into what they are up to.

1. Whitepaper Notes

1.1. Introduction

  • Mentions the prevalence of neural networks and introduces the concept of Connectionist AI. As put by McCulloch somewhere: this is about treating the brain like a Turing Machine.
  • while I can see how one would apply a symbolic approach in the domain of NLP, I need to explore the opportunities in the domain of CV.
  • Introduced to a new logical unit of reasoning : termed the canonical.
    • claims full transparency
    • allows debugging specific mistakes in the understanding pipeline

1.2. Theory of Information

  • a new paradigm for representing knowledge is discussed:
    • using NIL to represent the symbolic initiations
(defmacro define (a b)
  "mathematically introduce the form a as b"
  (...))

(defmacro delta (a)
  "represent the change in a concept"
  (...))

(defmacro def-assert (&body forms)
  "a collection of assertions"
  (...))

(defmacro def-concept (atom descr)
  "introduce an atomic concept with a description"
  (...))

(defvar *concepts*
  '(potential
    information
    measurement
    state
    meaning
    ...))

(loop for concept in *concepts*
      ;; def-concept is a macro, so build and eval a form per symbol
      do (eval `(def-concept ,concept "...")))

(def-assert
    (define information (delta potential))
    (define measurement (delta information)))
  • a functionally representable way of reasoning is discussed, before moving on to actually defining the canonical form in the next section

1.3. Canonical form

  • based on a graph, check Canonicals (Mind AI)
  • interpreting a canonical with the concepts explored above, it can be summarized as :
    1. the position is captured between the primary and the context
    2. the measure is captured between the context and the resultant
    3. the meaning is captured between the primary and the resultant
  • i.e. with info about the primary's position in the context and by applying suitable measures on the context, one can obtain the meaning captured between the primary and the resultant.
  • the notion of querying a node or a link is represented by a ? at the respective position
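  • to make this reading concrete, a minimal sketch of my own (the plist shape and slot names are assumptions, not the paper's actual representation) for "The cat is in the box":
(defvar *cat-in-box*
  '(:primary cat          ; the thing being talked about
    :context box          ; what it is being related against
    :resultant in         ; what falls out of relating the two
    :position (cat box)   ; captured between primary and context
    :measure (box in)     ; captured between context and resultant
    :meaning (cat in)))   ; captured between primary and resultant

(defun query (canonical slot)
  "A ? at SLOT asks the canonical to fill that position in."
  (getf canonical slot))

;; (query *cat-in-box* :resultant) => IN   ; i.e. the query (cat box ?)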

1.4. Conversion to canonicals

  • in the context of NLP, the input is POS tagged before being transformed into the canonicals
    • A tree-like parse structure (s-expressions) allows for convenient hierarchical transformation into the relevant canonicals
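  • a hedged sketch of that hierarchical walk; the toy tag set and the noun/preposition-to-slot mapping are my guesses, not the paper's conversion rules:
(defvar *parse*
  '(S (NP (DET the) (N cat))
      (VP (V is)
          (PP (P in) (NP (DET the) (N box))))))

(defun collect-tag (tree tag)
  "Collect the words under every node tagged TAG, depth-first."
  (cond ((atom tree) nil)
        ((and (eq (first tree) tag) (atom (second tree)))
         (list (second tree)))
        (t (mapcan (lambda (sub) (collect-tag sub tag)) (rest tree)))))

(defun parse->canonical (parse)
  "Guessed mapping: first noun -> primary, second -> context, preposition -> resultant."
  (destructuring-bind (primary context) (collect-tag parse 'N)
    (list :primary primary
          :context context
          :resultant (first (collect-tag parse 'P)))))

;; (parse->canonical *parse*) => (:PRIMARY CAT :CONTEXT BOX :RESULTANT IN)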

1.5. Upper Ontology

  • fundamental ideas are incorporated into the system by a set of predicates
    • collectively termed the upper/uppermost ontology
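  • roughly, I picture it as a seed set of assertions loaded before everything else; the predicates below are placeholders of mine, the whitepaper does not enumerate the actual set:
(defvar *upper-ontology*
  '((is-a concept thing)
    (is-a event thing)
    (has thing state)
    (define information (delta potential)))  ; echoes the assertions above
  "Fundamental assertions assumed present before any domain knowledge.")

(defun upper-assertion-p (assertion)
  "Is ASSERTION one of the fundamental, always-loaded predicates?"
  (and (member assertion *upper-ontology* :test #'equal) t))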

1.6. Compartmentalization

  • The ontology (loosely translated as comprehension rules) can be sectioned off into global, local, user-wise and session-wise rules.
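  • one plausible way to organize the scopes (my layout, not theirs), with the most specific scope winning on lookup:
(defvar *ontologies*
  '(:session ((greeting . casual))
    :user    ((greeting . formal))
    :local   ()
    :global  ((greeting . neutral))))

(defun lookup-rule (name)
  "Search session -> user -> local -> global, returning the first hit."
  (loop for scope in '(:session :user :local :global)
        for hit = (assoc name (getf *ontologies* scope))
        when hit return (values (cdr hit) scope)))

;; (lookup-rule 'greeting) => CASUAL, :SESSION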

1.7. Contextualization and Ontology Versioning

  • knowledge is versioned rather than deleted : ontologies are marked as deprecated.
  • helps follow the evolution of reasoning with time.
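  • a minimal sketch of the mark-don't-delete idea; the struct and field names are mine:
(defstruct rule name body (deprecated-at nil))

(defun deprecate (rule)
  "Retire a rule without erasing it from the ontology's history."
  (setf (rule-deprecated-at rule) (get-universal-time))
  rule)

(defun active-rules (rules)
  "Reasoning uses only rules that were never marked deprecated."
  (remove-if #'rule-deprecated-at rules))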

1.7.1. Logical Reasoning

  • The operations and structure of a canonical can be used to model the 3 logical reasoning embodiments.

1.8. Ontological Topology

  • aka O-topology
  • two expressions are stated to be equal when they are semantically similar (see the sketch after this list):
    • The cat is in the box
    • The box emboxes the cat
  • trying to be a little ambitious:
    • The cat may be alive
    • The cat may be dead
  • the above probably aren't emotionally similar
  • introduced to their idea of achieving critical mass by capturing canonical similarities using various approaches
    • critical mass is achieved when the engine is satisfactorily able to handle any abstract text : termed as the model having "learned how to learn"
  • O-topology is also invoked upon to deal with issues with purely Symbolic AI
  • now, as discussed previously, because the network is augmentable (see transforming canonicals) and calls upon o-topology to deal with the brittleness of purely symbolic AI, the authors term the approach an augmented topological network.
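  • a crude sketch of mine of equality up to transposition (the one referenced from the cat/box example above); real o-topology would lean on many more canonical-similarity checks than this:
(defun transpose (canonical)
  "Swap primary and context, inverting the relation: the 'embox' reading."
  (list :primary (getf canonical :context)
        :context (getf canonical :primary)
        :resultant (list 'inverse (getf canonical :resultant))))

(defun o-equal-p (a b)
  "Two canonicals are o-topologically equal directly or up to transposition."
  (or (equal a b)
      (equal (transpose a) b)))

;; (o-equal-p '(:primary cat :context box :resultant in)
;;            '(:primary box :context cat :resultant (inverse in))) => T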

1.9. The 3 logical embodiments

  • The authors now describe how one would go about conducting deduction, induction and abduction using the canonical
  • a point worth noting : neural networks build upon the idea of the neuron, a bottom-up, function-follows-form approach, whereas augmented topological networks (what these canonicals are used in) model the notion of reasoning in a top-down, form-follows-function manner.
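  • my shorthand for the three embodiments, in terms of which slot of the canonical carries the ?:
(defun embodiment (primary context resultant)
  "Whichever argument is ? is what the reasoning step must produce."
  (cond ((eq resultant '?) :deduction)  ; case + rule => conclude the result
        ((eq context '?)   :induction)  ; cases + results => conclude the rule
        ((eq primary '?)   :abduction)  ; rule + result => hypothesize the case
        (t                 :assertion)))

;; (embodiment 'cat 'box '?) => :DEDUCTION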

1.10. Known, Unknowns and Disambiguation

  • discusses how the canonical deals with ambiguities: queries are ontologically transformed into ones that explicitly apply to a canonical (by apply, I mean pattern similarity).
  • also reiterates the idea of learning essentially being comprehending, completing, comparing and augmenting patterns. Learning how to learn can itself be modeled as a pattern.
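  • a tiny take on "apply by pattern similarity" (mine): ? in the query unifies with anything, every other slot must agree:
(defun matches-p (pattern canonical)
  "? in the query unifies with anything; all other slots must agree."
  (loop for (slot value) on pattern by #'cddr
        always (or (eq value '?)
                   (equal value (getf canonical slot)))))

;; (matches-p '(:primary cat :context ? :resultant in)
;;            '(:primary cat :context box :resultant in)) => T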

1.11. Natural Language Generation

  • when inputs are parsed and processed into canonicals, the natural language's properties are observed (as a pattern), and output can be produced accordingly when needed.
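  • naive surface realization over the plist shape from earlier; the real engine would reuse the language patterns it observed while parsing, not a fixed template like mine:
(defun canonical->text (canonical)
  "Render a canonical back to a surface string via a fixed template."
  (format nil "the ~(~a~) is ~(~a~) the ~(~a~)"
          (getf canonical :primary)
          (getf canonical :resultant)
          (getf canonical :context)))

;; (canonical->text '(:primary cat :context box :resultant in))
;; => "the cat is in the box"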

1.12. Transparency of Operations (Interpretability)

  • given the discrete nature of its reasoning process, explainability boils down to following the nodes and links that were traversed when processing the query.
  • this path can be explicitly debugged to weed out any errors in the reasoning process.
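  • the trace could be as simple as recording every node/link visited (a sketch of mine):
(defvar *reasoning-trace* '()
  "Nodes and links touched while answering the current query.")

(defun visit (element)
  "Record every traversal step so the path can be replayed and debugged."
  (push element *reasoning-trace*)
  element)

(defun explain ()
  "The explanation is just the traversal itself, oldest step first."
  (reverse *reasoning-trace*))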

1.13. Critical Mass

  • once a large enough nucleus of canonicals has been built, the engine moves towards bettering its learning patterns and more.
    • all unknowns boil down to lacking the relevant assertions or principles needed to answer a query.
      • stated to be an intractable problem that can be dealt with at any time.
  • proceeds into meta-theoretics

1.14. Meta Theoretics

  • On the final goal of such an engine
    • to build upon theories by comprehending theories of theories, conducting its own research to fill in gaps and more…
      • hypothesizing and experimenting for rejection/verification…
  • they elaborate on what the idea of achieving intelligence actually means to them.

2. Conceptual Captures

  • concepts partitioned out of the whitepaper that benefit from minor elaboration

2.1. Symbolic representation Index

symbol  interpretation    actualization
?       query, potential  "some"
{}      none, nil         "not"
}{      all, any          "is"
<>      bind              "has"
><      open              "goes"

2.2. Canonicals (Mind AI)

  • based on a simple directed graph : composed of 3 nodes and 3 links
(defmacro make-node (node-tag node-info)
  (...))

(defmacro make-link (link-tag from-node to-node)
  (...))

(defun make-canonical (canonical-tag)
  ;; let* rather than let: each link binding refers to node bindings above it
  (let* ((primary (make-node 'primary (...)))
         (context (make-node 'context (...)))
         (resultant (make-node 'resultant (...)))
         (<> (make-link 'bind primary context))    ; position
         (>< (make-link 'open context resultant))  ; measure
         (}{ (make-link 'all primary resultant)))  ; meaning
    #'(lambda (message)
        "A LOL (let-over-lambda): protected state exposed by functional access points"
        ;; dispatch on MESSAGE to read or update the enclosed nodes and links
        (cond ((...) (...))
              ((...) (...))
              ((...) (...))
              ((...) (...))))))

  • The notion of reasoning is now idiomatically capturable by the above LOL.
  • note that nodes and the links are contextually homoiconic.
    • a canonical can be transformed into another pseudo-similar canonical where the nodes become links and vice-versa.
  • it is possible to substitute nodes and links with canonicals themselves and further represent more complex reasoning objects.
  • This is termed by them an "augmented network"; a small sketch follows.
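  • a sketch of such a substitution on the plist shape used in these notes; how the engine actually rewrites its graph is not spelled out in the whitepaper:
(defun augment (canonical slot sub-canonical)
  "Substitute a whole canonical for one node or link, growing the network."
  (let ((copy (copy-list canonical)))
    (setf (getf copy slot) sub-canonical)
    copy))

;; embed "the cat is in the box" as the primary of a larger canonical:
;; (augment '(:primary ? :context room :resultant within)
;;          :primary
;;          '(:primary cat :context box :resultant in))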
Tags::AI:org: