Neural Symbolic Learning and Reasoning - A survey and interpretation
Tarek R. Besold, Artur d’Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kühnberger et al.
Presented by:
Rohit Sanjay Inamdar
SBU ID: 114504643
Contents
Overview
Kyle Hamilton, Aparna Nayak, Bojan Božić and Luca Longo
https://arxiv.org/pdf/2202.12205
Prolegomena of N-S Computation
Principles and mechanisms
1. Translation of symbolic knowledge into network
2. Gaining additional knowledge from examples
3. Reasoning
4. Symbolic extraction from the network.
Network A: P(X, Y)
Network B: Q(Z)
Output: P(X, Y) ∧ Q(Z) → R(X, Y, Z)
Fibring
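Fibring can be sketched as one network's output modulating the weights of another before that network is evaluated. The sketch below is a minimal illustration under that assumption; the networks, weights, and the choice of modulating A's weights by B's output (the "fibring function") are all illustrative, not taken from the survey.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def network(weights, inputs):
    # A single-layer network: weighted sum passed through a sigmoid.
    return sigmoid(np.dot(weights, inputs))

def fibred_output(w_a, x_a, w_b, x_b):
    # Illustrative fibring function: network B's output modulates
    # network A's weights before A is evaluated.
    b_out = network(w_b, x_b)          # plays the role of Q(Z)
    a_out = network(w_a * b_out, x_a)  # P(X, Y) computed under B's influence
    return a_out                       # stands in for R(X, Y, Z)

r = fibred_output(np.array([0.5, -0.3]), np.array([1.0, 0.0]),
                  np.array([0.8]), np.array([1.0]))
```

The key point is that B is embedded *inside* A's computation, rather than the two networks being composed side by side.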
Conceptual overview of Neuro Symbolic System (Garcez et al.)
NSCA for Neuro-Symbolic Computing (de Penning et al.)
NSCA Mechanisms
P(V = v, H = h), where V is the visible layer and H the hidden layer
v, h: the vectors encoded in the layers
v encodes the data as binary or real values
h encodes the posterior probability P(H | v)
Recurrent Temporal RBM (Sutskever, Hinton, Taylor, 2009)
Each hidden unit Hⱼ is a hypothesis of a rule Rⱼ
Rule Rⱼ computes the posterior probability, i.e.:
P(R | B = b, Rₜ₋₁ = rₜ₋₁), where B = b are the beliefs observed in the visible layer
r ∝ P(R | B = b, Rₜ₋₁ = rₜ₋₁)
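The recurrent conditioning above can be sketched as a hidden posterior that depends on both the current beliefs b and the previous rule activations. The weight matrices and dimensions below are illustrative, not the NSCA's actual parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rule_posterior(b, r_prev, W, U, c):
    # Each hidden unit H_j hypothesises a rule R_j; its posterior is
    # conditioned on the observed beliefs b and the previous hidden
    # state r_{t-1}:  r ∝ P(R | B = b, R_{t-1} = r_{t-1}).
    return sigmoid(c + W @ b + U @ r_prev)

W = np.array([[0.7, -0.2]])   # belief-to-rule weights (illustrative)
U = np.array([[0.5]])         # temporal rule-to-rule weights (illustrative)
c = np.array([0.0])
r = rule_posterior(np.array([1.0, 0.0]), np.array([0.8]), W, U, c)
```

The temporal term U @ r_prev is what distinguishes the Recurrent Temporal RBM from a plain RBM: the previous time step's rule activations bias the current posterior.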
RTRBM Scenario
Conditions:
(1) (Weather > good) // belief, i.e. a pre-defined data feature
meaning: the weather is at least good
Scenario:
(2) ApproachingIntersection ^ ⋄ (ApproachingTraffic = right)
meaning: the car is approaching an intersection and sometime in the future traffic is approaching from the right
(3) ((Speed > 0) ^ HeadingIntersection) S (DistanceIntersection < x) → ApproachingIntersection
meaning: if the car is moving and heading towards an intersection since it has been deemed close to the
intersection, then the car is approaching the intersection.
Assessment:
(4) ApproachingIntersection ^ (DistanceIntersection = 0) ^ (ApproachingTraffic = right) ^ (Speed = 0) →
(Evaluation = good)
meaning: if the car is approaching an intersection and arrives at the intersection when traffic is coming from the
right and stops then the trainee gets a good evaluation
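Assessment rule (4) can be read as a propositional check over a single time step's state. A minimal sketch, where the state representation and field names are my own illustration rather than the system's actual encoding:

```python
def evaluate_step(state):
    # Assessment rule (4): approaching the intersection, arrived at it
    # (distance 0), traffic from the right, and stopped -> good evaluation.
    # Field names are illustrative, not from the original NSCA system.
    if (state["approaching_intersection"]
            and state["distance_intersection"] == 0
            and state["approaching_traffic"] == "right"
            and state["speed"] == 0):
        return "good"
    return None

result = evaluate_step({"approaching_intersection": True,
                        "distance_intersection": 0,
                        "approaching_traffic": "right",
                        "speed": 0})
```

In the actual architecture this rule is not hand-coded as above but hypothesised by a hidden unit of the RTRBM; the sketch only makes its truth conditions explicit.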
Neuro-Symbolic Integration in and for Cognitive Science
Neuro-Symbolic Integration in and for Cognitive Science(contd.)
1. Rule-guided problem solving: Modelling the complex relations in a problem may map errors to their rule preconditions only poorly.
2. Central Executive Function: Symbolic AI can accurately model the brain's central control system and perform tasks such as overriding pre-existing responses to alter the strategy for achieving a goal or outcome.
3. Syntactic Structures: Connectionist models are prone to misinterpreting the syntactic structure of data, and thus only “learn” the pattern without any rule usage.
4. Compositionality: Connectionist models are unable to reflect the representational compositionality of data, and must explicitly learn the relationships between entities.
Binding and First-Order Inference in a Neural Symbolic Framework
Fodor, Pylyshyn Computational Model (1988)
1. Combinatorial syntax and semantics for mental representations: Recursive building of representations from atomic constituents; the semantics of a non-atomic representation should be a function of the semantics of its atomic parts.
2. Structure Sensitivity of processes: Operations should be sensitive to the syntactic structure of the representations they operate on.
Inference Specifications as Fixed Points of ANNs
ANN specifications of FOL Inference Chains
Dynamic Binding
Connectionist First-Order Logic
Connectionist First-Order Logic (contd.)
where A ∊ I, I ∊ L
Markov Logic Networks
Markov Logic Networks (contd.)
Markov Logic Networks - SPNs
Leaves: variables; internal nodes: sums and products, with weighted edges out of sum nodes
e.g. junction tree (fig. a), Naive Bayes model (fig. b)
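An SPN is evaluated bottom-up: leaves return univariate probabilities, product nodes multiply their children, and the root sum node takes a weighted mixture. A tiny Naive-Bayes-style SPN over two binary variables, with illustrative parameters of my own choosing:

```python
# A tiny sum-product network over two binary variables X1, X2, structured
# like the Naive Bayes mixture of fig. b: a root sum node over two product
# nodes, each multiplying per-variable leaves. Parameters are illustrative.

def leaf(p_true, x):
    # Leaf: a univariate distribution over one binary variable.
    return p_true if x == 1 else 1.0 - p_true

def spn(x1, x2):
    # Two mixture components: product nodes over independent leaves.
    comp1 = leaf(0.9, x1) * leaf(0.8, x2)
    comp2 = leaf(0.2, x1) * leaf(0.3, x2)
    # Root sum node: weights on its edges sum to 1, keeping the SPN
    # a valid probability distribution.
    return 0.6 * comp1 + 0.4 * comp2

# Validity check: the probabilities of all joint states sum to 1.
total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))
```

Because each sum node mixes distributions over the same variables and each product node combines disjoint variables, any marginal can be computed in one bottom-up pass, which is what makes SPN inference tractable.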
Relational Sum-product Networks (RSPNs) (Nath & Domingos, 2015)
Recent Developments and Future Work
Conceptors (Jaeger, 2014)
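A conceptor is computed from the correlation matrix R of a reservoir's state cloud as C = R (R + α⁻² I)⁻¹, where α is the aperture; its singular values lie in [0, 1), making it a soft projection onto the directions the dynamics actually use. A numpy sketch with random stand-in states (the reservoir itself is omitted):

```python
import numpy as np

def conceptor(states, aperture):
    # Conceptor matrix C = R (R + aperture^-2 I)^-1, where R is the
    # correlation matrix of the collected reservoir states (Jaeger, 2014).
    n = states.shape[1]
    R = states.T @ states / states.shape[0]
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(n))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))   # 100 stand-in reservoir states, dim 5
C = conceptor(X, aperture=10.0)
# Singular values of C lie in [0, 1): C acts as a soft projection matrix.
s = np.linalg.svd(C, compute_uv=False)
```

Larger apertures push the singular values toward 1 (the conceptor admits more of the state space); smaller apertures shrink them toward 0.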
Recent Developments (contd.)
Neural Turing Machine (Graves et al., 2014)
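One differentiable ingredient of the NTM is content-based addressing: read/write weights are a softmax over the cosine similarity between an emitted key and each memory row, sharpened by a key strength β. A minimal sketch with an illustrative memory matrix:

```python
import numpy as np

def content_addressing(memory, key, beta):
    # NTM content-based addressing (Graves et al., 2014): softmax over
    # the cosine similarity between the key and each memory row,
    # sharpened by the key strength beta.
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    e = np.exp(beta * sims)
    return e / e.sum()

M = np.array([[1.0, 0.0],    # illustrative memory rows
              [0.0, 1.0],
              [0.7, 0.7]])
w = content_addressing(M, np.array([1.0, 0.0]), beta=5.0)
```

Because the weighting is a smooth distribution over all rows rather than a hard index, the whole read/write path stays differentiable and trainable by gradient descent.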
Future Work
Character-level language models as an interpretable testbed for such cells.
References