Update draft on symbol grounding
parent 2fde5442ef · commit 86325089bc · 2 changed files with 109 additions and 20 deletions

@@ -47,3 +47,35 @@
  DATE_ADDED = {Thu Nov 7 14:36:52 2019},
}

@book{marcus2019_reboot_ai,
  author     = {Marcus, Gary},
  title      = {Rebooting AI: Building Artificial Intelligence We Can Trust},
  year       = 2019,
  publisher  = {Pantheon Books},
  address    = {New York},
  isbn       = {9781524748258},
}

@article{miller2003_cognit_revol,
  author     = {Miller, George A.},
  title      = {The Cognitive Revolution: a Historical Perspective},
  journal    = {Trends in Cognitive Sciences},
  volume     = {7},
  number     = {3},
  pages      = {141--144},
  year       = 2003,
  doi        = {10.1016/s1364-6613(03)00029-9},
  url        = {https://doi.org/10.1016/s1364-6613(03)00029-9},
  DATE_ADDED = {Thu Dec 26 11:09:31 2019},
}

@book{kahneman2011_think_fast_slow,
  author     = {Kahneman, Daniel},
  title      = {Thinking, Fast and Slow},
  year       = 2011,
  publisher  = {Farrar, Straus and Giroux},
  url        = {https://books.google.fr/books?id=SHvzzuCnuv8C},
  isbn       = {9780374275631},
  lccn       = {2012533187},
}

@@ -3,19 +3,19 @@ title: "Reading Notes: \"The Symbol Grounding Problem\", Stevan Harnad"
date: 2020-02-02
---

cite:harnad1990_symbol_groun_probl [[https://eprints.soton.ac.uk/250382/1/symgro.pdf][(PDF version)]] defined the /symbol
grounding problem/, one of the most influential issues in natural
language processing since the 1980s. The issue is to determine how a
formal language system, consisting of simple symbols, can be imbued
with any /meaning/.

From the abstract:
#+begin_quote
How can the semantic interpretation of a formal symbol system be made
/intrinsic/ to the system, rather than just parasitic on the meanings
in our heads? How can the meanings of the meaningless symbol tokens,
manipulated solely on the basis of their (arbitrary) shapes, be
grounded in anything but other meaningless symbols?
#+end_quote

In this landmark paper, Harnad makes the issue explicit, in its

@@ -25,24 +25,81 @@ combination of symbolic and connectionist properties. The problem
itself is still highly relevant to today's NLP advances, where the
issue of extracting /meaning/ is still not solved.

# cf Gary Marcus, /Rebooting AI/, and post on /The Gradient/

* What is the symbol grounding problem?

** Context: cognitivism, symbolism, connectionism

/Behaviourism/ was the dominant framework of experimental psychology in
the first half of the 20th century. It grounded psychology firmly in
an empirical setting, arguing that mental events are not observable,
and that only external behaviour can be studied
citep:miller2003_cognit_revol.

In the 1950s, new theories, in particular Chomsky's theories in
linguistics, started to question this approach and highlighted its
limitations. /Cognitivism/ arose as a way to take internal mental
states into account. It allowed scientists to make hypotheses about
unobservable phenomena, provided they made predictions testable in
an experimental setting.

"Meaning" is one such unobservable phenomenon.

Harnad defines a /symbol system/ as a set of arbitrary tokens with
explicit rules (also in the form of tokens or strings of tokens) to
combine them. Note that the set of rules should be explicit and not
defined a posteriori, because nearly every phenomenon can be
interpreted as following a set of rules.

An additional (and most relevant for us) property of symbol systems is
that they are /semantically interpretable/: we can associate a meaning
in a systematic fashion to every token or string of tokens.

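To make this concrete, here is a minimal sketch of a symbol system (the
token shapes, formation rules, and interpretation below are invented
for this note, not taken from Harnad): the tokens are arbitrary shapes,
the rules combining them are explicit and purely syntactic, and the
semantic interpretation is systematic yet entirely external to the
system.

#+begin_src python
# A toy symbol system (illustrative only). The token shapes "#" and "@"
# and the operator names are arbitrary: the system manipulates them
# solely on the basis of their shapes, never their meanings.

def well_formed(e):
    """Explicit formation rules: "#" and "@" are atomic tokens;
    ("NOT", a) and ("AND", a, b) are well-formed combinations."""
    if e in ("#", "@"):
        return True
    if isinstance(e, tuple) and len(e) == 2 and e[0] == "NOT":
        return well_formed(e[1])
    if isinstance(e, tuple) and len(e) == 3 and e[0] == "AND":
        return well_formed(e[1]) and well_formed(e[2])
    return False

def interpret(e):
    """A *systematic* semantic interpretation, assigned from outside:
    "#" means True, "@" means False, and a combination's meaning is a
    fixed function of the meanings of its parts (compositionality)."""
    if e == "#":
        return True
    if e == "@":
        return False
    if e[0] == "NOT":
        return not interpret(e[1])
    return interpret(e[1]) and interpret(e[2])  # e[0] == "AND"

expr = ("NOT", ("AND", "#", "@"))
print(well_formed(expr), interpret(expr))  # => True True
#+end_src

Note that the meaning lives entirely in =interpret=, i.e. "in our
heads": nothing inside the token-manipulating machinery connects "#"
to anything in the world. This is the sense in which the
interpretation is parasitic rather than intrinsic.
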
The view that cognition is such a symbol system is called
/symbolism/. The alternative view, /connectionism/, has its roots in
biological models of the brain, and posits that the network of
connections in the brain is what defines cognition, without any formal
symbol system.

#+begin_quote
According to connectionism, cognition is not symbol manipulation but
dynamic patterns of activity in a multilayered network of nodes or
units with weighted positive and negative
interconnections. citep:harnad1990_symbol_groun_probl
#+end_quote

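For contrast with the sketch above, here is a minimal example of the
kind of system the quote describes (the sizes, weights, and activation
function are arbitrary choices for this note): there are no tokens and
no explicit rules, only units with weighted positive and negative
interconnections, and what is "represented" is a distributed pattern
of activity.

#+begin_src python
# A toy multilayered network (illustrative only): activity propagates
# through weighted links; no symbols, no explicit combination rules.
import math
import random

def unit(inputs, weights):
    """One unit: weighted sum of incoming activity, squashed to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows):
    """One layer: each row of weights feeds one unit."""
    return [unit(inputs, row) for row in weight_rows]

random.seed(0)
# Weighted positive and negative interconnections: 3 -> 4 -> 2 units.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_output = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

pattern = layer(layer([0.2, -0.5, 0.9], w_hidden), w_output)
print(pattern)  # a distributed activity pattern: two values in (0, 1)
#+end_src

Unlike =interpret= above, there is no systematic, unit-by-unit way to
read a meaning off =pattern=; this is precisely the criticism
discussed below.
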
One common criticism of connectionism is that it does not meet the
compositionality criterion. Moreover, we cannot give a semantic
interpretation of connectionist patterns in a systematic way as we can
in symbolic systems. This issue was raised again recently by Gary
Marcus in his book /Rebooting AI/
citep:marcus2019_reboot_ai. Human cognition makes extensive use of
internal representations. Chomsky's theories on the existence of a
"universal grammar" are a good example of such internal structure for
linguistics. These cognitive representations seem to be highly
structured (as demonstrated by the work of Kahneman and Tversky
citep:kahneman2011_think_fast_slow), and compositional. (See also
Marcus's [[https://thegradient.pub/an-epidemic-of-ai-misinformation/][recent article]] in /The Gradient/.)

#+CAPTION: Connectionism versus symbol systems (Taken from cite:harnad1990_symbol_groun_probl.)
|------------------------------------------------------------------------------------------------------------------|
| *Strengths of connectionism:*                                                                                      |
| 1) Nonsymbolic Function: As long as it does not aspire to be a symbol system, a connectionist network has the advantage of not being subject to the symbol grounding problem. |
| 2) Generality: Connectionism applies the same small family of algorithms to many problems, whereas symbolism, being a methodology rather than an algorithm, relies on endless problem-specific symbolic rules. |
| 3) "Neurosimilitude": Connectionist architecture seems more brain-like than a Turing machine or a digital computer. |
| 4) Pattern Learning: Connectionist networks are especially suited to the learning of patterns from data.           |
|------------------------------------------------------------------------------------------------------------------|
| *Weaknesses of connectionism:*                                                                                     |
| 1) Nonsymbolic Function: Connectionist networks, because they are not symbol systems, do not have the systematic semantic properties that many cognitive phenomena appear to have. |
| 2) Generality: Not every problem amounts to pattern learning. Some cognitive tasks may call for problem-specific rules, symbol manipulation, and standard computation. |
| 3) "Neurosimilitude": Connectionism's brain-likeness may be superficial and may (like toy models) camouflage deeper performance limitations. |
|------------------------------------------------------------------------------------------------------------------|
| *Strengths of symbol systems:*                                                                                     |
| 1) Symbolic Function: Symbols have the computing power of Turing Machines and the systematic properties of a formal syntax that is semantically interpretable. |
| 2) Generality: All computable functions (including all cognitive functions) are equivalent to a computational state in a Turing Machine. |
| 3) Practical Successes: Symbol systems' ability to generate intelligent behavior is demonstrated by the successes of Artificial Intelligence. |
|------------------------------------------------------------------------------------------------------------------|
| *Weaknesses of symbol systems:*                                                                                    |
| 1) Symbolic Function: Symbol systems are subject to the symbol grounding problem.                                  |
| 2) Generality: Turing power is too general. The solutions to AI's many toy problems do not give rise to common principles of cognition but to a vast variety of ad hoc symbolic strategies. |
|------------------------------------------------------------------------------------------------------------------|

** Exposing the issue: thought experiments