dc.contributor.author
Garagnani, Max
dc.date.accessioned
2025-01-06T08:01:11Z
dc.date.available
2025-01-06T08:01:11Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/42808
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-42524
dc.description.abstract
The ability to coactivate (or “superpose”) multiple conceptual representations is a fundamental function that we constantly rely upon; this is crucial in complex cognitive tasks requiring multi-item working memory, such as mental arithmetic, abstract reasoning, and language comprehension. As such, an artificial system aspiring to implement any of these aspects of general intelligence should be able to support this operation. I argue here that standard, feed-forward deep neural networks (DNNs) are unable to implement this function, whereas an alternative, fully brain-constrained class of neural architectures spontaneously exhibits it. On the basis of novel simulations, this proof-of-concept article shows that deep, brain-like networks trained with biologically realistic Hebbian learning mechanisms display the spontaneous emergence of internal circuits (cell assemblies) having features that make them natural candidates for supporting superposition. Building on previous computational modelling results, I also argue that, and offer an explanation as to why, in contrast, modern DNNs trained with gradient descent are generally unable to co-activate their internal representations. While deep brain-constrained neural architectures spontaneously develop the ability to support superposition as a result of (1) neurophysiologically accurate learning and (2) cortically realistic between-area connections, backpropagation-trained DNNs appear to be unsuited to implement this basic cognitive operation, arguably necessary for abstract thinking and general intelligence. The implications of this observation are briefly discussed in the larger context of existing and future artificial intelligence systems and neuro-realistic computational models.
en
dc.format.extent
18 pages
dc.rights.uri
https://creativecommons.org/licenses/by/4.0/
dc.subject
Concept combination
en
dc.subject
Multi-item working memory
en
dc.subject
Brain-constrained modelling
en
dc.subject
Semantic representations
en
dc.subject
Artificial cognitive system
en
dc.subject
Cell assembly
en
dc.subject
General intelligence
en
dc.subject.ddc
400 Language::410 Linguistics
dc.title
On the ability of standard and brain-constrained deep neural networks to support cognitive superposition: a position paper
dc.type
Scientific article
dcterms.bibliographicCitation.doi
10.1007/s11571-023-10061-1
dcterms.bibliographicCitation.journaltitle
Cognitive Neurodynamics
dcterms.bibliographicCitation.number
6
dcterms.bibliographicCitation.pagestart
3383
dcterms.bibliographicCitation.pageend
3400
dcterms.bibliographicCitation.volume
18
dcterms.bibliographicCitation.url
https://doi.org/10.1007/s11571-023-10061-1
refubium.affiliation
Philosophie und Geisteswissenschaften
refubium.affiliation.other
Brain Language Laboratory
refubium.resourceType.isindependentpub
no
dcterms.accessRights.openaire
open access
dcterms.isPartOf.eissn
1871-4099
refubium.resourceType.provider
WoS-Alert