Language influences cognitive and conceptual processing, but the mechanisms through which such causal effects are realized in the human brain remain unknown. Here, we use a brain-constrained deep neural network model of category formation and symbol learning and analyze the model’s emergent internal mechanisms at the neural-circuit level. In one set of simulations, the network was presented with similar patterns of neural activity indexing instances of objects and actions belonging to the same categories. Biologically realistic Hebbian learning led to the formation of instance-specific neurons distributed across multiple areas of the network and, in addition, to cell-assembly circuits of “shared” neurons responding to all category instances, the network correlates of conceptual categories. In two separate sets of simulations, the network learned the same patterns together with symbols for individual instances [“proper names” (PN)] or with symbols related to classes of instances sharing common features [“category terms” (CT)]. Learning CT markedly increased the number of shared neurons in the network, thereby making category representations more robust, while reducing the number of instance-specific neurons. In contrast, proper name learning prevented a substantial reduction of instance-specific neurons and blocked the overgrowth of category-general cells. Representational similarity analysis further confirmed that the neural activity patterns of category instances became more similar to each other after category-term learning, relative to both learning with PN and learning without any symbols. These network-based mechanisms for concepts, PN, and CT explain why and how symbol learning changes object perception and memory, as revealed by experimental studies.
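The core mechanism summarized above, Hebbian co-activation learning that strengthens connections among “shared” (category-general) neurons more than among instance-specific ones, followed by a representational similarity comparison, can be illustrated with a minimal toy sketch. This is not the paper’s brain-constrained architecture; the network size, feature probabilities, learning rate, and step count below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200       # toy number of model neurons (illustrative assumption)
STEPS = 500   # learning trials
LR = 0.05     # Hebbian learning rate

# Two instances of one category: each input pattern combines category-general
# ("shared") features with instance-specific features.
shared = rng.random(N) < 0.10
inst_a = shared | (rng.random(N) < 0.05)
inst_b = shared | (rng.random(N) < 0.05)

# Hebbian learning: co-active units strengthen their mutual connections.
W = np.zeros((N, N))
for _ in range(STEPS):
    x = (inst_a if rng.random() < 0.5 else inst_b).astype(float)
    W += LR * np.outer(x, x)
np.fill_diagonal(W, 0.0)

# Shared units are co-active on every trial, so their mutual weights grow
# roughly twice as fast as those among strictly instance-specific units.
a_only = inst_a & ~inst_b
mean_shared = W[np.ix_(shared, shared)].mean()
mean_a_only = W[np.ix_(a_only, a_only)].mean()

# RSA-style comparison: activity evoked through the learned weights is more
# similar across the two instances than the raw input patterns were.
evoked_a = W @ inst_a.astype(float)
evoked_b = W @ inst_b.astype(float)
r_input = np.corrcoef(inst_a.astype(float), inst_b.astype(float))[0, 1]
r_evoked = np.corrcoef(evoked_a, evoked_b)[0, 1]
```

In this sketch, `mean_shared > mean_a_only` reproduces the preferential strengthening of category-general circuits, and `r_evoked > r_input` mirrors the RSA finding that instance representations grow more similar after associative learning over shared features.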