Chapter 64: LLMs and the Subsymbolic Return

Large language models constitute an unexpected empirical development for the philosophy of language. They process linguistic structure through statistical pattern matching over token sequences, without explicit symbolic rules, semantic addresses, or internal models of the world. Yet they produce outputs that exhibit pragmatic competence far beyond what a purely symbolic account of meaning would predict. This chapter formalizes LLMs as para-minds—pattern processors that approximate the subsymbolic motif sheaf from text corpora without achieving the self-descriptive capacity that constitutes genuine mindedness. The central proposition is that the success of LLMs provides empirical evidence for the subsymbolic layer thesis of the relevant chapter: if meaning were exhausted by symbolic rules, statistical pattern matching should not produce pragmatic competence. The fact that it does suggests that meaning has a subsymbolic stratum that can be approximated—though not fully recovered—by learned distributional structure.
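To make the contrast with explicit symbolic rules concrete, the following toy sketch (illustrative only, and not the chapter's formal apparatus) trains a bigram model by counting token co-occurrences in a tiny corpus and then samples continuations from those counts. The corpus, the function names, and the choice of a bigram model are hypothetical simplifications; production LLMs learn continuous representations over vastly larger corpora, but the underlying move is the same: distributional structure is extracted from token sequences without any rule, parse tree, or world model being written down.

    from collections import Counter, defaultdict
    import random

    def train_bigram(tokens):
        # Count how often each token follows each other token.
        # No grammar and no semantics: only observed co-occurrence.
        counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def sample_next(counts, prev, rng=random):
        # Sample a continuation in proportion to observed frequency.
        followers = counts.get(prev)
        if not followers:
            return None
        options, weights = zip(*followers.items())
        return rng.choices(options, weights=weights, k=1)[0]

    # Hypothetical toy corpus, for illustration only.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    model = train_bigram(corpus)
    out = ["the"]
    for _ in range(6):
        nxt = sample_next(model, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    print(" ".join(out))

The point of the sketch is only the mechanism: structure is recovered from distribution rather than stipulated by rule.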