As a rule, a linguistic success is a non-lucky success. In other words, grammaticality, communication, interpretation, mutual understanding, and the forms of social coordination thereby enabled succeed by innate capacity, acquired skill, and social convention, as opposed to luck or chance. To be sure, we sometimes manage to luckily pick out the same referent as an interlocutor (e.g., Clark Kent and Superman), hope that we are using the right words in a second-language conversation, and express the right message with ambiguous emoticons or jokes. Yet, while we occasionally have the sense that we are rolling dice with words and hoping for good luck, meaning and communication would be impossible if we only and always succeeded no better than luck allows. Stable grammars, word meanings, and conversational norms require very robust regularities.
Meaningful communication requires a reliability far greater than chance produces. Day in and day out, communicating with words is drastically different from rolling dice at a gaming table. Getting things wrong because of a linguistic misunderstanding can prove inconvenient at best and fatal at worst, but even good luck is a bad outcome: our real communicative intentions have failed, and common ground for subsequent linguistic exchange has not accrued. Thus, if we interpret luck as achieving a goal at a rate no higher than chance, all kinds of communicative action require a reliability much greater than luck affords. Luck in linguistic communication must be the exception rather than the rule.
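To see how quickly chance-level success collapses, consider a toy calculation (ours, purely illustrative; the ambiguity and turn counts below are invented). If each utterance were ambiguous among a handful of readings and hearers simply guessed, whole conversations would almost never be luckily understood:

```python
# Illustrative only: how fast chance-level "success" collapses over a
# conversation. Assume each utterance is ambiguous among n readings and
# a hearer who relies on luck alone picks one at random.

def chance_success(n_readings: int, n_turns: int) -> float:
    """Probability that every turn in a conversation is luckily understood."""
    per_turn = 1.0 / n_readings
    return per_turn ** n_turns

# Even with mild ambiguity (4 readings), ten turns of pure luck almost
# never yield mutual understanding.
print(chance_success(4, 1))   # 0.25 -- one lucky interpretation
print(chance_success(4, 10))  # ~9.5e-07 -- a whole conversation by luck
```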
“Social coordination succeeds by innate capacity, acquired skill, and social convention, as opposed to luck or chance.”
Recently, large language models (LLMs) have challenged the assumption that only humans can communicate through language. ChatGPT employs luck-reducing mechanisms to achieve a very high success rate in sentence completion and contextualization, at least at the level of performance. A crucial question facing us with the rise of LLMs is how, if at all, human linguistic success differs from ChatGPT’s linguistic success. One novel and fruitful way to pursue this question is to compare how each system achieves linguistic luck reduction.
Do human and AI systems utilize the same luck-reducing mechanisms, or are they essentially different? How we answer this question bears on fundamental issues in linguistics and the philosophy of language, as well as on current questions about the nature of language raised by LLMs. How do we humans reduce our dependence on luck? How does ChatGPT do it? What, if anything, is the difference between the two?
We suggest the following broad framework for pursuing the questions above, although it leaves us with a fruitful dilemma rather than an easy answer. We suggest that causation, inference, and intention are the main sources of luck reduction in syntax, semantics, and pragmatics. Humans utilize (at least) this degree of variation in the luck-reducing mechanisms that underlie reliable communication at different levels of linguistic competence. (More detailed frameworks for approaching linguistic luck reduction can be found in Linguistic Luck: Safeguards and Threats to Linguistic Communication.)
“A crucial question facing us with the rise of LLMs is how, if at all, human linguistic success differs from ChatGPT’s linguistic success.”
Unlike these diverse sources of linguistic luck reduction in humans (some of which we share with animals), ChatGPT seems to be a “one-trick pony” when it comes to luck reduction. LLM luck reduction rests on a single method: statistical-predictive analysis of syntactic features of text. The way in which recent LLMs compress information into the next most likely set of words or features might nonetheless be enough to mimic the causal, inferential, and intentional mechanisms employed by humans.
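As a rough sketch of what this single method amounts to (a toy illustration, not ChatGPT’s actual architecture; the candidate words and scores below are invented), an LLM assigns a score to each candidate next token, converts the scores into a probability distribution, and samples one continuation:

```python
import math
import random

# Toy sketch of next-token prediction (not ChatGPT's real implementation):
# the model scores each candidate continuation (a "logit"), converts the
# scores to probabilities with softmax, and samples one token.

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

context = "The cat sat on the"
candidates = ["mat", "roof", "keyboard", "moon"]
logits = [3.2, 1.1, 0.7, -2.0]  # hypothetical model scores

probs = softmax(logits)
next_token = random.choices(candidates, weights=probs, k=1)[0]
print(dict(zip(candidates, [round(p, 3) for p in probs])), "->", next_token)
```

On this picture, “luck reduction” is just the concentration of probability mass on continuations that fit the preceding text, learned from statistical regularities in the training data.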
The lack of diversity in luck-reducing mechanisms appears to be one clear difference between human linguistic success and ChatGPT’s, and it may point us to other fundamental differences. However, to show that a difference in luck-reducing arsenals indicates a fundamental difference in linguistic capacities, one must show that the diversity of human mechanisms cannot be reduced to the powerful, monolithic luck reduction of ChatGPT (and future LLM systems). The diversity of human mechanisms may simply reflect our limited capacities in comparison with systems like ChatGPT. If the initial differences discerned in luck-reducing mechanisms dissolve upon further analysis, they do not indicate a fundamental difference between human and machine linguistic capacities.
This can be framed as a dilemma that illustrates the importance of understanding linguistic luck-reducing mechanisms: either the diverse human luck-reduction mechanisms (causation, inference, intention) are reducible to the single process ChatGPT employs, or they are not. If they are not reducible, then further inquiry in this direction should reveal some deep difference between human and machine linguistic capacities. This would be a fruitful and important result, and it may yield a scientifically powerful tool for understanding how human language is irreducible to statistics and computational power.
“Unlike the diverse sources of linguistic luck reduction in humans, ChatGPT seems to be a ‘one-trick pony’.”
However, if the full range of human luck-reducing mechanisms turns out to be the workings of a single formal/syntactic mechanism used by LLMs, then human luck reduction is ultimately explainable as just so much ChatGPT-style luck reduction. If humans had the computational power of the emerging LLMs, the remaining luck-reducing processes might be unnecessary. This would also be a fruitful and important result for understanding human linguistic capacities.
In either case, reducible or not, inquiry linking luck reduction, LLMs, and human linguistic systems is fruitful for understanding essential features of human language.