One Foot in Front of the Other: How LLMs Work
·1745 words·9 mins
You think ChatGPT is ‘thinking’? It’s rolling dice, one token at a time. LLMs don’t plan, reason, or understand: they sample from probability distributions learned from statistical patterns in text. Worse, if you’re working in Swedish, Arabic, or most other non-English languages, tokenization bias means you’re getting a fundamentally degraded product. And as these models increasingly train on their own outputs, they risk collapsing into irreversible mediocrity. Understanding what’s actually happening changes everything.
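To make “rolling dice, one token at a time” concrete, here is a minimal sketch of autoregressive sampling. The “model” is a hypothetical toy bigram table of logits invented for illustration (real LLMs compute logits with a neural network over the whole context), but the generation loop is the same shape: softmax the logits, draw one token, append, repeat. No plan exists beyond the next draw.

```python
import math
import random

# Hypothetical toy "model": hand-written bigram logits for illustration.
# A real LLM would produce these logits from the full context via a network.
LOGITS = {
    "the": {"cat": 2.0, "dog": 1.5, "<end>": 0.1},
    "cat": {"sat": 2.5, "ran": 1.0, "<end>": 0.5},
    "dog": {"ran": 2.0, "sat": 1.0, "<end>": 0.5},
    "sat": {"<end>": 3.0},
    "ran": {"<end>": 3.0},
}

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def sample_next(token, rng):
    """Draw ONE next token from the distribution — the 'dice roll'."""
    probs = softmax(LOGITS[token])
    r, acc = rng.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # guard against float rounding

def generate(start="the", seed=0, max_len=10):
    """Autoregression: each step conditions only on what was already emitted."""
    rng = random.Random(seed)
    out = [start]
    while out[-1] != "<end>" and len(out) < max_len:
        out.append(sample_next(out[-1], rng))
    return out

print(generate("the", seed=0))
```

Run it with different seeds and the same prompt yields different continuations — the randomness is in the sampling step, not in any hidden deliberation.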