\n","updatedAt":"2024-06-25T10:15:25.656Z","author":{"_id":"5e7749883d77a72421292d07","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5e7749883d77a72421292d07/M4AmBReZk_otxCIG3o0bL.jpeg","fullname":"Gabriele Sarti","name":"gsarti","type":"user","isPro":false,"isHf":false,"isHfAdmin":false,"isMod":false,"followerCount":233}},"numEdits":1,"identifiedLanguage":{"language":"en","probability":0.8569597601890564},"editors":["gsarti"],"editorAvatarUrls":["https://cdn-avatars.huggingface.co/v1/production/uploads/5e7749883d77a72421292d07/M4AmBReZk_otxCIG3o0bL.jpeg"],"reactions":[],"isReport":false}}],"primaryEmailConfirmed":false,"paper":{"id":"2406.16254","authors":[{"_id":"667a74f59f501609d28e34b9","user":{"_id":"64c7dda1d418013c77d3acec","avatarUrl":"/avatars/10bc3e8754e1a9815da57d3aec2f0bb3.svg","isPro":false,"fullname":"Alessandro Stolfo","user":"alestolfo","type":"user"},"name":"Alessandro Stolfo","status":"admin_assigned","statusLastChangedAt":"2024-06-25T13:38:41.156Z","hidden":false},{"_id":"667a74f59f501609d28e34ba","name":"Ben Wu","hidden":false},{"_id":"667a74f59f501609d28e34bb","user":{"_id":"631f446fdaa9591e52319035","avatarUrl":"/avatars/34d7ff8bc59aac0ed77a961596c4e4b1.svg","isPro":false,"fullname":"Wes Gurnee","user":"wesg","type":"user"},"name":"Wes Gurnee","status":"admin_assigned","statusLastChangedAt":"2024-06-25T13:39:02.040Z","hidden":false},{"_id":"667a74f59f501609d28e34bc","user":{"_id":"614c57f1ee44bcfe57b366d6","avatarUrl":"/avatars/186a9aed84681246f48ed2a012c50def.svg","isPro":false,"fullname":"Yonatan Belinkov","user":"belinkov","type":"user"},"name":"Yonatan Belinkov","status":"admin_assigned","statusLastChangedAt":"2024-06-25T13:39:07.898Z","hidden":false},{"_id":"667a74f59f501609d28e34bd","user":{"_id":"62e795ac7c106973ac6515eb","avatarUrl":"/avatars/2eaa80217aae2a8dbd6cc350b7f35311.svg","isPro":false,"fullname":"Song","user":"Xingyi","type":"user"},"name":"Xingyi Song","status":"admin_assigned","statusLastChangedAt":"2024-06-25T13:39:17.590Z","hidden":false},{"_id":"667a74f59f501609d28e34be","name":"Mrinmaya Sachan","hidden":false},{"_id":"667a74f59f501609d28e34bf","user":{"_id":"62669380c8bc5cf80ca97350","avatarUrl":"/avatars/6d5cd2261163308b82341c1ce28984d1.svg","isPro":false,"fullname":"Neel Nanda","user":"NeelNanda","type":"user"},"name":"Neel Nanda","status":"admin_assigned","statusLastChangedAt":"2024-06-25T13:39:26.400Z","hidden":false}],"mediaUrls":["https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/wky1DxJeLezxAhcnyLOBX.png"],"publishedAt":"2024-06-24T01:31:03.000Z","submittedOnDailyAt":"2024-06-25T08:44:18.388Z","title":"Confidence Regulation Neurons in Language Models","submittedOnDailyBy":{"_id":"5e7749883d77a72421292d07","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5e7749883d77a72421292d07/M4AmBReZk_otxCIG3o0bL.jpeg","isPro":false,"fullname":"Gabriele Sarti","user":"gsarti","type":"user"},"summary":"Despite their widespread use, the mechanisms by which large language models\n(LLMs) represent and regulate uncertainty in next-token predictions remain\nlargely unexplored. This study investigates two critical components believed to\ninfluence this uncertainty: the recently discovered entropy neurons and a new\nset of components that we term token frequency neurons. Entropy neurons are\ncharacterized by an unusually high weight norm and influence the final layer\nnormalization (LayerNorm) scale to effectively scale down the logits. 
Our work\nshows that entropy neurons operate by writing onto an unembedding null space,\nallowing them to impact the residual stream norm with minimal direct effect on\nthe logits themselves. We observe the presence of entropy neurons across a\nrange of models, up to 7 billion parameters. On the other hand, token frequency\nneurons, which we discover and describe here for the first time, boost or\nsuppress each token's logit proportionally to its log frequency, thereby\nshifting the output distribution towards or away from the unigram distribution.\nFinally, we present a detailed case study where entropy neurons actively manage\nconfidence in the setting of induction, i.e. detecting and continuing repeated\nsubsequences.","upvotes":10,"discussionId":"667a74f69f501609d28e3583","ai_summary":"Entropy neurons and token frequency neurons in large language models influence uncertainty by scaling down logits and adjusting output distributions based on token frequency, respectively.","ai_keywords":["large language models","entropy neurons","token frequency neurons","normalization","logits","unembedding null space","residual stream norm","unigram distribution","induction"]},"canReadDatabase":false,"canManagePapers":false,"canSubmit":false,"hasHfLevelAccess":false,"upvoted":false,"upvoters":[{"_id":"5e7749883d77a72421292d07","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/5e7749883d77a72421292d07/M4AmBReZk_otxCIG3o0bL.jpeg","isPro":false,"fullname":"Gabriele Sarti","user":"gsarti","type":"user"},{"_id":"620783f24e28382272337ba4","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/620783f24e28382272337ba4/zkUveQPNiDfYjgGhuFErj.jpeg","isPro":false,"fullname":"GuoLiangTang","user":"Tommy930","type":"user"},{"_id":"655ac762cb17ec19ef82719b","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/655ac762cb17ec19ef82719b/1kDncYrGLYS_2SR8cNdAL.png","isPro":false,"fullname":"Welcome to matlok","user":"matlok","type":"user"},{"_id":"63996725f123767aa2e46283","avatarUrl":"/avatars/3acd1390c6dba96d712765d302eb33e3.svg","isPro":false,"fullname":"Alan","user":"hiyata","type":"user"},{"_id":"64587be872b60ae7a3817858","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64587be872b60ae7a3817858/BbdOOxOCEzWTvEpkWp8MM.png","isPro":false,"fullname":"Minbyul Jeong","user":"Minbyul","type":"user"},{"_id":"65308c4c70a88b63f005901e","avatarUrl":"/avatars/45ec8c49c4c421e08e93ce4e94535cc8.svg","isPro":false,"fullname":"Dylan Hillier","user":"DylanASHillier","type":"user"},{"_id":"667b989320887e69a06640f3","avatarUrl":"/avatars/b0cf12d3659f44a3ede7101f3240374f.svg","isPro":false,"fullname":"Sowmya R.","user":"RSowmi","type":"user"},{"_id":"61e7c06064d3c6c929057bee","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/61e7c06064d3c6c929057bee/QxULx1EA1bgmjXxupQX4B.jpeg","isPro":false,"fullname":"蓋瑞王","user":"gary109","type":"user"},{"_id":"6538119803519fddb4a17e10","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6538119803519fddb4a17e10/ffJMkdx-rM7VvLTCM6ri_.jpeg","isPro":false,"fullname":"samusenps","user":"samusenps","type":"user"},{"_id":"63b6f2e752c02ae8acbaa4d8","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/1672934038280-noauth.jpeg","isPro":false,"fullname":"Habibullah Akbar","user":"ChavyvAkvar","type":"user"}],"acceptLanguages":["*"],"dailyPaperRank":0}">Confidence Regulation Neurons in Language Models
Abstract
Entropy neurons and token frequency neurons in large language models influence uncertainty by scaling down logits and adjusting output distributions based on token frequency, respectively.
Despite their widespread use, the mechanisms by which large language models (LLMs) represent and regulate uncertainty in next-token predictions remain largely unexplored. This study investigates two critical components believed to influence this uncertainty: the recently discovered entropy neurons and a new set of components that we term token frequency neurons. Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits. Our work shows that entropy neurons operate by writing onto an unembedding null space, allowing them to impact the residual stream norm with minimal direct effect on the logits themselves. We observe the presence of entropy neurons across a range of models, up to 7 billion parameters. On the other hand, token frequency neurons, which we discover and describe here for the first time, boost or suppress each token's logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution. Finally, we present a detailed case study where entropy neurons actively manage confidence in the setting of induction, i.e. detecting and continuing repeated subsequences.
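To make the entropy-neuron mechanism concrete, here is a minimal sketch (not the authors' code release) of how one might check whether a candidate neuron's output weight lies mostly in the effective null space of the unembedding matrix. It assumes a GPT-2-style model loaded with TransformerLens; the layer and neuron indices are illustrative placeholders, not neurons identified in the paper.

```python
# Minimal sketch: estimate how much of a neuron's output weight vector lies in
# the effective null space of the unembedding matrix W_U.
# Model choice, layer, and neuron indices below are illustrative assumptions.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

layer, neuron = 11, 584                      # hypothetical candidate entropy neuron
w_out = model.W_out[layer, neuron].detach()  # (d_model,) neuron output direction
W_U = model.W_U.detach()                     # (d_model, d_vocab) unembedding matrix

# Singular directions of W_U with near-zero singular values span its effective
# null space: writing there changes the residual stream norm (and thus the final
# LayerNorm scale) while having minimal direct effect on the logits.
U, S, _ = torch.linalg.svd(W_U, full_matrices=False)  # U: (d_model, d_model)
k = int((S > 1e-2 * S.max()).sum())                   # effective rank of W_U
null_basis = U[:, k:]                                  # directions W_U barely reads

frac_in_null = ((null_basis.T @ w_out).norm() ** 2 / w_out.norm() ** 2).item()
print(f"Fraction of w_out norm^2 in W_U's effective null space: {frac_in_null:.2f}")
```

For a genuine entropy neuron, this fraction should be markedly higher than for a typical neuron in the same layer; the singular-value threshold used to define the "effective" null space is a tunable assumption.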
Community
New in the Daily Picks in Interpretability & Analysis of LMs (https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9)
This work focuses on neuron-level mechanisms for confidence calibration in LLMs, identifying entropy neurons and token frequency neurons. Entropy neurons have high weight norms but minimal direct impact on the logits: they write onto an unembedding null space and leverage the final layer normalization to scale down the logits, effectively modulating the entropy of the output distribution. Token frequency neurons boost or suppress each token's logit in proportion to its log frequency, shifting the output distribution towards or away from the unigram distribution. Experiments demonstrate how entropy neurons interact with other mechanisms, such as induction heads, improving model performance by calibrating outputs across the entire distribution.
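As a companion to the summary above, here is a small hedged sketch of the token-frequency-neuron idea: correlate a neuron's direct logit contribution with the log unigram frequency of each token. The model and the layer/neuron indices are again illustrative, and `unigram_counts` is a placeholder tensor you would replace with real corpus token counts.

```python
# Minimal sketch (illustrative, not the authors' code): does a neuron's direct
# effect on the logits track log token frequency? Replace `unigram_counts`
# with real corpus counts; the random values here are only a placeholder.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
layer, neuron = 11, 2870                     # hypothetical candidate frequency neuron

w_out = model.W_out[layer, neuron].detach()  # (d_model,)
logit_effect = w_out @ model.W_U.detach()    # (d_vocab,) direct logit contribution

unigram_counts = torch.randint(1, 10_000, (model.cfg.d_vocab,))  # placeholder counts
log_freq = torch.log(unigram_counts.float())

# Pearson correlation between the neuron's logit effect and log token frequency;
# a strong positive (negative) value suggests the neuron pushes the output
# distribution towards (away from) the unigram distribution.
a = logit_effect - logit_effect.mean()
b = log_freq - log_freq.mean()
corr = (a @ b) / (a.norm() * b.norm())
print(f"Correlation with log unigram frequency: {corr.item():.2f}")
```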