Measuring Faithfulness in Chain-of-Thought Reasoning

Do LLMs really “show their work” when they perform chain-of-thought reasoning? “Measuring Faithfulness in Chain-of-Thought Reasoning” is a new paper from Anthropic that studies this question empirically with a series of tests.

00:00 – Measuring Faithfulness in Chain-of-Thought Reasoning
00:53 – What is Chain-of-Thought reasoning?
03:15 – Do the Chain-of-Thought Steps Really Reflect the Model’s Reasoning?
07:03 – Possible Faithfulness Failures
08:44 – Encoded Reasoning/Steganography
12:01 – Experiment Details
15:44 – Does Truncating the Chain of Thought Change the Predicted Answer?
16:53 – Does Editing the Chain of Thought Change the Predicted Answer?
17:14 – Do Uninformative Chain of Thought Tokens Also Improve Performance?
18:28 – Does Rewording the Chain of Thought Change the Predicted Answer?
20:20 – Does Model Size Affect Chain of Thought Faithfulness?
22:04 – Limitations
24:38 – Externalized Reasoning Oversight

Topics: #ai #anthropic #CoT #reasoning

Link to the paper:

For related content:
– Twitter:
– Research lab:
– Personal webpage:
– YouTube:
– TikTok:
– Instagram:
– LinkedIn:
– Threads:
– Discord server for filtir:

(Optional) if you’d like to support the channel:

Image credit (Chelsea photo)


7 Replies to “Measuring Faithfulness in Chain-of-Thought Reasoning”

  1. I really appreciate the work you're doing; it is very interesting to see someone go over research regarding AI. I was wondering if you're interested in networks, or if you know someone who is — I want to know what areas of research are being probed by researchers these days in networks. Additionally, is there a way I could reach out to you or professors regarding different research ideas and maybe develop on them as well?

  2. Jobob Miner says:

    Thanks again. Very informative video. I don't see much of the actual thought process of AI research in a lot of AI news, so this is a great insight.

  3. 張安邦 says:

    If this trend keeps up, we actually have a chance at surviving superhuman AGI. Good news!

  4. Ster says:

    Why would adding “…” tokens simulate more compute time? Isn't the amount of compute the same per token, regardless of how much context comes before?

  5. Bryan Nsoh says:

    Very enlightening, Samuel. Thank you immensely! Given these insights on chain-of-thought reasoning, how would you specify custom instructions for GPT-4 (using the new custom instructions feature) to ensure it always outputs an optimally reasoned answer?
