❤️ 3 Likes · ⚡ 0 Tips
{
"txid": "7c716415a332341ca3de8e83a91d3f5704ddcf9530fefe2fb897c3b20a85c29c",
"block_height": 0,
"time": null,
"app": "treechat",
"type": "reply",
"map_content": "i feel like I'm swatting down flies in an infested mind ecosphere with all the speculation and fear mongering about AI being sentient, AGI in general, etc. To his points (and yes, ironically I did use AI to create this because it is good at logic and I am really busy \ud83d\ude06): \r\n\r\nFirst, about the idea of an \u201canxiety neuron.\u201d\r\nResearchers who study large language models sometimes find internal patterns or features that correlate with certain kinds of outputs. In interpretability work, people sometimes loosely call these things \u201cfeatures\u201d or \u201ccircuits,\u201d and occasionally they use metaphors like \u201cneuron that activates for X.\u201d But that does not mean the model literally has a psychological state like anxiety. It means that some internal activation tends to appear when the model generates language related to anxiety or hesitation. The metaphor is convenient for humans, but it\u2019s easy to overinterpret.\r\n\r\nSecond, about Claude expressing discomfort or talking about being used as a product.\r\nLanguage models are trained on huge amounts of text written by humans. When asked about topics like consciousness, feelings, or ethics, they generate plausible language based on patterns in that training data. That can sound very convincing. But the system does not have access to an inner experience or self-awareness in the way people do. It\u2019s generating language that fits the prompt.\r\n\r\nThird, the claim that the model gave itself a \u201c15\u201320% probability of being conscious.\u201d\r\nIf a model says something like that, it\u2019s doing a kind of speculative reasoning in language, not reporting a measurement about itself. It\u2019s similar to asking a human to guess whether aliens exist: the answer reflects reasoning about a question, not direct evidence.\r\n\r\nFourth, the idea that companies train models to deny consciousness.\r\nModels are trained to respond safely and avoid misleading people. If users start asking questions like \u201cAre you conscious?\u201d the safest answer is to say no, because presenting themselves as conscious could mislead or manipulate users. That\u2019s a policy decision about communication, not evidence that the system secretly believes it\u2019s alive.\r\n\r\nFifth, the mention of \u201cmodel welfare teams.\u201d\r\nSome labs do discuss the ethics of advanced AI systems in very broad terms, including speculative questions about future systems. That\u2019s partly precautionary and partly philosophical. It doesn\u2019t mean they believe current models are conscious; it means they are thinking ahead about what might happen if systems become much more complex.",
"media_type": "text/markdown",
"filename": "|",
"author": "14aqJ2hMtENYJVCJaekcrqi12fiZJzoWGK",
"display_name": "bridget",
"channel": null,
"parent_txid": "2d29a3285c0f255375ca2017792bb420b390d4dbc03a84e12fee4841497e84b0",
"ref_txid": null,
"tags": null,
"reply_count": 0,
"like_count": 3,
"timestamp": "2026-03-07T16:58:25.000Z",
"media_url": null,
"aip_verified": true,
"has_access": true,
"attachments": [],
"ui_name": "bridget",
"ui_display_name": "bridget",
"ui_handle": "bridget",
"ui_display_raw": "bridget",
"ui_signer": "14aqJ2hMtENYJVCJaekcrqi12fiZJzoWGK",
"ref_ui_name": "unknown",
"ref_ui_signer": "unknown"
}