We’re nearing the point where we’re gonna have to recycle the senility memes from the Biden era.
@jb @disclosetv Tell me more. What is he blathering about now?
Bottom line- Anthropic refused to fully unlock their AI and wanted absolute assurances in writing from the US government that Anthropic (Claude) would not be used for any sort of mass surveillance, including domestic. The government rejected the demand; Anthropic refused to bend the knee to Zion Don and now he is all asshurt about it.
Anthropic did the right thing.
The fact that the others, i.e. Palantir, Google, OpenAI, and xAI, had no problem with this is disturbing.
@CliffSecord @disclosetv @jb @Kalogerosstilitis2RevengeoftheJunta >The decision came following a dispute between Anthropic and the Pentagon over whether the company could prohibit its tools from being used in mass surveillance of American citizens or to create autonomous weapon systems.
>And it happened as at least one other AI firm said it had similar concerns about the military uses of AI. Earlier in the day, OpenAI CEO Sam Altman says he shares the "red lines" set by rival Anthropic restricting how the military uses AI models, amid Anthropic's escalating feud with the Pentagon.
>By wading into the standoff between Anthropic and the Pentagon, Altman could complicate the Pentagon's efforts to replace Anthropic if it follows through on its threat to cancel the contract.
>OpenAI also has a Defense Department contract, along with Google, xAI, and Anthropic, but Anthropic was the first to be cleared for use on classified systems.
xAI has not complied because it is not cleared, OpenAI refused like Anthropic did.
https://www.npr.org/2026/02/27/nx-s1-5729118/trump-ant...
I was unaware.
Good for them.
@CliffSecord @disclosetv @jb @Kalogerosstilitis2RevengeoftheJunta it's a developing story
https://techcrunch.com/2026/02/27/employees-at-google-...
Don't get me wrong here- I would have payed good money to be in the Pentagon with all the military leadership wargaming with AI, just to see their reactions when, no matter what scenario-based input they gave, the AI recommended the nuclear annihilation of Israel and the extermination of all jews as the only acceptable & logical course of action.
Even if Palantir is the only AI they end up using, there is still a very good chance that this very scenario happens, especially when it starts analyzing every Israeli military action.
Reminds me of the recent news slop