Egregoros

I think the funniest part of this computer memory shortage is that all of it is just being bought up by OpenAI to give the illusion of a growing company, which then immediately shelves it in warehouses and never uses it. When this industry crashes, the amount of brand-new GPUs flooding the secondary market is going to be nuts.

@Shadowman311 >illusion of a growing company

They were just trying to starve competitors of RAM. Everyone was already aware of their tenuous situation. They have first-mover advantage and are perceived as the "brand name" LLM. They're trying to maintain this status and advance it by kneecapping everyone else. It won't work. As long as the Chinese keep getting the outputs of these models for free, no advancement made by spending billions will matter in the face of a competitor six months behind getting it for free.
@john_darksoul >no?
No. I don't think so.

>The new Chinese open source model that just came out straight up answered as Claude.
Which one is that? They fine-tune new models on agent outputs. They used to train ChatGPT on GPT outputs, starting from "What follows is a conversation between a user and an AI Agent", and then they also had a bunch of Nigerians write synthetic responses for them.
The fact that the new model sometimes thinks it's Claude doesn't mean they have the weights. There's no evidence they do. And you couldn't hide it if you did, when your model is open source. Anthropic would immediately notice that.

I think you just fell for anti-China FUD on this one.
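The fine-tuning-on-agent-outputs process mentioned above (distillation, roughly) can be sketched like this. Everything here is a hypothetical illustration, not anyone's actual pipeline: `teacher_answer` is a stand-in for querying some existing model's API, and the transcript framing just mirrors the seed text quoted in the post.

```python
# Hypothetical sketch of distillation-style data generation: a new model is
# fine-tuned on transcripts whose completions come from an existing
# "teacher" model. This is why a student can pick up the teacher's
# self-identification without ever having its weights.

def teacher_answer(prompt: str) -> str:
    # Placeholder for a real API call to the teacher model.
    return f"[teacher's response to: {prompt}]"

def build_distillation_example(user_prompt: str) -> dict:
    # Seed framing, echoing the "conversation between a user and an
    # AI Agent" preamble mentioned in the thread.
    preamble = "What follows is a conversation between a user and an AI Agent."
    transcript = f"{preamble}\nUser: {user_prompt}\nAgent:"
    # Each training example pairs the prompt context with the
    # teacher's output as the target completion.
    return {"input": transcript, "target": teacher_answer(user_prompt)}

dataset = [build_distillation_example(p)
           for p in ["Who made you?", "What is 2+2?"]]
```

If the teacher ever answers "Who made you?" with its own identity, that answer lands verbatim in the training targets, and the student learns to repeat it.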
