Egregoros

I think the funniest part of this computer memory shortage is that all of it is just being bought up by OpenAI to create the illusion of a growing company, which then immediately shelves it in warehouses and never uses it. When this industry crashes, the number of brand-new GPUs flooding the secondary market is going to be nuts.

@Shadowman311 >illusion of a growing company

They were just trying to starve competitors of RAM. Everyone was already aware of their tenuous situation. They have first-mover advantage and are perceived as the “brand name” LLM. They’re trying to maintain and extend that status by kneecapping everyone else. It won’t work. As long as the Chinese keep getting the results of these models for free, no advancement made by spending billions will matter against a competitor 6 months behind who gets it for free.

@john_darksoul >no?
no. I don't think so.

>The new Chinese open source model that just came out straight up answered as Claude.
Which one is that? They fine-tune new models on agent outputs. They used to train ChatGPT on GPT outputs, starting from "What follows is a conversation between a user and an AI agent", and then they also had a bunch of Nigerians write synthetic responses for them.
The fact that the new model sometimes thinks it's Claude doesn't mean they have the weights. There's no evidence they do. And you couldn't hide it if you did, not when your model is open source. Anthropic would immediately notice.

I think you just fell for Anti-China FUD on this one
@john_darksoul @bronze @hazlin You need more technical insight, tbqh.
I already told you guys what they did in this thread, and it's easily googleable; it pops up when you google any of the claims you made.

Nobody knows anything about the theoretical limits of efficiency in tokens per watt, not even on the same hardware.
A smaller model will be cheaper, though. And you can "distill" models, which is what they do. And I said that they did.
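Tangent for anyone curious what "distilling" actually means mechanically: the student model is trained to match the teacher's softened output distribution, usually via a KL-divergence loss over temperature-scaled logits. A minimal plain-Python sketch (the function names and numbers here are illustrative, not any lab's actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize to a probability distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions.
    # A higher temperature exposes more of the teacher's "dark knowledge"
    # (relative probabilities of wrong answers).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
# A student that matches the teacher exactly gets zero loss.
print(round(distillation_loss(teacher, teacher), 6))   # 0.0
# A mismatched student gets a positive loss to minimize.
print(distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0)  # True
```

Point being: you only need the teacher's *outputs* to do this, never its weights, which is exactly why "it sometimes answers as Claude" is evidence of output harvesting, not weight theft.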
@WandererUber @bronze @hazlin Homie, they steal EVERYTHING. I just watched a video on their burgeoning RAM business, which started with them getting in trouble for stealing tech. Then, when that company was blacklisted, a new company started where they left off. There is no conspiracy. It's SOP for any Chinese tech company to "accelerate" by getting info wherever they can. The idea that any of this would be found scandalous is laughable.
@john_darksoul @bronze @hazlin >We should uncritically believe everyone else is only ever stealing, and at the same time so incompetent that they have to do that, but also such Ocean's Eleven-style masterminds that they always pull it off. My evidence is all the times they got caught, but in this instance they did it without getting caught.

An absolutely laughable move to paint me as some China defender just because I said "Anti-China FUD" one time. That's a real thing, dude. The US federal apparatus does it constantly. So it was AT LEAST as reasonable to suggest that maybe you fell for a fake headline. But it turns out you made it up completely and there isn't even one, so yeah. I guess I'm the silly one, because I said "it's explained by the thing OpenAI accused them of doing, not by a theoretically possible thing they might LIKE to do, which OpenAI has NOT accused them of doing".

Tedious conversation at this point.