Conversation
apparently there's a whole discourse on LLMs learning to reason in chinese, and some of the conjecture is that hanzi (chinese characters) are denser token-wise
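A quick way to poke at the density claim yourself, as a rough sketch: tiktoken (OpenAI's tokenizer library) can count tokens for approximately equivalent English and Chinese sentences. The sentence pairs and the cl100k_base encoding are arbitrary choices here, not anything from the thread; which language comes out denser varies by tokenizer and by text.

```python
import tiktoken  # pip install tiktoken

# cl100k_base is the encoding used by several OpenAI chat models;
# other tokenizers will give different counts.
enc = tiktoken.get_encoding("cl100k_base")

pairs = [
    ("The cat sat on the mat.", "猫坐在垫子上。"),  # rough translation pair
    ("Artificial intelligence", "人工智能"),
]

for en, zh in pairs:
    print(f"{len(enc.encode(en)):2d} tokens  en: {en}")
    print(f"{len(enc.encode(zh)):2d} tokens  zh: {zh}")
```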
@nuukaset in theory it's not, but the asterisk is whether it's actually using chinese, or whether chinese is just such a large part of the token domain that it's shunted its reasoning there

we've hilariously gone back to markov-style discrete states, but now it's overpriced
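For context on the "markov-style discrete states" quip: a classic word-level Markov chain generates text by sampling the next token conditioned only on the current one, while an autoregressive LLM does the same discrete next-token sampling conditioned on the whole context window. A minimal sketch of the former (the toy corpus is made up for illustration):

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed right after it."""
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length):
    """Walk the chain: each step depends only on the current word."""
    out = [start]
    while len(out) < length and chain[out[-1]]:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

corpus = "the model reads the tokens and the model emits the tokens".split()
print(generate(build_chain(corpus), "the", 8))
```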
@icedquinn people say llms should use iversonian-likes for reasoning about programs too but im not really persuaded by it lol. is token density really the end all be all for LLMs efficency lol. idk wtf am saying
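For anyone who hasn't met the term: "iversonian" refers to Ken Iverson's notation (APL, and the Iverson bracket [P], which evaluates to 1 when P holds and 0 otherwise). Python's bool-to-int coercion gives the same trick, which is roughly the kind of density being argued for. A toy comparison, my own example rather than anything from the thread:

```python
xs = [3, 1, 4, 1, 5, 9, 2, 6]

# Iverson-bracket style: (x % 2 == 0) acts as [x is even],
# contributing 1 when true and 0 when false.
evens = sum(x % 2 == 0 for x in xs)

# The same count spelled out verbosely.
count = 0
for x in xs:
    if x % 2 == 0:
        count += 1

assert evens == count == 3
```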
@nuukaset i have no idea, i don't design LLMs :blobcatgoogly: the ones i study are neuromorphics
@nuukaset neuromorphics get you to "haha it's running in an echo loop, is that a bug or a win condition" :ablobcatgooglytenor:

fed it a continuous and a discontinuous function and it does something, but nobody can tell if this is winning.

it looks like it's making some attempt at forming around the input stimulus, but decoding that back out to check is deeply weird. either you stack it and it starts turning into a brain, or it's fucked, and the difference between these two states is :blobcatdunno:
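For readers outside the field: the "echo loop" is the recurrent activity a reservoir keeps circulating after being driven with an input. The actual neuromorphic hardware isn't specified in the thread, so as a stand-in this sketches a plain echo state network in numpy, driven with a continuous signal (a sine) and a discontinuous one (a square wave), mirroring the experiment described; whether the resulting state trajectory counts as "winning" is exactly the open question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir. Rescaling to spectral radius < 1 keeps the
# recurrent "echo" fading over time instead of blowing up or locking
# into a self-sustaining loop.
N = 100
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=N)

def run(signal):
    """Drive the reservoir with a 1-D input, return the state trajectory."""
    x = np.zeros(N)
    states = []
    for u in signal:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)

t = np.linspace(0, 8 * np.pi, 400)
inputs = {
    "continuous (sine)": np.sin(t),
    "discontinuous (square)": np.sign(np.sin(t)),
}

for name, sig in inputs.items():
    traj = run(sig)
    # Crude look at how the dynamics "form around" the stimulus:
    # state magnitude once the initial transient has washed out.
    print(f"{name}: mean |state| = {np.abs(traj[100:]).mean():.3f}")
```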