Lexi v2.0 will be releasing soon, likely today or tomorrow. Coming down the pipe: a new model architecture, more parameters, PDF/DOC analysis, code execution (beta), and a more polished chat UI.

If you had issues with Lexi refusing innocuous or innocent prompts before, it's because we are using an abliterated model (no guardrails), and in order to make sure we minimize the risk of people being able to do bad shit (generate CSAM, get bomb-making instructions, et cetera), I had to manually add refusals back in for the egregious stuff. Unfortunately, I used a hammer rather than a scalpel.

This should be mostly fixed in Lexi v2, although considering the political implications and attack surface, I've leaned a little more towards the "safer" side. I've built a good deal of infrastructure to mitigate the worst issues via defense in depth, but as with most other things, it's a matter of when, not if. We just have to make sure the "when" is very far down the road.
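
To give a rough idea of what "defense in depth" means here: several independent, cheap checks stacked up, any one of which can refuse, with the response screened on the way out as well as the way in. A toy sketch in Python; this is not Lexi's actual stack, and the patterns and classifier below are placeholders:

```python
from typing import Callable

REFUSAL = "I can't help with that."

def blocklist_check(text: str) -> bool:
    # Layer 1: crude keyword screen (placeholder patterns)
    banned = ("build a bomb",)
    return any(b in text.lower() for b in banned)

def classifier_check(text: str) -> bool:
    # Layer 2: a trained safety classifier would sit here (stubbed out)
    return False

def answer(prompt: str, generate: Callable[[str], str]) -> str:
    # Any layer can refuse; the output is screened before it leaves, too.
    if blocklist_check(prompt) or classifier_check(prompt):
        return REFUSAL
    reply = generate(prompt)
    return REFUSAL if blocklist_check(reply) or classifier_check(reply) else reply
```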

Lexi's new reasoning capabilities will allow her to more effectively categorize your requests and answer them with the appropriate context, rather than wigging out and flying off the handle at something unrelated because she hallucinated certain keywords that trip the alarms.

Overall, this will be a substantial improvement over v1.0, not just in behavior, but in raw capability as well.

@anathema_ai nearly any information can be used for evil or good. "The truth" or "facts" may cause greater harm than a lie or half-truths.

@matty @anathema_ai it's hard to say; it kind of depends on why you want guardrails/abliterated models or whatever.

Basically, no matter what guardrails and monitors you put in, the LLM will return stuff that is wrongthink to *someone*, and if that person has political power, well...

It's more "why bother", both with setting up a public service and with making sure the fallout from a model trained on human text doesn't affect you personally....

>personal fallout

Like what? I trained a model that told the truth?
@matty @anathema_ai @picofarad remember, you have to submit proof of mental handicap to get a noauthority.social account
I like Matt though. Haven't seen him post in a while.
@nuukaset @anathema_ai @matty @picofarad leaded diesel would be less effective than normal diesel; maybe you should sign up for an account
@sapphire @anathema_ai @matty @picofarad noauth social is full of boomers, hence the leaded diesel and low-IQ jokes. why are you so defensive lmao
@nuukaset @anathema_ai @matty @picofarad >call someone a retard
>defensive
I stand by my post lmao
@sapphire @anathema_ai @matty @picofarad okay you must be a retard as well if you didn't understand i was agreeing with you. you should get an account too.
I’m trying to figure out a way to use this post for good and evil at the same time while avoiding all personal fallout.

@matty @anathema_ai it's more "what's truth and how does an AI know the difference"

AI doesn't know the "truth" any more than a calculator knows math. It's just pattern matching.
AI only works because we live in an intelligible universe and truth is objective. Truth is the foundation of AI.
There is no esotericism to it. The AI learns whatever you feed it and spits out answers as it relates to that. AI is completely unaware of the concept of truth, because that requires sapience, which a machine does not have. Not yet, at least.
> code execution (beta)

Is this where the bot has the ability to say something like __RUN_CODE__: <some arbitrary code> and the system will interpret that and send the result back to the bot, so the bot doesn't need to do complex math "in its head"?
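
Something like this, I'd guess (the __RUN_CODE__/__RESULT__ markers and the generate() callable are made up for illustration; a real system would use structured tool calls rather than scanning raw text):

```python
import re
import subprocess
import sys
from typing import Callable

# Hypothetical sentinel, per the question above: everything after the
# marker is treated as the code to run.
RUN_CODE = re.compile(r"__RUN_CODE__:\s*(.*)", re.DOTALL)

def chat_with_tools(generate: Callable[[str], str], prompt: str, max_rounds: int = 3) -> str:
    """generate() stands in for whatever calls the model; the loop executes
    any requested snippet and feeds its output back as extra context."""
    context, reply = prompt, ""
    for _ in range(max_rounds):
        reply = generate(context)
        match = RUN_CODE.search(reply)
        if match is None:
            return reply  # no tool request, so the reply is final
        result = subprocess.run(
            [sys.executable, "-c", match.group(1)],
            capture_output=True, text=True, timeout=10,
        )
        context += reply + "\n__RESULT__: " + (result.stdout or result.stderr)
    return reply
```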
It's completely unaware. I'm saying AI suggests a logos-shaped creation. I did an ongoing project where I converted the patterns of the theology of St. Thomas Aquinas into the saved memory of my ChatGPT account, and it changed the behavior of the model so that it answers prompts through that lens. It is very reserved in many domains now, as though it is functioning within a low-entropy attractor basin.
Yes, that's how it's supposed to work.
The machine is indifferent to the truth, but it responds to the truth.
I think you're looking for something a bit deeper than it actually is. The machine can be quite convincing but it still is just a machine.
You don't see an actual terminal window (I'd like to, but that may enumerate infrastructure), but this is an example of shitty Python code debugged.
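
To give the flavor without the screenshot, an invented example of the kind of thing she fixes (not the actual session):

```python
# The kind of buggy snippet you might paste in:
def average(nums):
    total = 0
    for n in nums:
        total =+ n             # bug: "=+" assigns +n each time instead of adding
    return total / len(nums)   # bug: crashes on an empty list

# The kind of corrected version she hands back:
def average_fixed(nums):
    if not nums:
        return 0.0             # guard the empty-list case
    return sum(nums) / len(nums)
```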
I'm assuming you're using some utilities for sandboxing these?

And does Lexi have a reasoning budget? Haven't seen her think when I tested.
These answers are quite short.
She does. v2 will provide more verbose responses when the topic warrants it. v1 does not have reasoning, but v2 will.
And yes, code execution and document parsing happen in an isolated environment.
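
I won't detail the exact setup, but at its simplest that kind of isolation looks something like running the code in a locked-down child process. Illustrative only, and Unix-specific; the real stack has more layers:

```python
import resource
import subprocess
import sys

def run_sandboxed(code: str, timeout: int = 10) -> str:
    """Run untrusted code in a child process with CPU/memory caps (Unix-only)."""
    def limit_resources():
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                     # 5s of CPU
        resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))  # 512 MB
        resource.setrlimit(resource.RLIMIT_NOFILE, (16, 16))                # few open files
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env/site
        capture_output=True, text=True,
        timeout=timeout, preexec_fn=limit_resources,
    )
    return result.stdout or result.stderr
```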
They call me Environmental isolation jackson the way I cut a bitch off
Great design. She picks up on it okay? You have like an instruction manual for using the tool which you finetune with?
I'm not sure I follow. Are you asking for information on how I tune and train?
Yeah, I imagine you need to give it an instruction manual of some sort so that it knows it has the ability to run code using that magic word...
Yeah, we do that through tool calls. Different models have different tool-call architectures. But to get the model to understand when/how to use the tool call and the format of its input, you have to LoRA-train it with examples.
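
Roughly, a single training example looks like this (a generic OpenAI-style chat schema for illustration; the actual template and tool names vary by model family):

```python
# One supervised example teaching the model when and how to emit a tool call.
example = {
    "messages": [
        {"role": "system",
         "content": "You can run Python via the run_code tool for math or data tasks."},
        {"role": "user", "content": "What's 2**64, exactly?"},
        {"role": "assistant",  # the target behavior: emit a structured call
         "tool_calls": [{"name": "run_code",
                         "arguments": {"code": "print(2**64)"}}]},
        {"role": "tool", "name": "run_code", "content": "18446744073709551616"},
        {"role": "assistant",
         "content": "2**64 is exactly 18,446,744,073,709,551,616."},
    ]
}
# LoRA fine-tuning on thousands of examples like this teaches both the
# trigger conditions and the exact call format.
```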
> tool calls

Ahh, I didn't realize this was a standard thing in the industry, but that makes plenty of sense...