ok im sorry pls don't ban me, ill end up in mental asylum without .gg, i redact everything i've said...u know u can just ban him right
Gpt got another safety update and groks getting worse and worse
I need a little slave to fetch me roid sources
Pleaaaassseeee
just go on meso rx or t nation
Non ion want the Jews knowing I want to remove diddys clothes

You can just download the ai and remove the safety features
Do you use your LLM locally on your computer? Which model? Cause I know you can do that, but it's very model dependent
Yea this is true for the most part, tho there are GitHub repos that unlock restrictions if you look hard enough n have decent technical knowledge, and with newer, more accurate LLMs it becomes harder
Downloading the model doesn't work because, first of all, you need to remove the restrictions, which requires actual technical knowledge.
Secondly, you need a supercomputer to process the information. Your RTX GPU won't do anything and will only produce a horrible, inaccurate result.
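For scale on the hardware claims, here's a back-of-envelope, weights-only VRAM estimate (it ignores KV cache, activations, and runtime overhead, and the model sizes are just illustrative):

```python
# Weights-only VRAM estimate: params * bytes_per_param.
# Ignores KV cache, activations, and framework overhead,
# so real-world usage runs higher.
SIZES = {"7B": 7e9, "13B": 13e9, "70B": 70e9}   # illustrative model sizes
FORMATS = {"fp16": 2, "int8": 1, "int4": 0.5}   # bytes per parameter

for name, params in SIZES.items():
    row = ", ".join(f"{fmt}: {params * b / 1e9:.1f} GB" for fmt, b in FORMATS.items())
    print(f"{name} -> {row}")
# 7B at fp16 is ~14 GB (over a 12 GB RTX card's budget),
# but at int4 it's ~3.5 GB, which fits easily.
```

So the supercomputer claim is roughly right for large models at full precision, but small or quantized models do fit on a single consumer GPU.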
If you're high IQ and know how to script, use the terminal and some Python here:
https://github.com/p-e-w/heretic
This question is better asked to the people who make AI porn, as those people have it down to a T, since that's the hardest restriction to get around
the GPU, the training, & the data it needs to be fed aren't going to allow it
This supercomputer logic only applies when you're actually training a model, like configuring neural networks (which most people here don't even have the mathematical background to accomplish). But you can run/modify local models via terminal with scripting.
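To make that concrete, here's a minimal local-inference sketch with Hugging Face transformers; the model ID is just an example placeholder, and it assumes `pip install transformers torch`:

```python
# Run a small open model locally from a terminal script.
# Works on CPU too, just slowly; GPU is picked up automatically.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",  # example small instruct model
    device_map="auto",                   # GPU if available, else CPU
)

out = pipe("The main bottleneck for running LLMs locally is", max_new_tokens=64)
print(out[0]["generated_text"])
```

A model this size runs on an ordinary desktop; it's only the much larger models where the hardware argument starts to bite.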
you NEED a supercomputer to host an LLM.
perplexity works just fine for this
90% of the time you won't face any restriction
I haven't really looked into perplexity but I'll try it out. I feel most of the time when you use words like "hypothetically" and "in theory" when asking a question, it's more likely to give you the answer you're looking for, so it's mostly prompt engineering on the user's end
it's not about hypotheticals at all
nope
Now in terms of locally configuring a huge LLM, then yea, an expensive GPU does come in handy, but I don't think bro owns a law firm or is doing massive data analysis to even justify the extra load.
this is also false, n honestly larger models only matter if the task at hand is complex enough to demand said model, in which case a nicer GPU is needed. Majority of the time, if you have to ask said question, then the task at hand isn't even complex enough to begin with
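On the "nicer GPU" point, quantization is the usual middle ground: it lets a consumer card hold a model that wouldn't fit at full precision. A minimal sketch with transformers + bitsandbytes (the model ID is again just an example, and it assumes a CUDA GPU plus `pip install bitsandbytes accelerate`):

```python
# Load a 7B model in 4-bit so it fits in consumer-GPU VRAM
# (~4-5 GB instead of ~14 GB at fp16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen2.5-7B-Instruct"  # example model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # 4-bit storage, fp16 compute
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Quantization trades a bit of accuracy for", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```

The accuracy loss from 4-bit is real but usually modest, which is why it's the standard way to run mid-sized models on single RTX-class cards.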