Looksmax - Men's Self Improvement Forum


Discussion: What unrestricted AI do y'all use?

Cheat
#ascended
Joined: Nov 29, 2025 · Posts: 396 · Reputation: 353
u know u can just ban him right
ok im sorry, pls don't ban me, i'll end up in a mental asylum without .gg, i redact everything i've said... @Biomaxx @Godveil Heir
 

Nardicus102
1G Test, NW1
Joined: Nov 18, 2025 · Posts: 303 · Reputation: 468
Downloading the model doesn't work because, first of all, removing the restrictions requires actual technical knowledge.
Secondly, you need a supercomputer to process the information. Your RTX GPU won't do anything and will only produce horrible, inaccurate results.

Yea this is true for the most part, tho there are GitHub repos that unlock restrictions if you look hard enough n have decent technical knowledge, and with newer, more accurate LLMs it becomes harder.
 

Godveil Heir
Perfection Incarnate · Staff member
Joined: Dec 11, 2025 · Posts: 2,982 · Reputation: 606
Nardicus102 said:
Yea this is true for the most part, tho there are GitHub repos that unlock restrictions if you look hard enough n have decent technical knowledge, and with newer, more accurate LLMs it becomes harder.

the GPU, the training, & the data it needs to be fed aren't going to allow it
you NEED a supercomputer to host an LLM.

Perplexity works just fine for this
90% of the time you won't face any restriction
 

Nardicus102
1G Test, NW1
Joined: Nov 18, 2025 · Posts: 303 · Reputation: 468
Godveil Heir said:
the GPU, the training, & the data it needs to be fed aren't going to allow it
you NEED a supercomputer to host an LLM.

Perplexity works just fine for this
90% of the time you won't face any restriction

This supercomputer logic only applies when you're actually training a model, like configuring neural networks (which most people here don't even have the mathematical background to accomplish). But you can run/modify local models via terminal with scripting.
Now in terms of locally configuring a huge LLM, yea, an expensive GPU does come in handy, but I don't think bro owns a law firm or is doing massive data analysis to even warrant the extra load.
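To make "run local models via terminal with scripting" concrete, here's a minimal sketch using the Hugging Face transformers library; the TinyLlama checkpoint name is just an example of a small model a typical RTX card can handle, swap in whatever you've actually downloaded:

```python
# Minimal sketch: running a small LLM locally with Hugging Face transformers.
# Assumes `pip install transformers torch accelerate`; the model name below
# is just an example of a ~1B-parameter checkpoint a consumer GPU can handle.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    device_map="auto",  # uses your GPU if one is available, else falls back to CPU
)

result = generator(
    "Explain in one sentence why quantization helps on small GPUs.",
    max_new_tokens=80,
)
print(result[0]["generated_text"])
```

Inference at this scale really is a pip install and a dozen lines; the supercomputer talk applies to training and to serving frontier-sized models, not to running a small checkpoint.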

I haven't really looked into Perplexity but I'll try it out, as I feel most of the time when you use words like "hypothetically" and "in theory" when asking a question, it's more likely to give you the answer you're looking for, so it's mostly prompt engineering on the user's end.

I have tried scripting and configuring LLMs locally, and the biggest problem is making sure the model still performs (i.e., gives the right answer) even after restrictions are removed. I gave up on this. But I have some peers at Princeton who are physics majors n have this down to a tee.
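For what it's worth, that "still performs" check can be as simple as replaying known-answer prompts after each modification. A toy sketch; the model, prompts, and expected strings are placeholders, not a real benchmark:

```python
# Toy sanity check: after modifying a local model, replay a few known-answer
# prompts and flag anything that stops matching. Purely illustrative; real
# evals use proper benchmarks, not two hand-picked questions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

checks = [
    ("The chemical symbol for gold is", "Au"),
    ("Water is made of hydrogen and", "oxygen"),
]

for prompt, expected in checks:
    text = generator(prompt, max_new_tokens=10)[0]["generated_text"]
    completion = text[len(prompt):]
    status = "OK" if expected.lower() in completion.lower() else "CHECK FAILED"
    print(f"[{status}] {prompt!r} -> {completion.strip()!r}")
```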

n honestly, larger models only matter if the task at hand is complex enough to demand said model, in which case a nicer GPU is needed. Majority of the time, if you have to ask said question, then the task at hand isn't even complex enough to begin with.
 

Godveil Heir
Perfection Incarnate · Staff member
Joined: Dec 11, 2025 · Posts: 2,982 · Reputation: 606
Nardicus102 said:
I haven't really looked into Perplexity but I'll try it out, as I feel most of the time when you use words like "hypothetically" and "in theory" when asking a question, it's more likely to give you the answer you're looking for, so it's mostly prompt engineering on the user's end.

it's not about hypotheticals at all
just directly ask the question, don't mention crime, and don't try to use it for reasoning
for search, it works fine

Nardicus102 said:
This supercomputer logic only applies when you're actually training a model, like configuring neural networks (which most people here don't even have the mathematical background to accomplish). But you can run/modify local models via terminal with scripting.
Now in terms of locally configuring a huge LLM, yea, an expensive GPU does come in handy, but I don't think bro owns a law firm or is doing massive data analysis to even warrant the extra load.

nope
you can run it for sure; I've done that
but it won't give anywhere near an accurate response
it will just give you misinformation.
 

Godveil Heir
Perfection Incarnate · Staff member
Joined: Dec 11, 2025 · Posts: 2,982 · Reputation: 606
Nardicus102 said:
n honestly, larger models only matter if the task at hand is complex enough to demand said model, in which case a nicer GPU is needed. Majority of the time, if you have to ask said question, then the task at hand isn't even complex enough to begin with.

this is also false
the LLM works by predicting the next word

it's going to be ass if it hasn't received top-tier training and isn't allocated extreme processing power
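For reference, "predicting the next word" is literal: a causal LM assigns a score to every token in its vocabulary as the possible continuation of the prompt. A minimal demo, with gpt2 used only because it's tiny to download:

```python
# Demo of next-token prediction: the model outputs a score (logit) for every
# vocabulary token at each position; softmax over the last position gives the
# probability distribution for the next word. Assumes `pip install transformers torch`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(idx)!r}: {p.item():.3f}")
```

Whether that distribution is any good is exactly the training-and-compute question; the mechanism itself runs fine on a laptop.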
 
