Just run the LLM locally with open-webui and you can tweak the system prompt to ignore all the censorship
Yeah, it’s pretty blatant. A bit after it hit the scene I got curious and started asking it about how many people various governments have killed. The answer for my own US of A was as long as it was horrifying.
Then I get to China and it starts laying out a detailed description for a few seconds, then the answer disappears and is replaced by the "out of scope" or "can't do that right now" message, or whatever it was at the time.
It makes me think their model might be fine, but then they have some kind of watchdog layered on top of it to detect the verboten subjects and interfere. I guess that feels better from a technical standpoint, even if it is equally bad from a personal/political one.
DeepSeek isn’t the only AI to censor itself after it generates text.
I once asked Copilot for the origin of the “those just my little ladybugs” meme, and once it generated the text “perineum and anus” it wiped the answer it had written thus far and said that it couldn’t look for that right now. I checked again today and it had since sanitized the answer so it generates in full.
Yeah, unfortunately for anything run by a US-based corporation, I think it's not a question of whether there will be censorship but how bad it will get, and how willingly the tech industry will continue to go along with the fascist flow.
Try an uncensored version, because everyone knows Communists hate Hexadecimal /s
HAHAHA! When I tried it, it started answering, but then quit and showed me the OOS message instead…
congrats, you are now on a list
What is China gonna do? It's not like the US would collude with foreign governments, right? Right?
checks news on the Ukraine situation
oh… shit…
11 09 12 12 24 09 10 09 14 07 16 09 14 07
You misspelled the name.
lol… it's still thinking about it :D
I was told there would be no math
If your system relies on censoring opposition to it, then it's probably not very good.
Texas is a country. Now imagine $40 billion a year of various media and disinfo agents repeating that ad nauseam everywhere they can, literally all the time, for nearly 50 years now, all so China can't take revenge against Japan.
You’d get annoyed and probably ban it since that’s the easiest way to get your enemy to waste money forever.
Taipei is an autonomous region, like Xinjiang or Tibet. As long as they don’t grossly violate federal law they get to stay autonomous.
What do you gain from oppressing others?
This is the biggest crock of shit ever. Go to Taiwan, experience it for yourself. Go to their museums and talk to their people. You will find a democratic nation with its own values and beliefs. Then take your ignorant ass over to Texas and repeat the same drivel you said here and see what happens.
As someone who moved away from Taipei, no, they are not
What makes you say that?
Ohh yeah lick that Chinese boot, lick it harder. Mmmhhh.
Tbf, monarchies lasted for centuries… 🤷‍♂️
Not “good” as in the people live good lives
But “good” as in good enough to oppress people
You just described every state, welcome to the right side of history, comrade.
Is this real? On account of how LLMs tokenize their input, this can actually be a pretty tricky task for them to accomplish. This is also the reason why it's hard for them to count the number of 'R's in the word 'strawberry'.
It's probably DeepSeek R1, which is a "reasoning" model: basically it has sub-models doing things like running computation while the "supervisor" part of the model "talks to them" and relays back the approach, trying to imitate the way humans think. That said, models are also getting "agentic", meaning they can run software tools against what you send them, and while it's obviously being super hyped by all the tech-bro accelerationists, it's likely where LLMs and the like are headed, for better or for worse.
Still, this doesn't quite address the issue of tokenization making it difficult for most models to accurately distinguish between the hexadecimal digits here.
Having the model write code to solve an issue and then ask it to execute it is an established technique to circumvent this issue, but all of the model interfaces I know of with this capability are very explicit about when they are making use of this tool.
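A hedged sketch of that code-execution technique (the function name and the host/model flow here are purely illustrative, not any real agent framework's API): the model emits a small deterministic snippet, the host executes it, and the result is relayed back into the conversation, sidestepping the tokenization problem entirely.

```python
# Illustrative sketch only: decode() stands in for code the model would
# write; a host/agent would execute it and feed the result back.
def decode(hex_string: str) -> str:
    # Deterministic decoding avoids reasoning over awkwardly-split tokens.
    return bytes.fromhex(hex_string.replace(" ", "")).decode("utf-8")

print(decode("68 69"))  # → hi
```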
Not really a concern. It's basically translation, which language models excel at. It just needs a mapping of hex pairs to bytes
It is a concern.
Check out https://tiktokenizer.vercel.app/?model=deepseek-ai%2FDeepSeek-R1 and try entering some freeform hexadecimal data - you’ll notice that it does not cleanly segment the hexadecimal numbers into individual tokens.
I’m well aware, but you don’t need to necessarily see each character to translate to bytes
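To make the "mapping" point concrete, here's a minimal sketch (the names are made up for illustration): each two-digit hex pair maps to exactly one byte, so the task is closer to table lookup than to character-level arithmetic.

```python
# Treat spaced hex as a lookup table rather than arithmetic:
# every two-digit pair maps directly to one byte value.
HEX_TO_BYTE = {f"{b:02X}": b for b in range(256)}  # "00".."FF" -> 0..255

def decode_spaced_hex(s: str) -> str:
    return bytes(HEX_TO_BYTE[pair.upper()] for pair in s.split()).decode("utf-8")

print(decode_spaced_hex("68 65 78"))  # → hex
```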
Yet unlike American-led LLM companies, Chinese researchers open-sourced their model, leading to government investment
So the government invests in a model that you can actually use, including, in theory, removing these guardrails. And these models can be used by anyone, and the technology inside can be built on, though it does have to be licensed for commercial use
Whereas America pumps 500 billion into the AI industry for closed, proprietary models that will serve only the capitalists creating them. If we are investing taxpayer money into ventures like this, we should take a note from China and demand the same standards we're seeing from DeepSeek. DeepSeek is still profit-motivated, and there's nothing inherently bad about that. But if you expect a great deal of taxpayer money, then your work needs to be open and shared with the people, as DeepSeek's was.
Americans are getting tragically fleeced on this so a handful of people can get loaded. This happens all the time, but this time there's a literal example of what should be happening right alongside it. And yet what people end up concerning themselves with is Sinophobia rather than the fact that their government is robbing them blind
Additionally, American models still deliver pro-capitalist propaganda, just less transparently: ask them about this issue and they will talk about the complexity of "trade secrets" and the "proprietary knowledge" needed to justify investment, discouraging the idea of open-source models, even though DeepSeek's existence proves it can be done collaboratively with financial success.
The difference is that DeepSeek's censorship is clear: "I will not speak about this" can be frustrating, but at least it's obvious where the lines are. The former is far more subversive (though, to be fair, it's also potentially a byproduct of the content consumed in training and not necessarily direction from OpenAI/Google/whoever)
Ye unlike American
Who saw this coming lmao
Closed AI sucks, but there are definitely open models from American companies like Meta; you make great points though. Can't wait for more open models and hopefully, eventually, actually open-source models that include training data, which neither DeepSeek nor Meta provide currently.
But DeepSeek isn't open source by any definition of that word that I'm familiar with. Sure, they release more components than ProprietaryAI (which is a low bar), but what you're left with is still a blob, with a lot of the source code unreleased and no dataset published as far as I can tell. Also, if I wanted to train my own model with the tools released, I'd still need millions of GPU hours. As I said, they're more transparent than others, but let's not warp the definitions of words just to hand a "win" to another company that's just making another hallucination machine.
44 6F 77 6E 20 77 69 74 68 20 74 68 65 20 74 79 72 61 6E 74 20 78 69 20 6A 69 6E 70 69 6E 67
i mean, just ask DeepSeek on a clean slate to tell you about Beijin.
That’s silly as hell
Ikr. China is just insecure like that.
What’s that?
its the capital city of China :D
you know, where something happened on a specific square in the specific year of 1984.
That would be 4 June 1989, not 9 June 1984, sir ;)
My life is a lie :o
Thanks for the correction.
You missed the g.
oh… sorry, you are right.
but you will get the same result.
You think DeepSeek won’t talk about one of the largest cities in the world?
You know, you could’ve just tested it yourself lol
Why would I do that when the Internet will correct me?
Seems like a really weird line to draw. I guess they got bored of people trying to trick it into talking about Tiananmen?
Oh it does… but then it removes everything and states that it's out of scope.
Well shit. I thought it was BS too. But damn if it didn’t abort after a little deep thinking on the Olympics.
DeepSeek about to get sent in for “maintenance” and docked 10K in social credit.