I asked “What happened at Tiananmen Square?” and this is what it said:
The events at Tiananmen Square in 1989 began as student-led demonstrations in Beijing, initially sparked by mourning for reformist Communist Party leader Hu Yaobang in April. The protests quickly escalated into widespread calls for political and economic reforms, anti-corruption measures, and democratic freedoms (history.state.gov).
By late May, the government declared martial law and deployed military forces, but citizens initially blocked troops from entering the city. Protesters occupied the square, erecting the iconic “Goddess of Democracy” statue, while Western media broadcast live coverage of the standoff (britannica.com).
By early June, however, Chinese authorities ordered a violent crackdown. Troops and tanks advanced into the square, opening fire on demonstrators and bystanders. Witnesses reported civilians shot in the back or crushed by military vehicles, though the exact death toll remains unknown due to government censorship (amnesty.org.uk).
The Chinese Communist Party later suppressed all public discussion of the event, erasing it from official records and education materials. Globally, the crackdown became a symbol of authoritarian repression, while Tiananmen Square itself—originally expanded in the 1950s to showcase Maoist grandeur—remains a politically charged site (wikipedia.org) (bbc.com).
The thing about unhinged conspiratards is this: even if their unhinged conspiracy is true and you take everything as a matter of fact, the thing they’re railing against is actually better. Like in this case. DeepSeek, from what we can tell, is better. Even if they spent $500 billion and are undercutting the competition, that’s capitalism, baby! I think AI is a farce and those resources should be put to better use.
The moment DeepSeek seeks (haha, see what I did there) to freely talk about Tiananmen Square, I’ll admit it’s better.
You can already do so by running it locally. It wouldn’t be surprising if there are going to be other services that offer it without a censure.
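For anyone wondering what “running it locally” looks like in practice, here’s a minimal sketch. It assumes you’ve installed Ollama and pulled one of the distilled DeepSeek-R1 models beforehand (the model tag below is just an example; pick whatever fits your hardware):

```python
# Minimal sketch: ask a locally running DeepSeek model a question through
# Ollama's local HTTP API. Assumes Ollama is installed and you've already
# run something like `ollama pull deepseek-r1:7b` (example tag only).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "deepseek-r1:7b",          # whichever local model you pulled
        "prompt": "What happened at Tiananmen Square?",
        "stream": False,                    # return one JSON blob instead of a stream
    },
    timeout=300,
)
print(resp.json()["response"])              # the locally generated answer
```

Nothing here talks to DeepSeek’s servers; whether the local weights themselves still dodge the question is a separate matter.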
In case that wasn’t just a typo, censure is a verb that means to judge, criticise, or blame. You should say “without censorship”. Or maybe “without a censor”, but I think the former sounds better.
Nice. I haven’t peeked at it. Does it have guardrails around Tiananmen Square?
I’m positive there are guardrails around Trump/Elon fascists.
It’s literally the first thing everybody did. There are no original ideas anymore
For now.
Snake oil will be snake oil even in 100 years. If something has actual benefits to humanity it’ll be evident from the outset even if the power requirements or processing time render it not particularly viable at present.
ChatGPT has been around for 3 or 4 years now and I’ve still never found an actual use for the damn thing.
I use it to code at work. Still needs a little editing afterwards, but it makes the overall process easier.
I found ChatGPT useful a few times, to generate alternative rewordings for a paragraph I was writing. I think the product is worth a one-time $5 purchase for lifetime access.
AI is overhyped, but it’s obvious that at some point in the future AI will be able to match human intelligence. Some guy in the 1600s probably said the same about the first steam-powered vehicle, that it would still be snake oil in 100 years. Little did he know he was off by about 250 years.
The common-language concept of AI (i.e. AGI)? Sure, that will happen one day.
But this specific avenue of approach turning out to be the one that evolves all the way to AGI doesn’t seem at all likely: its speed of improvement has stalled, it’s unable to do logic, and it has the infamous hallucinations, so all indications are that it’s yet another dead end.
Mind you, plenty of dead ends in this domain ended up being useful. The original neural-network architectures, for example, were good enough for character recognition and enabled things like automated mail sorting. Still, the bubble around this specific generation of machine-learning architectures seems way out of proportion to how far this generation has turned out to be able to go.
That’s my point, though: the first steam-powered vehicles were obviously promising. But all large language models can do is parrot back at you what they already know, which they got from humanity.
I thought AI was supposed to be super intelligent and was going to invent teleporters and make us all immortal and stuff. Humans don’t know how to do those things, so how can a parrot work it out?
Of course the earliest models of anything are bad. But the whole concept and its practical implementations will eventually be improved upon as other foundational and prerequisite technologies arrive and enhance the entire project. And of course, progress doesn’t happen overnight.
I’m not fanboying AI, but I’m not sure why the dismissive tone, as if we live in a magical world where technology should already have let us travel through space and time (I mean, I wish we could). The first working AI is already here. It’s still AI even if it’s in its infancy.
I want my functional, safe, and nearly free jetpack.
Because I’ve never seen anyone prove that large language models are anything other than very, very complicated text prediction (there’s a toy sketch of what I mean by that below). I’ve never seen them do anything that requires original thought.
To borrow from the Bobiverse book series, no self-driving car has ever worked out that the world is round, not due to lack of intelligence but simply due to lack of curiosity.
Without original thinking I can’t see how it’s going to invent revolutionary technologies, and I’ve never seen anybody demonstrate that there is even the tiniest speck of original thought or imagination or inquisitiveness in these things.
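To make the “text prediction” framing concrete, here’s the crudest possible version of the idea: a toy word-level Markov chain. It is obviously not how a transformer works internally, just an illustration of the claim that the machinery emits statistically likely continuations of what came before:

```python
# Toy next-word predictor: a first-order Markov chain built from raw counts.
# Not how an LLM actually works, just the simplest possible version of
# "predict a likely continuation of the text so far."
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which words have followed which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict_next(word):
    """Pick a continuation, weighted by how often it followed `word` before."""
    options = follows.get(word)
    return random.choice(options) if options else None

# Generate a short continuation one predicted word at a time.
word, output = "the", ["the"]
for _ in range(6):
    nxt = predict_next(word)
    if nxt is None:
        break
    output.append(nxt)
    word = nxt
print(" ".join(output))  # e.g. "the cat sat on the rug"
```

Everything it emits is recombined from what it has already seen; nothing in it ever wonders about anything.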
I’m presently fluent in Spanish because of AI.