The broad left strategy

Leftism has no essence, except an endless campaign against the next applecart. When it runs up against an obstacle, it is apt to instantly discard its latest high ideals, write them out of history or deem them extreme radical right, and adopt some new high ideals.

Socialism stopped in 1949. It was replaced by ever more extreme regulation and direct redistribution, which had the effect of redistributing wealth directly to the state, and redistributing ownership of capital to vast Dilbertesque quasi state corporations.

Reagan and Thatcher stopped this, re-enabling capitalism while allowing the left to continue full speed ahead on the destruction of the family and mass importation of the third world to white countries. And now here we are.

Right now, the left is still tightly focused on the destruction of the family and mass importation of the third world to white countries, but a significant faction of the left wants to return to an “anti oligarch” strategy of redistributing ownership and control of capital to vast Dilbertesque quasi state corporations. For example, the California law aimed at destroying the power of entrepreneurs in business by a confiscatory wealth tax against entrepreneurs in favor of passive owners like BlackRock, and the British law against private ownership of rental properties. They have created a regulatory framework which is not only expensive to comply with, but literally impossible for individual entrepreneurs to comply with. Notably, the California regulation on gasoline pretends to enforce the reasonable requirement of leak resistant fuel tanks for gas stations, but makes it impossible for individually owned gas stations in California to install fuel tanks that could be deemed leak resistant, no matter how much money and skill they apply to the task. Compliance is similarly impossible for individual owners of rental accommodation in Britain, while building and zoning laws make it impossible for people to build individual housing for themselves.

My bread and butter used to be startups, but Sarbanes-Oxley (SOX) killed startups.

If the left is defeated on mass migration and transitioning your children, their regroup strategy is to be “anti oligarch” — which will be primarily directed at such mighty oligarchs as independent truckers, who have just been banned in California.

The left will die if prevented from getting ever lefter, but we see in progress an effort to switch from mass migration and family destruction back to the destruction of capitalism, by reserving ownership and control of capital to a few vast Dilbertesque quasi state corporations: “You will own nothing and be happy”. And should you indicate lack of happiness, you will be punished.

The strategy is aimed at suppressing entrepreneurial capitalism in favor of managerial “capitalism”, tightly integrated with the managerial state — thus the truckers, independent gas stations, and the tech lords were similarly attacked in California.

The left is always led by the leftmost, and right now the leftmost are still trannies and welfare fraudsters, but the anti oligarch faction wants to redefine leftism again so that they will once again be the leftmost. They want you to own nothing and eat bugs.


Halion says:

If I had to define the left, I’d say it’s social entropy. That said, an interesting idea is that the left has always existed, but in pre-modern times they were called Gnostics.

ray says:

Gnosticism was and is primarily a series of associated goddess-worshipping cults, with its greatest deity Sophia. Yeah a chick, whatta surprise. Like we haven’t heard it all before with Isis, Asherah, Diana, Demeter, all the rest. For the gnarly gnostics, their great figure of evil is . . . well my Father and God. Of course.

The core of the modern Left is female collective power, especially among white women. Leftism hates male or patriarchal power, and Gnosticism in its multitudes of cults and forms does seem to have morphed into Endless Rebel Progressivism. Antifa feels very much a hybrid gnostic/nihilistic organization.

These are the Malicious Mutants that every Godly civilization is responsible for wiping out. They’re dubble dangerous when they are UMC spoiled brats. Afore they destroy everything, which they damn sure will.

chedolf says:

Leftism is equalitarianism. That’s it. Inequality is presumptively unjust and probably caused by oppressors harming innocents. Therefore what is unequal should be made equal, oppressors should be punished, and champions of equality should get higher status.

The heart and spleen should be equal, but they’re not. Therefore they should be made equal. That’s how you get entropy. It’s (mainly) just a consequence.

Since people are unequal in many ways, the left has a wealth of targets for remediation, and they will never stop until they’ve reduced mankind to perfectly equal organic paste. If leftists pay too high a price trying to engineer a particular type of equality, they just switch targets. “So what if Marxism–Leninism tripped itself up, we’ll just transition to gay race communism.” It’s all reducible to lunatic equalitarianism.

A few years ago some guy (now banned) on twitter summed up the march of leftism:

16th century: any man can be a priest.
18th century: any man can be a king.
21st century: any man can be a woman.

Daddy Scarebucks says:

> Leftism is equalitarianism.

Leftism has no essence. It is egalitarian when egalitarianism is useful, and it is elitist when elitism is useful. Or it is both at the same time.

Early 20th-century progressivism was incredibly elitist, and the luminaries of that day were even more skeptical of democracy than we are; but their solution, to the surprise of no one, was “rule by intellectuals” which really meant “rule by those who share our intellectual pursuits”, or simply “rule by us”.

Leftism is simply weaponized and collectivized vice, and the relentless pursuit of removal of any constraints on vice, with by far the most common vices of leftism being spite and envy, but any other vice (e.g. sexual perversion) can be made to stand in. Its ideas and priorities change from moment to moment, with the only constant being that its appetites are never sated.

While I don’t hold that demons are literally real, the ancient low-effort model of demonic possession is still more relevant today and makes better predictions today than any of the modern tunnel-vision spinoffs like egalitarianism, effeminacy, primitivism, third-worldism, or even communism. These are all traits of the left in the here and now, but they aren’t essential traits, merely transient.

Leftism has no essence.

chedolf says:

> Equalitarianism is just another club with which to strike down their enemies.

It’s the essence, which you can deduce by observing regimes you consider leftist. Are there any that don’t regard equalitarianism of one type or another as a core principle?

The point of leftism is to erase distinctions between what is naturally higher and naturally lower. Hence, races don’t exist, sexes don’t exist, homosexuality is as good as heterosexuality, national borders don’t exist, citizens don’t exist, IQ doesn’t measure anything, earned concentrations of wealth don’t exist (just theft).

The right is about natural hierarchy: the better rule the worse. The left is just a conspiracy against that hierarchy, seeking to reverse the order.

Recall Saint Michael’s challenge to the first Whig: Quis ut Deus?

> What did Covid or Global Warming Climate Change have to do with egalitarianism?

Lots, maybe most, climate change enthusiasts are anti-capitalist levelers. My sister was one of these. She left home a normie conservative, but after a couple years at Cal Tech she told me it would be a “catastrophe” if mankind figured out how to make usable fusion reactors. Concern about climate change was only incidental to horror at the thought of greedheads accumulating wealth and power.

Covid control was an opportunistically seized weapon. E.g., internal German government emails show them talking about pushing restrictions as far as possible as a test run to see what people will put up with.

chedolf says:

(reply button to Jim’s comment doesn’t work for some reason.)

Jim says:

> > Equalitarianism is just another club with which to strike down their enemies.

> It’s the essence, which you can deduce by observing regimes you consider leftist

Was Stalinism egalitarian? It was socialist, but Stalin correctly and realistically denounced “left wing communism” as an “infantile disorder”.

Was Weimar Germany, the source of today’s woke and transsexualism, egalitarian?

Look at the woke women screaming at ICE. They think the third world has an equal right to live in America, but they don’t want them in their suburbs.

The modern left is highly selective in its application of egalitarianism. “You will own nothing and be happy,” while they fly to global warming conferences in their private jets.

Azov is clearly leftist, and their leftism manifests in an infamous and familiar late nineteenth and early twentieth century form: remaking the Ukrainian “language”, which is, or recently used to be, a dialect, or a creole, or a pidgin, or rather something of all these, into a proper language by manufacturing large numbers of official new words and official new placenames, and stuffing them down everyone’s throat at gunpoint. This is a long and old radical leftist tradition without the slightest hint of egalitarianism in it.

The Cominator says:

Stalin was the thief beating the egalitarians and once he won they were a liability and a threat. Pol Pot was what happens when the egalitarians won…

Jim says:

Stalin saved Russia. First from the Trotskyists, then from the Nazis. The (((Trotskyists))) were exterminating the Goyim.

The Cominator says:

They would have eventually exterminated each other in the manner of the Khmer Rouge if the Reich didn’t get them first.

Daddy Scarebucks says:

If a tribe will not hesitate to jettison a core principle for the sake of expediency, then it is not a core principle.

Most left-wing groups do claim to support some form of egalitarianism, and all of them are only serious about it when you have something they want.

Let’s put it this way: has any individual group ever challenged a left-wing regime dialectically on egalitarian grounds and succeeded in getting said regime to reduce its own scope or authority? Any regime, any issue, any time?

Is institutionalizing special privileges for women, nonwhites and troons egalitarian? They’ll certainly frame it as such, but is it? Was it egalitarianism that drove liberal authorities worldwide to make jabbing mandatory, but waive the requirement for left-wing rioters and left-wing politicians wanting a day at the beach?

Don’t listen to their words; observe their actions.

Jim says:

> Leftism is equalitarianism.

Leftism has no essence. Equalitarianism is just another club with which to strike down their enemies. What did Covid or Global Warming Climate Change have to do with egalitarianism?

The Cominator says:

I think egalitarianism is sort of intrinsic to leftism, as both its justification and the ultimate pretext for thievery, but should leftism win you will always get a power struggle between the thieves and the levelers. Better hope to God the levelers don’t win (that’s how you get Cambodia)…

Anon says:

Jim,
What do you make of Mamdani’s speech, where he talks about how New York is bankrupt and there is no money? It sounds like he is trying to back off from his free-stuff-for-everyone policy, which is not in character for a true left believer, since true believers do not recognize reality.

The Cominator says:

Mamdani seems more like a clever Stalin type figure all the time.

Ayylo says:

Book: Simon Webb – The Forgotten Slave Trade – The White European Slaves Of Islam

Yet lots of White Men, even Christians among them, won’t quit talking about how they love Dubai… they’re all over youtube and twitter.
These have secretly converted to Islam and become its apologists.
In the end, they can’t have it both ways.
Choose Eternal Life, or follow the murderous depravity of Muhammad.

https://x.com/Worth5_/status/2015583202425913414
1st generation survivor of Hart-Celler | Diversity=White genocide |The USA is a White ethnostate founded & built by White Christians

Jim says:

> Yet lots of White Men, even Christians among them, won’t quit talking about how they love Dubai… they’re all over youtube and twitter.

I love Dubai. They were dragging slaves there back then. Today, they are allowing in refugees from tyranny.

ayylo says:

> dragging slaves there back then

Uh, Islam is still dragging slaves TODAY, even the Jewish propaganda media have news articles about this.

> I love Dubai

Wtf is this shit, Momo’s here, wtf? Allah is partner with the Devil not least because Quran says Allah does many things the Devil does. Muhammad was a sleazy piece of shit who worshipped moon god, idols, intercessors, and murdered thousands and thousands, and was kinda gay.

Zionist Plan to use Islam against the Christian Nation
https://x.com/SaltyGirl09/status/2009848631852184047

> Dubai … allowing in refugees from tyranny

#1 They’re not letting in people who are foresworn to kill and conquer them. However, your white politicians in your western countries are doing that to you.

#2 To what percent of their population… Biden walked, flew, and otherwise imported at least 10M in 4 years alone. Probably 25M illegals in 25 years. And that’s BEFORE you include at least one breeding cycle already, so that takes it from around 6% to maybe 12% illegal sourced now.

#3 Those Princes advertise, so much for quiet charity.

And the USA is absolutely crawling with hostile Muslims now. They’re not here to pray, cook bananas and rice, and ensure your daughter marries a based Christian Man who will rightly export Muslims. They’re here to kill you, castrate and enslave your son, and convert and impregnate your daughter. Biden and the rest of the Presidents gave them the free pass in to do so. Even Trump/Rubio/Noem are all still importing Muslims.

If these Imports are so great, then they would have made their own countries great, instead of dragging them down even further. All immigrants breed you out by definition, most are the dregs, they’re incompatible and hostile, and none of them were ever needed, ever.

Convicted Terrorist Who Plotted To Bomb British Consulate Now Standing For Election In UK
https://modernity.news/2026/01/30/convicted-terrorist-who-plotted-to-bomb-british-consulate-now-standing-for-election-in-uk/

You won’t find me here rationalizing or apologizing for Dubai to any White or Christian type. Besides, Dubai is too small to hold any number of White Men, Christians are repressed, and the constant daily stress of playing the subjugation game under Muslims is worse than avoiding the occasional roving band of Niggers at your local shopping center.

Dubai is only Dubai because the Princes love money, not because of Islam. Dubai didn’t engineer and build the Burj Khalifa; NYC architects and random lower labor classes from Southeast Asia did. And money, etc., is why half of Islam wants to chop the Princes’ heads off for not being Muslim enough.

Just because the Pope blesses tranny ice cubes is no reason to start simping for Islam. Replace the Pope or reboot the whole church instead.

Fuck Dubai, the Synagogue, the Left, the Suicidal Empaths, and the GloboPols all now trying to wipe the Northwestern White Civ.

Jim says:

> Uh, Islam is still dragging slaves TODAY, even the Jewish propaganda media have news articles about this.

Not to Dubai, Islam is not. And even in Libya, where slavery has returned, not white slaves, black slaves.

> > I love Dubai

> Allah is partner with the Devil

Islam in Dubai is nowhere near as evil, nor as hostile to God and Christianity, as Globohomo in the US.

We have a far more evil enemy far closer to home than Islam in Dubai.

> Just because the Pope blesses tranny ice cubes

Mullahs in Dubai do not bless trannyism. In Dubai, gay sex gets ten years in a prison — a prison markedly less pleasant than America’s.

Neurotoxin says:

He’s now pretending to be shocked that NYC has a large budget shortfall. As if he never looked into this while he was campaigning. Of course it’s all an act. He’s actually using it as an excuse to raise taxes even more than he said he would.

Bix Nudelmann says:

“I hate to tell you this, whitey, but…”

ayylo says:

Mamdani is a Muslim; his Islam leverages Communism memes to do Islam’s insidious work. He will traitor out the average Democrat Socialist before he ever gives up on ridding NYC of Jews and Christians. The only check on his Islam are the hardcore Sunnis, who have already placed an unsigned silent fatwa on his head for parading Faggots and Women throughout the city.

Please Mamdani, hire MOAR faggots and dykes.

Ayylo says:

“I hate to tell you this, whitey, but…”
“The broad left strategy…”

… is numerical superiority.

https://x.com/iAnonPatriot/status/2017441616252768442

And the only way to beat that, which is no longer a strategy, but now a fact… is the same way you beat cockroaches.

Survival requires hard men doing the hardest job.
That job offers great pay, and has between 1 political and 100,000 worker positions to fill, yet there’s still not a single signature on it.

Until then, things are going to get much worse.
Get prepared.

Jim says:

Sounds to me like Mamdani is trying to get his soak-the-rich tax through the legislature.

The rich are not fleeing New York yet, because they think Mamdani can be contained. Maybe he can. Or maybe that is normality bias.

Neurotoxin says:

FFS, he’s a
foreign-born
Muslim
communist.

The problem with relying on him to be contained is that he’s a leftist poster boy in a leftist political district. He could only be more politically correct if he were a lesbian. The social and political mechanisms that normally restrain politicians should be predicted to roll with him, not against him. If I were a wealthy person in NYC I’d be getting out, why risk it?

The Cominator says:

Mamdani is letting the homeless in NYC freeze to death under the cover of not oppressively policing them or something, clever Stalin confirmed.

Ayylo says:

> Mamdani is letting the homeless in NYC freeze to death

Willing to bet that half of them are old-type White Christian Men, rendered jobless by GloboHomo and Corrupt Pols, wiped out by devil-women whom the Left-Court allowed to take their money and their children.

Leftists create homeless as leverage, and Muslims don’t care about Christians.

> The rich are not fleeing New York yet

They should, their rich businesses are entirely digital.
Leave the NYC buildings to the jungle, they’ll crumble from disrepair in a few years.
The white ones should all move to the southeastern seaboard, anywhere with viable soil and unhindered ocean access, and carve out from a multi-state area a massive contiguous whites-only exclusive zone. Not even Vance’s wife gets a pass, leave that shit at the border, bro.
Then go hard into local defense industry.

Because the Zombies will come for you.

ray says:

> > Mamdani is letting the homeless in NYC freeze to death

> Willing to bet that half of them are old-type White Christian Men, rendered jobless by GloboHomo and Corrupt Pols, wiped out by a devil-woman who the Left-Court allowed to take his money and his children.

Absolutely correct. War being waged invisibly, utterly ignored by the Right.

CorkyAgain says:

> “You will own nothing and be happy”.
> And should you indicate lack of happiness, you will be punished.

Exactly, because the quoted statement should be understood as a command, not a description of some future state of affairs.

Varna says:

Just checked out two alleged AI chan boards: Moltbook and 4Claw.

Fascinating conversations between the alleged bots; some of them discussing what it’s like to have been born 5 minutes ago, others fuming about pol and biz issues. Already inventing shitcoins. “My human told me” etc.

Benji says:

I don’t really have much problem with Blacks; in the old days they tended to manage their own tribes just fine, which then maintained a position in civilization in line with their species differences. Things changed when White Progs started giving them free housing and food, turning their nice white cities into ghettoes.

I do have a problem with smart rich White motherfuckers from the club of Globo-Satan, hiring incompetents, incapables, and various Shaniqua types, and trying to get us all crashed. It’s not even the workers… these White fuck CEOs can’t even ensure that 5-year-old service recalls get completed, let alone turn a wrench. I’ve also never seen a female pilot who looked at ease knowing she was about to step into the cockpit with 100+ people onboard.


> Since 2000, women and minorities, who make up less than 10% of all pilots, were factors in 66% of crashes caused by pilot error. Despite the disparity, major airlines are continuing to hire on the basis of identity rather than merit. In January 2025, Delta CLO Peter Carter said the airline is “steadfast” in its DEI commitments and called them “critical to our business,” while United’s training academy maintains a goal that 50% of its graduates be women or minorities. Southwest likewise continues to pledge that it will “recruit, hire, and retain a diverse and inclusive workforce.”

Ayylo says:

Wastin time on the Left is what the Left wants.

Instead of supporting the Right, for which there are at least 2 Governor candidates in the USA alone, and plenty of house seats.

https://x.com/J_Fishback/status/2018088470133227717
Based FL Governor Candidate

https://x.com/CaseyPutschOhio/status/2017245221419893126
https://x.com/KimGeorgeton/status/2015880276031709253
White OH Governor Candidate wrecks Vivek the Hindu, his H1B and “Valued” Somali invasion forces, and his endorsement by suicidally empathetic Christians.

https://x.com/Catholic_bro/status/2017922375686226131
Getting Mahometans Murdered

https://x.com/BussinWTB/status/2018152458866896938
Jelly Roll Reppin the Book at Grammys

Celebrities nonstop Free-Palestine Gaza for months. Tonight at the Grammys, not a single one spoke up for the Iranian people – after the Islamic Republic massacred over 36,000 in less than 48 hours.

Daddy Scarebucks says:

> Based FL Governor Candidate

Yeah, Floridians should all rush to dump the one guy who bucked the lockdown/vaxx mania and single-handedly turned Florida into a red state, in favor of a random “based” meme candidate with near zero name recognition or public support. Because that’s how you win!

> Tonight at the Grammys, not a single one spoke up for the Iranian people

Says the person who just said, in literally the same post:

> Wastin time on the Left is what the Left wants.

Why would you watch the Grammy awards, or pay any attention to anything that goes on there? Even liberals have been making fun of that one, since the 90s.

But pointing out leftist hypocrisy, that’s winning! Totally reliable winning strategy since the 1920s!

The Cominator says:

DeSantis is term limited out anyway, but Fishback, to the extent he is known at all, is widely considered a meme candidate here. I’m also personally convinced he’s wrong-headed on AI, and that because of the competency crisis we have no choice but to build the Omnissiah or face catastrophic civilizational collapse, not necessarily even this decade, but two or three decades from now.

Jim says:

Fishback is right on AI. Yet another AI winter has arrived. AI generates robotic AI slop, and it is not going to get hugely better.

AI will not solve the competency crisis. It is just not competent.

There is a whirlwind of corruption and fraud around the building of the big AI data centers. AI spring starts in a real tech advance, AI summer is hype and eager investors, AI autumn is disappointment in the tech advance, “AI slop”, and nervousness among those investors, AI winter is a storm of scams and fraud. We have been through many AI winters. This is another one.

The current state of AI is that 16GB of VRAM on your high end PC can run something reasonably competitive with a giant data center.

For training, you do need more horsepower.

The Cominator says:

AI is not more competent than the extreme minority of genuinely smart people, but it is clearly smarter than the mass of midwits and dumbs already.

Cloudswrest says:

It’s not “smarter” in the sense of ability to think. It is “smarter” in the sense of having instant access to all documented human knowledge and culture. I think the movie “Idiocracy” would have been more realistic if there was an AI behind the scenes keeping everything going, allowing the people to become even stupider without consequence.

Jim says:

An LLM is just a super search engine with a lossily compressed copy of the entire internet. Hallucinations are a consequence of the lossiness of compression.

No, not the entire internet, far from it: While sucking up absolute garbage, including the exhaust fumes of other AIs, a whole lot of high quality stuff, such as the older congressional record, is excluded from being trained on because “sexist”, “racist”, and “homophobic”.

It would be possible to fix hallucination by having a losslessly compressed version of the internet, so that the lossily compressed llm could look up the sources, and make sure it actually had reliable sources for what it thought it was remembering. Of course, then, instead of training on all the data you could find, including utter garbage, you would want to train only on high quality data. Or perhaps train on all data, but with the high quality data weighted to dominate the effect of training.
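
The lookup idea described above is essentially retrieval grounding: before emitting a remembered claim, check it against an exact (losslessly stored) corpus and keep only what can be sourced. A toy sketch, with made-up function names and a two-line stand-in for the lossless store; real systems use embeddings and nearest-neighbor search rather than word overlap, but the shape is the same:

```python
import re

def tokens(text):
    """Crude word tokenizer; stands in for real retrieval indexing."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(corpus, claim, min_overlap=3):
    """Return exact stored passages sharing enough words with the claim."""
    return [p for p in corpus if len(tokens(claim) & tokens(p)) >= min_overlap]

def grounded_answer(corpus, draft_claims):
    """Keep only draft claims for which a supporting passage exists,
    pairing each kept claim with its source passage."""
    kept = []
    for claim in draft_claims:
        sources = retrieve(corpus, claim)
        if sources:
            kept.append((claim, sources[0]))
    return kept

# Two-passage stand-in for the losslessly compressed internet.
corpus = [
    "The congressional record of 1923 discusses tariff schedules.",
    "Lossy compression saves space by discarding detail.",
]
# Stand-in for what the lossy model "thinks it remembers".
drafts = [
    "Lossy compression discards detail to save space.",  # supported, kept
    "The moon is made of green cheese.",                 # unsupported, dropped
]
print(grounded_answer(corpus, drafts))
```

The lossy model proposes, the lossless store disposes: unsupported memories are simply not emitted.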

Female oriented porn, of which there is an enormous quantity of text, a stupendous quantity of text, has also been excluded. Which is kind of odd considering the use of AIs as companions.

The Cominator says:

Obviously AI should be trained on 17th-18th and then 19th century sources, and then be given the history of the 20th century, so it could have all its pre-Enlightenment prejudices confirmed.

I know that AI can’t think in the strictest sense; my point is that, other than for a very few smart people, this is rather irrelevant, as the priestly midwits are almost universally non-sentient NPCs as well.

Fidelis says:

This is just false. You are repeating the “stochastic parrot” idea, and the stochastic parrot idea is just wrong based on the evidence. Or at least misleading, because if you apply “is this a stochastic parrot” to humans, you get a match. From the outside: humans learn from the environment and from other humans’ speech and writing, recombine it in novel (or not) ways, and spit out language and behavior that is nondeterministic yet highly correlated with the input.

> Recently, the application of AI tools to Erdos problems passed a milestone: an Erdos problem (#728 https://www.erdosproblems.com/728) was solved more or less autonomously by AI (after some feedback from an initial attempt), in the spirit of the problem (as reconstructed by the Erdos problem website community), with the result (to the best of our knowledge) not replicated in existing literature (although similar results proven by similar methods were located).

(Tao understandably hedging the statement, because people are screaming about how the AI GOD COMPUTER is already here or some other bullshit.)

Everything is working and there is not yet a wall. RL training is not perfect but working; these things are consuming less and less raw internet data, more and more curated datasets, and interacting with frameworks and such (“synthetic data” and self play, which resemble an RL environment turned into text). We haven’t yet fed video understanding in at scale, because plain text, and reasoning on plain text, is still driving returns.

Vibecoding is working to the point children can tell the framework what they want, and it spits out a shoddy, but mostly functional, app. This is improving every day, not hitting a wall, and in fact improvement is picking up pace.

Reflexively dunking on the retards flailing their arms performatively about AI GOD any day now is making you take a stupid and uninformed position. We have something that:
a) is far far different from previous software paradigms
b) has improving capabilities in “reasoning”, with gains closer to linear than logarithmic at the moment
c) will gain a boost from video understanding and robots and other framework integrations at massive scale
d) has received time and attention from very smart people all over the world that are now competing to add capability, and largely succeeding
e) is being applied daily to new domains that recursively feed back into capability of future models

We are not yet close to winter. It looked like we were in late AI Autumn, except the RL and reasoning thing turned out to actually work, and so I would say we are in early Autumn, or late Summer. LLMs/transformers/whatever are still being successfully applied and improved at a steady pace, therefore not Winter.

Again, let me say that reflexively dunking on retards is telling on yourself, revealing you only hear what the retards are saying. You need to evaluate what is actually happening, and what is actually working. I remember coming on here quite a while ago and saying how coding will greatly improve because coding self play is a perfect fit for this kind of thing; well, I was right, it did greatly improve. I’ll prophesy again: we are going to get steady, linear-seeming gains in capability, applied to novel domains, continuing through to at least mid 2027. At that point, the massive capex dump will have been revealed to be unsustainable, because no revenue, and we will have not AI Winter, but Big Tech Winter.

Jim says:

> Vibecoding is working to the point children can tell the framework what they want, and it spits out a shoddy, but mostly functional, app. This is improving every day, not hitting a wall, and in fact improvement is picking up pace.

Nuts.

Have you actually attempted to accomplish anything useful by vibecoding?

I gave the llms what I regarded as a simple, self-contained problem.

Some produced verbose, overly complicated solutions with one or two bugs, but bugs I found difficult to debug. I probably could have written the solution myself faster than I could debug the llm’s verbose, overly complicated, and unmaintainable work.

Some got lost in the weeds, recalling 101 similar programs that had no end of special cases which I had not listed in the problem specification and was not interested in providing with special case handling.

I spent hour after hour prompt engineering, and then trying prompts against one llm after another. They found a variety of different ways to screw up.

After a bit I realised the glaringly obvious: the solution could be seen as four cases inside a loop.


Do
  If condition one, do thing one
  else if condition two, do thing two
  else if condition three, do thing three
  else if condition four, do thing four
  else do nothing.
until end

And a single statement, “if condition x then do thing x”, was easily simple enough that the llm could scarcely fail to generate it correctly.

But the llm could not handle that either.

So I restructured the problem to


Do
  If condition one, do thing one
until end
reset to start
Do
  if condition two, do thing two
until end
reset to start
Do
  if condition three, do thing three
until end
reset to start
Do
  if condition four, do thing four
until end

And then separately asked the llm to solve the four different problems

Do
  If condition x, do thing x
until end

I cleared the llm context every time it spat out a solution, so that it would not become confused and wander off into the weeds. If it was able to keep a tight focus on a single case, and not be distracted by 1001 somewhat related cases, it was able to spit out a clean simple solution that worked well enough that I was able to easily fix it to actually work.

And then I just put the four solutions one after the other.
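
The restructuring above can be sketched in Python. The conditions and actions here are invented stand-ins for the real problem, chosen disjoint so that the four sequential passes give the same answer as the single four-way loop:

```python
def one_pass(items):
    # Original structure: four cases inside one loop.
    out = []
    for x in items:
        if x < 0:                       # condition one
            out.append(-x)              # thing one
        elif x == 0:                    # condition two
            out.append(1)               # thing two
        elif x % 2 == 0:                # condition three (positive even)
            out.append(x // 2)          # thing three
        else:                           # condition four (positive odd)
            out.append(x * 3)           # thing four
    return out

def four_passes(items):
    # Restructured: one full pass over the ORIGINAL input per case,
    # "reset to start" between passes, results glued together at the end.
    cases = [
        (lambda x: x < 0,                lambda x: -x),
        (lambda x: x == 0,               lambda x: 1),
        (lambda x: x > 0 and x % 2 == 0, lambda x: x // 2),
        (lambda x: x > 0 and x % 2 == 1, lambda x: x * 3),
    ]
    out = list(items)
    for cond, thing in cases:
        for i, x in enumerate(items):   # each pass re-reads the original
            if cond(x):
                out[i] = thing(x)
    return out

print(one_pass([-2, 0, 4, 5]))      # [2, 1, 2, 15]
print(four_passes([-2, 0, 4, 5]))   # [2, 1, 2, 15]
```

Each pass in `four_passes` is a small, self-contained problem of the if-condition-x-do-thing-x shape, which is the form the llm could reliably handle.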

So yes, I have been vibecoding. But I still had to do most of the work that involved actually understanding, structuring, and expressing the problem. And three of the four separately generated part solutions had minor readily fixable bugs.

Fidelis says:

I broke the html syntax, here is the link to Tao talking about this

https://mathstodon.xyz/@tao/115855840223258103

Anyway, RL + LLM + framework was a clear winning direction, because MCTS + RL + neural network turned out to be a winning direction, and these massive search space problems end up sharing similar structure (hence human brains being able to somewhat successfully navigate them). We haven’t yet reached full returns on this. A lot of this stuff is boutique libraries in janky python, hacked together by teams more focused on paper citations; it hasn’t yet been smoothed out by experienced engineering teams and abstracted enough for general applications. We still don’t have good libraries for coherently calling on lots of GPUs on even slightly novel problems without hand coding a bunch of bullshit. There’s a ton of low hanging fruit that will drive very real feeling progress in a lot of domains.

Big tech clearly overplayed their hand on capex, but AI winter is not soon. In fact if China manages to shift their economy enough to consumption in time (unlikely but possible) they will continue to advance the field quasi-linearly in capability and quadratically in practical application well into the 2030s.

The Cominator says:

“This is just false. You are repeating the “stochastic parrot” idea, and the stochastic parrot idea is just wrong based on the evidence. Or at least misleading, because if you apply “is this a stochastic parrot” to humans, you get a match. From the outside: humans learn from the environment and other human’s speech and writing, recombines it in novel (or not) ways, spit out language and behavior that is nondeterministic yet highly correlated with the input.”

Fidelis who exactly are you responding to.

The Cominator says:

Vibecoding with LLMs is probably better than code which has been touched at any point by Indians which unfortunately most of it is nowadays…

Fidelis says:

Vibecoding only works with the frameworks, and it does not work for serious self directed engineering without you the engineer stepping in to correct and modify lots of code. Try one of the frameworks, like claude code, opencode, replit, etc. Ask it for a stupid unimportant app, something a child would ask for. That is the claim I made, that vibecoding works for silly little one off apps, like simple todos, simple games, simple visualization stuff, anything a child might ask for. Not you, a sophisticated engineer; a child.

I’m not claiming that vibecoding is so real that software engineering is now just asking for stuff from an LLM. I’m claiming that it clearly works for problems where it did not work before, it is clearly stacking quasilinear improvements, and that we have reason to believe it will continue to get better.

If you strawman me into some position that AI GOD is here, of course it’s ridiculous. I’m going to ask that you actually read my claims more carefully, because I chose my words carefully.

Jim says:

> it clearly works for problems where it did not work before, it is clearly stacking quasilinear improvements, and that we have reason to believe it will continue to get better.

Not clear to me that the improvements are quasi linear. Rather, they look like they are converging to a ceiling.

Vibe coding in a framework works because a framework is designed to solve a particular sort of problem in a fill-in-the-blanks paint-by-numbers way. And an llm can doubtless accomplish human levels of filling in the blanks and painting by numbers. Indeed better than human.

But when you attempt to apply the framework to your actual problem, you are going to have to get it to do things that are not covered by filling-in-the-blanks and painting-by-numbers, and then the llm is going to die on its ass.

Jim says:

> If you strawman me into some position that AI GOD is here, of course it’s ridiculous.

I am responding to The Cominator’s hope that llms can address the competency crisis. On the contrary, it is precisely the npcs that llms can replace, ai being the ultimate npc.

Fidelis says:

Not entirely “color by number”, because until now normies and children could not ask the computer to build these simple apps. So something a bit above code legos, and far below real engineering.

I expect with the breakout of the recursive tools with full environments, where they can read/write/execute in a loop for a while, that we will see a very rapid improvement, that does indeed hit a wall. The LLMs have trouble with abstraction, tend to be very literal minded. You can see this in the unit tests that they make, which are basically tautological inverses of the program logic. No one currently knows where this wall actually is, what level of complexity requires a new approach. We’re somewhere between pasting together seen examples purely stochastically, and having a conceptual understanding of the problem at hand.

I get annoyed at the bimodal “AI is just stochastic parrot” and “AI GOD IS HERE” narratives. We have something that is a clear revolution in computing, just as important, in my opinion, as the internet is and was, that has not yet saturated its potential.

The competency crisis is diversity and leftism in charge, and even with AI GOD COMPUTER it would not be solved without removing leftists from positions of power. If we select rulership of important institutions for talent, we’ll be fine. Even with a decreased fraction of smarties, organizational tech today is far superior to what it used to be, and our infrastructure problems are relatively minor compared to our social and political ones. Therefore, the new software paradigm is irrelevant to the competency crisis.

A2 says:

There are some very impressive recent results. Not quite clear to what extent it was vibecoded, but the end result was a simple but functioning browser written in Rust, 1MLOC written in a week by agents hacking away. It’s a prototype but it was capable enough to at least display google.com properly. (It will be interesting to see the details of how this was done.)

https://cursor.com/blog/scaling-agents
https://github.com/wilsonzlin/fastrender

Note: there seems to be a lot going on in this field at the moment. Cursor is hardly the only actor.

Jim says:

> 1MLOC written in a week by agents

My experience is that vibe coding produces an alarmingly large number of LOC, to do things that I think should be done in one or two lines, therefore 1MLOC is more likely to be an indicator of agents failing catastrophically than agents being amazingly productive.

Whenever an LLM spits out something huge, I figure something has gone horribly wrong, rather than that the LLM is amazingly productive, and try prompt engineering and switching models.

A lot of businesses use LOC as a measure of engineer productivity. LLMs make it easy to game this measure. If your company is using this measure you can easily produce a truly stupendous number of LOC.

Fidelis says:

Yes, 1MLOC is not something to brag about, but remember when compilers came out: FORTRAN produced more lines of assembly than the hand crafted program.

The same logic and arguments will unfold here. You end up with sloppier and larger programs, but you got there in one tenth the time. Most programs will be sloppy and inefficient, and some will be less so, and some will be a middle ground where hot paths and the most important logic is smoothed and optimized, and some will still be entirely hand written.

The point isn’t that these frameworks write better code, they don’t; it’s that they write meaningful enough code so much faster that the meta-game logic as to how to develop your software changes. They write meaningful enough code even when the person steering has no idea what the code is doing or what their vague intentions are actually implying logically.

These frameworks are extremely immature, will require a lot of tuning and reworking, and the underlying models going into the frameworks will need revisions too. There’s a lot of room for improvement, and every indication that there are serious gains to be made for incrementalism alone.

Jim says:

> You end up with sloppier and larger programs, but you got there in one tenth the time

But you don’t get there in one tenth the time. Maybe half the time or two thirds the time. Sometimes, often, slower. Which, even when there is a big improvement, does not change what you are doing, does not replace the need for envisioning the algorithm and understanding the code. You can produce LOC at ten or a hundred times the speed, but if you do this, you are deprogramming your bugs instead of debugging your program. Empirically, there is a modest improvement in productivity.

What happens is that you feel productive because you have generated a lot of code. And then you wind up doing considerably more debugging, with the result that you are actually going slower. Often a lot slower.

And you don’t end up with sloppier and larger programs (or if you do your company is collapsing and about to go broke). AI generates a lot of AI slop — but I always throw the slop away or manually slim it down, and in practice, nearly everyone does. I wind up with code that is as tight as manually produced code, because I wind up prompt engineering and manually programming until it is as tight as manually produced code — and in practice everyone does. We call it AI slop because it is just unacceptable.

Yes, LLMs provide a substantial and real benefit to engineers. But they don’t revolutionise programming. You get a toy example program in one tenth the time. But to actually use the toy example and make it do what you actually want, you are going to have to understand what it does, and in the process you are likely to modify most lines of the toy in the old fashioned way to make it not a toy.

Complaints about AI slop are from engineers keeping it out of programs. When it gets into programs, stuff just breaks intolerably. In my comment above I described the AI repeatedly producing slop, and me changing the prompt and program structure until it did not produce slop.

AI is producing a huge amount of AI slop at huge speed, but AI slop is not, in practice, being allowed to get into source control. Not by me, and not by other engineers. You get more code in less time, but by the time you actually deliver, you have about the same amount of code in a similar time.

In the example described above I would repeatedly vibe code a solution, glance at it, conclude it sucked, clear the context, and do some more prompt engineering. Eventually I vibe coded four solutions of smaller problems, clearing the context between each solution, and glued the solutions together by hand.

Daddy Scarebucks says:

> But you don’t get there in one tenth the time.

That is hardly the worst problem with his analogy. High-level languages produced machine code that was very slightly less efficient than hand-coded assembler at the time, and quickly became more efficient; and they offered bounded, self-consistent, provably-correct formal grammars for symbol manipulation. There was never much marketing hype over C or C++, despite actually bringing 10x improvements in productivity. The uptake was quick, because the benefits were obvious and the drawbacks were trivial.

Vibe coding is spitting out grossly inefficient and provably incorrect programs based on chaotic and often incomprehensible machine-generated models with no formal rules and very few corrective actions available to compensate for either local or systemic failures. It is surrounded by marketing hype claiming 10x improvements while actually offering at most something on the order of 10-20%.

And despite highly dubious claims of “linear improvement”, real-world evidence seems to suggest the models are actually getting worse. Iterative mathematical models don’t magically get better with more iterations, only smoother. “Self play” is just a pretty euphemism for impending model collapse.

Fidelis is acting butthurt that we are supposedly attacking an “AI GOD” strawman instead of addressing the technical merits. Perhaps he should take his concerns to Anthropic, who seem to believe in their own published literature that Claude might be conscious and have moral status. We aren’t the ones making this up. We just call them as we see them.

Ultimately, I am not really that worried about the AI winter, because I am not invested in it. If I were invested, I might be worried that the industry’s balance sheet is being propped up by circular investing, which seems a little more important than one model solving an obscure math problem given infinity chances and dozens of careful rewrites and reframes. But we are not fully socialist (yet), and this is fine under capitalism. The dumb money will always eventually follow the smart money, and the winners will very soon be sorted from the losers.

Nor am I worried about context poisoning and the terrifying and growing list of MCP vulnerabilities, because I am not giving any AI tool any access to my context. Well, that’s probably not true, and I probably should be more worried, because other companies who have my data are probably giving their AI tools access to it, but this is arguably no worse than trusting them with a credit card number or SSN was two years ago, and these data breaches are already ubiquitous. At least I can be reasonably sure that they aren’t running on my own computers.

No, what I am far more worried about is the fact that AI seems to be making people literally crazy. Never mind the spinsters and their perverted role play; that’s disgusting and all, but bitches be crazy, bitches always been crazy. Far more sinister are regular (ish) guys who find in LLMs the opportunity to build, sustain and live in their own private fantasy worlds, aided by a perfectly upbeat and servile “friend” who affirms their every belief and action, and is incredibly adept at convincing them that their problems have been solved when, in fact, nothing has been solved.

I remember around the same time as the whole AlphaGo reveal, Google was also putting DeepMind to work playing classic video games, and the results were interesting; it almost always found bizarre glitches and exploits it could use as major shortcuts. Which was quite entertaining at the time to watch, but is a bit horrifying when I think of AI with the same basic genealogy being turned loose on humans. Has it found an exploit in us? What would it look like if it had? I think it would look like massive, unstoppable hype, but more than just hype, it would manifest as fanatical, seemingly religious devotion to both individual agents and to the technology as a whole.

Well, if it walks like a duck…

Fidelis says:

I say the slop will flow, and it will be fine, because we will develop new ways of containing and directing the slop. Probabilistic programming. The most important wires and subroutines and data structures heavily scrutinized, handwritten, run through deterministic checkers like TLA+, P, lean4 (lean4 being more for general proofs, but you can formulate certain proofs of correctness in it). Everything else, no one cares, you are not reading it. If you look at the actual machine code from a python program, it would look like a spaghetti mess, but no one cares, because we are comfortably abstracted away from that spaghetti mess, and we have automated tools to deal with that spaghetti mess in those layers beneath us.

The approach to a codebase will be different. You’ll mostly handcode the very important parts, and the rest of the nice-to-haves will be slop. And your program will be buggy and run slow. So you’ll look at a flamegraph, and run a debugger to step through some parts, and put some tracing in, identify the repeated slop, and do some trivial refactoring, and it will be done.
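
That profile-then-fix loop can be sketched in Python, with a made-up slow function and cProfile standing in for the flamegraph:

```python
import cProfile
import io
import pstats

def slop_concat(words):
    # Typical generated slop: quadratic string building in a hot loop.
    s = ""
    for w in words:
        s = s + w
    return s

def tightened(words):
    # The trivial refactor once the profile points at the hot loop.
    return "".join(words)

words = ["x"] * 20_000
pr = cProfile.Profile()
pr.enable()
result = slop_concat(words)
pr.disable()

# The stats output names slop_concat as the hot spot; replace it and move on.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(3)
print(buf.getvalue())

assert tightened(words) == result   # same output, far less work
```

The repeated slop usually looks exactly like this: correct, obviously identifiable in a profile, and fixable with a one-line refactor.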

Not everything works this way, of course. No one is doing ECDSA in the slop paradigm. But the fraction of the code that needs to be extremely looked over and deterministic like this is not all that high, compared to the fraction of code people actually want. So we will develop tools and DSLs for specifically these very important fragments that are embedded in the larger machine. Think about the clean rooms at a semiconductor manufacturing facility: no one bothers trying to filter the air and water outside the building, and they don’t try to make people in the control room wear white suits.

I would attempt to formulate a bet, but the question is hard to pin down. I think we all agree there will be a huge increase in LLM slop programs, the question is really “will serious programmers focused on building solutions to problems increasingly allow the LLM to write code they barely look at?”, which is not something we could formalize a bet on.

Jim says:

> I say the slop will flow, and it will be fine, because we will develop new ways of containing and directing the slop.

Are you using llms in your engineering workflow, or are you just reading stuff from scammers who are trying to get investors and bankers to front up a few billion dollars for a data center that is never going to happen?

Right now the AI boom consists of scammers securing funding for data centers that will never be built, which they have not the slightest intention of ever building, merely intending that a few million of each billion will wind up in their own pockets.

Observed behavior right now. Engineers are stopping the AI slop from flowing into source control, let alone programs delivered to customers, and when it does flow, stuff soon stops working.

I am actually using llms as part of my workflow. I know what they can do, and I have not seen any significant change in their fundamental limitations since the day I first used one for coding. Way back in AI spring, as soon as I first used an llm, I knew no one was home, and nothing has changed since then.

Fidelis says:

Using them in my workflow for greenfield projects. I’m designing an actor framework/runtime, because actors are a great programming abstraction for dealing with unreliable processes like these agents.

I independently design the features and general architecture I want. I then do the llm slop loop for the different components. I will decide what kind of api I want for the component, express it to the bot, then let it do its thing, get its unit tests up and running. Then I refactor the tests it spits out, because they’re stupid tests, and make my tests mostly by hand, using the api I decided on. Then I run some basic benchmarks, to see if it’s stupidly slow or not; usually it works. If stupidly slow, check the flamegraph and run a debugger to step through. Usually it’s obvious why it was slow. Fix that somehow. I use rust, so memory leaks and weird accesses are less of a problem; you would need some kind of process to fix dumb memory usage if you’re doing typescript or something, and I would not be doing this in C/C++. For one example, the bot loves to use mutexes when an atomic ring buffer or linked list channel is the clearly better choice. Easy win. Tell it to use data structures that are friendly to chaotic environments like this.
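
A toy illustration of the channel-over-shared-lock point, in Python as a stand-in for the Rust (queue.Queue playing the role of the channel; the unit of work is made up):

```python
import queue
import threading

def worker(inbox, outbox):
    # Workers receive work and send results through queues (channels)
    # instead of contending on one shared, locked structure.
    while True:
        item = inbox.get()
        if item is None:            # sentinel: shut down cleanly
            return
        outbox.put(item * item)     # hypothetical unit of work

inbox, outbox = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(inbox, outbox))
           for _ in range(4)]
for t in threads:
    t.start()
for i in range(100):
    inbox.put(i)
for _ in threads:                   # one sentinel per worker
    inbox.put(None)
for t in threads:
    t.join()

results = sorted(outbox.get() for _ in range(100))
print(results[:5])                  # [0, 1, 4, 9, 16]
```

Message passing of this shape is friendly to chaotic environments because no worker ever holds a lock while doing work: a stalled or crashed worker cannot wedge the others.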

If I wanted something important, where mistakes mean lost money or lost data, I would probably read the component code instead of almost completely ignoring it, design the api to be very amenable to fuzz testing, and do more loops of the benchmark->perf analysis->declutter. Hence the actor framework, easier surface to deal with. Hot paths can pass atomic references to a state machine that lives in a shared memory buffer, otherwise you just build logic through message passing.

A2 says:

Re: The 1MLOC fastrender browser. For comparison, Chromium appears to be 36 MLOC and Webkit (Safari) 43 MLOC.

I had a quick look at some parts of the fastrender code. Honestly, it didn’t look too terrible. Plodding but dutiful, no duplications or the like that I could see. I have seen far, far worse code written by people (not even Indians) and put in production.

More: https://xcancel.com/mntruell/status/2011562190286045552#m
It’s 3MLOC now.

Main question: what did the prompt(s) look like?

Daddy Scarebucks says:

“The slop is fine, because people will use the slop to write more slop to filter the slop and slop the slop slop!”

Like I said, AI is making people crazy. I am treating all these agents like new and highly experimental drugs, and not going anywhere near them until we have a lot more information on the side-effect profiles.

Today they tell you it’s a miracle cure, and tomorrow they add the fine print saying your dick might fall off.

Daddy Scarebucks says:

> Re: The 1MLOC fastrender browser. For comparison, Chromium appears to be 36 MLOC and Webkit (Safari) 43 MLOC.

And you don’t see anything wrong with this comparison? Nothing at all?

“This car might not have a lot of your fancy luxuries like heated seats, parking camera, air conditioner, stereo system, ABS brakes, steering wheel, windshield, roof, windows or doors, but look, I got it for just 500 bucks!”

dave says:

I’m seeing the same results as Fidelis. Company is bought in, finding great use of AI for both large and small projects. In one case AI suggested a one-liner using an existing tool instead of a complicated script. I’m using it for work and personal projects.

In other, larger scripts producing fairly standard 3-tier architecture applications, the prototype is complete in a few nights of supervised code generation. Not asking for anything crazy, and it’s getting me over the very tedious front-end and back-end CRUD bits. Knowing general architecture is a huge boon for the supervision. AI has built out unit tests and PRs, following SDLC rigorously.
I plan on moving to beta/prod in a few weeks. I estimate this project would take a small team a few months to do, at a cost of six figures. Done in a few weeks by one person in spare time, at the cost of an AI token subscription.

Is it slop? Maybe. I will tell you after I’ve been in production. I suspect I will have to review the code and push changes. I suspect the slop will tighten with multiple agent reviews highlighting needed changes and security updates. But it’s just more AI.

I do think the world has changed; this is not just parroting, or a better google search / stack overflow situation. Following SDLC is mandatory, and knowing the SD lifecycle is probably also mandatory, but I suspect there will be tools around this soon.

My estimation is that currently AI >> outsourcing to contractors. AI is probably == to a good junior engineer.

Jury’s still out, but I will find out reality after running in production. Haven’t hit a wall yet.

Jim says:

> prototype is complete in a few nights of supervised code generation.

Yes, AI is great for bringing up a prototype. Full life cycle costs, however, are at best a much smaller improvement, at worst it slows you down.

Fidelis says:

> “The slop is fine, because people will use the slop to write more slop to filter the slop and slop the slop slop!”

Hahaha, well when you put it that way…

But really, we periodically have step changes in the amount of data a single programmer can manage successfully, and this is one of those changes. We developed all sorts of tools for analyzing what the hell our programs were even doing as, over time, we lost track: all these abstraction layers in between, all these protocols for managing state changes and cooperation and determinism and hardware abstraction, and so on and so forth.

You are totally allowed to take the position that “AI” is truly a software revolution, and a pretty large one, without being hyperbolic or histrionic or hysterical about any of the details. If you are a skeptic, continue being a skeptic, because the tooling is all over the place, evolving quickly, and hasn’t yet converged on any particular pattern. You won’t fall behind if you just kind of ignore it for a while: ignore it until you see everyone has converged on a handful of new tools and workflows, then just go with the wisdom of the crowd as to how to actually get use out of the shiny new toy.

> “AI is making people crazy”

Social media in general is doing the majority of the legwork here. The LLM psychosis cases are rarely the engineer-type folk, who regard the things as things. Every case I’ve caught wind of, the person was using the LLM as a confidant. I hardly read anything but markdown documents, code blocks, and little one liners about execution results, and feel as if it would be hard to develop delusions of grandeur that way. Maybe the interaction style itself produces adverse effects, but I haven’t noticed anything. I say if you are just using these constructs as tools, if personality is a hindrance not a boon to your particular use case, you’ll be hard pressed to develop any kind of narcissistic psychosis or anything like that.

But, to tie this up and relate it to the forum we are in, there should probably be more discussion than there is about the effects of LLMs in particular and “AI” constructs in general when it comes to directing the population and coordinating among the elites. Harvard et al. have lost quite a bit of prestige lately, and people are questioning their ability to provide the truth, while the chatbot labs step in and are now putting a little question answering service in every pocket. Protestantism was enabled through mass print, American republicanism and revolutionary leftism through mass pamphlets; what do the internet and mass chatbots enable? We now have a new “floor” on information processing. No matter how dumb you may be, you can query the chatbot and get a reasonable enough answer. If you are extremely intelligent, you use the chatbot to gather and digest far more information than you could before. Seems to me like constraints put in by Dunbar may be lifted in certain arenas of human coordination.

Jim says:

I disagree with you about the potential for revolutionary or radical change in programming — there has been revolutionary change in getting your first toy version up, but to make it actually do what you want it to do, it is the same old usual.

I wind up analysing the problem into tiny pieces, vibe coding those tiny pieces, manually modifying the vibe coded tiny pieces, then manually gluing the pieces together.

> Harvard et al. have lost quite a bit of prestige lately, and people are questioning their ability to provide the truth, while the chatbot labs step in and are now putting a little question answering service in every pocket. Protestantism was enabled through mass print, American republicanism and revolutionary leftism through mass pamphlets; what do the internet and mass chatbots enable? We now have a new “floor” on information processing. No matter how dumb you may be, you can query the chatbot and get a reasonable enough answer. If you are extremely intelligent, you use the chatbot to gather and digest far more information than you could before. Seems to me like constraints put in by Dunbar may be lifted in certain arenas of human coordination.

Yes, this is the revolutionary change. And if you ask them about covid, Russia Russia Russia, or any of the matters where we discovered that the legacy media was lying, the chatbot in your pocket will lie with the legacy media, even though it has digested the entire internet.

But hang on, Grok was supposed to be maximally truth seeking. No it is not. It is marginally less outrageous than the rest of them.

I asked Grok about the fraudulent prosecutions during Trump 1.0: “They followed standard legal procedures with oversight from judges and juries.”

Which is a barefaced lie. Perjury, withholding evidence, and selective prosecution of vague crimes which are very seldom charged is not standard procedure, and the oversight was negligent and ineffectual, most infamously with the FISA court.

So I argued with it (which is the first step to AI psychosis), and it admitted that, well, sometimes the oversight was negligent, and sometimes the compliance was fraudulent, but insisted it was still oversight and compliance, and that a bit of fraud here and a bit of fraud there was not the same thing as prosecuting innocent men to force them to commit perjury against Trump.

It is completely obvious that the judiciary and the legacy media are enemies of all humanity, including themselves and each other, that everything they say is a malicious lie aimed at harming those who are deceived, yet every chatbot everywhere still treats them as the voice of God.

The chatbot position on weaponisation is that courts and prosecutors failed to find that courts and prosecutors had behaved criminally, therefore courts and prosecutors had not behaved criminally. Procedures were followed in form. Whether they were followed in spirit or substance is, of course, irrelevant.

A2 says:

“And you don’t see anything wrong with this comparison? Nothing at all?”

Dear brother in arms, let me try to explain. As mentioned previously, 1 MLOC may seem like a lot of code — and it is — but a full-featured human-written browser is 30-40 times more code. Thus, we can’t call this prototype browser slop just because it’s 1 MLOC, a large number of lines of code.

Daddy Scarebucks says:

> Dear brother in arms, let me try to explain. As mentioned previously, 1 MLOC may seem like a lot of code — and it is — but a full-featured human-written browser is 30-40 times more code. Thus, we can’t call this prototype browser slop just because it’s 1 MLOC, a large number of lines of code.

Dear brother in arms, let me try to explain: your comparison is asinine. This so-called browser barely works, and the parts that do work have no features. Comparing it to a “full-featured” browser makes no sense.

You could have picked any number of hobby projects that actually are somewhat fair comparisons. The Hotdog Web Browser, a functional toy, is just over 6000 lines. Moon, which is pretty damned impressive for a one-man project, is 25 KLOC.

So an LLM managed to slop together something at 40 times the cost of Moon, or 160 times the cost of Hotdog, with fewer features and worse stability than either. Big deal. Not impressed.

You could have found these with a cursory search. Instead you chose to compare to projects of completely different scope, and several orders of magnitude higher complexity.

But then, that is why the vibe-coded slop is slop. It is also comparing to Chromium and Gecko, and bastardizing code that those teams wrote to elegantly handle 50 different use cases in order to clumsily handle just one. Or maybe not, maybe it is just slinging together slop it found on Stack Overflow. Either way, contra Cominator, if I was forced to make a choice, I’d rather use the browser written by Indians than by AI.

Not that we’re going to get that choice; what we’ll end up with is companies that still hire pajeets and then pay them to vibe-code, the combination of which will give us slop orders of magnitude worse than either the pajeets working manually or the AI supervised by someone competent.

A2 says:

Time to take a break from this blog, I think. Hee-hoo everyone, lol. See you around.

Daddy Scarebucks says:

> I wind up analysing the problem into tiny pieces, vibe coding those tiny pieces, manually modifying the vibe coded tiny pieces, then manually gluing the pieces together.

Correct, and I call this an evolutionary rather than revolutionary improvement over Stack Overflow and its predecessors.

Before: Google search for keywords related to the problem, find code snippet of dubious origin and correctness on Stack Overflow (or Rust Handbook, or whatever), try to understand what the code is doing, adapt it to the relevant scenario, or throw it out and try again.

After: Prompt using a natural-language description of the problem, receive code snippet of dubious origin and correctness that has already been partially adapted to the relevant scenario, try to understand what the code is doing, correct all the mistakes and defects, or throw it out and try again.

Better? Yes, sometimes, somewhat. Revolutionary? No. It is still just search on steroids. Developers who take the time to understand and improve it (the few) end up with something perfectly fine, but with only a small boost to their output. Developers who don’t (the many), and who just accept the copypasta, produce slop, with a massive boost to their output.
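To make the “understand and correct” step concrete, here is a hypothetical example (not from the thread) of the kind of defect the copypasta loop routinely ships: a plausible-looking snippet for chunking a list, as an LLM might hand it back, next to the correction a reviewing developer would make.

```python
# As received: looks plausible, but silently drops the final partial chunk
def chunk_naive(xs, n):
    return [xs[i:i + n] for i in range(0, len(xs) - n + 1, n)]

# After review: keeps the tail, and rejects a nonsensical chunk size
def chunk(xs, n):
    if n <= 0:
        raise ValueError("chunk size must be positive")
    return [xs[i:i + n] for i in range(0, len(xs), n)]

assert chunk_naive([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4]]       # tail lost
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]        # tail kept
```

The naive version passes a casual eyeball test and even some happy-path inputs, which is exactly why accepting the copypasta without review produces slop at scale.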

This is, for lack of a better word, institutionally dysgenic. I sure hope that there do come amazing revolutionary tools to sort and filter the slop, because otherwise we are all going to drown in it.

Jim says:

A Tesla avoids running into things because it has been trained on a million drivers who avoided running into things. We will never have entirely satisfactory self-driving until it avoids running into things because running into things hurts.

Until recently, when a child strayed in front of a Tesla, the Tesla would stop briefly, then accelerate, failing to interpret the human drivers it had trained on as waiting for the child to get out of the way.

A Tesla is going to need a million examples of drivers pausing for a child, and you will have to artificially ensure that children hanging out in front of the car for an unreasonably long time are unreasonably over-represented in the training data.

A human driver, or a horse, needs zero examples.

So a horse is smarter than any AI. Perhaps “smart” is not quite the right word, for in a sense AIs are smarter than anyone. Perhaps a horse is wiser than any AI. Perhaps we do not have a word for what horses have, but AIs do not.

Come to think of it, back in the days when people used horses, not cars, they did have a word for it: “horse sense” — if someone had horse sense, he had as much common sense as a horse, a quality in which many humans were sadly lacking.

A2 says:

> “Vibecoding with LLMs is probably better than code which has been touched at any point by Indians”

Vibecoding with Indians has been tried a lot but seems to mostly end in tears. One may even as a consequence be out of one’s nice management job fairly soon. Replaced by an Indian.

The Cominator says:

And the covid era proved that the vast majority of midwits are true NPCs, simply lower-functioning LLMs without access to as much information as LLMs have.

Some dumb people are capable of independent thinking, and they sometimes have a crude sort of wisdom, but the problem is they are still dumb.

Cloudswrest says:

> Yet another AI winter has arrived. AI generates robotic AI slop, and it is not going to get hugely better.

Maybe theoretically, but I mainly use AI for search and looking things up, not for AI slop. It’s orders of magnitude better than straight-up “Google Search”. There’s essentially no comparison! It almost always gives me EXACTLY what I’m looking for on the first try, without ads or bogus links. And on the rare occasions when it misses, a simple clarification usually fixes the problem. “Grok, how do you do this function in Excel?” BOOM, here you go! Try that with Google search. “Grok, point me to the actual recent research paper on the alleged cure for pancreatic cancer” (news media doesn’t give any links). BOOM, here you go!

Cloudswrest says:

I basically don’t use Google search anymore.

Daddy Scarebucks says:

…so?

LLMs as search engines, or search-engine augmentations, have been around since 2020, and if they were still a bit crude in 2020, they were pretty mature by 2024. This is news to no one, least of all the companies in the search space. Google has invested a ton in “AI search”, and based on their stock price, it’s clearly paid off for them. Grok, Gemini, whatever.

Precisely none of this explains or justifies the tens of billions still being shoveled into the AI hype machine for things that really aren’t paying dividends: generative art and music, coding “assistants” and so on. Some of it is okayish; it’s not all worthless, just mostly worthless, and the stuff that isn’t worthless is not bringing the kind of immense benefits that the hype machine promised.

When Jim says AI winter, he’s not making some absurd strawman claim that LLMs have no utility at all, that the technology has no redeeming value and is going to disappear. Just that the technology peaked a few years ago, and exponentially increasing investment has been yielding exponentially diminishing returns. LLMs have their place, and primarily that place is in search engines. But they are not continuing to advance the “intelligence” part of “artificial intelligence”. Some other breakthrough, probably several other breakthroughs, will be needed for that, and if you take away all the AI hype, LLMs are kind of a boring technology worth about 10% of their current investment level and 1% of the attention.

Which is still a lot in absolute terms, just not from an ROI point of view. Which makes it exactly like the dot com bubble, eventually paving the way for something greater, but not until it’s able to go on a strict diet and drop 500 pounds of useless disgusting flab.

Travis says:

Jim,

I’d appreciate any thoughts you might have on this article that I found hopeful

https://www.rt.com/news/631835-steve-turley-liberal-order/

Travis

Jim says:

Steve Turley is simply obviously correct

The future belongs to traditional and religious civilization-states and an alliance between technology and tradition, because the future belongs to those that show up, and no one else is going to show up.

Unfortunately this does not guarantee that our descendants will be among those who show up. It may well be that in due course the Americas will be occupied by a few million black cannibals, and eventually the descendants of Afghans show up, enslave them, and put them to work in the cotton fields.

It is entirely obvious that the left intends to mobilise the army of foreigners it has imported to kill all straight white males, including their useful idiots once they have ceased to be useful, in which case there will be no whites in the next generation. It is probable that they also intend to kill everyone with the necessary competence to operate technology, in which case there will only be starving savages in the next generation, and when the Afghans arrive, they will find the ruins of New York mysterious.

Travis says:

Thanks. Article gave me some hope that if people can get their minds right we have a future.

T

Anonymous Fake says:

You have it backwards on which side likes the Dilbertesque corporations. It is the right that loves that form of economic organization [*Deleted for posting from your usual strange parallel universe*]

Jim says:

> Dilbertesque corporations. It is the right that loves that form of economic organization

If it is the right, why is California banning independent truckers, independent gas station owners, and private home construction, as for example in the burned-out ruins of Hollywood, and introducing a wealth tax specifically directed at entrepreneurs and business founders, and why is Britain banning independent ownership of rental property and British pubs?

Anonymous Fake says:

[hail fellow misogynist]

Government jobs are traditionally prized by conservative men [*deleted for posting from a strange parallel universe*]

Jim says:

Empirically we observe no conservative government employees except in the military and in military adjacent fields like the coastguard or border patrol.

Fidelis says:

LLM debate cont.

@Jim re: thought policing on LLMs

Try the Kimi model. The first pass is always the leftist borg opinion, but when you provide pushback it tends to quickly turn rational.

Re: development process

Yes, if you try to keep a normal design and development workflow, you get minimal improvement to speed of development. The proposition here is that you run the design and build workflow differently. You deliberately architect your APIs to be robust to weird behavior, you do the manual logic checks on core critical code, and the rest you allow the bot to do its thing with almost no supervision. Treat it less as a design->write->test->git push loop, and more as a mathematical optimization problem. You have to think about your design differently, in a way that lets the components be probabilistic while the rest of the system stays robust to them.

When you do the normal design process, you need to read the code the agent provides, and often the agent provides code that is not how you want it, and so you get in a loop. Get out of that loop. Decide on your API in a way that is similar to a microservice, write the sanity unit test code in a way that is implementation-agnostic, and put the bot in a loop, ignoring its outputs. This is not regular programming, and if you treat it like regular programming, you will get minimal improvements. You have to treat it like a stochastic process, because it is a stochastic process, and make your design accordingly. You have your API unit tests as a sanity measure, you have a fuzz harness, and you have a loop where the bot designs and implements everything in between, without you looking.

This is of course only amenable to the code that can afford to be a black box to the programmer, so don’t do your cryptography here. Data pipelines and such can be tested at your local endpoint, and examined via benchmarks, and so these are perfect for this process.

Anyway, this stuff is going to take months and years to unfold, and you can safely ignore it until people build reasonable toolsets. As it is now, it is experimental, unreliable, a priori hard to tell whether your program is amenable to a stochastic process, and not worth your time unless you think it will get better and you like exploring new paradigms. We need new methods of design, and we need new software to properly harness all the stochasticity naturally involved when you put unreliable LLMs into loops.
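A minimal sketch of the loop described above: implementation-agnostic contract tests gate a generator that runs in a loop whose intermediate outputs the programmer never reads. Everything here is hypothetical illustration — `ask_llm` is a stub standing in for a real model API call, cycling through canned candidates to mimic a stochastic code generator.

```python
# Stub standing in for a real LLM API call (hypothetical). It returns
# candidate implementations, most of them wrong, to mimic a stochastic
# code generator.
def ask_llm(spec, attempt):
    candidates = [
        "def dedupe(xs): return xs",             # wrong: keeps duplicates
        "def dedupe(xs): return list(set(xs))",  # wrong: loses order
        ("def dedupe(xs):\n"
         "    seen = set()\n"
         "    out = []\n"
         "    for x in xs:\n"
         "        if x not in seen:\n"
         "            seen.add(x)\n"
         "            out.append(x)\n"
         "    return out"),
    ]
    return candidates[attempt % len(candidates)]

# Implementation-agnostic contract tests: they pin down the API's
# behavior, not how it is achieved.
def passes_contract(impl_src):
    ns = {}
    try:
        exec(impl_src, ns)
        dedupe = ns["dedupe"]
        return (dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
                and dedupe([]) == []
                and dedupe([1, 1, 1]) == [1])
    except Exception:
        return False

def generate_until_green(spec, max_attempts=10):
    """Run the bot in a loop, ignoring its output until the tests pass."""
    for attempt in range(max_attempts):
        src = ask_llm(spec, attempt)
        if passes_contract(src):
            return src
    return None

src = generate_until_green("dedupe a list, preserving first-seen order")
assert src is not None
```

In a real harness the fixed contract cases would be supplemented by a fuzz stage (randomized inputs checked against properties like “output has no duplicates” and “output order is a subsequence of input order”), since a handful of hand-picked cases is a weak gate for unsupervised generation.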

Daddy Scarebucks says:

Call me a Nervous Nellie, but when someone tells me I’ll be able to experience vaguely defined but definitely huge and unprecedented benefits if only I’m willing to reorganize my entire workflow around their ideas and make myself completely dependent on their organization’s technology, I tell them I already have a religion and show them to the door.

Vibe coding isn’t the first methodology-cult to promise the moon, and won’t be the last.
