Newcomb’s paradox

Newcomb’s paradox is a paradox because half of the very smart people think one answer is obvious, the other half think the other answer is obvious, and each half thinks the other is being stupid.

The scenario is that a demon that can accurately anticipate people’s actions, or an AI, or a committee of psychiatrists, or something like that, offers you two boxes: a box that contains a thousand dollars, and a mystery box that may contain a lot more than a thousand dollars, or may contain nothing. If the demon has predicted that you will open both boxes, he has put nothing in the mystery box. If the demon has predicted that you will open only the mystery box, returning the box that definitely contains a thousand dollars, he has put considerably more than a thousand dollars in the mystery box.

You know that this entity is very good at predicting such actions. Do you take both boxes, or only the mystery box?
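To make the stakes concrete, here is a minimal expected-value sketch in Python, assuming a predictor that is right with probability p and a mystery box holding M dollars when it pays out (both numbers are mine, not part of the original puzzle):

```python
# Expected payoffs under an assumed predictor accuracy p and mystery-box payout M.
def one_box(p, M):
    # The mystery box is full only if the predictor correctly foresaw one-boxing.
    return p * M

def two_box(p, M):
    # The $1000 is always yours; the mystery box is full only if the
    # predictor wrongly foresaw one-boxing (probability 1 - p).
    return 1000 + (1 - p) * M

p, M = 0.9, 1_000_000
print(one_box(p, M))   # ~900000
print(two_box(p, M))   # ~101000
# One-boxing wins whenever p*M > 1000 + (1-p)*M, i.e. (2p - 1)*M > 1000.
```

Note that this calculation formalizes the one-boxer’s side of the dispute; the two-boxer objects that the boxes are already filled, so your choice cannot change what is in them, which is exactly why the smart people keep talking past each other.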

What makes this paradoxical is that the demon and the action are morally neutral, so there is no right answer. Even though pretty much everyone has unreasonable confidence that they have the right answer, there is no solution that can be attained rationally.

Let us suppose that it is not a demon, but a pastor of an underground church. The pastor is very good at judging people’s character, and you have agreed not to take the thousand dollar box. But because the church is underground, there will be no consequences for defecting on him, other than that he will have successfully tested his ability to judge people’s character. The only negative consequence for you is that he will think ill of you.

Well, in this case, assuming he is good at assessing character, there is no paradox. The rational self interested action is to be the person you hope he thinks you are.

If it is possible to accurately judge character, and if character remains consistent over time, then rational self interest aligns with virtue — hence the tendency of smart people to behave better than stupid people, and of wicked people to deny that character is consistent over time. Supposedly, just because a criminal has been arrested three times for murdering three people, we cannot conclude that putting him in an oubliette and leaving him there forever is a good idea, and should instead release him almost immediately, as this is costless and benevolent.

Cooperate/cooperate equilibrium is by definition preferable to defect/defect equilibrium. But the standard game theoretic analysis leads to the conclusion that cooperation is only rational if there are going to be a large and indefinite number of future interactions. If there is a definite limit to the number of future interactions, or if terminating the relationship is relatively low cost, defection is rational. If, however, people have significant capability to predict cooperation, then cooperation becomes rationally optimal under considerably broader circumstances.
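A minimal sketch of that standard analysis, with toy payoffs of my choosing (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 for exploiter and exploited) and a continuation probability delta standing in for “indefinitely many future interactions”:

```python
# Toy prisoner's dilemma payoffs: mutual cooperation R, mutual defection P,
# temptation T for the unilateral defector, sucker's payoff S for his victim.
R, P, T, S = 3, 1, 5, 0

def cooperate_forever(delta):
    # Grim trigger against grim trigger: R every round, discounted by delta.
    return R / (1 - delta)

def defect_now(delta):
    # Grab the temptation payoff once, then mutual defection forever after.
    return T + delta * P / (1 - delta)

for delta in (0.3, 0.5, 0.7):
    print(delta, cooperate_forever(delta) > defect_now(delta))
# 0.3 False, 0.5 False, 0.7 True: cooperation needs a long enough
# shadow of the future.

def one_shot_against_predictor(my_move, q):
    # A counterparty who reads character with accuracy q and cooperates
    # only with predicted cooperators (the Newcomb-like case).
    if my_move == "C":
        return q * R + (1 - q) * S  # read correctly: C/C; misread: exploited
    return q * P + (1 - q) * T      # read correctly: D/D; misread: temptation
```

With these toy numbers, one-shot cooperation beats one-shot defection as soon as q exceeds 5/7: the better the judge of character, the broader the circumstances under which cooperation is the rational play, with no repeated interaction required at all.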

This may explain why the doctrine of Predestination tends to bear good fruit, even though it would seem to imply the doctrine of Antinomianism, which heretical doctrine bears horrifyingly bad fruit.

Comments

Mayflower Sperg says:

A common feature of social life in Russia and Ukraine is the immense and enduring popularity of mafia games. There are hundreds of different games wherein most players want to cooperate and complete the assigned task, but a randomly selected minority, known to each other but hidden from the good players, conspires to sabotage them.

Are mafia games un-Christian? Are they socially harmful for teaching people to lie, or socially beneficial for teaching them to spot liars?

alf says:

In the West you have the card game ‘Werewolves’ and the computer game ‘Among Us’, both pretty popular.

> Are they socially harmful for teaching people to lie, or socially beneficial for teaching them to spot liars?

Yeah, that’s actually kind of funny. Sometimes you’re a liar, sometimes you’re supposed to spot the liars. Seeing how it gets kind of stressful to be a liar, I’d lean toward socially beneficial. Though of course, at the end of the day, it’s just a game.

Larf says:

[*deleted for not conforming to the moderation policy*]

Jim says:

Be wise as serpents and harmless as doves. Need to play games that hone the wisdom of serpents.

Redbible says:

I’ll be brief, but I’ve played several different Mafia/social deduction style games, and found that generally people have no issues with a person that got “the bad guy role” lying to THE GROUP. The point where it can start to hurt or damage relationships for real is in cases where a person is directly lying to AN INDIVIDUAL. (i.e. in the video game Among Us, sometimes a baddie might try sticking around an innocent person to help establish trust, so that they don’t get voted out.)

The story of the prototype “tank tactics” game by Halfbrick shows the issues with a game that has people needing to trust and work together, only to get backstabbed by those same people. Here is a link to a GDC talk about the story: https://www.youtube.com/watch?v=t9WMNuyjm4w

Contaminated NEET says:

>The point it can start to hurt or damage relationships for real is in cases where a person is directly lying to AN INDIVIDUAL.

When I was in school, my friends and I used to play Diplomacy occasionally. It always led to personal, out-of-game acrimony, almost without exception. The point of the game is to make alliances and cooperate closely in common strategic efforts, and then betray your allies at exactly the right moment. Everyone knows this going in, but nobody can help feeling hurt when their ally betrays them. I used to think my group was just too immature to play the game. Then I heard a story on NPR about the world Diplomacy championships. Here were strangers, extremely experienced in the game, whom I expected to take betrayal in stride as the part of the game that it is. Nope. They were every bit as hurt and childish in the face of betrayal as my high school friends and I. They tried to play it off to the journo like acting that way was a strategic choice to make betrayal more costly, but it was painfully obvious they were taking it personally.

Alf says:

People take games stupidly seriously. Hence the popularity of safer cooperation games like Escape Room or DnD.

Daddy Scarebucks says:

I’ve played some of these too and agree with Alf’s “games are games”.

Spy games encourage real espionage and deception to the same extent that poker encourages real deception, or video games encourage real violence, or porn encourages real rape, which is to say, not at all. And whenever anyone’s tried to study the transferability of skills from games to real life, not counting the kinds of typing/math games that literally have you perform that exact task, they end up finding very little transfer or none at all.

The important thing is probably to actually be playing games in a real social setting with real people. These games are all social lubricants and people socializing is always better than people hiding out in man caves.

Alf says:

Games, like alcohol, are best consumed not alone but in company. Which is not meant as a disparaging remark to the guy who likes to play, say, Europa Universalis in his spare time, but there is just something very off-putting about a lot of men pouring hours into popular games like Rocket League or Apex Legends, both of which have an uncanny resemblance to slot machines.

suones says:

> And whenever anyone’s tried to study the transferability of skills from games to real life, not counting the kinds of typing/math games that literally have you perform that exact task, they end up finding very little transfer or none at all.

Skyking learnt enough to do a barrel roll in a DASH-8 after playing video games. Only Shudras don’t learn anything from the “games” made for them. A wise man learns from ALL games. Eklavya was sacrificed at the altar of nepotism, Skyking at the altar of being a white guy.

Humungus says:

If you had a child, then raised them to believe they are special above all others and that they can do no wrong, you have set them up to be selfish and unreasonable, so that when they go out into the world, they will deal with all others as defect/defect, choosing greed over cooperation in every instance. The world will come to despise them for their greed.

On a macro scale, if there is an entire culture that behaves this way, the natural outcome is war or annihilation, unless people choose to survive.

Humungus believes in reasonable outcomes. So when they aren’t apparent, reason will force itself by circumstance.

Humungus says:

I questioned two AIs, ChatGPT and Grok, on Newcomb’s Paradox using memory cards instead of money boxes. Both chose the two-memory-card option, reasoning that it was the optimal choice, as I predicted. Without the human element, empathy does not exist.

If the point of this puzzle is to show predictive modeling, a test of greed, or both, it demonstrates a response lacking any empathy toward future transactions.

On another note, I recall placing a bowl of treats on the porch one Halloween night with a note to only take one. I observed the White children would show the cooperate/cooperate path, taking only the one candy. The blacks would just loot the bowl entirely showing a more animal response.

Jehu says:

This is a good test of neighborhood quality. In a good neighborhood, you can put out a bowl with instructions and it’ll be followed fairly well. You’ll come back and the amount of candy left will be a reasonable approximation of the number of trick or treaters times the candy per kid. In a less good neighborhood, it’ll all be gone pretty fast.

In a really good neighborhood, you’ll see things like unmanned sale of bundles of wood. There’s a coffee can, and you can make change as you need to. Nobody is watching. If you see that, you’re in an area where the high trust element is hegemonic. It wouldn’t exist if the owner got ripped off even infrequently.

Needless to say, racial composition, homogeneity, and religious faith all drive what makes a good or a very good neighborhood.

Cloudswrest says:

It seems Amazon coders have been relying too much on AI and causing big company screw-ups.

The briefing note describes a trend of incidents with “high blast radius” caused by “Gen-AI assisted changes” for which “best practices and safeguards are not yet fully established.” Translation to human language: we gave AI to engineers and things keep breaking?
The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off. AWS spent 13 hours recovering after its own AI coding tool, asked to make some changes, decided instead to delete and recreate the environment (the software equivalent of fixing a leaky tap by knocking down the wall). Amazon called that an “extremely limited event” (the affected tool served customers in mainland China).

https://x.com/lukOlejnik/status/2031257644724342957

Cloudswrest says:

Reminds me of this Feynman quote, “What I cannot create, I do not understand.”
Shouldn’t be pushing code you didn’t write, or at least comprehend.

Daddy Scarebucks says:

One commentator described AI coding as being multiplicative to productivity rather than additive. If you multiply negative productivity, you get very negative productivity.

A lot of developers in a lot of companies, especially those in junior roles, have negative productivity. This doesn’t necessarily mean they have no talent–although often that is the case–but at their current level of experience, they suck up more productivity from the seniors who help them, review their work, and fix their mistakes than they contribute themselves. Welcome to apprenticeship.

It really should have been obvious to the Amazons, Microsofts and Googles of the world that giving apprentices tools to rapidly increase the quantity of their output, with either a neutral or negative effect on its quality, would be disastrous. Unfortunately, the divisions are often run by fools who refuse to understand the concept of apprenticeship, i.e. that inexperienced employees start as net liabilities but eventually turn into net assets, and instead act as if “headcount is headcount” and that simply having the right tools and processes can turn a negative into a positive.

There are, of course, a lot of executives and managers who totally get this, but unfortunately a much larger number who do not, and never will, because they have never done any real work themselves and do not understand the trade.

Jim says:

The public part of this is ai slop submissions to open source projects.

It is clear that businesses have a private problem with ai slop: because they have been demanding heavier use of ai, and heavier integration of ai generated code in deliveries, they are having crises with the delivery of ai slop to customers. As for example slop all over the venerable Windows Notepad. Why update Notepad? It already does what it is supposed to do. There is no demand for new features in Notepad. That is like putting new features in Nano. Nano does what it is for, and Notepad does what it is for. Well, it used to, until it was updated.

Daddy Scarebucks says:

GitHub is full of issues demonstrating Microsoft’s internal tribulations with Copilot AI slop that are equal parts hilarious and painful, see e.g. dotnet/runtime#115743. I don’t blame AI slop for Notepad security exploits, though, as that kind of stuff can only happen when some idiot manager (probably a jeet, but who knows) in the loop decides that Notepad should have a Markdown parser in the first place. Microsoft was already shipping terrible anti-features years before the slopfest.

AI slop in open source is indeed a really big problem already, such that several projects have had to close down public contributions just to stop the spam. It is Eternal Sloptember, and it may already be too late to close Pandora’s box.

Jim says:

Implementing markdown display in Notepad was managerial idiocy. That is not the use case for Notepad, that is the use case for Vscode, and its vastly superior open source equivalent, Vscodium. (Better known as Codium.)

However the way it was implemented is not managerial idiocy, but rather indicative of ai slop. A human would not have done what was done, because he would be imitating the way Vscode does it.

An ai generates code from its memories of other code — but tends to mix in code from memories that a human would know not to imitate.

You frequently get into arguments with the AI “Don’t do it this way, do it this other way”, and it just stubbornly persists in doing it the wrong way, because it is influenced by memories of code that was appropriate in some other context, which is different from your context.

Fidelis says:

This is a known problem and the solution is not to argue, because you unintentionally fixate the inner representation on the bad solution.

If you are in the loop, you are going to get frustrated, because it does not think like a person. It will keep pulling wrong answers out of its internal model, and get further stuck when your natural response is to repeat yourself, explaining in different words why the previous answer was wrong. They do not think like a human does; they have a dynamic process where the tokens drive an internal state space. The “reasoning” is the model searching its own internal representation space. When you reiterate what it got wrong, you are unintentionally fixating the model in a malformed zone, and many times it will not escape that zone.

When you are having this problem, what model are you using, and do you have the ‘thinking’ option turned on? The thinking traces allow the model to steer its attention to a new basin point with better solutions. If you are alright with using western model providers, I recommend Claude. If not, I recommend Kimi. Both of these models are good enough at generating multiple solution approaches. OpenAI’s models also work, but not their chatbot interface; you’d need to use a paid token api provider. Grok sucks unfortunately. z.ai GLM-5 is supposedly good at coding tasks, but I haven’t used it enough to vouch for it. I haven’t played with Google’s model, but from the chatbot interface my gut feeling is it lacks the diversity of approaches that Anthropic and OAI models can generate.

Jim says:

I just clear the context and give it a revised prompt, and if it keeps making the same mistake, just write the code it is getting wrong myself as part of the prompt.

I don’t see anyone being very successful at having most code written by AI — you can certainly have quite a lot of code ai assisted or ai written, but you are going to get nowhere fast trying to get it to do an unreasonably large portion of the code.

Jim says:

Your question implies that if only I had the right model, and used in the right way, with the right prompts, all would be fine.

Observing AI slop, I doubt it.

AI is very handy. It is very useful. But it is not replacing human programmers.

The fundamental weakness is that it has to suck down huge amounts of data — it sucked down every git repo it could find, while Musk tracked millions of human drivers. And it is incapable of ascertaining what data should be interpreted as a source of truth, or what it would mean to so interpret it.

Fidelis says:

Without dwelling too much on the topic, as indeed there is an opportunity to show and not tell that I will be working on shortly:

@DS is perfectly right. I would more humorously put it as telling someone, “try not to think of a red elephant.” You will be frustrated and slow and not get the benefit of the method if you treat it like an oracle. The code will also be bad compared to what a competent human would write, but the tradeoff is you get far more of it for less cognitive effort. If you can frame your program in a way that is “correct by construction” or passes some very tight verification, you can be happy it solves the cases you care about and ignore the fact that the way it was done is not the most efficient.

I mention models and methods because this only very recently has become viable. My conjecture is that Anthropic and OAI hit on some very efficient and well done methods of coding self-play, because their models are well ahead, and the Chinese models distilled them to get 90-95% of the same results. I think Google spent more of their budget on images and 3D worlds, and xAI has subpar talent (it was mostly mercenary Chinese talent leading the effort) while Elon focused on his favorite task of engineering a massive facility at breakneck speed. You will get frustratingly dumb results with the majority of other providers, no matter the setting, unless it’s pure summary and retrieval tasks.

Mayflower Sperg says:

Good schools have labs because there’s only so much you can learn by reading books. In the old days, before we carried supercomputers in our pockets, I found it very frustrating to read about programming languages and ideas but not have access to a computer to try them on. Google’s AI engines attained godlike abilities in chess and go not by reading about these games but by playing them millions of times.

Can your favorite AI engine write code, test it on an actual computer, and get back to you when it finds something that works? More importantly, was it made to do this during training, so it doesn’t have to re-learn how to code every time you ask?

Jim says:

With agents, this is kind of sort of happening. In particular, with the number one most popular agent, opencode.

But I don’t think that llms are trained to operate with this, which may be causing the unreasonable demands on the context window.

The only actually useful use cases for llms are internet search engine, coding assistant, and self driving, because those are the only areas where you can find colossal amounts of training data. But idiot businessmen keep trying to find wider uses.

The obvious way to use an agentic coding assistant is to create a non-sudo Linux user with limited authority, and let an llm be that user, with complete authority to do anything that user could do — in particular, compile code, execute it, and make commits to his local repository. But I am seeing some considerable reluctance to embrace that use case, perhaps because it is not likely to justify a ten trillion dollar data center.
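A minimal sketch of that use case, assuming a dedicated low-privilege account (here called llmagent, a name made up for the example) created beforehand with useradd and kept out of the sudoers file; the harness itself must start with enough privilege to drop into that account:

```python
import os
import pwd
import subprocess

AGENT_USER = "llmagent"  # hypothetical account: useradd -m llmagent, no sudo group

def run_as_agent(cmd):
    """Run one agent-issued command with only the restricted user's authority."""
    info = pwd.getpwnam(AGENT_USER)

    def drop_privileges():
        os.setgid(info.pw_gid)  # drop group first, while we still may
        os.setuid(info.pw_uid)  # then drop user; no way back after this

    return subprocess.run(
        cmd,
        preexec_fn=drop_privileges,
        cwd=info.pw_dir,        # confine work to the agent's own home directory
        capture_output=True,
        text=True,
        timeout=600,            # a runaway compile or test run gets killed
    )

# The agent can compile, run, and commit to its local repository, e.g.:
# run_as_agent(["git", "-C", "repo", "commit", "-am", "agent change"])
# but pushing to the release branch is simply not within its power.
```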

These guys have visions of the llm pushing to the release branch and rolling out the latest build to customers.

Outsourcing to India turned out to be a disaster, outsourcing to China turned out to be a disaster, hiring Indians turned out to be a disaster, now they are promising CEOs they will be able to replace engineers with AI.

The trouble with the Indians was that they cooperate with each other to defect against management.

The trouble with the Chinese is that when they make your stuff, they learn how to do it better and cheaper, and suddenly it is not your stuff any more.

And the trouble with llms doing engineering is that they need close supervision by human engineers.

There has been enormous growth in the number of tokens purchased, largely because agents consume vastly more tokens than humans. But these guys are trying to envisage use cases that will grow far beyond that.

Fidelis says:

> Can your favorite AI engine write code, test it on an actual computer, and get back to you when it finds something that works? More importantly, was it made to do this during training, so it doesn’t have to re-learn how to code every time you ask?

Yes, and this is why I think we will continue to see improvements. The majority of the training has become self-play on verified math and coding tasks, rather than just memorizing human data.

It is also why I say you need to get out of the chatbot setting. They are extremely bad in that setting. The reason the loops work is that they have memorized all the different approaches to a logical problem, and are general enough to apply a memorized solution to the given context. They probabilistically sample from their general solution set, and try to apply it. Depending on the problem, the working solution could be attempt number one, or attempt number one-thousand-and-one. One thousand and one attempts is not great, but it is better than a monkey on a typewriter, and over time the loop favors the working approaches and requires fewer samples. Eventually we top out on this approach, as all the problems that can be solved without true generalization or new techniques of abstraction and representation will be solved; yet this is an impressive set of low hanging fruit, picked almost for free. Hence I am very bullish in the mid term, as it allows humans good at engineering and abstraction to frame their particular problems, be it software or hardware or protein engineering, as optimization problems to be solved through this technique. Contrary to the narrative, this tech gives engineers a new set of powers.

Mayflower Sperg says:

… and self driving, because those are the only areas where you can find colossal amounts of training data.

You can’t learn to drive by watching videos of people driving cars. You need to drive an actual car, or an extremely accurate simulation of a car in a realistic virtual world where children randomly dash into the street.

Hundred-million-dollar lunar landers have crashed because of simple software errors. I guess writing a video game that accurately simulates a lunar landing and using it to test the on-board software wasn’t in their budget.

Jim says:

Agentic ai assistants are not agentic. They are trained to predict the next token a human would generate.

The way an agent works is that there is this harness that loads the context window such that the llm will generate tokens that the harness will interpret as directions to take actions, and will then take those actions.
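A minimal sketch of that harness shape, purely illustrative: llm_complete is a stand-in for whatever model API the harness calls, and the JSON tool-call convention is made up for the example:

```python
import json
import subprocess

def llm_complete(context):
    """Stand-in for a real model call; the llm only ever predicts tokens."""
    raise NotImplementedError

def run_agent(task, max_steps=20):
    context = f'Task: {task}\nReply DONE, or a JSON action like {{"cmd": ["ls"]}}.\n'
    for _ in range(max_steps):
        reply = llm_complete(context)
        if reply.strip() == "DONE":
            break
        # The harness, not the model, interprets tokens as directions to act:
        action = json.loads(reply)
        result = subprocess.run(action["cmd"], capture_output=True, text=True)
        # Every action's output is appended, so the context only ever grows,
        # which is where the heavy demands on the context window come from.
        context += f"\n$ {' '.join(action['cmd'])}\n{result.stdout}{result.stderr}"
    return context
```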

A lot of people are very excited about this, and say it works great. Maybe it does, but I know that llms with a whole lot of stuff in their context window tend to produce garbage. I think we are going to need a harness that trains an llm to be agentic.

Starting with the ability to predict a diff file from a commit message and prior source.

Daddy Scarebucks says:

> Agentic ai assistants are not agentic. They are trained to predict the next token a human would generate.

This is more of a problem in terminology than in capability. We know that tech people like to come up with bullshit terms to exaggerate capabilities and generate hype, which is not a new problem.

In this case, “agent” doesn’t mean “possesses agency”, which the purveyors of that term would surely like the dimwitted managers and investors to believe. What it actually means is “capable of running external commands or connecting to external systems via MCP”. External systems such as LSPs and test runners that can then be run in a loop to iterate on solutions.

In theory the “agents” are supposed to be used for other things, hence the baffling popularity of “give Anthropic the private keys to your entire life” openclaw ecosystem, but in practice those other things are proving pretty disastrous, which is why I keep referring to agents as models in a for loop, because at present that is their main utility.

You want agentic to mean agency, which is totally understandable given the misleading way its purveyors try to link it to the word “intelligence”, but it is really an “agent” in the same way that a web browser is a “user agent” or an IDS is a “security agent”, it is just a puffed-up marketing term describing something much more boring in practice. It’s like “quantum computing”, words chosen more on the basis of sounding cool than having any connection to reality.

Daddy Scarebucks says:

> A human would not have done what was done, because he would be imitating the way Vscode does it.

I find that highly unlikely, since vscode is an Electron app written in TypeScript and its URL handling code is an arcane mess of internal command handlers and Electron API calls for external launches. Anybody trying to replicate the behavior in a desktop editor would take one look at vscode’s implementation and say “no thanks.” Plus, vscode source is hosted on GitHub and therefore it is also in the LLM training set.

No, this is in fact exactly the type of mistake a human would make. Bad AI code is characterized by strange inefficient algorithms, “if soup”, flagrantly wrong/non-compiling implementations and so on. Whereas an inexperienced human would implement it exactly the way all the other inexperienced humans told them to on Stack Overflow, i.e. with ShellExecute (C) or Process.Start (.NET) with precisely zero of the top answers or comments including even a passing mention about sanitizing. If humans produced those answers, then humans would write that code.

That’s not to say AI couldn’t have written it or that I’m 100% positive Microsoft’s Notepad “improvements” weren’t AI slop. An LLM certainly could have proffered an unsanitized ShellExecute but even if so, it is only because all the human implementations it trained on also use unsanitized ShellExecute. There are so many hundreds of better examples of proven AI slop out there, like all the moltbook insanity, and the horrible open source PRs causing maintainers to shut down public contributions, and the big AWS outage, and as a skeptic I don’t really like seeing dubious probably-not-AI cases held up as AI slop because it undermines awareness of the unambiguous and debilitating AI slop out there.

On an unrelated note, Fidelis is making a somewhat valid (albeit biased) point that is different from what you’re interpreting. It is not that you just have to use the right model with the right prompt and magic will happen; indeed, there are many problems for which no combination of agents, models, and unlimited running time will ever produce the result that you want. His point here is that the AI agents are like the stereotypical prima donna’s philosophy of “you can tell me either what to do or how to do it, but not both” made manifest in code. And the worst possible thing you can do is tell it how not to do it, which is the near-equivalent of writing “not X” in a Google search instead of using the actual “-X” operator.

If you want to use an agent to solve a problem (and keep in mind, I am firmly in the skeptic camp, and it is my opinion that you shouldn’t use an AI agent to solve any production-critical problem, this is strictly hypothetical) then the way the tools are intended to be used is that you write a big suite of tests, or supply a preexisting big suite of tests, throw that at the agent, and let it iterate for hours or days burning through hundreds of non-functioning solutions until it stumbles on one that passes all your tests, and you just leave it at that, however crappy or inefficient it might be, because you designed the test suite and therefore presumably anything that passes all the tests is acceptable. If the code is security or performance critical then you’d better make those part of the tests, and not just hope for a secure/fast implementation nor try to coax one out of the model through prompting, which tends to fail.

It would be a horrendously inefficient process for a junior developer to follow, and would probably be horrendously economically inefficient even with LLM agents if the token costs reflected the true operational costs and were not being massively subsidized by unsustainable high-risk venture capital. But, those issues aside, telling the agent to “do this thing” followed by “no, don’t do it that way, do it this way” really does run counter to how they are meant to be used, precisely because they are not intelligent, not human, and do not really understand how your instructions are logically connected, like a human would. They are just a “model in a for-loop”, like I mentioned a few days ago, and work best when you can give them a very strictly-defined goal (i.e. a test suite) and don’t particularly care how it achieves the goal as long as it somehow does.
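As a sketch of that workflow (not an endorsement): generate_candidate stands in for the model call, and pytest for whatever test suite you wrote; the loop neither knows nor cares how the passing solution works:

```python
import subprocess

def generate_candidate(spec, feedback):
    """Stand-in for the model emitting a complete candidate implementation."""
    raise NotImplementedError

def iterate_until_green(spec, max_attempts=1000):
    feedback = ""
    for _ in range(max_attempts):
        code = generate_candidate(spec, feedback)
        with open("solution.py", "w") as f:
            f.write(code)
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return code           # first candidate to pass the whole suite wins
        feedback = result.stdout  # feed the failures back in, never "don't do X"
    return None                   # no guarantee the loop ever converges
```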

I would imagine that the kind of work you are trying to get done just isn’t suited to LLMs or LLM agents, and is not going to be for the foreseeable future, because the kind of work you are trying to get done is experimental or exploratory. You aren’t exactly sure what you need at the outset, must start with something very general and progressively refine it as you gain more insight into how it behaves. In other words, you are doing design and build in parallel, or design and prototype in parallel, like most engineers would handle a new and unfamiliar design problem with unclear specs. LLMs obviously don’t know how to do design, they only know how to spit out text, and LLM agents only know how to repeatedly spit out text in a loop, run some external commands, and keep iterating until some condition is met.

There’s a clear expectation mismatch, so his attempts to persuade you to use AI for this are silly, as are your comparisons of your iterative-design experiences to his test-driven development experiences.

Sporadic Commenter says:

the Sourceror’s Apprentice

Kevin C. says:

Speaking of cooperating versus defecting, looks like the usual GOP squishes in the Senate just killed the SAVE act. So much for preventing the left from stealing the elections. We’re getting President Newsom, aren’t we?

Fidelis says:

Not sure. We may get internal war instead. Trump’s cabinet is, supposedly, beginning to live in bunkers. The Overton Window has radically shifted, and many political factions have aligned on “fight or die,” disagreeing only on what “fight” actually means.

This war is going to blow up the economy. It was already fake; now they won’t be able to pretend, as everything will be too expensive, no one will find middle class work, and the people who spent ~2% of GDP on datacenters expecting capital contributions from the Gulf states and miraculous wunderteck are going to be left high and dry. As with any conflict, the enemy also gets a say. China might decide to eat the pain, switch more fully to an electric economy, and break the US economic bloc in the Pacific. Taiwan, Japan and SK are more dependent on Gulf oil than China is. So they might arm the Iranian state and provide enough economic relief that this war becomes a new Ukraine. The US would have its hands tied, it would lose face, the Gulf kingdoms would forever change spheres, and Israel is a major political lobby that would not allow a withdrawal.

I expect a lot of trouble in the near future.

Daddy Scarebucks says:

…the people who spent ~2% of GDP on datacenters expecting capital contributions from the Gulf states and miraculous wunderteck are going to be left high and dry.

It’s much worse than just the money being spent building/upgrading data centers. It is massive prepayments (futures contracts in all but name) to Micron and Nvidia to fill data centers that will not physically exist for years. It is the huge web of reciprocal and/or round-robin investments between the hardware companies and the AI companies. It is the fact that neither OpenAI nor Anthropic have any current profit nor any expected profits until 2030, and those based on rather generous assumptions; and the fact that none of the downstream “AI SaaS” companies based on the OpenAI or Anthropic models have any realistic business plan at all, they are all just hoping to get bought out by bigger fish, and OpenAI and Anthropic are already eating their lunch.

It is the mind-boggling amount of venture capital invested in the “industry”, and the next 5 years of build-out and operating costs being completely and utterly dependent on increasing levels of VC investment year over year. It is Anthropic’s and OpenAI’s misleading portrayal of “training” as a fixed capital cost on their balance sheets, when everyone knows it is really an ongoing expense that will not only never go away, but probably keep increasing. And it is the total unknown and unknowability at present of the price elasticity of demand with regard to the tokens and models, and what the effect will be on consumer demand when OpenAI and Anthropic must inevitably either raise their prices or introduce distortions like paid advertising.

The technology may be very real (albeit overhyped) but the economics are truly horrifying, based on levels of funny money and unwarranted optimism we haven’t seen in a very long time. The optimists like to compare it to Amazon’s initial build-out of AWS, but AWS was built to fulfill a real need that Amazon already had, and while some at the time thought it was a risky venture, it was still considered economically sound, and they were breaking even by the 2nd or 3rd year. And some pessimists are comparing it to the dot com bubble, and it does have some parallels with that too, but the dot com bubble was actually much smaller in scope, and was mostly milking naive retail investors.

What I see are significant parallels to the 2008 Great Minority Mortgage Meltdown, specifically in the fact that this is a leveraged bubble, technically not leveraged on paper in a legal sense, but still inherently based on 90% or greater operating debt that can never be repaid if even the slightest thing does not go according to plan. The big difference, which isn’t that big a difference, is that the mortgage bubble was funded by banks, and the AI bubble is being funded by private equity. If the bubble pops, it has the potential to bring down Alphabet, Meta and Microsoft, and seriously hurt Amazon and Nvidia–that’s five of the “Magnificent Seven” right there–and also drive the private equity investors funneling trillions into this thing to insolvency.

All of this adds up to potentially a far greater disaster than losing 2% of GDP. The 2% is just the first domino in a long line of dominoes, with the end of the line being another credit crunch.

Of course, as I always say, this shouldn’t be interpreted as advice to get out of the market or hoard gold, because being early is the same as being wrong, and the market can stay irrational longer than you can stay solvent. But it is very, very worrying, and some of the skepticism has already started to be priced into the market, which is why e.g. Meta and Google stocks have been taking a haircut compared to companies with lower AI investment.

Fidelis says:

The global market is completely and totally fucked, barring Trump pulling a rabbit out of a hat, or some other black swan. It looks to me like Iran is not going to negotiate, and sees itself as being set up to take far more pain than the US is capable of sustaining.

Europe and East Asia were still painfully limping along, with demographic problems, economic structure problems, and wounds from COVID-era policy that never got a chance to heal.

Looks like we are in for a rough one.

The Cominator says:

Iran won’t be allowed to keep the strait closed for long, but it’ll still fuck Trump, because if they do keep it closed, marines will end up seizing the Iranian side of the coast, and people will be pissed about the whole thing…

Fidelis says:

Iran has spent almost 40 years designing the entire country for this war, just to make that impossible. They have a deliberately decentralized theocratic state with an economy designed to pay soldiers first no matter the pain otherwise. They have tunnels and weapons stockpiles for years of this. They have the perfect geography to make any such incursion as painful as possible. You think Ukraine looks bad? Kursk? This would be 100x worse.

I’m not sure what ends this war. Maybe China pressures them, but even then the leadership has been taken out and they have been psyching themselves up for this for a generation. It’s a real mess.

Daddy Scarebucks says:

I find myself less than 100% convinced. While I doubt very much that this ends up achieving every possible US objective or being completely consequence-free for either the US or Israel, almost all of these impressive claims come from Iranian media and government, and they tell even more frequent and grandiose lies than our own media and government.

Yes, it is possible that they still have tons of reserves, tons of firepower, complete control of the territory, etc. It is also possible that they shot their wad in the first few days, and don’t have much left in the tank. We really don’t know. What we do know is that US and Israeli aircraft and drones are flying around the region completely unchallenged, to the extent that the usual lefties, including almost all Jews on the left, are crying “turkey shoot, turkey shoot, there’s no honor in that!”, so either Iran does not have that much left to shoot with, or they are deliberately allowing this to go on in order to lull their enemy into a false sense of superiority.

I think that, just as Russia’s war on Ukraine broke the brains of the neocon factions, this war on Iran has broken the brains of the white nationalist/isolationist factions. In the former case, they just could not countenance the idea that Russia could win, and in this case, they cannot countenance the idea that Iran might lose. And let’s be clear, it’s been far from a clean or easy win for Russia, just as it is unlikely to be a clean or easy win against Iran, and we all hope that this does not turn into a brutal and protracted war of attrition that goes on for years, but as of right now, it has not even been one week.

I see the Occidental Observer, Strategic Culture and other joo-joo shills shrieking non-stop about this, and those guys are reliably wrong about everything, so I’m thinking that things aren’t really looking so bad for the US. That doesn’t mean I’m aligning myself with Ben Shapiro and the Kagans and other nut jobs trying to start WW3, I just think that realistically, the fog of war is still thicker than pea soup and that the usual suspects pretending to know that Iran can’t possibly lose are talking out of their asses.

None of us voted for this (notwithstanding that some of us don’t vote at all), and the isolationists have every right to be pissed off, but they shouldn’t let that anger lead them to swallow a different brand of snake oil and lose themselves in fantasy land. The Iranian regime are huge liars, and most of the current Iran boosters on the “right” are also huge liars. At best, everyone everywhere is lying and we don’t really know what’s going on.

Jim says:

> Yes, it is possible that they still have tons of reserves, tons of firepower, complete control of the territory, etc. It is also possible that they shot their wad in the first few days, and don’t have much left in the tank.

Obviously they don’t have much left in the tank. But this does not mean that we can make them surrender, and does not mean we can keep the Straits open.

It may well happen that Ayatollah Khamenei son of Ayatollah Khamenei decides he would prefer to wait a bit before meeting his seventy two virgins. But so far, Iranian will is not shaking, and there is not much more we can do to shake it.

This was the big known unknown, and it is not looking good for us.

The Cominator says:

My assessment is that they are basically out of missiles but still have tons of drones. Mining likely to be ineffective… so can they keep the strait closed with drones and very sporadic missiles?

Jim says:

> Mining likely to be ineffective

Where do you get that from?

And their sea surface drones have already proven devastatingly effective.

The straits are so shallow that mines lying on the sea floor are effective.

The Cominator says:

Anything trying should be able to be picked up and blown out of the water…

Jim says:

Russians do not seem to have a lot of luck blowing Ukrainian drones out of the water.

I think this war was stupid, the result of grossly underestimating Iranian will and capability.

You just cannot make the Iranians surrender, not with air power, not with ground power. So you just have to live with whoever is in control of Iran, even if they are not easy to live with.

Fidelis says:

I don’t think this was underestimation. Looks like a case of the tail wagging the dog: a certain Levantine state finds Iran intolerable and is willing to burn down everything in order to ensure they are wiped out. Iran of course feels the same. About 24 hours before this started, there was signalling that the leadership wanted to de-escalate on nuclear ramp-ups. Of course if that happened, it would be harder to convince anyone to kill them, so they had to move fast to ensure this war started.

I suspect if the admin tries to ‘take the L’, there will be a certain interest group to make sure pulling out is politically impossible. Some want total war, and they have a lot of influence. They might get it. Of course, this could be mere pessimism. Nothing but to wait and see.

Jim says:

> Looks like a case of the tail wagging the dog

That tail may well have seriously inconvenienced the dog.

Mayflower Sperg says:

China is angry at Iran for closing the Strait of Hormuz, and Iran is trying to appease them by allowing Chinese tankers to pass through. Which is never going to work because the USA also has veto power over the strait.

Jim says:

If Ayatollah Khamenei son of Ayatollah Khamenei wants to keep all ports near the Persian Gulf closed till hell freezes over, who can stop him?

The Cominator says:

Unfortunately the answer is a ground invasion which will be politically horrible for Trump but so will high oil prices… the spice must flow.

Jim says:

Ground invasion of Iran will fail. Iran has natural defenses that are likely to be extremely effective in the age of drone warfare, though we will not know for sure until someone gives it a go.

And submersible drones and surface drones can sink ships in the Persian gulf, and drop mines.

Iran is a natural fortress that has been equipping itself with long range weapons to strike far beyond its fortress walls. They are in a very good position to insist on getting whatever they want.

The Cominator says:

Yarvin is right why can’t Elon just fucking payoff these cocksuckers?

Jim says:

They don’t care about money. They want you to die.

We are not facing rational evil that wants to maximise its interests at the expense of our interests, but rather demonic evil that wants to die in a fire and drag us into the flames.

The Cominator says:

I agree that at least 95% of Democrats are like that, but I always thought the corrupt RINOs were mostly motivated by what’s in it for them.

Jim says:

Rational evil will not get you far in politics, because it is all about ganging up.

Daddy Scarebucks says:

Nah, that is blueshifted politics. The mainstream Republicans are motivated by what’s in it for them–party status is pay-to-win, “pay” in this context meaning “fund-raise”. You bring in more money for the party, you get more influence and better postings. It is the “good” Republicans who act according to some semblance of rational self-interest. They don’t really care about people like us, or about the future of America in general, but they respond to personal incentives and their behavior is fairly predictable. Those are the Ted Cruz, Chuck Grassley and Rick Scott types.

Murkowski and Collins types have always been working for the other team. They aren’t corrupt members of the red team, they are saboteurs working for the blue team. I thought that was obvious based on how they vote, which is in accordance with the red team 100% of the time when it doesn’t matter, and in accordance with the blue team 100% of the time when it does matter.

The Cominator says:

I actually have a fairly charitable view of Collins: she survives as a Republican senator in New England, and her vote is there when needed for critical votes most of the time… if I were Emperor tomorrow she would not be executed.

Murkowski I kind of agree is a spiteful evil witch who probably couldn’t even be bribed. McCain was the only other Republican senator who was as evil.

But all these other RINO types should be bribable…

Jim says:

> all these other RINO types should be bribable…

You overestimate the role of rational self interest.

Corruption is what keeps the legislature and the judiciary barely functional. It is the unbribable that are destroying the world.

Jim says:

Personal self interest is voter id. How many republicans are on board with voter id when their vote actually matters?

Let us suppose you are purely in it for graft money. Obviously you get more graft money being in the majority party; you get more graft money by Republicans having power. Yet the majority of elected Republicans do not want the Republican party to be in power, because if other Republicans are elected, their base is going to put the heat on them to implement Republican policies.

The Cominator says:

> Personal self interest is voter id.

A lot of them may be dependent on fraud to win their primaries.

Jim says:

By and large, Republicans can only be elected in states with voter id, and if the state has voter id, the republican party in that state has voter id in its primaries.

Thus, very few sitting Republicans can benefit from lack of voter id.

No, Republicans hate voter id out of demonic evil, not out of rational self interested evil. They would feed their children into the fire if they could chain your children to their children. They will walk through fire provided they could drag other people into the fire with them.

Daddy Scarebucks says:

> How many republicans are on board with voter id when their vote actually matters?

Looks like most of them are. It’s a tiny minority who aren’t, and that tiny minority work for the other team.

It’s not that the Republicans won’t vote for it, it’s the usual muh-filibuster bullshit, so look for the ones who say “but muh filibuster”, those are the ones who are demonically evil rather than self-interested and corrupt.

In practice there are probably several more hidden demons, ones who only vote “yes” because they know or expect that it will be blocked by filibuster, but even that is a kind of self-interest, and they can’t actually break ranks without revealing themselves.

Jim says:

> Looks like most of them are.

but muh filibuster
