65 points | by jandrewrogers | 10 hours ago
> this paper focuses specifically on the zero-sum nature of AI labor automation... When AI automates a job - whether a truck driver, lawyer, or researcher - the wages previously earned by the human worker... flow to whoever controls the AI system performing that job.
The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document. That will never happen.
I’m not sure what your expectation is, but even your claim about the assumption the paper makes is incorrect.
For one thing, the paper assumes that the amount transferred from the human lawyer to the AI lawyer would be $500 plus the productivity gains brought by AI, i.e. more than 100% of the original wage.
But that is irrelevant to the actual paper. You can apply whatever multiplier you want as long as the assumption that human labor will be replaced by AI labor holds true.
Because the actual nature of the future is irrelevant to the question the paper is answering.
The question the paper is answering is what impact such expectations of the future would have on today’s economy (limited to modeling the interest rate). Such a future need not arrive or even be possible as long as there is an expectation it may happen.
And future papers can model different variations on those expectations (so, for example, some may model that 20% of labor in the future will still be human, etc).
The important point, as far as the paper is concerned, is that the expectation that AI will replace human labor, with some percentage of the wealth that previously went to that labor accruing to the owner of the AI, will lead to significant changes in current interest rates.
This is extremely useful and valuable information to model.
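To make the mechanism concrete, here's a minimal sketch of the standard Ramsey/Euler-equation logic this kind of model builds on (toy numbers of my own, not the paper's calibration): if agents assign even a modest probability to a high-growth AI scenario, expected consumption growth rises, and with it today's real rate.

    # Minimal sketch of the Ramsey-rule intuition; all numbers are
    # illustrative assumptions, not the paper's calibration.
    rho = 0.01                  # pure rate of time preference
    theta = 2.0                 # relative risk aversion (inverse of the IES)
    g_low, g_high = 0.02, 0.30  # consumption growth without / with transformative AI
    p = 0.10                    # subjective probability of the AI scenario

    # Linearized Ramsey rule: r = rho + theta * E[g]
    expected_g = p * g_high + (1 - p) * g_low
    r = rho + theta * expected_g
    print(f"implied real rate: {r:.1%}")  # ~10.6% with these numbers

Even a 10% belief in the AI scenario roughly doubles the implied rate versus the no-AI baseline (rho + theta * g_low = 5%), which is exactly the "expectations alone move today's rates" point.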
We seem pretty likely to be headed towards a future where AI-provided services have almost no value/pricing power, and just become super low margin businesses. Look at all of the nearly-identical 'frontier' LLMs right now, for a great example.
Labor automation is not zero sum. This statement alone makes me sceptical of the conclusions in the article.
With sufficiently advanced AI we might not have to do any work. That would be fantastic and extraordinarily valuable. How we allocate the value produced by the automation is a separate question. Our current system would probably not be able to allocate the value produced by such automation efficiently.
Is your theory that the next week there will be an AI lawyer that charges only $400, and then it's a race to the bottom?
There is a proven way to avoid a race to the bottom for wages, which is what a trade union does: by acting as one, a union controls a large supply of labour and keeps wages high.
Replace the union with companies and wages with prices, and it could very well be that a handful of companies keep prices high in a seller's market, where everyone avoids a race to the bottom by incidentally making similar pricing calls (or by flat-out illegally colluding).
The core problem is that lawyers already automate plenty of their work, and lawyers get involved when the normal rules have failed.
You don't write a contract just to have a contract, you write one in case something goes wrong.
Litigation is highly dependent on the specific situation and case law. They're dealing with novel facts and arguing for new interpretations, not milling out an average of other legal works.
Also, you generally only get one bite at the apple; there are no do-overs if your AI screws up. You can hold a person accountable for malpractice.
This is true, and the majority of lawyers' work is in knowing past information and synthesising possible futures from that information. In contracts, they write up clauses to protect you from issues that have arisen in the past (and from potential future issues, depending on how good/creative said lawyer is).
In civil suits, discovery is what used to take enormous amounts of time, but recent automation in discovery has helped tremendously, and vastly reduced the amount of grunt work required.
I can see AI helping in both of these aspects. Whether the newer AIs can produce the kind of creative work that lawyers need to do after the information extraction is still up for debate. So far, it doesn't seem to have reached the level at which a client would trust a purely AI-generated contract, imho.
I suspect the day you'd trust an AI doctor to diagnose and treat you, would also be the day you'd trust an AI lawyer.
US automotive, labor, and manufacturing unions couldn't remain competitive against developing economies, and the jobs moved overseas.
In the last few years, after US film workers went on strike and renegotiated their contracts, film production companies had the genius idea to start moving productions overseas and hire local crews. Only talent gets flown in.
What stops unions from ossifying, becoming too expensive, and getting replaced on the international labor market?
Labor action, such as strikes.
There will be a caste of high-tech lawyers very soon who will be able to handle many times the volume of work thanks to AI, and many other lawyers will lose their jobs.
She's got international experience and connections but moved to a small town. She was a magic circle partner years ago. Now she has a FTTP connection and has picked up a bunch of contracts that she can deliver on with AI. She underbid some big firms on these because their business model was traditional rates, and hers is her cost * x (she didn't say but >1.0 I think)
Basically she uses AI for document processing (discovery) and drafting. Then treats it as the output of associates and puts the polish on herself. She does the client meetings too obviously.
I don't think her model will last long - my guess is that there will be a transformation in the next 5 years across the big firms and then she will be out of luck (maybe not at the margin though). She won't care - she'll be on the beach before then.
And to be transparent, I'm very bearish on what is being marketed to us as "AI"; I see value in the techs flying under this banner, and it will certainly change white-collar jobs, but there's endless childish and comical hubris in the space from the fans, engineers, and oligarchs jockeying to control the space and the narratives.
I would appreciate a version of this paper that is worth reading, FWIW. The paper asks an important question: shame it doesn't answer it.
The proponents of AI systems seem to mostly misunderstand what you're really paying for. It's not the writing of letters.
Love this story so much I just posted it. Although it's from an era in which you'd buy CDs and books containing contracts, it's still relevant to "AI".
> “No lawyer writes a clause who is not prepared to go to court and defend it. No lawyer writes words and lets others do the fighting for what they mean and how they must be interpreted. We find that forces the attorneys to be very, very, very careful in verbiage and drafting. It makes them very serious and very good. You cook it, you eat it. You draft it, you defend it.”
Lawyers are humans. They make the same mistakes as other humans. Quality of work varies with skills, education, and whether or not they had a coffee that day.
It's an interesting idea. But if the economy grinds to a halt because of that kind of investor behavior, it seems unlikely governments will just do nothing. E.g. what if they heavily tax ownership of AI-related assets?
Where does an AI get chartered status, admitted to the bar, and insurance cover?
I was thinking religious leaders might get a good run. Outside of say, Futurama, I'm not sure many people will want faith-leadership from a robot?
I've already heard people comparing AI hallucinations to oracles (in the Greek sense).
I don't expect dad to Do Your Own AI anytime soon, he'll still pay someone to set it up and run it.
1) All work gets done by AI. Owners of AI reap the benefits for a while. There is a race to the bottom on costs, but also because people are not earning wages and can't really afford the outputs of production, rendering profits close to zero. If the people controlling the systems do not give the people "on the bottom" some kind of allowance, those people will not have any chance at income. They might demand horrible and sadistic things from the bottom people, but they will need to do something.
2) If people get pushed into these situations, they will riot or start civil wars. "Butlerian jihads" will be quite normal.
3) Another scenario is that the society controlled by the rich will start to criminalise non-work in the early stages, which will lead to a new slave class. I find this scenario highly likely.
4) One of the options I find very likely, if "useless" people do NOT get "culled" en masse, is an initial period of revolt followed by an AI-controlled communist "utopia", where people do not need to work but "own" the means of production (AI workers). Nobody needs to work. Work is LARPing, done by people who act like workers but don't really do anything (like some people do today). A lot of people won't do this; there will still be people who see non-workers as leeching off the workers, because workers are "rewarded" by in-game mechanics (having a "better job"). Parallel societies will become normal, just like now. Rich people will give themselves "better jobs"; some people won't play the game, and there will be no real consequences beyond not being allowed to play.
5) An amalgamation of the scenarios above, but in this one everybody will be forced to LARP along with the asset-owning class. They will give people "jobs", but these jobs are bullshit, just like many jobs right now. Jobs are just a way of creating different social classes. There is no meritocracy, just rituals. Some people get to perform certain rituals that confer more social status and wealth, based on oligarch whims. Once in a while there's a revolt, but mostly it isn't needed.
Many other scenarios exist of course.
My prep is:
1) building a company (https://getdot.ai) that I think will add significant marginal benefits over using products from AI labs / TAI, ASI.
2) investing in the chip manufacturing supply chain: ASML, NVDA, TSMC, ... and the S&P 500.
3) Staying fit and healthy, so physical labour stays possible.
The only thing I see as obvious is AI is going to generate tremendous wealth. But it's not clear who's going to capture that wealth. Broad categories:
(1) chip companies (NVDA etc)
(2) model creators (OpenAI etc)
(3) application layer (YC and Andrew Ng's investments)
(4) end users (main street, eg ChatGPT subscribers)
(5) rentiers (land and resource ownership)
The first two are driving the revolution, but competition may not allow them to make profits.
The third might be eaten by the second.
The fourth might be eaten by the second, but it could also turn out that competition among the second, plus the fourth's access to consumers and supply chains, means they net benefit.
The fifth seems to have the least volatile upside. As the cost of goods and services goes to $0 due to automation, scarce goods will inflate.
AI substitutes for human labour. This will reduce the price of labour and substantially increase the returns to land and resource ownership.
4) Develop an obsession with your customers & their experiences around your products.
I find it quite rare to see developers interacting directly with the customer. Stepping outside the comfort zone of backend code can grow you in ways the AI will not soon overtake.
#3 can make working with the customer a lot easier too. Whether or not we like it, there are certain realities that exist around sales/marketing and how we physically present ourselves.
I try to do 3 as much as possible.
My current work explicitly forbids me from doing 1. Currently just figuring out the timing to leave.
At best AI will be a tool I use while developing software. For now I don't even think it's very good at that.
Famous last words.
Current technology can't do your job; future tech most certainly will be able to. The question is just whether such tech will come in your lifetime.
I thought the creative field would be the last thing left to humans, but it was the first to fall. Pixels and words are the cheapest items right now.
I'm not aware of any big changes in writer/artist employment either.
The only argument you can have is to be cheaper than the machine, and at some point you won't be.
Things change and people adapt. Maybe my job won't be the same in 20 years, maybe it will. But I'm pretty sure I'll still have a job.
If you want to make big decisions now based on vague predictions about the future go ahead. I don't care what you do. I'm going to do what works now, and if things change I'll make whatever decisions I need to make once I have the information I need to make them.
You call me naive, I'd say the same about you. You're out here preaching and calling people naive based on what you think the future might look like. Probably because some influencer or whatever got to you. I'm making good money doing what I do right now, and I know for a fact that will continue for years to come. I see no reason to change anything right now.
The question is what probability you assign to getting TAI over time. From your comment it seems you'd say 0 percent within your career.
For me it's between 20 and 80 percent in the next ten years (depending on the day :)
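One framing nitpick: a ten-year probability and a per-year probability are easy to conflate. A quick conversion, assuming (purely for illustration) a constant annual chance:

    # Convert a constant annual probability of TAI into a 10-year cumulative one.
    # The constant-hazard assumption is mine, purely for illustration.
    def cumulative(p_annual, years=10):
        return 1 - (1 - p_annual) ** years

    for p in (0.02, 0.05, 0.15):
        print(f"annual {p:.0%} -> 10-year {cumulative(p):.0%}")
    # annual 2%  -> 10-year 18%
    # annual 5%  -> 10-year 40%
    # annual 15% -> 10-year 80%

So "20 to 80 percent in ten years" corresponds to anywhere from roughly a 2% to a 15% chance per year.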
One believes the following:
> AI can't do my job and I doubt it will any time soon
The other believes the opposite; that AI is improving rapidly enough that their job is in danger "soon".
From a game theory stance, is there any advantage to holding the first belief over the second?
If this kind of thing happens and interest rates are 0.5%, then people on UBI could potentially have access to land and not have horrible lives; if it's 16%, as these guys propose, they will be living in 1980s-Tokyo cyberpunk boxes.
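To put rough numbers on that (illustrative figures of my own, using standard amortizing-loan arithmetic):

    # What 0.5% vs 16% means for financing, say, $200k of land/housing.
    # All figures are illustrative assumptions.
    def monthly_payment(principal, annual_rate, years=30):
        n = years * 12
        r = annual_rate / 12
        if r == 0:
            return principal / n
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    for rate in (0.005, 0.16):
        print(f"{rate:.1%}: ~${monthly_payment(200_000, rate):,.0f}/month")
    # 0.5%:  ~$598/month   (plausibly payable on a UBI)
    # 16.0%: ~$2,690/month (cyberpunk boxes it is)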
Tho I guess even post scarcity we'd have people who care about hoarding gold-pressed latinum.
That's the literal actual textbook definition of "communism".
Lmao that I actually lived to see the day when techbros seriously discuss this.
People have been making comparisons between post scarcity economics and "utopia communism" for decades at this point. This talking point probably predates your birth.
“I have a theoretical degree in economics”
You’re hired!
Real talk though: I wish I had just encountered an obscure paper that could lead me to refining a model for myself, but it seems like there would be so many competing papers that it's the same as having none.
It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.
FWIW, the author is listed as a fellow of "The Forethought Foundation" [0], which is part of the Effective Altruism crowd [1], who have some cultish doomerist views around AI [2][3].
There's a reason this stuff goes up on a non-peer reviewed paper mill.
--
[0] https://www.forethought.org/the-2022-cohort
[1] https://www.forethought.org/about-us
[2] https://reason.com/2024/07/05/the-authoritarian-side-of-effe...
[3] https://www.techdirt.com/2024/04/29/effective-altruisms-bait...
Isn't developing AGI basically the mission of OpenAI et al? What's so bad about considering what will happen if they achieve their mission?
>who have some cultish doomerism views around AI [2][3]
Check the signatories on this statement: https://www.safe.ai/work/statement-on-ai-risk
Almost everything on HN gets those comments. Look at the top comments of almost any discussion - they will be a rejection / dismissal of the OP.
- one is expanding on the topic without expressing disagreement
- one is a eulogy
- one expresses both agreement on some points and disagreement on other points