I agree with your top line point about being open to weirdness, but generally see most "AGI scene" people as being insufficiently economically aware and think they would strongly benefit from moving closer to Alex's position.
> Essentially, Jason gives three arguments:
> 1. Many of the jobs Alex lists have a smaller relational component than he suggests (see Trevor).
> 2. A small fraction are indeed purely relational, but since superhuman AI would dramatically shift consumption patterns, demand for those jobs could still collapse (see Philip).
> 3. Superhuman AI could reshape the human brain, drastically reducing our interest in relational services.
I don't really buy 1, and discussed it with Trevor a bit here: https://x.com/herbiebradley/status/2046352086313668865. I haven't heard a satisfactory counterargument to the point that the AI-only product has to compete with AI + human, so the quality baseline is raised anyway. For therapists, don't you basically have to assume that there are no inherently-human tasks in the job of "therapist" to suppose that AI can be a full substitute (implicit in saying "better")? It's an *adjacent* product which fills some of the market demand but does not substitute for it.
On 2, I'm not convinced it's a very small fraction. But on collapse, Alex's quote here directly argues against Philip: "Many haven’t been invented yet, just as six out of ten jobs people hold today didn’t exist in 1940." He's arguing that part of that expanding variety will be new relational service jobs. I haven't yet seen a good argument against the reasonable point that we should expect many more new such jobs to appear!
On 3, an interesting take, but it seems far, far too uncertain, even for someone AGI-pilled, to really incorporate into a model here.
I don't think my intuition here rests on Alex's suggestion that many white-collar jobs have a core, underappreciated relational component, but I also think that's probably true, and those who haven't worked in a large bureaucracy and dealt seriously with internal politics wildly, wildly underestimate it. Sometimes you can see the issue directly: people who have only ever worked as SWEs or AI researchers genuinely believe that most jobs are highly intelligence-loaded.
To your point, isn’t #1 basically orthogonal to the last-mile problem? Yes, in absolute terms, perhaps these jobs aren’t as relational as proposed, but there is a relational subset or capacity within them that will be something of a differentiator, and it will (perhaps) become more important as the rest of the job gets automated.