{"id":1340,"date":"2025-03-13T13:00:00","date_gmt":"2025-03-13T14:00:00","guid":{"rendered":"http:\/\/asian-idol.com\/?p=1340"},"modified":"2025-03-28T11:49:45","modified_gmt":"2025-03-28T11:49:45","slug":"ai-is-coming-for-the-laptop-class","status":"publish","type":"post","link":"http:\/\/asian-idol.com\/index.php\/2025\/03\/13\/ai-is-coming-for-the-laptop-class\/","title":{"rendered":"AI is coming for the laptop class"},"content":{"rendered":"
\n


Enjoy the laptop lifestyle while it lasts, folks. | Smith Collection\/Gado\/Getty Images<\/figcaption><\/figure>\n

My entire job takes place on my laptop.<\/p>\n

I write stories like this in Google Docs on my laptop. I coordinate with my editor in Slack on my laptop. I reach out to sources with Gmail and then interview them over Zoom, on my laptop. This isn\u2019t true of all journalists \u2014 some go to war zones \u2014 but it\u2019s true of many of us, and of accountants, tax preparers, software engineers, and many more workers besides, maybe over one in 10<\/a> of the workforce.<\/p>\n

Laptop jobs have many charms: the lack of a commute or dress code, the location flexibility, the absence of real physical strain. But if you\u2019re a laptop worker and not worried about what\u2019s coming in the next decade, you haven\u2019t been paying attention. There is no segment of the labor market more at risk from rapid improvements in AI than us.<\/p>\n

The newest \u201creasoning models<\/a>\u201d from top AI companies are already essentially human-level, if not superhuman, at many programming tasks<\/a>, which in turn has led new tech startups to hire fewer workers<\/a>. Generative AIs like DALL-E, Sora, or Midjourney are actively competing with human visual artists; they\u2019ve already noticeably reduced demand for freelance graphic design<\/a>.<\/p>\n

Services like OpenAI\u2019s Deep Research<\/a> are very good at internet-based research projects like, say, digging up background information for a Vox piece. \u201cAgentic\u201d AIs like Operator<\/a> are able to coordinate and sequence these kinds of tasks the way a good manager might. And the rapid pace of progress in the field means that laptop warriors can\u2019t even take comfort in the fact that current versions of these programs and models may be janky<\/a> and buggy<\/a>. They will only get better from here, while we humans will stay mostly the same.<\/p>\n

As AIs have improved at laptop job tasks, progress on more physical work has been slower. Humanoid robots<\/a> capable of tasks like folding laundry<\/a> have been a longtime dream, but the state of the art falls wildly short of human level. Self-driving cars have seen considerable progress, but the dream has proven harder to achieve than boosters thought<\/a>. While AI has been improving rapidly, robotics<\/em> \u2014 the ability of AI to work in the physical world \u2014 has been improving much more slowly. At this point, a robot plumber or maid is far harder to imagine than a robot accountant or lawyer.<\/p>\n

Let me offer, then, a thought experiment. Imagine we get to a point \u2014 maybe in the next couple of years, maybe in 10, maybe in 20 \u2014 when AI models can fully substitute for any remote worker. They can write this article better than me, make YouTube videos more popular than MrBeast\u2019s, do the work of an army of accountants, and review millions of discovery documents for a multibillion-dollar lawsuit, all in a matter of minutes. We would have, to borrow a phrase from AI writer and investor Leopold Aschenbrenner, \u201cdrop-in remote workers<\/a>.\u201d How does that reshape the US, and world, economy?<\/p>\n

Right now this is a hypothetical. But it\u2019s a hypothetical worth taking seriously \u2014 seriously enough that I may or may not be visiting the International Brotherhood of Electrical Workers\u2019 apprenticeship application<\/a> most days, just in case I need work that requires a human body.<\/p>\n

Fast AI progress, slow robotics progress<\/h2>\n

If you\u2019ve heard of OpenAI, you\u2019ve heard of its language models: GPTs 1, 2, 3, 3.5, 4, and most recently 4.5. You might have heard of its image generation model DALL-E or its video generation model Sora.<\/p>\n

But you probably haven\u2019t heard of its Rubik\u2019s Cube-solving robot. That\u2019s because the team that built it was disbanded in 2021, about a year before the release of ChatGPT and the company\u2019s explosion into public consciousness.<\/p>\n

OpenAI engineer Wojciech Zaremba explained on a podcast<\/a> that year that the company had determined there was not enough real-world data on movement to keep making progress on the robot. Two years of work, between 2017 and 2019, was enough to get the robot hand to a point where it could unscramble Rubik\u2019s Cubes<\/a> successfully 20 to 60 percent of the time, depending on how well-scrambled the Cube was. That\u2019s \u2026 not especially great, particularly when held up next to OpenAI\u2019s language models, which even in earlier versions seemed capable of competing with humans on certain tasks.<\/p>\n

It\u2019s a small story that encapsulates a truism in the AI world: the physical is lagging the cognitive. Or, more simply, the chatbots are beating the robots.<\/p>\n

This is not a new observation: It\u2019s called Moravec\u2019s paradox, after the futurist Hans Moravec, who famously observed that computers tend to do poorly at tasks that are easy for humans<\/a> and do well at tasks that are hard for humans.<\/p>\n

Why? Here we\u2019re less sure. As the machine learning researcher Nathan Lambert<\/a> once noted, Moravec\u2019s paradox is \u201cbased on observation, not theory. We have a lot of work to do to figure out why.\u201d But we have some hypotheses.<\/p>\n

Perhaps human-like motions are harder for robots because we gained them relatively early in evolutionary time, far earlier than our capacity for reasoning. Running on two or even four legs is a very old ability that many animals share; it\u2019s instinctual for us, which both makes it harder for machines without that evolutionary history to learn, and harder for us to articulate to those machines.<\/p>\n

Harder still is the fact that a robot has to learn to run on two legs by actually<\/em> running on two legs in real life. This point is key: If OpenAI had its servers pronouncing every sentence that ChatGPT generates, out loud, one at a time, as part of its training process, it probably would\u2019ve taken millennia to get to today\u2019s abilities. Instead, it was able to train the GPT models on vast clusters of chips operating in parallel, churning through reams of text comprising trillions of individual words. Each new model requires only months of training because the process happens much, much faster than real time.<\/p>\n

Historically roboticists\u2019 way around this limitation was to make simulated worlds, sort of purpose-built video game environments, in which to train robots much faster. But when you take the bot out of the virtual playground and into the real world, it has a tendency to fail. Roboticists call this the \u201csim2real\u201d (simulation to reality) gap<\/a>, and many a noble robot has fallen into it (and over it, and on it) over the years.<\/p>\n

The optimistic theory of the case is that, given enough real-world data about movement, the same techniques that have made language models so successful can be used to make robots work well. The most bullish takes on robotics I\u2019ve seen, like this one from Anthropic co-founder Jack Clark<\/a> last year, are based on the idea that if you throw enough data (from stuff like YouTube videos of people walking around, or from actual humans operating the robot with a controller<\/a>) into well-designed and fine-tuned transformer models (the same architecture that underlies ChatGPT and Claude), the end result will be a model good enough to govern a robot in the real world.<\/p>\n

Maybe! So far we mostly have academic demonstrations<\/a> rather than the real-world, commercialized products that large language models are today. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. One of Anthropic\u2019s early investors is James McClave, whose BEMC Foundation helps fund Future Perfect<\/a>. Our reporting remains editorially independent.)<\/p>\n

I don\u2019t know the trajectory of cognitive AI and robotics over the next decade. Maybe, as OpenAI CEO Sam Altman has predicted<\/a>, this year will \u201csee the first AI agents \u2018join the workforce\u2019 and materially change the output of companies\u201d (referring, presumably, to software workers rather than robots). Maybe, as critics argue<\/a>, the cost of training these models will prove too immense and the companies developing them, which are burning through billions in VC funding<\/a>, will fail. Maybe robotics will continue to lag, or maybe people will have Rosie from The Jetsons<\/em> dusting their furniture next year. I have my guesses, but I know enough to know I shouldn\u2019t be too confident.<\/p>\n

My median guess, though, is the world outlined above: language, audiovisual, and otherwise non-physical models continue to make very rapid progress, perhaps becoming capable of doing any fully remote job currently done by humans within the next decade; robotics continues to lag, being very useful in advanced manufacturing but unable to garden or change your sheets or empty your dishwasher. Taken to an extreme, this could look like, in the words of Anthropic CEO Dario Amodei<\/a>, a \u201ccountry of geniuses in a datacenter.\u201d<\/p>\n

What does that<\/em> world look like?<\/p>\n

The work left for the rest of us<\/h2>\n

One of the more useful pieces examining this idea came out in January from Epoch AI<\/a>, a small research group that\u2019s quickly become the most reliable source of data on cutting-edge AI models. The author, Matthew Barnett, uses a commercially available AI model (GPT-4o) to go through a US Department of Labor-sponsored database<\/a> of over 19,000<\/a> job tasks and categorize each of them as doable remotely (writing code, sending emails) or not doable remotely (firefighting, bowling).<\/p>\n

A task, notably, is not the same as a job or occupation. The occupation \u201cjournalist\u201d includes specific subtasks like writing emails, composing articles, making phone calls, appearing on panels, reading academic papers, and so on. And an occupation as a whole cannot be automated unless all tasks, or at least all absolutely necessary tasks, can themselves be automated. An AI might be able to do some of the mental labor a surgeon has to perform, for instance, but until it can actually cut and suture a human, the surgeon\u2019s job remains safe.<\/p>\n

Barnett finds that 34 percent of tasks can be performed remotely, but only 13 percent of occupations<\/em> have, as their top five most important subtasks, things that can all be done remotely. Thirteen percent can then serve as an (admittedly very rough) estimate of the share of jobs that could, in principle, be fully automated by a sufficiently advanced cognitive AI.<\/p>\n
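Barnett\u2019s occupation-level filter is simple enough to sketch in code. Here is a toy version, with made-up occupations and remote/not labels (Barnett used GPT-4o\u2019s judgments on the real 19,000-task database):

```python
# Toy version of Barnett's filter: an occupation counts as fully
# remote-automatable only if ALL of its top-five tasks are remote-able.
# The occupations and True/False labels below are invented for illustration.
occupations = {
    "accountant":        [True, True, True, True, True],
    "journalist":        [True, True, True, False, True],   # e.g., on-site reporting
    "firefighter":       [False, False, False, True, False],
    "software engineer": [True, True, True, True, True],
}

def fully_remote(top_five_tasks):
    # A cognitive-only AI can take over the occupation just when
    # every one of its most important tasks can be done remotely.
    return all(top_five_tasks)

share = sum(fully_remote(tasks) for tasks in occupations.values()) / len(occupations)
print(f"{share:.0%} of these toy occupations are fully remote-able")  # prints "50% ..."
```

On this toy data, half the occupations pass the filter; on the real task database, Barnett\u2019s answer is 13 percent.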

Obviously, a world in which 13 percent of jobs are rapidly automated away is one with pretty massive social disruption. But at first glance, it doesn\u2019t seem too different from what\u2019s been happening in many industries over the past couple of centuries. In 1870<\/a>, about half of United States workers worked in agriculture. By 1900, a third did. Last year, only 1.4 percent<\/a> did. The consequence of this is not that Americans starve, but that a vastly more productive, heavily automated farming sector feeds us and lets the other 98.6 percent of the workforce do other work we like more.<\/p>\n

Similarly, manufacturing has become so automated that it now appears global<\/em> manufacturing employment has peaked<\/a> \u2014 it\u2019s not just that factories use fewer workers in the US compared to poorer countries, but that they use fewer workers everywhere, period<\/em>.<\/p>\n

\n
\n

\u201cThere’s an upper bound of how much can be remote, and I think we’re kind of at it now.\u201d<\/p>\n

Nicholas Bloom, Stanford University economist and leading expert on remote work<\/cite><\/p><\/blockquote>\n<\/figure>\n

Agriculture<\/a> and manufacturing<\/a> are also becoming less important as a share of global economic output over time, not just as shares of employment. So this is one possible future: AI rapidly increases productivity in remote-friendly jobs like software engineering, accounting, and writing for Vox.com, leading to sharp reductions in employment in those sectors. People displaced by this change gradually move into harder-to-automate jobs, becoming masseuses, electricians, nurses, and so forth.<\/p>\n

Barnett notes that if this happens, the effect on global economic growth could be massive (maybe a doubling of economic output). It would obviously be inconvenient for me, personally, and I would be sad. But it\u2019s basically \u201cthe world now, but more so\u201d \u2014 more economic growth and more labor displacement \u2014 rather than a brave new world.<\/p>\n

That said, Barnett thinks this probably undersells what might happen. Yes, automation in agriculture and manufacturing has meant that those sectors gradually decline in importance. That doesn\u2019t have<\/em> to happen, though. Barnett gives the example of the UK after the invention of the spinning jenny and flying shuttle<\/a>. Those and subsequent cotton-processing technologies massively improved productivity in the textiles industry relative to other sectors of the British economy.<\/p>\n

Was the result that textiles became less important? Quite the opposite: The sector exploded, and became vastly more<\/em> important to the British economy. British exports of textiles increased over sevenfold<\/a> between the 1740s (when those inventions were just being developed and deployed) and the 1750s, and kept growing exponentially from there.<\/p>\n

The difference between these scenarios comes down to a number economists call the \u201celasticity of substitution\u201d \u2014 in this case, between remote and in-person work, but in principle between any two kinds of work. For some kinds of work, this number is below 1, meaning that if that work gets much cheaper, it will shrink relative to other kinds of work. The two types of work don\u2019t substitute well for each other, so the elasticity of substitution is low. But if the elasticity is above 1, then the work getting cheaper will become more<\/em> common and more <\/em>important.<\/p>\n
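The threshold at 1 shows up in a small calculation. Under a textbook CES (constant elasticity of substitution) demand setup with elasticity sigma, a good\u2019s share of total spending is proportional to price raised to the power (1 \u2212 sigma), so making the good cheaper raises its share exactly when sigma is above 1. (This is a standard illustration of the concept, not Barnett\u2019s exact model.)

```python
def expenditure_shares(prices, sigma):
    # CES demand with equal weights: share_i is proportional to p_i ** (1 - sigma).
    weights = [p ** (1 - sigma) for p in prices]
    total = sum(weights)
    return [w / total for w in weights]

# Two kinds of work (remote, in-person), starting from equal prices:
# each gets half of total spending regardless of sigma.
assert expenditure_shares([1.0, 1.0], sigma=10)[0] == 0.5

# Now suppose AI makes remote work 10x cheaper.
high = expenditure_shares([0.1, 1.0], sigma=10)[0]   # elastic case: remote share near 1.0
low = expenditure_shares([0.1, 1.0], sigma=0.5)[0]   # inelastic case: remote share ~0.24

print(f"sigma=10: remote share {high:.2f}; sigma=0.5: remote share {low:.2f}")
```

With elasticity above 1, the cheaper remote sector swallows nearly all spending; with elasticity below 1, its share of spending actually shrinks even as its price drops.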

One way to think about this, Barnett told me, is whether your demand for something can be saturated<\/em>. \u201cThere\u2019s a sense in which your utility from food saturates, because the amount of utility you get from just getting 2,000 calories per day is not half the amount of utility you get from 4,000,\u201d he told me. \u201cAssuming you can live comfortably on 2,000 calories per day, then it\u2019s going to be almost exactly the same amount of utility, because you\u2019re probably gonna throw away a whole bunch of food.\u201d<\/p>\n

It makes sense, then, that agriculture shrank in importance once humanity developed the ability to grow more calories than people needed (the world\u2019s farms currently produce about 3,000 calories per person per day<\/a>, more than enough to feed every human on the planet by sheer quantity). Manufacturing, too, makes some sense in these terms. Most people hit a limit on how much large manufactured stuff they can actually use. My first washing machine helped a lot; getting a second, let alone a third, would be pointless.<\/p>\n

By contrast, the world\u2019s demand for textiles in the 18th century was nowhere near a saturation point. You can, in principle, own a limitless supply of clothes, and especially in the time of hand production, there was lots of pent-up demand, in countries around the world, for fabrics that had previously been prohibitively expensive. That meant that Britain could pour more and more resources into that sector of its economy without having returns diminish too much.<\/p>\n

What if remote work is more like that?<\/p>\n

This supposition might seem fanciful, but let\u2019s fantasize. If you had an on-call computer programmer who could make your computer work exactly the way you wanted, wouldn\u2019t you have a lot to ask it? If you had a personal animator who could make on-demand episodes of your favorite type of TV show with your favorite music in the background, wouldn\u2019t you call on her a lot?<\/p>\n

I have a million deeply weird questions I\u2019m too busy and\/or lazy to answer \u2014 who invented the \u201cYou Can\u2019t Hurry Love<\/a>\u201d bassline? Why were the witness reports in the Dag Hammarskjold plane crash<\/a> ignored? \u2014 that I wish something smarter than OpenAI\u2019s Deep Research could answer in seconds. Maybe you would too?<\/p>\n

If that\u2019s the situation, then things look very different. If the elasticity of substitution between remote and non-remote work is 10, Barnett finds, then you see US GDP grow tenfold over a decade, an average annual growth rate of roughly 25 percent. That is completely unheard of in human history<\/a>. But it would also be incredibly weird growth, showing up in increased consumption of AI-generated products, rather than, say, easier access to child care or cheaper housing.<\/p>\n
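The compound-growth arithmetic behind that figure is worth spelling out (my calculation, reading \u201ctenfold over a decade\u201d as ten years of compounding): the implied annual rate r satisfies (1 + r) to the 10th power equals 10.

```python
# Annual growth rate implied by tenfold GDP growth over ten years:
# (1 + r) ** 10 == 10  =>  r == 10 ** (1 / 10) - 1
rate = 10 ** (1 / 10) - 1
print(f"implied annual growth rate: {rate:.1%}")  # prints "implied annual growth rate: 25.9%"
```

That comes out to roughly the 25 percent a year cited above, against a historical norm in the low single digits.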

Nicholas Bloom, the Stanford University economist and leading expert on remote work, is taking the under on this bet. It\u2019s better, he reasons, to think of remote and non-remote work as complements than substitutes, which makes a scenario with high substitution, like Barnett\u2019s fast growth situation, hard to believe. <\/p>\n

\u201cThere’s an upper bound of how much can be remote, and I think we’re kind of at it now,\u201d Bloom says. That said, part of Bloom\u2019s skepticism about full-remote work comes from his belief in the importance of mentoring, which is much harder to do in a remote work setup. With AI, presumably the need to mentor in-person becomes moot.<\/p>\n

\n

What are the most remote-friendly jobs?<\/h2>\n

One can of course reason through which jobs are easy to do remotely (graphic design, telemarketing) and which are impossible (surgery, construction). But is it possible to be more systematic?<\/p>\n

Several researchers have tried to categorize major occupations as remote-able or not, but I like Matthew Barnett\u2019s approach of simply asking a large language model if certain tasks can be done remotely. Here are some examples of jobs where every single task can be done remotely, per the OpenAI model that Barnett asked (GPT-4o):<\/p>\n