The kill floor of a modern American beef packing plant moves at a speed that does not lend itself to deliberation. A large plant — Tyson's, JBS's, Cargill's, National Beef's — runs two shifts and processes between five and six thousand cattle in a day. Every carcass is split into sides, hung, chilled, and moved past a sequence of stations, each station performing one decision in the chain that ends with a price on the cut at the grocery store. One of those stations, near the chill cooler, is a camera. The camera is mounted at a fixed position; the carcass arrives at it on schedule; a light is thrown across the ribeye cross-section exposed at the twelfth rib; an image is taken. In rather less than a second, the camera returns a USDA quality grade — Prime, Choice, Select, Standard — and a yield grade between one and five. The grade is stamped onto the carcass; the carcass moves on. The system — most commonly the VBG 2000 from a German firm called e+v Technology, and its modern successors — has been certified by the USDA in some form since the mid-1990s, deployed at scale across the American beef industry through the 2000s and 2010s, and by 2026 is, in any plant of significant size, the source of the great majority of the grades that determine the price of every cut of beef sold in the country.

This post is the second of two corrections — the first, earlier today, was on the emotional-support use of frontier language models — to the five posts on AI-lab trajectories I wrote last week. Those posts measured the things the analyst class is in the habit of measuring: model release cadence, revenue, compute, regulatory standing, distribution moats. They did not measure the camera on the kill floor, because the kill floor is not in any of the rooms the analyst class visits. The argument of this second corrective is parallel to the first and, by the time one has finished, perhaps stronger: the AI deployment that is most thoroughly built into the physical world in 2026 is not the deployment the AI conversation can see.

The Floor and the Number

A few facts, in the order one would want them.

The United States slaughters roughly thirty-two million head of cattle a year, the great majority of them through the four largest packers — Cargill, Tyson (which owns IBP), JBS USA, and National Beef. At full operation, these four account for something on the order of three-quarters of the country’s fed-cattle slaughter. A daily throughput of a hundred and twenty thousand carcasses across all American beef plants is not a stretch; in many weeks of the year it is exceeded.

Every one of those carcasses is graded, because grading is what the price is paid against — the difference between a Prime and a Choice ribeye, at wholesale, is several dollars a pound, and the difference between an upper-Choice and a lower-Choice carcass, multiplied across the eight or nine hundred pounds a finished steer hangs on the rail, is meaningful money to the packer and to the rancher who supplied the animal. Before camera grading, this work was done by USDA-employed master graders, who walked the cooler with the carcasses and rendered a judgment by eye on each. The job required years of training, a steady eye, and a willingness to spend one’s working life among hanging carcasses; the supply of qualified graders never quite kept up with the demand. By the middle of the 2010s, the camera systems had been certified at a sufficient number of decision points — the USDA accreditation is staged across marbling, ribeye area, lean color, fat color, and skeletal maturity — that the cameras did the work and the human grader supervised it. By the early 2020s, the supervision had become, in most plants, a matter of resolving the cases where the camera reported low confidence or where the boundary between two grades fell within a few points of marbling score. By 2026, a significant majority of the beef carcasses graded in the United States receive their USDA quality grade from a piece of software, and the grader signs off on the camera’s verdict at a pace that human-only grading could not have approached.
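To put a rough number on "meaningful money," a back-of-the-envelope sketch is below. The premium per pound, the carcass weight, and the daily head count are assumed round figures, chosen only to show the scale of the arithmetic; none of them is a published statistic.

```python
# Back-of-the-envelope arithmetic for the value of a one-step grade difference.
# Every figure here is an assumed round number for illustration, not published data.

carcass_weight_lb = 850       # assumed hot carcass weight for a fed steer
premium_per_lb = 0.10         # assumed upper-Choice vs lower-Choice premium, $/lb of carcass

premium_per_head = carcass_weight_lb * premium_per_lb
print(f"Swing on one carcass: ${premium_per_head:.2f}")            # $85.00

# At an assumed 5,000 head a day, moving 1% of the boundary calls by one grade:
head_per_day = 5_000
boundary_share = 0.01
daily_swing = head_per_day * boundary_share * premium_per_head
print(f"Daily swing from 1% of calls: ${daily_swing:,.2f}")        # $4,250.00
```

Even at these deliberately modest assumptions, a plant running two shifts has an obvious interest in getting the boundary calls right, which is the economics the grading station exists to serve.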

The agreement-rate numbers, where the USDA has published them, hold their shape across more than a decade of refinement: on marbling, the cameras and the master graders agree, depending on the system and the year of evaluation, somewhere between the high eighties and the mid-nineties percent; on ribeye area, where the measurement is geometric rather than subjective, agreement is higher still and the cameras are usually treated as the ground truth. Throughput per inspection station, against the human-only baseline of an experienced grader doing seven hundred to eight hundred carcasses on a shift, is anywhere from three to six times higher with the camera in the loop, and the ceiling continues to lift as confidence-routing improves and the high-confidence cases bypass human review entirely.
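The confidence-routing mentioned here is simple to sketch in outline. A minimal sketch follows, assuming a single confidence threshold; the threshold value, the data shapes, and the names are hypothetical, meant to show the pattern rather than any vendor's or plant's actual system.

```python
# Minimal sketch of confidence-based routing between the camera and the human
# grader. The threshold, the data shapes, and the names are hypothetical; they
# illustrate the pattern, not any particular vendor's or plant's system.
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class CameraReading:
    carcass_id: str
    quality_grade: str    # e.g. "Prime", "Choice", "Select", "Standard"
    confidence: float     # camera system's confidence in its own call, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.95   # assumed cutoff; below this, the grader decides

def route(reading: CameraReading, grader_queue: list[CameraReading]) -> str:
    """Return the final grade, or defer the carcass to the grader's queue."""
    if reading.confidence >= CONFIDENCE_THRESHOLD:
        return reading.quality_grade       # stamped; the carcass moves on
    grader_queue.append(reading)           # boundary case: the grader has the final word
    return "PENDING_HUMAN_REVIEW"

queue: list[CameraReading] = []
print(route(CameraReading("A1037", "Choice", 0.98), queue))   # -> Choice
print(route(CameraReading("A1038", "Prime", 0.71), queue))    # -> PENDING_HUMAN_REVIEW
```

The design choice is the obvious one: every improvement in the share of carcasses the camera can clear above the threshold shows up directly in the throughput figures above, because the grader's attention is reserved for the genuinely ambiguous cases.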

These are the deployment numbers of a piece of artificial intelligence that has been in the field for thirty years, has cleared regulatory accreditation in five separate decision categories, has paid back its capital cost in every plant of any size, and has reorganized the labor of an entire profession around itself.

The Wrong Newsroom

One would think this would receive some coverage. It does, in fact, receive coverage — in The National Provisioner, in Meatingplace, in Meat & Poultry, in the USDA’s own Livestock Slaughter releases, and in a fairly substantial body of agricultural-economics literature. None of these publications, with the kindest will, are read by the people who write about artificial intelligence for a living. The AI trade press — which is to say TechCrunch, The Information, Stratechery, Semafor, the AI sections of the Wall Street Journal and the Financial Times, and the considerable cottage industry of independent newsletters that has grown up around the language-model labs since 2023 — has, as far as I have been able to find, written almost nothing about kill-floor camera grading at any point in the past five years. The deployment is too long-running to be news, the publications that cover it are not on anyone’s beat, and the firms involved — e+v Technology, the four packers, the USDA Agricultural Marketing Service — do not seek out the AI press because they have nothing to sell it.

This is the second face of the same problem the first corrective described. The first was that emotional support, by conversation volume, is the largest single use of frontier language models in 2026, and goes almost entirely uncounted by the analyst class because it does not generate API revenue. The second is that industrial visual quality grading is, by physical-world depth of deployment, the most thoroughly embedded use of artificial intelligence in the American economy in 2026, and goes almost entirely uncounted because the rooms in which it happens are not the rooms in which the analyst class drinks coffee. In both cases the structural problem is the same: when an industry describes “what AI does in the world,” it ends up describing what AI does in its own line of sight. The line of sight does not reach the chill cooler at a Cargill plant in Dodge City.

Why It Works on That Floor

There is a quiet structural argument for why camera grading became the deployment it became, while a hundred more famous applications stalled. It rests on three properties of the task.

The first is that the task is, at its core, a perception problem and not a reasoning problem. The camera does not need to understand beef; it needs to look at a marbling pattern and return a grade. There is no chain of inference, no context window of relevant prior facts, no consideration of intent. A trained vision system, given enough labeled examples, is exactly the right tool for that shape of problem, and was the right tool well before the language-model era. The cameras did not have to wait for the breakthroughs of the 2020s; the breakthroughs of the 1990s were sufficient, and the improvements since have only made the system more capable at the margins.
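As a toy reduction of that shape of problem, assume the perception step upstream has already produced a numeric marbling score; what remains is a lookup. The cut points below follow the broad outline of the marbling-to-grade relationship for young carcasses, but they are simplified (official grading also weighs maturity, lean color, and other factors) and should be read as illustrative rather than as the USDA standard.

```python
# Toy reduction of the grading decision to its perceptual core: a numeric
# marbling score in, a quality grade out. The cut points are simplified and
# illustrative; official grading also weighs maturity, lean color, and other
# factors, and the real difficulty is producing the score from the image.

def quality_grade(marbling_score: int) -> str:
    """Map a marbling score (roughly 100-1000 scale) to a simplified quality grade."""
    if marbling_score >= 700:   # roughly Slightly Abundant and above
        return "Prime"
    if marbling_score >= 400:   # roughly Small through Moderate
        return "Choice"
    if marbling_score >= 300:   # roughly Slight
        return "Select"
    return "Standard"

for score in (320, 450, 640, 780):
    print(score, quality_grade(score))   # Select, Choice, Choice, Prime
```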

The second is that the conditions of the task are tightly controlled. Every carcass arrives at the camera at the same height, the same orientation, the same distance, in the same light. The variables that defeat consumer-grade vision systems — clutter, occlusion, lighting drift, unfamiliar pose — are, on the kill floor, engineered out of existence by the floor itself. Industrial AI has always performed well in industrial conditions; the conditions are what one might call the quiet half of the deployment story, the half that is built before any model is trained.

The third is that the supply of training data is exceptionally good. Decades of human master-grader decisions are available, every one of them tied to a downstream price the meat actually fetched at market. Ground truth in most machine-learning problems is contestable, expensive, or both; on the kill floor it is grounded in dollars, which is the cleanest signal there is. A model trained on what the master graders called Prime, against carcasses that went to market and fetched Prime prices, has a label set that is honest in a way that human-language-task label sets almost never manage.

These three properties — perceptual task, controlled conditions, dollar-anchored labels — are not unique to beef grading, but they are unusually clean there. The same triad is what made AI work for poultry inspection at Marel and Baader, for produce sorting at Tomra and Bühler, for fish grading, for pork carcass measurement at Frontmatec. The pattern is wider than the kill floor. The kill floor is simply the cleanest example, and the longest-running.

What This Is Not

One ought to be plain about what this is not.

It is not a story of a profession disappeared. USDA master graders still exist, still walk the floor at every major packing plant, still draw a salary from the Agricultural Marketing Service, and still own the final word on any carcass the camera flags as ambiguous. What has changed is that they verify rather than originate, and that the same number of graders covers many more carcasses than they could have done by eye. The labor has been rebalanced; it has not been eliminated, and the experienced graders who remain on the floor are arguably more valuable than they were in the human-only era, because their judgment is what resolves the cases the camera will not.

It is not a story of an industrial deployment that arrived through any sudden breakthrough. The first vision-based grading patents date to the late 1980s; the first USDA approvals to the mid-1990s; the rollout has been a thirty-year unbroken line of steady refinement and steady regulatory accommodation. The lesson, if there is one, is the dull lesson — that the most consequential AI deployments are not the ones one reads about on the day they ship.

It is not a story that generalizes effortlessly to the rest of the meat economy. Pork carcass grading uses different measurements (lean percentage rather than marbling) and different equipment. Poultry inspection is largely about defect detection, not quality grading, and runs at much higher line speeds with different cameras. Fish grading has its own peculiarities. Each of these is a deployment in its own right; each has its own three-decade history; none of them is the same as beef.

And it is not, finally, an argument that the deeper economic significance of AI in 2026 lies only in industrial visual grading. The first corrective made the case for emotional support; this one makes the case for visual quality grading; there are several other categories — predictive maintenance on heavy industrial equipment, optical character recognition at the back of every bank in the country, computer-vision defect detection in semiconductor fabrication, claims-triage models at every casualty insurer — that share the basic shape of this argument and that the AI trade press also does not cover. The plural form of the claim is that the analyst class has, as a class, a coverage problem, and that the largest and most embedded AI deployments are systematically the ones it is least equipped to see.

The Plain Fact

The structural facts come out, in order, as follows.

The first is that beef-grading cameras, by any measure of depth — regulatory accreditation, capital expenditure, supply-chain integration, labor reorganization, equipment lifetime — are among the deepest AI deployments in the American economy in 2026. A federal agency has certified them in five separate decision categories. The four largest packers have committed multi-decade capital to them. The price of every fed-cattle carcass in the country flows through them. The depreciation schedule on the equipment runs into the 2030s. Whatever else one wants to say about AI in 2026, it is harder to remove this deployment than it is to remove almost any chatbot one cares to name.

The second is that the analyst class’s account of what AI does in the world is not a description of what AI does in the world. It is a description of what AI does in the rooms the analyst class visits — productivity tools, agent platforms, consumer apps, API surfaces. The rooms it does not visit — the kill floor, the hospice room, the back of the bank, the fabrication plant — are where deployment runs deepest. The five trajectory posts, together with this corrective and the previous one, add up to an extended argument against the analyst class’s own map.

The third is the one I should like to leave plainly. The chatbot has a short history, was promised for a long time before it arrived, and could in principle be reversed by a shift in subsidy or in sentiment. The camera on the kill floor has a long history, has been quietly extending its reach for thirty years, is built into the price of a steak, and could be removed only by an act of regulatory withdrawal that no constituency is asking for. The first is more visible; the second is more permanent. The thing one writes about and the thing one builds one’s economy around are, in 2026, no longer the same thing.

On the floor, the line keeps moving, the camera keeps grading, and the price keeps being set on what the camera sees. None of it has required anyone’s coverage. All of it has been the case for thirty years.