The biggest story in journalism right now may be that CBS News agreed to give Donald Trump $16 million in a legally blessed bribe. The great sin of “The House That Edward R. Murrow Built” was that 60 Minutes aired a run-of-the-mill interview with Kamala Harris that made her seem like a competent public servant with years of expertise. Since Trump’s interviews, no matter the editing, sound like a dementia patient navigating a law school cold call, he decided CBS had committed consumer fraud because Harris spoke in full sentences.
But apparently we weren’t done with today’s “dystopian assault on freedom of the press” news! And it came for an unlikely target: Law360. I certainly didn’t have “legal industry trade publication” on my censorship BINGO card. Then again, Biglaw lateral moves have suddenly become political stories, so maybe this marks inevitable cowardice creep reaching the legal press.
But the part of this story that elevates it from ominous development for civil liberties to comi-tragic is that Law360 is owned by LexisNexis, and therefore the agent of Law360’s doom is… an AI algorithm! A new bias-detecting ChatGPT wrapper slapped together by some LexisNexis product engineers, probably pulled away from genuinely useful work to build a degenerative AI that strips news articles of any semblance of value. 2025, man… Does. Not. Miss.
NiemanLab, Harvard’s digital journalism center, reports that Law360 has ordered its reporters to run their stories through an AI bias detector designed for “applying a neutral voice to copy,” which is now mandatory for “headline drafting, story tagging, and ‘article refinement and editing.’”
As one might imagine, the journalists, represented by the Law360 union, object to this half-baked idea. A policy this ethically bankrupt could only arise from non-journalist executive input.
The announcement came a few weeks after an executive at Law360’s parent company accused the newsroom of liberal political bias in its coverage of the Trump administration. At an April town hall meeting, Teresa Harmon, vice president of legal news at LexisNexis, cited unspecified reader complaints as evidence of editorial bias.
Giving uncritical weight to squeaky-wheel complaints, especially in an environment where a government official weaponized his followers to act on their every grievance up to and including STORMING THE FUCKING CAPITOL, is a dunderheaded management strategy only an MBA could come up with. But it’s almost certainly a cynical one. If we all start writing complaints that the headlines are neutered doublespeak, will Law360 be ordered to reverse course? Somehow I doubt it.
While the article notes that there’s no established throughline from these remarks to the implementation of the policy, it speaks to a mindset that clearly got out of hand.
But let’s put aside the wisdom of the policy and focus on the fact that the bias detector would be terrible at its job. Because that’s just a little bit more fun. Only at a tech company could someone think that generative AI tools being developed for dedicated legal work tasks could be bolted onto the editorial process of a news publication.
Generative AI is a powerful tool in the same way a screwdriver is a powerful tool. But you wouldn’t use a screwdriver to do your taxes. Yet that’s the thinking involved in bringing AI into an editorial process. To borrow from the TV series Veep, it’s like using a croissant as a dildo: “It doesn’t do the job, and it makes a fucking MESS!”
She also criticized the headline of a March 28 story — “DOGE officials arrive at SEC with unclear agenda” — as an example. In the same town hall, Harmon suggested that the still-experimental bias indicator could be an effective solution to this problem, according to two employees in attendance.
But… DOGE officials did arrive at the SEC with an unclear agenda. The White House couldn’t even be clear about who was running DOGE, let alone its agenda. That is just a factual statement that, if anything, is biased in favor of DOGE, since its suspected agenda of stealing data and hampering regulation was about as well disguised as three raccoons in a trench coat.
The report notes another story about the Trump decision to mobilize the California National Guard:
Multiple sentences in the story were flagged as biased, including this one: “It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.” According to the bias indicator, this sentence was “framing the action as unprecedented in a way that might subtly critique the administration.” It was best to provide more context to “balance the tone.”
It was the first time in 60 years, though! That’s the relevant context. As is the juxtaposition with the civil rights era, since the last time a president did this, it was to push back against segregationists, while this time it was about breaking up a conga line. Absent that context, it strips a radical encroachment on state sovereignty of its newsworthiness.
The algorithm also apparently wanted the article to tone down its characterization of Judge Breyer’s response:
Another line was flagged for suggesting Judge Charles Breyer had “pushed back” against the federal government in his ruling, an opinion which had called the president’s deployment of the National Guard the act of “a monarchist.” Rather than “pushed back,” the bias indicator suggested a milder word, like “disagreed.”
This new bot would have reported Watergate as a tenant association dispute.
In another instance, BiasBot told Law360 that its coverage of a case should “state the facts of the lawsuit without suggesting its broader implications.” Given that the law is still ostensibly a function of precedent, reporting on caselaw is… all about broader implications.
It’s kind of the whole reason LexisNexis is in business, actually!
As a sometimes tech reporter, I have great relationships with the LexisNexis folks working to make the legal profession more efficient. But that’s because my contacts aren’t the people trying to micromanage news coverage to make sure every article earns the right-wing podcaster seal of approval as “fair.” It seems to me the company might need to get control of its rogue unit.
There are, admittedly, opportunities to leverage generative AI in the journalism workflow. Detecting bias is not one of them, for a number of reasons. The most straightforward and technical of these is that generative AI tools are designed to give the user pleasing answers come hell or high water. It’s how AI hallucinates cases to match the user’s research query. So if you build an AI to “detect bias,” it guarantees that it will find some bias. Probably four or five bulleted examples, no matter what. Does it really have a problem with “pushed back,” or was that just something it grabbed to fill its answer quota?
But the more philosophical answer is that objective facts often have a lean. When 99 percent of climate scientists say climate change is real, do news outlets need to give equal time to Professor Daniel Plainview on the medicinal benefits of drinking crude oil? Because the algorithm can’t handle that nuance. Based on the examples in the NiemanLab piece, it’s just performing the barest level of sentiment analysis and flagging phrasing that carries even the slightest lean beyond the superficial. But that in and of itself is an act of bias. I used to tell deponents not to speculate because if they don’t know something — no matter how much they think they’re helping — they’re actually lying if they don’t admit that they don’t know.
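To see why phrase-level flagging is itself an editorial act, here’s a toy sketch (purely hypothetical, not the LexisNexis product) of the kind of lexicon lookup the NiemanLab examples suggest. The phrase list and “neutral” substitutions are invented for illustration; the point is that nothing in this approach can ask whether the flagged language is accurate:

```python
# Toy "bias flagger" sketch: a lexicon lookup of the sort the NiemanLab
# examples imply. It marks any phrase with affect, with no ability to
# check whether the sentence is factually accurate. All entries invented.
CHARGED_PHRASES = {
    "pushed back": "disagreed",        # accurate characterization, flagged anyway
    "unprecedented": "unusual",        # accurate framing, flagged as critique
    "monarchist": "executive-minded",  # a verbatim quote from the ruling!
}

def flag_bias(sentence: str) -> list[tuple[str, str]]:
    """Return (flagged phrase, 'neutral' suggestion) pairs found in a sentence."""
    lowered = sentence.lower()
    return [(phrase, suggestion)
            for phrase, suggestion in CHARGED_PHRASES.items()
            if phrase in lowered]

ruling = "Judge Breyer pushed back, calling the deployment the act of a monarchist."
print(flag_bias(ruling))
# Flags both phrases, even though one is an accurate description of the
# ruling and the other is a direct quote from the opinion itself.
```

A filter like this rewrites true statements into milder ones by construction, which is the whole objection: the “neutral” output is a judgment call baked into a word list.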
The flip side is also true. A news report that says Charles Breyer had a tepid disagreement with the DOJ is, in fact, a lie. And it’s not any less of a lie because you asked the robot to say the lie for you.
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.