All educated people in 2020 understand that everything is racist.

Here is a highly educated person, for instance, explaining that all White people are racist.

But even many educated Americans do not yet understand that algorithms can be so-called “racist.” Indeed, this is an extension of the principle that nothing is more “racist” than accurate pattern recognition.

In early May, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a big academic publisher.

The paper, “A Deep Neural Network Model to Predict Criminality Using Image Processing,” claimed its algorithm could predict “if someone is a criminal based solely on a picture of their face,” with “80 percent accuracy and with no racial bias.” The press release has since been deleted from the university’s website.
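Neither the paper nor its data were ever released, but what the press release describes is, in outline, an ordinary binary image classifier. The sketch below is purely illustrative; the architecture, the two-class setup, and the stand-in data are assumptions, not the authors’ actual method.

```python
# Hypothetical sketch of the kind of model the press release describes:
# a convolutional network trained to emit a binary "criminal"/"not criminal"
# label from a face photo. Nothing here is taken from the unpublished paper.
import torch
import torch.nn as nn

class FaceLabelClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)   # two logits: "criminal" / "not criminal"

    def forward(self, x):                    # x: (batch, 3, H, W) face crops
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = FaceLabelClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random stand-in data. In practice the
# reported accuracy of such a model depends entirely on which dataset and
# which labels (e.g. conviction records) it is trained and evaluated on.
images = torch.randn(8, 3, 128, 128)         # stand-in for face photos
labels = torch.randint(0, 2, (8,))           # stand-in for "criminal" labels
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```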

The plot thickens…

On Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter that it would not publish the research.

But the researchers say the problem doesn’t stop there. Signers of the letter, collectively calling themselves the Coalition for Critical Technology (CCT), said the paper’s claims “are based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The letter argues it is impossible to predict criminality without racial bias, “because the category of ‘criminality’ itself is racially biased.”
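Taken on its own terms, the letter’s argument is a claim about label bias: a model fit on records of who gets arrested or convicted learns the pattern in the records, not some underlying trait. A minimal synthetic sketch of that mechanism, with every group, rate, and feature invented purely for illustration, would look something like this:

```python
# Synthetic illustration of label bias: the "criminal" label is applied at
# different rates to two groups whose underlying behaviour is identical,
# and a model fit on that label simply learns the recording disparity.
# All numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# Two groups with the same true offending rate (10% in both).
group = rng.integers(0, 2, n)
offended = rng.random(n) < 0.10

# Biased labelling: group 1's offences are recorded 3x as often as group 0's.
record_prob = np.where(group == 1, 0.9, 0.3)
label = offended & (rng.random(n) < record_prob)

# A "predictor" that only sees a feature correlated with group membership
# (standing in for whatever a face photo leaks about demographics).
X = group.reshape(-1, 1).astype(float)
clf = LogisticRegression().fit(X, label)

p0, p1 = clf.predict_proba([[0.0], [1.0]])[:, 1]
print(f"predicted 'criminality': group 0 = {p0:.3f}, group 1 = {p1:.3f}")
# The model rates group 1 roughly 3x more "criminal" even though the true
# offending rates were identical: it has learned the label, not the behaviour.
```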

Makes sense…

Race science was debunked long ago, but papers that use machine learning to “predict” innate attributes or offer diagnoses are making a subtle but alarming return.

Why would an algorithm based on allegedly debunked science be predictively accurate?

In 2016, researchers from Shanghai Jiao Tong University claimed their algorithm could predict criminality using facial analysis. Engineers from Stanford and Google refuted the paper’s claims, calling the approach a new “physiognomy,” a debunked race science popular among eugenicists that infers personality attributes from the features of a person’s face.

China’s refusal to sacrifice scientific inquiry at the altar of egalitarian dogma may just prove to be its decisive advantage. Perhaps it’s time that the United States end its endless and unhealthy obsession with racism, and get back to the business of civilization.