Children (and we’re all children, no matter how mature and sophisticated we pretend to be) generally find it hard to follow that maxim. We all screw it up once in a while, succumbing to the temptation of some perceived advantage to be had by dealing with an unsavory character.
Large corporations and national governments are at least as likely to succumb to the prospect of making a fast buck or signing some treaty with peers who don’t entertain the same values we have (or at least pretend to have). Governments, especially, have a tough time in dealing with what I’ll call “Bad Guys.”
Let’s face it, better than half the nations of the world are run by people we wouldn’t want in our living rooms!
I’m specifically thinking about totalitarian regimes like the People’s Republic of China (PRC).
Way back in the last century, Mao Tse-tung (or Mao Zedong, depending on how you choose to misspell the anglicization of his name) clearly placed China on the “Anti-American” team, espousing a virulent form of Marxism and descending into the totalitarian authoritarianism Marxist regimes are so prone to. This situation continued from the PRC’s founding in 1949 until 1972, when notoriously authoritarian-friendly U.S. President Richard Nixon toured China in an effort to start a trade relationship between the two countries.
Greedy U.S. corporations quickly started falling all over themselves in an effort to gain access to China’s enormous potential market. Mesmerized by the statistics of more than a billion people spread out over China’s enormous land mass, they ignored the fact that those people were struggling in a subsistence-agriculture economy that had collapsed under decades of mismanagement by Mao’s authoritarian regime.
What they hoped those generally dirt-poor peasants were going to buy from them I never could figure out.
Unfortunately, years later I found myself embedded in the management of one of those starry-eyed multinational corporations that was hoping to take advantage of the developing Chinese electronics industry. Fresh off our success launching Test & Measurement Europe, they wanted to launch a new publication called Test & Measurement China. Recalling the then-recent calamity ending the Tiananmen Square protests of 1989, I pulled a Nancy Reagan and just said “No.”
I pointed out that the PRC was still run by a totalitarian, authoritarian regime, and that you just couldn’t trust those guys. You never knew when they were going to decide to sacrifice you on the altar of internal politics.
Today, American corporations are seeing the mistakes they made in pursuit of Chinese business, which, like Robert Southey’s chickens, are coming home to roost. In 2015, Chinese Premier Li Keqiang announced the “Made in China 2025” plan to make China the world’s technology leader. It quickly became apparent that Mao’s current successor, Xi Jinping, intends to achieve his goals by building on technology pilfered from western companies who’d naively partnered with Chinese firms.
Now, their only protector is another authoritarian-friendly president, Donald Trump. Remember it was Trump who, following his ill-advised summit with North Korean strongman Kim Jong Un, got caught on video enviously saying: “He speaks, and his people sit up at attention. I want my people to do the same.”
So, now these corporations have to look to an American would-be dictator for protection from an entrenched Chinese dictator. No wonder they find themselves screwed, blued, and tattooed!
Governments are not immune to the PRC’s siren song, either. Pundits are pointing out that the PRC’s vaunted “One Belt, One Road” initiative is likely an example of “debt-trap diplomacy.”
Debt-trap diplomacy is a strategy similar to organized crime’s loan-shark operations. An unscrupulous cash-rich organization, the loan shark, offers funds to a cash-strapped individual, such as an ambitious entrepreneur, in a deal that seems too good to be true. It IS too good to be true, because the deal comes in the form of a loan at terms that nearly guarantee the debtor will default. The shark then offers to write off the debt in exchange for the debtor’s participation in some unsavory scheme, such as money laundering.
In the debt-trap diplomacy version, the PRC stands in the place of the loan shark while some emerging-economy nation, such as, say, Malaysia, accepts the unsupportable debt. In the PRC/Malaysia case, the unsavory scheme is helping support China’s imperial ambitions in the western Pacific.
Earlier this month, Malaysia wisely backed out of the deal.
It’s not just the post-Maoist PRC that makes a dangerous place for western corporations to do business. Authoritarians all over the world treat people like Heart’s “Barracuda”: they suck you in with mesmerizing bright and shiny promises, then leave you twisting in the wind.
Yes, I’ve piled up a whole mess of mixed metaphors here, but I’m trying to drive home a point!
Another example of the traps business people can get into by trying to deal with authoritarians is afforded by Danske Bank’s Estonia branch and its dealings with Vladimir Putin’s Russian kleptocracy. Danske Bank is a Danish financial institution with a pan-European footprint and global ambitions. A recently released internal report, produced for Danske Bank by the Danish law firm Bruun & Hjejle, says that the Estonia branch engaged in “dodgy dealings” with numerous corrupt Russian officials. Basically, the bank set up a scheme to launder money stolen from Russian tax receipts by organized criminals.
The scandal broke in Russia in June of 2007 when dozens of police officers raided the Moscow offices of Hermitage Global, an activist fund focused on global emerging markets. A coverup by Kremlin authorities resulted in the death (while in a Russian prison) of Sergei Leonidovich Magnitsky, a Russian tax accountant who specialized in anti-corruption activities.
Magnitsky’s case became an international cause célèbre. The U.S. Congress and President Barack Obama enacted the Magnitsky Act at the end of 2012, barring, among others, those Russian officials believed to be involved in Magnitsky’s death from entering the United States or using its banking system.
Apparently, the purpose of the infamous Trump Tower meeting of June 9, 2016 was, on the Russian side, an effort to secure repeal of the Magnitsky Act should then-candidate Trump win the election. The Russians dangled release of stolen emails incriminating Trump-rival Hillary Clinton as bait. This activity started the whole Mueller Investigation, which has so far resulted in dozens of indictments for federal crimes, and at least eight guilty pleas or convictions.
The latest business strung up in this mega-scandal was the whole corrupt banking system of Cyprus, whose laundering of Russian oligarchs’ money amounted to over $20B.
The moral of this story is: Don’t do business with bad guys, no matter how good they make the deal look.
Last week I spent a lot of space yammering on about how to tell fake news from the real stuff. I made a big point about how real news organizations don’t allow editorializing in news stories. I included an example of a New York Times op-ed piece that was decidedly not a news story.
On the other hand, last night I growled at my TV screen when I heard a CNN commentator say that she’d been taught that journalists must have opinions and should voice them. I growled because her statement could be construed to mean something anathema to journalistic ethics. I’m afraid way too many TV journalists may be confused about this issue. Certainly too many news consumers are confused!
It’s easy to get confused. For example, I got myself in trouble some years ago in a discussion over dinner and drinks with Andy Wilson, Founding Editor at Vision Systems Design, over a related issue that is less important to political-news reporting, but is crucial for business-to-business (B2B) journalism: the role of advertising in editorial considerations.
Andy insisted upon strictly ignoring advertiser needs when making editorial decisions. I advocated a more nuanced approach. I said that ignoring advertiser needs and desires would lead to cutting oneself off from our most important source of technology-trends information.
I’m not going to delve too deeply into that subject because it has only peripheral significance for this blog posting. The overlap with news reporting is that both activities involve dealing with biased sources.
My disagreement with Andy arose from my veteran-project-manager’s sensitivity to all stakeholders in any activity. In the B2B case, editors have several ways of enforcing journalistic discipline without biting the hand that feeds us. I was especially sensitive to the issue because I specialized in case studies, which necessarily discuss technology embodied in commercial products. Basically, I insisted on limiting actual product mentions to one per story, and on suppressing any claims that the mentioned product was the only possible way to access the embodied technology. In essence, I policed the stories I wrote or edited to avoid the “buy our stuff” messages that advertisers love and that send chills down Andy’s (and my) spine.
In the news-media realm, journalists need to police their writing for “buy our ideas” messages in news stories. “Just the facts, ma’am” needs to be the goal for news. Expressing editorial opinions in news stories is dangerous. That’s when the lines between fake news and real news get blurry.
Those lines need to be sharp to help news consumers judge the … information … they’re being fed.
Perhaps “information” isn’t exactly the right word.
It might be best to start with the distinction between “information” and “data.”
The distinction is not always clear in a general setting. It is, however, stark in the world of science, which is where I originally came from.
What comes into our brains from the outside world is “data.” It’s facts and figures. Contrary to what many people imagine, “data” is devoid of meaning. Scientists often refer to it as “raw data” to emphasize this characteristic.
There is nothing actionable in raw data. The observation that “the sky is blue” can’t even tell you if the sky was blue yesterday, or how likely it is to be blue tomorrow. It just says: “the sky is blue.” End of story.
Turning “data” into “information” involves combining it with other, related data, and making inferences about or deductions from patterns perceivable in the resulting superset. The process is called “interpretation,” and it’s the second step in turning data into knowledge. It’s what our brains are good for.
So, does this mean that news reporters are to be empty-headed recorders of raw facts?
Not by a long shot!
The CNN commentator’s point was that reporters are far from empty-headed. While learning their trade, they develop ways to, for example, tell when some data source is lying to them.
In the hard sciences it’s called “instrumental error,” and experimental scientists (as I was) spend careers detecting and eliminating it.
Similarly, what a reporter does when faced with a lying source is the hard part of news reporting. Do you say, “This source is unreliable” and suppress what they told you? Do you report what they said along with a comment that they’re a lying so-and-so who shouldn’t be believed? Certainly, you try to find another source who tells you something you can rely on. But, what if the second source is lying, too?
That’s why we news consumers have to rely on professionals who actually care about the truth for our news.
On the other hand, nobody goes to news outlets for just raw data. We want something we can use. We want something actionable.
Most of us have neither the time nor the tools to interpret all the drivel we’re faced with. Even if we happen to be able to work it out for ourselves, we could always use some help, even if just to corroborate our own conclusions.
Who better to help us interpret the data (news) and glean actionable opinions from it than those journalists who’ve been spending their careers listening to the crap newsmakers want to feed us?
That’s where commentators come in. The difference between an editor and a reporter is that the editor has enough background and experience to interpret the raw data and turn it into actionable information.
That is: opinion you can use to make a decision. Like, maybe, who to vote for.
People with the chops to interpret news and make comments about it are called “commentators.”
When I was looking to hire what we used to call a “Technical Editor” for Test & Measurement World, I specifically looked for someone with a technical degree and experience developing the technology I wanted that person to cover. So, for example, when I was looking for someone to cover advances in testing of electronics for the telecommunications industry, I went looking for a telecommunications engineer. I figured that if I found one who could also tell a story, I could train them to be a journalist.
That brings us back to the CNN commentator who thought she should have opinions.
The relevant word here is “commentator.”
She’s not just a reporter. To be a commentator, she supposedly has access to the best available “data” and enough background to skillfully interpret it. So, what she was saying is true for a commentator rather than just a reporter.
Howsomever, ya can’t just give a conclusion without showing how the facts lead to it.
Let’s look at how I assemble a post for this blog as an example of what you should look for in a reliable op-ed piece.
Obviously, I look for a subject about which I feel I have something worthwhile to say. Specifically, I look for what I call the “take-home lesson” on which I base every piece of blather I write.
The “take-home lesson” is the basic point I want my reader to remember. Come Thursday next you won’t remember every word or even every point I make in this column. You’re (hopefully) going to remember some concept from it that you should be able to summarize in one or two sentences. It may be the “call to action” my eighth-grade English teacher, Miss Langley, told me to look for in every well-written editorial. Or, it could be just some idea, such as “Racism sucks,” that I want my reader to believe.
Whatever it is, it’s what I want the reader to “take home” from my writing. All the rest is just stuff I use to convince the reader to buy into the “take-home lesson.”
Usually, I start off by providing the reader with some context in which to fit what I have to say. It’s there so that the reader and I start off on the same page. This is important to help the reader fit what I have to say into the knowledge pattern of their own mind. (I hope that makes sense!)
After setting the context, I provide the facts that I have available from which to draw my conclusion. The conclusion will be, of course, the “take-home lesson.”
I can’t be sure that my readers will have the facts already, so I provide links to what I consider reliable outside sources. Sometimes I provide primary sources, but more often they’re secondary sources.
Primary sources for, say, a biographical sketch of Thomas Edison would be diary pages or financial records, which few readers would have immediate access to.
A secondary source might be a well-researched entry on, say, the Biography.com website, which the reader can easily get access to and which can, in turn, provide links to useful primary sources.
In any case, I try to provide sources for each piece of data on which I base my conclusion.
Then, I’ll outline the logical path that leads from the data pattern to my conclusion. While the reader should have no need to dispute the “data,” he or she should look very carefully to see whether my logic makes sense. Does it lead inevitably from the data to my conclusion?
Finally, I’ll clearly state the conclusion.
In general, every consumer of ideas should look for this same pattern in every information source they use.
Every morning we’d gather ’round the desk of our compatriot Ron Held, builder of stellar-interior computer models extraordinaire, to hear him read “what fits” from the day’s issue of The New York Times. Ron had noticed that, when taken out of context, much of what is written in newspapers sounds hilarious. He had a deadpan way of reading this stuff out loud that only emphasized the effect. He’d modified the Times’ slogan, “All the news that’s fit to print,” into “All the news that fits.”
Whenever I hear unmitigated garbage coming out of supposed news outlets, I think of Ron’s “All the news that fits.”
These days, I’m on a kick about fake news and how to spot it. It isn’t easy because it’s become so pervasive that it becomes almost believable. This goes along with my lifelong philosophical study that I call: “How do we know what we think we know?”
Early on I developed what I call my “BS detector.” It’s a mental alarm bell that goes off whenever someone tries to convince me of something that’s unbelievable.
It’s not perfect. It’s been wrong on a whole lot of occasions.
For example, back in the early 1970s somebody told me about something called “superconductivity,” where certain materials, when cooled to near absolute zero, lost all electrical resistance. My first reaction, based on the proposition that if something sounds too good to be true, it’s not, was: “Yeah, and if you believe that I’ve got this bridge between Manhattan and Brooklyn to sell you.”
After seeing a few experiments and practical demonstrations, my BS detector stopped going off and I was able to listen to explanations about Cooper pairs and electron-phonon interactions, and became convinced. I eventually learned that nearly everything involving quantum theory sounds like BS until you get to understand it.
Another time I bought into the notion that interferon would develop into a useful AIDS treatment. Being a monogamous heterosexual, I didn’t personally worry about AIDS, but I had many friends who did, so I cared. I cared enough to pay attention, and watch as the treatment just didn’t develop.
Most of the time, however, my BS detector works quite well, thank you, and I’ve spent a lot of time trying to divine what sets it off, and what a person can do to separate the grains of truth from the BS pile.
Consider Your Source(s)
There’s an old saying: “Figures don’t lie, but liars can figure.”
First off, never believe anybody whom you’ve caught lying to you in the past. For example, Donald Trump has been caught lying numerous times in the past. I know. I’ve seen video of him mouthing words that I’ve known at the time were incorrect. It’s happened so often that my BS detector goes off so loudly whenever he opens his mouth that the noise drowns out what he’s trying to say.
I had the same problem with Bill Clinton when he was President (he seems to have gotten better, now, but I’m still wary).
Nixon was pretty bad, too.
There’s a lot of noise these days about “reliable sources.” But, who’s a reliable source? You can’t take their word for it. It’s like the old riddle of the lying Blackfoot and the truthful Whitefoot.
Unfortunately, in the real world nobody always lies or always tells the truth, even Donald Trump. So, they can’t be unmasked by calling on the riddle’s answer. If you’re unfamiliar with the riddle, look it up.
The best thing to do is try to figure out what the source’s game is. Everyone in the communications business is selling something. It’s up to you to figure out what they’re selling and whether you want to buy it.
News is information collected on a global scale, and it’s done by news organizations. The New York Times is one such organization. Another is The Wall Street Journal, which is a subsidiary of Dow Jones & Company, a division of News Corp.
So, basically, what a legitimate news organization is selling is information. If you get a whiff that they’re selling anything else, like racism, or anarchy, or Donald Trump, they aren’t a real news organization.
The structure of a news organization is:
Publisher: An individual or group of individuals generally responsible for running the business. The publisher manages the Circulation, Advertising, Production, and Editorial departments. The Publisher’s job is to try to sell what the news organization has to sell (that is, information) at a profit.
Circulation: A group of individuals responsible for recruiting subscribers and promoting sales of individual copies of the news organization’s output.
Advertising: A group of individuals under the direct supervision of the Publisher who are responsible for selling advertising space to individuals and businesses who want to present their own messages to people who consume the news organization’s output.
Production: A group of individuals responsible for packaging the information gathered by the Editorial department into physical form and distributing it to consumers.
Editorial: A group of trained journalists under a Chief Editor responsible for gathering and qualifying information the news organization will distribute to consumers.
Note the phrase “and qualifying” in the entry on the Editorial department. Every publication has its self-selected editorial focus. For a publication like The Wall Street Journal, whose editorial focus is business news, every story has to fit that editorial focus. A story that, say, affects how readers select stocks to buy or sell is in their editorial focus. A story that doesn’t isn’t.
A story about why Donald Trump lies doesn’t belong in The Wall Street Journal. It belongs in Psychology Today.
That’s why editors and reporters have to be “trained journalists.” You can’t hire just anybody off the street, slap a fedora on their head and call them a “reporter.” That never even worked in the movies. Journalism is a profession and journalists require training. They’re also expected to behave in a manner consistent with journalistic ethics.
One of those ethical principles is that you don’t “editorialize” in news stories. That means you gather facts and report those facts. You don’t distort facts to fit your personal opinions. You for sure don’t make up facts out of thin air just ’cause you’d like it to be so.
Taking the example of The Wall Street Journal again, a reporter handed some fact doesn’t know what readers will do with that fact; different readers will act on it in different ways. If a reporter makes something up, and readers make business decisions based on that fiction, bad results will follow. Business people don’t like that. They’d stop buying copies of the newspaper. Circulation would collapse. Advertisers would abandon it.
Soon, no more The Wall Street Journal.
It’s the Chief Editor’s job to make sure reporters seek out information useful to their readers, don’t editorialize, and check their facts to make sure nobody’s been lying to them. Thus, the Chief Editor is the main gatekeeper that consumers rely on to keep out fake news.
That, by the way, is the fatal flaw in social media as a news source: there’s no Chief Editor.
One final note: A lot of people today buy into the cynical belief that this vision of journalism is naive. As a veteran journalist I can tell you that it’s NOT. If you think real journalism doesn’t work this way, you’re living in a Trumpian alternate reality.
Bang your head on the nearest wall hoping to knock some sense into it!
So, for you, the news consumer, to guard against fake news, your first job is to figure out if your source’s Chief Editor is trustworthy.
Unfortunately, it’s very seldom that most people get to know a news source’s Chief Editor well enough to know whether to trust him or her.
Comparison Shopping for Ideas
That’s why you don’t take the word of just one source. You comparison shop for ideas the same way you do for groceries, or anything else. You go to different stores. You check their prices. You look at sell-by dates. You sniff the air for stale aromas. You do the same thing in the marketplace for ideas.
If you check three to five news outlets and they present the same facts, you gotta figure they’re all reporting the facts that were given to them. If one is out of whack compared to the others, that’s a bad sign.
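For the programmatically inclined, this corroboration heuristic can be sketched in a few lines of code. The Python snippet below is purely illustrative (the outlet names and claims are hypothetical, not real reporting): it flags any claim carried by fewer than half of the outlets checked.

```python
from collections import Counter

def flag_outliers(reports):
    """Given a mapping of outlet -> set of claimed facts, return
    the claims carried by fewer than half of the outlets."""
    counts = Counter()
    for claims in reports.values():
        counts.update(claims)  # tally how many outlets carry each claim
    threshold = len(reports) / 2
    return {claim for claim, n in counts.items() if n < threshold}

# Hypothetical outlets and claims, for illustration only
reports = {
    "Outlet A": {"storm made landfall Tuesday", "16 confirmed deaths"},
    "Outlet B": {"storm made landfall Tuesday", "16 confirmed deaths"},
    "Outlet C": {"storm made landfall Tuesday", "5,000 deaths"},
}
print(flag_outliers(reports))
```

Here the death-toll claim carried by only one of the three outlets gets flagged for extra scrutiny. That doesn’t make it false, just suspect until corroborated.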
Of course, you have to consider the sources they use as well. Remember that everyone providing information to a news organization has something to sell. You need to make sure they’re not providing BS to the news organization to hype sales of their particular product. That’s why a credible news organization will always tell you who their sources are for every fact.
For example, a recent story in the news (from several outlets) was that The New York Times published an opinion-editorial piece (NOT a news story, by the way) saying very unflattering things about how President Trump was managing the Executive Branch. A very big red flag went up because the op-ed was signed “Anonymous.”
That red flag was minimized by the paper’s Chief Editor, Dean Baquet, assuring us all that he, at least, knew who the author was, and that it was a very high official who knew what they were talking about. If we believe him, we figure we’re likely dealing with a credible source.
Our confidence in the op-ed’s credibility was also bolstered by the fact that the piece included a lot of information that was available from other sources that corroborated it. The only new piece of information, that there was a faction within the White House that was acting to thwart the President’s worst impulses, fitted seamlessly with the verifiable information. So, we tend to believe it.
As another example, during the 1990s I was watching the scientific literature for reports of climate-change research results. I’d already seen signs that there was a problem with this particular branch of science. It had become too political, and the politicians were selling policies based on questionable results. I noticed that studies generally were reporting inconclusive results, but each article ended with a concluding paragraph warning of the dangers of human-induced climate change that did not fit seamlessly with the research results reported in the article. So, I tended to disbelieve the final conclusions.
Does It Make Sense to You?
This is where we all stumble when ferreting out fake news. If you’re pre-programmed to accept some idea, it won’t set off your BS detector. It won’t disagree with the other sources you’ve chosen to trust. It will seem reasonable to you. It will make sense, whether it’s right or wrong.
That’s a situation we all have to face, and the only antidote is to do an experiment.
Experiments are great! They’re our way of asking Mommy Nature to set us on the right path. And, if we ask often enough, and carefully enough, she will.
That’s how I learned the reality of superconductivity against my inbred bias. That’s how I learned how naive my faith in interferon had been.
With those cautions, let’s look at how we know what we think we know.
It starts with our parents. We start out truly impressed by our parents’ physical and intellectual capabilities. After all, they can walk! They can talk! They can (in some cases) do arithmetic!
Parents have a natural drive to stuff everything they know into our little heads, and we have a natural drive to suck it all in. It’s only later that we notice that not everyone agrees with our parents, and they aren’t necessarily the smartest beings on the planet. That’s when comparison shopping for ideas begins. Eventually, we develop our own ideas that fit our personalities.
Along the way, Mommy Nature has provided a guiding hand to either confirm or discredit our developing ideas. If we’re not pathological, we end up with a more or less reliable feel for what makes sense.
For example, almost everybody has a deep-seated conviction that torturing pets is wrong. We’ve all done bad things to pets, usually unintentionally, and found it made us feel sad. We don’t want to do it again.
So, if somebody advocates perpetrating cruelty to animals, most of us recoil. We’d have to be given a darn good reason to do it. Like, being told “If you don’t shoot that squirrel, there’ll be no dinner tonight.”
That would do it.
Our brains are full up with all kinds of ideas like that. When somebody presents us with a novel idea, or a report of something they suggest is a fact, our first line of defense is whether it makes sense to us.
If it’s unbelievable, it’s probably not true.
It could still be true, since a lot of unbelievable stuff actually happens, but it’s probably not. We can note it pending confirmation by other sources or some kind of experimental result (like looking to see the actual bloody mess).
The really naive attitude about news, which I used to hear a lot fifty or sixty years ago, is: “If it’s in print, it’s gotta be true.”
Reporters, editors and publishers are human. They make mistakes. And catching those mistakes follows the 95:5 rule. That is, you’ll expend 95% of your effort to catch the last 5% of the errors. It’s also called “The Law of Diminishing Returns,” and it’s how we know to quit obsessing.
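To see why diminishing returns bite, here’s a toy model in Python. The 50% per-pass catch rate is an assumption chosen purely for illustration: if each editing pass catches a fixed fraction of the errors still present, the first pass removes half of them while the seventh barely removes one.

```python
def errors_remaining(initial_errors, catch_rate, passes):
    """Toy model: each editing pass catches a fixed fraction
    (catch_rate) of the errors still present."""
    remaining = initial_errors
    for _ in range(passes):
        remaining *= (1.0 - catch_rate)
    return remaining

# Illustrative numbers: 100 initial errors, 50% caught per pass
for p in range(1, 8):
    left = errors_remaining(100, 0.5, p)
    print(f"after pass {p}: {left:.1f} errors remain")
```

Under these made-up numbers, the first pass catches 50 errors and the seventh catches fewer than one, which is roughly why deadline-driven newsrooms have to stop short of perfection.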
The way this works for the news business is that news output involves a lot of information. I’m not going to waste space here estimating the amount of information (in bits) in an average newspaper, but let’s just say it’s 1.3 s**tloads!
It’s a lot. Getting it all right, then getting it all corroborated, then getting it all fact checked (a different, and tougher, job than just corroboration), then putting it into words that convey that information to readers, is an enormous task, especially when a deadline is involved. It’s why the classic image of a journalist is some frazzled guy wearing a fedora pushed back on his head, suitcoat off, sleeves rolled up and tie loosened, maniacally tapping at a typewriter keyboard.
So, don’t expect everything you read to be right (or even spelled right).
The easiest things to get right are basic facts: the Who, What, Where, and When. Those are the first four Ws of news reporting. The fifth one, Why, is by far the hardest ’cause you gotta get inside someone’s head.
Even the first four can be tough, though. How many deaths due to Hurricane Maria in Puerto Rico? Estimates have run from 16 to nearly 3,000 depending on who’s doing the estimating, what axes they have to grind, and how they made the estimate. Nobody was ever able to collect the bodies in one place to count them. It’s unlikely that they ever found all the bodies to collect for the count!
So, the last part of judging whether news is fake is recognizing that nobody gets it entirely right. Just because you see it in print doesn’t make it fact. And, just because somebody got it wrong, doesn’t make them a liar.
They could get one thing wrong, and most everything else right. In fact, they could get 5 things wrong, and 95 things right!
What you look for is folks who make the effort to try to get things right. If somebody is really trying, they’ll make some mistakes, but they’ll own up to them. They’ll say something like: “Yesterday we told you that there were 16 deaths, but today we have better information and the death toll is up to 2,975.”
Anybody who won’t admit they’re ever wrong is a liar, and whatever they say is most likely fake news.
14 September 2018 – This is an extra edition of my usual weekly post on this blog. I’m writing it to tell you about an online event called “Open Future” put on by The Economist weekly newsmagazine and to encourage you to participate by visiting the URL www.economist.com/openfuture. The event is scheduled for tomorrow, 15 September 2018, but the website is already up, and some parts of the event are already live.
The newsmagazine’s Editor-in-Chief, Zanny Minton Beddoes, describes the event as “an initiative to remake the case for liberal values and policies in the 21st century.”
Now, don’t get put off by the use of the word “liberal.” These folks are Brits and, as I’ve often quipped: “The British invented the language, but they still can’t spell it or pronounce it.” They also sometimes use words to mean different things.
What The Economist calls “liberal” is not what we in the U.S. usually think of as liberal. You can get a clear idea of what The Economist refers to as “liberal” by perusing the list of seminal works in their article “The literature of liberalism.”
We in the U.S. are confused by typically hearing the word “liberal” used to describe leftist policies classed as Liberal with a capital L. Big-L Liberals have co-opted the word to refer to the agenda of the Democratic Party, which, as I’ll explain below, isn’t quite what The Economist refers to as small-L liberal.
The Economist’s idea of liberal is more like what we usually call “libertarian.” Libertarians tend to take some ideas usually reserved for the left, and some from the right. Their main tenet, however, which is best expressed as “think for yourself,” is anathema to both ends of the political spectrum.
But, those of us in the habit of thinking for ourselves like it.
Unfortunately (or maybe not) small-L libertarianism is in danger of being similarly co-opted in the U.S. by the current big-L Libertarian Party. But, that’s a rant for another day!
What’s more important today is understanding a different way of dividing up political ideologies.
Left vs. Right
Two-hundred twenty-nine years ago, the terms “The Left” and “The Right” entered political discourse as a means of classifying political parties along ideological lines. They arose at the start of the French Revolution, when delegates to the National Constituent Assembly still included foes of the revolution as well as its supporters.
As the ancient Greek proverb says, “birds of a feather flock together,” so supporters of revolution tended to pick seats near each other, and those against it sat together as well. Those supporting the revolution happened to sit on the left side of the hall, so those of more conservative bent gathered on the right. The terminology became institutionalized, so we now divide the political spectrum between a liberal/progressive Left and a conservative Right.
While the Left/Right-dichotomy works for describing what happened during the first meeting of the French National Constituent Assembly, it poorly reflects the concepts humans actually use to manage governments. In the real world, there is an equally simple, but far more relevant way of dividing up political views: authoritarianism versus democracy.
Authoritarians are all those people (and there’s a whole bunch of them) who want to tell everybody else what to do. It includes most religious leaders, most alpha males (and females), and, in fact, just about everyone who wants to lead anything from teenage gangs to the U.N. General Assembly. Patriarchal and matriarchal families are run on authoritarian principles.
Experience, by the way, shows that authoritarianism is a lousy way to run a railroad, despite the fact that virtually every business on the Planet is organized that way. Management consultants and organizational-behavior researchers pretty much universally agree that spreading decision making throughout the organization, even down to the lowest levels, makes for the most robust, healthiest companies.
If you want your factory’s floors to be clean, make sure the janitors have a say in what mops and buckets to use!
The opposite of authoritarianism is democracy. Small-D democrats don’t tell people what to do; they ask them what they (the people) want to do, and try to make it possible for them to do it. It takes a lot more savvy to balance all the conflicting desires of all those people than to petulantly insist on things being done your way, but, if you can make it work, you get better results.
Now, political discourse based on the Left/Right dichotomy is simple and easy for political parties to espouse. Big-D Democrats have a laundry list of causes they champion. Similarly, Republicans have a laundry list of what they want to promote.
Those lists, however, absolutely do not fit the democracy/authoritarianism picture. And, there’s no reason to expect them to.
Politicians, generally, want to tell other people what to do. If they didn’t, they’d go do something else. That’s the very nature of politics. Thus, by and large, politicians are authoritarians.
They dress their plans up in terms that sound like democracy because most people don’t like being told what to do. In America, we’ve institutionalized the notion that people don’t like being told what to do, so bald-faced authoritarianism is a non-starter.
It started in England with the Magna Carta, in which the English nobles told King John “enough is enough.”
Yeah, King John is the same guy as the “Prince John” who was cast as the arch-enemy of fictional hero Robin Hood. See, we don’t like authoritarians, and generally cast them as the villains in our favorite stories.
Not wanting to be told what to do was imported to North America by the English colonists, who extended the concept (eventually) to everyone regardless of socio-economic status. From there, it was picked up by the French revolutionaries, then spread throughout Europe and parts East.
So, generally, nobody wants authoritarians telling them what to do, which is why they have to point guns at us to get us to do it.
The fact that most people would simultaneously like to be the authoritarian pointing the gun and doing the telling, while a fair fraction (probably about 25%) aren’t smart enough to see the incongruity involved, gives fascist populists a ready supply of people willing to hold the guns. Nazi Germany worked (for a while) because of this phenomenon. With a population north of 60 million, those statistics gave Hitler some 15 million gun holders to work with.
In the modern U.S.A., with a population over 300 million, the same statistical analysis gives modern fascists 75 million potential recruits. And, they’re walking around with more than their fair share of the guns!
Luckily, the rest of us have guns, too.
More importantly, we all have votes.
So, what’s an American who really doesn’t want any authoritarian telling them what to do … to do?
The first thing to do is open your eyes to the flim-flam represented by the Left/Right dichotomy. As long as you buy that drivel, you’ll never see what’s really going on. It’s set up as a sporting event where you’re required to back one of two teams: the Reds or the Blues.
Either one you pick, you’ll end up being told what to do by either the Red-team authoritarians or the Blue-team authoritarians. Because it’s treated as a sporting event, the object is to win, and there’s nothing at stake beyond winning. There isn’t even a trophy!
The next thing to do is look for people who would like to help, but don’t actually want to tell anyone what to do. When you find them, talk them into running for office.
Since you’ve picked on people who don’t really want to tell other people what to do, you’ll have to promise you won’t make them do it forever. After a while, you promise, you’ll let them off the hook so they can go do something else. That means putting term limits on elected officials.
The authoritarians, who get their jollies by telling other people what to do, won’t like that. The ones who just want to help out will be happy they can do their part for a while, then go home.
12 September 2018 – The Front Page was an hilarious one-set stage play supposedly taking place over a single night in the dingy press room of Chicago’s Criminal Courts Building overlooking the gallows behind the Cook County Jail. I’m not going to synopsize the plot because the Wikipedia entry cited above does such an excellent job it’s better for you to follow the link and read it yourself.
First performed in 1928, the play has been revived several times and suffered countless adaptations to other media. It’s notable for the fact that the main character, Hildy Johnson, originally written as a male part, is even more interesting as a female. That says something important, but I don’t know what.
By the way, I insist that the very best adaptation is Howard Hawks’ 1940 tour de force film entitled His Girl Friday starring Rosalind Russell as Hildy Johnson, and Cary Grant as the other main character Walter Burns. Burns is Johnson’s boss and ex-husband who uses various subterfuges to prevent Hildy from quitting her job and marrying an insurance salesman.
That’s not what I want to talk about today, though. What’s important for this blog posting is part of the play’s backstory. It’s important because it can help provide context for the entire social media industry, which is becoming so important for American society right now.
In that backstory, a critical supporting character is one Earl Williams, who’s a mousey little man convicted of murdering a policeman and sentenced to be executed the following morning right outside the press-room window. During the course of the play, it comes to light that Williams, confused by listening to a soapbox demagogue speaking in a public park, accidentally shot the policeman and was subsequently railroaded in court by a corrupt sheriff who wanted to use his execution to help get out the black(!?) vote for his re-election campaign.
What publicly executing a confused communist sympathizer has to do with motivating black voters I still fail to understand, but it makes as much sense as anything else the sheriff says or does.
This plot has so many twists and turns paralleling issues still resonating today that it’s ridiculous. That’s a large part of the play’s fun!
Anyway, what I want you to focus on right now is the subtle point that Williams was confused by listening to a soapbox demagogue.
Soapbox demagogues were a fixture in pre-Internet political discourse. The U.S. Constitution’s First Amendment explicitly gives private citizens the right to peaceably assemble in public places. For example, during the late 1960s a typical summer Sunday afternoon anywhere in any public park in North America or Europe would see a gathering of anywhere from 10 to 10,000 hippies for an impromptu “Love In,” or “Be In,” or “Happening.” With no structure or set agenda folks would gather to do whatever seemed like a good idea at the time. My surrealist novelette Lilith describes a gathering of angels, said to be “the hippies of the supernatural world,” that was patterned after a typical Hippie Love In.
Similarly, a soapbox demagogue had the right to commandeer a picnic table, bandstand, or discarded soapbox to place himself (at the time they were overwhelmingly male) above the crowd of passersby that he hoped would listen to his discourse on whatever he wanted to talk about.
In the case of Earl Williams’ demagogue, the speech was about “production for use.” The feeble-minded Williams applied that idea to the policeman’s service weapon, with predictable results.
Fast forward to the twenty-first century.
I haven’t been hanging around local parks on Sunday afternoons for a long time, so I don’t know if soapbox demagogues are still out there. I doubt that they are because it’s easier and cheaper to log onto a social-media platform, such as Facebook, to shoot your mouth off before a much larger international audience.
I have browsed social media, however, and see the same sort of drivel that used to spew out of the mouths of soapbox demagogues back in the day.
The point I’m trying to make is that there’s really nothing novel about social media. Being a platform for anyone to say anything to anyone is the same as last-century soapboxes being available for anyone who thinks they have something to say. It’s a prominent right guaranteed in the Bill of Rights. In fact, it’s important enough to be guaranteed in the very first of the Bill’s amendments to the U.S. Constitution.
What is not included, however, is a proscription against anyone ignoring the HECK out of soapbox demagogues! They have the right to talk, but we have the right to not listen.
Back in the day, almost everybody passed by soapbox demagogues without a second glance. We all knew they climbed their soapboxes because it was the only venue they had to voice their opinions.
Preachers had pulpits in front of congregations, so you knew they had something to say that people wanted to hear. News reporters had newspapers people bought because they contained news stories that people wanted to read. Scholars had academic journals that other scholars subscribed to because they printed results of important research. Fiction writers had published novels folks read because they found them entertaining.
The list goes on.
Soapbox demagogues, however, had to stand on an impromptu platform because they didn’t have anything to say worth hearing. The only ones who stopped to listen were those, like the unemployed Earl Williams, who had nothing better to do.
The idea of pretending that social media is any more of a legitimate venue for ideas is just goofy.
Social media are not legitimate media for the exchange of ideas simply because anybody is able to say anything on them, just like a soapbox in a park. Like a soapbox in a park, most of what is said on social media isn’t worth hearing. It’s there because the barrier to entry is essentially nil. That’s why so many purveyors of extremist and divisive rhetoric gravitate to social media platforms. Legitimate media won’t carry them.
Legitimate media organizations have barriers to the entry of lousy ideas. For example, I subscribe to The Economist because of their former Editor-in-Chief, John Micklethwait, who impressed me as an excellent arbiter of ideas (despite having a weird last name). I was very pleased when he transferred over to Bloomberg News, which I consider the only televised outlet for globally significant news. The Wall Street Journal’s business focus forces Editor-in-Chief Matt Murray into a “just the facts, ma’am” stance because every newsworthy event creates both winners and losers in the business community, so content bias is a non-starter.
The common thread among these legitimate-media sources is the existence of an organizational structure focused on maintaining content quality. There are knowledgeable gatekeepers (called “editors”) charged with keeping out bad ideas.
So, when Donald Trump, for example, shows a preference for social media (in his case, Twitter) and an abhorrence of traditional news outlets, he’s telling us his ideas aren’t worth listening to. Legitimate media outlets disparage his views, so he’s forced to use the twenty-first century equivalent of a public-park soapbox: social media.
On social media, he can say anything to anybody because there’s nobody to tell him, “That’s a stupid thing to say. Don’t say it!”
5 September 2018 – A lot of us grew up reading stories by Robert A. Heinlein, who was one of the most Libertarian-leaning of twentieth-century science-fiction writers. When contemplating then-future surveillance technology (which he imagined would be even more intrusive than it actually is today) he wrote (in his 1982 novel Friday): “… there is a moral obligation on each free person to fight back wherever possible … ”
The surveillance technology Heinlein expected to become the most ubiquitous, pervasive, intrusive and literally in-your-face was facial recognition. Back in 1982, he didn’t seem to quite get the picture (pun intended) of how automation, artificial intelligence, and facial recognition could combine to become Big Brother’s all-seeing eyes. Now that we’re at the cusp of that technology being deployed, it’s time for just-us-folks to think about how we should react to it.
An alarm should be set off by an article filed by NBC News journalists Tom Costello and Ethan Sacks on 23 August reporting: “New facial recognition tech catches first impostor at D.C. airport.” Apparently, a Congolese national tried to enter the United States on a flight from Sao Paulo, Brazil through Washington Dulles International Airport on a French passport, and was instantly unmasked by a new facial-recognition system that quickly figured out that his face did not match that of the real holder of the French passport. Authorities figured out he was a Congolese national by finding his real identification papers hidden in his shoe. Why he wanted into the United States; why he tried to use a French passport; and why he was coming in from Brazil are all questions unanswered in the article. The article was about this whiz-bang technology that worked so well on the third day it was deployed.
What makes the story significant is that this time it all worked in real time. Previous applications of facial recognition have worked only after the fact.
The reason this article should set off alarm bells is not that the technology unmasked some jamoke trying to sneak into the country for some unknown, but probably nefarious, purpose. On balance, that was almost certainly (from our viewpoint) a good thing. The alarms should sound, however, to wake us up to think about how we really want to react to this kind of ubiquitous surveillance being deployed.
There’s a whole lot of what each of us does that we want to keep private. While we consider it perfectly innocent, it’s just nobody else’s business.
It’s why the stalls in public bathrooms have doors.
People generally object to living in a fishbowl.
So, ubiquitous deployment of facial recognition technology brings with it some good things, and some that are not so good. That argues for a national public debate aimed at developing a consensus regarding where, when and how facial recognition technology should be used.
Framing the Debate
To start with, recognize that facial recognition is already ubiquitous and natural. It’s why Mommy Nature goes through all kinds of machinations to make our faces more-or-less unique. One of the first things babies learn is how to recognize Mom’s face. How could the cave guys have coordinated their hunting parties if nobody could tell Fred from Manny?
Facial recognition technology just extends our natural talent for recognizing our friends by sight to its use by automated systems.
A white paper entitled Top 4 Modern Use Cases of Biometric Technology crossed my desk recently. It was published by security-software firm iTrue. Their stated purpose is to “take biometric technology to the next level by securing all biometric data onto their blockchain platform.”
Because the white paper is clearly a marketing piece, and it is unsigned by the actual author, I can’t really vouch for the accuracy of its conclusions. For example, the four use cases listed in the paper are likely just the four main applications they envision for their technology. They are, however, a reasonable starting point for our public discussion.
The four use cases cited are:
Border control and airport security
Company payroll and attendance management
Financial data and identity protection
Physical or logical access solutions
This is probably not an exhaustive list, but offhand I can’t think of any important items left off. So, I’ll pretend like it’s a really good, complete list. It may be. It may not be. That should be part of the discussion.
The first item on the list is exactly what the D.C. airport news story was all about, so enough said. That horse has been beaten to death.
About the second item, the white paper says: “Organizations are beginning to invest in biometric technologies to manage employee ID and attendance, since individuals are always carrying their fingerprints, eyes, and faces with them, and these items cannot be lost, stolen, or forgotten.”
In my Mother’s unforgettable New England accent, we say, “Eye-yuh!”
There is, however, one major flaw in the reasoning behind relying on facial recognition. It’s illustrated by the image above. Since time immemorial, folks have worn makeup that could potentially give facial recognition systems ginky fits. They do it for all kinds of innocent reasons. If you’re going to make being able to pass facial recognition tests a prerequisite for doing your job, expect all sorts of pushback.
For example, over the years I’ve known many, many women who wouldn’t want to be seen in public without makeup. What are you going to do? Make your workplace a makeup-free zone? That’ll go over big!
On to number three. How’s your average cosplay enthusiast going to react to not being able to use their credit or debit card to buy gas on their way to an event because the bank’s facial recognition system can’t see through their alien-creature makeup?
Even more seriously, look at the image on the right. This is a transgender person wearing a wig. Really cute, isn’t he/she? Do you think your facial recognition software could tell the difference between him and his sister? Does your ACH vendor want to risk trampling his/her rights?
When we come to the fourth item on the list, suppose a Saudi Arabian woman wants to get into her house? Are you going to require her to remove her burka to get through her front door? What about her right to religious freedom? Or, will this become another situation where she can’t function as a human being without being accompanied by a male guardian? We’re already on thin ice when she wants to enter the country through an airport!
I’ve already half formed my own ideas about these issues. I look forward to participating in the national debate.
Heinlein would, of course, delight in every example where facial recognition could be foiled. In Friday, he gleefully pointed out ” … what takes three hours to put on will come off in fifteen minutes of soap and hot water.”
29 August 2018 – With so much controversy in the news recently surrounding POTUS’ exposure in the Mueller investigation into Russian meddling in the 2016 Presidential election, I’ve been thinking a whole lot about how lawyers look at evidence versus how scientists look at evidence. While I’ve only limited background with legal matters (having an MBA’s exposure to business law), I’ve spent a career teaching and using the scientific method.
While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is very much more complicated. The version they teach you in high school consists of five to seven steps, which pretty much look like this:
I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. It has to do with why the concept is so fuzzy that it’s not really clear how many steps should be included.
It all starts with observation of things that go on in the World. Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question “why.”
Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several explanations that vary from the erudite to the thoroughly bizarre.
Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!
For example, ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. It came from animism, which is the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior. Rocks are hard because their spirits resist being broken. They fall down when released because their spirits somehow like down better than up.
What we now consider the most-correctest explanation, that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other, wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.
Scientists then take all the hypotheses, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results of the experiments.
Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.
It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.
That’s why the last step is to repeat the entire process ad nauseam.
While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.
Not boiling it down to essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, science pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”
The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts by setting up a set of initial conditions, and follows through to the results.
There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.
You do that a bazillion times in a bazillion different ways, and a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.
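This predict-then-compare loop is concrete enough to sketch in code. The sketch below is purely illustrative and assumes a made-up example (timing a dropped object): the `nature` function is a noisy simulation standing in for Mommy Nature, and all the names and tolerances are invented for the sketch, not taken from any real experiment.

```python
import random

def nature(height_m):
    """Stand-in for Mommy Nature: 'measure' the fall time of an object
    dropped from height_m meters, with a little experimental noise."""
    g = 9.81  # m/s^2
    true_time = (2 * height_m / g) ** 0.5
    return true_time + random.gauss(0, 0.01)  # ~10 ms measurement noise

def hypothesis(height_m):
    """Our hypothesis's prediction for the same initial conditions."""
    g = 9.81
    return (2 * height_m / g) ** 0.5

def test_hypothesis(trials=1000, tolerance=0.05):
    """Run the experiment many times: set up initial conditions, let
    'nature' do her thing, think through the hypothesis's prediction,
    and count how often the two match within tolerance."""
    successes = 0
    for _ in range(trials):
        h = random.uniform(1.0, 100.0)  # pick the initial conditions
        observed = nature(h)            # nature's result
        predicted = hypothesis(h)       # the hypothesis's result
        if abs(observed - predicted) <= tolerance:
            successes += 1
    return successes / trials

print(f"success rate: {test_hypothesis():.0%}")
```

A really good hypothesis, like this one, matches nearly every trial; a wrong one fails often, and you throw it out.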
For example, I was once asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.”
I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.
I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He kept believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.
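The story doesn’t say exactly which mistake my colleague made, so as a guess at the “common mistake novice physicists often make,” here is the textbook analysis of Atwood’s machine: two masses $m_1 > m_2$ hung on a light string over a frictionless, massless pulley, with Newton’s Second Law applied to each mass separately.

```latex
% Newton's Second Law for each mass (taking downward positive for m_1,
% upward positive for m_2; T is the string tension):
\begin{align}
  m_1 g - T &= m_1 a \\
  T - m_2 g &= m_2 a
\end{align}
% Adding the two equations eliminates T:
\begin{equation}
  a = \frac{(m_1 - m_2)\,g}{m_1 + m_2}
\end{equation}
```

The classic novice error is to divide the net driving force $(m_1 - m_2)g$ by only one of the masses instead of by the total inertia $m_1 + m_2$, predicting an acceleration Mommy Nature will stubbornly refuse to deliver.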
Anyway, the way this all works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.
That is NOT what our legal system does.
Not by a LONG shot!
The Legal Method
While both scientific and legal thinking methods start from some initial state, and move to some final conclusion, the processes for getting from A to B differ in important ways.
First, while the hypothesis in the scientific method is assumed to be provisional, the legal system is based on coming to a definite explanation of events that is in some sense “correct.” The results of scientific inquiry, on the other hand, are accepted as “probably right, maybe, for now.”
That ain’t good enough in legal matters. The verdict of a criminal trial, for example, has to be true “beyond a reasonable doubt.”
Second, in legal matters the path from the initial conditions (the “charges”) to the results (the “verdict”) is linear. It has one path: through a chain of evidence. There may be multiple bits of evidence, but you can follow them through from a definite start to a definite end.
The third way the legal method differs from the scientific method is what I call the “So, What?” factor.
If your scientific hypothesis is wrong, meaning it gives wrong results, “So, What?”
Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.
Finding that some hypothesis is wrong is no big deal. It just means you don’t have to bother with that dumbass idea, anymore. Alien abductions get relegated to entertainment for the entertainment starved, and real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.
(Leading hypothesis: the distances from there to here are so vast that anybody smart enough to make the trip has better things to do.)
If, on the other hand, your legal verdict is wrong, really bad things happen. Maybe somebody’s life is ruined. Maybe even somebody dies. The penalty for failure in the legal system is severe!
So, the term “air tight” shows up a lot in talking about legal evidence. In science not so much.
For scientists “Gee, it looks like . . . ” is usually as good as it gets.
For judges, they need a whole lot more.
So, as a scientist I can say: “POTUS looks like a career criminal.”
That, however, won’t do the job for, say, Robert Mueller.
In Real Life
Very few of us are either scientists or judges. We live in the real world and have to make real-world decisions. So, which sort of method for coming to conclusions should we use?
In 1983, film director Paul Brickman spent an estimated 6.2 million dollars and 99 minutes’ worth of celluloid (some 142,560 individual images at the standard frame rate of 24 fps) telling us that successful entrepreneurs must be prepared to make decisions based on insufficient information. That means with no guarantee of being right. No guarantee of success.
He, by the way, was right. His movie, Risky Business, grossed $63 million at the box office in the U.S. alone. A return on investment of nearly 1,000%!
There’s an old saying: “A conclusion is that point at which you decide to stop thinking about it.”
It sounds a bit glib, but it actually isn’t. Every experienced businessman, for example, knows that you never have enough information. You are generally forced to make a decision based on incomplete information.
In the real world, making a wrong decision is usually better than making no decision at all. What that means is that, in the real world, if you make a wrong decision you usually get to say “Oops!” and walk it back. If you decide to make no decision, that’s a decision that you can’t walk back.
Oops! I have to walk that statement back.
There are situations where the penalty for the failure of making a wrong decision is severe. For example, we had a cat once, who took exception to a number of changes in our home life. We’d moved. We’d gotten a new dog. We’d adopted another cat. He didn’t like any of that.
I could see from his body language that he was developing a bad attitude. Whereas he had previously been patient when things didn’t go exactly his way, he’d started acting more aggressive. One night, we were startled to hear a screeching of brakes in the road passing our front door. We went out to find that Nick had run across the road and been hit by a car.
Considering the pattern of events, I concluded that Nick had died of PCD. That is, “Poor Cat Decision.” He’d been overly aggressive when deciding whether or not to cross the road.
Making no decision (hesitating before running across the road) would probably have been better than the decision he made to turn on his jets.
That’s the kind of decision where getting it wrong is worse than holding back.
Usually, however, no decision is the worst decision. As the Zen haiku says:
In walking, just walk.
In sitting, just sit.
Above all, don’t wobble.
That argues for using the scientist's method: gather what facts you have, then make a decision. If your hypothesis turns out to be wrong, "So what?"
22 August 2018 – Since the Fifteenth Century, when Johannes Gutenberg invented the technology, printing has given those in authority ginky fits. Until recently it was just dissemination of heretical ideas, à la Giordano Bruno, that raised authoritarian hackles. More recently, 3-D printing, also known as additive manufacturing (AM), has made it possible to make things that folks who like to tell you what you’re allowed to do don’t want you to make.
Don’t get me wrong, AM makes it possible to do a whole lot of good stuff that we only wished we could do before. Like, for example, Jay Leno uses it to replace irreplaceable antique car parts. It’s a really great technology that allows you to make just about anything you can describe in a digital drafting file without the difficulty and mess of hiring a highly skilled fabrication machine shop.
In my years as an experimental physicist, I dealt with fabrication shops a lot! Fabrication shops are collections of craftsmen and the equipment they need to make one-off examples of amazing stuff. It’s generally stuff, however, that nobody’d want to make a second time.
Like the first one of anything.
The reason I specify that AM is for making stuff nobody’d want to make a second time is that it’s slow. For regular machine-shop products, like nuts and bolts, sewing machines, and other stuff folks want a lot of, it’s worth spending a lot of time figuring out fast, efficient, and cheap ways to make lots of them.
Take microchips. The darn things take huge amounts of effort to design, and tens of billions of dollars of equipment to make, but once you’ve figured it all out and set it all up, you can pop the things out like chocolate-chip cookies. You can buy an Intel Celeron G3900 dual-core 2.8 GHz desktop processor online for $36.99 because Intel spread the hideous quantities of up-front cost it took to originally set up the production line over the bazillions of processors that production line can make. It’s called “economy of scale.”
If you’re only gonna make one, or just a few, of the things, there’s no economy of scale.
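The economy-of-scale point above boils down to one formula: a one-time setup cost amortized over the number of units made. Here's a toy illustration in Python; the dollar figures are made-up placeholders for the sake of the example, not Intel's actual costs.

```python
# Toy illustration of economy of scale: a fixed setup cost spread over
# the size of the production run. Dollar figures are hypothetical.

def unit_cost(setup_cost, marginal_cost, units):
    """Per-unit cost when a one-time setup cost is amortized over a run."""
    return setup_cost / units + marginal_cost

setup = 10e9      # hypothetical up-front cost of the production line
marginal = 5.0    # hypothetical incremental cost per chip

for volume in (1, 1_000, 1_000_000, 1_000_000_000):
    print(f"{volume:>13,} units: ${unit_cost(setup, marginal, volume):,.2f} each")
```

At a volume of one, the unit cost is essentially the whole setup cost; at a billion units it collapses toward the marginal cost, which is why the mass-produced processor sells for pocket change and the one-off AM part doesn't.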
But, if you’re only gonna make one, or just a few, you don’t worry too much about how long it takes to make each one, and what it costs is what it costs.
So, you put up with doing it some way that’s slow.
A HUGE advantage of making things with AM is that you don’t have to be all that smart. Once you learn to set the 3-D printer up, you’re all set. You just download the digital computer-aided-manufacturing (CAM) file into the printer’s artificial cerebrum, and it just DOES it. If you can download the file over the Internet, you’ve got it knocked!
Which brings us to what I want to talk about today: 3-D printing of handguns.
Now, I’m not any kind of anti-gun nut. I’ve got half a dozen firearms laying around the house, and have had since I was a little kid. It’s a family thing. I learned sharpshooting when I was around ten years old. Target shooting is a form of meditation for me. A revolver is my preferred weapon if ever I have need of a weapon. I never want to be the one who brought a knife to a gunfight!
That said, I’ve never actually found a need for a handgun. The one time I was present at a gunfight, I hid under the bed. The few times I’ve been threatened with guns, I talked my way out of it. Experience has led me to believe that carrying a gun is the best way to get shot.
I’ve always agreed with my gunsmith cousin that modern guns are beautiful pieces of art. I love their precision and craftsmanship. I appreciate the skill and effort it takes to make them.
The good ones, that is.
That’s the problem with AM-made guns. It takes practically no skill to create them. They’re right up there with the zip guns we talked about when we were kids.
We never made zip guns. We talked about them. We talked about them in the same tones we’d use talking about leeches or cockroaches. Ick!
We’d never make them because they were beneath contempt. They were so crude we wouldn’t dare fire one. Junk like that’s dangerous!
Okay, so you get an idea how I would react to the news that some nut case had published plans online for 3-D printing a handgun. Bad enough to design such a monstrosity, what about the idiot stupid enough to download the plans and make such a thing? Even more unbelievable, what moron would want to fire it?
Have they no regard for their hands? Don’t they like their fingers?
Anyway, not long ago, a press release crossed my desk from Giffords, the gun-safety organization set up by former U.S. Rep. Gabrielle Giffords and her husband after she survived an assassination attempt in Arizona in 2011. The Giffords’ press release praised Senators Richard Blumenthal and Bill Nelson, and Representatives David Cicilline and Seth Moulton for introducing legislation to stop untraceable firearms from further proliferation after the Trump Administration cleared the way for anyone to 3-D-print their own guns.
Why “untraceable” firearms, and what have they got to do with AM?
Downloadable plans for producing guns by the AM technique put firearms in the hands of folks too unskilled and, yes, stupid to make them themselves with ordinary methods. That’s important because it is theoretically possible to produce firearms of surprising sophistication by AM. The first one offered was a cheap plastic thing (depicted above) that would likely be more of a danger to its user than to its intended victim. More recent offerings, however, have been repeating weapons made of more robust materials that might successfully be used to commit crimes.
Ordinary firearm production techniques require a level of skill and investment in equipment that puts them above the radar for federal regulators. The first thing those regulators require is licensing the manufacturer. Units must be serialized, and records must be kept. The system certainly isn’t perfect, but it gives law enforcement a fighting chance when the products are misused.
The old zip guns snuck in under the radar, but they were so crude and dangerous to their users that they were seldom even made. Almost anybody with enough sense to make them had enough sense not to make them! Those who got their hands on the things were more a danger to themselves than to society.
The Trump administration’s recent settlement with Defense Distributed allows the company to relaunch its website, complete with a searchable database of firearm blueprints, so that anyone can create their own fully functional, unserialized firearms using AM technology. It opens the floodgates for dangerous people to make their own untraceable firearms.
That’s just dumb!
The Untraceable Firearms Act would prohibit the manufacture and sale of firearms without serial numbers, require any person or business engaged in the business of selling firearm kits and unfinished receivers to obtain a dealer’s license and conduct background checks on purchasers, and mandate that a person who runs a business putting together firearms or finishing receivers must obtain a manufacturer’s license and put serial numbers on firearms before offering them for sale to consumers.
The 3-D Printing Safety Act would prohibit the online publication of computer-aided design (CAD) files which automatically program a 3-D-printer to produce or complete a firearm. That closes the loophole letting malevolent idiots evade scrutiny by making their own firearms.
We have to join with Giffords in applauding the legislators who introduced these bills.
15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”
The reason the question is not asked enough is that it should be asked whenever anyone sets out to design a system (whether manual or automated) to do anything. The fact that systems ever get built without anyone first asking that question means that it’s not asked enough.
When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!
When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”
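The three-Ds rule above amounts to a simple scoring heuristic, which can be sketched as a toy function. The threshold wording is my own reading of the text, and the "two hits" call for driving (dull and dangerous, but not dirty) follows the analysis in the surrounding paragraphs.

```python
# Toy scoring of the "three Ds" rule: the more of dull, dirty, and
# dangerous a task hits, the stronger the case for automating it.

def automation_case(dull: bool, dirty: bool, dangerous: bool) -> str:
    hits = sum((dull, dirty, dangerous))
    return {0: "leave it to humans",
            1: "at least think about automating",
            2: "very likely ripe for automation",
            3: "slam dunk"}[hits]

# Driving: dull and dangerous, but not especially dirty.
print(automation_case(dull=True, dirty=False, dangerous=True))
```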
Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.
Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people who have been killed in accidents involving driverless cars you can (as of this writing) count on one hand.
I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.
This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.
Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that 16% of Canadians and 26% of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!
Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:
Don’t Automate Something Humans Like to Do!
Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.
In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!
Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.
The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”
That’s pretty definitive!
Their excuse for getting behind the wheel is largely an economic one: seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.
Whether those folks’ claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.
The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.
Yet, development of AV technology is going full steam ahead.
Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic has some appeal. I submit, however, that the appeal is short lived.
For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.
I, for one, don’t want to go there!
Sounds like another example of “More money than brains.”
There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.
Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.
Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He or she has to spend the time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.
Hence the autopilot.
Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.
So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.
1 August 2018 – “With respect to Russia, I agree with the Director of National Intelligence and others,” said Tonya Ugoretz, Director of the Cyber Threat Intelligence Integration Center during a televised panel session on 20 July, “that they are the most aggressive foreign actor that we see in cyberspace.
“For good reason,” she continued, “there is a lot of focus on their activity in 2016 against our election infrastructure and their malign-influence efforts.”
For those who didn’t notice, last week Facebook announced that they shut down thirty-two accounts that the company says “engaged in coordinated political agitation and misinformation efforts ahead of November’s midterm elections, in an echo of Russian activities on the platform during the 2016 U.S. presidential campaign.”
So far, so good.
But, so what?
About a month and a half ago, this column introduced what may be the most important topic to come up so far this century. It’s the dawning of a global cyberwar that some of us see as the Twenty-First Century equivalent of World War I.
Yeah, it’s that big!
Unlike WWI, which started just over one hundred years ago (28 July 1914, to be exact), this new global war necessarily includes a stealth component. It’s already been going on for years, but is only now starting to make headlines.
The reason for this stealth component, and why I say it’s necessary for the conduct of this new World War, is that the most effective offensive weapon being used is disinformation, which doesn’t work too well when the attackee sees it coming.
Previous World Wars involved big, noisy things that made loud bangs, bright flashes of light, and lots of flying shrapnel to cause chaos and confusion among the unfortunate folks they were aimed at. This time, however, the main weapons cause chaos and confusion simply by messing with people’s heads.
Wars, generally, reflect the technologies that dominate the cultures that wage them at the times they occur. During the Bronze Age, wars between Greek, Persian, Egyptian, etc. cultures involved hand-to-hand combat using sharp bits of metal held in the hand (swords, knives) or mounted on sticks (spears, arrows) that combatants cut each other up with. It was a nasty business where combatants generally got up close and personal enough to touch each other while hacking each other to bits.
By the Renaissance, firearms made warfare less personal by allowing combatants to stand off and lob nasty explosive junk at each other. It was less personal, but infinitely more destructive.
In the Nineteenth Century’s Mechanical Age, they graduated to using machines to grind each other up. Grinding people up is also a nasty business, but oh-so-effective for waging War.
The Twentieth Century introduced mass production to the art of warfare with weapons wielded by a few souls that could turn entire cities into junkyards in a few seconds.
That made such an ungodly mess that people started hiring folks to run their countries who actually believed in Jesus Christ’s idea of being nice to people (or at least not exterminating them en masse)! That led to a solid fifty years when the major civilizations stepped back from wars of mass destruction, and only benighted souls who still hadn’t gotten the point went on rampages.
The moral of this story is that countries like to choose up teams for a game called “War,” where the object is for one team to sow chaos and destruction in the other team’s culture. The War goes on until one or the other team cries “Uncle,” and submits to the whims of the “winning” team. Witness the Treaty of Versailles that so humiliated the German people (who cried “Uncle” to end WWI) that they were ready to listen to the %^&* spewed out by Adolf Hitler.
In the past, the method of playing War was to kill as many of the other team’s effectives (warriors) as quickly as possible. That sowed chaos and confusion among them until they (and their support networks; can’t forget their support networks!) lost the will to fight.
Fast forward to the Information Age, and suddenly nobody needs the mess and inconvenience of mass physical destruction when they can get the same result by raising chaos and confusion directly in the other team’s minds through attacks in cyberspace. As any pimply faced teen hacker can tell you, sowing chaos and confusion in cyberspace is cheap, easy, and (for the right kind of personality) a lot of fun.
You can even convince yourself that you’re not really hurting anyone.
Cyberwarfare is so inexpensive that even countries like North Korea, which is basically dead broke and living on handouts from China, can afford to engage in it.
Mike Rogers (former Chairman of the House Intelligence Committee), speaking at a panel discussion on CSPAN 2 on 20 July 2018 listed four “bad actors” of the nation-state ilk: North Korea, Iran, Russia, and China. These four form the main opposition to the United States in this cyberwar.
Basically, the lines are drawn between western-style democracies (U.S.A., UK, EU, Canada, Mexico, etc.) and old-fashioned fascistic autocracies (the four mentioned, plus a host of similar regimes, such as Turkey and Syria).
Gee, that looks ideologically like the Allied and Axis powers duking it out during World War II. U.S.A., UK, France, etc. were on the “democracy” side with Nazi Germany, Fascist Italy, and Japan on the “autocracy” side.
It’s telling that the autocratic Stalinist regime in Russia initially came in on the Fascist side. They were fascist buddies with Germany until Hitler stabbed them in the back with Operation Barbarossa, where Germany massively attacked the western Soviet Union along an 1,800-mile front. That betrayal was the reason Stalin flipped to the “democracy” team, even though they were his ideological enemies. Of course, cooperation between Russia and the West lasted about a microsecond past the German surrender on 7 May 1945. After that, the Cold War began.
So, what we’re dealing with now is a reprise of World War II, but mainly with cyberweapons and a somewhat different cast of characters making up the teams.
So far, the reality-show titled “The Trump Administration” has been actively aiding and abetting our adversaries. This problem seems to be isolated in POTUS, with lower levels of the U.S. government largely running around screaming, “The sky is falling!”
Facebook’s action shows that others in the U.S. are taking the threat seriously, despite anti-leadership from the top.
How big a shot across the Russian cyberjuggernaut’s bow Facebook’s action is remains to be seen. A shot across the bow is only made to get the other guy’s attention, and not to have any destructive effect.
Since I’m sure Putin’s minions at least noticed Facebook’s action, it probably did its job of getting their attention. Any actual deterrence, however, is going to depend on who does what next. As with the equivalent conflict of the last Century, expect to see a long slog before anybody cries “Uncle.”
As cybersecurity expert Theresa Payton of Fortalice Solutions says: “Facebook has effectively taken a proactive approach to this issue rather than waiting for Congressional oversight to force their hand. If other platforms and regulators do not take action soon, it will become too late to stop Russian interference.”
We want to commend Facebook for standing up to be counted when it’s needed. We want to encourage them and others in both the public and private sectors to expand their efforts to bolster our cyberdefenses.
We’ve got a World War to fight!
The outlook is not all bad. Looking at the list of our enemies, and comparing them to who we expect to be our friends in this conflict, it is clear that our team has the upper hand with regard to both economic and technological resources.
At least as far as the World War III game is concerned, we’re going to need, for the third time, to put the lie to Leo Durocher’s misquote: “Nice guys finish last.”
In this Third World War game, we hope that we’re the nice guys, but our team has still got to win!