I’m not going to get into the tangled web of potential copyright infringement that Shaw’s posting of Hazlitt’s entire text opens up; I’ve just linked to the most convenient-to-read posting of that particular chapter. If you follow the link and want to buy the book, I’ve given you the appropriate link as well.
The chapter is of immense value apropos the question of whether automation generally reduces the need for human labor, or creates more opportunities for humans to gain useful employment. Specifically, it looks at the results of a number of historic events where Luddites excoriated technology developers for taking away jobs from humans only to have subsequent developments prove them spectacularly wrong.
Hazlitt’s classic book is, not surprisingly for a classic, well documented, authoritative, and extremely readable. I’m not going to pretend to provide an alternative here; instead, I’ll summarize some of the chapter’s examples in the hope that you’ll be intrigued enough to seek out the original.
Before getting on to the examples, let’s start by looking at the history of Luddism. It’s not a new story, really. It probably dates back to just after cave guys first thought of specialization of labor.
That is, sometime in the prehistoric past, some blokes were found to be especially good at doing some things, and the rest of the tribe came up with the idea of letting, say, the best potters make pots for the whole tribe, and everyone else rewarding them for a job well done by, say, giving them choice caribou parts for dinner.
Eventually, they had the best flint knappers make the arrowheads, the best fletchers put the arrowheads on the arrows, the best bowmakers make the bows, and so on. Division of labor into different jobs turned out to be so spectacularly successful that the rugged individualists who pretend to do everything for themselves are now few and far between (and are largely kidding themselves, anyway).
Since then, anyone who comes up with a great way to do anything more efficiently runs the risk of having the folks who spent years learning to do it the old way land on him (or her) like a ton of bricks.
It’s generally a lot easier to throw rocks to drive the innovator away than to adapt to the innovation.
Luddites in the early nineteenth century were organized bands of workers who violently resisted mechanization of factories during the late Industrial Revolution. They were named for an imaginary character, Ned Ludd, supposedly an apprentice who smashed two stocking frames in 1779 and whose name had become emblematic of machine destroyers. The term “Luddite” has come to mean anyone fanatically opposed to deploying advanced technology.
Of course, like religious fundamentalists, they have to pick a point in time to separate “good” technology from the “bad.” Unlike religious fanatics, who generally pick publication of a certain text to be the dividing line, Luddites divide between the technology of their immediate past (with which they are familiar) and anything new or unfamiliar. Thus, it’s a continually moving target.
In either case, the dividing line is fundamentally arbitrary, so the emotional response it provokes is irrational. Irrationality typically carries a warranty of being entirely contrary to the facts.
What Happens Next
Hazlitt points out, “The belief that machines cause unemployment, when held with any logical consistency, leads to preposterous conclusions.” He notes that on the second page of the first chapter of Adam Smith’s seminal book Wealth of Nations, Smith tells us that a workman unacquainted with the use of machinery employed in sewing-pin-making “could scarce make one pin a day, and certainly could not make twenty,” but with the use of the machinery he can make 4,800 pins a day. So, zero-sum game theory would indicate an immediate 99.98 percent unemployment rate in the pin-making industry of 1776.
Did that happen? No, because economics is not a zero-sum game. Sewing pins went from dear to cheap. Since they were now cheap, folks prized them less and discarded them more (when was the last time you bothered to straighten a bent pin?), and more folks could afford to buy them in the first place. That led to an increase in sewing-pin sales as well as sales of things like sewing-patterns and bulk fine fabric sold to amateur sewers, and more employment, not less.
Similar results obtained in the stocking industry when new stocking frames (the original having been invented by William Lee in 1589, but denied a patent by Elizabeth I, who feared its effects on employment in hand-knitting industries) were protested by Luddites as fast as they could be introduced. Before the end of the nineteenth century the stocking industry was employing at least a hundred men for every man it had employed at the beginning of the century.
Another example Hazlitt presents from the Industrial Revolution happened in the cotton-spinning industry. He says: “Arkwright invented his cotton-spinning machinery in 1760. At that time it was estimated that there were in England 5,200 spinners using spinning wheels, and 2,700 weavers—in all, 7,900 persons engaged in the production of cotton textiles. The introduction of Arkwright’s invention was opposed on the ground that it threatened the livelihood of the workers, and the opposition had to be put down by force. Yet in 1787—twenty-seven years after the invention appeared—a parliamentary inquiry showed that the number of persons actually engaged in the spinning and weaving of cotton had risen from 7,900 to 320,000, an increase of 4,400 percent.”
As these examples indicate, improvements in manufacturing efficiency generally lead to reductions in manufacturing cost, which, when passed along to customers, reduce prices with concomitant increases in unit sales. This is the price elasticity of demand curve from Microeconomics 101. It is the reason economics is decidedly not a zero-sum game.
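That elasticity story is easy to sketch numerically. What follows is my own toy illustration, not anything from Hazlitt: assuming a constant-elasticity demand curve with a made-up elasticity of 1.5, halving the price roughly triples unit sales and still grows total revenue.

```python
# Toy constant-elasticity demand curve: Q = k * P**(-e).
# When demand is elastic (e > 1), a price cut raises unit sales more
# than proportionally, so total revenue rises rather than falls.

def units_sold(price, k=1000.0, elasticity=1.5):
    """Units demanded at a given price under constant elasticity."""
    return k * price ** (-elasticity)

before_price, after_price = 10.0, 5.0  # a hypothetical 50% price cut

q_before = units_sold(before_price)
q_after = units_sold(after_price)

print(q_after / q_before)                       # unit sales grow ~2.83x
print((q_after * after_price) /
      (q_before * before_price))                # revenue grows ~1.41x
```

The pin-making and stocking-frame episodes are just this curve playing out at industrial scale: cheaper pins meant far more pins sold, and more jobs, not fewer.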
If we accept economics as not a zero-sum game, predicting what happens when automation makes it possible to produce more stuff with fewer workers becomes a chancy proposition. For example, many economists today blame flat productivity (the amount of stuff produced divided by the number of workers needed to produce it) for lack of wage gains in the face of low unemployment. If that is true, then anything that would help raise productivity (such as automation) should be welcome.
Long experience has taught us that economics is a positive-sum game. In the face of technological advancement, it behooves us to expect positive outcomes while taking measures to ensure that the concomitant economic gains get distributed fairly (whatever that means) throughout society. That is the take-home lesson from the social dislocations that accompanied the technological advancements of the Early Industrial Revolution.
21 November 2018 – Regular readers of this blog know one of my favorite themes is critical thinking about news. Another of my favorite subjects is education. So, they won’t be surprised when I go on a rant about promoting teaching of critical news consumption habits to youngsters.
Apropos of this subject, last week the BBC launched a project entitled “Beyond Fake News,” which aims to “fight back” against fake news with a season of documentaries, special reports and features on the BBC’s international TV, radio and online networks.
In an article by Lucy Mapstone, Press Association Deputy Entertainment Editor for the Independent.ie digital network, entitled “BBC to ‘fight back’ against disinformation with Beyond Fake News project,” Jamie Angus, director of the BBC World Service Group, is quoted as saying: “Poor standards of global media literacy, and the ease with which malicious content can spread unchecked on digital platforms mean there’s never been a greater need for trustworthy news providers to take proactive steps.”
Angus’ quote opens up a Pandora’s box of issues. Among them is the basic question of what constitutes “trustworthy news providers” in the first place. Of course, this is an issue I’ve tackled in previous columns.
Another issue is what would be appropriate “proactive steps.” The BBC’s “Beyond Fake News” project is one example that seems pretty sound. (Sorry if this language seems a little stilted, but I’ve just finished watching a mid-twentieth-century British film, and those folks tended to talk that way. It’ll take me a little while to get over it.)
Another sort of “proactive step” is what I’ve been trying to do in this blog: provide advice about what steps to take to ensure that the news you consume is reliable.
A third is providing rebuttal of specific fake-news stories, which is what pundits on networks like CNN and MSNBC try (with limited success, I might say) to do every day.
The issue I hope to attack in this blog posting is the overarching concern in the first phrase of the Angus quote: “Poor standards of global media literacy, … .”
Global media literacy can only be improved the same way any lack of literacy can be improved, and that is through education.
Improving global media literacy begins with ensuring a high standard of media literacy among teachers. Teachers can only teach what they already know. Thus, a high standard of media literacy must start in college and university academic-education programs.
I’ve spent decades teaching at the college level, so I have plenty of experience, but I’m not actually qualified to teach other teachers how to teach. I’ve only taught technical subjects, and the education required to teach technical subjects centers on the technical subjects themselves. The art of teaching is (or at least was when I was at university) left to the student’s ability to mimic what their teachers did, informal mentoring by fellow teachers, and good-ol’ experience in the classroom. We were basically dumped into the classroom and left to sink or swim. Some swam, while others sank.
That said, I’m not going to try to lay out a program for teaching teachers how to teach media literacy. I’ll confine my remarks to making the case that it needs to be done.
Teaching media literacy to schoolchildren is especially urgent because the media-literacy projects I keep hearing about are aimed at adults “in the wild,” so to speak. That is, they’re aimed at adult citizens who have already completed their educations and are out earning livings, bringing up families, and participating in the political life of society (or ignoring it, as the case may be).
I submit that’s exactly the wrong audience to aim at.
Yes, it’s the audience that is most involved in media consumption. It’s the group of people who most need to be media literate. It is not, however, the group that we need to aim media-literacy education at.
We gotta get ‘em when they’re young!
Like any other academic subject, the best time to teach people good media-consumption habits is before they need to have them, not afterwards. There are multiple reasons for this.
First, children need to develop good habits before they’ve developed bad habits. It saves the dicey stage of having to unlearn old habits before you can learn new ones. Media literacy is no different. Neither is critical thinking.
Most of the so-called “fake news” appeals to folks who’ve never learned to think critically in the first place. They certainly try to think critically, but they’ve never been taught the skills. Of course, those critical-thinking skills are a prerequisite to building good media-consumption habits.
How can you get in the habit of thinking critically about news stories you consume unless you’ve been taught to think critically in the first place? I submit that the two skills are so intertwined that the best strategy is to teach them simultaneously.
And, it is most definitely a habit, like smoking, drinking alcohol, and being polite to pretty girls (or boys). It’s not something you can just tell somebody to do, then expect they’ll do it. They have to do it over and over again until it becomes habitual.
Another reason to promote media literacy among the young is that’s when people are most amenable to instruction. Human children are pre-programmed to try to learn things. That’s what “play” is all about. Acquiring knowledge is not an unpleasant chore for children (unless misguided adults make it so). It’s their job! To ensure that children learn what they need to know to function as adults, Mommy Nature went out of her way to make learning fun, just as she did with everything else humans need to do to survive as a species.
Learning, having sex, and taking care of babies are all things humans have to do to survive, so Mommy Nature puts systems in place to make them fun, and so drives humans to do them.
A third reason we need to teach media literacy to the young is that, like everything else, you’re better off learning it before you need to practice it. Nobody in their right mind teaches a novice how to drive a car by running them out in city traffic. High schools all have big, tortuously laid out parking lots to give novice drivers a safe, challenging place to practice the basic skills of starting, stopping and turning before they have to perform those functions while dealing with fast-moving Chevys coming out of nowhere.
Similarly, you want students to practice deciphering written and verbal communications before asking them to parse a Donald-Trump speech!
The “Call to Action” for this editorial piece is thus, “Agitate for developing good media-consumption habits among schoolchildren along with the traditional Three Rs.” It starts with making the teaching of media literacy part of K-12 teacher education. It also includes teaching critical thinking skills and habits at the same time. Finally, it includes holding K-12 teachers responsible for inculcating good media-consumption habits in their students.
Yes, it’s important to try to bring the current crop of media-illiterate adults up to speed, but it’s more important to promote global media literacy among the young.
6 November 2018 – Below is a press release I received yesterday (Monday, 11/5) evening. It’s of sufficient import and urgent timing that I decided to post it to this blog verbatim.
There’s been a lot of talk about cybersecurity and whether or not the Trump administration is prepared for tomorrow’s midterm elections, but now that we’re down to the wire, former White House CIO and Fortalice Solutions CEO Theresa Payton says it’s time for voters to think about what they can do to make sure their voices are heard.
Theresa’s six cyber tips for voters ahead of midterms:
Don’t zone out while you’re voting. Pay close attention to how you cast your ballot and who you cast your ballot for.
Take your time during the review process, and double-check your vote before you finalize it;
It may sound cliché, but if you see something, say something. If something seems strange, report it to your State Board of Elections immediately;
If you see suspicious social media personas pushing information that’s designed to influence (and maybe even misinform) voters, here’s where you can report it:
Check your voter registration status before you go to the polls. Voters in 37 states and the District of Columbia can register to vote online. Visit vote.org to find out how to check your registration status in your state;
Unless you are a resident of West Virginia or you’re serving overseas in the U.S. military, you cannot vote electronically on your phone. Protect yourself from text messages and email scams that indicate that you can. Knowledge is power.
Finally, trust the system. Yes, it’s flawed. Yes, it’s imperfect. But it’s the bedrock of our democracy. If you stay home or lose trust in the legitimacy of the process, our cyber enemies win.
Theresa is one of the nation’s leading experts in cyber security and IT strategy. She is the CEO of Fortalice Solutions, an industry-leading security consulting company. Under President George W. Bush, she served as the first female chief information officer at the White House, overseeing IT operations for POTUS and his staff. She was named #4 on IFSEC Global’s list of the world’s Top 50 cybersecurity influencers in security & fire 2017. See her profiled in the Washington Post for her role on the 2017 CBS reality show “Hunted” here.
17 October 2018 – Immigration is, by and large, a good thing. It’s not always a good thing, and it carries with it a host of potential problems, but in general immigration is better than its opposite: emigration. And, there are a number of reasons for that.
Immigration is movement toward some place. Emigration is flow away from a place.
Mathematically, population shifts are described by a non-homogeneous second-order partial differential equation. I expect that statement means absolutely nothing to about half the target audience for this blog, and a fair fraction of the others have (like me) forgotten most of what they ever knew (or wanted to know) about such equations. So, I’ll start with a short review of the relevant points of how the things behave.
It’ll help the rest of this blog make a lot more sense, so bear with me.
Basically, the relevant non-homogeneous second-order differential equation is something called the “diffusion equation.” Leaving the detailed math aside, what this equation says is that the rate of migration of just about anything from one place to another depends on the spatial distribution of population density, a mobility factor, and a driving force pushing the population in one direction or the other.
Things (such as people) “diffuse” from places with higher densities to those with lower densities.
That tendency is moderated by a “mobility” factor that expresses how easy it is to get from place to place. It’s hard to walk across a desert, so mobility of people through a desert is low. Similarly, if you build a wall across the migration path, that also reduces mobility. Throwing up all kinds of passport checks, visas and customs inspections also reduces mobility.
Giving people automobiles, buses and airplanes, on the other hand, pushes mobility up by a lot!
But, changing mobility only affects the rate of flow. It doesn’t do anything to change the direction of flow, or to actually stop it. That’s why building walls has never actually worked. It didn’t work for the First Emperor of China. It didn’t work for Hadrian. It hasn’t done much for the Israelis, either.
Direction of flow is controlled by a forcing term. Existence of that forcing term is what makes the equation “non-homogeneous” rather than “homogeneous.” The homogeneous version (without the forcing term) is called the “heat equation” because it models what dumb-old thermal energy does.
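For the mathematically inclined, here’s a toy finite-difference sketch of that diffusion-with-forcing idea. It’s my own illustration, not a real demographic model; the mobility factor D, the forcing term v, and the grid are all made-up numbers chosen just to show the behavior.

```python
# Toy 1-D diffusion-with-forcing sketch.  Population density p on a row
# of cells evolves as
#     dp/dt = D * d2p/dx2  -  v * dp/dx
# where D is the mobility (diffusion) factor and v is the forcing (drift)
# term that biases flow toward the "nicer" end of the line.

def step(p, D=0.1, v=0.05, dx=1.0, dt=1.0):
    """One explicit finite-difference time step; endpoints held at zero."""
    n = len(p)
    new = p[:]
    for i in range(1, n - 1):
        diffusion = D * (p[i + 1] - 2 * p[i] + p[i - 1]) / dx**2
        drift = -v * (p[i + 1] - p[i - 1]) / (2 * dx)
        new[i] = p[i] + dt * (diffusion + drift)
    return new

# Start with everyone bunched in the middle cell; the forcing term
# steadily pushes the whole distribution to the right.
pop = [0.0] * 21
pop[10] = 100.0
for _ in range(50):
    pop = step(pop)

center = sum(i * q for i, q in enumerate(pop)) / sum(pop)
print(round(center, 2))  # center of mass has drifted to the right of cell 10
```

Raising D (better roads, cheaper airfare) spreads the population faster; zeroing it out (walls, deserts) only slows the spread. Flipping the sign of v reverses the direction of flow, which no wall can do.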
Things that can choose what to do (like people), and have feet to help them act on their choices, get to “vote with their feet.” That means they can go where they want, instead of always floating downstream like a dead leaf.
The forcing term largely accounts for the desirability of being in one place instead of another. For example, the United States has a reputation for being a nice place to live. Thus, people try to flock here in droves from places that are not so nice. Thus, there’s a forcing term that points people from other places to the U.S.
That’s the big reason you want to live in a country that has immigration issues, rather than one with emigration issues. The Middle East had a serious emigration problem in 2015. For a number of reasons, it had become a nasty place to live. Folks that lived there wanted out in a big way. So, they voted with their feet.
There was a huge forcing term that pushed a million people from the Middle East to elsewhere, specifically Europe. Europe was considered a much nicer place to be, so people were willing to go through Hell to get there. Thus: emigration from the Middle East, and immigration into Europe.
In another example Nazi occupation in the first half of the twentieth century made most places in Europe distasteful, especially for certain groups of people. So, the forcing term pushed a lot of people across the Atlantic toward America. In 1942 Michael Curtiz made a film about that. It was called Casablanca and is arguably one of the greatest films Humphrey Bogart starred in.
Similarly, for decades Mexico had some serious problems with poverty, organized crime and corruption. Those are things that make a place nasty to live in, so there was a big forcing function pushing people to cross the border into the much nicer United States.
In recent decades, regime change in Mexico cleaned up a lot of the country’s problems, so migration from Mexico to the United States dropped like a stone in the last years of the Obama administration. When Mexico became a nicer place to live, people stopped wanting to move away.
There are two morals to this story:
If you want to cut down on immigration from some other country, help that other country become a nicer place to live. (Conversely, you could turn your own country into a third-world toilet so nobody wants to come in, but that’s not what we want.)
Putting up walls and other barriers to immigration doesn’t stop it. They only slow it down.
We’re All Immigrants
I should subtitle this section, “The Bigot’s Lament.”
There isn’t a bi-manual (two-handed) biped (two-legged) creature anywhere in North or South America who isn’t an immigrant or a descendant of immigrants.
There have been two major influxes of human population in the history (and pre-history) of the Americas. The first occurred near the end of the last Ice Age, and the second occurred during the European Age of Discovery.
Before about ten-thousand years ago, there were horses, wolves, saber-tooth tigers, camels(!), elephants, bison and all sorts of big and little critters running around the Americas, but not a single human being.
(The actual date is controversial, but you get the idea.)
Anatomically modern humans (and there aren’t any others, because everyone else went extinct tens of thousands of years ago) developed in East Africa about 200,000 years ago.
They were, by the way, almost certainly negroes. A fact every racist wants to ignore is that everybody has black ancestors! You can’t hate black people without hating your own forefathers.
More important for this discussion, however, is that every human being in North and South America is descended from somebody who came here from somewhere else. So-called “Native Americans” came here in the Pleistocene Epoch, most likely from Siberia. Most everybody else showed up after Christopher Columbus accidentally fell over North America.
Mostly these later immigrants were imported to fill America’s chronic labor shortage.
America’s labor shortage has persisted since the Spanish conquistadores pretty much wiped out the indigenous people, leaving the Spaniards with hardly anybody to do the manual labor on which their economy depended. Waves of forced and unforced migration have never caught up. We still have a chronic labor shortage.
Immigrants generally don’t come to take jobs from “real” Americans. They come here because there are by-and-large more available jobs than workers.
Currently, natural reductions in birth rates among better educated, better housed, and generally wealthier Americans have left the United States (similar to most developed countries) with the problem that the working-age population is declining while the older, retired population expands. That means we haven’t got enough young squirts to support us old farts in retirement.
The only viable solution is to import more young squirts. That means welcoming working-age immigrants.
The articles discussed here reflect the print version of The Wall Street Journal, rather than the online version. Links to online versions are provided. The publication dates and some of the contents do not match.
10 October 2018 – Baseball is well known to be a game of statistics. Fans pay as much attention to statistical analysis of performance by both players and teams as they do to action on the field. They hope to use those statistics to indicate how teams and individual players are likely to perform in the future. It’s an intellectual exercise that is half the fun of following the sport.
While baseball statistics are quoted to three decimal places, or one part in a thousand, fans know to ignore the last decimal place, be skeptical of the second decimal place, and recognize that even the first decimal place has limited predictive power. It’s not that these statistics are inaccurate or in any sense meaningless, it’s that they describe a situation that seems predictable, yet is full of surprises.
With 18 players in a game at any given time, a complex set of rules, and at least three players and an umpire involved in the outcome of every pitch, a baseball game is a highly chaotic system. What makes it fun is seeing how this system evolves over time. Fans get involved by trying to predict what will happen next, then quickly seeing if their expectations materialize.
The essence of a chaotic system is conditional unpredictability. That is, the predictability of any result drops more-or-less drastically with time. For baseball, the probability of, say, a player maintaining their batting average is fairly high on a weekly basis, drops drastically on a month-to-month basis, and simply can’t be predicted from year to year.
Folks call that “streakiness,” and it’s one of the hallmarks of mathematical chaos.
Since the 1960s, mathematicians have recognized that weather is also chaotic. You can say with certainty what’s happening right here right now. If you make careful observations and take into account what’s happening at nearby locations, you can be fairly certain what’ll happen an hour from now. What will happen a week from now, however, is a crapshoot.
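You can see that kind of conditional unpredictability in any textbook chaotic system. Here’s a sketch using the logistic map as a stand-in for weather (my own illustration; the parameter and starting values are arbitrary): two trajectories that start a millionth apart agree for a while, then diverge completely.

```python
# Conditional unpredictability in the logistic map, a textbook chaotic
# system.  A one-part-per-million difference in starting conditions is
# invisible at first, then blows up to order one.

r = 3.9  # logistic-map parameter, chosen in the chaotic regime

def trajectory(x0, steps):
    """Iterate x -> r*x*(1-x) and return the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.300000, 60)
b = trajectory(0.300001, 60)  # start differs by one part per million

print(abs(a[5] - b[5]))   # after 5 steps: still tiny; short-range forecasts work
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))  # later: order-1 divergence
```

That’s the batting-average story and the weather story in one picture: near-term predictions are cheap, long-term predictions are a crapshoot.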
This drives insurance companies crazy. They want to live in a deterministic world where they can predict their losses far into the future so that they can plan to have cash on hand (loss reserves) to cover them. That works for, say, life insurance. It works poorly for losses due to extreme-weather events.
That’s because weather is chaotic. Predicting catastrophic weather events next year is like predicting Miami Marlins pitcher Drew Steckenrider’s earned-run-average for the 2019 season.
Laugh out loud.
Notes from 3 October
My BS detector went off big-time when I read an article in this morning’s Wall Street Journal entitled “A Hotter Planet Reprices Risk Around the World.” That headline is BS for many reasons.
Digging into the article turned up the assertion that insurance providers were using deterministic computer models to predict risk of losses due to future catastrophic weather events. The article didn’t say that explicitly. We have to understand a bit about computer modeling to see what’s behind the words they used. Since I’ve been doing that stuff since the 1970s, pulling aside the curtain is fairly easy.
I’ve also covered risk assessment in industrial settings for many years. It’s not done with deterministic models. It’s not even done with traditional mathematics!
The traditional mathematics you learned in grade school uses real numbers. That is, numbers with a definite value.
Pi = 3.1415926 ….
We know what Pi is because it’s measurable. It’s the ratio of a circle’s circumference to its diameter.
Measure the circumference. Measure the diameter. Then divide one by the other.
The ancient Egyptians performed the exercise a kazillion times and noticed that, no matter what circle you used, no matter how big it was, whether you drew it on papyrus or scratched it on a rock or laid it out in a crop circle, you always came out with the same number. That number eventually picked up the name “Pi.”
Risk assessment is NOT done with traditional arithmetic using deterministic (real) numbers. It’s done using what’s called “fuzzy logic.”
Fuzzy logic is not like the fuzzy thinking used by newspaper reporters writing about climate change. The “fuzzy” part simply means it uses fuzzy categories like “small,” “medium” and “large” that don’t have precisely defined values.
While computer programs are perfectly capable of dealing with fuzzy logic, they won’t give you the kind of answers cost accountants are prepared to deal with. They won’t tell you that you need a risk-reserve allocation of $5,937,652.37. They’ll tell you something like “lots!”
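Here’s a minimal sketch of what those fuzzy categories look like in code. It’s my own illustration; real risk-assessment tools use richer membership functions, and the dollar ranges below are invented. The point is that a loss amount belongs to “small,” “medium” and “large” to varying degrees between 0 and 1, rather than resolving to one deterministic figure.

```python
# Minimal fuzzy-category sketch.  Instead of a single deterministic
# number, a loss (in millions of dollars) gets degrees of membership
# in the fuzzy sets "small", "medium" and "large".

def triangular(x, lo, peak, hi):
    """Triangular membership function: 0 outside [lo, hi], 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def classify(loss_millions):
    """Degrees of membership in each fuzzy loss category (ranges invented)."""
    return {
        "small": triangular(loss_millions, -1, 0, 5),
        "medium": triangular(loss_millions, 2, 6, 10),
        "large": triangular(loss_millions, 8, 15, 1e9),
    }

print(classify(4.0))  # partly "small", partly "medium" -- in other words, "lots-ish"
```

A $4 million loss comes out partly “small” and partly “medium” at the same time, which is exactly the kind of answer a cost accountant can’t take to the bank.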
You can’t take “lots” to the bank.
The next problem is imagining that global climate models could have any possible relationship to catastrophic weather events. Catastrophic weather events are, by definition, catastrophic. To analyze them you need the kind of mathematics called “catastrophe theory.”
Catastrophe theory is one of the underpinnings of chaos. In Steven Spielberg’s 1993 movie Jurassic Park, the character Ian Malcolm tries to illustrate catastrophe theory with the example of a drop of water rolling off the back of his hand. Whether it drips off to the right or left depends critically on how his hand is tipped. A small change creates an immense difference.
If a ball is balanced at the edge of a table, it can either stay there or drop off, and you can’t predict in advance which will happen.
That’s the thinking behind catastrophe theory.
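Malcolm’s water-drop example is easy to simulate. In this sketch (my own illustration, not from the film and not formal catastrophe theory) a “drop” slides downhill on a nearly symmetric double-valley surface; a tilt of one part in a million decides which valley it ends up in.

```python
# Catastrophe-style sensitivity: a "drop" doing gradient descent on the
# nearly symmetric double-valley surface
#     V(x) = x**4 - x**2 + tilt * x
# A microscopic tilt of the surface decides which valley it lands in.

def settle(tilt, x=0.0, lr=0.01, steps=5000):
    """Slide downhill from the ridge top until the drop settles."""
    for _ in range(steps):
        grad = 4 * x**3 - 2 * x + tilt   # slope of V at the current spot
        x -= lr * grad
    return x

print(settle(+1e-6))  # tilted one way: settles in the left valley (~ -0.707)
print(settle(-1e-6))  # tilted the other way: the right valley (~ +0.707)
```

An immeasurably small change in the starting condition produces two completely different outcomes, which is why "which way will the drop roll?" is unanswerable in advance.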
The same analysis goes into predicting what will happen with a hurricane. As I recall, at the time Hurricane Florence (2018) made landfall, most models predicted it would move south along the Appalachian Ridge. Another group of models predicted it would stall out to the northwest.
When push came to shove, however, it moved northeast.
What actually happened depended critically on a large number of details that were too small to include in the models.
How much money was lost due to storm damage was a result of the result of unpredictable things. (That’s not an editing error. It was really the second order result of a result.) It is a fundamentally unpredictable thing. The best you can do is measure it after the fact.
That brings us to comparing climate-model predictions with observations. We’ve got enough data now to see how climate-model predictions compare with observations on a decades-long timescale. The graph above summarizes results compiled in 2015 by the Cato Institute.
Basically, it shows that, not only did the climate models overestimate the temperature rise from the late 1970s to 2015 by a factor of approximately three, but in the critical last decade, when the computer models predicted a rapid rise, the actual observations showed that it nearly stalled out.
Notice that the divergence between the models and the observations increased with time. As I’ve said, that’s the main hallmark of chaos.
It sure looks like the climate models are batting zero!
I’ve been watching these kinds of results show up since the 1980s. It’s why by the late 1990s I started discounting statements like the WSJ article’s: “A consensus of scientists puts blame substantially on emissions of greenhouse gases from cars, farms and factories.”
I don’t know who those “scientists” might be, but it sounds like they’re assigning blame for an effect that isn’t observed. Real scientists wouldn’t do that. Only politicians would.
Clearly, something is going on, but what it is, what its extent is, and what is causing it is anything but clear.
In the data depicted above, the results from global climate modeling do not look at all like the behavior of a chaotic system. The data from observations, however, do look like what we typically get from a chaotic system. Stuff moves constantly. On short time scales it shows apparent trends. On longer time scales, however, the trends tend to evaporate.
No wonder observers like Steven Pacala, who is Frederick D. Petrie Professor in Ecology and Evolutionary Biology at Princeton University and a board member at Hamilton Insurance Group, Ltd., are led to say (as quoted in the article): “Climate change makes the historical record of extreme weather an unreliable indicator of current risk.”
When you’re dealing with a chaotic system, the longer the record you’re using, the less predictive power it has.
Another point made in the WSJ article that I thought was hilarious involved prediction of hurricanes in the Persian Gulf.
According to the article, “Such cyclones … have never been observed in the Persian Gulf … with new conditions due to warming, some cyclones could enter the Gulf in the future and also form in the Gulf itself.”
This sounds a lot like a tongue-in-cheek comment I once heard from astronomer Carl Sagan about predictions of life on Venus. He pointed out that when astronomers observe Venus, they generally just see a featureless disk. Science fiction writers had developed a chain of inferences that led them from that observation of a featureless disk to imagining total cloud cover, then postulating underlying swamps teeming with life, and culminating with imagining the existence of Venusian dinosaurs.
Observation: “We can see nothing.”
Conclusion: “There are dinosaurs.”
Sagan was pointing out that, though it may make good science fiction, that is bad science.
The WSJ reporters, Bradley Hope and Nicole Friedman, went from “No hurricanes ever seen in the Persian Gulf” to “Future hurricanes in the Persian Gulf” by the same sort of logic.
The kind of specious misinformation represented by the WSJ article confuses folks who have genuine concerns about the environment. Before politicians like Al Gore hijacked the environmental narrative, deflecting it toward climate change, folks paid much more attention to the real environmental issue of pollution.
The one bit of information in the WSJ article that appears prima facie disturbing is contained in the graph at right.
The graph shows actual insurance losses due to catastrophic weather events increasing rapidly over time. The article draws the inference that this trend is caused by human-induced climate change.
That’s quite a stretch, considering that there are obvious alternative explanations for this trend. The most likely alternative is the possibility that folks have been building more stuff in hurricane-prone areas. With more expensive stuff there to get damaged, insurance losses will rise.
Invoking Occam’s Razor (prefer the simplest explanation that accounts for the facts), we tend to favor the second explanation.
In summary, I conclude that the 3 October article is poor reporting that draws conclusions that are likely false.
Notes from 4 October
Don’t try to find the 3 October WSJ article online. I spent a couple of hours this morning searching for it, and came up empty. The closest I was able to get was a draft version that I found by searching on Bradley Hope’s name. It did not appear on WSJ’s public site.
Apparently, WSJ’s editors weren’t any more impressed with the article than I was.
The 4 October issue presents a corroboration of my alternative explanation of the trend in insurance-loss data: it’s due to a buildup of expensive real estate in areas prone to catastrophic weather events.
In a half-page exposé entitled “Hurricane Costs Grow as Population Shifts,” Kara Dapena reports that, “From 1980 to 2017, counties along the U.S. shoreline that endured hurricane-strength winds from Florence in September experienced a surge in population.”
In the end, this blog posting serves as an illustration of four points I tried to make last month. Specifically, on 19 September I published a post entitled: “Noble Whitefoot or Lying Blackfoot?” in which I laid out four principles to use when trying to determine if the news you’re reading is fake. I’ll list them in reverse of the order I used in last month’s posting, partly to illustrate that there is no set order for them:
Nobody gets it completely right: In the 3 October WSJ story, the reporters got snookered by the climate-change political lobby. That article, taken at face value, has to be stamped with the label “fake news.”
Does it make sense to you? The 3 October fake news article set off my BS detector by making a number of statements that did not make sense to me.
Comparison shopping for ideas: Assertions in the suspect article contradicted numerous other sources.
Consider your source: The article’s source (The Wall Street Journal) is one that I normally trust. Otherwise, I likely never would have seen it, since I don’t bother listening to anyone I catch in a lie. My faith in the publication was restored when the next day they featured an article that corrected the misinformation.
Last week I spent a lot of space yammering on about how to tell fake news from the real stuff. I made a big point about how real news organizations don’t allow editorializing in news stories. I included an example of a New York Times op-ed (opinion editorial) that was decidedly not a news story.
On the other hand, last night I growled at my TV screen when I heard a CNN commentator say that she’d been taught that journalists must have opinions and should voice them. I growled because her statement could be construed to mean something anathema to journalistic ethics. I’m afraid way too many TV journalists may be confused about this issue. Certainly too many news consumers are confused!
It’s easy to get confused. For example, I got myself in trouble some years ago in a discussion over dinner and drinks with Andy Wilson, Founding Editor at Vision Systems Design, over a related issue that is less important to political-news reporting, but is crucial for business-to-business (B2B) journalism: the role of advertising in editorial considerations.
Andy insisted upon strictly ignoring advertiser needs when making editorial decisions. I advocated a more nuanced approach. I said that ignoring advertiser needs and desires would lead to cutting oneself off from our most important source of technology-trends information.
I’m not going to delve too deeply into that subject because it has only peripheral significance for this blog posting. The overlap with news reporting is that both activities involve dealing with biased sources.
My disagreement with Andy arose from my veteran-project-manager’s sensitivity to all stakeholders in any activity. In the B2B case, editors have several ways of enforcing journalistic discipline without biting the hand that feeds us. I was especially sensitive to the issue because I specialized in case studies, which necessarily discuss technology embodied in commercial products. Basically, I insisted on limiting (to one) actual product mentions in each story, and suppressing any claims that the mentioned product was the only possible way to access the embodied technology. In essence, I policed the stories I wrote or edited to avoid the “buy our stuff” messages that advertisers love and that send chills down Andy’s (and my) spine.
In the news-media realm, journalists need to police their writing for “buy our ideas” messages in news stories. “Just the facts, ma’am” needs to be the goal for news. Expressing editorial opinions in news stories is dangerous. That’s when the lines between fake news and real news get blurry.
Those lines need to be sharp to help news consumers judge the … information … they’re being fed.
Perhaps “information” isn’t exactly the right word.
It might be best to start with the distinction between “information” and “data.”
The distinction is not always clear in a general setting. It is, however, stark in the world of science, which is where I originally came from.
What comes into our brains from the outside world is “data.” It’s facts and figures. Contrary to what many people imagine, “data” is devoid of meaning. Scientists often refer to it as “raw data” to emphasize this characteristic.
There is nothing actionable in raw data. The observation that “the sky is blue” can’t even tell you if the sky was blue yesterday, or how likely it is to be blue tomorrow. It just says: “the sky is blue.” End of story.
Turning “data” into “information” involves combining it with other, related data, and making inferences about or deductions from patterns perceivable in the resulting superset. The process is called “interpretation,” and it’s the second step in turning data into knowledge. It’s what our brains are good for.
So, does this mean that news reporters are to be empty-headed recorders of raw facts?
Not by a long shot!
The CNN commentator’s point was that reporters are far from empty headed. While learning their trade, they develop ways to, for example, tell when some data source is lying to them.
In the hard sciences it’s called “instrumental error,” and experimental scientists (as I was) spend careers detecting and eliminating it.
Similarly, what a reporter does when faced with a lying source is the hard part of news reporting. Do you say, “This source is unreliable” and suppress what they told you? Do you report what they said along with a comment that they’re a lying so-and-so who shouldn’t be believed? Certainly, you try to find another source who tells you something you can rely on. But, what if the second source is lying, too?
That’s why we news consumers have to rely on professionals who actually care about the truth for our news.
On the other hand, nobody goes to news outlets for just raw data. We want something we can use. We want something actionable.
Most of us have neither the time nor the tools to interpret all the drivel we’re faced with. Even if we happen to be able to work it out for ourselves, we could always use some help, even if just to corroborate our own conclusions.
Who better to help us interpret the data (news) and glean actionable opinions from it than those journalists who’ve been spending their careers listening to the crap newsmakers want to feed us?
That’s where commentators come in. The difference between an editor and a reporter is that the editor has enough background and experience to interpret the raw data and turn it into actionable information.
That is: opinion you can use to make a decision. Like, maybe, who to vote for.
People with the chops to interpret news and make comments about it are called “commentators.”
When I was looking to hire what we used to call a “Technical Editor” for Test & Measurement World, I specifically looked for someone with a technical degree and experience developing the technology I wanted that person to cover. So, for example, when I was looking for someone to cover advances in testing of electronics for the telecommunications industry, I went looking for a telecommunications engineer. I figured that if I found one who could also tell a story, I could train them to be a journalist.
That brings us back to the CNN commentator who thought she should have opinions.
The relevant word here is “commentator.”
She’s not just a reporter. To be a commentator, she supposedly has access to the best available “data” and enough background to skillfully interpret it. So, what she was saying is true for a commentator rather than just a reporter.
Howsomever, ya can’t just give a conclusion without showing how the facts lead to it.
Let’s look at how I assemble a post for this blog as an example of what you should look for in a reliable op-ed piece.
Obviously, I look for a subject about which I feel I have something worthwhile to say. Specifically, I look for what I call the “take-home lesson” on which I base every piece of blather I write.
The “take-home lesson” is the basic point I want my reader to remember. Come Thursday next you won’t remember every word or even every point I make in this column. You’re (hopefully) going to remember some concept from it that you should be able to summarize in one or two sentences. It may be the “call to action” my eighth-grade English teacher, Miss Langley, told me to look for in every well-written editorial. Or, it could be just some idea, such as “Racism sucks,” that I want my reader to believe.
Whatever it is, it’s what I want the reader to “take home” from my writing. All the rest is just stuff I use to convince the reader to buy into the “take-home lesson.”
Usually, I start off by providing the reader with some context in which to fit what I have to say. It’s there so that the reader and I start off on the same page. This is important to help the reader fit what I have to say into the knowledge pattern of their own mind. (I hope that makes sense!)
After setting the context, I provide the facts that I have available from which to draw my conclusion. The conclusion will be, of course, the “take-home lesson.”
I can’t be sure that my readers will have the facts already, so I provide links to what I consider reliable outside sources. Sometimes I provide primary sources, but more often they’re secondary sources.
Primary sources for, say, a biographical sketch of Thomas Edison would be diary pages or financial records, which few readers would have immediate access to.
A secondary source might be a well-researched entry on, say, the Biography.com website, which the reader can easily get access to and which can, in turn, provide links to useful primary sources.
In any case, I try to provide sources for each piece of data on which I base my conclusion.
Then, I’ll outline the logical path that leads from the data pattern to my conclusion. While the reader should have no need to dispute the “data,” he or she should look very carefully to see whether my logic makes sense. Does it lead inevitably from the data to my conclusion?
Finally, I’ll clearly state the conclusion.
In general, every consumer of ideas should look for this same pattern in every information source they use.
Every morning we’d gather ’round the desk of our compatriot Ron Held, builder of stellar-interior computer models extraordinaire, to hear him read “what fits” from the day’s issue of The New York Times. Ron had noticed that when taken out of context much of what is written in newspapers sounds hilarious. He had a deadpan way of reading this stuff out loud that only emphasized the effect. He’d modified the Times’ slogan, “All the news that’s fit to print,” into “All the news that fits.”
Whenever I hear unmitigated garbage coming out of supposed news outlets, I think of Ron’s “All the news that fits.”
These days, I’m on a kick about fake news and how to spot it. It isn’t easy because it’s become so pervasive that it becomes almost believable. This goes along with my lifelong philosophical study that I call: “How do we know what we think we know?”
Early on I developed what I call my “BS detector.” It’s a mental alarm bell that goes off whenever someone tries to convince me of something that’s unbelievable.
It’s not perfect. It’s been wrong on a whole lot of occasions.
For example, back in the early 1970s somebody told me about something called “superconductivity,” where certain materials, when cooled to near absolute zero, lost all electrical resistance. My first reaction, based on the proposition that if something sounds too good to be true, it’s not, was: “Yeah, and if you believe that I’ve got this bridge between Manhattan and Brooklyn to sell you.”
After seeing a few experiments and practical demonstrations, my BS detector stopped going off, and I was able to listen to explanations of Cooper pairs and electron-phonon interactions, and became convinced. I eventually learned that nearly everything involving quantum theory sounds like BS until you get to understand it.
Another time I bought into the notion that Interferon would develop into a useful AIDS treatment. Being a monogamous heterosexual, I didn’t personally worry about AIDS, but I had many friends who did, so I cared. I cared enough to pay attention, and watch as the treatment just didn’t develop.
Most of the time, however, my BS detector works quite well, thank you, and I’ve spent a lot of time trying to divine what sets it off, and what a person can do to separate the grains of truth from the BS pile.
Consider Your Source(s)
There’s an old saying: “Figures don’t lie, but liars can figure.”
First off, never believe anybody whom you’ve caught lying to you in the past. For example, Donald Trump has been caught lying numerous times in the past. I know. I’ve seen video of him mouthing words that I’ve known at the time were incorrect. It’s happened so often that my BS detector goes off so loudly whenever he opens his mouth that the noise drowns out what he’s trying to say.
I had the same problem with Bill Clinton when he was President (he seems to have gotten better, now, but I’m still wary).
Nixon was pretty bad, too.
There’s a lot of noise these days about “reliable sources.” But, who’s a reliable source? You can’t take their word for it. It’s like the old riddle of the lying Blackfoot and the truthful Whitefoot.
Unfortunately, in the real world nobody always lies or always tells the truth, even Donald Trump. So, they can’t be unmasked by calling on the riddle’s answer. If you’re unfamiliar with the riddle, look it up.
The best thing to do is try to figure out what the source’s game is. Everyone in the communications business is selling something. It’s up to you to figure out what they’re selling and whether you want to buy it.
News is information collected on a global scale, and it’s done by news organizations. The New York Times is one such organization. Another is The Wall Street Journal, which is a subsidiary of Dow Jones & Company, a division of News Corp.
So, basically, what a legitimate news organization is selling is information. If you get a whiff that they’re selling anything else, like racism, or anarchy, or Donald Trump, they aren’t a real news organization.
The structure of a news organization is:
Publisher: An individual or group of individuals generally responsible for running the business. The publisher manages the Circulation, Advertising, Production, and Editorial departments. The Publisher’s job is to try to sell what the news organization has to sell (that is, information) at a profit.
Circulation: A group of individuals responsible for recruiting subscribers and promoting sales of individual copies of the news organization’s output.
Advertising: A group of individuals under the direct supervision of the Publisher who are responsible for selling advertising space to individuals and businesses who want to present their own messages to people who consume the news organization’s output.
Production: A group of individuals responsible for packaging the information gathered by the Editorial department into physical form and distributing it to consumers.
Editorial: A group of trained journalists under a Chief Editor responsible for gathering and qualifying information the news organization will distribute to consumers.
Notice the words “and qualifying” in the entry on the Editorial department. Every publication has its self-selected editorial focus. For a publication like The Wall Street Journal, whose editorial focus is business news, every story has to fit that editorial focus. A story that, say, affects how readers select stocks to buy or sell is in their editorial focus. A story that doesn’t isn’t.
A story about why Donald Trump lies doesn’t belong in The Wall Street Journal. It belongs in Psychology Today.
That’s why editors and reporters have to be “trained journalists.” You can’t hire just anybody off the street, slap a fedora on their head and call them a “reporter.” That never even worked in the movies. Journalism is a profession and journalists require training. They’re also expected to behave in a manner consistent with journalistic ethics.
One of those ethical principles is that you don’t “editorialize” in news stories. That means you gather facts and report those facts. You don’t distort facts to fit your personal opinions. You for sure don’t make up facts out of thin air just ’cause you’d like it to be so.
Taking the example of The Wall Street Journal again, a reporter handed some fact doesn’t know what the reader will do with that fact. Some will do some things and others will do something else. If a reporter makes something up, and readers make business decisions based on that fiction, bad results will happen. Business people don’t like that. They’d stop buying copies of the newspaper. Circulation would collapse. Advertisers would abandon it.
Soon, no more The Wall Street Journal.
It’s the Chief Editor’s job to make sure reporters seek out information useful to their readers, don’t editorialize, and check their facts to make sure nobody’s been lying to them. Thus, the Chief Editor is the main gatekeeper that consumers rely on to keep out fake news.
That, by the way, is the fatal flaw in social media as a news source: there’s no Chief Editor.
One final note: A lot of people today buy into the cynical belief that this vision of journalism is naive. As a veteran journalist I can tell you that it’s NOT. If you think real journalism doesn’t work this way, you’re living in a Trumpian alternate reality.
Bang your head on the nearest wall hoping to knock some sense into it!
So, for you, the news consumer, to guard against fake news, your first job is to figure out if your source’s Chief Editor is trustworthy.
Unfortunately, it’s very seldom that most people get to know a news source’s Chief Editor well enough to know whether to trust him or her.
Comparison Shopping for Ideas
That’s why you don’t take the word of just one source. You comparison shop for ideas the same way you do for groceries, or anything else. You go to different stores. You check their prices. You look at sell-by dates. You sniff the air for stale aromas. You do the same thing in the marketplace for ideas.
If you check three-to-five news outlets, and they present the same facts, you gotta figure they’re all reporting the facts that were given to them. If somebody’s out of whack compared to the others, it’s a bad sign.
Of course, you have to consider the sources they use as well. Remember that everyone providing information to a news organization has something to sell. You need to make sure they’re not providing BS to the news organization to hype sales of their particular product. That’s why a credible news organization will always tell you who their sources are for every fact.
For example, a recent story in the news (from several outlets) was that The New York Times published an opinion-editorial piece (NOT a news story, by the way) saying very unflattering things about how President Trump was managing the Executive Branch. A very big red flag went up because the op-ed was signed “Anonymous.”
That red flag was minimized by the paper’s Chief Editor, Dean Baquet, assuring us all that he, at least, knew who the author was, and that it was a very high official who knew what they were talking about. If we believe him, we figure we’re likely dealing with a credible source.
Our confidence in the op-ed’s credibility was also bolstered by the fact that the piece included a lot of information that was available from other sources that corroborated it. The only new piece of information, that there was a faction within the White House that was acting to thwart the President’s worst impulses, fitted seamlessly with the verifiable information. So, we tend to believe it.
As another example, during the 1990s I was watching the scientific literature for reports of climate-change research results. I’d already seen signs that there was a problem with this particular branch of science. It had become too political, and the politicians were selling policies based on questionable results. I noticed that studies generally were reporting inconclusive results, but each article ended with a concluding paragraph warning of the dangers of human-induced climate change that did not fit seamlessly with the research results reported in the article. So, I tended to disbelieve the final conclusions.
Does It Make Sense to You?
This is where we all stumble when ferreting out fake news. If you’re pre-programmed to accept some idea, it won’t set off your BS detector. It won’t disagree with the other sources you’ve chosen to trust. It will seem reasonable to you. It will make sense, whether it’s right or wrong.
That’s a situation we all have to face, and the only antidote is to do an experiment.
Experiments are great! They’re our way of asking Mommy Nature to set us on the right path. And, if we ask often enough, and carefully enough, she will.
That’s how I learned the reality of superconductivity against my inbred bias. That’s how I learned how naive my faith in interferon had been.
With those cautions, let’s look at how we know what we think we know.
It starts with our parents. We start out truly impressed by our parents’ physical and intellectual capabilities. After all, they can walk! They can talk! They can (in some cases) do arithmetic!
Parents have a natural drive to stuff everything they know into our little heads, and we have a natural drive to suck it all in. It’s only later that we notice that not everyone agrees with our parents, and they aren’t necessarily the smartest beings on the planet. That’s when comparison shopping for ideas begins. Eventually, we develop our own ideas that fit our personalities.
Along the way, Mommy Nature has provided a guiding hand to either confirm or discredit our developing ideas. If we’re not pathological, we end up with a more or less reliable feel for what makes sense.
For example, almost everybody has a deep-seated conviction that torturing pets is wrong. We’ve all done bad things to pets, usually unintentionally, and found it made us feel sad. We don’t want to do it again.
So, if somebody advocates perpetrating cruelty to animals, most of us recoil. We’d have to be given a darn good reason to do it. Like, being told “If you don’t shoot that squirrel, there’ll be no dinner tonight.”
That would do it.
Our brains are full up with all kinds of ideas like that. When somebody presents us with a novel idea, or a report of something they suggest is a fact, our first line of defense is whether it makes sense to us.
If it’s unbelievable, it’s probably not true.
It could still be true, since a lot of unbelievable stuff actually happens, but it’s probably not. We can note it pending confirmation by other sources or some kind of experimental result (like looking to see the actual bloody mess).
The real naive attitude about news, which I used to hear a lot fifty or sixty years ago is, “If it’s in print, it’s gotta be true.”
Reporters, editors and publishers are human. They make mistakes. And catching those mistakes follows the 95:5 rule. That is, you’ll expend 95% of your effort to catch the last 5% of the errors. It’s also called “The Law of Diminishing Returns,” and it’s how we know when to quit obsessing.
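The shape of that 95:5 effect is easy to sketch with made-up numbers (the 60% catch rate below is purely hypothetical, chosen only to show the curve):

```python
# Hypothetical illustration of diminishing returns in error-catching:
# assume each proofreading pass catches 60% of the errors still present.
errors = 100.0
passes = 0
history = []
while errors > 1.0:          # keep going until (nearly) every error is caught
    errors *= 1.0 - 0.60     # 40% of the remaining errors survive each pass
    passes += 1
    history.append(errors)
    print(f"pass {passes}: about {errors:.1f} errors remain")
```

Two passes knock out 84 of the 100 errors, but it takes six passes to chase down the stragglers: each later pass costs just as much effort while catching fewer and fewer errors.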
The way this works for the news business is that news output involves a lot of information. I’m not going to waste space here estimating the amount of information (in bits) in an average newspaper, but let’s just say it’s 1.3 s**tloads!
It’s a lot. Getting it all right, then getting it all corroborated, then getting it all fact checked (a different, and tougher, job than just corroboration), then putting it into words that convey that information to readers, is an enormous task, especially when a deadline is involved. It’s why the classic image of a journalist is some frazzled guy wearing a fedora pushed back on his head, suitcoat off, sleeves rolled up and tie loosened, maniacally tapping at a typewriter keyboard.
So, don’t expect everything you read to be right (or even spelled right).
The easiest things to get right are basic facts, the Who, What, Where, and When.
How many deaths due to Hurricane Maria on Puerto Rico? Estimates have run from 16 to nearly 3,000 depending on who’s doing the estimating, what axes they have to grind, and how they made the estimate. Nobody was ever able to collect the bodies in one place to count them. It’s unlikely that they ever found all the bodies to collect for the count!
Those are the first four Ws of news reporting. The fifth one, Why, is by far the hardest ’cause you gotta get inside someone’s head.
So, the last part of judging whether news is fake is recognizing that nobody gets it entirely right. Just because you see it in print doesn’t make it fact. And, just because somebody got it wrong, doesn’t make them a liar.
They could get one thing wrong, and most everything else right. In fact, they could get 5 things wrong, and 95 things right!
What you look for is folks who make the effort to try to get things right. If somebody is really trying, they’ll make some mistakes, but they’ll own up to them. They’ll say something like: “Yesterday we told you that there were 16 deaths, but today we have better information and the death toll is up to 2,975.”
Anybody who won’t admit they’re ever wrong is a liar, and whatever they say is most likely fake news.
12 September 2018 – The Front Page was a hilarious one-set stage play supposedly taking place over a single night in the dingy press room of Chicago’s Criminal Courts Building overlooking the gallows behind the Cook County Jail. I’m not going to synopsize the plot because the Wikipedia entry cited above does such an excellent job that it’s better for you to follow the link and read it yourself.
First performed in 1928, the play has been revived several times and suffered countless adaptations to other media. It’s notable for the fact that the main character, Hildy Johnson, originally written as a male part, is even more interesting as a female. That says something important, but I don’t know what.
By the way, I insist that the very best adaptation is Howard Hawks’ 1940 tour de force film entitled His Girl Friday starring Rosalind Russell as Hildy Johnson, and Cary Grant as the other main character Walter Burns. Burns is Johnson’s boss and ex-husband who uses various subterfuges to prevent Hildy from quitting her job and marrying an insurance salesman.
That’s not what I want to talk about today, though. What’s important for this blog posting is part of the play’s backstory. It’s important because it can help provide context for the entire social media industry, which is becoming so important for American society right now.
In that backstory, a critical supporting character is one Earl Williams, who’s a mousey little man convicted of murdering a policeman and sentenced to be executed the following morning right outside the press-room window. During the course of the play, it comes to light that Williams, confused by listening to a soapbox demagogue speaking in a public park, accidentally shot the policeman and was subsequently railroaded in court by a corrupt sheriff who wanted to use his execution to help get out the black(!?) vote for his re-election campaign.
What publicly executing a confused communist sympathizer has to do with motivating black voters I still fail to understand, but it makes as much sense as anything else the sheriff says or does.
This plot has so many twists and turns paralleling issues still resonating today that it’s ridiculous. That’s a large part of the play’s fun!
Anyway, what I want you to focus on right now is the subtle point that Williams was confused by listening to a soapbox demagogue.
Soapbox demagogues were a fixture in pre-Internet political discourse. The U.S. Constitution’s First Amendment explicitly gives private citizens the right to peaceably assemble in public places. For example, during the late 1960s a typical summer Sunday afternoon anywhere in any public park in North America or Europe would see a gathering of anywhere from 10 to 10,000 hippies for an impromptu “Love In,” or “Be In,” or “Happening.” With no structure or set agenda folks would gather to do whatever seemed like a good idea at the time. My surrealist novelette Lilith describes a gathering of angels, said to be “the hippies of the supernatural world,” that was patterned after a typical Hippie Love In.
Similarly, a soapbox demagogue had the right to commandeer a picnic table, bandstand, or discarded soapbox to place himself (at the time they were overwhelmingly male) above the crowd of passersby that he hoped would listen to his discourse on whatever he wanted to talk about.
In the case of Earl Williams’ demagogue, the speech was about “production for use.” The feeble-minded Williams applied that idea to the policeman’s service weapon, with predictable results.
Fast forward to the twenty-first century.
I haven’t been hanging around local parks on Sunday afternoons for a long time, so I don’t know if soapbox demagogues are still out there. I doubt that they are because it’s easier and cheaper to log onto a social-media platform, such as Facebook, to shoot your mouth off before a much larger international audience.
I have browsed social media, however, and see the same sort of drivel that used to spew out of the mouths of soapbox demagogues back in the day.
The point I’m trying to make is that there’s really nothing novel about social media. Being a platform for anyone to say anything to anyone is the same as last-century soapboxes being available for anyone who thinks they have something to say. It’s a prominent right guaranteed in the Bill of Rights. In fact, it’s important enough to be guaranteed in the very first of the Bill’s amendments to the U.S. Constitution.
What is not included, however, is a proscription against anyone ignoring the HECK out of soapbox demagogues! They have the right to talk, but we have the right to not listen.
Back in the day, almost everybody passed by soapbox demagogues without a second glance. We all knew they climbed their soapboxes because it was the only venue they had to voice their opinions.
Preachers had pulpits in front of congregations, so you knew they had something to say that people wanted to hear. News reporters had newspapers people bought because they contained news stories that people wanted to read. Scholars had academic journals that other scholars subscribed to because they printed results of important research. Fiction writers had published novels folks read because they found them entertaining.
The list goes on.
Soapbox demagogues, however, had to stand on an impromptu platform because they didn’t have anything to say worth hearing. The only ones who stopped to listen were those, like the unemployed Earl Williams, who had nothing better to do.
The idea of pretending that social media is any more legitimate a venue for ideas than a park soapbox is just goofy.
Social media are not legitimate media for the exchange of ideas simply because anybody is able to say anything on them, just like a soapbox in a park. Like a soapbox in a park, most of what is said on social media isn’t worth hearing. It’s there because the barrier to entry is essentially nil. That’s why so many purveyors of extremist and divisive rhetoric gravitate to social media platforms. Legitimate media won’t carry them.
Legitimate media organizations have barriers to the entry of lousy ideas. For example, I subscribe to The Economist because of their former Editor in Chief, John Micklethwait, who impressed me as an excellent arbiter of ideas (despite having a weird last name). I was very pleased when he transferred over to Bloomberg News, which I consider the only televised outlet for globally significant news. The Wall Street Journal’s business focus forces Editor-in-Chief Matt Murray into a “just the facts, ma’am” stance because every newsworthy event creates both winners and losers in the business community, so content bias is a non-starter.
The common thread among these legitimate-media sources is the existence of an organizational structure focused on maintaining content quality. There are knowledgeable gatekeepers (called “editors”) charged with keeping out bad ideas.
So, when Donald Trump, for example, shows a preference for social media (in his case, Twitter) and an abhorrence of traditional news outlets, he’s telling us his ideas aren’t worth listening to. Legitimate media outlets disparage his views, so he’s forced to use the twenty-first century equivalent of a public-park soapbox: social media.
On social media, he can say anything to anybody because there’s nobody to tell him, “That’s a stupid thing to say. Don’t say it!”
8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.
In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.
With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.
As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.
For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.
I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.
The fact is, we don’t have anything close to a recognizable database of facts on which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.
But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?
If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?
It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.
Is AI ready? IBM recently showed that it’s certainly coming along.
Is the sea of facts ready? That’s a lot less certain.
Debater holds its own
In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.
The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.
Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharonov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”
So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.
Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”
Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.
Beyond spinning away on publications, are computers ready to interact intelligently?
Artificial? Yes. But, Intelligent?
According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”
Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Thus he confronts the potentially debilitating question of what constitutes knowledge: what separates fact from opinion from conjecture.
His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.
One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.
Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”
“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
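Klein’s three layers can be made concrete with a toy sketch. This is not a real NLP system; the `LEXICON` dictionary and `analyze` function below are hypothetical stand-ins for a trained parser and semantic model, just to show where each layer succeeds and where pragmatics falls off a cliff:

```python
# Toy illustration of Klein's three NLP layers.
# LEXICON is a made-up mini-dictionary standing in for a trained model.
LEXICON = {
    "dog":  {"pos": "NOUN", "meaning": "domesticated canine"},
    "eats": {"pos": "VERB", "meaning": "ingests a nutritious substance for energy"},
    "food": {"pos": "NOUN", "meaning": "edible substance"},
}

def analyze(sentence):
    """Return (word, part-of-speech, gloss) for each word we recognize."""
    analysis = []
    for word in sentence.lower().split():
        entry = LEXICON.get(word, {"pos": "UNKNOWN", "meaning": None})
        analysis.append((word, entry["pos"], entry["meaning"]))
    return analysis

result = analyze("Dog eats food")
# Syntax layer: labeling parts of speech is tractable.
print([(w, pos) for w, pos, _ in result])
# Semantics layer: attaching dictionary meanings is kind of OK.
print([(w, meaning) for w, _, meaning in result])
# Pragmatics layer: nothing in the lexicon can tell us whether the
# speaker is literal, ironic, or sarcastic -- that context lives
# entirely outside the text, which is Klein's point.
```

The first two print statements correspond to the layers where, in Klein’s words, “we’re doing OK”; the missing third layer is exactly what no lookup table can supply.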
To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”
A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.
Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”
Centralized vs. Decentralized Fact Model
It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.
We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?
That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.
A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.
The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.
IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.
IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”
Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.
Trive and Debater seem to be a complement to each other, so far.
Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.
About Info-Tech Research Group
Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same, unmatched expertise to HR professionals worldwide.
11 July 2018 – Please bear with me while I, once again, invert the standard news-story pyramid by presenting a great whacking pile of (hopefully entertaining) detail that leads eventually to the point of this column. If you’re too impatient to read it to the end, leave now to check out the latest POTUS rant on Twitter.
Unlike Will Rogers, who famously wrote, “I never met a man I didn’t like,” I’ve run across a whole slew of folks I didn’t like, to the point of being marginally misanthropic.
I’ve made friends with all kinds of people, from murderers to millionaires, but there are a few types that I just can’t abide. Top of that list is people that think they’re smarter than everybody else, and want you to acknowledge it.
I’m telling you this because I’m trying to be honest about why I’ve never been able to abide two recent Presidents: William Jefferson Clinton (#42) and Donald J. Trump (#45). Having been forced to observe their antics over an extended period, I’m pleased to report that they’ve both proved to be among the most corrupt individuals to occupy the Oval Office in recent memory.
I dislike them because they both show that same, smarmy self-satisfied smile when contemplating their own greatness.
Tricky Dick Nixon (#37) was also a world-class scumbag, but he never triggered the same automatic revulsion. That is because, instead of always looking self satisfied, he always looked scared. He was smart enough to recognize that he was walking a tightrope and, if he stayed on it long enough, he eventually would fall off.
And, he did.
I had no reason for disliking #37 until the mid-1960s, when, as a college freshman, I researched a paper for a history class that happened to involve digging into the McCarthy hearings of the early 1950s. Seeing the future #37’s activities in that period helped me form an extremely unflattering picture of his character, which a decade later proved accurate.
During those years in between I had some knock-down, drag-out arguments with my rabid-Nixon-fan grandmother. I hope I had the self control never to have said “I told you so” after Nixon’s fall. She was a nice lady and a wonderful grandma, and wouldn’t have deserved it.
As Abraham Lincoln (#16) famously said: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”
Since #45 came on my radar many decades ago, I’ve been trying to figure out what, exactly, is wrong with his brain. At first, when he was a real-estate developer, I just figured he had bad taste and was infantile. That made him easy to dismiss, so I did just that.
Later, he became a reality-TV star. His show, The Apprentice, made it instantly clear that he knew absolutely nothing about running a business.
No wonder his companies went bankrupt. Again, and again, and again….
I’ve known scads of corporate CEOs over the years. During the quarter century I spent covering the testing business as a journalist, I got to spend time with most of the corporate leaders of the world’s major electronics manufacturing companies. Unsurprisingly, the successful ones followed the best practices that I learned in MBA school.
Some of the CEOs I got to know were goofballs. Most, however, were absolutely brilliant. The successful ones all had certain things in common.
Chief among the characteristics of successful corporate executives is that they make the people around them happy to work for them. They make others feel comfortable, empowered, and enthusiastically willing to cooperate to make the CEO’s vision manifest.
Even Commendatore Ferrari, who I’ve heard was Hell to work for and Machiavellian in interpersonal relationships, made underlings glad to have known him. I’ve noticed that ‘most everybody who’s ever worked for Ferrari has become a Ferrari fan for life.
As far as I can determine, nobody ever sued him.
That’s not the impression I got of Donald Trump, the corporate CEO. He seemed to revel in conflict, making those around him feel like dog pooh.
Apparently, everyone who’s ever dealt with him has wanted to sue him.
That worked out fine, however, for Donald Trump, the reality-TV star. So-called “reality” TV shows generally survive by presenting conflict. The more conflict the better. Everybody always seems to be fighting with everybody else, and the winners appear to be those who consistently bully their opponents into feeling like dog pooh.
I see a pattern here.
The inescapable conclusion is that Donald Trump was never a successful corporate executive, but succeeded enormously playing one on TV.
Another characteristic I should mention of reality TV shows is that they’re unscripted. The idea seems to be that nobody knows what’s going to happen next, including the cast.
That relieves reality-TV stars of the need to learn lines. Actual movie stars and stage actors have to learn lines of dialog. Stories are tightly scripted so that they conform to Aristotle’s recommendations for how to write a successful plot.
Having written a handful of traditional motion-picture scripts as well as having produced a few reality-TV episodes, I know the difference. Following Aristotle’s dicta gives you the ability to communicate, and sometimes even teach, something to your audience. The formula reality-TV show, on the other hand, goes nowhere. Everybody (including the audience) ends up exactly where they started, ready to start the same stupid arguments over and over again ad nauseam.
Apparently, reality-TV audiences don’t want to actually learn anything. They’re more focused on ranting and raving.
Later on, following a long tradition among theater, film and TV stars, #45 became a politician.
At first, I listened to what he said. That led me to think he was a Nazi demagogue. Then, I thought maybe he was some kind of petty tyrant, like Mussolini. (I never considered him competent enough to match Hitler.)
Eventually, I realized that it never makes any sense to listen to what #45 says because he lies. That makes anything he says irrelevant.
FIRST PRINCIPLE: If you catch somebody lying to you, stop believing what they say.
So, it’s all bullshit. You can’t draw any conclusion from it. If he says something obviously racist (for example), you can’t conclude that he’s a racist. If he says something that sounds stupid, you can’t conclude he’s stupid, either. It just means he’s said something that sounds stupid.
Piling up this whole load of B.S., then applying Occam’s Razor, leads to the conclusion that #45 is still simply a reality-TV star. His current TV show is titled The Trump Administration. Its supporting characters are U.S. senators and representatives, executive-branch bureaucrats, news-media personalities, and foreign “dignitaries.” Some in that last category (such as Justin Trudeau and Emmanuel Macron) are reluctant conscripts into the cast, and some (such as Vladimir Putin and Kim Jong-un) gleefully play their parts, but all are bit players in #45’s reality TV show.
Oh, yeah. The largest group of bit players in The Trump Administration is every man, woman, child and jackass on the planet. All are, in true reality-TV style, going exactly nowhere as long as the show lasts.
Politicians have always been showmen. Of the Founding Fathers, the one who stands out for never coming close to becoming President was Benjamin Franklin. Franklin was a lot of things, and did a lot of things extremely well. But, he was never really a P.T.-Barnum-like showman.
Really successful politicians, such as Abraham Lincoln, Franklin Roosevelt (#32), Bill Clinton, and Ronald Reagan (#40) were showmen. They could wow the heck out of an audience. They could also remember their lines!
That brings us, as promised, to Donald Trump and the Peter Principle.
Recognizing the close relationship between Presidential success and showmanship gives some idea about why #45 is having so much trouble making a go of being President.
Before I dig into that, however, I need to point out a few things that #45 likes to claim as successes that actually aren’t:
The 2016 election was not really a win for Donald Trump. Hillary Clinton was such an unpopular candidate that she decisively lost on her own (de)merits. God knows why she was ever the Democratic Party candidate at all. Anybody could have beaten her. If Donald Trump hadn’t been available, Elmer Fudd could have won!
The current economic expansion has absolutely nothing to do with Trump policies. I predicted it back in 2009, long before anybody (with the possible exception of Vladimir Putin, who apparently engineered it) thought Trump had a chance of winning the Presidency. My prediction was based on applying chaos theory to historical data. It was simply time for an economic expansion. The only effect Trump can have on the economy is to screw it up. Being trained as an economist (You did know that, didn’t you?), #45 is unlikely to screw up so badly that he derails the expansion.
While #45 likes to claim a win on North Korean denuclearization, the Nobel Peace Prize is on hold while evidence piles up that Kim Jong-un was pulling the wool over Trump’s eyes at the summit.
Laurence J. Peter (who died at age 70 in 1990) was not a management consultant or a behavioral psychologist. He was an Associate Professor of Education at the University of Southern California. He was also Director of the Evelyn Frieden Centre for Prescriptive Teaching at USC, and Coordinator of Programs for Emotionally Disturbed Children.
The Peter principle states: “In a hierarchy every employee tends to rise to his level of incompetence.”
Horrifying to corporate managers, his 1969 book The Peter Principle went on to provide real examples and lucid explanations to show the principle’s validity. It works as satire only because it leaves the reader with a choice either to laugh or to cry.
See last week’s discussion of why academic literature is exactly the wrong form with which to explore really tough philosophical questions in an innovative way.
Let’s be clear: I’m convinced that the Peter principle is God’s Own Truth! I’ve seen dozens of examples that confirm it, and no counter examples.
It’s another proof that Mommy Nature has a sense of humor. Anyone who disputes that has, philosophically speaking, a piece of paper taped to the back of his (or her) shirt with the words “Kick Me!” written on it.
A quick perusal of the Wikipedia entry on the Peter Principle elucidates: “An employee is promoted based on their success in previous jobs until they reach a level at which they are no longer competent, as skills in one job do not necessarily translate to another. … If the promoted person lacks the skills required for their new role, then they will be incompetent at their new level, and so they will not be promoted again.”
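The mechanism Wikipedia describes is simple enough to simulate. Here is a minimal toy model, assuming Peter’s key premise that skill at one level says nothing about skill at the next; the names (`LEVELS`, `STAFF_PER_LEVEL`, `promote`) and numbers are all made up for illustration, not drawn from any real study:

```python
import random

random.seed(42)

LEVELS = 5            # depth of the hypothetical hierarchy
STAFF_PER_LEVEL = 20  # employees at each level

# Competence at each level is drawn fresh and uniformly at random:
# Peter's premise is that skills in one job don't translate to the next.
hierarchy = [[random.random() for _ in range(STAFF_PER_LEVEL)]
             for _ in range(LEVELS)]

def promote(hierarchy):
    """Promote each level's most competent employee one level up.
    Their competence in the new job is re-rolled at random, since
    success below predicted nothing about the new role."""
    for level in range(LEVELS - 1):
        best = max(range(STAFF_PER_LEVEL), key=lambda i: hierarchy[level][i])
        # The star performer takes a slot above with brand-new, unrelated competence...
        hierarchy[level + 1][best] = random.random()
        # ...and is backfilled below by a fresh hire.
        hierarchy[level][best] = random.random()

for _ in range(200):
    promote(hierarchy)

# After many promotion cycles, the top level is staffed almost entirely
# by people whose competence there is no better than a coin flip,
# even though every one of them was the best performer somewhere below.
top = hierarchy[-1]
print(f"average top-level competence: {sum(top) / len(top):.2f}")
```

Under the model’s assumption, selecting the best performers for promotion does nothing to raise competence at the top, which is exactly the punchline of the principle: people rise until they stall at a job they can’t do.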
I leave it as an exercise for the reader (and the media) to find the numerous examples where #45, as a successful reality-TV star, has the skills he needed to be promoted to President, but not those needed to be competent in the job.