Noble Whitefoot or Lying Blackfoot?

Fake News feed
How do you know when the news you’re reading is fake? Rawpixel/Shutterstock

19 September 2018 – Back in the mid-1970s, we RPI astrophysics graduate students had this great office at the very top of the Science Building at Rensselaer Polytechnic Institute. The construction was an exact duplicate of the top floor of an airport control tower, with the huge outward-sloping windows and the wrap-around balcony.

Every morning we’d gather ’round the desk of our compatriot Ron Held, builder of stellar-interior computer models extraordinaire, to hear him read “what fits” from the day’s issue of The New York Times. Ron had noticed that, when taken out of context, much of what is written in newspapers sounds hilarious. He had a deadpan way of reading this stuff out loud that only emphasized the effect. He’d modified the Times’ slogan, “All the news that’s fit to print,” into “All the news that fits.”

Whenever I hear unmitigated garbage coming out of supposed news outlets, I think of Ron’s “All the news that fits.”

These days, I’m on a kick about fake news and how to spot it. That isn’t easy, because fake news has become so pervasive that it starts to seem believable. This goes along with my lifelong philosophical study that I call: “How do we know what we think we know?”

Early on I developed what I call my “BS detector.” It’s a mental alarm bell that goes off whenever someone tries to convince me of something that’s unbelievable.

It’s not perfect. It’s been wrong on a whole lot of occasions.

For example, back in the early 1970s somebody told me about something called “superconductivity,” where certain materials, when cooled to near absolute zero, lost all electrical resistance. My first reaction, based on the proposition that if something sounds too good to be true, it’s not, was: “Yeah, and if you believe that I’ve got this bridge between Manhattan and Brooklyn to sell you.”

After seeing a few experiments and practical demonstrations, my BS detector stopped going off and I was able to listen to explanations about Cooper pairs and electron-phonon interactions. I became convinced. I eventually learned that nearly everything involving quantum theory sounds like BS until you get to understand it.

Another time I bought into the notion that Interferon would develop into a useful AIDS treatment. Being a monogamous heterosexual, I didn’t personally worry about AIDS, but I had many friends who did, so I cared. I cared enough to pay attention, and watch as the treatment just didn’t develop.

Most of the time, however, my BS detector works quite well, thank you, and I’ve spent a lot of time trying to divine what sets it off, and what a person can do to separate the grains of truth from the BS pile.

Consider Your Source(s)

There’s an old saying: “Figures don’t lie, but liars can figure.”

First off, never believe anybody whom you’ve caught lying to you in the past. For example, Donald Trump has been caught lying numerous times. I know. I’ve seen video of him mouthing words that I knew at the time were incorrect. It’s happened so often that my BS detector goes off so loudly whenever he opens his mouth that the noise drowns out what he’s trying to say.

I had the same problem with Bill Clinton when he was President (he seems to have gotten better, now, but I’m still wary).

Nixon was pretty bad, too.

There’s a lot of noise these days about “reliable sources.” But, who’s a reliable source? You can’t take their word for it. It’s like the old riddle of the lying Blackfoot and the truthful Whitefoot.

Unfortunately, in the real world nobody always lies or always tells the truth, even Donald Trump. So, they can’t be unmasked by calling on the riddle’s answer. If you’re unfamiliar with the riddle, look it up.

The best thing to do is try to figure out what the source’s game is. Everyone in the communications business is selling something. It’s up to you to figure out what they’re selling and whether you want to buy it.

News is information collected on a global scale, and it’s done by news organizations. The New York Times is one such organization. Another is The Wall Street Journal, which is a subsidiary of Dow Jones & Company, a division of News Corp.

So, basically, what a legitimate news organization is selling is information. If you get a whiff that they’re selling anything else, like racism, or anarchy, or Donald Trump, they aren’t a real news organization.

The structure of a news organization is:

Publisher: An individual or group of individuals generally responsible for running the business. The publisher manages the Circulation, Advertising, Production, and Editorial departments. The Publisher’s job is to try to sell what the news organization has to sell (that is, information) at a profit.

Circulation: A group of individuals responsible for recruiting subscribers and promoting sales of individual copies of the news organization’s output.

Advertising: A group of individuals under the direct supervision of the Publisher who are responsible for selling advertising space to individuals and businesses who want to present their own messages to people who consume the news organization’s output.

Production: A group of individuals responsible for packaging the information gathered by the Editorial department into physical form and distributing it to consumers.

Editorial: A group of trained journalists under a Chief Editor responsible for gathering and qualifying information the news organization will distribute to consumers.

Notice the italics on “and qualifying” in the entry on the Editorial Department. Every publication has its self-selected editorial focus. For a publication like The Wall Street Journal, whose editorial focus is business news, every story has to fit that editorial focus. A story that, say, affects how readers select stocks to buy or sell is in its editorial focus. A story that doesn’t isn’t.

A story about why Donald Trump lies doesn’t belong in The Wall Street Journal. It belongs in Psychology Today.

That’s why editors and reporters have to be “trained journalists.” You can’t hire just anybody off the street, slap a fedora on their head and call them a “reporter.” That never even worked in the movies. Journalism is a profession and journalists require training. They’re also expected to behave in a manner consistent with journalistic ethics.

One of those ethical principles is that you don’t “editorialize” in news stories. That means you gather facts and report those facts. You don’t distort facts to fit your personal opinions. You for sure don’t make up facts out of thin air just ’cause you’d like it to be so.

Taking the example of The Wall Street Journal again, a reporter handed some fact doesn’t know what readers will do with that fact. Some will do one thing and others will do something else. If a reporter makes something up, and readers make business decisions based on that fiction, bad results follow. Business people don’t like that. They’d stop buying copies of the newspaper. Circulation would collapse. Advertisers would abandon it.

Soon, no more The Wall Street Journal.

It’s the Chief Editor’s job to make sure reporters seek out information useful to their readers, don’t editorialize, and check their facts to make sure nobody’s been lying to them. Thus, the Chief Editor is the main gatekeeper that consumers rely on to keep out fake news.

That, by the way, is the fatal flaw in social media as a news source: there’s no Chief Editor.

One final note: A lot of people today buy into the cynical belief that this vision of journalism is naive. As a veteran journalist I can tell you that it’s NOT. If you think real journalism doesn’t work this way, you’re living in a Trumpian alternate reality.

Bang your head on the nearest wall hoping to knock some sense into it!

So, for you, the news consumer, to guard against fake news, your first job is to figure out if your source’s Chief Editor is trustworthy.

Unfortunately, very few people ever get to know a news source’s Chief Editor well enough to know whether to trust him or her.

Comparison Shopping for Ideas

That’s why you don’t take the word of just one source. You comparison shop for ideas the same way you do for groceries, or anything else. You go to different stores. You check their prices. You look at sell-by dates. You sniff the air for stale aromas. You do the same thing in the marketplace for ideas.

If you check three to five news outlets and they present the same facts, you gotta figure they’re all reporting the facts that were given to them. If somebody’s out of whack compared to the others, it’s a bad sign.
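Just for fun, here’s that comparison shopping in miniature: a toy Python sketch (the outlet names and “facts” are entirely made up for illustration) that flags the outlet whose reporting is out of whack with the rest:

```python
# Toy cross-source check: pool the facts the other outlets report,
# then see how much of each outlet's reporting is corroborated.
# Outlet names and "facts" are hypothetical, purely for illustration.
reports = {
    "Outlet A": {"landfall Tuesday", "16 confirmed deaths", "power out in 3 counties"},
    "Outlet B": {"landfall Tuesday", "16 confirmed deaths", "governor declared emergency"},
    "Outlet C": {"landfall Tuesday", "16 confirmed deaths", "power out in 3 counties"},
    "Outlet D": {"landfall Monday", "2,000 deaths", "aliens suspected"},
}

for outlet, facts in reports.items():
    # Everything reported by the *other* outlets...
    others = set().union(*(f for name, f in reports.items() if name != outlet))
    # ...and how much of this outlet's reporting shows up in that pool.
    corroborated = len(facts & others)
    flag = "  <- out of whack: a bad sign" if corroborated / len(facts) < 0.5 else ""
    print(f"{outlet}: {corroborated}/{len(facts)} facts corroborated elsewhere{flag}")
```

A real fact-checker obviously can’t match facts by exact string comparison, but the logic is the same: agreement among independent outlets is the signal, and the odd outlet out is the red flag.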

Of course, you have to consider the sources they use as well. Remember that everyone providing information to a news organization has something to sell. You need to make sure they’re not providing BS to the news organization to hype sales of their particular product. That’s why a credible news organization will always tell you who their sources are for every fact.

For example, a recent story in the news (from several outlets) was that The New York Times published an opinion-editorial piece (NOT a news story, by the way) saying very unflattering things about how President Trump was managing the Executive Branch. A very big red flag went up because the op-ed was signed “Anonymous.”

That red flag was minimized by the paper’s Chief Editor, Dean Baquet, assuring us all that he, at least, knew who the author was, and that it was a very high official who knew what they were talking about. If we believe him, we figure we’re likely dealing with a credible source.

Our confidence in the op-ed’s credibility was also bolstered by the fact that the piece included a lot of information that was available from other sources that corroborated it. The only new piece of information, that there was a faction within the White House that was acting to thwart the President’s worst impulses, fitted seamlessly with the verifiable information. So, we tend to believe it.

As another example, during the 1990s I was watching the scientific literature for reports of climate-change research results. I’d already seen signs that there was a problem with this particular branch of science. It had become too political, and the politicians were selling policies based on questionable results. I noticed that studies generally were reporting inconclusive results, but each article ended with a concluding paragraph warning of the dangers of human-induced climate change that did not fit seamlessly with the research results reported in the article. So, I tended to disbelieve the final conclusions.

Does It Make Sense to You?

This is where we all stumble when ferreting out fake news. If you’re pre-programmed to accept some idea, it won’t set off your BS detector. It won’t disagree with the other sources you’ve chosen to trust. It will seem reasonable to you. It will make sense, whether it’s right or wrong.

That’s a situation we all have to face, and the only antidote is to do an experiment.

Experiments are great! They’re our way of asking Mommy Nature to set us on the right path. And, if we ask often enough, and carefully enough, she will.

That’s how I learned the reality of superconductivity against my inbred bias. That’s how I learned how naive my faith in interferon had been.

With those cautions, let’s look at how we know what we think we know.

It starts with our parents. We start out truly impressed by our parents’ physical and intellectual capabilities. After all, they can walk! They can talk! They can (in some cases) do arithmetic!

Parents have a natural drive to stuff everything they know into our little heads, and we have a natural drive to suck it all in. It’s only later that we notice that not everyone agrees with our parents, and they aren’t necessarily the smartest beings on the planet. That’s when comparison shopping for ideas begins. Eventually, we develop our own ideas that fit our personalities.

Along the way, Mommy Nature has provided a guiding hand to either confirm or discredit our developing ideas. If we’re not pathological, we end up with a more or less reliable feel for what makes sense.

For example, almost everybody has a deep-seated conviction that torturing pets is wrong. We’ve all done bad things to pets, usually unintentionally, and found it made us feel sad. We don’t want to do it again.

So, if somebody advocates perpetrating cruelty to animals, most of us recoil. We’d have to be given a darn good reason to do it. Like, being told “If you don’t shoot that squirrel, there’ll be no dinner tonight.”

That would do it.

Our brains are full up with all kinds of ideas like that. When somebody presents us with a novel idea, or a report of something they suggest is a fact, our first line of defense is whether it makes sense to us.

If it’s unbelievable, it’s probably not true.

It could still be true, since a lot of unbelievable stuff actually happens, but it’s probably not. We can note it pending confirmation by other sources or some kind of experimental result (like looking to see the actual bloody mess).

But, we don’t buy it out of hand.

Nobody Gets It Completely Right

As Dr. Who (Tom Baker) once said: “To err is computer. To forgive is fine.”

The really naive attitude about news, which I used to hear a lot fifty or sixty years ago, is: “If it’s in print, it’s gotta be true.”

Reporters, editors and publishers are human. They make mistakes. And, catching those mistakes follows the 95:5 rule. That is, you’ll expend 95% of your effort to catch the last 5% of the errors. It’s also called “The Law of Diminishing Returns,” and it’s how we know when to quit obsessing.
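To see why, try a back-of-the-envelope model. The numbers below are invented, and the assumption that each editing pass catches half of the remaining errors is just that, an assumption:

```python
# Toy model of the 95:5 rule, a.k.a. the Law of Diminishing Returns.
# Assumes, purely for illustration, that each editing pass catches
# half of the errors still lurking in the copy.
errors_remaining = 100.0  # hypothetical starting error count
for n in range(1, 11):
    errors_remaining /= 2
    print(f"after pass {n:2d}: {100 - errors_remaining:6.2f}% of errors caught")
# Five passes catch about 97% of the errors; the next five passes all
# fight over the last 3%. At some point you have to quit obsessing.
```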

The way this works for the news business is that news output involves a lot of information. I’m not going to waste space here estimating the amount of information (in bits) in an average newspaper, but let’s just say it’s 1.3 s**tloads!

It’s a lot. Getting it all right, then getting it all corroborated, then getting it all fact checked (a different, and tougher, job than just corroboration), then putting it into words that convey that information to readers, is an enormous task, especially when a deadline is involved. It’s why the classic image of a journalist is some frazzled guy wearing a fedora pushed back on his head, suitcoat off, sleeves rolled up and tie loosened, maniacally tapping at a typewriter keyboard.

So, don’t expect everything you read to be right (or even spelled right).

The easiest things to get right are basic facts, the Who, What, Where, and When.

How many deaths due to Hurricane Maria on Puerto Rico? Estimates have run from 16 to nearly 3,000 depending on who’s doing the estimating, what axes they have to grind, and how they made the estimate. Nobody was ever able to collect the bodies in one place to count them. It’s unlikely that they ever found all the bodies to collect for the count!

Those are the first four Ws of news reporting. The fifth one, Why, is by far the hardest ’cause you gotta get inside someone’s head.

So, the last part of judging whether news is fake is recognizing that nobody gets it entirely right. Just because you see it in print doesn’t make it fact. And, just because somebody got it wrong, doesn’t make them a liar.

They could get one thing wrong, and most everything else right. In fact, they could get 5 things wrong, and 95 things right!

What you look for is folks who make the effort to try to get things right. If somebody is really trying, they’ll make some mistakes, but they’ll own up to them. They’ll say something like: “Yesterday we told you that there were 16 deaths, but today we have better information and the death toll is up to 2,975.”

Anybody who won’t admit they’re ever wrong is a liar, and whatever they say is most likely fake news.

Social Media and The Front Page

Walter Burns
Promotional photograph of Osgood Perkins as Walter Burns in the 1928 Broadway production of The Front Page

12 September 2018 – The Front Page was an hilarious one-set stage play supposedly taking place over a single night in the dingy press room of Chicago’s Criminal Courts Building overlooking the gallows behind the Cook County Jail. I’m not going to synopsize the plot because the Wikipedia entry cited above does such an excellent job it’s better for you to follow the link and read it yourself.

First performed in 1928, the play has been revived several times and suffered countless adaptations to other media. It’s notable for the fact that the main character, Hildy Johnson, originally written as a male part, is even more interesting as a female. That says something important, but I don’t know what.

By the way, I insist that the very best adaptation is Howard Hawks’ 1940 tour de force film entitled His Girl Friday starring Rosalind Russell as Hildy Johnson, and Cary Grant as the other main character Walter Burns. Burns is Johnson’s boss and ex-husband who uses various subterfuges to prevent Hildy from quitting her job and marrying an insurance salesman.

That’s not what I want to talk about today, though. What’s important for this blog posting is part of the play’s backstory. It’s important because it can help provide context for the entire social media industry, which is becoming so important for American society right now.

In that backstory, a critical supporting character is one Earl Williams, who’s a mousey little man convicted of murdering a policeman and sentenced to be executed the following morning right outside the press-room window. During the course of the play, it comes to light that Williams, confused by listening to a soapbox demagogue speaking in a public park, accidentally shot the policeman and was subsequently railroaded in court by a corrupt sheriff who wanted to use his execution to help get out the black(!?) vote for his re-election campaign.

What publicly executing a confused communist sympathizer has to do with motivating black voters I still fail to understand, but it makes as much sense as anything else the sheriff says or does.

This plot has so many twists and turns paralleling issues still resonating today that it’s ridiculous. That’s a large part of the play’s fun!

Anyway, what I want you to focus on right now is the subtle point that Williams was confused by listening to a soapbox demagogue.

Soapbox demagogues were a fixture in pre-Internet political discourse. The U.S. Constitution’s First Amendment explicitly gives private citizens the right to peaceably assemble in public places. For example, during the late 1960s a typical summer Sunday afternoon in any public park in North America or Europe would see a gathering of anywhere from 10 to 10,000 hippies for an impromptu “Love In,” or “Be In,” or “Happening.” With no structure or set agenda, folks would gather to do whatever seemed like a good idea at the time. My surrealist novelette Lilith describes a gathering of angels, said to be “the hippies of the supernatural world,” that was patterned after a typical Hippie Love In.

Similarly, a soapbox demagogue had the right to commandeer a picnic table, bandstand, or discarded soapbox to place himself (at the time they were overwhelmingly male) above the crowd of passersby that he hoped would listen to his discourse on whatever he wanted to talk about.

In the case of Earl Williams’ demagogue, the speech was about “production for use.” The feeble-minded Williams applied that idea to the policeman’s service weapon, with predictable results.

Fast forward to the twenty-first century.

I haven’t been hanging around local parks on Sunday afternoons for a long time, so I don’t know if soapbox demagogues are still out there. I doubt that they are because it’s easier and cheaper to log onto a social-media platform, such as Facebook, to shoot your mouth off before a much larger international audience.

I have browsed social media, however, and see the same sort of drivel that used to spew out of the mouths of soapbox demagogues back in the day.

The point I’m trying to make is that there’s really nothing novel about social media. A platform where anyone can say anything to anyone is the same as a last-century soapbox available to anyone who thought they had something to say. It’s a prominent right guaranteed in the Bill of Rights. In fact, it’s important enough to be guaranteed in the very first of the Bill’s amendments to the U.S. Constitution.

What is not included, however, is a proscription against anyone ignoring the HECK out of soapbox demagogues! They have the right to talk, but we have the right to not listen.

Back in the day, almost everybody passed by soapbox demagogues without a second glance. We all knew they climbed their soapboxes because it was the only venue they had to voice their opinions.

Preachers had pulpits in front of congregations, so you knew they had something to say that people wanted to hear. News reporters had newspapers people bought because they contained news stories that people wanted to read. Scholars had academic journals that other scholars subscribed to because they printed results of important research. Fiction writers had published novels folks read because they found them entertaining.

The list goes on.

Soapbox demagogues, however, had to stand on an impromptu platform because they didn’t have anything to say worth hearing. The only ones who stopped to listen were those, like the unemployed Earl Williams, who had nothing better to do.

The idea of pretending that social media are any more legitimate a venue for ideas than a park soapbox is just goofy.

Social media are not legitimate media for the exchange of ideas simply because anybody is able to say anything on them, just like a soapbox in a park. Like a soapbox in a park, most of what is said on social media isn’t worth hearing. It’s there because the barrier to entry is essentially nil. That’s why so many purveyors of extremist and divisive rhetoric gravitate to social media platforms. Legitimate media won’t carry them.

Legitimate media organizations have barriers to the entry of lousy ideas. For example, I subscribe to The Economist because of its former Editor-in-Chief, John Micklethwait, who impressed me as an excellent arbiter of ideas (despite having a weird last name). I was very pleased when he transferred over to Bloomberg News, which I consider the only televised outlet for globally significant news. The Wall Street Journal’s business focus forces Editor-in-Chief Matt Murray into a “just the facts, ma’am” stance because every newsworthy event creates both winners and losers in the business community, so content bias is a non-starter.

The common thread among these legitimate-media sources is the existence of an organizational structure focused on maintaining content quality. There are knowledgeable gatekeepers (called “editors”) charged with keeping out bad ideas.

So, when Donald Trump, for example, shows a preference for social media (in his case, Twitter) and an abhorrence of traditional news outlets, he’s telling us his ideas aren’t worth listening to. Legitimate media outlets disparage his views, so he’s forced to use the twenty-first century equivalent of a public-park soapbox: social media.

On social media, he can say anything to anybody because there’s nobody to tell him, “That’s a stupid thing to say. Don’t say it!”

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming-out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts on which to base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration” and “we should increase the use of telemedicine.” The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharonov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. That work runs straight into the potentially debilitating question of what constitutes knowledge: what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”
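To see how lopsided the layers are, here’s a minimal sketch using the open-source spaCy library (my choice, not Klein’s; you’d need to install it with pip install spacy and python -m spacy download en_core_web_sm first). It parses one of those booby-trapped sentences flawlessly at the syntax level while telling you nothing about whether it’s meant literally:

```python
# Syntax-layer demo: part-of-speech tags and dependency structure.
# spaCy here is my stand-in for "standard NLP approaches"; it needs the
# en_core_web_sm model downloaded before this will run.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("John just started World War 3.")

for token in doc:
    # pos_ = part of speech, dep_ = grammatical role, head = parent in the parse tree
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} head={token.head.text}")

# The grammar comes out fine. But nothing in this output hints at whether
# the sentence is literal news or sarcasm -- that's pragmatics, and it's
# nowhere in the parse tree.
```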

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio

Barry Cousins is Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same, unmatched expertise to HR professionals worldwide.

POTUS and the Peter Principle

Will Rogers & Wiley Post
In 1927, Will Rogers wrote: “I never met a man I didn’t like.” Here he is (on left) posing with aviator Wiley Post before their ill-fated flying exploration of Alaska. Everett Historical/Shutterstock

11 July 2018 – Please bear with me while I, once again, invert the standard news-story pyramid by presenting a great whacking pile of (hopefully entertaining) detail that leads eventually to the point of this column. If you’re too impatient to read it to the end, leave now to check out the latest POTUS rant on Twitter.

Unlike Will Rogers, who famously wrote, “I never met a man I didn’t like,” I’ve run across a whole slew of folks I didn’t like, to the point of being marginally misanthropic.

I’ve made friends with all kinds of people, from murderers to millionaires, but there are a few types that I just can’t abide. Top of that list is people who think they’re smarter than everybody else, and want you to acknowledge it.

I’m telling you this because I’m trying to be honest about why I’ve never been able to abide two recent Presidents: William Jefferson Clinton (#42) and Donald J. Trump (#45). Having been forced to observe their antics over an extended period, I’m pleased to report that they’ve both proved to be among the most corrupt individuals to occupy the Oval Office in recent memory.

I dislike them because they both show that same, smarmy self-satisfied smile when contemplating their own greatness.

Tricky Dick Nixon (#37) was also a world-class scumbag, but he never triggered the same automatic revulsion. That’s because, instead of always looking self-satisfied, he always looked scared. He was smart enough to recognize that he was walking a tightrope and that, if he stayed on it long enough, he eventually would fall off.

And, he did.

I had no reason for disliking #37 until the mid-1960s, when, as a college freshman, I researched a paper for a history class that happened to involve digging into the McCarthy hearings of the early 1950s. Seeing the future #37’s activities in that period helped me form an extremely unflattering picture of his character, which a decade later proved accurate.

During those years in between I had some knock-down, drag-out arguments with my rabid-Nixon-fan grandmother. I hope I had the self control never to have said “I told you so” after Nixon’s fall. She was a nice lady and a wonderful grandma, and wouldn’t have deserved it.

As Abraham Lincoln (#16) famously said: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

Since #45 came on my radar many decades ago, I’ve been trying to figure out what, exactly, is wrong with his brain. At first, when he was a real-estate developer, I just figured he had bad taste and was infantile. That made him easy to dismiss, so I did just that.

Later, he became a reality-TV star. His show, The Apprentice, made it instantly clear that he knew absolutely nothing about running a business.

No wonder his companies went bankrupt. Again, and again, and again….

I’ve known scads of corporate CEOs over the years. During the quarter century I spent covering the testing business as a journalist, I got to spend time with most of the corporate leaders of the world’s major electronics manufacturing companies. Unsurprisingly, the successful ones followed the best practices that I learned in MBA school.

Some of the CEOs I got to know were goofballs. Most, however, were absolutely brilliant. The successful ones all had certain things in common.

Chief among the characteristics of successful corporate executives is that they make the people around them happy to work for them. They make others feel comfortable, empowered, and enthusiastically willing to cooperate to make the CEO’s vision manifest.

Even Commendatore Ferrari, who I’ve heard was Hell to work for and Machiavellian in interpersonal relationships, made underlings glad to have known him. I’ve noticed that ‘most everybody who’s ever worked for Ferrari has become a Ferrari fan for life.

As far as I can determine, nobody ever sued him.

That’s not the impression I got of Donald Trump, the corporate CEO. He seemed to revel in conflict, making those around him feel like dog pooh.

Apparently, everyone who’s ever dealt with him has wanted to sue him.

That worked out fine, however, for Donald Trump, the reality-TV star. So-called “reality” TV shows generally survive by presenting conflict. The more conflict the better. Everybody always seems to be fighting with everybody else, and the winners appear to be those who consistently bully their opponents into feeling like dog pooh.

I see a pattern here.

The inescapable conclusion is that Donald Trump was never a successful corporate executive, but succeeded enormously playing one on TV.

Another characteristic I should mention of reality TV shows is that they’re unscripted. The idea seems to be that nobody knows what’s going to happen next, including the cast.

That relieves reality-TV stars of the need to learn lines. Actual movie stars and stage actors have to learn lines of dialog. Stories are tightly scripted so that they conform to Aristotle’s recommendations for how to write a successful plot.

Having written a handful of traditional motion-picture scripts as well as having produced a few reality-TV episodes, I know the difference. Following Aristotle’s dicta gives you the ability to communicate, and sometimes even teach, something to your audience. The formula reality-TV show, on the other hand, goes nowhere. Everybody (including the audience) ends up exactly where they started, ready to start the same stupid arguments over and over again ad nauseam.

Apparently, reality-TV audiences don’t want to actually learn anything. They’re more focused on ranting and raving.

Later on, following a long tradition among theater, film and TV stars, #45 became a politician.

At first, I listened to what he said. That led me to think he was a Nazi demagogue. Then, I thought maybe he was some kind of petty tyrant, like Mussolini. (I never considered him competent enough to match Hitler.)

Eventually, I realized that it never makes any sense to listen to what #45 says because he lies. That makes anything he says irrelevant.

FIRST PRINCIPLE: If you catch somebody lying to you, stop believing what they say.

So, it’s all bullshit. You can’t draw any conclusion from it. If he says something obviously racist (for example), you can’t conclude that he’s a racist. If he says something that sounds stupid, you can’t conclude he’s stupid, either. It just means he’s said something that sounds stupid.

Piling up this whole load of B.S., then applying Occam’s Razor, leads to the conclusion that #45 is still simply a reality-TV star. His current TV show is titled The Trump Administration. Its supporting characters are U.S. senators and representatives, executive-branch bureaucrats, news-media personalities, and foreign “dignitaries.” Some in that last category (such as Justin Trudeau and Emmanuel Macron) are reluctant conscripts into the cast, and some (such as Vladimir Putin and Kim Jong-un) gleefully play their parts, but all are bit players in #45’s reality TV show.

Oh, yeah. The largest group of bit players in The Trump Administration is every man, woman, child and jackass on the planet. All are, in true reality-TV style, going exactly nowhere as long as the show lasts.

Politicians have always been showmen. Of the Founding Fathers, the one who stands out for never coming close to becoming President was Benjamin Franklin. Franklin was a lot of things, and did a lot of things extremely well. But, he was never really a P.T.-Barnum-like showman.

Really successful politicians, such as Abraham Lincoln, Franklin Roosevelt (#32), Bill Clinton, and Ronald Reagan (#40) were showmen. They could wow the heck out of an audience. They could also remember their lines!

That brings us, as promised, to Donald Trump and the Peter Principle.

Recognizing the close relationship between Presidential success and showmanship gives some idea about why #45 is having so much trouble making a go of being President.

Before I dig into that, however, I need to point out a few things that #45 likes to claim as successes that actually aren’t:

  • The 2016 election was not really a win for Donald Trump. Hillary Clinton was such an unpopular candidate that she decisively lost on her own (de)merits. God knows why she was ever the Democratic Party candidate at all. Anybody could have beaten her. If Donald Trump hadn’t been available, Elmer Fudd could have won!
  • The current economic expansion has absolutely nothing to do with Trump policies. I predicted it back in 2009, long before anybody (with the possible exception of Vladimir Putin, who apparently engineered it) thought Trump had a chance of winning the Presidency. My prediction was based on applying chaos theory to historical data. It was simply time for an economic expansion. The only effect Trump can have on the economy is to screw it up. Being trained as an economist (You did know that, didn’t you?), #45 is unlikely to screw up so badly that he derails the expansion.
  • While #45 likes to claim a win on North Korean denuclearization, the Nobel Peace Prize is on hold while evidence piles up that Kim Jong-un was pulling the wool over Trump’s eyes at the summit.

Finally, we move on to the Peter Principle.

In 1969 Canadian writer Raymond Hull co-wrote a satirical book entitled The Peter Principle with Laurence J. Peter. It was based on research Peter had done on organizational behavior.

Peter (who died in 1990 at age 70) was not a management consultant or a behavioral psychologist. He was an Associate Professor of Education at the University of Southern California. He was also Director of the Evelyn Frieden Centre for Prescriptive Teaching at USC, and Coordinator of Programs for Emotionally Disturbed Children.

The Peter principle states: “In a hierarchy every employee tends to rise to his level of incompetence.”

Horrifying to corporate managers, the book went on to provide real examples and lucid explanations to show the principle’s validity. It works as satire only because it leaves the reader with a choice either to laugh or to cry.

See last week’s discussion of why academic literature is exactly the wrong form with which to explore really tough philosophical questions in an innovative way.

Let’s be clear: I’m convinced that the Peter principle is God’s Own Truth! I’ve seen dozens of examples that confirm it, and no counter examples.

It’s another proof that Mommy Nature has a sense of humor. Anyone who disputes that has, philosophically speaking, a piece of paper taped to the back of his (or her) shirt with the words “Kick Me!” written on it.

A quick perusal of the Wikipedia entry on the Peter Principle elucidates: “An employee is promoted based on their success in previous jobs until they reach a level at which they are no longer competent, as skills in one job do not necessarily translate to another. … If the promoted person lacks the skills required for their new role, then they will be incompetent at their new level, and so they will not be promoted again.”
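The mechanism is easy to turn into a toy simulation. Everything in this Python sketch is invented for illustration: the org size, the promote-the-star rule, and, crucially, the principle’s key assumption that competence re-rolls at random on promotion because skills in one job don’t translate to the next:

```python
# Toy Peter Principle simulation: promote the best performer at each level,
# but give them a fresh random competence in the new job (skills don't transfer).
import random

random.seed(1)
LEVELS, STAFF = 5, 20
# org[level] holds competence scores from 0.0 (hopeless) to 1.0 (brilliant)
org = [[random.random() for _ in range(STAFF)] for _ in range(LEVELS)]

def averages():
    return [round(sum(level) / STAFF, 2) for level in org]

print("before promotions:", averages())

for _ in range(1000):
    lvl = random.randrange(LEVELS - 1)                  # a level with a rung above it
    star = max(range(STAFF), key=org[lvl].__getitem__)  # most competent employee there
    vacancy = random.randrange(STAFF)                   # somebody upstairs retires
    org[lvl + 1][vacancy] = random.random()             # star's skill in the NEW job: a fresh roll
    org[lvl][star] = random.random()                    # a new hire backfills the old job

print("after promotions: ", averages())
# The stellar records that earned the promotions buy the upper levels nothing:
# every level hovers at coin-flip (~0.5) average competence, and anyone whose
# fresh roll comes up low stays put -- at their level of incompetence.
```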

I leave it as an exercise for the reader (and the media) to find the numerous examples where #45, as a successful reality-TV star, has the skills he needed to be promoted to President, but not those needed to be competent in the job.

Why Not Twitter?

Tweety birds
Character limitations mean Twitter messages have room to carry essentially no information. Shutterstock Image

20 June 2018 – I recently received a question: “Do you use Twitter?” The sender was responding positively to a post on this blog. My response was a terse: “I do not use Twitter.”

That question deserved a more extensive response. Well, maybe not “deserved,” since this post has already exceeded the maximum 280 characters allowed in a Twitter message. In fact, not counting the headline, dateline or image caption, it’s already 431 characters long!

That gives you an idea how much information you can cram into 280 characters. Essentially none. That’s why Twitter messages make their composers sound like airheads.

The average word in the English language is six characters long, not counting the spaces. So, to say one word, you need (on average) seven characters. If you’re limited to 280 characters, that means you’re limited to 280/7 = 40 words. A typical posting on this blog is roughly 1,300 words (this posting, by the way, is much shorter). A typical page in a paperback novel contains about 300 words. The first time I agreed to write a book for print, the publisher warned me that the manuscript needed to be at least 80,000 words to be publishable.
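Here’s that arithmetic as a quick Python sanity check, using the same rule-of-thumb numbers quoted above (estimates, not measurements):

```python
# Back-of-the-envelope word budgets, using the rule-of-thumb numbers above.
CHARS_PER_WORD = 7   # ~6 letters per average English word, plus a space
TWEET_LIMIT = 280    # Twitter's character cap

words_per_tweet = TWEET_LIMIT // CHARS_PER_WORD
print(f"words per tweet: {words_per_tweet}")  # 280 / 7 = 40

for label, words in [("typical blog post", 1_300),
                     ("paperback page", 300),
                     ("minimum book manuscript", 80_000)]:
    print(f"tweets needed for a {label}: {words / words_per_tweet:,.1f}")
# One blog post is worth more than thirty tweets; a book manuscript, two thousand.
```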

When I first started writing for business-to-business magazines, a typical article was around 2,500 words. We figured that was about right if you wanted to teach anybody anything useful. Not long afterward, when I’d (surprisingly quickly) climbed the journalist ranks to Chief Editor, I expressed the goal for any article written in our magazine (the now defunct Test & Measurement World) in the following way:

“Imagine an engineer facing a problem in the morning and not knowing what to do. If, during lunch, that engineer reads an article in our magazine and goes back to work knowing how to solve the problem, then we’ve done our job.”

That takes about 2,500 words. Since then, pressure from advertisers pushed us toward shorter articles in the 1,250-word range. Of course, all advertisers really want any article to say is, “BUY OUR STUFF!”

That is NOT what business-to-business readers want articles to say. They want articles that tell them how to solve their problems. You can see who publishers listened to.

Blog postings are, essentially, stand-alone editorials.

From about day one as Chief Editor, I had to write editorials. I’d learned about editorial writing way back in Mrs. Langley’s eighth grade English class. I doubt Mrs. Langley ever knew how much I learned in her class, but it was a lot. Including how to write an editorial.

A successful editorial starts out introducing some problem, then explains little things like why it’s important and what it means to people like the reader, then tells the reader what to do about it. That last bit is what’s called the “Call to Action,” and it’s the most important part, and what everything else is there to motivate.

If your “problem” is easy to explain, you can often get away with an editorial 500 words long. Problems that are more complex or harder to explain take more words. Editorials can often reach 1,500 words.

If it can’t be done in 1,500 words, find a different problem to write your editorial about.

Now, magazine designers generally provide room for 500-1,000 word editorials, and editors generally work hard to stay within that constraint. Novice editors quickly learn that it takes a lot more work to write short than to write long.

Generally, writers start by dumping vast quantities of words into their manuscripts just to get the ideas out there, recorded in all their long-winded glory. Then, they go over that first draft, carefully searching for the most concise way to say what they want to say that still makes sense. Then, they go back and throw out all the ideas that really didn’t add anything to their editorial in the first place. By then, they’ve slashed the word count to close to what it needs to be.

After about five passes through the manuscript, the writer runs out of ways to improve the text, and hands it off to a production editor, who worries about things like grammar and spelling, as well as cramming it into the magazine space available. Then the managing editor does basically the same thing. Then the Chief Editor gets involved, saying “Omygawd, what is this writer trying to tell me?”

Finally, after at least two rounds through this cycle, the article ends up doing its job (telling the readers something worth knowing) in the space available, or it gets “killed.”

“Killed” varies from just a mild “We’ll maybe run it sometime in the future,” to the ultimate “Stake Through The Heart,” which means it’ll never be seen in print.

That’s the process any piece of professional writing goes through. It takes days or weeks to complete, and it guarantees compact, dense, information-packed reading material. And, the shorter the piece, the more work it takes to pack the information in.

Think of cramming ten pounds of bovine fecal material into a five pound bag!

Is that how much work goes into the average Twitter feed?

I don’t think so! The Twitter feeds I’ve seen sound like something written on a bathroom wall. They look like they were dashed off as fast as two fingers can type them, and they make their authors sound like illiterates.

THAT’s why I don’t use Twitter.

This blog posting, by the way, is a total of 5,415 characters long.

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo. It’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

An essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, concluded that the only reason he (the author) was sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which, as Mr. Foley, my high-school Latin teacher, convinced me, translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory about it’s all having been made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of parsimony” (again according to that same Wikipedia entry), what it means is: when faced with alternate explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shovelled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having disposed of the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.