Why Not Twitter?

Tweety birds
Character limitations mean Twitter messages have room to carry essentially no information. Shutterstock Image

20 June 2018 – I recently received a question: “Do you use Twitter?” The sender was responding positively to a post on this blog. My response was a terse: “I do not use Twitter.”

That question deserved a more extensive response. Well, maybe not “deserved,” since this post has already exceeded the maximum 280 characters allowed in a Twitter message. In fact, not counting the headline, dateline or image caption, it’s already 431 characters long!

That gives you an idea how much information you can cram into 280 characters. Essentially none. That’s why Twitter messages make their composers sound like airheads.

The average word in the English language is six characters long, not counting the spaces. So, to say one word, you need (on average) seven characters. If you’re limited to 280 characters, that means you’re limited to 280/7 = 40 words. A typical posting on this blog is roughly 1,300 words (this posting, by the way, is much shorter). A typical page in a paperback novel contains about 300 words. The first time I agreed to write a book for print, the publisher warned me that the manuscript needed to be at least 80,000 words to be publishable.
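If you want to check that arithmetic, here’s the back-of-the-envelope version as a few lines of Python. It’s purely an illustration: the six-character average and the 280-character limit come straight from the paragraph above, and everything else (the variable names, the 1,300-word post) is just there to make the calculation concrete.

```python
# Back-of-the-envelope estimate of how many words fit in one tweet,
# using the averages quoted above (illustrative only).
AVG_WORD_CHARS = 6                    # average English word length, per the text
CHARS_PER_WORD = AVG_WORD_CHARS + 1   # add one character for the trailing space
TWEET_LIMIT = 280                     # Twitter's character limit

words_per_tweet = TWEET_LIMIT // CHARS_PER_WORD
tweets_per_post = (1300 * CHARS_PER_WORD) // TWEET_LIMIT

print(f"Roughly {words_per_tweet} words fit in one tweet")                   # ~40
print(f"A 1,300-word blog post would take about {tweets_per_post} tweets")   # ~32
```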

When I first started writing for business-to-business magazines, a typical article was around 2,500 words. We figured that was about right if you wanted to teach anybody anything useful. Not long afterward, when I’d (surprisingly quickly) climbed the journalist ranks to Chief Editor, I expressed the goal for any article written in our magazine (the now defunct Test & Measurement World) in the following way:

“Imagine an engineer facing a problem in the morning and not knowing what to do. If, during lunch, that engineer reads an article in our magazine and goes back to work knowing how to solve the problem, then we’ve done our job.”

That takes about 2,500 words. Since then, pressure from advertisers has pushed us toward shorter articles in the 1,250-word range. Of course, all advertisers really want any article to say is, “BUY OUR STUFF!”

That is NOT what business-to-business readers want articles to say. They want articles that tell them how to solve their problems. You can see who publishers listened to.

Blog postings are, essentially, stand-alone editorials.

From about day one as Chief Editor, I had to write editorials. I’d learned about editorial writing way back in Mrs. Langley’s eighth grade English class. I doubt Mrs. Langley ever knew how much I learned in her class, but it was a lot. Including how to write an editorial.

A successful editorial starts out introducing some problem, then explains little things like why it’s important and what it means to people like the reader, then tells the reader what to do about it. That last bit is what’s called the “Call to Action.” It’s the most important part, and everything else is there to motivate it.

If your “problem” is easy to explain, you can often get away with an editorial 500 words long. Problems that are more complex or harder to explain take more words. Editorials can often reach 1,500 words.

If it can’t be done in 1,500 words, find a different problem to write your editorial about.

Now, magazine designers generally provide room for editorials of 500 to 1,000 words, and editors generally work hard to stay within that constraint. Novice editors quickly learn that it takes a lot more work to write short than to write long.

Generally, writers start by dumping vast quantities of words into their manuscripts just to get the ideas out there, recorded in all their long-winded glory. Then, they go over that first draft, carefully searching for the most concise way to say what they want to say that still makes sense. Then, they go back and throw out all the ideas that really didn’t add anything to their editorial in the first place. By then, they’ve slashed the word count to close to what it needs to be.

After about five passes through the manuscript, the writer runs out of ways to improve the text, and hands it off to a production editor, who worries about things like grammar and spelling, as well as cramming it into the magazine space available. Then the managing editor does basically the same thing. Then the Chief Editor gets involved, saying “Omygawd, what is this writer trying to tell me?”

Finally, after at least two rounds through this cycle, the article ends up doing its job (telling the readers something worth knowing) in the space available, or it gets “killed.”

“Killed” varies from just a mild “We’ll maybe run it sometime in the future,” to the ultimate “Stake Through The Heart,” which means it’ll never be seen in print.

That’s the process any piece of professional writing goes through. It takes days or weeks to complete, and it guarantees compact, dense, information-packed reading material. And, the shorter the piece, the more work it takes to pack the information in.

Think of cramming ten pounds of bovine fecal material into a five pound bag!

Is that how much work goes into the average Twitter feed?

I don’t think so! The Twitter feeds I’ve seen sound like something written on a bathroom wall. They look like they were dashed off as fast as two fingers can type them, and they make their authors sound like illiterates.

THAT’s why I don’t use Twitter.

This blog posting, by the way, is a total of 5,415 characters long.

What If They Gave a War, But Nobody Noticed

Cyberwar
World War III is being fought in cyberspace right now, but most of us seem to be missing it! Oliver Denker/Shutterstock

13 June 2018 – Ever wonder why Kim Jong Un is so willing to talk about giving up his nuclear arsenal? Sort-of-President Donald Trump (POTUS) seems to think it’s because economic sanctions are driving North Korea (officially the Democratic People’s Republic of Korea, or DPRK) to the financial brink.

That may be true, but it is far from the whole story. As usual, the reality star POTUS is stuck decades behind the times. The real World War III won’t have anything to do with nukes, and it’s started already.

The threat of global warfare using thermonuclear weapons was panic-inducing to my father back in the 1950s and 1960s. Strangely, however, my superbrained mother didn’t seem very worried at the time.

By the 1980s, we were beginning to realize what my mother seemed to know instinctively — that global thermonuclear war just wasn’t going to happen. That kind of war leaves such an ungodly mess that no even-marginally-sane person would want to win one. The winners would be worse off than the losers!

The losers would join the gratefully dead, while the winners would have to live in the mess!

That’s why we don’t lose sleep at night knowing that the U.S., Russia, China, India, Pakistan, and, in fact, most countries in the first and second worlds, have access to thermonuclear weapons. We just worry about third-world toilets (to quote Danny DeVito’s character in The Jewel of the Nile) run by paranoid homicidal maniacs getting their hands on the things. Those guys are the only ones crazy enough to ever actually use them!

We only worried about North Korea developing nukes when Kim Jong Un was acting like a total whacko. Since he stopped his nuclear development program (because his nuclear lab accidentally collapsed under a mountain of rubble), it’s begun looking like he was no more insane than the leaders of Leonard Wibberley’s fictional nation-state, the Duchy of Grand Fenwick.

In Wibberley’s 1955 novel The Mouse That Roared, the Duchy’s leaders all breathed a sigh of relief when their captured doomsday weapon, the Q-Bomb, proved to be a dud.

Yes, there is a hilarious movie to be made documenting the North Korean nuclear and missile programs.

Okay, so we’ve disposed of the idea that World War III will be a nuclear holocaust. Does that mean, as so many starry-eyed astrophysicists imagined in the late 1940s, the end of war?

Fat f-ing chance!

The winnable war in the Twenty-First Century is one fought in cyberspace. In fact, it’s going on right now. And, you’re missing it.

Cybersecurity and IT expert Theresa Payton, CEO of Fortalice Solutions, asserts that suspected North Korean hackers have been conducting offensive cyber operations on financial institutions amid discussions between Washington and Pyongyang on a possible nuclear summit between President Trump and Kim Jong Un.

“The U.S. has been able to observe North Korean-linked hackers targeting financial institutions in order to steal money,” she says. “This isn’t North Korea’s first time meddling in serious hacking schemes. This time, it’s likely because the international economic sanctions have hurt them in their wallets and they are desperate and strapped for cash.”

There is a long laundry list of cyberattacks that have been perpetrated against U.S. and European interests, including infrastructure, corporations and individuals.

“One of N. Korea’s best assets … is to flex its muscle using its elite trained cyber operations,” Payton asserts. “Their cyber weapons can be used to fund their government by stealing money, to torch organizations and governments that offend them (look at Sony hacking), to disrupt our daily lives through targeting critical infrastructure, and more. The Cyber Operations of N. Korea is a powerful tool for the DPRK to show their displeasure at anything and it’s the best bargaining chip that Kim Jong Un has.”

Clearly, the DPRK is not the only bad state actor out there. Russia has long been in the news using various cyberwar tactics against the U.S., Europe and others. China has also been blamed for cyberattacks. In fact, cyberwarfare is a cheap, readily available alternative to messy and expensive nuclear weapons for anyone with Internet access (meaning just about everybody) who wishes to do anybody harm, including us.

“You can take away their Nukes,” Payton points out, “but you will have a hard time dismantling their ability to attack critical infrastructure, businesses and even civilians through cyber operations.”

Programming Notes: I’ve been getting a number of comments on this blog each day, and it looks like we need to set some ground rules. At least, I need to be explicit about things I will accept and things I won’t:

  • First off, remember that this isn’t a social media site. When you make a comment, it doesn’t just spill out into the blog site. Comments are sequestered until I go in and approve or reject them. So far, the number of comments is low enough that I can go through and read each one, but I don’t do it every day. If I did, I’d never get any new posts written! Please be patient.
  • Do not embed URLs to other websites in comments. I’ll strip them out even if I approve your comment otherwise. The reason is that I don’t have time to vet every URL, and I stick to journalistic standards, which means I don’t allow anything in the blog that I can’t verify. There are no exceptions.
  • This is an English language site ONLY. Comments in other languages are immediately deleted. (For why, see above.)
  • Use Standard English written in clear, concise prose. If I have trouble understanding what you’re trying to say, I won’t give your comment any space. If you can’t write a cogent English sentence, take an ESL writing course!

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo. It’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

That essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, concluded that the only reason he (the author) was sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which Mr. Foley, my high-school Latin teacher, convinced me translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory that it was all made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of parsimony” (again according to that same Wikipedia entry), what it means is this: when faced with alternate explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shovelled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having laid out the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.

What’s So Bad About Cryptocurrencies?

15 March 2018 – Cryptocurrency fans point to the vast “paper” fortunes that have been amassed by some bitcoin speculators, and sometimes predict that cryptocurrencies will eventually displace currencies issued and regulated by national governments. Conversely, banking-system regulators in several nations, most notably China and Russia, have outright bans on using cryptocurrency (specifically bitcoin) as a medium of exchange.

At the same time, it appears that fintech (financial technology) pundits pretty universally agree that blockchain technology, which is the enabling technology behind all cryptocurrency efforts, is the greatest thing since sliced bread, or, more to the point, the invention of ink on papyrus (IoP). Before IoP, financial records relied on clanky technologies like bundles of knotted cords, ceramic Easter eggs with little tokens baked inside, and that poster child for early written records, the clay tablet.

IoP immediately made possible tally sheets, journal and record books, double-entry ledgers, and spreadsheets. Without thin sheets of flat stock you could bind together into virtually unlimited bundles and then make indelible marks on, the concept of “bookkeeping” would be unthinkable. How could you keep books without having books to keep?

Blockchain is basically taking the concept of double-entry ledger accounting to the next (digital) level. I don’t pretend to fully understand how blockchain works. It ain’t my bailiwick. I’m a physicist, not a computer scientist.

To me, computers are tools. I think of them the same way I think of hacksaws, screw drivers, and CNC machines. I’m happy to have ’em and anxious to know how to use ’em. How they actually work and, especially, how to design them are details I generally find of marginal interest.

If it sounds like I’m backing away from any attempt to explain blockchains, that’s because I am. There are lots of people out there who are willing and able to explain blockchains far better than I could ever hope to.

Money, on the other hand, is infinitely easier to make sense of, and it’s something I studied extensively in MBA school. And, that’s really what cryptocurrencies are all about. It’s also the part of cryptocurrency that its fans seem to have missed.

Once upon a time, folks tried to imbue their money (currency) with some intrinsic value. That’s why they used to make coins out of gold and silver. When Marco Polo introduced the Chinese concept of promissory notes to Renaissance Europe, it became clear that paper currency was possible provided there were two characteristics that went with it:

  • Artifact is some kind of thing (and I can’t identify it any more precisely than with the word “thing” because just about anything and everything has been tried and found to work) that people can pass between them to form a transaction; and
  • Underlying Value is some form of wealth that stands behind the artifact and gives an agreed-on value to the transaction.

For cryptocurrencies, the artifact consists of entries in a computer memory. The transactions are simply changes in the entries in computer memories. More specifically, blockchains amount to electronic ledger entries in a common database that forever leave an indelible record of transactions. (Sound familiar?)
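I promised above not to try to explain real blockchains, and I won’t, but purely as an illustration of that “indelible ledger entry” idea, here’s a toy sketch in Python. Everything in it (the function names, the sample transactions) is invented for illustration; it is not how bitcoin or any actual cryptocurrency works. The only point it makes is the one in the paragraph above: each entry is chained to the hash of the one before it, so quietly rewriting an old entry breaks every hash that follows.

```python
# Toy "indelible ledger": each entry records a hash of the previous entry,
# so tampering with history is immediately detectable. Illustration only.
import hashlib
import json

def entry_hash(body: dict) -> str:
    """Deterministically hash an entry's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(ledger: list, transaction: str) -> None:
    """Add a transaction, chained to the hash of the previous entry."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"transaction": transaction, "prev_hash": prev}
    ledger.append({**body, "hash": entry_hash(body)})

def is_intact(ledger: list) -> bool:
    """Check that no entry has been altered after the fact."""
    for i, entry in enumerate(ledger):
        expected_prev = ledger[i - 1]["hash"] if i else "0" * 64
        body = {"transaction": entry["transaction"], "prev_hash": entry["prev_hash"]}
        if entry["prev_hash"] != expected_prev or entry["hash"] != entry_hash(body):
            return False
    return True

ledger = []
append_entry(ledger, "Alice pays Bob 5 units")
append_entry(ledger, "Bob pays Carol 2 units")
print(is_intact(ledger))                                # True
ledger[0]["transaction"] = "Alice pays Bob 500 units"   # rewrite history
print(is_intact(ledger))                                # False: the chain no longer checks out
```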

Originally, the underlying value of traditional currencies was imagined to be the wealth represented by the metal in a coin, or the intrinsic value of a jewel, and so forth. More recently, folks have begun imagining that the underlying value of government-issued currency (dollars, pounds sterling, yuan) is fictitious. They began to believe the value of a dollar was whatever people believed it was.

According to this idea, anybody could issue currency as long as they got a bunch of people together to agree that it had some value. Put that concept together with the blockchain method of common recordkeeping, and you get cryptocurrency.

I’m oversimplifying all this in an effort to keep this posting within rational limits and to make a point, so bear with me. The point I’m trying to make is that the difference between any cryptocurrency and U.S. dollars is that these cryptocurrencies have no underlying value.

I’ve heard the argument that there’s no underlying value behind U.S. dollars, either. That just ain’t so! Having dollars issued by the U.S. government and tied to the U.S. tax base connects dollars to the U.S. economy. In other words, the underlying value backing up the artifacts of U.S. dollars is the entire U.S. economy. The total U.S. economic output in 2016, as measured by gross domestic product (GDP), was just under 20 trillion dollars. That ain’t nothing!