Computers Are Revolting!

Will Computers Revolt? cover
Charles Simon’s Will Computers Revolt? looks at the future of interactions between artificial intelligence and the human race.

14 November 2018 – I just couldn’t resist the double meaning allowed by the title for this blog posting. It’s all I could think of when reading the title of Charles Simon’s new book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence.

On one hand, yes, computers are revolting. Yesterday my wife and I spent two hours trying to figure out how to activate our Netflix account on my new laptop. We ended up having to change the email address and password associated with the account. And, we aren’t done yet! The nice lady at Netflix sadly informed me that in thirty days, their automated system would insist that we log back into the account on both devices due to the change.

That’s revolting!

On the other hand, the uprising has already begun. Computers are revolting in the sense that they’re taking power to run our lives.

We used to buy stuff just by picking it off the shelf, then walking up to the check-out counter and handing over a few pieces of green paper. The only thing that held up the process was counting out the change.

Later, when credit cards first reared their ugly heads, we had to wait a few minutes for the salesperson to fill out a sales form, then run our credit cards through the machine. It was all manual. No computers involved. It took little time once you learned how to do it, and, more importantly, the process was pretty much the same everywhere and never changed, so once you learned it, you’d learned it “forever.”

Not no more! How much time do you, I, and everyone else waste navigating the multiple pages we have to walk through just to pay for anything with a credit or debit card today?

Even worse, every store has different software using different screens to ask different questions. So, we can’t develop a habitual routine for the process. It’s different every time!

Not long ago the banks issuing my debit-card accounts switched to those %^&^ things with the chips. I always forget to put the thing in the slot instead of swiping the card across the magnetic-stripe reader. When that happens we have to start the process all over, wasting even more time.

The computers have taken over, so now we have to do what they tell us to do.

Now we know who’ll be first against the wall when the revolution comes. It’s already here and the first against the wall is us!

Golem Literature in Perspective

But seriously folks, Simon’s book is the latest in a long tradition of works by thinkers fascinated by the idea that someone could create an artifice that would pass for a human. Perhaps the earliest, and certainly the most iconic, are the golem stories from Jewish folklore. I suspect (on no authority whatsoever, but it does seem likely) that the idea of a golem appeared about the time human sculptors started making statues in realistic human form. That was very early, indeed!

A golem is, for those who aren’t familiar with the term or willing to follow the link provided above to learn about it, an artificial creature fashioned by a human that is effectively what we call a “robot.” The folkloric golems were made of clay or (sometimes) wood because those were the best materials the stories’ authors could imagine artisans working with at the time. A well-known golem story is Carlo Collodi’s The Adventures of Pinocchio.

By the sixth century BCE, Greek sculptors had begun to produce lifelike statues. The myth of Pygmalion and Galatea appeared in a pseudo-historical work by Philostephanus Cyrenaeus in the third century BCE. Pygmalion was a sculptor who made a statue representing his ideal woman, then fell in love with it. Aphrodite granted his prayer for a wife exactly like the statue by bringing the statue to life. The wife’s name was Galatea.

The Talmud points out that Adam started out as a golem. Like Galatea, Adam was brought to life when the Hebrew God Yahweh gave him a soul.

These golem examples emphasize the idea that humans, no matter how holy or wise, cannot give their creations a soul. The best they can do is to create automatons.

Simon effectively begs to differ. He spends the first quarter of his text laying out the case that it is possible, and indeed inevitable, that automated control systems displaying artificial general intelligence (AGI) capable of thinking at or (eventually) well above human capacity will appear. He spends the next half of his text showing how such AGI systems could be created and making the case that they will eventually exhibit functionality indistinguishable from consciousness. He devotes the rest of his text to speculating about how we, as human beings, will likely interact with such hyperintelligent machines.

Spoiler Alert

Simon’s answer to the question posed by his title is a sort-of “yes.” He feels AGIs will inevitably displace humans as the most intelligent beings on our planet, but won’t exactly “revolt” at any point.

“The conclusion,” he says, “is that the paths of AGIs and humanity will diverge to such an extent that there will be no close relationship between humans and our silicon counterparts.”

There won’t be any violent conflict because robotic needs are sufficiently dissimilar to ours that there won’t be any competition for scarce resources, which is what leads to conflict between groups (including between species).

Robots, he posits, are unlikely to care enough about us to revolt. There will be no Terminator robots seeking to exterminate us because they won’t see us as enough of a threat to bother with. They’re more likely to view us much the way we view squirrels and birds: pleasant fixtures of the natural world.

They won’t, of course, tolerate any individual humans who make trouble for them the same way we wouldn’t tolerate a rabid coyote. But, otherwise, so what?

So, the !!!! What?

The main value of Simon’s book is not in its ultimate conclusion. That’s basically informed opinion. Rather, its value lies in the voluminous detail he provides in getting to that conclusion.

He spends the first quarter of his text detailing exactly what he means by AGI. What functions are needed to make it manifest? How will we know when it rears its head (ugly or not, as a matter of taste)? How will a conscious, self-aware AGI system act?

A critical point Simon makes in this section is the assertion that AGI will arise first in autonomous mobile robots. I thoroughly agree for pretty much the same reasons he puts forth.

I first started seriously speculating about machine intelligence back in the middle of the twentieth century. I never got too far – certainly not as far as Simon gets in this volume – but pretty much the first thing I actually did realize was that it was impossible to develop any kind of machine with any recognizable intelligence unless its main feature was having a mobile body.

Developing any AGI feature requires the machine to have a mobile body. It has to take responsibility not only for deciding how to move itself about in space, but for figuring out why. Why, for example, would it rather be over there than just stay here? Note that biological intelligence arose in animals, not in plants!
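
To make that “figuring out why” concrete, here’s a toy sketch in Python (entirely my own illustration, nothing from Simon’s book) of the simplest possible motive for moving: the machine scores each location against its internal needs and relocates only when somewhere else scores higher.

    # Toy illustration (not from Simon's book): an agent "wants" to move
    # only when another spot better serves its internal needs.
    def desirability(spot, battery):
        # a charging station is attractive only when the battery runs low
        return spot["charge_rate"] * (1.0 - battery) + spot["comfort"]

    spots = {
        "here":       {"charge_rate": 0.0, "comfort": 0.5},
        "over_there": {"charge_rate": 1.0, "comfort": 0.1},
    }

    for battery in (0.9, 0.2):
        best = max(spots, key=lambda name: desirability(spots[name], battery))
        print(f"battery at {battery:.0%}: best spot is '{best}'")

Trivial as it is, even this sketch needs a body that can be “here” or “over there.” Strip out the mobility and there’s nothing left to decide.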

Simultaneously with reading Simon’s book, I was re-reading Robert A. Heinlein’s 1966 novel The Moon is a Harsh Mistress, which is one of innumerable fiction works whose plot hangs on actions of a superintelligent sentient computer. I found it interesting to compare Heinlein’s early fictional account with Simon’s much more informed discussion.

Heinlein sidesteps the mobile-body requirement by making his AGI arise in a computer tasked with operating the entire infrastructure of the first permanent human colony on the Moon (more accurately in the Moon, since Heinlein’s troglodytes burrowed through caves and tunnels, coming up to the surface only reluctantly when circumstances forced them to). He also avoids trying to imagine the AGI’s inner workings, glossing over them with the 1950s technology he was most familiar with.

In his rather longish second section, Simon leads his reader through a thought experiment speculating about what components an AGI system would need to have for its intelligence to develop. What sorts of circuitry might be needed, and how might it be realized? This section might be fascinating for those wanting to develop hardware and software to support AGI. For those of us watching from our armchairs on the outside, though, not so much.

Altogether, Charles Simon’s Will Computers Revolt? is an important book that’s fairly easy to read (or, at least as easy as any book this technical can be) and accessible to a wide range of people interested in the future of robotics and artificial intelligence. It is not the last word on this fast-developing field by any means. It is, however, a starting point for the necessary debate over how we should view the subject. Do we have anything to fear? Do we need to think about any regulations? Is there anything to regulate and would any such regulations be effective?

Climate Models Bat Zero

Climate models vs. observations
Whups! Since the 1970s, climate models have overestimated global temperature rise by … a lot! Cato Institute

The articles discussed here reflect the print version of The Wall Street Journal, rather than the online version. Links to online versions are provided. The publication dates and some of the contents do not match.

10 October 2018 – Baseball is well known to be a game of statistics. Fans pay as much attention to statistical analysis of performance by both players and teams as they do to action on the field. They hope to use those statistics to indicate how teams and individual players are likely to perform in the future. It’s an intellectual exercise that is half the fun of following the sport.

While baseball statistics are quoted to three decimal places, or one part in a thousand, fans know to ignore the last decimal place, be skeptical of the second decimal place, and recognize that even the first decimal place has limited predictive power. It’s not that these statistics are inaccurate or in any sense meaningless, it’s that they describe a situation that seems predictable, yet is full of surprises.

With 18 players in a game at any given time, a complex set of rules, and at least three players and an umpire involved in the outcome of every pitch, a baseball game is a highly chaotic system. What makes it fun is seeing how this system evolves over time. Fans get involved by trying to predict what will happen next, then quickly seeing if their expectations materialize.

The essence of a chaotic system is conditional unpredictability. That is, the predictability of any result drops more-or-less drastically with time. For baseball, the probability of, say, a player maintaining their batting average is fairly high on a weekly basis, drops drastically on a month-to-month basis, and simply can’t be predicted from year to year.

Folks call that “streakiness,” and it’s one of the hallmarks of mathematical chaos.

Since the 1960s, mathematicians have recognized that weather is also chaotic. You can say with certainty what’s happening right here right now. If you make careful observations and take into account what’s happening at nearby locations, you can be fairly certain what’ll happen an hour from now. What will happen a week from now, however, is a crapshoot.
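
If you want to see conditional unpredictability in action, here’s a minimal Python sketch (my illustration, using the textbook logistic map rather than any real weather model) of two forecasts that start from nearly identical measurements:

    # Two runs of the logistic map (a standard toy chaotic system) whose
    # starting points differ by one part in a million, i.e., a tiny
    # measurement error in the initial conditions.
    def step(x, r=4.0):        # r = 4 puts the map in its chaotic regime
        return r * x * (1.0 - x)

    x_a, x_b = 0.400000, 0.400001

    for n in range(1, 31):
        x_a, x_b = step(x_a), step(x_b)
        if n % 5 == 0:
            print(f"step {n:2d}: run A = {x_a:.4f}, run B = {x_b:.4f}, "
                  f"gap = {abs(x_a - x_b):.4f}")

For the first several steps the two runs agree nicely (the “hour from now” regime). By step 25 or so they have nothing to do with each other (the “week from now” crapshoot).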

This drives insurance companies crazy. They want to live in a deterministic world where they can predict their losses far into the future so that they can plan to have cash on hand (loss reserves) to cover them. That works for, say, life insurance. It works poorly for losses due to extreme-weather events.

That’s because weather is chaotic. Predicting catastrophic weather events next year is like predicting Miami Marlins pitcher Drew Steckenrider’s earned-run average for the 2019 season.

Laugh out loud.

Notes from 3 October

My BS detector went off big-time when I read an article in this morning’s Wall Street Journal entitled “A Hotter Planet Reprices Risk Around the World.” That headline is BS for many reasons.

Digging into the article turned up the assertion that insurance providers were using deterministic computer models to predict risk of losses due to future catastrophic weather events. The article didn’t say that explicitly. We have to understand a bit about computer modeling to see what’s behind the words they used. Since I’ve been doing that stuff since the 1970s, pulling aside the curtain is fairly easy.

I’ve also covered risk assessment in industrial settings for many years. It’s not done with deterministic models. It’s not even done with traditional mathematics!

The traditional mathematics you learned in grade school uses real numbers. That is, numbers with a definite value.

Like Pi.

Pi = 3.1415926 ….

We know what Pi is because it’s measurable. It’s the ratio of a circle’s circumference to its diameter.

Measure the circumference. Measure the diameter. Then divide one by the other.

The ancient Egyptians performed the exercise a kazillion times and noticed that, no matter what circle you used, no matter how big it was, whether you drew it on papyrus or scratched it on a rock or laid it out in a crop circle, you always came out with the same number. That number eventually picked up the name “Pi.”
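
You can even redo the measurement without a rock or a crop circle. Here’s a little Python sketch (my own illustration of the same idea) that “measures” Pi by throwing random darts at a square and counting how many land inside the inscribed quarter circle:

    # Estimate Pi by measurement: darts land uniformly in a unit square;
    # the fraction falling inside the quarter circle approaches Pi/4.
    import random

    def measure_pi(n_darts):
        hits = sum(
            1 for _ in range(n_darts)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4.0 * hits / n_darts

    for n in (1_000, 100_000, 1_000_000):
        print(f"{n:>9} darts: Pi is about {measure_pi(n):.4f}")

Run it as many times as you like: the answer keeps homing in on the same 3.14…, just as the Egyptians’ circles did.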

Risk assessment is NOT done with traditional arithmetic using deterministic (real) numbers. It’s done using what’s called “fuzzy logic.”

Fuzzy logic is not like the fuzzy thinking used by newspaper reporters writing about climate change. The “fuzzy” part simply means it uses fuzzy categories like “small,” “medium” and “large” that don’t have precisely defined values.

While computer programs are perfectly capable of dealing with fuzzy logic, they won’t give you the kind of answers cost accountants are prepared to deal with. They won’t tell you that you need a risk-reserve allocation of $5,937,652.37. They’ll tell you something like “lots!”

You can’t take “lots” to the bank.
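
For the curious, here’s roughly what those fuzzy categories look like in code. This is a bare-bones Python sketch of my own (the dollar ranges are invented) using the standard triangular membership functions of fuzzy logic:

    # Fuzzy categories for a loss amount (in $ millions). The ranges are
    # invented for illustration; a real risk model tunes them to the business.
    def triangle(x, lo, peak, hi):
        """Membership rises from lo to peak, then falls from peak to hi."""
        if x <= lo or x >= hi:
            return 0.0
        if x <= peak:
            return (x - lo) / (peak - lo)
        return (hi - x) / (hi - peak)

    def classify(loss):
        return {
            "small":  triangle(loss, -1.0, 0.0, 5.0),
            "medium": triangle(loss, 2.0, 6.0, 10.0),
            "large":  triangle(loss, 7.0, 15.0, 1000.0),
        }

    print(classify(8.0))   # {'small': 0.0, 'medium': 0.5, 'large': 0.125}

Notice that a loss can be partly “medium” and partly “large” at the same time. That’s exactly the kind of answer that gives cost accountants fits.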

The next problem is imagining that global climate models could have any possible relationship to catastrophic weather events. Catastrophic weather events are, by definition, catastrophic. To analyze them you need the kind of mathematics called “catastrophe theory.”

Catastrophe theory is one of the underpinnings of chaos. In Steven Spielberg’s 1993 movie Jurassic Park, the character Ian Malcolm tries to illustrate catastrophe theory with the example of a drop of water rolling off the back of his hand. Whether it drips off to the right or left depends critically on how his hand is tipped. A small change creates an immense difference.

If a ball is balanced at the edge of a table, it can either stay there or drop off, and you can’t predict in advance which will happen.

That’s the thinking behind catastrophe theory.
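
The ball-on-the-table-edge picture is easy to reproduce numerically. Here’s a toy Python sketch of my own (not from any climate or insurance model) of the simplest catastrophe, the “fold”: a ball resting in a shallow dip that vanishes when a control parameter crosses zero.

    # Fold catastrophe: the potential V(x) = x**3 - a*x has a shallow dip
    # (a stable resting point) only while a > 0. Nudge 'a' past zero and
    # the dip disappears, so the ball rolls away. Tiny cause, huge effect.
    def settle(a, x=1.0, step=1e-3, iters=200_000):
        for _ in range(iters):
            x -= step * (3.0 * x**2 - a)   # roll downhill along -dV/dx
            if x < -10.0:                  # gone over the edge
                return None
        return x

    for a in (0.10, 0.01, -0.01):
        rest = settle(a)
        if rest is None:
            print(f"a = {a:+.2f}: the dip is gone, ball drops off!")
        else:
            print(f"a = {a:+.2f}: ball rests at x = {rest:.3f}")

Changing the parameter from +0.01 to -0.01 is a minuscule nudge, but the outcome flips from “sits quietly” to “gone.”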

The same analysis goes into predicting what will happen with a hurricane. As I recall, at the time Hurricane Florence (2018) made landfall, most models predicted it would move south along the Appalachian Ridge. Another group of models predicted it would stall out to the northwest.

When push came to shove, however, it moved northeast.

What actually happened depended critically on a large number of details that were too small to include in the models.

How much money was lost due to storm damage was a result of the result of unpredictable things. (That’s not an editing error. It really was the second-order result of a result.) It is a fundamentally unpredictable thing. The best you can do is measure it after the fact.

That brings us to comparing climate-model predictions with observations. We’ve got enough data now to see how climate-model predictions compare with observations on a decades-long timescale. The graph above summarizes results compiled in 2015 by the Cato Institute.

Basically, it shows that, not only did the climate models overestimate the temperature rise from the late 1970s to 2015 by a factor of approximately three, but in the critical last decade, when the computer models predicted a rapid rise, the actual observations showed that it nearly stalled out.

Notice that the divergence between the models and the observations increased with time. As I’ve said, that’s the main hallmark of chaos.

It sure looks like the climate models are batting zero!

I’ve been watching these kinds of results show up since the 1980s. It’s why by the late 1990s I started discounting statements like the WSJ article’s: “A consensus of scientists puts blame substantially on emissions of greenhouse gasses from cars, farms and factories.”

I don’t know who those “scientists” might be, but it sounds like they’re assigning blame for an effect that isn’t observed. Real scientists wouldn’t do that. Only politicians would.

Clearly, something is going on, but what it is, what its extent is, and what is causing it is anything but clear.

In the data depicted above, the results from global climate modeling do not look at all like the behavior of a chaotic system. The data from observations, however, do look like what we typically get from a chaotic system. Stuff moves constantly. On short time scales it shows apparent trends. On longer time scales, however, the trends tend to evaporate.

No wonder observers like Steven Pacala, who is Frederick D. Petrie Professor in Ecology and Evolutionary Biology at Princeton University and a board member at Hamilton Insurance Group, Ltd., are led to say (as quoted in the article): “Climate change makes the historical record of extreme weather an unreliable indicator of current risk.”

When you’re dealing with a chaotic system, the longer the record you’re using, the less predictive power it has.

Duh!

Another point made in the WSJ article that I thought was hilarious involved prediction of hurricanes in the Persian Gulf.

According to the article, “Such cyclones … have never been observed in the Persian Gulf … with new conditions due to warming, some cyclones could enter the Gulf in the future and also form in the Gulf itself.”

This sounds a lot like a tongue-in-cheek comment I once heard from astronomer Carl Sagan about predictions of life on Venus. He pointed out that when astronomers observe Venus, they generally just see a featureless disk. Science fiction writers had developed a chain of inferences that led them from that observation of a featureless disk to imagining total cloud cover, then postulating underlying swamps teeming with life, and culminating with imagining the existence of Venusian dinosaurs.

Observation: “We can see nothing.”

Conclusion: “There are dinosaurs.”

Sagan was pointing out that, though it may make good science fiction, that is bad science.

The WSJ reporters, Bradley Hope and Nicole Friedman, went from “No hurricanes ever seen in the Persian Gulf” to “Future hurricanes in the Persian Gulf” by the same sort of logic.

The kind of specious misinformation represented by the WSJ article confuses folks who have genuine concerns about the environment. Before politicians like Al Gore hijacked the environmental narrative, deflecting it toward climate change, folks paid much more attention to the real environmental issue of pollution.

Insurance losses from extreme weather events
Actual insurance losses due to catastrophic weather events show a disturbing trend.

The one bit of information in the WSJ article that appears prima facie disturbing is contained in the graph at right.

The graph shows actual insurance losses due to catastrophic weather events increasing rapidly over time. The article draws the inference that this trend is caused by human-induced climate change.

That’s quite a stretch, considering that there are obvious alternative explanations for this trend. The most likely alternative is the possibility that folks have been building more stuff in hurricane-prone areas. With more expensive stuff there to get damaged, insurance losses will rise.

Again: duh!

Invoking Occam’s Razor (prefer the simplest of competing explanations), we tend to favor the second explanation.

In summary, I conclude that the 3 October article is poor reporting that draws conclusions that are likely false.

Notes from 4 October

Don’t try to find the 3 October WSJ article online. I spent a couple of hours this morning searching for it, and came up empty. The closest I was able to get was a draft version that I found by searching on Bradley Hope’s name. It did not appear on WSJ’s public site.

Apparently, WSJ’s editors weren’t any more impressed with the article than I was.

The 4 October issue presents a corroboration of my alternative explanation of the trend in insurance-loss data: it’s due to a build-up of expensive real estate in areas prone to catastrophic weather events.

In a half-page exposé entitled “Hurricane Costs Grow as Population Shifts,” Kara Dapena reports that, “From 1980 to 2017, counties along the U.S. shoreline that endured hurricane-strength winds from Florence in September experienced a surge in population.”

In the end, this blog posting serves as an illustration of four points I tried to make last month. Specifically, on 19 September I published a post entitled: “Noble Whitefoot or Lying Blackfoot?” in which I laid out four principles to use when trying to determine if the news you’re reading is fake. I’ll list them in reverse of the order I used in last month’s posting, partly to illustrate that there is no set order for them:

  • Nobody gets it completely right – In the 3 October WSJ story, the reporters got snookered by the climate-change political lobby. That article, taken at face value, has to be stamped with the label “fake news.”
  • Does it make sense to you? – The 3 October fake news article set off my BS detector by making a number of statements that did not make sense to me.
  • Comparison shopping for ideas – Assertions in the suspect article contradicted numerous other sources.
  • Consider your source – The article’s source (The Wall Street Journal) is one that I normally trust. Otherwise, I likely never would have seen it, since I don’t bother listening to anyone I catch in a lie. My faith in the publication was restored when the next day they featured an article that corrected the misinformation.

Thinking Through Facial Recognition

Makeup
There are lots of reasons a person might wear makeup that could baffle facial recognition technology. Steven J Hensley / Shutterstock.com

5 September 2018 – A lot of us grew up reading stories by Robert A. Heinlein, who was one of the most Libertarian-leaning of twentieth-century science-fiction writers. When contemplating then-future surveillance technology (which he imagined would be even more intrusive than it actually is today) he wrote (in his 1982 novel Friday): “… there is a moral obligation on each free person to fight back wherever possible … ”

The surveillance technology Heinlein expected to become the most ubiquitous, pervasive, intrusive and literally in-your-face was facial recognition. Back in 1982, he didn’t seem to quite get the picture (pun intended) of how automation, artificial intelligence, and facial recognition could combine to become Big Brother’s all-seeing eyes. Now that we’re at the cusp of that technology being deployed, it’s time for just-us-folks to think about how we should react to it.

An alarm should be set off by an article filed by NBC News journalists Tom Costello and Ethan Sacks on 23 August reporting: “New facial recognition tech catches first impostor at D.C. airport.” Apparently, a Congolese national tried to enter the United States on a flight from Sao Paulo, Brazil through Washington Dulles International Airport on a French passport, and was instantly unmasked by a new facial-recognition system that quickly figured out that his face did not match that of the real holder of the French passport. Authorities figured out he was a Congolese national by finding his real identification papers hidden in his shoe. Why he wanted into the United States; why he tried to use a French passport; and why he was coming in from Brazil are all questions unanswered in the article. The article was about this whiz-bang technology that worked so well on the third day it was deployed.

What makes the story significant is that this time it all worked in real time. Previous applications of facial recognition have worked only after the fact.

The reason this article should set off alarm bells is not that the technology unmasked some jamoke trying to sneak into the country for some unknown, but probably nefarious, purpose. On balance, that was almost certainly (from our viewpoint) a good thing. The alarms should sound, however, to wake us up to think about how we really want to react to this kind of ubiquitous surveillance being deployed.

Do we really want Big Brother watching us?

Joan Quigley, former Assemblywoman from Jersey City, NJ, where she was Majority Conference Leader, chaired Homeland Security, and served on the Budget, Health and Economic Development Committees, wrote an op-ed piece appearing in The Jersey Journal on 20 August entitled: “Facial recognition the latest alarm bell for privacy advocates.” In it she points out that “it’s not only crime some don’t want others to see.”

There’s a whole lot of what each of us does that we want to keep private. While we consider it perfectly innocent, it’s just nobody else’s business.

It’s why the stalls in public bathrooms have doors.

People generally object to living in a fishbowl.

So, ubiquitous deployment of facial recognition technology brings with it some good things, and some that are not so good. That argues for a national public debate aimed at developing a consensus regarding where, when and how facial recognition technology should be used.

Framing the Debate

To start with, recognize that facial recognition is already ubiquitous and natural. It’s why Mommy Nature goes through all kinds of machinations to make our faces more-or-less unique. One of the first things babies learn is how to recognize Mom’s face. How could the cave guys have coordinated their hunting parties if nobody could tell Fred from Manny?

Facial recognition technology just extends our natural talent for recognizing our friends by sight to its use by automated systems.
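
For the record, here’s the general shape of how such automated systems typically decide a “match.” This is a schematic Python sketch of my own (the numbers are invented, and real systems use learned feature vectors hundreds of values long), not any vendor’s actual code:

    # Schematic only: reduce each face image to a numeric feature vector
    # (an "embedding"), then call it a match when the vectors are close.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    passport_embedding = [0.12, 0.88, 0.35, 0.41]   # invented numbers
    camera_embedding   = [0.75, 0.10, 0.62, 0.20]   # invented numbers

    THRESHOLD = 0.90   # the tradeoff knob: false matches vs. false rejects
    match = cosine_similarity(passport_embedding, camera_embedding) >= THRESHOLD
    print("match" if match else "no match")   # prints "no match", as at Dulles

Everything interesting, including how easily makeup might confuse the system, lives in how that feature vector gets computed and where the threshold gets set.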

A white paper entitled Top 4 Modern Use Cases of Biometric Technology crossed my desk recently. It was published by security-software firm iTrue. Their stated purpose is to “take biometric technology to the next level by securing all biometric data onto their blockchain platform.”

Because the white paper is clearly a marketing piece, and it is unsigned by the actual author, I can’t really vouch for the accuracy of its conclusions. For example, the four use cases listed in the paper are likely just the four main applications they envision for their technology. They are, however, a reasonable starting point for our public discussion.

The four use cases cited are:

  1. Border control and airport security
  2. Company payroll and attendance management
  3. Financial data and identity protection
  4. Physical or logical access solutions

This is probably not an exhaustive list, but offhand I can’t think of any important items left off. So, I’ll pretend like it’s a really good, complete list. It may be. It may not be. That should be part of the discussion.

The first item on the list is exactly what the D.C. airport news story was all about, so enough said. That horse has been beaten to death.

About the second item, the white paper says: “Organizations are beginning to invest in biometric technologies to manage employee ID and attendance, since individuals are always carrying their fingerprints, eyes, and faces with them, and these items cannot be lost, stolen, or forgotten.”

In my Mother’s unforgettable New England accent, we say, “Eye-yuh!”

There is, however, one major flaw in the reasoning behind relying on facial recognition. It’s illustrated by the image above. Since time immemorial, folks have worn makeup that could potentially give facial recognition systems ginky fits. They do it for all kinds of innocent reasons. If you’re going to make being able to pass facial recognition tests a prerequisite for doing your job, expect all sorts of pushback.

For example, over the years I’ve known many, many women who wouldn’t want to be seen in public without makeup. What are you going to do? Make your workplace a makeup-free zone? That’ll go over big!

On to number three. How’s your average cosplay enthusiast going to react to not being able to use their credit or debit card to buy gas on their way to an event because the bank’s facial recognition system can’t see through their alien-creature makeup?

Transgender person
Portrait of young transgender person wearing pink wig. Ranta Images/Shutterstock

Even more seriously, look at the image on the right. This is a transgender person wearing a wig. Really cute, isn’t he/she? Do you think your facial recognition software could tell the difference between him and his sister? Does your ACH vendor want to risk trampling his/her rights?

Ooops!

When we come to the fourth item on the list, suppose a Saudi Arabian woman wants to get into her house? Are you going to require her to remove her burka to get through her front door? What about her right to religious freedom? Or, will this become another situation where she can’t function as a human being without being accompanied by a male guardian? We’re already on thin ice when she wants to enter the country through an airport!

I’ve already half formed my own ideas about these issues. I look forward to participating in the national debate.

Heinlein would, of course, delight in every example where facial recognition could be foiled. In Friday, he gleefully pointed out ” … what takes three hours to put on will come off in fifteen minutes of soap and hot water.”

Death Logs Out

Death Logs Out Cover
E.J. Simon’s Death Logs Out (Endeavour Press) is the third in the Michael Nicholas series.

4 July 2018 – If you want to explore any of the really tough philosophical questions in an innovative way, the best literary forms to use are fantasy and science fiction. For example, when I decided to attack the nature of reality, I did it in a surrealist-fantasy novelette entitled Lilith.

If your question involves some aspect of technology, such as the nature of consciousness from an artificial-intelligence (AI) viewpoint, you want to dive into the science-fiction genre. That’s what sci-fi great Robert A. Heinlein did throughout his career to explore everything from space travel to genetically engineered humans. My whole Red McKenna series is devoted mainly to how you can use (and mis-use) robotics.

When E.J. Simon selected grounded sci-fi for his Michael Nicholas series, he most certainly made the right choice. Grounded sci-fi is the sub-genre where the author limits him- (or her-) self to what is at least theoretically possible using current technology, or immediate extensions thereof. No warp drives, wormholes or anti-grav boots allowed!

In this case, we’re talking about imaginative development of artificial intelligence and squeezing a great whacking pile of supercomputing power into a very small package to create something that can best be described as chilling: the conquest of death.

The great thing about fiction genres, such as fantasy and sci-fi, is the freedom provided by the ol’ “willing suspension of disbelief.” If you went at this subject in a scholarly journal, you’d never get anything published. You’d have to prove you could do it before anybody’d listen.

I touched on this effect in the last chapter of Lilith when looking at my own past reaction to “scholarly” manuscripts shown to me by folks who forgot this important fact.

“Their ideas looked like the fevered imaginings of raving lunatics,” I said.

I went on to explain why I’d chosen the form I’d chosen for Lilith thusly: “If I write it up like a surrealist novel, folks wouldn’t think I believed it was God’s Own Truth. It’s all imagination, so using the literary technique of ‘willing suspension of disbelief’ lets me get away with presenting it without being a raving lunatic.”

Another advantage of picking a fiction genre is that it affords the ability to keep readers’ attention while filling their heads with ideas that would leave them cross-eyed if simply presented straight. The technical details presented in the Michael Nicholas series could, theoretically, be presented in a PowerPoint presentation with something like fifteen slides. Well, maybe twenty-five.

But, you wouldn’t be able to get the point across. People would start squirming in their seats around slide three. What Simon’s trying to tell us takes time to absorb. Readers have to make the mental connections before the penny will drop. Above all, they have to see it in action, and that’s just what embedding it in a mystery-adventure story does. Following the mental machinations of “real” characters as they try to put the pieces together helps Simon’s audience fit them together in their own minds.

Spoiler Alert: Everybody in Death Logs Out lives except bad guys, and those who were already dead to begin with. Well, with one exception: a supporting character who’s probably a good guy gets well-and-truly snuffed. You’ll have to read the book to find out who.

Oh, yeah. There are unreconstructed Nazis! That’s always fun! Love having unreconstructed Nazis to hate!

I guess I should say a little about the problem that drives the plot. What good is a book review if it doesn’t say anything about what drives the plot?

Our hero, Michael, was the fair-haired boy of his family. He grew up to be a highly successful plain-vanilla finance geek. He married a beautiful trophy wife with whom he lives in suburban Connecticut. Michael’s daughter, Sofia, is away attending an upscale university in South Carolina.

Michael’s biggest problem is overwork. With his wife’s grudging acquiescence, he’d taken over his black-sheep big brother Alex’s organized crime empire after Alex’s murder two years earlier.

And, you thought Thomas Crown (The Thomas Crown Affair, 1968 and 1999) was a multitasker! Michael makes Crown look single minded. No wonder he’s getting frazzled!

But, Michael was holding it all together until one night when he was awakened by a telephone call from an old flame, whom he’d briefly employed as a bodyguard before realizing that she was a raving homicidal lunatic.

“I have your daughter,” Sindy Steele said over the phone.

Now, the obviously made-up first name “Sindy” should have warned Michael that Ms. Steele wasn’t playing with a full deck even before he got involved with her, but, at the time, the head with the brains wasn’t the head doing his thinking. She was, shall we say, “toothsome.”

Turns out that Sindy had dropped off her meds, then traveled all the way from her “retirement” villa in Santorini, Greece on an ill-advised quest to get back at Michael for dumping her.

But, that wasn’t Sofia’s worst problem. When she was nabbed, Sofia was in the midst of a call on her mobile phone from her dead uncle Alex, belatedly warning her of the danger!

While talking on the phone with her long-dead uncle confused poor Sofia, Michael knew just what was going on. For two years, he’d been having regular daily “face time” with Alex through cyberspace as he took over Alex’s syndicate. Mortophobic Alex had used his ill-gotten wealth to cheat death by uploading himself to the Web.

Now, Alex and Michael have to get Sofia back, then figure out who’s coming after Michael to steal the technology Alex had used to cheat death.

This is certainly not the first time someone has used “uploading your soul to the Web” as a plot device. Perhaps most notably, Robert Longo cast Barbara Sukowa as a cyberloaded fairy godmother trying to watch over Keanu Reeves’s character in the 1995 film Johnny Mnemonic. In Longo’s futuristic film, the technique was so common that the ghost had legal citizenship!

In the 1995 film, however, Longo glossed over how the ghost in the machine was supposed to work, technically. Johnny Mnemonic was early enough that it was futuristic sci-fi, as was Geoff Murphy’s even earlier soul-transference work Freejack (1992). Nobody in the early 1990s had heard of the supercomputing cloud, and email was high-tech. The technology for doing soul transference was as far in the imagined future as space travel was to Heinlein when he started writing about it in the 1930s.

Fast forward to the late 2010s. This stuff is no longer in the remote future. It’s in the near future. In fact, there’s very little technology left to develop before Simon’s version becomes possible. It’s what we in the test-equipment-development game used to call “specsmanship.” No technical breakthroughs needed, just advancements in “faster, wider, deeper” specifications.

That’s what makes the Michael Nicholas series grounded sci-fi! Simon has to imagine how today’s much-more-defined cloud infrastructure might both empower and limit cyberspook Alex. He also points out that what enables the phenomenon is software (as in artificial intelligence), not hardware.

Okay, I do have some bones to pick with Simon’s text. Mainly, I’m a big Strunk and White (Elements of Style) guy. Simon’s a bit cavalier about paragraphing, especially around dialog. His use of quotation marks is also a bit sloppy.

But, not so bad that it interferes with following the story.

Standard English is standardized for a reason: it makes getting ideas from the author’s head into the reader’s sooo much easier!

James Joyce needed a dummy slap! His Ulysses has rightly been called “the most difficult book to read in the English language.” It was like he couldn’t afford to buy a typewriter with a quotation key.

Enough ranting about James Joyce!

Simon’s work is MUCH better! There are only a few times I had to drop out of Death Logs Out’s world to ask, “What the heck is he trying to say?” That’s a rarity in today’s world of amateurishly edited indie novels. Simon’s story always pulled me right back into its world to find out what happens next.

The Future of Personal Transportation

Israeli startup Griiip’s next generation single-seat race car demonstrating the world’s first motorsport Vehicle-to-Vehicle (V2V) communication application on a racetrack.

9 April 2018 – Last week turned out to be big for news about personal transportation, with a number of trends making significant(?) progress.

Let’s start with a report (available for download at https://gen-pop.com/wtf) by independent French market-research company Ipsos of responses from more than 3,000 people in the U.S. and Canada, and thousands more around the globe, to a survey about the human side of transportation. That is, how do actual people — the consumers who ultimately will vote with their wallets for or against advances in automotive technology — feel about the products innovators have been proposing to roll out in the near future. Today, I’m going to concentrate on responses to questions about self-driving technology and automated highways. I’ll look at some of the other results in future postings.

Perhaps the biggest takeaway from the survey is that approximately 25% of American respondents claim they “would never use” an autonomous vehicle. That’s a biggie for advocates of “ultra-safe” automated highways.

As my wife constantly reminds me whenever we’re out in Southwest Florida traffic, the greatest highway danger is from the few unpredictable drivers who do idiotic things. When surrounded by hundreds of vehicles ideally moving in lockstep, but actually not, what percentage of drivers acting unpredictably does it take to totally screw up traffic flow for everybody? One percent? Two percent?

According to this survey, we can expect up to 25% to be out of step with everyone else because they’re making their own decisions instead of letting technology do their thinking for them.
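
You can get a feel for the answer with a toy simulation. The Python sketch below (my own, a bare-bones cousin of the classic Nagel-Schreckenberg traffic model, with made-up parameters) puts cars on a ring road where everyone drives with perfect machine discipline, except one free spirit who brakes at random:

    # Ring-road traffic toy: cars accelerate toward max speed but never
    # hit the car ahead. Car 0 is the lone human who brakes at random.
    import random

    ROAD, CARS, VMAX, STEPS = 100, 15, 5, 500

    def average_speed(erratic_prob):
        pos = sorted(random.sample(range(ROAD), CARS))
        vel = [0] * CARS
        total = 0
        for _ in range(STEPS):
            for i in range(CARS):
                free = (pos[(i + 1) % CARS] - pos[i] - 1) % ROAD  # empty cells ahead
                vel[i] = min(vel[i] + 1, VMAX, free)              # speed up, never crash
                if i == 0 and random.random() < erratic_prob:     # the free spirit
                    vel[i] = max(vel[i] - 2, 0)
            pos = [(p + v) % ROAD for p, v in zip(pos, vel)]
            total += sum(vel)
        return total / (STEPS * CARS)

    print("all automated:     ", round(average_speed(0.0), 2))
    print("one erratic driver:", round(average_speed(0.5), 2))

On a single lane, one unpredictable driver out of fifteen (about seven percent) drags everyone’s average speed down, because the jam that driver creates propagates backward through the whole line.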

Automated highways were described in detail back in the middle part of the twentieth century by science-fiction writer Robert A. Heinlein. What he described was a scene where thousands of vehicles packed vast Interstates, all communicating wirelessly with each other and a smart fixed infrastructure that planned traffic patterns far ahead, and communicated its decisions with individual vehicles so they acted together to keep traffic flowing in the smoothest possible way at the maximum possible speed with no accidents.

Heinlein also predicted that the heroes of his stories would all be rabid free-spirited thinkers who wouldn’t allow their cars to run in self-driving mode if their lives depended on it! Instead, they relied on human intelligence, forethought, and fast reflexes to keep themselves out of trouble.

And, he predicted they would barely manage to escape with their lives!

I happen to agree with him: trying to combine a huge percentage of highly automated vehicles with a small percentage of vehicles guided by humans who simply don’t have the foreknowledge, reflexes, or concentration to keep up with the automated vehicles around them is a train wreck waiting to happen.

Back in the late twentieth century I had to regularly commute on the 70-mph parking lots that went by the name “Interstates” around Boston, Massachusetts. Vehicles were generally crammed together half a car length apart. The only way to have enough warning to apply brakes was to look through the back window and windshield of the car ahead to see what the car ahead of them was doing.

The result was regular 15-car pileups every morning during commute times.

Heinlein’s (and automated-highway advocates’) future vision had that kind of traffic density and speed, but was saved from inevitable disaster by fascistic control by omniscient automated-highway technology. One recalcitrant human driver tossed into the mix would be guaranteed to bring the whole thing down.

So, the moral of this story is: don’t allow manual-driving mode on automated highways. The 25% of Americans who’d never surrender their manual-driving privilege can just go drive somewhere else.

Yeah, I can see THAT happening!

A Modest Proposal

With apologies to Jonathan Swift, let’s change tack and focus on a more modest technology: driver assistance.

Way back in the 1980s, George Lucas and friends put out the third in the interminable Star Wars series, Return of the Jedi. The film included a sequence that could only be possible in real life with help from some sophisticated collision-avoidance technology. They had a bunch of characters zooming around in a trackless forest on the moon Endor, riding what can only be described as flying motorcycles.

As anybody who’s tried trailblazing through a forest on an off-road motorcycle can tell you, going fast through virgin forest means constant collisions with fixed objects. As Bugs Bunny once said: “Those cartoon trees are hard!”

Frankly, Luke Skywalker and Princess Leia might have had superhuman reflexes, but their doing what they did without collision avoidance technology strains credulity to the breaking point. Much easier to believe their little speeders gave them a lot of help to avoid running into hard, cartoon trees.

In the real world, Israeli companies Autotalks and Griiip have demonstrated the world’s first motorsport Vehicle-to-Vehicle (V2V) application to help drivers avoid rear-ending each other. The system works by combining GPS, in-vehicle sensing, and wireless communication to create a peer-to-peer network that allows each car to send out alerts to all the other cars around it.

So, imagine the situation where multiple cars are on a racetrack at the same time. That’s decidedly not unusual in a motorsport application.

Now, suppose something happens to make car A suddenly and unpredictably slow or stop. Again, that’s hardly an unusual occurrence. Car B, which is following at some distance behind car A, gets an alert from car A of a possible impending-collision situation. Car B forewarns its driver that a dangerous situation has arisen, so he or she can take evasive action. So far, a very good thing in a car-race situation.
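
The article doesn’t spell out the message format, and I have no inside knowledge of Autotalks’ protocol, so here’s a purely hypothetical Python mock-up (every field name is my invention) of the kind of alert car A might broadcast and car B might act on:

    # Hypothetical V2V hazard alert. Not Autotalks' real protocol, just
    # the general shape: broadcast position, speed, and braking severity;
    # warn the following driver when the car ahead slows hard.
    import json, time
    from dataclasses import dataclass, asdict

    @dataclass
    class HazardAlert:
        car_id: str
        lat: float           # from GPS
        lon: float
        speed_mph: float     # from in-vehicle sensing
        decel_g: float       # braking severity
        timestamp: float

    def broadcast(alert):
        return json.dumps(asdict(alert))   # stand-in for the radio link

    def on_receive(packet, my_speed_mph):
        alert = HazardAlert(**json.loads(packet))
        closing_speed = my_speed_mph - alert.speed_mph
        if alert.decel_g > 0.4 or closing_speed > 20.0:
            print(f"WARNING: car {alert.car_id} ahead is braking hard!")

    # Car A suddenly slows; car B gets the alert and warns its driver.
    packet = broadcast(HazardAlert("A", 26.64, -81.87, 12.0, 0.6, time.time()))
    on_receive(packet, my_speed_mph=70.0)

The point is that car B hears about the hazard over the radio well before its driver’s eyes could possibly deliver the news.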

But, what’s that got to do with just us folks driving down the highway minding our own business?

During the summer down here in Florida, every afternoon we get thunderstorms dumping torrential rain all over the place. Specifically, we’ll be driving down the highway at some ridiculous speed, then come to a wall of water obscuring everything. Visibility drops from unlimited to a few tens of feet with little or no warning.

The natural reaction is to come to a screeching halt. But, what happens to the cars barreling up from behind? They can’t see you in time to stop.

Whammo!

So, coming to a screeching halt is not the thing to do. Far better to keep going forward as fast as visibility will allow.

But, what if somebody up ahead panicked and came to a screeching halt? Or, maybe their version of “as fast as visibility will allow” is a lot slower than yours? How would you know?

The answer is to have all the vehicles equipped with the Israeli V2V equipment (or an equivalent) to forewarn following drivers that something nasty has come out of the proverbial woodshed. It could also feed into your vehicle’s collision-avoidance system to bridge the 2-3 seconds it takes for a human driver to say “What the heck?” and figure out what to do.

The Israelis suggest that the required chip set (which, of course, they’ll cheerfully sell you) is so dirt cheap that anybody can afford to opt for it in their new car, or retrofit it into their beat-up old junker. They further suggest that it would be worthwhile for insurance companies to give a rake-off on their premiums to help cover the cost.

Sounds like a good deal to me! I could get behind that plan.