What is This “Robot” Thing, Anyway?

Robot thinking
So, what is it that makes a robot a robot? Phonlamai Photo/Shutterstock

6 March 2019 – While surfing the Internet this morning, in a valiant effort to put off actually getting down to business grading that pile of lab reports that I should have graded a couple of days ago, I ran across this posting I wrote in 2013 for Packaging Digest.

Surprisingly, it still seems relevant today, and on a subject that I haven’t treated in this blog yet. Since I’m planning to devote most of next week to preparing my 2018 tax return, I decided to save some writing time by dusting it off and presenting it as this week’s posting to Tech Trends. I hope the folks at Packaging Digest won’t get their noses too far out of joint about my encroaching on their five-year-old copyright without asking permission.

By the way, this piece is way shorter than the usual Tech Trends essay because of the specifications for that Packaging Digest blog, which was entitled “New Metropolis” in homage to Fritz Lang’s 1927 film Metropolis, the story of a futuristic mechanized culture and an anthropomorphic robot that a mad scientist creates to bring it down. The “New Metropolis” postings were specified to be approximately 500 words long, whereas Tech Trends postings are planned to be 1,000-1,500 words long.

Anyway, I hope you enjoy this little slice of recent history.


11 November 2013 – I thought it might be fun—and maybe even useful—to catalog the classifications of these things we call robots.

Let’s start with the word robot itself. The idea behind it grows from the ancient concept of the golem: an artificial person created by people.

Frankly, the idea of a golem scared the bejeezus out of the ancients because the golem stands at the interface between living and non-living things. In our enlightened age, it still scares the bejeezus out of people!

If we restricted the field to golems—strictly humanoid robots, or androids—we wouldn’t have a lot to talk about, and practically nothing to do. The things haven’t proved particularly useful. So, I submit that we should expand the robot definition to include all kinds of human-made artificial critters.

This has, of course, already been done by everyone working in the field. The SCARA (selective compliance assembly robot arm) machines from companies like Kuka, and the delta robots from Adept Technologies, clearly demand this expanded definition. Mobile robots, such as the Roomba from iRobot, push the boundary in another direction. Weird little things like the robotic insects and worms so popular with academics these days push in a third direction.

Considering the foregoing, the first observation is that the line between robot and non-robot is fuzzy. The old 50s-era dumb thermostats probably shouldn’t be considered robots, but a smart, computer-controlled house moving in the direction of the Jarvis character in the Iron Man series probably should. Things in between are – in between. Let’s bite the bullet and admit we’re dealing with fuzzy-logic categories, and then move on.

Okay, so what are the main characteristics symptomatic of this fuzzy category robot?

First, it’s gotta be artificial. A cloned sheep is not a robot. Even designer germs are non-robots.

Second, it’s gotta be automated. A fly-by-wire fighter jet is not a robot. A drone linked at the hip to a human pilot is not a robot. A driverless car, on the other hand, is a robot. (Either that, or it’s a traffic accident waiting to happen.)

Third, it’s gotta interact with the environment. A general-purpose computer sitting there thinking computer-like thoughts is not a robot. A SCARA unit assembling a car is. I submit that an automated bill-paying system arguing through the telephone with my wife over how much to take out of her checkbook this month is a robot.

More problematic is a fourth direction—embedded systems, like automated houses—that begs to be admitted into the robotic fold. I vote for letting them in, along with artificial intelligence (AI) systems, like the robot bill-paying systems my wife is so fond of arguing with.

Finally (maybe), it’s gotta be independent. To be a robot, the thing has to take basic instruction from a human, then go off on its onesies to do the deed. Ideally, you should be able to say, “Go wash the car,” and it’ll run off as fast as its little robotic legs can carry it to wash the car. More prosaically, you should be able to program it to vacuum the living room at 4:00 a.m., then wake up at 6:00 a.m. to a freshly vacuumed living room.

Socialist Mythos

Pegasus
Like the mythical Pegasus, socialism is a beautiful idea beloved of children that cannot be realized in practice. Catmando/Shutterstock

27 February 2019 – Some ideas are just so beautiful that we try to hang on to them even after failure after failure shows them to be unrealizable. Especially for the naive, these ideas hold such fascination that they persist long after cooler inspection consigns them to the dust bin of fantasy. This essay looks at two such ideas that display features in common: the ancient Greek myth of the flying horse, Pegasus, and the modern myth of the socialist state.

Pegasus

The ancient myth of the flying horse Pegasus is an obvious example. There’s no physical reason for such a creature to be impossible. Actual horses are built far too robustly to take to the air on their own power, but a delicately built version of Equus ferus fitted with properly functioning wings could certainly fly.

That’s not the objection. Certainly, other robust land animals have developed flying forms. Birds, of course, developed from what were once thought to be great lumbering theropod dinosaurs. Bats belong to the same mammalian class as horses, and they fly very well, indeed.

The objection to the existence of Pegasus-like creatures comes from evolutionary history. Specifically, the history of land-based vertebrates.

You see, all land-based vertebrates on Earth evolved from a limited number of ray-finned fish species. In fact, the number of fish species contributing DNA to land-vertebrate animals is likely limited to one.

All land vertebrates have exactly the same basic body form – with modifications – that developed from features common to ray-finned fishes. Basically, they have:

  • One spine that extends into a tail,
  • One head appended to the forward (opposite the tail) end of the spine,
  • Two front appendages that developed from the fish’s pectoral fins, and
  • Two rear appendages that developed from the fish’s pelvic fins.

Not all land-based vertebrates have all these features. Some originally extant features (like the human tail and cetacean rear legs) atrophied nearly to non-existence. But, the listed features are the only ones land-based vertebrates have ever had. Of course, I’m also including such creatures as birds and dolphins that developed from land-based critters as they moved on to other habitats or back to the sea.

The reason I suggest that all land vertebrates likely hail from one fish species is that no land vertebrates have ever had anal, caudal or adipose appendages, thus we all seem to have developed from some fish species that lacked these fins.

“Aha!” you say, “cetaceans like dolphins and whales have tail fins!”

“Nope,” I rebut. “Notice that cetacean tail flukes are fleshy appendages extending horizontally from the tip of the animals’ tails, not bony appendages oriented vertically like a fish’s caudal fins.”

They developed independently and have similar shapes because of convergent evolution.

Okay, so we’ve discovered what’s wrong with Pegasus that is not wrong with bats, pterodactyls, and birds. All the real land-based vertebrate forms have four limbs, whereas the fanciful Pegasus has six (four legs and two wings). Six-limbed Pegasus can’t exist because there aren’t any similar prior forms for it to have evolved from.

So, Pegasus is a beautiful idea that simply can’t exist on Earth.

Well, you could have some sort of flying-horse-like creature that evolved on some other planet, then caught a convenient flying saucer to pop over to Earth, but they wouldn’t be native, and likely wouldn’t look at all earthlike.

Socialist State

So, what has all this got to do with socialism?

Well, as I’ve intimated, both are beautiful ideas that people are pretty fond of. Notwithstanding its popularity, Pegasus is not possible (as a native Earth creature) for a very good reason. Socialism is also a beautiful idea that people (at least great swaths of the population) are pretty fond of. Socialism is, however, also not possible as a stable form of society for a very good reason.

The reason socialism is not possible as a stable form of society goes back to our old friend, the Tragedy of the Commons. If you aren’t intimately familiar with this concept, follow the link to a well-written article by Margaret E. Banyan, Adjunct Assistant Professor in the Southwest Florida Center for Public and Social Policy at Florida Gulf Coast University, which explains the Tragedy, its origins, and ways that have been proposed to ameliorate its effects.

Anyway, economist Milton Friedman summarized the Tragedy of the Commons with the phrase: “When everybody owns something, nobody owns it … .”

The Tragedy of the Commons speaks directly to why true socialism is impossible, or at least not tenable as a stable, permanent system. Let’s start with what the word “socialism” actually means. According to Merriam-Webster, socialism is:

“any of various economic and political theories advocating collective or governmental ownership and administration of the means of production and distribution of goods.”

Other dictionaries largely agree, so we’ll work with this definition.

So, you can see where the Tragedy of the Commons connects to socialism. The beautiful idea relates to the word “collective.”

We know that human beings evolved as territorial animals, but we’d like to imagine a utopia where we’ve gotten past this primitive urge. Without territoriality, one could imagine a world where conflict would cease to exist. Folks would just get along because nobody’d say “Hey, that’s mine. Keep your mitts off!”

The problem with such a world is the Tragedy of the Commons as described by Friedman: if everybody owns the means of production, then nobody owns it.

There are two potential outcomes:

  • Scenario 1 is the utter destruction of whatever resource is held in common as described at the start of Banyan’s essay.
  • Scenario 2 is what happened to the first recorded experiment with democracy in ancient Athens: somebody steps up to the plate and takes over management of the resource for everybody. For Athens it was a series of dictator kings ending with Alexander the Great. In effect, to save the resource from destruction, some individual moves in to “own” it.

In scenario 1, the resource is destroyed along with the socialist society that collectively owns it. Everyone either starves or leaves. Result: no more socialism.

In scenario 2, the resource is saved by being claimed by some individual. That individual sets up rules for apportioning use of the resource, which is, in effect, no longer collectively owned. Result: dictatorship and no more socialism.

Generally, all socialist states eventually degenerate into dictatorships via scenario 2. They invariably keep the designation “socialist,” but their governments are de facto authoritarian, not socialist. This is why I say socialism is a beautiful idea that is, in the long term, impossible. Socialist states can be created, but they very quickly come under authoritarian rule.

The Democracy Option

The Merriam-Webster definition admits of one more scenario, and that’s what we use in democratically governed nations, which are generally not considered socialist states: government ownership of some (but not all) resources.

If we have a democracy, there are all kinds of great things we can have governmentally owned, but not collectively owned. Things that everybody needs and everybody uses and everybody has to share, like roads, airspace, forests, electricity grids, and national parks. These are prime candidates for government ownership.

Things like wives, husbands, houses, and bicycles (note the big bicycle-sharing SNAFU recently reported in China) have historically proven best not shared!

So, in a democracy, lots of stuff can be owned by the government, rather than by individuals or “everybody.”

A prime example is airspace. I don’t mean the air itself. I mean airspace! That is the space in the air over anyplace in the United States, or virtually the entire world. One might think it’s owned by everybody, but that just ain’t so.

You just try floating off at over 500 feet above ground level (AGL) in any type of aircraft and see where it gets you. Ya just can’t do it legally. You have to get permission from the Federal Government (in the form of a pilot’s license), which involves a great whacking pile of training, examinations, and even background checks. That’s because everybody does NOT own airspace above 500 feet AGL (and great, whacking swaths of the stuff lower down, too), the government does. You, personally, individually or collectively, don’t own a bit of it and have no rights to even be there without permission from its real owner, the Federal Government.

Another one is the Interstate Highway System. Try walking down Interstate 75 in, say, Florida. Assuming you survive long enough without getting punted off the roadway by a passing Chevy, you’ll soon find yourself explaining what the heck you think you’re doing to the nearest representative (spelled C-O-P) of whatever division of government takes ownership of that particular stretch of roadway. Unless you’ve got a really good excuse (e.g., “I gotta pee real bad!”) they’ll immediately escort you off the premises via the nearest exit ramp.

Ultimately, the only viable model of socialism is a limited one that combines individual ownership of some resources that are not shared, with government ownership of other resources that are shared. Democracy provides a mechanism for determining which is what.

Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it should be asked whenever anyone designs a system (whether manual or automated) to do anything. The fact that anyone ever sets up a system without first asking it means it’s not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!
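That screening rule can be sketched in a few lines (a toy illustration of the heuristic above, with my own wording for the middle verdicts):

```python
# Sketch of the "three Ds" screening heuristic: count how many of
# dull/dirty/dangerous apply, then grade the case for automation.
def automation_verdict(dull, dirty, dangerous):
    """Map the number of D-hits to a rough automation verdict."""
    hits = sum([dull, dirty, dangerous])
    return {
        0: "probably leave it manual",
        1: "at least think about automating",
        2: "very likely ripe for automation",
        3: "slam dunk",
    }[hits]

# Driving scores hits on "dull" and "dangerous":
print(automation_verdict(dull=True, dirty=False, dangerous=True))
```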

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people who have been killed in accidents involving driverless cars you can (as of this writing) count on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.

Whether the claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb, “A fool and his money are soon parted”?

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic has some appeal. I submit, however, that the appeal is short lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend the time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts from which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharanov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Thus, the potentially debilitating question of what constitutes knowledge, what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”
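As a toy illustration of that pragmatics gap (my own sketch, not anything from Klein or IBM), here’s a naive literal-matching “checker” that can’t tell sarcasm from assertion:

```python
# A naive literal "checker" that flags statements by keyword match.
# It fires identically on sincere claims and on sarcasm, because
# pragmatic context never enters the computation.
ALARMING_PHRASES = {"world war 3", "won the lottery"}

def naive_flag(statement):
    """Flag a statement if it literally contains an alarming phrase."""
    s = statement.lower()
    return any(phrase in s for phrase in ALARMING_PHRASES)

print(naive_flag("John just started World War 3..."))    # flagged
print(naive_flag("I won the lottery! (Just kidding.)"))  # flagged too
```

Both statements trip the flag; the parenthetical sarcasm marker is invisible to the literal layer, which is exactly the failure mode Klein describes.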

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

So far, Trive and Debater seem to complement each other.

Author Bio

Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010, McLean & Company, a division of Info-Tech, has provided the same unmatched expertise to HR professionals worldwide.

The Future of Personal Transportation

Israeli startup Griiip’s next generation single-seat race car demonstrating the world’s first motorsport Vehicle-to-Vehicle (V2V) communication application on a racetrack.

9 April 2018 – Last week turned out to be big for news about personal transportation, with a number of trends making significant(?) progress.

Let’s start with a report (available for download at https://gen-pop.com/wtf) by independent French market-research company Ipsos, which surveyed more than 3,000 people in the U.S. and Canada, and thousands more around the globe, about the human side of transportation. That is, how do actual people — the consumers who ultimately will vote with their wallets for or against advances in automotive technology — feel about the products innovators have been proposing to roll out in the near future? Today, I’m going to concentrate on responses to questions about self-driving technology and automated highways. I’ll look at some of the other results in future postings.

Perhaps the biggest takeaway from the survey is that approximately 25% of American respondents claim they “would never use” an autonomous vehicle. That’s a biggie for advocates of “ultra-safe” automated highways.

As my wife constantly reminds me whenever we’re out in Southwest Florida traffic, the greatest highway danger is from the few unpredictable drivers who do idiotic things. When surrounded by hundreds of vehicles ideally moving in lockstep, but actually not, what percentage of drivers acting unpredictably does it take to totally screw up traffic flow for everybody? One percent? Two percent?

According to this survey, we can expect up to 25% to be out of step with everyone else because they’re making their own decisions instead of letting technology do their thinking for them.

Automated highways were described in detail back in the middle of the twentieth century by science-fiction writer Robert A. Heinlein. He described thousands of vehicles packing vast Interstates, all communicating wirelessly with each other and with a smart fixed infrastructure that planned traffic patterns far ahead and communicated its decisions to individual vehicles, so they acted together to keep traffic flowing in the smoothest possible way at the maximum possible speed with no accidents.

Heinlein also predicted that the heroes of his stories would all be rabid free-spirited thinkers, who wouldn’t allow their cars to run in self-driving mode if their lives depended on it! Instead, they relied on human intelligence, forethought, and fast reflexes to keep themselves out of trouble.

And, he predicted they would barely manage to escape with their lives!

I happen to agree with him: trying to combine a huge percentage of highly automated vehicles with a small percentage of vehicles guided by humans who simply don’t have the foreknowledge, reflexes, or concentration to keep up with the automated vehicles around them is a train wreck waiting to happen.

Back in the late twentieth century I had to regularly commute on the 70-mph parking lots that went by the name “Interstates” around Boston, Massachusetts. Vehicles were generally crammed together half a car length apart. The only way to have enough warning to apply brakes was to look through the back window and windshield of the car ahead to see what the car ahead of them was doing.

The result was regular 15-car pileups every morning during commute times.

Heinlein’s future vision (and that of automated-highway advocates) had that kind of traffic density and speed, but was saved from inevitable disaster by the fascistic control of omniscient automated-highway technology. One recalcitrant human driver tossed into the mix would be guaranteed to bring the whole thing down.

So, the moral of this story is: don’t allow manual-driving mode on automated highways. The 25% of Americans who’d never surrender their manual-driving privilege can just go drive somewhere else.

Yeah, I can see THAT happening!

A Modest Proposal

With apologies to Jonathan Swift, let’s change tack and focus on a more modest technology: driver assistance.

Way back in the 1980s, George Lucas and friends put out the third film in the interminable Star Wars series, Return of the Jedi. The film included a sequence that could only be possible in real life with help from some sophisticated collision-avoidance technology. They had a bunch of characters zooming around in a trackless forest on the moon Endor, riding what can only be described as flying motorcycles.

As anybody who’s tried trailblazing through a forest on an off-road motorcycle can tell you, going fast through virgin forest means constant collisions with fixed objects. As Bugs Bunny once said: “Those cartoon trees are hard!”

Frankly, Luke Skywalker and Princess Leia might have had superhuman reflexes, but their doing what they did without collision avoidance technology strains credulity to the breaking point. Much easier to believe their little speeders gave them a lot of help to avoid running into hard, cartoon trees.

In the real world, Israeli companies Autotalks and Griiip have demonstrated the world’s first motorsport Vehicle-to-Vehicle (V2V) application to help drivers avoid rear-ending each other. The system works by combining GPS, in-vehicle sensing, and wireless communication to create a peer-to-peer network that allows each car to send out alerts to all the other cars around it.

So, imagine the situation where multiple cars are on a racetrack at the same time. That’s decidedly not unusual in a motorsport application.

Now, suppose something happens to make car A suddenly and unpredictably slow or stop. Again, that’s hardly an unusual occurrence. Car B, which is following at some distance behind car A, gets an alert from car A of a possible impending-collision situation. Car B forewarns its driver that a dangerous situation has arisen, so he or she can take evasive action. So far, a very good thing in a car-race situation.
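The alert logic itself is simple enough to sketch. Here’s a minimal Python sketch of how such a peer-to-peer warning might work; the message fields, the 2.5-second reaction time, the 30-meter safety margin, and the coordinates are all my own assumptions for illustration, not Autotalks’ actual protocol:

```python
import math
from dataclasses import dataclass

@dataclass
class V2VAlert:
    """Hypothetical peer-to-peer alert a car broadcasts to its neighbors."""
    sender_id: str
    lat: float          # GPS latitude, degrees
    lon: float          # GPS longitude, degrees
    speed_mps: float    # current speed, m/s
    hard_braking: bool  # in-vehicle sensors detected sudden deceleration

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in meters -- fine at racetrack scale."""
    r = 6371000.0  # Earth radius, m
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def should_warn(alert, my_lat, my_lon, my_speed_mps, reaction_s=2.5):
    """Warn the driver if we're closing on a hard-braking car and the gap
    is inside our reaction-time envelope (plus an assumed safety margin)."""
    if not alert.hard_braking:
        return False
    gap = distance_m(alert.lat, alert.lon, my_lat, my_lon)
    closing = my_speed_mps - alert.speed_mps
    if closing <= 0:
        return False  # not gaining on the car ahead
    reaction_gap = my_speed_mps * reaction_s  # distance covered before driver reacts
    return gap < reaction_gap + 30.0          # 30 m margin (assumed)

# Car A has stopped and broadcasts; car B is ~78 m back at 50 m/s (~180 km/h).
a = V2VAlert("car_A", 26.1224, -81.7900, 0.0, True)
print(should_warn(a, 26.1231, -81.7900, 50.0))  # True: inside the envelope
```

The point of the peer-to-peer design is that car B needs no fixed infrastructure at all: every decision is made locally from the broadcast it just received plus its own GPS fix.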

But, what’s that got to do with just us folks driving down the highway minding our own business?

During the summer down here in Florida, every afternoon we get thunderstorms dumping torrential rain all over the place. Specifically, we’ll be driving down the highway at some ridiculous speed, then come to a wall of water obscuring everything. Visibility drops from unlimited to a few tens of feet with little or no warning.

The natural reaction is to come to a screeching halt. But, what happens to the cars barreling up from behind? They can’t see you in time to stop.

Whammo!

So, coming to a screeching halt is not the thing to do. Far better to keep going forward as fast as visibility will allow.

But, what if somebody up ahead panicked and came to a screeching halt? Or, maybe their version of “as fast as visibility will allow” is a lot slower than yours? How would you know?

The answer is to have all the vehicles equipped with the Israeli V2V equipment (or an equivalent) to forewarn following drivers that something nasty has come out of the proverbial woodwork. It could also feed into your vehicle’s collision-avoidance system to bypass the 2-3 seconds it takes for a human driver to say “What the heck?” and figure out what to do.
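Those 2-3 seconds matter more than they might sound. A quick back-of-the-envelope Python sketch (assuming the 70-mph cruise speed from my Boston anecdote) shows how far a car travels before the driver even touches the brakes:

```python
# Distance covered during the "What the heck?" pause, before braking begins.
MPH_TO_MPS = 0.44704  # exact conversion factor

def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Distance traveled at constant speed during the driver's reaction time."""
    return speed_mph * MPH_TO_MPS * reaction_s

for t in (2.0, 3.0):
    d = reaction_distance_m(70.0, t)
    print(f"{t:.0f} s at 70 mph -> {d:.0f} m ({d / 0.3048:.0f} ft)")
```

That works out to roughly 60-95 meters of blind travel — far more than the few tens of feet of visibility inside one of those Florida downpours, which is exactly the gap the V2V alert is there to close.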

The Israelis suggest that the required chip set (which, of course, they’ll cheerfully sell you) is so dirt cheap that anybody can afford to opt for it in their new car, or retrofit it into their beat-up old junker. They further suggest that it would be worthwhile for insurance companies to offer a rake-off on their premiums to help cover the cost.

Sounds like a good deal to me! I could get behind that plan.