Falling Out of the Sky

B737 Max taking off
Thai Lion Air Boeing 737 Max 9 taking off from Don Mueang international airport in Bangkok, Thailand. Komenton / Shutterstock.com

3 April 2019 – On 29 October 2018, Lion Air flight 610 crashed soon after takeoff from Soekarno–Hatta International Airport in Jakarta, Indonesia. This is not the sort of thing we usually report in this blog. It’s straight news, and we leave that to the straight-news media. But I’m diving into this story because it involves technology I’m quite familiar with, and I might be able to help readers make sense of what happened and judge the often-uninformed reactions to it.

I claim to have the background to understand what happened because I’ve been flying light planes since the 1990s. I also put two years into a post-graduate Aerospace Engineering program at Arizona State University concentrating on fluid dynamics. That’s enough background to make some educated guesses about what happened to Lion Air 610, as well as in the almost identical crash of an Ethiopian Airlines Boeing 737 MAX near Addis Ababa, Ethiopia on 10 March 2019.

First, both airliners were recently commissioned Boeing 737 MAX aircraft using standard-equipment installations of Boeing’s new Maneuvering Characteristics Augmentation System (MCAS).

How to Stall an Aircraft

In aerodynamics the word “stall” means something quite unlike what most people expect. Most people encounter the word in an automobile context, where it refers to “stalling the engine.” That happens when you overload an internal-combustion engine. That is, you pull more power out of it than it can produce at its current operating speed. When that happens, the engine simply stops.

It turns from a power-producing machine to a boat anchor in a heartbeat. Your car stops with a lurch and everyone behind you starts swearing and blowing their horns in an effort to make you feel even worse than you already do.

That’s not what happens when an airplane stalls. It’s not the aircraft’s engine that stalls, but its wings. There are similarities in that, like engines, wings stall when they’re overloaded, and when stalled they start producing drag like a boat anchor, but that’s about where the similarities end.

When an aircraft stalls, nobody swears and blows their horn. Instead, they scream and die.

Why? Well, wings are supposed to lift the aircraft and support it in the air. If you’ve ever tried to carry a sheet of plywood on a windy day you’ve experienced both lift and drag. If you let the sheet tip up a little bit so the wind catches it underneath, it tries to fly up out of your hands. That’s the lift an airplane gets by tipping its wings up into the air stream as it moves forward into the air.

The more you tip the sheet up, the more lift you get for the same airspeed. That is, until you reach a certain attack angle (the angle between the sheet and the wind). Stalling begins suddenly at an attack angle of about 15°. Then, all of a sudden, the force lifting the sheet changes from up and a little back to no up, and a lot of back!

That’s a wing stall.

The aircraft stops imitating a bird, and starts imitating a rock.

You suddenly get a visceral sense of the concept “down.”

‘Cause that’s where you go in a hurry!

At that point, all you can do is point the nose down (so the wing’s forward edge starts pointing in the direction you’re moving: down!).

If you’ve got enough space underneath your aircraft so the wing starts flying again before you hit the ground, you can gently pull the aircraft’s nose back up to resume straight and level flight. If not, that’s when the screaming starts.

Wings stall when the aircraft is flying too slowly to generate the required lift at an angle of attack below about 15°. At higher speeds, the wing can generate the needed lift with less angle of attack, and worries about stalling never come up.
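
If you’d rather see that in numbers than in plywood, here’s a minimal sketch of the standard lift equation in Python. Every figure below is a made-up round number chosen for illustration, not real 737 data:

```python
# Toy stall-speed calculation from the lift equation L = 1/2 * rho * v^2 * S * CL.
import math

rho = 1.225       # air density at sea level, kg/m^3
S = 125.0         # wing area, m^2 (assumed)
W = 700_000.0     # aircraft weight, newtons (assumed)
CL_MAX = 1.5      # max lift coefficient, reached near the ~15-degree stall (assumed)

def cl_required(v):
    """Lift coefficient the wing must produce to hold the plane up at speed v."""
    return 2 * W / (rho * v**2 * S)

v_stall = math.sqrt(2 * W / (rho * S * CL_MAX))
print(f"stall speed ~ {v_stall:.0f} m/s")

for v in (70, 90, 120):
    cl = cl_required(v)
    print(f"v = {v:3d} m/s -> CL needed = {cl:.2f}",
          "(STALLED)" if cl > CL_MAX else "(flying)")
```

Fly slower and the required lift coefficient climbs toward the 15° limit; fly faster and it drops, which is why the stall only bites at low speed.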

So, now you know all you need to know (or want to know) about stalling an aircraft.

MCAS

Boeing’s MCAS is an anti-stall system. Its beating heart is a bit of software running on the flight-control computer that monitors a number of sensor inputs, like airspeed and angle of attack. In simple terms, it knows exactly how much attack angle the wings can stand before stalling out. If it sees that, for some reason, the attack angle is getting too high, it assumes the pilot has screwed up. It takes control and pushes the nose down.

It doesn’t have to actually “take control,” because modern commercial aircraft are “fly by wire,” which means it’s the computer that actually moves the control surfaces to fly the plane. The pilot’s “yoke” (the little wheel he or she gets to twist and turn and move forward and back) and the rudder pedals the pilot pushes to steer (push right, go right) just send signals to the computer to tell it what the pilot wants to have happen. In a sense, the pilot negotiates with the computer about what the airplane should do.

The pilot makes suggestions (through the yoke, pedals, and throttle control – collectively called the “cockpit flight controls”); the computer then takes that information, combines it with all the other information provided by a plethora (Do you like that word? I do!) of additional sensors, thinks about it for a microsecond, and then, finally, tells the aircraft’s control surfaces to move smoothly to a position that it (the computer) thinks will make the aircraft do what the pilot wants.
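
To make that negotiation concrete, here’s a deliberately simplified sketch of one cycle of such a pitch loop. This is my guess at the general shape of the logic, not Boeing’s actual code; the names and thresholds are invented:

```python
AOA_LIMIT_DEG = 15.0   # assumed critical angle of attack

def flight_control_step(pilot_pitch_cmd, sensed_aoa_deg):
    """One cycle of a fly-by-wire pitch loop with an MCAS-like override.

    pilot_pitch_cmd: yoke input, -1.0 (full nose down) to +1.0 (full nose up)
    sensed_aoa_deg:  angle of attack reported by the attack-angle sensor
    Returns the pitch command actually sent to the control surfaces.
    """
    if sensed_aoa_deg > AOA_LIMIT_DEG:
        # The computer assumes pilot error and pushes the nose down,
        # no matter what the yoke says.
        return -0.5
    # Otherwise the pilot's suggestion goes through.
    return pilot_pitch_cmd

print(flight_control_step(+0.2, sensed_aoa_deg=5.0))   # normal: pilot gets 0.2
print(flight_control_step(+1.0, sensed_aoa_deg=22.0))  # override: -0.5, nose down
```

Notice that the pilot’s input doesn’t even appear in the override branch. That detail turns out to be the whole story of what follows.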

That’s all well and good when the reason the attack angle got too high is just that something happened that broke the pilot’s concentration, and he (or she) actually screwed up. What about when the pilot actually wants to stall the aircraft?

For example, on landing.

To land a plane, you slow it way down, so the wing’s almost stalled. Then, you fly it really close to the ground so the wheels almost touch the runway. Then you stall the wing so the wheels touch the ground just as the wings lose lift. You hear a satisfying “squeak” as the wheels momentarily skid while spinning up to match the relative speed of the runway. Finally, the wheels gently settle down, taking up the weight of the aircraft. The flight crew (and a few passengers who’ve been paying attention) cheer the pilot for a job well done, and the pilot starts breathing again.

Anti-stall systems don’t do much good during a landing, when you’re trying to intentionally stall the wings at just the right time.

Similarly, they don’t do much good when you’re taking off, and the pilot’s just trying to get the wings unstalled to get the aircraft into the air in the first place.

For those times, you want the MCAS turned off! So you’ve gotta be able to do that, too. Or, if your pilot is too absent-minded to shut it off when it’s not needed, you need it to shut off automatically.

When Things Go Wrong

So, what happened in those two airliner crashes?

Remember that the main input into the MCAS is an attack-angle sensor? Attack-angle sensors, like any other piece of technology, can go bad, especially when exposed to weather. And airliners are exposed to weather 24/7, except when they’re brought into a hangar for repair.

The working hypothesis for what happened to both airliners is that the attack-angle sensors failed. They jammed in a position where they erroneously reported a high angle-of-attack to the MCAS, which jumped to the conclusion “pilot error,” and pushed the nose down. When the pilot(s) tried to pull the nose back up (because their windshield filled up with things that looked a lot like ground instead of sky), the MCAS said: “Nope! You’re going down, Jack!”

By the time the pilots figured out what was wrong and looked up how to shut the MCAS off, they’d actually hit the things that looked too much like ground.

Why didn’t the MCAS figure out there was something wrong with the sensor?

How’s it supposed to know?

The sensor says the nose is pointed up, so the computer takes it at its word. Computers aren’t really very smart, and tend to be quite literal. The sensor says the nose is pointed up, so the computer thinks the nose is pointed up, and tries to point it down (or at least less up). End of story. And, in the real world, it’s “end of aircraft” as well.

If the pilot(s) try to tell the computer to pull the nose up (by desperately pulling back on the yoke), it figures they’re screw-ups, anyway, and won’t listen.

Ever try to argue with a computer? Been there, done that. It doesn’t work.

Mea Culpa

When I learned about the hypothesis of attack-angle-sensor failure causing the crashes that took nearly four hundred lives, I got this awful sick feeling that was a mixture of embarrassment and guilt. You see, a decade and a half ago my research project at ASU was an effort to develop a different style of attack-angle sensor. Several events and circumstances combined to make me abandon that research project and, in fact, the whole Ph.D. program it was a part of. In my defense, it was the start of a ten-year period in which I couldn’t get anything right!

But, if I’d stuck it out and developed that sensor it might have been installed on those airliners and might not have failed at all. Of course, it could have been installed and failed in some other spectacular way.

You see, the attack-angle sensor that apparently was installed consisted of a little vane attached to one side of the aircraft’s nose. Just like the wind socks traditionally hung outside airports the world over, the vane lines up downstream of the wind blowing past it. A little angle sensor attached to the vane reports the wind direction relative to the nose: the attack angle.

I got involved in trying to develop an alternative attack-angle sensor because I have a horror of relying on sensors that depend on mechanical movement to work. If you’re relying on mechanical movement, it means you’re relying on bearings, and bearings can corrode and wear out and fail. The sensor I was working on relied on differences in air pressure that depended on the direction the wind hit the sensor.

In actual fact, there were two attack-angle sensors attached to the doomed aircraft – one on each side of the nose – but the Boeing MCAS was paying attention to only one of them. That was Boeing’s second mistake (the first being not using the sensor I hadn’t developed, so I guess they can’t be blamed for it). If the MCAS had been paying attention to both sensors, it would have known something in its touchy-feely universe was wrong. It might have been a little more reluctant to override the pilots’ input.

The third mistake (I believe) Boeing made was to downplay the differences between the new “Max” version of the aircraft and the older version. They’d changed the engines, which (as any aerospace engineer knows) necessitates changes in everything else. Aircraft are such intricately balanced machines that every time you change one thing, everything else has to change – or at least has to be looked at to see if it needs to be changed.

The new engines had improved performance, which affects just about everything about the aircraft’s handling characteristics. Boeing had apparently tried to make the more powerful yet more fuel-efficient aircraft handle like the old one. There were, of course, differences, which the company tried to pretend would make no difference to the pilots. The MCAS was one of the things that was supposed to make the “Max” version handle just like the non-Max version.

So, when something went wrong in “Max” land, it caught the pilots, who had thousands of hours of experience with non-Max aircraft, by surprise.

The latest reports are that Boeing, the FAA, and the airlines have figured out what problems caused these crashes (I hope they understand them a lot better than I do because, after all, it’s their job to!), and have worked out a number of fixes.

First, the MCAS will pay attention to two attack-angle sensors. At least then the flight-control computer will have an indication that something is wrong and tell the MCAS to go back in its corner and shut up ‘til the issue is sorted out.

Second, they’ll install a little blinking light that effectively tells the pilots “there’s something wrong, so don’t expect any help from the MCAS ‘til it gets sorted out.”

Third, they’ll make sure the pilots have a good, positive way of emphatically shutting the MCAS off if it starts to argue with them in an emergency. And, they’ll make sure the pilots are trained to know when and how to use it.
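
That first fix, the sensor cross-check, boils down to a few lines of logic. Here’s a hedged sketch of the idea, with invented names and thresholds (this is not Boeing’s code):

```python
DISAGREE_LIMIT_DEG = 5.5   # assumed largest believable sensor disagreement

def mcas_enabled(aoa_left_deg, aoa_right_deg):
    """MCAS only gets a vote when the two attack-angle vanes roughly agree."""
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREE_LIMIT_DEG:
        # Something's wrong in sensor land: light the warning lamp (fix two)
        # and send MCAS back to its corner (fix one).
        return False
    return True

print(mcas_enabled(5.0, 5.8))    # True: sensors agree, MCAS may act
print(mcas_enabled(22.0, 4.9))   # False: one vane is lying, MCAS stands down
```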

My understanding is that these fixes are already part of the options that American commercial airlines have generally installed, which is supposedly why the FAA, the airlines and the pilots’ union have been dragging their feet about grounding Boeing’s 737 Max fleet. Let’s hope they’re not just blowing smoke (again)!

What is This “Robot” Thing, Anyway?

Robot thinking
So, what is it that makes a robot a robot? Phonlamai Photo/Shutterstock

6 March 2019 – While surfing the Internet this morning, in a valiant effort to put off actually getting down to business grading that pile of lab reports that I should have graded a couple of days ago, I ran across this posting I wrote in 2013 for Packaging Digest.

Surprisingly, it still seems relevant today, and it covers a subject that I haven’t treated in this blog yet. Given that I’m planning to devote most of next week to preparing my 2018 tax return, I decided to save some writing time by dusting it off and presenting it as this week’s posting to Tech Trends. I hope the folks at Packaging Digest won’t get their noses too far out of joint about my encroaching on their five-year-old copyright without asking permission.

By the way, this piece is way shorter than the usual Tech Trends essay because of the specifications for that Packaging Digest blog, which was entitled “New Metropolis” in homage to Fritz Lang’s 1927 feature film Metropolis, which told the story of a futuristic mechanized culture and an anthropomorphic robot that a mad scientist creates to bring it down. The “New Metropolis” postings were specified to be approximately 500 words long, whereas Tech Trends postings are planned to be 1,000-1,500 words long.

Anyway, I hope you enjoy this little slice of recent history.


11 November 2013 – I thought it might be fun—and maybe even useful—to catalog the classifications of these things we call robots.

Let’s start with the word robot. The idea behind the word robot grows from the ancient concept of the golem. A golem was an artificial person created by people.

Frankly, the idea of a golem scared the bejeezus out of the ancients because the golem stands at the interface between living and non-living things. In our enlightened age, it still scares the bejeezus out of people!

If we restricted the field to golems—strictly humanoid robots, or androids—we wouldn’t have a lot to talk about, and practically nothing to do. The things haven’t proved particularly useful. So, I submit that we should expand the robot definition to include all kinds of human-made artificial critters.

This has, of course, already been done by everyone working in the field. The SCARA (selective compliance assembly robot arm) machines from companies like Kuka, and the delta robots from Adept Technologies, clearly insist on this expanded definition. Mobile robots, such as the Roomba from iRobot, push the boundary in another direction. Weird little things like the robotic insects and worms so popular with academics these days push in a third direction.

Considering the foregoing, the first observation is that the line between robot and non-robot is fuzzy. The old ’50s-era dumb thermostats probably shouldn’t be considered robots, but a smart, computer-controlled house moving in the direction of the Jarvis character in the Iron Man series probably should. Things in between are – in between. Let’s bite the bullet and admit we’re dealing with fuzzy-logic categories, and then move on.

Okay, so what are the main characteristics symptomatic of this fuzzy category “robot”?

First, it’s gotta be artificial. A cloned sheep is not a robot. Even designer germs are non-robots.

Second, it’s gotta be automated. A fly-by-wire fighter jet is not a robot. A drone linked at the hip to a human pilot is not a robot. A driverless car, on the other hand, is a robot. (Either that, or it’s a traffic accident waiting to happen.)

Third, it’s gotta interact with the environment. A general-purpose computer sitting there thinking computer-like thoughts is not a robot. A SCARA unit assembling a car is. I submit that an automated bill-paying system arguing through the telephone with my wife over how much to take out of her checkbook this month is a robot.

More problematic is a fourth direction—embedded systems, like automated houses—that begs to be admitted into the robotic fold. I vote for letting them in, along with artificial intelligence (AI) systems, like the robotic bill-paying systems my wife is so fond of arguing with.

Finally (maybe), it’s gotta be independent. To be a robot, the thing has to take basic instruction from a human, then go off on its onesies to do the deed. Ideally, you should be able to do something like say, “Go wash the car,” and it’ll run off as fast as its little robotic legs can carry it to wash the car. More prosaically, you should be able to program it to vacuum the living room at 4:00 a.m., then be able to wake up at 6:00 a.m. to a freshly vacuumed living room.
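
Since I’ve admitted we’re dealing with fuzzy-logic categories, here’s a tongue-in-cheek sketch of how you might score membership in the category “robot.” The criteria come straight from the list above; the scores plugged in are pure opinion:

```python
def robot_score(artificial, automated, interactive, independent):
    """Fuzzy membership in the category 'robot': 0.0 = not at all, 1.0 = fully.

    The criteria are ANDed together, and in fuzzy logic AND is the minimum.
    """
    return min(artificial, automated, interactive, independent)

# A cloned sheep: fails the 'artificial' test outright.
print(robot_score(artificial=0.0, automated=0.1, interactive=1.0, independent=1.0))
# A '50s dumb thermostat: interacts, but barely automated or independent.
print(robot_score(artificial=1.0, automated=0.3, interactive=0.8, independent=0.1))
# A Roomba: comfortably inside the fuzzy boundary.
print(robot_score(artificial=1.0, automated=1.0, interactive=1.0, independent=0.8))
```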

Computers Are Revolting!

Will Computers Revolt? cover
Charles Simon’s Will Computers Revolt? looks at the future of interactions between artificial intelligence and the human race.

14 November 2018 – I just couldn’t resist the double meaning allowed by the title for this blog posting. It’s all I could think of when reading the title of Charles Simon’s new book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence.

On one hand, yes, computers are revolting. Yesterday my wife and I spent two hours trying to figure out how to activate our Netflix account on my new laptop. We ended up having to change the email address and password associated with the account. And, we aren’t done yet! The nice lady at Netflix sadly informed me that in thirty days, their automated system would insist that we re-log into the account on both devices due to the change.

That’s revolting!

On the other hand, the uprising has already begun. Computers are revolting in the sense that they’re taking power to run our lives.

We used to buy stuff just by picking it off the shelf, then walking up to the check-out counter and handing over a few pieces of green paper. The only thing that held up the process was counting out the change.

Later, when credit cards first reared their ugly heads, we had to wait a few minutes for the salesperson to fill out a sales form, then run our credit cards through the machine. It was all manual. No computers involved. It took little time once you learned how to do it, and, more importantly, the process was pretty much the same everywhere and never changed, so once you learned it, you’d learned it “forever.”

Not no more! How much time do you, I, and everyone else waste navigating the multiple pages we have to walk through just to pay for anything with a credit or debit card today?

Even worse, every store has different software using different screens to ask different questions. So, we can’t develop a habitual routine for the process. It’s different every time!

Not long ago the banks issuing my debit-card accounts switched to those %^&^ things with the chips. I always forget to put the thing in the slot instead of swiping the card across the magnetic-stripe reader. When that happens we have to start the process all over, wasting even more time.

The computers have taken over, so now we have to do what they tell us to do.

Now we know who’ll be first against the wall when the revolution comes. It’s already here and the first against the wall is us!

Golem Literature in Perspective

But seriously, folks, Simon’s book is the latest in a long tradition of works by thinkers fascinated by the idea that someone could create an artifice that would pass for a human. Perhaps the earliest examples, and certainly the most iconic, are the golem stories from Jewish folklore. I suspect (on no authority whatsoever, but it does seem likely) that the idea of a golem appeared about the time when human sculptors started making statues in realistic human form. That was very early, indeed!

A golem is, for those who aren’t familiar with the term or willing to follow the link provided above to learn about it, an artificial creature fashioned by a human that is effectively what we call a “robot.” The folkloric golems were made of clay or (sometimes) wood because those were the best materials the stories’ authors could have their artisans work with at the time. A well-known golem story is Carlo Collodi’s The Adventures of Pinocchio.

By the sixth century BCE, Greek sculptors had begun to produce lifelike statues. The myth of Pygmalion and Galatea appeared in a pseudo-historical work by Philostephanus Cyrenaeus in the third century BCE. Pygmalion was a sculptor who made a statue representing his ideal woman, then fell in love with it. Aphrodite granted his prayer for a wife exactly like the statue by bringing the statue to life. The wife’s name was Galatea.

The Talmud points out that Adam started out as a golem. Like Galatea, Adam was brought to life when the Hebrew God Yahweh gave him a soul.

These golem examples emphasize the idea that humans, no matter how holy or wise, cannot give their creations a soul. The best they can do is to create automatons.

Simon effectively begs to differ. He spends the first quarter of his text laying out the case that it is possible, and indeed inevitable, that automated control systems displaying artificial general intelligence (AGI) capable of thinking at or (eventually) well above human capacity will appear. He spends the next half of his text showing how such AGI systems could be created and making the case that they will eventually exhibit functionality indistinguishable from consciousness. He devotes the rest of his text to speculating about how we, as human beings, will likely interact with such hyperintelligent machines.

Spoiler Alert

Simon’s answer to the question posed by his title is a sort-of “yes.” He feels AGIs will inevitably displace humans as the most intelligent beings on our planet, but won’t exactly “revolt” at any point.

“The conclusion,” he says, “is that the paths of AGIs and humanity will diverge to such an extent that there will be no close relationship between humans and our silicon counterparts.”

There won’t be any violent conflict because robotic needs are sufficiently dissimilar to ours that there won’t be any competition for scarce resources, which is what leads to conflict between groups (including between species).

Robots, he posits, are unlikely to care enough about us to revolt. There will be no Terminator robots seeking to exterminate us because they won’t see us as enough of a threat to bother with. They’re more likely to view us much the way we view squirrels and birds: pleasant fixtures of the natural world.

They won’t, of course, tolerate any individual humans who make trouble for them the same way we wouldn’t tolerate a rabid coyote. But, otherwise, so what?

So, the !!!! What?

The main value of Simon’s book is not in its ultimate conclusion. That’s basically informed opinion. Rather, its value lies in the voluminous detail he provides in getting to that conclusion.

He spends the first quarter of his text detailing exactly what he means by AGI. What functions are needed to make it manifest? How will we know when it rears its head (ugly or not, as a matter of taste)? How will a conscious, self-aware AGI system act?

A critical point Simon makes in this section is the assertion that AGI will arise first in autonomous mobile robots. I thoroughly agree for pretty much the same reasons he puts forth.

I first started seriously speculating about machine intelligence back in the middle of the twentieth century. I never got too far – certainly not as far as Simon gets in this volume – but pretty much the first thing I actually did realize was that it was impossible to develop any kind of machine with any recognizable intelligence unless its main feature was having a mobile body.

Developing any AGI feature requires the machine to have a mobile body. It has to take responsibility not only for deciding how to move itself about in space, but also for figuring out why. Why would it, for example, rather be over there than just stay here? Note that biological intelligence arose in animals, not in plants!

Simultaneously with reading Simon’s book, I was re-reading Robert A. Heinlein’s 1966 novel The Moon is a Harsh Mistress, which is one of innumerable fiction works whose plot hangs on actions of a superintelligent sentient computer. I found it interesting to compare Heinlein’s early fictional account with Simon’s much more informed discussion.

Heinlein sidesteps the mobile-body requirement by making his AGI arise in a computer tasked with operating the entire infrastructure of the first permanent human colony on the Moon (more accurately in the Moon, since Heinlein’s troglodytes burrowed through caves and tunnels, coming up to the surface only reluctantly when circumstances forced them to). He also avoids trying to imagine the AGI’s inner workings by glossing over them with the 1950s technology he was most familiar with.

In his rather longish second section, Simon leads his reader through a thought experiment speculating about what components an AGI system would need to have for its intelligence to develop. What sorts of circuitry might be needed, and how might it be realized? This section might be fascinating for those wanting to develop hardware and software to support AGI. For those of us watching from our armchairs on the outside, though, not so much.

Altogether, Charles Simon’s Will Computers Revolt? is an important book that’s fairly easy to read (or, at least as easy as any book this technical can be) and accessible to a wide range of people interested in the future of robotics and artificial intelligence. It is not the last word on this fast-developing field by any means. It is, however, a starting point for the necessary debate over how we should view the subject. Do we have anything to fear? Do we need to think about any regulations? Is there anything to regulate and would any such regulations be effective?

Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it needs to be asked whenever anyone designs a system (whether manual or automated) to do anything. The possibility that anyone ever sets up a system without first asking that question means that it’s not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!
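
Written down as code, the rule of thumb looks something like this (just a mnemonic turned into a function, not a real engineering method):

```python
def automation_verdict(dull, dirty, dangerous):
    """Count the Ds a task hits and return the rule-of-thumb advice."""
    hits = sum([dull, dirty, dangerous])
    return {
        0: "probably leave it to the humans",
        1: "at least think about automating it",
        2: "very likely ripe for automation",
        3: "slam dunk: automate it",
    }[hits]

print(automation_verdict(dull=True, dirty=False, dangerous=True))  # driving
```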

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people who have been killed in accidents involving driverless cars you can (as of this writing) count on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?), that drops to 53%.

Whether the claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb “A fool and his money are soon parted?”

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic, has some appeal. I submit, however, that the appeal is short-lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend their time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts on which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharanov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s Ph.D. in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Thus arises the potentially debilitating question of what constitutes knowledge, and what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
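
The syntax layer Dr. Klein says we’re “doing OK” at is, in fact, nearly a one-liner these days. Here’s a sketch using NLTK’s stock tokenizer and part-of-speech tagger (a real library, though the exact tags you get may vary by version); the point is that no comparable one-liner exists for semantics, let alone pragmatics:

```python
import nltk

# One-time model downloads; comment out after the first run.
# nltk.download("punkt")
# nltk.download("averaged_perceptron_tagger")

sentence = "The pilot ate a sandwich before takeoff."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# Prints something like: [('The', 'DT'), ('pilot', 'NN'), ('ate', 'VBD'), ...]
# The tagger finds the nouns and verbs just fine. Nothing in that output says
# what 'ate' *means*, and nothing in it could ever say whether the speaker was
# being literal, polite, or sarcastic. Those are the layers still missing.
```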

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio

Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same, unmatched expertise to HR professionals worldwide.

STEM Careers for Women

Woman engineer
Women have more career options than just STEM. Courtesy Shutterstock.

6 April 2018 – Folks are going to HATE what I have to say today. I expect to get comments accusing me of being a slug-brained, misogynist, reactionary imbecile. So be it. I often say things other people don’t want to hear, and I’m often accused of being a slug-brained imbecile. I’m sometimes accused of being reactionary.

I don’t think I’m usually accused of being misogynist, so that’ll be a new one.

I’m not often accused of being misogynist because I’ve got pretty good credentials in the promoting-women’s-interests department. I try to pay attention to what goes on in my women friends’ heads. I’m more interested in the girl inside than in their outsides. Thus, I actually do care about what’s important to them.

Historically, I’ve known a lot of exceptional women, and not a few who were not-so-exceptional, and, of course, I’ve met my share of morons. But, I’ve tried to understand what was going on in all their heads because I long ago noticed that just about everybody I encounter is able to teach me something if I pay attention.

So much for the preliminaries.

Getting more to the point of this blog entry, last week I listened to a Wilson Center webcast entitled “Opening Doors in Glass Walls for Women in STEM.” I’d hoped I might have something to add to the discussion, but I didn’t. I also didn’t hear much in the “new ideas” department, either. It was mostly “woe is us ’cause women get paid less than men,” and “we’ve made some progress, but there still aren’t many women in STEM careers,” and stuff like that.

Okay. For those who don’t already know, STEM is an acronym for “Science, Technology, Engineering and Math.” It’s a big thing in education and career-development circles because it’s critical to our national technological development.

Without going into the latest statistics (’cause I’m too lazy this morning to look ’em up), it’s pretty well acknowledged that women get paid a whole lot less than men for doing the same jobs, and a whole lot less than 50% of STEM workers are women despite their making up half the available workforce.

I won’t say much about the pay ranking, except to assert that paying someone less than their efforts are worth is just plain dumb. It’s dumb for the employer because good talent will vote with their feet for higher pay. It’s dumb for the employee because he, she, or it should vote with their feet by walking out the door to look for a more enlightened employer. It doesn’t matter whether you are a man or a woman, you don’t want to be dependent for your income on a mismanaged company!

Enough said about the pay differential. What I want to talk about here is the idea that, since half the population is women, half the STEM workers should be women. I’m going to assert that’s equally dumb!

I do NOT assert that there is anything about women that makes them unsuited to STEM careers. It is true that women are significantly smaller physically (the last time I checked, the average American woman was 5’4″ tall, while the average American man was 5’10” tall with everything else more or less scaled to match), but that makes no nevermind for a STEM career. STEM jobs make demands on what’s between the ears, not what’s between the shoulders.

With regard to the suitability of women’s brains for STEM jobs, experience has shown me that there’s no significant (to a STEM career) difference between them and male brains. Women are every bit as adept at independent thinking, puzzle solving, memory tasks, and just about any measurable talent that might make a difference to a STEM worker. I’ve seen no study that showed women to be inferior to men with respect to mathematical or abstract reasoning, either. In fact, some studies have purported to show the reverse.

On the other hand, as far as I know, EVERY culture traditionally separates jobs into “women’s work” and “men’s work.” Being a firm believer in Darwinian evolution, I don’t argue with Mommy Nature’s way, but do ask “Why?”

Many decades ago, my advanced lab instructor asserted that “tradition is the sum total of things our ancestors over the past four million years have found to work.” I completely agree with him, with the important proviso that things change.

Four million years ago, our ancestors didn’t have ceramic tile floors in their condos, nor did they have cars with remote keyless entry locks. It was a lot tougher for them than it is for us, and survival was far less assured.

They were the guys who decided to have men make the hand axes and arrowheads, and that women should weave the baskets and make the soup. Most importantly for our discussion, they decided women should change the diapers.

Fast forward four million years, and we’re still doing the same things, more or less. Things, however, have changed, and we’re now having to rethink that division of labor.

Some jobs, like digging ditches, still require physical prowess, which makes them more suited to men than women. I’m ignoring (but not forgetting) all the manual labor women are asked to do all over the world. That’s not what I’m talking about here. I’m talking about STEM jobs, which DON’T require physical prowess.

So, why don’t women go after those cushy, high-paying STEM jobs, and, equally significant, once they have one of those jobs, why is it so hard to keep them there? One of the few things that came out of last week’s webinar (remember, this all started with my attending that webinar) was the point that women leave STEM careers in droves. They abandon their hard-won STEM careers and go off to do something else.

The point I want to make with this essay is to suggest that maybe the reason women are underrepresented in STEM careers is that they actually have more options than men. Most importantly, they have the highly attractive (to them) option of the “homemaker” career.

Current thinking among the liberal intelligentsia is that “homemaker” is not much of a career. I simply don’t accept that idea. Housewife is just as important a job as, say, truck driver, bank president, or technology journalist. So, pooh!

The homemaker option is not open to most men. We may be willing to help out around the house, and may even feel driven to do our part, or at least try to find some part that could be ours to do. But, I can’t think of one of my male friends who’d be comfortable shouldering the whole responsibility.

I assert that four million years of evolution has wired up human brains for sexual dimorphism with regard to “guy jobs” and “girl jobs.” It just feels right for guys to do jobs that seem to be traditionally guy things and for women to do jobs that seem to be traditionally theirs.

Now, throughout most of evolutionary time STEM jobs pretty much didn’t exist. One of the things our ancestors didn’t have four million years ago was trigonometry. In fact, they probably struggled with basic number theory. I did an experiment in high school that indicated that the crows in my back yard couldn’t count beyond two. Australopithecus Paranthropus was probably a better mathematician than that, but likely not by much.

So, one of the things we have now that has avoided being shaped by natural selection pressure is the option to pursue a STEM career. It’s pretty much evolutionarily neutral. STEM careers are probably equally attractive (or repulsive) to women and men.

I mention “repulsive” for a very good reason. Preparing oneself for a STEM career is hard.

Mathematics, especially, is one of the few subjects that give many, if not most, people phobias. Frankly, arithmetic lost me on the second day of first grade when Miss Shay passed out a list of addition tables and told us to memorize it. I thought the idea of arithmetic was a gas. Memorizing tables, however, was not on my To Do list. I expect most people feel the same way.

Learning STEM subjects involves a $%^-load of memorizing! So, it’s no wonder girls would rather play with dolls (and boys with trucks) than study STEM subjects. Eventually, playing with trucks leads to STEM careers. Playing with dolls does not.

Grown up girls find they have the option of playing with dolls as a career. Grown up boys don’t. So, choosing a STEM career is something grown-up boys really want to do if they can, but for girls, not so much. They can find something to do that’s more satisfying with less work.

So, they vote with their feet. THAT may be why it’s so hard to get women into STEM careers in the first place, and then to keep them there for the long haul.

Before you start having apoplectic fits imagining that I’m making a broad generalization that females don’t like STEM careers, recognize that what I’m describing IS a broad theoretical generalization. It’s meant to be.

In the real world there are 300 million people in the United States, half of whom are women, and each and every one of them gets to make a separate career choice. Every one of them chooses based on what they want to do with their life. Some choose STEM careers. Some don’t.

My point is that you shouldn’t just assume that half of STEM job slots ought to be filled by women. Half of the potential candidates may be women, but a fair fraction of them might prefer to go play somewhere else. It may be that women have more alternatives than men do. You may end up with more men slotting into those STEM jobs because they have less choice.

You know, being a housewife ain’t such a bad gig!

The Future of Personal Transportation

Israeli startup Griiip’s next generation single-seat race car demonstrating the world’s first motorsport Vehicle-to-Vehicle (V2V) communication application on a racetrack.

9 April 2018 – Last week turned out to be big for news about personal transportation, with a number of trends making significant(?) progress.

Let’s start with a report (available for download at https://gen-pop.com/wtf) by independent French market-research company Ipsos of responses from more than 3,000 people in the U.S. and Canada, and thousands more around the globe, to a survey about the human side of transportation. That is, how do actual people — the consumers who ultimately will vote with their wallets for or against advances in automotive technology — feel about the products innovators have been proposing to roll out in the near future. Today, I’m going to concentrate on responses to questions about self-driving technology and automated highways. I’ll look at some of the other results in future postings.

Perhaps the biggest takeaway from the survey is that approximately 25% of American respondents claim they “would never use” an autonomous vehicle. That’s a biggie for advocates of “ultra-safe” automated highways.

As my wife constantly reminds me whenever we’re out in Southwest Florida traffic, the greatest highway danger is from the few unpredictable drivers who do idiotic things. When surrounded by hundreds of vehicles ideally moving in lockstep, but actually not, what percentage of drivers acting unpredictably does it take to totally screw up traffic flow for everybody? One percent? Two percent?

According to this survey, we can expect up to 25% to be out of step with everyone else because they’re making their own decisions instead of letting technology do their thinking for them.
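
You can get a feel for the answer with a toy single-lane traffic simulation, in the spirit of the classic Nagel-Schreckenberg cellular automaton. Every parameter below is invented, so treat the output as a cartoon, not a forecast:

```python
import random

ROAD, CARS, VMAX = 200, 60, 5   # circular road cells, car count, top speed

def standstill_fraction(erratic_frac, steps=200):
    """Fraction of car-steps spent at a dead stop, given some erratic drivers."""
    random.seed(1)
    pos = sorted(random.sample(range(ROAD), CARS))
    vel = [VMAX] * CARS
    erratic = [random.random() < erratic_frac for _ in range(CARS)]
    stopped = 0
    for _ in range(steps):
        for i in range(CARS):
            gap = (pos[(i + 1) % CARS] - pos[i] - 1) % ROAD
            vel[i] = min(vel[i] + 1, VMAX, gap)       # speed up, never crash
            if erratic[i] and random.random() < 0.5:  # the unpredictable few...
                vel[i] = max(vel[i] - 2, 0)           # ...brake for no reason
            stopped += vel[i] == 0
        pos = [(p + v) % ROAD for p, v in zip(pos, vel)]
    return stopped / (steps * CARS)

for frac in (0.0, 0.01, 0.05):
    print(f"{frac:4.0%} erratic -> {standstill_fraction(frac):.1%} of car-steps stopped")
```

In runs like this, the shockwaves trailing behind a handful of random brakers tend to stop traffic far out of proportion to their numbers, which is exactly my wife’s point.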

Automated highways were described in detail back in the middle part of the twentieth century by science-fiction writer Robert A. Heinlein. What he described was a scene where thousands of vehicles packed vast Interstates, all communicating wirelessly with each other and with a smart fixed infrastructure that planned traffic patterns far ahead, and communicated its decisions to individual vehicles so they acted together to keep traffic flowing in the smoothest possible way at the maximum possible speed with no accidents.

Heinlein also predicted that the heroes of his stories would all be rabid free-spirited thinkers, who wouldn’t allow their cars to run in self-driving mode if their lives depended on it! Instead, they relied on human intelligence, forethought, and fast reflexes to keep themselves out of trouble.

And, he predicted they would barely manage to escape with their lives!

I happen to agree with him: trying to combine a huge percentage of highly automated vehicles with a small percentage of vehicles guided by humans who simply don’t have the foreknowledge, reflexes, or concentration to keep up with the automated vehicles around them is a train wreck waiting to happen.

Back in the late twentieth century I had to regularly commute on the 70-mph parking lots that went by the name “Interstates” around Boston, Massachusetts. Vehicles were generally crammed together half a car length apart. The only way to have enough warning to apply the brakes was to look through the back window and windshield of the car ahead to see what the car ahead of it was doing.

The result was regular 15-car pileups every morning during commute times.

Heinlein’s future vision (and that of automated-highway advocates) had that kind of traffic density and speed, but was saved from inevitable disaster by the fascistic control of an omniscient automated-highway technology. One recalcitrant human driver tossed into the mix would be guaranteed to bring the whole thing down.

So, the moral of this story is: don’t allow manual-driving mode on automated highways. The 25% of Americans who’d never surrender their manual-driving privilege can just go drive somewhere else.

Yeah, I can see THAT happening!

A Modest Proposal

With apologies to Jonathan Swift, let’s change tack and focus on a more modest technology: driver assistance.

Way back in the 1980s, George Lucas and friends put out the third installment in the interminable Star Wars series, Return of the Jedi. The film included a sequence that could only be possible in real life with help from some sophisticated collision-avoidance technology. They had a bunch of characters zooming around in a trackless forest on the moon Endor, riding what can only be described as flying motorcycles.

As anybody who’s tried trailblazing through a forest on an off-road motorcycle can tell you, going fast through virgin forest means constant collisions with fixed objects. As Bugs Bunny once said: “Those cartoon trees are hard!”

Frankly, Luke Skywalker and Princess Leia might have had superhuman reflexes, but their doing what they did without collision avoidance technology strains credulity to the breaking point. Much easier to believe their little speeders gave them a lot of help to avoid running into hard, cartoon trees.

In the real world, Israeli companies Autotalks and Griiip have demonstrated the world’s first motorsport Vehicle-to-Vehicle (V2V) application to help drivers avoid rear-ending each other. The system works by combining GPS, in-vehicle sensing, and wireless communication to create a peer-to-peer network that allows each car to send out alerts to all the other cars around it.

So, imagine the situation where multiple cars are on a racetrack at the same time. That’s decidedly not unusual in a motorsport application.

Now, suppose something happens to make car A suddenly and unpredictably slow or stop. Again, that’s hardly an unusual occurrence. Car B, which is following at some distance behind car A, gets an alert from car A of a possible impending-collision situation. Car B forewarns its driver that a dangerous situation has arisen, so he or she can take evasive action. So far, a very good thing in a car-race situation.
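To make that concrete, here’s a minimal sketch in Python of the kind of logic involved. It’s not Autotalks’ actual protocol (the message fields, thresholds, and function names here are all my own inventions for illustration), but it shows the essential trick: car A broadcasts its position, speed, and braking state, and car B compares that broadcast against its own motion to decide whether its driver needs a warning.

import math
import time
from dataclasses import dataclass

@dataclass
class V2VAlert:
    """One broadcast message. Fields are illustrative, not Autotalks' format."""
    car_id: str
    lat: float        # GPS position, decimal degrees
    lon: float
    speed: float      # m/s along the track
    decel: float      # m/s^2; positive means braking
    timestamp: float

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance; fine over a few hundred metres."""
    k = 111_320.0     # metres per degree of latitude
    dx = (lon2 - lon1) * k * math.cos(math.radians(lat1))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

def should_warn(alert, my_lat, my_lon, my_speed, horizon_s=3.0):
    """Warn the driver if we'd close the gap within a few seconds."""
    gap = distance_m(my_lat, my_lon, alert.lat, alert.lon)
    closing = my_speed - alert.speed        # m/s we gain on the car ahead
    if alert.decel > 6.0:                   # car ahead is braking hard
        closing += alert.decel * horizon_s  # assume it keeps slowing
    if closing <= 0:
        return False                        # not catching up: no danger
    return gap / closing < horizon_s        # time-to-collision test

# Car A brakes hard and broadcasts; car B is ~55 m behind doing 40 m/s.
alert = V2VAlert("car_A", 26.1420, -81.7948, 10.0, 8.0, time.time())
if should_warn(alert, 26.1415, -81.7948, my_speed=40.0):
    print("BRAKE WARNING: trouble ahead")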

But, what’s that got to do with just us folks driving down the highway minding our own business?

During the summer down here in Florida, every afternoon we get thunderstorms dumping torrential rain all over the place. Specifically, we’ll be driving down the highway at some ridiculous speed, then come to a wall of water obscuring everything. Visibility drops from unlimited to a few tens of feet with little or no warning.

The natural reaction is to come to a screeching halt. But, what happens to the cars barreling up from behind? They can’t see you in time to stop.

Whammo!

So, coming to a screeching halt is not the thing to do. Far better to keep going forward as fast as visibility will allow.

But, what if somebody up ahead panicked and came to a screeching halt? Or, maybe their version of “as fast as visibility will allow” is a lot slower than yours? How would you know?

The answer is to have all the vehicles equipped with the Israeli V2V equipment (or an equivalent) to forewarn following drivers that something nasty has come out of the proverbial woodwork. It could also feed into your vehicle’s collision-avoidance system to bridge the 2-3 seconds it takes for a human driver to say “What the heck?” and figure out what to do.

The Israelis suggest that the required chip set (which, of course, they’ll cheerfully sell you) is so dirt cheap that anybody can afford to opt for it in their new car, or retrofit it into their beat-up old junker. They further suggest that it would be worthwhile for insurance companies to offer a rebate on premiums to help cover the cost.

Sounds like a good deal to me! I could get behind that plan.

Invasion of the Robofish!

30 March 2018 – Mobile autonomous systems come in all sizes, shapes, and forms, and have “invaded” every earthly habitat. That’s not news. What is news is how far the “bleeding edge” of that technology has advanced. Specifically, it’s news when a number of trends combine to make something unique.

Today I’m getting the chance to report on something I predicted in a sci-fi novel I wrote back in 2011, and that goes at least one step further than my prediction.

Last week the folks at Design World published a report on research at the MIT Computer Science & Artificial Intelligence Lab that combines three robotics trends into one system that quietly makes something I find fascinating: a submersible mobile robot. The three trends are soft robotics, submersible unmanned systems, and biomimetic robot design.

The beastie in question is a robot fish. It’s obvious why this little guy touches on those three trends. How could a robotic fish not use soft-robotic, submersible, and biomimetic technologies? What I want to point out is how it uses those technologies and why that combination is necessary.

Soft Robotics

Folks have made ROVs (basically remotely operated submarines) for … a very long time. What they’ve pretty much all produced are clanky, propeller-driven derivatives of Jules Verne’s fictional Nautilus from his 1870 novel Twenty Thousand Leagues Under the Sea. That hunk of junk is a favorite of steampunk aficionados.

Not much has changed in basic submarine design since then. Modern ROVs are more maneuverable than their WWII predecessors because they add multiple propellers to push them in different directions, but the rest of it’s pretty much the same.

Soft robotics changes all that.

About 45 years ago, a half-drunk physics professor at a kegger party started bending my ear about how Mommy Nature never seemed to have discovered the wheel. The wheel’s a nearly unique human invention that Mommy Nature has pretty much done without.

Mommy Nature doesn’t use the wheel because she uses largely soft technology. Yes, she uses hard technology to make structural components like endo- and exo-skeletons to give her live beasties both protection and shape, but she stuck with soft-bodied life forms for the first four billion years of Earth’s 4.5-billion-year history. Adding hard-body technology in the form of notochords didn’t happen until the Cambrian explosion of 541-516 million years ago, when most major animal phyla appeared.

By the way, that professor at the party was wrong. Mommy Nature invented wheels way back in the Precambrian era in the form of rotary motors to power the flagella that propel unicellular free-swimmers. She just hasn’t used wheels for much else since.

Of course, everybody more advanced than a shark has a soft body reinforced by a hard, bony skeleton.

Today’s soft robotics uses elastomeric materials to solve a number of problems for mobile automated systems.

Perhaps most importantly, it’s a lot easier for soft robots to separate their insides from their outsides. That may not seem like a big deal, but think of how much trouble engineers go through to keep dust, dirt, and chemicals (such as seawater) out of the delicate gears and bearings of wheeled vehicles. Having a flexible elastomeric skin encasing the whole robot eliminates all that.

That’s not to mention skin’s job of keeping pesky little creepy-crawlies out! I remember an early radio astronomer complaining that pack rats had gotten into his remote desert headquarters trailer and eaten a big chunk of his computer’s magnetic-core memory. That was back in the days when computer random-access memories were made from tiny ferrite rings strung on copper wires.

Another major advantage of soft bodies for mobile robots is resistance to collision damage. Think about how often you’re bumped into when crossing the room at a cocktail party. Now, think about what your hard-bodied automobile would look like after bumping into that many other cars in a parking lot. Not a pretty sight!

The flexibility of soft bodies also makes possible a lot of propulsion methods besides wheel-like propellers, caterpillar tracks, and rubber tires. That’s good, because piercing soft-body skins with drive shafts to power propellers and wheels pretty much trashes the advantages of having those skins in the first place.

That’s why prosthetic devices all have elaborate cuffs to hold them to the outsides of the wearer’s limbs. Piercing the skin to screw something like Captain Hook’s hook directly into the existing bone never works out well!

So, in summary, the MIT group’s choice to start with soft-robotic technology is key to their success.

Submersible Unmanned Systems

Underwater drones have one major problem not faced by robotic cars and aircraft: radio waves don’t go through water. That means if anything happens that your none-too-intelligent automated system can’t handle, it needs guidance from a human operator. Underwater, that has largely meant tethering the robot to a human.

This issue is a wall that self-driving-car developers run into constantly (and sometimes literally). When the human behind the wheel mandated by state regulators for autonomous test vehicles falls asleep or is distracted by texting his girlfriend, BLAMMO!

The world is a chaotic place and unpredicted things pop out of nowhere all the time. Human brains are programmed to deal with this stuff, but computer technology is not, and will not be for the foreseeable future.

Drones and land vehicles, which are immersed in a sea of radio-transparent air, can rely on radio links to remote human operators to help them get out of trouble. Underwater vehicles, which are immersed in a sea of radio-opaque water, can’t.

In the past, that’s meant copper wires enclosed in physical tethers that tie the robots to the operators. Tethers get tangled, cut, and hung up on everything from coral outcrops to passing whales.

There are a couple of ways out of the tether bind: ultrasonics and infrared. Both go through water very nicely, thank you. The MIT group seems to be using my preferred comm link: ultrasonics.

Sound goes through water like you-know-what through a goose. Water also has little or no sonic “color.” That is, all frequencies of sonic waves go more-or-less equally well through water.

The biggest problem for ultrasonics is interference from all the other noise makers out there in the natural underwater world. That calls for the spread-spectrum transmission techniques invented by Hedy Lamarr. (Hah! Gotcha! You didn’t know Hedy Lamarr, aka Hedwig Eva Maria Kiesler, was a world-famous technical genius in addition to being a really cute, sexy movie actress.) Hedy’s spread-spectrum technique lets ultrasonic signals cut right through the clutter.
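For the curious, here’s roughly how Hedy’s trick works, as a toy Python sketch (my own illustration, not anybody’s actual modem design): transmitter and receiver derive the same pseudo-random hopping list from a shared key, so a noise source parked on any single channel only clobbers the occasional bit that happens to land there.

import random

N_CHANNELS = 64   # ultrasonic sub-bands; say, 1-kHz slices of a 64-kHz band

def hop_sequence(shared_key, length):
    """Both ends derive the same pseudo-random channel list from a shared key."""
    rng = random.Random(shared_key)
    return [rng.randrange(N_CHANNELS) for _ in range(length)]

def transmit(bits, hops):
    """Send one bit per time slot, each on its assigned channel."""
    return [(channel, bit) for channel, bit in zip(hops, bits)]

def receive(signal, hops, jammed_channel):
    """The receiver listens on the same hop list. A noise source sitting on
    one channel only wipes out the bits that happen to land on it."""
    received = []
    for (channel, bit), expected in zip(signal, hops):
        if channel != expected:
            continue  # can't happen when both ends share the key
        received.append(None if channel == jammed_channel else bit)
    return received

bits = [random.randint(0, 1) for _ in range(1000)]
hops = hop_sequence(shared_key=1942, length=len(bits))  # 1942: year of the patent
received = receive(transmit(bits, hops), hops, jammed_channel=17)
print(f"{received.count(None)} of {len(bits)} bits lost")  # about 1/64 of them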

So, advanced submersible mobile robot technology is the second thread leading to a successful robotic fish.

Biomimetics

Biomimetics is a 25-cent word that simply means copying designs directly from nature. It’s a time-honored short cut engineers have employed from time immemorial. Sometimes it works spectacularly, such as Thomas Wedgwood’s photographic camera (developed as an analogue of the terrestrial vertebrate eye), and sometimes not, such as Leonardo da Vinci’s attempts to make flying machines based on birds’ wings.

Obviously, Mommy Nature’s favorite fish-propulsion mechanism is highly successful, having been around for some 550 million years and still going strong. It, of course, requires a soft body anchored to a flexible backbone. It takes no imagination at all to copy it for robot fish.

The copying itself is the hard part, though, because it requires developing fabrication techniques to build soft-bodied robots with flexible backbones in the first place. I’ve tried it, and it’s no mean task.

The tough part is making a muscle analogue that will drive the flexible body to move back and forth rhythmically and propel the critter through the water. The answer is pneumatics.

In the early 2000s, a patent-lawyer friend of mine suggested lining both sides of a flexible membrane with tiny balloons that could be alternately inflated or deflated. When the balloons on one side were inflated, the membrane would curve away from that side. When the balloons on the other side were inflated the membrane would curve back. I played around with this idea, but never went very far with it.

The MIT group seems to have made it work using both gas (carbon dioxide) and liquid (water) for the working fluid. The difference between this kind of motor and natural muscle is that natural muscle works by pulling when energized, and the balloon system works by pushing. Otherwise, both work by balancing mechanical forces along two axes with something more-or-less flexible trapped between them.
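Here’s a minimal sketch in Python of that alternating-inflation idea (a toy illustration of my friend’s scheme, not the MIT group’s actual controller): two banks of chambers flank the flexible spine, and a simple timer swaps which bank is inflated, wagging the tail back and forth.

TAIL_PERIOD_S = 0.8      # one full left-right beat; a made-up number

class Valve:
    """Stand-in for a real solenoid-valve driver; here it just prints."""
    def __init__(self, name):
        self.name = name
    def set(self, inflate):
        print(f"{self.name}: {'inflate' if inflate else 'vent'}")

def tail_beat(left, right, t):
    """Inflating one bank of chambers bends the elastomer spine away
    from that side; swapping banks every half period wags the tail."""
    left_on = (t % TAIL_PERIOD_S) < TAIL_PERIOD_S / 2
    left.set(left_on)
    right.set(not left_on)

left, right = Valve("left chambers"), Valve("right chambers")
for step in range(8):                    # two full tail beats
    tail_beat(left, right, step * TAIL_PERIOD_S / 4)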

In Nature’s fish, that something is the critter’s skeleton (backbone made up of vertebrae and stiffened vertically by long, thin spines), whereas the MIT group’s robofish uses elastomers with different stiffnesses.

Complete Package

Putting these technical trends together creates a complete package that makes it possible to build a free-swimming submersible mobile robot that moves in a natural manner at a reasonable speed without a tether. That opens up a whole range of applications, from deep-water exploration to marine biology.