You Want to Print WHAT?!

3D printed plastic handgun
The Liberator gun, designed by Defense Distributed. Photo taken 16 May 2013 by Vvzvlad – Flickr: Liberator.3d.gun.vv.01, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=26141469

22 August 2018 – Since the Fifteenth Century, when Johannes Gutenberg invented the technology, printing has given those in authority ginky fits. Until recently it was just dissemination of heretical ideas, a la Giordano Bruno, that raised authoritarian hackles. More recently, 3-D printing, also known as additive manufacturing (AM), has made it possible to make things that folks who like to tell you what you’re allowed to do don’t want you to make.

Don’t get me wrong, AM makes it possible to do a whole lot of good stuff that we only wished we could do before. Like, for example, Jay Leno uses it to replace irreplaceable antique car parts. It’s a really great technology that allows you to make just about anything you can describe in a digital drafting file without the difficulty and mess of hiring a highly skilled fabrication machine shop.

In my years as an experimental physicist, I dealt with fabrication shops a lot! Fabrication shops are collections of craftsmen and the equipment they need to make one-off examples of amazing stuff. It’s generally stuff, however, that nobody’d want to make a second time.

Like the first one of anything.

The reason I specify that AM is for making stuff nobody’d want to make a second time is that it’s slow. Things folks want to make a lot of, like nuts and bolts and sewing machines, are worth spending a lot of time on to figure out fast, efficient, and cheap ways to make lots of them.

Take microchips. The darn things take huge amounts of effort to design, and tens of billions of dollars of equipment to make, but once you’ve figured it all out and set it all up, you can pop the things out like chocolate-chip cookies. You can buy an Intel Celeron G3900 dual-core 2.8 GHz desktop processor online for $36.99 because Intel spread the hideous quantities of up-front cost it took to originally set up the production line over the bazillions of processors that production line can make. It’s called “economy of scale.”
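The economy-of-scale arithmetic can be sketched in a few lines. The setup and marginal costs below are made-up round numbers for illustration, not Intel’s actual figures.

```python
# Economy of scale: a fixed setup cost amortized across the run.
# Figures are invented assumptions, not real production costs.

def per_unit_cost(setup_cost: float, marginal_cost: float, units: int) -> float:
    """Cost per unit once setup is spread across the production run."""
    return setup_cost / units + marginal_cost

# A hypothetical $10B production line, $5 marginal cost per chip:
print(per_unit_cost(10e9, 5.0, 1_000_000_000))  # 15.0 -- cheap at scale
print(per_unit_cost(10e9, 5.0, 1))              # one-off: setup cost dominates
```

At a billion units the setup cost all but vanishes from the per-unit price; at one unit it *is* the price, which is the whole argument for tolerating slow methods like AM on one-offs.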

If you’re only gonna make one, or just a few, of the things, there’s no economy of scale.

But, if your’re only gonna make one, or just a few, you don’t worry too much about how long it takes to make each one, and what it costs is what it costs.

So, you put up with doing it some way that’s slow.

Like AM.

A HUGE advantage of making things with AM is that you don’t have to be all that smart. Once you learn to set the 3-D printer up, you’re all set. You just download the digital computer-aided-manufacturing (CAM) file into the printer’s artificial cerebrum, and it just DOES it. If you can download the file over the Internet, you’ve got it knocked!

Which brings us to what I want to talk about today: 3-D printing of handguns.

Now, I’m not any kind of anti-gun nut. I’ve got half a dozen firearms lying around the house, and have had since I was a little kid. It’s a family thing. I learned sharpshooting when I was around ten years old. Target shooting is a form of meditation for me. A revolver is my preferred weapon if ever I have need of a weapon. I never want to be the one who brought a knife to a gunfight!

That said, I’ve never actually found a need for a handgun. The one time I was present at a gunfight, I hid under the bed. The few times I’ve been threatened with guns, I talked my way out of it. Experience has led me to believe that carrying a gun is the best way to get shot.

I’ve always agreed with my gunsmith cousin that modern guns are beautiful pieces of art. I love their precision and craftsmanship. I appreciate the skill and effort it takes to make them.

The good ones, that is.

That’s the problem with AM-made guns. It takes practically no skill to create them. They’re right up there with the zip guns we talked about when we were kids.

We never made zip guns. We talked about them. We talked about them in the same tones we’d use talking about leeches or cockroaches. Ick!

We’d never make them because they were beneath contempt. They were so crude we wouldn’t dare fire one. Junk like that’s dangerous!

Okay, so you get an idea how I would react to the news that some nut case had published plans online for 3-D printing a handgun. Bad enough to design such a monstrosity, what about the idiot stupid enough to download the plans and make such a thing? Even more unbelievable, what moron would want to fire it?

Have they no regard for their hands? Don’t they like their fingers?

Anyway, not long ago, a press release crossed my desk from Giffords, the gun-safety organization set up by former U.S. Rep. Gabrielle Giffords and her husband after she survived an assassination attempt in Arizona in 2011. The Giffords’ press release praised Senators Richard Blumenthal and Bill Nelson, and Representatives David Cicilline and Seth Moulton for introducing legislation to stop untraceable firearms from further proliferation after the Trump Administration cleared the way for anyone to 3-D-print their own guns.

Why “untraceable” firearms, and what have they got to do with AM?

Downloadable plans for producing guns by the AM technique put firearms in the hands of folks too unskilled and, yes, stupid to make them themselves with ordinary methods. That’s important because it is theoretically possible to AM-produce firearms of surprising sophistication. The first one offered was a cheap plastic thing (depicted above) that would likely be more of a danger to its user than to its intended victim. More recent offerings, however, have been repeating weapons made of more robust materials that might successfully be used to commit crimes.

Ordinary firearm production techniques require a level of skill and investment in equipment that puts them above the radar for federal regulators. The first thing those regulators require is licensing the manufacturer. Units must be serialized, and records must be kept. The system certainly isn’t perfect, but it gives law enforcement a fighting chance when the products are misused.

The old zip guns snuck in under the radar, but they were so crude and dangerous to their users that they were seldom even made. Almost anybody with enough sense to make them had enough sense not to make them! Those who got their hands on the things were more a danger to themselves than to society.

The Trump administration’s recent settlement with Defense Distributed, allowing them to relaunch their website with a searchable database of firearm blueprints and letting the public create their own fully functional, unserialized firearms using AM technology, opens the floodgates for dangerous people to make their own untraceable firearms.

That’s just dumb!

The Untraceable Firearms Act would prohibit the manufacture and sale of firearms without serial numbers, require any person or business engaged in the business of selling firearm kits and unfinished receivers to obtain a dealer’s license and conduct background checks on purchasers, and mandate that a person who runs a business putting together firearms or finishing receivers must obtain a manufacturer’s license and put serial numbers on firearms before offering them for sale to consumers.

The 3-D Printing Safety Act would prohibit the online publication of computer-aided design (CAD) files which automatically program a 3-D-printer to produce or complete a firearm. That closes the loophole letting malevolent idiots evade scrutiny by making their own firearms.

We have to join with Giffords in applauding the legislators who introduced these bills.

Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it needs to be asked whenever anyone looks into designing a system (whether manual or automated) to do anything. The possibility that anyone ever sets up a system to do anything without first asking that question means that it’s not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!
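The scoring rule in the paragraph above can be sketched directly. The verdict strings are my own phrasing of the thresholds described here, not a formal method.

```python
# Toy scoring of the "three Ds" heuristic: dull, dirty, dangerous.
# Thresholds follow the rule of thumb described in the text.

def automation_verdict(dull: bool, dirty: bool, dangerous: bool) -> str:
    hits = sum([dull, dirty, dangerous])
    if hits == 0:
        return "leave it to humans"
    if hits == 1:
        return "think about automating"
    if hits == 2:
        return "very likely ripe for automation"
    return "slam dunk"

# Driving: dull and dangerous, but not especially dirty.
print(automation_verdict(dull=True, dirty=False, dangerous=True))
# -> very likely ripe for automation
```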

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people who have been killed in accidents involving driverless cars you can (as of this writing) count on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: Seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.

Whether folks’ claims that they “have” to have a car to get to work are based on historical precedent, urban planning, wishful thinking, or flat out just what they want to believe, they’re a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb “A fool and his money are soon parted?”

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic has some appeal. I submit, however, that the appeal is short lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend their time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts on which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharanov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Thus he faces the potentially debilitating question of what constitutes knowledge: what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
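Klein’s layering can be sketched as a toy pipeline. The vocabulary and glosses below are invented for illustration, and the pragmatics layer is deliberately left unresolved, since that’s the part he says computers still can’t do.

```python
# Toy sketch of the three NLP layers Klein describes:
# syntax (parts of speech), semantics (word meanings),
# and pragmatics (context), which is left unresolved here.

SYNTAX = {"dogs": "NOUN", "eat": "VERB", "food": "NOUN"}  # invented lexicon
SEMANTICS = {"eat": "ingest a nutritious substance to get energy to live"}

def analyze(sentence: str) -> dict:
    words = sentence.lower().split()
    return {
        "syntax": [(w, SYNTAX.get(w, "UNKNOWN")) for w in words],
        "semantics": {w: SEMANTICS[w] for w in words if w in SEMANTICS},
        "pragmatics": "unresolved: needs cultural and personal context",
    }

result = analyze("Dogs eat food")
print(result["syntax"])
# -> [('dogs', 'NOUN'), ('eat', 'VERB'), ('food', 'NOUN')]
```

The first two layers reduce to lookups and structure, which is why Klein rates them “OK” and “kind of OK”; the third returns a placeholder because there is no table you can look context up in.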

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio

Barry Cousins, Research Lead, Info-Tech Research Group specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same, unmatched expertise to HR professionals worldwide.

Who’s NOT a Creative?


Compensating sales
Close-up Of A Business Woman Giving Cheque To Her Colleague At Workplace In Office. Andrey Popov/Shutterstock

25 July 2018 – Last week I made a big deal about the things that motivate creative people, such as magazine editors, and how the most effective rewards were non-monetary. I also said that monetary rewards, such as commissions based on sales results, were exactly the right rewards to use for salespeople. That would imply that salespeople were somehow different from others, and maybe even not creative.

That is not the impression I want to leave you with. I’m devoting this blog posting to setting that record straight.

My remarks last week were based on Maslow‘s and Herzberg‘s work on motivation of employees. I suggested that these theories were valid in other spheres of human endeavor. Let’s be clear about this: yes, Maslow’s and Herzberg’s theories are valid and useful in general, whenever you want to think about motivating normal, healthy human beings. It’s incidental that those researchers were focused on employer/employee relations as an impetus to their work. If they’d been focused on anything else, their conclusions would probably have been pretty much the same.

That said, there are a whole class of people for whom monetary compensation is the holy grail of motivators. They are generally very high functioning individuals who are in no way pathological. On the surface, however, their preferred rewards appear to be monetary.

Traditionally, observers who don’t share this reward system have indicted these individuals as “greedy.”

I, however, dispute that conclusion. Let me explain why.

When pointing out the rewards that can be called “motivators for editors,” I wrote:

“We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like ‘Best Technical Article.’ Above all, we talked up the fact that ours was ‘the premier publication in the market.'”

Notice that these rewards, though non-monetary, were more or less measurable. They could be (and indeed were, by the individuals they motivated) seen as scorecards. The individuals involved had a very clear idea of the value attached to such rewards. A Nobel Prize in Physics is of greater value than, say, a similar award given by Harvard University.

For example, in 1987 I was awarded the “Cahners Editorial Medal of Excellence, Best How-To Article.” That wasn’t half bad. The competition was articles written for a few dozen magazines that were part of the Cahners Publishing Company, which at the time was a big deal in the business-to-business magazine field.

What I considered to be of higher value, however, was the “First Place Award For Editorial Excellence for a Technical Article in a Magazine with Over 80,000 Circulation” I got in 1997 from the American Society of Business Press Editors, where I was competing with a much wider pool of journalists.

Economists have a way of attempting to quantify such non-monetary awards called utility. They arrive at values by presenting various options and asking the question: “Which would you rather have?”
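The elicitation method described above can be sketched by tallying answers to repeated “which would you rather have?” questions. The options and answers below are invented for illustration, loosely echoing the awards mentioned earlier.

```python
# Sketch of eliciting a utility ranking from pairwise choices.
# The (winner, loser) pairs are invented example data.

from collections import Counter

choices = [
    ("Nobel Prize", "Harvard award"),
    ("Nobel Prize", "Cahners medal"),
    ("Harvard award", "Cahners medal"),
]

# Count how often each option wins a head-to-head choice.
utility_rank = Counter(winner for winner, _ in choices)
print(utility_rank.most_common())
# -> [('Nobel Prize', 2), ('Harvard award', 1)]
```

A simple win count is the crudest possible estimator; the point is only that the ranking falls out of the choices people actually make, not out of any price tag.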

Of course, measures of utility generally vary widely depending on who’s doing the choosing.

For example, an article in the 19 July The Wall Street Journal described a phenomenon the author seemed to think was surprising: Saudi-Arabian women drivers (new drivers all) showed a preference for muscle cars over more pedestrian models. The author, Margherita Stancati, related an incident where a Porsche salesperson in Riyadh offered a recently minted woman driver an “easy to drive crossover designed to primarily attract women.” The customer demurred. She wanted something “with an engine that roars.”

So, the utility of anything is not an absolute in any sense. It all depends on answering the question: “Utility to whom?”

Everyone is motivated by rewards in the upper half of the Needs Pyramid. If you’re a salesperson, growth in your annual (or other period) sales revenue is in the green Self Esteem block. It’s well and truly in the “motivator” category, and has nothing to do with the Safety and Security “hygiene factor” where others might put it. Successful salespeople have those hygiene factors well-and-truly covered. They’re looking for a reward that tells them they’ve hit a home run. That reward is likely a bigger annual bonus than the next guy’s.

The most obvious money-driven motivators accrue to the folks in the CEO ranks. Jeff Bezos, Elon Musk, and Warren Buffett would have a hard time measuring their success (i.e., hitting the Pavlovian lever to get Self Actualization rewards) without looking at their monetary compensation!

The Pyramid of Needs

Needs Pyramid
The Pyramid of Needs combines Maslow’s and Herzberg’s motivational theories.

18 July 2018 – Long, long ago, in a [place] far, far away. …

When I was Chief Editor at business-to-business magazine Test & Measurement World, I had a long, friendly though heated, discussion with one of our advertising-sales managers. He suggested making the compensation we paid our editorial staff contingent on total advertising sales. He pointed out that what everyone came to work for was to get paid, and that tying their pay to how well the magazine was doing financially would give them an incentive to make decisions that would help advertising sales, and advance the magazine’s financial success.

He thought it was a great idea, but I disagreed completely. I pointed out that, though revenue sharing was exactly the right way to compensate the salespeople he worked with, it was exactly the wrong way to compensate creative people, like writers and journalists.

Why it was a good idea for his salespeople I’ll leave for another column. Today, I’m interested in why it was not a good idea for my editors.

In the heat of the discussion I didn’t do a deep dive into the reasons for taking my position. Decades later, from the standpoint of a semi-retired whatever-you-call-my-patchwork-career, I can now sit back and analyze in some detail the considerations that led me to my conclusion, which I still think was correct.

We’ll start out with Maslow’s Hierarchy of Needs.

In 1943, Abraham Maslow proposed that healthy human beings have a certain number of needs, and that these needs are arranged in a hierarchy. At the top is “self actualization,” which boils down to a need for creativity. It’s the need to do something that’s never been done before in one’s own individual way. At the bottom is the simple need for physical survival. In between are three more identified needs people also seek to satisfy.

Maslow pointed out that people seek to satisfy these needs from the bottom to the top. For example, nobody worries about security arrangements at their gated community (second level) while having a heart attack that threatens their survival (bottom level).

Overlaid on Maslow’s hierarchy is Frederick Herzberg’s Two-Factor Theory, which he published in his 1959 book The Motivation to Work. Herzberg’s theory divides Maslow’s hierarchy into two sections. The lower section is best described as “hygiene factors.” They are also known as “dissatisfiers” or “demotivators” because if they’re not met folks get cranky.

Basically, a person needs to have their hygiene factors covered in order to have a level of basic satisfaction in life. Leaving any of these needs unsatisfied makes them miserable. Having them satisfied doesn’t motivate them at all. It makes ’em fat, dumb and happy.

The upper-level needs are called “motivators.” Not having motivators met drives an individual to work harder, smarter, etc. It energizes them.

My position in the argument with my ad-sales friend was that providing revenue sharing worked at the “Safety and Security” level. Editors were (at least in my organization) paid enough that they didn’t have to worry about feeding their kids and covering their bills. They were talented people with a choice of whom they worked for. If they weren’t already being paid enough, they’d have been forced to go work for somebody else.

Creative people, my argument went, are motivated by non-monetary rewards. They work at the upper “motivator” levels. They’ve already got their physical needs covered, so to motivate them we have to offer rewards in the “motivator” realm.

We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like “Best Technical Article.” Above all, we talked up the fact that ours was “the premier publication in the market.”

These were all non-monetary rewards to motivate people who already had their basic needs (the hygiene factors) covered.

I summarized my compensation theory thusly: “We pay creative people enough so that they don’t have to go do something else.”

That gives them the freedom to do what they would want to do, anyway. The implication is that creative people want to do stuff because it’s something they can do that’s worth doing.

In other words, we don’t pay creative people to work. We pay them to free them up so they can work. Then, we suggest really fun stuff for them to work at.

What does this all mean for society in general?

First of all, if you want there to be a general level of satisfaction within your society, you’d better take care of those hygiene factors for everybody!

That doesn’t mean the top 1%. It doesn’t mean the top 80%, either. Or, the top 90%. It means everybody!

If you’ve got 99% of everybody covered, that still leaves a whole lot of people who think they’re getting a raw deal. Remember that in the U.S.A. there are roughly 300 million people. If you’ve left 1% feeling ripped off, that’s 3 million potential revolutionaries. Three million people can cause a lot of havoc if motivated.

Remember, at the height of the 1960s Hippie movement, there were, according to the most generous estimates, only about 100,000 hippies wandering around. Those hundred-thousand activists made a huge change in society in a very short period of time.

Okay. If you want people invested in the status quo of society, make sure everyone has all their hygiene factors covered. If you want to know how to do that, ask Bernie Sanders.

Assuming you’ve got everybody’s hygiene factors covered, does that mean they’re all fat, dumb, and happy? Do you end up with a nation of goofballs with no motivation to do anything?

Nope!

Remember those needs Herzberg identified as “motivators” in the upper part of Maslow’s pyramid?

The hygiene factors come into play only when they’re not met. The day they’re met, people stop thinking about who’ll be first against the wall when the revolution comes. Folks become fat, dumb and happy, and stay that way for about an afternoon. Maybe an afternoon and an evening if there’s a good ballgame on.

The next morning they start thinking: “So, what can we screw with next?”

What they’re going to screw with next is anything and everything they damn well please. Some will want to fly to the Moon. Some will want to outdo Michelangelo’s frescoes on the ceiling of the Sistine Chapel. They’re all going to look at what they think was the greatest stuff from the past, and try to think of ways to do better, and to do it in their own way.

That’s the whole point of “self actualization.”

The Renaissance didn’t happen because everybody was broke. It happened because they were already fat, dumb and happy, and looking for something to screw with next.

What If They Gave a War, But Nobody Noticed

Cyberwar
World War III is being fought in cyberspace right now, but most of us seem to be missing it! Oliver Denker/Shutterstock

13 June 2018 – Ever wonder why Kim Jong Un is so willing to talk about giving up his nuclear arsenal? Sort-of-President Donald Trump (POTUS) seems to think it’s because economic sanctions are driving North Korea (officially the Democratic People’s Republic of Korea, or DPRK) to the financial brink.

That may be true, but it is far from the whole story. As usual, the reality star POTUS is stuck decades behind the times. The real World War III won’t have anything to do with nukes, and it’s started already.

The threat of global warfare using thermonuclear weapons was panic inducing to my father back in the 1950s and 1960s. Strangely, however, my superbrained mother didn’t seem very worried at the time.

By the 1980s, we were beginning to realize what my mother seemed to know instinctively — that global thermonuclear war just wasn’t going to happen. That kind of war leaves such an ungodly mess that no even-marginally-sane person would want to win one. The winners would be worse off than the losers!

The losers would join the gratefully dead, while the winners would have to live in the mess!

That’s why we don’t lose sleep at night knowing that the U.S., Russia, China, India, Pakistan, and, in fact, most countries in the first and second worlds, have access to thermonuclear weapons. We just worry about third-world toilets (to quote Danny DeVito’s character in The Jewel of the Nile) run by paranoid homicidal maniacs getting their hands on the things. Those guys are the only ones crazy enough to ever actually use them!

We only worried about North Korea developing nukes when Kim Jong Un was acting like a total whacko. Since he stopped his nuclear development program (because his nuclear lab accidentally collapsed under a mountain of rubble), it’s begun looking like he was no more insane than the leaders of Leonard Wibberley’s fictional nation-state, the Duchy of Grand Fenwick.

In Wibberley’s 1956 novel The Mouse That Roared, the Duchy’s leaders all breathed a sigh of relief when their captured doomsday weapon, the Q-Bomb, proved to be a dud.

Yes, there is a hilarious movie to be made documenting the North Korean nuclear and missile programs.

Okay, so we’ve disposed of the idea that World War III will be a nuclear holocaust. Does that mean, as so many starry-eyed astrophysicists imagined in the late 1940s, the end of war?

Fat f-ing chance!

The winnable war in the Twenty-First Century is one fought in cyberspace. In fact, it’s going on right now. And, you’re missing it.

Cybersecurity and IT expert Theresa Payton, CEO of Fortalice Solutions, asserts that suspected North Korean hackers have been conducting offensive cyber operations on financial institutions amid discussions between Washington and Pyongyang on a possible nuclear summit between President Trump and Kim Jong Un.

“The U.S. has been able to observe North Korean-linked hackers targeting financial institutions in order to steal money,” she says. “This isn’t North Korea’s first time meddling in serious hacking schemes. This time, it’s likely because the international economic sanctions have hurt them in their wallets and they are desperate and strapped for cash.”

There is a long laundry list of cyberattacks that have been perpetrated against U.S. and European interests, including infrastructure, corporations and individuals.

“One of N. Korea’s best assets … is to flex its muscle using its elite trained cyber operations,” Payton asserts. “Their cyber weapons can be used to fund their government by stealing money, to torch organizations and governments that offend them (look at Sony hacking), to disrupt our daily lives through targeting critical infrastructure, and more. The Cyber Operations of N. Korea is a powerful tool for the DPRK to show their displeasure at anything and it’s the best bargaining chip that Kim Jong Un has.”

Clearly, DPRK is not the only bad state actor out there. Russia has long been in the news using various cyberwar tactics against the U.S., Europe and others. China has also been blamed for cyberattacks. In fact, cyberwarfare is a cheap, readily available alternative to messy and expensive nuclear weapons for anyone with Internet access (meaning, just about everybody) and wishing to do anybody harm, including us.

“You can take away their Nukes,” Payton points out, “but you will have a hard time dismantling their ability to attack critical infrastructure, businesses and even civilians through cyber operations.”

Programming Notes: I’ve been getting a number of comments on this blog each day, and it looks like we need to set some ground rules. At least, I need to be explicit about things I will accept and things I won’t:

  • First off, remember that this isn’t a social media site. When you make a comment, it doesn’t just spill out into the blog site. Comments are sequestered until I go in and approve or reject them. So far, the number of comments is low enough that I can go through and read each one, but I don’t do it every day. If I did, I’d never get any new posts written! Please be patient.
  • Do not embed URLs to other websites in comments. I’ll strip them out even if I approve your comment otherwise. The reason is that I don’t have time to vet every URL, and I stick to journalistic standards, which means I don’t allow anything in the blog that I can’t verify. There are no exceptions.
  • This is an English language site ONLY. Comments in other languages are immediately deleted. (For why, see above.)
  • Use Standard English written in clear, concise prose. If I have trouble understanding what you’re trying to say, I won’t give your comment any space. If you can’t write a cogent English sentence, take an ESL writing course!

Quality vs. Quantity

Custom MC
It used to be that highest quality was synonymous with hand crafting. It’s not no more! Pressmaster/Shutterstock.com

23 May 2018 – Way back in the 1990s, during a lunch conversation with friends involved in the custom motorcycle business, one of my friends voiced the opinion that hand-crafted items, from fine-art paintings to custom motorcycle parts, were worth the often-exorbitant premium prices charged for them for two reasons: individualization and premium quality.

At that time, I disagreed about hand-crafted items exhibiting premium quality.

I had been deeply involved in the electronics test business for over a decade both as an engineer and a journalist. I’d come to realize that, even back then, things had changed drastically from the time when hand crafting could achieve higher product quality than mass production. Things have changed even more since then.

Early machine tools were little more than power-driven hand tools. The ancient Romans, for example, had hydraulically powered trip hammers, but they were just regular hammers mounted with a pivot at the end of the handle and a power-driven cam that lifted the head, then let it fall to strike an anvil. If you wanted something hammered, you laid it atop the anvil and waited for the hammer to fall on it. What made the exercise worthwhile was the scale achievable for these machines. They were much larger than could be wielded by puny human slaves.

The most revolutionary part of the Industrial Revolution was invention of many purpose-built precision machine tools that could crank out interchangeable parts.

Most people don’t appreciate that, before then, nuts and bolts were made in mating pairs. That is, this bolt was made to match this nut, and it wouldn’t quite fit the threads of any other nut/bolt pair, because all the threads were filed by hand. It just wasn’t possible to carve threads with enough precision.

Precision machinery capable of repeating the same operation to produce the same result time after time solved that little problem, and made interchangeable parts possible.

Statistical Process Control

Fast forward to the twentieth century, when Walter A. Shewhart applied statistical methods to quality management. Basically, Shewhart showed that measurements of significant features of mass-produced anything fell into a bell-shaped curve, with each part showing some more-or-less small variation from some nominal value. More precise manufacturing processes led to tighter bell curves where variations from the nominal value tended to be smaller. That’s what makes manufacturing interchangeable parts by automated machine tools possible.

Bell Curve
Bell curve distribution of measurement results. Peter Hermes Furian/Shutterstock.com

Before Shewhart, we knew making interchangeable parts was possible, but didn’t fully understand why it was possible.
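Shewhart’s picture is easy to reproduce numerically. Here’s a minimal sketch in Python (the nominal value, process sigma, and part count are all invented for illustration, not data from any real production line): it simulates measuring a batch of mass-produced parts and checks how many land inside the classic 3-sigma control limits.

```python
import random
import statistics

random.seed(42)

# Hypothetical process: shaft diameters with an invented nominal value
# of 10.00 mm and an invented process standard deviation of 0.02 mm.
NOMINAL_MM = 10.00
PROCESS_SIGMA_MM = 0.02

# Simulate measuring 10,000 mass-produced parts.
measurements = [random.gauss(NOMINAL_MM, PROCESS_SIGMA_MM)
                for _ in range(10_000)]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

# Shewhart-style control limits: nominal ± 3 sigma.
ucl = NOMINAL_MM + 3 * PROCESS_SIGMA_MM   # upper control limit
lcl = NOMINAL_MM - 3 * PROCESS_SIGMA_MM   # lower control limit

in_control = sum(lcl <= m <= ucl for m in measurements) / len(measurements)
print(f"mean = {mean:.4f} mm, sigma = {sigma:.4f} mm")
print(f"fraction inside the 3-sigma limits: {in_control:.3f}")
```

For a well-behaved (normal) process, about 99.7% of the simulated measurements fall inside the 3-sigma limits; anything outside them is a signal that the process, not random chance, has gone wrong.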

If you’re hand crafting components for, say, a motorcycle, you’re going to carefully make each part, testing frequently to make sure it fits together with all the other parts. Your time goes into carefully and incrementally honing the part’s shape to gradually bring it into a perfect fit. That’s what gave hand crafting the reputation for high quality.

In this cut-and-try method of fabrication, achieving a nominal value for each dimension becomes secondary to “does it fit.” The final quality depends on your motor skills, patience, and willingness to throw out anything that becomes unsalvageable. Each individual part becomes, well, individual. They are not interchangeable.

If, on the other hand, you’re cranking out kazillions of supposedly interchangeable parts in an automated manufacturing process, you blast parts out as fast as you can, then inspect them later. Since the parts are supposed to be interchangeable, whether they fit together is a matter of whether the variation (from the nominal value) of this particular part is small enough so that it is still guaranteed to fit with all the other parts.

If it’s too far off, it’s junk. If it’s close enough, it’s fine. The dividing line between “okay” and “junk” is called the “tolerance.”

Now, the thing about tolerance is that it’s somewhat flexible. You CAN improve the yield (the fraction of parts that fall inside the tolerance band) by simply stretching out the tolerance band. That lets more of your kazillion mass-produced parts into the “okay” club.
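That trade-off is easy to put numbers on. The sketch below (Python, with an invented nominal dimension and process sigma) runs one simulated production batch and shows the yield climbing as the tolerance band stretches.

```python
import random

random.seed(7)

# Invented process numbers: nominal dimension and process spread.
NOMINAL = 10.00   # mm
SIGMA = 0.02      # mm, process standard deviation

parts = [random.gauss(NOMINAL, SIGMA) for _ in range(100_000)]

def yield_fraction(tolerance):
    """Fraction of parts that land inside nominal ± tolerance."""
    return sum(abs(p - NOMINAL) <= tolerance for p in parts) / len(parts)

# Stretching the tolerance band lets more parts into the "okay" club:
# roughly 68%, 95%, and 99.7% for a well-behaved (normal) process.
for tol in (0.02, 0.04, 0.06):   # 1-, 2-, and 3-sigma tolerance bands
    print(f"tolerance ±{tol:.2f} mm -> yield {yield_fraction(tol):.1%}")
```

Note that the parts themselves never changed; only the dividing line between “okay” and “junk” moved.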

Of course, you have to fiddle with the nominal values of all the other parts to make room for the wider variations you want to accept. It’s not hard. Any engineer knows how to do it.

However, when you start fiddling with nominal values to accommodate wider tolerances, the final product starts looking sloppy. That is, after all, what “sloppy” means.

By the 1980s, engineers had figured out that if they insisted on automated manufacturing equipment to achieve the best possible consistency, they could then focus in on reducing those pesky variations (improving precision). Eventually, improved machine precision made it possible to squeeze tolerances and remove sloppiness (improving perceived quality).

By the 1990s, automated manufacturing processes had achieved quality that was far beyond what hand-crafted processes could match. That’s why I had to disagree with my friend who said that mass-manufactured stuff sacrificed quality for quantity.

In fact, Shewhart’s “statistical process control” made it possible to leverage manufacturing quantity to improve quality.

Product Individualization

That, however, left hand-crafting’s only remaining advantage to be individualization. You are, after all, making one unique item.

Hand crafting requires a lot of work by people who’ve spent a long time honing their skills. To be economically viable, it’s got to show some advantage that will allow its products to command a premium price. So, the fact that hand-crafting’s only advantage is its ability to achieve a high degree of product individualization matters!

I once heard an oxymoronic joke that went: “I want to be different, like everybody else.”

That silly comment actually has hidden layers of meaning.

Of course, if everybody is different, what are they different from? If there’s no normal (equivalent to the nominal value in manufacturing test results), how can you define a difference (variation) from normal?

Another layer of meaning in the statement is its implicit acknowledgment that everyone wants to be different. We all want to feel special. There seems to be a basic drive among humans to be unique. It probably stems from a desire to be valued by those around us so they might take special care to help ensure our individual survival.

That would confer an obvious evolutionary advantage.

One of the ways we can show our uniqueness is to have stuff that shows individualization. I want my stuff to be different from your stuff. That’s why, for example, women don’t want to see other women wearing dresses identical to their own at a cocktail party.

In a world, however, where the best quality is to be had with mass-produced manufactured goods, how can you display uniqueness without having all your stuff be junk? Do you wear underwear over a leotard? Do you wear a tutu with a pants suit? That kind of strategy’s been tried and it didn’t work very well.

Ideally, to achieve uniqueness you look to customize the products that you buy. And, it’s more than just picking a color besides black for your new Ford. You want significant features of your stuff to be different from the features of your neighbor’s stuff.

As freelance journalist Carmen Klingler-Deiseroth wrote in Automation Strategies, a May 11 e-magazine put out by Automation World, “Particularly among the younger generation of digital natives, there is a growing desire to fine-tune every online purchase to match their individual tastes and preferences.”

That, obviously, poses a challenge to manufacturers whose fabrication strategy is based on mass producing interchangeable parts on automated production lines in quantities large enough to use statistical process control to maintain quality. If your lot size is one, how do you get the statistics?

She quotes Robert Kickinger, mechatronic technologies manager at B&R Industrial Automation, as pointing out: “What is new . . . is the idea of making customized products under mass-production conditions.”

Kickinger further explains that any attempt to make products customizable by increasing manufacturing-system flexibility is usually accompanied by a reduction in overall equipment effectiveness (OEE). “When that happens, individualization is no longer profitable.”

One strategy that can help is taking advantage of an important feature of automated manufacturing equipment: its programmability. Machine programmability comes from its reliance on software, and software is notably “soft.” It’s flexible.

If you could ensure that taking advantage of your malleable software’s flexibility won’t screw up your product quality when you make your one, unique, customized product, your flexible manufacturing system could then remain profitable.

One strategy is based on simulation. That is, you know how your manufacturing system works, so you can build what I like to call a “mathematical model” that will behave, in a mathematical sense, like your real manufacturing system. For any given input, it will produce results identical to that of the real system, but much, much faster.

The results, of course, are not real, physical products, but measurement results identical to what your test department will get out of the real product.

Now, you can put the unique parameters of your unique product into the mathematical model of your real system, and crank out as many simulated examples of products as you need to ensure that when you plug those parameters into your real system, it will spit out a unique example of your unique product exhibiting the best quality your operation is capable of — without the need of cranking out mass quantities of unwanted stuff in order to tune your process.
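To make the idea concrete, here’s a toy version of such a “mathematical model” in Python. The machine bias, noise figure, and part dimensions are all invented for illustration; the point is that you can crank out tens of thousands of simulated parts in a blink and use the results to tune the process setpoint before cutting any real metal.

```python
import random

random.seed(1)

# Toy model of one machining step. The real machine would cut a part to a
# requested setpoint, with a small systematic bias and some random noise.
# All numbers here are invented for illustration.
MACHINE_BIAS = 0.005    # mm, systematic offset of this (imaginary) machine
MACHINE_SIGMA = 0.01    # mm, random process variation

def simulate_part(setpoint_mm):
    """Mathematical model of one run of the real machine."""
    return setpoint_mm + MACHINE_BIAS + random.gauss(0.0, MACHINE_SIGMA)

def predicted_yield(setpoint_mm, target_mm, tolerance_mm, trials=50_000):
    """Crank out simulated parts to predict the real-world yield."""
    good = sum(abs(simulate_part(setpoint_mm) - target_mm) <= tolerance_mm
               for _ in range(trials))
    return good / trials

# A customer orders a one-off custom part: target 12.70 mm, tolerance ±0.02 mm.
naive = predicted_yield(12.70, 12.70, 0.02)
tuned = predicted_yield(12.70 - MACHINE_BIAS, 12.70, 0.02)  # compensate bias
print(f"yield at the naive setpoint: {naive:.1%}")
print(f"yield with bias compensated: {tuned:.1%}")
```

The simulated yields tell you, before the first real part exists, that compensating for the machine’s bias is worth doing for this particular one-off.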

So, what happens when (in accordance with Murphy’s Law) something that can go wrong does go wrong? Your wonderful, expensive, finely tuned flexible manufacturing system spits out a piece of junk.

You’d better not (automatically) box that piece of junk up and ship it to your customer!

Instead, you’d better take advantage of the second feature Kickinger wants for your flexible manufacturing system: real-time rejection.

“Defective products need to be rejected on the spot, while maintaining full production speed,” he advises.

Immediately catching isolated manufacturing defects not only maintains overall quality, it allows flexibly manufactured unique junk to be replaced quickly with good stuff to fulfill orders with minimum delay. If things have gone wrong enough to cause repeated failures, real-time rejection also allows your flexible manufacturing system to send up an alarm alerting non-automated maintenance assets (people with screwdrivers and wrenches) to correct the problem fast.

“This is the only way to make mass customization viable from an economic perspective,” Kickinger asserts.
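Kickinger doesn’t spell out an implementation, but the logic is simple enough to sketch. Here’s a hypothetical inline inspector in Python (the tolerance, window size, and alarm threshold are invented numbers): each part gets checked at full line speed, rejects are diverted on the spot, and a run of rejects trips an alarm for the maintenance crew.

```python
from collections import deque

# Invented numbers for illustration.
TOLERANCE = 0.05        # acceptable deviation from the nominal value
ALARM_WINDOW = 10       # look at the last 10 parts
ALARM_THRESHOLD = 3     # 3 rejects in the window means something's broken

class InlineInspector:
    """Reject bad parts on the spot; raise an alarm on repeated failures."""

    def __init__(self):
        self.recent = deque(maxlen=ALARM_WINDOW)
        self.alarm = False

    def inspect(self, nominal, measured):
        ok = abs(measured - nominal) <= TOLERANCE
        self.recent.append(ok)
        if self.recent.count(False) >= ALARM_THRESHOLD:
            self.alarm = True   # summon the people with screwdrivers
        return ok               # False -> divert this part to the scrap bin

# Simulate a line that drifts out of adjustment partway through a run.
inspector = InlineInspector()
deviations = [0.01, 0.02, 0.00, 0.01, 0.08, 0.09, 0.11, 0.10]
results = [inspector.inspect(10.0, 10.0 + d) for d in deviations]
print(results)          # first four parts pass, the rest get rejected
print(inspector.alarm)  # True: the run of rejects tripped the alarm
```

Nothing stops the line while this runs: each part is judged in one pass, and the alarm only fires when the reject pattern says the problem is systematic rather than a one-off fluke.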

Social and technological trends will only make development of this kind of flexible manufacturing process de rigueur in the future. Online shoppers are going to increasingly insist on having reasonably priced unique products manufactured to high quality standards and customized according to their desires.

As Kickinger points out: “The era of individualization has only just begun.”

The Future of Personal Transportation

Israeli startup Griiip’s next generation single-seat race car demonstrating the world’s first motorsport Vehicle-to-Vehicle (V2V) communication application on a racetrack.

9 April 2018 – Last week turned out to be big for news about personal transportation, with a number of trends making significant(?) progress.

Let’s start with a report (available for download at https://gen-pop.com/wtf) by independent French market-research company Ipsos of responses from more than 3,000 people in the U.S. and Canada, and thousands more around the globe, to a survey about the human side of transportation. That is, how do actual people — the consumers who ultimately will vote with their wallets for or against advances in automotive technology — feel about the products innovators have been proposing to roll out in the near future. Today, I’m going to concentrate on responses to questions about self-driving technology and automated highways. I’ll look at some of the other results in future postings.

Perhaps the biggest takeaway from the survey is that approximately 25% of American respondents claim they “would never use” an autonomous vehicle. That’s a biggie for advocates of “ultra-safe” automated highways.

As my wife constantly reminds me whenever we’re out in Southwest Florida traffic, the greatest highway danger is from the few unpredictable drivers who do idiotic things. When surrounded by hundreds of vehicles ideally moving in lockstep, but actually not, what percentage of drivers acting unpredictably does it take to totally screw up traffic flow for everybody? One percent? Two percent?

According to this survey, we can expect up to 25% to be out of step with everyone else because they’re making their own decisions instead of letting technology do their thinking for them.

Automated highways were described in detail back in the middle part of the twentieth century by science-fiction writer Robert A. Heinlein. What he described was a scene where thousands of vehicles packed vast Interstates, all communicating wirelessly with each other and a smart fixed infrastructure that planned traffic patterns far ahead, and communicated its decisions with individual vehicles so they acted together to keep traffic flowing in the smoothest possible way at the maximum possible speed with no accidents.

Heinlein also predicted that the heroes of his stories would all be rabid free-spirited thinkers, who wouldn’t allow their cars to run in self-driving mode if their lives depended on it! Instead, they relied on human intelligence, forethought, and fast reflexes to keep themselves out of trouble.

And, he predicted they would barely manage to escape with their lives!

I happen to agree with him: trying to combine a huge percentage of highly automated vehicles with a small percentage of vehicles guided by humans who simply don’t have the foreknowledge, reflexes, or concentration to keep up with the automated vehicles around them is a train wreck waiting to happen.

Back in the late twentieth century I had to regularly commute on the 70-mph parking lots that went by the name “Interstates” around Boston, Massachusetts. Vehicles were generally crammed together half a car length apart. The only way to have enough warning to apply brakes was to look through the back window and windshield of the car ahead to see what the car ahead of them was doing.

The result was regular 15-car pileups every morning during commute times.

Heinlein’s future vision (and that of automated-highway advocates) had that kind of traffic density and speed, but was saved from inevitable disaster by fascistic control by omniscient automated-highway technology. One recalcitrant human driver tossed into the mix would be guaranteed to bring the whole thing down.

So, the moral of this story is: don’t allow manual-driving mode on automated highways. The 25% of Americans who’d never surrender their manual-driving privilege can just go drive somewhere else.

Yeah, I can see THAT happening!

A Modest Proposal

With apologies to Jonathan Swift, let’s change tack and focus on a more modest technology: driver assistance.

Way back in the 1980s, George Lucas and friends put out the third film in the interminable Star Wars series, Return of the Jedi. The film included a sequence that could only be possible in real life with help from some sophisticated collision-avoidance technology. They had a bunch of characters zooming around in a trackless forest on the moon Endor, riding what can only be described as flying motorcycles.

As anybody who’s tried trailblazing through a forest on an off-road motorcycle can tell you, going fast through virgin forest means constant collisions with fixed objects. As Bugs Bunny once said: “Those cartoon trees are hard!”

Frankly, Luke Skywalker and Princess Leia might have had superhuman reflexes, but their doing what they did without collision avoidance technology strains credulity to the breaking point. Much easier to believe their little speeders gave them a lot of help to avoid running into hard, cartoon trees.

In the real world, Israeli companies Autotalks and Griiip have demonstrated the world’s first motorsport Vehicle-to-Vehicle (V2V) application to help drivers avoid rear-ending each other. The system works by combining GPS, in-vehicle sensing, and wireless communication to create a peer-to-peer network that allows each car to send out alerts to all the other cars around it.

So, imagine the situation where multiple cars are on a racetrack at the same time. That’s decidedly not unusual in a motorsport application.

Now, suppose something happens to make car A suddenly and unpredictably slow or stop. Again, that’s hardly an unusual occurrence. Car B, which is following at some distance behind car A, gets an alert from car A of a possible impending-collision situation. Car B forewarns its driver that a dangerous situation has arisen, so he or she can take evasive action. So far, a very good thing in a car-race situation.
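The mechanics of that alert chain are easy to sketch. Here’s a hypothetical Python toy model of the peer-to-peer broadcast idea (this is my illustration of the logic, not Autotalks’ or Griiip’s actual protocol; the 300 m alert range is invented): car A’s hard-braking event is broadcast to every peer, and each receiver decides whether to warn its own driver.

```python
# Invented: only cars within this range care about a given alert.
ALERT_RANGE_M = 300.0

class Car:
    def __init__(self, name, position_m):
        self.name = name
        self.position = position_m
        self.warnings = []      # messages shown to this car's driver

    def hard_brake(self, network):
        """Sudden, unpredictable slowdown: alert everyone on the network."""
        network.broadcast(sender=self, message="hard braking ahead")

    def receive(self, sender, message, distance):
        # Warn the driver only if the hazard is ahead of us and within range.
        if sender.position > self.position and distance <= ALERT_RANGE_M:
            self.warnings.append(f"{sender.name}: {message} ({distance:.0f} m)")

class V2VNetwork:
    """Peer-to-peer broadcast: every car hears every other car."""
    def __init__(self, cars):
        self.cars = cars

    def broadcast(self, sender, message):
        for car in self.cars:
            if car is not sender:
                car.receive(sender, message,
                            abs(sender.position - car.position))

# Car A brakes hard; car B, 150 m back, gets warned.
# Car C, 500 m back, is out of range and hears nothing.
a = Car("car A", 1000.0)
b = Car("car B", 850.0)
c = Car("car C", 500.0)
net = V2VNetwork([a, b, c])
a.hard_brake(net)
print(b.warnings)   # ['car A: hard braking ahead (150 m)']
print(c.warnings)   # []
```

The real system layers GPS fixes, in-vehicle sensing, and a radio link under this logic, but the decision each car makes is essentially the same filter: is the hazard ahead of me, and is it close enough to matter?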

But, what’s that got to do with just us folks driving down the highway minding our own business?

During the summer down here in Florida, every afternoon we get thunderstorms dumping torrential rain all over the place. Specifically, we’ll be driving down the highway at some ridiculous speed, then come to a wall of water obscuring everything. Visibility drops from unlimited to a few tens of feet with little or no warning.

The natural reaction is to come to a screeching halt. But, what happens to the cars barreling up from behind? They can’t see you in time to stop.

Whammo!

So, coming to a screeching halt is not the thing to do. Far better to keep going forward as fast as visibility will allow.

But, what if somebody up ahead panicked and came to a screeching halt? Or, maybe their version of “as fast as visibility will allow” is a lot slower than yours? How would you know?

The answer is to have all the vehicles equipped with the Israeli V2V equipment (or an equivalent) to forewarn following drivers that something nasty has come out of the proverbial woodshed. It could also feed into your vehicle’s collision avoidance system to bridge the 2-3 seconds it takes for a human driver to say “What the heck?” and figure out what to do.

The Israelis suggest that the required chip set (which, of course, they’ll cheerfully sell you) is so dirt cheap that anybody can afford to opt for it in their new car, or retrofit it into their beat-up old junker. They further suggest that it would be worthwhile for insurance companies to offer a discount on premiums to help cover the cost.

Sounds like a good deal to me! I could get behind that plan.

Invasion of the Robofish!

30 March 2018 – Mobile autonomous systems come in all sizes, shapes, and forms, and have “invaded” every earthly habitat. That’s not news. What is news is how far the “bleeding edge” of that technology has advanced. Specifically, it’s news when a number of trends combine to make something unique.

Today I’m getting the chance to report on something that I predicted in a sci-fi novel I wrote back in 2011, and that then goes at least one step further.

Last week the folks at Design World published a report on research at the MIT Computer Science & Artificial Intelligence Lab that combines three robotics trends into one system that quietly makes something I find fascinating: a submersible mobile robot. The three trends are soft robotics, submersible unmanned systems, and biomimetic robot design.

The beasty in question is a robot fish. It’s obvious why this little guy touches on those three trends. How could a robotic fish not use soft-robotic, submersible, and biomimetic technologies? What I want to point out is how it uses those technologies and why that combination is necessary.

Soft Robotics

Folks have made ROVs (basically remotely operated submarines) for … a very long time. What they’ve pretty much all produced are clanky, propeller-driven derivatives of Jules Verne’s fictional Nautilus from his 1870 novel Twenty Thousand Leagues Under the Sea. That hunk of junk is a favorite of steampunk aficionados.

Not much has changed in basic submarine design since then. Modern ROVs are more maneuverable than their WWII predecessors because they add multiple propellers to push them in different directions, but the rest of it’s pretty much the same.

Soft robotics changes all that.

About 45 years ago, a half-drunk physics professor at a kegger party started bending my ear about how Mommy Nature never seemed to have discovered the wheel. The wheel’s a nearly unique human invention that Mommy Nature has pretty much done without.

Mommy Nature doesn’t use the wheel because she uses largely soft technology. Yes, she uses hard technology to make structural components like endo- and exo-skeletons to give her live beasties both protection and shape, but she stuck with soft-bodied life forms for the first four billion years of Earth’s 4.5-billion-year history. Adding hard-body technology in the form of notochords didn’t happen until the Cambrian explosion of 541-516 million years ago, when most major animal phyla appeared.

By the way, that professor at the party was wrong. Mommy Nature invented wheels way back in the Precambrian era in the form of rotary motors to power the flagella that propel unicellular free-swimmers. She just hasn’t used wheels for much else, since.

Of course, everybody more advanced than a shark has a soft body reinforced by a hard, bony skeleton.

Today’s soft robotics uses elastomeric materials to solve a number of problems for mobile automated systems.

Perhaps most importantly it’s a lot easier for soft robots to separate their insides from their outsides. That may not seem like a big deal, but think of how much trouble engineers go through to keep dust, dirt, and chemicals (such as seawater) out of the delicate gears and bearings of wheeled vehicles. Having a flexible elastomeric skin encasing the whole robot eliminates all that.

That’s not to mention skin’s job of keeping pesky little creepy crawlies out! I remember an early radio astronomer complaining that pack rats had gotten into his remote desert headquarters trailer and eaten a big chunk of his computer’s magnetic-core memory. That was back in the days when computer random-access memories were made from tiny iron beads strung on copper wires.

Another major advantage of soft bodies for mobile robots is resistance to collision damage. Think about how often you’re bumped into when crossing the room at a cocktail party. Now, think about what your hard-bodied automobile would look like after bumping into that many other cars in a parking lot. Not a pretty sight!

The flexibility of soft bodies also makes possible a lot of propulsion methods besides wheel-like propellers, caterpillar tracks, and rubber tires. That’s good because piercing soft-body skins with drive shafts to power propellers and wheels pretty much trashes the advantages of having those skins in the first place.

That’s why prosthetic devices all have elaborate cuffs to hold them to the outsides of the wearer’s limbs. Piercing the skin to screw something like Captain Hook’s hook directly into the existing bone never works out well!

So, in summary, the MIT group’s choice to start with soft-robotic technology is key to their success.

Submersible Unmanned Systems

Underwater drones have one major problem not faced by robotic cars and aircraft: radio waves don’t go through water. That means if anything happens that your none-too-intelligent automated system can’t handle, it needs guidance from a human operator. Underwater, that has largely meant tethering the robot to a human.

This issue is a wall that self-driving-car developers run into constantly (and sometimes literally). When the human behind the wheel mandated by state regulators for autonomous test vehicles falls asleep or is distracted by texting his girlfriend, BLAMMO!

The world is a chaotic place and unpredicted things pop out of nowhere all the time. Human brains are programmed to deal with this stuff, but computer technology is not, and will not be for the foreseeable future.

Drones and land vehicles, which are immersed in a sea of radio-transparent air, can rely on radio links to remote human operators to help them get out of trouble. Underwater vehicles, which are immersed in a sea of radio-opaque water, can’t.

In the past, that’s meant copper wires enclosed in physical tethers that tie the robots to the operators. Tethers get tangled, cut and hung up on everything from coral outcrops to passing whales.

There are a couple of ways out of the tether bind: ultrasonics and infra-red. Both go through water very nicely, thank you. The MIT group seems to be using my preferred comm link: ultrasonics.

Sound goes through water like you-know-what through a goose. Water also has little or no sonic “color.” That is, all frequencies of sonic waves go more-or-less equally well through water.

The biggest problem for ultrasonics is interference from all the other noise makers out there in the natural underwater world. That calls for the spread-spectrum transmission techniques invented by Hedy Lamarr. (Hah! Gotcha! You didn’t know Hedy Lamarr, aka Hedwig Eva Maria Kiesler, was a world-famous technical genius in addition to being a really cute, sexy movie actress.) Hedy’s spread-spectrum technique, patented in 1942 with composer George Antheil, lets ultrasonic signals cut right through the clutter.
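Lamarr’s flavor of spread spectrum is frequency hopping: transmitter and receiver both jump between carriers in a pseudo-random pattern only they share, so a noise source parked on any one frequency only clobbers an occasional slot. Here’s a toy Python sketch of the idea; the carrier frequencies, the shared-seed scheme, and the noise model are all illustrative assumptions on my part, not anything from the MIT work:

```python
import random

# Hypothetical ultrasonic carrier set (kHz) -- illustrative values only.
CARRIERS = [40, 42, 44, 46, 48, 50]

def hop_schedule(seed, n_slots):
    """Both ends derive the same pseudo-random carrier schedule from a shared seed."""
    rng = random.Random(seed)
    return [rng.choice(CARRIERS) for _ in range(n_slots)]

def transmit(bits, seed):
    """Send one bit per time slot, each on the carrier the schedule dictates."""
    return list(zip(hop_schedule(seed, len(bits)), bits))

def receive(slots, seed, noisy_carrier=None):
    """Listen on the scheduled carrier each slot.  A narrowband noise source
    only wipes out the slots that happen to land on its frequency."""
    schedule = hop_schedule(seed, len(slots))
    out = []
    for expected, (carrier, bit) in zip(schedule, slots):
        if carrier != expected or carrier == noisy_carrier:
            out.append(None)  # wrong carrier or drowned out: nothing useful heard
        else:
            out.append(bit)
    return out
```

A receiver with the right seed recovers the whole message; a single-frequency noise maker knocks out only the few slots that land on it, and simple error-correction coding can fill those back in.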

So, advanced submersible mobile robot technology is the second thread leading to a successful robotic fish.

Biomimetics

Biomimetics is a 25-cent word that simply means copying designs directly from nature. It’s a time-honored shortcut engineers have employed from time immemorial. Sometimes it works spectacularly, such as Thomas Wedgwood’s photographic camera (developed as an analogue of the terrestrial vertebrate eye), and sometimes not, such as Leonardo da Vinci’s attempts to make flying machines based on birds’ wings.

Obviously, Mommy Nature’s favorite fish-propulsion mechanism is highly successful, having been around for some 550 million years and still going strong. It, of course, requires a soft body anchored to a flexible backbone. It takes no imagination at all to copy it for robot fish.

The copying is the hard part because it requires developing fabrication techniques to build soft-bodied robots with flexible backbones in the first place. I’ve tried it, and it’s no mean task.

The tough part is making a muscle analogue that will drive the flexible body to move back and forth rhythmically and propel the critter through the water. The answer is pneumatics.

In the early 2000s, a patent-lawyer friend of mine suggested lining both sides of a flexible membrane with tiny balloons that could be alternately inflated or deflated. When the balloons on one side were inflated, the membrane would curve away from that side. When the balloons on the other side were inflated the membrane would curve back. I played around with this idea, but never went very far with it.

The MIT group seems to have made it work using both gas (carbon dioxide) and liquid (water) for the working fluid. The difference between this kind of motor and natural muscle is that natural muscle works by pulling when energized, and the balloon system works by pushing. Otherwise, both work by balancing mechanical forces along two axes with something more-or-less flexible trapped between them.

In Nature’s fish, that something is the critter’s skeleton (backbone made up of vertebrae and stiffened vertically by long, thin spines), whereas the MIT group’s robofish uses elastomers with different stiffnesses.

Complete Package

Putting these technical trends together creates a complete package that makes it possible to build a free-swimming submersible mobile robot that moves in a natural manner at a reasonable speed without a tether. That opens up a whole range of applications, from deep-water exploration to marine biology.