The Case for Free College

College vs. Income
While the need for skilled workers to maintain our technology edge has grown, the cost of training those workers has grown astronomically.

6 June 2018 – We, as a nation, need to extend the present system that provides free, universal education up through high school to cover college to the baccalaureate level.

DISCLOSURE: Teaching is my family business. My father was a teacher. My mother was a teacher. My sister’s first career was as a teacher. My brother-in-law was a teacher. My wife is a teacher. My son is a teacher. My daughter-in-law is a teacher. Most of my aunts and uncles and cousins are or were teachers. I’ve spent a lot of years teaching at the college level, myself. Some would say that I have a conflict of interest when covering developments in the education field. Others might argue that I know whereof I speak.

Since WW II, there has been a growing realization that the best careers go to those with at least a bachelor’s degree in whatever field they choose. Yet, at the same time, society has (perhaps inadvertently, although I’m not naive enough to think there isn’t plenty of blame to go around) erected a monumental barrier to anyone wanting to get an education. Since the mid-1970s, the cost of higher education has vastly outstripped the ability of most people to pay for it.

In 1975, the price of college attendance was about one fifth of the median family income (see graph above). In 2016, it was over a third. That makes sending kids to college a whole lot harder than it used to be. If your family happens to have less than the median household income, that barrier looks even higher, and it’s still growing.

MORE DISCLOSURE: The reason I don’t have a Ph.D. today is that two years into my Aerospace Engineering Ph.D. program, Arizona State University jacked up the tuition beyond my (not inconsiderable at the time) ability to pay.

I’d like everyone in America to consider the following propositions:

  1. A bachelor’s degree is the new high-school diploma;
  2. Having an educated population is a requirement for our technology-based society;
  3. Without education, upward mobility is nearly impossible;
  4. Ergo, it is a requirement for our society to ensure that every citizen capable of getting a college degree gets one.

EVEN MORE DISCLOSURE: Horace Mann, often credited as the Father of Public Education, was born in the same town (Franklin, MA) that I was, and our family charity is a scholarship fund dedicated to his memory.

About Mann’s intellectual progressivism, the historian Ellwood P. Cubberley said: “No one did more than he to establish in the minds of the American people the conception that education should be universal, non-sectarian, free, and that its aims should be social efficiency, civic virtue, and character, rather than mere learning or the advancement of sectarian ends.” (source: Wikipedia)

The Wikipedia article goes on to say: “Arguing that universal public education was the best way to turn unruly American children into disciplined, judicious republican citizens, Mann won widespread approval from modernizers, especially in the Whig Party, for building public schools. Most states adopted a version of the system Mann established in Massachusetts, especially the program for normal schools to train professional teachers.”

That was back in the mid-nineteenth century. At that time, the United States was in the midst of a shift from an agrarian to an industrial economy. We’ve since completed that transition and are now shifting to an information-based economy. In the future, full participation in the workforce will require everyone to have at least a bachelor’s degree.

So, when progressive politicians, like Bernie Sanders, make noises about free universal college education, YOU should listen!

It’s about time we, as a society, owned up to the fact that times have changed a lot since the mid-nineteenth century. At that time, universal free education to about junior high school level was considered enough. Since then, it was extended to high school. It’s time to extend it further to the bachelor’s-degree level.

That doesn’t mean shutting down Ivy League colleges. For those who can afford them, private and for-profit colleges can provide superior educational experiences. But publicly funded four-year colleges offering tuition-free education to everyone have become a strategic imperative.

Quality vs. Quantity

Custom MC
It used to be that highest quality was synonymous with hand crafting. It’s not no more! Pressmaster/Shutterstock.com

23 May 2018 – Way back in the 1990s, during a lunch conversation with friends involved in the custom motorcycle business, one of my friends voiced the opinion that hand-crafted items, from fine-art paintings to custom motorcycle parts, were worth the often-exorbitant premium prices charged for them for two reasons: individualization and premium quality.

At that time, I disagreed about hand-crafted items exhibiting premium quality.

I had been deeply involved in the electronics test business for over a decade both as an engineer and a journalist. I’d come to realize that, even back then, things had changed drastically from the time when hand crafting could achieve higher product quality than mass production. Things have changed even more since then.

Early machine tools were little more than power-driven hand tools. The ancient Romans, for example, had hydraulically powered trip hammers, but they were just regular hammers mounted with a pivot at the end of the handle and a power-driven cam that lifted the head, then let it fall to strike an anvil. If you wanted something hammered, you laid it atop the anvil and waited for the hammer to fall on it. What made the exercise worthwhile was the scale achievable for these machines. They were much larger than could be wielded by puny human slaves.

The most revolutionary part of the Industrial Revolution was the invention of many purpose-built precision machine tools that could crank out interchangeable parts.

Most people don’t appreciate that, previously, nuts and bolts were made in mating pairs. That is, each bolt was made to match its own nut; the threads of one nut/bolt pair wouldn’t quite match up with another’s, because the threads were all filed by hand. It just wasn’t possible to carve threads with enough precision.

Precision machinery capable of repeating the same operation to produce the same result time after time solved that little problem, and made interchangeable parts possible.

Statistical Process Control

Fast forward to the twentieth century, when Walter A. Shewhart applied statistical methods to quality management. Basically, Shewhart showed that measurements of significant features of mass-produced anything fell into a bell-shaped curve, with each part showing some more-or-less small variation from some nominal value. More precise manufacturing processes led to tighter bell curves where variations from the nominal value tended to be smaller. That’s what makes manufacturing interchangeable parts by automated machine tools possible.

Bell Curve
Bell curve distribution of measurement results. Peter Hermes Furian/Shutterstock.com

Before Shewhart, we knew making interchangeable parts was possible, but didn’t fully understand why it was possible.
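
To make Shewhart’s idea concrete, here’s a toy Python sketch (all the numbers are invented for illustration): simulate measuring one dimension on a big batch of parts, then compute the mean and sigma that define the bell curve, plus the classic three-sigma control limits.

```python
import random
import statistics

# Simulate measuring one dimension (say, a shaft diameter in mm) on a
# batch of mass-produced parts. The process has a nominal value and
# some random variation -- Shewhart's bell curve.
NOMINAL_MM = 25.000
PROCESS_SIGMA_MM = 0.010  # a tighter sigma means a more precise process

measurements = [random.gauss(NOMINAL_MM, PROCESS_SIGMA_MM) for _ in range(10_000)]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

# Classic Shewhart control limits sit three sigmas out from the mean.
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

print(f"mean = {mean:.4f} mm, sigma = {sigma:.4f} mm")
print(f"control limits: {lcl:.4f} .. {ucl:.4f} mm")
```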

If you’re hand crafting components for, say, a motorcycle, you’re going to carefully make each part, testing frequently to make sure it fits together with all the other parts. Your time goes into carefully and incrementally honing the part’s shape to gradually bring it into a perfect fit. That’s what gave hand crafting the reputation for high quality.

In this cut-and-try method of fabrication, achieving a nominal value for each dimension becomes secondary to “does it fit.” The final quality depends on your motor skills, patience, and willingness to throw out anything that becomes unsalvageable. Each individual part becomes, well, individual. They are not interchangeable.

If, on the other hand, you’re cranking out kazillions of supposedly interchangeable parts in an automated manufacturing process, you blast parts out as fast as you can, then inspect them later. Since the parts are supposed to be interchangeable, whether they fit together is a matter of whether the variation (from the nominal value) of this particular part is small enough so that it is still guaranteed to fit with all the other parts.

If it’s too far off, it’s junk. If it’s close enough, it’s fine. The dividing line between “okay” and “junk” is called the “tolerance.”

Now, the thing about tolerance is that it’s somewhat flexible. You CAN improve the yield (the fraction of parts that fall inside the tolerance band) by simply stretching out the tolerance band. That lets more of your kazillion mass-produced parts into the “okay” club.
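
Continuing the same toy model from above (again, invented numbers, not real manufacturing data), here’s how stretching the tolerance band inflates the yield:

```python
import random

NOMINAL_MM = 25.000
PROCESS_SIGMA_MM = 0.010

parts = [random.gauss(NOMINAL_MM, PROCESS_SIGMA_MM) for _ in range(100_000)]

def yield_fraction(parts, nominal, tolerance):
    """Fraction of parts falling inside nominal +/- tolerance."""
    ok = sum(1 for p in parts if abs(p - nominal) <= tolerance)
    return ok / len(parts)

# Stretching the tolerance band lets more parts into the "okay" club:
# roughly 68% at one sigma, 95% at two, 99.7% at three.
for tol in (0.010, 0.020, 0.030):
    print(f"+/- {tol:.3f} mm -> yield {yield_fraction(parts, NOMINAL_MM, tol):.1%}")
```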

Of course, you have to fiddle with the nominal values of all the other parts to make room for the wider variations you want to accept. It’s not hard. Any engineer knows how to do it.

However, when you start fiddling with nominal values to accommodate wider tolerances, the final product starts looking sloppy. That is, after all, what “sloppy” means.

By the 1980s, engineers had figured out that if they insisted on automated manufacturing equipment to achieve the best possible consistency, they could then focus in on reducing those pesky variations (improving precision). Eventually, improved machine precision made it possible to squeeze tolerances and remove sloppiness (improving perceived quality).

By the 1990s, automated manufacturing processes had achieved quality that was far beyond what hand-crafted processes could match. That’s why I had to disagree with my friend who said that mass-manufactured stuff sacrificed quality for quantity.

In fact, Shewhart’s “statistical process control” made it possible to leverage manufacturing quantity to improve quality.

Product Individualization

That, however, left individualization as hand crafting’s only remaining advantage. You are, after all, making one unique item.

Hand crafting requires a lot of work by people who’ve spent a long time honing their skills. To be economically viable, it’s got to show some advantage that will allow its products to command a premium price. So, the fact that hand-crafting’s only advantage is its ability to achieve a high degree of product individualization matters!

I once heard an oxymoronic comment: “I want to be different, like everybody else.”

That silly comment actually has hidden layers of meaning.

Of course, if everybody is different, what are they different from? If there’s no normal (equivalent to the nominal value in manufacturing test results), how can you define a difference (variation) from normal?

Another layer of meaning in the statement is its implicit acknowledgment that everyone wants to be different. We all want to feel special. There seems to be a basic drive among humans to be unique. It probably stems from a desire to be valued by those around us so they might take special care to help ensure our individual survival.

That would confer an obvious evolutionary advantage.

One of the ways we can show our uniqueness is to have stuff that shows individualization. I want my stuff to be different from your stuff. That’s why, for example, women don’t want to see other women wearing dresses identical to their own at a cocktail party.

In a world, however, where the best quality is to be had with mass-produced manufactured goods, how can you display uniqueness without having all your stuff be junk? Do you wear underwear over a leotard? Do you wear a tutu with a pants suit? That kind of strategy’s been tried and it didn’t work very well.

Ideally, to achieve uniqueness you look to customize the products that you buy. And, it’s more than just picking a color besides black for your new Ford. You want significant features of your stuff to be different from the features of your neighbor’s stuff.

As freelance journalist Carmen Klingler-Deiseroth wrote in Automation Strategies, a May 11 e-magazine put out by Automation World, “Particularly among the younger generation of digital natives, there is a growing desire to fine-tune every online purchase to match their individual tastes and preferences.”

That, obviously, poses a challenge to manufacturers whose fabrication strategy is based on mass producing interchangeable parts on automated production lines in quantities large enough to use statistical process control to maintain quality. If your lot size is one, how do you get the statistics?

She quotes Robert Kickinger, mechatronic technologies manager at B&R Industrial Automation, as pointing out: “What is new . . . is the idea of making customized products under mass-production conditions.”

Kickinger further explains that any attempt to make products customizable by increasing manufacturing-system flexibility is usually accompanied by a reduction in overall equipment effectiveness (OEE). “When that happens, individualization is no longer profitable.”

One strategy that can help is taking advantage of an important feature of automated manufacturing equipment: its programmability. Machine programmability comes from its reliance on software, and software is notably “soft.” It’s flexible.

If you could ensure that taking advantage of your malleable software’s flexibility won’t screw up your product quality when you make your one, unique, customized product, your flexible manufacturing system could then remain profitable.

One strategy is based on simulation. That is, you know how your manufacturing system works, so you can build what I like to call a “mathematical model” that will behave, in a mathematical sense, like your real manufacturing system. For any given input, it will produce results identical to that of the real system, but much, much faster.

The results, of course, are not real, physical products, but measurement results identical to what your test department will get out of the real product.

Now, you can put the unique parameters of your unique product into the mathematical model of your real system, and crank out as many simulated examples of products as you need to ensure that when you plug those parameters into your real system, it will spit out a unique example of your unique product exhibiting the best quality your operation is capable of — without the need of cranking out mass quantities of unwanted stuff in order to tune your process.
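
Here’s a minimal sketch of that simulate-then-tune idea in Python. The process model, its bias, and its sigma are all stand-ins I made up for illustration; a real system would use a model characterized from the actual machinery:

```python
import random

def simulated_process(setpoint, runs=1000, bias=0.003, sigma=0.010):
    """Toy mathematical model of one manufacturing step: given a machine
    setpoint, return simulated measurement results -- numbers, not parts.
    The bias and sigma are invented stand-ins for a characterized process."""
    return [setpoint + bias + random.gauss(0.0, sigma) for _ in range(runs)]

def tune_setpoint(target):
    """Search for the setpoint whose simulated output best hits the target,
    without cutting a single piece of real metal."""
    best = None
    for trial in (target + k * 0.001 for k in range(-10, 11)):
        results = simulated_process(trial)
        mean = sum(results) / len(results)
        if best is None or abs(mean - target) < abs(best[1] - target):
            best = (trial, mean)
    return best

setpoint, predicted = tune_setpoint(target=25.000)
print(f"run the real machine at {setpoint:.3f}; predicted mean {predicted:.3f} mm")
```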

So, what happens when (in accordance with Murphy’s Law) something that can go wrong does go wrong? Your wonderful, expensive, finely tuned flexible manufacturing system spits out a piece of junk.

You’d better not (automatically) box that piece of junk up and ship it to your customer!

Instead, you’d better take advantage of the second feature Kickinger wants for your flexible manufacturing system: real-time rejection.

“Defective products need to be rejected on the spot, while maintaining full production speed,” he advises.

Immediately catching isolated manufacturing defects not only maintains overall quality, it allows flexibly manufactured unique junk to be replaced quickly with good stuff to fulfill orders with minimum delay. If things have gone wrong enough to cause repetitive multiple failures, real-time rejection also allows your flexible manufacturing system to send up an alarm alerting non-automated maintenance assets (people with screwdrivers and wrenches) to correct the problem fast.
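
A real-time rejection scheme can be sketched in a few lines. Everything here (names, thresholds, numbers) is illustrative, not any real controller’s API: each part gets tested as it comes off the line, junk gets diverted on the spot, and a streak of rejects raises the alarm for the folks with wrenches:

```python
def inspect_and_route(measurement, nominal, tolerance, reject_streak, alarm_after=3):
    """Test one part in-line: ship it, or divert it and watch for streaks."""
    if abs(measurement - nominal) <= tolerance:
        reject_streak.clear()           # a good part resets the failure streak
        return "ship"
    reject_streak.append(measurement)   # junk goes to the reject bin, not the box
    if len(reject_streak) >= alarm_after:
        print("ALARM: repeated rejects -- send a human with a wrench")
    return "reject"

streak = []
for m in (25.002, 24.998, 25.041, 25.043, 25.039):  # last three are out of spec
    print(inspect_and_route(m, nominal=25.000, tolerance=0.020, reject_streak=streak))
```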

“This is the only way to make mass customization viable from an economic perspective,” Kickinger asserts.

Social and technological trends will only make development of this kind of flexible manufacturing process de rigueur in the future. Online shoppers are going to increasingly insist on having reasonably priced unique products manufactured to high quality standards and customized according to their desires.

As Kickinger points out: “The era of individualization has only just begun.”

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo, it’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

An essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, concluded that the only reason he (the author) was sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which, as my high-school Latin teacher Mr. Foley convinced me, translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory about its all having been made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of parsimony” (again according to that same Wikipedia entry), what it means is: when faced with alternate explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shovelled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having disposed of the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.

Linear Actuator Basics

 

Lead Screw Image
Close up of a ball-screw-type lead screw shaft being used as a precision linear actuator on a machine.

2 May 2018 – This blog post is intended for folks with an interest in basic practical mechanical engineering, or mechanical engineers who want a basic brush up on linear actuators. It’s mostly pretty basic stuff, which has been around for decades, but can serve as a guide to linear-motion actuators in the real world.

What triggered writing this post at this particular time is a notice crossing my desk about a video entitled “Can I Run A Linear Actuator Into A Hard Stop?” produced by global motion-control supplier Ametek. It’s an important topic that just about everyone faced with building a motorized linear-motion system needs to think about.

I first got serious about linear-motion actuators in the mid-1970s as an experimental physics student. I have, of course, seen them in action for as long as I can remember because of my father’s hobby of building powerboats. Virtually every powerboat (as opposed to sailboat) bigger than about ten feet uses manually powered linear actuators in its steering linkage.

I didn’t really get into electromechanical linear actuators (linear actuators powered by electric motors) until I got involved with automated measurement systems, where steady motion or precision positioning are important. Since then, just about every system I’ve built has included a precision linear actuator somewhere inside.

Linear Actuator Types

There are basically four main types of linear actuators: lead screw, hydraulic/pneumatic, linear-motor, and piezoelectric. I’m going to concentrate on the lead-screw type because it’s by far the most common, but I’ll drop in some info about the other types for completeness.

Piezoelectric actuators take advantage of the fact that certain anisotropic crystalline solids change their shapes when placed in an electric field. The range of motion, however, is notably microscopic, so they are best suited to positioning things that are, well, microscopic. They’re a major enabling technology for atomic force microscopes.

Linear motors are much larger. Imagine a long, relatively narrow tray of chocolate-frosted fudge. Imagine further that the fudge is actually made of, say, barium ferrite ceramic and magnetized with magnetic north being on the frosting side and magnetic south being on the fudge side underneath.

Now, slice the fudge into strips with cuts going across the tray of fudge the short way. Finally, take every other strip out, turn it over with the frosting side down, and put it back in place.

So, you end up with the odd-numbered strips (1, 3, 5, … ) being frosting side up and the even strips (2, 4, 6, … ) frosting side down. That’s what the long stator portion of a linear motor looks like.

To make motion, however, you need an electromagnetic slide approximately as long as the fudge tray is wide, and as wide as one of the cut strips. When you energize the electromagnet, the slide will settle between two of the strips so that its north pole is as close as possible to the nearest stator south pole, while its south pole snuggles up to the nearest stator north pole.

Reversing the current through the slide’s electromagnet makes it possible to inch the slide along the stator, one strip at a time. Switching really fast makes it possible to move the slide along the stator really fast.

That’s a very rough idea of how linear motors work. They are capable of high speeds (to make, say, a rail gun), but are relatively low in the actual force department.

The pneumatic/hydraulic actuator is just a metal cylinder enclosed at one end with a moveable piston at the other. The space between the piston and the closed end is filled with some working fluid, such as air or oil. Forcing more fluid into the cylinder pushes the piston out. Pumping fluid out pulls the piston back. Depending on details, the motion can be fast or slow, and the forces applied can be enormous. Precision of motion is, however, not so good.
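
The force calculation for such a cylinder is simple: pressure times piston area. A quick sketch with made-up example numbers shows why “enormous” isn’t an exaggeration:

```python
import math

def piston_force_newtons(pressure_pa, bore_m):
    """Force a hydraulic/pneumatic cylinder applies: pressure times piston area."""
    area = math.pi * (bore_m / 2) ** 2
    return pressure_pa * area

# Invented example: a modest 50 mm bore cylinder at 10 MPa of hydraulic
# pressure pushes with about 19.6 kN -- roughly two metric tons of force.
print(f"{piston_force_newtons(10e6, 0.050) / 1000:.1f} kN")
```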

A lead-screw-type linear actuator (LSLA) looks like a fairly complex piece of kit. Construction of the things is actually fairly simple, though, which largely accounts for their popularity.

Linear actuator diagram
Components of a lead-screw linear actuator.

Essentially, an LSLA consists of an ordinary reversible electric motor with a length of worm shaft fixed to its output. The worm shaft threads through a slide traveling along a track/frame that prevents the slide from rotating with respect to the motor housing. The worm shaft and threaded slide form a simple screw machine to convert rotary motion of the shaft to linear motion of the slide.
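
The arithmetic of that conversion is pleasantly simple: each shaft revolution advances the slide by one screw lead. A tiny sketch, with example numbers I made up:

```python
def slide_speed_mm_per_s(motor_rpm, screw_lead_mm):
    """Linear slide speed: one shaft revolution advances one screw lead."""
    return motor_rpm / 60.0 * screw_lead_mm

def slide_travel_mm(revolutions, screw_lead_mm):
    """Total slide travel after a given number of shaft revolutions."""
    return revolutions * screw_lead_mm

# A 600 rpm motor turning a 2 mm-lead screw moves the slide at 20 mm/s...
print(slide_speed_mm_per_s(motor_rpm=600, screw_lead_mm=2.0))  # 20.0
# ...and 50 revolutions move it 100 mm.
print(slide_travel_mm(revolutions=50, screw_lead_mm=2.0))      # 100.0
```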

A motor controller, which can be as simple as a DPST switch or as complex as an intelligent motor controller (IMC) with a microprocessor brain, supplies power and control to the motor.

Motion Control Stops

At minimum, something needs to be installed to keep the slide from either backing into the motor/shaft coupler at the proximal end of the worm shaft, or running off the distal end of the shaft. These things are called, not surprisingly, “stops,” and they can be mechanical, electrical, or software.

Mechanical Stops, also known as “hard” stops, are barriers attached to the frame that physically constrain the slide’s motion to a certain range. Running into a hard stop is generally considered a bad thing, and designers only put them into machines to prevent even worse outcomes that may obtain when the slide’s designed-in range is exceeded.

Electrical Stops, more often referred to as “limit switches,” are actual electrical switches mounted on the frame that are automatically actuated by the slide’s motion. Typically a designer will mount an SPST momentary switch in a bracket attached to the frame. The slide presses on the switch at the end of its travel, closing a set of contacts that send a logic signal to the controller alerting it to cut (or reverse) motor power. The mounting bracket can also serve as a mechanical stop if the control function goes wrong.

Software Stops require adding a linear encoder to the linear actuator mechanism. There are all sorts of linear encoders, from simple lengths of resistance wire to digital optical position encoders. What they all do is send some kind of signal constantly informing the controller of where the slide is in real time. A software stop is then an algorithm in the controller program to say: “That’s far enough!” and trigger what happens next.
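
A software stop boils down to a few lines of controller code. Here’s a minimal sketch, assuming a controller that can read the encoder and cut motor power; the function names and limit values are placeholders, not any real vendor’s API:

```python
SOFT_STOP_MIN_MM = 5.0    # stay clear of the motor/shaft coupler (proximal end)
SOFT_STOP_MAX_MM = 195.0  # stay clear of running off the distal end

def check_soft_stops(read_encoder_mm, stop_motor, direction):
    """Cut power (and return True) if the slide, moving in `direction`
    (+1 distal, -1 proximal), has reached a software stop."""
    position = read_encoder_mm()
    if direction > 0 and position >= SOFT_STOP_MAX_MM:
        stop_motor()
        return True
    if direction < 0 and position <= SOFT_STOP_MIN_MM:
        stop_motor()
        return True
    return False
```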

Given the choice, my preference is to rely mainly on software stops. Having a linear encoder in the mechanism gives all kinds of neat options for precision control of the system, such as positioning, speed control, and so forth, in addition to implementing the software stops.

For example, I once built an experiment to test a device to measure the attack angle of an aircraft wing. A wing’s attack angle is the angle between the relative airflow and the wing shape’s chord. It is the single most important parameter determining the wing’s lift at any given speed. There are, however, all kinds of phenomena that affect the actual attack angle, all of which change constantly in real time as the wing moves through the air. To really understand what’s going on with the wing, some means of monitoring attack angle is, shall we say, useful.
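
For the curious, here’s roughly how attack angle drives lift, using the standard lift equation plus the thin-airfoil textbook idealization CL = 2πα (good only for small angles; the numbers below are invented for illustration):

```python
import math

def lift_newtons(airspeed_m_s, alpha_deg, wing_area_m2=1.0, rho=1.225):
    """Lift from L = 0.5 * rho * v^2 * S * CL, with the thin-airfoil
    approximation CL = 2 * pi * alpha (alpha in radians)."""
    cl = 2 * math.pi * math.radians(alpha_deg)
    return 0.5 * rho * airspeed_m_s ** 2 * wing_area_m2 * cl

# Doubling the attack angle doubles the lift (in this small-angle regime):
print(f"{lift_newtons(30.0, 2.0):.0f} N")  # ~121 N
print(f"{lift_newtons(30.0, 4.0):.0f} N")  # ~242 N
```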

Anyway, the test protocol for the experiment called for mounting an example of the attack-angle sensor in a wind tunnel, and measuring its output at hundreds of combinations of air speed and sensor orientation. Central to the control system’s operation was a linear encoder whose output informed both the controller and the data logging computer.

The controller’s job was to hold the sensor’s orientation at a certain set point via a feedback loop just long enough to get a stable reading, then go on to the next set point. The test program’s supervisory algorithm stepped the set point through all the orientations required, one at a time. In fact, it cycled the set point back and forth through the whole test range several times, logging data as it went.
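
In outline, that supervisory algorithm looks something like the sketch below. The move_to, read_angle, and log callables are placeholders for the real controller and data logger, and the numbers are invented; it just shows the shape of the step-settle-log cycle:

```python
import time

def run_test_cycle(set_points, move_to, read_angle, log, settle_tol=0.05, cycles=3):
    """Step the orientation set point back and forth through the test range,
    let the feedback loop settle at each point, then log a reading."""
    for _ in range(cycles):
        for sp in list(set_points) + list(reversed(set_points)):
            move_to(sp)                          # hand the set point to the feedback loop
            while abs(read_angle() - sp) > settle_tol:
                time.sleep(0.01)                 # wait for a stable reading
            log(sp, read_angle())                # record orientation vs. sensor output
```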

After building and testing the whole rig, my job, as principal investigator, was reduced to setting the wind tunnel’s airspeed, then reading a novel while the system ran through the test program and logged all the data automatically.

When designing the thing, I spent a couple of days trying to figure out how to install limit switches. In the end, however, I decided it just wasn’t worth the trouble. The design I had was pretty compact to begin with. The switches available and the mounting brackets to hold them would have been bigger than the rest of the design. So, I gave up on adding limit switches and relied on software stops.

That left me in danger of running into a hard stop, though, if something went wrong with the program. There are always hard stops. Lead screws are of finite length and one of two things can happen when you come to the end: either something (a hard stop) blocks the slide motion, or it runs off the end. Both are bad.

If the electric motor rams the slide into a hard stop, it’s like the proverbial unstoppable force vs. an immovable object. Something’s gotta give and that something invariably breaks.

If, on the other hand, the slide runs off the end of the lead screw, the whole machine falls apart. That may be less destructive, but it means the entire machine has to be reassembled.

Running Into A Hard Stop

There are two rules regarding running into a hard stop:

Rule 1: DON’T DO IT!

Rule 2: ASSUME YOU CAN’T AVOID IT!

What happens if you break Rule 1 depends on the details of the mechanism’s design. Every design is different, and so is what happens when you go too far. The consequences vary, but they are all more-or-less bad. There’s never a situation where running a linear actuator beyond its design limit is a good thing.

Rule 2, on the other hand, is a simple acknowledgement of Murphy’s Law: Anything that can go wrong will go wrong.

While Murphy’s Law has a statistical nature when you’re dealing with mechanical systems in use, when testing prototype systems it’s a stone-cold guarantee. And, any time you put together anything for the first time, then turn it on, you’re testing a prototype.

What Rule 2 tells you to do is think long and hard about what’s going to happen when you turn the thing on and get an unexpected surprise. You have to expect the unexpected because if you expected it, it wouldn’t be unexpected.

One of the most common surprises around linear actuators is the thing suddenly going out of control. When that happens the slide invariably runs past its design limits.

The video from Ametek is short. I hesitate to spoil it for you by telling you that the answer to the question “Can I Run A Linear Actuator Into A Hard Stop?” is “Yes.” It has to be because Rule 2 tells you it’s inevitable. Importantly, the video goes on to tell you what to do to minimize the damage when it happens.

Surrealism vs. Zen

Rene Magritte’s painting The Treachery of Images, also known as This Is Not a Pipe, is a famous example of surrealist style, which uses realistically rendered images to say something profound about the workings of the human mind.

26 April 2018 – As you can tell by the discrepancy between the date at the start of this column and the publication date listed in red above, it’s taken a looong time to get this thing written! The date at the text start, of course, is the date I started writing the manuscript, and the red publication date was automatically added when I actually finished all the corrections and made the thing live on the blog page. My main excuse for taking so long to write it is that the day I started the manuscript I also came down with the flu. It cut my work output drastically ’cause I suddenly started spending so much of my work day in a semi-comatose state.

Before starting this manuscript, I finally finished reading the (really massive) catalog for a 2001-2002 exhibition put together by the Tate Modern Gallery in Bankside, London, UK, entitled Surrealism: Desire Unbound. This tome is 349 pages long and provides a serious look deep inside the mindset of proponents of the Surrealist Movement, which was arguably the most far-reaching creative enterprise of the Twentieth Century.

I care about that because most of my art falls into the surrealist style. That is, it’s an attempt to render mental images in a realistic manner. I have, however, major differences with the classic surrealists led by Andre Breton regarding the theory of how the mind works. That affects the content chosen.

I’m not a trained psychologist, but neither was Breton. While Breton attempted to base his creative theory on his interpretation of Freud’s pioneering psychoanalytical research, most of Freud’s writings were unavailable to him at the time he was developing the ideas on which he based his 1924 booklet, Manifeste du surréalisme. The fact that Freud’s work is now quite readily available is largely immaterial because Freud’s research delved into mental illness, whereas I’m interested in the workings of reasonably healthy minds.

I prefer to follow the introspective traditions of Zen Buddhism.

In his manifesto, Breton says: “. . . one proposes to express — verbally, by means of the written word, or in any other manner — the actual functioning of thought. Dictated by thought, in the absence of any control exercised by reason, exempt from any aesthetic or moral concern.” In other words, he proposed to free artists of all disciplines from discipline itself.

From today’s vantage point, nearly a century removed from this event, that doesn’t seem like much of a stretch. We have gotten used to the idea that an artist can do pretty much anything he or she damn well pleases, call it art, and get away with it. I’ve no problem with Breton’s statement, except that he went so far as to eschew the editorial process.

As a veteran journalist, fiction writer, and visual artist, I know from experience that not editing invariably results in gobbledygook. I also know the surrealists didn’t actually do it.

The essence of all creative arts, from journalistic writing to making motion pictures, is communication. An artist has something to say, and attempts to say it. That’s the difference between a Michelangelo and a house painter.

(Having painted houses professionally, I hesitate to say anything derogatory about house painters. Then again, I did get fired from that job for taking too long to get anything done! So, maybe I should shut up about painting houses professionally.)

The purpose of editing is to ensure that the audience has half a chance of figuring out what the author is trying to say. It has been said that James Joyce’s stream-of-consciousness novel Ulysses is the most difficult thing to read in the English language. Since fighting through more than ten pages of the thing in one sitting gives me a splitting headache, I don’t disagree. Joyce would have done the English-reading world a great favor by consulting a copy of Strunk and White’s The Elements of Style!

Hey, Jimmy, ever hear of quotation marks?

So, by trying to bypass the editing process, the early surrealists’ attempts at automatic writing produced pretty ungainly stuff. Of course, from concept to pen, “automatic” writers actually do a lot of editing. Take, for example, two lines from Joyce Mansour’s first book of poetry (supposedly automatically written) Chris:

I will fish out your empty soul
In the coffin where your mouldy body lies

Do you, or anybody you know, think in such complete sentences? I don’t. Even if I started with the complete mental image, I’d do a lot of backing and filling to gin up those two lines complete with intelligible word order. I’d likely start with the coffin image, then realize I needed a subject to view the image, then maybe come up with the fishing idea, and so forth.

It’d all happen really fast because we humans are really fast verbal thinkers. It might almost seem instantaneous if I were willfully not paying attention to the process. But, “the absence of any control exercised by reason” would NOT obtain!

Does anyone believe Salvador Dali’s The Hallucinogenic Toreador is NOT the product of careful planning?

Similarly, does anyone believe Salvador Dali’s The Hallucinogenic Toreador is NOT the product of careful planning and reasoned arrangements of intertwining visual components? I doubt if Dali, himself, would assert that!

I am currently working on a simple painting that realistically depicts a woman’s eye. I’ve had it on my easel for at least a week, during which time I spent a couple of days carefully erasing the eyebrow that I made too dark in the original underpainting. I’ve spent another day deciding whether to mix up additional yellow paint to correct the skin color, or just get started with what paint I already have on hand and worry about running out when, and if, I run out.

That’s all part of the editing process.

It’s something every artist has done all the way back to the Cro-Magnon guy (or maybe girl) scribbling graffiti on the cave walls in Lascaux. Da Vinci spent a lifetime editing details of the Mona Lisa. Dali, trained in the same style, did the same thing.

Why would Breton be so enamored of automatism? It goes back to his reliance on the image Freud posited for human mental activity.

Freud imagined a mind divided against itself. He imagined a subconscious filled with desires and emotions trying to express itself, but held in check by a conscious ego that constantly says: “No, No! You can’t say or do that!”

Breton’s goal was to free the subconscious from conscious control.

To a Zen Buddhist that model of mental activity is absurd. Zen’s ancestor Taoism solved Breton’s problem roughly twenty-four centuries earlier with the image of the “uncarved block.”

Basically, as the second line of Lao Tsu’s Tao Te Ching says:

The name that can be named is not the eternal name.

That means dividing things up (by naming them) breaks them. Dividing the mind into subconscious and conscious parts breaks it.

Having a conscious mind controlling a subconscious mind results in insanity. To a Zen Buddhist, the sane person has a whole, undivided mind. What Freud imagines as the conscious part in fact always chooses a plan that expresses the desires of the unconscious part. How could a sane person act differently?

To a Zen Buddhist, it’s all one mind, not a bunch of disjointed pieces at war with each other.

What about the many examples of individuals whose unconstrained desires would run them afoul of society? To the Freudian surrealists, that was the normal state of affairs. To Zen Buddhists, on the other hand, that indicates one of two situations:

* Stupidity in which the conscious mind chooses inappropriate means to express the “unconscious” desires; or

* Mental illness in which the unconscious desires are such that no person “in their right mind” would actually desire them.

For example, Breton’s surrealists professed to admire the freedom from constraints expressed in the writings of the Marquis de Sade. Sade’s protagonists have a desire to cause suffering in others. To the Freudian surrealists, that forces a choice between consciously suppressing the violent urges, or going to jail for acting them out.

Having that desire in the first place would horrify any Buddhist! Buddhists want to end suffering. You’d have to stand on your head, philosophically speaking, to imagine a Buddhist subconsciously desiring to cause suffering for any creature. The very existence of such a desire demonstrates a deranged mind!

The Buddhist would choose the kinder, gentler way of confining the kooky Marquis in the eighteenth century version of a loony bin, while providing him barrels of ink and reams of paper with which he could mentally live out his barbaric fantasies without actually hurting anyone. That, of course, is exactly what the French authorities did.

Good for them.

Of course, the Sade-smitten surrealists were generally neither stupid nor insane. A quick search revealed exactly zero instances of surrealists being jailed for violent behavior. Several of them did run afoul of decency laws, but most of us now would opine that was the fault of the laws, not the law breakers. I’m fairly confident that, though the surrealists often depicted instances of cruelty, they pretty much never actually hurt anyone, themselves.

Even that famously revolting shot in the film Un Chien Andalou by Dali and Luis Bunuel, that apparently shows a girl’s eye being sliced open, didn’t actually happen. It was a special-effects masterpiece.

So, what does all this mean for surrealism in the first quarter of the twenty-first century?

Well, a number of historians counted surrealism as dead at the end of World War II. Others confidently claim that surrealism died with Andre Breton in 1966. Still others say it died with Salvador Dali in 1989.

My experience indicates that surrealism might insist, along with Mark Twain: “The reports of my death are greatly exaggerated.”

I constantly see exquisite new works done in a style that can best be described as surrealist. That is, these works render images of mental landscapes and ideas in a startlingly realistic way. They make the life of the mind visible.

Sounds like surrealism to me!

STEM Careers for Women

Woman engineer
Women have more career options than just STEM. Courtesy Shutterstock.

6 April 2018 – Folks are going to HATE what I have to say today. I expect to get comments accusing me of being a slug-brained, misogynist, reactionary imbecile. So be it; I often say things other people don’t want to hear, and I’m often accused of being a slug-brained imbecile. I’m sometimes accused of being reactionary.

I don’t think I’m usually accused of being misogynist, so that’ll be a new one.

I’m not often accused of being misogynist because I’ve got pretty good credentials in the promoting-women’s-interests department. I try to pay attention to what goes on in my women friends’ heads. I’m more interested in the girl inside than in their outsides. Thus, I actually do care about what’s important to them.

Historically, I’ve known a lot of exceptional women, and not a few who were not-so-exceptional, and, of course, I’ve met my share of morons. But, I’ve tried to understand what was going on in all their heads because I long ago noticed that just about everybody I encounter is able to teach me something if I pay attention.

So much for the preliminaries.

Getting more to the point of this blog entry, last week I listened to a Wilson Center webcast entitled “Opening Doors in Glass Walls for Women in STEM.” I’d hoped I might have something to add to the discussion, but I didn’t. I also didn’t hear much in the “new ideas” department, either. It was mostly “woe is us ’cause women get paid less than men,” and “we’ve made some progress, but there still aren’t many women in STEM careers,” and stuff like that.

Okay. For those who don’t already know, STEM is an acronym for “Science, Technology, Engineering and Math.” It’s a big thing in education and career-development circles because it’s critical to our national technological development.

Without going into the latest statistics (’cause I’m too lazy this morning to look ’em up), it’s pretty well acknowledged that women get paid a whole lot less than men for doing the same jobs, and a whole lot less than 50% of STEM workers are women despite their making up half the available workforce.

I won’t say much about the pay ranking, except to assert that paying someone less than their efforts are worth is just plain dumb. It’s dumb for the employer because good talent will vote with their feet for higher pay. It’s dumb for the employee because he, she, or it should vote with their feet by walking out the door to look for a more enlightened employer. It doesn’t matter whether you are a man or a woman, you don’t want to be dependent for your income on a mismanaged company!

Enough said about the pay differential. What I want to talk about here is the idea that, since half the population is women, half the STEM workers should be women. I’m going to assert that’s equally dumb!

I do NOT assert that there is anything about women that makes them unsuited to STEM careers. It is true that women are significantly smaller physically (the last time I checked, the average American woman was 5’4″ tall, while the average American man was 5’10” tall with everything else more or less scaled to match), but that makes no nevermind for a STEM career. STEM jobs make demands on what’s between the ears, not what’s between the shoulders.

With regard to women’s brains’ suitability for STEM jobs, experience has shown me that there’s no significant (to a STEM career) difference between them and male brains. Women are every bit as adept at independent thinking, puzzle solving, memory tasks, and just about any measurable talent that might make a difference to a STEM worker. I’ve seen no study that showed women to be inferior to men with respect to mathematical or abstract reasoning, either. In fact, some studies have purported to show the reverse.

On the other hand, as far as I know, EVERY culture traditionally separates jobs into “women’s work” and “men’s work.” Being a firm believer in Darwinian evolution, I don’t argue with Mommy Nature’s way, but do ask “Why?”

Many decades ago, my advanced lab instructor asserted that “tradition is the sum total of things our ancestors over the past four million years have found to work.” I completely agree with him, with the important proviso that things change.

Four million years ago, our ancestors didn’t have ceramic tile floors in their condos, nor did they have cars with remote keyless entry locks. It was a lot tougher for them than it is for us, and survival was far less assured.

They were the guys who decided to have men make the hand axes and arrowheads, and that women should weave the baskets and make the soup. Most importantly for our discussion, they decided women should change the diapers.

Fast forward four million years, and we’re still doing the same things, more or less. Things, however, have changed, and we’re now having to rethink that division of labor.

Some jobs, like digging ditches, still require physical prowess, which makes them more suited to men than women. I’m ignoring (but not forgetting) all the manual labor women are asked to do all over the world. That’s not what I’m talking about here. I’m talking about STEM jobs, which DON’T require physical prowess.

So, why don’t women go after those cushy, high-paying STEM jobs, and, equally significant, once they have one of those jobs, why is it so hard to keep them in them? One of the few things that came out of last week’s webinar (Remember this all started with my attending that webinar?) was the point that women leave STEM careers in droves. They abandon their hard-won STEM careers and go off to do something else.

The point I want to make with this essay is to suggest that maybe the reason women are underrepresented in STEM careers is that they actually have more options than men. Most importantly, they have the highly attractive (to them) option of the “homemaker” career.

Current thinking among the liberal intelligentsia is that “homemaker” is not much of a career. I simply don’t accept that idea. Housewife is just as important a job as, say, truck driver, bank president, or technology journalist. So, pooh!

The homemaker option is not open to most men. We may be willing to help out around the house, and may even feel driven to do our part, or at least try to find some part that could be ours to do. But, I can’t think of one of my male friends who’d be comfortable shouldering the whole responsibility.

I assert that four million years of evolution has wired up human brains for sexual dimorphism with regard to “guy jobs” and “girl jobs.” It just feels right for guys to do jobs that seem to be traditionally guy things and for women to do jobs that seem to be traditionally theirs.

Now, throughout most of evolutionary time STEM jobs pretty much didn’t exist. One of the things our ancestors didn’t have four million years ago was trigonometry. In fact, they probably struggled with basic number theory. I did an experiment in high school that indicated that the crows in my back yard couldn’t count beyond two. Australopithecus or Paranthropus was probably a better mathematician than that, but likely not by much.

So, one of the things we have now that has avoided being shaped by natural-selection pressure is the option to pursue a STEM career. It’s pretty much evolutionarily neutral. STEM careers are probably equally attractive (or repulsive) to women and men.

I mention “repulsive” for a very good reason. Preparing oneself for a STEM career is hard.

Mathematics, especially, is one of the few subjects that give many, if not most, people phobias. Frankly, arithmetic lost me on the second day of first grade when Miss Shay passed out a list of addition tables and told us to memorize it. I thought the idea of arithmetic was a gas. Memorizing tables, however, was not on my To Do list. I expect most people feel the same way.

Learning STEM subjects involves a $%^-load of memorizing! So, it’s no wonder girls would rather play with dolls (and boys with trucks) than study STEM subjects. Eventually, playing with trucks leads to STEM careers. Playing with dolls does not.

Grown up girls find they have the option of playing with dolls as a career. Grown up boys don’t. So, choosing a STEM career is something grown-up boys really want to do if they can, but for girls, not so much. They can find something to do that’s more satisfying with less work.

So, they vote with their feet. THAT may be why it’s so hard to get women into STEM careers in the first place, and then to keep them there for the long haul.

Before you start having apoplectic fits imagining that I’m making a broad generalization that females don’t like STEM careers, recognize that what I’m describing IS a broad theoretical generalization. It’s meant to be.

In the real world there are 300 million people in the United States, half of whom are women, and each and every one of them gets to make a separate career choice for themself. Every one of them chooses for themself based on what they want to do with their life. Some choose STEM careers. Some don’t.

My point is that you shouldn’t just assume that half of STEM job slots ought to be filled by women. Half of the potential candidates may be women, but a fair fraction of them might prefer to go play somewhere else. It may be that women have more alternatives than men do. You may end up with more men slotting into those STEM jobs because they have less choice.

You know, being a housewife ain’t such a bad gig!

The Future of Personal Transportation

Israeli startup Griiip’s next generation single-seat race car demonstrating the world’s first motorsport Vehicle-to-Vehicle (V2V) communication application on a racetrack.

9 April 2018 – Last week turned out to be big for news about personal transportation, with a number of trends making significant(?) progress.

Let’s start with a report (available for download at https://gen-pop.com/wtf) by independent French market-research company Ipsos of responses from more than 3,000 people in the U.S. and Canada, and thousands more around the globe, to a survey about the human side of transportation. That is, how do actual people — the consumers who ultimately will vote with their wallets for or against advances in automotive technology — feel about the products innovators have been proposing to roll out in the near future? Today, I’m going to concentrate on responses to questions about self-driving technology and automated highways. I’ll look at some of the other results in future postings.

Perhaps the biggest takeaway from the survey is that approximately 25% of American respondents claim they “would never use” an autonomous vehicle. That’s a biggie for advocates of “ultra-safe” automated highways.

As my wife constantly reminds me whenever we’re out in Southwest Florida traffic, the greatest highway danger is from the few unpredictable drivers who do idiotic things. When surrounded by hundreds of vehicles ideally moving in lockstep, but actually not, what percentage of drivers acting unpredictably does it take to totally screw up traffic flow for everybody? One percent? Two percent?

According to this survey, we can expect up to 25% to be out of step with everyone else because they’re making their own decisions instead of letting technology do their thinking for them.

Automated highways were described in detail back in the middle part of the twentieth century by science-fiction writer Robert A. Heinlein. What he described was a scene where thousands of vehicles packed vast Interstates, all communicating wirelessly with each other and a smart fixed infrastructure that planned traffic patterns far ahead, and communicated its decisions to individual vehicles so they acted together to keep traffic flowing in the smoothest possible way at the maximum possible speed with no accidents.

Heinlein also predicted that the heroes of his stories would all be rabid free-spirited thinkers, who wouldn’t allow their cars to run in self-driving mode if their lives depended on it! Instead, they relied on human intelligence, forethought, and fast reflexes to keep themselves out of trouble.

And, he predicted they would barely manage to escape with their lives!

I happen to agree with him: trying to combine a huge percentage of highly automated vehicles with a small percentage of vehicles guided by humans who simply don’t have the foreknowledge, reflexes, or concentration to keep up with the automated vehicles around them is a train wreck waiting to happen.

Back in the late twentieth century I had to regularly commute on the 70-mph parking lots that went by the name “Interstates” around Boston, Massachusetts. Vehicles were generally crammed together half a car length apart. The only way to have enough warning to apply brakes was to look through the back window and windshield of the car ahead to see what the car ahead of them was doing.

The result was regular 15-car pileups every morning during commute times.

Heinlein’s vision (and that of automated-highway advocates) had that kind of traffic density and speed, but was saved from inevitable disaster by fascistic control by omniscient automated-highway technology. One recalcitrant human driver tossed into the mix would be guaranteed to bring the whole thing down.

So, the moral of this story is: don’t allow manual-driving mode on automated highways. The 25% of Americans who’d never surrender their manual-driving privilege can just go drive somewhere else.

Yeah, I can see THAT happening!

A Modest Proposal

With apologies to Jonathan Swift, let’s change tack and focus on a more modest technology: driver assistance.

Way back in the 1980s, George Lucas and friends put out the third in the interminable Star Wars series, Return of the Jedi. The film included a sequence that could only be possible in real life with help from some sophisticated collision-avoidance technology. They had a bunch of characters zooming around in a trackless forest on the moon Endor, riding what can only be described as flying motorcycles.

As anybody who’s tried trailblazing through a forest on an off-road motorcycle can tell you, going fast through virgin forest means constant collisions with fixed objects. As Bugs Bunny once said: “Those cartoon trees are hard!”

Frankly, Luke Skywalker and Princess Leia might have had superhuman reflexes, but their doing what they did without collision avoidance technology strains credulity to the breaking point. Much easier to believe their little speeders gave them a lot of help to avoid running into hard, cartoon trees.

In the real world, Israeli companies Autotalks and Griiip have demonstrated the world’s first motorsport Vehicle-to-Vehicle (V2V) application to help drivers avoid rear-ending each other. The system works by combining GPS, in-vehicle sensing, and wireless communication to create a peer-to-peer network that allows each car to send out alerts to all the other cars around it.

So, imagine the situation where multiple cars are on a racetrack at the same time. That’s decidedly not unusual in a motorsport application.

Now, suppose something happens to make car A suddenly and unpredictably slow or stop. Again, that’s hardly an unusual occurrence. Car B, which is following at some distance behind car A, gets an alert from car A of a possible impending-collision situation. Car B forewarns its driver that a dangerous situation has arisen, so he or she can take evasive action. So far, a very good thing in a car-race situation.
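To make that concrete, here’s a minimal sketch of the logic in Python. The message format, field names, and trigger threshold are my own inventions for illustration; the actual Autotalks/Griiip protocol is certainly more sophisticated:

    import json
    import time

    HARD_DECEL = 6.0  # m/s^2; hypothetical "panic stop" trigger level

    def make_alert(car_id, lat, lon, speed, decel):
        """Car A: broadcast an alert when deceleration passes the trigger."""
        if decel < HARD_DECEL:
            return None  # nothing alarming happening; stay quiet
        return json.dumps({
            "car": car_id,
            "lat": lat, "lon": lon,   # GPS fix
            "speed": speed,           # m/s
            "decel": decel,           # m/s^2
            "time": time.time(),
        })

    def should_warn(alert_msg):
        """Car B: decide whether an incoming alert warrants warning the
        driver. A real system would also check range, bearing, and lane."""
        msg = json.loads(alert_msg)
        return msg["decel"] >= HARD_DECEL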

But, what’s that got to do with just us folks driving down the highway minding our own business?

During the summer down here in Florida, every afternoon we get thunderstorms dumping torrential rain all over the place. Specifically, we’ll be driving down the highway at some ridiculous speed, then come to a wall of water obscuring everything. Visibility drops from unlimited to a few tens of feet with little or no warning.

The natural reaction is to come to a screeching halt. But, what happens to the cars barreling up from behind? They can’t see you in time to stop.

Whammo!

So, coming to a screeching halt is not the thing to do. Far better to keep going forward as fast as visibility will allow.

But, what if somebody up ahead panicked and came to a screeching halt? Or, maybe their version of “as fast as visibility will allow” is a lot slower than yours? How would you know?

The answer is to have all the vehicles equipped with the Israeli V2V equipment (or an equivalent) to forewarn following drivers that something nasty has come out of the proverbial woodshed. It could also feed into your vehicle’s collision-avoidance system to bridge the 2-3 seconds it takes for a human driver to say “What the heck?” and figure out what to do.
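Some back-of-the-envelope arithmetic (my numbers, nobody’s official figures) shows what bridging those 2-3 seconds is worth at highway speed:

    # What human reaction time costs at highway speed
    speed = 70 * 0.44704      # 70 mph in m/s (about 31.3 m/s)
    reaction_time = 2.5       # s, mid-range of the 2-3 s quoted above
    decel = 7.0               # m/s^2, hard braking on dry pavement

    reaction_distance = speed * reaction_time    # travel before braking starts
    braking_distance = speed**2 / (2 * decel)    # kinematics: v^2 / (2a)

    print(f"Reaction distance: {reaction_distance:.0f} m")  # about 78 m
    print(f"Braking distance:  {braking_distance:.0f} m")   # about 70 m

Cutting out the reaction time roughly halves the total stopping distance, which is the whole point of wiring the V2V alert straight into the collision-avoidance system.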

The Israelis suggest that the required chip set (which, of course, they’ll cheerfully sell you) is so dirt cheap that anybody can afford to opt for it in their new car, or retrofit it into their beat-up old junker. They further suggest that it would be worthwhile for insurance companies to offer a discount on premiums to help cover the cost.

Sounds like a good deal to me! I could get behind that plan.

Invasion of the Robofish!

30 March 2018 – Mobile autonomous systems come in all sizes, shapes, and forms, and have “invaded” every earthly habitat. That’s not news. What is news is how far the “bleeding edge” of that technology has advanced. Specifically, it’s news when a number of trends combine to make something unique.

Today I’m getting the chance to report on something I predicted in a sci-fi novel I wrote back in 2011, and on how the real thing goes at least one step further.

Last week the folks at Design World published a report on research at the MIT Computer Science & Artificial Intelligence Lab that combines three robotics trends into one system, quietly producing something I find fascinating: a submersible mobile robot. The three trends are soft robotics, submersible unmanned systems, and biomimetic robot design.

The beasty in question is a robot fish. It’s obvious why this little guy touches on those three trends. How could a robotic fish not use soft-robotic, submersible, and biomimetic technologies? What I want to point out is how it uses those technologies and why that combination is necessary.

Soft Robotics

Folks have made ROVs (basically remotely operated submarines) for … a very long time. What they’ve pretty much all produced are clanky, propeller-driven derivatives of Jules Verne’s fictional Nautilus from his 1870 novel Twenty Thousand Leagues Under the Sea. That hunk of junk is a favorite of steampunk aficionados.

Not much has changed in basic submarine design since then. Modern ROVs are more maneuverable than their WWII predecessors because they add multiple propellers to push them in different directions, but the rest of it’s pretty much the same.

Soft robotics changes all that.

About 45 years ago, a half-drunk physics professor at a kegger party started bending my ear about how Mommy Nature never seemed to have discovered the wheel. The wheel’s a nearly unique human invention that Mommy Nature has pretty much done without.

Mommy Nature doesn’t use the wheel because she uses largely soft technology. Yes, she uses hard technology to make structural components like endo- and exoskeletons to give her live beasties both protection and shape, but she stuck with soft-bodied life forms for the first four billion years of Earth’s 4.5-billion-year history. Adding hard-body technology in the form of notochords didn’t happen until the Cambrian explosion of 541-516 million years ago, when most major animal phyla appeared.

By the way, that professor at the party was wrong. Mommy Nature invented wheels way back in the Precambrian, in the form of rotary motors to power the flagella that propel unicellular free-swimmers. She just hasn’t used wheels for much else since.

Of course, everybody more advanced than a shark has a soft body reinforced by a hard, bony skeleton.

Today’s soft robotics uses elastomeric materials to solve a number of problems for mobile automated systems.

Perhaps most importantly it’s a lot easier for soft robots to separate their insides from their outsides. That may not seem like a big deal, but think of how much trouble engineers go through to keep dust, dirt, and chemicals (such as seawater) out of the delicate gears and bearings of wheeled vehicles. Having a flexible elastomeric skin encasing the whole robot eliminates all that.

That’s not to mention skin’s job of keeping pesky little creepy-crawlies out! I remember an early radio astronomer complaining that pack rats had gotten into his remote desert headquarters trailer and eaten a big chunk of his computer’s magnetic-core memory. That was back in the days when computer random-access memories were made from tiny ferrite beads strung on copper wires.

Another major advantage of soft bodies for mobile robots is resistance to collision damage. Think about how often you’re bumped into when crossing the room at a cocktail party. Now, think about what your hard-bodied automobile would look like after bumping into that many other cars in a parking lot. Not a pretty sight!

The flexibility of soft bodies also makes possible a lot of propulsion methods besides wheel-like propellers, caterpillar tracks, and rubber tires. That’s good because piercing soft-body skins with drive shafts to power propellers and wheels pretty much trashes the advantages of having those skins in the first place.

That’s why prosthetic devices all have elaborate cuffs to hold them to the outsides of the wearer’s limbs. Piercing the skin to screw something like Captain Hook’s hook directly into the existing bone never works out well!

So, in summary, the MIT group’s choice to start with soft-robotic technology is key to their success.

Submersible Unmanned Systems

Underwater drones have one major problem not faced by robotic cars and aircraft: radio waves don’t go through water. That means if anything happens that your none-too-intelligent automated system can’t handle, it needs guidance from a human operator. Underwater, that has largely meant tethering the robot to a human.

This issue is a wall that self-driving-car developers run into constantly (and sometimes literally). When the human behind the wheel mandated by state regulators for autonomous test vehicles falls asleep or is distracted by texting his girlfriend, BLAMMO!

The world is a chaotic place and unpredicted things pop out of nowhere all the time. Human brains are programmed to deal with this stuff, but computer technology is not, and will not be for the foreseeable future.

Drones and land vehicles, which are immersed in a sea of radio-transparent air, can rely on radio links to remote human operators to help them get out of trouble. Underwater vehicles, which are immersed in a sea of radio-opaque water, can’t.

In the past, that’s meant copper wires enclosed in physical tethers that tie the robots to the operators. Tethers get tangled, cut, and hung up on everything from coral outcrops to passing whales.

There are a couple of ways out of the tether bind: acoustics and optics. Sound travels through water superbly, and blue-green light penetrates reasonably well (infrared, despite what you might guess, is strongly absorbed). The MIT group seems to be using my preferred comm link: ultrasonics.

Sound goes through water like you-know-what through a goose. Water also has little or no sonic “color.” That is, all frequencies of sonic waves go more-or-less equally well through water.

The biggest problem for ultrasonics is interference from all the other noise makers out there in the natural underwater world. That calls for the spread-spectrum transmission techniques invented by Hedy Lamarr. (Hah! Gotcha! You didn’t know Hedy Lamarr, aka Hedwig Eva Maria Kiesler, was a world-famous technical genius in addition to being a really cute, sexy movie actress.) Hedy’s spread-spectrum technique lets ultrasonic signals cut right through the clutter.
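The heart of Hedy’s trick is that transmitter and receiver hop among many channels in a pseudo-random sequence that only they share, so a narrowband noise source clobbers at most an occasional hop. Here’s a toy Python sketch of the idea (the channel count and hop count are made up for illustration):

    import random

    def hop_schedule(seed, channels, hops):
        """Pseudo-random channel-hopping schedule. Both ends derive the
        same schedule from the shared seed, so they stay in sync."""
        rng = random.Random(seed)
        return [rng.randrange(channels) for _ in range(hops)]

    # 64 hypothetical ultrasonic channels, ten hops
    tx = hop_schedule(seed=0xBEEF, channels=64, hops=10)
    rx = hop_schedule(seed=0xBEEF, channels=64, hops=10)
    assert tx == rx  # transmitter and receiver agree on the sequence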

So, advanced submersible mobile robot technology is the second thread leading to a successful robotic fish.

Biomimetics

Biomimetics is a 25-cent word that simply means copying designs directly from nature. It’s a short cut engineers have employed from time immemorial. Sometimes it works spectacularly, as with Thomas Wedgwood’s photographic camera (developed as an analogue of the terrestrial vertebrate eye), and sometimes not, as with Leonardo da Vinci’s attempts to make flying machines based on birds’ wings.

Obviously, Mommy Nature’s favorite fish-propulsion mechanism is highly successful, having been around for some 550 million years and still going strong. It, of course, requires a soft body anchored to a flexible backbone. It takes no imagination at all to copy it for robot fish.

The copying itself turns out to be the hard part, though, because it requires developing fabrication techniques to build soft-bodied robots with flexible backbones in the first place. I’ve tried it, and it’s no mean task.

The tough part is making a muscle analogue that will drive the flexible body to move back and forth rhythmically and propel the critter through the water. The answer is pneumatics.

In the early 2000s, a patent-lawyer friend of mine suggested lining both sides of a flexible membrane with tiny balloons that could be alternately inflated or deflated. When the balloons on one side were inflated, the membrane would curve away from that side. When the balloons on the other side were inflated the membrane would curve back. I played around with this idea, but never went very far with it.

The MIT group seems to have made it work using both gas (carbon dioxide) and liquid (water) for the working fluid. The difference between this kind of motor and natural muscle is that natural muscle works by pulling when energized, and the balloon system works by pushing. Otherwise, both work by balancing mechanical forces along two axes with something more-or-less flexible trapped between them.

In Nature’s fish, that something is the critter’s skeleton (backbone made up of vertebrae and stiffened vertically by long, thin spines), whereas the MIT group’s robofish uses elastomers with different stiffnesses.
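For flavor, here’s what a control loop for that kind of push-push balloon motor might look like. This is a minimal Python sketch under my own assumptions; the valve-driver function is a hypothetical stand-in, and the real robot’s controller is surely more subtle:

    import math
    import time

    def set_pressure(side, bar):
        """Hypothetical stand-in for the real valve/pump driver."""
        print(f"{side} chamber -> {bar:.2f} bar")

    def flap_tail(freq_hz, max_bar, duration_s):
        """Alternately pressurize the left and right balloon arrays so
        the elastomeric tail bends back and forth sinusoidally."""
        start = time.time()
        while time.time() - start < duration_s:
            phase = math.sin(2 * math.pi * freq_hz * (time.time() - start))
            set_pressure("left", max(0.0, phase) * max_bar)    # bend right
            set_pressure("right", max(0.0, -phase) * max_bar)  # bend left
            time.sleep(0.02)  # 50 Hz control loop

    flap_tail(freq_hz=1.5, max_bar=1.0, duration_s=2.0)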

Complete Package

Putting these technical trends together creates a complete package that makes it possible to build a free-swimming submersible mobile robot that moves in a natural manner at a reasonable speed without a tether. That opens up a whole range of applications, from deep-water exploration to marine biology.

What’s So Bad About Cryptocurrencies?

15 March 2018 – Cryptocurrency fans point to the vast “paper” fortunes that have been amassed by some bitcoin speculators, and sometimes predict that cryptocurrencies will eventually displace currencies issued and regulated by national governments. Conversely, banking-system regulators in several nations, most notably China and Russia, have outright bans on using cryptocurrency (specifically bitcoin) as a medium of exchange.

At the same time, it appears that fintech (financial technology) pundits pretty universally agree that blockchain technology, which is the enabling technology behind all cryptocurrency efforts, is the greatest thing since sliced bread, or, more to the point, the invention of ink on papyrus (IoP). Before IoP, financial records relied on clanky technologies like bundles of knotted cords, ceramic Easter eggs with little tokens baked inside, and that poster child for early written records, the clay tablet.

IoP immediately made possible tally sheets, journal and record books, double-entry ledgers, and spreadsheets. Without thin sheets of flat stock you could bind together into virtually unlimited bundles and then make indelible marks on, the concept of “bookkeeping” would be unthinkable. How could you keep books without having books to keep?

Blockchain is basically taking the concept of double-entry ledger accounting to the next (digital) level. I don’t pretend to fully understand how blockchain works. It ain’t my bailiwick. I’m a physicist, not a computer scientist.

To me, computers are tools. I think of them the same way I think of hacksaws, screwdrivers, and CNC machines. I’m happy to have ’em and anxious to know how to use ’em. How they actually work and, especially, how to design them are details I generally find of marginal interest.

If it sounds like I’m backing away from any attempt to explain blockchains, that’s because I am. There are lots of people out there who are willing and able to explain blockchains far better than I could ever hope to.

Money, on the other hand, is infinitely easier to make sense of, and it’s something I studied extensively in MBA school. And, that’s really what cryptocurrencies are all about. It’s also the part of cryptocurrency that its fans seem to have missed.

Once upon a time, folks tried to imbue their money (currency) with some intrinsic value. That’s why they used to make coins out of gold and silver. When Marco Polo introduced the Chinese concept of promissory notes to Renaissance Europe, it became clear that paper currency was possible provided there were two characteristics that went with it:

  • Artifact is some kind of thing (and I can’t identify it any more precisely than with the word “thing” because just about anything and everything has been tried and found to work) that people can pass between them to form a transaction; and
  • Underlying Value is some form of wealth that stands behind the artifact and gives an agreed-on value to the transaction.

For cryptocurrencies, the artifact consists of entries in a computer memory. The transactions are simply changes in the entries in computer memories. More specifically, blockchains amount to electronic ledger entries in a common database that forever leave an indelible record of transactions. (Sound familiar?)
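To make the “indelible ledger” idea concrete, here’s a toy hash chain in Python. It has none of the consensus, mining, or networking machinery of a real blockchain, but it shows why quietly editing an old entry doesn’t work:

    import hashlib
    import json

    def add_entry(ledger, tx):
        """Append a transaction, chained to the previous entry's hash."""
        prev = ledger[-1]["hash"] if ledger else "0" * 64
        body = json.dumps({"tx": tx, "prev": prev}, sort_keys=True)
        ledger.append({"tx": tx, "prev": prev,
                       "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(ledger):
        """Recompute every hash; any edited entry breaks the chain."""
        for i, entry in enumerate(ledger):
            prev = ledger[i - 1]["hash"] if i else "0" * 64
            body = json.dumps({"tx": entry["tx"], "prev": prev},
                              sort_keys=True)
            if (entry["prev"] != prev or
                    entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
                return False
        return True

    ledger = []
    add_entry(ledger, "Alice pays Bob 5")
    add_entry(ledger, "Bob pays Carol 2")
    assert verify(ledger)
    ledger[0]["tx"] = "Alice pays Bob 500"  # try to cook the books...
    assert not verify(ledger)               # ...and the chain exposes it

Edit any old transaction and its recomputed hash no longer matches the recorded one, so the tampering is exposed immediately.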

Originally, the underlying value of traditional currencies was imagined to be the wealth represented by the metal in a coin, or the intrinsic value of a jewel, and so forth. More recently, folks have begun imagining that the underlying value of government-issued currency (dollars, pounds sterling, yuan) is fictitious: that the value of a dollar is whatever people believe it to be.

According to this idea, anybody could issue currency as long as they got a bunch of people together to agree that it had some value. Put that concept together with the blockchain method of common recordkeeping, and you get cryptocurrency.

I’m oversimplifying all this in an effort to keep this posting within rational limits and to make a point, so bear with me. The point I’m trying to make is that the difference between any cryptocurrency and U.S. dollars is that these cryptocurrencies have no underlying value.

I’ve heard the argument that there’s no underlying value behind U.S. dollars, either. That just ain’t so! Having dollars issued by the U.S. government and tied to the U.S. tax base connects dollars to the U.S. economy. In other words, the underlying value backing up the artifacts of U.S. dollars is the entire U.S. economy. Total U.S. economic output in 2016, as measured by gross domestic product (GDP), was nearly 19 trillion dollars. That ain’t nothing!

And, You Thought Global Warming was a BAD Thing?

Ice skaters on the frozen Thames river in 1677

10 March 2017 – Way back in the 1970s, when I was an astrophysics graduate student, I was hot on the trail of why solar prominences had the shapes we observe them to have. Being a good little budding scientist, I spent most of my waking hours in the library poring over old research notes, from the (at that time barely existing) current solar research back to the beginning of time. Or, at least to the invention of the telescope.

The fact that solar prominences are closely associated with sunspots led me to study historical measurements of sunspots. Of course, I quickly ran across two well-known anomalies known as the Maunder and Spörer minima. These were extended periods (the Spörer minimum in the fifteenth and sixteenth centuries, the Maunder in the seventeenth) when sunspots practically disappeared for decades at a time. Astronomers of the time commented on it, but hadn’t a clue as to why.

The idea that sunspots could disappear for extended periods is not really surprising. The Sun is well known to be a variable star whose surface activity varies on a more-or-less regular 11-year cycle (22 years if you count the fact that the magnetic polarity reverses after every minimum). The idea that any such oscillator can drop out once in a while isn’t hard to swallow.

Besides, when Mommy Nature presents you with an observable fact, it’s best not to doubt the fact, but to ask “Why?” That leads to much more fun research and interesting insights.

More surprising (at the time) was the observed correlation between the Maunder and Spörer minima and a period of anomalously cold temperatures throughout Europe known as the “Little Ice Age.” Interesting effects of the Little Ice Age included the invention of buttons to make winter garments more effective, advances of glaciers in the mountains, ice skating on rivers that previously never froze at all, and the abandonment of Viking settlements in Greenland.

And, crop failures. Can’t forget crop failures! Marie Antoinette’s famous (and probably apocryphal) “Let ’em eat cake” faux pas played out against consistent failures of the French wheat harvest.

The moral of the Little Ice Age story is:

Global Cooling = BAD

The converse conclusion:

Global Warming = GOOD

seems less well documented. A Medieval Warm Period from about 950 to 1250 did correlate with fairly active times for European culture. Similarly, the Roman Warm Period (250 BCE – 400 CE) saw the rise of Roman civilization. So, we can tentatively conclude that global warming is generally NOT bad.

Sunspots as Markers

The reason the coincidence of sunspot minima with cool temperatures was surprising is that astronomers at the time fantasized that sunspots were like clouds blocking radiation leaving the Sun. Folks assumed that more clouds meant more blocking of radiation, and cooler temperatures on Earth.

Careful measurements quickly put that idea into its grave with a stake through its heart! The reason is another feature of sunspots, which the theory conveniently forgot: they’re surrounded by relatively bright areas (called faculae) that pump out radiation at an enhanced rate. It turns out that the faculae associated with a sunspot easily make up for the dimming effect of the spot itself.

That’s why we carefully measure details before jumping to conclusions!

Anyway, the best solar-output (irradiance) research I was able to find was by Charles Greeley Abbot, who, as Director of the Smithsonian Astrophysical Observatory from 1907 to 1944, assembled an impressive decades-long series of meticulous measurements of the total radiation arriving at Earth from the Sun. He also attempted to correlate these measurements with weather records from various cities.

Blinded by a belief that solar activity (as measured by sunspot numbers) would anticorrelate with solar irradiation and therefore Earthly temperatures, he was dismayed to be unable to make sense of the combined data sets.

By simply throwing out the assumptions, I was quickly able to see that the only correlation in the data was that temperatures more-or-less positively correlated with sunspot numbers and solar irradiation measurements. The resulting hypothesis was that sunspots are a marker for increased output from the Sun’s core. Below a certain level there are no spots. As output increases above the trigger level, sunspots appear and then increase with increasing core output.
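The analysis itself is nothing exotic. Stripped of the assumptions, it boils down to computing correlation coefficients and letting the data speak, as in this Python sketch (with made-up numbers standing in for Abbot’s actual measurements):

    import numpy as np

    # Hypothetical yearly series (illustrative; NOT Abbot's real data)
    sunspots    = np.array([5, 30, 80, 110, 60, 20, 8, 45, 95, 70])
    irradiance  = np.array([1360.2, 1360.5, 1361.1, 1361.4, 1360.9,
                            1360.4, 1360.2, 1360.7, 1361.2, 1361.0])
    temperature = np.array([13.8, 13.9, 14.1, 14.2, 14.0,
                            13.9, 13.8, 14.0, 14.1, 14.0])

    # Pairwise Pearson correlations; consistently positive values are
    # what the sunspots-as-markers hypothesis predicts
    print(np.corrcoef(sunspots, irradiance)[0, 1])
    print(np.corrcoef(sunspots, temperature)[0, 1])
    print(np.corrcoef(irradiance, temperature)[0, 1])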

The conclusion is that the Little Ice Age corresponded with a long period of reduced solar-core output, and the Maunder and Spörer minima are shorter periods when the core output dropped below the sunspot-trigger level.

So, we can conclude (something astronomers have known for decades if not centuries) that the Sun is a variable star. (The term “solar constant” is an oxymoron.) Second, we can conclude that variations in solar output have a profound effect on Earth’s climate. Those conclusions are neither surprising nor in doubt.

We’re also on fairly safe ground to say that (within reason) global warming is a good thing. At least it’s pretty clearly better than global cooling!