Don’t Tell Me What to Think!

Your Karma ran over My Dogma
A woman holds up a sign while participating in the annual King Mango Strut parade in Miami, FL on 28 December 2014. BluIz60/Shutterstock

2 January 2019 – Now that the year-end holidays are over, it’s time to get back on my little electronic soapbox to talk about an issue that scientists have had to fight with authorities over for centuries. It’s an issue that has been around for millennia, but before a few centuries ago there weren’t scientists around to fight over it. The issue rears its ugly head under many guises. Most commonly today it’s discussed as academic freedom, or freedom of expression. You might think it was definitively won for all Americans in 1791 with the ratification of the first ten amendments to the U.S. Constitution and for folks in other democracies soon after, but you’d be wrong.

The issue is wrapped up in one single word: dogma.

According to the Oxford English Dictionary, the word dogma is defined as:

“A principle or set of principles laid down by an authority as incontrovertibly true.”

In 1600 CE, Giordano Bruno was burned at the stake for insisting that the stars were distant suns surrounded by their own planets, raising the possibility that these planets might foster life of their own, and that the universe is infinite and could have no “center.” These ideas directly controverted the dogma laid down as incontrovertibly true by both the Roman Catholic and Protestant Christian churches of the time.

Galileo Galilei, typically thought of as the poster child for resistance to dogma, was only placed under house arrest (for the rest of his life) for advocating the less radical Copernican vision of the solar system.

Nicolaus Copernicus, himself, managed to fly under the Catholic Church’s radar for more than three decades by the simple tactic of not publishing his heliocentric model. Starting around 1510, he privately communicated it to his friends, who then passed it to some of their friends, etc. His signature work, Dē revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), in which he laid it out for all to see, wasn’t published until 1543, the year of his death, when he’d already escaped beyond the reach of earthly authorities.

If this makes it seem that astrophysicists have been on the front lines of the war against dogma since there was dogma to fight against, that’s almost certainly true. Astrophysicists study stuff relating to things beyond the Earth, and that traditionally has been a realm claimed by religious authorities.

That claim largely started with Christianity, specifically the Roman Catholic Church. Ancient religions, which didn’t have delusions that they could dominate all of human thought, didn’t much care what cockamamie ideas astrophysicists (then just called “philosophers”) came up with. Thus, Aristarchus of Samos suffered no ill consequences (well, maybe a little, but nothing life – or even career – threatening) from proposing the same ideas that Galileo was arrested for championing some eighteen centuries later.

Fast forward to today and we have a dogma espoused by political progressives called “climate change.” It used to be called “global warming,” but that term was laughed down decades ago, though the dogma’s still the same.

The United-Nations-funded Intergovernmental Panel on Climate Change (IPCC) has become “the Authority” laying down the principles that Earth’s climate is changing and that change constitutes a rapid warming caused by human activity. The dogma also posits that this change will continue uninterrupted unless national governments promulgate drastic laws to curtail human activity.

Sure sounds like dogma to me!

Once again, astrophysicists are on the front lines of the fight against dogma. The problem is that the IPCC dogma treats the Sun (which is what powers Earth’s climate in the first place) as, to all intents and purposes, a fixed star. That is, it assumes climate change arises solely from changes in Earthly conditions, then assumes we control those conditions.

Astrophysicists know that just ain’t so.

First, stars generally aren’t fixed. Most stars are variable stars. In fact, all stars are variable on some time scale. They all evolve over time scales of millions or billions of years, but that’s not the kind of variability we’re talking about here.

The Sun is in the evolutionary phase called “main sequence,” where stars evolve relatively slowly. That’s the source of much “invariability” confusion. Main sequence stars, however, go through periods where they vary in brightness more or less violently on much shorter time scales. In fact, most main sequence stars exhibit this kind of behavior to a greater or lesser extent at any given time – like now.

So, a modern (as in post-nineteenth-century) astrophysicist would never make the bald assumption that the Sun’s output was constant. Statistically, the odds are against it. Most stars are variables; the Sun is like most stars; so the Sun is probably a variable. In fact, it’s well known to vary with a fairly stable period of roughly 22 years (the 11-year “sunspot cycle” is actually only a half cycle).
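To make that “half cycle” bookkeeping concrete, here’s a minimal toy sketch (purely illustrative, not a real solar model): treat the Sun’s large-scale magnetic field as a sinusoid with the roughly 22-year Hale period; sunspot activity then tracks the magnitude of that oscillation, which peaks about every 11 years.

```python
import math

HALE_PERIOD_YEARS = 22.0  # full magnetic (Hale) cycle, assumed perfectly sinusoidal here

def magnetic_cycle(t_years: float) -> float:
    """Toy model of the Sun's large-scale magnetic oscillation (arbitrary units)."""
    return math.sin(2.0 * math.pi * t_years / HALE_PERIOD_YEARS)

def sunspot_activity(t_years: float) -> float:
    """Sunspot activity roughly tracks the *magnitude* of the magnetic oscillation,
    so it peaks twice per 22-year Hale cycle, i.e. about every 11 years."""
    return abs(magnetic_cycle(t_years))

for half_year in range(0, 88):             # two full Hale cycles in 0.5-year steps
    t = half_year * 0.5
    if sunspot_activity(t) > 0.999:        # crude peak picker for the toy model
        print(f"sunspot maximum near year {t:4.1f}")
# -> maxima near years 5.5, 16.5, 27.5, and 38.5: four sunspot peaks in two
#    22-year magnetic cycles, or one roughly every 11 years
```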

A couple of centuries ago, astronomers assumed (with no evidence) that the Sun’s output was constant, so they started trying to measure this assumed “solar constant.” Charles Greeley Abbot, who served as Secretary of the Smithsonian Institution from 1928 to 1944, oversaw the first long-term study of solar output.

His observations were necessarily ground-based, and the variations he observed (amounting to 3-5 percent) have been dismissed as “due to changing weather conditions and incomplete analysis of his data.” That, despite the monumental efforts he went through to control for such effects.

In the 1970s I did an independent analysis of his data and realized that part of his problem stemmed from a misunderstanding of the relationship between sunspots and solar irradiance. At the time, it was assumed that sunspots were akin to atmospheric clouds. That is, scientists assumed they affected overall solar output by blocking light, thus reducing the total power reaching Earth.

Thus, when Abbot’s observations showed the opposite correlation, they were assumed to be erroneous. His purported correlations with terrestrial weather observations were similarly confused, and thus dismissed.

Since then, astrophysicists have realized that sunspots are more like a symptom of increased internal solar activity. That is, increases in sunspot activity positively correlate with increases in the internal dynamism that generates the Sun’s power output. Seen in this light, Abbot’s observations and analysis make a whole lot more sense.

We have ample evidence, from historical observations of climate changes correlating with observed variations in sunspot activity, that there is a strong connection between climate and solar variability. Most notable is the fact that the Spörer and Maunder minima (extended periods when sunspot activity all but disappeared) correlated with historically cold periods in Earth’s history. A similar stretch of low solar activity (as measured by sunspot numbers) from about 1790 to 1830, called the Dalton Minimum, similarly depressed global temperatures and gave an anomalously low baseline for the run-up to the Modern Maximum.

For astrophysicists, the phenomenon of solar variability is not in doubt. The questions that remain are how large the variations are, how closely they correlate with climate change, and whether they are predictable.

Studies of solar variability, however, run afoul of the IPCC dogma. For example, in May of 2017 an international team of solar dynamicists led by Valentina V. Zharkova at Northumbria University in the U.K. published a paper entitled “On a role of quadruple component of magnetic field in defining solar activity in grand cycles” in the Journal of Atmospheric and Solar-Terrestrial Physics. Their research indicates that the Sun, while its activity has been on the upswing for an extended period, should be heading into a quiescent period starting with the next maximum of the 11-year sunspot cycle in around five years.

That would indicate that the IPCC prediction of exponentially increasing global temperatures due to human-caused increasing carbon-dioxide levels may be dead wrong. I say “may be dead wrong” because this is science, not dogma. In science, nothing is incontrovertible.

I was clued in to this research by my friend Dan Romanchik, who writes a blog for amateur radio enthusiasts. Amateur radio enthusiasts care about solar activity because sunspots are, in fact, caused by magnetic fields at the Sun’s surface. The same magnetic activity that produces sunspots also boosts the Sun’s X-ray and ultraviolet output, which ionizes gas in Earth’s upper atmosphere to form the Kennelly–Heaviside layer (roughly 90–150 km, or 56–93 mi, above the ground).

Radio amateurs bounce signals off this layer to reach distant stations beyond line of sight. When solar activity is weak this layer drops to lower altitudes, reducing the effectiveness of this technique (often called “DXing”).

In his post of 16 December 2018, Dan complained: “If you operate HF [the high-frequency radio band], it’s no secret that band conditions have not been great. The reason, of course, is that we’re at the bottom of the sunspot cycle. If we’re at the bottom of the sunspot cycle, then there’s no way to go but up, right? Maybe not.

“Recent data from the NOAA’s Space Weather Prediction Center seems to suggest that solar activity isn’t going to get better any time soon.”

After discussing the NOAA prediction, he went on to further complain: “And, if that wasn’t depressing enough, I recently came across an article reporting on the research of Prof. Valentina Zharkova, who is predicting a grand minimum of 30 years!”

He included a link to a presentation Dr. Zharkova made at the Global Warming Policy Foundation last October in which she outlined her research and pointedly warned that the IPCC dogma was totally wrong.

I followed the link, viewed her presentation, and concluded two things:

  1. The research methods she used are some that I’m quite familiar with, having used them on numerous occasions; and

  2. She used those techniques correctly, reaching convincing conclusions.

Her results seem well aligned with a meta-analysis published by the Cato Institute in 2015, which I mentioned in my posting of 10 October 2018 to this blog. The Cato meta-analysis of observational data indicated a much-reduced rate of global warming compared to that predicted by IPCC models.

The Zharkova-model data covers a much wider period (millennia-long time scale rather than decades-long time scale) than the Cato data. It’s long enough to show the Medieval Warm Period as well as the Little Ice Age (Maunder minimum) and the recent warming trend that so fascinates climate-change activists. Instead of a continuation of the modern warm period, however, Zharkova’s model shows an abrupt end starting in about five years with the next maximum of the 11-year sunspot cycle.

Don’t expect a stampede of media coverage disputing the IPCC dogma, however. A host of politicians (especially in the U.S. Democratic Party) have hung their hats on that dogma, as have an array of governments that have sold policy decisions based on it. The political left has made an industry of vilifying anyone who doesn’t toe the “climate change” line, calling them “climate deniers” with suspect intellectual capabilities and moral characters.

Again, this sounds a lot like dogma. It’s the same tactic that the Inquisition used against Bruno and Galileo before escalating to more brutal methods.

Supporters of Zharkova’s research labor under a number of disadvantages. Of course, there’s the obvious disadvantage that Zharkova’s thick Ukrainian accent limits her ability to explain her work to those who don’t want to listen. She would not come off well on the evening news.

A more important disadvantage is the abstruse nature of the applied mathematics techniques used in the research. How many political reporters and, especially, commentators are familiar enough with the mathematical technique of principal component analysis to understand what Zharkova’s talking about? This stuff makes macroeconomic modeling look like kiddie play!

But, the situation’s even worse because to really understand the research, you also need an appreciation of stellar dynamics, which is based on magnetohydrodynamics. How many CNN commentators even know how to spell that?

Of course, these are all tools of the trade for astrophysicists. They’re as familiar to them as a hammer or a saw is to a carpenter.

For those in the media, on the other hand, it’s a lot easier to take the “most scientists agree” mantra at face value than to embark on the nearly hopeless task of re-educating themselves to understand Zharkova’s research. That goes double for politicians.

It’s entirely possible that “most” scientists might agree with the IPCC dogma, but those in a position to understand what’s driving Earth’s climate do not agree.

The Future of Personal Transportation

Israeli startup Griiip’s next generation single-seat race car demonstrating the world’s first motorsport Vehicle-to-Vehicle (V2V) communication application on a racetrack.

9 April 2018 – Last week turned out to be big for news about personal transportation, with a number of trends making significant(?) progress.

Let’s start with a report (available for download at https://gen-pop.com/wtf) by independent French market-research company Ipsos of responses from more than 3,000 people in the U.S. and Canada, and thousands more around the globe, to a survey about the human side of transportation. That is, how do actual people — the consumers who ultimately will vote with their wallets for or against advances in automotive technology — feel about the products innovators have been proposing to roll out in the near future? Today, I’m going to concentrate on responses to questions about self-driving technology and automated highways. I’ll look at some of the other results in future postings.

Perhaps the biggest takeaway from the survey is that approximately 25% of American respondents claim they “would never use” an autonomous vehicle. That’s a biggie for advocates of “ultra-safe” automated highways.

As my wife constantly reminds me whenever we’re out in Southwest Florida traffic, the greatest highway danger is from the few unpredictable drivers who do idiotic things. When surrounded by hundreds of vehicles ideally moving in lockstep, but actually not, what percentage of drivers acting unpredictably does it take to totally screw up traffic flow for everybody? One percent? Two percent?

According to this survey, we can expect up to 25% to be out of step with everyone else because they’re making their own decisions instead of letting technology do their thinking for them.

Automated highways were described in detail back in the middle of the twentieth century by science-fiction writer Robert A. Heinlein. What he described was a scene where thousands of vehicles packed vast Interstates, all communicating wirelessly with each other and with a smart fixed infrastructure that planned traffic patterns far ahead and communicated its decisions to individual vehicles, so they all acted together to keep traffic flowing in the smoothest possible way at the maximum possible speed with no accidents.

Heinlein also predicted that the heroes of his stories would all be rabid free-spirited thinkers, who wouldn’t allow their cars to run in self-driving mode if their lives depended on it! Instead, they relied on human intelligence, forethought, and fast reflexes to keep themselves out of trouble.

And, he predicted they would barely manage to escape with their lives!

I happen to agree with him: trying to combine a huge percentage of highly automated vehicles with a small percentage of vehicles guided by humans who simply don’t have the foreknowledge, reflexes, or concentration to keep up with the automated vehicles around them is a train wreck waiting to happen.

Back in the late twentieth century I had to regularly commute on the 70-mph parking lots that went by the name “Interstates” around Boston, Massachusetts. Vehicles were generally crammed together half a car length apart. The only way to have enough warning to apply brakes was to look through the back window and windshield of the car ahead to see what the car ahead of them was doing.

The result was regular 15-car pileups every morning during commute times.

Heinlein’s future vision (and that of automated-highway advocates) had that kind of traffic density and speed, but was saved from inevitable disaster by the fascistic control of an omniscient automated-highway technology. One recalcitrant human driver tossed into the mix would be guaranteed to bring the whole thing down.

So, the moral of this story is: don’t allow manual-driving mode on automated highways. The 25% of Americans who’d never surrender their manual-driving privilege can just go drive somewhere else.

Yeah, I can see THAT happening!

A Modest Proposal

With apologies to Jonathan Swift, let’s change tack and focus on a more modest technology: driver assistance.

Way back in the 1980s, George Lucas and friends put out the third film in the interminable Star Wars series, Return of the Jedi. The film included a sequence that could only be possible in real life with help from some sophisticated collision-avoidance technology: a bunch of characters zooming around a trackless forest on the moon Endor, riding what can only be described as flying motorcycles.

As anybody who’s tried trailblazing through a forest on an off-road motorcycle can tell you, going fast through virgin forest means constant collisions with fixed objects. As Bugs Bunny once said: “Those cartoon trees are hard!”

Frankly, Luke Skywalker and Princess Leia might have had superhuman reflexes, but their doing what they did without collision avoidance technology strains credulity to the breaking point. Much easier to believe their little speeders gave them a lot of help to avoid running into hard, cartoon trees.

In the real world, Israeli companies Autotalks and Griiip have demonstrated the world’s first motorsport Vehicle-to-Vehicle (V2V) application to help drivers avoid rear-ending each other. The system works by combining GPS, in-vehicle sensing, and wireless communication to create a peer-to-peer network that allows each car to send out alerts to all the other cars around it.

So, imagine the situation where multiple cars are on a racetrack at the same time. That’s decidedly not unusual in a motorsport application.

Now, suppose something happens to make car A suddenly and unpredictably slow or stop. Again, that’s hardly an unusual occurrence. Car B, which is following at some distance behind car A, gets an alert from car A of a possible impending-collision situation. Car B forewarns its driver that a dangerous situation has arisen, so he or she can take evasive action. So far, a very good thing in a car-race situation.
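Neither Autotalks nor Griiip publishes its message format in the material I’ve seen, so here’s nothing more than a minimal sketch of how such a peer-to-peer alert could work: each car broadcasts its GPS position, speed, and braking state, and a following car compares that against its own state to decide whether to warn its driver. Every class name, field, and threshold below is a hypothetical stand-in, not the companies’ actual design.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class V2VAlert:
    """Hypothetical peer-to-peer alert; these fields are illustrative stand-ins,
    not Autotalks' or Griiip's actual message format."""
    car_id: str
    lat: float          # GPS latitude, degrees
    lon: float          # GPS longitude, degrees
    speed_mps: float    # current speed, meters/second
    decel_mps2: float   # current deceleration, meters/second^2
    timestamp: float

def encode(alert: V2VAlert) -> bytes:
    """Serialize an alert for broadcast over whatever radio link the cars share."""
    return json.dumps(asdict(alert)).encode("utf-8")

def should_warn_driver(alert: V2VAlert, own_speed_mps: float,
                       gap_m: float, reaction_s: float = 2.5) -> bool:
    """Warn if, during a typical human reaction time, we'd close most of the gap
    to a car that is braking hard ahead of us. Pure toy logic with assumed numbers."""
    closing_speed = max(own_speed_mps - alert.speed_mps, 0.0)
    hard_braking = alert.decel_mps2 > 4.0            # ~0.4 g, an assumed threshold
    return hard_braking and closing_speed * reaction_s > 0.5 * gap_m

# Car A brakes hard and broadcasts; car B, 60 m behind and doing 50 m/s, evaluates it.
alert = V2VAlert("car_A", 26.64, -81.87, 15.0, 6.0, time.time())
print(len(encode(alert)), "bytes on the air")
print("warn driver of car B:", should_warn_driver(alert, own_speed_mps=50.0, gap_m=60.0))
```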

But, what’s that got to do with just us folks driving down the highway minding our own business?

During the summer down here in Florida, every afternoon we get thunderstorms dumping torrential rain all over the place. Specifically, we’ll be driving down the highway at some ridiculous speed, then come to a wall of water obscuring everything. Visibility drops from unlimited to a few tens of feet with little or no warning.

The natural reaction is to come to a screeching halt. But, what happens to the cars barreling up from behind? They can’t see you in time to stop.

Whammo!

So, coming to a screeching halt is not the thing to do. Far better to keep going forward as fast as visibility will allow.

But, what if somebody up ahead panicked and came to a screeching halt? Or, maybe their version of “as fast as visibility will allow” is a lot slower than yours? How would you know?

The answer is to have all the vehicles equipped with the Israeli V2V equipment (or an equivalent) to forewarn following drivers that something nasty has come out of the proverbial woodshed. It could also feed into your vehicle’s collision-avoidance system to bridge the 2-3 seconds it takes for a human driver to say “What the heck?” and figure out what to do.

The Israelis suggest that the required chip set (which, of course, they’ll cheerfully sell you) is so dirt cheap that anybody can afford to opt for it in their new car, or retrofit it into their beat-up old junker. They further suggest that it would be worthwhile for insurance companies to offer a discount on premiums to help cover the cost.

Sounds like a good deal to me! I could get behind that plan.

Invasion of the Robofish!

30 March 2018 – Mobile autonomous systems come in all sizes, shapes, and forms, and have “invaded” every earthly habitat. That’s not news. What is news is how far the “bleeding edge” of that technology has advanced. Specifically, it’s news when a number of trends combine to make something unique.

Today I’m getting the chance to report on something that I predicted in a sci-fi novel I wrote back in 2011, and that then goes at least one step further.

Last week the folks at Design World published a report on research at the MIT Computer Science & Artificial Intelligence Lab that combines three robotics trends into one system that quietly makes something I find fascinating: a submersible mobile robot. The three trends are soft robotics, submersible unmanned systems, and biomimetic robot design.

The beastie in question is a robot fish. It’s obvious why this little guy touches on those three trends. How could a robotic fish not use soft-robotic, submersible, and biomimetic technologies? What I want to point out is how it uses those technologies and why that combination is necessary.

Soft Robotics

Folks have made ROVs (basically remotely operated submarines) for … a very long time. What they’ve pretty much all produced are clanky, propeller-driven derivatives of Jules Verne’s fictional Nautilus from his 1870 novel Twenty Thousand Leagues Under the Sea. That hunk of junk is a favorite of steampunk aficionados.

Not much has changed in basic submarine design since then. Modern ROVs are more maneuverable than their WWII predecessors because they add multiple propellers to push them in different directions, but the rest of it’s pretty much the same.

Soft robotics changes all that.

About 45 years ago, a half-drunk physics professor at a kegger party started bending my ear about how Mommy Nature never seemed to have discovered the wheel. The wheel’s a nearly unique human invention that Mommy Nature has pretty much done without.

Mommy Nature doesn’t use the wheel because she uses largely soft technology. Yes, she uses hard technology to make structural components like endo- and exo-skeletons to give her live beasties both protection and shape, but she stuck with soft-bodied life forms for the first four billion years of Earth’s 4.5-billion-year history. Adding hard-body technology in the form of notochords didn’t happen until the Cambrian explosion of 541-516 million years ago, when most major animal phyla appeared.

By the way, that professor at the party was wrong. Mommy Nature invented wheels way back in the Precambrian era in the form of rotary motors to power the flagella that propel unicellular free-swimmers. She just hasn’t used wheels for much else since.

Of course, everybody more advanced than a shark has a soft body reinforced by a hard, bony skeleton.

Today’s soft robotics uses elastomeric materials to solve a number of problems for mobile automated systems.

Perhaps most importantly it’s a lot easier for soft robots to separate their insides from their outsides. That may not seem like a big deal, but think of how much trouble engineers go through to keep dust, dirt, and chemicals (such as seawater) out of the delicate gears and bearings of wheeled vehicles. Having a flexible elastomeric skin encasing the whole robot eliminates all that.

That’s not to mention skin’s job of keeping pesky little creepy-crawlies out! I remember an early radio astronomer complaining that pack rats had gotten into his remote desert headquarters trailer and eaten a big chunk of his computer’s magnetic-core memory. That was back in the days when computer random-access memories were made from tiny ferrite rings strung on copper wires.

Another major advantage of soft bodies for mobile robots is resistance to collision damage. Think about how often you’re bumped into when crossing the room at a cocktail party. Now, think about what your hard-bodied automobile would look like after bumping into that many other cars in a parking lot. Not a pretty sight!

The flexibility of soft bodies also makes possible a lot of propulsion methods besides wheel-like propellers, caterpillar tracks, and rubber tires. That’s good because piercing soft-body skins with drive shafts to power propellers and wheels pretty much trashes the advantages of having those skins in the first place.

That’s why prosthetic devices all have elaborate cuffs to hold them to the outsides of the wearer’s limbs. Piercing the skin to screw something like Captain Hook’s hook directly into the existing bone never works out well!

So, in summary, the MIT group’s choice to start with soft-robotic technology is key to their success.

Submersible Unmanned Systems

Underwater drones have one major problem not faced by robotic cars and aircraft: radio waves don’t go through water. That means if anything happens that your none-too-intelligent automated system can’t handle, it needs guidance from a human operator. Underwater, that has largely meant tethering the robot to a human.

This issue is a wall that self-driving-car developers run into constantly (and sometimes literally). When the human behind the wheel mandated by state regulators for autonomous test vehicles falls asleep or is distracted by texting his girlfriend, BLAMMO!

The world is a chaotic place and unpredicted things pop out of nowhere all the time. Human brains are programmed to deal with this stuff, but computer technology is not, and will not be for the foreseeable future.

Drones and land vehicles, which are immersed in a sea of radio-transparent air, can rely on radio links to remote human operators to help them get out of trouble. Underwater vehicles, which are immersed in a sea of radio-opaque water, can’t.

In the past, that’s meant copper wires enclosed in physical tethers that tie the robots to the operators. Tethers get tangled, cut and hung up on everything from coral outcrops to passing whales.

There are a couple of ways out of the tether bind: ultrasonics and optical links using blue-green light (the part of the spectrum water absorbs least). Both go through water very nicely, thank you. The MIT group seems to be using my preferred comm link: ultrasonics.

Sound goes through water like you-know-what through a goose. Water also has little or no sonic “color.” That is, all frequencies of sonic waves go more-or-less equally well through water.

The biggest problem for ultrasonics is interference from all the other noise makers out there in the natural underwater world. That calls for the spread-spectrum transmission techniques invented by Hedy Lamarr. (Hah! Gotcha! You didn’t know Hedy Lamarr, aka Hedwig Eva Maria Kiesler, was a world famous technical genius in addition to being a really cute, sexy movie actress.) Hedy’s spread-spectrum technique lets ultrasonic signals cut right through the clutter.
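For the curious, here’s a bare-bones sketch (purely illustrative) of the frequency-hopping flavor of spread spectrum that Lamarr and George Antheil patented: transmitter and receiver derive the same pseudo-random hop sequence from a shared seed, so they jump among channels in lockstep while a narrow-band noise source clobbers only the occasional hop. The band edges and channel count below are made-up numbers, not anything the MIT group has published.

```python
import random

# Hypothetical parameters: an ultrasonic band split into 16 channels. A real
# system would pick band edges to suit its transducers and the water around it.
BAND_LOW_HZ = 30_000
CHANNEL_SPACING_HZ = 1_000
NUM_CHANNELS = 16

def hop_sequence(shared_seed: int, num_hops: int) -> list[int]:
    """Pseudo-random channel sequence; sender and receiver derive the same
    sequence from the same seed, so they stay in lockstep."""
    rng = random.Random(shared_seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(num_hops)]

def channel_to_freq(channel: int) -> int:
    return BAND_LOW_HZ + channel * CHANNEL_SPACING_HZ

tx_hops = hop_sequence(shared_seed=42, num_hops=10)
rx_hops = hop_sequence(shared_seed=42, num_hops=10)
assert tx_hops == rx_hops    # the receiver follows the transmitter hop for hop

noisy_channel = 7            # one narrow-band noise source parked on a single channel
hits = sum(1 for ch in tx_hops if ch == noisy_channel)
print("hop frequencies (Hz):", [channel_to_freq(ch) for ch in tx_hops])
print(f"hops clobbered by the noise source: {hits} of {len(tx_hops)}")
```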

So, advanced submersible mobile robot technology is the second thread leading to a successful robotic fish.

Biomimetics

Biomimetics is a 25-cent word that simply means copying designs directly from nature. It’s a time-honored short cut engineers have employed from time immemorial. Sometimes it works spectacularly, such as Thomas Wedgwood’s photographic camera (developed as an analogue of the terrestrial vertebrate eye), and sometimes not, such as Leonardo da Vinci’s attempts to make flying machines based on birds’ wings.

Obviously, Mommy Nature’s favorite fish-propulsion mechanism is highly successful, having been around for some 550 million years and still going strong. It, of course, requires a soft body anchored to a flexible backbone. It takes no imagination at all to copy it for robot fish.

The copying is the hard part because it requires developing fabrication techniques to build soft-bodied robots with flexible backbones in the first place. I’ve tried it, and it’s no mean task.

The tough part is making a muscle analogue that will drive the flexible body to move back and forth rhythmically and propel the critter through the water. The answer is pneumatics.

In the early 2000s, a patent-lawyer friend of mine suggested lining both sides of a flexible membrane with tiny balloons that could be alternately inflated or deflated. When the balloons on one side were inflated, the membrane would curve away from that side. When the balloons on the other side were inflated the membrane would curve back. I played around with this idea, but never went very far with it.

The MIT group seems to have made it work using both gas (carbon dioxide) and liquid (water) for the working fluid. The difference between this kind of motor and natural muscle is that natural muscle works by pulling when energized, and the balloon system works by pushing. Otherwise, both work by balancing mechanical forces along two axes with something more-or-less flexible trapped between them.

In Nature’s fish, that something is the critter’s skeleton (backbone made up of vertebrae and stiffened vertically by long, thin spines), whereas the MIT group’s robofish uses elastomers with different stiffnesses.
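Here’s a toy model of that alternating-inflation idea (a sketch of the general principle, not MIT’s actual controller or my friend’s design): inflate the balloons on one flank while venting the other, and the pressure difference bends the membrane; swap sides on a fixed period and you get the rhythmic side-to-side sweep that propels the fish. All the numbers are made-up illustration values.

```python
import math

# Made-up illustration values, not MIT's design parameters.
MAX_PRESSURE = 1.0        # normalized balloon pressure
BEND_GAIN_DEG = 30.0      # tail deflection at full differential pressure
STROKE_PERIOD_S = 1.0     # one full left-right tail sweep

def balloon_pressures(t: float) -> tuple[float, float]:
    """Alternately inflate the left and right banks of balloons.
    A sinusoidal valve schedule stands in for the real CO2/water plumbing."""
    phase = math.sin(2.0 * math.pi * t / STROKE_PERIOD_S)
    left = MAX_PRESSURE * max(phase, 0.0)    # inflate left on the positive half-cycle
    right = MAX_PRESSURE * max(-phase, 0.0)  # inflate right on the negative half-cycle
    return left, right

def tail_angle_deg(t: float) -> float:
    """The membrane bends away from whichever side is inflated."""
    left, right = balloon_pressures(t)
    return BEND_GAIN_DEG * (right - left)    # positive means the tail is swept left

for step in range(10):
    t = step * 0.1
    print(f"t={t:0.1f}s  tail angle {tail_angle_deg(t):+6.1f} deg")
# The tail sweeps smoothly from one side to the other once per second: the
# rhythmic flexing that pushes water backward and the robofish forward.
```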

Complete Package

Putting these technical trends together creates a complete package that makes it possible to build a free-swimming submersible mobile robot that moves in a natural manner at a reasonable speed without a tether. That opens up a whole range of applications, from deep-water exploration to marine biology.