Social Media and The Front Page

Walter Burns
Promotional photograph of Osgood Perkins as Walter Burns in the 1928 Broadway production of The Front Page

12 September 2018 – The Front Page was an hilarious one-set stage play supposedly taking place over a single night in the dingy press room of Chicago’s Criminal Courts Building overlooking the gallows behind the Cook County Jail. I’m not going to synopsize the plot because the Wikipedia entry cited above does such an excellent job it’s better for you to follow the link and read it yourself.

First performed in 1928, the play has been revived several times and suffered countless adaptations to other media. It’s notable for the fact that the main character, Hildy Johnson, originally written as a male part, is even more interesting as a female. That says something important, but I don’t know what.

By the way, I insist that the very best adaptation is Howard Hawks’ 1940 tour de force film entitled His Girl Friday starring Rosalind Russell as Hildy Johnson, and Cary Grant as the other main character Walter Burns. Burns is Johnson’s boss and ex-husband who uses various subterfuges to prevent Hildy from quitting her job and marrying an insurance salesman.

That’s not what I want to talk about today, though. What’s important for this blog posting is part of the play’s backstory. It’s important because it can help provide context for the entire social media industry, which is becoming so important for American society right now.

In that backstory, a critical supporting character is one Earl Williams, who’s a mousey little man convicted of murdering a policeman and sentenced to be executed the following morning right outside the press-room window. During the course of the play, it comes to light that Williams, confused by listening to a soapbox demagogue speaking in a public park, accidentally shot the policeman and was subsequently railroaded in court by a corrupt sheriff who wanted to use his execution to help get out the black(!?) vote for his re-election campaign.

What publicly executing a confused communist sympathizer has to do with motivating black voters I still fail to understand, but it makes as much sense as anything else the sheriff says or does.

This plot has so many twists and turns paralleling issues still resonating today that it’s ridiculous. That’s a large part of the play’s fun!

Anyway, what I want you to focus on right now is the subtle point that Williams was confused by listening to a soapbox demagogue.

Soapbox demagogues were a fixture in pre-Internet political discourse. The U.S. Constitution’s First Amendment explicitly gives private citizens the right to peaceably assemble in public places. For example, during the late 1960s a typical summer Sunday afternoon anywhere in any public park in North America or Europe would see a gathering of anywhere from 10 to 10,000 hippies for an impromptu “Love In,” or “Be In,” or “Happening.” With no structure or set agenda folks would gather to do whatever seemed like a good idea at the time. My surrealist novelette Lilith describes a gathering of angels, said to be “the hippies of the supernatural world,” that was patterned after a typical Hippie Love In.

Similarly, a soapbox demagogue had the right to commandeer a picnic table, bandstand, or discarded soapbox to place himself (at the time they were overwhelmingly male) above the crowd of passersby that he hoped would listen to his discourse on whatever he wanted to talk about.

In the case of Earl Williams’ demagogue, the speech was about “production for use.” The feeble-minded Williams applied that idea to the policeman’s service weapon, with predictable results.

Fast forward to the twenty-first century.

I haven’t been hanging around local parks on Sunday afternoons for a long time, so I don’t know if soapbox demagogues are still out there. I doubt that they are because it’s easier and cheaper to log onto a social-media platform, such as Facebook, to shoot your mouth off before a much larger international audience.

I have browsed social media, however, and see the same sort of drivel that used to spew out of the mouths of soapbox demagogues back in the day.

The point I’m trying to make is that there’s really nothing novel about social media. Being a platform for anyone to say anything to anyone is the same as last-century soapboxes being available for anyone who thinks they have something to say. It’s a prominent right guaranteed in the Bill of Rights. In fact, it’s important enough to be guaranteed in the very first of the Bill’s amendments to the U.S. Constitution.

What is not included, however, is a proscription against anyone ignoring the HECK out of soapbox demagogues! They have the right to talk, but we have the right to not listen.

Back in the day, almost everybody passed by soapbox demagogues without a second glance. We all knew they climbed their soapboxes because it was the only venue they had to voice their opinions.

Preachers had pulpits in front of congregations, so you knew they had something to say that people wanted to hear. News reporters had newspapers people bought because they contained news stories that people wanted to read. Scholars had academic journals that other scholars subscribed to because they printed results of important research. Fiction writers had published novels folks read because they found them entertaining.

The list goes on.

Soapbox demagogues, however, had to stand on an impromptu platform because they didn’t have anything to say worth hearing. The only ones who stopped to listen were those, like the unemployed Earl Williams, who had nothing better to do.

The idea of pretending that social media is any more of a legitimate venue for ideas is just goofy.

Social media are not legitimate media for the exchange of ideas simply because anybody is able to say anything on them, just like a soapbox in a park. Like a soapbox in a park, most of what is said on social media isn’t worth hearing. It’s there because the barrier to entry is essentially nil. That’s why so many purveyors of extremist and divisive rhetoric gravitate to social media platforms. Legitimate media won’t carry them.

Legitimate media organizations have barriers to the entry of lousy ideas. For example, I subscribe to The Economist because of their former Editor in Chief, John Micklethwait, who impressed me as an excellent arbiter of ideas (despite having a weird last name). I was very pleased when he transferred over to Bloomberg News, which I consider the only televised outlet for globally significant news. The Wall Street Journal’s business focus forces Editor-in-Chief Matt Murray into a “just the facts, ma’am” stance because every newsworthy event creates both winners and losers in the business community, so content bias is a non-starter.

The common thread among these legitimate-media sources is the existence of an organizational structure focused on maintaining content quality. There are knowledgeable gatekeepers (called “editors”) charged with keeping out bad ideas.

So, when Donald Trump, for example, shows a preference for social media (in his case, Twitter) and an abhorrence of traditional news outlets, he’s telling us his ideas aren’t worth listening to. Legitimate media outlets disparage his views, so he’s forced to use the twenty-first century equivalent of a public-park soapbox: social media.

On social media, he can say anything to anybody because there’s nobody to tell him, “That’s a stupid thing to say. Don’t say it!”

Who’s NOT a Creative?

 

Compensating sales
Close-up Of A Business Woman Giving Cheque To Her Colleague At Workplace In Office. Andrey Popov/Shutterstock

25 July 2018 – Last week I made a big deal about the things that motivate creative people, such as magazine editors, and how the most effective rewards were non-monetary. I also said that monetary rewards, such as commissions based on sales results, were exactly the right rewards to use for salespeople. That would imply that salespeople were somehow different from others, and maybe even not creative.

That is not the impression I want to leave you with. I’m devoting this blog posting to setting that record straight.

My remarks last week were based on Maslow‘s and Herzberg‘s work on motivation of employees. I suggested that these theories were valid in other spheres of human endeavor. Let’s be clear about this: yes, Maslow’s and Herzberg’s theories are valid and useful in general, whenever you want to think about motivating normal, healthy human beings. It’s incidental that those researchers were focused on employer/employee relations as an impetus to their work. If they’d been focused on anything else, their conclusions would probably have been pretty much the same.

That said, there is a whole class of people for whom monetary compensation is the holy grail of motivators. They are generally very high-functioning individuals who are in no way pathological. On the surface, however, their preferred rewards appear to be monetary.

Traditionally, observers who don’t share this reward system have indicted these individuals as “greedy.”

I, however, dispute that conclusion. Let me explain why.

When pointing out the rewards that can be called “motivators for editors,” I wrote:

“We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like ‘Best Technical Article.’ Above all, we talked up the fact that ours was ‘the premier publication in the market.'”

Notice that these rewards, though non-monetary, were more or less measurable. They could be (and, for the individuals they motivated, indeed were) seen as scorecards. The individuals involved had a very clear idea of the value attached to such rewards. A Nobel Prize in Physics is of greater value than, say, a similar award given by Harvard University.

For example, in 1987 I was awarded the “Cahners Editorial Medal of Excellence, Best How-To Article.” That wasn’t half bad. The competition was articles written for a few dozen magazines that were part of the Cahners Publishing Company, which at the time was a big deal in the business-to-business magazine field.

What I considered to be of higher value, however, was the “First Place Award For Editorial Excellence for a Technical Article in a Magazine with Over 80,000 Circulation” I got in 1997 from the American Society of Business Press Editors, where I was competing with a much wider pool of journalists.

Economists have a way of attempting to quantify such non-monetary awards called utility. They arrive at values by presenting various options and asking the question: “Which would you rather have?”
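
If you want to see the bare bones of that exercise, here’s a toy sketch in Python (my own illustration; the options and the single chooser’s ranking in it are invented, not from any real survey). It ranks three rewards purely by counting how many “which would you rather have?” match-ups each one wins:

```python
# Toy version of the economists' exercise: rank rewards purely from answers
# to "Which would you rather have?" Options and the chooser's preferences
# are invented for illustration only.
options = ["Nobel Prize in Physics", "award from Harvard", "bigger annual bonus"]

def prefers(a: str, b: str) -> bool:
    """Stand-in for asking one particular person: would you rather have a than b?"""
    ranking = {"Nobel Prize in Physics": 3, "bigger annual bonus": 2, "award from Harvard": 1}
    return ranking[a] > ranking[b]

# An option's score is how many head-to-head choices it wins; sorting by score
# gives this chooser's utility ordering.
scores = {opt: sum(prefers(opt, other) for other in options if other != opt)
          for opt in options}
for opt, wins in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{wins} wins: {opt}")
```

Swap in a different chooser’s ranking and the ordering changes, which is exactly the point of what follows.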

Of course, measures of utility generally vary widely depending on who’s doing the choosing.

For example, an article in the 19 July The Wall Street Journal described a phenomenon the author seemed to think was surprising: Saudi-Arabian women drivers (new drivers all) showed a preference for muscle cars over more pedestrian models. The author, Margherita Stancati, related an incident where a Porsche salesperson in Riyadh offered a recently minted woman driver an “easy to drive crossover designed to primarily attract women.” The customer demurred. She wanted something “with an engine that roars.”

So, the utility of anything is not an absolute in any sense. It all depends on answering the question: “Utility to whom?”

Everyone is motivated by rewards in the upper half of the Needs Pyramid. If you’re a salesperson, growth in your annual (or other period) sales revenue is in the green Self Esteem block. It’s well and truly in the “motivator” category, and has nothing to do with the Safety and Security “hygiene factor” where others might put it. Successful salespeople have those hygiene factors well and truly covered. They’re looking for a reward that tells them they’ve hit a home run. That reward is likely a bigger annual bonus than the next guy’s.

The most obvious money-driven motivators accrue to the folks in the CEO ranks. Jeff Bezos, Elon Musk, and Warren Buffett would have a hard time measuring their success (i.e., hitting the Pavlovian lever to get Self Actualization rewards) without looking at their monetary compensation!

The Pyramid of Needs

Needs Pyramid
The Pyramid of Needs combines Maslow’s and Herzberg’s motivational theories.

18 July 2018 – Long, long ago, in a [place] far, far away. …

When I was Chief Editor at business-to-business magazine Test & Measurement World, I had a long, friendly though heated, discussion with one of our advertising-sales managers. He suggested making the compensation we paid our editorial staff contingent on total advertising sales. He pointed out that what everyone came to work for was to get paid, and that tying their pay to how well the magazine was doing financially would give them an incentive to make decisions that would help advertising sales, and advance the magazine’s financial success.

He thought it was a great idea, but I disagreed completely. I pointed out that, though revenue sharing was exactly the right way to compensate the salespeople he worked with, it was exactly the wrong way to compensate creative people, like writers and journalists.

Why it was a good idea for his salespeople I’ll leave for another column. Today, I’m interested in why it was not a good idea for my editors.

In the heat of the discussion I didn’t do a deep dive into the reasons for taking my position. Decades later, from the standpoint of a semi-retired whatever-you-call-my-patchwork-career, I can now sit back and analyze in some detail the considerations that led me to my conclusion, which I still think was correct.

We’ll start out with Maslow’s Hierarchy of Needs.

In 1943, Abraham Maslow proposed that healthy human beings have a certain number of needs, and that these needs are arranged in a hierarchy. At the top is “self actualization,” which boils down to a need for creativity. It’s the need to do something that’s never been done before in one’s own individual way. At the bottom is the simple need for physical survival. In between are three more identified needs people also seek to satisfy.

Maslow pointed out that people seek to satisfy these needs from the bottom to the top. For example, nobody worries about security arrangements at their gated community (second level) while having a heart attack that threatens their survival (bottom level).

Overlaid on Maslow’s hierarchy is Frederick Herzberg’s Two-Factor Theory, which he published in his 1959 book The Motivation to Work. Herzberg’s theory divides Maslow’s hierarchy into two sections. The lower section is best described as “hygiene factors.” They are also known as “dissatisfiers” or “demotivators” because if they’re not met folks get cranky.

Basically, a person needs to have their hygiene factors covered in order to have a level of basic satisfaction in life. Having any of these needs go unsatisfied makes them miserable. Having them satisfied doesn’t motivate them at all. It makes ’em fat, dumb and happy.

The upper-level needs are called “motivators.” Not having motivators met drives an individual to work harder, smarter, etc. It energizes them.

My position in the argument with my ad-sales friend was that providing revenue sharing worked at the “Safety and Security” level. Editors were (at least in my organization) paid enough that they didn’t have to worry about feeding their kids and covering their bills. They were talented people with a choice of whom they worked for. If they weren’t already being paid enough, they’d have been forced to go work for somebody else.

Creative people, my argument went, are motivated by non-monetary rewards. They work at the upper “motivator” levels. They’ve already got their physical needs covered, so to motivate them we have to offer rewards in the “motivator” realm.

We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like “Best Technical Article.” Above all, we talked up the fact that ours was “the premier publication in the market.”

These were all non-monetary rewards to motivate people who already had their basic needs (the hygiene factors) covered.

I summarized my compensation theory thusly: “We pay creative people enough so that they don’t have to go do something else.”

That gives them the freedom to do what they would want to do, anyway. The implication is that creative people want to do stuff because it’s something they can do that’s worth doing.

In other words, we don’t pay creative people to work. We pay them to free them up so they can work. Then, we suggest really fun stuff for them to work at.

What does this all mean for society in general?

First of all, if you want there to be a general level of satisfaction within your society, you’d better take care of those hygiene factors for everybody!

That doesn’t mean the top 1%. It doesn’t mean the top 80%, either. Or, the top 90%. It means everybody!

If you’ve got 99% of everybody covered, that still leaves a whole lot of people who think they’re getting a raw deal. Remember that in the U.S.A. there are roughly 300 million people. If you’ve left 1% feeling ripped off, that’s 3 million potential revolutionaries. Three million people can cause a lot of havoc if motivated.

Remember, at the height of the 1960s Hippy movement, there were, according to the most generous estimates, only about 100,000 hipsters wandering around. Those hundred-thousand activists made a huge change in society in a very short period of time.

Okay. If you want people invested in the status quo of society, make sure everyone has all their hygiene factors covered. If you want to know how to do that, ask Bernie Sanders.

Assuming you’ve got everybody’s hygiene factors covered, does that mean they’re all fat, dumb, and happy? Do you end up with a nation of goofballs with no motivation to do anything?

Nope!

Remember those needs Herzberg identified as “motivators” in the upper part of Maslow’s pyramid?

The hygiene factors come into play only when they’re not met. The day they’re met, people stop thinking about who’ll be first against the wall when the revolution comes. Folks become fat, dumb and happy, and stay that way for about an afternoon. Maybe an afternoon and an evening if there’s a good ballgame on.

The next morning they start thinking: “So, what can we screw with next?”

What they’re going to screw with next is anything and everything they damn well please. Some will want to fly to the Moon. Some will want to outdo Michelangelo’s frescoes for the ceiling of the Sistine Chapel. They’re all going to look at what they think was the greatest stuff from the past, and try to think of ways to do better, and to do it in their own way.

That’s the whole point of “self actualization.”

The Renaissance didn’t happen because everybody was broke. It happened because they were already fat, dumb and happy, and looking for something to screw with next.

The Mad Hatter’s Riddle

Raven/Desk
Lewis Carroll’s famous riddle “Why is a raven like a writing desk?” turns out to have a simple solution after all! Shutterstock

27 June 2018 – In 1865 Charles Lutwidge Dodgson, aka Lewis Carroll, published Alice’s Adventures in Wonderland, in which his Mad Hatter character posed the riddle: “Why is a raven like a writing desk?”

Somewhat later in the story Alice gave up trying to guess the riddle and challenged the Mad Hatter to provide the answer. When he couldn’t, nor could anyone else at the story’s tea party, Alice dismissed the whole thing by saying: “I think you could do something better with the time . . . than wasting it in asking riddles that have no answers.”

Since then, it has generally been believed that the riddle has, in actuality, no answer.

Modern Western thought has progressed a lot since the mid-nineteenth century, however. Specifically, two modes of thinking have gained currency that directly lead to solving this riddle: Zen and Surrealism.

I’m not going to try to give even sketchy pictures of Zen or Surrealist doctrine here. There isn’t anywhere near enough space to do either subject justice. I will, however, allude to those parts that bear on solving the Hatter’s riddle.

I’m also not going to credit Dodgson with having surreptitiously known the answer, then hiding it from the World. There is no chance that he could have read Andre Breton‘s The Surrealist Manifesto, which was published twenty-six years after Dodgson’s death. And, I’ve not been able to find a scrap of evidence that the Anglican deacon Dodgson ever seriously studied Taoism or its better-known offshoot, Zen. I’m firmly convinced that the religiously conservative Dodgson really did pen the riddle as an example of a nonsense question. He seemed fond of nonsense.

No, I’m trying to make the case that in the surreal world of imagination, there is no such thing as nonsense. There is always a viewpoint from which the absurd and seemingly illogical comes into sharp focus as something obvious.

As Obi-Wan Kenobi said in Return of the Jedi: “From a certain point of view.”

Surrealism sought to explore the alternate universe of dreams. From that point of view, Alice is a classic surrealist work. It explicitly recounts a dream Alice had while napping on a summery hillside with her head cradled in her big sister’s lap. The surrealists, reading Alice three quarters of a century later, recognized this link, and acknowledged the mastery with which Dodgson evoked the dream world.

Unlike the mid-nineteenth-century Anglicans, however, the surrealists of the early twentieth century viewed that dream world as having as much, if not more, validity as the waking world of so-called “reality.”

Chinese Taoism informs our thinking through the melding of all forms of reality (along with everything else) into one unified whole. When allied with Indian Buddhism to form the Chinese Ch’an, or Japanese Zen, it provides a method that frees the mind to explore possible answers to, among other things, riddles like the Hatter’s, and find just the right viewpoint where the solution comes into sharp relief. That method centers on the koan, a riddle a master gives his (or her) students to help guide them along their paths to enlightenment.

Ultimately, the solution to the Hatter’s riddle, as I revealed in my 2016 novella Lilith, is as follows:

Question: Why is a raven like a writing desk?

Answer: They’re both not made of bauxite.

According to Collins English Dictionary – Complete & Unabridged 2012 Digital Edition, bauxite is “a white, red, yellow, or brown amorphous claylike substance comprising aluminium oxides and hydroxides, often with such impurities as iron oxides. It is the chief ore of aluminium and has the general formula: Al2O3·nH2O.”

As a claylike mineral substance, bauxite is clearly exactly the wrong material from which to make a raven. Ravens are complex, highly organized hydrocarbon-based life forms. From bauxite’s hydrated form, one could sculpt an amazingly lifelike statue of a raven. It wouldn’t, however, even be the right color. Certainly it would never exhibit the behaviors we normally expect of actual, real, live ravens.

Similarly, bauxite could be used to form an amazingly lifelike statue of a writing desk. The bauxite statue of a writing desk might even have a believable color!

Why one would want to produce a statue of a writing desk, instead of making an actual writing desk, is a question outside the scope of this blog posting.

Real writing desks, however, are best made of wood, although other materials, such as steel, fiber-reinforced plastic (FRP), and marble, have been used successfully. What makes wood such a perfect material for writing desks is its mechanically superior composite structure.

Being made of long cellulose fibers held in place by a lignin matrix, wood has wonderful anisotropic mechanical properties. It’s easy to cut and shape with the grain, while providing prodigious yield strength when stressed against the grain. Its amazing toughness when placed under tension or bending loads makes assembling wood into the kind of structure ideal for a writing desk almost too easy.

Try making that out of bauxite!

Alice was unable to divine the answer to the Hatter’s riddle because she “thought over all she could remember about ravens and writing desks.” That is exactly the kind of mistake we might expect a conservative Anglican deacon to make as well.

It is only by using Zen methods of turning the problem inside out and surrealist imagination’s ability to look at it as a question, not of what ravens and writing desks are, but what they are not, that the riddle’s solution becomes obvious.

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo. It’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

An essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, concluded that the only reason he (the author) was sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which, according to Mr. Foley, my high-school Latin teacher, translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory about it all having been made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of briefness” (again according to that same Wikipedia entry), what it means is: when faced with alternative explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shovelled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having disposed of the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.

Chaos Piggies

Don Argott’s 2009 film The Art of the Steal led me down a primrose path to some insight about chaos in human interactions.

25 November 2017 – After viewing Don Argott’s 2009 film The Art of the Steal about the decades-long history of the Barnes Foundation and its gradual conversion from a private suburban-Pennsylvania art-education institution into a Philadelphia tourist attraction, the first thing I thought about was the Beatles’ song “Piggies.” The second thing I thought about was the chaos of human interactions. That led to an epiphany about the class struggle that has been going on, probably, since long before there were humans around to divide into classes that could struggle.

To understand what I’m talking about, the best place to start is with some general observations about mathematical chaos.

The fundamental characteristic of chaotic systems is that they have limited predictability. That is, while they may seem to evolve along a predictable path in the short run, as time goes on “what happens next” becomes increasingly unpredictable until eventually all bets are off. You can set something up to keep going forever, but if it turns out to be a chaotic system, eventually it comes unravelled.
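
Here’s a tiny numerical illustration of that limited predictability. The logistic map below is a standard textbook example of a chaotic system, not anything from the film or the song; it’s just my own sketch in Python:

```python
# Two runs of the logistic map x -> r * x * (1 - x), a classic chaotic system,
# started one part in a million apart. Watch the gap between them grow.
r = 3.9                        # a parameter value in the map's chaotic regime
x_a, x_b = 0.400000, 0.400001  # nearly identical starting points

for step in range(1, 51):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: {x_a:.6f} vs {x_b:.6f}   gap = {abs(x_a - x_b):.2e}")
```

For the first dozen or so steps the two runs track each other almost perfectly; somewhere around step thirty they stop resembling each other at all, and after that all bets are, indeed, off.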

Another chaos characteristic is what electronics engineers call 1/f noise. It’s called 1/f noise because if you carefully analyze the signal, you find it’s a mixture of waves whose power is inversely proportional to their frequency. It’s found in measurements of everything from ocean waves to solid-state electronics. When you see this kind of behavior in virtually anything, it’s a sure sign of chaos.
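
For the curious, here’s roughly what that analysis looks like in practice. This is a minimal sketch in Python using NumPy (my choice of tools, not anything from the posting): it synthesizes a 1/f “pink noise” signal, then fits the slope of its power spectrum on log-log axes. A fitted slope near -1 is the signature engineers look for.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2 ** 16  # number of samples in the synthetic signal

# Build "pink" (1/f) noise in the frequency domain: random phases, with
# power falling off as 1/f, i.e. amplitude falling off as 1/sqrt(f).
freqs = np.fft.rfftfreq(n, d=1.0)
amplitudes = np.zeros_like(freqs)
amplitudes[1:] = 1.0 / np.sqrt(freqs[1:])
phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
signal = np.fft.irfft(amplitudes * np.exp(1j * phases), n=n)

# Now "carefully analyze the signal": estimate its power spectrum and fit
# a straight line to log(power) vs. log(frequency).
psd = np.abs(np.fft.rfft(signal)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)
print(f"fitted log-log spectral slope: {slope:.2f} (about -1 for 1/f noise)")
```

Run the same fit on real measurements, ocean-wave heights or the voltage noise from a transistor, instead of the synthesized signal, and a slope near -1 is the kind of signature described above.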

It turns out that the best way to create a chaotic system is to take kazillions of things all acting independently, then somehow get them to affect each other. In the case of human society, you’ve got kazillions of people all doing their own things independently, but having to work together to get anything done.

Now, let’s look at the Beatles’ song:

“Have you seen the little piggies/Crawling in the dirt?” …

“Have you seen the bigger piggies/In their starched white shirts?” …

Sound familiar? The lyrics are pointing out an observation of 1/f noise. The “little piggies” are analogous to the rapid, high-frequency fluctuations whose effect is swamped by the larger, low-frequency fluctuations represented by the “bigger piggies.”

In other words, the big and powerful few have an outsized effect compared to that of the small and powerless many.

Duh!

I’m not crediting George Harrison with sufficient insight into mathematical chaos to draw the parallel between it and the social scene he was describing. He was certainly smart enough and interested enough to make the connection, but in 1968, when the song was released, chaos theory was not widely enough understood to inform Harrison’s songwriting. Most likely at the time he was simply creating a metaphor in which we now can perceive chaos.

Okay, what has “Piggies” to do with The Art of the Steal?

The thesis of the film is that big, powerful politicians conspired to run roughshod over a group of small, relatively powerless art lovers trying to preserve the legacy of one Dr. Albert C. Barnes, who created the Barnes Foundation in the first place. Supposedly (and we have no reason to doubt it) Barnes hated the big, powerful interests who ultimately got control of his art collection decades after his death.

The lesson I’d like to draw from this whole thing is not quite the lesson the film would like us to draw, which is that the big piggies are bad guys beating up on the good guy little piggies. That’s the usual class-struggle argument.

To me, the good and bad in this tale is a matter of viewpoint. What I’d prefer us to learn is that Barnes’ attempt to create something that would forever function as he wanted it to was fundamentally doomed to failure.

Human society is a chaotic system, so any human organization you set up will eventually evolve in ways you cannot predict and cannot control. That’s Mommy Nature’s way.

If you try to go against Mommy Nature’s way, as Barnes did, Mommy Nature SPANK!