The Dead-Cat Bounce

Dead Cat Bounce
Chaotic market theory and basic control theory combine to explain equities markets’ dead-cat bounce phenomenon.

10 June 2020 – The title of this essay sounds like a new Argentinian dance craze, but it’s not. It’s a pattern of stock-price fluctuations that has repeated over and over for as long as folks have been tracking stock prices. It doesn’t get the attention it deserves because people who pretend that they have power and can wisely dispense it (i.e., the People In Charge – PIC) don’t like things that show how little power they actually have. So, they ignore the heck out of them, thereby proving themselves dumb, as well as powerless.

There’s been a lot of blather in the news media recently about some hypothetical “V-shaped recovery,” which a lot of pundits, especially those of the Republican-Party persuasion (notably led by that master of misinformation, Donald Trump), want you to believe the U.S. economy is experiencing. In an attempt to prove their case, they point to the performance over approximately the past three months of all three major equity-market indices, those being the Dow Jones Industrial Average (DJIA), the Standard and Poor’s 500-Stock Composite Index (S&P), and the National Association of Securities Dealers Automated Quotations index (NASDAQ). Those three indices do tell a consistent story, but it’s not the one the V-shaped-recovery fans want you to believe. The story is actually much more complicated. It’s what’s called the dead-cat bounce.

To understand the dead-cat bounce that has been going on since the U.S. equities market crashed in March, you have to understand what I was driving at in this space on 18 March 2020. That was about the time the market bloodbath hit bottom. By the way, I’d been mostly out of the market, and into cash, for several months at that point. I could see that something evil was bound to happen in the near future. I just didn’t know what it would be. It turned out to be a pandemic coming out of the blue.

In that 18 March essay, I spent a whole lot of space developing the chaotic-market theory, which visualizes markets as having an equilibrium value based on classical efficient-market theory, with a free-roaming chaotic component riding on it. The chaotic component arises as millions of investors jostle to control prices of thousands of equity instruments (stocks). One of the first things those of us who have been responsible for designing and building feedback control systems run into is a little phenomenon called pilot-induced oscillation (PIO), named after an instability all pilots have to deal with when learning to land an airplane. PIO arises from the inescapable fact that feedback response comes some time after the system moves off equilibrium. Obviously, the response can’t come before the movement; it has to come after. That’s why they call it a “response!” That time lag is what causes the PIO.

A feedback-controlled system’s behavior follows what’s called an inhomogeneous time-dependent linear differential equation. Let me break that name down a bit. The “inhomogeneous” part just means there is something driving the system. In the case of equities markets, that’s the underlying economy setting the equilibrium in accordance with Adam Smith’s supply and demand. The “time-dependent” part just means that things change over time. As Jim Morrison said: “The future’s uncertain and the end is always near.” A “linear differential equation” means that what happens next depends on what happened before, and the rate at which things are changing now. Without going into the applied mathematics of finding a solution, I’ll just skip to the end, and tell you what happens when the response lags the movement: the dratted things oscillate. That is, they go up and down, always overshooting and undershooting the equilibrium point.

Do you see the connection, now?

That solution is called a damped harmonic oscillator, which simply means that the thing’s overshooting and undershooting follows a regular sinusoidal (you’ll have to look that one up, yourself) pattern, but it dies out over time. The rate at which the oscillation dies out is controlled by something called the damping ratio, which can take on any value from zero to infinity. Zero damping means the oscillation doesn’t die out. A damping ratio between zero and one means the system overshoots and undershoots repeatedly, with swings that shrink away over time. A damping ratio of exactly one (so-called critical damping) means the system returns to its equilibrium value as fast as it can without overshooting at all. A damping ratio much over one makes the system respond sluggishly, and not oscillate at all.
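
To make the damping-ratio story concrete, here’s a minimal Python sketch of an underdamped oscillator released from rest. The 12-day period and damping ratio of 0.3 are illustrative choices of mine (they sit in the range discussed below), not values fitted to any market data:

```python
import math

def damped_oscillation(t, zeta, omega_n, x0=1.0):
    """Displacement of an underdamped oscillator (0 < zeta < 1) released
    from rest at x0:
        x(t) = x0 * e^(-zeta*wn*t) * (cos(wd*t) + (zeta/sqrt(1-zeta^2)) * sin(wd*t))
    where wd = wn*sqrt(1-zeta^2) is the damped oscillation frequency."""
    wd = omega_n * math.sqrt(1.0 - zeta ** 2)
    envelope = math.exp(-zeta * omega_n * t)
    return x0 * envelope * (math.cos(wd * t)
                            + (zeta / math.sqrt(1.0 - zeta ** 2)) * math.sin(wd * t))

# Pick the natural frequency so the *damped* period comes out to 12 days.
period = 12.0
zeta = 0.3
omega_n = 2 * math.pi / (period * math.sqrt(1 - zeta ** 2))

# Sample every 6 days: full swing at t=0, undershoot at t=6, smaller
# overshoot at t=12, and so on, each swing smaller than the last.
samples = [round(damped_oscillation(d, zeta, omega_n), 3) for d in range(0, 37, 6)]
```

Set the damping ratio to zero and the decaying envelope disappears, so the swings never shrink; push it toward one and the undershoots fade away.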

Now, with that explanation in mind, look at the market behavior depicted in the graph above. The graph starts at the beginning of March 2020. Investors started to realize that the pandemic was going to trash the U.S. economy around mid-February, so you see that I’ve cut off some of the start of the crash that happened before 1 March. From 1 March, stock prices fell like a stone until 23 March. That’s when the dead cat hit the pavement, and bounced. It bounced too high and, around 27 March, it started falling back down, only to undershoot again. Around 2 April, it bottomed and started back up, again. Looking at these movements quantitatively, we can see the clear pattern of a damped oscillation with a period of about 12 days, and a damping ratio of between 0.2 and 0.4.

To bring out the underlying pattern, I’ve filtered the data by averaging over three days for each point in the data set to get the smoother red line. The three-day filter (a simple moving-average, or boxcar, filter) does little to suppress the slower 12-day oscillations, or the even slower smack from the pandemic’s economic hit. It does, however, pretty well filter out the daily noise from the fast-moving day-trading fluctuations.
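
The smoothing step is nothing more than sliding an averaging window along the series. A short sketch with toy numbers of my own (not the actual DJIA closes) shows how a one-day spike gets knocked down while slower trends pass through:

```python
def moving_average(values, window=3):
    """Trailing moving average: each output point is the mean of the current
    value and the (window - 1) values before it.  The first few points
    average over whatever history exists so far."""
    out = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical daily closes with a one-day spike at index 4.
closes = [100, 102, 98, 101, 120, 99, 100]
smooth = moving_average(closes)  # the spike is averaged down toward its neighbors
```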

Clearly, we are in a recovery. There’s no doubt about that! The economy is coming back to life after being practically shut down for a short period of time. The initial shock from the pandemic is largely over. Look for a gradual return to the three-to-five-percent-per-year long-term growth rate we’ve seen over the century-and-a-quarter history of the DJIA.

Efficient Markets and Chaos

DJIA1900-2020
Semi-logarithmic plot of historic record of Dow Jones Industrial Average closing values from 1900-2020 plotted against an increasing exponential function to show chaotic oscillations.

18 March 2020 – Equities markets are not a zero-sum game (Fama, 1970). They are specifically designed to provide investors with a means of participating in companies’ business performance either directly through regular cash dividends, or indirectly through a secular increase in the market prices of the companies’ stock. The efficient market hypothesis (EMH), which postulates that stock prices reflect all available information, specifically addresses the stock-price-appreciation channel. EMH has three forms (Klock & Bacon, 2014):

  • Weak-form EMH refers specifically to predictions based on past-price information;
  • Semi-strong form EMH includes use of all publicly available information;
  • Strong-form EMH includes all information, including private, company-confidential information.

This essay examines equities-market efficiency from the point of view of a model based on chaos theory (Gleick, 2008). The model envisions market-price movements as chaotic fluctuations around an equilibrium value determined by strong-form market efficiency (Chauhan, Chaturvedula, & Iyer, 2014). The next section shows how equities markets work as dynamical systems, and presents evidence that they are also chaotic. The third section describes how dynamical systems work in general. The fourth section shows how dynamical systems become chaotic. The conclusion ties up the argument’s various threads.

Stock-Market Dynamism

Once a stock is sold to the public, it can be traded between various investors at a trade price that is agreed upon ad hoc between buyers and sellers in a secondary market (Hayek, 1945). When one investor decides to sell stock in a given company, it increases the supply of that stock, exerting downward pressure on the trade price. Conversely, when another investor decides to buy that stock, it increases the demand, driving the trade price up. Interestingly, consummating the transaction decreases both supply and demand, and thus has no effect on the price. It is the intention to buy or sell the stock that affects the price. The market price is the trade price of the last transaction completed.

Successful firms grow in value over time, which is reflected in secular growth of the market price of their stocks. So, there exists an arbitrage strategy that has a high probability of a significant return: buy and hold. That is, buy equity in a well-run company, and hold it for a significant period of time, then sell. That, of course, is not what is meant by market efficiency (Chauhan et al., 2014). Efficient market theory specifically concerns itself with returns in excess of such market returns (Fama, 1970).

Of course, if all investors were assured the market price would rise, no owners would be willing to sell, no transactions could occur, and the market would collapse. Similarly, if all investors were assured that the stock’s market price would fall, owners would be anxious to sell, but nobody would be willing to buy. Again, no transactions could occur, and the market would, again, collapse. Markets therefore actually work because of the dynamic tension created by uncertainty as to whether any given stock’s market price will rise or fall in the near future, making equities markets dynamical systems that move constantly (Hayek, 1945).

Fama (1970) concluded that on time scales longer than a day, the EMH appears to work. He found, however, evidence that on shorter time scales it was possible to use past-price information to obtain returns in excess of market returns, violating even weak-form efficiency. He concluded, however, that returns available on such short time scales were insufficient to cover transaction costs, upholding weak-form EMH. Technological improvements since 1970 have, however, drastically reduced costs for high volumes of very-short-timescale transactions, making high-frequency trading profitable (Baron, Brogaard, Hagströmer, & Kirilenko, 2019). Such short-time predictability and long-time unpredictability is a case of sensitive dependence on initial conditions, which Edward Lorenz discovered in 1961 to be one of the hallmarks of chaos (Gleick, 2008). Since 1970, considerable work has been published applying the science of chaotic systems to markets, especially the forex market (Bhattacharya, Bhattacharya, & Roychoudhury, 2017), which operates nearly identically to equities markets.

Dynamic Attraction

Chaos is a property of dynamical systems. Dynamical-systems theory generally concerns itself with the behavior of some quantitative variable representing the motion of a system in a phase space. In the case of a one-dimensional variable, such as the market price of a stock, the phase space is two dimensional, with the variable’s instantaneous value plotted along one axis, and its rate of change plotted along the other (Strogatz, 2015). At any given time, the variable’s value and rate of change determine the location in phase space of a phase point representing the system’s instantaneous state of motion. Over time, the phase point traces out a path, or trajectory, through phase space.

As a simple example illustrating dynamical-system features, take an unbalanced bicycle wheel rotating in a vertical plane (Strogatz, 2015). This system has only one moving part, the wheel. The stable equilibrium position for that system is to have the unbalanced weight hanging down directly below the axle. If the wheel is set rotating, the wheel’s speed increases as the weight approaches its equilibrium position, and decreases as it moves away. If the energy of motion is not too large, the wheel’s speed decreases until it stops, then starts rotating back toward the fixed equilibrium point, then slows again, stops, then rotates back. In the absence of friction, this oscillating motion continues ad infinitum. In phase space, the phase point’s trajectory is an elliptical orbit centered on an attractor located at the unbalanced weight’s equilibrium position and zero velocity. The ellipse’s size (semi-major axis) depends on the amount of energy in the motion. The more energy, the larger the orbit.

If, on the other hand, the wheel’s motion has too much energy, it carries the unbalanced weight over the top (Strogatz, 2015). The wheel then continues rotating in one direction, and the oscillation stops. In phase space, the phase point appears outside some elliptical boundary defined by how much energy it takes to drive the unbalanced weight over the top. That elliptical boundary defines the attractor’s basin of attraction.
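
The wheel’s two behaviors show up readily in a quick numerical sketch (units are chosen for convenience so the equation of motion reduces to d²θ/dt² = −sin θ; the starting speeds and the simple integrator are my own choices for illustration):

```python
import math

def simulate_wheel(theta0, omega0, dt=0.001, steps=20000):
    """Frictionless unbalanced wheel: d^2(theta)/dt^2 = -sin(theta).
    Semi-implicit Euler integration keeps the energy roughly constant.
    Returns the angle history; (theta, omega) at each step is the phase point."""
    theta, omega = theta0, omega0
    thetas = [theta]
    for _ in range(steps):
        omega += -math.sin(theta) * dt
        theta += omega * dt
        thetas.append(theta)
    return thetas

# Low energy: the weight swings back and forth about the bottom, so the
# angle stays bounded -- a closed orbit inside the basin of attraction.
small = simulate_wheel(0.0, 1.0)

# High energy: the weight goes over the top and the wheel keeps rotating
# in one direction -- the phase point lies outside the basin of attraction.
large = simulate_wheel(0.0, 2.5)
```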

Chaotic Attractors

To illustrate how a dynamic system can become chaotic requires a slightly more complicated example. The pitch-control system in an aircraft is particularly apropos of equities markets. This system is a feedback control system with two moving parts: the pilot and aircraft (Efremov, Rodchenko, & Boris, 1996). In that system, the oscillation arises from a difference between the speed at which the aircraft reacts to control inputs and the speed at which the pilot reacts in an effort to correct unintended aircraft movements. The pilot’s response typically lags the aircraft’s movement by a more-or-less fixed time. In such a case, there is always an oscillation frequency at which that time lag equals half an oscillation period (i.e., half the time to complete one cycle), so the pilot’s corrections arrive exactly out of phase and reinforce the motion instead of opposing it. The aircraft’s nose then bobs up and down at that frequency, giving the aircraft a porpoising motion. Should the pilot try harder to control the porpoising, the oscillation only grows larger because the response still lags the motion by the same amount. This is called pilot-induced oscillation (PIO), and it is a major nuisance for all feedback control systems.

PIO relates to stock-market behavior because there is always a lag between market-price movement and any given investor’s reaction to set a price based on it (Baron, Brogaard, Hagströmer, & Kirilenko, 2019). The time lag between intention and consummation of a trade will necessarily represent the period of some PIO-like oscillation. The fact that at any given time there are multiple investors (up to many thousands) driving market-price fluctuations at their own individual oscillation frequencies, determined by their individual reaction-time lags, makes the overall market a chaotic system with many closely spaced oscillation frequencies superposed on each other (Gleick, 2008).
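
The destabilizing effect of reaction lag shows up in even the crudest feedback loop. The sketch below is a deliberately simple stand-in (the gains, lags, and update rule are arbitrary choices of mine, not a model of any real market): each step, the "trader" pushes back against the price deviation seen some number of steps ago.

```python
def trade_response(gain, lag, steps=60):
    """Price deviation under delayed proportional feedback:
        x[t+1] = x[t] - gain * x[t - lag]
    i.e., the correction applied now is based on the deviation observed
    `lag` steps earlier, not the current one."""
    x = [1.0] * (lag + 1)  # constant initial deviation as seed history
    for t in range(lag, lag + steps):
        x.append(x[t] - gain * x[t - lag])
    return x

# Instant reaction (no lag): the deviation decays smoothly toward zero.
prompt_fix = trade_response(gain=0.5, lag=0)

# Delayed reaction: the corrections arrive out of phase, so the deviation
# swings back and forth with growing amplitude -- a PIO-like oscillation.
late_fix = trade_response(gain=1.0, lag=3)
```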

This creates the possibility that a sophisticated arbitrageur may analyze the frequency spectrum of market fluctuations to find an oscillation pattern large enough (because it represents a large enough group of investors) and persistent enough to provide an opportunity for above-market returns using a contrarian strategy (Klock & Bacon, 2014). Of course, applying the contrarian strategy damps the oscillation. If enough investors apply it, the oscillation disappears, restoring weak-form efficiency.

Conclusion

Basic market theory based on Hayek’s (1945) description assumes there is an equilibrium market price for any given product, which in the equity-market case is a company’s stock (Fama, 1970). Fundamental (i.e., strong-form efficient) considerations determine this equilibrium market price (Chauhan et al., 2014). The equilibrium price identifies with the attractor of a chaotic system (Gleick, 2008; Strogatz, 2015). Multiple sources showing market fluctuations’ sensitive dependence on initial conditions serve to bolster this identification (Fama, 1970; Baron et al., 2019; Bhattacharya et al., 2017). PIO-like oscillations among a large group of investors provide a source for such market fluctuations (Efremov et al., 1996).

References

Baron, M., Brogaard, J., Hagströmer, B., & Kirilenko, A. (2019). Risk and return in high-frequency trading. Journal of Financial & Quantitative Analysis, 54(3), 993–1024.

Bhattacharya, S. N., Bhattacharya, M., & Roychoudhury, B. (2017). Behavior of the foreign exchange rates of BRICs: Is it chaotic? Journal of Prediction Markets, 11(2), 1–18.

Chauhan, Y., Chaturvedula, C., & Iyer, V. (2014). Insider trading, market efficiency, and regulation: A literature review. Review of Finance & Banking, 6(1), 7–14.

Efremov, A. V., Rodchenko, V. V., & Boris, S. (1996). Investigation of pilot induced oscillation tendency and prediction criteria development (No. SPC-94-4028). Moscow Institute of Aviation Technology.

Fama, E. (1970). Efficient capital markets: A review of theory and empirical work. The Journal of Finance, 25(2), 383–417.

Farazmand, A. (2003). Chaos and transformation theories: A theoretical analysis with implications for organization theory and public management. Public Organization Review, 3(4), 339–372.

Gleick, J. (2008). Chaos: Making a new science. New York, NY: Penguin Group.

Hayek, F. A. (1945). The use of knowledge in society. American Economic Review, 35(4), 519–530.

Klock, S. A., & Bacon, F. W. (2014). The January effect: A test of market efficiency. Journal of Business & Behavioral Sciences, 26(3), 32–42.

Strogatz, S. H. (2015). Nonlinear dynamics and chaos (2nd ed.). Boulder, CO: Westview Press.

Making Successful Decisions

Project Inputs
External information about team attributes, group dynamics and organizational goals ultimately determine project success.

4 September 2019 – I’m in the early stages of a long-term research project for my Doctor of Business Administration (DBA) degree. Hopefully, this research will provide me with a dissertation project, but I don’t have to decide that for about a year. And, in the chaotic Universe in which we live a lot can, and will, happen in a year.

I might even learn something!

And, after learning something, I might end up changing the direction of my research. Then again, I might not. To again (as I did last week) quote Winnie the Pooh: “You never can tell with bees!”

No, this is not an appropriate forum for publishing academic research results. For that we need peer-reviewed scholarly journals. There are lots of them out there, and I plan on using them. Actually, if I’m gonna get the degree, I’m gonna have to use them!

This is, however, an appropriate forum for summarizing some of my research results for a wider audience, who might just have some passing interest in them. The questions I’m asking affect a whole lot of people. In fact, I dare say that they affect almost everyone. They certainly can affect everyone’s thinking as they approach teamwork at home and at work, as well as how they consider political candidates asking for their votes.

For example, a little over a year from now, you’re going to have the opportunity to vote for who you want running the United States Government’s Executive Branch as well as a few of the people you’ll hire (or re-hire) to run the Legislative Branch. Altogether, those guys form a fairly important decision-making team. A lot of folks have voiced disapprobation of how the people we’ve hired in the past have been doing those jobs. My research has implications for what questions you ask of the bozos who are going to be asking for your votes in the 2020 elections.

One of the likely candidates for President has shown in words and deeds over the past two years (actually over the past few decades, if you care to look that far into his past) that he likes to make decisions all by his lonesome. In other words, he likes to have a decision team numbering exactly one member: himself.

Those who have paid attention to this column (specifically the posting of 17 July) can easily compute the diversity score for a team like that. It’s exactly zero.

When looking at candidates for the Legislative Branch, you’ll likely encounter candidates who’re excessively proud to promise that they’ll consult that Presidential candidate’s whims regarding anything, and support whatever he tells them he wants. Folks who paid attention to that 17 July posting will recognize that attitude as one of the toxic group-dynamics phenomena that destroy a decision team’s diversity score. If we elect too many of them to Congress and we vote Bozo #1 back into the Presidency, we’ll end up with another four years of the effective diversity of the U.S. Government decision team being close to or exactly equal to zero.

Preliminary results from my research – looking at results published by other folks asking what diversity or lack thereof does to the results of projects they make decisions for – indicate that decision teams with zero effective diversity are dumber than a box of rocks. Nobody’s done the research needed to make that statement look anything like Universal Truth, but several researchers have looked at outcomes of a lot of projects. They’ve all found that more diverse teams do better.

Anyway, what this research project is all about is studying the effect of team-member diversity on decision-team success. For that to make sense, it’s important to define two things: diversity and success. Even more important is to make them measurable.

I’ve already posted about how to make both diversity and success measurable. On 17 July I posted a summary of how to quantify diversity. On 7 August I posted a summary of my research (so far) into quantifying project success as well. This week I’m posting a summary of how I plan to put it all together and finally get some answers about how diversity really affects project-development teams.

Methodology

What I’m hoping to do with this research is to validate three hypotheses. The main hypothesis is that diversity (as measured by the Gini-Simpson index outlined in the 17 July posting) correlates positively with project success (as measured by the critical success index outlined in the 7 August posting). A secondary hypothesis is that four toxic group-dynamic phenomena reduce a team’s ability to maximize project success. A third hypothesis is that there are additional unknown or unknowable factors that affect project success. The ultimate goal of this research is to estimate the relative importance of these factors as determinants of project success.
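
For readers who missed the 17 July posting, the Gini-Simpson index has a standard form: one minus the sum of the squared category shares. Here’s a minimal sketch (the team categories below are hypothetical, just to show the two extremes):

```python
from collections import Counter

def gini_simpson(members):
    """Gini-Simpson diversity index: 1 - sum(p_i^2), where p_i is the
    fraction of the team falling in category i.  Zero means everyone is
    alike; the value approaches 1 as the team spreads evenly over many
    categories."""
    counts = Counter(members)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Four disciplines, one member each: maximally diverse for a team of four.
mixed_team = ["EE", "ME", "SW", "finance"]

# The decision team of one mindset: diversity score exactly zero.
clone_team = ["EE", "EE", "EE", "EE"]
```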

Understanding the methodology I plan to use begins with a description of the information flows within an archetypal development project. I then plan on conducting an online survey to gather data on real world projects in order to test the hypothesis that it is possible to determine a mathematical function that describes the relationship between diversity and project success, and to elucidate the shape of such a function if it exists. Finally, the data can help gauge the importance of group dynamics to team-decision quality.

The figure above schematically shows the information flows through a development project. External factors determine project attributes. Personal attributes, such as race, gender, and age combine with professional attributes, such as technical discipline (e.g., electronics or mechanical engineering) and work experience to determine raw team diversity. Those attributes combine with group dynamics to produce an effective team diversity. Effective diversity affects both project planning and project execution. Additional inputs from stakeholder goals and goals of the sponsoring enterprise also affect the project plans. Those plans, executed by the team, determine the results of project execution.

The proposed research will gather empirical data through an online survey of experienced project managers. Following the example of researchers van Riel, Semeijn, Hammedi, & Henseler (2011), I plan to invite members of the Project Management Institute (PMI) to complete an online survey form. Participants will be asked to provide information about two projects that they have been involved with in the past – one they consider to be successful and one that they consider less successful. This is to ensure that data collected includes a range of project outcomes.

There will be four parts to the survey. The first part will ask about the respondent and the organization sponsoring the project. The second will ask about the project team and especially probe the various dimensions of team diversity. The third will ask about goals expressed for the project both by stakeholders and the organization, and how well those goals were met. Finally, respondents will provide information about group dynamics that played out during project team meetings. Questions will be asked in a form similar to that used by van Riel, Semeijn, Hammedi, & Henseler (2011): Respondents will rate their agreement with statements on a five- or seven-point Likert scale.

The portions of the survey that will be of most importance will be the second and third parts. Those will provide data that can be aggregated into diversity and success indices. While privacy concerns mean that the identities of individuals, companies, and projects must be masked, it will be critical to preserve the links between individual projects and the data describing those projects’ results.

This will allow creating a two-dimensional scatter plot with indices of team diversity and project success as independent and dependent variables respectively. Regression analysis of the scatter plot will reveal to what extent the data bear out the hypothesis that team diversity positively correlates with project success. Assuming this hypothesis is correct, analysis of deviations from the regression curve (n-way ANOVA) will reveal the importance of different group dynamics effects in reducing the quality of team decision making. Finally, I’ll need to do a residual analysis to gauge the importance of unknown factors and stochastic noise in the data.
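
The regression step, in miniature: an ordinary least-squares fit of success index against diversity index. The survey numbers below are entirely made up for illustration (the real analysis would use proper statistics software, and would add the ANOVA and residual stages):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x.
    Returns (intercept a, slope b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical survey results: Gini-Simpson diversity index (independent
# variable) vs. critical success index (dependent variable).
diversity = [0.0, 0.2, 0.4, 0.6, 0.8]
success = [0.30, 0.42, 0.49, 0.61, 0.70]
intercept, slope = linear_fit(diversity, success)
# A positive slope would bear out the main hypothesis.
```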

Altogether this research will validate the three hypotheses listed above. It will also provide a standard methodology for researchers who wish to replicate the work in order to verify or extend it. Of course, validating the link between team diversity and decision-making success has broad implications for designing organizations for best performance in all arenas of human endeavor.

References

de Rond, M., & Miller, A. N. (2005). Publish or perish: Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321-329.

van Riel, A., Semeijn, J., Hammedi, W., & Henseler, J. (2011). Technology-based service proposal screening and decision-making effectiveness. Management Decision, 49(5), 762-783.

Measuring Project Success

Motorcycle ride
What counts as success depends on what your goals are. By Andrey Armyagov/Shutterstock

7 August 2019 – As part of my research into diversity in project teams, I’ve spent about a week digging into how it’s possible to quantify success. Most people equate personal success with income or wealth, and business success with profitability or market capitalization, but none of that really does it. Veteran project managers (like yours truly) recognize that it’s almost never about money. If you do everything else right, money just shows up sometimes. What it’s really all about is all those other things that go into making a success of some project.

So, measuring success is all about quantifying all those other things. Those other things are whatever is important to all the folks that your project affects. We call them stakeholders because they have a stake in the project’s outcome.

For example, some years ago it started becoming obvious to me that the boat tied up to the dock out back was doing me no good because I hardly ever took it out. I knew that I’d get to use a motorcycle every day if I had one, but I had that stupid boat instead. So, I conceived of a project to replace the boat with a motorcycle.

I wasn’t alone, however. Whether we had a boat or a motorcycle would make a difference to my wife, as well. She had a stake in whether we had a boat or a motorcycle, so she was also a stakeholder. It turned out that she would also prefer to have a motorcycle than a boat, so we started working on a project to replace the boat with a motorcycle.

So, the first thing to consider when planning a project is who the stakeholders are. The next thing to consider is what each stakeholder wants to get out of the project. In the case of the motorcycle project, what my wife wanted to get out of it was the fun of riding around southwest Florida visiting this, that and the other place. It turned out that the places she wanted to go were mostly easier to get to by motorcycle than by boat. So, her goal wasn’t just to have the motorcycle, it was to visit places she could get to by motorcycle. For her, getting to visit those places would fulfill her goal for the project.

See? There was no money involved. Only an intangible thing of being able to visit someplace.

The “intangible” part is what hangs people up when they want to quantify the value of something. It’s why people get hung up on money-related goals. Money is something everyone knows how to quantify. How do you quantify the value of “getting to go somewhere?”

A lot of people have tried a lot of schemes for “measuring” the “value” of some intangible thing, like getting where you want to go. It turns out, however, that it’s easy if you change your point of view just a little bit. Instead of asking how valuable it is to get there, you can ask something like: “What are the odds that I can get there?” Getting to some place five miles from the sea by boat likely isn’t going to happen, but getting there by motorcycle might be easy.

The way we quantify this is through what’s called a Likert scale. You make a statement, like “I can get there” and pick a number from, say, zero to five with zero being “It ain’t gonna happen” and five being “Easy k’neezie.”

You do that for all the places you’re likely to want to go and calculate an average score. If you really want to complete the job, you normalize your score by weighting the scores for each destination with how often you’re likely to want to go there, then divide by five times the sum of the weights. That leaves you with an index ranging from zero to one.

You go through this process for all of the goals of all your stakeholders and average the indices to get a composite index. This is an example of how one uses fuzzy logic, which takes into account that most of the time you can’t really be sure of anything. The fuzzy part is using the Likert scale to estimate how likely it is that your fuzzy statement (in this case, “I can get there”) will be true.

When using fuzzy logic to quantify project success, the fuzzy statements are of the form: “Stakeholder X’s goal Y is met.” The value assigned to that statement is the degree to which it is true, or, said another way, the degree to which the goal has been met. That allows for the prospect that not all stakeholder goals will be fully met.

For example, how well my wife’s goal of “Getting to Miromar Outlets in Estero, FL from our place in Naples” would be met depended a whole lot on the characteristics of the motorcycle. If the motorcycle is like the 1988 FLST light-touring bike I used to have, the value would be five. We used to ride that thing all day for weeks at a time! If, on the other hand, it’s like that ol’ 1986 XLH chopper, she might make it, but she wouldn’t be happy at the end (literally ’cause the seat was uncomfortable)! The value in that condition would be one or two. Of course, since Miromar is land locked, the value of keeping the boat would be zero.

So, the steps to quantifying project success are:

  1. Determine all goals of all stakeholders;
  2. Assign a relative importance (weight) to each stakeholder goal;
  3. Use a Likert scale to quantify the degree to which each stakeholder goal has been met;
  4. Normalize the scores to work out an index for each stakeholder goal;
  5. Form a critical success index (CSI) for the project as an average of the indices for the stakeholder goals.
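
The five steps above can be sketched in a few lines of Python. The scores and weights here are hypothetical, loosely echoing the motorcycle example (a Likert scale of 0 to 5, destinations weighted by how often we’d want to go):

```python
def goal_index(scores, weights, scale_max=5):
    """Normalized index for one stakeholder goal: the weighted sum of
    Likert scores divided by the maximum possible weighted sum, giving a
    value between 0 and 1 (steps 3 and 4)."""
    total = sum(s * w for s, w in zip(scores, weights))
    return total / (scale_max * sum(weights))

def critical_success_index(goal_indices):
    """Step 5: the CSI is the plain average of the per-goal indices."""
    return sum(goal_indices) / len(goal_indices)

# My wife's goal: "I can get there" scored for three destinations,
# weighted by how often she'd want to go (weekly = 3, monthly = 1).
wife = goal_index(scores=[5, 4, 2], weights=[3, 3, 1])

# My goal: "I ride it every day" -- one statement, scored 4 out of 5.
mine = goal_index(scores=[4], weights=[1])

csi = critical_success_index([wife, mine])  # overall project success, 0 to 1
```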

Before you complain about that being an awful lot of math to go through just to figure out how well your project succeeded, recognize that you go through it in a haphazard way every time you do anything. Even if it’s just going to the bathroom, you start out with a goal and finish deciding how well you succeeded. Thinking about these steps just gives you half a chance to reach the correct conclusion.
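The five steps above can be sketched in a few lines of code. This is a minimal illustration, not anything from the original essay: the goal data, Likert scores, and weights below are all invented.

```python
# Sketch of the critical success index (CSI) described above.
# Each goal is a list of (Likert score 1-5, weight) pairs; the
# data is invented purely for illustration.

def goal_index(scored):
    """Normalize weighted Likert scores to a 0-1 index (step 4)."""
    weighted = sum(score * weight for score, weight in scored)
    return weighted / (5 * sum(weight for _, weight in scored))

def csi(goals):
    """Average the per-goal indices into a composite CSI (step 5)."""
    return sum(goal_index(g) for g in goals) / len(goals)

goals = [
    [(5, 3), (2, 1)],  # stakeholder goal 1: scores under two weighted conditions
    [(4, 2), (3, 2)],  # stakeholder goal 2
]
print(round(csi(goals), 3))  # → 0.775
```

A goal scored 5 everywhere comes out at 1.0; a goal scored 1 everywhere comes out at 0.2, which is why a fully weighted Likert scale never quite reaches zero.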

Do the Math

Applied Math teacher
Throughout history, applied mathematics has been the key to human development. By Elnur/Shutterstock

31 July 2019 – Over the millennia that philosophers have been doing their philosophizing, a recurring theme has been the quest to come up with some simple definition of what sets humans apart from so-called “lower” animals. This is not just idle curiosity. From Aristotle on, folks have realized that understanding what makes us human is key to making the most of our humanity. If we don’t know who we are, how can we figure out how to be better?

In recent decades, however, it’s become clear that this is a fool’s errand. Such a definition of humanity doesn’t exist. Instead, what sets humans apart is a suite of characteristics, such as two eyes in the front of a head that’s set up on a stalk over a main torso, with two legs down below and a couple of arms on each side ending with wiggly fingers and opposable thumbs; a brain able to use sophisticated language; and so forth. Not every human has all of them (for example, I had a friend in Arizona who’d managed to lose his right arm and shoulder without losing his humanity) and a lot of non-humans have some of them (for example, chimpanzees use tools a lot). What marks humans as humans is having most of these characteristics, and what marks non-humans as not human is lacking a lot of them.

On the other hand, there is one thing that most humans are capable of that most non-humans aren’t: humans are capable of doing the math.

Yeah, crows can count past two. I hear that pigeons are good at pattern recognition. But, I’m talking about mathematical reasoning more sophisticated than counting past seven. That’s something most humans can do, and most other animals can’t.

Everybody has their mathematical limitations. Experience indicates that one’s mathematical limitations are mostly an issue of motivation. At some point, just about everybody decides that it’s just not worth putting in the effort needed to learn any more math than they already know.

That’s because learning math is hard. It’s the biggest learning challenge most of us ever face. Most of us give up long before reaching the limits of our innate ability to puzzle it out.

Luckily, there are some who are willing to push the limits, and master mathematical puzzles that no human has solved before. That’s lucky because without people like them, human progress would quickly stop.

Even better, those people are often willing – even anxious – to explain what they’ve puzzled out to the rest of us. For example, we have geometry because a bunch of Egyptians puzzled out how to design pyramids, stone temples and other stuff they wanted to build, then proudly explained to their peers exactly how to do it. We have double-entry accounting because folks in the Near East wanted to keep track of what they had, figured out how to do it, and taught others to help. We’ve got calculus because Sir Isaac Newton and a bunch of his buddies figured out how to predict what the visible planets would do next, then taught it to a bunch of physics students.

It’s what we like to call “Applied Mathematics,” and it’s responsible for most of the progress people have made since the days of stone knives and bear skins. Throughout history, we’ve all stood around scratching our heads about things we couldn’t make sense of until some bright guy (or gal) worked out the right mathematics and applied it to the problem. Then, suddenly what had been unintelligible became understandable.

These days, what I think is the bleeding edge of applied mathematics is nonlinear dynamics and chaos. Maybe there’s some fuzzy logic thrown into the mix, too. Most of the math tools needed to understand (as in “make mathematical models using”) these things are pretty well in hand. What we need to do is apply such tools to the problems that today vex us.

A case in point is the Gini-Simpson Diversity Index I described in this blog two weeks ago. That is a small brick in the wall of a structure that I hope will someday help us avoid making so many dumb choices. Last week I ran across another brick in a paper written by a couple of computer science professors at my old alma mater Rensselaer Polytechnic Institute (aka RPI, or as we used to call it when I was there as a graduate student, “the Tute”). This bit of intellectual flotsam describes a mathematical model the authors use to predict how political polarization evolves in the U.S. Congress.

Polarization is one of four (at my last count) toxic group-dynamics phenomena that make collaborative decision making fail. Basically, the best decisions are made by groups that work together to reach a consensus. We get crappy decisions when the group’s dynamics break down.

The RPI model is a nonlinear differential equation describing an aspect of the dynamics of decision-making teams. Specifically, it quantifies conditions that determine whether decision teams evolve toward consensus or polarization. We see today what happens when Congress evolves toward polarization. The authors’ research shows that prior to about 1980 Congress evolved toward consensus. Seeing this dynamic at work mathematically gives us a leg up on figuring out why, and maybe doing something about it.

I’m not going to go into the mathematical model the RPI paper presents. The study of nonlinear dynamical systems is far outside the editorial focus of this column. At this point, I’m not going to talk about solutions the paper might suggest for toxic U.S. Government polarization, either. The theory is not well enough developed yet to provide meaningful suggestions.

The purpose of this posting is to point out that application of sophisticated mathematics is necessary for solving society’s most intractable problems. As I said above, not everybody is ready and willing to become expert in using such tools. That’s not necessary. What I hope you’ll walk away with today is an appreciation of applied mathematics’ importance for societal development, and a willingness to support STEM (science, technology, engineering and mathematics) education throughout our school system. Finally, I hope you’ll encourage students who show an interest to learn the techniques and follow STEM careers.

Computing Diversity

Decision Team
Diversity of membership in decision-making teams leads to better outcomes. By Rawpixel.com/Shutterstock

17 July 2019 – It’s come to my attention that a whole lot of people don’t know how to calculate a diversity score, or even that such a thing exists! How can there be so much discussion of diversity and so little understanding of what the word means? In this post I hope to give you a peek behind the curtain, and maybe shed some light on what diversity actually is and how it is measured.

This topic is of particular interest to me at present because momentum is building to make a study of diversity in business-decision making the subject of my doctoral dissertation in Business Administration. Specifically, I’m looking at how decision-making teams (such as boards of directors) can benefit from membership diversity, and what can go wrong.

Estimating Diversity

The dictionary definition of diversity is: “the condition of having or being composed of differing elements.”

So, before we can quantify the diversity of any group, we’ve got to identify what makes different elements different. This, by the way, is all basic set theory. In different contexts what we mean by “different” may vary. When we’re talking about group decision making in a business context, it gets pretty complicated.

A group may be subdivided, or “stratified” along various dimensions. For example, a team of ten people sitting around a table trying to figure out what to do next about, say, a new product could be subdivided in various ways. One way to stratify such a group is by age. You’d have so many individuals in their 20’s, so many might be in their 30’s, and so forth up to the oldest group being aged 50 or more. Another (perhaps more useful) way to subdivide them is by specialty. There may be so many software engineers, so many hardware engineers, so many marketers, and so forth. These days stratifying teams by gender, ethnicity, educational level or political persuasion could be important. What counts as diversity depends on what the team is trying to decide.

The moral of this story is that a team might score high in diversity along one dimension and very poorly along another. I’m not going to say any more about diversity’s multidimensional nature in this essay, however. We have other fish to fry today.

For now, let’s assume a one-dimensional diversity index. What we pick for a dimension makes little difference to the mathematics we use. Let’s just imagine a medium-sized group of, say, ten individuals and stratify them according to the color of tee-shirts they happen to be wearing at the moment.

What the color of their tee-shirts could possibly mean for the group’s decisions about new-product development I can’t imagine, and probably wouldn’t care anyway. It does, however, give us a way to stratify the sample. Let’s say their shirt colors fall out as in Table 1. So, we’ve got five categories of team members stratified by tee-shirt color.

Table 1: Tee-Shirt Colors

NOTE: The next bit is mathematically rigorous enough to give most people nosebleeds. You can skip over it if you want to, as I’m going to follow it with a more useful quick-and-dirty estimation method.

The Gini–Simpson diversity index, which I consider to be the most appropriate for evaluating diversity of decision-making teams, has a range of zero to one, with zero being “everybody’s the same” and one being “everybody’s different.” We start by asking: “What is the probability that two members picked at random have the same color tee shirt?”

If you’ve taken my statistical analysis course, you’ll likely loathe remembering that the probability of picking two things from a stratified data set, and having them both fall into the same category is:

λ = Σᵢ₌₁ᴺ Pᵢ²     (Equation 1)

Where λ is the probability we want, N is the number of categories (in this case 5), and Pᵢ is the probability that, given the first pick falling into a certain category (i), the second pick will be in the same category. The superscript 2 just indicates that we’re taking members two at a time. Basically, Pᵢ is the number of members in category i divided by the total number of members in all categories. Thus, if the first pick has a blue tee-shirt, then Pᵢ is 3/10 = 0.3.

This probability is high when diversity is low, and low when diversity is high. The Gini-Simpson index makes more intuitive sense by simply subtracting that probability from unity (1.0) to get something that is low when diversity is low, and high when diversity is high.
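Here’s what that computation looks like in code. The category counts below are assumptions for illustration; the only number the original Table 1 gives us is that three of the ten members wore blue.

```python
# Gini-Simpson index: one minus the probability (λ) that two members
# picked at random, with replacement, fall in the same category.
# The counts are illustrative; only "3 of 10 in blue" comes from the text.

def gini_simpson(counts):
    total = sum(counts)
    lam = sum((n / total) ** 2 for n in counts)  # λ = Σ Pᵢ²
    return 1 - lam

tee_shirt_counts = [3, 3, 2, 1, 1]  # five color categories, ten members
print(round(gini_simpson(tee_shirt_counts), 2))  # → 0.76
```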

NOTE: Here’s where we stop with the fancy math.

Guesstimating Diversity

Veteran business managers (at least those not suffering from pathological levels of OCD) realize that the vast majority of business decisions – in fact most decisions in general – are not made after extensive detailed mathematical analysis like what I presented in the previous section. In fact, humans have an amazing capacity for making rapid decisions based on what’s called “fuzzy logic.”

Fuzzy logic recognizes that in many situations, precise details may be difficult or impossible to obtain, and may not make a significant difference to the decision outcome, anyway. For example, deciding whether to step out to cross a street could be based on calculations using precise measurements of an oncoming car’s speed, distance, braking capacity, and the probability that the driver will detect your presence in time to apply the brakes to avoid hitting you.

But, it’s usually not.

If we had to make the decision by detailed mathematical analysis of physical measurements, we’d hardly ever get across the street. We can’t judge speed or distance accurately enough, and have no idea whether the driver is paying attention. We don’t, in general, make these measurements, then apply detailed calculations using Galilean transformations to decide if now is a safe time to cross.

No, we have, with experience over time, developed a “gut feel” for whether it’s safe. We use fuzzy categories of “far” and “near,” and “slow” or “fast.” Even the terms “safe” and “unsafe” have imprecise meanings, gradually shifting from one to the other as conditions change. For example “safe to cross” means something quite different on a dry, sunny day in summertime, than when the pavement has a slippery sheen of ice.
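A toy sketch of what a fuzzy category like “near” looks like in code (the 20 m and 80 m distance thresholds are invented purely for illustration):

```python
# Fuzzy membership: instead of a hard near/far cutoff, membership in
# "near" shifts gradually from 1.0 to 0.0 as distance grows.
# The 20 m / 80 m thresholds are assumed, purely for illustration.

def membership_near(distance_m, near=20.0, far=80.0):
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return 0.0
    return (far - distance_m) / (far - near)  # linear ramp in between

print(membership_near(10))   # clearly near → 1.0
print(membership_near(50))   # halfway between → 0.5
print(membership_near(100))  # clearly far → 0.0
```

The in-between values are the point: at 50 meters the oncoming car is 0.5 “near,” which is exactly the kind of graded judgment a hard cutoff can’t express.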

Group decision making has a similar fuzzy component. We know that the decision team we’ve got is the decision team we’re going to use. It makes no difference whether its diversity score is 0.49 or 0.52; what we’ve got is what we’re going to use. Maybe we could make a half-percent improvement in the odds of making the optimal decision by spending six months recruiting and training a blind Hispanic woman with an MBA to join the team, but are we going to do it? Nope!

We’ll take our chances with the possibly sub-optimal decision made by the team we already have in place.

Hopefully we’re not trying to work out laws affecting 175 million American women with a team consisting of 500 old white guys, but, historically, that’s the team we’ve had. No wonder we’ve got so many sub-optimal laws!

Anyway, we don’t usually need to do the detailed Gini-Simpson Diversity Index calculation to guesstimate how diverse our decision committee is. Let’s look at some examples whose diversity indexes are easy to calculate. That will help us develop a “gut feel” for diversity that’ll be useful in most situations.

So, let’s assume we look around our conference room and see six identical white guys and six identical white women. It’s pretty easy to work out that the team’s diversity index is 0.5. The only way to stratify that group is by gender, and the two strata are the same size. If our first pick happens to be a woman, then there’s a 50:50 chance that the second pick will be a woman, too. One minus that probability (0.5) equals 0.5.

Now, let’s assume we still have twelve team members, but eleven of them are men and there’s only one token woman. If your first pick is the woman, the probability of picking a woman again is 1/12 ≈ 0.08. (The Gini-Simpson formula lets you pick the same member twice.) If, on the other hand, your first pick is a man, the probability that the second pick will also be a man is 11/12 ≈ 0.92. Plug all this into the Gini-Simpson formula and it returns a value of about 15%. That’s a whole lot worse.

Let’s see what happens when we maximize diversity by making everyone different. That means we end up stratifying the members into twelve segments. After picking one member, the odds of the second pick being identical are 1/12 ≈ 0.08 for every segment. The formula now gives us a diversity index of 91.7%. That’s a whole lot better!
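The three twelve-member cases can be checked directly with the same with-replacement formula. This is a quick verification sketch, not code from the original post:

```python
# Re-checking the three cases with the with-replacement Gini-Simpson formula.
def gini_simpson(counts):
    total = sum(counts)
    return 1 - sum((n / total) ** 2 for n in counts)

print(round(gini_simpson([6, 6]), 3))    # six men, six women → 0.5
print(round(gini_simpson([11, 1]), 3))   # eleven men, one woman → 0.153
print(round(gini_simpson([1] * 12), 3))  # all twelve different → 0.917
```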

What Could Possibly Go Wrong?

There are two main ways to screw up group diversity: group-think and group-toxicity. These are actually closely related group-dynamic phenomena. Both lower the effective diversity.

Group-think occurs when members are too accommodating. That is, when members strive too hard to reach consensus. They look around to see what other members want to do, and agree to it without trying to come up with their own alternatives. This produces sub-optimal decisions because the group fails to consider all possible alternatives.

Toxic group dynamics occur when one or more members dominate the conversation, either by being more vocal or more numerous. Members with more reticent personalities fail to speak up, thus denying the group their input. Whenever a member fails to speak up, they lower the group’s effective diversity.

A third phenomenon that messes up decision making for high-diversity teams is that when individual members are too insistent that their ideas are the best, groups often fail to reach consensus at all. At that point more diversity makes reaching consensus harder. That’s the problem facing both houses of the U.S. Congress at the time of this writing.

These phenomena are present to some extent in every group discussion. It’s up to group leadership to suppress them. In the end, creating an effective decision-making team requires two elements: diversity in team membership, and effective team leadership. Membership diversity provides the raw material for effective team decision making. Effective leadership mediates group dynamics to make it possible to maximize the team’s effective diversity.

Constructing Ideas

Constructivist pix
Constructivist illustration with rooster’s head. By Leonid Zarubin/Shutterstock

3 July 2019 – Long time readers of my columns will know that one of my favorite philosophical questions is: “How do we know what we think we know?” Along the way, my thoughts have gravitated toward constructivism, which is a theory in the epistemology branch of philosophy.

Jean Piaget has been credited with initiating the constructivist theory of learning through his studies of childhood development. His methods were to ask probing questions of his children and others, in an attempt to understand how they viewed the world. He also devised and administered reading tests to schoolchildren and became interested in the types of errors they made, leading him to explore the reasoning process in these young children.

From his studies, he worked out a model of childhood development that mapped several stages of world-view paradigms they seemed to use as they matured. This forced him to postulate that children actively participate in constructing their own ideas – their knowledge base – based on experience and prior knowledge. Hence, the term “constructivism.”

Imagine a house that represents everything the child “knows.” Mentally, they live in that house all the time, view the world in relation to it, and make decisions based on what’s there.

As they experience everything, including the experience of having someone tell them something verbally or through written words, they actively remodel the place. The operant concept here is that they constantly do the remodeling themselves by trying to fit new information into the structure that’s already there.

My own journey toward constructivism was based on introspective phenomenological studies. That is, I paid attention to how I gained new knowledge and compared my experiences with experiences reported by others studying the same material.

A paradigm example is the study of quantum mechanics. This subject is difficult for students familiar with classical physics because the principles and the phenomena on which they are based seem counterintuitive. Especially, the range of time and distance scales on which quantum principles act is not directly accessible to humans. Quantum mechanics works at submicroscopic distances and on nanosecond time scales.

Successful students of quantum mechanics start by studying human-scale phenomena that betray the presence of quantum principles. For example, the old “planetary model” of atoms as miniature solar systems in which electrons revolve in stable orbits around the atomic nucleus like planets around the Sun is a physical impossibility. Students realize this after studying Maxwellian Electrodynamics.

In 1864, James Clerk Maxwell succeeded in summarizing everything physicists of the time knew about electricity and magnetism in four concise (though definitely not simple) equations. Taken together, they implied the feasibility of radio and not only explained how light traveled, but even predicted its precise speed. Maxwell’s Equations were enormously successful in guiding the development of electrical technology in the late nineteenth century.

The problem for physicists studying atomic-scale phenomena, however, was that Maxwell’s Equations implied that electrons whizzing around nuclei would rapidly convert all their energy of motion into light, which would radiate away. With no energy of motion left to keep electrons orbiting, the atoms would quickly collapse – then, no more atoms! The Universe as we know it would rapidly cease to exist.

When I say rapidly, I mean on the time scale of trillionths of a second!

Not good for the Universe! Luckily for the Universe, what this really means is that there’s something wrong with classical-electrodynamic theory (i.e., Maxwell’s Equations).

The student finds out about dozens of such paradoxes that show that classical physics is just flat out wrong! The student is then ready to entertain some outlandish ideas that form the core of quantum theory. The student proceeds to piece these ideas together into their own mental version of quantum mechanics.

Every physics student I’ve discussed this with has had the same experience learning this quantum-electrodynamical theory (QED). Even more telling, they all report initially learning the ideas by rote without really understanding them, then applying them for considerable time (months or years) before piecing them together into a mental pattern that eventually feels intuitive. At that point, when presented with some phenomenon (such as the sky being blue) they immediately seize on a QED-based explanation as the most obvious. Even doubting QED has become absurd for them!

To a constructivist, this process for learning quantum mechanics makes perfect sense. The student is presented with numerous paradoxes, which causes cognitive dissonance. This state motivates the student to seek alternative concepts and fit them into his or her world view. In a sense, they construct an extension onto the framework of their world view. This will likely require them to make some modifications to the original structure to accommodate the new knowledge.

This method of developing new knowledge dovetails quite nicely with the scientific method that’s been under development since Aristotle and Plato started toying around with it in the fourth century BCE. The new development is that Piaget showed that it is the normal way humans develop new knowledge. Even children can’t fully comprehend a new idea until they fit it into a modified version of their knowledge base.

This model also explains why humans’ normal initial reaction to novel ideas is to forcefully reject them. Accepting new ideas requires them to do a lot of work on their mental scaffolding. It takes a powerful mental event causing severe cognitive dissonance to motivate them to remodel a mental construction they’ve been piecing together for years.

It also explains why younger humans are so much quicker to take up new ideas. Their mental frameworks are still small, and rebuilding them to fit in new concepts is relatively easy. The reward for building out their mental framework is great. They are also more used to tinkering with their mental models than older humans, who have mental frameworks that have served them well for decades without modification.

Of course, once they reach the point of intolerable cognitive dissonance, older humans have more experience to draw on to do the remodeling job. They will be even quicker than youngsters to make whatever adjustments are necessary.

Older humans who have a lifelong habit of challenging themselves with new ideas have the easiest time adapting to change. They are more used to realigning their thinking to incorporate new concepts and have more practice in constructing knowledge frameworks.

Stick to Your Knitting

Man knitting
Man in suit sticking to his knitting. Photo by fokusgood / Shutterstock

6 June 2019 – Once upon a time in an MBA school far, far away, I took a Marketing 101 class. The instructor, whose name I can no longer be sure of, had a number of sayings that proved insightful, bordering on the oracular. (That means they were generally really good advice.) One that he elevated to the level of a mantra was: “Stick to the knitting.”

Really successful companies of all sizes hew to this advice. There have been periods of history where fast-growing companies run by CEOs with spectacularly big egos have equally spectacularly honored this mantra in the breach. With more hubris than brains, they’ve managed to over-invest themselves out of business.

Today’s tech industry – especially the FAANG companies (Facebook, Amazon, Apple, Netflix and Google) – is particularly prone to this mistake. Here I hope to concentrate on what the mantra means, and what goes wrong when you ignore it.

Okay, “stick to your knitting” is based on the obvious assumption that every company has some core expertise. Amazon, for example, has expertise in building and operating an online catalog store. Facebook has expertise in running an online forum. Netflix operates a bang-up streaming service. Ford builds trucks. Lockheed Martin makes state-of-the-art military airplanes.

General Electric, which has core expertise in manufacturing industrial equipment, got into real trouble when it got the bright idea of starting a finance company to extend loans to its customers for purchases of its equipment.

Conglomeration

There is a business model, called the conglomerate, that is based on explicitly ignoring the “knitting” mantra. It was especially popular in the 1960s. Corporate managers imagined that conglomerates could bring synergies into play that would make them more effective than single-business companies.

For a while there, this model seemed to be working. However, when business conditions began to change (specifically interest rates began to rise from an abnormally low level to more normal rates) their supposed advantages began melting like a birthday cake left outside in a rainstorm. These huge conglomerates began hemorrhaging money until vultures swooped in to pick them apart. Conglomerates are now a thing of the past.

There are companies, such as Berkshire Hathaway, whose core expertise is in evaluating and investing in other companies. Some of them are very successful, but that’s because they stick to their core expertise.

Berkshire Hathaway was originally a textile company that investor Warren Buffett took over when the textile industry was busy going overseas. As time went on, textiles became less important and, by 1985 this core part of the company was shut down. It had become a holding company for Buffett’s investments in other companies. It turns out that Buffett’s core competence is in handicapping companies for investment potential. That’s his knitting!

The difference between a holding company and a conglomerate is (and this is specifically my interpretation) a matter of integration. In a conglomerate, the different businesses are more-or-less integrated into the parent corporation. In a holding company, they are not.

Berkshire Hathaway is known for its insurance business, but if you want to buy, say, auto insurance from Berkshire Hathaway, you have to go to its Government Employees Insurance Company (GEICO) subsidiary. GEICO is a separate company that happens to be wholly owned by Berkshire Hathaway. That is, it has its own corporate headquarters and all the staff, fixtures and other resources needed to operate as an independent insurance company. It just happens to be owned, lock, stock, and intellectual property, by another corporate entity: Berkshire Hathaway.

GEICO’s core expertise is insurance. Berkshire Hathaway’s core expertise is finding good companies to invest in. Some are partially owned (e.g., 5.4% of Apple) some are wholly owned (e.g., Acme Brick).

Despite Berkshire Hathaway’s holding positions in both Apple and Acme Brick, if you ask Warren Buffet if Berkshire Hathaway is a computer company or a brick company, he’d undoubtedly say “no.” Berkshire Hathaway is a diversified holding company.

Its business is owning other businesses.

To paraphrase James Coburn’s line from Stanley Donen’s 1963 film Charade: “[Mrs. Buffett] didn’t raise no stupid children!”

Why Giant Corporations?

All this giant corporation stuff stems from a dynamic I also learned about in MBA school: a company grows or it dies. I ran across this dynamic during a financial modeling class where we used computers to predict results of corporate decisions in lifelike conditions. Basically, what happens is that unless the company strives to its utmost to maintain growth, it starts to shrink and then all is lost. Feedback effects take over and it withers and dies.

Observations since then have convinced me this is some kind of natural law. It shows up in all kinds of natural systems. I used to think I understood why, but I’m not so sure anymore. It may have something to do with chaos, and we live in a chaotic universe. I resolve to study this in more detail – later.

But, anyway … .

Companies that embrace this mantra (You grow or you die.) grow until they reach some kind of external limit, then they stop growing and – in some fashion or other – die.

Sometimes (and paradigm examples abound) external limits don’t kick in before some company becomes very big, indeed. Standard Oil Company may be the poster child for this effect. Basically, the company grew to monopoly status and, in 1911 the U.S. Federal Government stepped in and, using the 1890 Sherman Anti-Trust Act, forced its breakup into 33 smaller oil companies, many of which still exist today as some of the world’s major oil companies (e.g., Mobil, Amoco, and Chevron). At the time of its breakup, Standard Oil had a market capitalization of just under $11B and was the third most valuable company in the U.S. Compare that to the U.S. GDP of roughly $34B at the time.

The problem with companies that big is that they generate tons of free cash. What to do with it?

There are three possibilities:

  1. You can reinvest it in your company;

  2. You can return it to your shareholders; or

  3. You can give it away.

Reinvesting free cash in your company is usually the first choice. I say it is the first choice because it is used at the earliest period of the company’s history – the period when growth is necessarily the only goal.

If done properly reinvestment can make your company grow bigger faster. You can reinvest by out-marketing your competition (by, say, making better advertisements) and gobbling up market share. You can also reinvest to make your company’s operations more effective or efficient. To grow, you also need to invest in adding production facilities.

At a later stage, your company is already growing fast and you’ve got state-of-the-art facilities, and you dominate your market. It’s time to do what your investors gave you their money for in the first place: return profits to them in the form of dividends. I kinda like that. It’s what the game’s all about, anyway.

Finally, most leaders of large companies recognize that having a lot of free cash laying around is an opportunity to do some good without (obviously) expecting a payback. I qualify this with the word “obviously” because on some level altruism does provide a return in some form.

Generally, companies engage in altruism (currently more often called “philanthropy”) to enhance their perception by the public. That’s useful when lawsuits rear their ugly heads or somebody in the organization screws up badly enough to invite public censure. Companies can enhance their reputations by supporting industry activities that do not directly enhance their profits.

So-called “growth companies,” however, get stuck in that early growth phase, and never transition to paying dividends. In the early days of the personal-computer revolution, tech companies prided themselves on being “growth stocks.” That is, investors gained vast wealth on paper as the companies’ stock prices went up, but couldn’t realize those gains (capital gains) unless they sold the stock. Or, as my father once did, by using the stock for collateral to borrow money.

In the end, wise investors want their money back in the form of cash from dividends. For example, in the early 2000s, Microsoft and other technology companies were forced by their shareholders to begin paying dividends for the first time.

What can go wrong

So, after all’s said and done, why’s my marketing professor’s mantra wise corporate governance?

To make money, especially the scads of money that corporations need to become really successful, you’ve gotta do something right. In fact, you gotta do something better than the other guys. When you know how to do something better than the other guys, that’s called expertise!

Companies, like people, have limitations. To imagine you don’t have limitations is hubris. To put hubris in perspective, recall that the ancients famously made it Lucifer’s cardinal sin. In fact, it was his only sin!

Folks who tell you that you can do anything are flat out conning your socks off.

If you’re lucky you can do one thing better than others. If you’re really lucky, you can do a few things better than others. If you try to do stuff outside your expertise, however, you’re gonna fail. A person can pick themselves up, dust themselves off, and try again – but don’t try to do the same thing again ‘cause you’ve already proved it’s outside your expertise. People can start over, but companies usually can’t.

One of my favorite sayings is:

Everything looks easy to someone who doesn’t know what they’re doing.

The rank amateur at some activity typically doesn’t know the complexities and pitfalls that an expert in the field has learned about through training and experience. That’s what we know as expertise. When anyone – or any company – wanders outside their field of expertise, they quickly fall foul of those complexities and pitfalls.

I don’t know how many times I’ve overheard some jamoke at an art opening say, “Oh, I could do that!”

Yeah? Then do it!

The artist has actually done it.

The same goes for some computer engineer who imagines that knowing how to program computers makes him (or her) smart, and because (s)he is so smart, (s)he could run, say, a magazine publishing house. How hard can it be?

Mark Zuckerberg is in the process of finding out.

So, You Thought It Was About Climate Change?

Smog over Warsaw
Air pollution over Warsaw center city in winter. Piotr Szczepankiewicz / Shutterstock

Sorry about failing to post to this blog last week. I took sick and just couldn’t manage it. This is the entry I started for 10 April, but couldn’t finish until now.

17 April 2019 – I had a whole raft of things to talk about in this week’s blog posting, some of which I really wanted to cover for various reasons, but I couldn’t resist an excuse to bang this old “environmental pollution” drum once again.

An article by Zoë Schlanger entitled “The average person in Europe loses two years of their life due to air pollution,” published on 2 April 2019 by the World Economic Forum in collaboration with Quartz, crossed my desk this morning (8 April 2019). It was important to me because environmental pollution is an issue I’ve been obsessed with since the 1950s.

The Setup

One of my earliest memories is of my father taking delivery of an even-then-ancient 26-foot lifeboat (I think it was from an ocean liner, though I never really knew where it came from), which he planned to convert to a small cabin cruiser. I was amazed when, with no warning to me, this great, whacking flatbed trailer backed over our front lawn, and deposited this thing that looked like a miniature version of Noah’s Ark.

It was double-ended – meaning it had a prow-shape at both ends – and was pretty much empty inside. That is, it had benches for survivors to sit on and fittings for oarlocks (I vaguely remember oarlocks actually being in place, but my memory from over sixty years ago is a bit hazy), but little else. No decks. No superstructure. Maybe some grates in the bottom to keep people’s feet out of the bilge, but that’s about it.

My father spent a year or so installing lower decks, upper decks, a cabin with bunks, a head, and a small galley, and a straight-six gasoline engine for propulsion. I sorta remember the keel already having been fitted for a propeller shaft and rudder, which would class the boat as a “launch” rather than a simple lifeboat, but I never heard it called that.

Finally, after multiple-years’ reconstruction, the thing was ready to dump into the water to see if it would float. (Wooden boats never float when you first put them in the water. The planks have to absorb water and swell up to tighten the joints. Until then, they leak like sieves.)

The water my father chose to dump this boat into was the Seekonk River in nearby Providence, Rhode Island. It was a momentous day in our family, so my mother shepherded my big sister and me around while my father stressed out about getting the deed done.

We won’t talk about the day(s) the thing spent on the tiny shipway off Gano Street where the last patches of bottom paint were applied over where the boat’s cradle had supported its hull while under construction, and the last little forgotten bits were fitted and checked out before it was launched.

While that was going on, I spent the time playing around the docks and frightening my mother with my antics.

That was when I noticed the beautiful rainbow sheen covering the water.

Somebody told me it was called “iridescence” and was caused by the whole Seekonk River being covered by an oil slick. The oil came from the constant movement of oil-tank ships delivering liquid dreck to the oil refinery and tank farm upstream. The stuff was getting dumped into the water and flowing down to help turn Narragansett Bay, which takes up half the state to the south, into one vast combination open sewer and toxic-waste dump.

That was my introduction to pollution.

It made my socks rot every time I accidentally or reluctantly-on-purpose dipped any part of my body into that cesspool.

It was enough to gag a maggot!

So when, in the late 1960s, folks started yammering on about pollution, my heartfelt reaction was: “About f***ing time!”

I did not join the “Earth Day” protests that started in 1970, though. Previously, I’d observed the bizarre antics surrounding the anti-war protests of the middle-to-late 1960s, and saw the kind of reactions they incited. My friends and I had been a safe distance away leaning on an embankment blowing weed and laughing as less-wise classmates set themselves up as targets for reactionary authoritarians’ ire.

We’d already learned that the best place to be when policemen suit up for riot patrol is someplace a safe distance away.

We also knew the protest organizers – they were, after all, our classmates in college – and smiled indulgently as they worked up their resumes for lucrative careers in activist management. There’s more than one way to make a buck!

Bohemians, beatniks, hippies, or whatever term du jour you wanted to call us just weren’t into the whole money-and-power trip. We had better, mellower things to do than march around carrying signs, shouting slogans, and getting our heads beaten in for our efforts. So, when our former friends, the Earth-Day organizers, wanted us to line up, we didn’t even bother to say “no.” We just turned and walked away.

I, for one, was in the midst of changing tracks from English to science. I’d already tried my hand at writing, but found that, while I was pretty good at putting sentences together in English, then stringing them into paragraphs and stories, I really had nothing worthwhile to write about. I’d just not had enough life experience.

Since physics was basic to all the other stuff I’d been interested in – for decades – I decided to follow that passion and get a good grounding in the hard sciences, starting with physics. By the late seventies, I had learned what science was all about, and had developed a feel for how it was done and what the results looked like. I was especially deep into astrophysics in general and solar physics in particular.

As time went on, the public noises I heard about environmental concerns began to sound more like political posturing and less like scientific discourse, especially as they chose to ignore the variability of the Sun that we astronomers knew made everything work.

By the turn of the millennium, scholarly reports generally showed no observations that backed up the global-warming rhetoric. Instead, they featured ambiguous results that showed chaotic evolution of climate with no real long-term trends.

Those of us interested in the history of science also realized that warm periods coincided with generally good conditions for humans, while cool periods could be pretty rough. So, what was wrong with a little global warming when you needed it?

A disturbing trend, however, was that these reports began to feature a boilerplate final paragraph saying, roughly: “climate change is a real danger and caused by human activity.” They all featured this paragraph, suspiciously almost word for word, despite there being little or nothing in the research results to support such a conclusion.

Since nothing in the rest of the report provided any basis for that final paragraph, it was a non sequitur, added for non-scientific reasons. Clearly, something was terribly wrong with climate research.

The penny finally dropped in 2006 when former Vice President Al Gore (already infamous for having attempted to take credit for developing the Internet) produced his hysteria-inducing movie An Inconvenient Truth, along with the splashing about of the laughable “hockey-stick graph” (Michael Mann’s temperature reconstruction, so nicknamed by climate modeler Jerry Mahlman). The graph, in particular, stitched together historical proxy data for global temperature with a speculative projection of a future exponential rise in global temperatures. That is something respectable scientists are specifically trained not to do, although it’s a favorite tactic of psycho-ceramics.

Air Pollution

By that time, however, so much rhetoric had been invested in promoting climate-change fear and convincing the media that it was human-induced, that concerns about plain old pollution (which anyone could see) seemed dowdy and uninteresting by comparison.

One of the reasons pollution seemed old news then (and still does now) is that in civilized countries (generally those run as democracies) great strides had already been made beating it down. A case in point is the image at right.

East/West Europe Pollution
A snapshot of particulate pollution across Europe on Jan. 27, 2018. (Apologies to Quartz [ https://qz.com/1192348/europe-is-divided-into-safe-and-dangerous-places-to-breathe/ ] from whom this image was shamelessly stolen.)

This image, a political map overlaid with false colors indicating air-pollution levels, shows relatively mild pollution in Western Europe and much more severe levels in the more authoritarian-leaning countries of Eastern Europe.

While this map makes an important point about how poorly communist and other authoritarian-leaning regimes take care of the “soup” in which their citizens have to live, it doesn’t say a lot about the environmental state of the art more generally in Europe. We leave that for Zoë Schlanger’s WEF article, which begins:

“The average person living in Europe loses two years of their life to the health effects of breathing polluted air, according to a report published in the European Heart Journal on March 12.

“The report also estimates about 800,000 people die prematurely in Europe per year due to air pollution, or roughly 17% of the 5 million deaths in Europe annually. Many of those deaths, between 40 and 80% of the total, are due to air pollution effects that have nothing to do with the respiratory system but rather are attributable to heart disease and strokes caused by air pollutants in the bloodstream, the researchers write.

“‘Chronic exposure to enhanced levels of fine particle matter impairs vascular function, which can lead to myocardial infarction, arterial hypertension, stroke, and heart failure,’ the researchers write.”

The point is, while American politicians debate the merits of climate-change legislation, and European politicians seem to have knuckled under to IPCC climate-change rhetoric by wholeheartedly endorsing the 2015 Paris Agreement, the bigger and far more salient problem of environmental pollution is largely being ignored. This despite its visible and immediate deleterious effects on human health, and the demonstrated effectiveness of government efforts to ameliorate it.

By the way, in the two decades between the time I first observed iridescence atop the waters of the Seekonk River and when I launched my own first boat in the 1970s, Narragansett Bay went from a potential Superfund site to a beautiful, clean playground for recreational boaters. That was largely due to the efforts of the Save the Bay volunteer organization. While their job is not (and never will be) completely finished, they can serve as a model for effective grassroots activism.

Why Diversity Rules

Diverse friends
A diverse group of people with different ages and nationalities having fun together. Rawpixel/Shutterstock

23 January 2019 – Last week two concepts reared their ugly heads that I’ve been banging on about for years. They’re closely intertwined, so it’s worthwhile to spend a little blog space discussing why they fit so tightly together.

Diversity is Good

The first idea is that diversity is good. It’s good in almost every human pursuit. I’m particularly sensitive to this, being someone who grew up with the idea that rugged individualism was the highest ideal.

Diversity, of course, is incompatible with individualism. Individualism is the cult of the one. “One” cannot logically be diverse. Diversity is a property of groups, and groups by definition consist of more than one.

Okay, set theory admits of groups with one or even no members, but those groups have a diversity “score” (Gini–Simpson index) of zero. To have any diversity at all, your group has to have at absolute minimum two members. The more the merrier (or diversitier).
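For the curious, that diversity “score” is easy to compute: the Gini–Simpson index is one minus the sum of the squared proportions of each category in the group. Here’s a minimal sketch in Python (the function name and the example categories are mine, purely for illustration):

```python
from collections import Counter

def gini_simpson(members):
    """Gini-Simpson diversity index: 1 minus the sum of squared
    category proportions. Zero for empty or one-member groups."""
    n = len(members)
    if n < 2:
        return 0.0  # no diversity possible with fewer than two members
    counts = Counter(members)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(gini_simpson([]))                       # 0.0 - empty group
print(gini_simpson(["physicist"]))            # 0.0 - group of one
print(gini_simpson(["physicist", "artist"]))  # 0.5 - two-member mixed group
```

Note that a two-member group where both members fall in the same category also scores zero – it takes at least two members in *different* categories to register any diversity at all.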

The idea that diversity is good came up in a couple of contexts over the past week.

First, I’m reading a book entitled Farsighted: How We Make the Decisions That Matter the Most by Steven Johnson, which I plan eventually to review in this blog. Part of the advice Johnson offers is that groups make better decisions when their membership is diverse. How they are diverse is less important than the extent to which they are diverse. In other words, this is a case where quantity is more important than quality.

Second, I divided my physics-lab students into groups to perform their first experiment. We break students into groups to prepare them for working in teams after graduation. Unlike in my student days fifty years ago, scientific research and technology development today are always done in teams.

When I was a student, research was (supposedly) done by individuals working largely in isolation. I believe it was Willard Gibbs (I have no reliable reference for this quote) who said: “An experimental physicist must be a professional scientist and an amateur everything else.”

By this he meant that building a successful physics experiment requires the experimenter to apply so many diverse skills that it is impossible to have professional mastery of all of them. He (or she) must have an amateur’s ability to pick up novel skills in order to reach the next goal in their research. They must be ready to work outside their normal comfort zone.

That asked a lot from an experimental researcher! Individuals who could do that were few and far between.

Today, the fast pace of technological development has reduced that pool of qualified individuals essentially to zero. It certainly is too small to maintain the pace society expects of the engineering and scientific communities.

Tolkien’s “unimaginable hand and mind of Feanor” puttering around alone in his personal workshop crafting magical things is unimaginable today. Marlowe’s Dr. Faustus character, who had mastered all earthly knowledge, is now laughable. No one person is capable of making a major contribution to today’s technology on their own.

The solution is to perform the work of technological research and development in teams with diverse skill sets.

In the sciences, theoreticians with strong mathematical backgrounds partner with engineers capable of designing machines to test the theories, and technicians with the skills needed to fabricate the machines and make them work.

Chaotic Universe

The second idea I want to deal with in this essay is that we live in a chaotic Universe.

Chaos is a property of complex systems. These are systems consisting of many interacting moving parts that show predictable behavior on short time scales, but eventually foil the most diligent attempts at long-term prognostication.

A pendulum, by contrast, is a simple system consisting of, basically, three moving parts: a massive weight, or “pendulum bob,” that hangs by a rod or string (the arm) from a fixed support. Simple systems usually do not exhibit chaotic behavior.
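You can watch this short-term-predictable, long-term-unpredictable behavior in even a toy model. The sketch below uses the logistic map – a standard textbook example of chaos, my choice for illustration rather than anything from the discussion above – and starts two trajectories that differ by one part in a billion. They track each other closely at first, then end up bearing no relation to each other:

```python
def logistic_map(x0, r=4.0, steps=50):
    """Iterate x -> r*x*(1-x), a classic minimal chaotic system.

    Returns the full trajectory, starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-9)  # start a billionth away

# Early on, the two trajectories are nearly indistinguishable...
print(abs(a[5] - b[5]))    # still tiny
# ...but small differences grow exponentially, so after enough
# steps the trajectories have completely decorrelated.
print(abs(a[40] - b[40]))  # no longer tiny
```

That exponential growth of tiny initial differences is exactly why a chaotic system foils long-term prognostication: no measurement of the starting state is ever precise enough.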

The solar system, consisting of a huge, massive star (the Sun), eight major planets and a host of minor planets, is decidedly not a simple system. Its behavior is borderline chaotic. I say “borderline” because the solar system seems well behaved on short time scales (e.g., millennia), but when viewed on time scales of millions of years does all sorts of unpredictable things.

For example, approximately four and a half billion years ago (a few tens of millions of years after the system’s initial formation) a Mars-sized planet collided with Earth, spalling off a mass of material that coalesced to form the Moon, then ricocheted out of the solar system. That’s the sort of unpredictable event that happens in a chaotic system if you wait long enough.

The U.S. economy, consisting of millions of interacting individuals and companies, is wildly chaotic, which is why no investment strategy has ever been found to work reliably over a long time.

Putting It Together

The way these two ideas (diversity is good, and we live in a chaotic Universe) work together is that collaborating in diverse groups is the only way to successfully navigate life in a chaotic Universe.

An individual human being is so powerless that attempting anything but the smallest task is beyond his or her capacity. The only way to do anything of significance is to collaborate with others in a diverse team.

In the late 1980s my wife and I decided to build a house. To begin with, we had to decide where to build the house. That required weeks of collaboration (by our little team of two) to combine our experiences of different communities in the area where we were living, develop scenarios of what life might be like living in each community, and finally agree on which we might like the best. Then we had to find an architect to work with our growing team to design the building. Then we had to negotiate with banks for construction loans, bridge loans, and ultimate mortgage financing. Our architect recommended adding a prime contractor who had connections with carpenters, plumbers, electricians and so forth to actually complete the work. The better part of a year later, we had our dream house.

There’s no way I could have managed even that little project – building one house – entirely on my own!

In 2015, I ran across the opportunity to produce a short film for a film festival. I knew how to write a script, run a video camera, sew a costume, act a part, do the editing, and so forth. In short, I had all the skills needed to make that 30-minute film.

Did that mean I could make it all by my onesies? Nope! By the time the thing was completed, the list of cast and crew counted over a dozen people, each with their own job in the production.

By now, I think I’ve made my point. The take-home lesson of this essay is that if you want to accomplish anything in this chaotic Universe, start by assembling a diverse team, and the more diverse, the better!