6 March 2020 – It has come to my attention that too many folks who should know better are confused about exactly why the stock market has shown such volatility recently. Specifically, we’ve seen multi-percentage-point daily changes both up and down, but especially down. We have also seen reports of reduced earnings guidance from multinational enterprises (MNEs). Finally, we’ve seen a drastic (50-basis-point) rate cut by the Federal Reserve with essentially no reaction from the stock market. Pundits have charged most of this volatility to economic ramifications of the developing COVID-19 coronavirus pandemic. That is sorta true, but it doesn’t tell the whole story.
First of all, none of this behavior is either unexpected or irrational. Well, the Fed rate cut was pretty irrational, but they were just doin’ what they can. It didn’t work because it was pointless. The Fed funds rate has no connection to supply-network operations, and the pandemic’s (yes, we’re in a pandemic) economic effects are mostly supply-network disruptions. Nobody paid attention to the rate cut because anybody with enough business background to be involved in the stock market knew enough not to be fooled. I expect the Fed governors were just trying to make Donald Trump feel good because he seems to think everyone is even stupider than he is. With that mindset, he’d expect investors to be fooled and react accordingly. It didn’t happen because folks aren’t as dumb as he thinks they are. Well, maybe his base, but that’s a rant for a different day.
Clearly, the rise of COVID-19 has trashed China’s economy for 2020, and the economic contagion is spreading through global supply chains to other economies faster than this fast-moving virus is spreading through the world’s no-longer-isolated populations. This highlights two important characteristics of global business in the 21st century:
All national economies that are big enough to be called “economies” are inextricably interconnected;
The supply networks we’ve built up are entirely too brittle.
The reason supply networks are so important is that MNEs are essentially global supply networks, as shown in the figure above. There is a central node representing the MNE brand, such as Apple, General Motors, or Texaco, which organizes the whole mess, but it all starts with a bunch of raw-material providers. They feed a bunch of intermediate-product processors (makers of subassemblies or assemblies in a manufacturing environment), which feed finished, customer-ready end products to the central MNE node for downstream distribution. From that central node, products get shipped through a distribution (wholesale) network to final retail customers (consumers). Folks persist in calling these things “supply chains,” but they’re really networks. A supply chain is just a supply network set up as a linear chain, with only one node at each step from subassembly to consumer.
Unlike chains, which are famously only as strong as their weakest link, networks, such as the Internet, can be, and generally are, self-healing. Instead of breaking whenever the weakest node fails, self-healing networks quickly adjust to keep whatever’s flowing through the network flowing. Think of it as the difference between a pipe and a river. Water in a pipe stops moving whenever the pipe gets clogged. A river, however, adjusts by diverting water through an alternate channel. Try it next time you run across water flowing in a ditch or gutter by the side of a road. No matter how you try to block it, the flow finds some way to circumvent the obstacle.
This self-healing characteristic comes from network-organizational rules that provide alternative pathways to circumvent nodes (e.g., subassembly suppliers) that temporarily or permanently fail (Hee-won & Ho-Shin, 2017; Huang & Wang, 2013). This is the difference between a robust supply network and a brittle supply chain. While little can be done about reorganizing MNE supply networks in the middle of a crisis, it is important that we recognize the economic catastrophe accompanying the looming COVID-19 pandemic as an unnecessary vulnerability that we can correct in the future. We need to think about self-healing networks when designing global MNEs.
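To make the chain-versus-network distinction concrete, here is a minimal sketch. The node names and graph shapes are toy assumptions of mine, not anything from the cited papers; the point is simply that a network with alternative pathways still routes goods to the MNE hub after a supplier node fails, while a linear chain does not.

```python
from collections import deque

def has_route(graph, start, goal, failed=frozenset()):
    """Breadth-first search: is there any path from start to goal
    that avoids the failed nodes?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy supply network: two subassembly suppliers feed the MNE hub.
supply_net = {
    "raw": ["supplier_a", "supplier_b"],
    "supplier_a": ["mne_hub"],
    "supplier_b": ["mne_hub"],
}
# A linear supply chain: only one node at each step.
supply_chain = {"raw": ["supplier_a"], "supplier_a": ["mne_hub"]}

print(has_route(supply_net, "raw", "mne_hub", failed={"supplier_a"}))    # True: flow reroutes
print(has_route(supply_chain, "raw", "mne_hub", failed={"supplier_a"}))  # False: chain breaks
```

Knock out supplier_a and the network finds the alternate channel, exactly like the river circumventing an obstacle; the chain just stops flowing.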
Hee-won, K., & Ho-Shin, C. (2017). SOUNET: Self-organized underwater wireless sensor network. Sensors, 17(2), 283.
Huang, M. J., & Wang, T. (2013). Self-healing research of ZigBee network based on coordinator node isolated. Applied Mechanics and Materials, 347-350, 2089.
26 February 2020 – This essay is a transcription of a paper I wrote last week as part of my studies for a Doctor of Business Administration (DBA) at Keiser University.
Developing a theory that quantitatively determines the rate of exchange between two fiat currencies has been a problem since the Song dynasty, when China’s Jurchen neighbors to the north figured out that they could emulate China’s Tang-dynasty innovation of printing fiat money on paper (Onge, 2017). With two currencies to exchange, some exchange rate was needed. This essay looks to Song-Dynasty economic history to find reasons why foreign exchange (forex) rates are so notoriously hard to predict. The analytical portion starts from the proposition that money itself is neutral (Patinkin & Steiger, 1989), and incorporates recently introduced ideas about money (de Soto, 2000; Masi, 2019), and concludes in favor of the interest rate approach for forex-rate prediction (Scott Hacker, Karlsson, & Månsson, 2012).
After the introduction of paper money, the Song Chinese quickly ran into the problem of inflation due to the activities of rent seekers (Onge, 2017). Rent seeking is an economics term that refers to attempts to garner income from non-productive activities, and it has been around since at least the early days of agriculture (West, 2008). The Greek poet Hesiod complained about it in what has been called the first economics text, Works and Days, in which he said, “It is from work that men are rich in flocks and wealthy … if you work, it will readily come about that a workshy man will envy you as you become wealthy” (West, 2008, p. 46).
Repeated catastrophes arose for the Song Chinese after socialist economist Wang Anshi, prime minister from 1069 to 1076, taught officials that they could float government expenditures by simply cranking up their printing presses to flood the economy with fiat currency (Onge, 2017). Inflation exploded while productivity collapsed. The Jurchens took advantage of the situation by conquering the northern part of China’s empire. After they, too, destroyed their economy by succumbing to Wang’s bad advice, the Mongols came from the west to take over everything and confiscate the remaining wealth of the former Chinese Empire to fund their conquest of Eurasia.
Neutrality of Money
The proposition that money is neutral comes from a comment by John Stuart Mill, who, in 1871, wrote that “The relations of commodities to one another remain unaltered by money” (as cited in Patinkin & Steiger, 1989, p. 239). In other words, if a herdsman pays a farmer 50 cows as bride price for one of the farmer’s daughters, it makes no difference whether those 50 cows are worth 100 gold shekels or 1,000: the wife is still worth 50 cows! One must always keep this proposition in mind when thinking about foreign exchange rates, and money in general. (Apologies for using a misogynistic example treating women as property, but we’re trying to drive home the difference between a thing and its monetary value.)
Another concept to keep in mind is Hernando de Soto’s (2000) epiphany that a house is just a shelter from the weather until it is secured by a property title. He envisioned that such things as titles inhabit what amounts to a separate universe parallel to the physical universe where the house resides. Borrowing a term from philosophy, one might call this a metaphysical universe made up of metadata that describes objects in the physical universe. de Soto’s idea was that existence of the property-title metadata turns the house into wealth that can become capital through the agency of beneficial ownership.
If one has beneficial ownership of a property title, one can encumber it by, for example, using it to secure a loan. One can then invest the funds derived from that loan into increased productive capacity of a business–back in the physical universe. Thus, the physical house is just an object, whereas the property title is capital (de Soto, 2000). It is the metaphysical capital that is transferable, not the physical property. In the transaction between the farmer and the herdsman above, what occurred was a swap between the two parties of de-Sotoan capital derived from beneficial ownership of the cattle and of the daughter, and it happened in the metaphysical universe.
What Is Money, Really?
Much of the confusion about forex rates arises from conflating capital and money. Masi (2019) speculated that money in circulation (e.g., M1) captures only half of what money really is. Borrowing concepts from both physics and double-entry bookkeeping, he equated money with a two-part conserved quantity he referred to as credit/debit. (Note that here the words “credit” and “debit” are not used strictly according to their bookkeeping definitions.) Credit arises in tandem with creation of an equal amount of debit. Thus, the net amount of money (equaling credit-minus-debit) is always the same: zero. A homeowner raising funds through a home-equity line of credit (HELOC) does not affect his or her total wealth. The transaction creates funds (credit) and debt (debit) in equal amounts. Similarly, a government putting money into circulation, whether by printing pieces of paper, or by making entries in a digital ledger, automatically increases the national debt.
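Masi’s credit/debit idea can be sketched as a toy ledger. The class and method names below are hypothetical illustrations of mine, not anything from Masi’s essay; the sketch just shows the conservation rule: every funding event creates credit and debit in equal amounts, so net money never moves off zero.

```python
class Ledger:
    """Toy model of money as a two-part conserved quantity."""

    def __init__(self):
        self.credit = 0.0  # funds created
        self.debit = 0.0   # debt created alongside them

    def create_funds(self, amount):
        # A HELOC draw, or a government printing currency:
        # credit and debit always arise in tandem.
        self.credit += amount
        self.debit += amount

    def net_money(self):
        # Credit minus debit is always zero, no matter how
        # much money has been put into circulation.
        return self.credit - self.debit

home = Ledger()
home.create_funds(50_000)  # homeowner draws on a HELOC
print(home.credit, home.debit, home.net_money())  # 50000.0 50000.0 0.0
```

The homeowner’s total wealth is unchanged by the draw: the funds and the debt appear together, and only the (de-Sotoan) capital side of the picture can actually grow or shrink.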
Capital, on the other hand, arises, as de Soto (2000) explained, as metadata associated with property. The confusion comes from the fact that both capital and money are necessarily measured in the same units. While capital can increase through, say, building a house, or it can decrease by, for example, burning a house down, the amount of money (as credit/debit) can never change. It’s always a net zero.
The figure above shows how de Soto’s (2000) and Masi’s (2019) ideas combine. The cycle begins on the physical side with beneficial ownership of some property. On the metaphysical side, that beneficial ownership is represented by capital (i.e., property title). That capital can be used to secure a loan, which creates credit and debit in equal amounts. The beneficial owner is then free to invest the credit in beneficial ownership of a productive business back on the physical side. The business generates profits (e.g., inventory) that the owner retains as an increase in property.
The debit that was created along the way stays on the metaphysical side as an encumbrance on the total capital. The system is limited by the quantity of capital that can be encumbered, which limits the credit that can be created to fund continuing operations. The system grows through productivity of the business, which increases the property that can be represented by new capital, which can be encumbered by additional credit/debit creation, which can then fund more investment, and so forth. Note that the figure ignores, for simplicity, ongoing investment required to maintain the productive assets, and interest payments to service the debt.
Wang’s mismanagement strategy amounted to deficit spending–using a printing press to create credit/debit faster than the economy could generate profit to be turned into an increasing stock of capital (Onge, 2017). Eventually, the debt level rises to encumber the entire capital supply, at which point no new credit/debit can be created. Continued running of Wang’s printing press merely creates more fiat money to chase the same amount of goods: inflation. Thus, inflation arises when the ratio of money creation to capital creation exceeds one.
In Song China, investment collapsed due to emphasis on rent seeking, followed by collapsing productivity (Onge, 2017). Hyperinflation set in as the government cranked the printing presses just to cover national-debt service. Finally, hungry outsiders, seeing the situation, swooped in to seize the remaining productive assets. First it was the Jurchens, then the Mongols.
Forex and Hyperinflation
The Song Chinese quickly saw Wang’s mismanagement at work, and kicked him out of office (Onge, 2017). They, however, failed to correct the practices he’d introduced. Onge (2017) pointed out that China’s GDP per person at the start of the Song dynasty was greater than that of 21st-century Great Britain. Under Wang’s policies, decline set in around 1070–80, and GDP per person had fallen by 23% by 1120. Population growth changed to decline. Productivity cratered. Inflation turned to hyperinflation. The Jurchen, without the burden of Wang’s teachings, were slower to inflate their currency.
As Chinese inflation increased relative to that of the Jurchen, exchange rates between Jurchen and Chinese currencies changed rapidly. The Jurchen fiat currency became stronger relative to that of the Chinese. This tale illustrates how changes in forex rates follow relative inflation between currencies, and argues for using the interest rate approach to predict future equilibrium forex rates (Scott Hacker, et al., 2012).
Forex rates are free to fluctuate because money is neutral (Patinkin & Steiger, 1989). Viewing money as a conserved two-fluid metaphysical quantity (Masi, 2019) shows how a country’s supply of de-Sotoan capital constrains the money supply, and shows how an economy grows through profits from productive businesses (de Soto, 2000). It also explains inflation as an attempt to increase the money supply faster than the capital supply can grow. The mismatch of relative inflation affects equilibrium forex rates by increasing strength of one currency relative to another, and argues for the interest-rate approach to forex theory (Scott Hacker, et al., 2012).
de Soto, H. (2000). The mystery of capital. New York, NY: Basic Books.
Masi, C. G. (2019, June 19). The fluidity of money [Web log post]. Retrieved from http://cgmblog.com/2019/06/19/the-fluidity-of-money/
Onge, P. S. T. (2017). How paper money led to the Mongol conquest: Money and the collapse of Song China. The Independent Review, 22(2), 223-243.
Patinkin, D., & Steiger, O. (1989). In search of the “veil of money” and the “neutrality of money”: A note on the origin of terms. Scandinavian Journal of Economics, 91(1), 131.
Scott Hacker, R., Karlsson, H. K., & Månsson, K. (2012). The relationship between exchange rates and interest rate differentials: A wavelet approach. World Economy, 35(9), 1162–1185.
West, M. L. (Ed.). (2008). Hesiod: Theogony and works and days. Oxford, UK: Oxford University Press.
9 February 2020 – I’m about halfway through a course on global economics at Keiser University, and one of this week’s assigned readings is a 2012 article by Argentine-American legal scholar Fernando R. Tesón discussing his views on the ethical basis of free trade. I was particularly struck by the wording of his conclusion section:
More often, trade barriers allow governments to transfer resources in favor of rent-seekers and other political parasites. … Developed countries deserve scorn for not opening their markets to products made by the world’s poor by protecting their inefficient industries, while ruling elites in developing nations deserve scorn for allowing bad institutions, including misguided protectionism. (p. 126)
This was unusually blunt for a scholarly article! Tesón, however, did a good job of making his case. Citing David Ricardo’s and Heckscher-Ohlin’s theories of comparative advantage, he provided a well-thought-out, if impassioned, argument that trade barriers are misguided at best, and at worst unconscionable. Among the practices he heaped scorn upon are “tariffs, import licenses, export licenses, import quotas, subsidies [emphasis added], government procurement rules, sanitary rules, voluntary export restraints, local content requirements, national security requirements, and embargoes” (Tesón, 2012, p. 126).
Generally, that was a defensible list. All of those practices tend to skew market-based purchase decisions toward goods produced by firms lacking true competitive advantage. The case against subsidies, however, is not so simple. There are various reasons for creating subsidies and various ways of applying them. Not all are counterproductive from an economic-development standpoint.
Stephen Redding, in a 1999 article entitled “Dynamic comparative advantage and the welfare effects of trade” pointed out that comparative advantage is actually a dynamic thing. That is, it varies with time, and producers can, through appropriate investments, artificially create comparative advantages that are every bit as real as the comparative-advantage endowments that the earlier theorists described.
The original Ricardian model envisioned countries endowed with innate comparative advantages for producing some good(s) relative to producing the same good(s) in another country (Kang, 2018). Redding pointed out that a country’s productivity for manufacturing some good increases with time (experience) spent producing it. He posited that if the country’s competitors’ comparative advantage for producing that good is not great, it may be possible for the country to, through investing in or subsidizing development of an improved production process, overtake its competitors. In this way, Redding asserted, the relative competitive advantage/disadvantage situation may be reversed.
The counterargument to subsidizing such a project is that the subsidy has an opportunity cost in that the subsidy uses funds exacted from the country’s taxpayers to benefit one or more selected firms. Tesón’s position is that this would be an inappropriate use of taxpayer funds to benefit only a small subset of the country’s citizens. This is ipso facto unfair, hence his stigmatizing such a decision. The reductio ad absurdum rejoinder to this argument is that it leaves government powerless to effect economic development.
In a democracy, government decisions are assumed to have tacit acceptance by the whole population. Thus, an action by the government to support a small group developing a comparative advantage through a subsidy must be assumed to have a positive externality for the whole population.
If the government is an autocracy or oligarchy, there is no legitimate claim to fairness for any of its decisions, anyway, so the unfairness argument is moot.
There are thus conditions under which subsidizing firms or industries to develop enhanced productive capacity for some good makes economic sense. Those conditions are as follows:
Competitors’ comparative advantage is small enough that it can be overcome with a reasonable subsidy over a reasonable length of time;
There is reason to expect the country will be able to maintain its improved comparative advantage situation after subsidies have been removed;
Achieving a comparative advantage for production of that good will have ripple effects that will generate comparative advantage throughout the economy.
If and only if all of these conditions obtain is it reasonable to create a temporary subsidy.
An example of an inappropriate subsidy is the European Union’s support for Airbus, which began with the company’s launch in 1970 to create an EU-based large civil aircraft (LCA) industry to compete with the U.S.-based Boeing Company, and which continues today (European Commission, 6 October 2004). While this history indicates that item 1 on the list above was fulfilled (Airbus became an effective competitor for Boeing in the 1980s), and item 3 certainly was fulfilled, the fact that the subsidies continue today, half a century later, indicates that item 2 was not fulfilled.
On the other hand, the myriad salutary effects that came out of the Polaris missile program of the mid-20th Century shows that all three conditions were valid for that government-subsidized project (Engwall, 2012).
Engwall, M. (2012). PERT, Polaris, and the realities of project execution. International Journal of Managing Projects in Business, 5(4), 595-616.
European Commission. (6 October 2004). EU – US Agreement on Large Civil Aircraft 1992: key facts and figures. (MEMO/04/232). Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/MEMO_04_232
Kang, M. (2018). Comparative advantage and strategic specialization. Review of International Economics, 26(1), 1–19.
Redding, S. (1999). Dynamic comparative advantage and the welfare effects of trade. Oxford Economic Papers, 51, 15-39.
Tesón, F. R. (2012). Why free trade is required by justice. Social Philosophy & Policy, 29(1), 126-153.
14 December 2019 – The following essay is a verbatim copy of one I recently posted to a Global Business discussion site in response to a link emailed to me by Dr. Tiffany Jordan of Keiser University.
Thank you, TJ, for sending along a link to Steve Sjuggerud’s documentary on Chinese development. History teaches us that 5,000 years ago, China was one of two (maybe three, if you count Central America) population centers (the other was Egypt) where folks independently invented civilization. You can’t go far wrong by betting on people that smart!
The second factor in this story is that one out of six human beings on this planet is Chinese. With that many really smart people let loose to work together, they’re bound to push the limits of economic development. The last time that happened anywhere was in the 18th century when steam technology was let loose among the newly liberated populations of England, North America, and Europe. The resulting Industrial Revolution was a similar game changer. People from the countryside flocked to the cities to make the most of revolutionary technology, and made vast piles of wealth in the process. Sound familiar?
So, what could go wrong? The known preference of the Chinese people for large power distance is what could go wrong (Hofstede, 1993). Since Qin Shi Huang patched together the Chinese Empire in 221 BCE (Shi, 2014), the country has had a nearly unbroken record of authoritarian rule, which is why, after all this time, they’re still stuck with “emerging nation” status. The latest period of lax central control started in the mid-1970s, when Mao Zedong lost control of his Marxist People’s Republic (PRC), and good things started happening in China.
China is home to two philosophies at opposing ends of the power-distance spectrum: Taoist egalitarianism and Confucian formality (Carnogurská, 2014). Taoists insist (among other things) on individual self-rule. Confucianists insist on respect for authority (Zhou, 2011). You can guess which philosophy Xi Jinping’s power-grabbing PRC favors! It is no accident that the slowing of China’s economic expansion immediately followed Xi’s re-institution of central authority. The stark contrast can be seen in the difference between the miracle on the Chinese mainland and the even-bigger miracle that has been playing out in Hong Kong.
I’m always ambivalent, however, about investing in the Chinese “miracle.” Back in the early 1990s I was asked to duplicate my success helping expand an American electronics publication into Europe by doing the same thing in China. With images from the Tiananmen Square events fresh in my mind, I declined. Unlike my corporate bosses, I just didn’t trust the PRC leadership to play nice. That corporation is now out of the publishing business! I’d done the same thing in the 1970s when I declined the last Shah of Iran’s invitation to take our Boston-based Physics Department to Tehran University–just before their revolution broke out. (Whew!)
China is not Iran, and Xi Jinping is not Mohammad Reza Shah. Pres. Xi likes leading the fastest-growing economy on the planet, but is facing his big test with current events in Hong Kong. Will he figure a way to defuse that uprising, or will his unenlightened cronies in Beijing push him into a disastrous reprise of Tiananmen Square? I’m not jumping onto the Chinese bandwagon until I see the result.
Carnogurská, M. (2014). Xunzi, an ingeniously critical synthesist of Chinese philosophy of the pre-Qin period. Journal of Sino – Western Communications, 6(1), 3-25.
Hofstede, G. (1993). Cultural constraints in management theories. Executive, 7(1), 81–94.
Shi, J. (2014). Incorporating all for one: The first emperor’s tomb mound. Early China, 37(1), 359-391.
Zhou, H. (2011). Confucianism and the legalism: A model of the national strategy of governance in ancient China. Frontiers of Economics in China, 6(4), 616-637.
30 October 2019 – The essay below was posted to the Keiser University DBA 710 Week 8 Discussion Forum. It is reproduced here in the hope that readers of this blog will find this peek into state-of-the-art management research interesting.
This posting is a bit off topic for Week 8, but it reviews a paper that didn’t cross my desk in time to be included in last week’s discussions, where it would have been more appropriate. In fact, the copy of the paper I received was a manuscript version of a paper accepted by the journal Organizational Psychology Review that is at the printer now.
The paper, written by an Australian-German team, covers recent developments in measuring variables apropos management of decision teams in various situations (Klonek, Gerpott, Lehmann-Willenbrock, & Parker, in press). As we saw last week, there is a lot of work to be done on metrology of leadership and management variables. The two main metrology-tool classifications are case studies (Pettigrew, 1990) and surveys (Osei-Kyei & Chan, 2018). Both involve time lags that make capturing data in real time and assuring its freedom from bias impossible (Klonek et al., in press). Decision teams, however, present a dynamic environment where decision-making processes evolve over time (Lu, Gao, & Szymanski, 2019). Adequately studying such processes requires making time-resolved measurements quickly enough to follow these dynamic changes.
Recent technological advances change that situation. Wireless sensor systems backed by advanced data-acquisition software make it possible to unobtrusively monitor team members’ activities in real time (Klonek et al., in press). The paper describes how management scholars can use these tools to capture useful information for making and testing management theories. It provides a step-by-step breakdown of the methodology, including determining the appropriate time-resolution target, choosing among available metrology tools, capturing data, organizing data, and interpreting data. It covers working on time scales from milliseconds to months, as well as mixed time scales. Altogether, the paper provides invaluable information for anyone intending to link management theory and management practice in an empirical way (Bartunek, 2011).
Bartunek, J. M. (2011). What has happened to Mode 2? British Journal of Management, 22(3), 555–558.
Klonek, F.E., Gerpott, F., Lehmann-Willenbrock, N., & Parker, S. (in press). Time to go wild: How to conceptualize and measure process dynamics in real teams with high resolution? Organizational Psychology Review.
Lu, X., Gao, J., & Szymanski, B. (2019). The evolution of polarization in the legislative branch of government. Journal of the Royal Society Interface, 16, 20190010.
Osei-Kyei, R., & Chan, A. (2018). Evaluating the project success index of public-private partnership projects in Hong Kong. Construction Innovation, 18(3), 371-391.
Pettigrew, A. M. (1990). Longitudinal Field Research on Change: Theory and Practice. Organization Science, 1(3), 267–292.
18 September 2019 – The following essay is taken verbatim from a posting I made to the discussion forum for a class in my Doctor of Business Administration program at Keiser University.
For those who were disappointed by my not posting to this blog last week, I apologize. Doctoral programs are very intensive and I’ve found myself overloaded with work. I’ve had to prioritize, and regular postings to this blog are one of the things I’ve had to cut back. When something crosses my desk that I think readers of this blog might find particularly interesting, I’ll try to take time to post it here and let folks know about it through my Linkedin and Facebook accounts.
In the essay below I suggest an extension to a method for understanding human motivation using applied mathematics techniques. What, you didn’t think that was possible? Read on!
Almost at random, I happened to pick up Chung’s (1969) paper from this week’s reading list first. Since it discussed an approach to questions of motivation that I find particularly interesting, I was inspired to jump in and discuss my reaction to it immediately.
The approach Chung took was to use applied mathematics (AM) techniques for analyzing motivation. Anyone not steeped in AM methods could be excused for being surprised that the field could have anything to say about motivation. On the surface, motivation might seem completely qualitative, so how could mathematical techniques be at all useful for analyzing it?
In fact, quantification of anything that you can rank is possible. For example, Zheng and Jiang (2017) discussed methods of quantifying species diversity in ecosystems. The fact that you can say this ecosystem is more diverse than that one means that ecosystem diversity is quantifiable.
Similarly, the fact that you can say that such-and-such a person is more motivated to do something than some other person indicates that motivation is quantifiable as well. Before proposing his Markov-chain model, Chung (1969) discussed five other analytical methods for studying motivation based on Maslow’s hierarchy, each of which he introduced by describing some method of quantifying motivation.
It happens that I am quite familiar with the mathematics Chung (1969) used. It is called linear algebra, and is a staple technique for analyzing theoretical physics problems. I started my career as an astrophysicist, so Chung’s paper is right in my intellectual wheelhouse. Reading it stimulated me to think: “Yeah, but what about …?”
What Chung’s analysis left out was how human motivation is subject to chaotic exogenous forces. I’ve more than once used the following thought experiment to illustrate this phenomenon. Imagine Albert Einstein scratching away at General Relativity Theory on the blackboard in his office. I mention Einstein particularly because he was known to be fond of thought experiments, so including him in this one seems appropriate. So, Einstein is totally absorbed in his work puzzling out GRT. Maslow would say that he is motivated at the “self-actualization” level. Suddenly, our hero realizes that it’s lunch time because his body signals a physiological need for a ham sandwich. An exogenous event (lunchtime) has modified Einstein’s needs state.
In Chung’s (1969) analysis, Einstein’s transition matrix P has suddenly switched from having element values that cause Einstein’s needs vector N to remain stable at Maslow’s level five to values that cause his needs to switch to level one at the next transition. At that point, Einstein puts down his chalk and roots around in his briefcase for the ham sandwich he knows his wife put in there this morning.
So, how would we handle this situation from a linear algebra standpoint? Using Chung’s (1969) notation, the transition from the ith state to the (i+1)th state is given by Equation 1:
N_{i+1} = N_i P     (1)
I’ve modified the notation slightly by writing vectors in regular italic typeface and matrices in bold italic typeface. That satisfies my need to have vectors and matrices symbolized in different typefaces. It’s a stability thing for me, so it’s down at Maslow’s level two (Chung, 1969) in my personal hierarchy of needs.
What we need now is to modify the transition matrix by applying another matrix that isolates the effect of the exogenous event. If we add a subscript 0 to specify the original transition matrix, and multiply it by a new matrix X that accounts specifically for the exogenous event, we get a new transition matrix given by Equation 2:
P = P_0 X          (2)
Finally, Equation 1 becomes Equation 3.
N_{i+1} = N_i P_0 X          (3)
What is left to do is to develop methods of determining numerical values for the elements of these vectors and matrices in specific situations. This addition shows how to extend Chung’s (1969) Markov-chain model to situations where life events modify an individual’s motivational outlook. Such events can be anything from time reaching the lunch hour to the individual becoming a parent.
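To make the mechanics concrete, here is a minimal NumPy sketch of Equations 1 through 3. The matrix values are illustrative inventions of mine, not numbers from Chung (1969): the baseline transition matrix holds Einstein at level five, and the lunchtime matrix X dumps every level into level one.

```python
import numpy as np

# Needs vector over Maslow's five levels (physiological ... self-actualization).
# Einstein is absorbed in GRT, so all his weight sits at level five.
N = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# Baseline transition matrix P0: stay put at whatever level you're at.
P0 = np.eye(5)

# Exogenous-event matrix X for "lunchtime": whatever level you're at,
# you drop to level one (physiological) at the next transition.
X = np.zeros((5, 5))
X[:, 0] = 1.0

P = P0 @ X       # Equation 2: modified transition matrix
N_next = N @ P   # Equations 1 and 3: the next needs state

print(N_next)  # -> [1. 0. 0. 0. 0.] : time to dig out the ham sandwich
```

The row vector times matrix convention here follows Equation 1, with the needs vector premultiplying the transition matrix.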
Chung, K. H. (1969). A Markov Chain Model of Human Needs: An Extension of Maslow’s Need Theory. Academy of Management Journal, 12(2), 223–234.
Zheng, L., & Jiang, J. (2017). A New Diversity Estimator. Journal of Statistical Distributions and Applications, 4(1), 1-13.
4 September 2019 – I’m in the early stages of a long-term research project for my Doctor of Business Administration (DBA) degree. Hopefully, this research will provide me with a dissertation project, but I don’t have to decide that for about a year. And, in the chaotic Universe in which we live a lot can, and will, happen in a year.
I might even learn something!
And, after learning something, I might end up changing the direction of my research. Then again, I might not. To again (as I did last week) quote Winnie the Pooh: “You never can tell with bees!”
No, this is not an appropriate forum for publishing academic research results. For that we need peer-reviewed scholarly journals. There are lots of them out there, and I plan on using them. Actually, if I’m gonna get the degree, I’m gonna have to use them!
This is, however, an appropriate forum for summarizing some of my research results for a wider audience, who might just have some passing interest in them. The questions I’m asking affect a whole lot of people. In fact, I dare say that they affect almost everyone. They certainly can affect everyone’s thinking as they approach teamwork at home and at work, as well as how they consider political candidates asking for their votes.
For example, a little over a year from now, you’re going to have the opportunity to vote for who you want running the United States Government’s Executive Branch as well as a few of the people you’ll hire (or re-hire) to run the Legislative Branch. Altogether, those guys form a fairly important decision-making team. A lot of folks have voiced disapprobation of how the people we’ve hired in the past have been doing those jobs. My research has implications for what questions you ask of the bozos who are going to be asking for your votes in the 2020 elections.
One of the likely candidates for President has shown in words and deeds over the past two years (actually over the past few decades, if you care to look that far into his past) that he likes to make decisions all by his lonesome. In other words, he likes to have a decision team numbering exactly one member: himself.
Those who have paid attention to this column (specifically the posting of 17 July) can easily compute the diversity score for a team like that. It’s exactly zero.
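A team of one is the degenerate case of the Gini-Simpson index from that 17 July posting. Here is a minimal sketch (the function name and the example categories are my own):

```python
from collections import Counter

def gini_simpson(attributes):
    """Gini-Simpson diversity: 1 minus the sum of squared category shares.

    `attributes` is a list of category labels, one per team member.
    """
    counts = Counter(attributes)
    total = len(attributes)
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

print(gini_simpson(["himself"]))                       # 0.0, a team of one
print(gini_simpson(["eng", "eng", "law", "finance"]))  # 0.625
```

With a single member there is only one category, its share is 1, and the index collapses to exactly zero, no matter what attribute you measure.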
When looking at candidates for the Legislative Branch, you’ll likely encounter candidates who’re excessively proud to promise that they’ll consult that Presidential candidate’s whims regarding anything, and support whatever he tells them he wants. Folks who paid attention to that 17 July posting will recognize that attitude as one of the toxic group-dynamics phenomena that destroy a decision team’s diversity score. If we elect too many of them to Congress and we vote Bozo #1 back into the Presidency, we’ll end up with another four years of the effective diversity of the U.S. Government decision team being close to or exactly equal to zero.
Preliminary results from my research – looking at published studies of what diversity, or the lack thereof, does to project outcomes – indicate that decision teams with zero effective diversity are dumber than a box of rocks. Nobody’s done the research needed to make that statement look anything like Universal Truth, but several researchers have looked at outcomes of a lot of projects. They’ve all found that more diverse teams do better.
Anyway, what this research project is all about is studying the effect of team-member diversity on decision-team success. For that to make sense, it’s important to define two things: diversity and success. Even more important is to make them measurable.
I’ve already posted about how to make both diversity and success measurable. On 17 July I posted a summary of how to quantify diversity. On 7 August I posted a summary of my research (so far) into quantifying project success as well. This week I’m posting a summary of how I plan to put it all together and finally get some answers about how diversity really affects project-development teams.
What I’m hoping to do with this research is to validate three hypotheses. The main hypothesis is that diversity (as measured by the Gini-Simpson index outlined in the 17 July posting) correlates positively with project success (as measured by the critical success index outlined in the 7 August posting). A secondary hypothesis is that four toxic group-dynamic phenomena reduce a team’s ability to maximize project success. A third hypothesis is that there are additional unknown or unknowable factors that affect project success. The ultimate goal of this research is to estimate the relative importance of these factors as determinants of project success.
Understanding the methodology I plan to use begins with a description of the information flows within an archetypal development project. I then plan on conducting an online survey to gather data on real world projects in order to test the hypothesis that it is possible to determine a mathematical function that describes the relationship between diversity and project success, and to elucidate the shape of such a function if it exists. Finally, the data can help gauge the importance of group dynamics to team-decision quality.
The figure above schematically shows the information flows through a development project. External factors determine project attributes. Personal attributes, such as race, gender, and age combine with professional attributes, such as technical discipline (e.g., electronics or mechanical engineering) and work experience to determine raw team diversity. Those attributes combine with group dynamics to produce an effective team diversity. Effective diversity affects both project planning and project execution. Additional inputs from stakeholder goals and goals of the sponsoring enterprise also affect the project plans. Those plans, executed by the team, determine the results of project execution.
The proposed research will gather empirical data through an online survey of experienced project managers. Following the example of researchers van Riel, Semeijn, Hammedi, & Henseler (2011), I plan to invite members of the Project Management Institute (PMI) to complete an online survey form. Participants will be asked to provide information about two projects that they have been involved with in the past – one they consider to be successful and one that they consider less successful. This is to ensure that data collected includes a range of project outcomes.
There will be four parts to the survey. The first part will ask about the respondent and the organization sponsoring the project. The second will ask about the project team and especially probe the various dimensions of team diversity. The third will ask about goals expressed for the project both by stakeholders and the organization, and how well those goals were met. Finally, respondents will provide information about group dynamics that played out during project team meetings. Questions will be asked in a form similar to that used by van Riel, Semeijn, Hammedi, & Henseler (2011): Respondents will rate their agreement with statements on a five- or seven-step Likert scale.
The portions of the survey that will be of most importance will be the second and third parts. Those will provide data that can be aggregated into diversity and success indices. While privacy concerns will make it important to mask the identities of individuals, companies, and projects, it will be critical to preserve the links between individual projects and the data describing those projects’ results.
This will allow creating a two-dimensional scatter plot with indices of team diversity and project success as independent and dependent variables respectively. Regression analysis of the scatter plot will reveal to what extent the data bear out the hypothesis that team diversity positively correlates with project success. Assuming this hypothesis is correct, analysis of deviations from the regression curve (n-way ANOVA) will reveal the importance of different group dynamics effects in reducing the quality of team decision making. Finally, I’ll need to do a residual analysis to gauge the importance of unknown factors and stochastic noise in the data.
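Assuming the linear form of the main hypothesis, the regression-plus-residuals step might look like the following sketch, run here on made-up data standing in for the survey results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey aggregates: one diversity index and one success
# index per reported project. These numbers are invented for illustration.
diversity = rng.uniform(0.0, 0.8, size=40)
success = 0.4 + 0.6 * diversity + rng.normal(0.0, 0.05, size=40)

# Fit the hypothesized linear relationship between the two indices.
slope, intercept = np.polyfit(diversity, success, 1)

# The residuals are where the group-dynamics effects, unknown factors,
# and stochastic noise hide; they're the input to the later ANOVA step.
residuals = success - (slope * diversity + intercept)
print(f"slope={slope:.2f}, residual std={residuals.std():.3f}")
```

A positive fitted slope on the real survey data would support the main hypothesis; structure remaining in the residuals would point to the group-dynamics and unknown-factor hypotheses.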
Altogether, this research will test the three hypotheses listed above. It will also provide a standard methodology for researchers who wish to replicate the work in order to verify or extend it. Of course, validating the link between team diversity and decision-making success would have broad implications for designing organizations for best performance in all arenas of human endeavor.
de Rond, M., & Miller, A. N. (2005). Publish or perish: Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321-329.
van Riel, A., Semeijn, J., Hammedi, W., & Henseler, J. (2011). Technology-based service proposal screening and decision-making effectiveness. Management Decision, 49(5), 762-783.
28 August 2019 – The short answer is, to quote Pooh Bear in A.A. Milne’s Winnie-the-Pooh, “You never can tell with bees!” Or, with advancing technology, for that matter. Last week, however, the Analytics Team at Autolist published results of a survey of 1,567 current car shoppers that might shed some light on the question of whether electric vehicles (EVs) can fully replace vehicles with internal combustion engines (ICEs).
The Analytics Team asked survey respondents what were their biggest reasons to not buy an electric vehicle. By looking at the results, we can project when, how, and if e-vehicle technology can ever surmount car-shoppers’ objections.
The survey results were spectacularly unsurprising. The top three barriers to purchasing an electric vehicle were:
Concerns about lack of adequate range;
E-vehicles’ relatively high cost compared to similar gas vehicles; and
Concerns about charging infrastructure.
Anybody following the development of electric vehicles already knew that. Most folks could even peg the order of concern. What was somewhat surprising, though, is how little folks’ trepidation dropped off for less significant concerns. Approximately 42% of respondents cited adequate range as a concern. The score dropped only to about 14% for the ninth-most-concerning worry: being unhappy with choices of body style.
What that means for development of electric-vehicle technology is that resolving the top three issues won’t do the job. Resolving the top three issues would just elevate the next three issues to top-concern status for 25-30% of potential customers. That’s still way too high to allow fully replacing ICE-powered vehicles with EVs, as nine European countries (so far) have announced they want to do between 2020 and 2050.
Looking at what may be technologically feasible could give a glimpse of how sane or insane such ICE bans might be. What we can do is go down the list and speculate on how tough it will be to overcome each obstacle to full adoption. The Pareto chart above shows the “floor” under folks’ resistance if any of these issues remains unresolved.
Top Three Issues
By inspection the Pareto chart shows natural breaks into three groups of three. The top three concerns (range, cost, and charging) all concern roughly 40% of respondents. That’s approximately the size of the political base that elected Donald Trump to be President of the United States in 2016.
I mention Trump’s political base to give perspective for how important a 40% rating really is. Just as 40% acceptance got Trump over the top in a head-to-head competition with Hillary Clinton, a 40% non-acceptance is enough to doom electric vehicles in a head-to-head competition with ICE-powered vehicles. So, what are the chances of technologically fixing those problems?
Lack of Range is just a matter of how much energy you can backpack onto an electric vehicle. The inputs to that calculation are how far you can drive on every Joule of energy (for comparison, 3,600 Joules equal one Watt-hour of energy) and how many Joules can you pack into a battery that an electric vehicle can reasonably carry around. I don’t have time to research these data points today, since I have only a few hours left to draft this essay, so I’m just not going to do it.
There are two ways, however, that we can qualitatively guesstimate the result. First, note that EV makers have already introduced models that they claim can go as far on one “fill up” (i.e., recharge) as is typical for ICE vehicles. That’s in the range of 200 to 300 miles. I can report that my sportscar goes pretty close to 200 miles on a tankful of gas, and that’s adequate for most of the commuting I’ve done over my career.
The second way to guesstimate the result is to watch progress of the Formula E electric-vehicle races. Formula E has been around for about five years now (the first race was run in 2014), so we have some history to help judge the pace of technological developments.
The salient point that Formula E history makes is that battery range is improving. In previous events batteries couldn’t last a reasonable race distance. Unlike other forms of motor racing, where refueling takes just a few seconds, it takes too darn long to charge up an electric vehicle to make pit stops for refueling viable.
The solution was to have two cars for each racer. About halfway through the race, the first car’s batteries would run out of juice, and the driver would have to jump into the second car to complete the race. This uncomfortable situation lasted through the last racing season (2018).
This year, however, I’m told that the rules have been changed to require racers to complete the entire race in one car on one battery charge. That tells us that e-technology has advanced enough to allow racers to complete a reasonable race distance at a reasonable race speed on one charge from a reasonable battery pack. That means e-vehicle developers have made significant progress on the range-limitation issue. Projecting into the future, we can be confident that range limits will soon become a non-issue.
High e-vehicle cost will also soon become a non-issue. History plainly shows that if folks are serious about mass-marketing anything, purchase prices will come down to a sustainable level. While Elon Musk’s Tesla hasn’t yet shown a profit while the company struggles to produce enough cars to fill even today’s meager electric-vehicle demand, there are some very experienced and professional automobile manufacturers also in the electric-vehicle game. Anyone who thinks those guys won’t be able to solve the mass-production-at-a-reasonable-cost problem for electric vehicles just hasn’t been paying attention over the past century and a quarter. They’re gonna do it, and they’ll do it very soon!
Charging infrastructure is similarly just a matter of doing it. It didn’t take the retail-gasoline vendors long to build out infrastructure to feed ICE-powered cars. Solving the EV-charging problem is not much more difficult. You just plunk charging stations down on every corner to replace the gasoline filling stations you’re going to close down because you’ve made ICE vehicles illegal.
The top three issues don’t seem to pose any insurmountable obstacles, so we can move on to the second-tier issues of recharging time, insufficient public knowledge, and battery life. All of these concerned just under 30% of survey respondents.
Charging time is the Achilles heel for EV technology. Currently, it takes hours to recharge an electric-car’s batteries. Charging speed is a matter of power, and that’s a serious limitation. It’s the real charging-infrastructure problem!
It takes less than a minute to pump ten gallons of gasoline into my sportscar’s fuel tank. That ten gallons can deliver approximately 1.2×10⁹ Joules of energy. That’s 1.2 billion Watt-seconds!
To cram that much energy into a battery in one minute would take a power rate of 20 MW. That’s enough to power a medium-sized town of 26,000 people! Now, look at a typical gas station with eight gas pumps, and imagine each of those pumps pumping a medium-size-town’s worth of electric power into a waiting EV’s battery. Now, count the number of gas stations in your town.
That should give you some idea of the enormity of the charging-infrastructure problem that mass use of electric vehicles will create!
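Here is the arithmetic behind those figures, assuming a common rough value of about 120 MJ of chemical energy per U.S. gallon of gasoline:

```python
# Back-of-the-envelope check of the numbers in the text.
GASOLINE_J_PER_GAL = 1.2e8  # ~120 MJ per U.S. gallon, a rough standard figure

tank_joules = 10 * GASOLINE_J_PER_GAL    # 1.2e9 J in a ten-gallon fill
fill_seconds = 60                        # "less than a minute" at the pump
power_watts = tank_joules / fill_seconds # power = energy / time

print(f"{tank_joules:.1e} J delivered at {power_watts / 1e6:.0f} MW")
# -> 1.2e+09 J delivered at 20 MW
```

Real fast-charging doesn’t need to match gasoline’s one-minute fill, but the comparison shows why charging speed is a power problem, not just an energy problem.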
I’m not going to suggest any solutions to this issue. Luckily, since I don’t advocate for mass use of electric vehicles, I don’t have to solve this problem for the people who do. In the interest of addressing the rest of the issues, let’s pretend we’re liberal politicians and can wave our fairy wands to make the enormity of this issue magically disappear.
Inadequate public knowledge is a relative non-issue. Electric vehicles aren’t really difficult to understand. In fact, they should be simpler to operate than ICE vehicles, especially since the prime mover EVs use is a motor rather than an engine.
Hardly anyone I know is conscious of the difference between a motor and an engine. Everyone knows it, but doesn’t think about it. Everyone knows that to run an ICE you have to crank it with a starter motor to get it running in the first place, and then you’ve got to constantly take care not to stall it. That knowledge becomes so ingrained by the time you get a driver’s license that you don’t even think about it.
Electric motors are not engines, though. They’re motors, which means they start all by themselves as soon as you feed them power. When you brake your electric car to a stop at a stop light, it just stops! You don’t have to then keep it chunking over at idle. Stopped is stopped.
When sitting at a stop light, or waiting for your spouse to load groceries into the boot, an EV uses no power ‘cause it’s stopped. When you’re ready to go, you push on the accelerator pedal, and it just goes. No more fiddling with clutch pedals or shifting gears or using any of the other mechanical skills manual-transmission cars force us to learn and automatic-transmission cars take care of for us automatically. The biggest thing we have to learn about driving EVs is how easy it is.
There isn’t much else to learn about EVs either. Gearheads will probably want to dig into things like regenerative braking and multipolar induction motors, but just folks won’t care. If the most important thing about your ICE-powered SUV is the number of cup holders, that will all be the same in your electric-powered SUV.
Overall battery life will be an issue for years going forward, but eventually that will become a non-issue, too. Overall battery life refers to the number of times your lithium-ion battery pack can be recharged before it swells up and bursts. Ten years from now we expect to have a better solution than lithium-ion batteries, but they aren’t all that bad a solution for now, anyway.
It was annoying when the relatively small lithium-ion battery pack in your Samsung smartphone burst into flames back in 2016, and you can imagine what’ll happen if the much larger battery pack in your Tesla does the same thing when sitting in the garage under your house. But, it’ll be less of a problem than when the battery packs in airliners started going up in smoke a few years ago. We got through that and we’ll get through this!
Third-tier issues concerned 15-20% of survey respondents. They include issues around electric-motor reliability, battery materials, and vehicle designs. While they concerned relatively fewer respondents, enough people said they worried about them that they have to be addressed before EVs can fully replace ICE-powered vehicles.
Reliability concerned 20% of survey respondents. It shouldn’t. Electric motors have been around since William Sturgeon built the first practical one in 1832. They’ve proved to be extremely reliable with only two parts to wear out: the commutator brushes and the bearings. Unlike ICE power units, they need practically no regular maintenance. With modern solid-state power electronics taking the place of the old commutators, the only things left to wear out are the bearings, which take less punishment than the load-carrying wheel bearings all cars have.
Battery materials are a concern, but when viewed in perspective they shouldn’t be. Yes, lithium burns vigorously when exposed to air, and is especially flammable when exposed to water. But, gasoline burns just as vigorously when ignited by even a spark.
A tankful of gasoline can be responsible for a horrendous fire if ignited in an accident. Lithium ion batteries can cause similar mayhem, but are no more likely to do so than any other energy-storage medium.
Body size/style should not, to my mind, even be on the list. Electric-powered vehicles present fewer design constraints to coach builders than those with ICE power plants. In fact, it’s possible to design an EV chassis such that you can put any body on it that you can think of. Especially if you design that chassis with individually driven wheels, there are no drive-shaft and power-train issues to deal with.
Looking at the nine EV issues that survey respondents said would give them pause when considering the purchase of an electric vehicle rather than an ICE-powered vehicle, the only one not inevitably amenable to technological solution is the scale of the charging infrastructure. All of the others we can expect to be disposed of in short order as soon as we collectively decide we want to do it.
That charging infrastructure issue poses two problems: recharging time and recharging cost. The ten-gallon fuel tank in my sportscar typically gets me through about a week. That’s because I do relatively little commuting. I drive a round trip of about 60 miles to teach classes in Fort Myers twice a week. The rest of my driving is short local trips that burn up more than their fair share of gasoline because they’re stop-and-go driving.
In the past, I’ve had more difficult commute schedules that would have burned up a tankful of gas a day. Commuting more than 200 miles a day is almost unheard of. So, having to sit at a recharging station for hours to top up batteries in the middle of a commute would be an unusual concern for a commuter. They would top up the batteries at home overnight.
Road trips, however, are another story. On a typical road trip, most people plan to burn up two tankfuls of fuel a day in two 4-5-hour stints. That’s why most vehicles have fuel tanks capable of taking them 200-300 miles. That’s about how far you can drive in a 4-5-hour stint. So, you drive out the tank, then stop for a while, which includes spending a minute or so refilling the tank. Then you’re ready to go on the next stint.
With an electric vehicle, however, which has to sit still for hours to recharge, that just doesn’t work. Instead of taking two days to drive to Virginia to visit my daughter, the trip would take most of a week. Electric vehicles are simply not suitable for road trips unless and until we solve the problem of supplying enough electric power to an EV’s battery to supply a small town!
Then, there’s the expense. If you’re going to recharge your EV once a week (or top it off from your wall outlet every night), you’ve gotta pay for that energy at the going rate. That 1.2 billion Joules translates into 333 kilowatt-hours added to your light bill every week. At a typical U.S. electricity rate of $0.12/kWh, that’s about $40. That may not seem like much, but compare it to the $25 I typically pay for a tankful of gas.
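The cost comparison works out like this:

```python
# Weekly charging cost vs. a tankful of gas, using the figures in the text.
tank_joules = 1.2e9        # energy content of ten gallons of gasoline
kwh = tank_joules / 3.6e6  # 3.6 MJ per kilowatt-hour
rate = 0.12                # typical U.S. residential electricity, $/kWh
weekly_electric = kwh * rate

print(f"{kwh:.0f} kWh -> ${weekly_electric:.0f} vs. ~$25 for gasoline")
# -> 333 kWh -> $40 vs. ~$25 for gasoline
```

This ignores the EV motor’s much higher efficiency, which in practice means you’d need far fewer kWh to cover the same miles, so treat it as an upper bound on the electric bill.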
In conclusion, it looks like EVs will eventually do fine as dedicated commuter vehicles. They’ll cost a little more to run, but not enough to break most budgets. For road trips, however, they won’t work out well.
Thus, the answer to the question: “Can electric vehicles fully replace gas guzzlers?” is probably “No.” They’re fine for intra-city commuting, or commuting out to the suburbs, but unless Americans want to entirely forgo the possibility of taking road trips, ICE-powered vehicles will be needed for the foreseeable future.
14 August 2019 – There’s been some hand wringing in the mass media recently about negative interest rates and what they mean. Before you can think about that, however, you have to know what negative rates are and how they actually work. Journalists Sam Goldfarb and Daniel Kruger pointed out in a Wall Street Journal article on Monday (8/12) that not so long ago negative interest rates were thought impossible.
Of course, negative interest rates were never really “impossible.” They used to be considered highly unlikely, however, because nobody in their right mind would be willing to pay someone else for taking money off their hands. I mean, would you do it?
But, the world has changed drastically over the past, say, quarter century. Today, so-called “investors” think nothing of buying stock in giant technology companies, such as Tesla, Inc. that have never made a dime of profit and have no prospects of doing so in the near future. Such “investors” are effectively giving away their money at negative interest rates.
Buying stock in an unprofitable enterprise makes sense if you believe that the enterprise will eventually become profitable. Or, and this is a commonly applied strategy, you believe the market value of the stock will rise in the future, when you can sell it to somebody else at a profit. This latter strategy is known as the “bigger fool theory.” This theory holds that doing something that stupid is a good idea as long as you believe you’ll be able to find a “bigger fool” to take your stock in the deadbeat enterprise off your hands before it collapses into bankruptcy.
That all works quite nicely for stocks, but makes less sense for bonds, which is what folks are talking about when they wring their hands over negative-interest-rate policy by central banks. The difference is that in the bond market, there really is no underlying enterprise ownership that might turn a profit in the future. A bond is just an agreement between a lender and a debtor.
This is where the two-fluid model of money I trotted out in this column on 19 June helps paint an understandable picture. Recall from that column that money appears from nowhere when two parties, a lender and a debtor, execute a loan contract. The cash (known as “credit” in the model) goes to the debtor while an equal amount of debt goes to the lender. Those are the two paired “fluids” that make up what we call “money,” as I explain in that column.
Fed Funds Rate
The Federal Reserve is a system of banks that serves as the central bank of the United States (it operates independently of the U.S. Treasury Department). One of the system’s functions is to manage the U.S. money supply by holding excess money for banks that have more than they need at the moment, and loaning it out to banks in need of cash. By setting the interest rate (the so-called Fed Funds Rate) at which these transactions occur, the Fed controls how much money flows through the economy. Lowering the rate allows money to flow faster. Raising it slows things down.
Actual paper money represents only a tiny fraction of U.S. currency. In actual fact, money is created whenever anybody borrows anything from anybody, even your average loan shark. The Federal Reserve System is how the U.S. Federal Government attempts to keep the whole mess under control.
By the way, the problem with cryptocurrencies is that they attempt to usurp that control, but that’s a rant for another day.
Think of money as blood coursing through the country’s economic body, carrying oxygen to the cells (you and me and General Motors) that they use to create wealth. That’s when the problem with negative interest rates shows up. When interest rates are positive, it means wealth is being created. When they’re negative, well you can imagine what that means!
Negative interest rates mean folks are burning up wealth to keep the economic ship sailing along. If you keep burning up wealth instead of creating it, eventually you go broke. Think Venezuela, or, on a smaller scale, Puerto Rico.
Okay, so how do negative interest rates actually work?
A loan contract, or bond, is an agreement between a lender and a debtor to create some money (the two fluids, again). The idea behind any contract is that everybody gets something out of it that they want. In a conventional positive-interest-rate bond, the debtor gets credit that they can use to create wealth, like, maybe building a house. The lender gets a share in that wealth in the form of interest payments over and above the cash needed to retire the loan (as in pay back the principal).
Bonds are sold in an auction process. That is, the issuer offers to sell the bond for a face value (the principal) and pay it back plus interest at a certain rate in the future. In the real world, however, folks buy such bonds at a market price, which may or may not be equal to the principal.
If the market price is lower than the principal, then the effective rate of interest will be higher than the offered rate, because the market price doesn’t change the pay-back terms written on the loan agreement. If the market price is higher than the principal, the effective rate will be lower than the offered rate. And if the market price exceeds the principal plus the promised interest, the repayment won’t be enough to cover it, and the effective rate will be negative.
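A simplified one-period example shows how the market price flips the sign of the effective rate. Real bonds pay coupons over many periods, so this is a sketch, not a full yield-to-maturity calculation:

```python
def effective_rate(market_price, principal, coupon):
    """One-period yield: what you get back relative to what you paid."""
    return (principal + coupon) / market_price - 1.0

# Bought at face value: you earn the offered 2% rate.
print(effective_rate(1000, 1000, 20))  # ~0.02

# Bid below face value: the effective rate beats the offered rate.
print(effective_rate(980, 1000, 20))   # ~0.041

# Bid above principal plus interest: the effective rate goes negative.
print(effective_rate(1030, 1000, 20))  # ~-0.0097
```

The pay-back terms (principal and coupon) are fixed when the bond is issued; only the price the buyer agrees to changes, which is why an overheated auction can produce a negative rate all by itself.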
Everyone who’s ever participated in an auction knows that there are always amateurs around (or supposed professionals whose glands get the better of their brains so they act like amateurs) who get caught up in the auction dynamics and agree to pay more than they should for what’s offered. When it’s a bond auction, that’s how you get a negative interest rate by accident. Folks agree to pay up front more than they get back as principal plus interest for the loan.
Negative Interest Rate Policy (NIRP) is when a central bank (such as the U.S. Federal Reserve) runs out of options to control economic activity, and publicly says it’s going to borrow money from its customers at negative rates. The Fed’s customers (the large banks that deposit their excess cash with the Fed) have to put their excess cash somewhere, so they get stuck making the negative-interest-rate loans. That means they’re burning up the wealth their customers share with them when they pay their loans back.
If you’re the richest country in the world, you can get away with burning up wealth faster than you create it for a very long time. If, on the other hand, you’re, say, Puerto Rico, you can’t.
7 August 2019 – As part of my research into diversity in project teams, I’ve spent about a week digging into how it’s possible to quantify success. Most people equate personal success with income or wealth, and business success with profitability or market capitalization, but none of that really does it. Veteran project managers (like yours truly) recognize that it’s almost never about money. If you do everything else right, money just shows up − sometimes. What it’s really all about is all those other things that go into making a success of some project.
So, measuring success is all about quantifying all those other things. Those other things are whatever is important to all the folks that your project affects. We call them stakeholders because they have a stake in the project’s outcome.
For example, some years ago it started becoming obvious to me that the boat tied up to the dock out back was doing me no good because I hardly ever took it out. I knew that I’d get to use a motorcycle every day if I had one, but I had that stupid boat instead. So, I conceived of a project to replace the boat with a motorcycle.
I wasn’t alone, however. Whether we had a boat or a motorcycle would make a difference to my wife, as well. She had a stake in whether we had a boat or a motorcycle, so she was also a stakeholder. It turned out that she would also prefer to have a motorcycle than a boat, so we started working on a project to replace the boat with a motorcycle.
So, the first thing to consider when planning a project is who the stakeholders are. The next thing to consider is what each stakeholder wants to get out of the project. In the case of the motorcycle project, what my wife wanted to get out of it was the fun of riding around southwest Florida visiting this, that and the other place. It turned out that the places she wanted to go were mostly easier to get to by motorcycle than by boat. So, her goal wasn’t just to have the motorcycle, it was to visit places she could get to by motorcycle. For her, getting to visit those places would fulfill her goal for the project.
See? There was no money involved, only the intangible value of being able to visit someplace.
The “intangible” part is what hangs people up when they want to quantify the value of something. It’s why people get hung up on money-related goals. Money is something everyone knows how to quantify. How do you quantify the value of “getting to go somewhere?”
A lot of people have tried a lot of schemes for “measuring” the “value” of some intangible thing, like getting where you want to go. It turns out, however, that it’s easy if you change your point of view just a little bit. Instead of asking how valuable it is to get there, you can ask something like: “What are the odds that I can get there?” Getting to some place five miles from the sea by boat likely isn’t going to happen, but getting there by motorcycle might be easy.
The way we quantify this is through what’s called a Likert scale. You make a statement, like “I can get there,” and pick a number from, say, zero to five, with zero being “It ain’t gonna happen” and five being “Easy k’neezie.”
You do that for all the places you’re likely to want to go and calculate an average score. If you really want to complete the job, you normalize: weight the score for each destination by how often you’re likely to want to go there, add up the weighted scores, then divide by five times the sum of the weights. That leaves you with an index ranging from zero to one.
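In code, that weighted normalization might look like the following sketch (the trips and weights are invented for illustration):

```python
def goal_index(scored_destinations):
    """Turn (likert_score 0-5, weight) pairs into a zero-to-one index."""
    weighted_total = sum(score * weight for score, weight in scored_destinations)
    max_possible = 5 * sum(weight for _, weight in scored_destinations)
    return weighted_total / max_possible

# (Likert score, how often we'd want to go) -- made-up destinations
trips = [(5, 4),   # easy motorcycle ride, frequent trip
         (2, 1),   # long haul on an uncomfortable seat, rare trip
         (0, 1)]   # somewhere only reachable by boat
print(goal_index(trips))   # ~0.733
```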
You go through this process for all of the goals of all your stakeholders and average the indices to get a composite index. This is an example of how one uses fuzzy logic, which takes into account that most of the time you can’t really be sure of anything. The fuzzy part is using the Likert scale to estimate how likely it is that your fuzzy statement (in this case, “I can get there”) will be true.
When using fuzzy logic to quantify project success, the fuzzy statements are of the form: “Stakeholder X’s goal Y is met.” The value assigned to that statement is the degree to which it is true, or, said another way, the degree to which the goal has been met. That allows for the prospect that not all stakeholder goals will be fully met.
For example, how well my wife’s goal of “Getting to Miromar Outlets in Estero, FL from our place in Naples” would be met depended a whole lot on the characteristics of the motorcycle. If the motorcycle were like the 1988 FLST light-touring bike I used to have, the value would be five. We used to ride that thing all day for weeks at a time! If, on the other hand, it were like that ol’ 1986 XLH chopper, she might make it, but she wouldn’t be happy at the end (literally ’cause the seat was uncomfortable)! The value in that condition would be one or two. Of course, since Miromar is landlocked, the value of keeping the boat would be zero.
So, the steps to quantifying project success are:
Determine all goals of all stakeholders;
Assign a relative importance (weight) to each stakeholder goal;
Use a Likert scale to quantify the degree to which each stakeholder goal has been met;
Normalize the scores to work out an index for each stakeholder goal;
Form a critical success index (CSI) for the project as an average of the indices for the stakeholder goals.
Before you complain about that being an awful lot of math to go through just to figure out how well your project succeeded, recognize that you go through it in a haphazard way every time you do anything. Even if it’s just going to the bathroom, you start out with a goal and finish deciding how well you succeeded. Thinking about these steps just gives you half a chance to reach the correct conclusion.