You Want Me to Pay You … Why?

Fed Funds Rate goes negative
Negative rates burn wealth! ramcreations/Shutterstock

14 August 2019 – There’s been some hand-wringing in the mass media recently about negative interest rates and what they mean. Before you can think about that, however, you have to know what negative rates are and how they actually work. Journalists Sam Goldfarb and Daniel Kruger pointed out in a Wall Street Journal article on Monday (8/12) that not so long ago negative interest rates were thought impossible.

Of course, negative interest rates were never really “impossible.” They used to be considered highly unlikely, however, because nobody in their right mind would be willing to pay someone else for taking money off their hands. I mean, would you do it?

But the world has changed drastically over the past, say, quarter century. Today, so-called “investors” think nothing of buying stock in giant technology companies, such as Tesla, Inc., that have never made a dime of profit and have no prospects of doing so in the near future. Such “investors” are effectively giving away their money at negative interest rates.

Buying stock in an unprofitable enterprise makes sense if you believe that the enterprise will eventually become profitable. Or, and this is a commonly applied strategy, you believe the market value of the stock will rise in the future, when you can sell it to somebody else at a profit. This latter strategy is known as the “bigger fool theory.” This theory holds that doing something that stupid is a good idea as long as you believe you’ll be able to find a “bigger fool” to take your stock in the deadbeat enterprise off your hands before it collapses into bankruptcy.

That all works quite nicely for stocks, but makes less sense for bonds, which is what folks are talking about when they wring their hands over negative-interest-rate policy by central banks. The difference is that in the bond market, there really is no underlying enterprise ownership that might turn a profit in the future. A bond is just an agreement between a lender and a debtor.

This is where the two-fluid model of money I trotted out in this column on 19 June helps paint an understandable picture. Recall from that column that money appears from nowhere when two parties, a lender and a debtor, execute a loan contract. The cash (known as “credit” in the model) goes to the debtor while an equal amount of debt goes to the lender. Those are the two paired “fluids” that make up what we call “money,” as I explain in that column.
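For the concrete-minded, here’s a toy sketch (in Python) of that two-fluid bookkeeping. The names are my own illustration, not anything from banking software or the earlier column:

```python
# Toy sketch of the two-fluid model of money: executing a loan contract
# conjures equal, paired amounts of credit (cash) and debt out of nothing.
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    credit: float = 0.0  # spendable cash held
    debt: float = 0.0    # debt held (the lender's claim on future payback)

def execute_loan(lender: Party, debtor: Party, principal: float) -> None:
    debtor.credit += principal  # the cash "fluid" flows to the debtor...
    lender.debt += principal    # ...an equal debt "fluid" flows to the lender

bank, builder = Party("bank"), Party("builder")
execute_loan(bank, builder, 100_000.0)
print(builder.credit, bank.debt)  # 100000.0 100000.0 -- always a matched pair
```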

Fed Funds Rate

The Federal Reserve System is a network of banks overseen by a federal Board of Governors; it operates independently of the U.S. Treasury Department. One of the system’s functions is to manage the U.S. money supply by holding excess cash for banks that have more than they need at the moment, and loaning it out to banks in need of cash. By setting the interest rate (the so-called Fed Funds Rate) at which these transactions occur, the Fed controls how much money flows through the economy. Lowering the rate allows money to flow faster. Raising it slows things down.

Actual paper money represents only a tiny fraction of the U.S. money supply. In fact, money is created whenever anybody borrows anything from anybody, even your average loan shark. The Federal Reserve System is how the U.S. Federal Government attempts to keep the whole mess under control.

By the way, the problem with cryptocurrencies is that they attempt to usurp that control, but that’s a rant for another day.

Think of money as blood coursing through the country’s economic body, carrying oxygen to the cells (you and me and General Motors), which use it to create wealth. That’s where the problem with negative interest rates shows up. When interest rates are positive, it means wealth is being created. When they’re negative, well, you can imagine what that means!

Negative interest rates mean folks are burning up wealth to keep the economic ship sailing along. If you keep burning up wealth instead of creating it, eventually you go broke. Think Venezuela, or, on a smaller scale, Puerto Rico.

Negative Interest

Okay, so how do negative interest rates actually work?

A loan contract, or bond, is an agreement between a lender and a debtor to create some money (the two fluids, again). The idea behind any contract is that everybody gets something out of it that they want. In a conventional positive-interest-rate bond, the debtor gets credit that they can use to create wealth, like, maybe building a house. The lender gets a share in that wealth in the form of interest payments over and above the cash needed to retire the loan (as in pay back the principal).

Bonds are sold in an auction process. That is, the issuer offers to sell the bond for a face value (the principal) and pay it back plus interest at a certain rate in the future. In the real world, however, folks buy such bonds at a market price, which may or may not be equal to the principal.

If the market price is lower than the principal, the effective rate of interest will be higher than the offered rate, because the market price doesn’t change the payback terms written into the loan agreement. If the market price is higher than the principal, the effective rate will be lower than the offered rate. And if the market price is too much higher than the principal, the repayment won’t be enough to cover it, and the effective rate will be negative.
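A few lines of arithmetic make this concrete. Here’s a minimal sketch, with made-up numbers, of the effective one-year rate on a bond bought at auction:

```python
# Effective one-year rate on a bond: the payback is fixed by the loan
# agreement, but the return depends on what you actually paid at auction.
def effective_rate(market_price: float, principal: float, coupon_rate: float) -> float:
    payback = principal * (1.0 + coupon_rate)  # written into the bond itself
    return payback / market_price - 1.0

print(effective_rate( 950.0, 1000.0, 0.02))  # below par:    ~ +7.4%
print(effective_rate(1000.0, 1000.0, 0.02))  # at par:         +2.0%
print(effective_rate(1030.0, 1000.0, 0.02))  # above $1,020: ~ -1.0%
```

Pay $1,030 today for $1,020 back next year, and you’ve locked in a negative rate.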

Everyone who’s ever participated in an auction knows that there are always amateurs around (or supposed professionals whose glands get the better of their brains, so they act like amateurs) who get caught up in the auction dynamics and agree to pay more than they should for what’s offered. When it’s a bond auction, that’s how you get a negative interest rate by accident: folks agree to pay more up front than they get back as principal plus interest on the loan.

Negative Interest Rate Policy (NIRP) is what happens when a central bank (such as the U.S. Federal Reserve) runs out of other options to control economic activity and publicly announces that it will borrow money from its customers at negative rates. The Fed’s customers (the large banks that deposit their excess cash with the Fed) have to put that cash somewhere, so they get stuck making negative-interest-rate loans. That means they’re burning up the wealth their own customers share with them when those customers pay back their loans.
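To see what “burning up wealth” looks like in numbers, here’s a made-up illustration of cash parked for five years at a NIRP-style rate:

```python
# A deposit at a negative rate shrinks instead of growing.
deposit = 1_000_000.00
rate = -0.005  # a hypothetical -0.5% annual rate
for year in range(1, 6):
    deposit *= 1.0 + rate
    print(f"year {year}: ${deposit:,.2f}")
# After five years, roughly 2.5% of the original cash has been paid away.
```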

If you’re the richest country in the world, you can get away with burning up wealth faster than you create it for a very long time. If, on the other hand, you’re, say, Puerto Rico, you can’t.

Luddites RULE!

Momma said there’d be days like this! (Apologies to songwriters Luther Dixon and Willie Denson, and, of course, the Geico Caveman.) Linda Bucklin/Shutterstock

7 February 2019 – This is not the essay I’d planned to write for this week’s blog. I’d planned a long-winded, abstruse dissertation on the use of principal component analysis to glean information from historical data in chaotic systems. I actually got most of that one drafted on Monday, and planned to finish it up Tuesday.

Then, bright and early on Tuesday morning, before I got anywhere near the incomplete manuscript, I ran headlong into an email issue.

Generally, I start my morning by scanning email to winnow out the few valuable bits buried in the steaming pile of worthless refuse that has accumulated in my Inbox since the last time I visited it. Then, I visit a couple of social media sites in an effort to keep my name in front of the Internet-entertained public. After a couple of hours of this colossal waste of time, I settle in to work on whatever actual work I have to do for the day.

So, finding that my email client software refused to communicate with me threatened to derail my whole day. The fact that I use email for all my business communications made it especially urgent that I determine what was wrong, and then fix it.

It took the entire morning and on into the early afternoon to realize that there was no way I was going to get to that email account on my computer, and that nobody in the outside world (not my ISP, not the cable company that went that extra mile to bring Internet signals from that telephone pole out there to the router at the center of my local area network, nor anyone else available with more technosavvy than I have) was going to be able to help. I was finally forced to invent a workaround involving a legacy computer that I’d neglected to throw in the trash, just to get on with my technology-bound life.

At that point the Law of Deadlines forced me to abandon all hope of getting this week’s blog posting out on time, and move on to completing final edits and distribution of that press release for the local art gallery.

That wasn’t the last time modern technology let me down. In discussing a recent Physics Lab SNAFU, Danielle, the laboratory coordinator I work with at the University, said: “It’s wonderful when it works, but horrible when it doesn’t.”

Where have I heard that before?

The SNAFU Danielle was lamenting happened last week.

I teach two sections of General Physics Laboratory at Florida Gulf Coast University, one on Wednesdays and one on Fridays. The lab for last week had students dropping a ball, then measuring its acceleration using a computer-controlled ultrasonic detection system as it (the ball, not the computer) bounces on the table.

For the Wednesday class everything worked perfectly. Half a dozen teams each had their own setups, and all got good data, beautiful-looking plots, and automated measurements of position and velocity. The computers then automatically derived accelerations from the velocity data. Only one team had trouble with their computer, but they got good data by switching to an unused setup nearby.
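For the curious, the computation those lab computers automate is nothing exotic. Here’s a minimal sketch, using synthetic free-fall numbers rather than real sensor output, of deriving velocity and acceleration from evenly sampled position data by finite differences:

```python
import numpy as np

dt = 0.05                        # hypothetical sensor sample period, seconds
t = np.arange(0.0, 0.5, dt)
y = 1.0 - 0.5 * 9.81 * t**2      # ideal ball height above the table, meters

v = np.gradient(y, dt)           # velocity estimated from position
a = np.gradient(v, dt)           # acceleration estimated from velocity
print(a.round(2))                # interior values sit right at -9.81 m/s^2
```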

That was Wednesday.

Come Friday the situation was totally different. Out of four teams, only two managed to get data that looked even remotely like it should. Then, one team couldn’t get their computer to spit out accelerations that made any sense at all. Eventually, after class time ran out, the one group who managed to get good results agreed to share their information with the rest of the class.

The high point of the day was managing to distribute that data to everyone via the school’s cloud-based messaging service.

Concerned about another fiasco, after this week’s lab Danielle asked me how it worked out. I replied that, since the equipment we use for this week’s lab is all manually operated, there were no problems whatsoever. “Humans are much more capable than computers,” I said. “They’re able to cope with disruptions that computers have no hope of dealing with.”

The latest example of technology Hell appeared in a story in this morning’s (2/7/2019) Wall Street Journal. Some $136 million of customers’ cryptocurrency holdings became stuck in an electronic vault when the founder (and sole employee) of cryptocurrency exchange QuadrigaCX, Gerald Cotten, died of complications related to Crohn’s disease while building an orphanage in India. The problem is that Cotten was so secretive about passwords and security that nobody, not even his wife, Jennifer Robertson, can get into the reserve account maintained on his laptop.

“Quadriga,” according to the WSJ account, “would need control of that account to send those funds to customers.”

No lie! The WSJ attests this bizarre tale is God’s own truth!

Now, I’ve no sympathy for cryptocurrency mavens, whom I consider to be, at best, technoweenies gleefully leading a parade down the primrose path to technology Hell, but this story illustrates what that Hell looks like!

It’s exactly what the Luddites of the early 19th Century warned us about. It’s a place of nameless frustration and unaccountable loss that we’ve brought on ourselves.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming-out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts on which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharonov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. That work runs headlong into the potentially debilitating question of what constitutes knowledge, and what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
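To see how far the syntax layer alone gets you, here’s a minimal sketch using the spaCy library (assuming its small English model has been installed with `python -m spacy download en_core_web_sm`). Nothing in the parse captures semantics, let alone pragmatics:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("John just started World War 3.")
for token in doc:
    # word, part of speech, grammatical role, and the word it attaches to
    print(f"{token.text:8} {token.pos_:6} {token.dep_:10} -> {token.head.text}")
# The parser labels nouns and verbs just fine; nothing here flags sarcasm.
```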

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio

Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same unmatched expertise to HR professionals worldwide.

What’s So Bad About Cryptocurrencies?

15 March 2018 – Cryptocurrency fans point to the vast “paper” fortunes that have been amassed by some bitcoin speculators, and sometimes predict that cryptocurrencies will eventually displace currencies issued and regulated by national governments. Conversely, banking-system regulators in several nations, most notably China and Russia, have outright bans on using cryptocurrency (specifically bitcoin) as a medium of exchange.

At the same time, it appears that fintech (financial technology) pundits pretty universally agree that blockchain technology, which is the enabling technology behind all cryptocurrency efforts, is the greatest thing since sliced bread, or, more to the point, the invention of ink on papyrus (IoP). Before IoP, financial records relied on clunky technologies like bundles of knotted cords, ceramic Easter eggs with little tokens baked inside, and that poster child for early written records, the clay tablet.

IoP immediately made possible tally sheets, journal and record books, double-entry ledgers, and spreadsheets. Without thin sheets of flat stock you could bind together into virtually unlimited bundles and then make indelible marks on, the concept of “bookkeeping” would be unthinkable. How could you keep books without having books to keep?

Blockchain is basically taking the concept of double-entry ledger accounting to the next (digital) level. I don’t pretend to fully understand how blockchain works. It ain’t my bailiwick. I’m a physicist, not a computer scientist.

To me, computers are tools. I think of them the same way I think of hacksaws, screwdrivers, and CNC machines. I’m happy to have ’em and anxious to know how to use ’em. How they actually work and, especially, how to design them are details I generally find of marginal interest.

If it sounds like I’m backing away from any attempt to explain blockchains, that’s because I am. There are lots of people out there who are willing and able to explain blockchains far better than I could ever hope to.

Money, on the other hand, is infinitely easier to make sense of, and it’s something I studied extensively in MBA school. And, that’s really what cryptocurrencies are all about. It’s also the part of cryptocurrency that its fans seem to have missed.

Once upon a time, folks tried to imbue their money (currency) with some intrinsic value. That’s why they used to make coins out of gold and silver. When Marco Polo introduced the Chinese concept of promissory notes to Renaissance Europe, it became clear that paper currency was possible provided there were two characteristics that went with it:

  • Artifact is some kind of thing (and I can’t identify it any more precisely than with the word “thing” because just about anything and everything has been tried and found to work) that people can pass between them to form a transaction; and
  • Underlying Value is some form of wealth that stands behind the artifact and gives an agreed-on value to the transaction.

For cryptocurrencies, the artifact consists of entries in a computer memory. The transactions are simply changes in the entries in computer memories. More specifically, blockchains amount to electronic ledger entries in a common database that forever leave an indelible record of transactions. (Sound familiar?)
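For a feel of why those records are indelible, here’s a toy sketch of hash chaining, the core trick; this is an illustration only, not how any particular cryptocurrency is actually built:

```python
import hashlib

def entry_hash(prev_hash: str, record: str) -> str:
    # Each entry's hash covers the previous hash, linking the ledger together.
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

prev = "0" * 64  # placeholder "genesis" hash
ledger = []
for record in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    prev = entry_hash(prev, record)
    ledger.append((record, prev))

# Rewriting an early record changes its hash, which no longer matches the
# hash folded into every later entry -- so tampering is instantly visible.
for record, digest in ledger:
    print(record, "->", digest[:16], "...")
```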

Originally, the underlying value of traditional currencies was imagined to be the wealth represented by the metal in a coin, or the intrinsic value of a jewel, and so forth. More recently, folks have begun imagining that the underlying value of government-issued currency (dollars, pounds sterling, yuan) is fictitious. They began to believe the value of a dollar was whatever people believed it was.

According to this idea, anybody could issue currency as long as they got a bunch of people together to agree that it had some value. Put that concept together with the blockchain method of common recordkeeping, and you get cryptocurrency.

I’m oversimplifying all this in an effort to keep this posting within rational limits and to make a point, so bear with me. The point I’m trying to make is that the difference between any cryptocurrency and U.S. dollars is that cryptocurrencies have no underlying value.

I’ve heard the argument that there’s no underlying value behind U.S. dollars, either. That just ain’t so! Having dollars issued by the U.S. government and tied to the U.S. tax base connects dollars to the U.S. economy. In other words, the underlying value backing up the artifacts of U.S. dollars is the entire U.S. economy. The total U.S. economic output in 2016, as measured by gross domestic product (GDP), was just under 19 trillion dollars. That ain’t nothing!