Americans are Ready for the Libertarian Party

Nicholas Sarwark photo
Nicholas Sarwark is the Chairman of the Libertarian National Committee. Photo Courtesy Libertarian National Committee

13 February 2019 – The following is an invited guest post by Nicholas Sarwark, Chairman of the Libertarian National Committee

Republicans and Democrats often have a stranglehold on the U.S. political process, but Americans are ready for that to change.

According to a Morning Consult–Politico poll conducted in early February, more than half of all voters in the United States believe a third party is needed, and one third of all voters would be willing to vote for a third-party candidate in the 2020 presidential election. A Gallup poll from October showed that 57 percent of Americans think a strong third party is needed.

It’s no wonder why. Another Gallup poll from January revealed that only 35 percent of Americans trust the U.S. government to handle domestic problems, a figure that rises only to 41 percent for international problems. Those are the lowest numbers in more than 20 years. A running Gallup poll showed that in January, 29 percent of Americans viewed government itself as the biggest problem facing the country.

This widespread dissatisfaction with U.S. government is consistent with the increasing prevalence of libertarian views among the general public. Polling shows that more than a quarter of Americans have political views that can be characterized as libertarian.

All of this suggests that the Libertarian Party should be winning more and bigger electoral races than ever. In fact, that’s exactly what’s happening. Out of the 833 Libertarian candidates who ran in 2018, 55 were elected to public office in 11 states.

One of those newly elected officials is Jeff Hewitt, who in November won a seat on the board of supervisors in Riverside County, Calif., while finishing up eight years on the Calimesa city council—three as mayor. Before being elected to the city council, he had served six years on the city’s planning commission. Hewitt recently gave the Libertarian Party’s 2019 State of the Union address, explaining how Libertarians would restrain runaway government spending, withdraw from never-ending wars abroad, end the surveillance state, protect privacy and property rights, end mass incarceration and the destructive “war on drugs,” and welcome immigrants who expand our economy and enrich our culture.

Journalist Gustavo Arellano attended Hewitt’s swearing-in ceremony on January 8. In his feature story for the Los Angeles Times, he remarked, “Riverside County Supervisor Jeff Hewitt just might be the strangest Libertarian of them all: a politician capable of winning elections who could move the party from the fringes into the mainstream.”

During Hewitt’s time as mayor of Calimesa, he severed the city’s ties with the state-run fire department, with its bloated pensions and overstaffing. He replaced it with a local alternative that costs far less and has been much more effective at protecting endangered property. This simple change also eliminated two layers of administrative costs at the county and state levels.

Now Hewitt is poised to bring libertarian solutions to an even larger region in his new position with Riverside County, which has more residents than 15 individual states. This rise from local success is a model that can be replicated around the country. Fullerton College political science professor Jodi Balma, quoted in the L.A. Times article, said that Hewitt’s success shows how Libertarian candidates can “build a pipeline to higher office”: winning local races that demonstrate the practical value of Libertarian Party ideas on a small scale, then parlaying those experiences into state and federal office.

That practical value is immense, as Libertarian Laura Ebke showed when, as a Nebraska state legislator, she almost single-handedly shepherded statewide occupational-licensure reform to near-unanimous, tri-partisan approval on a 45-to-1 vote. This legislation has cleared the way for countless Nebraskans to build careers in fields that were once closed off from effective competition behind mountains of regulatory red tape.

The American people have the third party they’re looking for. The Libertarian Party is already the third-largest political party in the United States, and it shares the public’s values of fiscal responsibility and social tolerance — the same values that drive the public’s disdain for American politicians and wasteful, destructive, ineffective government programs.

The Libertarian Party is also the only alternative party that routinely appears on ballots in every state.

As of December 17 we had secured ballot access for our 2020 Presidential ticket in 33 states and the District of Columbia — the best starting position since 1914 for any alternative party at this point in the election cycle. This will substantially reduce the burden of achieving nationwide ballot access that we have so often borne. After the 1992 election, for example, we had ballot access in only 17 states — half as many as today. Full ballot access for the Libertarian Party means that voters in every state will have more choice.

The climate is ripe for Libertarian progress. The pieces are all here, ready to be assembled. All it requires is building awareness of the Libertarian Party — our ideas, our values, our practical reforms, and our electoral successes — in the minds and hearts of the American public.

Nicholas Sarwark is serving his third term as chair of the Libertarian National Committee, having first been elected in 2014. Prior to that, he served as chair of the Libertarian Party of Maryland and as vice chair of the Libertarian Party of Colorado, where he played a key role in recruiting the state’s 42 Libertarian candidates in 2014 and supported the passage of Colorado’s historic marijuana legalization initiative in 2012. In 2018, he ran for mayor of Phoenix, Ariz.

Six Tips to Protect Your Vote from Election Meddlers

Theresa Payton headshot
Theresa Payton, cybersecurity expert and CEO of Fortalice Solutions. photo courtesy Fortalice Solutions

6 November 2018 – The following is from a press release I received yesterday (Monday, 11/5) evening. It’s of sufficient import and urgent timing that I decided to post it to this blog verbatim.

There’s been a lot of talk about cybersecurity and whether or not the Trump administration is prepared for tomorrow’s midterm elections, but now that we’re down to the wire, former White House CIO and Fortalice Solutions CEO Theresa Payton says it’s time for voters to think about what they can do to make sure their voices are heard.

Theresa’s six cyber tips for voters ahead of midterms:

  • Don’t zone out while you’re voting. Pay close attention to how you cast your ballot and who you cast your ballot for.

  • Take your time during the review process, and double-check your vote before you finalize it.

  • It may sound cliché, but if you see something, say something. If something seems strange, report it to your State Board of Elections immediately.

  • If you see suspicious social media personas pushing information that’s designed to influence (and maybe even misinform) voters, here’s where you can report it:

  • Check your voter registration status before you go to the polls. Voters in 37 states and the District of Columbia can register to vote online. Visit vote.org to find out how to check your registration status in your state.

  • Unless you are a resident of West Virginia or you’re serving overseas in the U.S. military, you cannot vote electronically on your phone. Protect yourself from text-message and email scams claiming that you can. Knowledge is power.

Finally, trust the system. Yes, it’s flawed. Yes, it’s imperfect. But it’s the bedrock of our democracy. If you stay home or lose trust in the legitimacy of the process, our cyber enemies win.

Theresa is one of the nation’s leading experts in cybersecurity and IT strategy. She is the CEO of Fortalice Solutions, an industry-leading security consulting company. Under President George W. Bush, she served as the first female chief information officer at the White House, overseeing IT operations for POTUS and his staff. She was named #4 on IFSEC Global’s 2017 list of the world’s Top 50 influencers in security & fire. She was profiled in the Washington Post for her role on the 2017 CBS reality show “Hunted.”

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and its pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming-out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts on which to base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharonov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Hence the potentially debilitating question of what constitutes knowledge: what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.
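The structure Klein’s Deliberatorium imposes can be pictured as an argument map: issues branch into ideas, and ideas branch into pro and con arguments, with duplicates filtered out so the summary stays non-redundant. Here is a minimal sketch of that kind of map; the node types and `add` method are hypothetical, invented here for illustration, and not the Deliberatorium’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in a deliberation map: an issue, idea, pro, or con."""
    kind: str
    text: str
    children: list = field(default_factory=list)

    def add(self, kind: str, text: str) -> "Node":
        # Skip exact duplicates so the map stays non-redundant.
        for child in self.children:
            if child.text == text:
                return child
        child = Node(kind, text)
        self.children.append(child)
        return child

root = Node("issue", "Should we trust a centralized fact database?")
idea = root.add("idea", "Use a decentralized truth-discovery model")
idea.add("pro", "No single editor controls what counts as true")
idea.add("con", "Consensus may be slow to correct errors")
idea.add("pro", "No single editor controls what counts as true")  # duplicate, ignored

print(len(idea.children))  # 2 -- the duplicate pro was not re-added
```

The self-organizing property comes from that duplicate check: each contribution either extends the map or collapses into an existing node, which is what keeps a mass-participation summary readable.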

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
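The layering Klein describes can be made concrete with a toy sketch. This is not any real NLP system; the lookup tables and the `analyze` function are hypothetical stand-ins meant to show why syntax is mechanical, semantics needs a hand-built model of meaning, and pragmatics has no rule-based analogue at all.

```python
# Toy stand-ins for the first two NLP layers (hypothetical rules only).
POS = {"john": "NOUN", "eats": "VERB", "bread": "NOUN"}        # syntax
MEANINGS = {"eats": "ingests a nutritious substance for energy"}  # semantics

def analyze(sentence: str) -> dict:
    """Run a sentence through the three layers Klein describes."""
    words = sentence.lower().rstrip(".").split()
    return {
        # Syntax: tag each word -- straightforward table lookup.
        "syntax": [(w, POS.get(w, "UNKNOWN")) for w in words],
        # Semantics: attach meanings -- needs an explicit lexicon.
        "semantics": {w: MEANINGS[w] for w in words if w in MEANINGS},
        # Pragmatics (irony, sarcasm, shared context): no rule exists.
        "pragmatics": None,
    }

result = analyze("John eats bread.")
print(result["syntax"])      # tagging works
print(result["semantics"])   # meaning works only where the lexicon covers it
print(result["pragmatics"])  # None -- the layer computers still lack
```

The point of the `None` is the point of Klein’s quote: the first two layers degrade gracefully as coverage runs out, but the pragmatic layer is simply absent, and that is the layer fact checking leans on hardest.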

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.
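That revision behavior can be sketched in a few lines. The `FactEntry` record and its `revise` rule below are hypothetical, invented to illustrate the idea rather than taken from Trive or any real system: an entry re-frames its position only when new evidence outweighs the old, and it keeps the full history of superseded positions.

```python
from dataclasses import dataclass, field

@dataclass
class FactEntry:
    """A claim with its current position, evidence weight, and history."""
    claim: str
    position: str
    evidence_weight: float
    history: list = field(default_factory=list)

    def revise(self, new_position: str, new_weight: float) -> bool:
        # Re-frame only when the new evidence outweighs the old.
        if new_weight <= self.evidence_weight:
            return False
        self.history.append((self.position, self.evidence_weight))
        self.position, self.evidence_weight = new_position, new_weight
        return True

fact = FactEntry("Cause of the Titanic sinking", "struck an iceberg", 0.8)
fact.revise("hull steel was brittle in cold water", 0.6)  # weaker: rejected
fact.revise("iceberg strike plus brittle steel", 0.9)     # stronger: accepted

print(fact.position)       # the re-framed position
print(len(fact.history))   # 1 -- the superseded position is preserved
```

Note what the rule does not consult: timing, popularity, or whether anyone likes the outcome. That indifference is exactly the objectivity criterion described above.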

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine can already examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

So far, Trive and Debater seem to complement each other.

Author Bio

Barry Cousins photo

Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same, unmatched expertise to HR professionals worldwide.