6 March 2019 – While surfing the Internet this morning, in a valiant effort to put off actually getting down to business grading that pile of lab reports that I should have graded a couple of days ago, I ran across this posting I wrote in 2013 for Packaging Digest.
Surprisingly, it still seems relevant today, and it covers a subject I haven't treated in this blog yet. Since I'm planning to devote most of next week to preparing my 2018 tax return, I decided to save some writing time by dusting it off and presenting it as this week's posting to Tech Trends. I hope the folks at Packaging Digest won't get their noses too far out of joint about my encroaching on their five-year-old copyright without asking permission.
By the way, this piece is way shorter than the usual Tech Trends essay because of the specifications for that Packaging Digest blog, which was entitled "New Metropolis" in homage to Fritz Lang's 1927 film Metropolis, the story of a futuristic mechanized culture and the anthropomorphic robot a mad scientist creates to bring it down. "New Metropolis" postings were specified to run approximately 500 words, whereas Tech Trends postings are planned at 1,000-1,500 words.
Anyway, I hope you enjoy this little slice of recent history.
11 November 2013 – I thought it might be fun—and maybe even useful—to catalog the classifications of these things we call “robots.”
Let's start with the word "robot." The word itself was coined by Karel Čapek in his 1920 play R.U.R., from the Czech robota, meaning forced labor, but the idea behind it grows from the ancient concept of the golem. A golem was an artificial person created by people.
Frankly, the idea of a golem scared the bejeezus out of the ancients because the golem stands at the interface between living and non-living things. In our “enlightened” age, it still scares the bejeezus out of people!
If we restricted the field to golems—strictly humanoid robots, or androids—we wouldn’t have a lot to talk about, and practically nothing to do. The things haven’t proved particularly useful. So, I submit that we should expand the “robot” definition to include all kinds of human-made artificial critters.
This has, of course, already been done by everyone working in the field. The SCARA (selective compliance assembly robot arm) machines from companies like Kuka, and the delta robots from Adept Technology, clearly insist on this expanded definition. Mobile robots, such as the Roomba from iRobot, push the boundary in another direction. Weird little things like the robotic insects and worms so popular with academics these days push in a third.
Considering the foregoing, the first observation is that the line between robot and non-robot is fuzzy. The old 1950s-era dumb thermostats probably shouldn't be considered robots, but a smart, computer-controlled house moving in the direction of the J.A.R.V.I.S. character in the Iron Man films probably should. Things in between are – in between. Let's bite the bullet and admit we're dealing with fuzzy-logic categories, and then move on.
Okay, so what are the main characteristics symptomatic of this fuzzy category “robot?”
First, it’s gotta be artificial. A cloned sheep is not a robot. Even designer germs are non-robots.
Second, it’s gotta be automated. A fly-by-wire fighter jet is not a robot. A drone linked at the hip to a human pilot is not a robot. A driverless car, on the other hand, is a robot. (Either that, or it’s a traffic accident waiting to happen.)
Third, it’s gotta interact with the environment. A general-purpose computer sitting there thinking computer-like thoughts is not a robot. A SCARA unit assembling a car is. I submit that an automated bill-paying system arguing through the telephone with my wife over how much to take out of her checkbook this month is a robot.
More problematic is a fourth direction—embedded systems, like automated houses—that begs to be admitted into the robotic fold. I vote for letting them in, along with artificial intelligence (AI) systems, like the robot bill-paying systems my wife is so fond of arguing with.
Finally (maybe), it's gotta be independent. To be a robot, the thing has to take basic instruction from a human, then go off on its onesies to do the deed. Ideally, you should be able to say, "Go wash the car," and it'll run off as fast as its little robotic legs can carry it to wash the car. More realistically, you should be able to program it to vacuum the living room at 4:00 a.m., then wake up at 6:00 a.m. to a freshly vacuumed living room.
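The fuzzy-category idea above can be sketched in a few lines of code. This is purely an illustration of my own devising, not anything from the robotics literature: score each of the criteria (artificial, automated, interactive, independent) on a 0-to-1 scale, then combine them with a fuzzy AND, which in fuzzy logic is conventionally the minimum of the membership scores. All the candidate names and their scores below are hypothetical guesses.

```python
# Fuzzy "robotness": score each criterion on [0, 1], then combine.
# The conventional fuzzy-AND of several memberships is their minimum.

def robotness(artificial, automated, interactive, independent):
    """Return a fuzzy membership score in the category 'robot'."""
    return min(artificial, automated, interactive, independent)

# Hypothetical examples; the individual scores are illustrative guesses.
candidates = {
    "cloned sheep":    robotness(0.0, 0.9, 1.0, 1.0),  # fails 'artificial'
    "dumb thermostat": robotness(1.0, 0.3, 0.8, 0.2),  # weak on automation
    "driverless car":  robotness(1.0, 1.0, 1.0, 0.9),  # near-full member
}

for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Note how the minimum captures the spirit of the essay: failing any one criterion badly (the cloned sheep isn't artificial) drags the whole membership score down, no matter how well the thing does on the others.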
7 February 2019 – This is not the essay I’d planned to write for this week’s blog. I’d planned a long-winded, abstruse dissertation on the use of principal component analysis to glean information from historical data in chaotic systems. I actually got most of that one drafted on Monday, and planned to finish it up Tuesday.
Then, bright and early on Tuesday morning, before I got anywhere near the incomplete manuscript, I ran headlong into an email issue.
Generally, I start my morning by scanning email to winnow out the few valuable bits buried in the steaming pile of worthless refuse that has accumulated in my Inbox since the last time I visited it. Then, I visit a couple of social media sites in an effort to keep my name in front of the Internet-entertained public. After a couple of hours of this colossal waste of time, I settle in to work on whatever actual work I have to do for the day.
So, finding that my email client software refused to communicate with me threatened to derail my whole day. The fact that I use email for all my business communications made it especially urgent that I determine what was wrong, and then fix it.
It took the entire morning and on into the early afternoon to realize that there was no way I was going to get to that email account on my computer, and that nobody in the outside world (not my ISP, not the cable company that went that extra mile to bring Internet signals from that telephone pole out there to the router at the center of my local area network, nor anyone else available with more technosavvy than I have) was going to be able to help. I was finally forced to invent a workaround involving a legacy computer that I'd neglected to throw in the trash, just to get on with my technology-bound life.
At that point the Law of Deadlines forced me to abandon all hope of getting this week’s blog posting out on time, and move on to completing final edits and distribution of that press release for the local art gallery.
That wasn’t the last time modern technology let me down. In discussing a recent Physics Lab SNAFU, Danielle, the laboratory coordinator I work with at the University said: “It’s wonderful when it works, but horrible when it doesn’t.”
Where have I heard that before?
The SNAFU Danielle was lamenting happened last week.
I teach two sections of General Physics Laboratory at Florida Gulf Coast University, one on Wednesdays and one on Fridays. The lab for last week had students dropping a ball, then measuring its acceleration using a computer-controlled ultrasonic detection system as it (the ball, not the computer) bounces on the table.
For the Wednesday class everything worked nearly perfectly. Half a dozen teams each had their own setups, and all got good data, beautiful-looking plots, and automated measurements of position and velocity. The computers then automatically derived accelerations from the velocity data. The one team that did have trouble with their computer got good data by switching to an unused setup nearby.
That was Wednesday.
Come Friday the situation was totally different. Out of four teams, only two managed to get data that looked even remotely like it should, and one of those two couldn't get their computer to spit out accelerations that made any sense at all. Eventually, after class time ran out, the one group that managed to get good results agreed to share their information with the rest of the class.
The high point of the day was managing to distribute that data to everyone via the school’s cloud-based messaging service.
Concerned about another fiasco, after this week’s lab Danielle asked me how it worked out. I replied that, since the equipment we use for this week’s lab is all manually operated, there were no problems whatsoever. “Humans are much more capable than computers,” I said. “They’re able to cope with disruptions that computers have no hope of dealing with.”
The latest example of technology Hell appeared in a story in this morning's (2/7/2019) Wall Street Journal. Some $136 million of customers' cryptocurrency holdings became stuck in an electronic vault when the founder (and sole employee) of cryptocurrency exchange QuadrigaCX, Gerald Cotten, died of complications related to Crohn's disease while building an orphanage in India. The problem is that Cotten was so secretive about passwords and security that nobody, not even his wife, Jennifer Robertson, can get into the reserve account maintained on his laptop.
“Quadriga,” according to the WSJ account, “would need control of that account to send those funds to customers.”
No lie! The WSJ attests this bizarre tale is the God’s own truth!
Now, I've no sympathy for cryptocurrency mavens, whom I consider to be, at best, technoweenies gleefully leading a parade down the primrose path to technology Hell, but this story illustrates what that Hell looks like!
It’s exactly what the Luddites of the early 19th Century warned us about. It’s a place of nameless frustration and unaccountable loss that we’ve brought on ourselves.
23 January 2019 – Last week two concepts reared their ugly heads that I’ve been banging on about for years. They’re closely intertwined, so it’s worthwhile to spend a little blog space discussing why they fit so tightly together.
Diversity is Good
The first idea is that diversity is good. It’s good in almost every human pursuit. I’m particularly sensitive to this, being someone who grew up with the idea that rugged individualism was the highest ideal.
Diversity, of course, is incompatible with individualism. Individualism is the cult of the one. “One” cannot logically be diverse. Diversity is a property of groups, and groups by definition consist of more than one.
Okay, set theory admits of groups with one or even no members, but those groups have a diversity “score” (Gini–Simpson index) of zero. To have any diversity at all, your group has to have at absolute minimum two members. The more the merrier (or diversitier).
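The Gini–Simpson index mentioned above has a simple closed form: D = 1 − Σ pᵢ², where pᵢ is the fraction of the group belonging to category i. A quick sketch (the category labels are made up for illustration):

```python
from collections import Counter

def gini_simpson(group):
    """Gini-Simpson diversity index: 1 - sum(p_i^2), where p_i is the
    share of members in category i.  It equals 0 for a single-member
    (or single-category) group and approaches 1 as membership spreads
    evenly over many categories."""
    n = len(group)
    if n == 0:
        return 0.0  # an empty group has no diversity to measure
    counts = Counter(group)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(gini_simpson(["physicist"]))                          # 0.0
print(gini_simpson(["physicist", "physicist"]))             # 0.0
print(gini_simpson(["physicist", "engineer"]))              # 0.5
print(gini_simpson(["physicist", "engineer", "machinist"])) # ~0.667
```

As the essay says: a one-member group scores exactly zero, two members of different backgrounds is the absolute minimum for any diversity at all, and the score keeps climbing as you add distinct categories.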
The idea that diversity is good came up in a couple of contexts over the past week.
First, I’m reading a book entitled Farsighted: How We Make the Decisions That Matter the Most by Steven Johnson, which I plan eventually to review in this blog. Part of the advice Johnson offers is that groups make better decisions when their membership is diverse. How they are diverse is less important than the extent to which they are diverse. In other words, this is a case where quantity is more important than quality.
Second, I divided my physics-lab students into groups to perform their first experiment. We break students into groups to prepare them for working in teams after graduation. Unlike when I was a student fifty years ago, work in scientific research and technology development today is always done in teams.
When I was a student, research was (supposedly) done by individuals working largely in isolation. I believe it was Willard Gibbs (I have no reliable reference for this quote) who said: “An experimental physicist must be a professional scientist and an amateur everything else.”
By this he meant that building a successful physics experiment requires the experimenter to apply so many diverse skills that it is impossible to have professional mastery of all of them. He (or she) must have an amateur's ability to pick up novel skills in order to reach the next goal in their research. They must be ready to work outside their normal comfort zone.
That asked a lot from an experimental researcher! Individuals who could do that were few and far between.
Today, the fast pace of technological development has reduced that pool of qualified individuals essentially to zero. It certainly is too small to maintain the pace society expects of the engineering and scientific communities.
Tolkien's "unimaginable hand and mind of Fëanor" puttering around alone in his personal workshop crafting magical things is unimaginable today. Marlowe's Dr. Faustus character, who had mastered all earthly knowledge, is now laughable. No one person is capable of making a major contribution to today's technology on their own.
The solution is to perform the work of technological research and development in teams with diverse skill sets.
In the sciences, theoreticians with strong mathematical backgrounds partner with engineers capable of designing machines to test the theories, and technicians with the skills needed to fabricate the machines and make them work.
The second idea I want to deal with in this essay is that we live in a chaotic Universe.
Chaos is a property of complex systems. These are systems consisting of many interacting moving parts that show predictable behavior on short time scales, but eventually foil the most diligent attempts at long-term prognostication.
A pendulum, by contrast, is a simple system consisting of, basically, three moving parts: a massive weight, or “pendulum bob,” that hangs by a rod or string (the arm) from a fixed support. Simple systems usually do not exhibit chaotic behavior.
The solar system, consisting of a huge, massive star (the Sun), eight major planets and a host of minor planets, is decidedly not a simple system. Its behavior is borderline chaotic. I say “borderline” because the solar system seems well behaved on short time scales (e.g., millennia), but when viewed on time scales of millions of years does all sorts of unpredictable things.
For example, approximately four and a half billion years ago (a few tens of millions of years after the system's initial formation) a Mars-sized planet collided with Earth, spalling off a mass of material that coalesced to form the Moon, then ricocheted out of the solar system. That's the sort of unpredictable event that happens in a chaotic system if you wait long enough.
The U.S. economy, consisting of millions of interacting individuals and companies, is wildly chaotic, which is why no investment strategy has ever been found to work reliably over a long time.
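The defining signature of chaos, predictable on short time scales but hopeless on long ones, shows up even in systems far simpler than the solar system or the economy. The textbook demonstration (not from the original essay, just the standard example) is the logistic map xₙ₊₁ = r·xₙ·(1 − xₙ) at r = 4: start two runs with initial values differing by one part in two million, and watch the difference explode.

```python
# Sensitive dependence on initial conditions, the hallmark of chaos,
# demonstrated with the logistic map x -> r*x*(1 - x) at r = 4.

def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map 'steps' times from x0; return the orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2000001, 50)  # differs by one part in two million

# Early on the two orbits track each other; by step 50 they are
# completely uncorrelated.
for t in (1, 10, 25, 50):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.6f}")
```

Short-term prediction works fine (the step-1 difference is still microscopic); long-term prediction is doomed (by a few dozen steps the difference is as large as the values themselves). That, in miniature, is the solar system's "well behaved on millennia, unpredictable on millions of years."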
Putting It Together
The way these two ideas (diversity is good, and we live in a chaotic Universe) work together is that collaborating in diverse groups is the only way to successfully navigate life in a chaotic Universe.
An individual human being is so powerless that attempting anything but the smallest task is beyond his or her capacity. The only way to do anything of significance is to collaborate with others in a diverse team.
In the late 1980s my wife and I decided to build a house. To begin with, we had to decide where to build the house. That required weeks of collaboration (by our little team of two) to combine our experiences of different communities in the area where we were living, develop scenarios of what life might be like living in each community, and finally agree on which we might like the best. Then we had to find an architect to work with our growing team to design the building. Then we had to negotiate with banks for construction loans, bridge loans, and ultimate mortgage financing. Our architect recommended adding a prime contractor who had connections with carpenters, plumbers, electricians and so forth to actually complete the work. The better part of a year later, we had our dream house.
There’s no way I could have managed even that little project – building one house – entirely on my own!
In 2015, I ran across the opportunity to produce a short film for a film festival. I knew how to write a script, run a video camera, sew a costume, act a part, do the editing, and so forth. In short, I had all the skills needed to make that 30-minute film.
Did that mean I could make it all by my onesies? Nope! By the time the thing was completed, the list of cast and crew counted over a dozen people, each with their own job in the production.
By now, I think I’ve made my point. The take-home lesson of this essay is that if you want to accomplish anything in this chaotic Universe, start by assembling a diverse team, and the more diverse, the better!
19 December 2018 – I generally don’t buy into utopias.
Utopias are intended as descriptions of a paradise. They’re supposed to be a paradise for everybody, and they’re supposed to be filled with happy people committed to living in their city (utopias are invariably built around descriptions of cities), which they imagine to be the best of all possible cities located in the best of all possible worlds.
Unfortunately, however, utopia stories are written by individual authors, and they’d only be a paradise for that particular author. If the author is persuasive enough, the story will win over a following of disciples, who will praise it to high Heaven. Once in a great while (actually surprisingly often) those disciples become so enamored of the description that they’ll drop everything and actually attempt to build a city to match the description.
When that happens, it invariably ends in tears.
That’s because, while utopian stories invariably describe city plans that would be paradise to their authors, great swaths of the population would find living in them to be horrific.
Even Thomas More, the sixteenth century philosopher, politician and generally overall smart guy who’s credited with giving us the word “utopia” in the first place, was wise enough to acknowledge that the utopia he described in his most famous work, Utopia, wouldn’t be such a fun place for the slaves he had serving his upper-middle class citizens, who were the bulwark of his utopian society.
Even Plato's Republic, which gave us the conundrum summarized in Juvenal's Satires as "Who guards the guards?", was never meant as a workable society. Plato's work, in general, was meant to teach us how to think, not what to think.
What to think is a highly malleable commodity that varies from person to person, society to society, and, most importantly, from time to time. Plato’s Republic reflected what might have passed as good ideas for city planning in 380 BC Athens, but they wouldn’t have passed muster in More’s sixteenth-century England. Still less would they be appropriate in twenty-first-century democracies.
That subtitle indicated that Tankersley just might have a sense of humor, and enough gumption to put that sense of humor into his contribution to Futurism.
Futurism tends to be the work of self-important intellectuals out to make a buck by feeding their audience fantasies that sound profound but bear no relation to any actual, or even possible, future. Its greatest value is in stimulating profits for publishers of magazines and books about Futurism. Otherwise, such works aren't worth the trees killed to make the paper they're printed on.
Trees, after all and as a group, make a huge contribution to all facets of human life. Like, for instance, breathing. Breathing is of incalculable value to humans. Trees make an immense contribution to breathing by absorbing carbon dioxide and pumping out vast quantities of oxygen, which humans like to breathe.
We like trees!
Futurists, not so much.
Tankersley's little (168 pages, not counting author bio, front matter and introduction) opus is not typical Futurist literature, however. It's more like the Republic in that its avowed purpose is to stimulate its readers to think about the future themselves. In the introduction that I purposely left out of the page count he says:
“I want to help you reimagine our tomorrows; to show you that we are living in a time when the possibility of creating a better future has never been greater.”
Tankersley structured the body of his book in ten chapters, each telling a separate story about an imagined future centered around a possible solution to an issue relevant today. Following each chapter is an “apology” by a fictional future character named Archibald T. Patterson III.
Archie is what a hundred years ago would have been called a “Captain of Industry.” Today, we’d refer to him as an uber-rich and successful entrepreneur. Think Elon Musk or Bill Gates.
Actually, I think he's more like Warren Buffett in that he's reasonably introspective and honest with himself. Archie sees where society has come from, how it got to the future it got to, and what he and his cohorts did wrong. While he's super-rich and privileged, the futures the stories describe were made by other people who weren't uber-rich and successful. His efforts largely came to naught.
The point Tankersley seems to be making is that progress comes from the efforts of ordinary individuals who, in true British fashion, “muddle through.” They see a challenge and apply their talents and resources to making a solution. The solution is invariably nothing anyone would foresee, and is nothing like what anyone else would come up with to meet the same challenge. Each is a unique response to a unique challenge by unique individuals.
It might seem naive, this idea that human development comes from ordinary individuals coming up with ordinary solutions to ordinary problems all banded together into something called “progress,” but it’s not.
For example, Mark Zuckerberg developed Facebook as a response to the challenge of applying then-new computer-network technology to the age-old quest by late adolescents to form their own little communities by communicating among themselves. It’s only fortuitous that he happened on the right combination of time (the dawn of a radical new technology), place (in the midst of a huge cadre of the right people well versed in using that radical new technology) and marketing to get the word out to those right people wanting to use that radical new technology for that purpose. Take away any of those elements and there’d be no Facebook!
What if Zuckerberg hadn't invented Facebook? In that event, somebody else (Reid Hoffman) would have come up with a similar solution (LinkedIn) to the same challenge facing a similar group (technology professionals).
Oh, my! They did!
History abounds with similar examples. There’s hardly any advancement in human culture that doesn’t fit this model.
The good news is that Tankersley’s vision for how we can re-imagine our tomorrows is right on the money.
14 November 2018 – I just couldn’t resist the double meaning allowed by the title for this blog posting. It’s all I could think of when reading the title of Charles Simon’s new book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence.
On one hand, yes, computers are revolting. Yesterday my wife and I spent two hours trying to figure out how to activate our Netflix account on my new laptop. We ended up having to change the email address and password associated with the account. And, we aren’t done yet! The nice lady at Netflix sadly informed me that in thirty days, their automated system would insist that we re-log-into the account on both devices due to the change.
On the other hand, the uprising has already begun. Computers are revolting in the sense that they’re taking power to run our lives.
We used to buy stuff just by picking it off the shelf, then walking up to the check-out counter and handing over a few pieces of green paper. The only thing that held up the process was counting out the change.
Later, when credit cards first reared their ugly heads, we had to wait a few minutes for the salesperson to fill out a sales form, then run our credit cards through the machine. It was all manual. No computers involved. It took little time once you learned how to do it, and, more importantly, the process was pretty much the same everywhere and never changed, so once you learned it, you’d learned it “forever.”
Not no more! How much time do you, I, and everyone else waste navigating the multiple pages we have to walk through just to pay for anything with a credit or debit card today?
Even worse, every store has different software using different screens to ask different questions. So, we can’t develop a habitual routine for the process. It’s different every time!
Not long ago the banks issuing my debit-card accounts switched to those %^&^ things with the chips. I always forget to put the thing in the slot instead of swiping the card across the magnetic-stripe reader. When that happens we have to start the process all over, wasting even more time.
The computers have taken over, so now we have to do what they tell us to do.
Now we know who’ll be first against the wall when the revolution comes. It’s already here and the first against the wall is us!
Golem Literature in Perspective
But seriously folks, Simon's book is the latest in a long tradition of works by thinkers fascinated by the idea that someone could create an artifice that would pass for a human. Perhaps the earliest, and certainly the most iconic, are the golem stories from Jewish folklore. I suspect (on no authority whatsoever, but it does seem likely) that the idea of a golem appeared about the time when human sculptors started making statues in realistic human form. That was very early, indeed!
A golem is, for those who aren't familiar with the term or willing to follow the link provided above to learn about it, an artificial creature fashioned by a human that is effectively what we call a "robot." The folkloric golems were made of clay or (sometimes) wood because those were the best materials the artisans of the stories' time had to work with. A well-known golem story is Carlo Collodi's The Adventures of Pinocchio.
By the sixth century BCE, Greek sculptors had begun to produce lifelike statues. The myth of Pygmalion and Galatea appeared in a pseudo-historical work by Philostephanus Cyrenaeus in the third century BCE. Pygmalion was a sculptor who made a statue representing his ideal woman, then fell in love with it. Aphrodite granted his prayer for a wife exactly like the statue by bringing the statue to life. The wife’s name was Galatea.
The Talmud points out that Adam started out as a golem. Like Galatea, Adam was brought to life when the Hebrew God Yahweh gave him a soul.
These golem examples emphasize the idea that humans, no matter how holy or wise, cannot give their creations a soul. The best they can do is to create automatons.
Simon effectively begs to differ. He spends the first quarter of his text laying out the case that it is possible, and indeed inevitable, that automated control systems displaying artificial general intelligence (AGI) capable of thinking at or (eventually) well above human capacity will appear. He spends the next half of his text showing how such AGI systems could be created and making the case that they will eventually exhibit functionality indistinguishable from consciousness. He devotes the rest of his text to speculating about how we, as human beings, will likely interact with such hyperintelligent machines.
Simon’s answer to the question posed by his title is a sort-of “yes.” He feels AGIs will inevitably displace humans as the most intelligent beings on our planet, but won’t exactly “revolt” at any point.
“The conclusion,” he says, “is that the paths of AGIs and humanity will diverge to such an extent that there will be no close relationship between humans and our silicon counterparts.”
There won’t be any violent conflict because robotic needs are sufficiently dissimilar to ours that there won’t be any competition for scarce resources, which is what leads to conflict between groups (including between species).
Robots, he posits, are unlikely to care enough about us to revolt. There will be no Terminator robots seeking to exterminate us because they won’t see us as enough of a threat to bother with. They’re more likely to view us much the way we view squirrels and birds: pleasant fixtures of the natural world.
They won’t, of course, tolerate any individual humans who make trouble for them the same way we wouldn’t tolerate a rabid coyote. But, otherwise, so what?
So, the !!!! What?
The main value of Simon’s book is not in its ultimate conclusion. That’s basically informed opinion. Rather, its value lies in the voluminous detail he provides in getting to that conclusion.
He spends the first quarter of his text detailing exactly what he means by AGI. What functions are needed to make it manifest? How will we know when it rears its head (ugly or not, as a matter of taste)? How will a conscious, self-aware AGI system act?
A critical point Simon makes in this section is the assertion that AGI will arise first in autonomous mobile robots. I thoroughly agree for pretty much the same reasons he puts forth.
I first started seriously speculating about machine intelligence back in the middle of the twentieth century. I never got too far – certainly not as far as Simon gets in this volume – but pretty much the first thing I actually did realize was that it was impossible to develop any kind of machine with any recognizable intelligence unless its main feature was having a mobile body.
Developing any AGI feature requires the machine to have a mobile body. It has to take responsibility not only for deciding how to move itself about in space, but for figuring out why. Why, for example, would it rather be over there than just stay here? Note that biological intelligence arose in animals, not in plants!
Simultaneously with reading Simon’s book, I was re-reading Robert A. Heinlein’s 1966 novel The Moon is a Harsh Mistress, which is one of innumerable fiction works whose plot hangs on actions of a superintelligent sentient computer. I found it interesting to compare Heinlein’s early fictional account with Simon’s much more informed discussion.
Heinlein sidesteps the mobile-body requirement by making his AGI arise in a computer tasked with operating the entire infrastructure of the first permanent human colony on the Moon (more accurately in the Moon, since Heinlein's troglodytes burrowed through caves and tunnels, coming up to the surface only reluctantly when circumstances forced them to). He also avoids trying to imagine the AGI's inner workings, glossing over them with the 1950s-era technology he was most familiar with.
In his rather longish second section, Simon leads his reader through a thought experiment speculating about what components an AGI system would need to have for its intelligence to develop. What sorts of circuitry might be needed, and how might it be realized? This section might be fascinating for those wanting to develop hardware and software to support AGI. For those of us watching from our armchairs on the outside, though, not so much.
Altogether, Charles Simon’s Will Computers Revolt? is an important book that’s fairly easy to read (or, at least as easy as any book this technical can be) and accessible to a wide range of people interested in the future of robotics and artificial intelligence. It is not the last word on this fast-developing field by any means. It is, however, a starting point for the necessary debate over how we should view the subject. Do we have anything to fear? Do we need to think about any regulations? Is there anything to regulate and would any such regulations be effective?
22 August 2018 – Since the Fifteenth Century, when Johannes Gutenberg invented the technology, printing has given those in authority ginky fits. Until recently it was just dissemination of heretical ideas, à la Giordano Bruno, that raised authoritarian hackles. More recently, 3-D printing, also known as additive manufacturing (AM), has made it possible to make things that folks who like to tell you what you're allowed to do don't want you to make.
Don’t get me wrong, AM makes it possible to do a whole lot of good stuff that we only wished we could do before. Like, for example, Jay Leno uses it to replace irreplaceable antique car parts. It’s a really great technology that allows you to make just about anything you can describe in a digital drafting file without the difficulty and mess of hiring a highly skilled fabrication machine shop.
In my years as an experimental physicist, I dealt with fabrication shops a lot! Fabrication shops are collections of craftsmen and the equipment they need to make one-off examples of amazing stuff. It’s generally stuff, however, that nobody’d want to make a second time.
Like the first one of anything.
The reason I specify that AM is for making stuff nobody’d want to make a second time is that it’s slow. Regular machine-shop products, like nuts and bolts and sewing machines, are stuff folks want to make a lot of, so it’s worth spending a lot of time figuring out fast, efficient, and cheap ways to make lots of them.
Take microchips. The darn things take huge amounts of effort to design, and tens of billions of dollars of equipment to make, but once you’ve figured it all out and set it all up, you can pop the things out like chocolate-chip cookies. You can buy an Intel Celeron G3900 dual-core 2.8 GHz desktop processor online for $36.99 because Intel spread the hideous quantities of up-front cost it took to originally set up the production line over the bazillions of processors that production line can make. It’s called “economy of scale.”
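That “economy of scale” arithmetic is simple enough to sketch in a few lines of Python. The dollar figures below are hypothetical round numbers for illustration, not Intel’s actual costs:

```python
# Toy illustration of economy of scale: per-unit cost falls as a one-time
# setup cost is spread over more units. All figures are made up.

def per_unit_cost(fixed_setup, marginal_cost, units):
    """Total cost per unit when a fixed setup cost is amortized over a run."""
    return fixed_setup / units + marginal_cost

# Hypothetical: a $10 billion production line, $5 of materials per chip.
print(per_unit_cost(10e9, 5.0, 1))            # one-off: ten billion dollars and change
print(per_unit_cost(10e9, 5.0, 500_000_000))  # at scale: $25.00 each
```

Make one chip and it costs you the whole fab; make half a billion and the setup cost all but vanishes from the price tag.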
If you’re only gonna make one, or just a few, of the things, there’s no economy of scale.
But, if you’re only gonna make one, or just a few, you don’t worry too much about how long it takes to make each one, and what it costs is what it costs.
So, you put up with doing it some way that’s slow.
A HUGE advantage of making things with AM is that you don’t have to be all that smart. Once you learn to set the 3-D printer up, you’re all set. You just download the digital computer-aided-manufacturing (CAM) file into the printer’s artificial cerebrum, and it just DOES it. If you can download the file over the Internet, you’ve got it knocked!
Which brings us to what I want to talk about today: 3-D printing of handguns.
Now, I’m not any kind of anti-gun nut. I’ve got half a dozen firearms lying around the house, and have had since I was a little kid. It’s a family thing. I learned sharpshooting when I was around ten years old. Target shooting is a form of meditation for me. A revolver is my preferred weapon if ever I have need of a weapon. I never want to be the one who brought a knife to a gunfight!
That said, I’ve never actually found a need for a handgun. The one time I was present at a gunfight, I hid under the bed. The few times I’ve been threatened with guns, I talked my way out of it. Experience has led me to believe that carrying a gun is the best way to get shot.
I’ve always agreed with my gunsmith cousin that modern guns are beautiful pieces of art. I love their precision and craftsmanship. I appreciate the skill and effort it takes to make them.
The good ones, that is.
That’s the problem with AM-made guns. It takes practically no skill to create them. They’re right up there with the zip guns we talked about when we were kids.
We never made zip guns. We talked about them. We talked about them in the same tones we’d use talking about leeches or cockroaches. Ick!
We’d never make them because they were beneath contempt. They were so crude we wouldn’t dare fire one. Junk like that’s dangerous!
Okay, so you get an idea how I would react to the news that some nut case had published plans online for 3-D printing a handgun. Bad enough to design such a monstrosity, what about the idiot stupid enough to download the plans and make such a thing? Even more unbelievable, what moron would want to fire it?
Have they no regard for their hands? Don’t they like their fingers?
Anyway, not long ago, a press release crossed my desk from Giffords, the gun-safety organization set up by former U.S. Rep. Gabrielle Giffords and her husband after she survived an assassination attempt in Arizona in 2011. The Giffords’ press release praised Senators Richard Blumenthal and Bill Nelson, and Representatives David Cicilline and Seth Moulton for introducing legislation to stop untraceable firearms from further proliferation after the Trump Administration cleared the way for anyone to 3-D-print their own guns.
Why “untraceable” firearms, and what have they got to do with AM?
Downloadable plans for producing guns by the AM technique put firearms in the hands of folks too unskilled and, yes, stupid to make them themselves with ordinary methods. That’s important because it is theoretically possible to AM-produce firearms of surprising sophistication. The first one offered was a cheap plastic thing (depicted above) that would likely be more of a danger to its user than to its intended victim. More recent offerings, however, have been repeating weapons made of more robust materials that might successfully be used to commit crimes.
Ordinary firearm production techniques require a level of skill and investment in equipment that puts them above the radar for federal regulators. The first thing those regulators require is licensing the manufacturer. Units must be serialized, and records must be kept. The system certainly isn’t perfect, but it gives law enforcement a fighting chance when the products are misused.
The old zip guns snuck in under the radar, but they were so crude and dangerous to their users that they were seldom even made. Almost anybody with enough sense to make them had enough sense not to make them! Those who got their hands on the things were more a danger to themselves than to society.
The Trump administration’s recent settlement with Defense Distributed, allowing the company to relaunch its website, host a searchable database of firearm blueprints, and let the public create their own fully functional, unserialized firearms using AM technology, opens the floodgates for dangerous people to make their own untraceable firearms.
That’s just dumb!
The Untraceable Firearms Act would prohibit the manufacture and sale of firearms without serial numbers; require any person or business engaged in selling firearm kits and unfinished receivers to obtain a dealer’s license and conduct background checks on purchasers; and mandate that anyone who runs a business assembling firearms or finishing receivers obtain a manufacturer’s license and put serial numbers on firearms before offering them for sale to consumers.
The 3-D Printing Safety Act would prohibit the online publication of computer-aided design (CAD) files which automatically program a 3-D-printer to produce or complete a firearm. That closes the loophole letting malevolent idiots evade scrutiny by making their own firearms.
We have to join with Giffords in applauding the legislators who introduced these bills.
15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”
The reason the question is not asked enough is that it needs to be asked whenever anyone looks into designing a system (whether manual or automated) to do anything. The possibility that anyone ever sets up a system to do anything without first asking that question means that it’s not asked enough.
When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!
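For what it’s worth, the screening rule just described reduces to a toy scoring function. The scores and the example task below are hypothetical illustrations, not data from any study:

```python
# A back-of-the-envelope version of the three-Ds screen: count how many
# of "dull, dirty, dangerous" a task hits, then read off the verdict.

def automation_score(dull, dirty, dangerous):
    """Return the number of the three Ds (0-3) the task hits."""
    return sum([dull, dirty, dangerous])

def verdict(hits):
    return {0: "probably leave it manual",
            1: "at least think about automating",
            2: "very likely ripe for automation",
            3: "slam dunk"}[hits]

# Driving, as argued below, hits "dull" and "dangerous" but not "dirty".
hits = automation_score(dull=True, dirty=False, dangerous=True)
print(hits, "->", verdict(hits))  # 2 -> very likely ripe for automation
```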
When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”
Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.
Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people who have been killed in accidents involving driverless cars you can (as of this writing) count on one hand.
I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.
This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.
Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that 16% of Canadians and 26% of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!
Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:
Don’t Automate Something Humans Like to Do!
Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.
In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!
Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.
The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”
That’s pretty definitive!
Their excuse for getting behind the wheel is largely an economic one: Seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.
Whether the claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.
The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.
Yet, development of AV technology is going full steam ahead.
Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic has some appeal. I submit, however, that the appeal is short lived.
For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.
I, for one, don’t want to go there!
Sounds like another example of “More money than brains.”
There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.
Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.
Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend their time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.
Hence the autopilot.
Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.
So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.
8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.
In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.
With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.
As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.
For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.
I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.
The fact is, we don’t have anything close to a recognizable database of facts from which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.
But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?
If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?
It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.
Is AI ready? IBM recently showed that it’s certainly coming along.
Is the sea of facts ready? That’s a lot less certain.
Debater holds its own
In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.
The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.
Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharonov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”
So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.
Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”
Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.
Beyond spinning away on publications, are computers ready to interact intelligently?
Artificial? Yes. But, Intelligent?
According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”
Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Thus arises the potentially debilitating question of what constitutes knowledge, and what separates fact from opinion from conjecture.
His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.
One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.
Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”
“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”
A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.
Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”
Centralized vs. Decentralized Fact Model
It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.
We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?
That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.
A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.
The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.
IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.
IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”
Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.
Trive and Debater seem to be a complement to each other, so far.
Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.
About Info-Tech Research Group
Info-Tech Research Group is a fast growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same, unmatched expertise to HR professionals worldwide.
4 July 2018 – If you want to explore any of the really tough philosophical questions in an innovative way, the best literary forms to use are fantasy and science fiction. For example, when I decided to attack the nature of reality, I did it in a surrealist-fantasy novelette entitled Lilith.
If your question involves some aspect of technology, such as the nature of consciousness from an artificial-intelligence (AI) viewpoint, you want to dive into the science-fiction genre. That’s what sci-fi great Robert A. Heinlein did throughout his career to explore everything from space travel to genetically engineered humans. My whole Red McKenna series is devoted mainly to how you can use (and mis-use) robotics.
When E.J. Simon selected grounded sci-fi for his Michael Nicholas series, he most certainly made the right choice. Grounded sci-fi is the sub-genre where the author limits him- (or her-) self to what is at least theoretically possible using current technology, or immediate extensions thereof. No warp drives, wormholes or anti-grav boots allowed!
In this case, we’re talking about imaginative development of artificial intelligence and squeezing a great whacking pile of supercomputing power into a very small package to create something that can best be described as chilling: the conquest of death.
The great thing about fiction genres, such as fantasy and sci-fi, is the freedom provided by the ol’ “willing suspension of disbelief.” If you went at this subject in a scholarly journal, you’d never get anything published. You’d have to prove you could do it before anybody’d listen.
I treated on this effect in the last chapter of Lilith when looking at my own past reaction to “scholarly” manuscripts shown to me by folks who forgot this important fact.
“Their ideas looked like the fevered imaginings of raving lunatics,” I said.
I went on to explain why I’d chosen the form I’d chosen for Lilith thusly: “If I write it up like a surrealist novel, folks wouldn’t think I believed it was God’s Own Truth. It’s all imagination, so using the literary technique of ‘willing suspension of disbelief’ lets me get away with presenting it without being a raving lunatic.”
Another advantage of picking a fiction genre is that it affords the ability to keep readers’ attention while filling their heads with ideas that would leave them cross-eyed if simply presented straight. The technical details presented in the Michael Nicholas series could, theoretically, be presented in a PowerPoint presentation with something like fifteen slides. Well, maybe twenty-five.
But, you wouldn’t be able to get the point across. People would start squirming in their seats around slide three. What Simon’s trying to tell us takes time to absorb. Readers have to make the mental connections before the penny will drop. Above all, they have to see it in action, and that’s just what embedding it in a mystery-adventure story does. Following the mental machinations of “real” characters as they try to put the pieces together helps Simon’s audience fit them together in their own minds.
Spoiler Alert: Everybody in Death Logs Out lives except bad guys, and those who were already dead to begin with. Well, with one exception: a supporting character who’s probably a good guy gets well-and-truly snuffed. You’ll have to read the book to find out who.
Oh, yeah. There are unreconstructed Nazis! That’s always fun! Love having unreconstructed Nazis to hate!
I guess I should say a little about the problem that drives the plot. What good is a book review if it doesn’t say anything about what drives the plot?
Our hero, Michael, was the fair-haired boy of his family. He grew up to be a highly successful plain-vanilla finance geek. He married a beautiful trophy wife with whom he lives in suburban Connecticut. Michael’s daughter, Sofia, is away attending an upscale university in South Carolina.
Michael’s biggest problem is overwork. With his wife’s grudging acquiescence, he’d taken over his black-sheep big brother Alex’s organized crime empire after Alex’s murder two years earlier.
And, you thought Thomas Crown (The Thomas Crown Affair, 1968 and 1999) was a multitasker! Michael makes Crown look single minded. No wonder he’s getting frazzled!
But, Michael was holding it all together until one night when he was awakened by a telephone call from an old flame, whom he’d briefly employed as a bodyguard before realizing that she was a raving homicidal lunatic.
“I have your daughter,” Sindy Steele said over the phone.
Now, the obviously made-up first name “Sindy” should have warned Michael that Ms. Steele wasn’t playing with a full deck even before he got involved with her, but, at the time, the head with the brains wasn’t the head doing his thinking. She was, shall we say, “toothsome.”
Turns out that Sindy had dropped off her meds, then traveled all the way from her “retirement” villa in Santorini, Greece on an ill-advised quest to get back at Michael for dumping her.
But, that wasn’t Sofia’s worst problem. When she was nabbed, Sofia was in the midst of a call on her mobile phone from her dead uncle Alex, belatedly warning her of the danger!
While talking on the phone with her long-dead uncle confused poor Sofia, Michael knew just what was going on. For two years, he’d been having regular daily “face time” with Alex through cyberspace as he took over Alex’s syndicate. Mortophobic Alex had used his ill-gotten wealth to cheat death by uploading himself to the Web.
Now, Alex and Michael have to get Sofia back, then figure out who’s coming after Michael to steal the technology Alex had used to cheat death.
This is certainly not the first time someone has used “uploading your soul to the Web” as a plot device. Perhaps most notably, Robert Longo cast Barbara Sukowa as a cyberloaded fairy godmother trying to watch over Keanu Reeves’s character in the 1995 film Johnny Mnemonic. In Longo’s futuristic film, the technique was so common that the ghost had legal citizenship!
In the 1995 film, however, Longo glossed over how the ghost in the machine was supposed to work, technically. Johnny Mnemonic was early enough that it was futuristic sci-fi, as was Geoff Murphy’s even earlier soul-transference work Freejack (1992). Nobody in the early 1990s had heard of the supercomputing cloud, and email was high-tech. The technology for doing soul transference was as far in the imagined future as space travel was to Heinlein when he started writing about it in the 1930s.
Fast forward to the late 2010s. This stuff is no longer in the remote future. It’s in the near future. In fact, there’s very little technology left to develop before Simon’s version becomes possible. It’s what we in the test-equipment-development game used to call “specsmanship.” No technical breakthroughs needed, just advancements in “faster, wider, deeper” specifications.
That’s what makes the Michael Nicholas series grounded sci-fi! Simon has to imagine how today’s much-more-defined cloud infrastructure might both empower and limit cyberspook Alex. He also points out that what enables the phenomenon is software (as in artificial intelligence), not hardware.
Okay, I do have some bones to pick with Simon’s text. Mainly, I’m a big Strunk and White (Elements of Style) guy. Simon’s a bit cavalier about paragraphing, especially around dialog. His use of quotation marks is also a bit sloppy.
But, not so bad that it interferes with following the story.
Standard English is standardized for a reason: it makes getting ideas from the author’s head into the reader’s sooo much easier!
James Joyce needed a dummy slap! His Ulysses has rightly been called "the most difficult book to read in the English language." It was as if he couldn't afford a typewriter with a quotation-mark key.
Enough ranting about James Joyce!
Simon’s work is MUCH better! There are only a few times I had to drop out of Death Logs Out’s world to ask, “What the heck is he trying to say?” That’s a rarity in today’s world of amateurishly edited indie novels. Simon’s story always pulled me right back into its world to find out what happens next.
20 June 2018 – I recently received a question: “Do you use Twitter?” The sender was responding positively to a post on this blog. My response was a terse “I do not use Twitter.”
That question deserved a more extensive response. Well, maybe not “deserved,” since this post has already exceeded the maximum 280 characters allowed in a Twitter message. In fact, not counting the headline, dateline or image caption, it’s already 431 characters long!
That gives you an idea how much information you can cram into 280 characters. Essentially none. That’s why Twitter messages make their composers sound like airheads.
The average word in the English language is six characters long, not counting the spaces. So, to say one word, you need (on average) seven characters. If you’re limited to 280 characters, that means you’re limited to 280/7 = 40 words. A typical posting on this blog is roughly 1,300 words (this posting, by the way, is much shorter). A typical page in a paperback novel contains about 300 words. The first time I agreed to write a book for print, the publisher warned me that the manuscript needed to be at least 80,000 words to be publishable.
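For the numerically inclined, the word-budget arithmetic above can be sanity-checked in a few lines of Python. The six-character average word and the 280-character cap come from the text; the rest is just division (this sketch is mine, not part of the original posting):

```python
import math

AVG_WORD_LEN = 6   # average characters per English word, per the figure above
SEPARATOR = 1      # one space between words
CHAR_LIMIT = 280   # Twitter's character cap

# Words that fit in one tweet: 280 / 7 = 40
words_per_tweet = CHAR_LIMIT // (AVG_WORD_LEN + SEPARATOR)
print(words_per_tweet)  # 40

# Characters needed for a typical ~1,300-word blog posting,
# and how many tweets that posting would have to be chopped into.
blog_chars = 1300 * (AVG_WORD_LEN + SEPARATOR)
print(blog_chars)                            # 9100
print(math.ceil(blog_chars / CHAR_LIMIT))    # 33 tweets
```

Forty words per tweet against 1,300 words per posting makes the mismatch plain: you'd need more than thirty tweets to say what one posting says.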
When I first started writing for business-to-business magazines, a typical article ran around 2,500 words. We figured that was about right if you wanted to teach anybody anything useful. Not long afterward, when I’d (surprisingly quickly) climbed the journalist ranks to Chief Editor, I expressed the goal for any article written in our magazine (the now-defunct Test & Measurement World) in the following way:
“Imagine an engineer facing a problem in the morning and not knowing what to do. If, during lunch, that engineer reads an article in our magazine and goes back to work knowing how to solve the problem, then we’ve done our job.”
That takes about 2,500 words. Since then, pressure from advertisers has pushed us toward shorter articles in the 1,250-word range. Of course, all advertisers really want any article to say is, “BUY OUR STUFF!”
That is NOT what business-to-business readers want articles to say. They want articles that tell them how to solve their problems. You can see who publishers listened to.
Blog postings are, essentially, stand-alone editorials.
From about day one as Chief Editor, I had to write editorials. I’d learned about editorial writing way back in Mrs. Langley’s eighth grade English class. I doubt Mrs. Langley ever knew how much I learned in her class, but it was a lot. Including how to write an editorial.
A successful editorial starts out introducing some problem, then explains little things like why it’s important and what it means to people like the reader, then tells the reader what to do about it. That last bit is what’s called the “Call to Action,” and it’s the most important part; everything else is there to motivate it.
If your “problem” is easy to explain, you can often get away with an editorial 500 words long. Problems that are more complex or harder to explain take more words. Editorials can often reach 1,500 words.
If it can’t be done in 1,500 words, find a different problem to write your editorial about.
Now, magazine designers generally provide room for 500- to 1,000-word editorials, and editors generally work hard to stay within that constraint. Novice editors quickly learn that it takes a lot more work to write short than to write long.
Generally, writers start by dumping vast quantities of words into their manuscripts just to get the ideas out there, recorded in all their long-winded glory. Then, they go over that first draft, carefully searching for the most concise way to say what they want to say that still makes sense. Then, they go back and throw out all the ideas that really didn’t add anything to their editorial in the first place. By then, they’ve slashed the word count to close to what it needs to be.
After about five passes through the manuscript, the writer runs out of ways to improve the text, and hands it off to a production editor, who worries about things like grammar and spelling, as well as cramming it into the magazine space available. Then the managing editor does basically the same thing. Then the Chief Editor gets involved, saying “Omygawd, what is this writer trying to tell me?”
Finally, after at least two rounds through this cycle, the article ends up doing its job (telling the readers something worth knowing) in the space available, or it gets “killed.”
“Killed” varies from just a mild “We’ll maybe run it sometime in the future,” to the ultimate “Stake Through The Heart,” which means it’ll never be seen in print.
That’s the process any piece of professional writing goes through. It takes days or weeks to complete, and it guarantees compact, dense, information-packed reading material. And, the shorter the piece, the more work it takes to pack the information in.
Think of cramming ten pounds of bovine fecal material into a five pound bag!
Is that how much work goes into the average Twitter feed?
I don’t think so! The Twitter feeds I’ve seen sound like something written on a bathroom wall. They look like they were dashed off as fast as two fingers could type them, and they make their authors sound like illiterates.
THAT’s why I don’t use Twitter.
This blog posting, by the way, is a total of 5,415 characters long.