Full Self-Driving, the Manly-Man Way

Elon Musk is stymied by the delusion of the rugged individual.

Julian S. Taylor

--

NOTE: Within this text, wherever gender is not key to the explanation, I am using the Elverson ey/em construction of the Spivak Pronouns.

[Photo: ©2020 Tomasz Swatowski, Attribution-Share Alike 4.0 International]

We’re all in this together. We’re all going places and we’re each looking carefully for how other people are likely to interact with us. Is that outstretched hand a greeting or an impending slap? Is that turn signal left over from the previous lane change? Is that person at the other end of the bar wanting to talk or is my drink on fire? Is that driver letting me into the lane? Excellent! Wave, in thanks. Human interactions are complicated and yet, we humans have evolved to instinctively understand the gestures, the facial expressions, the occasional grunt and even the scent: after-shave or a provocative cologne?

In this community of individuals, we form groups, families, corporations and cooperatives. We work together to make the impossible reality. We flew to the moon; we mastered laser and radio; we marched until segregation was outlawed; with the use of social media, we have almost figured out the Marvel Cinematic Universe. We did those things together. In fact, we do everything together; that’s why masking and distancing have been such a struggle. We are communal creatures. No one has ever accomplished a great feat alone. There is always an essential community (a team, a support structure, an ordered society) to provide at least the basic security required to be as great as we can be.

With that self-evident fact glaring at us, we must wonder at the modern philosophy propagated through the news media and exemplified by people like Ron Paul and the late Margaret Thatcher, from whom we have inherited the now popular view that we are all in this alone because, whenever we come together as a community to form a government, we always fail. This is the popular libertarian fantasy (intentionally not capitalized) promoted by certain powerful influencers to assure that average people don’t seek to acquire freedoms in excess of their station. Each individual is challenged to be the mythical “manly man” pulling himself up by some sort of articulated bootstraps and being all that he can be — a demand so absurd that the only way to actually express that irrational persona is through violence and pointless conflict.

I know people who declare themselves to be Libertarians and who also make sense. Several are my good friends. As with any word, its real meaning is not tied down uniformly for each person claiming it. I, a sometime Marxist, respect my Libertarian friends and discuss all sorts of issues with them. We agree on many things, and we generally agree that modern popular libertarianism is a weird creature. As with so many signal words like “free market”, “communism” and “liberal”, Libertarian has been branded and specifically defined to serve those in power, and this primitive popular understanding has done great damage.

As with any popular error, this one drives other errors, and those errors multiply into cultural and technological mistakes that may actually cost lives. The error we will review in this essay is the popular solution to the problem of transportation safety that we call full self-driving (FSD).

The Popular Libertarian Fantasy

The popular libertarian fantasy is that of a twelve-year-old boy. On your own, isolated from civilized society, you ride your stallion to where your best girl has been tied to the railroad tracks. You leap out of the saddle. “Injuns” appear on the horizon and you pull your six-guns from their holsters to take on the three dozen bow-and-arrow-bearing assailants. You shoot and shoot and shoot until all of the savage attackers have been quelled. You free your girl from her bonds and she gives you a hearty kiss on your cheek. Deep inside, you suspect that this relationship may be missing something; but, your underlying biology is not ready for that concept. That is the popular libertarian dream in a nutshell.

This philosophical justification for bad behavior (i.e., “No one will do what I say, so I have to do everything myself”) has been around since shortly after John Locke introduced Liberalism in the 17th Century, but it was refined and perfected by Ronald Reagan. That addled, regressive understanding of the world has become the Republican fascist vision for your future. The government is the enemy (It won’t do what you say.). Other cultures are the enemy (They won’t acknowledge your superiority.). Your neighbors are the enemy (They refuse to take the lizard people seriously.). Your enemies, weapons in hand, are charging over the rise and you are your only rescue. To quote the Casino Royale theme, “Arm yourself because no one else here will save you.” There is no community to which you may appeal. You are on your own. Where is your stallion? Where are your six-guns?

I have a brilliant friend who describes himself as a Libertarian and who has praised the French nationalized health care system. I doubt that he would fully support the words in the preamble to the platform of the Libertarian Party of the United States wherein individual sovereignty is the ultimate goal in a world where government is not we the people, but the enemy of freedom; where “freedom” means that corporations are free to monopolize at will; and, where “liberty” means that the worker is at liberty to slave and suffer for the good of eir righteous masters. If that is not your Libertarian philosophy, then I would suggest that you may not be an adherent to the principles propagated by very public popular libertarians like Charles Koch, Miriam Adelson or Rand Paul. Regardless of the merit of your personal philosophy, it is not the invigorating call to action broadcast to a desperate public by our mass media. This is our shared popular libertarian fantasy. This is the one to which Elon Musk adheres and his fantasy regarding FSD is infecting every other auto manufacturer.

I seek to demonstrate that, while Musk’s advocacy for EVs has been instrumental in changing the auto industry for the better, this popular understanding of the individual as supreme has confused his technology. His goal of self-driving automobiles on existing roads has led him to postulate that the upcoming $25,000 Tesla Model 2 may not even have a steering wheel. His confidence that full self-drive will become the norm for domestic travel reveals a series of delusions that are centered on aspects of the popular libertarian fantasy which seek to isolate the human from community and to understand the human as a mere predictable component of the soul-less mechanism that we call “reality”. No one will help, so all great accomplishments must be the solitary product of a great man.

Musk isn’t alone in this. His long-running enthusiasm for FSD has led many other companies to undertake this challenge. Their solutions are all similar to that promoted by Musk and they are all delusional. The delusions are tied to this notion that there are leaders and there are the rabble. That rare leader, as defined by Ayn Rand, is worthy of all bounty and is exceptional. That leader is an agent of change, a true human being. The rest are mindless leeches. The government (we the people) is a conglomeration of leeches and cannot be trusted to do anything. The people are the problem and will not support the worthy goals of the leadership class. See if you spot how that Randian/Objectivist/libertarian philosophy feeds these delusional technical architectures.

Countdown to Paradox

Let’s begin by counting down the delusions.

  1. The human is, at the end of the day, a simple mechanism and artificial mechanisms are more effective than humans.
    Elon Musk sought to develop an almost totally mechanized automobile assembly plant for the Model 3. During test runs, unpredictable events stymied the highly advanced robots, leaving them helpless. Musk believed that this could be addressed by tweaking the software; but, run after run failed. This led Musk himself to finally profess that “The human is underrated.” So Musk became aware that a set of specialized robots in a tightly controlled environment could not be programmed to build a thing for which the assembly process was precisely defined. Despite this, he has undertaken to develop an artificial mechanism to guide a car through a chaotic environment having neither machine referents nor machine-compatible communications protocols. That is FSD as understood by modern technologists.
  2. The human brain is basically a computer and so a computer may emulate the human brain.
    When hydraulics was the latest cool technology, Descartes described the human brain as a hydraulic mechanism. When electricity was the latest cool thing, Mary Shelley sparked the brain of her humanoid emulation using electricity. When computers were the latest cool thing, philosophers like Daniel Dennett began to explain the human brain as a specialized computer. Now, with the artificial neuron presiding over all information science, the brain is basically a neural network, just like the ones in our labs. This is why Tesla has developed a massive supercomputer to train its FSD neural networks.
    Good luck with that.
    Our brain is such an intimate part of everything we do and experience that we cannot resist trying to explain it. While every generation has sought to explain the human brain, some eminent scientists argue that we cannot possibly understand it because, in essence, the seal cannot seal itself. To fully understand our brain (and the inexplicable mind that we believe it contains) would, they argue, violate mathematical law: Kurt Gödel’s Incompleteness Theorems.
    If and when we come to understand our brain, its model will be unlike any mechanism devised by humans. It will be astounding beyond belief. It may not even be the seat of our elusive consciousness. Let’s keep studying; but, let’s understand that we have no idea how it works yet.
  3. With enough data defining all possible driving situations, a car can drive itself more safely than a human could.
    “Data is Dada” was the motto of the 1970s Artificial Intelligence (AI) researcher because data is not intelligence. Intelligence is complicated but, in this case, we can identify the most important component of intelligence for FSD to be situational awareness (understanding clearly and unambiguously your current environment and reacting to it constructively). Data is a table of information. Knowledge is a tapestry woven from the relationships between those tabular entries and the relationships between those relationships. Situational awareness (required for all driving) is a complex of tapestries with each knot in each tapestry linked to only the most appropriate knots in every other tapestry.
    Slowing down when a shadow appears on the mountain road near sunset as a naïve deer considers leaping across the road to the nearby stream is a cooperative venture between the complex tapestries in the brain forming our awareness of the immediate situation and the complex of tapestries formed from our lifetime of intimate muscle memory in the temple of action that is the human body.
    No amount of data and no configuration of neural networks will match that. If you want to build a system that will drive a car with infallible precision, then parent a child and teach em to drive a car well. A wealth of data plugged into a pitiful simulacrum of a human brain will never suffice.
  4. Any solution must assume no outside assistance (Your enemies are everywhere and they will not do what you say.).
    Current roads were designed for human drivers. They are filthy with signs, indications and outright ambiguous hints that humans perceive as meaningful information, but which no artificial system could possibly comprehend. If you start from the assumption that the current road system and its anthropocentric signage is the basis for your FSD design, your project is doomed.
    Your system will have to be programmed to read street signs and observe lighted flares on the road-side warning that there is an accident ahead. It will have to interpret likely pedestrian behavior based upon facial expressions and hand movements. It will have to watch for large things with fur indicating a likely encroachment by an elk; watch for a possible lane change by a signaling auto when the sun is on the horizon and the camera is blind; watch for humans in adjacent cars honking and vigorously waving their arms to signal that they want into your lane; and, when the snow is heavy in the Rockies, it will have to understand that on an unplowed road it will need to follow the ruts pressed into the snow by previous drivers and not the actual lane.
    These things are obvious to humans but they must be pre-programmed into the immense complex of tapestries of data stored in the FSD system — a complex of tapestries of data that are emulated using a mere neural network controlled by a specialized computer.

Musk’s solution is misguided because he is doing it the hardest possible way. It cannot work; but, for the love of all that’s holy, he’s going to dedicate everything he has to this goal: the goal of producing a self-driving automobile rather than proposing an instrumented system designed specifically to support autonomous transport. FSD advocates are proposing to cobble together fantastically complex Rube Goldberg machines to make up for the fact that the underlying substrate for autonomous travel is simply not present. The car must do everything on its own because no one else will help it.

In 1956, General Motors (GM) released a series of films predicting a “highway of tomorrow”. The central protagonist was the Firebird II concept car, and its FSD was based upon an instrumented highway. If the roads themselves participate in this project, FSD is actually conceivable as an engineered system. Our modern roadways have signs and esoteric announcements intended exclusively for the use of humans. Humans have, with difficulty, come to comprehend this set of traditional, ad hoc signifiers and to respond appropriately. If a machine is to understand this construct, new signifiers must be instituted which conform to the expectations of the machine. They must be unfailingly consistent and they must be unambiguous. The human deals with ambiguity as a normal state of affairs. The machine does so only with specialized software. It requires specialized instructions for each and every circumstance that was not predicted by its programmers, and that paradox is the downfall of manly-man FSD.

Algorithm Versus AI

One of the most basic problems associated with neural-network-based AI is that it is nearly impossible to figure out why the neural network came to the observed conclusion. Even when we look “into” the network and ask about the state of its neurons, the actual “reasoning” remains incomprehensible. The neural network was “trained” with example after example, indeed millions of examples. Those examples demonstrated to the neural network that a specific set of inputs should lead to a specific output. Unfortunately, once trained, the neural network cannot explain why, given such-and-such an input, it yielded some specific output.
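
To make that opacity concrete, here is a toy sketch (my own illustration, assuming nothing about Tesla’s actual software): a tiny network trained on the XOR function with plain NumPy. After training, we can print every weight and every hidden activation, and what we get is a table of numbers, not a statement of why any particular answer was given.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: the XOR truth table.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Random initial weights for a tiny 2-4-1 network.
    W1 = rng.normal(size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Ordinary gradient descent with backpropagation.
    for _ in range(20000):
        hidden = sigmoid(X @ W1 + b1)       # hidden-layer activations
        output = sigmoid(hidden @ W2 + b2)  # the network's answers
        err = output - y
        grad_out = err * output * (1 - output)
        grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ grad_out
        b2 -= 0.5 * grad_out.sum(axis=0)
        W1 -= 0.5 * X.T @ grad_hidden
        b1 -= 0.5 * grad_hidden.sum(axis=0)

    # We can "look into" the trained network all we like...
    hidden = sigmoid(X @ W1 + b1)
    print("hidden activations:\n", hidden.round(3))
    print("answers:\n", sigmoid(hidden @ W2 + b2).round(3))
    # ...but the printout is a table of numbers, not an explanation.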

We use neural networks to solve problems that we believe are too complex for us to program into an ordinary computer. The neural network provides an excuse for the frustrated software engineer, confronting the insurmountable problem, to throw up eir hands and instruct the machine to figure it out for itself. For that reason, programmers are often delighted when the neural network answers in an unusual fashion. They proclaim that the neural network has shown creative intelligence when it is more likely that some class of examples simply confused it and the answer was in response to the wrong question. Humans do this all the time and we sometimes mistake that for creativity too.

We have accepted that decision-making within the astoundingly complex human frontal cortex is currently beyond comprehension. Fortunately, the frontal cortex and the conscious mind have evolved a relationship which allows the conscious mind to at least summarize what likely stimuli resulted in the ultimate conclusion. This is not possible with neural-network AI. Even in cases wherein specific neuron firings were recorded for analysis after the fact, scientists can only guess at the actual underlying motivations.

Let me be clear: automobiles which assist the driver in avoiding dangerous situations have real value. I was driving my Model 3 along a small Colorado highway after a heavy snowstorm. That Model 3, of course, has forward-sensing radar and an array of cameras. I found myself behind a driver unfamiliar with snowy conditions driving at 30 mph below the speed limit. I decided to pass.

The lanes were obscured and we were driving in the ruts left by prior drivers. As a curmudgeon, I always prefer to drive in the lane rather than the ruts; but, passing this driver, I had to move slightly into the oncoming lane. I pulled to the left and floored the accelerator. I easily gained on the slower driver until we were side by side. At that point, my car began to slow. I checked and, yes, the accelerator was “floored” but my car was slowing. I looked ahead and saw no obstructions; then I saw an object on my video screen flashing red as it approached along the distant leftward curve of the highway.

In the distance, the radar in my car had detected a speeding vehicle. It had geolocated that vehicle on the highway and determined that it was in the snow-rut-defined lane that I currently occupied. As the oncoming vehicle finally came into view, my Model 3 had slowed enough that I could pull in behind the vehicle I was intending to pass. I did so and the other vehicle shot past me.

I am not objecting to this kind of safety feature because this is not (or does not need to be) tied to the modern delusion of AI. The car’s response to the situation may be programmed, as instructions, into a computer. A human could explain the meaning and reason for the instructions to another human. Seeing those instructions, another human would respond, “Yes, that makes sense. That’s how I would respond to a problem like that.” In other words, it is algorithmic. The actions taken by my Model 3 can be defined in an understandable fashion. A set of specialized sensors is programmed to determine the trajectories of approaching obstacles. Upon detecting such an obstacle, the sensor notifies the central computer, which then verifies the situation using the vehicle’s cameras, which may not yet have identified the obstacle as a threat. Using an algorithm, the central computer determines that the approaching object is a threat and harmlessly decides to decelerate, regardless of the signal from the accelerator, using the simple rule [Do not accelerate into an obstacle].

The decision to pull to the right remains that of the driver; but, seeing the indicated approaching vehicle on the screen, the rational driver will, of course, pull over. The vehicle has made a decision based upon sophisticated but explicitly definable models of the world. These are algorithms and these algorithms may easily record (in a facility that software people call a “log”) the reasons why the decision was made. The algorithm makes a decision because a programmer detailed a specific set of criteria for such a decision. As a result, the decision may be explained and recorded for future review. This is not true for a decision made by a neural network.
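
A minimal sketch of that sort of explicable rule might look like the following; the names, thresholds and units are illustrative assumptions of mine, not Tesla’s actual control code. The point is that the rule fires for stated reasons, and those reasons are written where a human can read them later.

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("drive_assist")

    @dataclass
    class Track:
        """An object reported by the forward-facing sensors."""
        distance_m: float         # distance to the object in meters
        closing_speed_mps: float  # positive means we are converging on it

    def throttle_command(requested_throttle: float, track: Track) -> float:
        """Apply the simple rule [Do not accelerate into an obstacle]."""
        # Time to collision if nothing changes; effectively infinite if we
        # are not converging on the object at all.
        ttc = (track.distance_m / track.closing_speed_mps
               if track.closing_speed_mps > 0 else float("inf"))
        if ttc < 4.0:  # hypothetical threshold in seconds
            # The reason for the decision is recorded in plain language.
            log.info("Throttle overridden: object %.0f m ahead, closing at "
                     "%.1f m/s, time to collision %.1f s -- decelerating.",
                     track.distance_m, track.closing_speed_mps, ttc)
            return 0.0  # ignore the floored accelerator
        return requested_throttle

    # The driver floors it while radar reports an oncoming vehicle in our lane.
    print(throttle_command(1.0, Track(distance_m=180.0, closing_speed_mps=55.0)))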

Engineered FSD

For practical FSD, we would use engineering and not manly, demonstrative proof that an individual visionary can solve any problem without help from anyone (except for an army of brilliant engineers pressed into pointless service by excellent salaries). The engineering solution would not make the amateurish mistake of assuming the inadequacies of the vehicle to be the root cause of the problem. The reason Tesla is trying to solve the problem using only the vehicle is that the company has control over only the vehicle. The truly terrifying possibility is that the real solution would require a collaboration with the larger community. Cobbling together a deeply flawed solution over which one has complete control is much easier than tackling the actual root cause of the problem. It is like a software company writing software to allow its customers to listen to FM radio stations on their iPhones. A simple analog radio is much more efficient and effective; but, this is a software company — they don’t make radios.

Imagine a world where we all work together to solve problems. Imagine a world where the rugged manly-man is superseded by the inventive person who reaches out to industry experts and government representatives and proposes a compelling real solution wherein every human may get to their destination in comfort and safety. This person would actually be an intelligent and creative agent of real change. No longer tied to a manly myth, this individual would present the real advantages of cooperation with the community to produce not a magical vehicle but a transportation system fit to the purpose of practical arrival.

GM’s fanciful instrumented highway could now be implemented using modern technology. Imagine solar-cell-lined roads equipped with inexpensive road controllers (computers as simple as the ubiquitous $50 Raspberry Pi) at one-mile intervals. Each controller would receive data from numerous inexpensive sonar, video and radio sensors lining the highway. Each controller would communicate with the vehicles on that stretch of roadway using an established industry-standard communication protocol. That protocol would allow the controller to send instructions to a vehicle such as [Slow to 53 mph] or [Add two feet of separation behind the lead vehicle]; but, it would also allow the vehicle to tell the road controller that it needs to [Route to exit 153]. Each vehicle would exercise sufficient control, using its internal sensors, to keep safe separation from other vehicles; but decisions such as lane changes and coordinating access to a highway exit would be managed by the controllers, which understand the entire transportation system over that stretch of road.
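
As a sketch of what such an industry-standard protocol might look like (the message names and fields below are illustrative assumptions, not an existing standard), each exchange could be a small structured message passed between road controller and vehicle:

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ControllerToVehicle:
        """An instruction from a road controller to one vehicle."""
        vehicle_id: str
        command: str   # e.g. "SET_SPEED" or "ADD_FOLLOWING_GAP"
        value: float   # mph for SET_SPEED, feet for ADD_FOLLOWING_GAP

    @dataclass
    class VehicleToController:
        """A request from a vehicle to its current road controller."""
        vehicle_id: str
        request: str   # e.g. "ROUTE_TO_EXIT"
        value: int     # exit number

    messages = [
        # The controller tells a vehicle to slow to 53 mph...
        ControllerToVehicle("veh-4811", "SET_SPEED", 53.0),
        # ...and to add two feet of separation behind its lead vehicle...
        ControllerToVehicle("veh-4811", "ADD_FOLLOWING_GAP", 2.0),
        # ...while the vehicle asks to be routed to exit 153.
        VehicleToController("veh-4811", "ROUTE_TO_EXIT", 153),
    ]

    # On the wire, each message could be a single line of JSON.
    for m in messages:
        print(json.dumps(asdict(m)))

A real standard would, of course, be far richer; the point is only that both parties speak an unambiguous, machine-compatible language rather than reading signs intended for humans.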

If we commit to a solution wherein each individual vehicle, on its own in manly-man fashion, vies for each exit and limited stretch of lane in order to pursue its own goals without a keen understanding of the disparate goals around it, then outcroppings of unintended chaos will result in fully automated piles of scrap metal at regular intervals on every highway. Let’s imagine that the Tesla FSD is flawless; how does it deal with a recent startup company’s FSD in the next lane? What does the Tesla do when the GomerGo FSD exposes a faulty learned goal and decides that it can change lanes at any time because damage to the bodywork of the vehicle is not prohibited by the goal? This kind of mis-specified goal is very common in neural networks. Knowing that you have fully specified the goal is a form of magic not yet perfected.

Imagine the simple and very common case of changing lanes in order to reach a desired exit in a highway traffic jam. Currently, the human has a few good options.

  1. Look for a large truck which can’t accelerate like other vehicles, wait for traffic to speed up and slip in front of the truck.
  2. Signal and flash your lights. If this car doesn’t yield, slow and try the next one.
  3. Honk and wave your arms pointing to the lane.

Eventually, someone will “let you in.”

In a world of disparate, manly-man autonomous vehicles, this case is impossible. The manufacturers have not agreed upon a mechanism whereby one autonomous vehicle may inform another autonomous vehicle of its intention to change lanes, much less how a disabled autonomous vehicle could inform surrounding vehicles that it has sustained a mechanical failure and that they must go around. Even if such a signal could be transmitted, how would the adjacent vehicle respond? Would it slow to allow the other vehicle in? Would it see its goal as predominant and maintain speed? If the vehicle behind that vehicle knew that it was on a leisurely drive, would it make room? If so, how would it inform the requesting vehicle that it was opening a space for it?

The effective engineering solution to this problem is based upon a well-established engineering pattern that software and hardware folks know as Object Oriented Methodology (OOM). The basic principle of that methodology is to define objects (vehicle, sensor, road controller, regional controller) and assign responsibilities to each object based upon the capabilities of that object. The vehicle can manage simple functions regarding separation from other vehicles and staying in the designated lane. The road controller can keep track of how individual vehicles may need to adjust their goals in order to accommodate the needs of vehicles moving to an exit or to a parking space near an indicated destination. The sensors may provide information regarding vehicle positions or unexpected vehicle failures to the road controller. The regional controller may coordinate between groups of road controllers in order to keep each of them informed of emerging traffic patterns.
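
A skeleton of that object/responsibility split might look like the following; the class and method names are illustrative assumptions, not an existing standard.

    class Vehicle:
        """Manages only what it can sense locally: separation and lane-keeping."""
        def hold_separation(self, gap_feet: float) -> None: ...
        def hold_lane(self, lane: int) -> None: ...
        def request_exit(self, exit_number: int) -> None: ...

    class Sensor:
        """Roadside sonar, video or radio unit lining the highway."""
        def report_position(self, vehicle_id: str, mile_marker: float) -> None: ...
        def report_failure(self, vehicle_id: str) -> None: ...

    class RoadController:
        """One inexpensive computer per mile of road; adjusts vehicle goals."""
        def coordinate_exit(self, exit_number: int) -> None: ...
        def reroute_around(self, failed_vehicle_id: str) -> None: ...

    class RegionalController:
        """Coordinates road controllers and shares emerging traffic patterns."""
        def broadcast_pattern(self, pattern: dict) -> None: ...

Each responsibility lives with the object best positioned to discharge it; no single vehicle has to comprehend the entire highway.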

Actual functional FSD is possible if we work together. No magical neural network is needed. Clever yet comprehensible algorithms suffice when the solution is developed with the community, using sound engineering method. We start from the root cause of the problem and resolve that. Addressing outlying symptoms of the problem, as modern manufacturers are seeking to do, is always much more difficult than resolving the root cause.

Conclusion

I can think of only one argument for full FSD: humans are lousy drivers. That is a real problem. It is a problem that we must solve using engineering. The simple technician will look at the modern automobile and dream of ways to improve it so as to make driving safer. The engineer will expose the root cause of the problem and resolve that in the simplest and most effective way. That is the difference between the engineer and the enthusiastic dabbler.

Just as the astute engineers at Aptera realized that the modern car was not the problem that needed solving, the ubiquitous steering wheel is not the root cause of unsafe transport. The Aptera Paradigm, the highly efficient vehicle scheduled for release in 2023, is addressing the root problem of how you get people from one place to another with maximum efficiency and freedom. The resulting solar-powered autocycle doesn’t look much like a car; but, it solves the real problem better than a rejiggered automobile would.

If the real problem is how to get people from one place to another without automobile accidents then the actual solution to that real problem doesn’t look like a car with an invisible chauffeur. It looks like a coordinated system of effective inexpensive mass transportation with safe options for the occasional individual trip (what we call an automobile). All vehicles would be managed by an established industry-standard traffic management system, put in place by we the people, through our governments and technical standards organizations. An analysis of the most dangerous traffic environments would identify where the first traffic management automation would be deployed. From there we expand to the entire road system.

Modern FSD is a little boy’s dream of a car that drives itself. It doesn’t solve any actual problem; it just transfers the locus of control from a flawed isolated human to a flawed isolated AI written by humans and yet constrained by the same artificial limitations experienced by a human driver sitting in a driver’s seat. Taking on the actual problem holistically results in a simpler solution because we are addressing the actual problem. Clever but comprehensible algorithms may be distributed across the entire system of moving vehicles, taking into account that entire system as opposed to the situation surrounding a single vehicle; and they will have the power to coordinate the actions of multiple vehicles without AI guesswork because the vantage point is the system, not the vehicle.

This is one of those many cases wherein the community can do it better than the lone individual. It is the solution of the skilled engineer and not of the excited twelve-year-old.

Julian S. Taylor is the author of Famine in the Bullpen, a book about bringing innovation back to software engineering.
Available at, or orderable from, your local bookstore.
Rediscover real browsing at your local bookstore.
Also available in ebook and audio formats at Sockwood Press.

This work represents the opinion of the author only.

--


Julian S. Taylor

Software engineer & author. Former Senior Staff Engineer w/ Sun Microsystems. Latest book: Famine in the Bullpen. See & hear at https://sockwood.com