Are pilots going to be eliminated?


We have become accustomed to the idea that we can do more with less, and flying is no exception. Improvements in technology first allowed airplanes to fly without navigators, as long-range radio navigation, then inertial systems, and most recently GPS coupled with simple computers simplified the task. The navigator's job combined taking measurements of the stars and the sun with basic calculations of the projected aircraft path, considering speed, wind drift and the like. All of this was easily solved with the advent of calculators that could do trigonometry coupled with systems that could determine the current position; it was then a simple matter to combine the two. While this removed humans somewhat from the process, pilots were still trained well enough in the principles of navigation to see when the calculations were not as they should be, much as a person using a calculator should be able to discern whether the answer is far off the mark.
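
To give a feel for how simple that arithmetic really is, here is a minimal dead-reckoning sketch in Python; the function name, the flat-earth approximation and the numbers are mine, purely for illustration, not anything from an actual navigation computer.

```python
import math

def dead_reckon(lat, lon, true_airspeed_kt, heading_deg,
                wind_speed_kt, wind_from_deg, hours):
    """Project a new position from speed, heading and wind drift.

    A flat-earth approximation -- fine for short legs, and the same
    trigonometry a navigator once worked by hand.
    """
    # Aircraft velocity components (north/east, in knots)
    vn = true_airspeed_kt * math.cos(math.radians(heading_deg))
    ve = true_airspeed_kt * math.sin(math.radians(heading_deg))
    # Wind blows *from* the stated direction, so it pushes the opposite way
    vn -= wind_speed_kt * math.cos(math.radians(wind_from_deg))
    ve -= wind_speed_kt * math.sin(math.radians(wind_from_deg))
    # One degree of latitude is about 60 nautical miles
    new_lat = lat + (vn * hours) / 60.0
    new_lon = lon + (ve * hours) / (60.0 * math.cos(math.radians(lat)))
    return new_lat, new_lon

# One hour at 480 kt heading due east, with a 50 kt wind from the north
print(dead_reckon(45.0, -40.0, 480.0, 90.0, 50.0, 0.0, 1.0))
```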

Independently, improvements in basic electronic systems and the simplification of system design allowed for the elimination of the flight engineer. While many people may view this as "automation", most jet transport designs currently in service do not have computers run the systems to any great extent, the one notable exception being the McDonnell-Douglas (now Boeing) MD-11. Alerting systems have improved at notifying pilots of problems, although most are not particularly "smart": they simply list problems in the temporal order in which they are triggered, whereas the MD-11 does (to an extent) rank the most critical items at the top of the list. The systems have not so much been automated as improved, to the point that their engineering design is so simple they require almost no human action most of the time.
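
For illustration, here is a rough Python sketch of the two alerting philosophies just described; the alert names and the severity scale are invented, not taken from any real avionics system.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Alert:
    severity: int                       # 0 = most critical (invented scale)
    seq: int                            # tie-breaker: order of occurrence
    message: str = field(compare=False)

counter = itertools.count()
incoming = [Alert(2, next(counter), "FUEL IMBALANCE"),
            Alert(0, next(counter), "ENG 2 FIRE"),
            Alert(1, next(counter), "HYD 3 PRESS LOW")]

# Most alerting systems: display alerts in the order they triggered
temporal = sorted(incoming, key=lambda a: a.seq)
print("temporal:", [a.message for a in temporal])

# MD-11-style (roughly): the most critical items float to the top
ranked = []
for a in incoming:
    heapq.heappush(ranked, a)
print("ranked:  ", [heapq.heappop(ranked).message for _ in range(3)])
```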

The MD-11 does include features that the other airliners do not, such as automatically reconfiguring systems to work around inoperative components or isolate problems, but it will still reach a point where it defers to the human operator to make a decision. For example, it will shut down one hydraulic system due to overheat, but not two. The designers considered the decision to take a system beyond a certain point too dynamic and dependent on circumstances; what one might need to do mid-ocean is far different from what one might do near an airport. Still, as most routine systems are automated, the pilot is put into a monitoring mode. If the system takes an action, it is designed to notify the flight crew of that action. In most cases the pilot does not have to take additional action, only consider the inoperative components in terms of flight planning. However, like the person with the calculator, it is very important that the pilot have a clear understanding of how the system works and how it should be operated. The automation is not doing anything the pilot would not be required to do absent the automation, but the reliability of the system can lull a person into not doing the mental work needed to understand it.
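
A toy sketch of that kind of bounded automation might look like the following; the rule, names and data layout are hypothetical, intended only to show the "act once, then defer to the crew" pattern.

```python
def handle_hydraulic_overheat(system_id, systems):
    """Hypothetical rule in the spirit described above: shut down one
    overheating hydraulic system automatically, but defer to the crew
    rather than take a second one offline."""
    shut_down = sum(1 for s in systems.values() if not s["on"])
    if shut_down == 0:
        systems[system_id]["on"] = False   # isolate the first fault
        return f"AUTO: HYD {system_id} SHUT DOWN (crew notified)"
    # Beyond this point the right call depends on circumstances
    # (mid-ocean vs. near an airport), so the automation stops here.
    return f"ALERT: HYD {system_id} OVERHEAT -- CREW DECISION REQUIRED"

systems = {1: {"on": True}, 2: {"on": True}, 3: {"on": True}}
print(handle_hydraulic_overheat(2, systems))   # automated action
print(handle_hydraulic_overheat(3, systems))   # deferred to the pilots
```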

Further, because the automation could absorb the workload of operating a complex system, there was less incentive for the engineers to simplify the system, as was done on some of the other advanced designs such as the Airbus or Boeing models. The MD-11 systems are thus relatively more complex. This allows the system to do beneficial things the others do not: for example, the MD-11 senses fuel temperature, and if it reaches a certain point it will move fuel from warmer tanks to colder tanks to prevent the fuel getting so cold it no longer flows smoothly. In the B-777, if the fuel temperature does get dangerously cold, the pilots must resort to a combination of flying faster (raising the temperature through friction) or descending to a lower (hopefully warmer) altitude. The other side of this is that the complexity can make the system harder to understand. There have been cases where the MD-11 system was properly reconfiguring fuel pumps and valves to keep the fuel in close balance and the pilot, not understanding the nuances and thinking it was doing the wrong thing, turned the automatic controllers off, resulting in an engine failing from lack of fuel. The system was smarter than the pilot, and we are back to the person with the calculator who does not know the process well enough to tell whether the answer is correct or wildly off.
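
Something in the spirit of that fuel-temperature logic could be sketched as follows; the tank names, temperatures and freeze-point margin are all invented for illustration.

```python
FREEZE_MARGIN_C = 3.0   # hypothetical margin above the fuel freeze point

def manage_fuel_temp(tanks, freeze_point_c=-47.0):
    """Toy version of automatic cold-fuel management: if any tank nears
    the freeze point, transfer some warmer fuel into it."""
    coldest = min(tanks, key=lambda t: tanks[t]["temp_c"])
    warmest = max(tanks, key=lambda t: tanks[t]["temp_c"])
    if tanks[coldest]["temp_c"] <= freeze_point_c + FREEZE_MARGIN_C:
        return f"TRANSFER: {warmest} -> {coldest} (warming cold fuel)"
    return "NO ACTION"

tanks = {"TAIL": {"temp_c": -45.0},
         "MAIN L": {"temp_c": -30.0},
         "MAIN R": {"temp_c": -31.0}}
print(manage_fuel_temp(tanks))   # -> TRANSFER: MAIN L -> TAIL ...
```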

The autopilot systems have improved somewhat, but they still solve a problem not much more complex than a car's cruise control. The system essentially looks at what the pilot has commanded and adjusts the controls to get there; the main difference is that the pilot can command it to follow an electronic signal from the ground or a path from the navigation system. This functionality is not new, and while modern autopilots do a better job, they are not much different from the systems available in the 1960s and 70s. The addition of autoland in the 1970s was also relatively simple: it just added radio altitude into the mix so the autopilot could adjust the controls to maintain a programmed trajectory. Obviously pilots monitor this very closely, as any failure or bad signal can put the airplane in a dangerous state. The autopilot is not smart enough to discern that things "do not look right", although some of the more advanced systems do disconnect when certain parameters are exceeded, leaving it to the pilot to "save" the airplane.
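
The cruise-control analogy can be made concrete with a toy feedback loop; the gain and limits below are invented, and a real autopilot is of course far more carefully engineered.

```python
def altitude_hold_step(target_ft, current_ft, gain=0.02, max_pitch_deg=5.0):
    """One step of a toy altitude-hold loop: compare the commanded
    altitude to the actual one and nudge pitch toward the target,
    just as a cruise control compares set speed to actual speed."""
    error = target_ft - current_ft
    # Clamp the command so the correction stays gentle
    return max(-max_pitch_deg, min(max_pitch_deg, gain * error))

# 200 ft below the commanded altitude -> a small pitch-up command
print(altitude_hold_step(35000, 34800))   # 4.0 degrees nose up
```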

A pilot can also program the entire route, from departure through the approach and landing, prior to taking off. Still, this is not unlike being able to program your automobile's cruise control to drive certain speeds over various portions of your route, except that this "cruise control" would also have control over your steering. It is really just following a programmed script. The system contains a database of points, or a new point can be created from a latitude and longitude, and these are entered in the order the pilot wants the system to follow them. Altitudes and airspeeds can also be attached to each point in the "flight plan". Despite the public impression, these systems rely heavily on human input and monitoring, just as a programmable cruise control in your car would. The system cannot anticipate hazards, so the traffic jam, the icy road and the like would still require human intervention.
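
As a sketch, the "programmed script" might be modeled like this; the waypoint names, coordinates and constraints are illustrative only, not a real routing.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Waypoint:
    name: str
    lat: float
    lon: float
    altitude_ft: Optional[int] = None   # optional crossing constraint
    speed_kt: Optional[int] = None      # optional speed constraint

# The "flight plan" is nothing more than an ordered list the system
# steps through -- a programmed script, as described above.
flight_plan: List[Waypoint] = [
    Waypoint("KJFK",  40.640, -73.779),
    Waypoint("MERIT", 41.382, -73.137, altitude_ft=17000, speed_kt=290),
    Waypoint("PUT",   41.956, -71.846, altitude_ft=33000),
    Waypoint("KBOS",  42.363, -71.006),
]

for wp in flight_plan:
    print(wp.name, wp.altitude_ft or "no constraint")
```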

There are currently several research projects looking for ways to improve designs so that aircraft can be flown with one pilot, or no pilots at all. The primary incentive might appear to most to be financial, and perhaps it is, but the promoters of these systems argue that it is to improve safety. They argue that the majority of airplane accidents are the result of human error, and that by (eventually) eliminating humans flying airplanes we can achieve safety improvements not possible otherwise. It was with this philosophy that Airbus first started designing limits into what pilots could do in their airplanes. They had found little benefit in allowing pilots to overstress the airplane or exceed certain bank angles or pitch attitudes; pilots had never needed to exceed those limits to prevent a problem, but had only done so inadvertently. Thus, the flight controls were designed to not allow it. Boeing did not initially agree, but the evidence was overwhelming, and the newer Boeing aircraft, while not entirely preventing such excursions, do make them much more difficult. To be fair, pilots can also take measures to exceed the limits on the Airbus designs as well.
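
A minimal sketch of what such envelope protection amounts to, assuming made-up limit values (real protections are far more sophisticated and vary with flight phase):

```python
MAX_BANK_DEG = 67.0   # illustrative limit, not a manufacturer's actual value
MAX_LOAD_G   = 2.5    # likewise invented for this sketch

def protect(bank_cmd_deg, load_cmd_g):
    """Envelope protection in miniature: the pilot's command is honored
    up to a hard limit, then clamped rather than obeyed."""
    bank = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, bank_cmd_deg))
    load = min(MAX_LOAD_G, load_cmd_g)
    return bank, load

# An inadvertent over-command is simply trimmed back to the envelope
print(protect(80.0, 3.2))   # -> (67.0, 2.5): the excursion is prevented
```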

None of this is particularly an issue. Designing a system so that one cannot do the wrong thing is far different from designing the decision-making out of the system. Simple examples are the automobile lock designed so the car door will not lock a person out, the system that prevents the car being started unless the brake is applied, or the chime that sounds if the lights are left on or a seatbelt is not fastened. All of these prevent errors, but do any of them take away the driver's ability to make a decision? Arguably, they do not. The same is true of the current systems in airplanes, so there is some merit to the concept that better system design can eliminate many types of errors.
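
An interlock of that kind is almost trivial to express; this sketch uses invented names and shows only the principle that one specific error is blocked while every decision remains with the driver.

```python
def can_start_engine(brake_applied: bool, in_park: bool) -> bool:
    # The interlock blocks one specific error; where, when and how
    # to drive remain entirely the driver's decisions.
    return brake_applied and in_park

print(can_start_engine(brake_applied=False, in_park=True))   # False: blocked
print(can_start_engine(brake_applied=True,  in_park=True))   # True: proceed
```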

The aviation industry has been quite good at this, redesigning airplanes and systems to the point that most of the simple errors have been eliminated. Those errors were often caused by a momentary distraction, an attempt to rush through a procedure, or poorly written guidance. By identifying and then eliminating these possibilities we have created a system where the chance of having an accident is extremely low. The fatal accident rate has steadily declined, but in the past few years it appears to have reached a plateau. As system and equipment design has improved, failure rates have dropped, and that has left just one primary cause of fatal accidents: human error.

The problem with this position is quite simply that it is wrong. As pointed out by leading researchers such as Sidney Dekker, Erik Hollnagel and others, it is not that humans are making errors, but that the remaining gaps depend so heavily on human resilience to prevent accidents that on the occasions when humans cannot, we label it an "error". Think of it again in terms of the automobile. Anti-lock brakes have certainly saved lives, as have improved signage, better-designed highways, grooved pavement for high speed, banked curves, better traffic signals, automobile designs that eliminate blind spots and improve visibility to other drivers, and a myriad other things, but safe driving still depends on human skill, particularly awareness. We have used technology to eliminate the simpler problems, but the larger ones remain. As the British psychologist Lisanne Bainbridge pointed out, "the designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate".

Now imagine we design systems that eliminate crashes and the like. Take for example Google's self-driving car. Certainly it can drive automatically: motion detectors and a very good navigation system can be supplemented with position updates as it passes roadways. It can avoid obstacles, so what might create a problem for the current system? Have you ever driven down a road and seen an issue that was only potentially a problem? Imagine looking down the road and recognizing activity as two street gangs facing off against each other. The road is otherwise clear, and to the Google car it is a normal situation. It does not notice the tell-tale signs of a street gang, or perhaps an angry mob, and so will carry you right into the middle of it. These are the types of issues that are difficult to solve with current technology, the types of things humans are still far better at, and an airplane has many more of them than a car does.

In an airplane the set of issues that are difficult to program is larger. Take for example the smell of smoke. In a car there is no need to program anything: the occupant pulls over and is done. In an airplane it is not so simple. First, the system would need some way to detect the smoke, and that too is not simple. Back in the car, if you smell something a bit odd you can keep driving and wait to see whether it develops into real smoke. In the airplane the first indication might be just a subtle change in odor, and since large amounts of air move through various systems, odors change all the time. Yet reviews of real events have shown that waiting until it is clearly smoke is sometimes too late to prevent catastrophe. Then there is the issue of what the options are. Flying over the North Atlantic at night, is it better to ditch or to try to make the nearest airport? Is it better to depressurize the airplane and slow the burn rate of the fire (depriving it of oxygen) in order to survive the flying time to the nearest airport, or to dive to a lower altitude so passengers do not run out of oxygen? Can a computer make such a decision better than a human?

Other things that are difficult to program include subtle cues such as a very small change in sound or vibration. Is that change serious or not? It takes a lot of experience to discern the difference. Then there are domains, such as meteorology, that the engineers who design the systems would need to understand far better.

All of this ignores the issue of power. If something interrupts the electrical power of the airplane and its components, how is the computer flying the airplane to be protected? A good example is a fire burning through power systems. So until computers reach human-level capability and the power issue is resolved, the concept of replacing human pilots with computers is not realistic. Once we reach human-level artificial intelligence there is another set of issues which makes much of this topic likely moot, as will be discussed at the end of this article.

So what about the idea, then, of leaving a single human in the cockpit to capture these issues?

Humans are subject to a number of cognitive limitations, and historically it was considered that having more than one pilot improved safety because the second pilot could catch the first pilot's errors. True, there are plenty of aircraft that operate with a single pilot, everything from military aircraft to very sophisticated private aircraft, and smaller charter-type airlines also operate this way in many cases, so clearly the workload issues could be worked out, at least for routine flights. However, the accident rates for these categories are much higher than would be accepted for mass transportation. Partly this is due to people making mistakes, misperceiving things and the like, but even as we create systems to capture these errors we still see accidents. Why is that?

It is the general view that humans are error-prone, and that is what led to the idea of removing humans from the equation entirely in the first place. The generally accepted reason that adding a second pilot improves safety is that a second person will notice these errors and speak up about them. Indeed, that is the basis for the crew resource management (CRM) training instituted starting in the 1980s. The concept was that by training pilots to speak up when they noticed a problem we could solve many issues; underlying this is the assumption that in many cases the second pilot did notice an issue but did not point it out, for reasons that could include fear, power-distance, not wanting to "make waves" or even anger. There were perhaps some actual cases of this, and so CRM training became "the fix" for the problem. Simultaneously, equipment became more reliable, the design of procedures improved, and systems were installed to warn pilots of dangerous conditions: approaching terrain, windshear, too steep a bank angle, approaching a runway on the ground, a collision risk and so on. In addition, ground-based systems were installed so that air traffic controllers would also get alarms and could shout a warning for many dangerous situations. And accident rates did go down.

So, was it the CRM that led to this improvement? The truth is that the evidence to support such a conclusion is weak at best. That is not to say CRM is a bad thing, but rather that it may not have made as much impact as its proponents hoped. The same can be said of other programs. One example was a program centered on the concept that the more precise a person tries to be in all areas of life, the "smaller the target" and the less likely they will be to deviate from what they intended to do. Certainly not a harmful concept, but there is no evidence it has any correlation with accident prevention. Other emphases are easier to justify, such as improved diet, hydration and, of course, mitigating fatigue. Fatigue can definitely be a problem, and people do make more errors when tired, but the entire approach is again based on the assumption that errors are the problem, when the real issue is the loss of resilience discussed previously.

That said, a second person can obviously help capture errors, but the real value is that two pilots, properly trained to work together, create a shared mental model and then act, essentially, as one mind, but one with multiple senses. That has a multiplier effect on the experience level and on the ability to cope with unusual situations. It does not just double the resilience found in one person; it magnifies it. Humans are able to accommodate variability, and two well-trained humans working closely together are better than one. More can be better still, as Captain Al Haynes described after surviving the Sioux City accident.

To make single-pilot operation work, the system would need to be smart enough to anticipate what the human needs. That is something the other pilot does: they not only react, but actually anticipate the needs and actions of the other pilot. Computers are rather limited at this currently, "auto-correct" being a case in point. Would we really want a virtual computer "co-pilot" that reacts the way "auto-correct" does?

The proponents of the concept counter that a person on the ground can serve as a "virtual co-pilot", ready to assist if needed, attending to multiple flights on the premise that only one will have an issue at a time. This makes one wonder. Have you attended a virtual meeting? Even under the best of circumstances there are limitations, and subtle cues are missed: hand gestures, facial expressions and many other nuances are lost. In reality, the person on the ground is working as a dispatcher, and dispatchers are already part of the decision team for safe flights at all major airlines, so we are not adding anything particularly new.

So the disadvantage of this scheme is the loss of the "shared mental model": no matter what sort of data connection there is, the second person on the ground would not actually "be there". They would not be experiencing the sensations; they would not have "skin in the game". Even assuming there were some way to transmit some of those aspects, they would still need continuity. Not being fully immersed in what was happening until there was a problem would be much like the situation of the Air France 447 captain, who did not come up to the cockpit until after the airplane was stalled; he had almost zero chance of sorting out the issues. To fix that, the person on the ground would need to be virtually "in the cockpit" for the entire flight, which would mean they could not be in multiple cockpits simultaneously. Of course, once we have done that we have saved nothing.

Finally, there is, of course, the problem of someone who is suicidal or similar. The Germanwings case highlights that issue, and there is no known psychological testing that would reliably ferret it out. In sum, it is clear that the impediments to both pilotless and single-pilot transport aircraft are larger than most realize.

So what about artificial intelligence? There is certainly no question that once computers reach human-level cognition they will be able to fly an airplane as well as a pilot can. They would need appropriate sensors to pick up subtle odors, vibrations and sounds, but that is not a difficult problem. Methods could be devised to ensure they stay powered, so that issue is also surmountable.

Some estimate that computers of this level could be operational within the next few years; others believe it will take longer, but the overall consensus is that they will be up and running by the end of the century. Is this something pilots should be concerned about?

The answer is "perhaps", but the real truth is that once human-level intelligence is created, the world will be so completely changed that who flies airplanes is not likely to be a large concern, even for those who currently make their living at it. The reason is that this level of artificial intelligence, referred to as "artificial general intelligence" (AGI), is very unlikely to be like a Hollywood movie.

Current technology includes a lot of what is considered "artificial intelligence", or AI. This level includes predicting words on a smartphone or tablet, and numerous other applications; Google's car is in this regime. These systems are able to "learn" on their own, "watching" what you do and improving their performance. Cool stuff.
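
A minimal flavor of that kind of "learning by watching", sketched as a bigram next-word predictor; this is an assumption about the simplest form such prediction can take, not how any particular phone actually does it.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Count which word follows which, then suggest the most
    frequent follower -- learning purely from observed text."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, text: str):
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.counts[a][b] += 1

    def predict(self, word: str):
        followers = self.counts[word.lower()]
        return followers.most_common(1)[0][0] if followers else None

p = BigramPredictor()
p.observe("cleared for takeoff runway two two right")
p.observe("cleared for the visual approach")
print(p.predict("cleared"))   # -> 'for', the most common follower so far
```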

It seems to follow that if you create an intelligent enough computer that is learning on its own, at some point it will be roughly equivalent to human level; and since humans tend to anthropomorphize objects, we think of it as human-like. Indeed, it would mimic many human traits, as it would be logical to design it to speak and so on. But as much as it might seem human, it would not be: a computer is not even a biological organism. Tim Urban has a very good discussion of this topic, and one illustration considers a spider and a guinea pig. He points out that a guinea pig made as intelligent as a human might not be so scary, but a spider with human-level intelligence is very unlikely to be friendly or a good thing. A computer is not even biological, so it is more different still. Urban points out that a spider is not "good" or "evil" as Hollywood likes to portray things; it is neither, just different, and so would a computer be. It reacts to things based on programming, but once it can self-learn at that level, its motivations are based only on what its job is set to be. A short example of how this could go wrong is illustrated in this SMBC comic.

Humans are social animals: primates with something like the social structure of termites. We survive as a result of that social aspect. Termites have evolved so that any individual will sacrifice itself to prevent the destruction of the colony. Altruism and self-sacrifice are not things we usually attribute to insects, but the fact is that termites, bees, ants and other social insects take actions that in humans we would consider altruistic. By being social animals, humans are able to succeed where non-social animals could not. As a result we have evolved to be social, and our "programming" reflects this. It might be described very roughly as follows:

  1. Prevent harm to yourself, unless (in order):
    1. Your family is in danger; protect them first.
    2. Your “tribe” is in danger.
    3. Your nation is in danger.
    4. Another person outside your group is in danger.

The last items may or may not apply; many will protect themselves before helping strangers. The programming varies. Very few would not give their life to protect their children, spouse and immediate family, and we are programmed to keep it in that order. The point of all of this is to ensure our genes survive: it is better to ensure that even a bit of your DNA makes it (you will most likely have relatives in your tribe, nation and so on), and it seems that the stronger the DNA connection (or, in the case of a spouse, the likelihood of ensuring your genes' survival), the stronger our will to protect them, even at our own expense. Intrinsic in all of this, of course, is procreation.

Obviously, a computer has only the traits we have programmed it to have, so as much as it might appear to be human, it really is not.

This leads into the larger concern. Computers today process information millions of times faster than humans but still lack human intelligence. This article is not intended to get into the technical aspects of how our brains are structured; suffice it to say that the structure of our brains allows for ways of processing information and making connections that computers are not yet capable of. Once they are, though, they will combine that faster processing with those connections. Connect that to the internet and things happen fast.

Consider a computer with this capability and the ability to learn. It starts as a toddler, but an hour later it has the ability and knowledge of Einstein, and an hour after that it has the knowledge of all the great thinkers combined. Unlike us, it has constant access to ALL of that knowledge and much faster processing. The problems that take us years to solve, or that appear to have no resolution, are likely to be trivial for it.

Is there any doubt that it could rapidly find ways to make itself even faster and smarter? Does it have the ability to adjust its pathways and improve its structure based on its knowledge? If we give it that ability, or it figures out how to do so on its own, then the intelligence can increase even faster. Above AGI is artificial super intelligence, or ASI. In this realm we are looking at a computer that is not just a little more intelligent than us but millions of times more. This is a system that might discover ways to manipulate matter, time or space; it would not be limited to our perceptions of reality. The trouble is that it would be farther above us than we are above an ant, and we might not be relevant to it, or might be just a nuisance.

All of this is such a tremendous game-changer that who is flying our airplanes becomes a somewhat trivial issue. ASI could lead to solving all the problems of humanity, or to the end of humanity. People like Stephen Hawking and Bill Gates are very concerned; Elon Musk is so concerned that he says he spends a third of his waking hours thinking about it, and this while running several companies! Hopefully it will turn out well. If it solves all the problems of humanity, then all of us may be able to live just doing what we want, without any real need for work. If it goes badly, then none of it will matter either.

The bottom line here is that we might see a push for single-pilot or even no-pilot airplanes, but if we do, it is based on a fundamental misunderstanding of what the issues are and where the risks lie. We might automate the basic functions, but that would still leave us vulnerable to the real "corner-point" scenarios that lead to actual accidents. Contrary to popular opinion, most accidents do not follow a simple linear causal chain. It would be safe most of the time, true, but not as safe as the public demands today; safety might plateau at the levels reached in the 1970s or so. Reaching the higher safety levels now demanded by the public and regulators would require AGI, and once we reach that point the outcome moves in directions beyond our ability to reasonably anticipate.
