X-31: Breaking the Chain: Lessons Learned


♪ [music] ♪ (Chase Pilot) NASA ONE:
We have an ejection, we have an ejection. The aircraft is descending over
the North Base area. I have a chute. The pilot is out of his seat and
the chute is good. Control Room: NASA One, we copy. (Rogers Smith) We had a
highly competent team, very experienced, many flights
under their belt. We had a number of pilots that
flew the airplane. The pilot in particular that was
flying that day had been on the program from
the very beginning. Highly experienced
with the X-31. Each mishap has its own
circumstances and its own sequence of events. But you find similar issues: communications, complacency, assumptions that haven’t
been warranted. Human frailties. And you have to account for
these things in a program. (Rogers Smith) Just like a
chain. You make a chain when you have
any of these accidents. A chain of events. Any link of
the chain, if it were broken, you would not have an accident. This was the A team. We had the best people from
every discipline, from every organization. And we lost an airplane. So, if it can happen to the best
team, it can happen to any team. (female narrator) The X-31
research effort began in the late 1980s as an
international program involving DARPA, the U.S. Navy, Deutsche Aerospace, the German Federal Ministry of
Defense and Rockwell International. The program’s goal was to
explore the tactical utility of a thrust-vectored aircraft
with advanced flight control systems, using an aircraft
designed and built specifically for that task. The X-31 was a real pioneering
program. In fact, the X-31 program
pretty much wrote the book on thrust-vectoring, along
with its sister program, the F-18 HARV. The initial X-31 flight
tests were conducted at Rockwell’s facility in Palmdale,
California. But, in 1992, NASA and the U.S. Air
Force joined the X-31 research team and the test
flight program was moved to the Dryden Flight Research
Center on Edwards Air Force Base. And
before too long, the X-31 was turning in some
extremely impressive results. [Jet engine] By any measure, the X-31 was a
highly successful program. It regularly flew several
flights a day, accumulating over 550 flights during the
course of the program, with a superlative
safety record. And yet, on the 19th of January
1995, on the very last scheduled flight of the X-31’s
ship #1, disaster struck. This particular flight had been on the books for some
time to get done. And it was, by our standards, an
absolutely routine flight. We were not expanding the
envelope. We were not trying anything new. We were flying a new pitot
static tube… but this was a routine mission, a routine task, a routine flight
environment with an experienced pilot and an experienced crew. But while the flight was
routine, there had been some changes
to the configuration of the X-31 since its initial flights. In
particular, the original pitot tube, which supplies
airspeed information to the plane’s flight control
computers, had been replaced with another kind of pitot tube
known as a “Kiel probe.” The Kiel probe gave more
accurate airspeed data at high angles of attack, but it
was more vulnerable to icing — especially since the Kiel probe
on the X-31 did not have any pitot heat. (Fred Knox) We were never
to fly the airplane in ice. That was a prohibited
maneuver. So, if you’re prohibited from flying in
ice, you don’t need a heater. Normally, the conditions at
Edwards are warm and dry enough that icing, and therefore pitot heat, isn’t
a concern. But January 19th, 1995 was not a
normal day. The unusual part of the day was that we had high humidity at altitude, actually conducive to freezing conditions, and the airplane was operated in and out of some fairly high moisture content for extended periods of time. That led to some indications in the cockpit and the control room that something was causing problems with the air data system. [Jet engine] (Dana Purifoy) This particular
airplane had a limit to not fly through clouds,
through visible moisture. That day, we were flying very
close to and occasionally in and out of very thin cirrus
clouds. It didn’t particularly
worry me because everything seemed to
be going along quite normally. But some minutes, like five,
before the airplane went out of control and the pilot jumped
out, the pilot observed that there was some moisture
around where he was. So he turned the pitot heat
switch on. Now clearly, when he turned the
pitot heat switch on, he expected that the pitot heat
would be working. About two and a half minutes
later, which is two and a half minutes before the accident,
he mentioned that fact to the control room. Mysteriously, to this day, the
control room gave him no
response. They had an internal discussion
as the clock ticked down. And internally it was commented
that the pitot heat was not
hooked up. But this vital piece of
information was not relayed to the pilot for more
than two minutes. And even when it was, the
information was not stated as clearly or strongly as it could
have been. Control Room: ….And pitot
heat. Pilot: We’ll leave it on for
a moment. Control Room: Yeah, we think it
may not be hooked up. Pilot: It MAY not be hooked up?
That’s good. I like this. We had side discussions that
should have been going on on the intercom so that
everybody in the control room was part of the conversation. Instead, we pulled our headsets
aside so that we could talk to each other because we were
sitting adjacent to one another. And that’s another part
of just control room discipline that we broke down on. Meanwhile, the first signs of
trouble were beginning to
appear. So now the pilot sees an anomaly
in his airspeed. He’s at 20 degrees angle of
attack, and he can see that. And he says to the ground, and
I briefed this many times, he said, “I’m at 277,
I mean 207 knots.” Pilot: The airspeed is off. I’m
reading 207 knots at 20
AOA ….Ok, pitch doublet. Well, anybody that’s been on the program, and lots of people had been on it for many years, would know that 20 degrees angle of attack is somewhere around 135 knots, 140 knots. It’s NOT 207 knots.
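The cross-check being described here is essentially the lift equation: in steady one-g flight, angle of attack and airspeed are tied together, so a reading of 207 knots at 20 degrees angle of attack can be flagged as physically implausible. A minimal sketch in Python, using hypothetical placeholder numbers rather than actual X-31 weight, wing area, or lift-curve data:

    import math

    RHO_SL = 1.225          # sea-level air density, kg/m^3 (altitude ignored for simplicity)
    WEIGHT_N = 7200 * 9.81  # assumed aircraft mass of 7200 kg -> weight in newtons (placeholder)
    WING_AREA_M2 = 21.0     # assumed reference wing area, m^2 (placeholder)
    MS_TO_KT = 1.94384      # m/s to knots

    def cl_at_alpha(alpha_deg):
        """Very rough placeholder lift curve: about 0.06 of lift coefficient per degree."""
        return 0.06 * alpha_deg

    def expected_speed_kt(alpha_deg):
        """One-g level-flight speed implied by L = W, i.e. V = sqrt(2W / (rho * S * CL))."""
        v_ms = math.sqrt(2 * WEIGHT_N / (RHO_SL * WING_AREA_M2 * cl_at_alpha(alpha_deg)))
        return v_ms * MS_TO_KT

    def airspeed_plausible(alpha_deg, indicated_kt, tolerance_kt=30.0):
        """Flag an indicated airspeed that is far from what the angle of attack implies."""
        return abs(indicated_kt - expected_speed_kt(alpha_deg)) <= tolerance_kt

    print(round(expected_speed_kt(20.0)))   # roughly 131 kt with these placeholder numbers
    print(airspeed_plausible(20.0, 207.0))  # False: the 207-knot reading deserves a challenge

With any reasonable numbers the answer lands near the 135 to 140 knots quoted above, nowhere near 207.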
Apparently, no one in the control room caught the possible significance of that discrepancy. And perhaps even more
importantly, neither did the chase pilot — for the
simple reason that he couldn’t hear any of the pilot’s
transmissions. We had a mechanism of hot mic. It was very important to the
pilot of the X-31 that he be able to talk to the
control room without having to press buttons at certain key
times, especially at high angles of
attack. Which was not going to be a
factor in this flight because it was going to go to about 20
degrees angle of attack. But, it was a general operating
procedure that was compounded because our hot mic system
didn’t always work very well. And when it didn’t work, it put
a lot of static in the earphones of the chase pilot who
wanted to hear the hot mic to know what’s going on. So it was the one-sided nature
of the communication that kept me from having the
situational awareness to be able to step in and say,
“Hey, I’m reading X knots, and you guys are reading Y knots and
these two numbers should be the same and they’re not.” The X-31 did, indeed, have an
air data problem. The unheated Kiel probe had
frozen over in the cool, moist conditions, causing it to
start giving incorrect airspeed information to the X-31’s flight
control computers. In terms of the accepted risk,
the failure of the pitot static system or damage to it was well
known. It was well understood. The pilot himself had simulated
the failure in simulations before we even got the airplane.
And it probably helped him understand that he had to
get out of the airplane because the time is short when
the airplane is diverging. And we went through quite a
thorough review of the hazards that we knew or could come up
with based upon the design of the flight control system. And we thought we had a good
handle on that. We thought we could lose the
whole nose boom. We could take a bird strike,
wipe out the whole nose boom and fly home safe. As a
result of that, we thought we had a pretty robust system.
The reason the team thought they HAD a robust system was that the
X-31’s flight control system was designed with three back-up
reversionary modes the pilot could select in the event
of an air data problem or other systems failures. So in the case that you saw something that was not right, or the control room saw something that was not right with respect to the airspeed system, they could tell the pilot to go to R3. R3 was a reversionary mode that would have removed — within 2 seconds — the airspeed data inputs into the flight control system. The control surface response to pilot inputs would then be independent of airspeed, allowing the airplane to remain controllable for the remainder of the flight back to the landing.
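As a rough illustration of why removing airspeed from the loop matters, here is a schematic sketch, not the actual X-31 control law, of a control gain scheduled on indicated airspeed with an R3-style fixed-gain fallback. All values are placeholders:

    FIXED_GAIN = 0.8  # conservative placeholder gain chosen to be stable everywhere

    def scheduled_gain(indicated_airspeed_kt):
        """Hypothetical schedule: higher gain at low airspeed, lower gain at high airspeed."""
        return max(0.2, min(2.0, 250.0 / max(indicated_airspeed_kt, 50.0)))

    def pitch_command(pitch_rate_error, indicated_airspeed_kt, reversionary_mode):
        """Pitch-axis command from pitch-rate error, with or without air-data scheduling."""
        gain = FIXED_GAIN if reversionary_mode else scheduled_gain(indicated_airspeed_kt)
        return gain * pitch_rate_error

    # A frozen probe reads far too slow on descent, so the scheduler picks a gain sized
    # for a slow airplane while the real dynamic pressure is high, and the loop gets
    # too much gain. The reversionary mode simply takes the bad air data out of the loop.
    print(pitch_command(1.0, 48.0, reversionary_mode=False))  # 2.0: scheduled on bad data
    print(pitch_command(1.0, 48.0, reversionary_mode=True))   # 0.8: airspeed-independent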
The accepted risk was probably reasonable. But here’s the kicker…the
consequences of a failure are so high here that you really needed to put some special attention on
this. The designer did by putting R3
in. But nobody on the test team, including the pilot, realized
that the X-31 was experiencing an air data problem that would
require implementing the R3 reversionary system. For several minutes we had
indications that airspeed was becoming poor, both in the
cockpit and the control room. And nobody made the last-ditch catch; nobody stood up and yelled,
right.” Because had we realized what was
going on, the control system had the
ability to go to fixed flight control gains. And
with fixed flight control gains, it would not have been a
problem. They would have been able to
land the airplane safely. But we just never got enough
information to make the decision to do that. We had an
alternate airspeed indicator that used a different pitot tube
which would be less susceptible to icing than this special tube. It was at the pilot’s right-hand
knee. And he never looked at it. We had a lack of attention to
the reversionary modes. Gradually, we were not thinking.
We learned to depend on the control room — they’re going to
tell us when we need to go to R2 or R1 or R3. We need to know as
pilots, which we kind of forgot, where the safety nets are. The safety nets: push the right button, and you don’t get the test data, but you bring the aircraft back. So if we didn’t understand
what was happening, we should have been constantly reminded,
push the button and talk about
it. The pilot obviously wasn’t
concerned. He was experienced… Probably, if you look at the
control room, the pilot and everybody involved in that day’s
activity, he was the most experienced. He’d
but he wasn’t concerned. And he didn’t ask for help
that I was aware of. So I think the control room
said, “Well, if he’s not that panicked, I’m not that
panicked.” And I think they fed off each other a little bit. The team moved on to the
final test point of the day — a simple, automatic
control response test that required only a command
from the pilot to initiate. But once again, the airplane DID
NOT respond as expected. He hits the box, presses the
button and he says,
“I don’t get anything.” Well, he didn’t get anything
because the box was designed not to put in any input if you went
beyond a certain speed, like 200 knots. So it was seeing
the false airspeed of 200 plus knots and when he
pushed the button, it didn’t
work. Pilot: Three, two, one, go. Hmm.
It doesn’t do anything. Well, it didn’t work because something was wrong!
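The inhibit being described is a simple speed interlock. A minimal sketch of that kind of logic, using the roughly 200-knot figure mentioned above purely as a placeholder and not the actual X-31 test-box value:

    MAX_INPUT_SPEED_KT = 200.0  # placeholder inhibit threshold

    def automatic_pitch_doublet(indicated_airspeed_kt):
        """The automated test input is simply suppressed above the speed limit."""
        if indicated_airspeed_kt > MAX_INPUT_SPEED_KT:
            return "inhibited"           # no control-surface input is commanded
        return "doublet commanded"

    print(automatic_pitch_doublet(135.0))  # near the true airspeed: would have worked
    print(automatic_pitch_doublet(207.0))  # the frozen-probe reading: "I don't get anything"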
And the control room came back and finally just kind of ignored that and said, “It’s all okay and RTB now.” It’s almost like expecting to
hear that it went fine. After this program with hundreds
of flights and everything going
perfectly, in your mind, you’re hearing
things that aren’t happening. Everything’s fine, it worked
fine, let’s come home. The normal expectation was that the system would identify the problems itself, that it would not be the people on the ground identifying an air data problem and calling for fixed gains. Although it was certainly capable of doing that, the expectation was that the system would do its own self-diagnosis and identify failures. But the failure we had was a slow failure of the tube, slowly building the ice up. So the changes in the speed were within perfectly reasonable numbers for a real airplane. The software just wasn’t capable of detecting that kind of failure in that system.
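One way to picture the limitation: a monitor that flags changes too fast to be real catches a sudden probe failure but not a slow, ice-driven drift. A minimal sketch, not the X-31 software, with placeholder values:

    MAX_PLAUSIBLE_RATE_KT_PER_S = 10.0  # placeholder limit on how fast airspeed can really change

    def rate_monitor(samples_kt, dt_s=1.0):
        """Return the index of the first impossible jump between samples, or None."""
        for i in range(1, len(samples_kt)):
            if abs(samples_kt[i] - samples_kt[i - 1]) / dt_s > MAX_PLAUSIBLE_RATE_KT_PER_S:
                return i
        return None

    sudden_failure = [210, 212, 211, 48, 48, 48]      # e.g. a broken pressure line: caught
    slow_icing = [210, 207, 203, 198, 192, 185, 177]  # gradual blockage: looks like normal flying
    print(rate_monitor(sudden_failure))  # 3 -> detected
    print(rate_monitor(slow_icing))      # None -> undetected, as on the mishap flight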
There were one or two people who actually knew that there were these little tiny areas that it
couldn’t handle but that word never got out.
They never stood up and said, “Uh, boss, that’s not quite
right. You can handle it over 95 or
99 percent of the area but
there’s really a couple little areas that the automated
system can’t handle.” And that didn’t come out until
after the accident. I never did get to talk to them
about it but I just kind of felt like they didn’t want to stop
the program, thought it was of no real issue because of the
difficulty of getting to such a small area of the envelope. But as the X-31 began to descend
on its return to base, the problems caused by the
failure of its air data system became far more pronounced. We have frozen the pitot tube now. And it’s stuck. It’s got what it had in it and it’s going to hold that pressure. Now when you start down with a frozen pitot tube, the airspeed you see, the false airspeed that he saw, will decrease as he decreases altitude. But we are seeing it: the control room is seeing it on a big display, and the pilot is seeing the airspeed in the HUD every time he turns his head. And now, at one point it’s at 150 knots. It cannot be at 150 knots! And then it’s at 100 knots, and it cannot be at 100 knots! And going on down, finally, right just before the accident, it gets to 48 knots, which is the minimum it’s going to read.
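The arithmetic behind that falling readout is simple: the indicator derives a speed from pitot (total) pressure minus static pressure, and with the pitot side frozen the trapped pressure stays fixed while static pressure rises on descent. A minimal sketch using a standard-atmosphere model and illustrative numbers (the 20,000-foot icing altitude is an assumption, not flight-test data):

    import math

    RHO0 = 1.225      # sea-level air density, kg/m^3
    P0 = 101325.0     # sea-level static pressure, Pa
    MS_TO_KT = 1.94384

    def static_pressure(alt_m):
        """ISA troposphere static pressure."""
        return P0 * (1.0 - 2.25577e-5 * alt_m) ** 5.25588

    def indicated_airspeed_kt(total_pressure, alt_m):
        """Incompressible airspeed indication from (pitot total pressure - static pressure)."""
        q = max(total_pressure - static_pressure(alt_m), 0.0)
        return math.sqrt(2.0 * q / RHO0) * MS_TO_KT

    # Suppose the probe ices over at 20,000 ft while indicating about 207 knots (illustrative):
    freeze_alt_m = 20000 * 0.3048
    trapped_total = static_pressure(freeze_alt_m) + 0.5 * RHO0 * (207.0 / MS_TO_KT) ** 2

    for alt_ft in (20000, 19000, 18000, 17000):
        print(alt_ft, round(indicated_airspeed_kt(trapped_total, alt_ft * 0.3048)))
    # Prints roughly 207, 175, 134, 70 knots: the trapped pitot pressure stays constant
    # while static pressure climbs on descent, so the displayed airspeed keeps falling
    # toward the gauge's minimum reading even though the true airspeed is not decreasing.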
But the control system in the airplane is getting this wrong information, and this is a
complex closed-loop system and when you put
too much gain in, it will start to get unstable
and it will start moving the controls, which it did in a
matter of seconds. And finally, it dramatically
pitches up, the pilot of course
tries to prevent that and I’m sure the instant
that he hit the forward stop and realized he was out of control,
he did the natural thing which was to eject from the airplane. We were RTB (return to base) and
I started to rejoin on the X-31. As I came up on his right side,
about 100 yards away and closing, I saw the airplane start
to go into a small wing rock that progressively got larger
and larger. And, as I got within about 200 feet of him,
the airplane pitched up vertical and, at approximately the time that I passed abeam him, I saw the
pilot eject. Chase Pilot Dana Purifoy: Okay
NASA One, we have an ejection, we have an ejection. NASA One do you read? Yeah, we
copy Dana, we copy. Puriofy: Sport, NASA 584 has
ejected the aircraft and is
descending over the North Base area. I have a chute. Sport
NASA 850, how do you read? 850, say it again please? Purifoy: Yes sir. NASA 584 has
ejected from his aircraft. The aircraft is descending
north of North Base. The pilot is in the chute
at this time, descending approximately
one mile north of North Base. 854, copy. (John Bosworth) So there was
the knowledge and training in the simulation that taught the
pilot that when he started to see the airplane was
oscillating and was not controlled, he knew he
had to get out of the airplane very fast or else the airplane
would go into a tumble. And he did do that and that
saved his life. I also know
that the pilot, as he was ejecting from
the airplane, had thoughts of, “Maybe I should have tried a
reversionary mode.” But at that point, if he would have
hesitated any longer, he probably would have been lost with the
airplane. Stoliker: I did not make the connection until after the plane departed. While the plane was tumbling, I made the connection: the pitot system had to be frozen. I just didn’t come to the realization soon enough to do anything about it in the
control room. Less than four minutes after the
first comment about pitot heat was recorded between the pilot
and the control room, the X-31 crashed just north
of Edwards Air Force Base. How could such a routine
operation have ended in disaster, when flights with
far higher risk had been completed safely? And more importantly, what
can we learn from the answers
to that question? Szalai: Every person involved in
an experimental flight research program should actually study
the mishaps of all experimental aircraft in the past twenty to
thirty years. There’s a lot of things you can learn. Because
human nature doesn’t change. The processes don’t change. It’s
always the same set of contributing factors. Just the
names and the details change. Of the ten things, for example,
that I would describe as causes, contributing causes of the
mishap, six of them occurred prior to the day of flight. Four
occurred within about two minutes. So, we had a better
chance of working on the six than we did on the four. In some senses, the X-31
accident started six years earlier, when the plane was
first developed and tested at Rockwell. Knox: We had a hazard analysis
from the initial design. And after the accident, that had to
actually get dusted off. You should never have to dust
off one of those. Everybody familiar with the
program, at all those levels needs to have a really good
comfortable feeling of what those hazards are and what risks are accepted. There was a redo of that analysis as we moved to NASA in ’92. And I think that
it was clear after the accident that not everybody really
understood what that design was to the detail you
needed to understand the full risks of the program. Clearly, from 1990 to ’95 you
have a large team turnover. We changed locations. We
expanded the objectives of the program and as time
rolls on and the new people come in, not everybody
has the same understanding or appreciation of the
kind of vehicle we’re operating. It’s a special airplane. It’s
not the same risk as any other airplane and to operate
it every day you really ought
appreciation for the risk. And I don’t think we, as
a team, did a good job at keeping everybody that came to
the program with the same level of understanding of both the design and the risk
of the airplane. We shouldn’t have had a
control room, a pilot and a team that day that didn’t
understand that fundamental
fact. And it’s not elaborate. It’s just straightforward. The airspeed I see in the HUD is
the airspeed the computer uses. If the airspeed I see
has got a problem, the airplane has got a
problem. And that fact didn’t get
communicated correctly from the old team members to the
new team members and if it had, I don’t think there would have
been anybody in that room that wouldn’t have yelled “STOP” and jumped off that bridge to
make it happen. There were errors made. The pitot heat circuit breaker
was disabled but there was no placard in the cockpit to say
“NO PITOT HEAT.” Notices of the configuration
were sent around but here also we probably lacked one step and that is to know that
everybody got the message. It’s one thing to send
it out, it’s another thing to verify that everyone has read
and understood it. And so that procedure was
changed, by the way, so that people ripped off the
bottom of the page and sent it back. I’ve seen it. Ironically, the X-31 program
also may have been a victim of its own success. Szalai: I never saw
complacency in this team. I went to tech briefs, crew
briefs and it was treated very professionally and
in fact, to some extent it was treated like an
experimental airplane every flight. But certainly you have to think that after hundreds of
flights, excellent results and the fact that none of these
hazards, these terrible things that
you predict could happen, has ever happened, it could
lead you to be less sensitive to things that are
happening. Maybe just a little bit of the
edge comes off. Those single point failures were
identified and we made some actual changes to the
design of the airplane
to account for that. Again, that was in 1989. Why all those were
there and what the concerns were and how to mitigate them or how
to worry about them became… we hadn’t had any problems with
that for five years and I think again, the complacency just got
built into the team. It worked fine. We’d never
had a problem. And those little hairs on the
back of your neck weren’t geared to stand up when people started
having airspeed problems. Our control rooms used
to have a sign posted that said, “Prepare for the unexpected and
expect to be unprepared.” And I think that’s
a truth in the flight test business that we need to keep
in mind continuously. I wish that sign was still up
there because that reminder needs to be reinforced all the
time. Well certainly in the case of
the X-31 we were returning to base after two exhausting days,
7 flights. Ship 1 was now going into the
boneyard or at least it was being retired from the test
program and so we’re finally finished. Was everybody paying attention
like they should be? Obviously not. And while the
X-31 program flights were highly successful, they did not
include an element that might have helped prime the program
team to take the one mitigating action that could have brought
the X-31 home safely. We’ve debated amongst ourselves
whether we actually would have been able to convince
anybody to use the fixed gains system because there
was not an obvious need for it. The pilot may have been better
prepared when things started to go awry to select fixed gains
but I don’t know if we ever really would have
done it in that situation because we didn’t
have a real problem. We DID have a real
problem but it hadn’t been diagnosed as
a real problem. On a previous program,
the X-29 program, we had the same sort of thing. We had an analog reversion
mode, a digital reversion mode, and the normal mode of
the airplane. We routinely at every test point
selected those back-up modes and flew them around so the pilots
were much more familiar and much more comfortable with selecting
those modes. On the X-31 program, we
never selected those modes intentionally, we only
used them when we had a sensor failure or when the system told
us to select those modes. On the day of the mishap itself,
there were additional links added to the chain. There were unusual weather
conditions that created an uncommon and
unexpected kind of flight hazard. And the team was working with
a flawed hot mic system that kept the chase pilot from
hearing critical communications from the X-31 pilot. So, some links in the chain are
already built there. Management links. The control room has now
talked internally, they’ve heard some things, they
haven’t said anything. Some more links are built. We’ve got this chain is
building now. The chase pilot didn’t
hear anything about this, he didn’t know anything
was wrong with the airplane until he saw the airplane pitch
up and the pilot jump out. Whereas he could have stopped
this at any time. At any rate, it’s a total team
concept and the chase pilot has to
be part of that team and the team has to
have total communication. So the use of a hot microphone
frequency that did not allow the chase pilot to stay up with
what was going on with the airplane was
essentially keeping me from doing my job at
least at a certain level. And one of the things that we changed in the way we do business here at
Dryden is to allow the chase pilots either access to
the hot mic or to ensure that all critical communications are
transmitted so that all the players are kept up to speed
with what’s going on. And that was a direct fall out
of how the X-31 operation was handled that day. If one or more of these
contributing factors had been caught and addressed prior to
January 19th, the chain of events leading
up to the accident might have been broken before the
flight even took place. Yet there were still
opportunities to avoid the mishap, even in the last few
minutes of the X-31’s flight. So why didn’t the team
manage to recognize, communicate, and respond to the
X-31’s pattern of anomalies in time? Stoliker: So we were seeing
inconsistencies between the data from the aircraft
system and what we knew of the physics of the problem: that it
could not be, that you could not have that airspeed and that
angle of attack simultaneously. And for me, I just remember
thinking, “Gosh, I can’t wait until we get the
data from this flight because I want to see what’s going on.” I
knew there was an anomaly. We had talked about it between
the engineers. We didn’t talk about
it on the intercom though, it was sidebar conversations
in the control room. Well, many of us are
engineers and we see an issue and, “Oh, this is interesting, I
wonder what’s causing that.” And you start thinking
about it and trying to figure out what is the
answer. In the meantime the seconds
are clicking by. And really, the right response
is, “Something’s going on. I don’t understand, let’s call a
halt here and let’s just figure it out.” We should have, at the first
call of an airspeed failure, just puckered up.
Whether you’re RTB at that point or not, it wouldn’t have
changed. The kind of failure that was occurring should have
triggered a lot of emotion anywhere in the flight envelope. In the case of any discrepancy, anything that doesn’t sound
right, feel right, smell right, let’s stop and think it over.
And I think that kind of attitude has been built in now
into the mission control room processes since then. We were
flying lots of flights. At the peak of the program
there would be days when there would be five flights.
I think on that particular day we were only doing three
flights and it was the last flight
of the day. It was the last flight for the
first airplane and we had completed all the test points
for that mission. In addition, we were going
through the RTB or return to base checklist and at
that point, every one of us kind of relaxed. Like I said, what was going
through my mind is, “I can’t wait to get this data.
Something funny is going on and I want to figure it out.” And, that’s another
lesson learned and we talk about
it all the time, that the mission’s not
over until the airplane’s on the ground and the
engine’s shut down. And you see it a lot in
the control rooms, you start getting ready to
land and everybody relaxes a little bit. And that’s a
lesson I’ve carried with me: that you need to
continue the vigilance there on the flight. Communication is what
it’s all about. We have to have the
communication links. We didn’t have it
to the chase. Hot mic was a contributing
factor. We didn’t have it in the control
room. We discussed things
internally, it was not transmitted to
the pilot. We have to have an
environment built where people can speak up when they THINK
something’s wrong. They don’t have to be
right. If they’re concerned, they
should be able to speak their mind, put their
hand up and we stop the train and then we say, “No, you
weren’t right, it’s okay.” Fine, we go on. We
didn’t do that. We never stopped the train. We had a problem and we didn’t
stop, not only testing, but we didn’t stop
flying and come home. But you can’t stop for every
problem. That’s unrealistic. You have problems in flight. The
combination that went with that is that we didn’t understand the
severity of the problem. So you have to understand your
vehicle and the consequences of failures. And if one of those
failures has a serious consequence, you need to stop
and come home. Clearly, there are lessons to be
learned in the entire progression of events that
led up to the X-31 mishap. And yet, the X-31 program did
not end with that crash. The next chapter of its story is
an equally important reminder of why flight test remains such
a valuable step in proving a concept or technology,
despite the hazards that come with the territory. The X-31 had been scheduled
to fly at the Paris Air Show in June of 1995. But after the loss of one
of the two X-31 ships less than six months
before the show, it seemed an
impossible goal. (Szalai) Having lost the
airplane, pretty much everyone thought, “That’s it.” Because flying the kind of
maneuvers that this airplane can do at 500 feet, sounded a lot
riskier to me after you lose an airplane. The team really talked a
lot about this and decided that it did not want to end this
program on a low note and so we made the decision to press on
with the Paris Air Show. A huge thing to sign up for
was to take an airplane that just crashed
and turn it around to go do a low altitude,
high angle of attack flight demonstration. That took a lot of guts on
everybody’s part and a lot of good engineering
work to make that happen. We actually flew the X-31
84 days after the mishap. This required the board to
reach its conclusions, to write a report. For the team to react to all of
the issues and problems and contributing
factors brought up. Solve the problem and
get it into an airplane and get it qualified for first
flight. It was all done in 84
days. It does tell you about the
quality of the team. Air Show Announcer: A
totally different airplane which will demonstrate a most
remarkable flying ability. It is the X-31 technology
demonstrator. (Stoliker) You know, after the
mishap, I think the program made a spectacular
recovery and made one of the finest appearances ever at the
Paris Air Show. The airplane did things that
no other airplane could do. The Russians had demonstrated
post stall maneuvers with the Cobra but it was really
an open-loop maneuver. They pulled back on the stick and then you flew out of it at the end, whereas the X-31 just demonstrated the ability to control all axes
of the airplane, pitch, roll, and yaw simultaneously
while operating at the extremes of the flight envelope. (Smith) So, fantastic Air Show. Absolutely the most
spectacular I’ve ever seen and I saw every one of
them. And I stood with the crowd on
some of them and I was in the control tower on others and I
was right underneath it at other times. But to be with the crowd and watch: even hardened veterans, the military, had no concept of what it could really do, and seeing it was jaw-dropping for the crowd.
It was spectacular. At the announcement that the X-31 was next to fly, as you looked down
the row of chalets, you see all the people coming
out of the chalets, out against the railing
to watch the flight. If the events leading up
to the X-31’s mishap are a reminder of how much
vigilance is required in order to mitigate the risks
inherent in a flight test
program, the X-31’s Paris Air Show
performance was a reminder of why those risks
are still worth undertaking. (Smith) Flight test of all kinds
is inherently dangerous. There are risks involved in it. Never can you or anybody else
bring it to zero. Well, you can, and that’s to keep
the airplane in the hangar. Don’t fly. But if you don’t fly, you
don’t move forward, you don’t discover, you don’t prove things. So you need to take some
risks but you need to do it in a controlled fashion. (Szalai) The reason we spend
time on looking at these accidents is that there
aren’t many accidents. We don’t lose many airplanes
in flight research activities at Dryden. We haven’t over the years. And
so when you do have one, you better learn everything
about it. In fact, you should do the same thing for
close calls. The lessons to be learned. Don’t assume that they’ve been
learned. We will always, with every new group, have to learn the same lessons, and you don’t
want to do it the hard way with an accident. Safety is
everybody’s business. Flight test safety is
everybody’s business on the team. And, there
are no processes… you have to have processes…
but there are no perfect processes that will not require
good judgement from all levels of the program. If you’re a program that has
been operating for a long time, potentially, and you’ve got a
lot of turnover, you’re in your mature years, all your documentation is
years old. Maybe you better make sure that
all your new people are as good as your old people. That you’ve reviewed your
documentation and it’s still correct and that you all
understand it. And that what you’re doing
today still makes sense from how you started. Maybe if you’re in that area you
ought to take a look at
yourself. It is always clear after the fact what you should do, or rather what you should have done. And nobody thinks it’s ever
going to happen to them, to lose judgement, to
lose this communication link, to not do the right things. So what is the message? What is the message for
the team? It may mean that “I” am a part of the chain and
that if I don’t catch this and if other people don’t catch
their mistakes, we will run through the entire
chain and lead to a mishap. So it means that every
individual on the program, from beginning to end, no matter
what the job is, from the highest-level job to
the lowest-level job, in terms of detail, they have to
take it very seriously. And, that’s the
message that you have to keep promoting, pronouncing, and
explaining. It sounds trite but everybody is
responsible for safety. If you think some safety office
analysis is going to find these things, they won’t. Mishaps can occur everywhere. But, the point is, you have
to fly, safely… but fly. ♪
