As discussed in Section 2, the 2007 NASA Systems Engineering Handbook has developed substantially compared with its state twenty years earlier. In the 1980s, NASA did not apply a formal systems engineering approach, which contributed to the major accident of 1986. On January 28, 1986, the space shuttle Challenger exploded in midair, killing six astronauts and schoolteacher Christa McAuliffe. After NASA adopted a systems engineering approach and published the Systems Engineering Handbook in 1995, accidents declined dramatically; only a handful of non-fatal incidents occurred between 1995 and 2002. However, another major accident occurred in 2003. The space shuttle Columbia disaster took place on February 1, 2003, when Columbia disintegrated over Texas and Louisiana as it reentered Earth's atmosphere, killing all seven crew members. On the second day after Columbia's launch, members of the Intercenter Photo Working Group were concerned about the apparent momentum of the debris strike; no one had ever seen such a large debris strike so late in ascent. Due to a lack of risk analysis and inattention by the system manager, the Columbia …
In terms of system design and development, NASA did not perform well on the design-for-safety and security aspects.
4. Suggestions
Based on the evaluation above, NASA should certainly focus more on risk analysis. This might be a good opportunity to revise the NASA Systems Engineering Handbook once again: additional examples and an expanded treatment of Technical Risk Management might reduce the number of astronaut fatalities in the future. Among other areas of focus, NASA could apply the Zachman Framework alongside the DoDAF it already uses. NASA could also begin to think seriously about system-of-systems engineering, given the rapid growth of its systems and products.
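To make the suggestion about Technical Risk Management more concrete, the sketch below shows, purely as an illustrative example rather than anything drawn from the handbook, how a simple 5x5 likelihood/consequence risk matrix could be scored and banded. The threshold values, band names, and sample risk items are assumptions made for illustration.

```python
# Illustrative sketch only: a simple 5x5 likelihood/consequence risk matrix of the
# kind used in technical risk management. Thresholds and sample items are hypothetical.

def risk_score(likelihood: int, consequence: int) -> int:
    """Combine a 1-5 likelihood rating with a 1-5 consequence rating."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must each be between 1 and 5")
    return likelihood * consequence

def risk_level(score: int) -> str:
    """Map a combined score to a reporting band (assumed thresholds)."""
    if score >= 15:
        return "high (red)"
    if score >= 8:
        return "medium (yellow)"
    return "low (green)"

if __name__ == "__main__":
    # Hypothetical risk items: (description, likelihood, consequence)
    risks = [
        ("Foam debris strike on thermal protection system", 4, 5),
        ("O-ring erosion at low launch temperature", 3, 5),
        ("Ground software configuration error", 2, 3),
    ]
    for description, likelihood, consequence in risks:
        score = risk_score(likelihood, consequence)
        print(f"{description}: score={score}, level={risk_level(score)}")
```

A ranked listing of this kind is the sort of additional worked example an expanded Technical Risk Management chapter could include.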
NASA enjoys a reputation for tackling very complex problems and, as a result, has become a leader in problem analysis. What we have observed over time is that the severity of a problem does not necessarily determine the complexity or duration of the analysis required to resolve it.
Engineers and scientists began trying to determine what went wrong almost immediately. They studied film of the launch and noticed a small jet of flame coming from inside the casing of one of the solid rocket boosters. The flame grew larger and began to play against a strut connecting the booster to the shuttle's large external fuel tank. Two or three seconds later, hydrogen began leaking from the tank. About seventy-two seconds after liftoff, the leaking hydrogen ignited and the booster swung around, puncturing the fuel tank and causing a massive explosion.
Do you stay focused on what you are doing and thinking during an emergency? Do you give up when you are stuck on a problem? The Scholastic Scope article “Disaster in Space” teaches that in an emergency we should remain calm, stay focused on the problem, use our ingenuity, and never give up, as the astronauts and engineers involved in the Apollo 13 mission did during the emergency aboard their spacecraft. The article exemplifies these principles in describing how three astronauts handled an emergency that could have cost them their lives.
NASA has faced many tragedies in its history, but one can question whether two of them could have been prevented by changing critical decisions made by the organization. The investigation board examining the decisions behind the Columbia and Challenger space shuttle tragedies noted that the “loss resulted as much from organizational as from technical failures” (Bolman & Deal, 2008, p. 191). The two tragedies occurred about twenty years apart; both involved technical failures, but politics also played a role in each.
According to “A Human Error Approach to Aviation Accident Analysis…”, HFACS was developed from the Swiss Cheese model to provide a tool that assists investigators in identifying probable human causes (Wiegmann & Shappell, 2003). HFACS is organized into four levels of failure: unsafe acts of operators, preconditions for unsafe acts, unsafe supervision, and organizational influences. In other words, HFACS identifies the types of errors that occurred in the lead-up to an adverse event.
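As a rough illustration (not taken from Wiegmann and Shappell's text), the four HFACS levels can be modeled as a small classification structure; the level names follow the framework, while the grouping function and the example findings are hypothetical.

```python
# Illustrative sketch only: the four HFACS failure levels (Wiegmann & Shappell, 2003)
# modeled as a simple classification structure. Example findings are hypothetical.

from dataclasses import dataclass
from enum import Enum

class HfacsLevel(Enum):
    UNSAFE_ACTS = "Unsafe acts of operators"
    PRECONDITIONS = "Preconditions for unsafe acts"
    UNSAFE_SUPERVISION = "Unsafe supervision"
    ORGANIZATIONAL_INFLUENCES = "Organizational influences"

@dataclass
class Finding:
    description: str
    level: HfacsLevel

def group_by_level(findings: list[Finding]) -> dict[HfacsLevel, list[str]]:
    """Group investigation findings by HFACS level."""
    grouped: dict[HfacsLevel, list[str]] = {level: [] for level in HfacsLevel}
    for finding in findings:
        grouped[finding.level].append(finding.description)
    return grouped

if __name__ == "__main__":
    findings = [
        Finding("Crew continued an approach into severe weather", HfacsLevel.UNSAFE_ACTS),
        Finding("Crew fatigue after a long duty day", HfacsLevel.PRECONDITIONS),
        Finding("Schedule pressure tolerated by management", HfacsLevel.ORGANIZATIONAL_INFLUENCES),
    ]
    for level, items in group_by_level(findings).items():
        print(level.value, "->", items)
```

Grouping findings this way mirrors how HFACS traces an adverse event upward from front-line errors to the supervisory and organizational conditions behind them.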
Even though there were many factors contributing to the Challenger disaster, the most important issue was the lack of an effective risk management plan. The factors leading to the Challenger disaster are:
The Challenger disaster of 1986 was a shock felt around the country. During liftoff, the shuttle exploded, creating a fireball in the sky; the seven astronauts on board were killed and the shuttle was destroyed. Immediately after the catastrophe, blame was spread among the various people responsible for building the shuttle and among the shuttle's components themselves. The Presidential Commission was decisive in blaming the disaster on a faulty O-ring seal in a joint of one of the solid rocket boosters. Harry Collins and Trevor Pinch, by contrast, argue in The Golem at Large that blame cannot be isolated to any single person or cause of failure; there are too many factors to decide concretely why the Challenger exploded. Collins and Pinch do believe that it was the organizational culture of NASA and Morton Thiokol that allowed the disaster. While NASA and Thiokol were deciding whether to launch, there was no concrete reason to postpone the mission.
...ial approaches, Normal Accident Theory and HROs, although it seems certain that both tend to limit the progress that can be made toward highly protective systems. This is because the scope of the problems they address is too narrow and the potential of their solutions is likewise too limited. Hence, LaPorte and Consolini (1991), as cited in Marais et al. (2004), conclude that the most interesting feature of high reliability organizations is that managerial oversight prioritizes both performance and safety. In addition, goal agreement must be officially announced. In essence, high-risk organizations have a continuing need for greater awareness in developing safety systems and a high-reliability environment in order to lower risk in advanced technology systems.
Fifteen years have passed since American Airlines Flight 1420 experienced a botched landing that tragically killed ten passengers and the captain and injured 110 others. Thankfully, 24 passengers were uninjured, and the first officer survived. This horrific accident could have turned out much worse, but it could also have been easily avoided.
...fault with NASA’s top-down design and testing methods: the engine “was designed and put together all at once with relatively little detailed preliminary study of the material and components. Then when troubles are found…, it is more expensive and difficult to discover the causes and make changes.…[A] simple fix…may be impossible to implement without a redesign of the entire engine.” As we are all aware, billions could have been saved on the project if time for safety had been taken. Would the outcome not have been significantly less expensive if a bottom-up approach had been used instead of the top-down approach? When we think of safety, is there any reason to worry about price with the Challenger incident in mind? Safety has always been a part of the working community, not only in aviation but throughout all industry. Aviation being the background of