28.12.2025, By Stephan Schwab
On December 20, 1995, a highly trained crew flew a perfectly functioning aircraft into a Colombian mountainside. They followed their plan with precision. They trusted their instruments. They died anyway. In software development, we call this "staying on track" — and it kills projects just as surely as it killed Flight 965.
It was five days before Christmas. One hundred fifty-nine people were flying home to their families. Some had gifts in the overhead bins. Some were already thinking about the meals they would share, the faces they would see, the embraces waiting at the arrival gate in Cali.
They never arrived.
American Airlines Flight 965 was a routine trip from Miami to Cali, Colombia. The Boeing 757 was in perfect mechanical condition. The captain had over thirteen thousand hours of flight time. The first officer was experienced and competent. The weather was clear at altitude.
Nothing was wrong — except everything was about to go catastrophically wrong.
During the approach, air traffic control offered a shortcut. The crew accepted. A simple change to the plan. They began reprogramming the flight management computer, entering a new waypoint: “R” for Rozo, the navigation beacon near Cali.
But the letter R pulled up a different beacon first. One near Bogotá. One hundred thirty kilometers away. In the wrong direction.
The crew selected it. The aircraft, obedient as always, banked left and began flying away from Cali, directly toward the Andes.
They didn’t notice. The instruments showed them on course — the course the computer was following, not the course they intended. The pilots trusted the plan. They had entered it themselves. Why would they doubt it?
Outside the windows, invisible in the darkness, the mountains rose.
At 9:41 PM, the Ground Proximity Warning System screamed to life: “TERRAIN, TERRAIN. PULL UP. PULL UP.”
The captain reacted instantly. He slammed the throttles forward. He pulled the nose up hard. The aircraft responded — it was doing everything it could to climb, to escape, to live.
But someone had left the speedbrakes extended from the descent. Those panels on the wings, designed to slow the aircraft, were stealing the lift they desperately needed. The crew didn’t notice. They were focused on the climb. They were following the recovery procedure.
Six seconds later, Flight 965 struck the side of El Diluvio — “The Flood” — a ridge rising to nearly nine thousand feet.
Four passengers survived, found alive in the wreckage. One hundred fifty-nine people — parents, children, colleagues, friends — did not.
I tell you this story not to dwell on tragedy but because I watch organizations fly into mountains every day.
Not literal mountains. Worse, in a way — invisible ones. Technical debt that experienced developers have warned about for years. Architectural decisions imposed by consultants who left before the consequences arrived. Roadmaps dictated by people who have never shipped software, forcing teams to build what cannot be built in the time that does not exist.
The crew of Flight 965 were not stupid. They were not careless. They were highly trained professionals operating expensive equipment according to documented procedures. They followed the plan.
And the plan flew them into a mountain.
Plans are seductive. They offer certainty in an uncertain world. They let us tell stakeholders when we’ll be done. They create the illusion that we understand what we’re building, how long it will take, and what the future holds.
But plans are not reality. They are our best guess about reality at a moment in time — usually the moment when we knew the least about what we were attempting.
The crew of Flight 965 had a plan. It was filed with air traffic control. It was programmed into the computer. It accounted for fuel, time, waypoints, and altitude restrictions. It was a good plan.
It just didn’t account for a single wrong keystroke.
Here is the uncomfortable truth that every executive, every program manager, every Gantt chart enthusiast needs to understand:
Reality doesn’t negotiate.
The mountain didn’t care that the crew had a plan. The mountain didn’t care that the computer showed them on course. The mountain didn’t care about the captain’s thirteen thousand hours of experience or the airline’s safety record or the passengers’ Christmas plans.
The mountain was simply there. And when the aircraft’s path intersected with the mountain’s location, the mountain won. It always does.
Technical complexity is a mountain. Architectural constraints are a mountain. The laws of physics that govern how software systems behave under load — those are mountains. Every time management overrides developer judgment with a directive from consultants or a mandate from the boardroom, they are programming a new course. Sometimes that course leads into terrain nobody can see from the executive suite.
Flight 965’s GPWS gave the crew a warning. It was loud. It was unmistakable. It was terrifying by design.
They had six seconds. It wasn’t enough — partly because the extended speedbrakes stole their climb rate, and partly because the warning came too late. The old GPWS technology couldn’t see ahead; it could only detect the ground rushing up from below.
Your organization has warning systems too. Senior developers saying “this architecture won’t scale.” The team lead warning that the timeline is fantasy. Engineers explaining — again — why the approach mandated by the expensive consulting firm cannot work. Experienced voices, dismissed as “resistant to change” or “not team players,” because they refuse to pretend the mountain isn’t there.
These are your terrain warnings. They are screaming at you right now.
Are you listening? Or are you following the plan that someone outside your cockpit programmed for you?
The investigation into Flight 965 changed aviation. The industry developed Enhanced Ground Proximity Warning Systems — EGPWS — that use GPS and terrain databases to see mountains ahead, not just below. Airlines revised their procedures for programming flight computers. Training emphasized situational awareness over blind trust in automation.
One hundred fifty-nine people died, and an industry learned.
But not every organization learns from disaster. Some respond by buying solutions from the same people who sold them the problem.
There is another way organizations fly themselves into mountains: they buy management frameworks that promise to “fix” developers. Make them predictable. Make them flow smoothly through a process like widgets on an assembly line. The sales pitch always includes the word “learning” — continuous improvement, feedback loops, adaptation.
But then comes the implementation.
The framework gets installed by people who have never written production code. The trainers leave. And what remains is a system that punishes learning. Going back is failure. Refactoring is waste. Changing direction after discovering new information is deviation from the plan. The entire apparatus is designed around the illusion that work flows forward and never returns — that you can know everything at the start and simply execute.
This contradicts everything we know about building software. Test-Driven Development works precisely because you go back. You write a failing test, you make it pass, you refactor. Red, green, refactor. The cycle is the learning. Every iteration teaches you something about the problem you couldn’t have known before you started.
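To make that loop concrete, here is a minimal sketch of one red-green-refactor cycle. The article prescribes no language or tooling, so Python with pytest is assumed purely for illustration, and the function name and behaviour below are hypothetical, chosen only to echo the story.

```python
# A minimal sketch of one red-green-refactor cycle (Python/pytest assumed
# for illustration; the names below are hypothetical).

# RED: write a failing test first. It fails because resolve_beacon()
# does not exist yet -- the failure tells us exactly what to build.
def test_r_resolves_to_the_nearby_beacon():
    assert resolve_beacon("R", near="Cali") == "ROZO"

# GREEN: the simplest code that makes the test pass. It is deliberately
# naive; its only job is to turn the test from red to green.
def resolve_beacon(identifier, near):
    return "ROZO"

# REFACTOR: with the test still green, improve the design without changing
# behaviour -- for example, replace the hard-coded answer with a lookup
# keyed by identifier and proximity. Each pass through the cycle teaches
# you something you could not have known before writing the test.
```

The point is not the code; it is that every one of those three steps involves going back to something you just wrote and changing it.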
But the framework was sold on the promise that management would finally have visibility and control. That developers would become predictable resources. That estimates would become commitments and commitments would become delivery dates. Going back wasn’t part of the sales pitch.
So when developers try to refactor — try to learn, try to improve — they are told to stop. The milestone is fixed. The timeline is approved. The resources are allocated. There is no time for learning. There is only time for execution. This is how organizations destroy their developers’ intrinsic motivation — by treating thinking as a bug rather than a feature.
And the aircraft descends, confident and controlled, toward a mountain that the framework’s dashboards don’t show.
There is a subtler danger than flying into a mountain. Sometimes organizations create policies so rigid that even when pilots can see the runway clearly, they are forbidden to land.
On October 16, 2023, Lufthansa flight LH458 — an Airbus A350 from Munich — approached San Francisco International Airport. The weather was clear. The runway was visible. The aircraft was functioning perfectly. The crew was experienced and alert.
But SFO was operating visual approaches only that night. And Lufthansa’s corporate policy, instituted after the terrifying 2017 Air Canada near-disaster where a fatigued crew almost landed on a taxiway full of aircraft, prohibited its pilots from accepting visual approaches at night. The policy required an instrument approach — ILS or satellite-guided — regardless of conditions.
The ILS was off. The crew could see the runway. They were not allowed to land on it.
So they declared a fuel emergency and diverted to Oakland. The passengers were bussed back to San Francisco. Nobody died. But a planeload of people spent hours on a bus because company policy had become more important than pilot judgment.
This is what happens when organizations respond to failure by removing discretion. After Air Canada 759 nearly killed over a thousand people by confusing a taxiway for a runway during a visual approach, Lufthansa’s response was rational: ban night visual approaches entirely. Remove the possibility of human error by removing human judgment.
But policies cannot anticipate every situation. The crew of LH458 was not fatigued. They were not confused. They could see exactly where they needed to go. The policy, designed to prevent one kind of failure, created a different kind of absurdity.
In software organizations, this happens constantly. A project fails because developers made autonomous decisions that management didn’t understand. The response? Remove developer autonomy. Institute approval processes. Require sign-offs. Mandate that all technical decisions flow through non-technical managers who have been trained by framework salespeople to distrust the very people who build the software.
The developers can see the runway. They know how to land. But they are not allowed to. The method has become the master. The policy exists to protect against a failure that isn’t happening, while creating new failures that nobody anticipated.
And sometimes the company doesn’t divert to Oakland. Sometimes it runs out of fuel. Reclaiming your organization means trusting the people who actually fly the aircraft.
Every day, leaders face a choice: follow the plan or follow reality.
Following the plan is comfortable. It means the quarterly report looks predictable. It means nobody has to explain why the roadmap changed. It means the expensive management framework you bought is working as advertised.
Following reality is hard. It means admitting uncertainty. It means telling stakeholders the truth. It means trusting the people closest to the work to tell you what’s actually happening — and believing them when it contradicts your carefully constructed plan.
The crew of Flight 965 followed their plan. They trusted their computer. They descended confidently through the darkness, believing they knew where they were.
They were wrong. And because they were wrong, one hundred fifty-nine people never saw Christmas.
How many projects must die before your organization learns? How many millions must be written off? How many talented developers must burn out and leave, their warnings vindicated too late? How many companies must die — actually die, doors closed, everyone gone — before leadership understands that the people in the cockpit might know more about flying than the people in the boardroom?
Somewhere in your organization right now, an experienced developer is raising a concern. They are saying the timeline is impossible. They are explaining why the architecture mandated from above cannot support the planned features. They are pointing out that the approach imposed by outside consultants contradicts everything they know about building software that actually works.
They are your terrain warning. They are screaming at you.
What will you do?
Will you follow the plan that was handed down from above? Will you stay on course, forcing your experienced crew to execute a flight path they know is wrong, dismissing their warnings as negativity or resistance to change?
Or will you trust the people who actually fly the aircraft — and let them see the mountain before it kills everyone aboard?
One hundred fifty-nine people died five days before Christmas because trained professionals trusted their plan more than they trusted reality.
The mountain is still there. It is always there.
The only question is whether you will see it in time.
Let's talk about your real situation. Want to accelerate delivery, remove technical blockers, or validate whether an idea deserves more investment? Book a short conversation (20 min): I listen to your context and give 1–2 practical recommendations — no pitch, no obligation. If it fits, we continue; if not, you leave with clarity. Confidential and direct.
Prefer email? Write me: sns@caimito.net