In Practice: What We Find When We Rescue a Software Project

In Practice is a recurring column about what actually happens inside software engagements. Not frameworks, not advice. What we find, what goes wrong, and what it means.

The call has a shape. A founder, a product, a developer who's gone or going, and a situation that's somewhere between "we're stuck" and "we're bleeding." The company is different every time. The stack is different. The developer relationship ended differently. But within ten minutes I can usually tell which of three categories the problem falls into — because I've been on this call enough times that the details have started to blur together. The shape never changes.

Founders come in thinking it's about code quality. "The developer wrote bad code." Sometimes that's true. Usually it's not the main event. The main event is almost always something else — something that would have happened regardless of how good the code was — and it falls into one of three buckets.

They don't own what they paid for

The product exists. It runs. But the infrastructure that keeps it running lives in accounts that belong to someone who isn't answering emails.

This shows up in a few different ways. The cloud hosting account is under the developer's personal email — or worse, a throwaway they set up for the project and have since abandoned. The GitHub organization was created by the developer and transferred to no one. The Stripe account, the Auth0 tenant, the third-party API keys — all of them created during development, none of them documented, most of them tied to credentials that are now inaccessible.

The product technically exists. But the founder doesn't own it yet. And until they do, every deployment, every emergency fix, every moment of "something is broken and I need to get in" runs through a person who has no incentive to respond quickly.

This is the most fixable category — once you know it's the problem. The fix is administrative, not technical. But you can't fix it until you know what you don't have access to, and most founders don't find out until they need it.

The context left with the developer

The code runs. The accounts are accessible. But nobody knows why anything is the way it is.

There's no documentation that explains the architecture. No record of decisions that were made and reconsidered. No explanation of why a particular third-party service was chosen over the obvious alternative, or why a database column has an unusual name, or why a function exists that doesn't seem to be called anywhere. The code describes what the system does. Nobody recorded why.

This matters more than it sounds. When something breaks — and it will — diagnosing the problem requires understanding the system. Understanding the system requires knowing the decisions that shaped it. Those decisions lived in the previous developer's head, and they walked out with them.

What's left is archaeology. You read the code and reconstruct the intent. You look at git history and try to understand what changed and why. You make your best guess about whether an unusual pattern was intentional or accidental. Some of those guesses are wrong, and you find out the expensive way.

The founders who avoid this problem keep their developers in the habit of writing things down. Not formal documentation — short comments, decision records, a README that explains the non-obvious. It takes the developer ten minutes. It saves the next person days.

The product never ran in production

This is the hardest one to explain to a founder who believed everything was working.

Development happened locally. The developer had the codebase on their machine, a database on their machine, environment variables set up on their machine. The product worked — in that environment, under those conditions. And somewhere along the line, the assumption that "it works here" was treated as equivalent to "it will work there."

It won't always. Sometimes the production environment has different configurations. Sometimes the database schema that was built locally was never migrated to the cloud instance. Sometimes the API keys that work in development are scoped differently in production. Sometimes it's something smaller — a path that resolves correctly on a Mac and fails on a Linux server.
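That last one is worth seeing concretely. Here is a minimal sketch, with invented file and directory names, of a bug that never shows up until the code leaves the developer's laptop:

```python
# Hypothetical example: macOS filesystems are case-insensitive by default,
# so this lookup quietly succeeds on the developer's Mac even though the
# directory on disk is actually named "uploads". On a case-sensitive Linux
# server, the same line raises FileNotFoundError the first time it runs.
from pathlib import Path

UPLOAD_DIR = Path("Uploads")  # the real directory is "uploads"

def save_export(data: bytes) -> Path:
    target = UPLOAD_DIR / "export.csv"
    target.write_bytes(data)  # works locally, fails in production
    return target
```

Nothing about this is exotic. It's exactly the kind of difference that a real deployment outside the developer's machine would have surfaced months earlier.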

The discovery usually happens at the worst possible time: when someone's trying to launch, or when a client is watching, or when a founder has just told investors the product is ready to show. The system crashes in an environment it was never actually tested in, and nobody knows where to start.

I've rebuilt staging environments from migration files I found in repositories three months after a project was declared "done." I've reconnected cloud databases to applications using credentials that were documented on a sticky note in a screenshot someone found in their downloads folder. The detective work is fine — that's just the job. What's harder is explaining to a founder that the thing they paid to have built wasn't actually built all the way through.

What prevents this

None of these problems require finding a better developer. They require establishing the right structures from the start and checking on them throughout.

Own every account from day one. Not "the developer has access" — you own it, they're a collaborator. Create the GitHub organization, the cloud hosting account, the third-party service accounts under your company credentials, and add the developer to them. The ten minutes this takes at the start of a project prevents months of access recovery at the end.

Require a staging environment before anything is called production-ready. If there's no staging, there's no test of whether the product runs outside the developer's machine. And if the developer resists staging because "it's a simple project" — that's the answer to your question about whether they've thought past their own setup.

Ask for written decisions when something non-obvious is built. Not documentation for its own sake. Just a short note when a significant choice is made: what was chosen, what the alternatives were, why this one. Most developers will do this if asked. Almost none will do it without being asked.
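For a sense of scale, a note like the following is enough — the specifics here are invented purely for illustration: "Chose Postmark over SES for transactional email. SES was cheaper, but the deliverability setup needed DNS access we didn't have yet. Revisit if volume passes 50,000 emails a month." Three sentences, written the day the decision was made, and the next developer never has to guess.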

When the call comes anyway

Sometimes the structures weren't in place. The project moved fast, the developer seemed trustworthy, nobody thought about what happens when the relationship ends. That's not a failure of judgment — it's just how early-stage software projects often go.

When the call comes, the first thing I do is figure out which category we're in. Access problem, knowledge problem, or production problem — they require different approaches and they take different amounts of time. Knowing which one we're dealing with is half the work. The other half is knowing where to look.

If you're on that call right now, or you can feel it coming, that's the conversation worth having. Start at def0x.com/discovery.
