Testing Laravel mutators and accessors

I rarely test Laravel mutators and accessors, mainly because that code is usually straightforward, and testing it doesn't provide enough value to be worth the trouble. Of course, like with anything, there are cases in which it does make sense – so do it then!

Anyway, here’s an example of a User class with a mutator and accessor for the name attribute.

use Illuminate\Database\Eloquent\Casts\Attribute;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    protected function name(): Attribute
    {
        return Attribute::make(
            get: fn ($value) => ucwords($value),
            set: fn ($value) => strtolower($value),
        );
    }
}

Testing accessors

An accessor is an incoming query message – you ask the object to give you something, and it does so. Test it by making assertions about what it sends back.

/** @test */
public function name_is_capitalized()
{
    $user = new User(['name' => 'constantin druc']);

    $this->assertEquals('Constantin Druc', $user->name);
}

Testing mutators

A mutator is (mostly) a command message – you ask the object to change something, so it does.

Test it by making assertions about its public side effect(s). Like, did it actually change what I told it to change?

However, if your model has a mutator and an accessor for the same attribute, testing the mutator is tricky. Because if you set the name to some value and then test it by reading $user->name – the accessor will come in and alter your expected result.

The only way I know how to test it is by using $user->getAttributes().

/** @test */
public function name_is_saved_in_lowercase()
{
    $user = new User();
    $user->name = 'Constantin Druc';

    $attributes = $user->getAttributes();

    $this->assertEquals('constantin druc', $attributes['name']);
}

I know this was a somewhat silly example, but keep in mind: test accessors by making assertions on what they return, and test mutators by making assertions on what they're supposed to change – the attribute value.

Download all Vimeo videos

When I started making programming videos, I didn’t put too much thought into organizing them. I didn’t organize them into directories; I didn’t even name them right. My focus was on just getting the next video out.

As you can imagine, this turned into a mess over the two years I’ve been making videos.

Today I decided to delete everything on my local machine and download my videos from Vimeo.

While you can easily download a video from your Vimeo dashboard, I couldn’t find a way of downloading all my videos. So I turned to external help: youtube-dl.

This tool is fantastic. If you haven’t starred it yet on GitHub, please do.

Its name suggests that it only works with YouTube videos, but it also works with Vimeo. And you can easily download ALL your videos; you don't have to do it one by one.

Here’s what you need to do:

  1. Install youtube-dl; it works on Linux, Windows & macOS.
  2. Make all your Vimeo videos public and downloadable. You can easily do that by loading all your videos, selecting them, and then editing their privacy options in bulk.
  3. Open your terminal, navigate to wherever you want to download the videos, and run:
    youtube-dl https://vimeo.com/your-username
  4. Profit! The tool will download the raw version of all your public videos.
  5. Turn your videos back to private.

youtube-dl is amazingly good.

npm run hot address already in use

Have you ever gotten an "address already in use" error when running Laravel Mix using `npm run hot`?

To fix it, open your terminal and run the following command:

sudo lsof -i :8080

This will list the processes using port `8080`. Next up, you just have to kill the offending process using:

kill -9 PROCESS_ID

Then just re-run npm run hot and it should work.

Audio and video at once

I was right; recording audio and video at the same time is way, way better! At least for me.

I gave the other voice-over episodes a re-listen, and they were awful: so much disconnect, so robotic, not natural at all.

For the final episode of the series, I recorded the audio and video simultaneously, and I loved it! I still had to do lots of takes and cut lots of umms, ahhs, and awkward pauses, but all in all, it was better than all the other lessons I voiced over.

Live reloading with Laravel mix and BrowserSync

I was about to record a TailwindCSS video, and this kind of screencast works better if the browser shows you the changes in real-time; otherwise, you have to refresh the page with every change you make – and that’s just annoying.

The first thing that popped into my mind was Tailwind Play, an online playground where you can try out Tailwind stuff. Any change you make will instantly appear on the right.

Then I remembered Laravel Mix can do pretty much the same thing with BrowserSync.

Here’s how you can use it when serving a Laravel app with artisan:

  mix.browserSync({
      proxy: 'localhost:8000', // the default address used by `php artisan serve`
  });

And here’s how to do it when using Laravel valet to serve a custom domain:

  mix.browserSync({
      proxy: 'https://myapp.test',
      host: 'myapp.test',
      open: 'external',
      https: {
          key: '/Users/yourUser/.config/valet/Certificates/myapp.test.key',
          cert: '/Users/yourUser/.config/valet/Certificates/myapp.test.crt',
      },
  });

It's nowhere near as fast as Tailwind Play, but it's good enough when working on entire projects.

Why I am not integrating tallpad with Vimeo

Currently, every time I publish a screencast on tallpad, I have to:

  • export the video
  • upload it to Vimeo
  • fill in all of Vimeo's fields (name, privacy settings, etc.)
  • grab the Vimeo embed code
  • go to Tallpad's Nova admin panel and create a new episode – here, I have to fill in the title, description, embed code, video duration, and other fields
  • hit publish

The reason I’m going with this somewhat tedious flow is that, well, I don’t post screencasts that often. So I don’t mind taking 5-10 minutes and doing it manually.

Even if somehow I increase my posting frequency to 1 screencast a day, it still wouldn’t bother me to do that part manually. I’d rather focus my efforts on making even more videos or improving the user experience on the platform.

As a programmer, I love building stuff. I love spending time fiddling with new layout types, adding icons, thinking about nested comments, bookmarked episodes, organizing episodes under series and topics, and other nice-to-haves.

But that’s what those are right now; nice-to-haves. More content is needed.

The only reasonable, cost-effective way to test validation in Laravel apps

Every time I tell someone how I test validation in Laravel, their reaction is something along the lines of "wait, what? This is so much better. I wish I knew it existed."

So, yeah, here’s how I test validation in Laravel.

Below, we have a bunch of tests asserting that different validation rules are in place when registering a new user: name is required, email is required, email must be a valid email, email must be unique, and so on.

namespace Tests\Feature\Auth;

use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class RegistrationTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function name_is_required()
    {
        $response = $this->postJson(route('register'), [
            'name' => '',

            'email' => 'druc@pinsmile.com',
            'password' => 'password',
            'password_confirmation' => 'password',
            'device_name' => 'iphone'
        ]);

        $response->assertStatus(422);
        $response->assertJsonValidationErrors('name');
    }

    /** @test */
    public function email_is_required()
    {
        $response = $this->postJson(route('register'), [
            'email' => '',

            'name' => 'Constantin Druc',
            'password' => 'password',
            'password_confirmation' => 'password',
            'device_name' => 'iphone'
        ]);

        $response->assertStatus(422);
        $response->assertJsonValidationErrors('email');
    }

    /** @test */
    public function email_must_be_a_valid_email()
    {
        // code
    }

    /** @test */
    public function email_must_be_unique()
    {
        // code
    }

    // more similar tests
}

What we are doing in every test is sending correct values for everything except the field whose validation we are testing. This way, we can assert that we receive a 422 response and validation errors for that specific field.

And this is how most people I know test validation – some optimize this further by extracting methods to reduce duplication, but the general idea is to write one test per validation rule.

The thing is, just like production code, test code requires maintenance, and the more tests you have, the slower your test suite becomes.

The slower your test suite becomes, the less often you’ll run it, and the less often you’ll be refactoring and improving code. Not only that, but you will also end up avoiding writing more tests knowing it will slow you down even more.

So while writing tests is crucial, having too many of them can also become a problem. Ideally, you want to have as few tests as possible that run as fast as possible while still being confident enough that everything works.

We pay for confidence with tests

We write tests so we can be confident that our changes don’t break the app. We pay for confidence with tests. The more tests we write, the more confident we are things are working.

But sometimes, we happen to overpay for that confidence. Sometimes we write more tests than we actually need to.

In our example, those tests take about 300ms to run. Let's say we have an app where 20 requests require validation – assuming each request has about 8 validation rules, that adds up to about 160 tests and 6s of waiting time. This is the price we pay for the confidence that our requests are validated: 6s and 160 tests.

But there’s a cheaper way to do it. It doesn’t yield as much confidence as what we are doing now, but it is close enough, and it’s much, much cheaper.

Laravel has thorough and exhaustive tests for every validation rule. If I set required as a validation rule, I’m confident that it will work; it will let me know if the field is missing. I don’t need to test that.

What I do need to test is that my request is validated with whatever rules I set in place. But this approach comes with some costs.

The first is, we need to install an additional package: jasonmccreary/laravel-test-assertions – this will provide us with the assertions we need to test that our controller action is validated using the correct form request object.

The second thing is, with this approach, you can no longer use inline validation – it only works with form request objects.

Here’s how it looks:

namespace Tests\Feature\Auth;

use App\Http\Controllers\Auth\RegistrationController;
use App\Http\Requests\RegistrationRequest;
use JMac\Testing\Traits\AdditionalAssertions;
use Tests\TestCase;

class RegistrationRequestTest extends TestCase
{
    use AdditionalAssertions;

    /** @test */
    public function registration_uses_the_correct_form_request()
    {
        $this->assertActionUsesFormRequest(
            RegistrationController::class,
            'register',
            RegistrationRequest::class
        );
    }

    /** @test */
    public function registration_request_has_the_correct_rules()
    {
        $this->assertExactValidationRules([
            'name' => ['required'],
            'email' => ['required', 'email', 'unique:users,email'],
            'password' => ['required', 'min:8', 'confirmed'],
            'device_name' => ['required']
        ], (new RegistrationRequest())->rules());
    }
}

The first test asserts that the RegistrationController@register action uses the RegistrationRequest form request object.

The second test asserts that the RegistrationRequest has the rules we want it to have.

Before, we had to write 8 tests to ensure our register action is validated; now, we only need 2.

Before, our tests needed 300ms to run; now, they only take 80ms. And we can speed this up even more by replacing Laravel's TestCase with the PHPUnit\Framework\TestCase class. The first one boots the entire Laravel application, and we don't need that to run these two tests.

namespace Tests\Feature\Auth;

use App\Http\Controllers\Auth\RegistrationController;
use App\Http\Requests\RegistrationRequest;
use JMac\Testing\Traits\AdditionalAssertions;
use PHPUnit\Framework\TestCase; // previously: use Tests\TestCase;

class RegistrationRequestTest extends TestCase
{
    use AdditionalAssertions;

    /** @test */
    public function registration_uses_the_correct_form_request()
    {
        // code
    }

    // second test
}

Now, these tests only take 9ms to run. That’s over 30 times faster than what we had before.

So while we need to install an additional package and we are limited to only using form request objects for validation, this second approach is much faster. On top of that, it only requires 2 tests instead of one for each validation rule.

That’s how I test validation in Laravel. 2 tests. And they are fast tests.

If you liked this article, consider subscribing to my YouTube channel.

Surviving your first week of Git without losing your mind

No matter what kind of software you are writing or what technologies and languages you are using, there is a good chance (~74% to be more precise) you need to learn how to use git.

The problem is… well, git is quite large and complicated. And the more you try to learn about it, the more confusing it tends to get. Sometimes even the most experienced developers have trouble making sense of it. So don't feel bad if you don't understand it just yet. In this article, I'll do my best to help you survive your first week of Git without losing your mind.

Why git?

Git is a version control system. It tracks all the changes made to a project: deleted files, modified files, new files – as well as when those changes were made and who made them.

Not only that, but it also offers a convenient way of jumping from one point in history to another. This is useful when, for some reason, your project stops working correctly after introducing new changes. Git allows you to easily roll back to a specific point when you know the project is stable.

Apart from that, what makes git super useful is how easy it makes for developers to collaborate and work on the same project simultaneously, without stepping on each other’s toes (most of the time anyway😅).

Repository & commits

A git project, also known as a repository, contains all files of the project and its entire revision history – every commit that was ever made. A commit is a perfect snapshot of the project at a specific point in time. The more commits you have, the more moments in history you can navigate to. That’s why I recommend committing your work often.


Branching

To organize commits and to allow developers to work in parallel, git uses a concept called branching. Think of branches as different timelines of our project where we fix bugs or add new features that will eventually make their way into the main timeline (usually called the master branch).

When you branch out, you get a perfect copy of the project where you can do whatever you want without affecting the main timeline (master branch). When you finish working on your branch, you can merge back into master, creating a combination of the two branches.

Consider the following example:

On the blue timeline, Mike has everything from the master branch plus his work on the payments system, while on the red timeline, Sandi has everything from the master branch plus her work on the authentication system.

None of them have each other’s work just yet, though.

The master branch is the timeline where all the other branches will be merged in. When Sandi and Mike finish their work, they will both merge their branches into the master branch. Once they do that, they will continue to branch out and merge in their new branches until the end of the project.
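The branch-out-and-merge-back cycle can be sketched in a throwaway repository. Everything below (file names, branch names, commit messages) is made up purely for illustration:

```shell
# set up a throwaway repository
cd "$(mktemp -d)"
git init -q
git config user.name "Demo"
git config user.email "demo@example.com"

# one commit on the main timeline
echo "v1" > app.txt
git add . && git commit -qm "Initial commit"
git branch -M master   # make sure the main branch is called master

# branch out: a perfect copy of the project to work on
git checkout -q -b payments
echo "payments code" > payments.txt
git add . && git commit -qm "Add payments"

# merge the branch back into the main timeline
git checkout -q master
git merge -q payments

git log --oneline   # both commits are now part of master's history
```

After the merge, master contains both the initial commit and the payments work – exactly the flow Mike and Sandi follow in the example above.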

Merge requests

Those red and blue circles from the image above are merge requests, sometimes called "pull requests". A merge request is a formal way of asking to merge your branch into another one.

When you create a merge request with a source code hosting application like Gitlab, you can see every change your branch will introduce. This allows you and your team to review and discuss the changes before merging them.


You can work with git either from the command line or by using a dedicated client like SourceTree. Or even from your code editor as most of them support git out of the box or have plugins you can install.

However, I strongly recommend trying git from the command line before moving to a dedicated client application. While git can get complicated, you can get away with just a handful of git commands most of the time.

Identify yourself

The first thing you need to do after downloading and installing git is to identify yourself. Git needs to know who you are to set you as the author of the commits you will be making. You only need to do this once.

git config --global user.name "Your full name"
git config --global user.email "your@email.com"

The path you are on when you run the commands above doesn’t matter, but from now on, you must run every git command inside your project’s root directory.

cd /the/exact/path/to/your/project

Starting out

You can find yourself in two contexts: either you need to start a git repository from scratch or work on an already existing repository created by someone else.

In both scenarios, you’ll need the git repository remote address from your Git hosting service (Gitlab, Github, Bitbucket, to name a few).

If you’re starting a new project from scratch, you’ll need to go inside the project’s root directory, initialize it as a new git repository, and set its remote address.

# inside my project's directory
git init
git remote add origin https://your-repository-remote-address.git

Although your project will have a single remote most of the time, you can actually add more remotes – that’s why the command is git remote add.

  • git remote – tells git you want to do something remote related.
  • add is what you want to do, which is to “add a new remote.”
  • origin is the name of the remote. It can be anything you want. I named it origin by convention.
  • https://your-repository-remote-address.git is the remote address of the repository from your Git hosting service and where you will push your commits.
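You can check the result with git remote -v, which lists every configured remote along with its address. The address below is just a placeholder, not a real repository:

```shell
# a fresh repository with one remote added (address is a placeholder)
cd "$(mktemp -d)"
git init -q
git remote add origin https://example.com/demo.git

# list all configured remotes and their addresses
git remote -v
# origin  https://example.com/demo.git (fetch)
# origin  https://example.com/demo.git (push)
```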

If you need to work on an already existing project, you have to clone the repository – which essentially means downloading the project files and the entire revision history – every commit that was ever made.

To do so, create a new directory for your project and run the following git command:

# inside my project's directory
git clone https://your-repository-remote-address.git .

Caution: make sure the directory is completely empty and that you add the dot after the remote address. The dot tells git to download the files in this exact location. Also, when you are cloning an existing repository, there’s no need to re-initialize it or add the remote again – everything is already configured.

Pulling changes

As you saw in the image above, developers branch out from the master branch, work on their branches, and then merge them back into master. Whenever that happens, your local master branch becomes outdated. It doesn’t have the changes made on the git hosting service, so you’ll have to pull them.

Pulling refers to downloading the repository's latest changes from the git hosting service and updating your local repository. As with everything in software, you want to keep things updated.

# inside my project's directory
#git pull <remote> <branch>
git pull origin master

Make sure you replace origin and master if you named your remote and main branch differently.

Creating and switching branches

Every time you start working on a feature or a bug, pull from master to update your repository, and then create and switch to a new branch. Never work directly on the master branch. You should only update the master branch via merge requests (to be discussed later).

The first step before making any changes is to pull and then create and switch to a new branch:

# inside my project's directory
# checks for changes and updates the local repository
git pull origin master
# create and switch to new branch
git checkout -b my-feature

From now on, every change you will make will be on this my-feature branch – the master branch won’t be affected.

Your branch name should reflect what it is that you are working on. If it’s a new feature, name the feature. If it’s a defect fix, name the defect. If it’s a performance improvement, name the improvement you are making. If you’re just getting started and don’t know how to name your branch, go with dev-yourName.

To switch to another branch, type in your terminal:

# inside my project's directory
# git checkout <branch name>
git checkout master

Continue reading to see how you can add new files, make changes, and merge your branches into the master branch.

Adding new files

Even though the repository is initialized (either from scratch or cloned), you must manually add every new file you create to the repository. Luckily, you can add multiple files at once, so you don’t have to type in all the file paths.

# inside my project's directory
git add relative/path/to/the/file.txt
git add relative/path/to/directory
git add .

The first git command adds a single file to the repository, the second adds an entire directory, while the last command adds every new directory and file created.

Git status

You’ll often need to check your repository’s status: what files you can add, what files were changed, or have been deleted. To do so, run the following command:

# inside my project's directory
git status

You will get back an output that looks somewhat like the one below:

Changes to be committed is also known as the stage. It tells you what files would be committed if you were to create a new commit right now.

Changes not staged for commit displays changes to files git knows about but that are not prepared to be committed – these changes won’t be included if you were to create a commit right now. To add those files to the stage, run the git add . command.

Untracked files are the files git doesn’t know about just yet. It doesn’t care what happens to them. If they were to be deleted, git wouldn’t bat an eye – they are untracked. To commit these files, you need to add them by using the git add command.
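All three states can be seen at once in a scratch repository. The plumbing-friendly `git status --porcelain` prints a compact two-character code per file: the first column for staged changes, the second for unstaged ones, and `??` for untracked files. File names below are made up:

```shell
# scratch repository with one committed file
cd "$(mktemp -d)"
git init -q
git config user.name "Demo"
git config user.email "demo@example.com"

echo "committed" > tracked.txt
git add . && git commit -qm "Initial commit"

echo "edited"  > tracked.txt    # tracked file, modified but not staged
echo "staged"  > staged.txt     # new file we will add to the stage
echo "unknown" > untracked.txt  # git doesn't know about this one yet
git add staged.txt

git status --porcelain
# A  staged.txt      <- staged ("Changes to be committed")
#  M tracked.txt     <- modified, not staged
# ?? untracked.txt   <- untracked
```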

Creating commits

Once you’ve added your files to the stage by using git add ., you can create a new commit – all it needs is a message that describes the changes you’ve made:

# inside my project's directory
# add all files to stage
git add .
# create commit
git commit -m "Replace old logo"

From now on, you can continue making changes to your project and create more commits. It's essential to commit your changes often to build a good revision history with multiple points in time you can restore to. If our project were a single giant commit, there wouldn't be any undo options – we wouldn't have the possibility to restore our changes if we needed to. Commit your work often.

Pushing changes

If pulling means downloading changes (other people's commits), pushing means uploading changes.

After pulling from master, creating a new branch, and committing your work, it’s time to push your branch to the git hosting service, where you can create a merge request to have it merged into the master branch.

Before you push your changes, pull from master again just to make sure your local master branch is up to date. Once you’ve done that, push your branch using the following command:

# make sure everything is up to date. pull from <remote> <branch>
git pull origin master

# push to <origin> <this-branch>
git push origin my-feature

You can push your branch multiple times just as you can pull multiple times. Say you already pushed a branch, but you saw one file missing. You can stage the file with git add ., commit it with git commit -m "Add x file", and push your branch again with git push origin my-feature. However, if your branch was already merged into master, you will have to create an additional merge request from the same my-feature branch into the master branch.
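If you want to try this flow without a real hosting service, a local bare repository can play the role of the remote. All paths and names below are illustrative:

```shell
# a bare repository stands in for the git hosting service
remote_dir="$(mktemp -d)"
git init -q --bare "$remote_dir"

# a working clone of that "remote"
cd "$(mktemp -d)"
git clone -q "$remote_dir" .
git config user.name "Demo"
git config user.email "demo@example.com"

git checkout -q -b my-feature
echo "feature" > feature.txt
git add . && git commit -qm "Add feature"
git push -q origin my-feature          # first push

echo "forgotten" > missing-file.txt    # oops, one file was missing
git add . && git commit -qm "Add missing file"
git push -q origin my-feature          # push the same branch again
```

After the second push, the remote's my-feature branch contains both commits – pushing simply uploads whatever new commits your local branch has.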

Merge requests

Before we merge our branch, it would be nice to have one final look over our changes, just to make sure we haven't missed anything – or even better, bring in a fresh pair of eyes by inviting a colleague to review our changes. That's what merge requests are for!

While the user interface might vary, the steps to create new merge requests are usually:

  1. click the appropriate button (create merge/pull request)
  2. select the branch you want to merge and the destination branch (where you want to merge it, usually master)
  3. select one or more reviewers (if needed)
  4. confirm your action.

After creating the merge request, you will see a list where you can review each individual commit or just look over the whole change list caused by all the commits.

For example, in the above image, git tells us we have four changed files with three additions and four deletions. All file contents will end in an uppercase letter and a dot, while file-d.text will be deleted entirely. Keep in mind that additions and deletions refer to the number of lines added and deleted, not the number of affected files.
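You can confirm that git counts lines, not files: change a single line in one file and `git diff --shortstat` reports one insertion plus one deletion. The file name below is made up:

```shell
# scratch repository with one committed two-line file
cd "$(mktemp -d)"
git init -q
git config user.name "Demo"
git config user.email "demo@example.com"

printf 'line one\nline two\n' > file-a.txt
git add . && git commit -qm "Initial commit"

# rewrite one line: git sees that as 1 deletion + 1 insertion
printf 'line one\nline TWO\n' > file-a.txt
git diff --shortstat
# 1 file changed, 1 insertion(+), 1 deletion(-)
```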

As you discuss and receive feedback, you can continue to make changes to your branch while the merge request is open by following the same steps: make changes locally, commit, push your branch again. The merge request will be updated with your new changes, and the review can resume.

Once your branch is merged into master, you can start working on something else by checking out the master branch, pulling the updates, and then checking out to a new branch that will eventually make its way back into master via a merge request.

# on branch my-feature that was recently merged into master

# checkout to master
git checkout master

# pull the latest changes
git pull origin master

# checkout to a new branch
git checkout -b my-new-branch

# time passes, changes are made, commits are added...
git push origin my-new-branch

# create new merge request, have it approved, and repeat from the top

Handling merge conflicts

While git empowers us to work in parallel, occasionally you will run into conflicts. Somebody, maybe even you, made changes to the exact same line, in the exact same file, on a different branch that was already merged into master – this will cause a conflict.

While git is smart and all, sometimes it gets confused and doesn't know whose changes to trust. You'll have to help it decide which changes are the correct ones.

Pulling from the branch you want to merge into will show you the conflicted files. In our case, we have a conflict in file-b.txt:

If we open the file in our code editor, we’ll see some of our code split into two sections:

The code between <<<<<<< HEAD and ======= contains the changes made on our branch, while the section between ======= and >>>>>>> shows us what's currently on master.

We have three options:

  1. keep what is currently on master – that means deleting everything except This is file B. B comes after the letter A.
  2. decide our changes are the correct ones – that means deleting everything except This is file B. B comes before the letter C.
  3. replace everything with a combination of both – This is file B. B comes between the letter A and C.
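If you want to see the conflict markers in action without risking a real project, you can manufacture a conflict in a scratch repository. The file contents mirror the file-b.txt example above; everything else is made up:

```shell
# scratch repository with one committed file
cd "$(mktemp -d)"
git init -q
git config user.name "Demo"
git config user.email "demo@example.com"

echo "This is file B." > file-b.txt
git add . && git commit -qm "Initial commit"
git branch -M master

# our branch changes the line...
git checkout -q -b my-feature
echo "This is file B. B comes before the letter C." > file-b.txt
git commit -qam "Change file B on my-feature"

# ...and master changes the same line
git checkout -q master
echo "This is file B. B comes after the letter A." > file-b.txt
git commit -qam "Change file B on master"

# merging now conflicts; the file gets the <<<<<<< / ======= / >>>>>>> markers
git checkout -q my-feature
git merge master || true
cat file-b.txt
```

The section under <<<<<<< HEAD holds our branch's line, and the section above >>>>>>> master holds the other side – edit the file down to one of the three options, then stage and commit.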

Once you've fixed all the conflicts, stage the files, create a new commit, and push your branch again:

# after fixing conflicts

# stage everything
git add .

# create commit
git commit -m "Fixed merge conflicts"

# push branch
git push origin my-branch-name

You might have multiple conflicts in the same file. The idea is the same: what's between <<<<<<< HEAD and ======= is your change; what's between ======= and >>>>>>> is what's currently on the other branch. Decide which change is the correct one, commit your decision, and push your branch again.

To reduce the risk of conflicts, pull from master often, and especially pull every time you start a new branch – this way, you make sure you start from the latest version of the master branch.

Entire flow recapped

# identify yourself
git config --global user.name "Your full name"
git config --global user.email "your@email.com"

# go to your project's root directory
cd /the/exact/path/of/my/project

# initialize a new project and add the remote
git init
git remote add origin https://url-got-from-git-hosting.git

# or clone an existing repository
git clone https://url-got-from-git-hosting.git .

# pull from master
git pull origin master

# checkout to a new branch
git checkout -b my-branch

# make changes, stage the files, and then commit them
git add .
git commit -m "My changes"

# push your branch
git push origin my-branch

# create a merge request, discuss your changes
# fix conflicts if any
git pull origin master

# open the conflicting files in your editor.
# decide what changes are the correct ones. 
# stage the files and create a new commit.
# push your branch again.
git add .
git commit -am "Fixes merge conflicts"
git push origin my-branch

# once your branch is merged, checkout to master, pull changes, and start a new branch
git checkout master
git pull origin master
git checkout -b my-new-branch

# rinse and repeat from the top.

As I said at the beginning of this post, git is quite large and complicated. There are numerous different ways of doing what we just learned. There are also many other concepts and commands we haven’t covered: releasing versions, cherry-picking, resets, squashing, to name a few.

That being said, I'm confident this post will be good enough for someone who is new to git and needs to learn the basics. I hope it helped you.

Why rewriting applications from scratch is almost always a bad idea

There are many good reasons to rewrite a legacy application, but most of the time, the cost outweighs the benefits. In this article, I try to balance the pros and cons of rewriting applications from scratch and articulate why it is almost always a bad idea.

Many developers, including myself, can live with a legacy application for only so long before they want to kill it, burn it with fire, and rebuild it from the ground up. The code is so hard to follow and understand: methods a hundred lines long, unused variables, conditionals nested within conditionals on different levels; it's so terrible, Sandi's squint test would make you dizzy.

Why rewrites are so alluring

You get to use the newest and shiniest tools.

It's hard not to wish for a rewrite when you see places you could improve in many ways by just using a new best practice, framework, or package. Why would you stay back in time, struggling to do great work with shovels, hammers, and other medieval tools, when today you have access to all kinds of well-tested and battle-proven tools?

You have all the facts

"We have access to the current application, we can see all the ways the previous developers went wrong, and the clients themselves have more experience and know what works, what doesn't, and what they actually need. Rewriting the app will be a piece of cake, done in a heartbeat!"

Easier to write tests

Most legacy applications subject to a rewrite don't have tests. Adding them now is hard. Not only are there countless dependencies and execution paths to follow, but often you don't even know what to test! You're left playing detective, carefully following every method, trying to guess what it's supposed to do. When you start from scratch, you get to test your own code, which is a million times easier.

Why rewrites are a bad idea

Rewrites are expensive

The application won’t rewrite itself. You have to pour hours and hours into getting it to the point where, well, it does pretty much what it was doing before, maybe a little bit better.

One would make a good argument by saying, “you’ll be losing the same or even more time and money by not rewriting it, due to the inability to ship features as fast.”

That is true. Holding a legacy codebase together, fixing bugs, and shipping new features at the same time is no walk in the park. You can't be rolling out feature after feature like you used to. But at least it's not impossible – which brings me to the second point:

You can’t put out new features

When you go on a rewrite, you are unable to release anything new for months. Depending on the nature of the business, responding to users and shipping new features might be critical. Your client might not even be in business by the end of the rewrite.

No, you don’t really have all the facts

After years and years of changes, no one knows precisely how the app reacts in every situation, not even your clients. Go ahead, ask them. They might have a general idea of how the app works and what it does, but there are still many unknowns that can only be uncovered by looking at the current codebase. Go into a full rewrite unprepared, and you will waste hours and hours on client communication and dead ends.

You risk damaging your relationship with your client

Let’s say you get your client to accept all the costs of the rewrite. You promise them, in the end, all of this will be worth it. The app will be faster, bug-free, better designed, and easier to extend.

If there's one thing we know about software development, it's that at some point, something somewhere will go wrong: the server, the database, a misconfigured service, some innocent update. No matter how much you plan, something somewhere will go wrong. It's like a law of nature.

When that happens, your client will start questioning their decision to rewrite. Even though the new application is 10x better, they don't know that. They don't care if you are using the latest and greatest tools, best practices, frameworks, and packages. All they know is they agreed to spend a ton of money and precious time on something that looks and functions pretty much the same. Every hiccup you go through will damage your relationship with your client.

When you should consider a rewrite

Apart from the case in which the application is small and straightforward, and you can rewrite it from scratch in just a few months, there are two more situations in which completely rewriting the application can be a great idea:

When shipping new features is not a priority

When business is steady, with most work revolving around customer support and a few bugs here and there, there's no time pressure, and you can afford to do a rewrite for performance reasons or to stay up to date with the latest technologies. A rewrite will put you in a good place in case you need to shift gears and change direction towards something else.

When trying a different approach

The business has been great, the clients are happy, and the current application is well tested and crafted, but it has gotten a bit old, and you want to try a fresh approach to attract new customers.

Basecamp is the best example. Every few years, they completely rewrite and launch a new version of their product. New customers are coming in for the shiny new approach, while the old ones are given the choice to upgrade or to stick with their current version. Everyone is happy.

Having to work on legacy codebases sucks. You are terrified when clients ask you to add a new feature. You feel like you can’t touch anything without breaking something else in some obscure corner of the codebase. The only reasonable solution you see is to give up and rewrite the whole application — anything to make the madness stop. Hell, sometimes you feel you’d be willing to do it in your own spare time.

But sadly, rewriting is rarely a good idea. There are too many unknowns, you have to put a hold on launching new features, you risk damaging your relationship with your clients, and the time and money invested might never be earned back.

Refactoring, on the other hand, might be just enough to save you.