Author: cdruc

Tallpad #3

In the last two weeks I’ve published two screencasts:

📺 Laravel mass assignment and fillable vs guarded
In this video I go over mass assignment in Laravel, explain why it isn’t as bad as some people make it out to be, and why I just set $guarded to an empty array and rely on being explicit when creating/updating Eloquent models.
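
Here’s the gist of it, as a minimal sketch (Video, the fields, and $request are just stand-ins):

use Illuminate\Database\Eloquent\Model;

class Video extends Model
{
    // Nothing is guarded – mass assignment is allowed for every attribute.
    protected $guarded = [];
}

// The safety comes from being explicit about what goes in,
// instead of blindly passing the whole request along:
Video::create([
    'title' => $request->input('title'),
    'description' => $request->input('description'),
]);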

📺 Building a slide-over dialog component with HeadlessUI and VueJS
In case you didn’t know, a while back the TailwindCSS team released this unstyled, fully accessible UI component library called HeadlessUI – it’s really, really good. In this video I go over how you can combine the HeadlessUI dialog and transition components to build a Slide-over component.

Tallpad #2

In the last two weeks I’ve published four screencasts:

📺 Search media records by name
We’ve seen how we can filter records by month and type; in this video we add a third query scope, used for searching records by name.

📺 Avoid this PHP Carbon mistake
This is a mistake I made on the date filter, and I was incredibly lucky to discover it – it only happens if you open the app on the 31st of a month, or around the end of February & March 😮
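
For the curious, here’s a quick repro of Carbon’s classic month-overflow gotcha – presumably the culprit here (watch the video for the full story):

use Carbon\Carbon;

// Subtracting "one month" from March 31st overflows past February:
Carbon::parse('2021-03-31')->subMonth()->toDateString();
// "2021-03-03" – still in March!

// The *NoOverflow variants clamp to the last day of the previous month:
Carbon::parse('2021-03-31')->subMonthNoOverflow()->toDateString();
// "2021-02-28"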

📺 Building a custom pagination component
Not just your regular pagination component: this one allows you to navigate to a specific page you enter in an input field!

📺 Executing bulk actions
Tick a few records, select the action, hit Apply, and bam! All the selected records are deleted.

In other news

I’ve silently launched a mini side project called undefined talks. Currently, it’s just a directory where you can discover new technical talks and “recommend” your favorite ones. Plus, you get a profile page of sorts where other people can see what you personally recommend they watch.

I don’t know exactly where this project is going, but what I do know is that:

  • I want to discover new technical talks
  • I want to see what other people are recommending
  • I want to be able to set a reminder like “remind me to watch this talk in 3 months, or every 6 months” – a lot of times I watch a technical talk and fail to understand parts of it – so rewatching it later after I learn/experience more things helps.

This is the link to the website: https://undefinedtalks.com, and this is my profile page – just a few recommendations for now – I’ll add more later 🔥🔥🔥

Any feedback and/or suggestions are welcome, but keep in mind I only put ~2 days of work into it 😅

Tallpad #1

In the last two weeks I’ve published two screencasts, both in the same Building a media library series:

📺 Displaying the list of media records
We set up a new route, paginate the records, create and use a JSON resource to select the bits of information we need, and finally loop through the results and display them on the frontend. Pretty basic stuff, but it had to be done for the continuity of the series.

📺 Filter media records by file type and month
This one is a bit more interesting due to the challenges it presented. We had to figure out how to get generic file types like Video, Archive, and Document from specific MIME types like video/mp4, application/msword, etc.

Then we made sure to only show options that may return results – no point in showing “Archive” as an option if there are no archive files in the list — or showing “May 2021” as a date option if there were no files uploaded in that month.

We also went over things like creating computed properties to add additional select values, maintaining query parameters between requests, using query scopes to filter the records, and other possibly interesting things 😄
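
For reference, the month and type filters boil down to query scopes roughly like these (a sketch – the actual names used in the series may differ):

use Carbon\Carbon;
use Illuminate\Database\Eloquent\Model;

class Media extends Model
{
    // Filter by a generic file type: video, document, archive, etc.
    public function scopeType($query, $type)
    {
        return $query->when($type, fn ($q) => $q->where('type', $type));
    }

    // Filter by upload month, e.g. "2021-05" (day pinned to avoid overflow).
    public function scopeMonth($query, $month)
    {
        return $query->when($month, function ($q) use ($month) {
            $date = Carbon::createFromFormat('Y-m-d', $month.'-01');

            return $q->whereYear('created_at', $date->year)
                     ->whereMonth('created_at', $date->month);
        });
    }
}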

Live reloading with Laravel Mix and BrowserSync

I was about to record a TailwindCSS video, and this kind of screencast works better if the browser shows you the changes in real-time; otherwise, you have to refresh the page with every change you make – and that’s just annoying.

The first thing that popped into my mind was Tailwind Play, an online playground where you can try out Tailwind stuff. Any change you make will instantly appear on the right.

Then I remembered Laravel Mix can do pretty much the same thing with BrowserSync.

Here’s how you can use it when serving a Laravel app with artisan:

mix.browserSync({
  proxy: 'http://127.0.0.1:8000/'
});

And here’s how to do it when using Laravel Valet to serve a custom domain:

mix.browserSync({
  proxy: 'https://myapp.test',
  host: 'myapp.test',
  open: 'external',
  https: {
    key: '/Users/yourUser/.config/valet/Certificates/myapp.test.key',
    cert: '/Users/yourUser/.config/valet/Certificates/myapp.test.crt',
  },
});

It’s nowhere near as fast as Tailwind Play, but it’s good enough when working on entire projects.

Why I am not integrating Tallpad with Vimeo

Currently, every time I publish a screencast on Tallpad, I have to:

  • export the video
  • upload it to Vimeo
  • fill in all of Vimeo’s fields (name, privacy settings, etc.)
  • grab the Vimeo embed code
  • go to Tallpad’s Nova admin panel and create a new episode – here, I have to fill in the title, description, embed code, video duration, and other fields.
  • hit publish

The reason I’m going with this somewhat tedious flow is that, well, I don’t post screencasts that often. So I don’t mind taking 5-10 minutes and doing it manually.

Even if I somehow increased my posting frequency to one screencast a day, it still wouldn’t bother me to do that part manually. I’d rather focus my efforts on making even more videos or improving the user experience on the platform.

As a programmer, I love building stuff. I love spending time fiddling with new layout types, adding icons, thinking about nested comments, bookmarked episodes, organizing episodes under series and topics, and other nice-to-haves.

But that’s what those are right now: nice-to-haves. More content is needed.

The only reasonable, cost-effective way to test validation in Laravel apps

Every time I tell someone how I test validation in Laravel, their reaction is somewhere along the lines of “wait, what? This is so much better. I wish I knew it existed.”

So, yeah, here’s how I test validation in Laravel.

Below, we have a bunch of tests asserting that different validation rules are in place when registering a new user: the name is required, the email is required, the email must be a valid email, the email must be unique, and so on.

namespace Tests\Feature\Auth;

use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class RegistrationTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function name_is_required()
    {
        $response = $this->postJson(route('register'), [
            'name' => '',

            'email' => 'druc@pinsmile.com',
            'password' => 'password',
            'password_confirmation' => 'password',
            'device_name' => 'iphone'
        ]);

        $response->assertStatus(422);
        $response->assertJsonValidationErrors('name');
    }

    /** @test */
    public function email_is_required()
    {
        $response = $this->postJson(route('register'), [
            'email' => '',

            'name' => 'Constantin Druc',
            'password' => 'password',
            'password_confirmation' => 'password',
            'device_name' => 'iphone'
        ]);

        $response->assertStatus(422);
        $response->assertJsonValidationErrors('email');
    }

    /** @test */
    public function email_must_be_a_valid_email()
    {
        // code
    }

    /** @test */
    public function email_must_be_unique()
    {
        // code
    }

    // more similar tests
}

What we are doing in every test is sending the correct values except for the field whose validation we are testing. This way, we can assert that we receive a 422 response and validation errors for that specific field.

And this is how most people I know test validation – some optimize this further by extracting methods to reduce duplication, but the general idea is to write one test per validation rule.

The thing is, just like production code, test code requires maintenance, and the more tests you have, the slower your test suite becomes.

The slower your test suite becomes, the less often you’ll run it, and the less often you’ll be refactoring and improving code. Not only that, but you will also end up avoiding writing more tests knowing it will slow you down even more.

So while writing tests is crucial, having too many of them can also become a problem. Ideally, you want to have as few tests as possible that run as fast as possible while still being confident enough that everything works.

We pay for confidence with tests

We write tests so we can be confident that our changes don’t break the app. We pay for confidence with tests. The more tests we write, the more confident we are things are working.

But sometimes, we happen to overpay for that confidence. Sometimes we write more tests than we actually need to.

In our example, those tests take about 300ms to run. Let’s say we have an app where 20 requests require validation – that will add up to a cost of about 6s of waiting time and 160 tests – assuming each request has about 8 validation rules. This is the price we pay for the confidence that our requests are validated. 6s and 160 tests.

But there’s a cheaper way to do it. It doesn’t yield as much confidence as what we are doing now, but it is close enough, and it’s much, much cheaper.

Laravel has thorough and exhaustive tests for every validation rule. If I set required as a validation rule, I’m confident that it will work; it will let me know if the field is missing. I don’t need to test that.

What I do need to test is that my request is validated with whatever rules I set in place. But this approach comes with some costs.

The first is, we need to install an additional package: jasonmccreary/laravel-test-assertions – this will provide us with the assertions we need to test that our controller action is validated using the correct form request object.

The second thing is, with this approach, you can no longer use inline validation – it only works with form request objects.

Here’s how it looks:

namespace Tests\Feature\Auth;

use App\Http\Controllers\Auth\RegistrationController;
use App\Http\Requests\RegistrationRequest;
use JMac\Testing\Traits\AdditionalAssertions;
use Tests\TestCase;

class RegistrationRequestTest extends TestCase
{
    use AdditionalAssertions;

    /** @test */
    public function registration_uses_the_correct_form_request()
    {
        $this->assertActionUsesFormRequest(
            RegistrationController::class, 
            'register', 
            RegistrationRequest::class
        );
    }

    /** @test */
    public function registration_request_has_the_correct_rules()
    {
        $this->assertValidationRules([
            'name' => ['required'],
            'email' => ['required', 'email', 'unique:users,email'],
            'password' => ['required', 'min:8', 'confirmed'],
            'device_name' => ['required']
        ], (new RegistrationRequest())->rules());
    }
}

The first test asserts that the RegistrationController@register action uses the RegistrationRequest form request object.

The second test asserts that the RegistrationRequest has the rules we want it to have.

Before, we had to write 8 tests to ensure our register action is validated; now, we only need 2.

Before, our tests needed 300ms to run; now, they only take 80ms. And we can speed this up even more by replacing Laravel’s TestCase with the PHPUnit\Framework\TestCase class. The first one loads the entire Laravel application, and we don’t need that to run these two tests.

namespace Tests\Feature\Auth;

use App\Http\Controllers\Auth\RegistrationController;
use App\Http\Requests\RegistrationRequest;
use JMac\Testing\Traits\AdditionalAssertions;
use PHPUnit\Framework\TestCase; // previously: use Tests\TestCase;

class RegistrationRequestTest extends TestCase
{
    use AdditionalAssertions;

    /** @test */
    public function registration_uses_the_correct_form_request()
    {
        // code
    }

    // second test
}

Now, these tests only take 9ms to run. That’s over 30 times faster than what we had before.

So while we need to install an additional package and we are limited to only using form request objects for validation, this second approach is much faster. On top of that, it only requires 2 tests instead of one for each validation rule.

That’s how I test validation in Laravel. 2 tests. And they are fast tests.

If you liked this article, consider subscribing to my YouTube channel.

Surviving your first week of Git without losing your mind

No matter what kind of software you are writing or what technologies and languages you are using, there is a good chance (~74% to be more precise) you need to learn how to use git.

The problem is… well, git is quite large and complicated. And the more you try to learn about it, the more confusing it tends to get. Sometimes even the most experienced developers have trouble making sense of it. So don’t feel bad if you don’t understand it just yet. In this article, I’ll do my best to help you survive your first week of Git without losing your mind.

Why git?

Git is a version control system. It tracks all the changes made to a project: deleted files, modified files, new files, and when and by whom those changes were made.

Not only that, but it also offers a convenient way of jumping from one point in history to another. This is useful when, for some reason, your project stops working correctly after introducing new changes. Git allows you to easily roll back to a specific point when you know the project is stable.

Apart from that, what makes git super useful is how easy it makes it for developers to collaborate and work on the same project simultaneously, without stepping on each other’s toes (most of the time, anyway 😅).

Repository & commits

A git project, also known as a repository, contains all files of the project and its entire revision history – every commit that was ever made. A commit is a perfect snapshot of the project at a specific point in time. The more commits you have, the more moments in history you can navigate to. That’s why I recommend committing your work often.

Branches

To organize commits and to allow developers to work in parallel, git uses a concept called branching. Think of branches as different timelines of our project where we fix bugs or add new features that will eventually make their way into the main timeline (usually called the master branch).

When you branch out, you get a perfect copy of the project where you can do whatever you want without affecting the main timeline (master branch). When you finish working on your branch, you can merge back into master, creating a combination of the two branches.

Consider the following example:

On the blue timeline, Mike has everything from the master branch plus his work on the payments system, while on the red timeline, Sandi has everything from the master branch plus her work on the authentication system.

Neither of them has the other’s work just yet, though.

The master branch is the timeline where all the other branches will be merged in. When Sandi and Mike finish their work, they will both merge their branches into the master branch. Once they do that, they will continue to branch out and merge in their new branches until the end of the project.

Merge requests

Those red and blue circles from the image above are merge requests, sometimes called “pull requests”. A merge request is a formal way of asking to merge your branch into another one.

When you create a merge request with a source code hosting application like GitLab, you can see every change your branch will introduce. This allows you and your team to review and discuss the changes before merging them.

Workflow

You can work with git either from the command line or by using a dedicated client like SourceTree – or even from your code editor, as most editors support git out of the box or have plugins you can install.

However, I strongly recommend trying git from the command line before moving to a dedicated client application. While git can get complicated, you can get away with just a handful of git commands most of the time.

Identify yourself

The first thing you need to do after downloading and installing git is to identify yourself. Git needs to know who you are to set you as the author of the commits you will be making. You only need to do this once.

git config --global user.name "Your full name"
git config --global user.email "your@email.com"

The path you are on when you run the commands above doesn’t matter, but from now on, you must run every git command inside your project’s root directory.

cd /the/exact/path/to/your/project

Starting out

You can find yourself in two contexts: either you need to start a git repository from scratch or work on an already existing repository created by someone else.

In both scenarios, you’ll need the repository’s remote address from your Git hosting service (GitLab, GitHub, and Bitbucket, to name a few).

If you’re starting a new project from scratch, you’ll need to go inside the project’s root directory, initialize it as a new git repository, and set its remote address.

# inside my project's directory
git init
git remote add origin https://your-repository-remote-address.git

Although your project will have a single remote most of the time, you can actually add more remotes – that’s why the command is git remote add.

  • git remote – tells git you want to do something remote related.
  • add is what you want to do, which is to “add a new remote.”
  • origin is the name of the remote. It can be anything you want. I named it origin by convention.
  • https://your-repository-remote-address.git is the remote address of the repository from your Git hosting service and where you will push your commits.

If you need to work on an already existing project, you have to clone the repository – which essentially means downloading the project files and its entire revision history – every commit that was ever made.

To do so, create a new directory for your project and run the following git command:

# inside my project's directory
git clone https://your-repository-remote-address.git .

Caution: make sure the directory is completely empty and that you add the dot after the remote address. The dot tells git to download the files into this exact location. Also, when you are cloning an existing repository, there’s no need to re-initialize it or add the remote again – everything is already configured.

Pulling changes

As you saw in the image above, developers branch out from the master branch, work on their branches, and then merge them back into master. Whenever that happens, your local master branch becomes outdated. It doesn’t have the changes made on the git hosting service, so you’ll have to pull them.

Pulling refers to downloading the repository’s latest changes from the git hosting service and updating your local repository. As with everything in software, you want to keep things up to date.

# inside my project's directory
#git pull <remote> <branch>
git pull origin master

Make sure you replace origin and master if you named your remote and main branch differently.

Creating and switching branches

Every time you start working on a feature or a bug, pull from master to update your repository, and then create and switch to a new branch. Never work directly on the master branch. You should only update the master branch via merge requests (to be discussed later).

The first step before making any changes is to pull and then create and switch to a new branch:

# inside my project's directory
# checks for changes and updates the local repository
git pull origin master
# create and switch to new branch
git checkout -b my-feature

From now on, every change you make will be on this my-feature branch – the master branch won’t be affected.

Your branch name should reflect what it is that you are working on. If it’s a new feature, name the feature. If it’s a defect fix, name the defect. If it’s a performance improvement, name the improvement you are making. If you’re just getting started and don’t know how to name your branch, go with dev-yourName.

To switch to another branch, type in your terminal:

# inside my project's directory
# git checkout <branch name>
git checkout master

Continue reading to see how you can add new files, make changes, and merge your branches into the master branch.

Adding new files

Even though the repository is initialized (either from scratch or cloned), you must manually add every new file you create to the repository. Luckily, you can add multiple files at once, so you don’t have to type in all the file paths.

# inside my project's directory
git add relative/path/to/the/file.txt
git add relative/path/to/directory
git add .

The first git command adds a single file to the repository, the second adds an entire directory, while the last command adds every new directory and file created.

Git status

You’ll often need to check your repository’s status: which files can be added, which were changed, and which have been deleted. To do so, run the following command:

# inside my project's directory
git status

You will get back an output that looks somewhat like the one below:
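
Something along these lines (trimmed for brevity, with made-up file names):

On branch my-feature

Changes to be committed:
        modified:   header.php

Changes not staged for commit:
        modified:   style.css

Untracked files:
        logo.svg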

Changes to be committed is also known as the stage (or staging area). It tells you what files would be committed if you were to create a new commit right now.

Changes not staged for commit displays changes to files git knows about but that are not prepared to be committed – these changes won’t be included if you were to create a commit right now. To add those files to the stage, run the git add . command.

Untracked files are the files git doesn’t know about just yet. It doesn’t care what happens to them. If they were to be deleted, git wouldn’t bat an eye – they are untracked. To commit these files, you need to add them by using the git add command.

Creating commits

Once you’ve added your files to the stage by using git add ., you can create a new commit – all it needs is a message that describes the changes you’ve made:

# inside my project's directory
# add all files to stage
git add .
# create commit
git commit -m "Replace old logo"

From now on, you can continue making changes to your project and create more commits. It’s essential to commit your changes often to build a good revision history with multiple points in time you can restore to. If our project were a single giant commit, there would be no undo options – no way to restore our changes if we needed to. Commit your work often.

Pushing changes

If pulling means downloading changes (other people’s commits), pushing means uploading your own.

After pulling from master, creating a new branch, and committing your work, it’s time to push your branch to the git hosting service, where you can create a merge request to have it merged into the master branch.

Before you push your changes, pull from master again just to make sure your local master branch is up to date. Once you’ve done that, push your branch using the following command:

# make sure everything is up to date. pull from <remote> <branch>
git pull origin master

# push to <origin> <this-branch>
git push origin my-feature

You can push your branch multiple times, just as you can pull multiple times. Say you already pushed a branch, but then notice a file is missing. You can stage the file with git add ., commit it with git commit -m "Add x file", and push your branch again with git push origin my-feature. However, if your branch was already merged into master, you will have to create an additional merge request to get the new commits from the same my-feature branch into the master branch.

Merge requests

Before we merge our branch, it would be nice to have one final look over our changes, just to make sure we haven’t missed anything – or, even better, get a fresh pair of eyes by inviting a colleague to review them. That’s what merge requests are for!

While the user interface might vary, the steps to create new merge requests are usually:

  1. click the appropriate button (create merge/pull request)
  2. select the branch you want to merge and the destination branch (where you want to merge it, usually master)
  3. select one or more reviewers (if needed)
  4. confirm your action.

After creating the merge request, you will see a list where you can review each individual commit or just look over the whole change list caused by all the commits.

For example, in the above image, git tells us we have four changed files with three additions and four deletions. All file contents will end in an uppercase letter and a dot, while file-d.txt will be deleted entirely. Keep in mind that additions and deletions refer to the number of lines added and deleted, not the number of affected files.

As you discuss and receive feedback, you can continue to make changes to your branch while the merge request is open by following the same steps: make changes locally, commit, push your branch again. The merge request will be updated with your new changes, and the review can resume.

Once your branch is merged into master, you can start working on something else by checking out the master branch, pulling the updates, and then checking out a new branch that will eventually make its way back into master via a merge request.

# on branch my-feature that was recently merged into master

# checkout to master
git checkout master

# pull the latest changes
git pull origin master

# checkout to a new branch
git checkout -b my-new-branch

# time passes, changes are made, commits are added...
git push origin my-new-branch

# create new merge request, have it approved, and repeat from the top

Handling merge conflicts

While git empowers us to work in parallel, occasionally you will run into conflicts. Somebody, maybe even you, made changes to the exact same line, in the exact same file, on a different branch that was already merged into master – this will cause a conflict.

While git is smart and all, sometimes it gets confused and doesn’t know which change to trust. You’ll have to help it decide which changes are the correct ones.

Pulling from the branch you want to merge into will show you the conflicted files. In our case, we have a conflict in file-b.txt.

If we open the file in our code editor, we’ll see some of our code split into two sections:
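
Roughly like this (the commit hash is just for illustration):

This is file B.
<<<<<<< HEAD
B comes before the letter C.
=======
B comes after the letter A.
>>>>>>> 4f8a2c1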

The code between <<<<<<< HEAD and ======= contains the changes made on our branch, while the other section shows us what’s currently on master.

We have three options:

  1. keep what is currently on master – that means deleting everything except This is file B. B comes after the letter A.
  2. decide our changes are the correct ones – that means deleting everything except This is file B. B comes before the letter C.
  3. replace everything with a combination of both – This is file B. B comes between the letter A and C.

Once you’ve fixed all the conflicts, stage the files, create a new commit, and push your branch again:

# after fixing conflicts

# stage everything
git add .

# create commit
git commit -m "Fixed merge conflicts"

# push branch
git push origin my-branch-name

You might have multiple conflicts in the same file. The idea is the same: what’s between <<<<<<< HEAD and ======= is your change, and what’s between ======= and the hash is what’s currently on that branch. Decide which change is the correct one, commit your decision, and push your branch again.

To reduce the risk of conflicts, pull from master often, and especially pull every time you start a new branch – this way, you make sure you start from the latest version of the master branch.

Entire flow recapped

# identify yourself
git config --global user.name "Your full name"
git config --global user.email "your@email.com"


# go to your project's root directory
cd /the/exact/path/of/my/project

# initialize a new project and add the remote
git init
git remote add origin https://url-got-from-git-hosting.git

# or clone an existing repository
git clone https://url-got-from-git-hosting.git .

# pull from master
git pull origin master

# checkout to a new branch
git checkout -b my-branch

# make changes, stage the files, and then commit them
git add .
git commit -m "My changes"

# push your branch
git push origin my-branch

# create a merge request, discuss your changes
# fix conflicts if any
git pull origin master

# open the conflicting files in your editor.
# decide what changes are the correct ones. 
# stage the files and create a new commit.
# push your branch again.
git add .
git commit -m "Fix merge conflicts"
git push origin my-branch

# once your branch is merged, checkout to master, pull changes, and start a new branch
git checkout master
git pull origin master
git checkout -b my-new-branch

# rinse and repeat from the top.

As I said at the beginning of this post, git is quite large and complicated. There are numerous different ways of doing what we just learned. There are also many other concepts and commands we haven’t covered: releasing versions, cherry-picking, resets, squashing, to name a few.

That being said, I’m confident this post will be good enough for someone who is new to git and needs to learn the basics. I hope it helped you.

Why rewriting applications from scratch is almost always a bad idea

There are many good reasons to rewrite a legacy application, but most of the time, the cost outweighs the benefits. In this article, I try to balance the pros and cons of rewriting applications from scratch and articulate why it is almost always a bad idea.

Many developers, including myself, can live with a legacy application for only so long before they want to kill it, burn it with fire, and rebuild it from the ground up. The code is so hard to follow and understand: methods a hundred lines long, unused variables, conditionals conditioning conditionals on different levels; it’s so terrible, Sandi Metz’s squint test would make you dizzy.

Why rewrites are so alluring

You get to use the newest and shiniest tools.

It’s hard not to wish for a rewrite when you see places you could improve in many ways just by using a new best practice, framework, or package. Why would you stay back in time, struggling to do great work with shovels and hammers and other medieval tools, when today you have access to all kinds of well-tested and battle-proven tools?

You have all the facts

“We have access to the current application, we can see all the ways the previous developers went wrong, and the clients themselves have more experience now and know what works, what doesn’t, and what they actually need. Rewriting the app will be a piece of cake, done in a heartbeat!”

Easier to write tests

Most legacy applications subject to a rewrite don’t have tests. Adding them now is hard. Not only are there countless dependencies and execution paths to follow, often you don’t even know what to test! You’re left playing detective, carefully following every method, trying to guess what it’s supposed to do. When you start from scratch, you get to test your own code, which is a million times easier.

Why rewrites are a bad idea

Rewrites are expensive

The application won’t rewrite itself. You have to pour hours and hours into getting it to the point where, well, it does pretty much what it was doing before, maybe a little bit better.

One could make a good argument by saying, “you’ll be losing the same or even more time and money by not rewriting it, due to the inability to ship features as fast.”

That is true. Holding a legacy codebase together, fixing bugs, and shipping new features at the same time is no walk in the park. You can’t roll out feature after feature like you used to. But at least it’s not impossible – which brings us to the second point:

You can’t put out new features

When you go on a rewrite, you are unable to release anything new for months. Depending on the nature of the business, responding to users and shipping new features might be critical. Your client might not even be in business by the end of the rewrite.

No, you don’t really have all the facts

After years and years of changes, no one knows precisely how the app reacts in every situation, not even your clients. Go ahead, ask them. They might have a general idea of how the app works and what it does, but there are still many unknowns that can only be uncovered by looking at the current codebase. Go into a full rewrite unprepared, and you will waste hours and hours on client communication and dead ends.

You risk damaging your relationship with your client

Let’s say you get your client to accept all the costs of the rewrite. You promise them, in the end, all of this will be worth it. The app will be faster, bug-free, better designed, and easier to extend.

If there’s one thing we know about software development, it’s that at some point, something somewhere will go wrong: the server, the database, a misconfigured service, some innocent update. No matter how much you plan, something somewhere will go wrong. It’s like a law of nature.

When that happens, your client will start questioning their decision to rewrite. Even though the new application is 10x better, they don’t know that. They don’t care if you are using the latest and greatest tools, best practices, frameworks, and packages. All they know is they agreed to spend a ton of money and precious time on something that looks and functions pretty much the same. Every hiccup you go through will damage the relationship you have with your client.

When you should consider a rewrite

Apart from the case in which the application is small and straightforward, and you can rewrite it from scratch in just a few months, there are two more situations when completely rewriting the application can be a great idea:

When shipping new features is not a priority

When business is steady, most work revolves around customer support and a few bugs here and there, and there’s no time pressure, you can afford a rewrite for performance reasons or to stay up to date with the latest technologies. A rewrite will put you in a good place in case you need to shift gears and change direction towards something else.

When trying a different approach

The business has been great, the clients are happy, and the current application is well tested and crafted, but it has gotten a bit old, and you want to try a fresh approach to attract new customers.

Basecamp is the best example. Every few years, they completely rewrite and launch a new version of their product. New customers are coming in for the shiny new approach, while the old ones are given the choice to upgrade or to stick with their current version. Everyone is happy.


Having to work on legacy codebases sucks. You are terrified when clients ask you to add a new feature. You feel like you can’t touch anything without breaking something else in some obscure corner of the codebase. The only reasonable solution you see is to give up and rewrite the whole application — anything to make the madness stop. Hell, sometimes you feel you’d be willing to do it in your own spare time.

But sadly, rewriting is rarely a good idea. There are too many unknowns, you have to put a hold on launching new features, you risk damaging your relationship with your clients, and the time and money invested might never be earned back.

Refactoring, on the other hand, might be just enough to save you.

Less boilerplate and more awesomeness with the new InertiaJS form helper

In this video we’ll take a look at Inertia’s new form helper – a game changer that solves some of the old pains we had with forms while removing a lot of boilerplate.

<template>
  <app-layout>
    <div class="sticky top-0 flex items-center justify-between p-4 bg-white border-b border-gray-200">
      <h2 class="text-xl font-semibold leading-tight text-gray-800">
        Edit profile
      </h2>

      <div class="flex items-center space-x-2">
        <secondary-button @click="form.reset()">
          Reset
        </secondary-button>

        <primary-button form="profile-form" :loading="form.processing">
          Save profile {{ form.progress ? `${form.progress.percentage}%` : ''}}
        </primary-button>
      </div>
    </div>

    <form id="profile-form" class="p-5" @submit.prevent="submit">
      <div v-if="form.wasSuccessful" class="p-3 mb-3 bg-green-100 rounded border border-green-300">
        Profile was updated successfully.
      </div>

      <div class="mb-3">
        <avatar-input class="h-24 w-24 rounded-full" v-model="form.avatar" :default-src="user.profile_photo_url"></avatar-input>
        <p class="text-sm text-red-600" v-if="form.errors.avatar">{{ form.errors.avatar }}</p>
      </div>

      <div class="mb-3">
        <label for="name" class="block font-medium text-sm text-gray-700">Name:</label>
        <input v-model="form.name" id="name" class="form-input rounded-md shadow-sm w-full">
        <p class="text-sm text-red-600" v-if="form.errors.name">{{ form.errors.name }}</p>
      </div>

      <div class="mb-3">
        <label for="username" class="block font-medium text-sm text-gray-700">Username:</label>
        <input v-model="form.username" id="username" class="form-input rounded-md shadow-sm w-full">
        <p class="text-sm text-red-600" v-if="form.errors.username">{{ form.errors.username }}</p>
      </div>

      <div class="mb-3">
        <label for="email" class="block font-medium text-sm text-gray-700">Email:</label>
        <input v-model="form.email" id="email" class="form-input rounded-md shadow-sm w-full">
        <p class="text-sm text-red-600" v-if="form.errors.email">{{ form.errors.email }}</p>
      </div>

      <div class="mb-3">
        <label for="description" class="block font-medium text-sm text-gray-700">Description:</label>
        <textarea v-model="form.description" id="description" class="form-input rounded-md shadow-sm w-full" rows="3"></textarea>
        <p class="text-sm text-red-600" v-if="form.errors.description">{{ form.errors.description }}</p>
      </div>
    </form>
  </app-layout>
</template>

<script>
import AppLayout from './../../Layouts/AppLayout';
import AvatarInput from "../../Components/AvatarInput";

export default {
  props: {
    user: Object
  },
  data() {
    return {
      form: this.$inertia.form({
        avatar: null,
        name: this.user.name,
        username: this.user.username,
        email: this.user.email,
        description: this.user.description,
        _method: 'PUT'
      })
    }
  },
  methods: {
    submit() {
      this.form.post(`/settings/profile`, {
        onSuccess: () => this.form.reset()
      });
    }
  },
  components: {
    AppLayout,
    AvatarInput
  },
}
</script>

4 ways to reduce complexity in your Eloquent models

I think everyone loves to work on completely greenfield applications. You get to plan your own course and choose your current favourite technologies, structures, and patterns to follow. There is no legacy code, no technical debt, nothing that stands in your way. You can do whatever you want, and building features is a breeze.

But you know the story. You know what happens next.

Your application grows. New requirements come in, and old features need to be changed.

You do your best to keep customers happy, but after a while complexity creeps in, and you find yourself in a position where you start pulling every possible hack and making every crappy decision to fight your own code into submission.

One of the places our application tends to grow is in our model layer. The usual suspects are the User class and whatever models are part of the core of our application. If you’re building a content management system, Post would be one of the suspects. Selling stuff? Take a look at the Order class.

We’re going to look at ways to deal with complexity in our Eloquent models.

Use traits

Traits are the easiest way to slim down an Eloquent model. Create a new file, copy and paste a bunch of methods, and BAM! – your model is 100 lines thinner.
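
To illustrate (a hypothetical extraction), all the subscription-related bits of a fat User model could move into a trait like this:

use Illuminate\Database\Eloquent\Model;

trait HasSubscriptions
{
    public function subscriptions()
    {
        return $this->hasMany(Subscription::class);
    }

    // Convenience check used all over the codebase.
    public function isSubscribed()
    {
        return $this->subscriptions()->whereNull('cancelled_at')->exists();
    }
}

class User extends Model
{
    use HasSubscriptions;
}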

The problem with traits is that most of the time you end up sweeping dirt under the rug. You want to clean up the model, but instead of thinking really hard about the problem, you take the trash and sweep it under a trait.

The biggest problem is not the accumulated dirt. Everyone has dirt on their floor at some point. The problem is that, with traits, you won’t really notice it. You’ll look at the room, thinking it’s clean and tidy.

Having your model spread over multiple files makes it a lot harder to identify new concepts and behaviour that can be given representations in your system. You clean and organise things you can see. Code that is not seen is not refactored.

Nevertheless, especially in programming, things are never black and white, and traits are sometimes a good option.

Query builders

One of Eloquent’s nicest features is query scopes. They allow you to take a common set of constraints, name it, and reuse it throughout your application. Let’s take the following example of a Video model:

use Carbon\Carbon;
use Illuminate\Database\Eloquent\Model;

class Video extends Model
{
    public function scopeDraft($query)
    {
        return $query->where('published', false);
    }

    public function scopePublished($query)
    {
        return $query->where('published', true);
    }

    public function scopeLive($query)
    {
        return $query->published()
            ->where('published_at', '<=', Carbon::now());
    }
}

Once the list of scope methods starts to get in our way, we can move them into a dedicated query builder class like the one below. Notice that we no longer need the scope prefix, nor do we need to pass in the $query parameter, as we are now in a query builder context.

use Carbon\Carbon;
use Illuminate\Database\Eloquent\Builder;

class VideoQueryBuilder extends Builder
{
    public function draft()
    {
        return $this->where('published', false);
    }

    public function published()
    {
        return $this->where('published', true);
    }

    public function live()
    {
        return $this->published()
            ->where('published_at', '<=', Carbon::now());
    }
}

To replace the default query builder with our enhanced one, we override the newEloquentBuilder method in our Video model like below:

class Video extends Model
{
    public function newEloquentBuilder($query)
    {
        return new VideoQueryBuilder($query);
    }
}

Move event listeners to observer classes

When models are short and easy to go through, I like to keep the event listeners right in the boot method so I don’t need to switch to a second file to figure out what happens when. But, when the model starts growing and growing, moving the event listeners into their own observer class is a good trade-off.

protected static function boot()
{
    parent::boot();

    self::saved(function(BodyState $bodyState) {
        $bodyState->update([
            'macros_id' => $bodyState->user->latestMacros->id
        ]);
    });

    self::created(function(BodyState $bodyState) {
        if ($bodyState->user->logBodyStateNotification) {
            $bodyState->user->logBodyStateNotification->markAsRead();
        }
    });
}

Bonus tip 1: Instead of having a bunch of CRUD calls, make your code as expressive as possible whenever you can.

class BodyStateObserver
{
    public function saved(BodyState $bodyState)
    {
        $bodyState->associateWithLatestMacros();
    }

    public function created(BodyState $bodyState)
    {
        $bodyState->user->markLogBodyStateNotificationsAsRead();
    }
}

Bonus tip 2: Instead of hiding your observers in a provider class, register them in the model’s boot method. Not only will you know the observer exists, but you’ll also be able to quickly navigate to it from your model.

class BodyState extends Model
{
    protected static function boot()
    {
        parent::boot();
        self::observe(BodyStateObserver::class);
    }
}

Value Objects

When you notice two or more things that always seem to go together – for example, street name and street number, or start date and end date – you have an opportunity to extract them into a single concept. In our case, Address and DateRange.

Another way to detect these kinds of objects is to look for methods that “play” a lot with one of your model’s attributes. In the example below, it seems there are quite a few things we do with the product’s price (in cents).

class Product
{
    public function getPrice()
    {
       return $this->price;
    }

    public function getPriceInDollars()
    {
       return $this->price / 100;
    }

    public function getPriceDisplay()
    {
       return (new NumberFormatter('en_US', NumberFormatter::CURRENCY))
          ->formatCurrency($this->getPriceInDollars(), "USD");
    }
}

We can extract these methods into a Price class.

class Price
{
    private $cents;

    public function __construct($cents)
    {
        $this->cents = $cents;
    }

    public function inDollars()
    {
        return $this->cents / 100;
    }

    public function getDisplay()
    {
       return (new NumberFormatter('en_US', NumberFormatter::CURRENCY))
          ->formatCurrency($this->inDollars(), "USD");
    }
}

Then remove the previous methods from Product and return a Price instance instead:

class Product
{
    public function getPrice()
    {
        return new Price($this->price);
    }
}
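
The calling code then reads naturally (assuming a stored price of 499 cents):

$product->getPrice()->inDollars();   // 4.99
$product->getPrice()->getDisplay();  // "$4.99"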

There are many other ways of putting your models on a diet (service objects, form objects, decorators, view objects, policies, and others), but the ones I shared above are the ones I tend to reach for the most when my models need to lose some weight. I hope it makes a good diet for your models too 🙂