Live reloading with Laravel Mix and BrowserSync

I was about to record a TailwindCSS video, and this kind of screencast works better if the browser shows you the changes in real-time; otherwise, you have to refresh the page with every change you make – and that’s just annoying.

The first thing that popped into my mind was Tailwind Play, an online playground where you can try out Tailwind classes and components. Any change you make instantly appears on the right.

Then I remembered Laravel Mix can do pretty much the same thing with BrowserSync.

Here’s how you can use it when serving a Laravel app with artisan:

mix.browserSync({
  proxy: 'http://127.0.0.1:8000/'
});

And here’s how to do it when using Laravel Valet to serve a custom domain:

mix.browserSync({
  proxy: 'https://myapp.test',
  host: 'myapp.test',
  open: 'external',
  https: {
    key: '/Users/yourUser/.config/valet/Certificates/myapp.test.key',
    cert: '/Users/yourUser/.config/valet/Certificates/myapp.test.crt',
  },
});

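With either config in place (typically in webpack.mix.js), start the watcher – assuming the default Laravel npm scripts – and BrowserSync will open the proxied site and reload it whenever your files change:

npm run watch
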
It’s nowhere near as fast as Tailwind Play, but it’s good enough when working on entire projects.

Why I am not integrating tallpad with Vimeo

Currently, every time I publish a screencast on tallpad, I have to:

  • export the video
  • upload it to Vimeo
  • fill in all of Vimeo’s fields (name, privacy settings, etc.)
  • grab the Vimeo embed code
  • go to tallpad’s Nova admin panel and create a new episode – here, I have to fill in the title, description, embed code, video duration, and other fields
  • hit publish

The reason I’m going with this somewhat tedious flow is that, well, I don’t post screencasts that often. So I don’t mind taking 5-10 minutes and doing it manually.

Even if I somehow increased my posting frequency to one screencast a day, it still wouldn’t bother me to do that part manually. I’d rather focus my efforts on making even more videos or improving the user experience on the platform.

As a programmer, I love building stuff. I love spending time fiddling with new layout types, adding icons, thinking about nested comments, bookmarked episodes, organizing episodes under series and topics, and other nice-to-haves.

But that’s all they are right now: nice-to-haves. More content is needed.

The only reasonable, cost-effective way to test validation in Laravel apps

Every time I tell someone how I test validation in Laravel, their reaction is something along the lines of “wait, what? This is so much better. I wish I knew it existed”.

So, yeah, here’s how I test validation in Laravel.

Below, we have a bunch of tests asserting that different validation rules are in place when registering a new user: name is required, email is required, email must be a valid email, email must be unique, and so on.

namespace Tests\Feature\Auth;

use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class RegistrationTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function name_is_required()
    {
        $response = $this->postJson(route('register'), [
            'name' => '',

            'email' => 'druc@pinsmile.com',
            'password' => 'password',
            'password_confirmation' => 'password',
            'device_name' => 'iphone'
        ]);

        $response->assertStatus(422);
        $response->assertJsonValidationErrors('name');
    }

    /** @test */
    public function email_is_required()
    {
        $response = $this->postJson(route('register'), [
            'email' => '',

            'name' => 'Constantin Druc',
            'password' => 'password',
            'password_confirmation' => 'password',
            'device_name' => 'iphone'
        ]);

        $response->assertStatus(422);
        $response->assertJsonValidationErrors('email');
    }

    /** @test */
    public function email_must_be_a_valid_email()
    {
        // code
    }

    /** @test */
    public function email_must_be_unique()
    {
        // code
    }

    // more similar tests
}

What we are doing in every test is sending correct values for every field except the one whose validation we are testing. This way, we can assert that we receive a 422 response and validation errors for that specific field.

And this is how most people I know test validation – some optimize this further by extracting methods to reduce duplication, but the general idea is to write one test per validation rule.

The thing is, just like production code, test code requires maintenance, and the more tests you have, the slower your test suite becomes.

The slower your test suite becomes, the less often you’ll run it, and the less often you’ll be refactoring and improving code. Not only that, but you will also end up avoiding writing more tests knowing it will slow you down even more.

So while writing tests is crucial, having too many of them can also become a problem. Ideally, you want to have as few tests as possible that run as fast as possible while still being confident enough that everything works.

We pay for confidence with tests

We write tests so we can be confident that our changes don’t break the app. We pay for confidence with tests. The more tests we write, the more confident we are things are working.

But sometimes, we happen to overpay for that confidence. Sometimes we write more tests than we actually need to.

In our example, those tests take about 300ms to run. Let’s say we have an app where 20 requests require validation – that will add up to a cost of about 6s of waiting time and 160 tests – assuming each request has about 8 validation rules. This is the price we pay for the confidence that our requests are validated. 6s and 160 tests.

But there’s a cheaper way to do it. It doesn’t yield as much confidence as what we are doing now, but it is close enough, and it’s much, much cheaper.

Laravel has thorough and exhaustive tests for every validation rule. If I set required as a validation rule, I’m confident that it will work; it will let me know if the field is missing. I don’t need to test that.

What I do need to test is that my request is validated with whatever rules I set in place. But this approach comes with some costs.

The first is that we need to install an additional package: jasonmccreary/laravel-test-assertions – this will provide us with the assertions we need to test that our controller action is validated using the correct form request object.

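For reference, it installs like any other dev dependency (assuming a standard Composer setup):

composer require --dev jasonmccreary/laravel-test-assertions
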
The second is that, with this approach, you can no longer use inline validation – it only works with form request objects.

Here’s how it looks:

namespace Tests\Feature\Auth;

use App\Http\Controllers\Auth\RegistrationController;
use App\Http\Requests\RegistrationRequest;
use JMac\Testing\Traits\AdditionalAssertions;
use Tests\TestCase;

class RegistrationRequestTest extends TestCase
{
    use AdditionalAssertions;

    /** @test */
    public function registration_uses_the_correct_form_request()
    {
        $this->assertActionUsesFormRequest(
            RegistrationController::class, 
            'register', 
            RegistrationRequest::class
        );
    }

    /** @test */
    public function registration_request_has_the_correct_rules()
    {
        $this->assertValidationRules([
            'name' => ['required'],
            'email' => ['required', 'email', 'unique:users,email'],
            'password' => ['required', 'min:8', 'confirmed'],
            'device_name' => ['required']
        ], (new RegistrationRequest())->rules());
    }
}

The first test asserts that the RegistrationController@register action uses the RegistrationRequest form request object.

The second test asserts that the RegistrationRequest has the rules we want it to have.

Before, we had to write 8 tests to ensure our register action is validated; now, we only need 2.

Before, our tests needed 300ms to run; now, they only take 80ms. And we can speed this up even more by replacing Laravel’s TestCase with the PHPUnit\Framework\TestCase class. The first one loads the entire Laravel application, and we don’t need that to run these two tests.

namespace Tests\Feature\Auth;

use App\Http\Controllers\Auth\RegistrationController;
use App\Http\Requests\RegistrationRequest;
use JMac\Testing\Traits\AdditionalAssertions;
use PHPUnit\Framework\TestCase; // previously: use Tests\TestCase;

class RegistrationRequestTest extends TestCase
{
    use AdditionalAssertions;

    /** @test */
    public function registration_uses_the_correct_form_request()
    {
        // code
    }

    // second test
}

Now, these tests only take 9ms to run. That’s over 30 times faster than what we had before.

So while we need to install an additional package and we are limited to only using form request objects for validation, this second approach is much faster. On top of that, it only requires 2 tests instead of one for each validation rule.

That’s how I test validation in Laravel. 2 tests. And they are fast tests.

If you liked this article, consider subscribing to my YouTube channel.

Surviving your first week of Git without losing your mind

No matter what kind of software you are writing or what technologies and languages you are using, there is a good chance (~74% to be more precise) you need to learn how to use git.

The problem is… well, git is quite large and complicated. And the more you try to learn about it, the more confusing it tends to get. Sometimes even the most experienced developers have trouble making sense of it. So don’t feel bad if you don’t understand it just yet. In this article, I’ll do my best to help you survive your first week of Git without losing your mind.

Why git?

Git is a version control system. It tracks all the changes made to a project – deleted files, modified files, new files – as well as who made those changes and when.

Not only that, but it also offers a convenient way of jumping from one point in history to another. This is useful when, for some reason, your project stops working correctly after introducing new changes. Git allows you to easily roll back to a specific point when you know the project is stable.

Apart from that, what makes git super useful is how easy it makes it for developers to collaborate and work on the same project simultaneously, without stepping on each other’s toes (most of the time, anyway 😅).

Repository & commits

A git project, also known as a repository, contains all files of the project and its entire revision history – every commit that was ever made. A commit is a perfect snapshot of the project at a specific point in time. The more commits you have, the more moments in history you can navigate to. That’s why I recommend committing your work often.

Branches

To organize commits and to allow developers to work in parallel, git uses a concept called branching. Think of branches as different timelines of our project where we fix bugs or add new features that will eventually make their way into the main timeline (usually called the master branch).

When you branch out, you get a perfect copy of the project where you can do whatever you want without affecting the main timeline (master branch). When you finish working on your branch, you can merge back into master, creating a combination of the two branches.

Consider the following example:

On the blue timeline, Mike has everything from the master branch plus his work on the payments system, while on the red timeline, Sandi has everything from the master branch plus her work on the authentication system.

None of them have each other’s work just yet, though.

The master branch is the timeline where all the other branches will be merged in. When Sandi and Mike finish their work, they will both merge their branches into the master branch. Once they do that, they will continue to branch out and merge in their new branches until the end of the project.

Merge requests

Those red and blue circles from the image above are merge requests, sometimes called “pull requests”. A merge request is a formal way of asking to merge your branch into another one.

When you create a merge request with a source code hosting application like Gitlab, you can see every change your branch will introduce. This allows you and your team to review and discuss the changes before merging them.

Workflow

You can work with git either from the command line or by using a dedicated client like SourceTree – or even from your code editor, as most of them support git out of the box or have plugins you can install.

However, I strongly recommend trying git from the command line before moving to a dedicated client application. While git can get complicated, you can get away with just a handful of git commands most of the time.

Identify yourself

The first thing you need to do after downloading and installing git is to identify yourself. Git needs to know who you are to set you as the author of the commits you will be making. You only need to do this once.

git config --global user.name "Your full name"
git config --global user.email "your@email.com"

The path you are on when you run the commands above doesn’t matter, but from now on, you must run every git command inside your project’s root directory.

cd /the/exact/path/to/your/project

Starting out

You can find yourself in two contexts: either you need to start a git repository from scratch or work on an already existing repository created by someone else.

In both scenarios, you’ll need the git repository remote address from your Git hosting service (Gitlab, Github, Bitbucket, to name a few).

If you’re starting a new project from scratch, you’ll need to go inside the project’s root directory, initialize it as a new git repository, and set its remote address.

# inside my project's directory
git init
git remote add origin https://your-repository-remote-address.git

Although your project will have a single remote most of the time, you can actually add more remotes – that’s why the command is git remote add.

  • git remote – tells git you want to do something remote related.
  • add is what you want to do, which is to “add a new remote.”
  • origin is the name of the remote. It can be anything you want. I named it origin by convention.
  • https://your-repository-remote-address.git is the remote address of the repository from your Git hosting service and where you will push your commits.

If you need to work on an already existing project, you have to clone the repository – which essentially means you have to download the project files and its entire revision history – every commit that was ever made.

To do so, create a new directory for your project and run the following git command:

# inside my project's directory
git clone https://your-repository-remote-address.git .

Caution: make sure the directory is completely empty and that you add the dot after the remote address. The dot tells git to download the files in this exact location. Also, when you are cloning an existing repository, there’s no need to re-initialize it or add the remote again – everything is already configured.

Pulling changes

As you saw in the image above, developers branch out from the master branch, work on their branches, and then merge them back into master. Whenever that happens, your local master branch becomes outdated. It doesn’t have the changes made on the git hosting service, so you’ll have to pull them.

Pulling refers to downloading the repository’s latest changes from the git hosting service and updating your local repository. As with everything in software, you want to keep things up to date.

# inside my project's directory
# git pull <remote> <branch>
git pull origin master

Make sure you replace origin and master if you named your remote and main branch differently.

Creating and switching branches

Every time you start working on a feature or a bug, pull from master to update your repository, and then create and switch to a new branch. Never work directly on the master branch. You should only update the master branch via merge requests (to be discussed later).

The first step before making any changes is to pull and then create and switch to a new branch:

# inside my project's directory
# checks for changes and updates the local repository
git pull origin master
# create and switch to new branch
git checkout -b my-feature

From now on, every change you will make will be on this my-feature branch – the master branch won’t be affected.

Your branch name should reflect what it is that you are working on. If it’s a new feature, name the feature. If it’s a defect fix, name the defect. If it’s a performance improvement, name the improvement you are making. If you’re just getting started and don’t know how to name your branch, go with dev-yourName.

To switch to another branch, type in your terminal:

# inside my project's directory
# git checkout <branch name>
git checkout master

Continue reading to see how you can add new files, make changes, and merge your branches into the master branch.

Adding new files

Even though the repository is initialized (either from scratch or cloned), you must manually add every new file you create to the repository. Luckily, you can add multiple files at once, so you don’t have to type in all the file paths.

# inside my project's directory
git add relative/path/to/the/file.txt
git add relative/path/to/directory
git add .

The first git command adds a single file to the repository, the second adds an entire directory, while the last command adds every new directory and file created.

Git status

You’ll often need to check your repository’s status: which files you can add, which files were changed, and which were deleted. To do so, run the following command:

# inside my project's directory
git status

You will get back an output that looks somewhat like the one below:

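The exact output depends on your repository, but – trimmed down and with made-up file names – it looks roughly like this:

On branch my-feature
Changes to be committed:
        modified:   file-a.txt

Changes not staged for commit:
        modified:   file-b.txt

Untracked files:
        file-c.txt
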
Changes to be committed is also known as the stage. It tells you what files would be committed if you were to create a new commit right now.

Changes not staged for commit displays changes to files git knows about but that are not prepared to be committed – these changes won’t be included if you were to create a commit right now. To add those files to the stage, run the git add . command.

Untracked files are the files git doesn’t know about just yet. It doesn’t care what happens to them. If they were to be deleted, git wouldn’t bat an eye – they are untracked. To commit these files, you need to add them by using the git add command.

Creating commits

Once you’ve added your files to the stage by using git add ., you can create a new commit – all it needs is a message that describes the changes you’ve made:

# inside my project's directory
# add all files to stage
git add .
# create commit
git commit -m "Replace old logo"

From now on, you can continue making changes to your project and create more commits. It’s essential to commit your changes often to build a good revision history with multiple points in time you can restore to. If our project were a single giant commit, there wouldn’t be any undo options – no way to restore our changes if we needed to. Commit your work often.

Pushing changes

If pulling means downloading changes (other commits), pushing means uploading changes (your commits).

After pulling from master, creating a new branch, and committing your work, it’s time to push your branch to the git hosting service, where you can create a merge request to have it merged into the master branch.

Before you push your changes, pull from master again just to make sure your local master branch is up to date. Once you’ve done that, push your branch using the following command:

# make sure everything is up to date. pull from <remote> <branch>
git pull origin master

# push to <origin> <this-branch>
git push origin my-feature

You can push your branch multiple times, just as you can pull multiple times. Say you already pushed a branch, but you notice one file is missing. You can stage the file with git add ., commit it with git commit -m "Add x file", and push your branch again with git push origin my-feature. However, if your branch was already merged into master, you will have to create an additional merge request from the same my-feature branch into the master branch.

Merge requests

Before we merge our branch, it would be nice to have one final look over our changes, just to make sure we haven’t missed anything – or, even better, get a fresh pair of eyes by inviting a colleague to review them. That’s what merge requests are for!

While the user interface might vary, the steps to create new merge requests are usually:

  1. click the appropriate button (create merge/pull request)
  2. select the branch you want to merge and the destination branch (where you want to merge it, usually master)
  3. select one or more reviewers (if needed)
  4. confirm your action.

After creating the merge request, you will see a list where you can review each individual commit or just look over the whole change list caused by all the commits.

For example, in the above image, git tells us we have four changed files with three additions and four deletions. All file contents will end in an uppercase letter and a dot, while file-d.text will be deleted entirely. Keep in mind that additions and deletions refer to the number of lines added and deleted, not the number of affected files.

As you discuss and receive feedback, you can continue to make changes to your branch while the merge request is open by following the same steps: make changes locally, commit, push your branch again. The merge request will be updated with your new changes, and the review can resume.

Once your branch is merged into master, you can start working on something else by checking out the master branch, pulling the updates, and then checking out a new branch that will eventually make its way back into master via a merge request.

# on branch my-feature that was recently merged into master

# checkout to master
git checkout master

# pull the latest changes
git pull origin master

# checkout to a new branch
git checkout -b my-new-branch

# time passes, changes are made, commits are added...
git push origin my-new-branch

# create new merge request, have it approved, and repeat from the top

Handling merge conflicts

While git empowers us to work in parallel, occasionally you will run into conflicts. Somebody, maybe even you, made changes to the exact same line, in the exact same file, on a different branch that was already merged into master – this will cause a conflict.

While git is smart and all, sometimes it gets confused and doesn’t know which change to trust. You’ll have to help it decide which changes are the correct ones.

Pulling from the branch you want to merge into will show you the conflicted files. In our case, we have a conflict in file-b.txt:

If we open the file in our code editor, we’ll see some of our code split into two sections:

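Based on the options described below, the conflicted section of file-b.txt would look roughly like this (the label after the last marker can be a branch name or a commit hash, depending on how you pulled):

<<<<<<< HEAD
This is file B. B comes before the letter C.
=======
This is file B. B comes after the letter A.
>>>>>>> master
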
The code between <<<<<<< HEAD and ======= is the change made on our branch, while the other section shows us what’s currently on master.

We have three options:

  1. keep what is currently on master – that means deleting everything except This is file B. B comes after the letter A.
  2. decide our changes are the correct ones – that means deleting everything except This is file B. B comes before the letter C.
  3. replace everything with a combination of both – This is file B. B comes between the letter A and C.

Once you’ve fixed all the conflicts, stage the files, create a new commit, and push your branch again:

# after fixing conflicts

# stage everything
git add .

# create commit
git commit -m "Fixed merge conflicts"

# push branch
git push origin my-branch-name

You might have multiple conflicts in the same file. The idea is the same: what’s between <<<<<<< HEAD and ======= is your change, and what’s between ======= and the closing >>>>>>> marker (followed by a branch name or commit hash) is what’s currently on that branch. Decide which change is the correct one, commit your decision, and push your branch again.

To reduce the risk of conflicts, pull from master often, and especially pull every time you start a new branch – this way, you make sure you start from the latest version of the master branch.

Entire flow recapped

# identify yourself
git config --global user.name "Your full name"
git config --global user.email "your@email.com"


# go to your project's root directory
cd /the/exact/path/of/my/project

# initialize a new project and add the remote
git init
git remote add origin https://url-got-from-git-hosting.git

# or clone an existing repository
git clone https://url-got-from-git-hosting.git .

# pull from master
git pull origin master

# checkout to a new branch
git checkout -b my-branch

# make changes, stage the files, and then commit them
git add .
git commit -m "My changes"

# push your branch
git push origin my-branch

# create a merge request, discuss your changes
# fix conflicts if any
git pull origin master

# open the conflicting files in your editor.
# decide what changes are the correct ones. 
# stage the files and create a new commit.
# push your branch again.
git add .
git commit -am "Fixes merge conflicts"
git push origin my-branch

# once your branch is merged, checkout to master, pull changes, and start a new branch
git checkout master
git pull origin master
git checkout -b my-new-branch

# rinse and repeat from the top.

As I said at the beginning of this post, git is quite large and complicated. There are numerous different ways of doing what we just learned. There are also many other concepts and commands we haven’t covered: releasing versions, cherry-picking, resets, squashing, to name a few.

That being said, I’m confident this post is good enough for someone who is new to git and needs to learn the basics. I hope it helped you.

Why rewriting applications from scratch is almost always a bad idea

There are many good reasons to rewrite a legacy application, but most of the time, the cost outweighs the benefits. In this article, I try to balance the pros and cons of rewriting applications from scratch and articulate why it is almost always a bad idea.

Many developers, including myself, can live with a legacy application for only so long before they want to kill it, burn it with fire, and rebuild it from the ground up. The code is so hard to follow and understand: methods a hundred lines long, unused variables, conditionals nested inside conditionals on different levels; it’s so terrible that Sandi Metz’s squint test would make you dizzy.

Why rewrites are so alluring

You get to use the newest and shiniest tools.

It’s hard not to wish for a rewrite when you see so many places you could improve just by using a new best practice, framework, or package. Why would you stay stuck in the past, struggling to do great work with shovels, hammers, and other medieval tools, when today you have access to all kinds of well-tested and battle-proven ones?

You have all the facts

“We have access to the current application, we can see all the ways the previous developers went wrong, and the clients themselves have more experience and know what works, what doesn’t, and what they actually need. Rewriting the app will be a piece of cake, done in a heartbeat!”

Easier to write tests

Most legacy applications subject to a rewrite don’t have tests. Adding them now is hard. Not only are there countless dependencies and execution paths to follow, but often you don’t even know what to test! You’re left playing detective, carefully following every method, trying to guess what it’s supposed to do. When you start from scratch, you get to test your own code, which is a million times easier.

Why rewrites are a bad idea

Rewrites are expensive

The application won’t rewrite itself. You have to pour hours and hours into getting it to the point where, well, it does pretty much what it was doing before, maybe a little bit better.

One could make a good argument by saying, “you’ll be losing the same or even more time and money by not rewriting it, due to the inability to ship features as fast.”

That is true. Holding a legacy codebase together, fixing bugs, and shipping new features at the same time is no walk in the park. You can’t be rolling out feature after feature like you used to. But at least it’s not impossible – which brings us to the second point:

You can’t put out new features

When you go on a rewrite, you are unable to release anything new for months. Depending on the nature of the business, responding to users and shipping new features might be critical. Your client might not even be in business by the end of the rewrite.

No, you don’t really have all the facts

After years and years of changes, no one knows precisely how the app reacts in every situation, not even your clients. Go ahead, ask them. They might have a general idea of how the app works and what it does, but there are still many unknowns that can only be uncovered by looking at the current codebase. Go into a full rewrite unprepared, and you will waste hours and hours on client communication and dead ends.

You risk damaging your relationship with your client

Let’s say you get your client to accept all the costs of the rewrite. You promise them, in the end, all of this will be worth it. The app will be faster, bug-free, better designed, and easier to extend.

If there’s one thing we know about software development is that at some point, something somewhere will go wrong: the server, the database, a misconfigured service, some innocent update. No matter how much you plan, something somewhere will go wrong. It’s like a law of nature.

When that happens, your client will start questioning their decision to rewrite. Even though the new application is 10x better, they don’t know that. They don’t care if you are using the latest and greatest tools, best practices, frameworks, and packages. All they know is they agreed to spend a ton of money and precious time on something that looks and functions pretty much the same. Every hiccup you go through will damage the relationship you have with your client.

When you should consider a rewrite

Apart from the case in which the application is small and straightforward, and you can rewrite it from scratch in just a few months, there are two more situations when completely rewriting the application can be a great idea:

When shipping new features is not a priority

When business is steady, with most work revolving around customer support and a few bugs here and there, and there’s no time pressure, you can afford a rewrite for performance reasons or to stay up to date with the latest technologies. A rewrite will put you in a good place in case you need to shift gears and change direction towards something else.

When trying a different approach

The business has been great, the clients are happy, and the current application is well tested and crafted, but it has gotten a bit old, and you want to try a fresh approach to attract new customers.

Basecamp is the best example. Every few years, they completely rewrite and launch a new version of their product. New customers are coming in for the shiny new approach, while the old ones are given the choice to upgrade or to stick with their current version. Everyone is happy.


Having to work on legacy codebases sucks. You are terrified when clients ask you to add a new feature. You feel like you can’t touch anything without breaking something else in some obscure corner of the codebase. The only reasonable solution you see is to give up and rewrite the whole application — anything to make the madness stop. Hell, sometimes you feel you’d be willing to do it in your own spare time.

But sadly, rewriting is rarely a good idea. There are too many unknowns, you have to put a hold on launching new features, you risk damaging your relationship with your clients, and the time and money invested might never be earned back.

Refactoring, on the other hand, might be just enough to save you.

Less boilerplate and more awesomeness with the new InertiaJS form helper

In this video, we’ll take a look at Inertia’s new form helper – a game changer, as it solves some of the old pains we had with forms while removing a lot of boilerplate.

<template>
  <app-layout>
    <div class="sticky top-0 flex items-center justify-between p-4 bg-white border-b border-gray-200">
      <h2 class="text-xl font-semibold leading-tight text-gray-800">
        Edit profile
      </h2>

      <div class="flex items-center space-x-2">
        <secondary-button @click="form.reset()">
          Reset
        </secondary-button>

        <primary-button form="profile-form" :loading="form.processing">
          Save profile {{ form.progress ? `${form.progress.percentage}%` : ''}}
        </primary-button>
      </div>
    </div>

    <form id="profile-form" class="p-5" @submit.prevent="submit">
      <div v-if="form.wasSuccessful" class="p-3 mb-3 bg-green-100 rounded border border-green-300">
        Profile was updated successfully.
      </div>

      <div class="mb-3">
        <avatar-input class="h-24 w-24 rounded-full" v-model="form.avatar" :default-src="user.profile_photo_url"></avatar-input>
        <p class="text-sm text-red-600" v-if="form.errors.avatar">{{ form.errors.avatar }}</p>
      </div>

      <div class="mb-3">
        <label for="name" class="block font-medium text-sm text-gray-700">Name:</label>
        <input v-model="form.name" id="name" class="form-input rounded-md shadow-sm w-full">
        <p class="text-sm text-red-600" v-if="form.errors.name">{{ form.errors.name }}</p>
      </div>

      <div class="mb-3">
        <label for="username" class="block font-medium text-sm text-gray-700">Username:</label>
        <input v-model="form.username" id="username" class="form-input rounded-md shadow-sm w-full">
        <p class="text-sm text-red-600" v-if="form.errors.username">{{ form.errors.username }}</p>
      </div>

      <div class="mb-3">
        <label for="email" class="block font-medium text-sm text-gray-700">Email:</label>
        <input v-model="form.email" id="email" class="form-input rounded-md shadow-sm w-full">
        <p class="text-sm text-red-600" v-if="form.errors.email">{{ form.errors.email }}</p>
      </div>

      <div class="mb-3">
        <label for="description" class="block font-medium text-sm text-gray-700">Description:</label>
        <textarea v-model="form.description" id="description" class="form-input rounded-md shadow-sm w-full" rows="3"></textarea>
        <p class="text-sm text-red-600" v-if="form.errors.description">{{ form.errors.description }}</p>
      </div>
    </form>
  </app-layout>
</template>

<script>
import AppLayout from './../../Layouts/AppLayout';
import AvatarInput from "../../Components/AvatarInput";

export default {
  props: {
    user: Object
  },
  data() {
    return {
      form: this.$inertia.form({
        avatar: null,
        name: this.user.name,
        username: this.user.username,
        email: this.user.email,
        description: this.user.description,
        _method: 'PUT'
      })
    }
  },
  methods: {
    submit() {
      this.form.post(`/settings/profile`, {
        onSuccess: () => this.form.reset()
      });
    }
  },
  components: {
    AppLayout,
    AvatarInput
  },
}
</script>

4 ways to reduce complexity in your eloquent models

I think everyone loves to work on completely greenfield applications. You get to plan your own course and choose your current favourite technologies, structures, and patterns to follow. There is no legacy code, no technical debt, nothing that stands in your way. You can do whatever you want, and building features is a breeze.

But you know the story. You know what happens next.

Your application grows. New requirements come in, and old features need to be changed.

You do your best to keep customers happy, but after a while complexity creeps in and you find yourself in a position where you start doing every possible hack and taking every crappy decision to fight your own code into submission.

One of the places our application tends to grow is in our model layer. The usual suspects are the User class and whatever models are part of the core of our application. If you’re building a content management system, Post would be one of the suspects. Selling stuff? Take a look at the Order class.

We’re going to look at ways to deal with complexity in our eloquent models.

Use traits

Traits are the easiest way to slim down an eloquent model. Create a new file, copy and paste a bunch of methods, and BAM! – your model is 100 lines thinner.

The problem with traits is that most of the time you end up sweeping dirt under the rug. You want to clean up the model, but instead of thinking really hard about the problem, you take the trash and sweep it under a trait.

The biggest problem is not the accumulated dirt. Everyone has dirt on their floor at some point. The problem is that, with traits, you won’t really notice it. You’ll look at the room, thinking it’s clean and tidy.

Having your model spread over multiple files makes it a lot harder to identify new concepts and behaviour that can be given representations in your system. You clean and organise things you can see. Code that is not seen is not refactored.

Nevertheless, especially in programming, things are never black and white, and traits are sometimes a good option.

Query builders

One of Eloquent’s nicest features is query scopes. They allow you to take a common set of constraints, name it, and re-use it throughout your application. Let’s take the following example of a Video model:

class Video extends Model
{
    public function scopeDraft($query)
    {
        return $query->where('published', false);
    }

    public function scopePublished($query)
    {
        return $query->where('published', true);
    }

    public function scopeLive($query)
    {
        return $query->published()
            ->where('published_at', '<=', Carbon::now());
    }
}

Once the list of scope methods starts to get in our way, we can move them into a dedicated query builder class like below. Notice that we no longer need the scope prefix, nor do we need to pass in the $query parameter, as we are now in a query builder context.

class VideoQueryBuilder extends Builder
{
    public function draft()
    {
        return $this->where('published', false);
    }

    public function published()
    {
        return $this->where('published', true);
    }

    public function live()
    {
        return $this->published()
            ->where('published_at', '<=', Carbon::now());
    }
}

To replace the default query builder with our enhanced one, we override the newEloquentBuilder method in our Video model like below:

class Video extends Model
{
    public function newEloquentBuilder($query)
    {
        return new VideoQueryBuilder($query);
    }
}

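Usage stays exactly the same as with scopes – static calls on the model are forwarded to the custom builder. A quick sketch:

// resolved by VideoQueryBuilder
$drafts = Video::draft()->get();
$liveVideos = Video::live()->latest('published_at')->get();
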
Move event listeners to observer classes

When models are short and easy to go through, I like to keep the event listeners right in the boot method so I don’t need to switch to a second file to figure out what happens when. But, when the model starts growing and growing, moving the event listeners into their own observer class is a good trade-off.

protected static function boot()
{
    parent::boot();

    self::saved(function(BodyState $bodyState) {
        $bodyState->update([
            'macros_id' => $bodyState->user->latestMacros->id
        ]);
    });

    self::created(function(BodyState $bodyState) {
        if ($bodyState->user->logBodyStateNotification) {
            $bodyState->user->logBodyStateNotification->markAsRead();
        }
    });
}

Bonus tip 1: Instead of having a bunch of CRUD calls, make your code as expressive as possible whenever you can.

class BodyStateObserver
{
    public function saved(BodyState $bodyState)
    {
        $bodyState->associateWithLatestMacros();
    }

    public function created(BodyState $bodyState)
    {
        $bodyState->user->markLogBodyStateNotificationsAsRead();
    }
}

Bonus tip 2: Instead of hiding your observers in a provider class, register them in the model’s boot method. Not only will you know the observer exists, but you’ll also be able to quickly navigate to it from your model.

class BodyState extends Model
{
    protected static function boot()
    {
        parent::boot();
        self::observe(BodyStateObserver::class);
    }
}

Value Objects

When you notice two or more things that always seem to go together – for example, street name and street number, or start date and end date – you have an opportunity to extract and represent them as a single concept: in our case, Address and DateRange.

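As a minimal sketch (the class and method names here are illustrative, not taken from an existing codebase), a DateRange value object could look like this:

use Carbon\Carbon;

class DateRange
{
    private $start;
    private $end;

    public function __construct(Carbon $start, Carbon $end)
    {
        $this->start = $start;
        $this->end = $end;
    }

    // number of whole days between the start and end dates
    public function lengthInDays()
    {
        return $this->start->diffInDays($this->end);
    }

    // whether the given date falls within the range (inclusive)
    public function contains(Carbon $date)
    {
        return $date->between($this->start, $this->end);
    }
}
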
Another way to detect these kinds of objects is to look for methods that “play” a lot with one of your model’s attributes. In the example below, it seems there are quite a few things we do with the price (in cents) of the product.

class Product
{
    public function getPrice()
    {
       return $this->price;
    }

    public function getPriceInDollars()
    {
       return $this->price / 100;
    }

    public function getPriceDisplay()
    {
       return (new NumberFormatter( 'en_US', NumberFormatter::CURRENCY ))
          ->formatCurrency($this->getPriceInDollars(), "USD");
    }
}

We can extract these methods into a Price class.

class Price
{
    private $cents;

    public function __construct($cents)
    {
        $this->cents = $cents;
    }

    public function inDollars()
    {
        return $this->cents / 100;
    }

    public function getDisplay()
    {
       return (new NumberFormatter('en_US', NumberFormatter::CURRENCY))
          ->formatCurrency($this->inDollars(), "USD");
    }
}

Then we can remove the previous methods and return a Price instance instead.

class Product
{
    public function getPrice()
    {
        return new Price($this->price);
    }
}

There are many other ways of putting your models on a diet (service objects, form objects, decorators, view objects, policies, and others), but the ones I shared above are the ones I tend to reach for the most when my models need to lose some weight. I hope it makes a good diet for your models too 🙂

InertiaJS infinite scrolling example

I just published a new video on how to do infinite scrolling in an InertiaJS and Laravel application – using a Twitter-like feed as an example.

Infinite scrolling with InertiaJs and Laravel

The gist of it is:

  1. Setup a listener for the scroll event
  2. Inside the listener, calculate the remaining pixels until the bottom of the page so you can make additional requests to load new items before the user actually reaches it.
  3. Make sure you use lodash’s debounce method to avoid making the same request multiple times.
  4. Use axios or fetch or anything else to make a regular xhr request to the server.
  5. Make sure the server endpoint knows when to return an Inertia response and when to return a regular JSON response.
  6. Put your items in a local state property so you can append the new ones when you make the additional requests. The reason for this is that Vue props shouldn’t be mutated.

Backend snippet:

public function index(User $user, Request $request)
{
    $tweets = $user->tweets()->with('user')->paginate();

    if ($request->wantsJson()) {
        return $tweets;
    }

    return Inertia::render('UserTweets', [
        'user' => $user,
        'tweets' => $tweets
    ]);
}

Scroll event listener snippet:

import AppLayout from './../Layouts/AppLayout'
import {debounce} from "lodash/function";

export default {
  props: {
    user: Object,
    tweets: Object
  },
  data() {
    return {
      userTweets: this.tweets
    }
  },
  components: {
    AppLayout,
  },
  mounted() {
    window.addEventListener('scroll', debounce((e) => {
      let pixelsFromBottom = document.documentElement.offsetHeight - document.documentElement.scrollTop - window.innerHeight;

      if (pixelsFromBottom < 200) {
        axios.get(this.userTweets.next_page_url).then(response => {
          this.userTweets = {
            ...response.data,
            data: [...this.userTweets.data, ...response.data.data]
          }
        });
      }
    }, 100));
  }
}

Composer – everything I should have known

Composer was just something I used to get my project up and running and occasionally install additional libraries. I never put too much thought into it – it just worked. Sometimes I would run into problems, but often they were easily fixed by running composer install or composer dump-autoload. I had no idea what dump-autoload was doing, but it was fixing things.

This article goes through everything I would have liked to know about composer. You’ll find out why we are using it, how it works, what else you can do with it, and how to use it in production.

The PEAR days

Before composer, we relied on cherry-picking code from one place to another, or on manually downloading libraries and throwing them into a libs directory. Another solution was PEAR (PHP Extension and Application Repository), the existing library distribution system at the time.

But there were many problems with it:

  • installations were made system-wide, rather than on a project basis. You couldn’t have two versions of the same library on your machine.
  • every library had to be unique. You were not allowed to have a different take on how to solve a specific problem.
  • to have your repository accepted into PEAR you had to gather a certain number of up-votes.
  • existing solutions were often outdated, inactive, or unmaintained.

Enter composer

Built by Nils Adermann and Jordi Boggiano, it solved everything PEAR sucked at. Packages were installed on a per-project basis, and anyone was free to contribute, create, and share packages with the world. This encouragement led to more polished, complete, and bug-free libraries.

Composer is a dependency manager. It relies on a configuration file (composer.json) to figure out your project’s dependencies and their sub-dependencies, where to download them from, and how to install and autoload them.

Dependencies are not only PHP libraries. They can also come in the form of platform requirements, like the PHP version and the extensions installed on your machine. These cannot be installed via composer – their only purpose is to inform developers what their environment should look like.

Where are all the packages coming from

Packages can be installed from anywhere as long as they are accessible through a VCS (version control system, like git or svn), PEAR, or a direct URL to a .zip file. To do so, you must specify your sources under the repositories key using the proper repository configuration.

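For example, pulling a package straight from a Git repository instead of packagist.org could look like this (the package name and URL below are made up):

{
    "repositories": [
        {
            "type": "vcs",
            "url": "https://github.com/acme/my-private-package"
        }
    ],
    "require": {
        "acme/my-private-package": "^1.0"
    }
}
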
For private repositories, you can configure composer to use access tokens or basic HTTP authentication. Configuration options are available for GitHub, Gitlab, and Bitbucket.

If no repositories were specified, or packages were not found using the provided sources, composer will search its main repository, packagist.org.

Once the package is located, composer uses the VCS’s features (branches and tags) to find and attempt to download the best match for the version constraints specified in the composer.json file.

Wait a minute. Attempt? Best match?

– a confused reader

Well, yes. There are different ways to specify which package versions composer should install. But before we get into that, let’s have a quick look at how most developers version their packages – which is by following the SemVer (semantic versioning) convention.

SemVer helps developers communicate the nature of the changes made to their package. This way, everyone relying on it won’t have to manually check the source code for changes that might break their project.

The convention assumes a three-part version number X.Y.Z (Major.Minor.Patch) with optional stability suffixes. Each number starts at 0 and is incremented based on what type of changes were made:

  • Major – increments when breaking changes were introduced
  • Minor – increments when backwards-compatible features were added
  • Patch – increments when bugs were fixed
  • occasionally, stability suffixes are used: -dev, -patch (-p), -alpha (-a), -beta (-b) or -RC (release candidate)

Cautions

  1. Packages having the major version at 0.x.x can introduce breaking changes during minor releases. Only packages having version 1.0.0 or higher are considered to be production-ready.
  2. Not all packages follow semantic versioning. Make sure to consult their documentation before making any assumptions.

Specifying version constraints

There are many ways you can specify package versions, but the most common ones are:

  • version range – using the math operators >, >=, <, <=, !=. For example, >=1.0.0 <2.0.0 will install the newest version higher or equal to 1.0.0 but lower than 2.0.0.
  • caret range – adding a caret ^ will install the newest version available that does not include breaking changes. ^2.1.0 translates into “install the newest version higher or equal to 2.1.0, but lower than 3.0.0”.
  • tilde range – similar to the caret range; the difference is that it only allows the last specified number to increase. ~2.1 will install the newest 2.x version available (for example, 2.9), while ~2.1.0 will install the newest 2.1.x version available (for example, 2.1.9).
  • you can also choose to specify the exact version.

You can register dependencies in two different places: require and require-dev. The first contains everything your project needs to run in production, while the second dictates the additional requirements for development work – for example, phpunit to run your test suite. The reason we specify dependencies in two different places is so we don’t install and autoload packages intended for development on the production machines.

Here’s what you might have in composer.json:

{
    "require": {
        "php": "^7.1.3",
        "ext-gd": "*",
        "lib-libxml": "*",
        "monolog/monolog": "^1.12"
    },
    "require-dev": {
        "fzaninotto/faker": "^1.4",
        "phpunit/phpunit": "^7.5"
    }
}

Production requirements: a PHP version between 7.1.3 and 8.0.0 (not included). Both the gd (graphics drawing) extension and the libxml library must be installed, along with any version between 1.12.0 and 2.0.0 (not included) of the monolog/monolog package.

Development requirements: fzaninotto/faker between 1.4 and 2.0 (not included) and phpunit/phpunit between 7.5 and 8.0 (not included).

Why version ranges?

Why would I ever want to set a range constraint and not the exact version?

– a confused reader

Specifying the exact versions makes it difficult to keep your project’s dependencies up to date, introducing the risk of missing important patch releases. Using a range will allow composer to pull in new releases containing bugfixes and security patches.

Ok, but everybody makes mistakes. Some packages might still introduce breaking changes in minor releases. Doesn’t that mean that when I run composer install on the production machine, there is a chance my project will break due to the breaking changes introduced by some random package?

– a concerned reader

Here’s where the composer.lock file kicks in. Once you’ve run the install for the first time, the exact versions of your dependencies and their sub-dependencies are stored in the composer.lock file – meaning that all subsequent installs will download the exact same versions, avoiding the scenario above.

The only time composer tries to guess what package versions to download is when you run the install command for the first time or when the composer.lock file goes missing. Unless you are writing a library, you should always commit the composer.lock file to source control.

Running the composer update command will act as if the lock file doesn’t exist. Composer will look up any new versions that fit your versioning constraints and rewrite the composer.lock file with the new updated versions.

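In practice, you often want to update a single dependency rather than everything at once – composer lets you name the packages you want updated on the command line:

# update everything within the constraints from composer.json
composer update

# update only the listed package
composer update monolog/monolog
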
Composer autoloading

Composer generates a vendor/autoload.php file you can include in your project and start using the classes provided by the installed packages without any extra work. You can even add your own code to the autoloader by adding an autoload key to composer.json.

Here’s an example of what you can autoload using composer:

{ 
    "autoload": {
        "psr-4": {
            "Foo\\": "src/"
        },
        "classmap": [
            "database/seeds",
            "database/factories",
            "lib/MyAwesomeLib.php"
        ],
        "files": ["src/helpers.php"],
        "exclude-from-classmap": ["tests/"]
    }
}

  • PSR-0 and PSR-4 are standards used to translate class namespaces into the physical paths of the files containing them. For example, with the PSR-4 mapping above, whenever we import the Foo\Bar\Baz class, composer will autoload the file located at src/Bar/Baz.php.
  • classmap – contains a list of class files and directories to autoload.
  • files – contains a list of files to autoload. This is especially useful when you have a bunch of functions you want to use globally throughout your project.
  • you can also exclude certain files and directories from being autoloaded by adding them under exclude-from-classmap.

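One related note: whenever you change the autoload configuration, the generated autoloader has to be rebuilt – which is exactly what the dump-autoload command mentioned at the beginning of this article does:

composer dump-autoload
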
What about custom scripts?

A composer script can be a PHP callback (defined as a static method call) or any valid command-line executable command. Scripts can either be run manually, using composer yourScriptName, or hooked into different events fired during the composer execution process.

For example, after creating a new Laravel project, composer copies the .env.example file to .env and then runs the php artisan key:generate command to generate and set the application key in the file it just copied.

{
    "scripts": {
        "post-root-package-install": [
            "@php -r \"file_exists('.env') || copy('.env.example', '.env');\""
        ],
        "post-create-project-cmd": [
            "@php artisan key:generate --ansi"
        ]
    }
}

You can also reference other composer scripts. In the test script below, we call the clearCache script to delete the cache directory before running our tests using phpunit.

{ 
    "scripts": { 
        "test": [
            "@clearCache", 
            "phpunit"
        ], 
        "clearCache": "rm -rf cache/*"
    }
}

Composer in production

Here is a set of guidelines I recommend you follow when using composer in production environments:

  • Never, ever run composer update in production. Do it on a development machine so you can make sure everything is still working. Only then commit your changes, pull them on the production machine, and run composer install to download the new versions specified in the composer.lock file.
  • Divide your project’s dependencies into requirements for production and requirements for development using the require and require-dev keys. This way composer will not install packages intended for development (eg: phpunit) in a production environment.
  • Make sure you only autoload the files and directories you need. As with the requirements, you can also split the autoloading into production and development using the autoload and autoload-dev keys. There is no reason to autoload the migrations and seeds directories in production.
  • Use composer install --no-dev --optimize-autoloader to install packages and optimize the autoloader for production. The --no-dev flag instructs composer to ignore development-only packages, while the --optimize-autoloader flag converts the dynamic PSR-0/PSR-4 autoloading into a static classmap. This makes loading files faster because with a classmap the autoloader knows exactly where a file is located, while with PSR-0/PSR-4 it always has to check whether that file exists or not.

There you have it. Everything there is to know about composer – at least from the user’s point of view. If you’re interested in how to create and publish a package on packagist.org, this is a good tutorial to follow.

Eloquent tricks – replacing conditionals with “when”

It happens very often that we want to apply certain Eloquent query conditions based on what a request sends in. Sometimes it’s a “search by name” thing; other times, we just need to filter the records based on a status column.

Usually it looks like this:

public function index(Request $request) 
{
    $posts = Post::query();

    if ($request->term) {
        $posts->where('name', 'LIKE', "%{$request->term}%");
    }

    if ($request->status) {
        $posts->where('status', $request->status);
    }

    return view('posts.index', ['posts' => $posts->paginate()]);
}

I recently discovered there’s an alternative to using conditionals. The “when” method executes a callback (second parameter) when the first parameter evaluates to true.

public function index(Request $request) 
{
    $posts = Post::when($request->term, function($q, $term) {
        $q->where('name', 'LIKE', "%{$term}%");
    })->when($request->status, function($q, $status) {
        $q->where('status', $status);
    });

    return view('posts.index', ['posts' => $posts->paginate()]);
}

It’s not all that much better, if you ask me. Yes, it hides the conditionals, but it also makes the code harder to read – especially if you have more conditions to add.

I’d use this “when” approach for single conditions only.

public function index(Request $request) 
{
    $posts = Post::when($request->term, function($q, $term) {
        $q->where('name', 'LIKE', "%{$term}%");
    });

    return view('posts.index', ['posts' => $posts->paginate()]);
}