It's time for makefiles to make a comeback


Last Updated: September 4th, 2017

Make and makefiles are lost in the past for many developers, their advantages drowned out by the stream of tools constantly reinventing the wheel of building software. It's time we got off that crazy carousel.

If you ask developers what first comes to mind with Make and makefiles, you will likely get several answers: C/C++, native projects, huge, archaic, or perhaps just old. Some younger developers who have grown up in the JavaScript ecosystem may not have even heard of Make, and don't realize the advantages they could reap by using an existing, well-proven, and stable tool. Do we really need to learn a new task runner or build system every 18 months as JavaScript frameworks come and go?

Those who do not understand Unix are condemned to reinvent it, poorly.

Henry Spencer, Usenet signature, 1987

What does this have to do with modern JavaScript development?

It's not uncommon for larger JavaScript-based projects today to use a language that compiles down to JavaScript, either for more powerful language features or for stronger type checking. We use Browserify to package all our JavaScript modules into a bundle, giving front-end developers the same composition-based, module-oriented experience that back-end developers have in Node.js. We use LESS or SASS and compile out to CSS the browser can understand. We minify our JavaScript to make the files smaller, resulting in faster downloads and page load times.

What do all these things have in common with each other? At their core, they are each about taking a set of input files and transforming them into a set of output files. And this is exactly what Make is so incredibly good at.

What do makefiles do? A makefile is simply a declarative way to transform one file (or set of files) into another. That's it; it's that simple. It's not specific to C, C++, or even to programming languages. You could just as easily use a makefile to transform markdown documentation into shipped HTML files, to pack important files into a zip/tar archive, or to perform a myriad of other transformations.
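For instance, here's a minimal sketch of that markdown-to-HTML idea, assuming pandoc is installed and the sources live under a docs/ directory (both the tool and the paths are just illustrative choices):


# turn every docs/*.md into a matching site/*.html
sources := $(wildcard docs/*.md)
pages   := $(sources:docs/%.md=site/%.html)

.PHONY: all
all: $(pages)

# pattern rule: how to produce one .html file from one .md file
site/%.html: docs/%.md
    @mkdir -p $(dir $@)
    pandoc $< -o $@
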

Thanks to its declarative nature and implementation, Make runs only the bare minimum of transforms needed to reach the final target format. If a source file hasn't changed since the last transformation, it doesn't have to be processed again. In larger projects, this is a huge win for speed and a boon to the developer experience.
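To make that concrete with the documentation sketch above (the file names are hypothetical and the output abbreviated):


$ make                    # first run: every page is generated
pandoc docs/intro.md -o site/intro.html
pandoc docs/usage.md -o site/usage.html
$ make                    # nothing changed, so nothing runs
make: Nothing to be done for 'all'.
$ touch docs/usage.md
$ make                    # only the touched file is rebuilt
pandoc docs/usage.md -o site/usage.html
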

Make first appeared over 40 years ago, and a lot of software has been built with it since. It's a battle-tested and stable piece of software that excels at exactly what it was meant to do: transforming files from a source format to a target format, with a very simple and easy-to-understand mechanism for declaring dependencies. It's all text-based, doesn't try to solve everything itself, and has a great integration story through calling out to the shell. In other words, it is very Unix-like, which is hardly surprising given its birth within the Unix environment.

Some developers have had experiences with very complicated and convoluted makefiles in larger projects. But it doesn't need to be that way. In fact, it can be quite simple to build an NPM package implemented in TypeScript, from compiling the source code to packaging it up:


# put locally installed tools (like tsc) on the PATH for recipe commands
PATH := node_modules/.bin:$(PATH)

source_files    := $(wildcard lib/*.ts)
build_files     := $(source_files:lib/%.ts=dist/%.js)
PACKAGE         := build/my-package-1.0.0.tgz

.PHONY: all

all: $(PACKAGE)

# compile only when a source file or package.json is newer than the output
# (recipe lines in a real makefile must be indented with a tab)
$(build_files): $(source_files) package.json
    npm i
    tsc

# pack the compiled project into a tarball under build/
$(PACKAGE): $(build_files) .npmignore
    @mkdir -p $(dir $@)
    @cd $(dir $@) && npm pack $(CURDIR)

Yes, the above example is for a very small project. But just because a project grows larger doesn't mean the Make experience has to diminish. Let's look at an example.

At work, I'm currently on a larger project built on AWS Kinesis and Lambda functions: a serverless, stream-processing architecture. The "service" lives in a single git repository for convenience, but we want shared libraries that are easily accessible from our different Lambda handlers while remaining independently deployable projects. This makes deploying fixes or new functionality to production much quicker and with far less overhead than deploying the entire service as one large monolith.

Our project structure is inspired by a post by StrongLoop on creating a modular Node.js project structure. Even though we are using TypeScript, this structure still definitely applies to us. So we started with the linking and npm scripts approach outlined in the blog post.

Our project structure ended up looking like this in the abstract:

- src
    |- lib
        |- foo
            |- dist
                |- *.js (compiled JavaScript files)
            |- lib
                |- *.ts
            |- package.json
            |- makefile
        |- bar
            |- package.json
            |- makefile
        |- baz
            |- package.json
            |- makefile
    |- handlers
        |- alpha (depends on foo and bar)
            |- package.json
            |- makefile
        |- omega (depends on bar and baz)
            |- package.json
            |- makefile
- package.json
- makefile

But as the number of modules grew and different orderings of dependencies started cropping up (as they do in larger enterprise software), this approach quickly became unwieldy and painful. We found ourselves with a whole mix of preinstall, postinstall, and prestart scripts. It was very difficult to understand what was happening at build time to bootstrap the service, and integrating new sub-projects was a pain. It was also a "build everything or nothing" solution unless we put in a non-trivial amount of extra work.

Before grabbing the latest build hotness like Gulp off the shelf, we decided to take a look at what Make could do for this since it's an established tool and this is right up its alley. That decision is what kicked off my growing appreciation of Make (and inspired this blog post).

Being a larger and growing project, we were naturally concerned about whether our build solution would scale. I happen to think that with Make, it most definitely does. And aside from the quirks you get used to after working with Make for a while, I think even a junior developer could integrate their own libraries into this Make process.

Here's what a potential makefile for the above project would look like:


# sentinel file recording when handler dependencies were last installed
deps_install    := $(CURDIR)/build/last-install-time
pkg_lib_foo     := $(CURDIR)/build/foo-1.0.0.tgz
pkg_lib_bar     := $(CURDIR)/build/bar-1.0.0.tgz
pkg_lib_baz     := $(CURDIR)/build/baz-1.0.0.tgz
pkg_alpha       := $(CURDIR)/build/alpha-1.0.0.tgz
pkg_omega       := $(CURDIR)/build/omega-1.0.0.tgz

.PHONY: all handlers libs

all: libs handlers

# each sub-project has its own makefile; recurse into it with make -C
handlers: $(deps_install)
    $(MAKE) -C src/handlers/alpha
    $(MAKE) -C src/handlers/omega

libs:
    $(MAKE) -C src/lib/foo
    $(MAKE) -C src/lib/bar
    $(MAKE) -C src/lib/baz

# $? expands to only the prerequisites newer than the target, so each library
# tarball is reinstalled only into the handlers that depend on it, and only
# when that tarball has actually been rebuilt
$(deps_install): $(pkg_lib_foo) $(pkg_lib_bar) $(pkg_lib_baz)
    @if [ "$(pkg_lib_foo)" = "$(findstring $(pkg_lib_foo),$?)" ]; then \
        cd $(CURDIR)/src/handlers/alpha && npm i $(pkg_lib_foo); \
    fi
    @if [ "$(pkg_lib_bar)" = "$(findstring $(pkg_lib_bar),$?)" ]; then \
        cd $(CURDIR)/src/handlers/alpha && npm i $(pkg_lib_bar); \
        cd $(CURDIR)/src/handlers/omega && npm i $(pkg_lib_bar); \
    fi
    @if [ "$(pkg_lib_baz)" = "$(findstring $(pkg_lib_baz),$?)" ]; then \
        cd $(CURDIR)/src/handlers/omega && npm i $(pkg_lib_baz); \
    fi
    @touch $(deps_install)

As you can see, it doesn't need to be incredibly complicated. Since this is a back-end service, we have no need for Browserify, LESS, or minification. But even with those additions, the picture stays pretty straightforward.
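To sketch how those additions might look (the file layout is purely hypothetical, with browserify, lessc, and uglifyjs assumed to be on the PATH), they are just more rules:


dist/bundle.js: $(wildcard src/*.js)
    @mkdir -p $(dir $@)
    browserify src/index.js -o $@

dist/bundle.min.js: dist/bundle.js
    uglifyjs $< -o $@

dist/%.css: styles/%.less
    @mkdir -p $(dir $@)
    lessc $< $@
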

If you make a change to the baz library, only baz is rebuilt and only baz is re-installed into the omega handler sub-project. Throw a watcher on this process and your build becomes richer still, improving the local development experience.
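Make has no built-in watcher, but even a naive sketch like the shell loop below works surprisingly well, precisely because re-running make against up-to-date targets costs almost nothing:


# poor man's watcher: re-run make every couple of seconds;
# up-to-date targets mean most iterations do zero work
while true; do make; sleep 2; done
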

The Upsides

One thing I really like about this is that things run only when they need to; you don't even need an incremental compiler to make that possible. If the source files haven't been updated, there's no need to regenerate the target files, and Make knows this by comparing the last-modified times of the sources against those of the targets. You can also see how easily Make integrates with existing tools like tsc and npm: I didn't need to wait for a wrapper to be written (or write my own) for some code-based build tool.

Another, less obvious benefit of Make over code-based build tools like Grunt or Gulp is that it is declarative rather than imperative. You get to focus on the end result (declaring what needs to be done) instead of spelling out how the work gets done.
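A tiny illustrative contrast, using Make for both styles (the targets here are made up): the imperative version spells out steps that always run, while the declarative rule states a relationship and lets Make decide whether any work is needed:


# imperative, task-runner style: runs every step, every time
build:
    tsc
    npm pack .

# declarative: each dist/%.js is derived from its lib/%.ts;
# Make rebuilds only the outputs whose inputs have changed
dist/%.js: lib/%.ts
    tsc --outDir dist $<
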

Make is also a standalone tool, so there's no need to pull in a pile of extra code dependencies the way code-based task runners do. This not only makes for a better user experience, it also means there are fewer ways the software can break (e.g. a new version of a dependency breaking functionality in the core tool).

The Downsides

Yes, it's another tool and language to learn. But that's what we get paid to do as developers, right? We always need to be learning new tools and techniques (or, in this case, re-learning old ones :P). We accept this forever-learning experience as the latest-and-greatest programming languages and software libraries roll out every month.

But remember, in this case we are learning a general tool that we will be able to leverage in many different ways for a long time. Alton Brown need not worry: this tool is very much a multi-tasker. Make has been around for over 40 years, and it's not going anywhere anytime soon. Can we say the same about Grunt, Gulp, or the next task runner du jour?

A valid concern with Make has historically been the lack of decent support on Windows; by leveraging Make, you were potentially making life more difficult for all your Windows users, which was a non-starter for many projects. But with the recent arrival of the Windows Subsystem for Linux and the ongoing change of heart at Microsoft under Satya Nadella's leadership, this concern is hopefully a relic of the past. With all the great stuff to be learned from and used in Linux, I feel this trend is a major boon to software developers.

Now is a great time to learn Make

So today is a great time to learn and start leveraging Make and makefiles. They are still very much relevant to our work as developers. There's no need for an ever-revolving door of task runners du jour; don't succumb to the build-tool treadmill and burn yourself out. Learn a powerful tool that you'll be able to leverage for a long time and that isn't going anywhere any time soon.

Yes, it's time for Makefiles to make a comeback! Let's do this!