Isolating For Lack of Change Versus Complexity
Last Updated: November 12th, 2023
NOTE: This is one of many different lenses through which to look at software design. This article outlines some thoughts I've been throwing around in my noggin' as I experiment with different approaches and reasons to split software elements out from each other as a piece of software grows. These thoughts were formulated while working on small-to-above-average code bases. The applicability of the discussion will vary with the size of the problem space and the granularity level you are working at. It is not meant to apply at the extremely large scales: the Googles, the Microsofts, the Amazons.
I think we may get something wrong as engineers. How many times do we isolate sources of complexity off into their own corners? Their own functions, their own classes, their own microservices. "Oh, this is going to be changing a lot, split it off into its own thing so it can change without impacting other systems". This feels natural and intuitive. But I'm not sure it's the logical choice.
I personally find that I don't want to isolate my complexity anymore. An important skill in writing software is managing complexity. The more features that are added to a piece of software, the more complex it naturally becomes. If you don't keep this complexity in check, you can find that it becomes more expensive to add features to your software as time passes.
So I try not to isolate my complexity. I have found that I like to see my complexity front and center in my code. For the areas of my code that are a bit intertwined or need to change rapidly, I want all that code close at hand, easy and quick to change.
Optimizing for Change
It is cheaper to change things that are grouped locally together. It's much simpler to change the structure of a couple of variables within a single function than to change the structure of several domain objects returned from different API endpoints across several different services.
Don't get me wrong. I don't believe microservices are inherently a Bad Thing. I have just come to feel that it's best to leverage them to "spin off" cohesive elements of your software whose requirements have stopped evolving and whose rate of change has settled down. Once this code has shown it needs to change much less frequently, we can spin it off so it can just plug away doing its job. For us, it can be out-of-sight, out-of-mind as we continue to ship feature after feature in our software.
This in turn reduces the number of concepts that developers need to keep in their minds when working in the core of the code that changes. By reducing the total critical mass of the changing code, the code becomes easier to change, and there is less for the rest of your software elements to collide against or interfere with.
As software design is fractal, this same concept applies in the small as well. Modules, all the way down to functions, are great for isolating the cohesive elements of your code as their change rate "cools down" and the code needs to change less often.
At different times during the life of a piece of software, the same piece of code may rotate in and out of the "hot spot" of change. It might become dormant for a while and solidify itself off into a corner. You might then get new features coming in that require that element to start changing again. So you might inline the code again, allowing it to change structure and evolve to accept the new features into the software.
Give it a try! You can practice at nearly any level you are working at. Try to get a feel for how your ability to change code shifts under different software designs when you isolate complexity versus isolating cooled-down code. If you are interested in learning more about how you might better experiment in the small, Kent Beck's latest book Tidy First? is a great read. Kent Beck hit it out of the park again!
Welcome to my new blog
Last Updated: October 28th, 2023
All this time away from blogging. Much too long, in fact. So why yet another "new blog"? For the longest time, this blog used Jekyll and was hosted on GitHub Pages. So why the switch?
For the last 10-15+ years, I haven't been an active blogger. Blogging became more like an occasional family holiday. It really slowed to a crawl around the time I started having kids. I would write infrequently enough that every time I came back to blog, I would have to spend a bunch of time figuring out Ruby upgrades, Jekyll upgrades, dependencies that had been deprecated, etc. It became simply too tiring for me. So the blog died.
Recently, I've been on a kick of finally building my own tools and leaning in to YAGNI (You Ain't Gonna Need It). I'm approaching my personal life in a more agile way. I'm learning to do what I am motivated to do instead of what I think others want me to do. I'm learning to embrace a more scientific and logical mindset and learn by doing, not afraid to run a "failed" experiment.
So I started over. New project. Fresh slate. Blank `index.ts`. Deep breath. Here we go.
The only out-of-the-box piece I'm starting with is TufteCSS. Everything else is fresh from my own index.ts file. I hand-code the HTML for each post (I don't care; I'm a developer, for goodness' sake). This blog does not use a single external runtime dependency outside of TufteCSS. It's just the code I need. No more, no less. A pure static, old-school website.
Sometimes it feels like the memories are long gone. Sometimes not. But I always wish I had the solid memories that many of my colleagues over the years have had. You see, I have a horrible memory. I know I'll forget this code. I know I'll need to be able to get back up to speed with it quickly. As I'm making my own improvements, I need to know that I haven't broken anything. My only guarantee is that I will forget the details and nuances of the code by the next time I work on it.
So the site generation is fully covered with automated tests. Yes, I wrote it with Test-Driven Development. Not because it's some religious dogma I follow, but because it helps me with my own anxiety and poor memory when working with code. Well, and "hacking my brain" with a high-frequency positive feedback loop during development. I like my positive reinforcement!
There's going to be more to come here. It may not be super frequent at first. But I'm going to do what I should have done a long time ago: listen to Scott Hanselman's wisdom and control my own content and URLs. I can't go back now and change that. So here I am, building up my own brand in my own little corner of the ol' World Wide Web.
Like olden times. Each one of my keystrokes in a location I control. Keystrokes preserved and with a lifetime that is not tied to the lifetime of social media companies.
Goodbye Facebook
“Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, an end which it was already but too easy to arrive at…”
Last Updated: March 25th, 2018
It seems everywhere you turn today, there is bad news, resentment, and festering anger. We see more stories about "The Others": people with different political beliefs, people with different religious beliefs, people with different sociological beliefs than our own. We get infuriated. We don't understand The Others. "They are ruining the world," we think. "Why can't they see how wrong they are?"
At the same time, the data that are the digital echoes of our daily rituals are being spread to the ends of our online world. The digital ripples left behind in the wake of our beliefs and social constructs are being harvested and monetized as part of a new Gold Rush for 21st century corporations. Our energy and outrage are lining the pockets of the plutocrats. We are being sold. We are the product. We are digital chattel.
So I'm finally doing it. After many years and many interactions, I am deleting my Facebook account. You are probably seeing several other people doing the same in light of recent news (e.g. tens of millions of accounts having data stolen from Facebook by Cambridge Analytica, Facebook possibly contributing to genocide). Initially I had the same reaction, but instead of deleting outright, I removed much of my personal information from my profile, deleted photos, trimmed my friends list, cleared my ad preferences, opted out of the third-party ad platform, removed linked apps, changed my name, and took other similar actions. But it wasn't enough. Then something happened: I was exposed to the work of Marshall McLuhan.
The Medium Is The Message
In 1964, Marshall McLuhan published his book "Understanding Media: The Extensions of Man." The book is quite a prescient take on the consequences of any medium. McLuhan proposed that it is not the content carried by a medium that is important; rather, the personal and social consequences are shaped by the medium itself. These consequences "result from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology."
Facebook, as a medium, has a couple of characteristics that are disconcerting when it comes to how information flows through society.
The information we are exposed to is self-selected from our own circle of friends. We tend to gravitate towards those that share our own world view, those that we have much in common with. Our default behavior on Facebook is to live within an echo chamber where like-minded thoughts are amplified and reverberate throughout our social network.
Facebook also promotes a culture of communication that is devoid of context and subtlety. Every status update and notification competes for our limited attention span. There is a quest to maximize user engagement: the drive for more likes, shares, and comments. Sensational news headlines, pithy quotes, the latest “what TV show character are you?” quiz. We live in a world of instant gratification and the constant chase for the next hit of dopamine.
On the other hand, Facebook has the added benefit of drastically reducing the time and effort it takes to stay in contact with loved ones. We can more easily schedule time to get together with others who are geographically separated from us. We can stay in contact with friends whom it is no longer feasible to see in person. We get to see and hear about their day. We get to hear about their joy, and their pain. It has the possibility of bringing us closer together. But...
A Cost I'm Not Willing To Pay
For Facebook to exist, there is a cost. It takes money to run a business, to run servers, to maintain the code that the medium is built upon. Historically, we have directly traded goods or paid money in exchange for services. But Facebook operates on a model where the users no longer front the cost. The funding has shifted. We the users subsidize the cost by becoming the product that is sold. All our interactions, all our data: it's liquid gold for those wanting to sell products or services.
TANSTAAFL
There Ain't No Such Thing As A Free Lunch.
What sells most? Fear sells. At its core, Facebook is a medium defined and driven by Fear. Fear is what drives Facebook's continued success. We can spread love until the cows come home on Facebook, but it won't make a difference in the long run. It's not the content, it's the medium. We have to get out into the world and get our hands dirty.
I refuse to continue to be a participant on a medium that actively betrays the very principles that I believe in. I refuse to be the product. I refuse to be bullied into closing myself off from others. My life will not be driven by fear.
Goodbye Facebook
I choose compassion. I choose empathy. I choose listening and understanding. I choose deeper connections. I choose books and blog posts. I choose deeper thought and rational debate. I choose to pay directly for the services that augment my life. I choose not to live surrounded by a daily digest of Pithy Slogans and Fear, Uncertainty, and Doubt.
Goodbye, Facebook. I'm not sure I'll even miss you.
My Desert Island Tech Books and Movies
Last Updated: September 4th, 2017
The other day on Twitter, some friends and I were discussing the narrative books we like that are about technology (not technical books themselves). These are books about notable figures in the technology field, about the history of a specific technology, and so on. Here is my list of books that I consider must-reads for those techies out there that like this type of stuff (in no particular order).
If one of your favorite books is not on this list, please leave a comment as I would love to have more books to add to my list!
Jason's Must-Read
iWoz: Computer Geek to Cult Icon: How I Invented the Personal Computer, Co-Founded Apple, and Had Fun Doing It. This is perhaps one of my favorite books about a notable figure in the technology field. Though I'm not personally a hardware guy, it's hard to find somebody to look up to more than Steve Wozniak. He is most definitely a Mozart of hardware design. I usually find the last part of the book (when it's less technical) less interesting, but don't let that deter you from a great book!
Just for Fun: The Story of an Accidental Revolutionary. This is a book all about Linus Torvalds and the creation of Linux. I think it's best when it's being very technical. Then again, I'm a geek 😀. This book gives interesting insight into the motivations and life of Linus.
The Second Coming of Steve Jobs. It's amazing the difference a single person can make in the direction and drive of a large company. I personally think that difference is one of the reasons Apple is who they are. There is no doubt that Steve Jobs was a major influence, and he probably single-handedly saved the company when he returned.
Showstopper! The Breakneck Race to Create Windows NT and the Next Generation at Microsoft. Ah, the creation of Windows NT. Quite the milestone for Microsoft. This book gives good insights into the culture of the Windows division at the time and the many different conflicts that were happening that impacted the development of Windows NT. If you're a "Microsoft Guy" (as in, you know of executives like Soma, Bob Muglia, and the like), you will find yourself recognizing many names within this book. At the heart of it is Dave Cutler. There are many times when you'll either laugh at or be incredibly scared by some of Dave Cutler's behavior as he pushed his team to ship Windows NT.
Microserfs. Let's say one thing: this book is not based on a true story. However, it might as well be. It probably shouldn't be on this list as it is a fictional book. Oh well! From what I've heard, it captures much of the feeling and culture of what it was like to work at Microsoft in the late 1980s and early 1990s. Even before working at Microsoft, I enjoyed this book very much. However, after working at Microsoft, I really like this book 😀. It's fun to get some of the inside jokes after knowing Microsoft from the inside (like the main character working in Building 7, which doesn't exist and is a part of Microsoft lore). If you're familiar with the areas of Redmond and Bellevue that surround Microsoft's main campus, you'll smile several times during this book. The book isn't entirely based at Microsoft as many of the characters move on to different things. Interestingly enough, I have a version of this book as a “Book on Tape” that is narrated by Matthew Perry.
The Soul of a New Machine. This book tells the story of the one-year development of a 32-bit minicomputer built by Data General engineers in the 1970s. It captures a crazy time and is a fun read; this book was a page-turner for me. I'm glad I don't work on a team like this in real life though. It was definitely quite the time to be involved in the computer industry.
Coders at Work. A masterful set of interviews done with well-known subjects from all over the computer field. In this book, you will find interviews ranging from Guy Steele (father of Scheme/Lisp), to Donald Knuth, to Simon Peyton Jones (of Haskell fame), and many others. This book is very suited to sudden starts and stops so I found myself reading it before bed and when I had quick breaks during the day. Though I skipped around a bunch, I found myself with a compulsive need to read every single interview in the book (and it's hard to find an interview in the book less than 40 pages long). A must read, especially for programmers.
Fire in the Valley: The Making of The Personal Computer. This is most definitely not a narrative. It's actually a borderline encyclopedia on the making of the personal computer. You don't sit down and read this book through in one sitting (it's massive, to boot). But it's still fun to have this one on the shelf to go through from time to time to get a fairly complete breakdown of all the events (and their timelines) that led to the development of the personal computer (and how it evolved).
Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software. If you like The Soul Of A New Machine, you should like this book as well (and vice versa). This book is largely considered a “modern retelling” of Soul Of A New Machine by many people. Having read both books, I can't exactly disagree with that sentiment. It provides much of the same context and insight into the computer field for a more modern age that Soul Of A New Machine did for the 1970s.
Smalltalk-80: The Language and its Implementation. Alan Kay is one of my tech heroes as I have mentioned in the past. I love many of the concepts that come from Smalltalk-80. As a developer who writes object-oriented code from time to time, it's a fascinating insight into the language whose inventor coined the term "object-oriented programming" in the first place.
The Early History of Smalltalk (free PDF available). Yes, this is not a book. This is actually a paper written by Alan Kay. That means you can't get a more accurate account of the events leading up to the birth of Smalltalk, considering it was written by the visionary behind the language. In this paper, Alan goes all the way back to his college days and talks about much of the cutting-edge research being done in the computer field that provided the inspiration that ultimately led to the development of Smalltalk. I can't express enough how much I believe this to be a must-read for any language fan/geek. Do it now; there is no excuse not to download the PDF while you are reading this!
Masterminds of Programming: Conversations with the Creators of Major Programming Languages. If you don't know already, I'm a language geek. I'm fascinated by the design and implementation of programming languages. With that in mind, there should be no surprise as to why I like this book so much. This book, as the title says, contains a bunch of interviews with many different implementers of major programming languages. If you're a language geek like myself, this book is a Must Read.
The Annotated Turing: A Guided Tour Through Alan Turing's Historic Paper on Computability and the Turing Machine. Heard of Turing Machines? Perhaps you're not a mathematician? If so, this is a fantastic read. In this book, Charles Petzold provides the background knowledge and context that is required to understand Alan Turing's groundbreaking paper. Charles Petzold walks you through the paper practically sentence-for-sentence, breaking down what it means and the ramifications for each section. This book can border on “brain explosion” from time to time, but is well worth the effort to read it.
Code: The Hidden Language of Computer Hardware and Software. I'm fascinated by how things work, and this book walks you all the way from logic gates (AND, OR, XOR, etc.) to how memory works down at the gate level. It then builds on this to show the guts of how CPUs work. You'll find yourself starting with basics like light bulbs and switches, and ending with the basics of computers. Another “gold-star” to Charles Petzold for this book. Get it!
In Code: A Mathematical Journey. I'm not a mathematician. In fact, compared to other software developers I know, my math skills and knowledge are pretty poor. Yet I continue to find myself intrigued by topics like cryptography. If that sounds like you, this book is a must-have. It chronicles the journey of a young woman through the land of mathematics and into the realm of cryptography. It is amazing what this young woman accomplishes at her age. The book tells the narrative in such a way that you come to understand how the cryptography works as you follow along her learning path. And we're not talking about “Cryptography 101” either.
Revolution In The Valley. This one is about how the Mac was made, written by Andy Hertzfeld, one of the main engineers on the original Mac team. It is filled with fun stories behind the creation of the Mac and the personalities on the team.
The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography. I love classic cryptography, its history, and how it applies to our modern computers today. This is a fascinating and thoroughly enjoyable read through the history of cryptography.
Jason's Must-Watch
The following ones are technically "must-watch" as they are videos, not books. But I find myself popping them in when I'm in one of those moods (and want some video playing in the background while I code).
Pirates of Silicon Valley. This is a made-for-TV movie based on the book Fire In The Valley from above. It mostly focuses on the early days of Apple and Microsoft and the interaction between the two. Not 100% accurate, but still fun to watch.
Triumph of the Nerds. An early (pre-Windows 95 launch) three-part documentary done by Robert X. Cringely. Has some interesting interviews with Steve Ballmer, Bill Gates, Larry Ellison, Steve Jobs, and more. Discusses some of the conflicts between IBM and Microsoft towards the end of that partnership as well as diving back into some of the early history of the Altair and the Apple II.
Revolution OS. A documentary about the creation and initial rise of Linux. Has interviews with Bruce Perens, Linus Torvalds, Richard M. Stallman, Eric S. Raymond, and the like. Granted, it is a fairly slanted and one-sided presentation (the narration over Bill Gates' letter to hobbyists is especially laugh-inducing). But don't let that get in the way of what is a good watch.
Hackers — Wizards of the Electronic Age. It contains a bunch of legendary hackers, what else is there to say?
I still need to get to these...
Here are some more that are on my own must-read-this-in-the-future list; I just haven't gotten around to buying and reading them yet.
Hackers: Heroes of the Computer Revolution. I think the title says it all. Sounds like a fun read!
A Few Good Men from UNIVAC (History of Computing). Based on how much I enjoyed The Soul Of A New Machine and how much I like computer history, I definitely want to give this one a read.
Fumbling the Future: How Xerox Invented, then Ignored, the First Personal Computer. I've always had great respect and admiration for those that worked at Xerox PARC in its heyday. I think every geek should learn about the sheer number of technologies underpinning our modern computing world that were founded and built at Xerox PARC. An amazing story.
Analog Days: The Invention and Impact of the Moog Synthesizer. It could be argued that the original Moog synthesizers are what really put synthesis on the map. Though I'm not much of a hardware geek myself, I have always been intrigued by the hardware side of things. This is definitely on my wish list.
Hard Drive: Bill Gates and the Making of the Microsoft Empire. It seems ridiculous that I was a Microsoft employee for nearly 10 years and don't have a single book on Bill Gates (yet my bookshelf is stocked with books on Linus, Steve Jobs, Steve Wozniak, etc.). This one looks like a fairly good read too.
It's time for makefiles to make a comeback
Last Updated: September 4th, 2017
Make and makefiles are lost in the past for many developers, their advantages buried in the stream of tools that are constantly reinventing the wheel of building software. It's time we get off that crazy carousel.
If you ask many developers the first thing that comes to mind with Make and makefiles, you will likely get several answers: C/C++, native projects, huge, archaic, or perhaps even old. Some younger developers that have grown up in the JavaScript ecosystem may not have even heard of Make and don't realize the advantages they could harvest by using an existing, well-proven, and stable tool. Do we really need to be learning a new task runner or build system every 18 months as JavaScript frameworks come and go?
Those who do not understand Unix are condemned to reinvent it, poorly.
What does this have to do with modern JavaScript development?
It's not uncommon for larger JavaScript-based projects today to use a language that compiles down to JavaScript, either for powerful language features or for stronger type checking. We use Browserify to package all our JavaScript modules together into a bundle to allow front-end developers to have a more composition-based developer experience through modules (like back-end developers have in Node.js). We use LESS or SASS and compile out to CSS the browser can understand. We minify our JavaScript to make the files smaller, resulting in faster downloads and page load times.
What do all these things have in common with each other? At their core, they are each about taking a set of input files and transforming them into a set of output files. And this is exactly what Make is so incredibly good at.
What do makefiles do? Makefiles are simply a declarative way to transform one file (or series of files) into another. That's it, it's that simple. It's not specific to C, C++, or even to programming languages. You could just as easily use a makefile to transform markdown documentation into shipped HTML files, or to pack important files into a zip/tar archive, or do a myriad of other transformations.
Thanks to its declarative nature and implementation, Make is able to run only the bare minimum of transforms needed to reach the final destination format. If a source file hasn't changed since the last transformation, it doesn't have to be processed again. In larger projects, this is a huge win for speed and a boon to the developer experience.
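As a small sketch of that idea (the docs/ and site/ paths and the use of pandoc here are just placeholders; any Markdown-to-HTML converter would do), a makefile that publishes a folder of Markdown files as HTML could look something like this:
# Hypothetical example: publish docs/*.md as site/*.html
sources := $(wildcard docs/*.md)
pages := $(sources:docs/%.md=site/%.html)

.PHONY: all
all: $(pages)

# Pattern rule: each HTML page depends only on its own Markdown source,
# so Make regenerates a page only when that source is newer than the output.
site/%.html: docs/%.md
	@mkdir -p $(dir $@)
	pandoc $< -o $@
Run make once and every page is generated; touch a single Markdown file and only that page gets rebuilt on the next run.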
Make first appeared over 40 years ago and a lot of software has been built with it since that time. It's a battle-tested and stable piece of software that excels at exactly what it was meant to do: transforming files from a source format to a target format, with a very simple and easy-to-understand mechanism for declaring dependencies. It's all text-based, doesn't try to solve everything itself, and has a great integration experience through calling out to shell scripts. In other words, it is very Unix-like. This is hardly surprising given its birth within the Unix environment.
Some developers have had experiences with very complicated and convoluted makefiles in larger projects. But it doesn't need to be that way. In fact, it can be quite simple to build an NPM package that is implemented in TypeScript (from building the source code to packaging the NPM package):
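# Prefer locally installed node binaries (like tsc) over any global ones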
PATH := node_modules/.bin:$(PATH)
source_files := $(wildcard lib/*.ts)
build_files := $(source_files:lib/%.ts=dist/%.js)
PACKAGE := build/my-package-1.0.0.tgz
.PHONY: all
all: $(PACKAGE)
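# Compile the TypeScript sources (installing dependencies first) whenever the sources or package.json change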
$(build_files): $(source_files) package.json
npm i
tsc
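# Pack the npm tarball into build/ once the compiled output and .npmignore are up to date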
$(PACKAGE): $(build_files) .npmignore
@mkdir -p $(dir $@)
@cd $(dir $@) && npm pack $(CURDIR)
Yes, the above example was for building a very small project. But just because a project becomes larger doesn't mean the user experience of Make diminishes. Let's look at an example.
At work, I'm currently working on a larger project that is based on AWS Kinesis and Lambda functions: a serverless, stream-processing architecture. The “service” is based out of one git repository for convenience. But we want easily accessible shared libraries between our different Lambda handlers that can also be independently deployed projects. This makes deploying fixes or new functionality into production much quicker and much lower-overhead than deploying the entire service as one large monolith.
Our project structure is inspired by a post by StrongLoop on creating a modular Node.js project structure. Even though we are using TypeScript, this structure still definitely applies to us. So we started with the linking and npm scripts approach outlined in the blog post.
Our project structure ended up looking like this in the abstract:
- src
  |- lib
    |- foo
      |- dist
        |- *.js (compiled JavaScript files)
      |- lib
        |- *.ts
      |- package.json
      |- makefile
    |- bar
      |- package.json
      |- makefile
    |- baz
      |- package.json
      |- makefile
  |- handlers
    |- alpha (depends on foo and bar)
      |- package.json
      |- makefile
    |- omega (depends on bar and baz)
      |- package.json
      |- makefile
- package.json
- makefile
But as the number of modules grew and different orderings of dependencies started cropping up (as they do in larger enterprise software), this approach quickly became unwieldy and painful. We found ourselves with a whole mix of preinstall, postinstall, and prestart scripts. It was very difficult to understand what was happening at build time to bootstrap the service, and integrating new sub-projects was a pain. It was also a “build everything or nothing” type of solution unless we put in a non-trivial amount of extra work.
Before grabbing the latest build hotness like Gulp off the shelf, we decided to take a look at what Make could do for this since it's an established tool and this is right up its alley. That decision is what kicked off my growing appreciation of Make (and inspired this blog post).
As this is a larger and growing project, we were naturally concerned about whether our build solution would scale. I happen to think that, using Make, it most definitely does. And beyond the quirks of Make that you get used to after using it for a while, I think even a junior developer could integrate their own libraries into this Make process.
Here's what a potential makefile for the above project would look like:
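# Sentinel file whose timestamp records the last time freshly built library packages were installed into the handlers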
deps_install := $(CURDIR)/build/last-install-time
pkg_lib_foo := $(CURDIR)/build/foo-1.0.0.tgz
pkg_lib_bar := $(CURDIR)/build/bar-1.0.0.tgz
pkg_lib_baz := $(CURDIR)/build/baz-1.0.0.tgz
pkg_alpha := $(CURDIR)/build/alpha-1.0.0.tgz
pkg_omega := $(CURDIR)/build/omega-1.0.0.tgz
.PHONY: all handlers libs
all: libs handlers
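# Each library and handler lives in its own sub-project and is built with a recursive make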
handlers: $(deps_install)
$(MAKE) -C src/handlers/alpha
$(MAKE) -C src/handlers/omega
libs:
$(MAKE) -C src/lib/foo
$(MAKE) -C src/lib/bar
$(MAKE) -C src/lib/baz
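# $? expands to only the prerequisites newer than the sentinel file, so each freshly rebuilt
# library tarball is reinstalled only into the handlers that depend on it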
$(deps_install): $(pkg_lib_foo) $(pkg_lib_bar) $(pkg_lib_baz)
@if [ "$(pkg_lib_foo)" = "$(findstring $(pkg_lib_foo),$?)" ]; then \
cd $(CURDIR)/src/handlers/alpha && npm i $(pkg_lib_foo); \
fi
@if [ "$(pkg_lib_bar)" = "$(findstring $(pkg_lib_bar),$?)" ]; then \
cd $(CURDIR)/src/handlers/alpha && npm i $(pkg_lib_bar); \
cd $(CURDIR)/src/handlers/omega && npm i $(pkg_lib_bar); \
fi
@if [ "$(pkg_lib_baz)" = "$(findstring $(pkg_lib_baz),$?)" ]; then \
cd $(CURDIR)/src/handlers/omega && npm i $(pkg_lib_baz); \
fi
@touch $(deps_install)
As you can see, it doesn't need to be incredibly complicated. Since this is a back-end service, we don't have Browserify, LESS, or minification steps. But it should help paint the picture that even with those additions, things would stay pretty straightforward.
If you make a change to the baz library, only baz is rebuilt and only baz is re-installed into the omega handler sub-project. Throw a watcher on this process and your build becomes richer and the local development experience improves.
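One crude way to do that, as a sketch (a simple polling loop standing in for a real file-watching tool), is a phony watch target:
.PHONY: watch
# Re-run the default target every couple of seconds; a real setup would
# likely swap this loop for a proper file watcher.
watch:
	while true; do $(MAKE) all; sleep 2; done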
The Upsides
One thing I really like about this is that things only run if they need to. You don't even need an incremental compiler to make it possible. If source files haven't been updated, there is no need to regenerate the target files. Make knows this by comparing the last-modified times of the source files with those of the target files. You can see the easy integration into existing tools like tsc and npm. I didn't need to wait until a wrapper was created (or to create my own wrapper) in a code-based build tool.
Another less obvious benefit when comparing Make with code-based build tools like Grunt or Gulp is being declarative vs. imperative. You get to focus on the end result (declaring what needs to be done) instead of focusing on how the actual work is done.
Make is also a standalone tool, so there is no need to bring in a bunch of other code dependencies like code-based task runners do. This not only makes the user experience better, but it also means there are fewer ways the software can break (e.g. a new version of a dependency that breaks functionality in the core tool).
The Downsides
Yes, it's another tool and language that developers need to learn. But that's what we get paid to do as developers, right? We always need to be learning new tools and techniques (or re-learning old tools and techniques in this case :P). We accept this forever-learning experience as the latest-and-greatest programming languages or software libraries roll out every month.
But remember, in this case, we are learning a general tool that we will be able to leverage in many different ways for a long time. Alton Brown need not worry; this tool is very much a multi-tasker. Make has been around for over 40 years and it's not going anywhere anytime soon. Can we say the same about Grunt, Gulp, or the next task runner du jour?
A valid concern with using Make historically has been the lack of decent support on Windows. By leveraging Make, you were potentially making life more difficult for all your Windows users. That was a non-starter for many projects. But with the recent addition of Linux support in Windows (the Windows Subsystem for Linux) and the ongoing change of heart under Satya Nadella's leadership at Microsoft, this concern is hopefully a relic of the past. With all the great stuff to be learned from and used in Linux, I feel this trend is a major boon to software developers.
Now is a great time to learn Make
So today is a great time to learn and start leveraging Make and makefiles. They are still very much relevant to our work today as developers. There's no need for an ever-revolving door of task runners du jour. Don't succumb to the build tool treadmill and burn yourself out. Learn a powerful tool you will be able to leverage for a long time and isn't going anywhere any time soon.
Yes, it's time for Makefiles to make a comeback! Let's do this!