Streamlining local development with Dnsmasq

Testing on localhost hides entire categories of bugs—cookie scope issues, CORS policies, authentication flows that behave differently on real domains. These problems surface after deployment, when fixing them costs hours instead of minutes. Dnsmasq eliminates this gap by making local development behave like production: custom domains resolve to your local machine whilst domain-based security policies stay intact.

I remember debugging a session management issue that appeared only when the application ran on a proper domain, not on localhost. The cookies were set with domain restrictions, and localhost didn't trigger the same code paths as the production domain. I'd caught the bug three hours before launch, only because I happened to test against a staging server. The fix took ten minutes. Finding it took three hours of confusion. Had I been testing with proper local domain names from the start, the issue would have been obvious during initial development.

Testing web applications on localhost works until it doesn't. Cookie scope issues, CORS policies, authentication flows that behave differently based on domain structure, service workers that refuse to register on localhost—these problems hide until you deploy, when discovering them costs significantly more than during local development. The gap between your local environment and production creates a testing blind spot where entire categories of bugs remain invisible until customers encounter them.

Dnsmasq eliminates this gap. It's a lightweight DNS forwarder and DHCP server that runs on your local machine, intercepting DNS queries and resolving custom domains to localhost. Instead of accessing your application via localhost:3000, you access it via api.myproject.local or app.myproject.local—domains that behave like production domains but resolve to your development machine. The browser treats them as real domains. Cookies work correctly. CORS policies trigger as they will in production. Authentication flows behave identically to how they'll behave when deployed. The testing environment matches reality.

The mechanics are straightforward. When you type a domain name into your browser, your system queries DNS servers to translate that domain into an IP address. Normally, this query goes to your ISP's DNS servers or public resolvers like Google's 8.8.8.8. Dnsmasq intercepts these queries before they leave your machine. You configure it to recognise specific domain patterns—say, anything ending in .local—and respond with 127.0.0.1, the localhost address. The browser receives this response and connects to your local development server, but as far as the browser knows, it's connecting to a legitimate domain. Security policies apply. Cookie restrictions work. Everything behaves as it will in production.
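
Once the setup described below is in place, you can watch this interception happen. A public resolver has never heard of the name, whilst Dnsmasq answers it locally; the domain is the example used throughout this article:

# Public resolver: no answer, the name doesn't exist on the internet
dig @8.8.8.8 myproject.local +short
# Dnsmasq: answers 127.0.0.1 without the query ever leaving your machine
dig @127.0.0.1 myproject.local +short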

The difference becomes particularly obvious when working with microservices or applications split across multiple subdomains. Modern web applications rarely consist of a single service on a single domain. You might have an API at api.example.com, a web interface at app.example.com, and authentication at auth.example.com. Testing this architecture on localhost typically means running services on different ports: localhost:3000 for the API, localhost:3001 for the frontend, localhost:3002 for authentication. But port-based separation doesn't replicate how these services communicate in production, where they're distinguished by subdomains, not ports. Cross-origin policies behave differently. Cookie sharing fails. The local environment diverges from production in subtle but critical ways.

With Dnsmasq, you replicate the production domain structure locally: api.myproject.local, app.myproject.local, auth.myproject.local. Each subdomain resolves to localhost, but your reverse proxy or development servers route requests to the appropriate service based on the Host header. The architecture mirrors production. Cross-origin behaviour matches what you'll encounter after deployment. Issues that only manifest under production-like conditions become visible during development, when fixing them is trivial rather than urgent.

Beyond replicating domain structure, local DNS resolution provides isolation and security. When testing with actual domain names that you don't own—domains that exist on the public internet—there's always the risk of accidentally hitting the real service instead of your local version. Testing against production domains requires carefully managing host files or remembering to disconnect from the network. Local domain resolution using custom top-level domains like .local or .test eliminates this risk entirely. These domains don't exist on the public internet. Queries for them never leave your machine. You're guaranteed to be testing against your local environment, not accidentally interfering with or depending on external services.

Team consistency matters too. When every developer on a team uses the same local domain names, configuration becomes portable. Documentation references domains that work for everyone. Shared scripts and tooling don't need per-developer customisation. The cognitive overhead of translating between "localhost on your machine" and "localhost on my machine" disappears. Everyone's local environment aligns, making collaboration smoother and onboarding faster.

Installing and configuring Dnsmasq

The setup process involves two components: Dnsmasq for DNS resolution and Caddy for serving your application. Both install via Homebrew, assuming you're on macOS. For Linux systems, use your distribution's package manager—the Dnsmasq and Caddy configuration files are identical, though the step that points your system's resolver at Dnsmasq (the /etc/resolver approach below) is macOS-specific.

Install Dnsmasq:

brew install dnsmasq

Homebrew installs Dnsmasq but doesn't create a default configuration file. The configuration lives at /opt/homebrew/etc/dnsmasq.conf on Apple Silicon Macs or /usr/local/etc/dnsmasq.conf on Intel Macs. Open this file in your editor:

nano /opt/homebrew/etc/dnsmasq.conf

The configuration syntax is straightforward. To resolve a specific domain to localhost, add an address directive:

address=/myproject.local/127.0.0.1

This tells Dnsmasq that any query for myproject.local should return 127.0.0.1. The browser requests myproject.local, Dnsmasq responds with the localhost address, and the browser connects to whatever's listening locally: port 80 for plain HTTP, or whichever port you put in the URL. DNS supplies only the address; the port always comes from the URL.

For applications using multiple subdomains—APIs, authentication services, frontend applications—configure Dnsmasq to resolve an entire domain wildcard:

address=/.myproject.local/127.0.0.1

Dnsmasq address rules match the named domain and every subdomain beneath it, and the leading dot makes that wildcard intent explicit. Now api.myproject.local, app.myproject.local, auth.myproject.local, and any other subdomain all resolve to localhost. You configure your reverse proxy or development server to route these domains to the appropriate service, and Dnsmasq ensures the DNS resolution works regardless of how many subdomains you create.
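
Put together, a working dnsmasq.conf for local development stays small. A minimal sketch: the second project is hypothetical, included to show that one file covers every project, and listen-address keeps Dnsmasq answering only your own machine:

# /opt/homebrew/etc/dnsmasq.conf
# Only answer DNS queries that come from this machine
listen-address=127.0.0.1
# Every subdomain of each project resolves to localhost
address=/.myproject.local/127.0.0.1
# Hypothetical second project
address=/.otherproject.local/127.0.0.1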

Save the configuration file and start Dnsmasq as a service:

brew services start dnsmasq

Homebrew's service management ensures Dnsmasq starts automatically on boot and restarts if it crashes. When you modify the configuration file—adding new domains, changing IP addresses—restart the service to apply changes:

brew services restart dnsmasq

Dnsmasq is now running and ready to answer queries, but your system doesn't know to send any to it yet. macOS and most Linux distributions maintain a list of DNS resolvers, and you need to add Dnsmasq to that list. On macOS, this involves creating a resolver configuration for your custom domain. Create the resolver directory if it doesn't exist:

sudo mkdir -p /etc/resolver

Create a resolver file for your local domain:

sudo nano /etc/resolver/local

Add a single line pointing to Dnsmasq:

nameserver 127.0.0.1

This tells macOS that for any .local domain queries, it should ask the DNS server at 127.0.0.1—which is Dnsmasq. The resolver filename (local in this case) determines which domains this applies to. If you want to use .test instead, name the file test and configure Dnsmasq accordingly.
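
For example, some developers prefer .test because macOS also uses .local for Bonjour/mDNS; the pattern is identical, only the filename and the matching Dnsmasq rule change (myproject.test is a placeholder):

echo "nameserver 127.0.0.1" | sudo tee /etc/resolver/test
# And the matching wildcard rule in dnsmasq.conf:
# address=/.myproject.test/127.0.0.1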

Test that Dnsmasq answers correctly by querying it directly:

dig myproject.local @127.0.0.1

You should see 127.0.0.1 in the answer section. If the query fails or returns an error, verify that Dnsmasq is running (brew services list) and that the address line in dnsmasq.conf is what you expect. Because dig asks the server you name directly, this only proves that Dnsmasq itself is answering; the system resolver path is worth checking separately.
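
To confirm that macOS actually routes .local queries through the /etc/resolver entry you created, two commands exercise the system resolver path:

scutil --dns
# Look for a resolver entry listing domain local with nameserver 127.0.0.1
dscacheutil -q host -a name myproject.local
# Resolves through the system resolver; the answer should be 127.0.0.1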

Serving applications with Caddy

Dnsmasq handles DNS resolution, but you need a web server to actually serve your application. Caddy fills this role elegantly. It's a modern web server with automatic HTTPS, a simple configuration syntax, and native support for reverse proxying—perfect for local development where you might be juggling multiple services.

Install Caddy via Homebrew:

brew install caddy

Caddy's configuration file is called a Caddyfile. Create one in your project directory or a central location you'll reference across projects:

nano Caddyfile

The simplest configuration serves static files from a directory:

myproject.local {
  root * /path/to/your/project
  file_server
}

Replace /path/to/your/project with the actual path to your project's public directory. The root directive tells Caddy where to find files. The file_server directive enables static file serving. That's the complete configuration for serving a static site.

For applications that run their own development server—a React app on port 3000, an Express API on port 4000—configure Caddy as a reverse proxy:

app.myproject.local {
  reverse_proxy localhost:3000
}

api.myproject.local {
  reverse_proxy localhost:4000
}

Now requests to app.myproject.local forward to your React development server, whilst requests to api.myproject.local forward to your Express API. Each service thinks it's being accessed via its real domain. Cookies set by the API with domain restrictions work correctly. CORS policies behave as they will in production. The architecture mirrors what you'll deploy.
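
Once Caddy is running (started in a moment), a quick curl against each hostname confirms the Host-based routing; the /health path is purely illustrative:

# Should be answered by the React dev server on port 3000
curl -i http://app.myproject.local/
# Should be answered by the Express API on port 4000
curl -i http://api.myproject.local/health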

Start Caddy from the directory containing your Caddyfile:

caddy run

Caddy reads the Caddyfile, starts serving on port 80 (HTTP) and port 443 (HTTPS), and handles requests based on the Host header. Open your browser and navigate to http://myproject.local or whichever domain you configured. Your application loads, served locally but accessed via a proper domain name.
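
If you'd rather not keep a terminal open, Homebrew can also run Caddy as a background service. Run this way, Caddy typically reads the Caddyfile from Homebrew's etc directory rather than your project folder (Apple Silicon path shown):

brew services start caddy
# As a service, Caddy typically loads /opt/homebrew/etc/Caddyfile, so keep your site blocks there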

Caddy attempts to provision HTTPS certificates automatically, even for local domains. For public domains, this works through Let's Encrypt. For names it treats as internal, including .local domains, Caddy falls back to its own locally generated certificate authority. Unless that local root is in your system's trust store (Caddy will try to install it, which may prompt for your password), your browser will warn about an untrusted certificate. For development, you can accept the warning, let Caddy install its root, or use mkcert to generate locally-trusted certificates:
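
If you want that choice to be explicit, or you pick a suffix Caddy may not treat as internal (such as .test), the tls internal directive forces Caddy's local CA for a site. A minimal sketch, with the upstream port assumed:

app.myproject.local {
  # Issue this site's certificate from Caddy's internal CA instead of a public one
  tls internal
  reverse_proxy localhost:3000
}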

brew install mkcert
mkcert -install
mkcert "*.myproject.local"

This generates certificate files that Caddy can use, and your browser trusts them without warnings. Point the tls directive in your Caddyfile at the generated certificate and key; Caddy doesn't discover these files on its own.
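
The filenames below are what mkcert typically prints for a wildcard request; check its output and adjust the paths to wherever you keep the files:

app.myproject.local {
  # Serve the mkcert-generated certificate and key instead of Caddy's internal ones
  tls _wildcard.myproject.local.pem _wildcard.myproject.local-key.pem
  reverse_proxy localhost:3000
}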

The combination of Dnsmasq for DNS resolution and Caddy for serving creates a local environment that closely mirrors production. Domains resolve correctly. Services communicate as they will when deployed. Security policies trigger appropriately. The gap between local development and production narrows to near zero.


The bugs you catch during development cost minutes to fix. The same bugs discovered in production cost hours or days—debugging under pressure, coordinating emergency fixes, explaining outages to customers. Anything that moves bug discovery earlier in the development cycle pays for itself immediately.

Dnsmasq and Caddy together eliminate an entire category of environment-specific bugs by making your local setup behave like production. Cookie scope issues surface during initial development. CORS policies fail where they'll fail in production. Authentication flows using domain-based logic work or break in ways that mirror deployment reality. The testing gap between localhost and production collapses.

The configuration investment is minimal—fifteen minutes to install and configure both tools, then they work indefinitely across all projects. The payoff arrives the first time you catch a domain-dependent bug locally instead of after deployment. That session management issue I debugged three hours before launch would have been obvious during development had I been using proper local domains. The fix took ten minutes. The discovery, under time pressure, took three hours. Multiply that by every domain-specific bug across every project, and the value becomes clear.

Your local environment should mirror production as closely as possible. The gap between them is where bugs hide. Dnsmasq and Caddy close that gap, turning localhost into a production-like environment where issues surface early and fixing them is straightforward rather than urgent.

TL;DR

localhost testing creates a blind spot where domain-dependent bugs hide until deployment—cookie scope restrictions, CORS policies, service workers, authentication flows that behave differently on real domains versus ports. Dnsmasq intercepts DNS queries locally, resolving custom domains like api.myproject.local to 127.0.0.1 whilst browsers treat them as legitimate domains. Combined with Caddy for reverse proxying, this replicates production domain architecture locally: multiple subdomains routing to separate services, each behaving exactly as they will when deployed. The configuration takes fifteen minutes, then works indefinitely across all projects. Bugs caught during development cost minutes to fix; the same bugs in production cost hours. Close the gap between localhost and production—catch environment-specific issues early when fixing them is straightforward rather than urgent.
