At work, we are beginning to use Docker to simplify local and production development environments, and we are also beginning to build out a portfolio of mobile applications.

One problem I've invested some time into recently is how to access a web service running in Docker on my development machine, from my mobile application. There are two primary issues:

  • The native code rejects the self-signed SSL certificates we use for development
  • Our local environment runs on application-name.local - this makes DNS resolution difficult (yes, I know we shouldn't really be using .local)


Just a tiny bit of extra context:

  • Life would be a lot easier if we could host these applications on public URLs, but for various reasons that's not currently feasible
  • We need to talk to two different web services running on the same host
  • We use the jwilder/nginx-proxy image to help with that
  • The end result should be something like https://application-a.local, and https://application-b.local
  • Then we don't have to worry about all the instructions for developers to change the configuration settings to point to their IP, or to handle port conflicts, or any of that "noise"

Last of all: SAML. We have our-main-app.local configured as a Service Provider entry in our test environment. Using localhost doesn't work, because the mobile app can't resolve it. Using an IP address could work, but we'd need to add Service Provider entries for every developer's machine, and update them when they change. Running a local DNS resolver removes all of these problems.

One alternative implementation to this might be to use a private ACME provider, which can then be used with the docker-letsencrypt-nginx-proxy-companion service, reducing the need for developers to manage their own certificates to some degree.

A Serious Warning

Before you start:

You’re about to generate a private Certificate Authority and add it to your Trust Store. If anybody gets access to the CA key, they can impersonate any website on your machine and on your phone - they can then read your passwords, your emails, your bank details, and possibly your mind. You are STRONGLY RECOMMENDED to delete the certificate files when you are finished, because you can always regenerate a new set if you need to. You have been warned.

Android in particular is aware of this, and will display this warning when you import a private CA certificate:



Creating a Private Certificate Authority

We can solve the first issue with a great StackOverflow answer by user Brad Parks. The script below generates a private Certificate Authority that we can use to sign certificates for our *.local domains, like the ones we pass to nginx. I've added one extra step to convert the CA certificate into DER format so I can import it into Android, and I've modified the output directories slightly:

#!/usr/bin/env bash
set -euo pipefail
mkdir -p ca

SUBJECT="/C=GB/ST=England/L=London/O=ACME Corp./OU=Development/CN=Personal CA"
openssl genrsa -out ca/dev.key 2048
openssl req -x509 -new -nodes -key ca/dev.key -sha256 -days 1024 -subj "$SUBJECT" -out ca/dev.pem
# Convert the CA certificate to DER format so it can be imported into Android
openssl x509 -inform PEM -outform DER -in ca/dev.pem -out ca/dev.crt
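As a sanity check, you can confirm the generated certificate really is a CA. This is a throwaway sketch that repeats the generation steps in a temporary directory and inspects the result; point the two openssl x509 commands at ca/dev.pem to check your real CA:

```shell
# Generate a throwaway CA the same way as the script above, then inspect it
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/dev.key" 2048 2>/dev/null
openssl req -x509 -new -nodes -key "$tmp/dev.key" -sha256 -days 1024 \
  -subj "/C=GB/ST=England/L=London/O=ACME Corp./OU=Development/CN=Personal CA" \
  -out "$tmp/dev.pem"

# The subject should match what we asked for...
openssl x509 -in "$tmp/dev.pem" -noout -subject
# ...and "openssl req -x509" marks the certificate as a CA by default
openssl x509 -in "$tmp/dev.pem" -noout -text | grep 'CA:TRUE'
rm -rf "$tmp"
```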

The second script uses that CA to issue a certificate for a given domain:

#!/usr/bin/env bash
# Modified from the same StackOverflow answer
# Assumes the CA script above has already been run

set -euo pipefail
mkdir -p certs

DOMAIN="${1:-}"
NUM_OF_DAYS=825  # modern clients reject leaf certificates valid much longer than this

if [ -z "$DOMAIN" ]; then
  echo "Please supply a subdomain to create a certificate for"
  echo "e.g. myapp.local"
  exit 1
fi

if [ ! -f ca/dev.pem ]; then
  echo "Please run the CA script first, and try again!"
  exit 1
fi

if [ ! -f v3.ext ]; then
  echo "Please create the \"v3.ext\" file and try again!"
  exit 1
fi

# Create a new private key if one doesn't exist, or use the existing one if it does
if [ -f certs/private.key ]; then
  KEY_OPT="-key"
else
  KEY_OPT="-keyout"
fi

SUBJECT="/C=GB/ST=England/L=London/O=ACME Corp./OU=Development/CN=$DOMAIN"
openssl req -new -newkey rsa:2048 -sha256 -nodes $KEY_OPT "certs/private.key" -subj "$SUBJECT" -out "certs/$DOMAIN.csr"
sed s/%%DOMAIN%%/"$DOMAIN"/g v3.ext > /tmp/__v3.ext
openssl x509 -req -in "certs/$DOMAIN.csr" -CA ca/dev.pem -CAkey ca/dev.key -CAcreateserial -out "certs/$DOMAIN.crt" -days $NUM_OF_DAYS -sha256 -extfile /tmp/__v3.ext

echo "Done!"


keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

DNS.1 = %%DOMAIN%%

Permissions and Execution

Make the two shell scripts executable (I'll assume you saved them as create-ca.sh and create-certificate.sh):

$ chmod +x create-ca.sh create-certificate.sh

Then generate a CA certificate:

$ ./create-ca.sh
$ tree ca/
ca/
├── dev.crt
├── dev.key
└── dev.pem

0 directories, 3 files

Finally, generate a certificate:

$ ./create-certificate.sh myapp.local
Signature ok
subject=C = GB, ST = England, L = London, O = ACME Corp., OU = Development, CN = myapp.local
Getting CA Private Key

$ tree certs/
certs/
├── myapp.local.crt
├── myapp.local.csr
└── private.key

0 directories, 3 files
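It's worth confirming the subjectAltName actually made it into the issued certificate, since modern Android (and iOS) validate the SAN rather than the CN. This is a throwaway sketch using OpenSSL 1.1.1+'s -addext flag; run the same grep against certs/myapp.local.crt to check a real certificate:

```shell
# Self-signed throwaway certificate with a SAN, just to demonstrate the check
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/key.pem" -days 30 \
  -subj "/CN=myapp.local" -addext "subjectAltName=DNS:myapp.local" -out "$tmp/crt.pem"

# Should print the Subject Alternative Name section containing DNS:myapp.local
openssl x509 -in "$tmp/crt.pem" -noout -text | grep -A1 'Subject Alternative Name'
rm -rf "$tmp"
```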

Configuring jwilder/nginx-proxy

The private.key above is shared for all generated certificates, for convenience.

For the jwilder/nginx-proxy image, copy this key and rename it to match the basenames of the other certificates:

$ cp certs/myapp.local.* /wherever/your/ssl/mount/is
$ cp certs/private.key /wherever/your/ssl/mount/is/myapp.local.key
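A mismatched key and certificate is a common cause of nginx refusing to start after this step. You can check that a private key and certificate belong together by comparing their public keys - this sketch uses throwaway files; substitute certs/private.key and certs/myapp.local.crt to check the real pair:

```shell
# Throwaway key and certificate generated from it, just to demonstrate the check
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/private.key" 2048 2>/dev/null
openssl req -x509 -new -nodes -key "$tmp/private.key" -sha256 -days 30 \
  -subj "/CN=myapp.local" -out "$tmp/myapp.local.crt"

# Hash the public key from each side; the digests must be identical
key_pub=$(openssl pkey -in "$tmp/private.key" -pubout -outform DER | openssl sha256)
crt_pub=$(openssl x509 -in "$tmp/myapp.local.crt" -noout -pubkey \
  | openssl pkey -pubin -pubout -outform DER | openssl sha256)

[ "$key_pub" = "$crt_pub" ] && echo "key and certificate match"
rm -rf "$tmp"
```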

Add the CA to Linux Trust Store

If you want to avoid the SSL errors in your browser, you can add the CA certificate to your local trust store:

On Fedora:

$ sudo cp ca/dev.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract

On Ubuntu:

$ sudo cp ca/dev.pem /usr/local/share/ca-certificates/dev.crt
$ sudo update-ca-certificates
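You can also verify the chain directly with openssl verify, independently of the system trust store - this is roughly the check your browser and the mobile OS perform. Another throwaway sketch; substitute ca/dev.pem and certs/myapp.local.crt for the real check:

```shell
tmp=$(mktemp -d)
# Throwaway CA...
openssl genrsa -out "$tmp/ca.key" 2048 2>/dev/null
openssl req -x509 -new -nodes -key "$tmp/ca.key" -sha256 -days 30 \
  -subj "/CN=Personal CA" -out "$tmp/ca.pem"
# ...and a throwaway leaf certificate signed by it
openssl req -new -newkey rsa:2048 -sha256 -nodes -keyout "$tmp/leaf.key" \
  -subj "/CN=myapp.local" -out "$tmp/leaf.csr" 2>/dev/null
openssl x509 -req -in "$tmp/leaf.csr" -CA "$tmp/ca.pem" -CAkey "$tmp/ca.key" \
  -CAcreateserial -out "$tmp/leaf.crt" -days 30 -sha256 2>/dev/null

# Prints "<path>: OK" when the leaf chains up to the CA
openssl verify -CAfile "$tmp/ca.pem" "$tmp/leaf.crt"
rm -rf "$tmp"
```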

Setting up Android

Next up we need to do two things:

  • Import our CA certificate into Android
  • Set up a dnsmasq resolver and get Android to use it

Some general pointers before we start:

  • DNS on Android is a royal pain. You can only override DNS when connected to a WiFi network, unless you root your phone
  • Android also has some opaque caching rules around DNS and will sometimes stop using the primary DNS if it stops responding for any reason (like switching between networks and back)

Basically: if it stops working, close the application you are trying to use. Disconnect the DNS Changer application, then re-connect it, and try again.

Setting up a $WIFI_IP Environment Variable

Our DNS resolver needs to point to our current local IP on whichever network we are connected to. This is obviously different at work and at home, and may differ depending on which office you are in.

This is good for convenience, but there's one other big reason: something on the host is usually already listening on port 53 (a local dnsmasq or systemd-resolved instance providing DNS), so running another dnsmasq container bound to every interface fails:

$ docker run -p 53:53 andyshinn/dnsmasq
docker: Error response from daemon: driver failed programming external connectivity on endpoint amazing_payne (1bfea52506e959057c0f2ef11c76097da74e3f29827972bfa878009c5eead3ca): Error starting userland proxy: listen tcp 0.0.0.0:53: bind: address already in use.
ERRO[0000] error waiting for container: context canceled

By binding dnsmasq to only the specific interface we are interested in, we can avoid this problem - we could also use --except-interface, but I find this cleaner since we can re-use the same environment variable for our dnsmasq address configuration later.

One nice trick for this is provided by SAM in this StackOverflow answer:

$ ip -4 addr show wlp58s0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'

Replace wlp58s0 with the name of your WiFi interface (probably wlan0 on Ubuntu). Then you can add it to your .bashrc like so:

export WIFI_IP=$(ip -4 addr show wlp58s0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
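Interface names differ between machines (wlp58s0, wlan0, wlo1, ...), so hardcoding one in .bashrc can be fragile. One possible variation, assuming the iproute2 tools are available, derives the interface name from the default route instead:

```shell
# Find the interface holding the default route ("default via <gw> dev <iface> ...")
WIFI_IF=$(ip route show default | awk '{print $5; exit}')
if [ -n "$WIFI_IF" ]; then
  # Extract that interface's first IPv4 address
  export WIFI_IP=$(ip -4 addr show "$WIFI_IF" | grep -oP '(?<=inet\s)\d+(\.\d+){3}' | head -n1)
  echo "Using $WIFI_IF -> $WIFI_IP"
fi
```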

Setting up a Local dnsmasq Resolver

This one's nice and easy thanks to the andyshinn/dnsmasq image on Docker Hub.

I prefer to use docker-compose to maintain configuration, for example this is my ~/docker-compose.dnsmasq.yml file:


version: '3'

services:
  dnsmasq:
    container_name: dnsmasq
    image: andyshinn/dnsmasq
    command:
      - --address=/myapp.local/${WIFI_IP}
      - --address=/myotherapp.local/${WIFI_IP}
      - --no-resolv
      - --server=8.8.8.8
    ports:
      - "${WIFI_IP}:53:53/tcp"
      - "${WIFI_IP}:53:53/udp"
    cap_add:
      - NET_ADMIN


  • --address adds each DNS entry we want to provide
  • --no-resolv stops dnsmasq reading the host's /etc/resolv.conf for its upstream servers, preventing any conflicts with the local resolver configuration
  • --server forwards any queries we don't answer ourselves to Google's DNS at 8.8.8.8 - replace this with your internal office DNS if you have one
  • the NET_ADMIN capability is required to interact with the network interfaces, and is preferred over using --privileged (see the Docker Documentation for more info).

You can bring this up by running docker-compose -f ~/docker-compose.dnsmasq.yml up, and you can test it by running:

$ nslookup myapp.local $WIFI_IP
Server:		<your WiFi IP>
Address:	<your WiFi IP>#53

Name:	myapp.local
Address: <your WiFi IP>

Overriding Android DNS

So far we have:

  • Private CA added to our local machine
  • Private certificates for our local services
  • Private DNS resolution on our WiFi network interface

All we need to do now is get Android to talk to our development machine when resolving these addresses. There's a fantastic free app that achieves this by pretending to be a VPN connection - DNS Changer.

(Unfortunately, this has some advertisements, some of which may be borderline NSFW - if you can find a better one, let me know)

Choose "Custom DNS" in the DNS Changer application; set the first DNS server to your WiFi IP address, and your second DNS server to either the office DNS server (if you have one) or Google's DNS server at 8.8.8.8.


Click "Start" to initialise the connection:


You should now be able to visit your server in your web browser:


At the moment, this is only working for me on Android with Chrome - Firefox ships with its own certificate store and ignores the system one. However, that's good enough for testing mobile applications through native HTTPS calls.