To open up a shell in some image:
docker run -it ruby:2.5 /bin/bash
Laptop notes
TODO: Go through and check these notes
Table of Contents
- Laptop notes
- Diving into Docker
- Docker Client
- Overriding default commands
- Listing running containers
- Container Lifecycle
- Restarting stopped containers
- Removing stopped containers
- Retrieving Log Outputs
- Stopping containers
- Multi-command containers
- Executing commands in running containers
- The -it flag
- Getting a command prompt in a container
- Starting with a shell
- Container isolation
- Building custom images through docker server
- Making real projects with Docker
- Docker Compose
- Creating a Production-Grade Workflow
Diving into Docker
Why use Docker?
Makes it easy to set up and run things on your computer.
docker run -it redis
What is Docker
There’s an ecosystem - client, server, machine, images, hub, compose
It’s about running containers.
The Docker CLI downloads an Image from Docker Hub. An Image holds the deps and config needed to run a program.
- single file on your HDD
A container is an instance of an image (like a running program).
- A program with an isolated set of hardware resources (eg, memory, ports)
Install
The Docker Client/CLI is how we interact with Docker. The Docker Server/Daemon is what creates images, runs containers, etc.
We don’t want to use Windows-based containers. I should create a Docker ID and sign in. Docker has a command prompt.
Using Docker Client
docker run hello-world
- The CLI passes the command to the Docker Server.
- The Server looks for hello-world inside its Image Cache.
- If it isn’t cached, the Server downloads the image from Docker Hub.
- The Server loads the image and creates a container from it.
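If everything is working, the run produces output along these lines (truncated):
> docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.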
But Really… What’s a Container?
Most OS’s have a Kernel which governs access to the hardware from programs running on your computer. These programs interact with the kernel using system calls.
Namespacing - isolating resources per process or group of processes. E.g. users, hard drives, network, hostnames, IPC.
A Control Group can be used to limit the amount of resources a process can use. E.g. memory, CPU, HDD I/O, network bandwidth.
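Docker exposes these Control Group limits directly as flags on docker run. A minimal sketch (the values are just illustrative):
# Cap the container at 100MB of memory and half a CPU core
docker run --memory 100m --cpus 0.5 busybox ping google.com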
An Image is a:
- File system snapshot
- Start up command.
Image -> Container:
- Kernel creates a new segment of HDD.
- Docker copies the FS over.
- Docker runs the startup command.
How’s Docker running on your computer?
Namespacing and Control Groups are specific to Linux. Docker runs a Linux VM - you can see this in docker version.
Docker Client
docker run <image name>
docker run hello-world
Overriding default commands
We can override the default startup command:
docker run <image name> <command>
docker run busybox echo hello Peter
docker run busybox ls
docker run busybox /bin/sh -c "echo Hello && echo Peter"
Busybox is a small image (1.22MB). The hello-world image doesn’t have echo or ls.
Listing running containers
docker ps
docker run busybox ping google.com
docker ps
docker ps --all
Lists all containers ever run on the machine, not just the running ones.
Container Lifecycle
Creating and starting a container are two different steps.
docker run = docker create + docker start
> docker create hello-world
12312312312412412
> docker start -a 12312312312412412
-a causes Docker to print the output to the terminal. This is the default on docker run.
Restarting stopped containers
docker ps --all
> docker run busybox echo Hello Peter
Hello Peter
> docker ps --all
> # Find the ID of the container you just ran
> docker start -a 48299b31f223
Hello Peter
Containers keep their default command - we can’t change it after creation.
Removing stopped containers
docker system prune
This will delete your build cache (including any Image pulled from Docker Hub) as well.
I saved 26MB by running that.
But docker images still shows quite a few things…
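To clear those out too (a sketch):
> docker images            # list the remaining images
> docker rmi <image id>    # remove a specific one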
Retrieving Log Outputs
This is for when you docker start without -a.
docker logs <container id>
docker create busybox echo hello
docker start <id>
docker logs <id>
Stopping containers
docker create busybox ping google.com
docker start <id>
docker logs <id>
The container is still running. How do we stop it?
docker stop <id>
docker kill <id>
The difference is docker stop sends a SIGTERM to the primary process in the container. This gives the process some time to clean itself up, whereas docker kill sends a SIGKILL, which stops it immediately. However, docker stop will fall back to docker kill if the container hasn’t stopped within 10 seconds.
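A quick way to see the difference (a sketch; the ids come from the docker run output). Note that ping runs as PID 1 inside the container and doesn’t handle SIGTERM, so stop waits out the full grace period:
> docker run -d busybox ping google.com
> time docker stop <id>    # ~10s: SIGTERM is ignored, Docker falls back to SIGKILL
> docker run -d busybox ping google.com
> time docker kill <id>    # returns almost immediately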
Multi-command containers
For Redis, you have the Redis server, which you communicate with using the Redis CLI.
docker run redis
How do we execute redis-cli within this container?
Executing commands in running containers
docker exec -it <id> <command>
-it allows us to type input into the container.
docker exec -it f2bf57d675ab redis-cli
> set myvalue 5
> get myvalue
"5"
docker exec f2bf57d675ab redis-cli # No -it
# Closes instantly
The -it flag
Each Linux process has 3 communication channels - stdin, stdout, stderr.
-i attaches the terminal to stdin.
-t makes sure the text you type is entered and shows up nicely on the screen - e.g. you get a prompt and autocomplete.
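To see what -t adds, compare the two (reusing the container id from above):
> docker exec -i f2bf57d675ab sh    # stdin works, but there’s no prompt or line editing
> docker exec -it f2bf57d675ab sh   # full pseudo-terminal: prompt, autocomplete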
Getting a command prompt in a container
docker exec -it f2bf57d675ab sh
If Ctrl+C doesn’t exit, try Ctrl+D.
Starting with a shell
docker run -it busybox sh
Container isolation
Containers do not share their filesystem.
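A quick demonstration - a file created in one container doesn’t show up in another, even one from the same image:
> docker run -it busybox sh
/ # touch hello.txt
# In a second terminal:
> docker run -it busybox sh
/ # ls    # no hello.txt here - each container has its own filesystem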
Building custom images through docker server
Creating Docker Images
We create a Dockerfile, pass it to the docker CLI which passes it to the server.
Dockerfile:
- Specify base image.
- Run commands to install additional things.
- Specify start up command.
Building a Dockerfile
Creating a Dockerfile that runs redis-server.
FROM alpine
RUN apk add --update redis
CMD ["redis-server"]
Do we want to specify the version?
docker build .
In docker build you get output for each command in your Dockerfile. For each command you also get Running in <id> and then Removing intermediate container <id>.
On RUN apk add --update redis, Docker takes the alpine image, creates a new container from it, and executes the command in that container. It then stops the container and takes a snapshot of its filesystem.
For CMD we set the startup command of the image.
Image -> Create container -> Execute command -> Snapshot Image FS -> Image
Docker caches the intermediate images, so put the things less likely to change further up the file.
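For example, a hypothetical extra dependency added to the Dockerfile above: because the new RUN line comes after the redis install, a rebuild pulls the layers above it straight from the cache and only executes the new step:
FROM alpine
RUN apk add --update redis
# New step added below the existing ones - the layers above come from the cache
RUN apk add --update gcc
CMD ["redis-server"]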
docker build -t peter/redis:latest .
docker build -t <docker id>/<project name>:<version> .
This tags the image with the name we just gave. You can leave out the version when running, e.g.:
docker run peter/redis
Manual image creation with Docker Commit
This is how we can turn a container back into an image, with a snapshot of the filesystem.
> docker run -it alpine sh
$ apk add --update redis
# Other terminal
> docker ps # Get id
> docker commit -c 'CMD ["redis-server"]' <id>
# Outputs image id
Making real projects with Docker
We’re going to go along and make a few intentional mistakes.
Node apps install dependencies with npm install and then we start the server with npm start.
alpine in Node’s Docker Hub page is a tag, so we need to use node:alpine.
COPY ./ ./ - the first path is on the local filesystem, the second is inside the container. Both are relative to the build context.
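For instance (hypothetical paths), this copies the host’s ./src directory, resolved against the build context, into /usr/app/src in the image:
# host path (relative to build context) -> path inside the image
COPY ./src /usr/app/src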
docker build -t peconn/simpleweb .
The docker container can make outgoing network requests and that works fine.
For incoming requests, you need to set up a port mapping with -p <host port>:<container port> (the EXPOSE instruction on its own is mostly documentation):
docker run -p 8080:8080 peconn/simpleweb
Use the WORKDIR instruction so you’re not copying stuff into, and running from, the container’s root directory.
FROM node:alpine
WORKDIR /usr/app
COPY ./ ./
RUN npm install
CMD ["npm", "start"]
If we make a change to index.js, the changes aren’t reflected. In order to not re-run npm install spuriously, copy package.json over separately:
FROM node:alpine
WORKDIR /usr/app
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "start"]
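With this ordering, a rebuild after a source-only change reuses the npm install layer (a sketch):
> docker build -t peconn/simpleweb .   # after editing index.js: npm install comes
                                       # from the cache, only the final COPY re-runs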
Docker Compose
To run redis:
docker run redis
docker build -t peconn/visits .
docker run peconn/visits
We need to set up networking between them.
We’ll use docker-compose, which means you don’t have to run loads of docker-cli commands. Configuration lives in the docker-compose.yml file.
Services are types of container.
In a yml file, - is how we start an array entry.
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - "4001:8081"
Services inside the same docker-compose.yml will share networking.
The address of the redis server is just redis-server.
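You can check the service-name resolution from a running container (a sketch, assuming the compose file above is up):
> docker-compose exec redis-server redis-cli -h redis-server ping
PONG
# the service name resolves across the compose network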
docker run myimage -> docker-compose up
docker build && docker run myimage -> docker-compose up --build
docker run -d redis runs an image in the background. docker ps shows the running containers and docker stop closes them.
For Docker Compose, docker-compose up -d will start things in the background and docker-compose down will shut them down.
Container maintenance
Containers may crash. Docker determines whether or not to restart containers by looking at the error code.
Docker restart policies:
- "no", don’t bother. (The quotes matter - an unquoted no in yml is parsed as a boolean.)
- always, always do so!
- on-failure, only if it fails.
- unless-stopped, always, unless manually stopped.
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    restart: always
    build: .
    ports:
      - "4001:8081"
docker-compose ps will show running Docker Compose set-ups - it needs to be run from the directory containing the docker-compose.yml file.
Creating a Production-Grade Workflow
The full flow:
- Developing
- Testing
- Deploying
This will involve creating a GitHub repo with a feature branch and a master branch. The master branch will be automatically deployed to production. We’ll pull and push code to the feature branch and occasionally we’ll make a pull request to move it into master. This pull request will trigger a Travis CI run (from master) and then if that passes it will be deployed to AWS.
We’re going to start off with a template React application.
npm install -g create-react-app
create-react-app frontend
npm install --save-dev cross-env
Useful commands:
npm run start # For dev use only.
npm run test
npm run build # Builds a prod version.
Need to follow instructions here to stop auto-watch on your tests. I also needed to remove the browserslist section because of this bug.
It’s going to make sense to have two different Dockerfiles - one for dev (Dockerfile.dev) and one for prod (Dockerfile).
docker build -f Dockerfile.dev .
We should remove the local copy of node_modules because that gets copied over into the image.
docker run -p 8080:3000 <id>
Create-React-App has some issues detecting when files get changed on Windows machines.
Create a file called .env in the root and add CHOKIDAR_USEPOLLING=true.
To get the changes reflected without rebuilding the container, we need to use Volumes.
Why didn’t we use volumes before? It’s a bit painful.
docker run -p 3000:3000 \
-v /app/node_modules \
-v $(pwd):/app \
<id>
The first -v is a bookmark volume. ${PWD} should work on Windows.
Things fail if we omit the bookmark volume - the mapping would hide the container’s node_modules behind our local folder, which we deleted.
If you go into the container and run npm install then npm run start, things work again.
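Without the bookmark volume the startup fails along these lines (a sketch - the exact message may differ):
> docker run -p 3000:3000 -v $(pwd):/app <id>
sh: react-scripts: not found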
When you use -v without a :, it means: don’t map anything here - keep the path that already exists in the container rather than letting the other volume mappings overwrite it.
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
We could keep the COPY . . around in case we want to run things without the volume mounts.
I ran into the following error while running Docker Compose (but not while running Docker normally):
web_1 | Could not find a required file.
web_1 | Name: index.html
web_1 | Searched in: /app/public
web_1 | npm ERR! code ELIFECYCLE
web_1 | npm ERR! errno 1
web_1 | npm ERR! frontend@0.1.0 start: `react-scripts start`
web_1 | npm ERR! Exit status 1
web_1 | npm ERR!0
web_1 | npm ERR! Failed at the frontend@0.1.0 start script.
web_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
web_1 |
web_1 | npm ERR! A complete log of this run can be found in:
web_1 | npm ERR! /root/.npm/_logs/2019-06-19T06_10_45_706Z-debug.log
frontend_web_1 exited with code 1
I fixed it by resetting my credentials in the Shared Drives settings.
Testing
We want to first run tests in our dev environment and then on Travis CI.
docker build -f Dockerfile.dev .
docker run <id> npm run test
"scripts": {
  "test": "cross-env CI=true react-scripts test",
  "test_watch": "react-scripts test"
},
You can also attach to an already running container and run the tests there (docker exec -it <id> npm run test). Or you can add a second test service:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
  tests:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - .:/app
    command: ["npm", "run", "test_watch"]
Be sure to use docker-compose up --build.
If you get:
Starting frontend_tests_1 ... done
ERROR: for frontend_web_1 Cannot start service web: driver failed programming external connectivity on endpoint frontend_web_1 (e2373456809e2b042b785abb9636d9485c7bbfec19b4a7763058b4138aabb7b3): input/output error
ERROR: for web Cannot start service web: driver failed programming external connectivity on endpoint frontend_web_1 (e2373456809e2b042b785abb9636d9485c7bbfec19b4a7763058b4138aabb7b3): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:3000:tcp:172.19.0.3:3000: input/output error
ERROR: Encountered errors while bringing up the project.
Try restarting Docker.
Nginx
How are things going to work in a prod environment (npm run build)?
We’ll need a proper server (not the npm dev one).
The dependencies for npm run build aren’t going to be needed after building - we shouldn’t need to include them in the prod image.
Multi-step (multi-stage) builds.
Build phase:
- Use node:alpine.
- Copy package.json.
- Install deps.
- Run npm run build.
Run phase:
- Use nginx.
- Copy over result.
- Start nginx.
FROM node:alpine AS builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
# The build dir will be in /app/build
# TODO: Tie down versions
# This will cause a new phase
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html
# nginx image has CMD already set.
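To build and run it (a sketch - the tag just follows the naming used above; nginx listens on port 80 by default):
> docker build -t peconn/frontend .
> docker run -p 8080:80 peconn/frontend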
To serve the production build locally without Docker:
npm install -g serve
serve -s build