Docker offers the following three ways to store persistent data.
bind mount
: A folder or file on the host machine is mounted into a container. If the container is tested on a Windows machine, it may not work as expected on Linux or in other places like the cloud, because it depends on the host file system.
volume
: The recommended way to store data. The data is created and used by Docker and is completely managed by Docker.
tmpfs mount
: Available only on Linux host machines. It stores data in memory, and the data is removed when the container stops.
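For reference, here is a rough sketch of how each type can be requested with docker container run (the alpine image and the app-data volume name are just placeholders, not part of this post's project):
# volume: created and managed by Docker
$ docker volume create app-data
$ docker container run --rm -it --mount type=volume,source=app-data,target=/data alpine sh
# bind mount: a directory on the host machine
$ docker container run --rm -it --mount type=bind,source="$PWD",target=/data alpine sh
# tmpfs mount: kept in memory, Linux hosts only
$ docker container run --rm -it --mount type=tmpfs,target=/data alpine sh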
When developing with Node.js, npm install is normally used and the dependencies are downloaded from the internet. If a module has many dependencies, it takes a while to download them, and we don't want to repeat the download process while developing. I think there are two ways to avoid this issue.
Download my source code first from here. Then, build the image.
cd <volume-mount directory>
docker image build -t dev-volume-mount .
This is one of the posts in the Docker learning series.
- Start Docker from scratch
- Docker volume
- Bind host directory to Docker container for dev-env
- Communication with other Docker containers
- Run multi Docker containers with compose file
- Container’s dependency check and health check
- Override Docker compose file to have different environments
- Creating a cluster with Docker swarm and handling secrets
- Update and rollback without downtime in swarm mode
- Container optimization
- Visualizing log info with Fluentd, Elasticsearch and Kibana
Using the same container
If the same container is not needed, the --rm option should be used to remove the container when it stops. However, when we build a Node.js application we want to keep node_modules. Thus, the container shouldn't be removed, and we need to use docker container stop and docker container start instead of the --rm option with docker container run. If we keep the same container, the files in node_modules are still there. But sometimes I want to have a look at the code of a module used in my application. If the files exist only in the container, it's hard to read the source code, so I don't want to use this approach for my development.
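A rough sketch of this stop/start approach, using the image and container names from this post (the exact run options depend on your setup):
$ docker container run -it --name dev-volume-mount dev-volume-mount   # no --rm, so the container survives after it exits
$ docker container stop dev-volume-mount                              # node_modules stays inside the stopped container
$ docker container start -ai dev-volume-mount                         # reattach to the same container later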
Using bind mount
If the directories and files on the host machine can be shared with a container, the problem is resolved. Docker offers this through either the --mount option or the -v / --volume option. The --mount option is basically the same as the -v / --volume option; the difference is the syntax. The -v option combines all of the configuration into one value, whereas the --mount option separates it into key-value pairs.
# From VSCode / PowerShell
## Use --mount option
$ docker container run --rm -it --name dev-volume-mount --mount type=bind,source=$PWD,target=/src dev-volume-mount
## Use --volume (-v) option
$ docker container run --rm -it --name dev-volume-mount -v $(PWD):/src dev-volume-mount
# From git-bash
## Use --mount option
$ winpty docker container run --rm -it --name dev-volume-mount --mount type=bind,source=$PWD,target=/src dev-volume-mount
## Use --volume (-v) option
$ winpty docker container run --rm -it --name dev-volume-mount -v $(PWD):/src dev-volume-mount
When git-bash is used with the -it option, Docker shows the following error message.
Docker options
- -i or --interactive: Keep STDIN open even if not attached
- -t or --tty: Allocate a pseudo-TTY
$ docker container run --rm -it --name dev-volume-mount --mount type=bind,source=$PWD,target=/src dev-volume-mount
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
But… what?! The result was different! What is the ;C at the end when using the -v option?!
# When using --mount option
$ docker inspect dev-volume-mount
"Mounts": [
    {
        "Type": "bind",
        "Source": "/run/desktop/mnt/host/c/<my directories>/src/Docker/volume-mount",
        "Destination": "/src",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],
# -v option
$ docker inspect dev-volume-mount
"Mounts": [
    {
        "Type": "bind",
        "Source": "/run/desktop/mnt/host/c/<my directories>/src/Docker/volume-mount;C",
        "Destination": "\\Program Files\\Git\\src",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],
I didn't specify ;C, but according to this comment on Stack Overflow, git-bash adds it. To solve this problem, a leading / needs to be added: /$(PWD). Then the problem is solved and the result becomes the same as the --mount option's result. The files that exist on the host machine appear in the container too!
$ winpty docker container run --rm -it --name dev-volume-mount -v /$(PWD):/src dev-volume-mount
root@d687880edc22:/src# ls
Dockerfile lib node_modules package-lock.json package.json tsconfig.json
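As an aside, another workaround often suggested for git-bash's path conversion is to disable it with MSYS_NO_PATHCONV for the single command. I haven't verified this in my setup, so treat it as an assumption:
$ MSYS_NO_PATHCONV=1 winpty docker container run --rm -it --name dev-volume-mount -v "$PWD":/src dev-volume-mount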
When npm install is executed in the container, all modules become available on the host machine too.
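As a minimal sketch of that, continuing from the container prompt above:
root@d687880edc22:/src# npm install
# then, back on the host in the bind-mounted directory:
$ ls node_modules    # the installed modules appear here as well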
Conclusion
For developers who work with Node.js, handling node_modules is one of the important concerns. As the software grows, the build time also increases. Let's use the bind-mount option for development. The Remote - Containers VSCode extension is available to dockerize the development environment, but I faced some problems where I couldn't do anything for some reason, and it's frustrating because it requires a VSCode restart. Maybe the file system is not handled well by the extension. The extension is very useful in general, but let's use the bind-mount function until Remote - Containers is stable enough.