If you’ve ever searched for information on how to SSH into a Docker container, you’ve probably found conflicting advice. The truth is, you don’t usually need to run an SSH server inside a container at all. Instead, Docker provides built-in commands that give you secure, direct access to your containers.
In this guide, we’ll show you the correct ways to get a shell inside a running container, explain the difference between docker exec and docker attach, and demonstrate how to safely detach without stopping your application. We’ll also cover when copying files makes sense, and why adding a full SSH server inside a container should be avoided except in rare cases.
Let’s get started!
Why should you use CLI over Docker Desktop?
While graphical user interfaces for Docker exist, the CLI is the primary and most powerful way to interact with the Docker daemon and containers. There are several scenarios where you’ll need to access your Docker server via the CLI:
- Automation and Scripting: The most significant advantage is the ability to script any Docker operation. The CLI allows you to build a CI/CD pipeline, automate deployments, create complex multi-container setups with Docker Compose, and much more.
- Server Environments: Most servers run headless (without a graphical interface). When you SSH into a remote production server, the CLI is the only way to manage Docker containers.
- Precision and Control: The CLI gives you fine-grained control over every possible Docker option and flag. This level of precision is often abstracted or unavailable in GUIs.
- Resource Efficiency: The CLI is lightweight. GUIs consume additional system resources that are better allocated to your applications, which is critical on a production server.
- Universality: The Docker CLI is consistent across all platforms (Linux, macOS, and Windows). This universal experience makes it a reliable tool for developers and administrators, regardless of their local operating system.
Suggested read: Essential Commands for Getting Started with Docker
How To Get a Shell Into a Docker Container
When debugging an issue in a Docker container, shell access is extremely helpful because it lets you monitor and inspect individual services.
The phrase “SSH into a container” is common, but technically a misnomer. The most common and recommended methods do not involve running an SSH server inside the container at all. Let’s explore the primary techniques for gaining access and their common use cases.
Using the ‘docker exec’ command (Recommended)
The docker exec command is the most direct and recommended method for getting an interactive shell in a running container. This command starts a new process inside the container, letting you run commands without attaching to its primary process. Use the following command to get CLI access to your Docker container:
docker exec -it <container_name_or_id> <command>
- -i (--interactive): Keeps STDIN open, allowing you to type commands.
- -t (--tty): Allocates a pseudo-TTY, which connects your terminal to the container’s shell, making it interactive.
- <command>: The command to run. To get a shell, this is typically /bin/bash or /bin/sh.
For example, the following runs the date command inside a container with the ID ba06f65c55e7:
docker exec -it ba06f65c55e7 date
This returns the current date and time as seen inside the container (which may differ from the host system).
When to use the ‘docker exec’ Command
- Debugging a Live Application: If your web server is running in a container and throwing errors, you can use docker exec to get a shell, check log files that aren’t being piped to stdout, inspect the environment variables (env), or use tools like curl from inside the container to test network connectivity to other services.
docker exec -it my-web-app /bin/bash
- Running Database CLI Tools: You need to inspect a database running inside a container. To do so, you can exec into the container and use a command-line client like psql or mysql.
docker exec -it my-postgres-container psql -U myuser -d mydatabase
- Installing Debugging Tools: Your container uses a minimal base image and doesn’t include tools like ping or vim. You can use exec to open a shell and install them temporarily for the current session.
# Get a shell and then install curl
docker exec -it my-app /bin/sh
# Inside the container shell:
# apt-get update && apt-get install -y curl
Note: Changes made via docker exec (like installing a package) are ephemeral. If the container restarts, they will be gone. To make permanent changes, you should modify your Dockerfile and rebuild the image.
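Putting these use cases together, a typical one-off debugging session might look like this (the container name my-web-app, the service hostname, and the file paths are illustrative, not from a real setup):

```shell
# Inspect the environment variables of a running container
docker exec my-web-app env

# Read a log file that is not piped to stdout (path is illustrative)
docker exec my-web-app cat /var/log/app/error.log

# Test network connectivity to another service from inside the container
docker exec my-web-app curl -sf http://api:8080/health
```

Each command starts a new process inside the container, runs to completion, and exits, leaving the container's main process untouched.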
Using the ‘docker attach’ Command
The docker attach command connects your terminal’s input and output streams directly to the container’s main process (PID 1). This is fundamentally different from docker exec, which starts a new process. Execute the following command to attach to a Docker container:
docker attach <container_name_or_id>
When to use the ‘docker attach’ Command
- Interactive Applications: If you use a container that runs an interactive process by default (like a Python REPL or a shell itself), you can use the docker attach command to connect to it.
# Start a basic container with an interactive shell process
docker run -it --name my-interactive-shell ubuntu /bin/bash
# If you detach, you can re-attach with:
docker attach my-interactive-shell
- Viewing Real-time Logs: If a container’s main process is an application that logs directly to stdout, docker attach will show you that live output, similar to the docker logs -f command.
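To see the difference in practice, you can compare both commands against the official nginx image, whose main process logs to stdout/stderr (the container name is illustrative):

```shell
# Start a container whose main process logs to stdout
docker run -d --name my-nginx nginx

# Read-only log stream: Ctrl-C stops only the stream, not the container
docker logs -f my-nginx

# Wires your terminal to PID 1: Ctrl-C is forwarded to nginx and stops it
docker attach my-nginx
```

For log viewing, docker logs -f is almost always the safer choice, for the reason covered in the next section.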
How to Detach from a Docker Container Safely
When you use the docker attach command, you should understand that your terminal session becomes directly wired to the container’s primary process (PID 1). This means that keyboard signals like Ctrl-C are passed directly through to the application. If the main process within the container is a shell or an application that isn’t specifically programmed to handle this interrupt signal, it will interpret Ctrl-C as a command to terminate.
And since a Docker container’s lifecycle is tied directly to its main process, this action will cause the process to shut down, and as a result, the container itself will stop completely. This often surprises new users, who expect to just exit the logs, but instead end up shutting down the entire application.

To safely disconnect from an attached container without terminating it, you must use Docker’s escape sequence: press Ctrl-P, followed immediately by Ctrl-Q. This key combination is intercepted by the Docker client on your local machine and is not sent to the process running inside the container.
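If your application itself needs Ctrl-P (for example, a REPL that uses it for history navigation), Docker lets you override the default ctrl-p,ctrl-q sequence per session with the --detach-keys flag (container name is illustrative):

```shell
# Attach with a custom detach sequence instead of the default Ctrl-P, Ctrl-Q
docker attach --detach-keys="ctrl-x,ctrl-x" my-interactive-shell
```

With this in place, pressing Ctrl-X twice detaches your terminal while the container keeps running.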
Suggested read: Self-Hosting vs Cloud-Based Docker
Copying Files in a Docker Container
The docker cp command copies files to and from a Docker container. Before using it, though, it’s worth understanding that Docker Volumes are usually a better way to handle persistent data.
When Not to use the docker cp Command
A Docker Volume is a standard mechanism for decoupling the data your application generates from the container’s lifecycle. A volume is like a USB drive that can be attached to one or more containers. The biggest advantage of using volumes is that the data within a volume persists even if the container is stopped, deleted, or rebuilt. This makes it ideal for databases, application logs, user-uploaded content, and critical configuration files.
If you need a consistent and reliable way to share files between your host machine and a container, or ensure that your data survives container restarts, you should always use volumes by mounting them when you first run the container (e.g., using the -v
or --mount
flag with docker run).
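As a sketch, this is what mounting a named volume looks like at container start (the volume name pgdata is illustrative; the data path is the one used by the official postgres image):

```shell
# Data written to /var/lib/postgresql/data lands in the "pgdata" volume
# and survives the container being stopped, deleted, or rebuilt
docker run -d --name my-postgres \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres
```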
When to use the docker cp Command
While volumes are the correct solution for persistent application data, there are specific scenarios where you might need to perform a one-time, manual file transfer. This is where the docker cp command becomes an invaluable utility for ad-hoc operations.
The docker cp command is useful in several situations, such as:
- Quickly pulling a specific log file from a running container for analysis
- Pushing a hotfix configuration file without rebuilding the image
- Extracting a build artifact that was generated inside a temporary container
Copy from container to host:
docker cp <container_name_or_id>:/path/to/file /path/on/host
Copy from host to container:
docker cp /path/on/host <container_name_or_id>:/path/to/file
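For instance, the ad-hoc scenarios listed earlier might look like this (container names and paths are illustrative):

```shell
# Pull a log file out of a running container for local analysis
docker cp my-web-app:/var/log/nginx/error.log ./error.log

# Push a hotfix config in, then reload the service without rebuilding the image
docker cp ./nginx.conf my-web-app:/etc/nginx/nginx.conf
docker exec my-web-app nginx -s reload
```

Remember that, like changes made via docker exec, files copied into a container this way are ephemeral unless they land on a mounted volume.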
Running an “Actual SSH” Server (Not Recommended)
While technically possible, running an SSH server inside your container is considered an anti-pattern. Docker containers are designed to be lightweight, disposable, and focused on a single process. Adding an SSH server adds unnecessary bulk and complexity.
When Might You Actually Need It?
- Legacy Systems: You’re containerizing a legacy application, and existing management scripts or tools rely exclusively on SSH to function.
- Providing Sandboxed User Environments: You are using a container to give a user a sandboxed environment on a shared server, and they need to connect with a standard SSH client.
If you must do this, you would need to:
- Modify your Dockerfile to install an SSH server (e.g., openssh-server).
- Configure the SSH server, add user accounts, and manage SSH keys.
- EXPOSE port 22 in the Dockerfile.
- Run the container, mapping a host port to the container’s port 22 (e.g., docker run -p 2222:22 …).
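As a rough sketch only (not a recommendation), a Dockerfile following those steps might look like this; the base image, key handling, and user setup are illustrative and would need to be hardened for any real use:

```dockerfile
FROM ubuntu:22.04

# Install the SSH server and create its runtime directory
RUN apt-get update && apt-get install -y openssh-server \
    && mkdir -p /var/run/sshd

# Key-based authentication only; authorized_keys is supplied at build time
COPY authorized_keys /root/.ssh/authorized_keys
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys

EXPOSE 22

# Run sshd in the foreground as the container's main process
CMD ["/usr/sbin/sshd", "-D"]
```

You would then start it with something like docker run -d -p 2222:22 my-ssh-image and connect with ssh -p 2222 root@<host>. Note that sshd becomes PID 1 here, which is exactly the "one container, one concern" principle this section advises against breaking.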
Final Thoughts
You now know the right way to SSH into a Docker container – by using Docker’s own tools like docker exec for most cases, and docker attach when you need to connect to a container’s main process. Running a full SSH server inside a container isn’t just unnecessary – it adds complexity and risks that Docker was designed to avoid.
Mastering these commands gives you confidence in managing containers directly. But if you want to go further – scaling deployments, automating workflows, and simplifying day-to-day server management – RunCloud gives you the best of both worlds: a clean interface with the full power of the CLI underneath.
Sign up for RunCloud today and make managing your servers and Docker environments faster, easier, and more reliable.